Big Data Strategy Decision Making

Big Data Strategy and Its Use in Decision Making

Today's highly competitive and fast-moving business environment favors businesses that make the best decisions. With big data intelligence at hand, businesses such as those in the retail industry can produce many types of data visualization, including dashboards and infographics. Krish Krishnan (2013) describes big data as "volumes of data available in varying degrees of complexity, generated at different velocities and varying degrees of ambiguity, that cannot be processed using traditional technologies, processing methods, algorithms or any commercial off-the-shelf solutions." Because big data gives access to huge volumes of data in an identifiable form, businesses can use this technology to support a wide range of decision-making processes.

Businesses can also use big data to make their operations efficient and effective. With this technology they can compete effectively without the fear of being pushed out of the market. Performance improves because the technology increases the speed, flexibility and ease with which large quantities of data can be measured automatically. Compared with traditional data-processing technologies, its ability to process and measure very large quantities of data makes it considerably more beneficial and advantageous.

Through big data analytics and newly advanced computing technology, even the most complicated and challenging business problems are now easier to solve. Big data analytics can be defined as the process of extracting useful information, hidden patterns and unknown correlations in order to reach better, more productive decisions (Chen et al., 2012). With this technology, an organization's top management can make better decisions and drive innovation, among other important steps, as a means of survival and development, because the application of high-performance tools for data mining, optimization, text mining, forecasting and data extraction becomes much simpler and faster.

Data intelligence can play a major role in optimizing funnel conversion by ensuring that management makes the decisions that are important and vital to the business. It can also predict security threats to the business, forecast the support the company may attract, detect fraud, optimize prices, analyze available markets and support behavioral analytics. When it comes to sound decision making, adopting business intelligence is essential in such a highly competitive and fast-growing global economy.

This technology is applicable in many situations, such as building a data repository, managing customer relations, running integrated marketing strategies, or identifying and weighing constantly changing industry trends. Mobility, social collaboration, cloud and big data are just a few of the areas influenced by these trends, and business intelligence is what keeps organizations in step with them (Chaudhuri & Narasayya, 2011). Organizations and companies of various sizes use the data collected from these sources to target their customers and achieve their goals: greater market reach, increased profits and better financial performance.

Structured and unstructured data can provide an organization operating in a risky, globally competitive environment with detailed information on trading, assets, prices and industry trends, enabling it to plan ahead of its competitors. Such data is therefore crucial to any organization that wants to grow and pursue its objectives in a very competitive business environment.

Big data and customer segmentation

To compete effectively, companies all over the world have had to identify what their customers want: their opinions, reviews and day-to-day preferences. These companies also weigh the other factors that affect their businesses as an added advantage for survival in the business world. Once collected, all of this information is analyzed by the company's experts so that they can reach conclusions that aid decision making.

Better decision making puts the company in a position to hit the targets set by the aims and objectives it put in place earlier. Such information can be collected from social media sites, official websites, interactive sites and search engines, and it can be retrieved from a customer's purchase history, search queries and transaction records.

A company with such information can compose a strategic marketing message with customized, personalized content that appeals to the interests of a specific group, making it customer-specific.

Retailing analytics

To monitor, identify and measure how far customers are influenced by the products and services the company offers, retailers collect customer-related information in a data repository built from public and social data worldwide, so that they know their customers' opinions. Besides an individual's opinion of the company's products and services, the gathered data also includes the opinions of other people, such as friends and other parties found on social media sites.

Data warehousing

As data is collected, large quantities of structured and unstructured data keep piling up, and companies seeking insight extract valuable information about their customers from these piles. In retail-oriented businesses, big data intelligence and analytics are used to identify and target the relevant customers, as Walmart does; Walmart also uses other tools to analyze customer preferences and needs.

This continuous accumulation creates huge volumes of data. For most businesses this poses a challenge, because data keeps piling up from scanning products at the point of sale (POS), from surveillance cameras and from other sources. Proper data management lets a business extract far more value, improving its processes to a large extent and ultimately helping the whole business succeed. That value also supports better decision making, which can transform the methods and tools used to run the business. By applying the Enterprise Data Warehouse concept to its operations, any business enterprise can transform raw data into knowledge that is useful for decision making.
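
The idea can be illustrated with a minimal, hypothetical sketch: raw point-of-sale records are loaded into a structured table and then summarized for decision making. SQLite is used here only as a stand-in for an enterprise data warehouse, and the table and column names are illustrative assumptions rather than any retailer's actual schema.

```python
# A minimal Enterprise Data Warehouse sketch: raw POS records are loaded into
# a structured table and summarised for decision making. SQLite stands in for
# a real warehouse; names and values are illustrative only.
import sqlite3

raw_pos_records = [          # raw transactions as they arrive from the tills
    ("2024-01-05", "store_12", "milk", 2, 1.20),
    ("2024-01-05", "store_12", "bread", 1, 0.90),
    ("2024-01-06", "store_07", "milk", 5, 1.20),
]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales (
                    sale_date TEXT, store TEXT, product TEXT,
                    quantity INTEGER, unit_price REAL)""")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?, ?)", raw_pos_records)

# Turn the raw pile of data into knowledge: revenue per product.
for product, revenue in conn.execute(
        "SELECT product, SUM(quantity * unit_price) FROM sales GROUP BY product"):
    print(f"{product}: {revenue:.2f}")
```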

Databases and business intelligence

Forbes has described big data as the greatest weapon for business prosperity in sales and marketing, because businesses all over the world are transforming themselves by seeking actionable insights from the piles of unstructured data they accumulate. Walmart, for example, has built a database of data taken from e-commerce sites, customer call records, customer history and social media sites. This data is not structured; it is an extensive and complex collection accumulated in its data repository, and mining, collecting and managing it requires data analysis tools and specialized software.

Walmart focuses on implementing business intelligence and data visualization tools such as reports, charts and dashboards, and on maintaining this information in its database for easy retrieval and use when it is needed. To draw on customers' tastes and preferences, complaints, purchase history and opinions, Walmart's marketing managers use this data to divide customers into categories according to their characteristics. Characterizing customers in this way enables Walmart to beat the competition, because it understands the competition completely.

Micro-segmentation

Targeting customers' behavior and attitudes towards the products and services a company offers gives the company insight into the customers' decision making, so it becomes simpler for the companies involved to measure and record the product differences that define the market and the customers participating in it.

Segmentation is what allows a company to craft a relevant message, delivered through the right media, with the ability to break through to a broad audience and generate the desired response, prompting the desired action or behavior in the target group, engaging those individuals and building their loyalty to the company so that the expected objective is achieved. Understanding customers and their behavior, and engaging them, is therefore the key to micro-segmentation.
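
As a minimal sketch of what micro-segmentation can look like in practice, the example below groups customers by two simple behavioral features, visits per month and average basket value. The feature values, the choice of three segments and the use of scikit-learn's KMeans are illustrative assumptions, not a description of any retailer's actual method.

```python
# A minimal micro-segmentation sketch: customers are grouped by simple
# behavioural features. Values and segment count are invented for illustration.
from sklearn.cluster import KMeans
import numpy as np

# rows: one customer each; columns: [visits_per_month, avg_basket_value]
behaviour = np.array([
    [2, 15.0], [3, 18.0], [1, 12.0],      # occasional, small baskets
    [12, 25.0], [10, 30.0], [11, 28.0],   # frequent, medium baskets
    [4, 120.0], [5, 140.0], [3, 110.0],   # rare but high-value shoppers
])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(behaviour)
for customer_id, segment in enumerate(segments):
    print(f"customer {customer_id} -> segment {segment}")
```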

Fig I: Customer understanding and benefits

Activity-based data

Mobile data, purchase histories, call-center data, queries about products, responses to incentives, search information and tracking from various websites are some of the activity data that enable Walmart to understand its customers. Social media sites such as Facebook and Twitter, among others, are also monitored and recorded, and they have been shown to give marketing a push through the messages they carry. Data collected from tracking tools on websites can likewise provide information about customer demand.

Social influence can also be captured as sentiment data, which is shaped by online reviews and comments, the company and product involved, and the records kept about a customer in terms of the services offered or the products sold to that customer. This information is stored in the database because it may be useful in the future or elsewhere. The real cost of micro-segmentation becomes known as soon as the target market and customers have been identified, through the various activities that leverage the efficiency and effectiveness of high-value insights for decision making.

Master data management

Master data management is used to clean up old data and create a new, accurate, up-to-date and complete set of data needed to manage and grow the business enterprise. Master data holds crucial information about products, accounts, relationships, suppliers and customers; through it, the enterprise understands who it supplies goods and services to, the products and services it offers, and the accounts it holds with its customers and suppliers. Lacking such information puts the enterprise in a dangerous position, so it needs this information in order to operate effectively.

Through data integration, master data management allows a business enterprise to accommodate changing needs and to shape its business processes, which in turn keeps the whole operation running. Master data management initiatives offer several advantages: they are strategic by nature and, at Walmart, they have influenced a good number of business areas.

Using business intelligence practices and analytics combined with management and IT processes, managers at Walmart have been able to solve a number of problems and issues concerning their customers, the effectiveness of the products they offer, and more. Master data management frames the questions that direct the organization towards its objectives, such as 'a single view of my customer', 'the time span for introducing a new product', 'this week's profitability report' and 'how long it would take to penetrate a specific market'. This has made Walmart a strong competitor in big data and its management, maximizing its profits and improving its financial performance.
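
A minimal sketch of the master data management idea is shown below: customer records from several source systems are matched on a shared key (here, an e-mail address) and merged into a single "golden record". The field names, source systems and precedence rule are illustrative assumptions, not Walmart's actual MDM design.

```python
# A minimal MDM sketch: duplicate customer records from several systems are
# merged into one golden record. Fields and precedence rule are illustrative.
from collections import defaultdict

source_records = [
    {"email": "jane@example.com", "name": "Jane Doe", "phone": None,
     "source": "ecommerce"},
    {"email": "jane@example.com", "name": "J. Doe", "phone": "555-0101",
     "source": "call_center"},
    {"email": "sam@example.com", "name": "Sam Lee", "phone": "555-0199",
     "source": "loyalty_card"},
]

grouped = defaultdict(list)
for record in source_records:
    grouped[record["email"].lower()].append(record)

golden_records = []
for email, records in grouped.items():
    merged = {"email": email}
    for field in ("name", "phone"):
        # simple precedence rule: the first non-empty value wins
        merged[field] = next((r[field] for r in records if r[field]), None)
    golden_records.append(merged)

print(golden_records)
```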


Fig II: Master data management at Walmart

IT process management and Big Data

For an organization to develop a good decision-making strategy, big data and IT process management must work together as a single unit. Both are vital for developing and maintaining the data repository, which holds information on opinions, preferences, clients, orders, complaints and many other factors.

The objective of IT data is to achieve the goals set for the business area in question while applied business analytics and the decision-making process are used; for good decision making, the business intelligence process, the IT system and the enterprise solution system have to work in harmony. The information technology system is used to collect and store information on online customers and the transactions they carry out, and this information can then be used during decision making (Mayer-Schönberger, 2013), making it vital for categorizing the clients of a given business enterprise.

Merging unstructured data with structured data

Middleware technology is an efficient, affordable and effective way of drawing out valuable information. It merges structured and unstructured data for use in a business intelligence system. Companies develop management systems and data integration tools for big data, used to extract reports, data visualizations and charts; Net-mark, developed at NASA, is one example of such an integration and management system. The software is scalable, cost effective and very efficient.

When this software receives unstructured data, it automatically cleans the data and imposes a structure on it so that it can be used for analysis or reports (Mayer-Schönberger, 2013). These properties allow data to be merged in whatever way the enterprise wants it represented.
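
A minimal sketch of this cleaning-and-structuring step is shown below: semi-structured customer feedback text is normalized and forced into regular rows so that it can be merged with structured data for reporting. The text format and field names are invented for illustration and are not based on Net-mark or any specific middleware product.

```python
# A minimal sketch of structuring unstructured input: messy feedback lines are
# cleaned and turned into regular records. Format and fields are illustrative.
import re

raw_feedback = [
    "2024-01-05 | store_12 | 'Great value milk, will buy again'",
    "2024-01-06|store_07|'Bread was stale'",
]

structured_rows = []
for line in raw_feedback:
    parts = [p.strip().strip("'") for p in re.split(r"\|", line)]
    if len(parts) == 3:                       # keep only well-formed records
        structured_rows.append(
            {"date": parts[0], "store": parts[1], "comment": parts[2]})

print(structured_rows)
```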

Importance of social media in decision making

Social media has become an important factor in today's competitive business world. It has enabled individuals across the world to share and communicate with each other over the internet. Different social networking sites show particular trends of activity and behavior, and businesses can use the internet to monitor these behaviors and activities to obtain information about market trends and customer preferences.

Activities, likes, comments, recommendations, time spent on pages, feedback and online reviews about a given product on social sites can provide useful information that managers may need during the decision-making process. Companies such as Walmart can point out clothing and fashion trends that match the needs of their customers exactly (Leonardi, 2013).
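
As a minimal sketch of this kind of monitoring, the example below scores a handful of posts with a tiny keyword-based sentiment rule. The posts and word lists are made up for illustration; a real system would rely on the platforms' own APIs and a proper sentiment model.

```python
# A minimal sketch of scoring social posts about a product. The posts and the
# keyword lists are invented; real systems use trained sentiment models.
posts = [
    "Love the new winter jackets at this store!",
    "Checkout queues were terrible today",
    "Great prices on jackets, highly recommend",
]

positive_words = {"love", "great", "recommend"}
negative_words = {"terrible", "stale", "bad"}

for post in posts:
    words = set(post.lower().replace("!", "").split())
    score = len(words & positive_words) - len(words & negative_words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8s} | {post}")
```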

Conclusion

The use of business analytics and tools for information technology management, big data technology, data warehousing and related techniques has helped many businesses in the present business environment. Developing and maintaining databases is now used to monitor and record customer behaviors such as purchases and transaction histories. Information from internet search engines, together with comments, complaints, tastes and preferences, can be used by companies to study their customers. Customer behavior and preferences can also be studied through information about customers' online activities and their participation in internet campaigns.

Such information about customers can be used to make strategic decisions that improve the effectiveness and efficiency of the company. Business analytics and intelligence are not limited to decision making; they can also be used in marketing, operations and supply chain management, among many other applications. Walmart has been using business intelligence in cross-selling, segmentation, campaigns, advertisement channeling, monitoring of customer behavior and sentiment analysis to study and understand its customers' needs.

Works Cited

Adamson, I., 2013. Relationship marketing: customer commitment and trust as a strategy for the smaller Hong Kong corporate banking sector. International Journal of Bank Marketing.

Chaudhuri, S. & Narasayya, V., 2011. An overview of business intelligence technology. Communications of the ACM, 54(8), pp. 88-98.

Chen, H., Chiang, R. H. & Storey, V. C., 2012. Business Intelligence and Analytics: From Big Data to Big Impact. MIS Quarterly, 36(4), pp. 1165-1188.

Kumar, V., 2002. Customer relationship management. s.l.: John Wiley & Sons, Ltd.

Leonardi, P., 2013. Enterprise social media: Definition, history, and prospects for the study of social technologies in organizations. Journal of Computer‐Mediated Communication, pp. 1-19.

Mayer-Schönberger, V., 2013. Big Data: A Revolution That Will Transform How We Live, Work, and Think. s.l.: Houghton Mifflin Harcourt.

Ryals, L., 2001. Cross-functional issues in the implementation of relationship marketing through customer relationship management. European management journal.

Rygielski, C., 2002. Data mining techniques for customer relationship management. Technology in society, pp. 483-502.

Sharda, R., Delen, D. & Turban, E., 2013. Business Intelligence: A Managerial Perspective on Analytics. s.l.: Prentice Hall Press.

Zikopoulos, P., 2011. Understanding big data: Analytics for enterprise class hadoop and streaming data. s.l.:McGraw-Hill Osborne Media.


ERP Dissertation Implementation SMEs

A Study of Enterprise Resource Planning (ERP) Implementation and Critical Factors that Affect Chinese Small-Medium Enterprise

Title: ERP Dissertation Enterprise Resource Planning. Following the global economic recession caused by several economic crises during the first decade of the twenty-first century, the world business market has become more intensely competitive, but also more cooperative between organizations and firms as they work to achieve their goals and reach the next level. SMEs (Small and Medium Enterprises) have become one of the most significant forces leading the economic recovery globally and domestically, as they account for a large percentage of every domestic economy in the world.

During his 2012 election campaign, U.S. presidential candidate Mitt Romney remarked that small businesses are the foundation of the nation's economy, underlining that SME development is crucial and has a strong effect on the economy. Meanwhile, the use of ERP (enterprise resource planning) systems has spread widely. ERP has been recognised as an effective tool for both large organizations and SMEs seeking to improve their performance amid the complexity of business expansion, following many real-life experiences and experiments over the decades. ERP implementation and the factors behind its success and failure have been studied by many experts in recent years and are becoming a popular topic in the business field.

At the same time, the critical factors of ERP system implementation and their adoption have been reviewed by practitioners worldwide, and much effort has gone into improvement through repeated reconsideration and gap-filling. China, one of the industrialized nations and the second-largest economy in the world, is predicted to close the growth gap with the U.S. by 2030. The examination of ERP systems as they relate to Chinese SMEs has therefore been identified as a critical area for business development, and studying ERP implementation in Chinese SMEs will provide further insights for future development and expansion in business.

Dissertation Objectives

  • To review ERP implementation success models and critical factors for SMEs from the existing literature
  • To identify the factors that affect ERP implementation in Chinese SMEs
  • To evaluate the influence of ERP implementation factors through interviews with Chinese SME users
  • To recommend how to adopt ERP in Chinese SMEs


Dissertation Contents

1: Introduction
Background
Objectives of the Study
Structure of Dissertation

2: Literature Review
ERP Systems implementation Success Models
Culture Issues
Models Summary
Critical Factors of ES Implementation Affecting Chinese SMEs
Implementation costs to SMEs
Culture clash
Lack of top management support
Resistances in workplace
Data Accuracy from legacy system
Literature Review Summary

3: Research Methodology
Methods Analysis
Quantitative Research Methods
Qualitative and Quantitative Comparison
Review Methodologies From Previous Studies
Research Question Design
The Strategy Of Asking Questions
Participant Selections
Interview Limitation

4: Findings
Research Questions
Results
Participants

5: Conclusions
Discussion
Enterprise Capacity
Business Type
Top Management Support
Resistance In Work Place
Culture Clash And Organizational Culture
Data Accuracy

6: Recommendations and Limitations
Recommendations
Limitations

7: Conclusion

References

Appendix



Network Design and Structure Dissertations

Network Design and Structure

Title: Network Design – When implementing a network in an organisation, there are design issues that must be considered before implementation. The requirements of the network must be clearly defined, along with all the network components to be used. Some of these considerations are discussed below.

Network Design and Network Architecture

Network architecture is the infrastructure of software, transmission equipment and communication protocols that defines the structural and logical layout of a computer network. The transmission mode of a network can be wired or wireless, depending on the organisation's requirements. There are various types of network that can be applied in an organisation depending on the network size: a local area network (LAN) covers a small geographical area, a metropolitan area network (MAN) covers a city, and a wide area network (WAN) is spread over a wide geographical area. Of the three, the company would implement a LAN since it covers only a small geographical area.

Transmission Media

The transmission medium of a network can be wired or wireless. Wired media use coaxial or fiber-optic cables, while wireless media transmit data without cables. The bandwidth, throughput and goodput required determine the best transmission medium. Fiber-optic cables have low signal loss since they avoid collisions, and they transfer data efficiently on high-traffic networks. Coaxial cables are less expensive than fiber-optic cables but suffer higher signal loss caused by collisions. Wireless transmission is efficient in a local area network with only a few computers.

Network Design Management Method

The management method of a network can be either peer-to-peer or client-server. In a peer-to-peer network, several computers communicate directly without a central computer, and many computers can share a single application installed on one of them. In a client-server network, each client is independent and a central server provides services to the clients; such networks are designed to support a large number of clients that do not share resources. Security is stronger in the client-server model because it is handled by the server, and a client-server network is also easier to upgrade to meet new requirements in an organisation.
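
The following minimal sketch illustrates the client-server model described above: a central server answers a request from an independent client over TCP. It runs entirely on localhost, and the port number is an arbitrary choice for illustration.

```python
# A minimal client-server sketch: one central server, one independent client.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
ready = threading.Event()

def serve_once():
    """Central server: accept one client request and answer it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                      # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"server handled: {request}".encode())

threading.Thread(target=serve_once, daemon=True).start()
ready.wait()                             # wait until the server is ready

# Client: depends on the central server for the service it needs.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect((HOST, PORT))
    client.sendall(b"print job 42")
    print(client.recv(1024).decode())
```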

Figure 1, a client server model

Network Topology

Network topology is divided into physical and logical topology. Physical topology refers to the way computers and other devices are connected, while logical topology describes how data flows across the network. Bus, ring, star and mesh are the main types of topology. In a bus topology all devices are connected to a single cable; it works for small networks but is slow, and collisions are common. In a ring topology the cable runs in a loop with each node connected to the next; there are fewer collisions than in a bus topology, and a token is passed around the ring to avoid them. In a star topology all devices are connected to a central hub; central management makes upgrading faster, but failure of the central hub brings down the entire network. A mesh topology connects all devices to each other, providing fault tolerance and redundancy to improve performance.

Network Design Security Requirements

Networks are frequently attacked by hackers and other malicious actors, which makes security one of the key considerations when designing a network. To reduce the number of attacks on the network, it should include firewalls, intrusion detection systems, a VPN and a DMZ. These measures reduce threats and help detect malicious actors on the network.

Scalability

This refers to the ability of the network to grow. The network should be scalable enough to cater for growth in the network infrastructure.

Network Address Translation (NAT)

This is a design consideration in which many computers in a private network access the internet using one public IP address. It is also a measure that enhances security in a network.
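
A minimal sketch of the NAT idea follows: many private addresses share one public IP, and the translator keeps a table mapping each outbound flow to a distinct public port. The addresses and ports are made up for illustration.

```python
# A minimal sketch of port-based NAT: private flows are mapped to distinct
# public ports behind one public IP. All values are illustrative.
PUBLIC_IP = "203.0.113.10"
nat_table = {}            # (private_ip, private_port) -> public_port
next_public_port = 40000

def translate_outbound(private_ip, private_port):
    """Return the (public_ip, public_port) pair used on the internet side."""
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.0.5", 51000))   # ('203.0.113.10', 40000)
print(translate_outbound("192.168.0.7", 51000))   # ('203.0.113.10', 40001)
```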

Figure 2, Network architecture


Figure 3, Showing how VPN is implemented


OSI Reference Model in Network Design

The OSI model has seven layers, as highlighted in the diagram. The communication system is subdivided into layers, where each layer sends service requests to the layer below it and receives service requests from the layer above it (FitzGerald & Dennis, 2009).

OSI Model

Layer 1: Physical Layer

The physical layer covers the hardware and all network devices used in the network; it defines the physical devices and the transmission medium. The layer receives service requests from the data-link layer and performs the encoding and decoding of data into signals. Protocols in this layer include CSMA/CD and Ethernet (Liu, 2009).

Layer 2: Data-Link Layer

The data-link layer receives service requests from the network layer and sends service requests to the physical layer. Its main function is to provide reliable delivery of data across networks. Other functions performed by the layer include framing, flow and error control, and error detection and correction. The data-link layer has two sub-layers: the media access control (MAC) layer and the logical link control (LLC) layer. Media access control performs frame parsing, data encapsulation and frame assembly, while logical link control is responsible for error checking, flow control and packet synchronisation. Protocols in this layer include X.25, Frame Relay and ATM.

Layer 3: Network Layer

The network layer is responsible for managing network connections and congestion and for routing packets between a source and a destination. The layer receives service requests from the transport layer and sends service requests to the data-link layer. The main protocols in this layer are IP, ICMP and IGMP.

Layer 4: Transport Layer

The main purpose of this layer is to provide reliable, error-free data delivery by performing error detection and correction. The layer ensures that there is no loss of data and that data is received as it was sent. It provides either connection-less or connection-oriented service. There are two protocols in this layer, UDP and TCP; a minimal socket sketch contrasting them follows the two lists below.

TCP

  • Sequenced
  • Connection oriented
  • Reliable delivery
  • Acknowledgements and windowing flow control

UDP

  • No sequencing
  • Connection-less
  • No reliable delivery
  • No acknowledgements and no windowing flow control
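
The sketch below contrasts the two transport protocols: TCP establishes a connection before data flows, while UDP simply sends datagrams with no handshake or delivery guarantee. The loopback address and port number are arbitrary choices for illustration.

```python
# A minimal contrast of UDP (connection-less) and TCP (connection-oriented).
import socket

# UDP: fire and forget, no handshake, no acknowledgement expected
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("127.0.0.1", 9999))
udp.close()

# TCP: must establish a connection first; delivery is reliable and ordered
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(1.0)
try:
    tcp.connect(("127.0.0.1", 9999))   # fails unless something is listening
    tcp.sendall(b"hello over TCP")
except OSError as exc:
    print("TCP connection failed:", exc)
finally:
    tcp.close()
```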

Layer 5: Session Layer

The main purpose of this layer is to establish and terminate sessions. The layer sets up and terminates connections between two or more processes and manages communication between hosts. If there is login or password validation, this layer is responsible for the validation process. A check-pointing mechanism is also provided by this layer: if an error occurs, data is re-transmitted from the last checkpoint. Protocols in this layer include RIP, SOCKS and SAP.

Layer 6: Presentation Layer

This layer is responsible for data manipulation, data compression and decompression, and manages how data is presented. The layer receives service requests from the application layer and sends service requests to the session layer. It is concerned with the syntax and semantics of the data in transmission, and data encryption and decryption (cryptography) is used to provide security at this layer. Standards used in this layer include ASCII, EBCDIC, MIDI, MPEG and JPEG.

Layer 7: Application Layer

This layer provides interaction with the end user and supplies services such as file and email transfers. The layer sends service requests to the presentation layer. It has several protocols used in communication, including FTP, HTTP, SMTP, DNS, TFTP, NFS and TELNET.

Network Protocols

  • Ethernet – provides the transfer of information over Ethernet cable between physical locations.
  • Serial Line Internet Protocol (SLIP) – used for data encapsulation over serial lines.
  • Point-to-Point Protocol (PPP) – an improvement on SLIP; performs data encapsulation over serial lines.
  • Internet Protocol (IP) – provides routing, fragmentation and reassembly of packets.
  • Internet Control Message Protocol (ICMP) – helps manage errors while sending packets and data between computers.
  • Address Resolution Protocol (ARP) – provides a physical address for a given IP address.
  • Transmission Control Protocol (TCP) – provides connection-oriented, reliable delivery of packets.
  • User Datagram Protocol (UDP) – provides connection-less, unreliable delivery of packets.
  • Domain Name System (DNS) – resolves a domain name to its IP address.
  • Dynamic Host Configuration Protocol (DHCP) – used to manage and assign IP addresses in a given network.
  • Internet Group Management Protocol (IGMP) – supports multicasting.
  • Simple Network Management Protocol (SNMP) – manages network elements based on the data sent and received.
  • Routing Information Protocol (RIP) – used by routers to exchange routing information in an internetwork.
  • File Transfer Protocol (FTP) – standard protocol for transferring files between hosts over a TCP-based network.
  • Simple Mail Transfer Protocol (SMTP) – standard protocol for transferring mail between two servers.
  • Hypertext Transfer Protocol (HTTP) – standard protocol for transferring documents over the World Wide Web.
  • Telnet – a protocol for accessing remote computers.
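
As a small illustration of two of the protocols listed above in use from a script, the sketch below resolves a host name with DNS and then fetches a page over HTTP. The host example.com is a placeholder, and an internet connection is assumed.

```python
# DNS and HTTP in a few lines; example.com is a placeholder host.
import socket
from urllib.request import urlopen

ip_address = socket.gethostbyname("example.com")      # DNS lookup
print("example.com resolves to", ip_address)

with urlopen("http://example.com/") as response:       # HTTP GET request
    print("HTTP status:", response.status)
```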

Figure 5 shows the TCP/IP architecture


Layer 1: Network Access Layer

This layer is responsible for placing TCP/IP packets onto the medium and receiving packets off the medium. It controls the hardware and network devices used in the network. The network access layer combines the physical and data-link layers of the OSI model.

Layer 2: Internet Layer

It functions as the network layer of the OSI model, performing routing and the addressing of packets in the network (Donahoo & Calvert, 2009).

Layer 3: Transport Layer

This layer has the same functions as the transport layer in the OSI model; its main function is to provide reliable, error-free data delivery. The layer receives service requests from the application layer and sends service requests to the internet layer.

Layer 4: Application Layer

This layer contains the applications that provide functions to the user. It combines the application, presentation and session layers of the OSI model.

TCP/IP Commands Used To Troubleshoot Network Problems

There are many TCP/IP commands that can be used to show where communication breaks down. The commands include PING, TRACERT, ARP, IPCONFIG, NETSTAT, ROUTE, HOSTNAME, NBTSTAT and NETSH.

Hostname displays the host name of the computer.

Arp is used for viewing and editing the ARP cache.

Ping sends an ICMP echo request to test whether a host on the network is reachable.

Event Viewer shows the records of errors and events.
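
The sketch below shows one of these commands scripted from Python: ping is run as a subprocess and its exit status decides whether a host is treated as reachable. The target address is a placeholder, and the count flag assumes ping's usual -c (Unix) or -n (Windows) option.

```python
# A minimal sketch of scripting a reachability check with ping.
import platform
import subprocess

target = "192.0.2.1"    # documentation address used as a placeholder
count_flag = "-n" if platform.system() == "Windows" else "-c"

result = subprocess.run(["ping", count_flag, "1", target],
                        capture_output=True, text=True)
status = "reachable" if result.returncode == 0 else "unreachable"
print(f"{target} is {status}")
```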

References

Donahoo, M. J., & Calvert, K. L. (2009). TCP/IP sockets in C: Practical guide for programmers. Amsterdam: Morgan Kaufmann.

Fall, K. R., & Stevens, W. R. (2012). TCP/IP illustrated. Upper Saddle River, NJ: Addison-Wesley.

FitzGerald, J., & Dennis, A. (2009). Business Data Communications and Network Design. Hoboken, NJ: John Wiley.

Leiden, C., & Wilensky, M. (2009). TCP/IP For Dummies. Hoboken, NJ: Wiley.

Liu, D. (2009). Next generation SSH2 Network Design and Implementation: Securing data in motion. Burlington, MA: Syngress Pub.

Odom, W. (2004). Computer networking first-step. Indianapolis, IN: Cisco Press.

Ouellet, E., Padjen, R., Pfund, A., Fuller, R., & Blankenship, T. (2002). Building a Cisco Wireless LAN and Network Design. Rockland, MA: Syngress Pub.


Cloud Based Intrusion Detection Systems

Managing Cloud Based Intrusion Detection Systems (IDSs) in Large Organizations

Intrusion Detection Systems (IDSs) are becoming a key priority for securing organizations' IT resources from potential damage. However, organizations experience a number of challenges during IDS deployment. The preliminary challenges involve selecting a product according to organizational requirements and goals, followed by IDS installation. Installations frequently fail because of resource conflicts and a lack of the expertise necessary for success. The post-installation phases of IDS involve a number of challenges associated with proper configuration and tuning, which require advanced skills and support. Organizations can overcome many obstacles of product installation and IDS configuration by maintaining a test-bed and deploying in phases. Once an IDS is operational, its data undergoes various levels of analysis and correlation. To perform these analysis tasks, administrators require advanced programming and networking skills and an in-depth knowledge of the organization's network, security, and information architecture. Sometimes large organizations need to correlate data from multiple IDS products; one potential solution is the use of SIEM (Security Information and Event Management) software. Organizations also need to ensure the security and integrity of various IDS components and data. Agent and data security challenges can be overcome by maintaining a more autonomous design in the agent structure and incorporating appropriate formats, protocols, and cryptographic arrangements in different phases of the data lifecycle. IDS products require ongoing human interaction for tuning, configuration, monitoring and maintenance. Hence, organizations need to assemble different levels of skills for the proper deployment and operation of IDS products.

Managing Cloud Based Intrusion Detection Systems in Large Organizations

Intrusion detection is the surveillance of computer hosts and associated networks, observing various events and identifying signs of unauthorized and unprivileged access and other anomalous activities that can compromise the confidentiality, availability, and integrity of the system (Singh, Gupta, & Kumar, 2011; Sundaram, 1996; Lasheng & Chantal, 2009). With the rising number of malicious attacks on organizational information networks, intrusion detection and security incident response have become key priorities of organizational security architecture since the widespread industrial adoption of networks in the 1990s (Yee, 2003). Today, the placement of a dedicated intrusion detection system (IDS) in the organizational IT system is one of the important considerations for organizations (Werlinger, et al., 2008). The aim of an intrusion detection system is to ensure adequate privacy and security of the information architecture and to save IT resources from potential damage from various internal and external threats (Scarfone & Mell, 2007). Intrusion detection systems (IDSs) monitor and record activities or events in the computer and network environment and then analyze them to identify intrusions.

With the industry's widespread adoption, intrusion detection systems have become de facto security tools in the corporate world. Major organizations and governmental institutions have already deployed, or are on the verge of deploying, IDSs to secure their corporate networks. However, the deployment of an IDS, particularly in the distributed network of a large organization, is a non-trivial task. The complexity and the time required for installation depend upon the number of machines that need to be protected, the way those machines are connected to the network, and the depth of surveillance the organization needs to achieve (Iheagwara, 2003; Innella, McMillan & Trout, 2002). As a result, large organizations need elaborate planning during the different phases of IDS deployment, including product evaluation and testing, suitable placement of IDS agents and managers, configuration of IDS components, and integration of IDSs with other surveillance products (Bye, Camtepe, & Albayrak, 2010; Bace & Mell, 2001). The aim of this paper is to discuss the various challenges associated with IDS deployment in the large-scale distributed networks of big corporations. Particular emphasis is given to the challenges associated with the management of agents, the collection of agent data, and the correlation of IDS data to identify possible intrusions in large-scale distributed networks. The paper will also discuss various "real-world" encounters during different stages of IDS deployment, such as the evaluation of products, IDS installation and configuration, and management and ongoing operation, and make recommendations to overcome those difficulties.

Why are Intrusion Detection Systems Required for Large Organizations?

Networks are ubiquitous in today's business landscape. Organizations harness network power to develop sophisticated information systems, to utilize distributed and secured data storage, and to provide valuable web-based customer services. Software vendors deliver their applications to end users through networks, and networks allow employees to gain remote access to their offices or organizational resources. This proliferation of network activity has flooded the internet with different classes of cyber threats, including hackers, rogue employees, and cyber terrorists. A significant number of these threats derive from competitor organizations seeking to exploit organizational resources or to disrupt productivity and competitive advantage. In recent years, the proliferation of heterogeneous computer networks, including a vast number of cloud networks, has increased the amount of invasive activity, and cloud-based e-commerce sites and business services are now major targets of attackers. The costs resulting from cyber-attacks are substantial. Traditional prevention techniques, such as secured authentication, data encryption, and various software and hardware firewalls, are often inadequate to prevent these threats (Rao, Pal, & Patra, 2009; Anderson, Frivold, & Valdes, 1995). System vulnerabilities of various kinds are unavoidable, typical features of computer and network systems, and intruders frequently search for weaknesses in defensive products, such as a subtle weak point in a firewall configuration or a loosely defined authentication mechanism. Hence, investing in an intrusion detection system as a second line of defense within an organization's security architecture can increase the overall security posture of the system.

Overview of IDS

General Architecture

A distributed agent-based architecture consists of two main components: i) IDS agents and ii) the management server (Beg, Naru, Ashraf, & Mohsin, 2010). An agent is a software entity that perceives different aspects of its location (networks and hosts) and is capable of acting on its own according to the supplied protocols (Boudaoud et al., 2000; Mell et al., 1999). IDS agents work independently (Brahmi, Yahia, & Poncelet, 2011), interact with central management servers, follow protocols according to the system's requirements, and collaborate with other agents in an intelligent manner (Lasheng & Chantal, 2009). The management server is the cornerstone of an IDS and facilitates centralized management of IDS components. This includes tuning, configuration and control of distributed agents; aggregation and storage of data sent by various agents; correlation of distributed data to identify intrusions; and the generation of alerts (Chatzigiannakis et al., 2004). The central node also performs any updates and upgrades of the system (Chatzigiannakis et al., 2004). In the case of a mobile agent based distributed IDS, the management server is also responsible for dispatching agents and maintaining communication with them. The difference between a normal and a distributed agent based intrusion detection system is that in a distributed IDS a significant part of the analysis is performed by the agents situated across the network. The agents maintain a flat architectural structure, communicating only the main results to the central server, as opposed to sending all data to the central node through a hierarchical structure.
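
A minimal sketch of this split between agents and the management server is shown below: each agent analyses its own events locally and forwards only significant findings, which the central server aggregates into alerts. The event format and the brute-force threshold are illustrative assumptions.

```python
# A minimal agent/management-server sketch: local analysis at the agent,
# aggregation and alerting at the server. Events and thresholds are invented.
class ManagementServer:
    def __init__(self):
        self.alerts = []

    def report(self, agent_name, finding):
        self.alerts.append((agent_name, finding))
        print(f"ALERT from {agent_name}: {finding}")

class Agent:
    def __init__(self, name, manager):
        self.name, self.manager = name, manager

    def observe(self, events):
        # local analysis: only noteworthy results are sent upstream
        failed_logins = [e for e in events if e == "failed_login"]
        if len(failed_logins) >= 3:
            self.manager.report(self.name, "possible brute-force attempt")

manager = ManagementServer()
agent = Agent("host-42", manager)
agent.observe(["login", "failed_login", "failed_login", "failed_login"])
```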

IDSs Classification

Based on the locations at which IDS agents are deployed, IDSs can be categorized into two broad classes: host based and network based.

Host based IDS. A host intrusion detection system (HIDS) is installed and run on an individual host, where it investigates all inbound and outbound packets associated with that host to identify intrusions (Singh, Gupta, & Kumar, 2011; Neelima & Prasanna, 2013). Besides network packets, HIDSs also monitor various system data, such as event logs, operating system processes, file system integrity, and unusual changes to various configuration settings (Scarfone & Mell, 2007; Bace & Mell, 2001; Kittel, 2010). The architecture of a host based IDS is very straightforward: the detection agents are installed on the hosts, and the agents communicate over the existing organizational network (Scarfone & Mell, 2007). The event data are transmitted to the management server and are manipulated through a console or command line interface (Scarfone & Mell, 2007; Ghosh & Sen, 2005). Host-based IDSs have greater analysis capabilities due to the availability of dedicated resources (i.e., processing, storage, etc.) and hence work with a greater degree of accuracy (Bace & Mell, 2001; Garfinkel & Rosenblum, 2003). However, HIDSs have some limitations. Installation, configuration, and maintenance must be performed on each host individually, which is extremely time consuming (Scarfone & Mell, 2007; Bace & Mell, 2001). HIDSs are also vulnerable themselves due to their poor real-time responses (Bace & Mell, 2001; Kozushko, 2003). However, host based IDSs are excellent choices for identifying long term attacks (Kozushko, 2003).
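
One of the host-level checks mentioned above, file system integrity monitoring, can be sketched in a few lines: hash a set of watched files, keep a baseline, and flag any file whose hash later changes. The monitored path is a placeholder, and a real HIDS would store its baseline securely and watch far more than one file.

```python
# A minimal file-integrity check: compare current file hashes against a
# stored baseline. The watched path is a placeholder for illustration.
import hashlib
from pathlib import Path

MONITORED = [Path("/etc/hosts")]        # placeholder list of watched files

def snapshot(paths):
    digests = {}
    for path in paths:
        if path.is_file():
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

baseline = snapshot(MONITORED)
# ... later, after some time has passed ...
current = snapshot(MONITORED)
for name, digest in current.items():
    if baseline.get(name) != digest:
        print(f"integrity violation detected: {name}")
```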


Network based IDS. Network based IDSs identify intrusions by analyzing the traffic of a dedicated organizational network in order to secure the associated hosts from malicious attacks (Bace & Mell, 2001). Instead of investigating activities within the hosts, network based IDSs focus only on the packet streams that travel through the network. They investigate network, transport, and application protocols and various network activities, such as port scanning, connection status, and port access, to determine attacks. In a network based IDS, multiple sensors or agents are placed at various strategic points on the network (Singh, Gupta, & Kumar, 2011), where they guard a particular segment of the network (Scarfone & Mell, 2007), perform local analysis of traffic with the associated hosts, and communicate the results to the central management server. The results from the various agents are coordinated to identify planned distributed attacks within the organizational network (Bace & Mell, 2001). Network based IDSs are faster to implement and more secure than host-based IDSs. However, there are some disadvantages. One is the frequent dropping of packets, which normally occurs in a network with high traffic density or during periods of high network activity (Bace & Mell, 2001; Chatzigiannakis et al., 2004). Network based IDSs are unable to process encrypted information, which is a major drawback in monitoring virtual machine hosts (Bace & Mell, 2001). They also only identify signs of attacks but cannot confirm whether the target host is actually infected (Bace & Mell, 2001), so manual investigation of the host is necessary to trace and confirm the associated attacks.

IDS Classification According to the Detection Approaches

According to the detection approach, IDSs can be classified further into two categories: i) anomaly based detection systems and ii) misuse or signature based detection systems.

Anomaly based detection system. An anomaly detection system is based on the principle that all intrusions are linked with some deviation from normal behavioral patterns (Maciá-Pérez et al., 2011; Ghosh & Sen, 2005; Abraham & Thomas, 2005; Singh, Gupta, & Kumar, 2011). It identifies intrusions by comparing the patterns of suspicious events against the observed behavioral patterns of the monitored system (Beg, Naru, Ashraf, & Mohsin, 2010). Anomaly detection programs collect historical data from the system and construct individual profiles that represent normal patterns of host and network utilization (Bace & Mell, 2001). The constructed database, along with an appropriate algorithm, is used to verify the consistency of the network packets. Anomaly detection agents are preferable in that they can detect attacks that were completely unknown before (Beg, Naru, Ashraf, & Mohsin, 2010; Kozushko, 2003). However, the false positive rates generated by these agents are very high (Ghosh & Sen, 2005; Brahmi et al., 2012), and intruders may disguise themselves by mimicking acceptable behavioral patterns (Ghosh & Sen, 2005).
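
A minimal sketch of the anomaly detection principle follows: a profile of normal behaviour is built from historical counts of an event per hour, and new observations more than three standard deviations from the mean are flagged. The counts and threshold are invented for illustration.

```python
# A minimal anomaly-detection sketch: deviations from a learned profile of
# normal activity are flagged. All numbers are illustrative.
import statistics

historical_logins_per_hour = [18, 22, 20, 19, 21, 23, 20, 18, 22]
mean = statistics.mean(historical_logins_per_hour)
stdev = statistics.stdev(historical_logins_per_hour)

def is_anomalous(observed, threshold=3.0):
    return abs(observed - mean) > threshold * stdev

for observed in (21, 25, 95):
    label = "ANOMALY" if is_anomalous(observed) else "normal"
    print(f"{observed:3d} logins/hour -> {label}")
```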

Misuse or signature based detection system. Misuse detection approaches depend upon records of existing system vulnerabilities and known attack patterns (Abraham & Thomas, 2005). Misuse detection systems generate fewer false positives than anomaly detection systems (Ghosh & Sen, 2005; Faysel & Haque, 2010). They are also easy to operate and require minimal human intervention. However, misuse detection techniques are vulnerable to new attacks that have no known signature or matching pattern (Brahmi, Yahia, & Poncelet, 2011; Ghosh & Sen, 2005). The signature database of a misuse detection system therefore needs to be updated frequently to recognize the most recent attacks (Scarfone & Mell, 2007).
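
By contrast, a minimal misuse detection sketch matches events against a small dictionary of known attack signatures, as below. The signatures and log lines are illustrative; real signature databases are far larger and, as noted above, must be updated continually.

```python
# A minimal signature-based detection sketch: log lines are matched against
# known attack patterns. Signatures and log lines are invented for illustration.
import re

signatures = {
    "sql_injection": re.compile(r"(?i)union\s+select|or\s+1=1"),
    "path_traversal": re.compile(r"\.\./\.\./"),
}

log_lines = [
    "GET /search?q=shoes HTTP/1.1",
    "GET /item?id=1 OR 1=1 HTTP/1.1",
    "GET /../../etc/passwd HTTP/1.1",
]

for line in log_lines:
    for name, pattern in signatures.items():
        if pattern.search(line):
            print(f"signature match [{name}]: {line}")
```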

IDS Design and Development Challenges

Challenges in Managing Intrusion in Distributed Network

In recent years, security concerns are shifting from host to network due to the proliferation of internet based services, distributed work environment, and heterogeneous networks. The majority of the current IDS vendors are adopting network based and distributed approaches to security in their products (Suryawanshi et al., n. d.). However, there are a number of limitations to most of the distributed IDSs. Firstly, the monitoring agents from the distributed hosts and network send event data to the centralized controller components (Suryawanshi et al., n. d.; Brahmi, Yahia, & Poncelet, 2011; Kannadiga & Zulkernine, 2005). Because of the centralized data analysis performed, these systems are vulnerable to a single point of failure (Bye, Camtepe, & Albayrak, 2010; Zhai, Hu, & Weiming, 2014; Brahmi et al., 2012; Tolba et al., 2005; Araújo & Abdelouahab, 2012). Secondly, the architecture of these systems consists of a hierarchical tree-like structure with the main control system at the root level, sensor units at transient or leaf nodes and information aggregation units at some internal nodes. Information collected from local nodes is aggregated at the root level to obtain a global view of the system (Brahmi et al., 2012). Large scale data transfer from transient nodes to the central controller unit during the aggregation process can create network overloads (Suryawanshi et al., n. d.).

This results in communication delays and an inability to detect large scale distributed attacks efficiently in real time (Brahmi et al., 2012). In order to overcome these limitations, recent IDSs incorporate technologies supporting agent-based data analysis and intrusion detection, where agents perform most analysis tasks and send only the important data directly to the centralized nodes through a flat communication structure. Multi-agent based distributed intrusion detection systems (DIDSs) are partly autonomous systems capable of self-configuring as the contexts of networks and hosts change and of disseminating their analytical capabilities to different corners of the network in a distributed manner (Gunawan et al., 2011; Tierney et al., 2001). By adopting a hybrid approach, such as monitoring both the network and the hosts and implementing both anomaly and signature detection methods, distributed agents can coordinate the results from hosts and networks more accurately and perform more comprehensive intrusion detection (Abraham & Thomas, 2005; Brahmi et al., 2012).

Major Challenges with Distributed IDS

One of the most important challenges associated with distributed IDSs is the correct placement of agents (Sterne et al., 2005). A large number of misplaced agents can drive inefficiency, so agent locations must be justified through proper investigation of the network topology, such as the characteristics of routers and switches and the number of hosts (Chatzigiannakis et al., 2004). Another major challenge is how the heterogeneous data from different sensors should be collected and analyzed to identify an attack (Chatzigiannakis et al., 2004; Debar & Wespi, 2001). Furthermore, being distributed in nature, agents are themselves vulnerable to being compromised. Agents need to follow a common communication protocol and transfer data to the centralized server securely without producing too much extra traffic (Chatzigiannakis et al., 2004). Agents' security and integrity are also largely maintained and ensured by the management server, so securing the management server is an important task for the overall security of an IDS. Organizations should consider a dedicated server for the entire management host (Wotring, 2010), which will lower the number of accesses to the server and eventually reduce its exposure to vulnerability. Further restrictions on both physical and network access to the management server must be incorporated through proper authentication mechanisms and physical restrictions on the server areas. Sometimes the management server can be put behind a dedicated firewall to enhance its security status (Wotring, 2010; Brennan, 2002).

Deployment Challenges of IDS

Consideration before Deploying an IDS

Due to the various limitations of IDS products and a lack of skilled network security specialists in the market, IDS deployment in a large organizational network involves substantial challenges. A successful IDS deployment requires elaborate planning, requirement analysis, prototyping, testing, and training arrangements (Bace & Mell, 2001). A requirement analysis is conducted to prepare an IDS policy document that demonstrates the organization's structure and resources and reflects its IDS strategies, security policies, and goals (Bace & Mell, 2001). Before specifying organizational requirements, it should be borne in mind that an IDS is not a standalone security application; the main objective of an IDS is to monitor traffic on the organization's internal network in order to complement existing security controls (Werlinger, et al., 2008).

Specifying system architecture. Before evaluating and selecting an IDS product, organizations should specify the important requirements for which they seek a potential IDS solution. To accomplish this, organizations may plan and document important properties of their system, such as: i) system and network characteristics; ii) a network architecture diagram; iii) technical specifications of the IT environment, including the operating systems, typical services, and the applications running on various hosts; iv) technical specifications of the security structure, including existing IDSs, firewalls, antivirus tools, and various hardware appliances; and v) existing network communication protocols (Scarfone & Mell, 2007; Brandao et al., 2006). These considerations will help organizations determine which type of IDS is necessary to give optimum protection to their systems.

Specifying goals. Once the system architecture and general requirements are documented, the next step is to specify the technical, operational, and business related security goals that the organization wants to achieve by implementing an IDS (Bace & Mell, 2001; Scarfone & Mell, 2007). Some of these goals may be: i) guarding the network from particular threats; ii) preventing unprivileged access; iii) protecting important organizational assets; iv) exerting managerial control over the network; and v) preventing violations of security or IT policies by observing and recording suspicious network activities (Scarfone & Mell, 2007). Some security requirements have implications for organizational culture; for example, an organization that maintains a high degree of formalization in its culture may look for IDSs suited to formal policy configurations and extensive reporting capabilities regarding policy violations (Bace & Mell, 2001). A few security goals may derive from external requirements that the organization needs to meet, such as legal requirements for the protection of public information, audit requirements for security practices, or accreditation requirements (Bace & Mell, 2001). There may also be industry specific requirements, and organizations need to ensure that the proposed IDS can meet those (Bace & Mell, 2001).

Specifying constraints. IDSs are typically resource-intensive applications that require substantial organizational commitment. The most important constraints that organizations need to take into account are the budgetary considerations for the acquisition of software and hardware, for infrastructure development, and for ongoing operation and maintenance. Organizations should also identify IDSs' functional requirements and the skill requirements for users to operate them effectively (Bace & Mell, 2001). Organizations that cannot devote substantial human resources to IDS monitoring and maintenance activities should choose an IDS that is more automated and requires little staff time (Scarfone & Mell, 2007).

Product Evaluation Challenges

The evaluation of an IDS product is the most challenging aspect on which the success of intrusion detection depends. Today there is a range of commercial and public domain products available for deployment (McHugh, Christie, & Allen, 2000). Each product has distinct drawbacks and advantages. While some products work well in particular types of organizational networks, some IDSs may not produce the desired results in certain industrial settings. In order to overcome these challenges, organizations must evaluate an IDS product in terms of their system resources and protection requirements (McHugh, Christie, & Allen, 2000). Vendor-specific information, product manuals, whitepapers, third-party reviews, and information from other trusted sources can be valuable resources during the product evaluation (Scarfone & Mell, 2007). Detection accuracy, usability, life cycle costs, and vendor support are some of the most critical aspects during product evaluation. Other features that must be taken into account are security, interoperability, scalability, and reporting capabilities (McHugh, Christie, & Allen, 2000; Scarfone & Mell, 2007).

Product performance. The performance of an IDS is the measure of its event processing speed (Debar, Dacier, & Wespi, 1999). The performance of IDS products must receive a very high degree of attention, as anomalous or suspicious events must be detected in real time and reported as soon as possible to minimize damage (Mell et al., 1999; Scarfone & Mell, 2007). Network-based IDSs normally suffer from performance problems, particularly where IDSs have to monitor heavy traffic associated with many hosts in a distributed network (McHugh, Christie, & Allen, 2000). The performance of IDSs also depends largely on extensive configuration and fine-grained tuning according to the network architecture (Scarfone & Mell, 2007), and testing IDSs with default settings may not represent the true performance of the product. These factors make the evaluation of product performance extremely challenging. In addition, IDSs with more robust detection capabilities will consume more processing and storage, which can cause performance loss (Scarfone & Mell, 2007; Yee, 2003). Hence, a scalability feature that allows IDSs to dynamically allocate processing power and storage can be one of the important performance evaluation criteria (Mell et al., 1999).
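
Since performance is expressed as event processing speed, a rough benchmark can be sketched by timing a detection routine over a sample of captured events. In the minimal Python sketch below, process_event and sample_events are hypothetical stand-ins for a real detection routine and a real traffic capture; the figures it prints say nothing about any actual product.

```python
# A minimal sketch of measuring events processed per second; the detection
# routine and the event sample are placeholders, not a real IDS workload.
import time

def process_event(event):
    # stand-in for signature matching / anomaly scoring
    return "suspicious" if event.get("port") in (23, 2323) else "benign"

sample_events = [{"port": p % 1024} for p in range(100_000)]

start = time.perf_counter()
for event in sample_events:
    process_event(event)
elapsed = time.perf_counter() - start

print(f"Processed {len(sample_events)} events in {elapsed:.2f}s "
      f"({len(sample_events) / elapsed:,.0f} events/sec)")
```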

Security considerations. During the evaluation of an IDS product, the various technologies and features associated with product security must be taken into account, such as protection of stored data, protection of transmitted data during communication between various IDS components, authentication and access control mechanisms, IDS hardening features after product installation, etc. (Scarfone & Mell, 2007). Organizations need to identify whether the IDS is resistant to external modification (Kittel, 2010). This can be accomplished by checking various features, such as the level of isolation (in the case of VMI-based IDSs) (Kittel, 2010), cryptographic arrangements during inter-agent communication (Mell et al., 1999), isolated monitoring features (Kourai & Chiba, 2005), etc.

Interoperability, scalability, and reporting features. Interoperability is one of the key challenges for security specialists who aim to develop sophisticated enterprise security architecture incorporating the industry's leading tools (Yee, 2003). Through interoperability features, IDSs from various platforms are able to correlate their results and effectively communicate data with firewalls and security management tools to enhance the overall surveillance status of the system (Yee, 2003; Scarfone & Mell, 2007). While the interoperability feature provides IDSs with the capability to integrate their strengths across multiple security products, the scalability feature helps to incorporate more capabilities within a single IDS product as the organizational requirements grow. For large organizations, IDSs must be able to dynamically allocate processing and storage, or be able to deploy more agents and IDS components as demands grow (Mell et al., 1999). The number of agents that can be deployed under a single management server and the number of management servers in a particular deployment may reflect an IDS's scalable capacity (Scarfone & Mell, 2007). Another feature that reflects the usability more than the functionality of an IDS is its reporting capability. Technical IDS data needs to be presented in a comprehensible format to corporate users with various skill levels (Werlinger, et al., 2008). The reporting functionalities help tailor and present data in the ways users intend and find convenient. IDSs should also facilitate a comparative view of various states over time, such as before and after the implementation of major changes to the configuration (Werlinger, et al., 2008).

IDS maintenance and product support. Because maintenance activities impose substantial overheads in operating IDSs, organizations should treat various maintenance considerations as important priorities during IDS product selection. These include the requirement for independent versus centralized management of agents; considerations of various local and remote maintenance mechanisms, such as host-based GUIs, web-based consoles, command line interfaces, etc.; security protections during various maintenance activities, such as securely transmitting, storing, and backing up IDS data; ease of restoration of various configuration settings; ease of log file maintenance; etc. (Scarfone & Mell, 2007).

Organizations require various levels of support and should identify vendors' ability to provide active support according to the requirements during the various stages of installation and configuration (Bace & Mell, 2001; Scarfone & Mell, 2007). Apart from on-demand and direct support, organizations should check whether the vendors maintain users' groups, mailing lists, forums, and similar categories of support free of cost (Scarfone & Mell, 2007). The quality and availability of various electronic and paper-based support documents, such as installation guides, users' manuals, policy recommendation principles and guidelines, etc., are some of the typical features on which an IDS product can be judged to a considerable extent (Scarfone & Mell, 2007). Organizations also need to carefully evaluate the various costs associated with the support structure (Bace & Mell, 2001). A significant part of IDSs' costs normally derives from the hidden costs associated with professional support services during IDS implementation and maintenance, including the training costs for both the administrators and IDS users (Yee, 2003; Bace & Mell, 2001). Organizations also need to recognize the costs of updates and upgrades if they are not free (Bace & Mell, 2001). In addition, the vendors' ability to frequently release updates and patches, as well as to release updates in a timely manner in response to new threats; the convenience of collecting each update; the available means to verify the authenticity and integrity of individual updates; the effects of each update and upgrade on existing configurations of the IDS; etc. also need to be considered (Scarfone & Mell, 2007).

IDS Installation and Deployment Challenges

The biggest hurdle of IDSs is associated with the installation of the software (Werlinger, et al., 2008). IDS installations require the involvement of security specialists with a broad knowledge of IT, network security, and protocols and an in-depth understanding of the organizational structure, resources, and goals (Werlinger, et al., 2008; McHugh, Christie, & Allen, 2000). Unlike other security product installations, an IDS installation is a time-consuming and complex process, and administrators face plenty of issues during the installation period. For example, the entire installation may crash midway, or the IDS may produce inconsistent error messages that are difficult to deal with (Werlinger, et al., 2008). For these reasons, careful documentation of the various problems and installation information (e.g., various parameters and settings) is necessary during installation, which can save valuable time and resources over the long run (Innella, McMiIlan & Trout, 2002). The amount of work necessary to install an IDS in a specific network can be daunting and overwhelming (Werlinger, et al., 2008). Hence, the availability of automated features in Intrusion Detection Systems, such as automatic discovery of network devices, faster and more automated tuning options, and quick configuration support through grouping related parameters, can overcome the challenges of performing those tasks manually (Werlinger, et al., 2008).

Organizations should consider testing IDSs in a simulated environment before placing them in the actual network, to overcome the various challenges associated with large and complex networks (Werlinger, et al., 2008; Scarfone & Mell, 2007). Some of these challenges are: i) the IDS software or network may crash during installation or testing periods due to resource conflicts within various parts of the network (Scarfone & Mell, 2007); ii) the IDS installation may alter the network characteristics undesirably; or iii) problems during the installation may keep the network temporarily unavailable. Organizations also need to consider a multi-phased installation by initially selecting a small part of the network with a limited number of hosts, or initially activating only a few sensors or agents (Scarfone & Mell, 2007). Both test-bed and multi-phased installations will help security specialists to gain valuable insights through planning and rehearsal processes. This can help them to cope with the various challenges associated with installation, scalability, and configuration related problems (Scarfone & Mell, 2007), such as tuning and configuring properly to get rid of a large number of false alarms or efficiently dealing with heavy traffic in a robust network (Werlinger, et al., 2008). Based upon various IDS technologies and the system's characteristics, IDSs require different levels of ongoing human interaction and dedication of resources (Bace & Mell, 2001). A multi-phased installation will help to justify the human resources and time that an organization needs to commit (Bace & Mell, 2001).

Configuring and Validating IDS

IDS configuration challenges. Whether an IDS will perform as an effective surveillance tool for an organization relies upon the informed justification of various configuration and tuning options and the dedication of resources based upon the IDS's requirements (Werlinger, et al., 2008). The administrators require an in-depth knowledge of organizational missions, organizational processes, and existing IT services during the configuration process (Werlinger, et al., 2008). This knowledge is necessary to adapt the IDS to the system structure, users' behavior, and network traffic patterns, which will subsequently help to reduce the false positives generated by the IDS (Werlinger, et al., 2008). Initially, these challenges can be overcome during an installation through the collaboration of experts or security specialists administering different areas of the network and servers within the distributed network (Werlinger, et al., 2008). Organizations should follow their existing security policies to configure the various features of IDSs that may help them to recognize policy violations (Bace & Mell, 2001). The following are the most important considerations that need to be ensured during IDS configuration.

  1. Justifying the placement of agents to guard mission-critical assets (McHugh, Christie, & Allen, 2000);
  2. Aligning IDS configurations with organizational security policies (McHugh, Christie, & Allen, 2000);
  3. Installing the most up-to-date signatures and updates during the initial stages of installation (McHugh, Christie, & Allen, 2000);
  4. Creating user accounts and assigning roles and responsibilities (McHugh, Christie, & Allen, 2000);
  5. Customizing filters to generate appropriate levels of alerts (a minimal filter sketch follows this list);
  6. Determining the IDS's alert handling procedures and correlating alerts with other IDSs (if any exist), existing firewalls, and system or application logs (McHugh, Christie, & Allen, 2000).

The interoperability features of IDSs and the use of common alert formats will allow administrators to integrate data and alerts (McHugh, Christie, & Allen, 2000).
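
As referenced in item 5 above, alert filtering is one of the main levers for keeping alert volumes manageable. The minimal Python sketch below shows one possible shape of such a filter, using invented severity labels and alert records; it is illustrative only and does not correspond to any specific product's filter syntax.

```python
# A minimal sketch of a severity-based alert filter; the severity labels
# and alert records are hypothetical placeholders.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2}

def filter_alerts(alerts, minimum="medium"):
    """Return only alerts at or above the chosen severity floor."""
    floor = SEVERITY_ORDER[minimum]
    return [a for a in alerts if SEVERITY_ORDER[a["severity"]] >= floor]

alerts = [
    {"severity": "low", "signature": "ICMP echo request"},
    {"severity": "high", "signature": "Possible SQL injection"},
]
print(filter_alerts(alerts))   # only the high-severity alert is forwarded
```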

Security hardening and policy enforcement. Sometimes IDSs may be the attackers' primary targets, and security hardening is necessary to ensure IDSs' safety (Scarfone & Mell, 2007). The important tasks during security hardening involve: i) hardening IDSs by implementing the latest patches and signature updates immediately after installation; ii) creating separate user accounts for general users and administrators with the appropriate levels of privileges (Scarfone & Mell, 2007); iii) controlling access to various firewalls, routers, and packet filtering devices; iv) securing IDS communication by implementing suitable encryption technology (Scarfone & Mell, 2007); etc.

Ongoing Operation and Maintenance Challenges

Monitoring, operation, and maintenance of distributed IDSs are normally conducted remotely through the management console or GUI (i.e., menus or options). In addition, command line interfaces may facilitate local management of IDS components. Ongoing operation and maintenance of IDSs are substantial challenges for organizations and require basic knowledge of system and network administration, information security policies, various IDS principles, organizations' security policies, and incident response guidelines (Scarfone & Mell, 2007). Sometimes more advanced skills are required, such as advanced data manipulation skills (e.g., report generation) and programming skills (e.g., code customization). The most important operation and maintenance activities are:

  1. performing monitoring, analysis, and reporting activities;
  2. managing IDSs for an appropriate level of protection, such as re-configuring IDS components with the necessary changes to the network, applying updates, etc.; and
  3. managing skills for ongoing operation and maintenance (Scarfone & Mell, 2007).

Monitoring, analysis and reporting. Successful monitoring of IDSs involves monitoring network traffic and the proper recognition of suspicious behavior. The important tasks during ongoing monitoring include i) monitoring various IDS components to ensure security (Scarfone & Mell, 2007); ii) monitoring and verifying different operations, such as event processing, alert generation, etc. (Scarfone & Mell, 2007); and iii) periodic vulnerability assessments. IDS vulnerability assessments are conducted through appropriate levels of analysis by incorporating various IDS features and tools and by correlating agents' data (Scarfone & Mell, 2007). For ease of monitoring, IDSs need to generate reports in readable formats, which is done through various levels of customization of views (Scarfone & Mell, 2007). Because monitoring and maintenance involve substantial human intervention, they can consume a great deal of staff time and resources. Organizations can overcome these challenges in two major ways: i) customizing and automating tasks to enhance control over maintenance activities (Scarfone & Mell, 2007) and ii) incorporating smart sensors that work autonomously in the network to analyze the traffic and recognize trends and patterns (Scarfone & Mell, 2007).
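
One way of automating part of the reporting workload, as suggested above, is to have a script summarize the day's alerts before an analyst reviews them. The minimal Python sketch below assumes hypothetical alert records with a severity field; it is not tied to any vendor's log format.

```python
# A minimal sketch of summarising alerts by severity for a daily report;
# the alert records are illustrative placeholders.
from collections import Counter

alerts = [
    {"severity": "high", "signature": "SQL injection attempt"},
    {"severity": "low", "signature": "ICMP ping sweep"},
    {"severity": "high", "signature": "SQL injection attempt"},
    {"severity": "medium", "signature": "Excessive login failures"},
]

by_severity = Counter(a["severity"] for a in alerts)
print("Alert summary:")
for severity in ("high", "medium", "low"):
    print(f"  {severity:<6} {by_severity.get(severity, 0)}")
```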

Applying updates. Regular IDS updates need to be applied in order to achieve appropriate protection for both the IDS and the system. Security officials need to check vendors' notifications of security information and updates periodically and apply them as soon as they are released (Scarfone & Mell, 2007). Both software updates and signature updates are important for IDS security and appropriate functioning. A software update provides bug fixes and new features to the various components of an IDS product, including sensors or agents, management servers, consoles, etc. (Scarfone & Mell, 2007). A signature update enhances the IDS's detection capabilities by updating configuration data. Hackers can alter the code of updates, so verifying the checksum of each update is crucial before applying it (Scarfone & Mell, 2007; Mell et al., 1999; Hegarty et al., 2009). Apart from software updates, organizations need to justify the positioning of IDS agents and components and ensure their optimal placement by periodically reviewing the network configurations and changes (McHugh, Christie, & Allen, 2000).
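
Checksum verification of the kind mentioned above can be scripted. The minimal Python sketch below computes the SHA-256 digest of a downloaded update and compares it with the digest published by the vendor; the file name and expected digest shown in the comments are placeholders, and the exact verification mechanism (hashes, signatures, or both) depends on what the vendor actually publishes.

```python
# A minimal sketch of verifying an update's integrity before applying it;
# file names and digests are hypothetical placeholders.
import hashlib

def sha256_of(path, chunk_size=8192):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(path, expected_digest):
    """Return True only if the downloaded update matches the published checksum."""
    return sha256_of(path) == expected_digest

# Example usage (placeholder file and digest):
# if not verify_update("signature_update.pkg", "9f86d081884c7d65..."):
#     raise SystemExit("Checksum mismatch - do not apply this update")
```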

Retaining existing IDS configurations is a vital consideration before applying an update. Usually, normal updates will not change existing IDS configurations, but IDS code that has been tailored and customized by the administrators to incorporate desirable functionality may be altered during code updates. Administrators should therefore save and back up both customized code and configuration settings before applying updates (Scarfone & Mell, 2007). Applying updates to the IDS system or its components abruptly also poses certain challenges. New signatures or detection capabilities can cause a sudden flood of alerts (Scarfone & Mell, 2007). To detect and overcome problematic signatures in an update, administrators should test signature and software updates on a smaller scale, or within a specific host or agent (Scarfone & Mell, 2007).

Generating skills. The ongoing operation and maintenance of IDSs and the appropriate utilization of IDS data require security officials with a particular set of skills and knowledge. The security teams of many organizations are unable to customize or tune IDS products based on the IDS data in their own networks within a reasonable time frame (Werlinger, et al., 2008). To ensure the effective use of IDSs at both the user and administrator levels, organizations must consider providing training to all stakeholders involved in IDS operations. This includes acquiring skills in general IDS principles, operating consoles, customizing and tuning IDS components, generating reports, etc. (Scarfone & Mell, 2007). Organizations should take the available training options into consideration according to the users' needs and convenience, such as online training, CBT, instructor-led training, lab practice, hands-on exercises, etc. (Scarfone & Mell, 2007). Organizations may also utilize various information resources (Scarfone & Mell, 2007), such as electronic and paper-based documents (e.g., installation guides, users' manuals, policy recommendation principles and guidelines), to generate the skills required during installation and maintenance activities (Scarfone & Mell, 2007).

Managing Distributed Intrusion Detection System Agents

Managing Agents in a Distributed Environment

Different distributed IDS architectures consist of a variety of role-based agents, such as sniffer, filter, misuse detection, anomaly detection, rule mining, and reporter agents (Scarfone & Mell, 2007; Anderson, Frivold & Valdes, 1995). The distribution of intrusion detection tasks among agents substantially reduces the IDS's operational load and increases performance. However, one challenge associated with distributed IDSs is the management of a large number of agents. The IDS agents of many global companies sit in different geographical regions (Innella, McMiIlan & Trout, 2002). To optimize IDS performance and save valuable resources, large organizations need to weigh the options of centralized versus distributed management of agents (Innella, McMiIlan & Trout, 2002). If the management of an IDS does not involve several administrators or a hierarchical structure, a centralized approach to IDS management can provide a number of benefits over distributed management (Innella, McMiIlan & Trout, 2002). First, it simplifies the network structure and reduces the vulnerability points by reducing the requirement for multiple agents and sensors. Second, the simplified structure reduces management costs and other overheads (Innella, McMiIlan & Trout, 2002). Overall, it reduces network data transportation costs by minimizing the travel of agent data to multiple IDS managers. Organizations should choose the most efficient approach to data collection, and centralized management can help administrators to coordinate multiple IDSs or agents efficiently through a smooth and uncluttered network (Innella, McMiIlan & Trout, 2002).

Another challenge of managing distributed agents is to ensure agents' integrity. Hosts must ensure that agents are free of malicious code before permitting them to operate on the platform. This is done by signing agents' code, i.e., incorporating valid certificates against which the hosts check the integrity of an agent (Krugel & Toth). Agents are also vulnerable to modification during transmission (Krugel & Toth). Applying an appropriate encryption method during agent transmission can overcome this barrier.
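
To make the idea of checking an agent's integrity before execution concrete, the minimal Python sketch below uses an HMAC over the agent code with a pre-shared key. This is a deliberate simplification: the approach described above relies on code signing with certificates (and, at larger scale, a PKI), so the keyed hash here stands in only to keep the example self-contained.

```python
# A minimal sketch of an integrity check on agent code before it is allowed
# to run; a keyed HMAC stands in for certificate-based code signing.
import hmac
import hashlib

SHARED_KEY = b"example-pre-shared-key"   # placeholder, not a real key

def sign_agent(code: bytes) -> str:
    """Produce an integrity tag attached to the agent when it is dispatched."""
    return hmac.new(SHARED_KEY, code, hashlib.sha256).hexdigest()

def verify_agent(code: bytes, tag: str) -> bool:
    """Host-side check: reject the agent if its code no longer matches the tag."""
    return hmac.compare_digest(sign_agent(code), tag)

agent_code = b"def run(): return 'collect packets'"
tag = sign_agent(agent_code)
print(verify_agent(agent_code, tag))                 # True - agent may run
print(verify_agent(agent_code + b"# tampered", tag)) # False - reject the agent
```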

In the case of mobile agents in a distributed IDS, the central management server dispatches a variety of agents to different nodes of the network. A single mobile agent may carry multiple functionalities, which incorporates a large amount of code into the agent's structure and places some limitations on its mobility (Krugel & Toth). A substantial part of this code is associated with functionality specific to the hosts' operating systems (Krugel & Toth). To overcome this limitation, i.e., to keep the agents small in size, only generic code can be incorporated into the agent's structure, while the operating system dependent code resides on the hosts themselves (Krugel & Toth).

Managing Interactions and Communications between Agents

Agents need to communicate with each other to maintain operational consistency. Agents can perform distant communication by creating communication channels among themselves and then exchanging messages (Brahmi, Yahia, & Poncelet, 2011). Agents interact with each other using an Agent Communication Language (ACL) (Brahmi, Yahia, & Poncelet, 2011). Information can be sent in text formats using standard and secured protocols (Brahmi, Yahia, & Poncelet, 2011). In some distributed IDS architectures, a mobile agent can directly visit a particular host, deploy itself on that host, and then exchange the required messages (Brahmi, Yahia, & Poncelet, 2011). Upon receiving the messages, the deployed agent can return to its place of origin or visit another host as required (Brahmi, Yahia, & Poncelet, 2011).
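
As a purely illustrative sketch of the kind of structured text message two agents might exchange, the Python snippet below builds a small JSON envelope with an ACL-style performative field. The field names and values are invented for the example and do not reproduce any actual ACL message format.

```python
# A minimal sketch of a structured agent-to-agent message; the envelope
# fields are illustrative, not an actual ACL specification.
import json
import time

def make_message(sender, receiver, performative, content):
    """Wrap the content in a simple text envelope that agents can exchange."""
    return json.dumps({
        "sender": sender,
        "receiver": receiver,
        "performative": performative,   # e.g., "inform" or "request"
        "timestamp": time.time(),
        "content": content,
    })

msg = make_message("sniffer-agent-3", "manager-1", "inform",
                   {"alert": "port scan", "src_ip": "192.0.2.44"})
print(json.loads(msg)["performative"])   # "inform"
```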

Collecting and Correlating IDS Agent Data

Collection and Storage of Distributed Data

Data collection, aggregation, and storage are vital concerns for the effective manipulation and correlation of event data (Innella, McMiIlan & Trout, 2002). Before data aggregation, organizations need to determine which types of data should be collected and preserved. Distributed IDSs place agents in different corners of the network, where the agents collect representative data in a distributed manner according to the organization's interests (Holtz, David, & de Sousa Junior, 2011). Once collected, the data is filtered, analyzed, and interpreted locally by the agents. Agents normally send only meaningful data to the management server. However, the responsibility of distributed IDSs or distributed agents is not only to collect network packets but also to collect audit traces from the associated hosts, such as logs generated by applications, operating systems, and other defensive software (Holtz, David, & de Sousa Junior, 2011). Organizations need to determine whether all of this data will be sent to the management server. For security reasons, IDS log data should be preserved both locally and centrally (Scarfone & Mell, 2007).

Another challenge of data storage is determining how long the log data should be preserved. Day-to-day accumulated log data can quickly overrun the capacity of data storage. Organizations may need to store IDS data accumulated over a period of as much as two years (Innella, McMiIlan & Trout, 2002), and conveniently storing this enormous amount of log data in the centralized server of a distributed IDS is challenging (Scarfone & Mell, 2007). To overcome the barrier of data storage, a number of researchers have suggested incorporating cloud-based data storage into the IDS architecture for scalability, flexibility, and ease of access (Scarfone & Mell, 2007; Alharkan & Martin, 2012; Chen et al., 2013).
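
A retention window of the kind discussed above is often enforced by a simple scheduled job. The minimal Python sketch below removes local log files older than a roughly two-year window; the log directory, the window, and the decision to delete rather than archive are all placeholders for what an organization's retention policy would actually specify.

```python
# A minimal sketch of enforcing a log retention window on locally stored
# IDS logs; the path and retention period are hypothetical.
import os
import time

LOG_DIR = "/var/log/ids"                  # hypothetical log location
RETENTION_SECONDS = 2 * 365 * 24 * 3600   # roughly two years

def purge_old_logs(log_dir=LOG_DIR, max_age=RETENTION_SECONDS):
    """Delete (or, in practice, archive) log files older than the retention window."""
    now = time.time()
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > max_age:
            os.remove(path)               # archiving to cheaper or cloud storage is an alternative

# purge_old_logs()   # typically run periodically from a scheduled job
```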

Data storage is not only associated with volume issues; other issues, such as storage management and the level of security applied to the data, also imply a great deal of challenge. IDS data is vulnerable both during transmission and during storage. To ensure the authenticity and integrity of collected data, suitable cryptographic arrangements are made during the transmission and storage of agent data (Holtz, David, & de Sousa Junior, 2011; Cloud Security Alliances, 2011; Catteddu & Hogben, 2009). Cryptographic arrangements in a large scale system can be managed effectively by deploying an enterprise wide Public Key Infrastructure (PKI) (Sen, 2010; Tolba et al., 2005).
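
To illustrate the idea of protecting agent data at rest and in transit, the minimal Python sketch below applies symmetric encryption to a single record using the third-party cryptography package's Fernet interface. This is only one possible arrangement; as noted above, a real large-scale deployment would manage keys through a PKI rather than generating them inline.

```python
# A minimal sketch of encrypting an agent record before it is stored or
# transmitted; assumes the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, keys would be distributed and managed via PKI
cipher = Fernet(key)

record = b'{"src_ip": "192.0.2.44", "alert": "port scan"}'
token = cipher.encrypt(record)  # ciphertext safe to transmit or store

print(cipher.decrypt(token) == record)   # True - the record survives the round trip
```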

Analyzing Intrusion Detection System Data

The administrators often need to carry out various analysis tasks through data fusion and event correlation in order to identify subtle attacks (Holtz, David, & de Sousa Junior, 2011). Analysis of IDS data requires appropriate manipulation of data originating from the network and the hosts, and administrators need sound analysis skills in order to accomplish this efficiently. The fundamental unit of IDS data is the event (Jordan, 2000). One way IDSs generate alarms is through context-sensitive analysis, by counting events and applying thresholds. For example, too many connections within a certain time are recognized as a SYN flood, or too many different ports visited at a time are recognized as a port scan (Jordan, 2000). Another way to determine an intrusion is by identifying the quality of uncoupled events in terms of whether they pass certain criteria, such as matching the pattern of a pre-recognized signature (Jordan, 2000). In a distributed IDS, this analysis of IDS data is performed locally by the distributed agents. More advanced analysis is performed in the centralized server through event correlation.
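
The threshold-style analysis just described can be sketched directly. In the minimal Python example below, a source address that touches more distinct destination ports than a hypothetical threshold within the sample is flagged as a possible port scan; the threshold value and the event tuples are illustrative and would in practice be tuned to the network.

```python
# A minimal sketch of threshold-based detection: flag sources that touch an
# unusually large number of distinct ports. Threshold and events are placeholders.
from collections import defaultdict

PORT_SCAN_THRESHOLD = 20   # illustrative value, tuned per network in practice

def detect_port_scans(events):
    """events: iterable of (src_ip, dst_port); return sources exceeding the threshold."""
    ports_by_src = defaultdict(set)
    for src_ip, dst_port in events:
        ports_by_src[src_ip].add(dst_port)
    return [src for src, ports in ports_by_src.items()
            if len(ports) > PORT_SCAN_THRESHOLD]

events = [("198.51.100.7", p) for p in range(1, 40)] + [("10.0.0.8", 443)]
print(detect_port_scans(events))   # ['198.51.100.7']
```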

Correlating Agent Data

While the task of each agent is to identify network intrusions and suspicious behavior in its associated network segment, the centralized server is responsible for correlating the individual agents' data in order to identify planned and distributed attacks on the network (Yee, 2003). The centralized server aggregates agent data for event correlation. In the process of event correlation, if a network packet with an inconsistent signature is identified (Jordan, 2000) or an event is recognized as suspicious, the next step is to identify correlated events demonstrating similar patterns (Jordan, 2000). In order to accomplish this goal, IDSs constantly search for connections between suspicious and non-suspicious events (Jordan, 2000). Network administrators may need to adopt various analysis techniques (e.g., data fusion, data correlation, etc.) and tools (e.g., honeypots) to carry out the event correlation tasks successfully (Holtz, David, & de Sousa Junior, 2011). However, in a large scale distributed network where each segment of the network has distinct characteristics and where the hosts run in heterogeneous environments, associating one suspicious network event with another event generated from a distant network segment is tremendously challenging (Innella, McMiIlan & Trout, 2002). It requires a broad understanding of the entire network as well as effective communication and coordination between the security officials responsible for managing the various segments of the network.
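
A very small sketch of what server-side correlation might look like is given below: reports arriving from agents in different segments are grouped when they share a source address and fall within a short time window. The field names, the window size, and the grouping rule are all illustrative assumptions rather than an actual correlation algorithm.

```python
# A minimal sketch of correlating agent reports at the management server by
# shared source address within a time window; all values are placeholders.
from collections import defaultdict

WINDOW = 300   # seconds; illustrative correlation window

def correlate(reports):
    """Group reports by source address; keep groups that fall inside the window."""
    grouped = defaultdict(list)
    for r in sorted(reports, key=lambda r: r["time"]):
        grouped[r["src_ip"]].append(r)
    correlated = []
    for src, rs in grouped.items():
        if len(rs) > 1 and rs[-1]["time"] - rs[0]["time"] <= WINDOW:
            correlated.append((src, [r["agent"] for r in rs]))
    return correlated

reports = [
    {"agent": "segment-A", "src_ip": "203.0.113.9", "time": 100, "event": "port scan"},
    {"agent": "segment-B", "src_ip": "203.0.113.9", "time": 220, "event": "login failures"},
    {"agent": "segment-C", "src_ip": "10.0.0.4",   "time": 150, "event": "ping sweep"},
]
print(correlate(reports))   # [('203.0.113.9', ['segment-A', 'segment-B'])]
```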

Correlating Data from Multiple Intrusion Detection System Products

Correlation of different types of IDS data facilitates the identification of large scale distributed attacks in a coordinated manner (Brahmi et al., 2012; Brahmi, Yahia, & Poncelet, 2011). Each IDS product has its advantages and limitations, and a single product cannot ensure full protection from all kinds of intrusions and malicious activities. Large organizations that run multiple products (either from the same or different vendors) with different detection methods and strategies need to correlate their IDS data to obtain maximum benefit from them (Sallay, AlShalfan, & Fred, 2009). A single management interface (or console) can facilitate the coordination, management, and control of IDS data coming from multiple IDS products (Scarfone & Mell, 2007). Organizations may need to identify whether the IDS products can share and coordinate various kinds of IDS data directly within their management interfaces (Scarfone & Mell, 2007); this is normally the case for different IDS products coming from the same vendor. Organizations also need to ensure that their IDSs have interoperability features to share log files or other output files from other IDSs and security related products (Scarfone & Mell, 2007). This type of coordination among multiple IDSs is normally accomplished by SIEM (Security Information and Event Management) software (Scarfone & Mell, 2007; Chuvakin, 2010).
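
Before alerts from different products can be correlated in one console or SIEM tool, they usually have to be normalized into a shared record shape. The minimal Python sketch below maps alerts from two hypothetical products into one common format; the vendor field names are invented for illustration and do not correspond to any real product's output.

```python
# A minimal sketch of normalising alerts from two hypothetical IDS products
# into one shared record so they can be viewed and correlated together.
def from_product_a(alert):
    return {"time": alert["ts"], "src": alert["source_ip"],
            "dst": alert["dest_ip"], "msg": alert["sig_name"], "product": "A"}

def from_product_b(alert):
    return {"time": alert["when"], "src": alert["attacker"],
            "dst": alert["victim"], "msg": alert["description"], "product": "B"}

combined = [
    from_product_a({"ts": 1700000000, "source_ip": "203.0.113.9",
                    "dest_ip": "10.0.0.2", "sig_name": "Nmap scan"}),
    from_product_b({"when": 1700000040, "attacker": "203.0.113.9",
                    "victim": "10.0.0.5", "description": "Brute-force SSH"}),
]
for record in sorted(combined, key=lambda r: r["time"]):
    print(record)
```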

References

Abraham, A., & Thomas, J. (2005). Distributed intrusion detection systems: a computational intelligence approach. Applications of information systems to homeland security and defense. USA: Idea Group Inc. Publishers, 105-135.

Alharkan, T., & Martin, P. (2012). IDS aaS: Intrusion detection systems as a service in public clouds. In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), 686-687. IEEE Computer Society.

Anderson, D., Frivold, T., & Valdes, A. (1995). Next-generation intrusion detection expert system (NIDES): A summary. SRI International, Computer Science Laboratory.

Araújo, J. D., & Abdelouahab, Z. (2012). Virtualization in Intrusion Detection Systems: A Study on Different Approaches for Cloud Computing Environments. International Journal of Computer Science and Network Security (IJCSNS), 12(11), 10.

Bace, R., & Mell, P. (2001). NIST special publication on intrusion detection systems. An NIST (National Institute of Standards and Technology) publication

Beg, S., Naru, U., Ashraf, M., & Mohsin, S. (2010). Feasibility of intrusion detection system with high performance computing: A survey. International Journal for Advances in Computer Science, 1(1), 26-35.

Boudaoud, K., Labiod, H., Boutaba, R., & Guessoum, Z. (2000). Network security management with intelligent agents. In Network Operations and Management Symposium, 2000. (NOMS 2000).

Brandao, J. E. M., da Silva Fraga, J., Mafra, P. M., & Obelheiro, R. R. (2006). A WS-based infrastructure for integrating intrusion detection systems in large-scale environments. In Meersman, R., Tari, Z., & Herrero, P. (2006). On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops; proceedings of the OTM Confederated International Conferences, CoopIS, DOA, GADA, and ODBASE 2006, Montpellier, France.

Brahmi, I., Yahia, S. B., Aouadi, H., & Poncelet, P. (2012). Towards a multiagent-based distributed intrusion detection system using data mining approaches.

Brahmi, I., Yahia, S. B., & Poncelet, P. (2011). A Snort-based Mobile Agent for a Distributed Intrusion Detection System. In SECRYPT, 198-207.

Brennan, M. P. (2002). Using Snort for a Distributed Intrusion Detection System. SANS Institute.

Bye, R., Camtepe, S. A., & Albayrak, S. (2010). Collaborative Intrusion Detection Framework: Characteristics, Adversarial Opportunities and Countermeasures.

Catteddu, D., & Hogben, G. (2009). Cloud Computing: benefits, risks and recommendations for information security. European Network and Information Security Agency (ENISA).

Chuvakin, A. (2010). SIEM: Moving Beyond Compliance – Intrusion Detection Systems. White Paper for RSA.

Chen, Z., Han, F., Cao, J., Jiang, X., & Chen, S. (2013). Cloud computing-based forensic analysis for collaborative network security management system. Tsinghua Science and Technology, 18(1), 40-50.

Chatzigiannakis, V., Androulidakis, G., Grammatikou, M., & Maglaris, B. (2004). A distributed intrusion detection prototype using security agents. HP OpenView University Association.

Cloud Security Alliances (2011). Security guidance for critical areas of focus in cloud computing v3.0. A report by Cloud Security Alliance.

Debar, H., Dacier, M., & Wespi, A. (1999). Towards a taxonomy of intrusion-detection systems. Computer Networks, 31(8), 805-822.

Debar, H., & Wespi, A. (2001). Aggregation and correlation of intrusion-detection alerts. In Recent Advances in Intrusion Detection, 85-103. Springer Berlin Heidelberg.

Faysel, M. A., & Haque, S. S. (2010). Towards cyber defense: research in intrusion detection and intrusion prevention systems. International Journal of Computer Science and Network Security (IJCSNS), 10(7), 316-325.

Garfinkel, T., & Rosenblum, M. (2003). A Virtual Machine Introspection Based Architecture for Intrusion Detection. In NDSS, 3, 191-206.

Ghosh, A., & Sen, S. (2004). Agent-based distributed intrusion alert system. In Proceedings of the Sixth International Workshop on Distributed Computing (IWDC'04), 240-251, Kolkata, India.

Gunawan, L. A., Vogel, M., Kraemer, F. A., Schmerl, S., Slåtten, V., Herrmann, P., & König, H. (2011). Modeling a distributed intrusion detection system using collaborative building blocks. ACM SIGSOFT Software Engineering Notes, 36(1), 1-8.

Hegarty, R., Merabti, M., Shi, Q., & Askwith, B. (2009). Forensic analysis of distributed data in a service oriented computing platform. In proceedings of the 10th Annual Postgraduate Symposium on The Convergence of Telecommunications, Networking & Broadcasting, PG Net.

Holtz, M. D., David, B. M., & de Sousa Junior, R. T. (2011). Building Scalable Distributed Intrusion Detection Systems Based on the MapReduce Framework. Revista Telecomunication, 2, 22-31.

Iheagwara, C. (2003). Intrusion Detection Systems–Strategies for improving Performance.

Innella, P., McMiIlan, O., & Trout, D. (2002). Managing Intrusion Detection Systems in Large Organizations.

Jordan, C. (2000). Analyzing Intrusion Detection Systems Data.

Kannadiga, P., & Zulkernine, M. (2005). DIDMA: A distributed intrusion detection system using mobile agents. In Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (SNPD/SAWN’05), 238-245.

Kittel, T. (2010). Design and Implementation of a Virtual Machine Introspection based Intrusion Detection System.

Kourai, K., & Chiba, S. (2005). HyperSpector: virtual distributed monitoring environments for secure intrusion detection. In Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments, 197-207.

Kozushko, H. (2003). Intrusion detection: Host-based and network-based intrusion detection systems. Independent study.

Krugel, C., & Toth, T. Applying Mobile Agent Technology to Intrusion Detection.

Lasheng, Y., & Chantal, M. (2009). Agent based distributed intrusion detection system (ABD Intrusion Detection Systems). In Proceedings of the Second Symposium International Computer Science and Computational Technology (ISCSCT ’09), 134-138, Huangshan, P. R. China.

Maciá-Pérez, F., Mora-Gimeno, F., Marcos-Jorquera, D., Gil-Martínez-Abarca, J. A., Ramos-Morillo, H., & Lorenzo-Fonseca, I. (2011). Network intrusion detection system embedded on a smart sensor. Industrial Electronics, 58(3), 722-732.

McHugh, J., Christie, A., & Allen, J. (2000). The role of intrusion detection systems. IEEE Software, 17(5), 42-51.

Mell, P., Karygiannis, T., Marks, D., & Jansen, W. (1999). Applying mobile agents to intrusion detection and response. A publication of the National Institute of Standards and Technology (NIST), US Department of Commerce.

Neelima, S., Prasanna, L.Y. (2013). A Review on Distributed Cloud Intrusion Detection System. International Journal of Advanced Technology & Engineering Research (IJATER), 3(1), 116-120.

Rao, K. R., Pal, A., & Patra, M. R. (2009). A service oriented architectural design for building intrusion detection systems. International Journal of Recent Trends in Engineering and Technology, 1(2), 11-14.

Sallay, H., AlShalfan, K. A., & Fred, O. B. (2009). A scalable distributed Intrusion Detection Systems Architecture for High speed Networks. International Journal of Computer Science and Network Security (IJCSNS), 9(8).

Scarfone, K., & Mell, P. (2007). Guide to intrusion detection and prevention systems (IDPS). NIST special publication, Technology Administration, U.S. Department of Commerce.

Sen, J. (2010). An Agent-Based Intrusion Detection System for Local Area Networks. International Journal of Communication Networks and Information Security (IJCNIS), 2(2), 128-140.

Singh, R. R., Gupta, N., & Kumar, S. (2011). To reduce the false alarm in intrusion detection system using self-organizing map. International Journal of Soft Computing and Engineering (IJSCE), 1(2), 27-32.

Sterne, D., Balasubramanyam, P., Carman, D., Wilson, B., Talpade, R., Ko, C. & Bowen, T. (2005). General cooperative intrusion detection architecture for MANETs. In Proceedings of the Third IEEE International Workshop on Information Assurance, 57-70.

Sundaram, A. (1996). An introduction to intrusion detection. Crossroads, 2(4), 3-7.

Suryawanshi, G. R., Jondhale, S. D., Korde, S. K., Ghorpade , P. P., Bendre, M. R. (n. d.). Mobile Agent for Distributed Intrusion Detection Systems in Distributed System. International Journal of Computer Technology and Electronics Engineering (IJCTEE), 1(3), 70-75.

Tierney, B., Crowley, B., Gunter, D., Lee, J., & Thompson, M. (2001). A monitoring sensor management system for grid environments. Cluster Computing, 4(1), 19-28.

Tolba, M., Abdel-Wahab, M., Taha, I., & Al-Shishtawy, A. (2005). Distributed Intrusion Detection Systems for Computational Grids. In International Conference on Intelligent Computing and Information Systems, 2.

Werlinger, R., Hawkey, K., Muldner, K., Jaferian, P., & Beznosov, K. (2008). The challenges of using an intrusion detection system: is it worth the effort?. In Proceedings of the 4th symposium on Usable privacy and security, (SOUPS), July 23-25, Pittsburgh, PA, USA.

Wotring, B. (2010). Host Integrity Monitoring: Best Practices for Deployment.

Yee, A. (2003). The intelligent Intrusion Detection Systems: next generation network Intrusion Detection Systems management revealed. NFR security white paper.

Zhai, S., Hu, C., & Weiming, Z. (2014). Multi-Agent Distributed Intrusion Detection Systems Model Based on BP Neural Network. International Journal of Security and Its Applications, 8 (2), 183-192.


Communication Computing

How Computers Affect Communication

Communication – with the current rate of technological improvement, people now have many ways to relate with each other. We continually modify how we form our relationships and develop more diverse methods of interacting. However, each of these methods has its own strengths, weaknesses, and drawbacks.

Computers have made a big impact on relationships. The invention of electronic devices such as mobile phones has simplified the way people get into contact with each other (Sara, 2004). The rise of social networks has also diversified communication. With a computer, people can chat with family members and friends over the internet. Computers are bringing the world together through communication.

This assignment will study the effects of computers on communication in day-to-day life.

Focus questions:

How has the invention of the computer affected communication?

What are the positive and negative impacts of computers on communication?

What services are offered by the computer during communication?

Description

Effects of computers intervention on communication

The invention of the computer has had revolutionary effects on communication (Sara, Jane & Timothy, 2004). What would have taken years to achieve a century ago can now be accomplished within an hour thanks to the computer. However, this invention has not had only positive impacts on communication; it has also weakened some aspects of communication, such as face-to-face interaction. Communication is not only the act of sending and receiving information; there are many other aspects, such as acting on the information, reacting to its demands, and giving feedback.

Computers affect communication both positively and negatively (Sara, Jane & Timothy, 2004). There are various aspects of modern day communication that could not have been achieved in earlier times, since there were no computers. However, other vital aspects of traditional communication have been lost in modern days due to the use of computers in communication (Gordon, 2002). The relationship between computers and communication can be analyzed with a focus on the various effects computers have on communication. Computers can either aid or frustrate communication between people. The positive impacts describe ways in which computers aid communication; the negative implications describe ways in which computers frustrate communication efforts. These impacts can be analyzed by comparing past communication with present day communication.

Positive effects of computers on communication

Technological improvement has had a great impact on society's communication methods, particularly over the last century (Christina, 2006). From the arrival of the telegraph and the telephone to the advancement of the internet, technology has provided us with tools to express our feelings and opinions to a wider number of recipients.

Keeping in Touch / Long-Distance Communication

Telegrams are quicker than letters; telephone calls are quicker still, as well as easier and more pleasant, since they require no intermediary and allow people to hear one another's voice. Mobile phones take this a step further, allowing people to call and talk with one another regardless of their location. Online communication of various types is the most efficient yet: email is a near-instant form of the paper letter, and webcams combined with communication programs such as Skype or Google Video Chat make it possible to see the person you are talking with rather than simply hearing their voice.

Doing Business

The same computers that have simplified and enhanced individual communication have had equally valuable consequences for business. Communication between partners is nearly instant whether they are a few rooms or a few countries apart (Sundar, 2014). For example, video conferencing allows organizations to have specialists scattered all over the world while still holding productive meetings and discussions; business networking is made simpler by social media and online networks designed specifically for that purpose, such as LinkedIn. Perhaps most importantly, organizations can extend beyond their local market and gain a wider customer base simply by maintaining an active online presence.

Overcoming Disabilities

The computer has both enhanced communication for people with disabilities and made it possible where it was not (Gordon, 2002). Hearing aids amplify sound for people who are hard of hearing, making it easier to understand speech, while cochlear implants restore hearing to the deaf. Speech-generating devices give people with severe speech impairments a way to communicate; perhaps the most famous user of such a device was the scientist Stephen Hawking. Further technological advances may produce practical brain-computer interface systems, restoring the ability to communicate to people who have lost it completely, for example sufferers of locked-in syndrome.

Reaching a Wider Audience

As individuals' capacity to communicate improves, the reach of their messages widens (Gordon, 2002). This can be particularly important in political campaigns and activism. For example, photographs and video recorded discreetly on a phone can be quickly and easily shared online through sites such as YouTube, making it harder for oppressive regimes to keep control; social media such as Facebook and Twitter can be used to organize and coordinate gatherings and protests. The Egyptian revolution of 2011-2012 was driven significantly by social media.

An individual becomes better able to make decisions because the computer provides timely access to all the data required. As a result, people and institutions can achieve success faster.

A person working at the administrative level becomes less dependent on low-level staff such as clerks and bookkeepers (Christina, 2006). This improves their working patterns and effectiveness, which benefits the organization and ultimately society.

In everyday life, individuals likewise benefit from computer technology. Where airports, hospitals, banks, and department stores have been computerized, people receive quick service thanks to the computer systems.


Negative Effects of Computers on Communication

Computers have changed the working environment and society as a whole. Individuals and organizations have become reliant on computers to connect them to colleagues, vendors, customers, and information (Roundtable, 1999). Computers are used to track schedules, streamline information, and provide required data. Although computers have given workers countless business tools and easier access to information nearby or abroad, there are negative effects. These include more than the obviously feared system failures and cyber crimes.

Communication Breakdowns

Because computers are so common in the workplace, email is now a typical method of professional communication. This has caused plenty of miscommunication issues. Many employees lack appropriate writing skills and can therefore struggle to convey their messages effectively. Yet even the most gifted writer can still have difficulty conveying tone in electronic messages. Without vocal expression and non-verbal cues, messages intended as neutral or even complimentary can be interpreted as rude or critical. To add to this, many workers are so reliant on email that they have not built a positive working relationship through face-to-face contact or telephone calls.

Anonymity

One of the principal issues with computer-mediated communication stems from a lack of accountability among users (Jungwan Hong, 2014). People can represent themselves as whatever they want on Internet forums or social networks, and this creates communication problems in both directions. A user may distort who he is by not giving accurate details about himself, and this lack of honesty affects how that person is perceived. The cloak of anonymity also allows a user to disregard socially accepted practices such as tolerance or politeness.

Misinterpretation

The fact that most computer-based communication takes place as text can be a real negative for our capacity to understand things clearly. Even with email (Roundtable, 1999), it is possible for information to be confused or for the sentiment of a statement to be missed. Saying "you rock" to somebody in an email message, for example, could genuinely convey appreciation; then again, it could express a negative feeling about somebody being placed in a difficult position. The contextual clues that a person gives through body language and tone of voice are lost in this situation. Users get around some of this confusion by using emoticons, keyboard characters that serve as a shorthand for mood and feeling, but a lot of nuance can still be missed without seeing how somebody responds with their body language and voice.

Dependency

Society's reliance on computers for communication is likewise a risky game, as outside forces can prevent communication in a variety of ways (Christina, 2006). Earthquakes, floods, and hurricanes have caused various slowdowns and stoppages of Internet availability for people all over the world. Moreover, dependence on social networks and email can have the unintended result of exposing a person to identity theft attempts and email scams. Even the outside force of political unrest can weaken a user's ability to communicate, as the 2011 demonstrations in Cairo and Libya resulted in government shutdowns of the Internet, drastically curtailing each nation's ability to communicate, both domestically and internationally.

Privacy

Communicating by means of computers can help individuals connect across large geographical divides and access remote information, but doing so may expose a person's privacy more than he may want. With an in-person meeting or telephone conversation, there is a relative assurance that the details shared will stay private. In contrast, with email, text messaging, or message boards, there is a record of what people say (Christina, 2006). Information is not simply thrown out into the air like speech; it is stored as a lasting record. There is an inherent risk when outsiders can access these online "conversations." Similarly, social networks and other Internet-based communication tools are vulnerable to privacy breaches, as users regularly take part in these activities on public networks, potentially leaving personal data out in the open.

Computers also complicate conflict resolution in communication: multiple transfers of information, increased anonymity, and the fast flow of information make conflict resolution in organizations extremely complex (Christina, 2006). With computers, it has become extremely hard to control access to information by unintended groups such as children, junior staff, and political opponents, among others (Christina, 2006).

Computers have affected communication in various ways, and there are important differences between modern communication and older forms. Older communication was characterized by the slow movement of information, heavy expense, ineffective transfer of information, and limits on the amount of information that could be transferred (Sundar, 2014). Modern communication, by contrast, is fast, less expensive, highly efficient, and clear. However, modern communication cannot function well without computers.

Services Offered By Computer in Communication

The internet is a popular tool for gaining access to information, expanding commerce, and communicating with one another. Studies show that the dominant use of the net in households is person-to-person communication. Email, instant messaging, chat rooms, and support sites have changed the way people pass information to others through the use of computers.

Computers allow users to talk to each other via connected networks and social sites (Sara, 2004). Computers can also connect different users to the internet through phone lines or cabled connections, enabling them to share data and information; for example, one can use a computer to send a message to other computers through a globally connected network. Data transfer and messaging are thus useful services provided by computers in global communication.

Discussion

Just over a century ago, a substantial number of inventions appeared during the first industrial revolution. Within a short period, numerous nations became industrialized. We are now at the beginning of another technological revolution (Gordon, 2002). The major driver of this second revolution is the creation of computers. The computer is the most flexible machine people have ever made. It plays an important part in our everyday life and covers an enormous range of applications, including education, industry, government, medicine, scientific research, law, and even music and the arts.

Over the next decade, national and international computer networks will be built and connected together (Asthmatics, 2010). Some of these networks will be specialized, acting as links between national information stores and data warehouses, while others will provide general computing capabilities.

Computerized information stores covering every aspect of the work performed by society will be generated with higher frequency and wider coverage, and will be accessible at different levels to groups and individuals with different qualifications.

Computer networks will become more accessible at low cost to the general public and to groups with limited resources. For example, the cost of renting a computer terminal in London is currently around 72 dollars per installation per month, not counting the hourly cost of using the terminal (Jungwan Hong, 2014). Such costs will tend to fall to the point where a terminal is normal office equipment.

There will be an increase in the ease with which people communicate directly with one another globally. The use of video chat will expand, extending the influence of individuals across many areas over long periods of time.

Conclusion

Computers have positive and negative effects on communication. Weighing the positive effects against the negative ones, there is clear evidence that computers do more good to communication than harm (Asthmatics, 2010). Computers play a very important role in facilitating communication. People should use computers to expand their circle of friends and relations globally, but we should also learn to control and value our relationships and our use of computers so that nothing regrettable occurs.

References

Asthmatics, G. (2010). The revolution of communication and its effect on our life. Pavaresia. Academics International Scientific Journal, 100-108.

Christina, K. (2006). The Impact of Information and Communication Technologies on Informal Scholarly Scientific Communication: A Literature Review. Maryland: University of Maryland Press.

Gordon, B. (2002). Computers in Communication. UK: McGraw-Hill International Electronic Version.

Jungwan Hong, S. B. (2018). Usability Analysis of Touch Screen for Ground Operators. Journal of Computer and Communications, 1-20.

Roundtable, N. R. (1999). Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology. Washington: National Academies Press.

Sara, K. J. (2004). Social Psychological aspects of Computer Mediated Communication. Pittsburgh: Carnegie Mellon University.

Sundar, S. S. (2014). Communication theory. Journal of Computer-Mediated Communication, 85.
