Network Design and Structure Dissertations

Network Design and Structure

When implementing a network in an organisation, several design issues must be considered before implementation. The requirements of the network and all the network components to be used must be clearly defined. Some of these considerations are discussed below.

Network Design and Network Architecture

Network architecture is the infrastructure of software, transmission equipment, and communication protocols that defines the structural and logical layout of a computer network. Transmission can be wired or wireless depending on the organisation's requirements. The type of network applied depends on its size: a local area network (LAN) covers a small geographical area, a metropolitan area network (MAN) covers a city, and a wide area network (WAN) spans a wide geographical area. Of the three, the company would implement a LAN since it covers only a small geographical area.

Transmission Media

The transmission medium of a network can be wired or wireless. Wired media include coaxial and fiber-optic cables, while wireless media transmit data over radio. The bandwidth, throughput, and goodput required determine the best transmission medium. Fiber-optic cables have low signal loss and are immune to electromagnetic interference, making them efficient for data transfer on high-traffic networks. Coaxial cables are less expensive than fiber-optic cables but suffer higher signal loss. Wireless transmission is efficient in a local area network with few computers.

Network Design Management Method

The management method of a network can be either peer-to-peer or client-server. In a peer-to-peer network, several computers communicate directly without a central computer, and many computers can share a single application installed on one of them. In a client-server network, each client is independent and a central server provides services to the clients; the model is designed to support a large number of clients that do not share resources. Security is enhanced in the client-server model because it is handled centrally by the server, and the model is also easy to upgrade to meet new requirements in an organisation.

Figure 1: A client-server model

 

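
To make the client-server model concrete, the sketch below sets up a central server socket and a single client on the loopback interface using Python's standard library. It is a minimal illustration of the model described above, not any particular product's implementation; the addresses and messages are arbitrary.

```python
# Minimal client-server sketch on the loopback interface: a central
# server socket provides a service that an independent client consumes.
import socket

# Server side: bind a central listening socket (the OS picks a free port)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()

# Client side: connect to the central server and send a request
client = socket.create_connection(server.getsockname())
client.sendall(b"request")

# Server side: accept the connection and provide the service
conn, addr = server.accept()
data = conn.recv(1024)
conn.sendall(b"response to " + data)

print(client.recv(1024))    # b'response to request'
for s in (client, conn, server):
    s.close()
```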

Network Topology

Network topology is divided into physical and logical topology. Physical topology refers to the way computers and other devices are connected, while logical topology describes how data flows through the network. Bus, ring, star, and mesh are the main types of topology. In a bus topology, all devices are connected to a single cable; it works for small networks, but it is slow and collisions are common. In a ring topology, the cable forms a loop with each node connected to its neighbours; there are fewer collisions than in a bus topology because a circulating token controls access to the medium. In a star topology, all devices are connected to a central hub; central management makes upgrades faster, but failure of the central hub brings down the entire network. A mesh topology connects every device to every other device, providing the fault tolerance and redundancy that improve performance.

Network Design Security Requirements

Networks are frequently attacked by hackers and other malicious actors, which makes security one of the key considerations when designing a network. To reduce the number of attacks, the network should include firewalls, intrusion detection systems, virtual private networks (VPNs), and a demilitarised zone (DMZ). These measures reduce threats and help detect malicious activity on the network.

Scalability

This refers to the ability of the network to grow. The network should be scalable enough to cater for growth in the network infrastructure.

Network Address Translation (NAT)

This is a design consideration in which many computers on a private network access the Internet through one public IP address. Because internal addresses are hidden from the outside, NAT also enhances the security of the network.
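
The following toy sketch illustrates the NAT idea described above: many private addresses share one public address, and the gateway keeps a translation table so replies can be routed back. Real NAT operates on packets inside a router; the table-based logic, addresses, and port numbers here are purely illustrative.

```python
# Toy sketch of port-based NAT: many private addresses share one
# public address, and the gateway keeps a translation table to route
# replies back to the right internal host.
PUBLIC_IP = "203.0.113.5"          # example public address (RFC 5737 range)
nat_table: dict[int, tuple[str, int]] = {}
next_port = 40000

def outbound(private_ip: str, private_port: int) -> tuple[str, int]:
    """Translate a private (ip, port) pair to the shared public address."""
    global next_port
    nat_table[next_port] = (private_ip, private_port)
    mapped = (PUBLIC_IP, next_port)
    next_port += 1
    return mapped

def inbound(public_port: int) -> tuple[str, int]:
    """Translate a reply arriving on a public port back to the private host."""
    return nat_table[public_port]

print(outbound("192.168.1.10", 51000))   # ('203.0.113.5', 40000)
print(inbound(40000))                    # ('192.168.1.10', 51000)
```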

Figure 2: Network architecture


Figure 3: How a VPN is implemented


OSI Reference Model in Network Design

The OSI model has seven layers, as highlighted in the diagram. The communication system is subdivided into layers, where each layer sends service requests to the layer below it and receives service requests from the layer above it (FitzGerald & Dennis, 2009).

Figure 4: The OSI reference model

Layer 1: Physical Layer

The physical layer covers the hardware and network devices and the transmission medium used in the network. It receives service requests from the data-link layer and encodes data into signals for transmission, decoding received signals back into data. Standards associated with this layer include the Ethernet physical layer specifications; CSMA/CD is often listed here as well, although it formally belongs to the data-link MAC sublayer (Liu, 2009).

Layer 2: Data-Link Layer

The data-link layer receives service requests from the network layer and sends service requests to the physical layer. Its main function is to provide reliable delivery of data across the physical link. Other functions include framing, flow control, and error detection and correction. The data-link layer has two sublayers: the media access control (MAC) layer, which performs frame parsing, data encapsulation, and frame assembly; and the logical link control (LLC) layer, which is responsible for error checking, flow control, and frame synchronisation. Protocols in this layer include X.25, Frame Relay, and ATM.

Layer 3: Network Layer

The network layer is responsible for managing network connections, congestion, and packet routing between a source and a destination. It receives service requests from the transport layer and sends service requests to the data-link layer. The main protocols in this layer are IP, ICMP, and IGMP.

Layer 4: Transport Layer

The main purpose of this layer is to provide reliable, error-free data delivery by performing error detection and correction. The layer ensures that no data is lost and that data is received as it was sent. It provides either connectionless or connection-oriented service through its two protocols, UDP and TCP, which are contrasted in the sketch after the lists below.

TCP

  • Sequenced
  • Connection oriented
  • Reliable delivery
  • Acknowledgements and windowing flow control

UDP

  • No sequencing
  • Connection-less
  • No reliable delivery
  • No acknowledgements and no windowing flow control
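
The contrast between the two transport protocols shows up directly in the socket API. The sketch below, using only Python's standard library on the loopback interface, sends one UDP datagram (no handshake, no delivery guarantee) and one TCP message (three-way handshake, sequenced and acknowledged delivery).

```python
# Contrast of the two transport-layer services using stdlib sockets:
# UDP just sends a datagram; TCP needs an established connection.
import socket

# --- UDP: connectionless, unsequenced, no delivery guarantee ---
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                 # OS picks a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", receiver.getsockname())   # fire and forget
print(receiver.recvfrom(1024))                  # (b'datagram', (...))

# --- TCP: connection-oriented, sequenced, acknowledged ---
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
client = socket.create_connection(listener.getsockname())  # 3-way handshake
conn, _ = listener.accept()
client.sendall(b"stream")
print(conn.recv(1024))                          # b'stream'

for s in (receiver, sender, client, conn, listener):
    s.close()
```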

Layer 5: Session Layer

The main purpose of this layer is to establish and terminate sessions. It sets up and tears down connections between two or more processes and manages communication between hosts. If there is a login or password validation, this layer is responsible for the validation process. A check-pointing mechanism is also provided by this layer: if an error occurs, data is retransmitted from the last checkpoint. Protocols associated with this layer include NetBIOS, SOCKS, and SAP.

Layer 6: Presentation Layer

This layer is responsible for data manipulation, compression and decompression, and for managing how data is presented. It receives service requests from the application layer and sends service requests to the session layer, and it is concerned with the syntax and semantics of the data in transmission. Data encryption and decryption (cryptography) provide security at this layer. Formats and standards associated with this layer include ASCII, EBCDIC, MIDI, MPEG, and JPEG.

Layer 7: Application Layer

This layer provides interaction with the end user and offers services such as file and email transfer. It sends service requests to the presentation layer. Protocols used at this layer include FTP, HTTP, SMTP, DNS, TFTP, NFS, and Telnet.

Network Protocols

  • Ethernet – transfers frames over Ethernet cable between physical locations.
  • Serial Line Internet Protocol (SLIP) – used for data encapsulation over serial lines.
  • Point-to-Point Protocol (PPP) – an improvement on SLIP; performs data encapsulation over serial lines.
  • Internet Protocol (IP) – provides routing, fragmentation, and reassembly of packets.
  • Internet Control Message Protocol (ICMP) – helps report and manage errors while sending packets between computers.
  • Address Resolution Protocol (ARP) – resolves an IP address to a physical (MAC) address.
  • Transmission Control Protocol (TCP) – provides connection-oriented, reliable delivery of packets.
  • User Datagram Protocol (UDP) – provides connectionless, unreliable delivery of packets.
  • Domain Name System (DNS) – resolves a domain name to its IP address.
  • Dynamic Host Configuration Protocol (DHCP) – manages and assigns IP addresses on a network.
  • Internet Group Management Protocol (IGMP) – supports multicasting.
  • Simple Network Management Protocol (SNMP) – manages network elements based on the data they send and receive.
  • Routing Information Protocol (RIP) – used by routers to exchange routing information in an internetwork.
  • File Transfer Protocol (FTP) – the standard protocol for transferring files between hosts over a TCP-based network.
  • Simple Mail Transfer Protocol (SMTP) – the standard protocol for transferring mail between servers.
  • Hypertext Transfer Protocol (HTTP) – the standard protocol for transferring documents on the World Wide Web.
  • Telnet – a protocol for accessing remote computers.
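
As a quick illustration of a few of the protocols listed above working together, the sketch below resolves a name with DNS and then fetches a document with HTTP over TCP, using only Python's standard library. It assumes outbound network access; example.com is a domain reserved for documentation.

```python
# A few of the listed protocols exercised from Python's standard library.
import socket
import http.client

# DNS: resolve a domain name to an IP address
addrinfo = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
print("DNS answer:", addrinfo[0][4][0])

# HTTP over TCP: request a document from a web server
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print("HTTP status:", response.status)   # e.g. 200
conn.close()
```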

Figure 5 shows the TCP/IP architecture


Layer 1: Network Access Layer

This layer is responsible for placing TCP/IP packets onto the medium and receiving packets off the medium. It controls the hardware and network devices used in the network, and it combines the physical and data-link layers of the OSI model.

Layer 2: Internet Layer

This layer functions as the network layer of the OSI model, performing routing, logical addressing, and packet fragmentation and reassembly (Donahoo & Calvert, 2009).

Layer 3: Transport Layer

This layer has the same functions as the transport layer of the OSI model: its main role is to provide reliable, error-free data delivery. It receives service requests from the application layer and sends service requests to the internet layer.

Layer 4: Application Layer

This layer contains the applications through which the user interacts with the network. It combines the application, presentation, and session layers of the OSI model.

TCP/IP Commands Used To Troubleshoot Network Problems

There are many TCP/IP commands that can be used to show where communication breaks down. They include PING, TRACERT, ARP, IPCONFIG, NETSTAT, ROUTE, HOSTNAME, NBTSTAT, and NETSH.

  • HOSTNAME – displays the host name of the computer.
  • ARP – views and edits the ARP cache.
  • PING – sends ICMP echo requests to test the reachability of a host.
  • Event Viewer (a Windows tool rather than a TCP/IP command) – records errors and system events.
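
Such commands can also be scripted. The sketch below wraps the PING command with Python's subprocess module to test the reachability of a host; the gateway address is hypothetical, and the count flag differs between Windows (-n) and Unix-like systems (-c).

```python
# Scripted reachability check wrapping the PING command.
import platform
import subprocess

def ping(host: str) -> bool:
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0    # 0 means an echo reply came back

print("gateway reachable:", ping("192.168.1.1"))  # hypothetical gateway IP
```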

References

Donahoo, M. J., & Calvert, K. L. (2009). TCP/IP sockets in C: Practical guide for programmers. Amsterdam: Morgan Kaufmann.

Fall, K. R., & Stevens, W. R. (2012). TCP/IP Illustrated. Upper Saddle River, NJ: Addison-Wesley.

FitzGerald, J., & Dennis, A. (2009). Business Data Communications and Networking. Hoboken, NJ: John Wiley.

Leiden, C., & Wilensky, M. (2009). TCP/IP For Dummies. Hoboken, NJ: Wiley.

Liu, D. (2009). Next Generation SSH2 Implementation: Securing Data in Motion. Burlington, MA: Syngress.

Odom, W. (2004). Computer Networking First-Step. Indianapolis, IN: Cisco Press.

Ouellet, E., Padjen, R., Pfund, A., Fuller, R., & Blankenship, T. (2002). Building a Cisco Wireless LAN. Rockland, MA: Syngress.


Cloud Based Intrusion Detection Systems

Managing Cloud Based Intrusion Detection Systems (IDSs) in Large Organizations

Intrusion Detection Systems (IDSs) are becoming a key priority for securing organizations' IT resources from potential damage. However, organizations experience a number of challenges during IDS deployment. The preliminary challenges involve product selection according to organizational requirements and goals, followed by IDS installation. IDS installations frequently fail due to resource conflicts and the lack of expertise necessary for a successful installation. The post-installation phases of IDS involve a number of challenges associated with proper configuration and tuning, which require advanced skills and support. Organizations can overcome many obstacles of product installation and IDS configuration by maintaining a test-bed and deploying in phases. Once an IDS is operational, its data undergo various levels of analysis and correlation. To perform data analysis tasks, administrators require advanced programming and networking skills and an in-depth knowledge of the organizational network, security, and information architecture. Sometimes large organizations need to correlate data from multiple IDS products; one potential solution is the use of SIEM (Security Information and Event Management) software. Organizations also need to ensure the security and integrity of various IDS components and data. Threats to agent and data security can be overcome by maintaining a more autonomous design in the agent structure and incorporating appropriate formats, protocols, and cryptographic arrangements in the different phases of the data lifecycle. IDS products require ongoing human interaction for tuning, configuration, monitoring, and maintenance. Hence, organizations need to gather different levels of skills for the proper deployment and operation of IDS products.

Managing Cloud Based Intrusion Detection Systems in Large Organizations

Intrusion detection is the surveillance of computer hosts and associated networks through observing various events and identifying signs of unauthorized and unprivileged access and other anomalous activities that can compromise the confidentiality, availability, and integrity of the system (Singh, Gupta, & Kumar, 2011; Sundaram, 1996; Lasheng & Chantal, 2009). With the rising number of malicious attacks on organizational information networks, intrusion detection and security incident response have been key priorities of organizational security architecture since the widespread industrial adoption of networks in the 1990s (Yee, 2003). Today, the placement of a dedicated intrusion detection system (IDS) in the organizational IT system is one of the important considerations for organizations (Werlinger et al., 2008). The aim of an intrusion detection system is to ensure adequate privacy and security of the information architecture and to save IT resources from potential damage from various internal and external threats (Scarfone & Mell, 2007). Intrusion detection systems (IDSs) monitor and record activities or events in the computer and network environment and then analyze them to identify intrusions.

With the industry's widespread adoption, intrusion detection systems have become de facto security tools in the corporate world. Major organizations and governmental institutions have already deployed, or are on the verge of deploying, IDSs to secure their corporate networks. However, the deployment of an IDS, particularly in the distributed network of a large organization, is a non-trivial task. The complexity and the time required for installation depend upon the number of machines that need to be protected, the ways those machines are connected to the network, and the depth of surveillance the organization needs to achieve (Iheagwara, 2003; Innella, McMillan & Trout, 2002). As a result, large organizations need elaborate planning during the different phases of IDS deployment, including product evaluation and testing, suitable placement of IDS agents and managers, configuration of IDS components, and integration of IDSs with other surveillance products (Bye, Camtepe, & Albayrak, 2010; Bace & Mell, 2001). The aim of this paper is to discuss various challenges associated with IDS deployment in the large-scale distributed networks of big corporations. Particular emphasis is given to the challenges associated with the management of agents, the collection of agent data, and the correlation of IDS data to identify possible intrusions in large-scale distributed networks. The paper will also discuss various "real-world" encounters during different stages of IDS deployment, such as product evaluation, IDS installation and configuration, and management and ongoing operation, and will make recommendations to overcome those difficulties.

Why are Intrusion Detection Systems Required for Large Organizations?

Networks are ubiquitous in today's business landscape. Organizations harness network power to develop sophisticated information systems, to utilize distributed and secure data storage, and to provide valuable web-based customer services. Software vendors deliver their applications to end users through networks. Networks allow employees to gain remote access to their offices and organizational resources. This proliferation of network activity has flooded the internet with different classes of cyber threats, including different classes of hackers, rogue employees, and cyber terrorists. A significant number of these threats derive from competitor organizations seeking to exploit organizational resources or to disrupt productivity and competitive advantage. In recent years, the proliferation of heterogeneous computer networks, including a vast number of cloud networks, has increased the amount of invasive activity. Today, cloud based e-commerce sites and business services are major targets of attackers, and the damaging costs resulting from cyber-attacks are substantial. Traditional prevention techniques, such as secured authentication, data encryption, and various software and hardware firewalls, are often inadequate to prevent these threats (Rao, Pal, & Patra, 2009; Anderson, Frivold, & Valdes, 1995). Various kinds of system vulnerabilities are unavoidable, typical features of computer and network systems, and intruders frequently search for weaknesses in defensive products, such as a subtle weak point in the firewall configuration or a loosely defined authentication mechanism. Hence, investment in an intrusion detection system as a second line defense mechanism within an organization's security architecture can increase the overall security posture of the system.

Overview of IDS

General Architecture

A distributed agent-based architecture consists of two main components: i) IDS agents and ii) the management server (Beg, Naru, Ashraf, & Mohsin, 2010). An agent is a software entity that perceives different aspects of its location (networks and hosts) and is capable of acting on its own according to the supplied protocols (Boudaoud et al., 2000; Mell et al., 1999). IDS agents work independently (Brahmi, Yahia, & Poncelet, 2011), interact with central management servers, follow protocols according to the system's requirements, and collaborate with other agents in an intelligent manner (Lasheng & Chantal, 2009). The management server is the cornerstone of an IDS that facilitates centralized management of IDS components. This includes tuning, configuration, and control of distributed agents; aggregation and storage of data sent by various agents; correlation of distributed data to identify intrusions; and the generation of alerts (Chatzigiannakis et al., 2004). The central node also performs any updates and upgrades of the system (Chatzigiannakis et al., 2004). In the case of a mobile agent based distributed IDS, the management server is also responsible for dispatching agents and maintaining communication with them. The difference between a normal and a distributed agent based Intrusion Detection System is that in a distributed IDS, a significant part of the analysis tasks is performed by the agents situated across the network. The agents maintain a flat architectural structure, communicating only the main results to the central server, as opposed to sending all data to the central node through a hierarchical structure.
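
A minimal sketch of that division of labour is shown below: an agent analyses events locally and forwards only significant findings to the management server as a structured text message. The field names and severity scale are illustrative assumptions, not taken from any particular IDS product.

```python
# Sketch of the agent-to-manager contract described above: agents
# analyse events locally and forward only significant findings.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    agent_id: str        # which sensor produced the result
    severity: int        # 1 (informational) .. 5 (critical)
    category: str        # e.g. "port-scan", "policy-violation"
    source_ip: str
    timestamp: float

def report_if_significant(finding: Finding, threshold: int = 3) -> str | None:
    """Serialize a finding for the management server, or drop it locally."""
    if finding.severity < threshold:
        return None                      # local analysis: not worth forwarding
    return json.dumps(asdict(finding))   # what actually crosses the network

msg = report_if_significant(
    Finding("agent-07", 4, "port-scan", "198.51.100.23", time.time()))
print(msg)
```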

IDSs Classification

Based on the locations on which IDS agents are distributed, IDSs can be categorized into two broad classes: host based and network based.

Host based IDS. A host intrusion detection system (HIDS) is installed and run on an individual host, where it investigates all inbound and outbound packets associated with that host to identify intrusions (Singh, Gupta, & Kumar, 2011; Neelima & Prasanna, 2013). Besides network packets, HIDSs also monitor various system data, such as event logs, operating system processes, file system integrity, and unusual changes to various configuration settings (Scarfone & Mell, 2007; Bace & Mell, 2001; Kittel, 2010). The architecture of a host based IDS is very straightforward: the detection agents are installed on the hosts, and the agents communicate over the existing organizational network (Scarfone & Mell, 2007). The event data are transmitted to the management server and are manipulated through a console or command line interface (Scarfone & Mell, 2007; Ghosh & Sen, 2005). Host-based IDSs have greater analysis capabilities due to the availability of dedicated resources to the IDS (i.e., processing, storage, etc.) and hence work with a greater degree of accuracy (Bace & Mell, 2001; Garfinkel & Rosenblum, 2003). However, HIDSs have some limitations. Installation, configuration, and maintenance of the IDS must be performed on each host individually, which is extremely time consuming (Scarfone & Mell, 2007; Bace & Mell, 2001). HIDSs are also vulnerable themselves due to their poor real-time responses (Bace & Mell, 2001; Kozushko, 2003). However, host based IDSs are excellent choices for identifying long term attacks (Kozushko, 2003).
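
One of the HIDS duties named above, file system integrity monitoring, can be sketched with a hash-based baseline: record a digest for each watched file, then report any file whose current digest differs. This is a minimal illustration; the watched paths are examples, and production HIDSs track far more attributes (permissions, owners, timestamps).

```python
# Sketch of one HIDS task mentioned above: detecting unusual changes
# by comparing file hashes against a stored baseline.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths: list[Path]) -> dict[str, str]:
    """Record a digest for every watched file."""
    return {str(p): sha256_of(p) for p in paths}

def changed_files(baseline: dict[str, str]) -> list[str]:
    """Return files whose current hash no longer matches the baseline."""
    return [p for p, digest in baseline.items()
            if sha256_of(Path(p)) != digest]

# Usage (paths are illustrative):
# baseline = build_baseline([Path("/etc/passwd"), Path("/etc/hosts")])
# ... later ...
# alerts = changed_files(baseline)
```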


Network based IDS. Network based IDSs identify intrusions by analyzing the traffic of a dedicated organizational network in order to secure the associated hosts from malicious attacks (Bace & Mell, 2001). Instead of investigating various activities within the hosts, network based IDSs focus only on the packet streams that travel through the network. They investigate network, transport, and application protocols and various network activities, such as port scanning, connection status, and port access, to determine attacks. In a network based IDS, multiple sensors or agents are placed at various strategic points on the network (Singh, Gupta, & Kumar, 2011), where each guards a particular segment of the network (Scarfone & Mell, 2007), performs local analysis of traffic with the associated hosts, and communicates the results to the central management server. The results from various agents are coordinated to identify planned distributed attacks within the organizational network (Bace & Mell, 2001). Network based IDSs are faster to implement and more secure than host-based IDSs. However, there are some disadvantages of NIDSs. One of them is the frequent dropping of packets, which normally occurs on a network with high traffic density or during periods of high network activity (Bace & Mell, 2001; Chatzigiannakis et al., 2004). Network based IDSs are unable to process encrypted information, which is a major drawback in monitoring virtual machine hosts (Bace & Mell, 2001). Network based IDSs only identify signs of attacks but cannot ensure whether the target host is infected (Bace & Mell, 2001), and thus manual investigation of the host is necessary to trace and confirm associated attacks.
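
One heuristic a network sensor might apply to the port scanning activity mentioned above is counting the distinct destination ports touched by each source address. The sketch below is a simplified illustration; the threshold is an assumed value, and real NIDSs also weigh timing, TCP flags, and other context.

```python
# Sketch of a port-scan heuristic a network sensor might apply:
# flag a source address that touches many distinct destination ports.
from collections import defaultdict

PORT_SCAN_THRESHOLD = 20    # illustrative cutoff

ports_seen: dict[str, set[int]] = defaultdict(set)

def observe(src_ip: str, dst_port: int) -> bool:
    """Record one packet; return True if the source now looks like a scanner."""
    ports_seen[src_ip].add(dst_port)
    return len(ports_seen[src_ip]) > PORT_SCAN_THRESHOLD

for port in range(1, 30):                       # simulated packet stream
    if observe("198.51.100.7", port):
        print(f"possible port scan from 198.51.100.7 (port {port})")
        break
```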

IDS Classification According to the Detection Approaches

According to the mechanism of or approaches to intrusion detection, IDSs can be classified further into two categories: i) anomaly based detection system and ii) misuse or signature based detection system.

Anomaly based detection system. An anomaly detection system is based on the principle that all intrusions are linked with some deviation from normal behavioral patterns (Maciá-Pérez et al., 2011; Ghosh & Sen, 2005; Abraham & Thomas, 2005; Singh, Gupta, & Kumar, 2011). It identifies intrusions by comparing the patterns of suspicious events against the observed behavioral patterns of the monitored system (Beg, Naru, Ashraf, & Mohsin, 2010). Anomaly detection programs collect historical data from the system and construct individual profiles that represent normal patterns of host and network utilization (Bace & Mell, 2001). The constructed database, along with an appropriate algorithm, is used to verify the consistency of the network packets. Anomaly detection agents are preferable in that they can detect attacks that were previously completely unknown (Beg, Naru, Ashraf, & Mohsin, 2010; Kozushko, 2003). However, the false-positive rates generated by the agents are very high (Ghosh & Sen, 2005; Brahmi et al., 2012), and intruders may disguise themselves by mimicking acceptable behavioral patterns (Ghosh & Sen, 2005).
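
A minimal illustration of the anomaly principle: build a statistical profile of normal utilisation from historical observations, then flag measurements that deviate too far from it. The metric, history values, and the three-sigma cutoff below are all assumptions for the sake of the sketch.

```python
# Minimal anomaly-detection sketch: build a profile of "normal"
# utilisation from history, then flag large deviations from it.
import statistics

history = [102, 98, 110, 95, 105, 99, 101, 97]   # e.g. connections per minute
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(observation: float, k: float = 3.0) -> bool:
    """Flag observations more than k standard deviations from the mean."""
    return abs(observation - mean) > k * stdev

print(is_anomalous(104))   # False: within the normal profile
print(is_anomalous(450))   # True: likely anomaly, possible intrusion
```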

Misuse or signature based detection system. Misuse detection approaches depend upon the records of existing system vulnerabilities and known attack patterns (Abraham & Thomas, 2005). Misuse detection systems generate fewer false positives compared to the anomaly detection systems (Ghosh & Sen, 2005; Faysel & Haque, 2010). They are also easy to operate and require minimum human interventions. However, misuse detection techniques are vulnerable to new attacks that have no known signature or matching pattern (Brahmi, Yahia, & Poncelet, 2011; Ghosh & Sen, 2005). So, the signature database of a misuse detection system needs to be frequently updated to recognize the most recent attacks (Scarfone & Mell, 2007). 
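
A misuse detector's core loop can be sketched as pattern matching against a signature database, as below. The two signatures are illustrative stand-ins; note how a payload with no matching pattern is silently passed, which is exactly the blind spot toward novel attacks described above.

```python
# Minimal misuse-detection sketch: match traffic against a database
# of known attack signatures (the patterns here are illustrative).
import re

SIGNATURES = {
    "sql-injection": re.compile(rb"(?i)union\s+select"),
    "path-traversal": re.compile(rb"\.\./\.\./"),
}

def match_signatures(payload: bytes) -> list[str]:
    """Return the names of all signatures that match the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(match_signatures(b"GET /?id=1 UNION SELECT password FROM users"))
# ['sql-injection']
print(match_signatures(b"GET /index.html"))   # [] -- unknown attacks pass
```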

IDS Design and Development Challenges

Challenges in Managing Intrusion in Distributed Network

In recent years, security concerns have been shifting from hosts to networks due to the proliferation of internet based services, distributed work environments, and heterogeneous networks. The majority of current IDS vendors are adopting network based and distributed approaches to security in their products (Suryawanshi et al., n.d.). However, there are a number of limitations to most distributed IDSs. Firstly, the monitoring agents on the distributed hosts and network send event data to centralized controller components (Suryawanshi et al., n.d.; Brahmi, Yahia, & Poncelet, 2011; Kannadiga & Zulkernine, 2005). Because data analysis is performed centrally, these systems are vulnerable to a single point of failure (Bye, Camtepe, & Albayrak, 2010; Zhai, Hu, & Weiming, 2014; Brahmi et al., 2012; Tolba et al., 2005; Araújo & Abdelouahab, 2012). Secondly, the architecture of these systems consists of a hierarchical tree-like structure with the main control system at the root level, sensor units at transient or leaf nodes, and information aggregation units at some internal nodes. Information collected from local nodes is aggregated at the root level to obtain a global view of the system (Brahmi et al., 2012). Large scale data transfer from transient nodes to the central controller unit during the aggregation process can create network overloads (Suryawanshi et al., n.d.).

This results in communication delays and an inability to detect large scale distributed attacks efficiently in real time (Brahmi et al., 2012). In order to overcome these limitations, recent IDSs incorporate various technologies supporting agent-based data analysis and intrusion detection, where agents perform most analysis tasks and send only the important data to the centralized nodes directly through a flat communication structure. Multi-agent based distributed intrusion detection systems (DIDSs) are partly autonomous systems capable of self-configuring upon the changing contexts of networks and hosts and of disseminating their analytical capabilities to different corners of the network in a distributed manner (Gunawan et al., 2011; Tierney et al., 2001). Through adopting a hybrid approach, such as both network and host monitoring as well as implementing both anomaly and signature detection methods, distributed agents can coordinate the results of hosts and networks more accurately and perform more comprehensive intrusion detection (Abraham & Thomas, 2005; Brahmi et al., 2012).

Major Challenges with Distributed IDS

One of the most important challenges associated with distributed IDSs is the correct placement of agents (Sterne et al., 2005). A large number of misplaced agents can cause inefficiency, and therefore agent locations must be justified through proper investigation of the network topology, such as the characteristics of routers and switches, the number of hosts, etc. (Chatzigiannakis et al., 2004). Another major challenge is how the heterogeneous data from different sensors should be collected and analyzed to identify an attack (Chatzigiannakis et al., 2004; Debar & Wespi, 2001). Furthermore, being distributed in nature, agents are vulnerable to becoming compromised themselves. Agents need to follow a common communication protocol and transfer data to the centralized server securely without producing too much extra traffic (Chatzigiannakis et al., 2004). Agents' security and integrity are also largely maintained and ensured by the management server. Hence, securing the management server is an important task for the overall security of an IDS. Organizations should consider a dedicated server for the entire management host (Wotring, 2010), which will lower the number of accesses to the server and eventually reduce the exposure to vulnerability. Further restrictions to both physical and network access to the management server must be incorporated through proper authentication mechanisms and physical restrictions to the server areas. Sometimes the management server can be put behind a dedicated firewall to enhance its security status (Wotring, 2010; Brennan, 2002).

Deployment Challenges of IDS

Considerations before Deploying an IDS

Due to the various limitations of IDS products and a lack of skilled network security specialists in the market, IDS deployment in a large organizational network involves substantial challenges. A successful IDS deployment requires elaborate planning, requirement analysis, prototyping, testing, and training arrangements (Bace & Mell, 2001). A requirement analysis is conducted to prepare an IDS policy document that demonstrates the organization's structure and resources and reflects its IDS strategies, security policies, and goals (Bace & Mell, 2001). Before specifying organizational requirements, it should be borne in mind that an IDS is not a standalone security application; the main objective of an IDS is to monitor traffic on the organization's internal network in order to complement existing security controls (Werlinger et al., 2008).

Specifying system architecture. Before evaluating and selecting an IDS product, organizations should specify the important requirements for which they seek a potential IDS solution. To accomplish this, organizations may plan and document the important properties of their system, such as: i) system and network characteristics; ii) a network architecture diagram; iii) technical specifications of the IT environment, including the operating systems, typical services, and the applications running on various hosts; iv) technical specifications of the security structure, including existing IDSs, firewalls, antivirus tools, and various hardware appliances; and v) existing network communication protocols (Scarfone & Mell, 2007; Brandao et al., 2006). These considerations will help organizations determine which type of IDS is necessary to give optimal protection to their systems.

Specifying goals. Once the system architecture and general requirements of the system are documented, the next step is to specify the technical, operational, and business related security goals that the organization wants to achieve by implementing an IDS (Bace & Mell, 2001; Scarfone & Mell, 2007). Some of these goals may be: i) guarding the network from particular threats; ii) preventing unprivileged accesses; iii) protecting important organizational assets; iv) exerting managerial controls over the network; and v) preventing violations of security or IT policies through observing and recording suspicious network activities (Scarfone & Mell, 2007). Some security requirements may have implications for organizational culture; for example, an organization that maintains a high degree of formalization in its culture may look for IDSs suitable for various formal policy configurations and extensive reporting capabilities regarding policy violations (Bace & Mell, 2001). A few security goals may derive from external requirements that the organization needs to achieve, such as legal requirements for the protection of public information, audit requirements for security practices, or any accreditation requirement (Bace & Mell, 2001). There may also be industry specific requirements, and organizations need to ensure that the proposed IDS can meet those (Bace & Mell, 2001).

Specifying constraints. IDSs are typically resource-intensive applications that need substantial organizational commitment. The most important constraints that organizations need to take into account are the budgetary considerations for the acquisition of software and hardware, for infrastructure development, and for ongoing operation and maintenance. Organizations should also identify IDSs' functional requirements and the skill requirements for users to operate them effectively (Bace & Mell, 2001). Organizations that cannot commit substantial human resources to IDS monitoring and maintenance activities should choose an IDS that is more automated and requires little staff time (Scarfone & Mell, 2007).

Product Evaluation Challenges

The evaluation of an IDS product is the most challenging aspect on which the success of intrusion detection depends. Today there are a range of commercial and public domain products available for deployment (McHugh, Christie, & Allen, 2000). Each product has distinct drawbacks and advantages. While some products work well in particular types of organizational network, some IDSs may not produce desired results in certain industrial settings. In order to overcome these challenges, organizations must evaluate an IDS product in terms of their system resources and protection requirements (McHugh, Christie, & Allen, 2000). Vendor-specific information, product manuals, whitepapers, third-party reviews, and information from other trusted sources can be valuable resources during the product evaluation (Scarfone & Mell, 2007). IDSs’ detection accuracy, usability, life cycle costs, vendor supports, etc. are some of the most critical aspects during product evaluation. Other features that must be taken into account are security, interoperability, scalability and reporting capabilities (McHugh, Christie, & Allen, 2000; Scarfone & Mell, 2007).

Product performance. The performance of an IDS is the measure of its event processing speed (Debar, Dacier, & Wespi, 1999). The performance of IDS products deserves a very high degree of attention, as anomalous or suspicious events must be detected in real time and reported as soon as possible to minimize damage (Mell et al., 1999; Scarfone & Mell, 2007). Network based IDSs normally suffer from performance problems, particularly where they have to monitor the heavy traffic associated with many hosts in a distributed network (McHugh, Christie, & Allen, 2000). The performance of an IDS also largely depends on extensive configuration and fine-grained tuning according to the network architecture (Scarfone & Mell, 2007), so testing IDSs with default settings may not represent the real performance of the product. All of this makes the evaluation of product performance extremely challenging. In addition, IDSs with more robust detection capabilities consume more processing and storage, which can cause performance loss (Scarfone & Mell, 2007; Yee, 2003). Hence, a scalability feature that allows an IDS to dynamically allocate processing power and storage can be one of the important performance evaluation criteria (Mell et al., 1999).

Security considerations. During the evaluation of an IDS product, the various technologies and features associated with product security must be taken into account, such as protection of stored data, protection of transmitted data during communication between IDS components, authentication and access control mechanisms, and IDS hardening features after product installation (Scarfone & Mell, 2007). Organizations need to identify whether the IDS is resistant to external modification (Kittel, 2010). This can be accomplished by checking various features, such as the level of isolation (in the case of a VMI based IDS) (Kittel, 2010), cryptographic arrangements during inter-agent communication (Mell et al., 1999), and isolated monitoring features (Kourai & Chiba, 2005).

Interoperability, scalability, and reporting features. Interoperability is one of the key challenges for security specialists who aim to develop sophisticated enterprise security architecture incorporating the industry's leading tools (Yee, 2003). Through interoperability features, IDSs from various platforms are able to correlate their results and effectively communicate data with firewalls and security management tools to enhance the overall surveillance status of the system (Yee, 2003; Scarfone & Mell, 2007). While the interoperability feature provides IDSs with the capability to integrate their strengths across multiple security products, the scalability feature helps to incorporate more capabilities within a single IDS product as the organizational requirements grow. For large organizations, IDSs must be able to dynamically allocate processing and storage or be able to implement more agents and IDS components as demands grow (Mell et al., 1999). The number of agents implementable in a single management server and the number of management servers in a particular deployment may reflect an IDS's scalable capacity (Scarfone & Mell, 2007). Another feature that reflects more of the usability than the functionality of an IDS is its reporting capabilities. Technical IDS data need to be presented in a comprehensible format to corporate users with various skill levels (Werlinger et al., 2008). The reporting functionalities help tailor and present data in the ways users intend and find convenient. IDSs should facilitate a comparative view of various states over time, such as before and after the implementation of major changes to the configuration (Werlinger et al., 2008).

IDS maintenance and product supports. Because maintenance activities impose substantial overheads in operating IDSs, organizations should give various maintenance considerations important priority during IDS product selection. These include the requirement for independent versus centralized management of agents; considerations of various local and remote maintenance mechanisms, such as host based GUIs, web-based consoles, and command line interfaces; security protections during various maintenance activities, such as securely transmitting, storing, and backing up IDS data; ease of restoration of various configuration settings; and ease of log file maintenance (Scarfone & Mell, 2007).

Organizations require various levels of support and should identify vendors' ability to provide active support according to the requirements during the various stages of installation and configuration (Bace & Mell, 2001; Scarfone & Mell, 2007). Apart from on-demand and direct supports, organizations should check whether the vendors maintain users' groups, mailing lists, forums, and similar categories of support free of cost (Scarfone & Mell, 2007). The quality and availability of various electronic and paper based support documents, such as installation guides, users' manuals, and policy recommendation principles and guidelines, are some of the typical features on which an IDS product can be judged to a considerable extent (Scarfone & Mell, 2007). Organizations also need to carefully evaluate the various costs associated with the support structure (Bace & Mell, 2001). A significant part of an IDS's costs normally derives from the hidden costs associated with professional support services during IDS implementation and maintenance, including the training costs for both the administrators and IDS users (Yee, 2003; Bace & Mell, 2001). Organizations also need to recognize the costs of updates and upgrades if they are not free (Bace & Mell, 2001). In addition, the vendors' capabilities to frequently release updates and patches, their capabilities to release updates in a timely manner in response to new threats, the convenience of collecting each update, the available means to verify the authenticity and integrity of individual updates, and the effects of each update and upgrade on existing configurations of the IDS also need to be considered (Scarfone & Mell, 2007).

IDS Installation and Deployment Challenges

The biggest hurdle of IDSs is associated with the installation of the software (Werlinger et al., 2008). IDS installations require the involvement of security specialists with a broad knowledge of IT, network security, and protocols and an in-depth understanding of the organizational structure, resources, and goals (Werlinger et al., 2008; McHugh, Christie, & Allen, 2000). Unlike other security product installations, an IDS installation is a time consuming and complex process, and administrators face plenty of issues during the installation period. For example, the entire installation may crash midway, or the IDS may produce inconsistent error messages that are difficult to deal with (Werlinger et al., 2008). For these reasons, careful documentation of problems and installation information (e.g., various parameters and settings) is necessary during installation, which can save valuable time and resources over the long run (Innella, McMillan & Trout, 2002). The amount of tasks and effort necessary to install an IDS on a specific network can be daunting and overwhelming (Werlinger et al., 2008). Hence, the availability of automated features in Intrusion Detection Systems, such as automatic discovery of network devices, faster and more automated tuning options, and quick configuration support through grouping related parameters, can overcome the challenges of performing those tasks manually (Werlinger et al., 2008).

Organizations should consider testing IDSs in a simulated environment before placing them on the actual network to overcome the various challenges associated with a large and complex network (Werlinger et al., 2008; Scarfone & Mell, 2007). Some of these challenges are: i) the IDS software or network may crash during installation or testing periods due to resource conflicts within various parts of the network (Scarfone & Mell, 2007); ii) IDS installation may alter the network characteristics undesirably; or iii) problems during the installation may make the network temporarily unavailable. Organizations also need to consider a multi-phased installation, primarily selecting a small part of the network with a limited number of hosts, or initially activating only a few sensors or agents (Scarfone & Mell, 2007). Both test-bed and multi-phased installations will help security specialists gain valuable insights through planning and rehearsal processes. This can help them cope with various installation, scalability, and configuration related problems (Scarfone & Mell, 2007), such as tuning and configuring properly to get rid of large numbers of false alarms, or efficiently dealing with heavy traffic on a busy network (Werlinger et al., 2008). Depending on the IDS technologies and the system's characteristics, IDSs require different levels of ongoing human interaction and dedication of resources (Bace & Mell, 2001). A multi-phased installation will help to justify the human resources and time that an organization needs to commit (Bace & Mell, 2001).

Configuring and Validating IDS

IDS configuration challenges. Whether an IDS will perform as an effective surveillance tool for an organization relies upon the informed justification of various configuration and tuning options and the dedication of resources based upon the IDS's requirements (Werlinger et al., 2008). The administrators require an in-depth knowledge of organizational missions, organizational processes, and existing IT services during the configuration process (Werlinger et al., 2008). This knowledge is necessary to adapt the IDS to the system structure, users' behavior, and network traffic patterns, which will subsequently help to reduce the false positives generated by the IDS (Werlinger et al., 2008). Initially, these challenges can be overcome during installation through the collaboration of experts or security specialists administering different areas of the network and servers within the distributed network (Werlinger et al., 2008). Organizations should follow their existing security policies when configuring the various features of IDSs, which may help them to recognize various policy violations (Bace & Mell, 2001). The following are the most important considerations that need to be ensured during IDS configuration.

  1. Justifying the placement of agents to guard mission-critical assets (McHugh, Christie, & Allen, 2000);
  2. Aligning IDS configurations with organizational security policies (McHugh, Christie, & Allen, 2000);
  3. Installing the most up-to-date signatures and updates during the initial stages of installation (McHugh, Christie, & Allen, 2000);
  4. Creating users' accounts and assigning roles and responsibilities (McHugh, Christie, & Allen, 2000);
  5. Customizing filters to generate appropriate levels of alerts;
  6. Determining the IDS's alert handling procedures and correlating alerts with other IDSs (if any exist), existing firewalls, and the system or application logs (McHugh, Christie, & Allen, 2000).

The interoperability features of IDSs and the use of common alert formats allow the administrators to integrate data and alerts (McHugh, Christie, & Allen, 2000).

Security hardening and policy enforcement. Sometimes IDSs may be the attackers' primary targets, and security hardening is necessary to ensure IDSs' safety (Scarfone & Mell, 2007). The important tasks during security hardening involve: i) hardening IDSs by implementing the latest patches and signature updates immediately after installation; ii) creating separate users' accounts for general users and administrators with the appropriate levels of privileges (Scarfone & Mell, 2007); iii) controlling access to various firewalls, routers, and packet filtering devices; and iv) securing IDS communication by implementing suitable encryption technology (Scarfone & Mell, 2007).

Ongoing Operation and Maintenance Challenges

Monitoring, operation, and maintenance of distributed IDSs are normally conducted remotely through the management console or GUI (i.e., menus or options). In addition, command line interfaces may facilitate local management of IDS components. The ongoing operation and maintenance of IDSs are substantial challenges for organizations, requiring basic knowledge of system and network administration, information security policies, various IDS principles, organizations' security policies, and incident response guidelines (Scarfone & Mell, 2007). Sometimes more advanced skills are required, such as data manipulation skills (e.g., report generation) and programming skills (e.g., code customization). The most important operation and maintenance activities are:

  1. performing monitoring, analysis, and reporting activities;
  2. managing IDSs for appropriate level of protections, such as re-configuring IDS components with the necessary changes to the network, applying updates, etc.; and
  3. managing skills for ongoing operation and maintenance (Scarfone & Mell, 2007).

Monitoring, analysis and reporting. Successful monitoring of IDSs involves monitoring network traffic and properly recognizing suspicious behavior. The important tasks during ongoing monitoring include: i) monitoring various IDS components to ensure security (Scarfone & Mell, 2007); ii) monitoring and verifying different operations, such as event processing and alert generation (Scarfone & Mell, 2007); and iii) periodic vulnerability assessments. IDS vulnerability assessments are conducted through appropriate levels of analysis by incorporating various IDS features and tools and by correlating agents' data (Scarfone & Mell, 2007). For ease of monitoring, IDSs need to generate reports in readable formats, which is done through various levels of customization of views (Scarfone & Mell, 2007). Because monitoring and maintenance involve substantial human intervention, they can consume a lot of staff time and resources. Organizations can overcome these challenges in two major ways: i) customizing and automating tasks to enhance control over maintenance activities (Scarfone & Mell, 2007) and ii) incorporating smart sensors that work autonomously in the network to analyze the traffic and recognize trends and patterns (Scarfone & Mell, 2007).

Applying updates. Regular IDS updates need to be applied in order to achieve appropriate protection for both the IDS and the system. Security officials need to check vendors' notifications of security information and updates periodically and apply them as soon as they are released (Scarfone & Mell, 2007). Both software updates and signature updates are important for IDS security and proper functioning. A software update provides bug fixes and new features to the various components of an IDS product, including sensors or agents, management servers, and consoles (Scarfone & Mell, 2007). A signature update enhances an IDS's detection capabilities through updating configuration data. Hackers can alter the code of updates, so verifying the checksum of each update is crucial before applying it (Scarfone & Mell, 2007; Mell et al., 1999; Hegarty et al., 2009). Apart from software updates, organizations need to justify the positioning of IDS agents and components and ensure their optimal placement by periodically reviewing the network configurations and changes (McHugh, Christie, & Allen, 2000).
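
The checksum verification step mentioned above can be automated as in the sketch below: compute the update file's SHA-256 digest and refuse the update unless it matches the vendor-published value. The file name and digest in the usage comment are hypothetical.

```python
# Sketch of the checksum verification step described above: compare
# a downloaded update's digest against the vendor-published value.
import hashlib
from pathlib import Path

def verify_update(update_file: Path, published_sha256: str) -> bool:
    """Refuse the update unless its SHA-256 digest matches the vendor's."""
    digest = hashlib.sha256(update_file.read_bytes()).hexdigest()
    return digest == published_sha256.lower()

# Usage (file name and digest are hypothetical):
# ok = verify_update(Path("ids-signatures-2024.bin"), "9f86d081884c7d65...")
# if not ok:
#     raise RuntimeError("update rejected: checksum mismatch")
```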

Retaining existing IDS configurations is a vital consideration before applying an update. Usually, normal updates will not change existing IDS configurations, but IDS code that administrators have tailored and customized to incorporate desirable functionality may be altered during code updates. Administrators should therefore save and back up both customized code and configuration settings before applying updates (Scarfone & Mell, 2007). Hastily applying updates to the IDS system or its components also poses certain challenges: new signatures or detection capabilities can cause a sudden flood of alerts (Scarfone & Mell, 2007). To detect and overcome problematic signatures from the updates, administrators should test signature and software updates on a smaller scale or within a specific host or agent (Scarfone & Mell, 2007).

Generating skills. The ongoing operation and maintenance of IDSs and the appropriate utilization of IDS data require security officials with a set of skills and knowledge. The security teams of many organizations are unable to customize or tune IDS products based on the IDS data in their own networks within a reasonable time frame (Werlinger et al., 2008). To ensure the effective manipulation of IDSs at both the user and administrator levels, organizations must consider providing training to all stakeholders involved in IDS operations. This includes acquiring skills in general IDS principles, operating consoles, customizing and tuning IDS components, and generating reports (Scarfone & Mell, 2007). Organizations should take the available training options into consideration according to users' needs and conveniences, such as online training, CBT, instructor-led training, lab practice, and hands-on exercises (Scarfone & Mell, 2007). Organizations may also utilize various information resources (Scarfone & Mell, 2007), such as electronic and paper based documents (e.g., installation guides, users' manuals, and policy recommendation principles and guidelines), to generate the skills required during installation and maintenance activities (Scarfone & Mell, 2007).

Managing Distributed Intrusion Detection System Agents

Managing Agents in a Distributed Environment

Different distributed IDS architectures consist of varieties of role-based agents, such as sniffer, filter, misuse detection, anomaly detection, rule mining, and reporter agents (Scarfone & Mell, 2007; Anderson, Frivold & Valdes, 1995). The distribution of intrusion detection tasks among agents substantially reduces an IDS's operational load and increases performance. However, one challenge associated with distributed IDSs is the management of a large number of agents. The IDS agents of many global companies sit in different geographical regions (Innella, McMillan & Trout, 2002). To optimize IDS performance and save valuable resources, large organizations need to weigh the options of centralized versus distributed management of agents (Innella, McMillan & Trout, 2002). If the management of an IDS does not involve several administrators or a hierarchical structure, a centralized approach to IDS management can provide a number of benefits over distributed management (Innella, McMillan & Trout, 2002). First, it simplifies the network structure and reduces the vulnerability points by reducing the requirement for multiple agents and sensors. Second, the simplified structure will reduce the management costs and other overheads (Innella, McMillan & Trout, 2002). Overall, it reduces network data transportation costs by minimizing the travel of agent data to multiple IDS managers. Organizations should choose the most efficient approach to data collection, and centralized management can help administrators coordinate multiple IDSs or agents efficiently through a smooth and uncluttered network (Innella, McMillan & Trout, 2002).

Another challenge of managing distributed agents is ensuring agents' integrity. Hosts must ensure that agents are free of malicious code before permitting them to operate on the platform. This is done by signing agents' code, i.e., incorporating valid certificates against which the hosts check the integrity of an agent (Krugel & Toth). Agents are also vulnerable to modification during transmission (Krugel & Toth). Applying an appropriate encryption method during agent transmission can overcome this barrier.

In the case of mobile agents in a distributed IDS, the central management server dispatches varieties of agents to different nodes of the network. A single mobile agent may carry multiple functionalities, which incorporates a large amount of code into the agent's structure and places some limitations on its mobility (Krugel & Toth). A substantial part of this code is associated with hosts' operating system specific functionalities (Krugel & Toth). To overcome this limitation, i.e., to keep the agents small in size, only generic code can be incorporated into the agent's structure, while the operating system dependent code resides in the hosts themselves (Krugel & Toth).

Managing Interactions and Communications between Agents

Agents need to communicate with each other to maintain operational consistency. Agents can perform distant communication by creating communication channels among themselves and then exchanging messages (Brahmi, Yahia, & Poncelet, 2011). Agents interact with each other using an Agent Communication Language (ACL) (Brahmi, Yahia, & Poncelet, 2011). Information can be sent in text formats using standard and secured protocols (Brahmi, Yahia, & Poncelet, 2011). In some distributed IDS architectures, a mobile agent can directly visit a particular host, deploy itself on that host, and then exchange the required messages (Brahmi, Yahia, & Poncelet, 2011). Upon receiving the messages, the deployed agent can return to its place of origin or visit another host as required (Brahmi, Yahia, & Poncelet, 2011).
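
A rough sketch of such a message exchange is given below. The structure follows the spirit of an ACL message (performative, sender, receiver, content) encoded in a text format, but the exact schema and field values are illustrative assumptions rather than any standardized ACL encoding.

```python
# Sketch of an ACL-style message between two agents. Field names
# follow the spirit of FIPA-ACL (performative, sender, receiver,
# content); the exact schema is illustrative, not a standard encoding.
import json

def make_message(performative: str, sender: str,
                 receiver: str, content: dict) -> bytes:
    """Encode one agent-to-agent message for a text-based channel."""
    return json.dumps({
        "performative": performative,   # e.g. INFORM, REQUEST, QUERY
        "sender": sender,
        "receiver": receiver,
        "content": content,
    }).encode("utf-8")

msg = make_message("INFORM", "sensor-agent-3", "correlation-agent-1",
                   {"event": "port-scan", "source_ip": "198.51.100.7"})
print(json.loads(msg))
```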

Collecting and Correlating IDS Agent Data

Collection and Storage of Distributed Data

Data collection, aggregation, and storage are vital concerns for the effective manipulation and correlation of event data (Innella, McMillan & Trout, 2002). Before aggregating data, organizations need to determine which types of data should be collected and preserved. Distributed IDSs place agents in different corners of the network, where the agents collect representative data in a distributed manner according to the organization's interests (Holtz, David, & de Sousa Junior, 2011). Once collected, data is filtered, analyzed, and interpreted locally by the agents, which normally send only meaningful data to the management server. However, the responsibility of distributed IDSs and their agents is not only to collect network packets but also to gather audit traces from the associated hosts, such as logs generated by applications, operating systems, and other defensive software (Holtz, David, & de Sousa Junior, 2011). Organizations need to determine whether all of this data will be sent to the management server. For security reasons, IDS log data should be preserved both locally and centrally (Scarfone & Mell, 2007).
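
A toy sketch of this local filter-then-forward behaviour; the event fields and the severity threshold below are invented for illustration:

```python
SEVERITY_THRESHOLD = 7  # assumed policy value

def locally_meaningful(events):
    """Filter raw host/network events so only high-severity ones travel upstream."""
    for event in events:
        if event.get("severity", 0) >= SEVERITY_THRESHOLD:
            yield event

raw_events = [
    {"type": "login_failure", "severity": 3, "host": "web01"},
    {"type": "port_scan", "severity": 9, "host": "web01"},
]
to_forward = list(locally_meaningful(raw_events))  # only the port scan is sent
```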

Another challenge of data storage is determining how long log data should be preserved. Day-to-day accumulated log data can quickly overrun storage capacity. Organizations may need to retain IDS data for as long as two years (Innella, McMillan & Trout, 2002), and conveniently storing this enormous amount of log data in the centralized server of a distributed IDS is challenging (Scarfone & Mell, 2007). To overcome this barrier, a number of researchers have suggested incorporating cloud-based data storage into the IDS architecture for scalability, flexibility, and ease of access (Scarfone & Mell, 2007; Alharkan & Martin, 2012; Chen et al., 2013).
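
The retention problem reduces to a simple policy loop: identify, then archive or delete, log files older than the retention window. A minimal sketch, using the two-year figure quoted above and assumed file paths:

```python
import os
import time

RETENTION_SECONDS = 2 * 365 * 24 * 3600  # two-year window, per the text above

def expired_logs(log_dir: str) -> list[str]:
    """List log files older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    expired = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            expired.append(path)  # real code might archive to cloud storage first
    return expired
```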

Data storage is not only a matter of volume; other issues, such as storage management and the level of security applied to the data, also pose significant challenges. IDS data is vulnerable both during transmission and in storage. To ensure the authenticity and integrity of collected data, suitable cryptographic arrangements are made during the transmission and storage of agent data (Holtz, David, & de Sousa Junior, 2011; Cloud Security Alliance, 2011; Catteddu & Hogben, 2009). Cryptographic arrangements in a large-scale system can be managed effectively by deploying an enterprise-wide Public Key Infrastructure (PKI) (Sen, 2010; Tolba et al., 2005).
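
As a toy illustration of protecting agent data in transit and at rest, the sketch below uses Fernet authenticated encryption from the `cryptography` package; in a PKI-backed deployment the symmetric key would itself be distributed under public-key certificates rather than generated locally:

```python
from cryptography.fernet import Fernet

# In a real PKI-backed system this key would be wrapped with the recipient's
# public key; generating it in place is only for demonstration.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"agent": "sensor-12", "alert": "possible SYN flood"}'
token = cipher.encrypt(record)          # safe to transmit or store
assert cipher.decrypt(token) == record  # integrity and authenticity checked
```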

Analyzing Intrusion Detection System Data

Administrators often need to carry out various analysis tasks involving data fusion and event correlation in order to identify subtle attacks (Holtz, David, & de Sousa Junior, 2011). Analysis of IDS data requires appropriate manipulation of data originating from both the network and the hosts, and administrators need sound analysis skills to accomplish this efficiently. The fundamental unit of IDS data is the event (Jordan, 2000). One way IDSs generate alarms is through context-sensitive analysis, counting events and applying thresholds: for example, many connection attempts within a short time is recognized as a SYN flood, and too many different ports visited at a time is recognized as a port scan (Jordan, 2000). Another way to detect an intrusion is by testing individual, uncoupled events against certain criteria, such as matching the pattern of a previously recognized signature (Jordan, 2000). In a distributed IDS, this analysis is performed locally by the distributed agents, while more advanced analysis is performed in the centralized server through event correlation.
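
The counting-and-threshold analysis can be sketched in a few lines: tally events per source within a time window and raise an alarm when a threshold is crossed. The window semantics and threshold values below are assumptions, not figures from the cited sources:

```python
from collections import defaultdict

SYN_THRESHOLD = 200    # assumed: SYN packets per source per window
PORT_THRESHOLD = 50    # assumed: distinct destination ports per source per window

def analyze_window(events):
    """events: iterable of dicts with 'src', 'dst_port', 'flags' for one time window."""
    syn_counts = defaultdict(int)
    ports_seen = defaultdict(set)
    for e in events:
        if e.get("flags") == "SYN":
            syn_counts[e["src"]] += 1
        ports_seen[e["src"]].add(e["dst_port"])

    alarms = []
    for src, count in syn_counts.items():
        if count > SYN_THRESHOLD:
            alarms.append(("SYN flood suspected", src))
    for src, ports in ports_seen.items():
        if len(ports) > PORT_THRESHOLD:
            alarms.append(("port scan suspected", src))
    return alarms
```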

Correlating Agent Data

While the task of each agent is to identify intrusions and suspicious behavior in its associated network segment, the centralized server is responsible for correlating these individual agent data streams in order to identify planned, distributed attacks on the network (Yee, 2003). The centralized server aggregates agent data for event correlation. In this process, if a network packet with an inconsistent signature is identified (Jordan, 2000) or an event is recognized as suspicious, the next step is to identify correlated events demonstrating similar patterns (Jordan, 2000). To accomplish this, IDSs constantly search for connections between suspicious and non-suspicious events (Jordan, 2000). Network administrators may need to adopt various analysis techniques (e.g., data fusion and data correlation) and tools (e.g., honeypots) to carry out event correlation successfully (Holtz, David, & de Sousa Junior, 2011). However, in a large-scale distributed network where each segment has distinct characteristics and the hosts run in heterogeneous environments, associating one suspicious network event with another generated in a distant network segment is tremendously challenging (Innella, McMillan & Trout, 2002). It requires a broad understanding of the entire network as well as effective communication and coordination between the security officials responsible for managing the various segments.
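
A central server's correlation step can be approximated by grouping agent alerts on a shared attribute, such as the source address, across network segments and flagging sources reported by several independent agents. This is a simplified sketch of the idea, not the algorithm of any cited system:

```python
from collections import defaultdict

def correlate_by_source(agent_alerts, min_segments=2):
    """agent_alerts: iterable of dicts with 'src' and 'segment' keys."""
    segments_per_src = defaultdict(set)
    for alert in agent_alerts:
        segments_per_src[alert["src"]].add(alert["segment"])
    # A source seen misbehaving in several segments suggests a coordinated attack.
    return {src: segs for src, segs in segments_per_src.items()
            if len(segs) >= min_segments}

alerts = [
    {"src": "203.0.113.5", "segment": "dmz"},
    {"src": "203.0.113.5", "segment": "branch-office"},
    {"src": "198.51.100.9", "segment": "dmz"},
]
print(correlate_by_source(alerts))  # {'203.0.113.5': {'dmz', 'branch-office'}}
```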

Correlating Data from Multiple Intrusion Detection System Products

Correlating different types of IDS data facilitates the coordinated identification of large-scale distributed attacks (Brahmi et al., 2012; Brahmi, Yahia, & Poncelet, 2011). Every IDS product has advantages and limitations, and no single product can ensure full protection from all kinds of intrusions and malicious activity. Large organizations that run multiple products (from the same or different vendors) with different detection methods and strategies need to correlate their IDS data to gain the maximum benefit from them (Sallay, AlShalfan, & Fred, 2009). A single management interface (or console) can facilitate the coordination, management, and control of IDS data coming from multiple IDS products (Scarfone & Mell, 2007). Organizations may need to establish whether their IDS products can share and coordinate IDS data directly within their management interfaces (Scarfone & Mell, 2007); this is normally the case for IDS products from the same vendor. Organizations also need to confirm whether their IDSs have the interoperability features to ingest log files or other output from other IDSs and security products (Scarfone & Mell, 2007). This type of coordination among multiple IDSs is normally accomplished by SIEM (Security Information and Event Management) software (Scarfone & Mell, 2007; Chuvakin, 2010).
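
SIEM-style coordination begins by normalizing heterogeneous product logs into one schema before correlation. The sketch below maps two imaginary vendor formats onto a common record; both input layouts are made up for illustration:

```python
def normalize(record: dict, product: str) -> dict:
    """Map vendor-specific IDS records onto one common event schema."""
    if product == "vendor_a":        # hypothetical format A
        return {"time": record["ts"], "src": record["source_ip"],
                "signature": record["sig_name"], "product": product}
    if product == "vendor_b":        # hypothetical format B
        return {"time": record["event_time"], "src": record["attacker"],
                "signature": record["rule"], "product": product}
    raise ValueError(f"unknown product: {product}")

unified = [
    normalize({"ts": 1700000000, "source_ip": "203.0.113.5", "sig_name": "SYN flood"}, "vendor_a"),
    normalize({"event_time": 1700000010, "attacker": "203.0.113.5", "rule": "SYN flood"}, "vendor_b"),
]
# A single console can now correlate 'unified' regardless of the originating product.
```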

References

Abraham, A., & Thomas, J. (2005). Distributed intrusion detection systems: a computational intelligence approach. Applications of information systems to homeland security and defense. USA: Idea Group Inc. Publishers, 105-135.

Alharkan, T., & Martin, P. (2012). IDS aaS: Intrusion detection systems as a service in public clouds. In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (ccgrid 2012), 686-687. IEEE Computer Society.

Anderson, D., Frivold, T., & Valdes, A. (1995). Next-generation intrusion detection expert system (NIDES): A summary. SRI International, Computer Science Laboratory.

Araújo, J. D., & Abdelouahab, Z. (2012). Virtualization in Intrusion Detection Systems: A Study on Different Approaches for Cloud Computing Environments. International Journal of Computer Science and Network Security (IJCSNS), 12(11), 10.

Bace, R., & Mell, P. (2001). NIST special publication on intrusion detection systems. A publication of the National Institute of Standards and Technology (NIST).

Beg, S., Naru, U., Ashraf, M., & Mohsin, S. (2010). Feasibility of intrusion detection system with high performance computing: A survey. International Journal for Advances in Computer Science, 1(1), 26-35.

Boudaoud, K., Labiod, H., Boutaba, R., & Guessoum, Z. (2000). Network security management with intelligent agents. In Network Operations and Management Symposium, 2000. (NOMS 2000).

Brandao, J. E. M., da Silva Fraga, J., Mafra, P. M., & Obelheiro, R. R. (2006). A WS-based infrastructure for integrating intrusion detection systems in large-scale environments. In Meersman, R., Tari, Z., & Herrero, P. (Eds.), On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops; proceedings of the OTM Confederated International Conferences, CoopIS, DOA, GADA, and ODBASE 2006, Montpellier, France.

Brahmi, I., Yahia, S. B., Aouadi, H., & Poncelet, P. (2012). Towards a multiagent-based distributed intrusion detection system using data mining approaches.

Brahmi, I., Yahia, S. B., & Poncelet, P. (2011). A Snort-based Mobile Agent for a Distributed Intrusion Detection System. In SECRYPT, 198-207.

Brennan, M. P. (2002). Using Snort for a Distributed Intrusion Detection System. SANS Institute.

Bye, R., Camtepe, S. A., & Albayrak, S. (2010). Collaborative Intrusion Detection Framework: Characteristics, Adversarial Opportunities and Countermeasures.

Catteddu, D., & Hogben, G. (2009). Cloud Computing: benefits, risks and recommendations for information security. European Network and Information Security Agency (ENISA).

Chuvakin, A. (2010). SIEM: Moving Beyond Compliance – Intrusion Detection Systems. White Paper for RSA.

Chen, Z., Han, F., Cao, J., Jiang, X., & Chen, S. (2013). Cloud computing-based forensic analysis for collaborative network security management system. Tsinghua Science and Technology, 18(1), 40-50.

Chatzigiannakis, V., Androulidakis, G., Grammatikou, M., & Maglaris, B. (2004). A distributed intrusion detection prototype using security agents. HP OpenView University Association.

Cloud Security Alliance (2011). Security guidance for critical areas of focus in cloud computing v3.0. A report by the Cloud Security Alliance.

Debar, H., Dacier, M., & Wespi, A. (1999). Towards a taxonomy of intrusion-detection systems. Computer Networks, 31(8), 805-822.

Debar, H., & Wespi, A. (2001). Aggregation and correlation of intrusion-detection alerts. In Recent Advances in Intrusion Detection, 85-103. Springer Berlin Heidelberg.

Faysel, M. A., & Haque, S. S. (2010). Towards cyber defense: research in intrusion detection and intrusion prevention systems. International Journal of Computer Science and Network Security (IJCSNS), 10(7), 316-325.

Garfinkel, T., & Rosenblum, M. (2003). A Virtual Machine Introspection Based Architecture for Intrusion Detection. In NDSS, 3, 191-206.

Ghosh, A., & Sen, S. (2004). Agent-based distributed intrusion alert system. In Proceedings of the Sixth International Workshop on Distributed Computing (IWDC'04), 240-251, Kolkata, India.

Gunawan, L. A., Vogel, M., Kraemer, F. A., Schmerl, S., Slåtten, V., Herrmann, P., & König, H. (2011). Modeling a distributed intrusion detection system using collaborative building blocks. ACM SIGSOFT Software Engineering Notes, 36(1), 1-8.

Hegarty, R., Merabti, M., Shi, Q., & Askwith, B. (2009). Forensic analysis of distributed data in a service oriented computing platform. In proceedings of the 10th Annual Postgraduate Symposium on The Convergence of Telecommunications, Networking & Broadcasting, PG Net.

Holtz, M. D., David, B. M., & de Sousa Junior, R. T. (2011). Building Scalable Distributed Intrusion Detection Systems Based on the MapReduce Framework. Revista Telecomunication, 2, 22-31.

Iheagwara, C. (2003). Intrusion Detection Systems–Strategies for improving Performance.

Innella, P., McMillan, O., & Trout, D. (2002). Managing Intrusion Detection Systems in Large Organizations.

Jordan, C. (2000). Analyzing Intrusion Detection Systems Data.

Kannadiga, P., & Zulkernine, M. (2005). DIDMA: A distributed intrusion detection system using mobile agents. In Proceedings of the Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and First ACIS International Workshop on Self-Assembling Wireless Networks (SNPD/SAWN’05), 238-245.

Kittel, T. (2010). Design and Implementation of a Virtual Machine Introspection based Intrusion Detection System.

Kourai, K., & Chiba, S. (2005). HyperSpector: virtual distributed monitoring environments for secure intrusion detection. In Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments, 197-207.

Kozushko, H. (2003). Intrusion detection: Host-based and network-based intrusion detection systems. Independent study.

Krugel, C., & Toth, T. Applying Mobile Agent Technology to Intrusion Detection.

Lasheng, Y., & Chantal, M. (2009). Agent based distributed intrusion detection system (ABD Intrusion Detection Systems). In Proceedings of the Second Symposium International Computer Science and Computational Technology (ISCSCT ’09), 134-138, Huangshan, P. R. China.

Maciá-Pérez, F., Mora-Gimeno, F., Marcos-Jorquera, D., Gil-Martínez-Abarca, J. A., Ramos-Morillo, H., & Lorenzo-Fonseca, I. (2011). Network intrusion detection system embedded on a smart sensor. Industrial Electronics, 58(3), 722-732.

McHugh, J., Christie, A., & Allen, J. (2000). The role of intrusion detection systems. IEEE Software, 17(5), 42-51.

Mell, P., Karygiannis, T., Marks, D., & Jansen, W. (1999). Applying mobile agents to intrusion detection and response. A publication of the National Institute of Standards and Technology (NIST), US Department of Commerce.

Neelima, S., Prasanna, L.Y. (2013). A Review on Distributed Cloud Intrusion Detection System. International Journal of Advanced Technology & Engineering Research (IJATER), 3(1), 116-120.

Rao, K. R., Pal, A., & Patra, M. R. (2009). A service oriented architectural design for building intrusion detection systems. International Journal of Recent Trends in Engineering and Technology, 1(2), 11-14.

Sallay, H., AlShalfan, K. A., & Fred, O. B. (2009). A scalable distributed Intrusion Detection Systems Architecture for High speed Networks. International Journal of Computer Science and Network Security (IJCSNS), 9(8).

Scarfone, K., & Mell, P. (2007). Guide to intrusion detection and prevention systems (IDPS). NIST special publication, Technology Administration, U.S. Department of Commerce.

Sen, J. (2010). An Agent-Based Intrusion Detection System for Local Area Networks. International Journal of Communication Networks and Information Security (IJCNIS), 2(2), 128-140.

Singh, R. R., Gupta, N., & Kumar, S. (2011). To reduce the false alarm in intrusion detection system using self-organizing map. International Journal of Soft Computing and Engineering (IJSCE), 1(2), 27-32.

Sterne, D., Balasubramanyam, P., Carman, D., Wilson, B., Talpade, R., Ko, C. & Bowen, T. (2005). General cooperative intrusion detection architecture for MANETs. In Proceedings of the Third IEEE International Workshop on Information Assurance, 57-70.

Sundaram, A. (1996). An introduction to intrusion detection. Crossroads, 2(4), 3-7.

Suryawanshi, G. R., Jondhale, S. D., Korde, S. K., Ghorpade, P. P., & Bendre, M. R. (n.d.). Mobile Agent for Distributed Intrusion Detection Systems in Distributed System. International Journal of Computer Technology and Electronics Engineering (IJCTEE), 1(3), 70-75.

Tierney, B., Crowley, B., Gunter, D., Lee, J., & Thompson, M. (2001). A monitoring sensor management system for grid environments. Cluster Computing, 4(1), 19-28.

Tolba, M., Abdel-Wahab, M., Taha, I., & Al-Shishtawy, A. (2005). Distributed Intrusion Detection Systems for Computational Grids. In International Conference on Intelligent Computing and Information Systems, 2.

Werlinger, R., Hawkey, K., Muldner, K., Jaferian, P., & Beznosov, K. (2008). The challenges of using an intrusion detection system: is it worth the effort?. In Proceedings of the 4th symposium on Usable privacy and security, (SOUPS), July 23-25, Pittsburgh, PA, USA.

Wotring, B. (2010). Host Integrity Monitoring: Best Practices for Deployment.

Yee, A. (2003). The intelligent Intrusion Detection Systems: next generation network Intrusion Detection Systems management revealed. NFR security white paper.

Zhai, S., Hu, C., & Weiming, Z. (2014). Multi-Agent Distributed Intrusion Detection Systems Model Based on BP Neural Network. International Journal of Security and Its Applications, 8 (2), 183-192.


Communication Computing

How Computers Affect Communication

Communication – with the current rate of technological improvement, people now have many ways to relate to one another. We continually modify how we form our relationships and develop more diverse methods of interacting. However, each of these methods has its own strengths, weaknesses, and drawbacks.

Computers have made a big impact on relationships. The invention of electronic devices such as mobile phones has simplified the way people get in contact with each other (Sara, 2004). The rise of social networks has also diversified communication. With a computer, people can chat with family members and friends over the internet. Computers are bringing the world together through communication.

This assignment will study the effects of computers on communication in day-to-day life.

Focus questions:

How has the invention of the computer affected communication?

What are the positive and negative impacts of computers on communication?

What services do computers offer for communication?

Description

Effects of Computer Intervention on Communication

The invention of the computer has had revolutionary effects on communication (Sara, Jane & Timothy, 2004). What would have taken years to achieve a century ago can now be accomplished within an hour thanks to the computer. However, this invention has not had only positive impacts on communication; it has also weakened some aspects of it, such as face-to-face interaction. Communication is not just the act of sending and receiving information; it has many other aspects, such as acting on the information, responding to its demands, and giving feedback.

Computers affect communication both positively and negatively (Sara, Jane & Timothy, 2004). Various aspects of modern-day communication could not have been achieved in earlier days, when there were no computers. However, other vital aspects of traditional communication have been lost in modern times through the use of computers (Gordon, 2002). The relationship between computers and communication can be analyzed by focusing on the various effects computers have on communication: computers can either aid or frustrate communication between people. The positive impacts describe the ways in which computers aid communication; the negative implications describe the ways in which computers frustrate communication efforts. These impacts can be analyzed by comparing past communication with present-day communication.

Positive Effects of Computers on Communication

Technological improvement has had a great impact on society's communication methods, particularly with the advances of the last century (Christina, 2006). From the emergence of the telegraph and telephone to the development of the internet, technology has provided us with tools to express our feelings and opinions to an ever wider number of recipients.

Keeping in Touch / Long-Distance Communication

Telegrams are quicker than letters; telephone calls are faster still, as well as easier and more pleasant, since they require no intermediary and allow callers to hear one another's voice. Mobile phones take this a step further, letting people call and talk with one another regardless of location. Online communication of various types is the most efficient yet: email is a near-instant form of the paper letter, and webcams combined with communication programs such as Skype and Google Video Chat make it possible to see the person you are talking to rather than simply hearing their voice.

Doing Business

The same computers that have simplified and enhanced personal communication have had equally valuable effects on business. Communication between partners is near-instant whether they are a few rooms or a few countries apart (Sundar, 2014). For example, video conferencing allows organizations to employ specialists scattered all over the world while still holding productive meetings and discussions; business networking is made easier by social media and online networks designed specifically for that purpose, such as LinkedIn. Perhaps most importantly, organizations can expand beyond their local market and gain a wider customer base simply by maintaining an active online presence.

Overcoming Disabilities

Computers have both enhanced communication for disabled people and made it possible where it previously was not (Gordon, 2002). Hearing aids amplify sound for the partially deaf, making it easier to understand speech, while cochlear implants restore hearing to the deaf. Speech-generating devices give people with severe speech impairments a way to communicate: perhaps the most famous user of such a device was the scientist Stephen Hawking. Further advances may produce practical brain-computer interface systems, restoring the ability to communicate to people who have lost it completely, such as sufferers of locked-in syndrome.

Reaching a Wider Audience

As people's capacity to communicate improves, the reach of their messages widens (Gordon, 2002). This can be particularly important in political campaigns and activism. For example, photographs and video recorded covertly on a phone can be quickly and easily shared online through sites such as YouTube, making it harder for oppressive regimes to keep control; social media such as Facebook and Twitter can be used to organize and coordinate meetings and protests. The Egyptian revolution of 2011-2012 was driven significantly by social media.

Individuals become better able to make decisions because computers deliver, on time, all the data a decision requires. As a result, people and organizations achieve success faster.

Managers become less dependent on low-level staff such as clerks and bookkeepers (Christina, 2006). This improves their working patterns and effectiveness, which benefits the organization and ultimately society at large.

In everyday life, individuals likewise benefit from computer technology. Where airports, hospitals, banks, and department stores have been computerized, people receive faster service thanks to the computer systems behind them.


Negative Effects of Computers on Communication

Computers have changed the workplace and society as a whole. Individuals and organizations have become reliant on computers to connect them to colleagues, vendors, customers, and information (Roundtable, 1999). Computers are used to track schedules, streamline information, and provide required data. Although computers have given workers countless business tools and easier access to information nearby or abroad, there are negative effects. These include more than the obviously feared system failures and cybercrimes.

Communication Breakdowns

Because of the prevalence of computers in the workplace, email is now a typical mode of professional communication. This has caused a host of miscommunication problems. Many employees lack adequate writing skills and therefore struggle to convey their messages effectively, yet even the most gifted writer can still have trouble conveying tone in electronic messages. Without facial expression and body language, messages meant as neutral or even complimentary can be read as rude or critical. To add to this, many workers are so reliant on email that they have never built a positive working relationship in person or by telephone.

Anonymity

One of the principal issues with computer-mediated communication stems from a lack of user accountability (Jungwan Hong, 2014). People can represent themselves however they wish on Internet forums or social networks, and this creates communication problems in both directions. A user may distort who he is by not giving accurate details about himself, and this lack of honesty affects how that person is perceived. The cloak of anonymity also allows a user to disregard socially accepted norms such as tolerance or politeness.

Misinterpretation

The fact that most computer-based communication takes place as text can be a real negative in terms of our ability to understand things clearly. Even with email (Roundtable, 1999), it is possible for information to be misread or for the feeling behind a statement to be missed. Saying "you rock" to someone in an email message, for example, could be a genuine expression of appreciation. On the other hand, it could convey a negative sense of someone being put in a tough position. The contextual cues a person gives through body language and tone of voice are lost in this situation. Users get around some of this confusion by using emoticons – keyboard characters that serve as shorthand for mood and feeling – but a great deal of nuance is missed without seeing how someone reacts with their body language and voice.

Dependency

Society's reliance on computers for communication is likewise a risky game, as outside forces can disrupt communication in a variety of ways (Christina, 2006). Earthquakes, floods, and hurricanes have caused various slowdowns and stoppages of Internet connectivity for people all over the world. Moreover, dependence on social networks and email can have the unintended consequence of exposing a person to identity theft attempts and email scams. Even the outside force of political unrest can weaken a user's ability to communicate: the 2011 demonstrations in Cairo and Libya led to government shutdowns of the Internet, drastically curtailing each country's ability to communicate, both nationally and internationally.

Privacy

Communicating by means of computers can help people connect across large geographical divides and access remote information, but doing so may expose a person's privacy more than he might want. With an in-person meeting or telephone conversation, there is a reasonable assurance that the details of what is shared will stay private. With email, text messaging, or message boards, by contrast, there is a record of what people say (Christina, 2006). Information is not simply thrown out into the air like speech; it is stored as a lasting record. There is an inherent risk when third parties can access these online "conversations." Similarly, social networks and other Internet-based communication tools are vulnerable to privacy breaches, as users often engage in these activities on public networks, potentially leaving personal information out in the open.

Computers complicate conflict resolution mechanisms in communication: multiple transfers of information, increased anonymity, and the fast flow of information make conflict resolution in organizations extremely complex (Christina, 2006). With computers, it has been extremely hard to control access to information among various unintended groups such as children, junior staff, and political opponents (Christina, 2006).

Computers have affected communication in various ways, and there are important differences between modern and older forms of communication. Older communication was characterized by slow movement of information, heavy expense, inefficient transfer, and limits on the amount of information that could be conveyed (Sundar, 2014). Modern communication, by contrast, is fast, inexpensive, highly efficient, and clear. However, modern communication cannot function well without the computer.

Services Offered by Computers in Communication

The internet is a popular tool for accessing information, expanding commerce, and communicating with one another. Studies show that the dominant household use of the net is person-to-person communication. Email, instant messaging, chat rooms, and support sites have changed the way people pass information to others through computers.

Computers allow users to talk to each other via connected networks and social sites (Sara, 2004). Computers can also connect users to the internet through phone lines or cabled connections, enabling them to share data and information; for example, one can send a message from one computer to others across a globally connected network. Data transfer and messaging are thus useful services that computers provide for communication worldwide.

Discussion

Around a century ago, a substantial number of developments occurred during the first industrial revolution, and within a short period numerous nations became industrialized. We are now at the beginning of another technological revolution (Gordon, 2002), and its principal driver is the invention of the computer. The computer is the most flexible machine people have ever made, and it plays an essential part in everyday life. Its applications span education, industry, government, medicine, scientific research, law, and even music and the arts.

Over the next decade, national and international computer networks will be created and connected together (Asthmatics, 2010). Some of these networks will be specialized, acting as links between national information stores and data warehouses, while others will provide general computing capabilities.

Computerized information stores covering every aspect of society's work will be generated more frequently and with wider coverage, and they will be accessible at different levels to groups and individuals with different qualifications.

Computer networks will become accessible at low cost to the general public and to groups with limited resources. For example, renting a computer terminal in London currently costs around 72 dollars per installation per month, excluding the hourly cost of terminal use (Jungwan Hong, 2014). Such costs will tend to fall until a terminal is standard office equipment.

People will increasingly communicate directly with one another on a global scale. The use of video chat will grow, extending individuals' influence in many areas over long distances and periods of time.

Conclusion

Computers have positive and negative effects on communication. Weighing the positive effects against the negative, there is clear evidence that computers do communication more good than harm (Asthmatics, 2010). Computers play a very important role in facilitating communication. People should use computers to expand their circle of friends and relations globally, but we should also learn to manage and treasure our relationships and our use of computers so that nothing regrettable occurs.

References

Asthmatics, G. (2010). The revolution of communication and its effect on our life. Pavaresia. Academics International Scientific Journal, 100-108.

Christina, K. (2006). The Impact of Information and Communication Technologies on Informal Scholarly Scientific Communication: A Literature Review. Maryland: University of Maryland Press.

Gordon, B. (2002). Computers in Communication. UK: McGraw-Hill International Electronic Version.

Jungwan Hong, S. B. (2018). Usability Analysis of Touch Screen for Ground Operators. Journal of Computer and Communications, 1-20.

Roundtable, N. R. (1999). Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology. Washington: National Academies Press.

Sara, K. J. (2004). Social Psychological aspects of Computer Mediated Communication. Pittsburgh: Carnegie Mellon University.

Sundar, S. S. (2014). Communication theory. Journal of Computer-Mediated Communication, 85.


Wireless Security Dissertation

Wireless Security

Wireless networking is one of the most common and accessible networking technologies, but its security problems remain of great interest. This research paper will survey wireless networking and its security, focusing on the companies involved in this technology and on its future prospects.

These days computer users are increasingly keen to access the Internet wirelessly because of its accessibility and mobility. Business travelers prefer wireless laptops to keep in contact with the office, home, and friends. A wireless network can connect the computers at various places in your business and home without any cords and permit you to work on a laptop anywhere within the network's range (Bulk, 2006).

Wireless networks are established on the basis of IEEE standards belonging to the 802 family, which also contains Ethernet (802.3), widely used today in offices and homes. Even though development of the 802.11 standards and technology has been in progress since the late 1990s, broad acceptance of "wireless Ethernet" only began in 2000-2001, when access point (AP) devices became cheap enough for the home user (Bulk, 2006). The following items provide an overview of the 802.11 family:

802.11b – the most widespread variant; 11 Mbps maximum, 2.4 GHz band

802.11a – the next generation; 54 Mbps maximum, 5 GHz band

802.11g – 54 Mbps maximum, 2.4 GHz band; compatible with 802.11b

802.1X – uses the Extensible Authentication Protocol (EAP) and supports RADIUS

802.11i – TKIP; still in draft at the time of writing (Bulk, 2006)

The disadvantage of a wireless network is that, unless proper precautions are taken, any individual with a wireless laptop or computer can access your network. That means a hacker can reach the personal data and information on your laptop or computer, and if an unauthorized person uses your network to commit a crime or send spam, the activity will be traced to your account. The 802.11 standard is, in its operating principles, not that different from Ethernet: it employs a conventional "one can speak, others hear" media access control strategy; the simple difference is that instead of wires, the carrier of the signals is a radio frequency. In 2004, the Information Security Research Centre (ISRC) of Queensland University of Technology announced that any 802.11 network installed in a business environment could be halted in a few seconds simply by transmitting a signal that prevents other parties from talking. The same is true for Ethernet, except that an attacker must first be able to reach a network plug, which certainly makes the trouble much easier to trace and resolve (Nichols & Lekkas, 2006).

That is not where the problems end. Where the 802.11 standard did try to prevent attacks above the carrier level, it failed dismally. The Wired Equivalent Privacy (WEP) method was designed to give wireless networks a degree of protection against outside parties eavesdropping on network sessions, thereby offering security nearly equivalent to conventional LANs. However, numerous design flaws were found in the WEP scheme in 2001 by researchers from Zero Knowledge Systems and the University of California, which showed the scheme to be hopelessly inadequate. Unfortunately, by that time Wi-Fi had already been deployed so extensively that compulsory adjustments were hard to implement (Nichols & Lekkas, 2006).

General Wireless Security

Now the question arises of why it is necessary to concentrate on Wi-Fi security. The general nature of 802.11 network security zones is one of the leading sources of future security concerns: an attacker can be positioned where nobody expects and remain well outside the network's authorized premises. A further reason is the widespread use of 802.11 networks: in 2006 the number of shipped 802.11-enabled hardware devices was estimated to exceed 40 million units, even as unit costs kept falling. After 802.11g devices became popular, the price of several 802.11 products dropped to the level of 100BaseT Ethernet client cards. There are speed disadvantages, but not every network requires high speed, and in the majority of cases a wireless network is highly desirable. Another reason to concentrate on wireless security is that offices are often situated along busy streets, office parks, and highways (Nichols & Lekkas, 2006).

Wi-Fi security addresses two different problems: privacy and authentication. Authentication ensures that only valid clients get access to the network; privacy keeps communication safe from eavesdroppers. The introduction of WPA technology properly addressed these two fundamental problems. Although we now have much stronger security technology, accidents and mishaps can occur at any time. To enjoy Wi-Fi technology safely, users should have sufficient knowledge of its security weaknesses and vulnerabilities; that is why the Wi-Fi Alliance suggests that users of wireless networks apply the same degree of care they have learned to employ to avoid scams in the wired world. End users should change their passwords frequently, not respond to suspicious e-mails, and look for protected and secured connections. Customers should also adopt several simple security precautions: connecting through a provider that employs encryption with a list of trusted hotspots, using a VPN, and consistently enabling and updating security inside a home network. Furthermore, customers should prefer products that carry Wi-Fi certification for WPA™ (Wi-Fi Protected Access) or WPA2™ security (Nichols & Lekkas, 2006).

As far as the technology is concerned, Wired Equivalent Privacy (WEP) is a protocol that provides security to wireless local area networks (WLANs) based on the 802.11 Wi-Fi standards. WEP is an OSI Data Link Layer (layer 2) security technology that can be switched on or off, and it was designed to offer wireless networks the same level of privacy protection as a comparable wired network. WEP is based on the RC4 stream cipher and uses a combination of system-generated values and secret user keys. The first implementations of WEP supported so-called 40-bit encryption, combining a 40-bit key with 24 additional bits of system-generated data, the initialization vector (IV). Research has revealed that 40-bit WEP encryption is very easy to break, and as a result product vendors nowadays use 128-bit or even 256-bit encryption (Hardjono & Dondeti, 2008).
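
To make the weakness concrete, the following sketch implements RC4 and the WEP-style construction of seeding it with the IV prepended to the secret key. This is a toy demonstration of the scheme described above, not a secure implementation, and the key and IV values are arbitrary:

```python
# Illustrative sketch of WEP-style encryption (RC4 over IV || key).
# RC4 and WEP should never be used in practice.

def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(plaintext: bytes, secret_key: bytes, iv: bytes) -> bytes:
    # WEP seeds RC4 with the 24-bit IV prepended to the 40-bit secret key.
    # Because the IV is short and sent in the clear, keystreams repeat,
    # which is one of the design flaws that broke WEP.
    keystream = rc4_keystream(iv + secret_key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

ciphertext = wep_encrypt(b"hello", secret_key=b"\x01\x02\x03\x04\x05", iv=b"\xaa\xbb\xcc")
```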

When data is transmitted, WEP uses keys to encrypt the data stream. The keys are generally not sent over the network; rather, they are stored in the Windows registry or on the wireless adapter. However it is configured on a wireless LAN, WEP represents only one element of an overall WLAN security scheme. The 802.11 standard describes communication within wireless LANs, and the WEP algorithm is used to defend that communication from eavesdroppers; a secondary purpose of WEP is to prevent unauthorized access to a wireless network. WEP depends on a secret key shared between the access point and the mobile station. The secret key is used to encrypt packets before they are transmitted, and an integrity check is used to ensure that packets are not modified in transit. Another WEP technology, offered by Agere Systems, is known as WEP+; it enhances security by avoiding "weak IVs". WEP+ is most effective when used at both ends of the wireless connection, but this cannot easily be enforced, and the possibility always remained that a practical attack against WEP+ would eventually be found (Hardjono & Dondeti, 2008).

Moreover, Wi-Fi Protected Access (WPA) is another security technology for Wi-Fi networks. WPA improves on the encryption and authentication features of WEP; it was developed by the networking industry in response to WEP's deficiencies. The technology behind WPA is the Temporal Key Integrity Protocol (TKIP), which addresses the encryption weaknesses of WEP. Another advantage of WPA is its built-in authentication, which WEP does not offer. A further variant, termed WPA-PSK (WPA Pre-Shared Key), is a simpler but still strong form of WPA that is well suited to business and home Wi-Fi networking (Hardjono & Dondeti, 2008).
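
As an illustration of the pre-shared key variant, the sketch below derives the 256-bit pairwise master key from a passphrase and SSID using the published WPA-PSK derivation (PBKDF2-HMAC-SHA1 with 4096 iterations); the passphrase and SSID values are invented:

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    # WPA-PSK derives a 256-bit pairwise master key (PMK) from the
    # passphrase, using the SSID as salt and 4096 PBKDF2 iterations.
    return hashlib.pbkdf2_hmac(
        "sha1",
        passphrase.encode("utf-8"),
        ssid.encode("utf-8"),
        4096,
        dklen=32,
    )

pmk = wpa_psk("correct horse battery staple", "HomeNetwork")  # hypothetical values
print(pmk.hex())
```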

There is a more advanced and safer form of WPA, known as WPA2, which provides both data confidentiality and integrity. WPA2 gives a wireless network stronger security, but it cannot deliver enterprise security alone: the IEEE 802.1X port-based protocol is generally combined with WPA2 to provide maximum security and guarantee secure wireless communication. WPA is used with the TKIP protocol, which in turn uses the RC4 cipher and can be implemented in software with a driver or firmware update. WPA provides integrity checking using the MIC, sometimes termed "Michael". WPA2, meanwhile, uses a newer encryption method known as CCMP (Counter Mode with CBC-MAC Protocol), which is stronger than RC4 (Hardjono & Dondeti, 2008).

As far as the future of wireless security is concerned, will the new 802.11 standards simplify this challenge? Time will tell. The 802.11i standard is the latest wireless security standard, designed to replace WEP and provide much more effective wireless security. 802.11i was expected to launch together with 802.11g; however, we do not live in a perfect world. Version 1 of the Wi-Fi Protected Access (WPA) certification implemented various features of the 802.11i draft, although not all 802.11 products sold on the market carry WPA certification. At present there are many deployed 802.11g networks still using old, insecure versions of WEP (Guna, 2009).

It is expected that the next Wi-Fi speed standard, 802.11n, will provide a bandwidth of up to 108 Mbps, and that this will become the industry standard. The final specification will be ratified by the IEEE at least a year from now, and draft-based devices and products could run into compatibility trouble if the ratified standard differs from the draft version. It is therefore better to wait for the ratified standard before building a network on non-ratified gear (Tynan, 2004).

Looking further ahead, the likely changes to Wi-Fi include methods and procedures to make the technology more dependable and secure. Within a year or two the 802.11i standard will be finalized, which will greatly raise the level of security. Moreover, to handle data encryption, the majority of 802.11i-compliant devices will need separate co-processors, which means that current Wi-Fi devices and equipment will have to be replaced to achieve maximum security (Tynan, 2004).


Lastly, another advanced standard, 802.11e, will come to be widely employed for specialized tasks. 802.11e addresses quality of service and the on-time delivery of data packets. This standard is very important for streaming applications such as videoconferencing, and it will be vital as businesses move toward using Voice over IP on their wireless networks. Some companies, such as Broadcom, have already shipped 802.11e support in some of their products (Tynan, 2004).

These days several companies around the globe provide wireless security, but when the question of pioneers arises, the name that comes to mind first is Broadcom. The company provides network solutions and has recently announced support for the WAPI security standard required for all WLAN devices sold in China. Broadcom states that it is among the first WLAN chip makers to offer WAPI-enabled support for mobile devices and wireless routers, including the InConcert BCM4325, which combines WLAN, FM technologies, and Bluetooth (CBR, 2009).

Broadcom states that all its chips combine its digital architecture and radio to supply wireless connectivity supporting triple-play services and broadband connectivity. Michael Hurlston, vice president and general manager at Broadcom, said: "We are delighted to maintain the WAPI standard; being a leader in the field of WLAN, we are confident and feel proud to be a pioneer of the rapidly increasing wireless communications." Furthermore, Broadcom is working to promote global standards for wireless communication ("Broadcom", 2003).

The fairly new solution offers an entirely new way to protect wireless networks against real-world threats by introducing active wireless testing methods that can evaluate deployed wireless access points daily. With this approach, the Wireless Vulnerability Assessment solution enables IT administrators to find vulnerabilities in their wireless network remotely, and it automates compliance reporting, helping customers reduce operating costs ("Computer Technology Review", 2009).

Motorola offers a complete and comprehensive range of WLAN infrastructure solutions designed to enable the truly wireless enterprise, whatever the size of the business. The company offers Internet Protocol (IP) coverage both indoors and outdoors, and its wireless range includes mesh, enterprise WLAN, fixed broadband, and the Motorola AirDefense wireless security solutions. Motorola's solutions decrease network maintenance and deployment costs and guarantee the availability of affordable wireless connectivity. By replicating active attacks from the hacker's viewpoint, the Motorola AirDefense Wireless Vulnerability Assessment solution lets administrators proactively evaluate the security level of sensitive systems on the wireless network, such as cardholder data systems. Furthermore, by using the radio of a wireless sensor to simulate a wireless client station, the method enables IT administrators to perform wireless vulnerability assessments remotely, from a hacker's perspective (Messmer, 2009).

Motorola is recognized around the globe for innovation in the field of communication and is broadly focused on advancing the ways in which the world connects. From enterprise mobility, broadband communications infrastructure, and public safety solutions to mobile devices and high-definition video, Motorola is leading the next phase of innovation that will enable people, governments, and enterprises to be more connected and more mobile. Motorola reported sales of nearly US $30.1 billion in 2008 (Darkreading, 2009).

The issues of spectrum pricing, data protection, and security are of primary concern and of great importance for the advancement and growth of the wireless communication industry. The most important regulatory issue in wireless technology is security, and it is the issue the wireless industry will face over the next two decades. Customer confidence in online transactions over the wireless medium is also of great importance. Security involves not only the safety of data but also protection from monitoring; distaste for surveillance is widely considered a likely inhibitor of wireless network use (Green, 2009).

The basic issues to consider are these. Wireless communication, because it needs no physical connectivity, has a better chance of surviving natural disasters such as floods, hurricanes, earthquakes, tornadoes, and volcanoes. Wireless transmission, however, is easier to seize and intercept than traffic running over wire-line or fiber connections; fiber can also be intercepted, but it is harder to tap than wireless. Moreover, the public commonly perceives physical connections as more secure. The use of digital encryption is genuinely helpful for protecting wireless transmission, but public concerns about security and privacy remain of great importance and will have to be addressed by the wireless industry in the coming decades (Green, 2009).

As far as the global implementation of wireless security is concerned, a few steps should be adopted. The initial step towards successful and effective global wireless security is creating a standardized authentication infrastructure. Users should be required to authenticate to the network, but in a global deployment they cannot be forced to manage multiple user accounts, identities, and passwords. There must be a single set of access credentials that provides authentication at any site or location; preferably this should be the same set of credentials used to log in at the user's own workplace. The network systems should be arranged to recognize realms, domain names, and other regional identifiers so that authentication requests can be routed to the appropriate set of validation servers. By implementing this principle, a user can travel anywhere in the world or to any enterprise location and be granted access to the appropriate networks (Green, 2009).
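
A minimal sketch of this realm-routing idea: parse an identity of the form user@realm and forward the request to that realm's authentication server. The realm-to-server map and host names are invented for illustration; a production deployment would use RADIUS proxying rather than this toy dispatcher:

```python
# Hypothetical realm-to-RADIUS-server map; names are illustrative only.
REALM_SERVERS = {
    "corp.example.com": "radius1.corp.example.com",
    "branch.example.org": "radius.branch.example.org",
}

def route_auth_request(identity: str) -> str:
    """Pick the authentication server for a 'user@realm' identity."""
    try:
        _user, realm = identity.rsplit("@", 1)
    except ValueError:
        raise ValueError("identity must be of the form user@realm")
    server = REALM_SERVERS.get(realm.lower())
    if server is None:
        raise LookupError(f"no authentication server registered for realm {realm!r}")
    return server

print(route_auth_request("alice@corp.example.com"))  # radius1.corp.example.com
```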

The second step in the implementation of global wireless security is a change in the access method: the access method should be reliable and consistent everywhere the user travels. Users do not want to adjust or reconfigure their systems as they move between corporate offices, branch offices, and homes. This means the same Service Set Identifier (SSID) should be present everywhere, with the same encryption and authentication policies. Additionally, every location must share the same authentication infrastructure so that users encounter no problems. Ideally, users should be able to open their email application at the corporate office, put their laptops into sleep mode, go home, and resume work without needing a separate VPN client (Green, 2009).

Finally, the third step is the implementation of voice mobility. Many companies and organizations are assessing Voice over WLAN (VoWLAN) technology today, with wide-scale deployments expected to follow. One of the major benefits of this technology will be the ability to use it anywhere a wireless LAN service is available. When users travel to remote locations, VoWLAN voice mobility generally permits their handsets to operate in the same manner as at their normal work locations. Moreover, the mobile network infrastructure must offer secure transmission of voice traffic back to the corporate telephony server, quality-of-service control, and reliable encryption and authentication across the entire global network (Green, 2009).

References

Broadcom, (2003) Broadcom wireless LAN solutions now Wi-Fi protected access

Bulk, Frank (2006). The ABCs of WPA2 Wi-Fi security. Network Computing, 17(2), 65-3.

CBR, (2009) Broadcom features wapi security standard on its wireless offerings.

Computer Technology Review, (2009, August 20). Motorola debuts wireless LAN security solution for remote wireless security testing.

Cox, John (2009). What's next for Wi-Fi?

Darkreading, (2009) Motorola introduces wireless LAN security solution for remote wireless security testing.

Green, Jon. (2009). Building global security policy for wireless LANs.

Guna, (2009) The Future of wireless security.

Hardjono, Thomas, & Dondeti, Lakshminath (2008). Security in wireless LANs and MANs. London: Artech House, Inc.

Messmer, Ellen. (2009). Motorola boosts wireless network security.

Nichols, Randall K., & Lekkas, Panos C. (2006). Wireless security: models, threats, and solutions. Berkshire, UK: McGraw-Hill Telecom.

Tynan, Daniel. (2004) The Future of wireless.


Infrastructure Virtualization

Virtualization is a steadily growing trend in large business organizations. Virtualization allows an organization to perform more work with fewer computers, since individual systems rely on the computing power of a central server that is replicated across the organization; each system represents a single instance of the server computer. Implementing virtualization techniques helps a company save substantially on resources while keeping productivity high.

This report analyses the applicability of virtualization to a real organization, Regional Gardens Ltd. An implementation plan for the virtualization system will be developed according to the organization's requirement specification. The advantages of virtualization in the data centre and for legacy systems will be evaluated against productivity and cost-effectiveness. Virtualization is generally implemented alongside challenges such as information security and disaster recovery; both issues will therefore be discussed with respect to the selected organization.

Introduction

Virtual desktop infrastructure (VDI) is the approach of using a virtual machine to host an operating system other than the one installed on the machine itself. Using virtualization it is also possible to run a specific OS on many systems while the OS actually runs only on a centralized server. Virtualization of this kind is a variant of the basic client-server model of computing and is generally adopted by organizations to provide virtual OS instances across the IT infrastructure without the client-server model in its original form.

This report focuses on the implementation of virtualization of computer systems at Regional Gardens Ltd. The organization operates through three different offices: a garden that is open for public viewing a couple of times every year, the Regional Gardens Nursery, which operates as a seller of plants, and Regional Garden Planners, a gardening consultation firm that provides gardening advice to the general public. The managerial board at Regional Gardens is considering adopting virtualization as a way of allowing its old computers to run a relatively new OS, and is also planning to use virtualization to ease the pressure on its data centre, whose cooling mechanisms are very costly. The report will analyse the implementation plan of virtualization for the organization and assess its advantages and disadvantages.

What is Virtualization?

Virtualization in computing refers to the ability of a physical machine to run a virtual machine: an emulated system that behaves as if it were installed on the hardware even though it is not. Virtualization can be performed not just for applications, but for operating systems, runtime environments and hardware components as well.

Over time, most of the functions that computers perform have in some way benefited from virtualization (Kusnetzky 2011).

Desktop virtualization is the technique of providing a desktop environment within an already running desktop environment, without installing the virtualized environment physically on the system. On the physical device hosting the virtualization, this technique inserts an intermediate layer between the desktop environment and the application software running on it, providing a virtual desktop environment in which to run the applications. With desktop environment virtualization, it is also possible to run virtualized applications on the virtual desktop.

A virtual OS can likewise be run on all the systems of an organization from a centralized OS installed on a server. This makes it very easy to keep a backup of all the information, since virtually all employees use their systems through an OS that is physically present on only one well-protected device. In such a situation, if one virtualized system is lost or broken, it is very easy to restore a new system with the same information it held before, since all the information is actually stored in a virtual data centre and a virtual desktop environment hosted physically elsewhere.

Proposed Virtualization Technique and Infrastructure

Virtualization can be implemented in different ways, such as by installing a virtual machine monitor on the system itself or through a centralized virtualization model that serves remote systems. Regional Gardens Ltd has a staff of 150 people on its payroll, with operations spread across three different offices and locations. As the company wishes to use virtualization for its data centre as well as for the old-specification systems in its IT infrastructure, it should rely on remote desktop virtualization to provide relatively new OS versions to its fleet of old computers.

Hardware virtualization offers several benefits, including consolidation of the infrastructure, ease of replication and relocation, normalization of systems and isolation of resources (Wolf & Halter 2005).

The remote virtualization technique for the computer systems of Regional Gardens Ltd will function as a client-server model between the server hosting the OS and the relatively old computer systems in the organization that need virtualization to make use of new OS versions. In this infrastructure, the execution of applications and OS features takes place on an OS that is not installed on the individual systems but physically only on the server, which is connected to the local clients through a remote connection mechanism. The user interacts with the applications running on the server through a remote virtual display that replicates the server's display on the employee's local system.

In this remote arrangement, data is stored only on the device with the physically installed OS, the server, while only local hardware information is stored on the local machines. Because of this property, the organization's IT infrastructure gains robustness and high reliability, as the stored data is much more secure in this environment. The most common approach to implementing this kind of architecture is to install the OS on a server machine running a hypervisor and then host a number of different OS instances on that server for use by the local systems of employees working for Regional Gardens Ltd. This approach is known as virtual desktop infrastructure, generally abbreviated as VDI.
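To make the VDI approach concrete, the sketch below shows how a hypervisor host might define and boot per-employee guest instances using the libvirt Python bindings. This is a minimal illustration under assumed details: the domain XML template, instance names and disk paths are hypothetical, not configuration taken from the case study.

```python
# Minimal sketch (assumed setup): define and boot per-employee guest OS
# instances on a hypervisor host via the libvirt Python bindings.
# The domain XML, names and disk paths are hypothetical illustrations.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/{name}.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <graphics type='vnc' autoport='yes'/>
  </devices>
</domain>
"""

def start_desktop_instance(conn: libvirt.virConnect, name: str) -> None:
    """Define the guest if it does not exist yet, then boot it."""
    try:
        dom = conn.lookupByName(name)       # reuse an existing definition
    except libvirt.libvirtError:
        dom = conn.defineXML(DOMAIN_XML.format(name=name))
    if not dom.isActive():
        dom.create()                        # boot the virtual desktop

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    for user in ("alice", "bob"):           # hypothetical employee names
        start_desktop_instance(conn, f"vdi-{user}")
    conn.close()
```

In a full VDI deployment a connection broker would assign these instances to employees' thin clients; the sketch covers only the server-side provisioning step.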

Providing OS instances on local computers from an OS instance installed on the server side is generally preferred in the following situations:

  • The virtualization technology is highly efficient in organizations where availability requirements for the OS are very high but technical support is not always readily available, as in retail companies and branch office environments. This is also true of Regional Gardens Ltd: having the OS installed only on the server helps the company reduce the cost of maintenance and of fixing technical errors on individual systems.
  • The virtualization technique is also very helpful where high network latency reduces the performance and productivity of the general client-server model. In such cases, remote desktop virtualization helps reduce the impact of latency by using a centralized system based on a variant of the client-server model.
  • It suits organizations where the data must support remote access while also meeting data security requirements. The two aspects conflict with each other, but with remote virtualization the remote nature of the service is retained while all files stay on the centralized system only, which ensures a high level of security.

As some devices in Regional Gardens Ltd run operating systems other than Windows, the virtualization technique will allow a Windows OS to run on such systems without having to install the OS on them physically. With a proper implementation of the virtualization technique, employees can also work on the Windows OS through their non-Windows tablets and smartphones.

Remote OS virtualization is also a great way of sharing different kinds of resources in the organization. It would be difficult for the organization to provide every employee with a dedicated high-specification computer or to replace the old computers with expensive modern ones.

In the organizational structure of Regional Gardens Ltd, the virtualization technique will help the company accommodate legacy systems as well as enhance the security of the entire system architecture.

Data Center Virtualization

Regional Gardens Ltd has a data centre where all the information relating to the organization and its three offices is stored. The organization is currently facing the issue of the high cost of cooling the data centre. The objective is to lower the cost of data centre maintenance and cooling by using virtualization techniques. For this purpose, the data centre can be replaced with a virtual data centre that appears to be physically present within the organization but is actually hosted in a cloud and replicated on Regional Gardens Ltd systems through virtualization. The systems need to be connected to the network in order to use this technique. As the data is stored in the cloud and no physical data is kept on the organization's own systems, maintenance and cooling expenses can be saved. For security and restore purposes, a continuous backup of the data centre can be produced by the IT department, which will keep an up-to-date copy of the stored data.
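As one way of realising the backup step just described, the sketch below uploads a date-stamped backup archive to cloud object storage using the boto3 library for AWS S3. The bucket name, file paths and naming scheme are assumptions made for illustration; any comparable cloud storage service would do.

```python
# Minimal sketch (assumed setup): push a periodic data-centre backup
# archive to cloud object storage with boto3 (AWS S3). Bucket, paths
# and key layout are illustrative assumptions, not case-study details.
import datetime
import pathlib
import boto3

BUCKET = "regional-gardens-backups"   # hypothetical bucket name

def upload_backup(archive: pathlib.Path) -> str:
    """Upload one backup archive under a date-stamped key; return the key."""
    s3 = boto3.client("s3")
    key = f"datacentre/{datetime.date.today():%Y-%m-%d}/{archive.name}"
    s3.upload_file(str(archive), BUCKET, key)   # standard boto3 upload call
    return key

if __name__ == "__main__":
    key = upload_backup(pathlib.Path("/backups/dc-snapshot.tar.gz"))
    print(f"Stored backup at s3://{BUCKET}/{key}")
```

Run on a schedule (for example via cron), this keeps an off-site copy of the data centre current, which is the property the continuous-backup plan above depends on.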

Given the state of virtual and cloud-based infrastructure, it’s almost impossible not to think about end-to-end data environments residing in abstract software layers atop physical infrastructure (Cole 2014).

Introducing cloud computing involves moving computing outside this firewall, in effect dismantling the firewall and enabling much richer collaboration with various stakeholders (Willcocks, Venters & Whitley 2013).

In this digital age of networked systems, a large pool of business entities is looking to cloud networks as the ideal way of handling the responsibilities of a network system. There is a strong trend of implementing cloud-based networks in business organizations, and a large number of organizations have already implemented such systems. Most of the businesses adopting cloud networks are the ones that need to upgrade outdated network systems. The cost-effectiveness and flexibility of cloud networks make them the ideal solution to the problem of implementing a new network system.

Cloud Computing is the paradigm where computing resources are available when needed, and you pay for their use in much the same way as for household utilities (Harding 2011).

The character of the internet forces cloud providers and application builders to compromise, often in the form of eventual consistency (Waschke 2012).

Cloud technology in networking helps organizations keep their infrastructure ready for future changes to the network, and it keeps development and maintenance of the system cost-effective for the company. Infrastructure planning is very simple with this technology, and its dynamic nature makes it easy to scale cloud applications, data storage, and so on. Various types of cloud network can be implemented, including public, private and hybrid clouds, as well as the latest development in this field, the community cloud.

The different cloud networking technologies are significantly different from one another, and each has its own advantages and disadvantages. Based on the advantages of each cloud technology, Regional Gardens Ltd can decide which kind of cloud solution to implement on its network. As the organization has a network of 150 employees and only these employees have access to the system, a private cloud network will be suitable for the needs of Regional Gardens Ltd.

In business organizations where privacy is a big concern, private cloud networks are mostly used to keep access to the network limited to staff members. Such a private cloud allows a business entity to host a specialized application in the cloud, and the approach also lets the organization address security concerns directly, a component missing in public networks. This kind of network is not shared among a large pool of people or with other organizations, and it can be hosted internally or externally.

There are two different types of private cloud, based on their configuration:

  1. On-premise private cloud: This is a cost-effective method of developing a private cloud network for the organization. The development and operational costs of such a model are borne by the IT department of the company, as the network essentially becomes the IT department's responsibility. On-premise private clouds are highly suitable for applications hosted within organizations, and they offer management a high level of control and configurability as well as high security across the whole network.
  2. Third-party private cloud: These are also known as externally hosted private cloud networks. Such clouds are designed by a specialist networking firm for the exclusive use of one company only. In this case, the cloud services are arranged with the help of a third-party network provider, which also hosts the service. The provider develops a network managed for that specific organization alone and ensures that the network is highly secure for organizational use.

For Regional Gardens Ltd, as the organization does not have a dedicated IT department, hiring the services of a third-party cloud company is highly recommended to keep the cloud secure and functional. Implementing a cloud-based virtual data centre will reduce the cost of maintaining a physical data centre.

The advantages of Virtualization for Regional Gardens Ltd

In the IT infrastructure of Regional Gardens Ltd, the virtualization technique will provide great benefits. Some of these advantages are listed below:

  • Consolidation: Virtualization will reduce the workload on inefficient or legacy systems by means of consolidation, combining the workload of a number of machines on a single computer. Through this technique, employees can run more applications on fewer hardware components. This is a cost-effective feature of virtualization for Regional Gardens Ltd, as it does not require every system to have a top-notch specification (a rough sizing sketch follows this list).
  • A full data centre: In the present infrastructure, the data centre is almost full and cannot handle more workload or additional computers. Additionally, cooling and maintenance are running at their limits and costing a great deal. Virtualization eases the pressure on the data centre by allowing as many computers as required to be added without physically installing them. Virtualizing the data centre into the cloud is the ideal solution here: the data is stored on a cloud network, and maintenance and cooling costs are significantly reduced.
  • Hardware isolation: New hardware components come out every other month with faster and better performance than previously available parts. However, shifting from one server hardware platform to another is a tough task that requires lengthy configuration, and there is a risk that a new hardware architecture may not work with existing applications. In virtualized systems this issue largely disappears, because the systems are virtual rather than physical; every machine in the network is effectively replicating the server machine, so the issue is heavily minimized.
  • Legacy operating systems: Regional Gardens Ltd also faces the issue of legacy systems in its IT infrastructure. The organization wants these legacy systems to run a modern operating system just like the rest of the computers in the company. However, this is not natively possible, as the hardware specifications of the legacy systems are not adequate to run a modern OS. Using virtualization, such legacy systems can use a virtually provided modern OS that is physically installed on the server. This will allow Regional Gardens Ltd to keep making use of its legacy systems.
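To illustrate the consolidation point above, the short calculation below estimates how many virtualization hosts could absorb the load of a fleet of lightly used servers. Every figure in it (server count, utilisation levels) is a hypothetical assumption for illustration, not data from Regional Gardens Ltd.

```python
# Back-of-the-envelope consolidation estimate. All numbers are
# hypothetical illustrations, not figures from the case study.
import math

physical_servers = 30       # existing lightly loaded machines (assumed)
avg_utilisation = 0.15      # average utilisation per machine (assumed)
target_utilisation = 0.70   # safe utilisation target per virtualization host

# Total work expressed in "fully busy server" equivalents: 30 * 0.15 = 4.5.
total_load = physical_servers * avg_utilisation
# 4.5 / 0.70 = 6.43, so seven hosts cover the consolidated load.
hosts_needed = math.ceil(total_load / target_utilisation)

print(f"{physical_servers} servers consolidate onto {hosts_needed} hosts")
```

Under these assumed figures, thirty machines' work fits on seven virtualization hosts, which is the kind of hardware saving the consolidation bullet describes.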

Disaster Recovery in Virtualized Infrastructure

In the virtualized data centre and company systems, the computers run an OS that is physically installed only on the server machine. All the information stored on any of the computers in the organization is actually stored on the server. This approach reduces the risk of losing data through damage to an individual device. At the same time, however, it increases the organization's dependence on a single server machine, which virtually provides its own OS instances to the different computers in the network and holds all the data.

Because the data is stored only on the server, the organization becomes vulnerable to disaster if the server machine is compromised or damaged. To overcome this issue and enable disaster recovery in the virtualized infrastructure, the organization must take periodic backups of the data stored on the server machine. For maximum protection of data against a disaster, multiple instances of the recovery backups should be stored in locations that are geographically separated. Storing one of the backup instances on a cloud network is also a good option for the organization.

By using periodic backups, the organization can protect its virtualized systems against disaster. If the server loses all of its stored information, the latest backup file can be used to restore the server, and the restored state will be replicated to all the other computers in the system as well. As the network is virtualized, there is no need to restore the backup on individual computers; restoring it on the server alone is enough to regain full functionality.
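The sketch below shows what the restore half of this plan might look like: select the newest archive from an off-site replica, verify its integrity against a stored checksum, and unpack it on the server. The paths, naming convention and checksum sidecar files are assumptions made for the example.

```python
# Minimal sketch (assumed layout): restore the newest verified backup
# archive from an off-site replica. Paths and the ".sha256" sidecar-file
# convention are assumptions for illustration.
import hashlib
import pathlib
import shutil

BACKUP_DIR = pathlib.Path("/mnt/offsite-replica")   # hypothetical replica
RESTORE_DIR = pathlib.Path("/srv/restored")

def sha256(path: pathlib.Path) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_latest() -> pathlib.Path:
    """Restore the newest verified backup archive; return its path."""
    archives = sorted(BACKUP_DIR.glob("dc-*.tar.gz"))  # date-stamped names sort in order
    if not archives:
        raise FileNotFoundError("no backups found in replica")
    latest = archives[-1]
    # Each archive is assumed to ship with a matching .sha256 digest file.
    expected = latest.with_suffix(latest.suffix + ".sha256").read_text().split()[0]
    if sha256(latest) != expected:
        raise ValueError(f"checksum mismatch for {latest.name}")
    RESTORE_DIR.mkdir(parents=True, exist_ok=True)
    shutil.unpack_archive(str(latest), str(RESTORE_DIR))
    return latest

if __name__ == "__main__":
    print(f"Restored from {restore_latest().name}")
```

The checksum verification matters here: restoring a silently corrupted backup onto the central server would propagate the corruption to every virtual desktop it serves.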

Information Security Changes Required

In a virtual network, all the employees of the company, although working on different computers, are actually working on the same server machine, so security issues must be handled differently. Because every employee uses the same server, security constraints are needed around the information stored on it. Some of the required security changes are listed below:

  • Using different OS instances: Instead of providing the same OS instance to all employees, the virtual server should run a different OS instance for each computer.
  • Using employee user accounts with set privileges: Every employee of Regional Gardens Ltd should be given a separate OS account to log in to the computer, and each account should have its own access privileges based on the employee's position in the company.
  • Restricted access to stored files: The server should apply access-control logic that restricts access to files created by other users. This ensures that information confidentiality is maintained even in a shared resource and storage scenario.
  • Encrypting the system: For added protection, the files stored on the system should be kept encrypted (a minimal sketch follows this list).
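As a concrete illustration of the encryption change, the sketch below encrypts and decrypts a file on the shared server using symmetric keys from the Python cryptography package's Fernet recipe. The file name and the in-memory key handling are assumptions for the example; in practice the key would live in a proper key store, and any vetted encryption scheme would serve.

```python
# Minimal sketch (assumed scheme): encrypt files at rest on the shared
# server with Fernet symmetric encryption from the `cryptography`
# package. File names and key handling are illustrative assumptions.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(plain: Path, key: bytes) -> Path:
    """Write an encrypted copy of `plain` and return its path."""
    token = Fernet(key).encrypt(plain.read_bytes())
    out = plain.with_suffix(plain.suffix + ".enc")
    out.write_bytes(token)
    return out

def decrypt_file(enc: Path, key: bytes) -> bytes:
    """Return the decrypted contents of an encrypted file."""
    return Fernet(key).decrypt(enc.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()         # in practice, keep in a key vault
    doc = Path("nursery-accounts.txt")  # hypothetical confidential file
    doc.write_bytes(b"sample confidential record")
    enc = encrypt_file(doc, key)
    assert decrypt_file(enc, key) == b"sample confidential record"
    print(f"Encrypted copy written to {enc}")
```

Combined with per-user accounts and file-access restrictions from the list above, encryption at rest means that even a leaked copy of the server's storage does not expose readable employee data.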

Conclusion

Regional Gardens Ltd is facing issues with legacy systems and with the high cost of maintaining a data centre that is being used to its limits. The management's idea of using virtualization to reduce the cost of cooling the data centre while keeping the legacy systems useful has been analysed from different perspectives. Implementing virtual systems in the company infrastructure will help the organization better manage its resources and legacy systems, as well as reduce data centre maintenance costs.

The issues of information security and recovery processes are also discussed in the report, and it is found that the approaches described will not create significant overhead for the organization.

References

Cole, A 2014, Is the Virtual Data Center Inevitable?, IT Business Edge, viewed 15 May 2014.

Harding, C 2011, Cloud Computing for Business: The Open Group Guide, Van Haren Publishing.

Kusnetzky, D 2011, Virtualization: A Manager’s Guide, O’Reilly Media, Inc.

Portnoy, M 2012, Virtualization Essentials, John Wiley & Sons.

Waschke, M 2012, Cloud Standards: Agreements That Hold Together Clouds, Apress.

Willcocks, L, Venters, W & Whitley, E 2013, Moving to the Cloud Corporation: How to Face the Challenges and Harness the Potential of Cloud Computing, Palgrave Macmillan.

Wolf, C & Halter, E 2005, Virtualization: From the Desktop to the Enterprise, Apress.
