This document analyzes and compares the performance of network security protocols on open source (Linux) and Microsoft Windows platforms. A network simulator tool was used to simulate different scenarios and evaluate selected performance metrics of security protocols such as IPSec and SSL across both platforms. The results showed measurable differences in performance parameter values between the platforms, but these variations were not significant enough to indicate a major impact of the security protocols on operating system performance.
This document summarizes a seminar on computer network security given on November 22, 2012. It discusses the OSI model layers and security perspectives for each layer. The layers covered are the physical, data link, network, transport, session, presentation, and application layers. Common attacks are listed for each layer such as packet sniffing for the data link layer and SQL injection for the application layer. The document concludes with a reminder that social engineering is also an important security issue.
network security, group policy and firewalls (Sapna Kumari)
The document discusses network security and firewalls. It defines network security as controlling unwanted intrusion and damage to computer networks. It then outlines security objectives like confidentiality, integrity, and availability. It also discusses group policy for centralized management of operating systems and user settings in Active Directory environments. Finally, it describes different types of firewalls like packet filters, application proxies, and stateful inspection firewalls that act as security barriers between network segments.
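The packet-filter firewall type mentioned above can be sketched as a first-match rule table. The rules, addresses, and field names below are invented for illustration; they are not from the document.

```python
# Minimal packet-filter sketch: first matching rule wins, default deny.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

# Each rule is (predicate, action); evaluated in order.
RULES = [
    (lambda p: p.protocol == "tcp" and p.dst_port == 80, "allow"),
    (lambda p: p.protocol == "tcp" and p.dst_port == 443, "allow"),
    (lambda p: p.src_ip.startswith("10."), "deny"),  # drop spoofed internal sources
]

def filter_packet(packet: Packet) -> str:
    """Return 'allow' or 'deny' for a packet; unmatched packets are denied."""
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return "deny"

print(filter_packet(Packet("203.0.113.5", "192.0.2.1", 443, "tcp")))  # allow
print(filter_packet(Packet("203.0.113.5", "192.0.2.1", 23, "tcp")))   # deny
```

Stateful inspection firewalls extend this idea by also tracking connection state, and application proxies terminate the connection and inspect payloads.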
A Data Hiding Techniques Based on Length of English Text using DES and Attack... (IJORCS)
Comparing recent proposals for multimedia-application network security remains an important topic for researchers. Security concerns both wired and wireless communication. A network is a large system consisting of many similar parts connected together to allow movement or communication between the parts, or between the parts and a control center. The main components of a networked information system are end systems (terminals, servers) and intermediate systems (hubs, switches, gateways). Every node has its own set of vulnerabilities, which may relate to hardware, software, the protocol stack, and so on. Nodes are interconnected by a physical medium, for example cables in a wired Local Area Network (LAN) or radio waves (Wi-Fi) in a Wireless Local Area Network (WLAN). Some nodes provide services (FTP, HTTP browsing, database access). If two nodes want to communicate, they must be interconnected both physically and logically. Network security also covers information hiding techniques. Nowadays security must deal with heterogeneous networks: mixes of wireless and wired networks running on different platforms. Designing network security for such heterogeneous networks is a difficult task.
The document summarizes network security at the seven layers of the OSI model. It describes attacks that can occur at each layer, from the application layer down to the physical layer. It also lists some common countermeasures that can be implemented at each layer to enhance security, such as virus scanners, encryption protocols, access control systems, and virtual private networks. Overall, implementing additional security controls and limiting unnecessary access helps strengthen defenses across all layers of the OSI model.
This document summarizes a research paper that classifies different types of networks and discusses their associated security issues. It categorizes networks based on size (LAN, MAN, WAN), design (peer-to-peer, client-server, standalone), layering (layered, non-layered), and provides examples such as Ethernet, Wi-Fi, VPNs. It also discusses common security threats for different network types like viruses, denial of service attacks, and evaluates security measures including encryption, firewalls, access control. The paper aims to provide a comprehensive classification of networks and analyze how security needs vary depending on the network and software development stages.
This document provides background information on the history and importance of network security. It discusses how the advent of the internet led to security becoming a major concern, as the internet's architecture allowed for many security threats. The document outlines the internet and network security timeline, from the creation of the ARPANET in 1969 to the crimes of Kevin Mitnick in the 1990s that heightened awareness of information security. It also examines the differences between data security and network security, and how a layered security model corresponds to the OSI model layers.
RESOLVING NETWORK DEFENSE CONFLICTS WITH ZERO TRUST ARCHITECTURES AND OTHER E... (IJNSA Journal)
Network defense implies a comprehensive set of software tools to preclude malicious entities from conducting activities such as exfiltration of data, theft of credentials, blocking of services, and other nefarious activities. For most enterprises at this time, that defense builds upon the fortress approach. Many of the requirements are based on inspection and reporting prior to delivery of the communication to the intended target. These inspections require decryption of packets, which implies that the defensive suite either impersonates the requestor or has access to the private cryptographic keys of the servers that are the target of communication. This is in contrast to an end-to-end paradigm, where known good entities can communicate directly and no other entity has access to the content unless that content is provided to them. Many new processes require end-to-end encrypted communication, including distributed computing, endpoint architectures, zero trust architectures, and enterprise-level security. In an end-to-end paradigm, the keys used for authentication, confidentiality, and integrity reside only with the endpoints. This paper examines a formulation that allows unbroken communication while meeting the inspection and reporting requirements of a network defense. This work is part of a broader security architecture termed the Enterprise Level Security (ELS) framework.
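The end-to-end idea above, with keys residing only at the endpoints, can be sketched with a message-authentication example. The key, messages, and use of HMAC-SHA256 here are illustrative assumptions for the general principle, not the paper's actual ELS mechanism.

```python
# Sketch: the shared key exists only at the two endpoints, so no middlebox
# can verify or forge message tags without being given the key.
import hmac
import hashlib
import os

key = os.urandom(32)  # known only to the two endpoints

def seal(message: bytes, key: bytes) -> tuple[bytes, bytes]:
    # Attach an integrity/authenticity tag to the message.
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"quarterly report contents", key)
assert verify(msg, tag, key)                  # the peer endpoint verifies
assert not verify(msg, tag, os.urandom(32))   # an inspector without the key cannot
```

This only shows authentication and integrity; confidentiality would additionally require encryption, and the inspection-compatible formulation the paper proposes is a separate design question.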
A novel approach for Multi-Tier security for XML based documentsIOSR Journals
This document proposes a novel multi-tier security approach for XML-based documents. It discusses applying both digital signatures and encryption at the XML node level to provide integrity, non-repudiation, and role-based access control. Overlapping and sequential digital signatures can authorize a document signed by multiple parties. Encryption of specific XML nodes means different users only see allowed document sections. This approach aims to improve security for electronic documents beyond current proprietary formats.
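The node-level idea can be illustrated with a toy sketch. Real implementations follow the W3C XML Signature and XML Encryption specifications; the role key and per-element HMAC digest below are illustrative stand-ins for an actual digital signature.

```python
# Toy sketch of node-granular protection for an XML document: each role
# "signs" only the elements it is responsible for.
import hmac
import hashlib
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<contract><terms>Net 30</terms><salary>70000</salary></contract>"
)

def node_tag(elem: ET.Element, key: bytes) -> str:
    # Real XML-DSig canonicalizes the node first; plain serialization
    # is used here only to keep the sketch short.
    data = ET.tostring(elem)
    return hmac.new(key, data, hashlib.sha256).hexdigest()

hr_key = b"hr-department-key"  # hypothetical role key

# HR protects only the salary node; other roles could tag other nodes,
# giving overlapping, multi-party authorization over one document.
sig = node_tag(doc.find("salary"), hr_key)

# Later verification: recompute over the current node and compare.
assert hmac.compare_digest(sig, node_tag(doc.find("salary"), hr_key))
```

Node-level encryption would work analogously: replacing a node's content with ciphertext means only holders of that node's key see that section.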
This document summarizes the key topics covered in a class on network security. It introduces common security concepts like authentication, access control, data confidentiality and integrity. It also discusses common security threats like passive attacks, active attacks, and security services defined by the ITU-T standard X.800. The document provides examples of security mechanisms and an outline of the topics to be covered, including a whirlwind tour of computer networks and an anatomy of an attack in five phases.
The document discusses the OSI security architecture and common network security threats and defenses. It begins with an introduction to the OSI security architecture proposed by ITU-T as a standard for defining and providing security across network layers. It then discusses (1) specific security mechanisms like encryption and digital signatures and pervasive mechanisms like security audits; (2) common passive and active security threats like eavesdropping and denial of service attacks; and (3) that passive attacks focus on prevention while active attacks require detection and recovery. It concludes with exercises asking about these topics.
Information security has evolved from securing physical access to mainframes during World War II to modern concerns over networked and digital assets. It began with physical controls but now addresses software, data, networks and more. Effective security requires balancing protection with reasonable access and is best achieved through a structured methodology like SecSDLC that considers security in all phases from analysis to maintenance. Information security seeks to preserve the confidentiality, integrity and availability of information through technical, operational and personnel countermeasures.
Security Key Management Model for Low Rate Wireless Personal Area Networks (CSCJournals)
Networks of IEEE 802.15.4-based devices, known as LR-WPANs (Low Rate Wireless Personal Area Networks), are characterized by low computation, memory, and storage capacity, and they do not rely on an infrastructure. This makes them dynamic and easy to deploy but, on the other hand, very vulnerable to security issues: because they are low-energy devices they cannot implement current security solutions, and because they are deployed in non-secure environments they are susceptible to eavesdropping attacks. Most proposed solutions leave the bootstrapping and commissioning phases unsecured, since the probability of an intruder being present at that time is very low. In this paper, we propose a security model for LR-WPANs based on symmetric cryptography that also secures the bootstrapping phase, together with an analysis of the proposal's effectiveness and the measures required for its implementation.
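One common way a bootstrapping phase can be secured with symmetric cryptography is a factory-provisioned master key plus per-node session-key derivation. The sketch below is a hedged illustration of that general pattern; the master key, node identifier, nonce, and HMAC-based derivation are assumptions, not the paper's actual protocol.

```python
# Sketch: derive a per-node session key during commissioning so the
# master key itself is never transmitted over the air.
import hmac
import hashlib
import os

MASTER_KEY = os.urandom(16)  # provisioned into devices before deployment

def derive_session_key(master: bytes, node_id: bytes, nonce: bytes) -> bytes:
    # HMAC-based derivation: an eavesdropper who sees node_id and nonce
    # still cannot compute the session key without the master key.
    return hmac.new(master, node_id + nonce, hashlib.sha256).digest()[:16]

nonce = os.urandom(8)  # exchanged in the clear at commissioning time

# Coordinator and node each derive the same 128-bit key independently.
k_coordinator = derive_session_key(MASTER_KEY, b"node-07", nonce)
k_node = derive_session_key(MASTER_KEY, b"node-07", nonce)
assert k_coordinator == k_node
```

A fresh nonce per commissioning run keeps session keys distinct even for the same node identifier, which limits the damage if one session key leaks.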
Building Trust Despite Digital Personal Devices (Javier González)
Talk given at OpenIT (Tech talks at IT University of Copenhagen) in 2014. The talk covers different aspects of how to protect our privacy when using personal devices.
A comparative analysis of wireless security protocols (WEP and WPA2) (pijans)
Wireless local area networks (WLANs) have become popular because they are fast, cost-effective, flexible, and easy to use. They pose security challenges, however, and for IT administrators the choice of security protocol is a critical issue. The main aim of this paper is to make the non-specialist reader knowledgeable about threats in wireless security and aware of the shortcomings of the wireless security protocols. The WEP (Wired Equivalent Privacy), WPA (Wi-Fi Protected Access), and RSN (Robust Security Network) security protocols are defined and examined here, and compared against common attacks.
This paper is a comparative analysis of WEP, WPA, and WPA2. We tried to break the authentication of all three protocols by applying well-known attack scripts, namely the Aircrack suite of tools. The test was conducted on the BackTrack operating system, which is considered a dedicated pentesting operating system. In the test results, we found that WEP is the weakest, WPA was a temporary solution, and WPA2 is a solid, long-term solution.
This paper combines wireless security weaknesses with countermeasures to the problems faced until recently. After reading it, the non-specialist reader will have a complete overview of wireless security and the vulnerabilities involved.
Efficient Data Aggregation in Wireless Sensor Networks (IJAEMSJORNAL)
Sensor network is a term used to refer to a heterogeneous system combining tiny sensors and actuators with general- and special-purpose processors. Sensor networks are expected to grow in size to include hundreds or thousands of low-power, low-cost, static or mobile nodes. This work builds on the observation that in any densely deployed sensor network there is high redundancy in the information gathered from sensor nodes that are close to each other. We have exploited this redundancy and designed schemes to secure different kinds of aggregation processing against both inside and outside attacks.
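The redundancy-exploiting aggregation described above can be sketched as a robust in-network aggregate over a cluster of nearby sensors. The node ids, readings, and the choice of the median as the robust aggregate are illustrative assumptions, not the paper's specific scheme.

```python
# Sketch: aggregate redundant readings from a dense sensor cluster in a way
# that tolerates a minority of falsified values (a possible inside attack).
from statistics import median

# Temperature readings from five co-located nodes; n5 reports an outlier.
cluster = {"n1": 21.4, "n2": 21.5, "n3": 21.6, "n4": 21.5, "n5": 98.0}

def robust_aggregate(readings: dict[str, float]) -> float:
    # The median resists a minority of falsified values far better than the
    # mean, which the single outlier would drag upward.
    return median(readings.values())

print(robust_aggregate(cluster))  # 21.5
```

Aggregating once per cluster instead of forwarding every reading also saves the radio energy that dominates a sensor node's power budget.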
This document provides an introduction and overview of key concepts in computer and network security. It defines three main security goals of confidentiality, integrity and availability. It also discusses common security attacks that threaten these goals and security services and mechanisms to protect against attacks. Finally, it introduces cryptography and steganography as two main techniques used to implement security mechanisms.
The Design of Convoluted Kernel Architectural Framework for Trusted Systems –... (rahulmonikasharma)
This paper presents an overview of the Convoluted Kernel Architectural framework and a comparative study against the traditional Linux kernel. The architecture is specially designed for trusted server environments. It has an integrated layer consisting of a customized Unified Threat Management (UTM) component and the Stealth-Obfuscation OK Authentication algorithm, a highly improved and novel zero-knowledge authentication algorithm, providing a secure web gateway into kernel mode. The framework uses a combined monolithic and microkernel (hybrid) architecture, code-named "the integrated approach", to trade in the benefits of both designs. The architecture serves as the base framework for the Trust Resilient Enhanced Network Defense Operating System (TREND-OS), currently being experimented with in the lab. The aim is to develop an architecture that can protect the kernel against itself and against applications.
This document discusses the Address Resolution Protocol (ARP) and its use in intrusion detection systems. It proposes a standardized 64-byte ARP protocol structure to more easily capture ARP packets from a network. The structure includes fields for frame information, destination and source addresses, ARP type details, and sender/target MAC and IP addresses. This standardized structure could be integrated into network monitoring to help detect intrusions without affecting normal data transfer processes. Overall, the document aims to optimize the ARP sequence for use in intrusion detection systems.
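A fixed-size ARP record like the one proposed could be parsed along these lines. The paper's exact field layout is not given here, so the sketch below assumes the standard Ethernet-header-plus-ARP field order (RFC 826), padded to 64 bytes; treat the offsets as an assumption.

```python
# Sketch: unpack a fixed 64-byte ARP record for intrusion-detection use.
import struct

# Ethernet (14 B) + ARP for IPv4 over Ethernet (28 B) = 42 B, padded to 64 B.
ARP_FMT = "!6s6sHHHBBH6s4s6s4s"

def parse_arp(frame: bytes) -> dict:
    assert len(frame) == 64, "expects the fixed 64-byte record"
    (dst, src, ethertype, htype, ptype, hlen, plen, op,
     sha, spa, tha, tpa) = struct.unpack(ARP_FMT, frame[:42])
    return {
        "opcode": op,                         # 1 = request, 2 = reply
        "sender_mac": sha.hex(":"),
        "sender_ip": ".".join(map(str, spa)),
        "target_ip": ".".join(map(str, tpa)),
    }

# Build a sample ARP request (broadcast who-has 192.168.1.1), padded to 64 B.
raw = struct.pack(
    ARP_FMT,
    b"\xff" * 6, bytes.fromhex("aabbccddeeff"), 0x0806,  # Ethernet header
    1, 0x0800, 6, 4, 1,                                  # ARP header, request
    bytes.fromhex("aabbccddeeff"), bytes([192, 168, 1, 10]),
    b"\x00" * 6, bytes([192, 168, 1, 1]),
).ljust(64, b"\x00")

info = parse_arp(raw)
print(info["opcode"], info["sender_ip"], info["target_ip"])  # 1 192.168.1.10 192.168.1.1
```

A fixed record size is what makes this convenient for an IDS: every capture slot has the same length, so sender/target pairs can be indexed and compared without variable-length parsing.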
This document discusses various network security mechanisms including firewalls, intrusion detection systems, encryption, authentication, and wireless security. It covers Cisco router security strategies for the different network planes (data, control, management, service). It also discusses Windows server security topics such as centralized user authentication, group policy, and the roles of DNS, DHCP, FTP, VPN, and ISA servers. Wireless security standards, topologies, and attacks are explained as well as protocols like WEP, WPA, and WPA2.
The document discusses securing symmetric key distribution in a network through implementing identity-based cryptography. It proposes a new security scheme that overcomes limitations of public- and symmetric-key protocols by using a one-way hash function for data authenticity between nodes and a mix of symmetric and public key cryptography for data confidentiality. The scheme guarantees secure communication between in-network nodes using symmetric keys, and secure data delivery between source and sink nodes using public keys. Testing shows the scheme is scalable and provides strong security while maintaining efficiency.
Solving Downgrade and DoS Attack Due to the Four Ways Handshake Vulnerabiliti... (Dr. Amarjeet Singh)
The growing volume of attacks on the Internet has increased the demand for more robust systems and sophisticated tools for vulnerability analysis, intrusion detection, forensic investigation, and possible responses. Current hacker tools and technologies warrant re-engineering to address cybercrime and homeland security. Being aware of the flaws in a network is necessary to secure the information infrastructure, through gathering network topology, intelligence, internal/external vulnerability analysis, and penetration testing. The main objective of this paper is to minimize damage and prevent attackers from exploiting weaknesses and vulnerabilities in the four-way handshake (Wi-Fi).
We also present a detailed study of various attacks and some solutions to avoid or prevent such attacks in WLANs.
The document discusses network security terminology such as threats, attacks, risk analysis, and cryptography. It defines common threats like spoofing, tampering, repudiation, and denial-of-service attacks. The document also outlines the steps for performing risk analysis and includes an exercise asking questions about finding, removing, and preventing vulnerabilities.
Access controls are critical security measures to prevent unauthorized access to information, as demonstrated by a former FBI agent who sold US secrets to the Soviet Union by exploiting the access privileges to various levels of classified information granted by his role. Proper implementation of access controls through user accounts, roles, profiles, attributes, and privileges is important to establish checks and balances that can prevent espionage and protect resources from misuse.
A novel approach for Multi-Tier security for XML based documentsIOSR Journals
This document proposes a novel multi-tier security approach for XML-based documents. It discusses applying both digital signatures and encryption at the XML node level to provide integrity, non-repudiation, and role-based access control. Overlapping and sequential digital signatures can authorize a document signed by multiple parties. Encryption of specific XML nodes means different users only see allowed document sections. This approach aims to improve security for electronic documents beyond current proprietary formats.
RESOLVING NETWORK DEFENSE CONFLICTS WITH ZERO TRUST ARCHITECTURES AND OTHER E...IJNSA Journal
Network defense implies a comprehensive set of software tools to preclude malicious entities from conducting activities such as exfiltration of data, theft of credentials, blocking of services and other nefarious activities. For most enterprises at this time, that defense builds upon a clear concept of the fortress approach. Many of the requirements are based on inspection and reporting prior to delivery of the communication to the intended target. These inspections require decryption of packets and this implies that the defensive suite either impersonates the requestor, or has access to the private cryptographic keysof the servers that are the target of communication. This is in contrast to an end-to-end paradigm where known good entities can communicate directly and no other entity has access to the content unless that content is provided to them. There are many new processes that require end-to-end encrypted communication, including distributed computing, endpoint architectures, and zero trust architectures and enterprise level security. In an end-to-end paradigm, the keys used for authentication, confidentiality, and integrity reside only with the endpoints. This paper examines a formulation that allows unbroken communication, while meeting the inspection and reporting requirements of a network defense. This work is part of a broader security architecture termed Enterprise Level Security (ELS)framework.
This document summarizes the key topics covered in a class on network security. It introduces common security concepts like authentication, access control, data confidentiality and integrity. It also discusses common security threats like passive attacks, active attacks, and security services defined by the ITU-T standard X.800. The document provides examples of security mechanisms and an outline of the topics to be covered, including a whirlwind tour of computer networks and an anatomy of an attack in five phases.
The document discusses the OSI security architecture and common network security threats and defenses. It begins with an introduction to the OSI security architecture proposed by ITU-T as a standard for defining and providing security across network layers. It then discusses (1) specific security mechanisms like encryption and digital signatures and pervasive mechanisms like security audits; (2) common passive and active security threats like eavesdropping and denial of service attacks; and (3) that passive attacks focus on prevention while active attacks require detection and recovery. It concludes with exercises asking about these topics.
Information security has evolved from securing physical access to mainframes during World War II to modern concerns over networked and digital assets. It began with physical controls but now addresses software, data, networks and more. Effective security requires balancing protection with reasonable access and is best achieved through a structured methodology like SecSDLC that considers security in all phases from analysis to maintenance. Information security seeks to preserve the confidentiality, integrity and availability of information through technical, operational and personnel countermeasures.
Security Key Management Model for Low Rate Wireless Personal Area NetworksCSCJournals
IEEE 802.15.4-based devices networks known by the name of LR-WPAN (Low Rate Wireless Personal Area Network) are characterized by low computation, memory and storage space, and they do not possess an infrastructure. This makes them dynamic and easy to deploy, but in the other hand, this makes them very vulnerable to security issues, as they are low energy so they cant implement current security solutions, and they are deployed in non-secure environments that makes them susceptible to eavesdropping attacks. Most proposed solutions draw out the security of the bootstrapping and commissioning phases as the percentage of existing of an intruder in this time is very low. In this paper, we propose a security model for LR-WPANs based on symmetric cryptography, which takes into account securing the bootstrapping phase, with an analysis of the effectiveness of this proposal and the measures of its implementation.
Building Trust Despite Digital Personal DevicesJavier González
Talk given at OpenIT (Tech talks at IT University of Copenhagen) in 2014. The talk covers different aspects of how to protect our privacy when using personal devices.
A comparitive analysis of wireless security protocols (wep and wpa2)pijans
Wireless local area networks (WLANs) are become popular as they are fast, cost effective, flexible and easy
to use. There are some challenges of security and for IT administrators the choice of security protocol is a
critical issue. The main motive of this paper is to make the non-specialist reader knowledgeable about
threats in the wireless security and make them aware about the disadvantages of wireless security
protocols. WEP (Wired Equivalent privacy), WPA (Wi-Fi Protected Access) and RSN (Robust Security
Network) security protocols are defined and examined here. This security protocols are compared with the
common.
This paper is a comparative analysis of WEP, WPA and WPA2. We have tried to perform and check
authentication of all 3 protocols by implying the legendary attack vector scripts i.e. Air crack set of tools.
The test was conducted on Back Track operating system which is considered as dedicated pentesting
operating system. In the test result, we found out that WEP is the weakest, to which WPA was a temporary
solution and WPA2 is a very solid and long term solution.
This paper is a mixture of wireless security weaknesses and counter measures to the problems faced until
recently. After reading this paper the non specialist reader will have complete review and awareness about
the wireless security and vulnerabilities involved with it.
Efficient Data Aggregation in Wireless Sensor NetworksIJAEMSJORNAL
Sensor network is a term used to refer to a heterogeneous system combining tiny sensors and actuators with general/special-purpose processors. Sensor networks are assumed to grow in size to include hundreds or thousands of low-power, low-cost, static or mobile nodes. This system is created by observing that for any densely deployed sensor network, high redundancy exists in the gathered information from the sensor nodes that are close to each other we have exploited the redundancy and designed schemes to secure different kinds of aggregation processing against both inside and outside attacks.
This document provides an introduction and overview of key concepts in computer and network security. It defines three main security goals of confidentiality, integrity and availability. It also discusses common security attacks that threaten these goals and security services and mechanisms to protect against attacks. Finally, it introduces cryptography and steganography as two main techniques used to implement security mechanisms.
The Design of Convoluted Kernel Architectural Framework for Trusted Systems –...rahulmonikasharma
This paper presents an overview of the Convoluted Kernel Architectural framework and a comparative study with the traditional Linux kernel. The architecture is specially designed for trusted server environments. It has an integrated layer comprising a customized Unified Threat Management (UTM) and the Stealth-Obfuscation OK Authentication algorithm, a highly improved and novel zero-knowledge authentication algorithm, providing a secure web gateway to kernel mode. The framework uses a combined monolithic and microkernel-based (hybrid) architecture, code-named the integrated approach, to trade in the benefits of both designs. The architecture serves as the base framework for the Trust Resilient Enhanced Network Defense Operating System (TREND-OS) currently being experimented with in the lab. The aim is to develop an architecture that can protect the kernel against itself and against applications.
This document discusses the Address Resolution Protocol (ARP) and its use in intrusion detection systems. It proposes a standardized 64-byte ARP protocol structure to more easily capture ARP packets from a network. The structure includes fields for frame information, destination and source addresses, ARP type details, and sender/target MAC and IP addresses. This standardized structure could be integrated into network monitoring to help detect intrusions without affecting normal data transfer processes. Overall, the document aims to optimize the ARP sequence for use in intrusion detection systems.
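For readers unfamiliar with the layout the paper builds on, the sketch below assembles the classic Ethernet + ARP frame: a 14-byte Ethernet header, a 28-byte ARP payload, and padding up to the 60-byte Ethernet minimum (64 bytes once the 4-byte FCS is appended on the wire, matching the 64-byte figure above). The MAC and IP addresses are made up for illustration.

```python
# Sketch of a minimum-size ARP request frame using Python's struct module.
import struct

def arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    bcast = b"\xff" * 6
    eth = struct.pack("!6s6sH", bcast, src_mac, 0x0806)   # EtherType = ARP
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # MAC / IPv4 address lengths
        1,            # opcode: request ("who-has")
        src_mac, src_ip,
        b"\x00" * 6,  # target MAC unknown in a request
        target_ip,
    )
    frame = eth + arp                                # 14 + 28 = 42 bytes
    return frame + b"\x00" * (60 - len(frame))       # pad to Ethernet minimum

frame = arp_request(b"\xaa" * 6, bytes([192, 168, 1, 10]), bytes([192, 168, 1, 1]))
assert len(frame) == 60                     # 64 bytes with the trailing FCS
assert frame[12:14] == b"\x08\x06"          # EtherType marks it as ARP
assert frame[20:22] == b"\x00\x01"          # opcode: request
```

A monitor that parses frames into these fixed offsets can count gratuitous or conflicting ARP replies, which is the basis of the intrusion-detection use the document describes.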
This document discusses various network security mechanisms including firewalls, intrusion detection systems, encryption, authentication, and wireless security. It covers Cisco router security strategies for the different network planes (data, control, management, service). It also discusses Windows server security topics such as centralized user authentication, group policy, and the roles of DNS, DHCP, FTP, VPN, and ISA servers. Wireless security standards, topologies, and attacks are explained as well as protocols like WEP, WPA, and WPA2.
The document discusses securing symmetric key distribution in a network through implementing identity-based cryptography. It proposes a new security scheme that overcomes limitations of public- and symmetric-key protocols by using a one-way hash function for data authenticity between nodes and a mix of symmetric and public key cryptography for data confidentiality. The scheme guarantees secure communication between in-network nodes using symmetric keys, and secure data delivery between source and sink nodes using public keys. Testing shows the scheme is scalable and provides strong security while maintaining efficiency.
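The "one-way hash function for data authenticity between nodes" step can be sketched with a keyed hash (HMAC), shown below under the assumption that two nodes already share a symmetric key; the key and message are invented for illustration and this is not the paper's exact construction.

```python
# Minimal sketch of hash-based data authenticity between two in-network
# nodes that share a symmetric key.
import hashlib
import hmac

shared_key = b"node-A/node-B pairwise key"   # hypothetical symmetric key

def tag(message: bytes) -> bytes:
    # One-way keyed hash: only holders of shared_key can produce or verify it.
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    # Constant-time comparison avoids leaking where the tags differ.
    return hmac.compare_digest(tag(message), mac)

msg = b"sensor reading: 21.5C"
mac = tag(msg)
assert verify(msg, mac)                  # untampered message authenticates
assert not verify(msg + b"!", mac)       # any modification is detected
```

Confidentiality would then be layered on top, with symmetric keys between in-network nodes and public keys on the source-to-sink path, as the scheme summarised above proposes.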
Solving Downgrade and DoS Attack Due to the Four Ways Handshake Vulnerabiliti...Dr. Amarjeet Singh
The growing volume of attacks on the Internet has increased the demand for more robust systems and sophisticated tools for vulnerability analysis, intrusion detection, forensic investigation, and possible responses. Current hacker tools and technologies warrant re-engineering to address cybercrime and homeland security. Awareness of the flaws on a network is necessary to secure the information infrastructure, by gathering network topology and intelligence, performing internal/external vulnerability analysis, and penetration testing. The main objective of this paper is to minimize damage and prevent attackers from exploiting weaknesses and vulnerabilities in the Wi-Fi four-way handshake.
We also present a detailed study of various attacks and some solutions to avoid or prevent such attacks in WLANs.
The document discusses network security terminology such as threats, attacks, risk analysis, and cryptography. It defines common threats like spoofing, tampering, repudiation, and denial-of-service attacks. The document also outlines the steps for performing risk analysis and includes an exercise asking questions about finding, removing, and preventing vulnerabilities.
Access controls are critical security measures to prevent unauthorized access to information, as demonstrated by a former FBI agent who sold US secrets to the Soviet Union by exploiting the access privileges to various levels of classified information granted by his role. Proper implementation of access controls through user accounts, roles, profiles, attributes, and privileges is important to establish checks and balances that can prevent espionage and protect resources from misuse.
This document provides an overview of Linux security and auditing. It discusses the history and architecture of Linux, important security concepts like physical security, operating system security, network security, file system security and user/group security. It also describes various Linux security tools that can be used for tasks like vulnerability scanning, auditing, intrusion detection and password cracking.
This document discusses Linux network security and the xFirewall program. It provides an overview of Linux and its networking capabilities. It then describes iptables, the built-in Linux firewall, and xFirewall, a user-friendly frontend for iptables. xFirewall detects network attacks and logs unauthorized access based on allowed ports in its configuration file. The document shows nmap scan results for a system running xFirewall, demonstrating that it only allows connections to specified open ports and blocks other ports from being discovered.
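As a rough illustration of the kind of ruleset an iptables frontend generates, the fragment below applies a default-deny inbound policy, permits only explicitly listed ports, and logs everything else so port scans show up in the logs. The port choices are examples, not taken from xFirewall's configuration file.

```shell
# Illustrative iptables policy: default-deny inbound, allow listed ports only.
iptables -P INPUT DROP                                  # default policy: drop
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT                       # loopback traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT           # SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT           # HTTP
iptables -A INPUT -j LOG --log-prefix "DROPPED: "       # record probes (nmap)
```

With such a ruleset, an nmap scan reports only the explicitly opened ports, matching the behaviour the document describes.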
This document discusses securing Linux systems and applications as a developer. It begins by outlining common security risks like weak passwords, lack of input validation, and unintended data exposure. It then provides strategies to improve security in three levels: basics like validation and encryption; taking ownership of code and systems; and performing security audits. Specific techniques are covered like hardening operating systems, software, and network configuration. The document recommends using the Lynis security auditing tool for its flexibility and simplicity. It concludes by discussing the importance of continuous auditing and leveraging security to save time instead of crisis management.
This document discusses basic Linux system security. It recommends securing physical access to machines, using the principle of least privilege by limiting accounts, ports, and applications. It also recommends strong passwords, closing unnecessary ports, encrypting network connections, keeping software updated, using intrusion detection, and advanced techniques like auditing OSes and using virtual machines.
This document provides an introduction to Linux security. It covers turning off unnecessary servers and services, limiting access to needed servers using IPTables, updating the system regularly, and reading Linux log files. The document recommends keeping daemons and services disabled or bound to localhost when possible, using tools like netstat, IPTables, and log checking utilities to monitor open ports and system activity. It concludes with a question and answer section and recommends additional security resources.
Linux is considered to be a secure operating system by default. Still there is a lot to learn about system hardening and technical auditing. This 1-hour presentation explains the need for hardening and auditing of your systems. We discussed some additional documents and tools, to further help this endeavor.
This presentation is suitable for both beginners and those with experience in system hardening.
The document summarizes a presentation on network security and Linux security. The presentation covered introduction to security, computer security, and network security. It discussed why security is needed, who is vulnerable, common security attacks like dictionary attacks, denial of service attacks, TCP attacks, and packet sniffing. It also covered Linux security topics like securing the Linux kernel, file and filesystem permissions, password security, and network security using firewalls, IPSEC, and intrusion detection systems. The presentation concluded with a reference to an ID-CERT cybercrime report and a call for questions.
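The dictionary attack mentioned above can be illustrated in a few lines: try each candidate word against a leaked password hash. The wordlist and hash are invented; real attacks use very large lists, which is why real systems should use salted, deliberately slow hashes rather than a bare SHA-256 as here.

```python
# Toy dictionary attack against an unsalted SHA-256 password hash.
import hashlib

def sha256_hex(word: str) -> str:
    return hashlib.sha256(word.encode()).hexdigest()

leaked_hash = sha256_hex("dragon")        # pretend this came from a breach
wordlist = ["123456", "password", "letmein", "dragon", "qwerty"]

# Hash each candidate and compare; stops at the first match, if any.
cracked = next((w for w in wordlist if sha256_hex(w) == leaked_hash), None)
assert cracked == "dragon"
```

Strong, non-dictionary passwords and slow key-derivation functions (bcrypt, PBKDF2) make this enumeration impractical, which is why the presentation pairs password security with the other Linux hardening topics.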
How Many Linux Security Layers Are Enough?Michael Boelen
Talk about Linux security and the related possibilities to secure your systems. Several areas are discussed, like what is possible, how to select the right security measures and tips to implement them.
Some subjects passing by in the presentation are file integrity (IMA/EVM), containers like Docker, virtualization.
The referenced tool Lynis can be downloaded freely from https://cisofy.com/downloads/
Protecting location privacy in sensor networks against a global eavesdropperShakas Technologies
The document discusses techniques for providing location privacy in sensor networks against a global eavesdropper. It proposes four techniques - periodic collection, source simulation, sink simulation, and backbone flooding - to provide location privacy for monitored objects (source location privacy) and data sinks (sink location privacy). These techniques provide trade-offs between privacy, communication cost, and latency. Analysis and simulation demonstrate that the proposed techniques are efficient and effective for providing source and sink location privacy in sensor networks.
Reference Article, 1st published in May 2015, doi: 10.1049/etr.2014.0035 (lorent8)
Reference Article
1st published in May 2015
doi: 10.1049/etr.2014.0035
ISSN 2056-4007
www.ietdl.org
Operating System Security
Paul Hopkins Cyber Security Practice, CGI, UK
Abstract
This article focuses on the security of the operating system, a fundamental component of ICT that enables many different applications to be used on a variety of computing hardware. While the original operating systems for large centralised computing focused their security efforts primarily on separating users, operating system security has had to adapt to cater for a wider range of technology, such as desktop computers, smartphones and cloud platforms, and the different threats that have evolved as a consequence. This article examines some of the core security mechanisms that every operating system needs and the gradual evolution towards offering a more secure platform.
Introduction: What is the Operating System?
All too frequently the words "operating system" conjure up thoughts of Microsoft Windows, made popular as an operating system that enabled desktop computing. However, there have been, and still continue to be, a large number of operating system types and versions in operation [1] for all sorts of devices. These devices range from those designed to work with the mobile phones, tablets and games consoles of the consumer world, through to the servers/laptops, network routers and switches of the IT industry, as well as embedded devices and industrial controllers from industrial engineering. [Dependent upon the hardware architecture, these operating systems can be significantly different to the fuller versions that this paper uses to illustrate the key security mechanisms.]
In essence, the purpose of the operating system is to provide a layer above the hardware execution environment, abstracting away low-level details, such that it appropriately shares and enables access to the multiple hardware components, such as processors, memory, USB devices, network cards, monitors and keyboards. It thus provides an environment in which multiple applications (ranging from advanced weather forecasting through to word processors, games and industrial control processes) can all be potentially executed and accessed by multiple users.
Operating systems have a history and timeline dating back to the development of the first computers in the early 1950s, given that users then also needed a way to execute their applications or programs. Since that time operating systems have adapted to take advantage of increases in the speed and performance of hardware and communications. The changes either enable new functionality and applications or adapt to optimise the performance of certain hardware, such as in the case of telecommunications routers and switches that can have additional networking functions integrated into their operating system. So while the UNIX and Microsoft Windows families of operating systems have dominated …
Eng. Technol. Ref., pp. 1–8; doi: 10.1049/etr.2014.0035
Multilayer security mechanism in computer networks (2)Alexander Decker
This document discusses multilayer security mechanisms in computer networks. It recommends a secure network system that uses security at three layers: application (end-to-end), transport, and network. At each layer, different protocols provide authentication, integrity, confidentiality, and other protections. When combined across layers, vulnerabilities in one layer cannot compromise other layers, strengthening overall security. Popular protocols mentioned for each layer include S/MIME, SSL, and IPSec.
This document discusses common tools used for network reconnaissance, including Wireshark, NetWitness Investigator, OpenVAS, FileZilla, PuTTY, and Zenmap. Wireshark is used to capture network packet data, which is then analyzed by NetWitness Investigator. OpenVAS scans networks remotely for vulnerabilities. FileZilla and PuTTY transfer files securely. Zenmap performs detailed scans to reveal network information, programs, and firewall configurations. Fisheye bubble charts can visually display network activity and relationships between devices. Identifying these tools is important for security experts to understand networks and protect against cyberattacks.
Multilayer security mechanism in computer networksAlexander Decker
This document discusses multilayer security mechanisms in computer networks. It proposes a multilayered security architecture with security at the application layer using techniques like authentication and encryption, security at the transport layer using cryptographic tunnels between nodes, and security at the network IP layer to protect against external attacks. Specifically, it recommends an infrastructure with application layer security for end users, transport layer security for establishing encrypted tunnels, and network layer security to protect the whole system. The goal is for vulnerabilities in one layer not to compromise other layers.
11.multilayer security mechanism in computer networksAlexander Decker
This document discusses multilayer security mechanisms in computer networks. It proposes a multilayered security architecture implemented across three layers: application layer security using techniques like digital signatures and certificates; transport layer security using cryptographic tunnels; and network IP layer security. This layered approach limits the impact of attacks by making the compromise of one layer unable to impact other layers. Application layer security provides end-to-end protection using authentication, signatures, encryption, and hardware tokens. Transport layer security establishes encrypted tunnels between nodes using symmetric cryptography. Network layer security provides bulk protection from external attacks.
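The transport-layer tunnel that all three summaries above describe corresponds to TLS in practice. As a minimal, hedged sketch (no network connection is actually made here, and the hostname is a placeholder), Python's standard ssl module shows what a hardened transport-layer configuration looks like:

```python
# Transport-layer security sketch: Python's ssl module wraps a TCP socket
# in an encrypted, authenticated tunnel between two nodes.
import ssl

ctx = ssl.create_default_context()        # CA-verified client-side context

# Hardened defaults: certificate checking and hostname matching are on,
# so a man-in-the-middle node cannot silently terminate the tunnel.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Usage (not executed here; "example.org" is a placeholder):
#   with socket.create_connection(("example.org", 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
#           tls.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
```

Application-layer signatures (e.g. S/MIME) and network-layer protection (IPSec) then sit above and below this tunnel, so compromising any single layer does not defeat the others, which is the point of the multilayer architecture.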
This document provides an overview of file security systems and encryption techniques. It begins with an introduction to access control and the need to protect important files from unauthorized access. It then reviews 13 relevant research papers on topics like parallel AES encryption on GPUs, key management in secure network file systems, image encryption using color, and evaluations of existing file security systems. The document discusses techniques like separating key management from file security, hybrid encryption algorithms, and performance evaluations of encrypted file systems. Overall, it covers a range of cryptographic techniques and file security systems aimed at securely storing and sharing files.
It is the control of unwanted intrusion into, or damage to, communications on our organization's computer network.
It supports the essential communications that are necessary to the organization's mission and goals.
It includes elements that prevent unwanted activities while supporting desirable activities.
It involves the authorization of access to data in a network, which is controlled by the network administrator.
It is practiced in organizations, enterprises and other types of institutions.
We have evolved an IT system that is ubiquitous, pervasive and integrated into most aspects of our lives. Many of us are working on 4th- and 5th-level refinements in efficiency and functionality. But we stand on the shoulders of those who came before, and this restricts our freedom of action. The prior work has left us with an ecosystem which is the living embodiment of our state of the art. While we work on integration, refinement, broader application and efficiency, the results must move seamlessly into the ecosystem. Fundamental concepts being researched in the lab may rebuild the world we all live in; until that happens, we must work within the ecosystem.
Types of Networks, Week 7 Part 4-IS, Revision Su2013 (willcoxjanay)
Types of Networks
Week7 Part4-IS
RevisionSu2013
Types of Networks
There are different types of networks. Each type has different characteristics and therefore different security needs. Some of the fundamental differentiating attributes of the various types of networks are:
the physical distance the network spans
the topology of the network nodes
the types of media used for communication between nodes in the network
the different devices supported on the network
the different applications supported on the network
the different groups of users permitted on the network
the different protocols supported on each network
Depending on the type of network, there may be different information security requirements, requiring that various protocols, security services and security mechanisms are used in a fashion that supports that type of network.
While each network environment has some characteristics and security needs unique to that environment, there are many security techniques that should be universally applied to all environments. For example: sound policies and procedures, risk assessment of assets, user awareness training, encryption technology, authentication technology, sound credential (password) selection and protection, malware protection, and firewalls are a few security techniques that need to be applied in all networks, albeit in configurations that best suit a particular environment.
Local Area Network (LAN)
A LAN covers a small geographic area and takes advantage of high-speed data transfers, usually implemented over Ethernet or fiber. A LAN could be a home, an office, or a group of buildings in close proximity (a university or business). LANs typically share resources such as file servers and printers.
Wide Area Network (WAN)
A WAN covers a large geographic area and may require connection through satellite, high-speed dedicated lines, or other means. The Internet is a WAN. WANs can connect LANs together into a larger organizational structure that can be used to share resources such as file, email, and DNS servers, to name a few. Resources can be shared over slower connections between geographically separated areas across the WAN.
Wireless Networks and Mobile Networks
The movement to laptop systems at home and in workplaces accelerated the mobility of computing. As employees traveled between offices, client sites, home and various other remote locations, they could remain connected to company servers as long as the remote site had connectivity to the company's intranet. Initially this connectivity was provided by having Ethernet cabling available for remote users to physically plug their laptops into. Eventually, companies started installing wireless hotspots that could be automatically detected by systems that had wireless cards. The proliferation of wireless connectivity and internet use spread from the workplace to genera ...
Linux is poised to replace Windows NT as the dominant server operating system of choice. Linux offers a cheaper, more versatile, scalable, and reliable server solution compared to NT. It meets or exceeds all user requirements provided by NT. As a free and open-source multi-vendor platform, Linux is growing in popularity for network services. Linux will likely surpass NT adoption in most server applications as businesses seek more cost-effective options.
This document discusses network software and protocols. It covers two main types of network software: network operating systems and network protocols. It provides examples of network operating systems like UNIX, Windows NT, Linux, Novell NetWare, and Windows 2000. It also discusses the seven-layer OSI model and provides examples of protocols at the application, transport, and network layers, such as SMTP, FTP, TCP, IP, and SPX.
Integrity and Privacy through Authentication Key Exchange Protocols for Distr...BRNSSPublicationHubI
This document summarizes an article about authentication key exchange protocols for distributed systems. It discusses how authenticated key exchange (AKE) protocols allow users and servers to authenticate each other and generate session keys for secure communication. The document then provides background on network security goals like integrity, availability, and privacy. It also discusses challenges like attacks that can compromise these goals in distributed systems and the need for scalable key exchange protocols.
This document contains a list of probable questions related to operating systems, file systems, networking, Windows commands, and troubleshooting. Some of the topics covered include types of operating systems, differences between FAT and NTFS file systems, Active Directory, firewall types, OSI model layers, and RAID levels. The list provides definitions and explanations for many common computer and networking concepts.
The document lists several probable questions about operating systems and computer security topics. It includes definitions and comparisons of different types of operating systems like real-time, multi-user, multi-tasking, distributed, and embedded operating systems. It also summarizes the differences between FAT and NTFS file systems, enhancements in Windows 2003, defines what an active directory is, describes types of firewalls like network-level, circuit-level, application-level, and stateful multi-level firewalls, and compares hardware and software firewalls.
Resist Dictionary Attacks Using Password Based Protocols For Authenticated Ke...IJERA Editor
A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. In many-to-many communication among multiple tasks, key establishment is a major problem in parallel file systems, so we propose a variety of authenticated key exchange protocols designed to address this issue. In this paper, we also study password-based protocols for authenticated key exchange (AKE) that resist dictionary attacks. Password-based AKE protocols are designed to remain secure even when passwords are drawn from a space so small that an attacker might well enumerate, offline, all possible passwords. While many such protocols have been suggested, the underlying theory has been lagging. We begin by defining a model for this problem that covers password guessing, forward secrecy, server compromise, and loss of session keys.
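The session-key step that these AKE protocols build on can be sketched with a plain (unauthenticated) Diffie-Hellman exchange, shown below with the standard library only. A real password-based AKE additionally binds the password into the exchange so an eavesdropper cannot test dictionary guesses offline; the prime here is a toy value, far too small for real use.

```python
# Toy Diffie-Hellman exchange: both ends derive the same session key from
# public values exchanged in the clear.
import hashlib
import secrets

p = 2**127 - 1      # a known Mersenne prime; much too small for real security
g = 3

a = secrets.randbelow(p - 2) + 2          # client's ephemeral secret
b = secrets.randbelow(p - 2) + 2          # server's ephemeral secret
A, B = pow(g, a, p), pow(g, b, p)         # public values sent over the wire

# Each side combines its secret with the other's public value; the results
# agree because (g^b)^a = (g^a)^b (mod p). Hash down to a fixed-size key.
k_client = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
k_server = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
assert k_client == k_server               # both ends share a session key
```

Without authentication, an active attacker can sit in the middle of this exchange; password-based AKE protocols fix that while still keeping offline dictionary attacks infeasible, which is the design problem the paper's model formalises.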
Characteristics of a network operating systemRon McGary
A network operating system (NOS) controls software and hardware on a network, allowing computers to communicate and share resources. Key characteristics of a NOS include supporting multiple processors and devices, managing security through user authentication and authorization, setting up user accounts and access permissions, providing print and file services, and managing email services. Common NOS software includes Microsoft Windows Server, Mac OS X, and UNIX/Linux.
Rapid increases in information technology have changed existing markets and transformed them from physical markets into e-markets (e-commerce). Along with the e-commerce evolution, enterprises have to find a safer approach for implementing e-commerce and maintaining its logical security. SOA is one of the best techniques to fulfill these requirements. SOA holds the advantage of being easy to use, flexible, and reusable. Alongside these advantages, SOA is also prone to message tampering and unauthorized access. This makes the security technology implementation of e-commerce more difficult than in other engineering sciences. This paper discusses the importance of using SOA in e-commerce and identifies the flaws in the existing security analysis of e-commerce platforms. On the basis of the identified defects, this article also suggests an implementation design of a logical security framework for SOA-supported e-commerce systems.
Similar to Performance evaluation of network security protocols on open source and microsoft windows platforms (20)
Abnormalities of hormones and inflammatory cytokines in women affected with p...Alexander Decker
Women with polycystic ovary syndrome (PCOS) have elevated levels of hormones like luteinizing hormone and testosterone, as well as higher levels of insulin and insulin resistance compared to healthy women. They also have increased levels of inflammatory markers like C-reactive protein, interleukin-6, and leptin. This study found these abnormalities in the hormones and inflammatory cytokines of women with PCOS ages 23-40, indicating that hormone imbalances associated with insulin resistance and elevated inflammatory markers may worsen infertility in women with PCOS.
A usability evaluation framework for b2 c e commerce websitesAlexander Decker
This document presents a framework for evaluating the usability of B2C e-commerce websites. It involves user testing methods like usability testing and interviews to identify usability problems in areas like navigation, design, purchasing processes, and customer service. The framework specifies goals for the evaluation, determines which website aspects to evaluate, and identifies target users. It then describes collecting data through user testing and analyzing the results to identify usability problems and suggest improvements.
A universal model for managing the marketing executives in nigerian banksAlexander Decker
This document discusses a study that aimed to synthesize motivation theories into a universal model for managing marketing executives in Nigerian banks. The study was guided by Maslow and McGregor's theories. A sample of 303 marketing executives was used. The results showed that managers will be most effective at motivating marketing executives if they consider individual needs and create challenging but attainable goals. The emerged model suggests managers should provide job satisfaction by tailoring assignments to abilities and monitoring performance with feedback. This addresses confusion faced by Nigerian bank managers in determining effective motivation strategies.
A unique common fixed point theorems in generalized dAlexander Decker
This document presents definitions and properties related to generalized D*-metric spaces and establishes some common fixed point theorems for contractive type mappings in these spaces. It begins by introducing D*-metric spaces and generalized D*-metric spaces, defines concepts like convergence and Cauchy sequences. It presents lemmas showing the uniqueness of limits in these spaces and the equivalence of different definitions of convergence. The goal of the paper is then stated as obtaining a unique common fixed point theorem for generalized D*-metric spaces.
A trends of salmonella and antibiotic resistanceAlexander Decker
This document provides a review of trends in Salmonella and antibiotic resistance. It begins with an introduction to Salmonella as a facultative anaerobe that causes nontyphoidal salmonellosis. The emergence of antimicrobial-resistant Salmonella is then discussed. The document proceeds to cover the historical perspective and classification of Salmonella, definitions of antimicrobials and antibiotic resistance, and mechanisms of antibiotic resistance in Salmonella including modification or destruction of antimicrobial agents, efflux pumps, modification of antibiotic targets, and decreased membrane permeability. Specific resistance mechanisms are discussed for several classes of antimicrobials.
A transformational generative approach towards understanding al-istifhamAlexander Decker
This document discusses a transformational-generative approach to understanding Al-Istifham, which refers to interrogative sentences in Arabic. It begins with an introduction to the origin and development of Arabic grammar. The paper then explains the theoretical framework of transformational-generative grammar that is used. Basic linguistic concepts and terms related to Arabic grammar are defined. The document analyzes how interrogative sentences in Arabic can be derived and transformed via tools from transformational-generative grammar, categorizing Al-Istifham into linguistic and literary questions.
A time series analysis of the determinants of savings in namibiaAlexander Decker
Network and Complex Systems
ISSN 2224-610X (Paper) ISSN 2225-0603 (Online)
Vol.3, No.7, 2013
www.iiste.org
Performance Evaluation of Network Security Protocols on Open Source and Microsoft Windows Platforms
Oluwaranti A.I. (Corresponding Author)
Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife.
E-mail: aranti@oauife.edu.ng
Adejumo, E.O.
Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife.
E-mail: yinkaadejumo@gmail.com
Abstract
Internet is increasingly being used to support collaborative applications such as voice and video-conferencing,
replicated servers and databases of different types. Since most communication over the Internet involves the
traversal of insecure open networks, basic security services such as data privacy, integrity and authentication are
necessary. One of the levels of computer security is operating system security. This paper analyzes the limitations and behavioral patterns of security protocols across different platforms. It compares the performance of security protocols in terms of authentication, encryption algorithms, cryptographic methods, etc., in order to determine which platform provides better support for security protocols.
A network simulator tool was used to simulate different scenarios showing the performance of security protocols across two operating system platforms (Linux and Windows). The simulation values of selected performance metrics of the security protocols were then analyzed across both platforms.
Results obtained showed comparable differences in the values of the performance parameters considered. For instance, the IP processing delay of the Windows client node was initially high (about 0.0125 milliseconds) but later decreased to about 0.0115 milliseconds, while that of the Linux client node remained constant at about 0.0115 milliseconds. Variations in the values of the performance parameters for both platforms, in both network scenarios, were not significant enough to reflect a noticeable difference in the impact of the network security protocols on the performance of the operating system platforms.
Keywords: Open Source, IP Security, SSL, OPNET, Security Protocol, Operating Systems
1. Introduction
The Internet consists of an enormous number of heterogeneous, independently managed computer networks. It interconnects mutually distrustful organizations and people with no central management. Internet users have come to depend on it for reliability in spite of its security issues. More reliance on the Internet is predictable in the coming years, along with increased concern over its security. Security and privacy are
growing concerns in the Internet community, due to the Internet’s rapid growth and the desire to conduct
business over it safely. Basically, the security of a system builds on the combination of its ability to maintain
confidentiality, integrity, and availability. This desire has led to the advent of several proposals for security
standards such as secure Internet Protocol (IPSec), and the Secure Socket Layer (SSL) (Erich et al., 1996).
Most network security protocols on the Internet run on open source and/or windows platforms. They
are prone to some limitations in their operations, which vary across platforms. These limitations occur in areas
such as authentication, encryption algorithms etc. Maintaining a secure operating environment on a computer
network requires familiarity with key security capabilities that meet the need for functionality, reduce risk and
ensure compliance. This paper investigates the operating system platform on which network security protocols
would have the best performance.
This work is targeted at analyzing the limitations and behavioral patterns of security protocols across
different platforms; comparing the performance of security protocols in terms of authentication, encryption
algorithm, packet header, mode of key exchange, cryptographic methods etc.; and determining which platform
provides better support for security protocols.
1.1 Open Source Operating System Platforms
Open Source refers to an approach to design, development, and distribution offering practical
accessibility to a product's source (goods and knowledge). Open Source projects are generally proposed by a
single developer or group of core developers who make the software application codes available to the public
and use a system of peer review to test and refine the application.
Linux, a clone of the Unix operating system that has been popular in academia and many business environments for years, is the flagship of the open source model. It consists of a kernel, which is the core control software, and many libraries and utilities that rely upon the kernel to provide features with which users interact. Most Linux software is available in open source form and can be compiled on any Unix machine. Linux allows
customization of configuration files in ways that a Graphic User Interface (GUI) does not allow. Linux can be
easily administered remotely, using common remote login tools such as Secure Shell (SSH), which allows
running of text-based Linux programs from another system (Nash and Nash, 2001). Linux is not the only open
source operating system in existence. Most competing open source operating systems are, like Linux, clones of
Unix. The main competing family: FreeBSD, NetBSD, and OpenBSD are derived directly from mainstream
Unix.
1.2 Microsoft Windows Operating Systems
Microsoft Windows refers to a series of operating system software and Graphical User Interfaces (GUIs)
produced by Microsoft. The first version of Windows was introduced as an add-on to MS-DOS in response to
the growing interest in graphical user interfaces (GUIs). Microsoft Windows dominates the world's personal
computer market, with approximately 90% of the client operating system market as of 2004 (Hitslink, 2009).
Microsoft markets Windows as the competition to Unix and Linux. This operating system branch uses a
kernel with support for features such as file-system security and multitasking. One of the main differences
between Windows and Linux is that the former is much more integrated with its graphical user interface (GUI).
This makes Windows easier to learn, but at the same time, reduces its flexibility. However, a major advantage of
any Microsoft or Microsoft-related operating system is the application base for desktop use.
2. Literature Review
Security has become an increasingly important issue in modern distributed systems. The Internet is
increasingly being used to support collaborative applications such as voice and video-conferencing, white-boards,
distributed simulations, replicated servers and databases of different types. Since most communication over the
Internet involves the traversal of insecure open networks, basic security services such as data privacy, integrity
and authentication are necessary. A well-guarded enterprise deploys different security technologies. A computer
network can be secured at many levels. One of these levels of computer security is operating system security.
It is easy to misunderstand the assumptions about the environment in which security protocols are to be used and what their secure functioning may rely on. Security violations often occur at the boundaries between security mechanisms and the general system (Cole et al., 2005).
2.1 Network and Security Protocols
A protocol is a standard that controls or enables the connection, communication, and data transfer
between two computing endpoints. A protocol can be defined as the rules governing the syntax, semantics, and
synchronization of communication. Network security mechanisms are essential in order to prevent security threats. Protocols prevent security threats by providing the following: confidentiality (concealing the quantity or destination of data), data integrity (detecting and preventing tampering), originality (detecting replays), timeliness (detecting delaying tactics), authentication (ensuring that the communication is between the supposed parties), availability, non-repudiation (detecting bogus denial of transactions), and non-forgeability (detecting claims of bogus transactions) (Joshi et al., 2008).
2.1.1 IP Security (IPSec)
IP Security (IPSec) is the leading standard for cryptographically based authentication, integrity, and
confidentiality services at the IP datagram layer. Support for the IPSec architecture is mandatory in IPv6 but
optional in IPv4. IPSec is a framework for providing a number of security services, as opposed to a single
protocol or system.
When viewed from a high level, IPSec consists of two parts. The first part is a pair of protocols that
implement the available security services. They are the Authentication Header (AH), which provides access
control, connectionless message integrity, authentication, and antireplay protection, and the Encapsulating
Security Payload (ESP), which supports these same services, plus confidentiality. The second part of IPSec is
support for key management, which fits under an umbrella protocol known as Internet Security Association and
Key Management Protocol (ISAKMP). The abstraction that binds these two parts together is the security
association (SA). A security association is a simplex (one-way) connection with one or more of the available
security properties.
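As a conceptual sketch, a security association can be modelled as a simplex record binding an identifier to the negotiated protocol and mode. The field names below are illustrative, not the exact RFC 4301 structures, and key material is omitted:

```python
from dataclasses import dataclass
from enum import Enum

class Protocol(Enum):
    AH = "AH"    # access control, integrity, authentication, anti-replay
    ESP = "ESP"  # the same services, plus confidentiality

class Mode(Enum):
    TRANSPORT = "transport"
    TUNNEL = "tunnel"

@dataclass(frozen=True)
class SecurityAssociation:
    """A simplex (one-way) SA: one is needed per traffic direction."""
    spi: int        # Security Parameter Index identifying this SA
    src: str        # sender address
    dst: str        # receiver address
    protocol: Protocol
    mode: Mode
    # In practice, key material is negotiated under the ISAKMP umbrella.

# Bidirectional ESP traffic between two gateways needs two SAs.
outbound = SecurityAssociation(0x1001, "10.0.0.1", "10.0.0.2", Protocol.ESP, Mode.TUNNEL)
inbound = SecurityAssociation(0x2002, "10.0.0.2", "10.0.0.1", Protocol.ESP, Mode.TUNNEL)
print(outbound.protocol.value, outbound.mode.value)
```

The one-way nature of an SA is why a typical two-way conversation carries a pair of associations, each with its own SPI and keys.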
IPSec supports a tunnel mode as well as the more straightforward transport mode. Each SA operates in
one or the other mode. In a transport mode SA, ESP’s payload data is simply a message for a higher layer such
as UDP or TCP. In this mode, IPSec acts as an intermediate protocol layer, much like SSL/TLS does between
TCP and a higher layer (Joshi et al., 2008).
In a tunnel mode SA, however, ESP’s payload data is itself an IP packet. The source and destination of this inner
IP packet may be different from those of the outer IP packet. The most common way to use the ESP is to build
an IPSec tunnel between two routers, typically firewalls. According to Anderson (2001), the tunnel may be
configured to use ESP with confidentiality and authentication, thus preventing unauthorized access to the data
that passes through the link and ensuring that no spurious data is received at the far end of the tunnel. A network
of such tunnels can be used to implement an entire virtual private network (VPN).
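The difference between the two modes comes down to what the ESP payload carries. The toy functions below use purely illustrative byte layouts, not real packet formats, to show transport mode protecting only the transport-layer segment while tunnel mode wraps an entire inner IP packet:

```python
def transport_mode(ip_header: bytes, tcp_segment: bytes) -> bytes:
    """Transport mode: ESP sits between the IP header and the transport payload."""
    return ip_header + b"[ESP]" + tcp_segment

def tunnel_mode(outer_ip_header: bytes, inner_ip_packet: bytes) -> bytes:
    """Tunnel mode: the whole inner IP packet becomes the ESP payload."""
    return outer_ip_header + b"[ESP]" + inner_ip_packet

# Transport mode between two end hosts A and B:
print(transport_mode(b"IP(A->B)", b"TCP(data)"))

# Tunnel mode between two firewalls, hiding the inner addresses:
inner = b"IP(10.0.0.1->10.0.0.2)TCP(data)"
print(tunnel_mode(b"IP(FW1->FW2)", inner))
```

Because the inner header travels inside the (optionally encrypted) payload, tunnel mode can conceal the true source and destination from observers on the path, which is what makes it suitable for firewall-to-firewall VPN links.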
2.1.2 Web Security (Secure Sockets Layer and Transport Layer Security)
The design goals and requirements for the Transport Layer Security (TLS) standard and the Secure
Socket Layer (SSL) on which TLS is based, are based on solving problems that emerged from the growth of the
Internet and digital data transmission. As commercial enterprises began to take an interest in the World Wide Web, the need for some level of security for Web transactions became obvious: confidentiality, integrity, and authentication. SSL was the first widely used solution to this problem. It was originally developed by Netscape and subsequently became the basis for the IETF's TLS standard. TLS is the latest enhancement of SSL.
SSL was not designed exclusively for Web transactions (i.e., those using HTTP). Rather, it was built as
a general-purpose protocol that sits between an application protocol such as HTTP and a transport protocol such
as TCP. From the application’s perspective, this protocol layer looks just like a normal transport protocol except
for the fact that it is secure. That is, the sender can open connections and deliver bytes for transmission, and the
secure transport layer will get them to the receiver with the necessary confidentiality, integrity, and
authentication.
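The layering described above can be illustrated with Python's standard ssl module, which wraps an ordinary TCP socket in a TLS channel. This is only a sketch of the idea, not part of the original study; the function name, host, and port are illustrative.

```python
import socket
import ssl

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a plain TCP connection in TLS. To the application the result
    behaves like a normal socket; confidentiality, integrity, and
    authentication are handled by the TLS layer underneath."""
    context = ssl.create_default_context()   # verifies the server certificate
    raw_sock = socket.create_connection((host, port))
    return context.wrap_socket(raw_sock, server_hostname=host)
```

From the caller's perspective the returned object is read from and written to exactly like a plain TCP socket, which is precisely the point made above about SSL/TLS being a general-purpose layer between the application and transport protocols.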
2.2 Related Work
A series of research works has been, and continues to be, carried out on the performance of network security protocols. In the work of Miltchev et al. (2001), the authors investigated the performance of IPSec by considering the type of encryption algorithm used by IPSec, the network topology, and the effects that the added security has on the performance of the system. IPSec was compared with SSL as used by HTTPS. The OpenBSD operating system was used as the experimental platform.
Also, in the work of Argyroudis et al. (2004), a performance analysis of three commonly used security protocols, SSL, IPSec, and S/MIME, was presented. That work compared the performance of a mobile platform with and without security protocols, showing that the complexity of sophisticated cryptographic protocols does not prevent them from being used on a mobile platform. In contrast, the present work investigates the
difference in the performance of security protocols on two different operating system platforms.
3. Methodology
In order to study and exploit the properties of security protocols, a computer tool is needed with which computer networks can be modeled, simulated, and evaluated. This work evaluates the performance of security
protocols using a network simulator, which uses packet level analysis to measure network performance. The
following sub section discusses the performance metrics, the network simulation tool, and network models used
in evaluating the performance of the security protocols.
3.1 Performance Metrics
According to Agarwal and Wang (2005), the performance impact of security policies on a system’s
Quality of Service (QoS) can be measured with the following metrics:
(a) Authentication Time (AT) is defined as the time involved in the authentication phase of a security protocol. The steps to calculate the authentication time (AT) are as follows:
i. Assume that security policy Pϕ is configured in the network. Through experiments, the time involved in processing the kth packet by Pϕ during its authentication phase is determined; let it be denoted tk(Pϕ).
ii. Assume N packets are exchanged during the authentication phase. Let the total time in processing the N packets be represented by TN(Pϕ), which can be calculated as follows:

    TN(Pϕ) = Σ (k = 1 to N) tk(Pϕ)    (1)

iii. Let AT denote the authentication time. As it depends on the mobility scenarios N, R and the security policies P defined above, AT can be represented as AT(N, R, P) and calculated using the equation above as follows:

    AT(N, R, P) = TN(Pϕ)    (2)
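Equations (1) and (2) amount to summing the measured per-packet processing times over the authentication phase. A minimal sketch in Python (the function name and the sample timings are illustrative, not from the paper):

```python
def authentication_time(packet_times):
    """T_N(P) = sum of t_k(P) for k = 1..N (Eq. 1); the authentication
    time AT(N, R, P) is this total (Eq. 2). packet_times holds the
    measured per-packet processing times t_k in seconds."""
    return sum(packet_times)

# Four handshake packets with hypothetical measured processing times:
handshake = [0.012, 0.009, 0.011, 0.010]
at = authentication_time(handshake)   # roughly 0.042 s
```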
(b) Number of Authentication Messages (AM) concerns the messages exchanged during an authentication phase. Ethereal snapshots were taken to obtain the messages exchanged for the different security protocols. This parameter is related to the signaling overhead of authentication.
(c) Policy Overhead (Bytes/Second) O(Pϕ) refers to the overhead associated with encryption and decryption. Once the data transfer phase is initiated after the initial protocol negotiation, encryption and decryption are the only operations on the data, so their cost affects the total overhead of the security policies. It is assumed in the experiments that security policies do not renegotiate security parameters during a session, thus eliminating the overhead introduced by renegotiation of security policies.
(d) Traffic Streams (Tr) is considered with regard to the TCP and UDP traffic streams in the experiments. Since most applications run over TCP or UDP, the experimental data is applicable to many applications in wireless LANs.
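The text gives no explicit formula for the policy overhead O(Pϕ); one plausible reading, sketched below under that assumption, is the extra bytes the security policy adds divided by the session duration (the function name is hypothetical):

```python
def policy_overhead(plaintext_bytes, ciphertext_bytes, duration_s):
    """One plausible reading of O(P) in bytes/second: the extra bytes the
    security policy adds (headers, padding, integrity data) divided by
    the session duration. The paper states no explicit formula."""
    return (ciphertext_bytes - plaintext_bytes) / duration_s
```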
(e) Response Time (End-to-End) (RS) is a measure of the delay in the transmission of data between a sender and a receiver, usually in seconds.
(f) Throughput (Bytes/Second) (Th) is a measure of the data transferred per unit time between participating nodes. The throughput is obtained according to the following steps:
• Determine the time tf(Pϕ) when the first data packet is sent from a sender to a receiver with security policy Pϕ.
• Determine the time tl(Pϕ) when the last data packet is delivered to a receiver j from a sender i with security policy Pϕ.
• Calculate the total time, denoted tt, by subtracting tf(Pϕ) from tl(Pϕ), which can be given as follows:

    tt = tl(Pϕ) − tf(Pϕ)    (3)

• Assume that the total data exchanged between users i and j is denoted D, in bytes. Since the data rate, denoted dt, is defined as the data sent per unit time, dt can be represented using the equation for tt above as follows:

    dt = D / tt    (4)

• Throughput Th depends on factors such as N, R, P, Tr and DS, where Tr represents the traffic type (TCP or UDP), DS denotes the total data sent between a sender i and a receiver j, and the other denotations are as defined above. Throughput can therefore be represented as Th(N, R, P, Tr, DS), which can be obtained using the equation for dt above as follows:

    Th(N, R, P, Tr, DS) = D / tt    (5)
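The throughput steps above can be sketched directly in Python; the function name and sample values are illustrative:

```python
def throughput(t_first, t_last, total_bytes):
    """Th = D / tt, where tt = t_l(P) - t_f(P) is the elapsed time
    between the first packet sent and the last packet delivered
    (Eqs. 3-5), and D is the total data exchanged in bytes."""
    tt = t_last - t_first        # Eq. (3)
    return total_bytes / tt      # Eqs. (4)-(5): bytes per second
```

For example, 3088 bytes delivered over a 2-second interval gives a throughput of 1544 bytes per second.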
3.2 Simulation
Performance of the network security protocols was evaluated by measuring the values of the performance metrics using OPNET. OPNET provides a GUI for network topology design, which allows for realistic simulation of networks, and has a performance data collection and display module. It has been used extensively, and there is wide confidence in the validity of the results it produces (Guo et al., 2007).
OPNET IT Guru Academic Edition (ITGAE) is a free simulation tool offered by the manufacturer of OPNET and intended for university educational programs. It is useful in teaching communication technologies through practical simulation examples. The widely supported component library includes computer workstations and servers, routers, switches, bridges, stars, access points, links, firewalls, and gateways. The software is user-friendly, because the whole model can be constructed in a graphical project editor.
4. System Design and Implementation
Events in the modelled system are scheduled to occur at discrete points in time. The design of a network
topology model to analyze the performance of security protocols in the operating systems described earlier is
required.
4.1 Network Models
Two network models were simulated: the first was used to evaluate IPSec and the second to evaluate SSL.
4.1.1 IPSec Network Model
This network model consists of two Point-to-Point Protocol (PPP) workstations, each connected to the Internet via a router, and a PPP server connected to the Internet through a router and a firewall. All devices are connected with PPP DS1 (Digital Signal 1) lines, as shown in Figure 1. PPP DS1 connects two nodes running IP; its data rate is 1.544 Mbps.
The statistics selected for both the Linux Client and Windows Client nodes are:
• Client DB Response Time (seconds)
• Client HTTP Page Response Time (seconds)
• Client Remote Login Response Time (seconds)
• IP Processing Delay (seconds)
4.1.2 SSL Network Model
This network model consists of two PPP workstations, each connected to the Internet via a router; a server farm consisting of a database server, an e-mail server, an FTP server, and a general server, all connected to a router that is connected to the Internet via a firewall; and two Web servers, Yahoo and Amazon, also connected to the Internet. All devices are connected with PPP DS1 lines, as shown in Figure 2.
The statistics selected for both the Linux Client and Windows Client nodes are:
• Client Email Download Response Time (seconds)
• Client HTTP Page Response Time (seconds)
• Client FTP Download Response Time (seconds)
• IP Processing Delay (seconds)
4.3 Simulation Results
The results of the simulations of the IPSec and SSL models are presented and discussed in the following section.
4.3.1 IPSec Simulation Scenario
This is a simulation of the network and traffic models as shown in Figure 1. Database, HTTP and
Remote Access traffic are transmitted through the VPN. IPSec provides confidentiality and authentication for the
VPN tunnel. Figure 3 shows a graph of the average database query response time for the Windows Client and
Linux Client nodes. The graph in Figure 4 shows the average HTTP page response time for the Windows Client
and Linux Client nodes. Figure 5 shows the average remote access response time for the Windows Client and
Linux Client nodes. Figure 6 shows the average IP processing delay for the Windows Client and Linux Client
nodes.
4.3.2 SSL Simulation Scenario
This simulation model consists of the network and traffic models as shown in Figure 2. E-mail, HTTP
and FTP traffic are transmitted over the network. SSL provides authentication and encryption for HTTP and FTP.
Figure 7 shows a graph of the average e-mail download response time for the Windows Client and Linux Client
nodes. The graph in Figure 8 shows the average HTTP page response time for the Windows Client and Linux
Client nodes. Figure 9 shows the average FTP download response time for the Windows Client and Linux Client
nodes. Figure 10 shows the average IP processing delay for the Windows Client and Linux Client nodes.
For both the IPSec and SSL network scenarios, the Linux Client and Windows Client nodes are connected simultaneously. This ensures that the performance evaluation results from both nodes are obtained under the same network conditions, for a more accurate comparison.
4.4 Summary of Simulation Results
As discussed earlier, response time is a measure of the delay in transmission of data between a sender
and a receiver, usually in seconds. The performance metrics obtained from the IPSec Network Scenario
simulation include database response time, HTTP page response time and remote access response time.
The database response time is measured from the time when the database query application sends a
request to the server to the time it receives a response packet. From Figure 3, the DB response time for the
Windows Client node is initially about 0.190 seconds. It quickly decreases to about 0.175 seconds and remains
constant. That of the Linux Client node on the other hand remains constant at about 0.180 seconds throughout.
HTTP page response time specifies the time required to retrieve an entire web page with all the
contained inline objects. The HTTP response time for the Linux Client node is constant at about 0.55 seconds,
while that of the Windows Client node is constant at about 0.54 seconds, as can be seen in Figure 4. Remote
access response time is the time taken for the request for access to a remote resource to be granted. From Figure
5, the Remote access response time of the Linux Client node remains constant at about 0.165 seconds, while that
of Windows Client node starts at 0.165 and then decreases to about 0.16 seconds.
IP processing delay is the time it takes routers to process the packet header. Additional overhead is caused by the IPSec datagram encapsulating the original IP datagram. This delay is evaluated under the following assumption: if N traffic streams arrive at each security router at rate V (Mbps), and the security router has a processing capacity of rate R (Mbps), then the security router processing delay dsec for data encryption or decryption is given as:

    dsec = α (N V / R)    (6)

where the security coefficient α = 1 second.
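The relation in Eq. (6) says the delay scales with the aggregate arrival rate N·V relative to the router's processing capacity R. A small sketch (the exact algebraic form of Eq. (6) is inferred from the surrounding text, and the function name is illustrative):

```python
def security_processing_delay(n_streams, rate_v_mbps, capacity_r_mbps, alpha=1.0):
    """d_sec = alpha * (N * V) / R (Eq. 6), with the security
    coefficient alpha = 1 second as stated in the text. The exact
    form of Eq. 6 is reconstructed, not quoted, from the paper."""
    return alpha * (n_streams * rate_v_mbps) / capacity_r_mbps
```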
From Figure 6, it can be seen that the IP processing delay of the Windows Client node is initially high
(about 0.0125 milliseconds), but later decreases to about 0.0115 milliseconds. That of the Linux Client node is
constant at about 0.0115 milliseconds, with a little spike initially. The IP processing delay of the Windows Client
node is initially slightly higher than that of the Linux Client node, but later, both are comparably equivalent. The
performance metrics obtained from the SSL Network Scenario simulation include e-mail response time, HTTP
response time and FTP response time.
E-mail download response time is measured from the time an e-mail is requested to the time it starts
to download. From Figure 7, the e-mail download response time of the Windows Client node is 0.60 seconds at
the start of the simulation; the graph curves slightly downward, as it becomes constant at 0.58 seconds. For the
Linux Client node, the graph is slightly higher and parallel to that of the Windows Client node, starting at 0.64
seconds, and stabilizing at 0.62 seconds. HTTP response time for the Linux Client node ranges from 0.73
seconds to 0.77 seconds and is slightly higher than that of the Windows Client node, which is 0.7 seconds at the
start of the simulation, spikes to 0.79 seconds, and then ranges from 0.72 seconds to 0.76 seconds as seen in
Figure 8.
FTP download response time is measured from the time a file is requested to the time it starts to
download. The FTP download response time for the Linux Client node is constantly about 3.1 seconds, while
that of the Windows Client node is 3.5 seconds at the start of the simulation, and decreases to and remains
constant at 3.09 seconds. The IP processing delay of both the Linux Client and Windows Client nodes are
approximately equal and constant at about 0.018 milliseconds, though at the start of the simulation that of the
Linux client node is 0.0175 milliseconds, and that of the Windows Client node is about 0.017 seconds.
For the IPSec Network Scenario, the average response time for the database, HTTP and remote access
applications is slightly greater for the Linux Client node than for the Windows Client node. However, the IP
Processing Delay is slightly greater for the Windows Client node than it is for the Linux Client node.
For the SSL Network Scenario, the average response time for the e-mail and HTTP applications is
greater for the Linux Client node than it is for the Windows Client node. The FTP download response time is
initially higher for the Windows Client node, and then later, lower. The IP Processing Delay is approximately
equal for both the Linux Client and Windows Client nodes. It can be seen that the values of the performance
parameters of the two operating system platforms considered for both scenarios are somewhat comparable.
5.0 Conclusion
For the IPSec Network Scenario, the average response time for the database, HTTP and remote access
applications is slightly greater for the Linux Client node than for the Windows Client node. However, the IP
Processing Delay is slightly greater for the Windows Client node than it is for the Linux Client node.
For the SSL Network Scenario, the average response time for the e-mail and HTTP applications is greater for the
Linux Client node than it is for the Windows Client node. The FTP download response time is initially higher for
the Windows Client node, and then later, lower. The IP Processing Delay is approximately equal for both the
Linux Client and Windows Client nodes. In each case, the differences in the values of the performance
parameters are less than 5%. It can be seen from these results that the variations in the values of the performance
parameters considered for the Linux and Windows operating system platforms, in both the IPSec and SSL
Network Scenarios, are not significant enough to reflect a noticeable difference in the impacts of the network
security protocols on the performance of the operating system platforms. Thus, it can be concluded that the
effects of the network security protocols considered on the performances of both operating system platforms are
comparable.
References
Agarwal, A. K., and Wang, W. (2005). Measuring Performance Impact of Security Protocols in Wireless Local
Area Networks. Proceedings from the Second International Conference on Broadband Networks. (IEEE
BROADNETS 2005).
Anderson, R. (2001). Security Engineering: A Guide to Building Dependable Distributed Systems, 2nd ed. Wiley
Publishing Inc. Indiana.
Argyroudis P. G., Verma R., Tewari H., O’Mahony D. (2004). Performance Analysis of Cryptographic Protocols
on Handheld Devices. Proceedings of the Third IEEE International Symposium on Network Computing
and Applications (NCA’04).
Cole, E., Krutz, R., and Conley, J. W. (2005). Network Security Bible. Wiley Publishing Inc., Indiana.
Nahum, E., Yates, D. J., O'Malley, S., Orman, H., and Schroeppel, R. (1996). Parallelized Network Security
Protocols. Proceedings of the 1996 IEEE Symposium on Network and Distributed Systems Security, 1-3.
Guo, J., Xiang, W., and Wang, S. (2007). Reinforce Networking Theory with OPNET Simulation. Journal of
Information Technology Education, 6, 215-226.
Hitslink website, [Online] <http://www.hitslink.com > [Accessed February, 2011]
Joshi, J., Peterson, L. L., Bruce, S. D., Krishnamurthy, P. (2008). Network Security: Know it all, 2nd ed. Morgan
Kaufmann, San Francisco.
Miltchev, S., Ioannidis, S., and Keromytis, A. D. (2001). A Study of the Relative Costs of Network Security
Protocols. [Online] http://www.usenix.org/event/usenix02/tech/freenix/fullpapers/miltchev/miltchev.ps
[Accessed February, 2011]
Nash, A. and Nash, J. (2001). LPIC Certification Bible. Hungry Minds Inc., New York.
Figure 1: Network Topology Model to simulate IPSec
Figure 2: Network Topology Model to simulate SSL
Figure 3: Comparison of Database Query Response Time between Windows and Linux nodes (IPSec).
Figure 4: Comparison of HTTP Page Response Time between Windows and Linux nodes (IPSec).
Figure 5: Comparison of Remote Login Response Time between Windows and Linux nodes (IPSec).
Figure 6: Comparison of IP Processing Delay between Windows and Linux nodes (IPSec).
Figure 7: Comparison of Email Download Response Time between Windows and Linux nodes (SSL)
Figure 8: Comparison of HTTP Page Response Time between Windows and Linux nodes (SSL)
Figure 9: Comparison of FTP Download Response Time between Windows and Linux nodes (SSL)
Figure 10: Comparison of IP Processing Delay between Windows and Linux nodes (SSL)