Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations make informed risk-management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect overall network security depending on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to represent more realistically how the security state of the network varies over time, a nonhomogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
Numerous security metrics have been proposed in the past for protecting computer networks. However, we still lack effective techniques to accurately measure the predictive security risk of an enterprise while taking into account the dynamic attributes associated with vulnerabilities that can change over time. In this paper we present a stochastic security framework for obtaining quantitative measures of security using attack graphs. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect overall network security depending on how the vulnerabilities are interconnected and leveraged to compromise the system. Gaining a better understanding of the relationship between vulnerabilities and their lifecycle events can give security practitioners a clearer picture of their state of security. In order to represent more realistically how the security state of the network varies over time, a nonhomogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
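The core mechanic of the framework can be sketched as a nonhomogeneous discrete-time Markov chain whose transition matrix is rebuilt each day from the vulnerability's age. The exploit-availability curve below is a placeholder, not Frei's fitted lifecycle model, and the two-state chain is a deliberate simplification of a full attack graph:

```python
# Sketch of a nonhomogeneous discrete-time Markov model over an attack graph.
# The exploit-availability curve is a hypothetical stand-in, NOT Frei's fitted
# Vulnerability Lifecycle model -- it only illustrates the mechanics.

import math

def exploit_probability(age_days):
    """Placeholder curve: probability an exploit exists by a given
    vulnerability age, rising toward 1 as the vulnerability matures."""
    return 1.0 - math.exp(-0.05 * age_days)

def daily_transition_matrix(age_days):
    """2-state chain: state 0 = secure, state 1 = compromised (absorbing)."""
    p = exploit_probability(age_days)
    return [[1.0 - p, p],
            [0.0,     1.0]]

def state_after(days, start=(1.0, 0.0)):
    """Propagate the state distribution day by day; the matrix changes
    each day because the covariate (vulnerability age) changes."""
    dist = list(start)
    for t in range(days):
        P = daily_transition_matrix(t)
        dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]
    return dist

secure, compromised = state_after(30)
```

Because the matrix depends on the day index, the chain is nonhomogeneous: the same network is strictly more likely to be compromised as its vulnerabilities age.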
Cyber security is one of the most intensively discussed topics, as it helps end users stay secure from cyber attacks. Cyber security models are crucial.
A3 - Threat analysis (Análise de ameaças) - Threat analysis in goal oriented security requireme... (Spark Security)
Goal and threat modelling are important activities of security requirements engineering: goals express why a system is needed, while threats motivate the need for security. Unfortunately, existing approaches mostly consider goals and threats separately, and thus neglect the mutual influence between them. In this paper, we address this deficiency by proposing an approach that extends goal modelling with threat modelling and analysis.
A1 - Cybersecurity (Cibersegurança) - Raising the Bar for Cybersecurity (Spark Security)
In the past few years, a new approach to cybersecurity has emerged, based on the analysis of data on successful attacks. In this approach, continuous diagnostics and mitigation replace the reactive network security methods used in the past. The approach combines continuous monitoring of network health with relatively straightforward mitigation strategies. The strategies used in this approach reduce the opportunities for attack and force attackers to develop more sophisticated (and expensive) techniques or to give up on the target. In combination, continuous monitoring and mitigation strategies provide the basis for better cybersecurity.
Security issues are often neglected until the coding step of the software development process, and making changes at that step increases time and cost, depending on the size of the project. Applying security at the design phase can fix vulnerabilities earlier in the project and minimize the time and cost of the software by identifying security flaws earlier in the software life cycle. This work discusses security metrics for object-oriented class design and implements these metrics from an Enterprise Architect class diagram using a proposed CASE tool.
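As a flavour of what a design-level security metric looks like, the sketch below computes the ratio of sensitive to total attributes over a toy class model. The class model, the attribute tags, and the metric choice are illustrative assumptions, not the metrics or tool from the paper:

```python
# Hypothetical design-level security metric: the fraction of attributes in a
# class model that are tagged as sensitive ("classified"). Both the model and
# the tags are invented for this sketch.

class_model = {
    "Account": {"balance": "classified", "owner": "classified", "id": "public"},
    "Logger":  {"buffer": "public"},
}

def classified_attribute_ratio(model):
    """Ratio of classified attributes to all attributes across the model."""
    tags = [tag for cls in model.values() for tag in cls.values()]
    return sum(t == "classified" for t in tags) / len(tags)

ratio = classified_attribute_ratio(class_model)  # 2 of 4 attributes -> 0.5
```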
Vulnerability scanners: a proactive approach to assess web application security (ijcsa)
With the increasing concern for security in the network, many approaches have been laid out that try to protect the network from unauthorised access. New methods have been adopted in order to find the potential discrepancies that may damage the network. The most commonly used approach is vulnerability assessment. By vulnerability, we mean the potential flaws in the system that make it prone to attack. Assessment of these system vulnerabilities provides a means to identify and develop new strategies to protect the system from the risk of being damaged. This paper focuses on the usage of various vulnerability scanners and their related methodology to detect the vulnerabilities present in web applications or a remote host across the network, and tries to identify new mechanisms that can be deployed to secure the network.
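One core step most such scanners share is matching a detected service version against a database of known-vulnerable versions. The sketch below illustrates only that step, with a single real CVE as the sample entry; the database layout and function name are invented for this example, and real scanners (e.g. OpenVAS, Nikto) do far more:

```python
# Toy version-matching step of a vulnerability scanner. The dictionary layout
# is invented; the one entry uses a real CVE (Apache 2.4.49 path traversal)
# purely as an illustrative datum.

KNOWN_VULNERABLE = {
    ("Apache", "2.4.49"): ["CVE-2021-41773"],
}

def match_banner(product, version):
    """Return the CVE list for a (product, version) pair, empty if none known."""
    return KNOWN_VULNERABLE.get((product, version), [])
```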
Attack graph based risk assessment and optimisation approach (IJNSA Journal)
Attack graphs are models that offer significant capabilities to analyse security in network systems. An attack graph allows the representation of vulnerabilities, exploits and conditions for each attack in a single unifying model. This paper proposes a methodology to explore the graph using a genetic algorithm (GA). Each attack path is considered as an independent attack scenario from the source of attack to the target. Many such paths form the individuals in the evolutionary GA solution. The population-based strategy of a GA provides a natural way of exploring a large number of possible attack paths to find the paths that are most important. Thus, unlike many other optimisation solutions, a range of solutions can be presented to a user of the methodology.
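The evolutionary search described above can be illustrated on a toy attack graph: individuals are complete source-to-target paths, fitness is a per-node severity sum, and mutation regrows a random suffix of a path. The graph, severity scores, and GA parameters are all assumptions of this sketch, not the paper's actual setup:

```python
# Toy GA over attack paths. Graph, per-node severity scores, and all GA
# parameters are invented for illustration.

import random

GRAPH = {  # node -> successors; "tgt" is the attack target
    "src": ["a", "b"], "a": ["c", "tgt"], "b": ["c"], "c": ["tgt"], "tgt": [],
}
SEVERITY = {"a": 2.0, "b": 5.0, "c": 4.0, "tgt": 10.0}

def random_path():
    """Random walk from the source until the target is reached."""
    path, node = ["src"], "src"
    while node != "tgt":
        node = random.choice(GRAPH[node])
        path.append(node)
    return path

def fitness(path):
    """Total severity along the path; 'most important' = highest score."""
    return sum(SEVERITY[n] for n in path[1:])

def mutate(path):
    """Cut the path at a random interior node and regrow it randomly."""
    cut = random.randrange(1, len(path))
    head, node = path[:cut], path[cut - 1]
    while node != "tgt":
        node = random.choice(GRAPH[node])
        head.append(node)
    return head

def evolve(pop_size=20, generations=30):
    random.seed(0)  # fixed seed so the run is reproducible
    pop = [random_path() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return sorted(pop, key=fitness, reverse=True)
```

The returned population is itself the "range of solutions" the abstract mentions: a ranked set of high-severity attack scenarios rather than a single optimum.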
Integrating Threat Modeling in Secure Agent-Oriented Software Development (Waqas Tariq)
The main objective of this paper is to integrate threat modeling when developing a software application following the Secure Tropos methodology. Secure Tropos is an agent-oriented software development methodology which integrates "security extensions" into all development phases. Threat modeling is used to identify, document, and mitigate security risks; therefore, applying threat modeling when defining the security extensions should lead to better modeling and an increased level of security. After integrating threat modeling into this methodology, security attack scenarios are applied to the models to discuss how the security level of the system has been impacted. Security attack scenarios have been used to test different enhancements made to the Secure Tropos methodology and to the Tropos methodology itself. The system modeled using this methodology is an e-Commerce application that will be used to sell handmade products made in Ecuador through the web. The .NET Model-View-Controller framework is used to develop our case study application. Results show that by integrating threat modeling into the development process, the level of security of the modeled application has increased. The different actors, goals, tasks, and security constraints that were introduced based on the proposed integration help mitigate different risks and vulnerabilities.
Software security risk mitigation using object oriented design patterns (eSAT Journals)
Abstract: It is now well known that the requirement and design phases of the software development lifecycle are the phases where security incorporation yields maximum benefits. In this paper, we have tried to tie security requirements, security features and security design patterns together in a single string. It is a complete process that will help a designer choose the most appropriate security design pattern depending on the security requirements. The process includes a risk analysis methodology at the design phase of the software that is based on the Common Criteria requirements, as this is a well-known security standard generally used in the development of security requirements. Risk mitigation mechanisms are proposed in the form of security design patterns. An exhaustive list of the most reliable and well-proven security design patterns is prepared, and they are categorized on the basis of attributes like data sensitivity, sector, number of users, etc. Identified patterns are divided into three levels of security. After selecting the security requirements, the software designer can calculate the percentage contribution of security features and, on the basis of this percentage, a design pattern level can be selected and applied.
SECURITY VIGILANCE SYSTEM THROUGH LEVEL DRIVEN SECURITY MATURITY MODEL (IJCSEIT Journal)
The success of any software system largely depends on its vigilance efficiency, which prompts organizations to meet their objectives in the arena of networks. In the highly competitive world, everything appears to be vulnerable, and information systems are no exception. The security of information systems has become a cause of great concern. Software security engineers are trying hard to develop fully protected and highly secure information systems, but all these developments are at nascent stages. It is quite revealing that earlier research studies paid little attention to highlighting an accurate status of the security alertness of developed software. Hence, keeping all these factors in the backdrop, this paper proposes a holistic Security Maturity Model (SMM), in which five levels/stars have been developed, driven by the strength of the security vigilance occurring at the various stages of any software. SMM is in its conceptual stage; the detailed steps will certainly require time to be developed so that every software system can reap the benefits of this model. To categorize the level of potency, SMM will be expressed through an appropriate ranking/star system. It is hoped that if SMM is followed in its true letter and spirit, it will undoubtedly restore clients' trust and confidence in the software as well as the corresponding vendors. Moreover, this will also enable the software industry to follow transparent and ethical practices.
A risk and security assessment of VANET availability using attack tree concept (IJECEIAES)
The challenging nature of insecure wireless channels and the open-access environment make the protection of vehicular ad hoc networks (VANETs) a particularly critical issue. Robust approaches to protect this network's security and privacy against attacks need to be improved to reach an adequate level and secure the confidential information of drivers and passengers. Accordingly, to improve the security of VANET, it is necessary to carry out a risk assessment in order to evaluate the risk this network is facing. This paper focuses on the security threats in VANET, particularly on the availability of this network. We propose a novel risk assessment approach to evaluate the risk of attacks against VANET availability. We adopt a tree structure called an attack tree to model the attacker's potential attack strategies. Based on this attack tree, we can estimate the probability that a specific threat might be launched against VANET and detect potential attack sequences that an attacker could mount against VANET availability. Then, we use multi-attribute utility theory to measure the total risk value of the system, including the probabilities of each attack sequence. The results of the study will help decision-makers take effective precautions to prevent attacks on this network's availability.
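The attack-tree evaluation step can be sketched as a recursive computation: AND nodes multiply child success probabilities, OR nodes take the maximum (the attacker picks the most promising branch). The tree shape and leaf probabilities below are invented, and the paper's multi-attribute-utility weighting of sequences is omitted:

```python
# Toy attack-tree evaluation. The tree (one availability attack: jamming OR
# (spoofing AND flooding)) and all leaf probabilities are invented.

tree = ("OR",
        ("LEAF", 0.30),                            # channel jamming
        ("AND", ("LEAF", 0.50), ("LEAF", 0.40)))   # spoof AND flood together

def probability(node):
    kind = node[0]
    if kind == "LEAF":
        return node[1]
    child_ps = [probability(c) for c in node[1:]]
    if kind == "AND":        # every sub-attack must succeed
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    return max(child_ps)     # OR: attacker chooses the most likely branch
```

Here the combined spoof-and-flood branch succeeds with probability 0.2, so the jamming branch (0.3) dominates the root.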
A SURVEY ON TECHNIQUES REQUIREMENTS FOR INTEGRATING SAFETY AND SECURITY ENGI... (IJCSES Journal)
Nowadays, safety and security have become requirements, integrated with each other, for information systems as a new generation of infrastructure systems distributed throughout networks. That opened the door to questions about whether these systems are safety-critical, especially since they were tested in a closed, separated environment and are now deployed in an uncontrollable environment, namely the internet, where the number of threats is enormous. It also opened the door to discussing new development approaches that take safety and security into consideration during the system development life cycle and, most importantly, identify hazards, risks and threats. We conduct a survey exploring the technical languages created by scholars to combine safety and security requirements engineering with accident analysis technique languages.
Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructure (jbasney)
Presented Nov 11 2017
http://www.stem-trek.org/news-events/urisc/
“Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructure”
Risk assessment provides valuable insights to the cyberinfrastructure security program, but launching a risk assessment process can seem daunting for all but the largest projects. Jim Basney will present risk assessment tools (checklists, spreadsheets, templates) developed by CTSC (trustedci.org) for getting started on a lightweight risk assessment for cyberinfrastructure projects of varying types and sizes.
CYBERSECURITY INFRASTRUCTURE AND SECURITY AUTOMATION (acijjournal)
AI-based security systems utilize big data and powerful machine learning algorithms to automate the security management task. The case study methodology is used to examine the effectiveness of AI-enabled security solutions. The result shows that compared with the signature-based system, AI-supported security applications are efficient, accurate, and reliable. This is because the systems are capable of reviewing and correlating large volumes of data to facilitate the detection and response to threats.
Security has always been a great concern for all software systems due to the increased incursion of wireless devices in recent years. Generally, software engineering processes try to impose security measures during the various design phases, which results in inefficient measures. This calls for a new software engineering process that provides a proper framework for integrating security requirements into the SDLC; in it, requirements engineers must discover all the security requirements related to a particular system, so that the security requirements can be analyzed and prioritized simultaneously in one go. In this paper we present a new technique for prioritizing these requirements based on risk measurement techniques. The true security requirements should be identified as early as possible so that they can be systematically analyzed and every architecture team can choose the most appropriate mechanism to implement them.
ANALYTIC HIERARCHY PROCESS-BASED FUZZY MEASUREMENT TO QUANTIFY VULNERABILITIE... (IJCNCJournal)
Much research has been conducted to detect vulnerabilities in Web applications; however, none of it proposed a methodology to measure the vulnerabilities either qualitatively or quantitatively. In this paper, a methodology is proposed to investigate the quantification of vulnerabilities in Web applications. We applied the Goal Question Metric (GQM) methodology to determine all possible security factors and sub-factors of Web applications in the Department of Transportation (DOT) as our proof of concept. Then we introduced a Multi-layered Fuzzy Logic (MFL) approach based on the security sub-factors' prioritization in the Analytic Hierarchy Process (AHP). Using AHP, we weighted each security sub-factor before the quantification process in the fuzzy logic to handle imprecise crisp-number calculation.
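The AHP weighting step the abstract mentions can be illustrated with a small pairwise-comparison matrix: the principal eigenvector (approximated here by power iteration) gives the priority weights. The matrix values and the three hypothetical sub-factors are assumptions of this sketch, not the paper's actual comparisons:

```python
# Standard AHP priority-weight derivation via power iteration on a
# pairwise-comparison matrix. The matrix is invented: it compares three
# hypothetical security sub-factors.

A = [  # A[i][j]: how much more important sub-factor i is than sub-factor j
    [1.0,     3.0,     5.0],
    [1.0/3.0, 1.0,     3.0],
    [1.0/5.0, 1.0/3.0, 1.0],
]

def ahp_weights(A, iters=50):
    """Approximate the Perron eigenvector of A, normalized to sum to 1."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

weights = ahp_weights(A)  # the first sub-factor gets the largest weight
```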
PERFORMANCE EVALUATION OF WIRELESS SENSOR NETWORK UNDER HELLO FLOOD ATTACK (IJCNCJournal)
Wireless sensor networks (WSNs) are widely used in many fields. The network consists of tiny, lightweight sensor nodes and is largely used to scan, detect or monitor environments. Since these sensor nodes are tiny and lightweight, they impose limitations on resources such as power usage, processing of a given task, and radio frequency range. These limitations leave the network vulnerable to many different types of attacks, such as the hello flood attack, black hole, Sybil attack, sinkhole, and many more. Among these, the hello flood is one of the most important attacks. In this paper, we have analyzed the performance of the hello flood attack and compared network performance as the number of attackers increases. Network performance is evaluated by modifying the ad-hoc on-demand distance vector (AODV) routing protocol using the NS2 simulator. It has been tested under different scenarios, such as no attacker, a single attacker, and multiple attackers, to see how the network performance changes. The simulation results show that as the number of attackers increases, the performance in terms of throughput and delay changes.
PERFORMANCE ANALYSIS OF THE LINK-ADAPTIVE COOPERATIVE AMPLIFY-AND-FORWARD REL... (IJCNCJournal)
This paper analyzes the performance of cooperative amplify-and-forward (CAF) relay networks that employ adaptive M-ary quadrature amplitude modulation (M-QAM)/M-ary phase shift keying (M-PSK) digital modulation techniques in a Nakagami-m fading channel. In particular, we present and compare the analysis of CAF relay networks with different cooperative diversity and opportunistic routing strategies such as regular Maximal Ratio Combining (MRC), Selection Diversity Combining (SDC), Opportunistic Relay Selection with Maximal Ratio Combining (ORS-MRC) and Opportunistic Relay Selection with Selection Diversity Combining (ORS-SDC). We advocate a simple yet unified numerical approach based on the marginal moment generating function (MGF) of the total received SNR to compute the average symbol error rate (ASER), mean achievable spectral efficiency, and outage probability performance metrics.
HIDING A MESSAGE IN MP3 USING LSB WITH 1, 2, 3 AND 4 BITS (IJCNCJournal)
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This paper presents a new method which randomly selects positions in an MP3 file to hide a secret text message using the Least Significant Bit (LSB) technique. A unique signature or key marks the start and end locations of the secret text message. The methodology embeds one, two, three or four bits of the secret message into the MP3 file using LSB techniques. The evaluation and performance methods are based on robustness (BER and correlation), imperceptibility (PSNR) and hiding capacity (the ratio between the sizes of the text message and the MP3 cover) indicators. The experimental results show the new method is more secure. Moreover, the contribution of this paper is the provision of a robustness-based classification of LSB steganography models depending on their occurrence in the embedding position.
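The basic 1-bit LSB mechanic is easy to show on a raw byte buffer. A real MP3 embedder must respect frame headers, and the paper's random position selection and signature keys are omitted here; this sketch only demonstrates the bit-level substitution itself:

```python
# Minimal 1-bit LSB embed/extract over a byte buffer (a stand-in for MP3
# audio bytes; real MP3 embedding must skip frame headers, which this
# sketch ignores).

def embed(cover: bytes, secret: bytes) -> bytes:
    """Write each bit of `secret` (MSB first) into the lowest bit of `cover`."""
    bits = [(b >> i) & 1 for b in secret for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small for this message"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Read back n_bytes worth of bits from the lowest bit of each byte."""
    bits = [stego[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )
```

Each cover byte changes by at most 1, which is why single-bit LSB embedding is nearly imperceptible; embedding 2 to 4 bits per byte, as the paper evaluates, trades imperceptibility for capacity.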
These days, considering the expansion of networks, the dissemination of information has become a significant topic for researchers. In social networks, in addition to social structures and people's influence on each other, one can mention increasing sales profit, publishing news or rumors, and the spread or diffusion of ideas. In social societies, people affect each other, and when an individual joins a group, his friends may join that group as well. When publishing a piece of news, independent of its nature, there are different ways to spread it. Since information is not always suitable and positive, this article introduces an immunization mechanism against such information. By immunization we mean a kind of slow publishing of such information in the network. Therefore, this article tries to slow down the publishing of such information or even stop it. By comparing the presented immunization methods and introducing a rate-delay parameter, the methods were evaluated and we identified the most effective immunization method. Among the existing and recommended immunization methods, the recommended methods also play an effective role in preventing the spread of malicious rumors.
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ... (IJCNCJournal)
Along with limited processing and storage capacity, each sensor in a wireless sensor network (WSN) is equipped with a limited power source that is very difficult to replace in most application environments. Improving energy savings in applications for wireless sensor networks is therefore necessary. In this paper, we mainly focus on saving energy consumption in applications for wireless sensor networks with time-critical requirements. Our paper includes an analysis of advanced energy-saving techniques that optimize energy efficiency together with optimal data transmission. Moreover, we propose improvements to increase energy savings in applications for wireless sensor networks with time-critical requirements (LEACH improvements). Simulation results show that our proposed protocol performs significantly better than LEACH in terms of the formation of clusters in each round, the average power, the number of nodes alive, and the average total data received at base stations.
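For context, the improvement targets LEACH's randomized cluster-head rotation. In standard LEACH, a node elects itself cluster head for the current round if a uniform random draw falls below the threshold T(n) computed below; nodes that served recently are excluded so the role rotates:

```python
# Standard LEACH cluster-head election threshold T(n).
# p is the desired fraction of cluster heads; the threshold rises over a
# rotation epoch of 1/p rounds so every node eventually serves once.

def leach_threshold(p, round_no, was_head_recently):
    """Return T(n) for this round; 0 for nodes that served in this epoch."""
    if was_head_recently:
        return 0.0
    epoch = int(round(1.0 / p))
    return p / (1.0 - p * (round_no % epoch))
```

With p = 0.05 the threshold starts at 0.05 and climbs to 1.0 by the last round of each 20-round epoch, guaranteeing the remaining eligible nodes become heads.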
PERFORMANCE ASSESSMENT OF CHAOTIC SEQUENCE DERIVED FROM BIFURCATION DEPENDENT... (IJCNCJournal)
In CDMA systems, m-sequences and Gold codes are often utilized for spreading-despreading and scrambling-descrambling operations. In a previous work, a design framework was created for generating a large family of codes from the logistic map, which have autocorrelation and cross-correlation comparable to m-sequences and Gold codes. The purpose of this work is to evaluate the performance of these chaotic codes in a CDMA environment. In the bit error rate (BER) simulation, matched filter, decorrelator and MMSE receivers have been utilized. The received signal was modelled for a synchronous CDMA uplink for simulation simplicity. An Additive White Gaussian Noise channel model was assumed for the simulation.
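The general recipe the abstract refers to, generating spreading chips by thresholding iterates of the logistic map, can be sketched as follows. The parameter r, the seed, the thresholding rule, and the code length are illustrative choices, not the paper's design framework:

```python
# Binary spreading-code generation from the logistic map x -> r*x*(1-x).
# Parameters are illustrative; r close to 4 keeps the map in its chaotic
# regime so different seeds yield (nearly) uncorrelated codes.

def logistic_code(x0, r=3.99, length=63):
    """Iterate the logistic map and threshold at 0.5 to get chips in {+1,-1}."""
    x, chips = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        chips.append(1 if x >= 0.5 else -1)
    return chips

code_a = logistic_code(0.123456)
code_b = logistic_code(0.123457)  # tiny seed change -> very different code
```

Sensitivity to the seed is what makes the family "large": every distinct initial value yields a distinct candidate code, which the earlier work then screens for correlation properties.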
As wireless sensor networks penetrate the industrial domain, many research opportunities are emerging. One essential and challenging application is node localization. A feed-forward neural network based methodology is adopted in this paper. The Received Signal Strength Indicator (RSSI) values of the anchor node beacons are used. The number of anchor nodes and their configuration have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the one that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the resulting model was then implemented on an Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
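The paper maps RSSI vectors to coordinates with an MLP. As background for why RSSI carries distance information at all, the standard log-distance path-loss relation is shown below; the 1 m reference RSSI and path-loss exponent are typical indoor values assumed for illustration, not values from the paper:

```python
# Log-distance path-loss model relating RSSI (dBm) to distance (m).
# rssi_1m and path_loss_exp are assumed typical indoor values.

import math

def rssi_at(distance_m, rssi_1m=-40.0, path_loss_exp=2.7):
    """Expected RSSI at a given distance from the transmitter."""
    return rssi_1m - 10.0 * path_loss_exp * math.log10(distance_m)

def distance_from(rssi, rssi_1m=-40.0, path_loss_exp=2.7):
    """Invert the model: estimate distance from an observed RSSI."""
    return 10.0 ** ((rssi_1m - rssi) / (10.0 * path_loss_exp))
```

An MLP-based localizer, as in the paper, avoids committing to this parametric form and instead learns the noisy RSSI-to-position mapping directly from beacon measurements.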
Attack graph based risk assessment and optimisation approachIJNSA Journal
Attack graphs are models that offer significant cap
abilities to analyse security in network systems. A
n
attack graph allows the representation of vulnerabi
lities, exploits and conditions for each attack in
a single
unifying model. This paper proposes a methodology
to explore the graph using a genetic algorithm (GA)
.
Each attack path is considered as an independent at
tack scenario from the source of attack to the targ
et.
Many such paths form the individuals in the evoluti
onary GA solution. The population-based strategy of
a
GA provides a natural way of exploring a large numb
er of possible attack paths to find the paths that
are
most important. Thus unlike many other optimisation
solutions a range of solutions can be presented to
a
user of the methodology.
Integrating Threat Modeling in Secure Agent-Oriented Software DevelopmentWaqas Tariq
The main objective of this paper is to integrate threat modeling when developing a software application following the Secure Tropos methodology. Secure Tropos is an agent-oriented software development methodology which integrates “security extensions” into all development phases. Threat modeling is used to identify, document, and mitigate security risks, therefore, applying threat modeling when defining the security extensions shall lead to better modeling and increased level of security. After integrating threat modeling into this methodology, security attack scenarios are applied to the models to discuss how the security level of the system has been impacted. Security attack scenarios have been used to test different enhancements made to the Secure Tropos methodology and the Tropos methodology itself. The system modeled using this methodology is an e-Commerce application that will be used to sell handmade products made in Ecuador through the web. The .NET Model-View-Controller framework is used to develop our case study application. Results show that integrating threat modeling in the development process, the level of security of the modeled application has increased. The different actors, goals, tasks, and security constraints that were introduced based on the proposed integration help mitigate different risks and vulnerabilities.
Software security risk mitigation using object oriented design patternseSAT Journals
Abstract It is now well known that requirement and the design phase of software development lifecycle are the phases where security incorporation yields maximum benefits.In this paper, we have tried to tie security requirements, security features and security design patterns together in a single string. It is complete process that will help a designer to choose the most appropriate security design pattern depending on the security requirements. The process includes risk analysis methodology at the design phase of the software that is based on the common criteria requirement as it is a wellknown security standard that is generally used in the development of security requirements. Risk mitigation mechanisms are proposed in the form of security design patterns. Exhaustive list of most reliable and well proven security design patterns is prepared and their categorization is done on the basis of attributes like data sensitivity, sector, number of users etc. Identified patterns are divided into three levels of security. After the selection of security requirement, the software designer can calculate the percentage of security features contribution and on the basis of this percentage; design pattern level can be selected and applied.
SECURITY VIGILANCE SYSTEM THROUGH LEVEL DRIVEN SECURITY MATURITY MODELIJCSEIT Journal
The success of any software system largely rests on its vigilance efficiency, which prompts organizations to meet their objectives in the arena of networks. In a highly competitive world everything appears to be vulnerable, and information systems are no exception; their security has become a cause of great concern. Software security engineers are trying hard to develop fully protected and highly secure information systems, but these developments are still at a nascent stage. It is quite revealing that earlier research studies paid little attention to establishing an accurate status of the security alertness of developed software. Keeping these factors in mind, this paper proposes a holistic Security Maturity Model (SMM) in which five levels/stars have been developed, driven by the strength of the security vigilance occurring at the various stages of any software. SMM is at a conceptual stage; the detailed steps will require time to be developed so that every software system can reap the benefits of this model. To categorize the level of potency, SMM will be expressed through an appropriate ranking/star system. It is hoped that if SMM is followed in its true letter and spirit, it will restore clients' trust and confidence in software products and their vendors. Moreover, it will also enable the software industry to follow transparent and ethical practices.
A risk and security assessment of VANET availability using attack tree concept IJECEIAES
The challenging nature of insecure wireless channels and the open-access environment make the protection of vehicular ad hoc networks (VANETs) a particularly critical issue. Robust approaches to protect this network's security and privacy against attacks need to be improved to reach an adequate level and to secure the confidential information of drivers and passengers. Accordingly, to improve the security of VANET, it is necessary to carry out a risk assessment in order to evaluate the risk the network is facing. This paper focuses on the security threats in VANET, particularly on the availability of this network. We propose a novel risk assessment approach to evaluate the risk of attacks against VANET availability. We adopt a tree structure called an attack tree to model the attacker's potential attack strategies. Based on this attack tree, we can estimate the probability that a specific threat might be realized against VANET and detect potential attack sequences that an attacker could launch against VANET availability. Then, we utilize multi-attribute utility theory to measure the total risk value of the system, including the probability of each attack sequence. The analysis results of the study will help decision-makers take effective precautions to prevent attacks on this network's availability.
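The attack-tree combination rule the abstract relies on can be sketched in a few lines. This is a minimal illustration with an invented toy tree and invented probabilities, not the paper's actual VANET model: AND gates require all child steps to succeed, OR gates require at least one, and children are assumed independent.

```python
# Minimal attack-tree probability roll-up (illustrative tree and values only).

def attack_prob(node):
    """Recursively compute the probability that a (sub)goal is achieved.
    Leaves carry a probability; AND nodes require all children to succeed,
    OR nodes require at least one (children assumed independent)."""
    if "p" in node:                      # leaf attack step
        return node["p"]
    child_ps = [attack_prob(c) for c in node["children"]]
    if node["gate"] == "AND":
        prob = 1.0
        for p in child_ps:
            prob *= p
        return prob
    # OR gate: 1 minus the probability that every child fails
    fail = 1.0
    for p in child_ps:
        fail *= (1.0 - p)
    return 1.0 - fail

# Toy tree: deny availability via jamming, OR (spoofing AND flooding)
tree = {"gate": "OR", "children": [
    {"p": 0.3},                                             # jamming
    {"gate": "AND", "children": [{"p": 0.5}, {"p": 0.4}]},  # spoof + flood
]}
print(round(attack_prob(tree), 3))  # 1 - (1-0.3)*(1-0.5*0.4) = 0.44
```

The same traversal also enumerates which leaf combinations (attack sequences) reach the root, which is what the risk assessment then weights by utility.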
A SURVEY ON TECHNIQUES REQUIREMENTS FOR INTEGRATING SAFETY AND SECURITY ENGI...IJCSES Journal
Nowadays, safety and security have become requirements, integrated with each other, for information systems as a new generation of infrastructure systems distributed across networks. This has opened the door to questions about whether these systems are safety-critical, especially since they were tested in closed, separate environments and are now deployed in an uncontrollable environment, namely the internet, where the number of threats is enormous. It has also prompted discussion of new development approaches that take safety and security into consideration throughout the system development life cycle and, most importantly, identify hazards, risks, and threats. We conduct a survey exploring the technical languages that scholars have created to combine safety and security requirement engineering with accident analysis technique languages.
Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructurejbasney
Presented Nov 11 2017
http://www.stem-trek.org/news-events/urisc/
“Lightweight Cybersecurity Risk Assessment Tools for Cyberinfrastructure”
Risk assessment provides valuable insights to the cyberinfrastructure security program, but launching a risk assessment process can seem daunting for all but the largest projects. Jim Basney will present risk assessment tools (checklists, spreadsheets, templates) developed by CTSC (trustedci.org) for getting started on a lightweight risk assessment for cyberinfrastructure projects of varying types and sizes.
CYBERSECURITY INFRASTRUCTURE AND SECURITY AUTOMATIONacijjournal
AI-based security systems utilize big data and powerful machine learning algorithms to automate the security management task. The case study methodology is used to examine the effectiveness of AI-enabled security solutions. The result shows that compared with the signature-based system, AI-supported security applications are efficient, accurate, and reliable. This is because the systems are capable of reviewing and correlating large volumes of data to facilitate the detection and response to threats.
Security has always been a great concern for software systems, due to the increased incursion of wireless devices in recent years. Software engineering processes generally try to bolt security measures onto the various design phases, which results in ineffective measures. This calls for a new software engineering process that provides a proper framework for integrating security requirements into the SDLC, in which requirement engineers must discover all the security requirements related to a particular system so that they can be analyzed and prioritized in one pass. In this paper we present a new technique for prioritizing these requirements based on risk measurement techniques. The true security requirements should be identified as early as possible so that they can be systematically analyzed and every architecture team can choose the most appropriate mechanisms to implement them.
ANALYTIC HIERARCHY PROCESS-BASED FUZZY MEASUREMENT TO QUANTIFY VULNERABILITIE...IJCNCJournal
Much research has been conducted to detect vulnerabilities in Web applications; however, none of it has proposed a methodology to measure those vulnerabilities either qualitatively or quantitatively. In this paper, a methodology is proposed to investigate the quantification of vulnerabilities in Web applications. We applied the Goal Question Metric (GQM) methodology to determine all possible security factors and sub-factors of Web applications in the Department of Transportation (DOT) as our proof of concept. Then we introduced a Multi-layered Fuzzy Logic (MFL) approach based on the security sub-factors' prioritization in the Analytic Hierarchy Process (AHP). Using AHP, we weighted each security sub-factor before the quantification process in the fuzzy logic, to handle imprecise crisp-number calculation.
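The AHP weighting step mentioned above can be sketched with the common column-normalization shortcut for approximating the principal eigenvector. The three sub-factors and their Saaty-scale pairwise judgments below are invented for illustration, not taken from the DOT case study:

```python
# AHP weight sketch (hypothetical sub-factors and pairwise judgments).
# Rows/cols: authentication, input validation, session management.
A = [
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
]
n = len(A)
# Normalize each column by its sum, then average across each row.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]
print([round(w, 3) for w in weights])  # weights sum to 1, largest first
```

These weights would then scale the fuzzy membership scores of each sub-factor before aggregation.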
PERFORMANCE EVALUATION OF WIRELESS SENSOR NETWORK UNDER HELLO FLOOD ATTACKIJCNCJournal
Wireless sensor networks (WSNs) are widely used in many fields. Such a network consists of tiny, lightweight sensor nodes and is largely used to scan, detect, or monitor environments. Since the sensor nodes are tiny and lightweight, they impose limitations on resources such as power usage, task processing, and radio frequency range. These limitations leave the network vulnerable to many different types of attacks, such as the hello flood attack, black hole, Sybil attack, sinkhole, and many more; among these, hello flood is one of the most important. In this paper, we have analyzed the performance of the hello flood attack and compared network performance as the number of attackers increases. Network performance is evaluated by modifying the ad-hoc on-demand distance vector (AODV) routing protocol using the NS2 simulator. It has been tested under different scenarios (no attacker, a single attacker, and multiple attackers) to see how network performance changes. The simulation results show that as the number of attackers increases, performance in terms of throughput and delay changes.
PERFORMANCE ANALYSIS OF THE LINK-ADAPTIVE COOPERATIVE AMPLIFY-AND-FORWARD REL...IJCNCJournal
This paper analyzes the performance of cooperative amplify-and-forward (CAF) relay networks that
employ adaptive M-ary quadrature amplitude modulation (M-QAM)/M-ary phase shift keying (M-PSK)
digital modulation techniques in a Nakagami-m fading channel. In particular, we present and compare the
analyses of CAF relay networks with different cooperative diversity and opportunistic routing strategies
such as regular Maximal Ratio Combining (MRC), Selection Diversity Combining (SDC), Opportunistic
Relay Selection with Maximal Ratio Combining (ORS-MRC) and Opportunistic Relay Selection with
Selection Diversity Combining (ORS-SDC). We advocate a simple yet unified numerical approach based on
the marginal moment generating function (MGF) of the total received SNR to compute the average symbol
error rate (ASER), mean achievable spectral efficiency, and outage probability performance metrics.
HIDING A MESSAGE IN MP3 USING LSB WITH 1, 2, 3 AND 4 BITSIJCNCJournal
Steganography is the art of hiding information in ways that prevent the detection of hidden messages. This paper presents a new method that randomly selects positions in an MP3 file to hide a secret text message using the Least Significant Bit (LSB) technique. A unique signature or key marks the start and end locations of the secret message. The methodology embeds one, two, three, or four bits of the secret message at a time into the MP3 file using LSB techniques. The evaluation and performance methods are based on robustness (BER and correlation), imperceptibility (PSNR), and hiding-capacity (ratio between the sizes of the text message and the MP3 cover) indicators. The experimental results show that the new method is more secure. Moreover, the contribution of this paper is the provision of a robustness-based classification of LSB steganography models depending on their embedding position.
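The core k-bit LSB mechanics described above can be sketched over a raw byte stream. This is only an illustration of the embedding/extraction arithmetic: a real MP3 method like the one in the paper must also respect frame headers, the random position selection, and the start/end signature, none of which are modelled here.

```python
# k-bit LSB embed/extract over raw bytes (illustrative, not MP3-frame-aware).

def embed(cover, message, k=1):
    """Hide message bytes in the k least significant bits of cover bytes."""
    bits = [(b >> i) & 1 for b in message for i in range(7, -1, -1)]
    while len(bits) % k:          # pad to a whole number of k-bit chunks
        bits.append(0)
    stego = bytearray(cover)
    for pos in range(len(bits) // k):
        chunk = 0
        for bit in bits[pos * k:(pos + 1) * k]:
            chunk = (chunk << 1) | bit
        stego[pos] = (stego[pos] & ~((1 << k) - 1)) | chunk
    return bytes(stego)

def extract(stego, n_bytes, k=1):
    """Recover n_bytes of hidden message from the k LSBs of the cover bytes."""
    bits, pos = [], 0
    while len(bits) < n_bytes * 8:
        v = stego[pos] & ((1 << k) - 1)
        bits.extend((v >> i) & 1 for i in range(k - 1, -1, -1))
        pos += 1
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit in bits[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

cover = bytes(range(64))
assert extract(embed(cover, b"hi", k=2), 2, k=2) == b"hi"
```

Larger k raises hiding capacity but also the distortion of the cover, which is exactly the robustness/imperceptibility trade-off the paper measures with BER and PSNR.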
These days, considering the expansion of networks, the dissemination of information has become a significant topic for researchers. In social networks, beyond social structures and people's influence on one another, applications include increasing sales profit and spreading news, rumors, or ideas. In social communities people affect each other, and when an individual joins a group, his friends may join that group as well. A piece of news, independent of its nature, can be spread in different ways. Since information is not always suitable or positive, this article introduces an immunization mechanism against such information, where immunization means slowing the publication of the information in the network, or even stopping it. By comparing the presented immunization methods and introducing a rate-delay parameter, the methods were evaluated and the most effective immunization method identified. Among the existing and recommended methods, the recommended methods also play an effective role in preventing the spread of malicious rumors.
ENERGY SAVINGS IN APPLICATIONS FOR WIRELESS SENSOR NETWORKS TIME CRITICAL REQ...IJCNCJournal
Along with limited processing and storage capacity, each sensor in a wireless sensor network (WSN) is equipped with a limited power source that is very difficult to replace in most application environments, so improving energy savings in WSN applications is necessary. In this paper, we mainly focus on energy-consumption savings in WSN applications with time-critical requirements. Our paper analyzes advanced energy-saving techniques that optimize energy efficiency together with data transmission. Moreover, we propose improvements (LEACH improvements) to increase energy savings in WSN applications with time-critical requirements. Simulation results show that our proposed protocol performs significantly better than LEACH with respect to cluster formation in each round, average power, the number of nodes alive, and the average total data received at base stations.
PERFORMANCE ASSESSMENT OF CHAOTIC SEQUENCE DERIVED FROM BIFURCATION DEPENDENT...IJCNCJournal
In CDMA system, m-sequence and Gold codes are often utilized for spreading-despreading and
scrambling-descrambling operations. In a previous work, a design framework was created for generating
large family of codes from logistic map, which have comparable autocorrelation and cross correlation to
m-sequence and Gold codes. The purpose of this work is to evaluate the performance of these chaotic
codes in a CDMA environment. In the bit error rate (BER) simulations, matched filter, decorrelator, and MMSE receivers have been utilized. The received signal was modelled for a synchronous CDMA uplink for simplicity, and an Additive White Gaussian Noise channel model was assumed for the simulation.
As Wireless Sensor Networks penetrate the industrial domain, many research opportunities are emerging. One such essential and challenging application is node localization. A feed-forward neural network based methodology is adopted in this paper, using the Received Signal Strength Indicator (RSSI) values of the anchor node beacons. The number of anchor nodes and their configurations have an impact on the accuracy of the localization system, which is also addressed in this paper. Five different training algorithms are evaluated to find the one that gives the best result. The multi-layer perceptron (MLP) neural network model was trained using Matlab. In order to evaluate the performance of the proposed method in real time, the obtained model was then implemented on an Arduino microcontroller. With four anchor nodes, an average 2D localization error of 0.2953 m has been achieved with a 12-12-2 neural network structure. The proposed method can also be implemented on any other embedded microcontroller system.
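The forward pass of a 12-12-2 MLP of the kind described above is simple enough to sketch. The weights below are random placeholders (not the trained Matlab model), and the assumption that the 12 inputs are, say, three RSSI samples from each of four anchors is mine:

```python
# Forward pass of a hypothetical 12-12-2 MLP (random, untrained weights).
import math
import random

random.seed(0)

def make_weights(n_in, n_out):
    # each output neuron gets n_in weights plus a trailing bias term
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def layer(x, weights, activation):
    return [activation(w[-1] + sum(wi * xi for wi, xi in zip(w, x)))
            for w in weights]

W_hidden = make_weights(12, 12)   # 12 inputs -> 12 hidden units
W_out = make_weights(12, 2)       # 12 hidden units -> (x, y) estimate

# 12 RSSI readings (e.g. 3 samples from each of 4 anchors), scaled to [0, 1]
rssi = [random.uniform(-90, -40) for _ in range(12)]
features = [(r + 90.0) / 50.0 for r in rssi]

xy = layer(layer(features, W_hidden, math.tanh), W_out, lambda s: s)
print(xy)  # untrained 2D position estimate
```

On the Arduino side, only this forward pass (with the trained weights baked in) needs to run, which is why the method fits on a small microcontroller.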
ENHANCEMENT OF TRANSMISSION RANGE ASSIGNMENT FOR CLUSTERED WIRELESS SENSOR NE...IJCNCJournal
Transmission range assignment in clustered wireless networks governs the balance between energy conservation and the connectivity needed to deliver data to the sink or gateway node. The aim of this research is to optimize energy consumption by reducing the transmission ranges of the nodes while maintaining a high probability of end-to-end connectivity to the network's data sink. We modified the approach given in [1] to achieve more than 25% power saving by reducing the cluster head (CH) transmission range of the backbone nodes in a multihop wireless sensor network, while ensuring at least 95% end-to-end connectivity probability.
ADAPTIVE HANDOVER HYSTERESIS AND CALL ADMISSION CONTROL FOR MOBILE RELAY NODESIJCNCJournal
The aim of equipping a wireless network with a mobile relay node is to support broadband wireless communications for vehicular users and their devices. The high mobility of vehicular users, possibly at very high velocity in the area in which two cells overlap, can cause the network to suffer from a reduced handover success rate and, hence, increased radio link failure. The combined impact of these problems is service interruption for vehicular users, so handover schemes are crucial in solving them. In this work, we first present an adaptive handover hysteresis scheme for a wireless network with mobile relay nodes in the high-speed train scenario; specifically, our adaptive hysteresis scheme is based on the velocity of the train. Second, the handover call dropping probability is reduced by introducing a modified call admission control scheme that supports radio resource reservation for handover calls and prioritizes handover calls of the mobile relay over other calls. The proposed solution, in which the adaptive parameter is combined with call admission control, is evaluated by system-level simulation. Our simulation results illustrate an increased handover success rate and reduced radio link failures.
Sensor nodes are highly mobile, which makes applications running on them face network-related problems such as node failure, link failure, network-level disconnection, and scarcity of resources. Node failures and network faults need to be monitored continuously by supervising the network status, especially for critical applications like health monitoring systems. We propose the Node Monitoring Protocol (NMP) to monitor node health using agents and to ensure that each node receives the promised quality of service. These nodes sense the environment and communicate important data to the sink or base station. To establish the correct event time, the nodes need to be synchronized with a global clock, so time synchronization is a very important parameter. We have built a simulation environment for validating NMP in order to assess the reliability of health monitoring systems. The formal Specification and Description Language (SDL) tool has been used to validate NMP at design time, to increase confidence in, and the efficiency of, the system.
PERFORMANCE ANALYSIS OF MULTI-PATH TCP NETWORKIJCNCJournal
MPTCP, proposed by an IETF working group, allows a single TCP stream to be split across multiple paths, with obvious benefits in performance and reliability. MPTCP has been implemented in Linux-based distributions and can be compiled and installed for both real and experimental scenarios. In this article, we provide performance analyses of MPTCP with a laptop connected to a WiFi access point and a 3G cellular network at the same time. We show experimentally that MPTCP outperforms regular TCP over the WiFi or 3G interface alone. We also compare four congestion control algorithms for MPTCP that are implemented in the Linux kernel. Results show that the Linked Increase congestion control algorithm outperforms the others under normal traffic load, while the Balanced Linked Adaptation algorithm outperforms the rest when the paths are shared with heavy traffic, which is not supported by MPTCP.
Using spectral radius ratio for node degreeIJCNCJournal
In this paper, we show that the spectral radius ratio for node degree can be used to analyze the variation of node degree during the evolution of complex networks. We focus on three commonly studied models of complex networks: random networks, scale-free networks, and small-world networks. The spectral radius ratio for node degree is defined as the ratio of the principal (largest) eigenvalue of the adjacency matrix of a network graph to the average node degree. During the evolution of each of the above three categories of networks (using the appropriate evolution model for each category), we observe the spectral radius ratio for node degree to exhibit a high to very high positive correlation (0.75 or above) with the coefficient of variation of node degree (the ratio of the standard deviation of node degree to the average node degree). We show that the spectral radius ratio for node degree can be used as the basis to tune the operating parameters of the evolution models for each of the three categories of complex networks, as well as to analyze the impact of specific operating parameters for each model.
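The ratio itself is easy to compute. The sketch below uses two toy graphs of my own choosing (a star, where degrees vary widely, and a cycle, where they do not) rather than the evolving network models studied in the paper:

```python
# Spectral radius ratio for node degree on two illustrative toy graphs.

def spectral_radius(adj, iters=500):
    """Largest eigenvalue of a symmetric 0/1 adjacency matrix, via power
    iteration on A + I (the shift avoids oscillation on bipartite graphs)."""
    n = len(adj)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam - 1.0

def spectral_radius_ratio(adj):
    avg_degree = sum(sum(row) for row in adj) / len(adj)
    return spectral_radius(adj) / avg_degree

# Star K_{1,4}: hub node 0 linked to four leaves (high degree variation)
star = [[0, 1, 1, 1, 1]] + [[1, 0, 0, 0, 0] for _ in range(4)]
# Cycle C_5: every node has degree 2 (a perfectly regular graph)
cycle = [[1 if j in ((i - 1) % 5, (i + 1) % 5) else 0 for j in range(5)]
         for i in range(5)]

print(spectral_radius_ratio(star))   # > 1: degrees vary widely
print(spectral_radius_ratio(cycle))  # = 1: no degree variation
```

A ratio of exactly 1 indicates a regular graph; the further above 1 it climbs during network evolution, the larger the degree variation, mirroring the coefficient of variation the paper correlates it with.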
In this paper, an application-based QoS evaluation approach for heterogeneous networks is proposed. It is possible to expand network capacity and coverage in a dynamic fashion by applying a heterogeneous wireless network architecture. However, Quality of Service (QoS) evaluation of this type of network architecture is very challenging due to the presence of different communication technologies. Different communication technologies have different characteristics, and the applications that utilize them have unique QoS requirements. Although the communication technologies have different performance measurement parameters, the applications using these radio access networks have the same QoS requirements. As a result, it is easier to evaluate the QoS of the access networks, and of the overall network configuration, based on the performance of the applications running on them. Using such an application-based QoS evaluation approach, the heterogeneous nature of the underlying networks and the diversity of their traffic can be adequately taken into account. Through simulation studies, we show that the application-performance-based assessment approach facilitates better QoS management and monitoring of heterogeneous network configurations.
Maximizing network interruption in wirelessIJCNCJournal
With the colossal growth of wireless sensor networks (WSNs) in applications ranging from home automation to military affairs, the pressure to ensure security in such networks is paramount. Considering the security challenges, developing a secured WSN system is a genuinely hard effort. Moreover, as information technology grows popular, intruders devise new ideas to break system security, harm the network, and degrade system quality, with the goal of taking control of the network to corrupt it or to extract benefit from it. Intruders corrupt the system only when the security breaking cost (SBC) is low compared with the benefits they attain or the harm they can cause to others. In this paper, the authors define the term "maximizing network interruption problem" and propose a technique, called the grid point approximation algorithm, to estimate the SBC of a multi-hop WSN so that it can be made tougher for an intruder to break the system security. It is assumed that the intruder has a complete picture of the entire network. The technique is designed from the intruder's point of view: completely jamming all the sensor nodes in the network by placing jammers or malicious nodes strategically, while keeping the number of jammer nodes at or near the minimum. To the best of the authors' knowledge, no work of the same kind has been proposed so far. Experimental results under changes of the different network parameters show that the proposed algorithm provides excellent performance in achieving these targets.
A NOVEL METHOD TO TEST DEPENDABLE COMPOSED SERVICE COMPONENTSIJCNCJournal
Assessing the performance and dependability of Web service systems is crucial for the development of today's applications. The performance and Fault Tolerance Mechanisms (FTMs) of composed service components are hard to measure at design time because service instability is often caused by the nature of network conditions, and using a real internet environment for testing is difficult to set up and control. We have introduced a fault injection toolkit that emulates a WAN within a LAN environment between composed service components and offers full control over the emulated environment, in addition to the capability to inject network-related faults and application-specific faults. The toolkit also generates background workloads on the system under test so as to produce more realistic results. We describe an experiment performed with our toolkit to examine the impact of fault tolerance protocols deployed at a service client.
Peer-to-peer streaming technology has become one of the major Internet applications, as it offers the opportunity of broadcasting high-quality video content to a large number of peers at low cost. It is widely accepted that with efficient utilization of the peers' and server's upload capacities, peers can enjoy watching a high-bit-rate video with minimal end-to-end delay. In this paper, we present a practical scheduling algorithm that works in the challenging condition where no spare capacity is available, i.e., it maximally utilizes the resources and broadcasts the maximum streaming rate. Each peer contacts only a small number of neighbours in the overlay network and autonomously subscribes to sub-streams according to a budget model, in such a way that the number of peers forwarding exactly one sub-stream is maximized. The hop-count delay is also taken into account to construct short-depth trees. Finally, we show through simulation that peers dynamically converge to an efficient overlay structure with a short hop-count delay. Moreover, the proposed scheme performs well in the homogeneous case and outperforms SplitStream in all simulated scenarios.
APPROXIMATING NASH EQUILIBRIUM UNIQUENESS OF POWER CONTROL IN PRACTICAL WSNSIJCNCJournal
Transmission power has a major impact on link and communication reliability and on network lifetime in Wireless Sensor Networks. We study power control in a multi-hop Wireless Sensor Network where nodes' communications interfere with each other. Our objective is to determine each node's transmission power level so as to reduce communication interference and keep energy consumption to a minimum. We propose a potential game approach to obtain the unique equilibrium of the network transmission power allocation. The unique equilibrium is located in a continuous domain; however, radio transceivers accept only discrete values for the transmission power level setting. We study the viability and performance of mapping the continuous solution from the potential game to the discrete domain required by the radio. We demonstrate the success of our approach through TOSSIM simulation when nodes use the Collection Tree Protocol for routing the data. We also show results of our method from the Indriya testbed, comparing it with the case where the motes use the Collection Tree Protocol with maximum transmission power.
Advancements in Complementary Metal Oxide Semiconductor (CMOS) technology have enabled Wireless Sensor Networks (WSNs) to gather, process, and transport multimedia (MM) data as well, no longer being limited to handling ordinary scalar data. This new generation of WSN is called Wireless Multimedia Sensor Networks (WMSNs). Better and yet relatively cheaper sensors, able to sense both scalar and multimedia data and equipped with more advanced functionality such as handling rather intense computations easily, have sprung up. In this paper, the applications, architectures, challenges, and issues faced in the design of WMSNs are explored. Security and privacy issues, overall requirements, solutions proposed and implemented so far, some of the successful achievements, and other related works in the field are also highlighted. Open research areas are pointed out and a few solutions to the still persistent problems are suggested which, to the best of my knowledge, haven't been explored so far.
On the approximation of the sum of lognormals by a log skew normal distributionIJCNCJournal
Several methods have been proposed to approximate the sum of lognormal RVs. However, the accuracy of each method relies highly on the region of the resulting distribution being examined and on the individual lognormal parameters, i.e., mean and variance; no single method provides the needed accuracy for all cases. This paper proposes a universal yet very simple approximation method for the sum of lognormals based on a log skew normal approximation. The main contribution of this work is an analytical method for log skew normal parameter estimation. The proposed method provides a highly accurate approximation to the sum of lognormal distributions over the whole range of dB spreads for any correlation coefficient. Simulation results show that our method outperforms all previously proposed methods and provides an accuracy within 0.01 dB for all cases.
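For context, one of the classical baselines such a method is measured against is the Fenton-Wilkinson moment match, which fits a single lognormal to the sum by matching its exact mean and variance. The sketch below implements that baseline (not the paper's log skew normal fit) with illustrative parameters of my own choosing:

```python
# Fenton-Wilkinson lognormal moment match for a sum of independent lognormals
# (a classical baseline; parameters below are illustrative only).
import math
import random

def fenton_wilkinson(mus, sigmas):
    """Fit one lognormal to the sum by matching the sum's mean and variance."""
    m = sum(math.exp(mu + s * s / 2.0) for mu, s in zip(mus, sigmas))
    v = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * mu + s * s)
            for mu, s in zip(mus, sigmas))
    sigma2 = math.log(1.0 + v / (m * m))
    return math.log(m) - sigma2 / 2.0, math.sqrt(sigma2)

random.seed(1)
mus, sigmas = [0.0, 0.5], [0.6, 0.6]
mu_z, sig_z = fenton_wilkinson(mus, sigmas)

# Monte Carlo sanity check: the fitted lognormal reproduces the mean of the sum
samples = [sum(random.lognormvariate(m, s) for m, s in zip(mus, sigmas))
           for _ in range(200000)]
mc_mean = sum(samples) / len(samples)
fit_mean = math.exp(mu_z + sig_z * sig_z / 2.0)
print(mc_mean, fit_mean)
```

Moment matching pins down the bulk of the distribution but is known to lose accuracy in the tails, which is the weakness the log skew normal approach targets.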
PREDICTIVE CYBER SECURITY ANALYTICS FRAMEWORK: A NONHOMOGENOUS MARKOV MODEL F...cscpconf
Numerous security metrics have been proposed in the past for protecting computer networks. However, we still lack effective techniques to accurately measure the predictive security risk of an enterprise, taking into account the dynamic attributes associated with vulnerabilities that can change over time. In this paper we present a stochastic security framework for obtaining quantitative measures of security using attack graphs. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. Gaining a better understanding of the relationship between vulnerabilities and their lifecycle events can give security practitioners a clearer picture of their state of security. In order to have a more realistic representation of how the security state of the network varies over time, a nonhomogeneous model is developed that incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
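The nonhomogeneous mechanism described above can be sketched with a two-state toy chain: the daily transition matrix is not fixed but depends on the vulnerability's age. The age-dependent exploit probability below is a crude invented stand-in for the daily matrices the paper estimates from Frei's lifecycle model, and patching is deliberately omitted:

```python
# Two-state nonhomogeneous Markov sketch: states are (not exploited, exploited),
# with a toy age-dependent daily exploit probability (illustrative only).

def daily_exploit_prob(age):
    # covariate effect: the chance of an exploit appearing grows with age
    return min(0.002 + 0.0001 * age, 1.0)

def evolve(days):
    dist = [1.0, 0.0]                      # start fully secure
    for t in range(days):
        p = daily_exploit_prob(t)
        # daily matrix P(t) = [[1-p, p], [0, 1]]; "exploited" is absorbing
        dist = [dist[0] * (1.0 - p), dist[0] * p + dist[1]]
    return dist

print(round(evolve(90)[1], 3))  # probability of compromise after 90 days
```

Because the matrix changes every day, the chain is nonhomogeneous: the compromise probability accelerates as the vulnerability ages, which is the qualitative behaviour the framework is built to capture.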
ONTOLOGY-BASED MODEL FOR SECURITY ASSESSMENT: PREDICTING CYBERATTACKS THROUGH...IJNSA Journal
The prediction of attacks is essential for the prevention of potential risk, so risk forecasting contributes significantly to the optimization of the information security budget. This article focuses on the ontology and stages of a cyberattack. It introduces the main representatives of the attacking side and describes their motivation.
ISSA Journal September 2008Article Title Article Author.docxbagotjesusa
ISSA The Global Voice of Information Security
Extending the McCumber Cube to Model Network Defense
By Sean M. Price – ISSA member Northern Virginia, USA chapter
This article proposes an extension to the McCumber Cube information security model to determine the best countermeasures to achieve a desired security goal.
Confidentiality, integrity, and availability are the security services of a system. In other words, they are the security goals of system defense, intangible attributes [1] providing assurances for the information protected. Each service is realized when the appropriate countermeasures for a given information state are in place. But it is not enough to select countermeasures ad hoc. Countermeasures should be selected to defend a system and its information against specific types of attacks. When attacks against particular information states are considered, the necessary countermeasures can be selected to achieve the desired security service or goal. This article proposes an extension to the McCumber Cube information security model as a way for the security practitioner to consider the best countermeasures to achieve the desired security goal.
Security models
Models are useful tools to help understand complex topics. A well-developed model can often be represented graphically, allowing a deeper understanding of the relationships of the components that make the whole. A formal security model is broadly applicable and rigorously developed using formal methods.2 In contrast, an informal model is considered lacking one or both of these qualities. There are a variety of informal models in the information security world which are regularly used by security practitioners to understand basic information and concepts.
1 Security goals often lack explicit definitions and are difficult to quantify. They are usually based on policies with broad interpretations and tend to be qualitative. It is true that security goals emerge from the confluence of information states and countermeasures which have measurable attributes. But the subjective nature of security goals combined with informal modeling characterizes their attributes as intangible.
2 P. T. Devanbu and S. Stubblebine, “Software Engineering for Security: A Roadmap,”
Proceedings of the Conference on The Future of Software Engineering (2000), 227-239.
One such informal model is the generally accepted risk assessment framework. This model is used to assess risk by estimating asset values, vulnerabilities, threats with their likelihood of exploiting a vulnerability, and losses. Figure 1 illustrates this model. Note that this commonly used model requires a substantial amount of estimating on the part of the risk assessment participants. This is problematic when reliable estimates cannot be obtained. Another problem with this model is that it does not guide th.
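The estimate-driven model described above is commonly written in the annualized-loss convention; the article names the inputs (asset value, likelihood, loss) but not a formula, so the SLE/ARO terms and all numbers below are the conventional placeholders, not figures from the article.

```python
# The generally accepted risk model sketched in code: risk is built
# entirely from participant estimates of value, exposure, and rate.
# SLE = single loss expectancy, ARO = annualized rate of occurrence,
# ALE = annualized loss expectancy (standard convention; the inputs
# here are illustrative).
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where SLE = asset value * exposure factor."""
    sle = asset_value * exposure_factor  # estimated loss per incident
    return sle * annual_rate             # estimated incidents per year

# Example: a $100,000 asset, 40% loss per incident, 0.5 incidents/year.
ale = annualized_loss_expectancy(100_000, 0.4, 0.5)  # -> 20000.0
```

Every operand is an estimate, which makes concrete the article's complaint: when reliable estimates cannot be obtained, the output inherits all of their uncertainty.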
future internet
Article
ERMOCTAVE: A Risk Management Framework for IT
Systems Which Adopt Cloud Computing
Masky Mackita 1, Soo-Young Shin 2 and Tae-Young Choe 3,*
1 ING Bank, B-1040 Brussels, Belgium; [email protected]
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea;
[email protected]
3 Department of Computer Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
* Correspondence: [email protected]; Tel.: +82-54-478-7526
Received: 22 June 2019; Accepted: 3 September 2019; Published: 10 September 2019
Abstract: Many companies are adopting cloud computing technology because moving to the cloud has an array of benefits. During decision-making processes for adopting cloud computing, the importance of risk management is progressively recognized. However, traditional risk management methods cannot be applied directly to cloud computing when data are transmitted and processed by external providers. When they are directly applied, risk management processes can fail by ignoring the distributed nature of cloud computing and leaving numerous risks unidentified. Against this backdrop, this paper introduces a new risk management method, Enterprise Risk Management for Operationally Critical Threat, Asset, and Vulnerability Evaluation (ERMOCTAVE), which combines Enterprise Risk Management and Operationally Critical Threat, Asset, and Vulnerability Evaluation for mitigating risks that can arise with cloud computing. ERMOCTAVE is composed of two risk management methods, combining each component of one with the processes of the other for a comprehensive perception of risks. In order to explain ERMOCTAVE in detail, a case study scenario is presented where an Internet seller migrates some modules to the Microsoft Azure cloud. A functionality comparison with the ENISA and Microsoft cloud risk assessments shows that ERMOCTAVE has additional features, such as key objectives and strategies, critical assets, and risk measurement criteria.
Keywords: risk management; ERM; OCTAVE; cloud computing; Microsoft Azure
1. Introduction
Cloud computing is a technology that uses virtualized resources to deliver IT services through the
Internet. It can also be defined as a model that allows network access to a pool of computing resources
such as servers, applications, storage, and services, which can be quickly offered by service providers [1].
One of the properties of the cloud is its distributed nature [2]. Data in cloud environments have become increasingly distributed, moving from a centralized model to a distributed model. That distributed nature causes cloud computing actors to face problems like loss of data control, difficulties in demonstrating compliance, and additional legal risks as data migrate from one legal jurisdiction to another. An example
is Salesforce.com, which suffered a huge outage, locking more than 900,000 subscribers out of important
resources needed for business trans ...
INFORMATION AND COMMUNICATION SECURITY MECHANISMS FOR MICROSERVICES-BASED SYS... - IJNSA Journal
Security has become paramount in modern software services as more and more security breaches emerge, impacting final users and organizations alike. Trends like the Microservice Architecture bring new security challenges related to communication, system design, development, and operation. The literature presents a plethora of security-related solutions for microservices-based systems, but the spread of this information makes it difficult for practitioners to adopt novel security-related solutions. In this study, we aim to present a catalogue and discussion of security solutions based on algorithms, protocols, standards, or implementations; supporting principles or characteristics of information security, considering the three possible states of data, according to the McCumber Cube. Our research follows a Systematic Literature Review, synthesizing the results with a meta-aggregation process. We identified a total of 30 primary studies, yielding 75 security solutions for the communication of microservices.
EXPLORING CRITICAL VULNERABILITIES IN SIEM IMPLEMENTATION AND SOC SERVICE PRO... - IJNSA Journal
This research paper examines the high risks encountered while using a Security Information and Event Management (SIEM) product or acquiring Security Operations Center (SOC) services. The paper focuses on key challenges such as insufficient logging, the importance of live log retentions, scalability concerns, and the critical aspect of correlation within SIEM. It also emphasizes the significance of compliance with various standards and regulations, as well as industry best practices for effective cybersecurity incident detection, response, and management.
Network Threat Characterization in Multiple Intrusion Perspectives using Data... - IJNSA Journal
For effective security incident response on the network, a reliable approach must be in place in both the protected and unprotected regions of the network. This is because a compromise in the demilitarized zone could be a precursor to a threat inside the network. The increased complexity of attacks in present times and the vulnerability of systems are motivations for this work. Past and present approaches to intrusion detection and prevention have neglected victim and attacker properties, despite the fact that for intrusion to occur, an overt act by an attacker and a manifestation, observable by the intended victim, which results from that act are required. Therefore, this paper presents a threat characterization model for attacks from the victim and attacker perspectives of intrusion using a data mining technique. The data mining technique combines Frequent Temporal Sequence Association Mining and Fuzzy Logic. The Apriori Association Mining algorithm was used to mine temporal rule patterns from alert sequences, while a Fuzzy Control System was used to rate exploits. The results of the experiment show that accurate threat characterization in multiple intrusion perspectives could be actualized using Fuzzy Association Mining. Also, the results proved that sequences of exploits could be used to rate threats and are motivated by victim properties and attacker objectives.
Include at least 250 words in your posting and at least 250 words in your reply. Indicate at least one source or reference in your original post. Please see syllabus for details on submission requirements.
Module 1 Discussion Question
Search "scholar.google.com" for a company, school, or person that has been the target of a network
or system intrusion? What information was targeted? Was the attack successful? If so, what changes
were made to ensure that this vulnerability was controlled? If not, what mechanisms were in-place to protect against the intrusion.
Reply-1(Shravan)
Introduction:
Intrusion detection systems (IDSs) are software or hardware systems that automate the process of monitoring the events occurring in a computer system or network, analyzing them for signs of security problems. As network attacks have increased in number and severity over recent years, intrusion detection systems have become a necessary addition to the security infrastructure of most organizations. This guidance document is intended as a primer in intrusion detection, developed for those who need to understand what security goals intrusion detection mechanisms serve, how to select and configure intrusion detection systems for their specific system and network environments, how to manage the output of intrusion detection systems, and how to integrate intrusion detection functions with the rest of the organizational security infrastructure. References to other information sources are also provided for the reader who requires specific or more detailed advice on particular intrusion detection issues.
In recent years there has been increasing interest in the security of process control and SCADA systems. Moreover, recent computer attacks, such as the Stuxnet worm, have shown that there are groups with the motivation and resources to effectively attack control systems.
While previous work has proposed new security mechanisms for control systems, few have explored new and fundamentally different research problems in securing control systems as compared to securing traditional information technology (IT) systems. In particular, the sophistication of new malware attacking control systems - malware including zero-day attacks, rootkits created for control systems, and software signed by trusted certificate authorities - has shown that it is very difficult to prevent and detect these attacks based on IT system information alone.
In this paper we show how, by incorporating knowledge of the physical system under control, we can detect computer attacks that change the behavior of the targeted control system. By using knowledge of the physical system we can focus on the final objective of the attack, and not on the particular mechanisms by which vulnerabilities are exploited, and how ...
Optimizing cybersecurity incident response decisions using deep reinforcemen... - IJECEIAES
The main purpose of this paper is to explore and investigate the role of deep reinforcement learning (DRL) in optimizing the post-alert incident response process in security information and event management (SIEM) systems. Although machine learning is used at multiple levels of SIEM systems, the last mile decision process is often ignored. Few papers reported efforts regarding the use of DRL to improve the post-alert decision and incident response processes. All the reported efforts applied only shallow (traditional) machine learning approaches to solve the problem. This paper explores the possibility of solving the problem using DRL approaches. The main attraction of DRL models is their ability to make accurate decisions based on live streams of data without the need for prior training, and they proved to be very successful in other fields of application. Using standard datasets, a number of experiments have been conducted using different DRL configurations. The results showed that DRL models can provide highly accurate decisions without the need for prior training.
Exploring network security threats through text mining techniques: a comprehe... - CSITiaesprime
In response to the escalating cybersecurity threats, this research focuses on leveraging text mining techniques to analyze network security data effectively. The study utilizes user-generated reports detailing attacks on server networks. Employing clustering algorithms, these reports are grouped based on threat levels. Additionally, a classification algorithm discerns whether network activities pose security risks. The research achieves a noteworthy 93% accuracy in text classification, showcasing the efficacy of these techniques. The novelty lies in classifying security threat report logs according to their threat levels. Prioritizing high-risk threats, this approach aids network management in strategic focus. By enabling swift identification and categorization of network security threats, this research equips organizations to take prompt, targeted actions, enhancing overall network security.
A Lightweight Method for Detecting Cyber Attacks in High-traffic Large Networ... - IJCNCJournal
Protecting information systems is a difficult and long-term task. The size and traffic intensity of computer networks are diverse and no one protection solution is universal for all cases. A certain solution protects well in the campus network, but it is unlikely to protect well in the service provider's network. A key component of a cyber defence system is a network attack detector. This component needs to be designed to have a good way to scale detection capabilities with network size and traffic intensity beyond the size and intensity of a campus network. From this point of view, this paper aims to build a network attack detection method suitable for the scale of large and high-traffic networks based on machine learning models using clustering techniques and our proposed detection technique. The detection technique is different from outlier detection commonly used in clustering-based anomaly detection applications. The method was evaluated in cases using different feature extraction methods and different clustering algorithms. Experimental results on the NSL-KDD data set are positive with a detection accuracy of over 97%.
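For orientation, the clustering-based baseline that the paper's detection technique builds on (and distinguishes itself from plain outlier detection) can be sketched as a nearest-centroid distance test: cluster benign traffic features, then flag samples that are far from every centroid. The centroids, feature values, and threshold below are invented for illustration.

```python
# Generic sketch of clustering-based attack detection: a sample is
# flagged when its distance to the nearest benign-traffic centroid
# exceeds a threshold. All numbers here are illustrative; the paper's
# own technique differs from plain outlier detection.
import math

def dist(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_attack(sample, centroids, threshold):
    """Flag a sample as anomalous if it is far from every centroid."""
    return min(dist(sample, c) for c in centroids) > threshold

# Hypothetical centroids learned from benign traffic clusters.
centroids = [(0.0, 0.0), (5.0, 5.0)]
alert = is_attack((9.0, 9.0), centroids, threshold=2.0)  # -> True
```

Scaling this to large, high-traffic networks is exactly where the choice of feature extraction and clustering algorithm, which the paper evaluates, matters.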
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets and failure to update routing information in time, so a WS-OLSR routing protocol based on an improved MPR selection algorithm was proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Secondly, a new MPR selection algorithm is proposed that takes link stability and node properties into account. Finally, we evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MPR selection algorithm when compared to traditional approaches.
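The classic OLSR MPR selection that WS-OLSR refines can be sketched as a greedy set cover: choose the smallest set of one-hop neighbours that together reach every strict two-hop neighbour. The stability and mobility weighting the paper adds is omitted here, and the example topology is invented.

```python
# Greedy sketch of baseline OLSR MPR selection (set cover over strict
# two-hop neighbours). WS-OLSR's improved selector additionally weighs
# link stability and node mobility, which this sketch leaves out.
def select_mprs(two_hop_via):
    """two_hop_via maps each 1-hop neighbour to the set of strict
    2-hop neighbours reachable through it; returns the chosen MPRs."""
    uncovered = set().union(*two_hop_via.values())
    mprs = set()
    while uncovered:
        # Pick the neighbour covering the most still-uncovered nodes.
        best = max(two_hop_via, key=lambda n: len(two_hop_via[n] & uncovered))
        if not two_hop_via[best] & uncovered:
            break  # remaining 2-hop nodes are unreachable
        mprs.add(best)
        uncovered -= two_hop_via[best]
    return mprs

# Hypothetical topology: neighbours A, B, C and 2-hop nodes X, Y, Z.
topology = {"A": {"X", "Y"}, "B": {"Y"}, "C": {"Z"}}
mprs = select_mprs(topology)  # {"A", "C"} covers X, Y, Z
```

Only MPRs retransmit broadcast control traffic, so a smaller (and, in WS-OLSR, more stable) MPR set directly reduces the HELLO/TC overhead the paper targets.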
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ... - IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve a single application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic type and the fulfillment of application requirements, such as reliability and timeliness. Nevertheless, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning under such networks a non-trivial task. In this context, the current paper introduces a novel Medium Access Control (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May 2024 - Top 10 Read Articles in Computer Networks & Communications - IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of computer networks and data communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm... - IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that have a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method is to formulate the topology control algorithm as a nonlinear programming (NP) problem with the objective of optimizing two metrics, maximum communication range and desired degree. This NP problem is solved at the SDWSN controller by employing the genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C... - IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy-preserving services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Advanced Privacy Scheme to Improve Road Safety in Smart Transportation Systems - IJCNCJournal
In Vehicular Ad-Hoc Networks (VANETs), vehicles continuously transmit and receive spatiotemporal data with neighboring vehicles, thereby establishing a comprehensive 360-degree traffic awareness system. Vehicular network safety applications facilitate the transmission of messages between vehicles that are near each other, at regular intervals, enhancing drivers' contextual understanding of the driving environment and significantly improving traffic safety. Privacy schemes in VANETs are vital to safeguard vehicles' identities and their associated owners or drivers. Privacy schemes prevent unauthorized parties from linking a vehicle's communications to a specific real-world identity by employing techniques such as pseudonyms, randomization, or cryptographic protocols. Nevertheless, these communications frequently contain important vehicle information that malevolent groups could use to monitor the vehicle over a long period. The acquisition of this shared data has the potential to facilitate the reconstruction of vehicle trajectories, thereby posing a potential risk to the privacy of the driver. Addressing the critical challenge of developing effective and scalable privacy-preserving protocols for communication in vehicle networks is of the highest priority. These protocols aim to reduce the transmission of confidential data while ensuring the required level of communication. This paper proposes an Advanced Privacy Vehicle Scheme (APV) that periodically changes pseudonyms to protect vehicle identities and improve privacy. The APV scheme utilizes a concept called the silent period, which involves changing the pseudonym of a vehicle periodically based on the tracking of neighboring vehicles. The pseudonym is a temporary identifier that vehicles use to communicate with each other in a VANET. By changing the pseudonym regularly, the APV scheme makes it difficult for unauthorized entities to link a vehicle's communications to its real-world identity.
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicate that APV achieves a clear improvement in privacy metrics. It is evident that APV offers enhanced safety for vehicles during transportation in the smart city.
April 2024 - Top 10 Read Articles in Computer Networks & Communications - IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of computer networks and data communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
DEF: Deep Ensemble Neural Network Classifier for Android Malware Detection - IJCNCJournal
Malware is one of the threats to the security of computer networks and information systems. Since malware instances are sufficiently available, there is increased interest among researchers in the usage of Artificial Intelligence (AI). Of late, AI-enabled methods such as machine learning (ML) and deep learning have paved the way for solving many real-world problems. As it is a learning-based approach, accumulated training samples help in improving the quality of training and thus leveraging malware detection accuracy. Existing deep learning methods are focusing on learning-based malware detection systems. However, there is a need for improving the state of the art through an ensemble approach. Towards this end, in this paper we proposed a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples. From a given malware instance a grayscale image is generated. There is another process to extract the opcode sequences. Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) techniques are used to process the grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected from Internet sources and Microsoft are used for the empirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework. Another algorithm named Pre-Process is proposed to assist the EL-AML algorithm in obtaining intermediate features required by CNN and LSTM. The empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF Based Intrusion Detection System for Big Data IOT Traffic - IJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic, Big Data, are challenging to store, deal with and analyse using a single computer. In this paper we developed a parallel implementation on a High Performance Computer (HPC) of the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets, on the order of millions of samples, are distributed evenly over all the computing cores for both storage and speedup purposes. The distribution of the computing tasks involved in the matrix factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-NMF-IDS give better results than the traditional ML-based intrusion detection systems. We could train the HPC model with datasets of one million samples in only 31 seconds instead of 40 minutes using one processor, a speed-up of 87 times. Moreover, we obtained an excellent detection accuracy rate of 98% for the KDD dataset.
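The NMF engine at the core of such a system can be sketched at toy scale with the standard Lee-Seung multiplicative updates: factor a nonnegative traffic-feature matrix V into W*H. The distribution across computing cores, the real traffic features, and the dimensions below are not from the paper; this is a single-core illustration of the factorization step only.

```python
# Tiny pure-Python sketch of NMF via multiplicative updates
# (Lee-Seung). The HPC system parallelizes these same updates;
# matrix sizes and iteration count here are toy assumptions.
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nmf(V, k, iters=200, seed=0):
    """Factor nonnegative V (n x m) as W (n x k) times H (k x m)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        Wt = list(map(list, zip(*W)))
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        Ht = list(map(list, zip(*H)))
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

V = [[1.0, 2.0], [2.0, 4.0]]  # rank-1 nonnegative matrix
W, H = nmf(V, k=1)
approx = matmul(W, H)         # close to V after convergence
```

In an IDS setting the rows of V are traffic samples, the columns are features, and the low-rank factors expose the dominant traffic patterns; the HPC contribution is distributing rows of V (and the corresponding update work) across cores.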
The proposed APV is compared to the SLOW, RSP, CAPS, and CPN techniques. The data indicates that the efficiency of APV is a better improvement in privacy metrics. It is evident that the AVP offers enhanced safety for vehicles during transportation in the smart city.
DEF: Deep Ensemble Neural Network Classifier for Android Malware DetectionIJCNCJournal
Malware is one of the threats to security of computer networks and information systems. Since malware instances are available sufficiently, there is increased interest among researchers on usage of Artificial Intelligence (AI). Of late AI-enabled methods such as machine learning (ML) and deep learning paved way for solving many real-world problems. As it is a learning-based approach, accumulated training samples help in improving thequality of training and thus leveraging malware detection accuracy. Existing deep learning methods are focusing on learning-based malware detection systems. However, there is need for improving the state of the art through ensemble approach. Towards this end, in this paper we proposed a framework known as Deep Ensemble Framework (DEF) for automatic malware detection. The framework obtains features from training samples. From given malware instance a grayscale image is generated. There is another process to extract the opcode sequences. Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM) techniques are used to obtain grayscale image and opcode sequence respectively. Afterwards, a stacking ensemble is employed in order to achieve efficient malware detection and classification. Malware samples collected fromthe Internet sources and Microsoft are used for theempirical study. An algorithm known as Ensemble Learning for Automatic Malware Detection (EL-AML) is proposed to realize our framework. Another algorithm named Pre-Process is proposed to assist the EL-AML algorithm for obtaining intermediate features required by CNN and LSTM.Empirical study reveals that our framework outperforms many existing methods in terms of speed-up and accuracy.
High Performance NMF based Intrusion Detection System for Big Data IoT TrafficIJCNCJournal
With the emergence of smart devices and the Internet of Things (IoT), millions of users connected to the network produce massive network traffic datasets. These vast datasets of network traffic, Big Data are challenging to store, deal with and analyse using a single computer. In this paper we developed parallel implementation using a High Performance Computer (HPC) for the Non-Negative Matrix Factorization technique as an engine for an Intrusion Detection System (HPC-NMF-IDS). The large IoT traffic datasets of order of millions samples are distributed evenly on all the computing cores for both storage and speedup purpose. The distribution of computing tasks involved in the Matrix Factorization takes into account the reduction of the communication cost between the computing cores. The experiments we conducted on the proposed HPC-IDS-NMF give better results than the traditional ML-based intrusion detection systems. We could train the HPC model with datasets of one million samples in only 31 seconds instead of the 40 minutes using one processor), that is a speed up of 87 times. Moreover, we have got an excellent detection accuracy rate of 98% for KDD dataset.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
IoT Guardian: A Novel Feature Discovery and Cooperative Game Theory Empowered...IJCNCJournal
Cyber intrusion attacks increasingly target the Internet of Things (IoT) ecosystem, exploiting vulnerable devices and networks. Malicious activities must be identified early to minimize damage and mitigate threats. Using actual benign and attack traffic from the CICIoT2023 dataset, this WORK aims to evaluate and benchmark machine-learning techniques for IoT intrusion detection. There are four main phases to the system. First, the CICIoT2023 dataset is refined to remove irrelevant features and clean up missing and duplicate data. The second phase employs statistical models and artificial intelligence to discover novel features. The most significant features are then selected in the third phase based on cooperative game theory. Using the original CICIoT2023 dataset and a dataset containing only novel features, we train and evaluate a variety of machine learning classifiers. On the original dataset, Random Forest achieved the highest accuracy of 99%. Still, with novel features, Random Forest's performance dropped only slightly (96%) while other models achieved significantly lower accuracy. As a whole, the work contributes substantial contributions to tailored feature engineering, feature selection, and rigorous benchmarking of IoT intrusion detection techniques. IoT networks and devices face continuously evolving threats, making it necessary to develop robust intrusion detection systems.
** Connect, Collaborate, And Innovate: IJCNC - Where Networking Futures Take ...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Enhancing Traffic Routing Inside a Network through IoT Technology & Network C...IJCNCJournal
IoT networking uses real items as stationary or mobile nodes. Mobile nodes complicate networking. Internet of Things (IoT) networks have a lot of control overhead messages because devices are mobile. These signals are generated by the constant flow of control data as such device identity, geographical positioning, node mobility, device configuration, and others. Network clustering is a popular overhead communication management method. Many cluster-based routing methods have been developed to address system restrictions. Node clustering based on the Internet of Things (IoT) protocol, may be used to cluster all network nodes according to predefined criteria. Each cluster will have a Smart Designated Node. SDN cluster management is efficient. Many intelligent nodes remain in the network. The network design spreads these signals. This paper presents an intelligent and responsive routing approach for clustered nodes in IoT networks. An existing method builds a new sub-area clustered topology. The Nodes Clustering Based on the Internet of Things (NCIoT) method improves message transmission between any two nodes. This will facilitate the secure and reliable interchange of healthcare data between professionals and patients. NCIoT is a system that organizes nodes in the Internet of Things (IoT) by grouping them together based on their proximity. It also picks SDN routes for these nodes. This approach involves selecting one option from a range of choices and preparing for likely outcomes problem addressing limitations on activities is a primary focus during the review process. Predictive inquiry employs the process of analyzing data to forecast and anticipate future events. This document provides an explanation of compact units. The Predictive Inquiry Small Packets (PISP) improved its backup system and partnered with SDN to establish a routing information table for each intelligent node, resulting in higher routing performance. 
Both principal and secondary roads are available for use. The simulation findings indicate that NCIoT algorithms outperform CBR protocols. Enhancements lead to a substantial 78% boost in network performance. In addition, the end-to-end latency dropped by 12.5%. The PISP methodology produces 5.9% more inquiry packets compared to alternative approaches. The algorithms are constructed and evaluated against academic ones.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
A predictive framework for cyber security analytics using attack graphs
International Journal of Computer Networks & Communications (IJCNC) Vol.7, No.1, January 2015
DOI : 10.5121/ijcnc.2015.7101
A PREDICTIVE FRAMEWORK FOR CYBER SECURITY ANALYTICS USING ATTACK GRAPHS

Subil Abraham1 and Suku Nair2

1IBM Global Solution Center, Coppell, Texas, USA
2Southern Methodist University, Dallas, Texas, USA
ABSTRACT
Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations to make informed risk management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect the overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to have a more realistic representation of how the security state of the network varies over time, a non-homogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over time for a given network.
KEYWORDS
Attack Graph, Non-homogeneous Markov Model, Markov Reward Models, CVSS, Security Evaluation,
Cyber Situational Awareness
1. INTRODUCTION
With the large-scale expansion of network connectivity, there has been a rapid increase in the number of cyber-attacks on corporations and government offices, resulting in disruptions to business operations and damage to the reputation as well as the financial stability of these corporations. The recent cyber-attack incident at Target Corp illustrates how these security breaches can seriously affect profits and shareholder value. According to a report by Secunia [1], the number of reported security vulnerabilities in 2013 increased by 32% compared to 2012. However, in spite of this increasing rate of attacks on corporate and government systems, corporations have fallen behind on ramping up their defenses due to limited budgets as well as weak security practices.
One of the main challenges currently faced in the field of security measurement is to develop a mechanism to aggregate the security of all the systems in a network in order to assess the overall security of the network. For example, INFOSEC [2] has identified security metrics as one of the top 8 security research priorities. Similarly, the Cyber Security IT Advisory Committee has also identified this area to be among the top 10 security research priorities.
This proposed research falls in the realm of "Cyber Situational Awareness", which provides a holistic approach to understanding threats and vulnerabilities. There are multiple levels to situational awareness, as depicted in Figure 1. Situational awareness is a universal concept: the ability to identify, comprehend and forecast the integral features of a system. Situational awareness was defined by [3] as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future".
Figure 1: Cyber Situational Awareness Model
In addition, traditional security efforts in corporations have focused on protecting key assets against known threats which have been disclosed publicly. Today, however, advanced attackers are developing exploits for vulnerabilities that have not yet been disclosed, called "zero-day" exploits. It is therefore necessary for security teams to focus on activities that are beyond the expected or pre-defined. By building appropriate stochastic models and understanding the relationship between vulnerabilities and their lifecycle events, it is possible to anticipate future cybercrime activity: identifying vulnerability trends, anticipating security gaps in the network, optimizing resource allocation decisions and ensuring the protection of key corporate assets in the most efficient manner.
In this paper, we propose a stochastic model for security evaluation based on attack graphs, taking into account the temporal factors associated with vulnerabilities that can change over time. By providing a single platform and using a trusted open vulnerability scoring framework such as CVSS [4-6], it is possible to visualize the current as well as the future security state of the network, leading to actionable knowledge. Several well-established approaches for attack graph analysis [7-15] have been proposed using probabilistic analysis as well as graph theory to measure the security of a network. Our model is novel in that existing research in attack graph analysis does not consider the temporal factors associated with the vulnerabilities, such as the availability of exploits and patches. In this paper, a non-homogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The proposed model can help identify critical systems that need to be hardened based on the likelihood of being intruded upon by an attacker as well as the risk to the corporation of being compromised.
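To make the non-homogeneous formulation concrete, the following Python sketch propagates a state distribution through age-dependent transition matrices. All numbers, the two-state chain, and the `exploit_prob` function are invented for illustration only; the paper itself estimates daily transition probabilities from Frei's Vulnerability Lifecycle model, not from this placeholder.

```python
# Sketch of a non-homogeneous Markov chain in which the probability of
# exploitation grows with the vulnerability age (all numbers invented).

def exploit_prob(age_days):
    """Hypothetical exploitation probability at a given vulnerability age;
    a stand-in for an estimate from Frei's lifecycle model."""
    return min(0.9, 0.05 + 0.01 * age_days)

def transition_matrix(age_days):
    """2-state chain: state 0 = secure, state 1 = compromised."""
    p = exploit_prob(age_days)
    return [[1 - p, p],
            [0.0, 1.0]]          # compromised is absorbing

def state_distribution(days, disclosure_age=0):
    """Propagate pi(t+1) = pi(t) * P(t) over a sequence of daily matrices."""
    pi = [1.0, 0.0]              # network starts fully secure
    for t in range(days):
        P = transition_matrix(disclosure_age + t)
        pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    return pi

secure, compromised = state_distribution(days=30)
print(f"P(compromised after 30 days) = {compromised:.3f}")
```

Because the matrix changes each day, the compromise probability grows faster than it would under a homogeneous chain with the day-zero matrix, which is exactly the effect the time-dependent covariate is meant to capture.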
The remainder of the paper is organized as follows. In Section 2, we discuss previous research proposed for security metrics and quantification. In Section 3, we explore the Cyber-Security Analytics Framework and then realize a non-homogeneous Markov model for security evaluation that is capable of analyzing the evolving exploitability and impact measures of a given network. In Section 4, we present the results of our analysis with an example. Finally, we conclude the paper in Section 5.
2. BACKGROUND AND RELATED WORK
There are several topics which span the discussion of previous work. Here we briefly discuss attack graphs and then provide an overview of some of the most prominent works that have been proposed for quantifying security in a network.
Figure 2. Security Metric Classification
2.1. Attack Graph
An attack graph is an abstract representation of the different scenarios and paths an attacker may use to compromise the security of a system by exploiting multiple vulnerabilities. It is a very effective tool for security and risk analysis as it takes into account the relationships between multiple vulnerabilities. Computer attacks have been graphically modeled since the late 1980s by the US DoD, as discussed in their paper [16]. Most of the attack modeling performed by analysts was constructed by hand and hence was a tedious and error-prone process, especially if the number of nodes was very large. In 1994, Dacier et al. [17] published one of the earliest mathematical models for a security system based on privilege graphs. By the late 1990s a couple of other papers [18, 19] came out which enabled automatic generation of attack graphs using computer-aided tools. In [18] the authors describe a method of modeling network risks based on an attack graph where each node in the graph represents an attack state and the edges represent a transition or a change of state caused by an action of the attacker. Since then, researchers have proposed a variety of graph-based algorithms to generate attack graphs for security evaluation.
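The state-transition structure described above can be sketched with a small adjacency-list representation. The hosts and edges below are invented for illustration; real attack graphs are generated by tools from scan data, not written by hand like this.

```python
# Toy attack graph: nodes are attacker states, directed edges are exploits
# that move the attacker from one state to another (all names invented).
attack_graph = {
    "external": ["webserver"],
    "webserver": ["fileserver", "workstation"],
    "fileserver": ["database"],
    "workstation": ["database"],
    "database": [],               # the attacker's goal state
}

def attack_paths(graph, start, goal, path=None):
    """Enumerate all simple attack paths from start to goal (depth-first)."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph[start]:
        if nxt not in path:       # no revisits: privileges only accumulate
            paths.extend(attack_paths(graph, nxt, goal, path))
    return paths

for p in attack_paths(attack_graph, "external", "database"):
    print(" -> ".join(p))
```

This toy graph yields two attack paths to the database, one through the fileserver and one through the workstation; enumerations like this are the raw input for the structural metrics discussed next.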
2.2. Classes of Security Metrics
There are different classes into which network security metrics fall. These classes are depicted in Figure 2. Here are some examples of metrics that fall under each category.
2.2.1. Core Metrics
These are aggregation metrics that typically don’t use any structure or dependency to quantify the
security of the network. A few examples that fall under this category are Total Vulnerability
Measure (TVM) [20] and Langweg Metric (LW) [21]. TVM is the aggregation of two other
metrics called the Existing Vulnerabilities Measure (EVM) and the Aggregated Historical
Vulnerability Measure (AHVM). The Langweg metric is a measure of how resistant an application is
to malware attacks; it considers both security standards and the capabilities of the
attacker. CVSS [4-5] is an open standard for scoring IT security vulnerabilities. It was developed
to provide organizations with a mechanism to measure vulnerabilities and prioritize their
mitigation. For example, the US Federal government uses the CVSS standard as the scoring
engine for its National Vulnerability Database (NVD) [6], which has a repository of over forty-five
thousand known vulnerabilities and is updated on an ongoing basis.
2.2.2. Structural Metrics
These metrics use the underlying structure of the Attack graph to aggregate the security properties
of individual systems in order to quantify network security. The Shortest Path (SP) [18], [7]
metric measures the shortest path for an attacker to reach an end goal. The Number of Paths (NP)
[7] metric measures the total number of paths for an attacker to reach the final goal. The Mean of
Path Lengths (MPL) metric [8] measures the arithmetic mean of the length of all paths to the final
goal in an attack graph. The above structural metrics have shortcomings, and in [9] Idika et al.
have proposed a suite of attack graph based security metrics to overcome some of these inherent
weaknesses. In [22], Ghosh et al. provide an analysis and comparison of all the existing structural
metrics.
2.2.3. Probability-Based Metrics
These metrics associate probabilities with individual entities to quantify the aggregated security
state of the network. A few examples that fall under this category are Attack Graph-based
Probabilistic (AGP) and Bayesian network (BN) based metrics [23-25]. In AGP each node/edge
in the graph represents a vulnerability being exploited and is assigned a probability score. The
score assigned represents the likelihood of an attacker exploiting the vulnerability given that all
prerequisite conditions are satisfied. In BN-based metrics the probabilities for the attack graph are
updated based on new evidence and the prior probabilities associated with the graph.
2.2.4. Time-Based Metrics
These metrics quantify how fast a network can be compromised or how quickly a network can
take preemptive measures to respond to attacks. Common metrics that fall in this category are
Mean Time to Breach (MTTB), Mean Time to Recovery (MTTR) [26] and Mean Time to First
Failure (MTFF) [27]. MTTB represents the total amount of time spent by a red team to break into
a system divided by the total number of breaches exploited during that time period. MTTR
measures the expected amount of time it takes to recover a system back to a safe state after being
compromised by an attacker. MTFF corresponds to the time it takes for a system to enter into a
compromised or failed state for the first time from the point the system was first initialized.
The drawback with all these classes of metrics is that they take a static approach to security
analysis and do not leverage the granularity provided by the CVSS metric framework to
assess the overall dynamic security situation and help locate critical nodes for optimization.
2.3. Vulnerability Lifecycle Models
Presently, there is research [28-31] analyzing the life-cycle evolution of different types of
vulnerabilities. Frei et al. [31] in particular developed a distribution model to calculate the
likelihood of an exploit or patch being available a certain number of days after its disclosure date.
To the best of our knowledge, no previous work has been done to analyze overall security of a
network, by considering the temporal relationships between all the vulnerabilities that are present
in the network, which can be exploited by an attacker.
2.4. Cyber Situation Awareness
Tim Bass [32] first introduced the concept of cyberspace situation awareness and built a
framework for it which laid the foundation for subsequent research in Network Security
Situational Awareness [33]. In [34-37] the framework was extended to model security at a large-
scale level where existing techniques have been integrated to gain richer insights. Researchers
have also proposed many evaluation models and algorithms for NSSA [38, 39] to reflect the
security environment and capture the trends of changes in network state. The drawback with most
of these NSSA models is that they don't adopt a consistent integrated framework for describing
the relationships between the vulnerabilities in the network nor do they use an open scoring
framework such as CVSS for analyzing the dynamic attributes of a vulnerability using stochastic
modeling techniques.
3. CYBER-SECURITY ANALYTICS FRAMEWORK
Figure 3. Cyber Security Analytics Framework
In this section, we explore the concept of modeling the Attack graph as a stochastic process. In
[40, 41], we established the cyber-security analytics framework (Figure 3) where we have
captured all the processes involved in building our security metric framework. By providing a
single platform and using an open vulnerability scoring framework such as CVSS, it is possible to
visualize the current as well as future security state of the network and optimize the necessary
steps to harden the enterprise network from external threats. In this paper we will extend the
model by taking into account the temporal aspects associated with the individual vulnerabilities.
By capturing their interrelationship using Attack Graphs, we can predict how the total security of
the network changes over time. The fundamental assumption we make in our model is that the
time parameter plays an important role in capturing the progression of the attack process. The Markov
model is one such modeling technique that has been widely used in a variety of areas such as
system performance analysis and dependability analysis [11, 27, 42-43]. While formulating the
stochastic model, we need to take into account the behavior of the attacker. In this paper, we
assume that the attacker will choose the vulnerability that maximizes his or her probability of
succeeding in compromising the security goal.
3.1. State Based Stochastic Modelling
State-space stochastic methods enable the probabilistic modeling of complex relationships and
dependencies between known and latent variables. Several research fields have long used
modeling and simulation to study the behavior of a system under varying conditions. Applying
the same methodology to security enables an IT security administrator to
assess the current security state of the entire network as well as predict how this state will change
based on new threat levels, new vulnerabilities, etc. Another capability is analyzing what-if
scenarios to calculate metrics based on making certain changes to the system. Most IT departments
are faced with limited budgets, and hence applying patches to all systems in a timely manner may
not be feasible. Hence it is necessary to optimize the application of such security controls without
compromising the network or disrupting business operations.
3.2. Architecture
Figure 4 shows a high-level view of our proposed cyber security analytics architecture, which
comprises four layers where each layer builds upon the one below.
Figure 4. Cyber Security Analytics Architecture
Layer 1 (Attack Graph Model): The core component of our architecture is the Attack Graph
Model which is generated using a network model builder by taking as input network topology,
services running on each host and a set of attack rules based on the vulnerabilities associated with
the different services.
Layer 2 (CVSS Framework): The underlying metric domain is provided by the trusted CVSS
framework which quantifies the security attributes of individual vulnerabilities associated with
the attack graph. We divide our security analysis by leveraging two CVSS metric domains: one
captures the exploitability characteristics of the network and the other analyzes the impact a
successful attack can have on a corporation's key assets. We believe that both types of
analysis are necessary for a security practitioner to gain a better understanding of the overall
security of the network.
Layer 3 (Stochastic Model): In this layer relevant stochastic processes are applied over the
Attack Graph to describe the attacks by taking into account the relationships between the different
vulnerabilities in a system. For example, in our approach, we utilize an Absorbing Markov chain
for performing exploitability analysis and a Markov Reward Model for Impact analysis. In [40,
41] we discussed how we can model an attack-graph as a discrete time absorbing Markov chain
where the absorbing state represents the security goal being compromised.
So far we have been focusing on the security properties that are intrinsic to a particular
vulnerability and that do not change with time. These measures are calculated from the CVSS
Base metric group which aggregates several security properties to formulate the base score for a
particular vulnerability.
Layer 4 (Vulnerability Lifecycle Model): In order to account for the dynamic/temporal security
properties of the vulnerability, we apply a Vulnerability Lifecycle model on the stochastic process
to identify trends and understand how the security state of the network will evolve with time.
Security teams can thus analyze how the availability of exploits and patches can affect the overall
network security based on how the vulnerabilities are interconnected and leveraged to
compromise the system.
We believe that such a framework also facilitates communication between security engineers and
business stakeholders and aids in building an effective cyber-security analytics strategy.
3.3. Model Representation
In [40, 41] we have discussed how we can model an attack graph as a discrete-time absorbing
Markov chain due to the following two properties:
1. An attack graph has at least one absorbing state or goal state.
2. In an attack graph it is possible to go from every state to an absorbing state.
A state si of a Markov chain is called absorbing if it is impossible to leave it (i.e., pii = 1). A state
which is not absorbing is called a transient state. In the example below (Figure 5), state 4 is
an absorbing state with a probability of 1, and states 1, 2 and 3 are transient.
Figure 5. Absorbing Markov Chain
We also presented a formula for calculating the transition probabilities of the Markov chain by
normalizing the CVSS exploitability scores over all the transitions starting from the attacker’s
source state. In this paper, we will extend the model to analyze and measure two key aspects.
First, we take into account both the exploitability and the impact properties associated with a
security goal being compromised. By considering both these measures separately, we can derive a
complementary suite of metrics to aid the security engineer in optimizing their decisions. Second,
we combine the temporal trends associated with the individual vulnerabilities into the model to
reason about the future security state of the network. In our model we will calculate the daily
transition-probability matrices using the well-established Frei's Vulnerability lifecycle model
[31].
The security of the network is dependent on the exploitability level of the different vulnerabilities
associated with the services running on the machines in the enterprise. In addition the security
metrics will dynamically vary based on the temporal aspects of the vulnerabilities taken into
consideration. As an example, consider CVE-2014-0416 which is an unspecified vulnerability in
Oracle Java SE related to the Java Authentication and Authorization Service (JAAS) component.
We define the base exploitability score as a measure of the complexity of exploiting the
vulnerability. The CVSS standard provides a framework for computing these scores using the
access vector (AV), access complexity (AC) and authentication (AU) metrics as follows:

Exploitability = 20 × AV × AC × AU
The constant 20 represents the severity factor of the vulnerability. The access vector,
authentication and access complexity values for CVE-2014-0416 are 1.0, 0.704 and 0.71
respectively. Therefore the base exploitability score of CVE-2014-0416 is 10.0, which indicates
that it is highly exploitable. As of this writing, the state of exploitability for this
vulnerability is documented as "Unproven", which indicates that no exploit code is available.
Hence it has a temporal weight score of 0.85. Given the base exploitability score and the
temporal weight, the effective temporal exploitability score is computed as follows:

Temporal Exploitability = Base Exploitability × Temporal Weight

The temporal exploitability score for CVE-2014-0416 is therefore 10.0 × 0.85 = 8.5. As the
vulnerability ages and exploit code becomes readily available, the value of the exploitability score
will move closer towards its base value. As a comparison, consider CVE-2012-0551, which is
another unspecified vulnerability in Oracle Java SE that has a base exploitability score of 8.6
which is lower than that of CVE-2014-0416. However, the state of exploitability for this
vulnerability is documented as "High", which indicates that exploit code is widely available.
Hence it has a temporal weight score of 1. Therefore CVE-2012-0551 is considered more
exploitable than CVE-2014-0416, even though it has a lower base metric score, because we
have factored in the lifecycle
of the vulnerability.
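As a sketch of the scoring walk-through above, the snippet below recomputes the base and temporal exploitability scores in Python; the metric values are those quoted for CVE-2014-0416, while the function names are our own illustrative choices, not part of the CVSS standard.

```python
# Sketch of the CVSS v2 exploitability calculation described in the text.
# Metric values for CVE-2014-0416: AV = 1.0, AC = 0.71, AU = 0.704; the
# temporal weight 0.85 corresponds to the "Unproven" exploitability state.

def base_exploitability(av, ac, au):
    """CVSS v2 exploitability subscore: 20 * AV * AC * AU."""
    return 20 * av * ac * au

def temporal_exploitability(base, weight):
    """Scale the base subscore by the lifecycle (temporal) weight."""
    return base * weight

base = base_exploitability(av=1.0, ac=0.71, au=0.704)
print(round(base, 1))                                 # 10.0
print(round(temporal_exploitability(base, 0.85), 1))  # 8.5
```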
An Attack Graph (AG) can be modeled as an absorbing Markov chain since it satisfies the
property of having at least one absorbing state and it is also possible to reach that state from every
other transient state. The absorbing state is the node where the security goal is violated. So this is
the final state and the attacker will continue to remain in this state until preventive measures have
been taken by the security team to remove the attacker’s presence from the system. The transition
matrix for an absorbing Markov chain has the following canonical form:

P = | Q  R |
    | 0  I |

Here P is the transition matrix, Q is the sub-matrix of transition probabilities between transient
states, R is the sub-matrix of transition probabilities from transient states to absorbing states,
0 is a zero matrix, and I is an identity matrix. The set of states S in the model represents the
different vulnerabilities associated with
services running on the nodes that are part of the network. The transition probability matrix P of
the Markov chain was estimated using the formula

pij = e(vj) / Σk e(vk)
where pij is the probability that an attacker currently in state i exploits a vulnerability e(vj) in state
j and e(vj) is the exploitability score for vulnerability vj obtained from the CVSS Base Metric
group. Further, each row of P is a probability vector, which requires that

Σj pij = 1.
In an absorbing Markov chain the probability that the chain will be absorbed is always 1. Hence

Q^n → 0 as n → ∞

where Q is the matrix of transient states. Therefore, for an absorbing Markov chain, we can
derive the matrix

N = (I − Q)^(-1) = I + Q + Q^2 + …

which is called the fundamental matrix. This matrix N provides considerable insight into the
behavior of an attacker who is
trying to penetrate the network. The elements nij of the fundamental matrix describe the expected
number of times the chain is in state j, given that the chain started in state i. This matrix is
critical for a security engineer analyzing the security situation of the network: by studying it,
we can identify points in the network where vulnerabilities need to be patched, particularly if
the attacker is more likely to target or visit a state which is critical to the function of the
business.
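As an illustration of the fundamental matrix computation, the following sketch derives N = (I − Q)^(-1) for a small three-state transient sub-matrix; the probabilities are made up for illustration and are not taken from the paper's example network.

```python
import numpy as np

# Illustrative transient-to-transient sub-matrix Q (rows sum to < 1; the
# remaining probability mass leads to the absorbing goal state).
Q = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.7],
              [0.3, 0.0, 0.0]])

# Fundamental matrix: n_ij = expected number of visits to state j from i.
N = np.linalg.inv(np.eye(3) - Q)
print(np.round(N, 2))
```

Each diagonal entry of N is at least 1, because the starting visit to a state is itself counted.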
3.4. Non-homogeneous Model
The temporal weight score that we considered in the example above was a function of the time
since the vulnerability was disclosed by a publicly trusted SIP (Security Information Provider).
The temporal values recorded by CVSS are discrete and are not suitable for inclusion in a non-
homogeneous model. It would be more appropriate to use a distribution that takes as input
the age of the vulnerability. Therefore we will use the result of Frei's model [31] to calculate the
temporal weight score of the vulnerabilities that are part of the Attack graph model. The
probability that an exploit is available for a given vulnerability is obtained using a Pareto
distribution of the form:

F(t) = 1 − (k/t)^a,  a = 0.26, k = 0.00161
where t is the age of the vulnerability (in days) and a is the shape parameter of the distribution.
The age t is calculated by taking the difference between the date the CVSS scoring is conducted
and the date the vulnerability was first disclosed on an SIP. By incorporating this distribution
into our model, we
can get a more realistic estimate of the security of the network based on the age of the different
vulnerabilities which are still unpatched in the enterprise.
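The exploit-availability probability can be sketched as below, assuming the Pareto form F(t) = 1 − (k/t)^a for Frei's model with the parameter values quoted above; the function name is our own illustrative choice.

```python
def exploit_availability(t, a=0.26, k=0.00161):
    """Probability that an exploit exists t days after disclosure (Pareto CDF)."""
    if t <= k:
        return 0.0
    return 1.0 - (k / t) ** a

# The probability grows with the age of the vulnerability.
for age in (1, 30, 300):
    print(age, round(exploit_availability(age), 3))
```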
In the non-homogeneous Markov model presented here, we consider a time-dependent covariate:
the age of the vulnerability. In order to have a more realistic model, we update this covariate
every day for each of the vulnerabilities present in the network and recalculate the
discrete time transition probability matrix P. Given the exploitability scores for each of the
vulnerabilities in the Attack Graph, we can estimate the transition probabilities of the Absorbing
Markov chain by normalizing the exploitability scores over all the edges starting from the
attacker's source state. Let pij be the probability that an attacker currently in state i exploits a
vulnerability in state j. We can then formally define the transition probability as

pij = ej / (e1 + e2 + … + en)     (5)

where n is the number of outgoing edges from state i in the attack model and ej is the temporal
exploitability score for the vulnerability in state j.
The matrix P(m) represents the transition probability matrix of the Absorbing Markov chain
computed on day m, where p(i, j) ≥ 0 for all i, j ∈ S. In an absorbing Markov chain the
probability that the chain will be absorbed is always 1. Further, each row of P(m) is a probability
vector, which requires that

Σj∈S p(i, j) = 1 for all i ∈ S.
In equation (5), we use the result of Frei's Vulnerability Lifecycle model [31] to calculate the
temporal weight scores of the vulnerabilities that are part of the Attack graph model. Therefore,
by calculating the individual temporal scores of the vulnerabilities and by analyzing their causal
relationships using an Attack Graph, we can integrate the effect of the temporal scores to derive
the total dynamic security of the network.
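A single row of the day-m transition matrix can be sketched as follows: the temporal exploitability scores over the outgoing edges of a state are normalized per equation (5). The scores below are made-up values, not those of the example network.

```python
def transition_row(temporal_scores):
    """p_ij = e_j / (e_1 + ... + e_n) over the outgoing edges of state i."""
    total = sum(temporal_scores)
    return [e / total for e in temporal_scores]

# Three outgoing edges with temporal exploitability scores e_1..e_3:
row = transition_row([8.5, 3.4, 7.9])
print([round(p, 3) for p in row])   # the row sums to 1
```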
3.5. Exploitability Analysis
We now present a quantitative analysis using our cyber security analytics model. The focus of our
analysis will be on assessing the evolving security state of the network.
3.5.1 Node Rank Analysis
It is important to determine the critical nodes in the Attack graph that the attacker is most likely
to visit. Based on this insight, the necessary systems can be patched for improved security. The
unique property of the Fundamental matrix N is that each element nij gives the expected number
of times that the process is in the transient state sj given that it started in the transient state si.
By analyzing this matrix we can identify points in the network where vulnerabilities need to be
patched if the attacker is more likely to target or visit a state which is critical to the function
of the business.
3.5.2 Expected Path Length (EPL) Metric
This metric measures the expected number of steps the attacker will have to take, starting from
the initial state, to compromise the security goal. Using the Fundamental matrix N of the non-
homogeneous Markov model, we can compute the expected number of steps before the chain
reaches the absorbing state. Let ti be the expected number of steps before the chain is absorbed,
given that the chain starts in state si, and let t be the column vector whose ith entry is ti. Then

t = N c

where c is a column vector all of whose entries are 1.
This security metric thus captures the expected number of steps to compromise, i.e., the resistance of the network.
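The EPL computation t = N c can be sketched as follows, reusing an illustrative transient sub-matrix Q that is not taken from the paper's network data:

```python
import numpy as np

Q = np.array([[0.0, 0.6, 0.2],
              [0.0, 0.0, 0.7],
              [0.3, 0.0, 0.0]])
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # t_i = expected steps to absorption from s_i
print(np.round(t, 2))              # the EPL is the entry for the start state
```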
3.5.3 Probabilistic Path (PP) Metric
This metric measures the likelihood of an attacker reaching the absorbing states of the graph. For
this we calculate the matrix

B = N R

where N is the fundamental matrix of the Markov chain and R is obtained from the canonical
form. The element bij of the matrix measures the probability of reaching the security goal state j
given that the attacker started in state i. The PP metric also aids the security engineer in making
decisions on optimizing the network.
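The absorption-probability computation B = N R can be sketched with a toy chain; Q and R below are illustrative values with a single absorbing goal state:

```python
import numpy as np

Q = np.array([[0.0, 0.6, 0.2],   # transient-to-transient probabilities
              [0.0, 0.0, 0.7],
              [0.3, 0.0, 0.0]])
R = np.array([[0.2],             # transient-to-absorbing probabilities
              [0.3],
              [0.7]])

N = np.linalg.inv(np.eye(3) - Q)
B = N @ R                        # b_ij = P(reach goal j | start in state i)
print(np.round(B, 2))
```

With a single absorbing state every entry of B equals 1, reflecting that absorption is certain; with multiple goal states, B apportions that probability among them.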
3.6. Impact Analysis
The CVSS standard provides a framework for computing the impact associated with an individual
vulnerability using the confidentiality impact (C), integrity impact (I) and availability impact (A)
measures as follows:

Impact = 10.41 × (1 − (1 − C)(1 − I)(1 − A))
By associating the individual impact/reward scores with each state in our Markov chain, we can
extend the underlying stochastic process as a discrete-time Markov reward model (MRM). Since
the attack graph encodes the causal relationship between the individual vulnerabilities in the
network, we can determine from the MRM model the combined impact or risk associated with
compromising a security goal. Given our existing DTMC model, we can represent the Markov
Reward process as (S, r), where S is a DTMC and r is a reward function that assigns an impact
score to each state. Since we are considering only constant rewards or impact scores, the reward
function can be represented as a vector r = [r1, r2, …, rn]. Hence the expected impact at time t is
given as

E[I(t)] = Σi∈S πi(t) ri

where πi(t) is the probability of the attacker being in state i at time t.
This value will be termed the Expected Impact (EI) metric. A non-homogeneous MRM can
be built by incorporating the temporal trend of individual vulnerabilities given their age. Hence,
by formulating daily transition probability matrices using Frei's model, we can reason about how
the expected cost of breaching a security goal can vary over time.
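A sketch of the Expected Impact metric under these assumptions: the state-occupancy distribution after t steps, weighted by a per-state impact (reward) vector. The chain and reward values below are toy numbers, not the paper's network data.

```python
import numpy as np

P = np.array([[0.0, 0.6, 0.4],
              [0.0, 0.2, 0.8],
              [0.0, 0.0, 1.0]])   # state 3 (the goal) is absorbing
r = np.array([2.9, 6.4, 10.0])    # impact/reward score per state
pi0 = np.array([1.0, 0.0, 0.0])   # attacker starts in state 1

def expected_impact(t):
    """E[I(t)] = pi(0) P^t r."""
    return float(pi0 @ np.linalg.matrix_power(P, t) @ r)

print(round(expected_impact(1), 2))   # 7.84
print(round(expected_impact(10), 2))  # approaches the goal state's reward
```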
4. ILLUSTRATION
To illustrate the proposed approach in detail, a network similar to [10, 24-25, 44-45] has been
considered (refer Figure 6).
Figure 6. Network Topology
The network comprises four machines that are interconnected and operating internally
behind a firewall. The attacker or threat is outside the firewall and is connected to an external
network which has access to this firewall. The firewall has only one port open (port 80) to the
outside network for access to its web server. The machine hosting the web server, M1, is running
the Apache web server. The aim of the attacker is to infiltrate the network and gain root access on
M4. In order to accomplish this, the attacker needs to first target the Apache web service, since
port 80 is the only port accessible through the firewall.
4.1. Environment Information
Table 1 contains a list of all the vulnerabilities in the network that can be exploited by an attacker
if certain conditions are met.
Table 1. Vulnerability Data.
Service Name   CVE-ID          Exploitability Subscore   Host   Disclosure Date
apache         CVE-2014-0098   10.0                      M1     03/18/2014
postgresql     CVE-2014-0063    7.9                      M2     02/17/2014
linux          CVE-2014-0038    3.4                      M3     02/06/2014
ms-office      CVE-2013-1324    8.6                      M3     11/13/2013
bmc            CVE-2013-4782   10.0                      M4     07/08/2013
radius         CVE-2014-1878   10.0                      M4     02/28/2014
Each of the six vulnerabilities is unique and publicly known and is denoted by a CVE (Common
Vulnerabilities and Exposures) identifier. For example, the Apache web server was found to have
vulnerability CVE-2014-0098 on 03/18/2014, which allows remote attackers to cause
segmentation faults. Similarly, the postgresql service hosted by M2 had a vulnerability denoted by
CVE-2014-0063, which allowed remote attackers to execute arbitrary code. Several public sites
such as the NVD (National Vulnerability Database), MITRE and Nessus provide information
about well-known vulnerabilities as well as their severity, using score values adopted by the
CVSS (Common Vulnerability Scoring System) framework.
4.2. Attack Graph Generation
By combining the vulnerabilities present in the network configuration (Figure 6), we can build
several scenarios whereby an attacker can reach a goal state. In this particular case, the attacker's
goal state would be to obtain root access on machine M4. Figure 7 depicts the different paths an
attacker can take to reach the goal state. By combining these different paths we are able to obtain
an Attack Graph. A couple of practical approaches have been proposed [44-46] to automate
generation of attack graphs without the intervention of a red team. In [47] the authors have
compared and analyzed several open-source AG tools such as MulVAL, TVA, Attack Graph
Toolkit and NetSPA, as well as commercial tools, in terms of scalability and degree of attack
graph visualization. Table 2 provides an overview of the different available AG toolkits.
Figure 7. Attack Graph
In our analysis we have used the MulVAL tool discussed in [45] to generate a logical attack graph
in polynomial time.
Table 2. Attack Graph Toolkits
Toolkit Name   Complexity        Open Source   Developer
MulVAL         O(n^2) ~ O(n^3)   yes           Kansas State University
TVA            O(n^2)            no            George Mason University
Cauldron       O(n^2)            no            Commercial
NetSPA         O(n log n)        no            MIT
Firemon        O(n log n)        no            Commercial
4.3. Security Analysis
In the analysis, we first investigate how the distributions of our proposed attack graph metrics
vary over a given time period. In the attack graph model, each node corresponds to a software-related
vulnerability that exists on a particular machine in the network. The transition probability for a
particular edge in the attack graph is calculated by normalizing the CVSS Exploitability scores
over all the edges from the attacker’s source node. By formulating an Absorbing Markov chain
over the Attack graph and applying the Vulnerability lifecycle model to the exploitability scores,
we are able to project how these metrics will change in the immediate future.
Figure 8 (a, b, c) depicts the distribution of our proposed attack graph metrics (EPL, Probabilistic
Path & Expected Impact) over a period of 150 days. The general trend for the Expected Path
Length (EPL) metric (Fig 8a) is downward over the next 150 days, which signifies that it will take
fewer steps (less effort) for an attacker to compromise the security goal as the vulnerabilities in
the network age.
the network age. This visualization graph is very useful for security practitioners for optimizing
patch management processes in the organization. By establishing thresholds for these metrics, the
security teams can plan in advance as to when to patch a system versus working on an ad-hoc
basis. For example, the organization may have a threshold score of 4.86 for the EPL metric. From
the graph (Fig 8a), the team can reasonably conclude that the systems in their network are safe
from exploits breaching the security goal for the next 50 days as the EPL score is above the
threshold value. The thresholds values are typically set by the security team based on how fast
they can respond to a breach once it is detected.
The Probabilistic Path (PP) distribution (Fig 8b), which signifies the likelihood of an attacker
compromising the security goal, follows a different trend: it decreases during the first few days
and then picks up gradually with time. As vulnerabilities age, exploits become readily available
for them, which leads to an increase in their CVSS exploitability scores. As a result the transition
probabilities in the model will change, likely causing other attack paths in the tree to become
more favorable to the attacker. In the figure, we see that the probability of
reaching the security goal tapers off after 50 days. The Expected Impact metric (Fig 8c), or the
cost of an attack to the business, reduces with time; this indicates that the attacker will likely
choose a different path due to more exploits becoming available for other vulnerabilities that
have a
lower impact score. It is important to note that every organization has a network configuration
that is unique to its operations, and therefore the distributions of our proposed attack graph
metrics will be different for each configuration.
Figure 8. Distribution of the proposed attack graph metrics over time
4.4. Simulation
Based on the Attack Graph generated for the network, a simulation of the Absorbing Markov
chain is conducted. In our experiment we model an attacker and simulate over 2000 different
instances of attacks over the Attack Graph based on the probability distribution of the nodes. We
used the R statistical package [48] to generate the model and run the simulations.
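The simulation loop can be sketched as follows; the four-state chain below is a toy stand-in for the paper's ten-state attack graph, and the state numbering is illustrative.

```python
import random
from collections import Counter

# Outgoing edges with transition probabilities; state 4 is the absorbing goal.
P = {1: [(2, 0.6), (3, 0.4)],
     2: [(3, 0.5), (4, 0.5)],
     3: [(4, 1.0)],
     4: []}

def simulate(start=1):
    """Random walk from `start` until absorption; returns the visited path."""
    path, state = [start], start
    while P[state]:
        u, acc = random.random(), 0.0
        for nxt, prob in P[state]:
            acc += prob
            if u <= acc:
                state = nxt
                break
        else:
            state = P[state][-1][0]   # guard against float rounding
        path.append(state)
    return path

random.seed(1)
paths = [simulate() for _ in range(2000)]
lengths = Counter(len(p) - 1 for p in paths)   # attack path length frequencies
visits = Counter(s for p in paths for s in p)  # state visit counts
print(lengths.most_common())
print(visits)
```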
The transition probabilities are formulated from the CVSS scoring framework as described in
section 3. Each simulation run uses the transition probability row vector of the Absorbing Markov
Chain to move from one state to another until it reaches the final absorbing state. Figure 8d
depicts a multi-bar graph of the distribution of attack path lengths X1, X2, …, X2000 from 2000
simulated attack paths where the security goal was compromised. Each of the colors indicates the
trend forecast over different periods of time. For example, in this distribution model, the length of
the attack paths with the most frequency is 3, given the current age of all the vulnerabilities
(shown in green). However, as the vulnerabilities in the network age over a period of 300 days,
the most frequent path length shifts to 4 (shown in red). We also notice that the
frequency of paths of length 3 and 5 decreases gradually over 300 days.
Similarly, Figure 8e depicts a multi-bar graph of the distribution of the state visits for the Attack
Graph for all the 2000 instances of attack paths that were simulated using the non-homogeneous
Markov chain model. This graph represents the number of times the attacker is likely to visit a
particular state/node in the attack graph over those 2000 simulated runs. Based on the simulation
results depicted in Fig 8e, if we were to exclude the start state (1) and the absorbing state (10),
we can find that an attacker is most likely to visit state 2 and least likely to visit state 6. Hence
from Table 1, we can conclude that the attacker is most likely to exploit the vulnerability of the
bmc service running on M4 (State 2) and least likely to exploit the linux service on M3 (State 6).
This information is valuable for a security engineer to prioritize which exploit needs to be patched
and how it will affect the strength of the network against attacks. This insight is further enriched
when we also consider the trends over a period of 300 days. For example, in Fig 8e there is an
upward trend in the expected number of times the attacker visits state 4, while there is a
downward trend for state 5 during the same time. Hence if the security engineer had to decide
whether to patch node 4 or node 5 during this time period, it would make more sense to patch
node 4 since it is most susceptible to an outside attack in the future.
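The state-visit statistic can be computed from the same simulated paths. The sketch below assumes a small hypothetical chain (transient states 1–3, absorbing state 4); in the paper, the per-state visit counts are accumulated from the attack graph's daily transition matrices, and the most-visited transient state suggests the patching candidate.

```python
import random
from collections import Counter

# Hypothetical chain: each transient state maps to its outgoing
# (next_state, probability) pairs; state 4 is the absorbing goal.
P = {
    1: [(2, 0.6), (3, 0.4)],
    2: [(3, 0.5), (4, 0.5)],
    3: [(2, 0.3), (4, 0.7)],
}

def visit_counts(P, start, absorbing, runs, seed=0):
    """Total visits to each state over `runs` simulated attack paths."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        state = start
        while state != absorbing:
            counts[state] += 1
            nxt, wts = zip(*P[state])
            state = rng.choices(nxt, weights=wts)[0]
        counts[absorbing] += 1
    return counts

counts = visit_counts(P, start=1, absorbing=4, runs=2000)
# Excluding the start and absorbing states, the most-visited transient
# state is the natural candidate for patching first.
hot = max((s for s in counts if s not in (1, 4)), key=counts.__getitem__)
print(counts, "patch candidate:", hot)
```

Comparing `visit_counts` results across matrices estimated for different forecast horizons reproduces the upward/downward per-state trends discussed above.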
One of the major challenges in patch management is deciding when to install patches and
which patches take priority. By analyzing the trends over time of how the security state of a
network changes, a security engineer can make a more informed and intelligent decision on
optimizing the application of patches, thereby strengthening the current as well as the future
security state of the enterprise.
5. CONCLUSIONS
In this paper, we presented a non-homogeneous Markov model for quantitative assessment of
security attributes using attack graphs. Since existing metrics have potential shortcomings for
accurately quantifying the security of a system with respect to the age of its vulnerabilities, our
framework helps the security engineer make a more realistic and objective security evaluation of
the network. What sets our model apart from the rest is the use of the trusted CVSS framework
and the incorporation of a well-established Vulnerability lifecycle framework, to comprehend and
analyze both the evolving exploitability and impact trends of a given network using Attack
Graphs. We used a realistic network to analyze the merits of our model to capture security
properties and optimize the application of patches.
REFERENCES
[1] Secunia Vulnerability Review 2014: Key figures and facts from a global IT-Security perspective,
February 2014
[2] INFOSEC Research Council Hard problem List, 2005.
[3] M. Endsley. Toward a theory of situation awareness in dynamic systems. In Human Factors Journal,
volume 37(1), pages 32–64, March 1995.
[4] M. Schiffman, “Common Vulnerability Scoring System (CVSS),” http://www.first.org/cvss/
[5] Assad Ali, Pavol Zavarsky, Dale Lindskog, and Ron Ruhl, "A Software Application to Analyze
Affects of Temporal and Environmental Metrics on Overall CVSS v2 Score", Concordia University
College of Alberta, Edmonton, Canada, October 2010.
[6] National Vulnerability Database 2014.
[7] R. Ortalo, Y. Deswarte, and M. Kaaniche, “Experimenting with quantitative evaluation tools for
monitoring operational security,” IEEE Transactions on Software Engineering, vol. 25, pp. 633–650,
September 1999.
[8] W. Li and R. Vaughn, “Cluster security research involving the modeling of network exploitations
using exploitation graphs,” in Sixth IEEE International Symposium on Cluster Computing and Grid
Workshops, May 2006.
[9] N. Idika and B. Bhargava, “Extending attack graph-based security metrics and aggregating their
application,” Dependable and Secure Computing, IEEE Transactions on, no. 99, pp. 1–1, 2010.
[10] L. Wang, A. Singhal, and S. Jajodia, "Toward measuring network security using attack graphs," in
Proceedings of the 2007 ACM workshop on Quality Protection, pp. 49-54, 2007.
[11] K. S. Trivedi, D. S. Kim, A. Roy, and D. Medhi, "Dependability and security models," Proc. DRCN,
IEEE, 2009, pp. 11–20.
[12] A. Roy, D. S. Kim, and K. S. Trivedi, "Attack countermeasure trees (ACT): towards unifying the
constructs of attack and defense trees," J. of Security and Communication Networks, SI: Insider
Threats, 2011.
[13] O. Sheyner, J. W. Haines, S. Jha, R. Lippmann, and J. M. Wing, “Automated generation and analysis
of attack graphs,” in IEEE Symposium on Security and Privacy, 2002, pp. 273–284.
[14] Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel, "MulVAL: A logic-based network
security analyzer," in 14th USENIX Security Symposium, 2005.
[15] S. Jajodia and S. Noel, "Advanced Cyber Attack Modeling, Analysis, and Visualization," George
Mason University, Fairfax, VA, Technical Report 2010.
[16] S DoD, MIL-STD-1785, “System security engineering program management requirements,” 1988.
[17] M. Dacier, Y. Deswarte, “Privilege Graph: an Extension to the Typed Access Matrix Model,”Proc.
ESORICS, 1994.
[18] C. Phillips, L.P. Swiler, “A graph-based system for network-vulnerability analysis,”Proc. WNSP’98,
pp.71-79.
[19] L. Swiler, C. Phillips, D. Ellis, S. Chakerian, “Computer-attack graph generation tool,” Proc.
DISCEX01, pp.307-321.
[20] M. Abedin, S. Nessa, E. Al-Shaer, and L. Khan, “Vulnerability analysis for evaluating quality of
protection of security policies," in Proceedings of Quality of Protection 2006 (QoP '06), October
2006.
[21] H. Langweg, "Framework for malware resistance metrics," in Proceedings of the Quality of
Protection 2006 (QoP '06), October 2006.
[22] A. Kundu, N. Ghosh, I. Chokshi and SK. Ghosh, "Analysis of attack graph-based metrics for
quantification of network security", India Conference (INDICON), IEEE, pages 530 – 535, 2012.
[23] M. Frigault and L. Wang, "Measuring Network Security Using Bayesian Network-Based Attack
Graphs," in Proceedings of the 3rd IEEE International Workshop on Security, Trust and Privacy for
Software Applications (STPSA'08), 2008.
[24] L. Wang, T. Islam, T. Long, A. Singhal, and S. Jajodia, "An attack graph-based probabilistic security
metric," DAS 2008, LNCS 5094, pp. 283-296, 2008.
[25] L. Wang, A. Singhal, and S. Jajodia, "Measuring overall security of network configurations using
attack graphs," Data and Applications Security XXI, vol. 4602, pp. 98-112, August 2007.
[26] A. Jaquith, Security Metrics: Replacing Fear, Uncertainty, and Doubt. Addison-Wesley, Pearson
Education, 2007.
[27] K. Sallhammar, B. Helvik, and S. Knapskog, "On stochastic modeling for integrated security and
dependability evaluation," Journal of Networks, vol. 1, 2006.
[28] W. Arbaugh, W. Fithen, and J. McHugh, “Windows of vulnerability: a case study analysis,”
Computer, vol. 33, no. 12, pp. 52–59, Dec 2000
[29] N. Fischbach, "Le cycle de vie d'une vulnérabilité," http://www.securite.org, 2003.
[30] J. Jones, “Estimating software vulnerabilities,” Security & Privacy,IEEE, vol. 5, no. 4, pp. 28–32,
July-Aug. 2007.
[31] S. Frei, “Security econometrics - the dynamics of (in)security,” ETHZurich, Dissertation 18197, ETH
Zurich, 2009.
[32] Tim Bass. “Intrusion Detection System and Multi-sensor Data Fusion”. Communications of the ACM,
43, 4 (2000), pp.99-105.
[33] Huiqiang Wang et al., "Survey of Network Situation Awareness System," Computer Science,
vol. 33, 2006, pp. 5–10.
[34] S. G. Batsell et al., "Distributed Intrusion Detection and Attack Containment for Organizational
Cyber Security," [Online] http://www.ioc.ornl.gov/projects/documents/containment.pdf, 2005.
[35] J. Shifflet, "A Technique Independent Fusion Model For Network Intrusion Detection," Proceedings
of the Midstates Conference on Undergraduate Research in Computer Science and Mathematics,
vol. 3, 2005, pp. 13–19.
[36] Robert Ball, Glenn A. Fink. “Home-centric visualization of network traffic for security
administration”. Proceedings of the 2004 ACM workshop on Visualization and data mining for
computer security. Washington DC, October 2004, 55-64.
[37] Soon Tee Teoh, Kwan-Liu Ma, S.Felix Wu, et al. “Case Study: Interactive Visualization for Internet
Security”. Proceedings of IEEE VIS. Boson, October 2002, 505-508.
[38] C. Xiuzhen, Z. Qinghua, G. Xiaohong, and L. Chenguang, "Quantitative Hierarchical Threat
Evaluation Model for Network Security," Journal of Software, vol. 17, no. 4, April 2006, pp. 885–897.
[39] S. Shaoyi and Z. Yongzheng, "A Novel Extended Algorithm for Network Security Situation
Awareness," International Conference on Computer and Management (CAMAN), pp. 1–3, 2011.
[40] S. Abraham and S. Nair, "Cyber Security Analytics: A stochastic model for Security Quantification
using Absorbing Markov Chains," 5th International Conference on Networking and Information
Technology, ICNIT 2014.
[41] S. Abraham and S. Nair, "A Stochastic Model for Cyber Security Analytics," Tech Report 13-CSE-02,
CSE Dept, Southern Methodist University, Dallas, Texas, 2013.
[42] K. S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science
Applications. John Wiley & Sons, 2001.
[43] R. A. Sahner, K. S. Trivedi, and A. Puliafito, Performance and Reliability Analysis of Computer
Systems: An Example-Based Approach Using the SHARPE Software Package. Kluwer Academic
Publishers, 1996.
[44] O. Sheyner, J. W. Haines, S. Jha, R. Lippmann, and J. M. Wing, “Automated generation and analysis
of attack graphs,” in IEEE Symposium on Security and Privacy, 2002, pp. 273–284.
[45] Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel, "MulVAL: A logic-based network
security analyzer," in 14th USENIX Security Symposium, 2005.
[46] S. Jajodia and S. Noel, "Advanced Cyber Attack Modeling, Analysis, and Visualization," George
Mason University, Fairfax, VA, Technical Report 2010.
[47] Shengwei Yi, Yong Peng, Qi Xiong, Ting Wang, Zhonghua Dai, Haihui Gao, Junfeng Xu, Jiteng
Wang, and Lijuan Xu, "Overview on attack graph generation and visualization technology," 2013
IEEE International Conference on Anti-Counterfeiting, Security and Identification (ASID), Oct. 2013.
[48] R statistics tool. http://www.r-project.org/
Authors
Subil Abraham received his B.S. degree in computer engineering from the University of
Kerala, India. He obtained his M.S. in Computer Science in 2002 from Southern
Methodist University, Dallas, TX. His research interests include vulnerability assessment,
network security, and security metrics.
Suku Nair received his B.S. degree in Electronics and Communication Engineering from
the University of Kerala. He received his M.S. and Ph.D. in Electrical and Computer
Engineering from the University of Illinois at Urbana in 1988 and 1990, respectively.
Currently, he is the Chair and Professor in the Computer Science and Engineering
Department at the Southern Methodist University at Dallas where he held a J. Lindsay
Embrey Trustee Professorship in Engineering. His research interests include Network Security, Software
Defined Networks, and Fault-Tolerant Computing. He is the founding director of HACNet (High
Assurance Computing and Networking) Labs. He is a member of the IEEE and Upsilon Pi Epsilon.