Information Security Risk Analysis Using Analytic Hierarchy Process and Fuzzy Comprehensive Evaluation
Aliu Folasade (sadealiu@gmail.com), Ayeni Olaniyi A. (oaayeni@futa.edu.ng), Thompson Aderonke F. (afthompson@futa.edu.ng), Alese Boniface K. (bkalese@futa.edu.ng)
School of Computing
The Federal University of Technology
Akure, Nigeria.
Abstract
Risk analysis is a fundamental part of risk management: it determines the magnitude of the risk a system faces. This study applies the Analytic Hierarchy Process (AHP) and Fuzzy Comprehensive Evaluation (FCE) to analyse the risk level of an information security system. The weights obtained through AHP were used for both the single-factor and multi-level analyses of the FCE, and the rule of maximum membership was used to draw the conclusion of the evaluation. The maximum membership of the risk degree is 0.3254, which implies that the risk level for the system is low. The results of the risk assessment will help in recommending the necessary controls for the information security system.
Keywords
Analytic Hierarchy Process (AHP), Fuzzy Comprehensive Evaluation (FCE), Information Security, Risk Analysis
I. INTRODUCTION
Information security deals with protecting data, especially electronic data, from unauthorized use [1]. Every organisation that uses information needs to assess the security of the information at its disposal; hence the need for information security risk analysis. Risk assessment is the first step in the risk management process: it helps to ascertain the magnitude of a likely threat and the dangers that may be associated with an IT system [2]. The outcome of the risk assessment helps to pinpoint measures that reduce the identified risks. Security risks for information systems are dangers that arise from breaches of confidentiality, loss of integrity, or unavailability of information. The risk degree of an information system signifies the possible negative effects it has on an organization's assets and operations, and on the nation [3].
Information risk analysis entails four fundamental components: assets, threats, vulnerabilities and controls. A typical asset is clients' private details; such information is usually very important to the clients and also very sensitive, so if the data is stolen, lost or damaged in any way, the effect will be severe for both the clients and the corporation [4]. Threats can create undesirable circumstances that negatively affect the assets of a company; Mouna et al. proposed a detailed model outlining several threat attributes, which serves as a guideline for establishing the types of unwanted events that may affect information systems in organizations [5]. Vulnerabilities are flaws in a system that threats can exploit.
Controls can be characterised as measures taken to reduce the effects of threats on the assets of the establishment; they ensure the security of those assets.
There are several risk assessment tools, and they have been classified into two categories: qualitative and quantitative techniques. Each technique has its benefits and limitations; however, when the two are combined into a hybrid model, they generate improved results [6].
According to [7] and [8], quantitative techniques use mathematical methods to determine and analyse risk, while qualitative procedures use descriptive (adjective-based) ratings to perform the assessment. Risk assessment carried out using either quantitative or qualitative techniques alone does not produce adequate information for information security risk management procedures [9].
Due to these limitations, [9] recommended that soft computing should be used alongside both quantitative and qualitative procedures in order to improve the effectiveness of the analysis; this combination yields much better and more precise results. As a result, [10] endorsed the hybrid approach of combining AHP and FCE to assess risks related to information security. AHP transforms risks into numeric values, while FCE determines the extent of the threats to an establishment [6].
II. RELATED WORKS
In [11], a risk assessment procedure for information system security using information entropy was
proposed, and the security risk analysis model of the system was constructed. The authors in [12] presented a
methodology that correlates the assets, threats, vulnerabilities, and controls of the firm, and shows the relevance of
different controls relative to the values of the firm. The proposed approach used three different grids (a vulnerability grid, a threat grid and a control grid) to acquire the data required for the risk examination. However, this methodology works best for an existing organisation. In [13], a prototype for information security likelihood appraisal was designed using AHP alone, and it was shown that AHP can readily be applied to assess the probability of risk in web security. The author in [14] combined FCE with information entropy to determine the risk
extent of the information security structure. The risk degree for the entire system was defined based on estimation of
probability of the frequency and the effect of risk. In [10], AHP and FCE were combined to evaluate the information
security risk of a system in L-company. AHP was applied to find the more important elements of assessments from
many elements in order to simplify the calculation of risk value and provide a strong basis for taking relevant
measures [15]. In [16], AHP was used together with the FCE method to numerically assess the information security of the emergency command system of a hazardous-chemicals enterprise and to calculate the risk; the efficacy of the model was confirmed.
III. ANALYTIC HIERARCHY PROCESS
AHP is a multi-criteria decision-making method which converts subjective estimates of the relative importance of comparable factors into a set of scores or weights. The first step in the AHP algorithm is to make basic pair-wise comparisons between the factors in a judgement matrix, as shown in equation 1.
$$A = \begin{bmatrix} 1 & a_{12} & \cdots & a_{1n} \\ a_{21} & 1 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 1 \end{bmatrix} = \begin{bmatrix} w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n \end{bmatrix} \qquad (1)$$
Where A = basic comparison matrix,
w1 = weight of factor 1,
w2 = weight of factor 2,
wn = weight of factor n.
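As a quick illustration of equation 1 (a reader's sketch, not part of the original paper), the Python snippet below builds a perfectly consistent judgement matrix from a known weight vector and checks the reciprocity property; NumPy is assumed.

```python
import numpy as np

# Minimal sketch of equation (1): in a perfectly consistent judgement
# matrix, every entry is a ratio of factor weights, a_ij = w_i / w_j.
# The weight vector used here is the layer-1 result quoted later in the text.
w = np.array([0.558, 0.057, 0.122, 0.263])
A = w[:, None] / w[None, :]           # a_ij = w_i / w_j

assert np.allclose(np.diag(A), 1.0)   # self-comparisons are all 1
assert np.allclose(A * A.T, 1.0)      # reciprocity: a_ji = 1 / a_ij
```

In practice the entries are elicited from experts on the 1 to 9 scale of Table 2, so the matrix is only approximately consistent; the consistency check of equations 4 and 5 quantifies the deviation.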
Information security metrics to be analysed using AHP are represented in Table 1.
TABLE 1
GUIDE OF EVALUATION FOR INFORMATION SECURITY RISK ANALYSIS
Objective: Information Security Risk Analysis
Index of Criterion Layer 1 (with its Criterion Layer 2 indices):
Assets (X1): Confidentiality (X11), Integrity (X12), Availability (X13)
Threats (X2): Natural (X21), Human (X22), Environmental (X23)
Vulnerability (X3): Management (X31), Operational (X32), Technical (X33)
Control Measures (X4): Preventive (X41), Detective (X42)
A standard scale of preference, with values 1 to 9, is used to judge the importance of one factor over another in the matrix A. Table 2 shows the standard scale of preference.
TABLE 2
AHP SCALE OF PREFERENCE FOR COMPARISONS
Value       Representation
1           Equal importance
3           Moderate importance of one factor over another
5           Strong importance
7           Very strong importance
9           Extreme importance
2, 4, 6, 8  Intermediate values between adjacent judgements
The judgement matrices (pair-wise comparisons) are shown as follows.
Criterion Layer 1:

$$A = \begin{bmatrix} 1 & 7 & 5 & 3 \\ 1/7 & 1 & 1/3 & 1/5 \\ 1/5 & 3 & 1 & 1/3 \\ 1/3 & 5 & 3 & 1 \end{bmatrix}, \qquad W = (0.558,\ 0.057,\ 0.122,\ 0.263)$$

Asset:

$$A_1 = \begin{bmatrix} 1 & 3 & 5 \\ 1/3 & 1 & 3 \\ 1/5 & 1/3 & 1 \end{bmatrix}, \qquad W_1 = (0.63,\ 0.26,\ 0.11)$$

Threats:

$$A_2 = \begin{bmatrix} 1 & 1/5 & 1/3 \\ 5 & 1 & 3 \\ 3 & 1/3 & 1 \end{bmatrix}, \qquad W_2 = (0.11,\ 0.63,\ 0.26)$$

Vulnerability:

$$A_3 = \begin{bmatrix} 1 & 5 & 1/3 \\ 1/5 & 1 & 1/7 \\ 3 & 7 & 1 \end{bmatrix}, \qquad W_3 = (0.28,\ 0.08,\ 0.64)$$

Controls:

$$A_4 = \begin{bmatrix} 1 & 3 \\ 1/3 & 1 \end{bmatrix}, \qquad W_4 = (0.75,\ 0.25)$$
Obtain a normalised pair-wise matrix by adding the figures in each column of the pair-wise matrix and then dividing each value in the matrix by its column sum:

$$\bar{a}_{ij} = \frac{a_{ij}}{\sum_{k=1}^{n} a_{kj}} \qquad (2)$$
To generate the weight matrix (priority vector), each row of the normalised matrix is summed and the total is divided by the number of factors:

$$w_i = \frac{1}{n} \sum_{j=1}^{n} \bar{a}_{ij} \qquad (3)$$
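The following Python sketch (a reader's reconstruction, assuming NumPy; not the authors' code) applies equations 2 and 3 to the Criterion Layer 1 matrix shown earlier and reproduces the reported weights.

```python
import numpy as np

# Criterion Layer 1 pair-wise comparison matrix, as reconstructed above.
A = np.array([
    [1,   7,   5,   3  ],
    [1/7, 1,   1/3, 1/5],
    [1/5, 3,   1,   1/3],
    [1/3, 5,   3,   1  ],
])

A_norm = A / A.sum(axis=0)   # equation (2): divide each entry by its column sum
W = A_norm.mean(axis=1)      # equation (3): average the normalised rows
print(W.round(3))            # -> [0.558 0.057 0.122 0.263]
```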
The consistency vector is obtained by multiplying the pair-wise matrix by the weight vector and dividing each resulting row entry by the corresponding criterion weight; the average of these ratios gives the maximum eigenvalue estimate, λmax.
The Consistency Index (CI) is given as:

$$CI = \frac{\lambda_{max} - n}{n - 1} \qquad (4)$$

where n is the order of the matrix.
Finally, the Consistency Ratio (CR) is computed by dividing the CI by the Random Index (RI). In general, if CR is smaller than or equal to 0.1, the judgements are in consonance with one another. The formula for CR is:

$$CR = \frac{CI}{RI} \qquad (5)$$

where the value of RI is shown in the Random Consistency Index table (Table 3).
TABLE 3
RANDOM CONSISTENCY INDEX
n 1 2 3 4 5 6 7 8 9 10
RI 0 0 0.58 0.9 1.12 1.24 1.32 1.41 1.45 1.49
If CR ≤ 0.1, then the judgement is acceptable; otherwise the judgement should be re-examined.
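Continuing the sketch, equations 4 and 5 can be checked in a few lines; the code below recomputes the weights, estimates λmax as the average ratio of the entries of A·W to the corresponding weights, and looks RI up in Table 3. Again, this is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np

A = np.array([[1, 7, 5, 3],
              [1/7, 1, 1/3, 1/5],
              [1/5, 3, 1, 1/3],
              [1/3, 5, 3, 1]])
W = (A / A.sum(axis=0)).mean(axis=1)      # weights via equations (2)-(3)

RI = [0, 0, 0.58, 0.9, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]  # Table 3, n = 1..10
n = A.shape[0]
lam_max = float(np.mean((A @ W) / W))     # consistency vector, averaged
CI = (lam_max - n) / (n - 1)              # equation (4)
CR = CI / RI[n - 1]                       # equation (5)
print(round(CR, 3))                       # -> 0.044, below the 0.1 threshold
```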
From the pair-wise matrices, the weights are generated and the judgements are consistent. The weights are:
Criterion Layer 1: W = (0.558, 0.057, 0.122, 0.263)
Assets: W1 = (0.63, 0.26, 0.11)
Threats: W2 = (0.11, 0.63, 0.26)
Vulnerability: W3 = (0.28, 0.08, 0.64)
Controls: W4 = (0.75, 0.25)
Table 4 shows the overall weights for the information security risk metrics.
TABLE 4
FINAL WEIGHTS FOR INFORMATION SECURITY RISK
Element Weight Combined Weight
Criterion Layer 2
Confidentiality 0.63 0.35154
Integrity 0.26 0.14508
Availability 0.11 0.06138
Natural 0.11 0.00627
Human 0.63 0.03591
Environmental 0.26 0.01482
Management 0.28 0.03416
Operational 0.08 0.00976
Technical 0.64 0.07808
Preventive 0.75 0.19725
Detective 0.25 0.06575
Criterion Layer 1
Assets 0.558
Threats 0.057
Vulnerability 0.122
Controls 0.263
Combined Consistency: 0.09612267
The values in the second column show the weights of the factors in the second criterion layer with respect
to their corresponding factors in the first criterion layer. The values in the third column (combined weights) show
the overall influence of each factor relative to the objective of the analysis. The combined weights show that the confidentiality of information is the most important element in the assessment of information security, while the natural-threat factor carries the least weight in the information security risk.
The weights in the second column for criterion layer 2 will be used for the single-factor evaluation in the Fuzzy Comprehensive Evaluation (FCE), while the weights for the first criterion layer will be used for the multi-level evaluation in the FCE. The overall consistency ratio for the hierarchy is 0.09612267, which is less than 0.1, so the analysis is acceptable.
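Since each combined weight in Table 4 is simply a layer-2 weight multiplied by its parent's layer-1 weight, the table can be reproduced with a short script. The sketch below (a reader's illustration in plain Python) encodes the hierarchy as dictionaries.

```python
# Global (combined) weights: local layer-2 weight times the parent's
# layer-1 weight, as in Table 4.
layer1 = {"Assets": 0.558, "Threats": 0.057,
          "Vulnerability": 0.122, "Controls": 0.263}
layer2 = {
    "Assets":        {"Confidentiality": 0.63, "Integrity": 0.26, "Availability": 0.11},
    "Threats":       {"Natural": 0.11, "Human": 0.63, "Environmental": 0.26},
    "Vulnerability": {"Management": 0.28, "Operational": 0.08, "Technical": 0.64},
    "Controls":      {"Preventive": 0.75, "Detective": 0.25},
}

combined = {child: w * layer1[parent]
            for parent, children in layer2.items()
            for child, w in children.items()}
print(combined["Confidentiality"])   # 0.35154, the largest combined weight
print(combined["Natural"])           # 0.00627, the smallest combined weight
```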
IV. FUZZY COMPREHENSIVE EVALUATION
The fuzzy comprehensive evaluation technique is an assessment procedure based on fuzzy mathematics. The steps are highlighted below.
A. Determine the domain of the evaluated object factors
The factor set X = {x1, x2, ..., xj} means that there are j assessment factors on which the evaluated object is judged; xi represents the ith index. According to Table 1, the risk factors have been identified. The fuzzy set is X = {X11, X12, X13, X21, X22, X23, X31, X32, X33, X41, X42}, of which X11, X12, X13, X21, X22, X23, X31, X32, X33, X41, X42 are the risk factors.
A comment set is set up to be used by the evaluators to grade the objects, with Y as an assessment index set: Y = {y1, y2, ..., yn}. Since risk is a function of probability and impact, two different evaluation sets are
built. The interpretation and meaning of the assessment set Y = {Y1, Y2, ..., Y5} of the risk factor set X for the risk likelihood, Rp, is shown in Table 5.
TABLE 5
DESCRIPTION OF RISK LIKELIHOOD LEVEL
Risk Likelihood Likelihood Description
Y1 Very Low Might never occur.
Y2 Low Might occur once in 3 years.
Y3 Medium Might occur about twice in one year.
Y4 High Might occur at least once in a month.
Y5 Very High Might occur every day.
The assessment set Y = {Y1, Y2, ..., Y5} of the risk factor set X and its interpretation for the risk impact, Rc, is shown in Table 6.
TABLE 6
DESCRIPTION OF RISK IMPACT LEVEL
Risk Impact Impact Description
Y1 Very Low There is almost no impact on the system.
Y2 Low There is a mild impact on the system, but it can be recovered from with little effort.
Y3 Medium The impact can damage the reputation of the organisation but can be quickly restored if properly handled.
Y4 High There is a partial breakdown of the system which can lead to loss of trust among clients.
Y5 Very High There is complete and devastating breakdown of the entire system.
Each of the experts assesses the likelihood and impact of the risk factors X based on Tables 5 and 6. A risk matrix R is generated for each expert based on Table 7.
TABLE 7
RISK MATRIX (ROWS: LIKELIHOOD Rp; COLUMNS: IMPACT Rc)
Rp \ Rc Y1 Y2 Y3 Y4 Y5
Y1 VL (Y1) VL (Y1) L (Y2) L (Y2) M (Y3)
Y2 VL (Y1) L (Y2) L (Y2) M (Y3) M (Y3)
Y3 L (Y2) L (Y2) M (Y3) M (Y3) H (Y4)
Y4 L (Y2) M (Y3) M (Y3) H (Y4) VH (Y5)
Y5 M (Y3) M (Y3) H (Y4) VH (Y5) VH (Y5)
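The risk matrix of Table 7 can be read as a simple two-dimensional lookup. The sketch below encodes it in Python; treating the rows as likelihood and the columns as impact is an assumption from the table's layout (the matrix is symmetric, so the result is the same either way).

```python
# Table 7 as a lookup table: combined risk grade for a likelihood
# grade (row) and an impact grade (column), both on the Y1..Y5 scale.
RISK_MATRIX = [
    # impact:  Y1    Y2    Y3    Y4    Y5
    ["Y1", "Y1", "Y2", "Y2", "Y3"],   # likelihood Y1
    ["Y1", "Y2", "Y2", "Y3", "Y3"],   # likelihood Y2
    ["Y2", "Y2", "Y3", "Y3", "Y4"],   # likelihood Y3
    ["Y2", "Y3", "Y3", "Y4", "Y5"],   # likelihood Y4
    ["Y3", "Y3", "Y4", "Y5", "Y5"],   # likelihood Y5
]

def combine(likelihood: int, impact: int) -> str:
    """Combined risk grade for 1-based likelihood and impact grades."""
    return RISK_MATRIX[likelihood - 1][impact - 1]

print(combine(4, 5))   # -> 'Y5': high likelihood, very high impact
```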
B. Evaluate single factor and establish the fuzzy relationship grid, R.
The process of assessing an element individually and establishing the membership degree set ‘Y’ of the
evaluated element is referred to as single-factor fuzzy evaluation. Twenty (20) experts were selected to evaluate the
information security risk. These experts individually decided the level of the evaluated elements in relation to the
information security risk. For each factor x_j, r_ij stands for the degree of membership of x_j in grade v_i:

$$r_{ij} = \frac{n_{ij}}{z} \qquad (6)$$

where n_ij is the number of experts who assign factor x_j to grade v_i and z is the total number of experts. R denotes the fuzzy matrix of the elements x_j over the grades v_i, as shown in equation 7.
$$R = \begin{bmatrix} r_{11} & \cdots & r_{1n} \\ \vdots & \ddots & \vdots \\ r_{m1} & \cdots & r_{mn} \end{bmatrix} \qquad (7)$$
Table 8 shows the evaluation reports of the experts.
TABLE 8
EXPERTS' EVALUATION REPORTS
Risk Factor  Fuzzy Assessment Level
             V1  V2  V3  V4  V5
X11          3   4   5   4   4
X12          2   8   5   2   3
X13          7   3   2   6   2
X21          5   5   5   0   5
X22          2   7   6   0   5
X23          1   11  2   1   5
X31          3   9   4   2   2
X32          3   8   5   3   1
X33          1   8   2   5   4
X41          2   9   1   3   5
X42          3   8   4   3   2
The single-factor risk evaluation matrices are:

$$R_1 = \begin{bmatrix} 0.15 & 0.2 & 0.25 & 0.2 & 0.2 \\ 0.1 & 0.4 & 0.25 & 0.1 & 0.15 \\ 0.35 & 0.15 & 0.1 & 0.3 & 0.1 \end{bmatrix}$$

$$R_2 = \begin{bmatrix} 0.25 & 0.25 & 0.25 & 0 & 0.25 \\ 0.1 & 0.35 & 0.3 & 0 & 0.25 \\ 0.05 & 0.55 & 0.1 & 0.05 & 0.25 \end{bmatrix}$$

$$R_3 = \begin{bmatrix} 0.15 & 0.45 & 0.2 & 0.1 & 0.1 \\ 0.15 & 0.4 & 0.25 & 0.15 & 0.05 \\ 0.05 & 0.4 & 0.1 & 0.25 & 0.2 \end{bmatrix}$$

$$R_4 = \begin{bmatrix} 0.1 & 0.45 & 0.05 & 0.15 & 0.25 \\ 0.15 & 0.4 & 0.2 & 0.15 & 0.1 \end{bmatrix}$$
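Each matrix above is obtained by applying equation 6 to the corresponding rows of Table 8 with z = 20 experts; for example, the assets matrix R1 follows from the X11, X12 and X13 rows. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Equation (6): membership degree = expert votes / total experts (z = 20).
counts_assets = np.array([   # X11, X12, X13 rows of Table 8
    [3, 4, 5, 4, 4],
    [2, 8, 5, 2, 3],
    [7, 3, 2, 6, 2],
])
R1 = counts_assets / 20
print(R1[0])   # -> [0.15 0.2 0.25 0.2 0.2], the first row of R1
```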
C. Determine the fuzzy weight values of the assessed factors
To determine the fuzzy level of each element, the weight wi (i = 1, 2, ..., n) given to the elements of X must satisfy wi ≥ 0 and Σ wi = 1, where wi represents the weight of the ith element; together the weights constitute the fuzzy weight set W. The weights applied in FCE have a substantial effect on the final outcome of the evaluation. In this work, AHP is applied to acquire the weights.
D. Obtain the comprehensive result
The weight vector W is multiplied by the fuzzy matrix R to obtain the FCE output vector D for each of the assessed object elements. The FCE model is shown in equation 8.
$$D = W \cdot R = (w_1, w_2, \ldots, w_m) \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{bmatrix} = (d_1, d_2, \ldots, d_n) \qquad (8)$$
The results of the single-factor evaluation are:

$$D_i = W_i \cdot R_i \qquad (9)$$

D1 = (0.159, 0.2465, 0.2335, 0.185, 0.176)
D2 = (0.1035, 0.391, 0.2425, 0.013, 0.25)
D3 = (0.086, 0.414, 0.14, 0.2, 0.16)
D4 = (0.1125, 0.4375, 0.0875, 0.15, 0.2125)
The results of the multi-factor evaluation are:
= (10)
= ∙ (11)
= 0.1347 0.3254 0.1842 0.1678 0.1879
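The full evaluation chain (equations 9 to 11, followed by the maximum-membership rule used in the next subsection) can be reproduced numerically. The sketch below is a reader's reconstruction with NumPy, using the weights and membership matrices given in the text.

```python
import numpy as np

# AHP weights from the text.
W  = np.array([0.558, 0.057, 0.122, 0.263])          # criterion layer 1
W1, W2 = np.array([0.63, 0.26, 0.11]), np.array([0.11, 0.63, 0.26])
W3, W4 = np.array([0.28, 0.08, 0.64]), np.array([0.75, 0.25])

# Single-factor membership matrices R1..R4 (Table 8 counts / 20).
R1 = np.array([[0.15, 0.20, 0.25, 0.20, 0.20],
               [0.10, 0.40, 0.25, 0.10, 0.15],
               [0.35, 0.15, 0.10, 0.30, 0.10]])
R2 = np.array([[0.25, 0.25, 0.25, 0.00, 0.25],
               [0.10, 0.35, 0.30, 0.00, 0.25],
               [0.05, 0.55, 0.10, 0.05, 0.25]])
R3 = np.array([[0.15, 0.45, 0.20, 0.10, 0.10],
               [0.15, 0.40, 0.25, 0.15, 0.05],
               [0.05, 0.40, 0.10, 0.25, 0.20]])
R4 = np.array([[0.10, 0.45, 0.05, 0.15, 0.25],
               [0.15, 0.40, 0.20, 0.15, 0.10]])

# Equation (9): D_i = W_i . R_i, stacked into the matrix of equation (10).
D_stack = np.vstack([Wi @ Ri for Wi, Ri in ((W1, R1), (W2, R2), (W3, R3), (W4, R4))])

# Equation (11): overall evaluation vector, then the maximum-membership rule.
D = W @ D_stack
print(D.round(4))                        # -> [0.1347 0.3254 0.1842 0.1678 0.1879]
print("grade Y%d" % (D.argmax() + 1))    # -> grade Y2, i.e. a Low risk level
```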
E. Get the conclusion of the result
The conclusion of the overall assessment is obtained through the principle of maximum membership. The maximum membership degree of the risk is 0.3254, which corresponds to grade Y2. This indicates that the overall risk level is low and the risk index is acceptable. The results of this risk evaluation procedure will guide the recommendation of relevant procedural and technical security controls for the selected information security system.
V. CONCLUSION
This research employs AHP and FCE to assess the risk of an information security system. AHP was applied to analyse the information security metrics, and the weights obtained from the analysis were used for the fuzzy evaluation. The results show that the risk level of the system is low, and thus the risk is acceptable. The results obtained will be used to recommend suitable controls for the system.
REFERENCES
[1] INTERNATIONAL STANDARD ISO/IEC 27005. (2008) Information technology—Security techniques—
Information security risk management.
[2] NIST Special Publication 800-30. (2002). Risk Management Guide for Information Technology Systems.
[3] Ron, R., Janet, C.O., Michael, M. (2014). Systems Security Engineering: An Integrated Approach to Building
Trustworthy Resilient Systems. National Institute of Standards and Technology (NIST) Special Publication
800-160 Initial Public Draft.
[4] Edward, H. (2010). Information Security Risk Management. Handbook for ISO/IEC 27001
[5] Mouna, J., Latifa, B., Arfa, R., & Anis, B.A. (2014). Classification of Security Threats in Information Systems. 5th International Conference on Ambient Systems, Networks and Technologies (ANT), Procedia Computer Science, 32, 489–496.
[6] Zabawi, A.Y., Ahmad, R., & Abdul-Latip, S.F. (2015). A Comparative Study for Risk Analysis Tools in
Information Security. ARPN Journal of Engineering and Applied Sciences, Vol. 10, No. 23, ISSN 1819-6608
[7] Wawrzyniak, D. (2006). Information Security Risk Assessment Model for Risk Management.
[8] Neeta, S. & Sachin, K. (2012). A Comparative Study on Information Security Risk Analysis Practices.
International Journal of Computer Applications.
[9] Armaghan, B., Rafhana, A. R. & Junaid, A.C. (2012). A survey of Information Security Risk Analysis
Method. Smart Computing Review, vol. 2, no. 1.
[10] Ming-Chang, L. (2014). Information Security Risk Analysis Methods and Research Trends: AHP and Fuzzy Comprehensive Method. International Journal of Computer Science & Information Technology (IJCSIT), Vol. 6, No. 1. DOI: 10.5121/ijcsit.2014.6103
[11] Sha, F., Zhongli, L., Hangjun, Z., Wenbin, L., & Bo, L. (2015). A Security Risk Analysis Method for
Information System Based on Information Entropy. The Open Cybernetics & Systemics Journal.
[12] Sanjay, G. & Vicki, C. (2004). Information Security Risk Analysis – A Matrix-Based Approach.
[13] Ning, X., & Dong-Mei, Z. (2011). The Research of Information Security Risk Assessment Method Based on
AHP. Advanced Material Research, Trans Tech Publications, Switzerland.
[14] Cheng, Y. (2014). Quantitative risk analysis method of information security-combining fuzzy comprehensive
analysis with information entropy. Bio Technology An Indian Journal (BTAIJ), 10(21), [12753-12761]
[15] Ming-Xiang, H., & Xin, A. (2016). Information Security Risk Assessment Based on Analytic Hierarchy
Process. Indonesian Journal of Electrical Engineering and Computer Science. Volume 1, No. 3.
[16] Zhang, J., Gai, K., Yang, F., Yang, R., & Wang, S. (2019). Information Security Risk Assessment of
Hazardous Chemicals Emergency Command System Based on AHP-Fuzzy Comprehensive Evaluation
Model. IOP Conference Series: Materials Science and Engineering. doi:10.1088/1757-899X/612/5/052004