Much research has been conducted to detect vulnerabilities in web applications; however, none of it has proposed a methodology to measure those vulnerabilities either qualitatively or quantitatively. In this paper, a methodology is proposed to quantify vulnerabilities in web applications. We applied the Goal Question Metric (GQM) methodology to determine all possible security factors and sub-factors of web applications at the Department of Transportation (DOT) as our proof of concept. We then introduced a Multi-layered Fuzzy Logic (MFL) approach based on the prioritization of the security sub-factors in the Analytic Hierarchy Process (AHP). Using AHP, we weighted each security sub-factor before the fuzzy-logic quantification step, which handles the imprecision of crisp-number calculation.
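The AHP-weighting step described above can be sketched in a few lines. The sub-factors, pairwise judgements, scores, and the triangular fuzzy set below are all hypothetical illustrations, not the paper's actual data; the weight derivation uses the standard geometric-mean method, which may differ from the authors' exact AHP variant.

```python
import math

def ahp_weights(pairwise):
    """Derive priority weights from an AHP pairwise-comparison matrix
    using the geometric-mean (logarithmic least squares) method."""
    gmeans = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c], peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical sub-factors: confidentiality, integrity, availability.
# judgements[i][j] = how much more important factor i is than factor j.
judgements = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(judgements)

# Crisp vulnerability scores (0-10) per sub-factor, fuzzified against a
# "high severity" triangular set, then aggregated with the AHP weights.
scores = [8.0, 6.5, 4.0]
memberships = [triangular(s, 3.0, 7.0, 10.0) for s in scores]
overall = sum(w * m for w, m in zip(weights, memberships))
print(round(overall, 3))
```

The weighted membership degree gives a single quantified severity in [0, 1], which is the general shape of the AHP-then-fuzzify pipeline the abstract outlines.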
Vulnerability Scanners: A Proactive Approach to Assess Web Application Security (ijcsa)
With the increasing concern for security in the network, many approaches have been laid out that try to protect the network from unauthorised access. New methods have been adopted to find the potential discrepancies that may damage the network. The most commonly used approach is vulnerability assessment. By vulnerability, we mean the potential flaws in a system that make it prone to attack. Assessing these system vulnerabilities provides a means to identify and develop new strategies to protect the system from the risk of being damaged. This paper focuses on the usage of various vulnerability scanners and their related methodology to detect the vulnerabilities present in web applications or remote hosts across the network, and tries to identify new mechanisms that can be deployed to secure the network.
This document proposes a Rational Unified Treatment approach for Web Application Vulnerability Assessment (WVA). It implements the Rational Unified Process (RUP) framework to iteratively identify vulnerabilities. The approach discovers assets, audits for risks and threats, exploits identified vulnerabilities, and provides mitigation steps. It was tested on a web application using the w3af security scanner. The results generated reports on discovered vulnerabilities in different formats to help secure the application.
This document discusses model-based vulnerability testing (MBVT) for web applications. It proposes a three-model framework that separates the application specification model from the implementation model to test contexts missed by other approaches. MBVT aims to improve accuracy and precision of vulnerability testing by generating test cases from models and vulnerability test patterns to avoid false positives and negatives. The goal is to automate vulnerability testing and increase detection of vulnerabilities to improve overall security.
Web Applications Assessment Tools: Comparison and Discussion (EECJOURNAL)
Web applications have recently proliferated rapidly, with the world increasingly dependent on them for financial transactions, purchasing, billing, education, medicine, and much more. But the security of these applications is worrying because it directly affects the end-user. It is therefore necessary to detect security vulnerabilities in those applications that may cause significant user problems. The most commonly used approach to detect those vulnerabilities is assessment tools such as web scanners. This paper focuses on the usage of these web scanners and their related methodology to detect the various vulnerabilities in web applications, and then compares the scanners based on their results.
COMPARATIVE REVIEW OF MALWARE ANALYSIS METHODOLOGIES (IJNSA Journal)
This document compares two methodologies for malware analysis: MARE and SAMA. MARE was the first structured malware analysis methodology, introduced in 2010, and consisted of four phases: detection, isolation/extraction, behavioral analysis, and code analysis/reverse engineering. SAMA was introduced more recently in 2020 to address challenges posed by increasingly sophisticated malware. It retains the same four phases but renames and restructures them. The document analyzes the phases of each methodology and compares their approaches. It finds that SAMA's initial phase of establishing an analysis environment before beginning analysis is preferable to MARE's approach of starting with detection.
How Can We Predict Vulnerabilities to Prevent Them from Causing Data Losses? (Abhishek BV)
This document discusses various approaches to predicting software vulnerabilities in order to prevent data losses. It begins by providing background on software vulnerabilities and their impacts. It then evaluates three main approaches: reactive, reliability, and proactive. The document favors the proactive approach, which predicts flaws before they occur. It summarizes three proposed solutions that take a proactive approach: 1) building a prediction model using software characteristics, 2) using source code static analyzers during development, and 3) employing a combination of statistical methods and code metrics throughout the software lifecycle. For each solution, it discusses the methodology, findings, advantages, and disadvantages. Overall, the document analyzes different proactive approaches to vulnerability prediction in order to address how vulnerabilities can be predicted before they cause data losses.
Formal method techniques provide a suitable platform for software development in software systems. Formal methods and formal verification are necessary to prove the correctness and improve the performance of software systems at various levels of design and implementation. Security is an important issue in computer systems. Since antivirus applications play a very important role in computer system security, verifying these applications is essential. In this paper, we present four new approaches to antivirus system behavior, and a behavioral model of the protection services in an antivirus system is proposed. We divided the behavioral model into preventive behavior and control behavior, and then formalized these behaviors. Finally, using some definitions, we explain how these behaviors are mapped onto each other by our new approaches.
This summarizes a research paper about standardizing source code security audits. The paper proposes assembling literature on security audit techniques to promote standard methodology. It then presents a case study analyzing vulnerabilities in the Apache Traffic Server using two proprietary tools. The study examines potential issues, connects them to a standard taxonomy (CWE), and describes consequences of exploits. The paper concludes by reviewing other security case studies.
Intelligence on the Intractable Problem of Software Security (Tyler Shields)
More than half of all software failed to meet an acceptable security level and 8 out of 10 web applications failed to comply with OWASP Top 10. Cross-site scripting was the most prevalent vulnerability across all applications. Third-party applications were found to have the lowest security quality, though developers repaired vulnerabilities quickly. Suppliers of cloud/web applications were most frequently subjected to third-party risk assessments. No single testing method was adequate by itself, and financial industry application security did not match business criticality.
An Empirical Study on the Security Measurements of Websites of Jordanian Publ... (CSCJournals)
Most of the Jordanian universities’ inquiries systems, i.e. educational, financial, administrative, and research systems are accessible through their campus networks. As such, they are vulnerable to security breaches that may compromise confidential information and expose the universities to losses and other risks. At Jordanian universities, security is critical to the physical network, computer operating systems, and application programs and each area has its own set of security issues and risks. This paper presents a comparative study on the security systems at the Jordanian universities from the viewpoint of prevention and intrusion detection. Robustness testing techniques are used to assess the security and robustness of the universities’ online services. In this paper, the analysis concentrates on the distribution of vulnerability categories and identifies the mistakes that lead to a severe type of vulnerability. The distribution of vulnerabilities can be used to avoid security flaws and mistakes.
Computer Worms Based on Monitoring Replication and Damage: Experiment and Eva... (IOSRjournaljce)
This document describes experiments conducted to evaluate a proposed worm detection system (WDS) and its ability to detect computer worms. The experiments involved networking three machines and transferring files between them that may contain worms. Five known worms that cause specific types of damage were tested to evaluate if the WDS could detect the worms through their damage or replication behavior. Additional experiments tested the WDS's ability to detect unknown worms and known worms through signature matching. The results showed that the WDS successfully detected the worms and met all evaluation criteria.
Research Article on Web Application Security (SaadSaif6)
A hand-written research article on web application security: Improving Critical Web-based Applications Quality Through In-depth Security Analysis. The author prepared it over the course of a month; it is intended as a reference for research papers and for class presentation and submission.
10 Year Impact Award Presentation - Duplicate Bug Reports Considered Harmful ... (Nicolas Bettenburg)
This document describes a new automated method called SEVERIS that assists NASA test engineers in assigning severity levels to defect reports. SEVERIS uses text mining and machine learning techniques on NASA's Project and Issue Tracking System (PITS) database to predict issue severities. A case study found that SEVERIS accurately predicts severities and provides probability estimates, helping guide decision making during the severity assessment process.
Routine Detection of Web Application Defence Flaws (IJTET Journal)
Abstract: Detecting security vulnerabilities in ASP.NET websites and web applications is a complex process; most of the code is written by somebody else, and there is no documentation to determine the purpose of the source code. Characteristic source-code defects generate major web application vulnerabilities, and typical software faults lie behind web application vulnerabilities across different programming languages. ASP.NET, part of the .NET framework, separates the HTML code from the programming code into two files: an aspx file and another for the programming code, which depends on a compiled language (Visual Basic VB, C sharp C#, Java Script). Visual Basic and C# are the most common languages used with ASP.NET files, and these two compiled languages, together with aspx files, form the basis of our proposed algorithm. A hacker can inject malicious input or a script that can destroy the database or steal website files. The fault-detection process is performed with a scanning tool that inspects three types of files (aspx, VB, and C#), after which the software faults are identified. A fault-recovery process then uses a prepared replacement-statement technique to detect the vulnerabilities and recover from them with high efficiency; it provides suggestions and generates a report, which helps improve the overall security of the system.
Vulnerability Analysis of 802.11 Authentications and Encryption Protocols: CV... (AM Publications)
This paper analyses the vulnerability of WLAN cipher suites, authentication mechanisms, and credentials to known attacks using the Common Vulnerability Scoring System (CVSS).
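To make the CVSS mechanics concrete, here is a minimal sketch of the published CVSS v3.1 base-score equations for unchanged-scope vulnerabilities. The metric values and rounding rule come from the CVSS v3.1 specification; the paper may well score against CVSS v2, so treat this only as an illustration of how a vector maps to a numeric score.

```python
import math

# CVSS v3.1 base-metric values (unchanged scope).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def roundup(x):
    """CVSS 'round up to one decimal place' helper."""
    return math.ceil(x * 10 - 1e-9) / 10

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for a scope-unchanged vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10.0))

# A network-exploitable flaw needing no privileges or user interaction,
# with high C/I/A impact (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H):
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

Scoring each WLAN attack this way yields the comparable severity numbers the abstract refers to.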
A Comparative Study between Vulnerability Assessment and Penetration Testing (YogeshIJTSRD)
The Internet has drastically changed in the past decade. It now carries more business than before, and there has therefore been an increase in Advanced Persistent Threat (APT) groups and adversaries. Despite all the advancement in technology and innovation, web application security is still a challenge for most organizations all over the world, because APT groups and threat actors keep using different tactics, techniques, and procedures (TTPs) to exploit an organization. There are many techniques to mitigate such attacks, such as defensive coding, hardening the system firewall, implementing IDS and IPS, and using SIEM tools. The solution involves monitoring different logs and events and regular assessment of the organization's network, known as vulnerability assessment, a generalized or sequenced review of a security system; the other approach is penetration testing, also popularly known as ethical hacking or red-team assessment, where the testers pose as real hackers and try to penetrate the company's network to check whether it is really secure. In this paper we compare these two methods and techniques and decide at the end which of the two is superior, and why. Sharique Raza | Feon Jaison, "A Comparative Study between Vulnerability Assessment and Penetration Testing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-3, April 2021. URL: https://www.ijtsrd.com/papers/ijtsrd41145.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/41145/a-comparative-study-between-vulnerability-assessment-and-penetration-testing/sharique-raza
Optimised Malware Detection in Digital Forensics (IJNSA Journal)
On the Internet, malware is one of the most serious threats to system security. Most complex issues and problems on any system are caused by malware and spam. Networks and systems can be accessed and compromised by malware known as botnets, which compromise other systems through a coordinated attack. Such malware uses anti-forensic techniques to avoid detection and investigation. To protect systems from the malicious activity of this malware, a new framework is required that aims to develop an optimised technique for malware detection. Hence, this paper demonstrates new approaches to performing malware analysis in forensic investigations and discusses how such a framework may be developed.
Building a Distributed Secure System on Multi-Agent Platform Depending on the... (CSCJournals)
Today, applications in mobile multi-agent systems require a high degree of confidence that code running inside the system will not be malicious, and any malicious agents must be identified and contained. Since the inception of mobile agents, the intruder problem has been addressed using a multitude of techniques, but many of these implementations have only addressed concerns from the position of either the platform or the agents. Very few approaches have tackled the problem of mobile agent security from both perspectives simultaneously. Furthermore, no middleware exists to facilitate provisioning of the required security qualities of mobile agent software while extensively focusing on easing the software development burden. The aim is to build a distributed secure system using multi-agents by applying the principles of software engineering. The objectives of this paper are to introduce multi-agent systems that enforce security rules through access rights, to build a distributed secure system that integrates the principles of the software engineering system life cycle, and to satisfy the security access rights for both platform and agents in order to improve the three agent characteristics of adaptivity, mobility, and flexibility. This project is based on PHP and MySQL (database) and can be presented as a website. The implementation and tests were carried out on both Linux and Windows platforms, including Linux Red Hat 8, Linux Ubuntu 6.06 LTS, and Microsoft Windows XP Professional. Since PHP and MySQL are available in almost all operating systems, the result can be tested on any platform where a PHP and MySQL configuration is available. PHP5 and the MySQL database software are used to build a secure website. Multiple security and authentication techniques are used by the multi-agent system, and the database credentials are hashed using MD5.
The system also satisfies the characteristic security requirements: confidentiality (protection from disclosure to unauthorized persons), integrity (maintaining data consistency), and authentication (assurance of the identity of the person or originator of the data).
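The credential-protection scheme mentioned above can be sketched with Python's standard `hashlib`. The function names and the salting scheme here are illustrative assumptions, not the paper's PHP implementation; note also that MD5 is a hash function rather than encryption, and it is considered weak today (bcrypt, scrypt, or Argon2 would be preferred in a modern system).

```python
import hashlib
import os

def md5_hash_password(password, salt=None):
    """Store a credential as salt_hex$md5(salt + password).
    Illustrative only: MD5 is weak by modern standards."""
    salt = salt if salt is not None else os.urandom(8)
    digest = hashlib.md5(salt + password.encode("utf-8")).hexdigest()
    return salt.hex() + "$" + digest

def md5_verify(password, stored):
    """Re-hash the candidate password with the stored salt and compare."""
    salt_hex, _digest = stored.split("$")
    return md5_hash_password(password, bytes.fromhex(salt_hex)) == stored

record = md5_hash_password("agent-secret")   # hypothetical credential
print(md5_verify("agent-secret", record))    # → True
print(md5_verify("wrong-guess", record))     # → False
```

Salting ensures two agents with the same password do not share a database record, which is the minimum needed for the confidentiality property the abstract claims.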
A Smart Fuzzing Approach for Integer Overflow Detection (ITIIIndustries)
Fuzzing is one of the most commonly used methods to detect software vulnerabilities, a major cause of information security incidents. Although it has the advantages of simple design and a low false-report rate, its efficiency is usually poor. In this paper we present a smart fuzzing approach for integer overflow detection and a tool, SwordFuzzer, which implements this approach. Unlike standard fuzzing techniques, which randomly change parts of the input file with no information about the underlying syntactic structure of the file, SwordFuzzer uses online dynamic taint analysis to identify which bytes in the input file are used in security-sensitive operations and then focuses on mutating those bytes. Thus, the generated inputs are more likely to trigger potential vulnerabilities. We evaluated SwordFuzzer with an example program and a number of real-world applications. The experimental results show that SwordFuzzer can accurately locate the key bytes of the input file and dramatically improve the effectiveness of fuzzing in detecting real-world vulnerabilities.
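The core idea of taint-guided mutation can be shown in a toy sketch. Everything here is hypothetical: the real SwordFuzzer derives the hot-byte set by tracing taint through the target binary, whereas this stub simply hard-codes a pretend length field; only the mutation strategy (touch the hot bytes, leave the rest intact) mirrors the described approach.

```python
import random

def taint_hot_bytes(sample):
    """Stand-in for online dynamic taint analysis. The real tool traces
    which input bytes flow into security-sensitive operations; here we
    pretend bytes 4-7 feed an integer length field (hypothetical)."""
    return [4, 5, 6, 7]

def mutate(data, hot, rng):
    """Mutate only the taint-identified bytes, leaving the rest intact
    so the file still parses far enough to reach the sensitive code."""
    out = bytearray(data)
    for off in hot:
        out[off] = rng.randrange(256)
    return bytes(out)

rng = random.Random(0)
seed = bytes(range(16))          # 16-byte seed input
hot = taint_hot_bytes(seed)
fuzzed = mutate(seed, hot, rng)

# Only the hot bytes may differ from the seed input.
changed = [i for i in range(len(seed)) if fuzzed[i] != seed[i]]
print(changed)
```

Concentrating the mutation budget on the few bytes that actually reach arithmetic operations is what gives the approach its efficiency advantage over blind random mutation.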
Faces in the Distorting Mirror: Revisiting Photo-based Social Authentication (FACE)
This document summarizes research on revisiting photo-based social authentication. The researchers:
1) Demonstrate a new attack on social authentication that matches photos from challenges to a collection of the victim's photos, which is more effective than face recognition attacks.
2) Conduct a user study showing people can identify friends in photos with unrecognizable faces over 99% of the time, whereas software fails.
3) Design a new social authentication system that selects "medium" photos software cannot recognize but people can, and transforms photos via overlays and perspective changes to block matching attacks while retaining human usability, passing 94.38% of challenges in a preliminary study.
IRJET- 3 Juncture based Issuer Driven Pull Out System using Distributed Servers (IRJET Journal)
This document discusses network security visualization and proposes a classification system for network security visualization systems. It begins by introducing the importance of visualizing network security data due to the large quantities of data produced. It then reviews existing network security visualization systems and outlines key aspects they monitor like host/server monitoring, port activity, and intrusion detection. The document proposes a taxonomy to classify network security visualization systems based on their data sources and techniques. It concludes by stating papers were selected for review based on their relevance to network security, novelty of techniques, and inclusion of evaluations.
Security against Web Application Attacks Using Ontology Based Intrusion Detec... (IRJET Journal)
The document presents a proposed ontology-based intrusion detection system to provide security against web application attacks. The system aims to address limitations of existing signature-based intrusion detection systems, such as high false positive and negative rates. The proposed system uses an ontology created with Protege to model web application attacks, vulnerabilities, threats and security controls. Rules are also defined to allow the system to predict and classify attacks based on the ontology. The system architecture includes components for system analysis, interface, rule engine, ontology generation and a knowledge base. The system is evaluated on its ability to detect common web attacks like SQL injection, cross-site scripting and buffer overflow attacks.
Mitigating Privilege-Escalation Attacks on Android Report (Vinoth Kanna)
This document summarizes previous work on mitigating privilege-escalation attacks on Android. It discusses how Android's open framework allows applications to potentially gain unauthorized access to data. It reviews common privilege-escalation attacks like confused deputy attacks and inter-app collusion. The document also summarizes existing security extensions that aim to prevent these attacks, but notes limitations in addressing confused deputy and collusion attacks specifically. It proposes extending reference monitoring at the kernel level in addition to the middleware to better prevent these types of attacks.
Testing an Android Implementation of the Social Engineering Protection Traini... (Marcel Teixeira)
This document describes testing an Android implementation of the Social Engineering Protection Training Tool (SEPTT) which is based on the Social Engineering Attack Detection Model version 2 (SEADMv2). The SEADMv2 was implemented in an Android application to test if it reduces the probability of users falling victim to social engineering attacks. 20 subjects tested the Android app and their responses to social engineering scenarios were analyzed. The results showed the Android implementation of SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks in general as well as specific attack types like malicious, bidirectional, and indirect attacks. However, it did not significantly reduce victims of unidirectional attacks or harmless scenarios.
A Survey of Online Failure Prediction Methods (unyil96)
The document surveys online failure prediction methods. It defines key terms like faults, errors, failures and symptoms. A taxonomy of online failure prediction approaches is developed to structure the wide spectrum of methods. Almost 50 online failure prediction techniques are surveyed and their quality is evaluated using established metrics.
IRJET- Attack Detection Strategies in Wireless Sensor Network (IRJET Journal)
The document discusses various attack detection strategies in wireless sensor networks, including centralized techniques like straightforward detection, set operations, and cluster-based approaches as well as distributed techniques like content-based and attribute-based filtering. It compares the techniques on factors like the type of attack detected and their merits and demerits. The conclusion calls for improved validation mechanisms to authenticate users and reduce deceptions like clone attacks on social media networks.
WEB ATTACK PREDICTION USING STEPWISE CONDITIONAL PARAMETER TUNING IN MACHINE ... (IJCNCJournal)
There is rapid growth in internet and website usage. A wide variety of devices are used to access websites, such as mobile phones, tablets, laptops, and personal computers. Attackers are finding more and more vulnerabilities on websites that they can exploit for malicious purposes. A web application attack occurs when cyber criminals gain access to unauthorized areas. Typically, attackers look for vulnerabilities in web applications at the application layer; SQL injection and cross-site scripting attacks are used to access web applications and obtain sensitive data. A key objective of this work is to develop new features and investigate how automatic tuning of machine learning techniques can improve the performance of web attack detection, using the HTTP CSIC datasets to block and detect attacks. The proposed model applies stepwise conditional parameter tuning to machine learning algorithms: parameters are chosen and tuned dynamically and automatically based on the better outcome. This work also compares the proposed model's performance on two datasets.
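The tuning loop the abstract describes can be sketched as a greedy coordinate search: try one parameter's candidate values at a time and keep a change only when the score improves. The grid, parameter names, and the toy scoring function below are illustrative assumptions standing in for cross-validated detector accuracy on the CSIC data; the authors' exact algorithm may differ.

```python
def stepwise_tune(score, grid, start):
    """Tune one hyper-parameter at a time, accepting a new value only
    when it improves the score, and repeat until a full sweep over all
    parameters makes no further change (greedy coordinate search)."""
    params = dict(start)
    best = score(params)
    improved = True
    while improved:
        improved = False
        for name, values in grid.items():
            for v in values:
                trial = dict(params, **{name: v})
                s = score(trial)
                if s > best:
                    best, params, improved = s, trial, True
    return params, best

# Toy stand-in for detector accuracy: peaks at depth=8, estimators=200.
def toy_score(p):
    return -((p["depth"] - 8) ** 2) - ((p["estimators"] - 200) ** 2) / 100

grid = {"depth": [2, 4, 8, 16], "estimators": [50, 100, 200, 400]}
best_params, best = stepwise_tune(toy_score, grid,
                                  {"depth": 2, "estimators": 50})
print(best_params)  # → {'depth': 8, 'estimators': 200}
```

Compared with an exhaustive grid search, this conditional sweep evaluates far fewer configurations while still converging on the best setting for well-behaved score surfaces.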
Insights of Effectivity Analysis of Learning-based Approaches towards Softwar... (IJECEIAES)
Software defect prediction is one of the essential operations for mitigating risk-management issues in software development and is known to contribute to enhancing software quality. Various methodologies have evolved to address this issue, with learning-based methodologies being the most dominant contributors. The problem identified is that many queries associated with the practical viability of adopting such learning-based approaches in software quality management remain unsolved. The approach discussed in this paper contributes to mitigating this challenge by introducing a simplified, compact, and crisp analysis of the effectiveness of learning-based schemes. The paper presents its major findings on the effectivity of machine learning, deep learning, hybrid, and other miscellaneous approaches deployed for fault prediction, followed by a highlight of research trends. The major findings infer that feature selection, data imbalance, interpretability, and inadequate involvement of context are the prime gaps in existing methods. The paper also identifies the research gaps and the essential learning outcomes of the present review.
1) The document discusses a proposed Vulnerability Management System (VMS) to identify and manage software vulnerabilities.
2) It provides an overview of vulnerability management and discusses related work that has been done in vulnerability databases and tracking systems.
3) The proposed VMS would use morphological inspection and static analysis to assess vulnerabilities, store information in a database, and rank vulnerabilities based on severity. It would consist of a vulnerability scanner, process control platform, and data storage.
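The ranking side of such a VMS is essentially a severity-ordered store. A minimal sketch follows; the class and CVE entries are illustrative, not taken from the document.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    component: str
    cvss: float          # severity score, 0.0-10.0

class VulnStore:
    """Minimal sketch of the VMS data-storage/ranking idea: record
    findings and list them ordered by severity."""
    def __init__(self):
        self._items = []

    def add(self, vuln):
        self._items.append(vuln)

    def ranked(self):
        # Highest severity first, as a triage queue.
        return sorted(self._items, key=lambda v: v.cvss, reverse=True)

store = VulnStore()
store.add(Vulnerability("CVE-2021-44228", "log4j-core", 10.0))
store.add(Vulnerability("CVE-2019-0708", "rdp-service", 9.8))
store.add(Vulnerability("CVE-2020-0001", "legacy-ui", 4.3))   # illustrative entry
top = store.ranked()[0]
```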
Intelligence on the Intractable Problem of Software SecurityTyler Shields
More than half of all software failed to meet an acceptable security level and 8 out of 10 web applications failed to comply with OWASP Top 10. Cross-site scripting was the most prevalent vulnerability across all applications. Third-party applications were found to have the lowest security quality, though developers repaired vulnerabilities quickly. Suppliers of cloud/web applications were most frequently subjected to third-party risk assessments. No single testing method was adequate by itself, and financial industry application security did not match business criticality.
An Empirical Study on the Security Measurements of Websites of Jordanian Publ...CSCJournals
Most of the Jordanian universities’ inquiries systems, i.e. educational, financial, administrative, and research systems are accessible through their campus networks. As such, they are vulnerable to security breaches that may compromise confidential information and expose the universities to losses and other risks. At Jordanian universities, security is critical to the physical network, computer operating systems, and application programs and each area has its own set of security issues and risks. This paper presents a comparative study on the security systems at the Jordanian universities from the viewpoint of prevention and intrusion detection. Robustness testing techniques are used to assess the security and robustness of the universities’ online services. In this paper, the analysis concentrates on the distribution of vulnerability categories and identifies the mistakes that lead to a severe type of vulnerability. The distribution of vulnerabilities can be used to avoid security flaws and mistakes.
Computer Worms Based on Monitoring Replication and Damage: Experiment and Eva...IOSRjournaljce
This document describes experiments conducted to evaluate a proposed worm detection system (WDS) and its ability to detect computer worms. The experiments involved networking three machines and transferring files between them that may contain worms. Five known worms that cause specific types of damage were tested to evaluate if the WDS could detect the worms through their damage or replication behavior. Additional experiments tested the WDS's ability to detect unknown worms and known worms through signature matching. The results showed that the WDS successfully detected the worms and met all evaluation criteria.
Research Article On Web Application SecuritySaadSaif6
This is an entirely hand-written research article on
Web Application Security
(Improving Critical Web-based Applications Quality Through In-depth Security Analysis).
This research article took a month of hard work to produce. It is suitable for use in a research paper, for presenting in class, and for submission.
10 Year Impact Award Presentation - Duplicate Bug Reports Considered Harmful ...Nicolas Bettenburg
This document describes a new automated method called SEVERIS that assists NASA test engineers in assigning severity levels to defect reports. SEVERIS uses text mining and machine learning techniques on NASA's Project and Issue Tracking System (PITS) database to predict issue severities. A case study found that SEVERIS accurately predicts severities and provides probability estimates, helping guide decision making during the severity assessment process.
Routine Detection Of Web Application Defence FlawsIJTET Journal
Abstract— Detecting security vulnerabilities in ASP.NET websites and web applications is a complex process: most of the code was written by somebody else, and there is no documentation to determine the purpose of the source code. Source-code defects are a characteristic origin of major web application vulnerabilities, and the typical software faults behind them vary across programming languages and their ability to prevent security vulnerabilities. ASP.NET, part of the .NET framework, separates the HTML from the programming code in two files: an aspx file and a code file in a compiled language (Visual Basic VB, C sharp C#) or JavaScript. Visual Basic and C# are the most common languages used with ASP.NET files, and these two compiled languages, together with aspx files, form the basis of our proposed algorithm. A hacker can inject malicious input or a script that can destroy the database or steal website files. The fault detection process is performed with a scanning tool that inspects the three file types (aspx, VB, and C#) and identifies the software faults. The fault recovery process then uses a prepared replacement-statement technique to detect the vulnerabilities and recover from them with high efficiency, provides suggestions, and generates a report, helping to improve the overall security of the system.
Vulnerability Analysis of 802.11 Authentications and Encryption Protocols: CV...AM Publications
This paper analyses the vulnerability to known attacks of WLAN cipher suites, authentication mechanisms, and credentials using the Common Vulnerability Scoring System (CVSS).
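For reference, the CVSS v3.1 base score for scope-unchanged vulnerabilities can be computed from the metric weights in the public specification. The sketch below follows that spec; note that `math.ceil` approximates the spec's Roundup function and may differ in rare floating-point edge cases.

```python
import math

# CVSS v3.1 base-metric weights (scope unchanged), from the public spec.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # attack vector
AC  = {"L": 0.77, "H": 0.44}                         # attack complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # privileges required
UI  = {"N": 0.85, "R": 0.62}                         # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, scope-unchanged case only."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # "Round up" to one decimal place, per the spec.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> a critical 9.8
score = base_score("N", "L", "N", "N", "H", "H", "H")
```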
A Comparative Study between Vulnerability Assessment and Penetration TestingYogeshIJTSRD
The Internet has drastically changed in the past decade. It now carries more business than before, and there has therefore been an increase in Advanced Persistent Threat (APT) groups and adversaries. Despite all the advancement in technology and innovation, web application security is still a challenge for most organizations all over the world, because APT groups and threat actors use different Tactics, Techniques and Procedures (TTPs) each time they exploit an organization. There are many techniques to mitigate such attacks, such as defensive coding, hardening the system firewall, and implementing IDS and IPS using SIEM tools. The solution includes monitoring different logs and events and regular assessment of the organization's network, known as vulnerability assessment, which is a generalized, sequenced review of a security system. The other approach is penetration testing, popularly known as ethical hacking or red-team assessment, in which the testers pose as real hackers and try to penetrate the company's network to check whether it is really secure. In this paper we compare these two methods and techniques and decide at the end which of the two is superior and why. Sharique Raza | Feon Jaison "A Comparative Study between Vulnerability Assessment and Penetration Testing" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-3, April 2021, URL: https://www.ijtsrd.com/papers/ijtsrd41145.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/41145/a-comparative-study-between-vulnerability-assessment-and-penetration-testing/sharique-raza
Optimised malware detection in digital forensicsIJNSA Journal
On the Internet, malware is one of the most serious threats to system security. Most complex issues and problems on any systems are caused by malware and spam. Networks and systems can be accessed and compromised by malware known as botnets, which compromise other systems through a coordinated attack. Such malware uses anti-forensic techniques to avoid detection and investigation. To protect systems from the malicious activity of this malware, a new framework is required that aims to develop an optimised technique for malware detection. Hence, this paper demonstrates new approaches to perform malware analysis in forensic investigations and discusses how such a framework may be developed.
Building a Distributed Secure System on Multi-Agent Platform Depending on the...CSCJournals
Today, applications in mobile multi-agent systems require a high degree of confidence that code running inside the system will not be malicious, and any malicious agents must be identified and contained. Since the inception of mobile agents, intrusion has been addressed using a multitude of techniques, but many of these implementations have only addressed concerns from the position of either the platform or the agents; very few approaches have undertaken the problem of mobile agent security from both perspectives simultaneously. Furthermore, no middleware exists to facilitate provisioning of the required security qualities of mobile agent software while extensively focusing on easing the software development burden. The aim is to build a distributed secure system using multi-agents by applying the principles of software engineering. The objective of this paper is to introduce multi-agent systems that enhance security rules through access rights, building a distributed secure system that integrates the principles of the software engineering system life cycle and satisfies the security access rights for both platform and agents, improving the three agent characteristics of adaptivity, mobility, and flexibility. The project is based on PHP and MySQL (database) and can be presented as a website. The implementation and tests were carried out on both Linux and Windows platforms, including Linux Red Hat 8, Linux Ubuntu 6.06 LTS, and Microsoft Windows XP Professional; since PHP and MySQL are available in almost all operating systems, the result can be tested on any platform where a PHP and MySQL configuration is available. PHP5 and the MySQL database software are used to build a secure website, multiple security and authentication techniques are employed by the multi-agent system, and the database is secured by hashing with MD5.
The system also satisfies the characteristics required for security: confidentiality (protection from disclosure to unauthorized persons), integrity (maintaining data consistency), and authentication (assurance of the identity of the person or originator of data).
A Smart Fuzzing Approach for Integer Overflow DetectionITIIIndustries
Fuzzing is one of the most commonly used methods to detect software vulnerabilities, a major cause of information security incidents. Although it has the advantages of a simple design and a low false-report rate, its efficiency is usually poor. In this paper we present a smart fuzzing approach for integer overflow detection and a tool, SwordFuzzer, which implements this approach. Unlike standard fuzzing techniques, which randomly change parts of the input file with no information about the underlying syntactic structure of the file, SwordFuzzer uses online dynamic taint analysis to identify which bytes in the input file are used in security-sensitive operations and then focuses on mutating those bytes. Thus, the generated inputs are more likely to trigger potential vulnerabilities. We evaluated SwordFuzzer with an example program and a number of real-world applications. The experimental results show that SwordFuzzer can accurately locate the key bytes of the input file and dramatically improve the effectiveness of fuzzing in detecting real-world vulnerabilities.
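The core idea, mutating only taint-identified bytes rather than the whole file, can be illustrated with a small sketch. The taint-analysis step itself is assumed here: `hot_offsets` stands in for its output, and the sample input and offsets are hypothetical.

```python
import random

def taint_guided_mutate(data: bytes, hot_offsets, rng):
    """Sketch of the taint-guided idea: mutate only the bytes that taint
    analysis marked as reaching security-sensitive operations."""
    buf = bytearray(data)
    for off in hot_offsets:
        buf[off] = rng.randrange(256)   # random byte at a tainted position
    return bytes(buf)

rng = random.Random(0)                  # fixed seed for reproducibility
seed_input = b"RIFF\x10\x00\x00\x00WAVE"
hot = [4, 5, 6, 7]                      # e.g. a length field feeding an allocation
mutated = taint_guided_mutate(seed_input, hot, rng)
```

Untainted bytes are left untouched, so each fuzzing iteration spends its budget only on input positions that can actually reach a sensitive operation.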
Faces in the Distorting Mirror: Revisiting Photo-based Social AuthenticationFACE
This document summarizes research on revisiting photo-based social authentication. The researchers:
1) Demonstrate a new attack on social authentication that matches photos from challenges to a collection of the victim's photos, which is more effective than face recognition attacks.
2) Conduct a user study showing people can identify friends in photos with unrecognizable faces over 99% of the time, whereas software fails.
3) Design a new social authentication system that selects "medium" photos software cannot recognize but people can, and transforms photos via overlays and perspective changes to block matching attacks while retaining human usability, passing 94.38% of challenges in a preliminary study.
IRJET- 3 Juncture based Issuer Driven Pull Out System using Distributed ServersIRJET Journal
This document discusses network security visualization and proposes a classification system for network security visualization systems. It begins by introducing the importance of visualizing network security data due to the large quantities of data produced. It then reviews existing network security visualization systems and outlines key aspects they monitor like host/server monitoring, port activity, and intrusion detection. The document proposes a taxonomy to classify network security visualization systems based on their data sources and techniques. It concludes by stating papers were selected for review based on their relevance to network security, novelty of techniques, and inclusion of evaluations.
Security against Web Application Attacks Using Ontology Based Intrusion Detec...IRJET Journal
The document presents a proposed ontology-based intrusion detection system to provide security against web application attacks. The system aims to address limitations of existing signature-based intrusion detection systems, such as high false positive and negative rates. The proposed system uses an ontology created with Protege to model web application attacks, vulnerabilities, threats and security controls. Rules are also defined to allow the system to predict and classify attacks based on the ontology. The system architecture includes components for system analysis, interface, rule engine, ontology generation and a knowledge base. The system is evaluated on its ability to detect common web attacks like SQL injection, cross-site scripting and buffer overflow attacks.
Mitigating Privilege-Escalation Attacks on Android ReportVinoth Kanna
This document summarizes previous work on mitigating privilege-escalation attacks on Android. It discusses how Android's open framework allows applications to potentially gain unauthorized access to data. It reviews common privilege-escalation attacks like confused deputy attacks and inter-app collusion. The document also summarizes existing security extensions that aim to prevent these attacks, but notes limitations in addressing confused deputy and collusion attacks specifically. It proposes extending reference monitoring at the kernel level in addition to the middleware to better prevent these types of attacks.
Testing an Android Implementation of the Social Engineering Protection Traini...Marcel Teixeira
This document describes testing an Android implementation of the Social Engineering Protection Training Tool (SEPTT) which is based on the Social Engineering Attack Detection Model version 2 (SEADMv2). The SEADMv2 was implemented in an Android application to test if it reduces the probability of users falling victim to social engineering attacks. 20 subjects tested the Android app and their responses to social engineering scenarios were analyzed. The results showed the Android implementation of SEADMv2 significantly reduced the number of subjects that fell victim to social engineering attacks in general as well as specific attack types like malicious, bidirectional, and indirect attacks. However, it did not significantly reduce victims of unidirectional attacks or harmless scenarios.
A survey of online failure prediction methodsunyil96
The document surveys online failure prediction methods. It defines key terms like faults, errors, failures and symptoms. A taxonomy of online failure prediction approaches is developed to structure the wide spectrum of methods. Almost 50 online failure prediction techniques are surveyed and their quality is evaluated using established metrics.
IRJET- Attack Detection Strategies in Wireless Sensor NetworkIRJET Journal
The document discusses various attack detection strategies in wireless sensor networks, including centralized techniques like straightforward detection, set operations, and cluster-based approaches as well as distributed techniques like content-based and attribute-based filtering. It compares the techniques on factors like the type of attack detected and their merits and demerits. The conclusion calls for improved validation mechanisms to authenticate users and reduce deceptions like clone attacks on social media networks.
Using Fuzzy Clustering and Software Metrics to Predict Faults in large Indust...IOSR Journals
This document describes a study that uses fuzzy clustering and software metrics to predict faults in large industrial software systems. The study uses fuzzy c-means clustering to group software components into faulty and fault-free clusters based on various software metrics. The study applies this method to the open-source JEdit software project, calculating metrics for 274 classes and identifying faults using repository data. The results show 88.49% accuracy in predicting faulty classes, demonstrating that fuzzy clustering can be an effective technique for fault prediction in large software systems.
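Fuzzy c-means, the clustering algorithm the study relies on, can be shown in a tiny one-dimensional form. Two clusters mirror the faulty/fault-free split; real metric data is multivariate, and the values below are illustrative.

```python
def fuzzy_cmeans_1d(xs, centers, m=2.0, iters=50):
    """Tiny 1-D fuzzy c-means sketch: alternate membership and
    center updates until the centers settle."""
    for _ in range(iters):
        # Membership of each point in each cluster (standard FCM update).
        u = []
        for x in xs:
            d = [max(abs(x - c), 1e-12) for c in centers]
            row = [1.0 / sum((d[i] / d[k]) ** (2 / (m - 1))
                             for k in range(len(centers)))
                   for i in range(len(centers))]
            u.append(row)
        # Update centers as membership-weighted means.
        centers = [
            sum(u[j][i] ** m * xs[j] for j in range(len(xs)))
            / sum(u[j][i] ** m for j in range(len(xs)))
            for i in range(len(centers))
        ]
    return centers

# Toy "metric" values: low values ~ fault-free, high values ~ faulty.
metrics = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
c_low, c_high = fuzzy_cmeans_1d(metrics, centers=[min(metrics), max(metrics)])
```

Each component's membership degree in the "faulty" cluster, rather than a hard label, is what such a study would then threshold or rank.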
This document discusses using Python to build a web application vulnerability scanner called SCANSCX. SCANSCX uses Python tools like AST, CFG, Flask and Django to analyze source code and detect vulnerabilities like command injection and cross-site scripting (XSS). It parses web applications into abstract syntax trees and control flow graphs to identify vulnerabilities. The scanner was tested on purposefully vulnerable applications and was able to correctly detect the vulnerabilities. SCANSCX can scan applications built with both Flask and Django frameworks.
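An AST-based check of the kind such a scanner performs can be sketched with Python's standard `ast` module. This simplistic heuristic, not SCANSCX's actual logic, flags sink calls whose arguments are not plain string constants; the sink list is illustrative.

```python
import ast

SINKS = {("os", "system"), ("subprocess", "call")}   # illustrative sink list

def find_command_injection(source):
    """Flag sink calls whose argument is not a plain string constant
    (so it may carry user-controlled input)."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            base = node.func.value
            if isinstance(base, ast.Name) and (base.id, node.func.attr) in SINKS:
                if not all(isinstance(a, ast.Constant) for a in node.args):
                    findings.append((node.lineno, f"{base.id}.{node.func.attr}"))
    return findings

code = '''
import os
os.system("ls -l")          # constant argument: fine
os.system("ping " + host)   # concatenation: flagged
'''
hits = find_command_injection(code)
```

A real scanner layers data-flow (CFG) analysis on top of this so that only values actually reachable from request parameters are flagged.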
A simplified predictive framework for cost evaluation to fault assessment usi...IJECEIAES
Software engineering is an integral part of any software development scheme and frequently encounters bugs, errors, and faults. Predictive evaluation of software faults contributes to mitigating this challenge to a large extent; however, no benchmarked framework has been reported for it yet. Therefore, this paper introduces a computational framework of the cost evaluation method to facilitate a better form of predictive assessment of software faults. Based on lines of code, the proposed scheme deploys a machine-learning approach to perform predictive analysis of faults. The proposed scheme presents an analytical framework of the correlation-based cost model integrated with multiple standard machine learning (ML) models, e.g., linear regression, support vector regression, and artificial neural networks (ANN). These learning models are executed and trained to predict software faults with higher accuracy. The study assesses the outcomes based on error-based performance metrics in detail to determine how well each learning model performs and how accurately it learns, and also examines the factors contributing to the training loss of the neural networks. The validation result demonstrates that, compared to logistic regression and support vector regression, the neural network achieves a significantly lower error score for software fault prediction.
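The simplest of the models mentioned, linear regression on lines of code, reduces to ordinary least squares. A self-contained sketch with illustrative data (not the paper's dataset):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: faults ~ a + b * LOC."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Illustrative module sizes (LOC) and observed fault counts.
loc    = [100, 250, 400, 800, 1200]
faults = [1, 2, 4, 9, 13]
a, b = fit_line(loc, faults)
predicted = a + b * 600       # expected fault count for a 600-LOC module
```

Error-based metrics such as mean squared error on a held-out set would then compare this baseline against the SVR and ANN models.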
This document describes a proposed vulnerability management system (VMS) that aims to automate the process of scanning software applications to identify vulnerabilities. The proposed system uses a hybrid algorithm approach that incorporates features from existing vulnerability detection tools and algorithms. The algorithm involves five main phases: inspection, scanning, attack detection, analysis, and reporting. The algorithm is intended to increase the accuracy of vulnerability detection compared to existing systems. The proposed VMS system and hybrid algorithm were tested using various vulnerability scanning tools on virtual machines, and results demonstrated that the VMS could automate the vulnerability assessment process and generate reports on detected vulnerabilities with severity levels. The main limitation is that scans using the VMS may take more time than some existing tools.
A predictive framework for cyber security analytics using attack graphsIJCNCJournal
Security metrics serve as a powerful tool for organizations to understand the effectiveness of protecting computer networks. However, the majority of these measurement techniques do not adequately help corporations make informed risk-management decisions. In this paper we present a stochastic security framework for obtaining quantitative measures of security by taking into account the dynamic attributes associated with vulnerabilities that can change over time. Our model is novel, as existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect the overall network security based on how the vulnerabilities are interconnected and leveraged to compromise the system. In order to have a more realistic representation of how the security state of the network varies over time, a non-homogeneous model is developed which incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
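The non-homogeneous Markov idea, transition probabilities that change with vulnerability age, can be illustrated with a two-state sketch. The matrix and the age covariate below are hypothetical stand-ins, not Frei's estimated values.

```python
def step(dist, matrix):
    """One transition of a discrete Markov chain: new_dist = dist * matrix."""
    n = len(matrix)
    return [sum(dist[i] * matrix[i][j] for i in range(n)) for j in range(n)]

def transition_matrix(day):
    """Hypothetical non-homogeneous matrix over states (secure, exploited):
    the chance of exploitation grows with vulnerability age."""
    p_exploit = min(0.01 * day, 0.5)    # illustrative age-dependent covariate
    return [
        [1 - p_exploit, p_exploit],     # from 'secure'
        [0.0, 1.0],                     # 'exploited' is absorbing here
    ]

dist = [1.0, 0.0]                        # start fully in the secure state
for day in range(1, 31):                 # a different matrix each day
    dist = step(dist, transition_matrix(day))
p_compromised = dist[1]
```

Because the matrix depends on `day`, the chain is non-homogeneous: the same network is measurably riskier a month after disclosure than on day one.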
In the software development life cycle (SDLC), testing is an important step to reveal and fix the vulnerabilities and flaws in the software. Testing commercial off-the-shelf applications for security has never been easy, and this is exacerbated when their source code is not accessible. Without access to source code, binary executables of such applications are employed for testing. Binary analysis is commonly used to analyze the binary executable of an application to discover vulnerabilities. Various means, such as symbolic execution, concolic execution, and taint analysis, can be used in binary analysis to help collect control-flow information, execution-path information, etc. This paper presents the basics of the symbolic execution approach and studies the common tools which utilize symbolic execution. Through the review, we identified a number of challenges associated with the symbolic values fed to the programs as well as with the performance and space consumption of the tools. Different tools approached the challenges in different ways; therefore, the strengths and weaknesses of each tool are summarized in a table to make them available to interested researchers.
With the development and rapid growth of IT infrastructure, malicious code attacks are considered the main threat to cybersecurity. Malicious JavaScript, intentionally crafted by attackers inside web pages, is an emerging security issue affecting millions of users. In the past few years, a number of machine-learning studies on detecting malicious JavaScript code attacks have demonstrated poor detection accuracy and increased performance overheads. In this paper, an effective interceptor approach based on deep learning for the detection of multivariate and novel malicious JavaScript is proposed and evaluated. A hybrid feature set based on static and dynamic analysis was used. The dataset used in this study consists of 32,000 benign web pages and 12,900 malicious pages. The experimental results show that this approach was able to detect 99.01% of new malicious code variants.
International Journal of Computer Science and Information Security (IJCSIS), ISSN 1947-5500, Pittsburgh, PA, USA
IRJET- Android Malware Detection using Machine LearningIRJET Journal
This document discusses using machine learning algorithms to detect Android malware. It aims to extract features from Android applications (APKs) and train machine learning models to classify APKs as malware or benign. The proposed approach extracts features from an APK's manifest file and decompiled code to identify permissions, URLs, API calls, and other indicators. Random forest classifiers are trained on a dataset of benign and malicious APKs to detect known malware families. The models can classify new APKs as either malware or benign, and if malware, identify the specific malware family. The approach aims to detect malware with high accuracy while reducing analysis time by processing multiple APKs in parallel.
COMPARATIVE ANALYSIS OF ANOMALY BASED WEB ATTACK DETECTION METHODSIJCI JOURNAL
In the present scenario, protecting websites from web-based attacks is a great challenge due to the bad intentions of malicious users on the Internet. Researchers are trying to find the optimum solution to prevent these web attack activities. Several techniques are available to prevent web attacks, such as firewalls, but most firewalls are not designed to protect against attacks on websites; moreover, firewalls mostly work with signature-based detection. In this paper, we analyze different anomaly-based methods for detecting web-based attacks initiated by malicious users. These methods work in a different direction from signature-based detection, which only detects web-based attacks for which a signature has previously been created. We introduce two methods based on attribute values: the Attribute Length Method (ALM) and the Attribute Character Distribution Method (ACDM). Further, we perform a mathematical analysis of three different web attacks and compare their False Accept Rate (FAR) results for both methods. The results reveal that ALM is a more efficient method than ACDM for detecting web-based attacks.
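An attribute-length anomaly check of the kind ALM describes, learning the length distribution of a request attribute and scoring deviations, can be sketched with a Chebyshev-style bound. The training values and names below are illustrative, not the paper's data.

```python
def train_length_model(values):
    """Learn mean/variance of an attribute's length from normal traffic."""
    lengths = [len(v) for v in values]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return mean, var

def anomaly_probability(mean, var, value):
    """Chebyshev bound: an upper bound on the probability of a length
    this far from the mean, so small values signal anomalies."""
    dist = abs(len(value) - mean)
    if dist == 0:
        return 1.0
    return min(1.0, var / dist ** 2)

# Illustrative normal values for a 'username' attribute.
normal = ["alice", "bob", "caroline", "dave", "er"]
mean, var = train_length_model(normal)
p_normal = anomaly_probability(mean, var, "frank")
p_attack = anomaly_probability(mean, var, "' OR 1=1 -- " + "A" * 200)
```

A low bound means the observed length is wildly improbable under the learned model, which is exactly how an over-long injection payload stands out from ordinary parameter values.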
Reliability Improvement with PSP of Web-Based Software ApplicationsCSEIJJournal
In diverse industrial and academic environments, the quality of the software has been evaluated using different analytic studies. The contribution of the present work is focused on the development of a methodology in order to improve the evaluation and analysis of the reliability of web-based software applications. The Personal Software Process (PSP) was introduced in our methodology for improving the quality of the process and the product. The Evaluation + Improvement (Ei) process is performed in our methodology to evaluate and improve the quality of the software system. We tested our methodology in a web-based software system and used statistical modeling theory for the analysis and evaluation of the reliability. The behavior of the system under ideal conditions was evaluated and compared against the operation of the system executing under real conditions. The results obtained demonstrated the effectiveness and applicability of our methodology.
SQLAS: A Tool to Detect and Prevent Attacks in PHP Web Applications (ijsptm)
Web applications have become an important part of our daily lives, and many activities rely on their functionality and security. Web application injection attacks, such as SQL injection (SQLIA), Cross-Site Scripting (XSS), and Cross-Site Request Forgery (XSRF), are major threats to the security of web applications. Most existing methods focus on detecting and preventing these vulnerabilities at run time, which requires manual monitoring effort. The main goal of our work differs in that it aims to create systems that are safe against injection attacks to begin with, allowing developers the freedom to write and execute code without having to worry about these attacks. In this paper we present SQL Attack Scanner (SQLAS), a tool which can detect and prevent SQL injection attacks in web applications. We analyzed the performance of SQLAS on various PHP web applications, and the results clearly demonstrate its effectiveness at detection and prevention. SQLAS scans web applications offline, which reduces time and manual effort by avoiding the overhead of runtime monitoring, because it focuses only on fragments that are vulnerable to attack. We used XAMPP for the client-server environment and developed a testbed in Java to evaluate SQLAS.
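An offline scan of the kind SQLAS performs can be sketched as a static pass over PHP source that flags query calls built from raw request input. The real tool's analysis is certainly more involved; the patterns and function below are assumptions for illustration.

```python
# Rough sketch of an SQLAS-style offline scan: flag PHP lines where raw
# request input flows directly into a database query call.
# The regexes are illustrative assumptions, not the tool's actual rules.
import re

TAINTED = re.compile(r"\$_(GET|POST|REQUEST|COOKIE)\b")
QUERY = re.compile(r"(mysqli?_query|->query)\s*\(", re.IGNORECASE)

def scan_php(source):
    """Return (line_number, line) pairs that look injection-prone."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        # A line is suspicious when a query call and raw input co-occur.
        if QUERY.search(line) and TAINTED.search(line):
            findings.append((n, line.strip()))
    return findings

php = '''<?php
$id = $_GET["id"];
$res = mysqli_query($conn, "SELECT * FROM users WHERE id=" . $_GET["id"]);
$safe = mysqli_query($conn, "SELECT * FROM users WHERE id=1");
'''
hits = scan_php(php)
```

Because the scan runs over source files rather than live traffic, it carries none of the runtime-monitoring overhead the abstract contrasts itself against.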
PREDICTIVE CYBER SECURITY ANALYTICS FRAMEWORK: A NONHOMOGENOUS MARKOV MODEL F... (cscpconf)
Numerous security metrics have been proposed in the past for protecting computer networks. However, we still lack effective techniques to accurately measure the predictive security risk of an enterprise while taking into account the dynamic attributes associated with vulnerabilities, which can change over time. In this paper we present a stochastic security framework for obtaining quantitative measures of security using attack graphs. Our model is novel in that existing research in attack graph analysis does not consider the temporal aspects associated with vulnerabilities, such as the availability of exploits and patches, which can affect overall network security depending on how the vulnerabilities are interconnected and leveraged to compromise the system. A better understanding of the relationship between vulnerabilities and their lifecycle events can give security practitioners a clearer picture of their state of security. To represent more realistically how the security state of the network varies over time, we develop a nonhomogeneous model that incorporates a time-dependent covariate, namely the vulnerability age. The daily transition-probability matrices are estimated using Frei's Vulnerability Lifecycle model. We also leverage the trusted CVSS metric domain to analyze how the total exploitability and impact measures evolve over a time period for a given network.
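The nonhomogeneous Markov idea can be made concrete with a toy two-state example: the daily transition matrix depends on vulnerability age, and the security-state distribution is propagated one day at a time. The matrix values below are invented for illustration; the paper estimates them from Frei's Vulnerability Lifecycle model.

```python
# Toy sketch of a nonhomogeneous Markov chain over security states.
# States: 0 = not exploited, 1 = exploited. All numbers are illustrative;
# the age-dependence here is an assumption, not the paper's estimate.

def transition_matrix(age_days):
    """Daily transition matrix; exploit probability grows with age."""
    p_exploit = min(0.9, 0.01 * age_days)
    return [[1.0 - p_exploit, p_exploit],
            [0.0, 1.0]]  # "exploited" is absorbing in this toy model

def propagate(dist, days):
    """Apply the age-dependent matrix for each day in turn (nonhomogeneous)."""
    for age in range(1, days + 1):
        m = transition_matrix(age)
        dist = [sum(dist[i] * m[i][j] for i in range(2)) for j in range(2)]
    return dist

start = [1.0, 0.0]               # initially not exploited
after_30 = propagate(start, 30)  # state distribution after 30 days
```

The key point the abstract makes is visible even in this toy: because the matrix changes with vulnerability age, risk at day 30 cannot be obtained by raising a single matrix to a power, as a homogeneous model would allow.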
Vulnerability Assessment LITERATURE REVIEW.doc (NuhuHamza)
Vulnerability assessment and penetration testing are processes used to evaluate the security of computer systems and networks. Vulnerability assessment identifies weaknesses that could be exploited by threats, while penetration testing simulates cyber attacks to detect vulnerabilities. Various tools and methods are used for assessment and testing, and both processes play an important role in improving network security. However, they also have drawbacks, such as the sheer number of vulnerabilities found and the challenges of data collection and analysis. Different models and frameworks provide guidance for conducting assessments and tests systematically.
This document analyzes the security vulnerabilities of 26 e-governance websites and web applications in Gujarat, India. It finds that the majority (61.88%) used Microsoft technologies while 38.13% used Apache. Both technologies were most vulnerable to low- and medium-severity issues. Microsoft sites showed more validation vulnerabilities, while Apache sites showed more configuration and informational vulnerabilities. High-severity flaws were usually configuration-based, medium-severity flaws validation-based, and low-severity/informational flaws informational. The study aims to improve e-governance security by analyzing technology risks and vulnerability types.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency time app where a user can search for the blood banks as
well as the registered blood donors around Mumbai. This application also provide an
opportunity for the user of this application to become a registered donor for this user have
to enroll for the donor request from the application itself. If the admin wish to make user
a registered donor, with some of the formalities with the organization it can be done.
Specialization of this application is that the user will not have to register on sign-in for
searching the blood banks and blood donors it can be just done by installing the
application to the mobile.
The purpose of making this application is to save the user’s time for searching blood of
needed blood group during the time of the emergency.
This is an android application developed in Java and XML with the connectivity of
SQLite database. This application will provide most of basic functionality required for an
emergency time application. All the details of Blood banks and Blood donors are stored
in the database i.e. SQLite.
This application allowed the user to get all the information regarding blood banks and
blood donors such as Name, Number, Address, Blood Group, rather than searching it on
the different websites and wasting the precious time. This application is effective and
user friendly.
Impartiality as per ISO /IEC 17025:2017 StandardMuhammadJazib15
This document provides basic guidelines for imparitallity requirement of ISO 17025. It defines in detial how it is met and wiudhwdih jdhsjdhwudjwkdbjwkdddddddddddkkkkkkkkkkkkkkkkkkkkkkkwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwioiiiiiiiiiiiii uwwwwwwwwwwwwwwwwhe wiqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq gbbbbbbbbbbbbb owdjjjjjjjjjjjjjjjjjjjj widhi owqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq uwdhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhwqiiiiiiiiiiiiiiiiiiiiiiiiiiiiw0pooooojjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj whhhhhhhhhhh wheeeeeeee wihieiiiiii wihe
e qqqqqqqqqqeuwiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiqw dddddddddd cccccccccccccccv s w c r
cdf cb bicbsad ishd d qwkbdwiur e wetwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww w
dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffw
uuuuhhhhhhhhhhhhhhhhhhhhhhhhe qiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii iqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc ccccccccccccccccccccccccccccccccccc bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbu uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuum
m
m mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm m i
g i dijsd sjdnsjd ndjajsdnnsa adjdnawddddddddddddd uw
Road construction is not as easy as it seems to be, it includes various steps and it starts with its designing and
structure including the traffic volume consideration. Then base layer is done by bulldozers and levelers and after
base surface coating has to be done. For giving road a smooth surface with flexibility, Asphalt concrete is used.
Asphalt requires an aggregate sub base material layer, and then a base layer to be put into first place. Asphalt road
construction is formulated to support the heavy traffic load and climatic conditions. It is 100% recyclable and
saving non renewable natural resources.
With the advancement of technology, Asphalt technology gives assurance about the good drainage system and with
skid resistance it can be used where safety is necessary such as outsidethe schools.
The largest use of Asphalt is for making asphalt concrete for road surfaces. It is widely used in airports around the
world due to the sturdiness and ability to be repaired quickly, it is widely used for runways dedicated to aircraft
landing and taking off. Asphalt is normally stored and transported at 150’C or 300’F temperature
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
OOPS_Lab_Manual - programs using C++ programming language
ANALYTIC HIERARCHY PROCESS-BASED FUZZY MEASUREMENT TO QUANTIFY VULNERABILITIES OF WEB APPLICATIONS

International Journal of Computer Networks & Communications (IJCNC) Vol.12, No.4, July 2020
DOI: 10.5121/ijcnc.2020.12407
Mohammad Shojaeshafiei¹, Letha Etzkorn¹ and Michael Anderson²
¹Department of Computer Science, The University of Alabama in Huntsville, Huntsville, USA
²Department of Civil and Environmental Engineering, The University of Alabama in Huntsville, Huntsville, USA
ABSTRACT
Much research has been conducted to detect vulnerabilities of Web Applications; however, these studies never proposed a methodology to measure the vulnerabilities either qualitatively or quantitatively. In this paper, a methodology is proposed to investigate the quantification of vulnerabilities in Web Applications. We applied the Goal Question Metrics (GQM) methodology to determine all possible security factors and sub-factors of Web Applications in the Department of Transportation (DOT) as our proof of concept. We then introduced a Multi-layered Fuzzy Logic (MFL) approach based on the security sub-factors' prioritization in the Analytic Hierarchy Process (AHP). Using AHP, we weighted each security sub-factor before the quantification process in the Fuzzy Logic to handle imprecise crisp number calculation.
KEYWORDS
Vulnerability, Web Applications, Analytic Hierarchy Process, Fuzzy Logic, Goal Question Metrics, Cybersecurity
1. INTRODUCTION
A large number of users rely on Web Applications for daily work and communications. These users can potentially be targeted by cyberattacks, either through interaction with personal computers or by operating in the large network of a susceptible corporation. In most circumstances, if the security structure of the system is not strict enough, expensive consequences follow. The total average cost of a data breach to organizations in 2019 was $3.92 million, according to research conducted by IBM [1]. Cross-site Scripting (XSS) and SQL injection are the most destructive and pervasive types of attacks that threaten organizations' secret information through web applications' vulnerabilities. According to statistics [2], Web Applications have become the number one target for vulnerability exploitation: 46% of web applications in 2019 suffered from a critical vulnerability, 78% had a medium-severity vulnerability, 30% were vulnerable to XSS, and 4 out of 5 Web Applications contained configuration errors. The lack of a comprehensive approach to quantify and understand the vulnerabilities of Web Applications has always been a significant issue. Much research has been conducted in this area, and several valuable methodologies have been proposed either to bolster the security of Web Applications or to detect potential vulnerabilities [3][4]. Unfortunately, from the perspective of Web Applications, vulnerability quantification is very intricate and requires strenuous work to obtain a numerical value due to the qualitative nature of vulnerabilities in the system (i.e., "very good", "medium" or "very bad"). To the best of our knowledge, this is the first work that attempts to
investigate Web Applications’ vulnerability quantification through the Fuzzy Logic approach [5].
We accomplish this by assigning an uneven weight to each fuzzy rule in the Fuzzy Inference System (FIS) processes. The research undertaken for this study has been driven by
three major purposes: (1) define a standard framework for Web Application security factors, (2)
quantify the vulnerabilities of Web Applications by determining the metrics for each sub-factor in
a backward manner using a Fuzzy Logic approach, and (3) apply the Analytic Hierarchy Process
(AHP) to prioritize each fuzzy rule before quantification. In order to meaningfully articulate the
vulnerability measurement processes, we have studied the Department of Transportation (DOT)
as a proof of concept using an in-depth analysis to comprehend the different aspects of Web
Applications’ vulnerabilities that are required for a meticulous security consideration. To do this,
we applied the Goal Question Metrics (GQM) methodology [6] to define and design all factors
and sub-factors of Web Applications for vulnerability quantification. The outputs of GQM are a
security factor and sub-factor graph tree that needs to be evaluated using the Fuzzy Logic
approach to calculate the appropriate metric for each successor (sub-factor) derived from a
predecessor (main factor) of the tree. The important point here is to determine a meaningful
weight for each rule using the AHP [7] for prioritizing the security sub-factors in the steps prior
to the Defuzzification process to obtain a crisp value for vulnerability. In other words, AHP helps
us to make a rational judgment about the prioritization of security sub-factors in Web
Applications. We use a pair-wise comparison [7] of acquired knowledge from the graph tree to
determine the importance of each criterion in the security framework.
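As a minimal sketch of the pair-wise comparison step, the AHP weights can be approximated with the geometric-mean method. The 3x3 matrix below, comparing three hypothetical sub-factors on Saaty's 1-9 scale, is an invented example for illustration; the paper's actual comparison matrices come from the DOT study and are not reproduced here.

```python
# Sketch of AHP priority weighting (illustrative only; the matrix values
# are hypothetical, not taken from the paper's DOT study).
import math

def ahp_weights(matrix):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix using the geometric-mean (logarithmic least squares) method."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Entry [i][j] states how much more important sub-factor i is than j.
pairwise = [
    [1.0, 3.0, 5.0],   # e.g. a "Penetration Testing" sub-factor
    [1/3, 1.0, 2.0],   # e.g. a "Logging" sub-factor
    [1/5, 1/2, 1.0],   # e.g. a "Monitoring" sub-factor
]

weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # the weights sum to 1
```

The resulting weights are the per-rule priorities that can later scale the fuzzy rules before defuzzification.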
The rest of the paper is structured as follows: Section 2 addresses the related work in Web
Applications’ vulnerability from different aspects of security. Section 3 describes the
methodology using GQM, security factors and sub-factors, AHP, and MFL. Section 4 addresses
the implementation using MFL with the comparison of vulnerability quantification with/without
uneven weights.
2. BACKGROUND
2.1. Vulnerabilities in Web Applications
A vulnerability in Web Applications is the degree to which an adversary can take control of the system, or of some level of the hosting server, due to misconfiguration or bugs in the software. There has been research by others to analyze [8], scan [9],
automate [10], predict [11], and detect [12] vulnerabilities of Web Applications either statically
or dynamically. While the state of the art emphasizes different types of predominant attacks (e.g.
XSS) to detect the vulnerabilities of Web Applications [13][14], the lack of research on
vulnerability measurement (quantitatively and qualitatively) is egregious at higher levels.
2.2. Literature Review
Felmetsger et al. [15] researched automated detection of vulnerabilities in Web Applications using dynamic analysis to infer a pattern of behavioral specifications. They filtered the learned specifications, leveraging knowledge about the typical execution paradigm of Web Applications, to pare down false positives as much as possible. They developed a tool called Waler that discerns program paths to check whether inputs violate the behavioral specifications; it derives program invariants for Web Applications to identify unknown application logic flaws.
Jovanovic et al. [16] presented an automated solution for vulnerability detection in Web Applications to overcome the obstacle of error-prone manual code review. To expose the vulnerabilities of Web Applications through static source code analysis, they applied an interprocedural
and context-sensitive data flow analysis to detect flaws in a program. They proposed a tool in an open-source implementation, Pixy, to detect SQL injection, XSS, and command injection types of vulnerabilities. For experimental validation, they discovered 15 previously unknown vulnerabilities and reconstructed 36 known vulnerabilities (with one false positive on average).
Huang et al. [17] conducted research on Web Application security assessment addressing vulnerabilities caused by poor coding practices. They designed a crawler interface to mimic the functionalities of a web browser in order to test Web Applications. To find all data entry points, they first used a reverse engineering phase. The entry points were then used in fault injection processes with the Non-Rete (NRE) algorithm to remove false negatives. They used software testing techniques in a tool called WAVES to analyze the resulting pages. The authors claim it can handle cookie poisoning, parameter tampering, buffer overflow, and hijacking.
Sonmez [18] presented a study defining qualitative security metrics for Web Applications. She provided a list of web application development criteria with 230 qualitative metrics in order to facilitate Open Web Application Security Project (OWASP) compliance analysis. The author claims that if OWASP compliance is made more discernible, it may help to qualitatively measure the attack surface of Web Applications.
Priya et al. [19] worked on the Rational Unified Process (RUP). They implemented and developed a Rational Unified WVA to assess Web Applications' vulnerabilities within a software development process framework. This is an iterative approach to developing a framework for any object-oriented or web-oriented program; it brings about vulnerability detection, reporting, and remediation of the security weaknesses that would otherwise allow an intruder or adversary to gain unauthorized access to the system. The WVA has four main steps: discovery, audit, exploit, and evasion. These steps were performed in sequence to provide appropriate vulnerability reports in the testbed.
Philipsen [20] in 2005 provided a report on Web Application vulnerability assessment intended to discover security issues and propose mitigations. The author indicated three main reasons that make Web Applications vulnerable to attack: (1) design with a lack of security consideration at early stages, (2) reliance on other architecture components, such as a connection from the Web Application to a back-end database that grants all rights on the database, and (3) an abundance of programming languages, which makes it complex to provide an infrastructure with a standard security framework. Furthermore, the author presented several vulnerability mitigation recommendations to make it easier for organizations to comply with their security standards and policies where needed.
Luh et al. [30] proposed a model, PenQuest, to illustrate a complete view of information system attacks and their mitigation. The model provides time-enabled attacker and defender behavior drawing on NIST SP 800-53, and maps attack patterns to counterpart models through data-centric procedures to present a gamified approach: a game based on a multi-tiered attacker and defender model that is useful for personnel training and for risk assessment in IT infrastructure. The model is composed of Service, Information, Event, and PenQuest Core layers in the system; the relationships between these layers create concrete class definitions to outline the game theory. It introduces a hybrid model that can be categorized according to several game-theoretic principles: it is a non-cooperative, nonzero-sum, asymmetric-strategy, dynamic, imperfect-information, Bayesian, and finite game. The authors claim that the model is very flexible and enables domain experts to enlarge the system by adding new actors and features. The model can be an appropriate means to develop the foundation for targeted-attack analysis and, especially, an anomaly detection interpretation for intrusion detection systems.
Shafee et al. [31] proposed an intrusion detection model that uses a mimic learning strategy: intrusion detection knowledge is captured by a teacher model trained on private data and then transferred to a student model, making it possible to share knowledge without sharing the original data. The model framework is composed of Intrusion Detection Agencies, Organizations, and Intrusion Detection Gateways, and provides three modules: (1) a Monitoring Module, (2) a Feature Extraction Module, and (3) a Classifier Module. These modules are completely independent of the teacher and student models. The teacher model is built by evaluating each candidate classifier and choosing the one with the best performance; this classifier is then trained on the original data set. When the teacher model finishes labeling the unlabeled data, the same process is used to produce the student model. The critical point is that, after constructing the teacher model, no private data needs to be passed to the student model. At the end of the experiment, the authors found nearly identical performance for the teacher and the student models, showing that mimic learning successfully transfers knowledge using unlabeled public data.
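The teacher-student transfer in [31] can be illustrated with a toy sketch. Everything below (the one-dimensional threshold "classifiers" and the data) is invented for demonstration and is not the authors' actual pipeline; it only shows how labels, rather than private data, cross the boundary.

```python
# Toy mimic-learning sketch: a teacher fit on private data labels public
# unlabeled data; the student is fit on those labels only.
def fit_threshold(points, labels):
    """Pick the threshold on a 1-D feature that best separates labels."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(points):
        acc = sum((x > t) == y for x, y in zip(points, labels)) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Private data never leaves the teacher's side.
private_x = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
private_y = [0, 0, 0, 1, 1, 1]
teacher_t = fit_threshold(private_x, private_y)

# The teacher labels public data; only these labels are shared.
public_x = [0.15, 0.4, 0.6, 0.85]
public_y = [x > teacher_t for x in public_x]

# The student learns from the teacher's labels alone.
student_t = fit_threshold(public_x, public_y)
print(student_t)
```

The student ends up with a decision boundary close to the teacher's, even though it never saw the private data set.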
3. METHODOLOGY AND THE PROPOSED APPROACH
3.1. Goal Question Metrics
Since the main goal of our study is vulnerability quantification in Web Applications, we first need to construct appropriate metrics to achieve the goal. A well-known methodology to define, design, and map such metrics to the goal is the Goal Question Metrics (GQM) methodology [6]. GQM handles the process of developing and maintaining metrics in a framework (here, a security framework) based on three substantial steps: (1) the conceptual level defines the desired goal(s), including the purposes of each goal for the project; (2) the operational level designs meaningful and comprehensive question(s) as an operational mechanism relating the goal(s) to the appropriate metrics; and (3) the quantitative level calculates metrics based on answers to the related questions at each level in order to map the metrics to the goal(s).
As shown in Figure 1, a goal may contribute to generating one or more questions, and these questions contribute metrics as data answering the generated questions [21].
[Figure: one Goal linked to Questions 1-4, each mapped to Metrics 1-6]
Figure 1. Basic structure of the Goal Question Metrics
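The goal-question-metric mapping of Figure 1 can be sketched as a small tree. The question texts and metric identifiers below are hypothetical placeholders, not items from the paper's actual DOT questionnaire.

```python
# Minimal GQM tree sketch: one goal, its questions, and the metrics that
# answer each question (identifiers are hypothetical placeholders).
gqm = {
    "goal": "Quantify Web Application vulnerability",
    "questions": {
        "Q1": {"text": "How often are authorization policies reviewed?",
               "metrics": ["M1", "M2"]},
        "Q2": {"text": "Is two-factor authentication enforced?",
               "metrics": ["M2", "M3"]},
    },
}

def metrics_for_goal(tree):
    """Collect the distinct metrics that feed the goal, bottom-up."""
    found = []
    for q in tree["questions"].values():
        for m in q["metrics"]:
            if m not in found:
                found.append(m)
    return found

print(metrics_for_goal(gqm))  # → ['M1', 'M2', 'M3']
```

Note that a metric (here M2) may serve more than one question, mirroring how the paper's sub-factors can appear under more than one parent factor.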
One of the primary advantages of applying GQM in our research is that it is a de facto standard used by software engineers to determine software quality metrics [22][23][24]. Moreover, GQM is a multipurpose methodology that not only decreases project cost, risk, and cycle time, but also increases project quality (toward achieving the goals), because it addresses only those attributes that are directly related to the project objectives. In our case especially, we can rely on it as a suitable procedure to map elicited cybersecurity requirements to the security goals [25].
In this paper, we consider the security requirements of Web Applications in DOT to provide a fundamental platform on which to construct our GQM framework based on essential security factors and sub-factors. The analysis of security factors and sub-factors leads us to generate a graph tree comprising the root (Web Applications) and the nodes (sub-factors), which lays the groundwork for further security analysis such as detecting and eliminating unnecessary sub-factors that overlap conceptually. In Section 3.2, before moving forward to the quantification processes, we express how to articulate security factors and sub-factors to design security questions following the GQM procedure.
3.2. Security Factors and Sub-factors
Extracting security factors and capturing them in an organized theme is an onerous task. Determining the security factors of Web Applications is not intricate in itself; what makes it cumbersome is (1) organizing the factors so that they cover the multiple aspects of Web Application security that are crucial components of an organization's security, and (2) designing components that directly reflect the quality of Web Application security in the system. These security factors and sub-factors should represent the essence of security quality components such as functionality, reliability, usability, efficiency, portability, and maintainability. Therefore, each factor is considered individually to ensure that it is sufficiently broad to cover the security components needed for an optimal design. To that end, we researched a comprehensive list of security factors and sub-factors based on their role in Web Applications. As shown in Figure 2, we generated a tree graph that represents all security factors and sub-factors used to quantify the vulnerabilities of Web Applications in DOT.
Figure 2. Security factors and sub-factors of Web Applications in DOT
We considered all aspects of security issues in DOT, and after a thorough analysis we extracted the most prominent components that influence, either directly or indirectly, the vulnerability of DOT-related Web Applications. We attempted to eliminate sub-factors that overlap other sub-factors; however, in some cases accepting a degree of redundancy is inevitable. For instance, considering the sub-factor "Penetration Testing" under both "Runtime Security" and "Authorization and Identification" is significantly important, and there is no way to remove it from one side; rather, it must be accepted as necessary on both sides.
Since the final goal of the proof of concept in this research is to obtain a numerical value for the vulnerabilities of Web Applications in DOT, we divided the whole problem into several sub-problems based on the main security factors at each level. This technique makes the measurement processes more tractable for the application of the Fuzzy Logic methodology. The sub-problems are solved in a backward manner, from the very last child node towards the parent node. When all sub-problems are solved, we apply aggregation to acquire the value of each main problem. Each main problem then enters another aggregation equation to contribute to the final result of the problem.
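The backward, leaf-to-root aggregation can be sketched as a recursive walk over the factor tree. The tree shape, leaf scores, and weights below are hypothetical stand-ins; in the paper the leaf values come from the questionnaire and the fuzzy inference step, and the weights from AHP.

```python
# Backward (leaf-to-root) weighted aggregation sketch. Each node is either
# a leaf carrying a quantified score, or an inner node whose children carry
# AHP-style weights. All numeric values here are hypothetical.
def aggregate(node):
    """Return a node's score: a leaf's score as-is, an inner node's score
    as the weighted sum of its children's aggregated scores."""
    if "score" in node:
        return node["score"]
    return sum(w * aggregate(child) for w, child in node["children"])

runtime_security = {
    "children": [
        (0.6, {"score": 0.8}),               # e.g. a quantified sub-factor
        (0.4, {"children": [                 # e.g. a nested sub-factor
            (0.5, {"score": 0.4}),
            (0.5, {"score": 0.6}),
        ]}),
    ],
}

print(aggregate(runtime_security))  # 0.6*0.8 + 0.4*(0.5*0.4 + 0.5*0.6)
```

Solving the deepest children first and propagating weighted sums upward matches the "very last child node towards the parent node" order described above.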
We chose one of the main factors (main problems) at random as an example to demonstrate the entire metric aggregation process, assuming that the metrics have already been calculated. The metric measurement processes are accomplished with the Fuzzy Logic approach based on the priorities that AHP assigns to the if-then rules in the FIS. Figure 3 depicts the characteristics of backward-manner aggregation to compute "Runtime Security" as one of the main factors of Web Applications in DOT.
Figure 3. Factors and sub-factors of Runtime Security based on GQM’s steps
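As a minimal sketch of how AHP priorities can enter the fuzzy step, a rule's firing strength can be scaled by its AHP weight before a weighted-centroid defuzzification. The membership functions, rule weights, and output centroids below are illustrative assumptions, not the paper's actual FIS.

```python
# Tiny weighted fuzzy-inference sketch: triangular memberships, if-then
# rules scaled by AHP weights, and weighted-centroid defuzzification.
# All shapes, weights, and centroids are illustrative placeholders.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def vulnerability(x, rules):
    """Defuzzify as the AHP-weighted centroid of the fired rules.
    Each rule = (ahp_weight, membership_fn, output_centroid)."""
    num = den = 0.0
    for weight, mf, centroid in rules:
        strength = weight * mf(x)        # rule firing strength, scaled
        num += strength * centroid
        den += strength
    return num / den if den else 0.0

rules = [
    (0.65, lambda x: tri(x, 0.0, 0.25, 0.5), 0.2),   # "low" input level
    (0.35, lambda x: tri(x, 0.25, 0.5, 0.75), 0.5),  # "medium" input level
]

print(round(vulnerability(0.3, rules), 3))
```

Because each rule's contribution is multiplied by its AHP weight, two rules firing with equal membership still pull the crisp output unevenly, which is the "uneven weight" behavior the FIS is meant to capture.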
As mentioned for the operational level of GQM, we are required to design question(s) to calculate the metrics at each level. Hence, we scrutinized all security aspects of Web Applications in DOT to provide a meticulously designed questionnaire that covers all potential vulnerability concerns. To ensure we avoided any risk of including irrelevant questions, we mapped each question to security standards such as NIST SP 800-53 and ISO 27001 at each level of security factors. The questionnaire is specifically designed to be answered
in a real-world environment by DOT network security experts. For each security sub-factor, we
assigned one or more questions. The number of questions for each factor depends on the
importance and role of that factor in DOT. For example, if the current operation in DOT puts
more emphasis on “Runtime Security” than “Maintenance” then there will evidently be more
questions related to the former than the latter.
The questionnaire was sent to 2000 experts at the Department of Transportation; twenty-six experts replied. Some typical questions from the questionnaire are as follows:
For the Availability quality factor:
• Does DOT make sure security mechanisms and redundancies are implemented to protect equipment from utility service outages (e.g., power failures, network disruptions)?
• What percentage of DOT's equipment has built-in security mechanisms and redundancy to protect it from utility service outages?
• Does DOT implement network redundancy in its system?
• Does DOT perform industry-standard monitoring and alerting on devices that process and store
sensitive information?
• How frequently does DOT back up its data?
For the Integrity quality factor:
• How often does your agency inspect and account for data quality errors and associated risks?
• Which of the following options best describes the Operating System (OS) update management and installation on your agency’s computers?
• How are third-party software updates (Adobe, Java, etc.) installed on your agency’s computers?
• How often does your agency verify that software suppliers adhere to industry standards for
software development life cycle security?
After further consideration, we grouped the questions by their security factors into blocks, so that the security attributes of the parent node can easily be recognized during the vulnerability measurement process. Table 1 shows examples of security questions designed for Web Applications in DOT.
Table 1. Security question examples for each factor

Security Factor                    Question
Authorization and Identification   How often does DOT control the authorization restriction policies for users/employees who interact with the company’s web applications?
Authentication                      How often does DOT require/remind regular web application users to use two-factor authentication in order to log in to the system?
Software Security                   How often does DOT have controls in place to ensure that quality standards are being met for all software development?
Runtime Security                    Is a software-based firewall (Windows, third-party, etc.) running on DOT’s computers to protect against the internal spread of a worm, virus, or hostile attack from another computer on the network?
Maintenance                         How often does DOT use a dependency tracker/map for its web applications?
As a result, we constructed a standard security framework of Web Applications, such that each
security sub-factor is directly derived from either NIST SP800-53 or ISO 27001-2013 guidelines
with minimal overlap in security measures.
3.3. Security Sub-factor Prioritization using AHP
To achieve more precise vulnerability quantification, the primary objective of this section is the prioritization of security sub-factors. AHP is an appropriate tool for evaluating variables (security sub-factors in this case) that are difficult to assign values to. Since at each level of vulnerability assessment we encounter multiple attributes of a factor, decision making among them is critical: multiple weights affect the combinatorial contribution of each sub-factor to the overall value of each factor. Thus, one solution is to apply a hierarchical methodology to cope with such a
problem. AHP applies a hierarchy procedure to divide the problem into multiple levels of
judgment procedures [7] and it is a widely used approach for decision making. In AHP the
analysis processes have to be accomplished step by step based on our subjective judgment of
each sub-factor. The main advantage of applying AHP before vulnerability quantification in
Fuzzy Logic is that it suggests whether our subjective judgment of attributes is reasonable or not.
In other words, it measures the final judgment value based on some predefined criteria; if the
value does not follow the framework rule, then the subjective judgment should be revised and
repeated until the ideal consistency is achieved. Besides detecting inconsistency during the decision-making process, AHP can also handle qualitative attribute values. The whole prioritization in AHP comprises five steps: (1) constructing the hierarchy structure of the main problem, (2) constructing the pairwise comparison matrix based on the number of security sub-factors, (3) calculating the relative weight of each sub-factor at each level, (4) calculating the total weight of each sub-factor in each row, and (5) evaluating and checking the consistency of the judgment.
If there are n sub-factors for each factor, then there are n(n-1)/2 pair-wise comparisons of sub-factors based on our predefined ratio scale. The ratio scale is reciprocal [7], and the comparison matrix C in (1) is n×n, where Cij indicates the relative importance of sub-factor i relative to sub-factor j:

C = [Cij]n×n, with Cji = 1/Cij and Cii = 1 (1)

The sum of the values in each column of the pair-wise matrix is calculated as (2) [7][26]:

Sj = Σ(i=1..n) Cij (2)

To obtain the normalized pair-wise matrix, each element in the matrix is divided by its column total, as shown in (3) [7][26]:

Xij = Cij / Σ(k=1..n) Ckj (3)
where Xij denotes the normalized pair-wise matrix. Finally, to generate the Weighted Matrix or Criteria Weights (CW) in (4), the sum of each row of the normalized matrix is divided by the number of sub-factors used as the criteria in AHP [7][26], where Wi denotes the weight of each sub-factor:

Wi = (1/n) Σ(j=1..n) Xij (4)
The final step is to determine whether the weighting process for each sub-factor is consistent. Once the final matrix (CW) is generated, we calculate the eigenvalue of the matrix with the corresponding eigenvector [26]. The consistency vector is obtained by multiplying the pair-wise matrix by the weight vector; the result is called the Weighted Sum Value (WSV). The average of (WSV/CW) gives λ, the eigenvalue. As shown in (5), the Consistency Index (CI) is calculated as follows:

Consistency Index (CI) = (λ - n)/(n - 1) (5)
Saaty [7] provided a mathematically proven table of the Consistency Index (CI) for randomly generated matrices, called the Random Index (RI). For example, the random index for a randomly generated 4×4 matrix is 0.89. Thus, to determine whether the matrix is consistent, the Consistency Ratio (CR) is calculated by dividing CI by RI. If the result is less than 0.1, the matrix is considered consistent, meaning there are no combinations of pairs that introduce uncertainty; otherwise, the entire decision making should be repeated with different subjective judgments of the attributes.
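As a sketch of steps (2)-(5), the following Python snippet computes the normalized matrix, criteria weights, and Consistency Ratio for the "Monitoring and Logging" pair-wise matrix of Table 2 (variable names are ours, not part of the original method):

```python
import numpy as np

# Pair-wise comparison matrix for "Monitoring and Logging" (Table 2)
# rows/cols: Firewall and Antivirus, XSS, Filtering Policy, Scanning and Pen Testing
C = np.array([
    [1.0,   5.0, 4.0,  3.0],
    [0.2,   1.0, 0.25, 0.333],
    [0.25,  4.0, 1.0,  2.0],
    [0.333, 3.0, 0.5,  1.0],
])
n = C.shape[0]

# (3) normalize each element by its column total
X = C / C.sum(axis=0)

# (4) criteria weights = row averages of the normalized matrix
CW = X.mean(axis=1)

# Consistency check: WSV = C @ CW, lambda = mean(WSV / CW), CI = (lambda - n)/(n - 1)
WSV = C @ CW
lam = np.mean(WSV / CW)
CI = (lam - n) / (n - 1)
RI = 0.89          # random index for a 4x4 matrix (Saaty)
CR = CI / RI

print(np.round(CW, 3))   # "Firewall and Antivirus" carries the largest weight
print(CR < 0.1)          # the judgment is reasonably consistent
```

Running this reproduces the ordering of the criteria weights in Table 3 and a Consistency Ratio well below the 0.1 threshold.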
Since the illustration and implementation of all security sub-factors are beyond the scope of this
paper we show only the implementation of “Runtime Security” as the main factor of Web
Applications that is derived from Figure 2.
As shown in Table 2, "Monitoring and Logging" has four sub-factors, "Firewall and Antivirus", "XSS", "Filtering Policy", and "Scanning and Pen Testing", which generate six pair-wise comparisons based on predefined scales obtained from analyzing each sub-factor's role and contribution to the security of Web Applications. The table also shows the normalized pair-wise matrix, obtained by dividing each element of a column by that column's SUM value. The normalized pair-wise matrix is used to generate the Criteria Weights (CW) in Table 3.
Table 2. Pair-wise comparison matrix of Monitoring and Logging (normalized values in parentheses)

                           Firewall and Antivirus   XSS            Filtering Policy   Scanning and Pen Testing
Firewall and Antivirus     1      (0.56)            5     (0.384)  4      (0.695)     3      (0.473)
XSS                        0.2    (0.112)           1     (0.076)  0.25   (0.043)     0.333  (0.052)
Filtering Policy           0.25   (0.14)            4     (0.307)  1      (0.137)     2      (0.315)
Scanning and Pen Testing   0.333  (0.186)           3     (0.23)   0.5    (0.086)     1      (0.157)
SUM                        1.783                    13             5.75               6.333

Table 3. Criteria weights of Monitoring and Logging

Firewall and Antivirus   XSS     Filtering Policy   Scanning and Pen Testing
0.528                    0.07    0.224              0.164
After calculating the Weighted Sum Value (WSV) as explained above, we measure the Consistency Ratio, CR = CI/RI = 0.06479, which is less than 0.1, showing that the matrix is reasonably consistent. We repeat all the CW calculation steps performed for the sub-factors of "Monitoring and Logging" to calculate the weights of the sub-factors of "Encryption", which comprises two security sub-factors, "HTTP/HTTPS" and "Other Encryptions". After repeating the previous steps, we obtained the CW shown in Table 4:
Table 4. Criteria weights of Encryption

Normalized pair-wise matrix   HTTP/HTTPS   Other Encryptions   Criteria Weights
HTTP/HTTPS                    0.666        1.333               0.999
Other Encryptions             0.333        0.666               0.499
When the weight calculations for all successors are complete and the consistency of the matrix is assured, bottom-up aggregation is required to prioritize them as input variables in FIS.
This procedure is addressed in Section 3.4, which shows how each sub-factor's weight contributes when mapped to linguistic variables in Fuzzy Logic.
3.4. Multi-layered Fuzzy Logic Implementation
One of the major problems in quantifying vulnerabilities of a system is that we are involved with
linguistic variables (i.e., “very good”, “medium” or “very bad”) to describe the quality of security
or vulnerability of that system. Moreover, there are no sophisticated and standard metrics to
evaluate all available security sub-factors of a system that are linked to vulnerabilities. These two
issues are, in part, inextricably interwoven in the context of system security, especially when we
are considering vulnerability quantification. For that, we apply a Multi-layered Fuzzy Logic
(MFL) to handle such intricate issues of vulnerability measurement.
Fuzzy Logic was proposed and implemented by Zadeh [5] in 1965. One of the essential
components of Fuzzy Logic is the Membership Function (MF) which provides the degree of truth
for measurement [27]. The MF’s value is always limited to the interval [0, 1]. In Fuzzy Logic,
there are four steps in measurement: (1) Fuzzification that generates a Membership value (2)
Rules that use linguistic variables in an if-then format (3) Fuzzy Inference System (FIS) that
formalizes if-then rules and maps input attributes to output space and, (4) Defuzzification that
extracts the quantified values from FIS that is called the actual crisp number [27].
In the Fuzzification process, we use the Triangular Fuzzy model for the MF [28] to evaluate the linguistic variables obtained from DOT's security experts via the questionnaire. Two important properties of Fuzzy Logic are used in the fuzzy rules during Fuzzification: (1) μA∩B(x) = min[μA(x), μB(x)] for all x ∈ X, and (2) μA∪B(x) = max[μA(x), μB(x)] for all x ∈ X. In the Defuzzification process, to generate the crisp number we use the Center of Area (COA) method, which computes the area the MF covers within the range of the output variable, as shown in (6), where x is the value of the linguistic variable, xmin and xmax indicate the range of the linguistic variable, and X is the universe of discourse [29]:

COA = ∫(xmin..xmax) x μ(x) dx / ∫(xmin..xmax) μ(x) dx (6)
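As a minimal numerical sketch of triangular Fuzzification and COA Defuzzification (the MF parameters here are illustrative, not the ones derived from the DOT questionnaire):

```python
import numpy as np

def tri_mf(x, a, b, c):
    # Triangular membership function with feet a, c and peak b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Discretized universe of discourse for a security score on [0, 10]
x = np.linspace(0.0, 10.0, 1001)

# Example fuzzy output: a symmetric "medium" subset clipped at firing degree 0.6
mu = np.minimum(tri_mf(x, 2.0, 5.0, 8.0), 0.6)

# Center of Area (COA) defuzzification, eq. (6), approximated on the grid
crisp = float(np.sum(x * mu) / np.sum(mu))
print(round(crisp, 2))  # symmetric subset, so the crisp value sits at its peak: 5.0
```

The same COA computation is what turns the aggregated fuzzy output of each FIS into the crisp security score used in the next layer.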
While many Fuzzy Logic applications in the related literature simply assign every fuzzy rule the same weight of 1, we emphasize treating each sub-factor as a distinct influencer of Web Application security. As shown in Figure 4, the FIS for each security sub-factor of "Web Application" is designed separately, in a backward manner, to attain the parent node at each layer.
Figure 4. MFL for Web Applications in DOT
The results of the first-round FIS (i.e., FIS 1 to FIS 5) contribute cumulatively to the second-round FIS (i.e., FIS 6), which produces the final crisp value for Web Applications. As shown in Figure 5, the backward aggregation of "Monitoring and Logging" and "Encryption" yields the value of "Runtime Security", one of the main security factors of Web Applications in DOT. The vulnerability measurement process in Fuzzy Logic, in this case, flows backward from Figure 5, as the input, to Figure 4.
Figure 5. MFL for Runtime Security
After receiving the questionnaire results from DOT's security experts, we categorize the answers into specific groups. The number of groups depends on the number of sub-factors and on the role or importance of each sub-factor in the security contribution of each level. The primary reason for this classification is that the questionnaire answers are in the
format of "very low", "low", "medium", "high", and "very high". Weights in the interval [0, 10] are assigned to each sub-factor in the questionnaire. In other words, this linguistic description of the answers determines which fuzzy subset, from "very low" to "very high", each sub-factor belongs to. As shown in Figure 6, all weights obtained from AHP are assigned to each sub-factor, contributing to the security of its predecessors as shown in Figure 7. Furthermore, Figure 7 depicts the main factor "Runtime Security", which comprises four groups: (1) two groups belong to the "Monitoring and Logging" factor, resulting from the interior FIS attributes "Firewall and Antivirus", "XSS", "Filtering Policy", and "Scanning and Pen Testing"; (2) two groups belong to the "Encryption" factor, resulting from the interior FIS layer attributes "HTTP/HTTPS" and "Other Encryptions".
Figure 6. Assigned weights to each sub-factor based on AHP
As mentioned before, researchers usually apply the Fuzzy Logic methodology with all fuzzy rules assigned the same weight. However, we show that giving each rule its specific weight of contribution to the security of the system provides a more precise vulnerability measurement. As shown in Figure 7, each weight is assigned to the corresponding security sub-factor. If the number of sub-factors differs between two main factors, then, depending on the number of calculated weights, the weights are assigned to the fuzzy subsets accordingly. For instance, in "Monitoring and Logging" we weighted four security sub-factors, which map directly onto the fuzzy subsets, but in the "Encryption" factor each weight is assigned to two fuzzy subsets based on its value. Lower weights are mapped to lower fuzzy subsets and higher weights to higher subsets. Note that we always assign the highest weight, 1, to the fuzzy subset "very high".
Figure 7. Fuzzy subset implementation for Runtime Security based on AHP
4. IMPLEMENTATION
The implementation of all security factors in DOT's Web Applications is beyond the scope of this paper; we show only the implementation of "Runtime Security" to illustrate the procedure.
We define fuzzy if-then rules to map the inputs to the outputs in the MFL; each rule describes the conditions of the antecedent and consequent propositions. An advantage of fuzzy rules is that they operate in parallel, so the order or sequence of the rules does not matter. We generate fuzzy rules based on the number of linguistic variables that describe the level of security in each question of the questionnaire: the more linguistic variables, the more rules. As shown in Figure 8, we defined 25 fuzzy rules derived from two groups, each with five linguistic variables.
These rules are designed for the first group based on the COA methodology, with the weights shown at the end of each rule. Decision making based on a single fuzzy set occurs through aggregation, which combines the fuzzy sets from the if-then rules into a single fuzzy set. For that, we use the max-min technique (7) for aggregation, and the final value is obtained as follows:

Final Value = max(Group1, Group2, …, GroupN) (7)
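The weighted rule evaluation and max-min aggregation described above can be sketched as follows. The three subsets and five example rules below are illustrative stand-ins for the 25 rules of Figure 8 (the AHP-derived weight 0.528 is reused in one rule purely as an example); the inputs 3.99 and 5.5 are the paper's Group 1 and Group 2 averages:

```python
import numpy as np

def tri_mf(x, a, b, c):
    # Triangular membership function with feet a, c and peak b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0.0, 10.0, 1001)   # output universe of discourse
subsets = {                        # illustrative fuzzy subsets on [0, 10]
    "low":    (-5.0, 0.0, 5.0),
    "medium": (0.0, 5.0, 10.0),
    "high":   (5.0, 10.0, 15.0),
}

def infer(g1, g2, rules):
    """Weighted Mamdani inference: min for AND, the rule weight scales the
    firing strength, max aggregation over rules, then COA defuzzification."""
    agg = np.zeros_like(x)
    for s1, s2, out, w in rules:
        strength = min(tri_mf(g1, *subsets[s1]), tri_mf(g2, *subsets[s2]))
        agg = np.maximum(agg, np.minimum(tri_mf(x, *subsets[out]), w * strength))
    return float(np.sum(x * agg) / np.sum(agg))

# (group1 subset, group2 subset, output subset, rule weight)
rules = [
    ("low",    "low",    "low",    1.0),
    ("low",    "medium", "low",    0.528),   # AHP-style uneven weight
    ("medium", "medium", "medium", 1.0),
    ("medium", "high",   "high",   0.7),
    ("high",   "high",   "high",   1.0),
]

security = infer(3.99, 5.5, rules)   # example inputs for Groups 1 and 2
vulnerability = 10.0 - security      # security score -> vulnerability score
print(round(security, 2), round(vulnerability, 2))
```

Setting every rule weight to 1.0 in the list above reproduces the even-weight baseline condition discussed later.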
Figure 8. Fuzzy rules for Runtime Security based on AHP
The three-dimensional plot in Figure 9 maps Group 1 and Group 2 to the "Monitoring and Logging" security factor in Web Applications. The vertical axis ranges from 0 to 9, where 0 indicates the least security (maximum vulnerability) and 9 indicates the most security (least vulnerability).
Figure 9. Output curve for Runtime Security based on AHP
As shown in Figure 9, the output surface is smooth (without flat plateaus), indicating a generally incremental trend: few input combinations produce the same output. The figure also shows that the maximum value of "Monitoring and Logging" occurs at the peak of the intersection of Group 1 and Group 2. This demonstrates how vulnerability measurements for different inputs are distinguished.
We applied MFL to all four aforementioned groups of security factors derived from Figure 7 to quantify their vulnerabilities based on the weights that prioritize each sub-factor in AHP. In each measurement process, we compare two conditions: (1) the uneven weights acquired from AHP are assigned to the fuzzy rules, and (2) the same weight, 1, is assigned to all fuzzy rules. As shown in Figure 10, fuzzy inference processes map the input values to the output value through aggregation in each fuzzy set for "Monitoring and Logging". The left and middle columns show the if-part of the fuzzy rules, known as the "antecedent", and the right column shows the
then-part, which is called the "consequent". The average input and output are shown at the top of each column. In this case, Group 1 has an average value of 3.99 and Group 2 an average value of 5.5 out of 10. The resulting value for "Monitoring and Logging" is 4.48 out of 10, calculated in the last plot of the third column (bottom right) from the aggregation of the weighted decisions in the system, i.e., the maximum of all minimum inputs captured by COA. Importantly, the value 4.48 indicates the security of "Monitoring and Logging", not its vulnerability; to obtain the vulnerability, we subtract it from 10 (10 - 4.48 = 5.52).
As shown in Figure 11, the MFL is applied to the same groups as in Figure 10, but in this measurement process all fuzzy rules have the same weight, 1. The output for the security of "Monitoring and Logging" is 4.46, which quantifies the vulnerability as 5.54.
We repeat the same measurement processes for the "Encryption" factor. Figure 12 shows that the security value of Groups 3 and 4 (based on the aggregation of the weighted decisions) is 6.54, which quantifies the vulnerability as 3.46 out of 10. On the other hand, Figure 13 shows that the security and vulnerability values of Groups 3 and 4 (with all fuzzy rules assigned the same weight, 1) are 6.47 and 3.53, respectively. Finally, in Table 5 we calculate the total quantified vulnerability value of "Runtime Security" under both conditions, "Uneven Weights" and "Weight 1", to show the differences in the measured values.
Figure 10. Fuzzy inference processes for Monitoring and Logging based on AHP
Figure 11. Fuzzy inference processes for Monitoring and Logging with the same weight 1
Figure 12. Fuzzy inference processes for Encryption based on AHP
Figure 13. Fuzzy inference processes for Encryption with the same weight 1
Table 5. Total quantified vulnerability of Runtime Security

                 Monitoring and Logging   Encryption   Total (Avg)
Weight 1         5.54                     3.53         4.53
Uneven Weights   5.52                     3.46         4.49
Since we use Mamdani-type rules as defined earlier (R1: if x1 is Ai1 and … and xN is AiN then y is B1), and since we may have redundant rules or variables during rule construction, in the worst case measuring the vulnerability of a system involves a finite number of variables that imply super-polynomial or exponential complexity. If the model contains x variables and at most r fuzzy terms (here we have no more than five), then searching the rules for a specific input is of order O(r^x); in our case, with few variables, it remains tractable.
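The r^x growth is easy to see: with five linguistic terms per variable, two input groups already give the 25 rules of Figure 8, and each additional variable multiplies the rule count by five:

```python
r = 5  # linguistic terms per variable: "very low" .. "very high"
for n_vars in (1, 2, 3, 4):
    print(n_vars, r ** n_vars)  # exhaustive rule count: 5, 25, 125, 625
```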
5. CONCLUSIONS AND FUTURE WORK
The lack of a comprehensive approach to vulnerability quantification remains an unsolved issue among cybersecurity researchers. The main reason is that the quality of a system's security is usually described with human linguistic variables, and this form of expression invites multiple interpretations from experts. In this paper, we proposed a Multi-layered Fuzzy Logic (MFL) approach to measure the vulnerability of Web
Applications in DOT based on the prioritization of security sub-factors from AHP. The role and contribution of each security sub-factor are extracted from the answers we received from DOT security experts in a questionnaire. We quantified the vulnerability of "Runtime Security" in two scenarios: (1) uneven weights for the fuzzy rules, and (2) the same (even) weight of 1 for all fuzzy rules. Since the measurement in the former is based on the exact contribution of each security sub-factor to the security of Web Applications, it yields a more precise vulnerability quantification than the latter. The findings show that quantifying vulnerabilities with an even weight of 1 for all rules overestimates the vulnerability of the system by 0.04 out of 10. This value may not represent a significant difference between the two scenarios at this scale; however, in a large corporation the difference would be conspicuous. In other words, although both methodologies calculate the vulnerability of the system precisely, we prefer to calculate each metric based on the AHP method, conducting the assessment with the exact weight of each security sub-factor, because this method does not overestimate the vulnerability of the system. To our knowledge, this is the first methodology to quantify vulnerabilities based on GQM, security standards, and AHP in Fuzzy Logic.
To recapitulate briefly, the proposed methodology with the findings in this paper, regardless of
weighting the security sub-factors or not, is a novel approach to quantify the vulnerability of not
only Web Applications, but all components of network security.
This methodology is efficient, flexible (layers can be added), tolerant of imprecise data, and easy to understand. Its primary limitation is that the rule construction process grows rapidly with the number of variables.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
REFERENCES
[1] “Announcements,” IBM News Room. [Online]. Available: https://newsroom.ibm.com/2019-07-23-
IBM-Study-Shows-Data-Breach-Costs-on-the-Rise-Financial-Impact-Felt-for-Years.
[2] “Acunetix Web Application Vulnerability Report 2019,” Acunetix. [Online]. Available:
https://www.acunetix.com/acunetix-web-application-vulnerability-report.
[3] J. Bau, E. Bursztein, D. Gupta, and J. Mitchell, “State of the Art: Automated Black-Box Web
Application Vulnerability Testing,” 2010 IEEE Symposium on Security and Privacy, 2010.
[4] A. Razzaq, K. Latif, H. F. Ahmad, A. Hur, Z. Anwar, and P. C. Bloodsworth, “Semantic security
against web application attacks,” Information Sciences, vol. 254, pp. 19–38, 2014.
[5] L. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
[6] V. R. Basili and S. Green, “Software Process Evolution at the SEL,” Foundations of Empirical
Software Engineering, pp. 142–154.
[7] T. L. Saaty, “How to Make a Decision: The Analytic Hierarchy Process,” Interfaces, vol. 24, no. 6,
pp. 19–43, 1994.
[8] P. J. C. Nunes, J. Fonseca, and M. Vieira, “phpSAFE: A Security Analysis Tool for OOP Web
Application Plugins,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems
and Networks, 2015.
[9] Y.-H. Tung, S.-S. Tseng, J.-F. Shih, and H.-L. Shan, “W-VST: A Testbed for Evaluating Web
Vulnerability Scanner,” 2014 14th International Conference on Quality Software, 2014.
[10] B. Eshete, A. Villafiorita, K. Weldemariam, and M. Zulkernine, “Confeagle: Automated Analysis of
Configuration Vulnerabilities in Web Applications,” 2013 IEEE 7th International Conference on
Software Security and Reliability, 2013.
[11] M. K. Gupta, M. C. Govil, and G. Singh, “Predicting Cross-Site Scripting (XSS) security
vulnerabilities in web applications,” 2015 12th International Joint Conference on Computer Science
and Software Engineering (JCSSE), 2015.
[12] W. Ze, “Design and Implementation of Core Modules of WEB Application Vulnerability Detection
Model,” 2019 11th International Conference on Measuring Technology and Mechatronics
Automation (ICMTMA), 2019.
[13] J. Walden, M. Doyle, G. A. Welch, and M. Whelan, “Security of open source web applications,” 2009 3rd International Symposium on Empirical Software Engineering and Measurement, 2009.
[14] Z. Su and G. Wassermann, “The essence of command injection attacks in web applications,” ACM
SIGPLAN Notices, vol. 41, no. 1, pp. 372–382, Dec. 2006.
[15] V. Felmetsger, L. Cavedon, C. Kruegel, and G. Vigna, “Toward Automated Detection of Logic
Vulnerabilities in Web Applications,” USENIX Security'10: Proceedings of the 19th USENIX
Conference on Security, Aug. 2010.
[16] N. Jovanovic, C. Kruegel, and E. Kirda, “Pixy: a static analysis tool for detecting Web application
vulnerabilities,” 2006 IEEE Symposium on Security and Privacy (S&P06), 2006.
[17] Y.-W. Huang, S.-K. Huang, T.-P. Lin, and C.-H. Tsai, “Web application security assessment by fault
injection and behavior monitoring,” Proceedings of the twelfth international conference on World
Wide Web - WWW 03, 2003.
[18] F. Ö. Sönmez, “Security Qualitative Metrics for Open Web Application Security Project
Compliance,” Procedia Computer Science, vol. 151, pp. 998–1003, 2019.
[19] P. R. L., L. C. S., D. Jagli, and A. Joy, “Rational Unified Treatment for Web application
Vulnerability Assessment,” 2014 International Conference on Circuits, Systems, Communication and
Information Technology Applications (CSCITA), 2014.
[20] K. Philipsen, “Web Application Vulnerability Assessment Discovering and Mitigating Security
Issues in Web Applications,” Cybertrust.
[21] A. Gray and S. MacDonell, “GQM: A Full Life Cycle Framework for the Development and Implementation of Software Metric Programs,” Australian Software Measurement Conference, 1997, pp. 22–35.
[22] J. C. Abib and T. G. Kirner, “A GQM-based tool to support the development of software quality
measurement plans,” ACM SIGSOFT Software Engineering Notes, vol. 24, no. 4, pp. 75–80, Jan.
1999.
[23] A. Fuggetta, L. Lavazza, S. Morasca, S. Cinti, G. Oldano, and E. Orazi, “Applying GQM in an
industrial software factory,” ACM Transactions on Software Engineering and Methodology
(TOSEM), vol. 7, no. 4, pp. 411–448, 1998.
[24] V. Basili, J. Heidrich, M. Lindvall, J. Munch, M. Regardie, and A. Trendowicz, “GQM Strategies --
Aligning Business Strategies with Software Measurement,” First International Symposium on
Empirical Software Engineering and Measurement (ESEM 2007), 2007.
[25] M. Shojaeshafiei, L. Etzkorn, and M. Anderson, “Cybersecurity Framework Requirements to
Quantify Vulnerabilities Based on GQM,” SpringerLink, 04-Jun-2019. [Online]. Available:
https://link.springer.com/chapter/10.1007/978-3-030-31239-8_20.
[26] J. Watkins and L. Ghan, “AHP Version 5. 1 users manual. [Analytical Hierarchy Process (AHP)],”
Jan. 1992.
[27] H.-J. Zimmermann, “Fuzzy Sets, Decision Making, and Expert Systems,” 1987.
[28] W. Pedrycz, “Why triangular membership functions?,” Fuzzy Sets and Systems, vol. 64, no. 1, pp.
21–30, 1994.
[29] E. Ngai and F. Wat, “Fuzzy decision support system for risk analysis in e-commerce development,”
Decision Support Systems, vol. 40, no. 2, pp. 235–255, 2005.
[30] R. Luh, M. Temper, S. Tjoa, S. Schrittwieser, and H. Janicke, “PenQuest: a gamified
attacker/defender meta model for cyber security assessment and education,” Journal of Computer
Virology and Hacking Techniques, vol. 16, no. 1, pp. 19–61, 2019.
[31] A. Shafee, M. Baza, D. A. Talbert, M. M. Fouda, M. Nabil, and M. Mahmoud, “Mimic Learning to
Generate a Shareable Network Intrusion Detection Model,” 2020 IEEE 17th Annual Consumer
Communications & Networking Conference (CCNC), 2020.