Cyber-attacks are launched by exploiting existing vulnerabilities in software, hardware, systems and/or networks. Machine learning algorithms can be used to forecast the number of post-release vulnerabilities. Traditional neural networks behave like a black box, so it is unclear how past data points are used to infer subsequent ones. The long short-term memory (LSTM) network, a variant of the recurrent neural network, addresses this limitation by introducing feedback loops that retain past data points and use them in future calculations. Building on the previous finding, we further improve the prediction of vulnerability counts by developing a time series-based sequential model using an LSTM neural network. Specifically, this study develops a supervised, non-linear sequential time series forecasting model with an LSTM neural network to predict the number of vulnerabilities for the three vendors with the most vulnerabilities published in the National Vulnerability Database (NVD): Microsoft, IBM and Oracle. Our proposed model outperforms existing models, with a prediction root mean squared error (RMSE) as low as 0.072.
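The gating mechanism that lets an LSTM retain and reuse past observations can be shown in a minimal NumPy forward pass. This is a generic single-cell sketch, not the paper's model; the weights and the normalized monthly vulnerability counts are illustrative placeholders.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: gates decide how much past cell state to keep,
    which is what lets the network use earlier points of a time
    series when inferring later ones."""
    z = W @ x + U @ h_prev + b                  # stacked gate pre-activations
    n = h_prev.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2 * n]), sig(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])                      # candidate cell update
    c = f * c_prev + i * g                      # forget gate preserves history
    h = o * np.tanh(c)                          # new hidden state
    return h, c

# Toy run over a sliding window of hypothetical normalized counts.
rng = np.random.default_rng(0)
n_hidden, n_in = 4, 1
W = rng.normal(size=(4 * n_hidden, n_in))
U = rng.normal(size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for v in [0.10, 0.12, 0.15, 0.11]:              # illustrative series
    h, c = lstm_step(np.array([v]), h, c, W, U, b)
print(h.shape)  # (4,)
```

In practice a framework layer (e.g. a Keras `LSTM`) implements the same equations with learned weights; a final dense layer would map `h` to the predicted count.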
Progress of Machine Learning in the Field of Intrusion Detection Systems (ijcisjournal)
With the growth in the use of the Internet and local area networks, malicious attacks and intrusions into computer systems are increasing. Implementing intrusion detection systems has become extremely important for maintaining good network security. Support vector machines (SVMs), a classic pattern recognition tool, have been widely used in intrusion detection: they handle very large data sets efficiently, are easy to use, and exhibit good prediction behavior. This paper presents a new SVM model enriched with a Gaussian kernel function based on the features of the training data for intrusion detection. The new model is tested on the CICIDS2017 dataset and shows improved results in terms of detection efficiency and false alarm rate, giving better coverage and more efficient detection.
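The Gaussian kernel at the heart of such an SVM is easy to state concretely. The sketch below computes the kernel matrix directly; the `gamma` value and sample points are illustrative, and in practice one would use a library SVM (e.g. scikit-learn's `SVC(kernel="rbf")`) rather than hand-rolling the classifier.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2).
    Similar traffic records score near 1, dissimilar ones near 0."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0],
              [1.0, 1.0]])
K = rbf_kernel(X, X)
print(K)  # diagonal is 1.0; off-diagonal is exp(-1) ≈ 0.368
```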
A NOVEL EVALUATION APPROACH TO FINDING LIGHTWEIGHT MACHINE LEARNING ALGORITHM... (IJNSA Journal)
Building practical and efficient intrusion detection systems for computer networks is important in industry today, and machine learning provides a set of effective algorithms for detecting network intrusion. To find appropriate algorithms for building such systems, various types of machine learning algorithms must be evaluated against specific criteria. In this paper, we propose a novel evaluation formula that incorporates six indices into a comprehensive measurement: precision, recall, root mean square error, training time, sample complexity and practicability. The goal is to find algorithms that have a high detection rate and low training time, need fewer training samples, and are easy to use, i.e. whose models are easy to construct, understand and analyze. A detailed evaluation process is designed to obtain all necessary assessment indicators, and six kinds of machine learning algorithms are evaluated. Experimental results show that Logistic Regression offers the best overall performance.
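A composite measurement of this kind can be sketched as a weighted sum over normalized indices, where "cost" metrics (lower is better) are inverted. The paper's actual formula and weights are not given in the abstract; the equal weights and metric values below are purely illustrative.

```python
def composite_score(metrics, weights):
    """Combine six indices into one score.  'Benefit' metrics count
    directly; 'cost' metrics (lower is better) are inverted as 1 - v.
    All values are assumed pre-normalized to [0, 1]."""
    benefit = {"precision", "recall", "practicability"}
    score = 0.0
    for name, value in metrics.items():
        v = value if name in benefit else 1.0 - value
        score += weights[name] * v
    return score

m = {"precision": 0.95, "recall": 0.90, "rmse": 0.10,
     "training_time": 0.20, "sample_complexity": 0.30, "practicability": 0.80}
w = {k: 1 / 6 for k in m}                     # illustrative equal weighting
print(round(composite_score(m, w), 3))        # → 0.842
```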
n-Tier Modelling of Robust Key management for Secure Data Aggregation in Wire... (IJECEIAES)
Security problems in Wireless Sensor Networks (WSN) have been researched for more than a decade, and various security approaches have evolved to resist different forms of attack using different methodologies. A review of these approaches shows that they are highly attack-specific and do not address various associated issues in WSN. It is also essential for a security approach to be computationally lightweight. This paper therefore presents a novel analytical model based on an n-tier approach whose goal is to generate an optimized secret key that ensures a higher degree of security during the data aggregation process in WSN. The study outcome shows that the proposed system is computationally lightweight, with good performance in terms of reduced delay and reduced energy consumption. It also exhibits improved response time and good data delivery, balancing the needs of security and data forwarding performance in WSN.
Robust Malware Detection using Residual Attention Network (Shamika Ganesan)
In this paper, we explore the use of an attention-based mechanism known as Residual Attention for malware detection and compare it with existing CNN-based methods and conventional machine learning algorithms using GIST features. The proposed method outperformed traditional malware detection methods based on machine learning and CNN-based deep learning, achieving an accuracy of 99.25%.
This paper has been accepted at the International Conference on Consumer Electronics (ICCE 2021).
ADDRESSING IMBALANCED CLASSES PROBLEM OF INTRUSION DETECTION SYSTEM USING WEI... (IJCNCJournal)
The main issues with Intrusion Detection Systems (IDS) are their sensitivity to errors and the inconsistent and inequitable ways in which they are often evaluated. Most previous efforts have focused on improving the overall accuracy of these models by increasing the detection rate and decreasing false alarms, which is an important issue; however, machine learning (ML) algorithms can classify all or most records of the minority classes into one of the majority classes with negligible impact on overall performance. The riskiness of threats arising from the small classes, the shortcomings of previous efforts, and the need to improve IDS performance were the motivations for this work. In this paper, a stratified sampling method and different cost-function schemes were combined with the Extreme Learning Machine (ELM) method, with kernels and activation functions, to build competitive intrusion detection solutions that improve the performance of these systems and reduce the occurrence of the accuracy-paradox problem. The main experiments were performed on the UNB ISCX2012 dataset. The experimental results showed that ELM models with a polynomial function outperform other models in overall accuracy, recall, and F-score, and compete with the traditional model in the Normal, DoS and SSH classes.
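One standard cost-function scheme for the class imbalance described above is inverse-frequency weighting, where rare attack classes receive larger misclassification costs. The paper's exact schemes are not stated in the abstract; this is a generic sketch (the same idea as scikit-learn's "balanced" class-weight mode), with an illustrative label distribution.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency cost weights: rare classes get larger
    misclassification costs, so a learner cannot profitably dump
    them into a majority class (the accuracy-paradox failure mode)."""
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

labels = ["Normal"] * 90 + ["DoS"] * 8 + ["SSH"] * 2   # toy distribution
w = class_weights(labels)
print(w)   # SSH gets the largest weight, Normal the smallest
```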
With the increasing importance of cyberspace security, research on and application of network situational awareness is receiving more attention. Research on network security situational awareness is of great significance for improving network monitoring capability and emergency response capability, and for predicting the development trend of network security.
Real Time Intrusion Detection System Using Computational Intelligence and Neu... (ijtsrd)
Today, intrusion detection using neural networks is an active and measurable area for researchers. Computational intelligence is described in terms of parameters such as computational speed, adaptation, error resilience and fault tolerance, and a good intrusion detection system must satisfy adaptability requirements. The objective of this paper is to outline research progress on intrusion detection via computational intelligence and neural networks, focusing on existing research challenges, review analysis, and research suggestions regarding intrusion detection systems. Dr. Prabha Shreeraj Nair, "Real Time Intrusion Detection System Using Computational Intelligence and Neural Network: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1, Issue-6, October 2017. URL: http://www.ijtsrd.com/papers/ijtsrd5781.pdf http://www.ijtsrd.com/engineering/computer-engineering/5781/real-time-intrusion-detection-system-using-computational-intelligence-and-neural-network-a-review/dr-prabha-shreeraj-nair
An effective approach for tackling network security problems is the intrusion detection system (IDS). Such systems play a key role in network security, as they can detect different types of attacks in networks, including DoS, U2R, Probe and R2L, and IDS are an increasingly important part of a system's defense. Various approaches to IDS are now being used, but are unfortunately relatively ineffective; data mining techniques and artificial intelligence play an important role in security services. This paper presents a comparative study of three well-known intelligent algorithms: Radial Basis Functions (RBF), Multilayer Perceptrons (MLP) and Support Vector Machines (SVM). The work's main interest is to benchmark the performance of these three algorithms, using a dataset of about 9,000 connections randomly chosen from KDD'99's 10% dataset. In addition, we investigate the algorithms' performance in terms of attack classification accuracy. The simulation results are analyzed and discussed. SVM with a linear kernel (Linear-SVM) was observed to give better performance than MLP and RBF in terms of detection accuracy and processing speed.
DETECTION METHOD FOR CLASSIFYING MALICIOUS FIRMWARE (IJNSA Journal)
A malicious firmware update may prove devastating to the embedded devices that make up the Internet of Things (IoT) and that typically lack the security verifications now applied to full operating systems. This work converts the binary headers of 40,000 firmware examples from bytes into 1024-pixel thumbnail images to train a deep neural network. The aim is to distinguish benign and malicious variants using modern deep learning methods without needing detailed functional or forensic analysis tools. One outcome of this image conversion is that it connects the problem to the vast machine learning literature on handwritten digit recognition (MNIST). Another result indicates that classification accuracy greater than 90% is possible using image-based convolutional neural networks (CNN) combined with transfer learning. The envisioned CNN application would intercept firmware updates before their distribution to IoT networks and score their likelihood of containing malicious variants. To explain how the model makes classification decisions, the research applies traditional statistical methods, such as single decision trees and ensembles of decision trees, with identifiable pixel or byte values that contribute to the malicious or benign determination.
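The bytes-to-thumbnail conversion step can be sketched directly: pad or truncate the header to 1024 bytes and reshape it into a 32x32 grayscale image. The exact preprocessing in the paper may differ; the 512-byte toy header below is illustrative.

```python
import numpy as np

def header_to_thumbnail(blob, side=32):
    """Pad/truncate a firmware header to side*side bytes and reshape
    into a grayscale thumbnail (1024 pixels for side=32), the kind of
    image representation a CNN can consume."""
    padded = blob[: side * side].ljust(side * side, b"\x00")
    return np.frombuffer(padded, dtype=np.uint8).reshape(side, side)

img = header_to_thumbnail(bytes(range(256)) * 2)   # toy 512-byte header
print(img.shape, img.dtype)   # (32, 32) uint8
```

Each byte becomes one pixel intensity, so structurally similar headers produce visually similar thumbnails, which is what makes MNIST-style CNNs applicable.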
Evaluation of network intrusion detection using Markov chain (IJCI JOURNAL)
Internet threats in day-to-day life have increased significantly, and there is a need to develop models to maintain system security. Among the most effective techniques are intrusion detection systems (IDS), whose purpose is to detect intrusions through security devices and deal with them. In this paper, a mathematical approach is used to predict and detect intrusion in the network. We discuss a combined "K-Means + Apriori" method that classifies normal and abnormal activities in a computer network. The K-Means step partitions the training set into K clusters using Euclidean distance and introduces an outlier factor; the Apriori algorithm then prunes the data by removing infrequent items from the database. Based on the defined states, the degree of incoming data is evaluated experimentally on a sample of the DARPA2000 dataset, achieving high detection performance across attack stages.
CLASSIFICATION PROCEDURES FOR INTRUSION DETECTION BASED ON KDD CUP 99 DATA SET (IJNSA Journal)
In a network security framework, intrusion detection is a benchmark component and a fundamental way to protect computers from many threats. A major issue in intrusion detection is the large number of false alerts; this has motivated several experts to seek ways of reducing false alerts using data mining, an analysis procedure applied to large data sets such as KDD CUP 99. This paper reviews various data mining classification procedures for handling false alerts in intrusion detection. Testing many data mining procedures on KDD CUP 99 shows that no individual procedure can reveal all attack classes with high accuracy and without false alerts: the best accuracy, 92%, is achieved by the Multilayer Perceptron, while the best training time, 4 seconds, is achieved by the rule-based model. It is concluded that various procedures should be combined to handle the several kinds of network attacks.
A PROPOSED MODEL FOR DIMENSIONALITY REDUCTION TO IMPROVE THE CLASSIFICATION C... (IJNSA Journal)
Over the past few years, intrusion protection systems have become a mature research area in the field of computer networks. The problem of excessive features has a significant impact on intrusion detection performance. Machine learning algorithms have been used in much previous research to classify network traffic as harmful or normal; to obtain good accuracy, the dimensionality of the data must be reduced. A new model based on a combination of feature selection and machine learning algorithms is proposed in this paper. The model selects, from the feature content, only those features that affect attack detection, to increase the accuracy of intrusion detection systems. Performance is evaluated by comparison against several known algorithms, using the NSL-KDD dataset for classification. The proposed model outperformed the other learning approaches with an accuracy of 98.8%.
USE OF MARKOV CHAIN FOR EARLY DETECTING DDOS ATTACKS (IJNSA Journal)
DDoS comprises a variety of mixed attack types, and botnet attackers can chain different types of DDoS attacks to confuse cybersecurity defenders. In this article, each attack type is represented as a state of a Markov model, and the model is used to calculate the final attack probability. This probability is then converted into a prediction vector, so that incoming attacks can be detected early, before the IDS issues an alert. The experimental results show that the prediction model makes multi-vector DDoS detection and analysis easier.
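The Markov-chain idea can be sketched concretely: a transition matrix over attack-type states is iterated to obtain a prediction vector. The state names and probabilities below are hypothetical placeholders, not the paper's estimates.

```python
import numpy as np

# Hypothetical DDoS attack states and transition probabilities
# (rows sum to 1; values are illustrative only).
states = ["SYN_flood", "UDP_flood", "HTTP_flood"]
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

def predict(start, steps):
    """Distribution over attack types after `steps` transitions:
    the prediction vector used for early multi-vector detection."""
    v = np.zeros(len(states))
    v[states.index(start)] = 1.0
    for _ in range(steps):
        v = v @ P                     # one Markov transition
    return v

v = predict("SYN_flood", 2)
print(v.round(3))  # [0.43 0.35 0.22]
```

A defender could raise an early alert whenever any component of `v` crosses a chosen threshold, before the attack of that type is actually observed.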
Multi-stage secure clusterhead selection using discrete rule-set against unkn... (IJECEIAES)
Security is a rising concern in wireless networks, given the various forms of reconfigurable network arising from them. The wireless sensor network (WSN) is one such example, expected to be an integral part of cyber-physical systems in coming times. A review of existing systems shows few dominant and robust solutions for mitigating the threats facing upcoming WSN applications. This paper therefore introduces a simple and cost-effective security model that offers security by ensuring secure selection of the clusterhead during the data aggregation process in WSN. The proposed system also constructs a rule-set to learn the nature of the communication and gain discrete knowledge about the intensity of adversaries. Using a simulation-based approach over MEMSIC nodes, the proposed system was shown to offer reduced energy consumption with good data delivery performance in contrast to the existing approach.
With the surge in modern research focus on pervasive computing, many techniques and challenges need to be addressed to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real things to consider are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, along with the mathematical techniques that can be used to model uncertainty, for developing a system that can sense, compute and communicate in a way that makes human life easier, with smart objects assisting from the surroundings.
The Next Generation Cognitive Security Operations Center: Adaptive Analytic L... (Konstantinos Demertzis)
A Security Operations Center (SOC) is a central technical-level unit responsible for monitoring, analyzing, assessing, and defending an organization's security posture on an ongoing basis. SOC staff work closely with incident response teams, security analysts, network engineers and organization managers, using sophisticated data processing technologies such as security analytics, threat intelligence, and asset criticality to ensure security issues are detected, analyzed and addressed quickly. Those techniques are part of a reactive security strategy because they rely on the human factor and on the experience and judgment of security experts, using supplementary technology to evaluate risk impact and minimize the attack surface. This study suggests an active security strategy that adopts a vigorous method, including ingenuity, data analysis, processing and decision-making support, to face various cyber hazards. Specifically, the paper introduces a novel intelligence-driven cognitive computing SOC based exclusively on progressive, fully automatic procedures. The proposed λ-Architecture Network Flow Forensics Framework (λ-NF3) is an efficient cybersecurity defense framework against adversarial attacks. It implements the Lambda machine learning architecture, which can analyze a mixture of batch and streaming data, using two accurate novel computational intelligence algorithms: an Extreme Learning Machine neural network with a Gaussian Radial Basis Function kernel (ELM/GRBFk) for batch data analysis, and a Self-Adjusting Memory k-Nearest Neighbors classifier (SAM/k-NN) to examine patterns in real-time streams. It is a forensics tool for big data that can enhance the automated defense strategies of SOCs to respond effectively to the threats their environments face.
A new clustering approach for anomaly intrusion detection (IJDKP)
Recent advances in technology have made our work easier compared to earlier times. Computer networks are growing day by day, but the security of computers and networks has always been a major concern for organizations, from smaller to larger enterprises. Organizations are aware of the possible threats and attacks and prepare for the safe side, but attackers are still able to succeed through loopholes. Intrusion detection is one of the major fields of research, and researchers are trying to find new algorithms for detecting intrusions. Clustering techniques from data mining are an area of research interest for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection based on the K-medoids method and certain modifications of it. The proposed algorithm achieves a high detection rate and overcomes the disadvantages of the K-means algorithm.
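A minimal K-medoids sketch shows the key difference from K-means: cluster centers are actual data points, which makes the clustering robust to the extreme values typical of intrusion records. This is a generic PAM-style loop with naive first-k initialization and toy 2-D points, not the paper's modified algorithm.

```python
import numpy as np

def k_medoids(X, k, iters=5):
    """Tiny PAM-style K-medoids with Manhattan distance: assign each
    point to its nearest medoid, then move each medoid to the cluster
    member minimizing total intra-cluster distance."""
    medoids = np.arange(k)                     # naive init: first k points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - X[medoids][None, :, :]).sum(-1)
        labels = d.argmin(1)                   # assignment step
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:                   # guard against empty cluster
                intra = np.abs(X[members][:, None, :] -
                               X[members][None, :, :]).sum(-1).sum(1)
                medoids[j] = members[intra.argmin()]
    return labels, medoids

X = np.array([[0.0, 0], [0.1, 0], [0.2, 0],    # "normal" cluster
              [5.0, 5], [5.1, 5], [9.0, 9]])   # anomalous region
labels, medoids = k_medoids(X, 2)
print(labels)
```

For anomaly detection, points lying far from their assigned medoid would then be flagged as potential intrusions.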
Trust-based secure routing against lethal behavior of nodes in wireless adhoc... (IJECEIAES)
Offering secure communication in a wireless ad hoc network is still an open problem, despite the archive of existing literature on security enhancement. The prevailing trend of solving only specific forms of attack in ad hoc networks lowers the applicability of existing security solutions when the application environment or attack scenario changes. The proposed system therefore implements an analytical secure routing model in which source nodes consistently monitor the malicious behaviour of their neighboring nodes and formulate secure routing decisions. Harnessing the potential of conceptual probabilistic modeling, the proposed system is capable of, and applicable to, resisting a maximum number and variety of threats in wireless networks. The study outcomes show that the proposed scheme offers better performance than the existing secure routing scheme.
ATTACK DETECTION AVAILING FEATURE DISCRETION USING RANDOM FOREST CLASSIFIER (CSEIJJournal)
The widespread use of the Internet has the adverse effect of vulnerability to cyber attacks. Defensive mechanisms like firewalls and IDSs have evolved, with many research contributions in these areas. Machine learning techniques have been used successfully in these defense mechanisms, especially IDSs. Although they are effective to some extent in identifying new patterns and variants of existing malicious patterns, many attacks are still left undetected. The objective is to develop an algorithm for detecting malicious domains based on passive traffic measurements. In this paper, an anomaly-based intrusion detection system is deployed using an ensemble machine learning classifier, Random Forest with gradient boosting. The NSL-KDD cup dataset is used for analysis, and out of 41 features, 32 were identified as significant using feature discretion. Our observations confirm the conjecture that both feature selection and stochastic genetic operators improve accuracy and effectiveness. Training time is reduced tremendously, by 98.59%, and accuracy is improved to 98.75%.
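The feature-significance step described above is commonly driven by a per-feature relevance score. The paper does not specify its scoring in the abstract; below is a standard information-gain ranking (the criterion decision-tree ensembles use internally), on toy discrete records where one feature separates the classes and the other does not.

```python
import math
from collections import Counter

def entropy(ys):
    """Shannon entropy of a label list, in bits."""
    n = len(ys)
    return -sum(c / n * math.log2(c / n) for c in Counter(ys).values())

def info_gain(xs, ys):
    """Entropy reduction from splitting labels `ys` by discrete feature
    values `xs`; higher gain means a more useful feature."""
    n = len(ys)
    cond = sum(len(sub) / n * entropy(sub)
               for v in set(xs)
               for sub in [[y for x, y in zip(xs, ys) if x == v]])
    return entropy(ys) - cond

ys = ["attack", "attack", "normal", "normal"]   # toy labels
A = [1, 1, 0, 0]    # perfectly separates the classes
B = [1, 0, 1, 0]    # carries no class information
print(info_gain(A, ys), info_gain(B, ys))  # 1.0 0.0
```

Ranking all 41 NSL-KDD features by such a score and keeping the top-scoring ones is one plausible way to arrive at a significant subset like the 32 features reported.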
Attack Detection Availing Feature Discretion using Random Forest ClassifierCSEIJJournal
The widespread use of the Internet has an adverse effect of being vulnerable to cyber attacks. Defensive
mechanisms like firewalls and IDSs have evolved with a lot of research contributions happening in these
areas. Machine learning techniques have been successfully used in these defense mechanisms especially
IDSs. Although they are effective to some extent in identifying new patterns and variants of existing
malicious patterns, many attacks are still left as undetected. The objective is to develop an algorithm for
detecting malicious domains based on passive traffic measurements. In this paper, an anomaly-based
intrusion detection system based on an ensemble based machine learning classifier called Random Forest
with gradient boosting is deployed. NSL-KDD cup dataset is used for analysis and out of 41 features, 32
features were identified as significant using feature discretion.
11421ijcPROGRESS OF MACHINE LEARNING IN THE FIELD OF INTRUSION DETECTION SYST...ijcisjournal
With the growth in the use of the Internet and local area networks, malicious attacks and intrusions into computer systems are increasing. Implementing intrusion detection systems have become extremely important to help maintain good network security. Support vector machines (SVMs), a classic pattern recognition tool, have been widely used in intrusion detection. They can handle very large data with high efficiency, are easy to use, and exhibit good prediction behavior. This paper presents a new SVM model enriched with a Gaussian kernel function based on the features of the training data for intrusion detection. The new model is tested with the CICIDS2017 dataset. The test proves better results in terms of detection efficiency and false alarm rate, which can give better coverage and make detection more efficient.
With the increasing importance of cyberspace security, the research and application of network situational awareness is getting more attention. The research on network security situational awareness is of great significance for
improving the network monitoring ability, emergency response capability and predicting the development trend of network security
Real Time Intrusion Detection System Using Computational Intelligence and Neu...ijtsrd
Today, Intrusion detection system using neural network is interested and measurable area for the researchers. The computational intelligence describe based on following parameters such as computational speed, adaptation, error resilience and fault tolerance. A good intrusion detection system must be satisfied adaptable as requirements. The objective of this paper, provide an outline of the research progress via computational intelligence and neural network over the intrusion detection. In this paper focused, existing research challenges, review analysis, research suggestion regarding Intrusion detection system. Dr. Prabha Shreeraj Nair"Real Time Intrusion Detection System Using Computational Intelligence and Neural Network: A Review" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1 | Issue-6 , October 2017, URL: http://www.ijtsrd.com/papers/ijtsrd5781.pdf http://www.ijtsrd.com/engineering/computer-engineering/5781/real-time-intrusion-detection-system-using-computational-intelligence-and-neural-network-a-review/dr-prabha-shreeraj-nair
Intrusion detection systems (IDS) are an effective approach for tackling network security problems. These systems play a key role in network security, as they can detect different types of attacks in networks, including DoS, U2R, Probe and R2L, and they are an increasingly important part of a system's defense. Various approaches to IDS are now in use but are unfortunately relatively ineffective; data mining techniques and artificial intelligence therefore play an important role in security services. This paper presents a comparative study of three well-known intelligent algorithms: Radial Basis Functions (RBF), Multilayer Perceptrons (MLP) and Support Vector Machines (SVM). The work's main interest is to benchmark the performance of these three algorithms on a dataset of about 9,000 connections, randomly chosen from KDD'99's 10% dataset. In addition, we investigate the algorithms' performance in terms of attack classification accuracy. The simulation results are analyzed and discussed. It is observed that an SVM with a linear kernel (Linear-SVM) gives better performance than MLP and RBF in terms of detection accuracy and processing speed.
DETECTION METHOD FOR CLASSIFYING MALICIOUS FIRMWARE (IJNSA Journal)
A malicious firmware update may prove devastating to embedded devices that both make up the Internet of Things (IoT) and typically lack the security verifications now applied to full operating systems. This work converts the binary headers of 40,000 firmware examples from bytes into 1024-pixel thumbnail images to train a deep neural network. The aim is to distinguish benign and malicious variants using modern deep learning methods without needing detailed functional or forensic analysis tools. One outcome of this image conversion is that it connects the problem to the vast machine learning literature already applied to handwritten digit recognition (MNIST). Another result indicates that classification accuracy greater than 90% is possible using image-based convolutional neural networks (CNN) combined with transfer learning methods. The envisioned CNN application would intercept firmware updates before their distribution to IoT networks and score their likelihood of containing malicious variants. To explain how the model makes classification decisions, the research applies traditional statistical methods, such as single decision trees and ensembles of them, with identifiable pixel or byte values that contribute to the malicious or benign determination.
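The byte-to-image step the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the 32x32 layout, zero-padding policy and the sample bytes are assumptions.

```python
def header_to_thumbnail(blob: bytes, side: int = 32) -> list:
    """Map the first side*side bytes of a firmware header onto a grayscale
    thumbnail (one pixel per byte, values 0-255), zero-padding short headers
    so every sample has the same shape for the CNN."""
    n = side * side
    padded = blob[:n].ljust(n, b"\x00")
    return [list(padded[r * side:(r + 1) * side]) for r in range(side)]

# a toy header: an ELF magic number followed by filler bytes
img = header_to_thumbnail(b"\x7fELF" + bytes(100))
```

Once headers are images, any off-the-shelf image classifier (and the MNIST-adjacent tooling the paper mentions) can be applied directly.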
Evaluation of network intrusion detection using Markov chain (IJCI JOURNAL)
Internet threats have increased significantly in day-to-day life, so there is a need to develop models that maintain system security. Among the most effective techniques is the Intrusion Detection System (IDS), whose purpose is to detect intrusions through security devices and deal with them. In this paper, a mathematical approach is used to predict and detect intrusions in the network. We discuss a "K-Means + Apriori" method that classifies normal and abnormal activities in a computer network. The K-Means step partitions the training set into K clusters using Euclidean distance and introduces an outlier factor; the Apriori step then prunes the data by removing infrequent items from the database. Based on the defined states, the degree of incoming data is evaluated experimentally using a sample of the DARPA2000 dataset, achieving high detection performance across attack stages.
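The K-Means partitioning and outlier-factor idea above can be sketched as follows. This is a hedged illustration: the toy data, k, iteration count and seed are chosen for the example, not taken from the paper.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Partition `points` into k clusters by Euclidean distance."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[idx].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers

def outlier_factor(p, centers):
    # distance to the nearest cluster center; a large value flags an anomaly
    return min(math.dist(p, c) for c in centers)

# two tight groups of "normal" connections; an attack record scores far higher
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = kmeans(pts, 2)
```

A record whose outlier factor exceeds a chosen threshold would then be passed to the Apriori pruning stage the abstract describes.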
CLASSIFICATION PROCEDURES FOR INTRUSION DETECTION BASED ON KDD CUP 99 DATA SET (IJNSA Journal)
In the network security framework, intrusion detection is a benchmark component and a fundamental way to protect computers from many threats. A major issue in intrusion detection is the large number of false alerts; this issue has motivated many researchers to seek solutions for reducing false alerts using data mining, an analysis procedure applied to large datasets such as KDD CUP 99. This paper reviews various data mining classification procedures for handling false alerts in intrusion detection. Testing many data mining procedures on KDD CUP 99 shows that no single procedure can reveal all attack classes with high accuracy and without false alerts. The best accuracy, 92%, is achieved by the Multilayer Perceptron, while the best training time, 4 seconds, is achieved by the rule-based model. It is concluded that various procedures should be combined to handle the many kinds of network attacks.
A PROPOSED MODEL FOR DIMENSIONALITY REDUCTION TO IMPROVE THE CLASSIFICATION C... (IJNSA Journal)
Over the past few years, intrusion protection systems have become a mature research area in the field of computer networks. The problem of excessive features has a significant impact on intrusion detection performance. Many previous studies have used machine learning algorithms to classify network traffic as harmful or normal. To improve accuracy, the dimensionality of the data must be reduced. This paper proposes a new model based on a combination of feature selection and machine learning algorithms. The model selects, from the feature content, only those elements that impact attack detection, to increase the accuracy of intrusion detection systems. Performance has been evaluated by comparison against several well-known algorithms, using the NSL-KDD dataset for classification. The proposed model outperformed the other learning approaches with an accuracy of 98.8%.
USE OF MARKOV CHAIN FOR EARLY DETECTING DDOS ATTACKS (IJNSA Journal)
DDoS attacks come in a variety of mixed types, and botnet attackers can chain different types of DDoS attacks to confuse cybersecurity defenders. In this article, the attack type is represented as a state of the model. Considering the attack type, we use this model to calculate the final attack probability. The final attack probability is then converted into a prediction vector, and incoming attacks can be detected early, before the IDS issues an alert. The experimental results show that the prediction model makes multi-vector DDoS detection and analysis easier.
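The "attack type as Markov state" idea can be illustrated directly: propagate a state distribution through a transition matrix and read off the resulting probability vector. The states and transition values below are made up for the sketch; the paper's actual chain is estimated from traffic.

```python
# states (hypothetical): 0 = benign, 1 = SYN flood, 2 = UDP flood
P = [[0.8, 0.1, 0.1],
     [0.2, 0.6, 0.2],
     [0.3, 0.2, 0.5]]

def step(dist, P):
    """One Markov step: multiply the state distribution by the transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]      # traffic starts in the benign state
for _ in range(3):          # observe three transitions
    dist = step(dist, P)    # dist plays the role of the final attack probability vector
```

Thresholding the non-benign components of `dist` is one way such a vector could trigger an early warning before the IDS alert fires.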
Multi-stage secure clusterhead selection using discrete rule-set against unkn... (IJECEIAES)
Security is a rising concern in wireless networks, as various forms of reconfigurable network arise from them. The wireless sensor network (WSN) is one such example, expected to be an integral part of cyber-physical systems in the coming years. A review of existing systems shows that there are few dominant and robust solutions for mitigating the threats facing upcoming WSN applications. Therefore, this paper introduces a simple and cost-effective security model that ensures secure clusterhead selection during the data aggregation process in WSN. The proposed system also constructs a rule-set to learn the nature of the communication and gain discrete knowledge about the intensity of adversaries. With the aid of a simulation-based approach over MEMSIC nodes, the proposed system was shown to offer reduced energy consumption and good data delivery performance in contrast to the existing approach.
With the surge of modern research focus towards pervasive computing, many techniques and challenges need to be addressed in order to effectively create smart spaces and achieve miniaturization. In the process of scaling down to compact devices, the real things to ponder are the information retrieval challenges. In this work, we discuss the aspects of multimedia that make information access challenging. An example pattern recognition scenario is presented, and the mathematical techniques that can be used to model uncertainty are also presented, for developing a system that can sense, compute and communicate in a way that makes human life easier, with smart objects assisting from the surroundings.
The Next Generation Cognitive Security Operations Center: Adaptive Analytic L... (Konstantinos Demertzis)
A Security Operations Center (SOC) is a central technical-level unit responsible for monitoring, analyzing, assessing, and defending an organization’s security posture on an ongoing basis. The SOC staff works closely with incident response teams, security analysts, network engineers and organization managers using sophisticated data processing technologies such as security analytics, threat intelligence, and asset criticality to ensure security issues are detected, analyzed and finally addressed quickly. Those techniques are part of a reactive security strategy because they rely on the human factor, experience and the judgment of security experts, using supplementary technology to evaluate the risk impact and minimize the attack surface. This study suggests an active security strategy that adopts a vigorous method including ingenuity, data analysis, processing and decision-making support to face various cyber hazards. Specifically, the paper introduces a novel intelligence-driven cognitive computing SOC that is based exclusively on progressive, fully automatic procedures. The proposed λ-Architecture Network Flow Forensics Framework (λ-NF3) is an efficient cybersecurity defense framework against adversarial attacks. It implements the Lambda machine learning architecture, which can analyze a mixture of batch and streaming data, using two accurate novel computational intelligence algorithms: an Extreme Learning Machine neural network with Gaussian Radial Basis Function kernel (ELM/GRBFk) for batch data analysis and a Self-Adjusting Memory k-Nearest Neighbors classifier (SAM/k-NN) to examine patterns from real-time streams. It is a forensics tool for big data that can enhance the automated defense strategies of SOCs to effectively respond to the threats their environments face.
A new clustering approach for anomaly intrusion detection (IJDKP)
Recent advances in technology have made our work easier compared to earlier times. Computer networks are growing day by day, but the security of computers and networks has always been a major concern for organizations, from smaller to larger enterprises. Organizations are aware of possible threats and attacks and prepare accordingly, yet attackers are still able to exploit loopholes.
Intrusion detection is a major field of research, and researchers are trying to find new algorithms for detecting intrusions. Clustering, a data mining technique, is an interesting area of research for detecting possible intrusions and attacks. This paper presents a new clustering approach for anomaly intrusion detection that uses the K-medoids method of clustering with certain modifications. The proposed algorithm is able to achieve a high detection rate and overcomes the disadvantages of the K-means algorithm.
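The K-medoids idea the abstract builds on differs from K-means in that cluster representatives are actual data points, which is what makes it more robust to outliers. A minimal PAM-style sketch (naive seeding, toy data; the paper's modifications are not reproduced here):

```python
import math

def k_medoids(points, k, iters=10):
    """PAM-style sketch: medoids are actual data points, making the
    method less sensitive to outliers than K-means centroids."""
    medoids = list(points[:k])          # naive seeding for the illustration
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for p in points:
            nearest = min(medoids, key=lambda m: math.dist(p, m))
            clusters[nearest].append(p)
        # swap step: each medoid becomes the member minimizing in-cluster cost
        new = [min(cl, key=lambda c: sum(math.dist(c, q) for q in cl))
               for cl in clusters.values()]
        if new == medoids:
            break
        medoids = new
    return medoids

pts = [(0.0, 0.0), (5.0, 5.0), (0.2, 0.1), (0.1, 0.3), (5.1, 4.9), (4.8, 5.2)]
meds = k_medoids(pts, 2)
```

For anomaly detection, records far from every medoid would be flagged, analogously to the distance-to-center test in K-means-based schemes.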
Trust-based secure routing against lethal behavior of nodes in wireless adhoc... (IJECEIAES)
Offering secure communication in wireless ad hoc networks is still an open-ended problem, despite the archive of existing literature on security enhancement. The prevailing trend of solving only specific forms of attack in ad hoc networks lowers the applicability of existing security solutions when the application environment or attack scenario changes. Therefore, the proposed system implements an analytical secure-routing model that consistently monitors the malicious behaviour of neighboring nodes and lets source nodes formulate secure-routing decisions. Harnessing the potential of conceptual probabilistic modeling, the proposed system is capable of, and applicable to, resisting the maximum number and variety of threats in wireless networks. The study outcome shows the proposed scheme offers better performance than existing secure routing schemes.
A NOVEL EVALUATION APPROACH TO FINDING LIGHTWEIGHT MACHINE LEARNING ALGORITHM... (IJNSA Journal)
Building practical and efficient intrusion detection systems for computer networks is important in industry today, and machine learning provides a set of effective algorithms for detecting network intrusions. To find appropriate algorithms for building such systems, it is necessary to evaluate various types of machine learning algorithms against specific criteria. In this paper, we propose a novel evaluation formula that incorporates six indexes into a comprehensive measurement: precision, recall, root mean square error, training time, sample complexity and practicability. The goal is to find algorithms that have a high detection rate and low training time, need fewer training samples, and are easy to use in terms of constructing, understanding and analyzing models. A detailed evaluation process is designed to obtain all necessary assessment indicators, and six kinds of machine learning algorithms are evaluated. Experimental results illustrate that Logistic Regression shows the best overall performance.
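The abstract does not reproduce the paper's exact formula, so the sketch below is a generic weighted composite over the six named indexes; equal weights and [0, 1]-normalized inputs are assumptions.

```python
# "cost" indexes, where lower raw values are better
COST = {"rmse", "train_time", "sample_complexity"}

def composite_score(metrics, weights):
    """Weighted composite over the six indexes. Every metric is assumed
    pre-normalized to [0, 1]; cost metrics enter as (1 - value) so that
    a higher composite score is always better."""
    total = 0.0
    for name, w in weights.items():
        v = metrics[name]
        total += w * ((1.0 - v) if name in COST else v)
    return total / sum(weights.values())

# hypothetical normalized measurements for one algorithm
logreg = {"precision": 0.95, "recall": 0.90, "rmse": 0.20,
          "train_time": 0.10, "sample_complexity": 0.30, "practicability": 0.90}
score = composite_score(logreg, {k: 1.0 for k in logreg})
```

Ranking candidate algorithms by such a score is what lets one number summarize detection quality, training cost and usability at once.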
ATTACK DETECTION AVAILING FEATURE DISCRETION USING RANDOM FOREST CLASSIFIER (CSEIJJournal)
The widespread use of the Internet has the adverse effect of vulnerability to cyber attacks. Defensive mechanisms like firewalls and IDSs have evolved, with many research contributions in these areas. Machine learning techniques have been used successfully in these defense mechanisms, especially IDSs. Although they are effective to some extent in identifying new patterns and variants of existing malicious patterns, many attacks are still left undetected. The objective is to develop an algorithm for detecting malicious domains based on passive traffic measurements. In this paper, an anomaly-based intrusion detection system built on an ensemble machine learning classifier, Random Forest with gradient boosting, is deployed. The NSL-KDD cup dataset is used for analysis, and out of 41 features, 32 were identified as significant using feature discretion. Our observations confirm the conjecture that both feature selection and stochastic genetic operators improve accuracy and effectiveness. Training time is shown to be reduced tremendously, by 98.59%, and accuracy improved to 98.75%.
PROGRESS OF MACHINE LEARNING IN THE FIELD OF INTRUSION DETECTION SYST... (ijcisjournal)
With the growth in the use of the Internet and local area networks, malicious attacks and intrusions into computer systems are increasing. Implementing intrusion detection systems has become extremely important to help maintain good network security. Support vector machines (SVMs), a classic pattern recognition tool, have been widely used in intrusion detection. They can handle very large data with high efficiency, are easy to use, and exhibit good prediction behavior. This paper presents a new SVM model enriched with a Gaussian kernel function based on the features of the training data for intrusion detection. The new model is tested with the CICIDS2017 dataset. The test proves better results in terms of detection efficiency and false alarm rate, which can give better coverage and make detection more efficient.
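The Gaussian (RBF) kernel at the heart of such an SVM is compact enough to state directly. The gamma value below is illustrative only; the paper derives its kernel parameters from features of the training data.

```python
import math

def gaussian_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2): similarity is 1 for identical
    inputs and decays toward 0 as the feature vectors diverge."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))
```

An SVM replaces raw dot products with K(x, y), so a linear decision boundary in the kernel's implicit feature space can separate traffic that is not linearly separable in the original feature space.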
The main goal of Intrusion Detection Systems (IDSs) is to detect intrusions. This kind of detection system is a significant tool in traditional computer-based systems for ensuring cyber security. An IDS model can be faster and reach more accurate detection rates by selecting the most relevant features from the input dataset. Feature selection is an important stage of any IDS: it selects the optimal subset of features so that training becomes faster and less complex while preserving or enhancing the performance of the system. In this paper, we propose a method based on dividing the input dataset into different subsets, one per attack. We then perform feature selection using an information gain filter on each subset, and generate the optimal feature set by combining the feature lists obtained for each attack. Experimental results on the NSL-KDD dataset show that the proposed feature selection method improves system accuracy with fewer features while decreasing complexity. Moreover, a comparative study assesses the efficiency of the feature selection technique under different classification methods. To enhance overall performance, another stage is conducted using Random Forest and PART in a voting learning algorithm. The results indicate that the best accuracy is achieved when using the product probability rule.
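The per-attack information-gain filter described above can be sketched as follows. The toy data and the top-1 cut-off are illustrative, not the paper's configuration.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_vals, labels):
    """Entropy of the labels minus entropy after splitting on the feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_vals):
        sub = [l for f, l in zip(feature_vals, labels) if f == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

def select_features(rows, labels, top=1):
    """Rank feature columns by information gain; keep the `top` best."""
    ranked = sorted(range(len(rows[0])),
                    key=lambda j: info_gain([r[j] for r in rows], labels),
                    reverse=True)
    return set(ranked[:top])

# one per-attack subset (toy): feature 0 predicts the label, feature 1 is noise
dos_feats = select_features([(1, 0), (1, 1), (0, 0), (0, 1)], [1, 1, 0, 0])
```

Running `select_features` once per attack subset and taking the union of the returned sets yields the combined optimal feature set the method describes.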
EFFICIENT ATTACK DETECTION IN IOT DEVICES USING FEATURE ENGINEERING-LESS MACH... (ijcsit)
Through the generalization of deep learning, the research community has addressed critical challenges in the network security domain, like malware identification and anomaly detection. However, deploying these models on Internet of Things (IoT) devices for day-to-day operations has yet to be discussed. IoT devices are often limited in memory and processing power, rendering the compute-intensive deep learning environment unusable. This research proposes a way to overcome this barrier by bypassing feature engineering in the deep learning pipeline and using raw packet data as input. We introduce a feature-engineering-less machine learning (ML) process to perform malware detection on IoT devices. Our proposed model, "Feature-engineering-less ML (FEL-ML)," is a lighter-weight detection algorithm that expends no extra computation on "engineered" features, effectively accelerating the low-powered IoT edge. It is trained on unprocessed byte-streams of packets. Aside from providing better results, it is quicker than traditional feature-based methods. FEL-ML facilitates resource-sensitive network traffic security with the added benefit of eliminating the significant investment by subject matter experts in feature engineering.
Software Defect Prediction Using Radial Basis and Probabilistic Neural Networks (Editor IJCATR)
Defects in modules of software systems are a major problem in software development. A variety of data mining techniques are used to predict software defects, such as regression, association rules, clustering, and classification. This paper is concerned with classification-based software defect prediction. It investigates the effectiveness of a radial basis function neural network and a probabilistic neural network with respect to prediction accuracy and defect prediction. The conclusion to be drawn from this work is that the neural networks used here provide an acceptable level of accuracy but a poor defect prediction ability. Probabilistic neural networks perform consistently better with respect to the two performance measures across all datasets. It may be advisable to use a range of software defect prediction models to complement each other rather than relying on a single technique.
An Efficient Intrusion Detection System with Custom Features using FPA-Gradie... (IJCNCJournal)
An efficient Intrusion Detection System has to be given high priority when connecting systems to a network, to protect the system before an attack happens. It is a big challenge for the network security group to protect systems from the many new types of attacks that appear as technology grows. In this paper, an efficient intrusion detection model is proposed to predict attacks with high accuracy and a low false-negative rate by deriving custom features, UNSW-CF, from the benchmark intrusion dataset UNSW-NB15. To reduce learning complexity, custom features are derived and then significant features are constructed by applying the meta-heuristic FPA (Flower Pollination Algorithm) and MRMR (Minimal Redundancy Maximal Relevance), which reduces learning time and also increases prediction accuracy. ENC (ElasticNet Classifier), KRRC (Kernel Ridge Regression Classifier), and IGBC (Improved Gradient Boosting Classifier) are employed to classify the attacks in the UNSW-CF and UNSW datasets. UNSW-CF with derived custom features, using IGBC integrated with FPA, provided a high accuracy of 97.38% and a low error rate of 2.16%. Also, the sensitivity and specificity rates for IGB reach 97.32% and 97.50%, respectively.
Optimizing cybersecurity incident response decisions using deep reinforcemen... (IJECEIAES)
The main purpose of this paper is to explore and investigate the role of deep reinforcement learning (DRL) in optimizing the post-alert incident response process in security incident and event management (SIEM) systems. Although machine learning is used at multiple levels of SIEM systems, the last-mile decision process is often ignored. Few papers report efforts to use DRL to improve the post-alert decision and incident response processes, and all reported efforts applied only shallow (traditional) machine learning approaches. This paper explores the possibility of solving the problem using DRL approaches. The main attraction of DRL models is their ability to make accurate decisions based on live streams of data without the need for prior training, and they have proved very successful in other fields of application. Using standard datasets, a number of experiments were conducted with different DRL configurations. The results showed that DRL models can provide highly accurate decisions without the need for prior training.
LSTM deep learning method for network intrusion detection system (IJECEIAES)
Network security has become a primary concern for organizations. Attackers use different means to disrupt services, and this variety of attacks calls for a new way to block them all in one manner; in addition, these intrusions can change and penetrate security devices. To solve these issues, we suggest in this paper a new idea for a Network Intrusion Detection System (NIDS) based on Long Short-Term Memory (LSTM), to recognize menaces and retain a long-term memory of them, in order to stop new attacks that resemble existing ones and, at the same time, to have a single means of blocking intrusions. According to the results of our detection experiments, accuracy reaches up to 99.98% and 99.93% for binary and multi-class classification respectively, and the False Positive Rate (FPR) is only 0.068% and 0.023% respectively, which proves that the proposed model is effective: it has a great ability to memorize and differentiate between normal traffic and attacks, and its identification is more accurate than other machine learning classifiers.
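The paper's model is a full LSTM network, presumably built in a deep learning framework. As a hedged illustration of why an LSTM can retain a "long-term memory" of attack patterns, here is a single scalar LSTM cell step in plain Python, with made-up weights:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, w):
    """One LSTM cell step for scalar input/state. w maps each gate
    (f=forget, i=input, g=candidate, o=output) to (w_x, w_h, bias)."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h + w["f"][2])
    i = sigmoid(w["i"][0] * x + w["i"][1] * h + w["i"][2])
    g = math.tanh(w["g"][0] * x + w["g"][1] * h + w["g"][2])
    o = sigmoid(w["o"][0] * x + w["o"][1] * h + w["o"][2])
    c = f * c + i * g        # forget part of the old memory, add new evidence
    h = o * math.tanh(c)     # expose a gated view of the memory
    return h, c

w = {k: (0.5, 0.5, 0.0) for k in "figo"}   # toy weights, not trained values
h = c = 0.0
for x in [1.0, 0.5, -0.2]:                 # a toy sequence of traffic features
    h, c = lstm_step(x, h, c, w)
```

The forget gate `f` is what lets the cell carry evidence of an earlier packet across many later steps, which plain feed-forward classifiers cannot do.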
Deep self-taught learning framework for intrusion detection in cloud computin... (IAESIJAI)
The cloud has become a target-rich environment for malicious attacks by cyber intruders. Security is a major concern and remains an obstacle to the adoption of cloud computing. The intrusion detection system (IDS) is regarded as defense-in-depth. Unfortunately, most machine learning approaches designed for cloud intrusion detection require large amounts of labeled attack samples, which are limited in practice. Therefore, the key impetus of this work is to introduce self-taught learning (STL), combining a stacked sparse autoencoder (SSAE) with long short-term memory (LSTM), as a candidate solution to learn robust feature representations and efficiently improve the performance of IDS with respect to false alarm rate (FAR) and detection rate (DR). The proposed approach first employs SSAE to achieve dimensionality reduction by learning discriminative features from network traffic, then adopts LSTM to recognize intrusions from the features encoded by SSAE. To evaluate the detective performance of our model, a comprehensive set of experiments is conducted on NSL-KDD, along with ablation experiments showing the contribution of each component. Further, comparative analysis shows the efficacy of our approach against existing approaches, with an accuracy of 86.31%.
Obfuscated computer virus detection using machine learning algorithm (journalBEEI)
Nowadays, computer virus attacks are getting very advanced. New obfuscated viruses created by virus writers automatically generate a new shape of the virus for every single iteration and download. These constantly evolving viruses pose a significant threat to the information security of computer users, organizations and even governments. However, the signature-based detection technique used by conventional anti-virus software on the market fails to identify them, as signatures are unavailable. This research proposes an alternative to the traditional signature-based detection method and investigates the use of machine learning for obfuscated computer virus detection. In this work, text strings extracted from virus program code are used as the features to generate a classifier model that can correctly classify obfuscated virus files. The text-string feature is used because it is informative and potentially needs only a small amount of memory. Results show that unknown files can be correctly classified with 99.5% accuracy using an SMO classifier model. Thus, it is believed that current computer virus defenses can be strengthened through a machine learning approach.
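The text-string feature extraction can be sketched with a small routine in the spirit of the Unix `strings` tool; the 4-byte minimum run length and the sample bytes are assumptions for the illustration, not the paper's settings.

```python
import re

PRINTABLE = re.compile(rb"[ -~]{4,}")    # runs of 4+ printable ASCII bytes

def extract_strings(blob):
    """Pull printable text strings out of a program binary; each string
    is a candidate feature for the virus classifier."""
    return [m.group().decode("ascii") for m in PRINTABLE.finditer(blob)]

sample = b"\x00\x01MZ\x90CreateRemoteThread\x00\xffkernel32.dll\x00ab"
feats = extract_strings(sample)
```

Strings like API names and library imports survive many obfuscation schemes, which is what makes them useful classifier features even when byte-level signatures fail.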
Utilizing XAI Technique to Improve Autoencoder based Model for Computer Netwo... (IJCNCJournal)
Machine learning (ML) and deep learning (DL) methods are being adopted rapidly, especially in computer network security: fraud detection, network anomaly detection, intrusion detection, and much more. However, the lack of transparency of ML- and DL-based models is a major obstacle to their implementation, and they are criticized for their black-box nature even with such tremendous results. Explainable Artificial Intelligence (XAI) is a promising area that can improve the trustworthiness of these models by explaining and interpreting their output. If the internal workings of ML- and DL-based models are understandable, their performance can be further improved. The objective of this paper is to show how XAI can be used to interpret the results of a DL model, the autoencoder in this case, and, based on that interpretation, to improve its performance for computer network anomaly detection. The kernel SHAP method, which is based on Shapley values, is used as a novel feature selection technique to identify only those features that actually cause the anomalous behaviour of the set of attack/anomaly instances. Later, these feature sets are used to train and validate the autoencoder, but on benign data only. The resulting SHAP_Model outperformed the other two models proposed based on the feature selection method. The whole experiment is conducted on a subset of the latest CICIDS2017 network dataset. The overall accuracy and AUC of SHAP_Model are 94% and 0.969, respectively.
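The "train on benign data only, flag high reconstruction error" logic behind such autoencoder models can be illustrated without a neural network by a degenerate stand-in that reconstructs every sample as the benign mean. The data and the 95th-percentile threshold are assumptions, and a real autoencoder learns a far richer reconstruction than this.

```python
import math

def fit_benign_profile(benign, q=0.95):
    """Stand-in 'autoencoder': reconstruct every sample as the benign mean,
    so reconstruction error is simply the distance to that mean. The
    threshold is the q-quantile of errors seen on benign training data."""
    dims = len(benign[0])
    mean = tuple(sum(r[d] for r in benign) / len(benign) for d in range(dims))
    errs = sorted(math.dist(r, mean) for r in benign)
    threshold = errs[int(q * (len(errs) - 1))]
    return mean, threshold

def is_anomaly(x, mean, threshold):
    # attacks never seen in training reconstruct poorly, so error is large
    return math.dist(x, mean) > threshold

benign = [(0.1 * (i % 10), 0.1 * (i % 7)) for i in range(100)]
mean, thr = fit_benign_profile(benign)
```

Swapping the mean-based reconstruction for a trained autoencoder, and restricting inputs to SHAP-selected features, recovers the structure of the paper's SHAP_Model pipeline.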
Cyber security is a major concern worldwide. In response to frequent and persistent daily cyber attacks, this article was written to enlighten readers on zero-day attack prediction.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased rapidly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have played a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. The results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. This initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes, in order to achieve more successful results.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
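The droop control method mentioned above can be illustrated with the standard P-f / Q-V droop laws, in which each distributed generator lowers its frequency and voltage set-points in proportion to the active and reactive power it supplies. The constants below are illustrative, not the paper's tuned values.

```python
# Conventional P-f / Q-V droop characteristics for a DG unit.
F_NOM, V_NOM = 50.0, 1.0          # nominal frequency (Hz) and voltage (p.u.)
M_P, N_Q = 0.01, 0.05             # droop slopes (Hz per kW, p.u. per kvar)

def droop(p_kw, q_kvar, p_ref=0.0, q_ref=0.0):
    """Return (frequency, voltage) set-points under droop control."""
    f = F_NOM - M_P * (p_kw - p_ref)
    v = V_NOM - N_Q * (q_kvar - q_ref)
    return f, v

# A 100 kW / 2 kvar load step sags frequency to 49 Hz and voltage to 0.9 p.u.
f, v = droop(100.0, 2.0)
```

The slopes determine how load changes are shared among parallel DGs: a unit with a shallower slope picks up a larger share of the load change.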
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
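A NARX model of the kind described above predicts the output from lagged outputs and lagged exogenous inputs passed through a nonlinear basis. The sketch below fits such a model with ordinary least squares on synthetic data; the dynamics and the cubic basis are illustrative stand-ins, not real Li-ion electrochemistry.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "battery" data: terminal voltage responds nonlinearly to the
# current draw (illustrative dynamics, not real Li-ion parameters).
n = 400
u = rng.uniform(-1.0, 1.0, size=n)            # input: current
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1] - 0.3 * u[t-1] ** 3

# NARX regression: predict y[t] from lagged outputs and inputs through
# a fixed nonlinear basis, fitted by ordinary least squares.
def design(y, u, t):
    return [y[t-1], y[t-2], u[t-1], u[t-1] ** 3]

X = np.array([design(y, u, t) for t in range(2, n)])
target = y[2:]
theta, *_ = np.linalg.lstsq(X, target, rcond=None)

pred = X @ theta
rmse = float(np.sqrt(np.mean((pred - target) ** 2)))
```

Because the synthetic data is generated exactly by this basis, the fit recovers the true coefficients; on real battery data the basis (and possibly a neural regressor) would have to be chosen to capture the unmodeled nonlinearities.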
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems associated with power systems is the islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). Based on a comparison of all criteria together, the multi-criteria decision-making analysis yields overall weights of 24.7% for passive methods, 7.8% for active methods, 5.6% for hybrid methods, 14.5% for remote methods, 26.6% for signal processing-based methods, and 20.8% for computational intelligence-based methods. Thus, the total weights indicate that hybrid approaches are the least suitable choice, while signal processing-based methods are the most appropriate islanding detection method to select and implement in power systems with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
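The AHP weighting used above can be sketched as follows: the priority vector is the normalized principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the coherence of the judgments. The 3x3 judgment matrix below is illustrative, not the paper's actual comparisons.

```python
import numpy as np

# Illustrative pairwise comparison matrix on the Saaty 1-9 scale:
# criterion A moderately beats B, B slightly beats C.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority vector = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()

# Consistency index CI = (lambda_max - n) / (n - 1); the consistency
# ratio divides by Saaty's random index (0.58 for n = 3) and should
# stay below 0.1 for acceptable judgments.
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58
```

With per-criterion weights for each detection method, the overall scores reported above are obtained by weighting the method-versus-criterion priorities and summing across criteria.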
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m², the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
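The THD figure quoted above can be computed from an FFT of the injected current as the ratio of the harmonic RMS content to the fundamental. The waveform below is synthetic (a 50 Hz current with small 3rd and 5th harmonics), purely to illustrate the computation.

```python
import numpy as np

# Synthetic 50 Hz current with 3rd and 5th harmonic distortion.
fs, f0 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)               # exactly 10 fundamental cycles
i = (np.sin(2 * np.pi * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.03 * np.sin(2 * np.pi * 5 * f0 * t))

spectrum = np.abs(np.fft.rfft(i)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# THD = sqrt(sum of squared harmonic magnitudes) / fundamental magnitude.
fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 21)]
thd = float(np.sqrt(sum(h * h for h in harmonics)) / fund)
```

For the amplitudes above, THD is sqrt(0.05² + 0.03²) ≈ 5.8%; an integer number of cycles in the window avoids spectral leakage that would otherwise bias the harmonic bins.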
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
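The perturb and observe method named above climbs the P-V curve by repeatedly perturbing the operating voltage and reversing direction whenever the measured power drops. A toy sketch follows; the P-V curve shape is illustrative, not a real panel model, and the fixed step size is exactly what causes the steady-state oscillation that modified P&O and FLC variants aim to reduce.

```python
# Toy P-V curve: current collapses near open-circuit voltage (40 V).
def pv_power(v):
    i = 5.0 * (1.0 - (v / 40.0) ** 7)
    return v * i

def p_and_o(v0=20.0, step=0.5, iters=200):
    """Classic perturb-and-observe hill climbing with a fixed step."""
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = p_and_o()
```

The tracker settles into a small limit cycle around the maximum power point; an adaptive (e.g. fuzzy-scheduled) step size shrinks that cycle while keeping the transient response fast.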
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed, the unknown components of the model are estimated online by a neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are uniformly ultimately bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students' learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embedded-system design approach targeting FPGA technologies that is fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that users can seamlessly incorporate into their FPGA designs. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
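A lossless delta-plus-run-length scheme gives a feel for the kind of compression such a lab module might apply before transmission. This is an assumed scheme for illustration only, not the paper's actual encoder (whose reported average ratio was 2.90); slowly changing internal signals compress well because their deltas are mostly zero.

```python
def compress(samples):
    """Delta-encode the samples, then run-length-encode the deltas."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    encoded, run = [], 1
    for prev, cur in zip(deltas, deltas[1:]):
        if cur == prev:
            run += 1
        else:
            encoded.append((prev, run))
            run = 1
    encoded.append((deltas[-1], run))
    return encoded

def decompress(encoded):
    """Invert the run-length and delta encodings exactly (lossless)."""
    deltas = [d for d, run in encoded for _ in range(run)]
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# A slowly ramping internal signal: long runs of zero deltas.
signal = [v // 16 for v in range(400)]
packed = compress(signal)
ratio = len(signal) / (2 * len(packed))   # two values per (delta, run) pair
```

The achievable ratio is signal-dependent, which is why an adaptive encoder that matches the signal statistics is needed to sustain a high ratio across different debug signals.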
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring the status parameters of solar cell systems (solar irradiance, temperature, and humidity) is critical to enhancing their efficiency. Hence, in the present article an improved smart internet of things (IoT) prototype based on an embedded NodeMCU ESP8266 (ESP-12E) system was implemented experimentally. Three regions of Egypt (Luxor, Cairo, and El-Beheira) were chosen to study their solar irradiance profiles, temperature, and humidity with the proposed IoT system. The monitored solar irradiance, temperature, and humidity data were visualized live in Ubidots via the hypertext transfer protocol (HTTP). The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1,000, 245-958, and 187-692 W/m², respectively, during the solar day. The accuracy and rapidity of the monitoring results obtained with the proposed IoT system make it a strong candidate for monitoring solar cell systems. Moreover, the solar power radiation results for the three regions favor Luxor and Cairo over El-Beheira as suitable sites for building a solar cell power station.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such complex labeled datasets are impractical to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research develops an incubator system that integrates the internet of things (IoT) and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. The data is then sent in real time to the Eclipse Mosquitto IoT broker using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory (LSTM) method and displayed in a web application through an application programming interface (API) service. The experiment produced 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and the other attributes ranges from 0.23 to 0.48. Several experiments were then conducted to evaluate the model's predictions on the test data. The best results are obtained with a two-layer LSTM configuration, each layer with 60 neurons and a lookback setting of 6. This model produces an R² value of 0.934, with a root mean square error (RMSE) of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R² value was evaluated for each attribute used as input, with values between 0.590 and 0.845.
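The lookback-of-6 windowing used to train such an LSTM is model-agnostic: the sensor series is cut into (window, next value) pairs, and error metrics like RMSE and MAE score the predictions. The sketch below uses an illustrative sine series and a naive persistence baseline (predict the last value in the window), not the incubator data or the LSTM itself.

```python
import numpy as np

def make_windows(series, lookback=6):
    """Turn a 1-D series into supervised (window -> next value) pairs."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = np.array(series[lookback:])
    return X, y

series = np.sin(np.linspace(0, 12, 200))      # stand-in sensor series
X, y = make_windows(series, lookback=6)

# Persistence baseline: repeat the most recent observation. Any trained
# model should beat this error floor.
pred = X[:, -1]
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
mae = float(np.mean(np.abs(pred - y)))
```

Each row of X would be fed to the LSTM as a length-6 sequence; the persistence RMSE and MAE give the reference against which the reported 0.015 RMSE and 0.008 MAE can be judged.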
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees, both of which are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey with a unique flavor profile. A key problem identified is that many stingless bee colonies collapse due to weather, temperature, and environmental conditions. It is critical to understand the relationship between stingless bee honey production and environmental conditions in order to improve honey production. Thus, this paper presents a review of stingless bee honey production and prediction modeling. About 54 previous studies have been analyzed and compared to identify the research gaps. A framework for modeling the prediction of stingless bee honey is derived. The result presents a comparison and analysis of internet of things (IoT) monitoring systems, honey production estimation, convolutional neural networks (CNNs), and automatic identification methods for bee species. Based on image detection methods, the three most efficient approaches are CNN at 98.67%, densely connected convolutional networks with YOLO v3 at 97.7%, and DenseNet201 at 99.81%. This study is significant in assisting researchers in developing a model for predicting stingless bee honey output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society, including interactions, financial activity, and global security domains such as the military and the battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. The system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using the compact Contiki/Cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on the detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced the packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJ at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
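An intuitionistic polygonal fuzzy number, as described above, can be represented by piecewise-linear membership and non-membership functions whose vertex values satisfy mu(x) + nu(x) <= 1 everywhere, leaving a non-negative hesitation degree. The vertices below are illustrative (a number "around 5"), not an example from the article.

```python
import numpy as np

# Polygonal membership (mu) and non-membership (nu) vertices for an
# intuitionistic fuzzy number "around 5" (illustrative shape).
mu_x, mu_y = [2, 4, 5, 6, 8], [0.0, 0.8, 1.0, 0.8, 0.0]
nu_x, nu_y = [2, 4, 5, 6, 8], [1.0, 0.1, 0.0, 0.1, 1.0]

def mu(x):
    return float(np.interp(x, mu_x, mu_y))

def nu(x):
    return float(np.interp(x, nu_x, nu_y))

def hesitation(x):
    """Hesitation degree pi(x) = 1 - mu(x) - nu(x); must be >= 0."""
    return 1.0 - mu(x) - nu(x)
```

Because both functions are piecewise linear between shared vertices, checking mu + nu <= 1 at the vertices guarantees it on every segment, which is what makes the polygonal representation convenient inside a simplex-style algorithm.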
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The potential of these characteristics is explored by examining how well time-domain signals work in the detection of epileptic signals using the intracranial Freiburg Hospital (FH) database, the scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) database, and the Temple University Hospital (TUH) EEG corpus. To test the viability of this approach, two types of experiments were carried out: binary classification (preictal, ictal, and postictal, each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using the CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals yielded an accuracy of 79.7%, and the highest classification accuracy of 94.02% was obtained on the TUH EEG database when comparing the interictal stage to the preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration and maneuverability leads some drivers to adopt a dangerous driving style. Under these conditions, developing a method for tracking driver behavior is relevant. The article provides an overview of existing methods and models for assessing vehicle operation and driver behavior, and on this basis a combined algorithm for recognizing driving style is proposed. A set of input data was formed, including 20 descriptive features about the environment, the driver's behavior, and the characteristics of the car's operation, collected using OBD-II. The generated data set is fed to a Kohonen network, where clustering is performed according to driving style and degree of danger. Assigning a driver's characteristics to a particular cluster makes it possible to move to the individual indicators of that driver and account for individual driving characteristics. Applying the method makes it possible to identify potentially dangerous driving styles and thereby prevent accidents.
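A Kohonen self-organizing map of the kind described clusters feature vectors by repeatedly moving the best-matching unit (BMU) and its grid neighbours toward each sample while the learning rate and neighbourhood radius decay. The sketch below trains a minimal 1-D map on two synthetic 2-D "driving styles"; it is far smaller than the 20-feature setup above and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic driving styles in a 2-D feature space
# (speed-like and acceleration-like features, normalized to [0, 1]).
calm = rng.normal([0.2, 0.1], 0.05, size=(100, 2))
aggressive = rng.normal([0.9, 0.8], 0.05, size=(100, 2))
data = np.vstack([calm, aggressive])

n_nodes = 4
weights = rng.uniform(0, 1, size=(n_nodes, 2))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                  # decaying learning rate
    radius = max(1.0 * (1 - epoch / 30), 0.01)   # shrinking neighbourhood
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
        grid_dist = np.abs(np.arange(n_nodes) - bmu)   # distance on the 1-D map
        h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        weights += lr * h[:, None] * (x - weights)

# After training, the two styles should map to different units.
bmu_calm = np.argmin(np.linalg.norm(weights - calm.mean(axis=0), axis=1))
bmu_aggr = np.argmin(np.linalg.norm(weights - aggressive.mean(axis=0), axis=1))
```

Which unit a new trip's feature vector activates is then read off as its cluster, exactly the per-cluster assignment the combined algorithm uses to flag dangerous styles.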
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in object classification. HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been adopted. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve the inherent semantic features of objects. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining intrinsic object qualities. Lastly, a soft-margin kernel is proposed for a multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Final project report on grocery store management system..pdfKamal Acharya
In today's fast-changing business environment, it is extremely important to be able to respond to client needs in the most effective and timely manner, especially if your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website that retails various grocery products. The project lets visitors view the available products and enables registered users to purchase desired products instantly using the Paytm or UPI payment processors (Instant Pay), or to place an order using the cash-on-delivery (Pay Later) option. It also provides easy access for administrators and managers to view orders placed using the Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of technologies must be studied and understood. These include multi-tiered architecture, server- and client-side scripting techniques, implementation technologies, programming languages (such as PHP, HTML, CSS, and JavaScript), and MySQL relational databases. The objective of this project is to develop a basic shopping cart website and to understand the technologies used to build such a website.
This document will discuss each of the underlying technologies used to create and implement an e-commerce website.
Water scarcity is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity: physical and economic.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 11, No. 5, October 2021, pp. 4381-4391
ISSN: 2088-8708, DOI: 10.11591/ijece.v11i5.pp4381-4391
Journal homepage: http://ijece.iaescore.com
Forecasting number of vulnerabilities using long short-term neural memory network

Mohammad Shamsul Hoque (1), Norziana Jamil (2), Nowshad Amin (3), Azril Azam Abdul Rahim (4), Razali B. Jidin (5)

(1, 2, 5) College of Computing and Informatics, Universiti Tenaga Nasional, Malaysia
(3) Institute of Sustainable Energy, Universiti Tenaga Nasional, Malaysia
(4) Tenaga Nasional Berhad, Kuala Lumpur, Malaysia
Article Info

Article history:
Received Aug 9, 2020
Revised Mar 12, 2021
Accepted Mar 27, 2021

ABSTRACT
Cyber-attacks are launched through the exploitation of existing vulnerabilities in software, hardware, systems and/or networks. Machine learning algorithms can be used to forecast the number of post-release vulnerabilities. Traditional neural networks work as a black box; hence it is unclear how past data points are used in inferring subsequent data points. The long short-term memory network (LSTM), a variant of the recurrent neural network, addresses this limitation by introducing recurrent loops that retain and utilize past data points in future calculations. Building on our previous finding, we further improve the prediction of the number of vulnerabilities by developing a time series-based sequential model using a long short-term memory neural network. Specifically, this study developed a supervised, non-linear sequential time series forecasting model with a long short-term memory neural network to predict the number of vulnerabilities for the three vendors with the highest number of vulnerabilities published in the national vulnerability database (NVD), namely Microsoft, IBM and Oracle. Our proposed model outperforms existing models with a prediction root mean squared error (RMSE) as low as 0.072.
Keywords:
Information security
Long short-term memory network
Recurrent neural network
Supervised machine learning
Threat intelligence
Time series
Vulnerability prediction model
This is an open access article under the CC BY-SA license.
Corresponding Author:
Mohammad Shamsul Hoque
College of Computing and Informatics
Universiti Tenaga Nasional, Malaysia
Email: shams_uttara@gmail.com
1. INTRODUCTION
A vulnerability is a weakness of a system, and such weaknesses lead to most security breaches. In some successful cyber-attacks, hackers exploit zero-day vulnerabilities before their existence is noticed by the security administrator [1], [2]. The importance of security focus has dramatically increased, and past vulnerability trends and reviews have therefore become more valuable to consider in the acquisition of a new software system [3]. Machine learning [4] is a subfield of computer science and an application of artificial intelligence (AI) that "gives computers the ability to learn without being explicitly programmed" [5]. Data modelling for predicting vulnerabilities, which produces deployable machine learning models, can be used as a proxy to predict the number of future vulnerabilities or to identify the vulnerabilities most likely to be exploitable. The authors in [5] and [6] showed that addressing security issues during the software development phase is an arduous task, since project managers usually focus on cost and the timely delivery of products, resulting in a high probability of the existence of vulnerabilities.
Researchers have used various forms of vulnerability databases in developing models of vulnerability disclosure trends. Most of these researchers aim to develop models that can predict the number of future product vulnerabilities from historical data through regression and time series analysis [7]-[15]. However, these models suffer to various extents from limitations stemming from their underlying assumptions, their choice of machine learning algorithms and high accuracy expectations. Linear algorithms are unable to capture the underlying non-linear structure of the data. Neural network models, for example, traditionally follow a black box approach that is not mathematically tractable and cannot be easily interpreted. Besides that, in non-linear models other than the long short-term memory network, the number of inputs has to be selected beforehand, making learning impossible for functions that depend on historical input from long ago. Since security activities for software and systems are highly resource-intensive, the models are expected to deliver high vulnerability prediction accuracy for vendors, end users and businesses [16]. In this research, the prediction of the number of future vulnerabilities is treated as a supervised sequential time series forecasting problem, and the work makes the following contributions:
- Creation of a new, larger dataset for each selected vendor (Microsoft, Oracle and IBM), using a novel feature-aggregation technique on open-source data, giving many more examples with which to efficiently train the LSTM network than previous work [15]. Our dataset (created by aggregating on the "Published Date" feature) for the Microsoft vendor, for example, has 1030 examples, whereas in [15] the dataset was prepared by monthly aggregation and has approximately 252 examples each. To the best of our knowledge, these are the first datasets with such a feature, created by grouping on the published date, to be utilized for a vulnerability prediction model.
- Utilization of a deep learning algorithm, the long short-term memory (LSTM) network, a variant of the recurrent neural network (RNN) known for its unique capability of retaining past information of a series when calculating activation functions and weights, to forecast future values with higher accuracy. To the best of our knowledge, this research is the first to utilize the long short-term memory neural network with the national vulnerability dataset to develop a machine learning model that forecasts the number of future vulnerabilities.
- The proposed model in this research achieves an accuracy (root mean squared error) value as low as 0.072, outperforming existing models in terms of prediction accuracy and in following distribution trends.
The rest of the paper is organized as follows: Section 2 describes the related work. Section 3 describes the dataset and method used in the analysis. Section 4 describes the advantages of the LSTM model in time series prediction, Section 5 presents the results and analysis, and Section 6 concludes with future work.
2. RELATED RESEARCH
Lyu [9] surveyed software defect detection processes using software reliability growth models. Anderson proposed the Anderson Thermodynamic (AT) time-based vulnerability discovery model, which is considered a pioneer of such research [16]. Alhazmi and Malaiya proposed a time-based application of software reliability growth modelling (SRGM) to predict the number of vulnerabilities, and later also proposed a logistic regression model for Windows 98 and NT 4.0 to predict the number of undiscovered vulnerabilities [17]. Rescorla proposed two time-based trend models, the linear model (RL) and the exponential model (RE), to estimate future vulnerabilities [18]. Kim [19] proposed a new Weibull distribution-based vulnerability discovery model (VDM), compared it with [20], and found that their model performed better in many cases. Several other studies have built further on existing VDMs for various software packages, with the aim of improving the vulnerability discovery rate and the prediction of future vulnerability counts [21]-[28].
Movahedi et al. [29] compared nine common vulnerability discovery models (VDMs) with a nonlinear neural network model (NNM) over a prediction period of three years. The common VDMs are the NHPP power-law gamma-based VDM, the Weibull-based VDM, the AML VDM, the normal-based VDM, Rescorla exponential (RE), Rescorla quadratic (RQ), Younis folded (YF) and the linear model (LM). These models used the NVD dataset, with the feedforward NNM having a single hidden layer, to forecast vulnerabilities for four well-known operating systems and four well-known web browsers, and the models were assessed in terms of prediction accuracy and prediction bias. The results showed that in terms of prediction accuracy, the NNM has outperformed the
VDMs in all cases, while regarding the overall magnitude of bias, the NNM provided the smallest value against seven of the eight common VDMs.
In recent times, Pokhrel et al. [15] proposed a vulnerability prediction model based on a non-linear approach using time series analysis. They utilized the NVD database with autoregressive integrated moving average (ARIMA), artificial neural network (ANN), and support vector machine (SVM) techniques to develop prediction models, and selected operating systems such as Windows 7, Mac OS X, and Linux Kernel for the experiments. As an example of the results, the best model for Windows 7 was produced by SVM, with symmetric mean absolute percentage error (SMAPE), mean absolute error (MAE), and root mean squared error (RMSE) values of 0.12, 3.15 and 3.58 respectively. However, since nonlinearity is a common trend in vulnerability disclosure, traditional time series-based modelling may always have limited capability to attain high prediction accuracy. In our study, the number of vulnerabilities is modelled with newly-created datasets using a sequential LSTM network, and the prediction accuracy was found to improve over recent work such as [15].
3. METHOD
Vulnerability data were extracted using a custom-coded web scraper to dump the data for the top three vendors from MITRE's CVE website [16], spanning 1997 to 2019. The numbers of vulnerabilities for Microsoft, Oracle and IBM are 6814, 6115 and 4679 respectively, as shown in Figures 1 and 2. A new dataset was then created for each vendor by grouping on the "Published Date" attribute. Another attribute, "CVE ID", was aggregated as the size of each group, and the dataset was then organized chronologically (in ascending order) to prepare the time series. These datasets therefore contain many more examples with which to train the deep learning algorithms compared to previous works. For example, our dataset for the Microsoft vendor contained 1090 examples, whereas in [16] the dataset was prepared by monthly aggregation and contained approximately 252 examples per dataset.
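The grouping and aggregation just described can be sketched with pandas; the column names and toy records below are illustrative, not the paper's actual scraped schema:

```python
import pandas as pd

# Hypothetical toy CVE dump; in the paper this is scraped from MITRE's CVE site.
raw = pd.DataFrame({
    "CVE ID": ["CVE-1999-0001", "CVE-1999-0002", "CVE-1999-0003", "CVE-1999-0004"],
    "Published Date": ["1999-01-04", "1999-01-04", "1999-02-10", "1999-02-10"],
})
raw["Published Date"] = pd.to_datetime(raw["Published Date"])

# Group by "Published Date", count "CVE ID" per group, and sort
# chronologically to form the univariate series of per-date counts.
daily_counts = (raw.groupby("Published Date")["CVE ID"]
                   .count()
                   .sort_index())

print(daily_counts.tolist())  # one count per distinct publication date
```

Each row of the resulting series is one training example, which is why date-level grouping yields far more examples than monthly aggregation.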
Figure 1. Top-three vendors by vulnerability count (snipped from [30])
Figure 2. Attributes of dataset (for example Microsoft) (snipped from [30])
The whole research method consists of three phases. The first phase deals with collecting vulnerability data for the top three vendors by number of vulnerabilities from the open-source national vulnerability database, with a custom-built web scraper, in CSV format. The second phase involves data preparation, cleaning and engineering, which covers the bulk of the activities common to a usual data modelling project. Major activities include the analysis of distribution characteristics such as stationarity, seasonality and linearity, data cleaning, feature engineering, and data wrangling and aggregation to create the larger novel training datasets. The third phase involves experimentation and result analysis, where the existing best performing model [16] was set as the baseline for comparison and improvement. The long short-term memory network was utilized for the modelling, and parameter optimization was performed through grid search. The following Figure 3 shows an overall view of the research process:
Figure 3. Method of the research
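The grid search used for parameter optimization can be sketched as follows; the evaluate() function is a placeholder standing in for a full train-and-validate run and is not the authors' code:

```python
from itertools import product

# Placeholder objective: pretends a validation RMSE that is lowest at
# 4 hidden units and 150 epochs (the configuration the paper settles on).
def evaluate(units, epochs):
    return abs(units - 4) * 0.01 + abs(epochs - 150) * 0.0001

# Illustrative grid over the two tuned hyperparameters.
grid = {"units": [2, 4, 8], "epochs": [50, 100, 150]}

# Exhaustively try every combination and keep the lowest-scoring one.
best = min(product(grid["units"], grid["epochs"]),
           key=lambda cfg: evaluate(*cfg))
print(best)  # (4, 150)
```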
4. ADVANTAGES OF THE LONG SHORT-TERM MEMORY MODEL IN TIME SERIES FORECASTING
A special ANN called the recurrent neural network (RNN) is designed to use loops to persist information based on previously learnt knowledge. These looped networks apply the same processing to each node in a sequence, and are hence termed recurrent. RNNs maintain memory cells to capture information from past data points in the sequence, as shown in Figure 4. In modelling dependencies between two points in a sequence, RNN models usually perform better than NNMs. When applying NNMs, in most cases the length of the input (number of inputs) must be chosen beforehand, so the algorithm is unable to learn functions that depend on data points from long ago in the sequence. The RNN avoids this drawback, since it is able to store information from data points far in the past.
Figure 4. An unrolled recurrent neural network (Source: [31])
However, one significant drawback of RNNs is that as the data sequence grows, the network tends to lose historical context over time. A variant of the RNN, the long short-term memory (LSTM) network, solves this problem by maintaining cells that keep information from previous nodes irrespective of the sequence size. An LSTM network maintains three or four gates [32], of which the input, output and forget gates are common, as shown in Figure 5. The notations t, C and h indicate one step in time, the cell state and the hidden state respectively. The gates i (input), f (forget) and o (output), most usually modelled with a sigmoid layer (values ranging between 0 and 1), help protect and control the addition and removal of information in a cell state. The
LSTMs therefore try to overcome the shortcomings of RNN models in handling long-term dependencies, mitigating the problem of the neurons' weight matrix becoming too small (which may lead to vanishing gradients) or too large (which may cause exploding gradients). The following steps show the mathematical functions [33], [34] of a long short-term memory neural network in typical usage, such as in this research experiment:
Stage 1
For a forward pass in an LSTM block, let xt be the input vector at time t, and let N and M be the number of LSTM blocks and the number of inputs respectively. An LSTM layer then has four sets of weights:
Input weights: Wz, Wi, Wf, Wo ∈ R^(N×M)
Peephole weights: pi, pf, po ∈ R^N
Recurrent weights: Rz, Ri, Rf, Ro ∈ R^(N×N)
Bias weights: bz, bi, bf, bo ∈ R^N
With reference to the weights above, the vector formulas for a forward pass of an LSTM layer are as follows (σ, g and h are activation functions):
z̄t = Wz xt + Rz yt−1 + bz
zt = g(z̄t) [block input]
īt = Wi xt + Ri yt−1 + pi • ct−1 + bi
it = σ(īt) [input gate]
f̄t = Wf xt + Rf yt−1 + pf • ct−1 + bf
ft = σ(f̄t) [forget gate]
ct = zt • it + ct−1 • ft [LSTM cell]
ōt = Wo xt + Ro yt−1 + po • ct + bo
ot = σ(ōt) [output gate]
yt = h(ct) • ot [block output]
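A minimal NumPy sketch of one forward step, following the equations above term by term (the weights are randomly initialized purely for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, y_prev, c_prev, W, R, p, b):
    """One forward step of an LSTM layer with peephole connections,
    following the z / i / f / o equations above."""
    z = np.tanh(W["z"] @ x_t + R["z"] @ y_prev + b["z"])            # block input, g = tanh
    i = sigmoid(W["i"] @ x_t + R["i"] @ y_prev + p["i"] * c_prev + b["i"])  # input gate
    f = sigmoid(W["f"] @ x_t + R["f"] @ y_prev + p["f"] * c_prev + b["f"])  # forget gate
    c = z * i + c_prev * f                                          # cell state
    o = sigmoid(W["o"] @ x_t + R["o"] @ y_prev + p["o"] * c + b["o"])       # output gate
    y = np.tanh(c) * o                                              # block output, h = tanh
    return y, c

rng = np.random.default_rng(0)
N, M = 3, 2                                    # N blocks, M inputs
W = {k: rng.normal(size=(N, M)) for k in "zifo"}
R = {k: rng.normal(size=(N, N)) for k in "zifo"}
p = {k: rng.normal(size=N) for k in "ifo"}
b = {k: np.zeros(N) for k in "zifo"}

y, c = lstm_step(rng.normal(size=M), np.zeros(N), np.zeros(N), W, R, p, b)
print(y.shape, c.shape)  # (3,) (3,)
```

Because the output is tanh(c) gated by a sigmoid, every component of y stays strictly inside (−1, 1).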
Typically, the logistic sigmoid is the activation function at the LSTM gates, while the hyperbolic tangent is the activation function at the block input and output.
Stage 2
After the forward pass functions are computed, the following delta functions are computed inside the LSTM block for backpropagation through time:
δyt = Δt + Rz^T δzt+1 + Ri^T δit+1 + Rf^T δft+1 + Ro^T δot+1
δōt = δyt • h(ct) • σ′(ōt)
δct = δyt • ot • h′(ct) + po • δōt + pi • δīt+1 + pf • δf̄t+1 + δct+1 • ft+1
δf̄t = δct • ct−1 • σ′(f̄t)
δīt = δct • zt • σ′(īt)
δz̄t = δct • it • g′(z̄t)
Here Δt denotes the vector of deltas propagated down from the layer above; if E is the loss function, it corresponds to ∂E/∂yt. Input deltas are only required when the layer below the input requires training, in which case the delta for the input is:
δxt = Wz^T δz̄t + Wi^T δīt + Wf^T δf̄t + Wo^T δōt
Stage 3
At the final stage, the gradients used to adjust the weights are computed with the following functions, where ★ denotes any of the vectors z, i, f, o and ⟨★1, ★2⟩ is the outer product of the two vectors:
δW★ = Σ(t=0..T) ⟨δ★t, xt⟩
δR★ = Σ(t=0..T−1) ⟨δ★t+1, yt⟩
δb★ = Σ(t=0..T) δ★t
δpi = Σ(t=0..T−1) ct • δīt+1
δpf = Σ(t=0..T−1) ct • δf̄t+1
δpo = Σ(t=0..T) ct • δōt
RNNs, unlike ARIMA, have the capability to learn nonlinearities in a data sequence, and specialized nodes such as LSTM nodes handle this even more efficiently. One requirement of the LSTM network is a long sequence for learning past dependencies, which is why this research aims to create the long sequence described in the previous section.
Figure 5. An LSTM node (Source: [31])
5. RESULTS AND ANALYSIS
Prior to applying a time series dataset into the machine learning algorithm, it is important to analyse
the stationarity. The initial null hypothesis is that the time series at hand is non-stationary. Figures 6-8 show
the rolling stats plots (red lines) where it can be observed that the number of vulnerabilities is low and stable
at the beginning of time but shows spikes and lows as time grows. Generally, there will be an upward trend in
the number of vulnerabilities over the course of time. However, a reliable trend or seasonality which does not
demonstrate hikes and falls with a clear trend or seasonality signifies that it is a stationary sequence. The
augmented dickey fuller test (ADF) with log transformation shows that the test statistical values as shown in
Figures 9-11 are greater than any of the critical values, thus leading to the rejection of the null hypothesis (the
data distribution is not stationary) in all cases. Although unlike traditional algorithms, the ARIMA
'stationarity' is less brittle with LSTM (RNN), the stationarity of the series will be sufficient for its
performance. Furthermore, if the series is not stationary, it is then much safer for a transformation to be
performed in making the data stationary and then train the LSTM.
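Such a transformation (a log transform followed by first differencing is a common choice) can be sketched on a toy trending series; the series below is illustrative, not the NVD data:

```python
import numpy as np

# Toy non-stationary series with exponential growth plus oscillation,
# loosely mimicking the rising vulnerability counts.
t = np.arange(1, 201, dtype=float)
series = np.exp(0.01 * t) * (1.0 + 0.1 * np.sin(t))

# Log transform tames the exponential growth; first differencing
# removes the remaining linear trend.
log_diff = np.diff(np.log(series))

# After the transformation the mean should be roughly constant over time:
first_half, second_half = log_diff[:100].mean(), log_diff[100:].mean()
print(abs(first_half - second_half) < 0.01)  # True
```

An ADF re-test on the transformed series would then confirm stationarity before the LSTM is trained.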
Figure 6. Rolling mean standard deviation (Microsoft)
Figure 7. Rolling mean standard deviation (Oracle)
7. Int J Elec & Comp Eng ISSN: 2088-8708
Forecasting number of vulnerabilities using long short-term neural memory… (Mohammad Shamsul Hoque)
4387
Figure 8. Rolling mean standard deviation (IBM)
Figure 9. Test statistics value (Microsoft) (snipped
from Spyder IDE)
Figure 10. Test statistics value (Oracle) (snipped
from Spyder IDE)
Figure 11. Test statistics value (IBM) (snipped from Spyder IDE)
Contrary to feedforward neural networks, RNNs maintain memory cells to remember the context of the sequence and use these cells to process the output. The time distributed layer wrapper provided by Keras is applied over the dense layer to model the data as a time series sequence: it applies the same transformation to each time step in the sequence and provides an output after each step. It therefore fulfils the requirement of obtaining a processed output after each input, instead of waiting for the whole process to complete to obtain the output at the end [34]. In this way, the RNN algorithm applies the same transformation to every element of the time series, where the output values depend on the previous values. The LSTM model was created with parameter optimization, using 4 hidden units and 150 epochs. Figures 12-14 show the forecast plots for the different datasets. The proposed model outperformed previous models in terms of accuracy. Table 1 shows the accuracy (RMSE) results of the model, together with a comparison against the previous best performing models [15].
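Before an LSTM with a time distributed dense output can be fitted, the daily-count series must be framed as supervised (samples, timesteps, features) tensors. A minimal sketch of that windowing step follows; the window length and toy counts are illustrative, not the paper's exact configuration:

```python
import numpy as np

def make_windows(series, n_steps):
    """Frame a univariate series as supervised (X, y) pairs: each sample
    holds n_steps past values and the target is the next value, giving
    the (samples, timesteps, features) shape an LSTM layer expects."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    X = np.asarray(X, dtype=float).reshape(-1, n_steps, 1)
    return X, np.asarray(y, dtype=float)

# Toy daily vulnerability counts (illustrative values only).
counts = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], dtype=float)
X, y = make_windows(counts, n_steps=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```

A Keras model along the lines of LSTM(4) followed by a time distributed dense layer, trained for 150 epochs, would then consume these tensors.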
The forecast plots in Figures 12-14 show a promising picture. The grey, green and blue lines show the true values, training fit and testing forecast respectively. It can be observed that the training fit is
nearly perfect for the IBM vendor vulnerability dataset, which achieved the highest accuracy (root mean squared error value of 0.072). The testing forecast also performed decently. Although the results for the Microsoft and Oracle vulnerability datasets (RMSE values of 0.17 and 0.24 respectively) are not as good as for the IBM dataset, they are much better than the best recent research in [15], as shown in Table 1.
The performance variation for the Microsoft and Oracle datasets is due to high spikes in recent data points, since greater numbers of vulnerabilities have been discovered in recent years (2016-2019) in the NVD database [35], in contrast to the absence of such sudden spikes in past data points; thus, those models were unable to learn adequately. As more data are added to the national vulnerability database, these models will eventually be able to learn adequately for improved performance. Even though the forecast deviates from the actual values in places, the overall performance in terms of RMSE is promising. Through the use of the time distributed layer wrapper, these models not only show better performance but are also simpler, reducing computational costs in comparison with the regression approach of some previous research. Unlike previous research, the larger novel dataset of this work effectively addresses the LSTM network's requirement for large training data to obtain meaningful results.
Figure 12. Forecasted plot (Microsoft)
Figure 13. Forecasted plot (Oracle)
Figure 14. Forecasted plot (IBM)
Table 1. Results and comparison

Models of this research (LSTM)   |  Best previous models proposed by [15]
Vendor      RMSE                 |  Vendor        Best model   RMSE
Microsoft   0.17                 |  Mac OS X      ARIMA        19.64
Oracle      0.24                 |  Windows 7     SVM          3.58
IBM         0.072                |  Linux Kernel  SVM          3.99
6. CONCLUSION
In this research, the ability of the long short-term memory (LSTM) network to learn on its own, and to use that knowledge to determine the impact of context and the quantity of past information, has been utilized effectively to forecast the number of post-release vulnerabilities for the top three vendors, namely Microsoft, Oracle and IBM. In the process, we created a novel dataset for the top three vendors with the highest numbers of vulnerabilities in the NVD, containing many more historical data points than previous research, to train the LSTM network, which prefers a larger dataset in order to learn and demonstrate better accuracy. To the best of our knowledge, this work is the first to utilize such a dataset feature and an LSTM deep learning network for this problem in developing a vulnerability discovery model (VDM). Our model demonstrated better accuracy (root mean squared error) than other existing models. Developers, the user community, and individual organizations can utilize our model according to their respective requirements to manage cyber threats in a cost-effective way. However, an inherent limitation of the LSTM network is that it is not effective with small datasets; hence, there is always an incentive to create even larger datasets than the one utilized in this work for future improvement. Another limitation is that, since the LSTM network uses many loops and memory nodes, it requires longer computation time; this drawback can be addressed with powerful machines in a production scenario. Since the long short-term memory (LSTM) recurrent neural network is fully capable of handling multiple input variables when predicting future values, we plan as future work to extract and generate more novel features from the NVD database to develop a multivariate time series model for this problem. We also plan to develop a model with the 'Published Date' feature as the target label and to integrate it with the current model to add a novel vulnerability forecasting service.
ACKNOWLEDGEMENT
The authors acknowledge Universiti Tenaga Nasional (UNITEN) for its support to the researchers through research grant code RJO10517844/063 under the BOLD program. Appreciation is also extended to the ICT Ministry of Bangladesh for its support.
FUNDING
This project was funded by the TNB Seed Fund 2019 Project under project code 'U-TC-RD-19-09' and by the ICT Ministry, Bangladesh.
REFERENCES
[1] HP Identifies Top Enterprise Security Threats 2014, [Online]. Available: https://www8.hp.com/us/en/hp-news/press-release.html?id=1571359.
[2] S. Frei, M. May, U. Fiedler, and B. Plattner, “Large-scale Vulnerability Analysis,” In Proceedings of the 2006
SIGCOMM workshop on Large-scale attack defense, 2006, pp. 131-136, doi: 10.1145/1162666.1162671.
[3] S. Frei, D. Schatzmann, B. Plattner, and B. Trammell, “Modelling the Security Ecosystem-The Dynamics of (In)
Security,” in WEIS, pp. 79-106, 2009, doi: 10.1007/978-1-4419-6967-5_6.
[4] Machine Learning, [Online]. Available: http://www.contrib.andrew.cmu.edu/~mndarwis/ML.html.
[5] A. Mourad, M. A. Laverdiere, and M. Debbabi, “An aspect-oriented approach for the systematic security hardening
of code,” Computer and Security, vol. 27, no. 3-4, pp. 101-114, 2008, doi: 10.1016/j.cose.2008.04.003.
[6] A. Mourad, M. A. Laverdiere, and M. Debbabi, "High-level aspect-oriented-based framework for software security hardening," Information Security Journal: A Global Perspective, vol. 17, no. 2, pp. 56-74, 2008, doi: 10.1080/19393550801911230.
[7] Z. Han, X. Li, Z. Xing, H. Liu, and Z. Feng, "Learning to Predict Severity of Software Vulnerability Using Only
Vulnerability Description," 2017 IEEE International Conference on Software Maintenance and Evolution
(ICSME), Shanghai, China, 2017, pp. 125-136, doi: 10.1109/ICSME.2017.52.
[8] H. Okamura, M. Tokuzane and T. Dohi, "Optimal Security Patch Release Timing under Non-homogeneous
Vulnerability-Discovery Processes," 2009 20th International Symposium on Software Reliability Engineering,
Mysuru, India, 2009, pp. 120-128, doi: 10.1109/ISSRE.2009.19.
[9] M. R. Lyu, “Handbook of software reliability engineering,” IEEE Computer Society Press; McGraw Hill, Los
Alamitos, California: New York. 1996.
[10] J. A. Ozment, “Vulnerability discovery and software security,” 2007. PhD Thesis, University of Cambridge, 2007.
[11] E. Rescorla, “Security holes... Who cares?,” In USENIX Security Symposium, 2003, p. 75-90.
[12] O. H. Alhazmi and Y. K. Malaiya, "Modeling the vulnerability discovery process," 16th IEEE International
Symposium on Software Reliability Engineering (ISSRE'05), Chicago, IL, USA, 2005, pp. 1-10,
doi: 10.1109/ISSRE.2005.30.
[13] O. H. Alhazmi and Y. K. Malaiya, "Quantitative vulnerability assessment of systems software," Annual Reliability
and Maintainability Symposium, 2005. Proceedings, Alexandria, VA, USA, 2005, pp. 615-620,
doi: 10.1109/RAMS.2005.1408432.
[14] Y. Roumani, J. K. Nwankpa, and Y. F. Roumani, “Time series modeling of vulnerabilities,” Computers and
Security, vol. 51, pp. 32-40, 2015, doi: 10.1016/j.cose.2015.03.003.
[15] N. R. Pokhrel, H. Rodrigo, and C. P. Tsokos, “Cybersecurity: Time Series Predictive Modeling of Vulnerabilities
of Desktop Operating System Using Linear and Non Linear Approach,” Journal of Information Security, vol. 8,
no. 04, pp. 362-382, 2017, doi: 10.4236/jis.2017.84023.
[16] R. J. Anderson, “Security in Opens versus Closed Systems-The Dance of Boltzmann, Coase and Moore,” Open
Source Software: Economics, Law and Policy, Toulouse, France, June 20-21, pp. 1-15, 2002.
[17] O. H. Alhazmi and Y. K. Malaiya, "Application of Vulnerability Discovery Models to Major Operating Systems,”
in IEEE Transactions on Reliability, vol. 57, no. 1, pp. 14-22, March 2008, doi: 10.1109/TR.2008.916872.
[18] E. Rescorla, “Is finding security holes a good idea?,” in IEEE Security and Privacy, vol. 3, no. 1, pp. 14-19, Jan.-
Feb. 2005, doi: 10.1109/MSP.2005.17.
[19] J. Kim, Y. K. Malaiya and I. Ray, “Vulnerability Discovery in Multi-Version Software Systems,” 10th IEEE High
Assurance Systems Engineering Symposium (HASE'07), Plano, TX, USA, 2007, pp. 141-148,
doi: 10.1109/HASE.2007.55.
[20] P. Li, M. Shaw, and J. Herbsleb, “Selecting a defect prediction model for maintenance resource planning and
software insurance,” EDSER-5 Affiliated with ICSE, 2003, pp. 32-37.
[21] M. Xie, "Software Reliability Modelling," World Scientific Publishing Co. Pte. Ltd, Singapore, vol. 1, pp. 10-11, 1991.
[22] S. Woo, O. H. Alhazmi, and Y. K. Malaiya, “Assessing Vulnerabilities in Apache and IIS HTTP Servers,” 2006
2nd IEEE International Symposium on Dependable, Autonomic and Secure Computing, Indianapolis, IN, USA,
2006, pp. 103-110, doi: 10.1109/DASC.2006.21.
[23] F. Massacci and V. H. Nguyen, “An Empirical Methodology to Evaluate Vulnerability Discovery Models,” in IEEE
Transactions on Software Engineering, vol. 40, no. 12, pp. 1147-1162, 2014, doi: 10.1109/TSE.2014.2354037.
[24] A. K. Shrivastava, R. Sharma and P. K. Kapur, “Vulnerability discovery model for a software system using
stochastic differential equation,” 2015 International Conference on Futuristic Trends on Computational Analysis
and Knowledge Management (ABLAZE), Greater Noida, India, 2015, pp. 199-205,
doi: 10.1109/ABLAZE.2015.7154992.
[25] H. C. Joh and Y. K. Malaiya, “Modeling Skewness in Vulnerability Discovery,” Quality and Reliability
Engineering International, vol. 30, no. 8, pp. 1445-1459, 2014, doi: 10.1002/qre.1567.
[26] A. A. Younis, H. Joh, and Y. Malaiya, “Modeling Learningless Vulnerability Discovery using a Folded
Distribution,” In: Proceedings of the International Conference on Security and Management (SAM), 2011,
pp. 617-623.
[27] O. H. Alhazmi and Y. K. Malaiya, “Measuring and Enhancing Prediction Capabilities of Vulnerability Discovery
Models for Apache and IIS HTTP Servers,” 2006 17th International Symposium on Software Reliability
Engineering, Raleigh, NC, USA, 2006, pp. 343-352, doi: 10.1109/ISSRE.2006.26.
[28] L. Wang, Y. Zeng, and T. Chen, “Back propagation neural network with adaptive differential evolution algorithm
for time series forecasting,” Expert Systems with Applications, vol. 42, no. 2, pp. 855-863, 2015, doi:
10.1016/j.eswa.2014.08.018.
[29] Y. Movahedi, M. Cukier, A. Andongabo, and I. Gashi, “Cluster-Based Vulnerability Assessment Applied to
Operating Systems,” 2017 13th European Dependable Computing Conference (EDCC), Geneva, Switzerland,
2017, pp. 18-25, doi: 10.1109/EDCC.2017.27.
[30] Top 50 Vendors by Total Number of “Distinct” Vulnerabilities, [Online]. Available:
https://www.cvedetails.com/top-50-vendors.php.
[31] Understanding LSTM Networks, [Online]. Available: http://colah.github.io/posts/2015-08-Understanding-LSTMs/.
[32] How to work with Time Distributed data in a neural network, [Online]. Available:
https://medium.com/smileinnovation/how-to-work-with-time-distributed-data-in-a-neural-network-b8b39aa4ce00.
[33] LSTM Forward and Backward Pass, [Online]. Available:
http://arunmallya.github.io/writeups/nn/lstm/index.html#/.
[34] A Brief Summary of the Math Behind RNN (Recurrent Neural Networks), [Online]. Available:
https://medium.com/towards-artificial-intelligence/a-brief-summary-of-maths-behind-rnn-recurrent-neural-networks-b71bbc183ff.
[35] NIST, National Vulnerability Database, [Online]. Available: https://nvd.nist.gov/.
Int J Elec & Comp Eng, ISSN: 2088-8708
Forecasting number of vulnerabilities using long short-term neural memory… (Mohammad Shamsul Hoque)
BIOGRAPHIES OF AUTHORS
Mohammad Shamsul Hoque holds a Master’s degree in information security and
digital forensics from the University of East London, U.K. He is currently a PhD student at CCI,
Universiti Tenaga Nasional, Malaysia. His research focuses on cyber security and artificial
intelligence for critical systems and facilities.
Norziana Binti Jamil received her PhD (Security in Computing) from Universiti Putra Malaysia
in 2013. She is currently an Associate Professor at the College of Computing and Informatics,
Universiti Tenaga Nasional, Malaysia. Her areas of specialization and research interest are
cryptography, security of cyber-physical systems, privacy-preserving techniques, and security
intelligence and analytics.
Nowshad Amin received his PhD in Electrical and Electronics Engineering from the Tokyo Institute
of Technology, Japan. He is a Professor in Renewable Energy and Solar Photovoltaics at the
Institute of Sustainable Energy (ISE), Universiti Tenaga Nasional, Malaysia. His research interests
are micro- and nano-electronics, solar photovoltaic energy applications, smart controllers, HEMS, and
cyber security in power plant systems.
Azril Azam Abdul Rahim is currently a Senior Manager with Tenaga
Nasional Berhad (TNB), where he manages Cyber Threat Intelligence Management for
TNB. To date, he has more than 20 years of experience in cyber security. He graduated with a double
degree in computer science and business operation from the University of Missouri, and also
holds various certifications such as GCFA, ECSP, CEH, and CEI.
Razali B. Jidin’s past research activities were on embedded systems (ES), focusing on the implementation
of multi-threading semaphores in Field Programmable Gate Arrays (FPGA), targeted at real-
time systems. His recent activities are on architectures and mathematical transforms of lightweight
cryptography for implementation on resource-constrained devices. His other research
activities were related to renewable energy, with projects such as finding an optimal combination of
diesel and renewable energies for a resort island, photovoltaic and battery storage for underserved
communities, and applications of the Internet of Things (IoT). Dr. Razali obtained his Ph.D. in Electrical
Engineering & Computer Science from the University of Kansas, Lawrence, KS, USA. He is registered with
the Board of Engineers Malaysia (BEM) as a Professional Electrical Engineer, and with the Institution of
Engineers Malaysia in the area of Control & Instrumentation.