As network traffic grows and new intrusions appear, anomaly detection solutions based on machine learning are needed to detect previously unknown intrusion patterns. Most existing models require a labelled dataset, which is difficult to obtain given the shortage of publicly available datasets. Those that exist are often too small to train machine learning models effectively, which further motivates the use of real unlabelled traffic. Real traffic makes it possible to simulate the kinds of anomalies that occur in a real-world network more accurately and to improve the detection model's performance. We present a method that predicts and categorizes anomalies without the aid of a labelled dataset, demonstrating the model's usability while also gathering a dataset from real, noisy network traffic. The proposed long short-term memory (LSTM) based intrusion detection system was tested in the real-world setting of an antivirus company and successfully detected various intrusions using 5-minute windowing over both the predicted and observed update curves, demonstrating its usefulness. Our contribution is a robust model generally applicable to any hypertext transfer protocol (HTTP) traffic with near real-time anomaly detection, which also outperforms earlier studies in prediction accuracy.
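To make the windowed comparison concrete, here is a minimal sketch: the mean absolute error between the predicted and observed curves is computed per non-overlapping window, and windows whose error exceeds a threshold are flagged. The 5-sample window, the threshold, and the toy series are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: flag anomalies by comparing a predicted traffic curve
# against the observed one over fixed windows.

def window_errors(predicted, observed, window=5):
    """Mean absolute error of each non-overlapping window."""
    errors = []
    for start in range(0, len(observed) - window + 1, window):
        diffs = [abs(p - o) for p, o in
                 zip(predicted[start:start + window],
                     observed[start:start + window])]
        errors.append(sum(diffs) / window)
    return errors

def flag_anomalies(predicted, observed, window=5, threshold=10.0):
    """Indices of windows whose mean error exceeds the threshold."""
    return [i for i, e in enumerate(window_errors(predicted, observed, window))
            if e > threshold]

if __name__ == "__main__":
    pred = [100.0] * 20
    obs = [101.0] * 10 + [150.0] * 5 + [99.0] * 5   # burst in window 2
    print(flag_anomalies(pred, obs))                 # -> [2]
```

In practice the predicted curve would come from the trained LSTM rather than a constant baseline.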
Research Inventy: International Journal of Engineering and Science (researchinventy)
This document summarizes a research paper that proposes using principal component analysis and sequential hypothesis testing in a game theory framework to detect intrusions in a mobile ad hoc network. Specifically, it detects replica node attacks by tracking features like source/destination addresses and routing requests/replies to build profiles for each node. It then applies sequential probability ratio testing on routing requests and replies to test hypotheses about whether nodes are normal or abnormal. A two-player game model is also used to identify the optimal attack and defense strategies between an attacker and defender. Simulation results show that the approach can decrease the number of claims needed for detection while minimizing false positives and negatives.
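The sequential hypothesis test at the core of this approach can be sketched as a classic sequential probability ratio test (SPRT). The Bernoulli model of per-slot suspicious routing activity and the error rates below are illustrative assumptions, not the paper's exact features or thresholds.

```python
import math

# Hedged SPRT sketch: decide between "normal" (H0) and "abnormal" (H1)
# node behavior from a stream of binary observations.

def sprt(observations, p0=0.1, p1=0.6, alpha=0.01, beta=0.01):
    """Return 'normal', 'abnormal', or 'undecided' after the samples.

    observations: 1 if the node emitted a suspicious request in a slot.
    H0: P(suspicious) = p0 (normal node); H1: P(suspicious) = p1.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0                              # running log-likelihood ratio
    for x in observations:
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "abnormal"
        if llr <= lower:
            return "normal"
    return "undecided"

if __name__ == "__main__":
    print(sprt([1, 1, 1, 1, 1]))   # many suspicious slots -> abnormal
    print(sprt([0, 0, 0, 0, 0, 0]))  # consistently clean -> normal
```

The appeal of the sequential test, as the summary notes, is that it reaches a decision with as few observations (claims) as possible for given false-positive and false-negative rates.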
This document compares the k-means data mining and outlier detection approaches for network-based intrusion detection. It analyzes four datasets capturing network traffic using both approaches. The k-means approach clusters traffic into normal and abnormal flows, while outlier detection calculates an outlier score for each flow. The document finds that k-means was more accurate and precise, with a better classification rate than outlier detection, and that it also requires fewer computing resources. This comparison can help network administrators choose the best intrusion detection method.
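The two approaches being compared can be sketched side by side on toy one-dimensional "flow sizes". The k=2 clustering and the mean-distance outlier score below are illustrative simplifications, not the document's exact algorithms or features.

```python
# Hedged sketch: cluster flows into two groups (k-means) versus scoring
# each flow's deviation from the overall mean (outlier detection).

def kmeans_1d(values, iters=10):
    """Two-cluster 1-D k-means; returns (centroids, labels)."""
    c = [min(values), max(values)]          # simple extreme-value init
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in values]
        for j in (0, 1):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                c[j] = sum(members) / len(members)
    return c, labels

def outlier_scores(values):
    """Distance of each flow from the overall mean, as a simple score."""
    mean = sum(values) / len(values)
    return [abs(v - mean) for v in values]

if __name__ == "__main__":
    flows = [10, 12, 11, 9, 95, 97]         # small flows plus two large ones
    centroids, labels = kmeans_1d(flows)
    print(labels)                           # cluster assignments
    print(outlier_scores(flows))            # per-flow deviation scores
```

The clustering yields a discrete normal/abnormal split, while the outlier scores must still be thresholded, which matches the document's framing of the two methods.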
cao-nicolau-mcdermott-learning-neural-cybernetics-2018-preprint (Nam Le)
This paper proposes using latent representation models, specifically autoencoders (AEs) and variational autoencoders (VAEs), to improve network anomaly detection. The models are trained on only normal data and introduce regularizers that compress normal data into a tight region around the origin in the latent space, while anomalies will have representations further away. This new latent feature space is then used as input to one-class classifiers to detect anomalies. The goal is for the models to perform well even with limited training data and be insensitive to hyperparameter settings, in order to address challenges of network anomaly detection like lack of labeled anomaly data and high dimensionality.
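The latent-space idea can be made concrete with a minimal sketch: if normal data is compressed near the origin of the latent space, the distance of a point's latent representation from the origin works as a one-class anomaly score. The stand-in linear "encoder" and the decision radius below are assumptions for illustration; the paper trains AEs/VAEs with regularizers to produce such a space.

```python
import math

# Hedged sketch of one-class scoring in a latent space where normal
# points are assumed to lie near the origin.

def encode(x, weights=((0.01, 0.0), (0.0, 0.01))):
    """Toy linear encoder mapping a 2-D input to a 2-D latent point."""
    return tuple(sum(w * xi for w, xi in zip(row, x)) for row in weights)

def anomaly_score(x):
    """Euclidean distance of the latent representation from the origin."""
    z = encode(x)
    return math.sqrt(sum(zi * zi for zi in z))

def is_anomaly(x, radius=1.0):
    """One-class decision: flagged if outside the radius around the origin."""
    return anomaly_score(x) > radius

if __name__ == "__main__":
    print(is_anomaly((5.0, 3.0)))       # encodes near the origin
    print(is_anomaly((500.0, 300.0)))   # encodes far away -> flagged
```

In the paper's setting, the one-class classifier applied on top of the latent features would replace the fixed radius used here.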
Web Attack Prediction Using Stepwise Conditional Parameter Tuning in Machine Learning (IJCNCJournal)
Internet and website usage is growing rapidly, and a wide variety of devices is used to access websites: mobile phones, tablets, laptops, and personal computers. Attackers are finding more and more website vulnerabilities that they can exploit for malicious purposes. A web application attack occurs when cyber criminals gain access to unauthorized areas, typically by looking for vulnerabilities at the application layer. SQL injection and cross-site scripting attacks are used to access web applications and obtain sensitive data. A key objective of this work is to develop new features and investigate how automatic tuning of machine learning techniques can improve the performance of web attack detection, using the HTTP CSIC datasets to detect and block attacks. The proposed model is stepwise conditional parameter tuning for machine learning algorithms: parameters are chosen and tuned dynamically and automatically based on the better outcome. This work also compares the performance of the proposed model on two datasets.
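The stepwise tuning idea, adjusting one parameter at a time and keeping a new value only when the measured outcome improves, can be sketched as follows. The toy quadratic score function, the parameter names, and the value grid are illustrative assumptions standing in for a real validation metric and model.

```python
# Hedged sketch of stepwise conditional parameter tuning: greedy,
# one-parameter-at-a-time search that keeps only improving steps.

def score(params):
    """Toy validation score; higher is better (peaks at depth=5, lr=0.1)."""
    return -((params["depth"] - 5) ** 2) - ((params["lr"] - 0.1) ** 2)

def stepwise_tune(grid, initial):
    """Greedy one-parameter-at-a-time search over a value grid."""
    best = dict(initial)
    best_score = score(best)
    for name, values in grid.items():
        for v in values:
            trial = dict(best)
            trial[name] = v
            s = score(trial)
            if s > best_score:          # conditional step: keep only if better
                best, best_score = trial, s
    return best

if __name__ == "__main__":
    grid = {"depth": [3, 5, 7], "lr": [0.01, 0.1, 1.0]}
    print(stepwise_tune(grid, {"depth": 3, "lr": 0.01}))
```

A real run would replace `score` with, e.g., cross-validated detection accuracy on the CSIC dataset.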
Approximation of regression-based fault minimization for network traffic (TELKOMNIKA JOURNAL)
This research compares three distinct approaches to computer network traffic prediction: traditional stochastic gradient descent (SGD), which uses a few random samples instead of the complete dataset in each iterative calculation; the gradient descent algorithm (GDA), a well-known optimization approach in deep learning; and the proposed method. Network traffic is computed from the traffic load (data and multimedia) of the computer network nodes via the Internet. SGD iterates cheaply but can settle on suboptimal solutions. GDA is more complicated and can be more accurate than SGD, but its parameters, such as the learning rate, dataset granularity, and loss function, are difficult to tune. Network traffic estimation helps improve performance and lower costs for various applications, such as adaptive rate control, load balancing, quality of service (QoS), fair bandwidth allocation, and anomaly detection. The proposed method confirms optimal parameter values by simulation, computing the minimum of the specified loss function in each iteration.
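As a worked illustration of the full-batch gradient descent baseline mentioned above, the sketch below fits a one-parameter linear model of traffic load by minimizing squared error each iteration. The learning rate, data, and model form are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: full-batch gradient descent on squared loss for a
# one-parameter linear traffic model y ~ w * x.

def gradient_descent(xs, ys, lr=0.01, iters=200):
    """Fit y ~ w * x by iterating w -= lr * dLoss/dw."""
    w = 0.0
    n = len(xs)
    for _ in range(iters):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

if __name__ == "__main__":
    xs = [1.0, 2.0, 3.0, 4.0]        # e.g. time or load index
    ys = [2.1, 3.9, 6.2, 8.0]        # observed traffic, roughly y = 2x
    print(round(gradient_descent(xs, ys), 2))
```

The SGD variant the paper contrasts with this would compute `grad` from a random subsample of `(xs, ys)` per iteration instead of the full batch.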
Network Traffic Anomaly Detection Through Bayes Net (Gyan Prakash)
Traffic anomaly detection using high-performance measurement systems offers the possibility of speeding up detection and of catching important, short-lived anomalies. In this paper we investigate the problem of detecting anomalies using traffic measurements with fine-grained time stamps. We develop a new detection algorithm (called KS3) that uses a Bayes net to efficiently consider multiple input signals and to define explicitly what is considered "anomalous". The input signals considered by KS3 are traffic volumes and correlations between ingress/egress packet and bit rates; these complementary signals enable identification of an expanded range of anomalies. Using a set of high-precision traffic measurements collected at our campus border router over a 10-month period and an annotated anomaly log supplied by our network operators, we show that KS3 is highly accurate, identifying 86% of the anomalies listed in the log. Compared with well-known time-series-based and wavelet-based detectors, this represents over a 20% improvement in accuracy. Investigation of events identified by KS3 that did not appear in the operator log indicates that many are, in fact, true positives. Deployment of KS3 in an operational environment supports this, showing zero false positives during initial tests.
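The principle of fusing multiple signals through a probabilistic model can be sketched by collapsing the Bayes net to naive Bayes over two binary indicators: a volume spike and a broken ingress/egress correlation. All priors and likelihoods below are illustrative assumptions, not KS3's parameters.

```python
# Hedged sketch: posterior P(anomaly | signals) from two binary traffic
# indicators via Bayes' rule with a naive (conditionally independent)
# signal model.

def p_anomaly(volume_spike, corr_break,
              prior=0.05,
              like={"volume": (0.8, 0.1), "corr": (0.7, 0.05)}):
    """like[s] = (P(signal | anomaly), P(signal | normal))."""
    def lk(pair, present):
        p_a, p_n = pair
        return (p_a, p_n) if present else (1 - p_a, 1 - p_n)

    pa, pn = prior, 1 - prior
    for pair, present in ((like["volume"], volume_spike),
                          (like["corr"], corr_break)):
        la, ln = lk(pair, present)
        pa, pn = pa * la, pn * ln
    return pa / (pa + pn)           # normalize over the two hypotheses

if __name__ == "__main__":
    print(round(p_anomaly(True, True), 3))    # both signals fire
    print(round(p_anomaly(False, False), 3))  # quiet traffic
```

A full Bayes net would additionally model dependencies between the signals rather than treating them as independent.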
BIG DATA ANALYTICS FOR USER-ACTIVITY ANALYSIS AND USER-ANOMALY DETECTION IN... (Nexgen Technology)
Reliable and Efficient Data Acquisition in Wireless Sensor Network (IJMTST Journal)
The sensors in a WSN sense the surroundings, collect data, and transfer it to the sink node. It has been observed that sensor nodes are deactivated or damaged when exposed to certain radiation or due to energy problems. This damage leads to the temporary isolation of nodes from the network, which results in the formation of holes. These holes are dynamic in nature and can grow and shrink depending on the factors damaging the sensor nodes. A solution has therefore been presented in the base paper in which a dual mode, radio frequency and acoustic, is considered so that data can be transferred easily. Building on this, a survey has been done studying several factors through which the performance of the system can be increased.
The document discusses using machine learning for efficient attack detection in IoT devices without feature engineering. It proposes a feature-engineering-less machine learning (FEL-ML) process that uses raw packet byte streams as input instead of engineered features, an approach that is lighter weight and faster than traditional methods. The FEL-ML model is trained directly on unprocessed packet data to perform malware detection on resource-constrained IoT devices. Prior research using engineered features or complex deep learning models is unsuitable for IoT because of memory and processing-power limitations. The proposed FEL-ML approach aims to enable effective network traffic security for IoT using minimal resources.
Efficient Attack Detection in IoT Devices Using Feature-Engineering-Less Machine Learning (ijcsit)
Through the generalization of deep learning, the research community has addressed critical challenges in the network security domain, like malware identification and anomaly detection. However, it has yet to discuss deploying these models on Internet of Things (IoT) devices for day-to-day operations. IoT devices are often limited in memory and processing power, rendering the compute-intensive deep learning environment unusable. This research proposes a way to overcome this barrier by bypassing feature engineering in the deep learning pipeline and using raw packet data as input. We introduce a feature-engineering-less machine learning (ML) process to perform malware detection on IoT devices. Our proposed model, "Feature-engineering-less ML (FEL-ML)," is a lighter-weight detection algorithm that expends no extra computation on "engineered" features, effectively accelerating the low-powered IoT edge. It is trained on unprocessed byte streams of packets. Aside from providing better results, it is quicker than traditional feature-based methods. FEL-ML facilitates resource-sensitive network traffic security with the added benefit of eliminating the significant investment by subject-matter experts in feature engineering.
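The feature-engineering-less input stage can be sketched as follows: truncate or zero-pad the raw packet byte stream to a fixed length and scale each byte to [0, 1] for a downstream model. The 64-byte default length is an illustrative assumption, not the paper's setting.

```python
# Hedged sketch: raw packet bytes -> fixed-length model input, with no
# engineered features.

def bytes_to_vector(packet: bytes, length: int = 64):
    """Fixed-length, [0, 1]-scaled vector from raw packet bytes."""
    padded = packet[:length].ljust(length, b"\x00")   # truncate or zero-pad
    return [b / 255.0 for b in padded]

if __name__ == "__main__":
    pkt = bytes([0x45, 0x00, 0x00, 0x3C]) + b"\xff" * 10  # toy IPv4-ish bytes
    vec = bytes_to_vector(pkt, length=16)
    print(len(vec), vec[0], vec[-1])   # 16 values; tail is zero padding
```

Such vectors would feed the detection model directly, skipping the expert-crafted feature step the abstract argues against.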
A system for denial-of-service attack detection based on multivariate correlation analysis (IGEEKS TECHNOLOGIES)
The document proposes a denial-of-service (DoS) attack detection system using multivariate correlation analysis (MCA) to accurately characterize network traffic. It extracts geometric correlations between network traffic features using triangle area maps. This anomaly-based system can detect both known and unknown DoS attacks by learning patterns in legitimate traffic. Evaluation on the KDD Cup 99 dataset shows it outperforms two previous state-of-the-art methods in detection accuracy.
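As an illustration of the triangle-area-map (TAM) idea, the sketch below builds a symmetric matrix of pairwise areas for one traffic record, using the simplification that the triangle for features i and j is spanned by the origin and the record's two axis projections, giving area |f_i| * |f_j| / 2. This is one common reading of the construction, simplified for illustration rather than the paper's exact definition.

```python
# Hedged sketch of a triangle-area map: one matrix per traffic record,
# capturing pairwise geometric correlations between features.

def triangle_area_map(features):
    """Symmetric matrix of pairwise triangle areas (zero diagonal)."""
    n = len(features)
    tam = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                tam[i][j] = abs(features[i]) * abs(features[j]) / 2.0
    return tam

if __name__ == "__main__":
    record = [2.0, 3.0, 0.5]         # toy per-record traffic features
    for row in triangle_area_map(record):
        print(row)
```

Anomaly detection then compares each record's map against maps built from legitimate-traffic profiles.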
Internet Traffic Monitoring: Anomalous Behaviour Detection (Gyan Prakash)
This document discusses a methodology for monitoring internet traffic and detecting anomalous behavior. It begins by noting the challenges of understanding vast quantities of internet traffic data due to the diversity of applications and services. Recent cyber attacks have made it important to develop techniques to analyze communication patterns in traffic data for network security purposes.
The proposed methodology uses data mining and entropy-based techniques to build behavior profiles of internet backbone traffic. It involves clustering traffic based on communication patterns, automatically classifying behaviors, and modeling structures for analysis. The methodology is validated using data sets from internet core links. It aims to automatically discover significant behaviors, provide interpretations, and quickly identify anomalous events like scanning or denial of service attacks.
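The entropy-based part of such profiling can be sketched with Shannon entropy over an observed distribution: low entropy of destination ports, for example, suggests concentrated behavior such as scanning a single service. The sample port lists are illustrative assumptions.

```python
import math
from collections import Counter

# Hedged sketch of entropy-based behavior profiling over traffic fields.

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = len(items)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

if __name__ == "__main__":
    mixed_ports = [80, 443, 22, 53, 80, 443, 8080, 25]  # varied services
    scan_ports = [22] * 8                               # one target service
    print(round(shannon_entropy(mixed_ports), 2))
    print(shannon_entropy(scan_ports))                  # 0.0: fully concentrated
```

Profiles built from such entropy values over several fields (addresses, ports, flow sizes) give the behavior signatures the methodology clusters and classifies.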
This document discusses statistical approaches for detecting anomalies in network traffic. It begins by describing the typical four-stage process for anomaly detection: data collection, data analysis/feature extraction, inference to classify traffic as normal or anomalous, and validation. It then discusses several specific statistical approaches that can be used:
(1) Extracting features using statistical models of the traffic distributions, such as α-stable distributions, which can properly model highly variable network traffic.
(2) Using techniques like the Kalman filter to analyze traffic volume changes at different time scales and detect both short-term and long-term anomalies.
(3) Applying the Holt-Winters forecasting technique to decompose traffic into a baseline, trend,
A New Way of Identifying DoS Attack Using Multivariate Correlation Analysis (ijceronline)
This document summarizes a research paper that proposes a new method for identifying denial of service (DoS) attacks using multivariate correlation analysis (MCA). The method involves three main steps: 1) generating basic features from network traffic, 2) using MCA to extract correlations between features and generate triangle area maps, and 3) using an anomaly-based detection mechanism to distinguish attacks from normal traffic based on differences from pre-generated normal profiles. The researchers evaluate their method on the KDD Cup 99 dataset and achieve moderate detection performance. However, they identify issues related to differences in feature scales that reduce detection of some attacks. They propose using statistical normalization to address this.
Intrusion Detection Systems by Anomaly-Based Using Neural Network (IOSR Journals)
To improve network security, different steps have been taken as the size and importance of networks increase day by day, and with them the chances of network attacks. Networks are mainly attacked by intrusions that are identified by a network intrusion detection system. These intrusions are mainly present in data packets, and each packet has to be scanned to detect them. This paper develops an intrusion detection system that uses the identity and signature of an intrusion to identify different kinds of intrusions. A network intrusion detection system needs to be efficient enough that the chance of false alarm generation, that is, identifying something as an intrusion when it actually is not, remains low. Results obtained after analyzing this system are quite good: nearly 90% of the alarms generated are true alarms. It detects intrusions against various services, such as DoS and SSH, using a neural network.
A Combination of Temporal Sequence Learning and Data Description for Anomaly-Based NIDS (IJNSA Journal)
Through continuous observation and modelling of normal behaviour in networks, an anomaly-based network intrusion detection system (A-NIDS) offers a way to find possible threats via deviation from the normal model. Analysing network traffic with a time-series model has the advantage of exploiting the relationships between packets within network traffic and observing behavioural trends over a period of time; it generates new sequences with good features that support anomaly detection in network traffic and provides the ability to detect new attacks. In addition, an anomaly detection technique that focuses on the normal data and aims to build a description of it is effective for anomaly detection in imbalanced data. In this paper, we propose a model combining a Long Short-Term Memory (LSTM) architecture for processing time series with Support Vector Data Description (SVDD) for anomaly detection in A-NIDS, to obtain the advantages of both. The LSTM and SVDD parameters are trained together using a joint optimization method. Our experimental results on the KDD99 dataset show that the proposed combined model obtains high intrusion detection performance, especially for DoS and Probe attacks, with 98.0% and 99.8%, respectively.
This document summarizes a research paper that proposes combining long short-term memory (LSTM) and support vector data description (SVDD) for anomaly detection in anomaly-based network intrusion detection systems (A-NIDS). The paper argues that analyzing network traffic as a time series using LSTM can capture relationships between packets, but LSTM alone does not directly optimize for anomaly detection. It also notes that A-NIDS often only have normal data available for training. Therefore, the paper proposes combining LSTM to learn temporal features from network traffic with SVDD, an anomaly detection technique that builds a description of normal data. The combined model trains LSTM and SVDD parameters jointly using a joint optimization method. An evaluation on the KDD99 dataset
Early Detection and Prevention of Distributed Denial of Service Attack Using ... (IRJET Journal)
The document describes a proposed mechanism for early detection and prevention of distributed denial of service (DDoS) attacks using software-defined networking (SDN). The mechanism uses an SDN controller and Mininet network simulator. It calculates entropy values from network traffic to identify anomalous patterns indicating potential DDoS attacks. Once detected, the SDN controller dynamically reconfigures network paths to divert malicious traffic away from the target. The mechanism is evaluated using statistical metrics and shows superior performance over existing methods in detection rate, false positives, and response time. It provides a practical solution for safeguarding network services against DDoS attacks.
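The entropy indicator described above can be sketched directly: the entropy of destination addresses per traffic window drops sharply when traffic converges on a single victim, so windows below a threshold are flagged. The window size, threshold, and toy address streams are illustrative assumptions, not the mechanism's tuned values.

```python
import math
from collections import Counter

# Hedged sketch: per-window destination-address entropy as a DDoS signal.

def window_entropy(addresses, window=5):
    """Shannon entropy (bits) of destinations in each full window."""
    out = []
    for start in range(0, len(addresses) - window + 1, window):
        counts = Counter(addresses[start:start + window])
        out.append(sum(-(c / window) * math.log2(c / window)
                       for c in counts.values()))
    return out

def ddos_windows(addresses, window=5, threshold=1.0):
    """Window indices whose destination entropy falls below the threshold."""
    return [i for i, h in enumerate(window_entropy(addresses, window))
            if h < threshold]

if __name__ == "__main__":
    normal = ["a", "b", "c", "d", "e"]   # spread across destinations
    attack = ["v"] * 5                   # every packet hits one victim
    print(ddos_windows(normal + attack))   # -> [1]
```

In the SDN setting, a flagged window would trigger the controller's path reconfiguration to divert the offending traffic.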
Online stream mining approach for clustering network traffic (eSAT Journals)
Abstract: A large number of approaches have been proposed for intrusion detection systems, leading to implementations of agent-based intelligent IDS (IIDS), non-intelligent IDS (NIDS), signature-based IDS, and so on. In building such IDS models, algorithms that learn from the flow of network traffic play a crucial role in the accuracy of IDS systems. The proposed work focuses on implementing a novel method to cluster network traffic that eliminates the limitations of existing online clustering algorithms and proves its robustness and accuracy over large streams of network traffic arriving at extremely high rates. We compare the existing algorithm with the novel methods to analyse accuracy and complexity. Keywords: NIDS, data stream mining, online clustering, RAH algorithm, Online Efficient Incremental Clustering algorithm.
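A generic online clustering loop of the kind being compared here can be sketched as follows: each arriving flow value joins the nearest cluster if it is within a radius, otherwise it opens a new cluster, with centroids updated incrementally. The radius and 1-D flow values are illustrative assumptions; this is not the RAH or Online Efficient Incremental Clustering algorithm itself.

```python
# Hedged sketch: single-pass incremental clustering over a flow stream.

class OnlineClusterer:
    def __init__(self, radius=5.0):
        self.radius = radius
        self.centroids = []     # running means, one per cluster
        self.counts = []

    def add(self, x):
        """Assign x to a cluster (creating one if needed); return its index."""
        if self.centroids:
            i = min(range(len(self.centroids)),
                    key=lambda j: abs(x - self.centroids[j]))
            if abs(x - self.centroids[i]) <= self.radius:
                self.counts[i] += 1
                # incremental mean update: no stored history needed
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(x)
        self.counts.append(1)
        return len(self.centroids) - 1

if __name__ == "__main__":
    c = OnlineClusterer()
    assignments = [c.add(v) for v in [10, 11, 12, 95, 96, 10.5]]
    print(assignments)          # -> [0, 0, 0, 1, 1, 0]
```

The single pass and constant per-point memory are what make this style of algorithm viable for traffic arriving at extremely high rates.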
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
COPYRIGHTThis thesis is copyright materials protected under the .docxvoversbyobersby
COPYRIGHT
This thesis is copyright material protected under the Berne Convention, the Copyright Act 1999 and other international and national enactments in that behalf on intellectual property. It may not be reproduced by any means in full or in part, except for short extracts in fair dealing for research or private study, critical scholarly review or discourse with acknowledgment, without written permission of the Dean, School of Graduate Studies, on behalf of both the author and XXX XXX University.
ABSTRACT
With the fast-growing internet, the risk of intrusion has also increased; as a result, the Intrusion Detection System (IDS) is a popular research field. IDSs are used to identify any suspicious activity or patterns in the network or machine that threaten security features or compromise the machine. IDSs mostly use all the features of the data, yet it is a keen observation that not all features are of equal relevance for the detection of attacks, and not every feature contributes significantly to system performance. The main aim of the work is to develop an efficient denial-of-service network intrusion classification model. The specific objectives included: to analyse existing literature on intrusion detection systems, covering the techniques used to model IDSs, types of network attacks, the performance of various machine learning tools, and how network intrusion detection systems are assessed; to find the top network traffic attributes that can be used to model denial-of-service intrusion detection; and to develop a machine learning model for detection of denial-of-service network intrusion. Methods: The research design was experimental, and data was collected by simulation using the NSL-KDD dataset. By implementing a Correlation Feature Selection (CFS) mechanism with three search algorithms, a smallest set of features is selected containing the features chosen most frequently. Findings: The smallest subset of features chosen is the most nominal among all feature subsets found. Further, the performance of Artificial Neural Network (ANN), decision tree, Support Vector Machine (SVM) and K-Nearest Neighbour (KNN) classifiers is compared for 7 subsets found by the filter model and for all 41 attributes. Results: The outcome indicates a remarkable improvement in the performance metrics used for comparison of the classifiers.
The results show that using the 17/18 selected features improves DoS-type classification accuracies compared to using the 41 features in the NSL-KDD dataset. It was further observed that an ensemble of three classifiers with decision fusion performs better than a single classifier for DoS-type classification. Among the machine learning tools tested, ANN achieved the best classification accuracies, followed by SVM and DT; KNN registered the lowest. Application: The proposed work, with such an improved detection rate and lesser classification time and lar.
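The decision-fusion step mentioned above can be as simple as a majority vote across the classifiers' outputs. A hedged stdlib Python sketch, where the classifier names and label strings are illustrative and the tie-breaking rule (first-seen label wins) is an assumption, not a detail from the thesis:

```python
from collections import Counter

def decision_fusion(predictions):
    """Majority vote over per-classifier predictions for one sample.
    `predictions` maps a classifier name to its predicted label,
    e.g. {"ANN": "dos", "SVM": "dos", "DT": "normal"}."""
    votes = Counter(predictions.values())
    label, _ = votes.most_common(1)[0]
    return label
```

With three voters a clear majority always exists unless all three disagree, which is one reason odd-sized ensembles are a common design choice.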
Congestion Prediction in Internet of Things Network using Temporal Convolutio...vitsrinu
The document proposes a novel congestion prediction approach for Internet of Things (IoT) networks using a Temporal Convolutional Network (TCN). It aims to more accurately predict network congestion compared to other machine learning models. The key contributions are using a Taguchi method to optimize the TCN model hyperparameters, applying dropout to avoid overfitting on heterogeneous IoT data, and developing a Home IoT testbed to generate real network traffic data for model training and evaluation. Experimental results show the TCN approach achieves 95.52% accuracy in predicting IoT network congestion.
ATTACK DETECTION AVAILING FEATURE DISCRETION USING RANDOM FOREST CLASSIFIERCSEIJJournal
This document discusses using a random forest classifier with feature selection to improve intrusion detection. It begins with background on intrusion detection systems and challenges. It then proposes using genetic algorithms for feature selection to identify the most important features from a dataset. A random forest classifier is used for classification, which combines decision trees to improve accuracy. The methodology involves feature selection, classification with random forest, and detection. Feature weights are calculated and cross-validation is used to analyze detection rates for individual attacks. The goal is to improve accuracy, reduce training time, and better detect minority attacks through this approach.
Attack Detection Availing Feature Discretion using Random Forest ClassifierCSEIJJournal
The widespread use of the Internet has the adverse effect of vulnerability to cyber attacks. Defensive mechanisms like firewalls and IDSs have evolved, with many research contributions in these areas. Machine learning techniques have been used successfully in these defense mechanisms, especially IDSs. Although they are effective to some extent in identifying new patterns and variants of existing malicious patterns, many attacks still go undetected. The objective is to develop an algorithm for detecting malicious domains based on passive traffic measurements. In this paper, an anomaly-based intrusion detection system built on an ensemble machine learning classifier, Random Forest with gradient boosting, is deployed. The NSL-KDD Cup dataset is used for analysis, and out of 41 features, 32 were identified as significant using feature discretion.
MACHINE LEARNING FOR QOE PREDICTION AND ANOMALY DETECTION IN SELF-ORGANIZING ...ijwmn
Existing mobile networking systems lack the level of intelligence, scalability, and autonomous adaptability required to optimally enable next-generation networks like 5G and beyond, which are expected to be Self-Organizing Networks (SONs). Machine learning (ML) is anticipated to be instrumental in designing future “x”G SON networks with their demanding Quality of Experience (QoE) requirements. This paper evaluates a methodology that uses supervised machine learning to predict the QoE level that end users experience and uses this information to detect anomalous behavior of dysfunctional network nodes (eNodeBs/base stations) in self-organizing mobile networks. An end-to-end network scenario is created using the network simulator ns-3, where end users interact with a remote host accessed over the Internet to run the most commonly used applications, like file downloads and uploads, and the resulting output is used as a dataset to implement ML algorithms for QoE prediction and eNodeB (eNB) anomaly detection. Three ML algorithms were implemented and compared to study their effectiveness and the scalability of the methodology. In the test network, an accuracy score greater than 99% is achieved using the ML algorithms. As the ns-3 simulation suggests, using ML for QoE prediction will help network operators understand end-user needs and identify network elements that are failing and need attention and recovery.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Similar to Web server load prediction and anomaly detection from hypertext transfer protocol logs
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
Reliable and Efficient Data Acquisition in Wireless Sensor NetworkIJMTST Journal
The sensors in a WSN sense the surroundings, collect data, and transfer it to the sink node. It has been observed that sensor nodes are deactivated or damaged when exposed to certain radiation or due to energy problems. This damage leads to temporary isolation of the nodes from the network, which results in the formation of holes. These holes are dynamic in nature and can grow and shrink depending upon the factors causing the damage to the sensor nodes. A solution has been presented in the base paper where dual modes, radio frequency and acoustic, are considered so that data can be transferred easily. Based on this, a survey has been done in which several factors are studied so that the performance of the system can be increased.
The document discusses using machine learning for efficient attack detection in IoT devices without feature engineering. It proposes a feature-engineering-less machine learning (FEL-ML) process that uses raw packet byte streams as input instead of engineered features. This approach is lighter weight and faster than traditional methods. The FEL-ML model is trained directly on unprocessed packet data to perform malware detection on resource-constrained IoT devices. Prior research that used engineered features or complex deep learning models are not suitable for IoT due to limitations of memory and processing power. The proposed FEL-ML approach aims to enable effective network traffic security for IoT using minimal resources.
EFFICIENT ATTACK DETECTION IN IOT DEVICES USING FEATURE ENGINEERING-LESS MACH...ijcsit
Through the generalization of deep learning, the research community has addressed critical challenges in the network security domain, like malware identification and anomaly detection. However, it has yet to discuss deploying these models on Internet of Things (IoT) devices for day-to-day operations. IoT devices are often limited in memory and processing power, rendering the compute-intensive deep learning environment unusable. This research proposes a way to overcome this barrier by bypassing feature engineering in the deep learning pipeline and using raw packet data as input. We introduce a feature-engineering-less machine learning (ML) process to perform malware detection on IoT devices. Our proposed model, “Feature-engineering-less ML (FEL-ML),” is a lighter-weight detection algorithm that expends no extra computation on “engineered” features. It effectively accelerates the low-powered IoT edge. It is trained on unprocessed byte streams of packets. Aside from providing better results, it is quicker than traditional feature-based methods. FEL-ML facilitates resource-sensitive network traffic security with the added benefit of eliminating the significant investment by subject matter experts in feature engineering.
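A minimal sketch of what a feature-engineering-less input step can look like, in stdlib Python: raw packet bytes are truncated or zero-padded to a fixed length and scaled, with no hand-crafted features. The 64-byte length and the [0, 1] scaling are assumptions for illustration, not the paper's stated configuration:

```python
def packet_to_vector(raw_bytes, length=64):
    """Truncate or zero-pad a raw packet to `length` bytes and scale each
    byte to [0, 1] so it can feed a model directly, with no engineered
    features."""
    padded = raw_bytes[:length] + b"\x00" * max(0, length - len(raw_bytes))
    return [b / 255.0 for b in padded]
```

The whole preprocessing pipeline reduces to this one cheap transform, which is what makes the approach attractive on memory- and compute-constrained IoT hardware.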
A system for denial of-service attack detection based on multivariate correla...IGEEKS TECHNOLOGIES
The document proposes a denial-of-service (DoS) attack detection system using multivariate correlation analysis (MCA) to accurately characterize network traffic. It extracts geometric correlations between network traffic features using triangle area maps. This anomaly-based system can detect both known and unknown DoS attacks by learning patterns in legitimate traffic. Evaluation on the KDD Cup 99 dataset shows it outperforms two previous state-of-the-art methods in detection accuracy.
Internet ttraffic monitering anomalous behiviour detectionGyan Prakash
This document discusses a methodology for monitoring internet traffic and detecting anomalous behavior. It begins by noting the challenges of understanding vast quantities of internet traffic data due to the diversity of applications and services. Recent cyber attacks have made it important to develop techniques to analyze communication patterns in traffic data for network security purposes.
The proposed methodology uses data mining and entropy-based techniques to build behavior profiles of internet backbone traffic. It involves clustering traffic based on communication patterns, automatically classifying behaviors, and modeling structures for analysis. The methodology is validated using data sets from internet core links. It aims to automatically discover significant behaviors, provide interpretations, and quickly identify anomalous events like scanning or denial of service attacks.
This document discusses statistical approaches for detecting anomalies in network traffic. It begins by describing the typical four-stage process for anomaly detection: data collection, data analysis/feature extraction, inference to classify traffic as normal or anomalous, and validation. It then discusses several specific statistical approaches that can be used:
(1) Extracting features using statistical models of the traffic distributions, such as α-stable distributions, which can properly model highly variable network traffic.
(2) Using techniques like the Kalman filter to analyze traffic volume changes at different time scales and detect both short-term and long-term anomalies.
(3) Applying the Holt-Winters forecasting technique to decompose traffic into a baseline, trend,
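Forecast-based detection of this kind flags points whose one-step-ahead forecast error is large. A minimal stdlib Python sketch using Holt's level-plus-trend smoothing (the seasonal component of full Holt-Winters is omitted here, and the smoothing constants are illustrative, not values from the document):

```python
def holt_forecast_residuals(series, alpha=0.5, beta=0.3):
    """Holt's linear (level + trend) smoothing; returns one-step-ahead
    forecast errors. Large errors suggest anomalies against the baseline."""
    level = series[1]
    trend = series[1] - series[0]
    residuals = [0.0, 0.0]  # no forecast exists for the first two points
    for x in series[2:]:
        forecast = level + trend
        residuals.append(x - forecast)
        new_level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return residuals
```

On a perfectly linear series the residuals are all zero; a traffic spike or drop shows up as a residual far from zero, which a threshold (e.g. a multiple of the residual standard deviation) can flag.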
A New Way of Identifying DOS Attack Using Multivariate Correlation Analysisijceronline
This document summarizes a research paper that proposes a new method for identifying denial of service (DoS) attacks using multivariate correlation analysis (MCA). The method involves three main steps: 1) generating basic features from network traffic, 2) using MCA to extract correlations between features and generate triangle area maps, and 3) using an anomaly-based detection mechanism to distinguish attacks from normal traffic based on differences from pre-generated normal profiles. The researchers evaluate their method on the KDD Cup 99 dataset and achieve moderate detection performance. However, they identify issues related to differences in feature scales that reduce detection of some attacks. They propose using statistical normalization to address this.
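The triangle-area-map step can be sketched directly: for each feature pair (i, j), the sample spans a triangle with vertices at the origin, (f_i, 0) and (0, f_j), whose area is |f_i * f_j| / 2. A stdlib Python sketch; this reflects the commonly described TAM construction and is not necessarily every detail of the paper's variant:

```python
def triangle_area_map(features):
    """Symmetric map of triangle areas between every feature pair: the
    triangle spanned by the origin, (f_i, 0) and (0, f_j). The diagonal
    is zero since a feature paired with itself spans no triangle."""
    n = len(features)
    return [[abs(features[i] * features[j]) / 2.0 if i != j else 0.0
             for j in range(n)] for i in range(n)]
```

Comparing a sample's map against maps built from normal-profile traffic gives the geometric-correlation signal the detection mechanism thresholds on.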
Intrusion Detection Systems By Anamoly-Based Using Neural NetworkIOSR Journals
To improve network security, various measures have been taken as the size and importance of networks increase day by day, and with them the chances of network attacks. Networks are mainly attacked by intrusions, which are identified by a network intrusion detection system. These intrusions are mainly present in data packets, and each packet has to be scanned for detection. This paper develops an intrusion detection system that utilizes the identity and signature of the intrusion to identify different kinds of intrusions. A network intrusion detection system needs to be efficient enough that the chance of false alarm generation is low, a false alarm meaning traffic identified as an intrusion that actually is not. Results obtained after analyzing this system are good: nearly 90% of the alarms generated are true alarms. It detects intrusions for various services, like DoS and SSH, using a neural network.
A COMBINATION OF TEMPORAL SEQUENCE LEARNING AND DATA DESCRIPTION FOR ANOMALYB...IJNSA Journal
Through continuous observation and modelling of normal behavior in networks, an Anomaly-based Network Intrusion Detection System (A-NIDS) offers a way to find possible threats via deviation from the normal model. Analyzing network traffic with a time series model has the advantage of exploiting the relationship between packets within network traffic and observing trends of behavior over a period of time. It generates new sequences with good features that support anomaly detection in network traffic and provides the ability to detect new attacks. Besides, an anomaly detection technique that focuses on the normal data and aims to build a description of it is effective for anomaly detection in imbalanced data. In this paper, we propose a model combining a Long Short Term Memory (LSTM) architecture for processing time series with Support Vector Data Description (SVDD) for anomaly detection in A-NIDS, to obtain the advantages of both. In this model, the LSTM and SVDD parameters are jointly trained with a joint optimization method. Our experimental results on the KDD99 dataset show that the proposed combined model obtains high performance in intrusion detection, especially for DoS and Probe attacks, with 98.0% and 99.8%, respectively.
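SVDD encloses the normal data in a minimal hypersphere and flags points falling outside it. As a rough stdlib Python illustration of that decision rule only (a centroid with a max-distance radius stands in for the optimized SVDD sphere, and the LSTM feature extraction and joint training are omitted entirely):

```python
def fit_hypersphere(normal_points):
    """Centroid plus max-distance radius: a crude stand-in for SVDD's
    minimal enclosing hypersphere over normal data."""
    dim = len(normal_points[0])
    center = [sum(p[i] for p in normal_points) / len(normal_points)
              for i in range(dim)]
    radius = max(sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
                 for p in normal_points)
    return center, radius

def is_anomaly(point, center, radius):
    """A point outside the fitted sphere deviates from the normal model."""
    dist = sum((a - b) ** 2 for a, b in zip(point, center)) ** 0.5
    return dist > radius
```

In the paper's actual model the inputs to this kind of boundary are the temporal features the LSTM learns, and the sphere parameters are optimized jointly with the LSTM weights rather than fit after the fact.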
Similar to: Web server load prediction and anomaly detection from hypertext transfer protocol logs
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 13, No. 5, October 2023, pp. 5165-5178
ISSN: 2088-8708, DOI: 10.11591/ijece.v13i5.pp5165-5178
Journal homepage: http://ijece.iaescore.com
Web server load prediction and anomaly detection from
hypertext transfer protocol logs
Lenka Benova, Ladislav Hudec
Faculty of Informatics and Information Technologies, Slovak University of Technology, Bratislava, Slovakia
Article Info
Article history: Received Oct 6, 2022; Revised Feb 13, 2023; Accepted Feb 27, 2023

ABSTRACT
As network traffic increases and new intrusions occur, anomaly detection
solutions based on machine learning are necessary to detect previously
unknown intrusion patterns. Most of the developed models require a labelled
dataset, which can be challenging owing to a shortage of publicly available
datasets. These datasets are often too small to effectively train machine
learning models, which further motivates the use of real unlabeled traffic. By
using real traffic, it is possible to more accurately simulate the types of
anomalies that might occur in a real-world network and improve the
performance of the detection model. We present a method able to predict
and categorize anomalies without the aid of a labelled dataset, demonstrating
the model’s usability while also gathering a dataset from real noisy network
traffic. The proposed long short-term memory (LSTM)-based intrusion
detection system was tested in the real-world setting of an antivirus company
and successfully detected various intrusions using 5-minute windowing over
both the predicted and real update curves, thereby demonstrating its
usefulness. Our contribution is the development of a robust model generally
applicable to any hypertext transfer protocol (HTTP) traffic with near
real-time anomaly detection, while also outperforming earlier studies in
terms of prediction accuracy.
Keywords:
Anomaly detection
Intrusion detection system
Machine learning
Network traffic prediction
Web server logs
This is an open access article under the CC BY-SA license.
Corresponding Author:
Lenka Benova
Faculty of Informatics and Information Technologies, Slovak University of Technology
Ilkovicova 2, Bratislava, Slovakia
Email: lenka.benova@stuba.sk
1. INTRODUCTION
The volume of dangerous network traffic is continuously increasing. There is a significant demand
for autonomous data processing and intrusion detection systems based on machine-learning approaches that
can detect threats before they become visible. Because a high level of skill is required to comprehend each
log entry from a web server request, and even more to understand sequences of operations such as
continuous web requests, there is a strong push to design systems that can identify both known and unknown
intrusions.
With an increasing amount of network traffic, the number of anomalies caused by various network
misconfigurations continues to grow, resulting in more successful network attacks. Network anomalies must
be detected and diagnosed to ensure the confidentiality, availability, and integrity of computer systems, as
such intrusions drain resources and bandwidth, rendering network services unavailable. Crucial computer
systems are constantly under attack by numerous attackers on the internet, and intrusion detection systems
(IDSs) play an essential role in defending them. Signature-based approaches are used to address attacks by
extracting key characteristics and creating a unique signature for an attack. These approaches are highly
effective in combating previously captured attacks. However, they lack the ability to detect new intrusions or
zero-day attacks and are not suitable for real-time anomaly detection across large amounts of data [1], [2].
Network intrusion detection systems (NIDSs) are security systems that monitor malicious activity in
network traffic and generate alerts when any suspicious activity is detected to further investigate the cause of
the alert and take action. Owing to advancements in technology, traditional approaches to network anomaly
detection are becoming ineffective as network attacks become more sophisticated [3], [4]. Network anomaly
detection is predicated on locating data that do not follow regular behavioural patterns. Despite the
availability of numerous methodologies, many research hurdles remain. There is no universally
applicable anomaly detection approach, as data contains noise, which is an abnormality in and of itself
and therefore difficult to distinguish. Moreover, publicly available labelled datasets are scarce, and
because intruders are aware of current techniques, newer and more complex techniques are needed.
Various anomaly detection approaches have been developed, but most have limitations when used in
real-world situations. This study aims to demonstrate the feasibility of utilising machine learning to detect
anomalies and classify metadata in selected types of application servers that offer modular updates to
consumers worldwide, with a focus on automation and usability in a real-world setting. Our aim was not
to present a new deep learning method; instead, we wanted to show how to approach real-time anomaly
detection using unlabelled datasets composed entirely of hypertext transfer protocol (HTTP) logs.
2. RELATED WORK
Over the past few years, various anomaly detection methods that incorporate several fields of
machine learning have been proposed. Bayesian networks (BNs) have been widely used for grouping
problems. Detecting anomalies using BNs has distinct and complementary strengths in spotting anomalies
[5], [6]. Nie et al. [7] investigated the problem of network traffic modelling and proposed using a BN to
track flow trends. To evaluate the performance of their model, they collected traffic datasets from the
Abilene and GÉANT networks; their solution consistently surpassed three leading methodologies in terms
of estimation error.
Dynamic BNs were extended by Pauwels and Calders [8] to produce a novel model that allows a
better description of the structure and attributes of a log file. To address the drawbacks of regular dynamic
BNs, several aspects have been added. For example, functional dependencies were added to provide a better
description of the log file structure. They then detailed their approach to generating models that reflected the
multidimensional and sequential nature of the log data. Their method performed well in a variety of contexts
with varying levels of anomalies in both the training and test datasets, with an area under the ROC curve
(AUC) of 0.84 on the BPIC dataset.
A feed-forward neural network was introduced by Poojitha et al. [9], [10] to detect anomalies using
10% of the KDD Cup 99 data encompassing both traffic during normal and abnormal behaviour, trained by a
backpropagation method. The test results showed that the proposed approach is effective at accurately
(94.93% accuracy) detecting various attacks with a low percentage of false positives and negatives. The
proposed method detects normal traffic, denial of service (DoS), and probe attacks well but fails to
detect R2L and U2R attacks owing to the very large dimensions of the input data.
Anomaly detection has also been performed with autoencoders (AEs), which have become increasingly
popular in recent years and have been used in several investigations to identify outliers. The anomaly
detection performance of AE, denoising autoencoder
(DAE), principal component analysis (PCA), and kernel PCA approaches were examined by Sakurada and
Yairi [11]. A hybrid method with kernel density estimation was utilised in a study by Cao et al. [12], and
successful results were produced on the KDD dataset.
Recurrent neural networks (RNN), such as long short-term memory (LSTM) or gated recurrent unit
(GRU), have been widely utilised in traffic prediction, with excellent results when traffic varies during the
week or day [13]–[15], and are thus suitable for network traffic predictions. Lu and Yang [16] used wavelet
transformations and an LSTM network to forecast network traffic originating in a domain name system
(DNS) server. The purpose was to estimate the mean rate of arriving packets per minute on the following day
using the number of arrived packets per second. They built four prediction models: a least squares support
vector machine (LSSVM), a backpropagation (BP) neural network, an Elman network, and an LSTM network
utilising the db3 wavelet; the proposed model, based on wavelet transformation and an LSTM network,
achieved the best results.
Using an LSTM network, Trinh et al. [17] investigated the efficiency of RNN on mobile traffic data.
The LSTM network was able to capture the temporal correlation of traffic even for remote timeslots, which
made it particularly helpful in real-time applications. Zhuo et al. [18] used three network traffic datasets to
test their LSTM-based deep neural network (DNN) prediction model. The first dataset includes network
traffic history data from 11 European cities provided by network service providers. The second dataset was
based on web traffic history data from the United Kingdom Education and Research Networking Association
(UKERNA) academic research website. The third dataset is based on network traffic statistics collected by
the Beijing University of Posts and Telecommunications, which is the backbone node of China’s education
network. Their research revealed that LSTM can be used effectively as a timing sequence forecast model and
that adding auto-correlation features improved accuracy when working with large granularity datasets.
Naseer et al. [19] used multiple DNN architectures such as convolutional neural networks, AEs, and
RNN to propose, implement, and train intrusion detection models. These deep models were trained on the
NSLKDD training dataset and tested on NSLKDD’s NSLKDDTest+ and NSLKDDTest21 test datasets. The
authors implemented traditional machine learning intrusion detection system models with a variety of
well-known classification approaches, including extreme learning machine (ELM), k-nearest neighbors
algorithm (k-NN), decision tree (DT), random forest (RF), support vector machine (SVM) and naive Bayes
(NB). Both the DNNs and the conventional machine learning models were evaluated using well-known
classification measures: the ROC curve, area under the ROC, precision-recall curve, mean average
precision (mAP), and classification accuracy. On the test dataset, both the deep convolutional neural
network (CNN) and LSTM models performed well, with accuracies of 85% and 89%, respectively.
Malaiya et al. [20] conducted an empirical evaluation of deep learning to investigate if it can be
used to discover network anomalies. The fully connected network (FCN), variational autoencoder (VAE),
and LSTM with sequence to sequence (LSTM Seq2Seq) structures are used to create a collection of deep
learning models. The authors used publicly available traffic data sets from NSLKDD and Kyoto-Honeypot to
examine the models. Their experimental results are noteworthy, as the model based on the LSTM Seq2Seq
structure performed well on both traffic data sets, with 99% binary classification accuracy.
Kim and Cho [21] demonstrated the usefulness of LSTMs and CNNs for the task of web traffic anomaly
detection by comparing their suggested model to various machine learning methods and achieving superior
results. The authors presented a C-LSTM architecture, which was found using parametric tests, model
comparison experiments, and data analysis. They employed the C-LSTM to extract patterns from web traffic
data that included spatial and temporal information. The characteristics of normal and abnormal data
categorized by the C-LSTM were revealed by a confusion matrix and t-SNE analysis. The proposed C-LSTM
model classifies and extracts characteristics that could not be extracted using traditional machine learning
approaches. Their approach outperforms other cutting-edge machine learning techniques on Yahoo’s
well-known Webscope S5 dataset, reaching an overall accuracy of 98.6% and recall of 89.7% on the test
dataset.
3. DATASET
This study was conducted on a dataset acquired from an antivirus company offering anti-virus and
firewall software. The company’s application servers provide virus signatures and software module updates
to consumers worldwide. Client software downloads these updates at a time set by the client. Consequently,
the resource use of these servers varies dramatically over time. While some usage patterns are predictable,
such as recently issued virus signature updates, the usual release patterns of various client software modules,
day/night cycles across different time zones, and less traffic on vacations and weekends, others can be sudden
and difficult to predict.
These updates are tracked by the Zabbix monitoring system and NGINX logs client connections
requesting updates. Table 1 lists the currently logged attributes. A dataset containing NGINX logs from
numerous update servers in the same hosting area was gathered for this experiment. Thousands of requests
are generated each minute, and millions each day, from a single server's traffic, creating approximately
50 GB of logs per day. Due to the volume of the data, logs were grouped by minute and aggregated according
to the timestamp when the request was submitted, as shown in Table 2.
This dataset was used to identify potential anomalies, such as those linked to client misconfiguration
or a rapid connection peak, so that emerging problems could be addressed early in the process. It is
worth noting that the data were noisy and already contained anomalies,
which we could not extract because it would require a long and complex expert examination, which would be
nearly impossible with such a vast amount of data. We attempted to address this problem by collecting
additional data such that anomalies would only account for a small percentage of the total data, and the
remainder would be normal, desirable traffic. The timestamp was a critical feature to include in the dataset
because collected logs alter their behaviour over days and weeks as clients download less on weekends and at
different times during the day. This is why it was split into many features for the model to be able to learn
these cycles.
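The per-minute grouping described above can be sketched as follows; the record fields and function name are illustrative assumptions for this sketch, not the authors' actual pipeline:

```python
# Illustrative sketch of per-minute log aggregation (field names are
# assumptions, not the paper's actual implementation).
from collections import defaultdict
from datetime import datetime

def aggregate_by_minute(records):
    """Group parsed NGINX records into per-minute rows: request count,
    total/average body bytes sent and total/average request length."""
    buckets = defaultdict(lambda: {"count": 0, "body_bytes_sent": 0,
                                   "request_length": 0})
    for rec in records:
        # Truncate the timestamp to minute resolution, e.g. 23/Aug/2020:00:00
        ts = datetime.strptime(rec["time_local"], "%d/%b/%Y:%H:%M:%S")
        key = ts.replace(second=0)
        b = buckets[key]
        b["count"] += 1
        b["body_bytes_sent"] += rec["body_bytes_sent"]
        b["request_length"] += rec["request_length"]
    rows = []
    for key in sorted(buckets):
        b = buckets[key]
        rows.append({"time_local": key,
                     "count": b["count"],
                     "body_bytes_sent": b["body_bytes_sent"],
                     "avg_body_bytes_sent": b["body_bytes_sent"] // b["count"],
                     "request_length": b["request_length"],
                     "avg_request_length": b["request_length"] // b["count"]})
    return rows
```

Each output row corresponds to one line of Table 2, with the day/hour/minute features later derived from the truncated timestamp.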
Table 1. Logged attributes
Attribute Description
remote_addr client IP address
remote_user authentication
http_x_eset_updateid license key
time_local local time in the Common Log Format
http_host HTTP server host
request_method HTTP request method
uri path to the update being requested
server_protocol server protocol version
status response status
body_bytes_sent request body length
request_length request length including request line, header, and request body
request_time request processing time in seconds with millisecond resolution
http_user_agent identification of the client originating the request
Table 2. Logs grouped by time of origin by minutes
Server Time local Count Body bytes sent Avg body bytes sent Request length Avg request length
1 23/Aug/2020:00:00:00 37607 1388254369 36914 23273524 618
1 23/Aug/2020:00:01:00 34044 1159650997 34063 21327793 626
1 23/Aug/2020:00:02:00 34578 1570963146 45432 22320593 645
1 23/Aug/2020:00:03:00 35372 1302698163 36828 22967956 649
1 23/Aug/2020:00:04:00 33731 1345533685 39890 21899394 649
The logs in the dataset ranged from November to February and were collected from numerous
servers in the same area with similar update curves and attributes. We chose to use data from servers located
in Bratislava and Vienna, a total of five servers. Each server is identified by the number above its rows
(number one in the sample shown in Table 3). Each row has columns with values from all five servers, as
illustrated in Table 3, with only the first server's data presented. Multiple server lines were concatenated
based on the timestamp, resulting in a single line containing data from all parsed servers at the same time
(each row represents one timestamp with 40 attributes, 8 for each server, as seen in Table 3).
The “count” feature was our prediction target because we wanted to forecast the future request
count. We forecast future values by shifting the target feature back in time according to the chosen
prediction window. The data were split at a 90:10 ratio. Because the data from our update servers
reached varying maximum and minimum values over time, we needed a scaler that scales the data according
to user-defined parameters; we therefore avoided library scalers that scale based on the minimum and
maximum values in the current vector. Our own scaler was necessary to scale and inverse-scale the data
using the same fixed values, yielding identical scaled values at every point in time.
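The fixed-bound scaling and target shifting described above can be sketched as follows; the bound values and prediction horizon here are illustrative assumptions, not the paper's actual parameters:

```python
# Sketch of the described preprocessing: a scaler with user-defined, fixed
# bounds (so scaling and inverse scaling use the same values at every point
# in time) and a shifted prediction target. Bounds are assumptions.

class FixedMinMaxScaler:
    """Scale values into [0, 1] using fixed, user-chosen bounds rather than
    the min/max of the current vector, so inverse scaling is consistent."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def scale(self, xs):
        return [(x - self.lo) / (self.hi - self.lo) for x in xs]

    def inverse(self, xs):
        return [x * (self.hi - self.lo) + self.lo for x in xs]

def shift_target(counts, horizon):
    """Pair each input value with the request count `horizon` minutes ahead,
    which becomes the prediction target."""
    return list(zip(counts[:-horizon], counts[horizon:]))

scaler = FixedMinMaxScaler(lo=0, hi=60_000)   # assumed request-count bounds
scaled = scaler.scale([37607, 34044, 34578])
round_trip = scaler.inverse(scaled)           # recovers the original values
```

Because the bounds are fixed rather than derived from the current vector, a value scaled during training maps back to exactly the same raw count when inverse-scaled at prediction time.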
Table 3. Input dataset sample showing data from first chosen server
1
count body bytes sent avg body bytes sent request length avg request length day hour minute
37607 1388254369 36914 23273524 618 6 0 0
34044 1159650997 34063 21327793 626 6 0 1
34578 1570963146 45432 22320593 645 6 0 2
35372 1302698163 36828 22967956 649 6 0 3
33731 1345533685 39890 21899394 649 6 0 4
4. NETWORK ARCHITECTURE
Because the update curve is based on previous values, an RNN was chosen to forecast future update
server requests. It was possible to determine whether there was a problem with the updates by comparing the
actual values to the predicted values using the predicted update curve. The network architecture was chosen
according to related work and according to the best prediction results in our experiments comparing GRU
and LSTM with various configurations, as shown in Table 4.
An LSTM neural network model was chosen based on processed and aggregated update server logs.
It can forecast the future update curve based on a variable-length input. This model can learn and predict
common release patterns for various client software modules in the future.
The model was trained with early stopping to avoid overfitting. Each batch consisted of 1,440
timestamps, representing one day. In our situation, the historical traffic patterns throughout workdays,
weekends, and holidays can be tracked by the LSTM. To lessen the influence of unusual traffic on the learning
curve, we acquired a large dataset of 489,600 records collected over three months, in which anomalous data
made up only a very small share of the total, under the expert assumption that the large majority of the
data is regular traffic free of anomalies.
For the final implementation, we chose the network that performed best in our experiments in
Table 4. It consists of a 512-unit LSTM layer and a dense layer that outputs the target value, the request count.
The dense layer is used to transform the output of the LSTM layer into a form that can be used to make
predictions. It is connected to all units in the preceding and subsequent layers, allowing it to learn a rich
representation of the data and make more accurate predictions. The rectified linear unit (ReLU) activation
function is used to introduce non-linearity into the network, allowing it to learn more complex patterns in the
data. The mean squared error (MSE) loss function and RMSprop optimizer are used to train the model, while
early stopping is used to prevent overfitting.
The train/validation split for the network was 90:10. A MinMax scaler was used to scale the
data, as it is non-distorting, meaning it preserves the shape of the original distribution. We trained the model
for 20 epochs, each consisting of 100 steps. The model's architecture can be seen in Figures 1 and 2.
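A minimal Keras sketch of this architecture, assuming TensorFlow/Keras as the framework (the paper does not name its implementation library, and the layer names and input shape here are assumptions), could look like:

```python
# Hypothetical sketch of the described network: a 512-unit LSTM followed by
# a dense output layer, ReLU activation, MSE loss, RMSprop optimizer, and
# early stopping. This is an assumption-laden sketch, not the authors' code.
from tensorflow.keras import callbacks, layers, models

N_FEATURES = 40   # 8 aggregated attributes for each of the 5 servers
SEQ_LEN = 1440    # one day of per-minute records

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(512),                    # captures daily/weekly traffic cycles
    layers.Dense(1, activation="relu"),  # predicted (scaled) request count
])
model.compile(optimizer="rmsprop", loss="mse")

early_stop = callbacks.EarlyStopping(monitor="val_loss",
                                     restore_best_weights=True)
# Training would follow the paper's setup (90:10 split, 20 epochs of 100 steps):
# model.fit(x_train, y_train, validation_split=0.1, epochs=20,
#           steps_per_epoch=100, callbacks=[early_stop])
```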
Table 4. RNN comparison (average loss from 5 runs)
Type Units Optimizer Activation Loss on the train set Loss on the test set
GRU 512 RMSprop sigmoid 0.0040 0.0041
LSTM 512 RMSprop sigmoid 0.0039 0.0039
GRU 512 Adam sigmoid 0.0041 0.0042
LSTM 512 Adam sigmoid 0.0042 0.0043
GRU 512 RMSprop ReLU 0.0046 0.0047
LSTM 512 RMSprop ReLU 0.0038 0.0038
LSTM 256 RMSprop ReLU 0.0054 0.0054
LSTM 1024 RMSprop ReLU 0.0295 0.0295
Figure 1. Model’s architecture
Figure 2. Model’s description
The update curve prediction trained on one week of data from five servers achieved an MSE loss of
0.00399 on the training set and an MSE loss of 0.0040 on the test set, but it did not represent the peaks
correctly enough, as shown in Figure 3. The prediction trained on two weeks of data achieved an MSE loss of
0.0015 on the training set and the same loss on the test set; however, it still did not represent the peaks
correctly enough, despite being better than using one week of data. For the final training, we used three
months of data; the model's performance can be seen in Figure 4. From this data, the model was able to
deduce the update patterns.
Figure 3. Prediction on 1-week train data
Figure 4. Prediction on train data on three months data
5. CLASSIFICATION
For test data, the model obtained an MSE loss of 0.0021 with a mean absolute error (MAE) of
0.0305 using data representing 489,600 records collected over three months. Figure 4 depicts the prediction
on a sample of 10,000 training records, whereas Figure 5 depicts the prediction on a sample of 10,000 test
records. Although the loss was higher after two weeks of training, the model was still able to predict the
update peaks, which was critical for our future anomaly classification.
Figure 5. Prediction on test data on three months data
The rules for distinguishing between the anticipated update curve and real traffic were developed
based on the analysis of historical data and network behavior. The system was designed to detect anomalies
such as sudden spikes or drops in network activity, as well as unusual patterns in traffic that deviated
significantly from the expected update curve. By detecting these anomalies, the system could alert network
administrators to potential issues and allow them to take corrective actions before they impacted network
performance or caused downtime. This capability of the system ensured smooth network operations and
reduced the risk of business interruptions.
− Missed update: the anticipated load, which typically lasts several hours, does not occur.
− Unusually large update: the update was not fully served within an hour because the load did not subside;
clients may not have received everything yet, and the load persisted for several hours.
− Unexpected traffic: clients began downloading without us expecting it, indicating a higher load. Someone
may have released an additional update, or it may be New Year's Day or the day after a holiday.
− Degraded performance: the amount of traffic is substantially lower than expected.
The actual and predicted update curves were both windowed using a 5-minute window for
classification, and the trends of both curves were observed. Unexpected traffic was defined as an increasing
trend in the actual update curve but a decreasing trend in the prediction. It was classified as a missed update
if the actual update curve had a descending trend but the prediction showed an ascending trend. When the
trends matched, the remaining two classes were determined using a threshold on the window's maximum
value. Degraded performance occurs when the maximum value of the actual curve falls below the maximum
value of the predicted curve by at least the threshold. An update is classified as unusually large if the
maximum value of the actual curve exceeds the predicted curve's maximum by at least the threshold.
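The windowed classification rules above can be sketched as follows; the function names, the simple endpoint-based trend test, and the fall-through "normal" class are illustrative assumptions, not the authors' implementation:

```python
# Sketch of the 5-minute-window classification rules described in the text.
# Trend is reduced to rising/falling via the window endpoints (assumption).

def trend(window):
    """Rising if the window ends at or above where it starts, else falling."""
    return "up" if window[-1] >= window[0] else "down"

def classify_window(actual, predicted, threshold):
    """Classify one window of actual vs. predicted request counts."""
    if trend(actual) == "up" and trend(predicted) == "down":
        return "unexpected traffic"
    if trend(actual) == "down" and trend(predicted) == "up":
        return "missed update"
    # Trends match: compare the windows' maxima against the threshold.
    diff = max(actual) - max(predicted)
    if diff >= threshold:
        return "unusually large update"
    if -diff >= threshold:
        return "degraded performance"
    return "normal"
```

With a threshold of 20,000, a window whose actual counts climb while the prediction falls would be reported as unexpected traffic, matching the rule stated above.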
The actual and predicted curves for one day are depicted in detail in Figure 6. At the peak, where the
selected time point is located, the system generates three notifications. The window was initially classified as
unexpected traffic because the actual curve was rising whereas the predicted curve was not. Then, because
the actual update reached larger numbers of requests (“count”), it was classified as an unusually large
update, and finally a missed update classification was generated because the actual traffic was falling while
the prediction curve was growing.
Figure 6. Real vs. one day ahead predicted traffic
Two variables can be altered when classifying in real time and generating alerts: the
classification window and the threshold at which we decide whether to generate an alert. The classifier
generates a large number of alerts when employing a 5-minute window, which provides a good real-time
reaction time, and a threshold of 20,000, as shown in Table 5. When we increase the threshold to 30,000, we
generate exactly the alerts that are visually apparent in Figure 6; Table 6 lists these alerts. Increasing the
threshold further caused us to lose information about the first delayed update; therefore, we settled on a
threshold of 30,000.
Table 5. Alerts generated for Figure 6
Time point (minutes) Alert
20-25 missed update
25-30 missed update
30-35 degraded performance
35-40 degraded performance
40-45 degraded performance
45-50 degraded performance
50-55 degraded performance
55-60 missed update
90-95 unusually large update
95-100 unusually large update
100-105 unusually large update
105-110 missed update
110-115 unusually large update
115-120 unusually large update
335-340 missed update
345-350 missed update
640-645 unusually large update
645-650 unusually large update
650-655 unusually large update
655-660 missed update
660-665 unexpected traffic
665-670 unusually large update
670-675 missed update
675-680 missed update
680-685 unusually large update
685-690 unusually large update
690-695 unusually large update
695-700 missed update
700-705 unusually large update
705-710 degraded performance
710-715 degraded performance
715-720 degraded performance
720-725 degraded performance
725-730 unexpected traffic
730-735 unexpected traffic
735-740 unexpected traffic
820-825 missed update
825-830 missed update
830-835 unexpected traffic
840-845 unusually large update
855-860 unusually large update
860-865 unusually large update
865-870 unexpected traffic
870-875 unusually large update
875-880 unusually large update
880-885 unexpected traffic
885-890 unusually large update
890-895 unusually large update
895-900 unusually large update
1000-1005 unexpected traffic
1005-1010 missed update
1010-1015 missed update
1050-1055 unusually large update
1065-1070 degraded performance
1070-1075 unusually large update
1275-1280 unusually large update
1280-1285 unusually large update
Table 6. Alerts generated for Figure 6 using a threshold of 30,000
Time point (minutes) Alert
20-25 missed update
25-30 missed update
30-35 degraded performance
35-40 degraded performance
40-45 degraded performance
45-50 degraded performance
50-55 degraded performance
55-60 missed update
115-120 unusually large update
640-645 unusually large update
645-650 unusually large update
650-655 unusually large update
655-660 missed update
660-665 unexpected traffic
665-670 unusually large update
670-675 missed update
675-680 missed update
680-685 unusually large update
685-690 unusually large update
690-695 unusually large update
695-700 missed update
705-710 degraded performance
710-715 degraded performance
715-720 degraded performance
720-725 degraded performance
725-730 unexpected traffic
730-735 unexpected traffic
820-825 missed update
825-830 missed update
830-835 unexpected traffic
840-845 unusually large update
855-860 unusually large update
860-865 unusually large update
865-870 unexpected traffic
870-875 unusually large update
875-880 unusually large update
880-885 unexpected traffic
885-890 unusually large update
890-895 unusually large update
1000-1005 unexpected traffic
1005-1010 missed update
1010-1015 missed update
1065-1070 degraded performance
6. RESULTS AND DISCUSSION
In the first stage, various regression models were tested to determine which best predicts the network traffic curve. The selected model was then used as a component in the second stage, which focused on classifying network anomalies based on the predicted update curve. This two-stage assessment ensured that the developed system could both accurately predict network traffic and classify its anomalies.
6.1. Prediction evaluation
The first element of our proposed methodology is an LSTM neural network that takes aggregated HTTP log data as input and solves a regression problem: predicting the future traffic load so that it can be compared with the real traffic curve and occurring anomalies can be classified. It is very challenging to determine in advance how accurate the model must be to reveal the anomalies we seek. We therefore concentrated on comparing the actual curve with the predicted curve to determine how effectively the model identifies anomalies and whether it classifies them accurately. A comparison of the model's prediction accuracy with other studies is given in section 7 (discussion). The alerts produced by the prediction and classification stages are listed in Table 5.
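The recurrence at the heart of such an LSTM regression model can be illustrated with a minimal NumPy forward pass of a single LSTM cell. The weight shapes, random initialization, and linear read-out here are purely illustrative assumptions; this is a sketch of the mechanism, not the paper's trained network.

```python
import numpy as np

# Minimal forward pass of one LSTM cell over an input sequence, illustrating
# the recurrence behind the regression model. Weights are randomly
# initialized; the paper's trained network is not reproduced here.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, Wx, Wh, b, hidden):
    """xs: (T, input_dim) sequence; returns hidden states of shape (T, hidden)."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = []
    for x in xs:
        z = Wx @ x + Wh @ h + b                        # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)                    # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)                     # cell-state update
        h = o * np.tanh(c)                             # new hidden state
        outputs.append(h)
    return np.stack(outputs)

rng = np.random.default_rng(0)
T, input_dim, hidden = 12, 3, 8                        # e.g. 12 five-minute aggregates
Wx = rng.normal(0, 0.1, (4 * hidden, input_dim))
Wh = rng.normal(0, 0.1, (4 * hidden, hidden))
b = np.zeros(4 * hidden)
hs = lstm_forward(rng.normal(size=(T, input_dim)), Wx, Wh, b, hidden)
prediction = hs[-1] @ rng.normal(0, 0.1, hidden)       # linear read-out to one load value
```

The final hidden state is mapped to a single value, mirroring how the model emits one predicted load per future window.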
6.2. Expert evaluation
The second component of the system is a real-time windowing classifier that works with both the
real traffic and the prediction curve of the regression model. Because we did not have a labelled dataset, the correctness of the system could not be measured against ground-truth labels; instead, we relied on expert analysis against which the model alerts were matched. The following compares the assessment of ESET expert Ing. Matej Březina, who works in the Update Systems Department of ESET Internal Systems, with the output of our proposed approach.
The expert stated that there were missed updates on February 24th, ranging from midnight to 10 am. As seen in Figure 7, the model predicted the updates, visible as two red peaks between time points 0 and 400, marked by the yellow rectangle. The next update occurred at 10 am, which corresponds to time point 600 in our sequence. The model anticipated a further update at time point 1,060, which the expert noted is too late, possibly because of the large volume of data without automatic update release (updates are currently not released automatically, so a delay in updates is not an issue, and the model is therefore not as accurate as it could be with automatic updates). The update actually took place at time point 1,000, as is typical. According to the expert, the rest of the sequence was normal as it did not cause any issues.
Figure 7. Real vs. one day ahead predicted traffic from February 24th
The classifier alerts, as shown in Table 7, correspond to the expert analysis and are thus considered correct. Because the classifier is designed to create alerts in real time, it generates not only missed update alerts but also degraded performance alerts when the curve is no longer rising or falling. The classifier again issued alerts at the time points referred to by the expert, as shown in Table 8. These categories correspond to the established threshold and trend guidelines. The classifier sent similar notifications for both missed updates: a missed update notice followed by a degraded performance alert, indicating that the update did not occur. In both cases, the classifier detected further "missed updates" as the predicted update curve began to rise again while real traffic did not increase.
These alarms could be avoided by smoothing both the predicted and actual update curves, for example with a rolling mean; however, this would reduce our accuracy, which is undesirable. It is preferable to send an alert as soon as a problem arises, and a smoothed curve would cost us valuable time. The same applies to unexpected traffic, which occurs when the actual traffic increases despite the prediction trend.
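The smoothing trade-off can be demonstrated numerically: a rolling mean suppresses short blips but delays the moment a sustained drop crosses an alert threshold. The series, window length, and threshold below are invented for illustration only.

```python
import numpy as np

# Illustration of the smoothing trade-off: a rolling mean delays detection
# of a sustained traffic drop. Values and window length are illustrative.

def rolling_mean(x, w):
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="valid")

traffic = np.array([100, 100, 100, 0, 0, 0, 0, 0], dtype=float)  # update stops at index 3
threshold = 50.0

raw_alert = int(np.argmax(traffic < threshold))        # first raw window below threshold
smooth = rolling_mean(traffic, 3)
# smooth[i] averages traffic[i..i+2], so in real time the alert fires only
# once the last sample of that window has arrived: offset by w - 1 = 2.
smooth_alert = int(np.argmax(smooth < threshold)) + 2
```

On this toy series, the raw curve triggers at window 3 while the smoothed curve triggers at window 4: one 5-minute window of lost reaction time.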
The server was flapping between 9 and 11 a.m. on February 26th, 2020, as shown in Figure 8, near the 600 time point, in dark gray marked by the rectangle. Because certain server logs were missing due to external difficulties, the model input comprised four days of data to achieve a four-day-ahead prediction. The model was able to anticipate how typical traffic would look if the server were not flapping, shown in red around the 600 time point, which corresponds to the light gray line.
Table 7. Alerts generated for the first update expected after midnight
Time point (minutes) Alert
35-40 missed update
40-45 degraded performance
45-50 degraded performance
50-55 degraded performance
55-60 missed update
60-65 degraded performance
65-70 unexpected traffic
70-75 degraded performance
75-80 degraded performance
80-85 degraded performance
85-90 degraded performance
Table 8. Alerts generated for the second update expected around 5 a.m.
Time point (minutes) Alert
330-335 missed update
335-340 degraded performance
340-345 degraded performance
345-350 missed update
350-355 degraded performance
355-360 degraded performance
360-365 degraded performance
365-370 degraded performance
370-375 degraded performance
375-380 unexpected traffic
380-385 degraded performance
385-390 degraded performance
Figure 8. Real vs. 4 days ahead predicted traffic from February 26th
7. DISCUSSION
The first component of the system is an LSTM neural network that solves a regression problem with highly specific data as input. As a result, few studies are directly comparable. The majority of related studies focus on network traffic prediction using the GEANT backbone network dataset, whose traffic matrices differ from our input data: they contain universal traffic data, whereas our update server data is not only very specific but also derived solely from HTTP log features. The prediction MSE reported for an LSTM network employing traffic matrices was approximately 0.042 [22], whereas our model attained an MSE of 0.0025; however, these figures are not directly comparable. Consequently, the following comparison is based on articles that deal with similar issues and use a similar, though not identical, kind of dataset, in our case network data.
Ramakrishnan and Soni [23] collected network packets between two virtual machines using Wireshark, containing HTTP, TCP, and ICMP requests from their local network, to compare the performance of RNNs in this area. The results of comparing our proposed system's request-count prediction with their packet-count forecasts are listed in Table 9. Both of our models performed slightly better, although the results were similar.
Table 9. Comparison with Ramakrishnan and Soni [23]. For the GRU column, the LSTM in our proposed neural network was replaced with a GRU
Model GRU LSTM
Comparison MSE 0.0030 0.0028
MSE of our proposed system 0.0028 0.0025
Nihale et al. [24] compared different LSTM models for network traffic prediction on six time series from R. J. Hyndman, captured at 5-minute, hourly, and daily intervals. The daily results of our model were compared with the daily results from their paper, as shown in Table 10. The better results of our proposed model can be explained by our combining traffic from multiple servers in the same region, which yields better forecasts because the curves are similar.
Table 10. Comparison with various models mentioned in the paper by Nihale et al. [24] using normalized root mean square error (NRMSE)
Model: autoregressive integrated moving average (ARIMA)+RNN, vanilla LSTM (VLSTM), delta LSTM (DLSTM), cluster LSTM (CLSTM), clustered delta LSTM (CDLSTM), proposed system LSTM
NRMSE: 0.115, 0.120, 0.114, 0.112, 0.109, 0.042
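The MSE and NRMSE metrics used throughout Tables 9-11 can be computed from prediction and ground-truth vectors as below. Note that NRMSE has several normalization conventions and the paper does not state which one it uses; normalizing by the range of the observations here is our assumption.

```python
import numpy as np

# MSE and one common NRMSE variant (RMSE normalized by the range of the
# observations). The paper does not specify its NRMSE normalization, so
# range-normalization here is an assumption.

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def nrmse(y_true, y_pred):
    y_true = np.asarray(y_true, float)
    return float(np.sqrt(mse(y_true, y_pred)) / (y_true.max() - y_true.min()))

y_true = [0.0, 0.5, 1.0, 0.5]   # toy normalized traffic values
y_pred = [0.1, 0.5, 0.9, 0.5]
```

On this toy pair, the MSE is 0.005 and the NRMSE is its square root divided by the observed range of 1.0.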
The same R. J. Hyndman dataset was used in the study by Oliveira et al. [25] to compare a multilayer perceptron with backpropagation (MLP-BP) as the training algorithm, a multilayer perceptron with resilient backpropagation (MLP-RP), an RNN, and a deep learning stacked autoencoder (SAE). These models are compared with the proposed model in Table 11. The improved results can be explained by the specialized use of our model. As we could not find any similar-sized NGINX log dataset for comparison, we evaluated our model's performance on the web server access logs dataset containing 3.3 GB of web server logs from the Iranian shopping website zanbil.ir. A sample log can be seen in Figure 9.
The shopping dataset did not contain all the attributes logged in our dataset; we therefore had to retrain our model so that it only considered the attributes present in the shopping dataset. This meant removing the body bytes sent attribute and the server number, as the target dataset only contained logs from one server, whereas we used data gathered from 5 servers in the same region. After parsing and aggregating the shopping logs, the dataset consisted of 6,754 rows, whereas our dataset from one server consisted of 97,920 rows. We extracted a random sample of the same length from our dataset and trained our model on both the ESET and shopping datasets. The retrained model achieved an MSE of 0.0225 on the ESET sample test set and an MSE of 0.035 on the shopping dataset test set.
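The parsing-and-aggregation step mentioned above can be sketched for NGINX-style access logs. The regular expression follows the common combined log format (as in the Kaggle sample of Figure 9); the sample lines and the `window_of` helper are our own illustration, not the paper's parser.

```python
import re
from collections import Counter
from datetime import datetime

# Parsing NGINX-style access-log lines (combined log format) and aggregating
# request counts into 5-minute windows. The sample lines below are invented
# for illustration; the paper's actual parser is not reproduced here.

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def window_of(line: str) -> str:
    """Return the 5-minute window key (e.g. '2020-02-24 10:00') for one line."""
    m = LOG_RE.match(line)
    ts = datetime.strptime(m.group("time"), "%d/%b/%Y:%H:%M:%S %z")
    return ts.strftime("%Y-%m-%d %H:") + f"{ts.minute - ts.minute % 5:02d}"

lines = [
    '1.2.3.4 - - [24/Feb/2020:10:03:11 +0000] "GET /update HTTP/1.1" 200 512',
    '1.2.3.5 - - [24/Feb/2020:10:04:59 +0000] "GET /update HTTP/1.1" 200 128',
    '1.2.3.6 - - [24/Feb/2020:10:07:02 +0000] "GET /update HTTP/1.1" 404 0',
]
counts = Counter(window_of(line) for line in lines)
```

Each parsed timestamp is floored to its 5-minute boundary, giving the per-window request counts that feed the regression model.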
The training performance of our model on the shopping dataset can be observed in Figure 10. The figure shows the model predicting the traffic curve very well; it also attempts to model the peaks and to smooth out possible anomalies. To be more precise, the model needs more training data. Still, this demonstrates the usability of our model for predicting traffic from any web server log source.
Table 11. Comparison with various models mentioned in the paper by Oliveira et al. [25]
Model MLP-BP MLP-RP RNN SAE Proposed system LSTM
NRMSE 0.200 0.202 0.197 0.336 0.042
Figure 9. Web server access logs Kaggle dataset sample
Figure 10. Model train performance on the web server access logs dataset
The test performance of our model on the shopping dataset can be seen in Figure 11. The model learned the traffic peak behavior but was not able to model the peaks clearly. We observed exactly this behavior when we trained the model on our two-week data sample, and performance improved with more data. With three months of data, the model was able to predict the peaks, which made it possible to use the model for our anomaly detection. With the shopping data, we only had around 7% of the data our model was trained on. Unfortunately, all publicly available datasets are small, and this was the largest we were able to find at the time of writing this article. Because our model is trained on real traffic rather than traffic filtered to remove anomalies, we need millions of requests to lower the weight of anomalies in the training set so that we can later detect them in the test set. We still think this proves the usefulness of our approach; it simply needs a longer period of data to model the peaks more precisely.
Figure 11. Model test performance on the web server access logs dataset
8. CONCLUSION
We proposed a method for detecting and classifying anomalies in HTTP logs that requires neither extensive expert analysis to remove anomalies from real-time data nor data labelling, which demands expert domain and system knowledge. Numerous studies claim successful prediction and classification outcomes, but they frequently do so on a single dataset and are not universally applicable to other datasets. We present a methodology applicable to any dataset consisting of HTTP logs. We examined the model in an actual network anomaly setting, where we demonstrated the value of our work. We were able to predict and categorize anomalies without the aid of a labelled dataset, demonstrating the model's usability while also gathering a substantial dataset from noisy network traffic. The proposed LSTM-based intrusion detection system was tested in a real-world setting and successfully detected various intrusions while in use, thereby demonstrating its usefulness. Using 5-minute windowing over both the predicted and real update curves, we divided anomalies into four categories: unexpected traffic (the load was high and we did not expect an update), unusually large update (we issued an update and the load was higher than usual), missed update (the load was low but we did expect an update), and degraded performance (the load of an issued update was lower than usual). The system's overall findings and neural network performance, achieving an MSE of 0.0021 on our collected noisy dataset, were compared with existing solutions and expert analysis, where our NRMSE and MSE values were up to an order of magnitude better. This proves the system's usability: it provides sufficient update-curve prediction accuracy on noisy web server traffic, even without automatic updates, and reacts to occurring events in almost real time, which is essential for taking action against threats attacking the system and its infrastructure.
The features used for anomaly detection were not novel but were chosen specifically because they represent a straightforward and generalizable methodology that can be applied to any HTTP log data. The goal was to create a model that could be used with minimal preprocessing or feature engineering, making it easy to apply to a wide range of datasets and scenarios. By using well-established features commonly found in HTTP log data, we sought a model that is simple to understand and implement while still performing well in anomaly detection. Overall, the focus was on a straightforward and effective approach to anomaly detection rather than on developing novel or cutting-edge features. The novelty of this work lies in using only HTTP web server logs to detect web server network intrusions in near real time, without the need for a labelled or cleaned dataset. We not only compared our model's performance with that of other models trained on similar data, but also deployed the model in the real-world setting of an antivirus company to demonstrate its practical value and to study how well the model performs in production. The antivirus company validated the value of our approach through its ability to react promptly to emerging anomalies. The techniques we described are not novel in themselves, but we presented a new methodology for applying existing techniques to live data that already contains anomalies, without having to label them, allowing us to identify anomalies and assist professionals in their work. This methodology enables the use of any dataset composed of HTTP logs as input data, providing a robust solution that performs effectively on a variety of datasets.
ACKNOWLEDGEMENTS
We would like to thank ESET Research Centre for its cooperation. We would also like to express
our gratitude to Matej Březina for providing extensive information regarding ESET update servers. This
publication has been written thanks to the support of the Operational Programme Integrated Infrastructure for
the project: Research in the SANET network and possibilities of its further use and development (ITMS
code: 313011W988), co-funded by the European Regional Development Fund (ERDF).
REFERENCES
[1] X. Ni, D. He, S. Chan, and F. Ahmad, “Network anomaly detection using unsupervised feature selection and density peak
clustering,” in Applied Cryptography and Network Security, Springer International Publishing, 2016, pp. 212–227.
[2] I. Syarif, A. Prugel-Bennett, and G. Wills, “Unsupervised clustering approach for network anomaly detection,” in Networked
Digital Technologies, Springer Berlin Heidelberg, 2012, pp. 135–145.
[3] N. Chouhan, A. Khan, and H.-R. Khan, “Network anomaly detection using channel boosted and residual learning based deep
convolutional neural network,” Applied Soft Computing, vol. 83, Oct. 2019, doi: 10.1016/j.asoc.2019.105612.
[4] M. H. Bhuyan, D. K. Bhattacharyya, and J. K. Kalita, Network traffic anomaly detection and prevention: concepts, techniques,
and tools. Springer, 2017.
[5] S. Mascaro, A. E. Nicholso, and K. B. Korb, “Anomaly detection in vessel tracks using Bayesian networks,” International
Journal of Approximate Reasoning, vol. 55, no. 1, pp. 84–98, Jan. 2014, doi: 10.1016/j.ijar.2013.03.012.
[6] A. A. H. Riyaz, F. Nasaruddin, A. Gani, I. A. T. Hashem, E. Ahmed, and M. Imran, “Real-time big data processing for anomaly
detection: A Survey,” International Journal of Information Management, vol. 45, pp. 289–307, Apr. 2019, doi:
10.1016/j.ijinfomgt.2018.08.006.
[7] L. Nie, D. Jiang, and Z. Lv, “Modeling network traffic for traffic matrix estimation and anomaly detection based on Bayesian
network in cloud computing networks,” Annals of Telecommunications, vol. 72, no. 5–6, pp. 297–305, Jun. 2017, doi:
10.1007/s12243-016-0546-3.
[8] S. Pauwels and T. Calders, “Extending dynamic bayesian networks for anomaly detection in complex logs,” arXiv preprint
arXiv:1805.07107, May 2018.
[9] G. Poojitha, K. N. Kumar, and P. J. Reddy, “Intrusion detection using artificial neural network,” in 2010 Second International
conference on Computing, Communication and Networking Technologies, Jul. 2010, pp. 1–7, doi:
10.1109/ICCCNT.2010.5592568.
[10] M. Ahmed, A. N. Mahmood, and J. Hu, “A survey of network anomaly detection techniques,” Journal of Network and Computer
Applications, vol. 60, pp. 19–31, Jan. 2016, doi: 10.1016/j.jnca.2015.11.016.
[11] M. Sakurada and T. Yairi, “Anomaly detection using autoencoders with nonlinear dimensionality reduction,” in Proceedings of
the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, Dec. 2014, pp. 4–11, doi:
10.1145/2689746.2689747.
[12] V. L. Cao, M. Nicolau, and J. McDermott, “A hybrid autoencoder and density estimation model for anomaly detection,” in
Parallel Problem Solving from Nature PPSN XIV, Springer International Publishing, 2016, pp. 717–726.
[13] Z. Zhao, W. Chen, X. Wu, P. C. Y. Chen, and J. Liu, “LSTM network: a deep learning approach for short‐term traffic forecast,”
IET Intelligent Transport Systems, vol. 11, no. 2, pp. 68–75, Mar. 2017, doi: 10.1049/iet-its.2016.0208.
[14] R. Fu, Z. Zhang, and L. Li, “Using LSTM and GRU neural network methods for traffic flow prediction,” in 2016 31st Youth
Academic Annual Conference of Chinese Association of Automation (YAC), Nov. 2016, pp. 324–328, doi:
10.1109/YAC.2016.7804912.
[15] A. Lazaris and V. K. Prasanna, “An LSTM framework for modeling network traffic,” in 2019 IFIP/IEEE Symposium on
Integrated Network and Service Management (IM), 2019, pp. 19–24.
[16] H. Lu and F. Yang, “A network traffic prediction model based on wavelet transformation and LSTM network,” in 2018 IEEE 9th
International Conference on Software Engineering and Service Science (ICSESS), Nov. 2018, pp. 1–4, doi:
10.1109/ICSESS.2018.8663884.
[17] H. D. Trinh, L. Giupponi, and P. Dini, “Mobile traffic prediction from raw data using LSTM networks,” in 2018 IEEE 29th
Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Sep. 2018, pp. 1827–1832,
doi: 10.1109/PIMRC.2018.8581000.
[18] Q. Zhuo, Q. Li, H. Yan, and Y. Qi, “Long short-term memory neural network for network traffic prediction,” in 2017 12th
International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nov. 2017, pp. 1–6, doi:
10.1109/ISKE.2017.8258815.
[19] S. Naseer et al., “Enhanced network anomaly detection based on deep neural networks,” IEEE Access, vol. 6, pp. 48231–48246,
2018, doi: 10.1109/ACCESS.2018.2863036.
[20] R. K. Malaiya, D. Kwon, J. Kim, S. C. Suh, H. Kim, and I. Kim, “An empirical evaluation of deep learning for network anomaly
detection,” in 2018 International Conference on Computing, Networking and Communications (ICNC), Mar. 2018, pp. 893–898,
doi: 10.1109/ICCNC.2018.8390278.
[21] T.-Y. Kim and S.-B. Cho, “Web traffic anomaly detection using C-LSTM neural networks,” Expert Systems with Applications,
vol. 106, pp. 66–76, Sep. 2018, doi: 10.1016/j.eswa.2018.04.004.
[22] R. Vinayakumar, K. P. Soman, and P. Poornachandran, “Applying deep learning approaches for network traffic prediction,” in
2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Sep. 2017,
pp. 2353–2358, doi: 10.1109/ICACCI.2017.8126198.
[23] N. Ramakrishnan and T. Soni, “Network traffic prediction using recurrent neural networks,” in 2018 17th IEEE International
Conference on Machine Learning and Applications (ICMLA), Dec. 2018, pp. 187–193, doi: 10.1109/ICMLA.2018.00035.
[24] S. Nihale, S. Sharma, L. Parashar, and U. Singh, “Network traffic prediction using long short-term memory,” in 2020
International Conference on Electronics and Sustainable Communication Systems (ICESC), Jul. 2020, pp. 338–343, doi:
10.1109/ICESC48915.2020.9156045.
[25] T. P. Oliveira, J. S. Barbar, and A. S. Soares, “Computer network traffic prediction: a comparison between traditional and deep
learning neural networks,” International Journal of Big Data Intelligence, vol. 3, no. 1, 2016, doi: 10.1504/IJBDI.2016.073903.
BIOGRAPHIES OF AUTHORS
Lenka Benova received her Bachelor's degree in Computer Engineering in 2019 from the Faculty of Informatics and Information Technologies of the Slovak University of Technology in Bratislava. In 2021, she received her Master's degree in Computer Science at the same university. She is currently a Ph.D. student in Applied Informatics at the Faculty of Informatics and Information Technologies, Slovak University of Technology, focusing on increasing web server security. She can be contacted at email: lenka.benova@stuba.sk.
Ladislav Hudec received his Ing diploma summa cum laude in electronics from the Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Prague, in 1974. In 1985, he received a C.Sc degree (Ph.D.) in Computer Machinery from the Faculty of Electrical Engineering, Slovak Technical University, Bratislava, and in 1989 he was appointed Associate Professor. Since 1974, he has been with the Slovak Technical University. From 1992 to 1993, he served as Director of the SARC-Centre for Advancement, Science and Technology, Bratislava. From 1993 to 2010, he served as National Coordinator for the European Cooperation in Science and Technology (COST). He is currently an Associate Professor of Computer Science and Engineering and Deputy Director of the Institute of Computer Engineering and Applied Informatics, Faculty of Informatics and Information Technology, Slovak University of Technology. He lectures on principles of information technology security, security of information technologies, and Internet security. He is the author or co-author of over 60 scientific papers published in journals and conference proceedings and over 80 technical papers in the fields of fault-tolerant computing, embedded systems, and computer security. He has led over 90 research grants and industrial contracts. He is a member of the Information Systems Audit and Control Association (ISACA) and a holder of the CISA license. He can be contacted at email: ladislav.hudec@stuba.sk.