Hyperparameter optimization remains one of the main challenges in deep learning, despite the field's success in areas such as image classification, speech recognition, natural language processing, and fraud detection. Hyperparameters are critical because they control the learning process of a model and must be tuned to improve performance. Tuning hyperparameters manually, starting from default values, is a challenging and time-intensive task. Although the time and effort spent on tuning is decreasing, it remains a burden whenever a new dataset arrives, a new task must be solved, or an existing model must be improved. In this paper, we propose a custom genetic algorithm to auto-tune the hyperparameters of a deep learning sequential model that classifies benign and malicious traffic in the Internet of Things-23 (IoT-23) dataset, captured by the Czech Technical University, Czech Republic. The dataset is a collection of 30.85 million records of malicious and benign traffic. The experimental results show a promising accuracy of 98.9%.
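The custom genetic algorithm itself is not detailed in this abstract, so the following is only a minimal sketch of the general technique: a population of hyperparameter settings is evolved through selection, crossover, and mutation. The search space, fitness function, and all numeric values here are illustrative placeholders, not the paper's actual model or dataset.

```python
import random

random.seed(42)

# Hypothetical search space: learning rate, batch size, hidden units.
SPACE = {
    "lr": [1e-4, 1e-3, 1e-2, 1e-1],
    "batch": [32, 64, 128, 256],
    "units": [16, 32, 64, 128],
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    # Placeholder for "validation accuracy of the trained model":
    # peaks at lr=1e-3, batch=64, units=64 by construction.
    score = 0.0
    score += 1.0 - abs(SPACE["lr"].index(ind["lr"]) - 1) * 0.2
    score += 1.0 - abs(SPACE["batch"].index(ind["batch"]) - 1) * 0.2
    score += 1.0 - abs(SPACE["units"].index(ind["units"]) - 2) * 0.2
    return score

def crossover(a, b):
    # Uniform crossover: each gene copied from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    # Each gene is re-sampled from the space with probability `rate`.
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SPACE.items()}

def evolve(pop_size=20, generations=15):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a real setting, `fitness` would train the sequential model with the candidate hyperparameters and return its validation accuracy, which makes each generation expensive; population size and generation count must be budgeted accordingly.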
Proposed T-Model to cover 4S quality metrics based on empirical study of root...IJECEIAES
There are various root causes of software failures. A few years ago, software used to fail mainly due to functionality-related bugs, caused by requirement misunderstanding, code issues, and a lack of functional testing. A lot of work has been done on this in the past, and software engineering has matured over time, so software now rarely fails because of functionality-related bugs. To understand the most recent failures, we had to examine recent software development methodologies and technologies. In this paper we discuss the background of technologies and the progression of testing over time. A survey of more than 50 senior IT professionals was conducted to understand the root causes of their software project failures. It was found that most software today fails due to a lack of testing of non-functional parameters. Extensive research was also done to identify the most recent and most severe software failures. Our study reveals that the main reason for software failures today is a lack of testing of non-functional requirements, which mainly consist of security and performance parameters. This has become more challenging due to rapid developments in new technologies such as the Internet of Things (IoT), Cloud of Things (CoT), artificial intelligence, machine learning, and robotics, and the pervasive use of mobile devices and technology by the masses. Finally, we propose a software development model, called the T-model, to ensure that both the breadth and the depth of software are considered while designing and testing it.
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSISpijans
The easy access to information on social media networks, together with its exponential growth, has made it difficult to distinguish between fake news and real information. Rapid dissemination through sharing has amplified its falsification exponentially. Preventing the spread of fake information is also essential for the credibility of social media networks. It is therefore an emerging research task to automatically check statements for misinformation through their source, content, or author, and to stop unauthenticated sources from spreading rumours. This paper demonstrates an artificial-intelligence-based approach for identifying fake statements made by social network entities. Variants of deep neural networks are applied to evaluate datasets and test for the presence of fake news. The implementation achieved up to 99% classification accuracy when the dataset was tested with binary (real or fake) labelling over multiple epochs.
Network security is one of the foremost concerns of the modern era. Over the past years, numerous studies have been conducted on intrusion detection systems. Network security remains a pressing concern because of the rapid development and widespread adoption of diverse technologies over the past decade, whose security vulnerabilities have become a major challenge. An intrusion detection system is used to identify unauthorized access and unusual attacks on secured networks. Different approaches are used to implement intrusion detection systems, and machine learning is one of them. This review was conducted to capture the present state of applying machine learning techniques to intrusion and anomaly detection in internet of things (IoT)-based big data. In total, 55 papers published between 2010 and 2021 are summarized, focusing on single, hybrid, and collaborative classifier designs. The review also covers background material on IoT, big data, and machine learning approaches.
SECURITY AND PRIVACY AWARE PROGRAMMING MODEL FOR IOT APPLICATIONS IN CLOUD EN...ijccsa
The introduction of Internet of Things (IoT) applications into daily life has raised serious privacy concerns among consumers, network service providers, device manufacturers, and other parties involved. This paper gives a high-level overview of the three phases of data collection, transmission, and storage in IoT systems, together with current privacy-preserving technologies. The following elements were investigated across these three phases: (1) physical- and data-link-layer security mechanisms, (2) network remedies, and (3) techniques for distributing and storing data. Real-world systems frequently span multiple phases and incorporate a variety of methods to guarantee privacy; therefore, a thorough understanding of all phases and their technologies can benefit IoT research, design, development, and operation. This study introduces two independent methodologies, generic differential privacy (GenDP) and cluster-based differential privacy (Cluster-based DP), for handling metadata as intents and intent scope to maintain the privacy and security of IoT data in cloud environments. Cloud environments let us virtualize and connect enormous numbers of devices, obtain a clearer understanding of the IoT architecture, and store data persistently. However, because of the dynamic nature of the environment, the diversity of devices, the ad hoc requirements of multiple stakeholders, and hardware or network failures, creating security-, privacy-, safety-, and quality-aware IoT applications is very challenging, and it is becoming ever more important to improve data privacy and security through appropriate data acquisition. The proposed approach achieved lower loss than Support Vector Machine (SVM) and Random Forest (RF) baselines.
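The GenDP and Cluster-based DP algorithms themselves are not specified in this abstract. As background only, the classic Laplace mechanism illustrates the basic differential-privacy idea the paper builds on: noise calibrated to sensitivity/epsilon is added to a query result. The epsilon value and the device-count scenario below are illustrative assumptions.

```python
import math
import random

random.seed(0)

def laplace_noise(scale):
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(true_count, epsilon, sensitivity=1.0):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon -> more noise -> stronger privacy guarantee.
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: privatize a count of IoT devices reporting some intent.
noisy = dp_count(1000, epsilon=0.5)
```

Because a single device joining or leaving changes a count by at most 1, the sensitivity of a counting query is 1, which is why the noise scale reduces to 1/epsilon here.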
Digital Evidence Analysing Industry in India.pdfGeeta Adhikari
With the rise in digital threats and cybercrime, the digital forensics market in India is undergoing successive changes, such as the integration of artificial intelligence, marking its overall growth.
The Internet is a driving force in how we communicate with one another, from posting messages and images on Facebook to "tweeting" your activities from your vacation. It is used everywhere today. Now imagine a device that connects to the Internet and sends out data based on its sensors: this is the Internet of Things, a network of objects equipped with a plethora of sensors. Smart devices, as they are commonly called, are invading our homes, with the proliferation of cheap cloud-based IoT cameras used as surveillance systems to monitor our homes and loved ones right from the palm of our hand using our smartphones. These cameras are mostly white-label products, meaning the product comes from a single manufacturer and is bought by a different company, which re-brands it and sells it under its own product name, a method commonly practiced in the retail and manufacturing industries. Cloud-based IoT cameras sold this way are often not properly tested for security. The problem arises when a hacker breaks into the cloud-based IoT camera and sees everything we do, without our knowledge, invading our personal digital privacy. This study focuses on the vulnerabilities found in white-label cloud-based IoT cameras on the market, specifically a Chinese brand sold by Shenzhen Gwelltimes Technology, how this IoT device can be compromised, and how to protect ourselves from such cyber-attacks.
Comparison study of machine learning classifiers to detect anomalies IJECEIAES
In this era of the Internet, ensuring the confidentiality, authentication, and integrity of any resource exchanged over the net is imperative. Intrusion prevention techniques such as strong passwords and firewalls are not sufficient to monitor such voluminous network traffic, as they can be breached easily. Existing signature-based detection techniques such as antivirus software only offer protection against known attacks whose signatures are stored in a database. Thus, there is a need for real-time detection of aberrations. Machine learning classifiers are implemented here to learn how the values of various fields in a network packet, such as source bytes and destination bytes, determine whether the packet is compromised. Finally, the detection accuracy of the classifiers is compared to choose the one best suited for this purpose. The outcome may be useful for offering real-time detection while exchanging sensitive information such as credit card details.
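The abstract does not name the classifiers compared, so the following is only a toy sketch of the comparison workflow: fit several detectors on labelled packet records and rank them by test accuracy. The synthetic two-feature data, the k-NN detector, and the hand-set threshold rule are all illustrative stand-ins, not the paper's setup.

```python
import random

random.seed(1)

# Synthetic stand-in for packet records: (src_bytes, dst_bytes, label).
# Compromised packets (label 1) are drawn with inflated byte counts;
# real work would use a labelled capture such as NSL-KDD.
def make_data(n):
    data = []
    for _ in range(n):
        if random.random() < 0.5:
            data.append((random.gauss(500, 100), random.gauss(400, 100), 0))
        else:
            data.append((random.gauss(1500, 100), random.gauss(1200, 100), 1))
    return data

train, test = make_data(400), make_data(100)

def knn_predict(x, k=5):
    # Majority vote among the k nearest training records.
    nearest = sorted(train,
                     key=lambda r: (r[0] - x[0]) ** 2 + (r[1] - x[1]) ** 2)[:k]
    return round(sum(r[2] for r in nearest) / k)

def threshold_predict(x):
    return 1 if x[0] + x[1] > 1800 else 0   # naive hand-set rule

def accuracy(predict):
    return sum(predict((r[0], r[1])) == r[2] for r in test) / len(test)

scores = {"kNN": accuracy(knn_predict), "threshold": accuracy(threshold_predict)}
best_classifier = max(scores, key=scores.get)
```

The same loop generalizes to any number of classifiers: each one only needs to expose a `predict` callable, and the held-out accuracy decides which model is deployed.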
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices and improve quality of life. However, anomalies and malicious intrusions open several security loopholes, leading to performance degradation and threats to data security in IoT operations. IoT security systems must therefore monitor and restrict unwanted events in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention rely on supervised learning, which requires a large amount of labeled training data; such complex datasets are impractical to source in a network as large as IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen IoT security. The proposed algorithm combines supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
A Novel Security Approach for Communication using IOTIJEACS
The Internet of Things (IoT) is the network of physical objects or "things" embedded with hardware, software, sensors, and network connectivity, which enables these things to collect and exchange data. This work outlines a security protocol for the Internet of Things and implements it on embedded devices. The protocol covers the integrity of messages and the authentication of every client by providing an efficient verification mechanism. Through this project, secure communication is realized on embedded devices.
A review on machine learning based intrusion detection system for internet of...IJECEIAES
Within an internet of things (IoT) environment, the fundamental purpose of the various devices is to gather the abundant amount of data being generated and transmit it to a predetermined server over the internet. IoT connects billions of objects to the internet so they can communicate without human intervention. But network security and privacy issues are increasing very fast in today's world. Because of the prevalence of technological advancement in regular activities, internet security has become a necessary requirement, and since technology is integrated into every aspect of contemporary life, cyberattacks on the internet of things represent a bigger danger than attacks against traditional networks. Researchers have found that combining machine learning techniques with an intrusion detection system (IDS) is an efficient way to overcome the limitations of conventional IDSs in an IoT context. This research presents a comprehensive literature assessment and develops an intrusion detection system that uses machine learning techniques to address security problems in an IoT environment. Along with a comprehensive look at the state of the art in intrusion detection systems for IoT-enabled environments, this study also examines the attributes of approaches, common datasets, and existing methods used to construct such systems.
This paper focuses on the implementation of machine
learning algorithms to improve security in the Internet of Things
(IoT) environment. IoT is becoming an essential part of our daily
lives, and security is a significant concern in this domain.
Traditional security measures are not enough to protect IoT systems
from the increasing number of cyber-attacks. Machine learning
algorithms can provide a better and more effective approach to
detecting and mitigating security threats in IoT systems.
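The abstract does not specify which algorithms are used, so as a minimal illustration of one of the simplest detection techniques in this space, here is a statistical (z-score) anomaly detector over a traffic metric. The window values and the 2.5 threshold are illustrative choices, not values from the paper.

```python
import statistics

def detect_anomalies(readings, threshold=2.5):
    # Flag readings whose z-score (distance from the mean in standard
    # deviations) exceeds the threshold.
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Requests-per-second from an IoT sensor; the spike at index 5 mimics
# a flooding attack.
traffic = [12, 11, 13, 12, 11, 400, 12, 13, 11, 12]
flagged = detect_anomalies(traffic)
```

A learned model replaces the fixed threshold with a decision boundary fitted to labelled traffic, which is what lets it catch subtler attacks than a single-metric spike.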
Running head INTERNET OF THINGS1INTERNET OF THINGS10.docxcowinhelen
Internet of Things
Ideation Process
Innovation technology is mainly concerned with introducing a new idea into the market, and any basic innovation technology has an ideation process as its basis. Ideation is the most basic step: during this stage, a thought is generated, and from this thought an idea is analyzed and proposed. The analysis is done purposefully to determine the feasibility of implementing the developed idea. Analysis also establishes the market demand for the idea and determines the possible profit such an idea would bring to the organization upon its implementation. This explanation will be discussed in the context of a new technology known as the Internet of Things, which was designed with the main purpose of providing internet facilities to all locations on the surface of the earth, at every scale.
When the analysis has been completed, what follows is the implementation of the idea, which is always followed by testing (Gubbi et al., 2013). Here, the idea is tested to establish its validity. Validity asks whether the idea actually addresses the problem that led to its establishment and implementation; it also provides corrections and accuracy in the assessment of the resulting system. If the assessment confirms that the idea delivers what it was meant to provide, then the idea is valued and worth taking to market; if not, it is advisable for the idea to be scrapped. Every system needs to pass the validity test to be considered reliable, although some experts argue that not all valid systems are reliable.
Alternative Solutions
Most cases of the ideation process require alternatives to limit any possible risk or loss. Sometimes the innovative idea brings critical challenges in the implementation process; aspects that should be considered include the budget and the other critical resources required for implementation. For instance, the IoT can be used to supply internet connectivity in case the alternative fails, or vice versa. It is recommended that learning and research be done to control any possible discrepancies in IoT technology, and any other technology can be implemented if it provides the same functionality as that described for the IoT.
Overview
The IoT refers to an ever-growing technology that provides IP addresses to all objects and substances one may imagine associating with, so that the assigned objects can communicate amongst themselves. This technology tries to improve communication among individuals on a large scale by using the internet. Grouping data and digitizing the concept of IoT calls for the construction of four c ...
A new algorithm to enhance security against cyber threats for internet of thi...IJECEIAES
One major problem is detecting the unsuitable traffic caused by a distributed denial of service (DDoS) attack produced by third-party nodes, such as smartphones and other handheld Wi-Fi devices. During transmission between devices, there is a rising number of cyber-attacks on systems that use negligible packets, which suspend the services between source and destination and expose vulnerabilities in the network. These vulnerabilities have reduced the reliability of networks and consumer confidence. In this paper, we introduce a new algorithm, called the route attack with detection algorithm (RAWD), to reduce the effect of an attack by checking for packet injection, and to prevent cyber-attacks from reaching the destination by transferring traffic over a determined path or an alternative one, depending on the problem. The proposed algorithm forwards the real-time traffic to the required destination over a new alternative backup path, which it computes before the attack occurs. The results show an improvement: when an attack occurred, the alternative path ensured the continuity of data delivery to the main destination without any disruption.
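The abstract does not give RAWD's path-computation details, so the following only sketches the general precomputed-backup-path idea with plain breadth-first search on a small hypothetical topology: compute the primary route, then precompute a backup route that avoids the primary's first intermediate hop, so traffic can be switched immediately when that hop is attacked.

```python
from collections import deque

# Hypothetical topology (adjacency lists); node names are illustrative.
TOPOLOGY = {
    "src": ["r1", "r2"],
    "r1":  ["src", "r3"],
    "r2":  ["src", "r3"],
    "r3":  ["r1", "r2", "dst"],
    "dst": ["r3"],
}

def bfs_path(graph, start, goal, blocked=frozenset()):
    # Shortest hop-count path that avoids every node in `blocked`.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route avoiding the blocked nodes

primary = bfs_path(TOPOLOGY, "src", "dst")
# Precompute a backup avoiding the primary's first intermediate hop,
# so traffic fails over without recomputation when that hop is attacked.
backup = bfs_path(TOPOLOGY, "src", "dst", blocked={primary[1]})
```

Precomputing the backup is the key design choice: the expensive graph search happens before the attack, and failover at detection time is a constant-time table swap.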
Review on effectiveness of deep learning approach in digital forensicsIJECEIAES
Cyber forensics is the use of scientific methods for the definite description of cybercrime activities. It deals with collecting, processing, and interpreting digital evidence for cybercrime analysis, and plays a very important role in criminal investigations. Although a lot of research has been done in cyber forensics, the field still faces new challenges. The analysis of digital media, specifically photographic images and audio and video recordings, is crucial in forensics, and this paper focuses specifically on digital forensics. There are several methods for digital forensic analysis. Currently, deep learning (DL), mainly the convolutional neural network (CNN), has proved very promising in the classification of digital images and in sound analysis techniques. This paper presents a compendious study of recent research and methods in forensic areas based on CNNs, with a view to guiding researchers working in this area. We first define and explain preliminary DL models. In the next section, out of the several DL models, we focus on the CNN and its usage in areas of digital forensics. Finally, the conclusion and future work are discussed. The review shows that the CNN has proved good in most forensic domains and still promises to get better.
the world of technology is changing at an unprecedented pace, and th.docxpelise1
the world of technology is changing at an unprecedented pace, and these changes represent business opportunities as well as challenges. Mass connectivity and faster speeds create opportunities for businesses to network more devices, complete more transactions, and enhance transaction quality. Internet Protocol version 6 (IPv6) and Internet of things (IoT) are two such technologies that represent significant opportunities for strategic cybersecurity technology professionals to create lasting value for their organizations.
IoT is the phenomenon of connecting devices used in everyday life. It provides an interactive environment of human users and a myriad of devices in a global information highway, always on and always able to provide information. IoT connections happen among many types of devices — sensors, embedded technologies, machines, appliances, smart phones — all connected through wired and wireless networks.
Cloud architectures such as software as a service have allowed for big data analytics and improved areas such as automated manufacturing. Data and real-time analytics are now available to workers through wearables and mobile devices.
Such pervasive proliferation of IoT devices gives hackers avenues to gain access to personal data and financial information and increases the complexity of data protection. Given the increased risks of data breaches, newer techniques in data loss prevention should be examined.
Increased bandwidth and increased levels of interconnectivity have allowed data to become dispersed, creating issues for big data integrity. In such a world, even the financial transactions of the future are likely to be different — Bitcoin and digital currency may replace a large portion of future financial transactions.
To survive and thrive, organizational technology strategists must develop appropriate technology road maps. These strategists must consider appropriate function, protection, and tamper-proofing of these new communications and transactions.
It will be impossible to protect data by merely concentrating on protecting repositories such as networks or endpoints. Cybersecurity strategists have to concentrate on protecting the data themselves. They will need to ensure that the data are protected no matter where they reside.
Step 2: Select Devices and Technologies
By now, you have an idea of your team members and your role on the team project. Now, it's time to get the details about the devices and technologies needed to be included in the Strategic Technology Plan for Data Loss Prevention.
You should limit the scope of this project by selecting a set of devices and technologies which are most appropriate for data loss prevention for your business mission and future success. Based on your prior knowledge of your company and based on the project roles you agreed upon in the previous step, perform some independent research on the following topics and identify a set of devices and technologies that you propose for.
IS THERE A TROJAN! : LITERATURE SURVEY AND CRITICAL EVALUATION OF THE LATEST ...IJCI JOURNAL
IoT as a domain has grown so much in the last few years that it rivals that of the mobile network
environments in terms of data volumes as well as cybersecurity threats. The confidentiality and privacy of
data within IoT environments have become very important areas of security research within the last few
years. More and more security experts are interested in designing robust IDS systems to protect IoT
environments as a supplement to the more traditional security methods. Given that IoT devices are
resource-constrained and have a heterogeneous protocol stack, most traditional intrusion detection
approaches don’t work well within these schematic boundaries. This has led security researchers to
innovate at the intersection of Machine Learning and IDS to solve the shortcomings of non-learning based
IDS systems in the IoT ecosystem.
EFFICIENT ATTACK DETECTION IN IOT DEVICES USING FEATURE ENGINEERING-LESS MACH...ijcsit
Through the generalization of deep learning, the research community has addressed critical challenges in the network security domain, like malware identification and anomaly detection. However, it has yet to address deploying them on Internet of Things (IoT) devices for day-to-day operations. IoT devices are often limited in memory and processing power, rendering compute-intensive deep learning unusable. This research proposes a way to overcome this barrier by bypassing feature engineering in the deep learning pipeline and using raw packet data as input. We introduce a feature-engineering-less machine learning (ML) process to perform malware detection on IoT devices. Our proposed model, "Feature-engineering-less ML (FEL-ML)," is a lighter-weight detection algorithm that expends no extra computation on "engineered" features. It effectively accelerates the low-powered IoT edge. It is trained on unprocessed byte-streams of packets. Aside from providing better results, it is quicker than traditional feature-based methods. FEL-ML facilitates resource-sensitive network traffic security with the added benefit of eliminating the significant investment by subject-matter experts in feature engineering.
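The feature-engineering-less idea above can be illustrated with a minimal sketch: instead of computing engineered flow statistics, the raw bytes of each packet are padded or truncated to a fixed length and scaled into [0, 1] for a model's input layer. The function name and the byte length here are illustrative assumptions, not details taken from the paper.

```python
def bytes_to_vector(packet: bytes, length: int = 64) -> list[float]:
    """Truncate or zero-pad a raw packet to `length` bytes,
    then scale each byte to [0, 1] for use as model input."""
    padded = packet[:length] + b"\x00" * max(0, length - len(packet))
    return [b / 255.0 for b in padded]

# Example: a short (fake) IPv4 header fragment is zero-padded to 8 bytes.
vec = bytes_to_vector(b"\x45\x00\x00\x3c", length=8)
```

Every packet maps to a vector of the same dimension regardless of its size, which is what lets a fixed-input model consume unprocessed byte-streams directly.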
System call frequency analysis-based generative adversarial network model for...IJECEIAES
In today's digital age, mobile applications have become essential in connecting people from diverse domains. They play a crucial role in enabling communication, facilitating business transactions, and providing access to a range of services. Mobile communication is widespread due to its portability and ease of use, with the number of mobile devices projected to reach 18.22 billion by the end of 2025. However, this convenience comes at a cost, as cybercriminals are constantly looking for ways to exploit security vulnerabilities in mobile applications. Among the several varieties of malicious applications, zero-day malware is particularly dangerous since it cannot be detected by traditional antivirus software. To detect zero-day Android malware, this paper introduces a novel approach based on generative adversarial networks (GANs), which generates new frequencies of feature vectors from system calls. In the proposed approach, the generator is fed with a mixture of real samples and noise and then trained to create new samples, while the discriminator model aims to classify these samples as either real or fake. We assess the performance of our model through different measures, including loss functions, the Fréchet inception distance, and the inception score evaluation metrics.
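The adversarial setup described in this abstract can be made concrete with a small sketch of the two loss terms involved. This is the generic GAN objective in plain Python, not the paper's model; the probability values are invented for illustration.

```python
import math

def bce(pred: float, target: float) -> float:
    """Binary cross-entropy for one predicted probability."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(pred + eps)
             + (1 - target) * math.log(1 - pred + eps))

# Discriminator: push a real sample's score toward 1, a fake's toward 0.
d_loss = bce(0.9, 1.0) + bce(0.2, 0.0)
# Generator: judged on the same fake sample, but it wants the output 1,
# so a low discriminator score on the fake means a high generator loss.
g_loss = bce(0.2, 1.0)
```

Training alternates between minimizing `d_loss` over the discriminator's parameters and `g_loss` over the generator's, which is the tug-of-war the abstract describes between creating samples and classifying them as real or fake.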
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption has increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage and frequency. In this paper, in order to stabilize the microgrid (MG) against load variations in islanding mode, the active and reactive power of all distributed generators (DGs), including energy storage (battery), diesel generator, and micro-turbine, are controlled. The micro-turbine generator is connected to the MG through a three-phase to three-phase matrix converter, and the droop control method is applied for controlling the voltage and frequency of the MG. In addition, a method is introduced for voltage and frequency control of micro-turbines in the transition from grid-connected mode to islanding mode. A novel switching strategy of the matrix converter is used for converting the high-frequency output voltage of the micro-turbine to the grid-side frequency of the utility system. Moreover, using this switching strategy, low-order harmonics in the output current and voltage are not produced, and consequently, the size of the output filter would be reduced. In fact, the suggested control strategy is load-independent and has no frequency conversion restrictions. The proposed approach for voltage and frequency regulation demonstrates exceptional performance and favorable response across various load alteration scenarios. The suggested strategy is examined in several scenarios in MG test systems, and the simulation results are addressed.
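As context for the droop control method this abstract mentions, a minimal sketch of conventional P-f / Q-V droop characteristics follows. The droop coefficients and operating point are illustrative numbers, not values from the paper.

```python
def droop_setpoints(P, Q, f_nom=50.0, V_nom=1.0, m_p=0.0005, m_q=0.05):
    """P-f / Q-V droop: each DG lowers its frequency and voltage
    references in proportion to its active/reactive power output,
    letting parallel units share load without communication."""
    f = f_nom - m_p * P   # Hz: falls as active power P (kW) rises
    V = V_nom - m_q * Q   # p.u.: falls as reactive power Q (p.u.) rises
    return f, V

# A unit supplying 100 kW and 0.2 p.u. reactive power droops slightly
# below its nominal 50 Hz / 1.0 p.u. setpoints.
f, V = droop_setpoints(P=100.0, Q=0.2)
```

Because every DG droops along the same kind of characteristic, a load increase pulls all units down together and the load is shared in inverse proportion to the droop slopes.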
More Related Content
Similar to Hyperparameter optimization using custom genetic algorithm for classification of benign and malicious traffic on internet of things–23 dataset
Digital Evidence Analysing Industry in India.pdfGeeta Adhikari
With the rise in digital threats and cybercrimes, the digital forensics market in India is undergoing successive changes, such as the integration of artificial intelligence, and is showing marked overall growth.
The Internet is the driving force behind how we communicate with one another, from posting messages and images to Facebook to "tweeting" your activities from your vacation. Today it is used everywhere; now imagine a device that connects to the internet and sends out data based on its sensors. This is the Internet-of-Things: a connection of objects with a plethora of sensors. Smart devices, as they are commonly called, are invading our homes, with the proliferation of cheap cloud-based IoT cameras used as surveillance systems to monitor our homes and loved ones right from the palm of our hand using our smartphones. These cameras are mostly white-label products, where the product comes from a single manufacturer and is bought by a different company, which re-brands it and sells it under its own product name, a method commonly practiced in the retail and manufacturing industries. Cloud-based IoT cameras sold this way are often not properly tested for security. The problem arises when a hacker hacks into the cloud-based IoT camera and sees everything we do without us knowing about it, invading our personal digital privacy. This study focuses on the vulnerabilities found in white-label cloud-based IoT cameras on the market, specifically a Chinese brand sold by Shenzhen Gwelltimes Technology, how this IoT device can be compromised, and how to protect ourselves from such cyber-attacks.
Comparison study of machine learning classifiers to detect anomalies IJECEIAES
In this era of the Internet, ensuring the confidentiality, authentication, and integrity of any resource exchanged over the net is imperative. The presence of intrusion prevention techniques like strong passwords and firewalls is not sufficient to monitor such voluminous network traffic, as they can be breached easily. Existing signature-based detection techniques like antivirus only offer protection against known attacks whose signatures are stored in a database. Thus, the need for real-time detection of aberrations is observed. Machine learning classifiers are implemented here to learn how the values of various fields, like source bytes and destination bytes, in a network packet decide whether the packet is compromised or not. Finally, the accuracy of their detection is compared to choose the best-suited classifier for this purpose. The outcome thus produced may be useful for real-time detection while exchanging sensitive information such as credit card details.
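A toy illustration of this comparison idea follows, with made-up records and two hand-written decision rules standing in for trained classifiers; the field values, thresholds, and rule names are invented for the sketch and are not taken from the paper.

```python
# Toy records: (source_bytes, destination_bytes, label), 1 = compromised.
data = [(491, 0, 0), (146, 0, 0), (0, 0, 1), (232, 8153, 0),
        (0, 0, 1), (287, 2251, 0), (0, 0, 1), (334, 1540, 0)]

def rule_zero_traffic(src, dst):
    # Flag packets that move no payload in either direction.
    return 1 if src == 0 and dst == 0 else 0

def rule_small_source(src, dst):
    # Naive alternative: flag any packet with a small source-byte count.
    return 1 if src < 200 else 0

def accuracy(rule):
    hits = sum(1 for src, dst, y in data if rule(src, dst) == y)
    return hits / len(data)

# Compare the "classifiers" on detection accuracy and keep the best one.
best = max([rule_zero_traffic, rule_small_source], key=accuracy)
```

A real study would replace the hand-written rules with fitted models (decision trees, naive Bayes, and so on) and score them on held-out data, but the selection step is the same: compute each candidate's accuracy and keep the winner.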
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threats to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived for identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to misclassification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data for training. Consequently, such complex datasets are difficult to source in a large network like IoT. To address this problem, this study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with related works, the experimental outcome shows that the model performs well on a benchmark dataset, accomplishing an improved detection accuracy of approximately 99.21%.
A Novel Security Approach for Communication using IOTIJEACS
The Internet of Things (IoT) is the network of physical objects or "things" embedded with hardware, software, sensors, and network connectivity, which enables these things to collect and exchange data. This work outlines a security protocol for the Internet of Things and implements it on embedded devices. The protocol covers the integrity of messages and the authentication of every client by providing an efficient authentication mechanism. Through this project, secure communication is implemented on embedded devices.
A review on machine learning based intrusion detection system for internet of...IJECEIAES
Within an internet of things (IoT) environment, the fundamental purpose of various devices is to gather the abundant amount of data being generated and then transmit this data to a predetermined server over the internet. IoT connects billions of objects to the internet to communicate without human intervention. But network security and privacy issues are increasing very fast in today's world. Because of the prevalence of technological advancement in regular activities, internet security has evolved into a necessary requirement. Because technology is integrated into every aspect of contemporary life, cyberattacks on the internet of things represent a bigger danger than attacks against traditional networks. Researchers have found that combining machine learning techniques into an intrusion detection system (IDS) is an efficient way to get beyond the limitations of conventional IDSs in an IoT context. This research presents a comprehensive literature assessment and develops an intrusion detection system that makes use of machine learning techniques to address security problems in an IoT environment. Along with a comprehensive look at the state of the art in intrusion detection systems for IoT-enabled environments, this study also examines the attributes of approaches, common datasets, and existing methods utilized to construct such systems.
This paper focuses on the implementation of machine learning algorithms to improve security in the Internet of Things (IoT) environment. IoT is becoming an essential part of our daily lives, and security is a significant concern in this domain. Traditional security measures are not enough to protect IoT systems from the increasing number of cyber-attacks. Machine learning algorithms can provide a better and more effective approach to detecting and mitigating security threats in IoT systems.
Running head INTERNET OF THINGS1INTERNET OF THINGS10.docxcowinhelen
Internet of Things
Ideation Process
Innovation technology is mainly concerned with the introduction of a new idea into the market. Any basic innovation technology has an ideation process as its basis, and ideation becomes the most basic step. During this stage, a thought is generated; from this thought, an idea is analyzed and developed. The analysis process is done purposefully to determine the feasibility of implementing the developed idea. Analysis also establishes the market demand for the idea that has been developed, and it determines the possible profit such an idea would bring to the organization upon its implementation. This will be discussed in terms of a new technology known as the Internet of Things, which was designed mainly to provide internet facilities to all locations on the surface of the earth at every scale.
When the analysis has been completed, what follows is idea implementation. Implementation of the idea is always followed by testing of the idea (Gubbi et al., 2013). Here, the idea is tested so that its validity can be established. Validity here tries to answer the question of whether the idea addresses the problem that led to its establishment and implementation. Validity also provides corrections and accuracy in the assessment of the set system. If the assessment confirms that the idea delivers what it was meant to provide, then the idea is valued and worth using in the market. If the contrary occurs, then it becomes advisable for the idea to be scrapped. Every system needs to pass the validity test for it to be reliable, although some experts argue that not all valid systems are reliable.
Alternative Solutions
Most cases of the ideation process require alternatives to limit any possible risk or loss. Sometimes, it is evident that the innovative idea may bring critical challenges in the implementation process. Aspects that should be considered include the budget and other critical resources required for implementation. For instance, the IoT can be used to supply internet connectivity in case the other alternative fails, or vice versa. It is recommended that learning and research be done to control any possible discrepancies in the IoT technology. Any other technology can be implemented if it provides the same functionality as that described by the IoT.
Overview
The IoT refers to an ever-growing technology that provides IP addresses to all objects and substances that one may imagine associating with. The assigned objects can communicate amongst themselves. This technology tries to improve communication among individuals on a large scale by using the internet. The data grouping as well as digitizing the concept of IoT calls for construction of four c ...
A new algorithm to enhance security against cyber threats for internet of thi...IJECEIAES
One major problem is detecting the unsuitability of traffic caused by a distributed denial of service (DDoS) attack produced by third-party nodes, such as smartphones and other handheld Wi-Fi devices. During transmission between devices, there is a rise in the number of cyber attacks on systems using negligible packets, which leads to the suspension of services between source and destination and can expose vulnerabilities in the network. These vulnerabilities have led to a reduction in the reliability of networks and in consumer confidence. In this paper, we introduce a new algorithm called route attack with detection algorithm (RAWD) to reduce the effect of any attack by checking for packet injection, and to avoid cyber attacks being received by the destination and transferred through a determined path or an alternative path, depending on the problem. The proposed algorithm forwards the real-time traffic to the required destination over a new alternative backup path, which it computes before the attack occurs. The results show an improvement when an attack occurs and the alternative path is used to ensure the continuity of data delivery to the main destination without any disruption.
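Since the abstract only describes RAWD at a high level, the core idea of switching to a precomputed backup route when an attack is detected can be sketched as follows; the route tables, threshold, and function names here are all hypothetical stand-ins, not the paper's algorithm.

```python
# Hypothetical sketch of the precomputed-backup idea: both routes are
# chosen in advance, and traffic switches as soon as an attack is flagged.
routes = {
    "primary": ["src", "r1", "r2", "dst"],
    "backup":  ["src", "r3", "r4", "dst"],  # computed before any attack
}

def detect_attack(packet_rate, baseline=1000, factor=10):
    # Toy detector: flag a flood when traffic far exceeds the baseline.
    return packet_rate > factor * baseline

def choose_path(packet_rate):
    """Forward over the backup path only while an attack is detected."""
    if detect_attack(packet_rate):
        return routes["backup"]
    return routes["primary"]

path = choose_path(packet_rate=25000)  # flood-level traffic
```

The key property is that the backup path is ready before the attack ever occurs, so failover is a table lookup rather than a route recomputation under load.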
Review on effectiveness of deep learning approach in digital forensicsIJECEIAES
Cyber forensics is the use of scientific methods for the definite description of cybercrime activities. It deals with collecting, processing, and interpreting digital evidence for cybercrime analysis. Cyber forensic analysis plays a very important role in criminal investigations. Although a lot of research has been done in cyber forensics, the field is still expected to face new challenges in the near future. Analysis of digital media, specifically photographic images, audio, and video recordings, is very crucial in forensics. This paper specifically focuses on digital forensics. There are several methods for digital forensic analysis. Currently, deep learning (DL), mainly the convolutional neural network (CNN), has proved very promising in the classification of digital images and in sound analysis techniques. This paper presents a compendious study of recent research and methods in forensic areas based on CNN, with a view to guiding researchers working in this area. We first define and explain preliminary models of DL. In the next section, out of several DL models, we focus on CNN and its usage in areas of digital forensics. Finally, the conclusion and future work are discussed. The review shows that CNN has proved good in most forensic domains and still promises to be better.
the world of technology is changing at an unprecedented pace, and th.docxpelise1
the world of technology is changing at an unprecedented pace, and these changes represent business opportunities as well as challenges. Mass connectivity and faster speeds create opportunities for businesses to network more devices, complete more transactions, and enhance transaction quality. Internet Protocol version 6 (IPv6) and Internet of things (IoT) are two such technologies that represent significant opportunities for strategic cybersecurity technology professionals to create lasting value for their organizations.
IoT is the phenomenon of connecting devices used in everyday life. It provides an interactive environment of human users and a myriad of devices in a global information highway, always on and always able to provide information. IoT connections happen among many types of devices — sensors, embedded technologies, machines, appliances, smart phones — all connected through wired and wireless networks.
Cloud architectures such as software as a service have allowed for big data analytics and improved areas such as automated manufacturing. Data and real-time analytics are now available to workers through wearables and mobile devices.
Such pervasive proliferation of IoT devices gives hackers avenues to gain access to personal data and financial information and increases the complexity of data protection. Given the increased risks of data breaches, newer techniques in data loss prevention should be examined.
Increased bandwidth and increased levels of interconnectivity have allowed data to become dispersed, creating issues for big data integrity. In such a world, even the financial transactions of the future are likely to be different — Bitcoin and digital currency may replace a large portion of future financial transactions.
To survive and thrive, organizational technology strategists must develop appropriate technology road maps. These strategists must consider appropriate function, protection, and tamper-proofing of these new communications and transactions.
It will be impossible to protect data by merely concentrating on protecting repositories such as networks or endpoints. Cybersecurity strategists have to concentrate on protecting the data themselves. They will need to ensure that the data are protected no matter where they reside.
Step 2
Select Devices and Technologies
By now, you have an idea of your team members and your role on the team project. Now it's time to get the details about the devices and technologies that need to be included in the Strategic Technology Plan for Data Loss Prevention.
You should limit the scope of this project by selecting a set of devices and technologies that are most appropriate for data loss prevention for your business mission and future success. Based on your prior knowledge of your company and the project roles you agreed upon in the previous step, perform some independent research on the following topics and identify the set of devices and technologies that you propose.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their performance, enhancing safety, and prolonging their lifespan across various applications, such as electric vehicles and renewable energy systems. This article introduces an innovative nonlinear methodology for system identification of a Li-ion battery, employing a nonlinear autoregressive with exogenous inputs (NARX) model. The proposed approach integrates the benefits of nonlinear modeling with the adaptability of the NARX structure, facilitating a more comprehensive representation of the intricate electrochemical processes within the battery. Experimental data collected from a Li-ion battery operating under diverse scenarios are employed to validate the effectiveness of the proposed methodology. The identified NARX model exhibits superior accuracy in predicting the battery's behavior compared to traditional linear models. This study underscores the importance of accounting for nonlinearities in battery modeling, providing insights into the intricate relationships between state-of-charge, voltage, and current under dynamic conditions.
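The NARX structure described above predicts the next output from lagged outputs and lagged exogenous inputs. A one-step sketch of a linear-in-parameters NARX predictor follows; the lag counts, coefficients, and battery values are illustrative only, not the paper's identified model.

```python
def narx_predict(y_hist, u_hist, coeffs, nonlinearity=lambda x: x):
    """One-step NARX prediction: the next output is a (possibly
    nonlinear) function of past outputs y and past inputs u,
    i.e. y(t) = f(y(t-1), ..., u(t-1), ...)."""
    regressors = list(y_hist) + list(u_hist)
    s = sum(c * r for c, r in zip(coeffs, regressors))
    return nonlinearity(s)

# Illustrative values only: two voltage (V) lags and two current (A) lags.
v_next = narx_predict(y_hist=[3.7, 3.69], u_hist=[1.0, 1.1],
                      coeffs=[0.9, 0.05, -0.01, -0.02])
```

In an identification workflow, the coefficients (or the nonlinear map `f` itself, often a neural network) are fitted to measured voltage/current data; feeding the model's own predictions back as `y_hist` then yields multi-step simulation.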
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy. They bring relevant advantages compared to the traditional grid and draw significant interest from the research community. Assessing the field's evolution is essential to propose guidelines for facing new and future smart grid challenges. In addition, knowing the main technologies involved in the deployment of smart grids (SGs) is important to highlight possible shortcomings that can be mitigated by developing new tools. This paper contributes to the research trends mentioned above by focusing on two objectives. First, a bibliometric analysis is presented to give an overview of the current research level about smart grid deployment. Second, a survey of the main technological approaches used for smart grid implementation and their contributions is highlighted. To that effect, we searched the Web of Science (WoS) and the Scopus databases. We obtained 5,663 documents from WoS and 7,215 from Scopus on smart grid implementation or deployment. With the extraction limitation in the Scopus database, 5,872 of the 7,215 documents were extracted using a multi-step process. These two datasets have been analyzed using a bibliometric tool called bibliometrix. The main outputs are presented with some recommendations for future research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
Developing a smart system for infant incubators using the internet of things ...IJECEIAES
This research is developing an incubator system that integrates the internet of things and artificial intelligence to improve care for premature babies. The system workflow starts with sensors that collect data from the incubator. Then, the data is sent in real-time to the internet of things (IoT) broker eclipse mosquito using the message queue telemetry transport (MQTT) protocol version 5.0. After that, the data is stored in a database for analysis using the long short-term memory network (LSTM) method and displayed in a web application using an application programming interface (API) service. Furthermore, the experimental results produce as many as 2,880 rows of data stored in the database. The correlation coefficient between the target attribute and other attributes ranges from 0.23 to 0.48. Next, several experiments were conducted to evaluate the model-predicted value on the test data. The best results are obtained using a two-layer LSTM configuration model, each with 60 neurons and a lookback setting 6. This model produces an R 2 value of 0.934, with a root mean square error (RMSE) value of 0.015 and a mean absolute error (MAE) of 0.008. In addition, the R 2 value was also evaluated for each attribute used as input, with a result of values between 0.590 and 0.845.
A review on internet of things-based stingless bee's honey production with im...IJECEIAES
Honey is produced exclusively by honeybees and stingless bees which both are well adapted to tropical and subtropical regions such as Malaysia. Stingless bees are known for producing small amounts of honey and are known for having a unique flavor profile. Problem identified that many stingless bees collapsed due to weather, temperature and environment. It is critical to understand the relationship between the production of stingless bee honey and environmental conditions to improve honey production. Thus, this paper presents a review on stingless bee's honey production and prediction modeling. About 54 previous research has been analyzed and compared in identifying the research gaps. A framework on modeling the prediction of stingless bee honey is derived. The result presents the comparison and analysis on the internet of things (IoT) monitoring systems, honey production estimation, convolution neural networks (CNNs), and automatic identification methods on bee species. It is identified based on image detection method the top best three efficiency presents CNN is at 98.67%, densely connected convolutional networks with YOLO v3 is 97.7%, and DenseNet201 convolutional networks 99.81%. This study is significant to assist the researcher in developing a model for predicting stingless honey produced by bee's output, which is important for a stable economy and food security.
A trust based secure access control using authentication mechanism for intero...IJECEIAES
The internet of things (IoT) is a revolutionary innovation in many aspects of our society including interactions, financial activity, and global security such as the military and battlefield internet. Due to the limited energy and processing capacity of network devices, security, energy consumption, compatibility, and device heterogeneity are the long-term IoT problems. As a result, energy and security are critical for data transmission across edge and IoT networks. Existing IoT interoperability techniques need more computation time, have unreliable authentication mechanisms that break easily, lose data easily, and have low confidentiality. In this paper, a key agreement protocol-based authentication mechanism for IoT devices is offered as a solution to this issue. This system makes use of information exchange, which must be secured to prevent access by unauthorized users. Using a compact contiki/cooja simulator, the performance and design of the suggested framework are validated. The simulation findings are evaluated based on detection of malicious nodes after 60 minutes of simulation. The suggested trust method, which is based on privacy access control, reduced packet loss ratio to 0.32%, consumed 0.39% power, and had the greatest average residual energy of 0.99 mJoules at 10 nodes.
Fuzzy linear programming with the intuitionistic polygonal fuzzy numbersIJECEIAES
In real world applications, data are subject to ambiguity due to several factors; fuzzy sets and fuzzy numbers propose a great tool to model such ambiguity. In case of hesitation, the complement of a membership value in fuzzy numbers can be different from the non-membership value, in which case we can model using intuitionistic fuzzy numbers as they provide flexibility by defining both a membership and a non-membership functions. In this article, we consider the intuitionistic fuzzy linear programming problem with intuitionistic polygonal fuzzy numbers, which is a generalization of the previous polygonal fuzzy numbers found in the literature. We present a modification of the simplex method that can be used to solve any general intuitionistic fuzzy linear programming problem after approximating the problem by an intuitionistic polygonal fuzzy number with n edges. This method is given in a simple tableau formulation, and then applied on numerical examples for clarity.
The performance of artificial intelligence in prostate magnetic resonance im...IJECEIAES
Prostate cancer is the predominant form of cancer observed in men worldwide. The application of magnetic resonance imaging (MRI) as a guidance tool for conducting biopsies has been established as a reliable and well-established approach in the diagnosis of prostate cancer. The diagnostic performance of MRI-guided prostate cancer diagnosis exhibits significant heterogeneity due to the intricate and multi-step nature of the diagnostic pathway. The development of artificial intelligence (AI) models, specifically through the utilization of machine learning techniques such as deep learning, is assuming an increasingly significant role in the field of radiology. In the realm of prostate MRI, a considerable body of literature has been dedicated to the development of various AI algorithms. These algorithms have been specifically designed for tasks such as prostate segmentation, lesion identification, and classification. The overarching objective of these endeavors is to enhance diagnostic performance and foster greater agreement among different observers within MRI scans for the prostate. This review article aims to provide a concise overview of the application of AI in the field of radiology, with a specific focus on its utilization in prostate MRI.
Seizure stage detection of epileptic seizure using convolutional neural networksIJECEIAES
According to the World Health Organization (WHO), seventy million individuals worldwide suffer from epilepsy, a neurological disorder. While electroencephalography (EEG) is crucial for diagnosing epilepsy and monitoring the brain activity of epilepsy patients, it requires a specialist to examine all EEG recordings to find epileptic behavior. This procedure needs an experienced doctor, and a precise epilepsy diagnosis is crucial for appropriate treatment. To identify epileptic seizures, this study employed a convolutional neural network (CNN) based on raw scalp EEG signals to discriminate between preictal, ictal, postictal, and interictal segments. The possibility of these characteristics is explored by examining how well timedomain signals work in the detection of epileptic signals using intracranial Freiburg Hospital (FH), scalp Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) databases, and Temple University Hospital (TUH) EEG. To test the viability of this approach, two types of experiments were carried out. Firstly, binary class classification (preictal, ictal, postictal each versus interictal) and four-class classification (interictal versus preictal versus ictal versus postictal). The average accuracy for stage detection using CHB-MIT database was 84.4%, while the Freiburg database's time-domain signals had an accuracy of 79.7% and the highest accuracy of 94.02% for classification in the TUH EEG database when comparing interictal stage to preictal stage.
Analysis of driving style using self-organizing maps to analyze driver behaviorIJECEIAES
Modern life is strongly associated with the use of cars, but the increase in acceleration speeds and their maneuverability leads to a dangerous driving style for some drivers. In these conditions, the development of a method that allows you to track the behavior of the driver is relevant. The article provides an overview of existing methods and models for assessing the functioning of motor vehicles and driver behavior. Based on this, a combined algorithm for recognizing driving style is proposed. To do this, a set of input data was formed, including 20 descriptive features: About the environment, the driver's behavior and the characteristics of the functioning of the car, collected using OBD II. The generated data set is sent to the Kohonen network, where clustering is performed according to driving style and degree of danger. Getting the driving characteristics into a particular cluster allows you to switch to the private indicators of an individual driver and considering individual driving characteristics. The application of the method allows you to identify potentially dangerous driving styles that can prevent accidents.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution of greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been completed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object inherent features; This study addresses the research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities; Lastly, a soft-margins kernel is proposed for multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Fuzzy logic method-based stress detector with blood pressure and body tempera...IJECEIAES
In this study, using the fuzzy logic method, a stress detection tool was created with body temperature and blood pressure parameters as indicators to determine a person's stress level. This tool uses the LM35DZ sensor to detect body temperature, the MPX5100GP sensor to read blood pressure values, and Arduino Uno as a data processor from sensor readings which are then calculated using the fuzzy logic method as a stress level decisionmaker. The resulting output measures blood pressure, body temperature, and the stress level experienced by a person, which will be displayed on the liquid crystal display. Based on the results of testing the body temperature parameter, the highest error generated was 1.17%, and for the blood pressure parameter, the highest error was 2.5% for systole and 0.93% for diastole. Furthermore, testing the stress level displayed on the tool is compared to the depression, anxiety, and stress scales 42 (DASS 42), a psychological stress measuring instrument. From the results of testing the tool with the questionnaire, the average conformity level is 74%.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 4, August 2022, pp. 4031~4041
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i4.pp4031-4041
Journal homepage: http://ijece.iaescore.com
Hyperparameter optimization using custom genetic algorithm for classification of benign and malicious traffic on internet of things-23 dataset
Karthikayini Thavasimani, Nuggehalli Kasturirangan Srinath
Computer Science and Engineering, Rashtreeya Vidyalaya College of Engineering, Bengaluru, India
Article Info
Article history: Received Jan 15, 2021; Revised Mar 21, 2022; Accepted Apr 5, 2022

ABSTRACT
Hyperparameter optimization is one of the main challenges in deep learning
despite its successful exploration in many areas such as image classification,
speech recognition, natural language processing, and fraud detections.
Hyperparameters are critical as they control the learning rate of a model and
should be tuned to improve performance. Tuning the hyperparameters
manually with default values is a challenging and time-intensive task.
Though the time and efforts spent on tuning the hyperparameters are
decreasing, it is always a burden when it comes to a new dataset or solving a
new task or improving the existing model. In our paper, we propose a
custom genetic algorithm to auto-tune the hyperparameters of the deep
learning sequential model to classify benign and malicious traffic from
internet of things-23 dataset captured by Czech Technical University, Czech
Republic. The dataset is a collection of 30.85 million records of malicious
and benign traffic. The experimental results show a promising outcome of
98.9% accuracy.
Keywords:
Auto tuning hyperparameters
Bots
Custom genetic algorithm
Cyber-attacks
Hyperparameters
Neural networks
Optimization

This is an open access article under the CC BY-SA license.
Corresponding Author:
Karthikayini Thavasimani
Computer Science and Engineering, Rashtreeya Vidyalaya College of Engineering
Mysore Rd, RV Vidyaniketan, Post, Bengaluru, Karnataka 560059, India
Email: karthikayini@outlook.com
1. INTRODUCTION
A bot is a software program designed to automate tasks, run scripts, or imitate human behavior, such as content crawling, manipulating opinions, and targeting vulnerable machines. Initially, bots were created with good intentions to execute automated tasks: search engine bots (Google/Bing), website monitoring bots (Pingdom), social network bots (Facebook bot), and so on. These bots are naive, and they are easy to detect with standard detection strategies based on their execution patterns. Now, bots are used in e-commerce,
telemarketing, educational sectors for coaching, self-guided learning, and large enterprises as intelligent
personal assistants like Amazon's Alexa and Apple's Siri, and in social network software development projects.
They are beneficial to individuals as well as to businesses. As summarized in [1], bots are used for various
purposes/tasks in GitHub projects. Despite the benefits, they are also used for exploiting information from systems, threatening users by hacking their private information, making false allegations, running fake advertisements, committing banking theft, launching distributed denial-of-service (DDoS) attacks, particularly on e-commerce sites, affecting the stock markets, and so on. Generally, botnets, networks of compromised hosts or bots, are
controlled by a bot-master to do these malicious activities. These bots can be detected using techniques such
as a system based on social network information, crowdsourcing, or feature based. The network visualization
of [2] explained the effects of bad bots that created online debate among Twitter users on recent California
law on vaccination requirements.
Due to advancements encountered in information technology [3], increased usage of social
networks, application programming interfaces (APIs), internet of things (IoT) smart devices, e-commerce,
and cloud services, the number of cyber-attacks is increasing. Cyber-attacks target vulnerable systems to disrupt services, destroy valuable information, or obtain financial benefit. Various types of new cyberattacks evolve every day on the internet. Cyberattacks are classified as software supply chain attacks, phishing attacks, attacks on cloud services, and attacks on mobile devices [4]. For instance, attackers injected a
skimming script into the JavaScript libraries used by online stores on PrismWEB, an e-commerce platform
affecting more than 200 online university campus stores in North America. Personalizing the email contents
or encoding emails with malicious links are social engineering methods used by the scammers to target
personal accounts. The year 2020 saw many attacks on public cloud environments, resulting in major data theft incidents across the business world. Mobile device attacks, especially banking attacks, are
the most contemporary cyber-attacks faced by individuals. Most of these attacks are triggered using bots.
As per the statistics, IoT devices are projected to reach 20 billion by 2021, and their usage in the information technology (IT) industry is expected to grow in the coming years. There is a huge risk of bot attacks on these devices as the internet world acclimates to the next stage of innovation. Given these risks, and the rising popularity coupled with broad adoption, versatility, and diversity, industries require new forms of protection to detect unauthorized internet of things devices in their networks. While much security research targets established technologies, it should take a wider view of recent technologies like IoT, where there are more chances of a security breach. IoT devices include systems apart from computer systems, such as smart speakers, smart lights, and smart lockers, that exchange data over the internet without human intervention. Due to minimal manual intervention, they are more vulnerable. It is easy to launch a DDoS attack from any compromised IoT device autonomously; after a while, the device behaves like a regular one again. The challenge lies in detecting these devices, which change their signature dynamically. Our
work is organized as follows: section 2 presents related work on IoT device vulnerabilities and on tuning a deep learning model's hyperparameters, section 3 describes the dataset and the feature selection process, sections 4 and 5 present the proposed custom genetic algorithm (CGA) to auto-tune the hyperparameters, and section 6 discusses the experimental results, followed by the conclusion and future work.
2. RELATED WORK
With rapid technological innovations and adoptions, the cybersecurity landscape will become much worse, considering the current deteriorating situation. The greatest security threat now may be due to internet of
things devices [5]. The unprecedented risks that IoT devices pose to businesses are explained by many
security experts. IoT devices are the ideal target of a botnet to launch DDoS attacks.
Mustapha and Alghamdi [6] explained the common vulnerabilities of IoT devices and provided ways to solve them. Insufficient authentication and authorization, unmodified default passwords, unmonitored ports, insecure network services, and lack of transport encryption are some of the common reasons for security breaches in IoT devices [6]. The recent major technological trends in the industries are collaboration,
automation, fighting the unknown, virtual and shared resources, and the internet of things progress towards
an autonomous world [7]. So, finding and blocking the bots that control the IoT devices for malicious
activities is vital. Recently, extensive research has been carried out on how intelligent systems can address cybersecurity issues.
Machine learning and deep learning are the pathways to achieving cognitive intelligence in
machines. Deep learning is a subset of machine learning, inspired by the artificial neural network, a
collection of interconnected neurons trained to infer the results. Intelligent systems can detect security
breaches in the IoT devices or any network devices per se [7]. Ranjit et al. [8] claimed that Bayesian
optimization results in better hyperparameter space coverage with high validation accuracy in cloud infrastructure. They used a long-term convolutional recurrent neural network for tuning the hyperparameters in a distributed fashion and validated the results on a video activity recognition problem. Neary [9] proposed
a technique using reinforcement learning and convolutional neural network (CNN) to optimize the
hyperparameters for object recognition. A multiagent-based method was used, and they reported an accuracy
of 95% using the Modified National Institute of Standards and Technology dataset, where the optimal
parameters were obtained in a short period.
Aich et al. [10] proposed a CNN-based model for extracting valuable information from various web
content categories that the text mining applications can leverage. They manually tuned the hyperparameters
of their deep learning model to improve the accuracy. By manually adjusting the hyperparameters, they
achieved 85 to 92% accuracy in the text classification task. Finding an optimal hyperparameter, a
time-consuming process, plays a crucial role in improving the model's accuracy. Ugli et al. [11] developed a
system based on deep learning to detect and classify the source of the distractions while driving a vehicle in
real-time to avoid accidents and to improve transport safety. A transfer learning method with pre-trained
weights and ResNet50 using various optimizers is developed to enhance the model’s learning capabilities.
Stochastic gradient descent, Adam and RMSprop optimizers are selected after evaluating them manually to
improve the accuracy. Their transfer learning system with SGD optimizer and ResNet50 model achieved an
accuracy of 98.4% on the test dataset, “distracted drivers”.
Chakraborty et al. [12] developed a 3D CNN architecture to learn the intricate patterns in the
magnetic resonance imaging (MRI) scans to detect Parkinson's disease. The 3T T1-weighted data from MRI
scans from the Parkinson's progression markers initiative database, containing 406 subjects' information, are used for the research, and a Bayesian optimization technique, Bayesian sequential model-based optimization (SMBO), is applied with a fixed penalty value to tune the hyperparameters using the "AdaDelta" optimizer to determine the optimal values. The developed model achieved an overall accuracy of
95.29% with a recall of 0.943, a precision of 0.927, and an F1 score of 0.936. Antonio [13] presented an optimization framework of a sequential model to optimize a multi-extremal, black-box, and partially defined expensive objective function undefined outside the feasible regions. Support vector machine constrained Bayesian optimization (SVM-BO) is applied in two phases: one to approximate the unknown feasible region boundary using a support vector machine, and the other to find the globally optimal solution in the feasible region. A Bayesian optimization process that uses a fixed penalty value is compared for analysis. The author concluded by showing the significant sample and computational efficiency of SVM-BO over Bayesian optimization.
Goel et al. [14] proposed OptCoNet, a CNN architecture for automatic screening of coronavirus
disease (COVID-19) patients into three categories, normal, pneumonia and COVID-19 positive. The
proposed architecture consists of optimized feature extraction and classification components. They used the grey wolf optimizer (GWO) to find the optimal hyperparameters and publicly available X-ray images of normal, pneumonia, and COVID-19 patients for analysis. They compared the results of the proposed architecture with
existing optimized CNN methods. They concluded that OptCoNet outperformed other models with an
accuracy of 97.78% and an F1 score of 95.25%. Abbas et al. [15] proposed a deep learning CNN based
model called DeTraC that performs activities such as decomposition, transfer, and composition to classify the
chest X-ray images of COVID-19 patients and deal with data irregularities. A comprehensive dataset of
80 samples of normal chest X-ray (CXR) images from the Japanese Society of Radiological Technology
from various hospitals worldwide and 116 CXR images of COVID-19 and severe acute respiratory syndrome
(SARS) was used for analysis. The hyperparameters of the models were manually selected and compared
with the pre-trained models. An accuracy of 93.0% (with a sensitivity of 100%) in COVID-19 detection from
X-ray images was reported.
Namasudra et al. [16] proposed a nonlinear autoregressive neural network time series (NAR-NNTS) model to forecast COVID-19 cases. Training algorithms such as Levenberg-Marquardt optimization, Bayesian regularization, and scaled conjugate gradient were used. The NAR-NNTS model outperformed others in forecasting COVID-19 cases on two metrics, mean square error (MSE) and root mean square error (RMSE). Lin et al. [17] proposed a neural network model named attention segmental recurrent neural
network (ASRNN) that uses a semi-Markov conditional random field (semi-CRF) model for sequence
labelling. Using hierarchical structure and attention mechanism, the model differentiates more important and
less important information while constructing the segments. Mini-batch stochastic gradient descent is used as
the optimization algorithm with limited static hyperparameters. On the CoNLL2000 and Cora datasets, the proposed model outperformed existing methods with F1 scores of 93.7% and 80.18%, respectively.
Yi and Bui [18] proposed a framework fusing meta-learning and a Bayesian technique to better predict highway traffic. They used the Korean highway system dataset for analysis by optimizing the
hyperparameters. Bui and Yi [19] optimized hyperparameters of deep learning models using meta-learning to
predict the traffic with data from the vehicle detection system. Hoof et al. [20] proposed a new surrogate
model based on gradient boosting to find the optimal hyperparameters and regulated the exploration space
with success on classification problems of reasonable size. However, the proposed model is not built and scaled for real-world problems with larger datasets such as network data. The multi-objective simulated annealing (MOSA) [21] algorithm efficiently searches the objective space, outperforming the simulated annealing (SA) algorithm, with the caveat that computational complexity is as important as test accuracy. Hoopes et al. [22] proposed HyperMorph, a learning-based strategy that eliminates the need to tune hyperparameters during training, reducing the computational time but limiting the capability to find the optimal values.
Lee et al. [23] applied the genetic algorithm on CNN using both network configuration and
hyperparameters. They showed that their algorithm outperformed genetic CNN by 11.74% on an amyloid
brain image dataset used for Alzheimer’s disease diagnosis. Lin et al. [24] proposed a hybrid approach by
combining a modified hyperband, mode-pursuing sampling in the trust-region, and repeating the selection
and sampling process until a termination criterion is met for hyperparameter optimization. Shaziya et al. [25]
explored the impacts of various hyperparameters in developing deep learning models. Another proposed system classifies intrusion attacks on the cyber defense dataset (CSE-CIC-IDS2018) by applying artificial intelligence, achieving an accuracy of 99.9% and a receiver operating characteristic score of 0.999 [26]. Hossain et al. [27] achieved 99.08% accuracy and a detection rate of 0.93 on the
Intrusion Detection Evaluation Dataset 2017, a labelled dataset to detect DoS attacks by fine-tuning long
short term memory hyperparameters: learning rates, loss functions, optimizers, and activation functions.
To reduce DenseNet convolutional networks' training time, Nugroho and Suhartanto [28] have used
a random search to select the best candidates for batch size and learning rate. Thavasimani and Srinath [29]
analyzed various optimizers and their impact on a deep learning model's performance and concluded the
importance of tuning the hyperparameters. Kimura et al. [30] tuned the CASIA algorithm's hyperparameters
to increase the classification detection rate, effectively improving iris liveness detection. Various hyperparameters of a random search algorithm were analyzed to achieve the best performance [31]. A reinforcement learning mechanism was applied to tune the hyperparameters for computational resource allocation problems [32]. Complete insights into the recent advancements and theories underlying Bayesian optimization, and how it solves difficult problems, have also been discussed.
Kiatkarun and Phunchongharn [33] proposed hyper-genetic, a variant for the genetic algorithm to
obtain the optimal hyperparameters for the gradient boosting machine algorithm. They compared their results with grid search, Bayesian optimization, and random search techniques, and showed that their results outperformed the others when evaluated on four different datasets. Likewise,
Putatunda and Rama [34] proposed and proved that the randomized-hyperopt outperforms hyperopt, grid and
random search techniques. Huang et al. [35] solved the capacitated arc routing problem in logistics using
Bayesian optimization technique. Jamieson et al. [36] applied Bayesian optimization on kernel ridge
regression machine learning methods to find optimal hyperparameter configuration.
Cho et al. [37] proposed a modified version of Bayesian optimization: diversified, early termination
enabled, and parallel Bayesian optimization (DEEP-BO), to select the optimal hyperparameters and
benchmarked the performance against existing techniques. DEEP-BO mostly outperformed other methods but was not always optimal in choosing the best hyperparameters. Bayesian optimization dominates current optimization techniques. It focuses on optimizing configuration selection to solve the problem of optimizing functions that are high-dimensional and non-convex, with possibly low evaluation noise and unknown smoothness. The above papers and current black-box techniques motivated us to create a custom
genetic algorithm (CGA), an optimization technique to return the best hyperparameters of deep learning
models with higher prediction accuracy.
3. DATA PREPROCESSING
For our investigation, we have used the IoT-23 dataset [38] containing malicious and benign traffic.
It is a new dataset captured on IoT devices with 20 captures of malware attacks and three captures of benign
traffic, accounting for 21 GB in size. It was first published in January 2020. The dataset contains data monitored from 2018 through 2019 in the Stratosphere Laboratory, Artificial Intelligence Center group, Faculty of Electrical Engineering, Czech Technical University, the Czech Republic, and was funded by Avast Software, Prague. The main goal of capturing this IoT dataset is to provide a platform for researchers to efficiently
test their algorithms/models. The IoT devices used in the laboratory are an Amazon Echo home intelligent
personal assistant, Philips HUE smart LED lamp, and Somfy smart door lock. The official site,
stratosphere.org contains comprehensive information on the types of malicious attacks and benign traffic
listed in Table 1 and Table 2, and how they were collected.
Table 1. Summary of the benign scenarios
# Name of Dataset Duration(~hrs) Packets ZeekFlows Pcap Size Device
1 CTU-Honeypot-Capture-7-1 (somfy-01) 1.4 8,276 139 2,094 KB Somfy Door Lock
2 CTU-Honeypot-Capture-4-1 24 21,000 461 4,594 KB Philips HUE
3 CTU-Honeypot-Capture-5-1 5.4 398,000 1,383 381 MB Amazon Echo
The dataset contains malicious traffic captured over different protocols such as domain name system (DNS), hypertext transfer protocol (HTTP), dynamic host configuration protocol (DHCP), telecommunication network (Telnet), and secure socket layer (SSL). We used the labeled connection log file that contains fields such as uid, id.orig_h, id.orig_p, id.resp_h, id.resp_p, proto, service, duration, orig_bytes, resp_bytes, conn_state, local_orig, local_resp, missed_bytes, history, orig_pkts, orig_ip_bytes, resp_pkts,
resp_ip_bytes, tunnel_parents and target output. All the files are consolidated into a single file for
preprocessing.
The file contents are split into chunks of a thousand rows and processed using threads, and the missing values are filled. Empty, null, and hyphen values are replaced with zero, and non-string values with not available (NA). The string values are converted to numerical values using the one-hot encoding technique and a label encoder (for categorical values).
Table 2. Summary of malicious IoT scenarios
# Name of Dataset Duration (hrs) Packets ZeekFlows Pcap Size Name
1 CTU-IoT-Malware-Capture-34-1 24 233,000 23,146 121 MB Mirai
2 CTU-IoT-Malware-Capture-43-1 1 82,000,000 67,321,810 6 GB Mirai
3 CTU-IoT-Malware-Capture-44-1 2 1,309,000 238 1.7 GB Mirai
4 CTU-IoT-Malware-Capture-49-1 8 18,000,000 5,410,562 1.3 GB Mirai
5 CTU-IoT-Malware-Capture-52-1 24 64,000,000 19,781,379 4.6 GB Mirai
6 CTU-IoT-Malware-Capture-20-1 24 50,000 3,210 3.9 MB Torii
7 CTU-IoT-Malware-Capture-21-1 24 50,000 3,287 3.9 MB Torii
8 CTU-IoT-Malware-Capture-42-1 8 24,000 4,427 2.8 MB Trojan
9 CTU-IoT-Malware-Capture-60-1 24 271,000,000 3,581,029 21 GB Gagfyt
10 CTU-IoT-Malware-Capture-17-1 24 109,000,000 54,659,864 7.8 GB Kenjiro
11 CTU-IoT-Malware-Capture-36-1 24 13,000,000 13,645,107 992 MB Okiru
12 CTU-IoT-Malware-Capture-33-1 24 54,000,000 54,454,592 3.9 GB Kenjiro
13 CTU-IoT-Malware-Capture-8-1 24 23,000 10,404 2.1 MB Hakai
14 CTU-IoT-Malware-Capture-35-1 24 46,000,000 10,447,796 3.6 GB Mirai
15 CTU-IoT-Malware-Capture-48-1 24 13,000,000 3,394,347 1.2 GB Mirai
16 CTU-IoT-Malware-Capture-39-1 7 73,000,000 73,568,982 5.3 GB IRCBot
17 CTU-IoT-Malware-Capture-7-1 24 11,000,000 11,454,723 897 MB Linux.Mirai
18 CTU-IoT-Malware-Capture-9-1 24 6,437,000 6,378,294 472 MB Linux.Hajime
19 CTU-IoT-Malware-Capture-3-1 36 496,000 156,104 56 MB Muhstik
20 CTU-IoT-Malware-Capture-1-1 112 1,686,000 1,008,749 140 MB Hide and Seek
We applied the Chi-squared technique to determine the important features. Features such as id.orig_h, id.orig_p, id.resp_h, id.resp_p, proto, service, duration, orig_bytes, resp_bytes, conn_state, history, orig_pkts, orig_ip_bytes, resp_pkts, resp_ip_bytes, and tunnel_parents are marked as important for prediction. We added an additional column, target output, containing the label (benign or malicious). Features such as ts, uid, local_orig, local_resp, and missed_bytes are ignored. The final input file is used for training and testing the proposed custom genetic algorithm.
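The Chi-squared scoring behind this feature selection can be sketched as follows. The score function is the textbook chi-squared statistic over a feature-vs-label contingency table, written out by hand for illustration, not the authors' implementation.

```python
from collections import Counter

def chi2_score(feature, labels):
    """Chi-squared statistic of one categorical feature vs. the label:
    sum over table cells of (observed - expected)^2 / expected."""
    n = len(labels)
    observed = Counter(zip(feature, labels))
    f_tot, l_tot = Counter(feature), Counter(labels)
    score = 0.0
    for f in f_tot:
        for l in l_tot:
            expected = f_tot[f] * l_tot[l] / n
            score += (observed[(f, l)] - expected) ** 2 / expected
    return score

# toy traffic: proto separates the classes perfectly, conn_state does not
proto = ["tcp", "tcp", "udp", "udp"]
state = ["S0", "SF", "S0", "SF"]
label = ["malicious", "malicious", "benign", "benign"]
```

Features with higher scores depart more from label-independence, which is why the low-scoring fields (ts, uid, and similar) are dropped while the high-scoring ones are kept.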
4. PROPOSED WORK
We propose a CGA for tuning the deep learning hyperparameters. Our algorithm is inspired by
Darwin’s theory of natural evolution. Evolutionary algorithms are widely used in search and optimization
problems. We created CGA to address the limitations of the genetic algorithm: CGA aims to converge towards the global optimum rather than local optima, and to avoid fitness measures that result in poor convergence. We introduce three modules with simple approaches, bonus-penalty, retain, and optimal selection, to increase the mutation rate and drive convergence towards the global optimum.
In the bonus-penalty module, the algorithm applies penalties to networks reaching early convergence; otherwise, a bonus is applied. The networks with bonus values are selected to generate future offspring (child neural networks). Messenger ribonucleic acid (mRNA) is a single-stranded molecule of ribonucleic acid that corresponds to the genetic sequence of a gene and is read by ribosomes during protein synthesis. In CGA, by analogy, the bonus and penalty are routed as mRNA values and are read by the neural networks while creating new populations. Based on the bonus and penalty values, a population member is either selected or discarded. In the retain module, the algorithm picks a random set of individuals from the discarded list post crossover and mutation to increase the mutation rate.
As a result, a wide range of hyperparameter values is explored in the search space. In the optimal selection module, the top 50% of the high-performing configurations, in terms of low loss, high accuracy, and high F1 measure, are used for creating future offspring. This reduces the optimization time to a great extent.
5. ALGORITHM DESCRIPTION
A deep learning sequential model with hyperparameter values from Table 3 is used to initiate the process. The mRNA values, bonus and penalty, are set to null in the beginning. As the population
grows, these values change dynamically. The initial generation of neural networks (NNs) starts with a configuration/hyperparameter space created using module 3. A combination of metrics, high F1 measure, high accuracy, and low loss, is used as a fitness function to select the best neural networks for creating child neural networks. NNs with higher metric values and a bonus are preferred during parent selection. Because selection is random, there is a probability of picking identical NNs; we eliminate this scenario by re-drawing random NNs. New child neural networks are created using crossover and mutation. If they converge early, a penalty is applied to the NNs; otherwise, a bonus is applied. This process continues for any set number of generations. In our analysis, we used 20 generations. As a result of the evolution, we obtain the optimal hyperparameter values with the highest metric values.
5.1. Module 1 (M1) retain
To produce the fittest offspring of neural networks (NNs), the parents are chosen with high accuracy, high F1 measure, and low loss, and the remaining networks are eliminated. This leads to early convergence at a faster pace. To stop premature convergence, mutation and diversity should be improved. To do so, a part of the discarded neural networks is added back to the parent population.
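The retain step can be sketched as below. The retain fraction of 0.1 is an illustrative choice; the paper does not state the exact probability used.

```python
import random

def retain(parents, discarded, retain_fraction=0.1, rng=None):
    """Add a random subset of the discarded networks back to the
    parent population to preserve diversity and mutation."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    return parents + [d for d in discarded if rng.random() < retain_fraction]

kept_population = retain(["p1", "p2"], list(range(50)))
```

The re-admitted individuals are drawn uniformly at random, so even poorly scoring configurations occasionally re-enter the gene pool and widen the explored hyperparameter range.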
5.2. Module 2 (M2) penalty-bonus
To maximize convergence towards the problem's global optimum, bonuses and penalties are used. The NNs that reach convergence early are penalized; if they do not, bonuses are applied. While generating future offspring, NNs with a bonus are preferred. The bonus and penalty values are routed/applied using mRNA values, so mRNA values carrying a bonus are prioritized in population selection.
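The penalty-bonus bookkeeping can be sketched as follows. The "mrna" field is our name for the routed value; the specific update rule (increment by one per generation) is an illustrative assumption.

```python
def apply_mrna(network, stopped_early):
    """Penalize a network whose training converged early (early stop
    fired); reward the rest. The running value is routed as 'mrna'."""
    if stopped_early:
        network["penalty"] += 1
        network["mrna"] = -network["penalty"]
    else:
        network["bonus"] += 1
        network["mrna"] = network["bonus"]
    return network

early = apply_mrna({"bonus": 0, "penalty": 0, "mrna": 0}, stopped_early=True)
steady = apply_mrna({"bonus": 0, "penalty": 0, "mrna": 0}, stopped_early=False)
```

When parents are drawn for crossover, networks with a positive mRNA value are preferred, which biases the evolution away from prematurely converged configurations.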
5.3. Module 3 (M3) optimal selection
The initial and future populations of NNs are determined using optimal selection. The process starts with a default population. The random parameters selected for the initial population significantly impact the overall process of finding the optimal hyperparameters: it might be slow, moderate, or fast. One of the biggest challenges of genetic algorithms is that execution time depends on the populations to traverse, the space to search, and the available resources.
Module 3 - Population selection
Input: limit K; sets S, where l(r,q) denotes the qth loss from the rth set; maximum size M
Initialize: S0 = [n], N = 10
Finding the optimal set:
    For j = 0, 1, ..., c:
        Set Sj = [s(N-j)] and mj = [M * N(j-c)]
        Pull each set in Sj for mj times
        Keep the best set, in terms of the mj-th observed loss, as S(j+1)
Output: optimal set, Os
Module 3 consists of the steps: i) select a random hyperparameter configuration, ii) evaluate the performance, iii) keep the top 50% and ignore the rest, and iv) jump to step ii until the optimal population is created (default size: 10). In module 3, only 50% of the population is used, based on the assumption that promising populations tend to produce better offspring than the worse ones, even early in the process. Considering all the neural networks increases the execution time and the time taken to find the optimal hyperparameters. The objective is to avoid wasting training time on runs that lead nowhere and to focus on the top 50% that is likely to yield optimal hyperparameters, thereby allocating resources to a potentially more valuable population. In CGA, the top-performing 50% of neural network hyperparameters are used to explore the hyperparameter space further, and the remaining NNs are ignored.
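The successive-halving loop of steps i)-iv) can be sketched as below. `evaluate` stands in for a short training run scoring each configuration; the function name and the toy scoring are our own for illustration.

```python
def optimal_selection(configs, evaluate, target_size=10):
    """Repeatedly score the population and keep only the top half
    until it shrinks to the target size (default 10)."""
    population = list(configs)
    while len(population) > target_size:
        ranked = sorted(population, key=evaluate, reverse=True)
        population = ranked[: max(target_size, len(ranked) // 2)]
    return population

# toy run: the "score" of a configuration is just its value,
# so 80 candidates are halved (80 -> 40 -> 20 -> 10)
best = optimal_selection(range(80), evaluate=lambda c: c)
```

Each halving discards the bottom scorers before they consume further training time, which is exactly the resource-allocation argument made above.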
The focus is on reducing the training time spent on hyperparameters that do not provide a concrete solution, while the top-performing hyperparameter sets are used for further analysis. Module 3 differs from other approaches by i) limiting the configurations allowed to converge, to reduce resource usage, and ii) achieving faster execution by using top-performing sets. Figure 1 contains the algorithm of the custom genetic algorithm with modules 1, 2, and 3 and how they are invoked.
Figure 2 shows the workflow of the custom genetic algorithm. Start the process with initialization by setting the initial hyperparameters and the algorithm variables to null. Create the ancestors by invoking module 3, optimal selection, which returns the default population to start the evolution; they become the first generation of the population/hyperparameters.
Custom Genetic Algorithm
Initialization:
    Set the hyperparameters such as epochs, layers, and neurons
    Set the penalty and bonus to 0
    Set early stopping with patience = 5
Create the initial population:
    For i = 1 to size_of_the_population:
        Create a network
    Invoke Module 3 to return the optimal neural networks to begin with
    Return the network objects (population)
For i = 1 to number_of_generations:
    Train the networks
    Evolve the population of networks:
        Score and sort the networks in descending order
        Select the parents with high score and priority for crossover
        While len(children) < desired_length:
            If the chosen parents are identical networks:
                Get a random parent network from the population
            Breed and mutate; a child network is formed
            Do not grow the children beyond the desired length
        # Module 2
        If a network results in an early stop:
            Add a penalty
        Else:
            Add a bonus
        Update the network's priority list with the penalty and bonus
        Push the unused networks to the discarded list
        Invoke Module 3 to generate the optimal population set
        # Module 1
        For i = 1 to retain_length:
            # Keep some random individuals for future crossover
            If random_select > random.random():
                population.append(individual)
Select the hyperparameters with the best values
(high accuracy, low loss, and high F1 measure)
Figure 1. Custom genetic algorithm
Figure 2. Workflow of custom genetic algorithm (initialize the hyperparameters and the variables; create ancestors using M3 to start the evolution of hyperparameters; evolution proceeds through the penalty-bonus, retain, and optimal selection modules over the next generations until the optimal hyperparameters are obtained)
Once the evolution starts, prefer the parents with higher accuracy for crossover and mutation. Select
a part of the discarded individuals for diversity and mutation. Apply penalties to parents resulting in early
convergence and bonuses to parents with higher accuracy and diversity to select the optimal population.
Repeat the process for twenty generations to obtain the optimal child. The final output is the neural
network's optimal hyperparameters.
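The evolution loop described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: training a Keras network is replaced by a stand-in `fitness` function, and the helper names (`breed`, `mutate`, `evolve`) and the reduced search space are hypothetical.

```python
import random

# Hypothetical stand-in for training a network and scoring it; in the paper
# each individual is a trained model scored on accuracy, loss, and F1.
def fitness(individual):
    return 1.0 - abs(individual["dropout"] - 0.2) - 0.001 * individual["layers"]

SPACE = {"epochs": [20, 40, 50, 60],
         "layers": [1, 2, 5, 10],
         "dropout": [0.1, 0.2, 0.5]}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def breed(mom, dad):
    # Uniform crossover: each hyperparameter comes from either parent.
    return {k: random.choice([mom[k], dad[k]]) for k in SPACE}

def mutate(child, rate=0.2):
    if random.random() < rate:
        key = random.choice(list(SPACE))
        child[key] = random.choice(SPACE[key])
    return child

def evolve(population, retain=0.4, random_select=0.1):
    # Score and sort the networks in descending order.
    scored = sorted(population, key=fitness, reverse=True)
    retain_length = int(len(scored) * retain)
    parents = scored[:retain_length]
    # Module 1: keep some discarded individuals for diversity.
    for individual in scored[retain_length:]:
        if random_select > random.random():
            parents.append(individual)
    desired_length = len(population) - len(parents)
    children = []
    while len(children) < desired_length:
        mom, dad = random.choice(parents), random.choice(parents)
        if mom is dad:  # same network: get a random parent instead
            dad = random.choice(parents)
        children.append(mutate(breed(mom, dad)))
    return parents + children

random.seed(7)
population = [random_individual() for _ in range(20)]
for _ in range(10):  # the paper runs 20 generations with 50 individuals each
    population = evolve(population)
best = max(population, key=fitness)
```

Because the top-scoring parents are carried over unchanged each generation, the best fitness in the population never decreases; the penalty/bonus priority update of Module 2 would slot into the sort key inside `evolve`.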
6. EXPERIMENTAL RESULTS AND DISCUSSION
In this section, we present the experimental results. We tested our algorithm using Google Colab Pro
with 25 GB RAM, a 147 GB hard disk drive, and an NVIDIA Tesla P100 GPU. Table 3 lists the
hyperparameters, their ranges, and the optimized values of the default and CGA models. After iterating
through 20 generations (with a population of 50 per generation) over the hyperparameter space, the CGA
returned the optimal hyperparameter values with 98.9% accuracy. For comparison, we experimented on a
deep learning model with default hyperparameter values and with other optimization techniques. The default
hyperparameters resulted in an accuracy of 88.22%. The results of the default model vary depending on the
parameters; we validated the results on a default input range using a brute-force technique to avoid
overfitting and underfitting.
Table 4 compares the default and CGA tuned models on various metrics such as accuracy, loss,
precision, recall, and F1 score on the training, validation, and test datasets. We used a validation split of 0.3
to ensure that the deep learning models generalize well and produce the best predictive model. We used 70%
of the data for training and 30% for testing. The CGA tuned model outperformed the default model with an
accuracy of 98.9% on the test data.
Table 3. List of hyperparameters, initial range settings, and optimized values
Hyperparameters                | Input range                                                | Without tuning       | CGA
Number of epochs               | [20, 40, 50, 60]                                           | 40                   | 20
Batch size                     | [20, 50, 100, 200, 300, 400]                               | 500                  | 20
Number of layers               | [1, 2, 5, 10]                                              | 2                    | 4
Number of neurons              | [2, 5, 10, 15]                                             | 40                   | 40
Regularizer (dropout)          | [0.1, 0.2, 0.5]                                            | 0.2                  | 0.2
Optimizers                     | ["adam", "sgd", "rmsprop", "Adadelta", "Adamax", "Nadam"]  | Adam                 | Adam
Activations                    | ["relu", "sigmoid", "tanh"]                                | ReLU                 | ReLU
Last layer activation function | Sigmoid                                                    | Sigmoid              | Sigmoid
Losses                         | Binary cross entropy                                       | Binary cross entropy | Binary cross entropy
Metrics                        | Test accuracy                                              | 0.8822               | 0.9890
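The input ranges in Table 3 define a discrete search space. A brute-force sweep of the kind used to validate the default range enumerates every combination; the sketch below (the dictionary mirrors Table 3's ranges, and the variable names are illustrative) shows why the CGA's budget of 20 generations of 50 individuals is far cheaper than exhausting the space.

```python
from itertools import product

# Discrete hyperparameter space mirroring the input ranges in Table 3.
space = {
    "epochs": [20, 40, 50, 60],
    "batch_size": [20, 50, 100, 200, 300, 400],
    "layers": [1, 2, 5, 10],
    "neurons": [2, 5, 10, 15],
    "dropout": [0.1, 0.2, 0.5],
    "optimizer": ["adam", "sgd", "rmsprop", "Adadelta", "Adamax", "Nadam"],
    "activation": ["relu", "sigmoid", "tanh"],
}

# Every candidate configuration as a dict, ready to build and train a model.
keys = list(space)
configs = [dict(zip(keys, combo)) for combo in product(*space.values())]

# 4*6*4*4*3*6*3 = 20736 combinations for brute force, versus at most
# 20 generations * 50 individuals = 1000 evaluations for the CGA.
```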
Table 4. Model results
Training results
Model   | Accuracy | Loss   | Precision | Recall | F1
Default | 0.8851   | 0.2333 | 0.8274    | 0.9922 | 0.9022
CGA     | 0.9675   | 0.1142 | 0.9523    | 0.9939 | 0.9703
Validation results
Model   | Accuracy | Loss   | Precision | Recall | F1
Default | 0.8836   | 0.2251 | 0.8515    | 0.9483 | 0.8971
CGA     | 0.9888   | 0.0535 | 0.9805    | 0.9991 | 0.9892
Test results
Model   | Accuracy | Loss   | Precision | Recall | F1
Default | 0.8822   | 0.2262 | 0.8489    | 0.9487 | 0.8932
CGA     | 0.9890   | 0.0522 | 0.9807    | 0.9991 | 0.9895
Figures 3(a)-(d) show the accuracy and loss graphs of the default and CGA tuned models. From the
graphs, it is evident that the CGA tuned model's accuracy increases as its loss decreases. The CGA tuned
model eliminates both overfitting and underfitting, resulting in higher accuracy and lower loss. Both training
and test accuracy improve by more than 10% over the model with default hyperparameter values.
Figures 4(a)-(d) show the receiver operating characteristic (ROC) curves of optimization techniques such as
random search, Bayesian search, genetic algorithm, and the custom genetic algorithm. The area under the
curve for CGA is 0.99, outperforming the other optimization techniques in identifying false negatives and
true positives. Table 5 compares various metrics such as F-measure, accuracy, recall, and precision for
existing optimization techniques such as random search, genetic algorithm (TPOT), and Bayesian search.
The CGA tuned model outperformed the existing techniques with an accuracy of 0.9890 and an F-measure
of 0.9895.
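The areas under the ROC curves in Figures 4(a)-(d) can be computed without plotting. Below is a minimal sketch of AUC via the rank-sum (Mann-Whitney) formulation; the labels and scores are hypothetical illustrations, not the paper's model outputs.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: positives mostly, but not always, above negatives.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.95, 0.9, 0.4, 0.5, 0.2, 0.1]
print(auc(labels, scores))  # 8/9, about 0.889
```

Perfect separation of benign and malicious traffic gives an AUC of 1.0; the 0.99 reported for the CGA tuned model means almost every malicious sample is scored above almost every benign one.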
Int J Elec & Comp Eng ISSN: 2088-8708
Hyperparameter optimization using custom genetic algorithm for … (Karthikayini Thavasimani)
Figure 3. Comparing the default model's (a) accuracy and (b) loss with the CGA tuned model's (c) accuracy
and (d) loss
Figure 4. ROC curve for (a) random search, (b) Bayesian search, (c) genetic algorithm
and (d) custom genetic algorithm
Table 5. Comparison with existing optimization techniques
Technique       | Accuracy | F-measure | Precision | Recall
Random search   | 0.9650   | 0.9694    | 0.9411    | 0.9994
Bayesian search | 0.9535   | 0.9585    | 0.9322    | 0.9863
TPOT            | 0.9557   | 0.9609    | 0.9247    | 1.0000
CGA             | 0.9890   | 0.9895    | 0.9807    | 0.9991
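The F-measure column in Table 5 is the harmonic mean of precision and recall, F1 = 2PR/(P+R); the reported values can be recovered from the precision and recall columns up to the rounding of the published figures. A quick check (the row data below is copied from Table 5):

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall (the F1 score).
    return 2 * precision * recall / (precision + recall)

rows = {
    "Random search": (0.9411, 0.9994),
    "Bayesian search": (0.9322, 0.9863),
    "TPOT": (0.9247, 1.0000),
    "CGA": (0.9807, 0.9991),
}
for name, (p, r) in rows.items():
    print(f"{name}: F1 = {f_measure(p, r):.4f}")
```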
7. CONCLUSION
In this paper, we discussed IoT devices, their vulnerabilities, and the impact of malicious bots, and
explained how a deep learning model is tuned to classify benign and malicious traffic. One of the biggest
challenges of a deep learning model is finding the optimal hyperparameter values. Thus, we proposed a
custom genetic algorithm, an optimization technique to obtain the best hyperparameters for a model. The
algorithm successfully learned over a wide range of hyperparameter values without manual intervention. The
CGA tuned model outperformed the default model with an accuracy of 98.9% and was effective in reaching
the optimal solution. As future work, we plan to optimize the algorithm further by experimenting on new
datasets and comparing it against other optimization techniques.
BIOGRAPHIES OF AUTHORS
Karthikayini Thavasimani received her B.Eng. in Computer Science and
Engineering from Avinashilingam University for Women, India, and her Master's degree,
M.Eng. (Gold Medalist), in Computer Science and Engineering from K.S. Rangasamy College
of Engineering, Tiruchengode, Tamil Nadu, India. She is currently working as a Data Science
Consultant at Capgemini, Bangalore, India, and pursuing her part-time Ph.D. in machine
learning and big data at RV College of Engineering, Bangalore, Karnataka, India. She has
over five years of academic teaching experience in computer science and one year of industrial
experience in information technology. Her research interests include artificial intelligence, big
data, natural language processing, statistical analysis, and optimization techniques. She can be
reached at karthikayini@outlook.com.
Nuggehalli Kasturirangan Srinath received his B.E. (Electrical) degree from
Bangalore University in 1982, M.Tech (SE and OR) degree from Roorkee University, India, in
1986, and Ph.D. from Avinashilingam University, India. He has more than thirty years of
teaching experience. He worked as Professor and Dean (Academics) in the Department of
Computer Science and Engineering at RV College of Engineering, Bangalore, Karnataka,
India. He has authored 7 books, reviewed 4, and published more than 62 journal papers. He
received numerous awards, such as the "Best Faculty Award" by Cognizant Technologies and
Leadership in Academia by Intel. He is a subject matter expert in VTU-EDUSAT for
Microprocessors and Microcontrollers. He is an expert committee member of the University
Grants Commission, India, serving as Chairman. He can be reached at email: srinathnk@rvce.edu.in.