The world is changing rapidly as it becomes increasingly digital. Data science now sits at the centre of most technological evolution, and data underpins the future of nearly every industry. Adopting a data science approach is no longer optional for businesses; it has become a necessity not merely to survive but to grow. This paper offers a roadmap for anyone interested in this research field or getting started with machine learning, helping the reader grasp the key concepts behind it. It examines the benefits of automated machine learning systems, beginning with machine learning vocabulary and basic concepts, then explains concretely how to build a machine learning model, highlighting the challenges related to data and algorithms. Finally, it summarizes two studies applying machine learning in different fields: road traffic forecasting in transportation and demand prediction in supply chain management, where the predictive performance of various models is compared on different metrics.
A survey of big data and machine learning (IJECEIAES)
This paper presents a detailed analysis of big data and machine learning (ML) in the electrical power and energy sector, covering big data analytics for smart energy operations, applications, impact, measurement and control, and the associated challenges. Big data and machine learning approaches should be applied only after the power system problem has been analyzed carefully; matching their strengths to the problem at hand is of utmost importance. They can be of great help in planning and operating both the traditional grid and the smart grid (SG). The basics of big data and machine learning are described in detail, along with their applications in fields such as electrical power and energy, health care and life sciences, government, telecommunications, web and digital media, retail, finance, e-commerce, and customer service. Finally, the challenges and opportunities of big data and machine learning are presented.
Using deep learning to detect abnormal behavior in the internet of things (IJECEIAES)
The development of the internet of things (IoT) has increased exponentially, creating a rapid pace of change and enabling it to become more and more embedded in daily life. This is often achieved through integration: IoT is being integrated into billions of intelligent objects, commonly labeled “things,” from which the service collects various forms of data regarding both these “things” themselves and their environment. While IoT and IoT-powered devices can provide invaluable services in various fields, unauthorized access and inadvertent modification are potential issues of tremendous concern. In this paper, we present a process for resolving such IoT issues using adapted long short-term memory (LSTM) recurrent neural networks (RNN). With this method, we utilize specialized deep learning (DL) methods to detect abnormal and/or suspect behavior in IoT systems. LSTM RNNs are adopted in order to construct a high-accuracy model capable of detecting suspicious behavior based on a dataset of IoT sensor readings. The model is evaluated using the Intel Labs dataset as a test domain, performing four different tests and using three criteria: F1 score, accuracy, and time. The results demonstrate that the LSTM RNN model we create is capable of detecting abnormal behavior in IoT systems with high accuracy.
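The paper's actual method is an LSTM RNN trained on sensor readings. As a deliberately simplified, dependency-free stand-in for the same underlying idea (learn what "normal" looks like, then flag deviations), the sketch below fits a mean/standard-deviation baseline to simulated normal readings and flags any value more than k standard deviations away. The simulated sensor, the 3-sigma threshold, and the injected spike are illustrative assumptions, not the paper's model or data.

```python
import statistics

def fit_baseline(normal_readings):
    """Learn a 'normal' profile from readings assumed to be non-anomalous."""
    return statistics.fmean(normal_readings), statistics.pstdev(normal_readings)

def is_abnormal(value, mean, stdev, k=3.0):
    """Flag a reading whose residual exceeds k standard deviations."""
    return abs(value - mean) > k * stdev

# Simulated temperature sensor: steady around 21 C with small oscillation.
normal = [21.0 + 0.1 * ((i % 7) - 3) for i in range(100)]
mean, stdev = fit_baseline(normal)

# A stream with one injected spike at position 10.
stream = normal[:10] + [35.0] + normal[10:20]
flags = [is_abnormal(v, mean, stdev) for v in stream]
print(flags.index(True))  # position of the injected spike: 10
```

An LSTM replaces the fixed mean/stdev baseline with a learned sequence model, so it can also catch readings that are individually plausible but wrong given the recent history.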
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSIS (pijans)
This document discusses techniques for identifying fake news using social network analysis. It first reviews literature on existing fake news identification methods that use feature extraction from news content and social context. Deep learning models are then proposed to classify news as real or fake using datasets of news and social network information. The implementation achieves 99% accuracy on binary classification of news. Social network analysis factors like bot accounts, echo chambers, and information spread are discussed as enabling the spread of fake news online.
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSIS (pijans)
The easy access to information on social media networks, together with its exponential growth, has made it difficult to distinguish fake news from real information. Rapid dissemination through sharing has increased its falsification exponentially. Preventing the spread of fake information is also essential for the credibility of social media networks. It is therefore an emerging research task to automatically check information for misstatement through its source, content, or author, and to stop unauthenticated sources from spreading rumours. This paper demonstrates an artificial-intelligence-based approach for identifying fake statements made by social network entities. Variants of deep neural networks are applied to evaluate datasets and test for the presence of fake news. The implementation achieved up to 99% classification accuracy when the dataset was tested for binary (real or fake) labelling over multiple epochs.
Unlocking the Potential of Artificial Intelligence: Machine Learning in Pract... (eswaralaldevadoss)
Machine learning is a subset of artificial intelligence that involves training computers to learn from data and make predictions or decisions based on that data. It involves building algorithms and models that can learn patterns and relationships from data and use that knowledge to make predictions or take actions.
Here are some key concepts that can help beginners understand machine learning:
Data: Machine learning algorithms require data to learn from. This data can come from a variety of sources such as databases, spreadsheets, or sensors. The quality and quantity of data can greatly impact the accuracy and effectiveness of machine learning models.
Training: In machine learning, training involves feeding data into a model and adjusting its parameters until it can accurately predict outcomes. This process involves testing and tweaking the model to improve its accuracy.
Algorithms: There are many different algorithms used in machine learning, each with its own strengths and weaknesses. Common machine learning algorithms include decision trees, random forests, and neural networks.
Supervised vs. Unsupervised Learning: Supervised learning involves training a model on labeled data, where the desired outcome is already known. Unsupervised learning, on the other hand, involves training a model on unlabeled data and allowing it to identify patterns and relationships on its own.
Evaluation: After training a model, it's important to evaluate its accuracy and performance on new data. This involves testing the model on a separate set of data that it hasn't seen before.
Overfitting vs. Underfitting: Overfitting occurs when a model is too complex and fits the training data too closely, leading to poor performance on new data. Underfitting occurs when a model is too simple and fails to capture important patterns in the data.
Applications: Machine learning is used in a wide range of applications, from predicting stock prices to identifying fraudulent transactions. It's important to understand the specific needs and constraints of each application when building machine learning models.
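The concepts listed above (data, training, evaluation) can be made concrete with a minimal, dependency-free sketch: a toy k-nearest-neighbour classifier trained on synthetic labelled points and evaluated on a held-out test set, exactly the "test the model on data it hasn't seen before" step described under Evaluation. The dataset, the k=5 choice, and the 75/25 split are illustrative assumptions, not taken from any of the works listed here.

```python
import random

random.seed(0)

# Data: each point (x, y) is labeled 1 if x + y > 1.0, else 0.
points = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x + y > 1.0 else 0 for x, y in points]

# Hold out a test set the model never sees during training.
split = 150
train_X, train_y = points[:split], labels[:split]
test_X, test_y = points[split:], labels[split:]

def predict(query, k=5):
    """k-nearest-neighbour vote: memorise the training data, poll the k closest."""
    neighbours = sorted(
        range(len(train_X)),
        key=lambda i: (train_X[i][0] - query[0]) ** 2 + (train_X[i][1] - query[1]) ** 2,
    )[:k]
    votes = sum(train_y[i] for i in neighbours)
    return 1 if votes * 2 > k else 0

# Evaluation on unseen data.
correct = sum(predict(q) == y for q, y in zip(test_X, test_y))
accuracy = correct / len(test_X)
print(f"test accuracy: {accuracy:.2f}")
```

Setting k=1 here would tend toward overfitting (memorising noise near the boundary), while a very large k would underfit by averaging over the whole dataset, which illustrates the trade-off described above.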
Overall, machine learning is a powerful tool that can help businesses and organizations make more informed decisions based on data. By understanding the basic concepts and techniques of machine learning, beginners can begin to explore the potential applications and benefits of this exciting field.
Machine learning and artificial intelligence are two of the most rapidly growing and transformative technologies of our time. These technologies are revolutionizing the way businesses operate, improving healthcare outcomes, and transforming the way we live our daily lives. Learn more about it in the PPT below!
Machine Learning: Need of Machine Learning, Its Challenges and Its Applications (Arpana Awasthi)
The BCA Department of JIMS Vasant Kunj-II is one of the best BCA colleges in Delhi NCR. The curriculum is kept well updated, and the subjects include all the latest technologies that are in demand.
The JIMS BCA course teaches Python to second-semester students and Artificial Intelligence Using Python to sixth-semester students.
Here is a short article on the future of machine learning; we hope you will find it useful.
Machine Learning is a field of computer science in which computer systems are able to learn from past experiences, examples, and environments. With the help of various machine learning algorithms, computers gain the ability to make sense of data and produce relevant results.
Machine learning algorithms provide techniques for predicting future outcomes or classifying information from the given input so that appropriate decisions can be taken.
How Does Data Create Economic Value? Foundations For Valuation Models.pdf (Dr. Anke Nestler)
To clarify how data creates economic value, this article discusses the measurement of data value and the process of valuing data's contribution to economic benefits, identifying ways to operationalize data valuation methodologies. The four independent authors, André Gorius, Véronique Blum, Andreas Liebl, and Anke Nestler, are leading European IP and licensing specialists of the International Licensing Executives Society (LESI).
This document discusses 10 AI and machine learning trends that will impact business in 2020:
1. AI is helping combat COVID-19 through technologies like thermal cameras, temperature prediction systems, contactless delivery robots, and healthcare data analysis platforms.
2. PyTorch is becoming more popular than TensorFlow for AI research due to being easier to use, while TensorFlow remains more widely used in businesses.
3. Time series analysis using recurrent neural networks and other deep learning techniques allows for highly accurate forecasting and anomaly detection for business data.
4. Reinforcement learning is developing sophisticated strategies through trial-and-error learning and will be important for applications like goal-oriented chatbots.
5. Advances in biometrics like multimodal
THE INTEREST OF HYBRIDIZING EXPLAINABLE AI WITH RNN TO RESOLVE DDOS ATTACKS: ... (IJNSA Journal)
In this paper, we suggest a new method to address attack detection problems in IoT networks, with a specific emphasis on the user's and network's requirements regarding the application provider, providing a defence against intruders and malicious users. The paper focuses on detecting attacks in IoT networks using recurrent neural networks and an explainable artificial intelligence (XAI) technique named SHAP. XAI refers to a collection of methods and procedures for explaining how AI models make decisions. Although XAI is useful for understanding the motivations underlying AI models, the information revealed by these explanations can be a risk. Machine learning models are vulnerable to attacks such as model inversion, model extraction, and membership inference. Such attacks can target the learning model or the data used to train and build it, depending on the circumstances and parties involved. Hence, when the owner of an AI model wishes to grant only black-box access and does not expose the model's parameters and architecture to third parties, solutions like XAI can notably increase the susceptibility to model extraction attacks.
To reassure users regarding possible violations, the proposed RNN-SHAP method is based on a decentralized topology relying on a set of cooperating independent parties, including the users of the IoT network. Another advantage of the suggested approach is that it capitalizes on the trust relationships that exist in real-life IoT networks to build trusted mechanisms in the network. Compared to other methods across different metrics, the suggested RNN-SHAP shows its efficiency in resolving the DDoS problem.
Unveiling the Power of Machine Learning.docx (greendigital)
Introduction:
In the vast landscape of technological evolution, machine learning (ML) stands as a beacon of innovation, reshaping the way we interact with the digital world. With its roots in artificial intelligence, ML empowers systems to learn and improve from experience without explicit programming. This transformative technology is at the forefront of revolutionizing industries, from healthcare to finance and from manufacturing to entertainment. In this article, we delve into the intricacies of machine learning, exploring its applications, challenges, and the profound impact it has on shaping the future.
The Ultimate Guide to Machine Learning (ML) (RR IT Zone)
Machine learning is a broad term that refers to a variety of techniques by which computers learn to perform tasks, including speech recognition, natural language processing, and computer vision. It is also the concept behind things like Google Search and Facebook's Like button. With machine learning, machines can learn to do things that previously only humans could do. For example, your smartphone can translate languages with a combination of artificial intelligence, big data, and the internet. It can identify faces in photos, recognize text, and analyze other information, all without human intervention. In addition, machine learning is used to train robots, predict customer behavior, and even build virtual reality environments.
Machine Learning: The Powerhouse of AI Explained.pptx (CIO Look Magazine)
Artificial Intelligence (AI) and Machine Learning (ML) are two terms that have revolutionized the technology landscape, becoming integral in various sectors.
This document provides an introduction to machine learning fundamentals. It defines machine learning as giving computers the ability to learn from data rather than being explicitly programmed. The document discusses the differences between artificial intelligence, machine learning, deep learning, and data science. It also covers applications of machine learning, when to use and not use machine learning, and types of machine learning problems and workflows.
Abstract: Detecting fake news with deep learning techniques addresses a major problem, since fake news is used to mislead people. For the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Most of the datasets contain text IDs, tweet IDs, user IDs, and user-based features. To obtain proper results and accuracy, various models such as CNN (convolutional neural network), deep CNN, and LSTM (long short-term memory) are used.
trends of information systems and artificial technology (milkesa13)
This document provides an overview of emerging technologies transforming the information technology industry, as discussed in recent literature. It examines technologies like cloud computing, the internet of things, artificial intelligence, blockchain, big data analytics, and more. For each technology, the document summarizes key points from 5-8 research papers on their characteristics, advantages, and challenges. The goal is to help researchers and practitioners understand these important trends by synthesizing information from multiple sources, rather than reading numerous individual papers. Artificial intelligence is discussed in more depth as an example, outlining how it is used through machine learning and deep learning, and its impact on enhancing security and automating processes within information systems.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCE (ijesajournal)
Automation is a powerful word that applies everywhere; without automation, applications do not get developed. In the semiconductor industry, artificial intelligence plays a vital role in implementing chip-based design through automation. The main advantage of applying machine learning and deep learning techniques is to improve the implementation rate. The main objective of the proposed system is to apply deep learning with a data-driven approach for controlling the system, leading to improvements in design, delay, speed of operation, and cost. Through this system, the huge volume of data generated by the system is also brought under control.
Machine Learning On Big Data: Opportunities And Challenges - Future Research D... (PhD Assistance)
Machine Learning (ML) is rapidly used in a variety of applications. It has risen to prominence in recent years, owing in part to the emergence of big data. When it comes to big data, ML algorithms have never been more promising. Big data allows machine learning algorithms to discover finer-grained patterns and make more timely and precise predictions than ever before; however, it also poses significant challenges to machine learning, such as model scalability and distributed computing.
Learn More: https://bit.ly/2RB1buD
Contact Us:
Website: https://www.phdassistance.com/
UK NO: +44–1143520021
India No: +91–4448137070
WhatsApp No: +91 91769 66446
Email: info@phdassistance.com
MOBILE CLOUD COMPUTING – FUTURE OF NEXT GENERATION COMPUTING (ijistjournal)
The mobile device has become an essential part of human life. Apart from making and receiving calls, a user can access many functions on a mobile device, and wants everything on it for ease of work. Some people use tablets instead of a laptop or desktop. In this paper, insights into Mobile Cloud Computing (MCC) are presented. First, an overview of cloud computing systems is discussed, after which the architecture of MCC is presented. Some applications based on MCC are also discussed, and the paper concludes by exploring the related problems and solutions in MCC.
The document discusses machine-to-machine (M2M) communication and how it differs from the Internet of Things (IoT). M2M allows data exchange between machines, while IoT refers to the potential interconnection of smart objects. M2M provides the underlying connectivity that enables IoT by allowing smart objects to communicate. While M2M focuses on data exchange within closed systems, IoT provides a more comprehensive approach by connecting various M2M devices to address solutions across industries. The document also outlines some benefits of M2M data science such as disaster avoidance, risk minimization, cost optimization, and increased revenues.
Beginning to understand the world of data demands the evolution of procedures and skillsets in tune with the rewarding trends. As the Fortune Business Insight article states, the market for data analytics is estimated to expand by 25% between 2021 and 2030. Data scientists are predicted to deliver the highest possible benefits for industries such as banking, finance, insurance, entertainment, telecommunication, and automobile.
Pace up with the fastest-evolving industries of all time. Make informed decisions in the world of Data Science by mastering the emerging trends in diversified realms of data. Bring in the change with the following Data Science trends set in place in time:
1. Blockchain technology
2. Natural Language Processing
3. Internet of Things
4. Auto Machine Learning
5. Immersive experiences
6. Robotic Process Automation
7. TinyML and Small Data
8. AI-powered Virtual Assistants
9. Graph Analytics
10. Cloud computing
11. Image processing
12. Data Visualization
13. Augmented Analytics
14. Predictive Analytics
15. Scalable Artificial Intelligence
As is evident, there will be more data in the coming years. This is a clear indication of an escalated need for staying upbeat with the proposed data science industry trends for years to follow. Make the most of the opportunity by enrolling with top-ranking data science certifications from globally renowned data credentials providers.
Download your copy & boost your chances at landing your dream Data Science Jobs with USDSI®
The document describes a Driverless ML API that was created to automate machine learning workflows including feature engineering, model validation, tuning, selection, and deployment. The API uses machine learning interpretability techniques to provide visualizations and explanations of models. It aims to help scale data science efforts and enable both expert and junior data scientists to more quickly develop accurate, production-ready models. Key capabilities of the API include automated exploratory data analysis, feature selection and engineering, model selection and hyperparameter tuning using GPUs for faster training, and model interpretability visualizations.
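The automated model selection and hyperparameter tuning described above can be sketched in miniature: sweep a grid of candidate hyperparameters and keep the one with the lowest error on held-out validation data. The task (a noisy linear relation), the k-nearest-neighbour regressor, and the candidate grid are invented for illustration; this is not the Driverless ML API itself, just the shape of the loop such a system automates.

```python
import random

random.seed(1)

# Tiny supervised task: y = 2x + Gaussian noise.
xs = [random.uniform(0.0, 1.0) for _ in range(80)]
ys = [2.0 * x + random.gauss(0.0, 0.05) for x in xs]
train = list(zip(xs[:60], ys[:60]))
valid = list(zip(xs[60:], ys[60:]))

def knn_predict(x, k):
    """k-nearest-neighbour regression: average the k closest training targets."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def validation_mse(k):
    """Mean squared error of the k-NN regressor on the held-out set."""
    return sum((knn_predict(x, k) - y) ** 2 for x, y in valid) / len(valid)

# Automated tuning loop: score every candidate, keep the best.
candidates = [1, 3, 5, 9, 15, 31]
scores = {k: validation_mse(k) for k in candidates}
best_k = min(scores, key=scores.get)
print(best_k, round(scores[best_k], 4))
```

A production AutoML system layers smarter search (Bayesian optimization, early stopping), feature engineering, and cross-validation on top of this same select-by-validation-score loop.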
The upsurge of deep learning for computer vision applications (IJECEIAES)
Artificial intelligence (AI) is also helping a new breed of companies disrupt industries from medical imaging to agriculture. Computers cannot yet replace humans, but they handle the everyday routine of our lives remarkably well. The technology is reshaping big business and has been on the rise in recent years, grounded in the success of deep learning (DL). Cyber-security, automotive, and healthcare are three industries innovating with AI and DL technologies, alongside banking, retail, finance, robotics, and manufacturing. The healthcare industry is one of the earliest adopters of AI and DL. DL is achieving exceptional levels of accuracy, to the point where DL algorithms can outperform humans at classifying images and videos. The major drivers behind the breakthrough of deep neural networks are the availability of huge amounts of training data, powerful compute infrastructure, and advances in academia. DL is heavily employed both in academia to study intelligence and in industry to build intelligent systems that assist humans in varied tasks. DL systems are thereby beginning to surpass not only classical methods but also human benchmarks in numerous tasks such as image classification, action detection, signal processing, and natural language processing.
Unlock the future of AI/ML services with our insights into the 9 key trends shaping 2024. From advanced neural networks to ethical AI practices, stay ahead with cutting-edge innovations. Discover how Mooglelabs is revolutionizing AI/ML services to drive efficiency, enhance customer experiences, and propel businesses into the future.
A Study On Artificial Intelligence Technologies And Its ApplicationsJeff Nelson
This document discusses artificial intelligence (AI) technologies and their applications. It begins by defining AI as the recreation of human intelligence processes by machines. It then describes different types of AI, including weak AI which is designed for specific tasks, and strong AI which exhibits generalized human-level cognition. The document outlines several AI technologies like machine learning, machine vision, and natural language processing. It provides examples of how these technologies are used in applications such as self-driving cars, medical imaging, and digital assistants.
Bringing AI into the Enterprise: A Machine Learning Primermercatoradvisory
New research from Mercator Advisory Group shows how machine learning, a.k.a. AI, has changed consumer behavior and expectations and will evolve to alter all aspects of bank operations. AI’s impact on banking will be broader and faster than the impact of the internet.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, with an emphasis on addressing false positives and resource efficiency.
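For readers unfamiliar with the metrics quoted above, global accuracy and intersection over union (IoU) reduce to simple counts over the predicted and ground-truth masks. A toy illustration on flattened binary masks (the masks below are invented, not the paper's data):

```python
def iou(pred, truth):
    """Intersection over union for two flattened binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def global_accuracy(pred, truth):
    """Fraction of pixels labelled identically in both masks."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(iou(pred, truth))              # 3 shared foreground pixels / 5 in union -> 0.6
print(global_accuracy(pred, truth))  # 6 of 8 pixels agree -> 0.75
```

The gap between the paper's 99.286% global accuracy and its 79.900% mean IoU illustrates why IoU is the stricter metric: background pixels dominate accuracy, while IoU scores only the overlap of the (small) tumor regions.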
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
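The accuracy, precision, and recall figures reported above all derive from a single confusion matrix. A small sketch of those formulas (the counts below are made up for illustration, not the paper's data):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of flagged events, how many were real
    recall    = tp / (tp + fn)   # of real events, how many were flagged
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=92, fp=6, fn=8, tn=94)
print(round(acc, 3), round(prec, 3), round(rec, 3))   # 0.93 0.939 0.92
```

For a safety system like this one, recall matters most: a false negative is a missed aggressive-driving event, while a false positive is only a spurious alert.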
More Related Content
Similar to Automated machine learning: the new data science challenge
This document discusses 10 AI and machine learning trends that will impact business in 2020:
1. AI is helping combat COVID-19 through technologies like thermal cameras, temperature prediction systems, contactless delivery robots, and healthcare data analysis platforms.
2. PyTorch is becoming more popular than TensorFlow for AI research due to being easier to use, while TensorFlow remains more widely used in businesses.
3. Time series analysis using recurrent neural networks and other deep learning techniques allows for highly accurate forecasting and anomaly detection for business data.
4. Reinforcement learning is developing sophisticated strategies through trial-and-error learning and will be important for applications like goal-oriented chatbots.
5. Advances in biometrics like multimodal
THE INTEREST OF HYBRIDIZING EXPLAINABLE AI WITH RNN TO RESOLVE DDOS ATTACKS: ...IJNSA Journal
In this paper, we suggest a new method to address attack detection problems in IoT networks, with a specific emphasis on the user's and network's requirements regarding the application provider, providing a defence against intruders and malicious users. The paper focuses on resolving the problem of detecting attacks in IoT networks using recurrent neural networks and an explainable artificial intelligence (XAI) technique named SHAP. XAI refers to a collection of methods and procedures for explaining how AI models make decisions. Although XAI is useful for understanding the motivations underlying AI models, the information revealed by these explanations may itself be a risk. Machine learning models are vulnerable to attacks such as model inversion, model extraction, and membership inference. Such attacks could target the learning model or the data used to train and build the model, depending on the involved circumstances and parties. Hence, when the owner of an AI model wishes to grant only black-box access and does not expose the model's parameters and architecture to third parties, solutions like XAI can notably increase susceptibility to model extraction attacks.
To guarantee the users regarding possible violations, the proposed RNN-SHAP method is based on a decentralized topology relying on a set of cooperative independent parties including the users of IoT network. Another advantage of the suggested approach is the capitalization on the trust relationships that are part of IoT networks in real life to build trusted mechanisms in the network. Compared to other methods using different metrics, the suggested RNN-SHAP shows its efficiency in resolving the DDoS problem.
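The core idea behind the XAI component, attributing a model's output to individual input features, can be sketched without the SHAP library itself. Below is a crude baseline-substitution attribution in pure Python; the toy "attack score" model, its weights, and the feature names are invented stand-ins for the paper's trained RNN, and this simplification ignores the feature interactions that true Shapley values average over:

```python
def model(x):
    """Toy linear 'attack score': a hypothetical stand-in for the trained RNN."""
    pkt_rate, payload, entropy = x
    return 0.6 * pkt_rate + 0.1 * payload + 0.3 * entropy

def attributions(model, x, baseline):
    """Score each feature by how much the output drops when it is
    replaced with its value in a benign baseline flow."""
    out = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline[i]          # remove feature i's information
        out.append(model(x) - model(masked))
    return out

x = [0.9, 0.2, 0.8]          # a suspicious flow (invented feature values)
baseline = [0.1, 0.1, 0.1]   # a typical benign flow
print([round(a, 2) for a in attributions(model, x, baseline)])
# -> [0.48, 0.01, 0.21]: packet rate drives this detection
```

This also illustrates the paper's warning: an attacker who can query such attributions learns the model's internal weighting, which is exactly the leakage that enables model extraction.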
Unveiling the Power of Machine Learning.docxgreendigital
Introduction:
In the vast landscape of technological evolution, machine learning (ML) stands as a beacon of innovation, reshaping the way we interact with the digital world. With its roots in artificial intelligence, ML empowers systems to learn and improve from experience without explicit programming. This transformative technology is at the forefront of revolutionizing industries, from healthcare to finance and from manufacturing to entertainment. In this article, we delve into the intricacies of machine learning, exploring its applications, challenges, and the profound impact it has on shaping the future.
The Ultimate Guide to Machine Learning (ML)RR IT Zone
Machine learning is a broad term that refers to a variety of techniques by which computers learn to perform tasks such as speech recognition, natural language processing, and computer vision. It is also the concept behind things like Google Search and Facebook's Like button. With machine learning, machines can learn to do things that once only humans could do. For example, your smartphone can translate languages with a combination of artificial intelligence, big data, and the internet. It can identify faces in photos, recognize text, and analyze other information, all without human intervention. In addition, machine learning is used to train robots, predict customer behavior, and even build virtual reality environments.
Machine Learning The Powerhouse of AI Explained.pptxCIO Look Magazine
Artificial Intelligence (AI) and Machine Learning (ML) are two terms that have revolutionized the technology landscape, becoming integral in various sectors.
This document provides an introduction to machine learning fundamentals. It defines machine learning as giving computers the ability to learn from data rather than being explicitly programmed. The document discusses the differences between artificial intelligence, machine learning, deep learning, and data science. It also covers applications of machine learning, when to use and not use machine learning, and types of machine learning problems and workflows.
Abstract: Fake news, a major vehicle for misleading people, can be detected using deep learning techniques. For the experiments, several types of datasets, models, and methodologies have been used to detect fake news. Most of the datasets contain text IDs, tweet IDs, user-based IDs, and user-based features. To obtain proper results and accuracy, various models such as CNN (convolutional neural network), deep CNN, and LSTM (long short-term memory) are used.
trends of information systems and artificial technologymilkesa13
This document provides an overview of emerging technologies transforming the information technology industry, as discussed in recent literature. It examines technologies like cloud computing, the internet of things, artificial intelligence, blockchain, big data analytics, and more. For each technology, the document summarizes key points from 5-8 research papers on their characteristics, advantages, and challenges. The goal is to help researchers and practitioners understand these important trends by synthesizing information from multiple sources, rather than reading numerous individual papers. Artificial intelligence is discussed in more depth as an example, outlining how it is used through machine learning and deep learning, and its impact on enhancing security and automating processes within information systems.
ANALYSIS OF SYSTEM ON CHIP DESIGN USING ARTIFICIAL INTELLIGENCEijesajournal
Automation is a powerful word that applies everywhere; without automation, applications would not get developed. In the semiconductor industry, artificial intelligence plays a vital role in implementing chip-based design through automation. The main advantage of applying machine learning and deep learning techniques is to improve the implementation rate. The main objective of the proposed system is to apply deep learning with a data-driven approach for controlling the system, which leads to improvements in design, delay, speed of operation, and cost. Through this system, the huge volume of data generated by the system is also brought under control.
Machine Learning On Big Data: Opportunities And Challenges- Future Research D...PhD Assistance
Machine Learning (ML) is rapidly used in a variety of applications. It has risen to prominence in recent years, owing in part to the emergence of big data. When it comes to big data, ML algorithms have never been more promising. Big data allows machine learning algorithms to discover finer-grained patterns and make more timely and precise predictions than ever before; however, it also poses significant challenges to machine learning, such as model scalability and distributed computing.
MOBILE CLOUD COMPUTING –FUTURE OF NEXT GENERATION COMPUTINGijistjournal
Mobile devices have become an essential part of human life. Apart from making and receiving calls, users can access many functions on their mobile devices; they want everything on their devices for ease of work, and some people use tablets instead of a laptop or desktop. In this paper, insights into mobile cloud computing (MCC) are presented. First, an overview of cloud computing systems is given; then the architecture of MCC is presented. Some applications based on MCC are also discussed, and the paper concludes by exploring open problems in MCC and their solutions.
The document discusses machine-to-machine (M2M) communication and how it differs from the Internet of Things (IoT). M2M allows data exchange between machines, while IoT refers to the potential interconnection of smart objects. M2M provides the underlying connectivity that enables IoT by allowing smart objects to communicate. While M2M focuses on data exchange within closed systems, IoT provides a more comprehensive approach by connecting various M2M devices to address solutions across industries. The document also outlines some benefits of M2M data science such as disaster avoidance, risk minimization, cost optimization, and increased revenues.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par...IJECEIAES
Wide application of proportional-integral-differential (PID)-regulator in industry requires constant improvement of methods of its parameters adjustment. The paper deals with the issues of optimization of PID-regulator parameters with the use of neural network technology methods. A methodology for choosing the architecture (structure) of neural network optimizer is proposed, which consists in determining the number of layers, the number of neurons in each layer, as well as the form and type of activation function. Algorithms of neural network training based on the application of the method of minimizing the mismatch between the regulated value and the target value are developed. The method of back propagation of gradients is proposed to select the optimal training rate of neurons of the neural network. The neural network optimizer, which is a superstructure of the linear PID controller, allows increasing the regulation accuracy from 0.23 to 0.09, thus reducing the power consumption from 65% to 53%. The results of the conducted experiments allow us to conclude that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
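The tuning problem the neural superstructure solves, adjusting the PID gains to shrink the mismatch between the regulated value and the target, can be sketched with a toy first-order plant and a crude coordinate-descent search standing in for the network. Everything here (the plant model, step sizes, and gains) is invented for illustration, not taken from the paper:

```python
def simulate(kp, ki, kd, target=1.0, steps=50):
    """Total squared tracking error of a PID loop on a toy first-order plant."""
    y, integral, prev_err = 0.0, 0.0, target
    total_sq_err = 0.0
    for _ in range(steps):
        err = target - y
        integral += err
        u = kp * err + ki * integral + kd * (err - prev_err)  # PID control law
        prev_err = err
        y += 0.1 * (u - y)            # toy plant dynamics
        total_sq_err += err * err
    return total_sq_err

# crude search standing in for the neural optimizer: nudge each gain
# and keep any change that lowers the total error
gains = [0.5, 0.0, 0.0]               # [kp, ki, kd]
for _ in range(30):
    for i in range(3):
        for delta in (0.05, -0.05):
            trial = list(gains)
            trial[i] += delta
            if simulate(*trial) < simulate(*gains):
                gains = trial
```

The paper's gradient back-propagation plays the role of this hill-climbing step, but computes the update direction analytically rather than by trial simulation.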
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
A review on features and methods of potential fishing zoneIJECEIAES
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods used to classify the data. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identify potential fishing zones, whose prediction relies significantly on the effectiveness of those algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one earlier result, SVM classified fisheries test data with 97.6% accuracy compared to naïve Bayes's 94.2%. Considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
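The model comparison described above, training classifiers on oceanographic features and reporting held-out accuracy, follows a standard recipe. Below is a self-contained sketch with a from-scratch Gaussian naïve Bayes on invented SST/SSH-like data; the feature distributions, class structure, and split are illustrative only, not the review's datasets:

```python
import math
import random

def fit_nb(X, y):
    """Per-class (mean, variance) of each feature for a Gaussian naive Bayes."""
    params = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        params[c] = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9
            params[c].append((mu, var))
    return params

def predict_nb(params, x):
    """Class with the highest Gaussian log-likelihood (uniform priors assumed)."""
    def loglik(c):
        return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
                   for xi, (mu, var) in zip(x, params[c]))
    return max(params, key=loglik)

random.seed(1)
# invented features per sample: [sea surface temperature, SSH anomaly]
X = ([[random.gauss(28, 1), random.gauss(0.3, 0.1)] for _ in range(100)]     # fishing zone
     + [[random.gauss(25, 1), random.gauss(0.0, 0.1)] for _ in range(100)])  # not
y = [1] * 100 + [0] * 100

pairs = list(zip(X, y))
random.shuffle(pairs)
train, test = pairs[:150], pairs[150:]
params = fit_nb([x for x, _ in train], [c for _, c in train])
acc = sum(predict_nb(params, x) == c for x, c in test) / len(test)
print(round(acc, 2))
```

Swapping in an SVM or ANN changes only the fit/predict pair; the train/test protocol that produces the 97.6% vs 94.2% style of comparison stays the same.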
Electrical signal interference minimization using appropriate core material f...IJECEIAES
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, the goal being to optimize device density. Scientists are reducing interconnect delays to improve circuit performance. This led to three-dimensional integrated circuit (3D IC) concepts, which stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical interference is a big worry with 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease electrical coupling. This study illustrates a novel noise-coupling reduction method using several interference models. A 22% drop in coupling from the signal-carrying TSVs to the victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change that is evident in unusual flooding, droughts, and global warming. Over the past ten years, women's involvement in society has grown dramatically, and they have succeeded in playing a noticeable role in reducing climate change. A bibliometric analysis of data from the last ten years has been carried out to examine the role of women in addressing climate change. The analysis's findings are discussed in relation to the sustainable development goals (SDGs), particularly SDG 7 and SDG 13. The results consider contributions made by women in various sectors while taking geographic dispersion into account. The bibliometric analysis delves into topics including women's leadership in environmental groups, their involvement in policymaking, their contributions to sustainable development projects, and the influence of gender diversity on attempts to mitigate climate change. This study's results highlight how women have influenced policies and actions related to climate change, point out areas of research deficiency, and offer recommendations on how to increase the role of women in addressing climate change and achieving sustainability. To achieve more successful results, this initiative aims to highlight the significance of gender equality and encourage inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
A review on techniques and modelling methodologies used for checking electrom...
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 4, August 2022, pp. 4243~4252
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i4.pp4243-4252
Journal homepage: http://ijece.iaescore.com
Automated machine learning: the new data science challenge
Ilham Slimani1, Nadia Slimani2, Said Achchab3, Mohammed Saber1, Ilhame El Farissi1, Nawal Sbiti2, Mustapha Amghar2
1 SmartICT Laboratory, ENSAO, Mohammed First University, Oujda, Morocco
2 Computer Systems and Productivity Team, EMI, Mohammed V University, Rabat, Morocco
3 ADMIR Laboratory, ENSIAS, Mohammed V University, Rabat, Morocco
Article Info ABSTRACT
Article history:
Received Jun 19, 2021
Revised Mar 23, 2022
Accepted Apr 15, 2022
The world is changing rapidly and is increasingly turning to digitalization. Data science is what most technology now evolves around, and data is definitely the future of everything. For industries, adopting a "data science approach" is no longer an option; it has become an obligation, not merely to survive but to enhance their business. This paper offers a roadmap for anyone interested in this research field or getting started with machine learning, enabling the reader to easily grasp the key concepts behind it. It examines the benefits of automated machine learning systems, starting by defining machine learning vocabulary and basic concepts. It then explains, concretely, how to build a machine learning model, highlighting the challenges related to data and algorithms. Finally, it summarizes two studies applying machine learning in two different fields, namely transportation for road traffic forecasting and supply chain management for demand prediction, where the predictive performance of various models is compared based on different metrics.
Keywords:
Artificial intelligence
Artificial neural networks
Automated machine learning
Convolutional neural network
Data science
Long short term memory
Machine learning
This is an open access article under the CC BY-SA license.
Corresponding Author:
Ilham Slimani
SmartICT Laboratory, Mohammed First University
Oujda, Morocco
Email: slimani.ilham@gmail.com
1. INTRODUCTION
We have all probably heard the expression "data is the new oil"; well, it turns out that data is now worth far more than oil. It is true that both data and oil generate value; however, oil is "used up", while data is not: it can be renewed and reused in many different ways while increasing in value [1]. Certainly, the future is already here, and big data is everywhere, since we are living in a constantly changing world that creates a huge amount of data every single day. The term "data" covers everything from words and ideas, to sounds, pictures, or videos, to personal data (name, age, gender, height) and, of course, to anything that can be collected from industrial processes or internet of things (IoT) sensors [2]. In other words, data is anything operated on by computers and convertible to binary numbers (0s and 1s).
Data science is definitely the future of everything. It is an interdisciplinary field based on scientific methods, algorithms, and processes that aims to extract knowledge and value from data in both structured and unstructured forms, analogous to data mining. In order to gain a competitive advantage, companies should start leveraging data and using it for profit maximization. Nevertheless, data is not an instantly valuable resource; it must be collected, cleaned, and improved before it can be analyzed, and this process is accomplished using artificial intelligence (AI) techniques.
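The collect-clean-analyze process described above can be sketched in a few lines of Python; the record fields and cleaning rules here are purely illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of the "collect, clean, improve, analyze" pipeline.
raw_records = [
    {"age": "34", "gender": "F", "height_cm": "165"},
    {"age": "n/a", "gender": "M", "height_cm": "180"},   # unusable age
    {"age": "52", "gender": "M", "height_cm": "178"},
    {"age": "41", "gender": "F", "height_cm": ""},       # unusable height
]

def clean(records):
    """Drop records with non-numeric fields; convert the rest to numbers."""
    cleaned = []
    for r in records:
        if not r["age"].isdigit() or not r["height_cm"].isdigit():
            continue
        cleaned.append({"age": int(r["age"]), "gender": r["gender"],
                        "height_cm": int(r["height_cm"])})
    return cleaned

data = clean(raw_records)
mean_age = sum(r["age"] for r in data) / len(data)
print(len(data), mean_age)  # 2 usable records, mean age 43.0
```

Only after this kind of cleaning does the data become an analyzable, valuable resource.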
Consider a typical business problem: a bank with three million customers introduces a new card. The bank's marketing team runs a campaign, offering it to thousands of existing customers. Unfortunately, only 3% of them sign up, far fewer than expected. The customers are therefore divided into two groups: i) the first group, the 3% who are delighted to use the new card, perhaps for online shopping; and ii) the second group, the 97% who did not want the new credit card, perhaps because they prefer to shop locally and pay cash.
With the purpose of profit maximization, the main question for the bank is how to target more people from the first group and fewer from the second in the next marketing campaign. The answer is in the data! Usually, it takes months to get such a project off the ground and deploy a predictive targeting model. However, it goes faster with the use of AI techniques, where data resides in a database and the next marketing action is planned based on information from the last campaign: historical data with a detailed record of all activities on customers' bank account transactions. Consequently, separating customers who are interested in the new product from those who are not becomes an easy task.
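As a minimal, hypothetical illustration of "the answer is in the data", the sketch below ranks customer segments by the sign-up rate observed in the last campaign. The segmentation rule and figures are invented for the example; a real predictive targeting model would use far richer transaction features and a trained ML classifier:

```python
from collections import defaultdict

def segment(customer):
    # Toy segmentation rule: bucket customers by online-spend level.
    return "online" if customer["online_spend"] > 100 else "local"

def signup_rate_by_segment(history):
    """Compute sign-up rate per segment from past campaign outcomes."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [signups, total]
    for customer, signed_up in history:
        seg = segment(customer)
        counts[seg][0] += int(signed_up)
        counts[seg][1] += 1
    return {seg: s / n for seg, (s, n) in counts.items()}

# Historical campaign data: (customer record, did they sign up?)
history = [
    ({"online_spend": 250}, True),
    ({"online_spend": 180}, True),
    ({"online_spend": 300}, False),
    ({"online_spend": 20}, False),
    ({"online_spend": 35}, False),
    ({"online_spend": 10}, False),
]

rates = signup_rate_by_segment(history)
best_segment = max(rates, key=rates.get)
print(best_segment, rates[best_segment])  # target "online" customers next
```

The next campaign would then concentrate on the segment with the highest historical response rate, exactly the first-group/second-group separation described above.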
The present paper offers a roadmap for anyone interested in this research field or getting started with machine learning, enabling the reader to easily grasp the key concepts behind it. The paper is structured as follows. As an introduction, the first part highlights the importance and power of data in a global digitalization trend. The second part is dedicated to the fundamentals of machine learning (ML), starting with basic concepts such as AI, automation and automated ML [3], classification and regression, and deep learning (DL). Afterwards, the fourth section explains in detail the "how to" of building an ML model. This is followed by ML applications for forecasting in different fields, namely transportation for road traffic forecasting [4] and supply chain management (SCM) [5] for demand prediction [6], using real datasets from Morocco. In those studies, the predictive performance of various models, including autoregressive integrated moving average (ARIMA), multilayer perceptron (MLP), support vector regression (SVR or SMOReg), long short-term memory (LSTM), and convolutional neural network (CNN), is compared based on different metrics. Finally, the conclusion and perspectives are presented.
2. THE FUNDAMENTALS OF MACHINE LEARNING: LITERATURE REVIEW
2.1. AI vs ML vs DL: the biggest confusion
AI has become the latest "buzzword" in industry today. As displayed in Figure 1, it has applications in many subdomains such as robotics, planning, speech or image recognition, and cyber security. However, the terms AI, ML [7], and DL [8] are often confused: i) AI enables computers to perform tasks that normally require human intelligence by allowing machines to mimic human behavior; ii) ML is a sub-field of AI that uses statistical learning algorithms, enabling machines to learn and improve from experience on their own [9]; iii) DL is a subset of ML, defined as a multilayered neural network architecture containing a large number of parameters and layers, e.g., CNN or recurrent neural network (RNN). DL processes information in a way similar to how the human brain does, and it is used in various industries such as transportation, finance, and advertising. Most DL models are based on neural network structures, which is why they are often denoted deep neural networks (DNN).
Figure 1. AI vs ML vs DL, AI applications
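To make "learning from experience" concrete, the following toy sketch (not taken from the paper's case studies) trains a perceptron, the simplest neural-network unit, to reproduce the logical AND function from labelled examples rather than from hand-written rules:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias from (input, target) pairs via error correction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labelled examples of the AND function: output 1 only for input (1, 1).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

Nothing in the code states the AND rule explicitly; the behavior emerges from repeated exposure to the data, which is the essential difference between ML and conventional programming.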
2.2. Difference between automation, ML and automated ML concepts
Int J Elec & Comp Eng ISSN: 2088-8708
Automated machine learning: the new data science challenge (Ilham Slimani)
The terms “automation” and “machine learning” denote two very different concepts that complement each other perfectly but are often confused and used interchangeably. In short, automation is about “doing” without any further thought or guidance, while ML is all about “thinking” by mimicking human
intelligence. The concept of automation has been around since ancient times, and its practice was focused on controlling machines and devices. More precisely, it is the idea of using a machine to do repetitive tasks without human involvement in order to speed up workflows. ML, in contrast, is a process that enables a system to make itself smarter over time by taking advantage of the data and experiences it accumulates. There are different levels of automation in business: from desktop applications and manual intervention, to robotic process automation with self-service, to AI and ML based on data-driven processes. A related trend gaining momentum is “automated ML” [10]. Essentially, this concept offers ML experts tools to apply ML to ML itself in order to automate repetitive ML-related tasks such as the choice of a convenient dataset, data preparation, or even feature selection [11].
2.3. Traditional vs ML programming approach
If the problem is complex and you opt for the traditional approach as a resolution method, you might end up with a long list of rules that are hard to maintain. As illustrated in the following Figure 2, after studying the problem and writing the appropriate program, the proposed solution is evaluated in order to launch it if it works, or to analyze and correct the errors if it does not. This approach is usually not very accurate and can be hard to implement. The ML programming approach, on the other hand, is usually the best solution for complex problems and is mostly based on data. Indeed, after studying the problem, the resolution process starts from an initial database where data is treated and pre-processed before it can be loaded into ML algorithms for feature engineering; training and test phases then follow, with the aim of reaching the best classification or predictive performance, iterating if needed. To get started with ML, one should be familiar with ML vocabulary and, of course, with related disciplines: mathematics, programming, databases, and ML algorithms and tools.
Figure 2. Traditional vs ML approach [7]
2.4. Taxonomy of ML systems
Based on the amount of data and the type of supervision received during the training process, ML systems can be divided into four main categories: i) supervised learning: the problem can be either classification (logistic regression, k-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes) or regression (decision tree (DT), linear regression, random forest (RF), support vector regression (SVR) [12]). It is used with labeled data, where the mapping from input to output is learned from the training set; the proposed model is then used to predict the response on a new dataset called the test set; ii) unsupervised learning: this type of learning is performed when data is not labeled. Unlike supervised learning, the algorithm is left on its own to group data by finding differences or resemblances in the input patterns. It is generally used for clustering problems (k-means, mean-shift, apriori) or dimensionality reduction problems (principal component analysis (PCA), feature selection, linear discriminant analysis (LDA)); iii) semisupervised learning: with a mixture of a few labeled and many unlabeled training examples, semisupervised algorithms combine supervised and unsupervised techniques, such as heuristic approaches and generative models; and iv) reinforcement learning: an agent learns the best strategy by continuously interacting with its environment in a trial-and-error manner. The agent receives feedback on its earlier actions and experiences and is rewarded for correct performance or punished for incorrect actions.
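As a concrete illustration of the supervised setting described above, the following is a minimal plain-Python sketch of a 1-nearest-neighbour classifier on a hypothetical toy dataset: it learns the mapping from labeled training inputs to outputs and predicts the label of a new test point.

```python
import math

def knn_predict(train_x, train_y, query, k=1):
    """Predict the label of `query` from the k closest labeled training points."""
    # Sort labeled training points by Euclidean distance to the query.
    by_dist = sorted(zip(train_x, train_y),
                     key=lambda p: math.dist(p[0], query))
    # Majority vote among the k nearest labels.
    labels = [y for _, y in by_dist[:k]]
    return max(set(labels), key=labels.count)

# Hypothetical toy labeled dataset: two well-separated clusters.
X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
y = ["A", "A", "B", "B"]
print(knn_predict(X, y, (0.2, 0.1)))  # query close to the "A" cluster
```

In an unsupervised setting, by contrast, the labels `y` would not exist, and the algorithm would have to discover the two clusters itself.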
ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 12, No. 4, August 2022: 4243-4252
3. BUILDING A MACHINE LEARNING MODEL
3.1. Challenges and limitations of machine learning
In general, the aim of machine learning is to select the best algorithm and train it on an adequate dataset. It follows that the relevance of the algorithm and the training database play a crucial role in the accuracy of the results. In this section, we focus on the different challenges and limitations that can hinder the performance of the chosen model, namely: bad data (insufficient training data, irrelevant features) and bad algorithms (topology, overfitting or underfitting of the training dataset).
3.1.1. Insufficient training data
The first hurdle faced by the machine learning user is the availability, completeness, and relevance of the learning and testing database. In the widely cited article [13], Google researchers Halevy, Norvig, and Pereira argue that for complex problems such as linguistic expressions and learning from texts, the performance of the machine learning process depends primarily on the data regardless of the algorithm used, so “we may want to reconsider the tradeoff between spending time and money on algorithm development versus spending it on corpus development”. The size of the training database is thus very important for the model’s accuracy: with enough data, good results can be obtained regardless of the model used [14].
3.1.2. Nonrepresentative and poor quality training data
A model trained on a nonrepresentative training set is unlikely to make precise forecasts: it is harder for the system to detect the underlying patterns, so the system is less likely to perform well. Chen et al. [15] deduce from their experiments a strong correlation between the quality of the datasets and the performance of the machine learning system. Their results also demonstrate that a rigorous evaluation of data quality is necessary for guiding the quality improvement of machine learning.
3.1.3. Irrelevant features
The success of a machine learning project depends on the detection of the key characteristics that drive the output. It consists in detecting the right set of features to train on; this process is called feature engineering and involves [16]: i) feature selection: selecting the most useful features to train the model by analysing the historical data and ii) feature extraction: combining existing features to produce more useful ones.
3.1.4. Overfitting/underfitting the training data
Overfitting means that the model performs well on the training data [17] but does not generalize well, as shown in Figure 3. It happens when the model is too complex relative to the amount and noisiness of the training data. The possible solutions are [18]: i) to simplify the model by reducing the number of parameters (e.g., a linear model rather than a high-degree polynomial model), by reducing the number of attributes in the training data, or by constraining the model; ii) to gather more training data; and iii) to clean the training data (e.g., fix data errors and remove outliers). Constraining a model to make it simpler and reduce the risk of overfitting is called regularization.
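To make the regularization idea concrete, here is an illustrative sketch (not taken from the paper) of the standard closed-form ridge solution for the simplest possible model, y ≈ w·x: the L2 penalty term λ·w² shrinks the learned coefficient toward zero, constraining the model.

```python
def ridge_slope(xs, ys, lam):
    """Closed-form ridge solution for a one-parameter model y ~ w * x:
    minimizing sum((y - w*x)^2) + lam * w^2 gives w = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x with noise
print(ridge_slope(xs, ys, lam=0.0))   # unregularized least-squares fit
print(ridge_slope(xs, ys, lam=10.0))  # a larger penalty shrinks the slope
```

Increasing λ trades a little extra training error for a simpler, more constrained model, which is exactly the overfitting remedy described above.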
Figure 3. Illustration of overfitting and underfitting in ML [7]
Besides, underfitting is the opposite of overfitting: it happens when the model is too simple to learn the underlying structure of the data. The main options to fix this issue are: i) selecting a more powerful model with more parameters, ii) revisiting the analysis of the historical data to incorporate more relevant features into the model, and iii) reducing the constraints on the model.
3.2. End to end machine learning operating mode
This section is dedicated to explaining in detail how to design and implement a machine learning model from end to end: starting from understanding the business problem and identifying related data, then preparing it by determining the training and test sets, and finally evaluating the model’s performance. The main steps to follow are listed below [19], [20] and are shown in Figure 4.
Figure 4. Steps to build a ML model
3.2.1. Prepare the data
The first step of implementing a machine learning model is preparing the data. This phase consists of framing the problem by doing requirement analysis, then characterizing and developing the problem being addressed (objectives, inputs and outputs). Based on this analysis, sources of information are identified, cleaned, and set up, then filled with the features’ values.
3.2.2. Handling missing data
There are several techniques to manage missing data in machine learning algorithms. The obvious method is to ignore instances with unknown feature values [21]. Others are to fill in the missing feature with the most common feature value, or with the mean value computed from the available cases. Another method is to treat missing feature values as special values; these would be classified as outliers at a later stage.
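The mean-imputation technique mentioned above can be sketched in a few lines of plain Python (the column and helper name are hypothetical; in practice a library such as pandas or scikit-learn would be used):

```python
def impute_mean(column):
    """Replace missing values (None) in a feature column
    with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

print(impute_mean([4.0, None, 6.0, 8.0]))  # the missing value becomes the mean, 6.0
```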
3.2.3. Discretization
The discretization process refers to the conversion of continuous attributes, variables or features into discretized ones. It aims to substantially reduce the number of possible values of the continuous feature, since a large number of possible feature values leads to a slow and ineffective learning process. However, the problem of
choosing the interval borders for the discretization of a numerical value range remains an open problem in
numerical feature handling [22].
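A minimal sketch of the simplest discretization scheme, equal-width binning, illustrates the idea (the function name and toy values are hypothetical):

```python
def discretize(values, n_bins):
    """Equal-width binning: map each continuous value to a bin index
    in [0, n_bins - 1]. The interval borders are chosen naively here;
    choosing good borders is, as noted above, an open problem."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

print(discretize([0.1, 0.4, 0.55, 0.9], n_bins=2))  # two bins: low vs high
```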
3.2.4. Feature selection
Feature selection is the process of identifying and removing possibly irrelevant and redundant features, getting rid of noise in the data. Generally, features are characterized as relevant if they have an influence on the output and their role cannot be assumed by the remaining features. This is ensured by choosing relevant features based on the type of problem studied.
3.2.5. Normalization
This step consists in a “scaling down” transformation of the features. Within a feature there is often a large difference between the maximum and minimum values. When normalization is performed, the value magnitudes are scaled to appreciably lower values. The two most common methods for this purpose are min-max normalization and z-score normalization.
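The two methods just named can be sketched directly (plain Python, hypothetical toy column):

```python
from statistics import mean, stdev

def min_max(values):
    """Min-max normalization: rescale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score(values):
    """Z-score normalization: zero mean, unit (sample) standard deviation."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

print(min_max([10.0, 20.0, 30.0]))  # [0.0, 0.5, 1.0]
print(z_score([10.0, 20.0, 30.0]))  # [-1.0, 0.0, 1.0]
```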
3.2.6. Select and train the model
To ensure proper functioning, the model needs input data and the desired output in order to compare the latter with the obtained output and calculate the total model error. To reduce the model’s error, a learning phase is adopted: during this phase, the model is adjusted in order to determine the appropriate values of the connection weights. Performance measurement indicators are then selected, with which to measure the accuracy of the results and eventually compare the results provided by the designed system with other models.
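The adjust-to-reduce-error loop described above can be illustrated with the smallest possible example, a one-parameter linear model trained by gradient descent on the mean squared error (a generic sketch on hypothetical toy data, not the paper's models):

```python
def train_linear(xs, ys, lr=0.01, epochs=200):
    """Fit y ~ w * x by gradient descent on the mean squared error (MSE):
    each epoch adjusts the weight w in the direction that reduces the error."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def mse(w, xs, ys):
    """Total model error: mean squared difference between desired and obtained output."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true relation: y = 2x
w = train_linear(xs, ys)
print(w, mse(w, xs, ys))  # w converges near 2, error near 0
```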
4. MACHINE LEARNING APPLICATIONS: TRANSPORTATION AND SUPPLY CHAIN
MANAGEMENT
4.1. Transportation: Road traffic prediction
4.1.1. Research method
With the rapid development of cities, and a lack of public transport with low availability and long waiting times, households encouraged by credit facilities own at least one car. That is why every country in the world is experiencing a rapid development of motorized transport. However, the development of road infrastructure has not kept pace, which leads to numerous bottlenecks [23]. This phenomenon has a direct impact on the quality of life, especially in urban areas, which are characterized by almost permanent traffic jams and high levels of air pollution [24].
Prediction is definitely a key component of road traffic management. Considering that, this study
tackles the use of various methods (parametric and nonparametric [25]) to forecast traffic based on a real Moroccan dataset of a toll station with high traffic volume. The approach consists in comparing, according to predefined criteria, the predictive performances of three different methods, namely: i) the neural network structure MLP [4] as a nonparametric model; ii) the mathematical modeling method seasonal ARIMA (SARIMA) [26] as a parametric model; and iii) SMOreg, inspired from the SVM algorithm [12], as a nonparametric model.
4.1.2. Results and discussion
The dataset contains 30792 recordings of hourly traffic flow, covering 42 months. It is separated into two parts: the daily traffic of three years (i.e., 85% of the data) for training, and the daily traffic of the following six months (i.e., the remaining 15%) for testing. Several network topologies and combinations were tested, and the best results were obtained with a neural network with the following characteristics:
a) An input layer fed with 18 values: i) 8 calendar indicators: working day, weekend, national holiday, religious holiday, school holiday, Ramadan, strike, and chronological order of the day in the holidays; and ii) 10 previous daily traffic flows: the flow for a given day d is predicted using the historical flows of days d-1, d-2, d-3, d-4, d-5, d-6, d-7, d-14, d-21, and d-365.
b) The output represents the forecasted traffic flow for day d.
c) A multi-layer perceptron (MLP) with three hidden layers: the first hidden layer is composed of five neurons, the second of eight neurons, and the third of two neurons. The transfer function is the sigmoid, with backpropagation as the learning algorithm and a bias neuron.
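The historical-flow part of the input vector described in a) can be sketched as follows (the helper name and toy series are hypothetical; the lag offsets are those of the study):

```python
# Lags used in the study: the previous week plus weekly, 3-week and yearly history.
LAGS = [1, 2, 3, 4, 5, 6, 7, 14, 21, 365]

def lag_features(daily_flow, d):
    """Return the 10 historical-flow inputs for day index d,
    i.e. the flows of days d-1 ... d-7, d-14, d-21 and d-365.
    (The 8 calendar flags of the study would be appended separately.)"""
    return [daily_flow[d - lag] for lag in LAGS]

# Hypothetical toy series: flow on day i is simply i, to make the lags visible.
flow = list(range(400))
print(lag_features(flow, 365))
```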
The best MLP result is obtained with a total mean square error (MSE) of 0.00927 on the training set and 0.01321 on the test set. The MLP recorded the best forecasting performance, with a relative error of 0.57%, as illustrated in the following Tables 1 and 2. The obtained results, presented in Table 2, prove that the neural
network gives a satisfactory forecast accuracy. The comparison of the observed and modelled traffic flows in Figure 5 shows that over 80% of the forecasts were made with less than ±5.0% error. A closer look at the significantly different forecasts showed that the observed traffic peaks with worse forecast performance correspond to days when other exogenous factors were not introduced as features: mainly bad weather and pavement maintenance operations.
Table 1. Daily traffic forecasting with MLP 5-8-2
Day         Actual traffic  Forecasted traffic  Difference
08/01/2018  23147           22902               245
17/01/2018  27739           28176               -437
28/02/2018  27471           28347               -876
24/05/2018  25080           24587               493
Table 2. Absolute error comparison on June 2018
Model           Total traffic  Absolute error  Relative error
Actual traffic  827913         -               -
MLP             832837         4924            0.57%
SARIMA          841929         14016           1.69%
SMOreg          807560         20353           2.45%
Figure 5. Data visualization per day
4.2. SCM: Demand forecasting
4.2.1. Research method
Prediction is a process that uses historical data to forecast future trends. It is performed using
various time series models. Demand forecasting is a key component to improve the supply chain’s (SC)
performance, since having a clear vision of future demand gives each SC’s member the opportunity to
optimize its performance by minimizing costs, related to inventory, manufacturing, transportation or
distribution; and maximizing the profit. In this study [6], different statistical and deep learning models are compared to determine which one provides more accurate forecasts, namely: i) autoregressive integrated moving average (ARIMA) as a statistical model; ii) multilayer perceptron (MLP) as a feedforward neural network; iii) long short-term memory (LSTM) [27] as a recurrent neural network capable of learning long-term dependencies, whose architecture is composed of a collection of subnets called memory blocks, where the memory cell stores the state, the input gate controls what to learn, the forget gate controls what to forget, and the output gate controls how much of the state is exposed; and iv) convolutional neural networks (CNN or ConvNet) [27]: like the MLP structure, a CNN has three types of layers: the input layer, the output layer, and the hidden layers, which are categorized into feature learning layers (convolution, pooling, and rectified linear) and classification layers (fully connected and normalization layers). CNNs perform well in many fields such as image classification and segmentation, face recognition, object detection, traffic sign recognition, and speech processing.
4.2.2. Results and discussion
In the proposed neural network, the demand of a given day d is predicted using the demand quantities of days d-6, d-12, and d-18. Therefore, we model the problem as a three time-step prediction with one input per step, corresponding to the quantity of the day, instead of considering that the data is
composed of three inputs as we did in the MLP study. The LSTM network is composed of 50 neurons in the hidden layer and 1 neuron in the output layer. The root-mean-square error (RMSE) loss and the efficient Adam version of stochastic gradient descent are used. This yielded an RMSE of 0.138 after normalization and of 457.958 before normalization.
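The three time-step framing described above can be sketched as a simple windowing helper (the function name and toy series are hypothetical; the lags d-18, d-12, d-6 are those of the study):

```python
def make_windows(series, lags=(18, 12, 6)):
    """Build (input, target) pairs for the three-time-step formulation:
    the demand of day d is predicted from the demands of days d-18, d-12, d-6,
    each treated as one time step carrying a single feature."""
    pairs = []
    for d in range(max(lags), len(series)):
        x = [[series[d - lag]] for lag in lags]  # shape: 3 time steps x 1 feature
        pairs.append((x, series[d]))
    return pairs

# Hypothetical toy demand series, to make the window structure visible.
demand = list(range(25))
x, y = make_windows(demand)[0]
print(x, y)  # first window: demands of days 0, 6, 12 predicting day 18
```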
The proposed convolutional neural network is composed of the following layers: i) a one-dimensional convolutional layer with 64 filters, a kernel size of 2, an input shape of (3,1), and the ReLU activation function; ii) a one-dimensional max pooling layer with a filter of size 2; iii) a flatten layer; and iv) a fully connected layer composed of 50 neurons with ReLU as the activation function. The model was trained using the Adam version of stochastic gradient descent and the MSE loss function. Training for 200 epochs results in an RMSE of 0.138 after normalization, and 457.079 before normalization [6]. The daily demand tendency, with an overview comparing actual and forecasted demand using the LSTM model, is illustrated in the following Figure 5.
The following Table 3 presents the various metrics used to measure the performance of each proposed model, including ARIMA, MLP, LSTM and CNN. The numerical experiments are validated using a real dataset provided by a recognized supermarket in Morocco. The results clearly show that the CNN gives slightly better forecasting results than the LSTM network [6].
Table 3. ARIMA vs MLP vs LSTM vs CNN: performance metrics
Performance metrics          ARIMA       MLP         LSTM        CNN
RMSE (before normalization)  485.690     464.261     457.958     457.079
RMSE (after normalization)   -           0.14        0.138       0.138
Training time                1 129 201   2 341 361   2 541 292   2 720 236
Consumed energy (Joule)      16 938 010  35 120 412  38 119 376  40 803 534
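Table 3 reports RMSE both before and after normalization; assuming min-max normalization of the demand values, the two are related by a simple rescaling factor of (max − min), as this plain-Python sketch on hypothetical toy data illustrates.

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Hypothetical toy demand values and forecasts.
actual = [100.0, 200.0, 300.0, 400.0]
predicted = [110.0, 190.0, 310.0, 390.0]
lo, hi = min(actual), max(actual)

raw = rmse(actual, predicted)
scaled = rmse([(a - lo) / (hi - lo) for a in actual],
              [(p - lo) / (hi - lo) for p in predicted])
print(raw, scaled)  # the normalized RMSE equals the raw RMSE divided by (max - min)
```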
5. CONCLUSION AND FUTURE WORK
Machine learning algorithms automatically extract knowledge from input datasets duly accompanied by the identified features. Unfortunately, their success usually depends on the quality of the data they operate on. Our society creates vast amounts of data every day; besides, programming relieves people of routine tasks, which is why the real future lies within DL. Although not fully developed, this branch of data science is the closest we have gotten to any form of applied AI. The strength of ML lies in computers’ capacity to learn how to best perform such tasks. Automated ML helps computers learn how to optimize the outcome of learning and how to perform routine actions; in other words, automated ML offers ML experts tools to automate repetitive tasks by applying ML to ML itself. Yet, to improve business performance, there is a new data science challenge called “dark data”: the information assets organizations collect, process and store during regular business activities, but generally fail to use for other purposes (for example, analytics, business relationships and direct monetizing). It often comprises most of an organization’s universe of information assets; thus, organizations often retain dark data for compliance purposes only, and storing and securing it typically incurs more expense (and sometimes greater risk) than value.
As future work, dark data is an interesting research field that still needs to be exploited, using, of course, automated ML techniques. Overall, it is worth thinking about dark data as unfulfilled value. The key to monetizing dark data lies not only in gathering it, but also in analyzing it to discover patterns and putting the insights to use. By utilizing new technologies around ML, specifically deep learning, businesses can join structured and unstructured datasets together to provide high-value results and enhance business performance by treating neglected information: unmanaged, unclassified or unknown data such as web log files, audio/video, email archives and visitor tracking data, which today represent storage costs and hidden risks.
REFERENCES
[1] A. Sircar, K. Yadav, K. Rayavarapu, N. Bist, and H. Oza, “Application of machine learning and artificial intelligence in oil and
gas industry,” Petroleum Research, vol. 6, no. 4, pp. 379–391, Dec. 2021, doi: 10.1016/j.ptlrs.2021.05.009.
[2] A. Ghasempour, “Internet of things in smart grid: architecture, applications, services, key technologies, and challenges,”
Inventions, vol. 4, no. 1, pp. 22–33, Mar. 2019, doi: 10.3390/inventions4010022.
[3] L. Vaccaro, G. Sansonetti, and A. Micarelli, “An empirical review of automated machine learning,” Computers, vol. 10, no. 1, p.
11, Jan. 2021, doi: 10.3390/computers10010011.
[4] N. Slimani, I. Slimani, N. Sbiti, and M. Amghar, “Traffic forecasting in Morocco using artificial neural networks,” Procedia
Computer Science, vol. 151, pp. 471–476, 2019, doi: 10.1016/j.procs.2019.04.064.
[5] H. Bousqaoui, I. Slimani, and S. Achchab, “Information sharing as a coordination tool in supply chain using multi-agent system
and neural networks,” in Advances in Intelligent Systems and Computing, Springer International Publishing, 2018, pp. 626–632.
[6] H. Bousqaoui, I. Slimani, and S. Achchab, “Comparative analysis of short-term demand predicting models using ARIMA and
deep learning,” International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 4, pp. 3319–3328, Aug. 2021,
doi: 10.11591/ijece.v11i4.pp3319-3328.
[7] B. Boehmke and B. Greenwell, Hands-on machine learning with R. Chapman and Hall/CRC, 2019.
[8] C. C. Aggarwal, Neural networks and deep learning. Springer International Publishing, 2018.
[9] M. Saber et al., “A comparative performance analysis of the intrusion detection systems,” 2020, pp. 192–200.
[10] J. Waring, C. Lindvall, and R. Umeton, “Automated machine learning: Review of the state-of-the-art and opportunities for
healthcare,” Artificial Intelligence in Medicine, vol. 104, Apr. 2020, doi: 10.1016/j.artmed.2020.101822.
[11] R. Budjač, M. Nikmon, P. Schreiber, B. Zahradníková, and D. Janáčová, “Automated machine learning overview,” Research
Papers Faculty of Materials Science and Technology Slovak University of Technology, vol. 27, no. 45, pp. 107–112, Sep. 2019,
doi: 10.2478/rput-2019-0033.
[12] N. Slimani, I. Slimani, N. Sbiti, and M. Amghar, “Machine learning and statistic predictive modeling for road traffic flow,”
International Journal of Traffic and Transportation Management, vol. 03, no. 01, pp. 17–24, Mar. 2021, doi:
10.5383/JTTM.03.01.003.
[13] A. Halevy, P. Norvig, and F. Pereira, “The unreasonable effectiveness of data,” IEEE Intelligent Systems, vol. 24, no. 2, pp. 8–12,
Mar. 2009, doi: 10.1109/MIS.2009.36.
[14] A. Géron, Hands-on machine learning with scikit-learn, keras, and tensorflow: concepts, tools, and techniques to build intelligent
systems. O’Reilly Media, Inc., 2019.
[15] H. Chen, J. Chen, and J. Ding, “Data evaluation and enhancement for quality improvement of machine learning,” IEEE
Transactions on Reliability, vol. 70, no. 2, pp. 831–847, Jun. 2021, doi: 10.1109/TR.2021.3070863.
[16] Y. Li, R. Liu, C. Wang, L. Yangning, N. Ding, and H.-T. Zheng, “Learning purified feature representations from task-irrelevant
labels,” arXiv preprint arXiv:2102.10955, Feb. 2021.
[17] X. Ying, “An overview of overfitting and its solutions,” Journal of Physics: Conference Series, vol. 1168, Feb. 2019, doi:
10.1088/1742-6596/1168/2/022022.
[18] D. Bashir, G. D. Montañez, S. Sehra, P. S. Segura, and J. Lauw, “An information-theoretic perspective on overfitting and
underfitting,” in AI 2020: Advances in Artificial Intelligence, Springer International Publishing, 2020, pp. 347–358.
[19] S. Vieira, R. Garcia-Dias, and W. H. Lopez Pinaya, “A step-by-step tutorial on how to build a machine learning model,” in
Machine Learning, Elsevier, 2020, pp. 343–370.
[20] L. Arras, F. Horn, G. Montavon, K.-R. Müller, and W. Samek, “‘What is relevant in a text document?’: An interpretable machine
learning approach,” Plos One, vol. 12, no. 8, Aug. 2017, doi: 10.1371/journal.pone.0181142.
[21] S. B. Kotsiantis, D. Kanellopoulos, and P. E. Pintelas, “Data preprocessing for supervised learning,” International Journal of
Computer Science, vol. 1, pp. 111–117, 2006.
[22] H. Shrivastava and S. Sridharan, “Conception of data preprocessing and partitioning procedure for machine learning algorithm,”
International Journal of Recent Advances in Engineering and Technology (IJRAET), vol. 1, no. 3, 2013.
[23] T. Nagatani, “Traffic flow stabilized by matching speed on network with a bottleneck,” Physica A: Statistical Mechanics and its
Applications, vol. 538, Jan. 2020, doi: 10.1016/j.physa.2019.122838.
[24] J. Lu, B. Li, H. Li, and A. Al-Barakani, “Expansion of city scale, traffic modes, traffic congestion, and air pollution,” Cities, vol.
108, Jan. 2021, doi: 10.1016/j.cities.2020.102974.
[25] B. L. Smith, B. M. Williams, and R. Keith Oswald, “Comparison of parametric and nonparametric models for traffic flow
forecasting,” Transportation Research Part C: Emerging Technologies, vol. 10, no. 4, pp. 303–321, Aug. 2002, doi:
10.1016/S0968-090X(02)00009-8.
[26] N. Slimani, I. Slimani, M. Amghar, and N. Sbiti, “Road traffic forecasting using a real data set in Morocco,” Procedia Computer
Science, vol. 177, pp. 128–135, 2020, doi: 10.1016/j.procs.2020.10.020.
[27] H. Eskandari, M. Imani, and M. P. Moghaddam, “Convolutional and recurrent neural network based model for short-term load
forecasting,” Electric Power Systems Research, vol. 195, Jun. 2021, doi: 10.1016/j.epsr.2021.107173.
BIOGRAPHIES OF AUTHORS
Ilham Slimani Engineer graduated from National School of Applied Sciences
Oujda (ENSAO) and PhD from the National School of Computer Science and System Analysis
(ENSIAS) in Rabat. She is currently Professor at the Department of Computer Science at
Mohammed First University, Faculty of Sciences Oujda, Morocco. Her research and publication
interests include machine learning, supply chain management and demand forecasting,
transportation and road traffic forecasting, intrusion detection in computer networks, IoT and
Finance. She can be contacted at email: slimani.ilham@gmail.com.
Nadia Slimani Engineer graduated from the Mohammedia School of Engineering
(EMI) in Rabat and currently PhD student in Mohammedia School of Engineering. She has a
long experience with a national transport operator in Morocco. She also attended the higher cycle
of management at the ISCAE of Rabat. Her research and publication interests include machine
learning, transportation and road traffic forecasting. She can be contacted at email:
slimani.nadia@gmail.com.
Said Achchab holder of a PhD in Applied Mathematics from the Mohammedia School of Engineering in Rabat as well as a university habilitation in Business Intelligence from the same institution; he also attended the higher cycle of management at the ENCG of Settat. He
is currently Professor of Quantitative Finance, Artificial Intelligence and Risk Management at
ENSIAS, and Head of the Department “Computer Science and Decision Support” since January
2018. He is coordinator of the Specialized Master “Engineering for Sustainable Finance and Risk
Management” and of the engineering program “Digital Engineering for Finance”. He is the
founding President of the African Institute of Fintechs. For more than 15 years, he has been
carrying out consulting missions for public and private organizations. He can be contacted at
email: sachchab@gmail.com.
Mohammed Saber Associate Professor at ENSAO (Ecole Nationale des Sciences
Appliquées Oujda), Mohammed First University, Morocco. He is currently Director of Smart
Information, Communication and Technologies Laboratory (SmartICT). His interests include
Network Security (Intrusion Detection System, Evaluation of security components, Security in
Mobile Ad Hoc Networks (MANET), Security IoT), Robotics and Embedded Systems. He can
be contacted at email: m.saber@ump.ac.ma.
Ilhame El Farissi Engineer and Associate Professor at ENSAO (Ecole Nationale des Sciences Appliquées Oujda). Her research interests involve machine learning, classification of
smart card attacks and intrusion detection systems. She can be contacted at email:
i.elfarissi@ump.ac.ma.
Nawal Sbiti holder of a PhD in automation and industrial computing from the
Mohammedia School of Engineering in Morocco as well as a university habilitation in research
from the same institution. She is currently Professor in Electrical Department at Mohammedia
School of Engineering. For more than 30 years, she has been carrying out consulting missions
for public and private organizations. She can be contacted at email: sbiti@emi.ac.ma.
Mustapha Amghar holder of a PhD in industrial computing from Liege University
in Belgium as well as a university habilitation in research from the Mohammedia School of
Engineering in Morocco. He is currently Professor in Electrical Department at Mohammedia
School of Engineering. For more than 30 years, he has been carrying out consulting missions for
public and private organizations. He can be contacted at email: amghar@emi.ac.ma.