RESPONSE SURFACE METHODOLOGY OPTIMIZATION OF FACTORS AFFECTING THE CHARACTERISTICS OF POLYMERIC FILMS USED IN ENTERIC COATING IN SOME PHARMACEUTICAL SOLID DOSAGE FORMS
Boshra Abdullah Altaher. This thesis was submitted in partial fulfillment of the requirements for
the degree of M.Sc. in Chemical Engineering
At
The Faculty of Graduate Studies
Jordan University of Science and Technology
January, 2016
- Response surface methodology (RSM) uses statistical techniques to model and analyze problems with response variables influenced by multiple independent variables. The goal is to optimize the response.
- RSM has been used since the 1930s and was reviewed in landmark papers in 1966 and 1976. It is commonly used in industries, agriculture, medicine, and other fields to optimize processes and products.
- There are two main experimental strategies in RSM - first-order models to initially evaluate relationships between factors and responses, and second-order models to account for curvature and find optimal points if curvature is present.
The document discusses the steps for conducting a response surface methodology (RSM) experiment using central composite design (CCD). It involves determining independent and dependent variables, selecting an appropriate CCD, conducting the experiment runs according to the design, analyzing the data using statistical methods to develop a mathematical model and check its adequacy, and using the model to optimize responses. Key aspects of RSM and CCD covered include developing the design, analyzing results through ANOVA and regression, and checking model validity.
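The CCD structure described above (factorial points, axial points, and center points) can be sketched in code. This is a minimal illustration in coded units, not taken from the thesis; the rotatable axial distance alpha = (2^k)^(1/4) is one common convention among several.

```python
from itertools import product

def central_composite(k, alpha=None, n_center=4):
    """Generate a central composite design in coded units for k factors:
    2^k factorial corner points, 2k axial points at distance alpha, plus center runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable choice of axial distance (an assumption)
    factorial_pts = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial_pts = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s  # vary one factor at a time along each axis
            axial_pts.append(pt)
    center_pts = [[0.0] * k for _ in range(n_center)]
    return factorial_pts + axial_pts + center_pts

design = central_composite(2)  # 4 factorial + 4 axial + 4 center = 12 runs
```

Each row of `design` is one experimental run; the coded values would be mapped back to actual factor settings before running the experiment.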
Selecting experimental variables for response surface modeling, by Seppo Karrila
This document discusses selecting variables and designing experiments for response surface modeling. It recommends beginning with identifying all possible factors, then controlling some and selecting others to experiment with using a design with three levels per variable, like a Box-Behnken design. Response surface modeling fits a quadratic surface to the experimental results rather than optimizing one variable at a time, which can miss the true optimal conditions. The goal is to approximate the maximum or minimum response across variable levels through designed experiments and modeling rather than sequential optimization of individual factors.
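The quadratic-surface idea above can be made concrete in one dimension: fit y = b0 + b1·x + b2·x² to three design points and locate the stationary point of the fitted curve. This is an illustrative sketch with made-up response values, solved exactly by Cramer's rule.

```python
def fit_quadratic(points):
    """Fit y = b0 + b1*x + b2*x^2 exactly through three (x, y) points via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = points
    def det(a, b, c, d, e, f, g, h, i):
        # 3x3 determinant, rows (a,b,c), (d,e,f), (g,h,i)
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    D  = det(1, x1, x1 * x1, 1, x2, x2 * x2, 1, x3, x3 * x3)
    b0 = det(y1, x1, x1 * x1, y2, x2, x2 * x2, y3, x3, x3 * x3) / D
    b1 = det(1, y1, x1 * x1, 1, y2, x2 * x2, 1, y3, x3 * x3) / D
    b2 = det(1, x1, y1, 1, x2, y2, 1, x3, y3) / D
    return b0, b1, b2

# Hypothetical responses at coded levels -1, 0, +1:
b0, b1, b2 = fit_quadratic([(-1, 3.0), (0, 5.0), (1, 4.0)])
x_opt = -b1 / (2 * b2)  # stationary point of the fitted parabola
```

With curvature present (b2 < 0 here), the fitted optimum lies between the design points, which is exactly what one-factor-at-a-time stepping can miss.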
This document discusses optimization techniques used in pharmaceutical development. It defines optimization as making a formulation or process perfect by finding the best use of resources while considering all influencing factors. It describes independent and dependent variables, different optimization methods like evolutionary operation, simplex method, and statistical experimental designs including factorial, response surface, and Plackett-Burman designs. The advantages of optimization include determining important variables, measuring interactions, and allowing extrapolation to find the best product. Optimization has applications in formulation development, dissolution testing, tablet coating, and capsule preparation.
This document provides an introduction to genetic algorithms, which are a class of computational models inspired by evolution. It describes how genetic algorithms use processes analogous to natural selection and genetics to arrive at optimal solutions to problems. The document outlines the key components of genetic algorithms, including representing potential solutions as binary strings, selecting parents based on fitness, recombining parents via crossover to create offspring, mutating offspring randomly, and replacing the population with the offspring. The goal is to evolve better and better solutions over many generations through these evolutionary processes of selection, recombination and mutation.
Genetic algorithms are optimization techniques inspired by Darwin's theory of evolution. They use operations like selection, crossover and mutation to evolve solutions to problems by iteratively trying random variations. The document outlines the history, concepts, process and applications of genetic algorithms, including using them to optimize engineering design, routing, computer games and more. It describes how genetic algorithms encode potential solutions and use fitness functions to guide the evolution toward better outcomes.
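The selection, crossover, and mutation loop described above can be sketched in a few lines. This is a generic minimal GA on bit strings (binary tournament selection, one-point crossover, bit-flip mutation), demonstrated on the toy OneMax problem; all parameter values are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(pop):
        a, b = rng.sample(pop, 2)  # binary tournament: keep the fitter of two
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop), select(pop)
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)  # one-point crossover
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # bit-flip mutation with probability p_mut per bit
            nxt += [[b ^ (rng.random() < p_mut) for b in c] for c in (p1, p2)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # OneMax: fitness is the number of 1 bits
```

For a real application the bit string would encode candidate factor settings and `fitness` would evaluate the measured or modeled response.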
Removal of ammonium ions from wastewater: A short review in development of eff... (GJESM Publication)
Ammonium ion pollution of wastewater has become one of the most serious environmental problems today. The treatment of ammonium ions is a special concern due to their recalcitrance and persistence in the environment. In recent years, various methods for ammonium ion removal from wastewater have been studied extensively. This paper reviews and evaluates the current treatment methods, including ion exchange, adsorption, biosorption, wet air oxidation, biofiltration, diffused aeration, and nitrification/denitrification. About 75 published studies (1979-2015) are reviewed.
The literature survey indicates that ion exchange, adsorption, and biological treatment are the most frequently studied technologies for ammonium ion wastewater.
The document discusses optimal experimental design in systems biology. It introduces concepts like Fisher information and Shannon information that can be used to quantify how much information an experiment provides about unknown parameters. Fisher information measures how sensitive an experiment's outcomes are to changes in the parameters, while Shannon information compares the uncertainty before and after an experiment to determine how much a particular experiment can reduce uncertainty about the parameters. The document provides examples of applying these information measures to help design experiments that efficiently narrow uncertainties and provide the most learning about biological systems.
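The Fisher-information idea above has a simple closed form for a Bernoulli trial, I(p) = n / (p(1-p)), which can be checked numerically as the negative curvature of the expected log-likelihood at the true parameter. This is a standard textbook illustration, not from the document itself.

```python
import math

def bernoulli_fisher_info(p, n=1):
    """Closed-form Fisher information for n Bernoulli(p) trials: n / (p(1-p))."""
    return n / (p * (1.0 - p))

def neg_curvature(p0, n=1, h=1e-4):
    """Numerically estimate -d^2/dp^2 of the expected log-likelihood at p = p0.
    For Bernoulli data, E[log L(p)] = n*(p0*log p + (1-p0)*log(1-p))."""
    ell = lambda p: n * (p0 * math.log(p) + (1.0 - p0) * math.log(1.0 - p))
    # central second-difference approximation of the curvature
    return -(ell(p0 + h) - 2.0 * ell(p0) + ell(p0 - h)) / (h * h)
```

The information is largest near p = 0 or p = 1 and smallest at p = 0.5, which is why experiments near the extremes pin down a Bernoulli parameter more sharply per trial.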
1. Statistics plays an important role in research by enabling researchers to extract meaningful information from data in the presence of variability.
2. The most important time for a statistician to be involved is in the beginning of a study to help design the experiment and ensure the data collection will provide the necessary information.
3. Properly designing the experiment through treatment structure, design structure, and randomization is critical for obtaining unbiased and informative results through statistical analysis.
This document discusses Quality by Design (QbD) principles and Design of Experiments (DOE) methodology. It explains that QbD aims to design quality into products and processes through an understanding of key factors and their interactions. DOE provides a systematic approach to determine these factors and optimize conditions through carefully designed experiments. Common DOE steps include screening experiments to identify important factors, followed by optimization experiments to determine optimal levels and robustness testing to ensure consistent performance under variations.
This document provides information on general factor factorial designs. It defines factorial designs as experiments that study the effects of two or more factors by investigating all possible combinations of the factors' levels. Factorial designs are more efficient than one-factor-at-a-time experiments and allow for the estimation of factor effects at different levels of other factors. However, factorial designs become prohibitively large as the number of factors increases and can be difficult to interpret when interactions are present. The document also provides examples of designing two-factor factorial experiments using completely randomized and randomized complete block designs.
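The "all possible combinations" structure of a factorial design is easy to enumerate, and a main effect is just the average response at the high level minus the average at the low level. A minimal sketch with hypothetical factor names and a made-up noise-free response:

```python
from itertools import product

def full_factorial(levels):
    """Enumerate all combinations of the given factor levels,
    e.g. levels = {"temp": [-1, 1], "conc": [-1, 1]}."""
    names = list(levels)
    return [dict(zip(names, combo)) for combo in product(*(levels[n] for n in names))]

def main_effect(runs, responses, factor):
    """Average response at the high (+1) level minus the average at the low (-1) level."""
    hi = [y for r, y in zip(runs, responses) if r[factor] == 1]
    lo = [y for r, y in zip(runs, responses) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

runs = full_factorial({"temp": [-1, 1], "conc": [-1, 1], "rpm": [-1, 1]})
# 2^3 = 8 runs; every factor's effect is averaged over the levels of the others
```

Because each effect estimate uses all runs, this is more efficient than varying one factor at a time with the same total number of experiments.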
These slides provide an overview of the basics of design of experiments. They also describe and give examples of categorical and continuous factors and responses, discrete numeric and mixture variables, and blocking factors. The slides were presented live and in recorded videos as part of the Mastering JMP webcast series. Watch the webcasts at http://www.jmp.com/mastering
This document provides an overview of a seminar on the basic design of experiments using the Taguchi approach. The seminar aims to teach participants how to apply experimental design principles to solve production problems and optimize product and process designs. The seminar covers topics such as orthogonal arrays, main effects, interactions, mixed level factors, experiment planning, and uses software demonstrations and hands-on exercises. The goal is to prepare attendees for immediate application of experimental design methods in industry.
Factorial experiments allow researchers to study the effects of two or more factors simultaneously in a single experiment. This is more efficient than studying each factor individually. A two-factor factorial design involves all combinations of levels of two factors. The analysis of variance for factorial designs partitions the total variation into separate pieces for the main effects of each factor and their interaction. Factorial designs are extended to more than two factors in a similar manner. Factors can include both qualitative and quantitative variables. Response curves and surfaces are used to model and interpret the results involving quantitative factors.
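The ANOVA partition described above can be computed directly for a balanced two-factor layout: the total sum of squares splits exactly into pieces for factor A, factor B, their interaction, and error. A hedged sketch in pure Python (the data values are made up for illustration):

```python
def two_way_anova_ss(y):
    """Partition the total sum of squares for a balanced two-factor layout y[i][j][k]
    (a levels of A, b levels of B, n replicates per cell)."""
    a, b, n = len(y), len(y[0]), len(y[0][0])
    grand = sum(v for row in y for cell in row for v in cell) / (a * b * n)
    cell_m = [[sum(c) / n for c in row] for row in y]            # cell means
    row_m = [sum(cell_m[i]) / b for i in range(a)]               # factor-A means
    col_m = [sum(cell_m[i][j] for i in range(a)) / a for j in range(b)]  # factor-B means
    ss_a = b * n * sum((m - grand) ** 2 for m in row_m)
    ss_b = a * n * sum((m - grand) ** 2 for m in col_m)
    ss_ab = n * sum((cell_m[i][j] - row_m[i] - col_m[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_e = sum((v - cell_m[i][j]) ** 2
               for i in range(a) for j in range(b) for v in y[i][j])
    ss_t = sum((v - grand) ** 2 for row in y for cell in row for v in cell)
    return {"A": ss_a, "B": ss_b, "AB": ss_ab, "error": ss_e, "total": ss_t}

# hypothetical 2x2 layout with 2 replicates per cell
ss = two_way_anova_ss([[[3, 4], [5, 6]], [[7, 8], [9, 12]]])
```

Dividing each sum of squares by its degrees of freedom and forming F ratios against the error mean square gives the usual significance tests for the main effects and the interaction.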
This document discusses factorial design for pharmaceutical experiments. It defines factorial design as an experiment where two or more factors are each studied at different levels or values. The document then describes different types of factorial designs, including full factorial designs with two or three levels, and fractional factorial designs used when there are more than five factors. It also explains how factors and levels are coded numerically for the experiments.
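The numerical coding mentioned above maps each factor's actual range onto the coded scale [-1, +1] centered at the midpoint. A small sketch (the temperature range is a made-up example):

```python
def code_level(x, low, high):
    """Map an actual factor setting x onto the coded [-1, +1] scale."""
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (x - center) / half_range

# e.g. a temperature factor studied between 150 and 200 degrees C:
code_level(150, 150, 200)  # -> -1.0 (low level)
code_level(175, 150, 200)  # ->  0.0 (center point)
code_level(200, 150, 200)  # -> +1.0 (high level)
```

Coding puts all factors on the same scale, so regression coefficients from the design can be compared directly as measures of effect size.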
The document discusses parenteral drug delivery. It defines parenteral products and other related terms. Parenteral preparations are those administered outside the digestive tract, usually via injections. They are preferred when rapid drug action is needed, the oral route cannot be used, or the drug would be inactivated in the gastrointestinal tract. The major routes of parenteral administration include subcutaneous, intramuscular, intravenous, and others. Proper formulation, sterilization, and packaging are required to ensure the safety of parenteral products.
LeanUX (lean user experience) experimentation has mostly focused on "A/B" testing. This presentation reviews how full and half factorial design of experiments might be used in Lean User Experience design.
The document discusses fractional factorial designs, which use a fraction of the total number of combinations in a full factorial design to reduce the number of required runs. It describes how effects become confounded in fractional designs and how design resolution relates to confounding. It provides examples of 2-level and 3-level fractional factorial designs, and discusses other types of designs like Plackett-Burman, central composite, and Taguchi designs. The key benefits of fractional factorial designs are reducing the number of required runs when there are many factors to investigate.
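The half-fraction idea above can be sketched by generating the last factor as the product of the others (defining relation I = AB...K), which halves the run count at the cost of aliasing that factor with the highest-order interaction. A minimal illustration in coded units:

```python
from itertools import product

def half_fraction(k):
    """2^(k-1) fractional factorial: enumerate k-1 factors fully and set the
    last factor equal to the product of the others (defining relation I = AB...K)."""
    runs = []
    for base in product([-1, 1], repeat=k - 1):
        gen = 1
        for v in base:
            gen *= v  # generated column: product of the base columns
        runs.append(base + (gen,))
    return runs

runs = half_fraction(3)  # 4 runs instead of 8; here C = A*B, so C is aliased with AB
```

In every run the product of all factor columns equals +1, which is exactly the confounding pattern the defining relation encodes.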
Exploring Best Practises in Design of Experiments: A Data Driven Approach to ... (JMP software from SAS)
Learn about best practises in the design of experiments and a data-driven approach to DOE that increases robustness, efficiency and effectiveness. This was presented at a JMP seminar in the UK.
The document discusses Taguchi techniques for robust design and quality engineering. It describes the parameter design procedure which involves: (1) defining controllable parameters, uncontrollable noise factors, and measurable responses; (2) selecting an objective function to optimize; (3) planning experiments such as full factorial, fractional factorial, or orthogonal arrays; (4) running the experiment; (5) analyzing results to identify optimal parameter settings; and (6) selecting setpoints and conducting additional experiments if needed. The goal is to design products and processes that perform well even in the presence of uncontrollable noise factors.
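A common objective function in the Taguchi parameter-design procedure is the signal-to-noise (S/N) ratio, which rewards parameter settings that perform well despite noise. Two standard forms are sketched below; this is textbook material, not taken from the slides themselves.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio (dB) when larger responses are better:
    -10 * log10(mean of 1/y^2)."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio (dB) when smaller responses are better:
    -10 * log10(mean of y^2)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))
```

For each row of the orthogonal array, the replicate responses across noise conditions are collapsed into one S/N value, and the parameter levels that maximize the S/N ratio are selected as the robust setpoints.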
This document provides an overview of operational excellence and design of experiments (DOE). It defines key DOE terms and concepts, including factors, levels, interactions, resolution, coding/decoding variables. It discusses the objectives of different DOE designs (screening, modeling, optimizing) and considerations for choosing a design based on factors, levels, and resources. Guidelines are given for planning, executing, and analyzing a DOE. Examples are provided to illustrate DOE concepts like resolution, coding variables, and a full factorial design. The overall purpose is to introduce the reader to the technique of DOE for improving processes.
This document provides information about a case study on the blending process for a pharmaceutical formulation. It includes:
1) Details of the formulation and factors being studied (mixing time, magnesium stearate concentration, and talc concentration) to evaluate blend uniformity.
2) Descriptions of key concepts for experimental design including treatments, experimental units, responses, and interactions.
3) Discussion of blocking as a technique to reduce nuisance factors like different batches of active ingredients being studied.
Solutions. Design and Analysis of Experiments. Montgomery (Byron CZ)
This document summarizes solutions to problems from a chapter on simple comparative experiments. Key points include:
- Hypotheses are tested to compare means and variances of samples from two populations or processes.
- t-tests and F-tests are used to analyze differences in means and variances based on sample data.
- Confidence intervals are constructed to estimate population parameters based on sample statistics.
- Normality assumptions and sample sizes are considered in selecting appropriate statistical tests.
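The pooled two-sample t statistic used in such comparisons is straightforward to compute by hand; a sketch with made-up sample data (equal variances assumed, as in the pooled test):

```python
import math

def two_sample_t(x, y):
    """Pooled two-sample t statistic for H0: mu_x == mu_y, assuming equal variances.
    Returns the statistic and its degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)  # pooled variance
    t = (mx - my) / math.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    return t, nx + ny - 2

t, df = two_sample_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

Comparing |t| against the t distribution with `df` degrees of freedom gives the p-value, and the same pooled standard error yields the confidence interval for the mean difference.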
Introduction to Design of Experiments by Teck Nam Ang (University of Malaya)
This set of slides explains in a simple manner the purpose of experiment, various strategies of experiment, how to plan and design experiment, and the handling of experimental data.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte... (University of Maribor)
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained momentum due to their numerous advantages over fossil fuels, advantages that go beyond sustainability to financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete a cost analysis when PV and EV are present, and presents the proposed design diagram that sets the priorities and requirements of the system. The proposed approach allows a setup to improve its power stability, especially during power outages. The information presented helps researchers and plant owners complete the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy farm support the theoretical work and highlight its benefits to existing plants. The short return on investment supports the novelty of the proposed approach for a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Low power architecture of logic gates using adiabatic techniques (nooriasukmaningtyas)
The growing significance of portable systems, and the need to limit power consumption in very-high-density ultra-large-scale-integration chips, has recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of the basic gates, one can improve the performance of the whole system. The paper presents proposed low-power designs of OR/NOR, AND/NAND, and XOR/XNOR gates using these approaches, and analyzes their results for power dissipation, delay, power-delay product, and rise time, comparing them with other adiabatic techniques and with conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. The designs using the DC-DB PFAL technique were found to outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
Advanced control scheme of doubly fed induction generator for wind turbine us...
This paper describes a speed control scheme for generating electrical energy on a power network based on the doubly fed induction generator (DFIG) used in wind power conversion systems. First, a DFIG model is constructed. A control law is then formulated to govern the flow of energy between the stator of the DFIG and the grid using three types of controllers: proportional-integral (PI), sliding mode controller (SMC), and second-order sliding mode controller (SOSMC). Their results are compared in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter variations. The simulations for this study were conducted in MATLAB/Simulink. Multiple simulations show very satisfactory results and demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
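To give a flavor of the simplest of the three controllers, a discrete PI loop for power reference tracking can be sketched as follows. The gains, sample time, and first-order plant are illustrative assumptions, not the paper's DFIG model or tuning.

```python
# Minimal discrete PI controller sketch for power reference tracking;
# gains, sample time, and the first-order plant are illustrative
# assumptions, not the paper's DFIG model.

def pi_step(error, integ, kp=2.0, ki=10.0, dt=1e-3):
    """One PI update: returns (control_output, new_integrator_state)."""
    integ = integ + error * dt          # accumulate the integral of the error
    return kp * error + ki * integ, integ

# Track a constant power reference against a crude first-order plant.
reference, power, integ = 1.0, 0.0, 0.0
for _ in range(5000):                   # 5 s of simulated time at dt = 1 ms
    u, integ = pi_step(reference - power, integ)
    power += (u - power) * 1e-3         # plant: dP/dt = u - P (illustrative)
print(f"steady-state power: {power:.3f}")
```

The integral term drives the steady-state tracking error to zero, which is exactly the property the paper's comparison of PI, SMC, and SOSMC is probing under disturbances and parameter variations.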
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on that power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and traditional and non-traditional security are all explored and explained. Using Mackinder's
Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, the study examines China's
role in Central Asia. It adheres to an empirical epistemological method and takes care to remain
objective, critically analyzing primary and secondary research documents to elaborate the role of
China's geo-economic outreach in Central Asian countries and its future prospects. The study finds that
China is seeing significant success in trade, pipeline politics, and gaining influence over other
governments, a success attributable to the effective use of key instruments such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach combines a Convolutional Neural Network (CNN)
with a Long Short-Term Memory (LSTM) network. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. Our experiments show that the CNN-LSTM method outperforms other deep
learning classification algorithms at detecting smart grid intrusions. In addition, the proposed
approach improves accuracy, precision, recall, and F1 score, achieving a high detection accuracy
of 99.50%.
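The four metrics named above all derive from a binary confusion matrix. The sketch below shows the standard definitions; the counts are made up for illustration and are not taken from the paper's experiments.

```python
# Sketch of the evaluation metrics named above, computed from a binary
# confusion matrix; the counts are illustrative, not the paper's results.

def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 score for a binary detector."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # fraction of alarms that are real
    recall = tp / (tp + fn)             # fraction of attacks detected
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for an intrusion detector on a 2,000-sample test set.
acc, prec, rec, f1 = classification_metrics(tp=980, fp=5, fn=5, tn=1010)
print(f"acc={acc:.4f} prec={prec:.4f} rec={rec:.4f} f1={f1:.4f}")
```

Reporting all four matters in intrusion detection because attack traffic is usually rare: a detector can reach high accuracy while missing most attacks, which recall and F1 expose.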
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024)
The International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings on the theory, methodology, and applications of NLP, artificial intelligence, and machine learning. The conference seeks substantial contributions across all key domains of these fields, aiming to foster both theoretical advances and real-world implementations. By facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Embedded machine learning-based road conditions and driving behavior monitoring
Car accident rates have increased in recent years, resulting in loss of human life, property damage, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. It is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions, and it effectively detects potential risks, helping to mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Data collection covered three key road events: a normal street with normal driving, speed bumps, and circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning pipeline designed for devices with limited power and memory. The developed system achieved 91.9% accuracy, 93.6% precision, and 92% recall. The inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms, requiring 2.6 kB of peak RAM and 139.9 kB of program flash memory, making it suitable for resource-constrained embedded systems.
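On-device classification of events such as speed bumps typically operates on features extracted from short sensor windows rather than raw samples. The window size, readings, and feature choice below are assumptions for illustration, not the system's actual pipeline.

```python
# Sketch of sliding-window feature extraction over accelerometer samples,
# as is typical for on-device event classification; the window size and
# feature choice are illustrative assumptions, not the paper's pipeline.
import math

def window_features(samples, window=4):
    """Mean and standard deviation per non-overlapping window."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        feats.append((mean, math.sqrt(var)))
    return feats

# A flat stretch followed by a bump-like spike (hypothetical readings, in g).
accel_z = [1.0, 1.0, 1.0, 1.0, 1.0, 1.4, 1.8, 1.0]
print(window_features(accel_z))
# The second window's larger standard deviation flags the bump candidate.
```

Compact per-window statistics like these keep the classifier's input small, which is what makes the reported kilobyte-scale RAM footprint plausible on a microcontroller.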
6th International Conference on Machine Learning & Applications (CMLA 2024)
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of machine learning.