The Ant-Miner algorithm discovers classification rules that are used to classify data. It is based on Ant Colony Optimization (ACO), which deals with artificial systems inspired by the foraging behavior of real ants. The task here is to improve the pheromone update method of the current system. Pheromone updating depends mainly on the initial pheromone of a term and on Q, the quality of the constructed rule, which is added to the currently accumulated pheromone. In this method, an attempt is made to lay pheromone on the trail so that term selection is not biased and the system as a whole behaves uniformly, producing a robust system capable of finding high-quality solutions. In the proposed approach, the amount of pheromone added to a used term depends not only on the accumulated pheromone but also on Q, the quality of the rule. Q is therefore modified and multiplied in ways that help obtain better solutions; in particular, the rule length is used and manipulated to support this goal. The aim is thus to improve predictive accuracy while sustaining the simplicity of the rule list, using Ant Colony Optimization in data mining.
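The update described above can be illustrated with a small Python sketch. The quality measure Q = sensitivity × specificity is the classic Ant-Miner formula; the length-based scaling and the normalisation step are assumptions for illustration, not the paper's exact equations.

```python
# Hypothetical sketch of a length-aware pheromone update for Ant-Miner.
# rule_quality follows the classic Ant-Miner definition; the boost and
# normalisation below are illustrative assumptions.

def rule_quality(tp, fp, fn, tn):
    """Classic Ant-Miner quality: sensitivity * specificity."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity * specificity

def update_pheromone(pheromone, used_terms, tp, fp, fn, tn, rule_length):
    """Reinforce terms used in the rule; shorter rules get a larger boost."""
    q = rule_quality(tp, fp, fn, tn)
    boost = q / rule_length          # length-aware scaling (assumed)
    for term in used_terms:
        pheromone[term] += pheromone[term] * boost
    # normalise so the trail stays a probability-like distribution
    total = sum(pheromone.values())
    return {t: p / total for t, p in pheromone.items()}

tau = {("outlook", "sunny"): 0.25, ("humidity", "high"): 0.25,
       ("wind", "weak"): 0.25, ("outlook", "rain"): 0.25}
tau = update_pheromone(tau, [("outlook", "sunny")],
                       tp=8, fp=1, fn=2, tn=9, rule_length=1)
```

After the update, the used term's share of the (normalised) trail grows while unused terms shrink proportionally, which is the unbiased reinforcement the abstract aims for.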
This document describes improvements made to an ant colony optimization algorithm called Ant-Miner for generating classification rules from data. The improved algorithm is called Ant-Miner3. It uses a different pheromone updating strategy and state transition rule compared to the original Ant-Miner algorithm. The goal is to improve accuracy by increasing diversity in the rules generated by the ants. Ant-Miner3 was tested on two standard problems and performed better than the original Ant-Miner algorithm in terms of accuracy.
Opposition Based Firefly Algorithm Optimized Feature Subset Selection Approac... (mlaij)
Recently, a huge amount of data has become available in the field of medicine, which, when analysed, helps doctors diagnose diseases. Data mining techniques can be applied to these medical data to extract knowledge so that disease prediction becomes more accurate and easier. In this work, cardiotocogram (CTG) data is analysed using a Support Vector Machine (SVM) for predicting fetal risk. An opposition-based firefly algorithm (OBFA) is proposed to extract the relevant features that maximise the classification performance of the SVM. The obtained results show that the opposition-based firefly algorithm outperforms the standard firefly algorithm (FA).
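The "opposition" idea the abstract refers to can be sketched as follows: for each candidate solution x in a bounded search space [a, b], also evaluate its opposite a + b − x and keep the better of the two. The function names and the toy fitness are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of opposition-based initialisation; names are hypothetical.

def opposite(x, low, high):
    """Component-wise opposite point: a + b - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, low, high)]

def opposition_init(population, low, high, fitness):
    """Keep the better of each candidate and its opposite."""
    improved = []
    for x in population:
        x_op = opposite(x, low, high)
        improved.append(x if fitness(x) >= fitness(x_op) else x_op)
    return improved

# toy fitness: prefer vectors with large component sums
low, high = [0.0, 0.0], [1.0, 1.0]
pop = [[0.1, 0.2], [0.9, 0.8]]
result = opposition_init(pop, low, high, fitness=sum)
```

This doubles the coverage of the search space at initialisation for the cost of one extra fitness evaluation per candidate.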
A Multi-Objective Ant Colony System Algorithm for Virtual Machine Placement (IJERA Editor)
This document proposes a multi-objective ant colony system algorithm for virtual machine placement to minimize total resource wastage and power consumption. It introduces the virtual machine placement problem and reviews existing literature on ant colony optimization algorithms. The proposed algorithm is described in detail, including initialization, iterative construction of solutions by ants, evaluation and updating of pheromone trails. The algorithm is tested on examples from literature and shown to perform better than existing approaches in efficiently finding non-dominated solutions for the multi-objective virtual machine placement problem.
Classification with ant colony optimization (kamalikanath89)
The document describes the Ant-Miner algorithm for classification using ant colony optimization. It begins with an introduction to ant colony optimization and how it was inspired by the foraging behavior of real ants. It then provides details on the key steps of the Ant-Miner algorithm, including rule construction, pruning, and pheromone updating. The algorithm uses a heuristic function based on information theory and positive feedback to iteratively construct and refine classification rules from training data.
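The information-theoretic heuristic mentioned above can be sketched as follows: a term whose value partitions the training cases into a low-entropy class distribution looks more promising to the ants. The normalisation into [0, 1] is an assumed convention for illustration.

```python
# Sketch of an entropy-based term heuristic in the spirit of Ant-Miner.
import math

def entropy(class_counts):
    """Shannon entropy of a class-count distribution, in bits."""
    total = sum(class_counts)
    h = 0.0
    for c in class_counts:
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def heuristic(class_counts, n_classes):
    """Map entropy into [0, 1]: low entropy -> high heuristic value."""
    return (math.log2(n_classes) - entropy(class_counts)) / math.log2(n_classes)

# a pure partition (all cases in one class) gets the maximum value
print(heuristic([4, 0], n_classes=2))   # -> 1.0
# a maximally mixed partition gets the minimum value
print(heuristic([5, 5], n_classes=2))   # -> 0.0
```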
This document presents a new ant colony optimization (ACO) technique called AntMiner+ for data classification. It provides an overview of previous ant-based classification approaches and compares their accuracy to other techniques such as decision trees. AntMiner+ augments the environment for the ants to include class variables in order to handle multiclass problems. It also uses the better-performing MAX-MIN ant system and allows interval rules. The authors' experiments show that AntMiner+ achieves superior accuracy to other AntMiner versions and is competitive with other classification techniques.
This document proposes a new approach to improve the Apriori algorithm for mining association rules from transactional databases. The approach uses a Boolean modeling of the association rules mined to represent them in a cellular automaton. The key steps are:
1) Applying the Apriori-Cell algorithm to a transactional database to extract frequent itemsets and generate association rules.
2) Representing the association rules using a Boolean model within a cellular automaton framework consisting of two layers - one for items and one for transactions.
3) Managing and simulating inference on the represented association rules using the cellular automaton's inference engine to optimize representation and processing of the rules.
An example is provided to illustrate the approach.
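The steps above build on standard frequent-itemset mining. As a rough, generic illustration of the support-counting that precedes rule generation, here is an Apriori-style sketch (the Apriori-Cell algorithm and the two cellular-automaton layers themselves are not reproduced; the data and threshold are illustrative):

```python
# Generic Apriori-style support counting for k-itemsets; a sketch only.
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]
min_support = 2  # absolute support threshold (assumed)

def frequent_itemsets(transactions, k, min_support):
    """Return the frequent k-itemsets with their absolute supports."""
    items = sorted({i for t in transactions for i in t})
    counts = {}
    for cand in combinations(items, k):
        support = sum(1 for t in transactions if set(cand) <= t)
        if support >= min_support:
            counts[cand] = support
    return counts

print(frequent_itemsets(transactions, 2, min_support))
```

Association rules are then derived from these frequent itemsets; in the proposed approach they would additionally be encoded as Boolean states in the cellular automaton.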
Cuckoo algorithm with great deluge local-search for feature selection problems (IJECEIAES)
The feature selection problem is concerned with searching a dataset for a set of features with the aim of reducing the training time and enhancing the accuracy of a classification method. Feature selection algorithms are therefore proposed to choose important features from large and complex datasets. The cuckoo search (CS) algorithm is a nature-inspired optimization algorithm that is widely used to find the optimum solution to a given problem. In this work, the cuckoo search algorithm is hybridized with a local search algorithm to find a satisfactory solution to the feature selection problem. The great deluge (GD) algorithm is an iterative search procedure that can accept some worse moves in order to reach better solutions and to increase the exploitation ability of CS. A comparison is provided to examine the performance of the proposed method against the original CS algorithm. On the UCI datasets, the proposed algorithm outperforms the original algorithm and produces results comparable with some reported in the literature.
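The great-deluge acceptance rule that makes the local search accept some worse moves can be sketched as follows: a candidate is accepted whenever it beats a rising "water level", which for a maximisation problem climbs by a fixed rain speed each iteration. The toy problem and all parameter values are illustrative assumptions.

```python
# Sketch of great-deluge local search (maximisation); names are hypothetical.
import random

def great_deluge(fitness, initial_solution, neighbour, iterations=200,
                 rain_speed=0.01, seed=0):
    random.seed(seed)
    current = initial_solution
    level = fitness(current)          # initial water level
    best = current
    for _ in range(iterations):
        cand = neighbour(current)
        if fitness(cand) >= level:    # accept even some worse moves
            current = cand
            if fitness(cand) > fitness(best):
                best = cand
        level += rain_speed           # the water level rises each step
    return best

# toy problem: maximise the number of selected features in a 0/1 mask
def flip_one(mask):
    i = random.randrange(len(mask))
    return mask[:i] + (1 - mask[i],) + mask[i + 1:]

best = great_deluge(sum, (0,) * 8, flip_one)
```

In the hybrid, a CS-generated nest would take the place of the random initial mask, and the real fitness would be the classifier accuracy on the selected feature subset.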
The assembly process is one of the most time-consuming and expensive manufacturing activities. Determination of a correct and stable assembly sequence is necessary for an automated assembly system. The objective of the present work is to generate a feasible, stable and optimal assembly sequence with minimum assembly time. Automated assembly has the advantage of greater process capability and scalability; it is faster, more efficient and more precise than any conventional process. The Ant Colony Optimization (ACO) method is used for generation of a stable assembly sequence. This method has been applied to a planetary gearbox.
Swarm intelligence is a biologically inspired field that studies how social behaviors emerge from the interactions between individuals in a decentralized system. It draws inspiration from natural systems like bird flocking and ant colonies. Particle swarm optimization and ant colony optimization are two popular swarm intelligence algorithms. PSO mimics bird flocking by having particles update their velocities based on their own experience and the swarm's experience. ACO mimics ant foraging behavior by having artificial ants deposit and follow pheromone trails to iteratively find optimal solutions. Both algorithms have been applied to problems like optimization and routing.
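The PSO velocity update the summary describes can be sketched directly: each particle blends its current velocity with pulls toward its personal best and the swarm's best. The inertia and acceleration coefficients below are typical textbook values, not taken from this document.

```python
# Sketch of one canonical PSO velocity/position update step.
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(1)):
    """Return updated (position, velocity) for one particle."""
    new_vel, new_pos = [], []
    for x, v, p, g in zip(pos, vel, pbest, gbest):
        v_next = (w * v
                  + c1 * rng.random() * (p - x)    # pull toward own best
                  + c2 * rng.random() * (g - x))   # pull toward swarm best
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel

p, v = pso_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [2.0, 2.0])
```

Starting from the origin with both bests in the positive quadrant, the particle moves toward them, which is the flocking behaviour the paragraph describes.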
The document describes a novel approach called Enhanced Ant Colony Optimization (EACO) for scheduling tasks in a grid computing environment. EACO aims to improve task scheduling by minimizing makespan time compared to existing algorithms like Modified Ant Colony Optimization, MAX-MIN, and Resource Aware Scheduling Algorithm. It does this by considering system and network performance in dynamic grids and selecting resources according to their availability. The document presents the procedures of EACO and the existing algorithms, experimental results showing EACO achieves lower makespan, and concludes EACO is effective for task scheduling in grids.
Comparative Study of Ant Colony Optimization And Gang Scheduling (IJTET Journal)
Abstract— Ant Colony Optimization (ACO) is a well-known and rapidly evolving meta-heuristic technique. Many optimization problems have already taken advantage of ACO, and countless others are on the way. ACO has been used as an effective algorithm for solving the scheduling problem in grid computing. Gang scheduling, in contrast, is a scheduling algorithm for parallel systems that schedules related threads or processes to run simultaneously on different processors. The scheduled threads usually belong to the same process, but in some cases they come from different processes, for example when the processes have a producer-consumer relationship or when all processes come from the same MPI program.
Hybrid Model using Unsupervised Filtering Based on Ant Colony Optimization an... (IRJET Journal)
This document proposes a hybrid model for medical data mining that uses unsupervised filtering followed by ant colony optimization and multiclass support vector machines. It first discusses data mining and describes ant colony optimization, random forests, and ant colony decision trees. It then explains the proposed hybrid model, which applies unsupervised filtering techniques to raw medical data before using ant colony optimization to build a decision tree. Finally, it briefly introduces multiclass support vector machines as the final component of the hybrid model. The overall goal is to extract useful information and patterns from medical data using this combined approach.
This document summarizes ant colony optimization algorithms for solving the traveling salesman problem. It first discusses how real ant behavior inspired the development of artificial ant colony algorithms. It then reviews different ant colony optimization algorithms for the traveling salesman problem, including the ant system, elitist ant system, rank-based ant system, min-max ant system, and ant colony system algorithms. Finally, it briefly discusses parallel implementations of ant colony algorithms.
This document summarizes ant colony optimization algorithms for solving the traveling salesman problem. It first discusses how real ant behavior inspired the development of artificial ant colony algorithms. It then describes how the ant colony optimization metaheuristic can be applied to solve combinatorial optimization problems like the traveling salesman problem. Finally, it reviews several specific ant colony algorithms for the traveling salesman problem, including Ant System, Elitist Ant System, Rank-Based Ant System, Min-Max Ant System, Ant Colony System, and Approximate Nondeterministic Tree Search.
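The Ant System state-transition rule at the heart of these TSP algorithms can be sketched as follows: from the current city, an ant picks the next city j with probability proportional to pheromone^α times visibility (1/distance)^β. The parameter values are common defaults, not taken from the document.

```python
# Sketch of the Ant System probabilistic next-city choice for the TSP.
import random

def next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0,
              rng=random.Random(42)):
    """Roulette-wheel selection weighted by pheromone and visibility."""
    weights = [(j, (tau[current][j] ** alpha)
                   * ((1.0 / dist[current][j]) ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]

# two candidate cities with equal pheromone but one much closer:
# the closer one should be picked far more often
tau = {0: {1: 1.0, 2: 1.0}}
dist = {0: {1: 1.0, 2: 10.0}}
picks = [next_city(0, [1, 2], tau, dist) for _ in range(1000)]
```

The variants reviewed above (elitist, rank-based, min-max, ACS) mainly change how the pheromone values `tau` are updated, not this selection step.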
Comparison Between Pid Controllers for Gryphon Robot Optimized With Neuro-Fuz... (ijctcm)
In this paper, three intelligent evolutionary optimization approaches to designing a PID controller for a Gryphon Robot are presented and compared with the results of a neuro-fuzzy system. The three approaches are the artificial bee colony, shuffled frog leaping and particle swarm optimization algorithms. The design goal is to minimize the integral absolute error and to improve the transient response by minimizing the overshoot, settling time and rise time of the step response. An objective function over these indexes is defined and minimized using the four optimization methods mentioned above. After optimization of the objective function, the optimal parameters for the PID controller are obtained. Simulation results show that the FNN has a remarkable effect on decreasing the settling time and rise time and on eliminating the steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing the overshoot. PSO, on the other hand, appears to perform well on steady-state error only. In steady state, all of the methods react robustly to the disturbance, but the FNN shows more stability in the transient response.
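An objective function over such step-response indexes can be sketched as below: integral absolute error (via the trapezoidal rule) plus penalties for overshoot and settling time. The weights and the ±2% settling band are illustrative assumptions, and the rise-time term is omitted for brevity.

```python
# Sketch of a step-response cost combining IAE, overshoot and settling time.
# Weights and the settling band are assumptions, not the paper's values.

def step_response_cost(times, outputs, setpoint=1.0,
                       w_iae=1.0, w_os=1.0, band=0.02):
    # integral absolute error via the trapezoidal rule
    iae = sum(0.5 * (abs(setpoint - outputs[k]) + abs(setpoint - outputs[k + 1]))
              * (times[k + 1] - times[k]) for k in range(len(times) - 1))
    overshoot = max(0.0, max(outputs) - setpoint)
    # settling time: last instant the response is outside the +-band
    settling = 0.0
    for t, y in zip(times, outputs):
        if abs(y - setpoint) > band * setpoint:
            settling = t
    return w_iae * iae + w_os * overshoot + settling

# toy sampled response that overshoots to 1.2 before settling at 1.0
cost = step_response_cost([0.0, 1.0, 2.0, 3.0], [0.0, 1.2, 1.0, 1.0])
```

An optimizer (ABC, SFL, PSO or the FNN tuning) would then search the PID gains that minimize this scalar cost.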
Ant Colony Optimization for Optimal Low-Pass State Variable Filter Sizing (IJECEIAES)
In analog filter design, discrete component values such as resistors (R) and capacitors (C) are selected from standard manufactured series of fixed values. An exhaustive search over all possible combinations for an optimized design is not feasible. In this paper, we present an application of the Ant Colony Optimization (ACO) technique to select optimal values of resistors and capacitors from different manufactured series so as to satisfy the filter design criteria. Three variants of Ant Colony Optimization are applied, namely the AS (Ant System), the MMAS (Min-Max AS) and the ACS (Ant Colony System), for the optimal sizing of the low-pass state variable filter. SPICE simulations are used to validate the obtained results and performances, which are compared with already published works.
The optimization of running queries in relational databases using ant colony ... (ijdms)
The issue of optimizing queries is a cost-sensitive process and, with respect to the number of associated tables in a query, its number of permutations grows exponentially. On one hand, in comparison with other operators in a relational database, the join operator is the most difficult and complicated one in terms of optimization for reducing its runtime. Accordingly, various algorithms have so far been proposed to solve this problem. On the other hand, the success of any database management system (DBMS) means exploiting the query model. In the current paper, the heuristic ant algorithm has been proposed to solve this problem and improve the runtime of the join operation. Experiments and observed results reveal the efficiency of this algorithm compared to its similar algorithms.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
The International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
COMPARISON BETWEEN ARTIFICIAL BEE COLONY ALGORITHM, SHUFFLED FROG LEAPING ALG... (cscpconf)
In this paper, three optimization approaches to designing a PID controller for a Gryphon Robot are presented. The three approaches are the Artificial Bee Colony and Shuffled Frog Leaping algorithms and a neuro-fuzzy system. The design goal is to minimize the integral absolute error and to improve the transient response by minimizing the overshoot, settling time and rise time of the step response. An objective function over these indexes is defined and minimized applying the Shuffled Frog Leaping (SFL) algorithm, the Artificial Bee Colony (ABC) algorithm and a Neuro-Fuzzy System (FNN). After optimization of the objective function, the optimal parameters for the PID controller are obtained. Simulation results show that the FNN has a remarkable effect on decreasing the settling time and rise time and on eliminating the steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing the overshoot. In steady state, all of the methods react robustly to the disturbance, but the FNN shows more stability in the transient response.
The document discusses various optimization techniques and algorithms including genetic algorithms, artificial neural networks, and data analytics. Specifically, it covers genetic algorithms in more detail including the basic concepts of populations of chromosomes evolving over generations using processes like crossover, mutation, and selection to optimize an objective function. It also discusses other metaheuristic algorithms like simulated annealing, particle swarm optimization, and ant colony optimization which are inspired by natural processes and use stochastic components to find robust solutions.
The Effect of Updating the Local Pheromone on ACS Performance using Fuzzy Log... (IJECEIAES)
A Fuzzy Logic Controller (FLC) has become one of the most frequently utilised artificial intelligence techniques for adapting metaheuristic parameters. In this paper, the ξ parameter of the Ant Colony System (ACS) algorithm is adapted using an FLC, and the algorithm's behaviour is studied during this adaptation. The proposed approach is compared with the standard ACS algorithm. Computational results are reported on a library of sample instances for the Traveling Salesman Problem (TSPLIB).
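For context, the standard ACS local pheromone update that the ξ parameter controls can be sketched as follows: after an ant crosses edge (i, j), the pheromone on that edge is decayed toward the initial value τ0 with rate ξ. In the paper's approach, the FLC would supply an adapted ξ in place of the fixed value used here.

```python
# Sketch of the ACS local pheromone update: tau <- (1 - xi)*tau + xi*tau0.

def local_update(tau, edge, xi, tau0):
    """Decay the pheromone on one traversed edge toward tau0."""
    i, j = edge
    tau[i][j] = (1.0 - xi) * tau[i][j] + xi * tau0
    return tau[i][j]

tau = {0: {1: 0.9}}
print(local_update(tau, (0, 1), xi=0.1, tau0=0.1))   # ≈ 0.82
```

A larger ξ pushes traversed edges back toward τ0 faster, encouraging other ants to explore different edges; adapting ξ therefore tunes the exploration/exploitation balance online.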
A New Extraction Optimization Approach to Frequent 2 Itemsets (ijcsa)
In this paper, we propose a new optimization approach to the reference APRIORI algorithm (AGR 94) for 2-itemsets (sets of cardinality 2). We start by calculating the supports of the 1-itemsets (sets of cardinality 1), then we prune the infrequent 1-itemsets and keep only those that are frequent (i.e., those whose support is greater than or equal to a fixed minimum threshold). During the second iteration, we sort the frequent 1-itemsets in descending order of their respective supports and then form the 2-itemsets. In this way, the association rules are discovered more quickly. Experimentally, the comparison of our OPTI2I algorithm with APRIORI, PASCAL, CLOSE and MAX-MINER shows its efficiency on weakly correlated data. Our work has also led to a classical model of side-by-side classification of items, obtained by establishing a relationship between the different sets of 2-itemsets.
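The ordering idea described above can be sketched in a few lines: the frequent 1-itemsets are sorted by descending support before the 2-itemsets are formed, so the most promising pairs are generated first. This is an illustrative sketch of the ordering only, not the OPTI2I algorithm itself.

```python
# Sketch of support-ordered 2-itemset candidate generation.
from itertools import combinations

def ordered_2itemset_candidates(transactions, min_support):
    # pass 1: supports of 1-itemsets, pruned by the threshold
    support = {}
    for t in transactions:
        for item in t:
            support[item] = support.get(item, 0) + 1
    frequent = {i: s for i, s in support.items() if s >= min_support}
    # sort by descending support, then pair in that order
    ordered = sorted(frequent, key=frequent.get, reverse=True)
    return [set(pair) for pair in combinations(ordered, 2)]

tx = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
cands = ordered_2itemset_candidates(tx, min_support=2)
```

Because the pair of highest-support items comes out first, high-support association rules tend to be discovered earlier than with unordered candidate generation.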
A NEW EXTRACTION OPTIMIZATION APPROACH TO FREQUENT 2 ITEMSETS (ijcsa)
This document presents a new optimization approach for extracting frequent 2-itemsets from transactional databases. The approach sorts frequent 1-itemsets by support before generating 2-itemsets, aiming to discover association rules more quickly. Experiments show the proposed OPTI2I algorithm performs efficiently on weakly correlated data compared to APRIORI, PASCAL, CLOSE and MAX-MINER. The work also presents a model for side-by-side classification of items based on relationships between 2-itemsets, which could help arrange products in large stores.
Due to the availability of the internet and the evolution of embedded devices, the Internet of Things can contribute usefully to the energy domain. The Internet of Things (IoT) will deliver a smarter grid, enabling more information and connectivity throughout the infrastructure and into homes. Through the IoT, consumers, manufacturers and utility providers will find new ways to manage devices and ultimately conserve resources and save money, using smart meters, home gateways, smart plugs and connected appliances. In the future smart home, various devices will be able to measure and share their energy consumption and actively participate in house-wide or building-wide energy management systems. This paper discusses the different approaches being taken worldwide to connect the smart grid. Full system solutions can be developed by combining hardware and software to address some of the challenges in building a smarter and more connected grid.
A Survey Report on: Security & Challenges in Internet of Things (ijsrd.com)
In the era of computing technology, Internet of Things (IoT) devices are now popular in every domain, such as e-governance, e-health, e-home, e-commerce and e-trafficking. The IoT is spreading from small to large applications in fields such as smart cities, smart grids and smart transportation. On one hand, the IoT provides facilities and services for society; on the other hand, IoT security is a crucial issue. IoT security is the area concerned with securing the connected devices and networks of the IoT. The IoT is a vast area, with usability, performance, security and reliability as its major challenges. The growth of the IoT is increasing exponentially, driven by market pressures, which proportionally increases the security threats involved. The relationship between security and the billions of devices connecting to the Internet cannot be described with existing mathematical methods. In this paper, we explore the opportunities possible in the IoT, together with the security threats and challenges associated with it.
The assembly process is one of the most time consuming and expensive manufacturing activities.
Determination of a correct and stable assembly sequence is necessary for automated assembly system. The objective of
the present work is to generate feasible, stable and optimal assembly sequence with minimum assembly time.
Automated assembly has the advantage of greater process capability and scalability. It is faster, more efficient and
precise than any conventional process. Ant Colony Optimization (ACO) method is used for generation of stable
assembly sequence. This method has been applied to a Planetary Gearbox.
Swarm intelligence is a biologically inspired field that studies how social behaviors emerge from the interactions between individuals in a decentralized system. It draws inspiration from natural systems like bird flocking and ant colonies. Particle swarm optimization and ant colony optimization are two popular swarm intelligence algorithms. PSO mimics bird flocking by having particles update their velocities based on their own experience and the swarm's experience. ACO mimics ant foraging behavior by having artificial ants deposit and follow pheromone trails to iteratively find optimal solutions. Both algorithms have been applied to problems like optimization and routing.
The document describes a novel approach called Enhanced Ant Colony Optimization (EACO) for scheduling tasks in a grid computing environment. EACO aims to improve task scheduling by minimizing makespan time compared to existing algorithms like Modified Ant Colony Optimization, MAX-MIN, and Resource Aware Scheduling Algorithm. It does this by considering system and network performance in dynamic grids and selecting resources according to their availability. The document presents the procedures of EACO and the existing algorithms, experimental results showing EACO achieves lower makespan, and concludes EACO is effective for task scheduling in grids.
Comparative Study of Ant Colony Optimization And Gang SchedulingIJTET Journal
Abstract— Ant Colony Optimization (ACO) is a well-known and rapidly evolving meta-heuristic technique. Many optimization problems have already taken advantage of the ACO technique, while countless others are on their way. ACO has been used as an effective algorithm for solving the scheduling problem in grid computing. Gang scheduling, in contrast, is a scheduling algorithm used for parallel systems: it schedules related threads or processes to run simultaneously on different processors. The scheduled threads usually belong to the same process, but in some cases they come from different processes, for example when the processes have a producer-consumer relationship or when all processes come from the same MPI program.
Hybrid Model using Unsupervised Filtering Based on Ant Colony Optimization an...IRJET Journal
This document proposes a hybrid model for medical data mining that uses unsupervised filtering followed by ant colony optimization and multiclass support vector machines. It first discusses data mining and describes ant colony optimization, random forests, and ant colony decision trees. It then explains the proposed hybrid model, which applies unsupervised filtering techniques to raw medical data before using ant colony optimization to build a decision tree. Finally, it briefly introduces multiclass support vector machines as the final component of the hybrid model. The overall goal is to extract useful information and patterns from medical data using this combined approach.
This document summarizes ant colony optimization algorithms for solving the traveling salesman problem. It first discusses how real ant behavior inspired the development of artificial ant colony algorithms. It then reviews different ant colony optimization algorithms for the traveling salesman problem, including the ant system, elitist ant system, rank-based ant system, min-max ant system, and ant colony system algorithms. Finally, it briefly discusses parallel implementations of ant colony algorithms.
This document summarizes ant colony optimization algorithms for solving the traveling salesman problem. It first discusses how real ant behavior inspired the development of artificial ant colony algorithms. It then describes how the ant colony optimization metaheuristic can be applied to solve combinatorial optimization problems like the traveling salesman problem. Finally, it reviews several specific ant colony algorithms for the traveling salesman problem, including Ant System, Elitist Ant System, Rank-Based Ant System, Min-Max Ant System, Ant Colony System, and Approximate Nondeterministic Tree Search.
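As a rough illustration of the basic Ant System scheme reviewed above (probabilistic tour construction biased by pheromone and inverse distance, followed by evaporation and deposit), here is a minimal sketch. The function name and parameter values are illustrative assumptions, not taken from any of the summarized papers.

```python
import random

def ant_system_step(dist, tau, alpha=1.0, beta=2.0, rho=0.5, n_ants=10):
    """One Ant System iteration on a symmetric TSP distance matrix `dist`:
    each ant builds a tour using pheromone (tau) and heuristic 1/distance,
    then pheromone evaporates and each ant deposits 1/tour_length."""
    n = len(dist)
    tours = []
    for _ in range(n_ants):
        tour = [random.randrange(n)]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            # roulette-wheel selection weighted by tau^alpha * (1/d)^beta
            weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                       for j in unvisited]
            total = sum(w for _, w in weights)
            r, acc = random.random() * total, 0.0
            for j, w in weights:
                acc += w
                if acc >= r:
                    break
            tour.append(j)
            unvisited.remove(j)
        tours.append(tour)
    # global update: evaporation, then deposit on each ant's tour edges
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour in tours:
        length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        for k in range(n):
            i, j = tour[k], tour[(k + 1) % n]
            tau[i][j] += 1.0 / length
            tau[j][i] += 1.0 / length
    return tours, tau
```

The elitist, rank-based, min-max and ACS variants mentioned above differ mainly in which ants are allowed to deposit and in how the deposit amount is bounded or weighted.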
Comparison Between Pid Controllers for Gryphon Robot Optimized With Neuro-Fuz...ijctcm
In this paper, three intelligent evolutionary optimization approaches to design a PID controller for a Gryphon robot are presented and compared to the results of an applied neuro-fuzzy system. The three applied approaches are artificial bee colony, shuffled frog leaping algorithm and particle swarm optimization. The design goal is to minimize the integral absolute error and improve the transient response by minimizing overshoot, settling time and rise time of the step response. An objective function of these indexes is defined and minimized by applying the four optimization methods mentioned above. After optimization of the objective function, the optimal parameters for the PID controller are adjusted. Simulation results show that FNN has a remarkable effect on decreasing settling time and rise time and eliminating steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing overshoot. On the other hand, PSO appears to perform well on steady-state error only. In the steady state, all of the methods react robustly to the disturbance, but FNN shows more stability in the transient response.
Ant Colony Optimization for Optimal Low-Pass State Variable Filter Sizing IJECEIAES
In analog filter design, discrete component values such as resistors (R) and capacitors (C) are selected from standard manufactured series of constant values. An exhaustive search over all possible combinations for an optimized design is not feasible. In this paper, we present an application of the Ant Colony Optimization (ACO) technique to select optimal values of resistors and capacitors from different manufactured series to satisfy the filter design criteria. Three variants of Ant Colony Optimization are applied, namely the AS (Ant System), the MMAS (Min-Max AS) and the ACS (Ant Colony System), for the optimal sizing of the low-pass state variable filter. SPICE simulations are used to validate the obtained results and performances, which are compared with already published works.
The optimization of running queries in relational databases using ant colony ...ijdms
The issue of optimizing queries is a cost-sensitive process, and with respect to the number of associated tables in a query, its number of permutations grows exponentially. On one hand, in comparison with other operators in a relational database, the join operator is the most difficult and complicated one in terms of optimization for reducing its runtime. Accordingly, various algorithms have so far been proposed to solve this problem. On the other hand, the success of any database management system (DBMS) means exploiting the query model. In the current paper, the heuristic ant algorithm has been proposed to solve this problem and improve the runtime of the join operation. Experiments and observed results reveal the efficiency of this algorithm compared to similar algorithms.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
COMPARISON BETWEEN ARTIFICIAL BEE COLONY ALGORITHM, SHUFFLED FROG LEAPING ALG...cscpconf
In this paper three optimization approaches to design a PID controller for a Gryphon robot are presented. The three applied approaches are the Artificial Bee Colony and Shuffled Frog Leaping algorithms and a neuro-fuzzy system. The design goal is to minimize the integral absolute error and improve the transient response by minimizing overshoot, settling time and rise time of the step response. An objective function of these indexes is defined and minimized by applying the Shuffled Frog Leaping (SFL) algorithm, the Artificial Bee Colony (ABC) algorithm and a Neuro-Fuzzy System (FNN). After optimization of the objective function, the optimal parameters for the PID controller are adjusted. Simulation results show that FNN has a remarkable effect on decreasing settling time and rise time and eliminating steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing overshoot. In the steady state, all of the methods react robustly to the disturbance, but FNN shows more stability in the transient response.
The document discusses various optimization techniques and algorithms including genetic algorithms, artificial neural networks, and data analytics. Specifically, it covers genetic algorithms in more detail including the basic concepts of populations of chromosomes evolving over generations using processes like crossover, mutation, and selection to optimize an objective function. It also discusses other metaheuristic algorithms like simulated annealing, particle swarm optimization, and ant colony optimization which are inspired by natural processes and use stochastic components to find robust solutions.
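The genetic-algorithm loop described above (a population of chromosomes evolving over generations through selection, crossover and mutation) can be sketched briefly. The following is a minimal, illustrative bit-string GA; tournament selection and one-point crossover are assumed design choices, and the parameter values are arbitrary demonstration defaults.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=50,
                      p_cross=0.9, p_mut=0.02):
    """Minimal generational GA over bit-string chromosomes: tournament
    selection, one-point crossover, bit-flip mutation (illustrative)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if random.random() < p_cross:      # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                 # bit-flip mutation
                for i in range(n_bits):
                    if random.random() < p_mut:
                        c[i] ^= 1
            next_pop.extend([c1, c2])
        pop = next_pop[:pop_size]
        best = max(pop + [best], key=fitness)  # keep best-so-far
    return best
```

For instance, calling it with `fitness=sum` solves the classic "one-max" problem of maximizing the number of ones in the chromosome.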
The Effect of Updating the Local Pheromone on ACS Performance using Fuzzy Log...IJECEIAES
Fuzzy Logic Controller (FLC) has become one of the most frequently utilised algorithms for adapting metaheuristic parameters as an artificial intelligence technique. In this paper, the ξ parameter of the Ant Colony System (ACS) algorithm is adapted by the use of FLC, and its behaviour is studied during this adaptation. The proposed approach is compared with the standard ACS algorithm. Computational experiments are conducted on a library of sample instances for the Traveling Salesman Problem (TSPLIB).
A New Extraction Optimization Approach to Frequent 2 Itemsetsijcsa
In this paper, we propose a new optimization approach to the APRIORI reference algorithm (AGR 94) for 2-itemsets (sets of cardinality 2). We start by calculating the 1-itemset supports (cardinality-1 sets), then we prune the 1-itemsets that are not frequent and keep only those that are frequent (i.e. those whose supports are greater than or equal to a fixed minimum threshold). During the second iteration, we sort the frequent 1-itemsets in descending order of their respective supports and then form the 2-itemsets. In this way the association rules are discovered more quickly. Experimentally, the comparison of our algorithm OPTI2I with APRIORI, PASCAL, CLOSE and MAX-MINER shows its efficiency on weakly correlated data. Our work has also led to a classical model of side-by-side classification of items, which we obtained by establishing a relationship between the different sets of 2-itemsets.
A NEW EXTRACTION OPTIMIZATION APPROACH TO FREQUENT 2 ITEMSETSijcsa
This document presents a new optimization approach for extracting frequent 2-itemsets from transactional databases. The approach sorts frequent 1-itemsets by support before generating 2-itemsets, aiming to discover association rules more quickly. Experiments show the proposed OPTI2I algorithm performs efficiently on weakly correlated data compared to APRIORI, PASCAL, CLOSE and MAX-MINER. The work also presents a model for side-by-side classification of items based on relationships between 2-itemsets, which could help arrange products in large stores.
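The steps described above (count 1-itemset supports, prune infrequent items, sort the survivors by descending support, then form and count the 2-itemsets) can be sketched as follows. This is a minimal illustrative reading of the described procedure, not the authors' OPTI2I code; the function and variable names are hypothetical.

```python
from itertools import combinations
from collections import Counter

def frequent_2_itemsets(transactions, min_support):
    """Sketch of the described approach: count 1-itemset supports, prune
    infrequent items, sort survivors by descending support, then count
    candidate 2-itemsets against the transactions."""
    # first pass: 1-itemset support counts
    item_counts = Counter(item for t in transactions for item in set(t))
    frequent_1 = [item for item, c in item_counts.items()
                  if c >= min_support]
    # sort frequent 1-itemsets by descending support, as described above
    frequent_1.sort(key=lambda item: item_counts[item], reverse=True)
    # second pass: count candidate pairs formed from frequent items
    pair_counts = Counter()
    for t in transactions:
        items = [i for i in frequent_1 if i in t]
        for pair in combinations(items, 2):
            pair_counts[pair] += 1
    return {pair: c for pair, c in pair_counts.items()
            if c >= min_support}
```

Pruning before pair generation is the standard Apriori property at work: a 2-itemset can only be frequent if both of its items are frequent on their own.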
Due to the availability of the internet and the evolution of embedded devices, the Internet of Things can contribute usefully to the energy domain. The Internet of Things (IoT) will deliver a smarter grid, enabling more information and connectivity throughout the infrastructure and into homes. Through the IoT, consumers, manufacturers and utility providers will discover new ways to manage devices and ultimately conserve resources and save money by using smart meters, home gateways, smart plugs and connected appliances. In the future smart home, various devices will be able to measure and share their energy consumption and actively participate in house-wide or building-wide energy management systems. This paper discusses the different approaches being taken worldwide to connect the smart grid. Full system solutions can be developed by combining hardware and software to address some of the challenges in building a smarter and more connected grid.
A Survey Report on : Security & Challenges in Internet of Thingsijsrd.com
In the era of computing technology, Internet of Things (IoT) devices are now popular in every domain, such as e-governance, e-health, e-home, e-commerce and e-trafficking. IoT is spreading from small to large applications in all fields, including smart cities, smart grids and smart transportation. On one side, IoT provides facilities and services for society; on the other hand, IoT security is a crucial issue. IoT security is the area concerned with securing the connected devices and networks in the IoT. IoT is a vast area, with usability, performance, security and reliability as its major challenges. The growth of the IoT is increasing exponentially, driven by market pressures, which proportionally increases the security threats involved in IoT. The relationship between security and the billions of devices connecting to the Internet cannot be described with existing mathematical methods. In this paper, we explore the opportunities possible in the IoT along with the security threats and challenges associated with it.
In today's emerging world of the Internet, everything is expected to be connected with the help of billions of smart devices. Connecting all the devices used in our day-to-day life makes life easier and trouble-free. We live in a world of smart phones, smart cars, smart gadgets, smart homes and smart cities. Different institutes and researchers are working to create a smart world for us, but the real question we need to emphasize is how to make dumb devices talk across disparate hardware and communication technologies, and what kind of mechanism to use with various protocols and less human interaction. The purpose is to identify the key application areas of IoT and a platform on which various devices with different mechanisms and protocols can communicate through an integrated architecture.
Study on Issues in Managing and Protecting Data of IOTijsrd.com
This paper discusses a variety of issues in preserving and managing data produced by IoT. Every second, large amounts of data are added or updated in IoT databases across heterogeneous environments. While managing the data, each phase of data processing for IoT data is exigent: storing, querying, indexing, transaction management and failure handling. We also refer to the problem of data integration and protection, as data need to fit a single layout and travel securely as they arrive in the pool from diversified sources in different structures. Finally, we present a standardized pathway to manage and defend data in a consistent manner.
Interactive Technologies for Improving Quality of Education to Build Collabor...ijsrd.com
Today with advancement in Information Communication Technology (ICT) the way the education is being delivered is seeing a paradigm shift from boring classroom lectures to interactive applications such as 2-D and 3-D learning content, animations, live videos, response systems, interactive panels, education games, virtual laboratories and collaborative research (data gathering and analysis) etc. Engineering is emerging with more innovative solutions in the field of education and bringing out their innovative products to improve education delivery. The academic institutes which were once hesitant to use such technology are now looking forward to such innovations. They are adopting the new ways as they are realizing the vast benefits of using such methods and technology. The benefits are better comprehensibility, improved learning efficiency of students, and access to vast knowledge resources, geographical reach, quick feedback, accountability and quality research. This paper focuses on how engineering can leverage the latest technology and build a collaborative learning environment which can then be integrated with the national e-learning grid.
Internet of Things - Paradigm Shift of Future Internet Application for Specia...ijsrd.com
More than 15% of people in the world are living with a disability, including children below the age of 10. Due to a lack of independent support services, specially abled (handicapped) people rely heavily on other people for their basic needs, which excludes them from being financially and socially active. The Internet of Things (IoT) can provide a support system and a better quality of life, as well as participation in routine day-to-day life. For this purpose, future solutions for current problems are introduced in this paper. Daunting challenges are considered as future research, and a glimpse of the IoT for specially abled persons is given in the paper.
A Study of the Adverse Effects of IoT on Student's Lifeijsrd.com
The Internet of Things (IoT) is a powerful invention and, if used in a positive direction, the internet can prove to be very productive. But nowadays, due to social networking sites such as Facebook, WhatsApp, Twitter, Hike etc., the internet is producing adverse effects on student life, especially for students studying at college level. As it is rightly said, something which has positive effects also has some negative effects. In this article, we discuss some adverse effects of IoT on students' lives.
Pedagogy for Effective use of ICT in English Language Learningijsrd.com
The use of information and communications technology (ICT) in education is a relatively new phenomenon, and it has been the focus of educational researchers' attention for more than two decades. Educators and researchers examine the challenges of using ICT and think of new ways to integrate ICT into the curriculum. However, there are some barriers that prevent teachers from using ICT in the classroom and developing supporting materials through ICT. The purpose of this study is to examine high school English teachers' perceptions of the factors discouraging them from using ICT in the classroom.
In recent years, the usage of private vehicles has made urban traffic more and more crowded. As a result, traffic has become one of the most important problems in big cities all over the world. Some traffic concerns are traffic jams and accidents, which cause a huge waste of time, more fuel consumption and more pollution. Time is a very important parameter in routine life, and the main problem faced by people is real-time routing. Our solution, Virtual Eye, will provide current updates in the real-time scenario of a specific route. This research paper presents a smart traffic navigation system based on the Internet of Things, featuring low cost, high compatibility and easy upgrades, to replace traditional traffic management systems; the proposed system can improve road traffic tremendously.
Ontological Model of Educational Programs in Computer Science (Bachelor and M...ijsrd.com
In this work an ontological model of educational programs in computer science is illustrated, for bachelor and master degrees in Computer Science and for the master educational program "Computer Science as a Second Competence" of the Tempus project PROMIS.
Understanding IoT Management for Smart Refrigeratorijsrd.com
1) The document discusses a proposed design for an intelligent refrigerator that leverages sensor technology and wireless communication to identify food items and order more through an internet connection when supplies are low.
2) Key aspects of the proposal include using RFID to uniquely identify each food item, storing item and usage data in an XML database, monitoring usage patterns to determine reordering needs, and executing orders through an online retailer using stored payment details.
3) Security and privacy concerns with such an internet-connected refrigerator are discussed, such as potential hacking of personal information or unauthorized device control. The proposal aims to minimize human interaction for household management.
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...ijsrd.com
Double wishbone designs allow the engineer to carefully control the motion of the wheel throughout suspension travel. 3-D model of the Lower Wishbone Arm is prepared by using CAD software for modal and stress analysis. The forces and moments are used as the boundary conditions for finite element model of the wishbone arm. By using these boundary conditions static analysis is carried out. Then making the load as a function of time; quasi-static analysis of the wishbone arm is carried out. A finite element based optimization is used to optimize the design of lower wishbone arm. Topology optimization and material optimization techniques are used to optimize lower wishbone arm design.
A Review: Microwave Energy for materials processingijsrd.com
Microwave energy is one of the fastest-growing techniques for material processing. This paper presents a review of microwave technologies used for material processing and their use in industrial applications. Advantages of using microwave energy for processing materials include rapid heating, high heating efficiency, heating uniformity and clean energy. Microwave heating has various characteristics that have made it popular for applications ranging from low temperature to high temperature. In recent years this novel technique has been successfully utilized for the processing of metallic materials, and many researchers have reported the use of microwave energy for sintering, joining and cladding of metallic materials. The aim of this paper is to show the use of microwave energy not only for non-metallic materials but also for metallic materials. The ability to process metals with microwaves could assist in the manufacturing of high-performance metal parts desired in many industries, for example the automotive and aeronautical industries.
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logsijsrd.com
With the exponential growth of the World Wide Web, there is so much information overload that it has become hard to find data according to need. Web usage mining is a part of web mining which deals with the automatic discovery of user navigation patterns from web logs. This paper presents an overview of web mining and also derives navigation patterns from classification and clustering algorithms for web usage mining. Web usage mining contains three important tasks, namely data preprocessing, pattern discovery and pattern analysis based on the discovered patterns. It also contains a comparative study of web mining techniques.
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMijsrd.com
The application of the FACTS controller called the Static Synchronous Compensator (STATCOM) to improve the performance of a power grid with wind farms is investigated. The essential feature of the STATCOM is its ability to quickly absorb or inject reactive power into the power grid; therefore, voltage regulation of the power grid is achieved with the STATCOM FACTS device. Moreover, restoring the stability of a power system with a wind farm after a severe disturbance, such as a fault or a variation in wind-farm mechanical power, is obtained with the STATCOM controller. A dynamic model of the power system with a wind farm controlled by the proposed STATCOM is developed. To validate the capability of the STATCOM FACTS controller, the studied power system is simulated and subjected to different severe disturbances. The results prove the effectiveness of the proposed STATCOM controller in terms of quickly damping power system oscillations and restoring power system stability.
Making model of dual axis solar tracking with Maximum Power Point Trackingijsrd.com
Nowadays solar harvesting is increasingly popular, and as its popularity grows, material quality and solar tracking methods continue to improve. Several factors affect a solar system; the major influences on a solar cell are the intensity of source radiation and the storage techniques. The materials used in solar cell manufacturing limit the efficiency of the cell, which makes it particularly difficult to achieve considerable improvements in cell performance and hence restricts the efficiency of the overall collection process. Therefore, the most attainable way of improving the performance of solar power collection is to increase the mean intensity of radiation received from the source. The proposed tracking system controls the elevation and orientation angles of the solar panels so that the panels always remain perpendicular to the sunlight. The measured variables of our automatic system were compared with those of a fixed-angle PV system. As a result of the experiment, the voltage generated by the proposed tracking system was overall about 28.11% higher than that of the fixed-angle PV system. There are three major approaches for maximizing power extraction in medium- and large-scale systems: sun tracking, maximum power point (MPP) tracking, or both.
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...ijsrd.com
This document summarizes a review paper on performance and emission testing of a 4-stroke diesel engine using ethanol-diesel blends at different pressures. The paper reviews several previous studies that tested blends of 5-30% ethanol mixed with diesel fuel. The studies found that a 10-20% ethanol blend can improve brake thermal efficiency compared to pure diesel, while also reducing emissions like NOx and smoke. Higher ethanol blends required advancing the injection timing to allow the engine to run. Ethanol-diesel blends were found to have lower density, viscosity and pour point and a higher flash point compared to pure diesel. Overall, ethanol shows potential as a renewable fuel to improve engine performance and reduce emissions when blended with diesel.
Study and Review on Various Current Comparatorsijsrd.com
This paper presents a study and review of various current comparators. It also describes a low-voltage current comparator using a flipped voltage follower (FVF) to operate from a single supply voltage. This circuit has a short propagation delay and occupies a small chip area compared to other current comparators. The results for this circuit were obtained using the PSpice simulator for 0.18 μm CMOS technology, and a comparison has been performed with its non-FVF counterpart to demonstrate its effectiveness, simplicity, compactness and low power consumption.
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...ijsrd.com
Power dissipation is a challenging problem for today's system-on-chip design and test. This paper presents a novel architecture which generates test patterns with reduced switching activity; it has the advantages of low test power and low hardware overhead. The proposed LP-TPG (test pattern generator) structure consists of a modified low-power linear feedback shift register (LP-LFSR), an m-bit counter, a gray counter, a NOR-gate structure and an XOR array. The seed generated from the LP-LFSR is XORed with the data generated from the gray code generator. The resulting sequence is a single-input-changing (SIC) sequence, which in turn reduces the switching activity, so power dissipation is very low. The proposed architecture is simulated using ModelSim and synthesized using Xilinx ISE 9.2. The Xilinx ChipScope tool is used to test the logic running on the FPGA.
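The key property claimed above, that XOR-ing a fixed seed with a Gray-code count yields a single-input-changing sequence, can be checked with a tiny software model: XOR with a constant preserves Hamming distance, and successive Gray codes differ in exactly one bit. This is only an illustrative sketch of that one idea; it does not model the paper's LP-LFSR reseeding, counter or NOR-gate logic, and the function names are hypothetical.

```python
def gray(n):
    """Standard binary-to-Gray conversion: successive codes differ
    in exactly one bit position."""
    return n ^ (n >> 1)

def sic_sequence(seed, width=8, length=16):
    """XOR a fixed seed with a Gray-code count; since XOR with a
    constant preserves Hamming distance, consecutive patterns still
    differ in exactly one bit (the SIC property)."""
    mask = (1 << width) - 1
    return [(seed ^ gray(i)) & mask for i in range(length)]
```

Because each applied test pattern flips only one scan input relative to the previous one, the circuit's switching activity, and hence dynamic test power, stays low.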
Defending Reactive Jammers in WSN using a Trigger Identification Service.ijsrd.com
In the last decade, the greatest threat to wireless sensor networks has been the reactive jamming attack, because it is difficult to disclose and defend against and because of its mass destruction of legitimate sensor communications. To deactivate reactive jammer nodes efficiently, a new scheme has been proposed and developed that identifies all trigger nodes, i.e. nodes whose transmissions invoke the jammer nodes. Thanks to this identification mechanism, many existing reactive jamming defense schemes can benefit, and trigger identification can also serve as an application-layer service. In this paper, on one side we formulate several optimization problems to provide a complete trigger identification service framework for unreliable wireless sensor networks, and on the other side we provide an improved algorithm with regard to two sophisticated jamming models, in order to enhance its robustness for various network scenarios.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
A new move towards updating pheromone trail in order to gain increased predictive accuracy in classification rule mining by implementing ACO algorithm
IJSRD - International Journal for Scientific Research & Development | Vol. 1, Issue 2, 2013 | ISSN (online): 2321-0613
All rights reserved by www.ijsrd.com 119
A new move towards updating pheromone trail in order to gain increased predictive accuracy in classification rule mining by implementing ACO algorithm

Vimal G. Bhatt¹  Priyanka Trikha²
¹M.Tech Student, Computer Science Department
²Assistant Professor, Computer Science Department
¹,²Sri Balaji College of Engineering & Technology, Jaipur, Rajasthan
Abstract— The Ant-Miner algorithm discovers classification rules that are used to classify data. It is based on Ant Colony Optimization (ACO), which deals with artificial systems inspired by the foraging behavior of real ants. The task addressed here is to improve the pheromone update method of the current system. Pheromone updating depends mainly on the initial pheromone of a term and on Q (the quality of the rule), which is added to the currently accumulated pheromone. The proposed method lays pheromone on the trail so that the selection of terms is not biased, giving the system a unified behavior as a whole and producing a robust system capable of finding high-quality solutions. In this approach the amount of pheromone added to a used term depends not only on the accumulated pheromone but also on Q. Q is therefore modified, using the rule length, in a way that helps the search reach better solutions. The aim is thus to improve predictive accuracy while sustaining rule list simplicity, using Ant Colony Optimization in data mining.
Keywords: ACO algorithm; current algorithm (C-Ant-Miner); proposed algorithm (new pheromone update algorithm)
I. INTRODUCTION
Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity for major revenues. One of the data mining tasks gaining significant attention is the extraction of classification rules from databases. The goal of this task is to assign each case (object, record or instance) to one class, out of a set of predefined classes, based on the values of some attributes of the case. Different classification algorithms are used to extract relevant relationships from the data, such as decision trees, which operate by successively partitioning the cases until each subset belongs to a single class. There have been many approaches to data classification, such as statistical and rough set approaches and neural networks. Though these classification techniques are algorithmically strong, they require significant expertise to work effectively and do not produce intelligible rules. The classification problem becomes very hard when the number of possible combinations of parameters is so high that algorithms based on exhaustive searches of the parameter space rapidly become computationally infeasible. In recent years, the Ant Colony System (ACS) algorithm (Dorigo & Maniezzo, 1996) has emerged as a promising technique to discover useful and interesting knowledge from databases.
II. ANT COLONY OPTIMIZATION
In a series of experiments on a colony of ants given a choice between two unequal-length paths leading to a food source, biologists observed that the ants tended to use the shortest route. A model explaining this behavior is as follows:
1. An ant (called "blitz") runs more or less at random around the colony;
2. If it discovers a food source, it returns more or less directly to the nest, leaving in its path a trail of pheromone;
3. These pheromones are attractive, so nearby ants will be inclined to follow, more or less directly, the track;
4. Returning to the colony, these ants will strengthen the route;
5. If there are two routes to the same food source then, in a given amount of time, the shorter one will be travelled by more ants than the long route;
6. The short route will be increasingly enhanced, and therefore become more attractive;
7. The long route will eventually disappear because pheromones are volatile;
8. Eventually, all the ants have determined and therefore "chosen" the shortest route.
The idea of ACO is to let artificial ants construct solutions for a given combinatorial optimization problem. A prerequisite for designing an ACO algorithm is to have a constructive method which can be used by an ant to create different solutions through a sequence of decisions. Typically an ant constructs a solution by a sequence of probabilistic decisions, where every decision extends a partial solution by adding a new solution component, until a complete solution is obtained.
ACO scheme:
Initialize pheromone values
repeat
  for ant k ∈ {1, . . ., m}
    construct a solution
  endfor
  forall pheromone values do
    decrease the value by a certain percentage {evaporation}
  endfor
  forall pheromone values corresponding to good solutions do
    increase the value {intensification}
  endfor
until stopping criterion is met
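The scheme above can be sketched in Python on the two-path toy problem from the ant experiments. All names, parameter values, and the deposit rule here are illustrative assumptions, not part of the paper:

```python
import random

# Illustrative sketch of the generic ACO scheme applied to the two-path
# toy problem: ants choose between a short and a long route, and pheromone
# on the shorter route accumulates faster through positive feedback.
PATHS = {"short": 1.0, "long": 2.0}   # path lengths (assumed values)
EVAPORATION = 0.1                      # fraction evaporated per iteration

def run_aco(n_ants=10, n_iterations=50, seed=0):
    rng = random.Random(seed)
    pheromone = {p: 1.0 for p in PATHS}          # initialize pheromone values
    for _ in range(n_iterations):
        # each ant constructs a solution: pick a path with probability
        # proportional to its current pheromone (roulette-wheel selection)
        choices = []
        for _ in range(n_ants):
            total = sum(pheromone.values())
            r = rng.uniform(0, total)
            acc = 0.0
            for path, tau in pheromone.items():
                acc += tau
                if r <= acc:
                    choices.append(path)
                    break
        # evaporation: decrease all pheromone values by a percentage
        for path in pheromone:
            pheromone[path] *= (1 - EVAPORATION)
        # intensification: shorter solutions deposit more pheromone
        for path in choices:
            pheromone[path] += 1.0 / PATHS[path]
    return pheromone

pheromone = run_aco()
```

Because the short route receives a larger deposit per ant, the feedback loop drives most pheromone onto it, mirroring steps 5 to 8 of the model above.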
III. CURRENT SYSTEM
3.1. Algorithm of the current system
In this section we discuss in detail the Ant Colony Optimization algorithm for the discovery of classification rules, called C-Ant-Miner [3].
Input parameters:
1. Number of ants (No_of_ants)
2. Minimum number of cases per rule (Min_cases_per_rule)
3. Maximum number of uncovered cases in the training set (Max_uncovered_cases)
4. Number of rules used to test convergence of the ants (No_rules_converg)
Existing C-Ant-Miner algorithm:
procedure C-Ant-Miner
begin
  while stopping criterion not satisfied do
    Initialize the pheromone
    repeat
      Start an ant with an empty rule and begin rule construction
      for each ant do
        Choose the next term to be added to the rule
      end for
      Remove irrelevant terms added during rule construction
      Update the pheromone (accumulated pheromone is multiplied by Q)
    end repeat
    Choose the best rule from the constructed rules
    Add the best rule to the discovered rule list
    Reinitialize the training set
  end while
end
OUTPUT: a classification rule that classifies data according to the predetermined class
General Description of C-Ant-Miner
In an ACO algorithm each ant incrementally constructs/modifies a solution for the target problem. Here the target problem is the discovery of classification rules. As discussed in the introduction, each classification rule has the form:
IF <term1 AND term2 AND ...> THEN <class>
Each term is a triple <attribute, operator, value>, where value is a value belonging to the domain of attribute. The operator element in the triple is a relational operator. This version of Ant-Miner copes only with categorical attributes, so the operator element in the triple is always "=". Continuous (real-valued) attributes are discretized in a preprocessing step.
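The rule form just described can be sketched as a small data structure. The attribute names, values, and helper functions below are purely illustrative, not from the paper:

```python
# Illustrative sketch of the rule representation: each term is a triple
# <attribute, operator, value>; with categorical attributes the operator
# is always "=".
def make_term(attribute, value):
    return (attribute, "=", value)

def rule_covers(terms, case):
    # a case satisfies the rule antecedent when every term matches it
    return all(case.get(attr) == value for attr, _, value in terms)

# IF <Outlook = sunny AND Windy = no> THEN <Play>
rule_terms = [make_term("Outlook", "sunny"), make_term("Windy", "no")]
rule_class = "Play"

covers = rule_covers(rule_terms, {"Outlook": "sunny", "Windy": "no", "Temp": "mild"})
```

Note that an attribute appears at most once per rule, which the construction procedure below enforces to avoid invalid antecedents.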
The pseudo code for the existing C-Ant-Miner algorithm is given above. Ant-Miner follows a sequential covering approach to discover a list of classification rules covering all, or almost all, of the training cases. At first, the list of discovered rules is empty and the training set consists of all the training cases. Each iteration of the WHILE loop of the pseudo code, corresponding to a number of executions of the REPEAT-UNTIL loop, discovers one classification rule. This rule is added to the list of discovered rules, and the training cases that are correctly covered by this rule (i.e., cases satisfying the rule antecedent and having the class predicted by the rule consequent) are removed from the training set. This process is performed iteratively while the number of uncovered training cases is greater than a user-specified threshold, called Max_uncovered_cases. Each iteration of the REPEAT-UNTIL loop of the pseudo code consists of three steps: rule construction, rule pruning, and pheromone updating, detailed as follows. First, Ant_t starts with an empty rule, that is, a rule with no term in its antecedent, and adds one term at a time to its current partial rule. The current partial rule constructed by an ant corresponds to the current partial path followed by that ant. Similarly, the choice of a term to be added to the current partial rule corresponds to the choice of the direction in which the current path will be extended. The choice of the term to be added depends both on a problem-dependent heuristic function (η) and on the amount of pheromone (τ) associated with each term, as discussed in detail in the next subsections. Ant_t keeps adding one term at a time to its current partial rule until one of the following two stopping criteria is met:
1. Any term to be added to the rule would make the rule cover a number of cases smaller than a user-specified threshold, called Min_cases_per_rule (minimum number of cases covered per rule).
2. All attributes have already been used by the ant, so that there are no more attributes to be added to the rule antecedent. Note that each attribute can occur only once in each rule, to avoid invalid rules such as "IF (Sex = male) AND (Sex = female) ..."
Second, the rule R_t constructed by Ant_t is pruned in order to remove irrelevant terms, as discussed later. For the moment, we only mention that these irrelevant terms may have been included in the rule due to stochastic variations in the term selection procedure and/or due to the use of a shortsighted, local heuristic function, which considers only one attribute at a time and ignores attribute interactions. Third, the amount of pheromone in each trail is updated, increasing the pheromone in the trail followed by Ant_t (according to the quality of rule R_t) and decreasing the pheromone in the other trails (simulating pheromone evaporation). Then another ant starts to construct its rule, using the new amounts of pheromone to guide its search. This process is repeated until one of the following two conditions is met:
1. The number of constructed rules is equal to or greater than the user-specified threshold No_of_ants.
2. The current Ant_t has constructed a rule that is exactly the same as the rule constructed by the previous No_rules_converg − 1 ants, where No_rules_converg stands for the number of rules used to test convergence of the ants.
Once the REPEAT-UNTIL loop is completed, the best rule among the rules constructed by all ants is added to the list of discovered rules, as mentioned earlier, and the system starts a new iteration of the WHILE loop by reinitializing all trails with the same amount of pheromone.
A detailed description of each phase of the algorithm follows.
A) Pheromone Initialization
A term_ij corresponds to a segment in some path that can be followed by an ant. When the first ant starts its search, all paths have the same amount of pheromone. The initial amount of pheromone deposited at each path is inversely proportional to the number of values of all attributes, and is defined by

τ_ij(t = 0) = 1 / (b_1 + b_2 + ... + b_a)    (1)

where a is the total number of attributes and b_i is the number of values in the domain of attribute i. The value returned by this equation is normalized to facilitate its use in Equation (8).
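Equation (1) can be sketched in Python; the attribute domains below are illustrative assumptions:

```python
# Illustrative sketch of Equation (1): every term_ij starts with pheromone
# equal to 1 divided by the total number of attribute values.
def initial_pheromone(domains):
    """domains: dict mapping attribute name -> list of its values."""
    total_values = sum(len(values) for values in domains.values())  # sum of b_i
    return {(attr, v): 1.0 / total_values
            for attr, values in domains.items() for v in values}

# assumed toy domains: a = 2 attributes, b_1 = 3 and b_2 = 2 values
domains = {"Outlook": ["sunny", "overcast", "rain"], "Windy": ["yes", "no"]}
tau = initial_pheromone(domains)
```

With five attribute values in total, every term starts with pheromone 1/5, so the initial amounts already sum to 1 and are trivially normalized.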
B) Discretization of Continuous Values
Dynamic discretization of continuous values is done by an entropy-based discretization method. For a nominal attribute, where every term_ij has the form (a_i = v_ij), the entropy for the attribute-value pair is computed as in (2), as used in the original Ant-Miner:

H(c | a_i = v_ij) = − Σ_{c=1..k} p(c | a_i = v_ij) · log2 p(c | a_i = v_ij)    (2)

where p(c | a_i = v_ij) is the empirical probability of observing class c conditional on having observed a_i = v_ij, and k is the number of classes. To compute the entropy of nodes representing continuous attributes (term_i), since these nodes do not represent an attribute-value pair, we need to select a threshold value v to dynamically partition the continuous attribute a_i into two intervals: a_i < v and a_i >= v. The best threshold value is the value v that minimizes the entropy of the partition:

entropy(a_i, v) = (|S_{a_i<v}| / |S|) · H(S_{a_i<v}) + (|S_{a_i>=v}| / |S|) · H(S_{a_i>=v})

where |S_{a_i<v}| is the total number of examples in the partition a_i < v (training examples where attribute a_i has a value less than v), |S_{a_i>=v}| is the total number of examples in the partition a_i >= v (training examples where attribute a_i has a value greater than or equal to v), and |S| is the total number of training examples. After the selection of the threshold v_best, the entropy of term_i corresponds to the minimum entropy value of the two partitions and is defined as [3]:

entropy(term_i) = min( H(S_{a_i<v_best}), H(S_{a_i>=v_best}) )
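The threshold-selection step above can be sketched as follows, under the assumption of a simple exhaustive scan over candidate values; the data and function names are illustrative:

```python
import math

# Illustrative sketch of entropy-based threshold selection: the chosen
# threshold v minimizes the weighted entropy of the two partitions
# a_i < v and a_i >= v.
def entropy(labels):
    n = len(labels)
    result = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        result -= p * math.log2(p)
    return result

def best_threshold(values, labels):
    best_v, best_e = None, float("inf")
    for v in sorted(set(values))[1:]:          # candidate thresholds
        left = [c for x, c in zip(values, labels) if x < v]
        right = [c for x, c in zip(values, labels) if x >= v]
        e = (len(left) / len(values)) * entropy(left) \
          + (len(right) / len(values)) * entropy(right)
        if e < best_e:
            best_v, best_e = v, e
    return best_v, best_e

# assumed toy data: two pure clusters separated at value 10
values = [1, 2, 3, 10, 11, 12]
labels = ["a", "a", "a", "b", "b", "b"]
v, e = best_threshold(values, labels)
```

On this toy data the scan finds the separating value 10, where both partitions are pure and the weighted entropy drops to zero.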
C) Term Added to the Current Partial Rule
Let a term_ij be a rule condition of the form A_i = V_ij, where A_i is the i-th attribute and V_ij is the j-th value of the domain of A_i. The probability that term_ij is chosen to be added to the current partial rule is given by:

P_ij(t) = ( η_ij · τ_ij(t) ) / ( Σ_{i=1..a} x_i · Σ_{j=1..b_i} ( η_ij · τ_ij(t) ) )

where η_ij is the value of a problem-dependent heuristic function for term_ij. The higher the value of η_ij, the more relevant for classification term_ij is, and so the higher its probability of being chosen. The problem-dependent heuristic function is discussed in the next subsection. τ_ij(t) is the amount of pheromone associated with term_ij at iteration t, corresponding to the amount of pheromone currently available in position i, j of the path being followed by the current ant. The better the quality of the rule constructed by an ant, the higher the amount of pheromone added to the trail segments visited by that ant. Therefore, as time goes by, the best trail segments to be followed, that is, the best terms (attribute-value pairs) to be added to a rule, will have greater and greater amounts of pheromone, increasing their probability of being chosen. a is the total number of attributes; x_i is set to 1 if attribute A_i has not yet been used by the current ant, and to 0 otherwise; b_i is the number of values in the domain of the i-th attribute.
D) Heuristic Function
For each term_ij that can be added to the current rule, Ant-Miner computes the value η_ij of a heuristic function that is an estimate of the quality of this term, with respect to its ability to improve the predictive accuracy of the rule. More precisely, the value of η_ij for term_ij involves a measure of the entropy (or amount of information) associated with that term. For each term_ij of the form A_i = V_ij, where A_i is the i-th attribute and V_ij is the j-th value belonging to the domain of A_i, its entropy is:

H(W | A_i = V_ij) = − Σ_{w=1..k} P(w | A_i = V_ij) · log2 P(w | A_i = V_ij)

where W is the class attribute (i.e., the attribute whose domain consists of the classes to be predicted), k is the number of classes, and P(w | A_i = V_ij) is the empirical probability of observing class w conditional on having observed A_i = V_ij. The higher the value of H(W | A_i = V_ij), the more uniformly distributed the classes are and so the smaller the probability that the current ant chooses to add term_ij to its partial rule. It is desirable to normalize the value of the heuristic function to facilitate its use in Equation (8). To implement this normalization, we use the fact that the value of H(W | A_i = V_ij) varies in the range 0 ≤ H(W | A_i = V_ij) ≤ log2 k, where k is the number of classes. Therefore, the proposed normalized, information-theoretic heuristic function is:

η_ij = ( log2 k − H(W | A_i = V_ij) ) / ( Σ_{i=1..a} x_i · Σ_{j=1..b_i} ( log2 k − H(W | A_i = V_ij) ) )

where a, x_i, and b_i have the same meaning as in Equation (8). Note that the H(W | A_i = V_ij) of term_ij is always the same, regardless of the contents of the rule in which the term occurs.
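The entropy measure H(W | A_i = V_ij) underlying this heuristic, together with its two boundary cases (a value absent from the training set gets the maximum log2 k; a pure class distribution gets 0), can be sketched as follows; the training cases are illustrative:

```python
import math

# Illustrative sketch of the entropy measure behind the heuristic function,
# including the two caveats for unseen values and pure partitions.
def term_entropy(cases, attr, value, classes):
    k = len(classes)
    covered = [c["class"] for c in cases if c[attr] == value]
    if not covered:                 # caveat 1: value never occurs -> log2(k)
        return math.log2(k)
    if len(set(covered)) == 1:      # caveat 2: all cases in one class -> 0
        return 0.0
    h = 0.0
    for w in classes:
        p = covered.count(w) / len(covered)
        if p > 0:
            h -= p * math.log2(p)
    return h

# assumed toy training set with k = 2 classes
cases = [
    {"Windy": "yes", "class": "stay"},
    {"Windy": "yes", "class": "stay"},
    {"Windy": "no",  "class": "play"},
    {"Windy": "no",  "class": "stay"},
]
classes = ["play", "stay"]
h_pure = term_entropy(cases, "Windy", "yes", classes)    # pure partition
h_mixed = term_entropy(cases, "Windy", "no", classes)    # 50/50 split
h_unseen = term_entropy(cases, "Windy", "maybe", classes)
```

Lower entropy means a more informative term, so after the log2 k − H transformation the pure partition gets the highest heuristic value.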
Therefore, in order to save computational time, the H(W | A_i = V_ij) of all term_ij is computed as a preprocessing step.
The above heuristic function has just two minor caveats. First, if the value V_ij of attribute A_i does not occur in the training set, then H(W | A_i = V_ij) is set to its maximum value of log2 k. This corresponds to assigning to term_ij the lowest possible predictive power. Second, if all the cases belong to the same class, then H(W | A_i = V_ij) is set to 0. This corresponds to assigning to term_ij the highest possible predictive power. In Ant-Miner the entropy is computed for
an attribute-value pair only, since an attribute-value pair is chosen to expand the rule. The entropy measure is used as a heuristic function together with pheromone updating. This makes the rule-construction process of Ant-Miner more robust and less prone to getting trapped in local optima in the search space, since the feedback provided by pheromone updating helps to correct some of the mistakes made by the shortsightedness of the entropy measure. Note that the entropy measure is a local heuristic measure, which considers only one attribute at a time, and so is sensitive to attribute interaction problems. In contrast, pheromone updating tends to cope better with attribute interactions, since pheromone updating is directly based on the performance of the rule as a whole (which directly takes into account interactions among all attributes occurring in the rule).
E) Rule Pruning
The main goal of
rule pruning is to remove irrelevant terms that might have been unduly included in the rule. Rule pruning potentially increases the predictive power of the rule, helping to avoid overfitting to the training data. Another motivation for rule pruning is that it improves the simplicity of the rule, since a shorter rule is usually easier for the user to understand than a longer one. As soon as the current ant completes the construction of its rule, the rule pruning procedure is called. The basic idea of rule pruning is to iteratively remove one term at a time from the rule while this process
improves the quality of the rule. More precisely, in the first iteration one starts with the full rule. Then the removal of each of the terms of the rule is tentatively tried, each one in turn, and the quality of the resulting rule is computed using the rule-quality function (sensitivity × specificity, as in the original Ant-Miner):

Q = ( TP / (TP + FN) ) · ( TN / (FP + TN) )

where:
TP (true positives) is the number of cases covered by the rule that have the class predicted by the rule.
FP (false positives) is the number of cases covered by the rule that have a class different from the class predicted by the rule.
FN (false negatives) is the number of cases that are not covered by the rule but that have the class predicted by the rule.
TN (true negatives) is the number of cases that are not covered by the rule and that do not have the class
predicted by the rule. Q's value is within the range 0 ≤ Q ≤ 1 and, the larger the value of Q, the higher the quality of the rule. It should be noted that this step might involve replacing the class in the rule consequent, since the majority class in the cases covered by the pruned rule can be different from the majority class in the cases covered by the original rule.
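Given the four counts defined above, the quality measure can be sketched as follows (the counts passed in are illustrative):

```python
# Illustrative sketch of the rule-quality measure: sensitivity times
# specificity, guarded against empty denominators.
def rule_quality(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (fp + tn) if (fp + tn) else 0.0
    return sensitivity * specificity   # 0 <= Q <= 1

# assumed counts: a rule that covers 10 cases, 8 of them correctly
q = rule_quality(tp=8, fp=2, fn=2, tn=8)
```

Here Q = (8/10) · (8/10) = 0.64, and Q only reaches 1 when the rule covers every case of its class and nothing else.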
The term whose removal most improves the quality of the rule is effectively removed from it, completing the first iteration. In the next iteration, the term whose removal most improves the quality of the rule is again removed, and so on. This process is repeated until the rule has just one term or until there is no term whose removal will improve the quality of the rule.
F) Pheromone Updating
The initial
amount of pheromone deposited at each path position is
inversely proportional to the number of values of all
attributes, and is defined by (1). Whenever an ant constructs
its rule and that rule is pruned (see pseudo code) the amount
of pheromone in all segments of all paths must be updated.
This pheromone updating is supported by two basic ideas, namely:
1. The amount of pheromone associated with each
termij occurring in the rule found by the ant (after pruning)
is increased in proportion to the quality of that rule.
2. The amount of pheromone associated with each
termij that does not occur in the rule is decreased, simulating
pheromone evaporation in real ant colonies.
1) Increasing the Pheromone of Used Terms
Increasing the amount of pheromone associated with each term_ij occurring in the rule found by an ant corresponds to increasing the amount of pheromone along the path completed by the ant. In a rule discovery context, this corresponds to increasing the probability of term_ij being chosen by other ants in the future, in proportion to the quality of the rule. Pheromone updating for a term_ij is performed according to Equation (9), for all terms term_ij that occur in the rule:

τ_ij(t + 1) = τ_ij(t) + τ_ij(t) · Q,  ∀ i, j ∈ R    (9)

where R is the set of terms occurring in the rule constructed by the ant at iteration t. Therefore, for all term_ij occurring in the rule found by the current ant, the amount of pheromone is increased by a fraction of the current amount of pheromone, and this fraction is given by Q (according to (8)).
2) Decreasing the Pheromone of Unused Terms
As mentioned above, the amount of pheromone associated with each term_ij that does not occur in the rule found by the current ant has to be decreased in order to simulate pheromone evaporation in real ant colonies. In Ant-Miner,
pheromone evaporation is implemented in a somewhat
indirect way. More precisely, the effect of pheromone
evaporation for unused terms is achieved by normalizing the
value of each pheromone τij. This normalization is
performed by dividing the value of each τij by the
summation of all τij, ∀i,j. To see how this implements
pheromone evaporation, remember that only the terms used
by a rule have their amount of pheromone increased by
Equation 9. Therefore, at normalization time the amount of
pheromone of an unused term will be computed by dividing
its current value (not modified by (9)) by the total
summation of pheromone for all terms (which was increased
as a result of applying (9) to all used terms). The final effect
will be the reduction of the normalized amount of
pheromone for each unused term.
Used terms will, of course, have their normalized amount of
pheromone increased due to the application of (9).
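The two update steps, increasing used terms in proportion to Q and then normalizing all values to simulate evaporation, can be sketched as follows; the term names and values are illustrative:

```python
# Illustrative sketch of the pheromone update: used terms grow by a
# fraction Q of their current pheromone (Equation (9)), and dividing every
# value by the new total shrinks the share of unused terms (evaporation).
def update_pheromone(tau, used_terms, q):
    for term in used_terms:                 # Equation (9) for terms in R
        tau[term] += tau[term] * q
    total = sum(tau.values())               # normalization step
    return {term: value / total for term, value in tau.items()}

# assumed four terms starting with equal pheromone; one term used, Q = 0.64
tau = {"t1": 0.25, "t2": 0.25, "t3": 0.25, "t4": 0.25}
tau = update_pheromone(tau, used_terms=["t1"], q=0.64)
```

After the update the used term's normalized share rises above its initial 0.25 while every unused term's share falls below it, exactly the indirect evaporation described above.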
IV. PROPOSED SYSTEM
A. Introduction to the system
Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model. Ant Colony Optimization (ACO) deals with artificial systems that are inspired by the foraging behaviour of real ants. Data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information: information that can be used to find patterns and then to validate the findings by applying the detected patterns to new subsets of data. In order to achieve this, data mining uses computational techniques from statistics, machine learning and pattern recognition.
The aim is to improve predictive accuracy and to sustain or improve rule list simplicity for the existing method available with Ant Colony Optimization in data mining. This task is achieved by improving the pheromone update method of the current system. The selection of terms is not biased, leading to exploration of the attributes, so the classification rules constructed with this method are novel. Thus, a method of pheromone update is chosen that leads to exploration of the terms and hence to the construction of better classification rules.
B. Proposed algorithm input parameter:
1. Number of ants (No_of_ants): This is also the
maximum number of complete candidate rules constructed
and pruned during an iteration of the WHILE loop of
Algorithm , since each ant is associated with a single rule. In
each iteration the best candidate rule found is considered a
discovered rule. The larger No_of_ants, the more candidate
rules are evaluated per iteration, but the slower the system
is.
2. Minimum number of cases per rule
(Min_cases_per_rule): Each rule must cover atleast
Min_cases_per_rule cases, to enforce at least a certain
degree of generality in the discovered rules. This helps to
avoid an overfitting of the training data.
3. Maximum number of uncovered cases in the
training set (Max_uncovered_cases): The process of rule
discovery is iteratively performed until the number of
training cases that are not covered by any discovered rule is
smaller than this threshold.
4. Number of rules used to test convergence of the ants
(No_rules_converg): If the current ant has constructed a rule
that is exactly the same as the rule constructed by the
previous No_rules_converg – 1 ants, then the system
concludes that the ants have converged to a single rule
(path). The current iteration of the WHILE loop of the
algorithm is therefore stopped, and another iteration is
started.
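As an illustration, the four input parameters described above might be collected into a small configuration object. The class and field names here are hypothetical; the default values follow the settings reported in Table 1.

```python
from dataclasses import dataclass

@dataclass
class AntMinerParams:
    # Maximum candidate rules built per WHILE-loop iteration
    no_of_ants: int = 60
    # Minimum training cases each discovered rule must cover
    min_cases_per_rule: int = 5
    # Stop rule discovery once this few cases remain uncovered
    max_uncovered_cases: int = 10
    # Identical consecutive rules needed to declare convergence
    no_rules_converg: int = 10

params = AntMinerParams()
```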
ALGORITHM:
TrainingSet = {all training cases};
DiscoveredRuleList = [ ]; /* rule list is initialized with
an empty list */
WHILE (|TrainingSet| > Max_uncovered_cases)
    t = 1; /* ant index */
    j = 1; /* convergence test index */
    Initialize all trails with the same amount of pheromone;
    REPEAT
        Antt starts with an empty rule and incrementally
        constructs a classification rule Rt, adding one term
        at a time to the current rule;
        Prune rule Rt;
        Update the pheromone of all trails by increasing
        pheromone in the trail followed by Antt
        (proportional to the quality of Rt)
        and decreasing pheromone in the other trails
        (simulating pheromone evaporation);
        IF (Rt is equal to Rt-1) /* update convergence test */
            THEN j = j + 1;
            ELSE j = 1;
        END IF
        t = t + 1;
    UNTIL (t ≥ No_of_ants) OR (j ≥ No_rules_converg)
    Choose the best rule Rbest among all rules Rt
    constructed by all the ants;
    Add rule Rbest to DiscoveredRuleList;
    TrainingSet = TrainingSet - {set of cases correctly
    covered by Rbest};
END WHILE
OUTPUT:
A list of classification rules that classifies data according
to the predetermined classes.
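The pseudocode above can be sketched in Python as follows. This is a minimal illustration, not the paper's implementation: rule construction, pruning, and rule quality are simplified stand-ins (the actual algorithm chooses terms probabilistically using pheromone plus a heuristic, and measures quality from the confusion matrix), and cases are represented as dicts with a single hypothetical "term" field.

```python
def construct_rule(pheromone):
    """Stub: pick the term with the highest pheromone. Real Ant-Miner
    chooses probabilistically, weighted by pheromone and a heuristic."""
    return max(pheromone, key=pheromone.get)

def rule_quality(rule, cases):
    """Stub: fraction of the remaining cases covered by the rule."""
    covered = sum(1 for c in cases if c["term"] == rule)
    return covered / len(cases) if cases else 0.0

def ant_miner(cases, terms, no_of_ants=60, no_rules_converg=10,
              max_uncovered_cases=10):
    discovered = []
    while len(cases) > max_uncovered_cases:
        # all trails start with the same amount of pheromone
        pheromone = {term: 1.0 / len(terms) for term in terms}
        best_rule, best_q = None, -1.0
        prev_rule, t, j = None, 1, 1
        while t < no_of_ants and j < no_rules_converg:
            rule = construct_rule(pheromone)        # build (pruning stubbed out)
            q = rule_quality(rule, cases)
            pheromone[rule] += pheromone[rule] * q  # reinforce the used term
            total = sum(pheromone.values())
            # renormalize so unused terms lose share (evaporation)
            pheromone = {k: v / total for k, v in pheromone.items()}
            if q > best_q:
                best_rule, best_q = rule, q
            j = j + 1 if rule == prev_rule else 1   # convergence test
            prev_rule, t = rule, t + 1
        covered = [c for c in cases if c["term"] == best_rule]
        if not covered:
            break  # guard: the stub can make no further progress
        discovered.append(best_rule)
        cases = [c for c in cases if c["term"] != best_rule]
    return discovered
```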
C. Description of proposed system
1) Pheromone Initialization: A termij corresponds to a
segment in some path that can be followed by an ant. When
the first ant starts its search, all paths have the same amount
of pheromone. The initial amount of pheromone deposited at
each path is inversely proportional to the number of values
of all attributes, and is defined by

τij(t = 0) = 1 / (b1 + b2 + … + ba)    (10)

where a is the total number of attributes and bi is the number
of values in the domain of attribute Ai.
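The initial pheromone is straightforward to compute. A minimal sketch, assuming each attribute is given as the list of its possible values:

```python
def initial_pheromone(attribute_values):
    """tau_ij(t=0) = 1 / (sum of the number of values of all
    attributes); the same value is assigned to every term_ij."""
    total_values = sum(len(values) for values in attribute_values)
    return 1.0 / total_values

# e.g. three attributes with 2, 3 and 5 possible values: tau = 1/10
tau0 = initial_pheromone([["y", "n"],
                          ["low", "mid", "high"],
                          ["a", "b", "c", "d", "e"]])
```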
2) Pheromone Updating:
Whenever an ant constructs its rule and that rule is
pruned (see Algorithm) the amount of pheromone in all
segments of all paths must be updated. This pheromone
updating is supported- by two basic ideas, namely:
1. The amount of pheromone associated with each
termij occurring in the rule found by the ant (after pruning)
is increased in proportion to the quality of that rule.
2. The amount of pheromone associated with each
termij that does not occur in the rule is decreased, simulating
pheromone evaporation in real ant colonies.
3) Increasing the Pheromone of Used Terms Increasing the
amount of pheromone associated with each termij occurring
in the rule found by an ant corresponds to increasing the
amount of pheromone along the path completed by the ant.
In a rule discovery context, this corresponds to increasing
the probability of termij being chosen by other ants in the
future, in proportion to the quality of the rule.
A new move towards updating pheromone trail in order to gain increased predictive accuracy in classification rule mining by implementing ACO algorithm (IJSRD/Vol. 1/Issue 2/2013/0018). All rights reserved by www.ijsrd.com.
New Pheromone Updater:
Pheromone updating for a termij is performed according to
Equation (11), for all terms termij that occur in the rule.
(11)
where τij = pheromone associated with termij,
Q = quality of the rule,
E = evaporation rate,
n = rule length.
Therefore, for all termij occurring in the rule found by the
current ant, the amount of pheromone is increased by a
fraction of the current amount of pheromone, and this
fraction is given by Q, where

Q = (TP / (TP + FN)) × (TN / (FP + TN))    (12)

Q's value is within the range 0 ≤ Q ≤ 1 and, the larger the
value of Q, the higher the quality of the rule.
TP (true positives) is the number of cases covered
by the rule that have the class predicted by the rule.
FP (false positives) is the number of cases covered
by the rule that have a class different from the class
predicted by the rule.
FN (false negatives) is the number of cases that are
not covered by the rule but that have the class predicted by
the rule.
TN (true negatives) is the number of cases that are
not covered by the rule and that do not have the class
predicted by the rule.
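The rule quality and the two pheromone-updating ideas can be sketched as follows. This is an illustration only, not the paper's exact new updating equation: the reinforcement follows the standard Ant-Miner form (increase by a fraction Q of the current pheromone), and the evaporation rate E is treated as an assumed parameter.

```python
def rule_quality(tp, fp, fn, tn):
    """Q = sensitivity * specificity, with 0 <= Q <= 1."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (fp + tn) if fp + tn else 0.0
    return sensitivity * specificity

def update_pheromone(pheromone, used_terms, q, evaporation_rate=0.1):
    """Illustrative update: terms used in the rule are reinforced in
    proportion to Q; every term then loses a fraction E of its
    pheromone, simulating evaporation in real ant colonies."""
    updated = {}
    for term, tau in pheromone.items():
        if term in used_terms:
            tau += tau * q                 # idea 1: reinforce used terms
        updated[term] = (1 - evaporation_rate) * tau  # idea 2: evaporate
    return updated

q = rule_quality(tp=40, fp=10, fn=10, tn=40)  # 0.8 * 0.8 = 0.64
```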
V. EXPERIMENTAL ANALYSIS
The performance of the classification method is affected by
several factors (like the number of attributes, input
parameter, data type of attributes etc.). This chapter we will
take a closer look at how different factors actually affect the
performance of algorithm.
Here a comparison of results of existing C ant-miner
algorithm is shown in tabular form. Proposed New
pheromone updating Here the parameter for comparison is
Accuracy, No. of Rules Discovered and No. of terms/ No. of
rules Discovered. Data set detail used in the experiments is
described in chapter 5 section 5.2. Each algorithm is run for
10 times and then average of ten run is done to measure
efficiency of algorithms. The parameter setting is shown in
the following table:
PARAMETER NAME                         VALUE
Minimum covered examples per rule      5
Convergence test size                  10
Maximum number of iterations           1500
Maximum number of uncovered cases      10
Number of ants in the colony           60
Cross-validation folds                 10
Table 1: Parameter details
For ease of representation, the proposed method is referred
to in the tables and charts as the New Pheromone Updater.

Data Set      Existing cAnt-Miner    New Pheromone Updater
Heart         76.5184                77.037
Hepatitis     75.2751                77.439
Ionosphere    87.0643                88.607
WDBC          94.4448                90.862
Table 2: Accuracy detail (%)
Data Set      Existing cAnt-Miner    New Pheromone Updater
Heart         1.9369                 1.768
Hepatitis     1.9728                 1.806
Ionosphere    1.5036                 1.379
WDBC          1.7015                 1.125
Table 3: Terms per rule detail
Data Set      Existing cAnt-Miner    New Pheromone Updater
Heart         6.31                   6.5
Hepatitis     5.02                   5.25
Ionosphere    6                      6
WDBC          5.46                   5.833
Table 4: Number of rules detail
Figure 1: Accuracy Comparison of Existing System and
New Pheromone Updater
Figure 2: Terms per Rule: Comparison of Existing System
and New Pheromone Updater
Figure 3: No. of Rules Generated: Comparison of Existing
System and New Pheromone Updater
VI. CONCLUSION
Data mining predicts future trends and behaviors, which
helps organizations make proactive, knowledge-driven
decisions. One of the data mining tasks that helps achieve
this is the extraction of classification rules from databases.
Ant-Miner is a technique that successfully incorporates
swarm intelligence into data mining. The main objective of
this research is to improve accuracy and generate better
classification rules. To this end, we selected the pheromone
update method as the means to fulfill the task. With the
change made to the pheromone update method, as discussed
in the previous chapter, a gain in accuracy of 2-3% is
achieved.