A SERIAL COMPUTING MODEL OF AGENT ENABLED MINING OF GLOBALLY STRONG ASSOCIATION RULES
International Journal on Computational Sciences & Applications (IJCSA), Vol. 5, No. 3, June 2015. DOI: 10.5121/ijcsa.2015.5307
G.S.Bhamra1
, A. K.Verma2
and R.B.Patel3
1
M. M. University, Mullana, Haryana, 133207 - India
2
Thapar University, Patiala, Punjab, 147004- India
3
Chandigarh College of Engineering & Technology, Chandigarh- 160019- India
ABSTRACT
The intelligent agent based model is a popular approach in constructing Distributed Data Mining (DDM)
systems to address scalable mining over large scale and ever increasing distributed data. In an agent based
distributed system, variety of agents coordinate and communicate with each other to perform the various
tasks of the Data Mining (DM) process. In this study a serial computing model of a multi-agent system (MAS) called Agent enabled Mining of Globally Strong Association Rules (AeMGSAR) is presented, based on the serial itinerary of the mobile agents. A running environment is also designed for the implementation and performance study of the AeMGSAR system.
KEYWORDS
Knowledge Discovery, Association Rules, Intelligent Agents, Multi-Agent System
1.INTRODUCTION
Data Mining (DM) technique is used to extract some interesting and valid data patterns implicitly
stored in large databases [1], [2]. Intelligent software agent technology is an interdisciplinary
technology dealing with the development and efficient utilization of autonomous software objects
called agents which have access to geographically distributed and heterogeneous resources. They
are autonomous, adaptive, reactive, pro-active, social, cooperative, collaborative and flexible.
They also support temporal continuity and mobility within the network. An intelligent agent with the mobility feature is known as a Mobile Agent (MA). An MA migrates from node to node in a heterogeneous network without losing its operability. On reaching a network node, the MA is delivered to an Agent Execution Environment (AEE) where its executable parts are started. Upon completion of the desired task, it delivers the results to the home node. A Mobile Agent Platform (MAP), or Agent Execution Environment (AEE), is a server application that provides the appropriate functionality for MAs to authenticate, execute, communicate, migrate to other platforms, and use system resources in a secure way. A Multi Agent System (MAS) is a distributed application comprised of multiple interacting intelligent agent components [3].
Let DB = {T_j, j = 1…D} be a transactional dataset of size D, where each transaction T is assigned an identifier (TID), and let I = {d_i, i = 1…m} be the set of m data items in DB. A set of items in a particular transaction T is called an itemset or pattern. An itemset P = {d_i, i = 1…k}, which is a set of k data items in a particular transaction T with P ⊆ I, is called a k-itemset. The support of an itemset, s(P)% = No_of_T_containing_P / D, is the frequency of occurrence of itemset P in DB, where No_of_T_containing_P is the support count (sup_count) of itemset P. Frequent Itemsets (FIs) are the itemsets that appear in DB frequently, i.e., if s(P) ≥ min_th_sup (a given minimum threshold support), then P is a frequent k-itemset. Finding such FIs plays an essential role in mining the interesting relationships among itemsets. Frequent Itemset Mining (FIM) is the task of finding the set of all the subsets of FIs in a transactional database [2].
Association Rules (ARs) are used to discover the associations among items in a database [4]. An AR is an implication of the form P ⇒ Q [support, confidence], where P ⊂ I, Q ⊂ I and P ∩ Q = ∅. An AR is measured in terms of its support and confidence factors: the support of the rule, s(P ⇒ Q), is the probability of both P and Q appearing in T, i.e., p(P ∪ Q), and the confidence of the rule, c(P ⇒ Q), is the conditional probability of Q given P, i.e., p(Q|P). An AR is said to be strong if s(P ⇒ Q) ≥ min_th_sup (a given minimum threshold support) and c(P ⇒ Q) ≥ min_th_conf (a given minimum threshold confidence). Association Rule Mining (ARM) is today one of the most important DM tasks. In ARM all the strong ARs are generated from the FIs. ARM can be viewed as a two-step process [5], [6]:
1. Find all the frequent k-itemsets (L_k).
2. Generate strong ARs from L_k:
   a. For each frequent itemset l ∈ L_k, generate all non-empty subsets of l.
   b. For every non-empty subset s of l, output the rule "s ⇒ (l − s)" if sup_count(l) / sup_count(s) ≥ min_th_conf.
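As an illustration of step 2, the following Java sketch (a minimal, hypothetical example, not the authors' implementation) generates the strong rules for a single frequent itemset l, given a map holding the support counts of all frequent itemsets:

    import java.util.*;

    public class RuleGenerator {
        // Generate strong rules "s => (l - s)" for one frequent itemset l,
        // given the support counts of all frequent itemsets.
        static List<String> strongRules(Set<String> l,
                                        Map<Set<String>, Integer> supCount,
                                        double minThConf) {
            List<String> rules = new ArrayList<>();
            List<String> items = new ArrayList<>(l);
            int full = supCount.get(l);
            // enumerate all non-empty proper subsets of l via bit masks
            for (int mask = 1; mask < (1 << items.size()) - 1; mask++) {
                Set<String> s = new HashSet<>();
                for (int i = 0; i < items.size(); i++) {
                    if ((mask & (1 << i)) != 0) s.add(items.get(i));
                }
                Set<String> rest = new HashSet<>(l);
                rest.removeAll(s);
                double conf = 100.0 * full / supCount.get(s);   // confidence in percent
                if (conf >= minThConf) {
                    rules.add(s + " => " + rest + " [conf " + conf + "%]");
                }
            }
            return rules;
        }
    }

By the Apriori property every non-empty subset of a frequent itemset is itself frequent, so each antecedent's support count can be looked up directly in the same map.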
Distributed Association Rule Mining (DARM) is the task of generating the globally strong association rules from the global FIs in a distributed environment. A few preliminary notations and definitions, required for defining DARM and for making this study self-contained, are as follows:
• S = {S_i, i = 1…n}, the n distributed sites.
• S_CENTRAL, the central site.
• DB_i = {T_j, j = 1…D_i}, the horizontally partitioned dataset of size D_i at local site S_i, where each transaction T_j is assigned an identifier (TID).
• DB = ∪_{i=1..n} DB_i, the aggregated dataset of size D = Σ_{i=1..n} D_i, with DB_i ∩ DB_j = ∅.
• I = {d_i, i = 1…m}, the total m data items in each DB_i.
• L^FI_k(i), the local frequent k-itemsets at site S_i.
• L^FISC_k(i), the list of support counts ∀ Itemset ∈ L^FI_k(i).
• L^LSAR_i, the list of locally strong association rules at site S_i.
• L^TLSAR = ∪_{i=1..n} L^LSAR_i, the list of total locally strong association rules.
• L^TFI_k = ∪_{i=1..n} L^FI_k(i), the list of total frequent k-itemsets.
• L^GFI_k = ∩_{i=1..n} L^FI_k(i), the list of global frequent k-itemsets.
• L^GSAR_CENTRAL, the list of globally strong association rules.
The Local Knowledge Base (LKB) at site S_i comprises L^FI_k(i), L^FISC_k(i) and L^LSAR_i, which can provide reference to the local supervisor for local decisions. The Global Knowledge Base (GKB) at S_CENTRAL comprises L^TLSAR, L^TFI_k, L^GFI_k and L^GSAR_CENTRAL for global decision making [7]. Like ARM, the DARM task can also be viewed as a two-step process [6]:
1. Find the global frequent k-itemsets (L^GFI_k) from the distributed local frequent k-itemsets (L^FI_k(i)) of the partitioned datasets.
2. Generate the globally strong association rules (L^GSAR_CENTRAL) from L^GFI_k.
The existing agent based systems specifically dealing with the DARM task are: Knowledge Discovery Management System (KDMS) [8], Efficient Distributed Data Mining using Intelligent Agents [9], Mobile Agent based Distributed Data Mining [10], An Agent based Framework for Association Rule Mining of Distributed Data (AFARMDD) [11], [12], and Multi-Agent Distributed Association Rule Miner (MADARM) [13]. All these systems are academic research projects. A qualitative comparison of these DARM frameworks is provided in [14]. Most of the existing agent based frameworks for the DARM task are only prototype models and lack an appropriate underlying AEE, scalability, privacy preserving techniques, global knowledge generation and implementation using real datasets.
The rest of the paper is organised as follows. Section 2 describes the running environment for the proposed system along with the various algorithms involved. The serial computing model of AeMGSAR is presented in Section 3, where the algorithms for all the agents involved in this system are also discussed. Section 4 describes the implementation and performance study of the system, and finally the article is concluded in Section 5.
2.ENVIRONMENT FOR THE PROPOSED SYSTEM
Every MAS needs an underlying AEE to provide a running infrastructure on which agents can be deployed and tested. A running environment has been designed in Java. The various attributes of an MA are encapsulated within a data structure known as AgentProfile. It contains the name of the MA (AgentName), its version number (AgentVersion), its entire byte code (BC), the list of nodes to be visited by the MA, i.e., the itinerary plan (L_NODES), the type of the itinerary (ItinType), which can be serial or parallel, a reference to the current execution state (AObject) and an additional data structure known as Briefcase that acts as the result bag of the MA, storing the final resultant knowledge (Result_S_i) produced at a particular site. The computational time (CPUTime) taken by an MA at a particular site is also stored in Result_S_i. In addition to results, Briefcase also contains the system time for the start of the agent's journey (startTripTime), the system time for the end of the journey (endTripTime) and the total round trip time of the MA (TripTime), calculated as TripTime ← endTripTime − startTripTime. Stationary as well as mobile agents involved in the models are discussed later on; a minimal sketch of these data structures is given after the component descriptions below. This environment consists of the following three components:
• Data Mining Agent Execution Environment (DM_AEE): It is the key component and acts as a Server. DM_AEE is deployed on each distributed site S_i and is responsible for receiving, executing and migrating all the visiting DM agents. It receives the incoming AgentProfile at site S_i, retrieves the entire BC of the agent and saves it as AgentName.class in the local file system of the site S_i; after that, execution of the agent is started using AObject. Steps are shown in Algorithm 1.
• Agent Launcher (AL): It acts as a Client at the agent launching station (S_CENTRAL) and launches the goal-oriented DM agents on behalf of the user, through a user interface, to the DM_AEE running at the distributed sites. The Agent Pool (or Zone) at S_CENTRAL is a repository of all mobile as well as stationary agents (SAs). AL first reads and stores AgentName in AgentProfile. The entire BC of the AgentName is loaded from the Agent Pool and stored in AgentProfile. L_NODES and ItinType are retrieved and stored in AgentProfile. startTripTime is maintained in Briefcase, which is further added to AgentProfile. In the case of the serial computing model, i.e., if ItinType = Serial, AL dispatches a single specific MA along with L_NODES, and it travels from node to node. AgentVersion is set to 1 for this agent. AL also contacts the Result Manager (RM) for processing the Briefcase of an agent. Detailed steps are given in Algorithm 2.
• Result Manager (RM): It manages and processes the Briefcases of all MAs. RM is either contacted by an MA for submitting its results or by AL for processing the results of a specific MA. On completion of its itinerary, each DM agent submits its results to RM, which computes the total round trip time (TripTime) of that MA and saves it in the Briefcase of that agent. If ItinType = Serial, it saves the updated AgentProfile of the agent at S_CENTRAL. When it is contacted by AL for processing the results of a specific agent, it sends back the AgentProfile of that agent. Steps are defined in Algorithm 3.
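The following Java sketch outlines how the AgentProfile and Briefcase containers described above might be represented. The field names follow the paper's terminology, but the types and class layout are assumptions made for illustration only:

    import java.io.Serializable;
    import java.util.*;

    // Hypothetical sketch of the agent attribute containers described above.
    class Briefcase implements Serializable {
        long startTripTime;                    // system time at start of journey
        long endTripTime;                      // system time at end of journey
        long tripTime;                         // endTripTime - startTripTime
        Map<String, Object> resultPerSite = new LinkedHashMap<>(); // Result_S_i per site
        Map<String, Long> cpuTimePerSite = new LinkedHashMap<>();  // CPUTime per site
    }

    class AgentProfile implements Serializable {
        String agentName;                      // AgentName
        int agentVersion;                      // AgentVersion
        byte[] byteCode;                       // BC, entire byte code of the agent
        List<String> itinerary = new ArrayList<>(); // L_NODES, IP addresses to visit
        String itinType;                       // "Serial" or "Parallel"
        Serializable aObject;                  // AObject, current execution state
        Briefcase briefcase = new Briefcase(); // result bag
    }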
Algorithm 1 DATA MINING AGENT EXECUTION ENVIRONMENT (DM_AEE)
1: procedure DM_AEE( )
2:    while TRUE do
3:       AgentProfile ← listen for and receive an AgentProfile at S_i
4:       AgentName ← get AgentName from AgentProfile
5:       BC ← retrieve the BC of the agent from AgentProfile
6:       save the BC as AgentName.class in the local file system of S_i
7:       AObject ← get AObject from AgentProfile            ▷ current state
8:       AObject.run()                                       ▷ start executing mobile agent
9:    end while
10: end procedure
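A highly simplified Java sketch of the DM_AEE receive-and-run loop in Algorithm 1 is given below. It assumes the agent class is already available on the server's classpath, that AObject implements Runnable, and that port 9000 is used; the real environment also saves and dynamically loads the transferred byte code, which is omitted here:

    import java.io.ObjectInputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class DmAee {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(9000)) {   // assumed port
                while (true) {
                    try (Socket client = server.accept();
                         ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
                        AgentProfile profile = (AgentProfile) in.readObject(); // receive AgentProfile
                        // The paper saves the BC as AgentName.class and loads it dynamically;
                        // here we simply assume the class is already on the classpath.
                        Runnable agent = (Runnable) profile.aObject;           // current state
                        new Thread(agent).start();                             // start executing the agent
                    }
                }
            }
        }
    }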
Algorithm 2 AGENT LAUNCHER (AL)
1: procedure AL( )
2:    option ← read option (dispatch / result)
3:    switch option do
4:       case dispatch                                   ▷ dispatch the mobile agent to DM_AEE
5:          AgentName ← read the mobile agent's name
6:          add AgentName to AgentProfile
7:          BC ← load the entire byte code of AgentName from the AgentPool
8:          add BC to AgentProfile
9:          L_NODES ← read the itinerary (IP addresses) of the mobile agent
10:         ItinType ← read ItinType (Serial / Parallel)
11:         add ItinType to AgentProfile
12:         if ItinType = "Serial" then                  ▷ serial itinerary
13:            AgentVersion ← 1
14:            add AgentVersion to AgentProfile
15:            add L_NODES to AgentProfile
16:            switch AgentName do
17:               case LFIGA
18:                  minthrsup ← read minimum threshold support
19:                  AObject ← new LFIGA(AgentProfile, minthrsup)
20:               end case
21:               case LKGA
22:                  minthrconf ← read minimum threshold confidence
23:                  AObject ← new LKGA(AgentProfile, minthrconf)
24:               end case
25:               case TFICA
26:                  AObject ← new TFICA(AgentProfile)
27:               end case
28:               case LKCA
29:                  AObject ← new LKCA(AgentProfile)
30:               end case
31:               case GKDA
32:                  L^GSAR_CENTRAL ← load L^GSAR_CENTRAL generated by GKGA at S_CENTRAL
33:                  add L^GSAR_CENTRAL to Briefcase
34:                  add updated Briefcase to AgentProfile
35:                  AObject ← new GKDA(AgentProfile)
36:               end case
37:            end switch
38:            add AObject to AgentProfile               ▷ current state
39:            transfer AgentProfile to DM_AEE at the first IP address in L_NODES
40:         end if
41:      end case
42:      case result                                     ▷ process the result of the mobile agent
43:         AgentName ← read the mobile agent's name
44:         ItinType ← read the mobile agent's ItinType
45:         add AgentName to L_AgentInfo
46:         add ItinType to L_AgentInfo
47:                                                      ▷ result processing for serial itinerary agents
48:         if ItinType = "Serial" then
49:            AgentProfile ← contact RM for L_AgentInfo
50:            Briefcase ← retrieve Briefcase from AgentProfile
51:            switch AgentName do
52:               case LFIGA
53:                  process the Briefcase of LFIGA
54:               end case
55:               case LKGA
56:                  process the Briefcase of LKGA
57:               end case
58:               case TFICA
59:                  call GFIGA(Briefcase)                ▷ stationary agent
60:               end case
61:               case LKCA
62:                  call GKGA(Briefcase)                 ▷ stationary agent
63:               end case
64:               case GKDA
65:                  process the Briefcase of GKDA
66:               end case
67:            end switch
68:         end if
69:      end case
70:   end switch
71: end procedure
Algorithm 3 RESULT MANAGER (RM)
1: procedure RM( )
2:    while TRUE do
3:       listen for and receive the incoming request
4:       if contacted by a mobile agent for submitting results from site S_i then
5:          AgentProfile ← receive the incoming AgentProfile from site S_i
6:          ItinType ← retrieve ItinType from AgentProfile
7:          Briefcase ← retrieve the mobile agent's Briefcase from AgentProfile
8:          startTripTime ← retrieve startTripTime from Briefcase
9:          endTripTime ← retrieve endTripTime from Briefcase
10:         TripTime ← endTripTime − startTripTime
11:         add TripTime to Briefcase
12:         add updated Briefcase to AgentProfile
13:         if ItinType = "Serial" then
14:            save AgentProfile at S_CENTRAL
15:         end if
16:      end if
17:      if contacted by AL for processing the results then
18:         AgentName ← retrieve AgentName from the incoming L_AgentInfo
19:         ItinType ← retrieve ItinType from the incoming L_AgentInfo
20:         if ItinType = "Serial" then
21:            AgentProfile ← load the AgentProfile for AgentName from S_CENTRAL
22:            dispatch AgentProfile to AL
23:         end if
24:      end if
25:   end while
26: end procedure
The overall working of the AeMGSAR system may be divided into the following six stages:
1. Request Stage: The request for DARM is initiated at S_CENTRAL by AL on behalf of the user with the necessary credentials.
2. Preparation Stage: AL, through the user interface, reads the agent name and version number; the itinerary for the MA's journey is obtained in terms of the IP addresses of the distributed nodes to be visited by the MA; any specific additional data for a specific MA is obtained; the agent code for the specific MA is loaded from the AgentPool; for the serial itinerary a single specific MA is dispatched by AL to travel and visit the n distributed sites one after another.
3. Local Mining Stage: The ARM process is performed locally by specific DM agents on each distributed site and the results are kept as the local knowledge base at that site.
4. Result Collection Stage: Collector agents visit each site, collect the results generated by the DM agents and submit the results back to RM at S_CENTRAL.
5. Knowledge Integration and Global Knowledge Generation Stage: Knowledge or result integration is carried out by the RM with the help of a stationary agent, and global knowledge in the form of globally strong association rules is generated with the help of other stationary agents at S_CENTRAL.
6. Global Knowledge Dispatching Stage: Global knowledge is dispatched to the distributed sites by a dispatching agent to compare it with the local knowledge at each site.
Figure 1. AeMGSAR Serial Computing Model
3.SERIAL COMPUTING MODEL OF AEMGSAR
The serial computing model of the AeMGSAR system is shown in Figure 1. It consists of seven agents in total: five of these are MAs dispatched from S_CENTRAL with serial-itinerary, multi-hop migration, and the other two are intelligent SAs running at S_CENTRAL that perform different tasks. The CPU time taken by an MA while processing at each site, along with some other specific information, is carried back in the result bag to S_CENTRAL. The agents numbered 1-5 below visit the n sites serially; other parameters are collected from different resources. The detailed relationship among these agents and the working behaviour of each agent is as follows:
1. Local Frequent Itemset Generator Agent (LFIGA): This is a MA that carries the AgentProfile and min_th_sup. LFIGA generates and stores L^FI_k(i) and L^FISC_k(i) at site S_i by scanning the local DB_i at that site under the constraint of min_th_sup. It carries back the computational time (CPUTime) at each site S_i and endTripTime. This agent is embedded with the Apriori algorithm [15] for generating all the frequent k-itemset lists. It may be equipped with decision making capability to select other FIM algorithms based on the density of the dataset at a particular site. More details are available in Algorithm 4.
2. Local Knowledge Generator Agent (LKGA): This is a MA that carries the AgentProfile and min_th_conf. LKGA applies the constraint of min_th_conf to generate and store L^LSAR_i by using the L^FI_k(i) and L^FISC_k(i) lists already generated by the LFIGA agent at site S_i. The L^LSAR_i list also stores the support and confidence of each association rule along with the site name. It carries back the computational time (CPUTime) at each site S_i and endTripTime. Detailed steps are given in Algorithm 7.
3. Total Frequent Itemset Collector Agent (TFICA): This is a MA that carries the AgentProfile. TFICA collects the list of local frequent k-itemsets (L^FI_k(i)) generated by the LFIGA agent and carries back the list of total frequent k-itemsets, L^TFI_k, in the result bag to RM at S_CENTRAL. In addition to this resultant knowledge, it also carries back the computational time (CPUTime) at each site S_i and endTripTime. It executes Algorithm 8.
4. Local Knowledge Collector Agent (LKCA): This is a MA that carries the AgentProfile. LKCA collects the list of locally strong association rules (L^LSAR_i) generated by the LKGA agent and carries back the list of total locally strong association rules (L^TLSAR) in the result bag to RM at S_CENTRAL. In addition to this resultant knowledge, it also carries back the computational time (CPUTime) at each site S_i and endTripTime. Steps are shown in Algorithm 9.
5. Global Knowledge Dispatcher Agent (GKDA): This is a MA that carries the AgentProfile containing the global knowledge (L^GSAR_CENTRAL). It dispatches the global knowledge to every site for further decision making and comparison with the local knowledge at that site. It executes Algorithm 12.
6. Global Frequent Itemset Generator Agent (GFIGA): It is a stationary agent at S_CENTRAL, mainly used for processing the result bag of TFICA, i.e., the total frequent k-itemset list (L^TFI_k) generated by TFICA, to generate the global frequent itemset list, L^GFI_k. More details are available in Algorithm 10.
7. Global Knowledge Generator Agent (GKGA): It is also a stationary agent at S_CENTRAL, mainly used for processing the L^GFI_k list and the L^TLSAR list to compile the global knowledge, i.e., the list of globally strong association rules, L^GSAR_CENTRAL. Detailed steps are shown in Algorithm 11.
Algorithm 4 LOCAL FREQUENT ITEMSET GENERATOR AGENT (LFIGA)
Input:
  • AgentProfile, a collection of agent attributes set by the AL
  • min_th_sup, the given minimum threshold support
Output: L^FI&SC, the list of frequent itemsets and their support counts
1: procedure LFIGA(AgentProfile, min_th_sup)
2:    startCPUTime ← get system time
3:    Briefcase ← get Briefcase from AgentProfile
4:    DB_i ← load DB_i from the local file system of site S_i
5:    T ← DB_i.get(0)                                    ▷ no. of records
6:    I ← DB_i.get(1)                                    ▷ no. of items
7:    DB[T][I] ← DB_i.get(3)                             ▷ itemset data bank
8:    minsupcount ← (T × min_th_sup) / 100
9:                       ▷ generate frequent 1-itemset list (FIL_1) and support count list (FISC_1)
10:   CFIL_1 ← {1, 2, 3, ..., I}                         ▷ candidate frequent 1-itemsets
11:   for i ← 1, I do                                    ▷ initialize the support count array SCFIL_1 to zero
12:      SCFIL_1[i] ← 0
13:   end for
14:   k ← 1
15:   for all candidate c ∈ CFIL_1 do                    ▷ find support count for every candidate
16:      for all transaction t ∈ DB do
17:         if c ⊂ t then
18:            SCFIL_1[k] ← SCFIL_1[k] + 1
19:         end if
20:      end for
21:      k ← k + 1
22:   end for
23:                       ▷ prune CFIL_1 to generate FIL_1 and FISC_1
24:   for k ← 1, I do
25:      if SCFIL_1[k] ≥ minsupcount then
26:         add c_k ∈ CFIL_1 to FIL_1
27:         add SCFIL_1[k] to FISC_1
28:      end if
29:   end for
30:   if FIL_1 ≠ ∅ then
31:      add FIL_1 to L^FI
32:      add FISC_1 to L^FISC
33:   end if
34:   k ← 2
35:   while FIL_{k−1} ≠ ∅ do
36:      CFIL_k ← call GenerateCFIL(FIL_{k−1})            ▷ see Algorithm 5
37:      for i ← 1, CFIL_k.length do                      ▷ initialize the array SCFIL_k to zero
38:         SCFIL_k[i] ← 0
39:      end for
40:      i ← 1
41:      for all candidate c ∈ CFIL_k do                  ▷ find support count for every candidate
42:         for all transaction t ∈ DB do                 ▷ scan DB
43:            if c ⊂ t then
44:               SCFIL_k[i] ← SCFIL_k[i] + 1
45:            end if
46:         end for
47:         i ← i + 1
48:      end for
49:                       ▷ prune CFIL_k to generate FIL_k and FISC_k
50:      for i ← 1, SCFIL_k.length do
51:         if SCFIL_k[i] ≥ minsupcount then
52:            add c_i ∈ CFIL_k to FIL_k
53:            add SCFIL_k[i] to FISC_k
54:         end if
55:      end for
56:      if FIL_k ≠ ∅ then
57:         add FIL_k to L^FI
58:         add FISC_k to L^FISC
59:      end if
60:      k ← k + 1
61:   end while
62:   add T to L^FI&SC
63:   add L^FI to L^FI&SC
64:   add L^FISC to L^FI&SC
65:   save L^FI&SC in the local file system of this site S_i
66:   endCPUTime ← get system time
67:   CPUTime ← endCPUTime − startCPUTime
68:   add CPUTime to Result_S_i
69:   add Result_S_i to Briefcase
70:   add updated Briefcase to AgentProfile
71:   L_NODES ← get the itinerary list from AgentProfile
72:   L_NODES ← remove the first IP address from L_NODES  ▷ visited site
73:   add updated L_NODES to AgentProfile
74:   if L_NODES ≠ ∅ then                                  ▷ itinerary not empty
75:      AObject ← new LFIGA(AgentProfile, min_th_sup)
76:      add AObject to AgentProfile
77:      transfer AgentProfile to DM_AEE at the first IP address in L_NODES
78:   else
79:      endTripTime ← get system time for end of agent journey
80:      add endTripTime to Briefcase
81:      add updated Briefcase to AgentProfile
82:      transfer AgentProfile to RM at S_CENTRAL
83:   end if
84: end procedure
Algorithm 5 GENERATECFIL
Input: L_{k−1}, frequent (k−1)-itemsets
Output: C_k, candidate frequent k-itemsets
1: procedure GENERATECFIL(L_{k−1})
2:    for all itemset l_1 ∈ L_{k−1} do
3:       for all itemset l_2 ∈ L_{k−1} do
4:          if (l_1[1] = l_2[1]) ∧ (l_1[2] = l_2[2]) ∧ ... ∧ (l_1[k−2] = l_2[k−2]) ∧ (l_1[k−1] < l_2[k−1]) then
5:             c ← l_1 ⊗ l_2                              ▷ join step: generate candidates
6:          end if
7:          if HASINFREQUENTSUBSET(c, L_{k−1}) then       ▷ see Algorithm 6
8:             delete c                                   ▷ prune step
9:          else
10:            add c to C_k
11:         end if
12:      end for
13:   end for
14:   return C_k
15: end procedure
Algorithm 6 HASINFREQUENTSUBSET
Input: c, a candidate k-itemset; L_{k−1}, the frequent (k−1)-itemsets
Output: TRUE if c has an infrequent (k−1)-subset, FALSE otherwise
1: procedure HASINFREQUENTSUBSET(c, L_{k−1})
2:    for all (k−1)-subset s ∈ c do
3:       if s ∉ L_{k−1} then
4:          return TRUE
5:       end if
6:    end for
7:    return FALSE
8: end procedure
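Algorithms 5 and 6 together correspond to the classical Apriori candidate-generation step [15]. The compact Java sketch below is an illustrative reconstruction (not the authors' code), assuming each itemset is kept as a lexicographically sorted list of item labels:

    import java.util.*;

    public class AprioriGen {
        // Generate candidate k-itemsets from the frequent (k-1)-itemsets (join + prune).
        static Set<List<String>> generateCandidates(Set<List<String>> frequentKminus1) {
            Set<List<String>> candidates = new LinkedHashSet<>();
            List<List<String>> prev = new ArrayList<>(frequentKminus1);
            for (List<String> l1 : prev) {
                for (List<String> l2 : prev) {
                    int k1 = l1.size();
                    // join step: first k-2 items equal, last item of l1 lexicographically smaller
                    if (l1.subList(0, k1 - 1).equals(l2.subList(0, k1 - 1))
                            && l1.get(k1 - 1).compareTo(l2.get(k1 - 1)) < 0) {
                        List<String> c = new ArrayList<>(l1);
                        c.add(l2.get(k1 - 1));
                        if (!hasInfrequentSubset(c, frequentKminus1)) {   // prune step
                            candidates.add(c);
                        }
                    }
                }
            }
            return candidates;
        }

        // TRUE if some (k-1)-subset of c is not frequent.
        static boolean hasInfrequentSubset(List<String> c, Set<List<String>> frequentKminus1) {
            for (int skip = 0; skip < c.size(); skip++) {
                List<String> subset = new ArrayList<>(c);
                subset.remove(skip);
                if (!frequentKminus1.contains(subset)) return true;
            }
            return false;
        }
    }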
Algorithm 7 LOCAL KNOWLEDGE GENERATOR AGENT (LKGA)
Input:
  • AgentProfile, a collection of agent attributes set by the AL
  • min_th_conf, the given minimum threshold confidence
Output: L^LSAR, the list of locally strong association rules
1: procedure LKGA(AgentProfile, min_th_conf)
2:    startCPUTime ← get system time
3:    Briefcase ← get Briefcase from AgentProfile
4:    L^FI&SC ← load L^FI&SC from the local file system of this site S_i
5:    T ← L^FI&SC.get(0)                                  ▷ no. of records
6:    L^FI ← L^FI&SC.get(1)                               ▷ frequent k-itemset list
7:    L^FISC ← L^FI&SC.get(2)                             ▷ support count list
8:    for k ← 2, L^FI.size do
9:       L_k ← L^FI.get(k)                                ▷ get frequent k-itemset list
10:      for all l ∈ L_k do
11:         l_subsets ← generate all non-empty subsets of l
12:         l_spcount ← get the support count of l from L^FISC
13:         AR_support ← (l_spcount / T) × 100            ▷ support of the association rule
14:         for all non-empty subset s ∈ l_subsets do
15:            s_spcount ← get the support count of s from L^FISC
16:            AR_conf ← (l_spcount / s_spcount) × 100    ▷ confidence of the association rule
17:            if AR_conf ≥ min_th_conf then
18:               AR_strong ← "s ⇒ l − s [AR_support %, AR_conf %]"
19:               print AR_strong
20:               add l to AR_strong
21:               S_i^IP ← get the IP address of this site S_i
22:               add S_i^IP to AR_strong
23:               add AR_strong to L^LSAR
24:            end if
25:         end for
26:      end for
27:   end for
28:   save L^LSAR in the local file system of this site S_i
29:   endCPUTime ← get system time
30:   CPUTime ← endCPUTime − startCPUTime
31:   add CPUTime to Result_S_i
32:   add Result_S_i to Briefcase
33:   add updated Briefcase to AgentProfile
34:   L_NODES ← get the itinerary list from AgentProfile
35:   L_NODES ← remove the first IP address from L_NODES  ▷ visited site
36:   add updated L_NODES to AgentProfile
37:   if L_NODES ≠ ∅ then                                  ▷ itinerary not empty
38:      AObject ← new LKGA(AgentProfile, min_th_conf)
39:      add AObject to AgentProfile
40:      transfer AgentProfile to DM_AEE at the first IP address in L_NODES
41:   else
42:      endTripTime ← get system time for end of agent journey
43:      add endTripTime to Briefcase
44:      add updated Briefcase to AgentProfile
45:      transfer AgentProfile to RM at S_CENTRAL
46:   end if
47: end procedure
Algorithm 8 TOTAL FREQUENT ITEMSET COLLECTOR AGENT (TFICA)
Input: AgentProfile, a collection of agent attributes set by the AL
Output: L^FI, the list of local frequent itemsets
1: procedure TFICA(AgentProfile)
2:    startCPUTime ← get system time
3:    Briefcase ← get Briefcase from AgentProfile
4:    L^FI&SC ← load L^FI&SC from the local file system of this site S_i
5:    L^FI ← L^FI&SC.get(1)                               ▷ frequent k-itemset list
6:    add L^FI to Result_S_i
7:    endCPUTime ← get system time
8:    CPUTime ← endCPUTime − startCPUTime
9:    add CPUTime to Result_S_i
10:   add Result_S_i to Briefcase
11:   add updated Briefcase to AgentProfile
12:   L_NODES ← get the itinerary list from AgentProfile
13:   L_NODES ← remove the first IP address from L_NODES  ▷ visited site
14:   add updated L_NODES to AgentProfile
15:   if L_NODES ≠ ∅ then                                  ▷ itinerary not empty
16:      AObject ← new TFICA(AgentProfile)
17:      add AObject to AgentProfile
18:      transfer AgentProfile to DM_AEE at the first IP address in L_NODES
19:   else
20:      endTripTime ← get system time for end of agent journey
21:      add endTripTime to Briefcase
22:      add updated Briefcase to AgentProfile
23:      transfer AgentProfile to RM at S_CENTRAL
24:   end if
25: end procedure
Algorithm 9 LOCAL KNOWLEDGE COLLECTOR AGENT (LKCA)
Input: AgentProfile, a collection of agent attributes set by the AL
Output: L^LSAR, the list of locally strong association rules
1: procedure LKCA(AgentProfile)
2:    startCPUTime ← get system time
3:    Briefcase ← get Briefcase from AgentProfile
4:    L^LSAR ← load L^LSAR from the local file system of this site S_i
5:    add L^LSAR to Result_S_i
6:    endCPUTime ← get system time
7:    CPUTime ← endCPUTime − startCPUTime
8:    add CPUTime to Result_S_i
9:    add Result_S_i to Briefcase
10:   add updated Briefcase to AgentProfile
11:   L_NODES ← get the itinerary list from AgentProfile
12:   L_NODES ← remove the first IP address from L_NODES  ▷ visited site
13:   add updated L_NODES to AgentProfile
14:   if L_NODES ≠ ∅ then                                  ▷ itinerary not empty
15:      AObject ← new LKCA(AgentProfile)
16:      add AObject to AgentProfile
17:      transfer AgentProfile to DM_AEE at the first IP address in L_NODES
18:   else
19:      endTripTime ← get system time for end of agent journey
20:      add endTripTime to Briefcase
21:      add updated Briefcase to AgentProfile
22:      transfer AgentProfile to RM at S_CENTRAL
23:   end if
24: end procedure
Algorithm 10 GLOBAL FREQUENT ITEMSET GENERATOR AGENT (GFIGA)
Input: Briefcase, the result bag of the TFICA agent
Output: L^GFI, the list of global frequent itemsets
1: procedure GFIGA(Briefcase)
2:    startCPUTime ← get system time
3:    L^TFI ← retrieve the total frequent itemsets (∪_{i=1..n} L^FI_(i)) from the Briefcase
4:    L^GFI ← retrieve the global frequent itemsets (∩_{i=1..n} L^FI_(i)) from the Briefcase
5:    print L^GFI
6:    save L^GFI in the local file system of site S_CENTRAL
7:    endCPUTime ← get system time
8:    CPUTime ← endCPUTime − startCPUTime
9:    print CPUTime
10:   return L^GFI
11: end procedure
Algorithm 11 GLOBAL KNOWLEDGE GENERATOR AGENT (GKGA)
Input: Briefcase, the result bag of the LKCA agent
Output: L^GSAR_CENTRAL, the list of globally strong association rules
1: procedure GKGA(Briefcase)
2:    startCPUTime ← get system time
3:    L^TLSAR ← retrieve the total locally strong rules (∪_{i=1..n} L^LSAR_i) from the Briefcase
4:    L^GFI ← load the global frequent itemsets (L^GFI) from S_CENTRAL
5:    for all AR_strong ∈ L^TLSAR do
6:       L ← get the frequent itemset from AR_strong
7:       if L ∈ L^GFI then
8:          print AR_strong along with the site address (S_i^IP)
9:          add AR_strong to L^GSAR_CENTRAL
10:      end if
11:   end for
12:   save L^GSAR_CENTRAL in the local file system of site S_CENTRAL
13:   endCPUTime ← get system time
14:   CPUTime ← endCPUTime − startCPUTime
15:   print CPUTime
16:   return L^GSAR_CENTRAL
17: end procedure
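Conceptually, GKGA keeps a locally strong rule only if the itemset it was derived from is globally frequent. A small, hypothetical Java sketch of this filtering step:

    import java.util.*;

    public class Gkga {
        // A locally strong rule tagged with its source itemset and site IP (illustrative types).
        static class LocalRule {
            final Set<String> itemset;
            final String rule;
            final String siteIp;
            LocalRule(Set<String> itemset, String rule, String siteIp) {
                this.itemset = itemset; this.rule = rule; this.siteIp = siteIp;
            }
        }

        // Keep only those locally strong rules whose itemset is globally frequent.
        static List<LocalRule> globallyStrongRules(List<LocalRule> totalLocalRules,
                                                   Set<Set<String>> globalFrequent) {
            List<LocalRule> global = new ArrayList<>();
            for (LocalRule r : totalLocalRules) {
                if (globalFrequent.contains(r.itemset)) {
                    global.add(r);
                }
            }
            return global;
        }
    }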
Algorithm 12 GLOBAL KNOWLEDGE DISPATCHER AGENT (GKDA)
Input: AgentProfile, a collection of agent attributes set by the AL
Output: dispatch L^GSAR_CENTRAL at each distributed site S_i
1: procedure GKDA(AgentProfile)
2:    startCPUTime ← get system time
3:    Briefcase ← get Briefcase from AgentProfile
4:    L^GSAR_CENTRAL ← get L^GSAR_CENTRAL from Briefcase
5:    save L^GSAR_CENTRAL in the local file system of site S_i
6:    endCPUTime ← get system time
7:    CPUTime ← endCPUTime − startCPUTime
8:    add CPUTime to Result_S_i
9:    add Result_S_i to Briefcase
10:   add updated Briefcase to AgentProfile
11:   L_NODES ← get the itinerary list from AgentProfile
12:   L_NODES ← remove the first IP address from L_NODES  ▷ visited site
13:   add updated L_NODES to AgentProfile
14:   if L_NODES ≠ ∅ then                                  ▷ itinerary not empty
15:      AObject ← new GKDA(AgentProfile)
16:      add AObject to AgentProfile
17:      transfer AgentProfile to DM_AEE at the first IP address in L_NODES
18:   else
19:      endTripTime ← get system time for end of agent journey
20:      add endTripTime to Briefcase
21:      add updated Briefcase to AgentProfile
22:      transfer AgentProfile to RM at S_CENTRAL
23:   end if
24: end procedure
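The mobile agents in Algorithms 4, 7, 8, 9 and 12 all end with the same serial-itinerary hop: remove the visited address, then either forward the AgentProfile to the next site or return it to RM. A hypothetical Java sketch of that shared tail step, reusing the AgentProfile sketch from Section 2, is:

    import java.util.List;

    public class SerialItinerary {
        // Shared tail logic of the serial-itinerary mobile agents (illustrative only).
        static void hop(AgentProfile profile) {
            profile.itinerary.remove(0);                       // current site has been visited
            if (!profile.itinerary.isEmpty()) {
                String next = profile.itinerary.get(0);
                transferToDmAee(profile, next);                // continue the serial itinerary
            } else {
                profile.briefcase.endTripTime = System.currentTimeMillis();
                transferToResultManager(profile);              // journey finished, report to RM
            }
        }

        // Transport stubs; in the running environment these would serialize the
        // AgentProfile over a socket to DM_AEE at the next site or to RM at S_CENTRAL.
        static void transferToDmAee(AgentProfile p, String ip) { /* ... */ }
        static void transferToResultManager(AgentProfile p)    { /* ... */ }
    }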
Figure 2. Control Panel of AeMGSAR
4.IMPLEMENTATION AND PERFORMANCE STUDY
All the agents, as well as the control panel shown in Figure 2, are designed in Java. Synthetic datasets (DB_i) are stored across three distributed sites S_1, S_2 and S_3, with 3500, 3850 and 3900 transactions, respectively, and 10 items in each, generated using the Transactional Data Set Generator (TDSG) tool [16]. Binary and transactional versions of these datasets are shown in Appendix A. The required configuration of the system is shown in Table 1, with the additional deployment of DM_AEE at each distributed site and of AL and RM at S_CENTRAL. The round trip time taken by the various MAs is shown in Figure 3. The CPU time consumed by the various MAs at sites S_1, S_2 and S_3 is shown in Figure 4, Figure 5 and Figure 6, respectively. The CPU time for GFIGA and GKGA is 101357102 nanoseconds and 33317458 nanoseconds, respectively. The L^FI_k(i) and L^FISC_k(i) lists generated at the distributed sites by the LFIGA agent with 20% min_th_sup are shown in Appendices B.1, B.2 and B.3. The L^LSAR_i lists generated at the distributed sites by the LKGA agent with 50% min_th_conf are shown in Appendices B.4, B.5 and B.6. The globally frequent itemsets generated by GFIGA at S_CENTRAL are shown in Figure 7. Fifteen 2-itemsets and eight 3-itemsets are globally frequent in the L^TFI_k list, while the 4-, 5- and 6-itemsets that are locally frequent are not globally frequent. The globally strong association rules (L^GSAR_CENTRAL) generated by GKGA at S_CENTRAL for the globally frequent 3-itemsets are shown in Figure 8, and L^GSAR_CENTRAL for the 2-itemsets is shown in Appendix B.7.
On comparing this system with the traditional central data warehouse (DW) based approach for ARM, where the entire data from the distributed sites is centrally collected in a DW [17], it is found that the storage cost is reduced, as data is mined locally and only the resultant knowledge is carried to the central site by mobile agents. As the size of the resultant data carried across by the mobile agents is small, the network communication cost is also reduced. Data mining is performed locally by agents, so the computational cost at the central site is also minimised. AeMGSAR reflects the global knowledge because all the strong association rules generated are also strong at each distributed site. The system relies upon Java's built-in security system. As MAs are scalable in nature, performance would not be affected by adding more sites.
Table 1. Network Configuration

Site Name   Processor   OS      IP (a)            Network
S_CENTRAL   Intel (b)   MS (c)  192.168.46.5      NW (d)
S_1         Intel (b)   MS (c)  192.168.46.212    NW (d)
S_2         Intel (b)   MS (c)  192.168.46.189    NW (d)
S_3         Intel (b)   MS (c)  192.168.46.213    NW (d)

a. IP address with Mask: 255.255.255.0 and Gateway 192.168.46.1
b. Intel Pentium Dual Core (3.40 GHz, 3.40 GHz) with 512 MB RAM
c. Microsoft Windows XP Professional ver. 2002
d. Network Speed: 100 Mbps and Network Adaptor: 82566DM-2 Gigabit NIC
Figure 3. Round Trip time taken by various MAs
Figure 4. CPU Time taken by various MAs at site S_1
Figure 5. CPU Time taken by various MAs at site S_2
Figure 6. CPU Time taken by various MAs at site S_3
Figure 7. Lists of global frequent k-itemsets at S_CENTRAL
Figure 8. Globally strong association rules for globally frequent 3-itemsets
5.CONCLUSION
Mobile agents strongly qualify for designing distributed applications, and the amalgamation of DDM and agent technology gives favourable results. Most of the existing agent based frameworks for the DARM task are only prototype models and lack an appropriate underlying execution environment, scalability, privacy preserving techniques, global knowledge generation and implementation using real datasets. In this study, a scalable MAS, called Agent enabled Mining of Globally Strong Association Rules (AeMGSAR), is presented based on the serial itinerary of the mobile agents. In this system the overall task of mining the globally strong association rules is divided into subtasks which are handled by various mobile as well as stationary agents. An AEE is also designed for the implementation and performance study of the AeMGSAR system. The serial itinerary used for mobile agent migration increases the overall cost of the DARM task, so a parallel computing model could be designed in which clones of each mobile agent are dispatched in parallel to all the distributed sites.
REFERENCES
[1] U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth & R. Uthurusamy, (1996) Advances in Knowledge
Discovery and Data Mining, AAAI/MIT Press.
[2] J. Han & M. Kamber, (2006) Data Mining: Concepts and Techniques, 2nd ed. Morgan Kaufmann.
[3] G. S. Bhamra, R. B. Patel & A. K. Verma, (2014) “Intelligent Software Agent Technology: An
Overview”, International Journal of Computer Applications (IJCA), vol. 89, no. 2, pp. 19–31.
[4] R. Agrawal, T. Imielinski & A. Swami, (1993) “Mining association rules between sets of items in large
databases”, in Proceedings of the ACM-SIGMOD International Conference of Management of Data,
pp. 207–216.
[5] R. Agrawal & J. C. Shafer, (1996) “Parallel mining of association rules”, IEEE Transaction on
Knowledge and Data Engineering, vol. 8, no. 6, pp. 962–969.
[6] M. J. Zaki, (1999) “Parallel and distributed association mining: a survey”, IEEE Concurrency, vol. 7,
no. 4, pp. 14–25.
[7] X. Wu & S. Zhang, (2003) “Synthesizing high-frequency rules from different data sources”, IEEE
Transactions on Knowledge and Data Engineering, vol. 15, no. 2, pp. 353–367.
[8] Y.-L. Wang, Z.-Z. Li & H.-P. Zhu, (2003) “Mobile agent based distributed and incremental techniques
for association rules”, in Proceedings of the International Conference on Machine Learning and
Cybernetics(ICMLC 2003), vol. 1, pp. 266–271.
[9] C. Aflori & F. Leon, (2004) “Efficient Distributed Data Mining using Intelligent Agents”, in
Proceedings of the 8th International Symposium on Automatic Control and Computer Science, pp. 1–
6.
[10] U. P. Kulkarni, P. D. Desai, T. Ahmed, J. V. Vadavi & A. R. Yardi, (2007) “Mobile Agent Based
Distributed Data Mining”, in Proceedings of the International Conference on Computational
Intelligence and Multimedia Applications (ICCIMA 2007), IEEE Computer Society, pp. 18–24.
[11] G. Hu & S. Ding, (2009a) “An Agent-Based Framework for Association Rules Mining of Distributed
Data”, in Software Engineering Research, Management and Applications 2009, ser. Studies in
Computational Intelligence, R. Lee and N. Ishii, Eds. Springer Berlin - Heidelberg, vol. 253, pp. 13–
26.
[12] G. Hu & S. Ding, (2009b) “Mining of Association Rules from Distributed Data using Mobile
Agents,” in Proceedings of the International Conference on e-Business(ICE-B 2009), pp. 21–26.
[13] A. O. Ogunde, O. Folorunso, A. S. Sodiya, J. A. Oguntuase & G. O. Ogunleye, (2011) “Improved
cost models for agent based association rule mining in distributed databases”, Anale SEria
Informatica, vol. 9, no. 1, pp. 231–250, Available: http://anale-
informatica.tibiscus.ro/download/lucrari/9-1-20-Ogunde.pdf
[14] G. S. Bhamra, A. K. Verma, & R. B. Patel, (2015) “Agent Based Frameworks for Distributed
Association Rule Mining: An Analysis”, International Journal in Foundations of Computer Science &
Technology (IJFCST), vol. 5, no. 1, pp. 11-22.
[15] R. Agrawal & R. Srikant, (1994) “Fast Algorithms for Mining Association Rules in Large Databases”,
in Proceedings of the 20th International Conference on Very Large Data Bases (VLDB’94). Morgan
Kaufmann Publishers Inc., pp. 487–499.
[16] G. S. Bhamra, A. K. Verma, & R. B. Patel, (2011) “TDSGenerator: A Tool for generating synthetic
Transactional Datasets for Association Rules Mining”, International Journal of Computer Science
Issues (IJCSI), vol. 8, no. 2, pp. 184-188.
[17] G. S. Bhamra, A. K. Verma, & R. B. Patel, (2014) “An Investigation into the Central Data Warehouse
based Association Rule Mining”, International Journal of Computer Applications (IJCA), vol. 96, no.
10, pp. 1-12.
AUTHORS
Gurpreet Singh Bhamra is currently working as Assistant Professor at
Department of Computer Science and Engineering, M. M. University, Mullana,
Haryana. He received his B.Sc. (Computer Sc.) and MCA from Kurukshetra
University, Kurukshetra in 1995 and 1998, respectively. He is pursuing Ph.D.
from Department of Computer Science and Engineering, Thapar University,
Patiala, Punjab. He has been teaching since 1998. He has published 13 research
papers in International/National Journals and International Conferences. He has
received Best Paper Award for “An Agent enriched Distributed Data Mining on
Heterogeneous Networks”, in “Challenges & Opportunities in Information
Technology” (COIT-2008). He is a Life Member of Computer Society of India. His research interests are in
Distributed Computing, Distributed Data Mining, Mobile Agents and Bio-informatics.
Dr. Anil Kumar Verma is currently working as Associate Professor at
Department of Computer Science & Engineering, Thapar University, Patiala. He
received his B.S., M.S. and Ph.D. in 1991, 2001 and 2008 respectively, majoring in
Computer science and engineering. He has worked as Lecturer at M.M.M.
Engineering College, Gorakhpur from 1991 to 1996. He joined Thapar Institute of
Engineering & Technology in 1996 as a Systems Analyst in the Computer Centre
and is presently associated with the same Institute. He has been a visiting faculty to
many institutions. He has published over 100 papers in referred journals and
conferences (India and Abroad). He is a MISCI (Turkey), LMCSI (Mumbai),
GMAIMA (New Delhi). He is a certified software quality auditor by MoCIT,
Govt. of India. His research interests include wireless networks, routing algorithms and securing ad hoc
networks and data mining.
Dr. Ram Bahadur Patel is currently working as Professor and Head at Department
of Computer Science & Engineering, Chandigarh College of Engineering &
Technology, Chandigarh. He received PhD from IIT Roorkee in Computer Science &
Engineering, PDF from Highest Institute of Education, Science & Technology
(HIEST), Athens, Greece, MS (Software Systems) from BITS Pilani and B. E. in
Computer Engineering from M. M. M. Engineering College, Gorakhpur, UP. Dr.
Patel has been in teaching and research since 1991. He has supervised 36 M. Tech, 7 M.
Phil. and 8 PhD Thesis. He is currently supervising 6 PhD students. He has published
130 research papers in International/National Journals and Refereed International
Conferences. He has written 7 text books for engineering courses. He is member of
ISTE (New Delhi), IEEE (USA). He is a member of various International Technical Committees and
participating frequently in International Technical Committees in India and abroad. His current research
interests are in Mobile & Distributed Computing, Mobile Agent Security and Fault Tolerance and Sensor
Network.
APPENDIX A – SYNTHETIC DATASETS
A.1 BDS3500T10I.txt and corresponding TDS3500T10I.txt (DB_1) at site S_1
These synthetic binary and transactional datasets of 3500 records are created by the TDSG tool at site S_1. In the binary version each column head represents the item number and each row represents a transaction, where the integer '1' is used for a purchased item and '0' is used if it is not purchased. The corresponding transactional version has a Transaction Id (TID) for each transaction, and Itemset is the set of all the purchased items for that particular transaction.
A.2 BDS3850T10I.txt and corresponding TDS3850T10I.txt (DB_2) at site S_2
These synthetic binary and transactional datasets of 3850 records are created by the TDSG tool at site S_2.
A.3 BDS3900T10I.txt and corresponding TDS3900T10I.txt (DB_3) at site S_3
These synthetic binary and transactional datasets of 3900 records are created by the TDSG tool at site S_3.
APPENDIX B – RESULTANT KNOWLEDGE OF AEMGSAR SYSTEM
B.1 L^FI_k(1) and L^FISC_k(1) at site S_1
The list of frequent k-itemsets, i.e., L^FI_k(1), is represented by column L, and column SC shows the support count of the corresponding frequent k-itemset, i.e., L^FISC_k(1), at site S_1. These frequent itemsets and their support counts are obtained by processing the synthetic dataset (DB_1) shown in Appendix A.1.
B.2 L^FI_k(2) and L^FISC_k(2) at site S_2
These frequent itemsets and their support counts are obtained by processing the synthetic dataset (DB_2) shown in Appendix A.2.
B.3 L^FI_k(3) and L^FISC_k(3) at site S_3
These frequent itemsets and their support counts are obtained by processing the synthetic dataset (DB_3) shown in Appendix A.3.
B.4 L^LSAR_1 at site S_1
Column L represents the frequent k-itemsets and column AR(support, confidence) shows the list of locally strong association rules, i.e., L^LSAR_1, at site S_1. Each strong rule has its associated support and confidence factor. The minimum threshold support is taken as 20% and the minimum threshold confidence as 50% for generating the strong rules, making use of the data shown in Appendix B.1.
B.5 L^LSAR_2 at site S_2
Column L represents the frequent k-itemsets and column AR(support, confidence) shows the list of locally strong association rules, i.e., L^LSAR_2, at site S_2. Each strong rule has its associated support and confidence factor. The minimum threshold support is taken as 20% and the minimum threshold confidence as 50% for generating the strong rules, making use of the data shown in Appendix B.2.
B.6 L^LSAR_3 at site S_3
Column L represents the frequent k-itemsets and column AR(support, confidence) shows the list of locally strong association rules, i.e., L^LSAR_3, at site S_3. Each strong rule has its associated support and confidence factor. The minimum threshold support is taken as 20% and the minimum threshold confidence as 50% for generating the strong rules, making use of the data shown in Appendix B.3.
B.7 L^GSAR_CENTRAL at site S_CENTRAL
Column L represents the globally frequent k-itemsets, i.e., itemsets which are locally frequent at all the distributed sites, and column AR(support, confidence) shows the list of globally strong association rules, i.e., L^GSAR_CENTRAL, for such itemsets. Each globally strong rule has its associated support and confidence factor. The minimum threshold support is taken as 20% and the minimum threshold confidence as 50%. Site represents the IP address of the site where the rule is locally strong. IP address 192.168.46.212 is used for site S_1, 192.168.46.189 for site S_2 and 192.168.46.213 for site S_3.