Distributed Association Rule Mining (DARM) is the task of generating globally strong association
rules from the global frequent itemsets in a distributed environment. The intelligent-agent-based
model is a popular approach to building Distributed Data Mining (DDM) systems that address scalable
mining over large-scale distributed data; it is characterized by a variety of agents coordinating and
communicating with each other to perform the various tasks of the data mining process. This study
presents a comparative analysis of existing agent-based frameworks for mining association rules
from distributed data sources.
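The core of a count-distribution style of DARM can be sketched in a few lines: each site counts candidate itemsets locally, a coordinator sums the counts, and itemsets meeting a global support threshold survive. The sites, itemsets, and threshold below are illustrative, not taken from any surveyed framework.

```python
from collections import Counter

# Hypothetical local transaction databases at three distributed sites.
SITE_DATA = {
    "site_a": [{"bread", "milk"}, {"bread", "butter"}, {"milk"}],
    "site_b": [{"bread", "milk"}, {"butter"}, {"bread", "milk", "butter"}],
    "site_c": [{"milk", "butter"}, {"bread", "milk"}],
}

def local_counts(transactions, candidates):
    """Count how often each candidate itemset occurs in the local database."""
    counts = Counter()
    for t in transactions:
        for c in candidates:
            if c <= t:          # candidate is a subset of the transaction
                counts[c] += 1
    return counts

def global_frequent(site_data, candidates, min_support):
    """Sum per-site counts and keep itemsets meeting the global threshold."""
    total = Counter()
    n_transactions = 0
    for transactions in site_data.values():
        total += local_counts(transactions, candidates)
        n_transactions += len(transactions)
    return {items: c for items, c in total.items()
            if c / n_transactions >= min_support}

candidates = [frozenset(s) for s in
              ({"bread"}, {"milk"}, {"butter"}, {"bread", "milk"})]
frequent = global_frequent(SITE_DATA, candidates, min_support=0.5)
```

Only candidate counts cross site boundaries, not raw transactions, which is what makes the scheme attractive for distributed (and privacy-sensitive) settings.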
Implementation of Agent Based Dynamic Distributed Service (CSCJournals)
This document proposes a design for agent migration between distributed systems using ACL (Agent Communication Language) messages. It involves serializing an agent's code and state into an ACL message that is sent from one system to another. The receiving system deserializes the agent to restore its execution. The design includes defining an ontology for migration messages, a migration protocol specifying the message flow, and components for handling class loading, agent migration, and conversation protocols. The performance of this distributed agent migration approach is evaluated by applying it to a distributed prime number calculation application.
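A minimal sketch of the migration idea, using Python's pickle as a stand-in for the agent's serialized state and a JSON dictionary as a stand-in for an ACL message. The real design also ships the agent's code and defines a full migration ontology and protocol; the message field names here are assumptions.

```python
import base64
import json
import pickle

class CounterAgent:
    """Toy agent whose state is a running total (stand-in for real agent state)."""
    def __init__(self, total=0):
        self.total = total
    def step(self, n):
        self.total += n

def serialize_to_acl(agent, sender, receiver):
    """Pack the agent's pickled state into a JSON 'ACL-like' request message."""
    payload = base64.b64encode(pickle.dumps(agent)).decode("ascii")
    return json.dumps({
        "performative": "request",      # ACL speech act
        "sender": sender,
        "receiver": receiver,
        "ontology": "agent-migration",  # migration ontology named in the design
        "content": payload,
    })

def deserialize_from_acl(message):
    """Restore the migrated agent on the receiving system."""
    msg = json.loads(message)
    return pickle.loads(base64.b64decode(msg["content"]))

agent = CounterAgent()
agent.step(7)
msg = serialize_to_acl(agent, "platform-a", "platform-b")
restored = deserialize_from_acl(msg)
restored.step(3)   # execution resumes from the migrated state
```

In a real deployment the receiving platform would also need the agent's class definitions (hence the design's class-loading component), and pickle would be replaced by a format safe to exchange between mutually untrusting platforms.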
CYBER INFRASTRUCTURE AS A SERVICE TO EMPOWER MULTIDISCIPLINARY, DATA-DRIVEN S... (ijcsit)
To support its large-scale, multidisciplinary scientific research efforts across all the university campuses and by research personnel spread over virtually every corner of the state, the state of Nevada needs to build and leverage its own cyberinfrastructure. Following the well-established as-a-service model, this state-wide cyberinfrastructure, which consists of data acquisition, data storage, advanced instruments, visualization, computing and information processing systems, and people, all seamlessly linked through a high-speed network, is designed and operated to deliver the benefits of Cyberinfrastructure-as-a-Service (CaaS). There are three major service groups in this CaaS, namely (i) supporting infrastructural
services that comprise sensors, computing/storage/networking hardware, operating system, management tools, virtualization and message passing interface (MPI); (ii) data transmission and storage services that provide connectivity to various big data sources, as well as cached and stored datasets in a distributed
storage backend; and (iii) processing and visualization services that provide user access to rich processing and visualization tools and packages essential to various scientific research workflows. Built on commodity hardware and open source software packages, the Southern Nevada Research Cloud (SNRC) and a data repository in a separate location constitute a low-cost solution to deliver all these services around CaaS. The service-oriented architecture and implementation of the SNRC are geared to hide as much of the detail of big data processing and cloud computing as possible from end users; scientists only need to learn and access an interactive web-based interface to conduct their collaborative, multidisciplinary, data-intensive research. The capability and easy-to-use features of the SNRC are demonstrated through a use case that derives a solar radiation model from a large data set by regression analysis.
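The regression step of the use case can be illustrated with a closed-form ordinary-least-squares fit. The cloud-cover and irradiance figures below are invented for demonstration and are not the SNRC data set.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form, single feature)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Hypothetical readings: cloud-cover index vs. measured irradiance (W/m^2).
cover = [0.0, 0.2, 0.4, 0.6, 0.8]
irradiance = [1000, 840, 680, 520, 360]
a, b = fit_line(cover, irradiance)
```

On a real data set of SNRC scale, the same computation would be dispatched to the cloud backend rather than run in a single process, but the model being fitted is the same.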
A survey of peer-to-peer content distribution technologies (sharefish)
This document provides a survey of peer-to-peer content distribution technologies. It begins with defining key concepts of peer-to-peer computing and classifying peer-to-peer systems. The focus is on content distribution systems, which allow personal computers to function as a distributed storage medium for digital content. The document proposes a framework for analyzing nonfunctional characteristics and architectural designs of current peer-to-peer content distribution systems.
Re-Engineering Databases using Meta-Programming Technology (Gihan Wikramanayake)
G N Wikramanayake (1997) "Re-engineering Databases using Meta-Programming Technology" In:16th National Information Technology Conference on Information Technology for Better Quality of Life Edited by:R. Ganepola et al. pp. 1-14. Computer Society of Sri Lanka, Colombo: CSSL Jul 11-13, ISBN 955-9155-05-9
COMPLETE END-TO-END LOW COST SOLUTION TO A 3D SCANNING SYSTEM WITH INTEGRATED... (ijcsit)
3D reconstruction is a technique used in computer vision which has a wide range of applications in
areas like object recognition, city modelling, virtual reality, physical simulations, video games and
special effects. Previously, performing a 3D reconstruction required specialized hardware. Such
systems were often very expensive and only available for industrial or research purposes.
With the rise of the availability of high-quality low cost 3D sensors, it is now possible to design
inexpensive complete 3D scanning systems. The objective of this work was to design an acquisition and
processing system that can perform 3D scanning and reconstruction of objects seamlessly. In addition,
the goal of this work also included making the 3D scanning process fully automated by building and
integrating a turntable alongside the software. This means the user can perform a full 3D scan only by
a press of a few buttons from our dedicated graphical user interface. Three main steps were followed
to go from acquisition of point clouds to the finished reconstructed 3D model. First, our system
acquires point cloud data of a person or object using an inexpensive camera sensor. Second, it
aligns and converts the acquired point cloud data into a watertight mesh of good quality. Third,
it exports the reconstructed model to a 3D printer to obtain a proper 3D print of the model.
THE SOCIALIZED INFRASTRUCTURE OF THE INTERNET ON THE COMPUTING LEVEL (ijcsit)
To share huge amounts of heterogeneous information with a large population of heterogeneous users, the Internet should be reconstructed at the computing level, since this crucial infrastructure was designed without a full understanding of how it would be used. The upgraded infrastructure consists of five layers, from bottom to top: routing, multicasting, persisting, presenting, and humans. The routing layer is responsible for establishing the fundamental substrate and finding resources in accordance with social disciplines. The multicasting layer disseminates data in a high-performance, low-cost way based on the routing. The persisting layer provides services for storing and accessing persistent data efficiently with
the minimum of dedicated resources. The presenting layer, in addition to showing connected local views to users, absorbs users' interactions to guide adjustments of the underlying layers. Completely different from the lower software layers, the topmost layer is made up entirely of humans, i.e., users, including individual persons and organizations, who constitute the social capital dominating the Internet. Additionally, within
the upgraded infrastructure, beyond each lower layer supporting only its immediate upper one, the humans layer influences the lower layers by transferring its social resources to them. This differs from traditional layer-based systems. Those resources lead to adaptations and
adjustments of all the software layers, since each of them needs to follow social rules. Eventually, the updated underlying layers return the latest consequences of those modifications to users.
A Comparative Study: Taxonomy of High Performance Computing (HPC) IJECEIAES
Computer technologies have developed rapidly in both the software and hardware fields. The complexity of software is increasing with market demand as manual systems become automated, while the cost of hardware is decreasing. High Performance Computing (HPC) is a demanding technology and an attractive area of computing due to the huge amount of data processed in many computing applications. This paper focuses on different applications of HPC and its types, such as cluster computing, grid computing, and cloud computing, and studies the different classifications and applications of each. All these types of HPC are active areas of computer science research. The paper also presents a comparative study of grid, cloud, and cluster computing based on their benefits, drawbacks, key areas of research, characteristics, issues, and challenges.
Research Inventy: International Journal of Engineering and Science (inventy)
Research Inventy: International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online as well as in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to the fields of Computer Science and Information Systems. IJCSIT is an open access, peer-reviewed scientific journal published in electronic as well as print form. The mission of the journal is to publish original contributions in its field in order to propagate knowledge among its readers and to be a reference publication.
An Intelligent Approach for Handover Decision in Heterogeneous Wireless Envir... (CSCJournals)
Vertical handoff is a basic requirement for the convergence of different access technologies, and a key characteristic of overlay wireless networks with appropriate network interfaces. The integration of diverse but complementary cellular and wireless technologies in next-generation wireless networks requires the design of intelligent vertical handoff decision algorithms, enabling mobile users equipped with contemporary multi-interfaced terminals to seamlessly switch network access and experience uninterrupted service continuity anywhere and anytime. Most existing vertical handoff decision strategies are designed to meet individual needs and may not achieve good overall system performance. In this paper an intelligent approach based on fuzzy logic is used for the vertical handover decision: fuzzy logic drives both network selection and the handover decision itself.
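A toy version of such a fuzzy handover decision: triangular membership functions grade each candidate network's signal strength, bandwidth, and cost, and a weighted aggregation (standing in for a full fuzzy rule base and defuzzification step) yields a score per network. The membership ranges and weights are illustrative assumptions, not the paper's.

```python
def triangular(x, lo, peak, hi):
    """Triangular membership function, a common building block of fuzzy controllers."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def handover_score(rssi_dbm, bandwidth_mbps, cost):
    """Fuzzy score in [0, 1]: stronger signal, more bandwidth, lower cost is better."""
    sig = triangular(rssi_dbm, -100, -50, -30)     # signal quality membership
    bw = triangular(bandwidth_mbps, 0, 50, 60)     # bandwidth membership
    cheap = 1.0 - min(max(cost, 0.0), 1.0)         # normalized cost, inverted
    # Weighted aggregation stands in for Mamdani-style rule evaluation.
    return 0.5 * sig + 0.3 * bw + 0.2 * cheap

# Hypothetical candidate networks visible to a dual-interface terminal.
networks = {
    "wlan": handover_score(rssi_dbm=-60, bandwidth_mbps=40, cost=0.1),
    "cellular": handover_score(rssi_dbm=-85, bandwidth_mbps=10, cost=0.7),
}
best = max(networks, key=networks.get)
```

The appeal of the fuzzy approach is that the handover criteria stay human-readable (signal "strong", cost "low") while still producing a single comparable score per network.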
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
IMMERSIVE TECHNOLOGIES IN 5G-ENABLED APPLICATIONS: SOME TECHNICAL CHALLENGES ... (ijcsit)
The 5G next-generation networking paradigm, with its envisioned capacity, coverage, and data
transfer rates, provides fertile ground for novel application scenarios. Virtual, Mixed, and
Augmented Reality will play a key role as visualization, interaction, and information delivery
platforms. Recent hardware and software developments in immersive technologies (AR, VR, and MR),
in particular the commercial availability of advanced headsets equipped with XR-accelerated
processing units and Software Development Kits (SDKs), are significantly increasing the
penetration of such devices for entertainment, corporate, and industrial use. This trend creates
next-generation usage models that raise serious technical challenges at all networking and
software architecture levels to support the immersive digital transformation. The focus of this
paper is to identify, discuss, and propose system development approaches and architectures for
the successful integration of immersive technologies into future information and communication
concepts such as the Tactile Internet and the Internet of Skills.
A MALICIOUS USERS DETECTING MODEL BASED ON FEEDBACK CORRELATIONS (IJCNC)
Trust and reputation models were introduced to restrain the impact of rational but selfish
peers in P2P streaming systems. However, these models face two major challenges: dishonest
feedback and strategically altered behavior. To answer these challenges, we present a global
trust model based on network community, evaluation correlations, and a punishment mechanism.
We also propose a two-layered overlay that provides collection of peers' behaviors and
detection of malicious peers. Furthermore, we analyze several security threats in P2P
streaming systems and discuss how to defend against them with our trust mechanism. Simulation
results show that our trust framework successfully filters out dishonest feedback by using
correlation coefficients, and effectively defends against the security threats while
maintaining good load balance.
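The correlation-based filtering idea can be sketched with a plain Pearson coefficient: raters whose rating vectors correlate poorly (or negatively) with the community consensus are flagged as dishonest. The peers, ratings, and threshold below are hypothetical, not the paper's data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def filter_dishonest(feedback, threshold=0.5):
    """Keep raters whose ratings correlate with the community consensus."""
    peers = list(feedback.values())
    n_items = len(peers[0])
    consensus = [sum(r[i] for r in peers) / len(peers) for i in range(n_items)]
    return {name for name, ratings in feedback.items()
            if pearson(ratings, consensus) >= threshold}

# Hypothetical ratings of five streams by four peers; "mallory" rates inversely.
feedback = {
    "alice":   [0.9, 0.8, 0.2, 0.7, 0.6],
    "bob":     [0.8, 0.9, 0.1, 0.6, 0.7],
    "carol":   [0.9, 0.7, 0.3, 0.8, 0.5],
    "mallory": [0.1, 0.2, 0.9, 0.2, 0.3],
}
honest = filter_dishonest(feedback)
```

A production model would weight the consensus by each rater's accumulated trust rather than averaging uniformly, so that a coalition of liars cannot drag the consensus toward itself.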
This document discusses enabling technologies for interoperability between geographic information systems (GIS). It addresses problems at the syntactic, structural, and semantic levels of integration that must be solved to achieve fully interoperable GIS. At the syntactic level, standards like XML are used to integrate different data types. At the structural level, mediator systems use mapping rules to integrate heterogeneous data structures. The most difficult problem is semantic integration, where the meanings and contexts of concepts must be resolved. Ontologies and semantic modeling with XML and RDF can help describe information semantically and perform semantic translation between contexts to enable intelligent information integration.
A survey of models for computer networks management (IJCNCJournal)
The virtualization concept along with its underlying technologies has been warmly adopted in many
fields of computer science. In this direction, network virtualization research has presented
considerable results. In a parallel development, the convergence of two distinct worlds,
communications and computing, has increased the use of computing server resources (virtual
machines and hypervisors acting as active network elements) in network implementations. As a
result, the level of detail and complexity in such architectures has increased and new challenges
need to be taken into account for effective network management. Information and data models
facilitate infrastructure representation and management and have been used extensively in that
direction. In this paper we survey available modelling approaches and discuss how these can be
used in the virtual machine (host) based computer network landscape; we present a qualitative
analysis of the current state-of-the-art and offer a set of recommendations on adopting any
particular method.
AUTHENTICATION SCHEME FOR DATABASE AS A SERVICE (DBaaS) (ijccsa)
IT companies have shifted their resources to the cloud at a rapidly increasing rate. As part of this trend, companies are migrating business-critical and sensitive data stored in databases to cloud-hosted Database as a Service (DBaaS) solutions. Of all that has been written about cloud computing, precious little attention has been paid to authentication in the cloud. In this paper we design a new, effective authentication scheme for cloud Database as a Service (DBaaS); a user can change his or her password whenever desired. The proposed solution is based mainly on an improved Needham-Schroeder protocol to prove a user's identity and determine whether that user is authorized. Security analysis confirms the feasibility and efficiency of the proposed model for DBaaS, and the results show that the scheme is very strong and difficult to break.
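The challenge-response flavor of a Needham-Schroeder-style identity check can be sketched with a fresh nonce and an HMAC over a pre-shared key. This is a simplification: the full protocol additionally involves a trusted server distributing session keys, which is omitted here, and the key below is a placeholder.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret established out of band (stand-in for the
# session key a Needham-Schroeder exchange would distribute).
SHARED_KEY = b"demo-shared-key"

def make_challenge():
    """Server sends a fresh random nonce to prevent replay of old responses."""
    return secrets.token_bytes(16)

def respond(nonce, key=SHARED_KEY):
    """Client proves knowledge of the key by MACing the server's nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(nonce, response, key=SHARED_KEY):
    """Server recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = make_challenge()
ok = verify(nonce, respond(nonce))                    # legitimate client
bad = verify(nonce, respond(nonce, key=b"wrong-key")) # impostor without the key
```

The nonce is what defeats replay: an attacker who recorded yesterday's response cannot reuse it, because today's challenge is different.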
Named Data Networking (NDN) is a recently designed Internet architecture that uses data names
instead of locations, making an essential change in the abstraction of network services from
"delivering packets to specific destinations" to "retrieving data with specific names". This
fundamental change creates new opportunities and intellectual challenges in all areas, especially
network routing and communication, communication security, and privacy. The focus of this
dissertation is on the forwarding plane introduced by NDN. Communication in NDN is done by
exchanging Interest and Data packets.
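The Interest/Data exchange can be sketched as name-based retrieval: a consumer asks for a name, and whichever node holds matching data answers, with no host addresses involved. FIB and PIT mechanics are omitted, longest-prefix matching is simplified to exact match, and the names are illustrative.

```python
class ContentStore:
    """Producer-side (or in-network cache) mapping of names to Data packets."""
    def __init__(self):
        self.store = {}

    def publish(self, name, data):
        """Make a named Data packet available for retrieval."""
        self.store[name] = data

    def satisfy(self, interest_name):
        """Return the Data matching an Interest, or None if nothing matches."""
        return self.store.get(interest_name)

class Consumer:
    """Requests data by name; never addresses a specific host."""
    def __init__(self, network):
        self.network = network

    def express_interest(self, name):
        return self.network.satisfy(name)

network = ContentStore()
network.publish("/video/lecture1/seg0", b"frame-bytes-0")
data = Consumer(network).express_interest("/video/lecture1/seg0")
```

Because the request names the data rather than a host, any replica along the path can satisfy it; this is the property the forwarding plane exploits.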
This document discusses the need for adaptive and dynamic software development that can adjust to changing runtime environments and fault conditions. It argues that traditional static approaches to fault tolerance, like using fixed levels of redundancy, are inadequate as the threat environment may vary. The document then introduces an adaptive data integrity tool that allows the level of redundancy to change dynamically based on faults detected at runtime. This provides an example of the new approach called for, termed "New Software Development," that is more adaptive, maintainable and reconfigurable like New Product Development concepts.
Many-Task Computing (MTC) aims to enable task-parallel applications to leverage large distributed systems through a loosely-coupled model. MTC applications involve a large number of tasks with short runtimes and are data-intensive. This differs from traditional HPC which focuses on tightly-coupled applications. As computing resources scale, challenges arise around application scalability, reliability, resource manager scalability, and efficient hardware utilization that MTC seeks to address. MTC is applicable to clusters, grids, and supercomputers and provides opportunities to analyze increasingly large scientific datasets generated from experiments and simulations.
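The loosely-coupled MTC model maps naturally onto a pool of workers executing many short, independent tasks; the executor below stands in for an MTC resource manager, and the chunking and task body are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def analyze_chunk(chunk):
    """Stand-in for a short, independent MTC task (e.g. scanning one data slice)."""
    return sum(x * x for x in chunk)

# A large dataset split into many small, loosely-coupled work units.
dataset = list(range(100))
chunks = [dataset[i:i + 10] for i in range(0, len(dataset), 10)]

# The executor plays the role of an MTC resource manager dispatching tasks;
# tasks share no state, so completion order does not matter.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(analyze_chunk, c) for c in chunks]
    results = [f.result() for f in as_completed(futures)]

total = sum(results)
```

The same shape scales from one machine to a grid or supercomputer precisely because the tasks are independent: the scheduling challenge MTC addresses is dispatching and tracking millions of such tasks efficiently, not coordinating them.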
This document summarizes a research paper on using cloud computing for intelligent transportation systems. The paper proposes using intelligent transportation clouds to provide services like traffic management strategies and decision support. It describes a prototype using multi-agent systems with mobile agents to manage traffic. Cloud computing can help handle large data storage and transportation needs efficiently. Intelligent transportation clouds could overcome issues with computing power, storage, and scalability faced by current traffic management systems.
An Overview of Information Extraction from Mobile Wireless Sensor Networks (M H)
Information Extraction (IE) is a key research area within the field of Wireless Sensor Networks (WSNs). It has been characterised in a variety of ways, ranging from descriptions of its purposes to reasonably abstract models of its processes and components. Only a handful of papers have addressed IE over mobile WSNs directly, and these dealt with individual mobility-related problems as the need arose. This paper is presented as a tutorial that takes the reader from the point of identifying data about a dynamic (mobile) real-world problem, relating the data back to the world from which it was collected, and finally discovering what is in the data. It covers the entire process, with special emphasis on how to exploit mobility to maximise information return from a mobile WSN. We present some challenges that mobility introduces into the IE process, as well as its effects on the quality of the extracted information. Finally, we identify future research directions for the development of efficient IE approaches for WSNs in the presence of mobility.
Agent based Aggregation of Cloud Services - A Research Agenda – idescitation
Cloud computing has come to the forefront as it overcomes some of the issues in computing such as storage space and processing power. It enables ubiquitous access to and processing of information without the need for excessive computing facilities. In this work, we briefly discuss some of the issues in aggregating cloud services, discover futuristic cloud service requests, develop a repository of the same, and propose an agent-based Quality of Service (QoS) provisioning system for cloud clients.
This document discusses the design of a geographic information system (GIS) software platform integrated with a decision support system (DSS) for use in e-government applications in China. It proposes a new approach that tightly integrates DSS techniques with GIS techniques to provide comprehensive information and decision-making services to governments. The platform uses a uniform database design and data management approach. It is developed using a component-based approach to achieve close integration of GIS and DSS functions. The platform adopts a client-server architecture for applications and a client-server structure for system maintenance.
It is well known that the tenacity is a proper measure for studying vulnerability and reliability in graphs.
Here, a modified edge-tenacity of a graph is introduced based on the classical definition of tenacity.
Properties and bounds for this measure are established, and the edge-tenacity is calculated for cycle graphs and for complete graphs.
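For reference, the classical (vertex) tenacity that the modified edge-tenacity builds on is usually written as

```latex
T(G) \;=\; \min\left\{ \frac{|S| + \tau(G - S)}{\omega(G - S)} \;:\; S \subseteq V(G),\ \omega(G - S) > 1 \right\}
```

where \(\tau(G-S)\) is the order of a largest component of \(G-S\) and \(\omega(G-S)\) is its number of components; the edge analogue replaces the removed vertex set \(S\) with an edge set.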
Comparative study of different algorithms – ijfcstjournal
This paper provides a brief description of the Genetic Algorithm (GA), the Simulated Annealing (SA) algorithm, the Backtracking (BT) algorithm and the Brute Force (BF) search algorithm, explains how our proposed GA, proposed SA using GA, classical BT algorithm and classical BF search algorithm can be employed to find the best solution to the N-Queens problem, and makes a comparison between these four algorithms. It is entirely a review-based work. The four algorithms were written and implemented. From the results, it was found that the proposed GA performed better and provided a better fitness value (solution) than the proposed SA using GA, the BT algorithm and the BF search algorithm, for different N values. It was also noticed that the proposed GA took more time to provide a result than the proposed SA using GA.
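The classical backtracking (BT) algorithm for N-Queens, one of the four methods compared above, can be sketched as follows (a generic illustration, not the authors' implementation):

```python
def n_queens_solutions(n):
    """Count N-Queens solutions via classical backtracking,
    placing one queen per row and pruning attacked columns/diagonals."""
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal solutions
        if row == n:
            solutions += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square is attacked; prune this branch
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)

    place(0)
    return solutions
```

For example, `n_queens_solutions(8)` returns 92, the well-known count of 8-Queens solutions.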
This paper introduces a new comparison-based stable sorting algorithm, named RA sort. RA sort compares only selected pairs of elements in an array, rather than comparing each element with every other element, and tries to build upon the relationships established between the elements in each pass. Instead of a blind comparison, we prefer a selective comparison to obtain an efficient method. Sorting is a fundamental operation in computer science. The algorithm is analysed both theoretically and empirically to obtain a robust average-case result. We have performed an empirical analysis and compared its performance with the well-known quicksort for various input types. Although the theoretical worst-case complexity of RA sort is T_worst(n) = O(n√n), the experimental results suggest an empirical O_emp((n lg n)^1.333) time complexity for typical input instances, where the parameter n characterizes the input size. The theoretical complexity is given for the comparison operation. We emphasize that the theoretical complexity is operation-specific, whereas the empirical one represents the overall algorithmic complexity.
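The empirical methodology described (count the dominant operation at increasing input sizes, then estimate the growth exponent) can be sketched as follows; merge sort stands in for RA sort, whose internals are not given here, and the function names are illustrative:

```python
import math, random

def count_comparisons(arr):
    """Merge sort that counts element comparisons (a stand-in reference sort)."""
    count = 0
    def sort(a):
        nonlocal count
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = sort(a[:mid]), sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1  # one element comparison
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged
    sort(list(arr))
    return count

def empirical_exponent(n1, n2):
    """Estimate b in cost ~ n^b from operation counts at two input sizes."""
    c1 = count_comparisons([random.random() for _ in range(n1)])
    c2 = count_comparisons([random.random() for _ in range(n2)])
    return math.log(c2 / c1) / math.log(n2 / n1)
```

For an n log n sort, `empirical_exponent(512, 1024)` lands slightly above 1, which is the kind of fitted exponent an empirical analysis like the one above reports.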
Distribution of maximal clique size under... – ijfcstjournal
In this paper, we analyze the evolution of a small-world network and its subsequent transformation to a
random network using the idea of link rewiring under the well-known Watts-Strogatz model for complex
networks. Every link u-v in the regular network is considered for rewiring with a certain probability and if
chosen for rewiring, the link u-v is removed from the network and the node u is connected to a randomly
chosen node w (other than nodes u and v). Our objective in this paper is to analyze the distribution of the
maximal clique size per node by varying the probability of link rewiring and the degree per node (number
of links incident on a node) in the initial regular network. For a given probability of rewiring and initial
number of links per node, we observe the distribution of the maximal clique per node to follow a Poisson
distribution. We also observe the maximal clique size per node in the small-world network to be very close to the average value and to the maximal clique size in a regular network. There is no
appreciable decrease in the maximal clique size per node when the network transforms from a regular
network to a small-world network. On the other hand, when the network transforms from a small-world
network to a random network, the average maximal clique size value decreases significantly.
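The rewiring procedure and the per-node maximal-clique measurement described above can be sketched as follows (standard library only; function names are illustrative, and the brute-force clique search is only practical for the small degrees used here):

```python
import random
from itertools import combinations

def ring_lattice(n, k):
    """Regular ring lattice: each node linked to k/2 neighbours on each side."""
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for d in range(1, k // 2 + 1):
            v = (u + d) % n
            adj[u].add(v); adj[v].add(u)
    return adj

def ws_rewire(adj, p, seed=0):
    """Watts-Strogatz style rewiring: each link u-v is rewired with
    probability p to a randomly chosen node w other than u and v."""
    rng = random.Random(seed)
    n = len(adj)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for u, v in edges:
        if rng.random() < p:
            choices = [w for w in range(n) if w not in (u, v) and w not in adj[u]]
            if choices:
                w = rng.choice(choices)
                adj[u].discard(v); adj[v].discard(u)
                adj[u].add(w); adj[w].add(u)
    return adj

def max_clique_at(adj, u):
    """Size of the largest clique containing u (brute force over
    subsets of u's neighbourhood)."""
    nbrs = list(adj[u])
    for size in range(len(nbrs), 0, -1):
        for subset in combinations(nbrs, size):
            if all(b in adj[a] for a, b in combinations(subset, 2)):
                return size + 1
    return 1
```

With p = 0 and k = 4 the network stays regular and every node's maximal clique size is 3, matching the paper's observation that the regular and small-world regimes have nearly identical maximal clique sizes.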
α Nearness ant colony system with adaptive strategies for the traveling sales... – ijfcstjournal
Because the ant colony algorithm easily falls into local optima, this paper presents an improved ant colony optimization called α-AACS and reports its performance. First, we provide a concise description of the original ant colony system (ACS) and, to address ACS's disadvantage, introduce α-nearness based on the minimum 1-tree, which better reflects the chances of a given link being a member of an optimal tour. Then, we improve α-nearness by computing a lower bound and propose other adaptations for ACS. Finally, we conduct a fair comparison between our algorithm and others. The results clearly show that α-AACS has better global searching ability in finding the best solutions, which indicates that α-AACS is an effective approach for solving the traveling salesman problem.
How secure is the website you are shopping on?
How to tell if the website you are shopping on has a secure shopping cart that ensures all your credit card information is transmitted securely.
The AIRCC's International Journal of Computer Science and Information Technology (IJCSIT) is devoted to the fields of Computer Science and Information Systems. The IJCSIT is an open access peer-reviewed scientific journal published in electronic as well as print form. The mission of this journal is to publish original contributions in its field in order to propagate knowledge amongst its readers and to be a reference publication.
An Intelligent Approach for Handover Decision in Heterogeneous Wireless Envir... – CSCJournals
Vertical handoff is a basic requirement for the convergence of different access technologies and a key characteristic of overlay wireless networks with appropriate network interfaces. The integration of diverse but complementary cellular and wireless technologies in next-generation wireless networks requires the design of intelligent vertical handoff decision algorithms, so that mobile users equipped with contemporary multi-interface mobile terminals can seamlessly switch network access and experience uninterrupted service continuity anywhere and anytime. Most existing vertical handoff decision strategies are designed to meet individual needs and may not achieve good overall system performance. In this paper an intelligent approach based on fuzzy logic is used for the vertical handover decision: fuzzy logic drives both network selection and the handover decision itself.
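A fuzzy network-selection step of the kind described can be sketched as below; the membership functions, the three criteria, and the two rules are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(rssi, bandwidth, cost):
    """Toy Mamdani-style evaluation: fuzzify three criteria, fire two
    illustrative rules with min (AND), aggregate with max, and return a
    crisp desirability score in [0, 1]."""
    strong = tri(rssi, -90, -60, -30)     # signal strength (dBm)
    high_bw = tri(bandwidth, 0, 50, 100)  # available bandwidth (Mbps)
    cheap = tri(cost, 0, 0.2, 1)          # normalised monetary cost
    r1 = min(strong, high_bw)             # Rule 1: strong AND high bandwidth
    r2 = min(cheap, strong)               # Rule 2: cheap AND strong
    return max(r1, r2)

def select_network(candidates):
    """Pick the candidate network with the highest fuzzy score."""
    return max(candidates, key=lambda c: handover_score(*c[1]))
```

For instance, a strong, fast WLAN scores above a weak, slow cellular link, so the handover decision prefers it.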
This document provides a survey of file replication techniques used in grid systems. It begins with an introduction to grid systems and discusses their use of replication to improve response times and reduce bandwidth consumption. It then categorizes replication techniques as static or dynamic and describes challenges of replication including maintaining consistency and overhead. The document surveys various replication strategies for different grid topologies like peer-to-peer, tree and hybrid. It evaluates strategies based on factors like access latency, bandwidth consumption and fault tolerance. Specific replication techniques are discussed for peer-to-peer architectures aimed at availability, placement strategies and balancing workloads.
IMMERSIVE TECHNOLOGIES IN 5G-ENABLED APPLICATIONS: SOME TECHNICAL CHALLENGES ... – ijcsit
The 5G next-generation networking paradigm, with its envisioned capacity, coverage, and data transfer rates, provides a fertile field for novel application scenarios. Virtual, Mixed, and Augmented Reality will play a key role as visualization, interaction, and information delivery platforms. Recent hardware and software developments in immersive technologies (AR, VR, and MR), in terms of the commercial availability of advanced headsets equipped with XR-accelerated processing units and Software Development Kits (SDKs), are significantly increasing the penetration of such devices for entertainment, corporate, and industrial use. This trend creates next-generation usage models which raise serious technical challenges at all networking and software architecture levels to support the immersive digital transformation. The focus of this paper is to detect, discuss, and propose system development approaches and architectures for successful integration of immersive technologies into future information and communication concepts like the Tactile Internet and the Internet of Skills.
A MALICIOUS USERS DETECTING MODEL BASED ON FEEDBACK CORRELATIONS – IJCNC
Trust and reputation models were introduced to restrain the impact caused by rational but selfish peers in P2P streaming systems. However, these models face two major challenges: dishonest feedback and strategic altering of behavior. To answer these challenges, we present a global trust model based on network community, evaluation correlations, and a punishment mechanism. We also propose a two-layered overlay to provide the functions of peer-behavior collection and malicious-peer detection. Furthermore, we analyze several security threats in P2P streaming systems and discuss how to defend against them with our trust mechanism. The simulation results show that our trust framework can successfully filter out dishonest feedback by using correlation coefficients. It can effectively defend against the security threats with good load balance as well.
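The correlation-based feedback filtering can be sketched as follows; comparing each rater's vector against the community mean and thresholding the Pearson coefficient is one plausible reading of the mechanism, not the paper's exact formula:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def flag_dishonest(feedback, threshold=0.5):
    """Flag raters whose ratings correlate poorly with the community mean.
    `feedback` maps rater -> list of ratings over the same set of targets."""
    raters = list(feedback)
    n_targets = len(feedback[raters[0]])
    mean = [sum(feedback[r][i] for r in raters) / len(raters)
            for i in range(n_targets)]
    return {r for r in raters if pearson(feedback[r], mean) < threshold}
```

A rater whose scores run opposite to the community consensus gets a strongly negative coefficient and is filtered out.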
This document discusses enabling technologies for interoperability between geographic information systems (GIS). It addresses problems at the syntactic, structural, and semantic levels of integration that must be solved to achieve fully interoperable GIS. At the syntactic level, standards like XML are used to integrate different data types. At the structural level, mediator systems use mapping rules to integrate heterogeneous data structures. The most difficult problem is semantic integration, where the meanings and contexts of concepts must be resolved. Ontologies and semantic modeling with XML and RDF can help describe information semantically and perform semantic translation between contexts to enable intelligent information integration.
A survey of models for computer networks management – IJCNCJournal
The virtualization concept along with its underlying technologies has been warmly adopted in many fields of computer science. In this direction, network virtualization research has presented considerable results. In a parallel development, the convergence of two distinct worlds, communications and computing, has increased the use of computing server resources (virtual machines and hypervisors acting as active network elements) in network implementations. As a result, the level of detail and complexity in such architectures has increased and new challenges need to be taken into account for effective network management. Information and data models facilitate infrastructure representation and management and have been used extensively in that direction. In this paper we survey available modelling approaches and discuss how these can be used in the virtual machine (host) based computer network landscape; we present a qualitative analysis of the current state-of-the-art and offer a set of recommendations on adopting any particular method.
AUTHENTICATION SCHEME FOR DATABASE AS A SERVICE (DBAAS) – ijccsa
IT companies have shifted their resources to the cloud at a rapidly increasing rate. As part of this trend, companies are migrating business-critical and sensitive data stored in databases to cloud-hosted Database as a Service (DBaaS) solutions. Of all that has been written about cloud computing, precious little attention has been paid to authentication in the cloud. In this paper we design a new, effective authentication scheme for cloud Database as a Service (DBaaS). A user can change his/her password whenever demanded. Furthermore, security analysis confirms the feasibility and efficiency of the proposed model for DBaaS. The proposed solution is based mainly on an improved Needham-Schroeder protocol to prove a user's identity and determine whether that user is authorized. The results show that this scheme is very strong and difficult to break.
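A nonce-based mutual-authentication round in the spirit of an improved Needham-Schroeder exchange can be sketched with HMACs over fresh nonces; the three-message flow and field layout here are assumptions for illustration, not the paper's exact scheme:

```python
import hmac, hashlib, secrets

def mac(key, *parts):
    """HMAC-SHA256 over the concatenated protocol fields."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

def run_protocol(client_key, server_key):
    """Three-message nonce exchange: each side proves possession of the
    shared secret by MACing the other's fresh nonce. Returns True only
    if both sides hold the same key."""
    n_c = secrets.token_bytes(16)                    # 1. C -> S : Nc
    n_s = secrets.token_bytes(16)
    proof_s = mac(server_key, n_c, n_s)              # 2. S -> C : Ns, MAC_ks(Nc|Ns)
    if not hmac.compare_digest(proof_s, mac(client_key, n_c, n_s)):
        return False                                 # client rejects the server
    proof_c = mac(client_key, n_s)                   # 3. C -> S : MAC_kc(Ns)
    return hmac.compare_digest(proof_c, mac(server_key, n_s))
```

Fresh nonces on both sides rule out replays of old proofs, which is the standard repair for the classic protocol's replay weakness.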
Named Data Networking (NDN) is a recently designed Internet architecture that uses data names instead of locations, changing the basic abstraction of network services from "delivering packets to specific destinations" to "retrieving data with specific names". This fundamental change creates new opportunities and intellectual challenges in all areas, especially network routing and forwarding, communication security, and privacy. The focus of this dissertation is the forwarding plane introduced by NDN. Communication in NDN is done by exchanging Interest and Data packets.
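The Interest/Data exchange and the forwarding-plane tables (Content Store, Pending Interest Table, Forwarding Information Base) can be illustrated with a toy sketch; the class and field names are illustrative, and real NDN forwarders perform longest-prefix matching and much more:

```python
class NdnNode:
    """Toy NDN forwarding plane: Content Store (CS), Pending Interest
    Table (PIT) and Forwarding Information Base (FIB)."""
    def __init__(self, name):
        self.name = name
        self.cs = {}      # data name -> cached content
        self.pit = {}     # data name -> set of faces waiting for the data
        self.fib = {}     # name prefix -> next-hop node

    def on_interest(self, name, from_face):
        if name in self.cs:                      # CS hit: answer directly
            from_face.on_data(name, self.cs[name])
            return
        if name in self.pit:                     # already pending: aggregate
            self.pit[name].add(from_face)
            return
        self.pit[name] = {from_face}
        prefix = name.split("/")[1]              # toy stand-in for longest-prefix match
        next_hop = self.fib.get(prefix)
        if next_hop:
            next_hop.on_interest(name, self)

    def on_data(self, name, data):
        self.cs[name] = data                     # cache on the way back
        for face in self.pit.pop(name, ()):      # satisfy all pending faces
            face.on_data(name, data)

class Consumer:
    def __init__(self): self.received = {}
    def on_data(self, name, data): self.received[name] = data
```

Data retraces the Interest's path via PIT entries, and intermediate caching means later Interests for the same name are served locally.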
A comparative study on remote tracking of parkinson's disease progression usi... – ijfcstjournal
In recent years, applications of data mining methods have become more popular in many fields of medical diagnosis and evaluation. Data mining methods are appropriate tools for discovering and extracting the knowledge available in medical databases. In this study, we divided 11 data mining algorithms into five groups and applied them to a dataset of clinical variables from patients with Parkinson's Disease (PD) to study the disease progression. The dataset includes 22 properties of 42 people, and all of our algorithms are applied to it. The Decision Table, with a correlation coefficient of 0.9985, has the best accuracy, and Decision Stump, with a correlation coefficient of 0.7919, has the lowest accuracy.
Defragmentation of Indian legal cases with... – ijfcstjournal
The main aim of this research paper is to develop a rule-based knowledge database for a legal expert system for the Consumer Protection Act, a domain within the Indian legal system which is often in demand. The knowledge database developed here will further help the legal expert system determine the type of a case with respect to the Indian judicial system. The main aim of the study is to build a prototype which will be rule-based in nature. The development of the rule-based knowledge database is the first phase in the development of a comprehensive rule-based legal expert system for the Consumer Protection Act, which will be of great help in the process of solving consumer-related cases.
A hybrid fuzzy ANN approach for software effort estimation – ijfcstjournal
This document presents a study that develops a software effort estimation model using an Adaptive Neuro Fuzzy Inference System (ANFIS). The study evaluates the proposed ANFIS model using COCOMO81 datasets and compares its performance to an Artificial Neural Network (ANN) model and the intermediate COCOMO model. The results show that the ANFIS model provides better estimates than the ANN and COCOMO models, with lower values for metrics like the Root Mean Square Error and Magnitude of Relative Error.
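The evaluation metrics named above (Root Mean Square Error and Magnitude of Relative Error, usually averaged as MMRE) are standard and easy to state precisely; a minimal sketch:

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error between actual and estimated efforts."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

Lower values of both metrics indicate better effort estimates, which is the sense in which the ANFIS model outperforms the ANN and COCOMO models.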
Multi-agent systems (autonomous agents) and knowledge discovery (data mining) are two active
areas in information technology. Bringing these two communities together has unveiled a tremendous
potential for new opportunities and wider applications through the synergy of agents and data mining. Multi-agent systems
(MAS) often deal with complex applications that require distributed problem solving. In many applications, the individual and
collective behavior of the agents depends on data observed from distributed sources. Data mining technology has
emerged for identifying patterns and trends in large quantities of data. The increasing demand to scale up to massive data sets,
inherently distributed over networks with limited bandwidth and computational resources, motivated the development of
distributed data mining (DDM). DDM originated from the need to mine over decentralized data
sources: it performs partial analysis of the data at individual sites and then sends the outcome as a partial result
to other sites, where it is sometimes aggregated into a global result.
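The partial-analysis-then-aggregate pattern described above can be sketched at toy scale. In the Python below (site names, items, and thresholds are all invented for illustration), each site counts itemsets locally and a coordinator merges the counts against a global support threshold; real distributed miners such as count-distribution Apriori exchange counts per candidate-generation pass, so this is a deliberate simplification:

```python
from collections import Counter
from itertools import combinations

def local_counts(transactions, max_size=2):
    """Partial analysis at one site: count itemset occurrences locally."""
    counts = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(t), size):
                counts[itemset] += 1
    return counts

def global_frequent(site_counts, total_rows, min_support=0.5):
    """Aggregate the partial results into globally frequent itemsets."""
    merged = Counter()
    for c in site_counts:
        merged.update(c)
    return {i for i, n in merged.items() if n / total_rows >= min_support}

# Two hypothetical sites holding disjoint transaction sets.
site_a = [{"bread", "milk"}, {"bread", "beer"}]
site_b = [{"bread", "milk"}, {"milk"}]
counts = [local_counts(site_a), local_counts(site_b)]
print(global_frequent(counts, total_rows=4, min_support=0.5))
```

Only the compact count tables cross the network, which is the point of DDM: the raw transactions stay at their sites.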
This document provides an overview of multi agent-based distributed data mining. It discusses how data mining techniques have challenges when dealing with large, distributed data sources. Multi-agent systems can help address these challenges by allowing for distributed problem solving across decentralized data sources. The document then discusses how agent computing is well-suited for distributed data mining applications due to properties like decentralization, autonomy, and reactivity. It provides examples of application domains for distributed data mining and outlines key aspects like interoperability, dynamic system configuration, and performance that agent-based distributed data mining systems should address.
Distributed Data mining using Multi Agent dataIRJET Journal
This document discusses using multi-agent systems to perform distributed data mining. It begins by defining distributed data mining as mining data located across different sites to avoid transferring large volumes of data and address security issues. It then discusses how multi-agent systems can improve distributed data mining by dealing with complex, distributed data systems. Specifically, agents can retrieve relevant data from distributed databases and identify patterns in the observed data. The document focuses on the synergy between multi-agent systems and distributed data mining, with agents working together in a distributed manner to perform classification and other data mining tasks on distributed data sources while preserving data privacy.
This document discusses using Hidden Markov Model (HMM) forward chaining techniques for prefetching in distributed file systems (DFS) for cloud computing. It begins by introducing DFS for cloud storage and issues like load balancing. It then discusses using HMM to analyze client I/O and predict future requests to prefetch relevant data. The HMM forward algorithm would be used to prefetch data from storage servers to clients proactively. This could improve performance by reducing client wait times for requested data in DFS for cloud applications.
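The forward algorithm the summary refers to is standard HMM machinery. A minimal sketch follows, with a hypothetical two-state model of client access modes; the states, probabilities, and observation symbols are invented for illustration and are not from the paper:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """HMM forward algorithm: P(observation sequence) by dynamic programming."""
    # alpha[t][s] = P(obs[0..t], state at t = s)
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        alpha.append({
            s: sum(alpha[-1][p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        })
    return sum(alpha[-1].values())

# Toy model: hidden access modes, observed block-request patterns.
states = ("sequential", "random")
start_p = {"sequential": 0.6, "random": 0.4}
trans_p = {"sequential": {"sequential": 0.7, "random": 0.3},
           "random": {"sequential": 0.4, "random": 0.6}}
emit_p = {"sequential": {"next": 0.8, "far": 0.2},
          "random": {"next": 0.3, "far": 0.7}}

print(forward(("next", "next"), states, start_p, trans_p, emit_p))
```

A prefetcher of the kind described would compare such sequence likelihoods for candidate future requests and stage the most probable blocks ahead of time.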
Coordination issues of multi agent systems in distributed data miningIAEME Publication
This document discusses coordination issues in multi-agent systems for distributed data mining. It proposes an agent-based approach for distributed data clustering and classification. The key points are:
1. Distributed data mining uses multiple agents that can autonomously access decentralized data sources for mining. This addresses issues of data distribution, privacy and security.
2. The proposed approach uses different agent types like client agents, service agents, and mobile agents to coordinate the distributed data mining process.
3. Coordination challenges include handling multiple concurrent data mining tasks, enabling agent reuse and coordination, and ensuring scalability and adaptability to changes in data sources.
DISTRIBUTED AND BIG DATA STORAGE MANAGEMENT IN GRID COMPUTINGijgca
Big data storage management is one of the most challenging issues for Grid computing environments, since data-intensive applications frequently involve a high degree of data access locality. Grid applications typically deal with large amounts of data. In traditional approaches, high-performance computing relies on dedicated servers for data storage and data replication. In this paper we present a new mechanism for distributed big data storage and resource discovery services, built on an architecture named Dynamic and Scalable Storage Management (DSSM) for grid environments. This allows grid computing to share not only computational cycles but also storage space. The storage can be transparently accessed from any grid machine, allowing easy data sharing among grid users and applications. The concept of virtual IDs, which allows the creation of virtual spaces, is introduced and used. DSSM divides all Grid Oriented Storage devices (nodes) into multiple geographically distributed domains to exploit locality and simplify intra-domain storage management. Grid-service-based storage resources are adopted so that simple modular services can be stacked piece by piece as demand grows. To this end, the paper is organized along four axes: the DSSM architecture and algorithms; the mapping of storage resources and resource discovery into grid services; the evaluation of a prototype system for dynamics, scalability, and bandwidth; and a discussion of the results. Algorithms at the bottom and upper levels for standardized dynamic and scalable storage management, along with higher bandwidths, have been designed.
A CLOUD BASED ARCHITECTURE FOR WORKING ON BIG DATA WITH WORKFLOW MANAGEMENTIJwest
Real environments produce collections of noisy and vague data, known as Big Data. To work on
such data, middleware has been developed and is now very widely used. The challenge of
working with Big Data lies in its processing and management. An integrated management system is required
to provide a solution for integrating data from multiple sensors and maximizing the chance of meeting the target,
in a situation where the system has constant time constraints on processing and real-time decision-making.
A reliable data fusion model must meet this requirement and let the user steadily monitor the data
stream. With the widespread use of workflow interfaces, this requirement can be addressed, but working
with Big Data remains challenging. We provide a multi-agent cloud-based architecture that takes a higher-level view to
solve this problem. The architecture enables Big Data fusion through a workflow management
interface. The proposed system is capable of self-repair in the presence of risks, and its risk is low.
The document discusses security issues in distributed database systems. It begins by defining distributed databases and their architecture. It then discusses three main security aspects: access control, authentication, and encryption. The document also discusses distributed database system design considerations like concurrency control and data fragmentation. Emerging security tools for distributed databases mentioned include data warehousing, data mining, collaborative computing, distributed object systems, and web applications. Maintaining security when building and querying data warehouses from multiple sources is highlighted as a key challenge.
A review of multi-agent mobile robot systems applicationsIJECEIAES
A multi-agent robot system (MARS) is one of the most important topics nowadays. The basic task of this system is based on distributive and cooperative work among agents (robots). It combines two important systems; multi-agent system (MAS) and multi-robots system (MRS). MARS has been used in many applications such as navigation, path planning detection systems, negotiation protocol, and cooperative control. Despite the wide applicability, many challenges still need to be solved in this system such as the communication links among agents, obstacle detection, power consumption, and collision avoidance. In this paper, a survey of the motivations, contributions, and limitations for the researchers in the MARS field is presented and illustrated. Therefore, this paper aims at introducing new study directions in the field of MARS.
This document describes a proposed multi-agent system for searching distributed data. The system uses three types of agents - coordinator agents, search agents, and local agents. Coordinator agents coordinate the retrieval process by creating search agents and collecting results. Search agents carry queries to nodes containing relevant databases. Local agents reside at nodes with databases, accept queries from search agents, search the databases for answers, and return results to the search agents. The system aims to retrieve data from distributed databases with minimum network bandwidth consumption using this multi-agent approach.
A flexible, efficient, secure, and intelligent networking architecture is required to process massive data; existing network architectures are largely incapable of
handling it. Big data pushes network resources to their limits, resulting in network congestion, poor performance, and detrimental user experiences. This entry presents the current state-of-the-art research challenges and potential solutions in big data networking. More specifically, it presents the state of networking problems for big data in terms of requirements, capacity, management, and
data processing, and introduces the architectures of the MapReduce and Hadoop paradigms along with research
requirements, fabric networks, and software defined networks used in today's rapidly growing digital world, comparing and contrasting them to identify relevant drawbacks and solutions.
Dynamic Resource Provisioning with Authentication in Distributed DatabaseEditor IJCATR
Data centers have the largest energy consumption in power sharing. Public cloud workloads carry the different
priorities and performance requirements of various applications [4]. Cloud data centers are capable of sensing opportunities to present
different programs. The proposed construction addresses security levels and privacy leakage in a distributed
cloud system; dealing with its persistent characteristics yields a substantial increase in information that can be used to augment
profit, reduce overhead, or both. Data mining is the process of analyzing data from different perspectives and summarizing it into useful
information. Three empirical algorithms have been proposed; their assignment-estimation ratios are analyzed theoretically and
compared using real Internet latency data to assess the testing methods.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Analyse the performance of mobile peer to Peer network using ant colony optim...IJCI JOURNAL
The document describes analyzing the performance of a mobile peer-to-peer network using ant colony optimization. It proposes using a distributed spanning tree (DST) structure to improve efficiency by reducing the large number of messages. The DST is optimized using ant colony optimization to give an optimal solution. Simulation results show the approach reduces the number of messages, average delay, and increases packet delivery ratio in the network.
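The ant colony optimization step can be illustrated by its two core operations: probabilistic next-hop selection weighted by pheromone, and evaporation plus deposit after a path is found. This is a generic ACO sketch with invented node names, not the paper's DST-specific formulation:

```python
import random

def ant_step(current, neighbors, pheromone, alpha=1.0):
    """Pick the next hop with probability proportional to pheromone^alpha."""
    weights = [pheromone[(current, n)] ** alpha for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def update(pheromone, path, cost, rho=0.5, q=1.0):
    """Evaporate all trails, then deposit q/cost on the edges of a found path."""
    for edge in pheromone:
        pheromone[edge] *= (1 - rho)
    for edge in zip(path, path[1:]):
        pheromone[edge] += q / cost

# Hypothetical four-node overlay with uniform initial pheromone.
pher = {("a", "b"): 1.0, ("a", "c"): 1.0, ("b", "d"): 1.0, ("c", "d"): 1.0}
nxt = ant_step("a", ["b", "c"], pher)
update(pher, ["a", "b", "d"], cost=2.0)
print(pher[("a", "b")])  # 1.0 * 0.5 + 1.0/2.0 = 1.0
```

Repeating these two steps over many ants concentrates pheromone on low-cost routes, which is how the optimized spanning tree emerges.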
BIG DATA NETWORKING: REQUIREMENTS, ARCHITECTURE AND ISSUESijwmn
A flexible, efficient and secure networking architecture is required in order to process big data. However,
existing network architectures are mostly unable to handle big data. As big data pushes network resources
to the limits it results in network congestion, poor performance, and detrimental user experiences. This
paper presents the current state-of-the-art research challenges and possible solutions on big data
networking theory. More specifically, we present the state of networking issues of big data related to
capacity, management and data processing. We also present the architectures of MapReduce and Hadoop
paradigm with research challenges, fabric networks and software defined networks (SDN) that are used to
handle today’s rapidly growing digital world and compare and contrast them to identify relevant problems and
solutions.
BIG DATA NETWORKING: REQUIREMENTS, ARCHITECTURE AND ISSUESijwmn
The document discusses requirements, architectures, and issues related to networking for big data. It begins by outlining the network requirements for big data, including resiliency, congestion mitigation, performance consistency, scalability, partitioning, and application awareness. It then describes the MapReduce and Hadoop architectures commonly used for big data processing and some of the research challenges they present for networks. Finally, it discusses fabric network infrastructures and software defined networks that can help address networking needs for big data.
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORI ALGORITHM FOR HANDLING VOLUMIN...acijjournal
Apriori is one of the key algorithms to generate frequent itemsets. Analysing frequent itemset is a crucial
step in analysing structured data and in finding association relationship between items. This stands as an
elementary foundation to supervised learning, which encompasses classifier and feature extraction
methods. Applying this algorithm is crucial to understand the behaviour of structured data. Most of the
structured data in scientific domain are voluminous. Processing such kind of data requires state of the art
computing machines. Setting up such an infrastructure is expensive. Hence a distributed environment
such as a clustered setup is employed for tackling such scenarios. Apache Hadoop distribution is one of
the cluster frameworks in distributed environment that helps by distributing voluminous data across a
number of nodes in the framework. This paper focuses on map/reduce design and implementation of
Apriori algorithm for structured data analysis.
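The map/reduce shape of a single Apriori counting pass can be sketched in plain Python: the mapper emits a count for each candidate itemset contained in a transaction, and the reducer sums counts per candidate. This mirrors the structure of a Hadoop job without the framework; the data and candidates are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transaction, candidates):
    """Mapper: emit (candidate, 1) for each candidate contained in the row."""
    return [(c, 1) for c in candidates if c <= transaction]

def reduce_phase(pairs):
    """Reducer: sum the counts per candidate itemset."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}]
candidates = [frozenset(c) for c in combinations("abc", 2)]
pairs = [p for t in transactions for p in map_phase(t, candidates)]
print(reduce_phase(pairs))
```

In a real Hadoop run, the list comprehension over transactions is replaced by mappers executing on data splits across the cluster, with the shuffle grouping pairs by key before the reducers sum them.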
A Survey of Agent Based Pre-Processing and Knowledge RetrievalIOSR Journals
Abstract: Information retrieval is a major task in the present scenario, as the quantum of data is increasing at
tremendous speed. Managing and mining knowledge for different users according to their interests is the goal of every
organization, whether it is related to grid computing, business intelligence, distributed databases, or any other field.
To achieve this goal of extracting quality information from large databases, software agents have proved to be
a strong pillar. Over the decades, researchers have implemented the concept of multi-agents to carry out the process
of data mining by focusing on its various steps. Among these, data pre-processing is found to be the most
sensitive and crucial step, as the quality of the knowledge to be retrieved depends entirely on the quality of the raw
data. Many methods and tools are available to pre-process data in an automated fashion using intelligent
(self-learning) mobile agents, effectively in both distributed and centralized databases, but various quality
factors still need attention to improve the quality of the retrieved knowledge. This article provides a review of
the integration of these two emerging fields, software agents and the knowledge retrieval process, with a focus
on the data pre-processing step.
Keywords: Data Mining, Multi Agents, Mobile Agents, Preprocessing, Software Agents
DWDM-RAM: a data intensive Grid service architecture enabled by dynamic optic...Tal Lavian Ph.D.
The DWDM-RAM project develops an architecture for data-intensive grid services enabled by dynamic optical networks. It encapsulates optical network resources like wavelengths and lightpaths as grid services. This allows applications to schedule large data transfers using these network resources. The architecture consists of application and resource middleware layers. It is being implemented on the OMNInet photonic testbed to demonstrate on-demand and scheduled data retrieval using a dynamically switched DWDM network. In summary, DWDM-RAM schedules high-bandwidth network resources through dynamic lightpath provisioning and makes large-scale data services accessible via a grid service interface.
Similar to Agent based frameworks for distributed association rule mining an analysis (20)
ENHANCING ENGLISH WRITING SKILLS THROUGH INTERNET-PLUS TOOLS IN THE PERSPECTI...ijfcstjournal
This investigation delves into incorporating a hybridized memetic strategy within the framework of English
composition pedagogy, leveraging Internet Plus resources. The study aims to provide an in-depth analysis
of how this method influences students’ writing competence, their perceptions of writing, and their
enthusiasm for English acquisition. Employing an explanatory research design that combines qualitative
and quantitative methods, the study collects data through surveys, interviews, and observations of students’
writing performance before and after the intervention. Findings demonstrate a beneficial impact of
integrating the memetic approach alongside Internet Plus tools on the writing aptitude of English as a
Foreign Language (EFL) learners. Students reported increased engagement with writing, attributing it to
the use of Internet plus tools. They also expressed that the memetic approach facilitated a deeper
understanding of cultural and social contexts in writing. Furthermore, the findings highlight a significant
improvement in students’ writing skills following the intervention. This study provides significant insights
into the practical implementation of the memetic approach within English writing education, highlighting
the beneficial contribution of Internet Plus tools in enriching students' learning journeys.
A SURVEY TO REAL-TIME MESSAGE-ROUTING NETWORK SYSTEM WITH KLA MODELLINGijfcstjournal
Message routing over a network is one of the most fundamental concepts in communication, which requires
simultaneous transmission of messages from a source to a destination. In terms of Real-Time Routing, it
refers to the addition of a timing constraint in which messages should be received within a specified time
delay. This study involves Scheduling, Algorithm Design and Graph Theory which are essential parts of
the Computer Science (CS) discipline. Our goal is to investigate an innovative and efficient way to present
these concepts in the context of CS Education. In this paper, we will explore the fundamental modelling of
routing real-time messages on networks. We study whether it is possible to have an optimal on-line
algorithm for the Arbitrary Directed Graph network topology. In addition, we will examine the message
routing’s algorithmic complexity by breaking down the complex mathematical proofs into concrete, visual
examples. Next, we explore the Unidirectional Ring topology in finding the transmission’s
“makespan”.Lastly, we propose the same network modelling through the technique of Kinesthetic Learning
Activity (KLA). We will analyse the data collected and present the results in a case study to evaluate the
effectiveness of the KLA approach compared to the traditional teaching method.
A COMPARATIVE ANALYSIS ON SOFTWARE ARCHITECTURE STYLESijfcstjournal
Software architecture is the structural solution that achieves the overall technical and operational
requirements of software development. Software engineers apply software architectures in their
software system developments; however, they lack basic benchmarks for selecting software
architecture styles, possible components, integration methods (connectors), and the exact application of
each style.
The objective of this research work was a comparative analysis of software architecture styles by their
weaknesses and benefits, to aid selection by the programmer at design time. Finally, in this study,
the researcher has identified architectural styles, their weaknesses, strengths, and application areas, along with
the component, connector, and interface for each selected architectural style.
SYSTEM ANALYSIS AND DESIGN FOR A BUSINESS DEVELOPMENT MANAGEMENT SYSTEM BASED...ijfcstjournal
A design of a sales system for professional services requires a comprehensive understanding of the
dynamics of sale cycles and how key knowledge for completing sales is managed. This research describes
a design model of a business development (sales) system for professional service firms based on the Saudi
Arabian commercial market, which takes into account the new advances in technology while preserving
unique or cultural practices that are an important part of the Saudi Arabian commercial market. The
design model has combined a number of key technologies, such as cloud computing and mobility, as an
integral part of the proposed system. An adaptive development process has also been used in implementing
the proposed design model.
AN ALGORITHM FOR SOLVING LINEAR OPTIMIZATION PROBLEMS SUBJECTED TO THE INTERS...ijfcstjournal
Frank t-norms are a parametric family of continuous Archimedean t-norms whose members are also strict
functions. This family of t-norms is often called the family of fundamental t-norms because of the
role it plays in several applications. In this paper, optimization of a linear objective function with fuzzy
relational inequality constraints is investigated. The feasible region is formed as the intersection of two
fuzzy inequality systems, with the Frank family of t-norms considered as the fuzzy composition. First, the
resolution of the feasible solution set is studied, where the two fuzzy inequality systems are defined with
max-Frank composition. Second, some related basic and theoretical properties are derived. Then, a
necessary and sufficient condition and three other necessary conditions are presented to conceptualize the
feasibility of the problem. Subsequently, it is shown that a lower bound is always attainable for the optimal
objective value. Also, it is proved that the optimal solution of the problem is always resulted from the
unique maximum solution and a minimal solution of the feasible region. Finally, an algorithm is presented
to solve the problem and an example is described to illustrate the algorithm. Additionally, a method is
proposed to generate random feasible max-Frank fuzzy relational inequalities. By this method, we can
easily generate a feasible test problem and employ our algorithm to it.
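For reference, the Frank t-norm with parameter s > 0, s ≠ 1 is T_s(x, y) = log_s(1 + (s^x − 1)(s^y − 1)/(s − 1)), and the max-Frank composition used in the constraints takes, for each row, the maximum of T_s over the components. A small Python sketch (variable names are illustrative; this is not the paper's algorithm, only the composition it builds on):

```python
import math

def frank_tnorm(x, y, s):
    """Frank t-norm T_s(x, y) for s > 0, s != 1."""
    return math.log(1 + (s ** x - 1) * (s ** y - 1) / (s - 1), s)

def max_frank(A, x, s):
    """max-Frank composition: (A o x)_i = max_j T_s(a_ij, x_j)."""
    return [max(frank_tnorm(a, xj, s) for a, xj in zip(row, x)) for row in A]

# T_s(1, y) = y: 1 is the neutral element of every t-norm.
print(frank_tnorm(1.0, 0.7, 2.0))
print(max_frank([[1.0, 0.0], [0.3, 0.8]], [0.6, 1.0], 2.0))
```

Checking feasibility of a system A ∘ x ≤ b (or ≥ b) then amounts to evaluating `max_frank` at candidate solutions and comparing componentwise with b.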
LBRP: A RESILIENT ENERGY HARVESTING NOISE AWARE ROUTING PROTOCOL FOR UNDER WA...ijfcstjournal
The underwater sensor network is one of the most challenging and fascinating research arenas, and it has
drawn many researchers into this field of study. In many underwater sensor applications the nodes are
constrained, and their energy is affected accordingly. Thus, the mobility of the sensor nodes through the
water environment, driven by water flow, must be accounted for in sensor-based protocol design.
Researchers have developed many routing protocols; however, those have lost their appeal over time. It is
the demand of the age to supply an energy-efficient, scalable, and robust routing protocol for underwater
sensor networks. In this work, the authors propose a routing protocol named Level Based Routing Protocol
(LBRP), aiming to offer robust, scalable, and energy-efficient routing. LBRP also promotes the most
effective use of total energy consumption and ensures packet transmission, providing additional reliability
compared to other routing protocols. In this work, the authors use the level of the forwarding node, its
residual energy, and its distance from the sending node as evidence in the multicasting comparisons.
Throughout this work, the authors obtained a recognition result of about 86.35% on average in node
multicasting performance. Simulations were run in both noisy and quiet environments, endorsing the
higher performance of the proposed protocol.
STRUCTURAL DYNAMICS AND EVOLUTION OF CAPSULE ENDOSCOPY (PILL CAMERA) TECHNOLO...ijfcstjournal
This research paper examines and re-evaluates the technological innovation, theory, structural dynamics,
and evolution of Pill Camera (capsule endoscopy) technology in changing the manner of small
bowel (intestine) examination in humans. The Pill Camera is made of a sealed
biocompatible material able to withstand acid, enzymes, and other chemicals in the stomach; it is a
technology that helps medical practitioners, especially general physicians and
gastroenterologists, examine and re-examine the intestine for possible bleeding or infection. Before the
advent of the Pill Camera, colonoscopy was the usual method, but research
showed that some parts of the bowel cannot be reached by the traditional method alone, hence the need
for the Pill Camera. Countless deaths from stomach diseases such as polyps, inflammatory bowel disease
(Crohn's disease), cancers, ulcers, anaemia, and tumours of the small intestine, which would ordinarily have
been detected by sophisticated technology like the Pill Camera, have become the norm in developing nations.
This paper therefore not only examines and re-evaluates the Pill Camera's innovation, theory,
structural dynamics, and evolution, but also aims to create awareness among both medical
practitioners and the public.
AN OPTIMIZED HYBRID APPROACH FOR PATH FINDINGijfcstjournal
Path finding algorithms address the problem of finding the shortest path from a source to a destination while
avoiding obstacles. Various search algorithms exist, namely A*, Dijkstra's, and ant colony optimization. Unlike
most path finding algorithms, which require the destination's co-ordinates to compute a path, the proposed
algorithm comprises a new method that finds a path using backtracking without requiring the destination's
co-ordinates. Moreover, in existing path finding algorithms, the number of iterations required to find a path is
large. To overcome this, an algorithm is proposed that reduces the number of iterations required to
traverse the path. The proposed algorithm is a hybrid of backtracking and a new technique (a modified
8-neighbor approach). It can become an essential part of location-based, network, and gaming
applications, as well as grid traversal, navigation, mobile robotics, and artificial intelligence.
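The idea of searching by backtracking without destination co-ordinates can be illustrated on a small grid. This is a generic sketch, not the paper's modified 8-neighbor algorithm: the goal is discovered as a marked cell during traversal rather than supplied as co-ordinates:

```python
def find_path(grid, r, c, path=None, seen=None):
    """Backtracking search: walk the grid until a cell marked 'G' is found.
    The goal's co-ordinates are never supplied; 0 = free cell, 1 = obstacle."""
    path, seen = path or [], seen or set()
    if (r < 0 or r >= len(grid) or c < 0 or c >= len(grid[0])
            or (r, c) in seen or grid[r][c] == 1):
        return None
    path, seen = path + [(r, c)], seen | {(r, c)}
    if grid[r][c] == "G":
        return path
    # Try the four neighbours; an exhausted branch returns None (backtrack).
    for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        found = find_path(grid, r + dr, c + dc, path, seen)
        if found:
            return found
    return None

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, "G"]]
print(find_path(grid, 0, 0))
```

Using an 8-neighbor move set instead of the 4-neighbor one above would allow diagonal steps, which is the direction the paper's modification takes.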
EAGRO CROP MARKETING FOR FARMING COMMUNITYijfcstjournal
Agriculture is the major occupation in India, and the people involved in it largely belong to the poorer
classes. The farming community is unaware of the new techniques and agro-machines that could take the
field of agriculture to greater heights. Though farmers work hard, they are cheated by agents in today's
market. This serves as an opportunity to solve the problems that farmers face in the current world. eAgro
crop marketing will serve as a better way for farmers to sell their products within the country, requiring
only modest knowledge of using the website. It will provide farmers with information about current market
rates of agro-products, their sales history, and the profits earned on each sale. The site will also help
farmers learn about market information and view the Government's agricultural schemes for farmers.
EDGE-TENACITY IN CYCLES AND COMPLETE GRAPHSijfcstjournal
It is well known that the tenacity is a proper measure for studying vulnerability and reliability in graphs.
Here, a modified edge-tenacity of a graph is introduced based on the classical definition of tenacity.
Properties and bounds for this measure are introduced; meanwhile edge-tenacity is calculated for cycle
graphs and also for complete graphs.
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEMijfcstjournal
This paper provides a brief description of the Genetic Algorithm (GA), the Simulated Annealing (SA)
algorithm, the Backtracking (BT) algorithm, and the Brute Force (BF) search algorithm; explains how the
proposed GA, the proposed SA using GA, the BT algorithm, and the BF search algorithm can be
employed to find the best solution of the N-Queens problem; and compares these four algorithms. It is
primarily a review-based work, but the four algorithms were also written and implemented. From the
results, it was found that the proposed GA performed better than the proposed SA using GA, the BT
algorithm, and the BF search algorithm, and that it also provided a better fitness value (solution) than the
other three for different values of N. It was also noticed that the proposed GA took more time to produce a
result than the proposed SA using GA.
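Of the four algorithms compared, backtracking is the most compact to show. A standard backtracking solution counter for N-Queens (a generic textbook version, not the paper's implementation):

```python
def n_queens(n):
    """Count solutions to the N-Queens problem by backtracking.
    cols[r] holds the column of the queen already placed on row r."""
    def place(row, cols):
        if row == n:
            return 1  # all n queens placed: one complete solution
        total = 0
        for col in range(n):
            # Conflict if same column, or same diagonal (|dc| == |dr|).
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols)):
                total += place(row + 1, cols + [col])
        return total
    return place(0, [])

print(n_queens(8))  # 92 solutions
```

Brute force would instead enumerate all n^n (or n!) placements and test each, which is why backtracking's early pruning dominates it as N grows; GA and SA trade completeness for speed by searching the placement space heuristically.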
PSTECEQL: A NOVEL EVENT QUERY LANGUAGE FOR VANET’S UNCERTAIN EVENT STREAMSijfcstjournal
In recent years, complex event processing technology has been used to process the VANET's temporal
and spatial event streams. However, we usually cannot obtain accurate data because of the sensing-accuracy
limitations of the system's devices; we can only obtain uncertain data from the complex and constrained
environment of the VANET. Because the VANET's event streams consist of uncertain data, they
are themselves uncertain. How to effectively express and process these uncertain event streams has become the core
issue for VANET systems. To solve this problem, we propose a novel complex event query language,
PSTeCEQL (probabilistic spatio-temporal constraint event query language). First, we give the definition
of the possible-world model of the VANET's uncertain event streams. Second, we propose the event query
language PSTeCEQL and give the syntax and operational semantics of the language. Finally, we
illustrate the validity of PSTeCEQL with an example.
CLUSTBIGFIM-FREQUENT ITEMSET MINING OF BIG DATA USING PRE-PROCESSING BASED ON...ijfcstjournal
This document describes the ClustBigFIM algorithm for frequent itemset mining of big data using pre-processing based on the MapReduce framework. The ClustBigFIM algorithm first applies k-means clustering to generate clusters from large datasets. It then mines frequent itemsets from the generated clusters using the Apriori and Eclat algorithms within the MapReduce programming model. Experimental results on several datasets show that the ClustBigFIM algorithm increases execution efficiency compared to the BigFIM algorithm by applying k-means clustering as a pre-processing step before frequent itemset mining.
A MUTATION TESTING ANALYSIS AND REGRESSION TESTINGijfcstjournal
This document discusses mutation testing and regression testing. Mutation testing involves intentionally introducing small errors or mutations into code and then testing if test suites can detect the errors. Regression testing is done after code changes to ensure the changes did not unintentionally break existing functionality. The document provides examples and algorithms to illustrate how mutation testing and regression testing work. It also discusses advantages like improving test quality and disadvantages like time required. Overall, the document examines these two software testing techniques.
GREEN WSN- OPTIMIZATION OF ENERGY USE THROUGH REDUCTION IN COMMUNICATION WORK...ijfcstjournal
Advances in micro-fabrication and communication techniques have led to an unimaginable proliferation
of WSN applications. Research is focused on reducing setup and operational energy costs. The bulk of
operational energy costs is linked to the communication activities of a WSN, so any progress towards
energy efficiency has the potential for huge savings globally. Therefore, every energy-efficient step is an
endeavour to cut costs and 'Go Green'. In this paper, we propose a framework to reduce communication
workload through in-network compression, multiple query synthesis at the base station, and modification
of query syntax through the introduction of static variables. These are general approaches which can be
used in any WSN irrespective of application.
A NEW MODEL FOR SOFTWARE COSTESTIMATION USING HARMONY SEARCHijfcstjournal
Accurate and realistic estimation has always been considered a great challenge in the software industry.
Software Cost Estimation (SCE) is the standard practice used to manage software projects, and planning
the other activities of a project depends on the estimate determined in its initial stages. In fact, estimation
is confronted with a number of uncertainties and barriers, and assessing previous projects is essential to
solve this problem. Several models have been developed for the analysis of software projects. The classical
reference method is the COCOMO model, but other methods are also applied, such as Function Point (FP)
and Lines of Code (LOC); meanwhile, experts' opinions matter in this regard. In recent years, the growth
of meta-heuristic algorithms and their combination with high-accuracy methods have brought about great
achievements in software engineering. Meta-heuristic algorithms, which can analyze data from multiple
dimensions and identify the optimal solution among them, are analytical tools for data analysis. In this
paper, we have used the Harmony Search (HS) algorithm for SCE. The proposed model has been assessed
on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the
HS algorithm is a good way of determining the weights of the similarity-measure factors of software
effort and reducing the MRE error.
AGENT ENABLED MINING OF DISTRIBUTED PROTEIN DATA BANKSijfcstjournal
Mining biological data is an emergent area at the intersection of bioinformatics and data mining
(DM). The intelligent agent based model is a popular approach to constructing Distributed Data Mining
(DDM) systems that address scalable mining over large-scale distributed data. The nature of the
associations between different amino acids in proteins has also been a subject of great interest. There is a
strong need to develop new models to exploit and analyze the available distributed biological data sources.
In this study, we have designed and implemented a multi-agent system (MAS) called Agent enriched
Quantitative Association Rules Mining for Amino Acids in distributed Protein Data Banks
(AeQARM-AAPDB). Such globally strong association rules enhance understanding of protein composition
and are desirable for the synthesis of artificial proteins. A real protein data bank is used to validate the system.
International Journal on Foundations of Computer Science & Technology (IJFCST)ijfcstjournal
International Journal on Foundations of Computer Science & Technology (IJFCST) is a bi-monthly peer-reviewed and refereed open access journal that publishes articles which contribute new results in all areas of the Foundations of Computer Science & Technology. Over the last decade, there has been an explosion in the field of computer science to solve various problems from mathematics to engineering. This journal aims to provide a platform for exchanging ideas in new emerging trends that need more focus and exposure, and will attempt to publish proposals that strengthen our goals. Topics of interest include, but are not limited to, the following:
Because technology has been used so widely in recent decades, cybercrime has become a significant
international issue as a result of the huge damage it causes to businesses and even to ordinary users of
technology. The main aim of this paper is to shed light on digital crimes and give an overview of what a
person in computer science needs to know about this new type of crime. The paper has three sections:
Introduction to Digital Crime, which gives fundamental information about digital crimes; Digital Crime
Investigation, which presents different investigation models; and a third section about Cybercrime Law.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
What do a Lego brick and the XZ backdoor have in common?Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might seem to have in common only the fact that they are both building blocks, or dependencies of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations and training activities. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko she cultivates her curiosity about astronomy (hence her nickname, deneb_alpha).
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Agent based frameworks for distributed association rule mining an analysis
International Journal in Foundations of Computer Science & Technology (IJFCST), Vol.5, No.1, January 2015
DOI: 10.5121/ijfcst.2015.5102
AGENT BASED FRAMEWORKS FOR DISTRIBUTED
ASSOCIATION RULE MINING: AN ANALYSIS
G. S. Bhamra¹, A. K. Verma² and R. B. Patel³
¹M. M. University, Mullana, Haryana, 133207, India
²Thapar University, Patiala, Punjab, 147004, India
³Chandigarh College of Engineering & Technology, Chandigarh, 160019, India
ABSTRACT
Distributed Association Rule Mining (DARM) is the task of generating the globally strong association
rules from the global frequent itemsets in a distributed environment. The intelligent agent based model is
a popular approach to constructing Distributed Data Mining (DDM) systems that address scalable mining
over large-scale distributed data, and is characterized by a variety of agents coordinating and
communicating with each other to perform the various tasks of the data mining process. This study
performs a comparative analysis of the existing agent based frameworks for mining association rules
from distributed data sources.
KEYWORDS
Knowledge Discovery, Association Rules, Intelligent Agents, Multi-Agent System.
1. INTRODUCTION
Data Mining (DM) is a process to automatically extract interesting and valid data patterns or trends
representing knowledge implicitly stored in large databases [1], [2]. The traditional approach to
knowledge discovery in a distributed environment creates a single, centrally integrated data repository
called a Data Warehouse (DW), and DM techniques are then used to mine the data and extract the
knowledge [3]. The central DW based approach, however, is ineffective or infeasible because of the
heavy storage and computational costs involved in managing data from ever-increasing and continuously
updated distributed resources where data is produced in streams. Network communication cost is also
incurred while transferring huge volumes of data over wired or wireless networks in limited-bandwidth
scenarios. It is also not desirable to centrally collect the privacy-sensitive raw distributed data of business
organizations, such as banking and telecommunication companies, as they want only knowledge to be
exchanged globally. Data from modern business organizations are not only geographically distributed
but also horizontally or vertically fragmented, making it difficult if not impossible to combine them in a
central location. The performance and scalability of a DM application can be increased by distributing
the workload among sites [4].
Intelligent software agent technology is an interdisciplinary technology inherited from Distributed
Computing (DC), Distributed Artificial Intelligence (DAI), advanced knowledge base systems, and
human-computer interaction. The motivating idea of this technology is the development and efficient
utilization of autonomous software objects called agents, which have access to geographically distributed
and heterogeneous information resources and simplify the complexities of DC. Agents are autonomous,
adaptive, reactive, pro-active, social, cooperative, collaborative and flexible. They also support temporal
continuity and mobility (weak and strong) within the network. An intelligent agent with the mobility
feature is known as a Mobile Agent (MA). An MA
migrates from node to node in a heterogeneous network without losing its operability. It can continue to
function even if the user is disconnected from the network. It carries its code, execution state and other
data while on the move. On reaching a node, the MA is delivered to an Agent Execution Environment
(AEE) where its executable parts start running. Upon completion of the desired task, it delivers the
results to the home node. With an MA, a single serialized object carrying only a small amount of
resultant data is transmitted over the network, thus reducing network bandwidth consumption, latency
(response time delay) and network traffic. MAs are robust, fault-tolerant and useful for low-cost,
lightweight, portable computing devices with low processing power, memory constraints, and
intermittent low-bandwidth connections. An agent's strong mobility feature is helpful in load balancing
of processor- and memory-intensive tasks. The number of participating hosts can be increased without
any significant impact on the complexity of the application. MAs are self-contained and highly reusable.
A parent agent can also clone several child agents to implement concurrent operations and improve
efficiency. MAs also facilitate the rapid prototyping of distributed applications, as software components
can be flexibly and dynamically deployed in the form of MAs.
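As an illustrative sketch of the serialize-migrate-execute-return cycle just described (not tied to any particular agent platform, and moving only the agent's *state*; real MA platforms also ship code and enforce security), the names `MobileAgent`, `query` and the simulated nodes below are all hypothetical:

```python
import pickle

class MobileAgent:
    """Toy agent: carries its task parameters and partial results as state."""
    def __init__(self, query):
        self.query = query   # task description carried to the remote node
        self.result = None   # small resultant data carried back home

    def execute(self, local_data):
        # Runs inside the remote node's agent execution environment (AEE);
        # only the matching records are kept, never the raw dataset.
        self.result = [row for row in local_data if self.query in row]

# "Migration": serialize the agent into a single object and ship the bytes.
agent = MobileAgent(query="alpha")
wire_bytes = pickle.dumps(agent)          # what actually crosses the network

# At the remote node: deserialize, execute against local data, send back.
remote_agent = pickle.loads(wire_bytes)
remote_agent.execute(["alpha-1", "beta-2", "alpha-3"])
home_bytes = pickle.dumps(remote_agent)   # now carries only the small result

returned = pickle.loads(home_bytes)
print(returned.result)                    # ['alpha-1', 'alpha-3']
```

Note the bandwidth argument from the text: the bytes returned home contain only the filtered result, not the remote node's raw data.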
A Mobile Agent Platform (MAP)/Agent Execution Environment (AEE)/Agent Development Toolkit is a
distributed middleware server application that provides the appropriate functionality for MAs to
authenticate, execute, communicate (with other agents, users, and other platforms), migrate to other
platforms, and use system resources in a secure way. A Multi-Agent System (MAS) is a distributed
application comprised of multiple interacting intelligent agent components [5].
2. DISTRIBUTED DATA MINING
The issues discussed above for centralized DW based DM have resulted in the development of techniques
for parallel knowledge discovery (PKD) and distributed knowledge discovery (DKD). Distributed Data
Mining (DDM) is the related pattern extraction problem in DKD. DDM is concerned with the application
of classical DM procedures in a DC environment, trying to make the best of the available resources,
including communication networks, computing units, distributed data repositories and human factors.
In DDM, mining takes place both locally at each geographically distributed site and at a global level
where the local knowledge is merged in order to discover global knowledge. DDM techniques are
scalable when their performance does not degrade much with an increase in data set size or in the number
of distributed sites involved. A DDM system is a very complex entity comprised of many components:
mining algorithms, a communication subsystem, resource management, task scheduling, user interfaces,
etc. It should provide efficient access to both distributed data and computing resources, monitor the
entire mining procedure, and present results to users in appropriate formats. A successful DDM system is
also flexible enough to adapt to various situations; it should dynamically identify the optimal mining
strategy under the given resources and provide an easy way to update its components [3], [4], [6], [7], [8].
In an important general DDM architecture (Figure 1) proposed in [3], processing at the different
distributed nodes generates several local models which are then aggregated to form a global model
representing the global knowledge. The authors in [4] proposed another phase-wise DDM approach. In
the first phase, local distributed databases are analyzed. Then, the discovered knowledge is transmitted to
a merger site, where all the distributed local models are integrated. The global knowledge is then
transmitted back to update the distributed databases. In some cases, instead of having a merger site, the
local models are broadcast to all other sites, so that each site can compute the global model in parallel.
Data replication, data fragmentation, adaptation, the interestingness property and privacy preservation of
the local data are some of the issues that need to be addressed when designing DDM applications [7].
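The local-model/merger-site flow described above can be sketched minimally. In this illustrative sketch (all names and data are hypothetical, not from the paper), the "local model" is simply a per-site table of item support counts, and only these compact models, never the raw transactions, reach the merger site:

```python
from collections import Counter

# Per-site "local model": item support counts mined from that site's data only.
def local_model(transactions):
    counts = Counter()
    for t in transactions:
        counts.update(set(t))   # count each item once per transaction
    return counts

# Merger site: aggregate the compact local models into a global model;
# only the models cross the network, never the raw transactions.
def merge(models):
    merged = Counter()
    for m in models:
        merged.update(m)        # Counter.update adds counts together
    return merged

site1 = [["bread", "milk"], ["bread"]]
site2 = [["milk"], ["bread", "milk"]]
global_model = merge([local_model(site1), local_model(site2)])
print(global_model["bread"], global_model["milk"])   # 3 3
```

The broadcast variant mentioned in the text would simply send each local model to every site, which would then run `merge` locally in parallel.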
Figure 1. Distributed Data Mining Framework [3]
2.1. Why Agents for DDM?
The above-mentioned problems and challenges of DDM, together with the inherent features of software
agents of being autonomous and capable of adaptive and deliberative reasoning, clearly indicate the use
of MA technology for the development of advanced DDM systems. Agent Mining, also called Agent
enriched Data Mining or Multi Agent driven Data Mining, is an emerging interdisciplinary area that
integrates MAS, DM and knowledge discovery, machine learning and other relevant areas such as
statistics and the semantic web. The interaction and integration between agent technology, DM and
machine learning come from the intrinsic challenges, needs and opportunities faced by the constituent
technologies [9]. All of the agent based DDM systems employ one or more agents per data site. These
agents are responsible for analyzing local data and communicate with other agents during the mining
stage. Globally coherent knowledge is synthesized via exchanges of
locally mined knowledge. However, in an agent-based model, efficient control over remote resources is
inherently difficult. The motivation for the use of agent technology in DDM stems from two reasons.
Firstly, DDM is a technology whose characteristics are intuitively suited to an agent-based approach:
modular and well-defined sub-tasks, the need to encapsulate different mining algorithms behind a
common interface, the requirement for interaction and co-operation between different systems, and the
ability to deal with distribution. From this perspective, the focus is on the collaborative and informational
aspects of agency. Secondly, agent technology is seen as addressing the specific concerns of increasing
scalability and enhancing performance by reducing the communication overhead associated with the
transfer of large volumes of data. Under this second criterion for using agents as the building blocks of
DDM systems, the focus is on the mobility aspects of agency in addition to the collaborative and
informational aspects. Usually, such systems have one agent that acts as a controlling and coordinating
entity for a task [10], [11].
2.2. Existing DDM Systems
There are predominantly three architectural frameworks for the development of DDM systems:
the client-server model, the agent-based model, and the hybrid approach which integrates the two
former techniques. The important technologies used to develop client-server DDM are the Common
Object Request Broker Architecture (CORBA), the Distributed Component Object Model (DCOM),
Enterprise Java Beans (EJB), Remote Method Invocation (RMI) and Java Database Connectivity (JDBC)
[10]. The most prominent DDM systems developed using the client-server architecture are the Kensington
Enterprise Data Mining Decision Centre [12], IntelliMiner [13] and InterAct [14].
A number of DDM solutions have been provided in recent years using various techniques, such as
distributed clustering, Bayesian learning, classification (regression), compression, and distributed
association rules, but only a few of them make use of intelligent agents [15]. The agent based model can
be further classified into systems that use mobile agents and those that use stationary agents [11]. These
systems are generally Java based to support the need for heterogeneity and platform independence. The
most prominent DDM systems developed using the agent-based architecture are PArallel Data Mining
Agents (PADMA) [16], Java Agents for Meta-Learning (JAM) [17], Besiezing knOwledge through
Distributed Heterogeneous Induction (BODHI) [18], Papyrus [19], InfoSleuth [20], Distributed
Knowledge Networks (DKN) [21], a mediator oriented agent based DDM system [22], Optimized
Incremental Knowledge Integration (OIKI) [23], and the Extendible Multi-Agent Data mining System
(EMADS) [24], [25], [26], [27].
The authors in [11] compared the DecisionCentre, IntelliMiner, InterAct, PADMA, JAM, BODHI,
Papyrus and InfoSleuth DDM systems and proposed a hybrid model called the Distributed Agent based
Mining Environment (DAME), integrating the client-server and mobile agent models for delivering
internet-based DDM services by incorporating cost metrics such as application run time estimation and
optimization of the DDM process. The authors in [28] proposed a FIPA-compliant multi-agent platform
based on mining-driven agents (Agent Academy) that offers facilities for the design, implementation and
deployment of multi agent systems. The researchers describe Agent Academy as an attempt to develop a
framework through which users can create an agent community with the ability to train and retrain its
own agents using DM techniques.
3. ASSOCIATION RULE MINING
Let DB = {T_j | 1 ≤ j ≤ D} be a transactional dataset of size D, where each transaction T is assigned an
identifier (TID), and let I = {d_i | 1 ≤ i ≤ m} be the set of all m data items in DB. A set of items in a
particular transaction T is called an itemset or pattern. An itemset P = {d_i | 1 ≤ i ≤ k}, which is a set of
k data items in a particular transaction T with P ⊆ I, is called a k-itemset. The support of an itemset,

    s(P) = (No_of_T_containing_P / D) × 100%,

is the frequency of occurrence of itemset P in DB, where No_of_T_containing_P is the support count
(sup_count) of itemset P. Frequent Itemsets (FIs) are the itemsets that appear in DB frequently, i.e., if
s(P) ≥ min_th_sup (a given minimum threshold support), then P is a frequent k-itemset. Finding such FIs
plays an essential role in mining the interesting relationships among itemsets. Frequent Itemset Mining
(FIM) is the task of finding the set of all the FIs in a transactional database. It is a CPU- and
input/output-intensive task, mainly because of the large size of the datasets involved [2].
Association Rules (ARs), first introduced in [29], are used to discover the associations (or co-
occurrences) among items in a database. An AR is an implication of the form
P ⇒ Q [support, confidence], where P ⊆ I, Q ⊆ I, and P and Q are disjoint itemsets, i.e.,
P ∩ Q = ∅. An AR is measured in terms of its support and confidence factor, where:
International Journal in Foundations of Computer Science & Technology (IJFCST), Vol.5, No.1, January 2015
Support: s(P ⇒ Q) = p(P ∪ Q) = (No_of_T_containing_both_P_and_Q / D) × 100%, the
probability of both P and Q appearing in T. We can say that s% of the transactions
support the rule P ⇒ Q, 0 ≤ s ≤ 1.0 or 0% ≤ s ≤ 100%.
Confidence: c(P ⇒ Q) = p(Q | P) = s(P ∪ Q) / s(P) = (sup_count(P ∪ Q) / sup_count(P)) × 100%,
the conditional probability of Q given P. We can say that when itemset P occurs in a
transaction there are c% chances that itemset Q will occur in that transaction,
0 ≤ c ≤ 1.0 or 0% ≤ c ≤ 100%.
An AR P ⇒ Q is said to be strong if s(P ⇒ Q) ≥ min_th_sup (given minimum threshold
support) and c(P ⇒ Q) ≥ min_th_conf (given minimum threshold confidence). Association Rule
Mining (ARM) today is one of the most important DM tasks. In ARM, all the strong ARs are
generated from the FIs. ARM can be viewed as a two-step process [30], [31]:
1. Find all the frequent k-itemsets (L_k).
2. Generate strong ARs from L_k:
a. For each frequent itemset l ∈ L_k, generate all non-empty subsets of l.
b. For every non-empty subset s of l, output the rule "s ⇒ (l − s)" if
sup_count(l) / sup_count(s) ≥ min_th_conf.
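The two-step ARM process above can be illustrated with a minimal, non-optimized sketch. The toy dataset and thresholds are assumptions for illustration, and candidates are enumerated by brute force rather than by an efficient algorithm such as Apriori [40]:

```python
from itertools import combinations

# Toy transactional dataset DB: each transaction is a set of items (illustrative data).
DB = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
D = len(DB)
min_th_sup = 0.4   # minimum threshold support
min_th_conf = 0.6  # minimum threshold confidence

def sup_count(itemset):
    """Number of transactions in DB containing every item of the itemset."""
    return sum(1 for T in DB if itemset <= T)

# Step 1: find all frequent itemsets (brute force over all candidate itemsets).
items = set().union(*DB)
frequent = {}
for k in range(1, len(items) + 1):
    for cand in combinations(sorted(items), k):
        c = sup_count(set(cand))
        if c / D >= min_th_sup:
            frequent[frozenset(cand)] = c

# Step 2: generate strong rules s => (l - s) from every frequent itemset l.
rules = []
for l, c_l in frequent.items():
    if len(l) < 2:
        continue
    for r in range(1, len(l)):
        for s in combinations(sorted(l), r):
            s = frozenset(s)
            conf = c_l / sup_count(s)     # sup_count(l) / sup_count(s)
            if conf >= min_th_conf:
                rules.append((set(s), set(l - s), c_l / D, conf))

for antecedent, consequent, sup, conf in rules:
    print(sorted(antecedent), "=>", sorted(consequent), f"[s={sup:.2f}, c={conf:.2f}]")
```

Every emitted rule satisfies both thresholds by construction, which is exactly the definition of a strong AR given above.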
3.1. Distributed Association Rule Mining
Distributed Association Rule Mining (DARM) generates the globally strong association rules
from the global FIs in a distributed environment. Because of an intrinsic data skew property of
the distributed database, it is desirable to mine the global rules for the global business decisions
and the local rules for the local business decisions.
3.1.1. Preliminaries and Definitions
A few preliminary notations and definitions, required for defining DARM and to make this study
self-contained, are as follows:
S = {S_i | 1 ≤ i ≤ n}, the n distributed sites.
S_CENTRAL, the central site.
DB_i = {T_j | 1 ≤ j ≤ D_i}, the horizontally partitioned dataset of size D_i at the local site S_i,
where each transaction T_j is assigned an identifier (TID).
DB = ∪_{i=1..n} DB_i, the aggregated dataset of size D = Σ_{i=1..n} D_i, with DB_i ∩ DB_j = ∅ for i ≠ j.
I = {d_i | 1 ≤ i ≤ m}, the total m data items in each DB_i.
L^FI_k(i), the local frequent k-itemsets at site S_i.
L^FISC_k(i), the list of support counts of the itemsets in L^FI_k(i).
L^LSAR_i, the list of locally strong association rules at site S_i.
L^TLSAR = ∪_{i=1..n} L^LSAR_i, the list of total locally strong association rules.
L^TFI_k = ∪_{i=1..n} L^FI_k(i), the list of total frequent k-itemsets.
L^GFI_k, the list of global frequent k-itemsets, i.e., the itemsets of L^TFI_k that are frequent
with respect to the aggregated dataset DB.
L^GSAR_CENTRAL, the list of globally strong association rules.
The Local Knowledge Base (LKB) at site S_i comprises L^FI_k(i), L^FISC_k(i) and L^LSAR_i, which can
provide reference to the local supervisor for local decisions. The Global Knowledge Base (GKB) at
S_CENTRAL comprises L^TLSAR, L^TFI_k, L^GFI_k and L^GSAR_CENTRAL for global decision making. If the raw data from
each of the individual databases were sent to a single database to generate the rules, certain useful
rules, which would aid in making decisions about local branches, would be lost. In such cases the
organization may miss out on certain rules that were prominent in certain branches but not
found in other branches. The frequent patterns in distributed databases are divided into three
classes [32]: (a) Local patterns- Local branches need to consider the original data in their data sets
so they can identify local patterns for local decisions; (b) High-vote patterns- Patterns that are
supported by most of the branches and are used for making global decisions; (c) Exceptional
patterns- Such patterns are strongly supported by only a few branches and used to create policies
for specific branches. Like ARM, the DARM task can also be viewed as a two-step process [31]:
1. Find the global frequent k-itemsets (L^GFI_k) from the distributed local frequent k-itemsets
(L^FI_k(i)) of the partitioned datasets.
2. Generate the globally strong association rules (L^GSAR_CENTRAL) from L^GFI_k.
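The first step can be sketched with a minimal example in which a central site aggregates local support counts from horizontally partitioned data. The site names, toy data and threshold are assumptions for illustration; the surveyed systems exchange such counts via agents rather than in-process calls:

```python
# Horizontally partitioned data at n = 3 hypothetical sites (illustrative data).
site_DBs = {
    "S1": [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}],
    "S2": [{"b", "c"}, {"a", "b"}],
    "S3": [{"a", "c"}, {"b", "c"}, {"a", "b", "c"}, {"c"}],
}
min_th_sup = 0.5

def local_sup_counts(DB_i, candidates):
    """Support count of each candidate itemset at one site (a simplified L^FISC_k(i))."""
    return {c: sum(1 for T in DB_i if c <= T) for c in candidates}

# Candidate 1-itemsets drawn from the items seen at all sites.
candidates = {frozenset({x}) for DB_i in site_DBs.values() for T in DB_i for x in T}

# Each site reports its local counts; the central site aggregates them.
D = sum(len(DB_i) for DB_i in site_DBs.values())
global_counts = {c: 0 for c in candidates}
for DB_i in site_DBs.values():
    for c, cnt in local_sup_counts(DB_i, candidates).items():
        global_counts[c] += cnt

# Global frequent 1-itemsets: aggregated support clears the global threshold.
L1_GFI = {c for c, cnt in global_counts.items() if cnt / D >= min_th_sup}
print({tuple(sorted(c)) for c in L1_GFI})
```

Because the partitions are disjoint, the global support count of an itemset is exactly the sum of its local support counts, which is what makes this count-exchange strategy correct.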
4. COMPARATIVE ANALYSIS OF EXISTING AGENT BASED DARM SYSTEMS
The existing agent based systems specifically dealing with DARM task are: Knowledge
Discovery Management System (KDMS) [33], Efficient Distributed Data Mining using
Intelligent Agents [34], Mobile Agent based Distributed Data Mining [35], An Agent based
Framework for Association Rule Mining of Distributed Data (AFARMDD) [36], [37], Multi-
Agent Distributed Association Rule Miner (MADARM) [38]. All these systems are academic
research projects. A discussion of these and a few others is given below.
A mobile-agent based distributed knowledge discovery system (MADKDS) is proposed in [33].
The various agents involved in the system are: the Data Mining Mobile Agent (DMMA),
encapsulated with a novel incremental algorithm, IAA [39], for mining the local frequent itemsets
to generate the local knowledge base and return this knowledge to the Mining Process Manager;
the Data Pre-processing Mobile Agent (PMA), to preprocess the local data and collect it back at
the central data warehouse, which increases the storage cost at the central site; and the Counter
Mobile Agent (CMA), to scan the local databases and collect the support counts of some itemsets.
In this architecture, the central site is known as the Knowledge Discovery Management System
(KDMS) and the distributed sites are called Knowledge Discovery sub-Systems (sub-KDS). All the
mobile agents are dispatched by the Mobile Agent Control Centre (MACC) at the KDMS site and
are received and handled by the Mobile Agent Execution Environment (MAEE). The MACC and
MAEE components are designed on top of the IBM Aglet Workbench [44], [45]. A parallel
itinerary is maintained for the mobile agents, and the framework is implemented in Java, with
C++ used as a dynamic link library accessed through the Java Native Interface (JNI). No
privacy-preserving techniques are used for the local knowledge. No user interface for the MAS is
designed. No cost model for the overall DARM task is discussed, and experimental validation
using a large synthetic or real dataset is also required.
An Agent-Based Framework for Association Rules Mining of Distributed Data (AFARMDD) is
proposed in [36], [37]. The main aim of this study is to protect the privacy of the local data from
being exposed to other distributed sites by encapsulating the existing techniques proposed in [42],
[43] into agents. The various agents involved in the system are: the Encrypt Secure Union Agent
(ESUA), to encapsulate the data mining operation and the encryption of the secure union
operation at each site; the Decrypt Secure Union Agent (DSUA), to encapsulate the decryption of
the secure union operation at each site; the Encrypt Sum Agent (ESA), to encapsulate the
encryption of the secure sum operation at each site; the Decrypt Sum Agent (DSA), to encapsulate
the decryption of the secure sum operation at each site; the Broadcast Agent (BA), to carry the
global frequent k-itemsets to each site; and the Over Agent (OA), to notify all the sites that the
mining operation has terminated. Parallel as well as serial itineraries are maintained for the
mobile agents. Agent Server and Local Host components are designed as the underlying AEE. The
Apriori [40] algorithm is used for mining the local frequent itemsets. Privacy-preserving
techniques for the local sensitive data are discussed and form the core of this study. No user
interface for the MAS is designed. No cost model for the overall DARM task is discussed, and
experimental validation using a large synthetic or real dataset is also required. Globally strong
association rules are also not generated.
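The secure sum operation that the ESA and DSA agents encapsulate can be illustrated with a simplified, single-round ring-based sketch (the values, field size and site count are assumptions for illustration; the actual protocol in [43] adds further safeguards against collusion):

```python
import random

# Each site holds a private local support count for the same itemset (illustrative values).
local_counts = [12, 7, 30]          # private values at sites S1..S3
FIELD = 10_000                      # all arithmetic is done modulo a public constant

# The initiating site masks the total with a random offset, then the running sum is
# passed around the ring; each site adds its own private count modulo FIELD, so no
# site ever sees more than a masked partial sum.
offset = random.randrange(FIELD)
running = offset
for count in local_counts:
    running = (running + count) % FIELD

# Back at the initiator, removing the offset reveals only the global sum.
global_sum = (running - offset) % FIELD
print(global_sum)  # → 49
```

Only the aggregated count, which is what the global support computation needs, is disclosed; the individual sites' counts stay private as long as FIELD exceeds the true sum.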
The authors in [38] proposed theoretical cost models for agent based ARM in distributed data using a
prototype model called Multi-Agent Distributed Association Rule Miner (MADARM). These
cost models serve as a basic model to estimate and predict the response time of a DARM task.
Various agents involved in the system are: Association Rule Mining Coordinating Agent
(ARMCA) for creating and coordinating other agents in agent zone, Mobile Agent-Based
Association Rule Miner (MAARM) for performing ARM task at each data source, Mobile Agent-
Based Result Reporter (MARR) created by MAARM agent for migrating the result to ARMCA
and Results Integration Coordinating Agent (RICA) for knowledge or result integration.
Knowledge integration is optimized on the data sources by using agent based distributed
knowledge integration (ADKI) as opposed to incremental knowledge integration proposed in
[23]. The theoretical cost models are the core of this study. The Apriori [40] and FP-growth [46]
algorithms are considered for mining the local frequent itemsets. A parallel itinerary is
maintained for the MAARM and MARR agents and a serial itinerary for the RICA agent. No
underlying AEE is discussed. Only conceptual views are presented in the paper, and the
researchers concluded that the work still needs improvement and experimental validation.
In an experimental setup, the authors in [34] performed efficient DDM using intelligent agents,
incorporating the standard Apriori [40] algorithm implemented in J#. Though the authors claim
that it is an agent based setup, it has been observed that no AEE or related agents exist in the
study. Thus, much work remains in designing an agent based framework in which intelligent
agents are actually implemented, comprising a MAS on top of an AEE, using large synthetic
or real datasets.
The authors in [35] proposed an agent based DDM approach as an improvement over the PMFI
algorithm proposed in [41]. The basic objective is to reduce the time required to compute the
Global Frequent Itemsets (GFI). The proposed algorithm performs two tasks in parallel: (1) local
sites send their LFIs to the central site and also to all their neighbours; (2) the calculation of the
GFI/Candidate GFI (CGFI) at the central site and the counting of the CGFI at the local sites are
done as overlapped operations. That is, the local sites need not wait for the central site to send
the CGFI, so the total time taken is reduced drastically. No information is given about which
algorithm is used by the Mining Agent to generate the Local Frequent Itemsets (LFI). No AEE
exists in the study. Implementation, validation and the underlying AEE are required to actually
perform the agent enabled DDM using large synthetic or real datasets.
Qualitative comparison of some prominent current agent based DARM frameworks is provided in
Table 1 taking into account some of the features they provide. The features include the following
fields:
Agents field shows the community of agents involved in the system.
Itinerary indicates the serial or parallel travel plan followed by the mobile agents.
AEE/MAP shows which underlying Agent Execution Environment or Mobile Agent
Platform is used for developing MAS for DARM.
Impl field indicates whether the MAS is implemented, along with the language used for
implementation, or whether it is just a prototype framework.
Algorithm indicates the FIM/ARM algorithm considered in the study.
CM indicates whether any cost model is discussed in the study.
PP points out whether any privacy-preserving mechanism is taken into account for the
sensitive local data.
GUI is for Graphical User Interface feature of the MAS.
DS indicates whether any dataset (synthetic or real) is used in experimental validation.
Use indicates the use of MAS in practical applications, development projects, case
studies etc.
Table 1. Qualitative comparison of the three agent based DARM frameworks.

Feature     MADKDS           AFARMDD                        MADARM
Agents      DMMA, PMA, CMA   ESUA, DSUA, ESA, DSA, BA, OA   ARMCA, MAARM, MARR, RICA
Itinerary   Parallel         Serial and Parallel            Serial and Parallel
AEE/MAP     Yes              Yes                            No
Impl        Yes              Yes                            No
Algorithm   IAA [39]         Apriori [40]                   Apriori [40], FP-growth [46]
CM          No               No                             Yes
PP          No               Yes                            No
GUI         No               No                             No
DS          No               No                             No
Use         No               No                             No
This analysis reveals that AFARMDD [36], [37] and MADARM [38] are based on both parallel and
serial itineraries of the MAs, whereas MADKDS [33] uses a parallel itinerary. Only MADKDS [33]
has an existing AEE, IBM's Aglet Workbench; the others don't have any underlying AEE to test
and validate the DARM system. MADARM [38] is only a prototype model without any
implementation. The Apriori [40] algorithm is mostly used for FIM in such systems. Only
MADARM [38] discusses the cost model involved in the entire DARM task; the others don't have
it. A privacy-preserving mechanism for the sensitive local data is the core of the AFARMDD
[36], [37] system, while the others don't have any such mechanism. None of these frameworks has
a graphical user interface designed to work with the system. None of these frameworks is used in
any real applications, development projects, case studies, etc.
Researchers in this area should focus more on developing algorithms and architectures that
reduce the massive data movement involved in global knowledge mining and integration, thereby
reducing the response time. Further algorithms and methods should also consider the development
of adaptive, fault tolerant and easily extendable systems in the area of DARM. Such systems will
greatly reduce communication and interpretation costs and improve the autonomy, efficiency,
scalability, collaboration, security and trustworthiness of the DARM system, all of which are
common issues with existing systems [8]. An agent based DARM framework must be designed on
top of an effective AEE, with a GUI and an implementation of all the agents involved in the
system. It should effectively address and validate the cost model for the overall DARM task.
Such systems should also be validated through case studies of their usage.
5. CONCLUSION
Mobile agents strongly qualify for designing distributed applications. DDM, when coupled with
agent technology, makes a promising alliance that yields favourable results. In this study, a
comparative analysis of the existing agent based frameworks for the DARM task is performed.
Most of the existing agent based frameworks for the DARM task are only prototype models and
lack an appropriate underlying Agent Execution Environment (AEE), scalability, privacy-preserving
techniques, global knowledge generation, and implementation using real datasets. With this
study, we expect to address the need for an updated review and analysis of the role of
intelligent agents in designing DARM frameworks and also to encourage future work in this
domain.
REFERENCES
[1] U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth & R. Uthurusamy, (1996) Advances in Knowledge
Discovery and Data Mining, AAAI/MIT Press.
[2] J. Han & M. Kamber, (2006) Data Mining: Concepts and Techniques, 2nd ed. Morgan Kaufmann.
[3] B. -H. Park & H. Kargupta, (2002) “Distributed Data Mining: Algorithms, Systems, and
Applications,” Department of Computer Science and Electrical Engineering, University of Maryland
Baltimore County, 1000 Hilltop Circle Baltimore, MD 21250, Available:
http://www.csee.umbc.edu/_hillol/PUBS/review.pdf
[4] G. Tsoumakas & I. Vlahavas, (2009) “Distributed Data Mining”, Department of Informatics, Aristotle
University of Thessaloniki, Thessaloniki, Greece, Available:
http://talos.csd.auth.gr/tsoumakas/publications/E1.pdf
[5] G. S. Bhamra, R. B. Patel & A. K. Verma, (2014) “Intelligent Software Agent Technology: An
Overview”, International Journal of Computer Applications (IJCA), vol. 89, no. 2, pp. 19–31.
[6] H. Kargupta & P. Chan, (2000) Advances in Distributed and Parallel Knowledge Discovery.
AAAI/MIT Press.
[7] Y. Fu, (2001) “Distributed Data Mining: An Overview”, Department of Computer Science,
University of Missouri-Rolla, Available: http://academic.csuohio.edu/fuy/Pub/tcdp01.pdf
[8] A. O. Ogunde, O. Folorunso, A. S. Sodiya & G. O. Ogunleye, (2011) “A Review of Some Issues and
Challenges in Current Agent-Based Distributed Association Rule Mining”, Asian Journal of
Information Technology, vol. 10, no. 2, pp. 84–95.
[9] L. Cao, G. Weiss & P. S. Yu, (2012) “A brief introduction to agent mining”, Autonomous Agent and
Multi-Agent System, vol. 25, pp. 419–424.
[10] R. Orfali, D. Harkey & J. Edwards, (1995) The essential distributed objects survival guide, John
Wiley & Sons, USA.
[11] S. Krishnaswami, (2002) “A Hybrid Model for Delivering Internet-based Distributed Data Mining
Services”, Ph.D. dissertation, School of Computer Science and Software Engineering, Monash
University, Australia.
[12] J. Chattratichat, J. Darlington, Y. Guo, S. Hedvall, M. Kohler & J. Syed, (1999) “An architecture for
distributed enterprise data mining”, in High-Performance Computing and Networking, ser. Lecture
Notes in Computer Science, P. Sloot, M. Bubak, A. Hoekstra, & B. Hertzberger, Eds. Springer Berlin
- Heidelberg, vol. 1593, pp. 573–582.
[13] S. Parthasarathy & R. Subramonian, (1999) “Facilitating Data Mining on a Network of
Workstations”, in Advances in Distributed Data Mining, H. Kargupta and P. Chan, Eds. AAAI Press,
pp. 229–254.
[14] S. Parthasarathy & S. Dwarkadas, (2002) “Shared State for Distributed Interactive Data Mining
Applications”, Journal of Distributed and Parallel Databases, vol. 11, no. 2, pp. 129–155.
[15] M. Klusch, S. Lodi & M. Gianluca, (2003) “The role of agents in distributed data mining: issues and
benefits”, in Proceedings of the IEEE/WIC International Conference on Intelligent Agent
Technology(IAT 2003). IEEE, pp. 211–217.
[16] H. Kargupta, I. Hamzaoglu & B. Stafford, (1997) “Scalable, Distributed Data Mining Using An
Agent Based Architecture”, in Proceedings of the 3rd International Conference on the Knowledge
Discovery and Data Mining(KDD-97), D. Heckerman, H. Mannila, D. Pregibon & R. Uthurusamy,
Eds. AAAI Press, Menlo Park, California, pp. 211–214.
[17] S. Stolfo, A. L. Prodromidis, S. Tselepis, W. Lee, D. W. Fan & P. K. Chan, (1997) “JAM: Java
Agents for Meta-Learning over Distributed Databases”, in Proceedings of the 3rd International
Conference on the Knowledge Discovery and Data Mining(KDD-97), D. Heckerman, H. Mannila, D.
Pregibon, and R. Uthurusamy, Eds. AAAI Press, Menlo Park, California, pp. 74–81.
[18] H. Kargupta, B. Park, D. Hershberger & E. Johnson, (1999) “Collective Data Mining: A new
perspective toward Distributed Data Mining”, in Advances in Distributed and Parallel Knowledge
Discovery, H. Kargupta and P. Chan, Eds. AAAI/MIT Press, pp. 131–178.
[19] S. Bailey, R. Grossman, H. Sivakumar & A. Turinsky, (1999) “Papyrus: a system for data mining
over local and wide area clusters and super-clusters”, in Proceedings of the ACM/IEEE conference on
Supercomputing (SC’99). ACM New York, NY, USA, p. 63.
[20] G. L. Martin, A. Unruh & S. D. Urban, (1999) “InfoSleuth: An agent infrastructure for knowledge
discovery and event detection”, Microelectronics and Computer Technology Corporation (MCC),
Tech. Rep. MCC-INSL-003-99.
[21] V. Honavar, L. Miller & J. Wong, (1998) “Distributed Knowledge Networks”, in Proceedings of the
IEEE Information Technology Conference.
[22] S. W. Baik, J. Bala & J. S. Cho, (2005) “Agent Based Distributed Data Mining”, in Parallel and
Distributed Computing: Applications and Technologies, ser. Lecture Notes in Computer Science, K.-
M. Liew, H. Shen, S. See, W. Cai, P. Fan & S. Horiguchi, Eds. Springer Berlin – Heidelberg, vol.
3320, pp. 42–45.
[23] E. I. Ariwa, M. B. Senousy & M. M. Medhat, (2003) “Informatization and E-Business Model
Application for Distributed Data Mining Using Mobile Agents”, in Proceedings of the IADIS
International Conference WWW/Internet (ICWI 2003), pp. 85–92.
[24] K. A. Albashiri, F. Coenen, R. Sanderson & P. Leng, (2007) “Frequent Set Meta Mining: Towards
Multi-Agent Data Mining”, in Proceedings of the 27th SGAI International Conference on Artificial
Intelligence (AI 2007), pp. 139–151.
[25] K. A. Albashiri & F. Coenen, (2009) “Agent-Enriched Data Mining Using an Extendable
Framework”, in Agents and Data Mining Interaction, ser. Lecture Notes in Computer Science, L. Cao,
V. Gorodetsky, J. Liu, G. Weiss & P. S. Yu, Eds. Springer Berlin - Heidelberg, , vol. 5680, pp. 53–68.
[26] K. A. Albashiri, F. Coenen & P. Leng, (2009) “EMADS: An extendible multi-agent data miner”,
Knowledge-Based Systems, vol. 22, no. 7, pp. 523–528
[27] K. A. Albashiri, (2010) “An investigation into the issues of Multi-Agent Data Mining”, Ph.D.
dissertation, The University of Liverpool, Ashton Building, Ashton Street, Liverpool L69 3BX,
United Kingdom.
[28] A. L. Symeonidis & P. A. Mitkas, (2005) Agent Intelligence Through Data Mining , First ed., ser.
Multiagent Systems, Artificial Societies, and Simulated Organizations. Springer, vol. 14.
[29] R. Agrawal, T. Imielinski & A. Swami, (1993) “Mining association rules between sets of items in
large databases”, in Proceedings of the ACM-SIGMOD International Conference of Management of
Data, pp. 207–216.
[30] R. Agrawal & J. C. Shafer, (1996) “Parallel mining of association rules”, IEEE Transaction on
Knowledge and Data Engineering, vol. 8, no. 6, pp. 962–969.
[31] M. J. Zaki, (1999) “Parallel and distributed association mining: a survey”, IEEE Concurrency, vol. 7,
no. 4, pp. 14–25.
[32] X. Wu & S. Zhang, (2003) “Synthesizing high-frequency rules from different data sources”, IEEE
Transactions on Knowledge and Data Engineering, vol. 15, no. 2, pp. 353–367.
[33] Y.-L. Wang, Z.-Z. Li & H.-P. Zhu, (2003) “Mobile agent based distributed and incremental
techniques for association rules”, in Proceedings of the International Conference on Machine
Learning and Cybernetics(ICMLC 2003), vol. 1, pp. 266–271.
[34] C. Aflori & F. Leon, (2004) “Efficient Distributed Data Mining using Intelligent Agents”, in
Proceedings of the 8th International Symposium on Automatic Control and Computer Science, pp. 1–
6.
[35] U. P. Kulkarni, P. D. Desai, T. Ahmed, J. V. Vadavi & A. R. Yardi, (2007) “Mobile Agent Based
Distributed Data Mining”, in Proceedings of the International Conference on Computational
Intelligence and Multimedia Applications (ICCIMA 2007), IEEE Computer Society, pp. 18–24.
[36] G. Hu & S. Ding, (2009a) “An Agent-Based Framework for Association Rules Mining of Distributed
Data”, in Software Engineering Research, Management and Applications 2009, ser. Studies in
Computational Intelligence, R. Lee and N. Ishii, Eds. Springer Berlin - Heidelberg, vol. 253, pp. 13–
26.
[37] G. Hu & S. Ding, (2009b) “Mining of Association Rules from Distributed Data using Mobile
Agents,” in Proceedings of the International Conference on e-Business(ICE-B 2009), pp. 21–26.
[38] A. O. Ogunde, O. Folorunso, A. S. Sodiya, J. A. Oguntuase & G. O. Ogunleye, (2011) “Improved cost
models for agent based association rule mining in distributed databases”, Anale SEria Informatica,
vol. 9, no. 1, pp. 231–250, Available: http://anale-informatica.tibiscus.ro/download/lucrari/9-1-20-
Ogunde.pdf
[39] Y. Wang, Z. Li, J. Xue & Y. Zhao, (2002) “A Novel Incremental Algorithm for Mining Frequent
Itemsets”, in Proceedings of the 2000 International Symposium on Distributed Computing and
Application to Business Engineering and Science (DCABES 2002), pp. 60–64.
[40] R. Agrawal & R. Srikant, (1994) “Fast Algorithms for Mining Association Rules in Large
Databases”, in Proceedings of the 20th International Conference on Very Large Data Bases
(VLDB’94). Morgan Kaufmann Publishers Inc., pp. 487–499.
[41] Y.-L. Ruan, G. Liu & Q.-H. Li, (2005) “Parallel algorithm for Mining Frequent Item sets”, in
Proceedings of the 4th International Conference on Machine Learning and Cybernetics, vol. 4. IEEE,
pp. 2118–2121.
[42] D. W. Cheung, V. T. Ng, A. W. Fu & Y. Fu, (1996) “Efficient Mining of Association Rules in
Distributed Databases”, IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 6, pp.
911–922.
[43] M. Kantarcioglu & C. Clifton, (2004) “Privacy-Preserving Distributed Mining of Association Rules
on Horizontally Partitioned Data”, IEEE Transactions on Knowledge and Data Engineering, vol. 16,
no. 9, pp. 1026–1037.
AUTHORS
Gurpreet Singh Bhamra is currently working as Assistant Professor at Department
of Computer Science and Engineering, M. M. University, Mullana, Haryana. He
received his B.Sc. (Computer Sc.) and MCA from Kurukshetra University,
Kurukshetra in 1995 and 1998, respectively. He is pursuing Ph.D. from Department
of Computer Science and Engineering, Thapar University, Patiala, Punjab. He is in
teaching since 1998. He has published 10 research papers in International/National
Journals and International Conferences. He has received Best Paper Award for “An
Agent enriched Distributed Data Mining on Heterogeneous Networks”, in
“Challenges & Opportunities in Information Technology” (COIT-2008). He is a Life
Member of Computer Society of India. His research interests are in Distributed Computing, Distributed
Data Mining, Mobile Agents and Bio-informatics.
Dr. Anil Kumar Verma is currently working as Associate Professor at Department
of Computer Science & Engineering, Thapar University, Patiala. He received his
B.S., M.S. and Ph.D. in 1991, 2001 and 2008 respectively, majoring in Computer
science and engineering. He has worked as Lecturer at M.M.M. Engineering College,
Gorakhpur from 1991 to 1996. He joined Thapar Institute of Engineering &
Technology in 1996 as a Systems Analyst in the Computer Centre and is presently
associated with the same Institute. He has been a visiting faculty to many institutions.
He has published over 100 papers in referred journals and conferences (India and
Abroad). He is a MISCI (Turkey), LMCSI (Mumbai), GMAIMA (New Delhi). He is a certified software
quality auditor by MoCIT, Govt. of India. His research interests include wireless networks, routing
algorithms and securing ad hoc networks and data mining.
Dr. Ram Bahadur Patel is currently working as Professor and Head at Department
of Computer Science & Engineering, Chandigarh College of Engineering &
Technology, Chandigarh. He received PhD from IIT Roorkee in Computer Science &
Engineering, PDF from Highest Institute of Education, Science & Technology
(HIEST), Athens, Greece, MS (Software Systems) from BITS Pilani and B. E. in
Computer Engineering from M. M. M. Engineering College, Gorakhpur, UP. Dr.
Patel is in teaching and research since 1991. He has supervised 36 M. Tech, 7 M.
Phil. and 8 PhD Thesis. He is currently supervising 6 PhD students. He has published
130 research papers in International/National Journals and Refereed International
Conferences. He has written 7 text books for engineering courses. He is member of
ISTE (New Delhi), IEEE (USA). He is a member of various International Technical Committees and
participating frequently in International Technical Committees in India and abroad. His current research
interests are in Mobile & Distributed Computing, Mobile Agent Security and Fault Tolerance and Sensor
Network.