Substantial research has been done on garbage collection for both uniprocessor and distributed systems. Actors, however, are associated with an activity (a thread), so the usual garbage-collection algorithms cannot be applied to them directly; a separate algorithm is needed to collect them. If we transform the active reference graph into a graph that captures all the features of actors yet looks like a passive reference graph, then any passive-reference-graph algorithm can be applied to it. The cost of this transformation, and its optimization, are the core issues. This paper attempts to walk through these issues.
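The transformation the abstract alludes to can be sketched in a few lines. Assuming a simplified model in which an actor is live when it is reachable from the root or is unblocked (able to process messages), one illustrative transformation adds a pseudo-reference from the root to every unblocked actor, after which a standard passive-graph mark-and-sweep applies. The names and the liveness rule here are assumptions for illustration, not the paper's exact construction.

```python
from collections import deque

def collect_actors(references, unblocked, root):
    """Garbage actors under a simplified liveness rule.

    references: dict mapping each actor to the set of actors it references
    unblocked:  actors still able to process messages (hypothetical model)
    root:       the root of the reference graph
    """
    # Transformation: pretend the root references every unblocked actor,
    # so the active graph becomes an ordinary passive reference graph.
    transformed = {a: set(refs) for a, refs in references.items()}
    transformed.setdefault(root, set()).update(unblocked)

    # Standard mark phase on the transformed (passive-looking) graph.
    live, frontier = set(), deque([root])
    while frontier:
        actor = frontier.popleft()
        if actor in live:
            continue
        live.add(actor)
        frontier.extend(transformed.get(actor, ()))
    return {a for a in references if a not in live}  # the sweep set
```

Any passive-graph collector (here, plain mark-and-sweep) then runs unchanged; the cost of building `transformed` is exactly the transformation cost the abstract identifies as the core issue.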
Top-K Dominating Queries on Incomplete Data with Priorities (ijtsrd)
A top-k dominating query returns the k objects that dominate the most other objects in a dataset. Finding dominating elements in an incomplete dataset is more complicated than in a complete one. Real-world datasets can be incomplete for various reasons, such as data loss, privacy preservation, or awareness problems. In this paper we aim to find the top-k elements of an incomplete dataset by assigning a priority value to each dimension of a data object. A skyline-based algorithm is applied for that purpose. Since the priority values are used while determining dominance, this method returns more suitable results more efficiently than previous methods, and the output better matches the user's purpose. Dr. Prabha Shreeraj Nair | Prof. Dr. G. K. Awari, "Top-K Dominating Queries on Incomplete Data with Priorities", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2, Issue-1, December 2017. URL: http://www.ijtsrd.com/papers/ijtsrd7056.pdf http://www.ijtsrd.com/computer-science/other/7056/top-k-dominating-queries-on-incomplete--data-with-priorities/dr-prabha-shreeraj-nair
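As an illustration of the idea, here is a minimal sketch of priority-weighted dominance over incomplete tuples (None marks a missing dimension; higher is better). The weighting scheme, summing the priorities of the dimensions on which one object strictly beats another, is an assumption chosen for illustration, not the paper's skyline-based algorithm.

```python
def dominates(p, q):
    """p dominates q on the dimensions both observe (None = missing):
    no worse everywhere they overlap, strictly better somewhere."""
    shared = [i for i in range(len(p)) if p[i] is not None and q[i] is not None]
    return (bool(shared)
            and all(p[i] >= q[i] for i in shared)
            and any(p[i] > q[i] for i in shared))

def top_k_dominating(data, k, priorities):
    """Score each object by the priority-weighted sum over the objects it
    dominates, counting only dimensions where it strictly wins."""
    scores = []
    for p in data:
        s = sum(sum(w for i, w in enumerate(priorities)
                    if p[i] is not None and q[i] is not None and p[i] > q[i])
                for q in data if q is not p and dominates(p, q))
        scores.append((s, p))
    scores.sort(key=lambda t: -t[0])
    return [p for _, p in scores[:k]]
```

With priorities (1, 2), an object missing dimension 0 can still outrank one missing dimension 1, which is the effect the priority values are meant to produce.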
Experimental Study of Data Clustering using k-Means and Modified Algorithms (IJDKP)
The k-Means clustering algorithm is an old algorithm that has been intensely researched owing to its ease and simplicity of implementation. Clustering algorithms have broad appeal and usefulness in exploratory data analysis. This paper presents the results of an experimental study of different approaches to k-Means clustering, comparing results on different datasets using the original k-Means and other modified algorithms implemented in MATLAB R2009b. The results are evaluated on performance measures such as the number of iterations, the number of points misclassified, accuracy, the Silhouette validity index, and execution time.
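For reference, a minimal Lloyd's k-Means in plain Python that also reports the number of iterations to convergence, one of the performance measures the study compares. This is a generic sketch, not any of the modified variants the paper benchmarks.

```python
import random

def kmeans(points, k, max_iters=100, seed=0):
    """Plain Lloyd's k-Means; returns (centers, labels, iterations)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random points as initial centers
    labels = None
    for it in range(1, max_iters + 1):
        # Assignment step: nearest center by squared Euclidean distance.
        new = [min(range(k),
                   key=lambda c: sum((x - y) ** 2
                                     for x, y in zip(p, centers[c])))
               for p in points]
        if new == labels:                    # assignments stable: converged
            return centers, labels, it
        labels = new
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(d) / len(members)
                                   for d in zip(*members))
    return centers, labels, max_iters
```

The iteration count returned here is exactly the "no. of iterations" measure; accuracy and Silhouette would be computed on the returned labels.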
EXPERIMENTS ON HYPOTHESIS "FUZZY K-MEANS IS BETTER THAN K-MEANS FOR CLUSTERING" (IJDKP)
Clustering is one of the data mining techniques used to discover business intelligence by grouping objects into clusters using a similarity measure. Clustering is an unsupervised learning process with many real-world applications in marketing, biology, libraries, insurance, city planning, earthquake studies, and document clustering. Latent trends and relationships among data objects can be unearthed using clustering algorithms. Many clustering algorithms exist; however, the quality of the resulting clusters is of paramount importance. The quality objective is to achieve the highest similarity between objects of the same cluster and the lowest similarity between objects of different clusters. In this context, we studied two widely used clustering algorithms, K-Means and Fuzzy K-Means. K-Means is an exclusive clustering algorithm, while Fuzzy K-Means is an overlapping clustering algorithm. In this paper we examine the hypothesis "Fuzzy K-Means is better than K-Means for Clustering" through both a literature study and an empirical study. We built a prototype application to demonstrate the differences between the two algorithms. The experiments were made on a diabetes dataset obtained from the UCI repository. The empirical results reveal that the performance of Fuzzy K-Means is better than that of K-Means in terms of the quality, or accuracy, of the clusters. Thus, our empirical study supports the hypothesis "Fuzzy K-Means is better than K-Means for Clustering".
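The "overlapping" character of Fuzzy K-Means comes from its membership matrix: each point belongs to every cluster to a degree, rather than to exactly one. A minimal sketch of the membership update follows; the fuzzifier m = 2 and the small epsilon guarding division by zero are conventional choices, not values taken from the paper.

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-Means membership update: u[i][j] is the degree to which
    point i belongs to cluster j; each row sums to 1. m > 1 sets fuzziness."""
    u = []
    for p in points:
        # Distance to every center, floored to avoid division by zero
        # when a point sits exactly on a center.
        d = [max(sum((x - y) ** 2 for x, y in zip(p, c)) ** 0.5, 1e-12)
             for c in centers]
        row = [1.0 / sum((d[j] / d[l]) ** (2 / (m - 1))
                         for l in range(len(centers)))
               for j in range(len(centers))]
        u.append(row)
    return u
```

A point midway between two centers gets memberships near 0.5/0.5, whereas hard K-Means would be forced to pick one side, which is precisely the exclusive-vs-overlapping contrast the abstract draws.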
Searching is a very tedious process because we keep giving different keywords to the search engine until we land on the best results.
No clustering approach is applied in the existing system.
Feature subset selection is an effective way of reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improving result comprehensibility.
XML-based cluster formation is used in order to achieve space and language competency.
Scalable Rough C-Means Clustering using Firefly Algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
OPTIMAL GLOBAL THRESHOLD ESTIMATION USING STATISTICAL CHANGE-POINT DETECTION (sipij)
The aim of this paper is the reformulation of the global image thresholding problem as a well-founded statistical method known as change-point detection (CPD). Our proposed CPD thresholding algorithm does not assume any prior statistical distribution of background and object grey levels. Further, the method is less influenced by outliers, owing to our judicious derivation of a robust criterion function based on the Kullback-Leibler (KL) divergence measure. Experimental results show the efficacy of the proposed method compared to other popular methods for global image thresholding. In this paper we also propose a performance criterion for comparing thresholding algorithms that does not depend on any ground-truth image. We have used this criterion to compare the results of the proposed thresholding algorithm with the most-cited global thresholding algorithms in the literature.
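The scan structure of global thresholding can be sketched as follows. Note that the paper's robust KL-divergence criterion is replaced here by Otsu's between-class variance, a common stand-in, so this shows only the exhaustive change-point scan, not the proposed criterion itself.

```python
def global_threshold(gray_levels):
    """Scan every candidate threshold t; score the background/object split
    with Otsu's between-class variance and keep the best split point."""
    n = len(gray_levels)
    best_t, best_score = None, -1.0
    for t in sorted(set(gray_levels))[:-1]:   # skip max: both sides non-empty
        bg = [g for g in gray_levels if g <= t]
        fg = [g for g in gray_levels if g > t]
        w0, w1 = len(bg) / n, len(fg) / n     # class weights
        m0, m1 = sum(bg) / len(bg), sum(fg) / len(fg)
        score = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

Swapping the `score` line for a KL-based criterion would recover the paper's formulation without changing the surrounding scan.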
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
VARIATIONS IN OUTCOME FOR THE SAME MAP REDUCE TRANSITIVE CLOSURE ALGORITHM IM... (ijcsit)
This paper describes the outcome of an attempt to implement the same transitive closure (TC) algorithm for Apache MapReduce running on different Apache Hadoop distributions. Apache MapReduce is a software framework used with Apache Hadoop, which has become the de facto standard platform for processing and storing large amounts of data in a distributed computing environment. The research presented here focuses on the variations observed among the results of an efficient iterative transitive closure algorithm when run in different distributed environments. The results from these comparisons were validated against benchmark results from OYSTER, an open-source entity resolution system. The experimental results highlight the inconsistencies that can occur when using the same codebase with different implementations of MapReduce.
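The iterative fixed-point computation at the heart of such a MapReduce TC job can be shown in plain Python. Each pass of the while loop mirrors one MapReduce iteration (join the newly discovered pairs against the base edges), and the fixed point itself is distribution-independent, which is what makes divergent outcomes across Hadoop distributions notable.

```python
def transitive_closure(edges):
    """Semi-naive iterative TC: each round joins only the pairs found in
    the previous round (delta) with the base edges, then stops when a
    round contributes nothing new."""
    closure = set(edges)
    delta = set(edges)
    while delta:
        # Join step: (a, b) from delta with (b, c) from the base edges.
        new = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = new - closure          # keep only genuinely new pairs
        closure |= delta
    return closure
```

Restricting the join to the delta set rather than the whole closure is the standard optimization that keeps each MapReduce round small.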
A practical parser with combined parsing (ijseajournal)
This paper introduces a practical solution for dramatically enlarging the capabilities of an established parser, a task that presents substantial challenges. During the development of new procedures for SUDAAN®, a commercial statistical software package, we found the existing parser to be inadequate for new situations. Like many other parsers, the one in use could be characterized as a no-repair, no-guesswork, no-backtracking look-ahead left-to-right LALR(1) parser [1, p. 300]. This paper describes how the parser was enhanced to handle extra syntax for sophisticated mathematical and logical expressions. The new parser adds a noncanonical parsing technique, along with a Shunting-Yard-style algorithm and other techniques, as a second step after the original canonical LALR [2], resulting in a powerful and efficient two-level parsing approach. Adding a second step to the successful one-step parser offered a way to preserve existing, well-tested capabilities while adding the ability to parse more complex syntax.
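A bare-bones Shunting-Yard pass, the style of algorithm the new second step adds, is shown below for binary left-associative operators and parentheses only; the SUDAAN parser itself handles far richer syntax, so this is a sketch of the technique, not of that parser.

```python
def shunting_yard(tokens):
    """Dijkstra's Shunting-Yard: convert infix tokens to postfix (RPN)
    using an operator stack."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, ops = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of higher or equal precedence (left-assoc).
            while ops and ops[-1] in prec and prec[ops[-1]] >= prec[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':          # flush back to the open paren
                out.append(ops.pop())
            ops.pop()                      # discard the '(' itself
        else:
            out.append(tok)                # operand goes straight to output
    while ops:
        out.append(ops.pop())
    return out
```

The postfix output can then be evaluated or fed to later stages with a trivial stack machine, which is why the technique suits a second-level pass over expressions the first-level LALR parser hands off.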
I. ITERATIVE DEEPENING DEPTH FIRST SEARCH (ID-DFS) II. INFORMED SEARCH IN ARTIFI... (vikas dhakane)
Social networks are not new, even though websites like Facebook and Twitter might make you want to believe they are; and trust me, I'm not talking about Myspace! Social networks are extremely interesting models of human behavior, whose study dates back to the early twentieth century. However, because of those websites, data scientists have access to much more data than the anthropologists who studied the networks of tribes!
Because networks take a relationship-centered view of the world, the data structures that we will analyze model real-world behaviors and communities. Through a suite of algorithms derived from mathematical graph theory, we are able to compute and predict the behavior of individuals and communities through these types of analyses. Clearly this has a number of practical applications, from recommendation to law enforcement to election prediction, and more.
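As a small taste of such graph analytics, degree centrality, the fraction of other people each node is directly tied to, can be computed with no library at all; the tiny friendship graph below is made up for illustration.

```python
def degree_centrality(edges):
    """Degree centrality on an undirected graph given as (a, b) edge
    pairs: each node's degree divided by the number of other nodes."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in deg.items()}
```

A node with centrality 1.0 knows everyone, the kind of simple structural signal that recommendation and influence analyses build on.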
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users effectively navigate, summarize, and organize this information. Recent studies have shown that partitional clustering algorithms such as k-means are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data. The cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + PSO (Particle Swarm Optimization) clustering algorithm that performs fast clustering. For comparison purposes, we applied the Subtractive + PSO, PSO, and Subtractive clustering algorithms to three different datasets. The results illustrate that the Subtractive + PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
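The one-pass subtractive step that seeds such a hybrid can be sketched as follows; the radii and the stopping fraction are conventional defaults chosen for illustration, not the paper's settings.

```python
import math

def subtractive_centers(points, ra=2.0, eps=0.15):
    """One-pass subtractive clustering: repeatedly pick the highest-density
    point as a center, then subtract its influence from its neighbors."""
    rb = 1.5 * ra                       # revision radius, conventionally 1.5*ra

    def d2(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))

    # Density potential: each point is scored by how many close neighbors it has.
    dens = [sum(math.exp(-4 * d2(p, q) / ra ** 2) for q in points)
            for p in points]
    first = max(dens)
    centers = []
    while True:
        i = max(range(len(points)), key=lambda j: dens[j])
        if dens[i] < eps * first:       # remaining density too low: stop
            return centers
        c = points[i]
        centers.append(c)
        # Subtract the chosen center's influence so its neighborhood
        # cannot yield another center.
        dens = [dens[j] - dens[i] * math.exp(-4 * d2(points[j], c) / rb ** 2)
                for j in range(len(points))]
```

The returned centers (and their count) would then initialize the PSO-refined stage, addressing exactly the initialization sensitivity the abstract describes.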
I. INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III... (vikas dhakane)
Biclustering using Parallel Fuzzy Approach for Analysis of Microarray Gene Ex... (CSCJournals)
Biclusters are required for analyzing gene expression patterns: comparing rows of the expression profiles clusters genes, while comparing columns of the gene expression matrix clusters samples. In the process of biclustering we need to cluster both genes and samples. The algorithm presented in this paper is based on the two-way clustering approach, in which genes and samples are clustered using parallel fuzzy C-means clustering with the Message Passing Interface; we call it MFCM. MFCM is applied to cluster genes and samples by maximizing the membership function values of the dataset. It is a parallelized rework of a fuzzy two-way clustering algorithm for microarray gene expression data [9], built to study the efficiency and parallelization improvement of the algorithm. The algorithm uses a gene entropy measure to filter the clustered data and find biclusters. The method is able to obtain highly correlated biclusters of the gene expression dataset.
A robot may need to use a tool to solve a complex problem. Currently, tool use must be pre-programmed by a human. However, this is a difficult task, and it helps if the robot is able to learn how to use a tool by itself. Most work on robot tool-use learning uses a feature-based representation. Despite many successful results, this representation is limited in the types of tools and tasks that can be handled. Furthermore, the complex relationships between a tool and other objects in the world cannot be captured easily. Relational learning methods have been proposed to overcome these weaknesses [1, 2]. However, they have only been evaluated in sensor-less simulations that avoid the complexities and uncertainties of the real world. We present a real-world implementation of a relational tool-use learning system for a robot. In our experiment, a robot requires around ten examples to learn to use a hook-like tool to pull a cube from a narrow tube.
Python Application: Visual Approach of Hopfield Discrete Method for Hiragana ... (journalBEEI)
Python is a dynamic object-oriented programming language that provides strong support for integration with other programming languages and tools. Python is rarely used in the field of artificial intelligence, especially artificial neural networks. This research focuses on using Python to recognize hiragana letters. Learning hiragana can be difficult because of the many combinations of vowels that form new letters with different readings and meanings. A discrete Hopfield network is fully connected: every unit is attached to every other unit. The network has symmetric weights, and no unit has a connection to itself. It is therefore expected that a computer system can help recognize hiragana images. With this pattern-recognition application for hiragana images, the system can be developed further to recognize them quickly and precisely.
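A discrete Hopfield network small enough to check by hand: Hebbian training on bipolar (+1/-1) patterns and synchronous recall. This is a generic sketch of the method, not the paper's hiragana system, and a real letter recognizer would store many more and much larger patterns.

```python
def train_hopfield(patterns):
    """Hebbian weights for a discrete Hopfield net: W is symmetric with a
    zero diagonal, since no unit connects to itself."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Synchronous bipolar updates until the state stops changing; a noisy
    input settles into the nearest stored pattern."""
    for _ in range(steps):
        nxt = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
               for row in W]
        if nxt == state:
            break
        state = nxt
    return state
```

Flipping one unit of a stored pattern and calling `recall` restores the original, which is exactly the pattern-completion behavior the recognition application relies on.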
Bat-Cluster: A Bat Algorithm-based Automated Graph Clustering Approach IJECEIAES
Defining the correct number of clusters is one of the most fundamental tasks in graph clustering. When it comes to large graphs, this task becomes more challenging because of the lack of prior information. This paper presents an approach to solve this problem based on the Bat Algorithm, one of the most promising swarm intelligence based algorithms. We chose to call our solution, “Bat-Cluster (BC).” This approach allows an automation of graph clustering based on a balance between global and local search processes. The simulation of four benchmark graphs of different sizes shows that our proposed algorithm is efficient and can provide higher precision and exceed some best-known values.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
Similar to ACTOR GARBAGE COLLECTION IN DISTRIBUTED SYSTEMS USING GRAPH TRANSFORMATION (20)
ON THE PROBABILITY OF K-CONNECTIVITY IN WIRELESS AD HOC NETWORKS UNDER DIFFER...graphhoc
We compare the probability of k-Connectivity of an ad hoc network under Random Way Point (RWP),City Section and Manhattan mobility models. A Network is said to be k Connected if there exists at least k edge disjoint paths between any pair of nodes in that network at any given time and velocity. Initially, for each of the three mobility models, the movement of the each node in the ad hoc network at a given velocity and time are captured and stored in the Node Movement Database (NMDB). Using the movements in the NMDB, the location of the node at a given time is computed and stored in the Node
Location Database (NLDB).
The Impact of Data Replication on Job Scheduling Performance in Hierarchical ...graphhoc
In data-intensive applications data transfer is a primary cause of job execution delay. Data access time depends on bandwidth. The major bottleneck to supporting fast data access in Grids is the high latencies of Wide Area Networks and Internet. Effective scheduling can reduce the amount of data transferred across the internet by dispatching a job to where the needed data are present. Another solution is to use a data replication mechanism. Objective of dynamic replica strategies is reducing file access time which leads to reducing job runtime. In this paper we develop a job scheduling policy and a dynamic data replication strategy, called HRS (Hierarchical Replication Strategy), to improve the data access efficiencies. We study our approach and evaluate it through simulation. The results show that our algorithm has improved 12% over the current strategies
DISTANCE TWO LABELING FOR MULTI-STOREY GRAPHSgraphhoc
An L (2, 1)-labeling of a graph G (also called distance two labeling) is a function f from the vertex set V (G) to the non negative integers {0,1,…, k }such that |f(x)-f(y)| ≥2 if d(x, y) =1 and | f(x)- f(y)| ≥1 if d(x, y) =2. The L (2, 1)-labeling number λ (G) or span of G is the smallest k such that there is a f with
max {f (v) : vє V(G)}= k. In this paper we introduce a new type of graph called multi-storey graph. The distance two labeling of multi-storey of path, cycle, Star graph, Grid, Planar graph with maximal edges and its span value is determined. Further maximum upper bound span value for Multi-storey of simple
graph are discussed.
Impact of Mobility for Qos Based Secure Manet graphhoc
Secure multicast communication in Mobile Adhoc Networks (MANETs) is challenging due to its inherent characteristics of infrastructure-less architecture with lack of central authority, limited resources such as bandwidth, energy and power. Several group oriented applications over MANETs create new challenges to routing protocols in terms of QOS requirements. In many multicast interactions, due to its frequent node mobility, new member can join and current members can leave at a time. It is necessary to choose a routing protocol which establishes true connectivity between the mobile nodes. The pattern of movement of members is classified into different mobility models and each one has its own distinct features. It is a crucial part in the performance of MANET. Hence key management is the fundamental challenge in achieving secure communication using multicast key distribution for mobile adhoc networks. This paper describes the impact of mobility models for the performance of a new cluster-based multicast tree algorithm with destination sequenced distance vector routing protocol in terms of QOS requirements such as end to end delay, energy consumption and key delivery ratio. For simulation purposes, three mobility models are considered. Simulation results illustrate the performance of routing protocol with different mobility models and different mobility speed under varying network conditions.
A Transmission Range Based Clustering Algorithm for Topology Control Manetgraphhoc
This paper presents a novel algorithm for clustering of nodes by transmission range based clustering (TRBC).This algorithm does topology management by the usage of coverage area of each node and power management based on mean transmission power within the context of wireless ad-hoc networks. By reducing the transmission range of the nodes, energy consumed by each node is decreased and topology is formed. A new algorithm is formulated that helps in reducing the system power consumption and prolonging the battery life of mobile nodes. Formation of cluster and selection of optimal cluster head and thus forming the optimal cluster taking weighted metrics like battery life, distance, position and mobility is done based on the factors such as node density, coverage area, contention index, required and current node degree of the nodes in the clusters
A Battery Power Scheduling Policy with Hardware Support In Mobile Devices graphhoc
A major issue in the ad hoc networks with energy constraints is to find ways that increase their lifetime. The use of multihop radio relaying requires a sufficient number of relaying nodes to maintainnetwork connectivity. Hence, battery power is a precious resource that must be used efficiently in order to avoid early termination of any node. In this paper, a new battery power scheduling policy based on dynamic programming is proposed for mobile devices.This policy makes use of the state information of each cell provided by the smart battery package and uses the strategy of dynamic programming to optimally satisfy a request for power. Using extensive simulation it is proved that dynamic programming based schedulingpolicyimproves the lifetime of the mobile nodes.Also a hardware support is proposed to succeeds in distinguishing between real-time and non-real-time traffic and provides the appropriate grade of service, to meet the time constraints associated with real time traffic.
A Review of the Energy Efficient and Secure Multicast Routing Protocols for ...graphhoc
This paper presents a thorough survey of recent work addressing energy efficient multicast routing protocols and secure multicast routing protocols in Mobile Ad hoc Networks (MANETs). There are so many issues and solutions which witness the need of energy management and security in ad hoc wireless networks. The objective of a multicast routing protocol for MANETs is to support the propagation of data from a sender to all the receivers of a multicast group while trying to use the available bandwidth efficiently in the presence of frequent topology changes. Multicasting can improve the efficiency of the wireless link when sending multiple copies of messages by exploiting the inherent broadcast property of wireless transmission. Secure multicast routing plays a significant role in MANETs. However, offering energy efficient and secure multicast routing is a difficult and challenging task. In recent years, various multicast routing protocols have been proposed for MANETs. These protocols have distinguishing features and use different mechanisms.
Case Study On Social Engineering Techniques for Persuasion Full Text graphhoc
There are plenty of security software in market; each claiming the best, still we daily face problem of viruses and other malicious activities. If we know the basic working principal of such malware then we can very easily prevent most of them even without security software. Hackers and crackers are experts in psychology to manipulate people into giving them access or the information necessary to get access. This paper discusses the inner working of such attacks. Case study of Spyware is provided. In this case study, we got 100% success using social engineering techniques for deception on Linux operating system, which is considered as the most secure operating system. Few basic principal of defend, for the individual as well as for the organization, are discussed here, which will prevent most of such attack if followed.
Breaking the Legend: Maxmin Fairness notion is no longer effective graphhoc
In this paper we analytically propose an alternative approach to achieve better fairness in scheduling mechanisms which could provide better quality of service particularly for real time application. Our proposal oppose the allocation of the bandwidth which adopted by all previous scheduling mechanism. It rather adopt the opposition approach be proposing the notion of Maxmin-charge which fairly distribute the congestion. Furthermore, analytical proposition of novel mechanism named as Just Queueing is been demonstrated
I-Min: An Intelligent Fermat Point Based Energy Efficient Geographic Packet F...graphhoc
Energy consumption and delay incurred in packet delivery are the two important metrics for measuring the performance of geographic routing protocols for Wireless Adhoc and Sensor Networks (WASN). A protocol capable of ensuring both lesser energy consumption and experiencing lesser delay in packet delivery is thus suitable for networks which are delay sensitive and energy hungry at the same time. Thus a smart packet forwarding technique addressing both the issues is thus the one looked for by any geographic routing protocol. In the present paper we have proposed a Fermat point based forwarding technique which reduces the delay experienced during packet delivery as well as the energy consumed for transmission and reception of data packets.
Fault tolerant wireless sensor mac protocol for efficient collision avoidancegraphhoc
In sensor networks communication by broadcast methods involves many hazards, especially collision. Several MAC layer protocols have been proposed to resolve the problem of collision namely ARBP, where the best achieved success rate is 90%. We hereby propose a MAC protocol which achieves a greater success rate (Success rate is defined as the percentage of delivered packets at the source reaching the destination successfully) by reducing the number of collisions, but by trading off the average propagation delay of transmission. Our proposed protocols are also shown to be more energy efficient in terms of energy dissipation per message delivery, compared to the currently existing protocol.
Enhancing qo s and qoe in ims enabled next generation networksgraphhoc
Managing network complexity, accommodating greater numbers of subscribers, improving coverage to support data services (e.g. email, video, and music downloads), keeping up to speed with fast-changing technology, and driving maximum value from existing networks – all while reducing CapEX and OpEX and ensuring Quality of Service (QoS) for the network and Quality of Experience (QoE) for the user. These are just some of the pressing business issues faced by mobileservice providers, summarized by the demand to “achieve more, for less.” The ultimate goal of optimization techniques at the network and application layer is to ensure End-user perceived QoS. The next generation networks (NGN), a composite environment of proven telecommunications and Internet-oriented mechanisms have become generally recognized as the telecommunications environment of the future. However, the nature of the NGN environment presents several complex issues regarding quality assurance that have not existed in the legacy environments (e.g., multi-network, multi-vendor, and multi-operator IP-based telecommunications environment, distributed intelligence, third-party provisioning, fixed-wireless and mobile access, etc.). In this Research Paper, a service aware policy-based approach to NGN quality assurance is presented, taking into account both perceptual quality of experience and technologydependant quality of service issues. The respective procedures, entities, mechanisms, and profiles are discussed. The purpose of the presented approach is in research, development, and discussion of pursuing the end-to-end controllability of the quality of the multimedia NGN-based communications in an environment that is best effort in its nature and promotes end user’s access agnosticism, service agility, and global mobility
Simulated annealing for location area planning in cellular networksgraphhoc
LA planning in cellular network is useful for minimizing location management cost in GSM network. In fact, size of LA can be optimized to create a balance between the LA update rate and expected paging rate within LA. To get optimal result for LA planning in cellular network simulated annealing algorithm is used. Simulated annealing give optimal results in acceptable run-time
Secure key exchange and encryption mechanism for group communication in wirel...graphhoc
Secured communication in ad hoc wireless networks is primarily important, because the communication signals are openly available as they propagate through air and are more susceptible to attacks ranging from passive eavesdropping to active interfering. The lack of any central coordination and shared wireless medium makes them more vulnerable to attacks than wired networks. Nodes act both as hosts and routers and are interconnected by Multi- hop communication path for forwarding and receiving packets to/from other nodes. The objective of this paper is to propose a key exchange and encryption mechanism that aims to use the MAC address as an additional parameter as the message specific key[to encrypt]and forward data among the nodes. The nodes are organized in spanning tree fashion, as they avoid forming cycles and exchange of key occurs only with authenticated neighbors in ad hoc networks, where nodes join or leave the network dynamically.
Simulation to track 3 d location in gsm through ns2 and real lifegraphhoc
In recent times the cost of mobile communication has dropped significantly leading to a dramatic increase in mobile phone usage. The widespread usage has led mobiles to emerge as a strong alternative for other applications one of which is tracking. This has enabled law-enforcing agencies to detect overspeeding vehicles and organizations to keep track its employees. The 3 major ways of tracking being employed presently are (a) via GPS [1] (b) signal attenuation property of a packet [3] and (c) using GSM Network [2]. The initial cost of GPS is very high resulting in low usage whereas (b) needs a very high precision measuring device. The paper presents a GSM-based tracking technique which eliminates the above mentioned overheads, implements it in NS2 and shows the limitations of the real life simulation. An accuracy of 97% was achieved during NS2 simulation which is comparable to the above mentioned alternate methods of tracking.
Performance Analysis of Ultra Wideband Receivers for High Data Rate Wireless ...graphhoc
For high data rate ultra wideband communication system, performance comparison of Rake, MMSE and Rake-MMSE receivers is attempted in this paper. Further a detail study on Rake-MMSE time domain equalizers is carried out taking into account all the important parameters such as the effect of the number of Rake fingers and equalizer taps on the error rate performance. This receiver combats inter-symbol interference by taking advantages of both the Rake and equalizer structure. The bit error rate performances are investigated using MATLAB simulation on IEEE 802.15.3a defined UWB channel models. Simulation results show that the bit error rate probability of Rake-MMSE receiver is much better than Rake receiver and MMSE equalizer. Study on non-line of sight indoor channel models illustrates that bit error rate performance of Rake-MMSE (both LE and DFE) improves for CM3 model with smaller spread compared to CM4 channel model. It is indicated that for a MMSE equalizer operating at low to medium SNR values, the number of Rake fingers is the dominant factor to improve system performance, while at high SNR values the number of equalizer taps plays a more significant role in reducing the error rate.
Coverage and Connectivity Aware Neural Network Based Energy Efficient Routing...graphhoc
There are many challenges when designing and deploying wireless sensor networks (WSNs). One of the key challenges is how to make full use of the limited energy to prolong the lifetime of the network, because energy is a valuable resource in WSNs. The status of energy consumption should be continuously monitored after network deployment. In this paper, we propose coverage and connectivity aware neural network based energy efficient routing in WSN with the objective of maximizing the network lifetime. In the proposed scheme, the problem is formulated as linear programming (LP) with coverage and connectivity aware constraints. Cluster head selection is proposed using adaptive learning in neural networks followed by coverage and connectivity aware routing with data transmission. The proposed scheme is compared with existing schemes with respect to the parameters such as number of alive nodes, packet delivery fraction, and node residual energy. The simulation results show that the proposed scheme can be used in wide area of applications in WSNs.
An Overview of Mobile Ad Hoc Networks for the Existing Protocols and Applicat...graphhoc
Mobile Ad Hoc Network (MANET) is a collection of two or more devices or nodes or terminals with
wireless communications and networking capability that communicate with each other without the aid of
any centralized administrator also the wireless nodes that can dynamically form a network to exchange
information without using any existing fixed network infrastructure. And it’s an autonomous system in
which mobile hosts connected by wireless links are free to be dynamically and some time act as routers at
the same time, and we discuss in this paper the distinct characteristics of traditional wired networks,
including network configuration may change at any time , there is no direction or limit the movement and
so on, and thus needed a new optional path Agreement (Routing Protocol) to identify nodes for these
actions communicate with each other path, An ideal choice way the agreement should not only be able to
find the right path, and the Ad Hoc Network must be able to adapt to changing network of this type at any
time. and we talk in details in this paper all the information of Mobile Ad Hoc Network which include the
History of ad hoc, wireless ad hoc, wireless mobile approaches and types of mobile ad Hoc networks, and
then we present more than 13 types of the routing Ad Hoc Networks protocols have been proposed. In this
paper, the more representative of routing protocols, analysis of individual characteristics and advantages
and disadvantages to collate and compare, and present the all applications or the Possible Service of Ad
Hoc Networks
An Algorithm for Odd Graceful Labeling of the Union of Paths and Cycles graphhoc
In 1991, Gnanajothi [4] proved that the path graph n
P with n vertex and n −1edge is odd graceful, and
the cycle graph Cm with m vertex and m edges is odd graceful if and only if m even, she proved the
cycle graph is not graceful if m odd. In this paper, firstly, we studied the graphCm∪Pn when m = 4, 6,8,10
and then we proved that the graphCm∪Pn
is odd graceful if m is even. Finally, we described an
algorithm to label the vertices and the edges of the vertex set ( ) m n
V C ∪P and the edge set ( ) m n
E C ∪P .
A Proposal Analytical Model and Simulation of the Attacks in Routing Protocol...graphhoc
In this work we have devoted to some proposed analytical methods to simulate these attacks, and node mobility in MANET. The model used to simulate the malicious nodes mobility attacks is based on graphical theory, which is a tool for analyzing the behavior of nodes. The model used to simulate the Blackhole cooperative, Blackmail, Bandwidth Saturation and Overflow attacks is based on malicious nodes and the number of hops. We conducted a simulation of the attacks with a C implementation of the proposed mathematical models.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
Online aptitude test management system project report.pdfKamal Acharya
The purpose of on-line aptitude test system is to take online test in an efficient manner and no time wasting for checking the paper. The main objective of on-line aptitude test system is to efficiently evaluate the candidate thoroughly through a fully automated system that not only saves lot of time but also gives fast results. For students they give papers according to their convenience and time and there is no need of using extra thing like paper, pen etc. This can be used in educational institutions as well as in corporate world. Can be used anywhere any time as it is a web based application (user Location doesn’t matter). No restriction that examiner has to be present when the candidate takes the test.
Every time when lecturers/professors need to conduct examinations they have to sit down think about the questions and then create a whole new set of questions for each and every exam. In some cases the professor may want to give an open book online exam that is the student can take the exam any time anywhere, but the student might have to answer the questions in a limited time period. The professor may want to change the sequence of questions for every student. The problem that a student has is whenever a date for the exam is declared the student has to take it and there is no way he can take it at some other time. This project will create an interface for the examiner to create and store questions in a repository. It will also create an interface for the student to take examinations at his convenience and the questions and/or exams may be timed. Thereby creating an application which can be used by examiners and examinee’s simultaneously.
Examination System is very useful for Teachers/Professors. As in the teaching profession, you are responsible for writing question papers. In the conventional method, you write the question paper on paper, keep question papers separate from answers and all this information you have to keep in a locker to avoid unauthorized access. Using the Examination System you can create a question paper and everything will be written to a single exam file in encrypted format. You can set the General and Administrator password to avoid unauthorized access to your question paper. Every time you start the examination, the program shuffles all the questions and selects them randomly from the database, which reduces the chances of memorizing the questions.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on Binary Heap data structure. It is similar to the selection sort where we first find the minimum element and place the minimum element at the beginning. Repeat the same process for the remaining elements.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
International journal on applications of graph theory in wireless ad hoc networks and sensor networks
(GRAPH-HOC) Vol.3, No.4, December 2011
DOI : 10.5121/jgraphoc.2011.3403
ACTOR GARBAGE COLLECTION IN DISTRIBUTED
SYSTEMS USING GRAPH TRANSFORMATION
B. Seetha Lakshmi, C.D. Balapriya, R.Soniya
KLN College of Information Technology
Pottapalayam, Sivagangai District, Tamil Nadu, India
seethasee1976@rediffmail.com
ABSTRACT
A lot of research work has been done in the area of garbage collection for both uniprocessor and
distributed systems. Actors are associated with an activity (thread), and hence the usual garbage
collection algorithms cannot be applied to them; a separate algorithm is needed to collect them. If we
transform the active reference graph into a graph that captures all the features of actors yet looks like
a passive reference graph, then any passive-reference-graph algorithm can be applied to it. But the cost
of this transformation and its optimization are the core issues. This paper attempts to walk through
these issues.
KEYWORDS
Active Objects, Garbage Collection, Passive Objects, Distributed Garbage Collection,
Transformation Algorithm.
1. INTRODUCTION
When an object is no longer referenced by a program, the heap space it occupies can be recycled
so that the space is made available for subsequent new objects. If there is no automatic storage
reclamation, the programmer has to find unused objects manually and collect them, which is
error-prone and time-consuming.
This paper is divided into three sections. The first section deals with the fundamentals of the actor
system and traditional passive objects. The second discusses transformation algorithms. The third
section explains the cost and optimization of the transformation algorithm.
1.1 Distributed Garbage Collection
Distributed systems can support both passive and active objects. Active objects correspond to
actors, and we use the term actor to refer to them. One major difference between actors and passive
objects is the thread of control. A passive object is operated on by external threads, which can create
new objects, add new references, or delete references. If an object can possibly be manipulated by
external threads of control, it is live; otherwise it is garbage. An actor, on the other hand, has an
internal thread: it can be manipulated by external threads of control and, at the same time, it can
manipulate other objects provided it is not in a blocked state. Hence an actor is live if it can be
manipulated by external threads or if it can manipulate another object's thread; otherwise it is
garbage. Both active and passive objects can become garbage, and both require a garbage collection
mechanism to reclaim them.
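The distinction can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper: the class names and message protocol are invented for the example. The passive object acts only when one of its methods is called by an external thread; the actor owns an internal thread of control that drains its single mail queue.

```python
import queue
import threading

class Counter:
    """Passive object: it only acts when one of its methods is called."""
    def __init__(self):
        self.value = 0

    def increment(self):            # driven entirely by external threads
        self.value += 1

class CounterActor:
    """Active object (actor): it owns an internal thread of control that
    processes messages arriving on its single mail queue."""
    def __init__(self):
        self.value = 0
        self.mailbox = queue.Queue()                 # its single mail queue
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):                 # the actor's internal thread
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            if msg == "increment":
                self.value += 1

    def send(self, msg):            # external threads only enqueue messages
        self.mailbox.put(msg)
```

An external thread can only enqueue messages for the actor; whether and when they are processed depends on the actor's own thread, which is exactly why plain reachability does not determine actor liveness.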
1.2 Passive Object Reference Graph
Figure 1: Passive Reference Graph
The above diagram is a passive object reference graph in which root objects are shown as
triangles and other objects as circles. The objects (1, 2) that can be reached from the root objects
are not garbage.
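The passive-object liveness rule above amounts to a reachability traversal. The sketch below is illustrative only (the adjacency-set graph encoding and object names are assumptions, not part of the paper); it loosely mirrors Figure 1, where the root reaches objects 1 and 2 while a third object is unreferenced.

```python
def live_objects(roots, refs):
    """Mark phase of a tracing collector: every object reachable from
    a root by following references is live; the rest is garbage."""
    live = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(refs.get(obj, ()))
    return live

# Root R references object 1, which references object 2; object 3 is
# unreferenced and therefore garbage.
refs = {"R": {"1"}, "1": {"2"}, "2": set(), "3": set()}
live = live_objects({"R"}, refs)   # {"R", "1", "2"}
```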
1.3 Actor Reference Graph
Figure 2: Active Reference Graph
The above is an actor reference graph. Actors 3, 4 and 8 are live because they can potentially send
messages to the root. For example, even though actor 3 cannot send a message to the root
directly, it can do so indirectly through object 2 (that is, 3 can activate 2 by calling it, and 2 is
referenced by the root actor).
1.4 Terminology
The principal features and terminology of the actor model that relate to the garbage collection
problem are these:
Passive Object: A passive object is one that only speaks when spoken to, i.e., it responds and
calls functions on other objects only when one of its own functions is called; in essence, a
traditional programming object.
Active Object: An active object has a mind and life of its own. It owns its own thread of control,
notionally associated with its own mini address space.
Actor: A concurrently active object. There are no passive entities. Each actor is uniquely
identified by the address of its single mail queue.
Acquaintance: Actor B is an acquaintance of actor A if B’s mail queue address is known to actor
A.
Inverse acquaintance: if actor A is an acquaintance of actor B, then actor B is an inverse
acquaintance of A.
Acquaintance list: the set of mail queue addresses known to an actor, including any mail queue
address contained in a message on the actor's mail queue or in transit to it. This accounts for
delays in message processing.
Blocked actor: an actor all of whose behaviors are blocked.
Active actor: an actor with at least one active behavior.
Root actors: actors designated as being "always useful." Examples of root actors are those
which can directly affect the real world through sensors, actuators, I/O devices, users, etc.
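The terminology above maps naturally onto a small record type. The sketch below is a hypothetical encoding (the field and method names are mine, not part of the actor model's formal definition):

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    mail_queue_addr: str               # unique identity: address of its single mail queue
    acquaintances: set = field(default_factory=set)  # mail-queue addresses it knows
    blocked: bool = True               # blocked actor: all behaviors blocked
    is_root: bool = False              # "always useful": sensors, actuators, I/O, users

    def knows(self, other: "Actor") -> bool:
        # other is an acquaintance of self; equivalently, self is an
        # inverse acquaintance of other
        return other.mail_queue_addr in self.acquaintances

a = Actor("a", acquaintances={"b"}, blocked=False)   # an active actor
b = Actor("b")                                       # a blocked actor
```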
1.5 Need for Graph Transformation
When a system contains both active and passive object garbage, we would need an active object
garbage collection algorithm to collect the actor garbage and a passive object garbage collection
algorithm to collect the passive object garbage. Instead of using two algorithms, we can use a
transformation algorithm to convert the active object graph into a passive object graph. The next
section discusses two such transformation algorithms:
1. The transformation algorithm by Vardhan and Agha.
2. The transformation algorithm by Wei-Jen Wang et al.
2. TRANSFORMATION ALGORITHMS
2.1 Transformation Algorithm by Vardhan and Agha
The method proposed by Vardhan and Agha performs a transformation of the actor reference graph
which captures all the information necessary for actor GC, and makes it possible to apply a
garbage collection algorithm for passive objects to the transformed graph in order to collect
garbage actors. The transformation represents each actor in the original graph by a pair of nodes
in the transformed graph. References between nodes in the transformed graph are derived using
rules that depend not only on which actors know a particular actor, but also on which actors it
knows, and on whether or not that actor has messages pending in its mail queue.
Rules for Transformation
1. For every actor named a in the original actor graph, there are two corresponding nodes in
the transformed graph: the original object α(a) and its mail queue µ(a).
Figure 3: Rule 1
2. For every root actor there is likewise an equivalent object and a mail queue object; the mail queue node serves as a root of the transformed graph.
Figure 4: Rule 2
3. If an actor a is unblocked, there is an edge from its mail queue µ(a) to its object α(a) in the
transformed graph.
Figure 5: Rule 3
4. If an actor a has a reference to an actor b, there is an edge from α(a) to α(b), and an inverse
edge from the mail queue µ(b) to the mail queue µ(a), in the transformed graph.
Figure 6: Rule 4
Example
In the figure below, for the actor names i ∈ {1, 6, 10, 12}, which are unblocked, there is an edge from
µ(i) to α(i). Looking at this graph, we can see that a garbage collector for passive objects would
regard α(1), α(2), α(3), α(4), α(5), α(6) and α(8) as live and all other objects in A′ as garbage. A
look at the original actor reference graph shows that it is exactly actors 1, 2, 3, 4, 5, 6 and 8 that
are live. Of special interest is α(6) in the transformed graph. Because α(6) has a reference from
µ(6), which is reachable from µ(1) (the root), it is correctly identified as being live. The reader may
also note that, although µ(7) is reachable in the transformed graph, α(7) is not. By step 4 of
Algorithm 1, it is α(7) that is used for deciding the garbage status of actor 7, and hence 7 is correctly
identified as garbage.
Figure 7: Example
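To make the transformation concrete, here is a minimal Python sketch under stated assumptions: it encodes the four rules as α/µ node pairs, and the Rule 4 edge directions (α(a) → α(b) plus an inverse mail-queue edge µ(b) → µ(a)) are my reading of the rule text, chosen so that the behaviour in the worked example (µ(7) reachable but α(7) not, hence 7 garbage) is reproduced. The toy graph is illustrative, not Figure 7 itself.

```python
def transform(actors, refs, roots, unblocked):
    """Build the passive graph A' from an actor graph.
    Rules 1-2: two nodes per actor, alpha(a) and mu(a); mu(root) is a root.
    Rule 3: mu(a) -> alpha(a) for every unblocked actor a.
    Rule 4 (as read here): for each reference a -> b, add
    alpha(a) -> alpha(b) and the inverse edge mu(b) -> mu(a)."""
    alpha = lambda a: ("alpha", a)
    mu = lambda a: ("mu", a)
    g = {n: set() for a in actors for n in (alpha(a), mu(a))}
    for a in unblocked:
        g[mu(a)].add(alpha(a))
    for a, targets in refs.items():
        for b in targets:
            g[alpha(a)].add(alpha(b))
            g[mu(b)].add(mu(a))
    return g, {mu(r) for r in roots}

def reachable(g, roots):
    seen, stack = set(), list(roots)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(g[n])
    return seen

# Toy graph: root actor 1; unblocked actors {1, 3}; references 1->2,
# 2->1, 3->2 and 7->1.  Actor 7 is blocked, so it is garbage even
# though it references the root: mu(7) is reachable but alpha(7) is not.
g, roots = transform({1, 2, 3, 7},
                     {1: {2}, 2: {1}, 3: {2}, 7: {1}}, {1}, {1, 3})
live = {a for kind, a in reachable(g, roots) if kind == "alpha"}
# live == {1, 2, 3}; actor 7 is collected
```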
2.2 Transformation Algorithm by Wei Jen Wang
The essential concept of passive object garbage lies in the idea of the possibility of object
manipulation. Objects that can be manipulated by the thread of control of the application are live;
otherwise they are garbage. Root objects are those which can be directly accessed by the thread
of control, while transitively live objects are those transitively reachable from the root objects by
following references. The problem of passive object garbage collection and active garbage
collection can be represented as a graph problem. Hence, if we transform the active reference graph
into a passive reference graph, we can apply any passive garbage collection algorithm to collect
the garbage.
2.2.1 Transformation by Direct Back Pointers to Unblocked Actors.
This is a simpler approach to transforming actor garbage collection into passive object garbage
collection, by setting

E′ = E ∪ {aq → au | au ∈ (U ∪ R) ∧ au ⇝ aq},

that is, adding a back pointer to each unblocked or root actor au from every actor aq reachable from it.
Figure 8: Example
For example, in the figure above, Actors 2 and 3 have back pointers to Unblocked Actor 1 because
they are reachable from Actor 1. Actor 11 has a back pointer to Root Actor 9 and another to
Unblocked Actor 13 for the same reason. Actor 3 does not have a back pointer to Actor 5 because
Actor 5 is neither a root nor an unblocked actor. Note that the term back pointers is used for the
newly added references to avoid ambiguity with the term inverse references.
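Under the definition above, the direct back-pointer construction can be sketched as follows (the adjacency-set graph encoding is an assumption, and U ∪ R is passed in as a single set of unblocked and root actors):

```python
def reachable_from(graph, start):
    """Actors reachable from start by following references
    (start itself is included only if it lies on a cycle)."""
    seen, stack = set(), [start]
    while stack:
        for m in graph.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def add_direct_back_pointers(graph, unblocked_or_root):
    """E' = E U {aq -> au | au in (U u R) and au ~> aq}: every actor
    reachable from an unblocked or root actor au gets a direct back
    pointer to au."""
    new = {a: set(bs) for a, bs in graph.items()}
    for au in unblocked_or_root:
        for aq in reachable_from(graph, au):
            new.setdefault(aq, set()).add(au)
    return new

# Echoing the example's pattern: 1 -> 2 -> 3 with actor 1 unblocked;
# actors 2 and 3 each gain a back pointer to actor 1.
g = add_direct_back_pointers({1: {2}, 2: {3}}, {1})
```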
2.2.2 Transformation by Indirect Back Pointers to Unblocked Actors.
This is another, similar approach to transforming actor garbage collection into passive object
garbage collection:

E′ = E ∪ {aq → ap | au ∈ (U ∪ R) ∧ (ap → aq) ∈ E ∧ au ⇝ ap},

that is, every edge ap → aq lying on a path from an unblocked or root actor is given a corresponding reverse edge.
Figure 9: Example
For example, in the figure above, Actor 2 has a back pointer to Unblocked Actor 1 and Actor 3 has a
back pointer to Actor 2, because both are reachable from Actor 1. The newly added back pointers
create a counter-directional path for every path from an unblocked/root actor to an actor reachable
from it. Similarly, Actor 11 gains a new counter-directional path to Root Actor 9 and another to
Unblocked Actor 13.
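A corresponding sketch for the indirect variant, reversing each edge that lies on a path out of an unblocked or root actor (again with an assumed adjacency-set encoding, and treating reachability as reflexive so an actor's own outgoing edges are reversed too):

```python
def reachable_from(graph, start):
    """Actors reachable from start by following references."""
    seen, stack = set(), [start]
    while stack:
        for m in graph.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def add_indirect_back_pointers(graph, unblocked_or_root):
    """E' = E U {aq -> ap | au in (U u R), (ap -> aq) in E, au ~> ap}:
    reverse each edge lying on a path out of an unblocked or root actor,
    yielding a counter-directional path back towards it."""
    covered = set(unblocked_or_root)          # reflexive reachability
    for au in unblocked_or_root:
        covered |= reachable_from(graph, au)
    new = {a: set(bs) for a, bs in graph.items()}
    for ap, targets in graph.items():
        if ap in covered:
            for aq in targets:
                new.setdefault(aq, set()).add(ap)
    return new

# Echoing the example's pattern: 1 -> 2 -> 3 with actor 1 unblocked;
# actor 2 gains a back pointer to 1 and actor 3 gains one to 2.
g = add_indirect_back_pointers({1: {2}, 2: {3}}, {1})
```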
3. COST AND OPTIMIZATION
3.1 Transformation by Abhay Vardhan et al.
3.1.1 Analysis of Cost
To analyze the cost of GC (garbage collection), an actor program was run on the Actor
Foundry. The program implements an exhaustive-search solution to the 5-queens and 4-queens
problems. The problem is to put 5 queens on a 5 by 5 (4 queens on a 4 by 4) chessboard such
that no queen is under attack from another according to the rules of chess. In the
implementation a single actor, C, starts the computation with an empty chess board. It places a
single queen in one of the squares on the first row and creates an actor to solve the remainder of
the problem. One actor is created for every square on the first row. When a newly created actor
receives a partially filled chess-board it places queens on the row following the rows that have
already been filled and spawns additional actors to do the remainder of the computation. If an
actor manages to fill all rows, it sends a message to C notifying it of the solution. The program
generated a large amount of garbage.
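The fan-out structure of the benchmark can be illustrated with a sequential sketch, where a recursive call stands in for creating a new actor and the solutions list stands in for the messages sent back to the coordinating actor C; this is only an illustration of the search, not the Actor Foundry implementation.

```python
def solve(board, n, solutions):
    """Each call stands in for an actor receiving a partially filled
    board: it places a queen on the next free row and 'spawns' one
    child per safe column (a recursive call instead of a new actor)."""
    row = len(board)
    if row == n:
        solutions.append(board)      # the 'message' sent back to C
        return
    for col in range(n):
        if all(c != col and abs(c - col) != row - r
               for r, c in enumerate(board)):
            solve(board + [col], n, solutions)

sols = []
solve([], 5, sols)                   # coordinator C starts with an empty board
print(len(sols))                     # the 5-queens problem has 10 solutions
```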
Table 1 : Timings for 5-Queens Problem(Single Host)
Table 1 shows the timings for the 5-queens problem on a single host. Table 2 shows the breakup of
the cost of GC for the 5-queens problem on a single host. Table 3 shows the timings for the
4-queens problem on a network of two hosts.
Table 2 : Breakup cost for 5-Queens Problem(Single Host)
Table 3 : Timings for 4-Queens Problem(Two Hosts)
Experimental results indicate that the ratio of the time taken with GC running to the time without
GC is about 1.6 for the single-host case and about 1.3 for a network with two hosts.
3.1.2 Analysis of Performance
Table 4 shows the various parameters that affect performance, together with their advantages,
disadvantages and a possible suggestion to overcome them.
Table 4: Various Performance Parameters

Parameter: Mail queue objects
Advantage: Exposes the blocked or unblocked status of the objects.
Disadvantage: The extra space occupied by the mail queue object is an overhead.
Suggestion: The mail queue information can be maintained as a bit in the original object itself.

Parameter: Inverse acquaintances
Advantage: Without them, a GC algorithm has to trace the entire reachability set of each unblocked actor through which an actor passes messages to the root.
Disadvantage: Deciding when to maintain the inverse acquaintances.
Suggestion: Inverse acquaintances can be maintained at the time of garbage collection.
Since mail queue objects and inverse acquaintances are introduced in the transformed graph,
exactly twice the number of objects and three times the number of references are added as overhead.
3.2 Direct and Indirect Back Pointers by Wei Jen Wang
3.2.1 Analysis of Cost
To understand the impact of actor garbage collection, we measure actor garbage collection using
four different mechanisms: NO-GC, GDP, LGC, and CDGC. By using these mechanisms, we can
understand the overhead each actor garbage collection algorithm imposes on the actor system.
The mechanisms are described as follows:
NO-GC: Data structures and algorithms for actor garbage collection are not used.
GDP: The local garbage collector is not activated. Only the garbage detection protocol
(the implementation of the pseudo-root approach) is used.
LGC: The local garbage collector is activated every n seconds or in the case of
insufficient memory (n=2 for the tests in this chapter).
CDGC: The logically centralized garbage collector is activated every m seconds
or in the case of insufficient memory (m=20 for the tests).
We developed three different benchmark applications to measure the impact of our local
actor garbage collection mechanism. These applications are Fibonacci number (Fib), N queens
number (NQ), and Matrix multiplication (MX). Each application is executed on a dual-core
processor Sun Blade 1000s machine, equipped with two 750 MHz processors and 2 GB of RAM.
The operating system used was SunOS 5.10 and the Java VM was Java HotSpot Client VM (build
1.4.1). The applications are described as follows:
Fibonacci number (Fib): Fibonacci number, abbreviated as Fib, takes one argument k and
then computes the k-th Fibonacci number concurrently. It is a coordinated tree-structure
computation. When k ≤ 30, the application sequentially computes the k-th Fibonacci
number.
N queens number (NQ): N queens number, abbreviated as NQ, takes one argument to
calculate the total solutions of the N queens problem by creating (N −1)×(N −2) actors
for parallel execution and one actor for coordination.
Matrix multiplication (MX): Matrix multiplication, abbreviated as MX, requires two files
for application arguments, each of which contains a matrix. The application calculates
one matrix multiplication of the given two matrices.
We also developed four distributed benchmark applications. They are performed on four dual-
core processor Sun Blade 1000s machines. The distributed benchmark applications are described
as follows:
Distributed Fibonacci number with locality (Dfibl): Dfibl optimizes the number of inter-
node messages by locating four sub-computing-trees at each computing node.
Distributed Fibonacci number without locality (Dfibn): Dfibn distributes the actors in a
breadth-first-search manner.
Distributed N queens number (DNQ): DNQ equally distributes the actors to four
computing nodes.
Distributed Matrix multiplication (DMX): DMX divides the first input matrix into four
sub-matrices, sends the sub-matrices and the second matrix to four computing nodes,
performs one matrix multiplication operation, and then merges the data at the computing
node that initializes the computation.
The local experimental results are shown in Table 5, and the distributed results in Table 6.
Each result of a benchmark application is the average of ten execution times. Note that Real
represents the total real execution time to obtain the computing result, while CPU represents the
total CPU time of both processors. CPU time can be larger than Real time because the test machine
has two CPUs and the CPU time is the sum of the individual CPU times. The average GDP Real
time overhead of the local experimental results is 20.5%; the average GDP CPU time overhead is
16%; the average LGC+GDP Real time overhead is 24%; the average LGC+GDP CPU time
overhead is 19%; and the average LGC+GDP+CDGC Real time overhead is 19%.
Table 5: Local Experimental Results
Table 6: Distributed Results
3.2.2 Analysis of Performance
Table 7 shows the various parameters that affect performance, together with their advantages,
disadvantages and a possible suggestion to overcome them.
Table 7: Performance Analysis

Parameter: Scanning the reference graph twice
Advantage: Only two scans of the reference graph are needed for marking, with linear time complexity O(V + E) and extra space complexity O(V + E).
Disadvantage: Processing the graph twice is an overhead.
Suggestion: Use one extra marking variable in each actor and scan the reference graph once.
4. Conclusion
In the transformation algorithm given by Abhay Vardhan, mail queue objects and inverse
acquaintances are introduced in the transformed graph: exactly twice the number of objects and
three times the number of references are added as overhead. Compared to Abhay Vardhan's
transformation method, Wei-Jen Wang's method is more efficient since there are no mail queue
objects. The number of references is also smaller in Wei-Jen Wang's method, since inverse
acquaintances are not added for all nodes. The back pointer algorithm requires scanning the
reference graph twice, which is again an overhead. It has linear time complexity of O(V + E) and
extra space complexity of O(V + E). Only these two algorithms are available for transforming an
active reference graph into a passive reference graph. This area needs further research to minimise
the overheads caused by the transformation.
References
[1] Kafura, D., Washabaugh, D., Nelson, J.: Garbage collection of actors. In: OOPSLA '90 ACM
Conference on Object-Oriented Systems, Languages and Applications, ACM Press.
[2] Vardhan, A., Agha, G.: Using passive object garbage collection algorithms for garbage collection of
active objects. In: ISMM '02. ACM SIGPLAN Notices, Berlin, ACM Press.
[3] Wang, W.-J., Varela, C., Hsu, F.-H., Tang, C.-H.: Actor garbage collection using
vertex-preserving actor-to-object graph transformations. CiteSeerX.
[4] Agha, G. (1986): Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press.
[5] Abdullahi, S.E., Ringwood, A. (1998): Garbage collecting the Internet: A survey of distributed
garbage collection. ACM Computing Surveys 30(3), 330–373.
[6] Wang, Varela: Distributed garbage collection for mobile actor systems: The pseudo root approach.
[7] Dickman: Incremental, distributed orphan detection and actor garbage collection using graph
partitioning and Euler cycles. Submitted for publication.
[8] Washabaugh: Real-time garbage collection of actors in a distributed system.
[9] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C. (2001): Introduction to
Algorithms, Second edn. MIT Press/McGraw-Hill, 498–522.
[10] Nelson, J. (1989): Automatic, incremental, on-the-fly garbage collection of actors. Master's thesis,
Virginia Tech, Blacksburg, VA.