This document describes a genetic algorithm approach for optimizing the hierarchical structure of multi-agent systems. It introduces a novel "hierarchical genetic algorithm" that uses hierarchical crossover and mutation operators, and represents hierarchical organizational structures as arrays to make them amenable to those genetic operators. The algorithm was tested on an information retrieval model case study involving 10 scenarios. The results showed it could find competitive organizational structures leading to optimal performance, outperforming traditional genetic algorithms.
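The array encoding mentioned above can be sketched as follows. This is a hypothetical illustration of the general idea (a hierarchy stored as a parent array so that standard genetic operators apply directly), not the paper's actual representation or operators.

```python
# Hypothetical sketch: a hierarchy of agents stored as a parent array,
# where entry i is the supervisor of agent i (-1 marks the root).
import random

def random_hierarchy(n, rng):
    """Parent array of n agents: agent 0 is the root; each other agent
    reports to a randomly chosen lower-numbered agent (always a valid tree)."""
    return [-1] + [rng.randrange(i) for i in range(1, n)]

def mutate(parents, rng):
    """Reassign one non-root agent to a (possibly new) lower-numbered supervisor."""
    child = parents[:]
    i = rng.randrange(1, len(child))
    child[i] = rng.randrange(i)       # lower-numbered parent keeps the tree acyclic
    return child

rng = random.Random(0)
h = random_hierarchy(5, rng)
m = mutate(h, rng)
# Both h and m are valid trees: exactly one root, no cycles.
```

Keeping each agent's supervisor at a lower index guarantees every array is a valid tree, so mutation never has to repair cycles.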
CONFIGURING ASSOCIATIONS TO INCREASE TRUST IN PRODUCT PURCHASE (IJwest)
Clustering categorizes data into groups of similar objects. Data mining adds to the complexity of clustering large datasets with many features. Among these datasets are those of electronic stores that offer their products through the web. Such stores require recommender systems that can offer users the products they are most likely to need. In this study, users' previous purchases are used to present a sorted list of products to the user. Identifying associations among users and finding cluster centers increases the precision of the recommended list. Configuring associations and building user profiles are central concerns in current studies. In the proposed method, association rules model user interactions on the web: the time spent on a page and the frequency of visits are used to weight pages and describe users' interest in page groups. The weight of each transaction item therefore reflects the user's interest in that item. Analysis of the results shows that the proposed method yields a more complete model of user behavior because it combines page weight and membership degree simultaneously when ranking candidate pages. The method achieves higher accuracy than comparable methods, even as the number of pages grows.
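The page-weighting idea can be sketched as follows; the specific combination rule (an equal-weight mix of normalized dwell time and normalized visit frequency) is an assumption for illustration, not the paper's exact formula.

```python
# Hypothetical sketch of weighting pages by visit duration and frequency,
# as described in the abstract. The combination rule is assumed.

def page_weights(sessions):
    """sessions: list of (page, seconds_spent) visit records."""
    total_time = {}
    visits = {}
    for page, seconds in sessions:
        total_time[page] = total_time.get(page, 0) + seconds
        visits[page] = visits.get(page, 0) + 1
    max_time = max(total_time.values())
    max_visits = max(visits.values())
    # Weight mixes normalized dwell time and normalized visit frequency.
    return {p: 0.5 * total_time[p] / max_time + 0.5 * visits[p] / max_visits
            for p in total_time}

sessions = [("home", 10), ("laptop", 120), ("laptop", 90), ("cart", 30)]
w = page_weights(sessions)
# "laptop" gets the highest weight: most time spent and most visits.
```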
A survey on discrimination deterrence in data mining (eSAT Journals)
Abstract
Data mining is an important technology for extracting useful knowledge hidden in large datasets, but it carries some negative perceptions, among them the risk of unfairly treating people who belong to a specific group. Classification rule mining has paved the way for automated decisions such as loan granting/denial and insurance premium computation, built on automated data collection and data mining techniques. If the training data are biased with respect to discriminatory attributes, discriminatory decisions may ensue. Anti-discrimination techniques in data mining therefore include both discrimination discovery and discrimination prevention, which can be direct or indirect. Discrimination is direct when decisions are made on the basis of sensitive attributes; it is indirect when decisions are made on the basis of non-sensitive attributes that are strongly correlated with biased sensitive ones. The proposed system addresses discrimination prevention in data mining and introduces new, improved techniques applicable to direct or indirect discrimination prevention, individually or both simultaneously. It discusses how to clean training datasets and outsourced datasets so that direct and/or indirect discriminatory decision rules are converted into legitimate classification rules, proposes new metrics to evaluate the utility of the suggested methods, and compares those methods.
Keywords: Antidiscrimination, data mining, direct and indirect discrimination prevention, rule protection, rule generalization, privacy.
This document discusses using classification algorithms in data mining to predict employee performance. It evaluates the C4.5, Bagging, and Rotation Forest decision tree algorithms on a dataset from an educational institution. The Rotation Forest algorithm achieved the highest accuracy at 100% on the training set, while C4.5 and Bagging had lower accuracies. When evaluated using 10-fold cross-validation, Rotation Forest again had the best performance at 51.46% accuracy, compared to 41.47% for C4.5 and 45.62% for Bagging. The study aims to identify the most effective algorithm for predicting employee talent and performance using a machine learning approach.
IRJET- Personalize Travel Recommandation based on Facebook Data (IRJET Journal)
The document summarizes a proposed system for personalized travel recommendation based on Facebook data. It begins by discussing existing challenges with cold start recommendations and existing recommendation techniques. It then proposes a new framework called Implicit-feedback based Content-aware Collaborative Filtering (ICCF) that incorporates semantic content from social networks to address cold start recommendations without negative sampling. Finally, it evaluates ICCF on a large location-based social network dataset and finds it outperforms existing baselines particularly for cold start scenarios by leveraging user profile information.
The document discusses a link mining methodology adapted from the CRISP-DM process to incorporate anomaly detection using mutual information. It applies this methodology in a case study of co-citation data. The methodology involves data description, preprocessing, transformation, exploration, modeling, and evaluation. Hierarchical clustering identified 5 clusters, with cluster 1 showing strong links and cluster 5 weak links. Mutual information validated the results, showing cluster 5 had the lowest mutual information, indicating independent variables. The case study demonstrated the approach can interpret anomalies semantically and be used with real-world data volumes and inconsistencies.
This document presents a novel approach to anomaly detection in link mining based on applying mutual information. It adapts the CRISP-DM methodology for link mining and applies it to a case study using co-citation data. The methodology includes data description, preprocessing, transformation, exploration, modeling through graph mapping and hierarchical clustering, and evaluation. Mutual information is used to interpret the semantics of anomalies identified in clusters. The case study identifies collective and community anomalies and confirms mutual information can validate clustering results by showing strong links within clusters but independence between objects in one cluster.
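The validation idea in the two summaries above, where near-zero mutual information indicates independence and high mutual information indicates strong links, can be illustrated with a minimal, generic computation from paired samples. This is just the underlying measure, not the paper's pipeline.

```python
# Generic mutual information between two discrete variables, computed
# from co-occurrence counts of paired samples.
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired samples xs, ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Perfectly linked variables give 1 bit; independent-looking pairs give ~0.
linked = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])        # 1.0
independent = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])   # 0.0
```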
A Novel Approach for Travel Package Recommendation Using Probabilistic Matrix... (IJSRD)
This document proposes a novel approach for travel package recommendation using probabilistic matrix factorization (PMF). It discusses how existing recommendation systems are usually classification-based and supervised, whereas the proposed approach uses an unsupervised E-TRAST (Efficient-Tourist Relation Area Season Topic) model. The E-TRAST model represents travel packages and tourists using different topics modeled through PMF. It analyzes travel data characteristics and introduces a cocktail approach considering features like seasonal tourist performance to recommend customized travel packages.
Clustering Prediction Techniques in Defining and Predicting Customers Defecti... (IJECEIAES)
With the growth of the e-commerce sector, customers have more choices, which encourages them to divide their purchases among several e-commerce sites and compare competitors' products, increasing the risk of churning. A review of the literature on customer churn models reveals that no prior research had considered both partial and total defection in non-contractual online environments; instead, studies focused on either total or partial defection. This study proposes a customer churn prediction model in an e-commerce context, wherein a clustering phase based on the integration of the k-means method and the Length-Recency-Frequency-Monetary (LRFM) model is employed to define churn, followed by a multi-class prediction phase based on three classification techniques: simple decision tree, artificial neural networks, and decision tree ensemble. The dependent variable classifies a customer as continuing loyal buying patterns (non-churned), a partial defector (partially-churned), or a total defector (totally-churned). Macro-averaging measures, including average accuracy and macro-averages of precision, recall, and F1, are used to evaluate classifier performance under 10-fold cross-validation. Using real data from an online store, the results show that the decision tree ensemble model outperforms the other models in identifying both future partial and total defection.
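As a minimal illustration of the clustering phase's inputs, the LRFM features for one customer can be derived from a transaction history as below; the field layout and the example data are illustrative assumptions, not the paper's exact definitions.

```python
# Hypothetical sketch of building LRFM (Length, Recency, Frequency, Monetary)
# features from a customer's transaction history.
from datetime import date

def lrfm(purchases, today):
    """purchases: list of (purchase_date, amount) for one customer."""
    dates = sorted(d for d, _ in purchases)
    return {
        "length": (dates[-1] - dates[0]).days,     # span of the relationship
        "recency": (today - dates[-1]).days,       # days since last purchase
        "frequency": len(purchases),               # number of purchases
        "monetary": sum(a for _, a in purchases),  # total spend
    }

history = [(date(2023, 1, 5), 40.0), (date(2023, 3, 1), 25.0),
           (date(2023, 6, 20), 60.0)]
features = lrfm(history, today=date(2023, 12, 31))
# -> {'length': 166, 'recency': 194, 'frequency': 3, 'monetary': 125.0}
```

The clustering phase would then run k-means over these four features for all customers to separate loyal, partially-churned, and totally-churned groups.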
Performance Analysis of Selected Classifiers in User Profiling (ijdmtaiir)
User profiles can serve as indicators of personal preferences which can be effectively used when providing personalized services. Building user profiles that capture accurate information about individuals has been a daunting task. Researchers have made several attempts to extract information from different data sources to build user profiles in different application domains. Towards this end, in this paper we employ different classification algorithms to create accurate user profiles based on information gathered from demographic data. The aim of this work is to analyze the performance of five of the most effective classification methods, namely Bayesian Network (BN), Naïve Bayes (NB), Naïve Bayes Updateable (NBU), J48, and Decision Table (DT). Our simulation results show that, in general, J48 has the highest classification accuracy with the lowest error rate. On the other hand, the Naïve Bayes and Naïve Bayes Updateable classifiers require the least time to build the classification model.
UML MODELING AND SYSTEM ARCHITECTURE FOR AGENT BASED INFORMATION RETRIEVAL (ijcsit)
In the current technological era, there is an enormous increase in the information available on the web and in online databases. This abundance of information increases the complexity of finding relevant information. To address this challenge, improved and intelligent systems are needed for efficient search and retrieval. Intelligent agents can be used for better search and information retrieval in a document collection, since the information a user requires is scattered across a large number of databases. In this paper, an object-oriented model for an agent-based information retrieval system is presented. The paper also discusses the framework of the agent architecture for obtaining the best combination of terms to serve as an input query to the information retrieval system, and explains the communication and cooperation among the agents, each of which has a specific task in information retrieval.
Identification of important features and data mining classification technique... (IJECEIAES)
Employee absenteeism at work costs organizations billions a year. Predicting employee absenteeism and the reasons behind it helps organizations reduce expenses and increase productivity. Data mining turns the vast volume of human resources data into information that can support decision-making and prediction. Although feature selection is a critical step in data mining for enhancing the efficiency of the final prediction, it is not yet known which feature selection method is better. Therefore, this paper compares the performance of three well-known feature selection methods in absenteeism prediction: relief-based feature selection, correlation-based feature selection, and information-gain feature selection. In addition, it aims to find the best combination of feature selection method and data mining technique for enhancing absenteeism prediction accuracy. Seven classification techniques were used as prediction models, and a cross-validation approach was utilized to assess them for more realistic and reliable results. The dataset was built at a courier company in Brazil from records of absenteeism at work. In the experimental results, correlation-based feature selection surpasses the other methods across the performance measurements, and the bagging classifier was the best-performing data mining technique when features were selected using correlation-based feature selection, with an accuracy rate of 92%.
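Information-gain feature selection, one of the three compared methods, scores a feature by how much knowing its value reduces the entropy of the class label. A generic sketch of the score, not the paper's exact setup:

```python
# Information gain of a discrete feature with respect to class labels.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    cond = 0.0
    for v, count in Counter(feature_values).items():
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += (count / n) * entropy(subset)
    return entropy(labels) - cond

# A feature that perfectly predicts absenteeism has maximal gain;
# an uninformative feature has zero gain.
labels = ["absent", "absent", "present", "present"]
print(information_gain(["far", "far", "near", "near"], labels))   # 1.0
print(information_gain(["a", "b", "a", "b"], labels))             # 0.0
```

Ranking all features by this score and keeping the top-scoring ones is the essence of the information-gain method the paper benchmarks.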
International Journal of Engineering Research and Development (IJERD), IJERD Editor
An improved technique for ranking semantic associations (IJwest)
The primary focus of the search techniques in the first generation of the Web was accessing relevant documents from the Web. Though this satisfies user requirements, it is insufficient when the user wishes to access actionable information involving complex relationships between two given entities. Finding such complex relationships (also known as semantic associations) is especially useful in applications such as national security, pharmacy, and business intelligence. The next frontier is therefore discovering relevant semantic associations between two entities present in large semantic metadata repositories. Given two entities, a huge number of semantic associations may exist between them, so these associations must be ranked in order to find the more relevant ones. For this purpose, Aleman-Meza et al. proposed a method involving six metrics: context, subsumption, rarity, popularity, association length, and trust. To compute the overall rank of the associations, this method computes context, subsumption, rarity, and popularity values for each component of every association. However, many components appear repeatedly across associations, so it is not necessary to recompute their context, subsumption, rarity, popularity, and trust values for each association; previously computed values can be reused when computing the overall ranks. This paper proposes a method that reuses previously computed values via a hash data structure, thus reducing execution time. To demonstrate the effectiveness of the proposed method, experiments were conducted on the SWETO ontology. The results show that the proposed method is more efficient than existing methods.
Biometric Identification and Authentication Providence using Fingerprint for ... (IJECEIAES)
The rise in recent security incidents in cloud computing makes securing data a central challenge. To address this problem, this paper presents mobile biometric authentication in cloud computing, integrating mobile devices with the cloud. Since mobile cloud computing is popular among mobile users, biometric authentication is used to enhance security. The paper examines how mobile cloud computing (MCC) handles this security issue with a fingerprint biometric authentication model. From the fingerprint biometric, a secret code is generated from an entropy value, which enables a person to request access to data on a desktop computer. When the person requests access from the authorized user via Bluetooth on a mobile device, the authorized user grants access through the fingerprint secret code. Finally, the fingerprint is verified against the database on the desktop computer; if it matches, the requesting person can access the computer.
Automatic detection of online abuse and analysis of problematic users in wiki... (Melissa Moody)
For their 2019 capstone project, DSI Master of Science in Data Science students Charu Rawat, Arnab Sarkar, and Sameer Singh proposed a framework to understand and detect online abuse in the English Wikipedia community.
Rawat, Sarkar, and Singh received the award for Best Paper in the Data Science for Society category at the 2019 Systems & Information Design Symposium (SIEDS). In "Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia," the team presented an analysis of user misconduct in Wikipedia and a system for the automated early detection of inappropriate behavior.
The document proposes an improved clustering algorithm for social network analysis. It combines BSP (Business System Planning) clustering with Principal Component Analysis (PCA) to group social network objects into classes based on their links and attributes. Specifically, it applies PCA before BSP clustering to reduce the dimensionality of the social network data and retain only the most important variables for clustering. This improves the BSP clustering results by focusing on the key information in the social network.
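The PCA step can be illustrated in miniature: project the data onto its first principal component, found here by power iteration on the covariance matrix, so that clustering operates on the dominant direction of variance. This pure-Python sketch keeps a single component for brevity; a real pipeline would retain several.

```python
# Minimal PCA sketch: center the data, build the covariance matrix, find the
# dominant eigenvector by power iteration, and project onto it.
import random

def first_pc(data, iters=100):
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(centered[k][i] * centered[k][j] for k in range(n)) / (n - 1)
            for j in range(d)] for i in range(d)]
    rng = random.Random(0)
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):                     # power iteration
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Project each centered row onto the dominant direction.
    return v, [[sum(c[j] * v[j] for j in range(d))] for c in centered]

data = [[1.0, 1.1], [2.0, 2.1], [3.0, 2.9], [10.0, 10.2]]
direction, projected = first_pc(data)
# `projected` is 1-D data preserving the dominant spread of the 2-D input.
```

Clustering (BSP or otherwise) would then run on `projected` instead of the raw data, which is the dimensionality-reduction benefit the summary describes.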
A Study of Neural Network Learning-Based Recommender System (theijes)
This document summarizes a study that proposes a neural network learning model for recommender systems. The study aims to improve collaborative filtering methods by estimating user preferences from correlations between users learned through a neural network. The proposed method was tested on MovieLens data and showed a 6.7% improvement in precision compared to other techniques. Precision and recall improved further, by 3.5% and 2.4% respectively, when film genre information was included in the neural network learning. The document concludes that the proposed technique can utilize diverse data sources and performs well regardless of data complexity compared to other recommender system methods.
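For context, the classical correlation-based collaborative filtering that the neural model aims to improve on can be sketched as below; the ratings and user names are made up for the example.

```python
# Baseline user-user collaborative filtering: predict a rating as the
# correlation-weighted average over positively correlated neighbours.
from math import sqrt

def pearson(a, b, ratings):
    """Pearson correlation between users a and b over co-rated items."""
    common = [i for i in ratings[a] if i in ratings[b]]
    if len(common) < 2:
        return 0.0
    xa = [ratings[a][i] for i in common]
    xb = [ratings[b][i] for i in common]
    ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
    num = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
    den = (sqrt(sum((x - ma) ** 2 for x in xa))
           * sqrt(sum((y - mb) ** 2 for y in xb)))
    return num / den if den else 0.0

def predict(user, item, ratings):
    num = den = 0.0
    for v in ratings:
        if v == user or item not in ratings[v]:
            continue
        w = pearson(user, v, ratings)
        if w > 0:                 # use positively correlated neighbours only
            num += w * ratings[v][item]
            den += w
    return num / den if den else 0.0

ratings = {
    "ann": {"m1": 5, "m2": 4, "m3": 1},
    "bob": {"m1": 5, "m2": 5, "m3": 1, "m4": 4},
    "eve": {"m1": 1, "m2": 2, "m3": 5, "m4": 1},
}
score = predict("ann", "m4", ratings)
# score == 4.0: bob is the only positively correlated neighbour who rated m4.
```

The study's contribution is to replace the fixed Pearson correlation with correlations learned by a neural network, optionally enriched with genre information.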
This document presents a travel package recommendation system called TRAVELMATE that uses data mining techniques. It develops a Topic-Area-Season (TAST) model to understand travel package characteristics and tourist interests. A cocktail recommendation approach is introduced that uses the TAST model output and collaborative filtering to generate customized travel package recommendations. It also extends the TAST model to the TRAST model to capture relationships between tourists in tour groups. The models and recommendation approaches are evaluated on real-world travel package data.
Linking Behavioral Patterns to Personal Attributes through Data Re-Mining (ertekg)
Download link: https://ertekprojects.com/gurdal-ertek-publications/blog/linking-behavioral-patterns-to-personal-attributes-through-data-re-mining/
A fundamental challenge in behavioral informatics is the development of methodologies and systems that can achieve its goals and tasks, including behavior pattern analysis. This study presents such a methodology, which can be converted into a decision support system through the appropriate integration of existing tools for association mining and graph visualization. The methodology enables the linking of behavioral patterns to personal attributes through the re-mining of colored association graphs that represent item associations. It is described and mathematically formalized, and is demonstrated in a case study related to the retail industry.
The document discusses credit apportionment in rule-based expert systems. It describes a framework that includes a system environment sub-model, principles of usefulness, and definitions of the credit apportionment problem. The credit apportionment problem involves estimating the inherent usefulness of rules from payoffs received. The document also reviews a bucket brigade algorithm approach to credit apportionment that uses context variables and array-valued strengths. Finally, it discusses an expert system called GAMBLE that uses a genetic algorithm and credit apportionment for branch selection advice.
Optimization of Mining Association Rule from XML Documents (IOSR Journals)
This document summarizes research on optimizing the mining of association rules from XML documents. It first provides background on association rule mining and challenges with semi-structured XML data. It then describes indexing XML elements and using an index table to extract transactions and items. The Apriori algorithm is used to generate association rules, but some rules are weak. The document proposes using an Ant Colony Optimization (ACO) algorithm to optimize the results by updating pheromone values based on rule confidence and pruning weak rules. ACO mimics how ants cooperate to find optimal paths; it is applied here to iteratively improve the generated association rule set.
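The pheromone-update-and-prune loop described above can be sketched as follows; the evaporation rate, deposit rule, and pruning threshold are illustrative assumptions, not the paper's exact parameters.

```python
# Illustrative sketch of ACO-style pruning of weak association rules: each
# rule's pheromone evaporates and is reinforced in proportion to its
# confidence; rules whose pheromone settles below a threshold are pruned.

def aco_prune(rule_confidence, rho=0.3, iterations=10, threshold=0.5):
    """rule_confidence: dict mapping rule -> confidence in [0, 1]."""
    pheromone = {rule: 1.0 for rule in rule_confidence}
    for _ in range(iterations):
        for rule, conf in rule_confidence.items():
            # Evaporate, then deposit pheromone proportional to confidence.
            pheromone[rule] = (1 - rho) * pheromone[rule] + rho * conf
    return {rule for rule, tau in pheromone.items() if tau >= threshold}

rules = {"A->B": 0.9, "B->C": 0.8, "C->D": 0.2}   # Apriori output confidences
kept = aco_prune(rules)
# The weak rule "C->D" converges toward pheromone 0.2 and is pruned.
```

Each rule's pheromone converges to its confidence, so the loop gradually filters the Apriori output down to the strong rules, mirroring how ant trails to poor paths fade.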
The Real Time Drowsiness Detection Using ARM9 (IOSR Journals)
This document describes a real-time driver drowsiness detection system using an ARM9 microcontroller. The system uses a webcam to capture images of the driver's eyes and an electrooculography (EOG) sensor to monitor visual activity. Image processing techniques are used to detect eye closure and blinking patterns. If drowsiness is detected, an alarm is activated to warn the driver. The system was tested on 15 people with 80% accuracy. The document concludes that image processing provides a non-invasive way to accurately detect drowsiness without interfering with the driver.
Design and Analysis of Triple-Band Multi Slotted Microstrip Patch Antenna (IOSR Journals)
Abstract: In this paper, a multi-slotted microstrip patch antenna design is proposed. The characteristics of the antenna are obtained in terms of return loss, gain, and bandwidth. It is observed that the proposed configuration can operate in three different frequency bands with a good amount of bandwidth: 21.12% at the 1.1 GHz band, 11.65% at 2.11 GHz, and 13.05% at the 2.76 GHz band. The resonating behavior in different frequency bands makes this antenna structure suitable for different types of applications, with an antenna gain of 6.163 dBi and an antenna efficiency of 86.82%. A substrate material with a relative permittivity of 4.2 and a loss tangent of 0.0013 is used in the proposed antenna. The design and simulation of the antenna structure were done with IE3D simulation software version 15.02. Keywords: Ground plane, Multi slotted, Patch Antenna, Triple band
Rehabilitation Process and Persons with Physical Dysfunctions (IOSR Journals)
Abstract: The main purpose of this study is to examine the rehabilitation process and persons with physical dysfunctions. To achieve this purpose, three hypotheses were formulated. An ex-post facto research design was adopted for the study. A sample of one hundred persons with disabilities was randomly selected through the simple random sampling technique, giving all respondents an equal and independent opportunity to be selected for the study. The questionnaire was the major instrument used for data collection; it was subjected to both face and content validation by an expert in measurement and evaluation, and its reliability estimate was established through the test-retest method. Pearson product-moment correlation analysis and the independent t-test were adopted to test the hypotheses at the .05 level of significance. The result of the analysis reveals that rehabilitation relates significantly to persons with orthopedic and neurological impairments. The result also reveals a significant difference between male and female disabled persons in their perception of the rehabilitation of persons with other health impairments.
Keywords: Rehabilitation process, persons, physical, dysfunctions.
Parametric Optimization of Eicher 11.10 Chassis Frame for Weight Reduction Us...IOSR Journals
Abstract: The chassis serves as a backbone supporting the body and other parts of an automobile. It should be rigid enough to withstand shock, twist, vibration and other stresses; along with strength, an important consideration in chassis design is adequate bending stiffness. The main objective of this research is to minimize the weight of the Eicher 11.10 chassis frame. The frame is made of two side members joined by a series of cross members; the number of cross members, their locations, their cross-sections and the sizes of the side and cross members become the design variables. The chassis frame is modeled in SolidWorks and analyzed using ANSYS. Because the number of parameters and levels is large, the number of candidate models is too great to analyze exhaustively, which would demand a prohibitive amount of modeling and analysis time. To overcome this problem, the Taguchi method is used along with FEA: parameter combinations for the side bar are chosen with an orthogonal array, FEA is performed on those models, and the best solution is selected. This method saves material, production cost and time.
Keywords: Parametric optimization, Chassis frame, FE analysis, FEA-DOE hybrid modeling, Weight reduction
A Utility Interactive Electricity Generation Schemes with Renewable ResourcesIOSR Journals
Abstract: In recent years, power utilities in coastal areas have had access to relatively large amounts of solar and wind energy; heavy wind can also produce large sea waves with high energy content. The electricity needs of a township or village situated in a coastal area can be partially fulfilled by installing modular mini electricity-generating units and intensified solar heat extractors in buildings; installing medium-sized windmill plants, solar-heated steam-turbine generators and sea-wave energy extraction plants could fulfill the rest of the township's electricity needs. Here we discuss the regulation of the voltage and frequency of a stand-alone fixed-pitch wind energy conversion system (WECS) based on a self-excited squirrel-cage induction machine. The characteristics of the wind turbine, the self-excited generator, and the ratings of the VSI are considered in order to determine the load range over which voltage and frequency can be regulated for a given wind speed range. Keywords: Solar panel, solar tracker, solar water heater, renewable energy, wind mills, induction generator, load management.
Decision Making and Autonomic ComputingIOSR Journals
Abstract: Autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding their intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. As widely reported in the literature, an autonomic computing framework can be seen as composed of autonomic components interacting with each other.
An autonomic system can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and a planner/adapter for exploiting policies based on self- and environment awareness.
The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.
General Terms: Autonomic systems, Self-configuration, Self-healing, Self-optimization, Self-protection.
Keywords: Know itself, reconfigure, recover from extraordinary events, expert in self-protection
Design of Adjustable Reconfigurable Wireless Single Core CORDIC based Rake Re...IOSR Journals
In a wireless communication system, transmitted signals are subject to multiple reflections, diffractions and attenuation caused by obstacles such as buildings and hills. At the receiver, multiple copies of the transmitted signal arrive at clearly distinguishable time instants and are faded by signal cancellation. The rake receiver is a technique for combining these so-called multipaths [2] using multiple correlation receivers allocated to the delay positions at which significant energy arrives, which achieves a significant improvement in the SNR of the output signal. This paper shows how the rake, including despreading and descrambling, can be replaced by a receiver implemented on a CORDIC-based hardware architecture. The performance of this receiver, in conjunction with its computational requirements, is widely adjustable and significantly better than that of the conventional rake receiver.
Performance Analysis of Selected Classifiers in User Profilingijdmtaiir
User profiles can serve as indicators of personal preferences and can be used effectively when providing personalized services. Building user profiles that capture accurate information about individuals has been a daunting task, and researchers have made several attempts to extract information from different data sources to build user profiles in different application domains. Toward this end, in this paper we employ different classification algorithms to create accurate user profiles based on information gathered from demographic data. The aim of this work is to analyze the performance of five effective classification methods, namely Bayesian Network (BN), Naïve Bayes (NB), Naïve Bayes Updateable (NBU), J48, and Decision Table (DT). Our simulation results show that, in general, J48 has the highest classification accuracy with the lowest error rate. On the other hand, the Naïve Bayes and Naïve Bayes Updateable classifiers require the least time to build the classification model.
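The comparison procedure above, train each classifier on labeled demographic rows and compare accuracies, can be sketched in pure Python. This is a minimal stand-in, not the WEKA classifiers the paper benchmarks: the learner below is a one-rule classifier (a crude analogue of a decision-tree method such as J48), and the demographic rows and labels are hypothetical.

```python
def train_one_rule(rows, labels):
    """Pick the single attribute whose value -> majority-label rule scores
    the highest training accuracy (a crude stand-in for a tree learner)."""
    best = None
    for attr in range(len(rows[0])):
        grouped = {}
        for row, label in zip(rows, labels):
            grouped.setdefault(row[attr], []).append(label)
        rule = {value: max(set(ls), key=ls.count) for value, ls in grouped.items()}
        hits = sum(rule[row[attr]] == label for row, label in zip(rows, labels))
        acc = hits / len(rows)
        if best is None or acc > best[0]:
            best = (acc, attr, rule)
    return best  # (training accuracy, attribute index, value -> label rule)

# Hypothetical demographic rows (age band, gender) with a preferred-service label.
rows = [("young", "m"), ("young", "f"), ("old", "m"), ("old", "f")]
labels = ["web", "web", "tv", "tv"]
acc, attr, rule = train_one_rule(rows, labels)
print(attr, acc)  # -> 0 1.0  (the age band alone predicts the label perfectly)
```

A real study would evaluate on held-out data and also time model construction, which is the axis on which the Naïve Bayes variants win here.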
UML MODELING AND SYSTEM ARCHITECTURE FOR AGENT BASED INFORMATION RETRIEVALijcsit
In the current technological era, there is an enormous increase in the information available on the web and in online databases. This abundance of information increases the complexity of finding relevant information. To address this challenge, improved and intelligent systems are needed for efficient search and retrieval. Intelligent agents can be used for better search and information retrieval in a document collection, since the information required by a user is scattered across a large number of databases. In this paper, object-oriented modeling for an agent-based information retrieval system is presented. The paper also discusses the framework of the agent architecture for obtaining the best combination of terms to serve as the input query to the information retrieval system, and explains the communication and cooperation among the agents. Each agent has a task to perform in information retrieval.
Identification of important features and data mining classification technique...IJECEIAES
Employee absenteeism costs organizations billions a year. Predicting employees’ absenteeism and the reasons behind their absence helps organizations reduce expenses and increase productivity. Data mining turns the vast volume of human resources data into information that can support decision-making and prediction. Although feature selection is a critical step in data mining for enhancing the efficiency of the final prediction, it is not yet known which feature selection method is better. Therefore, this paper compares the performance of three well-known feature selection methods in absenteeism prediction: relief-based, correlation-based and information-gain feature selection. In addition, it seeks the best combination of feature selection method and data mining technique for enhancing absenteeism prediction accuracy. Seven classification techniques were used as prediction models, and a cross-validation approach was utilized to assess the applied models for more realistic and reliable results. The dataset was built at a courier company in Brazil from records of absenteeism at work. In the experiments, correlation-based feature selection surpassed the other methods on the performance measurements, and the bagging classifier was the best-performing data mining technique when features were selected with correlation-based feature selection, with an accuracy rate of 92%.
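The correlation-based idea, keep the features that correlate most strongly with the class label, can be sketched in a few lines. This is a simplified univariate variant (the CFS method in the literature also penalizes inter-feature correlation); the column names in the test data are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_by_correlation(columns, label, k):
    """Keep the k features whose values correlate most strongly
    (in absolute value) with the class label."""
    ranked = sorted(columns, key=lambda name: -abs(pearson(columns[name], label)))
    return ranked[:k]
```

The selected columns would then feed the downstream classifier (bagging, in the paper's best-performing combination).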
International Journal of Engineering Research and Development (IJERD)IJERD Editor
An improved technique for ranking semantic associationst07IJwest
The primary focus of search techniques in the first generation of the Web was accessing relevant documents. Though this satisfies many user requirements, it is insufficient when the user wishes to access actionable information involving complex relationships between two given entities. Finding such complex relationships (also known as semantic associations) is especially useful in applications such as national security, pharmacy and business intelligence. The next frontier is therefore discovering relevant semantic associations between two entities present in large semantic metadata repositories. Given two entities, a huge number of semantic associations may exist between them, so these associations must be ranked in order to find the more relevant ones. For this purpose, Aleman-Meza et al. proposed a method involving six metrics, viz. context, subsumption, rarity, popularity, association length and trust. To compute the overall rank of an association, that method computes the context, subsumption, rarity and popularity values for each component of every association. However, many components appear repeatedly in many associations, so it is not necessary to compute the context, subsumption, rarity, popularity and trust values of a component anew for each association; the previously computed values can be reused when computing the overall rank. This paper proposes a method to reuse the previously computed values using a hash data structure, thus reducing the execution time. To demonstrate the effectiveness of the proposed method, experiments were conducted on the SWETO ontology. Results show that the proposed method is more efficient than the other existing methods.
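The caching idea, score each component once in a hash table and reuse it across associations, can be sketched as below. The per-component metric here is a single hypothetical function standing in for the combined context/subsumption/rarity/popularity/trust computation, and the mean-of-components aggregate is a simplification of the paper's ranking formula.

```python
class ComponentRanker:
    """Ranks associations while caching per-component metric values in a
    hash table, so components shared between associations are computed
    once and then reused. `metric_fn` is a hypothetical stand-in for the
    expensive per-component metric computation."""

    def __init__(self, metric_fn):
        self.metric_fn = metric_fn
        self.cache = {}      # component -> previously computed value
        self.misses = 0      # how many components were actually computed

    def component_score(self, component):
        if component not in self.cache:
            self.cache[component] = self.metric_fn(component)
            self.misses += 1
        return self.cache[component]

    def rank(self, associations):
        """Overall rank = mean component score (a simplified aggregate),
        highest-scoring association first."""
        scored = [(sum(map(self.component_score, assoc)) / len(assoc), assoc)
                  for assoc in associations]
        return sorted(scored, reverse=True)
```

With two associations sharing the entity "alice", only five distinct components are ever scored although six component references occur, which is exactly the saving the hash structure buys.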
Biometric Identification and Authentication Providence using Fingerprint for ...IJECEIAES
The rise in recent security incidents in cloud computing makes securing data a pressing challenge. To address this problem, this paper presents mobile biometric authentication in cloud computing, integrating mobile devices with the cloud. Since mobile cloud computing is popular among mobile users, biometric authentication is used to enhance security. The paper examines how mobile cloud computing (MCC) addresses this security issue with a fingerprint biometric authentication model. From the fingerprint biometric, a secret code is generated using an entropy value, which enables a person to request access to the data on a desktop computer. When the person requests access from the authorized user via Bluetooth on a mobile device, the authorized user grants access through the fingerprint secret code. Finally, the fingerprint is verified against the database on the desktop computer; if it matches, the requesting person can access the computer.
Automatic detection of online abuse and analysis of problematic users in wiki...Melissa Moody
For their 2019 capstone project, DSI Master of Science in Data Science students Charu Rawat, Arnab Sarkar, and Sameer Singh proposed a framework to understand and detect online abuse in the English Wikipedia community.
Rawat, Sarkar, and Singh received the award for Best Paper in the Data Science for Society category at the 2019 Systems & Information Design Symposium (SIEDS). In "Automatic Detection of Online Abuse and Analysis of Problematic Users in Wikipedia," the team presented an analysis of user misconduct in Wikipedia and a system for the automated early detection of inappropriate behavior.
The document proposes an improved clustering algorithm for social network analysis. It combines BSP (Business System Planning) clustering with Principal Component Analysis (PCA) to group social network objects into classes based on their links and attributes. Specifically, it applies PCA before BSP clustering to reduce the dimensionality of the social network data and retain only the most important variables for clustering. This improves the BSP clustering results by focusing on the key information in the social network.
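The dimensionality-reduction step described above can be sketched in pure Python: find the leading principal component of the mean-centred attribute data by power iteration, then project each object onto it. The BSP clustering itself is not reproduced here; this only shows the PCA front end, and the data layout (one row of numeric attributes per social-network object) is an assumption.

```python
def top_component(rows):
    """Leading principal component of mean-centred data, found by power
    iteration on the (unnormalised) covariance operator."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(100):
        proj = [sum(xi[j] * v[j] for j in range(d)) for xi in x]          # X v
        w = [sum(proj[i] * x[i][k] for i in range(n)) for k in range(d)]  # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v, means

def project(rows, v, means):
    """Coordinate of each object along the leading component -- the reduced
    representation that the clustering step would receive."""
    d = len(v)
    return [sum((r[j] - means[j]) * v[j] for j in range(d)) for r in rows]
```

Retaining only the top few components keeps the variables carrying most of the variance, which is the "key information" the combined method clusters on.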
A Study of Neural Network Learning-Based Recommender Systemtheijes
This document summarizes a study that proposes a neural network learning model for recommender systems. The study aims to improve collaborative filtering methods by estimating user preferences based on learned correlations between users through a neural network. The proposed method was tested on MovieLens data and showed improved precision of 6.7% compared to other techniques. Additionally, the study found that precision and recall improved further, by 3.5% and 2.4% respectively, when including film genre information in the neural network learning. The document concludes the proposed technique can utilize diverse data sources and perform well regardless of data complexity compared to other recommender system methods.
This document presents a travel package recommendation system called TRAVELMATE that uses data mining techniques. It develops a Topic-Area-Season (TAST) model to understand travel package characteristics and tourist interests. A cocktail recommendation approach is introduced that uses the TAST model output and collaborative filtering to generate customized travel package recommendations. It also extends the TAST model to the TRAST model to capture relationships between tourists in tour groups. The models and recommendation approaches are evaluated on real-world travel package data.
Linking Behavioral Patterns to Personal Attributes through Data Re-Miningertekg
Download Link >https://ertekprojects.com/gurdal-ertek-publications/blog/linking-behavioral-patterns-to-personal-attributes-through-data-re-mining/
A fundamental challenge in behavioral informatics is the development of methodologies and systems that can achieve its goals and tasks, including behavior pattern analysis. This study presents such a methodology, which can be converted into a decision support system, by the appropriate integration of existing tools for association mining and graph visualization. The methodology enables the linking of behavioral patterns to personal attributes through the re-mining of colored association graphs that represent item associations. The methodology is described and mathematically formalized, and is demonstrated in a case study related to the retail industry.
The document discusses credit apportionment in rule-based expert systems. It describes a framework that includes a system environment sub-model, principles of usefulness, and definitions of the credit apportionment problem. The credit apportionment problem involves estimating the inherent usefulness of rules from payoffs received. The document also reviews a bucket brigade algorithm approach to credit apportionment that uses context variables and array-valued strengths. Finally, it discusses an expert system called GAMBLE that uses a genetic algorithm and credit apportionment for branch selection advice.
Optimization of Mining Association Rule from XML DocumentsIOSR Journals
This document summarizes research on optimizing the mining of association rules from XML documents. It first provides background on association rule mining and challenges with semi-structured XML data. It then describes indexing XML elements and using an index table to extract transactions and items. The Apriori algorithm is used to generate association rules, but some rules are weak. The document proposes using an Ant Colony Optimization (ACO) algorithm to optimize the results by updating pheromone values based on rule confidence and pruning weak rules. ACO mimics how ants cooperate to find optimal paths; it is applied here to iteratively improve the generated association rule set.
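The ACO-style pruning described above can be sketched as a pheromone update over the mined rules: each rule's pheromone evaporates and is reinforced in proportion to the rule's confidence, so low-confidence rules decay below a threshold and are dropped. This is a simplified single-ant variant under assumed parameter values, not the paper's full algorithm.

```python
def aco_prune(rules, rounds=20, evaporation=0.3, threshold=0.5):
    """Iteratively update a pheromone value per rule (keyed by rule name,
    valued by confidence) and prune rules whose pheromone settles below
    the threshold."""
    pheromone = {name: 1.0 for name in rules}
    for _ in range(rounds):
        for name, confidence in rules.items():
            pheromone[name] = ((1 - evaporation) * pheromone[name]
                               + evaporation * confidence)
    return {name for name in rules if pheromone[name] >= threshold}
```

Because the update is a convex combination, each pheromone converges to the rule's confidence, so after enough rounds the surviving set is exactly the rules whose confidence clears the threshold.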
An Enhanced Biometric System for Personal AuthenticationIOSR Journals
Palm vein authentication is a new biometric method utilizing the vein patterns inside one's palm for personal identity verification. Palm vein patterns are different for each person, and as they are hidden underneath the skin's surface, forgery is extremely difficult. Infrared light is used to capture an image of the palm that shows the vein patterns, which have various widths and brightnesses that change over time as a result of fluctuations in the amount of blood in the veins, depending on temperature, physical condition, etc. To robustly extract the precise details of the depicted veins, we developed an anisotropic technique operating on cross-sectional profiles of the vein image. This method extracts the centrelines of the veins consistently, without being affected by fluctuations in vein width and brightness, so its pattern matching is highly accurate. This paper discusses the origins, feature extraction, technology and applications of palm vein authentication. The proposed system includes: 1) infrared palm image capture; 2) detection of the region of interest; 3) palm vein extraction by anisotropic filtering; and 4) matching. The experimental results demonstrate that the recognition rate using the palm vein is good.
Design of Ball Screw Mechanism for Retro Fit of External Grinding MachineIOSR Journals
Abstract: To convert the existing grinding machine into a good working machine, a ball screw mechanism is designed and incorporated into the machine through a retrofitting process. A grinding machine removes material from the workpiece by abrasion, which can generate a substantial amount of heat, so a coolant system is incorporated to cool the workpiece and keep it within tolerance. Grinding practice is a large and diverse area of manufacturing and toolmaking; it can rough out large volumes of metal quite rapidly and is usually better suited to the machining of hard materials. Cylindrical grinding, also called center-type grinding, is used to machine the cylindrical surfaces and shoulders of the workpiece; its five types are outside-diameter (OD) grinding, inside-diameter (ID) grinding, plunge grinding, creep-feed grinding and centerless grinding. It is used in industry for grinding the nozzle body. The grinding machine can be converted to an automatic machine using PLC controllers, one of the latest technologies. The manually operated grinding machine has inaccuracies and disadvantages compared to modern CNC grinding machines; based on a case study of both manual and CNC grinding machines, the manual machine is converted into an automatic machine for better accuracy and efficiency. The main replacements are the hydraulic cylinder and stepper motor, which give way to a ball screw mechanism and a servo motor.
An Efficient implementation of PKI architecture based Digital Signature using...IOSR Journals
Abstract: Digital signatures are widely used to detect unauthorized modification of data and to authenticate the identity of the signatory; they are essential for secure transactions over unsecured, open networks. Digital signature schemes are mostly used in cryptographic protocols to provide services such as entity authentication, authenticated key transport and key agreement. The PKI (Public Key Infrastructure) based digital signature architecture is built on the RSA algorithm and secure hash functions (MD5 and SHA variants). The RSA digital signature algorithm is an asymmetric cryptographic method whose security rests on the difficulty of factorization; a hash function is applied to the message to yield a fixed-size message digest. This paper explores PKI-based digital signatures, presents an efficient implementation, and discusses various issues associated with signature schemes based on RSA and hash functions. The results show that signing and verification are much faster in the developed application.
Keywords: Digital Signature, MD5, RSA, SHA1, SHA2
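The hash-then-sign flow the abstract describes can be sketched with textbook RSA. This is a toy illustration only: the primes are deliberately tiny, the padding-free scheme is insecure, and SHA-256 is substituted here for the paper's MD5/SHA variants.

```python
import hashlib

# Textbook RSA with toy parameters -- for illustration only; real systems
# use 2048-bit keys and padded signature schemes, never primes this small.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def digest(message):
    """Hash-then-reduce: SHA-256 digest of the message taken modulo n."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    return pow(digest(message), d, n)                 # signer uses private d

def verify(message, signature):
    return pow(signature, e, n) == digest(message)    # verifier uses public (e, n)
```

Signing raises the digest to the private exponent; verification undoes it with the public exponent, so any tampering with the signature (or, with overwhelming probability, the message) breaks the equality.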
Revisiting A Panicked Securitization MarketIOSR Journals
With the passage of the Finance Bill 2013 on April 30 in the Lok Sabha, proposing to levy a 30% distribution tax on investors in securitization deals through special purpose vehicles, there is a stir in the securitization market. The principal investors (banks) were paying tax on their net income from securitization transactions through SPVs; under the new Finance Bill, they will be taxed on gross income. The new securitization guidelines issued in May 2012 had already dipped the volume of fresh issues to Rs. 28,400 crore from Rs. 44,500 crore in the preceding fiscal.
Establishment of A Small Scale Non-Woven Tissue Processing IndustryIOSR Journals
Abstract: A study was made to establish a site for a tissue manufacturing industry, proposed to be located in the northern part of Nigeria. It will be situated on a 615,204 square metre site, to be acquired prior to the erection of buildings and other infrastructure. The major raw materials are obsolete/waste paper, cartons and chipboard. The machines to be used are pulper machines, chester machines, tissue recycling machines, culling machines and slicing machines. The initial capital outlay for the plant is N75 million. The product will generate total revenue of N97 million over a period of three years, with net fixed assets of N54.5 million. Thus, the project is feasible even at an annual discount rate of 10%.
A Novel Interface to a Web Crawler using VB.NET TechnologyIOSR Journals
This document describes the design of a web crawler interface created using VB.NET technology. It discusses the components and architecture of web crawlers, including the seed URLs, frontier, parser, and performance metrics used to evaluate crawlers. The high-level design of the crawler simulator is presented as an algorithm, and screenshots of the VB.NET user interface for the crawler are shown. The crawler was tested on the website www.cdlu.edu.in using different crawling algorithms like breadth-first and best-first, and the results were stored in an MS Access database.
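The crawling strategies the simulator compares, breadth-first versus best-first, differ only in how the frontier is ordered. A minimal sketch over an in-memory link graph (the graph, the URL names and the relevance score are hypothetical; the real simulator fetches live pages from the tested site):

```python
from collections import deque
import heapq

def crawl(seeds, links, limit, strategy="breadth-first", score=None):
    """Simulated crawl: `links` maps a URL to the URLs found on that page.
    Breadth-first uses a FIFO frontier; best-first pops the highest-scoring
    unvisited URL (`score` must be supplied for best-first)."""
    visited, order = set(), []
    if strategy == "breadth-first":
        frontier = deque(seeds)
        pop, push = frontier.popleft, frontier.append
    else:  # best-first: max-heap keyed on the (negated) relevance score
        frontier = [(-score(u), u) for u in seeds]
        heapq.heapify(frontier)
        pop = lambda: heapq.heappop(frontier)[1]
        push = lambda u: heapq.heappush(frontier, (-score(u), u))
    while frontier and len(order) < limit:
        url = pop()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for nxt in links.get(url, []):
            if nxt not in visited:
                push(nxt)
    return order
```

On the same graph the two strategies visit the same pages but in a different order, which is exactly what the simulator's performance metrics measure.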
A comparative analysis on QoS multicast routing protocols in MANETs (IOSR Journals)
Abstract: Simultaneous transmission of data from one sender to multiple receivers is called multicasting. Several widely used applications require multicasting, at least at the logical level; examples include audio/video teleconferencing, real-time video streaming and the maintenance of distributed databases. In many cases it is advantageous to implement multicasting at the level of the routing algorithm (other approaches would be one-to-all unicast or implementing multicasting at the application layer). In this paper we present a comparative analysis of various multicast routing protocols in ad hoc networks.
Keywords: multicasting, multicast protocols, dynamic core, performance evaluation, QoS parameters
Particulate Sintering of Iron Ore and Empirical Analysis of Sintering Time Ba... (IOSR Journals)
Particulate sintering of iron ore has been carried out using the necessary ingredients. Empirical analysis of the sintering time based on the coke breeze input concentration and ignition temperature was also successfully carried out through first-principles application of a derived model, which functioned as an evaluative tool. The derived model,
S = (√T)^0.95 + 0.0012α,
indicates that between ignition temperature and coke breeze input, sintering time is more significantly affected by the coke breeze input concentration. This is based on the higher correlation it shows with sintering time compared to the applied ignition temperature, all other process parameters being constant. The validity of the model was rooted in the core expression S − Kα ≈ (√T)^N, where both sides of the expression are approximately equal. Sintering time per unit rise in the operated ignition temperature as obtained from experiment, derived model and regression model was evaluated as 0.0169, 0.0128 and 0.0159 min/°C respectively. Similarly, sintering time per unit coke breeze input concentration as obtained from experiment, derived model and regression model was evaluated as 4.0, 3.0183 and 3.7537 min/% respectively, indicating significant proximate agreement and validity of the model. The standard error (STEYX) incurred in predicting sintering time for each value of the ignition temperature and coke breeze input concentration considered, as obtained from the experiment, derived model and regression model, is 1.6646, 0.7678 and 2.98×10⁻⁵% as well as 2.2128, 1.0264 and 1.2379% respectively. The maximum deviation of model-predicted results from the corresponding experimental values was less than 11%.
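The derived model quoted in the abstract can be evaluated directly; the following is a minimal sketch (variable names are assumptions: T is ignition temperature in °C, alpha is coke breeze input in %):

```python
import math

def sintering_time(T, alpha):
    """Sintering time S (minutes) from the abstract's derived model
    S = (sqrt(T))**0.95 + 0.0012 * alpha, where T is the ignition
    temperature (deg C) and alpha the coke breeze input (%)."""
    return math.sqrt(T) ** 0.95 + 0.0012 * alpha

s = sintering_time(1000.0, 15.0)  # evaluate S for one operating point
```

This only reproduces the closed-form expression; the experimental and regression comparisons reported in the abstract are not reconstructed here.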
Error Reduction of Modified Booth Multipliers in MAC Unit (IOSR Journals)
Abstract: The fixed-width multiplier is attractive for many multimedia and digital signal processing systems. This paper proposes reducing the truncation error incurred when a 16-bit product is truncated to its 8 MSBs, using a simple error reduction circuit. The fixed-width modified Booth multiplier is used to minimize the partial product matrix of Booth multiplication. Multiplication is the binary mathematical operation of scaling one number by another. The goal is a high-accuracy, low-power, low-area MAC unit design, compared against the Wallace tree multiplier. The system is designed in VHDL (Very High speed integrated circuit Hardware Description Language). Index Terms: Multiplier and accumulator, most significant bits, modified Booth multiplier, error reduction circuit, fixed-width multiplier
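To make the truncation error concrete, here is a small Python sketch of what a fixed-width multiplier discards. It illustrates the error that an error-reduction circuit would compensate, not the paper's circuit itself; operand values are illustrative.

```python
def fixed_width_product(a, b, width=8):
    """Multiply two `width`-bit unsigned operands and keep only the
    `width` most significant bits of the full 2*width-bit product,
    as a fixed-width multiplier does; also report the value lost."""
    full = a * b                          # exact 2*width-bit product
    truncated = full >> width             # only the upper bits survive
    error = full - (truncated << width)   # truncation error to be compensated
    return truncated, error

kept, lost = fixed_width_product(200, 150)   # 200*150 = 30000 -> kept 117, lost 48
```

The error is always below 2^width; an error-reduction circuit estimates part of it from the discarded partial products and adds that estimate back before truncation.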
Text Independent Speaker Identification Using IMFCC Integrated With ICA (IOSR Journals)
Abstract: Over the years, much research has been reported in the literature regarding text-independent speaker identification using MFC coefficients. MFCC is one of the best methods modeled on the human auditory system. Murali et al. (2011) [1] developed a text-independent speaker identification system using MFC coefficients that follows a Generalized Gaussian mixture model. Because of its filter bank structure, MFCC captures the characteristics of information more effectively in the lower frequency region than in the higher region; as a result, valuable information in the high frequency region may be lost. In this paper we rectify this problem by retrieving the information in the high frequency region by inverting the Mel bank structure. The dimensionality and dependency of the above features were reduced by integrating with ICA. A text-independent speaker identification system is then developed using a Generalized Gaussian mixture model. By experimentation, it was observed that this model outperforms the earlier existing models.
Keywords: Independent Component Analysis; Generalized Gaussian mixture model; Inverted Mel frequency cepstral coefficients; Bayesian classifier; EM algorithm.
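"Inverting the Mel bank structure" can be illustrated with filter center frequencies. One common IMFCC construction mirrors the Mel-spaced centers about the band midpoint so resolution becomes densest at high frequencies; this mirroring is an assumption here, not necessarily the paper's exact construction:

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def filter_centers(fmin, fmax, n_filters, inverted=False):
    """Center frequencies (Hz) of a Mel filter bank. The regular bank is
    densest at LOW frequencies; the inverted bank mirrors each center
    about the band midpoint, becoming densest at HIGH frequencies."""
    lo, hi = hz_to_mel(fmin), hz_to_mel(fmax)
    centers = [mel_to_hz(lo + (hi - lo) * (i + 1) / (n_filters + 1))
               for i in range(n_filters)]
    if inverted:
        centers = sorted(fmin + fmax - c for c in centers)
    return centers

mel_bank = filter_centers(0.0, 8000.0, 10)            # gaps widen with frequency
imfcc_bank = filter_centers(0.0, 8000.0, 10, inverted=True)  # gaps narrow with frequency
```

Triangular filters placed at these centers, followed by a log and DCT, would yield MFCC and IMFCC features respectively.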
A Study on Optimization using Stochastic Linear Programming (IOSR Journals)
A Self Help Group (SHG) is a group of the rural poor who have organized themselves into a group for the eradication of poverty. The members of the group belong to families below the poverty line. This helps families of occupational groups like agricultural labourers, marginal farmers, designers and artisans marginally above the poverty line, or who may have been excluded from the Below Poverty Line (BPL) list, to become members of the Self Help Group. A self help group consists of two categories: one named magalier thittam and the other non-magalier thittam. The factors of the Self Help Group categories are random in nature. These factors can be handled using a stochastic linear programming problem (SLPP). Here the data is collected from Tuticorin district. Optimization techniques such as two-stage programming and chance constrained programming can be adopted for the SLPP. In this paper chance constrained programming (CCP) is used to obtain the optimal solution.
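The core move in chance constrained programming is replacing a probabilistic constraint by a deterministic equivalent. A minimal sketch under the standard normality assumption follows; the confidence level, mean and standard deviation are illustrative, not the paper's SHG data:

```python
# A chance constraint  P(a.x <= b) >= p  with random b ~ Normal(mu_b, sigma_b^2)
# has the deterministic equivalent  a.x <= mu_b - z_p * sigma_b,
# where z_p is the standard-normal quantile at confidence level p.

Z_95 = 1.645  # standard-normal 95% quantile (assumed confidence level)

def deterministic_rhs(mu_b, sigma_b, z=Z_95):
    """Tightened right-hand side that replaces the random resource b."""
    return mu_b - z * sigma_b

rhs = deterministic_rhs(100.0, 10.0)  # resource with mean 100, std 10 -> 83.55
```

The resulting problem is an ordinary linear program with the tightened right-hand sides, solvable by any LP method.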
A Review of BSS Based Digital Image Watermarking and Extraction Methods (IOSR Journals)
Abstract: The field of signal processing has witnessed the strong emergence of a new technique, Blind Signal Processing (BSP), which rests on a sound theoretical foundation. An offshoot of BSP is known as Blind Source Separation (BSS). These digital signal processing techniques have wide and varied potential applications. The term blind indicates that both the source signal and the mixing procedure are unknown. One of the more interesting applications of BSS is in the field of image data security/authentication, where digital watermarking is proposed. Watermarking is a promising technique to help protect data security and intellectual property rights. A plethora of digital image watermarking methods are surveyed and discussed here with their features and limitations. The literature survey is presented in two major categories: digital image watermarking methods, and BSS-based techniques in digital image watermarking and extraction. Keywords – BSP, BSS, Mixing Coefficient, Digital Image Watermarking, Watermark Extraction.
Towards to an Agent-Oriented Modeling and Evaluating Approach for Vehicular S... (Zac Darcy)
1) The document proposes an agent-oriented meta-model for modeling and evaluating vehicular systems security.
2) It extends the existing Extended Gaia meta-model to build a new meta-model suited for modeling transportation problems.
3) The new meta-model adds concepts like functional requirement, non-functional requirement, agent model, and organization model to allow modeling of transportation system requirements and behaviors.
Towards to an agent oriented modeling and evaluating approach for vehicular s... (Zac Darcy)
Agent technology is a software paradigm that permits the implementation of large and complex distributed applications. In order to assist the development of multi-agent systems, agent-oriented methodologies (AOM) have been created in recent years to support the modeling of increasingly complex applications in many different domains. By defining in a non-ambiguous way the concepts used in a specific domain, meta-modeling may represent a step towards interoperability. In the transport domain, this paper proposes an agent-oriented meta-model that provides rigorous concepts for conducting transportation system problem modeling. The aim is to allow analysts to produce a transportation system model that precisely captures the knowledge of an organization, so that an agent-oriented requirements specification of the system-to-be and its operational corporate environment can be derived from it. To this end, we extend and adapt an existing meta-model, Extended Gaia, to build a meta-model and an adequate model for transportation problems. Our new agent-oriented meta-model aims to allow the analyst to model and specify any transportation system as a multi-agent system. Based on the proposed meta-model, we propose an approach for modeling and evaluating the transportation system based on Stochastic Activity Network (SAN) components. The proposed process consists of seven steps, from the "Recognition" phase to the "Quantitative Analysis" phase. These analyses are based on dependability models built using the Stochastic Activity Network formalism. A real case study of an urban public transportation system has been conducted to show the benefits of the approach.
Software requirement analysis enhancements by prioritizing requirement attributes using rank based agents
Ashok Kumar, Professor, Department of Computer Science and Applications, Kurukshetra University, Kurukshetra, India
Vinay Goyal, Assistant Professor, Department of MCA, Panipat Institute of Engineering & Technology, Panipat, India
Abstract- This paper proposes a new technique in the domain of agent-oriented software engineering. Agents work in autonomous environments and can respond to agent triggers. Agents can be very useful in the requirement analysis phase of the software development process, where they can react to requirement triggers and produce aligned notations to identify the best possible design solution from existing designs. Agents help in the design generation process, which includes the use of artificial intelligence. The results produced clearly show improvements over conventional reusability principles and ideas.
1. INTRODUCTION
Agent oriented software engineering is a new and rapidly growing technique. Software development industries have invested huge efforts in this domain, and the results published by many of them are very exciting [1]. The autonomous and reactive nature of agents makes it possible for designers to think in terms of real-life problem-solving scenarios, where the socio-logical [2] characteristics of agents automatically activate timely checks for any problem in the domain and solve it using agents.
Agents are very helpful in the software development life cycle. Experiments carried out in the past have shown [2][9][10] improvements in the SDLC, and the conclusion is that agents can be very helpful in cost and effort minimization if tuned properly. Fine-tuning of agents and an SDLC process-state plug-in for two-way communication results in an agent-based software development process where intelligent agents take decisions for better time and resource utilization.
Agents are capable of storing historic data, which helps in decision-making using a heuristic-based approach.
This paper discusses the details of one such experiment conducted to improve the requirement analysis process with the help of proactive agents. Agents automatically sense the requirement environment and propose their own checklist of important requirements. This is a sort of intelligent assistance with domain heuristics, which helps cover all possible requirement entities of the problem domain.
2. RELATED WORK
Michael Wooldridge, Nicholas R. Jennings and David Kinny describe the analysis process using an agent-oriented approach [1]. They considered the GAIA notations. The analysis stages of Gaia are:
1) Identify the agent's roles in the system, which typically correspond to identify ro ...
Agent-SSSN: a strategic scanning system network based on multiagent intellige... (IJERA Editor)
The document describes an Agent-SSSN system that uses a multi-agent approach and ontology to develop a strategic scanning system for business intelligence. The system aims to integrate expert knowledge through cooperative information gathering from the web. It uses various agent roles like information retrieval agents, mediator agents, and notification agents. Ontologies are used to represent shared domain concepts and expert knowledge to enable knowledge sharing between agents. The system is modeled using the O-MaSE methodology, with goals, roles, and capabilities defined for each agent.
The Evaluation of Generic Architecture for Information Availability (GAIA) an... (inventionjournals)
Along with the growing interest in agent applications, there has been an increasing number of agent-oriented software engineering methodologies proposed in recent years. These methodologies were developed and specially tailored to the characteristics of agents. They can provide methods, models, techniques and tools so that the development of agent-based systems can be carried out in a formal and systematic way. The goal of this paper is to understand the relationship between two key agent-oriented methodologies: Gaia and MaSE. More specifically, we evaluate and compare these methodologies by performing a feature analysis on them, evaluating the strengths and weaknesses of each participating methodology using an attribute-based evaluation framework. This evaluation framework addresses several areas of an agent-oriented methodology: concepts, modeling language, process and pragmatics.
IMPACT OF DIFFERENT SELECTION STRATEGIES ON PERFORMANCE OF GA BASED INFORMATI... (ijcsa)
As information proliferates, searching for relevant information has become a primary task. Searching, or Information Retrieval (IR), aims to help users organise as well as retrieve those documents from a documentary collection which are most likely to satisfy the user's information needs. An optimal Information Retrieval System (IRS) is one which retrieves only those documents from the document database which are pertinent to the user's information needs, while excluding documents that are not relevant. Genetic Algorithms are characterised by a higher likelihood of finding good solutions to large and complex IR optimisation problems. The performance of a Genetic Algorithm depends upon the choice of the underlying operators used, namely selection, crossover and mutation. A GA-based algorithm, IRIGA (Information Retrieval Improvement using Genetic Algorithm), is developed to improve the performance of an Information Retrieval System. This paper presents a comparison of the performance of IRIGA when different selection methods are used. The results are analysed by conducting experiments keeping the rest of the GA parameters constant and varying only the selection strategy.
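Two of the selection strategies such a comparison typically covers can be sketched briefly. The toy population of candidate document rankings and all names below are illustrative, not IRIGA's actual representation:

```python
import random

def roulette_selection(population, fitness, rng):
    """Fitness-proportionate (roulette-wheel) selection."""
    pick = rng.uniform(0.0, sum(fitness))
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:          # wheel stops inside this slice
            return individual
    return population[-1]

def tournament_selection(population, fitness, rng, k=2):
    """k-way tournament: sample k distinct individuals, keep the fittest."""
    contenders = rng.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitness[i])]

rng = random.Random(0)
pop = ["ranking_A", "ranking_B", "ranking_C"]   # candidate document rankings
fit = [0.1, 0.7, 0.2]                           # e.g. precision of each ranking
parent = tournament_selection(pop, fit, rng)
```

Tournament selection applies more consistent selection pressure than the roulette wheel when fitness values are close, which is one reason the choice of strategy shifts GA performance.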
This document describes a proposed high-interaction multi-agent system model for automatic prediction. The model uses five agents working together: a preprocessing agent prepares the data, three learning agents train on the data using different machine learning algorithms (Random Forest, Naive Bayes, KNN), and a decision-making agent integrates the results to make a prediction. The agents work sequentially, with the preprocessing agent passing data to the learning agents, which build models and pass results to the decision-making agent. The goal is for the agents to collaborate to make more accurate predictions than single models.
Multiagent-based methodologies have become an important subject of research in advanced software engineering. Several methodologies have been proposed, as a theoretical approach, to facilitate and support the development of complex distributed systems. An important question when facing the construction of agent applications is deciding which methodology to follow. Trying to answer this question, a framework with several criteria is applied in this paper for the comparative analysis of existing multiagent system methodologies. The results of the comparison of two of them conclude that those methodologies have not reached a sufficient maturity level to be used by the software industry. The framework has also proved its utility for the evaluation of any kind of multiagent-based software engineering methodology.
This document presents ATALK, a decentralized agent platform that supports dynamic role deployment and relocation in open multi-agent systems. ATALK represents roles as service-oriented components that can be deployed and executed at runtime. It provides a compositional agent architecture that allows agents to dynamically integrate roles. ATALK also enables agents to hand over roles and their runtime states to other agents, allowing roles to be relocated without disrupting the organization. The document evaluates ATALK through qualitative analysis and simulations showing it achieves high reconfigurability with low overhead.
Improving the quality of information in strategic scanning system network app... (ijaia)
Integrating Business Intelligence (BI) processes in an information system requires a form of strategic scanning system for which information is the main source of efficiency and decision support. A strategic scanning system network process is primarily a cooperative approach to sharing knowledge in which actors are "producers" of information. The dynamics of the actors' interactions allow the gradual building of shared knowledge. This paper proposes a Multi Agent System (MAS) architecture that facilitates the integration of a strategic scanning system network process into the information system, to derive relevant information from simple information while ensuring information quality and safety. In particular, this approach is geared towards supporting system properties specially focused on a cooperative multi-agent system. Finally, it gives an overview of the implementation of a prototype of the proposed solution, limited for the moment to the integration of the processes most used in the majority of information systems.
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
The document discusses managing order batching issues in supply chain management using a multi-agent system. It first provides background on multi-agent systems and their advantages over centralized systems, such as being able to solve problems that are too large or complex for a single agent. It then discusses how a multi-agent system can be used to handle the order batching problem in supply chain management, which is a major cause of the bullwhip effect that negatively impacts supply chain performance. The proposed system uses intelligent agents to maintain information related to order batching issues and make decisions to manage order batching.
A Brief Survey on Recommendation System for a Gradient Classifier based Inade... (Christo Ananth)
Recommender systems (RS) are a common and successful feature of modern internet services. A service that connects users to tasks is known as a recommendation system: it makes it simpler for customers and project providers to identify and receive projects and other solutions. A recommendation system is a strong device that may be advantageous to a business or organisation. This study explores whether recommender systems may be utilised to solve cold-start and data-sparsity issues, as well as delays and business productivity. Recommender systems make it easier and more convenient for people to get information, and over the years several different methods have been created. We employ a potent predictive regression method known as the slope classifier algorithm, which minimises a loss function by repeatedly choosing a function that points in the direction of the weak hypothesis or the negative gradient. A group that is experiencing trouble handling cold starts and data sparsity will send enormous datasets to the proposed systems team. The users have to finish their job by the deadline in order to overcome these challenges.
This document presents a framework for reusing existing software agents through ontological engineering. The framework includes components like a user interface agent, query processor, mapping agent, transfer agent, wrapper agent, and remote agents containing ontologies. The query processor reformulates the user's query, the mapping agent identifies relevant ontologies, and the transfer agent sends the query to remote agents. The remote agents provide ontologies as output, which are then integrated/merged and presented back to the user interface agent. The goal is to enable reuse of heterogeneous agents across different development environments through a standardized ontology representation.
Scalable Action Mining Hybrid Method for Enhanced User Emotions in Education ... (IJCI JOURNAL)
The education sector, business field, medical domain and social media generate huge amounts of data in a single day. Mining this data can provide many meaningful insights; users in these domains collect and cherish the data as they hope to find the patterns, trends and golden nuggets that help them accomplish their goals. For example: how to improve student learning; how to increase business profitability; how to improve user experience in social media; and how to heal patients and assist hospital administrators. Action Rule Mining mines actionable patterns hidden in various datasets. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. There are two major frameworks in the literature of Action Rule mining: the Rule-Based method, where the extraction of Action Rules depends on a pre-processing step of classification rule discovery, and the Object-Based method, which extracts the Action Rules directly from the database without the use of classification rules. The hybrid Action Rule mining approach combines both these frameworks and generates a complete set of Action Rules, showing significant improvement in computational performance over the Rule-Based and Object-Based approaches. In this work we propose a novel Modified Hybrid Action Rule method with Partition Threshold Rho, which further improves the computational performance on large datasets.
EFFECTIVENESS OF E-RKAP SYSTEM IMPLEMENTATION WITH HUMAN, ORGANIZING, TECHNOL... (AJHSSR Journal)
ABSTRACT: This study aims to test and prove the influence of human, organizing and technology factors on the effectiveness of the E-RKAP system. The hypotheses of this study are that System Use, User Satisfaction, Structure, Environment, System Quality, Information Quality and Service Quality each affect the effectiveness of the E-RKAP system. The population of this study was company employees. The sampling technique used was purposive sampling, which obtained a sample of 90 people. This study shows that system use, user satisfaction, structure, environment, information quality and service quality have no influence on the effectiveness of the E-RKAP system, while system quality does have an influence.
KEYWORDS: Effectiveness, E-RKAP system, HOT-Fit.
This document provides an overview of network organizations through three frameworks - as a computer, economy, and society. It discusses key characteristics of network organizations such as permeable boundaries, less hierarchical management, specialized resources, project-driven tasks, and the importance of trust. The document contrasts networks with hierarchies and markets and discusses different types of networks. The sections analyze network organizations through each framework/metaphor, examining factors like decision processes, rational agency, and human behavior within networks.
Effective Feature Selection for Feature Possessing Group Structure (rahulmonikasharma)
This document proposes a new method called efficient group variable selection (EGVS) for feature selection when features have a group structure. EGVS has two stages: 1) within-group variable selection evaluates each feature individually to select discriminative features within each group. 2) Between-group variable selection re-evaluates all features to remove redundancy and obtain an optimal subset by considering relationships between groups. The method is demonstrated on benchmark datasets, showing it increases classification accuracy by leveraging the group structure during feature selection.
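The two-stage procedure described above can be sketched as follows. The function names, scoring interface and redundancy threshold are illustrative assumptions, not the paper's exact criteria:

```python
def egvs(groups, score, redundancy, within_k=2, red_thresh=0.9):
    """Two-stage group feature selection in the spirit of the EGVS summary.
    Stage 1 keeps the `within_k` best-scoring features inside each group;
    stage 2 re-examines the survivors across groups and drops any feature
    highly redundant with one already selected."""
    stage1 = []
    for group in groups:                               # within-group selection
        stage1.extend(sorted(group, key=score, reverse=True)[:within_k])
    selected = []
    for f in sorted(stage1, key=score, reverse=True):  # between-group selection
        if all(redundancy(f, s) < red_thresh for s in selected):
            selected.append(f)
    return selected
```

Here `score` might be a per-feature discriminative measure (e.g. a t-statistic) and `redundancy` a pairwise correlation; both are pluggable so the group structure, not the specific measures, drives the sketch.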
This document provides a technical review of secure banking using RSA and AES encryption methodologies. It discusses how RSA and AES are commonly used encryption standards for secure data transmission between ATMs and bank servers. The document first provides background on ATM security measures and risks of attacks. It then reviews related work analyzing encryption techniques. The document proposes using a one-time password in addition to a PIN for ATM authentication. It concludes that implementing encryption standards like RSA and AES can make transactions more secure and build trust in online banking.
This document analyzes the performance of various modulation schemes for achieving energy efficient communication over fading channels in wireless sensor networks. It finds that for long transmission distances, low-order modulations like BPSK are optimal due to their lower SNR requirements. However, as transmission distance decreases, higher-order modulations like 16-QAM and 64-QAM become more optimal since they can transmit more bits per symbol, outweighing their higher SNR needs. Simulations show lifetime extensions up to 550% are possible in short-range networks by using higher-order modulations instead of just BPSK. The optimal modulation depends on transmission distance and balancing the energy used by electronic components versus power amplifiers.
This document provides a review of mobility management techniques in vehicular ad hoc networks (VANETs). It discusses three modes of communication in VANETs: vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and hybrid vehicle (HV) communication. For each communication mode, different mobility management schemes are required due to their unique characteristics. The document also discusses mobility management challenges in VANETs and outlines some open research issues in improving mobility management for seamless communication in these dynamic networks.
This document provides a review of different techniques for segmenting brain MRI images to detect tumors. It compares the K-means and Fuzzy C-means clustering algorithms. K-means is an exclusive clustering algorithm that groups data points into distinct clusters, while Fuzzy C-means is an overlapping clustering algorithm that allows data points to belong to multiple clusters. The document finds that Fuzzy C-means requires more time for brain tumor detection compared to other methods like hierarchical clustering or K-means. It also reviews related work applying these clustering algorithms to segment brain MRI images.
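The contrast the review draws between exclusive and overlapping clustering comes down to the membership rule. The standard Fuzzy C-means membership formula (for a 1-D point and fuzzifier m, values illustrative) can be sketched as:

```python
def fcm_memberships(point, centers, m=2.0):
    """Fuzzy C-means membership of `point` in each cluster center:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)). Memberships sum to 1,
    unlike K-means' hard assignment to a single cluster."""
    dists = [abs(point - c) for c in centers]
    if 0.0 in dists:                   # point coincides with a center
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    memberships = []
    for d_i in dists:
        denom = sum((d_i / d_j) ** (2.0 / (m - 1.0)) for d_j in dists)
        memberships.append(1.0 / denom)
    return memberships

u = fcm_memberships(2.0, [0.0, 10.0])  # mostly cluster 0, partly cluster 1
```

Iterating this membership update against a weighted recomputation of the centers is what makes FCM slower per pass than K-means' hard reassignment, consistent with the timing observation above.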
1) The document simulates and compares the performance of AODV and DSDV routing protocols in a mobile ad hoc network under three conditions: when users are fixed, when users move towards the base station, and when users move away from the base station.
2) The results show that both protocols have higher packet delivery and lower packet loss when users are either fixed or moving towards the base station, since signal strength is better in those scenarios. Performance degrades when users move away from the base station due to weaker signals.
3) AODV generally has better performance than DSDV, with higher throughput and packet delivery rates observed across the different user mobility conditions.
This document describes the design and implementation of 4-bit QPSK and 256-bit QAM modulation techniques using MATLAB. It compares the two techniques based on SNR, BER, and efficiency. The key steps of implementing each technique in MATLAB are outlined, including generating random bits, modulation, adding noise, and measuring BER. Simulation results show scatter plots and eye diagrams of the modulated signals. A table compares the results, showing that 256-bit QAM provides better performance than 4-bit QPSK. The document concludes that QAM modulation is more effective for digital transmission systems.
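The implementation summarized above is in MATLAB; as a language-neutral illustration of the same pipeline (generate bits, modulate, add noise, hard-decide, count errors), here is a minimal Monte-Carlo BER loop for the QPSK half of the comparison, with all parameters assumed:

```python
import math
import random

def qpsk_ber(ebn0_db, n_bits=20000, seed=7):
    """Monte-Carlo bit-error rate of Gray-coded QPSK over AWGN: map each
    bit pair to a (+-1, +-1)/sqrt(2) symbol, add Gaussian noise scaled
    for the requested Eb/N0, then hard-decide per axis."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (4.0 * ebn0))  # per-dimension noise std (Es=1, 2 bits/symbol)
    errors = 0
    for _ in range(n_bits // 2):
        b0, b1 = rng.randint(0, 1), rng.randint(0, 1)
        i = (2 * b0 - 1) / math.sqrt(2.0)
        q = (2 * b1 - 1) / math.sqrt(2.0)
        errors += ((i + rng.gauss(0.0, sigma)) > 0) != (b0 == 1)
        errors += ((q + rng.gauss(0.0, sigma)) > 0) != (b1 == 1)
    return errors / n_bits
```

A 256-QAM version follows the same loop with a 16x16 constellation and nearest-neighbor decision; comparing the two curves at equal Eb/N0 reproduces the SNR/BER trade-off the document tabulates.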
The document proposes a hybrid technique using Anisotropic Scale Invariant Feature Transform (A-SIFT) and Robust Ensemble Support Vector Machine (RESVM) to accurately identify faces in images. A-SIFT improves upon traditional SIFT by applying anisotropic scaling to extract richer directional keypoints. Keypoints are processed with RESVM and hypothesis testing to increase accuracy above 95% by repeatedly reprocessing images until the threshold is met. The technique was tested on similar and different facial images and achieved better results than SIFT in retrieval time and reduced keypoints.
This document studies the effects of dielectric superstrate thickness on microstrip patch antenna parameters. Three types of probe-fed patch antennas (rectangular, circular, and square) were designed to operate at 2.4 GHz using Arlondiclad 880 substrate. The antennas were tested with and without an Arlondiclad 880 superstrate of varying thicknesses. It was found that adding a superstrate slightly degraded performance by lowering the resonant frequency and increasing return loss and VSWR, while decreasing bandwidth and gain. Specifically, increasing the superstrate thickness or dielectric constant resulted in greater changes to the antenna parameters.
This document describes a wireless environment monitoring system that utilizes soil energy as a sustainable power source for wireless sensors. The system uses a microbial fuel cell to generate electricity from the microbial activity in soil. Two microbial fuel cells were created using different soil types and various additives to produce different current and voltage outputs. An electronic circuit was designed on a printed circuit board with components like a microcontroller and ZigBee transceiver. Sensors for temperature and humidity were connected to the circuit to monitor the environment wirelessly. The system provides a low-cost way to power remote sensors without needing battery replacement and avoids the high costs of wiring a power source.
1) The document proposes a model for a frequency tunable inverted-F antenna that uses ferrite material.
2) The resonant frequency of the antenna can be significantly shifted from 2.41GHz to 3.15GHz, a 31% shift, by increasing the static magnetic field placed on the ferrite material.
3) Altering the permeability of the ferrite allows tuning of the antenna's resonant frequency without changing the physical dimensions, providing flexibility to operate over a wide frequency range.
This document summarizes a research paper that presents a speech enhancement method using stationary wavelet transform. The method first classifies speech into voiced, unvoiced, and silence regions based on short-time energy. It then applies different thresholding techniques to the wavelet coefficients of each region - modified hard thresholding for voiced speech, semi-soft thresholding for unvoiced speech, and setting coefficients to zero for silence. Experimental results using speech from the TIMIT database corrupted with white Gaussian noise at various SNR levels show improved performance over other popular denoising methods.
This document reviews the design of an energy-optimized wireless sensor node that encrypts data for transmission. It discusses how sensing schemes that group nodes into clusters and transmit aggregated data can reduce energy consumption compared to individual node transmissions. The proposed node design calculates the minimum transmission power needed based on received signal strength and uses a periodic sleep/wake cycle to optimize energy when not sensing or transmitting. It aims to encrypt data at both the node and network level to further optimize energy usage for wireless communication.
This document discusses group consumption modes. It analyzes factors that impact group consumption, including external environmental factors like technological developments enabling new forms of online and offline interactions, as well as internal motivational factors at both the group and individual level. The document then proposes that group consumption modes can be divided into four types based on two dimensions: vertical (group relationship intensity) and horizontal (consumption action period). These four types are instrument-oriented, information-oriented, enjoyment-oriented, and relationship-oriented consumption modes. Finally, the document notes that consumption modes are dynamic and can evolve over time.
The document summarizes a study of different microstrip patch antenna configurations with slotted ground planes. Three antenna designs were proposed and their performance evaluated through simulation: a conventional square patch, an elliptical patch, and a star-shaped patch. All antennas were mounted on an FR4 substrate. The effects of adding different slot patterns to the ground plane on resonance frequency, bandwidth, gain and efficiency were analyzed parametrically. Key findings were that reshaping the patch and adding slots increased bandwidth and shifted resonance frequency. The elliptical and star patches in particular performed better than the conventional design. Three antenna configurations were selected for fabrication and measurement based on the simulations: a conventional patch with a slot under the patch, an elliptical patch with slots
1) The document describes a study conducted to improve call drop rates in a GSM network through RF optimization.
2) Drive testing was performed before and after optimization using TEMS software to record network parameters like RxLevel, RxQuality, and events.
3) Analysis found call drops were occurring due to issues like handover failures between sectors, interference from adjacent channels, and overshooting due to antenna tilt.
4) Corrective actions taken included defining neighbors between sectors, adjusting frequencies to reduce interference, and lowering the mechanical tilt of an antenna.
5) Post-optimization drive testing showed improvements in RxLevel, RxQuality, and a reduction in dropped calls.
This document describes the design of an intelligent autonomous wheeled robot that uses RF transmission for communication. The robot has two modes - automatic mode where it can make its own decisions, and user control mode where a user can control it remotely. It is designed using a microcontroller and can perform tasks like object recognition using computer vision and color detection in MATLAB, as well as wall painting using pneumatic systems. The robot's movement is controlled by DC motors and it uses sensors like ultrasonic sensors and gas sensors to navigate autonomously. RF transmission allows communication between the robot and a remote control unit. The overall aim is to develop a low-cost robotic system for industrial applications like material handling.
This document reviews cryptography techniques to secure the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in mobile ad-hoc networks. It discusses various types of attacks on AODV like impersonation, denial of service, eavesdropping, black hole attacks, wormhole attacks, and Sybil attacks. It then proposes using the RC6 cryptography algorithm to secure AODV by encrypting data packets and detecting and removing malicious nodes launching black hole attacks. Simulation results show that after applying RC6, the packet delivery ratio and throughput of AODV increase while delay decreases, improving the security and performance of the network under attack.
The document describes a proposed modification to the conventional Booth multiplier that aims to increase its speed by applying concepts from Vedic mathematics. Specifically, it utilizes the Urdhva Tiryakbhyam formula to generate all partial products concurrently rather than sequentially. The proposed 8x8 bit multiplier was coded in VHDL, simulated, and found to have a path delay 44.35% lower than a conventional Booth multiplier, demonstrating its potential for higher speed.
This document discusses image deblurring techniques. It begins by introducing image restoration and focusing on image deblurring. It then discusses challenges with image deblurring being an ill-posed problem. It reviews existing approaches to screen image deconvolution including estimating point spread functions and iteratively estimating blur kernels and sharp images. The document also discusses handling spatially variant blur and summarizes the relationship between the proposed method and previous work for different blur types. It proposes using color filters in the aperture to exploit parallax cues for segmentation and blur estimation. Finally, it proposes moving the image sensor circularly during exposure to prevent high frequency attenuation from motion blur.
This document describes modeling an adaptive controller for an aircraft roll control system using PID, fuzzy-PID, and genetic algorithm. It begins by introducing the aircraft roll control system and motivation for developing an adaptive controller to minimize errors from noisy analog sensor signals. It then provides the mathematical model of aircraft roll dynamics and describes modeling the real-time flight control system in MATLAB/Simulink. The document evaluates PID, fuzzy-PID, and PID-GA (genetic algorithm) controllers for aircraft roll control and finds that the PID-GA controller delivers the best performance.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Home security is of paramount importance in today's world, where we rely more on technology, home
security is crucial. Using technology to make homes safer and easier to control from anywhere is
important. Home security is important for the occupant’s safety. In this paper, we came up with a low cost,
AI based model home security system. The system has a user-friendly interface, allowing users to start
model training and face detection with simple keyboard commands. Our goal is to introduce an innovative
home security system using facial recognition technology. Unlike traditional systems, this system trains
and saves images of friends and family members. The system scans this folder to recognize familiar faces
and provides real-time monitoring. If an unfamiliar face is detected, it promptly sends an email alert,
ensuring a proactive response to potential security threats.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
New application of genetic algorithm in optimization of structural weights
IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE)
e-ISSN: 2278-1684, p-ISSN: 2320-334X, Volume 7, Issue 3 (Jul. - Aug. 2013), PP 52-70
www.iosrjournals.org
www.iosrjournals.org 52 | Page
New application of genetic algorithm in optimization of structural
weights
Mokhtar Jalilian *1, Nasser Taghizadieh 2
1 Master's Student in Civil Structures, Department of Civil Engineering, University of Tabriz, Tabriz, Iran
2 Assistant Professor, Department of Civil Engineering, University of Tabriz, Tabriz, Iran
Abstract: It has been widely recognized that the performance of a multi-agent system (MAS) is highly affected
by its organization. A large scale MAS may have billions of possible ways of organization, depending on the
number of agents, the roles, and the relationships among these agents. These characteristics make it impractical
to find an optimal choice of organization using exhaustive search methods. In this report, we propose a genetic
algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce
a novel algorithm, called the hierarchical genetic algorithm, in which hierarchical crossover with a repair
strategy and small-perturbation mutation are used. The phenotypic hierarchical structure space is translated
to a genome-like array representation space, which makes the structures amenable to standard genetic operators. A case
study with 10 scenarios of a hierarchical information retrieval model is provided. Our experiments have shown
that competitive baseline structures which lead to the optimal organization in terms of utility can be found by
the proposed algorithm during the evolutionary search. Compared with the traditional genetic operators, the
newly introduced operators produced better organizations of higher utility more consistently in a variety of test
cases. The proposed algorithm extends the search processes of the state-of-the-art multi-agent organization
design methodologies, and is more computationally efficient in a large search space.
Keywords: genetic algorithm, hierarchical crossover, information retrieval, multi-agent systems, organization
design, optimization, representation, tree structures.
I. Introduction
The research on the organization of a multi-agent system (MAS) has attracted much interest in recent
years. An organization provides a framework for activities and interactions in a MAS through the definition of
agent roles, groups, tasks, behavioral expectations and authority relationships such that all the agents in the
MAS can cooperate systematically and contribute to the common good of the overall system. More specifically,
the organization defines which resources an agent is able to acquire, what roles/functions it takes, with which
other agents it is allowed to exchange information, etc.
A proper organization for a MAS can ensure that the behavior of the agents is externally observable and
make up for the major drawback of the traditional agent centered MAS in which the patterns and the outcomes
of the interactions are inherently unpredictable because of the high likelihood of emergent (and unwanted)
behavior [4]. Particularly, in large scale systems, to form and evolve an organization makes it possible for the
system to exploit collective efficiencies and to manage emerging situations [12]. So far, a number of
organization designs have been proposed for multi-agent systems [9]. Experiments and simulations have shown
that various organizations employed by a system with the same set of agents may have different impacts on its
performance [8][15][5][10][17][21].
Among all kinds of organizations, the hierarchical structure is one of the most common structures
observed in multi-agent systems. Like human organizations, primate societies, and insect colonies, many multi-
agent systems can be abstracted as hierarchical, tree-like structures or sets of parallel hierarchical structures,
where agents are categorized in different levels in the hierarchies [11]. Often, the level of an agent indicates its
capabilities and roles. In other words, a specific level in the system consists of equally capable agents,
performing similar roles. Agents at the bottom level may execute the routine tasks under the orders given by
their higher-level authorities, whereas agents at the top level may assign the task, collect and assemble the
returned information from their subordinates, as seen in the distributed information retrieval (IR) system
described in [10].
For a large hierarchical MAS, there exist a great variety of possible ways to organize the system, which
induces different agent behaviors and system characteristics. Due to the difference in the depth and the width of
the hierarchy, the number of organization instances increases exponentially with the number of agents, which
poses a great challenge for us to construct the most suitable organization for a given system. Although many
methodologies for organization modeling have been proposed, few of them present an effective way to search
for an optimal organization instance.
In order to solve the problem, this report proposes a genetic algorithm (GA) approach as an alternative to the
conventional enumeration methods for optimizing hierarchical multi-agent systems. Inspired by biological
evolution processes such as selection, reproduction, and mutation, GAs are known to be robust global search
algorithms for optimization and machine learning [7][2][3]. The heuristic nature of GA helps it to locate the
global optimum in a vast search space. We design novel crossover and mutation operators to make the algorithm
suitable for organization evolution and thereby ensure competitive performance. We tested the algorithm on an
example of the IR model [10], which exhibits numerous possible organizational variants, and verified its
capability through simulations in different scenarios.
The rest of the report is structured as follows. Section 2 discusses the related work. In Section 3, we
introduce the representation of organization employed in our algorithm, followed by the newly proposed
crossover and mutation operators in Section 4. Section 5 proceeds with description of the IR model in our case
study, with implementation details and experimental setup. And in Section 6, the simulation results are
presented with the number of databases varying from 12 to 30. We analyze the results by comparing the
different test cases which show the impact of environment variables on the best organizations obtained. The
proposed algorithm is compared with the standard genetic algorithm (SGA) with one-point crossover and two-
point crossover in terms of its search accuracy and stability. In Section 7, we further compare our algorithm with
the search process of the state-of-the-art multi-agent organization design methodologies. In the last section, we
conclude the report and discuss promising future research directions in this topic.
II. Related Work
The design of a multi-agent system organization has been investigated by many researchers. Early
methodologies such as Gaia [19] and OMNI [18] aim to assist the manual design process of agent organizations.
In these models the roles that agents have to play within the MAS and the interaction protocols are identified.
Instead of relying heavily on the expertise of human designers, it is desirable to automate the process of
producing multi-agent organization designs. In this case, a quantitative measurement over a set of metrics is
essential for rapidly and precisely predicting the performance of the MAS. With these metrics we
can evaluate a number of organization instances, rank them, and select the best organization without having to
introduce heavy cost by actually implementing the organization designs.
In [10], the utility value was defined as the quantitative measurement of the performance of a distributed sensor
network and an information retrieval system. An organizational design modeling language (ODML) was
proposed and a template was constructed for each domain. Several approaches, including the exploitation of
hard constraints and equivalence classes, parallel search, and the use of abstraction, have been studied in order
to reduce the complexity of searching for a valid optimal organization.
Another organization designer, KB-ORG, which also incorporates quantitative utility as a user
evaluation criterion, was proposed for multi-agent systems in [17]. It uses both application-level and
coordination-level organization design knowledge to explore the combinatorial search space of candidate
organizations selectively. This approach significantly reduces the exploration effort required to produce
effective designs as compared to modeling and evaluation-based approaches that do not incorporate designer
expertise.
Like ODML, however, KB-ORG aims at pruning the search space, and design knowledge alone is
inadequate for identifying an optimal design when the possible varieties of the organization structure
become large.
Evolutionary based search mechanisms have been used to help the design of MAS organizations on a
few occasions. For example, in [20], a GA-based algorithm is proposed for coalition structure formation which
aims at achieving the goals of high performance, scalability, and a fast convergence rate simultaneously. In
[13], a heuristic search method called evolutionary organizational search (EOS), which is based on genetic
programming (GP), was introduced. A review of evolutionary methodologies, mostly involving co-evolution,
for the engineering of multi-agent market mechanisms, can also be found in [16]. These techniques show a
promising direction to deal with the organization search in hierarchical multi-agent systems, as exhaustive
methods, such as breadth-first search and depth-first search, become inefficient and impractical in a large search
space.
III. Representation of Organizations
Generally speaking, the organization of a hierarchical MAS consists of a number of tree structures. It
can either be a single tree, where the root node is the sole leader of the organization, or a set of trees, where
there are several equally important leaders that communicate with each other and share the decision-making
power. The intermediate nodes in a tree are responsible for assigning tasks to their subordinates, as well as for
reporting the results of the accomplished tasks back to their higher-level authorities. Information exchange is
only allowed in the vertical directions between higher and lower levels, and there is no interaction of agents
horizontally, or among different hierarchies. The leaf nodes are the bottom of the structure and they complete
the most basic tasks.
Optimization in such a search space can be handled by evolutionary algorithms [3], especially genetic
programming, which supports populations of model structures of varying length and complexity. It has also
been shown from previous studies that some well-structured trees (e.g. binary trees), with a certain number of
levels and a fixed number of subordinates per node, can be represented by arrays [14][1]. Transformations are
feasible as a result of their regular structures, which thereby allow the traditional crossover and mutation
operators of other evolutionary algorithms, such as genetic algorithms, to take effect.
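As background for the array encodings discussed here, the classical flat-array layout for a complete binary tree mentioned above can be sketched as follows (an illustrative sketch, not from the paper): the node stored at index i has its children at indices 2i+1 and 2i+2, which is what makes such regular trees directly amenable to array-based genetic operators.

```python
def children(i):
    # Indices of the left and right children of the node at index i
    # in the flat-array layout of a complete binary tree (root at 0).
    return 2 * i + 1, 2 * i + 2

def parent(i):
    # Index of the parent of the node at index i (undefined for the root).
    return (i - 1) // 2

# A three-level complete binary tree stored as a flat array:
tree = ["root", "L", "R", "LL", "LR", "RL", "RR"]
assert tree[children(0)[0]] == "L" and tree[children(0)[1]] == "R"
assert tree[parent(5)] == "R"  # node 5 is "RL"; its parent is "R"
```

This layout only works because the tree's shape is fixed; the representation introduced in this paper removes that restriction.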
We propose an array representation of hierarchical MAS organizations which is applicable to a much
broader range of hierarchical structures than just binary trees. It converts a set of hierarchical trees into a
fixed-length array with integer components, which resembles a gene sequence. The representation is not limited
to describing a single tree, and the number of subordinates of each node need not be a constant. Unbalanced
trees, in which leaf nodes are not on the same hierarchical level, can also be depicted using this representation.
3.1 Translating Organizations into Genomes
We assume that the hierarchical MAS considered here have the following properties: the number of
leaf-node agents is fixed before the search, and the maximum possible number of levels is determined. Thus,
the total number of agents in the organization is bounded. Based on these assumptions, we can make use of the
partition concept to convert the organization from tree structures to arrays.
Let N be the total number of leaf nodes or end nodes, so that they can be numbered as 1, 2, …, N
respectively from left to right. Let M be the maximum tree depth (i.e. the maximum height of the structure). The
reason for limiting the height is that very tall structures can be slow or unresponsive, as the long path from
root to leaf increases message latency among the agents. The organization of a hierarchical MAS can be
outlined by Representation 1:
a_1 a_2 a_3 … a_{N−1}
where a_i is an integer between 1 and M, denoting the level number on which leaf nodes i and i+1 start to
separate.
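Because every array of this form corresponds to some organization instance (part (2) of the Theorem later in this section), a GA can initialize its population simply by sampling each component uniformly from {1, …, M}. A minimal sketch of such initialization (illustrative function names, not the authors' code):

```python
import random

def random_genome(num_leaves, max_depth, rng=random):
    # One candidate organization as a separation-level array:
    # num_leaves - 1 components, each an integer in [1, max_depth],
    # per Representation 1.
    return [rng.randint(1, max_depth) for _ in range(num_leaves - 1)]

def initial_population(size, num_leaves, max_depth):
    # A random initial GA population of valid organization genomes.
    return [random_genome(num_leaves, max_depth) for _ in range(size)]
```

Note that no repair step is needed at initialization time, since every sampled array is a valid genome by construction.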
An example with seven leaf nodes (N = 7) is illustrated in Figure 1. It consists of two trees. On Level 1, the four
leaf nodes on the left and the three leaf nodes on the right separate into two trees. In other words, there is one
separation between the leaf nodes 4 and 5, so a_4 = 1. On Level 2, there are two leaf nodes and one intermediate
node (three nodes altogether) under the left tree root, corresponding to the "2 2" (two partition numbers) to the
left of the "1" in the array. The one leaf node and one intermediate node (two nodes altogether) under the right
tree root give the "2" (one partition number) to the right. Both intermediate nodes on Level 2 have two leaf
nodes as their subordinates (leaf nodes 3 and 4, leaf nodes 6 and 7), which are separated on Level 3, resulting in
the two 3's in the 3rd and 6th places in the array. Therefore, the array "2 2 3 1 2 3" fully specifies the
organization.
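The tree-to-array translation walked through above can be sketched in Python (an illustrative sketch, not the authors' code): if trees are given as nested lists, adjacent children of a node on level L separate on level L+1, with the virtual forest root taken as level 0.

```python
def encode(forest):
    # Translate a forest of hierarchy trees into its separation-level array.
    # Each tree is a nested list: internal nodes are lists of subordinates,
    # leaves are any non-list values.
    seps = []

    def walk(children, level):
        for j, child in enumerate(children):
            if isinstance(child, list):
                walk(child, level + 1)
            # Adjacent children of a node on level `level` separate
            # on level `level + 1`.
            if j < len(children) - 1:
                seps.append(level + 1)

    walk(forest, 0)  # the virtual forest root sits on level 0
    return seps

# The two-tree organization of Figure 1:
figure1 = [[1, 2, [3, 4]], [5, [6, 7]]]
print(encode(figure1))  # → [2, 2, 3, 1, 2, 3]
```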
Conversely, we can also obtain an organization by interpreting the representation array. For instance, if
we want to determine which level node 4 in Figure 1 sits on, we need to examine both of the node's neighbors.
The third and fourth digits in the array are "3" and "1", which means that nodes 3 and 4 are separated on
Level 3, and nodes 4 and 5 are separated on Level 1. As a result, we can place node 4 on Level 3 (the larger
of 3 and 1). Similarly, because the fifth digit is "2", i.e. nodes 5 and 6 are separated on Level 2,
node 5 can be put on Level 2 (the larger of 2 and 1).
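This interpretation rule can be sketched in Python for illustration (the paper's own experiments used MATLAB; the function name `leaf_levels` is ours): a leaf sits at the larger of the two partition digits on either side of it, with a virtual digit of 1 at both ends of the array.

```python
def leaf_levels(a):
    """Level of each leaf under Representation 1: a leaf sits at the
    larger of the partition digits on its two sides; the outer sides of
    the first and last leaves count as the root level (1)."""
    padded = [1] + list(a) + [1]
    return [max(padded[i], padded[i + 1]) for i in range(len(a) + 1)]

# The sample organization of Figure 1: leaves 3, 4, 6, 7 sit on Level 3,
# and leaves 1, 2, 5 sit on Level 2.
print(leaf_levels([2, 2, 3, 1, 2, 3]))
```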
2 2 3 1 2 3
Figure 1: A sample organization and its array representation. Agent
nodes are displayed as circles in the figure, and leaf nodes are numbered.
4. New application of genetic algorithm in optimization of structural weights
www.iosrjournals.org 55 | Page
Theorem:
The above representation has the following properties.
(1) For every hierarchical organization instance that satisfies our assumptions at the beginning of
Section 3.1, the array representation that can be generated is unique.
(2) For every representation of the above mentioned form, there is an organization instance corresponding
to it.
Proof:
(1) We firstly prove the existence of an array representation for every hierarchical organization instance.
The way of generating an array representation of an arbitrary hierarchical organization instance can be
expressed as follows. If there are N leaf nodes, we prepare N–1 slots. Firstly, organize the structure well so that
the root nodes, intermediate nodes, and leaf nodes are on their proper levels. Secondly, we examine the
separation pattern between adjacent leaf nodes one by one from left to right. Fill the slots with the level number
where the adjacent leaf nodes start to separate. See Figure 1 for an example. The first two leaf nodes on the left
are direct subordinates of the first tree root, i.e. on the root level (Level 1) they do not separate. However, on
Level 2, they separate into different nodes. So the first number is 2. The second slot should also be filled with
number 2 because the second and third leaf nodes on the left separate on Level 2. The third leaf node belongs to
an intermediate node on Level 2 different from the second leaf node. And as the third and fourth leaf nodes are
direct subordinates of an intermediate node on Level 2, they start to separate on Level 3. Number 3 should be
the third number in the array representation. And so on, we can get the values, which are the level numbers, for
all the slots. Together they form the required representation.
We then prove the uniqueness of the generated array representation. If two array representations
a1a2a3…aN–1 and b1b2b3…bN–1 derived from the same organization instance are different, there exists an
i ∈ {1, 2, …, N–1} such that ai ≠ bi. This means that leaf nodes i and i+1 separate at different levels in the two
corresponding organization structures, so the organization structures are not identical, which is a contradiction.
(2) Given an array representation with positive integers of length L, we would like to construct an
organization instance containing L+1 leaf nodes as follows. Find all the digit “1”s in the representation (if there
are any). Calculate the number of digits (greater than 1) between adjacent 1's one by one from left to right, and
denote them as n1, n2, n3, …, nk+1, where k is the number of 1's. If there are no 1's, then k=0 and n1=L. The
corresponding organization has k+1 root nodes with n1+1, n2+1, n3+1, …, nk+1+1 leaf nodes, respectively, from
left to right. So far we have completed the root level (Level 1) of the organization. For instance, with array [2 2
3 1 2 3], n1=3, n2=2, i.e. there are two root nodes with 4 and 3 leaf nodes respectively. For Level 2, we take
segments with 1's and 2's as separators. These segments should only contain digits greater than 2 (if any). As
was done for Level 1, the numbers of digits between adjacent separators are recorded as r1, r2, r3, …, rt+1,
where t is the number of 1's and 2's. If ri=0, it corresponds to a leaf node; otherwise, it corresponds to an
intermediate node on Level 2. After that, take segments with 1's, 2's, and 3's as separators, and repeat the steps
until the greatest numbers in the representation are examined. In this way we can obtain the full organization
instance.
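The first step of this reconstruction, finding the root nodes and their leaf counts, can be sketched as follows (the helper name `root_leaf_counts` is our own):

```python
def root_leaf_counts(rep):
    """Leaf counts of the Level-1 (root) nodes of a representation.

    The digits equal to 1 act as separators; a run of n digits greater
    than 1 between adjacent separators yields a root with n+1 leaves."""
    counts, run = [], 0
    for digit in rep:
        if digit == 1:
            counts.append(run + 1)
            run = 0
        else:
            run += 1
    counts.append(run + 1)
    return counts

# The [2 2 3 1 2 3] example: two roots, with 4 and 3 leaves respectively.
print(root_leaf_counts([2, 2, 3, 1, 2, 3]))
```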
Note that the organization instance is non-unique. Figure 2(a) illustrates an extreme case where all
three leaf nodes separate on Level 2, so the representation is [2 2]. It has the same representation as the
organization in Figure 2(b). When such circumstances arise, we should examine all the possible organization
instances that correspond to a representation and use the best one. In the following section we explain that in the
IR model, the sub-organizations having nodes with only one subordinate are uneconomical and should be
simplified to achieve higher utility. Therefore, we only need to focus on the most simplified organization
instance.
Figure 2: Organizations with the same representation.

So far, we have established a surjective mapping from the set of all valid structure instances containing
N leaf nodes with maximum height M, denoted as A, to the set of all arrays containing N–1 integer elements
ranging from 1 to M, denoted as B. Furthermore, the representation is compatible with genetic operators such as
one-point, two-point or uniform crossover, i.e. the offspring generated after the crossover of individuals from set
B still belong to set B. Bit-wise mutation can also be applied here, so that every bit ai of the genome is mutated
to a randomly picked different value from {1, 2, …, M}\{ai} according to the user-defined mutation probability.
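Bit-wise mutation over set B can be sketched like this (a minimal illustration in Python; the function name `bitwise_mutation` is our own):

```python
import random

def bitwise_mutation(genome, M, p_mut):
    """With probability p_mut, replace each digit a_i by a different
    random value from {1, ..., M} \\ {a_i}; otherwise keep it."""
    result = []
    for a in genome:
        if random.random() < p_mut:
            result.append(random.choice([v for v in range(1, M + 1) if v != a]))
        else:
            result.append(a)
    return result
```

Since every mutated digit stays in {1, …, M}, the offspring remains a valid member of set B.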
3.2 Simplifying Organizations
The above representation can be applied to a general hierarchical MAS organization. For specific
organization search problems, we may find it beneficial to simplify the representation in order to prune the
search space and avoid unnecessary candidate evaluations of the algorithm. The simplification steps should be
determined by the designer depending on the problems. Trimming, combining, and reducing of branches are
easy to achieve using the proposed representation. We will give an example of how to remove redundant
intermediate nodes of the IR system in Section 5.2.
3.3 Variations of Representations
In Section 3.1, we have assumed that the leaf nodes are homogeneous. In such circumstances, a 1×(N–1)
array is enough to represent a hierarchical organization of a MAS. Nonetheless, in view of circumstances
where each leaf node must be treated uniquely, a second row can be added to the array representation to address
the distinction resulting from permutations. This makes the representation a 2×(N–1) array
(Representation 2):
a1 a2 a3 … aN–1
p1 p2 p3 … pN–1
where {ai} are still integers between 1 and M, denoting the level of the partition between leaf nodes i and i+1,
and p1, p2, …, pN–1 are a permutation of 1 to N with the last number discarded. Still using the example in Figure
1, now we use numbers 1, 2, …, 7 to distinguish the mutually different leaf nodes. If in the organization they are
5, 3, 2, 1, 4, 7, 6, respectively, then the representation is:
2 2 3 1 2 3
5 3 2 1 4 7
One may also want to design an organization in which the number of leaf node agents is not fixed beforehand.
To account for varied number of leaf node agents, we may use the following Representation 3:
a1 a2 a3 … aN1–1 0 0 … 0   (an array of length N2–1)
where N1 is the actual number of leaf nodes of the representation, N2 is the maximum number of leaf nodes
allowed in the organization, and the remaining positions are filled with zeros.
These variants function in the same manner as Representation 1 when passed through the genetic
operators introduced next.
IV. Crossover and Mutation Operators
The traditional one-point crossover chooses a random slicing position along the chromosomes of both
parents. All data beyond that point in either solution is swapped between the two parents. The resulting
chromosomes are two offspring. Though commonly used in genetic algorithms, this crossover method only
influences the structure near the crossover point, as shown in Figure 3(a,b), which may not be enough to
generate sufficiently novel offspring in large-scale systems. To speed up the evolution and increase the chance
of obtaining a desired structure with higher utility, new crossover operators are needed. In this report, we
propose a novel crossover operator, hierarchical crossover, specially designed for the optimization of
tree-structured organizations.
The proposed hierarchical crossover operator based on the previously described Representation 1
contains swapping of sub-organizations and a repair strategy to keep the number of total leaf nodes constant. It
is implemented as follows.
First of all, we compare the number of structure levels of two randomly selected organization solutions from the
population. Denote the organization with more levels as the first individual and the number of levels as T.
Denote the organization with fewer levels as the second individual. (In the case of a tie, the order can be
arbitrarily assigned.) After that, we choose a node randomly from all nodes whose level number is between 1
and T–1 from the first solution and denote the level number of the chosen node as S. Thirdly, we choose a node
randomly at Level S, or the penultimate level, whichever is smaller, from the second solution, and exchange the
sub-structures between the two solutions below the chosen nodes.

Figure 3: Illustration of one-point crossover and hierarchical crossover using array representation and
organization structures: (a) array representation; (b) one-point crossover; (c) hierarchical crossover.

If any of the solution candidates have only
one level, we generate two random individuals of maximum tree depth instead. The exchange ensures that the
two newly formed organization structures do not exceed the maximum height of their parent structures.
However, the exchanged sub-structures do not necessarily contain equal number of leaf nodes. Thus, we
propose the following repair strategy.
Find the solution with the longer representation, randomly pick one digit out of it, and insert this digit
into a random slot in the other solution. Continue until the two solutions have equal length. This guarantees
the validity of the two solutions, as shown in Figure 3(a,c). Illustrated in both the array representation and the
organization structures, Figure 3 displays the difference between the proposed hierarchical crossover and one-
point crossover. The pseudo code of hierarchical crossover is given in Figure 4.
To apply hierarchical crossover to Representation 2, all we need is to bundle each column and move
the second row together with the first row. As for organizations in Representation 3, the repair strategy picks
digits only from non-zero locations, and continues until each selected organization has the same number of
leaf nodes as before.
As seen in Figure 3, a branch of the tree corresponds to a gene fragment. By swapping the two
selected gene segments in the parents, we get two new organization instances with exchanged sub-organizations.
This step is similar to two-point crossover, in which the segments between the two randomly selected crossover
points of both parents are swapped to form the offspring. However, like one-point crossover, two-point
Let parent1 and parent2 be the array representations of two selected parents.
if max(parent1)<max(parent2)
Exchange parent1 and parent2;
end
T = max(parent1);
if T==1 or max(parent2)==1
Randomly generate offspring1 and offspring2 of maximum tree depth;
return
end
For parent1:
List all possible crossover nodes of parent1 from Level 1 till T-1;
Randomly select a node from the above list as cp1;
Record the level number of cp1 as S;
Get the segments of the array representation of the sub-structure below cp1 as portion_c1;
Get the segments of the array representation to the left of the sub-structure below cp1 as portion_l1;
Get the segments of the array representation to the right of the sub-structure below cp1 as portion_r1;
For parent2:
Randomly select a node cp2 from parent2 at the level number min(S, max(parent2)-1);
Get the segments of the array representation of the sub-structure below cp2 as portion_c2;
Get the segments of the array representation to the left of the sub-structure below cp2 as portion_l2;
Get the segments of the array representation to the right of the sub-structure below cp2 as portion_r2;
offspring1 = [portion_l1 portion_c2 portion_r1];
offspring2 = [portion_l2 portion_c1 portion_r2];
Repair strategy:
if length(offspring1)>length(parent1)
exnum = length(offspring1)-length(parent1);
for j=1:exnum,
Randomly select an integer p1 between 1 and length(offspring1);
Randomly select an integer p2 between 1 and length(offspring2)+1;
offspring2 = [offspring2(1:p2-1) offspring1(p1) offspring2(p2:end)];
offspring1 = [offspring1(1:p1-1) offspring1(p1+1:end)];
end
elseif length(offspring2)>length(parent2)
exnum = length(offspring2)- length(parent2);
for j=1:exnum,
Randomly select an integer p2 between 1 and length(offspring2);
Randomly select an integer p1 between 1 and length(offspring1)+1;
offspring1 = [offspring1(1:p1-1) offspring2(p2) offspring1(p1:end)];
offspring2 = [offspring2(1:p2-1) offspring2(p2+1:end)];
end
end
Figure 4: Pseudo code for hierarchical crossover.
crossover does not consider whether the selected gene segments correspond to whole tree branches. Moreover,
once the two crossover points are determined, the segments are fixed and their locations in the arrays do not
change. Hierarchical crossover differs from two-point crossover in that it focuses on the branches of the tree
structures and only exchanges gene segments that refer to whole branches. In addition, the locations of the two
gene segments in the parents may differ from each other, and the repair strategy helps refresh the population.
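The repair step after the sub-structure swap can be sketched as follows (an illustration under our own naming; `target_len` is the common representation length N–1):

```python
import random

def repair(offspring1, offspring2, target_len):
    """Equalize offspring lengths after a sub-structure swap.

    Randomly picked digits are moved out of the longer representation
    and inserted into random slots of the shorter one until both reach
    target_len, so no partition digit is lost."""
    a, b = list(offspring1), list(offspring2)
    if len(a) < len(b):
        a, b = b, a  # make a the longer one
    while len(a) > target_len:
        digit = a.pop(random.randrange(len(a)))
        b.insert(random.randrange(len(b) + 1), digit)
    return a, b
```

Note that the multiset of digits across the pair is preserved, which is what keeps both offspring valid members of the search space.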
In addition to the crossover method mentioned above, we use the mutation of small perturbation. It is
different from bit-wise mutation in that the digit can only increase by 1 or decrease by 1 with equal probability.
In the cases of the boundaries, if the perturbed digit is out of bounds, the original value is restored. The pseudo
code of the mutation operator based on Representation 1 is displayed in Figure 5.
V. The Information Retrieval Model
In this report we will examine the algorithm in the information retrieval system [10]. A structured,
hierarchical organization composed of nodes as mediators, aggregators, and databases is used to model the IR
system. An agent is assigned for each node to take the corresponding functions. The information recall and the
query response time are combined to form a metric to determine the utility of the organization. We will
summarize the derivation of the utility function in the following section. Detailed procedures to calculate the
utility can be found in [10]. In the template of the IR system shown in Figure 6, directed edges with a solid
arrow represent has-a relations, with a label indicating the magnitude of each relation, and hollow-arrow
edges represent is-a relations.
At the top level of each hierarchy is a mediator. The user sends a query, which a randomly assigned
mediator is responsible to handle. It uses the collection signatures of all the mediators to compare data sources,
then routes the query to those mediators that seem appropriate. After the query has been directed through the
aggregators and processed by all the databases under the selected mediators, the responsible mediator finally
collects and delivers the resulting data.
5.1 The Utility of the IR Model
According to [10], every mediator is assigned a rank according to its perceived response size. The one with the
largest perceived response size receives rank No. 1, and the same rank is given to mediators with equal
Let offspring be the array representation of an offspring created by the
crossover operator, numVar be the length of the representation,
mutOps be the mutation probability, and maxTreeDepth be the
maximum tree depth.
rN = rand(size(offspring,1),numVar)<mutOps;
offspring = offspring+rN.*((rand(size(offspring,1),numVar)>0.5)*2-1);
offspring(offspring==0) = 1;
offspring(offspring==maxTreeDepth+1) = maxTreeDepth;
Figure 5: Mutation of small perturbation.
Figure 6: Organization template of the information retrieval system [10].
perceived response sizes. Mediators are chosen to be sent queries based on their ranks, resulting in the query
probability P(m) (m=1, 2, …, num_mediators). This is used to calculate the response recall of the organization,
which is given by the following equation:
response_recall = Σ_{m=1}^{num_mediators} P(m) · actual_response_size(m) / env_topic_size    (1)

where the expectation of the system's actual response size over all the mediators is divided by the
environmental topic size to form the value of the response recall.
The IR model assumes that queries have a Poisson arrival distribution with mean rate query_rate, and each node
follows the FIFO processing principle. Each database has a process service rate, defining how quickly it can
process queries. Likewise, each aggregator and mediator has a response service rate, and must wait for the
slowest information source before sending responses. The probability density function (pdf) and cumulative
distribution function (cdf) of the waiting time in a database node are given as:
f_M(x, λ) = λ·e^(–λx)    (2)

F_M(x, λ) = 1 – e^(–λx)    (3)
where x≥0 is the waiting time and λ = service_rate – arrival_rate. The query rate of mediator m equals
query_rate × P(m), and all nodes under a particular mediator inherit the query rate of that mediator. The service
rate of a database is simply its process service rate, whereas aggregators and mediators have a service rate of
response_service_rate/num_sources.
The pdf and cdf of the maximum service time over all of a node's sources can be generated by the following
equations:
f_(n)(x) = [ Π_{i=1}^{n} F_i(x) ] · Σ_{i=1}^{n} f_i(x)/F_i(x)    (4)

F_(n)(x) = Π_{i=1}^{n} F_i(x)    (5)
where fi and Fi represent the pdf and cdf of the i-th source, respectively.
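Numerically, Eqs. (4)–(5) can be evaluated pointwise as below (a sketch; the function name and the exponential example sources are ours):

```python
import math

def max_source_dist(pdfs, cdfs, x):
    """pdf and cdf of the slowest of n independent sources at point x:
    F_(n)(x) = prod_i F_i(x), f_(n)(x) = F_(n)(x) * sum_i f_i(x)/F_i(x)."""
    Fn = 1.0
    for F in cdfs:
        Fn *= F(x)
    if Fn == 0.0:
        # some source has zero mass up to x, so the maximum does too
        return 0.0, 0.0
    fn = Fn * sum(f(x) / F(x) for f, F in zip(pdfs, cdfs))
    return fn, Fn

# Two identical exponential sources with rate 1:
f = lambda t: math.exp(-t)
F = lambda t: 1.0 - math.exp(-t)
```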
The mediator and aggregator must process and aggregate the resulting data, leading to a total service time
combining these two activities. The pdf and cdf of the total service time can then be determined by the
convolution of the corresponding local and source distribution functions, which have the forms:
f_C(x) = Σ_{i=0}^{x} f_s(i) · f_l(x–i) · dist_step    (6)

F_C(x) = Σ_{i=0}^{x} f_s(i) · F_l(x–i) · dist_step    (7)
where x=0, 1, 2, …, dist_range/dist_step, with dist_range representing the upper bound on the sampled points
and dist_step the stride length between points. fs is the aggregate information source pdf, and fl and Fl are the pdf
and cdf of the waiting time for the local queuing process.
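On the sampled grid, the convolution reduces to a finite sum; a sketch (with the distributions passed as lists indexed by grid point, and the function name ours):

```python
def convolve_grid(f_a, g_b, dist_step):
    """Discrete convolution on the grid x = 0, 1, ..., len(f_a)-1:
    out[x] = sum_{i=0}^{x} f_a[i] * g_b[x-i] * dist_step,
    the form shared by Eqs. (6) and (7)."""
    return [sum(f_a[i] * g_b[x - i] for i in range(x + 1)) * dist_step
            for x in range(len(f_a))]

# Convolving a unit impulse at 0 with any sequence returns that sequence:
print(convolve_grid([1.0, 0.0, 0.0], [0.5, 0.3, 0.2], 1.0))
```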
By incorporating the result propagation process and the cumulative overhead latency incurred by message
transits, we can predict the expected response time of the system as a whole. Finally, the utility of the
organization is computed by combining response recall and response time with appropriate weights:

utility = 1000 · response_recall – response_time/10    (8)
5.2 Simplifying the Organization Representation with Regard to the IR Model
Since it is assumed in the IR model that all the databases in the system contain the same amount of
topic data, and thus there are no differences among the end nodes (i.e. leaves of the trees), we may directly
apply the array representation introduced in Section 3 to the IR model. Here Level 1 is the mediator level,
where nodes are all mediators. The intermediate nodes correspond to aggregators, and the leaf nodes are
database agents. The whole organization can be outlined by a set of trees. Exchange of information is enabled
between every two root nodes and all immediate superiors and subordinates.
From a practical viewpoint, we notice that it is not necessary to include an aggregator if it only has one
subordinate, because it will only increase the information transmission delay and not bring any integration
advantages. Hence, if such an organization instance emerges, we can simply omit the aggregator node and
reduce the organization structure by one level.
Related modification can be made in the array representation, which is summarized below.

Figure 7: Simplifying the organization. Nodes M are mediators, nodes A are aggregators, and nodes D are
databases.
Original representation: 3 1 5 2 3 3 4 2 1 2 5 3 1 4 3
Using "1" as separators: 3 1 5 2 3 3 4 2 1 2 5 3 1 4 3
Using "1" and "2" as separators: 2 1 5 2 3 3 4 2 1 2 5 3 1 4 2
Using "1" to "3" as separators: 2 1 3 2 3 3 4 2 1 2 5 3 1 3 2
Final organization: 2 1 3 2 3 3 4 2 1 2 4 3 1 3 2

Figure 8: Flowchart of the algorithm.

Firstly, obtain all the
segments of a genome between adjacent mediators (i.e. the integer series between 1's). Set the smallest values
of these segments to 2. Secondly, obtain all the segments with 1's and 2's as separators. Set the smallest values
of these segments to 3. Continue until the highest level of the organization. Figure 7 shows the detailed steps of
a sample simplifying procedure. It transforms a 5-level sample organization of the IR system to a 4-level one. In
the simplified organization, all mediators and aggregators have no less than two sources.
The simplifying procedure is employed to achieve higher utility. At the same time, the number of organization
instances we have to evaluate for every representation is reduced to one.
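The simplification procedure maps directly to code; a sketch reproducing the steps of Figure 7 (Python for illustration, function name ours):

```python
def simplify(rep):
    """Remove one-subordinate aggregators from a representation.

    For each level, split the array at digits <= level and raise the
    smallest digits of every segment to level+1, so every mediator and
    aggregator ends up with at least two sources."""
    rep = list(rep)
    for level in range(1, max(rep)):
        segment, segments = [], []
        for idx, digit in enumerate(rep):
            if digit <= level:          # separator for this level
                if segment:
                    segments.append(segment)
                segment = []
            else:
                segment.append(idx)
        if segment:
            segments.append(segment)
        for indices in segments:
            smallest = min(rep[i] for i in indices)
            for i in indices:
                if rep[i] == smallest:
                    rep[i] = level + 1  # no-op when already level+1
    return rep

# The 5-level sample of Figure 7 becomes the 4-level simplified form:
print(simplify([3, 1, 5, 2, 3, 3, 4, 2, 1, 2, 5, 3, 1, 4, 3]))
```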
5.3 Implementation and Evaluation Criteria
In the case study of the IR model, the optimization is carried out using a genetic algorithm with a
population of organizations represented by arrays, the hierarchical crossover, and the mutation of small
perturbation described in the sections above. The utility value serves as the fitness measure of an individual
organization. If the arrival rate exceeds the service rate at one or more points, resulting in infinite queues, the
fitness of the organization is penalized: systems with one infinite queue are assigned a fitness of –2500, and for
each additional infinite queue, another 500 is deducted from the fitness.
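The fitness assignment described above can be sketched as follows (an assumed helper combining Eq. (8) with the infinite-queue penalty; the function name is ours):

```python
def fitness(response_recall, response_time, num_infinite_queues):
    """Utility of Eq. (8) when the organization is feasible; otherwise
    a penalty of -2500 for the first infinite queue and a further -500
    for each additional one."""
    if num_infinite_queues > 0:
        return -2500.0 - 500.0 * (num_infinite_queues - 1)
    return 1000.0 * response_recall - response_time / 10.0
```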
We recognize that there are likely multiple optimal solutions that achieve the same utility in a given
system environment, owing to the symmetry of the structures. Besides, the building blocks that may lead to a
good solution need to be maintained in the population. Therefore, we need a method that allows growth in
several promising areas in the search space. In other words, the diversity of the population should be enhanced
and over-convergence should be avoided. We increase the competition between similar individuals by applying
the restricted tournament selection (RTS) method described in [6]. It helps to preserve diverse building blocks
needed to locate the optimal organization. A flowchart of the algorithm is shown in Figure 8.
We compare the proposed algorithm, called hierarchical genetic algorithm (HGA), with the standard genetic
algorithm using one-point crossover with bit-wise mutation (SGA1) and two-point crossover with bit-wise
mutation (SGA2) in order to show the benefits of the newly introduced operators. We examine the algorithms in
two aspects, the accuracy and the stability of search, which are evaluated using the parameters, average
percentage relative error (APRE) and success rate (SR), respectively. They are derived using the following
equations.
The percentage relative error (PRE) can be calculated by:
PRE = (fbest – f)/fbest × 100    (9)
where fbest is the best known fitness value among all the runs of all the algorithms for a given test case, and f is
the current fitness value achieved by the algorithm. APRE is the average of the PRE values among all the
independent runs of each test case.
SR is a number between 0 and 1 that denotes the ratio of the number of runs in which the best known solution is
found by the algorithm to the total number of runs in each test case. Since GAs involve stochastic initialization
of solution candidates, selection, crossover, and mutation, the stability of search is also an important factor that
we should take into account.
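Both evaluation criteria can be sketched as follows (function names ours):

```python
def apre(run_fitnesses, f_best):
    """Average percentage relative error over independent runs, Eq. (9)."""
    pre = [(f_best - f) / f_best * 100.0 for f in run_fitnesses]
    return sum(pre) / len(pre)

def success_rate(run_fitnesses, f_best, tol=1e-9):
    """Fraction of runs that reached the best known fitness."""
    hits = sum(1 for f in run_fitnesses if abs(f - f_best) <= tol)
    return hits / len(run_fitnesses)
```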
We examine the test cases of 12, 14, 16, 18, 20, 22, 24, 26, 28, and 30 databases. The maximum height of the
structures is set to be 4. The population size and the maximum number of candidate evaluations used are shown
in Table 1. All algorithms use a window size w=5 for RTS in the population updating stage. The mutation rate is
0.1. All the test cases involve 10 independent runs.
The environment parameters of the IR model are set as follows: message latency = 20 milliseconds, process
service rate = 10 per second, response service rate = 20 per second, and query rate = 3 per second. The search
set size and query set size are set to be the total number of mediators for each organization. The response recall
is therefore identical (100%) in all cases, and the utility is determined by the response time.
The best achieved fitness value in every generation is recorded and the best organization instance found after the
maximum number of candidate evaluations along with its fitness are used for calculating APRE and SR. In this
case study and many other applications, the computation time of the genetic operators and population updating
is negligible compared to that of the candidate evaluations. Moreover, when parallel computing is used, the
execution time depends on the number and quality of the machines used. Therefore, we conclude that the
number of candidate evaluations is a more suitable evaluation metric than computation time. When the same
machine is used, computation time is proportional to the number of candidate evaluations. All algorithms are
tested in MATLAB ver. 7.9.0.
Table 1: Configurations of HGA.
No. DBs Population Size No. of Candidate Evaluations
12 50 2,000
14 100 5,000
16 200 10,000
18 500 50,000
20 500 50,000
22 500 50,000
24 500 100,000
26 500 100,000
28 500 100,000
30 1,000 200,000
VI. Experimental Results
In this section we will firstly analyze the properties of the best solutions found by the algorithms so far.
Secondly, we will demonstrate the advantage of the proposed HGA over the standard GA with one-point and
two-point crossover in locating the best organization of the IR system.1
6.1 Best Organizations Found by the Algorithms
The characteristics of the best organizations found by the algorithms are listed in Table 2, and the
corresponding structures are shown in Figure 9. Since previous studies did not give comparison among the
highly rated organizations in different scenarios, it should be worthwhile for us to summarize their features.
Table 2: Characteristics of the Best Organizations.
No. of DBs   Representation of Best Organization   No. of Mediators   No. of Levels   Total No. of Agents   Fitness
12   33233133233   2   3   18   860.39
14   3233132331323   3   3   23   847.62
16   332313323133233   3   3   25   839.20
18   33233133233133233   3   3   27   832.27
20   4434342443434243434   1   4   33   821.60
22   332434341434342434434   2   4   37   813.90
24   43434243434143434243434   2   4   42   810.13
26   4434342443434143434243434   2   4   44   802.24
28   443434244343414434342443434   2   4   46   795.96
30   44343442443434414434342443434   2   4   48   790.06
Firstly, we may see that there is no node with more than 6 sources in the best organization of any test case
because it will cause an infinite queue in the current settings. If an aggregator has too many sources, it needs a
long time to collect and analyze the information from the sources, and is thus not optimal. Secondly, most of the
best found organizations are composed of the following strings: 3323, 33233, 443434. These baseline structures
of 5, 6, and 7 databases offer an advantage in efficiency and are assembled to constitute the best organization in
a larger scale. During the evolutionary search, they are identified by the algorithms as building blocks for
solutions with high fitness values. Thirdly, as the number of databases increases, the model has to deal with
more distributed load. It first seeks to introduce more mediators, and later the height of the structure is increased
to balance the transmission burden of the mediators. For example, 2 mediators are sufficient to handle a system
with 12 databases, but for a system with 18 databases, 3 mediators are needed. And in the 20-database case, a
3-level organization with 3 mediators is no longer adequate, therefore a 4th level is added. Since the height of
the structure is raised, the number of mediators is cut down to avoid unnecessary delay in assembling the data.
1 As the EOS method does not contain a detailed description of the algorithm, unfortunately, we are not able to
compare our algorithm with EOS.
It can be observed from Figure 9 that it is beneficial to group the databases at the bottom level as evenly as
possible, which is consistent with our intuition of a good organization design. In the test cases where there are
12, 18, 24, and 28 databases, balanced allocation can be realized. Perfect symmetry appears in the designs.
Similar efforts are made in the test cases of 14, 16, 20, 26, and 30 databases. Note that for the latter two
instances, the two mediators process different numbers of databases; however, the second-level aggregators
have exactly the same subordinate structures. The organizations shown in Figure 9(h&j) achieve higher fitness
values than the organizations in which both mediators have the same number of databases, which can be
represented as [443434 2 43434 1 443434 2 43434] and [4434344 2 443434 1 4434344 2 443434]
respectively. It is more
interesting to investigate the case where there are 22 databases. The tradeoff is so difficult that eventually an
unbalanced organization wins. Moreover, putting two or three databases at the penultimate level emerges as a
good choice in this kind of situation.

Figure 9: Best organizations found by the algorithm: (a) 12 databases; (b) 14 databases; (c) 16 databases;
(d) 18 databases; (e) 20 databases; (f) 22 databases; (g) 24 databases; (h) 26 databases; (i) 28 databases;
(j) 30 databases.
6.2 Comparison of Results
Table 3 shows the APRE of SGA1, SGA2, and HGA in the 10 test cases, and the SR values are
displayed in Table 4. The best value for each test case is highlighted. It can be observed that the accuracy of the
proposed HGA is better than that of SGA1 and SGA2 in 9 out of the 10 cases; only in the 18-database case does
SGA2 outperform SGA1 and HGA in terms of APRE.
Table 3: Average Percentage Relative Error.
No. DBs SGA1 SGA2 HGA
12 0.1103 0.1122 0.0370
14 0.0090 0.0460 0
16 0.0966 0.0869 0
18 0.0940 0.0372 0.0505
20 0.1150 0.3076 0.0749
22 0.2037 0.3085 0.0031
24 0.3376 0.4914 0.0406
26 0.1556 0.3494 0
28 0.2104 0.5307 0.0067
30 0.2470 0.4825 0
Regarding the search ability, HGA also has an advantage over SGA1 and SGA2 in the majority of the test cases.
The superiority of HGA is more pronounced in larger-scale organizations which contain more than 20 database
nodes. In those cases, SGA1 and SGA2 fail to locate the best known organization instances most of the time,
whereas the proposed HGA still maintains high SR values of 90%–100%. This shows that HGA needs fewer
candidate evaluations to locate the best organization than the conventional GAs. Given that the candidate
evaluations are very computationally expensive in many real-world systems, it is beneficial to use HGA in such
circumstances.
Table 4: Success Rate.
No. DBs SGA1 SGA2 HGA
12 0.5 0.5 0.8
14 0.8 0.7 1
16 0.7 0.8 1
18 0.8 0.8 0.8
20 0.5 0.1 0.3
22 0.1 0 0.9
24 0.2 0 0.9
26 0.4 0.1 1
28 0.2 0 0.9
30 0.2 0.1 1
The non-parametric Wilcoxon signed-rank test is performed to judge whether there is a statistically significant
difference between HGA and SGA1/SGA2. As a pair-wise test in a multi-problem scenario, we use all the
APRE values of each algorithm as sample vectors. The null hypothesis H0 is "there is no difference
between HGA and SGA1/SGA2 in terms of the APRE values," and the alternative hypothesis H1 is
"the two methods are significantly different." A significance level of 0.05 is adopted, i.e. if the p-value of
the test is less than 0.05, the algorithms involved are considered to have different performance, and
the smaller the p-value, the more distinct they are from each other. The APRE values of HGA differ from
those of SGA1 with a p-value of 0.001953 and from those of SGA2 with a p-value of
0.003906, which suggests the proposed algorithm is statistically better than both SGAs.
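For reference, these exact p-values can be reproduced from the Table 3 data alone. The sketch below (not part of the original experiments) computes the exact two-sided Wilcoxon signed-rank p-value by brute force over all 2^10 sign assignments, which is feasible for 10 paired samples; it assumes no zero differences and no ties in the absolute differences, which holds for this data:

```python
from itertools import product

# APRE values copied from Table 3 (SGA1, SGA2, HGA over the 10 test cases)
sga1 = [0.1103, 0.0090, 0.0966, 0.0940, 0.1150, 0.2037, 0.3376, 0.1556, 0.2104, 0.2470]
sga2 = [0.1122, 0.0460, 0.0869, 0.0372, 0.3076, 0.3085, 0.4914, 0.3494, 0.5307, 0.4825]
hga  = [0.0370, 0.0000, 0.0000, 0.0505, 0.0749, 0.0031, 0.0406, 0.0000, 0.0067, 0.0000]

def wilcoxon_exact_p(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value.

    Assumes no zero differences and no ties in |differences|."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # Rank the differences by absolute value (rank 1 = smallest).
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    # Test statistic: the smaller of the positive and negative rank sums.
    w = min(sum(r for r, d in zip(ranks, diffs) if d > 0),
            sum(r for r, d in zip(ranks, diffs) if d < 0))
    total = n * (n + 1) // 2
    # Enumerate all 2^n sign assignments; count those at least as extreme as w.
    extreme = 0
    for signs in product((0, 1), repeat=n):
        s = sum(r for r, neg in zip(range(1, n + 1), signs) if neg)
        if min(s, total - s) <= w:
            extreme += 1
    return extreme / 2 ** n

print(wilcoxon_exact_p(sga1, hga))  # 0.001953125
print(wilcoxon_exact_p(sga2, hga))  # 0.00390625
```

The first comparison has all 10 differences in HGA's favor, giving the most extreme possible statistic (p = 2/1024); in the second, only the 18-database case goes against HGA (p = 4/1024), in agreement with the reported values.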
The performance graphs of the median runs (i.e. the 5th-best runs in our experiment) of SGA1, SGA2, and HGA
are shown in Figure 10. Owing to the specially designed genetic operators, HGA is able to locate good
solutions faster in most circumstances. When the number of databases is larger (especially over 20
databases), HGA regularly scores higher fitness than SGA1 and SGA2 when the same number of candidate
evaluations is used. It is also able to find better organizations within the maximum number of candidate
evaluations. From Figures 10(f)-(j) we can see that HGA has a remarkable advantage over SGA1 and SGA2 in
convergence speed.
Figure 10: Performance graph (panels (a)-(f): 12 to 22 databases).
VII. Comparison of HGA with the State-of-the-Art Multi-Agent Organization Design
Methodologies
While we have demonstrated the advantage of HGA's newly introduced operators over the traditional
GA operators, it is interesting to investigate how HGA performs compared with the search processes of the
state-of-the-art multi-agent organization design methodologies. In this section we explore the hierarchical
IR system using ODML [10] and KB-ORG [17], both previously mentioned in Section 2. Results are given
following the experimentation in Section 5.3.
Figure 10 (cont.): Performance graph (panels (g)-(j): 24 to 30 databases).
Figure 11: An example of equivalent organizations in ODML.
7.1 Comparison with ODML
In ODML, four approaches are listed to assist the search process: the exploitation of hard
constraints, equivalence classes, parallel search, and model abstraction. Rather than going through a decision
tree to verify whether an organization instance satisfies the hard constraints of the problem as ODML does, our
algorithm incorporates the array representation, which already guarantees satisfaction of the constraints on the
maximum height of the structure and the number of databases in the system. Parallel search and model
abstraction are also used intuitively in HGA.
In ODML, the agents are treated as three equivalence classes: the mediators, the aggregators, and the
databases. Within the same class, the agents are indistinguishable from one another; in other words, choosing
any agent in the "mediators" group for a mediator role in the IR organization yields the same result. Moreover,
the number of organization alternatives can be cut down by discarding organizations that are equivalent to an
existing one in the candidate pool. For instance, the organizations shown in Figure 11 are equivalent in ODML
in that their utility is exactly the same, so only one needs to be kept as an evaluation candidate.
Based on these notions, we have calculated the number of evaluations needed by ODML in the 10 test
cases of Section 5.3, with exploited hard constraints of 12 to 30 database nodes and a maximum structure
height of 4. All nodes in the organizations (except the leaf nodes) must have a minimum of two subordinates.
Details are shown in Table 5. It confirms that the number of organization instances increases exponentially as
the number of leaf-node agents increases, despite the truncation of redundant equivalent organizations. The
total number of evaluations can be approximated as O(2.1^N), where N is the number of leaf nodes. Comparing
Table 5 with Table 1, we can see that HGA uses far fewer candidate evaluations than ODML does. In particular,
as the number of databases grows, the fraction of candidate evaluations needed by HGA relative to the total
number of candidates becomes smaller and smaller. This saves a great amount of computational burden, as the
evaluation of utility functions can be extremely expensive.
Table 5: Number of organization Evaluations Needed for ODML.
No. DBs No. of Evaluations No. DBs No. of Evaluations
12 4,304 22 9,675,949
14 20,699 24 43,663,703
16 98,186 26 195,062,099
18 459,311 28 863,372,191
20 2,120,799 30 3,788,734,984
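The exponential trend can be checked directly against Table 5. Consecutive entries differ by two leaf nodes, so the per-leaf growth factor is the square root of the ratio between neighbouring counts; the short sketch below (an illustrative check, not part of the original analysis) shows these factors settling toward roughly 2.1, consistent with the O(2.1^N) estimate:

```python
# Evaluation counts from Table 5, for 12, 14, ..., 30 databases
counts = [4_304, 20_699, 98_186, 459_311, 2_120_799,
          9_675_949, 43_663_703, 195_062_099, 863_372_191, 3_788_734_984]

# Each step adds 2 leaf nodes, so the per-leaf growth factor
# is sqrt(counts[i+1] / counts[i]).
factors = [(b / a) ** 0.5 for a, b in zip(counts, counts[1:])]

print([round(f, 3) for f in factors])
# The factors decrease monotonically and approach ~2.1.
assert all(2.05 < f < 2.25 for f in factors)
```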
It should be noted that the proposed HGA is compatible with all of the above-mentioned search-space-reducing
measures; however, we retain equivalent organizations such as those in Figure 11, since they may contribute to
finding an optimal solution of the test problems. This compromise results in a larger search space for HGA,
whereas in ODML the elimination of redundant equivalent organizations narrows down the search range to a
great extent. When equivalent organizations are prevalent, ODML should benefit from the elimination measure.
Nevertheless, in the studied system, HGA still manages to evolve the population of organizations at a
reasonable pace, while sparing the computation time needed for branch pruning.
7.2 Comparison with KB-ORG
KB-ORG also places much effort on reducing the search space. Unlike ODML, it
emphasizes the use of design knowledge in the application and coordination of roles and design functions. With
good knowledge, a system can be designed at relatively affordable cost. However, in certain cases design
knowledge is hard to acquire, as it largely depends on the level of expertise of the designer. A barely trained
designer may have little experience to rely on when he or she tries to construct an organization for a multi-agent
system under the guidelines of KB-ORG. Moreover, design knowledge is not guaranteed to be accurate: when
taking a greedy approach at a certain decision step, the search process may leave out the optimal solutions. In
addition, design knowledge needs to be updated as environmental variables change; if the environmental
variables are altered, previous knowledge may no longer be applicable, and new knowledge must be added to
support the organization design.
In the IR model, the utility of the organizations does not involve spatial contents, and each role can be
performed by only one kind of agent, so no extra knowledge is required about either the spatial proximity of the
agents or role-agent binding. The main difficulty lies in the coordination of agents, e.g. how many levels of
hierarchy are needed. Assume that the designer has successfully searched out the best organizations for 12, 14,
16, and 18 databases. He may conclude that a 3-level hierarchy is best for the 20-database case. This reduces the
search space to 58,327 organizations, which is only 2.75% of ODML's search space, but it misses the highest-
rated organization, which has 4 levels and a utility of 821.60. The best 3-level organization can be expressed in
our proposed representation as [33233 1 33233 1 3233233], with a utility of 814.11, which is worse than the
worst utility (820.01) found by HGA within 50,000 evaluations in all runs. On the other hand, if the designer
settles on a relaxed bound of structure height of either 3 or 4 for the 20-database case, the number of
organization evaluations mounts to 2,120,662.
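The 2.75% figure follows directly from Table 5, which lists 2,120,799 ODML candidates for 20 databases (a one-line arithmetic check, included only for illustration):

```python
# 3-level-only search space vs. full ODML search space for 20 databases
fraction = 58_327 / 2_120_799
print(round(fraction * 100, 2))  # 2.75
```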
Let us further assume that the designer not only has knowledge about the vertical depth of the
organization structure, but also about its horizontal size. If, in the 22-database case, it can be speculated that
organizations with 4 levels and 2 mediators are optimal, the designer is faced with a search space of 3,384,278
options without duplicates; for organizations with 24 databases, 4 levels, and 2 mediators, the number is
12,686,252. If it can further be speculated that the highest-rated organization is made up of 4 levels and 2
mediators, with every mediator having 2 subordinate agents, the number of evaluations needed by KB-ORG is
282,812 and 800,996 for the test cases with 22 and 24 databases respectively, whereas HGA needs only 50,000
and 100,000 evaluations to reach a 90% success rate. Although design knowledge brings convenience in
searching for the highest-rated organization in these test cases, it is far from satisfactory. In contrast, our
algorithm searches for the highest-rated organization heuristically and is able to handle these test cases without
the assistance of external expertise.
VIII. Conclusion and Future Work
We have proposed a novel genetic-algorithm-based approach to the problem of designing the best
organization in hierarchical multi-agent systems. Complementary to existing methodologies that emphasize
pruning of the search space, our algorithm uses a bio-inspired evolutionary approach to lead the search to
promising areas of the search space, and is thus suitable for optimizing multi-agent systems with a great variety
of possible organizations, where designer expertise alone is insufficient or hard to acquire. In the example of the
information retrieval system, we have empirically shown that the algorithm is able to discover competitive
baseline structures in different systems and assemble them to obtain the highest-rated structure from up to 10^9
organization alternatives. In particular, we propose the use of hierarchical crossover and small-perturbation
mutation to add to the advantage of our algorithm. The new crossover and mutation methods greatly enhance
the search efficiency of HGA, improving its performance in both accuracy and stability of search.
With necessary modifications, the proposed algorithm is applicable to other models as well. It can be
used to optimize any tree-based hierarchical organization of multi-agent systems, given that proper fitness
values are assigned; application areas include scenario-tree and decision-tree optimization. Furthermore, the
proposed array representation can also be used for other forms of MAS organizations, such as holarchies. It is
worthwhile to further examine the performance of the algorithm for systems with non-uniform leaf nodes and a
variable number of leaf nodes using Representations 2 and 3. In subsequent studies, we will investigate the
efficiency of the proposed approach in larger-scale MASs involving a massive number of agents.
References
[1]. Aranha, C., and Iba, H. 2009. The memetic tree-based genetic algorithm and its application to portfolio optimization. Memetic
Computing 1(2): 139–151.
[2]. Bäck, T. 1996. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic
Algorithms. Oxford University Press US.
[3]. De Jong, K. A. 2006. Evolutionary Computation: A Unified Approach. Cambridge, MA: MIT Press.
[4]. Ferber, J., Gutknecht, O., and Michel, F. 2003. From agents to organizations: an organizational view of multi-agent systems. In:
Lecture Notes in Computer Science, 2935, Proc. Agent-Oriented Software Engineering 2003: 214–230.
[5]. Fernández, A., and Ossowski, S. 2008. Exploiting organisational information for service coordination in multiagent systems. In
Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 257–264, Estoril,
Portugal.
[6]. Harik, G. R. 1995. Finding multimodal solutions using restricted tournament selection. In Proceedings of the 6th International
Conference on Genetic Algorithms, 24–31. San Francisco, CA: Morgan Kaufmann Publishers Inc.
[7]. Holland, J. H. 1975. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.
[8]. Horling, B., and Lesser, V. 2005. Analyzing, modeling and predicting organizational effects in a distributed sensor network. Journal
of the Brazilian Computer Society 11(1): 9–30.
[9]. Horling, B., and Lesser, V. 2005. A survey of multi-agent organizational paradigms. The Knowledge Engineering Review 19(4):
281–316.
[10]. Horling, B., and Lesser, V. 2008. Using quantitative models to search for appropriate organizational designs. Autonomous Agents
and Multi-Agent Systems 16(2): 95–149.
[11]. Kirley, M. 2006. Dominance hierarchies and social diversity in multi-agent systems. Proceedings of the 8th Annual Conference on
Genetic and Evolutionary Computation (GECCO), 159–166. Seattle, Washington, USA.
[12]. Lesser, V. 1998. Reflections on the nature of multi-agent coordination and its implications for an agent architecture. Autonomous
Agents and Multi-Agent Systems, 1: 89–111.
[13]. Li, B., Yu, H., Shen, Z., and Miao, C. 2009. Evolutionary organizational search. In Proceedings of the 8th International Conference
on Autonomous Agents and Multiagent Systems - Volume 2, 1329–1330. Budapest, Hungary.
[14]. Nan, G., Li, M., and Kou, J. 2005. Multi-level genetic algorithm (MLGA) for the construction of clock binary tree. In Proceedings
of the 2005 Conference on Genetic and Evolutionary Computation, 1441–1445. Washington DC, USA: ACM.
[15]. Okamoto, S., Scerri, P., and Sycara, K. 2008. The impact of vertical specialization on hierarchical multi-agent systems. In
Proceedings of the 23rd AAAI Conference on Artificial Intelligence, 138–143.
[16]. Phelps, S., McBurney, P., and Parsons, S. 2010. Evolutionary mechanism design: a review. Autonomous Agents and Multi-Agent
Systems, 21: 237–264.
[17]. Sims, M., Corkill, D., and Lesser, V. 2008. Automated organization design for multi-agent systems. Autonomous Agents and Multi-
Agent Systems 16(2): 151–185.
[18]. Vázquez-Salceda, J., Dignum, V., and Dignum, F. 2005. Organizing multiagent systems. Autonomous Agents and Multi-Agent
Systems, 11: 307–360.
[19]. Wooldridge, M., Jennings, N. R., and Kinny, D. 2000. The Gaia methodology for agent-oriented analysis and design. Autonomous
Agents and Multi-Agent Systems, 3: 285–312.
[20]. Yang, J. and Luo, Z. 2007. Coalition formation mechanism in multi-agent systems based on genetic algorithms. Applied Soft
Computing, 7: 561–568.
[21]. Zafar, H., Lesser, V., Corkill, D., and Ganesan, D. 2008. Using organization knowledge to improve routing performance in wireless
multi-agent networks. In Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems - Volume
2, 821–828.