The document describes a research platform for coevolving software agents that interact in a producer/consumer economic world. The platform allows agents to evolve strategies for allocating resources to different production technologies and maximizing profits. It provides a controlled environment for examining emergent behaviors from coevolution and how system parameters affect those behaviors. The design uses object-oriented classes like producerAgent and marketAgent to represent the agents and economic rules in a modular, extensible way for ongoing experiments.
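The abstract names producerAgent and marketAgent as the core classes; a minimal sketch of how such a pair might fit together is below. Only those two class names come from the document; the budgets, allocations, yields, and prices are invented for illustration.

```python
# Hypothetical sketch of the platform's two core classes. Only the
# names producerAgent and marketAgent come from the document; the
# budgets, allocations, yields, and prices are invented.

class producerAgent:
    """Allocates a resource budget across production technologies."""
    def __init__(self, budget, allocation):
        self.budget = budget          # total resources available
        self.allocation = allocation  # fraction of budget per technology

    def produce(self, yields):
        # Output of each technology = allocated resources x unit yield.
        return [self.budget * a * y for a, y in zip(self.allocation, yields)]

class marketAgent:
    """Applies the economic rules: prices output, charges resource cost."""
    def __init__(self, prices, unit_cost):
        self.prices = prices
        self.unit_cost = unit_cost

    def profit(self, producer, yields):
        output = producer.produce(yields)
        revenue = sum(q * p for q, p in zip(output, self.prices))
        return revenue - producer.budget * self.unit_cost

p = producerAgent(budget=10.0, allocation=[0.7, 0.3])
m = marketAgent(prices=[2.0, 5.0], unit_cost=1.0)
print(m.profit(p, yields=[1.0, 1.5]))  # 7*2 + 4.5*5 - 10 = 26.5
```

Coevolution would then mutate and recombine the `allocation` vectors, selecting on the profit returned by the market agent.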
The potential role of AI in the minimisation and mitigation of project delay (Pieter Rautenbach)
Artificial intelligence (AI) can have wide-reaching applications within the construction industry; however, this set of technologies is currently under-exploited in practice. This paper considers the role that AI can play in optimising the efficiency of project execution, and how this can potentially reduce project duration and minimise and mitigate delay on projects.
Artificial Intelligence in Robot Path Planning (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Review on Algorithmic and Non Algorithmic Software Cost Estimation Techniques (ijtsrd)
Effective software cost estimation is one of the most challenging and important activities in software development. Developers want a simple and accurate method of effort estimation. Estimating cost before work begins is a prediction, and predictions are not always accurate. Software effort estimation is a critical task in software engineering, and a suitable estimation technique is crucial for controlling quality and efficiency. This paper reviews the available software effort estimation methods, focusing on algorithmic and non-algorithmic models. The existing methods are illustrated and their aspects discussed. No single technique is best for all situations, so a careful comparison of the results of several approaches is most likely to produce realistic estimates. The paper provides a detailed overview of existing software cost estimation models and techniques, presents the strengths and weaknesses of the various methods, and examines some of the common causes of inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu, "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 3, Issue 5, August 2019. PDF: https://www.ijtsrd.com/papers/ijtsrd26511.pdf Paper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
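As a concrete instance of the algorithmic models such reviews cover, Basic COCOMO estimates effort as a power law of code size. The sketch below uses the classic published coefficient pairs; whether this paper singles out COCOMO is an assumption, and the 32-KLOC project is an invented example.

```python
# Basic COCOMO, a canonical algorithmic estimation model: effort in
# person-months as a power law of size in KLOC. The (a, b) pairs are
# the classic published values; the 32-KLOC project is invented.

COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b   # person-months

print(round(cocomo_effort(32.0), 1))  # roughly 91 person-months
```

Non-algorithmic methods (expert judgment, analogy, machine learning) avoid fixing a formula like this, which is exactly the trade-off the review compares.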
1. The document discusses simulation as a technique used to study and analyze the behavior of systems over time. Simulation involves creating a computer-based model of a real-world system to draw conclusions about how it operates.
2. Simulation can be used for task training, decision-making, scientific research, and predicting the behavior of natural systems. It allows testing alternatives without committing resources.
3. The document provides examples of how simulation can be used to model the operations of cooperative societies and banks to help students better understand commercial mathematics topics.
1. The document discusses simulation as a technique used to study and analyze the behavior of actual or theoretical systems by creating computer-based models. It is used when directly studying real systems is not possible or practical.
2. Simulation models can be static or dynamic, discrete or continuous, and deterministic or stochastic. They are composed of mathematical and logical relationships that are analyzed using numerical rather than analytical methods.
3. Simulation has many applications including manufacturing and materials handling systems. It allows testing designs and systems virtually before implementing them in the real world. It provides insights into how systems work and which variables most impact performance.
The document presents an Aviation System Risk Model (ASRM) developed by NASA and the FAA to assess risks from low probability, high consequence aviation accidents. The ASRM uses Bayesian belief networks to model causal factors and their probabilistic relationships leading to different types of accidents. It was developed through analyzing accident case studies and expert knowledge elicitation. The model identifies precursors from accident reports and inserts new technologies to evaluate their potential risk mitigation effectiveness.
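The Bayesian-belief-network machinery behind a model like the ASRM can be shown in miniature: a causal precursor node feeds an accident node, and a candidate technology is evaluated by how much it suppresses the precursor. All probabilities below are invented for illustration, not taken from the ASRM.

```python
# Toy Bayesian-belief-network fragment in the spirit of the ASRM: one
# precursor node feeding an accident node. All probabilities here are
# invented for illustration, not taken from the actual model.

def p_accident(p_precursor, p_acc_given_pre=0.05, p_acc_given_no=0.001):
    # Marginalize the accident node over the precursor node.
    return (p_precursor * p_acc_given_pre
            + (1 - p_precursor) * p_acc_given_no)

baseline  = p_accident(p_precursor=0.20)   # before inserting technology
mitigated = p_accident(p_precursor=0.05)   # technology suppresses precursor
print(baseline, mitigated)  # marginal accident risk drops roughly threefold
```

The real ASRM chains many such nodes, with conditional probabilities elicited from accident case studies and experts rather than assumed.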
Performance Analysis of Genetic Algorithm as a Stochastic Optimization Tool i... (paperpublications3)
Abstract: Engineering design problems are complex by nature because their objective functions involve many variables and constraints. Engineers have to ensure compatibility with the imposed specifications while keeping manufacturing costs low. Moreover, the methodology may vary with the design problem.
The main issue is choosing the proper optimization tool. In earlier days, design problems were optimized with conventional techniques such as gradient search, evolutionary optimization, and random search, known as classical methods.
The method has to be chosen properly depending on the nature of the problem; an incorrect choice may fail to reach the optimal solution, so these methods are less robust.
Nowadays, soft-computing techniques are widely used for optimizing functions and are more robust. The genetic algorithm is one such method: an effective tool in the realm of stochastic (non-classical) optimization. The algorithm evolves many strings over generations to reach the optimal point.
The main objective of the paper is to optimize engineering design problems using the genetic algorithm and to analyze how effectively and closely the algorithm reaches the optima. We choose a mathematical expression for the objective function in terms of the design variables and optimize it under the given constraints using GA.
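A minimal real-coded GA of the kind described, with tournament selection, arithmetic crossover, and Gaussian mutation, can be sketched as follows; the one-variable objective and box constraint are invented stand-ins for an engineering design function.

```python
import random

# Minimal real-coded GA: evolve candidate designs toward the optimum
# of an objective under a box constraint. The objective -(x-3)^2 and
# bounds [0, 10] are invented examples; the true maximum is at x = 3.

def fitness(x):
    return -(x - 3.0) ** 2

def ga(pop_size=40, gens=100, lo=0.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                         # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        for _ in range(pop_size):
            child = 0.5 * (pick() + pick()) # arithmetic crossover
            child += rng.gauss(0, 0.1)      # Gaussian mutation
            children.append(min(hi, max(lo, child)))  # repair to bounds
        pop = children
    return max(pop, key=fitness)

best = ga()
print(best)   # converges close to 3.0
```

The paper's constrained problems would replace `fitness` with the chosen objective plus a penalty for constraint violations.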
This document discusses a model for analyzing how network connectivity impacts asset returns and risks. The model augments a traditional multi-factor model to account for systemic links between assets represented by a network connectivity matrix. The model shows that network links inflate asset loadings to common factors, impacting expected returns and total risk decomposition into systematic and idiosyncratic components. Greater network connectivity reduces diversification benefits by slowing the decrease in portfolio idiosyncratic risk as the number of assets increases. The authors propose extending the model to incorporate heterogeneous asset responses to links and time-varying network structures.
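The diversification point has a simple closed-form illustration: if network links induce an average pairwise correlation rho between the assets' idiosyncratic shocks, the equal-weighted portfolio's idiosyncratic variance decays toward a floor of rho*sigma2 instead of zero. This is a textbook identity, not the paper's exact model, and the sigma2 and rho values are illustrative.

```python
# Closed-form illustration of the diversification claim: correlated
# idiosyncratic shocks (a crude proxy for network connectivity) put a
# floor under portfolio idiosyncratic variance as N grows. The sigma2
# and rho values are illustrative, not from the paper.

def portfolio_idio_var(n, sigma2=0.04, rho=0.0):
    # Var((1/n) * sum eps_i) with Var(eps_i) = sigma2, Corr = rho.
    return sigma2 / n + (1 - 1 / n) * rho * sigma2

for n in (10, 100, 1000):
    print(n, portfolio_idio_var(n), portfolio_idio_var(n, rho=0.3))
```

With rho = 0 the variance vanishes like 1/n; with rho = 0.3 it stalls near 0.012, which is the "slower decrease" in diversification benefits the model describes.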
A preliminary survey on optimized multiobjective metaheuristic methods for da... (ijcsit)
The present survey provides the state of the art of research devoted to evolutionary approaches (EAs) for clustering, exemplified with a diversity of evolutionary computations. The survey provides a nomenclature that highlights aspects that are very important in the context of evolutionary data clustering. The paper surveys the clustering trade-offs addressed by the wide range of multi-objective evolutionary approach (MOEA) methods. Finally, the study addresses the potential challenges of MOEA design and data clustering, along with conclusions and recommendations for novices and researchers, identifying the most promising paths for future research.
A Comparison of Traditional Simulation and MSAL (6-3-2015) (Bob Garrett)
This document compares traditional simulation approaches to the Model-Simulation-Analysis-Looping (MSAL) approach. It provides background information on system modeling and simulation basics, including conceptual models, simulation programs, sensitivity analysis, Monte Carlo methods, and simulation optimization. It then discusses risk and uncertainty, modeling systems of systems, and the current state of modeling and simulation in systems engineering. Finally, it introduces the MSAL approach, which uses graphs, analytics, and repeated simulation loops to address the increased complexity and uncertainty in systems of systems compared to traditional approaches. The MSAL approach aims to provide benefits like improved handling of uncertainty and complexity.
Conditional planning deals with incomplete information by constructing conditional plans that account for possible contingencies. The agent includes sensing actions to determine which part of the plan to execute based on conditions. Belief networks are constructed by choosing relevant variables, ordering them, and adding nodes while satisfying conditional independence properties. Inference in multi-connected belief networks can use clustering, conditioning, or stochastic simulation methods. Knowledge engineering for probabilistic reasoning first decides on topics and variables, then encodes general and problem-specific dependencies and relationships to answer queries.
Systemic and Systematic risk - Monica Billio, Massimiliano Caporin, Roberto Panzica, Loriana Pelizzon
SYRTO Code Workshop
Workshop on Systemic Risk Policy Issues for SYRTO (Bundesbank-ECB-ESRB)
Head Office of Deutsche Bundesbank, Guest House
Frankfurt am Main - July 2, 2014
Cost Estimation Predictive Modeling: Regression versus Neural Network (mustafa sarac)
This document compares regression and neural network models for cost estimation predictive modeling. It summarizes previous research applying regression and neural networks to cost estimation problems in various fields. The document then describes a study that systematically examines the performance of regression versus neural networks for cost estimation under different data conditions, including varying data set sizes, imperfections from noise, and sampling bias. Both simulated and real-world data sets are used to compare the accuracy, variability, and usability of regression and neural network models for cost estimation.
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
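The achievement scalarizing function at the heart of the methodology can be written down directly. The sketch below uses the standard augmented (Wierzbicki-style) form, which may differ in detail from the paper's exact variant; the reference point, weights, and candidate objective vectors are invented.

```python
# Standard augmented achievement scalarizing function: given a
# reference point z and weights w, minimizing the ASF drives the
# search toward the Pareto front near z. This is the textbook form;
# the paper's exact variant may differ.

def asf(f, z, w, rho=1e-4):
    terms = [(fi - zi) / wi for fi, zi, wi in zip(f, z, w)]
    return max(terms) + rho * sum(terms)   # augmentation term breaks ties

# The candidate objective vector closer to the reference point scores
# lower (minimization of both objectives assumed).
z, w = [0.0, 0.0], [1.0, 1.0]
print(asf([0.3, 0.4], z, w) < asf([0.2, 0.8], z, w))  # True
```

Perturbing the reference point z, as the methodology's last stage does, shifts which region of the front the minimizer of the ASF lands on.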
Using Dempster-Shafer Theory and Real Options Theory (Eric van Heck)
This document discusses combining Dempster-Shafer theory and real options theory to assess competing strategies for implementing IT infrastructures. It presents a model that uses an evidential reasoning approach based on Dempster-Shafer theory to account for risks and uncertainties, and real options analysis to value the flexibility in multi-stage investment strategies. The model is applied in a case study to define a strategy for implementing a human resource management application at a large European service provider. The key benefits of combining these approaches are that it provides flexible decision support that takes into account both quantitative and qualitative factors in making implementation decisions under uncertainty.
Kenneth Lloyd introduces category theory as a potential language for scientific discourse in agent-based modeling and simulation (ABMS). Category theory defines mathematical structures and relationships between them. Lloyd argues that agents can be considered as structures within a category. He provides an example of applying category theory concepts like functors to represent functional objects and agents. Finally, Lloyd discusses how category theory may provide a formalism for describing emergent properties in multi-agent systems and validating hypotheses through simulation.
Interpretable machine learning: Methods for understanding complex models (Manojit Nandi)
1. Interpretability helps understand complex machine learning models by explaining their outcomes based on inputs. Higher predictive accuracy often reduces interpretability.
2. Methods like LIME and SHAP attribute model outcomes to input features through local surrogate models and game theory.
3. Recourse analysis identifies actions individuals could take to improve outcomes from automated decisions.
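The local-surrogate idea behind LIME can be shown in miniature: probe a black-box model with perturbed inputs around one instance, then fit a simple linear surrogate whose coefficient explains the model's local behavior. This is the concept only, not the LIME library's API; the black box and probe point are invented.

```python
import random

# LIME's core idea in miniature: perturb inputs near one instance,
# query the black box, and fit a local linear surrogate. The black
# box f(x) = x^2 and the probe point x0 = 2 are invented examples.

def black_box(x):
    return x * x          # stand-in for a complex model

def local_slope(x0, n=2000, scale=0.1, seed=0):
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, scale) for _ in range(n)]
    ys = [black_box(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var      # least-squares slope of the linear surrogate

print(local_slope(2.0))   # close to d/dx x^2 at x = 2, i.e. about 4
```

Real LIME additionally weights samples by proximity and works over interpretable feature representations, but the explanation it returns is exactly this kind of local coefficient.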
OPTIMIZATION IN ENGINE DESIGN VIA FORMAL CONCEPT ANALYSIS USING NEGATIVE ATTR... (csandit)
There is an exhaustive body of study in the area of engine design covering different methods that try to reduce production costs and optimize engine performance. Mathematical methods based on statistics, self-organizing maps, and neural networks achieve the best results in these designs, but configuring these methods is not easy due to the high number of parameters that have to be measured.
The document discusses analogical reasoning and case-based reasoning. It provides an overview of research in these areas including structure mapping theory, models of analogical processing like SME and MAC/FAC, and case-based reasoning systems. It proposes an analogy ontology to integrate analogical processing and first-principles reasoning by providing a formal representation of analogy concepts and results.
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of N... (SYRTO Project)
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of Network Connectivity on Factor Exposures, Asset pricing and Portfolio Diversification” by Billio, Caporin, Panzica and Pelizzon. Arjen Siegmann. Amsterdam - June, 25 2015. European Financial Management Association 2015 Annual Meetings.
Slides accompanying Malcolm Moore’s 2014 webcast on statistical and predictive modelling where he demonstrates JMP as an effective tool for exploratory data analysis, and JMP Pro as an expert modelling tool that scales to any number of Xs and Ys, is effective with messy data, and reduces the risk of selecting the wrong model. Watch the webcasts at http://www.jmp.com/uk/about/events/webcasts/
Towards a System Dynamics Modeling Method Based on DEMATEL (ijcsit)
This document proposes a new method for constructing system dynamics models that combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with system dynamics modeling. DEMATEL is first used to systematically define and quantify causal relationships between variables in a system. The results from DEMATEL, including impact relation maps and a total influence matrix, are then used to derive the causal loop diagram and define variable weights in the stock-flow chart equations of the system dynamics model. This combined method aims to overcome limitations and subjectivity in traditional system dynamics modeling that relies solely on decision makers' mental models.
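The DEMATEL step follows a standard recipe: normalize the direct-influence matrix by its largest row sum, accumulate indirect effects to obtain the total-influence matrix T = N + N² + N³ + ..., then read cause/effect rankings from T's row sums (influence given) and column sums (influence received). A sketch with an invented 3x3 matrix:

```python
# Standard DEMATEL computation: normalize the direct-influence matrix
# A, then sum the matrix power series to get the total-influence
# matrix T. The 3x3 example matrix is invented.

def dematel_total(A, terms=200):
    n = len(A)
    s = max(sum(row) for row in A)
    N = [[a / s for a in row] for row in A]        # normalized matrix
    T = [[0.0] * n for _ in range(n)]
    P = [row[:] for row in N]                      # current power N^k
    for _ in range(terms):                         # T = N + N^2 + ...
        for i in range(n):
            for j in range(n):
                T[i][j] += P[i][j]
        P = [[sum(P[i][k] * N[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return T

A = [[0, 3, 2],
     [1, 0, 3],
     [2, 1, 0]]
T = dematel_total(A)
D = [sum(row) for row in T]                        # influence given
R = [sum(T[i][j] for i in range(3)) for j in range(3)]  # influence received
print(D, R)   # D[i] + R[i] ranks prominence, D[i] - R[i] ranks cause vs effect
```

The proposed method then uses D and R (via the impact-relation map) to fix the causal loop diagram and variable weights, replacing purely mental-model judgments.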
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on... (IJERA Editor)
Cost estimating at the schematic design stage, as the basis of project evaluation, engineering design, and cost management, plays an important role in project decisions made under a limited definition of scope, constraints on available information and time, and the presence of uncertainties. The purpose of this study is to compare the performance of cost estimation models built with two different hybrid artificial intelligence approaches: regression analysis-adaptive neuro-fuzzy inference system (RANFIS) and case-based reasoning-genetic algorithm (CBR-GA). The models were developed from the same 50 low-cost apartment project datasets in Indonesia. Tested on another five testing datasets, the models proved to perform very well in terms of accuracy. The CBR-GA model was the best performer but suffered from the disadvantage of needing 15 cost drivers, compared to only 4 required by RANFIS for on-par performance.
Automatically Estimating Software Effort and Cost using Computing Intelligenc... (cscpconf)
In the IT industry, precisely estimating the effort, development cost, and schedule of each software project counts for much to a software company, so precise estimation of manpower is becoming ever more important. In the past, IT companies estimated work effort through human experts using statistical methods, but the outcomes often left management unsatisfied. Recently, whether computing intelligence techniques can do better in this field has become an interesting topic. This research uses computing intelligence techniques such as the Pearson product-moment correlation coefficient and one-way ANOVA to select key factors, and the K-Means clustering algorithm to cluster projects, in order to estimate software project effort. The experimental results show that computing intelligence techniques yield more precise and more effective estimates than traditional human experts did.
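The clustering-based estimation step can be sketched end to end: cluster historical projects on a key factor with K-Means, then estimate a new project's effort as the mean effort of its nearest cluster. The project data below are invented, and the study's factor-selection stage (Pearson correlation, ANOVA) is omitted.

```python
import random

# Sketch of the estimation pipeline's tail: 1-D K-Means over a key
# project factor, then prediction by nearest-cluster mean effort.
# The (size factor, effort) pairs are invented.

projects = [(10, 50), (12, 60), (11, 55),
            (40, 200), (42, 220), (38, 190)]

def kmeans_1d(xs, k=2, iters=20, seed=0):
    centers = random.Random(seed).sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

centers = kmeans_1d([s for s, _ in projects])

def estimate(size):
    nearest = lambda s: min(range(len(centers)), key=lambda i: abs(s - centers[i]))
    c = nearest(size)
    members = [e for s, e in projects if nearest(s) == c]
    return sum(members) / len(members)   # mean effort of the cluster

print(estimate(41))   # mean effort of the large-project cluster
```

Real K-Means would run over several selected factors at once, but the predict-by-cluster-mean step is the same.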
A NEW APPROACH IN DYNAMIC TRAVELING SALESMAN PROBLEM: A HYBRID OF ANT COLONY ... (ijmpict)
Nowadays, swarm intelligence-based algorithms are widely used to optimize the dynamic traveling salesman problem (DTSP). In this paper, we have used a mixed method of Ant Colony Optimization (ACO) and gradient descent to optimize the DTSP, which differs from the standard ACO algorithm in its evaporation rate and innovative data. This approach prevents premature convergence, allows escape from local optima, and makes it possible for the algorithm to find better solutions. In comparison with some former methods, the proposed hybrid of gradient descent and ACO shows significantly improved route optimization.
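A compact ACO sketch of the kind the paper builds on is below, showing only tour construction, pheromone evaporation (the parameter the authors modify), and reinforcement; the gradient-descent hybrid step and the dynamic city updates of DTSP are omitted, and the city coordinates are random.

```python
import random, math

# Minimal ACO for a static TSP instance: ants build tours weighted by
# pheromone/distance, trails evaporate at rate `evap`, and the best
# tour so far is reinforced. Gradient-descent hybridization and
# dynamic city changes (the paper's contributions) are omitted.

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def aco(pts, ants=20, iters=50, evap=0.5, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    tau = [[1.0] * n for _ in range(n)]            # pheromone levels
    best, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] / (math.dist(pts[i], pts[j]) + 1e-9)
                           for j in cand]
                tour.append(rng.choices(cand, weights=weights)[0])
            L = tour_len(tour, pts)
            if L < best_len:
                best, best_len = tour, L
        # Evaporate all trails, then reinforce the best-so-far tour.
        tau = [[t * (1 - evap) for t in row] for row in tau]
        for i in range(n):
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(8)]
tour, length = aco(pts)
print(sorted(tour) == list(range(8)))  # a valid tour visits every city once
```

Lowering `evap` makes old trails persist, which speeds convergence but risks the premature convergence the paper is trying to prevent.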
The document discusses the GRASP (General Responsibility Assignment Software Principles) patterns and principles for assigning responsibilities in object-oriented design. It defines GRASP as helping to clearly outline which objects are responsible for which actions. There are nine GRASP principles covered: Creator, Controller, Information Expert, Low Coupling, High Cohesion, Indirection, Polymorphism, Protected Variations, and Pure Fabrication. These principles provide guidelines for assigning responsibilities to classes to achieve well-structured and maintainable code. The document then explains each principle in more detail using a chess game as an example domain.
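Information Expert, one of the nine principles listed, is easy to show with the chess domain the document uses: the class that owns the data answers questions about it, rather than an external manager class. The class and method names below are invented for illustration.

```python
# Information Expert from GRASP, in the document's chess domain: the
# Board owns the square-to-piece mapping, so the Board answers
# occupancy queries. Class and method names are invented.

class Board:
    def __init__(self):
        self._pieces = {}                  # square -> piece name

    def place(self, square, piece):
        self._pieces[square] = piece

    def piece_at(self, square):
        # The class that holds the data carries the responsibility.
        return self._pieces.get(square)

    def is_occupied(self, square):
        return square in self._pieces

board = Board()
board.place("e4", "white pawn")
print(board.piece_at("e4"), board.is_occupied("d5"))  # white pawn False
```

Putting `is_occupied` anywhere else would force that other class to reach into the board's internals, raising coupling, which is exactly what the Low Coupling principle warns against.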
The document outlines several web design guidelines for creating usable and accessible websites, including: following Fitts' Law to make interactive elements easy to select; providing feedback to users on their actions; reusing users' experience through consistent navigation; using scroll bars sparingly; keeping sentences and paragraphs short; optimizing images for different bandwidths; and avoiding overly heavy pages. The goal is to design intuitive sites that minimize learning curves and user effort.
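Fitts' Law, behind the first guideline above, has a standard quantitative (Shannon) form: predicted pointing time grows with distance to the target and shrinks with target width. The device constants a and b below are illustrative, not measured values.

```python
import math

# Fitts' Law in its Shannon formulation: time to acquire a target at
# distance d with width w. The constants a and b are device-dependent;
# the values here are illustrative only.

def fitts_time(d, w, a=0.1, b=0.15):
    return a + b * math.log2(d / w + 1)

# Doubling a button's width lowers the predicted acquisition time,
# which is why interactive elements should be easy to select.
print(fitts_time(d=400, w=20) > fitts_time(d=400, w=40))  # True
```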
This document provides course information for CS 478 Machine Learning offered in the spring of 2003, including:
- An overview of machine learning techniques covered such as decision trees, Bayesian learning, neural networks, and clustering.
- Contact information for the instructor and teaching assistants.
- Details on evaluation including homework, a project, and exam worth 40%, 40%, and 20% respectively.
- Links to the course webpage and newsgroup for additional resources.
This document discusses algorithm-independent machine learning techniques. It introduces concepts like bias and variance which can be used to quantify how well a learning algorithm matches a problem, regardless of the specific algorithm used. It discusses techniques like cross-validation, resampling, and combining multiple classifiers that can improve performance in a way that is independent of the learning algorithm. The document also covers principles like minimum description length and no free lunch which provide theoretical foundations for algorithm-independent machine learning.
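Cross-validation is algorithm-independent in exactly the sense described: the same folding procedure wraps any learner. The sketch below uses a trivial mean-predictor as the "learner" purely to stay self-contained; the five data points are invented.

```python
# k-fold cross-validation, independent of the learning algorithm: each
# fold is held out once while the model trains on the rest. The
# "learner" here is a trivial mean predictor, chosen only to keep the
# sketch self-contained; any learner slots into the same loop.

def k_fold_mse(xs, ys, k=5):
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]
    errs = []
    for held in folds:
        train_y = [ys[i] for i in range(n) if i not in held]
        pred = sum(train_y) / len(train_y)        # "train" the mean model
        errs += [(ys[i] - pred) ** 2 for i in held]
    return sum(errs) / n                          # estimated test error

ys = [1.0, 2.0, 3.0, 4.0, 5.0]
print(k_fold_mse(list(range(5)), ys, k=5))
```

Swapping in a decision tree or neural network changes only the two "train"/predict lines, which is the point: the error estimate comes from the resampling scheme, not the algorithm.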
Types of Operating Systems on Computers and Mobile Phones, NAMA ... (butest)
The document discusses various types of computer and mobile-phone operating systems, such as DOS, UNIX, Windows, and Linux. It includes brief explanations of the history, features, and example versions of each operating system.
A preliminary survey on optimized multiobjective metaheuristic methods for da...ijcsit
The present survey provides the state-of-the-art of research, copiously devoted to Evolutionary Approach
(EAs) for clustering exemplified with a diversity of evolutionary computations. The Survey provides a
nomenclature that highlights some aspects that are very important in the context of evolutionary data
clustering. The paper missions the clustering trade-offs branched out with wide-ranging Multi Objective
Evolutionary Approaches (MOEAs) methods. Finally, this study addresses the potential challenges of
MOEA design and data clustering, along with conclusions and recommendations for novice and
researchers by positioning most promising paths of future research.
A Comparison of Traditional Simulation and MSAL (6-3-2015)Bob Garrett
This document compares traditional simulation approaches to the Model-Simulation-Analysis-Looping (MSAL) approach. It provides background information on system modeling and simulation basics, including conceptual models, simulation programs, sensitivity analysis, Monte Carlo methods, and simulation optimization. It then discusses risk and uncertainty, modeling systems of systems, and the current state of modeling and simulation in systems engineering. Finally, it introduces the MSAL approach, which uses graphs, analytics, and repeated simulation loops to address the increased complexity and uncertainty in systems of systems compared to traditional approaches. The MSAL approach aims to provide benefits like improved handling of uncertainty and complexity.
Conditional planning deals with incomplete information by constructing conditional plans that account for possible contingencies. The agent includes sensing actions to determine which part of the plan to execute based on conditions. Belief networks are constructed by choosing relevant variables, ordering them, and adding nodes while satisfying conditional independence properties. Inference in multi-connected belief networks can use clustering, conditioning, or stochastic simulation methods. Knowledge engineering for probabilistic reasoning first decides on topics and variables, then encodes general and problem-specific dependencies and relationships to answer queries.
Systemic and Systematic risk - Monica Billio, Massimiliano Caporin, Roberto Panzica, Loriana Pelizzon
SYRTO Code Workshop
Workshop on Systemic Risk Policy Issues for SYRTO (Bundesbank-ECB-ESRB)
Head Office of Deustche Bundesbank, Guest House
Frankfurt am Main - July, 2 2014
Cost Estimation Predictive Modeling: Regression versus Neural Networkmustafa sarac
This document compares regression and neural network models for cost estimation predictive modeling. It summarizes previous research applying regression and neural networks to cost estimation problems in various fields. The document then describes a study that systematically examines the performance of regression versus neural networks for cost estimation under different data conditions, including varying data set sizes, imperfections from noise, and sampling bias. Both simulated and real-world data sets are used to compare the accuracy, variability, and usability of regression and neural network models for cost estimation.
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
Using Dempster-Shafer Theory and Real Options TheoryEric van Heck
This document discusses combining Dempster-Shafer theory and real options theory to assess competing strategies for implementing IT infrastructures. It presents a model that uses an evidential reasoning approach based on Dempster-Shafer theory to account for risks and uncertainties, and real options analysis to value the flexibility in multi-stage investment strategies. The model is applied in a case study to define a strategy for implementing a human resource management application at a large European service provider. The key benefits of combining these approaches are that it provides flexible decision support that takes into account both quantitative and qualitative factors in making implementation decisions under uncertainty.
Kenneth Lloyd introduces category theory as a potential language for scientific discourse in agent-based modeling and simulation (ABMS). Category theory defines mathematical structures and relationships between them. Lloyd argues that agents can be considered as structures within a category. He provides an example of applying category theory concepts like functors to represent functional objects and agents. Finally, Lloyd discusses how category theory may provide a formalism for describing emergent properties in multi-agent systems and validating hypotheses through simulation.
Interpretable machine learning : Methods for understanding complex modelsManojit Nandi
1. Interpretability helps understand complex machine learning models by explaining their outcomes based on inputs. Higher predictive accuracy often reduces interpretability.
2. Methods like LIME and SHAP attribute model outcomes to input features through local surrogate models and game theory.
3. Recourse analysis identifies actions individuals could take to improve outcomes from automated decisions.
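The local-surrogate idea behind LIME can be sketched in a few lines: perturb the input around the point of interest, query the black-box model, and fit a linear model whose coefficients attribute the prediction to each feature. The sketch below is a deliberate simplification (plain least squares via the normal equations, with no proximity kernel or feature selection as in full LIME); the function name `local_surrogate_weights` and all parameter values are illustrative assumptions.

```python
import random

def local_surrogate_weights(black_box, x, n_samples=500, scale=0.1):
    """Fit a linear surrogate to `black_box` near point x; the returned
    weights attribute the local prediction to each input feature."""
    d = len(x)
    X, y = [], []
    for _ in range(n_samples):
        z = [xi + random.gauss(0.0, scale) for xi in x]  # local perturbation
        X.append(z + [1.0])                              # intercept column
        y.append(black_box(z))
    # Normal equations: solve (X^T X) w = X^T y by Gaussian elimination.
    n = d + 1
    A = [[sum(X[k][i] * X[k][j] for k in range(n_samples)) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(n_samples)) for i in range(n)]
    for col in range(n):                 # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):       # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w[:d]                         # feature weights; intercept dropped
```

On an exactly linear black box the surrogate recovers the true coefficients, which makes the attribution easy to sanity-check.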
OPTIMIZATION IN ENGINE DESIGN VIA FORMAL CONCEPT ANALYSIS USING NEGATIVE ATTR...csandit
There is an exhaustive body of research in the area of engine design covering different methods that try to reduce production costs and to optimize the performance of these engines. Mathematical methods based on statistics, self-organized maps, and neural networks achieve the best results in these designs, but configuring these methods is not easy owing to the high number of parameters that have to be measured.
The document discusses analogical reasoning and case-based reasoning. It provides an overview of research in these areas including structure mapping theory, models of analogical processing like SME and MAC/FAC, and case-based reasoning systems. It proposes an analogy ontology to integrate analogical processing and first-principles reasoning by providing a formal representation of analogy concepts and results.
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of N...SYRTO Project
Discussion of “Network Connectivity and Systematic Risk” and “The Impact of Network Connectivity on Factor Exposures, Asset pricing and Portfolio Diversification” by Billio, Caporin, Panzica and Pelizzon. Arjen Siegmann. Amsterdam - June, 25 2015. European Financial Management Association 2015 Annual Meetings.
Slides accompanying Malcolm Moore’s 2014 webcast on statistical and predictive modelling where he demonstrates JMP as an effective tool for exploratory data analysis, and JMP Pro as an expert modelling tool that scales to any number of Xs and Ys, is effective with messy data, and reduces the risk of selecting the wrong model. Watch the webcasts at http://www.jmp.com/uk/about/events/webcasts/
TOWARDS A SYSTEM DYNAMICS MODELING METHOD BASED ON DEMATELijcsit
This document proposes a new method for constructing system dynamics models that combines the Decision Making Trial and Evaluation Laboratory (DEMATEL) technique with system dynamics modeling. DEMATEL is first used to systematically define and quantify causal relationships between variables in a system. The results from DEMATEL, including impact relation maps and a total influence matrix, are then used to derive the causal loop diagram and define variable weights in the stock-flow chart equations of the system dynamics model. This combined method aims to overcome limitations and subjectivity in traditional system dynamics modeling that relies solely on decision makers' mental models.
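The core DEMATEL computation mentioned above can be sketched directly: normalise the direct-influence matrix to D and compute the standard total influence matrix T = D (I - D)^-1, whose entries feed the impact-relation maps. A minimal plain-Python sketch; the function name and the example matrix are illustrative, not taken from the paper.

```python
def dematel_total_influence(direct):
    """Return (D, T): the normalised direct-influence matrix and the
    DEMATEL total influence matrix T = D (I - D)^-1."""
    n = len(direct)
    # Normalise by the largest row/column sum so D's spectral radius < 1.
    s = max(max(sum(row) for row in direct),
            max(sum(direct[i][j] for i in range(n)) for j in range(n)))
    D = [[direct[i][j] / s for j in range(n)] for i in range(n)]
    # Invert (I - D) by Gauss-Jordan elimination.
    M = [[(1.0 if i == j else 0.0) - D[i][j] for j in range(n)] for i in range(n)]
    inv = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        inv[col], inv[piv] = inv[piv], inv[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        inv[col] = [v / p for v in inv[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                inv[r] = [a - f * b for a, b in zip(inv[r], inv[col])]
    T = [[sum(D[i][k] * inv[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return D, T
```

Since T sums the whole chain of direct and indirect effects, it satisfies the identity T = D + D·T, which gives a quick correctness check.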
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...IJERA Editor
Cost estimating at schematic design stage as the basis of project evaluation, engineering design, and cost
management, plays an important role in project decision under a limited definition of scope and constraints in
available information and time, and the presence of uncertainties. The purpose of this study is to compare the
performance of cost estimation models of two different hybrid artificial intelligence approaches: regression analysis-adaptive neuro-fuzzy inference system (RANFIS) and case-based reasoning-genetic algorithm (CBR-GA) techniques. The models were developed on the same 50 low-cost apartment project datasets in Indonesia. Tested on another five data sets, the models proved to perform very well in terms of accuracy. The CBR-GA model was found to be the best performer, but suffered from the disadvantage of needing 15 cost drivers, compared to only 4 cost drivers required by RANFIS for on-par performance.
Automatically Estimating Software Effort and Cost using Computing Intelligenc...cscpconf
In the IT industry, precisely estimating the effort, development cost, and schedule of each software project counts for much to a software company, so accurate estimation of manpower is becoming ever more important. In the past, IT companies estimated manpower effort through human experts using statistical methods, but the outcomes often failed to satisfy management. Recently, whether computing intelligence techniques can do better in this field has become an interesting topic. This research applies computing intelligence techniques, using the Pearson product-moment correlation coefficient and one-way ANOVA to select key factors and the K-Means clustering algorithm to cluster projects, in order to estimate software project effort. The experimental results show that computing intelligence techniques yield more precise and more effective estimates than traditional human experts did.
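The clustering step of such an approach can be sketched as: group historical projects with K-Means, then estimate a new project's effort from the cluster nearest to it. The names (`kmeans`, `estimate_effort`), the feature vectors, and the choice of k below are illustrative assumptions, not the paper's actual data or pipeline (which also applies Pearson correlation and ANOVA for factor selection beforehand).

```python
import random

def kmeans(points, k, n_iters=100):
    """Plain K-Means on lists of feature vectors; returns (centroids, clusters)."""
    centroids = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(n_iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            idx = min(range(k), key=lambda i: sum((a - b) ** 2
                                                  for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        new_centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)]
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

def estimate_effort(new_features, history, k=2):
    """history: list of (feature_vector, effort).
    Estimate = mean effort of the cluster nearest to the new project."""
    feats = [f for f, _ in history]
    centroids, clusters = kmeans(feats, k)
    idx = min(range(k), key=lambda i: sum((a - b) ** 2
                                          for a, b in zip(new_features, centroids[i])))
    members = clusters[idx]
    efforts = [e for f, e in history if f in members]
    return sum(efforts) / len(efforts)
```

With two well-separated groups of historical projects, a new project resembling the small ones gets an estimate near the small group's mean effort.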
A NEW APPROACH IN DYNAMIC TRAVELING SALESMAN PROBLEM: A HYBRID OF ANT COLONY ...ijmpict
Nowadays swarm intelligence-based algorithms are widely used to optimize the dynamic traveling salesman problem (DTSP). In this paper, we use a hybrid of Ant Colony Optimization (ACO) and gradient descent to optimize the DTSP; it differs from the plain ACO algorithm in its evaporation rate and in how new information is injected. This approach prevents premature convergence, escapes local optima, and makes it possible for the algorithm to find better solutions. Compared with some former methods, the proposed combination of gradient descent and ACO shows significantly improved route optimization.
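For context, a plain ACO core for the (static) TSP, without the paper's gradient-descent modification of the evaporation rate, might look like the sketch below; the function names and parameter values are assumptions for illustration only.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=8, n_iters=30, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Basic ACO: ants build tours by pheromone * heuristic sampling;
    pheromone evaporates at rate rho, then deposits proportional to 1/length."""
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                weights = [(c, (pher[cur][c] ** alpha) * ((1.0 / dist[cur][c]) ** beta))
                           for c in unvisited]
                total = sum(w for _, w in weights)
                r, acc = random.uniform(0.0, total), 0.0
                for c, w in weights:            # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        tour.append(c)
                        unvisited.discard(c)
                        break
                else:                           # floating-point fallback
                    c = weights[-1][0]
                    tour.append(c)
                    unvisited.discard(c)
            tours.append(tour)
        for i in range(n):                      # evaporation
            for j in range(n):
                pher[i][j] *= (1.0 - rho)
        for tour in tours:                      # deposit
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pher[a][b] += q / length
                pher[b][a] += q / length
    return best_tour, best_len
```

On a unit square of four cities the optimal cycle is the perimeter of length 4, which the sketch finds quickly.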
The document discusses the GRASP (General Responsibility Assignment Software Principles) patterns and principles for assigning responsibilities in object-oriented design. It defines GRASP as helping to clearly outline which objects are responsible for which actions. There are nine GRASP principles covered: Creator, Controller, Information Expert, Low Coupling, High Cohesion, Indirection, Polymorphism, Protected Variations, and Pure Fabrication. These principles provide guidelines for assigning responsibilities to classes to achieve well-structured and maintainable code. The document then explains each principle in more detail using a chess game as an example domain.
The document outlines several web design guidelines for creating usable and accessible websites, including: following Fitts' Law to make interactive elements easy to select; providing feedback to users on their actions; reusing users' experience through consistent navigation; using scroll bars sparingly; keeping sentences and paragraphs short; optimizing images for different bandwidths; and avoiding overly heavy pages. The goal is to design intuitive sites that minimize learning curves and user effort.
This document provides course information for CS 478 Machine Learning offered in the spring of 2003, including:
- An overview of machine learning techniques covered such as decision trees, Bayesian learning, neural networks, and clustering.
- Contact information for the instructor and teaching assistants.
- Details on evaluation including homework, a project, and exam worth 40%, 40%, and 20% respectively.
- Links to the course webpage and newsgroup for additional resources.
This document discusses algorithm-independent machine learning techniques. It introduces concepts like bias and variance which can be used to quantify how well a learning algorithm matches a problem, regardless of the specific algorithm used. It discusses techniques like cross-validation, resampling, and combining multiple classifiers that can improve performance in a way that is independent of the learning algorithm. The document also covers principles like minimum description length and no free lunch which provide theoretical foundations for algorithm-independent machine learning.
TYPES OF OPERATING SYSTEMS ON COMPUTERS AND MOBILE PHONES NAME ...butest
The document discusses various types of computer and mobile-phone operating systems such as DOS, UNIX, Windows, and Linux, covering brief explanations of the history, features, and example versions of each operating system.
1) The document compares the performance of four machine learning techniques (decision tree, random forest, logistic regression, and neural network) on two classification tasks: predicting the winner of tic-tac-toe games and predicting the subcellular location of proteins.
2) For tic-tac-toe, logistic regression had the best performance with 98.3% accuracy, followed by neural network and random forest, while decision tree performed worst.
3) For predicting protein location, random forest performed best with 63.4% accuracy, followed by logistic regression and neural network, while decision tree again had the lowest accuracy.
This document discusses a Bayesian approach to active learning for collaborative filtering. It summarizes that collaborative filtering uses preference patterns to predict user ratings, but requires many user ratings for accuracy. Active learning aims to acquire the most informative ratings from users. Previous active learning methods only consider estimated models, which can be misleading with few ratings. The proposed method takes a full Bayesian approach, averaging expected loss over the posterior model distribution to account for model uncertainty and avoid problems from estimated models. It aims to select items that maximize reduction in expected loss when considering the full model distribution, rather than just an estimated model.
The document discusses several challenges in machine learning, including:
1) Determining when generative vs. discriminative learning methods are better and how to make generative methods more computationally feasible.
2) Developing methods for learning from non-vectorial data like text, images, and graphs that can work across different data types and learning algorithms.
3) Extending discriminative methods like neural networks and support vector machines to more complex problems beyond classification and regression.
4) Developing distributed learning methods that can handle distributed data while preserving privacy.
Introducing Evolutionary Component Capabilities in Agent ...butest
The document provides details on two contracts totaling $85,000 between Boeing and subcontractors UWE and Dr. Robert E. Smith. UWE will provide technical support and facilities for $70,000, while Dr. Smith will perform primary research and development work on an adaptive agent framework for $15,000. The framework aims to provide agents with evolutionary computation capabilities to increase their ability to adapt.
A contextual bandit algorithm for mobile context-aware recommender systemBouneffouf Djallel
Most existing approaches in Mobile Context-Aware Recommender Systems focus on recommending relevant items to users taking into account contextual information, such as time, location, or social aspects. However, none of them has considered the problem of user’s content evolution. We introduce in this paper an algorithm that tackles this dynamicity. It is based on dynamic exploration/exploitation and can adaptively balance the two aspects by deciding which user’s situation is most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms.
A Solution To The Random Assignment Problem On The Full Preference DomainJoe Andelija
This document summarizes a research paper that proposes a new algorithm for solving the random assignment problem when agents' preferences allow for indifference between objects. The algorithm extends the probabilistic serial mechanism to the full preference domain by interpreting it as an iterative algorithm to compute maximum flow in a network. However, the authors also prove that on the full preference domain, it is impossible for any mechanism to find an assignment that is both envy-free and ordinally efficient while also satisfying a weak strategyproofness property.
DALL-E 2 - OpenAI imagery automation first developed by Vishal Coodye in 2021...MITAILibrary
The document provides a review of machine learning interpretability methods. It begins with an introduction to explainable artificial intelligence and a discussion of key concepts like interpretability and explainability. It then presents a taxonomy of interpretability methods that are divided into four main categories: methods for explaining black-box models, creating white-box models, promoting fairness, and analyzing model sensitivity. Specific machine learning interpretability techniques are summarized within each category.
Keating and Katina (2015) Foundational perspectives for the emerging complex ...Polinho Katina
1. The document introduces the emerging field of complex system governance (CSG) which explores the intersection between complex systems and governance.
2. It notes that while the fields of complex systems and governance have contributed significantly independently, exploring their intersection could provide novel insights to address modern problems.
3. The document outlines some key characteristics of today's complex problem domain, including ambiguity, complexity, emergence, interdependence, and uncertainty. It argues CSG has the potential to enhance our ability to deal with complex systems problems.
The document discusses the differences between machine learning (ML), statistical learning, data mining (DM), and automated learning (AL). It argues that while ML and statistical learning developed similar techniques starting in the 1960s, DM emerged in the 1990s from a merging of database research and automated learning. However, industry was much more enthusiastic about adopting DM techniques compared to AL techniques, even though many DM systems are just friendly interfaces of AL systems. The document aims to explain the key differences between DM and AL that led to DM's greater commercial success.
Real World Talent Insights From Computer SimulationsAndrea Kropp
Teaching Talent Analytics executives how to use computer simulations to complement the predictive modeling work in HR. Simulations allow you to examine multiple scenarios and examine their end states and consequences before taking action.
Exploration exploitation trade off in mobile context-aware recommender systemsBouneffouf Djallel
Most existing approaches in Context-Aware Recommender Systems (CRS) focus on recommending relevant items to users taking into account contextual information, such as time, location, or social aspects. However, none of them have considered the problem of user's content dynamicity. This problem has been studied in the reinforcement learning community, but without paying much attention to the contextual aspect of the recommendation. We introduce in this paper an algorithm that tackles the user's content dynamicity by modeling the CRS as a contextual bandit algorithm. It is based on dynamic exploration/exploitation and it includes a metric to decide which user's situation is the most relevant to exploration or exploitation. Within a deliberately designed offline simulation framework, we conduct extensive evaluations with real online event log data. The experimental results and detailed analysis demonstrate that our algorithm outperforms surveyed algorithms.
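The dynamic exploration/exploitation idea can be illustrated as a per-context epsilon-greedy bandit in which contexts (user situations) with little observed data get a higher exploration rate. This is a simplified stand-in, not the authors' algorithm; the class name, the linear epsilon schedule, and all parameter values are hypothetical.

```python
import random
from collections import defaultdict

class ContextualEpsilonGreedy:
    """Epsilon-greedy per context: exploration rate starts at max_eps for a
    barely-seen context and decays linearly to base_eps after `warmup` rounds."""
    def __init__(self, arms, base_eps=0.1, max_eps=0.5, warmup=20):
        self.arms = list(arms)
        self.base_eps, self.max_eps, self.warmup = base_eps, max_eps, warmup
        self.counts = defaultdict(int)     # pulls per (context, arm)
        self.values = defaultdict(float)   # running mean reward per (context, arm)
        self.ctx_seen = defaultdict(int)   # rounds observed per context

    def epsilon(self, context):
        seen = self.ctx_seen[context]
        if seen >= self.warmup:
            return self.base_eps
        return self.max_eps - (self.max_eps - self.base_eps) * seen / self.warmup

    def select(self, context):
        if random.random() < self.epsilon(context):
            return random.choice(self.arms)            # explore
        return max(self.arms, key=lambda a: self.values[(context, a)])  # exploit

    def update(self, context, arm, reward):
        self.ctx_seen[context] += 1
        key = (context, arm)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]
```

After simulated interaction, the running means should identify the better arm separately in each context, which is the point of conditioning on the user's situation.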
System modeling and simulation involves creating simplified representations of real-world systems to understand and evaluate their behavior over time. A system is composed of interconnected parts designed to achieve specific objectives. A model abstracts and simplifies a system for analysis. Simulation executes a model over time to observe how a system operates. It allows experimenting with systems that may be too expensive, dangerous or complex to study directly. Simulation has many uses including analyzing systems before implementation, optimizing designs, training, and evaluating "what-if" scenarios. Key areas where simulation is applied include manufacturing, business, healthcare, transportation and the military.
Survey: Biological Inspired Computing in the Network SecurityEswar Publications
Traditional computing techniques and systems rely on a central processing device or main server and process information largely serially. They are non-robust, non-adaptive, and limited in scale. Biological systems, by contrast, process information in a parallel and distributed manner without central control, and are highly robust, flexible, and scalable. This paper gives a short overview of how ideas from biology can be used to design new computing techniques that share some of the beneficial qualities of biological systems, together with some illustrations of how these techniques can be used in information security programs.
This document discusses agent-based modeling (ABM). It provides definitions and explanations of ABM, including that it is a bottom-up approach using autonomous agents to simulate real-world systems. The document outlines the key features of ABM, including how agents have attributes and behaviors that interact. It also discusses how to build ABMs, including constructing agent populations and parameterizing agents. Both strengths and weaknesses of ABM are presented.
A comprehensive review of the firefly algorithmsXin-She Yang
This document provides a comprehensive review of firefly algorithms. It begins with background on swarm intelligence and how firefly algorithms were inspired by the flashing lights of fireflies. It then describes the basic structure of firefly algorithms, including initializing a population of fireflies, evaluating their fitness, sorting by fitness, selecting the best solution, and moving fireflies toward more attractive solutions over generations. The document reviews applications of firefly algorithms in areas like continuous, combinatorial, and multi-objective optimization as well as engineering problems. It concludes by discussing exploration vs exploitation in firefly algorithms and directions for further development.
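The basic loop described in that review (initialise a population, evaluate brightness, and move each firefly toward brighter ones with distance-decayed attractiveness plus a small random step) can be sketched as follows. Parameter values are illustrative and common refinements such as a decaying randomisation factor are omitted.

```python
import math
import random

def firefly_optimize(objective, dim, n_fireflies=12, n_gens=60,
                     alpha=0.2, beta0=1.0, gamma=0.01, bounds=(-2.0, 2.0)):
    """Minimise `objective`: a lower cost means a brighter firefly."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_fireflies)]
    for _ in range(n_gens):
        cost = [objective(x) for x in pop]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:   # firefly j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [min(hi, max(lo, xi + beta * (xj - xi)
                                          + alpha * random.uniform(-0.5, 0.5)))
                              for xi, xj in zip(pop[i], pop[j])]
                    cost[i] = objective(pop[i])
    best = min(pop, key=objective)
    return best, objective(best)
```

The `alpha` term supplies the exploration discussed in the review, while the distance-decayed `beta` attraction supplies exploitation; tuning `gamma` shifts the balance between the two.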
Author Mr Di Chen : Ecole Polytechnique Féderale de Lausanne: Financial Engineering Section
This paper shows that complexity influences stock returns. By establishing the complexity and resilience measure of the common stock and analyzing the relationship between return, momentum, size, complexity, book-to-market ratio and resilience, three measures (size, complexity and momentum) stand out as the factors that can influence stock returns.
The document discusses using interactive evolution to tune the physical attributes of a simulated jeep in a game environment. It compares two approaches: one directed by user requests for changes, and the other based on measuring user performance on tasks. The main conclusion is that interactive evolution allows searching a larger attribute space than direct manipulation by the user. It presents two methods for interactive evolution - one based on user requests, and one generating multiple offspring and selecting the best performer on a task. The methods are tested by evolving attribute values for a simulated vehicle to improve the user experience of driving it.
TOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENTijasa
The development of the Internet of Things (IoT) concept revives Responsive Environments (RE) technologies. Nowadays, the idea of a permanent connection between the physical and digital worlds is technologically possible. The capillary Internet refers to the extension of the Internet into daily appliances, such that they become actors on the Internet like any human. The parallel development of Machine-to-Machine communications and Artificial Intelligence (AI) techniques starts a new era of cybernetics. This paper presents an approach to a Cybernetic Organism (Cyborg) for RE based on Organic Computing (OC). In this approach, each appliance is part of an autonomic system that controls a physical environment. The underlying idea is that such systems must have self-x properties in order to adapt their behavior to external disturbances with a high degree of autonomy.
Organization Structure And Inter-Organizational...Stephanie Clark
This document discusses the evolution of TCP/IP and the internet. It explains that prior to the 1960s, computer communication consisted mainly of simple text and binary data transmitted over telephone circuit networks. In the 1960s, Paul Baran described a robust packet switching network that would be more efficient. This led to the creation of the ARPANET in the late 1960s, which used early versions of protocols like TCP and IP. TCP and IP were combined into the TCP/IP protocol suite in 1974 to better handle the increasing network load. The TCP/IP protocol suite became widely adopted and led to the commercialization and growth of the internet.
This document summarizes a study that developed a Holonic Workforce Allocation Model (HWM) to reduce the impact of absenteeism and turnover in job shop environments. HWM is based on the Holonic Manufacturing System (HMS) paradigm and uses a weighted random formulation to allocate workers to tasks. The formulation considers factors like worker skills, task urgency, and cross-training opportunities. Computer simulations tested HWM against other models and found it more effectively minimized late tasks, improved average skills, and provided balanced workloads and cross-training while maintaining productivity. The study was motivated by the importance of workforce issues to HMS and the need to address absenteeism and turnover that can derail production plans.
Running head Multi-actor modelling system 1Multi-actor mod.docxtodd581
Running head: Multi-actor modelling system
Multi-actor modelling system
Yogesh Dagwale
University of the Cumberland’s
Ligtenberg, A., Wachowicz, M., Bregt, A. K., Beulens, A., & Kettenis, D. L. (2004). A design and application of a multi-agent system for simulation of multi-actor spatial planning. Journal of environmental management, 72(1-2), 43-55.
They discuss the potential and limitations of MAS for building models that let spatial planners incorporate the 'actor factor' into their analysis. Their framework considers actors who play an active role in spatial planning: actors can observe and perceive a spatial environment, and from these observations and perceptions they form a preference for a desired spatial situation. Actors then present and discuss their preferences during exchanges with other actors.
The actors' preferences serve as inputs to formal decision making, and the final decisions are implemented in the spatial system. The authors found that MAS can generate land-use patterns based on a representation of a multi-actor planning process. It can also clarify the effects that actors operating under different planning styles have on land use, and show how the relations between actors change during a planning process and under different decision-making regimes. Unlike the work by Parker, Manson, Janssen, Hoffmann & Deadman (2003), cited below, this paper did not address the various challenges associated with the use of MAS.
Parker, D. C., Manson, S. M., Janssen, M. A., Hoffmann, M. J., & Deadman, P. (2003). Multi-agent systems for the simulation of land-use and land-cover change: a review. Annals of the association of American Geographers, 93(2), 314-337.
In this paper, they reviewed different models. Because these models were not thorough enough, they considered the multi-agent system for dynamic spatial simulation, which has two components: a cellular model that represents the biogeophysical and ecological parts of the modelled system, and an actor-based model that represents human decision making. Given its nature and its ability to model complex situations, they highlighted areas where MAS can deliver results that other models cannot: modelling emergent phenomena such as landscape patterns, representing complex land-use/cover systems thanks to its flexibility, and modelling dynamic paths. They also outlined the various challenges facing multi-agent systems, including understanding complexity, individual decision making, empirical parameterization and model validation, and communication.
Faber, N. R., & Jorna, R. J. (2011, June). The use of multi-actor systems for studying social sustainability: Theoretical backgrounds and pseudo-specifications. In Com.
State of the art of agile governance a systematic reviewijcsit
This document summarizes a systematic literature review on the state of agile governance. The review identified over 1,900 studies from 10 databases, of which 167 provided evidence to answer the research questions. The studies were organized into four major groups: software engineering, enterprise, manufacturing, and multidisciplinary. The review provides a definition of agile governance, six meta-principles, and a map of findings organized by topic and classified by relevance and convergence. The evidence suggests agile governance is a new, wide, and multidisciplinary area focused on organizational performance that requires more intensive study.
Similar to A Research Platform for Coevolving Agents.doc (20)
This document analyses YouTube's business model. It explains that YouTube and other online video sites represent a new business model for audiovisual content, owing to the change in consumption habits caused by new technologies. It describes how YouTube leverages user participation to improve continuously and to attract an audience different from that of traditional media.
The defense was successful in portraying Michael Jackson favorably to the jury in several ways:
1) They dressed Jackson in ornate costumes that conveyed images of purity, innocence, and humility.
2) Jackson was shown entering the courtroom as if on a red carpet, emphasizing his celebrity status.
3) Jackson appeared vulnerable, childlike, and in declining health during the trial, eliciting sympathy from jurors.
4) Defense attorney Tom Mesereau effectively presented a coherent narrative of Jackson as a victim and portrayed Neverland as a place of refuge, undermining the prosecution's arguments.
Michael Jackson was born in 1958 in Gary, Indiana and rose to fame in the 1960s as the lead singer of The Jackson 5, topping music charts in the 1970s. As a solo artist in the 1980s, his album Thriller broke music records. In the 1990s and 2000s, Jackson faced several legal issues related to child abuse allegations while continuing to release music. He married Lisa Marie Presley and Debbie Rowe and had two children before his death in 2009.
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...butest
This document appears to be a list of popular books from various authors. It includes over 150 book titles across many genres such as fiction, non-fiction, memoirs, and novels. The books cover a wide range of topics from politics to cooking to autobiographies.
The prosecution lost the Michael Jackson trial due to several key mistakes and weaknesses in their case:
1) The lead prosecutor, Thomas Sneddon, was too personally invested in the case against Jackson, having pursued him for over a decade without success.
2) Sneddon's opening statement was disorganized and weak, failing to effectively outline the prosecution's case.
3) The accuser's mother was not credible and damaged the prosecution's case through her erratic testimony, history of lies and con artist behavior.
4) Many prosecution witnesses were not credible due to prior lawsuits against Jackson, debts owed to him, or having been fired by him. Several witnesses even took the Fifth Amendment.
Here are three examples of public relations from around the world:
1. The UK government's "Be Clear on Cancer" campaign which aims to raise awareness of cancer symptoms and encourage early diagnosis.
2. Samsung's global brand marketing and sponsorship activities which aim to increase brand awareness and favorability of Samsung products worldwide.
3. The Brazilian government's efforts to improve its international image and relations with other countries through strategic communication and diplomacy.
The three most important functions of public relations are:
1. Media relations because the media is how most organizations reach their key audiences. Strong media relationships are crucial.
2. Writing, because written communication is at the core of public relations and how most information is
Michael Jackson Please Wait... provides biographical information about Michael Jackson including his birthdate, birthplace, parents, height, interests, idols, favorite foods, films, and more. It discusses his background, career highlights including influential albums like Thriller, and films he appeared in such as The Wiz and Moonwalker. The document contains photos and details about Jackson's life and illustrious music career.
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazzbutest
The document discusses the process of manufacturing celebrity and its negative byproducts. It argues that celebrities are rarely the best in their individual pursuits like singing, dancing, etc. but become famous due to being products of a system controlled by wealthy elites. This system stifles opportunities for worthy artists and creates feudalism. The document also asserts that manufactured celebrities should not be viewed as role models due to behaviors like drug abuse and narcissism that result from the celebrity-making process.
Michael Jackson was a child star who rose to fame with the Jackson 5 in the late 1960s and early 1970s. As a solo artist in the 1970s and 1980s, he had immense commercial success with albums like Off the Wall, Thriller, and Bad, which featured hit singles and groundbreaking music videos. However, his career and public image were plagued by controversies related to allegations of child sexual abuse in the 1990s and 2000s. He continued recording and performing but faced ongoing media scrutiny into his private life until his death in 2009.
Social Networks: Twitter Facebook SL - Slide 1butest
The document discusses using social networking tools like Twitter and Facebook in K-12 education. Twitter allows students and teachers to share short updates and can be used to give parents a window into classroom activities. Facebook allows targeted advertising that could be used to promote educational activities. Both tools could help facilitate communication between schools and communities if used properly while managing privacy and security concerns.
Facebook has over 300 million active users who log on daily, and allows brands to create public profile pages to interact with users. Pages are for brands and organizations only, while groups can be made by any user about any topic. Pages do not show admin names and have no limits on fans, while groups display admin names and are limited to 5,000 members. Content on pages should aim to provoke action from subscribers and establish a regular posting schedule using a conversational tone.
Executive Summary Hare Chevrolet is a General Motors dealership ...butest
Hare Chevrolet is a car dealership located in Noblesville, Indiana that has successfully used social media platforms like Twitter, Facebook, and YouTube to create a positive brand image. They invest significant time interacting directly with customers online to foster a sense of community rather than overtly advertising. As a result, Hare Chevrolet has built a large, engaged audience on social media and serves as a model for how brands can use online presences strategically.
Welcome to the Dougherty County Public Library's Facebook and ...butest
This document provides instructions for signing up for Facebook and Twitter accounts. It outlines the sign up process for both platforms, including filling out forms with name, email, password and other details. It describes how the platforms will then search for friends and suggest people to connect with. It also explains how to search for and follow the Dougherty County Public Library page on both Facebook and Twitter once signed up. The document concludes by thanking participants and providing a contact for any additional questions.
Paragon Software announces the release of Paragon NTFS for Mac OS X 8.0, which provides full read and write access to NTFS partitions on Macs. It is the fastest NTFS driver on the market, achieving speeds comparable to native Mac file systems. Paragon NTFS for Mac 8.0 fully supports the latest Mac OS X Snow Leopard operating system in 64-bit mode and allows easy transfer of files between Windows and Mac partitions without additional hardware or software.
This document provides compatibility information for Olympus digital products used with Macintosh OS X. It lists various digital cameras, photo printers, voice recorders, and accessories along with their connection type and any notes on compatibility. Some products require booting into OS 9.1 for software compatibility or do not support devices that need a serial port. Drivers and software are available for download from Olympus and other websites for many products to enable use with OS X.
To use printers managed by the university's Information Technology Services (ITS), students and faculty must install the ITS Remote Printing software on their Mac OS X computer. This allows them to add network printers, log in with their ITS account credentials, and print documents while being charged per page to funds in their pre-paid ITS account. The document provides step-by-step instructions for installing the software, adding a network printer, and printing to that printer from any internet connection on or off campus. It also explains the pay-in-advance printing payment system and how to check printing charges.
The document provides an overview of the Mac OS X user interface for beginners, including descriptions of the desktop, login screen, desktop elements like the dock and hard disk, and how to perform common tasks like opening files and folders. It also addresses frequently asked questions for Windows users switching to Mac OS X, such as where documents are stored, how to save or find documents, and what the equivalent of the C: drive is in Mac OS X. The document concludes with sections on file management tasks like creating and deleting folders, organizing files within applications, using Spotlight search, and an overview of the Dashboard feature.
This document provides a checklist for securing Mac OS X version 10.5, focusing on hardening the operating system, securing user accounts and administrator accounts, enabling file encryption and permissions, implementing intrusion detection, and maintaining password security. It describes the Unix infrastructure and security framework that Mac OS X is built on, leveraging open source software and following the Common Data Security Architecture model. The checklist can be used to audit a system or harden it against security threats.
This document summarizes a course on web design that was piloted in the summer of 2003. The course was a 3 credit course that met 4 times a week for lectures and labs. It covered topics such as XHTML, CSS, JavaScript, Photoshop, and building a basic website. 18 students from various majors enrolled. Student and instructor evaluations found the course to be very successful overall, though some improvements were suggested like ensuring proper software and pairing programming/non-programming students. The document also discusses implications of incorporating web design material into existing computer science curriculums.
A Research Platform for Coevolving Agents
with Producer/Consumer Interactions
R. E. Smith
The Intelligent Computing Systems Centre
The University of The West of England
robert.smith@uwe.ac.uk
Introduction
This report documents progress made during the author's summer research fellowship in
the Intelligent Business Systems Group at BT Research Laboratories. The focus of the
fellowship was on the exploration of evolutionary computation (EC) in agent-based
systems, with a mind towards applications of interest to BT research. The work built on a
framework for EC in mobile agents that the author had constructed in anticipation of the
fellowship (Smith & Taylor, 1998). The result of the fellowship is a platform for
examining coevolving agents in a controllable environment. This environment also has a
direct relationship to other agent-based system efforts going on at BT, under the
supervision of Dr. Paul Kearney. The platform is an extensible system where agents
interact in a producer/consumer economic world. This report describes the motivations
for this system, its design, preliminary results, and directions for future research.
Motivations for the EC Agents Framework
Agent-based systems are of particular interest in a variety of telecommunications
applications. The key qualities that agent-based components and systems exhibit are:
autonomy, reactivity, proactivity, and social behavior. Moreover, agents have the
possibility of mobility in complex network environments, putting software functions near
the computational resources they require. Agents can also explicitly exploit the
availability of distributed, parallel computation facilities (Franklin & Graesser, 1997;
Wooldridge & Jennings, 1996).
However, these qualities ultimately depend on the potential for agent adaptation. For
instance, if an agent is to operate with true autonomy in a complex environment, it may
have to react to a spectrum of circumstances that cannot be foreseen by the agent’s
designer. Autonomous agents may need to explore alternative reactive and proactive
strategies, evaluate their performance online, and formulate new, innovative strategies
without user intervention. Moreover, for systems of agents to behave in this manner,
social interactions between agents may also need to adapt and emerge as conditions
change.
Areas where agents could benefit from adaptation are addressed by active research in
machine learning (e.g., classification of unforeseen inputs, strategy acquisition through
reinforcement learning, etc.). However, many machine learning techniques are focused
on centralized processing of databases to formulate models or strategies. In contrast,
evolutionary computation (EC) techniques are inherently based on a distributed paradigm
(natural evolution), making them particularly well suited for adaptation in agents. The
following sections provide further motivation for investigating EC in agent-based
systems.
Empirical Motivation
Consider the author's previous work on acquiring novel combat maneuver strategies for
fighter planes (Smith & Dike, 1995) using genetics-based machine learning. In this
project, systems of distinct rules coevolve under the action of an EC system to specify
complex maneuvers, like the one shown in Figure 1. In this maneuver, the plane on the
right is controlled by the genetics-based machine learning system, while the plane on the
left is following standard combat logic. Both planes execute a change in their controls
every 1/10th of a second. Each control action is specified by a separate rule. Note that
rules firing early in the maneuver sacrifice immediate advantage (turning nose away from
the opponent) such that later rules can dictate a tight turn, and overwhelm the opponent.
This demonstrates that although the genetically-learned rules evolve as separate entities
(agents), they exhibit complex cooperative behaviors.
Figure 1: A maneuver dictated by a set of rules, coevolved in an evolutionary
computation system (Smith & Dike, 1995).
This result shows that complex, innovative, multi-component adaptive systems can
emerge from EC processes.
Theoretical Motivation
Although the empirical evidence is compelling, one must ask whether there is a more
mathematically justifiable reason for examining EC in agent-based systems. To provide
an answer, consider the oft-cited motivations for agent-based systems themselves:
• proactivity
• reactivity
• social interaction
• autonomy
• distributability
Each of these motivations, to some extent, suggests adaptation or learning. However,
when one considers agent-based systems, and social interaction, one must consider the
impact of learning on the system of agents, while maintaining distributability. In other
words, one must consider how to obtain and propagate learned information in a system of
agents, without a centralized coordinator of this activity.
Holland’s seminal book, Adaptation in Natural and Artificial Systems (Holland, 1975)
was primarily addressed at this issue. Although the book is often cited as the first on
genetic algorithms (GAs), one of the earliest forms of EC, it is much more directed at
general issues of adaptation in systems of distributed entities. The book mathematically
examines how populations of entities interacting in “systems” can and should adapt.
Extending this to agents is a natural implication of Holland's work.
There are four main theoretical pillars upon which Holland's theories are based:
• The K-armed Bandit
• The Schema Theorem
• The Building Block Hypothesis
• Implicit Parallelism
To understand these motivations, consider an agent interacting with other agents and
entities in a complex environment. Assume each agent is made up of some sort of
encoded feature set. Given this assumption, one can ask the general question of which
features one should try in agents. Clearly, there is seldom a clear mapping
from these features to agent fitness. However, Holland manages to deal with this question
in a general context by casting it as a probabilistic decision problem: the k-armed bandit.
Consider two alternative “features” that an agent may have. Given a feature’s interactions
with other features within the agent, with other agents, and with a complex environment,
one has, at best, a noisy evaluation of each alternative feature.
Therefore, to evaluate the feature, one can spend some time evaluating the alternative
features in a variety of agents and contexts. However, this leads to a potential loss in
agent performance while trying out features that turn out to be less advantageous. So, it is
desirable to quickly incorporate features that appear advantageous into our agents.
However, this leads to a potential performance loss if a less advantageous feature is
selected prematurely. This is the classic dilemma of exploration versus exploitation.
One can overcome this dilemma by increasing the number of agents with the apparent
best feature gradually while continuing to experiment with other candidates. However,
one has a choice with regard to the speed of this feature propagation. As a function of
time (t), should one increase the number of agents (m(H,t)) with the apparent best feature
(H):
• linearly: m(H, t+1) = m(H, t) + C
• geometrically: m(H, t+1) = m(H, t) + Ct^D
• exponentially: m(H, t+1) = C m(H, t), or
• superexponentially: m(H, t+1) = Ct^D m(H, t)
where C and D are constants?
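The four candidate schedules can be compared numerically. The following sketch is illustrative only; the constants C and D, and the class name, are arbitrary choices for this example, not values from the report.

```java
// Illustrative comparison of Holland's four candidate propagation schedules
// for m(H, t), the number of agents carrying the apparently best feature H.
// C and D are arbitrary constants chosen for this sketch.
public class Propagation {
    static final double C = 2.0, D = 1.5;

    static double linear(double m, int t)           { return m + C; }
    static double geometric(double m, int t)        { return m + C * Math.pow(t, D); }
    static double exponential(double m, int t)      { return C * m; }
    static double superExponential(double m, int t) { return C * Math.pow(t, D) * m; }

    public static void main(String[] args) {
        double mLin = 1, mGeo = 1, mExp = 1, mSup = 1;
        for (int t = 1; t <= 10; t++) {
            mLin = linear(mLin, t);
            mGeo = geometric(mGeo, t);
            mExp = exponential(mExp, t);
            mSup = superExponential(mSup, t);
        }
        // Under these constants, the exponential schedules quickly dominate.
        System.out.printf("linear=%.0f geometric=%.0f exponential=%.0f super=%.3g%n",
                mLin, mGeo, mExp, mSup);
    }
}
```

After ten steps under these illustrative constants, the linear schedule has propagated the feature far less than the exponential and superexponential ones, which is the gap the k-armed bandit argument trades off against premature convergence.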
Holland’s K-armed bandit argument shows that, regardless of the distribution of noise
surrounding the feature’s fitness, one should propagate the apparently best feature
exponentially, to ensure minimum loss. Moreover, Holland's Schema Theorem of GAs
shows that a reproductive (EC) plan does just such an allocation.
This leads to Holland's Building Block Hypothesis, which is the central heuristic in GA
search. This heuristic assumes that if one can propagate advantageous features (building
blocks) in a population of agents, they will combine (build) into improved agents, and
agent system behaviors. Although this heuristic is clearly not adequate for all situations,
it seems appropriate, given a general system with a distributed character.
Moreover, Holland shows that in considering a population of agents, one implicitly
applies the k-armed bandit argument, through the schema theorem, to a large number of
building blocks to which the building block hypothesis might accurately apply. Clearly,
one cannot expose each and every feature to the same exponential decision process, since
the space of features (where a feature is any combination of elements within an agent) is far
larger than the space of possible agents. However, Holland’s implicit parallelism
argument shows that for moderate population sizes N, the number of building blocks being
processed is on the order of N^3.
Given these empirical and theoretical arguments, Holland’s genetic adaptive plans seem
the appropriate way to propagate features through a system of agents. Moreover, these
EC processes implicitly exploit parallelism, while remaining trivial to explicitly
parallelize. Therefore, EC methods are one of the most natural machine learning
techniques to transfer general-purpose adaptive capabilities to agent-based systems.
Philosophical Motivation
The previous sections outline motivations for exploring EC in agent-based systems. In
many ways, all research on coevolving EC systems is apropos to agent-based systems.
However, EC systems that use actual software agents have been rare. One possible
exception is work on Tierra (Ray, 1990), which involves the coevolution of pieces of
computer code. However, Tierra "agents" are not standard software components. They
cannot perform the actions of general software agents, and require an EC specific
software environment in which to run.
It is important to investigate EC (and other distributed AI techniques) within an
environment of real-world, standards-based software components, for reasons of
embodiment (Brooks, et al., 1998). In the area of robotic artificial intelligence,
embodiment means that a robot's sensory-motor systems cannot be fully disconnected
from behavior, capability to adapt, and intelligence. Internal representations, reasoning,
and behavior are ultimately grounded in the physical nature of the robot, and its physical
world. Therefore, Brooks and others are addressing the issue of robotic intelligence
through hardware robots, where embodiment is considered directly.
Similar arguments can be made for artificial intelligence in agent-based systems. To
consider the embodiment of adaptation, behavior, and intelligence in such agents, one
must examine real sensors, effectors, and environments along with those agents. Thus, it
is important to consider EC in systems of standards-based software agents that interact in
a generalized computing environment and are not specifically designed for EC
experiments.
Pre-BT Fellowship Results
Until recent work by the author, no standards-based, EC agent framework existed. Before
the BT fellowship, the author had developed an agent-based EC framework. There are
several potential advantages to this framework. The primary advantage is that such a
framework could allow several users or agents, at distributed locations, to interact in EC
processes in a standardized way, without a priori knowledge of these processes. Potential
applications include collaborative search, and collaborative problem solving. Details of
the framework itself are included in this report's Appendix (Page 16).
The framework was tested on problems where coevolution was not necessary (i.e.,
optimization problems). Results from one such experiment are shown in Figure 2.
[Plot: agent fitness (y-axis, 0 to 30) versus time in milliseconds (x-axis, 0 to 1.2e6).]
Figure 2: A result with the EC agent framework applied to a simple optimization task.
In this experiment, each agent was set with the simple task of finding fit mates. Every
agent advertises (through a Plumage object) its own fitness as the number of ones in an
internal, 32-bit chromosome. Every agent trusts the fitness advertised by other agents,
and as a result the number of agents with the "all ones" chromosome gradually increases.
Note that such trust is not necessary for the general operation of the framework.
In other words, this experiment is an agent-based implementation of an EC benchmarking
scenario known as the 32-bit "counting ones" problem. All agents play both a paternal
and maternal role. Once an agent "mothers" a child, it dies, thus ensuring a constant
population size. One can also think of each matrilineal trace as being a single agent that
can modify its own characteristics. The result in Figure 2 is similar to those of standard
EC systems, despite the asynchronous, decentralized, agent-based nature of the system.
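The counting-ones fitness described above is simple to state in code. The following is an illustrative sketch, not the framework's actual implementation.

```java
// Sketch of the 32-bit "counting ones" benchmark fitness used in the
// framework test: an agent's advertised fitness is the number of ones
// in its internal 32-bit chromosome.
public class CountingOnes {
    // Fitness = number of set bits in a 32-bit chromosome.
    static int fitness(int chromosome) {
        return Integer.bitCount(chromosome);
    }

    public static void main(String[] args) {
        System.out.println(fitness(0));           // all zeros: fitness 0
        System.out.println(fitness(0xFFFFFFFF));  // the optimal "all ones" agent: fitness 32
        System.out.println(fitness(0b1011));      // fitness 3
    }
}
```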
The distributed nature of the experiment is better illustrated in Figure 3.
[Plot: agent fitness (0 to 30) versus the time in milliseconds at which each agent was generated (160 to 987,340).]
Figure 3: A single maternal line, from the experiment shown in Figure 2.
The figure shows the fitness of a single "maternal line". Imagine that this line of agents
remains the "property" of a single user. This user could have genetic interactions with
other users, through the framework. Such interactions would improve those agents
belonging to the user.
Although these results are encouraging, they do not consider a key issue in both EC and
agent-based systems research. The fitness value of each agent in the previous experiment
is independent of the fitness values of the other agents in the population. Moreover, their
only interactions are genetic. The goal of the BT fellowship research was to begin a
deeper investigation, where evolving agents were interdependent. Such efforts should
relate to previous centralized genetics-based machine learning systems (like those in
Smith & Dike, 1995), and more realistic agent-based systems. Moreover, this research will
begin to address the watchwords of modern EC research, including niches, coevolution,
and emergence.
There are proven theories on how an EC system can implicitly balance a diverse,
cooperative population, while exploiting Holland’s theories (Deb, 1989; Smith, Forrest, &
Perelson, 1995; Horn & Goldberg, 1996). However, these issues have yet to be evaluated in a
standard framework of asynchronous software agents. The BT work has set the stage for
this examination, in a producer/consumer agent world.
The Producer/Consumer Interaction Framework
Much work has been done in the area of coevolution in EC systems (Rosin & Belew,
1997). However, many of the systems in past research are difficult to parameterize for
detailed examination of various effects. In consultation during the BT fellowship, Paul
Kearney, Walter Merlat, and the author developed a much more tractable system for
these examinations, based on producer/consumer economics.
The problem setting is as follows. Any number of agents interact in a specific economy.
Each agent in the system has a pre-specified number of workers at its disposal. These
workers can be allocated to any one of M possible technologies. There are N possible
goods in the economy. Each technology converts one set of goods into another set.
Technologies are a pre-specified aspect of the economic world.
From an agent's perspective, the goal is to allocate its workers so as to maximize its profit.
An agent must perform this task, given that the price of each good varies with time, with
the action of other agents, and with external market forces. Each of these price variations
takes place through a market process of supply and demand.
From the perspective of the overall system, one goal is to demonstrate the evolutionary
emergence of interesting, productive economic interaction amongst selfish, co-evolved
agents. Another goal is to examine variations in emergent behavior, based on variations
in system parameters, and individual agent behavior.
Software Structure
The system designed during the BT fellowship is intended as a platform for a wide
variety of ongoing experiments. Therefore, it was important to maintain an extensible,
object-oriented design. The following are the key classes in that design. Javadoc and
source code files are available within the BT Intelligent Business Systems Research
Group. Like the EC Framework, the agent classes here are built around IBM's Aglets.
However, they could be easily transferred to other agent systems. Each class is extensible
for future experimentation.
The marketAgent class
Agents of this class process orders for goods from producer/consumer agents, and credit
these agents’ cash accounts.
The economicWorld class
This class defines the "rules" of the economic world, including
• M, the number of technologies
• N, the number of goods
• the set of technologies, and what they do, and
• external consumers and producers (of end goods and raw materials, respectively).
The producerAgent class
This is the key agent class. Each producerAgent responds to “Current Prices”
messages from agent(s) of the marketAgent class. The producerAgent determines a
worker allocation based on this current information, and its own Genotype object (see
details of the EC agent framework in the Appendix, Page 16). The producerAgent then
submits an order message to the marketAgent, with negative and positive quantities
representing sell and buy orders for each good.
When triggered by an internally defined condition, a producerAgent searches out
mates, and attempts to produce child producerAgents.
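An order message of the kind described above might look as follows. The class and method names here are assumptions for illustration, not the report's actual API.

```java
// Illustrative sketch of a producerAgent's order message: one quantity per
// good, where negative quantities are sell orders and positive quantities
// are buy orders. Names are assumptions, not the report's API.
public class Order {
    final double[] quantities; // index = good; sign = sell (-) / buy (+)

    Order(int numGoods) { quantities = new double[numGoods]; }

    void buy(int good, double amount)  { quantities[good] += amount; }
    void sell(int good, double amount) { quantities[good] -= amount; }

    public static void main(String[] args) {
        Order o = new Order(9);  // N = 9 goods, as in the preliminary experiments
        o.buy(0, 5.0);           // buy 5 units of a raw material
        o.sell(6, 5.0);          // sell 5 units of an intermediate good
        System.out.println(o.quantities[0] + " " + o.quantities[6]); // 5.0 -5.0
    }
}
```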
Each of these basic classes is extensible for a number of market and genetic behaviors.
Also, there are a set of utility classes that allow for graphical interaction with an
experiment. Screen shots are shown in Figure 4.
Figure 4: Screen shots from the Java-based producer/consumer system.
Preliminary Experiments
To illustrate the basic operation of the producer/consumer economic framework, a
simplified economic world was constructed.
The Market
In this world, there is only one marketAgent, which maintains a store of each good.
Each store can have positive and negative quantities, and represents the excess supply of
a good. The effect of supply and demand is simulated by basing each good's price, P, on
a simple function of the amount in the marketAgent's excess store of that good, S:
P = exp(−λS)
where λ is a parameter.
Prices are only recalculated at the end of a trading day. A trading day begins with the
marketAgent broadcasting a current price message to every producerAgent. The
market then asynchronously processes orders from each producerAgent that received
this current price message. The trading day is complete when all these orders are
processed. Prices are then recalculated, and a new trading day begins.
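The market mechanics described above (per-good excess-supply stores, the exponential price rule, and end-of-day recalculation) can be sketched as follows. The names and structure are assumptions for illustration, not the actual Aglets-based code.

```java
// Sketch of the marketAgent's bookkeeping: one excess-supply store S per
// good, with price P = exp(-lambda * S) recalculated only at the end of
// each trading day. Names are illustrative assumptions.
public class Market {
    final double lambda;
    final double[] store;  // excess supply S per good (may be negative)
    final double[] prices;

    Market(int numGoods, double lambda) {
        this.lambda = lambda;
        store = new double[numGoods];
        prices = new double[numGoods];
        recalculatePrices();
    }

    // Apply one order: positive quantities buy from the store, negative sell into it.
    void processOrder(double[] quantities) {
        for (int g = 0; g < store.length; g++) store[g] -= quantities[g];
    }

    // Called once, at the end of each trading day.
    void recalculatePrices() {
        for (int g = 0; g < store.length; g++)
            prices[g] = Math.exp(-lambda * store[g]);
    }

    public static void main(String[] args) {
        Market m = new Market(3, 0.1);
        System.out.println(m.prices[0]);        // S = 0, so P = 1.0
        m.processOrder(new double[]{10, 0, 0}); // heavy buying of good 0
        m.recalculatePrices();
        System.out.println(m.prices[0] > 1);    // scarcity raises the price: true
    }
}
```

Note the design point from the text: orders are processed asynchronously during the day, and prices only move at the day boundary.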
The Agents
In this preliminary experiment, the producerAgents are also simplified. In its
Genotype, each producerAgent has a BooleanGene for each of the M
technologies. A producerAgent allocates its workers evenly to each technology that is
marked true in its Genotype.
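The even-split allocation rule above can be sketched as follows; this is a minimal illustration in which the BooleanGene genotype is represented as a plain boolean array.

```java
// Sketch of the simplified genotype-to-allocation rule: workers are split
// evenly across the technologies whose gene is true.
public class Allocator {
    static double[] allocate(boolean[] genes, double workers) {
        int active = 0;
        for (boolean g : genes) if (g) active++;
        double[] alloc = new double[genes.length];
        if (active == 0) return alloc; // no technology selected: allocate nothing
        for (int i = 0; i < genes.length; i++)
            if (genes[i]) alloc[i] = workers / active;
        return alloc;
    }

    public static void main(String[] args) {
        // M = 6 technologies, three selected: each receives 100 / 3 workers.
        double[] a = allocate(new boolean[]{true, false, true, false, true, false}, 100);
        System.out.println(a[0] + " " + a[1]);
    }
}
```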
In the preliminary experiment, producerAgents trade for 10 trading days, then they
(asynchronously) broadcast and receive Plumage and Sperm objects, to locate mates.
In this case, a Plumage object contains the broadcasting agent's evaluation of its own
profit per trading day, which is implicitly trusted by the other agents. Note that such trust
is an aspect of this experiment only, and is not, in general, a necessary aspect of the
framework. When five or more Plumage objects are received by an agent, the best is
picked, and the associated Sperm object is used to create a producerAgent child.
Then the maternal parent agent dies. Note that in the paternal role, an agent can have
many children, based on its relative profit potential, while in the maternal role, an agent
only has one child.
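The best-of-five mate selection rule might be sketched as follows; the Plumage fields and method names are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the mate-selection rule: collect Plumage advertisements (here,
// just a self-reported profit per trading day, which is implicitly trusted),
// and once five or more have arrived, pick the best one.
public class MateSelector {
    static class Plumage {
        final String sender;
        final double profitPerDay; // self-evaluated and implicitly trusted
        Plumage(String sender, double profit) { this.sender = sender; this.profitPerDay = profit; }
    }

    final List<Plumage> received = new ArrayList<>();

    // Returns the best advertisement once five or more are in hand, else null.
    Plumage maybeSelect(Plumage p) {
        received.add(p);
        if (received.size() < 5) return null;
        Plumage best = received.get(0);
        for (Plumage q : received)
            if (q.profitPerDay > best.profitPerDay) best = q;
        return best;
    }

    public static void main(String[] args) {
        MateSelector s = new MateSelector();
        String[] names = {"a", "b", "c", "d", "e"};
        double[] profits = {1.0, 4.5, 2.2, 9.1, 3.3};
        Plumage chosen = null;
        for (int i = 0; i < 5; i++)
            chosen = s.maybeSelect(new Plumage(names[i], profits[i]));
        System.out.println(chosen.sender); // "d" advertises the highest profit
    }
}
```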
The Economic World
In the preliminary experiment, it was desirable to have a set of technologies and goods
that interacted in a uniform, extensible manner. Therefore, the following scheme was
employed. Goods were "stacked" into R rows and C columns, as shown in Figure 5.
End goods out
good(R,0)   good(R,1)   good(R,2)   ...   good(R,C)
  ...         ...         ...              ...
good(1,0)   good(1,1)   good(1,2)   ...   good(1,C)
good(0,0)   good(0,1)   good(0,2)   ...   good(0,C)
Raw materials in
Figure 5: The simplified economic world used in preliminary experiments.
In this figure, the unfilled arrows represent technologies. Note that each technology takes
two goods in one row, and converts them into a single good in the next row. Such
conversions are conservative, such that x quantity of one good, combined with y quantity
of another good, yields (x+y) quantity of the third good. Note that the world "wraps
around" at the right and left edges. Thus, the world is expandable via the R and C
parameters, and goods in the R-2 intermediate rows are treated identically.
The filled arrows in Figure 5 represent external suppliers of raw materials, and consumers
of end goods. The raw material goods (in the lowest row) and the end goods (in the
highest row) are controllable through the supply and demand of external suppliers and
consumers. Note that the total number of goods N=R*C, and the total number of
technologies M=(R-1)*C. The preliminary experiment presented here is the minimal
meaningful form of this world. There is one layer of intermediate goods (R=3). For
simplicity, C=3, giving a total of N=9 goods and M=6 technologies. External supplies
and demands are manipulated such that there is a fixed price for the raw materials, and
another fixed price for end goods.
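The grid bookkeeping described above (N = R*C goods, M = (R-1)*C technologies, wrap-around at the edges) can be sketched as follows. The indexing conventions are assumptions for illustration.

```java
// Sketch of the grid-world bookkeeping: each technology combines two
// adjacent goods in one row (wrapping at the edges) into one good in the
// row above. Conversion is conservative: x + y units in, x + y units out.
public class EconomicWorld {
    final int R, C;

    EconomicWorld(int rows, int cols) { R = rows; C = cols; }

    int numGoods()        { return R * C; }
    int numTechnologies() { return (R - 1) * C; }

    // Technology (row, col) consumes good(row, col) and good(row, (col+1) % C)
    // and produces good(row+1, col).
    int[] inputsOf(int row, int col) { return new int[]{ good(row, col), good(row, (col + 1) % C) }; }
    int   outputOf(int row, int col) { return good(row + 1, col); }

    int good(int row, int col) { return row * C + col; }

    public static void main(String[] args) {
        EconomicWorld w = new EconomicWorld(3, 3); // the minimal world from the text
        System.out.println(w.numGoods() + " " + w.numTechnologies()); // 9 6
        System.out.println(w.inputsOf(0, 2)[1]); // wraps around to good(0, 0)
    }
}
```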
Results
In the first experiment, the prices of these raw materials and end goods are both
maintained at a value of one (i.e. the external supply and demand adjust so as to maintain
S = 0 for these goods). A population of 50 producerAgents interacts in the market, and
in the EC mating process.
Prices of the three intermediate goods in this simulation are shown in Figure 6. Variations
in these prices are due to evolution of the agents involved in the market. As prices for
intermediate goods rise (due to low supply and high demand), agents evolve to produce
these high-profit goods. This causes supply to increase, demand to fall, and prices to fall.
Then the agents evolve towards being consumers of the low-cost intermediate goods.
Note that the agents involved have no memory of price history. All changes in behavior
occur through the action of EC.
[Plot: prices (0 to 18) of the three intermediate goods versus trading day (1 to about 2,900).]
Figure 6: Prices of three intermediate goods as a function of trading day in an evolving
economy of producers and consumers.
In a second experiment, the price of raw materials remains fixed at 1, but the price of end
goods is raised to the fixed value of 5. Thus, a fixed price gradient is maintained from
raw materials to end goods. Prices of the three intermediate goods in this simulation are
shown in Figure 7. Once again, all price variations are due to agent evolution.
[Plot: prices (0 to 50) of the three intermediate goods versus trading day (1 to about 1,800).]
Figure 7: Prices of three intermediate goods as a function of trading day in an evolving
economy of producers and consumers, with a fixed, positive price gradient.
Analysis
Clearly, these are only preliminary results from the system. However, they do point to the
advantages of the framework developed during the BT fellowship. Unlike much of the
extant research on coevolving systems (for instance, the fighter plane research shown in
Figure 1), this particular setting yields results that can be analyzed. For instance, consider a
Fast Fourier Transform of the data in Figure 6, shown as a power spectral density in
Figure 8. In this figure, the power spectrum for each good is normalized, such that the
maximum power equals one.
[Figure omitted: y-axis "Relative Power" (0-1), x-axis "1/Frequency (wavelength in trading days)" (93-1024).]
Figure 8: A power spectral density of prices for the three intermediate goods shown in
Figure 6. Power values are normalized, such that the maximum power for each good
equals one.
The graph shows the complexity of the price fluctuations for each good (i.e., there are
many significant frequency components). Note that, given the symmetries of the
economic world, there is no clear reason that each good should have different price
variations. However, although the goods all share some frequencies of variation in
common (chiefly those around 1/300 trading days), there are broad variations in the
secondary frequency components. This is most likely due to subtle, "genetic drift" effects
in the EC system.
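The normalization used in Figures 8 and 9 (each spectrum scaled so that its peak equals one) is straightforward to reproduce with a direct DFT. The sketch below is illustrative, not the analysis code used to produce the figures:

```java
// Power spectral density via a direct DFT, normalized so that the peak
// power equals one -- the same normalization used for Figures 8 and 9.
public class Psd {
    public static double[] normalizedPower(double[] x) {
        int n = x.length;
        double[] power = new double[n / 2 + 1];
        double max = 0.0;
        for (int k = 0; k <= n / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {
                double angle = 2.0 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            power[k] = re * re + im * im;
            if (power[k] > max) max = power[k];
        }
        if (max > 0.0)
            for (int k = 0; k < power.length; k++) power[k] /= max; // peak = 1
        return power;
    }
}
```

For a price series dominated by a single cycle, the normalized spectrum has a single unit-height peak at the corresponding wavelength; secondary peaks, as in Figure 8, indicate the more complex variations discussed above.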
Contrast these results to those shown in Figure 9, which is a similar power spectral
density for the experiment shown in Figure 7. Recall that in this experiment, there is a
positive price gradient maintained from the raw materials to the end goods. The power
spectral density shows that the price variations in this experiment are much less complex
than those in the previous experiment (i.e., there are fewer pronounced secondary
frequencies). Moreover, the price variations in the intermediate goods (although different
in scale) are more similar than those of the previous experiment (i.e., there are fewer
differences between the spectra of the different goods).
[Figure omitted: y-axis "Relative Power" (0-1), x-axis "1/Frequency (wavelength in trading days)" (93-1024).]
Figure 9: A power spectral density of prices for the three intermediate goods shown in
Figure 7. Power values are normalized, such that the maximum power for each good
equals one.
Future Study
Clearly, these experiments only begin to scratch the surface of a large area for
investigation that is of importance to both the EC and the agent-based systems
communities. The research environment presented here is uniquely suited to this
investigation. It has the advantages of
• embodiment of the agents in a generalized, fully asynchronous agent system (IBM's
Aglets),
• a flexible, object-oriented design, well-suited to modifications, extensions, and
further experimentation,
• possible extension into real-world applications, and
• coevolved results that manifest themselves in a clear, analyzable form.
The last point is key: previous research efforts on coevolved systems have used
experimental environments where clear-cut examination of agent interactions was
difficult or impossible.
Several further experiments with the current system present themselves immediately,
including those discussed below.
Problem Variations
In the preliminary experiments shown here, supply and demand are manipulated to
maintain constant prices for raw materials and end goods. However, a more realistic
simulation would consider fixed flow rates for raw materials and end goods, or flows that
are dependent on prices. This is a fertile area for exploration that is easily supported by
the current framework.
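One natural form for price-dependent flows is a constant-elasticity rule. The functions below are purely illustrative (the names and the exponent are assumptions, not part of the framework):

```java
// Constant-elasticity flow rules: external supply of a raw material grows
// with its price, while external demand for an end good shrinks with its
// price. baseFlow and elasticity are free parameters of the experiment.
public class Flows {
    public static double supplyFlow(double price, double baseFlow, double elasticity) {
        return baseFlow * Math.pow(price, elasticity);   // upward-sloping supply
    }

    public static double demandFlow(double price, double baseFlow, double elasticity) {
        return baseFlow * Math.pow(price, -elasticity);  // downward-sloping demand
    }
}
```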
Another problem variation lies in the complexity of technology interactions. In the
preliminary experiments presented here, there is a uniformity of the economic world,
induced by the similarity of all technologies. Breaking symmetries in the economic
world, through the introduction of more complex technologies, is an interesting area for
investigation. Once again, this extension is easily introduced into the current framework.
Agent Variations
The most significant simplification in the preliminary experiments is that the agents
assign their workers blindly, as dictated by their genetic code. All reactivity to price
patterns occurs on the evolutionary time scale.
An obvious next step is to build simple price reactivity to the agents. As a first step, this
would involve no learning of price patterns during the agent's lifetime. The agent would
only have the capacity to shift worker allocation in reaction to current prices, due to a
strategy dictated by its genetic code. This closely parallels previous work by the author
on coevolution in learning classifier systems (Smith, 1995).
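Such a genetically dictated reaction strategy could be as simple as a softmax over current prices, with one evolved sensitivity weight per technology. The sketch below is one possible form, not the framework's agent code:

```java
// Price-reactive worker allocation: the genome supplies one sensitivity
// weight per technology, and workers are split by a softmax over
// weight * currentPrice. No lifetime learning is involved -- the weights
// themselves change only through evolution.
public class ReactiveAllocator {
    public static double[] allocate(double[] weights, double[] prices, int workers) {
        int n = weights.length;
        double[] score = new double[n];
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            score[i] = Math.exp(weights[i] * prices[i]);
            sum += score[i];
        }
        double[] alloc = new double[n];
        for (int i = 0; i < n; i++) alloc[i] = workers * score[i] / sum;
        return alloc;
    }
}
```

With positive weights, more workers flow toward higher-priced goods, giving the agent reactivity on the trading-day time scale while keeping adaptation of the weights themselves on the evolutionary time scale.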
A second step would be to allow the agents to adapt internal parameters during their
lifetimes. This would involve memory of past price patterns within the agent, and
resulting lifetime learning. Through such learning, an individual agent would have the
capacity to predict and anticipate price patterns. Once again, this closely parallels
previous work by the author (Smith, 1995).
Note that as the complexity of the agent's strategies increases, issues of embodiment
become more important. An agent’s ability to sense, calculate, and execute is implicitly
tied to its computational interactions with other agents, and interactions with its
computational environment. In these situations, the sort of framework presented here is
key to future investigation. The current framework will easily support these extensions.
EC Variations
The modifications suggested above lead to important issues in EC. Chief among these is
the Baldwin Effect. This effect is the interplay of lifetime learning and genetic evolution.
If an agent is evolved to learn a particular behavioral strategy, that agent's descendants
are more likely to learn that strategy as well, although the strategy is not specifically
inherited genetically. This interplay of genetic learning and lifetime learning is key to
desirable emergent behavior in agent-based systems. The current system is well poised
for an examination in this area.
Many other more technical EC issues can be examined within the current system,
including:
• more complex coding of strategies, including genetic programming techniques
• mating restrictions to promote distinct species of agents
• diploidy and dominance to improve tracking of time variations in the agent-based
system (Smith, 1992)
Given the object-based nature of the EC agents framework, each of these extensions is
straightforward.
Final Comments
The system of producer/consumer agents developed during the BT fellowship, coupled
with the author's previous work on constructing a framework for EC in agent-based
systems, presents a clear opportunity for further research. The current system is
analyzable, while maintaining a realistic, embodied character of the agents. Preliminary
results show the promise of investigations using this system. Given that EC seems a
theoretically and empirically appropriate technique for developing social, adaptive
behavior in agents, further investigations with this system are promising.
Appendix: Details of the EC Agents Framework
Design Philosophy
The structure of a typical EC software system is shown in Figure 10. In such systems, a
central program manipulates data structures that comprise the GA population. Although
these structures may act as agents, through interactions with an external world, they do
not function autonomously. All interactions are dictated by the central GA program. Even
in distributed GAs (Kapsalis, Smith & Rayward-Smith, 1994) this structure is
maintained, though the central GA program may be replicated at several locations.
[Figure omitted: diagram of a centralized GA program (evaluation/interaction with external applications, selection, recombination, mutation) operating on a population of individuals that are simple genetic encodings of parameters.]
Figure 10: The structure of typical GA (EC) software systems.
A truly agent-based GA structure is shown in Figure 11. In this system, the population is
composed of standardized software agents that interact with each other, and with the
external world, autonomously. There is no central GA. Agents can interact in general
ways, not just genetically. When interacting genetically, these agents exchange standard
objects that allow them to select mates, exchange genetic material, and construct
children, all based on their own internal goals and procedures.
[Figure omitted: diagram of a generalized agent environment containing genetic and non-genetic agents; each agent performs its own evaluation and selection of mates and applies its own recombination and mutation strategies, exchanging genetic and non-genetic messages with other agents and the "outside world" to produce "child" agents.]
Figure 11: The structure of a truly agent-based GA (EC) system.
Implementation Specifics
The EC agents framework was implemented in Java, for its machine independence. The
framework is also based on IBM's Aglets, for agent infrastructure, relationship to
emerging systems standards, and the possibility of agent mobility. Javadoc is available
for the entire framework, from the Intelligent Business Systems Group at BT Labs. The
following sections outline the basics of the framework.
The EC agents framework consists of two main packages. The ECbasics package
defines a set of extensible objects for EC. It is not specific to any particular agent system.
The GAglet package defines a set of EC extensions and utilities for IBM's Aglets.
These packages are discussed in greater detail below.
The ECbasics Package
This package defines a number of key interfaces for the objects exchanged in EC
interactions. These include:
Randomizable
This interface defines methods that allow one to generate a "random" instance of an
object. What this means depends on the particular class that implements the interface. For
instance, for a BooleanGene (see below), this interface implements methods that
generate a gene with an equiprobable, random (true or false) value.
Mutatable
This interface defines methods that cause a random change in an object. What this means
depends on the particular class that implements this interface. For instance, in a
BooleanGene (see below), this interface implements methods that change a true
valued gene to a false valued gene, or vice versa.
Gene
This interface trivially extends Randomizable and Mutatable.
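In outline, the interface hierarchy and a Boolean gene might look like the following. This is a sketch reconstructed from the descriptions above; the method names are assumptions, and the real signatures are given in the framework's Javadoc:

```java
import java.util.Random;

// Sketch of the ECbasics gene interfaces, as described in the text.
interface Randomizable { void randomize(Random rng); }
interface Mutatable    { void mutate(Random rng); }
interface Gene extends Randomizable, Mutatable { }

// A Boolean gene: randomize picks true or false equiprobably,
// and mutate flips the current value.
class BooleanGene implements Gene {
    private boolean value;

    public boolean getValue() { return value; }
    public void randomize(Random rng) { value = rng.nextBoolean(); }
    public void mutate(Random rng)    { value = !value; }
}
```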
Genotype
This interface extends Randomizable and Mutatable, but also defines methods to
generate objects that implement Sperm or Egg interfaces (see below).
Gamete
This interface defines methods that can "get" and "set" segments of an object. What this
means depends on the class that implements this interface.
Sperm
This interface extends Gamete, but only trivially.
Egg
This interface extends Gamete, but also defines routines that can combine the
implementing class with a class implementing the Sperm interface. The result is an
object implementing the Genotype interface (i.e., the genotype of a potential child).
Example Objects
BooleanGene implements Gene
This class is the typical Boolean gene often used in GAs.
FloatGene implements Gene
This class is a floating point gene, like those often used in evolutionary programming. Its
mutation operator is a Gaussian "creep" around the current gene value.
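The Gaussian "creep" operator adds zero-mean Gaussian noise to the current value. A sketch, with the step size sigma as an assumed parameter:

```java
import java.util.Random;

// Floating-point gene with Gaussian "creep" mutation: the value takes a
// small, normally distributed step around its current position.
class FloatGene {
    private double value;
    private final double sigma; // creep step size (assumed parameter)

    FloatGene(double value, double sigma) { this.value = value; this.sigma = sigma; }

    double getValue() { return value; }
    void mutate(Random rng) { value += sigma * rng.nextGaussian(); }
}
```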
VectorGenotype extends Vector, implements Genotype
This class is the typical chromosome string often used in GAs, but it also inherits all the
functionality of the Java Vector class.
VectorEgg and VectorSperm extend Vector, implement Egg or Sperm,
respectively
These classes implement the recombination of string chromosomes typically used in
GAs.
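The egg/sperm exchange amounts to crossover between two parent chromosomes. A minimal one-point crossover sketch in the same spirit (plain arrays here rather than the framework's Vector-based classes):

```java
// One-point crossover in the spirit of VectorEgg/VectorSperm: the "egg"
// keeps its genes up to the cut point and takes the "sperm" segment after
// it, yielding a child genotype.
public class Crossover {
    public static int[] onePoint(int[] egg, int[] sperm, int cut) {
        int[] child = new int[egg.length];
        for (int i = 0; i < egg.length; i++)
            child[i] = (i < cut) ? egg[i] : sperm[i];
        return child;
    }
}
```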
Other Possibilities
Within the framework, Genes, Genotypes, Sperm, and Eggs can be any type of object.
For instance, one could easily implement classes like
• HashtableGenotype
• TreeGenotype
• IntegerGene
• VectorGene
etc.
The GAglet Package
This package implements EC functionality for agents within the IBM Aglet framework.
The key classes of this package are discussed below.
There are two key utility classes:
Plumage
Objects from this class are used by agents to "advertise" themselves for mating to other
agents.
Phylum
Objects from this class are used to initialize the basic genetic behaviors and intentions of
a "first generation" agent, including its preferences in mates.
There are also two key agent classes:
GAglet extends Aglet
This class is a basic Aglet agent, with methods that allow it to be initialized with either
Phylum objects, or objects that implement the Genotype interface.
Egglet extends GAglet
This class implements methods that provide the agent with mate seeking behavior. These
methods broadcast Plumage and Sperm objects, and call appropriate methods to create
new child agents.
References
Aglets Workbench. http://www.trl.ibm.co.jp/aglets/
Brooks, R. A., Breazeal (Ferrell), C., Irie, R., Kemp, C., Marjanovic, M., Scassellati, B.
and Williamson, M. (1998). Alternate essences of intelligence. To appear in Proceedings
of the AAAI-98 Conference. AAAI Press.
Deb, K. and Goldberg, D. E. (1989). An investigation of niche and species formation in
genetic function optimization. Proceedings of the Third International Conference on
Genetic Algorithms. pp. 42-50.
Franklin, S. and Graesser, A. (1997). Is it an agent, or just a program?: A taxonomy for
autonomous agents. In Proceedings of the Third International Workshop on Agent
Theories, Architectures, and Languages. Springer-Verlag. pp. 21-35.
Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine
learning. Addison-Wesley. Reading, MA.
Holland, J. H. (1975). Adaptation in natural and artificial systems. The University of
Michigan Press. Ann Arbor, MI.
Horn, J., Goldberg, D. E. and Deb, K. (1994). Implicit niching in a learning classifier
system: Nature's way. Evolutionary Computation, 2(1). pp. 37-66.
Kapsalis, A., Smith, G. D. and Rayward-Smith, V. J. (1994). A unified paradigm for
parallel genetic algorithms. In T. Fogarty (ed.) Evolutionary Computing: AISB Workshop.
Springer-Verlag. pp. 131-149.
Koza, J. R. (1992). Genetic programming: On the programming of computers by means
of natural selection. MIT Press.
Ray, T. (1990). An approach to the synthesis of life. In Langton, C., Taylor, C.,
Farmer, J. D. and Rasmussen, S. (eds.) Artificial Life II. Addison-Wesley. pp. 371-408.
Rosin, C. D. and Belew, R. K. (1997). New methods in competitive coevolution.
Evolutionary Computation, 5(1). pp. 1-29.
Smith, R. E. (1995). Memory exploitation in learning classifier systems. Evolutionary
Computation, 2(3). pp. 199-220.
Smith, R. E. and Dike, B. A. (1995). Learning novel fighter combat maneuver rules via
genetic algorithms. International Journal of Expert Systems, 8(3). pp. 247-276.
Smith, R. E., Forrest, S. and Perelson, A. S. (1993). Searching for diverse, cooperative
populations with genetic algorithms. Evolutionary Computation, 1(2). pp. 127-149.
Smith, R. E. and Goldberg, D. E. (1992). Diploidy and dominance in artificial genetic
search. Complex Systems, 6(3). pp. 251-285.
Smith, R. E. and Taylor, N. (1998). A framework for evolutionary computation in agent-
based systems. In C. Looney and J. Castaing (eds.) Proceedings of the 1998
International Conference on Intelligent Systems. ISCA Press. pp. 221-224.
Wooldridge, M. and Jennings, N. (1996). Software agents. IEE Review, January 1996.
pp. 17-20.