1) A series of experiments compared the evolution of neural network agents playing the board game Cartagena under different conditions of heuristic complexity and evolutionary supervision.
2) Agents with more complex heuristics evolved more quickly initially but were not necessarily stronger, while less complex agents had to learn spatial relationships.
3) Agents evolving in an unsupervised environment consistently improved over generations, unlike those with expert supervision.
1. Comparison of the impact of heuristic complexity and evolutionary populations on the co-evolution of neural-network-based Cartagena players
John Faherty
March 2009
Abstract
A series of genetic evolution experiments were conducted in which multilayer feed-forward neural-net game-playing agents competed in an evolutionary environment based on their ability to play a relatively simple board game called Cartagena (described below). Two series of experiments are presented in this thesis. Firstly, three cohorts of Cartagena agents with increasingly complex heuristics were evolved and compared to investigate the impact of heuristic complexity on agent evolution. The second series of experiments involved three cohorts of neural-net-based game-playing agents (with the same level of heuristic complexity) evolving in environments with differing levels of supervision, i.e. unsupervised, semi-supervised and fully supervised by expert agents. The strength of the evolving agents was assessed against external fixed benchmarking agents.
The results of the heuristic complexity experiments showed that more complex agents, which did not need to learn the spatial details of the board, evolved more quickly, but that some of the more complex heuristics actually hindered evolution. The results of the supervision experiments showed that agents evolved in an unsupervised environment increased in strength through successive evolutionary cycles, whereas agents evolved in semi-supervised and fully supervised environments did not.
2. Context and Aim of the research
Game description: The game of Cartagena is based on the tale of a 1672 breakout from a Spanish prison in Cartagena. Each player controls 6 pirates, and the objective is to navigate them from Cartagena, through an underground passage, to safety (see Figure 1). Each space in the passage bears one of six distinct symbols (dagger, hat, pistol, bottle, skull and keys), and there is a pack of cards, each card bearing one of the symbols. Players move their pirates towards the Sloop by playing cards, or they may choose to pick up cards by moving backwards through the passageway. Players take turns sequentially, and each turn comprises three “moves”, each of which is either playing a card or picking up cards. The winner is the first player to navigate all of their pirates to the Sloop.
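The two move types can be sketched as follows. This is a minimal illustration only, assuming the standard published Cartagena movement rules (a played card moves a pirate forward to the next matching space holding no pirates, or to the Sloop if none exists; a backward move lands on the nearest space behind holding one or two pirates and draws that many cards); the thesis itself does not spell these details out, and all function and variable names here are hypothetical.

```python
# Illustrative sketch of the two Cartagena move types (assumed standard rules).

def forward_move(board, occupancy, from_pos, card):
    """Move forward to the next space bearing `card` that holds no pirates.
    `board[i]` is the symbol on space i; the final index is the Sloop.
    Returns the destination index (the Sloop if no matching free space exists)."""
    sloop = len(board) - 1
    for pos in range(from_pos + 1, sloop):
        if board[pos] == card and occupancy[pos] == 0:
            return pos
    return sloop  # no free matching space ahead: straight to the Sloop

def backward_move(occupancy, from_pos):
    """Move back to the nearest space holding one or two pirates and draw
    that many cards. Returns (destination, cards_drawn), or None if no
    legal backward move exists."""
    for pos in range(from_pos - 1, 0, -1):
        if 1 <= occupancy[pos] <= 2:
            return pos, occupancy[pos]
    return None
```

A turn would then consist of three such moves, chosen freely between the two types.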
Agent design: Each game-playing agent comprises a neural net. Move selection is undertaken by identifying all of the potential one-ply game positions reachable from the current position. Features of each candidate position (i.e. heuristics) are then fed into the neural net, and the position with the highest output from the neural net defines the move to be played. The agents play each other in round-robin competitions within evolutionary pools, with the winners of these games being assigned positive pay-offs (and the losers negative pay-offs).
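The one-ply selection scheme described above can be sketched as follows. The tiny tanh network and the helper callables (`legal_moves`, `apply_move`, `extract_features`) are hypothetical stand-ins, since the thesis does not specify the network topology or feature encodings:

```python
import math

def neural_net(weights, features):
    """Tiny feed-forward net: one hidden tanh layer, linear output.
    `weights` = (W1, b1, w2, b2); the shapes are purely illustrative."""
    W1, b1, w2, b2 = weights
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

def select_move(weights, position, legal_moves, apply_move, extract_features):
    """One-ply lookahead: generate every position one move ahead, score each
    with the net, and play the move leading to the highest-scoring position."""
    best_move, best_score = None, -float("inf")
    for move in legal_moves(position):
        child = apply_move(position, move)
        score = neural_net(weights, extract_features(child))
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

Note that the net never chooses a move directly; it only ranks the resulting positions, which is what makes the heuristic feature set so influential.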
Research question: This research investigates two subjects: the impact of heuristic complexity on agent evolution, and the impact of supervision of the evolutionary pool on agent evolution.
Fig 1: Cartagena board, showing the passage from Cartagena to the Sloop and the direction of travel.
3. Research method/Techniques
Figure 2 (flowchart): Initialise Population → Randomly Vary Individuals → Evaluate Fitness → Apply Selection
Overview: An overview of the co-evolutionary process is given in Figure 2. The first stage is to initialise the population, where the weights and biases of the neural nets in the initial population are set randomly. The next stage is to evaluate the fitness of the individuals within the pool, via ‘round-robin’ competition within the evolutionary pool. The fittest individuals are selected and mutated to form the next generation, and the process is repeated. Periodically, the strength of the fittest individuals within a pool is assessed objectively by benchmarking against an ensemble of three fixed external agents.
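The cycle above can be sketched as a mutation-only evolutionary loop. This is an illustrative stand-in, not the thesis implementation: the text mentions selection and mutation but gives no population sizes or mutation rates, so all parameters here are assumptions, and the `fitness` callable stands in for the round-robin pay-off tally.

```python
import random

def evolve(pop_size, n_weights, fitness, generations, sigma=0.1, survivors=3):
    """Mutation-only evolutionary loop: initialise random weight vectors,
    score them with `fitness` (a stand-in for round-robin competition),
    keep the fittest `survivors`, and refill by Gaussian mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # evaluate and rank the pool
        elite = pop[:survivors]               # apply selection
        pop = elite + [                       # randomly vary individuals
            [w + random.gauss(0, sigma) for w in random.choice(elite)]
            for _ in range(pop_size - survivors)
        ]
    return max(pop, key=fitness)
```

Because the elite are carried over unchanged, the best fitness in the pool can never decrease between generations.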
Heuristic Complexity: Heuristics are defined as features of potential game boards that serve as inputs to the neural nets. This experiment compared the evolution of agents with three different levels of heuristic complexity:
• ‘Simple’ heuristics, where the locations of the pirates in the passageway are the main input.
• ‘Spatial’ heuristics, which attempt to capture spatial information concerning the pirates’ relative positions, as well as the distance travelled by each pirate along the passageway.
• ‘Spatial and Cards’ heuristics, which are as the Spatial heuristics but also include looking ahead to cards playable in subsequent moves, and some consideration of the positions of the opponent’s pirates.
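The difference between the first two levels can be illustrated as feature extractors. The exact features used in the thesis are not listed, so the choices below (distance travelled, spread, furthest pirate) are hypothetical examples of "spatial" pre-computation that a Simple-heuristic agent would instead have to learn:

```python
def simple_features(pirates, passage_len):
    """'Simple' heuristics: raw pirate locations only (normalised);
    the net must learn the board's spatial structure itself."""
    return [p / passage_len for p in pirates]

def spatial_features(pirates, passage_len):
    """'Spatial' heuristics: pre-computed positional information
    (illustrative choices, not the thesis's actual feature set)."""
    travelled = sum(pirates) / (len(pirates) * passage_len)  # mean progress
    spread = (max(pirates) - min(pirates)) / passage_len     # pirate spread
    furthest = max(pirates) / passage_len                    # lead pirate
    return [travelled, spread, furthest]
```

The Spatial and Cards level would extend the second vector with card-availability and opponent-position terms.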
Levels of Supervision: This experiment investigated the influence of the level of supervision on agent evolution by considering three levels of supervision. In the “unsupervised” experiment, the round-robin fitness assessment only involved play against other members of the evolutionary pool. In the “semi-supervised” trial, half of the round-robin games were against other members of the pool and half were against expert agents; in the “fully supervised” trial, all of the games that the agents played were against expert agents.
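The three regimes amount to different opponent schedules for the fitness round. A minimal sketch, assuming a fixed number of games per agent per generation (the thesis does not state this number, and the pairing logic here is illustrative only):

```python
def opponent_schedule(pool, experts, games_per_agent, mode):
    """Assign round-robin opponents under the three supervision regimes
    (illustrative pairing only; no games are actually played here)."""
    schedule = {}
    for agent in pool:
        peers = [p for p in pool if p != agent]
        if mode == "unsupervised":
            opponents = peers                       # pool members only
        elif mode == "semi-supervised":
            half = games_per_agent // 2             # half pool, half experts
            opponents = peers[:half] + experts[:games_per_agent - half]
        elif mode == "fully supervised":
            opponents = experts * (games_per_agent // max(len(experts), 1))
        else:
            raise ValueError(mode)
        schedule[agent] = opponents[:games_per_agent]
    return schedule
```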
Figure 2: Evolutionary process
4. Results: Heuristic complexity trials
Figure 3 shows the results of the experiments into the effect of heuristic complexity on strength development. The figure shows the average strength of the three fittest agents at each evolutionary generation, measured against the “Random” benchmark (which selects moves at random).
• The Spatial & Cards heuristic based agents ultimately evolved to a higher strength level than the Spatial heuristic based agents.
• The Spatial based agents and the Spatial & Cards based agents evolved more rapidly between generations 0 and 200 than the Simple heuristic based agents.
• By generation 500 the Simple heuristic based agents had a greater strength than the Spatial heuristic based agents. Unfortunately the Simple heuristic trial was shorter (due to time constraints) than the other heuristic trials, which is one of the limitations of this research.
Figure 3: Random benchmark. Average strength (−100 to 900) against generation (0–1800) for the Spatial & Cards, Spatial and Simple heuristic cohorts.
5. Results: Cohort population trials
Figure 4 (chart data): Random benchmark for the different cohort populations. Average strength (−600 to 800) against generation (0–140) for the Fully Supervised, Unsupervised and Semi-Supervised cohorts.
Figure 4 shows the results of the trials with different levels of supervision.
• The agents in the unsupervised trial became continuously stronger relative to the random benchmark. This is shown by a continuous improvement in strength (against the external benchmark) with increasing numbers of generations.
• The agents evolved in the semi- and fully supervised cohorts did not show any increase in strength with successive evolutionary cycles.
• The average strength of the agents in the fully and semi-supervised environments is less than zero against the random benchmark, meaning they are routinely beaten by a benchmark player that plays random moves. This is because a random player will generally play cards and therefore move forward, whereas an agent with effectively random weights will actively choose moves that favour certain arbitrary features.
Figure 4: Results of Cohort population trials
6. Conclusions
Heuristic Complexity Trials:
• The strength development of the agents did vary with heuristic complexity, although more complex heuristics did not necessarily produce a stronger evolved agent. It is thought that the Spatial heuristic agents may have been hindered by less pertinent heuristics interfering with the more basic salient ones, whereas the Simple heuristics contained only the basic salient heuristics.
• The initial strength development of the Simple heuristic based agents was slower than that of the Spatial and Spatial & Cards agents. This resulted from the fact that the Simple heuristic agents had to learn the spatial relationships between the different sections of the passage, whereas these relationships were inherent in the inputs used by the Spatial and Spatial & Cards heuristics.
• The extra board features included in the Spatial & Cards heuristics, compared with the Spatial heuristics, allowed the Spatial & Cards agents to evolve to a higher strength, implying that the extra heuristic inputs captured important board features appropriate for board evaluation.
Evolutionary Pool Trials:
• Evolution of agents in an unsupervised evolutionary cohort is a far more effective method for developing neural-net-based agents than evolution within semi- or fully supervised cohorts. In these trials, only the agents evolved in an unsupervised cohort showed a consistent increase in strength through evolutionary cycles; the agents evolved in semi- and fully supervised cohorts showed no increase in strength with successive evolutionary cycles.
7. References (limited by word count to key references – full reference list in dissertation text)
Chellapilla, K. and Fogel, D. (1999) Evolution, Neural Networks, Games, and Intelligence, in Proceedings of the IEEE 87 (9), 1471–1496
Yao, X. (1999) Evolving Artificial Neural Networks, in Proceedings of the IEEE 87 (9), 1423–1447
Runarsson, T. and Lucas, S. (2005) Coevolution Versus Self-Play Temporal Difference Learning for Acquiring Position Evaluation in Small-Board Go, in IEEE Transactions on Evolutionary Computation 9 (6), 628–640
Russell, S. and Norvig, P. (2003) Artificial Intelligence – A Modern Approach, Prentice-Hall
Darwen, P. (2001) Why co-evolution beats temporal difference learning at Backgammon for a linear architecture, but not a non-linear architecture, in Proceedings of the 2001 Congress on Evolutionary Computation, 1003–1010
Mandziuk, J., Kusiak, M. and Waledzik, K. (2007) Evolutionary-based heuristic generators for checkers and give-away checkers, in Expert Systems 24 (4), 189–211