Samuel’s Checkers Program - In 1959, Arthur Samuel started to look at Checkers. - Weights determined through self-play. - 39 features. - Included look-ahead via minimax (alpha-beta). - Defeated Robert Nealey.
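Below is a minimal sketch of the kind of depth-limited minimax look-ahead with alpha-beta pruning mentioned on this slide. The callables `moves_fn` (move generator returning child positions) and `eval_fn` (weighted-feature evaluation) are hypothetical placeholders, not Samuel's actual code; a root search would call `alphabeta(start, depth, float("-inf"), float("inf"), True, moves_fn, eval_fn)`.

```python
# Sketch of depth-limited minimax with alpha-beta pruning.
# `moves_fn` and `eval_fn` are assumed, caller-supplied helpers.

def alphabeta(board, depth, alpha, beta, maximizing, moves_fn, eval_fn):
    children = moves_fn(board)
    if depth == 0 or not children:
        return eval_fn(board)                      # static evaluation at the leaf
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, moves_fn, eval_fn))
            alpha = max(alpha, value)
            if alpha >= beta:                      # prune: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, moves_fn, eval_fn))
        beta = min(beta, value)
        if beta <= alpha:                          # prune
            break
    return value
```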
Chinook - Produced by Jonathan Schaeffer in 1989. - 40,000 openings. - 8-piece endgame database in 1994. - Won the 1989 Computer Olympiad. - Chinook became the world champion, the first automated game player to have achieved this.
Deep Blue - Developed by IBM in the mid-1990s. - An attempt to create a Chess program capable of beating the world champion of the time. - 30 processors with parallel search; could evaluate up to 200 million chess positions per second. - 8,000 different features. - The opening database in Deep Blue consisted of 4,000 positions.
Deep Blue - The endgame database of Deep Blue consisted of all positions with five or fewer chess pieces on the board. - Defeated Garry Kasparov in a six-game match in 1997 to become the first computer program to defeat a world Chess champion.
Blondie24 - Produced by Fogel in 1999-2000. - Neural network as an evaluation function. - Values for input nodes: Red (Black) positive, White negative, Empty zero. - Piece differential. - Subsections (sub-boards).
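A rough sketch of the input encoding this slide describes: red pieces positive, white pieces negative, empty squares zero, with kings scaled by a king value. The board representation (a list of 32 playable squares) and the symbols are assumptions for illustration, not Fogel's exact data structures; the king value itself was evolved in Blondie24.

```python
# Sketch: map a checkers position to the network's numeric inputs.
# Representation (32 playable squares, character codes) is assumed.

KING_VALUE = 1.5   # evolved in Blondie24; fixed to 2 in Blondie24-R (later slides)

PIECE_VALUES = {
    "r":  1.0,              # red checker
    "R":  KING_VALUE,       # red king
    "w": -1.0,              # white checker
    "W": -KING_VALUE,       # white king
    ".":  0.0,              # empty square
}

def encode_board(squares):
    """Map the 32 playable squares to the network's input vector."""
    return [PIECE_VALUES[s] for s in squares]

def piece_differential(squares):
    """Simple material balance, also fed to the network."""
    return sum(PIECE_VALUES[s] for s in squares)
```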
Blondie24 - Initial population of 30 neural networks (players). - Each neural network plays 5 games (as red) against 5 randomly chosen players, scoring +1 for a win, 0 for a draw and -2 for a loss. - The best 15 players are retained; the other 15 players are eliminated. - The best 15 players are copied (replacing the 15 eliminated players) and mutated to form the next generation.
Blondie24 - The process is repeated for 840 generations, and the best player after these generations is retained (a sketch of the loop is given below). - Played 165 games at zone.com. - Rating: 2045.85 at that time. - In the top 500 of over 120,000 players on zone.com at that time. - Better than 99.61% of registered players on zone.com.
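A simplified sketch of the co-evolutionary loop described on the previous two slides (population of 30, five games each as red against random opponents, +1/0/-2 scoring, best 15 kept and copied, 840 generations). `random_network`, `mutate` and `play_game` are hypothetical stand-ins for Fogel's actual game engine and variation operators, and only the red player is scored here, which is a simplification.

```python
# Sketch of the evolutionary phase; helper callables are assumed.
import random

POP_SIZE, GAMES_PER_PLAYER, GENERATIONS = 30, 5, 840
SCORE = {"win": 1, "draw": 0, "loss": -2}

def evolve(random_network, mutate, play_game):
    population = [random_network() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        points = [0] * POP_SIZE
        for i, player in enumerate(population):
            opponents = random.sample(
                [j for j in range(POP_SIZE) if j != i], GAMES_PER_PLAYER)
            for j in opponents:
                result = play_game(red=player, white=population[j])  # "win"/"draw"/"loss" for red
                points[i] += SCORE[result]
        # Keep the best 15 and fill the other 15 slots with mutated copies.
        ranked = sorted(range(POP_SIZE), key=lambda i: points[i], reverse=True)
        survivors = [population[i] for i in ranked[:POP_SIZE // 2]]
        population = survivors + [mutate(p) for p in survivors]
    return population[0]   # best survivor from the final ranking
```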
Blondie24 - [Figure: Blondie24 performance after 165 games on zone.com.]
Blondie24-R - Has the same structure and architecture that Fogel used in Blondie24. - The only exception is that the value of the King is fixed at 2. - The King is more valuable than an ordinary piece, a fact that is well known even to novice players.
Blondie24-RR - Eliminates the randomness in the evolutionary phase of Blondie24-R. - A league competition between all 30 neural networks: every neural network plays against every other. - The total number of matches per generation is therefore 870 (30 × 29) rather than 150 (30 × 5). - This increase in the number of matches reduces the number of generations needed (compared with the 840 used previously).
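A small sketch of the round-robin (league) pairing this slide describes: every network meets every other network, so each unordered pair plays twice (once with each colour), giving 30 × 29 = 870 games per generation. `play_game` and the scoring dictionary reuse the hypothetical helpers from the earlier sketch; whether the white side is also scored is not stated on the slide, so only the red side is scored here.

```python
# Sketch of one generation's league (round-robin) scoring; helpers assumed.
from itertools import permutations

SCORE = {"win": 1, "draw": 0, "loss": -2}

def league_points(population, play_game):
    points = [0] * len(population)
    fixtures = list(permutations(range(len(population)), 2))   # ordered pairs
    # 870 fixtures when the population holds 30 players: 30 * 29.
    assert len(fixtures) == len(population) * (len(population) - 1)
    for red, white in fixtures:
        result = play_game(red=population[red], white=population[white])
        points[red] += SCORE[result]        # only the red side is scored here
    return points
```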
Results and Discussion - Results of playing against selected programs:

               | Blondie24-R | Blondie24-RR | Online | WinCheck3D      | SX Checkers
Blondie24-R    | -           | Draw         | Win    | Lose (7 pieces) | Lose (8 pieces)
Blondie24-RR   | Win         | -            | Win    | Lose (2 pieces) | Lose (4 pieces)
Results and Discussion - Blondie24-RR plays two matches (one as red and one as white) against Blondie24-R. - Blondie24-RR wins as red against Blondie24-R. - The result is a draw when Blondie24-RR moves second. - This reflects a success for our hypothesis, given that both players are end products of their evolutionary phases.
Results and Discussion - Blondie24-R and Blondie24-RR both win against an online program, which can be considered another success. - Both also play against two strong programs. - Against the first (WinCheck3D), Blondie24-RR lost with a two-piece difference, while Blondie24-R lost with a seven-piece difference. - Against the second (SX Checkers), Blondie24-RR lost with a four-piece difference, while Blondie24-R lost with an eight-piece difference.
Conclusion - The results show that Blondie24-RR performs better than Blondie24-R. - Based on these results it seems appropriate to use the league structure, instead of choosing only five random opponents to play against, during the evolutionary phase.
Future Work - Investigate whether other changes are possible. - Investigate using individual and social learning methods to enhance Blondie24-RR and overcome the problem of it being an end product.
References
1. Samuel, A. L., "Some Studies in Machine Learning Using the Game of Checkers," 1959, 1967.
2. Fogel, D. B., Blondie24: Playing at the Edge of AI, Academic Press, United States of America, 2002.
3. Chellapilla, K. and Fogel, D. B., "Anaconda Defeats Hoyle 6-0: A Case Study Competing an Evolved Checkers Program Against Commercially Available Software," 2000.
4. Fogel, D. B. and Chellapilla, K., "Verifying Anaconda's Expert Rating by Competing Against Chinook: Experiments in Co-evolving a Neural Checkers Player."
5. Chellapilla, K. and Fogel, D. B., "Evolution, Neural Networks, Games, and Intelligence," 1999.
6. Chellapilla, K. and Fogel, D. B., "Evolving an Expert Checkers Playing Program Without Using Human Expertise."
7. Chellapilla, K. and Fogel, D. B., "Evolving Neural Networks to Play Checkers Without Relying on Expert Knowledge," 1999.