1. THE IMPORTANCE OF A LOOK-AHEAD DEPTH TO EVOLUTIONARY CHECKERS
Belal Al-Khateeb (bxk@cs.nott.ac.uk) and Graham Kendall (gxk@cs.nott.ac.uk)
School of Computer Science (ASAP Group), University of Nottingham
2. Outline
- Introduction
- Checkers
- Samuel's Checkers Program
- Previous Work
- Experimental Setup
- Results and Discussion
- Conclusions
6. Samuel's Checkers Program
- In 1959, Arthur Samuel started to look at checkers
- Weights were determined through self-play
- 39 features
- Included look-ahead via minimax (with alpha-beta pruning)
- Defeated Robert Nealey
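The look-ahead Samuel used can be illustrated with a minimal minimax search with alpha-beta pruning (a generic sketch, not Samuel's actual code; the `evaluate` and `children` callbacks stand in for a real checkers evaluation function and move generator):

```python
def alpha_beta(state, depth, alpha, beta, maximizing, evaluate, children):
    """Minimax with alpha-beta pruning to a fixed look-ahead depth."""
    moves = children(state)
    if depth == 0 or not moves:        # depth limit reached or terminal position
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in moves:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent will never allow this branch
                break
        return value
    value = float("inf")
    for child in moves:
        value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                      True, evaluate, children))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```

For example, with a toy tree `[[3, 5], [2, 9]]` (leaves are scores) the maximizing root receives `max(min(3, 5), min(2, 9)) = 3`, and the branch under `[2, 9]` is cut off as soon as the 2 is seen.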
7. How good was Samuel's player?
- Samuel's program defeated Robert Nealey, although the victory is surrounded by controversy
- Did Nealey lose the game, or did Samuel win it?
8. How good was Samuel's player?
Red (Samuel's Program) : just about to make move 16
[Board diagram]
White (Nealey)
9. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram: forced jump]
White (Nealey)
10. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram]
White (Nealey)
11. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram: a strong checker is trapped (try to keep it); it can only advance to square 28]
White (Nealey)
12. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram: what move would you make?]
White (Nealey)
13. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram]
White (Nealey)
14. How good was Samuel's player?
Red (Samuel's Program)
[Board diagram]
- This was a very poor move
- It allowed Samuel to retain his "Triangle of Oreo"
- AND, by moving his checker from 19 to 24, it guaranteed Samuel a King
White (Nealey)
15. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram]
White (Nealey)
17. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram: Chinook said 16-12 then 5-1 would be a draw]
White (Nealey)
23. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram: this checker is lost]
White (Nealey)
24. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram]
White (Nealey)
25. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram: this checker could run (to 10), but..]
White (Nealey)
26. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram]
White (Nealey)
27. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram]
White (Nealey)
28. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram: forced jump]
White (Nealey)
29. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram]
White (Nealey)
31. How good was Samuel's player?
Red (Samuel's Program) : After Move 25
[Board diagram: victory]
White (Nealey)
32. How good was Samuel's player?
Two mistakes by Nealey:
- Allowing Samuel to get a King
- Playing a move that led to defeat when a draw was available
33. How good was Samuel's player?
- The next year, a six-game rematch was won by Nealey 5-1.
- Three years later (1966), the two world championship challengers (Walter Hellman and Derek Oldbury) each played four games against Samuel's program. They won every game.
34. Blondie24
- Produced by Fogel and Chellapilla in 1999-2000
- Neural network as an evaluation function
- Values for input nodes:
  - Red (Black): positive
  - White: negative
  - Empty: zero
- Piece differential
- Subsections (sub-boards)
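The input scheme above can be sketched as follows (a hypothetical encoding: the square symbols and the king weight are illustrative choices, not Blondie24's actual representation, in which the king's value was itself evolved):

```python
# Hypothetical encoding of the 32 playable squares as network inputs:
# red positive, white negative, empty zero, kings scaled by a king weight.
KING_WEIGHT = 1.3  # illustrative only; in Blondie24 this weight was evolved

def encode(board):
    """board: list of 32 symbols, 'r'/'R' red man/king, 'w'/'W' white, '.' empty."""
    values = {"r": 1.0, "R": KING_WEIGHT, "w": -1.0, "W": -KING_WEIGHT, ".": 0.0}
    inputs = [values[square] for square in board]
    piece_differential = sum(inputs)   # material balance, fed in as an extra input
    return inputs, piece_differential

# Example: two red men vs one white man on an otherwise empty board.
board = ["r", "r", "w"] + ["."] * 29
inputs, diff = encode(board)           # diff is +1.0 in red's favour
```

The sub-board inputs mentioned above would feed overlapping subsets of this same 32-element vector into the first hidden layer.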
36. Blondie24
- Initial population of 30 neural networks (players).
- Each neural network plays 5 games (as red) against 5 randomly chosen players:
  - +1 for a win
  - 0 for a draw
  - -2 for a loss
- The best 15 players are retained; the other 15 are eliminated.
- The best 15 players are copied and mutated to replace those eliminated.
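The selection scheme above can be sketched as one generation of a (15+15) evolutionary loop. This is a simplified illustration: `play` is a random stand-in for an actual checkers game between two networks, and only the red player's score is updated.

```python
import random

POP_SIZE, SURVIVORS, GAMES = 30, 15, 5
WIN, DRAW, LOSS = 1, 0, -2

def play(red, white):
    """Placeholder for a real game between two evaluation networks."""
    return random.choice([WIN, DRAW, LOSS])

def one_generation(population, mutate):
    """Score everyone, keep the best 15, and refill with mutated copies."""
    scores = {i: 0 for i in range(len(population))}
    for i, player in enumerate(population):
        # 5 games as red against 5 randomly chosen distinct opponents.
        opponents = random.sample([j for j in range(len(population)) if j != i],
                                  GAMES)
        for j in opponents:
            scores[i] += play(player, population[j])
    ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    best = [population[i] for i in ranked[:SURVIVORS]]
    return best + [mutate(p) for p in best]   # mutated copies replace the rest
```

Repeating `one_generation` 840 times with a real game and a real mutation operator is, in outline, the process the next slide describes.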
37. Blondie24
- Repeat the process for 840 generations; the best player after these generations is retained.
- Played 165 games at zone.com.
- Rating: 2045.85 at that time.
- In the top 500 of over 120,000 players on zone.com at that time.
- Better than 99.61% of registered players on zone.com.
38. Blondie24
- There has been a lot of discussion about the importance of the look-ahead depth.
- It is widely believed to be important, and many people state it, but we wanted to investigate.
- Fogel, in his work on evolving Blondie24, said: "At four ply, there really isn't any 'deep' search beyond what a novice could do with a paper and pencil if he or she wanted to."
39. Blondie24
- Generating a four-ply-deep search with paper and pencil:
  - Not an easy task for novices.
  - Time consuming.
  - It might be done at some subconscious level, where pruning is taking place.
  - Has not been reported in the scientific literature.
40. Previous Work
- Many researchers have shown the importance of look-ahead depth for computer games.
- None of this work was related to checkers.
- Most of the findings relate to chess: increasing the search depth produces superior chess players.
41. Previous Work
- Runarsson and Jonsson, for Othello: better playing strategies are found when TD learning with ε-greedy exploration uses a lower look-ahead search depth during learning and a deeper look-ahead search during game play.
42. Experimental Setup
42
- Forthe purpose of investigating our
hypothesis an evolutionary checkers
player, was implemented.
- Our implementation has the same
structure and architecture that Fogel
utilised in Blondie24.
- Four players were evolved.
C1 is evolved using one ply depth.
C2 is evolved using two ply depth.
C3 is evolved using three ply depth.
43. Experimental Setup
- Our previous efforts to enhance Blondie24 introduced a round robin tournament:
  Al-Khateeb, B. and Kendall, G., Introducing a round robin tournament into Blondie24. CIG 2009: 112-116, 2009.
- We also use this player, Blondie24-RR (evolved using four ply), to investigate the importance of the look-ahead depth.
- Three players were evolved (in addition to Blondie24-RR):
  - Blondie24-RR1Ply is evolved using one ply.
  - Blondie24-RR2Ply is evolved using two ply.
  - Blondie24-RR3Ply is evolved using three ply.
44. Experimental Setup
- C1, C2, C3 and C4 played against each other using the idea of a two-move ballot, with each player allowed to search to a depth of 6 ply.
- Games were played until one side won, or a draw was declared after 100 moves for each player.
- The same procedure was used to play Blondie24-RR1Ply, Blondie24-RR2Ply, Blondie24-RR3Ply and Blondie24-RR against each other.
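The two-move-ballot schedule can be sketched as below. This assumes the standard 43-opening checkers ballot; opening indices stand in for the actual opening moves, which are omitted here.

```python
import itertools

# The two-move ballot fixes the first move of each side from a set of
# accepted openings; 43 is the standard number of accepted openings.
BALLOT_OPENINGS = range(43)

def schedule(players):
    """Every pair plays each ballot opening twice, once with each colour."""
    games = []
    for a, b in itertools.combinations(players, 2):
        for opening in BALLOT_OPENINGS:
            games.append((opening, a, b))   # a plays red
            games.append((opening, b, a))   # b plays red
    return games
```

With four players this gives 6 pairings x 43 openings x 2 colours = 516 games: 86 games per pairing and 258 games involving any one player, which matches the game counts in the results tables that follow.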
45. Results and Discussion

Number of wins (for the row player) out of 258 games:

      C1   C2   C3   C4   Σ wins
C1     -   28   17   13     58
C2    33    -   24   19     76
C3    45   31    -   27    103
C4    59   40   35    -    134

Number of draws (for the row player) out of 258 games:

      C1   C2   C3   C4   Σ draws
C1     -   25   24   14     63
C2    25    -   31   27     83
C3    24   31    -   26     91
C4    14   27   26    -     67
46. Results and Discussion

Standard rating formula for all players after 5000 different orderings of the 86 games played (per head-to-head pairing):

Pairing    Player   Mean     SD     Class
C1 vs C2   C1      1188.94  28.94   E
           C2      1206.24  27.62   D
C1 vs C3   C1      1146.58  27.40   E
           C3      1266.18  26.14   D
C1 vs C4   C1      1264.11  27.21   D
           C4      1474.99  26.14   C
C2 vs C3   C2      1179.47  26.85   E
           C3      1205.10  25.60   D
C2 vs C4   C2      1114.61  27.17   E
           C4      1200.21  25.88   D
C3 vs C4   C3      1176.02  28.26   E
           C4      1205.26  26.98   D

Wins/losses for C1, C2, C3 and C4 when not using the two-move ballot (result for the row player):

            C2      C3      C4
C1 Red     Lost    Lost    Lost
C1 White   Drawn   Lost    Lost
C2 Red      -      Lost    Lost
C2 White    -      Drawn   Lost
C3 Red      -       -      Lost
C3 White    -       -      Lost
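The "standard rating formula" averaged over orderings works roughly as in this sketch. An Elo-style update is assumed here for illustration; the exact formula, constants and class boundaries (E/D/C) used in the experiments may differ.

```python
import random

def expected(r_a, r_b):
    """Expected score for player a against player b under an Elo-style model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def average_ratings(games, start=1200.0, k=32.0, orderings=5000, seed=0):
    """games: list of (a, b, score_for_a), score 1 (win), 0.5 (draw) or 0 (loss).
    Sequential rating updates depend on game order, so the final rating is
    averaged over many random orderings of the same games."""
    rng = random.Random(seed)
    totals = {}
    for _ in range(orderings):
        ratings = {}
        order = games[:]
        rng.shuffle(order)
        for a, b, s in order:
            ra, rb = ratings.get(a, start), ratings.get(b, start)
            ea = expected(ra, rb)
            ratings[a] = ra + k * (s - ea)
            ratings[b] = rb + k * ((1.0 - s) - (1.0 - ea))
        for player, r in ratings.items():
            totals[player] = totals.get(player, 0.0) + r
    return {player: total / orderings for player, total in totals.items()}
```

A player who mostly wins a pairing ends up with the higher mean rating, as C4 does against C1 in the table above.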
47. Results and Discussion

Number of wins (for the row player) out of 258 games for the round robin players:

       1ply  2ply  3ply  4ply  Σ wins
1ply    -    28    20    14     62
2ply   32     -    29    21     82
3ply   42    34     -    27    103
4ply   57    46    39     -    142

Number of draws (for the row player) out of 258 games for the round robin players:

       1ply  2ply  3ply  4ply  Σ draws
1ply    -    26    24    15     65
2ply   26     -    23    19     68
3ply   24    23     -    20     67
4ply   15    19    20     -     54
48. Results and Discussion

Standard rating formula for all players after 5000 different orderings of the 86 games played (per head-to-head pairing):

Pairing        Player   Mean     SD     Class
1Ply vs 2Ply   1Ply    1187.79  28.86   E
               2Ply    1200.74  27.55   D
1Ply vs 3Ply   1Ply    1160.17  28.15   E
               3Ply    1252.67  26.84   D
1Ply vs 4Ply   1Ply    1256.00  27.71   D
               4Ply    1450.51  26.58   C
2Ply vs 3Ply   2Ply    1194.62  29.30   E
               3Ply    1212.04  27.98   D

Wins/losses for 1Ply, 2Ply, 3Ply and 4Ply when not using the two-move ballot (result for the row player):

             2Ply   3Ply   4Ply
1Ply Red    Lost   Lost   Lost
1Ply White  Lost   Lost   Lost
2Ply Red     -     Lost   Lost
2Ply White   -     Lost   Lost
3Ply Red     -      -     Lost
3Ply White   -      -     Lost
49. Conclusions
- Many evolutionary checkers players were produced, using different ply depths during learning.
- Better value functions are learned when training with a deeper look-ahead search.
50. Conclusions
- Increasing the ply depth increases the computational cost of evolving checkers players; in our experiments, every run was given the same amount of time (19 days).
- The results suggest that a depth of four ply is the best starting point for the learning phase in checkers: train at four ply, then play at the highest ply possible.
51. References
1. Samuel, A. L., Some studies in machine learning using the game of checkers, 1959, 1967.
2. Fogel, D. B., Blondie24: Playing at the Edge of AI, Academic Press, 2002.
3. Chellapilla, K. and Fogel, D. B., Anaconda defeats Hoyle 6-0: a case study competing an evolved checkers program against commercially available software, 2000.
4. Fogel, D. B. and Chellapilla, K., Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player.
5. Chellapilla, K. and Fogel, D. B., Evolution, neural networks, games, and intelligence, 1999.
6. Chellapilla, K. and Fogel, D. B., Evolving an expert checkers playing program without using human expertise, 2001.
7. Chellapilla, K. and Fogel, D. B., Evolving neural networks to play checkers without relying on expert knowledge, 1999.
8. Runarsson, T. P. and Jonsson, E. O., Effect of look-ahead search depth in learning position evaluation functions for Othello using ε-greedy exploration, 2007.