These are the slides about the 2015 Fighting Game Artificial Intelligence Competition presented at the 2015 IEEE Conference on Computational Intelligence and Games (CIG 2015) on September 2, 2015 in Tainan, Taiwan.
3. Fighting game AI platform in Java, viable even for a small-sized team
First of its kind, running since 2013 and at CIG 2014 (code of previous AI entries available)
Four papers at CIG 2014 & 2015 (two by others + two by us)
Score = 1000 * opponent.HPloss / (self.HPloss + opponent.HPloss)
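The scoring rule can be computed directly from both players' HP losses; a minimal sketch (the class and method names are my own, not part of the FightingICE API, and the even-split fallback for a damage-free round is an assumption):

```java
public class Score {
    // Score = 1000 * opponent's HP loss / (own HP loss + opponent's HP loss)
    public static double score(int selfHpLoss, int opponentHpLoss) {
        int total = selfHpLoss + opponentHpLoss;
        if (total == 0) return 500.0;   // assumption: no damage dealt counts as an even split
        return 1000.0 * opponentHpLoss / total;
    }
}
```

Note that the two players' scores for a round always sum to 1000.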
60 FPS (16.67ms response time)
Current game state is delayed by 15 frames
Currently one character type (Zen) for the competition
Forward model available
Available soon: MCTS sample AI, nicer special effects, Kinect interface
Game resources are from The Rumble Fish 2, courtesy of Dimps Corporation.
FightingICE
CIG 2015, Sep 2, 2015
5. Skill System (1/4)
Players use attack actions (skills) and moving actions to fight.
A skill has 3 stages: Startup, Active, and Recovery.
6. Skill System (2/4)
Startup
1st stage: prepare to attack.
No attack box yet, so no damage can be dealt.
7. Skill System (3/4)
Active
2nd stage: the small red box (the attack box) can be seen on the character's arm.
In these frames, if the opponent's hit box overlaps this attack box, the opponent takes damage.
8. Skill System (4/4)
Recovery
3rd stage: recover to the normal status.
Special frames: cancelable. Some skills can be used during cancelable frames.
10. Contest Rules
Two games, with the player sides switched, are played for each pair of submitted AIs in a round-robin tournament.
A game has 3 rounds, each with a 5-second ready time and a 60-second fighting time.
The characters' positions are reset when time is over, and unless it is already the third round, the system starts a new round.
11. Summary of AI Fighters
17 AIs from 10 locations:
Brazil, China, France, Germany, Indonesia, Japan, South Korea, Spain, Taiwan, and Thailand
Six AIs from Sejong University, four AIs from Bangkok University
Most use rule-based systems or finite-state machines
One AI uses linear extrapolation for predicting the position
Four AIs have a mechanism for predicting the opponent's next action:
J48, k-NN (2 teams), and a forward model
Two AIs use a GA for optimization: one for action-execution probabilities, one for fuzzy rules
12. FightingICE @ Bangkok University, Thailand
Multimedia Intelligent Technology (BU-MIT)
http://mit.science.bu.ac.th/
• Participation in FTGAIC
– 2013: 3 teams, 3rd-year undergrad students (3rd place)
– 2014: 1 team, 4th-year undergrad students (2nd place)
– 2015: 4 teams, 3rd-year undergrad students and Internship students
• From 2014, their Senior Projects have been about developing AI bots for FTGAIC.
• From 2015 to the present, collaborating with the ICE Lab, Ritsumeikan University
on developing a Kinect interface for the fighting game controller
(the 1st version was based on the FTGAI platform)
Asst. Prof. Worawat Choensawat, worawat.c@bu.ac.th
Asst. Prof. Kingkarn Sookhanaphibarn, kingkarn.s@bu.ac.th
13. FightingICE @ Sejong University, South Korea
(Prof. Kyung-Joong Kim, http://cilab.sejong.ac.kr)
We offered an Artificial Intelligence course for seniors (30% teaching, 70%
projects) in Spring 2015
17 students enrolled
As a first course project, each student submitted “his own entry” of fighting
game AI
We ran an internal competition with the same setting of the CIG competition
Most of them were rule-based systems
As a second course project, student teams did “short-term” research using
games (including the fighting game)
Applying CI techniques (reinforcement learning, ensemble algorithm, and so on) to the
games
Course grade
It’s based on the rankings from the internal competitions and the final research report
Encourage students to submit their entries (from the internal competition
or research project) to the CIG competition
K.-J. Kim, and S.-B. Cho, “Game AI competitions: An open platform for computational intelligence education,” IEEE Computational Intelligence Magazine,
August 2013
D.-M. Yoon and K.-J. Kim, “Challenges and Opportunities in Game Artificial Intelligence Education using Angry Birds,” IEEE Access, June 2015
21. AI's Outline (flowchart)
Start: get the opponent's current action.
Can it hit me?
Yes: try to select a skill that takes effect faster than the opponent's action. Does one exist? Yes: use it; No: enter the guard state.
No: try to select a skill from the Strategies List (created by machine learning). Does one exist? Yes: use it.
No: predict the opponent's next skill and select a skill that can counter it. Does one exist? Yes: use it; No: approach the opponent.
End.
23. InitiativeStrategies
The AI searches and updates the strategies
by reading and writing the file "AISTR.txt".
In this text file, strategies are recorded in the form:
(OpponentAction, MyAction, MinDistanceX,
MaxDistanceX, MinDistanceY, MaxDistanceY)
24. InitiativeStrategies
OpponentAction: the opponent's current action.
MyAction: the action that can counter the opponent's action.
MinDistanceX: the minimum X distance of MyAction's range.
MaxDistanceX: the maximum X distance of MyAction's range.
MinDistanceY: the minimum Y distance of MyAction's range.
MaxDistanceY: the maximum Y distance of MyAction's range.
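A strategy line in this six-field format could be parsed and matched as follows. This is a sketch: only the tuple format comes from the slides; the class name, the exact text layout of "AISTR.txt", and the matching rule are assumptions.

```java
public class StrategyEntry {
    public final String opponentAction, myAction;
    public final int minDistX, maxDistX, minDistY, maxDistY;

    public StrategyEntry(String opponentAction, String myAction,
                         int minDistX, int maxDistX, int minDistY, int maxDistY) {
        this.opponentAction = opponentAction;
        this.myAction = myAction;
        this.minDistX = minDistX;
        this.maxDistX = maxDistX;
        this.minDistY = minDistY;
        this.maxDistY = maxDistY;
    }

    // Parse one line of AISTR.txt, e.g. "(STAND_A,CROUCH_B,0,100,-50,50)"
    public static StrategyEntry parse(String line) {
        String[] f = line.replace("(", "").replace(")", "").split(",");
        return new StrategyEntry(f[0].trim(), f[1].trim(),
                Integer.parseInt(f[2].trim()), Integer.parseInt(f[3].trim()),
                Integer.parseInt(f[4].trim()), Integer.parseInt(f[5].trim()));
    }

    // Does this strategy apply for the given opponent action and relative distance?
    public boolean matches(String oppAction, int dx, int dy) {
        return opponentAction.equals(oppAction)
                && dx >= minDistX && dx <= maxDistX
                && dy >= minDistY && dy <= maxDistY;
    }
}
```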
25. PredictNextSkill
(1) Create two arrays to store myAction and opponentAction.
(2) Select an action from myAction and use the method
Simulator.simulate to simulate it against every action in the opponentAction list,
one by one.
Meanwhile, calculate the damage expectation for each situation.
(3) Repeat step (2) until all elements in myAction have been processed.
(4) Finally, return the action with the largest damage expectation
by comparing the simulation results.
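Steps (1)-(4) amount to an exhaustive simulation over action pairs. A sketch, assuming a pluggable scoring function: the `simulate` parameter here stands in for FightingICE's `Simulator.simulate`, whose real signature takes frame data and character objects rather than action names.

```java
import java.util.List;
import java.util.function.BiFunction;

public class PredictNextSkill {
    // Return the element of myActions with the largest total simulated score
    // against every action in opponentActions (steps 1-4 of the slide).
    public static String bestAction(List<String> myActions,
                                    List<String> opponentActions,
                                    BiFunction<String, String, Double> simulate) {
        String best = null;
        double bestTotal = Double.NEGATIVE_INFINITY;
        for (String mine : myActions) {                 // steps (2)/(3): every own action
            double total = 0;
            for (String theirs : opponentActions) {     // ... against every opponent action
                total += simulate.apply(mine, theirs);  // improved score for this situation
            }
            if (total > bestTotal) {                    // step (4): keep the maximum
                bestTotal = total;
                best = mine;
            }
        }
        return best;
    }
}
```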
26. PredictNextSkill
Damage expectation calculation:
Total improved score: the sum of the improved scores over every situation in step (2).
Hit rate: the current myAction's hit count / the number of the opponent's actions.
DE = (Total improved score / number of opponent's actions) * Hit rate
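The DE formula as code (a direct transcription; the method and parameter names are my own):

```java
public class DamageExpectation {
    // DE = (total improved score / number of opponent actions) * hit rate,
    // where hit rate = hit count of the current action / number of opponent actions.
    public static double de(double totalImprovedScore, int hits, int numOpponentActions) {
        double hitRate = (double) hits / numOpponentActions;
        return totalImprovedScore / numOpponentActions * hitRate;
    }
}
```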
28. The use frequency of each strategy over 100 rounds
This graph shows the use frequency of PredictNextSkill
and InitiativeStrategies in each of the 100 games from
the first game onward.
As the rounds progress, the use frequency of
InitiativeStrategies increases.
So I think this graph shows that the simple machine
learning is working.
30. Ensemble Fighter
Jin Kim, JeongHyeock Ahn, SeungHo Choi, JiYuu Yi,
SuJung Kim, and Kyung-Joong Kim
Department of Computer Science and Engineering,
Sejong University, Seoul, South Korea
kimkj@sejong.ac.kr
31. Ensemble Approach for
Fighting Game Play
• Multiple rule-based systems designed by
different experts
• For each round, the player selector
chooses one of them randomly
Rule-based System 1
Rule-based System 2
Rule-based System 3
Player Selector
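The player selector described above, choosing one rule-based system uniformly at random each round, can be sketched as follows (class name and string-based controller handles are placeholders, not the team's actual code):

```java
import java.util.List;
import java.util.Random;

public class PlayerSelector {
    private final List<String> systems;   // names stand in for rule-based controllers
    private final Random rng;

    public PlayerSelector(List<String> systems, long seed) {
        this.systems = systems;
        this.rng = new Random(seed);
    }

    // Called once at the start of each round: pick one system uniformly at random.
    public String selectForRound() {
        return systems.get(rng.nextInt(systems.size()));
    }
}
```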
32. Preparation of Multiple Rule-based Systems
• As an undergraduate course project, 17
students submitted their own controller
and we ran the internal competition with
the same setting of the “fighting game
AI competition”
• We selected the three best students'
controllers and combined them into an
ensemble player
33. Combination of Multiple Players
• From initial testing, we found that it’s
better to change the player round by
round
• As future work, we need to use an
advanced "player selection" technique
instead of random selection
35. Introduction
• AI Name : AI128200
• Developer's Name : Ji-In Mun
• Advisor : Kyung-joong Kim
• Affiliation: Department of Computer Engineering, Sejong University, South Korea
36. AI's Outline
(1) Movement: to reduce the distance between the two characters
=> FOR_JUMP
(2) To avoid enemy attacks
=> FOR_JUMP
38. Fighting Game AI: Ash
Che-Chun Chen, Cheng-Yuan Wu, Tsung-Che Chiang
Department of Computer Science and Information Engineering,
National Taiwan Normal University,
Taipei, Taiwan
40147014S@ntnu.edu.tw, asdwayne1@yahoo.com.tw, tcchiang@ieee.org
39. State table (rows: distance, columns: energy)
distance \ energy | High | Medium | Low
Far | State1 | State2 | State3
Medium | State4 | State5 | State6
Close | State7 | State8 | State9
Near | State10 | State11 | State12
Action table per state:
Action | Probability
Action A | X%
Action B | Y%
… | …
Each state has its own
Action table.
The probability is
determined by genetic
algorithm (GA) and
domain knowledge.
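Drawing an action from a state's probability table amounts to roulette-wheel sampling; a minimal sketch (the class name and API are assumptions, and the probabilities are whatever the GA produced):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class ActionTable {
    private final Map<String, Double> probs = new LinkedHashMap<>();
    private final Random rng;

    public ActionTable(long seed) { rng = new Random(seed); }

    public void put(String action, double probability) { probs.put(action, probability); }

    // Roulette-wheel selection: pick an action with chance proportional to its entry.
    public String sample() {
        double r = rng.nextDouble(), cum = 0;
        String last = null;
        for (Map.Entry<String, Double> e : probs.entrySet()) {
            cum += e.getValue();
            last = e.getKey();
            if (r < cum) return last;
        }
        return last;  // guard against floating-point rounding
    }
}
```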
40. • Special states of the opponent
– Air
– Strong Kick
– Emit energy ball
– …..
We set different actions for these special states.
• Changing strategies
– If the damage we suffered is much higher than that of the
opponent at the middle or the end of the round, an
alternative action table will be tried.
42. Features
• Prediction of the opponent's position and action
Search for a reachable attack by using the position prediction
Counter action by using the action prediction
• "Hate" gauge
Prevention of repeatedly receiving damage
• "winDegree" gauge
The criterion for how actively to close in on the opponent.
43. Prediction
• Position Prediction
• Predict 15 frames ahead by linear extrapolation
• Calculate the distance between myself and the opponent
• Predict the time when a command's "startup" ends
• Example: the "Startup" of "STAND_A" is 3 frames, so
15 (original delay) + 3 (Startup) = 18 is the prediction horizon.
• Search for a reachable attack by using the predicted position
• Action Prediction
• Predict 15 frames ahead by the k-nearest-neighbor method
• 6 features: relative X, relative Y, and absolute Y coordinates, and their differences from 15
frames before
• Counter against "JUMP" and "AIR" attacks by using the predicted action
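The position prediction above can be sketched as plain linear extrapolation: current position plus a per-frame velocity estimate times the horizon. The class name and the one-frame velocity estimate are assumptions; only the 15-frame delay and the STAND_A startup example come from the slide.

```java
public class PositionPredictor {
    // Linear extrapolation: current position plus per-frame velocity times the
    // number of frames to look ahead (15 frames of delay + the skill's startup).
    public static int predictX(int currentX, int previousX, int framesAhead) {
        int velocityPerFrame = currentX - previousX;   // displacement over one frame
        return currentX + velocityPerFrame * framesAhead;
    }

    // Example from the slide: STAND_A has a 3-frame startup,
    // so the prediction horizon is 15 + 3 = 18 frames.
    public static int horizonFor(int startupFrames) {
        return 15 + startupFrames;
    }
}
```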
45. Other features
• Hate gauge
• When this AI receives damage repeatedly, it attempts to guard the next attack;
by doing so, it attempts to escape from the opponent's attack loop.
• winDegree gauge
• When this AI has a high score, it is less active than usual.
46. BlueLag
Julien Neveu
Internship: Faculty of Science and Technology, Bangkok University
IUT d’Angoulême, Université de Poitiers, France
Advisor: Dr. Worawat Choensawat
Bangkok University, School of Science and Technology
47. BlueLag
Defence: avoid damage by escaping fireballs and the ultimate skill,
and avoid being blocked in the corner.
Counter-attack when the characters are close.
Attack: the attack algorithm selection depends on the distance, with
different thresholds.
49. ATTACK
Distance test → Actions
We also use a test to check whether we can use the ultimate skill.
50. DragonWarrior
Developed by: Renan Motta Goulart (Master's Degree
Student)
Affiliation: Universidade Federal de Juiz de Fora,
Brazil.
Email: renan.aganai@gmail.com ,
raikoalihara@hotmail.com
51. Outline
Learns how the opponent fights.
Predicts the opponent's next move by keeping
information about his past actions and the distances at which
they were used.
52. Outline
The possible attacks that the opponent might use are
discovered by using the average position and the
standard deviation of where the opponent attacked.
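The mean-and-standard-deviation idea above can be sketched as follows. This is a guess at the mechanics (the one-standard-deviation window, class name, and method names are all assumptions); only the use of the average and standard deviation of observed attack positions comes from the slide.

```java
import java.util.List;

public class AttackRangeModel {
    // Mean of the recorded distances at which a given attack was used.
    public static double mean(List<Double> xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.size();
    }

    // Population standard deviation of the same records.
    public static double std(List<Double> xs) {
        double m = mean(xs), sq = 0;
        for (double x : xs) sq += (x - m) * (x - m);
        return Math.sqrt(sq / xs.size());
    }

    // Assumption: an attack is considered possible at distance d if d lies
    // within one standard deviation of the average recorded distance.
    public static boolean possibleAt(double d, List<Double> xs) {
        double m = mean(xs), s = std(xs);
        return d >= m - s && d <= m + s;
    }
}
```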
54. AI Outline
• Uses J48, an open-source C4.5 implementation in Weka.
• It records data during the game, to be used by the
J48-based opponent-prediction system.
55. AI Outline
• It also uses a simple weighting system, to
determine actions during the game.
• It updates the weight for each action using an
evaluation function that calculates the HP difference
between the two players before and after the
action.
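The weighting idea above can be sketched as an accumulator keyed by action, where each update is the change in the HP gap across the action. The class and the additive update rule are assumptions; only "weight per action, updated by HP difference before and after" comes from the slides.

```java
import java.util.HashMap;
import java.util.Map;

public class ActionWeights {
    private final Map<String, Double> weights = new HashMap<>();

    // Evaluation: how much the HP gap (own HP minus opponent HP) improved
    // between the moment before the action and the moment after it.
    public static double evaluate(int myHpBefore, int oppHpBefore,
                                  int myHpAfter, int oppHpAfter) {
        return (myHpAfter - oppHpAfter) - (myHpBefore - oppHpBefore);
    }

    // Accumulate the evaluation into the action's weight.
    public void update(String action, double evaluation) {
        weights.merge(action, evaluation, Double::sum);
    }

    public double weightOf(String action) {
        return weights.getOrDefault(action, 0.0);
    }
}
```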
56. FuzzyGA
DEVELOPERS: CARLOS LÓPEZ TURIÉGANO, JOSÉ MARÍA FONT
FERNÁNDEZ, DANIEL MANRIQUE GAMO
AFFILIATION: UNIVERSIDAD POLITÉCNICA DE MADRID, SPAIN
EMAIL: CARLOSJLT24@GMAIL.COM
57. FuzzyGA outline
Fuzzy rule-based system using the fuzzylite library
Every state of the game is evaluated, and what to do is determined by the rule-based system
The set of rules has been obtained using an evolutionary system.
Sparring partners for training were AIs from the 2014 tournament and 2 custom AIs.
58. FuzzyGA – Input variables
Distance Score
Own X position Opponent X position
Own Y position Opponent Y position
Own energy Opponent energy
Own X speed Opponent X speed
Own Y speed Opponent Y speed
Input variables are obtained from frameData or derived from it.
59. FuzzyGA – Output variables
- The output is the action that will be sent to the CommandCenter
Horizontal Movement
FORWARD
STAND
BACK
Vertical Movement
JUMP
STAND
CROUCH
Action
STAND_GUARD, CROUCH_GUARD, AIR_GUARD, THROW_A, THROW_B,
STAND_A, STAND_B, CROUCH_A, CROUCH_B, AIR_A, AIR_B, AIR_DA, AIR_DB,
STAND_FA, STAND_FB, CROUCH_FA, CROUCH_FB, AIR_FA, AIR_FB,
AIR_UA, AIR_UB, STAND_D_DF_FA, STAND_D_DF_FB, STAND_F_D_DFA,
STAND_F_D_DFB, STAND_D_DB_BA, STAND_D_DB_BB, AIR_D_DF_FA,
AIR_D_DF_FB, AIR_F_D_DFA, AIR_F_D_DFB, AIR_D_DB_BA, AIR_D_DB_BB,
STAND_D_DF_FC
61. ➢ We developed an AI character by using a rule-based strategy to
define the fighting states and actions.
❖ Our AI robot makes decisions based on the following states:
➢ Defense state
➢ Attack state
➢ Counter-attack state
❖ Each state is divided into two cases as follows:
➢ Far = the distance between our AI character and the opponent is over a
predefined threshold.
➢ Near = the distance between our AI character and the opponent is less than a
predefined threshold.
62. ● Detection of the opponent's skill:
● In case the opponent's skill is "Fireball" or
"Ultimate": our AI character
will use "Jump" whenever
getDistance() <= threshold,
● but "Forward Jump" if
getDistance() > threshold.
63. Our AI character will enter the "Attack state" or "Counter-attack state" by
considering two variables: the opponent's skill and the distance from the
opponent.
When the opponent's skill isn't "Fireball":
64. 1) If (Our_AI_energy > 300) then Our_AI_skill = "Ultimate".
2) If (Our_AI_energy > 50 and time is low) then Our_AI_skill = "Small
Ultimate".
3) If (getDistance() > threshold) then Our_AI_skill = "Fireball".
4) If (getDistance() is between the thresholds) then Our_AI_skill =
"AIR_UB".
5) If (getDistance() < threshold) then Our_AI_skill =
"CROUCH_FA", else Our_AI_skill = "CROUCH_FB".
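Rules 1-5 above form an ordered decision chain; a sketch, with assumed placeholder values for the unspecified distance thresholds (400/150) and the "time is low" cutoff (10 seconds):

```java
public class SkillRules {
    // Rules 1-5 from the slide, checked in order. The concrete thresholds
    // (400, 150, 10000 ms) are placeholders, not values from the slides.
    public static String selectSkill(int energy, int remainingMs, int distance) {
        if (energy > 300) return "Ultimate";                           // rule 1
        if (energy > 50 && remainingMs < 10000) return "Small Ultimate"; // rule 2
        if (distance > 400) return "Fireball";                         // rule 3: far
        if (distance > 150) return "AIR_UB";                           // rule 4: between thresholds
        return "CROUCH_FA";                                            // rule 5: close
    }
}
```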
66. If (the opponent's skill is an Air skill
and getDistance() <
threshold) then Our_AI_skill
= "AIR_UA"
68. Rule-based, depending on the distance.
Infighting style.
Changes strategy according to the energy score.
Very aggressive AI.
Uses various ground skills.
Q/A : Jaykim0104@gmail.com
69. MACHETE: REFLEX AGENT
AI Name: Machete.
Developed by: Axel G. Garcia Krastek.
Affiliation: Otto-von-Guericke University Magdeburg, Germany.
Contact: spaxel@gmail.com
70. Machete is a reflex agent with simple, but effective rules.
Rules are based on distance to the opponent, energy of the opponent and the energy
of the agent.
If opponent is too far away, Machete will get closer with forward jumps. If opponent
is far but not too far then Machete will advance forward.
When the energy of Machete reaches a threshold, it will perform an action based on
the amount of energy.
71. Machete has one very important survival rule: When the enemy’s energy reaches 300,
Machete will try to avoid getting hit by the energy ball that the enemy can produce with
300 points.
Finally, when none of the other mentioned conditions are met, Machete will perform
kicks which helps in two ways:
It introduces randomness, so the enemy cannot accurately predict Machete’s
movements.
Machete is never standing still doing nothing; it will always be kicking, which increases
its chances of winning the match.
73. AI's Outline
1. Getting 3000 points from a defensive opponent
2. Selecting the most effective action based on distance
3. Handling an unfavorable situation (ensemble with the SejongFighter AI)
State diagram: Start → State 1; on movement of the opponent → State 2;
on a deteriorated situation → State 3; on an improved situation → back to State 2.
74. State 1: Getting 3000 points from a defensive opponent
This is the state right after the game starts, when the opponent
does not move and repeats the same pattern of actions.
Defensive characters tend not to move
until their opponent comes into attack range.
Using this feature, my character withdraws after using air skills.
As a result, I can get 3000 points.
75. State 1: Getting 3000 points from a defensive opponent
Player 1 (my opponent) just repeats the same skills
at the same position, even after taking 10 points of damage.
76. State 2: Selecting the most effective action based on distance
According to my analysis,
kicking while jumping was most effective in close combat.
At long distance, the air skills were most effective.
If my energy is enough to use stronger air skills, those are used.
77. State 3: Handling an unfavorable situation (ensemble with the SejongFighter AI)
Although my character uses effective skills,
sometimes it can still be at a disadvantage.
An "unfavorable situation" is defined as
(myHP < 3*enemyHP) && (remainingTime < 30 sec).
If my character is in trouble, it will change its pattern
to that of the SejongFighter AI.
The reason I used SejongFighter's pattern is that
my original pattern was vulnerable to SejongFighter.
I think the ensemble with SejongFighter can compensate for
the weakness of my pattern.
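The "unfavorable situation" trigger above is a direct boolean check; a sketch (class and method names are my own):

```java
public class PatternSwitch {
    // Unfavorable situation as defined on the slide:
    // (myHP < 3 * enemyHP) && (remaining time < 30 seconds).
    public static boolean unfavorable(int myHp, int enemyHp, double remainingSec) {
        return myHp < 3 * enemyHp && remainingSec < 30.0;
    }
}
```

When this returns true, the AI switches from its own pattern to the SejongFighter pattern.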
80. Our Proposed Concept
❖ We developed an AI robot with a rule-based method to define the fighting states and
actions; the variables considered are as follows:
➢ Distance between players
➢ Our energy
❖ Our AI robot makes decision based on the following states:
➢ Counter state
➢ Defense state
➢ Attack state
❖ Each state is divided into two cases as follows:
➢ Far = distance between our AI character and opponent over a predefined
threshold.
➢ Near = distance between our AI character and opponent less than a predefined
threshold.
81. Defense state
❖ Use the "Fireball" skill to keep distance from the
opponent, applying the following strategies:
❖ In case energy > 300 and remainingTime < 5000, use the
"Ultimate" skill.
82. Attack state
❖ Our AI character will use the skill "CROUCH_B" whenever the
distance to the opponent is in a defined range.
❖ Our AI character will use the "FOR_JUMP" skill immediately if the
opponent is very close to our AI.
❖ All the constant parameters (threshold values) were determined by
experiments.
83. Counter state
If the enemy's positionY < threshold and
positionX < threshold, use "AIR_UA"
86. AI Name : SDBOT
Character : ZEN
Advisor : Kyung-Joong Kim
Developer Name: Seung-Ho Choi
Affiliation
Dept. of Computer Engineering, Sejong Univ.
1. Introduction
87. 2. AI's Outline
1) To reduce the distance to the opponent, skills are used
with a priority order among them.
The skill to use is chosen according to the distance value and a
pseudo-random value.
Primary skills: STAND_D_DF_FA, STAND_D_DF_FB, STAND_D_DF_FA
2) Move:
if my character's x position is larger than the opponent's x position
→ inputKey.L
else
→ inputKey.R
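The movement rule 2) above as code; the nested `Key` class is a stand-in for FightingICE's input-key structure:

```java
public class Movement {
    // Stand-in for the platform's input-key structure.
    public static class Key { public boolean L, R; }

    // Move toward the opponent: press L when we are to the opponent's right,
    // press R when we are to the opponent's left (rule 2 on the slide).
    public static Key moveToward(int myX, int oppX) {
        Key key = new Key();
        if (myX > oppX) key.L = true;
        else key.R = true;
        return key;
    }
}
```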
89. Strategy
• The AI adopts a rule-based strategy with heavy use of projectile and
jump skills.
• Frequent use of projectiles has the advantage of scoring points.
• It gathers energy, uses projectiles, and tries to avoid the opponent's
projectiles, but this part is still unfinished.
• This strategy took first place in the AI class competition at our
university.
90. Points
• This code is simple but powerful.
• Distance-based action selection is well rewarded in this game.
• Through a number of experiments, this strategy was able to get the
highest score.
91. Weaknesses
• SDBOT isn't a strong AI because it is purely rule-based.
• No search method or machine learning is applied.
• SDBOT is weak against specific skills (catching, the same jump skills, rapid-
frame skills).
93. 1. Introduction
AI Name : SniperInSejong
Character : Zen
Advisor : Kyung-Joong Kim
Developer's Name : Seonghun Yoon
Affiliation : Dept. of Computer Engineering, Sejong University
94. 2. AI's Outline
Uses ensemble techniques: four strategies, changed in real time:
Basic; Enemy In Air; Approach to the Enemy; Approach to the Enemy In Air
95. 2. AI's Outline
Basic: use the projectile
Enemy In Air: use the air kick and air projectile
Approach to the Enemy: use the air kick and crouch kick
Approach to the Enemy In Air: use the air kick and crouch kick
96. 2. AI's Outline
Check the HP rate: HP Rate = (myHp - 1.0) / (EnemyHp - 1.0)
HP Rate > 1: avoid the enemy
HP Rate == 1: basic strategy
HP Rate < 1: approach the enemy
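The HP-rate rule above as code (the strategy labels returned here are my own shorthand for the three behaviors on the slide):

```java
public class HpRateStrategy {
    // HP rate as defined on the slide: (myHp - 1.0) / (enemyHp - 1.0).
    public static double hpRate(int myHp, int enemyHp) {
        return (myHp - 1.0) / (enemyHp - 1.0);
    }

    // Rate > 1: we are ahead, avoid the enemy; rate == 1: basic strategy;
    // rate < 1: we are behind, approach the enemy.
    public static String choose(int myHp, int enemyHp) {
        double rate = hpRate(myHp, enemyHp);
        if (rate > 1.0) return "AVOID";
        if (rate < 1.0) return "APPROACH";
        return "BASIC";
    }
}
```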
98. Overview of Our AI character
● We use a rule-based algorithm with three main variables to define the fighting states and
actions; the variables are as follows:
-Distance from Opponent
-Opponent Action
- Our AI Energy
● Our AI fighting states are divided into two states by considering the distance from Opponent:
-Defense state when Opponent is FAR.
-Attack state when Opponent is NEAR.
● Our AI actions are as follows:
-Defense state: “STAND_D_DF_FC”, “STAND_D_DF_FA”, “FOR_JUMP”, “CROUCH_A”,
“CROUCH_FA” , “BACK_JUMP”
-Attack state: “CROUCH_FA”, “CROUCH_FB”, “AIR_FB”, “FORWARD_WALK”
99. DEFENSE state
• Our AI character will use these two skills: "STAND_D_DF_FC" or "STAND_D_DF_FA".
• If Dist_from_Opp <= 200, then our_AI_skill = "CROUCH_FA", else our_AI_skill = "FOR_JUMP".
• In case our_AI_character is at a corner, then our_AI_skill = "CROUCH_A".
• Our AI character does not often use defense skills like "GUARD"; it just uses "jump" to
alleviate the damage from the opponent's attacks.
100. ATTACK state
If (distance from Opponent > 120) then Our_AI_character_skill = "FORWARD_WALK".
If (distance from Opponent is in a defined range) then Our_AI_character_skill = "CROUCH_FB",
"AIR_FB", or "CROUCH_FA".
If (Opponent is very close to ours on the GROUND) then Our_AI_character_skill = "CROUCH_FB"
or "AIR_FB".
All the action skills mentioned above cannot do much damage to the opponent. Thus, our AI
character often uses "FORWARD_WALK" to build up a score until our AI's energy is enough to release
the "Ultimate skill".