2015 Fighting Game AI Competition
Kaito Yamamoto, Yuto Nakagawa, Chun Yin Chu,
Lucas Philippe, Marc-Etienne Barrut, FeiYu Lu,
Makoto Ishihara, Taichi Miyazaki, Toshiki Yasui,
Ruck Thawonmas
Team FightingICE
Intelligent Computer Entertainment Laboratory
Ritsumeikan University
Japan
CIG 2015, Sep 2, 2015
Contents
 FightingICE
 Contest
 Results
 Fighting game AI platform in Java, viable even for a small team
 First of its kind, running since 2013 & CIG 2014 (previous AI code available)
 Four papers at CIG 2014 & 2015 (two by others + two by us)
 Score = 1000 × opponent.HPloss / (self.HPloss + opponent.HPloss)
 60 FPS (16.67 ms response time)
 Current game state is delayed by 15 frames
 Currently one character type (Zen) for the competition
 Forward model available
 Available soon!
 MCTS sample AI
 Nicer special effects
 Kinect interface
Game resources are from The Rumble Fish 2 with the courtesy of Dimps Corporation.
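The scoring formula above can be sketched in Java as follows. This is a minimal illustration, not part of the FightingICE API; the class name and the zero-damage fallback are assumptions.

```java
public class ScoreCalc {
    // Score = 1000 * opponent.HPloss / (self.HPloss + opponent.HPloss)
    public static double score(int selfHpLoss, int opponentHpLoss) {
        int total = selfHpLoss + opponentHpLoss;
        if (total == 0) return 500.0; // assumption: no damage on either side counts as an even split
        return 1000.0 * opponentHpLoss / total;
    }

    public static void main(String[] args) {
        // Dealt 300 HP of damage, received 100 -> score 750
        System.out.println(score(100, 300));
    }
}
```

Note that a player's scores over both sides of a game pairing sum to 2000, which is what makes the round-robin totals comparable.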
FightingICE
Interactions between the AI/Human
Players and the System
 Players use attack actions (skills) and moving actions
to fight
 A skill has 3 stages:
Startup, Active and Recovery
Skill System (1/4)
Skill System (2/4)
 Startup
 1st stage
 Prepare to attack
 No attack box, so it deals no damage
Skill System (3/4)
 Active
 2nd stage
 The small red box -- the attack box -- can be seen on the character's arm
 In these frames, if the opponent's hit box overlaps with this attack box, the opponent will be damaged
Skill System (4/4)
 Recovery
 3rd stage
 Return to normal status
 Special frames: cancelable
 Some skills can be used during cancelable frames
Contents
 FightingICE
 Contest
 Results
 Two games, each switching the player sides, are played for each pair of submitted AIs in a round-robin tournament.
 A game has 3 rounds, each with a 5-second ready time and a 60-second fighting time.
 The characters' positions are reset when time is over, and if it is not yet the third round, the system starts a new round.
Contest Rules
 17 AIs from 10 locations
 Brazil, China, France, Germany, Indonesia, Japan, South Korea, Spain, Taiwan, and Thailand
 Six AIs from Sejong University
 Four AIs from Bangkok University
 Most use rule-based systems or finite-state machines
 One AI uses linear extrapolation for predicting the position
 Four AIs have a mechanism for predicting the opponent's next action
 J48, k-NN (2 teams), and forward model
 Two AIs use a GA for optimizing
 action-execution probabilities
 fuzzy rules
Summary of AI Fighters
FightingICE @ Bangkok University, Thailand
Multimedia Intelligent Technology (BU-MIT)
http://mit.science.bu.ac.th/
• Participation in FTGAIC
– 2013: 3 teams, 3rd-year undergrad students (3rd place)
– 2014: 1 team, 4th-year undergrad students (2nd place)
– 2015: 4 teams, 3rd-year undergrad students and internship students
• Since 2014, their Senior Project has been about developing their AI bots for FTGAIC.
• From 2015 to the present, collaborating with the ICE Lab, Ritsumeikan University, on developing a Kinect interface for the fighting game controller
(the 1st version was based on the FTGAI platform)
Asst. Prof. Worawat Choensawat, worawat.c@bu.ac.th
Asst. Prof. Kingkarn Sookhanaphibarn, kingkarn.s@bu.ac.th
FightingICE@ Sejong University, South Korea
(Prof. Kyung-Joong Kim, http://cilab.sejong.ac.kr)
 We offered an Artificial Intelligence course for seniors (30% teaching, 70% projects) in Spring 2015
 17 students enrolled
 As the first course project, each student submitted "his own entry" of a fighting game AI
 We ran an internal competition with the same settings as the CIG competition
 Most entries were rule-based
 As the second course project, student teams did "short-term" research using games (including the fighting game)
 Applying CI techniques (reinforcement learning, ensemble algorithms, and so on) to the games
 Course grade
 Based on the rankings from the internal competitions and the final research report
 Encourage students to submit their entries (from the internal competition or research project) to the CIG competition
K.-J. Kim, and S.-B. Cho, “Game AI competitions: An open platform for computational intelligence education,” IEEE Computational Intelligence Magazine,
August 2013
D.-M. Yoon and K.-J. Kim, “Challenges and Opportunities in Game Artificial Intelligence Education using Angry Birds,” IEEE Access, June 2015
Contents
 FightingICE
 Contest
 Results
Our Lab at Ritsumeikan Univ., Japan
Contents
 FightingICE
 Contest
 Results
Full Scores (won all games)
96,000
CIG 2014, August 29, 2014
Appendices: AI Details
(in alphabetical order)
Fighting Game AI
with Skill Prediction and Simple Machine Learning
Zhang BoYao
School of New Media
Zhejiang University of Media and Communications
Personal information
(1) Developer's Name: Zhang Boyao
(2) AI Name: AI_ZBY0323
(3) Affiliation: zhangrichyao@hotmail.com
AI's Outline (decision flow)
Start: get the opponent's current action.
1. If the opponent's action can hit me, try to select a skill that takes effect faster than the opponent's action. If one exists, use it; otherwise enter the guard state.
2. Otherwise, try to select a skill from the Strategies List (created by machine learning). If one exists, use it.
3. Otherwise, predict the opponent's next skill and select a skill that can counter it. If one exists, use it.
4. Otherwise, approach the opponent.
Major Classes' Introduction
(1) Class: InitiativeStrategies
(2) Class: PredictNextSkill
(3) Class: StrategySelecter
InitiativeStrategies
The AI can search and update the Strategies by reading and writing the file "AISTR.txt".
In this text file, Strategies are recorded in the form
(OpponentAction, MyAction, MinDistanceX, MaxDistanceX, MinDistanceY, MaxDistanceY)
InitiativeStrategies
OpponentAction: the opponent's current action.
MyAction: the action that can counter the opponent's action.
MinDistanceX: the minimum X distance of MyAction's range.
MaxDistanceX: the maximum X distance of MyAction's range.
MinDistanceY: the minimum Y distance of MyAction's range.
MaxDistanceY: the maximum Y distance of MyAction's range.
PredictNextSkill
(1) Create two arrays to store myAction and opponentAction.
(2) Select an action from myAction and use the method Simulator.simulate to simulate it against every action in the opponentAction list, one by one, calculating the damage expectation for each situation.
(3) Repeat step (2) until all elements in myAction have been processed.
(4) Finally, return the action with the largest damage expectation by comparing the simulation results.
PredictNextSkill
Damage expectation (DE) calculation:
Total improved score: the sum of the improved scores over every situation in step (2).
Hit rate: (current myAction's hit count) / (number of opponent's actions).
DE = (Total improved score / number of opponent's actions) × hit rate
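The DE formula can be sketched as a standalone helper. This assumes the total improved score and hit count have already been accumulated from the Simulator.simulate results; the class and method names are illustrative, not from the author's code.

```java
public class DamageExpectation {
    // DE = (totalImprovedScore / nOpponentActions) * hitRate,
    // where hitRate = hitCount / nOpponentActions.
    public static double de(double totalImprovedScore, int hitCount, int nOpponentActions) {
        double hitRate = (double) hitCount / nOpponentActions;
        return totalImprovedScore / nOpponentActions * hitRate;
    }

    public static void main(String[] args) {
        // e.g. total improved score 120 over 4 opponent actions, 3 of which were hit
        System.out.println(de(120.0, 3, 4));
    }
}
```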
StrategySelecter
Integrates the InitiativeStrategies and PredictNextSkill classes, making them work with the correct timing.
The usage frequency of each strategy over 100 rounds
This graph shows the usage frequency of PredictNextSkill and InitiativeStrategies over the first 100 rounds.
As the rounds progress, the usage frequency of InitiativeStrategies increases.
So, I think this graph shows that the simple machine learning is working.
Thank you!
Ensemble Fighter
Jin Kim, JeongHyeock Ahn, SeungHo Choi, JiYuu Yi,
SuJung Kim, and Kyung-Joong Kim
Department of Computer Science and Engineering,
Sejong University, Seoul, South Korea
kimkj@sejong.ac.kr
Ensemble Approach for
Fighting Game Play
• Multiple rule-based systems designed by
different experts
• For each round, the player selector
chooses one of them randomly
Rule-based System 1
Rule-based System 2
Rule-based System 3
Player Selector
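The round-by-round random selection above can be sketched as below. The controller names and seed are placeholders; in the actual entry, the list holds the three rule-based controllers.

```java
import java.util.List;
import java.util.Random;

public class PlayerSelector {
    private final List<String> systems; // stand-ins for the rule-based controllers
    private final Random rng;

    public PlayerSelector(List<String> systems, long seed) {
        this.systems = systems;
        this.rng = new Random(seed);
    }

    // Called once per round: pick one rule-based system uniformly at random.
    public String selectForRound() {
        return systems.get(rng.nextInt(systems.size()));
    }

    public static void main(String[] args) {
        PlayerSelector sel = new PlayerSelector(List.of("Rules1", "Rules2", "Rules3"), 42L);
        System.out.println(sel.selectForRound());
    }
}
```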
Preparation of Multiple Rule-based Systems
• As an undergraduate course project, 17 students submitted their own controllers, and we ran an internal competition with the same settings as the "Fighting Game AI Competition"
• We selected the three best students' controllers and combined them as an ensemble player
Combination of Multiple Players
• From initial testing, we found that it's better to change the player round by round
• As future work, we need to use an advanced "player selection" technique instead of "random selection"
FIGHTING GAME AI
COMPETITION
Sejong University
JiIn Mun
wwldlsl9401@naver.com
Introduction
• AI Name : AI128200
• Developer's Name : Ji-In Mun
• Advisor : Kyung-Joong Kim
• Affiliation : Department of Computer Engineering, Sejong University, South Korea
AI's Outline
(1) Movement
1) To reduce the distance between the two characters
=> FOR_JUMP
2) To avoid enemy attacks
=> FOR_JUMP
AI's Outline
(3) Attack
1) Main attack skills
=> CROUCH_B, THROW_B
2) My Energy >= 60
Enemy character in air => AIR_D_DF_FB
Enemy character on ground => STAND_D_DF_FA
3) My Energy >= 300
=> STAND_D_DF_FC
Fighting Game AI: Ash
Che-Chun Chen, Cheng-Yuan Wu, Tsung-Che Chiang
Department of Computer Science and Information Engineering,
National Taiwan Normal University,
Taipei, Taiwan
40147014S@ntnu.edu.tw, asdwayne1@yahoo.com.tw, tcchiang@ieee.org
Distance \ Energy High Medium Low
Far State1 State2 State3
Medium State4 State5 State6
Close State7 State8 State9
Near State10 State11 State12
Action Probability
Action A X%
Action B Y%
…. ….
Each state has its own
Action table.
The probability is
determined by genetic
algorithm (GA) and
domain knowledge.
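One common way to sample from such a per-state action table is roulette-wheel selection, sketched below. The slide only says the GA determines the probabilities, so the sampling mechanism here is an assumption for illustration.

```java
import java.util.Random;

public class ActionTable {
    private final String[] actions;
    private final double[] probs; // per-state probabilities, e.g. evolved by a GA; must sum to 1

    public ActionTable(String[] actions, double[] probs) {
        this.actions = actions;
        this.probs = probs;
    }

    // Roulette-wheel selection: draw r in [0,1) and walk the cumulative distribution.
    public String sample(Random rng) {
        double r = rng.nextDouble(), acc = 0.0;
        for (int i = 0; i < actions.length; i++) {
            acc += probs[i];
            if (r < acc) return actions[i];
        }
        return actions[actions.length - 1]; // guard against rounding error
    }
}
```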
• Special states of the opponent
– Air
– Strong Kick
– Emit energy ball
– …..
We set different actions for these special states.
• Changing strategies
– If the damage we suffered is much higher than that of the
opponent at the middle or the end of the round, an
alternative action table will be tried.
AsuchAI_LEPnkNN
Developer: Kazuki Asayama
Supervisor: Koichi Moriyama, Ken-ichi Fukui, and Masayuki
Numao
Affiliation: The Institute of Scientific and Industrial Research,
Osaka University
Features
• Prediction of the opponent's position and action
Search for a reachable attack by using the position prediction
Counter action by using the action prediction
• "Hate" gauge
Prevention of repeatedly receiving damage
• "winDegree" gauge
The criterion for how actively to close in on the opponent.
Prediction
• Position Prediction
• Predict 15 frames later by linear extrapolation
• Calculate the distance between myself and the opponent
• Account for the command's startup time
• Example: the "Startup" of "STAND_A" is 3 frames, so 15 (original delay) + 3 (Startup) = 18 is the prediction frame.
• Search for a reachable attack by using the predicted position
• Action Prediction
• Predict 15 frames later by the k-nearest neighbor method
• 6 features: relative X, relative Y, and absolute Y coordinates, and their differences from 15 frames before
• Counter against "JUMP" and "AIR" attacks by using the predicted action
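The k-NN action prediction can be sketched as below. This is a simplification with k = 1 and shorter feature vectors in the test; the actual AI uses the 6 features listed above, and the class and method names are illustrative.

```java
import java.util.List;

public class KnnActionPredictor {
    // One stored observation: a feature vector and the action that followed it.
    public static class Sample {
        final double[] features;
        final String action;
        public Sample(double[] features, String action) {
            this.features = features;
            this.action = action;
        }
    }

    // k = 1 for brevity: return the action of the nearest stored sample
    // (squared Euclidean distance over the feature vector).
    public static String predict(List<Sample> memory, double[] query) {
        Sample best = null;
        double bestDist = Double.MAX_VALUE;
        for (Sample s : memory) {
            double d = 0.0;
            for (int i = 0; i < query.length; i++) {
                double diff = s.features[i] - query[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = s; }
        }
        return best.action;
    }
}
```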
Figure of position prediction: given the opponent's previous position (x_{t−f}, y_{t−f}) and current position (x_t, y_t), the position f frames later is extrapolated as
x_{t+f} = 2·x_t − x_{t−f}
y_{t+f} = 2·y_t − y_{t−f}
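The linear extrapolation in code (a one-line sketch per coordinate; the class and method names are illustrative):

```java
public class PositionPredictor {
    // Linear extrapolation: x(t+f) = 2*x(t) - x(t-f); the same formula applies to y.
    public static double extrapolate(double prev, double curr) {
        return 2 * curr - prev;
    }

    public static void main(String[] args) {
        // Opponent moved from x=100 to x=130 over f frames -> predicted x=160
        System.out.println(extrapolate(100, 130));
    }
}
```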
Other features
• Hate gauge
• When this AI receives damage repeatedly, it attempts to guard the next attack and thereby escape from the opponent's attack loop.
• winDegree gauge
• When this AI has a high score, it is less active than usual.
BlueLag
Julien Neveu
Internship: Faculty of Science and Technology, Bangkok University
IUT d’Angoulême, Université de Poitiers, France
Advisor: Dr. Worawat Choensawat
Bangkok University, School of Science and Technology
BlueLag
 Defence  Avoid damage by escaping fireballs and the ultimate skill, and avoid being blocked in the corner.
 Counter attack when the characters are close.
 Attack  The attack algorithm is selected depending on the distance, with different thresholds.
DEFENCE Escaping
 Escaping
 Counter attack
 Counter attack
 Escaping
ATTACK
Distance test   Actions
 We also use a test
to know if we can use the ultimate skill
DragonWarrior
Developed by: Renan Motta Goulart (Master's degree student)
Affiliation: Universidade Federal de Juiz de Fora, Brazil
Email: renan.aganai@gmail.com, raikoalihara@hotmail.com
Outline
Learns how the opponent fights.
Predicts the opponent's next move by keeping information about his past actions and the distances at which they were used.
Outline
The possible attacks that the opponent might use are inferred by using the average position and the standard deviation of where the opponent attacked.
FICE_AI_OM
Developer: Aldi Doanta Kurnia
Affiliation: Institut Teknologi Bandung
Indonesia
AI Outline
• Uses J48, an open-source implementation of the C4.5 algorithm in Weka.
• It records data during the game, to be used by the opponent-prediction system based on J48.
AI Outline
• It also uses a simple weighting system, to
determine actions during the game.
• It updates the weight for each action using an
evaluation function that calculates HP difference
between the two players, before and after the
action.
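A sketch of the weighting idea above. The evaluation and update rule here are one interpretation of the slide (HP difference before vs. after the action, added into the action's weight); the names and the learning-rate scaling are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ActionWeights {
    private final Map<String, Double> weights = new HashMap<>();

    // Evaluation: change in HP difference (own minus opponent) caused by the action.
    public static int evaluate(int myHpBefore, int oppHpBefore, int myHpAfter, int oppHpAfter) {
        return (myHpAfter - oppHpAfter) - (myHpBefore - oppHpBefore);
    }

    // Nudge the action's weight by the evaluation, scaled by a learning rate (assumed).
    public void update(String action, int eval, double learningRate) {
        weights.merge(action, learningRate * eval, Double::sum);
    }

    public double weight(String action) {
        return weights.getOrDefault(action, 0.0);
    }
}
```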
FuzzyGA
DEVELOPERS: CARLOS LÓPEZ TURIÉGANO, JOSÉ MARÍA FONT
FERNÁNDEZ, DANIEL MANRIQUE GAMO
AFFILIATION: UNIVERSIDAD POLITÉCNICA DE MADRID, SPAIN
EMAIL: CARLOSJLT24@GMAIL.COM
FuzzyGA outline
Fuzzy rule-based system using the fuzzylite library
Every state of the game is evaluated, and what to do is determined by the rule-based system
The set of rules has been obtained using an evolutionary system.
Sparring partners for training were AIs from the 2014 tournament and 2 custom AIs.
FuzzyGA – Input variables
Distance Score
Own X position Opponent X position
Own Y position Opponent Y position
Own energy Opponent energy
Own X speed Opponent X speed
Own Y speed Opponent Y speed
Input variables are obtained from frameData or composed from it.
FuzzyGA – Output variables
- The output is the action that will be sent to the CommandCenter
Horizontal Movement
FORWARD
STAND
BACK
Vertical Movement
JUMP
STAND
CROUCH
Action
STAND_GUARD, CROUCH_GUARD, AIR_GUARD, THROW_A, THROW_B,
STAND_A, STAND_B, CROUCH_A, CROUCH_B, AIR_A, AIR_B, AIR_DA, AIR_DB,
STAND_FA, STAND_FB, CROUCH_FA, CROUCH_FB, AIR_FA, AIR_FB,
AIR_UA, AIR_UB, STAND_D_DF_FA, STAND_D_DF_FB, STAND_F_D_DFA,
STAND_F_D_DFB, STAND_D_DB_BA, STAND_D_DB_BB, AIR_D_DF_FA,
AIR_D_DF_FB, AIR_F_D_DFA, AIR_F_D_DFB, AIR_D_DB_BA, AIR_D_DB_BB,
STAND_D_DF_FC
Suwijak Wipachon
Thiti Rueangrit
Sutee Chamnankit
Kingkarn Sookhanaphibarn
(Advisor)
School of Science and Technology
BANGKOK UNIVERSITY
➢ We developed an AI character by using a rule-based strategy to define the fighting states and actions.
❖ Our AI robot makes decisions based on the following states:
➢ Defense state
➢ Attack state
➢ Counter attack state
❖ Each state can be divided into two cases as follows:
➢ Far = the distance between our AI character and the opponent is over a predefined threshold.
➢ Near = the distance between our AI character and the opponent is less than a predefined threshold.
● Detection of the opponent's skill:
● In case the opponent's skill is "Fireball" or "Ultimate" and getDistance() <= threshold, our AI character will use "Jump".
● But it uses "Forward Jump" if getDistance() > threshold.
Our AI character will enter the "Attack state" or "Counter attack state" by considering two variables: the opponent's skill and the distance from the opponent.
 When the opponent's skill isn't "Fireball":
1) If (Our_AI_energy > 300) then Our_AI_skill = "Ultimate".
2) If (Our_AI_energy > 50 and the remaining time is low) then Our_AI_skill = "Small Ultimate".
3) If (getDistance() > threshold) then Our_AI_skill = "Fireball".
4) If (getDistance() is between the thresholds) then Our_AI_skill = "AIR_UB".
5) If (getDistance() < threshold) then Our_AI_skill = "CROUCH_FA", else Our_AI_skill = "CROUCH_FB".
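Rules of this kind can be sketched as a guard chain. The threshold values are placeholders (the team determined theirs experimentally), and rule 5's CROUCH_FA/CROUCH_FB tie-break is simplified to its first branch.

```java
public class AttackRules {
    // Guard chain over energy, remaining time, and distance; nearT < farT are
    // placeholder thresholds, not the team's tuned values.
    public static String selectSkill(int energy, double distance, boolean timeLow,
                                     double nearT, double farT) {
        if (energy > 300) return "Ultimate";
        if (energy > 50 && timeLow) return "Small Ultimate";
        if (distance > farT) return "Fireball";
        if (distance > nearT) return "AIR_UB";   // "between the thresholds"
        return "CROUCH_FA";                      // simplified: else-branch (CROUCH_FB) omitted
    }
}
```

The order of the guards matters: energy rules dominate distance rules, so the ultimate skills fire as soon as the energy conditions are met.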
 If (the opponent's skill is an air skill and getDistance() < threshold) then Our_AI_skill = "AIR_UA".
Sejong University (KOR)
Undergraduate students
Manje Kim, Cheong-mok Bae
 Rule-based, depending on the distance.
 Infighting style.
 Changes strategy based on the energy score.
 Very aggressive AI.
 Uses various ground skills.
 Q/A : Jaykim0104@gmail.com
MACHETE: REFLEX AGENT
AI Name: Machete.
Developed by: Axel G. Garcia Krastek.
Affiliation: Otto-von-Guericke University Magdeburg, Germany.
Contact: spaxel@gmail.com
Machete is a reflex agent with simple but effective rules.
Rules are based on the distance to the opponent, the opponent's energy, and the agent's own energy.
If opponent is too far away, Machete will get closer with forward jumps. If opponent
is far but not too far then Machete will advance forward.
When the energy of Machete reaches a threshold, it will perform an action based on
the amount of energy.
Machete has one very important survival rule: When the enemy’s energy reaches 300,
Machete will try to avoid getting hit by the energy ball that the enemy can produce with
300 points.
Finally, when none of the other conditions are met, Machete will perform kicks, which helps in two ways:
 It introduces randomness, so the enemy cannot accurately predict Machete's movements.
 Machete never stands still doing nothing; it will always be kicking, which increases its chances of winning the match.
Fighting Game AI
Competition
AI Name : Ni1mir4ri
Developer : Jiyuu Yi
Affiliation : Sejong University, Korea
2015.08.14
AI's Outline
1. Getting 3,000 points from a defensive opponent
2. Selecting the most effective action based on distance
3. Handling unfavorable situations (ensemble with the SejongFighter AI)
State diagram: Start → State 1; the opponent's movement leads to State 2; a deteriorated situation leads to State 3; an improved situation leads back to State 2.
State 1: Getting 3,000 points from a defensive opponent
The game has just started, and the opponent doesn't move, repeating the same pattern of actions.
Defensive characters tend not to move until their opponent comes into attack range.
Using this feature, my character withdraws after using air skills.
As a result, I can get 3,000 points.
Player 1 (my opponent) just repeats the same skills at the same position, even after being damaged 10 points.
State 1: Getting 3,000 points from a defensive opponent
State 2: Selecting the most effective action based on distance
According to my analysis, kicking while jumping was the most effective in close combat.
At long distance, the air skills were the most effective.
If my energy is enough to use stronger air skills, those are used.
State 3: Handling unfavorable situations (ensemble with the SejongFighter AI)
Although my character uses effective skills, it can sometimes be at a disadvantage.
An "unfavorable situation" is defined as (myHP < 3*enemyHP) && (remainingTime < 30 sec).
If my character is in trouble, it will change its pattern to that of the SejongFighter AI.
The reason I used SejongFighter's pattern is that my original pattern was vulnerable to SejongFighter.
I think the ensemble with SejongFighter can compensate for the defect in my pattern.
THANK YOU
RatioBot
Teerakit Vanitcharoennum
Natthawut manchusoontornkul
Worawat Choensawat (Advisor)
School of Science and Technology
BANGKOK UNIVERSITY
Thailand
Our Proposed Concept
❖ We developed an AI robot with a rule-based method to define the fighting states and actions; the variables considered are as follows:
➢ Distance between players
➢ Our energy
❖ Our AI robot makes decisions based on the following states:
➢ Counter state
➢ Defense state
➢ Attack state
❖ Each state can be divided into two cases as follows:
➢ Far = the distance between our AI character and the opponent is over a predefined threshold.
➢ Near = the distance between our AI character and the opponent is less than a predefined threshold.
Defense state
❖ Use the "Fireball" skill to keep a distance from the opponent, applying the following strategies:
❖ In case energy > 300 and remainingTime < 5000, use the "Ultimate" skill.
Attack state
❖ Our AI character will use the skill "CROUCH_B" whenever the distance to the opponent is in a defined range.
❖ Our AI character will use the "FOR_JUMP" skill immediately if the opponent is very close to our AI.
❖ All the constant parameters (threshold values) were determined by experiments.
Counter state
 If the enemy's positionY < threshold and positionX < threshold, use "AIR_UA".
2015_FTG_AI
SDBOT
Dept. of Computer Engineering, Sejong University
AI Name : SDBOT
Character : ZEN
Advisor : Kyung-Joong Kim
Developer Name: Seung-Ho Choi
Affiliation
Dept. of Computer Engineering, Sejong Univ.
1. Introduction
1) To reduce the distance to the opponent, skills are used, with a priority ordering among them.
The skill to use is chosen according to a predetermined distance value and a pseudo-random value.
Primary skills : STAND_D_DF_FA, STAND_D_DF_FB, STAND_D_DF_FA
2. AI’s Outline
2) Move
If the x position of my character is larger than the x position of the opponent's character
→ inputKey.L
else
→ inputKey.R
2. AI’s Outline
3) Attack
- Primary skills : STAND_D_DF_FA, STAND_D_DF_FB, STAND_D_DF_FA
- Sub-skills : AIR_B, AIR_UB
Strategy
• The AI adopts a rule-based strategy that makes heavy use of projectiles and the jump skill.
• Heavy use of projectiles has the advantage of scoring points.
• Once it gains energy, it just uses projectiles and tries to avoid the opponent's projectiles, but this part is still unfinished.
• This strategy took the No. 1 spot in the AI class competition at the university.
Points
• This code is simple yet powerful.
• Distance-based action selection is well rewarded in this game.
• Through a number of experiments, this strategy was able to get the highest score.
Weakness
• SDBOT is not a sophisticated AI because it is purely rule-based.
• No search method or machine learning is applied.
• SDBOT is weak against specific skills (catching, simultaneous jump skills, rapid-frame skills).
Thanks.
made by Seung-Ho Choi
1. Introduction
AI Name : SniperInSejong
Character : Zen
Advisor : Kyung-Joong Kim
Developer's Name : Seonghun Yoon
Affiliation : Dept. of Computer Engineering, Sejong University
2. AI's Outline
 Uses ensemble techniques
 Four strategies, changed in real time:
Basic
Enemy In Air
Approach the Enemy
Approach the Enemy In Air
2. AI's Outline
 Basic
 Use the projectile
 Enemy In Air
 Use the air kick and air projectile
 Approach the Enemy
 Use the air kick and crouch kick
 Approach the Enemy In Air
 Use the air kick and crouch kick
2. AI's Outline
 Check the HP Rate
 HP Rate > 1: avoid the enemy
 HP Rate == 1: basic strategy
 HP Rate < 1: approach the enemy
HP Rate = (myHp - 1.0) / (enemyHp - 1.0)
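The HP-rate check in code, a direct transcription of the slide (class and method names are illustrative):

```java
public class HpRateStrategy {
    // HP Rate = (myHp - 1.0) / (enemyHp - 1.0), per the slide.
    public static double hpRate(double myHp, double enemyHp) {
        return (myHp - 1.0) / (enemyHp - 1.0);
    }

    // Strategy switch: ahead -> avoid, behind -> approach, even -> basic.
    public static String strategy(double myHp, double enemyHp) {
        double r = hpRate(myHp, enemyHp);
        if (r > 1) return "AVOID_ENEMY";
        if (r < 1) return "APPROACH_ENEMY";
        return "BASIC";
    }
}
```

The -1.0 offsets presumably keep the ratio well defined when one side's HP is low; the slide does not say so explicitly.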
SNORKEL
Suriya Sampanchit
Ariya Tippawan
Kingkarn Sookhanaphibarn (Advisor)
School of Science and Technology
BANGKOK UNIVERSITY
Thailand
Overview of Our AI Character
● We use a rule-based algorithm with three main variables to define the fighting states and actions; the variables are as follows:
- Distance from the opponent
- Opponent's action
- Our AI's energy
● Our AI's fighting states are divided into two states by considering the distance from the opponent:
- Defense state when the opponent is FAR.
- Attack state when the opponent is NEAR.
● Our AI's actions are as follows:
- Defense state: "STAND_D_DF_FC", "STAND_D_DF_FA", "FOR_JUMP", "CROUCH_A", "CROUCH_FA", "BACK_JUMP"
- Attack state: "CROUCH_FA", "CROUCH_FB", "AIR_FB", "FORWARD_WALK"
DEFENSE state
• Our AI character will use these two skills: "STAND_D_DF_FC" or "STAND_D_DF_FA".
• If Dist_from_Opp <= 200 then our_AI_skill = "CROUCH_FA", else our_AI_skill = "FOR_JUMP".
• In case our_AI_character is at the corner, our_AI_skill = "CROUCH_A".
• Our AI character does not often use defensive skills like "GUARD"; it just uses "jump" to alleviate the damage from the opponent's attacks.
ATTACK state
If (Distance from Opponent > 120) then Our_AI_character_skill = “FORWARD_WALK”.
If (Distance from Opponent is in a defined range) then Our_AI_character_skill = “CROUCH_FB”
OR “AIR_FB” OR “CROUCH_FA”.
If (Opponent is very close to ours on GROUND) then Our_AI_character_skill = “CROUCH_FB”
If (Opponent is very close to ours on GROUND) then Our_AI_character_skill = “AIR_FB”
All the action skills mentioned above cannot do not much damage to Opponent. Thus, our AI
character often use “FORWARD_WALK” to make a score until our AI energy is enough to release
“Ultimate skill”.
Thank you and
see you at CIG 2016!
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Enterprise Knowledge
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 

Recently uploaded (20)

A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 

2015 Fighting Game Artificial Intelligence Competition

  • 1. 2015 Fighting Game AI Competition Kaito Yamamoto, Yuto Nakagawa, Chun Yin Chu, Lucas Philippe, Marc-Etienne Barrut, FeiYu Lu, Makoto Ishihara, Taichi Miyazaki, Toshiki Yasui, Ruck Thawonmas Team FightingICE Intelligent Computer Entertainment Laboratory Ritsumeikan University Japan CIG 2015, Sep 2, 2015
  • 3.  Fighting game AI platform in Java, viable with a small-size team  First of its kind, since 2013 & CIG 2014 (previous AI code available)  Four papers at CIG 2014 & 2015 (two by others + two by us)  Score = 1000 × opponent.HPloss / (self.HPloss + opponent.HPloss)  60 FPS (16.67 ms response time)  Current game state is delayed by 15 frames  Currently one character type, Zen, for the competition  Forward model available  Available soon!  MCTS sample AI  Nicer special effects  Kinect interface Game resources are from The Rumble Fish 2 with the courtesy of Dimps Corporation. FightingICE CIG 2015, Sep 2, 2015
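The score formula on this slide can be sketched in Java (a minimal sketch; the class and method names are my own, not part of the FightingICE API, and the zero-HP-loss fallback is an assumption the slide does not cover):

```java
final class ScoreCalc {
    // Score = 1000 * opponent.HPloss / (self.HPloss + opponent.HPloss)
    // If neither side lost any HP, treat the round as an even 500/500 split
    // (an assumption; the slide does not specify this edge case).
    static double score(int selfHpLoss, int opponentHpLoss) {
        int total = selfHpLoss + opponentHpLoss;
        if (total == 0) return 500.0;
        return 1000.0 * opponentHpLoss / total;
    }

    public static void main(String[] args) {
        // Dealing 300 damage while taking 100 yields 750 points.
        System.out.println(score(100, 300)); // prints 750.0
    }
}
```

Note that the two players' scores in a round always sum to 1000 under this rule, so 500 is the break-even point.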
  • 4. Interactions between the AI/Human Players and the System CIG 2015, Sep 2, 2015
  • 5.  Players use attack actions (skills) and moving actions to fight  A skill has 3 stages: Startup, Active, and Recovery Skill System (1/4) CIG 2015, Sep 2, 2015
  • 6. Skill System (2/4)  Startup  1st stage  Prepare to attack  No attack box yet, so no damage can be dealt CIG 2015, Sep 2, 2015
  • 7. Skill System (3/4)  Active  2nd stage  The small red box -- the attack box -- can be seen on the character’s arm  In these frames, if the opponent’s hit box overlaps this attack box, the opponent is damaged CIG 2015, Sep 2, 2015
  • 8. Skill System (4/4)  Recovery  3rd stage  Return to normal status  Special frames: cancelable  Some skills can be used during cancelable frames CIG 2015, Sep 2, 2015
  • 10.  Two games, each switching the player sides, are played for each pair of submitted AIs in a round-robin tournament.  A game has 3 rounds, each with a 5-second ready time and 60-second fighting time.  The characters' positions are reset when time runs out, and if it is not yet the third round, the system starts a new round. Contest Rules CIG 2015, Sep 2, 2015
  • 11.  17 AIs from 10 locations  Brazil, China, France, Germany, Indonesia, Japan, South Korea, Spain, Taiwan, and Thailand  Six AIs from Sejong University  Four AIs from Bangkok University  Most use rule-based systems or finite-state machines  One AI uses linear extrapolation for predicting the position  Four AIs have a mechanism for predicting the opponent’s next action  J48, k-NN (2 teams), and forward model  Two AIs use a GA for optimizing  action-execution probabilities  fuzzy rules Summary of AI Fighters CIG 2015, Sep 2, 2015
  • 12. FightingICE @ Bangkok University, Thailand Multimedia Intelligent Technology (BU-MIT) http://mit.science.bu.ac.th/ • Participation in FTGAIC – 2013: 3 teams, 3rd-year undergrad students (3rd place) – 2014: 1 team, 4th-year undergrad students (2nd place) – 2015: 4 teams, 3rd-year undergrad students and internship students • From 2014, their Senior Project has been about developing their AI bots for FTGAIC. • 2015 - present, collaboration with the ICE Lab, Ritsumeikan University on developing a Kinect interface for the fighting game controller (the 1st version was based on the FightingICE platform) Asst. Prof. Worawat Choensawat, worawat.c@bu.ac.th Asst. Prof. Kingkarn Sookhanaphibarn, kingkarn.s@bu.ac.th
  • 13. CIG 2015, Sep 2, 2015 FightingICE@ Sejong University, South Korea (Prof. Kyung-Joong Kim, http://cilab.sejong.ac.kr)  We offered an Artificial Intelligence course for seniors (30% teaching, 70% projects) in Spring 2015  17 students enrolled  As a first course project, each student submitted “his own entry” of fighting game AI  We ran an internal competition with the same setting of the CIG competition  Most of them were based on “rule-bases”  As a second course project, student teams did “short-term” research using games (including the fighting game)  Applying CI techniques (reinforcement learning, ensemble algorithm, and so on) to the games  Course grade  It’s based on the rankings from the internal competitions and the final research report  Encourage students to submit their entries (from the internal competition or research project) to the CIG competition K.-J. Kim, and S.-B. Cho, “Game AI competitions: An open platform for computational intelligence education,” IEEE Computational Intelligence Magazine, August 2013 D.-M. Yoon and K.-J. Kim, “Challenges and Opportunities in Game Artificial Intelligence Education using Angry Birds,” IEEE Access, June 2015
  • 15. Our Lab at Ritsumeikan Univ., Japan CIG 2015, Sep 2, 2015
  • 17. CIG 2015, Sep 2, 2015 Full Scores (won all games) 96,000
  • 18. CIG 2014, August 29, 2014 Appendices: AI Details (in alphabetical order)
  • 19. Fighting Game AI with Skill Predict and Simple Machine Learning Zhang BoYao School of New Media Zhejiang University of Media and Communications
  • 20. Personal information (1)Developer’s Name : Zhang Boyao (2)AI Name : AI_ZBY0323 (3)Affiliation : zhangrichyao@hotmail.com
  • 21. AI’s Outline (decision flow): Start → Get the opponent’s current action. Can it hit me? If yes, try to select a skill that takes effect faster than the opponent’s action; if one exists, use it, otherwise enter the guard state. If no, try to select a skill from the Strategies List (created by machine learning); if one exists, use it. Otherwise, predict the opponent’s next skill and select a skill that can counter it; if one exists, use it, otherwise approach the opponent. → End
  • 23. InitiativeStrategies The AI can search and update the Strategies by reading and writing the file “AISTR.txt”. In this file, strategies are recorded in the form: (OpponentAction, MyAction, MinDistanceX, MaxDistanceX, MinDistanceY, MaxDistanceY)
  • 24. InitiativeStrategies OpponentAction: the opponent’s current action. MyAction: the action that can counter the opponent’s action. MinDistanceX: the min X distance of MyAction’s range. MaxDistanceX: the max X distance of MyAction’s range. MinDistanceY: the min Y distance of MyAction’s range. MaxDistanceY: the max Y distance of MyAction’s range.
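The six-field record format above could be read roughly as follows (a sketch: the record format itself comes from the slides, but the `Strategy` class, its field names, and the parsing and matching code are hypothetical):

```java
final class Strategy {
    final String opponentAction, myAction;
    final int minDistX, maxDistX, minDistY, maxDistY;

    Strategy(String opponentAction, String myAction,
             int minDistX, int maxDistX, int minDistY, int maxDistY) {
        this.opponentAction = opponentAction;
        this.myAction = myAction;
        this.minDistX = minDistX;
        this.maxDistX = maxDistX;
        this.minDistY = minDistY;
        this.maxDistY = maxDistY;
    }

    // Parses one "(OpponentAction,MyAction,MinDistanceX,MaxDistanceX,MinDistanceY,MaxDistanceY)" record.
    static Strategy parse(String record) {
        String[] f = record.replaceAll("[()\\s]", "").split(",");
        return new Strategy(f[0], f[1],
                Integer.parseInt(f[2]), Integer.parseInt(f[3]),
                Integer.parseInt(f[4]), Integer.parseInt(f[5]));
    }

    // True when the opponent is doing this record's action within MyAction's range.
    boolean matches(String oppAction, int distX, int distY) {
        return opponentAction.equals(oppAction)
                && distX >= minDistX && distX <= maxDistX
                && distY >= minDistY && distY <= maxDistY;
    }
}
```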
  • 25. PredictNextSkill (1) Create two arrays to store myAction and opponentAction. (2) Select an action from myAction and use the method Simulator.simulate to simulate it against every action in the opponentAction list, one by one, calculating the damage expectation for each situation. (3) Repeat step (2) until all elements in myAction have been processed. (4) Finally, return the action with the largest damage expectation by comparing the simulation results.
  • 26. PredictNextSkill Damage expectation (DE) calculation: Total improved score = the sum of the improved scores over every situation in step (2). Hit rate = current myAction’s hit count / the number of opponent actions. DE = (total improved score / the number of opponent actions) × hit rate
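The damage-expectation calculation above, written out (a sketch; only the formula is from the slides, the class and method names are mine):

```java
final class DamageExpectation {
    // DE = (total improved score / #opponent actions) * hit rate,
    // where hit rate = myAction's hit count / #opponent actions (from the slide).
    static double de(double totalImprovedScore, int hitCount, int numOpponentActions) {
        double hitRate = (double) hitCount / numOpponentActions;
        return totalImprovedScore / numOpponentActions * hitRate;
    }
}
```

For example, a candidate action that accumulated an improved score of 200 over 10 simulated opponent actions and hit in 5 of them gets DE = (200 / 10) × 0.5 = 10.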
  • 27. StrategySelecter Integrates the InitiativeStrategies class and PredictNextSkill, making them work at the correct timing.
  • 28. The use frequency of each strategy over 100 rounds (Round) This graph shows the use frequency of PredictNextSkill and InitiativeStrategies in each of 100 games, starting from the first game. As the rounds progress, InitiativeStrategies is used more and more often, which suggests that the simple machine learning is working.
  • 30. Ensemble Fighter Jin Kim, JeongHyeock Ahn, SeungHo Choi, JiYuu Yi, SuJung Kim, and Kyung-Joong Kim Department of Computer Science and Engineering, Sejong University, Seoul, South Korea kimkj@sejong.ac.kr
  • 31. Ensemble Approach for Fighting Game Play • Multiple rule-based systems designed by different experts • For each round, the player selector chooses one of them randomly Rule-based System 1 Rule-based System 2 Rule-based System 3 Player Selector
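The per-round random selection described above might look like this (a sketch; the interface and class names are assumptions, not the submitted code, and the game state is simplified to a String):

```java
import java.util.List;
import java.util.Random;

// Each rule-based controller maps a game state to an action.
interface RuleBasedSystem {
    String act(String state);
}

// Picks one of the rule-based systems at random at the start of each round
// and delegates to it for the rest of that round.
final class PlayerSelector {
    private final List<RuleBasedSystem> systems;
    private final Random rng;
    private RuleBasedSystem current;

    PlayerSelector(List<RuleBasedSystem> systems, long seed) {
        this.systems = systems;
        this.rng = new Random(seed);
        this.current = systems.get(0);
    }

    // Called at the start of every round.
    void newRound() {
        current = systems.get(rng.nextInt(systems.size()));
    }

    String act(String state) {
        return current.act(state);
    }
}
```

The slide notes this random selection is a stopgap; a smarter selector (e.g. one that tracks which system wins against the current opponent) is left as future work.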
  • 32. Preparation of Multiple Rule-based Systems • As an undergraduate course project, 17 students submitted their own controllers, and we ran an internal competition with the same setting as the “fighting game AI competition” • We selected the three best students’ controllers and combined them into an ensemble player
  • 33. Combination of Multiple Players • From initial testing, we found that it’s better to change the player round by round • As future work, we need to use an advanced “player selection” technique instead of random selection
  • 34. FIGHTING GAME AI COMPETITION Sejong University JiIn Mun wwldlsl9401@naver.com
  • 35. Introduction • AI Name : AI128200 • Developers' Name : Ji-In Mun • Advisor : Kyung-joong Kim • Affiliation: Department of Computer Engineering, Sejong University, South Korea
  • 36. AI’s Outline (1) Movement 1) To reduce the distance between the two characters => FOR_JUMP (2) To avoid enemy attacks => FOR_JUMP
  • 37. AI’s Outline (3) Attack 1) Main attack skills => CROUCH_B, THROW_B 2) My energy >= 60: enemy character in the air => AIR_D_DF_FB; enemy character on the ground => STAND_D_DF_FA 3) My energy >= 300 => STAND_D_DF_FC
  • 38. Fighting Game AI: Ash Che-Chun Chen, Cheng-Yuan Wu, Tsung-Che Chiang Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei, Taiwan 40147014S@ntnu.edu.tw, asdwayne1@yahoo.com.tw, tcchiang@ieee.org
  • 39. A 4×3 state table indexed by distance (rows) and energy (columns High / Medium / Low): Far → State1–State3, Medium → State4–State6, Close → State7–State9, Near → State10–State12. Each state has its own action table (Action, Probability): Action A X%, Action B Y%, …. The probabilities are determined by a genetic algorithm (GA) and domain knowledge.
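Selecting an action from such a per-state probability table amounts to roulette-wheel sampling (a sketch; the class is mine and any table values plugged in are placeholders, since the real probabilities are evolved by the GA):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// One state's action table: action -> probability (probabilities sum to 1),
// sampled by roulette wheel.
final class ActionTable {
    private final Map<String, Double> probs = new LinkedHashMap<>();
    private final Random rng;

    ActionTable(Random rng) { this.rng = rng; }

    void put(String action, double p) { probs.put(action, p); }

    // Draw r in [0,1) and walk the cumulative distribution.
    String sample() {
        double r = rng.nextDouble(), acc = 0.0;
        String last = null;
        for (Map.Entry<String, Double> e : probs.entrySet()) {
            acc += e.getValue();
            last = e.getKey();
            if (r < acc) return e.getKey();
        }
        return last; // guard against floating-point round-off
    }
}
```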
  • 40. • Special states of the opponent – Air – Strong Kick – Emit energy ball – ….. We set different actions for these special states. • Changing strategies – If the damage we suffered is much higher than that of the opponent at the middle or the end of the round, an alternative action table will be tried.
  • 41. AsuchAI_LEPnkNN Developer: Kazuki Asayama Supervisor: Koichi Moriyama, Ken-ichi Fukui, and Masayuki Numao Affiliation: The Institute of Scientific and Industrial Research, Osaka University
  • 42. Features • Prediction of the opponent’s position and action  Search for reachable attacks using the position prediction  Counter actions using the action prediction • “Hate” gauge  Prevention of repeatedly receiving damage • “winDegree” gauge  The criterion for approaching the opponent more or less actively.
  • 43. Prediction • Position prediction • Predict 15 frames ahead by linear extrapolation • Calculate the distance between myself and the opponent • Predict the frame when a command’s startup ends • Example: the startup of “STAND_A” is 3 frames, so 15 (original delay) + 3 (startup) = 18 frames ahead is the prediction frame • Search for a reachable attack using the predicted position • Action prediction • Predict 15 frames ahead by the k-nearest-neighbor method • 6 features: relative X and Y and absolute Y coordinates, and their differences from 15 frames before • Counter “JUMP” and “AIR” attacks using the predicted action
  • 44. Figure of position prediction: given the moving opponent’s previous position (x_{t−f}, y_{t−f}) and current position (x_t, y_t), the displacement is (x_t − x_{t−f}, y_t − y_{t−f}) and the predicted position is x_{t+f} = 2x_t − x_{t−f}, y_{t+f} = 2y_t − y_{t−f}.
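The linear extrapolation in the figure is a one-liner: assume the opponent keeps the same velocity for the next f frames (a sketch; the class and method names are mine):

```java
final class PositionPredictor {
    // Linear extrapolation from the figure:
    //   x_{t+f} = 2*x_t - x_{t-f},  y_{t+f} = 2*y_t - y_{t-f}
    // i.e. add the last observed per-f-frames displacement once more.
    static int[] predict(int xPrev, int yPrev, int xNow, int yNow) {
        return new int[] { 2 * xNow - xPrev, 2 * yNow - yPrev };
    }
}
```

For example, an opponent that moved from (100, 200) to (130, 190) over the last f frames is predicted at (160, 180) f frames from now.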
  • 45. Other features • Hate gauge • When this AI receives damage repeatedly, it attempts to guard the next attack and thereby escape the opponent’s loop attack. • winDegree gauge • When this AI has a high score, it is less active than usual.
  • 46. BlueLag Julien Neveu Internship: Faculty of Science and Technology, Bangkok University IUT d’Angoulême, Université de Poitiers, France Advisor: Dr. Worawat Choensawat Bangkok University, School of Science and Technology
  • 47. BlueLag  Defence  Avoid damage by escaping fireballs and the ultimate skill, and avoid being blocked in the corner.  Counter-attack when the characters are close.  Attack  The attack algorithm is selected based on the distance, using different thresholds.
  • 48. DEFENCE — diagram of escaping and counter-attack transitions.
  • 49. ATTACK — diagram: distance test → actions; a separate test checks whether the ultimate skill can be used.
  • 50. DragonWarrior Developed by: Renan Motta Goulart (Master’s student) Affiliation: Universidade Federal de Juiz de Fora, Brazil. Email: renan.aganai@gmail.com, raikoalihara@hotmail.com
  • 51. Outline Learns how the opponent fights. Predicts the opponent’s next move by keeping information on his past actions and the distance at which they were used.
  • 52. Outline The possible attacks that the opponent might use are discovered using the average position and the standard deviation of where the opponent attacked.
  • 53. FICE_AI_OM Developer: Aldi Doanta Kurnia Affiliation: Institut Teknologi Bandung Indonesia
  • 54. AI Outline • Uses J48, an open-source C4.5 algorithm in Weka. • It records data during the game, to be used by the J48-based opponent-prediction system.
  • 55. AI Outline • It also uses a simple weighting system to determine actions during the game. • It updates the weight for each action using an evaluation function that calculates the HP difference between the two players before and after the action.
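A weighting system of this kind might look as follows (a sketch: the slide only says weights are updated from the HP difference before and after the action; the learning rate, class name, and exact update rule are my assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// Keeps a weight per action and nudges it by the HP swing the action
// produced (opponent HP lost minus own HP lost).
final class ActionWeights {
    private final Map<String, Double> w = new HashMap<>();
    private final double alpha; // learning rate (assumed)

    ActionWeights(double alpha) { this.alpha = alpha; }

    // myHpLost / oppHpLost: HP each side lost while the action played out.
    void update(String action, int myHpLost, int oppHpLost) {
        double reward = oppHpLost - myHpLost;
        w.merge(action, alpha * reward, Double::sum);
    }

    double weight(String action) { return w.getOrDefault(action, 0.0); }
}
```

Actions that consistently trade HP favorably accumulate weight and can then be preferred during action selection.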
  • 56. FuzzyGA DEVELOPERS: CARLOS LÓPEZ TURIÉGANO, JOSÉ MARÍA FONT FERNÁNDEZ, DANIEL MANRIQUE GAMO AFFILIATION: UNIVERSIDAD POLITÉCNICA DE MADRID, SPAIN EMAIL: CARLOSJLT24@GMAIL.COM
  • 57. FuzzyGA outline Fuzzy rule-based system using the fuzzylite library Every state of the game is evaluated, and what to do is determined by the rule-based system The set of rules has been obtained using an evolutionary system Sparring partners for training were AIs from the 2014 tournament and 2 custom AIs.
  • 58. FuzzyGA – Input variables Distance Score Own X position Opponent X position Own Y position Opponent Y position Own energy Opponent energy Own X speed Opponent X speed Own Y speed Opponent Y speed Input variables are obtained from FrameData or composed from it.
  • 59. FuzzyGA – Output variables - The output is the action that will be sent to the CommandCenter Horizontal Movement FORWARD STAND BACK Vertical Movement JUMP STAND CROUCH Action STAND_GUARD, CROUCH_GUARD, AIR_GUARD, THROW_A, THROW_B, STAND_A, STAND_B, CROUCH_A, CROUCH_B, AIR_A, AIR_B, AIR_DA, AIR_DB, STAND_FA, STAND_FB, CROUCH_FA, CROUCH_FB, AIR_FA, AIR_FB, AIR_UA, AIR_UB, STAND_D_DF_FA, STAND_D_DF_FB, STAND_F_D_DFA, STAND_F_D_DFB, STAND_D_DB_BA, STAND_D_DB_BB, AIR_D_DF_FA, AIR_D_DF_FB, AIR_F_D_DFA, AIR_F_D_DFB, AIR_D_DB_BA, AIR_D_DB_BB, STAND_D_DF_FC
  • 60. Suwijak Wipachon Thiti Rueangrit Sutee Chamnankit Kingkarn Sookhanaphibarn (Advisor) School of Science and Technology BANGKOK UNIVERSITY
  • 61. ➢ We developed an AI character using a rule-based strategy to define the fighting states and actions. ❖ Our AI robot makes decisions based on the following states: ➢ Defense state ➢ Attack state ➢ Counter-attack state ❖ Each state is divided into two cases: ➢ Far = the distance between our AI character and the opponent is over a predefined threshold. ➢ Near = the distance between our AI character and the opponent is less than a predefined threshold.
  • 62. ● Detection of the opponent’s skill:  ● In case the opponent’s skill is “Fireball”, our AI character will use “Jump”; likewise whenever the opponent’s skill is “Ultimate” and getDistance() <= threshold. ● But it uses “Forward Jump” if getDistance() > threshold.
  • 63. Our AI character will enter the “Attack state” or “Counter-attack state” by considering two variables: the opponent’s skill and the distance from the opponent.  When the opponent’s skill isn’t “Fireball”
  • 64. 1) If (Our_AI_energy > 300) then Our_AI_skill = “Ultimate”. 2) If (Our_AI_energy > 50 and time is low) then Our_AI_skill = “Small Ultimate”. 3) If (getDistance() > threshold) then Our_AI_skill will be “Fireball” 4) If (getDistance() is between the thresholds) then Our_AI_skill will be “AIR_UB” 5) If (getDistance() < threshold) then Our_AI_skill will be “CROUCH_FA”, else Our_AI_skill will be “CROUCH_FB”.
  • 66.  If (the opponent’s skill is Air and getDistance() < threshold) then Our_AI_skill = “AIR_UA"
  • 68.  Rule-based, depending on the distance.  Infighting style.  Changes strategy based on energy score.  Very aggressive AI.  Uses various ground skills.  Q/A: Jaykim0104@gmail.com
  • 69. MACHETE: REFLEX AGENT  AI Name: Machete. Developed by: Axel G. Garcia Krastek. Affiliation: Otto-von-Guericke University Magdeburg, Germany. Contact: spaxel@gmail.com
  • 70. Machete is a reflex agent with simple but effective rules. The rules are based on the distance to the opponent, the opponent’s energy, and the agent’s energy. If the opponent is too far away, Machete closes in with forward jumps. If the opponent is far but not too far, Machete advances forward. When Machete’s energy reaches a threshold, it performs an action based on the amount of energy.
  • 71. Machete has one very important survival rule: when the enemy’s energy reaches 300, Machete tries to avoid getting hit by the energy ball that the enemy can produce with 300 points. Finally, when none of the other conditions are met, Machete performs kicks, which helps in two ways:  It introduces randomness, so the enemy cannot accurately predict Machete’s movements.  Machete is never standing still: it is always kicking, which increases its chances of winning the match.
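Machete's rules boil down to a short if/else chain (a sketch; the threshold numbers and the specific action names below are my assumptions, since the slides describe the rules only qualitatively):

```java
import java.util.Random;

// Reflex rules: survival first, then distance-based movement,
// then energy spending, then random kicks as the default.
final class Machete {
    static String act(int distance, int myEnergy, int oppEnergy, Random rng) {
        if (oppEnergy >= 300) return "BACK_JUMP";      // survival rule: dodge the 300-energy ball
        if (distance > 300) return "FOR_JUMP";         // too far: close in with forward jumps
        if (distance > 150) return "FORWARD_WALK";     // far but not too far: advance
        if (myEnergy >= 300) return "STAND_D_DF_FC";   // energy threshold reached: spend it
        // default: keep kicking, with a little randomness
        return rng.nextBoolean() ? "STAND_B" : "CROUCH_B";
    }
}
```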
  • 72. Fighting Game AI Competition AI Name : Ni1mir4ri Developer : Jiyuu Yi Affiliation : Sejong University, Korea 2015.08.14
  • 73. AI’s Outline 1. Getting 3000 points from a defensive opponent 2. Selecting the most effective action based on distance 3. Handling an unfavorable situation (ensemble with the SejongFighter AI) (State diagram: Start → state 1; on movement of the opponent → state 2; on a deteriorated situation → state 3; on an improved situation, back to state 2.)
  • 74. State 1: Getting 3000 points from a defensive opponent This is the state right after the game starts, when the opponent does not move and repeats the same pattern of actions. Defensive characters tend not to move until their opponent comes into attack range. Using this feature, my character withdraws after using the air skills. As a result, I can get 3000 points.
  • 75. Player1 (my opponent) just repeats the same skills at the same position, even though he was damaged by 10 points. State 1: Getting 3000 points from a defensive opponent
  • 76. State 2: Selecting the most effective action based on distance According to my analysis, kicking while jumping was most effective in close combat. At long distance, the air skills were most effective. If my energy is enough to use stronger air skills, those are used.
  • 77. State 3: Handling an unfavorable situation (ensemble with the SejongFighter AI) Although my character uses effective skills, the situation can sometimes still be unfavorable. An ‘unfavorable situation’ is defined as (myHP < 3*enemyHP) && (remainingTime < 30 sec). If my character is in trouble, it changes its pattern to that of the SejongFighter AI. The reason I used SejongFighter’s pattern is that my original pattern was vulnerable to SejongFighter. I think the ensemble with SejongFighter can compensate for the defect in my pattern.
  • 79. RatioBot Teerakit Vanitcharoennum Natthawut manchusoontornkul Worawat Choensawat (Advisor) School of Science and Technology BANGKOK UNIVERSITY Thailand
  • 80. Our Proposed Concept ❖ We developed an AI robot with a rule-based method to define the fighting states and actions; the variables considered are: ➢ Distance between players ➢ Our energy ❖ Our AI robot makes decisions based on the following states: ➢ Counter state ➢ Defense state ➢ Attack state ❖ Each state is divided into two cases: ➢ Far = the distance between our AI character and the opponent is over a predefined threshold. ➢ Near = the distance between our AI character and the opponent is less than a predefined threshold.
  • 81. Defense state ❖ Use the “Fireball” skill to keep distance from the opponent, applying the following strategies: ❖ If energy > 300 and remaining time < 5000, use the “Ultimate” skill.
  • 82. Attack state ❖ Our AI character will use the skill “CROUCH_B” whenever the distance to the opponent is within a defined range. ❖ Our AI character will use the “FOR_JUMP” skill immediately if the opponent is very close to our AI. ❖ All the constant parameters (threshold values) were determined by experiments.
  • 83. Counter state  If the enemy’s positionY < threshold and positionX < threshold, use “AIR_UA”
  • 85. 2015_FTG_AI SDBOT Dept. of Computer Engineering, Sejong University
  • 86. AI Name : SDBOT Character : ZEN Advisor : Kyung-Joong Kim Developer Name: Seung-Ho Choi Affiliation Dept. of Computer Engineering, Sejong Univ. 1. Introduction
  • 87. 1) To reduce the distance to the opponent – use skills according to a priority; the skill to use is determined by the distance and a pseudo-random value. Primary skills: STAND_D_DF_FA, STAND_D_DF_FB, STAND_D_DF_FA 2. AI’s Outline 2) Move: if my character’s x-coordinate is larger than the opponent’s → inputKey.L, else → inputKey.R
  • 88. 2. AI’s Outline 3) attack - primary skills : STAND_D_DF_FA , STAND_D_DF_FB , STAND_D_DF_FA - sub-skills : AIR_B, AIR_UB
  • 89. Strategy • The AI adopts a rule-based strategy built around heavy use of projectile and jump skills. • Frequent use of projectiles has the advantage of scoring points from a safe distance. • When energy is available, the AI uses projectiles and tries to dodge the opponent’s projectiles, though this part is still unfinished. • This strategy took first place in a university AI class competition.
  • 90. Points • The code is simple yet powerful. • Distance-based action selection is well rewarded in this game. • Through a number of experiments, this strategy achieved the highest score.
  • 91. Weaknesses • SDBOT is not a strong AI because it is purely rule-based. • No search method or machine learning is applied. • SDBOT is weak against specific skills (catching, identical jump skills, and rapid-frame skills).
  • 93. 1. Introduction AI Name: SniperInSejong Character: Zen Advisor: Kyung-Joong Kim Developer’s Name: Seonghun Yoon Affiliation: Dept. of Computer Engineering, Sejong University
  • 94. 2. AI’s Outline  Uses ensemble techniques  Four strategies: Basic, Enemy In Air, Approach the Enemy, Approach the Enemy In Air  The strategy changes in real time
  • 95. 2. AI’s Outline  Basic  Use the projectile  Enemy In Air  Use the air kick and air projectile  Approach the Enemy  Use the air kick and crouch kick  Approach the Enemy In Air  Use the air kick and crouch kick
  • 96. 2. AI’s Outline  Check the HP rate  HP rate > 1: avoid the enemy  HP rate == 1: basic strategy  HP rate < 1: approach the enemy HP rate = (myHp - 1.0)/(enemyHp - 1.0)
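The real-time strategy switch above can be expressed directly from the slide's formula. This is a minimal sketch: the formula and the three branches follow the slide, while the strategy-name strings and the exact-equality comparison are illustrative assumptions.

```java
// Sketch of SniperInSejong's HP-rate strategy switch, using the
// formula from the slide. Strategy names are illustrative labels.
public class HpRateSketch {
    // HP rate = (myHp - 1.0) / (enemyHp - 1.0)
    static double hpRate(double myHp, double enemyHp) {
        return (myHp - 1.0) / (enemyHp - 1.0);
    }

    // rate > 1: we are ahead, so avoid the enemy and protect the lead;
    // rate < 1: we are behind, so approach and attack;
    // rate == 1: fall back to the basic (projectile) strategy.
    static String selectStrategy(double myHp, double enemyHp) {
        double rate = hpRate(myHp, enemyHp);
        if (rate > 1.0) return "AVOID";
        if (rate < 1.0) return "APPROACH";
        return "BASIC";
    }
}
```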
  • 97. SNORKEL Suriya Sampanchit Ariya Tippawan Kingkarn Sookhanaphibarn (Advisor) School of Science and Technology BANGKOK UNIVERSITY Thailand
  • 98. Overview of Our AI Character ● We use a rule-based algorithm with three main variables to define the fighting states and actions; the variables are as follows: - Distance from the opponent - Opponent’s action - Our AI’s energy ● Our AI’s fighting states are divided into two states by considering the distance from the opponent: - Defense state when the opponent is FAR. - Attack state when the opponent is NEAR. ● Our AI’s actions are as follows: - Defense state: “STAND_D_DF_FC”, “STAND_D_DF_FA”, “FOR_JUMP”, “CROUCH_A”, “CROUCH_FA”, “BACK_JUMP” - Attack state: “CROUCH_FA”, “CROUCH_FB”, “AIR_FB”, “FORWARD_WALK”
  • 99. DEFENSE state • Our AI character uses these two skills: “STAND_D_DF_FC” or “STAND_D_DF_FA”. • If Dist_from_Opp <= 200, then our_AI_skill = “CROUCH_FA”; else our_AI_skill = “FOR_JUMP”. • In case our AI character is at a corner, our_AI_skill = “CROUCH_A”. • Our AI character rarely uses defensive skills like “GUARD”; instead, it just jumps to alleviate the damage from the opponent’s attacks.
  • 100. ATTACK state If (distance from the opponent > 120) then our_AI_character_skill = “FORWARD_WALK”. If (distance from the opponent is in a defined range) then our_AI_character_skill = “CROUCH_FB” or “AIR_FB” or “CROUCH_FA”. If (opponent is very close to ours on the GROUND) then our_AI_character_skill = “CROUCH_FB”. If (opponent is very close to ours in the AIR) then our_AI_character_skill = “AIR_FB”. The action skills mentioned above do not deal much damage to the opponent. Thus, our AI character often uses “FORWARD_WALK” to build its score until our AI’s energy is enough to release the “Ultimate” skill.
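SNORKEL's attack-state rules can be sketched as an ordered rule chain. This is an illustrative reconstruction, not the team's code: the slide gives the 120 walk threshold, but the "very close" cutoff (`CLOSE`) and the choice of `CROUCH_FA` as the representative mid-range kick are assumptions.

```java
// Illustrative sketch of SNORKEL's ATTACK-state rules. The CLOSE
// threshold for "very close" is an assumed value; 120 is from the slide.
public class SnorkelAttackSketch {
    static final int CLOSE = 50;  // assumed "very close" distance
    static final int WALK = 120;  // from the slide: walk forward beyond this

    static String selectAttack(int distance, boolean opponentInAir) {
        if (distance > WALK) return "FORWARD_WALK"; // close in to build score
        if (distance <= CLOSE) {
            // very close: ground opponent -> CROUCH_FB, airborne -> AIR_FB
            return opponentInAir ? "AIR_FB" : "CROUCH_FB";
        }
        // in the defined middle range; any of the three kicks applies
        return "CROUCH_FA";
    }
}
```

The first-match ordering matters here: checking the walk range before the close range keeps each distance band mapped to exactly one action.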
  • 101.
  • 103. CIG 2015, Sep 2, 2015 Thank you and see you at CIG 2016!
