These are slides about the 2019 Fighting Game Artificial Intelligence Competition presented at the 2019 IEEE Conference on Games (CoG) on August 21, 2019 in London, UK.
1. 2019 Fighting Game AI Competition
Yoshina Takano: lead programmer
Ryota Ishii: programmer, tester, etc.
Yutian Ma: programmer, tester, etc.
Hayato Noguchi: programmer, tester, etc.
Hideyasu Inoue: programmer, tester, etc.
Tatsuki Toma: programmer, tester, etc.
Keita Fujimaki: programmer, tester, etc.
Suguru Ito (now with DIMPS): advisor
Takahiro Kusano (now with KONAMI): advisor
Tomohiro Harada: vice director
Ruck Thawonmas: director
Team FightingICE
Intelligent Computer Entertainment Laboratory
Ritsumeikan University
Japan
Game resources are from The Rumble Fish 2 with the courtesy of Dimps Corporation.
CoG 2019: Aug 21, 2019
3. A fighting game AI platform viable for development by a small team, written in Java and also wrapped for Python
First of its kind since 2013 & CIG 2014, developed from scratch without using game ROM data
Aims:
Towards general fighting game AIs
Strong against any unseen opponents (AIs or players), character types, and play modes
FightingICE
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
4. Has a 16.67 ms response time (60 FPS) for the agent to choose its action out of 40 actions
Provides the latest game state with a delay of 15 frames, to simulate human response time
Equipped with:
a forward model
a method for accessing the screen information
an OpenAI Gym API
FightingICE’s Main Features
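The 15-frame delay above can be modeled as a fixed-length buffer: at frame t the agent only observes the state from frame t-15. A minimal sketch, with a hypothetical `DelayedStateBuffer` class that is not part of FightingICE itself:

```python
from collections import deque

FRAME_DELAY = 15  # FightingICE delays the observed game state by 15 frames

class DelayedStateBuffer:
    """Holds recent states and exposes only the one from FRAME_DELAY
    frames ago, simulating human response time."""
    def __init__(self, delay=FRAME_DELAY):
        self.buffer = deque(maxlen=delay + 1)

    def push(self, state):
        self.buffer.append(state)

    def observe(self):
        # The oldest retained state is exactly `delay` frames behind
        # the latest push (once the buffer has filled up).
        return self.buffer[0]

buf = DelayedStateBuffer()
for frame in range(20):
    buf.push({"frame": frame})
print(buf.observe()["frame"])  # 4: at frame 19 the agent sees frame 4
```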
Why FightingICE?
Deep learning does not prevail yet!!
Generalization against different opponents is challenging
60 FPS + the introduced delay are challenging factors for MCTS
5. Recent Research Using FightingICE
One paper at EANN 2019
“A Hybrid Approach for the Fighting Game AI Challenge: Balancing Case Analysis and Monte Carlo Tree Search for the Ultimate Performance in Unknown Environment” (Thuan et al.)
Preselection of a set of actions for Monte-Carlo tree search + some rules
One paper at GECCO 2019
“Integrating Agent Actions with Genetic Action Sequence Method” (Kim et al.)
A combination of genetic operations and Monte-Carlo tree search
Two papers at CoG 2019 by our group:
MCTS + highlight cues for generation of entertaining gameplay (oral)
DDA-MCTS AI for physical health promotion (poster)
7. Three tournaments for Standard and Speedrunning, using three characters: ZEN, GARNET, and LUD (LUD’s character data not revealed in advance)
Standard: the winner of a round is the one whose HP is above zero when its opponent's HP has reached zero (all AIs' initial HP = 400)
Speedrunning: the winner for a given character type is the AI with the shortest average time to beat our sample MctsAi (all AIs' initial HP = 400)
Contest Rules
Contest Rules
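The two winner-determination rules above can be expressed as small functions (a sketch; the function names and the draw handling are my own assumptions):

```python
def standard_round_winner(hp_p1, hp_p2):
    """Standard rule: when one fighter's HP reaches zero, the fighter
    whose HP is still above zero wins the round (initial HP is 400)."""
    if hp_p1 <= 0 and hp_p2 <= 0:
        return None  # simultaneous KO: treated as a draw in this sketch
    if hp_p2 <= 0:
        return "P1"
    if hp_p1 <= 0:
        return "P2"
    return None  # round not yet decided

def speedrun_score(beat_times):
    """Speedrunning rule: rank AIs by the average time needed to beat
    the sample MctsAi; lower is better."""
    return sum(beat_times) / len(beat_times)

print(standard_round_winner(120, 0))       # P1
print(speedrun_score([30.0, 40.0, 35.0]))  # 35.0
```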
8. Nine entries from Bangladesh, China, Germany, Indonesia, Japan, Korea, Mexico, and Thailand
One sample AI (Openloop MCTS) from our group for reference: MCTS AI
Summary of AI Fighters
Techniques in use by the nine submitted AIs:
3 AIs: multiple heuristic rules
2 AIs: MCTS + heuristic rules
2 AIs: Minimax + MCTS
1 AI: genetic operations + MCTS
1 AI: RHEA + opponent modeling (with DeepLearning4j)
10. Results
• Winner AI: Reiwa Thunder by Eita Aoki (young professional), Japan
• an improved version of his 2018 champion, replacing MCTS with MINIMAX + a set of heuristic rules for each character
• The developer has won this competition for four consecutive years!
• Runner-up AI: RHEA_PI by Zhentao Tang and Jiabo Zhang, University of Chinese Academy of Sciences and University of Science and Technology Beijing, China
• Rolling Horizon Evolutionary Algorithm combined with an adaptive learning-based opponent model (with DeepLearning4j)
• 3rd Place AI: Toothless by Lam Gia Thuan and Marcin Stelmaszyk, Frankfurt University of Applied Sciences, Germany
• a combination of MiniMax and MCTS + rules, developed in Kotlin
11. Thank you and see you at CoG 2020 in Osaka
12. Appendices: AI Details (in alphabetical order)
15. Dice AI
Outline
- Picks a random action, but looks at the distance between P1 and P2
- Uses arrays to keep actions usable on the ground and in the air
- Checks the distance between P1 and P2
16. Outline
- Checks the state of the character (in the air / on the ground)
- Chooses an action for that state
19. Introduction
This is my first project making a fighting game AI. This AI is mainly based on the sample MCTS AI, with several other projects as references.
The general strategy: the AI agent switches between aggressive and defensive strategies based on the HP difference between the AI agent and the enemy.
20. Strategy Outline
While the AI agent has more HP than the enemy, or at least not much less, it will be more aggressive. In this situation, the agent focuses on moves that recover energy or have high damage output.
While the AI agent has far lower HP than the enemy, it will be more defensive. In this situation, the agent focuses on moves with a low active-frame count or a low total-frame count.
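The HP-based switching described above might be sketched as follows; the margin threshold and the move lists are illustrative assumptions, not the submitted AI's actual values:

```python
# Hypothetical move pools (STAND_D_DF_FC is a real FightingICE action;
# the grouping here is an assumption for illustration).
AGGRESSIVE_MOVES = ["STAND_D_DF_FC", "STAND_FB"]  # energy-recovering / high damage
DEFENSIVE_MOVES = ["STAND_A", "STAND_GUARD"]      # low active / total frame counts

HP_MARGIN = 50  # "not much less HP" threshold, an assumed value

def choose_move_pool(my_hp, enemy_hp):
    """Aggressive while ahead or roughly even on HP; defensive when far behind."""
    if my_hp >= enemy_hp - HP_MARGIN:
        return AGGRESSIVE_MOVES
    return DEFENSIVE_MOVES

print(choose_move_pool(300, 250))  # aggressive: more HP than the enemy
print(choose_move_pool(100, 350))  # defensive: far lower HP
```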
21. HAIBU AI
2019 FIGHTING GAME AI COMPETITION
INTELLIGENT COMPUTER ENTERTAINMENT LAB, RITSUMEIKAN UNIVERSITY
22. INTRODUCTION
o DEVELOPER:
Nowshin Faiza Alam
o AFFILIATION:
Asian University for Women (AUW), Bangladesh
o CONTACT INFO:
EMAIL: nowshinalam92@gmail.com
23. AI OUTLINE
o SOURCE CODE / REFERENCE USED:
Previously developed Mutagen by Connor Gregorich-Trevor, JayBot_GM by Man-Je Kim & Donghyeon Lee,
ZoneAI by Frank Ying
o TECHNIQUES IMPLEMENTED & MODIFICATIONS:
A hybrid AI consisting of a rule-based system and state grouping. Actions are divided into a number of array sets. The arrays and rule sets from Mutagen and JayBot_GM have been revised and reused. After traversing the arrays based on its distance from the opponent, the AI randomly picks an action. Some actions are repeated multiple times in the arrays according to their importance.
Instead of Monte Carlo Tree Search for decision making, randomization of the action sets based on certain rules is used to ensure the AI uses a wide variety of actions in certain situations. This idea was inspired by ZoneAI, developed by Frank Ying. Since ZoneAI's stated plans for future work mentioned prediction of opponents' attacks, different rules for various states, and energy usage as open issues, HaibuAI tries to find simple solutions for these while keeping the overall simplicity intact.
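The distance-keyed arrays with importance-by-repetition described above could look like this sketch (the distance cut-offs and action names are illustrative, not HaibuAI's actual tables):

```python
import random

# Actions repeated in an array are picked with proportionally higher
# probability -- the "repeated by importance" idea from the slide.
CLOSE_ACTIONS = ["THROW_A", "STAND_B", "STAND_B"]  # STAND_B weighted x2
MID_ACTIONS = ["STAND_FB", "CROUCH_FB"]
FAR_ACTIONS = ["FOR_JUMP", "DASH", "DASH"]         # DASH weighted x2

def pick_action(distance, rng=random):
    """Traverse the distance-keyed arrays, then pick uniformly at
    random, so repeated entries are more likely to be chosen."""
    if distance < 60:
        pool = CLOSE_ACTIONS
    elif distance < 200:
        pool = MID_ACTIONS
    else:
        pool = FAR_ACTIONS
    return rng.choice(pool)

print(pick_action(30, random.Random(0)))  # one of the close-range actions
```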
24. AI OUTLINE
• PLANS FOR FUTURE IMPROVEMENTS
Even though the rules are specified, the characters cannot always find the best or optimal action, since no search technique has been used. Nor can they always successfully predict the opponent's next move and guard against the attack. A better algorithm needs to be implemented for more optimal results and prediction.
Since it is a rule-based AI for the most part, the AI itself does not learn anything on its own. I wish to apply reinforcement learning in the future to make the AI as intelligent as possible as I study this subject further.
Optimization for all the characters
28. Introduction
Members:
Man-Je Kim¹ (Graduate Student)
Sungjin Kim² and Junho Kim²
Affiliations:
¹ Gwangju Institute of Science and Technology (GIST)
² LG Electronics Co., Ltd.
Acknowledgement:
Our AI development was technically supported by LG Electronics.
29. Genetic Action Sequence
• This slide gives a brief description of the Action Sequence method, which was designed to deal with a "penny-wise and pound-foolish" problem. Based on a combination of genetic operations and Monte-Carlo tree search, the proposed method is expected to show improved computational efficiency in situations whose difficulties are often troublesome to resolve with naive behaviors.
Integrating Agent Actions with Genetic Action Sequence Method (GECCO 2019)
30. State Grouping Method
• State Grouping binds similar states together. It is a technique inspired by the representativeness heuristic: a single feature in a set represents the whole set's characteristics, based on the comparison of corresponding action frequencies. With a set of state groups, one can say that there is high spatial similarity throughout the space in which actions are repeated.
Ongoing paper
31. Opponent Action Table
• This technique, based on the Action Table, is a moderate improvement over its 2017 predecessor. The AI acts according to the action table it currently holds. In the first round, it uses a table consisting of the actions favored by the top 5 agents from the last competition. After that, it replaces the table with one collecting the opponent's actions most frequently performed during the previous round.
Opponent modeling based on action table for MCTS-based fighting game AI (CIG 2017)
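A minimal version of such an action table, assuming it simply counts the opponent's actions per round; the seed contents and the class name are placeholders, not the real 2017 data:

```python
from collections import Counter

# Seed table: actions favored by the top 5 agents of the last
# competition (placeholder values for illustration).
seed_table = Counter({"STAND_B": 5, "CROUCH_FB": 3, "FOR_JUMP": 2})

class OpponentActionTable:
    def __init__(self, seed):
        self.table = Counter(seed)
        self.current_round = Counter()

    def record(self, opponent_action):
        self.current_round[opponent_action] += 1

    def end_round(self):
        # Replace the table with what the opponent did this round.
        self.table = self.current_round
        self.current_round = Counter()

    def most_expected(self):
        return self.table.most_common(1)[0][0]

t = OpponentActionTable(seed_table)
print(t.most_expected())  # STAND_B, from the seed table
for a in ["DASH", "DASH", "STAND_A"]:
    t.record(a)
t.end_round()
print(t.most_expected())  # DASH, the opponent's most frequent action
```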
32. Hybrid Method
• This algorithm is a combination of MCTS and a Genetic Algorithm, with a hierarchical recursive structure in which the genetic algorithm selects a set of candidate behaviors; based on these behaviors, MCTS then explores for the best one.
Hybrid Fighting Game AI Using a Genetic Algorithm and Monte Carlo Tree Search (GECCO 2018)
33. References
[1] Opponent modeling based on action table for MCTS-based fighting game AI, CIG 2017
[2] Hybrid fighting game AI using a genetic algorithm and Monte Carlo tree search, GECCO 2018
[3] Integrating agent actions with genetic action sequence method, GECCO 2019
36. Outline
The player attacks if the distance between the players is less than a threshold, and uses a movement action such as FOR_JUMP or DASH if the distance is too far. The attack action is set at random.
[Flowchart: Get Distance → Is Near? → True: attack by Random(), set a new attack; False: use a movement action]
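The flow above as a sketch (the distance threshold and action lists are assumed values):

```python
import random

NEAR_THRESHOLD = 100  # assumed distance threshold
ATTACKS = ["STAND_A", "STAND_B", "CROUCH_B"]  # illustrative attack pool
MOVEMENT = ["FOR_JUMP", "DASH"]               # movement actions from the slide

def next_action(distance, rng=random):
    """Attack with a random move when near; close the gap when far."""
    if distance < NEAR_THRESHOLD:    # Is Near? -> True
        return rng.choice(ATTACKS)   # set a new random attack
    return rng.choice(MOVEMENT)      # False -> use a movement action

print(next_action(50, random.Random(0)))   # a random attack
print(next_action(300, random.Random(0)))  # a movement action
```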
37. Outline
COMBO
• If the attack hits the enemy (Hit = true), the current attack action is added to the current combo.
• If not, the current combo is added to the combo list (as long as the list is not yet full) and a new attack is set.
38. Outline
After the combo list is full, new attack setup stops and the player uses combo attacks from the combo list. The combo whose attacks hit the most is used as the current combo.
[Diagram: Combo-List → Current Combo → the most-hit attack]
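The combo-building rules on these slides could be sketched as follows; the list capacity and the "best combo = most hits" reading are my assumptions:

```python
COMBO_LIST_SIZE = 5  # assumed capacity of the combo list

class ComboBuilder:
    """On hit, extend the current combo; on miss, store the combo
    (while the list has room) and start a fresh attack."""
    def __init__(self, size=COMBO_LIST_SIZE):
        self.size = size
        self.combo_list = []
        self.current = []

    def on_attack(self, action, hit):
        if hit:
            self.current.append(action)
        else:
            if self.current and len(self.combo_list) < self.size:
                self.combo_list.append(self.current)
            self.current = []

    def full(self):
        return len(self.combo_list) >= self.size

    def best_combo(self):
        # The stored combo with the most hits is used as the current combo.
        return max(self.combo_list, key=len)

cb = ComboBuilder(size=2)
for action, hit in [("STAND_A", True), ("STAND_B", True), ("STAND_A", False),
                    ("CROUCH_B", True), ("STAND_A", False)]:
    cb.on_attack(action, hit)
print(cb.full())        # True: the list now holds 2 combos
print(cb.best_combo())  # ['STAND_A', 'STAND_B'], the combo with the most hits
```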
40. Outline
Base: Thunder, which I made in 2018.
New approaches:
・Bug fix of the Simulator (the default Simulator can't use Action.STAND_D_DF_FC)
・Use MinMax (assume both self and the opponent take hitting actions)
・Use the action "NEUTRAL" at the right time
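A toy version of the MinMax idea above, where both sides are assumed to pick their best hitting action; the damage table is illustrative, not Reiwa Thunder's actual evaluation:

```python
# Hypothetical per-action damage values (advantage change when the
# acting side lands the move).
DAMAGE = {"STAND_A": 5, "STAND_B": 10, "STAND_FB": 8}

def minimax(advantage, depth, my_turn):
    """advantage = own HP minus opponent HP. Both sides are assumed
    to pick the exchange best for themselves, as the slide suggests."""
    if depth == 0:
        return advantage
    outcomes = []
    for dmg in DAMAGE.values():
        delta = dmg if my_turn else -dmg
        outcomes.append(minimax(advantage + delta, depth - 1, not my_turn))
    return max(outcomes) if my_turn else min(outcomes)

# Best own hit (+10) followed by the opponent's best reply (-10): even.
print(minimax(0, 2, True))  # 0
```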
41. Rethinking the Approach from the Midterm
GARNET
In the Midterm, I could not get first place in the GARNET Standard League.
My AI now assumes that the enemy uses projectile attacks aggressively, as a countermeasure against LGIST_Bot, which won 1st prize in the Midterm.
42. Approaches That I Tried but That Did Not Go Well
Actor-Critic learning like AlphaGo
It was difficult because of the delay
Imitating CheatMctsAi's policy by policy learning
It was able to imitate partially but was not strong
Making effective use of the new parameter "isControl"
Please do not change the specifications 3 weeks before the deadline
43. Rolling Fighting Bot - RHEA_PI
Zhentao Tang (Student)
Affiliation: University of Chinese Academy of Sciences
Jiabo Zhang (Student)
Affiliation: University of Science and Technology Beijing
44. Rolling Fighting Bot
• Rolling Fighting Bot is based on the Rolling Horizon Evolutionary Algorithm, combined with an adaptive learning-based opponent model. It is capable of inferring which action the opponent will take in the next step according to battle history, and adopts the rolling horizon evolutionary algorithm to search for the action sequence of its own that can inflict effective damage on the opponent.
• Besides, Rolling Fighting Bot uses the Thunder bot as a reference, and uses a valid action set as candidates.
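A bare-bones rolling-horizon loop in the spirit of this description; the population sizes, mutation scheme, and toy fitness are illustrative stand-ins for the real forward-model evaluation against the predicted opponent:

```python
import random

ACTIONS = ["STAND_A", "STAND_B", "FOR_JUMP", "DASH"]  # illustrative action set
HORIZON = 5       # length of each evolved action sequence (assumed)
POP_SIZE = 10     # population size (assumed)
GENERATIONS = 20  # evolution budget per decision (assumed)

def rhea_step(fitness, rng):
    """Evolve action sequences, then execute the first action of the
    best one; the horizon rolls forward at the next decision."""
    pop = [[rng.choice(ACTIONS) for _ in range(HORIZON)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP_SIZE // 2]       # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(HORIZON)] = rng.choice(ACTIONS)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)[0]

# Toy fitness: damage-dealing actions score; a real fitness would run
# the forward model against the modeled opponent's action sequence.
toy_fitness = lambda seq: sum(a.startswith("STAND") for a in seq)
print(rhea_step(toy_fitness, random.Random(0)))
```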
50. TOVOR (Fighting ICE AI)
by Carlos Torres-Fernández (No affiliation)
Fighting Game AI Competition, Intelligent Computer Entertainment Lab., Ritsumeikan University
51. AI Outline
A simple implementation of the Monte Carlo Tree Search algorithm
[MCTS diagram image from Wikipedia.org]
52. AI Outline
Assumes random moves by the opponent
Makes simulations in intervals of 35 frames, the average duration of ZEN’s actions
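A flat Monte Carlo simplification of this idea: evaluate each own action by random rollouts advancing 35 frames per step, with the opponent acting randomly. The damage model and all constants are toy assumptions, not TOVOR's implementation:

```python
import random

SIM_STEP = 35      # frames per simulated step, ZEN's average action length
ROLLOUT_STEPS = 3  # 35-frame steps each rollout looks ahead (assumed)
N_ROLLOUTS = 50    # rollouts per candidate action (assumed)

# Toy per-action damage values standing in for the forward model.
DAMAGE = {"STAND_A": 5, "STAND_B": 12, "STAND_GUARD": 0}
ACTIONS = list(DAMAGE)

def rollout(first_action, rng):
    """Simulate ROLLOUT_STEPS exchanges of SIM_STEP frames each; after
    the first move, both sides act uniformly at random, as the slide
    assumes for the opponent."""
    advantage = DAMAGE[first_action] - DAMAGE[rng.choice(ACTIONS)]
    for _ in range(ROLLOUT_STEPS - 1):
        advantage += DAMAGE[rng.choice(ACTIONS)] - DAMAGE[rng.choice(ACTIONS)]
    return advantage

def monte_carlo_choose(rng):
    """Average rollout value per first action; return the best one.
    The highest-damage opener should usually win out."""
    scores = {a: sum(rollout(a, rng) for _ in range(N_ROLLOUTS)) / N_ROLLOUTS
              for a in ACTIONS}
    return max(scores, key=scores.get)

print(monte_carlo_choose(random.Random(0)))
```

Full MCTS additionally builds a tree with selection and backpropagation; this flat version only keeps the rollout-evaluation step that the 35-frame interval affects.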
53. Thank you again and see you at CoG 2020 in Osaka