2021 Fighting Game AI Competition
Keita Fujimaki, lead programmer
Xincheng Dai, co-lead programmer
Roman Savchyn, tester, etc.
Hideyasu Inoue, advisor
Pujana Paliyawan, vice director
Ruck Thawonmas, director
Team FightingICE
Intelligent Computer Entertainment Laboratory
Ritsumeikan University
Japan
Game resources are from The Rumble Fish 2 with the courtesy of Dimps Corporation.
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
CoG 2021: Aug 16-20, 2021 Updated on August 28, 2021
Contents
FightingICE
Contest
Results
FightingICE
 A fighting game AI platform that a small team can feasibly develop with, written in Java and also wrapped for Python
 First of its kind since 2013 & CIG 2014, developed from scratch without using game ROM data
 Aims: for research on general fighting game AIs
 Strong against any unseen opponents (AIs or players), character types, and play modes
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
FightingICE's Main Features
 Has a 16.67 ms response time (60 FPS) for the agent to choose its action out of 40 actions
 Provides the latest game state with a delay of 15 frames, to simulate human response time
 Equipped with
 a forward model
 a method for accessing the screen information
 an OpenAI Gym API
 Why FightingICE?
 Generalization against different opponents of unknown behaviors → challenging for DRL
 60 FPS + introduced delay → challenging for tree search
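The 15-frame delay works like a FIFO buffer over game states. The sketch below illustrates the idea only; the class name is hypothetical, and FightingICE implements this internally.

```python
from collections import deque

# Illustration of a 15-frame observation delay (hypothetical helper class;
# FightingICE applies this delay internally before the agent sees the state).
class FrameDelayBuffer:
    def __init__(self, delay=15):
        self.delay = delay
        self.buffer = deque()

    def push(self, frame_state):
        """Record the true state for the current frame and return the state
        the agent is allowed to observe (from `delay` frames ago)."""
        self.buffer.append(frame_state)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None  # no observation available during the first `delay` frames

buf = FrameDelayBuffer(delay=15)
observed = [buf.push(t) for t in range(20)]
# frames 0-14 yield None; from frame 15 on, the agent sees frame t-15
```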
Recent Publications Using FightingICE by
Other Groups since CoG 2020
 Rongqin Liang, Yuanheng Zhu, Zhentao Tang, Mu Yang and Xiaolong Zhu, "Proximal Policy Optimization with Elo-based Opponent Selection and Combination with Enhanced Rolling Horizon Evolution Algorithm," 2021 IEEE Conference on Games, August 17-20, 2021.
 Tianyu Chen, Florian Richoux, Javier M. Torres, Katsumi Inoue, "Interpretable Utility-based Models Applied to
the FightingICE Platform," 2021 IEEE Conference on Games, August 17-20, 2021.
 Man-Je Kim, Jun Suk Kim, Sungjin James Kim, Min-jung Kim, Chang Wook Ahn, "Genetic state-grouping algorithm for deep reinforcement learning," Expert Systems with Applications, 15 December 2020.
 Xenija Neufeld, "Long-Term Planning and Reactive Execution in Highly Dynamic Environments," Doctoral thesis,
Otto-von-Guericke-Universität Magdeburg, Dec. 2020.
 Zhentao Tang, Yuanheng Zhu, Dongbin Zhao, and Simon M. Lucas, "Enhanced Rolling Horizon Evolution
Algorithm with Opponent Model Learning," IEEE Transactions on Games, 2020.
 Deng Shida, Takeshi Ito, "Fighting game AI with dynamic difficulty adjustment to make it fun to play against,"
Proc. of the 25th Game Programming Workshop 2020, pp. 58-61, Nov. 2020. (in Japanese)
 Yuanheng Zhu, Dongbin Zhao, "Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games," IEEE Transactions on Neural Networks and Learning Systems, Nov. 2020. (Early Access)
 Mohammad Farhan Ferdous, "Privacy Preservation Algorithms on Cryptography for AI as Human-like Robotic
Player for Fighting Game Using Rule-Based Method," Cyber Defense Mechanisms, pp. 185-196. Sep. 2020.
 MJ Kim, JH Lee, CW Ahn, "Genetic Optimizing Method for Real-time Monte Carlo Tree Search Problem," Proc. of the 9th International Conference on Smart Media and Applications, Sep. 2020.
Contest Rules
 Standard and Speedrunning leagues, each using three characters: ZEN, GARNET, and LUD (GARNET and LUD's character data are not revealed in advance, making them unknown characters)
 Standard: the winner of a round is the AI whose HP is above zero at the time its opponent's HP reaches zero (all AIs' initial HP = 400)
 Speedrunning: for a given character type, the winner is the AI with the shortest average time to beat our sample MctsAi (all AIs' initial HP = 400)
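The two winner definitions above can be sketched as small scoring functions. The helper names are ours, and the real tournament involves multiple rounds and tie-breaking, so this is only a rough reading of the rules.

```python
# Rough sketch of the two leagues' winner definitions (helper names are
# ours; the actual tournament scoring involves multiple rounds per match).

def standard_round_winner(hp_p1, hp_p2):
    """Standard league: the round winner is the player whose HP is above
    zero when the opponent's HP reaches zero (both start at 400)."""
    if hp_p1 <= 0 and hp_p2 > 0:
        return "P2"
    if hp_p2 <= 0 and hp_p1 > 0:
        return "P1"
    return "draw"

def speedrunning_score(beat_times_sec):
    """Speedrunning league: an AI's score for a character is its average
    time to beat the sample MctsAi (lower is better)."""
    return sum(beat_times_sec) / len(beat_times_sec)

standard_round_winner(120, 0)     # P1 survives, so P1 wins the round
speedrunning_score([30.0, 40.0])  # average beat time of 35.0 seconds
```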
Summary of 10 Entries

AI | Affiliation | Language | Description
BlackMamba | Researcher team from Netease Games AI Lab, China | Java | PPO trained against a weakened MctsAi in the Speedrunning League, and against self-play or previous entries with added noise in the character data in the Standard League
EggTart | Student from KMUTT, Thailand | Java | Rule-based AI
ERHEA_PPO_PG | Student team from University of Chinese Academy of Sciences, China | Java | Enhanced Rolling Horizon Evolution Algorithm combined with Proximal Policy Optimization (PPO) with Elo-based opponent selection
IBM_AI | Haripur University graduate, Pakistan | Java | Rule-based AI
Thunder2021 | Individual developer, Japan | Java | 1. Prioritize certain actions in advance. 2. Predict the three most likely opponent actions. 3. Select the best action against those three actions. 4. Limited actions for the ZEN Speedrunning League.
DQAI | Individual developer, Vietnam | Python | Duel Q-network Reinforcement Learning AI
LTAI | Individual developer, China | Python | Dual-clip PPO with a novel opponent sampling algorithm based on a payoff matrix
Ruba | Student from Kyoto Sangyo University, Japan | Python | Rule-based + Genetic Algorithm AI
SummerAI | Researcher team from ETRI, Korea | Python | PPO
WinOrGoHome | Individual researcher from Netease Games AI Lab, China | Python | PPO trained against MctsAi in the Speedrunning League and against self-play in the Standard League

• 5 Java entries, 5 Python entries; 4 student entries, 4 individual developer/researcher entries, 2 researcher team entries
• 4 entries from China, 2 from Japan, and 1 each from Korea, Pakistan, Thailand, and Vietnam
• PPO used in 5 entries, EA in 2 entries
Results
• Winner AI: BlackMamba by Peng ZHANG, Guanghao ZHANG, Xuechun WANG, Sijia XU, Shuo SHEN, and Weidong ZHANG (Netease Games AI Lab, China)
• Proximal Policy Optimization (PPO) trained against a weakened MctsAi in the Speedrunning League, and against self-play or previous entries with added noise in the character data in the Standard League.
• Runner-up AI: WinOrGoHome by Weijun Hong (Netease Games AI Lab, China)
• PPO trained against MctsAi in the Speedrunning League and against self-play in the Standard League.
• 3rd Place AI: Thunder2021 by Eita Aoki, an individual developer, Japan (2020 runner-up; winner of the 2016, 2017, 2018, and 2019 competitions)
• 1. Prioritize certain actions in advance. 2. Predict the three most likely opponent actions. 3. Select the best action against those three actions. 4. Limited actions for the ZEN Speedrunning League.
Sample Fights
BlackMamba (P1) vs WinOrGoHome (P2)
 BlackMamba on GARNET tends to use kick actions when facing the opponent. It rarely guards against the opponent's attacks, instead fighting back with attacks of its own.
 BlackMamba on ZEN tends to jump while probing for the opponent's weaknesses, and strings together more continuous attacks when pushing the opponent to the edge of the stage.
 BlackMamba on LUD tends to look for chances to hit the opponent in the air, and also uses jumps to break deadlocks.
Please see the descriptions below
Thank you and see you at CoG 2022 in China
(We plan to add human players for assessment of AI performance)
http://www.ice.ci.ritsumei.ac.jp/~ftgaic/
BlackMamba
An Intelligent Fighter based on Reinforcement Learning
Developers: Peng ZHANG, Guanghao ZHANG, Xuechun WANG, Sijia XU, Shuo SHEN, Weidong ZHANG
Affiliation: Netease Games AI Lab
Outline
BlackMamba is an RL agent trained with Proximal Policy Optimization (PPO). To meet the diversity and richness demands of data sampling, our AI is trained by fighting against historical opponents released for FightingICE and via self-play.
The policy network is a simple six-layer MLP, whose final weights are saved in CSV files.
To improve exploration and balance convergence speed across different opponents, we add an opponent selection mechanism to the training process.
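A CSV-weight MLP policy like the one described above could be loaded and evaluated roughly as follows. This is only an illustration: the layer sizes, file layout, and activation choices are our assumptions, not BlackMamba's actual configuration.

```python
import numpy as np

# Illustrative sketch of a small MLP policy whose weights live in CSV files,
# as BlackMamba's does. Layer sizes and activations are assumptions.
def load_layer(path):
    # One CSV file per weight matrix (assumed layout).
    return np.loadtxt(path, delimiter=",")

def mlp_policy(x, weights, biases):
    """Forward pass: ReLU hidden layers, softmax over the action logits."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)
    logits = x @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()                 # action probabilities

# Usage with random stand-in weights (in practice loaded via load_layer):
rng = np.random.default_rng(0)
sizes = [143, 64, 64, 64, 64, 64, 40]  # obs dim -> hidden layers -> 40 actions
weights = [rng.normal(size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
probs = mlp_policy(rng.normal(size=143), weights, biases)
```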
Training
For the Speedrunning League, we train the model by fighting against MctsAi. To account for the organizers' less powerful machines, we also let our agent fight a weakened MctsAi whose search time is constrained.
[Figure: distributed training loop. Workers play Agent vs. MctsAi matches; rollout data flows into a data buffer; the learner updates the policy and pushes the latest policy back to the workers.]
Training
For the Standard League, we train the model by fighting against historical participants and via self-play. To cope with changes in GARNET and LUD's motion data, we randomly modify the motion data when training the GARNET and LUD models.
[Figure: the same worker/learner loop, with historical workers playing Agent vs. historical participants and self-play workers playing Agent vs. Agent.]
Thanks!
Feel free to contact us:
{zhangguanghao, zhangpeng17, wangxuechun}@corp.netease.com
Fighting Game AI
Competition 2021
AI name: EGGTART
Developer name: Gunt CHANMAS
Affiliation: School of Information Technology, KMUTT
Outline
 Rule-based AI:
 Move forward if distance X > 200
 Perform “CROUCH_FB” when distance X < 250 and distance Y <= 20
 Dodge by
 1. “FORWARD_WALK” when distance Y > 40
 2. “BACK_STEP” when distance Y > 20
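The rules above can be transcribed directly as a decision function. This is our reading of the slide: the rule priority and the STAND_GUARD fallback are assumptions, since the slide does not say what happens when no rule fires.

```python
# Transcription of EGGTART's rules as a decision function (sketch).
# Action names follow FightingICE; rule order and the fallback are assumed.
def eggtart_action(dist_x, dist_y):
    if dist_x < 250 and dist_y <= 20:
        return "CROUCH_FB"     # poke when close and roughly level
    if dist_x > 200:
        return "FORWARD_WALK"  # close the gap
    if dist_y > 40:
        return "FORWARD_WALK"  # dodge under an airborne opponent
    if dist_y > 20:
        return "BACK_STEP"     # dodge by stepping back
    return "STAND_GUARD"       # assumed fallback; not stated on the slide
```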
THANK YOU
Enhanced Rolling Fighting Bot -
ERHEA_PPO_PG
Rongqin Liang(Student)
Affiliation: University of Chinese Academy of Sciences
Yuanheng Zhu
Affiliation: Chinese Academy of Sciences, Institute of
Automation
Dongbin Zhao
Affiliation: Chinese Academy of Sciences, Institute of
Automation
Enhanced Rolling Fighting Bot
• Rolling Fighting Bot is based on the Enhanced Rolling Horizon Evolution Algorithm combined with Proximal Policy Optimization with Elo-based opponent selection. It uses Thunder Bot as a reference, with its valid action set as candidates.
• Base: ERHEA_PI, our 2020 entry.
• New approach:
* Added the PPO algorithm
* Modified ZEN's action set in Speed mode
Feel free to contact me:
Rongqin Liang: liangrongqin2020@ia.ac.cn
AI For FTG AI Competition
AI Name : IBM_AI
Developer’s Name: Ibrahim Khan
Affiliation
 Incoming master's student at the Intelligent Computer Entertainment Laboratory, Ritsumeikan University.
 B.S. in Computer Science from Haripur University, Pakistan.
AI Outline
 Inspired by the sample MctsAi and Zone AI (a previous entry in the competition).
 A simple and straightforward AI with a lot of room for improvement.
 Chooses attacks and movements at random, guided by a few parameters.
 No machine learning is used.
Thunder2021
Eita Aoki
(I got my first degree at Nagoya University in 2013)
Outline
 Base: ReiwaThunder, my 2020 entry.
 New approach
・Limited actions for ZEN in Speedrunning mode
 Testing
・Generated 30 Motion.csv files for GARNET and LUD.
・Using the generated Motion.csv files, played against other AIs and adjusted the jump timing and the filter on the moves used.
DQAI
FightingICE Competition 2021
Thai Nguyen Van
Nguyenvanthai0212@gmail.com
Introduction
• AI Name: DQAI
• Duel Q-network Reinforcement Learning AI
• Developers & Affiliation
• Thai Nguyen Van (nguyenvanthai0212@gmail.com)
• AI Development Language
• Python 3.5
AI Outline
• Method: Duel Q-network Reinforcement Learning
• RL Configuration
• Duel Q-network learning algorithm
• Trained against MctsAi
LTAI
FightingICE Competition 2021
AI Outline
• Based on SpringAI
• Reinforcement learning with sampling from an opponent pool:
• Uses an improved version of PPO: dual-clip PPO
• PPO: Proximal Policy Optimization Algorithms
• The opponent pool consists of two parts:
• Some Java-based AIs: HaibuAI, JayBot_GM, MctsAi, UtalFighter
• Historical versions of the training model
• A novel opponent sampling algorithm based on a payoff matrix
AI Test (on Windows 10)
• After extracting the zip file, copy the folder "LTAI" into ${FTG4.50}/python
• Open a new terminal, make sure the current path is ${FTG4.50}, and run:
• java -Xms1024m -Xmx1024m -cp "./FightingICE.jar;./lib/lwjgl/*;./lib/natives/windows/*;./lib/*;./data/ai/*" Main --py4j --limithp 400 400
• Open another terminal, make sure the current path is ${FTG4.50}/python/LTAI, and run:
• python Main_PyAIvsJavaAI.py
RUBA
Developer: Jun Tanabe
Kyoto Sangyo University, Japan
E-mail: baseball.junjun@gmail.com
Outline
 Rule-base + Genetic Algorithm
 Rule-base → RUBA
Rule
 Rule 1: AIR or GROUND
 The states of P1 and P2 are divided into four categories (each player is either in the air or on the ground).
 Rule 2: My energy level
 0, 0~50, 50~150, 150~
 Rule 3: Distance between P1 and P2
 ~100, ~150, ~200, ~400
Genetic Algorithm
 Crossover is uniform crossover.
 At end of a round, I get reward.(Fig1)
Fig1. reward
SummerAI
FightingICE Competition 2021
ETRI
dooroomie@etri.re.kr
Details
• AI Name : SummerAI
• Developers & Affiliation
• Dae-Wook Kim (dooroomie@etri.re.kr) and Teammates
• Electronics and Telecommunications Research Institute (ETRI)
• Daejeon, Korea
• AI Development Language
• Python 3.6
AI Outline
• Method
• Reinforcement Learning
• Proximal Policy Optimization Algorithms (PPO)
Network Structure
• Self-attention (x2) over per-feature inputs for both the agent and the opponent: HP/energy, movement, action (56), state/frame, and projectile info, plus the game time.
Network Structure
• Self-attention: each input feature (my/opponent movement, action, state, projectile, HP/energy, and game time) is projected into query, key, and value vectors; attention is computed as softmax(QK^T / sqrt(d)) V, producing the action and value outputs.
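Scaled dot-product self-attention takes only a few lines of numpy. The sketch below shows the computation itself; the token count, embedding size, and single-block setup are illustrative (the actual network stacks two blocks and adds separate action and value heads).

```python
import numpy as np

# Scaled dot-product self-attention, softmax(Q K^T / sqrt(d)) V, applied to
# SummerAI-style feature tokens (one row per feature: my/opponent HP/energy,
# movement, action, state, projectile, plus game time). Sizes are illustrative.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(11, 16))  # 11 feature tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)  # same shape as X: one output per token
```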
How to Test
• After extracting the zip file, copy the files into the FTG4.50/python directory
• Open a terminal and run the FTG simulator
• Open a new terminal, go to the python directory, and run the python file
WinOrGoHome
Developer: Weijun Hong
Affiliation: NetEase Games AI Lab, GuangZhou, China
Email: hongweijun@corp.netease.com
2021/07/29
Overview
• WinOrGoHome is a Python agent built entirely with deep reinforcement learning and self-play.
• Only numpy & py4j are required during inference; the policy is modeled as a simple 3-layer MLP.
• It uses a slightly modified gym API based on [1], with a reduced action space, an enlarged 282-dim observation space, a more training-friendly API, and some fault-tolerant mechanisms for distributed training.
• It is trained by a distributed asynchronous version of PPO [2].
• Six stand-alone models are trained, one per character for each track (i.e., league in FTGAIC):
• We use self-play to train the models for the standard track, with league training to enhance the diversity of opponent strategies [3].
• The models for the speed-run track are trained entirely against MctsAi (LUD is fine-tuned from the self-play model).
[1] https://github.com/TeamFightingICE/Gym-FightingICE
[2] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint
arXiv:1707.06347.
[3] Vinyals, O., Babuschkin, I., Czarnecki, W.M. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 2019.
Training Framework
• Our training framework is designed after SEED [4], which features centralized inference.
• It is an asynchronous architecture with great flexibility for large-scale training.
• The controller collects outcomes such as win rates from all environments, and periodically switches the training opponents or saves the current model as a new opponent.
[4] Espeholt, L., Marinier, R., Stanczyk, P., Wang, K., & Michalski, M. (2019). SEED RL: Scalable and efficient deep-RL with accelerated central inference. arXiv preprint arXiv:1910.06591.
Feature Engineering
[5] Ye, D., Chen, G., Zhang, W., Chen, S., Yuan, B., Liu, B., ... & Liu, W. (2020). Towards playing full MOBA games with deep reinforcement learning. arXiv preprint arXiv:2011.12692.
• We extend the original 143-dim vector in the Gym-FightingICE env with more features:
• Relative speed/position/HP
• Projectile info such as speed, hit energy, impact distance, etc.
• Opponent's action distribution within a round
• The action space is also changed:
• Only 41~42 useful actions are kept
• The effect frames of STAND_GUARD and CROUCH_GUARD are extended
• Reward:
• For the standard track, the HP difference of both players between the previous and current frames is used
• For the speed-run track, only the self-HP diff is used, plus an additional reward w.r.t. the remaining time at the end of each game
• Multi-head value [5] is introduced to reduce the variance of value estimation, though all heads use the same discount factor
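The per-frame reward definitions above can be sketched as follows. The helper names and the terminal-bonus scaling are ours; the slide does not give exact coefficients, so treat this as a reading of the description rather than the actual implementation.

```python
# Sketch of the per-frame rewards described above (helper names are ours;
# the exact scaling WinOrGoHome uses is not given on the slide).
def standard_reward(prev_self_hp, self_hp, prev_opp_hp, opp_hp):
    """Standard track: HP difference between the previous and current
    frames of both players (damage dealt minus damage taken)."""
    return (prev_opp_hp - opp_hp) - (prev_self_hp - self_hp)

def speedrun_reward(prev_self_hp, self_hp, done, remaining_time, bonus=1.0):
    """Speed-run track: only the self-HP diff, plus a terminal bonus
    w.r.t. the remaining time when the game ends."""
    r = -(prev_self_hp - self_hp)
    if done:
        r += bonus * remaining_time  # ending faster yields a larger bonus
    return r
```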
Opponent Pool
• First, we would like to thank the other teams from past years' competitions, including TeraThunder, ButcherPudge, EmcmAi, SpringAI, CYR_AI, ReiwaThunder, Thunder, FalzAI, MctsAi, SimpleAI, LGIST_Bot, and Machete. To improve opponent diversity during self-play, we add these AIs to our initial opponent pool with heuristic sampling rates based on both their strength and style.
• During self-play, for each character, we train our AI against agents sampled from the opponent pool, as well as WinOrGoHome's past generations. Each generation is trained until convergence and then added to the pool as a new opponent.
• After the first 3~5 generations, we train an exploiter every second generation that plays against the previous generation, in order to find its weaknesses.
• The whole training procedure ends with around 10 generations, and the final opponent pool contains about 12 past AIs, 6 or 7 self-play AIs, and 3 or 4 exploiter AIs. The last self-play generation is the one submitted. (GARNET is trained for fewer than 10 generations because it is harder to converge and we ran out of time.)

More Related Content

What's hot

【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜
【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜
【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜Unity Technologies Japan K.K.
 
BLS署名の実装とその応用
BLS署名の実装とその応用BLS署名の実装とその応用
BLS署名の実装とその応用MITSUNARI Shigeo
 
AIと最適化の違いをうっかり聞いてしまう前に
AIと最適化の違いをうっかり聞いてしまう前にAIと最適化の違いをうっかり聞いてしまう前に
AIと最適化の違いをうっかり聞いてしまう前にMonta Yashi
 
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!モノビット エンジン
 
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature FilterMorpho, Inc.
 
実践イカパケット解析
実践イカパケット解析実践イカパケット解析
実践イカパケット解析Yuki Mizuno
 
ゲームAI製作のためのワークショップ(III)
ゲームAI製作のためのワークショップ(III)ゲームAI製作のためのワークショップ(III)
ゲームAI製作のためのワークショップ(III)Youichiro Miyake
 
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task LearningDeep Learning JP
 
SIMDで整数除算
SIMDで整数除算SIMDで整数除算
SIMDで整数除算shobomaru
 
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならねUnity Technologies Japan K.K.
 
探索と活用の戦略 ベイズ最適化と多腕バンディット
探索と活用の戦略 ベイズ最適化と多腕バンディット探索と活用の戦略 ベイズ最適化と多腕バンディット
探索と活用の戦略 ベイズ最適化と多腕バンディットH Okazaki
 
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~Manabu Murakami
 
簡易版AutoML+OptunaによるHyperparams Tuning
簡易版AutoML+OptunaによるHyperparams Tuning簡易版AutoML+OptunaによるHyperparams Tuning
簡易版AutoML+OptunaによるHyperparams TuningMasaharu Kinoshita
 
MySQLの限界に挑戦する
MySQLの限界に挑戦するMySQLの限界に挑戦する
MySQLの限界に挑戦するMeiji Kimura
 
ゆらぐヒト脳波データからどのように集中度合いを可視化するか
ゆらぐヒト脳波データからどのように集中度合いを可視化するかゆらぐヒト脳波データからどのように集中度合いを可視化するか
ゆらぐヒト脳波データからどのように集中度合いを可視化するかKenyu Uehara
 
1076: CUDAデバッグ・プロファイリング入門
1076: CUDAデバッグ・プロファイリング入門1076: CUDAデバッグ・プロファイリング入門
1076: CUDAデバッグ・プロファイリング入門NVIDIA Japan
 
モノビットエンジン と AWS と クラウドパッケージで 最強のリアルタイム・マルチプレイ環境を構築&運用
モノビットエンジン と AWS と クラウドパッケージで最強のリアルタイム・マルチプレイ環境を構築&運用モノビットエンジン と AWS と クラウドパッケージで最強のリアルタイム・マルチプレイ環境を構築&運用
モノビットエンジン と AWS と クラウドパッケージで 最強のリアルタイム・マルチプレイ環境を構築&運用モノビット エンジン
 
ディープラーニングで音ゲー譜面を自動作成!
ディープラーニングで音ゲー譜面を自動作成!ディープラーニングで音ゲー譜面を自動作成!
ディープラーニングで音ゲー譜面を自動作成!KLab Inc. / Tech
 
DSIRNLP #3 LZ4 の速さの秘密に迫ってみる
DSIRNLP #3 LZ4 の速さの秘密に迫ってみるDSIRNLP #3 LZ4 の速さの秘密に迫ってみる
DSIRNLP #3 LZ4 の速さの秘密に迫ってみるAtsushi KOMIYA
 

What's hot (20)

【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜
【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜
【Unity道場スペシャル 2018京都】今日からはじめる。ユニティちゃんトゥーンシェーダー2.0〜Unity道場カラー黒帯スペシャル〜
 
BLS署名の実装とその応用
BLS署名の実装とその応用BLS署名の実装とその応用
BLS署名の実装とその応用
 
AIと最適化の違いをうっかり聞いてしまう前に
AIと最適化の違いをうっかり聞いてしまう前にAIと最適化の違いをうっかり聞いてしまう前に
AIと最適化の違いをうっかり聞いてしまう前に
 
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!
【CEDEC2019 】5G時代に対応した『モノビットエンジン5G』を初公開! HoloLens対応した通信クラウド最新情報も!
 
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter
(文献紹介)エッジ保存フィルタ:Side Window Filter, Curvature Filter
 
実践イカパケット解析
実践イカパケット解析実践イカパケット解析
実践イカパケット解析
 
ゲームAI製作のためのワークショップ(III)
ゲームAI製作のためのワークショップ(III)ゲームAI製作のためのワークショップ(III)
ゲームAI製作のためのワークショップ(III)
 
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
[DL輪読会]AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning
 
SIMDで整数除算
SIMDで整数除算SIMDで整数除算
SIMDで整数除算
 
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね
【Unity道場スペシャル 2017札幌】カッコいい文字を使おう、そうtext meshならね
 
探索と活用の戦略 ベイズ最適化と多腕バンディット
探索と活用の戦略 ベイズ最適化と多腕バンディット探索と活用の戦略 ベイズ最適化と多腕バンディット
探索と活用の戦略 ベイズ最適化と多腕バンディット
 
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~
テクニカルアーティストの仕事とスキル ~パイプライン系TAの事例~
 
簡易版AutoML+OptunaによるHyperparams Tuning
簡易版AutoML+OptunaによるHyperparams Tuning簡易版AutoML+OptunaによるHyperparams Tuning
簡易版AutoML+OptunaによるHyperparams Tuning
 
MySQLの限界に挑戦する
MySQLの限界に挑戦するMySQLの限界に挑戦する
MySQLの限界に挑戦する
 
ゆらぐヒト脳波データからどのように集中度合いを可視化するか
ゆらぐヒト脳波データからどのように集中度合いを可視化するかゆらぐヒト脳波データからどのように集中度合いを可視化するか
ゆらぐヒト脳波データからどのように集中度合いを可視化するか
 
1076: CUDAデバッグ・プロファイリング入門
1076: CUDAデバッグ・プロファイリング入門1076: CUDAデバッグ・プロファイリング入門
1076: CUDAデバッグ・プロファイリング入門
 
モノビットエンジン と AWS と クラウドパッケージで 最強のリアルタイム・マルチプレイ環境を構築&運用
モノビットエンジン と AWS と クラウドパッケージで最強のリアルタイム・マルチプレイ環境を構築&運用モノビットエンジン と AWS と クラウドパッケージで最強のリアルタイム・マルチプレイ環境を構築&運用
モノビットエンジン と AWS と クラウドパッケージで 最強のリアルタイム・マルチプレイ環境を構築&運用
 
DDoS対処の戦術と戦略
DDoS対処の戦術と戦略DDoS対処の戦術と戦略
DDoS対処の戦術と戦略
 
ディープラーニングで音ゲー譜面を自動作成!
ディープラーニングで音ゲー譜面を自動作成!ディープラーニングで音ゲー譜面を自動作成!
ディープラーニングで音ゲー譜面を自動作成!
 
DSIRNLP #3 LZ4 の速さの秘密に迫ってみる
DSIRNLP #3 LZ4 の速さの秘密に迫ってみるDSIRNLP #3 LZ4 の速さの秘密に迫ってみる
DSIRNLP #3 LZ4 の速さの秘密に迫ってみる
 

Similar to 2021 Fighting Game AI Competition

2020 Fighting Game AI Competition
2020 Fighting Game AI Competition2020 Fighting Game AI Competition
2020 Fighting Game AI Competitionftgaic
 
2019 Fighting Game AI Competition
2019 Fighting Game AI Competition2019 Fighting Game AI Competition
2019 Fighting Game AI Competitionftgaic
 
2018 Fighting Game AI Competition
2018 Fighting Game AI Competition 2018 Fighting Game AI Competition
2018 Fighting Game AI Competition ftgaic
 
2015 Fighting Game Artificial Intelligence Competition
2015 Fighting Game Artificial Intelligence Competition2015 Fighting Game Artificial Intelligence Competition
2015 Fighting Game Artificial Intelligence Competitionftgaic
 
2017 Fighting Game AI Competition
2017 Fighting Game AI Competition2017 Fighting Game AI Competition
2017 Fighting Game AI Competitionftgaic
 
Applying AI in Games (GDC2019)
Applying AI in Games (GDC2019)Applying AI in Games (GDC2019)
Applying AI in Games (GDC2019)Jun Okumura
 
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...Deren Lei
 
INTRODUCTION
INTRODUCTIONINTRODUCTION
INTRODUCTIONbutest
 
INTRODUCTION
INTRODUCTIONINTRODUCTION
INTRODUCTIONbutest
 
IRJET - Colt: The Code Series Game for Learning Program Logic through Rea...
IRJET -  	  Colt: The Code Series Game for Learning Program Logic through Rea...IRJET -  	  Colt: The Code Series Game for Learning Program Logic through Rea...
IRJET - Colt: The Code Series Game for Learning Program Logic through Rea...IRJET Journal
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Pier Luca Lanzi
 
Individual Project - Final Report
Individual Project - Final ReportIndividual Project - Final Report
Individual Project - Final ReportSteven Hooper
 
Teaching AI through retro gaming
Teaching AI through retro gamingTeaching AI through retro gaming
Teaching AI through retro gamingDiogo Gomes
 
2016 Fighting Game Artificial Intelligence Competition
2016 Fighting Game Artificial Intelligence Competition2016 Fighting Game Artificial Intelligence Competition
2016 Fighting Game Artificial Intelligence Competitionftgaic
 
Gschwind, PowerAI: A Co-Optimized Software Stack for AI on Power
Gschwind, PowerAI: A Co-Optimized Software Stack for AI on PowerGschwind, PowerAI: A Co-Optimized Software Stack for AI on Power
Gschwind, PowerAI: A Co-Optimized Software Stack for AI on PowerMichael Gschwind
 
Anime Generation with AI
Anime Generation with AIAnime Generation with AI
Anime Generation with AIKoichi Hamada
 
Development of a secure routing protocol using game theory model in mobile ad...
Development of a secure routing protocol using game theory model in mobile ad...Development of a secure routing protocol using game theory model in mobile ad...
Development of a secure routing protocol using game theory model in mobile ad...LogicMindtech Nologies
 
Development of a secure routing protocol using game theory model in mobile ad...
Development of a secure routing protocol using game theory model in mobile ad...Development of a secure routing protocol using game theory model in mobile ad...
Development of a secure routing protocol using game theory model in mobile ad...LogicMindtech Nologies
 
Game Design as an Intro to Computer Science (Meaningful Play 2014)
Game Design as an Intro to Computer Science (Meaningful Play 2014)Game Design as an Intro to Computer Science (Meaningful Play 2014)
Game Design as an Intro to Computer Science (Meaningful Play 2014)marksuter
 
This is a group assignments, I am assigned to do only the SWOT AND.docx
This is a group assignments, I am assigned to do only the SWOT AND.docxThis is a group assignments, I am assigned to do only the SWOT AND.docx
This is a group assignments, I am assigned to do only the SWOT AND.docxjuliennehar
 

Similar to 2021 Fighting Game AI Competition (20)

2020 Fighting Game AI Competition
2020 Fighting Game AI Competition2020 Fighting Game AI Competition
2020 Fighting Game AI Competition
 
2019 Fighting Game AI Competition
2019 Fighting Game AI Competition2019 Fighting Game AI Competition
2019 Fighting Game AI Competition
 
2018 Fighting Game AI Competition
2018 Fighting Game AI Competition 2018 Fighting Game AI Competition
2018 Fighting Game AI Competition
 
2015 Fighting Game Artificial Intelligence Competition
2015 Fighting Game Artificial Intelligence Competition2015 Fighting Game Artificial Intelligence Competition
2015 Fighting Game Artificial Intelligence Competition
 
2017 Fighting Game AI Competition
2017 Fighting Game AI Competition2017 Fighting Game AI Competition
2017 Fighting Game AI Competition
 
Applying AI in Games (GDC2019)
Applying AI in Games (GDC2019)Applying AI in Games (GDC2019)
Applying AI in Games (GDC2019)
 
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...
Learning to Reason in Round-based Games: Multi-task Sequence Generation for P...
 
INTRODUCTION
INTRODUCTIONINTRODUCTION
INTRODUCTION
 
INTRODUCTION
INTRODUCTIONINTRODUCTION
INTRODUCTION
 
IRJET - Colt: The Code Series Game for Learning Program Logic through Rea...
IRJET -  	  Colt: The Code Series Game for Learning Program Logic through Rea...IRJET -  	  Colt: The Code Series Game for Learning Program Logic through Rea...
IRJET - Colt: The Code Series Game for Learning Program Logic through Rea...
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
Individual Project - Final Report
Individual Project - Final ReportIndividual Project - Final Report
Individual Project - Final Report
 
Teaching AI through retro gaming
Teaching AI through retro gamingTeaching AI through retro gaming
Teaching AI through retro gaming
 
2016 Fighting Game Artificial Intelligence Competition
2016 Fighting Game Artificial Intelligence Competition2016 Fighting Game Artificial Intelligence Competition
2016 Fighting Game Artificial Intelligence Competition
 
2021 Fighting Game AI Competition

  • 1. 2021 Fighting Game AI Competition. Keita Fujimaki, lead programmer; Xincheng Dai, co-lead programmer; Roman Savchyn, tester, etc.; Hideyasu Inoue, advisor; Pujana Paliyawan, vice director; Ruck Thawonmas, director. Team FightingICE, Intelligent Computer Entertainment Laboratory, Ritsumeikan University, Japan. Game resources are from The Rumble Fish 2, courtesy of Dimps Corporation. http://www.ice.ci.ritsumei.ac.jp/~ftgaic/ CoG 2021: Aug 16-20, 2021. Updated on August 28, 2021
  • 3. FightingICE  A fighting game AI platform viable for development by a small team, written in Java and also wrapped for Python  First of its kind since 2013 & CIG 2014, developed from scratch without using game ROM data  Aims: for research on general fighting game AIs  Strong against any unseen opponents (AIs or players), character types, and play modes. http://www.ice.ci.ritsumei.ac.jp/~ftgaic/ Game resources are from The Rumble Fish 2, courtesy of Dimps Corporation. CoG 2021: Aug 16-20, 2021
  • 4. FightingICE's Main Features  Gives the agent 16.67 ms of response time (60 FPS) to choose its action out of 40 actions  Provides the latest game state with a delay of 15 frames, to simulate human response time  Equipped with a forward model, a method for accessing the screen information, and an OpenAI Gym API  Why FightingICE?  Generalization against different opponents of unknown behaviors is challenging for DRL  60 FPS plus the introduced delay is challenging for tree search. CoG 2021: Aug 16-20, 2021
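The 15-frame delay is easy to picture as a bounded buffer; here is a minimal Python sketch (illustrative only, not FightingICE's actual implementation; the class and names are mine):

```python
from collections import deque

DELAY = 15  # frames of delay FightingICE applies to the observed game state

class DelayedStateBuffer:
    """The agent only ever observes the state from DELAY frames ago,
    simulating human reaction time at 60 FPS."""
    def __init__(self, delay=DELAY):
        self.buffer = deque(maxlen=delay + 1)

    def push(self, state):
        self.buffer.append(state)

    def observed(self):
        # The oldest retained state, i.e. `delay` frames behind the newest.
        return self.buffer[0]

buf = DelayedStateBuffer()
for frame in range(60):           # one second of play at 60 FPS
    buf.push({"frame": frame})
print(buf.observed()["frame"])    # → 44, i.e. 15 frames behind frame 59
```

The agent plans against this stale observation, which is why the forward model matters: it lets the agent roll the delayed state forward before searching.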
  • 5. Recent Publications Using FightingICE by Other Groups since CoG 2020  Rongqin Liang, Yuanheng Zhu, Zhentao Tang, Mu Yang and Xiaolong Zhu, "Proximal Policy Optimization with Elo-based Opponent Selection and Combination with Enhanced Rolling Horizon Evolution Algorithm," 2021 IEEE Conference on Games, August 17-20, 2021.  Tianyu Chen, Florian Richoux, Javier M. Torres, Katsumi Inoue, "Interpretable Utility-based Models Applied to the FightingICE Platform," 2021 IEEE Conference on Games, August 17-20, 2021.  Man-Je Kim, Jun Suk Kim, Sungjin James Kim, Min-jung Kim, Chang Wook Ahn, "Genetic state-grouping algorithm for deep reinforcement learning," Expert Systems with Applications, 15 December 2020.  Xenija Neufeld, "Long-Term Planning and Reactive Execution in Highly Dynamic Environments," Doctoral thesis, Otto-von-Guericke-Universität Magdeburg, Dec. 2020.  Zhentao Tang, Yuanheng Zhu, Dongbin Zhao, and Simon M. Lucas, "Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning," IEEE Transactions on Games, 2020.  Deng Shida, Takeshi Ito, "Fighting game AI with dynamic difficulty adjustment to make it fun to play against," Proc. of the 25th Game Programming Workshop 2020, pp. 58-61, Nov. 2020. (in Japanese)  Yuanheng Zhu, Dongbin Zhao, "Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games," IEEE Transactions on Neural Networks and Learning Systems, Nov. 2020. (Early Access)  Mohammad Farhan Ferdous, "Privacy Preservation Algorithms on Cryptography for AI as Human-like Robotic Player for Fighting Game Using Rule-Based Method," Cyber Defense Mechanisms, pp. 185-196, Sep. 2020.  MJ Kim, JH Lee, CW Ahn, "Genetic Optimizing Method for Real-time Monte Carlo Tree Search Problem," Proc. of the 9th International Conference on Smart Media and Applications, Sep. 2020. CoG 2021: Aug 16-20, 2021
  • 7. Contest Rules  Standard and Speedrunning leagues, each using three characters: ZEN, GARNET, and LUD (GARNET and LUD's character data are not revealed in advance: unknown characters)  Standard: the winner of a round is the AI whose HP is above zero when its opponent's HP reaches zero (all AIs' initial HP = 400)  Speedrunning: for a given character type, the winner is the AI with the shortest average time to beat our sample MctsAi (all AIs' initial HP = 400). CoG 2021: Aug 16-20, 2021
  • 8. Summary of 10 Entries (AI / affiliation / language / description):
 BlackMamba / researcher team from Netease Games AI Lab, China / Java / PPO trained against a weakened MctsAi in the Speedrunning League, and against self-play or previous entries with added noise in the character data in the Standard League
 EggTart / student from KMUTT, Thailand / Java / Rule-based AI
 ERHEA_PPO_PG / student team from University of Chinese Academy of Sciences, China / Java / Enhanced Rolling Horizon Evolution Algorithm combined with Proximal Policy Optimization (PPO) with Elo-based opponent selection
 IBM_AI / graduate of Haripur University, Pakistan / Java / Rule-based AI
 Thunder2021 / individual developer, Japan / Java / 1. Prioritize certain actions in advance. 2. Predict the three most likely opponent actions. 3. Select the best action against those three actions. 4. Limited action set for ZEN in the Speedrunning League.
 DQAI / individual developer, Vietnam / Python / Duel Q-network reinforcement learning AI
 LTAI / individual developer, China / Python / Dual-clip PPO with a novel opponent sampling algorithm based on the payoff matrix
 Ruba / student from Kyoto Sangyo University, Japan / Python / Rule-based + Genetic Algorithm AI
 SummerAI / researcher team from ETRI, Korea / Python / PPO
 WinOrGoHome / individual researcher from Netease Games AI Lab, China / Python / PPO trained against MctsAi in the Speedrunning League and via self-play in the Standard League
• 5 Java entries, 5 Python entries; 4 student entries, 4 individual developer/researcher entries, 2 researcher team entries
• 4 entries from China, 2 from Japan, and 1 each from Korea, Pakistan, Thailand, and Vietnam
• PPO used in 5 entries, EA in 2 entries
CoG 2021: Aug 16-20, 2021
  • 10. Results • Winner AI: BlackMamba by Peng ZHANG, Guanghao ZHANG, Xuechun WANG, Sijia XU, Shuo SHEN, and Weidong ZHANG (Netease Games AI Lab, China) • Proximal Policy Optimization (PPO) trained against a weakened MctsAi in the Speedrunning League, and against self-play or previous entries with added noise in the character data in the Standard League. • Runner-up AI: WinOrGoHome by Weijun Hong (Netease Games AI Lab, China) • PPO trained against MctsAi in the Speedrunning League and via self-play in the Standard League. • 3rd Place AI: Thunder2021 by Eita Aoki, an individual developer, Japan (2020 runner-up; winner of the 2016, 2017, 2018, and 2019 competitions) • 1. Prioritize certain actions in advance. 2. Predict the three most likely opponent actions. 3. Select the best action against those three actions. 4. Limited action set for ZEN in the Speedrunning League. CoG 2021: Aug 16-20, 2021. Updated on August 28, 2021
  • 11. Sample Fights: BlackMamba (P1) vs WinOrGoHome (P2). Please see the descriptions below.  BlackMamba on GARNET tends to use kick actions when facing the opponent. It does not guard against the opponent's attacks but fights back with attacks of its own.  BlackMamba on ZEN tends to jump while probing for the opponent's weakness, and strings together more continuous attacks when pushing the opponent to the edge.  BlackMamba on LUD tends to look for chances to hit the opponent in the air. It also uses jumps to break deadlocks. CoG 2021: Aug 16-20, 2021
  • 12. Thank you, and see you at CoG 2022 in China (we plan to add human players for assessment of AI performance). http://www.ice.ci.ritsumei.ac.jp/~ftgaic/ CoG 2021: Aug 16-20, 2021
  • 13. BlackMamba: An Intelligent Fighter Based on Reinforcement Learning. Developers: Guanghao ZHANG, Xuechun WANG, Peng ZHANG, Sijia XU, Shuo SHEN, Weidong ZHANG. Affiliation: Netease Games AI Lab
  • 14. Outline: BlackMamba is an RL agent trained with Proximal Policy Optimization. To meet the demand for diverse and rich sampled data, our AI is trained by fighting historical opponents revealed in past FightingICE competitions and by self-play. The policy network we use is a simple six-layer MLP, and its weights are finally saved in CSV files. To improve exploration and balance convergence speed across different opponents, we add an opponent selection mechanism to the training process.
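The slide says the six-layer MLP's weights end up in CSV files. A hedged sketch of how one such layer might be stored and evaluated; the CSV layout (one row per output unit, bias in the last column) and all names here are my assumptions, not BlackMamba's actual format:

```python
import csv
import io

# Hypothetical CSV for one layer: each row holds one unit's input weights,
# with the bias in the last column.
CSV_WEIGHTS = "0.5,-0.2,0.1\n-0.3,0.8,0.0\n"

def load_layer(text):
    rows = [[float(x) for x in r] for r in csv.reader(io.StringIO(text)) if r]
    return [(r[:-1], r[-1]) for r in rows]   # (weights, bias) per unit

def relu(x):
    return x if x > 0 else 0.0

def forward(layer, inp):
    # One dense layer with ReLU: relu(W @ inp + b), written out by hand.
    return [relu(sum(w * v for w, v in zip(ws, inp)) + b) for ws, b in layer]

layer = load_layer(CSV_WEIGHTS)
out = forward(layer, [1.0, 2.0])
print(out)   # ≈ [0.2, 1.3]
```

Saving weights to CSV lets the submitted Java/Python agent run inference with no deep-learning framework installed, which matters on the organizers' tournament machines.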
  • 15. Training: For the Speedrunning league, we train the model by fighting MctsAi. Considering the organizers' lower-spec machines, we also have our agent fight a weakened version of MctsAi whose search time is constrained. [Diagram: workers run Agent-vs-MctsAi fights and push rollout data to a data buffer; the learner updates the policy and sends the latest policy back to the workers.]
  • 16. Training: For the Standard league, we train the model by fighting historical participants and by self-play. To cope with changes to GARNET and LUD's motion data, we randomly modify the motion data when training GARNET and LUD's models. [Diagram: historical workers run Agent-vs-historical-participant fights and self-play workers run Agent-vs-Agent fights; rollout data flows to a data buffer, and the learner sends the latest policy back to the workers.]
  • 17. Thanks! Feel free to contact us: {zhangguanghao, zhangpeng17, wangxuechun}@corp.netease.com
  • 18. Fighting Game AI Competition 2021 AI name: EGGTART Developer name: Gunt CHANMAS Affiliation: School of Information Technology, KMUTT
  • 19. Outline  Rule-based AI:  Move forward if distance X > 200  Perform “CROUCH_FB” when distance X < 250 and distance Y <= 20  Dodge by  1. “FORWARD_WALK” when distance Y > 40  2. “BACK_STEP” when distance Y > 20 THANK YOU
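EggTart's rules above can be sketched as a small decision function. Note that the slide does not state the priority among overlapping conditions, so the ordering and the fallback action below are my assumptions:

```python
# Hedged sketch of EggTart's rule set as listed on the slide.
# Rule priority and the STAND_GUARD fallback are my assumptions.
def choose_action(dist_x, dist_y):
    if dist_y > 40:
        return "FORWARD_WALK"        # dodge when vertically far apart
    if dist_y > 20:
        return "BACK_STEP"           # dodge when moderately separated
    if dist_x < 250 and dist_y <= 20:
        return "CROUCH_FB"           # attack in range
    if dist_x > 200:
        return "FORWARD_WALK"        # close the distance
    return "STAND_GUARD"             # hypothetical fallback, not on the slide

print(choose_action(100, 0))   # → CROUCH_FB
```

Rule-based entries like this trade adaptability for a guaranteed sub-frame decision time, which is one viable strategy under the 16.67 ms budget.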
  • 20. Enhanced Rolling Fighting Bot - ERHEA_PPO_PG. Rongqin Liang (student), University of Chinese Academy of Sciences; Yuanheng Zhu, Institute of Automation, Chinese Academy of Sciences; Dongbin Zhao, Institute of Automation, Chinese Academy of Sciences
  • 21. Enhanced Rolling Fighting Bot • The Rolling Fighting Bot is based on the Enhanced Rolling Horizon Evolution Algorithm combined with Proximal Policy Optimization with Elo-based opponent selection. It uses the Thunder bot as a reference, with the valid action set as candidates. • Base: ERHEA_PI, which we made in 2020. • New approach: * Add the PPO algorithm * Modify ZEN's action set in Speed mode
  • 22. Feel free to contact me, Rongqin Liang: liangrongqin2020@ia.ac.cn
  • 23. AI For FTG AI Competition AI Name : IBM_AI Developer’s Name: Ibrahim Khan
  • 24. Affiliation  Incoming master's student at the Intelligent Computer Entertainment Laboratory, Ritsumeikan University.  B.S. in Computer Science from Haripur University, Pakistan.  From Pakistan.
  • 25. AI Outline  The AI is inspired by the MCTS AI and Zone AI (a previous entry in the competition).  A simple and straightforward AI with a lot of room for improvement.  Chooses attacks and movements at random, guided by some parameters.  No use of machine learning.
  • 26. Thunder2021 Eita Aoki (I got my first degree at Nagoya University in 2013)
  • 27. Outline  Base: ReiwaThunder, which I made in 2020.  New approach: limited actions for ZEN in Speed mode.  Test: generate 30 Motion.csv files for GARNET and LUD; using the generated Motion.csv files, play against other AIs and adjust the jump timing and the filter on the moves used.
  • 28. DQAI FightingICE Competition 2021 Thai Nguyen Van Nguyenvanthai0212@gmail.com
  • 29. Introduction • AI Name: DQAI • Duel Q-network Reinforcement Learning AI • Developers & Affiliation • Thai Nguyen Van (nguyenvanthai0212@gmail.com) • AI Development Language • Python 3.5
  • 30. AI Outline • Method: Double Q-network reinforcement learning • RL configuration • Duel Q-network learning algorithm • Trained against the MCTS AI
  • 32. AI OUTLINE • Based on SpringAI • Reinforcement learning, sampling from an opponent pool: • Uses an improved version of PPO, dual-clip PPO • PPO: Proximal Policy Optimization • The opponent pool consists of two parts: • Some Java-based AIs: HaibuAI, JayBot_GM, MctsAi, UtalFighter • Historical versions of the training model • A novel opponent sampling algorithm based on the payoff matrix
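Dual-clip PPO modifies the standard PPO objective by lower-bounding it when the advantage is negative, so a hugely off-policy probability ratio cannot produce an unbounded penalty. A minimal scalar sketch (the epsilon and dual-clip constant c are illustrative values, not LTAI's actual hyperparameters):

```python
# Scalar sketch of the dual-clip PPO objective for one (ratio, advantage) pair.
def dual_clip_ppo_obj(ratio, adv, eps=0.2, c=3.0):
    clipped = max(min(ratio, 1 + eps), 1 - eps)
    standard = min(ratio * adv, clipped * adv)   # vanilla PPO clipped objective
    if adv >= 0:
        return standard
    # Dual clip: for negative advantages, bound the objective below by c * adv.
    return max(standard, c * adv)

print(dual_clip_ppo_obj(10.0, -1.0))   # → -3.0 (dual clip caps the penalty)
```

With ratio = 10 and advantage = -1, vanilla PPO would yield -10; the dual clip limits it to c * adv = -3, stabilizing updates on strongly off-policy samples.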
  • 33. AI TEST (on Windows 10) • After extracting the zip file, copy the folder "LTAI" into the path ${FTG4.50}/python • Open a new terminal, make sure the current path is ${FTG4.50}, and run: • java -Xms1024m -Xmx1024m -cp "./FightingICE.jar;./lib/lwjgl/*;./lib/natives/windows/*;./lib/*;./data/ai/*" Main --py4j --limithp 400 400 • Open another terminal, make sure the current path is ${FTG4.50}/python/LTAI, and run: • python Main_PyAIvsJavaAI.py
  • 34. RUBA. Developer: Jun Tanabe, Kyoto Sangyo University, Japan. E-mail: baseball.junjun@gmail.com
  • 35. Outline  Rule-based + Genetic Algorithm  Rule-based → RUBA
  • 36. Rule  Rule1: AIR or GROUND  I divided states of p1 and p2 into four categories.  Rule2: My energy level  0, 0~50, 50~150, 150~  Rule3: Distance between p1 and p2  ~100, ~150, ~200, ~400
  • 37. Genetic Algorithm  Crossover is uniform crossover.  At the end of a round, the AI receives a reward (Fig. 1). Fig. 1. Reward
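Uniform crossover, as named on the slide, takes each gene from either parent with equal probability. A small illustrative sketch (the action-string gene encoding is hypothetical, not RUBA's actual representation):

```python
import random

def uniform_crossover(parent_a, parent_b, rng):
    # Each gene comes from parent_a or parent_b with probability 0.5.
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

rng = random.Random(0)            # fixed seed so the sketch is reproducible
child = uniform_crossover(["KICK"] * 4, ["GUARD"] * 4, rng)
print(child)
```

Unlike one-point crossover, uniform crossover mixes genes position by position, which suits rule tables where adjacent entries are largely independent.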
  • 39. Details • AI Name : SummerAI • Developers & Affiliation • Dae-Wook Kim (dooroomie@etri.re.kr) and Teammates • Electronics and Telecommunications Research Institute (ETRI) • Daejeon, Korea • AI Development Language • Python 3.6
  • 40. AI Outline • Method • Reinforcement Learning • Proximal Policy Optimization Algorithms (PPO)
  • 41. Network Structure • Two stacked self-attention blocks over per-player features (movement, action, state, projectile, HP/energy; one set for my information and one for the opponent's) plus the game time, with the state encoded per frame; the output covers 56 actions. [Diagram]
  • 42. Network Structure • Self-attention: my movement, action, state, projectile, and HP/energy, the opponent's counterparts, and the game time are projected into query, key, and value vectors, then combined as softmax(QK^T / sqrt(d)) V to produce action values. [Diagram]
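The attention formula on the slide, softmax(QK^T / sqrt(d)) V, can be evaluated numerically with a tiny example. This is a plain-Python sketch for a single query over two key/value rows, not SummerAI's code:

```python
import math

def softmax(xs):
    m = max(xs)                              # subtract max for stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(q, keys, values):
    d = len(q)
    # Scaled dot-product scores: q . k / sqrt(d) for each key row.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    # Weighted sum over value rows.
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 2.0], [3.0, 4.0]])
print(out)
```

The query matches the first key more strongly, so the output is pulled toward the first value row rather than being a plain average.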
  • 43. How to Test • After extracting the zip file, you can see the files below (screenshot)
  • 44. How to Test • Copy them into the FTG4.50/python directory (screenshot)
  • 45. How to Test • Open a terminal and run the FTG simulator
  • 46. How to Test • Go to the python directory • Open a new terminal and run the Python file
  • 47. WinOrGoHome. Developer: Weijun Hong. Affiliation: NetEase Games AI Lab, Guangzhou, China. Email: hongweijun@corp.netease.com. 2021/07/29
  • 48. Overview • WinOrGoHome is a Python agent built entirely with deep reinforcement learning and self-play. • Only numpy & py4j are required during inference, where the policy is modeled as a simple 3-layer MLP. • It uses a slightly modified gym API based on [1], with a reduced action space, an enlarged 282-dim observation space, a more training-friendly API, and some fault-tolerant mechanisms for distributed training. • It is trained with a distributed asynchronous version of PPO [2]. • 6 stand-alone models are trained, one for each track (i.e., league in FTGAIC): • We use self-play to train the models for the standard track, with league training to enhance the diversity of opponent strategies [3]. • The models for the speed-run track are trained entirely against MctsAi (LUD is fine-tuned from the self-play model). [1] https://github.com/TeamFightingICE/Gym-FightingICE [2] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. [3] Vinyals, O., Babuschkin, I., Czarnecki, W.M. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature
  • 49. Training Framework • Our training framework is designed after SEED [4], which features centralized inference. • It is an asynchronous architecture with great flexibility for large-scale training. • The controller collects outcomes such as winning rates from all environments, and periodically switches the training opponents or saves the current model as a new opponent. [4] Espeholt, L., Marinier, R., Stanczyk, P., Wang, K., & Michalski, M. (2019). Seed RL: Scalable and efficient deep-RL with accelerated central inference. arXiv preprint arXiv:1910.06591.
  • 50. Feature Engineering • We extend the original 143-dim vector in the Gym-FightingICE env with some more features: • Relative speed/position/HP • Projectile info such as speed, hit energy, impact distance, etc. • The opponent's action distribution within a round • The action space is also changed: • Only keep the 41~42 useful actions • Extend the effect frames of STAND_GUARD and CROUCH_GUARD • Reward: • The HP difference between the previous and current frames of both players is used for the standard track • Only the self-HP diff plus an additional reward w.r.t. the remaining time at the end of each game is used for the speed-run track • A multi-head value [5] is introduced to reduce the variance of value estimation, but with the same discount factor for every head. [5] Ye, D., Chen, G., Zhang, W., Chen, S., Yuan, B., Liu, B., ... & Liu, W. (2020). Towards playing full MOBA games with deep reinforcement learning. arXiv preprint arXiv:2011.12692.
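The standard-track reward described on the slide, the frame-to-frame change in the HP gap between the two players, reduces to one line. A sketch in which the function and argument names are mine:

```python
# Per-frame reward = change in (own HP - opponent HP) since the previous frame.
# Positive when we dealt more damage than we took over that frame.
def hp_diff_reward(prev_my_hp, prev_op_hp, my_hp, op_hp):
    return (my_hp - op_hp) - (prev_my_hp - prev_op_hp)

print(hp_diff_reward(400, 400, 390, 370))   # → 20: dealt 30 damage, took 10
```

A symmetric HP-gap reward is zero-sum between the two players, which matches the standard league's win condition; the speed-run variant drops the opponent term and adds a time bonus instead.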
  • 51. Opponent Pool • First, we want to thank the other teams from past years' competitions, including: TeraThunder, ButcherPudge, EmcmAi, SpringAI, CYR_AI, ReiwaThunder, Thunder, FalzAI, MctsAi, SimpleAI, LGIST_Bot, Machete. For the sake of opponent diversity, during self-play we add these AIs to our initial opponent pool with heuristic sample rates according to both their strength and style. • During self-play, for each character, we train our AI against agents sampled from the opponent pool as well as WinOrGoHome's past generations. Each generation is trained until convergence and then added to the pool as a new opponent. • After the first 3~5 generations, we train an exploiter for every second generation that plays against the previous generation, in order to find its weaknesses. • The whole training procedure ends after around 10 generations, and the final opponent pool is filled with about 12 past AIs, 6 or 7 self-play AIs, and 3 or 4 exploiter AIs. The last self-play generation is the one submitted. (GARNET is trained for fewer than 10 generations because it is harder to converge and we did not have enough time.)

Editor's Notes

  1. Next is about the contest.
  2. Results!