This document proposes a system called DraftRec to provide personalized draft recommendations in multiplayer online battle arena (MOBA) games like League of Legends. DraftRec uses a hierarchical neural network model with two transformer networks - a player network to capture individual player styles and preferences from match histories, and a match network to integrate player outputs. It recommends champions that have a high predicted winning probability based on the player's style. Evaluation shows DraftRec outperforms baselines in champion recommendation and match outcome prediction. A user study found players were satisfied with DraftRec's recommendations.
draftrec_www22.pdf
1. DraftRec
Personalized Draft Recommendation for Winning
in Multiplayer Online Battle Arena Games
Hojoon Lee*, Dongyoon Hwang*, Hyunseung Kim,
Byungkun Lee, and Jaegul Choo
3. • A MOBA game is divided into two stages: draft stage and play stage.
• Draft Stage: players are split into two teams and alternately select a virtual character (champion) to play.
Figure 1: In-game screen of the draft stage in League of Legends, annotated with the pick turns (1–10) and interface elements (a)–(g).
4. • Play Stage: players control their selected champions to destroy the opponent’s main tower.
Figure 2: In-game screen of the play stage in League of Legends.
5. • In League of Legends, there currently exist 156 champions, leading to approximately 4.42 × 10^17 ( C(156, 5) × C(151, 5) ) possible champion combinations for a single match.
• The optimal strategy can differ significantly depending on each player's playstyle.
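The combination count above can be checked directly: each team picks 5 of the 156 champions, and a champion taken by one team is unavailable to the other. A quick sketch with Python's `math.comb`:

```python
from math import comb

# One team chooses 5 of 156 champions; the other chooses 5 of the
# remaining 151, giving C(156, 5) * C(151, 5) combinations.
combinations = comb(156, 5) * comb(151, 5)
print(f"{combinations:.2e}")  # -> 4.42e+17
```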
6. • Players commonly rely on game analytic applications to find the appropriate combinations.
• Professional teams hire coaches to build optimal draft strategies.
Figure 3: Web page of the game analytic service op.gg. Figure 4: Coach Yu Byeong-jun of Gen.G esports drafting in the 2021 LCK summer split.
7. • Previous work focused on building recommender systems which recommend champions with a high
probability of winning by considering the synergy and competence of the champions [1, 2].
• However, none of these methods take the player’s personal style into consideration.
[1] The art of drafting: A team-oriented hero recommendation system for multiplayer online battle arena games. Chen et al., RecSys 2018.
[2] Towards playing full MOBA games with deep reinforcement learning. Ye et al., NeurIPS 2020.
8. • We present DraftRec, a recommender system that suggests champions with a high probability of
winning while understanding the play style of each player within the match.
• DraftRec represents the player’s play style through each player’s match histories.
• Contributions
• This paper formalizes the personalized draft recommendation problem in MOBA games.
• This paper proposes a novel hierarchical architecture which integrates the player’s playstyle.
• DraftRec achieves state-of-the-art performance.
9. Figure 5: In-game screen of the draft stage in League of Legends for the player at turn 9.
• Draft turn order: (1)-(2,3)-(4,5)-(6,7)-(8,9)-(10)
• For each turn t at match i:
• (b): players in the same team.
• (c): selected champions c_t^i.
• (d): designated roles r_t^i.
• (e): player-id p_t^i.
• State s_t^i: the observable information for player p_t^i, i.e., the currently selected champions, the team players' match histories, and the team players' roles.
• Given state s_t^i, the recommendation model f_θ predicts the likely champion p̂ and the match outcome v̂.
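The state s_t^i described above can be sketched as a plain data structure. This is an illustrative sketch only; the field names are hypothetical and not taken from the DraftRec codebase:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DraftState:
    """Illustrative container for the observable state s_t at draft turn t."""
    turn: int                         # draft turn t (1..10)
    selected_champions: List[str]     # champions already picked in the draft
    team_roles: List[str]             # designated role of each teammate
    team_histories: List[List[dict]]  # per-teammate match history records

# Example state for the player picking at turn 9:
state = DraftState(
    turn=9,
    selected_champions=["Ahri", "Lee Sin"],
    team_roles=["Top", "Jungle", "Middle", "AD Carry", "Support"],
    team_histories=[[{"champion": "Ahri", "win": True}]] * 5,
)
```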
10. Figure 6: DraftRec exploits a hierarchical architecture with two Transformer-based networks: (a) the player network and (b) the match network. While the player network focuses on capturing each player's play style and preferences, the subsequent match network integrates the outputs of the player network.
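The hierarchy can be illustrated with a minimal numpy sketch. This is not the actual DraftRec implementation: mean-pooling stands in for the Transformer player network, and a single self-attention layer stands in for the match network; all weights and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def player_network(history, W):
    # Stand-in for the Transformer player network: compress one player's
    # match history (L x d) into a single d-dim play-style embedding.
    return np.tanh(history.mean(axis=0) @ W)

def match_network(player_embs, Wq, Wk, Wv):
    # Stand-in for the Transformer match network: one self-attention
    # layer over the 10 per-player embeddings so they can interact.
    q, k, v = player_embs @ Wq, player_embs @ Wk, player_embs @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

d, L, n_players, n_champions = 16, 5, 10, 156
W = rng.normal(size=(d, d)) * 0.1
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

histories = rng.normal(size=(n_players, L, d))          # 10 players' histories
player_embs = np.stack([player_network(h, W) for h in histories])
match_embs = match_network(player_embs, Wq, Wk, Wv)

# Two prediction heads on the current player's representation:
Wp = rng.normal(size=(d, n_champions))
champ_probs = softmax(match_embs[0] @ Wp)                         # p-hat
win_prob = 1 / (1 + np.exp(-match_embs[0] @ rng.normal(size=d)))  # v-hat
```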
11. • The network parameters θ are trained to maximize the predicted champion selection probability p̂ of the ground-truth champion c_t^i, while minimizing the error between the predicted match outcome v̂ and the ground-truth match outcome o^i.
• We use a binary cross-entropy loss for both objectives.
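Both objectives use the standard binary cross-entropy form. A minimal sketch (the unweighted sum of the two losses is an assumption; the example values are arbitrary):

```python
import math

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between a predicted probability and a 0/1 target."""
    pred = min(max(pred, eps), 1 - eps)  # clamp for numerical stability
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

p_hat = 0.8  # predicted probability of the ground-truth champion
v_hat = 0.6  # predicted win probability
o = 1.0      # ground-truth match outcome (win)

# Assumed unweighted sum of the champion and outcome losses.
loss = bce(p_hat, 1.0) + bce(v_hat, o)
```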
12. • By filling the [mask] token with champion c, we can predict the expected winning probability of playing champion c through the match outcome prediction head.
Figure 5: An illustrative example of the unreliable behavior.
• However, since out-of-distribution data may have arbitrarily inaccurate prediction values, the highest match outcome value can be inappropriately assigned to champions which players do not prefer, as in Fig. 5(a).
• Therefore, we restrict the decision space by recommending the champion with the highest match outcome among those whose selection probability exceeds a threshold value.
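The restricted decision rule can be sketched in a few lines. The threshold value and the fallback behavior below are illustrative assumptions, not taken from the paper:

```python
def recommend(champ_probs, win_probs, threshold=0.05):
    """Rank by predicted win probability, restricted to champions whose
    selection probability exceeds `threshold` (value is illustrative)."""
    candidates = [c for c, p in enumerate(champ_probs) if p > threshold]
    if not candidates:  # assumed fallback: the most likely pick
        return max(range(len(champ_probs)), key=champ_probs.__getitem__)
    return max(candidates, key=win_probs.__getitem__)

# Champion 2 has the best win probability but is rarely played by this
# player, so it is filtered out and champion 1 is recommended instead.
champ_probs = [0.50, 0.40, 0.01]
win_probs   = [0.52, 0.58, 0.90]
print(recommend(champ_probs, win_probs))  # -> 1
```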
13. • Dataset
• League of Legends: we manually collected match data for League of Legends using the publicly accessible API endpoint provided by Riot Games, and constructed a MOBA game match dataset with rich player histories.
• Dota2: A publicly available dataset provided by Kaggle.
Table 1: Statistics of the benchmark datasets.
16. Champion Recommendation / Match Outcome Prediction
Figure 6: Performance comparison of recommendation models with varying history length L. Figure 7: Performance comparison of match outcome prediction models with varying history length L.
Understanding the player’s personal style is beneficial.
17. Figure 8: Visualization of the attention weights on
League of Legends.
• High attention scores:
• (Top, Jungle)
• (Middle, Jungle)
• (AD Carry, Support)
The attention scores reflect the actual role interactions in League of Legends.
18. • Recommendation Strategies
• DraftRec (p): Strategy based on the champion selection probability.
• DraftRec (v): Strategy based on the match outcome prediction.
• DraftRec (p+v): Proposed recommendation strategy which ranks the champions based on the
match outcome where its selection probability exceeds a threshold value.
• Evaluation
• For each test match data, the model recommends the champion and the separate evaluation
model predicts the match outcome.
19. • Conducted a user study with 84 players across 50 matches.
• The players were satisfied with DraftRec's recommendations.
20. • We present DraftRec, a recommender system that suggests champions with a high probability of
winning while understanding the play style of each player within the match.
• DraftRec achieved the best performance compared to existing baselines.
Code and datasets can be found at
https://github.com/dojeon-ai/DraftRec