Big Data at Riot Games – Using Hadoop to Understand Player Experience - StampedeCon 2013


At the StampedeCon 2013 Big Data conference in St. Louis, Riot Games discussed Using Hadoop to Understand and Improve Player Experience. Riot Games aims to be the most player-focused game company in the world. To fulfill that mission, it’s vital we develop a deep, detailed understanding of players’ experiences. This is particularly challenging since our debut title, League of Legends, is one of the most played video games in the world, with more than 32 million active monthly players across the globe. In this presentation, we’ll discuss several use cases where we sought to understand and improve the player experience, the challenges we faced to solve those use cases, and the big data infrastructure that supports our capability to provide continued insight.



  1. BIG DATA @ RIOT GAMES - USING HADOOP TO IMPROVE THE PLAYER EXPERIENCE | BARRY LIVINGSTON & SANDEEP SHRESTHA | JULY 2013
  2. SPEAKERS
  3. AGENDA: CONTEXT | QUICK DATA WAREHOUSE HISTORY | HIGH LEVEL ARCHITECTURE | PLAYER EXPERIENCE USE CASES | SUMMARY
  4. FIRST, A BIT OF CONTEXT…
  5. WHAT IS LEAGUE OF LEGENDS? 2009 LAUNCH | TEAM ORIENTED | 100+ CHAMPS | MODERN FANTASY
  6. WHAT IS LEAGUE OF LEGENDS?
  7. LEAGUE OF LEGENDS GAMEPLAY - CHAMPIONS
  8. LEAGUE OF LEGENDS GAMEPLAY - GAMEPLAY
  9. A QUICK HISTORY
  10. INITIAL LAUNCH / SCRAPPY START-UP PHASE
     ‣ Had a single, dedicated MySQL instance for the DW
     ‣ Data was ETL'd from production slaves into this instance
     ‣ Queries were run in MySQL
     ‣ Reporting was done in Excel
       ▾ All ETLs, queries and reporting were done by one person
     THIS WORKED GREAT!
  11. THEN – CRAZY GROWTH
     [Chart: total active players (# unique logins over time), through June 2012]
  12. THE BREAKING POINT
     ‣ Data warehouse reached a breaking point
       ▾ 24 hours of data took 24.5 hours to ETL
     ‣ We couldn't handle…
       ▾ Multiple environments in a vertical MySQL instance
       ▾ A single environment in a vertical MySQL instance
     ‣ We needed to change
  13. INTRODUCTION OF HADOOP
     ‣ Hadoop has a number of great qualities
       ▾ Cost effective
       ▾ Scalable
       ▾ Open source
       ▾ We could execute quickly
  14. HIGH LEVEL ARCHITECTURE – JUNE 2012
     [Diagram: per-region source databases (NORTH AMERICA, EUROPE, KOREA; each with Audit, Plat, LoL) feed Pentaho + custom ETL + Sqoop into the Hive data warehouse and MySQL; analysts query Hive directly, business analysts use Tableau and Pentaho]
  15. BUT, THIS WASN'T GOOD ENOUGH
     ‣ The time to arrive at insight was too long!
     ‣ Our solution required too much data team involvement
       ▾ Schema changes
       ▾ ETL tweaks
       ▾ Hive metadata updates
     ‣ Hive is painful for ad-hoc or interactive analysis
       ▾ Especially for non-technical folks
  16. GOALS
     ‣ Democratize data access
       ▾ Enable self-service data collection and analysis
     ‣ Create actionable insights
     ‣ Increase speed to insight
  17. USE CASE: GAME CLIENT PERFORMANCE
  18. CLIENT FOOTPRINT
     ‣ A significant portion of our software runs directly on players' machines
       ▾ High performance graphics
       ▾ Responsiveness
     ‣ There is logic in these components that's ONLY exercised on the client side
     ‣ Understanding the performance, reliability and stability of these features is paramount to improving the player experience
  19. PATCHER
  20. LOBBY CLIENT
  21. GAME CLIENT
  22. ITEM SHOP
  23. CHALLENGE: THE GAME IS ALIVE
     The game is a living, breathing service that's always in motion
     ‣ New champions
     ‣ New items
     ‣ New effects/particles
     ‣ Changes in environment
     ‣ Changes in design and design balance
     UPDATE EVERY 2-3 WEEKS
  24. CHALLENGE: WE'RE GLOBAL
  25. CHALLENGE: PC VARIABILITY
     ‣ Hardware and OS profiles are significantly different even within regions
       ▾ OS and patch level
       ▾ CPU
       ▾ Memory
       ▾ Video card
       ▾ Video card memory
       ▾ Drivers
  26. CHALLENGE: GRAPHIC SETTINGS
  27. CHALLENGE: CLIENT-SIDE LOGIC
  28. IMPROVING THE PLAYER EXPERIENCE
     ‣ We need to gather information across all of these dimensions in order to UNDERSTAND the player experience
     ‣ We use this info to:
       ▾ React quickly to changes
       ▾ Optimize performance
       ▾ Optimize designs
       ▾ Improve our testing
         • Like creating our compatibility testing lab
  29. REACTING QUICKLY
  30. GAME LOAD SCREEN
  31. IMPROVING LOAD TIME
  32. OPTIMIZING DESIGN AND PERFORMANCE
  33. OPTIMIZING DESIGN AND PERFORMANCE
  34. OPTIMIZING DESIGN AND PERFORMANCE
  35. OPTIMIZING DESIGN AND PERFORMANCE
  36. HOW DID WE SOLVE THIS? WE HAVE AN ARMY OF TEEMOS WATCHING PLAYERS' MACHINES THROUGH THEIR TELESCOPES?! (NOT REALLY, BUT WE DID CONSIDER IT)
  37. HONU: GENERATE - COLLECT - ANALYZE
     ‣ Riot's self-service end-to-end Big Data pipeline
       ▾ Cloud-ready (AWS compatible)
       ▾ Internal data-center ready
       ▾ Persistent storage: HDFS/S3
       ▾ Batch processing: Apache Hadoop/AWS EMR
       ▾ Data publish: Apache Hive
  38. EVENT GENERATION
     ‣ Honu SDKs: Java, C++, Erlang
     ‣ Collector discovery
     ‣ Failover
     ‣ Load balancing
     ‣ Buffering/batching
     ‣ Dispatching
     ‣ Thrift transport
  39. HONU CLIENT SDK
     select avg(f['pingAvg']) from game_client_stats group by f['serverId'];
     [Sample GAME_CLIENT_STATS row: timestamp 1234567890, source 99.123.456.78, system Intel64 …, app game_client, pingAvg 220.9542, serverId 12.345.678.90]
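The buffering/batching and dispatching listed under Event Generation can be sketched roughly as follows. This is a minimal Python illustration only (the real Honu SDKs are Java, C++ and Erlang, and ship batches over Thrift); `EventDispatcher`, `log_event` and the field names are hypothetical:

```python
import json
import time


class EventDispatcher:
    """Hypothetical sketch of a Honu-style client SDK: buffer events
    locally, then dispatch to a collector in batches instead of one
    network call per event."""

    def __init__(self, send_fn, batch_size=100):
        self.send_fn = send_fn        # transport hook (Thrift in the real SDK)
        self.batch_size = batch_size
        self.buffer = []

    def log_event(self, message_type, **fields):
        event = {"messageType": message_type,
                 "timestamp": int(time.time()),
                 **fields}
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send_fn(json.dumps(self.buffer))  # one call per batch
            self.buffer = []


# Usage: capture batches in a list instead of a real network transport.
sent = []
d = EventDispatcher(sent.append, batch_size=2)
d.log_event("GAME_CLIENT_STATS", pingAvg=220.9, serverId="NA1")
d.log_event("GAME_CLIENT_STATS", pingAvg=95.4, serverId="NA1")
d.log_event("GAME_CLIENT_STATS", pingAvg=130.1, serverId="EUW1")
d.flush()
print(len(sent))  # 2: one full batch plus the flushed remainder
```

Batching is what makes per-event logging cheap enough to run on every player's machine; failover and collector discovery would wrap `send_fn` in a real SDK.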
  40. EVENT COLLECTION
     ‣ Honu collector
     ‣ Online system
     ‣ High availability – 100% uptime
     ‣ Horizontally scalable
     ‣ Elastic
     ‣ Fault tolerant
     ‣ Netflix OSS Eureka discovery service
  41. HONU COLLECTOR
     ‣ Collect events from multiple clients (Thrift/NIO)
     ‣ Save all events to one compressed file locally
     ‣ Upload that file every XX minutes to HDFS/S3
     ‣ Send a message to Queue/SQS for demux
     [Diagram: Honu collectors → S3 + SQS]
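The collector's cycle above (compress locally, upload, notify the next stage) can be sketched like this; a dict stands in for HDFS/S3, a local queue for SQS, and `collector_cycle` plus the path layout are invented for illustration:

```python
import gzip
import json
import queue


def collector_cycle(events, storage, notify_queue, path):
    """Hypothetical sketch of one Honu collector upload cycle."""
    # 1. Save all events to one compressed file (newline-delimited JSON).
    payload = gzip.compress(
        "\n".join(json.dumps(e) for e in events).encode("utf-8"))
    # 2. Upload the file to HDFS/S3 (a dict stands in here).
    storage[path] = payload
    # 3. Send a message to Queue/SQS so the demux stage picks it up.
    notify_queue.put({"bucket": "honu-events", "key": path})


s3 = {}
sqs = queue.Queue()
events = [{"messageType": "GAME_CLIENT_STATS", "pingAvg": 220.9}]
collector_cycle(events, s3, sqs, "2013/07/collector-01/events-0001.gz")
msg = sqs.get()
print(msg["key"])  # the demux stage now knows which file to process
```

Decoupling upload from notification is what lets collectors stay online and horizontally scalable: a collector only ever appends, compresses, and ships.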
  42. EVENT ORGANIZATION
     ‣ Honu demux
     ‣ Multi-stage batch processing pipeline
     ‣ Elastic producer-consumer
     ‣ Apache Hadoop MapReduce
     ‣ Standalone MapReduce mode
     ‣ Apache Hive integration
  43. HONU DEMUX
     ‣ Multi-stage batch processing pipeline
     ‣ Bucket events to separate tables
     ‣ Write Hive partition files
     ‣ Add partitions to Hive metastore
     ‣ Merge partitions
     [Diagram: SQS → demux → standalone demux workers → S3 → Hive merge]
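The bucketing step can be approximated as: group incoming events by table (their messageType) and day, producing one Hive-style partition path per bucket. The `warehouse/<table>/dt=<day>` layout and the `demux` helper are assumptions for illustration, not the actual Honu MapReduce code:

```python
import datetime
from collections import defaultdict


def demux(raw_events):
    """Hypothetical sketch of the demux bucketing step: route each
    event into a per-table, per-day partition based on messageType
    and timestamp."""
    partitions = defaultdict(list)
    for event in raw_events:
        table = event["messageType"].lower()
        day = datetime.datetime.fromtimestamp(
            event["timestamp"], datetime.timezone.utc).strftime("%Y-%m-%d")
        partitions["warehouse/%s/dt=%s" % (table, day)].append(event)
    return partitions


parts = demux([
    {"messageType": "Foo", "timestamp": 1369064555, "fact": "Hello World!"},
    {"messageType": "Bar", "timestamp": 1369064555},
])
print(sorted(parts))  # one partition path per (table, day) bucket
```

Each bucket maps to a Hive partition, so "add partitions to Hive metastore" is then a metadata-only operation and small partition files can be merged later without rewriting the table.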
  44. HONU PIPELINE: HONU CLIENT SDK (GENERATE) → HONU COLLECTORS (COLLECT) → HONU DEMUX (ORGANIZE)
  45. USE CASE: PLAYER BEHAVIOR
  46. PLAYER BEHAVIOR
  47. PLAYER BEHAVIOR INITIATIVES: TRIBUNAL JUSTICE
     ‣ Community regulated
     ‣ In-game chat log
     ‣ Player stats
     ‣ Inventory
     ‣ Game info
  48. PLAYER BEHAVIOR INITIATIVES: HONOR SYSTEM
     ‣ Recognize positive experience
     ‣ Improve sportsmanship
  49. STARTUP TIPS
     ‣ TEAMS THAT USE SMART PINGS TO ALERT OTHER PLAYERS TO THREATS ARE MORE LIKELY TO WIN THE GAME
     ‣ PLAYERS WHO FOLLOW THE SUMMONER'S CODE WIN 27% MORE GAMES
     ‣ THE TRIBUNAL BANS PLAYERS FOR NEGATIVE BEHAVIOR SUCH AS VERBAL HARASSMENT
     ‣ PLAYERS WHO COOPERATE WITH THEIR TEAM WIN 31% MORE GAMES
  50. HOW WE SOLVED IT – EXTEND HONU: HONU CLIENT SDK (GENERATE) → HONU COLLECTORS (COLLECT) → HONU DEMUX (ORGANIZE)
  51. HONU TOOLS: DRADIS
     ‣ HTTP based data collection
     ‣ Large volume of data from untrusted sources
     ‣ C10K
     ‣ Nginx + Netty
     ‣ 4+ billion API calls/day
     ‣ Peak 100K+ calls/sec
  52. HONU TOOLS: DRADIS
     ‣ JSON messages:
       ▾ curl -d '[{"messageType": "Foo", "timestamp": 1369064555, "fact": "Hello World!"}, {"messageType": "Foo", "timestamp": 1369064555, "fact": "Hello Dradis!", "fiction": "Hello Honu!"}]'
     ‣ Hive query (table: Foo):
       ▾ select * from foo where f['fact'] = 'Hello Dradis!'
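To make the slide's example concrete, here is a small sketch that builds the same JSON payload a Dradis client would POST and applies the equivalent of the Hive filter in memory; the payload follows the slide, everything else is illustrative:

```python
import json

# The two messages from the slide, as a Python structure.
payload = [
    {"messageType": "Foo", "timestamp": 1369064555, "fact": "Hello World!"},
    {"messageType": "Foo", "timestamp": 1369064555,
     "fact": "Hello Dradis!", "fiction": "Hello Honu!"},
]
body = json.dumps(payload)  # the string `curl -d` would POST to Dradis

# In-memory equivalent of the Hive query:
#   select * from foo where f['fact'] = 'Hello Dradis!'
rows = [e for e in json.loads(body) if e.get("fact") == "Hello Dradis!"]
print(len(rows))  # 1
```

The point of the schema is that messageType picks the Hive table while arbitrary extra keys (`fact`, `fiction`) land in the `f` map, so producers can add fields without a schema change.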
  53. HONU TOOLS: ECHO SERVICE
     ‣ Web UI to easily and immediately visualize the data that has been sent to Honu collectors
     ‣ Self-service end-to-end pipeline
  54. HONU TOOLS: ECHO SERVICE
  55. HONU TOOLS: ECHO SERVICE
  56. HONU TOOLS: METADATA SERVICE
     ‣ Data discovery
     ‣ Schema management
     ‣ Counter, time
  57. HONU TOOLS: REAL-TIME SLICING/DICING
     ‣ Integration with Platfora
     ‣ End-user ad-hoc analysis tool
     ‣ Interactive visual feedback
     ‣ Realtime exploration/graphing @ 10^9 data points
  58. HONU TOOLS: REAL-TIME SLICING/DICING
  59. HONU TOOLS: WORKFLOW MANAGEMENT
     ENTERPRISE WORKFLOW MANAGEMENT: MATT GOEKE, LATER TODAY
  60. HONU STATS
     ‣ 7+ billion events/day
     ‣ Tested @ 70+ billion events/day
     ‣ 100+ tables
       ▾ 10+ tables @ 100M – 1B rows/day
     ‣ 7 petabyte game event dataset
     ‣ Semi-global deployment
     ‣ 0 downtime
     ‣ Runs in cloud (AWS) + datacenter
  61. SUMMARY
  62. GOALS
     ✓ Democratize data access
     ✓ Enable self-service data collection and analysis
     ✓ Create actionable insights
     ✓ Increase speed to insight
  63. FUTURE
     ‣ Improve self-service workflow & tooling
       ▾ Metadata management
       ▾ Discovery of captured data
       ▾ Workflow management
       ▾ Platfora to all teams
     ‣ Realtime event aggregation
     ‣ Global data infrastructure
     ‣ Replace legacy audit/event logging services
  64. HANDLE INCREASING DATA VELOCITY
                            JUNE 2012                    JULY 2013
     MySQL tables           180                          1200
     Pipeline events/day    0                            7+ billion
     Workflows              Cronjob + Pentaho            Oozie
     Environment            Datacenter                   DC + AWS
     SLA                    1 day                        2 hours
     Event tracking         2+ weeks (DB update);        10 minutes;
                            dependencies: DBA teams +    self-service;
                            ETL teams + Tools teams;     no downtime
                            downtime (3h min.)
  65. DECREASE TEEMO DEATHS?
  66. SHAMELESS HIRING PLUG
     Like most everybody else at this conference… we're hiring!
     THE RIOT MANIFESTO: PLAYER EXPERIENCE FIRST | CHALLENGE CONVENTION | FOCUS ON TALENT AND TEAM | TAKE PLAY SERIOUSLY | STAY HUNGRY, STAY HUMBLE
  67. SHAMELESS HIRING PLUG: AND YES, YOU CAN PLAY GAMES AT WORK. IT'S ENCOURAGED!
  68. THANK YOU! QUESTIONS? BARRY LIVINGSTON blivingston@riotgames.com | SANDEEP SHRESTHA sshrestha@riotgames.com
