
Keynote at the 2018 SIGGRAPH Conference on Motion, Interaction and Games


This is the keynote I gave at the 2018 ACM SIGGRAPH Conference on Motion, Interaction and Games.

Title: Toward a Science of Game Design

Abstract:
Game development is costly, technically challenging, and poorly understood. Increased demand for games as a form of entertainment has motivated research into technology to help ameliorate the burden involved in development. This technology unfortunately has the potential to create more problems than it solves. In this talk, I will argue that this increased demand should motivate more research into human-centered game design, involving both artifact and person. This research requires computationally modeling our human intelligence, as part of an agenda that seeks to codify the precise interplay between a person’s cognition (an inner environment), the game’s controls (an interface), and a fictional universe (an outer environment); the interplay is concerned with attaining design goals by adapting the inner environment to the outer environment. I will present examples of this agenda as embodied through my own work and identify key challenges that I think the MIG community is well-poised to address in service of establishing what Herb Simon might have called a “science of game design.”



  1. 1. LABORATORY FOR QUANTITATIVE EXPERIENCE DESIGN qed.cs.utah.edu Toward a Science of Game Design Rogelio E. Cardona-Rivera, Assistant Professor and Director, QED Lab, School of Computing, Entertainment Arts & Engineering, University of Utah rogelio@cs.utah.edu @recardona
  2. 2. Acknowledgements
  3. 3. My Work: The Big Picture Developing intelligent systems

  4. 4. My Work: The Big Picture Developing intelligent systems
 which sit at the interface of a virtual world
  5. 5. My Work: The Big Picture Developing intelligent systems
 which sit at the interface of a virtual world and a person's understanding of it, 

  6. 6. My Work: The Big Picture Developing intelligent systems
 which sit at the interface of a virtual world and a person's understanding of it, 
 to enable the automated generation of 
 compelling interactive experiences
  7. 7. Three Methodological Pillars
  8. 8. Three Methodological Pillars •Synthesis
  9. 9. Three Methodological Pillars •Synthesis Narratology
  10. 10. Three Methodological Pillars •Synthesis Narratology Psychology
  11. 11. Three Methodological Pillars •Synthesis Game Design Narratology Psychology
  12. 12. Three Methodological Pillars •Synthesis •Development Game Design Narratology Psychology AI
  13. 13. Three Methodological Pillars •Synthesis •Development •Validation Game Design Narratology Psychology AI
 +
 HCI
  14. 14. Game Design 
 is a Cognitive Science
  15. 15. Game Design is a Cognitive Science
  16. 16. Game Design is a Cognitive Science
       {"<start>"    : "<template>",
        "<template>" : "<object> in which players <engagement>.
                      | <object> that involves <characteristics>.
                      | <object> <constraints>.
                      | <object> characterized by <relationship>.",
        "<object>"   : … }
       molleindustria, http://www.gamedefinitions.com/
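The grammar above can be read as a generative template. A minimal sketch of expanding it in Python (the fill-ins for <object>, <engagement>, and the other nonterminals below are hypothetical stand-ins, not molleindustria's actual productions):

    import random
    import re

    GRAMMAR = {
        "<start>": "<template>",
        "<template>": ("<object> in which players <engagement>. "
                       "| <object> that involves <characteristics>. "
                       "| <object> <constraints>. "
                       "| <object> characterized by <relationship>."),
        # Hypothetical stand-ins; the real grammar at gamedefinitions.com is much larger.
        "<object>": "An activity | A system | A form of play",
        "<engagement>": "make meaningful choices | pursue goals under constraints",
        "<characteristics>": "rules and quantifiable outcomes",
        "<constraints>": "bounded by agreed-upon rules",
        "<relationship>": "a negotiated conflict between players",
    }

    def expand(symbol: str) -> str:
        """Pick one pipe-delimited alternative, then recursively expand its nonterminals."""
        alternatives = [alt.strip() for alt in GRAMMAR[symbol].split("|")]
        production = random.choice(alternatives)
        return re.sub(r"<[^>]+>", lambda m: expand(m.group(0)), production)

    print(expand("<start>"))  # e.g. "An activity in which players pursue goals under constraints."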
  17. 17. Game Design is a Cognitive Science –Herbert Simon The act of transforming existing courses of action into preferred ones.
  18. 18. Game Design is a Cognitive Science
  19. 19. Talk Outline Objective: The MIG Community is well-poised to pursue a science of game design • What is a Science of Game Design and why bother? • What are examples of work in this area? • What are MIG-specific opportunities?
  20. 20. What is a Science of Game Design and why bother?
  21. 21. What is a Science of Game Design… A systematically organized 
 body of knowledge
  22. 22. What is a Science of Game Design… A systematically organized 
 body of knowledge composed of 
 observation and experiment
  23. 23. What is a Science of Game Design… A systematically organized 
 body of knowledge composed of 
 observation and experiment that encompasses the 
 structure and behavior of games
  24. 24. …and why bother? • Games are a significant engineering challenge • Advances in technology create more problems • Research should target artifact and person
  25. 25. …and why bother? • Games are a significant engineering challenge • Advances in technology create more problems • Research should target artifact and person
  26. 26. The Engineering Challenge •Costly •Technically difficult •Poorly understood
  27. 27. Cost of Most Expensive Games [bar chart, 2011–2016: budgets of $80M, $105M, $124M, $137M, $140M, and $200M; MGSV highlighted]
  28. 28. Development Time for them [bar chart, 2011–2016: development times of 3, 3, 3, 4, 4, and 5 years; MGSV highlighted]
  29. 29. Human Cost Telltale Games
  30. 30. Human Cost Rockstar Games
  31. 31. Authorial Combinatorics Problem •Content authoring increases exponentially with player choice [branching diagram of Events and Actions]
  32. 32. 12 writers, 3 years, 200,000 dialogue lines ≈ 1M words (1,094,170 words): a choose-your-own-adventure (CYOA)
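For a sense of scale behind the authorial combinatorics problem, a small back-of-the-envelope sketch (the branching factor and depth here are assumed for illustration, not figures from the talk):

    # Illustrative only: assumed branching factor and depth, not data from the talk.
    def authored_scenes(branching: int, depth: int) -> int:
        """Distinct scenes to hand-author if every choice branches and no branches are merged."""
        return sum(branching ** level for level in range(depth + 1))

    # A CYOA with 6 fully-branching binary choices already needs
    # 1 + 2 + 4 + ... + 64 = 127 authored scenes; 20 choices need over 2 million.
    print(authored_scenes(2, 6))    # 127
    print(authored_scenes(2, 20))   # 2097151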
  33. 33. Game Purchase Influence Factors [pie chart]: Other 22%, Price 21%, Interesting Story 16%, Graphics 12%, Word of mouth 11%, Sequel 9%, Similarity 9%. Essential Facts About the Computer & Video Game Industry (Entertainment Software Association, 2016)
  34. 34. …and why bother? • Games are a significant engineering challenge • Advances in technology create more problems • Research should target artifact and person
  35. 35. Graphics at the Expense of Stories
  36. 36. There’s a lot of hacks and kludges to get things working… I’m sure you would find tons of duplication of effort, definitely. I’ve been an audio programmer on [X] different games and I’ve written [X] different audio engines.
  37. 37. Meaningless Procedural Generation No Man’s Sky can generate 1.8 × 10^19 planets
  38. 38. Meaningless Procedural Generation The Kaleidoscope Effect Cognitively-grounded Procedural Content Generation (Cardona-Rivera, 2017)
  39. 39. …and why bother? • Games are a significant engineering challenge • Advances in technology create more problems • Research should target artifact and person
  40. 40. The Player Modeling Principle The whole value of a game is in the mental model of itself it projects into the player’s mind. The Simulation Dream (Sylvester, 2013)
  41. 41. Tacit Learning and Expectations
  42. 42. What is a Science of Game Design and why bother? • Games are a significant engineering challenge • Advances in technology create more problems • Research should target artifact and person
  43. 43. Game Design Narratology Psychology AI
 +
 HCI What are examples 
 of work in this 
 area?
  44. 44. Narratively
 Intelligent
 Game AI An example agenda in the Science of Game Design
  45. 45. What is Narrative Intelligence? Unique human capacity to 
 understand our environment in 
 terms of stories (Heider and Simmel, 1944)
  46. 46. •Narrative framing makes interaction more compelling Why Narrative Intelligence matters
  47. 47. •Narrative framing makes interaction more compelling ‣ Entertainment Why Narrative Intelligence matters video games
  48. 48. •Narrative framing makes interaction more compelling ‣ Entertainment ‣ Education Why Narrative Intelligence matters training simulations
  49. 49. •Narrative framing makes interaction more compelling ‣ Entertainment ‣ Education ‣ Engagement Why Narrative Intelligence matters gamification
  50. 50. Why Narrative Intelligence matters •Narrative framing makes interaction more compelling ‣ Entertainment ‣ Education ‣ Engagement •Difficult to engineer ‣ AI may help ameliorate authorial burden
  51. 51. Interactive Narrative (IN) • Mediates actions through a 
 narrative framing
  52. 52. Interactive Narrative (IN) Designer Authors
 with Story 
 Director • Mediates actions through a 
 narrative framing • Abstraction of story as trajectory of world states ‣ Narratives as plans
  53. 53. Narratives as Plans • Story generation as a classical planning problem P = ⟨sᵢ, g, D⟩ ‣ sᵢ : initial state ‣ g : goal conditions ‣ D : set of (domain) actions, predicates, and objects • Search for an action sequence that transforms sᵢ → g. Na Plans and planning in narrative generation: a review of plan-based approaches to the generation of story, discourse and interactivity in narratives (Young et al., 2013)
  54. 54. Narratives as Plans • Actions encoded as template operators ‣ Planning Domain Definition Language (PDDL) Na
       (:action pick-up
         :parameters (?agent ?item ?location)
         :precondition (and (at ?item ?location) (at ?agent ?location))
         :effect (and (not (at ?item ?location)) (has ?agent ?item)))
  55. 55. Narratives as Plans • Actions encoded as template operators ‣ Planning Domain Definition Language • PDDL expanded with consenting agents Na
       (:action pick-up
         :parameters (?agent ?item ?location)
         :precondition (and (at ?item ?location) (at ?agent ?location))
         :effect (and (not (at ?item ?location)) (has ?agent ?item))
         :agents (?agent))
  56. 56. Automated Planning • A solution to a planning problem P = ⟨sᵢ, g, D⟩ is a plan π = ⟨S, B, L⟩
  57. 57. Automated Planning • A solution to a planning problem P = ⟨sᵢ, g, D⟩ is a plan π = ⟨S, B, L⟩ ‣ S : steps [diagram: sᵢ … Pick-up, Disenchant, Pick-up … g]
  58. 58. Automated Planning • A solution to a planning problem P = ⟨sᵢ, g, D⟩ is a plan π = ⟨S, B, L⟩ ‣ S : steps ‣ B : bindings [diagram: sᵢ … Pick-up, Disenchant, Pick-up … g]
  59. 59. Automated Planning • A solution to a planning problem P = ⟨sᵢ, g, D⟩ is a plan π = ⟨S, B, L⟩ ‣ S : steps ‣ B : bindings ‣ L : causal links (e.g. ⟨s₁, (has ARTHUR SPELLBOOK), s₂⟩) [diagram: sᵢ … Pick-up, Disenchant, Pick-up … g]
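For concreteness, a minimal sketch (hypothetical Python, simplified relative to any real planner) of the structures these slides denote: a problem P = ⟨sᵢ, g, D⟩ and a plan π = ⟨S, B, L⟩ whose causal links record that an earlier step establishes a condition consumed by a later step:

    from dataclasses import dataclass, field

    # Hypothetical, simplified encodings of P = <s_i, g, D> and pi = <S, B, L>;
    # a real PDDL-based planner carries much more structure than this.

    Fact = tuple  # e.g. ("at", "ARTHUR", "FOREST")

    @dataclass
    class Action:
        name: str
        parameters: tuple
        preconditions: frozenset
        effects: frozenset

    @dataclass
    class Problem:                      # P = <s_i, g, D>
        initial: frozenset              # s_i : initial state literals
        goal: frozenset                 # g   : goal condition literals
        domain: list                    # D   : available action templates

    @dataclass
    class CausalLink:                   # <s1, condition, s2>: s1 establishes condition for s2
        producer: int                   # index of the earlier step in the plan
        condition: Fact
        consumer: int                   # index of the later step

    @dataclass
    class Plan:                         # pi = <S, B, L>
        steps: list                     # S : ground action instances, in order
        bindings: dict                  # B : variable -> object assignments
        links: list = field(default_factory=list)   # L : causal links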
  60. 60. Example: A Knight’s Tale Na
       (define (problem STORY)
         (:domain KNIGHT)
         (:objects ARTHUR MERLIN SPELLBOOK MERLINBOOK EXCALIBUR FOREST HOME)
         (:init (at ARTHUR FOREST) (at MERLIN FOREST)
                (has MERLIN MERLINBOOK) (asleep MERLIN)
                (at SPELLBOOK FOREST) (at EXCALIBUR FOREST)
                (enchanted EXCALIBUR) (path FOREST HOME))
         (:goal (has ARTHUR EXCALIBUR)))

       (define (domain KNIGHT)
         (:requirements :strips)
         (:predicates (at ?x ?y) (has ?x ?y) (path ?x ?y) (asleep ?x) (enchanted ?x))
         (:action pick-up    :parameters (?agent ?item ?location) …)
         (:action move       :parameters (?agent ?from ?to) …)
         (:action disenchant :parameters (?agent ?obj ?location ?book) …)
         (:action wake-up    :parameters (?agent ?sleeper ?location) …))
  61. 61. Example: A Knight’s Tale Na
       (define (problem STORY)
         (:domain KNIGHT)
         (:objects ARTHUR MERLIN SPELLBOOK MERLINBOOK EXCALIBUR FOREST HOME)
         (:init (at ARTHUR FOREST) (at MERLIN FOREST)
                (has MERLIN MERLINBOOK) (asleep MERLIN)
                (at SPELLBOOK FOREST) (at EXCALIBUR FOREST)
                (enchanted EXCALIBUR) (path FOREST HOME))
         (:goal (has ARTHUR EXCALIBUR)))
  68. 68. Example: A Knight’s Tale Na [solution plan for the problem above: sᵢ → Pick-up → Disenchant → Pick-up → g]
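To connect the PDDL above to the plan the slides arrive at, here is a small hand-written check (illustrative only, not planner output; the disenchant and pick-up effect literals are assumed from the operator sketches above) that applies the three steps' effects to the initial state and confirms the goal:

    # Apply the effects of the three steps shown on the slide to the Knight's Tale
    # initial state and confirm (has ARTHUR EXCALIBUR) holds at the end.
    initial_state = {
        ("at", "ARTHUR", "FOREST"), ("at", "MERLIN", "FOREST"),
        ("has", "MERLIN", "MERLINBOOK"), ("asleep", "MERLIN"),
        ("at", "SPELLBOOK", "FOREST"), ("at", "EXCALIBUR", "FOREST"),
        ("enchanted", "EXCALIBUR"), ("path", "FOREST", "HOME"),
    }

    # (name, add effects, delete effects) for each ground step; assumed encodings.
    steps = [
        ("pick-up ARTHUR SPELLBOOK FOREST",
         {("has", "ARTHUR", "SPELLBOOK")}, {("at", "SPELLBOOK", "FOREST")}),
        ("disenchant ARTHUR EXCALIBUR FOREST SPELLBOOK",
         set(), {("enchanted", "EXCALIBUR")}),
        ("pick-up ARTHUR EXCALIBUR FOREST",
         {("has", "ARTHUR", "EXCALIBUR")}, {("at", "EXCALIBUR", "FOREST")}),
    ]

    state = set(initial_state)
    for name, adds, deletes in steps:
        state = (state - deletes) | adds
        print(f"after {name}: {len(state)} literals")

    goal = {("has", "ARTHUR", "EXCALIBUR")}
    print("goal satisfied:", goal <= state)   # True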
  69. 69. Interactive Narrative (IN) Na Designer Authors
 with Story 
 Director
  70. 70. Interactive Narrative (IN) Na Designer Authors
 with Story 
 Director si g
  71. 71. Interactive Narrative (IN) Na Interacts
 with Designer Authors
 with Story 
 Director si g Player
  72. 72. Interactive Narrative (IN) Na Interacts
 with Designer Authors
 with Story 
 Director si g Player Player can act as afforded by the logical world state
  73. 73. IN Play as Game Tree Search Na si g Pick-up Disenchant Pick-up Intended Narrative Plan
  74. 74. IN Play as Game Tree Search • Chronology — Player & System take turns ‣ On System Turn: Advance Narrative Agenda ‣ On Player Turn: ??? Na [tree diagram: sᵢ, Pick-up, Disenchant, Pick-up, g]
  75. 75. IN Play as Game Tree Search Na [tree diagram: sᵢ, Pick-up, Disenchant, Pick-up, g]
  76. 76. IN Play as Game Tree Search Na [tree diagram, now with a Move branch]
  77. 77. IN Play as Game Tree Search Na [tree diagram, now with Move and Wake-up branches]
  78. 78. IN Play as Game Tree Search • Many trajectories Na [tree diagram: Move and Wake-up branches lead to states r, q, p]
  79. 79. IN Play as Game Tree Search • Many many trajectories Na [tree diagram, with further expansions …]
  80. 80. IN Play as Game Tree Search • Many many trajectories ‣ Not all are good Na [tree diagram]
  81. 81. IN Play as Game Tree Search • Many many trajectories ‣ Not all are good (g is unreachable) Na [tree diagram]
  82. 82. IN Play as Game Tree Search • Many many trajectories ‣ Not all are good (g is unreachable) ‣ Mediator is needed Na [tree diagram]
  83. 83. Interactive Narrative (IN) Na Interacts
 with Designer Authors
 with Story 
 Director si g Player
  84. 84. Interactive Narrative (IN) Na Interacts
 with Designer Authors
 with Story 
 Director si g PlayerMediator
  85. 85. Interactive Narrative (IN) Na Interacts
 with Designer Authors
 with Story 
 Director si g PlayerMediator • Director & Mediator collaborate ‣ Accept ‣ Re-plan around ‣ Fail user actions
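A minimal sketch of that accept / re-plan / fail policy (assumed representation: ground STRIPS-style actions as (name, preconditions, add effects, delete effects) over literal tuples; the talk's actual Mediator is more sophisticated). The key test is whether the goal g is still reachable from the state the player's action produced:

    from collections import deque
    from enum import Enum

    class Mediation(Enum):
        ACCEPT = "accept"
        REPLAN = "re-plan"
        FAIL = "fail"

    def reachable(state, goal, ground_actions, max_depth=8):
        """Breadth-first search over world states: can some action sequence reach the goal?"""
        start = frozenset(state)
        frontier, seen = deque([(start, 0)]), {start}
        while frontier:
            current, depth = frontier.popleft()
            if goal <= current:
                return True
            if depth == max_depth:
                continue
            for _, pre, add, dele in ground_actions:
                if pre <= current:
                    nxt = frozenset((current - dele) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))
        return False

    def mediate(plan_still_valid, state_after_action, goal, ground_actions):
        if plan_still_valid:
            return Mediation.ACCEPT        # player action is consistent with the authored plan
        if reachable(state_after_action, goal, ground_actions):
            return Mediation.REPLAN        # salvage: compute a new narrative plan
        return Mediation.FAIL              # goal unreachable: intervene or disallow the action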
  86. 86. Na [tree diagram: sᵢ, Pick-up, Disenchant, Pick-up, g, with Move and Wake-up branches to r, q, p and further expansions]
  87. 87. Why would players pick these? D Na Ps si g … … … …
  88. 88. Why would players pick these? D Na Ps si g … … … … • Valued as completions
  89. 89. Why would players pick these? D Na Ps si g … … … … • Valued as completions • If often, represents: ‣ More work for system
  90. 90. Why would players pick these? D Na Ps si g … … … … • Valued as completions • If often, represents: ‣ More work for system ‣ Failure of design
  91. 91. Automated Design Problem D Na Ps … …
  92. 92. Automated Design Problem •Managing the player’s intent, which fluctuates due to narrative intelligence D Na Ps … …
  93. 93. Automated Design Problem •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension D Na Ps … …
  94. 94. Automated Design Problem •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension ‣ Role-play D Na Ps … …
  95. 95. Automated Design Problem •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency D Na Ps … …
  96. 96. Automated Design Problem •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency ‣ … D Na Ps … …
  97. 97. •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency … …
  98. 98. Example Science of Game Design •Managing the player’s intent, which fluctuates due to narrative intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency … …
  99. 99. Example Science of Game Design •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem … …
  100. 100. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension Example Science of Game Design •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem … …
  101. 101. Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016) Modeling Story Understanding •Readers as 
 problem solvers (Gerrig and Bernardo, 1994) Na Ps
  102. 102. Modeling Story Understanding •Readers as 
 problem solvers •Planning is a model of problem solving (Gerrig and Bernardo, 1994) (Tate, 2001) D Na Ps Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  103. 103. Modeling Story Understanding •Readers as 
 problem solvers •Planning is a model of problem solving •Idea: narrative plan 
 as a proxy for 
 mental state (Gerrig and Bernardo, 1994) (Tate, 2001) D Na Ps Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  104. 104. Understanding as Planning • The QUEST Model of Comprehension ‣ Comprehension as Q&A • Predicts normative answers to questions ‣ Why? How? When? What enabled? What was the consequence? (Graesser and Franklin, 1990) D Na Ps Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  105. 105. Understanding as Planning The QUEST Graph D Na Ps Arthur disenchants
 Excalibur Excalibur 
 disenchanted Arthur wants
 disenchanted Arthur wants
 Excalibur Consequence Outcome Reason Event State Goal Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  106. 106. Understanding as Planning Example QUEST “Why?” Search D Na Ps Arthur disenchants
 Excalibur Excalibur 
 disenchanted Arthur wants
 disenchanted Arthur wants
 Excalibur Consequence Outcome Reason Why did Arthur disenchant Excalibur? Event State Goal Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  110. 110. Understanding as Planning Example QUEST “Why?” Search D Na Ps Arthur wants
 disenchanted Arthur wants
 Excalibur Reason Why did Arthur disenchant Excalibur? Goal Candidate Answers Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  111. 111. Understanding as Planning Plan to QUEST Graph Mapping Algorithm D Na Ps
       Given a plan π = ⟨S, B, L⟩:
       1. ∀s ∈ S, generate an event node eᵢ with B
          a. ∀ effects, generate a state node tᵢ with B
       2. Connect Consequence arcs for all tᵢ → eᵢ, eᵢ → tᵢ₊₁ in L
       3. For all literals in L, generate a goal node lᵢ with B
       4. Connect Reason arcs for all goal nodes, by ancestry
       5. Connect Outcome arcs for all lᵢ → eᵢ in L
       Question Answering in the Context of Stories Generated by Computers (Cardona-Rivera, Price, Winer, and Young, 2016)
  112. 112. Understanding as Planning Plan to QUEST Graph Mapping Algorithm (as above) D Na Ps Mapping data structure semantics to cognitive semantics Question Answering in the Context of Stories Generated by Computers (Cardona-Rivera, Price, Winer, and Young, 2016)
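A loose illustration of what the mapping produces (hypothetical Python, hand-built for the Knight's Tale fragment on the surrounding slides rather than derived from a plan; the paper's algorithm constructs these nodes and arcs from S, B, and L automatically). Answering "Why did Arthur disenchant Excalibur?" then amounts to collecting the goal nodes reachable over Reason arcs:

    # Toy QUEST-style graph for the Knight's Tale fragment; node ids and arc
    # directions are simplified for illustration.
    nodes = {
        "e_disenchant":   ("Event", "Arthur disenchants Excalibur"),
        "t_disenchanted": ("State", "Excalibur is disenchanted"),
        "g_disenchanted": ("Goal",  "Arthur wants Excalibur disenchanted"),
        "g_excalibur":    ("Goal",  "Arthur wants Excalibur"),
    }
    arcs = [
        ("e_disenchant",   "Consequence", "t_disenchanted"),
        ("g_disenchanted", "Outcome",     "t_disenchanted"),
        ("e_disenchant",   "Reason",      "g_disenchanted"),
        ("g_disenchanted", "Reason",      "g_excalibur"),
    ]

    def why(event_id):
        """Candidate answers to 'Why <event>?': goals reached by chaining Reason arcs."""
        answers, frontier = [], [event_id]
        while frontier:
            node = frontier.pop()
            for source, label, target in arcs:
                if source == node and label == "Reason":
                    answers.append(nodes[target][1])
                    frontier.append(target)
        return answers

    print(why("e_disenchant"))
    # ['Arthur wants Excalibur disenchanted', 'Arthur wants Excalibur']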
  113. 113. Understanding as Planning •Replicated QUEST Validation experiment ‣ Original: manual graph ‣ Ours: generated graph • Participants gave goodness-of-answer Likert data for Q&A pairs ‣ Predicted their answers ‣ Strong support for model (N=695) Evaluating the Mapping D Na Ps (Graesser, Lang, and Roberts, 1991) Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  114. 114. Understanding as Planning Takeaway D Na Ps si g Pick-up Disenchant Pick-up Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  115. 115. Understanding as Planning Takeaway D Na Ps Generation si g Pick-up Disenchant Pick-up Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  116. 116. Understanding as Planning Takeaway D Na Ps Generation si g Pick-up Disenchant Pick-up Comprehension Question Answering in the Context of Stories Generated by Computers
 (Cardona-Rivera, Price, Winer, and Young, 2016)
  117. 117. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension Example Science of Game Design •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem … …
  118. 118. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Role-play Example Science of Game Design … …
  119. 119. Determinants of Player Choice •Tripartite Model of Player Behavior ‣ Person ‣ Player ‣ Persona (Roles) The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 (Waskul and Lusk, 2004) Na Ps
  120. 120. Role-play as Preferred Actions •Roles ‣ Fighter ‣ Wizard ‣ Rogue •Participants (n=210) played 1-of-3 games ‣ Assigned Role (78) ‣ Chosen Role (91) ‣ No Explicit Role (41) The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 Na Ps
  121. 121. Role-play as Preferred Actions • Players prefer to act as expected from assigned/chosen role • Players with no explicit role self- select and remain consistent Mimesis Effect The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 Na Ps
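One simple way to operationalize the Mimesis Effect in a player model (a sketch with assumed role-consistency scores, not the study's data or analysis): score each available action by how consistent it is with the player's role and turn the scores into choice probabilities.

    import math

    # Hypothetical role-consistency scores for one choice point; not data from the study.
    ROLE_CONSISTENCY = {
        "Fighter": {"attack the guard": 1.0, "cast a spell": 0.1, "pick the lock": 0.2},
        "Wizard":  {"attack the guard": 0.2, "cast a spell": 1.0, "pick the lock": 0.3},
        "Rogue":   {"attack the guard": 0.2, "cast a spell": 0.1, "pick the lock": 1.0},
    }

    def choice_distribution(role: str, temperature: float = 0.25) -> dict:
        """Softmax over role-consistency: role-play modeled as a preference over actions."""
        scores = ROLE_CONSISTENCY[role]
        weights = {a: math.exp(s / temperature) for a, s in scores.items()}
        total = sum(weights.values())
        return {a: w / total for a, w in weights.items()}

    print(choice_distribution("Wizard"))
    # roughly {'attack the guard': 0.04, 'cast a spell': 0.91, 'pick the lock': 0.06}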
  122. 122. Role-play as Preferred Actions Takeaway The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 Chronology Na Ps
  125. 125. Role-play as Preferred Actions Takeaway The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 Chronology Inferences Na Ps
  127. 127. Role-play as Preferred Actions Takeaway The Mimesis Effect
 (Domínguez, Cardona-Rivera, Vance and Roberts, 2016) ! HONORABLE MENTION FOR BEST PAPER, CHI2016 Na Ps Chronology Preferred!
  128. 128. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Role-play Example Science of Game Design … …
  129. 129. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Desire for Agency Example Science of Game Design … …
  130. 130. Pursuing Greater Agency •Satisfying power to take meaningful action and see the results of our decisions & choices (Murray 1997) •What is meaningful? ‣ The effect of feedback ‣ Some choices were “greater agency” ones The Wolf Among Us Achieving the Illusion of Agency (Fendt, Harrison, Ware, Cardona-Rivera and Roberts, 2012) Na Ps
  131. 131. Foreseeing Meaningful Choices • Idea: Greater agency — greater difference (greater meaning) • Method: Measure choice story outcomes ‣ Formalize story content ‣ Define story content difference ‣ Compare choices through story content difference Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  132. 132. A Formalism of Story Content • The Event-Indexing Model ‣ Consumers “chunk” story information into events (Zwaan, Langston, Graesser 1995) picks uppicks up disenchants space time causal goals entities space time causal goals entities space time causal goals entities Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  133. 133. Story Content Difference •Situation Vector picks up space time causal goals entities Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  134. 134. Story Content Difference •Situation Vector Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) picks up forest time point 3 primary wants excalibur arthur, excalibur Na Ps
  135. 135. Story Content Difference •Situation Vector •Change Function ‣ Change : SV → [0, 5] [example vector: picks up | forest | time point 3 | primary | wants excalibur | arthur, excalibur] Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  136. 136. Story Content Difference •Situation Vector •Change Function ‣ Change : SV → [0, 5] [vectors compared: (picks up | forest | time point 3 | primary | wants excalibur | arthur, excalibur) and (picks up | forest | time point 1 | primary | wants excalibur | arthur, spellbook)] Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  137. 137. Story Content Difference •Situation Vector •Change Function ‣ Change : SV → [0, 5] [the two vectors above differ on two indices, so Change = 2] Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
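A minimal sketch of the comparison on these slides, assuming the five event-indexing dimensions shown (space, time, causality, goals, entities) and a change function that simply counts differing dimensions, giving a value in [0, 5]; the exact encodings are simplified for illustration:

    from dataclasses import dataclass, fields

    # Event-indexing-style situation vector (Zwaan, Langston & Graesser, 1995):
    # one slot per index; string/frozenset encodings here are assumptions.
    @dataclass(frozen=True)
    class SituationVector:
        space: str
        time: str
        causality: str
        goals: str
        entities: frozenset

    def change(a: SituationVector, b: SituationVector) -> int:
        """Number of situation indices on which two consecutive events differ (0-5)."""
        return sum(getattr(a, f.name) != getattr(b, f.name) for f in fields(SituationVector))

    # The two "picks up" events from the slide differ on time and entities: change = 2.
    sv_earlier = SituationVector("forest", "time point 1", "primary",
                                 "wants excalibur", frozenset({"arthur", "spellbook"}))
    sv_later   = SituationVector("forest", "time point 3", "primary",
                                 "wants excalibur", frozenset({"arthur", "excalibur"}))
    print(change(sv_earlier, sv_later))  # 2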
  138. 138. Agency as Function of Outcomes •Participants (N=88) played a custom CYOA ‣ 6 binary choices (outcome difference = 0 v. ≠ 0) •Answered 5-point Likert prompts for agency (Vermeulen et al. 2010) •Page Trend Test supports our theory:
       H₀: Md_C0 = Md_C5 = Md_C3 = Md_C1 = Md_C2 = Md_C1
       Hₐ: Md_C0 < Md_C5 < Md_C3 < Md_C1 < Md_C2 < Md_C1
       Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps
  139. 139. Agency as Function of Outcomes Takeaway Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Chronology Na Ps
  140. 140. Agency as Function of Outcomes Takeaway Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps Chronology Inferences
  141. 141. Agency as Function of Outcomes Takeaway Foreseeing Meaningful Choices
 (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014) Na Ps Chronology ρ(change, agency)
  142. 142. •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Desire for Agency Example Science of Game Design … …
  143. 143. Example Science of Game Design •Managing the player’s 
 intent, which fluctuates 
 due to narrative 
 intelligence ‣ Comprehension ‣ Role-play ‣ Desire for Agency •In the context of the Automated Design Problem … …
  144. 144. What are examples 
 of work in this 
 area? • Modeling Story Comprehension as Planning • Modeling Role-play as a Preference over Actions • Modeling Agency as a Function of Choice Outcome Differences
  145. 145. What are MIG-specific opportunities?
  146. 146. Fidelity for Designed Purpose •How much display fidelity is enough?
  147. 147. Fidelity for Designed Purpose •How much display fidelity is enough? •How much display fidelity is enough 
 for X purpose?
  148. 148. Fidelity for Designed Purpose •How much display fidelity is enough? •How much display fidelity is enough 
 for X purpose? •How much Y fidelity is enough for X purpose?
  149. 149. Fidelity for Designed Purpose •How much display fidelity is enough? •How much display fidelity is enough 
 for X purpose? •How much Y fidelity is enough for X purpose? ‣ A Design Space Scenario Interaction Display
  150. 150. Inferencing & Expectations •How do 
 mimetic interfaces elicit expectations? ‣ Interaction ‣ Motion ‣ Games Person’s Bounding Box Player’s Bounding Box Persona’s Bounding Box v. Person’s Bounding Box Player+Persona
 Bounding Box
  151. 151. Storytelling through Motion •Movement attracts attention first
  152. 152. Storytelling through Motion •Movement attracts attention first
  153. 153. Storytelling through Motion •Movement attracts attention first •Classes of movement
 (Kurosawa) ‣ Nature ‣ Groups of People ‣ Individuals ‣ Camera Every Frame a Painting. - https://www.youtube.com/watch?v=doaQC-S8de8
  154. 154. What are MIG-specific opportunities in the Science of Game Design? • Fidelity for Designed Purpose • Understanding the Role of 
 Inferencing & Expectations • Storytelling through Motion
  155. 155. Recap • What is a Science of Game Design and why bother? • What are examples of work in this area? • What are MIG-specific opportunities? Takeaway: The MIG Community is well-poised to pursue a science of game design
  156. 156. Call to Action • Embrace Design ‣ No optimal solutions, only tradeoffs (“it depends”) • Tripartite Model of Games Research ‣ Seek the invariant relationships Content Game Interface Cognition
