Multiplayer Computer Games - lecture slides 2013


Multiplayer computer games are distributed applications, which require real-time interaction, a consistent view of the data, and secure communication between the participants. This course focuses on realizing these goals in a networked environment. The topics cover, among other things, communication architectures, area-of-interest management, dead reckoning algorithms, and cheating prevention.


  1. 1. Multiplayer Computer Games – Jouni Smed, Department of Information Technology, University of Turku
  2. 2. Course Syllabus – credits: 5 cp; recommended prerequisites: Algorithms for Computer Games, knowledge of the basic concepts of computer networks; assessment: electronic examination; course web page:
  3. 3. Lectures – lecture times: Tuesdays 10–12 a.m. and Wednesdays 10–12 a.m., October 29 – November 27; lecture room Lambda, ICT Building
  4. 4. Examinations 1 (2) – electronic examination: opens December 2, 2013, closes March 31, 2014; you can take the examination at most three (3) times; for instructions and examination time reservations, see
  5. 5. Examinations 2 (2) – questions: based on both the lectures and the textbook; two questions, à 5 points; to pass the examination, at least 5 points (50%) are required; grade: g = ⌈p − 5⌉; questions are in English, but you can answer in English or in Finnish; remember to enrol in time!
  6. 6. Textbook – Jouni Smed & Harri Hakonen: Algorithms and Networking for Computer Games, John Wiley & Sons, 2006.
  7. 7. Outline of the Course – 8. Communication layers: physical platform, logical platform, networked application; 9. Compensating resource limitations: aspects of compensation, protocol optimization, dead reckoning, local perception filters, synchronized simulation, area-of-interest filtering; 10. Cheating prevention: attacking the hosts, tampering with network traffic, look-ahead cheating, collusion, offending other players
  8. 8. Components, Relationships and Aspects of a Game – [diagram: player, rules, goal and opponent linked by representation, definition, correspondence and obstruction; aspects: CHALLENGE, PLAY, CONFLICT]
  9. 9. So What Is Multiplaying? – multiplaying vs. single-playing: opponents are not controlled by a computer but by other humans; interaction amongst the multiple players: attempt-based (sports games), turn-based (board games, play-by-email games), real-time (real-time strategy games, first-person shooters)
  10. 10. Answer 1: High-Score List – attempt-based interaction; examples: pinball machines, Sea Wolf (1976), Asteroids (1979)
  11. 11. Answer 2: Multiple Game Controllers – multiple players using the same computer: multiple controllers, split screen; examples: Pong (1972), One on One (1983)
  12. 12. Answer 3: Hot Seat – one computer, one controller; one active player at a time (time-slicing); multiple players taking turns; the computer controls the passive players; example: Formula One Grand Prix (1991)
  13. 13. Answer 4: Server and (Dumb) Clients – multiple computers; the game runs on a server; the clients display the output and convey the input; examples: Multi-User Dungeon (1978), Xpilot (1991)
  14. 14. Answer 5: Players as Peers – multiple computers; the same game runs on each participating computer; players’ decisions are conveyed via a network; example: Doom (1993)
  15. 15. §8 Communication Layers – physical platform; logical platform; networked application
  16. 16. Classification of Shared-Space Technologies 1 (2) (Benford et al., 1998) – two axes: Artificiality (physical–synthetic) and Transportation (local–remote); quadrants: Physical Reality, Telepresence, Augmented Reality, Virtual Reality; physical reality: resides in the local, physical world, here and now; telepresence: a real-world location remote from the participant’s physical location, e.g. a remote-controlled robot
  17. 17. Classification of Shared-Space Technologies 2 (2) (Benford et al., 1998) – augmented reality: synthetic objects are overlaid on the local environment, e.g. a head-up display (HUD); virtual reality: the participants are immersed in a remote, synthetic world, e.g. a multiplayer computer game
  18. 18. §8.1 Physical Platform – resource limitations: bandwidth, latency, processing power for handling the network traffic; transmission techniques and protocols: unicasting, multicasting, broadcasting; Internet Protocol, TCP/IP, UDP/IP
  19. 19. Network Communication – [diagram: latency, bandwidth, protocol, reliability]
  20. 20. Fundamentals of Data Transfer 1 (3) – network latency (network delay): the amount of time required to transfer a bit of data from one point to another; one of the biggest challenges: it impacts directly the realism of the game experience, and we cannot do much to reduce it; origins: speed-of-light delay; endpoint computers, network hardware, operating systems; the network itself, routers
  21. 21. Fundamentals of Data Transfer 2 (3) – network bandwidth: the rate at which the network can deliver data to the destination host (bits per second, bps); network reliability: a measure of how much data is lost by the network during the journey from source to destination host; types of data loss: dropping (the data does not arrive), corruption (the content has been changed)
  22. 22. Fundamentals of Data Transfer 3 (3) – network protocol: a set of rules that two applications use to communicate with each other; packet formats: understanding what the other endpoint is saying; packet semantics: what the recipient can assume when it receives a packet; error behaviour: what to do if (when) something goes wrong
  23. 23. Internet Protocol (IP) – a low-level protocol used by hosts and routers; guides the packets from source to destination host; hides the transmission path: phone lines, LANs, WANs, wireless radios, satellite links, carrier pigeons, …; applications rarely use IP directly but rather the protocols written on top of IP: Transmission Control Protocol (TCP/IP) and User Datagram Protocol (UDP/IP)
  24. 24. TCP versus UDP – Transmission Control Protocol (TCP/IP): point-to-point connection; reliable transmission using acknowledgement and retransmission; stream-based data semantics; big overhead (data checksums); hard to ‘skip ahead’. User Datagram Protocol (UDP/IP): lightweight data transmission; differs from TCP: connectionless transmission, ‘best-efforts’ delivery, packet-based data semantics; packets are easy to process; transmission and receiving are immediate; no connection information for each host in the operating system; packet loss can be handled
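The packet-based semantics of UDP show up directly in the socket API. A minimal sketch over the loopback interface (the function name and buffer size are illustrative, not from the slides):

```python
import socket

def udp_echo_demo(message: bytes) -> bytes:
    """Send one UDP datagram over loopback and read it back."""
    # UDP is connectionless: no connect/accept handshake is needed.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))        # let the OS pick a free port
    rx.settimeout(2.0)
    port = rx.getsockname()[1]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(message, ("127.0.0.1", port))   # one call = one datagram

    data, _addr = rx.recvfrom(4096)  # arrives as a whole packet, not a byte stream
    tx.close()
    rx.close()
    return data
```

With TCP the same payload would arrive as an undelimited byte stream, and the kernel would keep connection state for each peer.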
  25. 25. Transmission Techniques – unicasting: a single receiver; multicasting: one or more receivers that have joined a multicast group; broadcasting: all nodes in the network are receivers
  26. 26. IP Broadcasting – using a single UDP/IP socket, the same packet can be sent to multiple destinations by repeating the send call (‘unicasting’): great bandwidth is required, and each host has to maintain a list of the other hosts; IP broadcasting allows a single transmission to be delivered to all hosts on the network: a special bit mask of receiving hosts is used as the address; with UDP/IP, the data is only delivered to the applications that are receiving on a designated port; broadcast is expensive: each host has to receive and process every broadcast packet; only recommended (and only guaranteed) on the local LAN; not suitable for Internet-based applications
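In the BSD socket API this is an explicit opt-in via the SO_BROADCAST option; a hedged sketch (the port number and payload below are made up for illustration):

```python
import socket

def make_broadcast_socket() -> socket.socket:
    # A UDP socket must explicitly opt in to broadcasting; without
    # SO_BROADCAST, sendto() to a broadcast address is refused.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    return sock

# Usage (commented out because it actually transmits on the local LAN;
# 7777 is an arbitrary example port):
# sock = make_broadcast_socket()
# sock.sendto(b"game announce", ("255.255.255.255", 7777))
```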
  27. 27. IP Multicasting 1 (3) – packets are only delivered to subscribers; subscribers must explicitly request packets from the local distributors; no duplicate packets are sent down the same distribution path; the original ‘publisher’ does not need to know all subscribers; receiver-controlled distribution
  28. 28. IP Multicasting 2 (3) – ‘distributors’ are multicast-capable routers; they construct a multicast distribution tree; each multicast distribution tree is represented by a pseudo-IP address (multicast IP address, class D address); some addresses are reserved by the Internet Assigned Numbers Authority (IANA), and local applications should use the designated local range; address collisions are possible; an application can specify the IP time-to-live (TTL) value, i.e. how far multicast packets should travel: 0: to the local host; 1: on the local LAN; 2–31: to the local site (network); 32–63: to the local region; 64–127: to the local continent; 128–254: deliver globally
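The TTL scoping above maps to a single socket option on the sender; a sketch assuming a UDP/IPv4 socket (the group address 239.255.0.1 and the port are illustrative):

```python
import socket

def make_multicast_sender(ttl: int) -> socket.socket:
    # IP_MULTICAST_TTL limits how far multicast packets travel,
    # per the scope table above (1 = local LAN, 128-254 = global).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

# Usage (commented out because it actually transmits):
# sock = make_multicast_sender(ttl=1)          # keep packets on the local LAN
# sock.sendto(b"entity update", ("239.255.0.1", 7777))
```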
  29. 29. IP Multicasting 3 (3) – provides desirable network efficiency; allows partitioning of different types of data by using multiple multicast addresses; the players can announce their presence by using the application’s well-known multicast address; older routers do not support multicasting; multicast-aware routers communicate directly by ‘tunneling’ data past the non-multicast routers (Multicast Backbone, MBone); the participant’s local router has to be multicast-capable
  30. 30. Selecting a Protocol 1 (4) – multiple protocols can be used in a single system: the question is not ‘which protocol should I use in my game’ but ‘which protocol should I use to transmit this piece of information’; using TCP/IP: reliable data transmission between two hosts; packets are delivered in order; error handling; relatively easy to use; point-to-point connections limit its use in large-scale multiplayer games; bandwidth overhead
  31. 31. Selecting a Protocol 2 (4) – using UDP/IP: lightweight; offers no reliability nor guarantees the order of packets; packets can be sent to multiple hosts; delivers time-sensitive information among a large number of hosts; more complex services have to be implemented in the application (serial numbers, timestamps); recovery of lost packets: a positive acknowledgement scheme, or a negative acknowledgement scheme (more effective when the destination knows the sources and their sending frequency); transmit a quench packet if packets are received too often
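A positive acknowledgement scheme over UDP can be sketched with serial numbers kept by the sender. This is a minimal in-memory illustration; the class and method names are mine, and the actual socket I/O and retransmission timer are omitted:

```python
class ReliableSender:
    """Positive-acknowledgement bookkeeping on top of an unreliable transport."""

    def __init__(self) -> None:
        self.next_seq = 0
        self.unacked = {}            # seq -> payload, awaiting acknowledgement

    def send(self, payload: bytes) -> tuple:
        # Stamp each outgoing packet with a serial number and keep a copy.
        pkt = (self.next_seq, payload)
        self.unacked[self.next_seq] = payload
        self.next_seq += 1
        return pkt                   # would be serialized and sent via UDP

    def on_ack(self, seq: int) -> None:
        # A positive ack from the receiver releases the stored copy.
        self.unacked.pop(seq, None)

    def to_retransmit(self) -> list:
        # Everything still unacknowledged when the retransmit timer fires.
        return sorted(self.unacked.items())
```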
  32. 32. Selecting a Protocol 3 (4) – using IP broadcasting: design considerations similar to (unicast) UDP/IP; limited to a LAN; not for games with a large number of participants; to distinguish different applications using the same port number (or multicast address): avoid the problem entirely (assign the necessary numbers), detect the conflict and renegotiate (notify the participants and direct them to migrate to a new port number), use protocol and instance magic numbers (each packet includes a magic number at a well-known position), or use encryption
  33. 33. Selecting a Protocol 4 (4) – using IP multicasting: provides quite an efficient way to transmit information among a large number of hosts; information delivery is restricted by time-to-live and group subscriptions; the preferred method for large-scale multiplayer games; how to separate the information flows among different multicast groups: a single group/address for all information, or several multicast groups to segment the information
  34. 34. §8.2 Logical Platform – communication architecture: peer-to-peer, client-server, server-network; data and control architecture: centralized, replicated, distributed
  35. 35. Communication Architecture – [diagram: single node, peer-to-peer, client-server, server-network]
  36. 36. Communication Architecture (cont’d) – logical connections: how the messages flow; physical connections: the wires between the computers, the limiting factor in communication architecture design; [diagram: two players p1 and p2 on a LAN]
  37. 37. Example: How Many Players Can We Put into a Two-Player LAN? – Distributed Interactive Simulation (DIS) protocol data unit (PDU): 144 bytes (1,152 bits); graphics: 30 frames/second; PDU rates: aircraft 12 PDU/second, ground vehicle 5 PDU/second, weapon firing 3 PDU/second, fully articulated human 30 PDU/second; bandwidth: Ethernet LAN 10 Mbps; assumptions: sufficient processor power, no other network usage, a mix of player types ⇒ LAN: 8,680 packets/second; fully articulated humans + firing = 263 humans; aircraft + firing = 578 aircraft; ground vehicles + firing = 1,085 vehicles; a typical NPSNET-IV DIS battle is limited to 300 players on a LAN by processor and network limitations
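The capacity figures on this slide follow directly from the stated numbers; a quick check (integer division used as a conservative rounding):

```python
# DIS back-of-the-envelope capacity of a 10 Mbps Ethernet LAN
PDU_BITS = 144 * 8            # one PDU is 144 bytes = 1,152 bits
LAN_BPS = 10_000_000          # 10 Mbps Ethernet

lan_pdus_per_second = LAN_BPS // PDU_BITS          # 8,680 packets/second

# Each player type adds its own PDU rate plus 3 PDU/s for weapon firing.
max_humans   = lan_pdus_per_second // (30 + 3)     # fully articulated humans
max_aircraft = lan_pdus_per_second // (12 + 3)
max_vehicles = lan_pdus_per_second // (5 + 3)

print(lan_pdus_per_second, max_humans, max_aircraft, max_vehicles)
# 8680 263 578 1085
```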
  38. 38. Multiplayer Client-Server Systems: Logical Architecture – client-server system: each player sends packets to the other players via a server; the server slows down message delivery; benefits of having a server: no need to send all packets to all players; compress multiple packets into a single packet; smooth out the packet flow; reliable communication without the overhead of a fully connected game; administration; [diagram: communication paths between players p1 … pn and the server]
  39. 39. Multiplayer Client-Server Systems: Physical Architecture (on a LAN) – all messages travel on the same wire; the server has to provide some added-value function: collecting data, compressing and redistributing information, additional computation; [diagram: players p1 … pn and the server on a LAN]
  40. 40. Traditional Client-Server – the server may act as a broadcast reflector, a filtering reflector, or a packet aggregation server; scalability problems: all traffic goes through the server ⇒ server-network architecture; [diagram: clients C connected to a single server S]
  41. 41. Multiplayer Server-Network Architecture – players can be located in the same place in the game world but reside on different servers: real world ≠ game world; each server serves a number of client players (LAN, modem, cable modem); server-to-server connections (WAN, LAN) transmit the world state information; scalability; [diagram: servers 1–3, each with clients pi,1 … pi,n]
  42. 42. Partitioning Clients across Multiple Servers – the servers exchange control messages among themselves to inform each other of the interests of their clients; reduces the workload on each server; incurs a greater latency; the total processing and bandwidth requirements are greater; [diagram: clients C partitioned across servers S]
  43. 43. Partitioning the Game World across Multiple Servers – each server manages the clients located within a certain region; a client communicates with different servers as it moves; possibility to aggregate messages; eliminates a lot of network traffic; requires advanced configuration: is a region visible from another region?; [diagram: the game world divided into regions, one server per region]
  44. 44. Server Hierarchies – the servers themselves act as clients; packet from an upstream server: deliver it to the interested downstream clients; packet from a downstream client: deliver it to the interested downstream clients, and if other regions are interested in the packet, deliver it to the upstream server; [diagram: a tree of servers S with clients C at the leaves]
  45. 45. Peer-to-Peer Architectures – in the ideal large-scale networked game design, avoid having servers at all: eventually we cannot scale out beyond a finite number of players; design goals: peer-to-peer communication, scalable within resources; peer-to-peer: communication goes directly from the sending player to the receiving player (or a set of them); [diagrams: server-network with players pi,j; peer-to-peer on a LAN with players p1 … pn]
  46. 46. Peer-to-Peer with Multicast Network – for a scalable multiplayer game on a LAN, use multicast; to utilize multicast, assign packets to the proper multicast groups; area-of-interest management (AOIM): assign outgoing packets to the right groups; receive incoming packets from the appropriate multicast groups; keep track of the available groups; even out stream information; [diagram: an AOIM software layer between each player p1 … pn and the network]
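One common way to realize such area-of-interest management is a grid mapping from world coordinates to multicast groups, so nearby players share a group. The cell size and the 239.0.x.y addressing below are illustrative assumptions, not from the slides:

```python
def region_group(x: float, y: float, cell: float = 100.0) -> str:
    """Map a game-world position to the multicast group of its grid cell."""
    # Hypothetical scheme: each cell-by-cell region of the world gets its
    # own group in the administratively scoped 239.0.0.0 address range.
    gx = int(x // cell) % 256
    gy = int(y // cell) % 256
    return f"239.0.{gx}.{gy}"

# A player at (250, 30) publishes its updates to the group of cell (2, 0),
# and subscribes to the groups of the neighbouring cells it can see.
```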
  47. 47. Peer-Server Systems – peer-to-peer: minimizes latency, consumes bandwidth; client-server: effective aggregation and filtering, increases latency; hybrid peer-server: over short-haul, high-bandwidth links use peer-to-peer; over long-haul, low-bandwidth links use client-server; each entity has its own multicast group; well-connected hosts subscribe directly to a multicast group (peer-to-peer); poorly connected hosts subscribe to a forwarding server; the forwarding server subscribes to the entities’ multicast groups and performs aggregation and filtering
  48. 48. Data and Control Architectures – where does the data reside and how can it be updated?; centralized: one node holds a full copy of the data; replicated: all nodes hold a full copy of the data; distributed: each node holds a partial copy of the data, and all nodes combined hold a full copy; consistency vs. responsiveness
  49. 49. Requirements for Data and Control Architectures – consistency: nodes should have the same view of the data; centralized: simple—one node binds them all!; replicated: hard—how to make sure that every replica gets updated?; distributed: quite simple—only one copy of each piece of data exists (but where?); responsiveness: nodes should have quick access to the data; centralized: hard—all updates must go through the central node; replicated: simple—just do it!; distributed: quite simple—just do it (if the data is in the local node) or send an update message (but to whom?)
  50. 50. Centralized Architecture – ensure that all nodes have identical information; [diagram: users around a centralized data store with synchronization, locks and state]
  51. 51. Problem: Who’s Got the Ball Now? – [diagram: players A and B both reaching for the ball at position x, y, z]
  52. 52. ‘Eventual’ Consistency – [diagram: users around a centralized data store with synchronization, locks and state; per-client FIFO event queues between each user and the store]
  53. 53. Pull and Push – the clients ‘pull’ information when they need it: make a request whenever data access is needed; problem: unnecessary delays if the state data has not changed; the server can ‘push’ the information to the clients whenever the state is updated: clients can maintain a local cache; problem: excessive traffic if the clients are interested in only a small subset of the overall data
  54. 54. Replicated Architecture – nodes exchange messages directly; ensure that all nodes receive the updates; determine a common global ordering for the updates; no central host; every node has an identical view; all state information is accessed from the local node
  55. 55. Distributed Architecture – state information is distributed among the participating players: who gets what?; what to do when a new player joins the game?; what to do when an existing player leaves the game? ⇒ entity ownership
  56. 56. Problem: Who’s Got the Ball Now? (Part II) – [diagram: players A and B]
  57. 57. Entity Ownership – ensure that a shared state can only be updated by one node at a time: exactly one node has the ownership of the state; the owner periodically broadcasts the value of the state; typically the player’s own representation (avatar) is owned by that player; locks on other entities are managed by a lock manager server: clients query it to obtain ownership and contact it to release it; the server ensures that each entity has only one owner; the server owns an entity if no one else does; failure recovery
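The lock manager's core rule, that each entity has at most one owner at a time, fits in a few lines. An in-memory sketch with invented names; a real server would add timeouts and failure recovery:

```python
class LockManager:
    """Grants exclusive ownership of shared entities, one owner at a time."""

    def __init__(self) -> None:
        self.owner = {}                     # entity -> owning node, or None

    def request(self, entity: str, node: str) -> bool:
        # Grant the lock only if the entity is unowned (or already ours).
        current = self.owner.get(entity)
        if current is None:
            self.owner[entity] = node
            return True
        return current == node

    def release(self, entity: str, node: str) -> None:
        # Only the current owner may release; the entity reverts to unowned
        # (i.e., implicitly back to the server).
        if self.owner.get(entity) == node:
            self.owner[entity] = None
```

Usage mirrors the sequence on the next slide: A requests the lock and is granted it, B's request is rejected, and only after A releases can B acquire it.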
  58. 58. Lock Manager: Example – [sequence diagram: A sends Request Lock to the lock manager and receives Grant Lock; B sends Request Lock and receives Reject Lock; A sends Update State]
  59. 59. Proxy Update – a non-owner sends an update request to the owner of the state; the owner decides whether it accepts the update; the owner serves as a proxy; generates an extra message on each non-owner update; suitable when non-owner updates are rare or many nodes want to update the state; [sequence diagram: A sends Update Position (A); B sends Request Update Position to A; A sends Update Position (B)]
  60. 60. Ownership Transfer – [sequence diagram: A sends Update Position (A); B sends Request Ownership to the lock manager; the lock manager notifies A of the lock transfer and A acknowledges it; the lock manager grants ownership to B; B sends Update Position (B)]
  61. 61. Ownership Transfer (cont’d) – the lock manager has the lock information at all times; if a node fails, the lock manager determines the current lock ownership state; lock ownership transfer incurs extra message overhead; suitable when a single node is going to make a series of updates and there is little contention among the nodes wishing to make updates
  62. 62. §8.3 Networked Application – Department of Defense (DoD): SIMNET, Distributed Interactive Simulation (DIS), High-Level Architecture (HLA); academic NVEs: PARADISE, DIVE, BrickNet, other academic projects; networked games and demos: SGI Flight, Dogfight and Falcon A.T., Doom, other multiplayer games
  63. 63. History and Evolution – [timeline 1980–2000: military: SIMNET, DIS, HLA, NPSNET, STOW; academic: RB2, Amaze, DVE, CVE, DIVE, Spline, MASSIVE, Coven; entertainment: MUD, Air Warrior, Doom, Ultima Online, WoW]
  64. 64. U.S. Department of Defense (DoD) – the largest developer of networked virtual environments (NVEs) for use as simulation systems: one of the first to develop NVEs with its SIMNET system; the first to do work on large-scale NVEs; SIMNET (simulator networking): begun 1983, delivered 1990; a distributed military virtual environment developed for DARPA (Defense Advanced Research Projects Agency); goal: develop a ‘low-cost’ NVE for training small units (tanks, helicopters, …) to fight as a team
  65. 65. SIMNET – technical challenges: how to fabricate high-quality, low-cost simulators; how to network them together to create a consistent battlefield; testbed: 11 sites with 50–100 simulators at each site; a simulator is the portal to the synthetic environment; participants can interact/play with others; play was unscripted free play confined to the chain of command
  66. 66. SIMNET Basic Components – i. an object-event architecture; ii. a notion of autonomous simulator nodes; iii. an embedded set of predictive modelling algorithms (i.e., ‘dead reckoning’)
  67. 67. i. Object-Event Architecture – models the world as a collection of objects: vehicles and weapon systems that can interact; a single object is usually managed by a single host; ‘selective functional fidelity’; models interactions between objects as a collection of events: messages indicating a change in the world or object state; the basic terrain and structures are separate from the collection of objects: if a structure can be destroyed, it has to be reclassified as an object whose state is continually transmitted onto the network
  68. 68. ii. Autonomous Simulator Nodes – individual players, vehicles, and weapon systems on the network are responsible for transmitting their current state accurately; autonomous nodes do not interact with the recipients in any other way; recipients are responsible for receiving state change information and making the appropriate changes to their local model of the world; lack of a central server: single point failures do not crash the whole simulation; players can join and leave at any time (persistency); each node is responsible for one or more objects: the node has to send update packets to the network whenever its objects have changed enough to notify the other nodes of the change; a ‘heartbeat’ message, usually every 5 seconds
  69. 69. iii. Predictive Modelling Algorithms – an embedded and well-defined set of predictive modelling algorithms called dead reckoning; average SIMNET packet rates: 1 per second for slow-moving ground vehicles, 3 per second for air vehicles; other packets: fire (a weapon has been launched), indirect fire (a ballistic weapon has been launched), collision (a vehicle hits an object), impact (a weapon hits an object)
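First-order dead reckoning is just linear extrapolation from the last reported state; a minimal sketch (the function name is mine):

```python
def dead_reckon(pos: tuple, vel: tuple, dt: float) -> tuple:
    """Predict an entity's position dt seconds after its last state update."""
    # Every node runs this on its ghost entities between update packets,
    # so the owner only needs to transmit when the prediction drifts too far.
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Example: the last update put a vehicle at (0, 0) moving (10, 5) units/s;
# half a second later every node draws it at (5.0, 2.5).
```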
  70. 70. Distributed Interactive Simulation (DIS) – derived from SIMNET: object-event architecture; autonomous distributed simulation nodes; predictive modelling algorithms to determine when each vehicle (node) should issue a PDU; covers more simulation requirements: allows any type of player on any type of machine; achieves larger simulations; the first version of the IEEE standard for DIS appeared in 1993; protocol data unit (PDU): the DIS standard defines 27 different PDUs, of which only 4 interact with the environment (entity state, fire, detonation, and collision); the rest of the defined PDUs (simulation control, electronic emanations, and supporting actions) are not supported and are disregarded by most DIS applications
  71. 71. Issuing PDUs – the vehicle’s node is responsible for issuing PDUs: an entity state PDU when the position, orientation, or velocity changes sufficiently (i.e., the others cannot accurately predict the position any more), or as a heartbeat if the time threshold (5 seconds) is reached after the last entity state PDU; a fire PDU; a detonation PDU: a fired projectile explodes, or the node’s vehicle has died (death self-determination); a collision PDU: the vehicle has collided with something; detection is left up to the individual node
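The decision of when to issue an entity state PDU combines a prediction-error threshold with the heartbeat timer. A sketch: the 5-second heartbeat is from the slide, while the distance threshold and the function name are illustrative assumptions:

```python
import math

def should_send_state(predicted, actual, seconds_since_last,
                      error_threshold=1.0, heartbeat=5.0):
    """Issue a PDU when the others' prediction has drifted, or as a heartbeat."""
    # 'predicted' is what the other nodes' dead reckoning shows;
    # 'actual' is the owner's true position.
    error = math.dist(predicted, actual)
    return error > error_threshold or seconds_since_last >= heartbeat
```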
  72. 72. Lost PDUs 1 (2) – packets are sent via unreliable UDP broadcast; state tables may differ among the hosts; a lost detonation PDU: ‘from the afterlife’!
  73. 73. Lost PDUs 2 (2) – lost entity state PDU: not a big problem; larger jumps on the display; lost fire PDU: an entity state PDU is received for which no ghost entry exists; lost collision PDU: the vehicle continues to be displayed as live; the next heartbeat packet resolves the situation
  74. 74. The Fully Distributed, Heterogeneous Nature of DIS – any computer that reads/writes PDUs and manages the state of those PDUs can participate in a DIS environment; the virtual environment can include virtual players (humans at computer consoles), constructive players (computer-driven players), and live players (actual weapon systems); problem of the advantages of the low-end machines: the less detail in the scenery, the better the visibility; problems with modelling: dynamic terrain (soil movement), environmental effects (weather, smoke, dust, …)
  75. 75. High-Level Architecture (HLA) – aims at providing a general architecture and services for distributed data exchange; while the DIS protocol is closely linked to the properties of military units and vehicles, HLA does not prescribe any specific implementation or technology: it could also be used with non-military applications (e.g., computer games); targeted towards new simulation developments; HLA was issued as IEEE Standard 1516 in 2000.
  76. 76. Academic Research – DoD’s projects: large-scale NVEs; most of the research is unavailable: lack of availability, lack of generality; the academic community has reinvented, extended, and documented what DoD has done: PARADISE, DIVE, BrickNet, and many more…
  77. 77. PARADISE – Performance Architecture for Advanced Distributed Interactive Simulation Environments; initiated in 1993 at Stanford University; a design for a network architecture for thousands of users; assigns a different multicast address to each active object; object updates similar to SIMNET and DIS; a hierarchy of area-of-interest servers: monitor the positions of objects and which multicast addresses are relevant
  78. 78. DIVE – Distributed Interactive Virtual Environment; Swedish Institute of Computer Science; built to solve problems of collaboration and interaction; simulates a large shared memory over a network; distributed, fully replicated database; the entire database is dynamic: add new objects; modify the existing databases; reliability and consistency
  79. 79. BrickNet – National University of Singapore, started in 1991; support for graphical, behavioural, and network modelling of virtual worlds; allows objects to be shared by multiple virtual worlds; no replicated database; the virtual world is partitioned among the various clients
  80. 80. Other Academic Projects – MASSIVE: different interaction media (graphics, audio and text); awareness-based filtering: each entity expresses a focus and a nimbus for each medium; Distributed Worlds Transfer and Communication Protocol (DWTP): each object can specify whether a particular event requires reliable distribution and what the event’s maximum update frequency is; Real-Time Transport Protocol (RTP/I): ensures that all application instances look as if all operations had been executed in the same order; Synchronous Collaboration Transport Protocol (SCTP): collaboration on closely coupled, highly synchronized tasks; the interaction stream has critical messages (especially the last one) which are sent reliably, while the rest are sent by best-effort transport
  81. 81. Networked Demos and Games – SGI Flight: 3D aeroplane simulator demo for Silicon Graphics workstations, 1983–84; first a serial cable between two workstations, then an Ethernet network; users could see each other’s planes, but there was no interaction; SGI Dogfight: modification of Flight, 1985; interaction by shooting; packets were transmitted at frame rate → clogged the network; limited to up to ten players; Falcon A.T.: commercial game by Spectrum Holobyte, 1988; dogfighting between two players using a modem
  82. 82. Networked Games: Doom – id Software, 1993; first-person shooter (FPS) for PCs; part of the game was released as shareware in 1993: extremely popular; created a gamut of variants; flooded LANs with packets at frame rate
  83. 83. Networked Games: ‘First Generation’ – peer-to-peer architectures: each participating computer is an equal to every other; inputs and outputs are synchronized; each computer executes the same code on the same set of data; advantages: determinism ensures that each player has the same virtual environment; relatively simple to implement; problems: persistency (players cannot join and leave the game at will), scalability (network traffic explodes with more players), reliability (coping with communication failures), security (too easy to cheat)
  84. 84. Networked Games: ‘Second Generation’ – client-server architectures: one computer (a server) keeps the game state and makes decisions on updates; the clients convey the players’ input and display the appropriate output but do not include (much) game logic; advantages: generates less network traffic; supports more players; allows persistent virtual worlds; problems: responsiveness (what if the connection to the server is slow or the server gets overburdened?), security (server authority abuse, client authority abuse)
  85. 85. Networked Games: ‘Third Generation’ – client-server architecture with prediction algorithms: clients use dead reckoning; advantages: reduces the network traffic further; copes with higher latencies and packet delivery failures; problems: consistency (if there is no unequivocal game state, how to solve conflicts as they arise?), security (packet interception, look-ahead cheating)
86. 86. Networked Games: ‘Fourth Generation’ n  Generalized client-server architecture n  the game state is stored in a server n  clients maintain a subset of the game state locally to reduce communication n  Advantages: n  traffic between the server and the clients is reduced n  clients can respond more promptly n  Problems: n  boundaries: what data is kept locally in the client? n  updating: does the subset of game state change over time? n  consistency: how to solve conflicts as they occur?
87. 87. Networked Games: ‘Fifth Generation’ n  Sharding and cloud servers n  the game world is ‘sharded’, e.g. geographically n  the game world exists in a 3rd party cloud service n  Advantages: n  possible to support more players and even out the server load n  outsourcing the networking n  Problems: n  players are divided geographically n  networking depends on the cloud service provider
  88. 88. Typical MMO Architecture login portal! client ! persistence ! server ! connection ! server ! shard ! client ! client ! shard ! connection ! server ! shard ! persistence ! server !
  89. 89. EVE Online running on IBM blade servers in London! SOL ! server ! SOL ! server ! proxy ! server ! client ! client ! main ! DB ! SOL ! server ! client ! SOL ! server !
  90. 90. Communication Layers (Revisited) n  physical platform n  n  n  n  logical platform n  n  n  bandwidth, latency unicasting, multicasting, broadcasting TCP/IP, UDP/IP peer-to-peer, client-server, server-network centralized, replicated, distributed networked application n  n  military simulations, networked virtual environments multiplayer computer games
  91. 91. §9 Compensating Resource Limitations n  aspects of compensation n  information principle equation n  consistency and responsiveness n  scalability protocol optimization n  dead reckoning n  local perception filters n  synchronized simulation n  area-of-interest filtering n 
  92. 92. Information-Centric View of Resources n  n  Bandwidth requirements increase with the number of players Each additional player n  n  n  must receive the initial game state and the updates that other users are already receiving introduces new updates to the existing shared state and new interactions with the existing players introduces new shared state n  n  Additional players require additional processor cycles at the existing player’s host Each additional player n  n  n  introduces new elements to render increases the amount of caching (new shared state ) increases the number of updates to receive and handle
93. 93. Information Principle n  The resource utilization is directly related to the amount of information that must be sent and received by each host and how quickly that information must be delivered by the network. n  The most scalable networked application is the one that does not require networking n  To achieve scalability and performance, the overall resource penalty incurred within a networked application must be reduced
  94. 94. Information Principle Equation Resources = M × H × B × T × P M = number of messages transmitted H = average number of destination hosts for each message B = average amount of network bandwidth required for a message to each destination T = timeliness in which the network must deliver packets to each destination P = number of processor cycles required to receive and process each message
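The equation above can be used as a quick back-of-the-envelope calculator. A minimal sketch in Python (the example numbers are illustrative assumptions, not from the slides):

```python
def resources(M, H, B, T, P):
    """Information principle equation: total resource utilization.

    M = number of messages transmitted,
    H = average number of destination hosts per message,
    B = average bandwidth required per message per destination,
    T = timeliness requirement factor,
    P = processor cycles required per message.
    """
    return M * H * B * T * P

# Halving the message rate M halves the total resource usage,
# but in practice forces a compensating increase elsewhere
# (e.g. more processing P for prediction at the receivers).
baseline = resources(M=100, H=10, B=784, T=1, P=500)
reduced = resources(M=50, H=10, B=784, T=1, P=500)
assert reduced == baseline / 2
```

The point of the model is not the absolute number but the multiplicative structure: reducing any one factor reduces the product, and the slides that follow (compression, aggregation, dead reckoning, interest filtering) each attack a different factor.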
95. 95. Information Principle Equation as a Tool n  Each reduction ⇒ a compensating increase or a compensating degradation in the quality n  How to modify depends on the application n  example: dead reckoning [Figure: which of the factors M, H, B, T and P dead reckoning affects]
96. 96. Information Principle Equation: Examples [Figure: two examples — message compression shrinks a message from 36 bytes to 24 bytes, and a server-network divides players p1,1…p3,n among Servers 1–3 — each affecting different factors M, H, B, T, P of the equation]
97. 97. Consistency and Responsiveness n  consistency n  similarity of the view to the data in the nodes belonging to a network n  responsiveness n  delay that it takes for an update event to be registered by the nodes n  traditionally, consistency is important n  distributed databases n  real-time interaction ⇒ responsiveness is important and consistency can be compromised ⇒  the game world can either be n  a dynamic world in which information changes frequently or n  a consistent world in which all nodes maintain identical information n  but it cannot be both
98. 98. Absolute Consistency n  To guarantee absolute consistency among the nodes, the data source must wait until everybody has received the information before it can proceed n  delay from original message transmission, acknowledgements, possible retransmissions n  The source can generate updates only at a limited rate n  Time for the communication protocol to reliably disseminate the state updates to the remote nodes [Figure: A moves from (10, 20) to (15, 25); B learns ‘A is at (10, 20)’ after 100 ms and its acknowledgement reaches A after 200 ms]
99. 99. High Update Rate n  There is a delay before the state change is received by other nodes n  If the state information is updated often, it might be updated while the previous update messages are still on the way n  Whilst some nodes see new values, others may still see older ones n  Because of the inherent transmission delay, one cannot update the shared state frequently and still ensure that all remote hosts have already received all previous state updates
  100. 100. Trade-off Spectrum n  Available network bandwidth must be allocated between n  messages for updating the state information and n  messages for maintaining a consistent view of the state information among participants. High update rate Absolute consistency The trade-off spectrum
  101. 101. Relay Model local global node network relay
  102. 102. Two-Way Relay ilocal olocal f g oglobal iglobal
  103. 103. Short-Circuit Relay ilocal f oglobal h olocal g iglobal
104. 104. Scalability n  ability to adapt to resource changes n  supporting a varying number of human players n  allocating synthetic players
  105. 105. Amdahl’s Law n  n  n  time required by serially executed parts cannot be reduced by parallel computation theoretical speedup: S(n) = T(1) / T(n) ≤ T(1) / (T(1) / n) = n execution time has a serial part Ts and parallel part Tp n  n  n  n  Ts + Tp = 1 α = Ts / (Ts + Tp) speedup with optimal serialization: S(n) = (Ts + Tp) / (Ts + Tp/n) ≤ 1/α example: α = 0.05 ⇒ S(n) ≤ 20
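The speedup formula on the slide is easy to check numerically. A minimal sketch:

```python
def speedup(alpha, n):
    """Amdahl's law: S(n) = (Ts + Tp) / (Ts + Tp/n) with Ts + Tp = 1,
    where alpha = Ts is the serial fraction of the execution time and
    n is the number of parallel processors (or hosts)."""
    return 1.0 / (alpha + (1.0 - alpha) / n)

# With a 5% serial part the speedup is bounded by 1/alpha = 20,
# no matter how many processors we add (the slide's example).
assert speedup(0.05, 10**9) < 20
assert abs(speedup(0.05, 20) - 10.2564102564) < 1e-6
```

For a multiplayer game, the serial part is the communication the players must agree on (event ordering), so the bound applies however much per-host computation is parallelized.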
  106. 106. Serial and Parallel Execution n  ideally everything should be calculated in parallel n  everybody n  plays their game regardless of others if there is communication, there are serially executed parts n  the players must agree on the sequence of events
  107. 107. Interaction in a Multiplayer Game Turn-based game! player 1! player 2! player 3! time! Real-time game! player 1! player 2! player 3! time!
  108. 108. Communication Capacity: Example client-server using unicasting in a 10 Mbps Ethernet using IPv6 n  each client sends 5 packets/s containing a 32-bit integer value n  n  bits in the message: d = 752 + 32 n  update frequency: f = 5 n  capacity of the communication channel: C = 107 n  number of unicast connections: n = ? n  d · f · n ≤ C ⇒ n ≤ 2551
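The slide's capacity example can be verified directly from the constraint d · f · n ≤ C:

```python
# Slide's example: each client sends f = 5 packets/s, each message is
# d = 752 + 32 bits (framing/header overhead plus one 32-bit integer),
# and the Ethernet channel capacity is C = 10 Mbps.
d = 752 + 32   # bits per message
f = 5          # updates per second per connection
C = 10**7      # channel capacity in bits per second

# From d * f * n <= C, the number of unicast connections n is bounded:
n_max = C // (d * f)
assert n_max == 2551
```

So the 10 Mbps channel saturates at 2551 unicast connections even though each client sends only a single integer five times a second; the per-message header overhead dominates.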
  109. 109. Communication Capacity Architecture Capacity requirement Single node 0 Peer-to-peer O(n)…O(n2) Client-server O(n) Peer-to-peer server-network O(n/m + m)…O(n/m + m2) Hierarchical server-network O(n)
  110. 110. §9.2 Protocol Optimization n  To transmit data n  n  n  n  n  allocate a buffer write data into the buffer transmit a packet containing the buffer contents Every network packet incurs a processing penalty To improve resource usage, reduce n  n  the size of each network packet (message compression) the number of network packets (message aggregation) M H B T P
111. 111. Message Compression n  Lossless compression n  change encoding n  no information loss (e.g. 10.0000001 ⇒ 10.0000001) n  Lossy compression n  some information may be lost (e.g. 10.000000001 ⇒ 10) [Figure: trade-off between error and number of bits]
  112. 112. Internal and External Compression Internal compression n  n  Manipulates a message based solely on its own content No reference to the previous message External compression n  Manipulates the message data within the context of what has already been transmitted n  n  n  n  delta information Better compression Dependency between messages Need for reliable transmission
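External compression encodes a message relative to what has already been transmitted. A minimal delta-information sketch (the field names are illustrative assumptions):

```python
def delta_encode(prev, curr):
    """External compression sketch: transmit only the fields that
    changed since the previous message (delta information)."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def delta_decode(prev, delta):
    """Receiver side: rebuild the full state from the previously
    received message and the delta."""
    state = dict(prev)
    state.update(delta)
    return state

prev = {"x": 10, "y": 20, "heading": 90}
curr = {"x": 15, "y": 20, "heading": 90}
delta = delta_encode(prev, curr)
assert delta == {"x": 15}                 # only the changed field is sent
assert delta_decode(prev, delta) == curr  # receiver reconstructs the state
```

The dependency between messages is visible here: if one delta is lost, every later state is reconstructed wrongly, which is why external compression needs reliable transmission (or periodic reference snapshots, as in PICA below).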
113. 113. Compression Technique Categories n  Internal, lossless: encode the message in a more efficient format and eliminate redundancy within the message n  Internal, lossy: filter irrelevant information or reduce the detail of the transmitted information n  External, lossless: avoid retransmitting information that is identical to that sent in previous messages n  External, lossy: avoid retransmitting information that is similar to that sent in previous messages
  114. 114. Compression Methods Huffman coding n  Arithmetic coding n  Substitutional compression n  n  LZ78, LZ77 Wavelets n  Vector quantization n  Fractal compression n 
115. 115. Protocol Independent Compression Algorithm (PICA) n  Lossless, external n  Transmit occasionally numbered reference state snapshots n  Subsequent update packets carry the snapshot number and delta information n  Snapshots are transmitted reliably ⇒ easy retransmission [Figure: Reference States #1–#3 interleaved with Entity State packets that refer to the latest snapshot]
116. 116. Application Gateways n  Compression can be localized to areas of the network having limited bandwidth n  packets travel in uncompressed form over the LAN n  an Application Gateway (AG) compresses them before they enter the WAN n  Quiescent entity service n  handles dead or inactive entities [Figure: clients on a LAN send uncompressed packets to an Application Gateway, which compresses them before the router forwards them to the WAN]
117. 117. Message Aggregation n  Reduce the number of messages by merging multiple messages n  Reduces the number of headers n  UDP/IP: 28 bytes n  TCP/IP: 40 bytes n  Merge all messages of the local entities into a single message n  suits when messages are transmitted at a regular frequency n  does not decrease the quality n  if each entity generates updates independently, the host must wait to get enough messages [Figure: three messages with separate headers merged into one message with a single header]
118. 118. Aggregation Trade-offs and Strategies n  Wait longer n  better potential bandwidth savings n  reduces the value of data n  Timeout-based transmission policy n  collect messages for a fixed timeout period n  guarantees an upper bound for delay n  reduction varies depending on the entities n  no entity updates ⇒ no aggregation but transmission delay n  Quorum-based transmission policy n  merge messages until there are enough n  guarantees a particular bandwidth and message rate reduction n  no limitation on delay n  Timeliness (timeout) vs. bandwidth reduction (quorum)
119. 119. Merging Timeout- and Quorum-Based Policies n  Wait until there are enough messages or the timeout expires n  After transmission of an aggregated message, reset the timeout and message counter n  Adapts to the dynamic entity update rates n  slow update rate ⇒ timeout bounds the delay n  rapid update rate ⇒ better aggregation, bandwidth reduction
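The merged policy can be sketched as a small aggregator that flushes on whichever trigger fires first. This is an illustrative sketch, not an implementation from the textbook; the class and method names are assumptions:

```python
import time

class Aggregator:
    """Merged timeout- and quorum-based aggregation policy sketch:
    flush when `quorum` messages have been collected, or when `timeout`
    seconds have passed since the first pending message."""

    def __init__(self, quorum=3, timeout=0.2, now=time.monotonic):
        self.quorum, self.timeout, self.now = quorum, timeout, now
        self.pending, self.first_at = [], None

    def add(self, msg):
        """Queue a message; returns an aggregated batch if the quorum fires."""
        if self.first_at is None:
            self.first_at = self.now()
        self.pending.append(msg)
        if len(self.pending) >= self.quorum:
            return self.flush()
        return None

    def poll(self):
        """Call periodically; returns a batch if the timeout has expired."""
        if self.pending and self.now() - self.first_at >= self.timeout:
            return self.flush()
        return None

    def flush(self):
        # Transmit and reset both the counter and the timeout reference.
        batch, self.pending, self.first_at = self.pending, [], None
        return batch

# Quorum path: the third message triggers an aggregated transmission.
agg = Aggregator(quorum=3, timeout=10.0)
assert agg.add("a") is None and agg.add("b") is None
assert agg.add("c") == ["a", "b", "c"]
```

At a slow update rate `poll()` fires first and the timeout bounds the delay; at a rapid rate `add()` reaches the quorum first, giving better aggregation, exactly the adaptive behaviour the slide describes.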
  120. 120. Aggregation Servers n  n  n  In many applications, each host only manages a single entity More available updates, larger aggregation messages can be quickly generated Large update pool ⇒ projection aggregation n  a set of entities having a common characteristic n  n  Aggregation server n  n  n  n  n  location, entity type hosts transmit updates to aggregation server(s) server collects updates from multiple hosts server disseminates aggregated update messages Distributes the workload across several processors Improves fault tolerance and overall performance
  121. 121. §9.3 Dead Reckoning n  navigational technique v t! (x, y)! (x0, y0)!
  122. 122. Dynamic Shared State n  Dynamic shared state constitutes the changing information that multiple nodes must maintain n  n  n  n  n  participants, their locations and behaviours environment itself, all objects, weather, natural laws,... In a highly dynamic environment, almost all information about the game world may change ⇒ needs to be shared Accuracy is fundamental to creating realistic environments Makes an environment available to multiple users n  without dynamic shared state, each user works independently (and alone)
  123. 123. Example of Dynamic Shared State I’m at (10, 20) A I’m at (15, 25) A near A is at (10, 20) B B Currently After 100 ms Time
  124. 124. Dead Reckoning of Shared State n  Transmit state update packets less frequently n  Use received information to approximate the true shared state n  In between updates, each node predicts the state of the entities
  125. 125. Dead Reckoning: Example Predicted Path! Transmit! Time 3:! Position (4, 5)! Velocity (3, 2)! Remote Prediction! Time 3.5:! Position (5.5, 6)!
  126. 126. Dead Reckoning Protocol DR protocol consists of two elements: n  prediction technique n  how the entity’s current state is computed based on previously received update packets n  convergence technique n  how to correct the state information when an update is received
  127. 127. Prediction and Convergence Current Predicted Path! Time 4:! Position (7, 7)! Time 3:! Position (4, 5)! Velocity (3, 2)! Time 4:! Position (6, 3)! Velocity (6, 3)! New Predicted Path!
  128. 128. Prediction Using Derivative Polynomials n  n  n  The most common DR protocols use derivative polynomials Involves various derivatives of the entity’s current position Derivatives of position 1.  2.  3.  velocity acceleration jerk
  129. 129. Zero-Order and First-Order Polynomials n  Zero-order polynomial n  n  the object’s instantaneous position, no derivative information n  n  position p predicted position after t seconds = p First-order polynomial n  velocity v n  predicted position after t seconds = vt + p n  update packet provides current position and velocity
  130. 130. Second-Order Polynomials n  We can usually obtain better prediction by incorporating more derivatives n  Second-order polynomial n  acceleration a n  predicted position after t seconds = ½at2 + vt + p n  update packet: current position, velocity, and acceleration n  popular and widely used n  easy to understand and implement n  fast to compute n  relatively good predictions of position
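The zero-, first- and second-order predictions collapse into one formula by zeroing the unused derivatives. A minimal component-wise sketch, checked against the slide's earlier example (position (4, 5), velocity (3, 2) at time 3 predicts (5.5, 6) at time 3.5):

```python
def predict(p, v, a, t):
    """Derivative-polynomial dead reckoning: predicted position after
    t seconds, given the last update's position p, velocity v and
    acceleration a (component-wise, here 2D tuples).
    Zero-order: v = a = 0; first-order: a = 0; second-order: full."""
    return tuple(pi + vi * t + 0.5 * ai * t * t
                 for pi, vi, ai in zip(p, v, a))

# First-order example from the earlier slide:
assert predict((4, 5), (3, 2), (0, 0), 0.5) == (5.5, 6.0)
```

The remote host evaluates this between updates; only when a new update packet arrives does it switch to a convergence algorithm to blend the displayed position onto the new predicted path.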
  131. 131. Hybrid Polynomial Prediction n  The remote host can dynamically choose the order of prediction polynomial n  n  first-order or second-order? First-order fewer computational operations n  good when acceleration changes frequently or when acceleration is minimal n  prediction can be more accurate without acceleration information n 
132. 132. Position History-Based Dead Reckoning n  Chooses dynamically between first-order and second-order prediction n  Evaluates the object’s motion over the three most recent position updates n  If acceleration is minimal or substantial, use first-order n  threshold cut-off values for each entity n  The acceleration behaviour affects the convergence algorithm selection n  Ignores instantaneous derivative information n  update packets contain only the most recent position n  velocity and acceleration are estimated from the position history n  Reduces bandwidth requirement n  Improves prediction accuracy in many cases
133. 133. Limitations of Derivative Polynomials n  Add more terms to the derivative polynomial—why not? n  With higher-order polynomials, more information has to be transmitted n  The computational complexity increases n  each additional term requires a few extra operations n  Sensitivity to errors n  derivative information must be accurate n  inaccurate values for the higher derivatives might actually make the prediction worse n  p(t) = ½at2 + vt + p
  134. 134. Limitations of Derivative Polynomials (cont’d) n  Hard to get accurate instantaneous information n  n  n  entity models typically contain velocity and acceleration higher-order derivatives must be estimated or tracked defining jerk (change in acceleration): predict human behaviour n  air resistance, muscle tension, collisions,… n  n  values of higher-order derivatives tend to change more rapidly than lower-order derivatives ⇒ High-order derivatives should generally be avoided n  The Law of Diminishing Returns n  more effort typically provides progressively less impact on the overall effectiveness of a particular technique
135. 135. Object-Specialized Prediction n  Derivative polynomials do not take into account n  what the entity is currently doing n  what the entity is capable of doing n  who is controlling the entity n  Managing a wide variety of dead reckoning protocols is expensive n  Aircraft making military flight manoeuvres n  constant acceleration and instant velocity ⇒ position trajectory and the aeroplane’s orientation angle n  All information does not need to be transmitted n  the dancing is relevant, not the footwork; the fire, not the flames,… n  In general, precise behaviour would be nice but overall behaviour is enough
  136. 136. Convergence Algorithms n  Prediction estimates the future value of the shared state n  Convergence tells how to correct inexact prediction n  Correct predicted state quickly but without noticeable visual distortion
  137. 137. Zero-Order Convergence (or Snap) Time 4.5:! Position (8.5, 8)! Current Predicted Path! New Predicted Path! Time 3.5:! Position (5.5, 6)! Time 4.5:! Position (9, 4.5)! Time 4:! Position (6, 3)! Velocity (6, 3)!
  138. 138. Linear Convergence Time 4.5:! Position (8.5, 8)! Current Predicted Path! New Predicted Path! Time 3.5:! Position (5.5, 6)! Convergence! Path! Convergence! Point! Time 5:! Position (12, 6)! Time 4:! Position (6, 3)! Velocity (6, 3)!
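Linear convergence interpolates the displayed position from where it is toward a convergence point on the new predicted path over a fixed interval. A minimal sketch (the parameter names are assumptions):

```python
def lerp(a, b, u):
    """Component-wise linear interpolation between points a and b."""
    return tuple(ai + (bi - ai) * u for ai, bi in zip(a, b))

def converge_linear(displayed, target, elapsed, period):
    """Linear convergence sketch: move the displayed position toward
    the convergence point `target` on the new predicted path, reaching
    it after `period` seconds have elapsed since the update arrived."""
    u = min(1.0, elapsed / period)
    return lerp(displayed, target, u)

# Halfway through the convergence period, the rendered position is
# midway between the stale displayed position and the convergence point.
assert converge_linear((8.5, 8.0), (12.0, 6.0), 0.25, 0.5) == (10.25, 7.0)
```

Zero-order convergence (snap) is the degenerate case `period → 0`; quadratic and cubic-spline convergence replace the straight convergence path with a curve so the velocity, not just the position, changes smoothly.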
  139. 139. Quadratic Convergence Time 4.5:! Position (8.5, 8)! Time 3.5:! Position (5.5, 6)! Current Predicted Path! New Predicted Path! Convergence! Convergence! Point! Path! Time 5:! Position (12, 6)! Time 4:! Position (6, 3)! Velocity (6, 3)!
  140. 140. Convergence with Cubic Spline Time 4.5:! Position (8.5, 8)! Current Predicted Path! Time 3.5:! Position (5.5, 6)! Convergence! Path! Convergence! Point! Time 5:! Position (12, 6)! Time 4:! Position (6, 3)! Velocity (6, 3)! New Predicted! Path! Time 6:! Position (18, 9)!
141. 141. Nonregular Update Generation n  By taking advantage of knowledge about the computations at the remote hosts, the source host can reduce the required state update rate n  The source host can use the same prediction algorithm as the remote hosts n  Transmit updates only when there is a significant divergence between the actual position and the predicted position
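The source-side test is simple: run the remote hosts' own prediction locally and transmit only when the divergence exceeds a threshold. A minimal sketch (the threshold value is an illustrative assumption):

```python
def should_send(actual, predicted, threshold=1.0):
    """Nonregular update generation sketch: the source host compares
    its true position against what the remote hosts are predicting,
    and transmits an update only when the Euclidean divergence
    exceeds `threshold` units."""
    err = sum((a - p) ** 2 for a, p in zip(actual, predicted)) ** 0.5
    return err > threshold

assert not should_send((5.0, 6.0), (5.5, 6.0))  # divergence 0.5: stay quiet
assert should_send((5.0, 6.0), (7.0, 6.0))      # divergence 2.0: update
```

Raising the threshold trades accuracy for bandwidth, which is exactly the dynamic balancing the next slide describes; the timeout mentioned two slides later guards against the case where the threshold is never crossed and no packets flow at all.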
142. 142. Advantages of Nonregular Transmissions n  Reduces update rates, if the prediction algorithm is reasonably accurate n  Makes it possible to give guarantees about the overall accuracy n  The source host can dynamically balance its network transmission resources n  limited bandwidth ⇒ increase the error threshold n  Nonregular updates provide a way to dynamically balance consistency and responsiveness based on the changing consistency demands
  143. 143. Lack of Update Packets n  n  n  If the prediction algorithm is really good, or if the entity is not moving significantly, the source might never send any updates New participants never receive any initial state Recipients cannot tell the difference between receiving no updates because n  n  n  n  the object’s behaviour has not changed the network has failed the object has left the game world Solution: timeout on packet transmissions
  144. 144. Environmental Effects ?! Wall!
  145. 145. Dead Reckoning: Advantages and Drawbacks n  n  n  Reduces bandwidth requirements because updates can be transmitted at lower-than-frame-rate Because hosts receive updates about remote entities at a slower rate than local entities, receivers must use prediction and convergence to integrate remote and local entities Does not guarantee identical view for all participants n  n  n  n  tolerate and adapt to potential differences Complex to develop, maintain, and evaluate Dead reckoning algorithms must often be customized for particular objects Are entities predictable?
146. 146. §9.4 Local Perception Filters n  exploiting humans’ perceptual limitations n  level-of-detail: less detail where it cannot be observed n  image, video and audio compression n  local perception filters n  exploit temporal perception n  show possibly out-of-date information (≠ dead reckoning) n  ensure consistent interaction n  allow introducing artificial delays (e.g., bullet time)
147. 147. Exploiting Perceptual Limitations n  Humans have inherent perceptual limitations n  Two approaches to exploit them: 1.  Information can be provided at multiple levels of detail and at different update rates 2.  Mask the timeliness characteristics of information
  148. 148. Exploiting Level-of-Detail Perception n  Nearby viewers n  n  n  n  Distant viewers n  n  n  n  expect full graphical details accurate structure, position, orientation update rate → local frame rate can tolerate less graphical details less accurate structure, position, orientation User’s focus is typically nearby Many inaccuracies cannot even be detected on a fine-resolution display A
  149. 149. Multiple-Channel Architecture n  Multiple independent data channels for each entity Low-resolution channel (x, y) (x, y) Low-frequency, low-bandwidth information High-resolution channel High-frequency, high-bandwidth information The overall bandwidth requirements are reduced
  150. 150. Implementation Examples n  Client-server n  n  n  each transmission identifies its channel server dispatches data from channels to clients Multicast group for each region n  assign multiple addresses for each region n  n  Multicast group for each entity n  n  one group provides all of the entities’ high-resolution channels, another group provides all of the entities’ low-resolution channels assign multiple addresses for each entity Different reliabilities to each channel n  low-frequency updates are important n  lost packets can have a significant impact
  151. 151. Selecting the Channels to Provide n  How many channels to provide for an entity? n  n  n  more channels: better service for subscribers each channel imposes a cost (bandwidth and computational) To satisfy the trade-off, three channels for each entity is typically needed n  channels provide order-of-magnitude differences in structural and positional accuracy n  packet rate n  Rigid-body channel! Approximate-body channel! Full-body channel! Far-range viewers! Mid-range viewers! Near-range viewers!
  152. 152. Rigid-Body Channel Demands the least bandwidth and computation n  Represents the entity as a rigid body n  Ignores changes in the entity’s structure n  Update types: n  n  position n  orientation n  structure
  153. 153. Approximate-Body Channel n  More frequent position and orientation updates n  Hosts can render a rough approximation of the entity’s dynamic structure n  n  appendages and other articulated parts Provided information is entity-specific n  corresponds to the dominant changes of the structure
  154. 154. Common Approximations n  Radial length n  n  n  Articulation vector n  n  n  motion towards and away from a centre point update packets include the current radius the current direction of the appendage models a rotating turret, arms and legs Local co-ordinate system points n  n  subset of the entity’s significant vertices relative to the entity’s local co-ordinate system the entity is composed of multiple components Radius!
  155. 155. Full-Body Channel n  Highest level of detail n  High bandwidth and computational requirements n  viewer can subscribe to a limited number of full-body channels n  Frequent transmissions n  Position and orientation n  Accurate structure information
156. 156. Local Perception Filters (LPFs) n  introduced by Sharkey, Ryan & Roberts (1998) n  a method for hiding communication delays in networked virtual environments n  exploits the human perceptual limitations by rendering entities at slightly out-of-date locations based on the underlying network delays n  causality of events is preserved n  rendered view may have temporal distortions n  rendered view ≠ real view
  157. 157. Active and Passive Entities n  An active entity (i.e., player) n  n  n  n  n  takes actions on its own generates updates human participants, computercontrolled entities cannot be predicted typically rendered using state updates adjusted for the latency n  A passive entity n  n  n  n  n  reacts to events from the environment, does not generate its own actions inanimate objects (e.g., rocks, balls, books) active entities interact with passive entities rendered according to the latency of its nearest active entity reacts instantaneously to the actions of a nearby active entity
  158. 158. Rules of LPFs 1.  2.  3.  Player should be able to interact in real-time with the nearby entities. Player should be able to view remote interactions in real-time, although they can be out-of-date. Temporal distortions in the player’s perception should be as unnoticeable as possible. p r n q
  159. 159. Interaction Between Players n  interaction = communication between the players n  n  local players: immediate remote players: subject to the network latency n  n  interaction = players exchanging passive entities n  n  time frame = current time – communication delay passive entities are predictable ⇒ they can be rendered in the past (or in the future) a passive entity can change its time frame dynamically the nearer to a local player, the closer it is rendered to the current time n  the nearer to a remote player, the closer it is rendered to its time frame n 
160. 160. Example: Pong n  Two active entities: paddles n  movement unpredictable n  One passive entity: ball n  movement predictable n  Latency of d seconds
  161. 161. The View of the Blue Player t
  162. 162. The View of the Red Player t
  163. 163. Pong: A Summary n  n  n  n  Each player sees a different representation of the same playing field The ball accelerates as it approaches the local player’s paddle The ball decelerates as it approaches the remote player’s paddle The ball’s rendered position alternates between n  the current time n  n  meaningful interaction for local player a past time reference network latency n  observing meaningful interaction for remote player n 
  164. 164. 3½-Dimensional Temporal Contour n  Represent each player’s perception as a fourdimensional co-ordinate system (x, y, z, t) n  x, y, z: the spatial position relative to the local player’s current position n  n  local player at (0, 0, 0) t: the time associated with rendered information from that position n  n  local player rendered at current time: t = 0 opposing player: t = −d d! (0, 0, 0)!
  165. 165. Temporal Contours in Pong Blue player Red player
  166. 166. Temporal Contour (from the Blue Player’s Perspective) t y x
  167. 167. Temporal Distortion Blue view Orange view
  168. 168. Properties of the Co-ordinate System n  n  n  n  n  The co-ordinate system is defined independently for each player Depends on the player’s current position and the delay of arriving information Changes dynamically as the player moves or as the network properties change Defines how a passive object should be rendered Two interacting objects are rendered at the same time reference point n  n  n  Each user perceives all collisions correctly Objects that approach the local user are rendered in the user’s time Smooth movement
  169. 169. Generalizing the Local Temporal Contour n  Limitations: n  n  n  players are capable of moving along a single axis only supports two active objects only Generalization to a 4D co-ordinate system requires preserving for the local user: n  interacting naturally with passive objects in vicinity n  seeing remote interactions (passive-to-passive, passive-toactive) naturally n  perceiving smooth motion of remote objects
  170. 170. Local Temporal Contour n  n  n  The local user at (0, 0, 0) Each active object is assigned a t value corresponding to its latency Interpolate the contour over all active objects including local n  Contour defines a suitable t value for each spatial point y! local! x! t!
  171. 171. Linear Temporal Contours d(p, r) p r x p r x d(r, p)
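A linear temporal contour along one axis assigns each spatial point a rendering delay interpolated between the active players' delays. An illustrative sketch under that reading of the diagrams (the function name and list representation are assumptions):

```python
def temporal_contour(x, players):
    """Linear temporal contour sketch along one axis: `players` is a
    list of (position, delay) pairs for the active entities, with
    delay 0 for the local player. A passive entity at position x is
    rendered with the delay interpolated linearly between the two
    surrounding players (clamped to the nearest player outside them)."""
    players = sorted(players)  # sort by position
    if x <= players[0][0]:
        return players[0][1]
    if x >= players[-1][0]:
        return players[-1][1]
    for (x0, d0), (x1, d1) in zip(players, players[1:]):
        if x0 <= x <= x1:
            u = (x - x0) / (x1 - x0)
            return d0 + (d1 - d0) * u

# Local player p at x=0 (delay 0), remote player r at x=10 (delay 100 ms):
# a ball halfway between them is rendered about 50 ms in the past.
contour = [(0.0, 0.0), (10.0, 0.1)]
assert abs(temporal_contour(5.0, contour) - 0.05) < 1e-12
```

This matches the Pong behaviour: as the ball approaches the local paddle the delay interpolates toward 0 (the ball appears to accelerate), and toward the remote paddle it interpolates toward d (the ball appears to decelerate). Aggregating several players, as on the next slides, just adds more (position, delay) pairs to the list.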
  172. 172. 2½-Dimensional Temporal Contour t y x
  173. 173. Multiple Players: Aggregating the Temporal Contours d(p, s) d(p, r) d(p, q) p r q s x p r q s x d(p, s) d(p, r) d(p, q)
174. 174. Worth Noting n  simple linear functions instead of continuous temporal contours n  LPFs are the ‘opposite’ of dead reckoning n  no prediction for remote players n  the closer the players get, the more noticeable the temporal distortion becomes n  in critical proximity interaction becomes impossible n  no mêlée
175. 175. Problems n  possible visual disruptions on impact ⇒ shadows (see the lecture notes for details) n  sudden changes in the player’s position or delay can cause unwanted effects n  if a player leaves the game, what happens to the temporal contour? n  third-party intrusion: someone with a high delay ‘blocks’ the incoming entities n  jitter: entities start to bounce back and forth in time
  176. 176. Bullet Time movies: visual effect combining slow motion with dynamic camera movement n  computer games: player can slow down the surroundings to have more time to make decisions n  easy in single player games: slow down the game! n  how about multiplayer games? n 
177. 177. Bullet Time in Multiplayer Games n  two approaches: n  speed up the player n  slow down the other players n  if a player can slow down/speed up the time, how will it affect the other players? n  localize the temporal distortion to the immediate surroundings of the player n  but how to do that? ⇒ local perception filters!
  178. 178. Adding Bullet Time to LPFs player using the bullet time has more time to react ⇒ the delay between bullet-timed player and the other players increases n  add artificial delay to the temporal contour n 
  179. 179. p Shoots r Without Bullet Time d(p, r) p r x p r x d(r, p)
  180. 180. p Shoots r While p Is Using Bullet Time d(p, r) b(p) p r x d(r, p) r b(p) p x
  181. 181. p Shoots r While r Is Using Bullet Time b(p) d(p, r) p r x p r x b(p) d(r, p)
  182. 182. 2½-Dimensional Temporal Contour and Bullet Time t y x
  183. 183. Open Questions n  non-linear temporal contours n  how to compute quickly? n  noticeable benefits (if any)? n  numerical evaluation n  measuring n  the distortion and its effects practical evaluation n  how well does it work? n  does it allow new kinds of games?
  184. 184. §9.5 Synchronized Simulation n  n  used in Age of Empires (1997) command categories: n  n  n  n  n  deterministic: computer indeterministic: human distribute the indeterministic commands only deterministic commands are derived from pseudo-random numbers → distribute the seed value only consistency checks and recovery mechanisms
  185. 185. Synchronized Simulation in Age of Empires n  n  n  n  Age of Empires game series by Ensemble Studios Real-time strategy (RTS) game Max 8 players, each can have up to 200 moving units ⇒ 1600 moving units ⇒ large-scale simulation Rough breakdown of the processing tasks: n  n  n  30% graphic rendering 30% AI and path-finding 30% running the simulation and maintenance
  186. 186. Synchronized (or Simultaneous) Simulation n  Large simulation ⇒ a lot of data to be transmitted n  Trade-off: computation vs. communication n  n  ‘If you have more updating data than you can move on the network, the only real option is to generate the data on each client’ Run the exact same simulation in each client
  187. 187. Handling Indeterminism n  ‘Indeterministic’ events are either n  predictable (computers) or n  unpredictable (humans) n  Only the unpredictable events have to be transmitted ⇒ communication n  apply an identical set of commands that were issued at the same time n  The predictable events can be calculated locally on each client ⇒ computation n  Pseudo-random numbers are deterministic n  All clients use the same seed for their random number generator n  disseminate the seed (diagram: pseudo-random number generator taking a seed and producing the next random number)
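The seed-distribution idea above can be demonstrated with any deterministic pseudo-random number generator; this minimal sketch uses Python's `random.Random` as a stand-in for the game's own generator (the function name is illustrative, not from the slides).

```python
import random

def simulate_deterministic_events(seed, steps):
    """Each client runs this locally; identical seeds yield identical results."""
    rng = random.Random(seed)            # per-client generator, seeded identically
    return [rng.randrange(100) for _ in range(steps)]

# The server disseminates only the seed; every client then derives the same
# 'indeterministic-looking' event sequence from it, so only the seed (not the
# events) needs to travel over the network.
shared_seed = 3200
client_a = simulate_deterministic_events(shared_seed, 5)
client_b = simulate_deterministic_events(shared_seed, 5)
assert client_a == client_b              # the simulations stay in sync
```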
  188. 188. Communication Turns (diagram: commands a–g issued during turns 100–103 are scheduled for execution in later turns; ‘Execute commands’ at each turn boundary, timeline from 3200 to 4000 ms)
  189. 189. Division of the Communication Turn (diagram: a single communication turn of 200 msec, scaled to ‘round-trip ping’ time estimates, starts by processing all messages and is divided into four 50 msec frames scaled to the rendering speed, i.e. 20 fps; a longer turn indicates high Internet latency with normal machine performance, longer frames indicate poor machine performance with normal latency)
  190. 190. Features n  Guaranteed delivery using UDP n  message packet: execution turn, sequence number n  if messages are received out of order, send a resend request immediately n  if an acknowledgement arrives late, resend the message n  Hidden benefits n  clients are hard to hack n  any simulation running differently is out-of-sync n  Hidden problems n  programming is demanding n  out-of-sync errors n  checksums for everything n  50 Gb message logs
  191. 191. Lessons Learned n  Players can tolerate a high latency as long as it remains constant n  for an RTS game, even 250–500 ms latencies are still playable n  Jitter (the variance of the latency) is a bigger problem n  consistent slow response is better than alternating between fast and slow n  Studying player behaviour helps to identify problematic situations n  hectic situations (like battles) cause spikes in the network traffic n  Measuring the communication system early on helps the development n  identify bottlenecks and slowdowns n  Educating programmers to work on multiplayer environments
  192. 192. §9.6 Area-of-Interest Filtering n  Area-of-interest filters n  each host provides explicit data filters n  filters define the interest in data n  Multicasting n  use existing routing protocols to restrict the flow of data n  divide the entities or the region into multicast groups n  Subscription-based aggregation n  group available data into fine-grained ‘channels’ n  hosts subscribe to the appropriate channels
  193. 193. Why to Do Data Flow Restriction? (diagram: messages such as ‘Fire’, ‘Release lock’, ‘(Δx, Δy, Δz)’, ‘Join’, and ‘Destroy object’ flowing between the nodes)
  194. 194. Awareness and the Spatial Model of Interaction (diagram: the user’s and the television’s video auras, the television’s video nimbus, and the user’s video focus) Key concepts: n  medium: communication type n  aura: subspace in which interaction can occur n  awareness: quantifies one object’s significance to another object (in a particular medium) n  focus: represents an observing object’s interest n  nimbus: represents an observed object’s wish to be seen n  adapters: can modify an object’s auras, foci, and nimbi
  195. 195. Nimbus-Focus Information Model n  Nimbus: entity data should only be made available to entities capable of perceiving that information n  Focus: each entity is only interested in information from a subset of entities n  Ideally, all information is processed individually and delivered only to entities observing it ⇒ what about scaling up? n  processing resources n  each packet has a custom set of destination entities ⇒ hard to utilize multicasting n  Approximate the pure nimbus-focus model
  196. 196. Area-of-Interest Filtering Subscriptions n  Nodes transmit information to a set of subscription managers (or area-of-interest managers, filtering servers) n  Managers receive subscription descriptions from the participating nodes n  For each piece of data, the managers determine which of the subscription requests are satisfied and disseminate the information to the corresponding subscribing nodes n  AOI filtering: n  restricted form of the pure nimbus-focus model n  ignores nimbus specifications n  subscription descriptions specify the entity’s focus n  reduces the processing requirements of the pure model
  197. 197. Subscription Interest Language n  Allows the nodes to express formally their interests in the game world n  Subscription description can be arbitrarily complex n  a sequence of filters or assertions n  based on the values of packet fields n  Boolean operators n  programmable functions (OR (EQ TYPE "Tank") (AND (EQ TYPE "Truck") (GT LOCATION-X 50) (LTE LOCATION-X 75) (GT LOCATION-Y 83) (LTE LOCATION-Y 94) (EQ PACKET-CLASS INFRARED)))
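The LISP-style description above is evaluated by the subscription manager against the fields of each packet. A hypothetical Python sketch of the same filter, written as a predicate over a packet dictionary (field names mirror the slide's example; the function name is invented for illustration):

```python
def tank_or_nearby_ir_truck(packet):
    """Hypothetical AOI filter matching the slide's LISP example:
    any tank, or an infrared-class truck inside a bounding box."""
    return (packet["TYPE"] == "Tank"
            or (packet["TYPE"] == "Truck"
                and 50 < packet["LOCATION-X"] <= 75
                and 83 < packet["LOCATION-Y"] <= 94
                and packet["PACKET-CLASS"] == "INFRARED"))

# A subscription manager applies each subscriber's predicate to every packet
# and forwards only the matching ones to that subscriber.
packets = [
    {"TYPE": "Tank", "LOCATION-X": 0, "LOCATION-Y": 0, "PACKET-CLASS": "VISUAL"},
    {"TYPE": "Truck", "LOCATION-X": 60, "LOCATION-Y": 90, "PACKET-CLASS": "INFRARED"},
    {"TYPE": "Truck", "LOCATION-X": 10, "LOCATION-Y": 90, "PACKET-CLASS": "INFRARED"},
]
matches = [p for p in packets if tank_or_nearby_ir_truck(p)]
assert len(matches) == 2   # the tank and the in-range infrared truck
```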
  198. 198. When to Use Customized Information Flows? 1.  Nodes cannot afford the cost of receiving and processing unnecessary messages 2.  Nodes are connected over an extremely low-bandwidth network 3.  Multicast or broadcast protocols are not available 4.  Client subscription patterns change rapidly 5.  No a priori categorizations of data n  Problem when a large number of hosts are interested in the same piece of information n  customized data streams ⇒ unicast ⇒ the same data travels multiple times over the same network
  199. 199. Intrinsic and Extrinsic Filtering n  Extrinsic filtering (on the network header) n  filters packets based on network properties n  efficient implementation n  filtering cannot be as sophisticated n  Intrinsic filtering (on the application data) n  the filter must inspect the application content n  can dynamically partition data based on fine-grained entity interests
  200. 200. Multicasting n  Transmit a packet to a multicast group (multicast address) n  Packets are delivered to nodes who have subscribed to the multicast group n  Explicit subscription (join group) and unsubscription (leave group) n  A node can subscribe to multiple groups simultaneously n  Transmission to a group does not require subscription n  Challenge: how to partition the available data among a set of multicast groups? n  Each multicast group should deliver a set of related information n  Worst case: each node is interested in a small subset of information from every group ⇒ must subscribe to every multicast address ⇒ broadcast n  Methods: n  group-per-entity allocation n  group-per-region allocation
  201. 201. Group-per-Entity Allocation 1 (2) n  A different multicast address to each entity n  Each host receives information about all entities within its focus n  Subscription filter is executed locally n  Subscribe to the groups which have interesting entities n  Entities cannot specify their nimbus; no control over which hosts receive the information n  Example: PARADISE n  each entity subscribes to nearby entities n  control directional information interests n  nearby entities that are behind n  nearby and distant entities that are in front
  202. 202. Group-per-Entity Allocation 2 (2) n  Multiple multicast group addresses to each entity n  position updates n  infrared data n  Information at a finer granularity n  More accurate focus by group subscriptions n  Nodes need a way to learn about nearby entities n  Entity directory service tracks the current state of the entities n  entity transmits state information periodically n  directory servers collect the information and provide it to the entities when requested
  203. 203. Beacon Servers (diagram: four interconnected beacon servers)
  204. 204. Drawbacks n  Consumes a large number of multicast addresses n  Address collisions become quite probable n  Network routers have to process the corresponding large number of join and leave requests n  Group search induces network traffic n  Network cards can only support a limited number of simultaneous subscriptions n  too many subscriptions ⇒ ‘promiscuous’ mode
  205. 205. Group-per-Region Allocation n  n  n  n  Partition the world into regions and assign each region to a multicast group An entity transmits to groups corresponding to the region(s) that cover its location The entity subscribes to groups corresponding to interesting regions Entities have limited control over their nimbus but less control over their focus
  206. 206. Region Bounds n  An entity has to change its target group(s) throughout its lifetime n  track the bounds of the current region n  learn the multicast address of a new region n  boundaries and addresses assigned to the regions are often static n  In grid-based region assignment there are many points at which multiple grids meet n  Near these corners an entity has to subscribe to several groups
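A minimal sketch of grid-based group-per-region allocation, assuming square cells and a hypothetical mapping from cell coordinates to multicast groups (both function names are invented for illustration); it shows how an entity near a grid corner ends up subscribed to several groups at once:

```python
def region_of(x, y, cell=100):
    """Map a position to its grid cell (one multicast group per cell)."""
    return (int(x // cell), int(y // cell))

def subscriptions(x, y, radius, cell=100):
    """All cells overlapped by the bounding box of the entity's
    area of interest of the given radius."""
    x0, y0 = region_of(x - radius, y - radius, cell)
    x1, y1 = region_of(x + radius, y + radius, cell)
    return {(cx, cy) for cx in range(x0, x1 + 1) for cy in range(y0, y1 + 1)}

# In the middle of a cell a single group suffices...
assert subscriptions(150, 150, 20) == {(1, 1)}
# ...but near a corner the entity must join four groups simultaneously.
assert subscriptions(105, 105, 20) == {(0, 0), (0, 1), (1, 0), (1, 1)}
```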
  207. 207. Environment vs. Regular Tessellation (diagram: a game environment partitioned by a regular tessellation)
  208. 208. Hybrid Multicast Aggregation n  Balance between fine-grained data partitioning and multicast grouping n  Three-tiered interest management system: 1.  Group-per-region scheme segments data based on location 2.  Group-per-entity scheme allows the receiver to select individual entities 3.  Area-of-interest filter subscriptions
  209. 209. Projections (diagram: a type–location space with projections ‘Cars between (85,70) and (110,85)’ and ‘Tanks between (10,25) and (30,40)’, and a composed projection) n  Projection aggregation server n  collect data for a projection n  transmit aggregated packets (projection aggregations) n  Projection composition n  merge the interest specifications of the component projections
  210. 210. Taxonomy of Interest Management (tree diagram: interest management divides into aura-based, extended aura-based, and zone-based approaches; zone-based into static and dynamic tessellation; further divisions are based on visibility and on content attributes)
  211. 211. Compensating Resource Limitations: Recapitulation n  IPE: Resources = M × H × B × T × P n  Aspects: n  consistency and responsiveness n  scalability n  Protocol optimization n  Dead reckoning n  Local perception filters n  Synchronized simulation n  Area-of-interest filtering
  212. 212. Retake: Can a Clever Game Design Hide the Communication Latency? n  assume: a multiplayer game with interaction amongst the players n  does real-time response really require real-time communication? n  no! (e.g. high-score lists) n  instead of technical solutions the game design can hide latency n  here, three concepts related to n  time span: short, medium, long n  abstractness of decisions: operational, tactical, strategic
  213. 213. 1. Operational level: Short active turns n  serialize the game events so that each player has a turn ➝ a turn-based game n  active turns: make decisions n  passive turns: view the game events unfold n  passive turns should be short and interesting n  view statistics n  prepare for the next active turn n  view replays of past events n  candidates: attempt-based sports games n  javelin, long jump, ski jump, darts…
  214. 214. Example: A sports game (diagram: players p₁–p₄ take active turns in sequence while the others see render turns, fillers, and replays)
  215. 215. 2. Tactical level: Semi-autonomous avatars n  tactical commands are not so time-sensitive n  operational: ‘move forward’, ‘turn left’, ‘shoot’ n  tactical: ‘attack’, ‘guard’, ‘flee’ n  the avatars are semi-autonomous n  they receive tactical commands n  they decide the operations themselves n  response is not immediate n  copes with high latency n  outcome can be something other than the player expected: free will!
  216. 216. Example: Semi-autonomous avatars (diagram: player p₁ issues a₁: attack ➝ move right, reload gun, aim, shoot; p₂ issues a₂: guard ➝ stay put, hide, scout; p₃ issues a₃: flee ➝ run away)
  217. 217. 3. Strategic level: Interaction via proxies n  participating players do not have to be present at the same time n  players set proxies that can later interact with other players n  proxies n  fully autonomous avatars n  game entities (mechanistic objects or gizmos) n  programmable objects
  218. 218. Example: Entrappers
  219. 219. The Bottom Line n  latency is caused by technical limitations n  the speed of light! n  cabling, routers, operating system… n  latency can be hidden n  by technical methods n  by clever game design n  so why not use them both!
  220. 220. §10 Cheating Prevention n  traditional cheating in computer games n  cracking the copy protection n  fiddling with the binaries: boosters, trainers, etc. n  here, the focus is on multiplayer online games n  exploiting technical advantages n  exploiting social advantages n  cheaters’ motivations n  vandalism and dominance n  peer prestige n  greed
  221. 221. The goals of cheating prevention n  protect the sensitive information n  cracking passwords n  pretending to be an administrator n  provide a fair playing field n  tampering with the network traffic n  colluding with other players n  uphold a sense of justice inside the game world n  abusing beginners n  gangs
  222. 222. Network Security n  Military n  private networks → no problem n  Business, industry, e-commerce, … n  ‘traditional’ security problems n  Entertainment industry n  multiplayer computer games, online games n  specialized problems
  223. 223. Taxonomy of Online Cheating 1 (4) n  Cheating by compromising passwords n  dictionary attacks n  Cheating by social engineering n  password scammers n  Cheating by denying service from peer players n  denial-of-service (DoS) attack n  clog the opponent’s network connection
  224. 224. Taxonomy of Online Cheating 2 (4) n  Cheating by tampering with the network traffic n  reflex augmentation n  packet interception n  look-ahead cheating n  packet replay attack n  Cheating with authoritative clients n  receivers accept commands blindly n  requests instead of commands n  checksums from the game state (diagram: a tampered ‘fire!’/‘rotate!’ command stream)
  225. 225. Taxonomy of Online Cheating 3 (4) n  Cheating due to illicit information n  access to replicated, hidden game data n  compromised software or data n  Cheating related with internal misuse n  privileges of system administrators n  Cheating by exploiting a bug or design flaw n  repair the observed defects with patches n  limit the original functionality to avoid the defects n  good software design in the first place!
  226. 226. Taxonomy of Online Cheating 4 (4) n  Cheating by collusion n  two or more players play together without informing the other participants n  one cheater participates as two or more players n  Cheating related to virtual assets n  demand ⇒ supply ⇒ market ⇒ money flow ⇒ cheating n  Cheating by offending other players n  acting against the ‘spirit’ of the game
  227. 227. Breaking the control protocol: Maladies & remedies n  malady: change data in the messages and observe effects n  remedy: checksums (MD5 algorithm) n  malady: reverse engineer the checksum algorithm n  remedy: encrypt the messages n  malady: attack with packet replay n  remedy: add state information (pseudo-random numbers) n  malady: analyse messages based on their sizes n  remedy: modify messages and add a variable amount of junk data to messages
  228. 228. MD5 algorithm n  message digest = a constant-length ‘fingerprint’ of the message n  no one should be able to produce n  two messages having the same message digest n  the original message from a given message digest n  R. L. Rivest: MD5 algorithm n  produces a 128-bit message digest from an arbitrary-length message n  collision attack: different messages with the same fingerprint n  finding collisions is (now even technically!) possible n  what is the future of message digest algorithms?
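The fingerprint idea can be tried out directly with Python's `hashlib`; note that, as the slide says, MD5 collisions are practical today, so a SHA-2 family function would be the safer choice for the checksum use described on the previous slide.

```python
import hashlib

message = b"fire at (10, 25)"
digest = hashlib.md5(message).hexdigest()   # 128-bit digest, 32 hex characters
assert len(digest) == 32

# Changing even one byte of the message changes the fingerprint completely,
# which is what lets the receiver detect tampered packets.
tampered = b"fire at (10, 26)"
assert hashlib.md5(tampered).hexdigest() != digest
```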
  229. 229. Illicit information n  access to replicated, hidden game data n  removing the fog of war n  compromised graphics rendering drivers n  cheaters have more knowledge than they should have → passive cheating n  compromised software or data n  counter-measures in a networked environment n  centralized: server maintains integrity among the clients n  distributed: nodes check the validity of each other’s commands to detect cheaters
  230. 230. Exploiting design defects n  what can we do to poor designs! n  repair the observed defects with patches n  limit the original functionality to avoid the defects n  client authority abuse n  information from the clients is taken at face value regardless of its reliability n  unrecognized (or unheeded) features of the network n  operation when the latencies are high n  coping with DoS and other attacks
  231. 231. Denial-of-Service (DoS) Attack n  Attack types: n  logic attack: exploit flaws in the software n  flooding attack: overwhelm the victim’s resources by sending a large number of spurious requests n  Distributed DoS attack: attack simultaneously from multiple (possibly cracked) hosts n  IP spoofing: forge the source address of the outgoing packets n  Consequences: n  wasted bandwidth, connection blockages n  computational strain on the hosts
  232. 232. Analysing DoS Activity n  Backscatter analysis n  Spoofing using random source addresses n  A host on the Internet receives unsolicited responses n  An attack of m packets, monitor n addresses n  Expectation of observing an attack: E(X) = nm/2³²
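The expectation E(X) = nm/2³² follows because each spoofed source address is drawn uniformly from the 2³² IPv4 addresses, so each of the m attack packets elicits a response that lands in the monitored range with probability n/2³². A quick check with assumed figures (the monitored /8 network and the attack size are illustrative, not from the slides):

```python
# Backscatter expectation: each of m spoofed packets elicits a response to a
# uniformly random IPv4 address, so a monitor covering n addresses expects
# to observe n/2**32 of the responses.
def expected_backscatter(n_monitored, m_packets):
    return n_monitored * m_packets / 2**32

# Assumed example: monitoring a /8 network (2**24 addresses) during an
# attack of one million packets.
e = expected_backscatter(2**24, 1_000_000)
assert round(e, 2) == 3906.25   # roughly 3900 observed response packets
```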
  233. 233. Look-ahead cheating (diagram: p₁ announces a₁ = Rock; p₂ delays its own announcement until it has received a₁ and then answers a₂ = Paper)
  234. 234. Two problems n  delaying one’s decision n  announce own action only after learning the opponent’s decision n  one-to-one and one-to-many n  inconsistent decisions n  announce different actions for the same turn to different opponents n  one-to-many
  235. 235. Lockstep protocol 1.  Announce a commitment to an action. n  commitment can be easily calculated from the action but the action cannot be inferred from the commitment n  formed with a one-way function (e.g., hash) 2.  When everybody has announced their commitments for the turn, announce the action. n  everybody knows what everybody else has promised to do 3.  Verify that the actions correspond to the commitments. n  if not, then somebody is cheating…
  236. 236. Lockstep protocol (diagram: p₁ commits c₁ = H(a₁) = 4736 for a₁ = Rock and p₂ commits c₂ = H(a₂) = 1832 for a₂ = Scissors; after the commitments are exchanged, p₁ reveals a₁ = Rock but p₂ announces a₂ = Paper, and since H(a₂) = 5383 ≠ c₂ the cheating is detected)
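A sketch of the commit-then-reveal round, using SHA-256 from `hashlib` in place of the generic one-way function H on the slides; a salt is added here (an assumption beyond the slides) because a tiny action space like Rock/Paper/Scissors could otherwise be brute-forced from the commitment.

```python
import hashlib

def commit(action, salt):
    """One-way commitment; salted so that 'Rock' etc. cannot be guessed."""
    return hashlib.sha256(salt + action.encode()).hexdigest()

def verify(action, salt, commitment):
    """Check that the revealed action matches the earlier commitment."""
    return commit(action, salt) == commitment

# Step 1: both players announce commitments only.
c1 = commit("Rock", b"p1-salt")
c2 = commit("Scissors", b"p2-salt")

# Step 2: actions (and salts) are revealed and verified.
assert verify("Rock", b"p1-salt", c1)
# p2 tries to switch to Paper after seeing Rock -> caught in verification:
assert not verify("Paper", b"p2-salt", c2)
```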
  237. 237. Loosening the synchronization 1(2) n  the slowest player dictates the speed n  short turns n  time limits for the announcements n  asynchronous lockstep protocol n  sphere of influence: synchronization is needed only when the players can affect each other in the next turn(s) n  otherwise, the players can proceed asynchronously
  238. 238. Loosening the synchronization 2(2) n  pipelined lockstep protocol n  player can send several commitments which are pipelined n  drawback: look-ahead cheating if a player announces action earlier than required n  adaptive pipeline protocol n  measure the actual latencies between the players n  grow or shrink the pipeline size accordingly
  239. 239. Drawbacks of the lockstep protocol n  requires two separate message transmissions n  commitment and action are sent separately n  slows down the communication n  requires a synchronization step n  the slowest player dictates the pace n  improvements: asynchronous lockstep, pipelined lockstep, adaptive pipeline lockstep n  does not solve the inconsistency problem!
  240. 240. Idea #1: Let’s get rid of the repeat! n  send only a single message n  but how can we be sure that the opponent cannot learn the action before announcing its own action? n  the message is an active object, a delegate n  program code to be run by the receiver (host) n  delegate’s behaviour cannot be worked out by analytical methods alone n  guarantees the message exchange in a possibly hostile environment n  delegate provides the action once the host has sent its own action using the delegate
  241. 241. Example with two players (diagram: originators Aₚ and Aᵣ exchange delegates Dₚ and Dᵣ, which carry the committed actions cₚ(aₚ) and cᵣ(aᵣ) and their responses)
  242. 242. Threats n  what if the host delays or prevents the delegate’s message from getting to its originator? n  n  what if the originator is malicious and the delegate spies or wastes the host’s resources? n  n  the host will not receive the next delegate until the message is sent sandbox: the host restricts the resources available to the delegate how can the delegate be sure that it is sending messages to its originator? n  communication check-up
  243. 243. Communication check-up n  the delegate sends a unique identification to its originator n  static and dynamic information n  the delegate waits until the originator has responded correctly n  check-ups are done randomly n  probability can be quite low n  host cannot know whether the transmission is the actual message or just a check-up (diagram: delegates Dₚ and Dᵣ checking up with their originators Aₚ and Aᵣ)
  244. 244. Idea #2: Peer pressure n  players gossip the other players’ actions from the previous turn(s) n  compare gossip and recorded actions; if there are inconsistencies, ban the player n  cheating is detected only afterwards n  gossiping imposes a threat of getting caught n  gossip is piggybacked in the ordinary messages n  no extra transmissions are required n  how to be sure that the gossip is not forged? n  rechecking with randomly selected players
  245. 245. How much is enough? n  example: 10 players, 60 turns, 1 cheater who forges 10% of messages, gossip from one previous turn n  1% gossip: P(cheater gets caught) = 0.44 n  5% gossip: P(cheater gets caught) = 0.91 n  10% gossip: P(cheater gets caught) = 0.98 n  example: 100 players, 60 turns, 1 cheater who forges 10% of messages n  1% gossip: P(cheater gets caught) = 0.98 n  example: 10 players, 360 turns, 1 cheater who forges 10% of messages n  1% gossip: P(cheater gets caught) = 0.97
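These figures can be approximated with a deliberately simplified model (an assumption here, not necessarily the textbook's exact derivation): on each forged turn, each of the n − 1 other players independently gossips the cheater's action with the given probability, and one inconsistent piece of gossip suffices to catch the cheater.

```python
def p_caught(players, turns, forge_rate, gossip_rate):
    """Probability that the cheater is caught at least once
    (simplified independence model, see lead-in)."""
    forged_turns = turns * forge_rate                  # e.g. 10% of 60 turns
    p_turn = 1 - (1 - gossip_rate) ** (players - 1)    # someone gossips that turn
    return 1 - (1 - p_turn) ** forged_turns

# Roughly reproduces the slide's figures:
assert abs(p_caught(10, 60, 0.1, 0.01) - 0.44) < 0.05   # slide: 0.44
assert p_caught(10, 60, 0.1, 0.10) > 0.95               # slide: 0.98
assert p_caught(100, 60, 0.1, 0.01) > 0.95              # slide: 0.98
```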
  246. 246. Message n  action for the current turn t n  delegate for the next turn t + 1 n  set of actions (i.e., gossip) from the previous turn t − 1 n  the message mₚᵗ comprises the action aₚᵗ, the delegate Dₚᵗ⁺¹, and the gossip Gₚᵗ⁻¹ containing actions such as aᵢᵗ⁻¹ and aⱼᵗ⁻¹
  247. 247. Collusion n  imperfect information games n  infer the hidden information n  outwit the opponents n  collusion = two or more players play together without informing the other participants n  how to detect collusion in an online game? n  players can communicate through other media n  one player can have several avatars
  248. 248. Co-operation and collusion n  Forms of co-operation n  soft play n  alliancing, ganging n  expert help, scouting n  self-sacrificing support n  If co-operation is not allowed by the rules of the game, it is collusion n  collusion = covert co-operation
  249. 249. Example: Co-operation in Age of Empires n  Forming alliances n  Sharing knowledge n  Donating resources n  Sharing control n  Providing intelligence
  250. 250. Key questions about collusion n  What are the different types of collusion? n  different types seem to be lumped together in the literature n  How to detect collusion reliably? n  finding algorithms that recognize intentional behaviour from unintentional n  How to detect collusion as early as possible? n  to minimize the harm done by colluders n  How to prevent collusion? n  the co-operation between the maintenance and collusion detection mechanism
  251. 251. Roles in collusion n  We must discern the roles of the partakers in a game n  player ≠ participant n  Two types of collusion (i) collusion among the players n  collusion happens inside the game n  analyse whether the players’ behaviour diverges from what is reasonably expectable (ii) collusion among the participants n  collusion happens outside the game n  analyse the participants behind the players to detect whether they are colluding
  252. 252. Players and participants (diagram: an instance of the game contains the players; behind each player is a participant — a human, a bot, or a sweatshop)
  253. 253. Level of agreement n  Express collusion n  explicit, hidden agreement n  Tacit collusion n  no agreement but common interests n  example: attacking the strongest/weakest opponent n  Semi-collusion n  collusion on certain areas, competition on other areas n  example: sharing a resource site, battling elsewhere