Scene Graphs & Component Based Game Engines – Bryan Duggan
A presentation I made at the Fermented Poly meetup in Dublin about Scene Graphs & Component Based Game Engines. Lots of examples from my own game engine BGE - where almost everything is a component. Get the code and the course notes here: https://github.com/skooter500/BGE
How we optimized our Game - Jake & Tess' Finding Monsters Adventure – Felipe Lira
Presentation I gave at Unite Boston 2015. I'll cover a few techniques we used to optimize our Unity mobile game - Jake & Tess' Finding Monsters Adventure
Unity - Internals: memory and performance – Codemotion
by Marco Trivellato - In this presentation we will provide in-depth knowledge about the Unity runtime. The first part will focus on memory and how to deal with fragmentation and garbage collection. The second part will cover implementation details and their memory vs cycles tradeoffs in both Unity4 and the upcoming Unity5.
Built for performance: the UIElements Renderer – Unite Copenhagen 2019 – Unity Technologies
In this technical talk, we will describe the science behind the UIElements rendering system, built from the ground up for retained-mode UI. It uses every CPU/GPU trick in the book to render thousands of different elements onscreen in a fraction of a millisecond, all on one thread. This powerful UI performance and optimization tool also supports complex features like clipping and vector graphics, even on low-end devices.
Speaker: Wessam Bahnassi – Unity
Watch the session on YouTube: https://youtu.be/zeCdVmfGUN0
Andrew Musselman, Committer and PMC Member, Apache Mahout, at MLconf Seattle ... – MLconf
Andrew recently joined Lucidworks to head up their Advisory practice, and is a Committer and PMC member on the Apache Mahout project.
Abstract summary
Apache Mahout: Distributed Matrix Math for Machine Learning:
Machine learning and statistics tools like R and Scikit-learn are declarative, flexible, and extensible, but they scale poorly. “Big Data” tools such as Apache Spark, Apache Flink, and H2O distribute well, but have rudimentary functionality for machine learning and are not easily extensible. In this talk we present Apache Mahout, which provides a Scala-based, R-like DSL for doing linear algebra on distributed systems, letting practitioners quickly implement algorithms on distributed matrices. We will highlight new features in version 0.13 including the hybrid CPU/GPU-optimized engine, and a new framework for user-contributed methods and algorithms similar to R’s CRAN.
We will cover some history of Mahout, introduce the R-Like Scala DSL, provide an overview of how Mahout is able to operate on matrices distributed across multiple computers, and how it takes advantage of GPUs on each computer in a cluster creating a hybrid distributed/GPU-accelerated environment; then demonstrate the kinds of normally complex or unfeasible problems users can easily solve with Mahout; show an integration which allows Mahout to leverage the visualization packages of projects such as R, Python, and D3; and lastly explain how to develop algorithms and submit them to the Mahout project for other users to use.
Rajat Monga at AI Frontiers: Deep Learning with TensorFlow – AI Frontiers
In this talk at the AI Frontiers Conference, Rajat Monga talks about TensorFlow, which has enabled cutting-edge machine learning research at the top AI labs in the world. At the same time it has made the technology accessible to a large audience, leading to some amazing uses. TensorFlow is used for classification, recommendation, text parsing, sentiment analysis and more. This talk goes over the design that makes it fast, flexible, and easy to use, and describes how we continue to make it better.
Presentation from DICE Coder's Day (2010 November) by Andreas Fredriksson in the Frostbite team.
Goes into detail about Scope Stacks, which are a systems programming tool for memory layout that provides
• Deterministic memory map behavior
• Single-cycle allocation speed
• Regular C++ object life cycle for objects that need it
This makes it very suitable for games.
TensorFlow is the most popular machine learning framework nowadays. TensorFlow Lite (TFLite), open sourced in late 2017, is TensorFlow's runtime designed for mobile devices, especially Android phones. TFLite is getting more and more mature. Among the most interesting new components introduced recently are its GPU delegate and new NNAPI delegate. The GPU delegate uses OpenGL ES compute shaders on Android platforms and Metal shaders on iOS devices. The original NNAPI delegate was an all-or-nothing design: if one of the ops in the compute graph was not supported by NNAPI, the whole graph was not delegated. The new one is a per-op design; when an op in a graph is not supported by NNAPI, that op automatically falls back to the CPU runtime. I'll give a quick review of TFLite and its interpreter, then walk the audience through example usage of the two delegates and the important parts of their source code.
Decima Engine: Visibility in Horizon Zero DawnGuerrilla
Download the full presentation here: http://www.guerrilla-games.com/read/decima-engine-visibility-in-horizon-zero-dawn
Abstract: Horizon Zero Dawn presented the Decima engine with new challenges in rendering large and dense environments. In particular, we needed to be able to quickly query a very large set of potential objects to find which should be visible. This talk looks at the problems we faced moving from more constrained Killzone levels to Horizon's open world, and our approach to fast visibility queries using the PS4's asynchronous compute hardware. It also covers our recent work on efficiently collecting batches of object instances during the query to reduce load on the entire rendering pipeline. A basic familiarity with GPU compute will be helpful to get the most out of this talk.
Game engines have long been at the forefront of exploiting the ever-increasing parallel compute power of both CPUs and GPUs. This talk covers how parallel compute is utilized in practice on multiple platforms today in the Frostbite game engine, and how we think the parallel programming models, hardware, and software in the industry should look in the next 5 years to help us make the best games possible.
Teaching Recurrent Neural Networks using Tensorflow (May 2016) – Rajiv Shah
This talk will provide an introduction to recurrent neural networks (RNNs). RNNs are designed to model sequential information and have provided impressive results for a variety of problems, such as speech recognition, language modeling, translation and image captioning. This talk walks through code in tensorflow for modeling a sine wave, performing basic addition, and generating handwriting. This was for a Chicago Tensorflow meetup in May 2016.
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016 – MLconf
Machine Learning with TensorFlow: TensorFlow has enabled cutting-edge machine learning research at the top AI labs in the world. At the same time it has made the technology accessible to a large audience leading to some amazing uses. TensorFlow is used for classification, recommendation, text parsing, sentiment analysis and more. This talk will go over the design that makes it fast, flexible, and easy to use, and describe how we continue to make it better.
An Introduction to TensorFlow architecture – Mani Goswami
Introduces you to the internals of TensorFlow and deep dives into the distributed version of TensorFlow. Refer to https://github.com/manigoswami/tensorflow-examples for examples.
Tiger Style, the indie studio behind this year's Waking Mars and the 2009 IGF Best Mobile Game, Spider: The Secret of Bryce Manor, is committed to creating innovative gameplay and meaningful artistic content with every project. Despite a strong background in design-savvy titles such as Thief, Deus Ex, and Splinter Cell, the team learned that innovation is never easy or predictable. The team survived numerous prototypes, several major revisions, and a series of crucial revelations before the concepts "action gardening" and "ecosystem gameplay" evolved from catch phrases into new interactive experiences the team was proud of. Join creative director Randy Smith on a tour of their well-documented experiments, mistakes, theories, and victories to get a glimpse into what this award-winning studio has learned about the problem of inventing new gameplay.
BSidesDelhi 2018: Headshot - Game Hacking on macOS – BSides Delhi
Presenter: Jai Verma
Abstract: Game hacking is a major contributor to the hacking economy. People shell out a lot of money for hacks which help them cheat at multiplayer games. These hacks can range from enhancing in-game abilities for gaining an upper hand in the game, to bypassing in-app purchases. With the advent of anti-cheat such as VAC (Valve Anti Cheat), the techniques employed by hackers to cheat at games have become increasingly refined. These techniques are comparable to those utilised in sophisticated malware and rootkits. These techniques include runtime process injection, function interposition, and proxying network traffic to name a few. To avoid detection, hackers even resort to custom kernel drivers to interact with the game process. This talk will detail a few of the techniques utilised by game hackers and malware developers alike, through the hacking of an open-source game. I will showcase some hacks for an open-source FPS game by describing the method in detail for the macOS platform. While the demo will be macOS specific, the approach applies to all modern operating systems.
Distributed computing and hyper-parameter tuning with RayJan Margeta
Come for a while and learn how to execute your Python computations in parallel, to seamlessly scale the training of your machine learning models from a single machine to a cluster, find the right hyper-parameters, or accelerate your Pandas pipelines with large dataframes.
In this talk, we will cover Ray in action with plenty of examples. Ray is a flexible, high-performance distributed execution framework. Ray is well suited to deep learning workflows but its utility goes far beyond that.
Ray has several interesting and unique components, such as actors, or Plasma - an in-memory object store with zero-copy reads (particularly useful for working on large objects), and includes powerful hyper-parameter tuning tools.
We will compare Ray with its alternatives such as Dask or Celery, and see when it is more convenient or where it might even completely replace them.
The Next Mainstream Programming Language: A Game Developer's Perspective – kfrdbs
Keynote at POPL 2006 by Tim Sweeney, Epic Games
Abstract:
Game developers have long been early adopters of new technologies. This is so because we are largely unburdened by legacy code: with each new hardware generation, we are free to rethink our software assumptions and develop new products using new tools and even new programming languages. As a result, games are fertile ground for applying academic advances in these areas. And never has our industry been in need of such advances as it is now! The scale and scope of game development has increased more than ten-fold over the past ten years, yet the underlying limitations of the mainstream C/C++/Java/C# language family remain largely unaddressed.
The talk begins with a high-level presentation of the game developer's world: the kinds of algorithms we employ on modern CPUs and GPUs, the difficulties of componentization and concurrency, and the challenges of writing very complex software with real-time performance requirements.
The talk then outlines the ways that future programming languages could help us write better code, providing examples derived from experience writing games and software frameworks that support games. The major areas covered are abstraction facilities and how we can use them to develop more extensible frameworks and components; practical opportunities for employing stronger typing to reduce run-time failures; and the need for pervasive concurrency support, both implicit and explicit, to effectively exploit the several forms of parallelism present in games and graphics.
Leszek Godlewski
Language: English
Video games are complex and non-deterministic systems. So complex, in fact, that some days the everyday breakpoint just doesn't cut it when you're looking for that next bug. Drawing from the experience of deploying three large titles to four platforms, this talk will discuss the different approaches and borderline magical tricks to debugging different parts of a game: noise filtering when our breakpoint is hit way too often, memory stomping, time-dependent bugs, rendering artifacts… Story of a game programmer's life.
Distributed computing with Ray. Find your hyper-parameters, speed up your Pan... – Jan Margeta
In this talk we will explore Ray - a high-performance and low latency distributed execution framework which will allow you to run your Python code on multiple cores, and scale the same code from your laptop to a large cluster.
Ray uses several interesting ideas like actors, a fast zero-copy shared-memory object store, and bottom-up scheduling. Moreover, on top of a succinct API, Ray builds tools to make your Pandas pipelines faster, tools that find the best hyper-parameters for your machine learning models or train state-of-the-art reinforcement learning algorithms, and much more. Come to the talk and learn some more.
Updated the talk with Kubernetes
https://www.pydays.at/
Lecture 02: Layered Architecture of Game Engine | GAMES104 - Modern Game Engi... – Piccolo Engine
Follow us
YouTube: https://www.youtube.com/@piccoloengine
Website: https://games104.boomingtech.com/
WeChat Official Accounts: games104
GitHub: https://github.com/BoomingTech/Piccolo
Lecture 02: Layered Architecture of Game Engine | GAMES104 - Modern Game Engine: Theory and Practice
This course will introduce the system architecture, technical points, and engine-system-related knowledge involved in modern game engines. Through this course, you can build a comprehensive and complete understanding of game engines. If you are sufficiently hands-on, you will be able to follow the course and build a complete mini-game engine from 0 to 1. This course is suitable for students in related fields, researchers, and anyone interested in game engine design and development.
GDC 2010 - A Dynamic Component Architecture for High Performance Gameplay - M... – Terrance Cohen
This presentation will detail a dynamic component architecture used in the Resistance franchise to represent aspects of entity and systems' behavior. This component system addresses several weaknesses of the traditional game object model for high-performance gameplay, particularly in multi-threaded or multi-processor environments. Dynamic components are allocated and deallocated on-demand from efficient memory pools, and the system provides a convenient framework for running updates in parallel and on different processors such as SPUs. The system can be layered on top of a traditional game object model, so a codebase can gradually migrate to this new architecture. The presentation will discuss motivations and goals of the system as well as implementation details.
Part of the proceedings of Code Mesh 2014: http://www.codemesh.io/
Source ODP file (includes videos and presenter notes): https://www.dropbox.com/s/720qkkn27uaz43u/ggd.odp?dl=0
Video games are complex and non-deterministic systems. So complex, in fact, that some days the everyday breakpoint just doesn't cut it when you're looking for that next bug. Drawing from the experience of deploying three large titles to four platforms, this talk will discuss the different approaches and borderline magical tricks to debugging different parts of a game: noise filtering when our breakpoint is hit way too often, memory stomping, time-dependent bugs, rendering glitches… Story of a game programmer's life.
XPDDS17: uniprof: Transparent Unikernel Performance Profiling and Debugging -... – The Linux Foundation
Unikernels are increasingly gaining traction as they provide lightweight, low-overhead, high-performance execution of applications, while keeping the high isolation guarantees of virtualization crucial to multi-tenant deployments. However, developers still lack tools to support their development, especially compared to the rich toolsets that full operating systems such as Linux or BSD provide.
In this talk, Florian will present uniprof, a unikernel profiler and performance debugger that gives developers insight into their unikernel's behavior transparently, without having to instrument the unikernel itself. Uniprof works on both x86 and ARM, can profile even when frame pointers are unavailable, and can be used with visualization tools such as flame graphs. It imposes only minimal overhead (~0.1% at 100 samples/s) on the unikernel, making it ideal for profiling even on production systems.
The beautiful thing about software engineering is that it gives you the warm and fuzzy illusion of total understanding: I control this machine because I know how it operates. This is the result of layers upon layers of successful abstractions, which hide immense sophistication and complexity. As with any abstraction, though, these sometimes leak, and that's when a good grounding in what's under the hood pays off.
The second talk in this series peels a few layers of abstraction and takes a look under the hood of our "car engine", the CPU. While hardly anyone codes in assembly language anymore, your C# or JavaScript (or Scala or...) application still ends up executing machine code instructions on a processor; that is why Java has a memory model, why memory layout still matters at scale, and why you're usually free to ignore these considerations and go about your merry way.
You'll come away knowing a little bit about a lot of different moving parts under the hood; after all, isn't understanding how the machine operates what this is all about?
(From a talk given at BuildStuff 2016 in Vilnius, Lithuania.)
Separating Hype from Reality in Deep Learning with Sameer Farooqui – Databricks
Deep Learning is all the rage these days, but where does the reality of what Deep Learning can do end and the media hype begin? In this talk, I will dispel common myths about Deep Learning that are not necessarily true and help you decide whether you should practically use Deep Learning in your software stack.
I’ll begin with a technical overview of common neural network architectures like CNNs, RNNs, GANs and their common use cases like computer vision, language understanding or unsupervised machine learning. Then I’ll separate the hype from reality around questions like:
• When should you prefer traditional ML systems like scikit-learn or Spark.ML instead of Deep Learning?
• Do you no longer need to do careful feature extraction and standardization if using Deep Learning?
• Do you really need terabytes of data when training neural networks or can you ‘steal’ pre-trained lower layers from public models by using transfer learning?
• How do you decide which activation function (like ReLU, leaky ReLU, ELU, etc) or optimizer (like Momentum, AdaGrad, RMSProp, Adam, etc) to use in your neural network?
• Should you randomly initialize the weights in your network or use more advanced strategies like Xavier or He initialization?
• How easy is it to overfit/overtrain a neural network, and what are the common techniques to avoid overfitting (like l1/l2 regularization, dropout and early stopping)?
Startup.ML: Using neon for NLP and Localization Applications – Intel Nervana
Speaker: Arjun Bansal, co-founder of Nervana Systems
Arjun Bansal’s workshop focused on neon, an open-source Python-based deep learning framework that has been built from the ground up for speed and ease of use. The workshop highlights how to use neon, build Recurrent Neural Networks to generate and analyze text, and build Convolutional Autoencoders to generate images and to localize objects. Arjun also demoed the integration of neon with the Nervana cloud (in private beta) for multi-GPU training of deep networks.
FGS 2011: Making A Game With Molehill: Zombie Tycoon – mochimedia
Luc Beaulieu and Jean-Philippe Auclair from Frima Studio share their experience working with Adobe's new Molehill APIs in making their new game "Zombie Tycoon".
Similar to GDC gameplay replication in ACU with videos (20)
Suzanne Lagerweij - Influence Without Power - Why Empathy is Your Best Friend... – Suzanne Lagerweij
This is a workshop about communication and collaboration. We will experience how we can analyze the reasons for resistance to change (exercise 1) and practice how to improve our conversation style and be more in control and effective in the way we communicate (exercise 2).
This session will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
Abstract:
Let’s talk about powerful conversations! We all know how to lead a constructive conversation, right? Then why is it so difficult to have those conversations with people at work, especially those in powerful positions that show resistance to change?
Learning to control and direct conversations takes understanding and practice.
We can combine our innate empathy with our analytical skills to gain a deeper understanding of complex situations at work. Join this session to learn how to prepare for difficult conversations and how to improve our agile conversations in order to be more influential without power. We will use Dave Gray’s Empathy Mapping, Argyris’ Ladder of Inference and The Four Rs from Agile Conversations (Squirrel and Fredrick).
In the session you will experience how preparing and reflecting on your conversation can help you be more influential at work. You will learn how to communicate more effectively with the people needed to achieve positive change. You will leave with a self-revised version of a difficult conversation and a practical model to use when you get back to work.
Come learn more on how to become a real influencer!
Mastering the Concepts Tested in the Databricks Certified Data Engineer Assoc... – SkillCertProExams
• For a full set of 760+ questions. Go to
https://skillcertpro.com/product/databricks-certified-data-engineer-associate-exam-questions/
• SkillCertPro offers detailed explanations to each question which helps to understand the concepts better.
• It is recommended to score above 85% in SkillCertPro exams before attempting a real exam.
• SkillCertPro updates exam questions every 2 weeks.
• You will get lifetime access and lifetime free updates.
• SkillCertPro assures 100% pass guarantee in first attempt.
This presentation, created by Syed Faiz ul Hassan, explores the profound influence of media on public perception and behavior. It delves into the evolution of media from oral traditions to modern digital and social media platforms. Key topics include the role of media in information propagation, socialization, crisis awareness, globalization, and education. The presentation also examines media influence through agenda setting, propaganda, and manipulative techniques used by advertisers and marketers. Furthermore, it highlights the impact of surveillance enabled by media technologies on personal behavior and preferences. Through this comprehensive overview, the presentation aims to shed light on how media shapes collective consciousness and public opinion.
Collapsing Narratives: Exploring Non-Linearity • a micro report by Rosie Wells – Rosie Wells
Insight: In a landscape where traditional narrative structures are giving way to fragmented and non-linear forms of storytelling, there lies immense potential for creativity and exploration.
'Collapsing Narratives: Exploring Non-Linearity' is a micro report from Rosie Wells.
Rosie Wells is an Arts & Cultural Strategist uniquely positioned at the intersection of grassroots and mainstream storytelling.
Their work is focused on developing meaningful and lasting connections that can drive social change.
Please download this presentation to enjoy the hyperlinks!
12. NetTokens
Can also be used to wait for all players
[Diagram: Peer1, Peer2 and Peer3 exchanging tokens with timestamps t=4, t=5 and t=6]
13. remote procedure call
Remote procedure call (RPC): call a function on one peer and have it executed on the other peers
network void PlayAnimation(const AnimationParams & params);
• Can be ordered or not
• Can be persisted for Join In Progress
• Can be executed later, or re-executed
22. conclusions
(+) Almost no change to the spawning clients' code
(+) Cheap in bandwidth
(+) Data is the same in single and multiplayer
(-) No network balancing
(-) A lot of edge cases to handle when a spawning request is cancelled
(-) Systemic gameplay is not easy to replicate
23.
24. objectives
• Need a way to replicate the player’s movements with good accuracy
• Animation quality of replicas must be the same as masters
• Must support join in progress
• Shouldn’t take too much bandwidth
28. conclusions
(+) Good replication quality
(+) Can recover nicely from divergences
(-) Expensive in bandwidth
(-) 3 state machines (HSM / NetBehaviorAI / animation graph) to
maintain
29.
30. objectives
• Replicate NPC behavior at a very low cost
• Not a perfect match: not a first person game
• Quality of realization: same for masters and replicas
• Existing behaviors easily updated to support multiplayer
32. replication
Four concurrent states, each with its own exclusive sub-states (transitions triggered by RPC):
• Animations: Idle, Play animation, Push, Stumble, …
• Navigation: Idle, Go to position, Wander, Follow, …
• Weapons: Idle, Sheath, Unsheath, Fire, …
• LookAt: Idle, Look at position, Look at target
33. conclusions
(+) Cheap in bandwidth
(+) Seamless migrations
(-) Average replication quality
(-) Services (navigation, animation…) must also be synchronized to
prevent behavior branching
39. conclusions
(+) Good replication quality
(+) Cheap in bandwidth
(+) Code and data are the same in single and multiplayer
(-) Join in progress can be tricky
(-) Reaction times can become noticeable
42. stations
How does it work in single player:
• Crowd Life Manager updates all stations
• Stations = data containers (not systemic)
• Stations loaded with cells
• NPCs spawned/released by stations
43. stations
Challenges of replicating:
• Amount of stations and NPCs to track
• Stations are not replicated
• NPCs in stations can be masters, replicas, or even non-replicated entities
47. conclusions
(+) Only 1 net object per peer = big gains on bandwidth/CPU
(+) Each peer can run its own logic
(+) Data is already available on discovery
(-) Data is broadcast on all peers even if they don’t need it
(-) Execution logic tends to be complicated
50. stats
• Below TRC of 92 kbps per peer (around 30 kbps outside fight)
• Most expensive systems:
• Proxies: list of vectors
• Player, but high replication quality
• Crowd life: lots of stations, actually cheap per station
52. conclusion
(+) Cheap in bandwidth, no additional CPU cost
(+) Good replication quality
(+) Multiplayer team = gameplay team
(-) Limited to a low number of players
(-) Hard to identify sources of divergences
In the previous Assassin’s Creed, the multiplayer game was a separate executable. It ran on a different map, not in the open world, and it used a server.
Open world game
Coop peer to peer, 1 to 4 players
Huge existing code base
Simulation bubble around each player: 80 meters
Loading grid: blocks of 32 meters (around 40 blocks loaded)
Peer = Player
NPC is an entity
1st entity spawned = master
Join in progress
Mementos
Leave in progress
Migration
NetTokens are time based: the first one that has requested the token is the winner.
The time used is the network time. The time is carried in each message, so each peer gets the same comparison result.
In case of equality the peer with the lower peer ID wins.
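The arbitration rule above can be sketched as a deterministic comparison. This is a minimal sketch with assumed names (`TokenRequest`, `winsToken` are not from the talk), showing why every peer reaches the same verdict from the same two messages:

```cpp
#include <cstdint>

// Hypothetical sketch of NetToken arbitration (names are assumptions).
// Each token request carries the network time at which it was made.
struct TokenRequest {
    uint64_t networkTime; // shared network clock stamp, embedded in the message
    uint32_t peerId;      // unique peer identifier
};

// Returns true if request `a` wins the token over request `b`.
// The earliest requester wins; on equal times, the lower peer ID wins.
// Both inputs travel in the messages themselves, so the comparison is
// identical on every peer.
bool winsToken(const TokenRequest& a, const TokenRequest& b) {
    if (a.networkTime != b.networkTime)
        return a.networkTime < b.networkTime;
    return a.peerId < b.peerId;
}
```

Because the comparison uses only message contents, no extra round-trip is needed to agree on the winner.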
An RPC is converted into a reliable message.
An RPC can be persisted for Join In Progress: the call returns a cookie that you can persist. As long as it is persisted, the function will be executed on all peers that create a new replica of this object.
You can execute or re-execute an RPC through its cookie.
If you receive an RPC PlayAnimation( Animation * anim ) but your entity is not ready to play this animation, you can keep the cookie and execute the RPC when your entity is ready. You can even call PlayAnimation each frame until it succeeds.
If RPC1 is ordered but RPC2 is not, and RPC1’s transport packet is lost, RPC2 can still be executed.
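The cookie mechanism described above can be sketched as a small registry. Everything here is an assumption about shape, not the engine's actual API: a persisted RPC body that reports whether it succeeded, retried until the entity is ready, then unpersisted once complete:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>

// Hypothetical sketch of persisted-RPC cookies (all names are assumptions).
// While a cookie is persisted, the call would be replayed on any peer that
// creates a new replica, and can be retried locally until it succeeds.
using RpcCookie = uint64_t;

class RpcRegistry {
public:
    // Persist an RPC body; returns the cookie that keeps it alive.
    RpcCookie persist(std::function<bool()> rpc) {
        RpcCookie cookie = nextCookie_++;
        persisted_[cookie] = std::move(rpc);
        return cookie;
    }

    // Try to (re-)execute a persisted RPC. Returns true once it succeeded
    // (e.g. PlayAnimation may keep failing until the entity is ready).
    bool execute(RpcCookie cookie) {
        auto it = persisted_.find(cookie);
        if (it == persisted_.end()) return false;
        return it->second();
    }

    // Unpersist once the action completed; new replicas no longer replay it.
    void unpersist(RpcCookie cookie) { persisted_.erase(cookie); }

    std::size_t persistedCount() const { return persisted_.size(); }

private:
    RpcCookie nextCookie_ = 1;
    std::map<RpcCookie, std::function<bool()>> persisted_;
};
```

A caller could invoke `execute` each frame until it returns true, matching the "call PlayAnimation each frame until it succeeds" pattern.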
Example 1: the master executes a RPC, it is then broadcast and executed on all replicas.
Example 2: a replica executes a RPC, it is then unicast to the master that executes it, and the master broadcast it to all the replicas except the initial sender.
If player 2 has created a transient replicated entity, and it sends to player 1 a NetHandle to this entity, then player 1 can get the position and the sight of this entity. It can also find out if this entity still exists for player 2.
Asynchronous process: pool of spawning requests.
Each frame, the spawning manager executes the most urgent spawning requests.
Different strategies:
Spawning: generate a new entity.
Acquisition: reuse an entity released by another spawning client.
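The pooled, budgeted processing of requests described above could be sketched as follows. This is a minimal illustration with assumed names (`SpawningManager`, urgency field, string actions), not the engine's real interface; it only shows the two strategies, spawning a new entity versus acquiring a released one:

```cpp
#include <queue>
#include <string>
#include <vector>

// Hypothetical sketch of an asynchronous spawning-request pool (names assumed).
struct SpawnRequest {
    int urgency;          // higher = more urgent
    std::string npcType;
    bool operator<(const SpawnRequest& o) const { return urgency < o.urgency; }
};

class SpawningManager {
public:
    void request(SpawnRequest r) { pool_.push(std::move(r)); }

    // Called once per frame: execute the most urgent requests, up to a budget.
    // Strategy: reuse (acquire) an entity released by another spawning client
    // when one exists, otherwise spawn a new one.
    std::vector<std::string> update(int budget) {
        std::vector<std::string> actions;
        while (budget-- > 0 && !pool_.empty()) {
            SpawnRequest r = pool_.top();
            pool_.pop();
            if (!released_.empty()) {
                released_.pop_back();
                actions.push_back("acquire:" + r.npcType);
            } else {
                actions.push_back("spawn:" + r.npcType);
            }
        }
        return actions;
    }

    void release(std::string entity) { released_.push_back(std::move(entity)); }

private:
    std::priority_queue<SpawnRequest> pool_; // most urgent request on top
    std::vector<std::string> released_;      // entities awaiting reuse
};
```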
Around 500 spawned NPCs, but less than 60 are individually replicated.
The entities not individually replicated are managed by the massive crowd system that handles replication at a higher level.
In Echo, master is spawned, replicas are automatically created
This is for systems with unpredictable spawning request, ex: crowd members, players
Only the NPC type and a seed are broadcast.
In engine, a gameplay system decides to spawn an entity.
This gameplay coordinator can be active on multiple peers, and we don’t want to end up with as many entities as peers.
A spawning request is added on all peers. It asks for a token, and the spawning request that manages to acquire the token will spawn the master. The token is also used to set the netkey of the entity.
The other peers will spawn a replica.
The generation seed is derived from the ID of the spawning client, so nothing needs to be broadcast, as the NPC type is known by all peers.
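The key property is that every peer derives the same seed locally from a value it already knows. A sketch under assumptions (the mixing function below is an arbitrary SplitMix64-style hash, not the talk's actual formula):

```cpp
#include <cstdint>

// Hypothetical sketch: deriving the generation seed from the spawning
// client's ID, so every peer computes it locally and nothing is broadcast.
// The mixing constants are SplitMix64-style and purely illustrative.
uint64_t generationSeed(uint64_t spawningClientId) {
    uint64_t z = spawningClientId + 0x9E3779B97F4A7C15ull;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
    return z ^ (z >> 31);
}
```

Any deterministic function works here; what matters is that identical inputs on every peer yield identical NPC generation.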
This is a tool to debug player replication: a player replica is created locally, and receives all messages with a fixed latency.
Controller = hardware
HSM = code
Animation = data
We could replicate the controller input, but the game is not deterministic.
We could replicate the animation, but that would be very expensive.
We replicate the animation logic (the HSM, for Human State Machine).
The replication state machine allows us to cope with network realities such as packet loss and latency variations.
Not too intrusive on HSM code.
Virtually no environment checks on replicas, all done by the master = low CPU cost
3 state machines:
HSM -> enters all states + sends final decisions
Animation graph: extra transitions for the replica
Cannot reuse the player behavior replication:
Too expensive in bandwidth.
Doesn’t handle migration: the local player is always a master
Ideally NPCs should be replicated at the character logic.
Since there is a huge existing code base, we have actually added a new layer of replication between the character logic and the animation logic
Intentions: a layer of replication between the character logic and the animation logic
The intention framework is a state machine. There are 4 concurrent states: Animations, navigation, weapons and lookat.
All these states can be active at the same time, as we can have an NPC walking, while playing an upper body animation, with his sword unsheathed, looking at his target.
Under all these concurrent states, we have some exclusive states.
First, there is always an Idle state, when the concurrent state is not active.
Then we have some states for all the possible actions
Transitions to an active state are triggered by RPCs, and are persisted for join in progress.
Once an action is completed, there is a transition to the Idle state; the RPC is unpersisted.
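The structure described above, concurrent channels that each hold one exclusive sub-state and fall back to Idle, can be sketched as a tiny state container. Names (`Channel`, `IntentionStateMachine`) are assumptions based on the slide, not the engine's real types:

```cpp
#include <array>
#include <cstddef>
#include <string>

// Hypothetical sketch of the intention layer: four concurrent channels that
// can all be active at once, each holding one exclusive sub-state
// ("Idle" when inactive). Names are assumptions based on the slide.
enum class Channel { Animations, Navigation, Weapons, LookAt, Count };

class IntentionStateMachine {
public:
    IntentionStateMachine() { states_.fill("Idle"); }

    // An RPC-triggered transition activates an exclusive state on one channel;
    // in the real system the RPC would be persisted for join in progress.
    void trigger(Channel c, std::string action) {
        states_[static_cast<std::size_t>(c)] = std::move(action);
    }

    // Once the action completes, the channel falls back to Idle
    // (and the persisted RPC would be unpersisted).
    void complete(Channel c) { states_[static_cast<std::size_t>(c)] = "Idle"; }

    const std::string& state(Channel c) const {
        return states_[static_cast<std::size_t>(c)];
    }

private:
    std::array<std::string, static_cast<std::size_t>(Channel::Count)> states_;
};
```

A late joiner replaying the still-persisted RPCs would rebuild exactly the non-Idle channels, which is why the transitions are persisted until completion.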
A gameplay coordinator manages the behavior of a list of NPCs, and handles their interactions with the player.
Once the NPC is spawned, the gameplay coordinator pushes a character logic on its AI component.
Examples of gameplay coordinators: tail, defend, patrol guards… Around 40 different coordinators in ACU.
Examples of character logic: navigate, fight, search, play animation…
Examples: Kill, tail, protect, chase…
Actions are not applied on replicas
Sensors and UI are for the local player
Net data is shared between all instances
Actions are applied on masters
A replica coordinator sends data to the master through RPCs. The master synthesizes this information and broadcasts it to the replicas through a memento.
Since masters and replicas share the same data, they take the same decisions.
In case of migration, or disconnection, everything is already shared.
Example of shared data: for an interaction coordinator, each peer will notify the master whether its local player is ready to trigger the interaction.
All players will be aware of the state of the other players, and will allow the interaction accordingly.
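The readiness exchange above could be modeled as shared data that both masters and replicas evaluate identically. This is a sketch with assumed names (`InteractionSharedData` is illustrative, not from the talk):

```cpp
#include <cstdint>
#include <map>

// Hypothetical sketch of the interaction coordinator's shared data (names
// assumed): replicas notify the master of their local player's readiness via
// RPC; the master's synthesis is broadcast in a memento, so every peer ends
// up evaluating the same "all players ready" condition.
class InteractionSharedData {
public:
    // Called on the master when a peer's readiness RPC arrives
    // (the master also records its own local player this way).
    void setPlayerReady(uint32_t playerId, bool ready) {
        ready_[playerId] = ready;
    }

    // Same check on masters and replicas, since the data is shared:
    // the interaction is allowed only once every known player is ready.
    bool allPlayersReady() const {
        if (ready_.empty()) return false;
        for (const auto& [id, isReady] : ready_)
            if (!isReady) return false;
        return true;
    }

private:
    std::map<uint32_t, bool> ready_; // playerId -> ready flag, part of the memento
};
```

Because masters and replicas read the same flags, a migration or disconnection changes who writes, not what anyone sees.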
Crowd stations: NPCs that are not walkers e.g. merchant stands, artisans, groups of people talking, etc. They are not essential for gameplay.
Around 900 stations loaded, and up to 600 spawned NPCs
non-replicated entities: bulk crowd members.
State: active, started, aborted, completed, number of NPCs spawned, released, lost
Master = token winner (token requested by the cell)
How do we replicate: Split Mastering
Each manager creates its own NetObject
NetObjects discovered are linked to the crowd life manager
Stations:
Check who will manage each station using NetTokens
Winner of token is the Master of that station.
NPCs:
Check if the entity is a master or a replica.
For all “Local Masters”, write the state changes in the master NetObject
For all “Remote Replicas”, listen for the memento callback of remote peers and change state to follow master.
This split mastering pattern is also used for the proxies, and for the players data (inventory, customization)
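The split mastering steps above can be sketched as a per-station ownership map. All names here (`CrowdLifeManager` methods, string states) are assumptions for illustration, not the engine's API; the point is that one manager owns masters for some stations and follows mementos for the rest:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch of split mastering (names assumed): one NetObject per
// peer; each station is mastered by the winner of its NetToken, so a single
// manager is a "local master" for some stations and a replica for the others.
class CrowdLifeManager {
public:
    explicit CrowdLifeManager(uint32_t localPeer) : localPeer_(localPeer) {}

    // The token winner for a station becomes its master.
    void assignTokenWinner(int stationId, uint32_t winnerPeer) {
        masterPeer_[stationId] = winnerPeer;
    }

    bool isLocalMaster(int stationId) const {
        auto it = masterPeer_.find(stationId);
        return it != masterPeer_.end() && it->second == localPeer_;
    }

    // Local masters write state changes into the local NetObject
    // (to be broadcast as a memento); writes on replicas are rejected.
    bool writeState(int stationId, const std::string& state) {
        if (!isLocalMaster(stationId)) return false;
        states_[stationId] = state;
        return true;
    }

    // Remote replicas apply the memento callback to follow their master.
    void onMemento(int stationId, const std::string& state) {
        if (!isLocalMaster(stationId)) states_[stationId] = state;
    }

    std::string state(int stationId) const {
        auto it = states_.find(stationId);
        return it == states_.end() ? "" : it->second;
    }

private:
    uint32_t localPeer_;
    std::map<int, uint32_t> masterPeer_; // stationId -> token-winning peer
    std::map<int, std::string> states_;  // stationId -> last known state
};
```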
Proxies represent a saving for the other systems.
Hard to evaluate, similar to playing this game.
Need a lot of logs and a good logging tool to debug