The Elder Scrolls Blades strove to produce high-quality visuals on modern mobile devices. This talk will describe the challenges of achieving that level of quality in procedurally generated 3D environments.
Speakers:
Simon-Pierre Thibault - Bethesda Game Studios
Sergei Savchenko - Bethesda Game Studios
Watch the session here: https://youtu.be/KbxiGH6igBk
Developing and Optimizing a Procedural Game: The Elder Scrolls: Blades - Unite 2019
2. Developing and Optimizing a Procedural Game: The Elder Scrolls: Blades
Simon-Pierre Thibault
Sergei Savchenko
Bethesda Game Studios
3. The Elder Scrolls: Blades
• Advanced visuals on mobile platforms
• Procedural dungeons
• To achieve this:
  • Built custom lighting solutions
  • Significantly optimized for performance and memory
5. Level Example
• Made of standalone building blocks: Rooms
• Assembled by our dungeon generator at runtime
• Rooms have different shapes and sizes
• Rooms are universal for each theme
• Rooms are streamed during gameplay to support larger levels.
6. Room
Prefab that contains all the data required to use that piece in a level:
• Art Content
• Level design data
• Gameplay data
• Lighting
7. Level Lighting
High visual quality is an important pillar for this project
• Based on the built-in render pipeline
• Uses a modified version of the Unity Standard shader
• Mix of real-time and baked lighting
9. Lightmap Seams
• Lighting at connections differs from room to room
• Our global illumination is not really global…
• Rooms are attached at runtime, so we can't predict that at bake time
• Solutions?
10. Lightmap Blending
Concept: sample the lightmap on the other side of a connection to blend with it
• Assign the secondary lightmap to the material
• For each vertex:
  • Assign a blend factor based on the distance to the connection (vertex color alpha)
  • Assign an extra UV to sample the lightmap texture on the other side (third UV set)
• Edit the shader to sample the secondary lightmap
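As a rough illustration of that per-vertex setup (not the shipping code), here is a C# sketch; the ConnectionBlendSetup name, the blendDistance value, and the findSecondaryLightmapUV helper are hypothetical, and it assumes the blend factor lives in vertex color alpha and the secondary lightmap UV in the third UV set, as described above:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: tags each vertex near a room connection with a blend
// factor (vertex color alpha) and a secondary lightmap UV (third UV set).
public static class ConnectionBlendSetup
{
    public static void Apply(MeshFilter meshFilter, Plane connectionPlane,
                             float blendDistance,
                             System.Func<Vector3, Vector2> findSecondaryLightmapUV)
    {
        // Instantiates a mesh copy, which is why static batching no longer applies.
        Mesh mesh = meshFilter.mesh;
        Vector3[] vertices = mesh.vertices;

        Color[] colors = mesh.colors;
        if (colors.Length != vertices.Length)
            colors = new Color[vertices.Length];

        var secondaryUVs = new List<Vector2>(vertices.Length);
        Transform t = meshFilter.transform;

        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 worldPos = t.TransformPoint(vertices[i]);
            float distance = Mathf.Abs(connectionPlane.GetDistanceToPoint(worldPos));

            // 0.5 right at the connection (50-50 blend), fading to 0 with distance.
            float blend = 0.5f * Mathf.Clamp01(1.0f - distance / blendDistance);
            colors[i].a = blend;

            // UV into the neighbouring room's lightmap (e.g. via the extension mesh).
            secondaryUVs.Add(findSecondaryLightmapUV(worldPos));
        }

        mesh.colors = colors;
        mesh.SetUVs(2, secondaryUVs); // third UV set, sampled by the modified shader
    }
}
```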
12. Connection Extension Mesh
• Generate lightmap information beyond the connection
• Never rendered in-game
• Used at runtime to find a proper color for blending
• Will avoid stretching the color at the connection
15. Limitations and Drawbacks
• Extra texture fetch per pixel rendered
• Large runtime cost to calculate secondary lightmap UVs for each vertex near the connection
• Incompatible with Unity's static batching
17. Light Probes
• Contain directional lighting information for a point in space
• A built-in feature of Unity
• Allow dynamic objects to sample baked lighting
• Don't work with our procedural pipeline:
  • Light probe data is saved in scenes
  • Light probes cannot be moved
19. Custom Light Probes Runtime System
• Write probe data to an asset we attach to each room
• Load light probe data for each room
• Attach the probes for each room into a global probe network
• Sample the network for each dynamic renderer based on world position
• Send the interpolated light probe information to the shader
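A minimal sketch of what such a probe network could look like, assuming probes sit on a regular grid and that each room registers its deserialized SphericalHarmonicsL2 values on load (the ProbeNetwork class and cell layout are illustrative, not the actual Blades implementation):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

// Illustrative sketch of a global probe network laid out on a regular grid.
// Probes from each streamed room are registered into the same dictionary,
// so lookups stay constant time regardless of how many rooms are loaded.
public class ProbeNetwork
{
    readonly Dictionary<Vector3Int, SphericalHarmonicsL2> probes =
        new Dictionary<Vector3Int, SphericalHarmonicsL2>();
    readonly float cellSize;

    public ProbeNetwork(float cellSize) { this.cellSize = cellSize; }

    // Called when a room is attached: add its probes at their world positions.
    public void AddProbe(Vector3 worldPosition, SphericalHarmonicsL2 sh)
    {
        probes[Cell(worldPosition)] = sh;
    }

    // Trilinear interpolation of the eight probes around the query position.
    // Missing probes (e.g. outside loaded rooms) are simply skipped.
    public SphericalHarmonicsL2 Sample(Vector3 worldPosition)
    {
        Vector3 local = worldPosition / cellSize;
        Vector3Int baseCell = Vector3Int.FloorToInt(local);
        Vector3 f = local - baseCell; // fractional position inside the cell

        var result = new SphericalHarmonicsL2();
        for (int z = 0; z <= 1; z++)
        for (int y = 0; y <= 1; y++)
        for (int x = 0; x <= 1; x++)
        {
            if (!probes.TryGetValue(baseCell + new Vector3Int(x, y, z), out var sh))
                continue;
            float w = Mathf.Abs(1 - x - f.x) * Mathf.Abs(1 - y - f.y) * Mathf.Abs(1 - z - f.z);
            result += sh * w; // SphericalHarmonicsL2 supports + and * (float)
        }
        return result;
    }

    Vector3Int Cell(Vector3 worldPosition)
    {
        return Vector3Int.FloorToInt(worldPosition / cellSize);
    }
}
```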
20. Light Probes Data
• The editor generates a UnityEngine.Rendering.SphericalHarmonicsL2 instance for each light probe.
• The shader expects data in a different format.
• This is explained in the Unity documentation: "The Unity shader code for reconstruction is found in UnityCG.cginc and is using the method from Appendix A10 Shader/CPU code for Irradiance Environment Maps from Peter-Pike's paper."
• https://docs.unity3d.com/Manual/LightProbes-TechnicalInformation.html
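Following that documentation pointer, the repacking is commonly written along these lines in C#; treat it as a reference sketch of the Appendix A10 reconstruction rather than verified production code:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class SHUtility
{
    // Repacks a SphericalHarmonicsL2 into the seven float4 constants that
    // UnityCG.cginc's ShadeSH9 expects (unity_SHAr/Ag/Ab, unity_SHBr/Bg/Bb, unity_SHC),
    // following Appendix A10 of Peter-Pike Sloan's "Stupid Spherical Harmonics Tricks".
    public static void GetShaderConstants(SphericalHarmonicsL2 sh, Vector4[] outCoefficients)
    {
        // outCoefficients must have room for 7 entries.
        for (int channel = 0; channel < 3; channel++) // 0 = red, 1 = green, 2 = blue
        {
            // Linear (L1) terms plus the constant term, with part of the zz term folded in.
            outCoefficients[channel] = new Vector4(
                sh[channel, 3], sh[channel, 1], sh[channel, 2],
                sh[channel, 0] - sh[channel, 6]);

            // Quadratic (L2) terms.
            outCoefficients[channel + 3] = new Vector4(
                sh[channel, 4], sh[channel, 5],
                sh[channel, 6] * 3.0f, sh[channel, 7]);
        }

        // The (x^2 - y^2) quadratic coefficient for each channel, packed into one float4.
        outCoefficients[6] = new Vector4(sh[0, 8], sh[1, 8], sh[2, 8], 1.0f);
    }
}
```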
21. Writing the Shader Properties
Use MaterialPropertyBlocks to fill in shader properties without going through a material:
https://docs.unity3d.com/ScriptReference/MaterialPropertyBlock.html
Use the [PerRendererData] attribute on your shader properties:
https://docs.unity3d.com/Manual/SL-Properties.html
LightProbeUsage.CustomProvided was added in 2018.1
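A small, hedged example of tying those pieces together with a per-renderer MaterialPropertyBlock; the CustomProbeApplier name and the interpolatedProbe argument (the result of sampling the probe network) are placeholders:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

// Sends a custom-interpolated light probe to a renderer without creating a
// unique material, using a per-renderer MaterialPropertyBlock.
public static class CustomProbeApplier
{
    static readonly MaterialPropertyBlock block = new MaterialPropertyBlock();
    static readonly List<SphericalHarmonicsL2> shList = new List<SphericalHarmonicsL2>(1);

    public static void Apply(Renderer renderer, SphericalHarmonicsL2 interpolatedProbe)
    {
        // Tell Unity we provide the probe data ourselves (available since 2018.1).
        renderer.lightProbeUsage = LightProbeUsage.CustomProvided;

        shList.Clear();
        shList.Add(interpolatedProbe);

        // Copy the SH coefficients into the property block; this fills the same
        // unity_SH* constants the Standard shader reads for light probes.
        renderer.GetPropertyBlock(block);
        block.CopySHCoefficientArraysFrom(shList);
        renderer.SetPropertyBlock(block);
    }
}
```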
22. Considerations
• Sampling the network for a large number of renderers can be expensive:
  • Only sample once for static objects
  • Only update dynamic objects if they have moved
• Because we place probes on a grid, we sample the network in constant time.
• Unity handles ambient color through the same shader properties:
  • Ambient color needs to be added to the probe's values
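In code, that last point can be as simple as adding the scene's ambient probe before uploading the data (sampledProbe here stands in for the value returned by the probe network):

```csharp
// Unity normally folds ambient lighting into the same SH shader uniforms,
// so add the scene's ambient probe to the sampled value before sending it over.
SphericalHarmonicsL2 finalProbe = sampledProbe + RenderSettings.ambientProbe;
```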
24. Blades' Frame: CPU Side
Rendering main performance drivers:
• # of draw calls
• # of rendering passes
• Efficiency of batching
• Efficiency of the graphics API
25. Reducing the Number of Draw Calls
• Dynamic visibility culling
  • Portal-based strategy for dungeons
  • Distance-based strategy for forests
• Dynamic occlusion culling for the town
  • Buildings as occluders
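For reference, Unity's built-in CullingGroup API is one way to implement the distance-based part of such a scheme; this sketch is only an illustration of the idea, not Blades' actual culling system:

```csharp
using UnityEngine;

// Distance-based visibility with the CullingGroup API: renderers farther than
// visibleDistance from the camera are disabled, re-enabled when in range again.
public class DistanceCulling : MonoBehaviour
{
    public Renderer[] targets;          // e.g. forest props
    public float visibleDistance = 60f; // beyond this, renderers are disabled

    CullingGroup group;
    BoundingSphere[] spheres;

    void OnEnable()
    {
        group = new CullingGroup { targetCamera = Camera.main };

        spheres = new BoundingSphere[targets.Length];
        for (int i = 0; i < targets.Length; i++)
            spheres[i] = new BoundingSphere(targets[i].bounds.center,
                                            targets[i].bounds.extents.magnitude);

        group.SetBoundingSpheres(spheres);
        group.SetBoundingSphereCount(targets.Length);
        group.SetBoundingDistances(new[] { visibleDistance });
        group.SetDistanceReferencePoint(Camera.main.transform);
        group.onStateChanged = OnStateChanged;
    }

    void OnStateChanged(CullingGroupEvent e)
    {
        // Distance band 0 means "within visibleDistance of the reference point".
        targets[e.index].enabled = e.currentDistance == 0 && e.isVisible;
    }

    void OnDisable()
    {
        group.Dispose();
        group = null;
    }
}
```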
27. OpenGL vs Vulkan:

| Chipset | CPU | GPU | Loading (32/ogl) | Sim Time (32/ogl) | Warm-up Issue (32/ogl) | Loading (64/vkn) | Sim Time (64/vkn) | Warm-up Issue (64/vkn) |
|---|---|---|---|---|---|---|---|---|
| Exynos 8895 Octa | Octa-core (4x2.3 GHz Mongoose M2 & 4x1.7 GHz Cortex-A53) | Mali-G71 MP20 | 25.9 | 0.041 | no | 28.1 | 0.048 | no |
| Qualcomm MSM8998 Snapdragon 835 | Octa-core (4x2.35 GHz Kryo & 4x1.9 GHz Kryo) | Adreno 540 | 29.3 | 0.0352 | yes | 21.3 | 0.0357 | no |
| Qualcomm SDM845 Snapdragon 845 | Octa-core (4x2.8 GHz Kryo 385 Gold & 4x1.7 GHz Kryo 385 Silver) | Adreno 630 | 21.8 | 0.0347 | yes | 17.6 | 0.0349 | no |
| Qualcomm MSM8998 Snapdragon 835 | Octa-core (4x2.35 GHz Kryo & 4x1.9 GHz Kryo) | Adreno 540 | 29.5 | 0.0354 | yes | 20.4 | 0.0348 | no |
| Hisilicon Kirin 970 | Octa-core (4x2.4 GHz Cortex-A73 & 4x1.8 GHz Cortex-A53) | Mali-G72 MP12 | 20.8 | 0.092 | no | 23.6 | 0.14 | no |
28. Dynamic Graphics API Selection
In a custom player activity's onCreate(), detect the device and, if it should stay on OpenGL ES (e.g., some Mali-based devices), pass the corresponding command-line flag to Unity:
this.getIntent().putExtra("unity", "-force-gles");
29. To Summarize:
• Vulkan is definitely viable and fast
• Your game's performance needs to be tested on Adrenos and Malis
• Use dynamic graphics API selection to try to get gains on both sides
35. To Summarize:
• On Android devices Unity prefers big cores; this may or may not be beneficial in all scenarios
• Affinities can be adjusted on initialization to assign the UnityMain and rendering threads to their own cores and to redistribute workers
• One should not expect a major performance gain from this (but the cost of this change is also low)
41. To Summarize
• Different tools account for different subsets of memory types when reporting (the Xcode widget is the most relevant)
• iOS devices have a complex memory management system with many memory types
• Current memory use can be fetched from the OS
43. Heap Growth Control:
• Various statics from the Boehm-Demers GC implementation can be externed and accessed
• Specifically GC_free_space_divisor from alloc.c
• Divisor value controls the managed heap's growth increment
• Default value: 3
• Custom value, e.g.: 16
44. Low Level Memory API
• Memory Profiler API permits capturing snapshots for both native and managed memory
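As a sketch of what that looks like in code, the (then experimental) Memory Profiler API around Unity 2018/2019 exposes snapshot capture roughly like this; the MemoryCapture wrapper and file naming are illustrative:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.Profiling.Memory.Experimental;

// Sketch of capturing a memory snapshot from code with the experimental
// Memory Profiler API; the snapshot contains both native and managed objects
// and can be inspected with the Memory Profiler package or custom tooling.
public static class MemoryCapture
{
    public static void Capture(string label)
    {
        string path = Path.Combine(Application.persistentDataPath, label + ".snap");

        MemoryProfiler.TakeSnapshot(
            path,
            (filePath, success) =>
                Debug.Log($"Snapshot {(success ? "written to" : "failed at")} {filePath}"),
            CaptureFlags.ManagedObjects | CaptureFlags.NativeObjects | CaptureFlags.NativeAllocations);
    }
}
```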
46. To Summarize
• The Boehm GC can be tweaked at runtime so that the heap expands in smaller increments
• C# objects may cost significantly in terms of memory
• The low-level memory API enables building custom tools for memory tracking and leak detection
How difficult is it to build a procedural game with advanced visuals for mobile platforms?
We will try to offer some insights
Our experience is from working on The Elder Scrolls Blades – it reimagined TES for modern mobile platforms
Specifically we will talk about:
precomputed lighting in procedural context
some aspects of performance/optimization that we dealt with
This is an example of what a level looks like from an editor point of view
Made of standalone blocks called rooms
The red lines represent connections between rooms
Different shapes and size but they all fit together
Any room can be attached to any other room of the same theme.
All fit on a grid that acts as a framework for the level generator.
To support longer levels without using too much memory, rooms are streamed in as the player progresses through the level.
Example of a single room.
Rooms are prefabs that contain all the data we need to use that piece in a level.
Unity render pipeline
Scriptable render pipeline was still experimental when we started the project
Realtime lighting
Looks great with the PBR shader, especially with high definition normal maps
Allows dynamic shadow in key areas like this window
Baked global illumination
Both lightmaps and light probes
Add depth to scenes with indirect lighting
Can add a lot of baked lights without affecting performance.
On this screenshot what you see is a harsh cut produced by lightmaps along the floor and wall where two different rooms connect.
The problem here is that since lightmaps are generated separately for each room, we can end up with dramatically different colors and brightness around room connections.
Because of the way we bake global illumination per room, our global illumination isn’t really global.
We also can’t predict which rooms will get connected together because they’re assembled randomly at runtime.
Bake time solutions…
We concluded that we needed a runtime solution to this
That runtime solution is lightmap blending.
The concept here is rather simple: what if we could blend the lightmaps on each side of a connection to make the seam disappear? We could start with a 50-50 blend at the connection and then fade it out based on distance.
You need a few things to achieve this. First, we need to add a secondary lightmap for blending to the materials on each side; each side needs the other side's lightmap.
And since a blend like this needs to be applied per pixel, we need to assign a blend factor to each vertex. We also need a UV set to sample the secondary lightmap texture.
Once we have that, we simply edit the shader to sample both lightmaps and blend them based on the blending factor.
All of this setup can be done at runtime when attaching two rooms together.
With this technique we only end up using the lightmap color at the seam and stretching it, so any local detail can get blown out of proportion, which doesn't look good.
The problem is that we don't have enough information to blend that shadow correctly; we don't know what that shadow in the blue room would have looked like if it had continued naturally.
Well what if we could create that information?
This is exactly what we did!
We've created a mesh that is precisely the shape of the connection but stretched based on the blending distance. We make it so that it gets lightmapped by Unity at bake time, but we never actually render it in game. We only use its lightmap data.
With this we have more accurate data for blending so we don’t need to stretch anything anymore.
The harsh cut is gone and we don’t have any noticeable artifact. The light on the floor from that torch does fade pretty quickly at the connection. It’s not 100% accurate, but still an acceptable compromise.
Because of the blending we do have an additional texture fetch per pixel, but we only do that for static renderers that are near a connection. So it’s not very significant.
The worst performance problem comes from computing the uv set for the secondary lightmap. It involves finding the closest point on the extension mesh for each vertex of the blended renderers. This can be quite heavy calculations but what we did is run all of that asynchronously in a background thread. When streaming a level, the new rooms are spawned outside of the view of the player so we don’t need to apply lightmap blending immediately.
And because we modify meshes at runtime by adding an extra UV set, this doesn’t work with Unity’s static batching. You’ll have to either remove the static batching option on meshes that are blended or disable the feature globally in your project settings.
So this takes care of our biggest lightmap problem, but I mentioned earlier that we also use light probes as part of our lighting pipeline.
Light probes are objects that contain directional lighting information for a specific point in space. They’re a built-in feature in Unity. If you set up a LightProbeGroup component in your scene, Unity will generate light probe information when you bake the lighting for that scene.
At runtime they allow the engine to sample baked lighting information for dynamic objects. They’re a complementary feature to lightmaps: lightmaps apply baked lighting to static objects, and light probes apply baked lighting to dynamic objects.
The problem with light probes is that Unity’s implementation doesn’t support our procedural pipeline. The first reason is that light probe data is stored in scenes only, but we use prefabs for our rooms.
The second and main reason is that probe positions are set at bake time and cannot be moved. A specific room’s position in world space is not fixed in our case; we have to be able to spawn that room wherever our level generator tells us to.
At a high level this is how the light probes pipeline is separated in Unity.
We have the generation part that runs in the editor when baking lighting. There’s the renderer part that samples light probes at runtime for each dynamic object: it finds the probes that surround the object’s position and interpolates their values. And then there’s the shader that uses that information to render the object.
The only part of this that’s not compatible with our procedural pipeline is the renderer. So what if we just replaced it with our own custom implementation?
So how would we do that?
We write the light probe data generated by Unity to a binary file that we attach to each room prefab. When a new room is attached at runtime, we deserialize the probe data and attach the probes to a global probe network for the level. Then, every frame, we sample the probe network to find probe information for each dynamic object based on its position, and we send that data to the shader.
How do we manage that data?
Once lighting has been baked, the LightProbes class contains a SphericalHarmonicsL2 object for each probe in your scene. Under the hood this is just an array of floats, so it’s easy to serialize.
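A sketch of what that serialization can look like, assuming LightmapSettings.lightProbes exposes the baked probe positions and SphericalHarmonicsL2 coefficients as in the Unity versions we are aware of:

```csharp
using System.IO;
using UnityEngine;
using UnityEngine.Rendering;

public static class ProbeSerializer
{
    // Flattens the baked probes into plain floats so they can be written to a binary file
    // attached to the room prefab.
    public static void Write(BinaryWriter writer)
    {
        LightProbes probes = LightmapSettings.lightProbes;
        Vector3[] positions = probes.positions;
        SphericalHarmonicsL2[] coefficients = probes.bakedProbes;

        writer.Write(positions.Length);
        for (int i = 0; i < positions.Length; i++)
        {
            writer.Write(positions[i].x); writer.Write(positions[i].y); writer.Write(positions[i].z);

            // SphericalHarmonicsL2 is 3 color channels x 9 coefficients = 27 floats.
            for (int channel = 0; channel < 3; channel++)
                for (int coeff = 0; coeff < 9; coeff++)
                    writer.Write(coefficients[i][channel, coeff]);
        }
    }
}
```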
The main problem we faced here is that the shader expects a different format and we didn’t know how to translate a SphericalHarmonicsL2 object into the format the shader expects.
Well it turns out there’s a very important line about this in the Unity documentation. It says: “The Unity shader code for reconstruction is found in UnityCG.cginc and is using the method from Appendix A10 Shader/CPU code for Irradiance Environment Maps from Peter-Pikes paper.”
That links to a paper called Stupid Spherical Harmonics Tricks by Peter-Pike Sloan. Appendix 10 of that paper contains the precise formulas you need to feed a SphericalHarmonicsL2 object to Unity’s Standard shader.
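The packing below is a sketch of that reconstruction as commonly derived from the appendix; the unity_SH* names are Unity’s built-in spherical harmonics uniforms. Treat the exact layout as something to verify against the paper and your Unity version.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class SHPacker
{
    // Packs one SphericalHarmonicsL2 into the seven Vector4s the Standard shader expects
    // (unity_SHAr/g/b, unity_SHBr/g/b, unity_SHC).
    public static void Pack(SphericalHarmonicsL2 sh, Vector4[] shA, Vector4[] shB, out Vector4 shC)
    {
        for (int c = 0; c < 3; c++)   // c = red, green, blue
        {
            // Linear terms plus part of the constant term.
            shA[c] = new Vector4(sh[c, 3], sh[c, 1], sh[c, 2], sh[c, 0] - sh[c, 6]);

            // Quadratic terms.
            shB[c] = new Vector4(sh[c, 4], sh[c, 5], sh[c, 6] * 3f, sh[c, 7]);
        }

        // Last quadratic term for all three channels.
        shC = new Vector4(sh[0, 8], sh[1, 8], sh[2, 8], 1f);
    }
}
```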
But how exactly do you feed that data to the shader?
The standard way of filling in shader properties is through materials. But probe values could be different for every single object even if they share the same material. We don’t want to create a unique material for every object, that would prevent batching.
Unity has a feature for cases like this called MaterialPropertyBlocks. This allows you to fill in shader properties per object without having to go through a material. When using this you also need to set the [PerRendererData] tag on your shader properties so that the engine knows they will be filled in through MaterialPropertyBlocks.
For probe data specifically, you should set the lightProbeUsage value of all your renderers to “Custom Provided”. This tells Unity that you’re responsible for filling in the spherical harmonics, so Unity won’t override your values.
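Putting the two previous points together, a minimal sketch of pushing the packed vectors to one renderer could look like this; the property names match the built-in SH uniforms, and the renderer is switched to custom-provided light probes.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class ProbeUploader
{
    public static void Upload(Renderer renderer, Vector4[] shA, Vector4[] shB, Vector4 shC)
    {
        renderer.lightProbeUsage = LightProbeUsage.CustomProvided;   // we fill in the SH ourselves

        var block = new MaterialPropertyBlock();
        renderer.GetPropertyBlock(block);
        block.SetVector("unity_SHAr", shA[0]);
        block.SetVector("unity_SHAg", shA[1]);
        block.SetVector("unity_SHAb", shA[2]);
        block.SetVector("unity_SHBr", shB[0]);
        block.SetVector("unity_SHBg", shB[1]);
        block.SetVector("unity_SHBb", shB[2]);
        block.SetVector("unity_SHC", shC);
        renderer.SetPropertyBlock(block);
    }
}
```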
Overall replacing that system worked well for us, but here are a few considerations.
Sampling and interpolating probes for each dynamic object every frame can be pretty expensive, so you need a way to know which renderers will or won’t move. Objects that don’t move should sample the probes once when spawned and then never again. Objects that do move should not resample probes if they have not moved since the last sampling.
In our case we fill rooms with probes based on a 3D grid, which means we have a regular layout for our probes. This is very helpful because it allows us to find the closest probes to any point in space in constant time. Without a regular layout, finding the surrounding probes for a given position becomes a much more complex problem; Unity’s default system supports that with tetrahedral tessellation, but we didn’t need to reimplement that on our side.
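A sketch of the constant-time cell lookup such a regular grid allows, with a hypothetical per-room grid described by an origin, spacing and per-axis probe counts:

```csharp
using UnityEngine;

public class ProbeGrid
{
    public Vector3 origin;      // world-space position of probe (0,0,0)
    public float spacing;       // distance between neighbouring probes
    public Vector3Int counts;   // probe count along x, y, z (assumed >= 2 on each axis)

    // Index of the grid cell containing the point; the eight probes at the cell corners
    // are then trilinearly interpolated (not shown here).
    public Vector3Int CellOf(Vector3 worldPosition)
    {
        Vector3 local = (worldPosition - origin) / spacing;
        return new Vector3Int(
            Mathf.Clamp(Mathf.FloorToInt(local.x), 0, counts.x - 2),
            Mathf.Clamp(Mathf.FloorToInt(local.y), 0, counts.y - 2),
            Mathf.Clamp(Mathf.FloorToInt(local.z), 0, counts.z - 2));
    }
}
```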
Another thing to keep in mind is that the shader properties used for spherical harmonics also contain the scene’s ambient color, so you’ll need to handle that yourself by adding the ambient color to the probe’s values before sending them to the shader.
Not running at high enough frame rate
Android might be more affected
CPU side rendering is costly and could often be a bottleneck
The GPU side can be a bottleneck as well, but
Tile-based GPUs deal well with overdraw
Generally shader performance is easier to deal with
The number of draw calls is a big performance driver for both CPU and often GPU
Control the number of passes
Shadows
Depth pass
Pixel lights etc.
Efficiency of batching: static and dynamic
Efficiency of graphics API used…
Culling is significant in reducing the number of draw calls
A procedural game cannot use precomputed visibility culling
Dynamic visibility culling
interior strategy
exterior strategy
Dynamic occlusion culling for town environment
Buildings are natural occluders for other buildings, vegetation, decorations, NPCs etc.
Graphics API is a huge performance driver
What is faster, OpenGL or Vulkan?
OpenGL is a traditional API maintaining internal state
Vulkan is much more modern with many facilities to reference states and rendering data
Performance depends on graphics drivers…
Loading times and FPS
We discovered shader cache on some OpenGL drivers
Vulkan is generally faster…
But could be slower compared to OpenGL on other devices
Dynamic graphics API selection to attempt the best of both worlds
Unity cannot change the graphics API at runtime
But there is a provision for auto selection and fallback
Can also be initialized with a different API
Requires custom Unity build
To implement this you need a custom player Activity (Java)
Check the device or SoC name to pass a flag to Unity
Vulkan is viable
Not many graphics artefacts
Faster overall
Beware validation layer
Helps shader compilation caching
Test in your specific case
Both frame performance
And loading times
Dynamic API selection when beneficial
Works in debug out of the box
May need a custom Unity build
May make it into main line
Thanks to Unity for their help working with us to implement this
Continuing with Android:
How Unity distributes threads to different cores
Most modern Android devices have 8 cores
There are big cores that are clocked higher
And small cores that are slower but consume less power
Unity game can easily have 30-40 threads running
Most important ones are:
UnityMain
UnityGFXDeviceW – multithreaded rendering is presumed to be used
Worker threads
These will be used more and more with DOTS and the Jobs system
Some systems use these already
E.g.: skinning, physics, particles
Use Systrace to find out how threads are distributed
Core jumping
Contention between workers and main/rendering threads
Little cores not used…
Putting everything on big cores might actually be ok
These are faster
Workers are not super busy
We experimented with alternative affinities
Reserve cores for UnityMain and UnityGFXDeviceW
Allocate workers (and choreographer and audio etc.) to other cores big and small
This is what the result looks like
Generally found this to be measurably faster (5-10%)
Only 4 workers
We asked Unity for a feature to allocate more workers
Nav mesh generation benefits from workers on little cores
Need to code a custom plugin to discover thread IDs and adjust affinities
Plugins are very easy with IL2CPP
The setaffinity syscall sets the core mask
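A rough sketch of the managed side of that idea, calling sched_setaffinity through P/Invoke under IL2CPP; the thread IDs would come from a native plugin or /proc, and the bit layout of the mask (which cores are big or little) is device specific.

```csharp
using System;
using System.Runtime.InteropServices;

public static class ThreadAffinity
{
    // cpu_set_t approximated as a 64-bit mask; fine for devices with up to 64 cores.
    [DllImport("libc", SetLastError = true)]
    static extern int sched_setaffinity(int pid, IntPtr cpusetsize, ref ulong mask);

    // tid is the Linux thread id of e.g. a Unity worker thread (not the managed thread id).
    public static bool PinToCores(int tid, ulong coreMask)
    {
        return sched_setaffinity(tid, new IntPtr(sizeof(ulong)), ref coreMask) == 0;
    }
}
```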
Not a major performance improvement
Yet measurable
We asked Unity for more workers
Systrace is very useful
iOS has much better single core performance (fewer cores though)
Thanks to Google for their help
Running out of memory
iOS is more affected
Generally less memory on iOS devices, many with only 2 GB
Tools give different readings
The Xcode widget is the most important reading – the OS jettisons apps based on it
Other tools give smaller estimates
Modern mobile devices’ memory systems are complex
Native memory for asset data: textures, meshes etc – Reference counted
Managed memory for Unity Objects and C# objects – GC collected
Doesn’t account for everything in a Unity app though
Virtual memory space – large, exceeds physical space
Physical wired memory is only a small part
Consists of wired dirty memory,
reusable memory (the GPU uses this for storage during rendering),
and external, encrypted code
Some read-only assets can be unloaded
Some dirty memory can be compressed
Use mach functionality to get detailed view
A practical view of memory types evolving over time
Overall use of internal memory is important
Managed heap state is important
Compressed memory utilization is important
Xcode widget reflects what OS is using for decision making
There are many memory types and several levels of allocators
Tools compute memory differently and often conservatively
Use mach functionality to get details
Heap growth happens in sizable steps
Sometimes we are short of memory while having megabytes of space in unused managed heap slack
Unity uses Boehm-Demers GC
Easy enough to write a plugin that externs some GC statics
One to note is GC_free_space_divisor, which controls how the heap will expand
Increasing the divisor makes the heap expand less
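A hedged sketch of what the managed side of such a plugin could look like; the plugin name and SetFreeSpaceDivisor are hypothetical, with the native side being a few lines of C that extern GC_free_space_divisor and assign to it.

```csharp
using System.Runtime.InteropServices;

public static class BoehmTuning
{
    [DllImport("gcconfig")]                       // hypothetical native plugin
    static extern void SetFreeSpaceDivisor(uint divisor);

    public static void ShrinkHeapGrowth()
    {
        // The Boehm default divisor is 3; a larger value makes the collector expand the heap
        // less eagerly, trading some extra GC work for a smaller managed heap.
        SetFreeSpaceDivisor(6);
    }
}
```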
Unity is great at exposing the GameObject hierarchy
Tools can be used to automate finding GameObject leaks
It is harder for native and managed memory
There is a Unity memory profiler which is improving
It allows you to compare captures and drill down
Custom tools can be built using the low-level memory API
The API provides a snapshot of all memory blocks in native and managed memory
We had to build a custom tool to help find C#-level leaks
It can take a snapshot and compare it with a previous snapshot based on type names
Also built a feature to see which types refer to which types
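A toy sketch of that diff-by-type idea, independent of how the snapshots are captured: reduce each snapshot to type name → instance count and surface the types whose counts keep growing.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class SnapshotDiff
{
    // Returns types whose instance count grew between two snapshots, biggest growth first.
    public static IEnumerable<(string type, int delta)> GrowingTypes(
        Dictionary<string, int> before, Dictionary<string, int> after)
    {
        return after
            .Select(kv => (type: kv.Key, delta: kv.Value - (before.TryGetValue(kv.Key, out var n) ? n : 0)))
            .Where(d => d.delta > 0)
            .OrderByDescending(d => d.delta);
    }
}
```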
Often it is not drilling down into individual objects but the high-level view that helps in figuring out leaks
Here is an example – perhaps a leak?
We have access to Boehm statics
Exercise caution as with anything low level
Thanks to the team