This presentation demonstrates how to efficiently manage GPU buffers using today's APIs. It describes why buffer management matters and how inefficient buffer management can cut frame rates in half. Finally, it demonstrates two new techniques: discard-free circular buffers and transient buffers.
Siggraph 2016 - Vulkan and NVIDIA: The Essentials, by Tristan Lorach
This presentation introduces Vulkan's components: what you must know to start using this new API, and what you must know when using it on NVIDIA hardware.
Optimizing the Graphics Pipeline with Compute (GDC 2016), by Graham Wihlidal
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation targets seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
Porting the Source Engine to Linux: Valve's Lessons Learned, by basisspace
These slides discuss the techniques applied in porting a large, commercial AAA engine from Windows to Linux. They include the lessons learned along the way and the pitfalls we ran into, to serve as a warning to other developers.
OpenGL 4.4 provides new features for accelerating scenes with many objects, which are typically found in professional visualization markets. This talk will provide details on the usage of the features and their effect on real-life models. Furthermore we will showcase how more work for rendering a scene can be off-loaded to the GPU, such as efficient occlusion culling or matrix calculations.
Video presentation here: http://on-demand.gputechconf.com/gtc/2014/video/S4379-opengl-44-scene-rendering-techniques.mp4
Recorded video here:
http://on-demand.gputechconf.com/siggraph/2017/video/sig1757-tristan-lorach-vkFX-effective-approach-for-vulkan-api.html
Vulkan is a complex low-level API, full of structures and dedicated objects. Using it may be tedious and often leads to complicated source code. We propose here a way to define and use Vulkan components in a convenient and readable way. Then we will show how this infrastructure allows us to introduce higher-level concepts, such as Techniques and Passes, and even to instantiate resources and render-targets right from within the effect, making it self-sufficient and consistent as a general description. The overall purpose of this open-source project is to improve and enhance the use of the Vulkan API while keeping its strength and flexibility. The project can run in two different ways: either as a compiler generating C++ code for you, or at runtime, to load effects and use them right away.
vkFx comes from a former project called nvFx, presented a few years ago. While nvFx was intended to be generic (OpenGL- and D3D-compliant), vkFx is Vulkan-specific, so the project is thin and doesn't break the important paradigms Vulkan requires to stay powerful.
Ever wondered how to use modern OpenGL in a way that radically reduces driver overhead? Then this talk is for you.
John McDonald and Cass Everitt gave this talk at Steam Dev Days in Seattle on Jan 16, 2014.
NVIDIA OpenGL and Vulkan Support for 2017, by Mark Kilgard
Learn how NVIDIA continues improving both Vulkan and OpenGL for cross-platform graphics and compute development. This high-level talk is intended for anyone wanting to understand the state of Vulkan and OpenGL in 2017 on NVIDIA GPUs. For OpenGL, the latest standard update maintains the compatibility and feature-richness you expect. For Vulkan, NVIDIA has enabled the latest NVIDIA GPU hardware features and now provides explicit support for multiple GPUs. And for either API, NVIDIA's SDKs and Nsight tools help you develop and debug your application faster.
NVIDIA booth theater presentation at SIGGRAPH in Los Angeles, August 1, 2017.
http://www.nvidia.com/object/siggraph2017-schedule.html?id=sig1732
Get your SIGGRAPH driver release with OpenGL 4.6 and the latest Vulkan functionality from
https://developer.nvidia.com/opengl-driver
OpenGL NVIDIA Command-List: Approaching Zero Driver Overhead, by Tristan Lorach
This presentation introduces a new NVIDIA extension called Command-list.
The purpose of this presentation is to explain the basic concepts of how to use it and to show what the benefits are.
The sample I used for the talk is here: https://github.com/nvpro-samples/gl_commandlist_bk3d_models
The driver to try it with is pre-release 347.09:
http://www.nvidia.com/download/driverResults.aspx/80913/en-us
In this AMD technology presentation from the 2014 Game Developers Conference in San Francisco (March 17-21), Bill explains some of the ways the vertex shader can be used to improve performance by taking a fast path through the vertex shader rather than generating vertices with other parts of the pipeline. Check out more technical presentations at http://developer.amd.com/resources/documentation-articles/conference-presentations/
Graphics Gems from CryENGINE 3 (Siggraph 2013), by Tiago Sousa
This lecture covers rendering topics related to Crytek’s latest engine iteration, the technology that powers titles such as Ryse, Warface, and Crysis 3. Among the covered topics, Sousa presented SMAA 1TX, an update featuring a robust and simple temporal antialiasing component; performant and physically plausible camera-related post-processing techniques, such as motion blur and depth of field, were also covered.
New Addressable Asset System for Speed and Performance, by Unity Technologies
The new Addressable Asset system makes it much easier to manage your game assets and project workflow, and it gives you better options to optimize performance. In this session, you’ll learn how the new system works, and explore some common and challenging use cases.
Presenters: Stephen Palmer, Bill Ramsour (Unity Technologies)
Game engines have long been at the forefront of taking advantage of the ever-increasing parallel compute power of both CPUs and GPUs. This talk is about how parallel compute is utilized in practice on multiple platforms today in the Frostbite game engine, and how we think the parallel programming models, hardware, and software in the industry should look in the next 5 years to help us make the best games possible.
The past few years have seen a sharp increase in the complexity of rendering algorithms used in modern game engines. Large portions of the rendering work are increasingly written in GPU computing languages, and decoupled from the conventional “one-to-one” pipeline stages for which shading languages were designed. Following Tim Foley’s talk from SIGGRAPH 2016’s Open Problems course on shading language directions, we explore example rendering algorithms that we want to express in a composable, reusable and performance-portable manner. We argue that a few key constraints in GPU computing languages inhibit these goals, some of which are rooted in hardware limitations. We conclude with a call to action detailing specific improvements we would like to see in GPU compute languages, as well as the underlying graphics hardware.
This talk was originally given at SIGGRAPH 2017 by Andrew Lauritzen (EA SEED) for the Open Problems in Real-Time Rendering course.
This session presents a detailed, programmer-oriented overview of our SPU-based shading system implemented in DICE's Frostbite 2 engine and how it enables more visually rich environments in BATTLEFIELD 3 and better performance than traditional GPU-only renderers. We explain in detail how our SPU tile-based deferred shading system is implemented, and how it supports rich material variety, high-dynamic-range lighting, and large numbers of light sources of different types through an extensive set of culling, occlusion, and optimization techniques.
Presented September 30, 2009 in San Jose, California at GPU Technology Conference.
Describes the new features of OpenGL 3.2 and NVIDIA's extensions beyond 3.2 such as bindless graphics, direct state access, separate shader objects, copy image, texture barrier, and Cg 2.2.
Checkerboard Rendering in Dark Souls: Remastered, by QLOC
This is a talk on checkerboard rendering that Markus & Andreas held at Digital Dragons 2019.
In it they quickly go through the history of Checkerboard Rendering before taking a deep dive into how it works and how it is implemented in Dark Souls: Remastered. Lastly, they present the quality and performance improvements they got from using it and their conclusion.
PS: The PDF file includes useful in-depth notes from both authors.
Strangeloop 2012: Apache Cassandra Anti-Patterns, by Matthew Dennis
A random list of Apache Cassandra anti-patterns. There is a lot of information on what to use Cassandra for and how, but not a lot on what not to do. This presentation works towards filling that gap.
What this presentation covers:
● Slow/Blocked Requests
○ What is a slow request?
○ Possible causes
○ Types of Slow Requests
○ Common Troubleshooting Techniques
● Flapping OSDs when RGW buckets have millions of objects
○ Where to start
○ Possible causes
○ Temporary solutions
○ Permanent solutions
This talk, delivered at GDC 2014, describes a method to detect CPU-GPU sync points. CPU-GPU sync points rob applications of performance and often go undetected. As a single CPU-GPU sync point can halve an application's frame rate, it is important that they be understood and detected as quickly as possible.
Kernel Recipes 2016 - Speeding up development by setting up a kernel build farm, by Anne Nicolas
Building a full kernel takes time but is often necessary during development or when backporting patches. The nature of the kernel makes it easy to distribute its build on multiple cheap machines. This presentation will explain how to set up a build farm based on cost, size, and performance.
Willy Tarreau, HaProxy
Experiences from Delphix of debugging ZFS in production on illumos and Linux, and an introduction to the SDB debugger and how it can be used to debug ZFS on Linux.
How a BEAM runner executes a pipeline (Apache BEAM Summit London 2018), by Javier Ramirez
In this talk I will present the architecture that allows runners to execute a Beam pipeline. I will explain what needs to happen for a compatible runner to know which transforms to run, how to pass data from one step to the next, and how Beam allows runners to be SDK-agnostic when running pipelines.
Customize and Secure the Runtime and Dependencies of Your Procedural Language..., by VMware Tanzu
Customize and Secure the Runtime and Dependencies of Your Procedural Languages Using PL/Container
Greenplum Summit at PostgresConf US 2018
Hubert Zhang and Jack Wu
Ceph at Work in Bloomberg: Object Store, RBD and OpenStack, by Red_Hat_Storage
Bloomberg's Chris Jones and Chris Morgan joined Red Hat Storage Day New York on 1/19/16 to explain how Red Hat Ceph Storage helps the financial giant tackle its data storage challenges.
LCU14-201: Binary Analysis Tools
---------------------------------------------------
Speaker: C. Lyon & O. Javaid
Date: September 16, 2014
---------------------------------------------------
★ Session Summary ★
This session will be a presentation about currently available binary analysis tools, including: Sanitizers, perf (a performance counter and tracing profiling tool), record/replay (a reverse debugging facility in GDB) and prelink rootfs.
---------------------------------------------------
★ Resources ★
Zerista: http://lcu14.zerista.com/event/member/137726
Google Event: https://plus.google.com/u/0/events/ca2pdo9sn9r8n81l5vrbiibvcts
Video: https://www.youtube.com/watch?v=QIu601HYwSA&list=UUIVqQKxCyQLJS6xvSmfndLA
Etherpad: http://pad.linaro.org/p/lcu14-201
---------------------------------------------------
★ Event Details ★
Linaro Connect USA - #LCU14
September 15-19th, 2014
Hyatt Regency San Francisco Airport
---------------------------------------------------
http://www.linaro.org
http://connect.linaro.org
“Show Me the Garbage!”: Garbage Collection, a Friend or a Foe, by Haim Yadid
“Just leave the garbage outside and we will take care of it for you”. This is the panacea promised by garbage collection mechanisms built into most software stacks available today. So, we don’t need to think about it anymore, right? Wrong! When misused, garbage collectors can fail miserably. When this happens they slow down your application and lead to unacceptable pauses. In this talk we will go over different garbage collectors approaches and understand under which conditions they function well.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, by Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova..., by Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a button, by DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Accelerate your Kubernetes clusters with Varnish Caching, by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic*, by Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Generating a custom Ruby SDK for your web service or Rails API using Smithy, by g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Transcript: Selling digital books in 2024: Insights from industry leaders - T..., by BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
DevOps and Testing slides at DASA Connect, by Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. Finally, we had a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024, by Tobias Schneck
As AI technology pushes into IT, I wondered, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already got working for real.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo..., by James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti..., by Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
● UI automation introduction
● UI automation sample
● Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
5. General Guidance
● D3D11 >> D3D9 (generally)
● It’s much harder to hit the ultra-slow path (aka CPU-GPU Sync Points)
● Reduce your API calls where possible
● Batch up buffer updates
● Alignment matters! (16-byte, please)
● Aligned copies can be ~30x faster
6. More General Guidance
● D3D11Device will grab a mutex for you, but each DeviceContext can only be called from one thread at a time
● This is the source of many crashes blamed on the driver
● UpdateSubresource requires more CPU time
● When possible, prefer Map/Unmap
● D3D11 Debug Runtime is awesome!
● Please use it, ensure you are running clean
8. CPU-GPU Sync Points
● CPU-GPU Sync Points are caused when the CPU needs the GPU to complete work before an API call can return
● These make us sad
9. CPU-GPU sync point examples
● Explicit
● Spin-lock waiting for query results
● Readback of Framebuffer you just rendered to
● Implicit (potential sync points)
● GPU Memory Allocation after Deallocation
● Buffer Rename operation (MAP_DISCARD) after deallocation
● Immediate update of a buffer still in use
10. Why are they bad?
● Ideal frame time should be max(CPU time, GPU time)
● A CPU-GPU sync point turns this into CPU time + GPU time
[Diagram: CPU and GPU timelines between Presents, ideal (overlapped) vs. with a sync point (serialized)]
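The slide’s arithmetic can be sketched directly. The function names and the per-frame costs below are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>

// With CPU and GPU work overlapped, the frame costs the longer of the two.
double idealFrameMs(double cpuMs, double gpuMs) {
    return std::max(cpuMs, gpuMs);
}

// A CPU-GPU sync point serializes them, so the frame costs the sum.
double syncedFrameMs(double cpuMs, double gpuMs) {
    return cpuMs + gpuMs;
}
```

For a hypothetical 10 ms CPU / 12 ms GPU frame, one sync point moves you from 12 ms (~83 fps) to 22 ms (~45 fps), the halving described on the next slide.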
11. Really? That bad?
● One bad sync point can halve your frame rate
● Even worse: the more sync points you have, the harder they are to find
● Performance will just seem generally slow
● The badness depends, in part, on where in the frame the sync point occurs
● Generally, the later the sync point, the worse it is
● Early sync points are also bad if your workload is very lopsided towards either the CPU or the GPU
12. Check your middleware
● Middleware is generally written in a vacuum
● What works best in the small might not scale well
● Especially check for CPU-GPU sync points
13. A quick D3D9 interlude
● CPU-GPU sync points are trivial to introduce in D3D9
● Locking any buffer in D3D9 with flags=0 is a virtually guaranteed CPU-GPU Sync point if that buffer is still in use
16. “Forever” Buffers
● Useful for geometry that is loaded once
● Ex: Level BSPs, loaded behind a load screen
● Don’t use this for streaming data
● Hitching during allocation is possible/likely
● IMMUTABLE flag at creation time
● Cannot update these!
[Diagram: buffer classes ordered by update frequency: “Forever”, Long Lived, Transient, Temporary, Constants]
17. Long Lived Buffers
● Data that is streamed in from disk, but is expected to last for “a while”
● Ex: Character geometry
● Reuse these; stream into them
● DEFAULT flag at creation time
● UpdateSubresource to update
18. Temporary buffers
● Fire-and-forget data
● E.g. Particle systems
● Almost certainly lives in system RAM
● DYNAMIC flag at create time
● Prefer Map/Unmap to update these
● UpdateSubresource involves an extra copy
19. Constant Buffers
● These are different from other buffers in D3D11
● The GPU can deal with many of them in flight at once
● Create with DYNAMIC
● Map/DISCARD to Update
● More on these in a bit
20. We skipped one…
● Transient Buffers
● New informal class of Buffer
● Used for (e.g.) UI/Text
● Things that are dynamic but have few vertices each, and may need to be updated on odd schedules
● DYNAMIC flag at creation time
● Transient Buffers are part of a new class of buffer…
22. Transient Buffer Overview
● Treat Buffer as a Memory Heap, with a twist
● On CPU, freed memory is available now
● On GPU, freed memory is available when the GPU is finished with it
● Assume memory is in use until told otherwise
● Determine when the GPU must be finished with freed memory, then return it to the “really free” list
23. CTransientBuffer
● On Alloc, walk a Free list looking for best fit
● Data is updated using Map/NO_OVERWRITE
● Return an opaque, immutable handle
● On Free, record that the chunk was freed, into RetiredFrames.back()
● Just after Present, an “OnPresent” function is called

class CTransientBuffer
{
    ID3D11Buffer* mBuffer;
    UINT mLengthBytes;
    ID3D11Device* mOwner;
    vector<CSubAlloc> mFreeList;
    list<RetiredFrame> mRetiredFrames;
public:
    CSubAlloc* Alloc(UINT, void*, ID3D11DeviceContext*);
    void Free(CSubAlloc*);
    void OnPresent(ID3D11DeviceContext*);
};
33. CTransientBuffer: Handling OOM
● Ways to handle Out of Memory on Alloc:
● Spin-lock waiting for RetiredFrame Queries to return
● Allocate a new, larger buffer
● Release the current buffer
● Requires a system-memory copy to initially fill the new buffer
● These will (probably) stall, but in your code, where they
● can be easily logged -and/or-
● recorded to adjust buffer size and avoid the stall on subsequent runs
34. Transient Buffer Pattern
● Works in D3D9 as well
● Can be extended and simplified to contention-free Temporary Buffers, too!
● Let’s take a quick look at that.
35. Discard-Free Temporary Buffers
● Allocate out of Buffer as a circular buffer
● No opaque handle needed
● Remember ending address of the last allocation
● Per frame: assuming any allocations, issue a query
● Later: when the query returns, move the end pointer to indicate additional available space
● Credit: Blizzard’s StarCraft 2 Team (thanks!)
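The allocator above can be sketched in isolation. This is a hedged model, not StarCraft 2’s code: offsets are kept as monotonically increasing 64-bit positions, and OnQueryRetired stands in for the per-frame query returning; in the real scheme each allocation would be written through Map with NO_OVERWRITE at (offset % size). All names are invented:

```cpp
#include <cstdint>

class CircularAlloc {
    uint64_t mSize, mHead = 0, mTail = 0;  // head: next write; tail: GPU fence
public:
    explicit CircularAlloc(uint64_t bytes) : mSize(bytes) {}

    // Returns the byte offset within the buffer, or false if the GPU hasn't
    // released enough space yet (caller can wait, or grow the buffer).
    bool Alloc(uint64_t bytes, uint64_t* offset) {
        uint64_t pos = mHead;
        if (pos % mSize + bytes > mSize)       // never split across the end
            pos += mSize - pos % mSize;        // skip the padding to offset 0
        if (pos + bytes - mTail > mSize)
            return false;                      // would overwrite in-use data
        *offset = pos % mSize;
        mHead = pos + bytes;
        return true;
    }

    // The frame's query returned: everything allocated before the head mark
    // recorded at that frame's end is now free.
    void OnQueryRetired(uint64_t headAtFrameEnd) { mTail = headAtFrameEnd; }

    uint64_t HeadMark() const { return mHead; }
};
```

No per-allocation handle is needed; a single remembered head position per frame (plus one query) is enough to reclaim everything that frame wrote.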
43. Constant Buffer Organization
● Group by frequency of update
● The cheapest buffers are the ones you never update
● You can bind multiple buffers in one call (reduce those API calls!)
44. Proposed Buffer Grouping
● Assuming you are not vertex shading limited
● Don’t solve the travelling salesman in your VS
● Seriously: this isn’t common
45. Multiple Constant Buffers
● One for per-frame constants (GI values, lights)
● One for per-camera constants (ViewProj matrix, camera position in world, RT dimensions)
Old HLSL:
oPos = in.Position * cWorldViewProj;

New HLSL:
oPos = in.Position * cWorld * cViewProj;

(One extra matrix multiply in the VS. No biggie.)
46. Multiple Constant Buffers cont’d
● One for per-object constants (World matrix, dynamic material properties, etc)
● One for per-material constants (if these are shared; if not, then drop them in with per-object constants)
● Splitting constants this way eliminates constant updates for static objects
47. Constant Buffer Tricks
● Use shared structs to update when possible
● Struct can be included from both hlsl and C++
● Makes buffer updates trivial!
● Assign them to slots by convention:
● b0: Per-Frame, b1: Per-Camera, etc
● Slot assignment can live in shared header, too.
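A minimal sketch of the shared-struct idea, assuming hypothetical file and member names; the #ifdef supplies C++ stand-ins for HLSL’s built-in vector/matrix types so the same struct definition compiles on both sides:

```cpp
// Hypothetical shared header ("cb_per_camera.h"); all names are illustrative.
// HLSL can #include this too: it skips the __cplusplus branch and uses its
// native float4/float4x4 types.
#ifdef __cplusplus
struct float4   { float x, y, z, w; };
struct float4x4 { float m[4][4]; };
#endif

// Slot assignment by convention (b0: Per-Frame, b1: Per-Camera, ...):
#define PER_CAMERA_SLOT 1

struct PerCameraConstants {
    float4x4 cViewProj;      // view * projection
    float4   cCameraPosWS;   // xyz: camera position in world
    float4   cRTDimensions;  // xy: render target width/height
};

// C++ side: the update becomes a single struct copy into the mapped pointer.
//   *(PerCameraConstants*)mapped.pData = cpuSideCopy;
// HLSL side (in the shader):
//   cbuffer PerCamera : register(b1) { PerCameraConstants gCamera; };

static_assert(sizeof(PerCameraConstants) % 16 == 0,
              "constant buffer data must be a multiple of 16 bytes");
```

With this in place, buffer updates are one struct assignment, and the slot convention lives next to the struct it describes.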
49. Performance Investigation
● Scene from a Typical D3D11 Application (unreleased)
● 115 Dynamic Vertex Buffer Updates (particles) per frame
● Total Time: 4.36 ms / frame

            Per-Call    Per-Frame
Map/Unmap   0.036 ms    3.79 ms
Memcpy      ~0.004 ms   0.4 ms
50. Let’s buffer the updates
● All Dynamic Updates during one update
● 1 Map per frame (using MAP_DISCARD)
● Still 115 memcpys (I’m lazy)
● Total Time: 0.267ms / frame (savings: 4.1ms!)
            Per-Call    Per-Frame
Map/Unmap   0.036 ms    0.036 ms
Memcpy      ~0.002 ms   0.231 ms
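The pattern being measured can be sketched without the API. A plain byte array stands in for the pointer Map(MAP_DISCARD) would return; the point is the shape (one Map per frame, many memcpys at offsets), and all names, counts, and sizes are illustrative:

```cpp
#include <cstring>
#include <vector>

// 'mapped' plays the role of the pointer returned by one Map(MAP_DISCARD);
// each particle system then memcpys into its own slice of that allocation
// instead of mapping a buffer of its own.
void fillFrame(unsigned char* mapped, int systems, size_t bytesEach,
               const unsigned char* src) {
    for (int i = 0; i < systems; ++i)        // 115 memcpys, still only 1 Map
        std::memcpy(mapped + i * bytesEach, src, bytesEach);
}
```

Each system writes to its own offset, so the per-call Map/Unmap overhead is paid once per frame instead of 115 times.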
51. Buffered update, no discards
● One update into a triple buffer
● 1 Map per frame (using MAP_NOOVERWRITE)
● Still 115 memcpys (I’m still lazy)
● Total Time: 0.217ms / frame (savings: 4.15ms)
● Bonus: No hitching ever
● Downside: 3x the memory
            Per-Call    Per-Frame
Map/Unmap   0.031 ms    0.031 ms
Memcpy      ~0.002 ms   0.231 ms
52. Performance Results
● Reducing API usage was a huge CPU-side savings (4.09 ms); GPU perf unaffected
● Discard-Free updates were marginally faster still, and would never hitch

                  Total Frame Time
Original          4.360 ms
Buffered Updates  0.267 ms
Discard-Free      0.217 ms
53. GPUView
● Covered by Jon Story earlier today
● Hopefully you caught it!
● Great for finding CPU-GPU sync points