Why Graphics Is Fast, and What It Can Teach Us About Parallel Programming
 


Graphics has been at the forefront of the resurgence in parallel computation. Real-time graphics and games have been the source of many of today’s new programming models and architectures for parallel computation. Modern games are arguably the only successful mainstream application of highly parallel programming in heterogeneous, million-line codebases. But while graphics is thought of as an embarrassingly parallel application, there has been little success in implementing high-performance graphics systems in any single general-purpose parallel programming model, ironically including those which have come from the GPGPU community.

I will talk about key patterns of parallelism and locality used in graphics pipelines and games, and how existing tools and monolithic programming models fail to express these patterns with sufficient efficiency. I will try to synthesize some directions for future programming systems based on this experience, including my current thoughts on how a compile-time continuation-passing transform could help formalize the patterns emerging as high-performance systems manually overcome the limitations of existing GPU programming models.

This talk will be at least as much informal, educational and speculative as it will be about any currently active research.

  • Graphics and games are among the most successful mainstream applications of highly parallel computing:
    - multi-million-line code bases with many real subsystems, not a 50-line kernel expressed in 100k lines for performance, as in scientific computing
    - tight real-time budgets, memory constraints, …
    - consoles look like mini computers of the future: manycore, multithreaded; SIMD, in-order/throughput-oriented; heterogeneous (GPU + CPU + SPU)
    - portable across platforms (PC, PS3, Xbox): SIMD, threading, task systems
    Philosophy: if we can get graphics right (complex, heterogeneous, dynamic, but also well understood), that is a good start toward generalizing.
    So here I'm going to focus on explaining some key design patterns that have emerged from current graphics pipelines, to set the stage for the sorts of things future systems-programming tools need to support and express.
    What I'm hoping to do: build a parallel programming system good enough for graphics and games on next-generation (software + throughput processor) systems. That is, not just graphics: the whole game-engine core, plus other complex, heterogeneous systems-programming applications.
  • I'm an Nth-year PhD student at MIT, where I work with Frédo Durand, and more recently also Saman Amarasinghe.
    I've been in graphics and graphics systems for a while:
    - Stanford: lucky to spend 4 years with Pat Hanrahan
    - ILM: offline rendering, compilers - Lightspeed, at SIGGRAPH 2007
    - NVIDIA, ATI: graphics architecture - Decoupled Sampling, in revision for Trans. Graph., presented at the next SIGGRAPH
    - Intel: graphics architecture (now software - Larrabee!), data-parallel compilers, some chip architecture
  • as a motivating example, look at 1 modern game frame
  • 1000s of independent entities: characters, vehicles, projectiles.
    Explosions, physics, destruction.
    Many lights, changing occlusion/visibility.
    In all: 10s of GFLOPS, 100s of thousands of lines of code executed.
  • This is the task graph for a single frame from the engine used to make that image.
    To render that: hundreds of (heterogeneous) tasks, locally task, data, and pipeline parallel (point at graph).
    Task = 10k-100k lines each; fairly large.
  • And even within those tasks: data, pipeline, and task parallel, including entire invocations of the graphics pipeline.
    Many levels => braided parallelism.
  • Logical Direct3D 10 pipeline. Red = programmable, blue = fixed-function.
    "Fixed-function" is still configured by programmable state (and implied by shader input/output).
    Inter-stage data flow is always fixed and non-programmable.
    - IA: reads from pointers to index and vertex buffers and pulls in vertex data (indirect/complex traversal). Finite state machine.
    - VS: user program. Can read (only) from textures.
    - PrimAsm: indexes the vertex stream and groups together whole triangles. Finite state machine.
    - GS: user program. Variable output (filter/amplify).
    NOTE: programmable stages are delineated by a common one-to-one data stream. Any time there is reordering or complex stream amplification/filtering, we split stages.
    THIS IS THE LOGICAL PIPELINE; in reality, the fast implementation is different.
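    To make the IA-vs-VS split concrete, here is a minimal CUDA-style sketch (purely illustrative, not the D3D implementation, and ignoring the post-transform vertex cache): the "input assembler" part is the irregular, indirect gather through the index buffer, while the "vertex shader" part is the regular, purely data-parallel per-vertex program. All names (Vertex, vertex_shader, assemble_and_shade) are hypothetical.

        #include <cuda_runtime.h>

        struct Vertex { float x, y, z, w; };

        // A trivial "vertex shader": transform by a 4x4 row-major matrix.
        __device__ Vertex vertex_shader(Vertex v, const float* mvp)
        {
            Vertex o;
            o.x = mvp[0]*v.x  + mvp[1]*v.y  + mvp[2]*v.z  + mvp[3]*v.w;
            o.y = mvp[4]*v.x  + mvp[5]*v.y  + mvp[6]*v.z  + mvp[7]*v.w;
            o.z = mvp[8]*v.x  + mvp[9]*v.y  + mvp[10]*v.z + mvp[11]*v.w;
            o.w = mvp[12]*v.x + mvp[13]*v.y + mvp[14]*v.z + mvp[15]*v.w;
            return o;
        }

        // One thread per index: the irregular gather (the IA's marshaling job)
        // is factored apart from the regular, data-parallel shader body.
        __global__ void assemble_and_shade(const unsigned* index_buffer,
                                           const Vertex*   vertex_buffer,
                                           const float*    mvp,
                                           Vertex*         out,
                                           int             num_indices)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= num_indices) return;
            Vertex v = vertex_buffer[index_buffer[i]];   // "input assembler"
            out[i]   = vertex_shader(v, mvp);            // "vertex shader"
        }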
  • New shader stages with new semantics and characteristics.
    (Unordered) read-write memory access from the pixel shader.
  • Future: programmable output blending?
    Ordered read-modify-write buffers.
    Also: the more that becomes software-controlled, the more expensive synchronization becomes.
    As we'll see: the biggest difference between the pipeline choices for LRB vs. GPU is based on the cost, and therefore the granularity, of synchronization.
  • Needs to be able to drive essentially maximum resource utilization, sustained.
    Dynamic load balance: the load balance between stages shifts not just across apps or frames, but at very fine granularity within a frame. (Triangles are different sizes. Shaders are different lengths.)
    Pipeline parallelism: has to overlap operations and passes to avoid bubbles.
    Producer-consumer locality is essential: way too much intermediate data to spill to memory.
    Task parallelism and producer-consumer locality are why (ironically) you can't do a fast graphics pipeline in "GPGPU"/CUDA.
  • First, how parallelism is exploited and how the different stages work.
  • Parallelism across all data types: vertices, primitives, fragments, pixels.
    The hierarchical application of parallelism mirrors a hierarchical application of most major ideas from parallel, latency-tolerant (i.e. "throughput") processor architecture.
    To maintain high regularity within shaders, marshaling of data to/from complex or irregular structures is factored apart from the core programmable data-parallel shaders, into logically fixed-function stages.
  • There are two major schools of implementation today: a hardware (NVIDIA) GPU, and a software graphics pipeline (Larrabee).
    The core hierarchical approach to exploiting parallelism and hiding latency is essentially the same; only the constants are shifted by the given implementation.
    The high-order bit:
    - SIMD + vector + thread + core hierarchy to exploit parallelism
    - balance very high arithmetic intensity with retaining dynamic load balance
    - same basic model in software or hardware
    - schedulers are application/pipeline-specific, so software has a potential advantage
  • Data parallelism over the different data types is subtly different, but in ways which necessitate different implementation and optimization tradeoffs for peak efficiency.
    A fast graphics pipeline implementation must optimize for all of these stage-specific characteristics, and this is just in the "simple data-parallel kernels" parts of the pipeline.
    Clearly, the triviality of embarrassing parallelism is overstated.
  • The fixed-function stages introduce even more workload variation: IA, Rast, ROP.
    Ordered read-modify-write buffers. (As always, order is the enemy of parallelism.)
    Also: the more that becomes software-controlled, the more expensive synchronization becomes.
    As we'll see: the biggest difference between the pipeline choices for LRB vs. GPU is based on the cost, and therefore the granularity, of synchronization.
  • Popping up a level, to the pipeline as a whole:
  • All this talk of parallelism and asynchrony, but the logical pipeline was scalar.
    The API-defined ordering is that all pixel updates must happen in exactly the order defined by the input primitive sequence.
    This synchronization is a key reason why graphics is not trivially parallel: many operations are order-dependent.
    There still is parallelism: screen-space parallelism, and reorder buffers for asynchronous completion.
    As an aside, these strict semantics exist to make the API highly predictable and usable for many applications. In practice ordering could often be much looser given application-specific semantics, so I'll speculate that application-defined software rendering pipelines will exploit optimization here.
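    A tiny host-side example of why that ordering matters (illustrative only; blend_over is a hypothetical helper): classic "src over dst" alpha blending is not commutative, so the output merger must commit fragments in primitive order even though shading can finish wildly out of order.

        #include <cstdio>

        struct RGBA { float r, g, b, a; };

        // Classic src-over-dst blend: the result depends on which fragment lands first.
        RGBA blend_over(RGBA src, RGBA dst)
        {
            RGBA o;
            o.r = src.r * src.a + dst.r * (1.0f - src.a);
            o.g = src.g * src.a + dst.g * (1.0f - src.a);
            o.b = src.b * src.a + dst.b * (1.0f - src.a);
            o.a = src.a         + dst.a * (1.0f - src.a);
            return o;
        }

        int main()
        {
            RGBA red   = {1, 0, 0, 0.5f};
            RGBA green = {0, 1, 0, 0.5f};
            RGBA bg    = {0, 0, 0, 1.0f};
            RGBA a = blend_over(green, blend_over(red, bg));   // red drawn first
            RGBA b = blend_over(red,   blend_over(green, bg)); // green drawn first
            printf("red then green: %.2f %.2f %.2f\n", a.r, a.g, a.b); // 0.25 0.50 0.00
            printf("green then red: %.2f %.2f %.2f\n", b.r, b.g, b.b); // 0.50 0.25 0.00
            return 0;
        }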
  • Here's how those semantics are exploited in two kinds of modern pipelines.
    First, most hardware GPUs: the implementation looks like the logical pipeline (because it is directly built to run the API), but it is highly parallel at most stages.
  • Tradeoff in memory spill: stream vs. cache/buffer the intra-pipeline data.
    - Stream over intra-pipeline data (post-transform primitives) and cache the framebuffer, vs.
    - Stream over the framebuffer and buffer the intra-pipeline data.
    Also critical: lower scheduling overhead, coarser-grained synchronization.
  • To tie back to the whole: in a single frame, this pipeline as a whole is driven by sequences of passes made up of many draw batches.
    The app's logical rendering pipeline != the D3D pipeline, both for core algorithmic reasons and for performance-optimization reasons.
  • An entire one of those passes of the pipeline is just one node in this graph (the large circles, I think).
    And there are many other task stages: physics simulation, sound, culling, streaming, decompression, AI, agent updates.
    Others are going to become *more* data-parallel, *more* like the graphics pipeline, to utilize future processors.
    (The biggest current barrier to full-on GPGPU for these is latency/communication overhead, but devs want to.)
  • People have obviously built parallel programming models before.
    Task-centric systems: dynamic (+), but high overhead (-): can't achieve sufficient compute density (real work).
    Data-parallel systems: I've divided them into more dynamic and more static ones. Many historical data-parallel systems had dynamic runtimes targeting dynamically independent processors; overhead still too high.
    Streaming, as in StreamIt, is the extreme end point of static data-parallel optimization: very low overhead, but it struggles with variable/dynamic loads. Graphics on StreamIt: too static, load imbalance - obvious: triangles vary in size!
    This is by no means all; it is just a sampling of systems covering the major individual models I've talked about.
    The biggest key: focus has traditionally been on single-model parallel programming systems. Large systems like graphics and games require all of the above.
  • Consider CUDA, because on massively parallel throughput machines it is the leading candidate model actually available today.
    Canonical model: a thread per work item; purely data-parallel, streaming data accesses.
    Challenges: struggles with dynamic load balance, like any semi-static model, and with producer-consumer locality.
    Ironically, these are two key things graphics pipelines have to do very well: dynamic load balance within and between tasks and data elements, and producer-consumer locality for bandwidth savings on huge streams.
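    A minimal sketch of that canonical model (the stage_a/stage_b kernels are hypothetical, not from any real pipeline): one thread per work item, purely data-parallel. Note that the intermediate stream between the two launches round-trips through global memory - exactly the producer-consumer locality a graphics pipeline cannot afford for its huge intermediate streams.

        #include <cuda_runtime.h>

        __global__ void stage_a(const float* in, float* mid, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) mid[i] = in[i] * 2.0f;     // producer
        }

        __global__ void stage_b(const float* mid, float* out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) out[i] = mid[i] + 1.0f;    // consumer
        }

        void run(const float* d_in, float* d_mid, float* d_out, int n)
        {
            int threads = 256, blocks = (n + threads - 1) / threads;
            // Two separate launches: the intermediate stream "mid" spills to
            // DRAM between them instead of staying on-chip.
            stage_a<<<blocks, threads>>>(d_in, d_mid, n);
            stage_b<<<blocks, threads>>>(d_mid, d_out, n);
        }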
  • Promising directions are emerging, and challenges for the future are starting to become clear.
  • Going forward, it's interesting to look at how things might change.
    Dynamic task decompositions will become even more important. First: just put each subsystem on a thread; doesn't scale far. Now: several hundred jobs per frame; goes somewhere. Next: many more jobs.
    A key issue in the future will be continuing to radically increase the number of independent tasks which can be extracted.
    The problem with traditional "task systems" is false control dependence. The graphics pipeline already shows how this can be overcome. Pure data flow?
    Orchestration, complexity, and dynamism in general will be challenging - language, tools?
    Individual tasks will become internally more complex. Hierarchical decomposition and low-level data parallelism are important for efficiency. Data-centric tasks will become more like graphics/rendering.
    At the data-parallel level, one interesting idea/trend emerged at SIGGRAPH this year: dynamic scheduling between multiple in-flight logical kernels on their one-kernel-at-a-time GPUs.
    Every interesting demo from NVIDIA breaks the CUDA model: they demonstrated many alternative rendering systems on GPUs, entirely in software, but breaking the pure data-parallel, streaming model to achieve essential performance and flexibility.
  • The idea: take what is logically a series of pipeline stages, which would normally each be a separate kernel, and fuse them into a single "über-kernel" whose threads switch between stages under a scheduler.
  • you can use them for recursive code and cyclic graphs, not just trivial pipelines.
  • And you can even use them for dynamic branching between arbitrary points in the flow, by logically decomposing stages and entry/exit points into sub-states, or where stages internally fan out to go data parallel, dynamically.
    NVIDIA OptiX uses this pattern.
  • NVIDIA's OptiX system uses this pattern to implement an entirely different rendering "pipeline," entirely in software, and with recursion (which isn't formally supported in the CUDA model).
    It uses a just-in-time "pipeline compiler" to generate the CUDA über-kernel for a given pipeline configuration and shader binding. Effectively, a special-purpose continuation compiler.
    This is the idea behind one place I'd like to go next: explicit continuations as a low-level primitive, via a static compiler transform.
  • One thing I'm playing with doing next: a general-purpose abstraction of this idea could be much simpler than domain-specific compilers like the Larrabee shader JIT or the OptiX pipeline JIT.
    Lesson: those systems are complex because they fuse the proven need for application-specific scheduling with the compiler transform needed to support it.
    Separation of concerns: contain application-specific complexity to code *in* the system, while keeping the compiler transform totally agnostic.
    Useful for many things: texture fetches with software latency hiding, as in Larrabee; recursive ray tracing; dynamically coalescing work items; lower-level task and pipeline parallelism (producer-consumer) within generally data-parallel, arithmetically intense jobs.
    Are there good references for prior work in this area? A *static* continuation/state-machine transform - not the *dynamic*, heavyweight mechanisms from the Lisp world.
  • Popping up another level: this pipeline as a whole is driven by sequences of passes made up of many draw batches.
  • The app's logical rendering pipeline != the D3D pipeline, both for core algorithmic reasons and for performance-optimization reasons.
    THIS IS PART OF THE MOTIVATION FOR PROGRAMMABLE PIPELINES.
  • Pass folding is further motivation for programmable pipelines.

Why Graphics Is Fast, and What It Can Teach Us About Parallel Programming - Presentation Transcript

  • Why Graphics is Fast, and what it can teach us about parallel programming. Jonathan Ragan-Kelley, MIT. 7 December 2009, University College London; 13 November 2009, Harvard.
  • (me) PhD student at MIT with Frédo Durand, Saman Amarasinghe. Previously: Stanford; Industrial Light + Magic (The Lightspeed Automatic Interactive Lighting Preview System, SIGGRAPH 2007); NVIDIA, ATI (Decoupled Sampling for Real-Time Graphics Pipelines, ACM Transactions on Graphics 2010); Intel.
  • 1 game frame
  • via Johan Andersson, DICE
  • via Johan Andersson, DICE
  • Game Engine Parallelism. Each task (~200-300/frame) is potentially: data parallel (physics, sound, AI, image post-processing, streaming/decompression); pipeline parallel (internally); task parallel (entire invocations of the graphics pipeline) ↳ braided parallelism.
  • The Graphics Pipeline
  • [Pipeline diagram] Input Assembler (Vertex Buffer, Index Buffer) → Vertex Shader (Texture) → Primitive Assembler → Geometry Shader (Texture) → Setup/Rasterizer → Pixel Shader (Texture) → Output Merger (Color Buffer, Depth Buffer); all stages connected to memory.
  • [Pipeline diagram, annotated with per-stage percentages] Input Assembler 2%; Vertex Shader 10%; Primitive Assembler 2%; Geometry Shader 8%; Setup/Rasterizer 8%; Pixel Shader 50%; Output Merger 20%.
  • Fast Implementation: 90% resource utilization; massive data parallelism; fine-grained dynamic load balance; pipeline parallelism; producer-consumer locality; global dependence analysis, scheduling; efficient fixed-function.
  • Local Parallelism
  • Shader Execution Model. Highly data-parallel: vertices, primitives, fragments, pixels. Hierarchical parallelism: instruction bandwidth - SIMD ALUs; pipeline latency - vector execution/software pipelining; unpredictable latency - hardware multithreading; memory latency - dynamic threading/fibering; task independence - multicore. Regular input and output: marshaling/unmarshaling from data structures is handled in "fixed-function" for efficiency.
  • Shader Implementation. Hardware shader architecture (GPU): static SIMD warps; many dynamic hardware threads, tens of cores; fine-grained dynamic load balance; many kernels and types simultaneously; heuristic knowledge of the specific pipeline. Software shader architecture (Larrabee): similar (static SIMD, inside dynamic latency hiding, inside many-core threading) via software fibering (microthreads) and software scheduling.
  • Shader Stage-specific Variation. Vertex: less latency hiding needed, so use less local memory, [perhaps] no dynamic fibering (simpler schedule). Geometry: variable output; more state = less parallelism, but (hopefully) less latency hiding needed. Fragment: derivatives → neighborhood communication; texture intensive → more (dynamic) latency hiding.
  • Fixed-function Stages. [Input, Primitive] Assembly: hairy finite state machine; indirect indexing. Rasterization: locally data-parallel (~16x SIMD), no memory access; globally branchy, incremental, hierarchical. Output Merge: really data-parallel, but ordered read-modify-write; implicit data amplification (for antialiasing).
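    As a sketch of the "locally data-parallel" part of rasterization only (not how any particular GPU implements it): the inner step can be written as evaluating a triangle's three edge functions at every pixel of a small tile in parallel, while the branchy, incremental, hierarchical traversal that decides which tiles to visit is the part that stays irregular. Names here (Tri, rasterize_tile) are hypothetical.

        #include <cuda_runtime.h>

        struct Tri { float2 v0, v1, v2; };   // screen-space vertices

        __device__ float edge(float2 a, float2 b, float2 p)
        {
            return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
        }

        // Launch with one 16x16 thread block per tile; one thread per pixel
        // writes a coverage flag.
        __global__ void rasterize_tile(Tri t, int tile_x, int tile_y,
                                       unsigned char* coverage, int pitch)
        {
            int px = tile_x * 16 + threadIdx.x;
            int py = tile_y * 16 + threadIdx.y;
            float2 p = make_float2(px + 0.5f, py + 0.5f);   // sample at pixel center

            bool inside = edge(t.v0, t.v1, p) >= 0.0f &&
                          edge(t.v1, t.v2, p) >= 0.0f &&
                          edge(t.v2, t.v0, p) >= 0.0f;
            coverage[py * pitch + px] = inside ? 1 : 0;
        }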
  • Pipeline Implementation
  • Ordering Semantics. Logically sequential: input primitive order defines framebuffer update order. Pixels are independent: spatial parallelism in the framebuffer. Buffer to reorder otherwise. (Could be looser in custom pipelines.)
  • Sort-Last. Fragment shaders fully parallel: buffer to reorder between stages (logically FIFO); 100s-1,000s of triangles in flight. Output Merger is screen-space parallel: crossbar from Pixel Shade to Output Merge ("sort-last"); fine-grained scoreboarding. Full producer-consumer locality: all inter-stage communication on-chip; primitive-order processing; framebuffer cache filters some read-modify-write bandwidth.
  • Sort-Middle. Front-end: transform/vertex processing; scatter all geometry to screen-space tiles; merge sort (through memory); maintain primitive order. Back-end: pixel processing; in-order (per-pixel) scoreboard; per-pixel → screen-space parallelism; one tile per core; framebuffer data in local store; one scheduler, several worker [hardware] threads collaborating; Pixel Shader + Output Merge together; lower scheduling overhead.
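    A grossly simplified host-side sketch of that front-end/back-end split (names like bin_triangles and shade_tile are illustrative; real binning is far more involved): the front end bins post-transform triangles into screen-space tiles while preserving submission order, and the back end then processes each tile independently against a tile-local framebuffer, so the ordered read-modify-write never has to leave the chip.

        #include <vector>
        #include <algorithm>

        struct Tri  { float x0, y0, x1, y1, x2, y2; };   // post-transform, 2D
        struct Tile { std::vector<int> tri_ids; };       // bin of triangle ids

        const int TILE = 64;                              // 64x64-pixel tiles

        // Front end: scatter ("merge sort through memory") triangles into tile
        // bins, keeping primitive order within each bin.
        void bin_triangles(const std::vector<Tri>& tris, int w, int h,
                           std::vector<Tile>& tiles)
        {
            int tw = (w + TILE - 1) / TILE, th = (h + TILE - 1) / TILE;
            tiles.assign(tw * th, Tile{});
            for (int id = 0; id < (int)tris.size(); ++id) {
                const Tri& t = tris[id];
                int x0 = (int)std::min({t.x0, t.x1, t.x2}) / TILE;
                int x1 = (int)std::max({t.x0, t.x1, t.x2}) / TILE;
                int y0 = (int)std::min({t.y0, t.y1, t.y2}) / TILE;
                int y1 = (int)std::max({t.y0, t.y1, t.y2}) / TILE;
                for (int ty = std::max(y0, 0); ty <= std::min(y1, th - 1); ++ty)
                    for (int tx = std::max(x0, 0); tx <= std::min(x1, tw - 1); ++tx)
                        tiles[ty * tw + tx].tri_ids.push_back(id);  // order preserved
            }
        }

        // Back end: one tile per core; pixel shading and output merge run
        // together against the tile's framebuffer held in local memory.
        void shade_tile(const Tile& tile /*, tile-local framebuffer, shaders, ... */)
        {
            for (int id : tile.tri_ids) {
                (void)id;   // rasterize, shade, and blend in primitive order here
            }
        }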
  • Game Frame (return to the task system to put it back in context; reinforce the braided nature).
  • Existing Models
  • Parallel Programming Models. Task parallelism (Cilk, Jade, etc.): dynamic, heavyweight items; very flexible, high overhead. Data parallelism (NESL, FortranX, C*): dynamic, lightweight items; flexible, moderate overhead. Streaming (StreamIt, Brook): static data + pipeline parallelism; very low overhead, but inflexible ➞ load imbalance.
  • CUDA, GPGPU. Canonical model: thread = work-item; one kernel at a time; purely data parallel; streaming data access. Key challenges: dynamic load balance; producer-consumer locality.
  • Future directions
  • Task decomposition continuing to grow. Past: subsystem threads (2-3 threads utilized). Present: task systems (6-8 threads, some headroom). Future: many more tasks; reduce false dependence; orchestration language? More data and braided parallelism within tasks: essential to extracting FLOPS from future throughput chips. "Über-kernels" (NVIDIA ray tracing and Reyes demos; OptiX system): dynamic task scheduling inside a single-kernel, purely data-parallel architecture.
  • Über-kernels (each logical stage as its own function):
        stage_1():
            // do stage 1…
        stage_2():
            // do stage 2…
        stage_3():
            // do stage 3…
  • Über-kernels (the stages fused into one kernel with a scheduler):
        while(true):
            state = scheduler()
            switch(state):
                case stage_1:
                    // do stage 1…
                case stage_2:
                    // do stage 2…
                case stage_3:
                    // do stage 3…
  • Über-kernels (stages split into sub-states):
        while(true):
            state = scheduler()
            switch(state):
                case stage_1:
                    // do stage 1…
                case stage_2_1:
                    // beginning of 2…
                case stage_2_2:
                    // end of 2…
                case stage_3_1:
                    // beginning of 3…
                case stage_3_2:
                    // end of 3…
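    A hedged CUDA sketch of the pattern on the slides above (the queue layout, stage bodies, and the atomic-counter "scheduler" are hypothetical simplifications, not OptiX's design): one persistent kernel whose threads repeatedly ask a scheduler for work and switch on the logical stage.

        #include <cuda_runtime.h>

        enum Stage { STAGE_1 = 0, STAGE_2 = 1, STAGE_3 = 2 };

        struct WorkItem { int stage; int data; };

        struct Queue {
            WorkItem* items;   // pre-filled work items
            int*      head;    // next item to take (device counter)
            int       count;   // total items enqueued
        };

        __global__ void uber_kernel(Queue q, float* out)
        {
            while (true) {
                int idx = atomicAdd(q.head, 1);      // "state = scheduler()"
                if (idx >= q.count) return;          // no more work: retire
                WorkItem w = q.items[idx];

                switch (w.stage) {                   // "switch(state)"
                case STAGE_1: out[w.data]  = 1.0f; break;   // do stage 1…
                case STAGE_2: out[w.data] += 2.0f; break;   // do stage 2…
                case STAGE_3: out[w.data] *= 3.0f; break;   // do stage 3…
                }
                // A real system would also let stages push new work
                // (amplification) and split long stages into sub-states,
                // as on the slide above.
            }
        }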
  • The Ray Tracing Pipeline. Host: Buffers, Texture Samplers, Entry Points, Variables. Ray Generation Program; Exception Program. Traversal (Trace): Intersection Program, Any Hit Program, Selector Visit Program. Ray Shading: Closest Hit Program, Miss Program.
  • Explicit continuation programs:
        stage_1():
            non_blocking_fetch()
            if(c): recurse(s_1)
            else: tail_call(s_2)
        non_blocking_fetch():
            prefetch()
            call/cc(myScheduler)
            return result
        recurse(s):
            yield(myScheduler)
            s()
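    A hedged sketch of what a compile-time continuation transform might emit for code like the above (all names and the per-thread "scheduler" loop are hypothetical; a real scheduler would interleave many tasks between the two halves of stage 2 to actually hide the fetch latency): the stage is split at the long-latency fetch into two switch cases, and the "continuation" is just the saved live state plus the label at which to resume.

        #include <cuda_runtime.h>

        enum State { STAGE_1, STAGE_2_1, STAGE_2_2, STAGE_3, DONE };

        struct Task {
            State state;     // which sub-stage to run next (the saved "return address")
            int   item;      // which element this task works on
            float fetched;   // live value carried across the split point
        };

        __global__ void cps_uber_kernel(Task* tasks, int num_tasks,
                                        const float* texture, float* out)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= num_tasks) return;
            Task t = tasks[i];

            while (t.state != DONE) {                // trivial per-thread scheduler
                switch (t.state) {
                case STAGE_1:
                    t.state = STAGE_2_1;
                    break;
                case STAGE_2_1:                      // beginning of 2…
                    t.fetched = texture[t.item];     // issue the "slow" load
                    t.state   = STAGE_2_2;           // save the continuation
                    break;
                case STAGE_2_2:                      // end of 2…
                    out[t.item] = t.fetched * 2.0f;  // resume with the result
                    t.state     = STAGE_3;
                    break;
                case STAGE_3:
                default:
                    t.state = DONE;
                    break;
                }
            }
            tasks[i] = t;
        }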
  • Acknowledgements. CPS ideas: Tim Foley, Mark Lacey, Mark Leone (Intel). General discussion: Saman Amarasinghe, Frédo Durand (MIT); Solomon Boulos, Kurt Akeley (Stanford/MSR). OptiX details, slides: Austin Robison, Steve Parker (NVIDIA). Current game-engine details, figures: Johan Andersson (DICE).
  • extras
  • Need. Locality across many (potentially competing) axes: application-specific scheduling knowledge; autotuning for a complex optimization space? Braids of parallelism: data parallelism (static - SIMD, extreme arithmetic intensity; dynamic - latency hiding); task parallelism (static - pipelines; dynamic - to add dynamism to the data-parallel layer).
  • Key issues. Hierarchy: braided decomposition. Managing complexity: separating orchestration from kernel implementation, recursively, at multiple levels. Hybrid static-dynamic: extreme arithmetic intensity + dynamic load balance. Hybrid pipeline and data-parallel: competing axes of locality.
  • Task Optimization
  • Draw Batch: a group of primitives bound to common state. JIT compile, optimize on rebinding state: changing any shader stage (potential inter-stage elimination of dead outputs, late-binding constant folding); changing some fixed-function mode state; not on changing inputs (too expensive). "Near-static" compilation.
  • Rendering Pass: a common output buffer binding with no swap/clear. Synthesize a renderer from the canonical pipeline: render shadow and environment maps; 2D image post-processing. Optimize performance: avoid wasted work (Z cull pre-pass, deferred shading); reorder/coalesce batches (deferred lighting).
  • Inter-pass Optimization. Buffers never consumed: keep in scratchpad; "fast clear"; use optimized/native formats; resolve antialiasing before write-out. Passes overlapped: no startup/wind-down bubbles. Pass folding: e.g. merge 2D post-processing into the back-end stage, on local tile memory.
  • Control vs. Data Dependence. A simple task graph has control dependence between stages. D3D scheduling is based on resource (data) dependence, not control dependence: finer-grained scheduling, fewer false hazards. Generalize to the whole engine core? AI, sound; physics: rigid, soft, IK, cloth, particles, fracture.
  • Core principles (TODO: update). Hierarchical parallelism. Shader-style data-parallel kernels. Kernel fusion/global optimization. Separate kernel and orchestration: separate-but-connected languages/runtime models. Resource-based scheduling. Application-specific/user-controlled scheduling and memory management.