GRAPHICS PIPELINING
IN GPU
A Presentation
Of Subject
ADVANCED OPERATING SYSTEMS.
BASE CONCEPTS
 Computer graphics
 Raster graphics
 Vector graphics
 3D scene
 3D object
 Rasterization
 2D graphics
 3D graphics
BASE CONCEPTS
 Rendering
 Graphics card
 Hardware accelerator
 DirectX
 OpenGL
 3D modelling
 Clipping
 Interpolation
 GPU
THE BEGINNING
 Once upon a time, graphics development was
limited to writing a relatively small matrix of color
intensities (the frame buffer) and sending it to
hardware.
 A 640×480px image (a common resolution at that
time) contains 307,200 pixels. CPUs were so much
slower then that you couldn't really do much in the
short time given to you to draw each frame, if you
wanted to keep drawing at 30 FPS.
 Then came along the cool GPUs with rendering
pipelines.
GRAPHICS PIPELINING
 In 3D graphics rendering, the pipeline is the series of stages required to
transform a three-dimensional scene into a two-dimensional image on the
screen. The stages are responsible for processing information initially
provided just as properties at the end points (vertices) or control
points of the geometric primitives used to describe what is to be
rendered. The typical primitives in 3D graphics are lines and
triangles. The properties provided per vertex typically include x-y-z
coordinates, among other attributes.
An Assembly Line
Graphics rendering is like a manufacturing assembly line with
each stage adding something to the previous one. Within a
graphics processor, all stages are working in parallel. Because of
this pipeline architecture, today's graphics processing units
(GPUs) perform billions of geometry calculations per second. They
are increasingly designed with more memory and more stages, so
that more data can be worked on at the same time.
TYPES OF GRAPHICS PIPELINING
 There are two types of graphics
pipelining.
 FIXED FUNCTION RENDERING PIPELINE.
 PROGRAMMABLE RENDERING PIPELINE.
FIXED FUNCTION PIPELINING
 We developers would send our instructions to the GPU, and it would do the heavy lifting on
them.
 Those were fixed-function rendering pipelines:
 meaning that we could not change the functions they performed. We could tell them "make the
fog dark gray" and we could configure a lot of other parameters, but the functions themselves
remained fixed.
 The hardware was wired and narrowly specialized so that it performed some standard operations
on your data. And because it was wired that way, it was so much faster than doing them on the
CPU. The downside was that you paid a lot for that speed: you paid in
flexibility. But if you ever wanted to draw anything complex, the CPU simply wasn't fast enough.
 This is roughly how the fixed-function pipeline worked.
 The graphics world was still changing rapidly. Just like all creative and talented people, developers
love challenges, and one of the challenges for them was (and will remain, for a long time!) to
render ever better-looking, realistic images.
 The fixed-function pipeline provided some nice features, such as multiple blending modes, per-
vertex Gouraud shading, fog effects, stencil buffers (for shadow volumes) and such, so the
developers used what they could. Soon, there were some really impressive effects going on, all
by virtue of real-life phenomena simulated using a few cheap tricks (well, cheap by today's
standards).
 This was all going very well, but it was still limited by the number of functions the fixed pipeline
could perform. After all, the real world has so many different materials, and to simulate them, the
only variation developers could apply was changing some blend modes, adding some more
textures, or tweaking the light reflection colors.
PROGRAMMABLE PIPELINING
 Then it happened: the first programmable GPUs came along.
 These GPUs had what was called a
 programmable rendering pipeline: we could now
write programs, called shaders, in a limited assembly
language, and have them execute for each vertex or fragment.
 This was a big leap forward, and it kept getting better.
 Soon the shader languages increased in complexity and
expressiveness, and high-level languages for GPU programming
emerged, such as HLSL, GLSL, and Cg. Today we have
geometry shaders, which can even stream out new vertices, and
tessellation shaders, which dynamically subdivide geometry;
inside them we can sample a great many textures,
branch dynamically, and do all sorts of math on the input
values.
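To make this concrete, below is a minimal GLSL vertex/fragment shader pair of the kind described here, held in Python string constants for reference. The attribute and uniform names are illustrative, not from the presentation.

```python
# A minimal GLSL 3.30 shader pair, stored as Python strings for reference.
# The vertex shader runs once per vertex; the fragment shader once per fragment.

VERTEX_SHADER = """
#version 330 core
layout(location = 0) in vec3 in_position;   // per-vertex position attribute
layout(location = 1) in vec3 in_color;      // per-vertex color attribute
uniform mat4 u_mvp;                         // combined model-view-projection matrix
out vec3 v_color;                           // interpolated and passed to fragments
void main() {
    v_color = in_color;
    gl_Position = u_mvp * vec4(in_position, 1.0);
}
"""

FRAGMENT_SHADER = """
#version 330 core
in vec3 v_color;        // color interpolated by the rasterizer
out vec4 frag_color;    // final fragment color
void main() {
    frag_color = vec4(v_color, 1.0);
}
"""
```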
SWITCH
 Soon, fixed-function pipelines were obsolete, at least for
game developers. After all, why bother with such walled
gardens when you can program precisely what happens to
your data? They stayed in use for a lot longer in some
applications, but they were increasingly ignored.
 OpenGL ES 2.0, released in 2007, dropped the
fixed-function pipeline in favor of a programmable one.
OpenGL 3.2, back in 2009, finally removed all notion of
fixed-function vertex and fragment processing (though it
remains available for legacy use via the compatibility profile).
It makes very little sense today to work with
the limited pipeline when you've got powerful GPUs capable
of doing awesome things at your disposal.
HOW IMAGE IS DISPLAYED
A modern computer has a dedicated Graphics Processing
Unit (GPU) to produce images for the display, with its
own graphics memory (Video RAM, or VRAM).
 All modern displays are raster-based. A raster is a 2D
rectangular grid of pixels (or picture elements). A pixel
has two properties: a color and a position. Color is
expressed in RGB (Red-Green-Blue) components -
typically 8 bits per component, or 24 bits per pixel (known
as true color). The position is expressed in terms of (x, y)
coordinates. The origin (0, 0) is located at the top-left
corner, with the x-axis pointing right and the y-axis pointing
down. This differs from conventional 2D
Cartesian coordinates, where the y-axis points upwards.
HOW IMAGE IS DISPLAYED
 The number of color bits per pixel is called
the depth (or precision) of the display. The number of rows
and columns of the rectangular grid is called
the resolution of the display, which can range from
640x480 (VGA), 800x600 (SVGA) and 1024x768 (XGA) up to
1920x1080 (FHD), or even higher.
 The color values of the pixels are stored in a special part of
graphics memory called frame buffer. The GPU writes the
color value into the frame buffer. The display reads the
color values from the frame buffer row-by-row, from left-
to-right, top-to-bottom, and puts each of the values onto
the screen. This is known as raster-scan. The display
refreshes its screen several dozen times per second,
typically 60Hz for LCD monitors and higher for CRT tubes.
This is known as the refresh rate.
 A complete screen image is called a frame.
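As a rough sketch of the above (resolution and pixel values are illustrative), a 24-bit frame buffer can be modeled as a small numpy array:

```python
import numpy as np

# A minimal sketch of a 24-bit frame buffer for an 800x600 (SVGA) display:
# rows x columns x 3 color components (R, G, B), 8 bits each.
WIDTH, HEIGHT = 800, 600
frame_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)

print(frame_buffer.nbytes)          # 800 * 600 * 3 = 1,440,000 bytes

# Origin (0, 0) is the top-left corner; y grows downward.
x, y = 10, 20
frame_buffer[y, x] = (255, 0, 0)    # write a pure-red pixel at (10, 20)

# A raster scan reads the buffer row by row, left to right, top to bottom:
for row in frame_buffer:            # top to bottom
    for rgb in row:                 # left to right
        pass                        # the display would emit this pixel's color
```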
GRAPHICS PIPELINING STEPS
The 3D graphics rendering pipeline consists of the
following main stages:
 Vertex Processing: process and transform
individual vertices.
 Rasterization: convert each primitive
(connected vertices) into a set of fragments. A
fragment can be treated as a pixel in 3D space,
aligned with the pixel grid, with attributes
such as position, color, normal and texture.
 Fragment Processing: process individual
fragments.
 Output Merging: combine the fragments of all
primitives (in 3D space) into 2D color pixels for the
display.
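The four stages can be summarized in a short structural sketch (Python; all function and parameter names are illustrative, and a real GPU runs these stages in parallel in hardware):

```python
# A structural sketch of the four main pipeline stages (names illustrative).

def vertex_processing(vertices, transform):
    # Transform each vertex independently; relationships between
    # vertices (primitives) are not considered at this stage.
    return [transform(v) for v in vertices]

def rasterization(primitive, scan_convert):
    # Convert one primitive into grid-aligned fragments, with
    # attributes interpolated from its vertices.
    return scan_convert(primitive)

def fragment_processing(fragments, shade):
    # Compute each fragment's final color (texturing, lighting, fog, ...).
    return [shade(f) for f in fragments]

def output_merging(fragments, frame_buffer, depth_buffer):
    # Keep the nearest fragment per pixel (depth test) and write its color.
    for x, y, z, color in fragments:       # each fragment as (x, y, z, color)
        if z < depth_buffer[(x, y)]:
            depth_buffer[(x, y)] = z
            frame_buffer[(x, y)] = color
```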
IMPORTANT!
“ In modern GPUs, the vertex processing stage and fragment
processing stage are programmable. You can write programs,
known as vertex shaders and fragment shaders, to perform your
custom transforms for vertices and fragments. The shader
programs are written in C-like high-level languages such as
GLSL (OpenGL Shading Language), HLSL (High-Level Shading
Language for Microsoft Direct3D), or Cg (C for Graphics by
NVIDIA).
On the other hand, the rasterization and output merging
stages are not programmable, but configurable, via
configuration commands issued to the GPU.”
3D COORDINATE SYSTEM
 The inputs to the graphics rendering
pipeline are geometric primitives
(such as triangles, points, lines or quads),
which are formed by one or more vertices.
Vertices, Primitives, Fragments
and Pixels
 Recall that a primitive is made up of one or more vertices. A vertex, in
computer graphics, has these attributes:
 Position in 3D space V=(x, y, z): typically expressed in floating-point
numbers.
 Color: expressed in RGB (Red-Green-Blue) or RGBA (Red-Green-Blue-
Alpha) components.
 Vertex-Normal N=(nx, ny, nz): We are familiar with the concept of surface
normal, where the normal vector is perpendicular to the surface. In
computer graphics, however, we need to attach a normal vector to each
vertex, known as the vertex-normal. Normals are used to differentiate the
front- and back-face, and for other processing such as lighting. The
normal points outwards, indicating the outer surface (or front-face).
 Texture T=(s, t): In computer graphics, we often wrap a 2D image onto an
object to make it look realistic. A vertex can have 2D texture
coordinates (s, t), which provide a reference point into a 2D texture
image.
 Other attributes.
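A minimal sketch of these per-vertex attributes as a Python data structure (field names are illustrative):

```python
from dataclasses import dataclass
from typing import Tuple

# A minimal sketch of the per-vertex attributes listed above.

@dataclass
class Vertex:
    position: Tuple[float, float, float]          # (x, y, z) in 3D space
    color: Tuple[float, float, float, float]      # RGBA components
    normal: Tuple[float, float, float]            # vertex-normal (nx, ny, nz)
    texcoord: Tuple[float, float]                 # texture coordinates (s, t)

v = Vertex(position=(0.0, 1.0, 0.0),
           color=(1.0, 0.0, 0.0, 1.0),
           normal=(0.0, 0.0, 1.0),
           texcoord=(0.5, 1.0))
```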
Vertices, Primitives, Fragments
and Pixels
 Indexed Vertices
 Primitives often share vertices. Instead of
repeatedly specifying the shared vertices, it is
more efficient to create an index list of
vertices, and use the indexes to specify
the primitives.
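For example, two triangles forming a quad share two vertices; with indexing, each shared vertex is stored once (a sketch with illustrative data):

```python
# Two triangles forming a quad share an edge. With indexing, the shared
# vertices are stored once and referenced by their position in the list.

vertices = [
    (0.0, 0.0, 0.0),   # index 0
    (1.0, 0.0, 0.0),   # index 1
    (1.0, 1.0, 0.0),   # index 2
    (0.0, 1.0, 0.0),   # index 3
]

# Without indexing: 6 full vertices (2 of them duplicated).
# With indexing: 4 vertices plus 6 small integer indexes.
triangles = [
    (0, 1, 2),   # first triangle
    (0, 2, 3),   # second triangle, reusing vertices 0 and 2
]
```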
Vertices, Primitives, Fragments
and Pixels
 Pixels refer to the dots on the display, which
are aligned in a 2-dimensional grid of
rows and columns corresponding to
the display's resolution. A pixel is 2-
dimensional, with an (x, y) position and an RGB
color value (there is no alpha value for
pixels). The purpose of the graphics
rendering pipeline is to produce the color
value for every pixel displayed on the
screen, given the input primitives.
Vertices, Primitives, Fragments
and Pixels
 In order to produce the grid-aligned pixels for
the display, the rasterizer of the graphics
rendering pipeline, as its name implies, takes
each input primitive and performs a raster
scan to produce a set of grid-aligned
fragments enclosed within the primitive. A
fragment is 3-dimensional, with an (x, y, z)
position. The (x, y) are aligned with the 2D
pixel grid. The z-value (not grid-aligned)
denotes its depth.
Vertices, Primitives, Fragments
and Pixels
 Fragments are produced via interpolation of
the vertices. Hence, a fragment has all the
vertex attributes, such as color, fragment-
normal and texture coordinates.
 In modern GPUs, vertex processing and
fragment processing are programmable. The
programs are called the vertex shader and the
fragment shader.
 (Direct3D uses the term "pixel" for
"fragment".)
Vertex Processing
 The process used to produce a 3D scene on the display in
computer graphics is like taking a photograph with a
camera. It involves four transformations:
 Arrange the objects (or models, or avatars) in the world
(Model Transformation or World Transformation).
 Position and orient the camera (View Transformation).
 Select a camera lens (wide angle, normal or telescopic) and
adjust the focal length and zoom factor to set the camera's
field of view (Projection Transformation).
 Print the photo on a selected area of the paper (Viewport
Transformation) - this happens in the rasterization stage.
Vertex Processing
 A transform converts a vertex V from one
space (or coordinate system) to another
space as V'. In computer graphics, a transform
is carried out by multiplying the vector with
a transformation matrix, i.e., V' = M V.
Vertex Processing
 Each object (or model or avatar) in a 3D
scene is typically drawn in its own coordinate
system, known as its model space (or local
space, or object space). As we assemble the
objects, we need to transform
the vertices from their local spaces to
the world space, which is common to all the
objects. This is known as the world
transform. The world transform consists of a
series of scaling (scale the object to match
the dimensions of the world), rotation (align
the axes), and translation (move the origin).
Vertex Processing
 MATHEMATICALLY:
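A minimal sketch of the math, assuming homogeneous 4x4 matrices and a rotation about the z-axis (all values are illustrative):

```python
import numpy as np

# A sketch of a world transform V' = M V in homogeneous coordinates:
# scale, then rotate (here about the z-axis), then translate.

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# Compose: the translation is applied last, so it is leftmost in the product.
M = translation(5, 0, 0) @ rotation_z(np.pi / 2) @ scaling(2, 2, 2)

v = np.array([1.0, 0.0, 0.0, 1.0])   # vertex (1, 0, 0) in model space
print(M @ v)                          # -> [5. 2. 0. 1.] in world space
```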
Vertex Processing
 After the world transform, all the objects
are assembled into the world space. We
shall now place the camera to capture the
view.
Vertex Processing
 Positioning the Camera
 In 3D graphics, we position the camera in the
world space by specifying three view
parameters: EYE, AT and UP, all in world space.
 The point EYE (ex, ey, ez) defines the location of
the camera.
 The vector AT (ax, ay, az) denotes the direction
in which the camera is aiming, usually at the
center of the world or at an object.
 The vector UP (ux, uy, uz) roughly denotes the upward
orientation of the camera. UP typically
coincides with the y-axis of the world space.
Vertex Processing
 Computing the Camera Coordinates
 From EYE, AT and UP, we first form the
coordinate (xc, yc, zc) for the camera,
relative to the world space. We fix zc to be
the opposite of AT, i.e., AT is pointing at
the -zc. We can obtain the direction
of xc by taking the cross-product of AT
and UP. Finally, we get the direction
of yc by taking the cross-product
of xc and zc.
Vertex Processing
 CROSS PRODUCT:
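A minimal sketch of this construction with numpy (the EYE, AT and UP values are illustrative; AT is taken here as the point aimed at, so the aiming direction is AT - EYE):

```python
import numpy as np

# Forming the camera frame (xc, yc, zc) from EYE, AT and UP.

EYE = np.array([0.0, 0.0, 5.0])
AT  = np.array([0.0, 0.0, 0.0])
UP  = np.array([0.0, 1.0, 0.0])

def normalize(v):
    return v / np.linalg.norm(v)

aim = normalize(AT - EYE)          # direction the camera is aiming
zc = -aim                          # zc is the opposite of the aiming direction
xc = normalize(np.cross(aim, UP))  # xc = aim x UP, perpendicular to both
yc = np.cross(zc, xc)              # yc = zc x xc completes the right-handed frame

print(xc, yc, zc)                  # -> [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]
```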
Vertex Processing
 Transforming from World Space to Camera
Space
 Now, the world space is represented by standard
orthonormal bases (e1, e2, e3), where e1=(1, 0,
0), e2=(0, 1, 0) and e3=(0, 0, 1), with origin at
O=(0, 0, 0). The camera space has orthonormal
bases (xc, yc, zc) with origin at EYE=(ex, ey, ez).
 It is much more convenience to express all the
coordinates in the camera space. This is done
via view transform.
 The view transform consists of two operations: a
translation, followed by a rotation :
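A sketch of assembling the view matrix from these two operations (the camera frame here is aligned with the world axes so the result is easy to check; values are illustrative):

```python
import numpy as np

# View transform: translate by -EYE, then rotate the world axes onto
# the camera basis (xc, yc, zc), e.g. as computed in the previous step.

def view_matrix(eye, xc, yc, zc):
    T = np.eye(4)
    T[:3, 3] = -eye                              # translation: move EYE to the origin
    R = np.eye(4)
    R[0, :3], R[1, :3], R[2, :3] = xc, yc, zc    # rotation: rows are the camera axes
    return R @ T                                 # translation first, then rotation

EYE = np.array([0.0, 0.0, 5.0])
xc, yc, zc = np.eye(3)               # camera aligned with the world axes
V = view_matrix(EYE, xc, yc, zc)

# The world origin lies 5 units in front of the camera, i.e. at z = -5
# in camera space:
print(V @ np.array([0.0, 0.0, 0.0, 1.0]))   # -> [ 0.  0. -5.  1.]
```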
Vertex Processing
 Model-View Transform
 In Computer Graphics, moving the objects
relative to a fixed camera (Model
transform), and moving the camera
relative to a fixed object (View transform)
produce the same image, and therefore
are equivalent.
Vertex Processing
 Once the camera is positioned and oriented,
we need to decide what it can see (analogous
to choosing the camera's field of view by
adjusting the focal length and zoom factor),
and how the objects are projected onto the
screen. This is done by selecting a projection
mode and specifying a viewing volume
or clipping volume. Objects outside the
clipping volume are clipped out of the scene
and cannot be seen.
Vertex Processing
 View Frustum in Perspective View
 The camera has a limited field of view, which forms
a view frustum (a truncated pyramid), specified
by four parameters: fovy, aspect, zNear and zFar.
 fovy: specifies the total vertical angle of view in
degrees.
 aspect: the ratio of width to height. For a
particular z, we can get the height from the fovy, and
then get the width from the aspect.
 zNear: the near plane.
 zFar: the far plane.
 The camera space (xc, yc, zc) is renamed to the
familiar (x, y, z) for convenience.
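A small worked example of the height/width relation just stated (parameter values are illustrative):

```python
import math

# For a given distance z from the camera, the visible height follows
# from fovy, and the width from the aspect ratio.

def frustum_size(fovy_degrees, aspect, z):
    height = 2.0 * z * math.tan(math.radians(fovy_degrees) / 2.0)
    width = aspect * height
    return width, height

# e.g. a 45-degree vertical field of view, 4:3 aspect, near plane at z = 0.1:
print(frustum_size(45.0, 4.0 / 3.0, 0.1))   # -> approximately (0.110, 0.083)
```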
Vertex Processing
 Clipping-Volume Cuboid
 Next, we apply a so-called projection
matrix to transform the view frustum into
an axis-aligned cuboid clipping volume of
2x2x1 centered on the near plane, as
illustrated. The near plane has z=0,
whereas the far plane has z=-1. The
planes have dimensions of 2x2, ranging
from -1 to +1.
Vertex Processing
 The final step is to flip the z-axis, so that
the near plane is still located at z=0, but
the far plane is flipped and located at z=1
(instead of z=-1). In other words, the
larger the z, the farther away the object.
Vertex Processing
 Besides the commonly used perspective
projection, there is another, so-
called orthographic projection (or parallel
projection), which is a special case where
the camera is placed very far away from
the world (analogous to using a telescopic
lens). The view volume for orthographic
projection is a parallelepiped (instead of a
frustum as in perspective projection).
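For reference, here is a sketch of the standard OpenGL-style orthographic projection matrix, which maps the box [l, r] x [b, t] x [-n, -f] to the cube [-1, 1]^3; the presentation's 2x2x1 cuboid convention differs slightly, but the construction is analogous:

```python
import numpy as np

# Standard OpenGL-style orthographic projection (glOrtho-like).
def orthographic(l, r, b, t, n, f):
    # Maps x in [l, r] and y in [b, t] to [-1, 1], and z in [-n, -f] to [-1, 1].
    return np.array([
        [2/(r-l), 0,       0,        -(r+l)/(r-l)],
        [0,       2/(t-b), 0,        -(t+b)/(t-b)],
        [0,       0,       -2/(f-n), -(f+n)/(f-n)],
        [0,       0,       0,        1.0],
    ])

# A point on the near plane (z = -n) maps to z = -1:
print(orthographic(-1, 1, -1, 1, 0.1, 10.0) @ np.array([0.0, 0.0, -0.1, 1.0]))
```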
Vertex Processing
 Each vertex is transformed and positioned in the
clipping-volume cuboid space, together with its
vertex-normal. The x and y coordinates (in the
range of -1 to +1) represent its position on the
screen, and the z value (in the range of 0 to 1)
represents its depth, i.e., how far away it is from the
near plane.
 The vertex processing stage transforms individual
vertices. The relationships between vertices (i.e.,
primitives) are not considered in this stage.
RASTERIZATION
 In the previous vertex processing stage,
the vertices, which are usually represented
as floating-point values, are not necessarily
aligned with the pixel grid of the display.
The relationships between vertices, in terms
of primitives, are also not considered.
RASTERIZATION
 In the rasterization stage, each primitive
(such as a triangle, quad, point or line),
which is defined by one or more vertices,
is raster-scanned to obtain a set of fragments
enclosed within the primitive. Fragments can
be treated as 3D pixels, which are aligned
with the pixel grid. 2D pixels have a
position and an RGB color value. The 3D
fragments, which are interpolated from the
vertices, have the same set of attributes as
the vertices, such as position, color, normal and
texture. The rasterizer is not programmable, but
configurable.
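A minimal sketch of raster-scanning a single triangle, with attributes interpolated from the vertices via barycentric weights (a software illustration with illustrative data; the real rasterizer is fixed-function hardware):

```python
import numpy as np

def edge(a, b, p):
    # Signed area term; positive if p is to the left of edge a -> b.
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def rasterize_triangle(v0, v1, v2, attrs0, attrs1, attrs2, width, height):
    area = edge(v0, v1, v2)                  # twice the triangle's signed area
    fragments = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)           # sample at the pixel center
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:       # inside (CCW winding)
                b0, b1, b2 = w0/area, w1/area, w2/area
                # Interpolate any per-vertex attribute (color, depth, ...):
                attrs = b0*attrs0 + b1*attrs1 + b2*attrs2
                fragments.append((x, y, attrs))
    return fragments

# Usage: interpolate an RGB color across a small triangle.
frags = rasterize_triangle((1, 1), (8, 2), (4, 7),
                           np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]),
                           np.array([0.0, 0.0, 1.0]),
                           width=10, height=10)
print(len(frags))
```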
VIEWPORT
 A viewport is a rectangular display area on the
application window, measured in
screen coordinates (in pixels, with origin at
the top-left corner). A viewport defines the
size and shape of the display area that maps the
projected scene captured by the camera onto
the application window. It may or may not
occupy the entire screen.
 In 3D graphics, a viewport is 3-dimensional
to support z-ordering.
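A sketch of the viewport mapping from normalized coordinates to screen pixels (assuming normalized x, y in [-1, 1] and a top-left origin; the viewport rectangle values are illustrative):

```python
# Map normalized x, y in [-1, 1] to a rectangular pixel area
# (vx, vy, width, height) on the window, flipping y so the
# origin ends up at the top-left corner.

def viewport_transform(x_ndc, y_ndc, vx, vy, width, height):
    sx = vx + (x_ndc + 1.0) * 0.5 * width
    sy = vy + (1.0 - (y_ndc + 1.0) * 0.5) * height   # flip y downward
    return sx, sy

print(viewport_transform(0.0, 0.0, 0, 0, 800, 600))   # center -> (400.0, 300.0)
print(viewport_transform(-1.0, 1.0, 0, 0, 800, 600))  # top-left -> (0.0, 0.0)
```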
Back-Face Culling
 While view frustum culling discards objects
outside the view frustum, back-face
culling discards primitives which are not
facing the camera.
 Back-face culling should not be enabled if
the object is transparent and alpha
blending is enabled.
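A minimal sketch of a back-face test after projection, assuming counter-clockwise winding marks a front-facing triangle (vertex data is illustrative):

```python
# After projection, a triangle whose screen-space vertices appear in
# clockwise order (negative signed area) faces away from the camera
# and can be discarded.

def is_back_facing(v0, v1, v2):
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    signed_area = ax * by - ay * bx   # z-component of the 2D cross product
    return signed_area < 0            # assumes counter-clockwise = front-facing

print(is_back_facing((0, 0), (1, 0), (0, 1)))   # False: front-facing (CCW)
print(is_back_facing((0, 0), (0, 1), (1, 0)))   # True: back-facing (CW)
```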
Fragment Processing
 After rasterization, we have a set of
fragments for each primitive. A fragment
has a position, which is aligned to the
pixel grid. It has a depth, color, normal
and texture coordinates, which are
interpolated from the vertices.
 Fragment processing focuses on
texturing and lighting, which have the
greatest impact on the quality of the final
image.
Fragment Processing
 The operations involved in the fragment
processor are:
 The first operation in fragment processing is
texturing.
 Next, primary and secondary colors are
combined, and fog calculation may be applied.
 The optional scissor test, alpha test, stencil test,
and depth-buffer test are carried out, if enabled.
 Then, the optional blending, dithering, logical
operation, and bitmasking may be performed.
Output Merging
 The z-buffer (or depth buffer) can be used to
remove hidden surfaces (surfaces blocked by
other surfaces that cannot be seen from the
camera). The z-buffer of the screen is
initialized to 1 (farthest) and the color buffer
is initialized to the background color. For each
fragment (of each primitive) processed, its z-
value is checked against the buffered value. If
its z-value is smaller, its
color and z-value are copied into the buffer.
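A minimal sketch of this depth test (buffer size and colors are illustrative):

```python
# The z-buffer starts at 1.0 (farthest), the color buffer at the
# background color; a fragment wins a pixel only if it is nearer
# than what is already stored there.

WIDTH, HEIGHT = 4, 4
BACKGROUND = (0, 0, 0)
z_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def merge(x, y, z, color):
    if z < z_buffer[y][x]:          # nearer than the stored fragment?
        z_buffer[y][x] = z
        color_buffer[y][x] = color  # overwrite both color and depth

merge(1, 1, 0.6, (255, 0, 0))   # red fragment at depth 0.6
merge(1, 1, 0.3, (0, 255, 0))   # nearer green fragment wins
merge(1, 1, 0.9, (0, 0, 255))   # farther blue fragment is discarded
print(color_buffer[1][1])       # -> (0, 255, 0)
```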
Output Merging
 Hidden-surface removal works only if the front
object is totally opaque. In computer graphics, a
fragment is not necessarily opaque; it could
carry an alpha value specifying its degree of
transparency. Alpha is typically normalized to
the range [0, 1], with 0 denoting totally
transparent and 1 denoting totally opaque. If the
fragment is not totally opaque, then part of the
object behind it can show through, which is
known as alpha blending. Alpha blending and
hidden-surface removal are mutually exclusive.
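A sketch of the standard "over" blend that implements this: the source (incoming) color is weighted by its alpha, the destination (already in the buffer) by the remainder:

```python
# Standard "over" alpha blending of a source fragment onto the
# destination color already stored in the buffer.

def blend(src_rgb, src_alpha, dst_rgb):
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

# A 40%-opaque white fragment over a black background:
print(blend((1.0, 1.0, 1.0), 0.4, (0.0, 0.0, 0.0)))   # -> (0.4, 0.4, 0.4)
```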
 Lighting refers to the handling of
interactions between the light sources and
the objects in the 3D scene. Lighting is
one of the most important factors in
producing a realistic scene.
 In computer graphics, we often overlay
(or paste, or wrap) images, called
textures, over the graphical objects to
make them look realistic.