Computer Graphics
Course code: CoSc 3072
Credit hours: 3, ECTS: 5
Prerequisite: CoSc 1012 Computer Programming
Assosa University
Prepared by: kehussen12@gmail.com
Course description
 This course will introduce students to all aspects of computer graphics
including hardware, software and applications.
 Students will gain experience using a graphics application
programming interface (OpenGL) by completing several programming
projects.
Course objectives
 By the end of this course, students will be able to:
 Have a basic understanding of the core concepts of computer
graphics.
 Be capable of using OpenGL to create interactive computer
graphics.
 Understand a typical graphics pipeline.
 Have made pictures with their computer.
Introduction to interactive computer
graphics
Chapter one
What is Computer Graphics?
 Computer graphics is the art of drawing pictures on
computer screens with the help of programming.
 It involves
• Creation
• Computation and
• Manipulation of data
What is Computer Graphics?
 computer graphics is a rendering tool for the generation and
manipulation of images.
Types of Computer Graphics
 Interactive
 users have some control over the image
 the user can make changes to the image produced
 E.g. Ping-pong game, drawing on touch screen,
animating pictures or graphics in movies etc…
 Non-interactive
 user has no control over the image.
 E.g. screen savers, map representation of data etc…
History of computer graphics
 simple graphics on early computers
 pop-up menus
 constraint-based drawing
 hierarchical modeling
Ivan Sutherland (1950s- 1960s) - SKETCHPAD
History of computer graphics
 Uses mathematical equations to represent lines and curves.
 mathematically based computer image format
 development of systems like SAGE and Sketchpad that
allowed users to interactively create and manipulate
graphical objects.
Vector graphics (1960s- 1970s)
History of computer graphics
 The introduction of raster displays and bitmap
graphics revolutionized computer graphics.
 In the mid-1970s, Xerox PARC developed the first
computer with a graphical desktop, known as the
"Alto."
Pixel graphics (1970s- 1980s)
History of computer graphics
 In 1986, Pixar released "Luxo Jr.," a short film that
showcased the potential of computer-generated 3D
animation.
 The film's success led to advancements in 3D rendering
and animation, paving the way for computer-generated
imagery (CGI) in films.
Pixar and 3D Graphics (1980s)
History of computer graphics
 In the 1970s and 1980s, arcade games like "Pong,"
"Space Invaders," and "Pac-Man" popularized pixel
graphics and pushed for faster and more advanced
hardware.
Video Games (1970s-1990s)
History of computer graphics
 The development of graphical user interfaces (GUIs) in
the 1980s,
 such as Apple's Macintosh and Microsoft Windows,
made computer graphics more accessible to the general
public.
 GUIs replaced command-line interfaces with visually
intuitive elements like icons, windows, and menus.
Graphical User Interfaces (1980s-1990s)
History of computer graphics
 3D graphics cards with specialized hardware accelerators
became available, greatly improving real-time 3D
rendering capabilities.
 This allowed for more immersive and visually impressive
video games and applications.
3D Graphics Acceleration (1990s)
History of computer graphics
 VR headsets for gaming were introduced
 VR research continued and experienced a resurgence in
the 2010s with the advent of more advanced VR devices.
Virtual Reality (1990s-2000s)
History of computer graphics
 High-definition displays, advanced rendering techniques,
and powerful GPUs have enabled incredibly realistic
graphics in video games, movies, and virtual reality
experiences.
 Additionally, fields like data visualization, scientific
simulations, and augmented reality have benefited
significantly from computer graphics advancements.
Modern Era (2000s-present)
3D graphics techniques and terminology
 The term three-dimensional, or 3D, means that an object
being described or displayed has three dimensions of
measurement:
 width,
 height, and
 depth.
Fig 1.1 2D and 3D image
3D graphics techniques and terminology
 The process by which mathematical and image data is transformed
into a 3D image is called rendering.
 When used as a verb, it is the process that your computer goes
through to create the three dimensional image.
 Rendering is also used as a noun, simply to refer to the final image
produced.
3D graphics techniques and terminology
 The following Figure shows the initial output of the BLOCK
example program, which shows a line drawing of a cube on a
table or platform.
Transformations and
Projections
3D wireframe cube
 The points themselves are called vertices (or vertex in the singular)
Components of a 3D graphic system
 3D Modeling :
o A way to describe the 3D world or scene, which is composed of
mathematical representations of 3D objects called models.
 3D Rendering :
o A mechanism responsible for producing a 2D image from 3D models.
 Simple 3D objects can be modeled using mathematical equations
operating in the 3-dimensional Cartesian coordinate system.
3D Modeling
the equation 𝑥² + 𝑦² + 𝑧² = 𝑟² is a model of a perfect sphere with radius 𝑟.
3D graphics techniques and terminology
 By moving the points around, and drawing lines between them
we can produce the illusion of a 3D world on a flat 2D screen.
 The earliest flight simulators employed technology no more
sophisticated than this.
Transformations and
Projections
3D graphics techniques and terminology
 A projection matrix takes care of the mathematics necessary to
turn our 3D coordinates into two-dimensional screen
coordinates, where the final line drawing actually takes place.
Transformations and
Projections
Rasterization
 The actual drawing, or filling in of the pixels between each
vertex to make the lines is called rasterization.
3D graphics techniques and terminology
key concepts and terms
 Vertex: In 3D graphics, a vertex (plural: vertices) is a point
in space with three-dimensional coordinates (x, y, z).
 Vertices are the building blocks of 3D models and are
connected to form edges and faces.
Points
3D graphics techniques and terminology
key concepts and terms
 Edge: An edge is a line segment connecting two vertices.
Edges define the shape of a 3D model and determine its
overall structure.
(Figure: a polygon with labelled vertices and edges; a vertex sits at a coordinate (𝑥, 𝑦).)
3D graphics techniques and terminology
key concepts and terms
 Face: A face is a flat polygon defined by three or more
vertices.
Faces create the visible surfaces of a 3D model, and their
arrangement determines the shape of the object.
3D graphics techniques and terminology
key concepts and terms
 Rendering: The process of generating a 2D image from a
3D scene.
 It involves calculating the color, shading, and lighting of
objects in the scene to create a realistic or stylized
representation.
3D graphics techniques and terminology
key concepts and terms
 Polygon: A polygon is a closed two-dimensional shape
formed by connecting multiple vertices with edges.
 Triangles and quads (four-sided polygons) are the most
common types used in 3D modeling.
3D graphics techniques and terminology
key concepts and terms
 Texture Mapping: Applying a 2D image (texture) to the surface
of a 3D model.
 Texture mapping adds details, color, and texture to objects,
enhancing their visual realism.
 models are often described not only by geometry, but by
textures as well.
 To each point of an object, we can associate some property
(surface color is a common one), and this property is then used
in rendering the object.
3D graphics techniques and terminology
key concepts and terms
 Shading: Determining the appearance of surfaces by
calculating how light interacts with them.
 Shading techniques include flat shading (each face has a
single color), Gouraud shading (smoothly interpolated
colors across vertices), and Phong shading (smoothly
interpolated normals for more realistic lighting).
3D graphics techniques and terminology
Common Uses of Computer Graphics
Real-Time 3D
An OpenGL-based flight simulator, courtesy of x-plane.com.
Common Uses of Computer Graphics
3D graphics used for computer-aided design (CAD) (image courtesy of Software Bisque).
Common Uses of Computer Graphics
3D graphics used for medical imaging applications
Common Uses of Computer Graphics
3D graphics used for medical imaging applications
Designing Effective Step-By-Step Assembly Instructions
Common Uses of Computer Graphics
Virtual Reality
 Experiencing things through our computers that
don't really exist.
 A believable, interactive 3D computer-created
world that you can explore so you feel you really
are there, both mentally and physically.
 Window on World (WoW)
Types of Virtual Reality System
 Also called Desktop VR
 Using a conventional computer monitor to display
the 3D virtual world.
 Immersive VR
 Completely immerse the user's personal viewpoint
inside the virtual 3D world.
Immersive VR (cont’d)
Head-Mounted Display (HMD)
 A helmet or a face mask providing the visual and
auditory displays.
 Often equipped with a Head Mounted Display
(HMD)
Cont’d
 Telepresence
 A variation of visualizing complete computer
generated worlds.
 Links remote sensors in the real world with the
senses of a human operator. The remote sensors
might be located on a robot. Useful for performing
operations in dangerous environments.
Telepresence(Cont’d)
 Mixed reality (Augmented reality)
Types of Virtual Reality System (cont’d)
 The seamless merging of real space and virtual
space.
 Integrate computer-generated virtual objects into the
physical world so that they become, in a sense, an equal
part of our natural environment.
Application Area
 Computer Aided Design (CAD)
 Presentation Graphics
 Computer Art
 Entertainment (animation, games, …)
 Education & Training
 Visualization (scientific & business)
 Image Processing
 Weather Maps
 Cartography
 Simulation and modeling
 Graphical User Interfaces
Computer Aided Design (CAD)
 Used in design of buildings, automobiles, aircraft, watercraft,
spacecraft, computers, textiles & many other products
 Objects are displayed in wire frame outline form
 Software packages provide a multi-window environment
Presentation Graphics
 Used to produce illustrations for reports or generate slides for use with
projectors
 Commonly used to summarize financial, statistical, mathematical,
scientific, economic data for research reports, managerial reports &
customer information bulletins
 Examples : Bar charts, line graphs, pie charts, surface graphs, time
chart
Computer art
 Used in fine art & commercial art
 Includes artist’s paintbrush programs, paint packages, CAD
packages and animation packages
 These packages provide facilities for designing object shapes &
specifying object motions.
 Examples : Cartoon drawing, paintings, product
advertisements, logo design
Examples
Entertainment (Movies and games)
 Movie Industry
 Used in motion pictures, music videos, and television shows.
 Used in making of cartoon animation
films
Entertainment (Movies and games)
Graphics
Animation
Game Industry
 Focus on interactivity
 Cost effective solutions
Education and training
 Computer generated models of physical, financial and economic
systems are used as educational aids.
 Models of physical systems, physiological systems, population trends,
or equipment such as color-coded diagram help trainees understand
the operation of the system
Training
Flight simulators, computer aided instruction, etc.
Image processing
 CG: the computer is used to create pictures
 Image Processing – applies techniques to modify or interpret
existing pictures such as photographs and TV scans
 Medical applications
 Picture enhancements
 Tomography (a technique for displaying a cross section
through a human body or other solid object using X-rays or
ultrasound)
 Applications of image processing
Improving picture quality
Machine perception of visual information (Robotics)
Image processing
Graphical user interface
 Major component – Window manager (multiple-window
areas)
 To make a particular window active, click in that window
(using an interactive pointing device)
 Interfaces display – menus & icons
Graphical user interface
Graphics Hardware
Chapter Two
Graphics Hardware
 Graphics hardware plays a vital role in computer graphics.
 It provides the necessary processing power and
capabilities to render and display graphics efficiently.
 This chapter explores the key components and
functionalities of graphics hardware, including graphics
cards, GPUs (Graphics Processing Units), and display
technologies.
2.1. Graphics Card
 also known as a video card or GPU card.
 is an expansion card that plugs into a computer's motherboard
to handle graphics-related tasks.
 It consists of various components, including:
 GPU (Graphics Processing Unit)
 VRAM (Video RAM)
 Cooling System
 Video Outputs
…cont’d
 GPU is the heart of a graphics card.
 It is a specialized processor designed to perform complex
mathematical calculations required for rendering 2D and 3D
graphics.
 Modern GPUs have hundreds or thousands of cores, enabling
parallel processing and efficient rendering.
GPU (Graphics Processing Unit)
…cont’d
 VRAM is dedicated memory on the graphics card used to store
textures, frame buffers, and other graphical data.
 It provides high-speed access to data required for rendering,
improving overall performance.
VRAM (Video RAM)
…cont’d
 Graphics cards have video outputs (HDMI, DisplayPort, DVI,
etc.) to connect to external monitors and displays.
Video Output
…cont’d
 Graphics cards generate a significant amount of heat during
intensive rendering tasks.
 Cooling systems, such as fans and heat sinks, are essential to
dissipate heat and maintain the GPU's temperature within safe
operating limits.
Cooling System
2.2. Graphics API
 Graphics APIs, such as OpenGL, DirectX, and Vulkan, act as
an intermediary between the software and graphics hardware.
 They provide a set of functions and commands that
programmers can use to interact with the GPU and perform tasks
like rendering, shading, and texture mapping.
(Figures: examples of shading and texture mapping.)
2.3. Display Technology
 Graphics hardware is responsible for driving
displays and monitors to present the rendered
graphics to users.
 The most commonly used display device is called
Liquid Crystal Device (LCD).
2.3.1 Raster display system
 Raster: A rectangular array of points or dots.
 Pixel: One dot or picture element of the raster.
 Pixel - one element of the framebuffer
 Scan Line: A row of pixels
 In a raster scan system, the electron beam is swept
across the screen, one row at a time from top to
bottom
 Redraw the picture repeatedly by quickly directing the
electron beam back over the same points; this is called refreshing the screen.
cont’d
(Figure: a raster of pixels, with one scan line and one pixel labelled.)
Raster Scan Displays
 Horizontal retrace: The return to the left of the screen, after
refreshing each scan line.
 Vertical retrace: At the end of each frame (displayed in
1/80th to 1/60th of a second) the electron beam returns to the
top left corner of the screen to begin the next frame.
Cont’d
 A Raster Scan Display is based on intensity control of
pixels in the form of a rectangular box called Raster
on the screen.
 Information of on and off pixels is stored in refresh
buffer or Frame buffer
 Televisions in our house are based on Raster Scan
Method.
Cont’d
 Frame Buffer is also known as Raster or bit
map.
 In Frame Buffer the positions are called picture
elements or pixels
Advantages of Raster Display system:
 Realistic images
 Millions of different colors can be generated
 Shadow scenes are possible
Disadvantages of Raster Display system:
 Low Resolution
 Expensive
Raster Images
 The quality of a raster image is determined by the total
number of pixels (resolution), and the amount of information in
each pixel (color depth).
 Raster graphics cannot be scaled to a higher resolution
without loss of apparent quality.
…cont’d
 Raster displays work on the principle of rendering images as a
grid of pixels.
 Each pixel represents a single point on the screen and stores
color information to produce the overall image.
 The resolution of a raster display is determined by the number of
pixels in both the horizontal and vertical directions.
Pixel-based rendering
…cont’d
 The frame buffer is a dedicated region of memory in the
graphics hardware where the entire screen image is stored.
 It contains a pixel value for each location on the display.
 The frame buffer serves as an intermediate buffer between the
graphics processor and the display screen, allowing efficient
rendering and screen updates.
Frame-based rendering
…cont’d
 The refresh rate of a raster display system is the number of
times per second that the frame buffer is updated and the
screen is redrawn.
 Common refresh rates include 60Hz, 120Hz, and 144Hz.
 Higher refresh rates result in smoother motion and reduced
flicker.
Refresh rate
…cont’d
 The color depth, also known as bit depth, determines the
number of colors a raster display can reproduce.
 Common color depths include 8-bit (256 colors), 16-bit
(65,536 colors), 24-bit (16.7 million colors), and higher.
 Higher color depths allow for more realistic and detailed
color representation.
Color depth
…cont’d
 The resolution of a raster display is defined by the number
of pixels in each dimension, such as 1920x1080 for Full
HD or 3840x2160 for 4K.
 Higher resolutions result in sharper and more detailed
images.
Resolution
…cont’d
 The aspect ratio is the ratio of the width to the height of the
display.
 Common aspect ratios include 4:3, 16:9, and 21:9.
 The aspect ratio affects the shape and dimensions of the
displayed image.
Aspect ratio
 Raster display systems have been the standard for computer
graphics for many years due to their
 simplicity,
 efficiency, and
 widespread compatibility.
 However, newer rendering techniques, such as ray tracing,
are emerging to provide even more realistic and
advanced graphics in certain applications.
Summary of raster display
3D graphics pipeline
 Also known as the graphics rendering pipeline.
 It is a series of stages and processes that transform 3D objects
and scenes into 2D images for display on a 2D screen.
 This pipeline is an essential concept in computer graphics and
provides a structured approach to efficiently render complex 3D
scenes in real-time.
o Vertex processing
o Primitive assembly & rasterization
o Fragment processing
o frame buffer
o display
3D graphics
pipeline stages
3D graphics pipeline stages
 The pipeline begins with vertex processing.
 where 3D models are transformed and manipulated at the vertex
level.
 This stage includes transformations such as
o translation,
o rotation,
o scaling, and
o projection.
Vertex Processing
 A translation is applied to an object by repositioning it along a
straight-line path from one coordinate location to another.
 The result is the conversion of 3D vertices into 2D coordinates
on the screen, a process known as projection.
 We translate a two-dimensional point by adding translation
distances 𝑡𝑥 and 𝑡𝑦 to the original coordinate position (𝑥, 𝑦) to
move the point to a new position (𝑥′, 𝑦′):
𝑥′ = 𝑥 + 𝑡𝑥, 𝑦′ = 𝑦 + 𝑡𝑦
Translation
Translation
Translation of triangle
Rotation
 A two-dimensional rotation is applied to an object by
repositioning it along a circular path in the 𝑥𝑦 plane.
 To generate a rotation, we specify a rotation angle 𝜃 and the
position (𝑥, 𝑦) of the rotation point (or pivot point) about which the
object is rotated.
Before rotation
After 90° rotation
Scaling
 A scaling transformation alters the size of an object.
 This operation can be carried out for polygons by multiplying the
coordinate values (𝑥, 𝑦) of each vertex by scaling factors 𝑠𝑥, and
𝑠𝑦, to produce the transformed coordinates (𝑥′, 𝑦′):
𝑥′ = 𝑥 ⋅ 𝑠𝑥
𝑦′ = 𝑦 ⋅ 𝑠𝑦
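To make these three transformations concrete, here is a small standalone C sketch (illustrative only, not part of the course code) that applies the translation, rotation and scaling formulas above to a single 2D point:

#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979323846;
    double x = 2.0, y = 1.0;                 /* original point */

    /* Translation by (tx, ty): x' = x + tx, y' = y + ty */
    double tx = 3.0, ty = 1.0;
    printf("translated: (%.2f, %.2f)\n", x + tx, y + ty);

    /* Rotation by theta about the origin (anticlockwise) */
    double theta = PI / 2.0;                 /* 90 degrees */
    double xr = x * cos(theta) - y * sin(theta);
    double yr = x * sin(theta) + y * cos(theta);
    printf("rotated:    (%.2f, %.2f)\n", xr, yr);

    /* Scaling by (sx, sy): x' = x * sx, y' = y * sy */
    double sx = 2.0, sy = 2.0;
    printf("scaled:     (%.2f, %.2f)\n", x * sx, y * sy);
    return 0;
}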
3D graphics pipeline stages
 After vertex processing, the graphics pipeline assembles vertices
into primitives, such as
 points,
 lines, or
 triangles.
 Triangles are the most common primitive used in modern 3D
graphics due to their simplicity and ability to tessellate complex
surfaces.
 Once primitives are formed, the rasterization stage determines
which pixels on the screen are covered by these primitives.
Primitive assembly and
rasterization
Primitive assembly
Rasterization
3D graphics pipeline stages
 In this stage, fragments (also known as pixels) generated by
rasterization undergo further processing.
 Each fragment contains information about its position on the
screen and its attributes, such as color and depth.
 Fragment processing includes operations like:
 texture mapping,
 shading, and
 depth testing.
Fragment processing
Reading Assignment
texture mapping, shading, depth testing.
3D graphics pipeline stages
 The final processed fragments are written into the frame buffer,
a dedicated portion of memory that stores the 2D image to be
displayed on the screen.
 The frame buffer contains pixel values representing colors and
other display information.
Frame buffer
Simple color frame buffer
3D graphics pipeline stages
 The last stage of the pipeline is the display stage,
 where the contents of the frame buffer are converted into analog
signals and sent to the display device, such as a monitor or
screen, for visual presentation.
Display
Color CRT Monitors
 Colored pictures can be displayed using a combination of
phosphors that emit different color light.
 Commonly used techniques for display of colors are:
 Beam Penetration
 Shadow Mask
Beam Penetration
 Uses multilayer phosphor.
 Used with random scan display
 Two layers of phosphor (usually red and green) are
coated on inner side of CRT screen.
cont’d
 Electron Beam intensity decides the displayed color.
 High potential electron beam excite the green phosphor.
 Low potential electron beam excite red phosphor.
 Intermediate beam gives combinations of green and red
light i.e. orange and yellow.
cont’d
Advantage
• Inexpensive Method.
Disadvantages
• Limited colors are possible.
• Poor picture quality.
• Difficulty in changing electron beam potential by large
amount.
Shadow Mask
 Shadow mask method is used in majority of color TV
set and computer monitors.
 It can display wide range of colors.
 This method is commonly used in raster scan displays
 It has red, green & blue color dots at each pixel
position on the screen (forms a delta)
Cont’d
 It also has three electron guns one for each color
dot (forms a delta).
 A shadow-mask grid is placed just behind the
phosphor-coated screen with holes, corresponding to
each pixel on screen.
 The electron beam from all the three guns are
brought to the same point of focus on the shadow
mask.
Cont’d
Cont’d
 Advantage:
 produce realistic images
 also produced different colors
 and shadows scenes.
 Disadvantages
 low resolution
 expensive
 electron beam directed to whole screen
2.4. The Z Buffer For Hidden Surface Removal
 Z-buffer is a 2D array that stores a depth value for each pixel.
 This is referred to as the Z-buffer, since depth of an object is
mainly calculated from the view plane along the 𝑧 axis of a
coordinate system.
(Figures: the Z-buffer, the Z-buffer algorithm, and worked screen examples.)
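In outline, the Z-buffer test compares each incoming fragment's depth against the value stored for its pixel and keeps the nearer one. The following C sketch (hypothetical buffer names, illustrative only) shows the core of the algorithm:

#define WIDTH  640
#define HEIGHT 480

static float    zbuffer[HEIGHT][WIDTH];      /* depth stored per pixel */
static unsigned framebuffer[HEIGHT][WIDTH];  /* color stored per pixel */

void clear_buffers(unsigned background) {
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            zbuffer[y][x] = 1e30f;           /* "infinitely" far away */
            framebuffer[y][x] = background;
        }
}

/* Called once for every fragment produced by rasterization. */
void plot(int x, int y, float z, unsigned color) {
    if (z < zbuffer[y][x]) {     /* nearer than what is stored? */
        zbuffer[y][x] = z;       /* remember the new depth */
        framebuffer[y][x] = color;
    }
}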
Rendering Process with OpenGL
Chapter Three
What is OpenGL?
 OpenGL is strictly defined as “a software interface to graphics
hardware.”
 it is a 3D graphics and modeling library that is highly portable
and very fast.
 Using OpenGL, you can create elegant and beautiful 3D
graphics with exceptional visual quality.
…Cont’d
 Initially, it used algorithms carefully developed and optimized by
Silicon Graphics, Inc. (SGI), an acknowledged world leader in
computer graphics and animation.
 Over time, OpenGL has evolved as other vendors have
contributed their expertise and intellectual property to develop
high-performance implementations of their own.
…Cont’d
 The OpenGL API itself is not a programming language like C or
C++.
 It is more like the C runtime library, which provides some
prepackaged functionality.
 OpenGL is intended for use with computer hardware that is
designed and optimized for the display and manipulation of 3D
graphics.
 However, Software-only implementations of OpenGL are also
possible.
The key steps involved in
creating and displaying
graphics
1. Setting up the graphics context
 The first step in using OpenGL is to set up the graphics context.
 This involves creating a window or surface for rendering and
initializing the OpenGL context within it.
 Platform-specific libraries like GLFW (OpenGL Framework) or
GLUT (OpenGL Utility Toolkit) can be used to manage the
window and OpenGL context.
2. Defining the geometry
 In OpenGL, 3D objects are represented as collections of
vertices, edges, and faces.
 To render a 3D scene, the application must define the geometry
of the objects using vertex data.
 The vertex data includes the coordinates of the vertices,
information about normal (surface directions), texture
coordinates, and other attributes.
3. Sending Data to the GPU
 The vertex data is sent from the CPU (Central Processing Unit)
to the GPU (Graphics Processing Unit) using OpenGL buffer
objects.
 These buffer objects efficiently store the vertex data in the
GPU's memory for fast access during rendering.
4. Compiling and Linking Shaders
 Shaders are small programs written in the OpenGL Shading
Language (GLSL) that run on the GPU.
 There are two types of shaders used in OpenGL:
1. vertex shaders, which manipulate vertices, and
2. fragment shaders, which determine the color and depth of
fragments (pixels) generated during rasterization.
 The application must compile and link the shader programs
before they can be used in the rendering process.
5. Rendering Loop
 The rendering process in OpenGL occurs within a rendering
loop, also known as the game loop.
 This loop continually updates the scene and renders it on the
screen.
 The loop typically involves the following steps:
A) Clearing the Frame Buffer
B) Updating the Scene
C) Setting Up the Camera
D) Binding Shaders and Uniforms
E) Drawing the Geometry
F) Displaying the Frame
3.1. Role of OpenGL in the Reference Model
 The Reference Model is a conceptual framework that defines
the components and processes involved in creating and
displaying graphics on a computer screen.
 OpenGL fits into this model as the graphics API responsible for
rendering 2D and 3D graphics efficiently and interactively.
3.2. Coordinate system
 A coordinate system is a mathematical framework used to
specify the precise location of points in space or on a surface.
 It provides a way to represent and measure positions or
directions relative to a reference point or reference axes.
 The origin of the 2D Cartesian system is at 𝑥 = 0, 𝑦 = 0.
3.2. Coordinate system
Figure 3.1 Cartesian space
OpenGL takes care of the mapping between Cartesian coordinates
and window pixels when it comes time to rasterize (actually draw) your
geometry on-screen.
Cont’d
 For example, a standard VGA screen has 640 pixels from
left to right and 480 pixels from top to bottom.
 To specify a point in the middle of the screen, you specify
that a point should be plotted at (320,240)
 that is, 320 pixels from the left of the screen and 240 pixels
down from the top of the screen.
 In OpenGL, or almost any 3D API, when you create a
window to draw in, you must also specify the coordinate
system you want to use and how to map the specified
coordinates into physical screen pixels.
3.3. Viewing Using a Synthetic Camera
 Takes in a 3D scene
 Places (i.e., projects) the scene onto a 2D
medium such as a roll of film or a digital pixel
array
What does a camera do?
Cont’d
 The synthetic camera is a programmer’s model
for specifying how a 3D scene is projected onto
the screen.
Pinhole
3D Viewing: The Synthetic Camera
 General synthetic camera: each package has
its own but they are all (nearly) equivalent, with
the following parameters/degrees of freedom:
 Camera Position and Orientation
 Field of view (angle of view, e.g., wide,
narrow/telephoto, normal...)
 Depth of field/focal distance (near distance, far
distance)
 Tilt of view/ film plane (if not perpendicular to viewing
direction, produces oblique projections)
 Perspective or Orthographic Projection
3.4. Output primitives
 Output primitives are the basic geometric shapes
that can be rendered by a graphics system.
 Common output primitives:
› Points: A single pixel or dot.
› Lines: A straight line segment connecting two vertices
› Triangles: A three-sided polygon defined by three
vertices.
› Quads: A four-sided polygon defined by four vertices
› Other Polygons: Graphics systems may support
polygons with more than three or four sides, but they are
typically tessellated into triangles for rendering.
Output attributes
 Output attributes define the characteristics and
appearance of output primitives.
 These attributes include::
› Vertex Attributes: Each vertex of an output primitive may
have various attributes, such as position, color, texture
coordinates, normal vectors, and other user-defined
properties.
› Color: The color attribute determines the color of a
primitive. Color can be represented using RGB (Red,
Green, Blue) values, RGBA (RGB with an alpha
component for transparency), or other color models.
Geometry and Line Generation
Chapter Four
Introduction
 All graphics packages construct pictures from basic
building blocks known as graphics primitives
 Primitives that describe the geometry, or shape, of these
building blocks are known as geometric primitives.
 They can be anything from 2-D primitives such as points,
lines and polygons to more complex 3-D primitives such
as spheres and polyhedra (a polyhedron is a 3-D surface
made from a mesh of 2-D polygons).
Cont’d…
 In the following sections we will examine some
algorithms for drawing different primitives, and where
appropriate we will introduce the routines for
displaying these primitives in OpenGL.
OpenGL Point drawing primitives
 The most basic type of primitive is the point.
 Many graphics packages, including OpenGL, provide
routines for displaying points.
glBegin(GL_POINTS);
glVertex2f(-0.5, 0.5); // First point (top-left)
glVertex2f(0.5, 0.5); // Second point (top-right)
glVertex2f(-0.5, -0.5); // Third point (bottom-left)
glVertex2f(0.5, -0.5); // Fourth point (bottom-right)
glEnd();
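For context, the snippet above can be wrapped in a minimal GLUT program (a sketch assuming GLUT or freeglut is installed; by default the window maps the range −1..1 in 𝑥 and 𝑦 to the viewport):

#include <GL/glut.h>

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glPointSize(5.0f);               /* make the points easy to see */
    glBegin(GL_POINTS);
        glVertex2f(-0.5f,  0.5f);    /* top-left */
        glVertex2f( 0.5f,  0.5f);    /* top-right */
        glVertex2f(-0.5f, -0.5f);    /* bottom-left */
        glVertex2f( 0.5f, -0.5f);    /* bottom-right */
    glEnd();
    glFlush();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutCreateWindow("Four points");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}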
Cont’d
Line Drawing Algorithms
 Lines are a very common primitive and will be supported by almost
all graphics packages.
 Lines are normally represented by the two end-points of the line, and
points 𝑥, 𝑦 along the line must satisfy the following slope-intercept
equation:
𝑦 = 𝑚𝑥 + 𝑏 ----------------------------------------- eq. (1)
where 𝑚 is the slope or gradient of the line, and 𝑏 is the coordinate
at which the line intercepts the 𝑦 axes
Cont’d…
 Given two end-points (𝑥0, 𝑦0) and (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), we can calculate values
for 𝑚 and 𝑏 as follows:
𝑚 = (𝑦𝑒𝑛𝑑 − 𝑦0) / (𝑥𝑒𝑛𝑑 − 𝑥0) -------------------------------------------------- eq. (2)
𝑏 = 𝑦0 − 𝑚 ⋅ 𝑥0 -------------------------------------------------- eq. (3)
 Furthermore, for any given x-interval Δ𝑥, we can calculate the
corresponding y-interval Δ𝑦:
Δ𝑦 = 𝑚 ⋅ Δ𝑥 -------------------------------------------------- eq. (4)
Δ𝑥 = (1/𝑚) ⋅ Δ𝑦 -------------------------------------------------- eq. (5)
DDA Line-Drawing Algorithm
 The Digital Differential Analyser (DDA) algorithm operates by starting
at one end-point of the line,
 and then using 𝐸𝑞. (4) and (5) to generate successive pixels until
the second end-point is reached.
 Therefore, first, we need to assign values for Δ𝑦 and Δ𝑥
Suppose we simply increment the value of 𝑥 at each iteration (i.e. Δ𝑥 = 1)
and then compute the corresponding value for y using
𝑒𝑞. (2) 𝑎𝑛𝑑 (4).
Cont’d
 This would compute correct line points but, as illustrated by Figure
4.1,
it would leave gaps in the line.
 The reason for this is that the value of Δ𝑦 is greater than one, so the
gap between subsequent points in the line is greater than 1 pixel.
Figure 1 – ‘Holes’ in a Line Drawn by
Incrementing 𝑥 and Computing the
Corresponding y-Coordinate
Cont’d
 The solution to this problem is to make sure that both Δ𝑥 and Δ𝑦
have values less than or equal to one.
 To ensure this, we must first check the size of the line gradient. The
conditions are:
 𝐼𝑓 |𝑚| ≤ 1:
o Δ 𝑥 = 1
o Δ 𝑦 = 𝑚
 𝐼𝑓 |𝑚| > 1:
o Δ 𝑥 = 1/𝑚
o Δ 𝑦 = 1
Cont’d
 Once we have computed values for Δ 𝑥 and Δ 𝑦, the basic DDA
algorithm is:
 Start with (𝑥0, 𝑦0)
 Find successive pixel positions by adding on ( Δ𝑥 , Δ 𝑦) and
rounding to the nearest integer, i.e.
o 𝑥𝑘+1 = 𝑥𝑘 + Δ𝑥
o 𝑦𝑘+1 = 𝑦𝑘 + Δ𝑦
For each position (𝑥𝑘, 𝑦𝑘) computed, plot a line point at
(𝑟𝑜𝑢𝑛𝑑(𝑥𝑘), 𝑟𝑜𝑢𝑛𝑑(𝑦𝑘)), where the round function will round to the
nearest integer
Note that the actual pixel value used will be calculated by rounding to the
nearest integer, but we keep the real-valued location for calculating the
next pixel position.
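A straightforward C implementation of these steps might look as follows (a sketch; set_pixel is a hypothetical helper that colors one pixel):

#include <math.h>
#include <stdlib.h>

void set_pixel(int x, int y);    /* assumed to color one pixel */

void dda_line(int x0, int y0, int xend, int yend) {
    int dx = xend - x0, dy = yend - y0;
    /* Use the larger of |dx| and |dy| as the step count so that
       both per-step increments have magnitude <= 1. */
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
    if (steps == 0) { set_pixel(x0, y0); return; }
    double xinc = (double)dx / steps;
    double yinc = (double)dy / steps;
    double x = x0, y = y0;
    for (int k = 0; k <= steps; k++) {
        set_pixel((int)lround(x), (int)lround(y));
        x += xinc;   /* keep the real-valued position ... */
        y += yinc;   /* ... and round only when plotting  */
    }
}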
Examples (DDA algorithm)
 Apply the DDA algorithm for drawing a straight-line segment.
 Given: 𝑥0, 𝑦0 = 10,10
𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑 = (15,13)
 First compute a value for the gradient
𝑚 = (𝑦𝑒𝑛𝑑 − 𝑦0) / (𝑥𝑒𝑛𝑑 − 𝑥0) = (13 − 10) / (15 − 10) = 3/5 = 0.6
 Now, because |𝑚| ≤ 1, we compute Δ𝑥 and Δ𝑦 as follows:
Δ𝑥 = 1 and Δy = 0.6
Cont’d…
 Using these values of Δ𝑥 and Δy we can now start to plot line points:
 Start with (𝑥0, 𝑦0) = (𝟏𝟎, 𝟏𝟎) – colour this pixel
 Next, (𝑥1, 𝑦1) = (10 + 1,10 + 0.6) = (11,10.6) – so we colour pixel
(11,11)
 Next, (𝑥2, 𝑦2) = (11 + 1,10.6 + 0.6) = (12,11.2) – so we colour pixel
(12,11)
 Next, (𝑥3, 𝑦3) = (12 + 1,11.2 + 0.6) = (13,11.8) – so we colour pixel
(13,12)
 Next, (𝑥4, 𝑦4) = (13 + 1,11.8 + 0.6) = (14,12.4) – so we colour pixel
(14,12)
 Next, (𝑥5, 𝑦5) = (14 + 1,12.4 + 0.6) = (15,13) – so we colour pixel
(15,13)
 We have now reached the end-point (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), so the algorithm
terminates
Cont’d…
𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑 = (15,13)
𝑥0, 𝑦0 = 10,10
Bresenham’s Line-Drawing Algorithm
 Bresenham’s line-drawing algorithm provides significant improvements
in efficiency over the DDA algorithm.
 These improvements arise from the observation that for any given line,
 if we know the previous pixel location, we only have a choice of 2
locations for the next pixel.
 This concept is illustrated in Figure 3: given that we know (𝑥𝑘, 𝑦𝑘) is a
point on the line, we know the next line point must be either pixel A or
pixel B.
Cont’d
 Therefore we do not need to compute the actual floating-point location
of the ‘true’ line point; we need only make a decision between pixels A
and B.
Figure 3 - Bresenham's Line-Drawing Algorithm
Cont’d
 Bresenham's algorithm works as follows:
 First, we denote by dupper and dlower the distances between the centres
of pixels A and B and the ‘true’ line (see Figure 3).
 Using 𝐸𝑞. (1) the ‘true’ y-coordinate at 𝑥 = 𝑥𝑘 + 1 can be calculated as:
𝑦 = 𝑚(𝑥𝑘 + 1) + 𝑏 ----------------------------------------- eq. (6)
Therefore we compute dlower and dupper as:
𝑑𝑙𝑜𝑤𝑒𝑟 = 𝑦 − 𝑦𝑘 = 𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 …………………………… eq. (7)
𝑑𝑢𝑝𝑝𝑒𝑟 = (𝑦𝑘 + 1) − 𝑦 = (𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏 ….. eq. (8)
Cont’d
 Now, we can decide which of pixels A and B to choose based on
comparing the values of dupper and dlower:
o If dlower > dupper, choose pixel A
o Otherwise choose pixel B
 We make this decision by first subtracting dupper from dlower:
𝑑𝑙𝑜𝑤𝑒𝑟 − 𝑑𝑢𝑝𝑝𝑒𝑟 = [𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘] − [(𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏]
= 2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1
Cont’d
 If the value of this expression is positive we choose pixel A;
 otherwise we choose pixel B.
 The question now is how we can compute this value efficiently.
 To do this, we define a decision variable 𝑝𝑘 for the kth step in the
algorithm and try to formulate pk
 so that it can be computed using only integer operations.
 To achieve this, substitute 𝑚 = Δ𝑦/Δ𝑥 and scale the difference by Δ𝑥:
𝑃𝑘 = Δ𝑥 (𝑑𝑙𝑜𝑤𝑒𝑟 − 𝑑𝑢𝑝𝑝𝑒𝑟) = Δ𝑥 (2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1) … . 𝑒𝑞 (9)
Cont’d
𝑃𝑘 = Δ𝑥 (2(Δ𝑦/Δ𝑥)(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1)
𝑃𝑘 = 2Δ𝑦⋅𝑥𝑘 + 2Δ𝑦 − 2Δ𝑥⋅𝑦𝑘 + 2𝑏Δ𝑥 − Δ𝑥
Now, collect like terms:
𝑃𝑘 = 2Δ𝑦⋅𝑥𝑘 − 2Δ𝑥⋅𝑦𝑘 + 2Δ𝑦 + 2𝑏Δ𝑥 − Δ𝑥
 Always 𝑥𝑘+1 = 𝑥𝑘 + 1
 If 𝑝𝑘 < 0, then 𝑦𝑘+1 = 𝑦𝑘; otherwise 𝑦𝑘+1 = 𝑦𝑘 + 1
Cont’d
 Therefore we can define the incremental calculation as:
𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦            if 𝑝𝑘 < 0
𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦 − 2Δ𝑥     if 𝑝𝑘 ≥ 0
Cont’d
 Bresenham's Algorithm if 𝑚 < 1
 Plot the start-point of the line 𝑥0, 𝑦0
 Compute the first decision variable:
o 𝑃0 = 2Δ𝑦 −Δ𝑥
 For each k, starting with k=0
o If 𝑃𝑘 < 0
• Plot 𝑥𝑘 + 1, 𝑦𝑘
• 𝑃𝑘+1 = 𝑃𝑘 + 2Δ𝑦
o If 𝑃𝑘 ≥ 0
• Plot 𝑥𝑘 + 1, 𝑦𝑘 + 1
• 𝑃𝑘+1 = 𝑃𝑘 + 2Δ𝑦 - 2Δ𝑥
 Repeat these steps Δ𝑥 times
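The algorithm translates directly into integer-only C code (a sketch for the case 0 ≤ 𝑚 ≤ 1 with 𝑥0 < 𝑥𝑒𝑛𝑑; set_pixel is the same hypothetical helper as before):

void bresenham_line(int x0, int y0, int xend, int yend) {
    int dx = xend - x0, dy = yend - y0;
    int p = 2 * dy - dx;              /* P0 = 2*dy - dx */
    int x = x0, y = y0;
    set_pixel(x, y);                  /* plot the start point */
    for (int k = 0; k < dx; k++) {    /* repeat dx times */
        x = x + 1;                    /* x always advances by 1 */
        if (p < 0) {
            p = p + 2 * dy;           /* keep the same y */
        } else {
            y = y + 1;                /* step up to the next row */
            p = p + 2 * dy - 2 * dx;
        }
        set_pixel(x, y);
    }
}

Running this on the line from (10,10) to (15,13) colors exactly the pixels listed in the worked example below.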
Cont’d
Bresenham's Algorithm Example
𝑥0, 𝑦0 = 10,10 , 𝑎𝑛𝑑 𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑 = 15,13
Δ𝑥=5, Δ𝑦=3, 𝑚= 0.6, 𝑃0 = 2Δ𝑦 −Δ𝑥 = 1
 𝑚 < 1 true so use the above algorithm
 Plot 𝑥0, 𝑦0 = 10,10 , color pixel (10,10)
• 𝑃0 ≥ 0, so
• Plot (11,11), color pixel(11,11)
• 𝑃1 = 𝑃0 + 2Δ𝑦 −2Δ𝑥 = -3
• 𝑃1 < 0, so
• Plot(12,11), color pixel(12,11)
• 𝑃2 = 𝑃1 + 2Δy = -3+6 =3
Cont’d
• 𝑃2 ≥ 0, so
• Plot (13,12), color pixel (13,12)
• 𝑃3 = 𝑃2 + 2Δ𝑦 −2Δ𝑥 = 3+ 2(3) – 2(5) = -1
• 𝑃3 < 0, so
• Plot(14,12), color pixel (14,12)
• 𝑃4 = 𝑃3 + 2Δy = -1+6=5
• 𝑃4 ≥ 0, so
• Plot(15,13), color pixel (15,13)
 Now we have reached the maximum number of iterations (Δ𝑥 times), so the algorithm terminates
Cont’d
𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑 = (15,13)
𝑥0, 𝑦0 = 10,10
Reading Assignment
Bresenham's Algorithm if 𝑚 > 1 and if 𝑚 = 0
The figure below shows the start and end points of a straight line.
Show how the
Bresenham's algorithm
would draw the line
between the two points
Circle Drawing Algorithm
 Some graphics packages allow us to draw circle primitives.
 Before we examine algorithms for circle-drawing we will
consider the mathematical equations of a circle.
 In Cartesian coordinates we can write:
(𝑥 − 𝑥𝑐)² + (𝑦 − 𝑦𝑐)² = 𝑟²
where (𝑥𝑐, 𝑦𝑐) is the centre of the circle.
 Alternatively, in polar coordinates we can write:
𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃
𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃
Plotting using cartesian co-ordinates
 Suppose we successively increment the x-coordinate and calculate
the corresponding y-coordinate using
 2
2
x
x
r
y
y c
c 



 This would correctly generate points on the boundary of a circle
Cont’d
 Show how the following circle-drawing algorithms would draw a circle of
radius 5 centred on the origin.
 You need only consider the upper-right octant of the circle, i.e. the arc
shown in red in the figure below.
A) Plotting points using Cartesian coordinates.
B) Plotting points using polar coordinates.
Example
Cont’d
 Given
• r= 5, 𝑥𝐶, 𝑦𝐶 = 0,0
• We start from 𝑥 = 0, and then successively increment 𝑥 and calculate
the corresponding 𝑦 using
Plotting using Cartesian co-ordinates
𝑦 = 𝑦𝑐 ± √(𝑟² − (𝑥 − 𝑥𝑐)²)
 We can tell that we have left the first octant when 𝑥 > 𝑦.
• 𝑥 = 0, 𝑦 = 0 + √(5² − (0 − 0)²) = 5, plot (0,5)
• 𝑥 = 1, 𝑦 = 0 + √(5² − (1 − 0)²) = 4.9, plot (1,5)
• 𝑥 = 2, 𝑦 = 0 + √(5² − (2 − 0)²) = 4.58, plot (2,5)
• 𝑥 = 3, 𝑦 = 0 + √(5² − (3 − 0)²) = 4, plot (3,4)
• 𝑥 = 4, 𝑦 = 0 + √(5² − (4 − 0)²) = 3, so 𝑥 > 𝑦 and we stop
Cont’d
Plotting using Cartesian co-ordinates
Plotting using polar co-ordinates
 An alternative technique is to use the polar coordinate equations.
 Recall that in polar coordinates we express a position in the
coordinate system as an angle 𝜃 and a distance 𝑟.
 For a circle, the radius r will be constant, but we can increment 𝜃
and compute the corresponding 𝑥 and 𝑦 values
Cont’d
 Show how the following circle-drawing algorithms would draw a circle of
radius 5 centred on the origin.
 You need only consider the upper-right octant of the circle, i.e. the arc
shown in red in the figure below.
A) Plotting points using polar coordinates.
Example
Cont’d
Example
Plotting using Polar co-ordinate
 Given
• r= 5, 𝑥𝐶, 𝑦𝐶 = 0,0
 We can tell that we have left the first octant when 𝜃 < 45°.
• The angular step is Δ𝜃 = (1/𝑟) × (180°/𝜋) = (1/5) × (180°/𝜋) = 11.46°
• Then we start with 𝜃 = 90°, and compute 𝑥 and 𝑦 using:
𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃
𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃
 for successive values of 𝜃, subtracting 11.46° at each iteration.
 We stop when 𝜃 becomes less than 45°
Cont’d
Example
Plotting using Polar co-ordinate
• 𝑥 = 0 + 5 cos 90° = 0, 𝑦 = 0 + 5 sin 90° = 5, plot (0,5)
• 𝑥 = 0 + 5 cos 78.54° = 0.99, 𝑦 = 0 + 5 sin 78.54° = 4.9, plot (1,5)
• 𝑥 = 0 + 5 cos 67.08° = 1.95, 𝑦 = 0 + 5 sin 67.08° = 4.6, plot (2,5)
• 𝑥 = 0 + 5 cos 55.62° = 2.82, 𝑦 = 0 + 5 sin 55.62° = 4.13, plot (3,4)
• 𝜃 = 44.16°, which is less than 45°, so we stop.
 The points plotted are the same as for the Cartesian plotting
algorithm
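The polar plotting loop is easy to express in C (a sketch; set_pixel is the same hypothetical helper, and only the upper-right octant is generated, as in the example):

#include <math.h>

void set_pixel(int x, int y);   /* assumed pixel helper */

void circle_octant_polar(int xc, int yc, double r) {
    const double PI = 3.14159265358979323846;
    double step = (1.0 / r) * (180.0 / PI);   /* ~11.46 deg for r = 5 */
    for (double theta = 90.0; theta >= 45.0; theta -= step) {
        double rad = theta * PI / 180.0;      /* degrees -> radians */
        set_pixel(xc + (int)lround(r * cos(rad)),
                  yc + (int)lround(r * sin(rad)));
    }
}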
Fill Area Primitives
 The most common type of primitive in 3-D computer graphics is the
fill-area primitive.
 The term fill-area primitive refers to any enclosed boundary that can
be filled with a solid colour or pattern.
 However, fill-area primitives are normally polygons, as they can be
filled more efficiently by graphics packages.
A Polygon with an Edge Crossing
Cont’d
 Polygons are the most common form of graphics primitive because
they form the basis of polygonal meshes,
 which is the most common representation for 3-D graphics objects.
 Polygonal meshes approximate curved surfaces by forming a mesh
of simple polygons
Examples of Polygonal Mesh Surfaces
Convex and Concave Polygons
 We can differentiate between convex and concave polygons:
• Convex polygons have all interior angles ≤ 180°
• Concave polygons have at least one interior angle > 180°
(a) Convex (b) Concave
Polygons inside-outside test
 In order to fill polygons we need some way of telling if a given point
is inside or outside the polygon boundary:
 we call this an inside-outside test.
 Two different inside-outside tests:
• Odd – Even rule
• Non zero winding number rule
Odd – Even rule
 Draw a line from P to some distant point (that is known to be
outside the polygon boundary).
 Count the number of crossings of this line with the polygon
boundary:
o If the number of crossings is odd, then P is inside the polygon
boundary.
o If the number of crossings is even, then P is outside the polygon
boundary.
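A common way to implement the odd-even rule is to cast a horizontal ray from P to the right and count edge crossings, as in this C sketch (illustrative only, with hypothetical parameter names):

/* Returns 1 if (px,py) is inside the polygon with n vertices
   (vx[i], vy[i]), 0 otherwise, using the odd-even rule. */
int inside_odd_even(double px, double py,
                    const double vx[], const double vy[], int n) {
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* Does edge (j -> i) straddle the horizontal line y = py? */
        if ((vy[i] > py) != (vy[j] > py)) {
            double xcross = vx[j] + (py - vy[j]) *
                            (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (xcross > px)        /* crossing to the right of P */
                crossings++;
        }
    }
    return crossings % 2 == 1;      /* odd number => inside */
}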
Nonzero Winding Number rule
 This time we consider each edge of the polygon to be a
vector.
 They have a direction as well as a position.
 These vectors are directed in a particular order around
the boundary of the polygon (the programmer defines
which direction the vectors go).
Cont’d
 Now we decide if a point P is inside or outside the
boundary as follows:
 Draw a line from P to some distant point (that is known to be
outside the polygon boundary).
 At each edge crossing, add 1 to the winding number if the edge
goes from right to left, and subtract 1 if it goes from left to right.
o If the total winding number is nonzero, P is inside the
polygon boundary.
o If the total winding number is zero, P is outside the polygon
boundary.
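The winding-number rule differs only in keeping a signed count instead of a parity count; a matching C sketch (same hypothetical parameters as above):

/* Nonzero winding number test: +1 for edges crossing the rightward
   ray while going upward, -1 while going downward (a sketch). */
int inside_winding(double px, double py,
                   const double vx[], const double vy[], int n) {
    int winding = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        if ((vy[i] > py) != (vy[j] > py)) {
            double xcross = vx[j] + (py - vy[j]) *
                            (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (xcross > px)
                winding += (vy[i] > vy[j]) ? 1 : -1;
        }
    }
    return winding != 0;            /* nonzero => inside */
}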
Cont’d
 We can see from the following figures that the nonzero winding number
rule gives a slightly different result from the odd-even rule for the example
polygon given.
 In fact, for most polygons (including all convex polygons) the
two algorithms give the same result.
Cont’d
 Show which parts of the fill-area primitive shown below would be classified
as inside or outside using the following inside-outside tests:
 A) Odd-even rule
 B) Nonzero winding number rule
Example
Cont’d
 The odd-even rule classifies the inner polygon as outside because points
inside it have two (an even number) line crossings to reach any distant
point.
 For the nonzero winding number rule, both of the line crossings go from
right to left, so the winding number is incremented in both cases.
• Therefore the total winding number for points inside the inner polygon
is 2, which is nonzero, and so the points are classified as inside.
• By reversing the direction of the edge vectors of the inner (or outer)
polygon we could get the same result as the odd-even rule.
Answer
Cont’d
Answer
Text and Characters
 The final type of graphics primitive we will consider is the character
primitive.
 Character primitives can be used to display text characters.
 Representation for characters:
 Bitmap or Font
 Stroke or outline
 In bitmap (or font) representation, characters are stored as a grid of
pixel values.
 This is a simple representation that allows fast rendering of the character.
 However, such representations are not easily scalable
Cont’d
 In stroke, or outline, representation, characters are stored using line or
curve primitives.
 To draw the character we must convert these primitives into pixel values on
the display.
 They are much more easily scalable.
 To generate a larger version of the character we just multiply the
coordinates of the line/curve primitives by some scaling factor.
 Bold and italic characters can be generated using a similar approach.
 The disadvantage of stroke fonts is that they take longer to draw than
bitmap fonts.
Cont’d
Bitmap and Stroke Character Primitives
Geometrical Transformation
Chapter Five
2D Transformation
 First of all let us review some basics of matrices.
 2x2 matrices can be multiplied according to the following equation.
2D Matrix Transformation
[𝑎 𝑏] [𝑒 𝑓]   [𝑎𝑒 + 𝑏𝑔   𝑎𝑓 + 𝑏ℎ]
[𝑐 𝑑] [𝑔 ℎ] = [𝑐𝑒 + 𝑑𝑔   𝑐𝑓 + 𝑑ℎ] ………………………….… (1)
 For example,
[3 1] [1 2]   [3⋅1 + 1⋅2   3⋅2 + 1⋅0]   [5 6]
[2 1] [2 0] = [2⋅1 + 1⋅2   2⋅2 + 1⋅0] = [4 4]
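For reference, eq. (1) is only a few lines of C (an illustrative sketch, not part of the slides):

/* c = a * b for 2x2 matrices, exactly as in eq. (1). */
void mat2_mul(const double a[2][2], const double b[2][2],
              double c[2][2]) {
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            c[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

With a = {{3,1},{2,1}} and b = {{1,2},{2,0}} it produces {{5,6},{4,4}}, matching the example above.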
2D Transformation
 In general, for matrices A and B to be multiplied, the number of
columns in A must be equal to the number of rows in B.
 Matrix multiplication is not commutative.
 In other words, for two matrices A and B, 𝑨𝑩 ≠ 𝑩𝑨.
 We can see this from the following example:
[3 1] [1 2]   [5 6]
[2 1] [2 0] = [4 4]
but
[1 2] [3 1]   [7 3]
[2 0] [2 1] = [6 2]
2D Transformation
 However, matrix multiplication is associative.
 This means that if we have three matrices A, B and C, then
• (𝐴𝐵) 𝐶 = 𝐴 (𝐵𝐶).
• We can see this from the following example, with
A = [3 1]   B = [1 2]   C = [1 0]
    [2 1]       [2 0]       [2 1]

(𝐴𝐵)𝐶 = [5 6] [1 0] = [17 6]
        [4 4] [2 1]   [12 4]

𝐴(𝐵𝐶) = [3 1] [5 2] = [17 6]
        [2 1] [2 0]   [12 4]
2D Translation
 Translation is the ability to reposition an image or object along a
straight line path from one location to another.
 The translation transformation shifts all points by the same
amount.
 Therefore, in 2-D, we must define two translation parameters:
• the 𝑥-translation 𝑡𝑥
• the 𝑦-translation 𝑡𝑦
 To translate a point 𝑃 to 𝑃′ we add on a vector 𝑇:
𝑃 = (𝑝𝑥, 𝑝𝑦) …………………………………………………..……….. (2)
𝑃′ = (𝑝′𝑥, 𝑝′𝑦) …………………………………………………..……….. (3)
𝑇 = (𝑡𝑥, 𝑡𝑦) …………………………………………………..……….. (4)
𝑃′ = 𝑃 + 𝑇 …………………………………………………..……….. (5)
Cont’d
 Therefore, from Eq. (5) we can see that the relationship between
points before and after the translation is:
𝑝′𝑥 = 𝑝𝑥 + 𝑡𝑥 …………………………………………………..……….. (6)
𝑝′𝑦 = 𝑝𝑦 + 𝑡𝑦 …………………………………………………..……….. (7)
Translation by 𝑡𝑥 = 3, 𝑡𝑦 = 1
2D Rotation
 Rotation is the ability to reposition an image along a circular path in 𝑥𝑦 −
𝑝𝑙𝑎𝑛𝑒.
 The rotation transformation rotates all points about a centre of rotation.
 Normally this centre of rotation is assumed to be at the origin (0,0),
 although as we will see later on it is possible to rotate about any point.
 The rotation may be either clockwise or anticlockwise.
• Positive value of rotation angle provide an anticlockwise rotation
• negative value produce a clockwise rotation about the given point
 The rotation transformation has a single parameter: the angle of rotation,
𝜃.
Cont’d
 To rotate a point P anti-clockwise by 𝜃, we apply the rotation matrix 𝑅:

𝑅 = [cos𝜃  −sin𝜃]
    [sin𝜃   cos𝜃] …………………………………………………..……….. (8)

[𝑝′𝑥]   [cos𝜃  −sin𝜃] [𝑝𝑥]
[𝑝′𝑦] = [sin𝜃   cos𝜃] [𝑝𝑦] …………………………………………………..……….. (9)

 Therefore, from 𝐸𝑞. (9) we can see that the relationship between points before and after the rotation is:
𝑝′𝑥 = 𝑝𝑥 cos𝜃 − 𝑝𝑦 sin𝜃 …………………………………………………..……….. (10)
𝑝′𝑦 = 𝑝𝑥 sin𝜃 + 𝑝𝑦 cos𝜃 …………………………………………………..……….. (11)
Rotation about the Origin
2D Scaling
 Scaling is the ability to change the size of an image.
 The operation is necessary when we have to either ZOOM in an
object so that we can get a better view of the object or ZOOM out
so that we can see more object.
 The scaling transformation multiplies each coordinate of each
point by a scale factor.
 The scale factor can be different for each coordinate (e.g. for the 𝑥
and 𝑦 coordinates).
Cont’d
 To scale a point P by scale factors Sx and Sy we apply the scaling
matrix S:
𝑆 = [𝑆𝑥  0 ]
    [0   𝑆𝑦] …………………………………………………..……….. (12)

[𝑝′𝑥]   [𝑆𝑥  0 ] [𝑝𝑥]
[𝑝′𝑦] = [0   𝑆𝑦] [𝑝𝑦] …………………………………………………..……….. (13)
Cont’d
 Therefore, from 𝐸𝑞. (13) we can see that the relationship between
points before and after the scaling is:
𝑝′𝑥 = 𝑆𝑥 ⋅ 𝑝𝑥 …………………………………………………..……….. (14)
𝑝′𝑦 = 𝑆𝑦 ⋅ 𝑝𝑦 …………………………………………………..……….. (15)
A 2-D Scaling by 𝑆𝑥 = 2, 𝑆𝑦 = 2
Homogeneous Coordinates
 Homogeneous coordinates are a system of coordinates used in projective
geometry.
 Points at infinity can be represented using finite coordinates.
 A single matrix can represent affine transformations and projective
transformations.
 Homogeneous coordinates allow us to do combinational transformation.
 With homogeneous coordinates we add an extra coordinate, the
homogenous parameter, to each point in Cartesian coordinates
 So 2-D points are stored as three values:
• the 𝑥-coordinate,
• the 𝑦-coordinate and
• the ℎ𝑜𝑚𝑜𝑔𝑒𝑛𝑒𝑜𝑢𝑠 parameter.
Cont’d
 The relationship between homogeneous points and their corresponding
Cartesian points is:
 Homogeneous point = (𝑥, 𝑦, 𝑤), corresponding Cartesian point = (𝑥/𝑤, 𝑦/𝑤, 1)
 Normally the homogeneous parameter is given the value 1, in which case
homogeneous coordinates are the same as Cartesian coordinates but with
an extra value which is always 1.
2D Translation with homogeneous coordinates
 we can express a translation transformation using a single matrix
multiplication:

𝑃 = (𝑝𝑥, 𝑝𝑦, 1) …………………………………………………..……….. (16)
𝑃′ = (𝑝′𝑥, 𝑝′𝑦, 1) …………………………………………………..……….. (17)

    [1 0 𝑡𝑥]
𝑇 = [0 1 𝑡𝑦]
    [0 0 1 ] …………………………………………………..……….. (18)

[𝑝′𝑥]   [1 0 𝑡𝑥] [𝑝𝑥]
[𝑝′𝑦] = [0 1 𝑡𝑦] [𝑝𝑦]
[ 1 ]   [0 0 1 ] [ 1 ] …………………………………………………..……….. (19)
Cont’d
 Therefore, 𝑝′𝑥 = 𝑝𝑥 + 𝑡𝑥 and 𝑝′𝑦 = 𝑝𝑦 + 𝑡𝑦,
 exactly the same as before, but we used a matrix multiplication instead
of an addition.
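The matrix form is what graphics code actually evaluates; a small C sketch (hypothetical helper, illustrative only) of applying any 3×3 homogeneous transform to a 2D point:

/* p' = m * (px, py, 1): works for translation, rotation, scaling
   and any composition of them. */
void transform_point(const double m[3][3], double px, double py,
                     double *outx, double *outy) {
    *outx = m[0][0] * px + m[0][1] * py + m[0][2];
    *outy = m[1][0] * px + m[1][1] * py + m[1][2];
    /* The bottom row of T, R and S is (0 0 1), so w stays 1. */
}

Passing the translation matrix of eq. (18) reproduces 𝑝′𝑥 = 𝑝𝑥 + 𝑡𝑥 and 𝑝′𝑦 = 𝑝𝑦 + 𝑡𝑦.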
2D Rotation with homogeneous coordinates
 with the exception that the rotation matrix 𝑹 has an extra row and extra
column.
    [cos𝜃  −sin𝜃  0]
𝑅 = [sin𝜃   cos𝜃  0]
    [0      0     1] …………………………………………………..……….. (20)

[𝑝′𝑥]   [cos𝜃  −sin𝜃  0] [𝑝𝑥]
[𝑝′𝑦] = [sin𝜃   cos𝜃  0] [𝑝𝑦]
[ 1 ]   [0      0     1] [ 1 ] …………………………………………………..……….. (21)

 Therefore, 𝑝′𝑥 = 𝑝𝑥 cos𝜃 − 𝑝𝑦 sin𝜃 and 𝑝′𝑦 = 𝑝𝑥 sin𝜃 + 𝑝𝑦 cos𝜃,
• which is the same outcome as before.
2D Scaling with homogeneous coordinates
 Finally, we can also express scaling using homogeneous coordinates, as
shown by the following equations
    [𝑆𝑥  0   0]
𝑆 = [0   𝑆𝑦  0]
    [0   0   1] …………………………………………………..……….. (22)

[𝑝′𝑥]   [𝑆𝑥  0   0] [𝑝𝑥]
[𝑝′𝑦] = [0   𝑆𝑦  0] [𝑝𝑦]
[ 1 ]   [0   0   1] [ 1 ] …………………………………………………..……….. (23)

 Therefore, 𝑝′𝑥 = 𝑆𝑥 ⋅ 𝑝𝑥 and 𝑝′𝑦 = 𝑆𝑦 ⋅ 𝑝𝑦, exactly the same as before.
Matrix Composition
 The use of homogenous coordinates allows us to compose a sequence of
transformations into a single matrix.
 This can be very useful in the graphics viewing pipeline,
 but also allows us to define different types of transformation from those we
have already seen.
 Using matrix composition, we can, for example, rotate an object about an
arbitrary pivot point using the following sequence of transformations:
• Translate from pivot point to origin
• Rotate about origin
• Translate from origin back to pivot point
Cont’d
 An example of this sequence of transformations is shown in Figure bellow.
 Here we perform a rotation about the pivot point (2,2),
• Translating by (-2,-2) to the origin,
• Rotating about the origin and
• Then translating by (2,2) back to the pivot point.
 Let us denote our transformations as follows:
a. T1 is a matrix translation by (-2,-2)
b. R is a matrix rotation by 𝜃0 about the origin
c. T2 is a matrix translation by (2,2)
 Therefore, using homogenous coordinates we can compose all three
matrices into one composite transformation, C:
𝐶 = 𝑇2 𝑅 𝑇1 ………………………………………………………………………….. (24)
Cont’d
 The composite matrix C can now be computed from the three constituent
matrices T2, R and T1,
• and represents a rotation about the pivot point (2,2) by θo.
 Note from Eq. (24) that 𝑻𝟏 is applied first, followed by 𝑹 and then 𝑇2.
 For instance, if we were to apply the three transformations to a point P the
result would be
𝑃’ = 𝑇2𝑅𝑇1𝑃.
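In legacy OpenGL this composition can be issued directly with the fixed-function matrix calls (a sketch; theta is assumed to be a GLfloat angle in degrees). OpenGL post-multiplies the current matrix, so the calls are issued in the order T2, R, T1 and each vertex is then transformed by C = T2 R T1:

glTranslatef(2.0f, 2.0f, 0.0f);       /* T2: translate back to the pivot */
glRotatef(theta, 0.0f, 0.0f, 1.0f);   /* R : rotate about the z-axis     */
glTranslatef(-2.0f, -2.0f, 0.0f);     /* T1: translate pivot to origin   */
/* ... draw the object here: every vertex P becomes C * P ... */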
3D Matrix Transformation
 The concept of homogenous coordinates is easily extended
into 3-D:
 we just introduce a fourth coordinate in addition to the
• 𝑥, 𝑦 and 𝑧-coordinates.
 In this section we review the forms of 3-D translation, rotation
and scaling matrices using homogeneous coordinates.
3D Translation with homogeneous coordinates
 The 3-D homogeneous coordinate translation matrix is similar in form to the
2-D matrix, and is given by:

    [1 0 0 𝑡𝑥]
𝑇 = [0 1 0 𝑡𝑦]
    [0 0 1 𝑡𝑧]
    [0 0 0 1 ] ………………………………………….. (25)

 We can see that 3-D translations are defined by three
translation parameters: 𝑡𝑥, 𝑡𝑦 and 𝑡𝑧.
Cont’d
 We apply this transformation as follows:

[𝑝′𝑥]   [1 0 0 𝑡𝑥] [𝑝𝑥]
[𝑝′𝑦] = [0 1 0 𝑡𝑦] [𝑝𝑦]
[𝑝′𝑧]   [0 0 1 𝑡𝑧] [𝑝𝑧]
[ 1 ]   [0 0 0 1 ] [ 1 ] ………………………………………….. (26)

 Therefore, 𝑝′𝑥 = 𝑝𝑥 + 𝑡𝑥, 𝑝′𝑦 = 𝑝𝑦 + 𝑡𝑦, and 𝑝′𝑧 = 𝑝𝑧 + 𝑡𝑧.
3D Scaling with homogeneous coordinates
 Similarly, 3-D scaling is defined by three scaling parameters: 𝑆𝑥, 𝑆𝑦 and 𝑆𝑧.
 The matrix is:

    [𝑆𝑥  0   0   0]
𝑆 = [0   𝑆𝑦  0   0]
    [0   0   𝑆𝑧  0]
    [0   0   0   1]

 We apply this transformation as follows:

[𝑝′𝑥]   [𝑆𝑥  0   0   0] [𝑝𝑥]
[𝑝′𝑦] = [0   𝑆𝑦  0   0] [𝑝𝑦]
[𝑝′𝑧]   [0   0   𝑆𝑧  0] [𝑝𝑧]
[ 1 ]   [0   0   0   1] [ 1 ]

 Therefore, 𝑝′𝑥 = 𝑆𝑥 ⋅ 𝑝𝑥, 𝑝′𝑦 = 𝑆𝑦 ⋅ 𝑝𝑦, and 𝑝′𝑧 = 𝑆𝑧 ⋅ 𝑝𝑧.
3D Rotation with homogeneous coordinates
 For rotations in 3-D we have three possible axes of rotation:
• the 𝑥, 𝑦 and 𝑧 axes.
 Therefore the form of the rotation matrix depends on which type of rotation
we want to perform.
 For a rotation about the 𝑥-axis the matrix is:

     [1  0      0      0]
𝑅𝑥 = [0  cos𝜃  −sin𝜃  0]
     [0  sin𝜃   cos𝜃  0]
     [0  0      0      1]
 For a rotation about the 𝑦-axis the matrix is:

     [cos𝜃   0  sin𝜃  0]
𝑅𝑦 = [0      1  0      0]
     [−sin𝜃  0  cos𝜃  0]
     [0      0  0      1]
3D Rotation with homogeneous coordinates
 For a rotation about the 𝑧-axis the matrix is:

     [cos𝜃  −sin𝜃  0  0]
𝑅𝑧 = [sin𝜃   cos𝜃  0  0]
     [0      0      1  0]
     [0      0      0  1]
OpenGL Rotation Function
glRotatef(angle, 0.0, 0.0, 1.0);
• angle : This is the angle of rotation, specified in degrees.
• 0.0 : This is the x-component of the rotation axis. In this case, it's set to
zero, which means there is no rotation around the x-axis.
• 0.0 : This is the Y-component of the rotation axis. In this case, it's set to zero,
which means there is no rotation around the Y-axis.
• 1.0 : This is the z-component of the rotation axis. The value of 1.0 means that
the rotation will occur around the z-axis.
OpenGL Translation Function
glTranslatef(𝑡𝑥, 𝑡𝑦, 0.0f);
• 𝑡𝑥 : It specifies how much the subsequent geometry will be moved
horizontally.
• 𝑡𝑦 : It specifies how much the subsequent geometry will be moved vertically.
• 0.0f : This is the translation along the z-axis. In this case, it's set to 0.0f because we are dealing with 2D translation.
OpenGL Scaling Function
glScalef(𝑆𝑥, 𝑆𝑦, 1.0f); // use a 𝑧-scaling factor of 1.0f in 2D
• 𝑆𝑥 : It specifies how much the subsequent geometry will be stretched or
compressed horizontally.
• If 𝑆𝑥 is greater than 1, the geometry will be stretched; if it's less than 1, the
geometry will be compressed.
• 𝑆𝑦 : It specifies how much the subsequent geometry will be stretched or compressed vertically.
• If 𝑆𝑦 is greater than 1, the geometry will be stretched; if it's less than 1, the geometry will be compressed.
GLfloat sx = 3.0f;
GLfloat sy = 3.0f;
glScalef(sx, sy, 1.0f); /* triple the size of subsequent geometry in x and y */
State Management and Drawing
Geometric Objects
Chapter Six
Basic State Management
 OpenGL maintains many states and state variables.
 An object may be rendered with lighting, texturing, hidden surface
removal, fog, and other states affecting its appearance.
 By default, most of these states are initially inactive.
 These states may be costly to activate; for example, turning on
texture mapping will almost certainly slow down the speed of
rendering a primitive.
 However, the quality of the image will improve and look more
realistic, due to the enhanced graphics capabilities.
Cont’d
 To turn on and off many of these states, use these two simple commands:
• void glEnable(GLenum cap);
• void glDisable(GLenum cap);
 glEnable() turns on a capability, and glDisable() turns it off.
 The glEnable function is used to enable specific OpenGL capabilities.
 It takes a single argument ‘cap’ which is an enumeration (GLenum)
representing the capability to be enabled.
Examples
 // Enable depth testing for accurate rendering of 3D scenes
• glEnable(GL_DEPTH_TEST);
 // Enable blending for transparency
• glEnable(GL_BLEND);
 // Enable face culling for efficient rendering of closed surfaces
• glEnable(GL_CULL_FACE);
• Face culling is a technique used to improve rendering performance by
omitting the drawing of polygons that are facing away from the viewer.
 // Enable Fog
• glEnable(GL_FOG);
• By enabling fog, you simulate effects like foggy weather or mist, and
distant objects appear less distinct.
Displaying Point, Lines, and Polygons
 By default, a point is drawn as a single pixel on the screen,
 A line is drawn solid and one pixel wide, and
 polygons are drawn solidly filled in.
Points Details
 To control the size of a rendered point, use glPointSize() and supply the
desired size in pixels as the argument.
 void glPointSize(GLfloat size);
• Sets the width in pixels for rendered points; size must be greater than
0.0 and by default is 1.0.
• if the width is 1.0, the square is 1 pixel by 1 pixel;
• if the width is 2.0, the square is 2 pixels by 2 pixels, and so on.
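As a minimal sketch (the coordinates are arbitrary), drawing enlarged points looks like this:

glPointSize(4.0f);            /* each point becomes a 4 x 4 pixel square */
glBegin(GL_POINTS);
    glVertex2f(-0.5f, 0.0f);
    glVertex2f( 0.5f, 0.0f);
glEnd();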
Lines Details
 With OpenGL, you can specify lines with different widths and lines that are
stippled in various ways :
• dotted,
• dashed,
• drawn with alternating dots and dashes, and so on.
Wide Lines
 void glLineWidth(GLfloat width);
 Sets the width in pixels for rendered lines; width must be greater than 0.0
and by default is 1.0.
Cont’d
Stippled Lines
 To make stippled (dotted or dashed) lines,
 You use the command glLineStipple() to define the stipple pattern, and then
you enable line stippling with glEnable().
glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x3F07); /* repeat factor 1, 16-bit pattern 0x3F07 */
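For instance, a dashed horizontal line could be drawn as in the following sketch (the endpoints are arbitrary; the pattern 0x00FF gives 8 bits on followed by 8 bits off, read from the low-order bit):

glEnable(GL_LINE_STIPPLE);
glLineStipple(2, 0x00FF);     /* factor 2 stretches each pattern bit to 2 pixels */
glBegin(GL_LINES);
    glVertex2f(-0.8f, 0.0f);
    glVertex2f( 0.8f, 0.0f);
glEnd();
glDisable(GL_LINE_STIPPLE);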
Cont’d
Figure 6.1 Stippled Lines
Polygons Details
 Polygons are typically drawn by filling in all the pixels enclosed within the
boundary,
 but you can also draw them as outlined polygons or simply as points at the
vertices.
 A filled polygon might be solidly filled or stippled with a certain pattern.
Polygons as Points, Outlines, or
Solids
 A polygon has two faces:
• Front and
• Back
 and might be rendered differently depending on which side is
facing the viewer.
Cont’d
 By default, both front and back faces are drawn in the same way.
 To change this, or to draw only outlines or vertices, use glPolygonMode().
glPolygonMode(GL_FRONT, GL_FILL); /* fill front-facing polygons */
glPolygonMode(GL_BACK, GL_LINE);  /* draw back faces as outlines */
Stippling Polygons
 To fill polygons with a pattern, define a 32 × 32-bit stipple mask with glPolygonStipple() and then enable polygon stippling:
glPolygonStipple(mask); /* mask points to a 32 x 32 stipple bitmap (GLubyte*) */
glEnable(GL_POLYGON_STIPPLE);
Normal Vectors
 A normal vector (or normal, for short) is a vector that points in a direction
that’s perpendicular to a surface.
 An object’s normal vectors define the orientation of its surface in space in
particular, its orientation relative to light sources.
Cont’d
 normal vectors are essential for simulating realistic lighting and shading
effects in computer graphics.
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(2.0, 0.0, 0.0); // length 2 along the x-axis
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(0.0, 0.0, 3.0); // length 3 along the z-axis
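In OpenGL, a normal is attached to the vertices that follow it with glNormal3f(); a minimal sketch (the triangle coordinates are arbitrary) is:

glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 0.0f, 1.0f);     /* normal points toward the viewer */
    glVertex3f(-1.0f, -1.0f, 0.0f);   /* the current normal applies to */
    glVertex3f( 1.0f, -1.0f, 0.0f);   /* all three vertices */
    glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();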
Vertex Arrays
 You may have noticed that OpenGL requires many function calls to render
geometric primitives.
 Drawing a 100−sided polygon requires 102 function calls:
• one call to glBegin(), one call for each of the vertices, and a final call to
glEnd().
• Generally, drawing an 𝒏-sided polygon takes 𝒏 + 𝟐 calls.
 In other code, additional information (polygon boundaries, edge flags or surface normals) adds further function calls for each vertex.
 This can quickly double or triple the number of function calls required for
one geometric object.
 For some systems, function calls have a great deal of overhead and can
hinder performance.
Cont’d
 OpenGL has vertex array routines that allow you to specify a lot of
vertex−related data with just a few arrays and to access that data with
equally few function calls.
 Using vertex array routines, all 𝒏 vertices of an 𝒏-sided polygon can be put into one array and drawn with one function call.
 Arranging data in vertex arrays may increase the performance of your
application.
 Also, using vertex arrays may allow non-redundant processing of shared vertices (vertex sharing is not supported on all implementations of OpenGL).
Cont’d
 There are three steps to use vertex arrays to render geometry.
i. Activate (enable) up to six arrays, each to store a different type of
data: vertex coordinates, RGBA colors, color indices, surface
normals, texture coordinates, or polygon edge flags.
ii. Put data into the array or arrays. The arrays are accessed by the
addresses of (that is, pointers to) their memory locations.
iii. Draw geometry with the data. OpenGL obtains the data from all
activated arrays by dereferencing the pointers.
Cont’d
 Step 1: Enabling Arrays
• The first step is to call glEnableClientState() with an
enumerated parameter, which activates the chosen array.
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
 Step 2: Specifying Data for the Arrays
Specify Vertex Array and Color Array
static GLint vertices[] = {25, 25,
                           100, 325,
                           175, 25};
static GLfloat colors[] = {1.0, 0.2, 0.2,
                           0.2, 0.2, 1.0,
                           0.8, 1.0, 0.2};
Cont’d
 Step 3: Setting up the pointers
glColorPointer(3, GL_FLOAT, 0, colors);  /* components per color (RGB), data type, stride, color array */
glVertexPointer(2, GL_INT, 0, vertices); /* components per vertex (x, y), data type, stride, vertex array */
Cont’d
 Step 4: Draw the polygon
glDrawArrays(GL_POLYGON, 0, 3); /* primitive type, first vertex index, vertex count */
 Step 5: Disable client states
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
 Step 6: Swap buffers
glutSwapBuffers(); // swap the front and back buffers after rendering into the back buffer
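Putting the six steps together, a display callback might look like the following minimal sketch (GLUT window setup with double buffering is assumed and omitted; the data are the arrays from Step 2):

void display(void)
{
    static GLint vertices[] = {25, 25, 100, 325, 175, 25};
    static GLfloat colors[] = {1.0f, 0.2f, 0.2f,
                               0.2f, 0.2f, 1.0f,
                               0.8f, 1.0f, 0.2f};

    glClear(GL_COLOR_BUFFER_BIT);

    glEnableClientState(GL_COLOR_ARRAY);       /* step 1: enable arrays */
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, colors);    /* steps 2-3: supply data */
    glVertexPointer(2, GL_INT, 0, vertices);

    glDrawArrays(GL_POLYGON, 0, 3);            /* step 4: draw */

    glDisableClientState(GL_COLOR_ARRAY);      /* step 5: disable arrays */
    glDisableClientState(GL_VERTEX_ARRAY);

    glutSwapBuffers();                         /* step 6: show the frame */
}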
Representing Objects
Chapter Seven
Introduction
 A 3D object, or three-dimensional object, is a physical or digital entity
that exists in three-dimensional space.
 In computer graphics and geometry, a 3D object is typically described
using coordinates in a three-dimensional Cartesian coordinate
system.
 These objects have length, width, and height, providing a more
realistic representation compared to 2D objects.
Modeling Using Polygon
 3D modeling using polygons is a common and widely used approach in
computer graphics.
 In this method, 3D objects are represented as surfaces made up of
interconnected polygons, typically triangles or quads.
Creating Polygon Meshes
 Creating representational polygon meshes involves techniques that enable
the modeling of 3D objects with polygons in a way that accurately represents
the intended shapes.
Cont’d
 Polygonal meshes can be represented using tables of data (a code sketch follows the list below):
• Geometric tables:
o These store information about the geometry of the polygonal mesh,
o i.e. what are the shapes/positions of the polygons?
 Attribute tables:
o These store information about the appearance of the polygonal
mesh,
o i.e. what colour is it, is it opaque or transparent, etc.
o This information can be specified for each polygon individually or
for the mesh as a whole.
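As a minimal C sketch (the type and field names are illustrative, not from any particular library), geometric and attribute tables might be declared like this:

/* Geometric tables: the shapes/positions of the polygons. */
typedef struct { float x, y, z; } Vertex;   /* vertex table entry */
typedef struct { int v[3]; } Triangle;      /* face table entry: indices into the vertex table */

/* Attribute table: the appearance, stored here per face. */
typedef struct { float r, g, b; int opaque; } FaceAttrib;

typedef struct {
    Vertex     *vertices;  int numVertices;
    Triangle   *faces;     int numFaces;
    FaceAttrib *attribs;   /* one entry per face */
} Mesh;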
Non-Polygonal Representations
 In computer graphics, non-polygonal representations are alternative
methods for representing 3D objects that don't rely on polygonal
meshes.
 These representations often provide specific advantages in certain
applications.
 Here are some common non-polygonal representations:
• Voxel Representation
• Point Clouds
• Skeleton or Wireframe Models
• Blobby Models
Voxel Representation
 Voxel (volume element) grids divide space into small 3D cubes.
 Each voxel stores information about the object's presence or
properties.
 A voxel is a unit of graphic information that defines a point in three-dimensional space.
Point Cloud Representation
 A collection of individual points in 3D space, where each point
represents a specific position.
 There's no connectivity information between points.
Skeleton or Wireframe Model
 Represent the 3D object using a skeletal structure composed of lines
or curves.
 The structure defines the overall shape.
Color and Images
Chapter Eight
Color Models
PROPERTIES OF LIGHT
 What we perceive as "light", or different colors, is a narrow frequency band within the electromagnetic spectrum.
 A few of the other frequency bands within this spectrum are called
radio waves, microwaves, infrared waves, and X-rays.
 Each frequency value within the visible band corresponds to a distinct color.
 At the low-frequency end is red (about 4.3 × 10¹⁴ hertz), and the highest frequency we can see is violet (about 7.5 × 10¹⁴ hertz).
Cont’d
 We perceive EM radiation within the 400–700 nm range, the tiny piece of the spectrum between infra-red and ultraviolet.
Cont’d
 The purpose of a color model is to facilitate the specification of colors
in some standard generally accepted way.
 Each industry that uses color employs the most suitable color model.
RGB Color Models
 In the RGB model, each color appears as a combination of red, green, and
blue.
 This model is called additive, and the colors are called primary colors.
 The primary colors can be added to produce the secondary colors of light.
Magenta= Red + Blue
Cyan = Green + Blue
Yellow = Red + Green
Cont’d
 The color subspace of interest is a cube.
 RGB values are normalized to the range 0 to 1, with red, green and blue at three corners;
 cyan, magenta, and yellow are the three other corners,
 black is at their origin; and white is at the corner farthest from the origin.
Cont’d
White = W ⇔ (r, g, b) = (1, 1, 1)
Black = K ⇔ (r, g, b) = (0, 0, 0)
Red = R ⇔ (r, g, b) = (1, 0, 0)
Green = G ⇔ (r, g, b) = (0, 1, 0)
Blue = B ⇔ (r, g, b) = (0, 0, 1)
Cyan = C ⇔ (r, g, b) = (0, 1, 1)
Magenta = M ⇔ (r, g, b) = (1, 0, 1)
Yellow = Y ⇔ (r, g, b) = (1, 1, 0)
CIE color Space
 Color is a human perception (a percept).
 Color is not a physical property.
 But it is related to the light spectrum of a stimulus.
 The CIE XYZ color space is a fundamental color space defined by the
International Commission on Illumination (CIE).
 It is designed to be a linear model that encompasses all perceivable colors.
 The CIE XYZ color space is based on the concept of tristimulus values,
which represent the amounts of three imaginary primaries: X, Y, and Z.
 These values are derived from the spectral power distributions of light.
Cont’d
Tristimulus Values:
1) X, Y, Z components:
• X (red-green axis): represents the amount of energy along the red-green axis.
• Y (luminance): represents the brightness or luminance.
• Z (blue-yellow axis): represents the amount of energy along the blue-yellow axis.
2) Normalization:
• The normalized chromaticity coordinates describe the color as perceived by the human eye, independent of brightness:

$$x = \frac{X}{X + Y + Z}, \qquad y = \frac{Y}{X + Y + Z}$$
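A minimal C sketch of this normalization (the function and parameter names are illustrative):

/* Convert CIE XYZ tristimulus values to (x, y) chromaticity coordinates. */
void xyzToChromaticity(float X, float Y, float Z, float *x, float *y)
{
    float sum = X + Y + Z;
    if (sum == 0.0f) { *x = 0.0f; *y = 0.0f; return; }  /* guard against black */
    *x = X / sum;
    *y = Y / sum;
}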
Image Formats and their applications
 An image format is a standardized way of representing and storing
digital images.
 It defines the structure and encoding of the data that makes up
an image,
 specifying how the visual information is organized and stored in a
file.
 Different image formats have distinct characteristics,
compression methods, color representations, and features that
make them suitable for specific use cases.
JPEG (Joint Photographic Experts Group)
 JPEG is a lossy compression format, meaning it achieves high
compression ratios by discarding some image data.
 This can result in a reduction in image quality, especially at
higher compression levels.
 JPEG supports 24-bit color, allowing it to represent millions of
colors.
Applications:
Photography
Web Images
PNG (Portable Network Graphics)
 PNG uses lossless compression, preserving all image data
without loss of quality. It is suitable for images that require high
fidelity.
 PNG supports an alpha channel, allowing for transparency.
 This makes PNG ideal for images with a need for a transparent
background.
 PNG supports 24-bit color as well as 8-bit color with alpha
channel, providing a wide range of color options.
Applications:
Graphics with Transparency
Images with Text
GIF (Graphics Interchange Format):
 GIF uses lossless compression but is less effective than PNG in
terms of compression ratios.
 GIF supports up to 8 bits per pixel, limiting the number of colors to
256.
 This makes it less suitable for complex photographic images
but sufficient for simple graphics.
 GIF supports animation by combining multiple frames into a single file.
 GIF allows a single color to be fully transparent, which can be useful for creating simple images with transparent regions.
Applications:
Simple Graphics: Suitable for icons, logos, and other simple
graphics.
Basic Animations: Used for creating simple animated
images.
Viewing and a Local Illumination Model
Chapter Nine
Illumination Model
 An illumination model, also known as a lighting model or shading model, is a mathematical representation or algorithm used in computer graphics to simulate how light interacts with surfaces in a virtual 3D environment.
 Illumination models are used in computer graphics for
applications such as 3D rendering, computer-aided design, virtual
reality, and animation.
3D Camera model
 2D and 3D refer to the actual dimensions in a
computer's workspace.
 2D is 'flat', using the X & Y (horizontal and vertical)
axis,
 3D adds the '𝑧' dimension.
 This third dimension allows for rotation and depth.
Cont’d
 The synthetic camera is a programmer’s model
for specifying how a 3D scene is projected onto
the screen.
Pinhole
Orthographic projection
 Orthographic projection is a type of projection where
parallel lines remain parallel after projection.
 It is often described as a "parallel projection."
 Objects are projected onto the image plane along lines that are parallel to the viewing direction.
 Objects appear the same size regardless of their distance
from the viewer.
 Commonly used in technical drawings, architectural
plans, and certain types of 3D modeling where precise
measurements are essential.
Cont’d
 glOrtho(X-min, X-max, Y-min, Y-max, Near, Far);
• X-min: left edge of the view volume on the x-axis
• X-max: right edge of the view volume on the x-axis
• Y-min: bottom edge of the view volume on the y-axis
• Y-max: top edge of the view volume on the y-axis
• Near: distance from the viewpoint to the near clipping plane
• Far: distance from the viewpoint to the far clipping plane
 glOrtho(-10, 10, -10, 10, -1, 1);
Perspective projection
 Objects that are farther from the viewer appear smaller.
 This effect is called foreshortening: objects closer to the viewer appear larger than objects farther away.
 Widely used in computer graphics, video games, and
virtual environments to create a realistic sense of depth
and distance.
 In orthographic projection, objects have consistent sizes regardless of their depth.
 In perspective projection, sizes vary based on distance from the viewer.
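A minimal sketch of setting up a perspective projection with the GLU helper (the parameter values are illustrative):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* 60-degree vertical field of view, 4:3 aspect ratio,
   near and far clipping planes at 0.1 and 100 units */
gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);
glMatrixMode(GL_MODELVIEW);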
Cont’d

More Related Content

Similar to Computer Graphics Power Point using Open GL and C Programming

Ha5 Full Article
Ha5 Full Article Ha5 Full Article
Ha5 Full Article
nixon2011
 
Graphics pdf
Graphics pdfGraphics pdf
Graphics pdf
aa11bb11
 
3D Final Work
3D Final Work3D Final Work
3D Final Work
conor0994
 
Computer Graphics Practical
Computer Graphics PracticalComputer Graphics Practical
Computer Graphics Practical
Neha Sharma
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
Ankit Garg
 

Similar to Computer Graphics Power Point using Open GL and C Programming (20)

computer graphics unit 1.ppt
computer graphics unit 1.pptcomputer graphics unit 1.ppt
computer graphics unit 1.ppt
 
unit1_updated.pptx
unit1_updated.pptxunit1_updated.pptx
unit1_updated.pptx
 
Specialized Application.pdf
Specialized Application.pdfSpecialized Application.pdf
Specialized Application.pdf
 
Computer graphics.
Computer graphics.Computer graphics.
Computer graphics.
 
Ha5 Full Article
Ha5 Full Article Ha5 Full Article
Ha5 Full Article
 
foedumed:Computer graphics 11_16
foedumed:Computer graphics 11_16foedumed:Computer graphics 11_16
foedumed:Computer graphics 11_16
 
Graphics pdf
Graphics pdfGraphics pdf
Graphics pdf
 
Computer Graphics Notes
Computer Graphics NotesComputer Graphics Notes
Computer Graphics Notes
 
Digital arts
Digital artsDigital arts
Digital arts
 
Know More About 3D Modeling
Know More About 3D ModelingKnow More About 3D Modeling
Know More About 3D Modeling
 
3D Final Work
3D Final Work3D Final Work
3D Final Work
 
Computer Graphics Practical
Computer Graphics PracticalComputer Graphics Practical
Computer Graphics Practical
 
Beginning direct3d gameprogramming01_thehistoryofdirect3dgraphics_20160407_ji...
Beginning direct3d gameprogramming01_thehistoryofdirect3dgraphics_20160407_ji...Beginning direct3d gameprogramming01_thehistoryofdirect3dgraphics_20160407_ji...
Beginning direct3d gameprogramming01_thehistoryofdirect3dgraphics_20160407_ji...
 
Computer graphics notes
Computer graphics notesComputer graphics notes
Computer graphics notes
 
topic_- introduction of computer graphics.
   topic_- introduction of computer graphics.   topic_- introduction of computer graphics.
topic_- introduction of computer graphics.
 
Computer graphics lec 1
Computer graphics lec 1Computer graphics lec 1
Computer graphics lec 1
 
Computer graphics ppt
Computer graphics pptComputer graphics ppt
Computer graphics ppt
 
Applications of cg
Applications of cgApplications of cg
Applications of cg
 
Cg
CgCg
Cg
 
Introduction to graphics
Introduction to graphicsIntroduction to graphics
Introduction to graphics
 

Recently uploaded

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Recently uploaded (20)

DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 
WSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering DevelopersWSO2's API Vision: Unifying Control, Empowering Developers
WSO2's API Vision: Unifying Control, Empowering Developers
 
How to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cfHow to Check CNIC Information Online with Pakdata cf
How to Check CNIC Information Online with Pakdata cf
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Simplifying Mobile A11y Presentation.pptx
Simplifying Mobile A11y Presentation.pptxSimplifying Mobile A11y Presentation.pptx
Simplifying Mobile A11y Presentation.pptx
 
Platformless Horizons for Digital Adaptability
Platformless Horizons for Digital AdaptabilityPlatformless Horizons for Digital Adaptability
Platformless Horizons for Digital Adaptability
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
The Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and InsightThe Zero-ETL Approach: Enhancing Data Agility and Insight
The Zero-ETL Approach: Enhancing Data Agility and Insight
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Navigating Identity and Access Management in the Modern Enterprise
Navigating Identity and Access Management in the Modern EnterpriseNavigating Identity and Access Management in the Modern Enterprise
Navigating Identity and Access Management in the Modern Enterprise
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
AI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by Anitaraj
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
Choreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software EngineeringChoreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software Engineering
 
Modernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using BallerinaModernizing Legacy Systems Using Ballerina
Modernizing Legacy Systems Using Ballerina
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 

Computer Graphics Power Point using Open GL and C Programming

  • 1. Computer Graphics Coursecode:CoSc 3072 Credithours:3, ECTS:5 Prerequisite:CoSc 1012 Computer Programming Assosa University Preparedby: kehussen12@gmail.com
  • 2. Course description  This course will introduce students to all aspects of computer graphics including hardware, software and applications.  Students will gain experience using a graphics application programming interface (OpenGL) by completing several programming projects.
  • 3. Course objectives  By the end of this course, students will be able to:  Have a basic understanding of the core concepts of computer graphics.  Be capable of using OpenGL to create interactive computer graphics.  Understand a typical graphics pipeline.  Have made pictures with their computer.
  • 4. Introduction to interactive computer graphics Chapter one
  • 5. What is Computer Graphics?  Computer graphics is an art of drawing pictures on computer screens with the help of programming.  It involves • Creation • Computation and • Manipulation of data
  • 6. What is Computer Graphics?  computer graphics is a rendering tool for the generation and manipulation of images.
  • 7. Types of Computer Graphics  Interactive  users have some controls over the image  the user can make any changes to the image produced  E.g. Ping-pong game, drawing on touch screen, animating pictures or graphics in movies etc…  Non-interactive  user has no control over the image.  E.g. screen savers, map representation of data etc…
  • 8. History of computer graphics  simple graphics on early computers  pop-up menus  constraint-based drawing  hierarchical modeling Ivan Sutherland (1950s- 1960s) - SKETCHPAD
  • 9. History of computer graphics  Uses mathematical equation to represent lines and curves.  mathematically based computer image format  development of languages like SAGE and Sketchpad that allowed users to interactively create and manipulate graphical objects. Vector graphics (1960s- 1970s)
  • 10. History of computer graphics  The introduction of raster displays and bitmap graphics revolutionized computer graphics.  In the mid-1970s, Xerox PARC developed the first computer with a graphical desktop, known as the "Alto." Pixel graphics (1970s- 1980s)
  • 11. History of computer graphics  In 1986, Pixar released "Luxo Jr.," a short film that showcased the potential of computer-generated 3D animation.  The film's success led to advancements in 3D rendering and animation, paving the way for computer-generated imagery (CGI) in films. Pixar and 3D Graphics (1980s)
  • 12. History of computer graphics  In the 1970s and 1980s, arcade games like "Pong," "Space Invaders," and "Pac-Man" popularized pixel graphics and pushed for faster and more advanced hardware. Video Games (1970s-1990s)
  • 13. History of computer graphics  The development of graphical user interfaces (GUIs) in the 1980s,  such as Apple's Macintosh and Microsoft Windows, made computer graphics more accessible to the general public.  GUIs replaced command-line interfaces with visually intuitive elements like icons, windows, and menus. Graphical User Interfaces (1980s-1990s)
  • 14. History of computer graphics  3D graphics cards with specialized hardware accelerators became available, greatly improving real-time 3D rendering capabilities.  This allowed for more immersive and visually impressive video games and applications. 3D Graphics Acceleration (1990s)
  • 15. History of computer graphics  introduced VR headsets for gaming  VR research continued and experienced a resurgence in the 2010s with the advent of more advanced VR devices. Virtual Reality (1990s-2000s)
  • 16. History of computer graphics  High-definition displays, advanced rendering techniques, and powerful GPUs have enabled incredibly realistic graphics in video games, movies, and virtual reality experiences.  Additionally, fields like data visualization, scientific simulations, and augmented reality have benefited significantly from computer graphics advancements. Modern Era (2000s-present)
  • 17. 3D graphics techniques and terminology  The term three-dimensional, or 3D, means that an object being described or displayed has three dimensions of measurement:  width,  height, and  depth. Fig 1.1 2D and 3D image
  • 18. 3D graphics techniques and terminology  The process by which mathematical and image data is transformed into a 3D dimensional image is called rendering.  When used as a verb, it is the process that your computer goes through to create the three dimensional image.  Rendering is also used as a noun, simply to refer to the final image produced.
  • 19. 3D graphics techniques and terminology  The following Figure shows the initial output of the BLOCK example program, which shows a line drawing of a cube on a table or platform. Transformations and Projections 3D wireframe cube  The points themselves are called vertices(or vertex in the singular)
  • 20. Components of a 3D graphic system  3D Modeling : o A way to describe the 3D world or scene, which is composed of mathematical representations of 3D objects called models.  3D Rendering : o A mechanism responsible for producing a 2D image from 3D models.
  • 21.  Simple 3D objects can be modeled using mathematical equations operating in the 3-dimensional Cartesian coordinate system. 3D Modeling the equation 𝑥2 + 𝑦2 + 𝑧2 = 𝑟2 is a model of a perfect sphere with radius r.
  • 22. 3D graphics techniques and terminology  By moving the points around, and drawing lines between them we can produce the illusion of a 3D world on a flat 2D screen.  The earliest flight simulators employed technology no more sophisticated than this. Transformations and Projections
  • 23. 3D graphics techniques and terminology  A projection matrix takes care of the mathematics necessary to turn our 3D coordinates into two-dimensional screen coordinates, where the final line drawing actually takes place. Transformations and Projections Rasterization  The actual drawing, or filling in of the pixels between each vertex to make the lines is called rasterization.
  • 24. 3D graphics techniques and terminology key concepts and terms  Vertex: In 3D graphics, a vertex (plural: vertices) is a point in space with three-dimensional coordinates (x, y, z).  Vertices are the building blocks of 3D models and are connected to form edges and faces. Points
  • 25. 3D graphics techniques and terminology key concepts and terms  Edge: An edge is a line segment connecting two vertices and define the shape of a 3D model and determine its overall structure. vertex edge edge (𝑥, 𝑦)
  • 26. 3D graphics techniques and terminology key concepts and terms  Face: A face is a flat polygon defined by three or more vertices. Faces create the visible surfaces of a 3D model, and their arrangement determines the shape of the object.
  • 27. 3D graphics techniques and terminology key concepts and terms  Rendering: The process of generating a 2D image from a 3D scene.  It involves calculating the color, shading, and lighting of objects in the scene to create a realistic or stylized representation.
  • 28. 3D graphics techniques and terminology key concepts and terms  Polygon: A polygon is a closed two-dimensional shape formed by connecting multiple vertices with edges.  Triangles and quads (four-sided polygons) are the most common types used in 3D modeling.
  • 29. 3D graphics techniques and terminology key concepts and terms  Texture Mapping: Applying a 2D image (texture) to the surface of a 3D model.  Texture mapping adds details, color, and texture to objects, enhancing their visual realism.  models are often described not only by geometry, but by textures as well.  To each point of an object, we can associate some property (surface color is a common one), and this property is then used in rendering the object.
  • 30. 3D graphics techniques and terminology key concepts and terms  Shading: Determining the appearance of surfaces by calculating how light interacts with them.  Shading techniques include flat shading (each face has a single color), Gouraud shading (smoothly interpolated colors across vertices), and Phong shading (smoothly interpolated normal for more realistic lighting).
  • 31. 3D graphics techniques and terminology
  • 32. Common Uses of Computer Graphics Real-Time 3D An OpenGL-based flight simulator, courtesy of x-plane.com.
  • 33. Common Uses of Computer Graphics 3D graphics used for computer-aided design (CAD) (image courtesy of Software Bisque).
  • 34. Common Uses of Computer Graphics 3D graphics used for medical imaging applications
  • 35. Common Uses of Computer Graphics 3D graphics used for medical imaging applications Designing Effective Step-By-Step Assembly Instructions
  • 36. Common Uses of Computer Graphics Virtual Reality  Experiencing things through our computers that don't really exist.  A believable, interactive 3D computer-created world that you can explore so you feel you really are there, both mentally and physically.
  • 37.  Windows on World(WoW) Types of Virtual Reality System  Also called Desktop VR  Using a conventional computer monitor to display the 3D virtual world.  Immersive VR  Completely immerse the user's personal viewpoint inside the virtual 3D world.  Completely immerse the user's personal viewpoint inside the virtual 3D world.
  • 38. Immersive VR (cont’d) Head-Mounted Display (HMD)  A Helmet or a face mask providing the visual and auditory displays.  Often equipped with a Head Mounted Display (HMD)
  • 39. Cont’d  Telepresence  A variation of visualizing complete computer generated worlds.  Links remote sensors in the real world with the senses of a human operator. The remote sensors might be located on a robot. Useful for performing operations in dangerous environments.
  • 41.  Mixed reality (Augmented reality) Types of Virtual Reality System (cont’d)  The seamless merging of real space and virtual space.  Integrate the computer-generated virtual objects into the physical world which become in a sense an equal part of our natural environment..
  • 42.  Mixed reality (Augmented reality) Types of Virtual Reality System (cont’d)
  • 43. Application Area  Computer Aided Design (CAD)  Presentation Graphics  Computer Art  Entertainment (animation, games, …)  Education & Training  Visualization (scientific & business)  Image Processing  Weather Maps  Cartography  Simulation and modeling  Graphical User Interfaces
  • 44. Computer Aided Design (CAD)  Used in design of buildings, automobiles, aircraft, watercraft, spacecraft, computers, textiles & many other products  Objects are displayed in wire frame outline form Software packages provide multi-window environment
  • 45. Presentation Graphics  Used to produce illustrations for reports or generate slides for use with projectors  Commonly used to summarize financial, statistical, mathematical, scientific, economic data for research reports, managerial reports & customer information bulletins  Examples : Bar charts, line graphs, pie charts, surface graphs, time chart
  • 46. Computer art  Used in fine art & commercial art  Includes artist’s paintbrush programs, paint packages, CAD packages and animation packages  These packages provides facilities for designing object shapes & specifying object motions.  Examples : Cartoon drawing, paintings, product advertisements, logo design
  • 48. Entertainment (Movies and games)  Movie Industry  Used in motion pictures, music  videos, and television shows.  Used in making of cartoon animation films
  • 49. Entertainment (Movies and games) Graphics Animation
  • 50. Game Industry  Focus on interactivity  Cost effective solutions
  • 51. Education and training  Computer generated models of physical, financial and economic systems are used as educational aids.  Models of physical systems, physiological systems, population trends, or equipment such as color-coded diagram help trainees understand the operation of the system
  • 52. Training Flight simulators, computer aided instruction, etc.
  • 53. Image processing  CG- Computer is used to create a picture  Image Processing – applies techniques to modify or interpret existing pictures such as photographs and TV scans  Medical applications  Picture enhancements  Tomography(a technique for displaying a cross section through a human body or other solid object using X-rays or ultrasound.)  Applications of image processing Improving picture quality Machine perception of visual information (Robotics)
  • 55. Graphical user interface  Major component – Window manager (multiple-window areas)  To make a particular window active, click in that window (using an interactive pointing device)  Interfaces display – menus & icons
  • 58. Graphics Hardware  Graphics hardware plays a vital role in computer graphics.  It provides the necessary processing power and capabilities to render and display graphics efficiently.  This chapter explores the key components and functionalities of graphics hardware, including graphics cards, GPUs (Graphics Processing Units), and display technologies.
  • 59. 2.1. Graphics Card  also known as a video card or GPU card.  is an expansion card that plugs into a computer's motherboard to handle graphics-related tasks.  It consists of various components, including:  GPU (Graphics Processing Unit)  VRAM (Video RAM)  Cooling System  Video Outputs
  • 60. …cont’d  GPU is the heart of a graphics card.  It is a specialized processor designed to perform complex mathematical calculations required for rendering 2D and 3D graphics.  Modern GPUs have hundreds or thousands of cores, enabling parallel processing and efficient rendering. GPU (Graphics Processing Unit)
  • 61. …cont’d  VRAM is dedicated memory on the graphics card used to store textures, frame buffers, and other graphical data.  It provides high-speed access to data required for rendering, improving overall performance. VRAM (Video RAM)
  • 62. …cont’d  Graphics cards have video outputs (HDMI, DisplayPort, DVI, etc.) to connect to external monitors and displays. Video Output
  • 63. …cont’d  Graphics cards generate a significant amount of heat during intensive rendering tasks.  Cooling systems, such as fans and heat sinks, are essential to dissipate heat and maintain the GPU's temperature within safe operating limits. Cooling System
  • 64. 2.2. Graphics API  Graphics APIs, such as OpenGL, DirectX, and Vulkan, act as an intermediary between the software and graphics hardware.  They provide a set of functions and commands that programmers can use to interact with the GPU and perform tasks like rendering, shading, and texture mapping. shading Texture Mapping
  • 65. 2.3. Display Technology  Graphics hardware is responsible for driving displays and monitors to present the rendered graphics to users.  The most commonly used display device is called Liquid Crystal Device (LCD).
  • 66. 2.3.1 Raster display system  Raster: A rectangular array of points or dots.  Pixel: One dot or picture element of the raster.  Pixel - one element of the framebuffer  Scan Line: A row of pixels  In a raster scan system, the electron beam is swept across the screen, one row at a time from top to bottom  Redraw the picture repeatedly by quickly directing the electron beam back over the same points. This is
  • 68. Raster Scan Displays  Horizontal retrace: The return to the left of the screen, after refreshing each scan line.  Vertical retrace: At the end of each frame (displayed in 1/80th to 1/60th of a second) the electron beam returns to the top left corner of the screen to begin the next frame. 68
  • 69. Cont’d  A Raster Scan Display is based on intensity control of pixels in the form of a rectangular box called Raster on the screen.  Information of on and off pixels is stored in refresh buffer or Frame buffer  Televisions in our house are based on Raster Scan Method.
  • 70. Cont’d  Frame Buffer is also known as Raster or bit map.  In Frame Buffer the positions are called picture elements or pixels Advantages of Raster Display system:  Realistic image  Million Different to be generated  Shadow Scenes are possible. Disadvantages Raster Display system :  Low Resolution  Expensive
  • 71. Raster Images  The quality of a raster image is determined by the total number pixels (resolution), and the amount of information in each pixel (color depth).  Raster graphics cannot be scaled to a higher resolution without loss of apparent quality.
  • 72. …cont’d  Raster displays work on the principle of rendering images as a grid of pixels.  Each pixel represents a single point on the screen and stores color information to produce the overall image.  The resolution of a raster display is determined by the number of pixels in both the horizontal and vertical directions. Pixel-based rendering
  • 73. …cont’d  The frame buffer is a dedicated region of memory in the graphics hardware where the entire screen image is stored.  It contains a pixel value for each location on the display.  The frame buffer serves as an intermediate buffer between the graphics processor and the display screen, allowing efficient rendering and screen updates. Frame-based rendering
  • 74. …cont’d  The refresh rate of a raster display system is the number of times per second that the frame buffer is updated and the screen is redrawn.  Common refresh rates include 60Hz, 120Hz, and 144Hz.  Higher refresh rates result in smoother motion and reduced flicker. Refresh rate
  • 75. …cont’d  The color depth, also known as bit depth, determines the number of colors a raster display can reproduce.  Common color depths include 8-bit (256 colors), 16-bit (65,536 colors), 24-bit (16.7 million colors), and higher.  Higher color depths allow for more realistic and detailed color representation. Color depth
  • 76. …cont’d  The resolution of a raster display is defined by the number of pixels in each dimension, such as 1920x1080 for Full HD or 3840x2160 for 4K.  Higher resolutions result in sharper and more detailed images. Resolution
  • 77. …cont’d  The aspect ratio is the ratio of the width to the height of the display.  Common aspect ratios include 4:3, 16:9, and 21:9.  The aspect ratio affects the shape and dimensions of the displayed image. Aspect ratio
  • 78.  Raster display systems have been the standard for computer graphics for many years due to their  simplicity,  efficiency, and  widespread compatibility.  However, newer display technologies, such as vector displays and ray tracing, are emerging to provide even more realistic and advanced graphics rendering techniques in certain applications. Summary of raster display
  • 79. 3D graphics pipeline  Also known as the graphics rendering pipeline.  It is a series of stages and processes that transform 3D objects and scenes into 2D images for display on a 2D screen.  This pipeline is an essential concept in computer graphics and provides a structured approach to efficiently render complex 3D scenes in real-time. o Vector processing o Primitive assembly & rasterization o Fragment processing o frame buffer o display 3D graphics pipeline stages
  • 80. 3D graphics pipeline stages  The pipeline begins with vertex processing.  where 3D models are transformed and manipulated at the vertex level.  This stage includes transformations such as o translation, o rotation, o scaling, and o projection. Vertex Processing
  • 81.  A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another.  The result is the conversion of 3D vertices into 2D coordinates on the screen, a process known as projection.  We translate a two-dimensional point by adding translation distances, 𝑡𝑥, and 𝑡𝑦, to the original coordinate position (x, y) to move the point to a new position (x', y’) 𝑥′ = 𝑥 + 𝑡𝑥, 𝑦′ = 𝑦 + 𝑡𝑦, Translation
  • 82. Translation Translation of triangle  The result is the conversion of 3D vertices into 2D coordinates on the screen, a process known as projection.
  • 83. Rotation  A two-dimensional rotation is applied to an object by repositioning it along a circular path in the 𝑥𝑦 plane.  To generate a rotation, we specify a rotation angle 𝜃 and the position (𝑥, 𝑦) of the rotation point (or pivot point) about which the object is rotated. Before rotation After 90 ‘ rotation
  • 84. Scaling  A scaling transformation alters the size of an object.  This operation can be carried out for polygons by multiplying the coordinate values (𝑥, 𝑦) of each vertex by scaling factors 𝑠𝑥, and 𝑠𝑦, to produce the transformed coordinates (𝑥′, 𝑦′): 𝑥′ = 𝑥 ⋅ 𝑠𝑥 𝑦′ = 𝑦 ⋅ 𝑠𝑦
  • 85. 3D graphics pipeline stages  After vertex processing, the graphics pipeline assembles vertices into primitives, such as  points,  lines, or  triangles.  Triangles are the most common primitive used in modern 3D graphics due to their simplicity and ability to tessellate complex surfaces.  Once primitives are formed, the rasterization stage determines which pixels on the screen are covered by these primitives. Primitive assembly and rasterization
  • 88. 3D graphics pipeline stages  In this stage, fragments (also known as pixels) generated by rasterization undergo further processing.  Each fragment contains information about its position on the screen and its attributes, such as color and depth.  Fragment processing includes operations like:  texture mapping,  shading, and  depth testing. Fragment processing
  • 90. Reading Assignment texture mapping, shading, depth testing.
  • 91. 3D graphics pipeline stages  The final processed fragments are written into the frame buffer, a dedicated portion of memory that stores the 2D image to be displayed on the screen.  The frame buffer contains pixel values representing colors and other display information. Frame buffer
  • 95. 3D graphics pipeline stages  The last stage of the pipeline is the display stage,  where the contents of the frame buffer are converted into analog signals and sent to the display device, such as a monitor or screen, for visual presentation. Display
  • 96. Color CRT Monitors  Colored pictures can be displayed using a combination of phosphors that emit different color light.  Commonly used techniques for display of colors are:  Beam Penetration  Shadow Mask
  • 97. Beam Penetration  Uses multilayer phosphor.  Used with random scan display  Two layers of phosphor (usually red and green) are coated on inner side of CRT screen.
  • 98. cont’d  Electron Beam intensity decides the displayed color.  High potential electron beam excite the green phosphor.  Low potential electron beam excite red phosphor.  Intermediate beam gives combinations of green and red light i.e. orange and yellow.
  • 99. cont’d Advantage • Inexpensive Method. Disadvantages • Limited colors are possible. • Poor picture quality. • Difficulty in changing electron beam potential by large amount.
  • 100. Shadow Mask  Shadow mask method is used in majority of color TV set and computer monitors.  It can display wide range of colors.  This method is commonly used in raster scan displays  It has red, green & blue color dots at each pixel position on the screen (forms a delta)
  • 101. Cont’d  It also has three electron guns one for each color dot (forms a delta).  A shadow-mask grid is placed just behind the phosphor-coated screen with holes, corresponding to each pixel on screen.  The electron beam from all the three guns are brought to the same point of focus on the shadow mask.
  • 103. Cont’d  Advantage:  produce realistic images  also produced different colors  and shadows scenes.  Disadvantages  low resolution  expensive  electron beam directed to whole screen
  • 104. 2.4. The Z Buffer For Hidden Surface Removal  Z-buffer is a 2D array that stores a depth value for each pixel.  This is referred to as the Z-buffer, since depth of an object is mainly calculated from the view plane along the 𝑧 axis of a coordinate system.
  • 109. Z buffer Examples Z buffer Screen
  • 112. Rendering Process with OpenGL Chapter Three
  • 113. What is OpenGL?  OpenGL is strictly defined as “a software interface to graphics hardware.”  it is a 3D graphics and modeling library that is highly portable and very fast.  Using OpenGL, you can create elegant and beautiful 3D graphics with exceptional visual quality.
  • 114. …Cont’d  Initially, it used algorithms carefully developed and optimized by Silicon Graphics, Inc. (SGI), an acknowledged world leader in computer graphics and animation.  Over time, OpenGL has evolved as other vendors have contributed their expertise and intellectual property to develop high-performance implementations of their own.
  • 115. …Cont’d  The OpenGL API itself is not a programming language like C or C++.  It is more like the C runtime library, which provides some prepackaged functionality.  OpenGL is intended for use with computer hardware that is designed and optimized for the display and manipulation of 3D graphics.  However, Software-only implementations of OpenGL are also possible.
  • 116. The key steps involved in creating and displaying graphics
  • 117. 1. Setting up the graphics context  The first step in using OpenGL is to set up the graphics context.  This involves creating a window or surface for rendering and initializing the OpenGL context within it.  Platform-specific libraries like GLFW (OpenGL Framework) or GLUT (OpenGL Utility Toolkit) can be used to manage the window and OpenGL context.
  • 118. 2. Defining the geometry  In OpenGL, 3D objects are represented as collections of vertices, edges, and faces.  To render a 3D scene, the application must define the geometry of the objects using vertex data.  The vertex data includes the coordinates of the vertices, information about normal (surface directions), texture coordinates, and other attributes.
  • 119. 3. Sending Data to the GPU  The vertex data is sent from the CPU (Central Processing Unit) to the GPU (Graphics Processing Unit) using OpenGL buffer objects.  These buffer objects efficiently store the vertex data in the GPU's memory for fast access during rendering.
  • 120. 4. Compiling and Linking Shaders  Shaders are small programs written in the OpenGL Shading Language (GLSL) that run on the GPU.  There are two types of shaders used in OpenGL: 1. vertex shaders, which manipulate vertices, and 2. fragment shaders, which determine the color and depth of fragments (pixels) generated during rasterization.  The application must compile and link the shader programs before they can be used in the rendering process.
  • 121. 5. Rendering Loop  The rendering process in OpenGL occurs within a rendering loop, also known as the game loop.  This loop continually updates the scene and renders it on the screen.  The loop typically involves the following steps: A) Clearing the Frame Buffer B) Updating the Scene C) Setting Up the Camera D) Binding Shaders and Uniforms E) Drawing the Geometry F) Displaying the Frame
  • 122. 3.1. Role of OpenGL in the Reference Model  The Reference Model is a conceptual framework that defines the components and processes involved in creating and displaying graphics on a computer screen.  OpenGL fits into this model as the graphics API responsible for rendering 2D and 3D graphics efficiently and interactively.
  • 123. 3.2. Coordinate system  A coordinate system is a mathematical framework used to specify the precise location of points in space or on a surface.  It provides a way to represent and measure positions or directions relative to a reference point or reference axes.  The origin of the 2D Cartesian system is at 𝑥 = 0, 𝑦 = 0.
  • 124. 3.2. Coordinate system Figure 3.1 Cartesian space OpenGL takes care of the mapping between Cartesian coordinates and window pixels when it comes time to rasterize (actually draw) your geometry on-screen.
  • 125. Cont’d  For example, a standard VGA screen has 640 pixels from left to right and 480 pixels from top to bottom.  To specify a point in the middle of the screen, you specify that a point should be plotted at (320,240)  that is, 320 pixels from the left of the screen and 240 pixels down from the top of the screen.  In OpenGL, or almost any 3D API, when you create a window to draw in, you must also specify the coordinate system you want to use and how to map the specified coordinates into physical screen pixels.
  • 126. 3.3. Viewing Using a Synthetic Camera  Takes in a 3D scene  Places (i.e., projects) the scene onto a 2D medium such as a roll of film or a digital pixel array What does a camera do?
  • 127. Cont’d  The synthetic camera is a programmer’s model for specifying how a 3D scene is projected onto the screen. Pinhole
  • 128. 3D Viewing: The Synthetic Camera  General synthetic camera: each package has its own but they are all (nearly) equivalent, with the following parameters/degrees of freedom:  Camera Position and Orientation  Field of view (angle of view, e.g., wide, narrow/telephoto, normal...)  Depth of field/focal distance (near distance, far distance)  Tilt of view/ film plane (if not perpendicular to viewing direction, produces oblique projections)  Perspective or Orthographic Projection
  • 129. 3.4. Output primitives  Output primitives are the basic geometric shapes that can be rendered by a graphics system.  Common output primitives: › Points: A single pixel or dot. › Lines: A straight line segment connecting two vertices › Triangles: A three-sided polygon defined by three vertices. › Quads: A four-sided polygon defined by four vertices › Other Polygons: Graphics systems may support polygons with more than three or four sides, but they are typically tessellated into triangles for rendering.
  • 130. Output attributes  Output attributes define the characteristics and appearance of output primitives.  These attributes include:: › Vertex Attributes: Each vertex of an output primitive may have various attributes, such as position, color, texture coordinates, normal vectors, and other user-defined properties. › Color: The color attribute determines the color of a primitive. Color can be represented using RGB (Red, Green, Blue) values, RGBA (RGB with an alpha component for transparency), or other color models.
  • 131. Geometry and Line Generation Chapter Four
  • 132. Introduction  All graphics packages construct pictures from basic building blocks known as graphics primitives  Primitives that describe the geometry, or shape, of these building blocks are known as geometric primitives.  They can be anything from 2-D primitives such as points, lines and polygons to more complex 3-D primitives such as spheres and polyhedra (a polyhedron is a 3-D surface made from a mesh of 2-D polygons).
  • 133. Cont’d…  In the following sections we will examine some algorithms for drawing different primitives, and where appropriate we will introduce the routines for displaying these primitives in OpenGL.
  • 134. OpenGL Point drawing primitives  The most basic type of primitive is the point.  Many graphics packages, including OpenGL, provide routines for displaying points. glBegin(GL_POINTS); glVertex2f(-0.5, 0.5); // First point (top-left) glVertex2f(0.5, 0.5); // Second point (top-right) glVertex2f(-0.5, -0.5); // Third point (bottom-left) glVertex2f(0.5, -0.5); // Fourth point (bottom-right) glEnd();
  • 136. Line Drawing Algorithms  Lines are a very common primitive and will be supported by almost all graphics packages.  Lines are normally represented by the two end-points of the line, and points 𝑥, 𝑦 along the line must satisfy the following slope-intercept equation: 𝑦 = 𝑚𝑥 + 𝑏 ----------------------------------------- eq. (1) where 𝑚 is the slope or gradient of the line, and 𝑏 is the coordinate at which the line intercepts the 𝑦 axes
  • 137. Cont’d…  Given two end-points (𝑥0, 𝑦0) and (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), we can calculate values for 𝑚 and 𝑏 as follows:     0 0 x x y y m end end     Furthermore, for any given x-interval Δ𝑥, we can calculate the corresponding y-interval Δ𝑦 ∶ Δ𝑦 = 𝑚. Δ𝑥 -------------------------------------------------------------------eq. (4) Δ𝑥 = 1 𝑚 ⋅ Δ𝑦 --------------------------------------------------------------- eq. (5) 𝑏 = 𝑦 − 𝑚. Δ𝑥 -------------------------------------------------- eq. (3) −−−−−−−−−−−−−−−−−−−−−−−−−− eq. (2)
  • 138. DDA Line-Drawing Algorithm  The Digital Differential Analyser (DDA) algorithm operates by starting at one end-point of the line,  and then using 𝐸𝑞. (4) and (5) to generate successive pixels until the second end-point is reached.  Therefore, first, we need to assign values for Δ𝑦 and Δ𝑥 Suppose we simply increment the value of 𝑥 at each iteration (i.e. Δ𝑥 = 1) and then compute the corresponding value for y using 𝑒𝑞. (2) 𝑎𝑛𝑑 (4).
  • 139. Cont’d  This would compute correct line points but, as illustrated by Figure 1, it would leave gaps in the line.  The reason for this is that the value of Δ𝑦 is greater than one, so the gap between subsequent points in the line is greater than 1 pixel. Figure 1 – ‘Holes’ in a Line Drawn by Incrementing 𝑥 and Computing the Corresponding y-Coordinate
  • 140. Cont’d  The solution to this problem is to make sure that both Δ𝑥 and Δ𝑦 have values less than or equal to one.  To ensure this, we must first check the size of the line gradient. The conditions are:  If |𝑚| ≤ 1: o Δ𝑥 = 1 o Δ𝑦 = 𝑚  If |𝑚| > 1: o Δ𝑥 = 1/𝑚 o Δ𝑦 = 1
  • 141. Cont’d  Once we have computed values for Δ𝑥 and Δ𝑦, the basic DDA algorithm is:  Start with (𝑥0, 𝑦0)  Find successive pixel positions by adding on (Δ𝑥, Δ𝑦) and rounding to the nearest integer, i.e. o 𝑥𝑘+1 = 𝑥𝑘 + Δ𝑥 o 𝑦𝑘+1 = 𝑦𝑘 + Δ𝑦  For each position (𝑥𝑘, 𝑦𝑘) computed, plot a line point at (𝑟𝑜𝑢𝑛𝑑(𝑥𝑘), 𝑟𝑜𝑢𝑛𝑑(𝑦𝑘)), where the round function will round to the nearest integer.  Note that the actual pixel plotted is found by rounding to the nearest integer, but we keep the real-valued location for calculating the next pixel position.
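A minimal C sketch of the DDA algorithm, assuming a hypothetical setPixel(x, y) helper that colours one pixel (for example, a glVertex2i call between glBegin(GL_POINTS) and glEnd()):

#include <math.h>
#include <stdlib.h>

void setPixel(int x, int y);   /* hypothetical: colours one pixel */

void ddaLine(int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    /* the larger of |dx| and |dy| fixes the number of steps, so that
       both increments have magnitude <= 1 */
    int steps = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);
    if (steps == 0) { setPixel(x0, y0); return; }  /* degenerate line */
    float xInc = dx / (float)steps;    /* = 1 or 1/m */
    float yInc = dy / (float)steps;    /* = m or 1   */
    float x = x0, y = y0;
    for (int k = 0; k <= steps; k++) {
        setPixel((int)roundf(x), (int)roundf(y));  /* plot rounded position */
        x += xInc;   /* keep the real-valued position for the next step */
        y += yInc;
    }
}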
  • 142. Examples (DDA algorithm)  Apply the DDA algorithm for drawing a straight-line segment.  Given: (𝑥0, 𝑦0) = (10,10), (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑) = (15,13)  First compute a value for the gradient 𝑚: 𝑚 = (𝑦𝑒𝑛𝑑 − 𝑦0) / (𝑥𝑒𝑛𝑑 − 𝑥0) = (13 − 10) / (15 − 10) = 3/5 = 0.6  Now, because |𝑚| ≤ 1, we compute Δ𝑥 and Δ𝑦 as follows: Δ𝑥 = 1 and Δ𝑦 = 0.6
  • 143. Cont’d…  Using these values of Δ𝑥 and Δy we can now start to plot line points:  Start with (𝑥0, 𝑦0) = (𝟏𝟎, 𝟏𝟎) – colour this pixel  Next, (𝑥1, 𝑦1) = (10 + 1,10 + 0.6) = (11,10.6) – so we colour pixel (11,11)  Next, (𝑥2, 𝑦2) = (11 + 1,10.6 + 0.6) = (12,11.2) – so we colour pixel (12,11)  Next, (𝑥3, 𝑦3) = (12 + 1,11.2 + 0.6) = (13,11.8) – so we colour pixel (13,12)  Next, (𝑥4, 𝑦4) = (13 + 1,11.8 + 0.6) = (14,12.4) – so we colour pixel (14,12)  Next, (𝑥5, 𝑦5) = (14 + 1,12.4 + 0.6) = (15,13) – so we colour pixel (15,13)  We have now reached the end-point (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑), so the algorithm terminates
  • 144. Cont’d…  Figure: the DDA line plotted from (𝑥0, 𝑦0) = (10,10) to (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑) = (15,13)
  • 145. Bresenham’s Line-Drawing Algorithm  Bresenham’s line-drawing algorithm provides significant improvements in efficiency over the DDA algorithm.  These improvements arise from the observation that for any given line,  if we know the previous pixel location, we only have a choice of 2 locations for the next pixel.  This concept is illustrated in Figure 3: given that we know (𝑥𝑘, 𝑦𝑘) is a point on the line, we know the next line point must be either pixel A or pixel B.
  • 146. Cont’d  Therefore we do not need to compute the actual floating-point location of the ‘true’ line point; we need only make a decision between pixels A and B. Figure 3 - Bresenham's Line-Drawing Algorithm
  • 147. Cont’d  Bresenham’s algorithm works as follows:  First, we denote by 𝑑𝑢𝑝𝑝𝑒𝑟 and 𝑑𝑙𝑜𝑤𝑒𝑟 the distances between the centres of pixels A and B and the ‘true’ line (see Figure 3).  Using eq. (1), the ‘true’ y-coordinate at 𝑥𝑘 + 1 can be calculated as: 𝑦 = 𝑚(𝑥𝑘 + 1) + 𝑏 ----------------------------------------- eq. (6)  Therefore we compute 𝑑𝑙𝑜𝑤𝑒𝑟 and 𝑑𝑢𝑝𝑝𝑒𝑟 as: 𝑑𝑙𝑜𝑤𝑒𝑟 = 𝑦 − 𝑦𝑘 = 𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘 …………………………… eq. (7) 𝑑𝑢𝑝𝑝𝑒𝑟 = (𝑦𝑘 + 1) − 𝑦 = (𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏 ….. eq. (8)
  • 148. Cont’d  Now, we can decide which of pixels A and B to choose based on comparing the values of 𝑑𝑢𝑝𝑝𝑒𝑟 and 𝑑𝑙𝑜𝑤𝑒𝑟: o If 𝑑𝑙𝑜𝑤𝑒𝑟 > 𝑑𝑢𝑝𝑝𝑒𝑟, choose pixel A o Otherwise choose pixel B  We make this decision by first subtracting 𝑑𝑢𝑝𝑝𝑒𝑟 from 𝑑𝑙𝑜𝑤𝑒𝑟: 𝑑𝑙𝑜𝑤𝑒𝑟 − 𝑑𝑢𝑝𝑝𝑒𝑟 = [𝑚(𝑥𝑘 + 1) + 𝑏 − 𝑦𝑘] − [(𝑦𝑘 + 1) − 𝑚(𝑥𝑘 + 1) − 𝑏] = 2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1
  • 149. Cont’d  If the value of this expression is positive we choose pixel A;  otherwise we choose pixel B.  The question now is how we can compute this value efficiently.  To do this, we define a decision variable 𝑝𝑘 for the kth step in the algorithm and try to formulate 𝑝𝑘 so that it can be computed using only integer operations.  To achieve this we substitute 𝑚 = Δ𝑦/Δ𝑥 and multiply the difference through by Δ𝑥: 𝑝𝑘 = Δ𝑥 (𝑑𝑙𝑜𝑤𝑒𝑟 − 𝑑𝑢𝑝𝑝𝑒𝑟) = Δ𝑥 (2𝑚(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1) …. eq. (9)
  • 150. Cont’d 𝑝𝑘 = Δ𝑥 (2(Δ𝑦/Δ𝑥)(𝑥𝑘 + 1) − 2𝑦𝑘 + 2𝑏 − 1) 𝑝𝑘 = 2Δ𝑦𝑥𝑘 + 2Δ𝑦 − 2Δ𝑥𝑦𝑘 + 2𝑏Δ𝑥 − Δ𝑥  Now, collect like terms: 𝑝𝑘 = 2Δ𝑦𝑥𝑘 − 2Δ𝑥𝑦𝑘 + (2Δ𝑦 + 2𝑏Δ𝑥 − Δ𝑥), where the bracketed term is a constant.  Always 𝑥𝑘+1 = 𝑥𝑘 + 1  If 𝑝𝑘 < 0, then 𝑦𝑘+1 = 𝑦𝑘, otherwise 𝑦𝑘+1 = 𝑦𝑘 + 1
  • 151. Cont’d  Therefore we can define the incremental calculation as: 𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦 if 𝑝𝑘 < 0 𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦 − 2Δ𝑥 if 𝑝𝑘 ≥ 0
  • 152. Cont’d  Bresenham’s algorithm if |𝑚| < 1:  Plot the start-point of the line (𝑥0, 𝑦0)  Compute the first decision variable: o 𝑝0 = 2Δ𝑦 − Δ𝑥  For each k, starting with k = 0: o If 𝑝𝑘 < 0 • Plot (𝑥𝑘 + 1, 𝑦𝑘) • 𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦 o If 𝑝𝑘 ≥ 0 • Plot (𝑥𝑘 + 1, 𝑦𝑘 + 1) • 𝑝𝑘+1 = 𝑝𝑘 + 2Δ𝑦 − 2Δ𝑥  Repeat these steps Δ𝑥 times.  A code sketch is shown below.
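A minimal C sketch of Bresenham's algorithm for the case 0 ≤ m < 1 and x0 < xEnd, using the same hypothetical setPixel() helper as the DDA sketch:

void bresenhamLine(int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0;
    int p = 2 * dy - dx;          /* p0 = 2*dy - dx */
    int x = x0, y = y0;
    setPixel(x, y);               /* plot the start point */
    while (x < xEnd) {            /* dx iterations in total */
        x++;
        if (p < 0) {
            p += 2 * dy;          /* stay on the same row */
        } else {
            y++;                  /* step up to the next row */
            p += 2 * dy - 2 * dx;
        }
        setPixel(x, y);
    }
}

Running this with (10,10) and (15,13) reproduces the worked example that follows: pixels (10,10), (11,11), (12,11), (13,12), (14,12), (15,13).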
  • 153. Cont’d Bresenham’s algorithm example: (𝑥0, 𝑦0) = (10,10), and (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑) = (15,13) Δ𝑥 = 5, Δ𝑦 = 3, 𝑚 = 0.6, 𝑝0 = 2Δ𝑦 − Δ𝑥 = 1  |𝑚| < 1 is true, so use the above algorithm  Plot (𝑥0, 𝑦0) = (10,10), color pixel (10,10) • 𝑝0 ≥ 0, so • Plot (11,11), color pixel (11,11) • 𝑝1 = 𝑝0 + 2Δ𝑦 − 2Δ𝑥 = -3 • 𝑝1 < 0, so • Plot (12,11), color pixel (12,11) • 𝑝2 = 𝑝1 + 2Δ𝑦 = -3 + 6 = 3
  • 154. Cont’d • 𝑝2 ≥ 0, so • Plot (13,12), color pixel (13,12) • 𝑝3 = 𝑝2 + 2Δ𝑦 − 2Δ𝑥 = 3 + 2(3) − 2(5) = -1 • 𝑝3 < 0, so • Plot (14,12), color pixel (14,12) • 𝑝4 = 𝑝3 + 2Δ𝑦 = -1 + 6 = 5 • 𝑝4 ≥ 0, so • Plot (15,13), color pixel (15,13)  We have now completed the maximum of Δ𝑥 iterations, so the algorithm terminates.
  • 155. Cont’d  Figure: the Bresenham line plotted from (𝑥0, 𝑦0) = (10,10) to (𝑥𝑒𝑛𝑑, 𝑦𝑒𝑛𝑑) = (15,13)
  • 156. Reading Assignment  How does Bresenham’s algorithm work if |𝑚| > 1, and if 𝑚 = 0?  The figure below shows the start and end points of a straight line. Show how Bresenham’s algorithm would draw the line between the two points.
  • 157. Circle Drawing Algorithm  Some graphics packages allow us to draw circle primitives.  Before we examine algorithms for circle-drawing we will consider the mathematical equations of a circle.  In Cartesian coordinates we can write: (𝑥 − 𝑥𝑐)² + (𝑦 − 𝑦𝑐)² = 𝑟² where (𝑥𝑐, 𝑦𝑐) is the centre of the circle.  Alternatively, in polar coordinates we can write: 𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃 𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃
  • 158. Plotting using Cartesian co-ordinates  Suppose we successively increment the x-coordinate and calculate the corresponding y-coordinate using 𝑦 = 𝑦𝑐 ± √(𝑟² − (𝑥 − 𝑥𝑐)²)  This would correctly generate points on the boundary of a circle.
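A minimal C sketch of this approach for the first octant (where it produces no gaps), again assuming the hypothetical setPixel() helper; (xc, yc) is the centre and r the radius:

#include <math.h>

void circleCartesianOctant(int xc, int yc, int r)
{
    for (int x = 0; ; x++) {
        double y = sqrt((double)(r * r - x * x));   /* y = sqrt(r^2 - x^2) */
        if (x > y) break;                           /* left the first octant */
        setPixel(xc + x, yc + (int)(y + 0.5));      /* round y to nearest pixel */
    }
}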
  • 159. Cont’d  Show how the following circle-drawing algorithms would draw a circle of radius 5 centred on the origin.  You need only consider the upper-right octant of the circle, i.e. the arc shown in red in the figure below. A) Plotting points using Cartesian coordinates. B) Plotting points using polar coordinates. Example
  • 160. Cont’d Plotting using Cartesian co-ordinates  Given • r = 5, (𝑥𝑐, 𝑦𝑐) = (0,0) • We start from 𝑥 = 0, and then successively increment 𝑥 and calculate the corresponding 𝑦 using 𝑦 = 𝑦𝑐 + √(𝑟² − (𝑥 − 𝑥𝑐)²)  We can tell that we have left the first octant when 𝑥 > 𝑦. • 𝑥 = 0, 𝑦 = 0 + √(5² − (0 − 0)²) = 5, plot (0,5) • 𝑥 = 1, 𝑦 = 0 + √(5² − (1 − 0)²) = 4.9, plot (1,5) • 𝑥 = 2, 𝑦 = 0 + √(5² − (2 − 0)²) = 4.58, plot (2,5) • 𝑥 = 3, 𝑦 = 0 + √(5² − (3 − 0)²) = 4, plot (3,4) • 𝑥 = 4, 𝑦 = 0 + √(5² − (4 − 0)²) = 3, so 𝑥 > 𝑦 and we stop
  • 162. Plotting using polar co-ordinates  An alternative technique is to use the polar coordinate equations.  Recall that in polar coordinates we express a position in the coordinate system as an angle 𝜃 and a distance 𝑟.  For a circle, the radius r will be constant, but we can increment 𝜃 and compute the corresponding 𝑥 and 𝑦 values
  • 163. Cont’d  Show how the following circle-drawing algorithms would draw a circle of radius 5 centred on the origin.  You need only consider the upper-right octant of the circle, i.e. the arc shown in red in the figure below. A) Plotting points using polar coordinates. Example
  • 164. Cont’d Example Plotting using Polar co-ordinates  Given • r = 5, (𝑥𝑐, 𝑦𝑐) = (0,0)  We can tell that we have left the first octant when 𝜃 < 45°.  To take steps of roughly one pixel along the arc, we choose the angular increment as Δ𝜃 = (1/𝑟) × (180°/𝜋) = (1/5) × (180°/𝜋) = 11.46°  Then we start with 𝜃 = 90°, and compute 𝑥 and 𝑦 using: 𝑥 = 𝑥𝑐 + 𝑟 cos 𝜃 𝑦 = 𝑦𝑐 + 𝑟 sin 𝜃  for successive values of 𝜃, subtracting 11.46° at each iteration.  We stop when 𝜃 becomes less than 45°.
  • 165. Cont’d Example Plotting using Polar co-ordinates • 𝑥 = 0 + 5 cos 90° = 0, 𝑦 = 0 + 5 sin 90° = 5, plot (0,5) • 𝑥 = 0 + 5 cos 78.54° = 0.99, 𝑦 = 0 + 5 sin 78.54° = 4.9, plot (1,5) • 𝑥 = 0 + 5 cos 67.08° = 1.95, 𝑦 = 0 + 5 sin 67.08° = 4.6, plot (2,5) • 𝑥 = 0 + 5 cos 55.62° = 2.82, 𝑦 = 0 + 5 sin 55.62° = 4.13, plot (3,4) • 𝜃 = 44.16°, which is less than 45°, so we stop.  The points plotted are the same as for the Cartesian plotting algorithm.  A code sketch is shown below.
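A minimal C sketch of the polar approach, with an angular step of 1/r radians (≈ 11.46° for r = 5) so that successive points are roughly one pixel apart; M_PI is assumed to be available from <math.h>:

#include <math.h>

void circlePolarOctant(int xc, int yc, int r)
{
    double dTheta = 1.0 / r;                       /* step of 1/r radians */
    for (double theta = M_PI / 2; theta >= M_PI / 4; theta -= dTheta) {
        int x = xc + (int)(r * cos(theta) + 0.5);  /* round to nearest pixel */
        int y = yc + (int)(r * sin(theta) + 0.5);
        setPixel(x, y);
    }
}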
  • 166. Fill Area Primitives  The most common type of primitive in 3-D computer graphics is the fill-area primitive.  The term fill-area primitive refers to any enclosed boundary that can be filled with a solid colour or pattern.  However, fill-area primitives are normally polygons, as they can be filled more efficiently by graphics packages. A Polygon with an Edge Crossing
  • 167. Cont’d  Polygons are the most common form of graphics primitive because they form the basis of polygonal meshes, which are the most common representation for 3-D graphics objects.  Polygonal meshes approximate curved surfaces by forming a mesh of simple polygons. Examples of Polygonal Mesh Surfaces
  • 168. Convex and Concave Polygons  We can differentiate between convex and concave polygons: • Convex polygons have all interior angles ≤ 180° • Concave polygons have at least one interior angle > 180° (a) Convex (b) Concave
  • 169. Polygons inside-outside test  In order to fill polygons we need some way of telling if a given point is inside or outside the polygon boundary:  we call this an inside-outside test.  Two different inside-outside tests: • Odd – Even rule • Non zero winding number rule
  • 170. Odd – Even rule  Draw a line from P to some distant point (that is known to be outside the polygon boundary).  Count the number of crossings of this line with the polygon boundary: o If the number of crossings is odd, then P is inside the polygon boundary. o If the number of crossings is even, then P is outside the polygon boundary.
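A minimal C sketch of the odd-even rule, casting a horizontal ray from P to the right and counting edge crossings; vx[] and vy[] are assumed to hold the n polygon vertices in order:

int insideOddEven(double px, double py, const double vx[], const double vy[], int n)
{
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* does edge (j -> i) straddle the horizontal line through P, and is
           the intersection point to the right of P? */
        if (((vy[i] > py) != (vy[j] > py)) &&
            (px < vx[j] + (vx[i] - vx[j]) * (py - vy[j]) / (vy[i] - vy[j])))
            crossings++;
    }
    return crossings % 2;    /* odd => inside, even => outside */
}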
  • 171. Nonzero Winding Number rule  This time we consider each edge of the polygon to be a vector.  They have a direction as well as a position.  These vectors are directed in a particular order around the boundary of the polygon (the programmer defines which direction the vectors go).
  • 172. Cont’d  Now we decide if a point P is inside or outside the boundary as follows:  Draw a line from P to some distant point (that is known to be outside the polygon boundary).  At each edge crossing, add 1 to the winding number if the edge goes from right to left, and subtract 1 if it goes from left to right. o If the total winding number is nonzero, P is inside the polygon boundary. o If the total winding number is zero, P is outside the polygon boundary.
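A minimal C sketch of the nonzero winding number rule, using the same rightward ray but adding or subtracting 1 at each crossing according to the edge direction:

int insideWinding(double px, double py, const double vx[], const double vy[], int n)
{
    int winding = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        if (((vy[i] > py) != (vy[j] > py)) &&
            (px < vx[j] + (vx[i] - vx[j]) * (py - vy[j]) / (vy[i] - vy[j])))
            winding += (vy[i] > vy[j]) ? 1 : -1;   /* upward edge +1, downward -1 */
    }
    return winding != 0;     /* nonzero => inside */
}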
  • 173. Cont’d  We can see from the following figures that the nonzero winding number rule gives a slightly different result from the odd-even rule for the example polygon given.  In fact, for most polygons (including all convex polygons) the two algorithms give the same result.
  • 174. Cont’d  Show which parts of the fill-area primitive shown below would be classified as inside or outside using the following inside-outside tests:  A) Odd-even rule  B) Nonzero winding number rule Example
  • 175. Cont’d  The odd-even rule classifies the inner polygon as outside because points inside it have two (an even number) line crossings to reach any distant point.  For the nonzero winding number rule, both of the line crossings go from right to left, so the winding number is incremented in both cases. • Therefore the total winding number for points inside the inner polygon is 2, which is nonzero, and so the points are classified as inside. • By reversing the direction of the edge vectors of the inner (or outer) polygon we could get the same result as the odd-even rule. Answer
  • 177. Text and Characters  The final type of graphics primitive we will consider is the character primitive.  Character primitives can be used to display text characters.  Representations for characters:  Bitmap or font  Stroke or outline  In bitmap representation (or bitmap font), characters are stored as a grid of pixel values.  This is a simple representation that allows fast rendering of the character.  However, such representations are not easily scalable.
  • 178. Cont’d  In stroke, or outline, representation, characters are stored using line or curve primitives.  To draw the character we must convert these primitives into pixel values on the display.  They are much more easily scalable.  To generate a larger version of the character we just multiply the coordinates of the line/curve primitives by some scaling factor.  Bold and italic characters can be generated using a similar approach.  The disadvantage of stroke fonts is that they take longer to draw than bitmap fonts.
  • 179. Cont’d Bitmap and Stroke Character Primitives
  • 181. 2D Transformation  First of all let us review some basics of matrices.  2x2 matrices can be multiplied according to the following equation: [a b; c d][e f; g h] = [ae+bg af+bh; ce+dg cf+dh] ………………………….… (1)  For example, [3 1; 2 1][1 2; 2 0] = [3·1+1·2 3·2+1·0; 2·1+1·2 2·2+1·0] = [5 6; 4 4]
  • 182. 2D Transformation  In general, for matrices A and B to be multiplied, the number of columns in A must be equal to the number of rows in B.  Matrix multiplication is not commutative.  In other words, for two matrices A and B, 𝑨𝑩 ≠ 𝑩𝑨 in general.  We can see this from the following example: [1 2; 2 0][3 1; 2 1] = [7 3; 6 2], whereas [3 1; 2 1][1 2; 2 0] = [5 6; 4 4]
  • 183. 2D Transformation  However, matrix multiplication is associative.  This means that if we have three matrices A, B and C, then • (𝐴𝐵)𝐶 = 𝐴(𝐵𝐶). • We can see this from the following example: ([3 1; 2 1][1 2; 2 0])[1 0; 2 1] = [5 6; 4 4][1 0; 2 1] = [17 6; 12 4] [3 1; 2 1]([1 2; 2 0][1 0; 2 1]) = [3 1; 2 1][5 2; 2 0] = [17 6; 12 4]
  • 184. 2D Translation  Translation is the ability to reposition an image or object along a straight line path from one location to another.  The translation transformation shifts all points by the same amount.  Therefore, in 2-D, we must define two translation parameters: • the 𝑥-translation 𝑡𝑥 • the 𝑦-translation 𝑡𝑦  To translate a point 𝑃 to 𝑃′ we add on a vector 𝑇: 𝑃 = [𝑝𝑥; 𝑝𝑦] ………………………………………………… .……….. (2) 𝑃′ = [𝑝𝑥′; 𝑝𝑦′] ………………………………………………… .……….. (3) 𝑇 = [𝑡𝑥; 𝑡𝑦] ………………………………………………… .……….. (4)
  • 185. Cont’d 𝑃′ = 𝑃 + 𝑇 ………………………………………………… .……….. (5)  Therefore, from Eq. (5) we can see that the relationship between points before and after the translation is: 𝑝𝑥′ = 𝑝𝑥 + 𝑡𝑥 ………………………………………………… .……….. (6) 𝑝𝑦′ = 𝑝𝑦 + 𝑡𝑦 ………………………………………………… .……….. (7) Figure: Translation by 𝑡𝑥 = 3, 𝑡𝑦 = 1
  • 186. 2D Rotation  Rotation is the ability to reposition an image along a circular path in 𝑥𝑦 − 𝑝𝑙𝑎𝑛𝑒.  The rotation transformation rotates all points about a centre of rotation.  Normally this centre of rotation is assumed to be at the origin (0,0),  although as we will see later on it is possible to rotate about any point.  The rotation may be either clockwise or anticlockwise. • Positive value of rotation angle provide an anticlockwise rotation • negative value produce a clockwise rotation about the given point  The rotation transformation has a single parameter: the angle of rotation, 𝜃.
  • 187. Cont’d  To rotate a point P anti-clockwise by 𝜃, we apply the rotation matrix 𝑅: 𝑅 = [cos 𝜃 −sin 𝜃; sin 𝜃 cos 𝜃] ………………………………………………… .……….. (8) [𝑝𝑥′; 𝑝𝑦′] = [cos 𝜃 −sin 𝜃; sin 𝜃 cos 𝜃][𝑝𝑥; 𝑝𝑦] ………………………………………………… .……….. (9)  Therefore, from Eq. (9) we can see that the relationship between points before and after the rotation is: 𝑝𝑥′ = 𝑝𝑥 cos 𝜃 − 𝑝𝑦 sin 𝜃 ………………………………………………… .……….. (10) 𝑝𝑦′ = 𝑝𝑥 sin 𝜃 + 𝑝𝑦 cos 𝜃 ………………………………………………… .……….. (11)
  • 188. Cont’d  Figure: Rotation about the Origin
  • 189. 2D Scaling  Scaling is the ability to change the size of an image.  The operation is necessary when we have to either zoom in on an object so that we can get a better view of it, or zoom out so that we can see more of the scene.  The scaling transformation multiplies each coordinate of each point by a scale factor.  The scale factor can be different for each coordinate (e.g. for the 𝑥 and 𝑦 coordinates).
  • 190. Cont’d  To scale a point P by scale factors 𝑆𝑥 and 𝑆𝑦 we apply the scaling matrix S: 𝑆 = [𝑆𝑥 0; 0 𝑆𝑦] ………………………………………………… .……….. (12) [𝑝𝑥′; 𝑝𝑦′] = [𝑆𝑥 0; 0 𝑆𝑦][𝑝𝑥; 𝑝𝑦] ………………………………………………… .……….. (13) i.e. 𝑥𝑛𝑒𝑤 = 𝑝𝑥 ⋅ 𝑆𝑥 and 𝑦𝑛𝑒𝑤 = 𝑝𝑦 ⋅ 𝑆𝑦
  • 191. Cont’d  Therefore, from Eq. (13) we can see that the relationship between points before and after the scaling is: 𝑝𝑥′ = 𝑆𝑥 ⋅ 𝑝𝑥 ………………………………………………… .……….. (14) 𝑝𝑦′ = 𝑆𝑦 ⋅ 𝑝𝑦 ………………………………………………… .……….. (15) Figure: A 2-D Scaling by 𝑆𝑥 = 2, 𝑆𝑦 = 2
  • 192. Homogeneous Coordinates  Homogeneous coordinates are a system of coordinates used in projective geometry.  Points at infinity can be represented using finite coordinates.  A single matrix can represent affine transformations and projective transformations.  Homogeneous coordinates allow us to compose multiple transformations into a single matrix.  With homogeneous coordinates we add an extra coordinate, the homogeneous parameter, to each point in Cartesian coordinates.  So 2-D points are stored as three values: • the 𝑥-coordinate, • the 𝑦-coordinate and • the homogeneous parameter.
  • 193. Cont’d  The relationship between homogeneous points and their corresponding Cartesian points is:  Homogeneous point = (𝑥, 𝑦, 𝑤), Cartesian point = (𝑥/𝑤, 𝑦/𝑤)  Normally the homogeneous parameter is given the value 1, in which case homogeneous coordinates are the same as Cartesian coordinates but with an extra value which is always 1.
  • 194. 2D Translation with homogeneous coordinates  We can express a translation transformation using a single matrix multiplication: 𝑃 = [𝑝𝑥; 𝑝𝑦; 1] ………………………………………………… .……….. (16) 𝑃′ = [𝑝𝑥′; 𝑝𝑦′; 1] ………………………………………………… .……….. (17) 𝑇 = [1 0 𝑡𝑥; 0 1 𝑡𝑦; 0 0 1] ………………………………………………… .……….. (18) [𝑝𝑥′; 𝑝𝑦′; 1] = [1 0 𝑡𝑥; 0 1 𝑡𝑦; 0 0 1][𝑝𝑥; 𝑝𝑦; 1] ………………………………………………… .……….. (19)
  • 195. Cont’d  Therefore, 𝑝𝑥′ = 𝑝𝑥 + 𝑡𝑥 and 𝑝𝑦′ = 𝑝𝑦 + 𝑡𝑦,  exactly the same as before, but we used a matrix multiplication instead of an addition.
  • 196. 2D Rotation with homogeneous coordinates  The same as before, with the exception that the rotation matrix 𝑹 has an extra row and extra column: 𝑅 = [cos 𝜃 −sin 𝜃 0; sin 𝜃 cos 𝜃 0; 0 0 1] ………………………………………………… .……….. (20) [𝑝𝑥′; 𝑝𝑦′; 1] = [cos 𝜃 −sin 𝜃 0; sin 𝜃 cos 𝜃 0; 0 0 1][𝑝𝑥; 𝑝𝑦; 1] ………………………………………………… .……….. (21)  Therefore, 𝑝𝑥′ = 𝑝𝑥 cos 𝜃 − 𝑝𝑦 sin 𝜃, and 𝑝𝑦′ = 𝑝𝑥 sin 𝜃 + 𝑝𝑦 cos 𝜃 • which is the same outcome as before.
  • 197. 2D Scaling with homogeneous coordinates  Finally, we can also express scaling using homogeneous coordinates, as shown by the following equations: 𝑆 = [𝑆𝑥 0 0; 0 𝑆𝑦 0; 0 0 1] ………………………………………………… .……….. (22) [𝑝𝑥′; 𝑝𝑦′; 1] = [𝑆𝑥 0 0; 0 𝑆𝑦 0; 0 0 1][𝑝𝑥; 𝑝𝑦; 1] ………………………………………………… .……….. (23)  Therefore, 𝑝𝑥′ = 𝑆𝑥 ⋅ 𝑝𝑥 and 𝑝𝑦′ = 𝑆𝑦 ⋅ 𝑝𝑦, exactly the same as before.
  • 198. Matrix Composition  The use of homogeneous coordinates allows us to compose a sequence of transformations into a single matrix.  This can be very useful in the graphics viewing pipeline, but it also allows us to define different types of transformation from those we have already seen: for example, a rotation about an arbitrary pivot point rather than the origin.  Using matrix composition, we can achieve this using the following sequence of transformations: • Translate from pivot point to origin • Rotate about origin • Translate from origin back to pivot point
  • 199. Cont’d  An example of this sequence of transformations is shown in the figure below.  Here we perform a rotation about the pivot point (2,2) by: • Translating by (-2,-2) to the origin, • Rotating about the origin, and • Then translating by (2,2) back to the pivot point.  Let us denote our transformations as follows: a. T1 is a matrix translation by (-2,-2) b. R is a matrix rotation by 𝜃 about the origin c. T2 is a matrix translation by (2,2)  Therefore, using homogeneous coordinates we can compose all three matrices into one composite transformation, C: 𝐶 = 𝑇2𝑅𝑇1 ………………………………………………………………………………….. (24)
  • 200. Cont’d  The composite matrix C can now be computed from the three constituent matrices T2, R and T1, • and represents a rotation about the pivot point (2,2) by θo.  Note from Eq. (24) that 𝑻𝟏 is applied first, followed by 𝑹 and then 𝑇2.  For instance, if we were to apply the three transformations to a point P the result would be 𝑃’ = 𝑇2𝑅𝑇1𝑃.
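A sketch of this pivot-point rotation in fixed-function OpenGL; theta and drawObject() are hypothetical placeholders. Because OpenGL post-multiplies the current matrix, the calls are issued in the order T2, R, T1, so that T1 is applied to the geometry first:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(2.0f, 2.0f, 0.0f);       // T2: translate back to the pivot (2,2)
glRotatef(theta, 0.0f, 0.0f, 1.0f);   // R: rotate by theta degrees about the z-axis
glTranslatef(-2.0f, -2.0f, 0.0f);     // T1: translate the pivot to the origin
drawObject();                         // hypothetical routine that draws the geometry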
  • 201. 3D Matrix Transformation  The concept of homogeneous coordinates is easily extended into 3-D:  we just introduce a fourth coordinate in addition to the • 𝑥, 𝑦 and 𝑧-coordinates.  In this section we review the forms of 3-D translation, rotation and scaling matrices using homogeneous coordinates.
  • 202. 3D Translation with homogeneous coordinates  The 3-D homogeneous-coordinate translation matrix is similar in form to the 2-D matrix, and is given by: 𝑇 = [1 0 0 𝑡𝑥; 0 1 0 𝑡𝑦; 0 0 1 𝑡𝑧; 0 0 0 1] ………………………………………….. (25)  We can see that 3-D translations are defined by three translation parameters: 𝑡𝑥, 𝑡𝑦 and 𝑡𝑧.
  • 203. Cont’d  We apply this transformation as follows: [𝑝𝑥′; 𝑝𝑦′; 𝑝𝑧′; 1] = [1 0 0 𝑡𝑥; 0 1 0 𝑡𝑦; 0 0 1 𝑡𝑧; 0 0 0 1][𝑝𝑥; 𝑝𝑦; 𝑝𝑧; 1] ………………………………………….. (26)  Therefore, 𝑝𝑥′ = 𝑝𝑥 + 𝑡𝑥, 𝑝𝑦′ = 𝑝𝑦 + 𝑡𝑦, and 𝑝𝑧′ = 𝑝𝑧 + 𝑡𝑧
  • 204. 3D Scaling with homogeneous coordinates  Similarly, 3-D scalings are defined by three scaling parameters, 𝑆𝑥, 𝑆𝑦 and 𝑆𝑧.  The matrix is: 𝑆 = [𝑆𝑥 0 0 0; 0 𝑆𝑦 0 0; 0 0 𝑆𝑧 0; 0 0 0 1]  We apply this transformation as follows: [𝑝𝑥′; 𝑝𝑦′; 𝑝𝑧′; 1] = 𝑆 [𝑝𝑥; 𝑝𝑦; 𝑝𝑧; 1]  Therefore, 𝑝𝑥′ = 𝑆𝑥 ⋅ 𝑝𝑥, 𝑝𝑦′ = 𝑆𝑦 ⋅ 𝑝𝑦, and 𝑝𝑧′ = 𝑆𝑧 ⋅ 𝑝𝑧
  • 205. 3D Rotation with homogeneous coordinates  For rotations in 3-D we have three possible axes of rotation: • the 𝑥, 𝑦 and 𝑧 axes.  Therefore the form of the rotation matrix depends on which type of rotation we want to perform.  For a rotation about the 𝑥-axis the matrix is: 𝑅𝑥 = [1 0 0 0; 0 cos 𝜃 −sin 𝜃 0; 0 sin 𝜃 cos 𝜃 0; 0 0 0 1]  For a rotation about the 𝑦-axis the matrix is: 𝑅𝑦 = [cos 𝜃 0 sin 𝜃 0; 0 1 0 0; −sin 𝜃 0 cos 𝜃 0; 0 0 0 1]
  • 206. 3D Rotation with homogeneous coordinates  For a rotation about the 𝑧-axis the matrix is: 𝑅𝑧 = [cos 𝜃 −sin 𝜃 0 0; sin 𝜃 cos 𝜃 0 0; 0 0 1 0; 0 0 0 1]
  • 207. OpenGL Rotation Function glRotatef(angle, 0.0, 0.0, 1.0); • angle : This is the angle of rotation, specified in degrees. • 0.0 : This is the x-component of the rotation axis. In this case, it's set to zero, which means there is no rotation around the x-axis. • 0.0 : This is the Y-component of the rotation axis. In this case, it's set to zero, which means there is no rotation around the Y-axis. • 1.0 : This is the z-component of the rotation axis. The value of 1.0 means that the rotation will occur around the z-axis.
  • 208. OpenGL Translation Function glTranslatef(𝑡𝑥, 𝑡𝑦, 0.0f); • 𝑡𝑥 : It specifies how much the subsequent geometry will be moved horizontally. • 𝑡𝑦 : It specifies how much the subsequent geometry will be moved vertically. • 0.0f : This is the translation along the z-axis. In this case, it's set to 0.0f because we are dealing with 2D translation.
  • 209. OpenGL Scaling Function glScalef(𝑆𝑥, 𝑆𝑦, 1.0f); // you can omit 𝑧-Scaling factor in 2D • 𝑆𝑥 : It specifies how much the subsequent geometry will be stretched or compressed horizontally. • If 𝑆𝑥 is greater than 1, the geometry will be stretched; if it's less than 1, the geometry will be compressed. • s𝑦 : It specifies how much the subsequent geometry will be stretched or compressed vertically. • If s𝑦 is greater than 1, the geometry will be stretched; if it's less than 1, the geometry will be compressed. GLfloat s𝑥= 3.0f; GLfloat s𝑦= 3.0f;
  • 210. State Management and Drawing Geometric objects Chapter Six
  • 211. Basic State Management  OpenGL maintains many states and state variables.  An object may be rendered with lighting, texturing, hidden surface removal, fog, and other states affecting its appearance.  By default, most of these states are initially inactive.  These states may be costly to activate; for example, turning on texture mapping will almost certainly slow down the speed of rendering a primitive.  However, the quality of the image will improve and look more realistic, due to the enhanced graphics capabilities.
  • 212. Cont’d  To turn on and off many of these states, use these two simple commands: • void glEnable(GLenum cap); • void glDisable(GLenum cap);  glEnable() turns on a capability, and glDisable() turns it off.  The glEnable function is used to enable specific OpenGL capabilities.  It takes a single argument ‘cap’ which is an enumeration (GLenum) representing the capability to be enabled.
  • 213. Examples  // Enable depth testing for accurate rendering of 3D scenes • glEnable(GL_DEPTH_TEST);  // Enable blending for transparency • glEnable(GL_BLEND);  // Enable face culling for efficient rendering of closed surfaces • glEnable(GL_CULL_FACE); • Face culling is a technique used to improve rendering performance by omitting the drawing of polygons that are facing away from the viewer.  // Enable Fog • glEnable(GL_FOG); • By enabling fog, you simulate effects like foggy weather or mist, and distant objects appear less distinct.
  • 214. Displaying Point, Lines, and Polygons  By default, a point is drawn as a single pixel on the screen,  A line is drawn solid and one pixel wide, and  polygons are drawn solidly filled in.
  • 215. Points Details  To control the size of a rendered point, use glPointSize() and supply the desired size in pixels as the argument.  void glPointSize(GLfloat size); • Sets the width in pixels for rendered points; size must be greater than 0.0 and by default is 1.0. • if the width is 1.0, the square is 1 pixel by 1 pixel; • if the width is 2.0, the square is 2 pixels by 2 pixels, and so on.
  • 216. Lines Details  With OpenGL, you can specify lines with different widths and lines that are stippled in various ways : • dotted, • dashed, • drawn with alternating dots and dashes, and so on. Wide Lines  void glLineWidth(GLfloat width);  Sets the width in pixels for rendered lines; width must be greater than 0.0 and by default is 1.0.
  • 217. Cont’d Stippled Lines  To make stippled (dotted or dashed) lines,  You use the command glLineStipple() to define the stipple pattern, and then you enable line stippling with glEnable(). glEnable(GL_LINE_STIPPLE); glLineStipple(1, 0x3F07);
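The two arguments of glLineStipple() are a repeat factor and a 16-bit pattern; the pattern is consumed starting from its low-order bit as the line is drawn, with each bit repeated factor times. A commented sketch:

glEnable(GL_LINE_STIPPLE);
glLineStipple(1, 0x3F07);   // factor 1; 0x3F07 = 0011111100000111 in binary,
                            // so the line shows 3 pixels on, 5 off, 6 on, 2 off
glBegin(GL_LINES);
  glVertex2f(-0.8f, 0.0f);
  glVertex2f( 0.8f, 0.0f);
glEnd();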
  • 219. Polygons Details  Polygons are typically drawn by filling in all the pixels enclosed within the boundary,  but you can also draw them as outlined polygons or simply as points at the vertices.  A filled polygon might be solidly filled or stippled with a certain pattern. Polygons as Points, Outlines, or Solids  A polygon has two faces: • Front and • Back  and might be rendered differently depending on which side is facing the viewer.
  • 220. Cont’d  By default, both front and back faces are drawn in the same way.  To change this, or to draw only outlines or vertices, use glPolygonMode(). glPolygonMode(GL_FRONT, GL_FILL); glPolygonMode(GL_BACK, GL_LINE); Stippling Polygons glEnable(GL_POLYGON_STIPPLE);
  • 221. Normal Vectors  A normal vector (or normal, for short) is a vector that points in a direction that’s perpendicular to a surface.  An object’s normal vectors define the orientation of its surface in space in particular, its orientation relative to light sources.
  • 222. Cont’d  Normal vectors are essential for simulating realistic lighting and shading effects in computer graphics. glBegin(GL_LINES); glVertex3f(0.0, 0.0, 0.0); glVertex3f(2.0, 0.0, 0.0); // a line of length 2 along the x-axis glVertex3f(0.0, 0.0, 0.0); glVertex3f(0.0, 0.0, 3.0); // a line of length 3 along the z-axis glEnd();
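A sketch of how a normal is typically supplied in immediate-mode OpenGL: glNormal3f() sets the current normal, which is then attached to every following vertex until it is changed. Here a quad lying in the xy-plane is given a normal pointing along +z:

glBegin(GL_QUADS);
  glNormal3f(0.0f, 0.0f, 1.0f);    // perpendicular to the quad's surface
  glVertex3f(-1.0f, -1.0f, 0.0f);
  glVertex3f( 1.0f, -1.0f, 0.0f);
  glVertex3f( 1.0f,  1.0f, 0.0f);
  glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();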
  • 223. Vector Arrays  You may have noticed that OpenGL requires many function calls to render geometric primitives.  Drawing a 100-sided polygon requires 102 function calls: • one call to glBegin(), one call for each of the vertices, and a final call to glEnd(). • Generally, to draw an 𝒏-sided polygon, 𝒏 + 𝟐 calls are needed.  In other code, additional information (polygon boundary edge flags or surface normals) adds further function calls for each vertex.  This can quickly double or triple the number of function calls required for one geometric object.  For some systems, function calls have a great deal of overhead and can hinder performance.
  • 224. Cont’d  OpenGL has vertex array routines that allow you to specify a lot of vertex-related data with just a few arrays and to access that data with equally few function calls.  Using vertex array routines, all 𝒏 vertices in an 𝒏-sided polygon can be put into one array and drawn with one function call.  Arranging data in vertex arrays may increase the performance of your application.  Also, using vertex arrays may allow non-redundant processing of shared vertices (vertex sharing is not supported on all implementations of OpenGL).
  • 225. Cont’d  There are three steps to use vertex arrays to render geometry. i. Activate (enable) up to six arrays, each to store a different type of data: vertex coordinates, RGBA colors, color indices, surface normals, texture coordinates, or polygon edge flags. ii. Put data into the array or arrays. The arrays are accessed by the addresses of (that is, pointers to) their memory locations. iii. Draw geometry with the data. OpenGL obtains the data from all activated arrays by dereferencing the pointers.
  • 226. Cont’d  Step 1: Enabling Arrays • The first step is to call glEnableClientState() with an enumerated parameter, which activates the chosen array. glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_VERTEX_ARRAY);  Step 2: Specifying Data for the Arrays Specify the vertex array and color array: static GLint vertices[] = {25, 25, 100, 325, 175, 25}; static GLfloat colors[] = {1.0, 0.2, 0.2, 0.2, 0.2, 1.0, 0.8, 1.0, 0.2};
  • 227. Cont’d  Step 3: Setting up the pointers glColorPointer(3, GL_FLOAT, 0, colors); // components per color (RGB), data type, stride, pointer to the color array glVertexPointer(2, GL_INT, 0, vertices); // components per vertex (2D 𝑥, 𝑦), data type, stride, pointer to the vertex array
  • 228. Cont’d  Step 4: Draw the polygon glDrawArrays(GL_POLYGON, 0, 3); // primitive type, starting index, number of vertices  Step 5: Disable client states glDisableClientState(GL_COLOR_ARRAY); glDisableClientState(GL_VERTEX_ARRAY);  Step 6: Swap buffers glutSwapBuffers(); // Swap the front and back buffers after performing rendering operations in the back buffer  A complete sketch of these steps is shown below.
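A sketch putting all of the steps together in one display callback, assuming a double-buffered GLUT window has already been created (step 2 is the two static array definitions):

static GLint   vertices[] = {25, 25, 100, 325, 175, 25};   /* three (x, y) pairs */
static GLfloat colors[]   = {1.0f, 0.2f, 0.2f,
                             0.2f, 0.2f, 1.0f,
                             0.8f, 1.0f, 0.2f};            /* three RGB triples */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnableClientState(GL_COLOR_ARRAY);       /* step 1: enable the arrays */
    glEnableClientState(GL_VERTEX_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, colors);    /* step 3: set the pointers  */
    glVertexPointer(2, GL_INT, 0, vertices);
    glDrawArrays(GL_POLYGON, 0, 3);            /* step 4: draw 3 vertices   */
    glDisableClientState(GL_COLOR_ARRAY);      /* step 5: disable arrays    */
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();                         /* step 6: swap the buffers  */
}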
  • 230. Introduction  A 3D object, or three-dimensional object, is a physical or digital entity that exists in three-dimensional space.  In computer graphics and geometry, a 3D object is typically described using coordinates in a three-dimensional Cartesian coordinate system.  These objects have length, width, and height, providing a more realistic representation compared to 2D objects.
  • 231. Modeling Using Polygon  3D modeling using polygons is a common and widely used approach in computer graphics.  In this method, 3D objects are represented as surfaces made up of interconnected polygons, typically triangles or quads.
  • 235. Creating Polygon Meshes  Creating representational polygon meshes involves techniques that enable the modeling of 3D objects with polygons in a way that accurately represents the intended shapes.
  • 236. Cont’d  Polygonal meshes can be represented using tables of data: • Geometric tables: o These store information about the geometry of the polygonal mesh, o i.e. what are the shapes/positions of the polygons?  • Attribute tables: o These store information about the appearance of the polygonal mesh, o i.e. what colour is it, is it opaque or transparent, etc. o This information can be specified for each polygon individually or for the mesh as a whole.
  • 238. Non Polygon Representations  In computer graphics, non-polygonal representations are alternative methods for representing 3D objects that don't rely on polygonal meshes.  These representations often provide specific advantages in certain applications.  Here are some common non-polygonal representations: • Voxel Representation • Point Clouds • Skeleton or Wireframe Models • Blobby Models
  • 239. Voxel Representation  Voxel (volume element) grids divide space into small 3D cubes.  Each voxel stores information about the object's presence or properties.  A voxel is a unit of graphic information that defines a point in three-dimensional space.
  • 240. Point Cloud Representation  A collection of individual points in 3D space, where each point represents a specific position.  There's no connectivity information between points.
  • 241. Skeleton or Wireframe Model  Represent the 3D object using a skeletal structure composed of lines or curves.  The structure defines the overall shape.
  • 243. Color Models PROPERTIES OF LIGHT  What we perceive as 'light', or different colors, is a narrow frequency band within the electromagnetic spectrum.  A few of the other frequency bands within this spectrum are called radio waves, microwaves, infrared waves, and X-rays.  Each frequency value within the visible band corresponds to a distinct color.  At the low-frequency end is a red color (4.3 × 10¹⁴ hertz), and the highest frequency we can see is a violet color (7.5 × 10¹⁴ hertz).
  • 244. Cont’d  We perceive EM radiation with in the 400-700 nm range, the tiny piece of spectrum between infra-red and ultraviolet.
  • 245. Cont’d  The purpose of a color model is to facilitate the specification of colors in some standard generally accepted way.  Each industry that uses color employs the most suitable color model.
  • 246. RGB Color Models  In the RGB model, each color appears as a combination of red, green, and blue.  This model is called additive, and the colors are called primary colors.  The primary colors can be added to produce the secondary colors of light. Magenta= Red + Blue Cyan = Green + Blue Yellow = Red + Green
  • 247. Cont’d  The color subspace of interest is a cube.  RGB values are normalized to the range 0 to 1, in which the RGB primaries are at three corners;  cyan, magenta, and yellow are at three other corners;  black is at the origin; and white is at the corner farthest from the origin.
  • 248. Cont’d White = W ⇔ (r, g, b) = (1, 1, 1) Black = K ⇔ (r, g, b) = (0, 0, 0) Red = R ⇔ (r, g, b) = (1, 0, 0) Green = G ⇔ (r, g, b) = (0, 1, 0) Blue = B ⇔ (r, g, b) = (0, 0, 1) Cyan = C ⇔ (r, g, b) = (0, 1, 1) Magenta = M ⇔ (r, g, b) = (1, 0,1) Yellow = Y ⇔ (r, g, b) = (1, 1, 0)
  • 249. CIE color Space  Color is a human perception (a percept).  Color is not a physical property.  But it is related to the light spectrum of a stimulus.  The CIE XYZ color space is a fundamental color space defined by the International Commission on Illumination (CIE).  It is designed to be a linear model that encompasses all perceivable colors.  The CIE XYZ color space is based on the concept of tristimulus values, which represent the amounts of three imaginary primaries: X, Y, and Z.  These values are derived from the spectral power distributions of light.
  • 250. Cont’d Tristimulus Values: 1) 𝑋, 𝑌, 𝑍 Components: • X (Red-Green Axis): Represents the amount of energy in the red-green axis. • Y (Luminance): Represents the brightness or luminance. • Z (Blue-Yellow Axis): Represents the amount of energy in the blue-yellow axis. 2) Normalization (chromaticity coordinates, the color perceived by the human eye): 𝑥 = 𝑋 / (𝑋 + 𝑌 + 𝑍) 𝑦 = 𝑌 / (𝑋 + 𝑌 + 𝑍)
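A minimal C sketch of this normalization, mapping tristimulus values (X, Y, Z) to chromaticity coordinates (x, y):

void chromaticity(double X, double Y, double Z, double *x, double *y)
{
    double sum = X + Y + Z;
    *x = X / sum;    /* fraction of the stimulus contributed by X */
    *y = Y / sum;    /* fraction contributed by Y; z = 1 - x - y  */
}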
  • 252. Image Format and their application  An image format is a standardized way of representing and storing digital images.  It defines the structure and encoding of the data that makes up an image,  specifying how the visual information is organized and stored in a file.  Different image formats have distinct characteristics, compression methods, color representations, and features that make them suitable for specific use cases.
  • 253. JPEG (Joint Photographic Experts Group)  JPEG is a lossy compression format, meaning it achieves high compression ratios by discarding some image data.  This can result in a reduction in image quality, especially at higher compression levels.  JPEG supports 24-bit color, allowing it to represent millions of colors. Applications: Photography Web Images
  • 254. PNG (Portable Network Graphics)  PNG uses lossless compression, preserving all image data without loss of quality.  It is suitable for images that require high fidelity.  PNG supports an alpha channel, allowing for transparency.  This makes PNG ideal for images with a need for a transparent background.  PNG supports 24-bit color as well as 8-bit color with an alpha channel, providing a wide range of color options. Applications: Graphics with Transparency Images with Text
  • 255. GIF (Graphics Interchange Format)  GIF uses lossless compression but is less effective than PNG in terms of compression ratios.  GIF supports up to 8 bits per pixel, limiting the number of colors to 256.  This makes it less suitable for complex photographic images but sufficient for simple graphics.  GIF supports animation by combining multiple frames into a single file.  GIF allows a single color to be fully transparent, which can be useful for creating simple images with transparent regions. Applications: Simple Graphics: Suitable for icons, logos, and other simple graphics. Basic Animations: Used for creating simple animated images.
  • 256. Viewing: A Local Illumination Model Chapter Nine
  • 257. Illumination Model  An illumination model, also known as a lighting model or shading model,  It is a mathematical representation or algorithm used in computer graphics to simulate how light interacts with surfaces in a virtual 3D environment.  Illumination models are used in computer graphics for applications such as 3D rendering, computer-aided design, virtual reality, and animation.
  • 258. 3D Camera model  2D and 3D refer to the actual dimensions in a computer's workspace.  2D is 'flat', using the X & Y (horizontal and vertical) axis,  3D adds the '𝑧' dimension.  This third dimension allows for rotation and depth.
  • 259. Cont’d  The synthetic camera is a programmer’s model for specifying how a 3D scene is projected onto the screen. Figure: pinhole camera model
  • 260. Orthographic projection  Orthographic projection is a type of projection where parallel lines remain parallel after projection.  It is often described as a "parallel projection."  Objects are projected onto the image plane along lines that are parallel to the viewing direction.  Objects appear the same size regardless of their distance from the viewer.  Commonly used in technical drawings, architectural plans, and certain types of 3D modeling where precise measurements are essential.
  • 261. Cont’d  glOrtho(X-min, X-max, Y-min, Y-max, Near, Far); • X-min: the left (minimum x) clipping plane • X-max: the right (maximum x) clipping plane • Y-min: the bottom (minimum y) clipping plane • Y-max: the top (maximum y) clipping plane • Near: the distance to the nearest clipping plane from the viewpoint • Far: the distance to the farthest clipping plane from the viewpoint  glOrtho(-10, 10, -10, 10, -1, 1);
  • 263. Perspective projection  Objects that are farther from the viewer appear smaller.  This effect is called foreshortening: objects closer to the viewer appear larger than objects farther away.  Widely used in computer graphics, video games, and virtual environments to create a realistic sense of depth and distance.  In orthographic projection, objects have consistent sizes regardless of their depth;  in perspective projection, sizes vary based on distance from the viewer.  A setup sketch is shown below.
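A sketch of setting up a perspective projection with the GLU helper gluPerspective(); the vertical field of view (in degrees), aspect ratio, and near/far distances shown here are example values:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 800.0 / 600.0, 1.0, 100.0);   // fovy, aspect, near, far
glMatrixMode(GL_MODELVIEW);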