The A-buffer method is an extension of the depth-buffer method that adds anti-aliasing and transparency. It works by building a pixel mask for each polygon fragment and determining the visible areas to average color values. The key data structure is the accumulation buffer, which stores color, opacity, depth, coverage, and other data for each pixel. It operates similarly to a depth buffer but also considers opacity when determining the final pixel color.
This document describes the midpoint circle algorithm for drawing circles given a radius and center point. It works by starting at an initial point on the circumference, calculating a decision parameter, and then iteratively determining the next point by testing if the decision parameter is positive or negative and updating the parameter according to the point's coordinates. It also explains how to determine additional points in the other octants and shift the calculated pixel positions to be centered on the given center point.
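As a concrete illustration of the procedure summarized above, here is a minimal C sketch of the midpoint circle algorithm; setPixel is a hypothetical plotting routine, and the integer decision-parameter updates follow the standard formulation.

    /* Hypothetical plotting routine: draw one pixel at (x, y). */
    void setPixel(int x, int y);

    /* Plot the eight symmetric points, shifted to the circle center. */
    static void plotOctants(int xc, int yc, int x, int y) {
        setPixel(xc + x, yc + y); setPixel(xc - x, yc + y);
        setPixel(xc + x, yc - y); setPixel(xc - x, yc - y);
        setPixel(xc + y, yc + x); setPixel(xc - y, yc + x);
        setPixel(xc + y, yc - x); setPixel(xc - y, yc - x);
    }

    void midpointCircle(int xc, int yc, int r) {
        int x = 0, y = r;
        int p = 1 - r;                 /* initial decision parameter */
        plotOctants(xc, yc, x, y);
        while (x < y) {
            x++;
            if (p < 0) {               /* midpoint inside: keep y */
                p += 2 * x + 1;
            } else {                   /* midpoint outside: step y down */
                y--;
                p += 2 * (x - y) + 1;
            }
            plotOctants(xc, yc, x, y);
        }
    }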
The document discusses several methods for visible surface detection, or hidden surface removal, in 3D computer graphics, including object space and image space methods. Object space methods determine visibility in 3D coordinates and include depth sorting and binary space partitioning (BSP) trees, while image space methods determine visibility on a per-pixel basis and include the depth-buffer or z-buffer method and ray casting. The depth-buffer method uses two buffers, a frame buffer and a depth buffer, to find the surface closest to the viewer at each pixel; surfaces can be processed in any order. BSP trees recursively subdivide space with splitting planes to give a rendering order that correctly draws objects from back to front.
Clipping algorithms identify portions of an image that are inside or outside a specified clipping region. They are used to extract a defined scene for viewing, identify visible surfaces, and perform other drawing and display operations. Common types of clipping include point, line, polygon, and curve clipping. Algorithms like Cohen-Sutherland and mid-point subdivision use codes and binary subdivision to efficiently determine which image portions are visible and should be displayed.
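The region-code idea behind Cohen-Sutherland can be sketched briefly. The following C fragment is illustrative only; the window bounds are passed as assumed parameters, and the two helper predicates show the trivial accept/reject tests that make the algorithm efficient.

    /* Cohen-Sutherland region code: one bit per side of the clip window. */
    enum { INSIDE = 0, LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8 };

    int outCode(double x, double y,
                double xmin, double ymin, double xmax, double ymax) {
        int code = INSIDE;
        if (x < xmin)      code |= LEFT;
        else if (x > xmax) code |= RIGHT;
        if (y < ymin)      code |= BOTTOM;
        else if (y > ymax) code |= TOP;
        return code;
    }

    /* Trivial accept: both endpoint codes are 0.
       Trivial reject: the codes share a set bit (same outside region). */
    int triviallyAccepted(int c1, int c2) { return (c1 | c2) == 0; }
    int triviallyRejected(int c1, int c2) { return (c1 & c2) != 0; }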
This document summarizes computer graphics and display devices. It discusses that computer graphics involves displaying and manipulating images and data using a computer. A typical graphics system includes a host computer, display devices like monitors, and input devices like keyboards and mice. Common applications of computer graphics include GUIs, charts, CAD/CAM, maps, multimedia, and more. Display technologies discussed include CRT monitors, LCD panels, and other devices. Key aspects of CRT monitors like refresh rate, resolution, and bandwidth are also summarized.
1. The document discusses the Digital Differential Analyzer (DDA) line drawing algorithm, which is used to approximate and draw line segments on a discrete pixel grid.
2. The DDA algorithm works by calculating the slope of the line between two endpoints, then incrementally stepping from pixel to pixel and calculating the corresponding y-value.
3. The algorithm handles horizontal, vertical, and diagonal lines by calculating the change in x and y between endpoints and using those values to determine how much to increment x and y at each step (a minimal C sketch follows this list).
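Here is a hedged C sketch of the DDA procedure described above; setPixel is a hypothetical plotting routine. Stepping along the axis of greatest change keeps the line gap-free.

    #include <math.h>
    #include <stdlib.h>

    void setPixel(int x, int y);   /* hypothetical plotting routine */

    void ddaLine(int x0, int y0, int x1, int y1) {
        int dx = x1 - x0, dy = y1 - y0;
        int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
        if (steps == 0) { setPixel(x0, y0); return; }   /* degenerate line */
        float xInc = dx / (float)steps;   /* per-step increments */
        float yInc = dy / (float)steps;
        float x = (float)x0, y = (float)y0;
        for (int k = 0; k <= steps; k++) {
            setPixel((int)lroundf(x), (int)lroundf(y));
            x += xInc;
            y += yInc;
        }
    }

Note the floating-point increments: this is exactly the cost the summary mentions, and why integer-only algorithms such as Bresenham's are usually preferred.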
There are two main categories of visible surface detection algorithms: object-space methods that compare objects in a 3D scene, and image-space methods that determine visibility on a per-pixel basis. The painter's algorithm sorts polygons by depth and draws more distant polygons before closer ones, so nearer polygons overwrite the areas they cover. The binary space partitioning (BSP) algorithm recursively subdivides space to obtain a correct back-to-front drawing order from any viewpoint. Back-face detection identifies whether a surface's normal vector points towards or away from the viewpoint.
Raster animation is created by displaying a sequence of raster images rapidly to create the illusion of motion. Each raster image is stored as a bitmap in system memory and contains information about the individual pixels that make up the image. By refreshing the frame buffer with a new bitmap, raster animation is created. There are two main approaches: traditional animation using sprite sheets and modern animation driven by programming languages. Raster animation provides more realistic images than vector animation but requires more memory and processing power. It is used for applications like 3D/2D animation, games, and movies.
The depth buffer method is used to determine visibility in 3D graphics by testing the depth (z-coordinate) of each surface to determine the closest visible surface. It involves using two buffers - a depth buffer to store the depth values and a frame buffer to store color values. For each pixel, the depth value is calculated and compared to the existing value in the depth buffer, and if closer the color and depth values are updated in the respective buffers. This method is implemented efficiently in hardware and processes surfaces one at a time in any order.
This document contains information about 3D display methods in computer graphics presented by a group of 5 students. It discusses parallel projection, perspective projection, depth cueing, visible line identification, and surface rendering techniques. The goal is to generate realistic 3D images and correctly display depth relationships between objects.
Visible surface detection in computer graphics
Visible surface detection aims to determine which parts of 3D objects are visible and which are obscured. There are two main approaches: object space methods compare objects' positions to determine visibility, while image space methods process surfaces one pixel at a time to determine visibility based on depth. Depth-buffer and A-buffer methods are common image space techniques that use depth testing to handle occlusion.
The Liang-Barsky algorithm is a line clipping algorithm. It is more efficient than the Cohen-Sutherland line clipping algorithm and can be extended to 3-dimensional clipping, and it is among the fastest parametric line-clipping algorithms. The following concepts are used in this clipping (a sketch of the per-boundary test follows the list):
The parametric equation of the line.
The inequalities describing the range of the clipping window, used to determine the intersections between the line and the clip window.
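As a rough illustration of these two concepts, here is a hedged C sketch of the per-boundary test conventionally used in Liang-Barsky clipping (clipTest is a conventional name, not from the summarized document). For a line P(u) = P0 + u·d with u in [0, 1], each window inequality is rewritten as u·p ≤ q; the caller invokes this four times with (p, q) = (−dx, x0 − xmin), (dx, xmax − x0), (−dy, y0 − ymin), (dy, ymax − y0), starting from u1 = 0 and u2 = 1.

    /* One boundary test: updates the entering (u1) / leaving (u2)
       parameters; returns 0 when the line can be rejected. */
    int clipTest(double p, double q, double *u1, double *u2) {
        if (p < 0.0) {                 /* line runs outside -> inside */
            double r = q / p;
            if (r > *u2) return 0;     /* enters after it leaves: reject */
            if (r > *u1) *u1 = r;
        } else if (p > 0.0) {          /* line runs inside -> outside */
            double r = q / p;
            if (r < *u1) return 0;
            if (r < *u2) *u2 = r;
        } else if (q < 0.0) {
            return 0;                  /* parallel to boundary and outside */
        }
        return 1;
    }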
The document discusses window to viewport transformation. It defines a window as a world coordinate area selected for display and a viewport as a rectangular region of the screen selected for displaying objects. Window to viewport mapping requires transforming coordinates from the window to the viewport. This involves translation, scaling and another translation. Steps include translating the window to the origin, resizing it based on the viewport size, and translating it to the viewport position. An example transforms a sample window to a viewport through these three steps.
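The translate-scale-translate sequence described above collapses into one linear mapping per axis; the following C sketch shows it, with all names illustrative.

    /* Map a world point (xw, yw) in window [xwmin,xwmax] x [ywmin,ywmax]
       to viewport [xvmin,xvmax] x [yvmin,yvmax]. */
    void windowToViewport(double xw, double yw,
                          double xwmin, double xwmax,
                          double ywmin, double ywmax,
                          double xvmin, double xvmax,
                          double yvmin, double yvmax,
                          double *xv, double *yv) {
        double sx = (xvmax - xvmin) / (xwmax - xwmin);   /* scale factors */
        double sy = (yvmax - yvmin) / (ywmax - ywmin);
        /* translate to window origin, scale, translate to viewport origin */
        *xv = xvmin + (xw - xwmin) * sx;
        *yv = yvmin + (yw - ywmin) * sy;
    }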
This document discusses the Digital Differential Analyzer (DDA) algorithm, which is a basic line drawing algorithm used in computer graphics. The DDA algorithm uses slope-intercept form (y=mx+b) to incrementally calculate pixel positions along the line between two points. It handles cases where the slope is less than or greater than 1 by incrementing either the x or y coordinate by 1 at each step. The DDA algorithm is simple to implement but requires floating point calculations and has orientation dependency issues.
Projection is the transformation of a 3D object into a 2D plane by mapping points from the 3D object to the projection plane. There are two main types of projection: perspective projection and parallel projection. Perspective projection uses lines that converge to a single point, while parallel projection uses parallel lines. Perspective projection includes one-point, two-point, and three-point perspectives. Parallel projection includes orthographic projection, which projects lines perpendicular to the plane, and oblique projection, where lines are parallel but not perpendicular to the plane.
Anti-aliasing is a technique used to reduce aliasing, which makes curved or slanted lines appear jagged when displayed on a lower resolution output device like a monitor. Aliasing occurs because the device lacks enough resolution to smoothly represent curved lines. Anti-aliasing works by adding subtle color changes around lines, which causes jagged edges to blur together when viewed from a distance. There are several anti-aliasing techniques, including increasing the display resolution, area sampling to shade pixels based on the area covered by thickened lines, and post-filtering by generating a higher resolution virtual image and averaging it down.
The document discusses different methods for 3D display and projection. It describes parallel projection, where lines of sight are parallel, and perspective projection, where lines converge at vanishing points. The key types of projection are outlined as parallel (orthographic and oblique) and perspective. Orthographic projection uses perpendicular lines, while oblique projection uses arbitrary angles. Perspective projection creates realistic size variation with distance and can have one, two, or three vanishing points.
OpenCV is an open-source library for computer vision and machine learning. The document discusses OpenCV's features including its modular structure, common computer vision algorithms like Canny edge detection, Hough transform, and cascade classifiers. Code examples are provided to demonstrate how to implement these algorithms using OpenCV functions and data types.
This document discusses various two-dimensional geometric transformations including translations, rotations, scaling, reflections, shears, and composite transformations. Translations move objects without deformation using a translation vector. Rotations rotate objects around a fixed point or pivot point. Scaling transformations enlarge or shrink objects using scaling factors. Reflections produce a mirror image of an object across an axis. Shearing slants an object along an axis. Composite transformations combine multiple basic transformations using matrix multiplication.
This includes different line-drawing algorithms, circle and ellipse generating algorithms, filled-area primitives, flood-fill and boundary-fill algorithms, and raster scan fill approaches.
The document discusses concepts related to basic illumination models. It covers key components like ambient light, diffuse illumination, and specular reflection that contribute to how objects are illuminated. It notes that illumination models try to approximate real world lighting in a realistic but not perfectly accurate way. The document also discusses challenges like accounting for all light rays reflected between nearby objects and having multiple light sources and viewing directions in a scene.
The document discusses algorithms for drawing circles and filling polygons on a computer screen. It covers the mid-point circle algorithm for determining pixel positions on a circle, as well as boundary filling and flood filling algorithms for coloring the interior of polygon shapes. The mid-point circle algorithm uses a decision parameter to iteratively calculate pixel coordinates on the circle path. Filling algorithms like boundary fill use recursion to color neighboring pixels of the same color as the initially selected point.
Computer Graphics - Hidden Line Removal Algorithm
This document discusses various algorithms for hidden surface removal when rendering 3D scenes, including the z-buffer method, scan-line method, spanning scan-line method, floating horizon method, and discrete data method. The z-buffer method uses a depth buffer to track the closest surface at each pixel. The scan-line method only considers visible surfaces within each scan line. The floating horizon method finds the visible portions of curves using a horizon array. The discrete data method handles surfaces defined by discrete points rather than mathematical equations.
This document discusses different algorithms for filling polygons in computer graphics, including the scan-line fill algorithm, boundary fill algorithm, and flood fill algorithm. The scan-line fill algorithm involves horizontally scanning a polygon from bottom to top and identifying edge intersections with the scan line. The boundary fill algorithm starts at an interior point and recursively fills outward until the boundary color is encountered. The flood fill algorithm replaces all pixels of a specified interior color with a fill color within connected regions. Pseudocode and examples are provided for each algorithm.
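The recursive boundary fill mentioned above can be sketched in a few lines of C; getPixel and setPixelColor are hypothetical frame-buffer accessors. Deep recursion can overflow the stack on large regions, which is why production fills usually switch to an explicit stack or span-based filling.

    int  getPixel(int x, int y);                 /* hypothetical accessors */
    void setPixelColor(int x, int y, int color);

    /* 4-connected boundary fill: recolor until the boundary color is hit. */
    void boundaryFill4(int x, int y, int fill, int boundary) {
        int c = getPixel(x, y);
        if (c != boundary && c != fill) {
            setPixelColor(x, y, fill);
            boundaryFill4(x + 1, y, fill, boundary);
            boundaryFill4(x - 1, y, fill, boundary);
            boundaryFill4(x, y + 1, fill, boundary);
            boundaryFill4(x, y - 1, fill, boundary);
        }
    }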
The document discusses viewport transformations which map objects from one coordinate system to another, such as mapping from world coordinates to screen coordinates. It explains that viewport transformations are used for viewing transformations like zooming and moving drawings. The document also describes how a matrix M can be determined to map a window defined in world coordinates to a viewport defined in screen pixel coordinates.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
Surface rendering means a procedure for applying a lighting model to obtain pixel intensities for all the projected surface positions in a scene.
A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Surface rendering can be performed by applying the illumination model to every visible surface point.
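To make the per-point calculation concrete, here is a reduced C sketch of the kind of intensity an illumination model returns, keeping only ambient and Lambertian diffuse terms (a cut-down form of the full Phong model); the single-channel simplification and all names are assumptions for illustration.

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Ambient + diffuse intensity at one surface point.
       N and L are assumed unit vectors (surface normal, direction to the
       light); ka/kd are material coefficients, Ia/Il light intensities. */
    double illuminate(Vec3 N, Vec3 L,
                      double ka, double kd, double Ia, double Il) {
        double ndotl = dot(N, L);
        if (ndotl < 0.0) ndotl = 0.0;      /* light behind the surface */
        return ka * Ia + kd * Il * ndotl;
    }

A surface-rendering algorithm would evaluate something like this (plus specular terms) at every projected pixel position of every visible surface.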
Gouraud shading and Phong shading are two common techniques for interpolating shading across polygon surfaces in 3D graphics. Gouraud shading linearly interpolates intensities across polygon surfaces, improving on constant shading but still resulting in Mach bands or streaks. Phong shading interpolates normal vectors and applies lighting models at each surface point, producing more realistic highlights but requiring more computation than Gouraud shading. Fast Phong shading approximates calculations to speed up rendering with Phong shading at the cost of some accuracy.
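The linear interpolation at the heart of Gouraud shading can be sketched for a single scan line; shadeSpan and setPixelIntensity are illustrative names, not from the summarized document.

    void setPixelIntensity(int x, int y, double intensity);  /* hypothetical */

    /* Gouraud shading: interpolate intensity linearly across one scan
       line, from the left edge intersection (xl, il) to the right (xr, ir). */
    void shadeSpan(int y, int xl, double il, int xr, double ir) {
        double step = (xr > xl) ? (ir - il) / (xr - xl) : 0.0;
        double i = il;
        for (int x = xl; x <= xr; x++) {
            setPixelIntensity(x, y, i);
            i += step;
        }
    }

Phong shading would instead interpolate the normal vector across the span and run the full lighting model per pixel, which is why it costs more but captures highlights that Gouraud misses.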
This document discusses illumination and lighting schemes. It begins with an introduction to artificial lighting and important terminology like luminous flux and illumination. It then describes different lighting schemes including direct, semi-direct, semi-indirect and indirect lighting. It also discusses the design of indoor lighting schemes and different types of lamps such as incandescent, halogen, discharge and fluorescent lamps. Finally, it covers topics like industrial lighting, street lighting and references used.
This document discusses illumination and lighting design. It begins by outlining the objectives of studying illumination for architects, including providing proper ambient lighting, safety, and energy efficiency. It then defines key lighting terms like illuminance, luminous intensity, and luminance. The document covers the inverse square law and Lambert's cosine law governing light distribution. It describes the history of lighting technologies from candles to modern LEDs. It also discusses light sources like fluorescent lamps and the types of lighting schemes and lamps used in various applications.
This document is a lecture outline for an introduction to computer graphics course. It outlines the course information and administrative details, provides an overview of topics to be covered including graphics systems, techniques, operations and a mathematical review. It also defines computer graphics, discusses image processing and analysis, and explains why computer graphics is an important field due to advances in computing power, visualization, and interaction capabilities.
This document provides an overview of various types of electrical lighting sources and illumination concepts. It discusses the basic terms used in illumination like luminous flux, lumen, candle power, and inverse square and Lambert's cosine laws. It then describes different electrical light sources including incandescent, fluorescent, mercury vapor, sodium vapor, neon and halogen lamps. For each light source, it explains the working principle, construction details, advantages and applications. The document serves as a useful reference for understanding various electrical lighting techniques and concepts of illumination.
This document discusses lighting and material properties in OpenGL. It describes how to define light sources, including point lights, directional lights, and spot lights. It also covers material properties like ambient, diffuse, specular colors and shininess. The document provides code examples for setting up different light types and moving a light source. It outlines tasks to add a new point light, make it hover, and set the teapot material to ruby properties.
You can find the video recording here: https://www.youtube.com/watch?v=lYoHh19RNy4
Heiko Behrens and Matthew Hungerford present advanced programming techniques for Pebble. This presentation focused on graphics techniques including run-time dithering, offline dithering, pixel manipulations, and frame-buffer drawing.
This talk featured the Amiga Boing Ball dithering demo.
This document summarizes a research paper that presents a new parallel collision detection algorithm. The algorithm solves the all-pairs collision detection problem using parallel processing with a theoretical performance of O((n log n)/k) run time. It was implemented in a collision detection system with a client-server architecture designed to utilize both shared-memory and clustered computers. Experimental results demonstrated the algorithm's performance matches its calculated theoretical performance.
Rendering involves several steps: identifying visible surfaces, projecting surfaces onto the viewing plane, shading surfaces appropriately, and rasterizing. Rendering can be real-time, as in games, or non-real-time, as in movies. Real-time rendering requires tradeoffs between photorealism and speed, while non-real-time rendering can spend more time per frame. Lighting is an important part of rendering, as the interaction of light with surfaces through illumination, reflection, shading, and shadows affects realism.
The document discusses several methods for hidden surface removal in 3D computer graphics:
1) Back-face detection uses polygon normals and viewing direction to determine if a polygon is facing away from the viewer.
2) Depth-buffer methods like the z-buffer algorithm use a depth buffer to store the depth value of the visible surface at each pixel location.
3) Scan-line methods process all polygons intersecting a scan-line at once before moving to the next scan-line.
Discusses secret inventions -> patentability
The principle of secrecy -> build working models -> introduces how to review patentability -> including the 4 legal requirements -> statutory class (focus on class 1: process)
1) Social TV began in 2001 with prototypes allowing friends to connect while viewing, and has evolved to include content sharing and recommendations, communication features, and status updates.
2) Major interactive Social TV activities include content selection and sharing with friends, direct synchronous and asynchronous communication around TV viewing, community building through comments and ratings, and updating friends on what one is currently watching.
3) Challenges include determining the right devices, modality of interaction, strength of social ties, and handling synchronization between solitary and shared viewing experiences.
Lighting provides illumination through both natural and artificial means. Good lighting has several key requirements including sufficient brightness distributed evenly without glare or shadows. Different types of lighting serve specific purposes such as task, accent, and general lighting. Lighting is measured using various metrics like luminous intensity, flux, illuminance and luminance. Artificial lighting aims to provide optimal illumination while minimizing energy consumption and potential health impacts like light pollution.
Hybrid TV combines the terrestrial broadcast network (weather, news, ...) with internet services such as social TV (Facebook, Twitter, ...) to enrich the TV experience with personalized services. Hybrid features on a TV include integrated broadcast and internet content, personalized recommendation, social TV, a second-screen experience, voice/gesture control, apps, advertisement, and payment services. Display content spans diverse video services: live TV, video on demand (VoD), time-shift TV, and an electronic program guide. (Ref: III project)
How to separate the related inventions or combine those in one
case study: radio met car
Filing application tips - the "KISS rule"
Trademark note
Software & biz methods notes including source code & flowchart
First sketches skills (Official Gazette, OG)
Drafting the specification hints
This document discusses different methods for visible surface determination in 3D computer graphics. It describes object-space methods that compare objects within a scene to determine visibility, and image-space methods that decide visibility on a point-by-point basis at each pixel. Specific methods mentioned include the back-face detection method, depth-buffer/z-buffer method, and A-buffer method. The depth-buffer method stores depth and color values in buffers for each pixel and compares surface depths to determine visibility. The A-buffer method extends this to allow accumulation of intensities for transparent surfaces.
In computer graphics, hidden surface determination, also known as visible surface determination or hidden surface removal, is the process used to determine which surfaces of a particular object are not visible from a particular angle or viewpoint. This scribe describes the object-space method and the image-space method, and also discusses algorithms based on the Z-buffer method, the A-buffer method, and the scan-line method.
This document discusses various visible surface detection methods in computer graphics. It describes object-space methods like back-face detection that compare object surfaces, and image-space methods like depth buffering that determine visibility point-by-point. Specific algorithms covered include depth buffering, scan-line, depth sorting, BSP trees, ray casting, and methods for curved and wireframe surfaces. It also provides examples and discusses functions for implementing visibility detection in OpenGL.
This document discusses algorithms for visible surface determination (VSD) to determine which surfaces are visible during 3D rendering. It describes two main approaches: image precision, which operates at the display resolution, and object precision, which operates at the object level. It also discusses techniques like the depth buffer and depth sorting algorithms. The depth buffer method uses two buffers - a depth buffer and frame buffer - to track pixel depth and color values. It processes objects and surfaces, testing pixels and updating the buffers. Depth sorting paints surfaces in order of decreasing depth to resolve visibility.
The document discusses various techniques for constructing shadows and lighting effects in 3D computer graphics, including using projection matrices to generate shadow polygons and accounting for factors like light source positioning, radial intensity attenuation, and surface reflectance properties. It also examines methods for animating camera movement and introducing texture mapping to surfaces.
This document discusses various surface detection methods for 3D graphics, including:
- Back-face detection, which discards back-facing polygons based on their normal vectors.
- The depth-buffer (z-buffer) method, which compares depth values at each pixel and only draws surfaces with smaller depths.
- The A-buffer method, an extension of depth buffering that allows rendering of transparent surfaces using an accumulation buffer.
- Scan-line and depth-sorting methods, which perform visibility calculations along scanlines or by sorting surfaces from back to front.
This document provides an overview of collision detection in 3D environments. It begins with definitions of key geometry concepts like Euclidean geometry, affine geometry, and projective geometry. It then discusses spatial data structures commonly used for collision detection like bounding volumes, space partitioning structures, and scene graphs. The document outlines various collision detection algorithms for basic 3D shapes like spheres, boxes, and triangles. It also covers algorithms for convex objects and concepts like configuration space obstacles, Voronoi diagrams, and different types of coherences used in collision detection.
Computer Graphics - Lecture 03 - Virtual Cameras and the Transformation Pipeline (Anton Gerdelan)
Slides from when I was teaching CS4052 Computer Graphics at Trinity College Dublin in Ireland.
These slides aren't used any more so they may as well be available to the public!
There are some mistakes in the slides, I'll try to comment below these.
- The lecture covered graphics math topics including homogeneous coordinates and projective transformations.
- Homework 2 was due and an in-class quiz was given. Details on Project 1 were announced.
- The final exam date was moved and last class will be a review session. Daily quiz solutions will be provided.
- Office hours and last lecture topics were reviewed to introduce the current lecture on further graphics math concepts.
This document discusses different types of projections used in computer graphics, including perspective and parallel projections. It describes orthographic projection, which projects points along the z-axis onto the z=0 plane. Perspective projection is also covered, including how it creates the effect of objects appearing smaller with distance using similar triangles. The document provides the equation for a perspective projection matrix and an example. It concludes by discussing defining a viewing region or frustum using functions like glFrustum and gluPerspective in OpenGL.
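The glFrustum/gluPerspective calls mentioned above can be shown in a short C snippet; the numeric values are illustrative, not from the document.

    #include <GL/glut.h>

    /* Legacy-OpenGL view-volume setup. */
    void setProjection(int width, int height) {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Symmetric frustum: 60-degree vertical field of view,
           near plane at 1.0, far plane at 100.0. */
        gluPerspective(60.0, (double)width / (double)height, 1.0, 100.0);
        /* An equivalent glFrustum call would specify left/right/bottom/top
           on the near plane instead of a field-of-view angle. */
        glMatrixMode(GL_MODELVIEW);
    }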
The document discusses different concepts related to clipping in computer graphics including 2D and 3D clipping. It describes how clipping is used to eliminate portions of objects that fall outside the viewing frustum or clip window. Various clipping techniques are covered such as point clipping, line clipping, polygon clipping, and the Cohen-Sutherland algorithm for 2D region clipping. The key purposes of clipping are to avoid drawing objects that are not visible, improve efficiency by culling invisible geometry, and prevent degenerate cases.
Identify those parts of a scene that are visible from a chosen viewing position. Visible-surface detection algorithms are broadly classified according to whether they deal with object definitions directly or with their projected images; these two approaches are called object-space methods and image-space methods, respectively. An object-space method compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible. In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
This document discusses algorithms for visible surface detection in 3D computer graphics. It describes two main types of algorithms - object space algorithms that determine which parts of objects are visible, and image space algorithms that determine visibility on a per-pixel basis. Four common algorithms are explained in detail: back-face elimination, depth buffering, depth sorting, and ray casting. The document provides examples of how each algorithm works and their pros and cons. It concludes by noting factors to consider when choosing an algorithm, such as scene complexity, object types, and available hardware.
AU QP Answer Key Nov/Dec 2015 Computer Graphics 5 sem CSE
This document contains a summary of a computer graphics exam with 10 multiple choice questions in Part A and 4 long answer questions in Part B. Some of the key topics covered include: image resolution, scaling matrices, color conversion between RGB and CMY color modes, Bezier curves, projection planes, dithering, animation principles, turtle attributes in graphics, Bresenham's circle algorithm, Liang-Barsky line clipping algorithm, viewing transformations, cubic Bezier curves, and backface detection. Part B also includes questions on orthographic vs axonometric vs oblique projections, ambient lighting models, raster vs keyframe animation, ray tracing, and morphing.
The document discusses different types of projections in 3D computer graphics, including orthographic, oblique, and perspective projections. It explains the geometry and matrices used to perform perspective projections, mapping 3D points onto a 2D view plane using similar triangles. The text also compares perspective versus parallel projections, noting that perspective projection looks more realistic but does not preserve distances, while parallel projection preserves parallel lines and relative proportions.
Similar to CG OpenGL surface detection+illumination+rendering models-course 9
Intellectual Property Guide
patent requirements
industry approaches
patent strength -> patent map
International property offices
Scenario extends to more patents - case study
.....
Smart TV content converged service & social media
This document discusses the convergence of smart TV content and social media. It begins with an outline covering the topics of what TV is in a globally connected world, content issues like apps and distribution/monetization, and social media case studies. The document then covers hybrid TV platforms, the development of HbbTV standards, bandwidth needs with more applications, and how CE manufacturers' value chains may change. It also discusses content issues around apps, distribution, synchronization and monetization solutions. Finally, it explores social media history and case studies involving Facebook, mobile apps, and gamification solutions.
Integrating the journey of IP protections to discuss the strategies
Coping with the piracy issue & how to handle a non-patentable case (e.g. trademark, copyright, trade secret...)
Gives some suggestions for lay inventors on preparing a patent application
Some online patent application forms & the template of an RPA & the PTO's response after you send an application
Starts the procedure of classification search (patents)
Points out reading skills & how to define the scope of a patent
Uses some case studies as illustrations
Design-around lawsuits: Apple against Samsung
Developing online DB approaches (cloud)
Finally, shows other useful information (online search) such as the Official Gazette, trademarks...
The document discusses the Patent Prosecution Highway (PPH) program which allows applicants to use examination results from one patent office to accelerate examination in another office, and describes PPH routes including the Paris route, PCT-PPH, and expanded PPH MOTTAINAI program. It also provides an example of using search results for a mobile phone multimedia controller patent to determine relevant patent classification codes between the USPTO and JPO technical fields.
Starts from overseas patent protection, including
PCT (Patent Cooperation Treaty), PPH (Patent Prosecution Highway), CAF/CCD, the new route pilot (work-sharing), the tripway pilot (search sharing)...
How to audit the work of a patent agent/lawyer
Keep documents & bills
Pass a bar exam to show your ability
The document discusses conducting a patentability search, including why they are important, how to search patent classifications, and examples of patent infringement cases. It describes searching major patent databases like Espacenet and USPTO to find prior art and check for patent applications. Classification searches involve searching patents within a particular class and subclass. The Cooperative Patent Classification (CPC) system aims to develop a joint classification for the European and US patent offices.
Introduces requirement #3, novelty, and the date issues of "first to file" & the 1-year rule
Discusses the conditions that satisfy the novelty requirement
Finally, shares the 3 types of novelty the law recognizes (Section 102) for the physical part (hardware or method)
Starts from patentability's 4 legal requirements:
req. 1: a statutory class - lawsuits & 5 classes covering processes, machines, manufactures, compositions, new uses
req. 2: utility, including software, a non-patentable drug case & how to turn back a whimsical case
req. 3: novelty - prior art (reduction to practice)
CG OpenGL 3D object representations-course 8
This document discusses 3D object representations in computer graphics. It describes how regular polyhedrons like cubes can be used to represent simple 3D objects. It also discusses representing curved surfaces like spheres, ellipsoids, and tori through parametric equations in spherical or Cartesian coordinates and approximating them with polygon meshes. OpenGL and GLUT functions for drawing common 3D primitives like spheres and cones are also covered.
The document discusses 3D viewing frameworks and how to generate 3D views of objects and scenes by setting up a camera position and orientation, projecting object descriptions onto a view plane using different projection types like parallel, perspective, and oblique projections, and transforming the view for output. It also covers topics like depth cueing, aspect ratios, and the steps involved in the 3D viewing process using computer graphics.
The document discusses 2D viewing and simple animation techniques in computer graphics, including how to define a viewing region, perform viewing transformations, construct basic animations using techniques like double buffering and periodic motion, and manage frame rates for smooth animation playback. It also provides OpenGL code examples for tasks like setting the viewport and scaling images.
This document discusses vectors, coordinate systems, and geometric transformations that are fundamental concepts in computer graphics. It provides examples of different coordinate systems and how to project points from one system to another. It also explains various 2D affine transformations like translation, scaling, rotation, shearing, and reflection through transformation matrices. Homogeneous coordinates are introduced as a technique to represent 2D points as 3D homogeneous coordinates to allow for general linear transformations.
This document discusses various topics related to computer graphics and input devices, including:
1. It provides an overview of polar coordinates and how to convert between polar and Cartesian coordinates.
2. It describes different input device modes including request, sample, and event mode and provides examples of each.
3. It covers color information and graphics functions in OpenGL related to color, including color tables, pixel arrays, and color functions.
4. It discusses additional graphics functions in OpenGL related to points, lines, polygons and filling algorithms.
This document discusses various topics related to computer graphics and video processing, including coordinate systems, line drawing algorithms, circle generation, polygon filling, and 3D modeling. It provides explanations of techniques like Bresenham's line algorithm, the midpoint circle algorithm, scanline polygon filling, and boundary representation for 3D objects. Examples are given for 2D and 3D coordinate systems, parallel line drawing, concave polygon splitting, inside-outside testing, and representing adjacent polygon surfaces.
OpenGL - point & line design
Introduces the construction of display devices (CRT, flat-panel, LCD, PDP, projector...)
Rendering on these devices is based on basic graphics skills (points & lines)
User stories are estimated in story points to plan project timelines. Story points are a relative unit used to estimate complexity rather than time. The team estimates stories together by first independently assigning points, then discussing to converge on a shared estimate. Velocity is calculated based on the number of points completed in an iteration to predict future capacity. Pair programming may impact velocity but not the story point estimates themselves. Estimates should consider the story complexity and effort from the team perspective rather than individuals.
1. Visible-Surface Detection Methods + Illumination Models & Surface-Rendering Models
Chen Jing-Fung (2006/12/8)
Assistant Research Fellow, Digital Media Center, National Taiwan Normal University
References: Ch. 9, Ch. 10, and Ch. 11-6 of Computer Graphics with OpenGL, 3rd ed., Hearn & Baker
3. Visible-surface detection
• How to describe the surface?
– Object-space methods
• Compare objects, or parts of objects, in the scene to decide which surfaces we should label as visible
• Advantage: effectively locates all the visible surfaces at once
– Image-space methods
• Visibility is decided at each pixel position on the projection plane: the surface point closest to the viewer wins
• Most visible-surface detection algorithms are image-space methods
4. Object- & image-space
• The major difference between the various visible-surface detection algorithms is their basic approach:
– Sorting methods
• Based on depth (interval distance), used to order the surfaces in a scene
– Coherence methods
• Use the regularities in a scene
5. Polygon's vertices & plane
• Any plane has the form Ax + By + Cz + D = 0, with normal N = (A, B, C)
• Three points (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) determine a plane
– Substituting each vertex k and dividing by D gives three equations in the ratios A/D, B/D, C/D:
(A/D)xk + (B/D)yk + (C/D)zk = −1, k = 1, 2, 3
– The intercepts of the plane with the axes are x = −D/A, y = −D/B, z = −D/C
– Solving by Cramer's rule (rows of each 3×3 determinant separated by semicolons):
A = det[1 y1 z1; 1 y2 z2; 1 y3 z3]
B = det[x1 1 z1; x2 1 z2; x3 1 z3]
C = det[x1 y1 1; x2 y2 1; x3 y3 1]
D = −det[x1 y1 z1; x2 y2 z2; x3 y3 z3]
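Expanding those determinants gives the familiar cross-product form; here is a hedged C sketch (names are illustrative), equivalent to N = (P2 − P1) × (P3 − P1) with D = −N · P1.

    typedef struct { double x, y, z; } Point3;

    /* Plane coefficients A, B, C, D (Ax + By + Cz + D = 0) for the plane
       through three polygon vertices. */
    void planeCoeffs(Point3 p1, Point3 p2, Point3 p3,
                     double *A, double *B, double *C, double *D) {
        *A = (p2.y - p1.y) * (p3.z - p1.z) - (p3.y - p1.y) * (p2.z - p1.z);
        *B = (p2.z - p1.z) * (p3.x - p1.x) - (p3.z - p1.z) * (p2.x - p1.x);
        *C = (p2.x - p1.x) * (p3.y - p1.y) - (p3.x - p1.x) * (p2.y - p1.y);
        *D = -(*A * p1.x + *B * p1.y + *C * p1.z);
    }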
6. Back-face detection (object-space)
• A simple back-face test (right-handed system):
– N = (A, B, C) is the polygon surface's normal vector
– Vview is a viewing-direction vector from our camera position
– The polygon's plane is Ax + By + Cz + D = 0
• Back face: Vview · N > 0
• If Vview is parallel to the zv axis of the viewing system (looking along −zv):
– C < 0: we label the polygon as a back face
– C = 0: the face is viewed edge-on, so we cannot see it
(figure: plane with normal N against the monitor's viewing axes xv, yv, zv)
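The test on this slide is one dot product; a minimal C sketch follows (names are illustrative).

    typedef struct { double x, y, z; } Vec3;

    /* Back face when Vview . N > 0. */
    int isBackFace(Vec3 n, Vec3 vview) {
        return (vview.x * n.x + vview.y * n.y + vview.z * n.z) > 0.0;
    }

    /* Special case: viewing direction parallel to the zv axis
       (looking along -zv), so only the sign of C matters. */
    int isBackFaceAlongZ(double C) {
        return C < 0.0;     /* C == 0 means the face is seen edge-on */
    }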
7. Back-face removals
• In general, back-face removal eliminates part of the hidden polygon surfaces, and for some objects all of them
– Convex surface: back-face removal eliminates all hidden surfaces (with the previous slide's convention: C < 0 labels a back face, C = 0 is an edge-on face we cannot see)
– Concave surface: behaves like two convex objects combined; one front face may be partially hidden by other faces of the object, so back-face removal alone is not enough
8. OpenGL and concave polygons
• A tessellator object in the GLU library can tessellate a given polygon into flat convex polygons
– Draw a simple polygon without holes
• The basic idea is to describe a contour:

mytess = gluNewTess();                  // see gluTessProperty() for options
gluTessBeginPolygon(mytess, NULL);      // send the vertices off to be rendered
gluTessBeginContour(mytess);            // draw the contour (outside)
for (i = 0; i < n_vertices; i++)
    gluTessVertex(mytess, vertex[i], vertex[i]);
gluTessEndContour(mytess);
gluTessEndPolygon(mytess);
9. Tessellation winding rules
• The tessellation algorithm in OpenGL is based on the winding rule, which sets how the winding number classifies interior regions:
gluTessProperty(mytess, GLU_TESS_WINDING_RULE, *);
where * is one of:
– GLU_TESS_WINDING_ODD
– GLU_TESS_WINDING_NONZERO
– GLU_TESS_WINDING_POSITIVE
– GLU_TESS_WINDING_NEGATIVE
– GLU_TESS_WINDING_ABS_GEQ_TWO
(figure source: http://pheatt.emporia.edu)
10. Depth-buffer method
• A common image-space approach for detecting visible surfaces is the depth-buffer method
– Visibility is resolved one pixel position at a time across each surface
– Also called the z-buffer method, since object depth is usually measured along the z axis of a viewing system
11. Depth buffer
• Three surfaces may overlap a pixel position (x, y) on the view plane
– The visible surface is the one with the smallest depth value at that pixel
– After projection, a point P = (Px, Py, Pz) maps to (Px/Pz, Py/Pz, (a·Pz + b)/Pz); the third component is the pseudodepth
• Depth values are normalized over the view volume:
– depth value 0.0 → near, depth value 1.0 → far
(figure: view plane with pixel (x, y) and viewing axes xv, yv, zv)
12. Possible data type to hold face data

class Face {
    int nVerts;      // number of vertices in the vertex array
    Point* pt;       // array of vertices in real screen coordinates
    float* depth;    // array of vertex depths
    Plane plane;     // data for the plane of the face
    Exface extent;   // the extent of the face
    // other properties
};

Ch. 13: Computer Graphics, 2nd ed., F. S. Hill Jr.
13. Depth-buffer algorithm
• Initialize the depth buffer and frame buffer so that for all buffer positions (x, y):
depthBuff(x, y) = 1.0, frameBuff(x, y) = backgndColor
• Process each polygon in the scene, one at a time:
– For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known)
– If z < depthBuff(x, y), compute the surface color at that position and set
depthBuff(x, y) = z, frameBuff(x, y) = surfColor(x, y)
14. Basic flow of the depth-buffer algorithm
• Pseudocode:

for (each face F)
    for (each pixel (x, y) covering the face) {
        depth = depth of F at (x, y);
        if (depth < d[x][y]) {        // F is closest so far
            c = color of F at (x, y);
            set the pixel at (x, y) to c;
            d[x][y] = depth;          // update the depth buffer
        }
    }
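The same flow in concrete C might look like the sketch below; the buffer sizes and the per-pixel test helper are illustrative, with depths normalized to [0, 1] as on the previous slide.

    #define W 640
    #define H 480

    static float        depthBuf[H][W];   /* pseudodepth per pixel */
    static unsigned int frameBuf[H][W];   /* packed color per pixel */

    /* Initialization step of the algorithm above. */
    void clearBuffers(unsigned int backgroundColor) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                depthBuf[y][x] = 1.0f;     /* farthest possible depth */
                frameBuf[y][x] = backgroundColor;
            }
    }

    /* Per-pixel depth test: keep the closest surface seen so far. */
    void testPixel(int x, int y, float depth, unsigned int color) {
        if (depth < depthBuf[y][x]) {
            depthBuf[y][x] = depth;
            frameBuf[y][x] = color;
        }
    }

Note how the test is order-independent: faces can be rasterized in any sequence and the buffers still converge to the correct visible surface at every pixel.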
15. A-buffer method
• An extension of the depth-buffer ideas above is the A-buffer procedure
– Combines antialiasing, area-averaging, and visibility-detection methods
– Developed at Lucasfilm Studios for the surface-rendering system REYES (Renders Everything You Ever Saw)
– The buffer region is referred to as the accumulation buffer
• Stores a variety of surface data in addition to depth values
16. Two fields in the A-buffer method
• Each position in the A-buffer has two fields:
– Depth field
• Stores a real-number value (positive, negative, or zero)
– Surface data field
• Stores surface data or a pointer
17. Two buffer representations in the A-buffer
• depth ≥ 0: a single surface overlaps the pixel, and the entry directly stores that surface's depth, RGB color, and other information
• depth < 0: more than one surface overlaps the pixel, and the entry points to a linked list of surface data (Surf1 info -> Surf2 info -> …), which can also be kept in priority order
18. A-buffer
• Summary of the surface information kept in the A-buffer:
– RGB intensity components
– opacity parameter (percent of transparency)
– depth
– percent of area coverage
– surface identifier
– other surface-rendering parameters
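The two-field layout of slides 16–18 might be sketched in C as follows; the exact fields and names are illustrative, not the actual REYES data layout:

/* Per-surface data for one pixel fragment (fields follow the list above). */
typedef struct SurfInfo {
    float rgb[3];          /* RGB intensity components */
    float opacity;         /* percent of transparency  */
    float depth;           /* fragment depth           */
    float coverage;        /* percent of area coverage */
    int   surfId;          /* surface identifier       */
    struct SurfInfo *next; /* next overlapping surface */
} SurfInfo;

/* One A-buffer entry: depth >= 0 means `single` holds the only surface;
   depth < 0 means `list` points to the linked list of surfaces. */
typedef struct {
    float depth;
    union {
        SurfInfo  single;
        SurfInfo *list;
    } u;
} APixel;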
19. Design a pick-buffer program in OpenGL
• Interactively select objects
– by pointing at screen positions (note: OpenGL cannot directly pick at an arbitrary position)
• Design a pick window to form a revised view volume
• Assign an integer ID to each object
• Objects that intersect the revised view volume are recorded in a pick-buffer array
20. Screen-picking procedure
• Create and display a scene
• Pick a screen position; in the mouse callback function:
– set up a pick buffer
– activate the picking operations (selection mode)
– initialize an ID name stack for the object IDs
– save the current viewing and geometric-transformation matrix
– specify a pick window for the mouse input
21. – Assign identifiers to objects and reprocess the scene using the revised view volume (the pick information is stored in the pick buffer)
– Restore the original viewing and geometric-transformation matrix
– Determine the number of objects that have been picked and return to the normal rendering mode
– Process the pick information
22. • Pick buffer: glSelectBuffer (pickBuffSize, pickBuffer);
– this buffer array stores the integer pick information for each object
– several records of information can be stored in the pick buffer
• depending on the size and location of the pick window
23. Each record in the pick buffer contains
• The stack position of the object, which is the number of identifiers in the name stack up to and including the position of the picked object
• The minimum depth of the picked object
• The maximum depth of the same object
• The list of the identifiers in the name stack, from the first (bottom) identifier to the identifier of the picked object
Depth values, which range over 0.0~1.0, are multiplied by 2^32 − 1 before being stored
24. • The OpenGL picking operations are activated with glRenderMode (GL_SELECT);
– the scene is then processed through the viewing pipeline without being stored in the frame buffer
– a record is placed in the pick buffer for each object that would have been displayed in the normal rendering mode
• the next call to glRenderMode returns the number of picked objects (records in the pick buffer)
25. After the mouse hit
• To return to the normal rendering mode (the default):
glRenderMode (GL_RENDER);
• A third option (GL_FEEDBACK)
– stores object coordinates without displaying them
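Putting slides 19–25 together, a condensed C sketch of the selection-mode picking flow; the buffer size, the 5×5 pick-window size, and the drawScene callback are illustrative choices, and gluPickMatrix is the usual companion call for building the revised view volume:

#include <GL/glu.h>

#define PICK_BUFF_SIZE 64

void drawScene(void); /* assumed to call glLoadName(id) before each object */

/* Mouse-callback body: pick at window position (x, y). */
int pickAt(int x, int y, int winW, int winH)
{
    GLuint pickBuffer[PICK_BUFF_SIZE];
    GLint  viewport[4] = { 0, 0, winW, winH };
    GLint  hits;

    glSelectBuffer(PICK_BUFF_SIZE, pickBuffer); /* set up the pick buffer */
    glRenderMode(GL_SELECT);                    /* selection mode         */
    glInitNames();                              /* empty ID name stack    */
    glPushName(0);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();                             /* save the projection    */
    glLoadIdentity();
    /* 5x5-pixel pick window around the cursor (y flipped to GL coords). */
    gluPickMatrix((GLdouble)x, (GLdouble)(winH - y), 5.0, 5.0, viewport);
    /* ... re-apply the scene's normal projection here ... */

    drawScene();                                /* reprocess the scene    */

    glPopMatrix();                              /* restore the projection */
    hits = glRenderMode(GL_RENDER);             /* back to normal mode    */

    /* Each record: { name count, min depth, max depth, names... }, with
       depths scaled into 0 .. 2^32-1 as described on slide 23. */
    return hits;
}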
27. OpenGL visibility-detection functions
• Back-face removal:
glEnable (GL_CULL_FACE);
glCullFace (mode);
glDisable (GL_CULL_FACE);
– where mode can be
• GL_FRONT: remove the front faces
– e.g., when the viewing position is inside a building
• GL_BACK: remove the back faces
– e.g., when the viewing position moves outside the building
• GL_FRONT_AND_BACK: remove both front and back faces
– eliminates all polygon surfaces in a scene
– The front-facing orientation itself is set with another function
• glFrontFace()
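A typical setup sketch (the winding choice here is ours): declare counter-clockwise polygons as front-facing and discard back faces.

glFrontFace(GL_CCW);    /* CCW polygons are front-facing (the default) */
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    /* remove the back faces                       */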
28. OpenGL Depth-Buffer Functions
• Using the OpenGL depth-buffer visibility-detection routines
– first, modify the GLUT initialization function to request a depth buffer (here a single-buffered RGB frame):
glutInitDisplayMode ( GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH );
– depth testing is enabled with glEnable (GL_DEPTH_TEST);
– depth-buffer values can be initialized with
glClear (GL_DEPTH_BUFFER_BIT);
• (GL_COLOR_BUFFER_BIT likewise refreshes the frame buffer to the background color)
30. • The depth-buffer visibility testing can use a different initial value for the maximum depth:
glClearDepth (maxDepth);
glClear (GL_DEPTH_BUFFER_BIT);
– otherwise the depth buffer is initialized with the default value 1.0 (the maximum depth-buffer value in OpenGL)
– this function can be used to speed up the depth-buffer routines
• e.g., when many distant objects lie behind the foreground objects, a smaller maximum depth lets them be rejected immediately
(the minimum depth-buffer value in OpenGL is 0.0)
31. Projection application
• Projection coordinates in OpenGL
– are normalized to the range −1.0 ~ 1.0
– the depth values between near and far are normalized to the range 0.0 ~ 1.0
• default: near 0.0, far 1.0
glDepthRange (nearnormdepth, farnormdepth);
– any values in range can be set, including nearnormdepth > farnormdepth
32. Other depth-buffer options in OpenGL
glDepthFunc (testCondition);
– parameter testCondition can be assigned one of 8 symbolic constants
• GL_LESS (the default), GL_GREATER, GL_EQUAL, GL_NOTEQUAL, GL_LEQUAL, GL_GEQUAL, GL_NEVER (no points pass), GL_ALWAYS (all points pass)
– these compare each incoming pixel's z value with the z value currently stored in the depth buffer
– GL_LESS: a pixel is accepted when its depth value is less than the current value in the depth buffer
33. Set the depth-buffer state
• Set the state of the depth buffer to read-only or read-write:
glDepthMask (writeState);
– writeState = GL_TRUE (default): read-write
– writeState = GL_FALSE: read-only; values can be retrieved but not updated
34. The advantage of setting the state
• This feature is useful when a complicated background stays fixed while different foreground objects are displayed
– write the background's depth values once, then make the depth buffer read-only while processing the foreground, as sketched below
• this allows a series of frames to be generated with different foreground objects,
• or with one object in different positions for an animation sequence
• in this way, only the depth values for the background are saved
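A sketch of that per-frame flow (drawBackground and drawForeground are hypothetical scene callbacks):

void drawBackground(void);
void drawForeground(int frameNo);

void renderAnimationFrame(int frameNo)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glDepthMask(GL_TRUE);   /* read-write: record background depths     */
    drawBackground();

    glDepthMask(GL_FALSE);  /* read-only: the foreground is still tested
                               against the saved background depths but
                               does not overwrite them                   */
    drawForeground(frameNo);
    glDepthMask(GL_TRUE);   /* restore the default for the next frame   */
}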
35. OpenGL Depth-Cueing Function
• Vary the brightness of an object as a function of its distance from the viewing position (the depth-cueing method):
fdepth(d) = (dmax − d) / (dmax − dmin)
glEnable (GL_FOG);
glFogi (GL_FOG_MODE, GL_LINEAR);
• Different values can be set for dmin and dmax:
glFogf (GL_FOG_START, minDepth);
glFogf (GL_FOG_END, maxDepth);
Defaults: dmin = 0.0, dmax = 1.0
36. Approaches to infinity
• Many complex pictures can be composed from simple components
– tessellations can be based on a single regular polygon
• only the triangle, square, and hexagon tile the plane
37. Drawing simple tessellations
• Consider how to draw the 3-gon tilings in an application
– the 3-gon version:
• row: draw side-by-side equilateral triangles that all have the same orientation
• every other row must be offset horizontally by ½ the base of the triangle (so each line is drawn only once)
• clipping can be used to chop off the tiling at the borders of the desired window
38. Drawing simple tessellations
• Code fragment (note that the offset is fixed per row, not accumulated):
for (i = 1; i <= rowsnum; i++) {
    offset = (i % 2 == 1) ? shift : 0;  /* odd rows shifted by half a base */
    for (j = 1; j <= colsnum; j++)
        Triangle(j*colWidth + offset, i*rowWidth, 1);
}
42. Ambient light
• Ambient light sets a general brightness level in a scene
– objects combine all the background lighting effects ~ the global diffuse reflections
• Assume monochromatic lighting only
– intensity parameter Ia
– ambient-light reflections
• a simple form of diffuse reflection
• independent of the viewing direction and of the spatial orientation of a surface
43. Diffuse reflection (1)
• Diffuse reflection model
– assume the incident light is scattered with equal intensity in all directions (ideal diffuse reflectors)
• the reflected radiant light energy is calculated with Lambert's cosine law:
Intensity = (radiant energy per unit time) / (projected area) ∝ cos ψN / (dA cos ψN) ~ constant
so the perceived intensity is independent of the viewing direction
44. Diffuse reflection (2)
• Radiant energy from a surface area element dA, in a direction ψN relative to the surface normal, is proportional to cos ψN
– for a monochromatic light source, the diffuse-reflection coefficient kd ranges over 0.0~1.0
• kd = 1.0: a highly reflective surface
• kd = 0.0: a surface that absorbs all of the incident light
[Figure: incident light striking the area element dA, with the radiant-energy direction at angle ψN from the normal N]
45. Lighting & reflection (1)
• Background lighting effects
– every surface is fully illuminated by the ambient light Ia
– the ambient-lighting contribution to the diffuse reflection at any point is
Iambdiff = kd Ia,   kd: 0.0~1.0
– used alone, this produces an abnormally flat shading
46. Lighting & reflection (2)
• Model the amount of incident light Il,incident on a surface A from a light source with intensity Il arriving at angle θ to the surface normal:
Il,incident = Il cos θ
• The diffuse reflection is then
Il,diff = kd Il,incident = kd Il cos θ
– incoming light ⊥ the surface => θ = 0°, Il,diff = kd Il
– cos θ ≤ 0.0 => the light source is behind the surface
47. With L the unit vector toward the light source, cos θ = N·L and
L = (Psource − Psurf) / |Psource − Psurf|
so
Il,diff = kd Il (N·L), if N·L > 0
Il,diff = 0.0,         if N·L ≤ 0
• Diffuse reflections from a spherical surface
• point-light-source color = white
• diffuse reflectivity coefficient: 0 <= kd <= 1
48. Combining the ambient and point-source intensity calculations gives the total diffuse reflection:
Idiff = ka Ia + kd Il (N·L), if N·L > 0
Idiff = ka Ia,               if N·L ≤ 0
– ka is the ambient-reflection coefficient
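A direct C transcription of the total diffuse-reflection formula above; the vector type, helper, and function name are ours, and intensities are monochromatic scalars:

typedef struct { float x, y, z; } Vec3f;

static float dot3(Vec3f a, Vec3f b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* n: unit surface normal; l: unit vector toward the light source;
   ka, kd: ambient and diffuse coefficients (0.0~1.0);
   ia, il: ambient and point-source intensities. */
float totalDiffuse(Vec3f n, Vec3f l, float ka, float kd, float ia, float il)
{
    float ndotl = dot3(n, l);
    if (ndotl > 0.0f)
        return ka * ia + kd * il * ndotl;  /* lit side             */
    return ka * ia;                        /* light behind surface */
}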
50. • The specular-reflection direction for a position on an illuminated surface
[Figure: vectors N, L, R, V at a surface point; L and R make equal angles θ with N, and φ is the angle between R and V]
N: surface normal vector
R: ideal specular-reflection direction
L: direction toward the point light source
V: direction toward the viewer
• For an ideal reflector (a perfect mirror), the incident light is reflected entirely along R, so we would see reflected light only when V and R coincide
51. Specular-reflection function
• The variation of specular intensity with the angle of incidence is described by Fresnel's laws of reflection:
Il,spec = W(θ) Il cos^ns φ
– specular-reflection function W(θ)
– the intensity of the light source Il
– φ: the angle between the viewing direction V and R
53. Specular-reflection coefficient
• Approximate variation of the specular-reflection coefficient W(θ) for different materials, as a function of the angle of incidence θ
– W(θ): 0.0~1.0
– in general, W(θ) tends to increase as θ goes from 0° to 90°
– at θ = 90°, W(θ) = 1
• all of the incident light is reflected
Ex: glass
– at θ = 0°, only about 4% of the incident light on a glass surface is reflected
– over most of the range of θ, the reflected fraction is < 10%
54. Simple specular-reflection function (1)
• For many opaque materials the specular reflection is nearly constant, so set ks = W(θ), with ks: 0.0~1.0:
Il,spec = ks Il (V·R)^ns, if V·R > 0 and N·L > 0
Il,spec = 0.0,            if V·R < 0 or N·L ≤ 0
– R can be computed from L and N:
R + L = (2 N·L) N  =>  R = (2 N·L) N − L
55. Simple specular-reflection function (2)
• We can replace V·R with N·H (and cos φ with cos α, where α is the angle between N and the halfway vector H):
H = (L + V) / |L + V|
– advantages
• for nonplanar surfaces, N·H requires less computation than V·R, because R depends on N
• if the viewer and the light source are sufficiently far away, V and L, and therefore H, are all constant
• if α > 90°, N·H is negative and we set Il,spec = 0.0
56. With the halfway vector, the specular term becomes
Il,spec = ks Il (N·H)^ns, if N·H > 0
Il,spec = 0.0,            if N·H ≤ 0
57. Combined diffuse and specular reflection
• A single point light source:
I = Idiff + Ispec
  = ka Ia + kd Il (N·L) + ks Il (N·H)^ns
• Multiple light sources:
I = Iambdiff + Σ(l=1..n) [Il,diff + Il,spec]
  = ka Ia + Σ(l=1..n) Il [kd (N·Ll) + ks (N·Hl)^ns]
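A compact C sketch of the multiple-light version of this model, using the halfway vector of slide 55; the Light layout and the helper names are our own scaffolding:

#include <math.h>

typedef struct { float x, y, z; } Vec3m;

static float dotm(Vec3m a, Vec3m b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3m normm(Vec3m v)
{
    float len = sqrtf(dotm(v, v));
    Vec3m r = { v.x / len, v.y / len, v.z / len };
    return r;
}

typedef struct {
    Vec3m l;   /* unit vector toward light source l */
    float il;  /* source intensity Il               */
} Light;

/* I = ka*Ia + sum over lights of Il * [ kd (N.Ll) + ks (N.Hl)^ns ],
   clamping the diffuse and specular terms at zero. */
float illuminate(Vec3m n, Vec3m v, const Light *lights, int nLights,
                 float ka, float kd, float ks, float ns, float ia)
{
    float i = ka * ia;                      /* ambient contribution     */
    for (int k = 0; k < nLights; k++) {
        float ndotl = dotm(n, lights[k].l);
        if (ndotl <= 0.0f)
            continue;                       /* light behind the surface */
        Vec3m h = normm((Vec3m){ lights[k].l.x + v.x,
                                 lights[k].l.y + v.y,
                                 lights[k].l.z + v.z });
        float ndoth = dotm(n, h);
        float spec = (ndoth > 0.0f) ? powf(ndoth, ns) : 0.0f;
        i += lights[k].il * (kd * ndotl + ks * spec);
    }
    return i;
}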
58. [Figure: four renderings of the same object — wire-frame; ambient lighting only; diffuse reflections from ambient lighting + a single point source; diffuse and specular reflections from ambient lighting + a single point source]
60. Light refraction (2)
• Snell's law relates the angle of refraction θr to the angle of incidence θi:
sin θr = (ηi / ηr) sin θi
– different materials have different indices of refraction ηmaterial
• ηair ~ 1
• ηglass ~ 1.61
62. Flat surface rendering
• Flat surface rendering is also called constant-intensity surface rendering
– flat surface rendering provides an accurate display when:
• the polygon is one face of a polyhedron (a genuinely flat face)
• all light sources are sufficiently far away, so N·L is constant across the face
• the viewing position is also sufficiently far away, so V·R (in the specular formula of slide 54) is constant across the face
glShadeModel(GL_FLAT);
63. Gouraud surface rendering
• Gouraud surface rendering is also called intensity-interpolation surface rendering
– linearly interpolates vertex intensities
– for each polygon section of a tessellated curved surface:
• determine the average unit normal vector at each vertex of the polygon, e.g. for a vertex shared by four polygons
N = (N1 + N2 + N3 + N4) / |N1 + N2 + N3 + N4|
• apply an illumination model to obtain the light intensity at that vertex
• linearly interpolate the vertex intensities over the projected area
glShadeModel(GL_SMOOTH);
64. Two surface renderings
[Figure: the same mesh rendered with flat surface rendering and with Gouraud surface rendering]
glEnable(GL_COLOR_MATERIAL); // use the object's color as its material
65. Phong surface rendering
• Phong surface rendering is also called normal-vector interpolation rendering
– for each polygon section of a tessellated curved surface:
• determine the average unit normal vector at each vertex
• linearly interpolate the vertex normals over the projected area, e.g. along two polygon edges:
N(α) = (1 − α) N1 + α N2
N′(α) = (1 − α) N3 + α N2
• apply an illumination model at positions along scan lines to calculate the pixel intensities
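A sketch of the interpolation step that distinguishes Phong from Gouraud rendering: the interpolated normal is renormalized and fed back into the illumination model at every pixel (the function is illustrative; Vec3m and normm are from the sketch after slide 57):

/* Per-pixel normal N(alpha) = (1 - alpha) N1 + alpha N2, renormalized,
   for Phong surface rendering. */
Vec3m lerpNormal(Vec3m n1, Vec3m n2, float alpha)
{
    Vec3m n = { (1.0f - alpha) * n1.x + alpha * n2.x,
                (1.0f - alpha) * n1.y + alpha * n2.y,
                (1.0f - alpha) * n1.z + alpha * n2.z };
    return normm(n);  /* interpolation shortens the vector; renormalize */
}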
66. OpenGL lights
• OpenGL light-source function
– light-source position
– Light-source colors
• OpenGL global lighting parameters
• OpenGL surface-property function
• OpenGL Spotlights
67. OpenGL light-source function
• Multiple point light sources, each with various properties, can be included in an OpenGL scene description:
glLight*(lightName, lightProperty, propertyValue);
– the function name's suffix code is i (int) or f (float), optionally followed by v (vector):
• with v, propertyValue is a pointer to an array
• lightName: GL_LIGHT0, …, GL_LIGHT7
• lightProperty can be assigned one of ten symbolic property constants
68. OpenGL light-source position
// light1 is designated as a local source at (2.0, 0.0, 3.0)
GLfloat light1PosType [ ] = {2.0, 0.0, 3.0, 1.0};
// light2 is a distant source with light emission along the −y axis
GLfloat light2PosType [ ] = {0.0, 1.0, 0.0, 0.0};
glLightfv (GL_LIGHT1, GL_POSITION, light1PosType);
glEnable (GL_LIGHT1); // turn on light1
glLightfv (GL_LIGHT2, GL_POSITION, light2PosType);
glEnable (GL_LIGHT2); // turn on light2
70. Energylight = 1 (no attenuation)
Energylight = 1/dl² (radial attenuation)
• Radial-intensity attenuation coefficients, with dl the distance from the light-source position
– three OpenGL property constants:
• GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION and GL_QUADRATIC_ATTENUATION
• the corresponding coefficients a0, a1, a2 can each be designated, e.g. a0:
glLightf (GL_LIGHT6, GL_CONSTANT_ATTENUATION, 1.5);
…
71. OpenGL global lighting parameters
• OpenGL lighting parameters can be specified at the global level:
glLightModel* (paramName, paramValue);
– besides the ambient color of individual light sources, we can also set a background lighting level as a global value
• Ex: set the background lighting to a low-intensity dark-blue color with an alpha value of 1.0:
GLfloat globalAmbient [ ] = {0.0, 0.0, 0.3, 1.0};
glLightModelfv (GL_LIGHT_MODEL_AMBIENT, globalAmbient);
Default global ambient color = (0.2, 0.2, 0.2, 1.0) // dark gray
72. OpenGL surface-property function
• Reflection coefficients and other optical properties for surfaces:
glMaterial*(surFace, surfProperty, propertyValue);
– parameter surFace:
• GL_FRONT, GL_BACK or GL_FRONT_AND_BACK
– parameter surfProperty is a symbolic constant identifying a surface parameter
• e.g. Isurf, ka, kd, ks, or ns
– parameter propertyValue is set to the value corresponding to surfProperty
• all properties are specified as vector values, except ns (the specular-reflection exponent), which is a scalar
73. Ambient & diffuse realization
• The ambient and diffuse coefficients should be assigned the same vector values
– GL_AMBIENT_AND_DIFFUSE sets both at once
• To set the specular-reflection exponent:
– GL_SHININESS
• the range of the value is 0 ~ 128
GLfloat diffuseCoeff [ ] = {0.2, 0.4, 0.9, 1.0};  // light-blue color
GLfloat specularCoeff [ ] = {1.0, 1.0, 1.0, 1.0}; // white light
glMaterialfv (GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, diffuseCoeff);
glMaterialfv (GL_FRONT_AND_BACK, GL_SPECULAR, specularCoeff);
glMaterialf (GL_FRONT_AND_BACK, GL_SHININESS, 25.0); // scalar: glMaterialf, not glMaterialfv
74. OpenGL Spotlights
• Spotlights are directional light sources
– three OpenGL property constants control the directional effects:
• GL_SPOT_DIRECTION, GL_SPOT_CUTOFF and GL_SPOT_EXPONENT
– Ex: cutoff angle θl = 30°, cone axis along the x-axis, and attenuation exponent 2.5:
GLfloat dirVector [ ] = {1.0, 0.0, 0.0};
glLightfv (GL_LIGHT4, GL_SPOT_DIRECTION, dirVector);
glLightf (GL_LIGHT4, GL_SPOT_CUTOFF, 30.0);
glLightf (GL_LIGHT4, GL_SPOT_EXPONENT, 2.5);
[Figure: light source at the cone apex; the cone axis, the cutoff angle θl, and the angle α between the axis and the direction Vobj to an object vertex]
demo