This document discusses illumination models and color models in computer graphics. It begins by introducing illumination models, which determine the perceived color and intensity at points on a surface under given lighting conditions. It then covers point light sources, attenuation of light intensity over distance, and the Phong illumination model for specular reflection. Surface illumination factors such as reflection, transmission, and absorption of light are discussed, and basic illumination models combining ambient, diffuse, and specular reflection are presented. The document concludes with rendering of polygons using constant, Gouraud, and Phong shading to interpolate colors across surfaces.
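The combined ambient, diffuse, and specular terms described above can be sketched as a single intensity calculation. This is a minimal illustration, not any particular document's implementation; the attenuation coefficients and reflection coefficients used here are illustrative assumptions.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(ka, kd, ks, n, Ia, Il, L, N, V, d):
    """Intensity at a surface point: ambient + attenuated (diffuse + specular).

    ka, kd, ks: ambient/diffuse/specular reflection coefficients
    n: specular exponent (shininess); Ia, Il: ambient and source intensities
    L, N, V: unit vectors to the light, the surface normal, and to the viewer
    d: distance to the light source (used for radial attenuation)
    """
    atten = 1.0 / (1.0 + 0.1 * d + 0.01 * d * d)  # example quadratic attenuation
    ndotl = max(dot(N, L), 0.0)
    # mirror-reflection vector R = 2(N.L)N - L, used by the Phong specular term
    R = tuple(2 * ndotl * nc - lc for nc, lc in zip(N, L))
    rdotv = max(dot(R, V), 0.0)
    return ka * Ia + atten * Il * (kd * ndotl + ks * rdotv ** n)
```

With the light, normal, and viewer all aligned and zero distance, the result reduces to ka*Ia + Il*(kd + ks), which is a quick sanity check on the formula.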
CS8092 Computer Graphics and Multimedia, Unit 5 (Simonthomas S)
This document discusses multimedia authoring tools and techniques. It covers several topics:
1. Types of multimedia authoring tools including card/page based tools, icon based tools, and time based tools. Popular examples are discussed.
2. Key features and capabilities of authoring tools including editing, programming, interactivity, playback, delivery, and project organization.
3. Authoring system metaphors like hierarchical, flow control, and different technologies focused on like hypermedia.
4. Considerations for multimedia production, presentation, and automatic authoring. Professional development tools are also outlined.
A color model specifies a color space and visible subset of colors within it. There are four main hardware-oriented color models: RGB, CMY, CMYK, and YIQ. However, these are not intuitive for describing color in terms of hue, saturation and brightness. Therefore, models like HSV, HLS, and HVC were developed which relate more directly to human perception of color. The RGB and CMY models represent colors as combinations of red, green, blue and cyan, magenta, yellow primary colors respectively and are used in monitors and printing.
This document discusses different color models used in computer graphics and printing. It explains that color models are systems for creating a range of colors from a small set of primary colors. The two main types are additive models which use light, like RGB, and subtractive models which use inks, like CMYK. RGB uses red, green and blue light and is for computer displays. CMYK uses cyan, magenta, yellow and black inks and is the standard for color printing. It provides details on how each model mixes colors and describes other models like HSV which represents color in terms of hue, saturation and value.
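The relationship between the additive and subtractive models above is simple to state in code: CMY is the complement of RGB, and CMYK pulls the shared grey component out into a separate black ink. A minimal sketch, with components normalized to [0, 1]:

```python
def rgb_to_cmy(r, g, b):
    """CMY is the complement of RGB (components in [0, 1])."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_cmyk(c, m, y):
    """Extract the common grey component into a separate black (K) channel."""
    k = min(c, m, y)
    if k == 1.0:                 # pure black: no colour ink needed
        return (0.0, 0.0, 0.0, 1.0)
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)
```

For example, pure red (1, 0, 0) maps to CMY (0, 1, 1): cyan absorbs red, so a red surface needs full magenta and yellow ink and no cyan.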
Full color, pseudo color, color fundamentals, hue saturation brightness, color model, RGB color model, CMY and CMYK color models, HSI color model, converting RGB to HSI, HSI examples.
Halftoning is the process of converting a greyscale image to a binary image made up of black and white dots. In newspapers, halftoning simulates greyscale using patterns of black dots of varying sizes on a white background. Traditionally, halftoning was done photographically by projecting an image through a halftone screen with an etched grid onto film. Different screen frequencies control dot size. Digital halftoning techniques include patterning, which replaces each pixel with a pattern from a binary font, and dithering, which thresholds the image against a dither matrix to determine black and white pixels.
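The dithering technique mentioned above can be sketched with a classic 4x4 Bayer matrix: each pixel is compared against the matrix entry for its position, tiled across the image. The matrix below is the standard ordered-dither pattern; the 8-bit scaling is an illustrative choice.

```python
# 4x4 Bayer ordered-dither matrix, entries 0..15
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(image):
    """Threshold an 8-bit greyscale image (list of rows) against the tiled
    4x4 dither matrix; returns a binary image of 0s and 1s."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, p in enumerate(row):
            # scale the matrix entry into the 0..255 pixel range
            threshold = (BAYER4[y % 4][x % 4] + 0.5) * 256 / 16
            out_row.append(1 if p > threshold else 0)
        out.append(out_row)
    return out
```

A flat mid-grey region comes out roughly half black and half white dots, which is exactly the greyscale-simulation effect halftoning aims for.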
This document discusses image enhancement and restoration techniques in digital image processing. It describes various arithmetic and logical operations that can be performed on images, including addition, averaging, subtraction, multiplication/division, AND, and OR. These operations allow images to be combined, adjusted for brightness, and manipulated to enhance features or remove artifacts. Pixel value ranges must be normalized back to 0-255 after arithmetic operations.
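As a sketch of the point made above about keeping results in range, here are two of the listed operations on images represented as lists of rows of 8-bit pixel values; the clamping and averaging strategies shown are common illustrative choices, not a specific document's method.

```python
def add_images(a, b):
    """Pixel-wise addition of two equal-sized 8-bit greyscale images,
    clamping results back into the 0-255 range."""
    return [[min(pa + pb, 255) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def average_images(frames):
    """Average several frames of the same scene, e.g. to suppress noise;
    integer division keeps the result in the 0-255 range."""
    n = len(frames)
    return [[sum(px) // n for px in zip(*rows)] for rows in zip(*frames)]
```

Addition without the clamp would overflow the 0-255 range, which is why normalization after arithmetic operations matters.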
This document discusses color image processing and provides details on color fundamentals, color models, and pseudocolor image processing techniques. It introduces color image processing, full-color versus pseudocolor processing, and several color models including RGB, CMY, and HSI. Pseudocolor processing techniques of intensity slicing and gray level to color transformation are explained, where grayscale values in an image are assigned colors based on intensity ranges or grayscale levels.
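The intensity-slicing technique described above amounts to partitioning the grey scale into ranges and assigning one color per range. A minimal sketch (the boundary values and colors in the test are arbitrary examples):

```python
def intensity_slice(image, boundaries, colors):
    """Pseudocolor by intensity slicing: map each grey level to the colour
    of the slice it falls in.

    boundaries: ascending grey-level thresholds, e.g. [64, 128, 192]
    colors: one RGB tuple per slice (len(boundaries) + 1 entries)
    """
    def colour_of(p):
        for i, b in enumerate(boundaries):
            if p < b:
                return colors[i]
        return colors[-1]
    return [[colour_of(p) for p in row] for row in image]
```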
Any colour that can be specified using a model will correspond to a single point within the subspace it defines. Each colour model is oriented towards either specific hardware (RGB,CMY,YIQ), or image processing applications (HSI).
1. The document discusses the key elements of digital image processing including image acquisition, enhancement, restoration, segmentation, representation and description, recognition, and knowledge bases.
2. It also covers fundamentals of human visual perception such as the anatomy of the eye, image formation, brightness adaptation, color fundamentals, and color models like RGB and HSI.
3. The principles of video cameras are explained including the construction and working of the vidicon camera tube.
The RGB and CMY color models are two primary systems for representing color digitally. The RGB model uses additive color mixing of red, green, and blue light to reproduce a wide gamut of colors on computer screens. It is well-suited for digital imaging. The CMYK model uses subtractive color mixing of cyan, magenta, yellow, and black inks to reproduce colors for print. It is widely used in color printing. Both models can be described using numeric values or percentages of their primary colors to precisely define a specific hue.
This document discusses color image processing and provides information on various color models and color fundamentals. It describes full-color and pseudo-color processing, color fundamentals including the visible light spectrum, color perception by the human eye, and color properties. It also summarizes RGB, CMY/CMYK, and HSI color models, conversions between models, and methods for pseudo-color image processing including intensity slicing and intensity to color transformations.
RGB stands for red, green, and blue. This color model is used in computer monitors, television sets, and theater displays. It is an additive color model.
CMYK refers to the four inks used in some color printing: cyan, magenta, yellow and key (black).
This document provides an overview of mathematical morphology and its applications to image processing. Some key points:
- Mathematical morphology uses concepts from set theory and uses structuring elements to probe and extract image properties. It provides tools for tasks like noise removal, thinning, and shape analysis.
- Basic operations include erosion, dilation, opening, and closing. Erosion shrinks objects while dilation expands them. Opening and closing combine these to smooth contours or fill gaps.
- Hit-or-miss transforms allow detecting specific shapes. Skeletonization reduces objects to 1-pixel wide representations.
- Morphological operations can be applied to binary or grayscale images. Structuring elements are used to specify the neighborhood of pixels.
Color fundamentals and color models - Digital Image Processing (Amna)
This presentation is based on Color fundamentals and Color models.
~ Introduction to Colors
~ Color in Image Processing
~ Color Fundamentals
~ Color Models
~ RGB Model
~ CMY Model
~ CMYK Model
~ HSI Model
~ HSI and RGB
~ RGB To HSI
~ HSI To RGB
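The RGB-to-HSI conversion listed in the outline above follows standard textbook formulas: intensity is the mean of the three channels, saturation measures how far the color is from grey, and hue is an angle derived from the channel differences. A sketch:

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB (each in [0, 1]) to HSI, with H in degrees and S, I in [0, 1]."""
    i = (r + g + b) / 3.0
    if i == 0:
        return (0.0, 0.0, 0.0)          # black: hue and saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                          # grey: hue undefined, use 0 by convention
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h                # hue angles beyond 180 degrees
    return (h, s, i)
```

Pure green maps to a hue of 120 degrees with full saturation, matching the usual placement of primaries on the hue circle.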
This document discusses lighting and shading models in computer graphics. It explains that lighting has two main components - the lighting model which calculates intensity at surface points, and surface rendering methods like ray tracing. Common lighting models include ambient, diffuse, and specular components. The diffuse component follows Lambert's cosine law, while the specular component uses Snell's law and the Phong reflection model. Together these components make up the lighting equation, which is approximated using shading techniques like constant, Gouraud, and Phong shading to assign colors to pixels.
Graphic hardware and software are used to create and edit images. Common hardware includes RAM, CPU, graphics cards, and hard drives which are used to store and process graphical files. Software like Photoshop, IPhoto, and Paint allow users to edit photos with different tools and levels of complexity. File formats like JPG, PNG, and TIFF determine image quality and compression for different uses. While powerful programs and devices provide robust editing, they can also be expensive and complex for beginners.
This document discusses color image processing and different color models. It begins with an introduction and then covers color fundamentals such as brightness, hue, and saturation. It describes common color models like RGB, CMY, HSI, and YIQ. Pseudo color processing and full color image processing are explained. Color transformations between color models are also discussed. Implementation tips for interpolation methods in color processing are provided. The document concludes with thanks to the head of the computer science department.
LZW coding is a lossless compression technique that removes spatial redundancies in images. It works by assigning variable-length code words to sequences of input symbols using a dictionary. As the dictionary grows, longer matches are encoded, improving compression ratios. LZW compression is fast, simple to implement, and effective for images with repeating patterns, making it widely used in formats like GIF and TIFF.
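The dictionary-growing behavior described above can be sketched compactly. This encoder starts from the 256 single-byte entries and adds one entry per emitted code; real formats like GIF add code-width management and dictionary resets on top of this core loop.

```python
def lzw_compress(data):
    """LZW-compress a byte string; returns a list of integer codes."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                        # extend the current match
        else:
            out.append(dictionary[w])     # emit code for the longest match
            dictionary[wc] = next_code    # learn the new sequence
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

On repetitive input like b"ABABAB" the new entry for "AB" (code 256) is reused, which is where the compression gain comes from.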
This document discusses image restoration and reconstruction techniques for noise removal. It begins by defining image restoration as attempting to reverse degradation processes to restore degraded images. Various noise models are described, including Gaussian, Rayleigh, Erlang, exponential, uniform, and impulse noise. Spatial domain filtering techniques like mean, median, and order statistics filters are covered for noise removal. Frequency domain filtering using band reject filters is also discussed, as well as adaptive filtering techniques. Examples are provided to demonstrate noise removal.
This document discusses various color models used in computer graphics including RGB, HSV, HSL, CMY, and CMYK. It explains the key components of each model such as hue, saturation, value, and how colors are represented. Common applications of different color models are also summarized such as RGB for computer displays and CMYK for printing. In addition, the concepts of dithering and half-toning techniques used to reproduce colors on devices are introduced.
The document discusses different concepts related to clipping in computer graphics including 2D and 3D clipping. It describes how clipping is used to eliminate portions of objects that fall outside the viewing frustum or clip window. Various clipping techniques are covered such as point clipping, line clipping, polygon clipping, and the Cohen-Sutherland algorithm for 2D region clipping. The key purposes of clipping are to avoid drawing objects that are not visible, improve efficiency by culling invisible geometry, and prevent degenerate cases.
The document discusses edge detection methods including gradient based approaches like Sobel and zero crossing based techniques like Laplacian of Gaussian. It proposes a new algorithm that applies fuzzy logic to the results of gradient and zero crossing edge detection on an image to more accurately identify edges. The algorithm calculates gradient and zero crossings, applies fuzzy rules to classify pixels, and thresholds to determine final edge pixels.
This document discusses color models and color spaces. It defines color models as specifications for representing colors as points within a coordinate system. Common color models include RGB, grayscale, and binary. It describes how human vision perceives color through red, green, and blue cone receptors in the eye. Hue, saturation, and brightness are also defined as the three properties that describe color, with hue being the actual color, saturation being the purity of the color, and brightness being the relative intensity.
This document discusses techniques for image compression including bit-plane coding, bit-plane decomposition, constant area coding, and run-length coding. It explains that bit-plane decomposition represents a grayscale image as a collection of binary images based on its representation as a binary polynomial. Run-length coding compresses each row of a binary image by coding contiguous runs of 0s or 1s with their length, separately for black and white runs. Constant area coding classifies blocks of pixels as all white, all black, or mixed and codes them with special codewords.
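The run-length coding step described above is a short loop over each row. This sketch uses one common convention (the first run is assumed white, so a row starting with black gets a leading zero-length run); other conventions exist.

```python
def run_length_encode(row):
    """Encode one row of a binary image as run lengths. By convention the
    first run is white (1); a row starting with black gets a leading 0."""
    runs = []
    current, count = 1, 0
    for px in row:
        if px == current:
            count += 1
        else:
            runs.append(count)
            current, count = px, 1
    runs.append(count)
    return runs
```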
This document provides information about a digital image processing lecture given by Dr. Moe Moe Myint from Technological University in Kyaukse, Myanmar. It includes the lecture schedule and contact information for Dr. Myint. The document also provides an overview of Chapter 2 which discusses elements of visual perception, light and the electromagnetic spectrum, image sensing and acquisition, image sampling and quantization, and basic relationships between pixels. It provides examples of different types of digital images including intensity, RGB, binary, and index images. It also discusses the effects of spatial and intensity level resolution on images.
This document discusses the HSL and HSV color models. HSL represents colors in terms of hue, saturation, and lightness; it was developed in the 1970s for computer graphics. Hue represents the color spectrum, saturation represents the amount of gray, and lightness represents brightness from black to white. HSV is similar but uses value instead of lightness. Both models arrange the color components in a cylindrical geometry and attempt to be more intuitive than RGB for describing colors.
At the end of this lesson, you should be able to:
identify color formation and how color is visualized;
describe primary and secondary colors;
describe display on CRT and LCD;
comprehend the RGB, CMY, CMYK, and HSI color models.
The document describes the Phong shading model for modeling specular reflections. It explains that specular reflection results from total or near-total reflection of incident light in a concentrated region around the specular reflection angle. The Phong model sets the intensity of specular reflection proportional to the cosine of the viewing angle raised to a power 'n'. Higher values of 'n' produce shinier surfaces, while lower values produce duller surfaces. The model calculates specular reflection based on vectors representing the light source, viewer, and specular reflection direction.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
Surface rendering means a procedure for applying a lighting model to obtain pixel intensities for all the projected surface positions in a scene.
A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Surface rendering can be performed by applying the illumination model to every visible surface point.
1. There are two types of color image processing: pseudocolor processing which assigns colors to grayscale images, and full color processing which manipulates real color images.
2. The human visual system perceives color through photoreceptor cells (cones) in the retina that are sensitive to red, green, and blue wavelengths. Color images can be represented in various color spaces like RGB, HSI, CMYK.
3. Pseudocolor processing techniques include intensity slicing, color coding, and gray level to color transformations to visualize grayscale images. Full color processing involves operations on color components like color balancing, complement, slicing, smoothing and sharpening.
Illumination model in Computer Graphics by irru pychukar (syedArr)
The document discusses illumination models used to calculate light intensity on object surfaces in 3D scenes. It describes how surface rendering uses illumination models to determine pixel intensities. Diffuse and specular reflection are explained along with parameters like ambient light, material properties, number of light sources, attenuation, and shadows. Color considerations and transparent surfaces are also covered at a high level.
- The lecture covered lighting surfaces in computer graphics, specifically how light interacts with visible surfaces through illumination models like ambient, diffuse, and specular lighting.
- Assignments were given including turning in Homework #3, an upcoming Homework #4, and Project #2 on texturing, shading, and lighting due after Spring Break.
- A midterm exam was announced for March 8th and office hours were provided for questions.
The document discusses illumination models and surface rendering methods in computer graphics. It provides information on several key topics:
1. Illumination models (also called lighting models or shading models) are used to calculate the color and intensity of illuminated surfaces. Common illumination models include ambient light, diffuse reflection, and specular reflection (Phong model).
2. Surface rendering methods determine the pixel colors for all positions in a 3D scene. Polygon rendering methods approximate object surfaces with polygons and calculate color/intensity at polygon vertices (Gouraud) or interior points (Phong).
3. Additional concepts covered include light sources, reflection, transparency, shadows, color, and intensity attenuation with distance from light sources.
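The Gouraud method in point 2 above interpolates vertex intensities across each polygon: first along the edges, then linearly across every scanline span between them. The per-span step can be sketched as follows.

```python
def gouraud_span(i_left, i_right, width):
    """Linearly interpolate intensity across one scanline span of `width`
    pixels, given the intensities at the left and right edges, as Gouraud
    shading does between polygon edge intersections."""
    if width == 1:
        return [i_left]
    step = (i_right - i_left) / (width - 1)
    return [i_left + k * step for k in range(width)]
```

Phong shading replaces this with interpolation of surface normals, evaluating the full lighting model per pixel instead of per vertex, which removes the highlight artifacts intensity interpolation can cause.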
Computer Graphics - Color Modeling and Rendering (Prince Soni)
This document discusses various color models and rendering techniques. It describes additive and subtractive color models, including the RGB, CMY, and HSV color models. It also discusses illumination models, including ambient light, diffuse reflection, and specular reflection. Common rendering techniques like Gouraud shading and Phong shading are summarized, which interpolate lighting across triangle surfaces. Ray tracing is also briefly explained as a technique for simulating light paths.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
This document discusses color in images and video. It begins with an overview of color science, including how light is characterized by wavelength and how the human eye perceives color. It then covers color models used in images and video, and explores color models further. The document details how color is formed in images based on the illumination, surface reflectance, and the eye's response. It also discusses color models used in camera systems and issues like gamma correction. The document provides an in-depth explanation of color matching functions and the CIE chromaticity diagram. It concludes with discussions of color monitor specifications and how to handle out-of-gamut colors.
This document discusses lighting and shading techniques in computer graphics. It begins by distinguishing between lighting, which refers to light-matter interaction, and shading, which determines pixel colors. Three common lighting models are described: Lambert, Phong, and Torrance-Sparrow. For shading, it covers flat, Gouraud, and Phong shading. Gouraud shading improves on flat shading but can cause visual artifacts that Phong shading helps address by interpolating normals rather than colors at each pixel.
This document provides an overview of light, color, and human color perception. It discusses that color is a psychological property resulting from light interacting with our visual system. The physics of light is described in terms of wavelength. Human color vision involves three types of cones that differ in photopigment sensitivity. Color can be represented using models like RGB, CIE XYZ, and HSV. Computer vision applications make use of color through techniques like color histograms, skin detection, and image segmentation.
This document discusses illumination models and shading techniques used in 3D rendering. It describes common illumination models including ambient illumination, diffuse reflection, and specular reflection. It also covers different polygon rendering methods like flat shading, Gouraud shading, and Phong shading. Examples are provided to illustrate the different illumination models and how they are used in rendering 3D objects and surfaces under various lighting conditions.
Interactive Volumetric Lighting Simulating Scattering and ShadowingMarc Sunet
This document describes an interactive volumetric lighting model that simulates scattering and hard shadows. It presents a lighting model based on emission, attenuation, and scattering. Light is propagated through the volume using a two-pass algorithm to compute incoming and outgoing light. Rendering applies the lighting model and uses a transfer function. A user study found the model provided better depth perception than Phong lighting. Future work includes integrating light sources and evaluating the model.
This document discusses various lighting and shading techniques used in computer graphics, including:
- Ray tracing and radiosity methods that aim to approximate physical light behavior more accurately but with higher computational cost.
- Phong illumination model that provides relatively fast approximations of light interactions.
- Calculation of diffuse and specular reflection components in the Phong model based on surface normals, light direction, and view direction.
- Different shading techniques like flat, Gouraud, and Phong shading that determine color values at polygon vertices and faces.
The document describes implementing Phong shading over polygonal surfaces using OpenGL. Key aspects include reading mesh files to obtain vertex and face data, calculating vertex normals, setting up a light source, and applying the Phong illumination model at each point. Phong shading is computationally expensive but produces higher quality results than Gouraud shading by interpolating normals. The implementation subdivides triangles recursively until the pixel level to apply Phong's equations. Results using pyramid and octahedron meshes demonstrated Phong shading generated superior images compared to Gouraud shading.
The document discusses color image processing and color models. It describes how color is perceived by the human visual system through rods and cones in the retina. Various color models are examined, including RGB, CMY, HSV, YIQ, and YUV. Color models transform between different representations of color, such as representing a color by its hue, saturation, and intensity rather than red, green, and blue values.
1) The illumination model calculates the intensity of light reflected at points on a surface based on three factors: the light source, surface properties, and observer position.
2) There are three types of light sources: point sources that emit equally in all directions, parallel sources like the sun, and distributed sources from a finite area. The structure of a surface determines how much light is reflected or absorbed.
3) Color models like RGB, CMY, HSV describe how colors can be created and mixed. The RGB model uses additive mixing of red, green and blue light, while CMY uses subtractive mixing of pigments.
Do Not just learn computer graphics an close your computer tab and go away..
APPLY them in real business,
Visit Daroko blog for real IT skills applications,androind, Computer graphics,Networking,Programming,IT jobs Types, IT news and applications,blogging,Builing a website, IT companies and how you can form yours, Technology news and very many More IT related subject.
-simply google:Daroko blog(professionalbloggertricks.com)
• Daroko blog (www.professionalbloggertricks.com)
• Presentation by Daroko blog, to see More tutorials more than this one here, Daroko blog has all tutorials related with IT course, simply visit the site by simply Entering the phrase Daroko blog (www.professionalbloggertricks.com) to search engines such as Google or yahoo!, learn some Blogging, affiliate marketing ,and ways of making Money with the computer graphic Applications(it is useless to learn all these tutorials when you can apply them as a student you know),also learn where you can apply all IT skills in a real Business Environment after learning Graphics another computer realate courses.ly
• Be practically real, not just academic reader
2. Introduction 1
Illumination model:
Given a point on a surface, what are the perceived
color and intensity? Also known as a lighting model
or a shading model.
Surface rendering:
Apply the Illumination model to color all pixels
of the surface.
H&B 17:531-532
4. Introduction 3
Illumination:
• Physics:
– Material properties, light sources, relative
positions, properties of the medium
• Psychology:
– Perception, what do we see
– Color!
• Often approximating models
H&B 17:531-532
5. Light sources 1
Light source: object that radiates energy.
Sun, lamp, globe, sky…
Intensity I = (Ired , Igreen , Iblue)
If Ired = Igreen = Iblue : white light
H&B 17-1:532-536
6. Light sources 2
Simple model: point light source
• position P and intensity I
• Light rays along straight lines
• Good approximation for small
light sources
H&B 17-1:532-536
7. Light sources 3
Simpler yet: point light source at infinity
• Direction V and intensity I
• Sunlight
V
H&B 17-1:532-536
8. Light sources 4
Damping: the intensity of light decreases with distance.
Energy is distributed over the area of a sphere, hence
Il = I / d2,
with d the distance to the light source.
In practice this is often too ‘aggressive’, hence
Il = I / (a0 + a1d + a2d2)
If the light source is at infinity: no damping with distance.
H&B 17-1:532-536
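The attenuation formula above is easy to check in plain C. This is a minimal sketch; the function name `attenuation` is mine, not from H&B:

```c
#include <assert.h>
#include <math.h>

/* Radial attenuation factor for a point light. With a0 = a1 = 0 and
   a2 = 1 this is the physical 1/d^2 falloff; the a0 and a1 terms
   soften it, as the slide suggests. */
double attenuation(double a0, double a1, double a2, double d) {
    return 1.0 / (a0 + a1 * d + a2 * d * d);
}
```

For example, with pure quadratic damping (a0 = a1 = 0, a2 = 1) a light at distance 2 is attenuated to a quarter of its intensity.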
9. Light sources 5
Directed light source, spotlight:
Light is primarily sent in the direction of Vlight.
Let the spotlight at P have axis Vlight and a light cone with
half-angle θl, and let Q be a surface point. With α the angle
between Vlight and the direction from P to Q:
If cos α ≥ cos θl (i.e. α ≤ θl), then Q is illuminated.
H&B 17-1:532-536
11. Light sources 6
More subtle: let I decrease with increasing angle α between
Vlight and the direction to the illuminated point.
Often used: Il = I cosn α.
The larger n, the stronger the light decreases away from the
axis of the light cone.
H&B 17-1:532-536
13. Surface illumination 1
• When light hits a surface, three things can
happen:
reflection
transmission
absorption
H&B 17-2:536-537
14. Surface illumination 2
• Suppose, a light source radiates white light,
consisting of red, green and blue light.
reflection
transmission
absorption
If only red light is reflected,
then we see a red surface.
H&B 17-2:536-537
15. Surface illumination 3
• Diffuse reflection: Light is uniformly reflected in
all directions
• Specular reflection: Light is reflected more strongly in
one direction.
specular reflection
diffuse reflection
H&B 17-2:536-537
16. Surface illumination 4
• Ambient light: light from the environment. Undirected
light, models reflected light of other objects.
H&B 17-2:536-537
17. Basic illumination model 1
Basic illumination model:
• Ambient light;
• Point light sources;
• Ambient reflection;
• Diffuse reflection;
• Specular reflection.
H&B 17-3:537-546
Reflection coefficients: kp = (kp,red , kp,green , kp,blue) for
p = a, d, s (ambient, diffuse, specular), with intensities
Ia (ambient light) and Il (light source l).
18. Basic illumination model 2
• Ambient light: environment light. Undirected light,
models reflected light of other objects.
Iamb = ka Ia
H&B 17-3:537-546
19. Basic illumination model 3
Perfect diffuse reflector: light is reflected
uniformly in all directions.
Intensity = energy / projected area. A patch dA viewed under
angle φ has projected area dA cos φ; since the reflected
energy is also proportional to cos φ, the perceived intensity
is the same from every viewing direction.
H&B 17-3:537-546
20. Basic illumination model 4
Perfect diffuse reflector: light is reflected
uniformly in all directions.
Lambert’s law:
Reflected energy is proportional to cos θ, where θ denotes
the angle between the normal N and the vector L toward the
light source.
H&B 17-3:537-546
21. Basic illumination model 5
Perfect diffuse reflector: light is reflected
uniformly in all directions.
Graphics model, diffuse reflection:
Il,dif = kd Il (N · L) if N · L > 0, and 0 if N · L ≤ 0,
with 0 ≤ kd ≤ 1 and L = (Psource − Psurf) / |Psource − Psurf|.
H&B 17-3:537-546
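The clamped diffuse term can be sketched in plain C. `Vec3` and the function name `diffuse` are mine; N and L are assumed to be unit vectors:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Lambertian diffuse term: kd * Il * (N . L), clamped to 0 when the
   light source is behind the surface (N . L <= 0). */
double diffuse(double kd, double il, Vec3 n, Vec3 l) {
    double n_dot_l = dot(n, l);
    return n_dot_l > 0.0 ? kd * il * n_dot_l : 0.0;
}
```

Head-on illumination gives the full kd Il; a light behind the surface contributes nothing.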
22. Basic illumination model 6
Perfect specular reflector: light is only reflected in one
direction. Angle of incidence is angle of reflection.
(figure: light direction L, normal N, reflection direction R)
H&B 17-3:537-546
23. Basic illumination model 7
Imperfect specular reflector: light is distributed in the
direction of the angle of reflection, dependent on the
roughness of the surface.
(figures: vectors N, L, R with the reflection lobe around R,
for a smooth (‘glad’) and a rough (‘ruw’) surface)
H&B 17-3:537-546
24. Basic illumination model 8
Phong model: empirical model for specular reflection
Il,spec = W(θ) Il cosns φ, with W(θ) = ks,
where
ns : smoothness (1 = rough, 100 = smooth),
φ : angle between R and V,
θ : angle between L and N,
R : direction of the reflected ray of light,
V : direction to the viewer.
H&B 17-3:537-546
25. Basic illumination model 9
Phong model: empirical model for specular reflection
Il,spec = ks Il (V · R)ns if V · R > 0 and N · L > 0,
Il,spec = 0 if N · L ≤ 0 or V · R ≤ 0.
H&B 17-3:537-546
26. Basic illumination model 10
Phong model: calculating the vectors
The incident and reflected directions are related by
R + L = 2 (N · L) N, hence
R = 2 (N · L) N − L.
V = (Pview − Psurf) / |Pview − Psurf|.
H&B 17-3:537-546
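The reflection-vector formula R = 2 (N · L) N − L translates directly into C. `Vec3` and `reflect_dir` are illustrative names; N and L are assumed unit length:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Mirror the unit light vector L about the unit normal N:
   R = 2 (N . L) N - L. */
Vec3 reflect_dir(Vec3 n, Vec3 l) {
    double s = 2.0 * dot(n, l);
    Vec3 r = { s * n.x - l.x, s * n.y - l.y, s * n.z - l.z };
    return r;
}
```

A light straight along the normal reflects back along the normal; a grazing light (N · L = 0) reflects to −L.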
27. Basic illumination model 11
Phong model: variant with halfway vector H:
H = (L + V) / |L + V|
Use (N · H)ns instead of (V · R)ns:
Il,spec = ks Il (N · H)ns
If the light source and the viewer are far away, H is constant.
H&B 17-3:537-546
28. Basic illumination model 12
All together:
I = Iamb + Idif + Ispec
  = ka Ia + kd Il max(0, N · L) + ks Il (max(0, N · H))ns
Multiple light sources:
I = ka Ia + Σl=1..n ( kd Il max(0, N · Ll) + ks Il (max(0, N · Hl))ns )
H&B 17-3:537-546
29. Color (reprise):
Light intensity I and reflection coefficients k: (r,g,b) triplets
So for instance:
Plastic: kd is colored (r,g,b), ks is grey (w,w,w)
Metal: kd and ks same color
Basic model: simple but effective.
It can be done much better though…
Basic illumination model 13
Per color channel, for instance red:
Idif,R = kd,R Il,R max(0, N · L)
H&B 17-3:537-546
31. Transparency 2
Snell’s law of refraction:
sin θr / sin θi = ηi / ηr, with η the index of refraction.
Derivation: use Snell’s law, |T| = 1, cos θi = N · L,
and solve for T:
T = ( (ηi / ηr) cos θi − cos θr ) N − (ηi / ηr) L
H&B 17-4:546-549
33. Transparency 3
Very thin surface:
• Discard shift
Simple model:
I = (1 − kt) Irefl + kt Itrans, with 0 ≤ kt ≤ 1,
kt : transparency, 1 − kt : opacity.
Poor result for silhouette edges…
H&B 17-4:546-549
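The simple transparency model is a plain linear blend. A one-line C sketch (the function name `blend_transparency` is mine):

```c
#include <assert.h>
#include <math.h>

/* Simple transparency: I = (1 - kt) * I_refl + kt * I_trans,
   with kt in [0,1] the transparency and 1 - kt the opacity. */
double blend_transparency(double kt, double i_refl, double i_trans) {
    return (1.0 - kt) * i_refl + kt * i_trans;
}
```

A 25% transparent surface over a dark background keeps 75% of the reflected intensity.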
37. Rendering polygons 1
Basic illumination model:
Can be used per point, but that’s
somewhat expensive
More efficient:
Illumination model gives color for some
points;
Surface is filled in using interpolation of
these colors.
H&B 17-10:559-564
38. Rendering polygons 2
Constant-intensity rendering aka flat surface rendering:
• Determine color for center of polygon;
• Fill the polygon with a constant color.
Ok if:
• Object consists of planar faces, and
• Light sources are far away, and
• Eye point is far away,
or
• Polygons are about a pixel in size.
H&B 17-10:559-564
39. Rendering polygons 2
Constant-intensity rendering aka flat surface rendering:
• Determine color for center of polygon;
• Fill the polygon with a constant color.
Highlights are not visible;
Faceted appearance, amplified by the Mach banding effect.
H&B 17-10:559-564
40. • Human perception: edges are given
emphasis, contrast is increased near edges.
Mach banding
Angel (2000)
H&B 17-10:559-564
41. Rendering polygons 2
Gouraud surface rendering:
• Determine average normal on vertices;
• Determine color for vertices;
• Interpolate the colors per polygon (incrementally).
The average normal NV at a vertex shared by n polygons with
normals N1 … Nn:
NV = ( Σk=1..n Nk ) / | Σk=1..n Nk |
H&B 17-10:559-564
42. Rendering polygons 3
Gouraud surface rendering:
• Much better result for curved surfaces
• Errors near highlights
• Linear interpolation still gives Mach banding
• Silhouettes are still not smooth
(figures: Gouraud vs. flat shading)
43. Rendering polygons 4
Phong surface rendering:
• Determine average normal per vertex;
• Interpolate normals per polygon (incrementally);
• Calculate color per pixel.
Fast Phong surface rendering:
Like Phong surface rendering, but use
2nd order approximation of color over
polygon:
I(x, y) = ax2 + bxy + cy2 + dx + ey + f
H&B 17-10:559-564
44. Rendering polygons 5
Phong surface rendering:
• Even better result for curved surfaces
• No errors at highlights
• No Mach banding
• Silhouettes remain coarse
• More expensive than flat or Gouraud shading
H&B 17-10:559-564
46. OpenGL Illumination
GLfloat lightPos[] = {2.0, 0.0, 3.0, 0.0};
GLfloat whiteColor[] = {1.0, 1.0, 1.0, 1.0};
GLfloat pinkColor[] = {1.0, 0.5, 0.5, 1.0};
glShadeModel(GL_SMOOTH); // Use smooth shading
glEnable(GL_LIGHTING); // Enable lighting
glEnable(GL_LIGHT0); // Enable light source #0
glLightfv(GL_LIGHT0, GL_POSITION, lightPos); // position LS 0
glLightfv(GL_LIGHT0, GL_DIFFUSE, whiteColor); // set color LS 0
glMaterialfv(GL_FRONT, GL_DIFFUSE, pinkColor); // set surface
// color
glBegin(GL_TRIANGLES);
glNormal3fv(n1); glVertex3fv(v1); // draw triangle, give
glNormal3fv(n2); glVertex3fv(v2); // first normal, followed
glNormal3fv(n3); glVertex3fv(v3); // by vertex
glEnd();
H&B 17-11:564-574
47. OpenGL Light-sources 1
H&B 17-11:564-574
First, enable lighting in general:
glEnable(GL_LIGHTING);
OpenGL provides (at least) eight light-sources:
lightName = GL_LIGHT0, GL_LIGHT1, … , GL_LIGHT7
Enable the one(s) you need with:
glEnable(lightName);
Set properties with
glLight*(lightName, lightProperty, propertyValue);
* = i, f, iv, or fv (i: integer, f: float, v: vector)
48. OpenGL Light-sources 2
H&B 17-11:564-574
Position light-source:
GLfloat sunlightPos[] = {2.0, 0.0, 3.0, 0.0};
GLfloat lamplightPos[] = {2.0, 0.0, 3.0, 1.0};
glLightfv(GL_LIGHT1, GL_POSITION, sunlightPos);
glLightfv(GL_LIGHT2, GL_POSITION, lamplightPos);
• Fourth coordinate = 0: source at infinity
• Fourth coordinate = 1: local source
• Specified in world-coordinates, according to the current
ModelView specification – just like geometry. Hence, take
care when you specify the position.
• Light from above looks more natural
49. OpenGL Light-sources 3
H&B 17-11:564-574
Color light-source:
GLfloat greyColor[] = {0.3, 0.3, 0.3, 1.0};
GLfloat pinkColor[] = {1.0, 0.7, 0.7, 1.0};
GLfloat whiteColor[] = {1.0, 1.0, 1.0, 1.0};
glLightfv(GL_LIGHT1, GL_AMBIENT, greyColor);
glLightfv(GL_LIGHT1, GL_DIFFUSE, pinkColor);
glLightfv(GL_LIGHT1, GL_SPECULAR, whiteColor);
• An OpenGL light-source has three color properties, one per reflection
type. Not physically realistic, but usable for special effects.
• If you don’t have ambient light, things often appear black.
• Colors are always 4-vectors here: Fourth coordinate is alpha.
Most cases: set it to 1.0.
• More settings: See book
50. OpenGL Global Lighting
H&B 17-11:564-574
Global parameters:
glLightModel*(paramName, paramValue);
* = i, f, iv, or fv (i: integer, f: float, v: vector)
Global ambient light:
GLfloat globalAmbient[] = {0.3, 0.3, 0.3, 1.0};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, globalAmbient);
More precise specular reflection, taking the view position into account:
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
Two-sided lighting:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
52. OpenGL Surface properties 2
H&B 17-11:564-574
If colors are changed often (for instance, per vertex):
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
glBegin(…);
for i = ...
for j = ...
glColor3f(red(i,j), green(i,j), blue(i,j));
glVertex3f(x(i,j), y(i,j), z(i,j));
glEnd();
53. OpenGL Surface properties 3
H&B 17-11:564-574
Transparent surfaces:
• First, draw all opaque surfaces;
• Next, draw transparent surfaces, back to front*, using
something like:
glColor4f(R, G, B, A); // A: alpha, for instance 0.40
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
... Draw transparent surfaces.
glDisable(GL_BLEND);
* OpenGL cannot automatically handle transparency, because of the
z-buffer algorithm used for hidden surface removal. More on this
later.
54. OpenGL Surface properties 4
H&B 17-11:564-574
Color Blending (see also H&B: 135-136):
Source: the new graphics object to be drawn;
Destination: the current image built up.
(RS, GS, BS, AS): Source color + alpha
(RD, GD, BD, AD): Destination color + alpha
(SR, SG, SB, SA): Source blending factors
(DR, DG, DB, DA): Destination blending factors
Components of Source and Destination are weighted and added:
(SR·RS + DR·RD, SG·GS + DG·GD, SB·BS + DB·BD, SA·AS + DA·AD)
is stored in the current image.
55. OpenGL Surface properties 5
H&B 17-11:564-574
(RS, GS, BS, AS): Source color + alpha
(RD, GD, BD, AD): Destination color + alpha
(SR, SG, SB, SA): Source blending factors
(DR, DG, DB, DA): Destination blending factors
glBlendFunc(sFactor, dFactor): specify the blending factors.
glBlendFunc(GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA);
// Use alpha of source as transparency
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// Use alpha of source as opacity
More options available for special effects.
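The weighted sum above can be sketched for a single channel; with source factor GL_SRC_ALPHA and destination factor GL_ONE_MINUS_SRC_ALPHA the source alpha acts as opacity. The function name is illustrative:

```c
/* One color channel of the blend equation: result = S*src + D*dst.
   Factors correspond to glBlendFunc(GL_SRC_ALPHA,
   GL_ONE_MINUS_SRC_ALPHA): source alpha acts as opacity. */
double blend_channel(double src, double dst, double srcAlpha) {
    double S = srcAlpha;          /* GL_SRC_ALPHA */
    double D = 1.0 - srcAlpha;    /* GL_ONE_MINUS_SRC_ALPHA */
    return S*src + D*dst;
}
```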
56. OpenGL Surface-Rendering 1
H&B 17-11:564-574
glShadeModel(m): specify the rendering method
m = GL_FLAT or m = GL_SMOOTH (Gouraud, default)
glNormal*(Nx, Ny, Nz) : specify the normal vector
Flat version:
glNormal3fv(nV);
glBegin(GL_TRIANGLES);
glVertex3fv(V1);
glVertex3fv(V2);
glVertex3fv(V3);
glEnd();
Smooth version:
glBegin(GL_TRIANGLES);
glNormal3fv(nV1);
glVertex3fv(V1);
glNormal3fv(nV2);
glVertex3fv(V2);
glNormal3fv(nV3);
glVertex3fv(V3);
glEnd();
57. OpenGL Surface-Rendering 2
H&B 17-11:564-574
glShadeModel(m): specify the rendering method
m = GL_FLAT or m = GL_SMOOTH (Gouraud, default)
glNormal*(Nx, Ny, Nz) : specify the normal vector
glEnable(GL_NORMALIZE): Let OpenGL normalize the normals for
you. And, also take care of effects of
scaling, shearing, etc.
63. Hue - Paint Mixing
• Physical mix of
opaque paints
• Primary: RYB
• Secondary: OGV
• Neutral: R + Y + B
64. Hue - Ink Mixing
• Subtractive mix of
transparent inks
• Primary: CMY
• Secondary: RGB
• ~Black: C + M + Y
• Actually use CMYK
to get true black
65. Hue - Ink Mixing
Assumption: ink printed on pure white paper
CMY = White – RGB:
C = 1 – R, M = 1 – G, Y = 1 – B
CMYK from CMY (K is black ink):
K = min(C, M, Y)
C = C – K, M = M – K, Y = Y - K
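The CMY and CMYK formulas above fit in a few lines of C; the function name is illustrative, and components are taken in [0, 1]:

```c
/* RGB -> CMYK per the slide: C = 1-R, M = 1-G, Y = 1-B,
   K = min(C, M, Y), then subtract K from each ink. */
void rgb_to_cmyk(double r, double g, double b,
                 double *c, double *m, double *y, double *k) {
    *c = 1.0 - r;
    *m = 1.0 - g;
    *y = 1.0 - b;
    *k = *c;                       /* K = min(C, M, Y) */
    if (*m < *k) *k = *m;
    if (*y < *k) *k = *y;
    *c -= *k;                      /* remove the gray component */
    *m -= *k;
    *y -= *k;
}
```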
66. Hue - Light Mixing
• Additive mix of
colored lights
• Primary: RGB
• Secondary: CMY
• White = R + G + B
• Show demonstration
of optical mixing
91. Output Primitives
• The basic objects out of which a graphics
display is created are called output primitives.
• They describe the geometry of objects and
are typically referred to as geometric
primitives.
• Examples: point, line, text, filled region,
images, quadric surfaces, spline curves
• Each of the output primitives has its own
set of attributes.
94. Output Primitives
• Polylines (open)
• A set of line segments joined end to end.
• Attributes: Color, Thickness, Type
glLineWidth(p);
glBegin(GL_LINE_STRIP);
glVertex2d(x1, y1);
glVertex2d(x2, y2);
glVertex2d(x3, y3);
glVertex2d(x4, y4);
glEnd();
95. Output Primitives
• Polylines (closed)
• A polyline with the last point connected to the first
point.
• Attributes: Color, Thickness, Type
Note: A closed polyline cannot be filled.
glBegin(GL_LINE_LOOP);
glVertex2d(x1, y1);
glVertex2d(x2, y2);
glVertex2d(x3, y3);
glVertex2d(x4, y4);
glEnd();
96. Output Primitives
• Polygons
• A closed sequence of line segments enclosing a region.
• Attributes: Fill color, Thickness, Fill pattern
Note: Polygons can be filled.
glBegin(GL_POLYGON);
glVertex2d(x1, y1);
glVertex2d(x2, y2);
glVertex2d(x3, y3);
glVertex2d(x4, y4);
glEnd();
98. Output Primitives
• Images
• Attributes: Image Size, Image Type, Color
Depth.
• Image Type:
• Binary (only two levels)
• Monochrome
• Color.
• Color Depth:
Number of bits used to represent color.
99. TCS2111
Output Primitives
Output Primitive Attributes
Point Size
Color
Line Thickness (1pt, 2pt …)
Type (Dashed, Dotted, Solid)
Color
Text Font (Arial, Courier, Times Roman…)
Size (12pt, 16pt ..)
Spacing
Orientation (Slant angle)
Style (Bold, Underlined, Double lined)
Color
Filled Region Fill Pattern
Fill Type (Solid Fill, Gradient Fill)
Fill Color
Images Color Depth (Number of bits/pixel)
101. Line Drawing
• Line drawing is fundamental to computer graphics.
• We must have fast and efficient line drawing functions.
Rasterization Problem: Given only the two end points, how
to compute the intermediate pixels, so that the set of pixels
closely approximate the ideal line.
102. Line Drawing - Analytical Method
Line through A(ax, ay) and B(bx, by), for ax ≤ x ≤ bx:
y = mx + c, where
m = (by − ay) / (bx − ax)
c = ay − m·ax
103. • Directly based on the analytical equation of a line.
• Involves floating point multiplication and addition
• Requires round-off function.
double m = (double)(by-ay)/(bx-ax);
double c = ay - m*ax;
double y;
int iy;
for (int x=ax ; x <=bx ; x++) {
y = m*x + c;
iy = round(y);
setPixel (x, iy);
}
Line Drawing - Analytical Method
104. Compute one point based on the previous point:
(x0, y0), …, (xk, yk), (xk+1, yk+1), …
I have got a pixel on the line (Current Pixel).
How do I get the next pixel on the line?
Next pixel on next column
(when slope is small)
Next pixel on next row
(when slope is large)
Incremental Algorithms
105. Current pixel: (xk, yk). To find (xk+1, yk+1):
xk+1 = xk + 1
yk+1 = ?
[Figure: current pixel (5,2); candidate next pixels (6,1), (6,2), (6,3)]
• Assumes that the next pixel to be set is on the next column of
pixels (Incrementing the value of x !)
• Not valid if slope of the line is large.
Incrementing along x
106. Digital Differential Analyzer Algorithm is an incremental
algorithm.
Assumption: Slope is less than 1 (Increment along x).
Current Pixel = (xk, yk).
(xk, yk) lies on the given line. yk = m.xk + c
Next pixel is on next column. xk+1 = xk+1
Next point (xk+1, yk+1) on the line yk+1 = m.xk+1 + c
= m (xk+1) +c
= yk + m
Given a point (xk, yk) on a line, the next point is given by
xk+1 = xk+1
yk+1 = yk + m
Line Drawing - DDA
107. • Does not involve any floating point multiplication.
• Involves floating point addition.
• Requires round-off function
Line Drawing - DDA
double m = (double) (by-ay)/(bx-ax);
double y = ay;
int iy;
for (int x=ax ; x <=bx ; x++) {
iy = round(y);
setPixel (x, iy);
y += m;
}
108. Midpoint Algorithm
The midpoint algorithm is an incremental algorithm.
Assumption: slope < 1.
xk+1 = xk + 1
yk+1 = either yk or yk + 1
109. Midpoint Algorithm - Notations
Current pixel: (xk, yk)
Candidate pixels: (xk+1, yk) and (xk+1, yk+1)
Coordinates of Midpoint = (xk+1, yk+(1/2))
110. Midpoint Algorithm: Choice of the next pixel
[Figure: midpoint below the line vs. midpoint above the line]
• If the midpoint is below the line, then the next pixel is (xk+1, yk+1).
• If the midpoint is above the line, then the next pixel is (xk+1, yk).
111. A(ax,ay)
B(bx,by)
Equation of a line revisited.
(y − ay) / (by − ay) = (x − ax) / (bx − ax)
Let w = bx − ax, and h = by − ay.
Then, h(x − ax) − w(y − ay) = 0.
(h, w, ax, ay are all integers.)
In other words, every point (x, y) on the line
satisfies the equation F(x, y) = 0, where
F(x, y) = h(x − ax) − w(y − ay).
112. Midpoint Algorithm:
Regions below and above the line.
F(x, y) > 0 for any point below the line
F(x, y) < 0 for any point above the line
F(x, y) = 0 on the line
F(MP) > 0 : midpoint below line
F(MP) < 0 : midpoint above line
Midpoint Algorithm
Decision Criteria
114. Midpoint Algorithm
Decision Criteria
F(MP) = F(xk+1, yk+ ½) = Fk (Notation)
If Fk < 0 : The midpoint is above the line. So the next
pixel is (xk+1, yk).
If Fk ≥ 0 : The midpoint is below or on the line. So the
next pixel is (xk+1, yk+1).
Decision Parameter
115. Midpoint Algorithm – Story so far.
Midpoint Below Line
Next pixel = (xk+1, yk+1)
Fk ≥ 0
yk+1 = yk+1
Midpoint Above Line
Next pixel = (xk+1, yk)
Fk < 0
yk+1 = yk
116. Midpoint Algorithm
Update Equation
Fk = F(xk+1, yk+½) = h(xk + 1 − ax) − w(yk + ½ − ay)
But, Fk+1 = Fk + h − w(yk+1 − yk). (Refer notes)
So,
Fk < 0 : yk+1 = yk. Hence, Fk+1 = Fk + h.
Fk ≥ 0 : yk+1 = yk + 1. Hence, Fk+1 = Fk + h − w.
F0 = h − w/2.
Update Equation
117. Midpoint Algorithm
117
int h = by-ay;
int w = bx-ax;
float F = h - w/2.0f;  // F0 = h - w/2 (avoid integer division)
int x, y = ay;
for (x=ax; x<=bx; x++){
    setPixel(x, y);
    if (F < 0)
        F += h;
    else {
        F += h - w;
        y++;
    }
}
118. Bresenham’s Algorithm
118
int h = by-ay;
int w = bx-ax;
int F = 2*h - w;  // scale F by 2: all updates stay integer
int x, y = ay;
for (x=ax; x<=bx; x++){
    setPixel(x, y);
    if (F < 0)
        F += 2*h;
    else {
        F += 2*(h - w);
        y++;
    }
}
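The same loop packaged as a checkable function; note that, unlike the midpoint version, every quantity is an integer. It assumes 0 ≤ slope ≤ 1, and the names are illustrative:

```c
/* Bresenham line, integer arithmetic only, recording the row chosen for
   each x = ax..bx. Assumes 0 <= slope <= 1. Returns the pixel count. */
int bresenham_line(int ax, int ay, int bx, int by, int *ys) {
    int h = by - ay;
    int w = bx - ax;
    int F = 2*h - w;            /* decision parameter, scaled by 2 */
    int y = ay, n = 0;
    for (int x = ax; x <= bx; x++) {
        ys[n++] = y;
        if (F < 0)
            F += 2*h;           /* midpoint above the line: keep the row */
        else {
            F += 2*(h - w);     /* midpoint below or on the line: step up */
            y++;
        }
    }
    return n;
}
```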
120. Midpoint Circle Drawing
Algorithm
• To determine the closest pixel position to the
specified circle path at each step.
• For given radius r and screen center position (xc,
yc), calculate pixel positions around a circle path
centered at the coordinate origin (0, 0).
• Then, move each calculated position (x, y) to its
proper screen position by adding xc to x and yc
to y.
• Along the circle section from x=0 to x=y in the
first quadrant, the gradient varies from 0 to -1.
122. Midpoint Circle Drawing Algorithm
Circle function: fcircle(x, y) = x² + y² − r²
fcircle(x, y) > 0 : (x, y) outside the circle
fcircle(x, y) < 0 : (x, y) inside the circle
fcircle(x, y) = 0 : (x, y) on the circle boundary
123. Midpoint Circle Drawing Algorithm
Fk < 0 : midpoint inside the circle.
Next pixel = (xk+1, yk), i.e. yk+1 = yk
Fk ≥ 0 : midpoint on or outside the circle.
Next pixel = (xk+1, yk−1), i.e. yk+1 = yk − 1
124. Midpoint Circle Drawing Algorithm
We know xk+1 = xk + 1.
Fk = F(xk+1, yk − ½)
Fk = (xk + 1)² + (yk − ½)² − r² -------- (1)
Fk+1 = F(xk+1 + 1, yk+1 − ½)
Fk+1 = (xk + 2)² + (yk+1 − ½)² − r² -------- (2)
(2) − (1):
Fk+1 = Fk + 2(xk + 1) + (y²k+1 − y²k) − (yk+1 − yk) + 1
If Fk < 0 : Fk+1 = Fk + 2xk+1 + 1
If Fk ≥ 0 : Fk+1 = Fk + 2xk+1 + 1 − 2yk+1
125. Midpoint Circle Drawing Algorithm
For the initial point, (x0, y0) = (0, r):
f0 = fcircle(1, r − ½)
   = 1 + (r − ½)² − r²
   = 5/4 − r
   ≈ 1 − r
126. Midpoint Circle Drawing Algorithm
Example:
Given a circle of radius r = 10, determine the circle octant
in the first quadrant from x = 0 to x = y.
Solution:
f0 = 5/4 − r = 5/4 − 10 = −8.75 ≈ −9
128. Midpoint Circle Drawing Algorithm
void circleMidpoint (int xCenter, int yCenter, int radius)
{
    int x = 0;
    int y = radius;
    int f = 1 - radius;
    circlePlotPoints(xCenter, yCenter, x, y);
    while (x < y) {
        x++;
        if (f < 0)
            f += 2*x + 1;
        else {
            y--;
            f += 2*(x - y) + 1;
        }
        circlePlotPoints(xCenter, yCenter, x, y);  // plot every step
    }
}
129. Midpoint Circle Drawing Algorithm
void circlePlotPoints (int xCenter, int yCenter,
int x, int y)
{
setPixel (xCenter + x, yCenter + y);
setPixel (xCenter – x, yCenter + y);
setPixel (xCenter + x, yCenter – y);
setPixel (xCenter – x, yCenter – y);
setPixel (xCenter + y, yCenter + x);
setPixel (xCenter – y, yCenter + x);
setPixel (xCenter + y, yCenter – x);
setPixel (xCenter – y, yCenter – x);
}
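Combining the decision-parameter updates with the loop gives a checkable sketch of the first octant, with the center at the origin (names are illustrative). For r = 10 the initial decision value is 1 − r = −9, as in the example above:

```c
/* Midpoint circle, first octant only (from (0, r) until x >= y), with
   the center at the origin; records each plotted (x, y). Returns the
   pixel count. */
int circle_octant(int radius, int *px, int *py) {
    int x = 0, y = radius;
    int f = 1 - radius;            /* f0 = 5/4 - r, rounded to 1 - r */
    int n = 0;
    px[n] = x; py[n] = y; n++;
    while (x < y) {
        x++;
        if (f < 0)
            f += 2*x + 1;          /* midpoint inside: keep y */
        else {
            y--;
            f += 2*(x - y) + 1;    /* midpoint outside: decrement y */
        }
        px[n] = x; py[n] = y; n++;
    }
    return n;
}
```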
131. Antialiasing
Antialiasing is a technique used to diminish
the jagged edges of an image or a line, so
that the line appears smoother, by
changing the pixels around the edges to
intermediate colors or gray scales.
[Figure: the same line with antialiasing disabled vs. enabled]
134. Fill Area Algorithms
• Fill-Area algorithms are used to fill the
interior of a polygonal shape.
• Many algorithms perform fill operations
by first identifying the interior points,
given the polygon boundary.
135. Basic Filling Algorithm
The basic filling algorithm is commonly used
in interactive graphics packages, where the
user specifies an interior point of the region to
be filled.
4-connected pixels
136. Basic Filling Algorithm
[1] Set the user-specified seed point.
[2] Push its four neighboring pixels onto a
stack.
[3] Pop a pixel from the stack.
[4] If the pixel is not set,
Set the pixel
Push its four neighboring pixels onto the stack
[5] Repeat from step 3 until the stack is empty.
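The steps above can be sketched as an explicit-stack fill on a small grid. The grid encoding and all names are illustrative (0 = unset, 1 = set; set pixels also act as the boundary):

```c
/* Explicit-stack fill of 4-connected pixels on a small grid:
   0 = unset, 1 = set. The seed is pushed, then pixels are popped,
   set if still unset, and their four neighbours pushed in turn. */
#define W 8
#define H 8

void stack_fill(int grid[H][W], int seedX, int seedY) {
    int xs[W*H*5], ys[W*H*5], top = 0;
    xs[top] = seedX; ys[top] = seedY; top++;   /* seed the stack */
    while (top > 0) {
        top--;
        int x = xs[top], y = ys[top];          /* pop a pixel */
        if (x < 0 || x >= W || y < 0 || y >= H) continue;
        if (grid[y][x] != 0) continue;         /* already set or boundary */
        grid[y][x] = 1;                        /* set the pixel */
        xs[top] = x+1; ys[top] = y;   top++;   /* push 4 neighbours */
        xs[top] = x-1; ys[top] = y;   top++;
        xs[top] = x;   ys[top] = y+1; top++;
        xs[top] = x;   ys[top] = y-1; top++;
    }
}
```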
138. Basic Filling Algorithm
• Requires an interior point.
• Involves considerable amount of stack
operations.
• The boundary has to be closed.
• Not suitable for self-intersecting
polygons
139. Types of Basic Filling Algorithms
• Boundary Fill Algorithm
• For filling a region with a single boundary
color.
• Condition for setting pixels:
• Color is not the same as border color
• Color is not the same as fill color
• Flood Fill Algorithm
• For filling a region with multiple boundary
colors.
• Condition for setting pixels:
• Color is same as the old interior color
140. Boundary Fill Algorithm (Code)
void boundaryFill(int x, int y,
int fillColor, int borderColor)
{
getPixel(x, y, color);
if ((color != borderColor)
&& (color != fillColor)) {
setPixel(x,y);
boundaryFill(x+1,y,fillColor,borderColor);
boundaryFill(x-1,y,fillColor,borderColor);
boundaryFill(x,y+1,fillColor,borderColor);
boundaryFill(x,y-1,fillColor,borderColor);
}
}
141. Flood Fill Algorithm (Code)
void floodFill(int x, int y,
int fillColor, int oldColor)
{
getPixel(x, y, color);
if (color == oldColor)  // only recolor the old interior color
{
setPixel(x,y);
floodFill(x+1, y, fillColor, oldColor);
floodFill(x-1, y, fillColor, oldColor);
floodFill(x, y+1, fillColor, oldColor);
floodFill(x, y-1, fillColor, oldColor);
}
}