Introduction
• What is a realistic image?
– A picture that captures many of the effects of light interacting with real physical objects.
– Realism is a continuum: we can speak freely of pictures, and of the techniques used to create them, as being more or less realistic.
– At one end of the continuum are examples of what is often called photographic realism.
– These pictures attempt to synthesize the field of light intensities that would be focused on the film plane of a camera aimed at the objects depicted.
Cont.
• A realistic picture is not necessarily more desirable. The ultimate goal of a picture is to convey information; a picture that is free of the complications of shadows and reflections may well convey it more successfully than a photographically realistic one.
Cont.
Creating a realistic picture involves the following stages:
1) Modeling the objects.
2) Specifying the viewing parameters and lighting conditions.
3) Visible-surface determination.
4) Determining the color of each pixel as a function of the light reflected and transmitted by the objects.
5) For an animated sequence, defining the time-varying changes in the models, lighting, and viewing specifications.
The process of creating realistic images from models is called rendering.
Applications of realistic pictures
1. Simulation
2. Design of 3D objects such as automobiles, airplanes, etc.
3. Entertainment and advertising
4. Research and education
5. Command and control
Difficulties
• The main obstacle to total visual realism is the complexity of the real world; observe the richness of your environment.
• Solution:
– Pursue a subgoal of realism: provide sufficient information to let the viewer understand the 3D spatial relationships among several objects.
Cont.
• Line drawing of two houses:
– A simple line drawing suffices to persuade us that one building is partially behind the other.
• Most display devices are 2D; therefore, 3D objects must be projected into 2D, with considerable attendant loss of information, which can sometimes create ambiguities in the image.
Stairway being viewed from above or from below?
Rendering techniques for line drawings
• Multiple orthographic views:
– The projection plane is perpendicular to a principal axis, so depth information is discarded.
– Training and experience sharpen one's interpretive power.
• Axonometric and oblique projections:
– A point's z coordinate influences its x and y coordinates in the projection.
– These projections provide constant foreshortening, and therefore lack the convergence of parallel lines and the decreasing size of objects with increasing distance.
Cont.
• Perspective projections:
– An object's size is scaled in inverse proportion to its distance from the viewer.
– Cube example: perspective projection of a cube.
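The inverse scaling with distance can be sketched in a few lines. This is an illustrative model only: it assumes the center of projection is at the origin, the viewer looks down the +z axis, and the projection plane sits at z = d (the function name and parameters are hypothetical, not from the source).

```python
def perspective_project(x, y, z, d=1.0):
    """Project the 3D point (x, y, z) onto the plane z = d, with the
    center of projection at the origin. The projected coordinates
    shrink in inverse proportion to the point's depth z."""
    return (x * d / z, y * d / z)
```

For the cube example: a corner on the near face (say z = 2) projects farther from the image center than the matching corner on the far face (z = 4), which is why the far face draws smaller.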
Cont.
– If we view a picture in which an elephant is the same size as a human, we assume that the elephant is farther away, since we know that elephants are larger than humans.
• Depth cueing:
– Depth can be represented by the intensity of the image: parts of the objects that are intended to appear farther from the viewer are displayed at lower intensity. This effect is known as depth cueing.
– In vector displays, depth cueing is implemented by interpolating the intensity of the beam along a vector as a function of its starting and ending z coordinates.
– Distant objects appear dimmer than closer objects.
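A minimal sketch of depth cueing, assuming a simple linear falloff between a front and a back plane (real systems may use other falloff curves; the function and parameter names here are hypothetical):

```python
def depth_cue(intensity, z, z_front, z_back, min_scale=0.2):
    """Attenuate an intensity linearly with depth: full intensity at the
    front plane, min_scale * intensity at the back plane. Depths
    outside the two planes are clamped."""
    t = (z - z_front) / (z_back - z_front)   # 0 at front plane, 1 at back
    t = max(0.0, min(1.0, t))                # clamp to [0, 1]
    return intensity * (1.0 - t * (1.0 - min_scale))
```

A vector display would evaluate this at a vector's two endpoint depths and interpolate the beam intensity between them.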
Cont.
• Depth clipping:
– The back clipping plane is placed so as to cut through the objects being displayed. By allowing the position of one or both planes to be varied dynamically, the system can convey more depth information to the viewer.
– Whereas in depth cueing intensity is a smooth function of z, in depth clipping it is a step function.
– A related technique highlights all points on the object intersected by some plane; it is most effective when the slicing plane is shown moving through the object dynamically.
Cont.
• Texture:
– Texture may be applied to an object; textures follow the shape of an object and delineate it more clearly.
– Texturing one of a set of otherwise identical faces can clarify a potentially ambiguous projection.
• Color:
– Color may be used symbolically to distinguish one object from another; color can also be used in line drawings to provide other information.
Cont.
• Visible-line determination:
– Determine which lines are visible. Only surfaces bounded by lines can obscure other lines, so objects that are to block others must be modeled either as collections of surfaces or as solids.
– Views with hidden lines removed entirely convey less depth information; instead of being removed, hidden lines can be shown as dashed lines.
Rendering techniques for shaded images
• Visible-surface determination:
– Also known as hidden-surface removal.
– Displaying only those parts of surfaces that are visible to the viewer.
– If surfaces are rendered as opaque areas, then visible-surface determination is essential for the picture to make sense.
Cont.
• Illumination and shading:
– The problem with visible-surface determination alone is that each object appears flat.
– So the next step toward realism is to shade the visible surfaces.
– Each surface's appearance should depend on the types of light sources illuminating it, its properties (color, texture, reflectance), and its position and orientation with respect to the light sources, the viewer, and other surfaces.
Cont.
• Types of light sources:
– Point source: rays emanate from a single point; can approximate a small bulb.
– Ambient light: light that impinges from all directions.
• It is the easiest kind of light source to model.
• It is assumed to produce constant illumination on all surfaces, regardless of their position or orientation.
Cont.
– Directional source: a source whose rays all come from the same direction, e.g. the sun.
• Modeling this source requires additional work because its effect depends on the surface's orientation.
• If the surface is normal (perpendicular) to the incident light rays, it is brightly illuminated; the more oblique the surface is to the light rays, the less its illumination.
– Distributed (extended) source: a source whose emitting surface area means the light comes from neither a single direction nor a single point.
• e.g. fluorescent lights.
• Even more complex to model.
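The orientation dependence for a directional source is captured by Lambert's cosine law: brightness is proportional to the cosine of the angle between the surface normal and the direction to the light, which for unit vectors is just their dot product. A minimal sketch (function name and argument conventions are assumptions for illustration):

```python
def diffuse_brightness(normal, light_dir):
    """Brightness of a surface under a directional source (Lambert's law):
    the clamped dot product of the unit surface normal and the unit
    direction toward the light. 1.0 when the surface faces the light
    head-on, falling to 0.0 as the surface turns edge-on or away."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    return max(0.0, nx * lx + ny * ly + nz * lz)
```

A surface facing the light directly gets full brightness; an oblique surface gets the cosine of its tilt; a surface facing away gets zero rather than a negative value.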
Cont.
• Interpolated shading:
– Shading information is computed for each polygon vertex and interpolated across the polygon to determine the shading at each pixel.
– The shading information computed at each vertex can be based on the surface's actual orientation at that point and is used for all of the polygons that share that vertex.
– Interpolating among these values across a polygon approximates the smooth changes in shade that occur across a curved surface.
– Gouraud shading is an example.
Cont.
• Material properties:
– Realism is further enhanced if the material properties of each object are taken into account when shading is determined.
– Dull surfaces disperse reflected light about equally in all directions (diffuse reflection).
– Shiny surfaces reflect light only in certain directions relative to the viewer and light source (specular reflection), e.g. a mirror.
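The direction-dependent shiny term can be sketched with the classic Phong specular model, one common way (an assumption here, not named by the source) to capture "reflects light only in certain directions": light bounces about the normal, and brightness falls off sharply as the viewer leaves the mirror direction. All vectors are assumed unit length; the shininess exponent is illustrative.

```python
def specular_brightness(normal, light_dir, view_dir, shininess=32):
    """Phong-style specular term: reflect the light direction about the
    surface normal, then raise the clamped angle-cosine between the
    reflection and view directions to a shininess power. Higher
    shininess gives a tighter, more mirror-like highlight."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    reflect = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    r_dot_v = sum(r * v for r, v in zip(reflect, view_dir))
    return max(0.0, r_dot_v) ** shininess
```

A viewer sitting exactly in the mirror direction sees the full highlight; even a modest tilt away makes the term collapse toward zero, which is why highlights on shiny objects look small and bright.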
• Texture:
– Object texture not only provides additional depth cues but also can mimic the surface detail of real objects.
• Shadows:
– Further realism can be added by reproducing the shadows that objects cast on one another.
– Shadowing is a technique in which the appearance of an object's visible surfaces is affected by other objects.
– Shadows enhance realism and provide additional depth cues: if object A casts a shadow on surface B, then we know that A is between B and a direct or reflected light source.
– A point light source casts sharp shadows, because from any point it is either totally visible or totally invisible.
– An extended light source casts soft shadows, since there is a smooth transition from those points that see all of the light source, through those that see only part of it, to those that see none of it.
• Transparency and reflection:
– So far we have considered surfaces as opaque; now we consider transparent surfaces.
– Simple models of transparency do not include the refraction (bending) of light through a transparent solid.
– More complex models include refraction, diffuse transparency, and the attenuation of light with distance; they also consider diffuse and specular reflection.
– These models require knowledge of other surfaces besides the surface being shaded.
– They also require objects to be modeled as solids rather than just as surfaces: to model refraction, we must know something about the materials through which a light ray passes and the distance it travels.
– The amount of light from a light source illuminating a surface is inversely proportional to the square of the distance from the light source to the surface. Hence, surfaces of an object that are farther from the light source are darker (shading), which gives cues to both depth and shape.
– Shadows cast by one object on another (shadowing) also give cues to relative position and size.
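The inverse-square relationship above is a one-liner; the sketch below states it directly (function name is hypothetical, and practical renderers often add constant and linear terms to the denominator to avoid the blow-up as distance approaches zero):

```python
def attenuated_intensity(source_intensity, distance):
    """Light reaching a surface falls off with the square of its distance
    from the source: doubling the distance quarters the intensity."""
    return source_intensity / (distance * distance)
```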
• Improved camera models:
– So far we have considered a camera model with a pinhole lens and an infinitely fast shutter: all objects are in sharp focus, and the picture represents the world at a single instant in time.
– It is possible to model more accurately the way that we, and cameras, see the world.
– e.g., by modeling the focal properties of lenses, we can produce pictures that show depth of field: some parts of objects are in focus, whereas closer and farther parts are out of focus.
– Moving objects look different from stationary objects in a picture taken with a regular still or movie camera. Because the shutter is open for a finite period of time, the visible parts of moving objects are blurred across the film plane. This effect is called motion blur.
Other ways of adding realism
• Dynamics: changes that spread across a sequence of pictures, including changes in position, size, material properties, lighting, and viewing specifications.
– The most popular kind of dynamics is motion, ranging from simple transformations to animation.
– If a series of projections of the same object, each from a slightly different viewpoint around the object, is displayed in rapid succession, the object appears to rotate.
– Objects in motion can be rendered with less detail.
• Stereopsis:
– Look at your desk or table top first with one eye, then with the other.
– The two views differ slightly because our eyes are separated from each other by a few inches.
– The binocular disparity caused by this separation provides a powerful depth cue called stereopsis, or stereo vision.
– Our brain fuses the two separate images into one that is interpreted as being in 3D. The two images are called a stereo pair; they are used today in the common toy, the View-Master.
– You can fuse the two images into one 3D image by viewing them such that each eye sees only one image, e.g. by placing a stiff piece of paper between the two images, perpendicular to the page.
– A variety of other techniques exist for providing a different image to each eye, including glasses with polarizing filters and holography.
– Some of these techniques make possible 3D images that occupy space, rather than being projected on a single plane. These displays can provide an additional 3D depth cue: closer objects really are closer, just as in real life, so the viewer's eyes focus differently on different objects, depending on each object's proximity.
• Improved displays:
– Improvements in displays themselves have heightened the illusion of reality.
– But it is still not possible to achieve the contrast and color of a well-printed professional photograph.
– Limited display resolution makes it impossible to produce extremely fine detail.
Cont.
• Interacting with our other senses:
– A final step toward realism is the integration of realistic imagery with information presented to our other senses.
– e.g., a head-worn simulator display monitors head motion, making possible another important 3D depth cue called head-motion parallax: when the user moves her head from side to side, perhaps to try to see more of a partially hidden object, the view changes as it would in real life.
Aliasing and antialiasing
• The primitives of graphics have a common problem: jagged edges.
• This is also known as staircasing.
• Jaggies are an instance of a phenomenon known as aliasing.
• The application of techniques that reduce or eliminate aliasing is referred to as antialiasing.
• Primitives or images produced using these techniques are called antialiased.
Cont.
• A signal is a function that conveys information; classically, it is a function of time.
• Signals can equally well be functions of other variables.
• We can treat an image as intensity variations over space:
• image signals are in the spatial domain (functions of spatial coordinates) rather than the temporal domain (functions of time).
Cont.
• Signals can be classified by whether or not they have values at all points in the spatial domain.
• A continuous signal is defined at a continuum of positions in space.
• A discrete signal is defined at a set of discrete points in space.
• The process of selecting a finite set of values from a signal is known as sampling, and the selected values are called samples.
• Recreating the original continuous signal from its samples is known as reconstruction.
Cont.
• Point sampling:
– Select one point for each pixel, evaluate the original signal at this point, and assign its value to the pixel.
– The points are arranged in a grid.
– The more samples we collect from the signal, the more we know about it.
– Alternatively, the value of each pixel can be determined by combining several adjacent samples.
– Taking more than one sample for each pixel and combining them is known as supersampling.
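The two sampling strategies above can be sketched side by side. Here the continuous image signal is modeled as a Python function of (x, y), pixels are unit squares, and the supersampler averages a uniform n x n grid of samples inside the pixel (all names and the averaging rule are illustrative assumptions):

```python
def point_sample(signal, px, py):
    """Point sampling: evaluate the continuous signal once, at the
    center of pixel (px, py), and use that value for the whole pixel."""
    return signal(px + 0.5, py + 0.5)

def supersample(signal, px, py, n=4):
    """Supersampling: take an n x n grid of samples inside pixel
    (px, py) and average them, so features smaller than a pixel still
    contribute partially instead of flickering in and out."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += signal(px + (i + 0.5) / n, py + (j + 0.5) / n)
    return total / (n * n)
```

For an edge crossing a pixel, point sampling snaps the whole pixel to one side (a jaggy), while supersampling yields an intermediate gray proportional to how many samples land on each side.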
Cont.
• Area sampling:
– Integrate the signal over a square centered about each grid point, divide by the square's area, and use this average intensity as that of the pixel. This technique is called unweighted area sampling.
– Each object's projection, no matter how small, contributes to those pixels that contain it, in strict proportion to the amount of each pixel's area it covers, without regard to the location of that area within the pixel.
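For an axis-aligned rectangular projection the integral reduces to an overlap-area computation, which makes a compact sketch of unweighted area sampling (pixels are assumed to be unit squares at integer coordinates, and the rectangle representation is a hypothetical simplification; arbitrary shapes would need a real integral or clipping step):

```python
def unweighted_area_sample(px, py, rect):
    """Unweighted area sampling of a rectangle rect = (x0, y0, x1, y1)
    over the unit pixel whose corner is (px, py): the pixel's value is
    the fraction of its area that the rectangle covers, regardless of
    where inside the pixel that covered area lies."""
    x0, y0, x1, y1 = rect
    overlap_w = max(0.0, min(x1, px + 1.0) - max(x0, px))
    overlap_h = max(0.0, min(y1, py + 1.0) - max(y0, py))
    return overlap_w * overlap_h  # pixel area is 1, so overlap = average
```

Note that a small rectangle covering a quarter of the pixel yields 0.25 whether it sits at the pixel's center or in a corner; that indifference to position is exactly the drawback the next slide raises.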
Cont.
• Unweighted area sampling has a drawback:
– It ignores where within the pixel the covered area lies: a small object moving entirely inside one pixel produces no change in the image at all.
– Only when the object crosses over into an adjoining pixel are the values of the original pixel and the adjoining pixel both affected; the object causes the image to change only when it crosses pixel boundaries.
• Weighted area sampling allows us to assign different weights to different parts of the pixel, so an object's contribution depends on its position within the pixel.
• The weighting functions of adjacent pixels should overlap.