Computer Graphics, Volume 24, Number 4, August 1990

Adaptive Radiosity Textures for Bidirectional Ray Tracing

Paul S. Heckbert
Dept. of Electrical Engineering and Computer Science
University of California, Berkeley, CA 94720

Abstract

We present a rendering method designed to provide accurate, general simulation of global illumination for realistic image synthesis. Separating surface interaction into diffuse plus specular, we compute the specular component on the fly, as in ray tracing, and store the diffuse component (the radiosity) for later reuse, similar to a radiosity algorithm. Radiosities are stored in adaptive radiosity textures (rexes) that record the pattern of light and shadow on every diffuse surface in the scene. They adaptively subdivide themselves to the appropriate level of detail for the picture being made, resolving sharp shadow edges automatically.

We use a three-pass, bidirectional ray tracing algorithm that traces rays from both the lights and the eye. The "size pass" records visibility information on diffuse surfaces; the "light pass" progressively traces rays from lights and bright surfaces to deposit photons on diffuse surfaces to construct the radiosity textures; and the "eye pass" traces rays from the eye, collecting light from diffuse surfaces to make a picture.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation - display algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - visible line/surface algorithms.

General Terms: algorithms.

Additional Key Words and Phrases: global illumination, density estimation, texture mapping, quadtree, adaptive subdivision, sampling.

1 Introduction

The presentation is divided into four sections. We first discuss previous work on the global illumination problem. Then we outline our bidirectional ray tracing approach in an intuitive way. Next we describe our implementation and some early results. We conclude with a summary of the method and of our experiences to date.

2 The Global Illumination Problem

The primary goal of realistic image synthesis is the development of methods for modeling and realistically rendering three-dimensional scenes. One of the most challenging tasks of realistic image synthesis is the accurate and efficient simulation of global illumination effects: the illumination of surfaces in a scene by other surfaces. Early rendering programs treated the visibility (hidden surface) and shading tasks independently, employing a local illumination model which assumed that the shading of each surface is independent of the shading of every other surface. Local illumination typically assumes that light comes from a finite set of point light sources only. Global illumination models, on the other hand, recognize that visibility and shading are interrelated: the shade of a surface point is determined by the shades of all of the surfaces visible from that point.

The intensity of light traveling in a given outgoing direction out from a surface point is the integral of the incident intensity times the bidirectional distribution function (BDF) over all possible incoming directions in:

    intensity(out) = ∫_sphere intensity(in) BDF(in, out) d(in)

The bidirectional distribution function is the fraction of energy reflected or transmitted from the incoming direction in = (φi, θi) to the outgoing direction out = (φo, θo); it is the sum of the bidirectional reflectance distribution function (BRDF) and the bidirectional transmittance distribution function (BTDF). See [Hall89] for a more detailed discussion of the physics of illumination. We will characterize previous global illumination algorithms by the approximations they make to the above integral.

Because of the superposition properties of electromagnetic radiation, we can segregate surface reflectance into two types: diffuse and specular. We define diffuse interaction (both reflection and transmission) to be the portion of interaction that scatters light equally in all directions, and specular interaction to be the remaining portion: BDF = BDFdiff + BDFspec. For many materials, specular interaction scatters light in only a small cone of directions. When this cone includes just a finite number of directions, each a cone with solid angle zero, we call the interaction ideal specular; otherwise, when the cone(s) have a positive finite angle, we call it rough specular. Two examples of ideal specular surfaces are: (1) a perfect mirror that reflects in one direction, and (2) a perfect transmitter that refracts in one direction and reflects in another. An ideal specular surface with micro-bumps behaves statistically like a rough specular surface. Our three classes of interaction are diagrammed in figure 1.
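As a concrete illustration (not part of the paper), the outgoing-intensity integral above could be estimated by Monte Carlo integration over incoming directions. In the sketch below, the Dir type and the incident_intensity, bdf, and random_direction helpers are hypothetical placeholders for quantities the text only defines abstractly.

    /* Minimal sketch (not the paper's code): Monte Carlo estimate of
     * intensity(out) = integral over the sphere of intensity(in) * BDF(in,out) d(in).
     * All helper names are assumptions, not an existing API. */
    typedef struct { double x, y, z; } Dir;

    extern double incident_intensity(Dir in);   /* light arriving from direction `in` */
    extern double bdf(Dir in, Dir out);         /* BRDF + BTDF at this surface point */
    extern Dir    random_direction(void);       /* uniform sample on the unit sphere */

    double outgoing_intensity(Dir out, int n)
    {
        const double sphere_solid_angle = 4.0 * 3.14159265358979;  /* measure of the domain */
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            Dir in = random_direction();
            sum += incident_intensity(in) * bdf(in, out);
        }
        /* average of the integrand times the measure of the integration domain */
        return sphere_solid_angle * sum / n;
    }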
Figure 1: Three classes of reflectance: diffuse, rough specular, and ideal specular; showing a polar plot of the reflectance coefficient for fixed incoming direction and varying outgoing direction. Transmittance is similar.

A diffuse surface appears equally bright from all viewing directions, but a specular surface's brightness varies with viewing direction, so we say that diffuse interaction is view-independent while specular interaction is view-dependent. The simplest materials have a position-invariant, isotropic BDF consisting of a linear combination of diffuse and ideal specular interaction, but a fully general BDF can simulate textured, anisotropic, diffuse and rough specular surfaces.

2.1 Ray Tracing vs. Radiosity

The two most popular algorithms for global illumination are ray tracing and radiosity. Ray tracing is both a visibility algorithm and a shading algorithm, but radiosity is just a shading algorithm.

2.1.1 Ray Tracing

Classic ray tracing generates a picture by tracing rays from the eye into the scene, recursively exploring specularly reflected and transmitted directions, and tracing rays toward point light sources to simulate shadowing [Whitted80]. It assumes that the BDF contains no rough specular, and that the incident light relevant to the diffuse computation is a sum of delta functions in the direction of each light source. This latter assumption implies a local illumination model for diffuse.

A more realistic illumination model includes rough specular BDFs and computes diffuse interaction globally. Exact simulation of these effects requires the integration of incident light over cones of finite solid angle. Ray tracing can be generalized to approximate such computations using distribution ray tracing [Cook84], [Lee85], [Dippe85], [Cook86], [Kajiya86]. (We propose the name "distribution ray tracing" as an alternative to the current name, "distributed ray tracing", which is confusing because of its parallel hardware connotations.) In distribution ray tracing, rays are distributed, either uniformly or stochastically, throughout any distributions needing integration. Many rays must be traced to accurately integrate the broad reflectance distributions of rough specular and diffuse surfaces: often hundreds or thousands per surface intersection.

2.1.2 Radiosity

The term radiosity is used in two senses. First, radiosity is a physical quantity equal to power per unit area, which determines the intensity of light diffusely reflected by a surface, and second, radiosity is a shading algorithm. The meaning of each use should be clear by context.

The classic radiosity algorithm subdivides each surface into polygons and determines the fraction of energy diffusely radiated from each polygon to every other polygon: the pair's form factor. From the form factors, a large system of equations is constructed whose solution is the radiosities of each polygon [Siegel81], [Goral84], [Nishita85]. This system can be solved either with Gauss-Seidel iteration or, most conveniently, with progressive techniques that compute the matrix and solve the system a piece at a time [Cohen88]. Form factors can be determined analytically for simple geometries [Siegel81], [Baum89], but for complex geometries a numerical approach employing a visibility algorithm is necessary. The most popular visibility method for this purpose is a hemicube computed using a z-buffer [Cohen85], but ray tracing has recently been promoted as an alternative [Wallace89], [Sillion89].
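For illustration only (this is not part of the paper's method), the radiosity system just described can be written, per polygon i, as B_i = E_i + rho_i * sum_j F_ij B_j, where E is emission, rho is diffuse reflectance, and F holds the form factors. The following monochrome sketch solves it by Gauss-Seidel iteration; every name in it is hypothetical.

    /* Minimal sketch: Gauss-Seidel solution of B[i] = E[i] + rho[i] * sum_j F[i*n+j]*B[j].
     * F is an n x n row-major form-factor matrix; names are assumptions, not a real API. */
    void solve_radiosity(int n, const double *E, const double *rho,
                         const double *F, double *B, int iters)
    {
        for (int i = 0; i < n; i++) B[i] = E[i];          /* start from emission only */
        for (int k = 0; k < iters; k++) {
            for (int i = 0; i < n; i++) {
                double gathered = 0.0;                    /* light gathered from all patches */
                for (int j = 0; j < n; j++)
                    gathered += F[i * n + j] * B[j];
                B[i] = E[i] + rho[i] * gathered;
            }
        }
    }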
Classic radiosity assumes an entirely diffuse reflectance, so it does not simulate specular interaction at all.

The output of the radiosity algorithm is one radiosity value per polygon. Since diffuse interaction is by definition view-independent, these radiosities are valid from any viewpoint. The radiosity computation must be followed by a visibility algorithm to generate a picture.

The radiosity method can be generalized to simulate specular interaction by storing not just a single radiosity value with each polygon, but a two-dimensional array [Immel86], [Shao88], [Buckalew89]. The resulting algorithm, which we call directional radiosity, simulates both diffuse and specular interaction globally, but the memory requirements are so excessive as to be impractical.

2.1.3 Hybrid Methods

Ray tracing is best at specular and radiosity is best at diffuse, and the above attempts to generalize ray tracing to diffuse and to generalize radiosity to specular stretch the algorithms beyond the reflectance realms for which each is best suited, making them less accurate and less efficient. Another class of algorithms is formed by hybridizing the methods, using a two-pass algorithm that applies a radiosity pass followed by a ray tracing pass. This is the approach used by [Wallace87] and [Sillion89].

The first pass of Wallace's algorithm consists of classic radiosity extended to include diffuse-to-diffuse interactions that bounce off planar mirrors. He follows this with a classic ray tracing pass (implemented using a z-buffer). Unfortunately, the method is limited to planar surfaces (because of the polygonization involved in the radiosity algorithm) and to perfect planar mirrors.

Sillion's algorithm is like Wallace's but it computes its form factors using ray tracing instead of hemicubes. This eliminates the restriction to planar mirrors. The method still suffers from the polygonization inherent in the radiosity step, however.

2.2 Sampling Radiosities

Many of the sampling problems of ray tracing have been solved by recent adaptive algorithms [Whitted80], [Cook86], [Lee85], [Dippe85], [Mitchell87], [Painter89], particularly for the simulation of specular interaction. The sampling problems of the radiosity algorithm are less well studied, probably because its sampling process is less explicit than that of ray tracing.
We examine four data structures for storing radiosities: light images, polygons, samples in 3-D, and textures. Several different algorithms have been used to generate these data structures: radiosities have been generated analytically, with hemicubes at the receiver (gathering), with hemicubes at the sender (shooting), and by tracing rays from the eye or from the light.

2.2.1 Light Images

The simplest data structure, the light image, simulates only shadows, the first order effects of diffuse interreflection. Light images are pictures of the scene from the point of view of each light source. They are most often generated using the z-buffer shadow algorithm, which saves the z-buffers of these light images and uses them while rendering from the point of view of the eye to test if visible points are in shadow [Williams78], [Reeves87]. This shadow algorithm is more flexible than most, since it is not limited to polygons, but it is difficult to tune. Choosing the resolution for the light images is critical, since aliasing of shadow edges results if the light images are too coarse.

2.2.2 Polygonized Radiosity

The Atherton-Weiler algorithm is another method for computing shadows that renders from the point of view of the lights [Atherton78]. It uses the images rendered from the lights to generate "surface detail polygons", modifying the scene description by splitting all polygons into shadowed and unshadowed portions that are shaded appropriately in the final rendering from the eye. Surface detail polygons are an example of polygonized radiosity, the storage of radiosity as polygons. The shadows computed by the Atherton-Weiler algorithm are a first approximation to the interreflection simulated by radiosity algorithms.

The most common method for computing polygonized radiosity is, of course, the classic radiosity algorithm. A major problem with this algorithm is that surfaces are polygonized before radiosities are computed. Difficulties result if this polygonization is either too coarse or too fine.

Sharp shadow edges caused by small light sources can be undersampled if the polygonization is too coarse, resulting in blurring or aliasing of the radiosities. Cohen developed the "substructuring" technique in response to this problem [Cohen86]. It makes an initial pass computing radiosities at low resolution, then splits polygons that appear to be in high-variance regions and recomputes radiosities. Substructuring helps, but it is not fully automatic, as the subdivision stopping criterion appears to be a polygon size selected in some ad hoc manner. The limitations of the method are further demonstrated by the absence to date of radiosity pictures in published work exhibiting sharp shadow edges.

The other extreme of radiosity problems is oversampling of radiosities due to polygonization that is too fine for the hemicube. The resulting quantization can be cured by adaptive subdivision of the hemicube or of the light rays [Wallace89], [Baum89].

We conclude that polygonization criteria remain a difficult problem for the radiosity method.

It is interesting to note the similarities between radiosity algorithms and the Atherton-Weiler algorithm. Conceptually, the original radiosity method gathers light to each polygon by rendering the scene from the point of view of each receiver, but the progressive radiosity algorithm shoots light by rendering the scene from the point of view of each sender (a light source).
A progressive radiosity algorithm using a hemicube is thus much like repeated application of the Atherton-Weiler shadow algorithm.

2.2.3 Samples in 3-D

Radiosities can be computed using brute force distribution ray tracing [Kajiya86], but the method is inefficient because it samples the slowly-varying radiosity function densely. To exploit the coherence of radiosity values, Ward sampled the diffuse component sparsely, and saved this information in a world space octree [Ward88]. Because his algorithm shot rays from the eye toward the lights, and not vice-versa, it had difficulty detecting light sources reflected by specular surfaces.

2.2.4 Radiosity Texture

The fourth data structure for radiosities is the radiosity texture. Instead of polygonizing each surface and storing one radiosity value per polygon, radiosity samples are stored in a texture on every diffuse surface in the scene [Arvo86]. Arvo called his textures "illumination maps". He computed them by tracing rays from the light sources.

2.3 Light Ray Tracing

Rays traced from the eye we call eye rays, and rays traced from the lights we call light rays. We avoid the terms "forward ray tracing" and "backward ray tracing" because they are ambiguous: some people consider photon motion "forward", while others consider Whitted's rays "forward".

Light ray tracing was originally proposed by Appel [Appel68], who "stored" his radiosities on paper with a plotter. Light ray tracing was proposed for beams in previous work with Hanrahan [Heckbert84] where we stored radiosities as surface detail polygons like Atherton-Weiler. This approach was modified by Strauss, who deposited light directly in screen pixels when a diffuse surface was hit by a beam, rather than store the radiosities with the surface [Strauss88]. Watt has recently implemented light beam tracing to simulate refraction at water surfaces [Watt90]. Arvo used light ray tracing to compute his radiosity textures [Arvo86]. Light ray tracing is often discussed but has been little used, to date.

3 Bidirectional Ray Tracing Using Adaptive Radiosity Textures

In quest of realistic image synthesis, we seek efficient algorithms for simulating global illumination that can accommodate curved surfaces, complex scenes, and arbitrary surface characteristics (BDFs), and generate pictures perceptually indistinguishable from reality. These goals are not realizable at present, but we can make progress if we relax our requirements.
We make the following assumptions:

(1) Only surfaces are relevant. The scattering or absorption of volumes can be ignored.
(2) Curved surfaces are important. The world is not polygonal.
(3) Shadows, penumbras, texture, diffuse interreflection, specular reflection, and refraction are all important.
(4) We can ignore the phenomena of fluorescence (light wavelength crosstalk), polarization, and diffraction.
(5) Surface properties can be expressed as a linear combination of diffuse and specular reflectance and transmission functions:

    BDF = kdr BRDFdiff + ksr BRDFspec + kdt BTDFdiff + kst BTDFspec

    The coefficients kij are not assumed constant.
(6) Specular surfaces are not rough; all specular interaction is ideal.

3.1 Approach

Our approach is a hybrid of radiosity and ray tracing ideas. Rather than patch together these two algorithms, however, we seek a simple, coherent, hybrid algorithm. To provide the greatest generality of shape primitives and optical effects, we choose ray tracing as the visibility algorithm. Because ray tracing is weak at simulating global diffuse interaction, the principal task before us is therefore to determine an efficient method for calculating radiosities using ray tracing.

To exploit the view-independence and coherence of radiosity, we store radiosity with each diffuse surface, using an adaptive radiosity texture, or rex. A rex records the pattern of light and shadow and color bleeding on a surface. We store radiosity as a texture, rather than as a polygonization, in order to decouple the data structures for geometry and shading, and to facilitate adaptive subdivision of radiosity information; and we store it with the surface, rather than in a global octree [Ward88], or in a light image, based on the intuition that radiosities are intrinsic properties of a surface. We expect that the memory required for rexes will not be excessive, since dense sampling of radiosity will be necessary only where it has a high gradient, such as at shadow edges.

Next we need a general technique for computing the rexes. The paths by which photons travel through a scene can motivate our algorithm (figure 2). We can characterize each interaction along a photon's path from light (L) to eye (E) as either diffuse (D) or specular (S). Each path can therefore be labeled with some string in the set given by the regular expression L(D|S)*E. Classic ray tracing simulates only LDS*E | LS*E paths, while classic radiosity simulates only LD*E. Eye ray tracing has difficulty finding paths such as LS+DE because it doesn't know where to look for specularly reflected light when integrating the hemisphere. Such paths are easily simulated by light ray tracing, however.

Figure 2: Selected photon paths from light (L) to eye (E) by way of diffuse (D) and specular (S) surfaces. For simplicity, the surfaces shown are entirely diffuse or entirely specular; normally each surface would be a mixture.

Figure 3: Left: first level light ray tracing propagates photons from the light to the first diffuse surface on a path (e.g. LD and LSD); higher levels of progressive light ray tracing simulate indirect diffuse interaction (e.g. LDD). Right: eye ray tracing shoots rays from the eye, extracting radiosities from diffuse surfaces (e.g. it traces DE and DSE in reverse).

We digress for a moment to discuss units. Light rays carry power (energy/time) and eye rays carry intensity (energy / (time * projected area * solid angle)). Each light ray carries a fraction of the total power emitted by the light.

We can simulate paths of the form LS*D by shooting light rays (photons) into the scene, depositing the photons' power into the rex of the first diffuse surface encountered (figure 3, left). Such a light ray tracing pass will compute a first approximation to the radiosities. This can be followed by an eye ray tracing pass in which we trace DS*E paths in a backward direction, extracting intensity from the rex of the first diffuse surface encountered (figure 3, right). The net effect of these two passes will be the simulation of all LS*DS*E paths. The rays of the two passes "meet in the middle" to exchange information. To simulate diffuse interreflection, we shoot progressively from bright surfaces [Cohen88] during the light ray tracing pass, thereby accounting for all paths: L(S*D)*S*E = L(D|S)*E. We call these two passes the light pass and the eye pass. Such bidirectional ray tracing using adaptive radiosity textures can thus simulate all photon paths, in principle.

Our bidirectional ray tracing algorithm is thus a hybrid. From radiosity we borrowed the idea of saving and reusing the diffuse component, which is view-independent, and from ray tracing we borrowed the idea of discarding and recomputing the specular component, which is view-dependent.
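The light-pass half of this scheme can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: trace, reflect_ray, refract_ray, rex_deposit, and the coefficient accessors kd, ksr, kst are assumed helpers, and the cosine weighting of deposits described later in §4.1 is omitted for brevity.

    /* Minimal sketch of simulating LS*D paths: a light ray carries some power;
     * at an ideal specular surface its power is split among the reflected and
     * transmitted rays according to the surface coefficients; at a diffuse
     * surface the power is deposited into that surface's rex. */
    typedef struct { double x, y, z; } Vec;
    typedef struct { Vec origin, dir; } Ray;
    typedef struct Surface Surface;

    extern int  trace(Ray r, Surface **hit, Vec *point, Vec *normal);  /* nearest hit, if any */
    extern Ray  reflect_ray(Ray r, Vec point, Vec normal);
    extern Ray  refract_ray(Ray r, Vec point, Vec normal);
    extern void rex_deposit(Surface *s, Vec point, double power);      /* add power to a rex bucket */
    extern double kd(Surface *s), ksr(Surface *s), kst(Surface *s);    /* reflectance coefficients */

    void shoot_photon(Ray r, double power, int depth)
    {
        Surface *s; Vec p, n;
        if (depth > 5 || power <= 0.0) return;     /* ray trees are traced to a fixed depth */
        if (!trace(r, &s, &p, &n)) return;         /* photon leaves the scene */
        if (kd(s) > 0.0)
            rex_deposit(s, p, kd(s) * power);      /* diffuse part: store in the rex */
        if (ksr(s) > 0.0)                          /* ideal specular reflection */
            shoot_photon(reflect_ray(r, p, n), ksr(s) * power, depth + 1);
        if (kst(s) > 0.0)                          /* ideal specular transmission */
            shoot_photon(refract_ray(r, p, n), kst(s) * power, depth + 1);
    }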
3.2 All Sampling is Adaptive

There are three separate multidimensional sampling processes involved in this approach: sampling of directions from the light, sampling of directions from the eye (screen sampling), and sampling of radiosity on each diffuse surface.

3.3 Adaptive Radiosity Textures (Rexes)

Rexes are textures indexed by surface parameters u and v, as in standard texture mapping [Blinn76], [Heckbert86]. We associate a rex with every diffuse or partially-diffuse surface. By using a texture and retaining the initial geometry, instead of polygonizing, we avoid the polygonized silhouettes of curved surfaces common in radiosity pictures.

In the bidirectional ray tracing algorithm, the rexes collect power from incident photons during the light pass, and this information is used to estimate the true radiosity function during the eye pass (figure 4). Our rexes thus serve much like density estimators that estimate the probability density of a random variable from a set of samples of that random variable [Silverman86]. Density can be estimated using either histogram methods, which subdivide the domain into buckets, or kernel estimators, which store every sample and reconstruct the density as a sum of weighted kernels (similar to a spline).

Figure 4: Photons incident on a rex (shown as spikes with height proportional to power) are samples from the true, piecewise-continuous radiosity function (the curve). We try to estimate the function from the samples.

The resolution of a rex should be related to its screen size. Ideally, we want to resolve shadow edges sharply in the final picture, which means that rexes should store details as fine as the preimage of a screen pixel. On the other hand, resolution of details smaller than this is unnecessary, since subpixel detail is beyond the Nyquist limit of screen sampling. Cohen's substructuring technique is adaptive, but its criteria appear to be independent of screen space, so it cannot adapt and optimize the radiosity samples for a particular view.

To provide the light pass with information about rex resolution, we precede the light pass with a size pass in which we trace rays from the eye, labeling each diffuse surface with the minimum rex feature size.

3.3.1 Adaptive Light Sampling

Adaptive sampling of light rays is desirable for several reasons. Sharp resolution of shadow edges requires rays only where the light source sees a silhouette. Also, it is only necessary to trace light paths that hit surfaces visible (directly or indirectly) to the eye. Thirdly, omnidirectional lights disperse photons in a sphere of directions, but when such lights are far from the visible scene, as is the sun, the light ray directions that affect the final picture subtend a small solid angle. Finally, stratified sampling should be used for directional lights to effect their goniometric distribution. Thus, to avoid tracing irrelevant rays, we sample the sphere of directions adaptively [Sillion89], [Wallace89].

For area light sources, we use stratified sampling to distribute the ray origins across the surface with a density proportional to the local radiosity. Stratified sampling should also be used to shoot more light rays near the normal, since it is intensity that is constant with outgoing angle, while power is proportional to the cosine of the angle with the normal. If the surface has both a standard texture and a rex mapped onto it, then the rex should be modulated by this standard texture before shooting. With area light sources, the distribution to be integrated is thus four-dimensional: two dimensions for surface parameters u and v, and two dimensions for ray direction. For best results, a 4-D data structure such as a k-d tree should be used to record and adapt the set of light rays used.

3.3.2 Adaptive Eye Sampling

Eye rays (screen pixels) are sampled adaptively as well. Techniques for adaptive screen sampling have been covered well by others [Warnock69], [Whitted80], [Mitchell87], [Painter89].

3.4 Three Pass Algorithm

Our bidirectional ray tracing algorithm thus has three passes. We discuss these passes here in a general way; the details of a particular implementation are discussed in §4. The passes are:

size pass - record screen size information in each rex
light pass - progressively trace rays from lights and bright surfaces, depositing photons on diffuse surfaces to construct radiosity textures
eye pass - trace rays from eye, extracting light from diffuse surfaces to make a picture

Specular reflection and transmission bounces are followed on all three passes. Distribution ray tracing can be used in all passes to simulate the broad distributions of rough specular reflections and other effects.

3.4.1 Size Pass

As previously described, the size pass traces rays from the eye, recording information about the mapping between surface parameter space and screen space. This information is used by each rex during the light pass to terminate its adaptive subdivision.

3.4.2 Light Pass

Indirect diffuse interaction is simulated during the light pass by regarding bright diffuse surfaces as light sources, and shooting light rays from them, as in progressive radiosity. The rex records the shot and unshot power.
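A minimal sketch of how such progressive shooting might be organized follows (hypothetical names; as noted in §4, the current implementation does not yet include this step). Each iteration picks the shooter with the most unshot power, divides that power among many cosine-weighted rays, and marks it as shot.

    /* Minimal sketch of a progressive light pass: treat the brightest surface
     * as a secondary light source and shoot its unshot power as photons.
     * Directions are cosine-weighted, since power falls off with the cosine of
     * the angle to the normal while intensity does not.  All names are
     * assumptions, not the paper's code. */
    typedef struct { double x, y, z; } Vec;
    typedef struct Patch { Vec point, normal; double unshot; } Patch;

    extern void   shoot_photon_from(Vec origin, Vec dir, double power); /* cf. sketch after Sec. 3.1 */
    extern Vec    cosine_weighted_dir(Vec normal);   /* p(theta) proportional to cos(theta) */
    extern Patch *brightest_patch(void);             /* shooter with the most unshot power */

    void progressive_light_pass(int rays_per_shooter, double threshold)
    {
        for (;;) {
            Patch *p = brightest_patch();
            if (p == NULL || p->unshot < threshold) break;   /* remaining unshot power is negligible */
            double per_ray = p->unshot / rays_per_shooter;
            for (int i = 0; i < rays_per_shooter; i++)
                shoot_photon_from(p->point, cosine_weighted_dir(p->normal), per_ray);
            p->unshot = 0.0;   /* this shooter's power has now been shot; receivers gain unshot power */
        }
    }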
The adaptive algorithm for light ray tracing must ensure that: (a) a minimum level of light sampling is achieved; (b) more rays are devoted near silhouettes, shadows, and high curvature areas; (c) sharp radiosity gradients are resolved to screen pixel size; and (d) light rays and rexes are subdivided cooperatively.

3.4.3 Eye Pass

The eye pass is like a standard ray tracing algorithm except that the diffuse intensity is extracted out of the rex instead of from a shadow ray. The radiosity of a surface patch is its power divided by its world-space surface area.

After the three passes are run, one could move the eye point and re-run the eye pass to generate other views of the scene, but the results would be inferior to those made by recomputing the rexes adapted to the new viewpoint.

3.4.4 Observations

Because light rays are concentrated on visible portions of the scene and radiosity is resolved adaptive to each surface's projection in screen space, the radiosity calculation performed in the light pass is view-dependent. But this is as it should be: although the exact radiosity values are view-independent, the radiosity sample locations needed to make a picture are not. When computing moving-camera animation, one could prime the rexes by running the size pass for selected key frames to achieve more view-independent sampling.

4 Implementation and Results

The current implementation realizes many, but not all, of the ideas proposed here. It performs bidirectional ray tracing using adaptive sampling for light, eye, and rex. It has no size pass, just a light pass and an eye pass. The program can render scenes consisting of CSG combinations of spheres and polyhedra. Specular interaction is assumed ideal, and diffuse transmission is not simulated. The light pass shoots photons from omnidirectional point light sources, and does not implement progressive radiosity. The implementation thus simulates only LS*DS*E paths at present. We trace ray trees, not just ray paths [Kajiya86].

4.0.5 Data Structures

Quadtrees were used for each of the 2-D sampling processes [Samet90]: one for the outgoing directions of each light, one for the parameter space of each radiosity texture, and one for the eye.

The light and eye quadtrees are quite similar; their records are shown below in pseudocode. Each node contains pointers to its child nodes (if not a leaf) and to its parent node. Light space is parameterized by (r,s), where r is latitude and s is longitude, and eye space (screen space) is parameterized by (x,y). Each node represents a square region of the parameter space whose corner is given by (r0, s0) or (x0, y0) and whose size is proportional to 2^-level.

Figure 5: Rex quadtree on a surface. Adaptive rex subdivision tries to subdivide more finely near a shadow edge.

The light quadtree sends one light ray per node at a location uniformly distributed over the square. Also stored in each light quadtree node is the ID of the surface hit by the light ray, if any, and the surface parameters (u, v) at the intersection point. This information is used to determine the distance in parameter space between rex hits.

Eye quadtrees are simpler. Each node has pointers to the intensities at its corners. These are shared with neighbors and children. Eye ray tracing is currently uniform, not stochastic.

A rex quadtree node represents a square region of (u, v) parameter space on a diffuse surface (figure 5). Leaves in the rex quadtree act as histogram buckets, accumulating the number of photons and their power. Rex nodes also record the world-space surface area of their surface patch.

    light_node: type =                  {LIGHT QUADTREE NODE}
      record
        leaf: boolean;                  {is this a leaf?}
        mark: boolean;                  {should node be split?}
        level: int;                     {level in tree (root=0)}
        parent: ^light_node;            {parent node, if any}
        nw, ne, se, sw: ^light_node;    {four children, if not a leaf}
        r0, s0: real;                   {params of corner of square}
        r, s: real;                     {dir. params of ray (lat, lon)}
        surfno: int;                    {id of surface hit, if any}
        u, v: real;                     {surf params of surface hit}
      end;

    eye_node: type =                    {EYE QUADTREE NODE}
      record
        leaf: boolean;                  {is this a leaf?}
        mark: boolean;                  {should node be split?}
        level: int;                     {level in tree (root=0)}
        parent: ^eye_node;              {parent node, if any}
        nw, ne, se, sw: ^eye_node;      {four children, if not a leaf}
        x0, y0: real;                   {coords of corner of square}
        inw, ine, ise, isw: ^color;     {intensity samples at corners}
      end;

    rex_node: type =                    {REX QUADTREE NODE}
      record
        leaf: boolean;                  {is this a leaf?}
        mark: boolean;                  {should node be split?}
        level: int;                     {level in tree (root=0)}
        parent: ^rex_node;              {parent node, if any}
        nw, ne, se, sw: ^rex_node;      {four children, if not a leaf}
        u0, v0: real;                   {surf params of square corner}
        area: real;                     {surface area of this bucket}
        count: int;                     {#photons in bucket, if leaf}
        power: color;                   {accumulated power of bucket}
      end;
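For concreteness, the two operations a rex bucket supports, depositing a photon during the light pass and estimating radiosity (power per unit area) during the eye pass, might look like the following C sketch. It is a hypothetical transcription of the rex_node record above, assuming the rex covers the unit (u,v) square and a node at level L has width 2^-L.

    /* Minimal sketch, not the paper's code: histogram-style rex buckets. */
    #include <math.h>

    typedef struct { double r, g, b; } Color;

    typedef struct RexNode {
        int    leaf, mark, level;
        struct RexNode *parent, *nw, *ne, *se, *sw;
        double u0, v0;      /* corner of the square in (u,v) parameter space */
        double area;        /* world-space surface area of this bucket */
        int    count;       /* photons received, if leaf */
        Color  power;       /* accumulated photon power, if leaf */
    } RexNode;

    /* Descend to the leaf bucket containing (u,v); node width is 2^-level. */
    static RexNode *find_leaf(RexNode *n, double u, double v)
    {
        while (!n->leaf) {
            double half = ldexp(0.5, -n->level);      /* half the node's width */
            int east  = u >= n->u0 + half;
            int south = v >= n->v0 + half;
            n = south ? (east ? n->se : n->sw) : (east ? n->ne : n->nw);
        }
        return n;
    }

    void rex_deposit(RexNode *root, double u, double v, Color p)
    {
        RexNode *leaf = find_leaf(root, u, v);
        leaf->count++;
        leaf->power.r += p.r;  leaf->power.g += p.g;  leaf->power.b += p.b;
    }

    /* Radiosity estimate is constant within each bucket: power / area. */
    Color rex_radiosity(RexNode *root, double u, double v)
    {
        RexNode *leaf = find_leaf(root, u, v);
        Color b = { leaf->power.r / leaf->area,
                    leaf->power.g / leaf->area,
                    leaf->power.b / leaf->area };
        return b;
    }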
Figure 6: Light quadtree shown schematically (left) and in light direction parameter space (right). When a light quadtree node is split, its power is redistributed to its four sub-nodes, which each send a ray in a direction (r,s) jittered within their parameter square. The fractional power of each light ray is shown next to the leaf node that sends it.

The current implementation uses the following algorithm.

4.1 Light Pass

First, rex quadtrees are initialized to a chosen starting level (level 3, say, for 8x8 subdivision), and the counts and powers of all leaves are zeroed.

For each light, light ray tracing proceeds in breadth-first order within the light quadtree, at level 0 tracing a single ray carrying the total power of the light, at level 1 tracing up to 4 rays, at level 2 tracing up to 16 rays, etc. (figure 6). At each level, we adaptively subdivide both the light quadtree and the rex quadtrees. Changing the rex quadtrees in the midst of light ray shooting raises the histogram redistribution problem, however: if a histogram bucket is split during collection, it is necessary to redistribute the parent's mass among the children. There is no way to do this reliably without a priori knowledge, so we clear the rex at the beginning of each level and reshoot.

Processing a given level k of light rays involves three steps: (1) rex subdivision to split rex buckets containing a high density of photons, (2) light marking to mark light quadtree nodes where more light rays should be sent, and (3) light subdivision to split marked light nodes.

Rex subdivision consists of a sweep through every rex quadtree in the scene, splitting all rex buckets whose photon count exceeds a chosen limit. All counts and powers are zeroed at the end of this sweep.

Light marking traverses the light quadtree, marking all level k nodes that meet the subdivision criteria listed below.

(1) Always subdivide until a minimum level is reached.
(2) Never subdivide beyond a maximum level (if a size pass were implemented, it would determine this maximum level locally).

Otherwise, look at the light quadtree neighbors above, below, left, and right, and subdivide if the following is true:

(3) The ray hit a diffuse surface, and one of the four neighbors of the rex node hit a different surface or was beyond a threshold distance in (u, v) parameter space from the center ray's.

To help prevent small feature neglect, we also mark for subdivision all level k - 1 leaves that neighbor on level k leaves that are marked for subdivision. This last rule guarantees a restricted quadtree [Von Herzen87] where each leaf node's neighbors are at a level within plus or minus one of the center node's.

Light subdivision traverses the light quadtree splitting the marked nodes. Subdividing a node splits a ray of power p into four rays of power p/4 (figure 6). When a light node is created (during initialization or subdivision) we select a point at random within its square (r, s) domain to achieve jittered sampling [Cook86] and trace a ray in that direction. Marked nodes thus shoot four new rays, while unmarked nodes re-shoot their rays. During light ray tracing we follow specular bounces, splitting the ray tree and subdividing the power according to the reflectance/transmittance coefficients kij, and deposit their power on any diffuse surfaces that are hit. When a diffuse surface is hit, we determine (u,v) of the intersection point, and descend the surface's rex quadtree to find the rex node containing that point. The power of that node is incremented by the power of the ray times the cosine of the incident angle.
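The per-level structure of this light pass might be sketched as follows (hypothetical names, not the paper's code); the three steps correspond to rex subdivision, light marking, and light subdivision as described above, and every leaf ray is reshot at each level to sidestep the histogram redistribution problem.

    /* Minimal sketch of the per-level loop for one light; all helpers are assumed. */
    typedef struct LightNode LightNode;

    extern void subdivide_crowded_rex_buckets(int max_photons);  /* step 1; zeroes counts and powers */
    extern void mark_light_nodes(LightNode *root, int k);        /* step 2: criteria (1)-(3) above */
    extern void split_marked_light_nodes(LightNode *root);       /* step 3: power p -> four rays of p/4 */
    extern void shoot_all_leaf_rays(LightNode *root);            /* jittered ray per leaf, deposits into rexes */

    void light_pass_for_light(LightNode *root, int max_level, int max_photons)
    {
        shoot_all_leaf_rays(root);                    /* level 0: one ray carrying the light's total power */
        for (int k = 0; k < max_level; k++) {
            subdivide_crowded_rex_buckets(max_photons);
            mark_light_nodes(root, k);
            split_marked_light_nodes(root);
            shoot_all_leaf_rays(root);                /* marked nodes shoot four new rays; others reshoot */
        }
    }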
When a diffuse surface is hit,we determine (u,v) of the intersection point, and descend thesurfaces rex quadtree to find the rex node containing that point.The power of that node is incremented by the power of the raytimes the cosine of the incident angle.4.2 Eye PassThe eye pass is a fairly standard adaptive supersampling raytracing algorithm: nodes are split when the intensity differencebetween the four corners exceeds some threshold. To generate apicture, nodes larger than a pixel perform bilinear interpolationto fill in the pixels they cover, while nodes smaller than a pixelare averaged together to compute a pixel. The picture is storedin floating point format initially, then scaled and clamped to therange [0,255] in each channel.4.3 ResultsFigures 7-12 were generated with this program. Figures 7, 8, and9 show the importance of coordinating the light ray sampling pro-cess with the rex resolution. Sending too few light rays resultsin a noisy radiosity estimate from the rex, and too coarse a rexresults in blocky appearance. When the rex buckets are approxi-mately screen pixel size and the light ray density deposits severalphotons per bucket (at least 10, s~y), the results are satisfac-tory. We estimate the radiosity using a function that is constantwithin each bucket; this simple estimator accounts for the blocki-ness of the images. If bilinear interpolation were used, as in mostradiosity algorithms, we could trade off blockiness for blurriness.Figure 10 shows adaptive subdivision of a rex quadtree, split-ting more densely near shadow edges (the current splitting cri-teria cause unnecessary splitting near the border of the square).Its rex quadtree is shown in figure 11.Figure 12 shows off some of the effects that are simulated bythis algorithm.151
4.3 Results

Figures 7-12 were generated with this program. Figures 7, 8, and 9 show the importance of coordinating the light ray sampling process with the rex resolution. Sending too few light rays results in a noisy radiosity estimate from the rex, and too coarse a rex results in blocky appearance. When the rex buckets are approximately screen pixel size and the light ray density deposits several photons per bucket (at least 10, say), the results are satisfactory. We estimate the radiosity using a function that is constant within each bucket; this simple estimator accounts for the blockiness of the images. If bilinear interpolation were used, as in most radiosity algorithms, we could trade off blockiness for blurriness.

Figure 10 shows adaptive subdivision of a rex quadtree, splitting more densely near shadow edges (the current splitting criteria cause unnecessary splitting near the border of the square). Its rex quadtree is shown in figure 11.

Figure 12 shows off some of the effects that are simulated by this algorithm.

Figure 7: Noisy appearance results when too few light rays are received in each rex bucket (too few light rays or too fine a rex). Scene consists of a diffuse sphere above a diffuse floor, both illuminated by an overhead light source.

Figure 8: Blocky or blurry appearance results when rex buckets are much larger than a screen pixel (too coarse a rex).

Figure 9: Proper balance of light sampling and rex sampling reduces both noise and blockiness.

Figure 10: Rex with adaptation: the rex of the floor is initially a single bucket, but it splits adaptively near the edges of the square and near the shadow edge.

Statistics for these images are listed below, including the number of light rays, the percentage of light rays striking an object, the resolution of the rex, the resolution of the final picture, the number of eye rays, and the CPU time. All images were computed on a MIPS R2000 processor. The lens image used about 20 megabytes of memory, mostly for the light quadtree. Ray trees were traced to a depth of 5.

    #LRAYS     %HIT  REX   EYE    #ERAYS   TIME      FIG
    87,400     10%   128²  256²   246,000  1.0 min.  fig. 7
    87,400     10%   8²    256²   139,000  0.6 min.  fig. 8
    822,000    68%   128²  256²   146,000  3.5 min.  fig. 9
    331,000    20%   vbl   256²   139,000  1.3 min.  fig. 10
    1,080,000  61%   256²  512²   797,000  6.4 min.  fig. 12
Figure 11: Rex quadtree in (u, v) space of the previous figures' floor. Each leaf node's square is colored randomly. Note the subdivision near the shadow edge and the quadtree restriction.

5 Conclusions

The bidirectional ray tracing algorithm outlined here appears to be an accurate, general approach for global illumination of scenes consisting of diffuse and pure specular surfaces. It is accurate because it can account for all possible light paths; and it is general because it supports both the radiosity and ray tracing realms: shapes both planar and curved, materials both diffuse and specular, and lights both large and small. Distribution ray tracing can be used to simulate effects not directly supported by the algorithm.

Adaptive radiosity textures (rexes) are a new data structure that have several advantages over previous radiosity storage schemes. They can adaptively subdivide themselves to resolve sharp shadow edges to screen pixel size, thereby eliminating visible artifacts of radiosity sampling. Their subdivision can be automatic, requiring no ad hoc user-selected parameters.

The current implementation is young, however, and many problems remain. A terse list follows: Good adaptive sampling of area light sources appears to require a 4-D data structure. Better methods are needed to determine the number of light rays. The redistribution problems of histograms caused us to send each light ray multiple times. To avoid this problem we could store all (or selected) photon locations using kernel estimators [Silverman86]. Excessive memory is currently devoted to the light quadtree, since one node is stored per light ray. Perhaps the quadtree could be subdivided in more-or-less scanline order, and the memory recycled (quadtree restriction appears to complicate this, however). Adaptive subdivision algorithms that compare the ray trees of neighboring rays do not mix easily with path tracing and distribution ray tracing, because the latter obscure coherence. Last but not least, the interdependence of light ray subdivision and rex subdivision is precarious.

Figure 12: Light focusing and reflection from a lens and chrome ball. Scene is a glass lens formed by CSG intersection of two spheres, a chrome ball, and a diffuse floor, illuminated by a light source off screen to the right. Note the focusing of light through the lens onto the floor at center (an LSSD path), the reflection of refracted light off the ball onto the floor (an LSSSD path involving both transmission and reflection), the reflection of light off the lens onto the floor forming a parabolic arc (an LSD path), and the reflection of the lens in the ball (an LSSDSSE path, in full).

In spite of these challenges, we are hopeful. The approach of bidirectional ray tracing using adaptive radiosity textures appears to contain the mechanisms needed to simulate global illumination in a general way.

6 Acknowledgements

Thanks to Greg Ward for discussions about the global illumination problem, to Steve Omohundro for pointing me to the density estimation literature, to Ken Turkowski and Apple Computer for financial support, and to NSF grant CDA-8722788 for "Mammoth" time.

7 References

[Appel68] Arthur Appel, "Some Techniques for Shading Machine Renderings of Solids", AFIPS 1968 Spring Joint Computer Conf., vol. 32, 1968, pp. 37-45.

[Arvo86] James Arvo, "Backward Ray Tracing", SIGGRAPH 86 Developments in Ray Tracing seminar notes, Aug. 1986.
Greenberg, "Poly-gon Shadow Generation°, Computer Graphics(SIGGRAPH 78 Pro-ceedings), vol. 12, no. 3, Aug. 1978~pp. 275-281.[Baurn89] Daniel R. Baum, Holly E. Rushmeier, James M. Winget, "Im-proving lqadiosity Solutions Through the Use of Analytically Deter-mined Form Factors", Computer Graphics(SIGGRAPH 89 Proceed-ings), col. 23, no. 3, July 1989, pp. 325-334.[BlJnn76] James F. Blinn, Martin E. Newell, "Texture and Reflection inComputer Generated Images", CACM, col. 19, no. 10, Oct. 1976,pp. 842-547.153
[Buckalew89] Chris Buckalew, Donald Fussell, "Illumination Networks: Fast Realistic Rendering with General Reflectance Functions", Computer Graphics (SIGGRAPH 89 Proceedings), vol. 23, no. 3, July 1989, pp. 89-98.

[Cohen85] Michael F. Cohen, Donald P. Greenberg, "The Hemi-Cube: A Radiosity Solution for Complex Environments", Computer Graphics (SIGGRAPH 85 Proceedings), vol. 19, no. 3, July 1985, pp. 31-40.

[Cohen86] Michael F. Cohen, Donald P. Greenberg, David S. Immel, Philip J. Brock, "An Efficient Radiosity Approach for Realistic Image Synthesis", IEEE Computer Graphics and Applications, Mar. 1986, pp. 26-35.

[Cohen88] Michael F. Cohen, Shenchang Eric Chen, John R. Wallace, Donald P. Greenberg, "A Progressive Refinement Approach to Fast Radiosity Image Generation", Computer Graphics (SIGGRAPH 88 Proceedings), vol. 22, no. 4, Aug. 1988, pp. 75-84.

[Cook84] Robert L. Cook, Thomas Porter, Loren Carpenter, "Distributed Ray Tracing", Computer Graphics (SIGGRAPH 84 Proceedings), vol. 18, no. 3, July 1984, pp. 137-145.

[Cook86] Robert L. Cook, "Stochastic Sampling in Computer Graphics", ACM Transactions on Graphics, vol. 5, no. 1, Jan. 1986, pp. 51-72.

[Dippe85] Mark A. Z. Dippe, Erling Henry Wold, "Antialiasing Through Stochastic Sampling", Computer Graphics (SIGGRAPH 85 Proceedings), vol. 19, no. 3, July 1985, pp. 69-78.

[Goral84] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, Bennett Battaile, "Modeling the Interaction of Light Between Diffuse Surfaces", Computer Graphics (SIGGRAPH 84 Proceedings), vol. 18, no. 3, July 1984, pp. 213-222.

[Hall89] Roy Hall, Illumination and Color in Computer Generated Imagery, Springer Verlag, New York, 1989.

[Heckbert84] Paul S. Heckbert, Pat Hanrahan, "Beam Tracing Polygonal Objects", Computer Graphics (SIGGRAPH 84 Proceedings), vol. 18, no. 3, July 1984, pp. 119-127.

[Heckbert86] Paul S. Heckbert, "Survey of Texture Mapping", IEEE Computer Graphics and Applications, vol. 6, no. 11, Nov. 1986, pp. 56-67.

[Immel86] David S. Immel, Michael F. Cohen, Donald P. Greenberg, "A Radiosity Method for Non-Diffuse Environments", Computer Graphics (SIGGRAPH 86 Proceedings), vol. 20, no. 4, Aug. 1986, pp. 133-142.

[Kajiya86] James T. Kajiya, "The Rendering Equation", Computer Graphics (SIGGRAPH 86 Proceedings), vol. 20, no. 4, Aug. 1986, pp. 143-150.

[Lee85] Mark E. Lee, Richard A. Redner, Samuel P. Uselton, "Statistically Optimized Sampling for Distributed Ray Tracing", Computer Graphics (SIGGRAPH 85 Proceedings), vol. 19, no. 3, July 1985, pp. 61-67.

[Mitchell87] Don P. Mitchell, "Generating Antialiased Images at Low Sampling Densities", Computer Graphics (SIGGRAPH 87 Proceedings), vol. 21, no. 4, July 1987, pp. 65-72.

[Nishita85] Tomoyuki Nishita, Eihachiro Nakamae, "Continuous Tone Representation of 3-D Objects Taking Account of Shadows and Interreflection", Computer Graphics (SIGGRAPH 85 Proceedings), vol. 19, no. 3, July 1985, pp. 23-30.

[Painter89] James Painter, Kenneth Sloan, "Antialiased Ray Tracing by Adaptive Progressive Refinement", Computer Graphics (SIGGRAPH 89 Proceedings), vol. 23, no. 3, July 1989, pp. 281-288.

[Reeves87] William T. Reeves, David H. Salesin, Robert L. Cook, "Rendering Antialiased Shadows with Depth Maps", Computer Graphics (SIGGRAPH 87 Proceedings), vol. 21, no. 4, July 1987, pp. 283-291.
[Samet90] Hanan Samet, The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA, 1990.

[Shao88] Min-Zhi Shao, Qun-Sheng Peng, You-Dong Liang, "A New Radiosity Approach by Procedural Refinements for Realistic Image Synthesis", Computer Graphics (SIGGRAPH 88 Proceedings), vol. 22, no. 4, Aug. 1988, pp. 93-101.

[Siegel81] Robert Siegel, John R. Howell, Thermal Radiation Heat Transfer, Hemisphere Publishing Corp., Washington, DC, 1981.

[Sillion89] Francois Sillion, Claude Puech, "A General Two-Pass Method Integrating Specular and Diffuse Reflection", Computer Graphics (SIGGRAPH 89 Proceedings), vol. 23, no. 3, July 1989, pp. 335-344.

[Silverman86] B. W. Silverman, Density Estimation for Statistics and Data Analysis, Chapman and Hall, London, 1986.

[Strauss88] Paul S. Strauss, BAGS: The Brown Animation Generation System, PhD thesis, Tech. Report CS-88-2, Dept. of CS, Brown U., May 1988.

[Von Herzen87] Brian Von Herzen, Alan H. Barr, "Accurate Triangulations of Deformed, Intersecting Surfaces", Computer Graphics (SIGGRAPH 87 Proceedings), vol. 21, no. 4, July 1987, pp. 103-110.

[Wallace87] John R. Wallace, Michael F. Cohen, Donald P. Greenberg, "A Two-Pass Solution to the Rendering Equation: A Synthesis of Ray Tracing and Radiosity Methods", Computer Graphics (SIGGRAPH 87 Proceedings), vol. 21, no. 4, July 1987, pp. 311-320.

[Wallace89] John R. Wallace, Kells A. Elmquist, Eric A. Haines, "A Ray Tracing Algorithm for Progressive Radiosity", Computer Graphics (SIGGRAPH 89 Proceedings), vol. 23, no. 3, July 1989, pp. 315-324.

[Ward88] Gregory J. Ward, Francis M. Rubinstein, Robert D. Clear, "A Ray Tracing Solution for Diffuse Interreflection", Computer Graphics (SIGGRAPH 88 Proceedings), vol. 22, no. 4, Aug. 1988, pp. 85-92.

[Warnock69] John E. Warnock, A Hidden Surface Algorithm for Computer Generated Halftone Pictures, TR 4-15, CS Dept., U. of Utah, June 1969.

[Watt90] Mark Watt, "Light-Water Interaction Using Backward Beam Tracing", Computer Graphics (SIGGRAPH 90 Proceedings), Aug. 1990.

[Whitted80] Turner Whitted, "An Improved Illumination Model for Shaded Display", CACM, vol. 23, no. 6, June 1980, pp. 343-349.

[Williams78] Lance Williams, "Casting Curved Shadows on Curved Surfaces", Computer Graphics (SIGGRAPH 78 Proceedings), vol. 12, no. 3, Aug. 1978, pp. 270-274.