INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY & MANAGEMENT INFORMATION SYSTEM (IJITMIS)
ISSN 0976-6605 (Print), ISSN 0976-6413 (Online)
Volume 3, Issue 1, January-June (2012), pp. 26-32
© IAEME: www.iaeme.com/ijitmis.html
Journal Impact Factor (2011): 0.7315 (calculated by GISI), www.jifactor.com

SURFACE RECONSTRUCTION AND DISPLAY FROM RANGE AND COLOR DATA UNDER REALISTIC SITUATION

Mr. J. Rajarajan (a), Dr. G. Kalivarathan (b)
(a) Research Scholar, CMJ University, Shillong, Meghalaya
(b) Principal, PSN Institute of Technology and Science, Tirunelveli, Tamilnadu; Supervisor, CMJ University, Shillong
Email: sakthi_eswar@yahoo.com

ABSTRACT

This paper deals with the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints. A complete system is presented that uses a stereo camera system with active lighting to scan the object surface geometry and color as visible from one point of view. Scans expressed in sensor coordinates are registered into a single object-centered coordinate system by aligning both the color and the geometry where the scans overlap. The range data are integrated into a surface model using a robust hierarchical space carving method. The fit of the resulting approximate mesh to the data is improved, and the mesh structure is simplified, using mesh optimization methods. In addition, two methods are developed for view-dependent display of the reconstructed surfaces. The first method integrates the geometric data into a single model as described above and projects the color data from the input images onto the surface. The second method models the scans separately as textured triangle meshes and integrates the data during display by rendering several partial models from the current viewpoint and combining the images pixel by pixel.

Keywords: Stereo Camera, Realistic Images, Data Acquisition, Surface Reconstruction, Mesh Optimization

1.0 INTRODUCTION

Computer vision and computer graphics are like opposite sides of the same coin: in vision, a description of an object or scene is derived from images, while in graphics, images are rendered from geometric descriptions. This paper connects the two by developing methods for sampling both the surface geometry and color of individual objects, building surface representations, and finally displaying realistic color images of those objects from arbitrary viewpoints. A set of subproblems is defined within this greater framework of scanning and displaying objects, each of which benefits from the techniques developed in this paper. Surface reconstruction applications take geometric scan data as input and produce surface descriptions that interpolate or approximate the data. The geometric scan data is typically expressed as a set of range maps called views. Each view is like a 2D image, except that each pixel stores the coordinates of the closest surface point visible through the pixel instead of a color value.
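To make the view representation concrete, the following sketch shows one way such a range map might be stored. It is an illustrative data structure only, not the system described in this paper; the class name, the numpy dependency, and the method names are assumptions introduced for the example.

import numpy as np

class RangeView:
    """A single range map ('view'): a 2D grid that stores, at each
    pixel, the 3D coordinates of the closest surface point visible
    through that pixel, instead of a color value."""

    def __init__(self, height, width):
        # xyz[i, j] holds the (x, y, z) of the surface point seen at pixel (i, j).
        self.xyz = np.full((height, width, 3), np.nan)
        # valid[i, j] is False where no reliable range sample was obtained.
        self.valid = np.zeros((height, width), dtype=bool)

    def set_sample(self, i, j, point):
        self.xyz[i, j] = point
        self.valid[i, j] = True

    def point_cloud(self):
        """Return the valid samples as an (N, 3) array of 3D points."""
        return self.xyz[self.valid]

Registration and triangulation stages can then operate either on the 2D grid structure or on the flat point cloud returned by point_cloud().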
2.0 SURFACE RECONSTRUCTION

Surface reconstruction has many practical applications. Today's CAM (Computer Aided Manufacturing) systems allow one to manufacture objects from their CAD (Computer Aided Design) specifications. However, a CAD model may not exist for an old or custom-made part. If a replica is needed, one could scan the object and create a geometric model that is then used to reproduce the object. Although CAD system user interfaces are becoming more natural, they still provide a poor interface for designing free-form surfaces. Dragging control points on a computer display is a far cry from the direct interaction an artist can have with clay or wood. Surface reconstruction allows designers to create initial models using materials of their choice and then scan the geometry and convert it to CAD models. One could also iterate between manufacturing prototypes, manually reshaping them, and constructing a new model of the modified version. Manufactured objects can be scanned and the data compared to the specifications. Detected deviations from the model can be used to calibrate a new manufacturing process, or they can be used in quality control to detect and discard faulty parts. Surface reconstruction also has many applications in medicine. Scanning the shape of a patient's body can help a doctor decide the direction and magnitude of radiation for removing a tumor. In plastic surgery, scanning the patient can help a doctor quantify how much fat to remove, or how large an implant to insert, to obtain the desired outcome. Surface reconstruction further allows automatic custom fitting of generic products to a wide variety of body sizes and shapes; good examples of customized products include prosthetics and clothes.

3.0 TEXTURED OBJECTS

For displaying objects, rather than replicating or measuring them, one needs color information in addition to geometric information. Some scanners, including ours, indeed produce not only the 3D coordinates of visible surface points but also the color of the surface at those points. The requirements for the accuracy of the geometric data are then often much more lenient, as the color data can capture the appearance of fine surface detail. Populating virtual worlds or games with everyday objects and characters can be a labor-intensive task if the models are to be created by artists using CAD software. Further, such objects tend to look distinctly artificial. This task can be made easier, and the results more convincing, by scanning both the geometry and appearance of real-world objects, or even people.
The influence of the Internet is becoming ever more pervasive in our society. The World Wide Web used to contain only text and some images, but 3D applications have already begun to appear. Instead of looking at an object from some fixed viewpoint, one can now view objects from an arbitrary viewpoint. Obvious applications include building virtual museums, thus making historical artifacts more accessible both to scholars
and to the general public, as well as commerce over the Internet, where a potential buyer can visualize products before purchase. Computer graphics is also increasingly used in films: special effects that would otherwise be impossible, infeasible, or simply too expensive can be digitally combined with video sequences. The extra characters, objects, or backgrounds tend to look more realistic if they are scanned from real counterparts than if they are completely generated by a computer.

4.0 DATA ACQUISITION

Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems (abbreviated DAS or DAQ) typically convert analog waveforms into digital values for processing. The components of a data acquisition system include: sensors that convert physical parameters to electrical signals; signal conditioning circuitry to convert sensor signals into a form that can be converted to digital values; and analog-to-digital converters, which convert conditioned sensor signals to digital values. Data acquisition applications are controlled by software programs developed in various general-purpose programming languages such as BASIC, C, Fortran, Java, Lisp, and Pascal.

Fig. 1 The scanning hardware

Our scanning system consists of the following main parts (Fig. 1). Four Sony 107-A color video cameras are mounted on an aluminum bar. Each camera is equipped with a manually adjustable zoom and aperture. The cameras are connected to a Matrox Meteor digitizing board, which can switch between the four inputs under computer control and produces images at 640×480 resolution. The digitizing board is attached to a Pentium PC. Below the cameras, a slide projector sits on a computer-controlled turntable. The slide projector emits a vertical stripe of white light, which is manually focused to the working volume. The object to be scanned is placed on a light table that is covered by translucent plastic. When the lamps under the light table are turned on, the background changes; thus we can easily detect background pixels by locating the pixels that change color. Next to the scanner is a set of adjustable lamps that are used to control the object illumination when we capture color images.
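The background-detection idea just described, locating pixels that change color when the lamps under the light table are toggled, can be sketched as a per-pixel comparison of two frames taken from the same camera pose. This is a minimal sketch under assumed inputs (two aligned RGB frames and a hand-tuned threshold), not the authors' actual implementation:

import numpy as np

def background_mask(frame_lamps_off, frame_lamps_on, threshold=30.0):
    """Classify pixels as background if their color changes noticeably
    when the lamps under the translucent light table are switched on.
    Both frames are (H, W, 3) arrays taken from the same camera pose."""
    diff = frame_lamps_on.astype(np.float64) - frame_lamps_off.astype(np.float64)
    change = np.linalg.norm(diff, axis=2)   # per-pixel color change magnitude
    return change > threshold               # True marks a background pixel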
5.0 SCANNING COLOR IMAGES

In order to detect the vertical beam projected onto the scene, we have to turn all other lights off while scanning the object, and consequently we cannot capture reliable color information while scanning the geometry. Instead, we first scan the geometry, then turn the lights on (and the stripe projector off) and take a color image with the camera that is used as the base camera in range triangulation. This way the color and range data are automatically registered, i.e., each range measurement is associated with exactly one pixel in the color image.

6.0 SPACETIME ANALYSIS

Our scanning method requires that we accurately locate the center of the stripe in a camera image. At first this seems quite easy; after all, the intensity distribution across the width of the stripe is approximately Gaussian, and simple techniques such as taking an average weighted by pixel intensities across the beam should give good results. However, there is a hidden assumption: that the surface illuminated by the beam is locally planar and that the whole width of the stripe is visible to the camera.

Fig. 2 Sources of errors in range by triangulation. In all images the central vertical line is the light beam with Gaussian intensity distribution; on the left and right are two sensors. (a) Ideal situation: planar object. Both sensors see the stripe and the surface is scanned correctly. (b) The right sensor cannot see the whole stripe because of self-occlusion, pulling the estimate forward. (c) The beam hits the surface only partially, pulling the surface estimate away from the corner, though not out of the surface. (d) The sensors see only half of the stripe well, resulting in too large a depth estimate.
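The naive estimator discussed above, an average of column positions weighted by pixel intensities, can be sketched per scanline as follows. The function name and grayscale-image input are assumptions for illustration; as Fig. 2 shows, this estimator is only reliable when the surface is locally planar and the whole stripe is visible:

import numpy as np

def stripe_centers(image):
    """Estimate the sub-pixel column of the vertical stripe on each row
    of a grayscale image as the intensity-weighted mean of pixel columns.
    Rows with no stripe signal return NaN."""
    image = image.astype(np.float64)
    cols = np.arange(image.shape[1])
    weight_sum = image.sum(axis=1)            # total intensity per row
    centers = np.full(image.shape[0], np.nan)
    lit = weight_sum > 0
    centers[lit] = (image[lit] * cols).sum(axis=1) / weight_sum[lit]
    return centers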
7.0 SURFACE RECONSTRUCTION

There are two important steps in surface reconstruction from range data: the data must be integrated into a single representation, and surfaces must be inferred either from that representation or directly from the original data. There does not seem to be a clear consensus on the order in which these two steps should be taken. One approach first divides the range data into subsets based on which surface regions are visible within each view. In each subset, the redundancy of the views is used to improve the surface approximation. Finally, the triangulated non-overlapping subsets are connected to form a single mesh covering the whole object. Hoppe et al. instead chose first to integrate the data into a signed distance function and then extract a polygonal mesh using the marching cubes algorithm. Their method was designed to work with arbitrary point clouds: they first estimate a local tangent plane at each data point, and then propagate the tangent plane orientations over the whole data set using the minimum spanning tree of the points.

The distance of a 3D point is evaluated as the distance to the closest tangent plane, where the distance is positive if the point is above the plane and negative otherwise. Ideally, the zero set of the distance function would follow the object surface; however, the local tangent plane estimation phase is likely to smooth the zero set. Hoppe et al. [Hoppe et al. 93] later improved the results by fitting the result of the marching cubes approximation better to the original data. Curless and Levoy improved the distance function estimation for the case where the input is structured in the form of a set of range maps. They define a volumetric function based on a weighted average of signed distances to each range image. Their scheme evaluates this volumetric function at discrete points on a uniform 3D grid, and uses these discrete samples to determine a surface that approximates the zero set of the volumetric function. Like Hoppe et al.'s, their scheme may fail to detect features smaller than the grid spacing and has difficulties with thin features. It also requires a significant amount of storage space, although this can be alleviated through run-length encoding techniques. However, their use of space carving (throwing away regions known to lie outside the object) makes the approach much more robust. Since their approach is less susceptible to smoothing, they evaluate the signed distance function at very fine resolution and use the output of the marching cubes algorithm as their final result.
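The signed distance just defined, the distance to the closest tangent plane with the sign determined by which side of the plane the query point lies on, can be sketched directly. The brute-force nearest-neighbor search below is an illustrative simplification, and the tangent planes are assumed to be given as anchor-point/unit-normal pairs:

import numpy as np

def signed_distance(x, plane_points, plane_normals):
    """Signed distance from query point x to the tangent plane whose
    anchor point is closest to x. plane_points and plane_normals are
    (N, 3) arrays of plane anchor points and unit normals."""
    # Find the tangent plane anchored at the data point nearest to x.
    k = np.argmin(np.linalg.norm(plane_points - x, axis=1))
    # Positive if x lies above the plane (along the normal), negative below.
    return float(np.dot(plane_normals[k], x - plane_points[k]))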
Fig. 3 Eight intensity images corresponding to the views of the miniature chair

Fig. 4 An example where the old method for obtaining an initial surface with correct topology fails. (a) The registered point set. (b) The result of using the method by Hoppe et al. (c) The result of our method.

Hoppe et al.'s method works quite nicely if the data contains no outliers and uniformly samples the underlying surface. Unfortunately, real data often violate both of those requirements. Figure 3 shows eight views of a miniature chair, and Fig. 4(a) shows the range maps after registration. Although most of the outliers and background data points were interactively removed from the data sets prior to initial mesh estimation, many outliers remain, especially around the spokes of the back support; some data was missing, and the data was not uniform enough for the algorithm to produce a topologically correct result. Figure 4(b) shows the incorrect result. We therefore abandoned Hoppe et al.'s initial mesh method and created a more robust method that produced the result shown in Fig. 4(c). Like Curless and Levoy, we use the idea of space carving. However, we use a hierarchical approach that saves storage space and execution time, and we do not attempt to obtain the final result directly. Instead, we concentrate on capturing the object topology as correctly as possible given the input data. Additionally, we use only robust methods such as interval analysis [Snyder 92] that enable us to recover thin objects that are typically difficult to model using the signed distance function approach. We kept the mesh optimization method of [Hoppe et al. 93], since for our purposes we typically want a mesh that is both accurate and concise. Typically, one-pass methods can only produce meshes that are either dense or not very accurate. Hoppe et al.'s mesh optimization first improves the accuracy, and can then also simplify the mesh drastically with little sacrifice in accuracy.
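The hierarchical space carving idea can be sketched as a recursive octree traversal: discard cubes that some view proves to lie entirely outside the object, keep cubes that every view places inside (or that reach the finest level), and subdivide the rest. This is a structural sketch only; cube_is_outside, cube_is_inside, and subdivide are hypothetical helpers standing in for the geometric tests against the range data, and nothing here is the authors' actual implementation:

def carve(cube, views, max_depth, depth=0):
    """Recursively carve an axis-aligned cube against a set of range views.
    Returns a list of the cubes retained as (part of) the object volume."""
    # If any view proves the whole cube lies outside the object
    # (between the sensor and the scanned surface), throw it away.
    if any(view.cube_is_outside(cube) for view in views):
        return []
    # Keep the cube if every view agrees it is behind the surface,
    # or if the finest resolution has been reached.
    if depth == max_depth or all(view.cube_is_inside(cube) for view in views):
        return [cube]
    # Otherwise the cube straddles the surface: subdivide into octants.
    return [kept
            for child in cube.subdivide()   # eight child octants (hypothetical)
            for kept in carve(child, views, max_depth, depth + 1)]

Because whole subtrees are discarded as soon as one view rules them out, storage and execution time stay proportional to the carved surface region rather than to the full volume, which is the saving the hierarchical approach aims for.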
8.0 CONCLUSION

In general, the quality of surface reconstruction for realistic images depends both on the accuracy of the intensity images and on the complexity of the geometry. Moreover, the criteria for surface reconstruction must be investigated as a domain that delimits the cluster of feasible solutions and algorithms. A suitable model for the visualization methods therefore has to be investigated under various feasible solutions, incorporating the steps that are most conducive to improving the resolution of the displayed images.

REFERENCES

[1] S. Barnard and M. Fischler. Computational stereo. Computing Surveys, 14(4):554–572, 1982.
[2] R. Bergevin, D. Laurendeau, and D. Poussart. Registering range views of multipart objects. Computer Vision and Image Understanding, 61(1):1–16, January 1995.
[3] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Trans. Patt. Anal. Machine Intell., 14(2):239–256, February 1992.
[4] P. J. Besl. Surfaces in early range image understanding. PhD dissertation, University of Michigan, Ann Arbor, 1986.
[5] P. J. Besl. Surfaces in range image understanding. Springer-Verlag, 1988.
[6] P. J. Besl. The free-form surface matching problem. In H. Freeman, editor, Machine Vision for Three-Dimensional Scenes. Academic, New York, 1990.
[7] G. Blais and M. D. Levine. Registering multiview range data to create 3D computer objects. Technical Report TR-CIM-93-16, Centre for Intelligent Machines, McGill University, 1993.
[8] G. Blais and M. D. Levine. Registering multiview range data to create 3D computer objects. IEEE Trans. Patt. Anal. Machine Intell., 17(8):820–824, August 1995.
[9] G. Champleboux, S. Lavallée, R. Szeliski, and L. Brunie. From accurate range imaging sensor calibration to accurate model-based 3-D object localization. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 83–89, June 1992.
[10] Y. Chen and G. Medioni. Object modelling by registration of multiple range images. Image and Vision Computing, 10(3):145–155, April 1992.
[11] E. Chen and L. Williams. View interpolation for image synthesis. In Computer Graphics (SIGGRAPH '93 Proceedings), volume 27, pages 279–288, August 1993.
[12] Y. Chen. Description of Complex Objects Using Multiple Range Images. PhD dissertation, Institute for Robotics and Intelligent Systems, University of Southern California, 1994. Also technical report IRIS-94-328.
[13] S. E. Chen. QuickTime VR: an image-based approach to virtual environment navigation. In SIGGRAPH 95 Conference Proceedings, pages 29–38. ACM SIGGRAPH, Addison Wesley, August 1995.
[14] C. H. Chien, Y. B. Sim, and J. K. Aggarwal. Generation of volume/surface octree from range data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR '88), pages 254–260, June 1988.
[15] C. Chua and R. Jarvis. 3-D free-form surface registration and object recognition. International Journal of Computer Vision, 17(1):77–99, January 1996.
[16] B. Curless and M. Levoy. Better optical triangulation through spacetime analysis. In Proc. IEEE Int. Conf. on Computer Vision (ICCV), pages 987–994, June 1995.
[17] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In SIGGRAPH 96 Conference Proceedings, pages 303–312. ACM SIGGRAPH, Addison Wesley, August 1996.
[18] B. L. Curless. New methods for surface reconstruction from range images. PhD dissertation, Department of Electrical Engineering, Stanford University, 1997.
[19] L. Darsa, B. C. Silva, and A. Varshney. Navigating static environments using image-space simplification and morphing. In Proc. 1997 Symposium on Interactive 3D Graphics, April 1997.