Linköpings Universitet, Department of Science and Technology
(Institutionen för teknik och naturvetenskap)
SE-601 74 Norrköping, Sweden

Master's thesis (Examensarbete) LITH-ITN-MT-EX--05/020--SE

Mesh simplification of complex VRML models

Rebecca Lagerkvist

Thesis work carried out in Media Technology at Linköping Institute of Technology, Campus Norrköping

Supervisor: Pär Klingstam
Examiner: Stefan Gustavson

Norrköping, 2005-02-24
Report category: Examensarbete
Language: English
ISRN: LITH-ITN-MT-EX--05/020--SE
Division, Department: Department of Science and Technology (Institutionen för teknik och naturvetenskap), Linköpings Universitet
Date: 2005-02-24
URL, electronic version: http://www.ep.liu.se/exjobb/itn/2005/mt/020/

Title: Mesh simplification of complex VRML models
Author: Rebecca Lagerkvist
Keywords: mesh simplification, polygon reduction, Volvo Cars, visualization, computer graphics
Copyright
The publishers will keep this document online on the Internet - or its possible
replacement - for a considerable time from the date of publication barring
exceptional circumstances.
The online availability of the document implies a permanent permission for
anyone to read, to download, to print out single copies for your own use and to
use it unchanged for any non-commercial research and educational purpose.
Subsequent transfers of copyright cannot revoke this permission. All other uses
of the document are conditional on the consent of the copyright owner. The
publisher has taken technical and administrative measures to assure authenticity,
security and accessibility.
According to intellectual property law the author has the right to be
mentioned when his/her work is accessed as described above and to be protected
against infringement.
For additional information about the Linköping University Electronic Press
and its procedures for publication and for assurance of document integrity,
please refer to its WWW home page: http://www.ep.liu.se/
© Rebecca Lagerkvist
Abstract
In a large part of their work, Volvo Cars uses digital models: in the design process, geometry simulation, safety tests, presentation material, etc. The models may be used for many purposes, but they are normally produced in only one level of detail - in general, the level that suits the most extreme demands. High-resolution models strain rendering performance, transmission bandwidth and storage capacity. At Volvo Cars there is time, money and energy to be saved by adapting the models' level of detail to their area of usage.
The aim of this thesis is to investigate whether the Volvo Cars models can be reduced to less than 20% of their original triangles without compromising too much in quality. In the thesis, the mesh simplification field is researched, and the simplification algorithm judged to best suit the needs of Volvo Cars is implemented in a C++ program. The program is used to test and analyze the Volvo Cars models.
The results show that it is possible to remove more than 80% of a model's polygons with hardly any effect on its appearance.
Preface
The Master's thesis was conducted at Volvo Car Corporation, Torslanda, and was initiated by Robert Jacobsson, team leader of Geometry Simulation. The thesis is the final examination of the Master of Media Technology program at Linköping University.¹

I would like to express my gratitude to the following people, who helped me through and made this thesis possible:

- Robert Jacobsson for initiating this project and for seeing a solution where other people saw a problem.
- Johan Segeborn for putting much effort and energy into the project.
- Pär Klingstam for giving the work the academic touch it needed.
- Håkan Pettersson for being there when I could not stand another Visual Studio link error.
- Sven Rudén and Jan Hallqvist for feedback on the test results.
- Stefan Gustavsson for his unlimited competence within computer graphics and for never losing faith in me.
- Björn Andersson for well-thought-through feedback on the report.
- Michael Hemph for his great knowledge of C++.
- Olle Hellgren and our soon-to-be-born for their support.

Rebecca Lagerkvist
Göteborg, March 6, 2005

¹ Civilingenjörsutbildningen i Medieteknik (180 poäng) - the Master of Science program in Media Technology (180 credits).
Contents

1 Overview
  1.1 Introduction
  1.2 Research approach
    1.2.1 Aim
    1.2.2 Scope
    1.2.3 Method
2 Terminology
3 Frame of reference
  3.1 What mesh simplification is not
  3.2 What mesh simplification is
  3.3 Geometry-based methods of mesh simplification
    3.3.1 Vertex decimation
    3.3.2 Vertex clustering
    3.3.3 Edge contraction
    3.3.4 Simplification envelopes
  3.4 Image-based methods of mesh simplification
  3.5 Major features of the mesh simplification algorithms
  3.6 VRML - Virtual Reality Modeling Language
    3.6.1 General description
    3.6.2 VRML as a scene graph
    3.6.3 How geometry is described in VRML
    3.6.4 The Volvo Cars VRML models
4 Results - research and development
  4.1 Researching the mesh simplification field
    4.1.1 Requirements for the mesh simplification algorithm
    4.1.2 Choosing an algorithm
    4.1.3 Mesh simplification algorithm of choice
  4.2 Development of the application
    4.2.1 Part one
    4.2.2 Part two
    4.2.3 Part three
5 Results - testings
  5.1 The size of the file
  5.2 Result of the testings
    5.2.1 The bracket model
    5.2.2 The robot model
    5.2.3 Late testings
6 Conclusion
  6.1 Evaluating the algorithm chosen
  6.2 Evaluating the application created
  6.3 Evaluating the results of the testing
7 Future work
  7.1 Geometric error measure
  7.2 Hidden surface removal
  7.3 Replacement of complex shapes
A VRML syntax
B Open Source packages
  B.1 The VRML parser
  B.2 The Qslim package

List of Figures

1.1 Method
2.1 Shaded and wire frame polygon mesh
2.2 Manifold 3D surface
2.3 Non-manifold 3D surface
2.4 Moebius strip
2.5 Convex polygon
2.6 Concave polygon
3.1 Same model, different resolution
3.2 Two planes with different resolution but the same accuracy
3.3 Vertex decimation
3.4 Vertex clustering
3.5 Edge contraction
3.6 Image-driven simplification
3.7 Comparison image-driven/geometry-driven
4.1 Disconnected surfaces
4.2 Standard edge contraction
4.3 Non-edge contraction
4.4 Geometric interpretation of quadratic error metrics
4.5 Overview of the C++ program
5.1 Bracket - original model
5.2 Bracket - simplification 12%
5.3 Bracket - adjusted simplification 12%
5.4 Robot - original
5.5 Robot - details of original
5.6 Robot - simplification 25%
5.7 Robot - adjusted simplification 25%
5.8 Volvo V70 original shaded
5.9 Volvo V70 original wire frame
5.10 Volvo V70 simplified shaded
5.11 Volvo V70 simplified wire frame
A.1 A piece of VRML code
Chapter 1
Overview
1.1 Introduction
In a large part of their work, Volvo Cars uses digital models - in the design process, geometry simulation, safety tests, presentation material, etc.
The digital models are produced either at Volvo Cars or by one of their suppliers. The models may be used for many purposes, but they are normally produced in only one level of detail - in general, the level that suits the most extreme demands, like those of collision detection simulation. Few other applications demand such a level of detail, but since there is only one version of the model, the same high resolution is used no matter what the purpose is.
High-resolution models strain rendering performance, transmission bandwidth and storage capacity. Working with excessive resolution wastes money: it slows the work process and demands extra resources.
At Volvo Cars there are no routines around the simplification of digital models. There is no software that automatically reduces the level of detail. When simplification is needed, it is done manually by adjusting the settings in the Computer Aided Engineering (CAE) programs. This is laborious work, and the result is not always satisfactory.
For documentation and presentation purposes, the models often need to be exported from the CAE programs to a file format that can be opened in the program Cosmo Player¹ on a standard PC. The standard CAE programs at Volvo Cars are RobCad and Catia, run on UNIX. The only export option from these programs is the Virtual Reality Modeling Language (VRML). The high level of detail of the models and the properties of the VRML format make the files extremely large - most of the time too large to be opened in Cosmo Player on standard hardware.

¹ Volvo Cars' standard Virtual Reality Modeling Language (VRML) browser.

Consequently, the VRML exports are not used, and instead the presentation materials contain print-screen pictures of the models. Using static 2D pictures implies that the interactivity is lost: there is no possibility of making animations
or to show different views of the object. To make it possible to use VRML, the level of detail of the models must be lowered - ideally, to less than 20% of the original polygons, without losing too much visual quality.
Presentation material is important at Volvo Cars. It is the link between separate work groups like design, simulation and testing. The clearer the communication between these groups, the better the work flow in the entire Volvo Cars organization.
Volvo Cars needs to analyze their digital model work. There is an obvious need to lower the level of detail of the 3D models - primarily to be able to use VRML in presentation material, but in a broader perspective also to ease the work of everyone concerned with the models.
It is important to find out if the models carry redundant information. If that is the case, they can be simplified without losing precision, and the approximations would be good enough even for wider use than presentation material.
1.2 Research approach
1.2.1 Aim
The primary goal of the thesis is to examine whether the Volvo Cars 3D digital models can be reduced to less than 20% of their original polygons using a mesh simplification algorithm. To reach this goal, the thesis focuses on the development of a C++ program performing mesh simplification based on an existing algorithm. To accomplish the goal, three main research questions have been put forth. The questions are to be answered by the thesis.
Q1 Which of the presently existing mesh simplification algorithms conforms the most to the needs of Volvo Cars?
Q2 What features does a mesh simplification program need in order to reduce the Volvo Cars 3D digital models?
Q3 If it is not possible to reduce the Volvo Cars 3D models by more than 80% using the mesh simplification program constructed, what characteristics must be added?
1.2.2 Scope
The building of software is a never-ending process, and it will always be possible to make improvements. To limit the extent, the thesis work has been bounded to comprise the following:
- Literature study. The literature study has been limited to mesh simplification in the sense of how to reduce the number of polygons in a generic mesh. The study will not be directed towards specific cases such as multi-layer meshes or hidden surface removal.
- Application development. The application will be confined to accepting a limited input. It will only be developed to allow the VRML syntax that is most common in the Volvo Cars digital models.
As far as possible, the application will be based on Open Source material. Even if the code available is not optimized for the purpose of this thesis, it will be taken advantage of: reusing other people's code is the only possible way to develop such an extensive program as the mesh simplification application within the time span of the thesis.
The mesh simplification program will not include a Graphical User Interface (GUI).
- Testing. The major tests will be performed on smaller models (30,000-100,000 polygons). Working with large models is very time-consuming. In order to speed up the iteration process (see section 1.2.3), it has been judged wise not to include larger models in the testing until perhaps the very last iteration cycle.
1.2.3 Method
The work of the thesis can roughly be divided into three parts:
- Conceptualization
- Iterative development
- Final analysis
The first part of the thesis, the conceptualization, was to define how the aim of the thesis - to examine whether the Volvo Cars 3D digital models can be reduced to less than 20% of their original polygons using a mesh simplification algorithm - could be turned into practical action.
To examine whether the Volvo Cars models can be reduced, a tool to perform the testing with is needed. The primary work of this thesis was therefore to create such a tool. Hence, the conceptualization consisted in giving concrete form to how the mesh simplification algorithm could be implemented. This would be done primarily by researching the field of mesh simplification software - looking at existing software, both commercial and open source, in order to form an idea of the difficulties and actual problems of developing one. A classical literature study, with the aim of finding theories that apply to the research questions of section 1.2.1, would also be part of the initial phase.
The conceptualization phase was to be kept short. The thesis is an educational project, which means that little prior experience is brought into it. Designing software on an abstract basis demands a lot of know-how - more than was available in this project. Hence, the ambition was to move early into the development phase and there build the software by trial and error.
To successfully build something by trial and error, there must be several trials. The development phase was therefore made iterative. What the development phase was to consist of can be seen in Fig. 1.1 on page 9.
Figure 1.1: The thesis’ three phases. The development phase is shaded in gray.
The thought was to have outer and inner loops. The inner loop would
consist of programming, testing and validation and would be repeated more
often than the two outer loops. In the beginning of the development phase the
iterations would mostly follow the upper outer loop and go through literature
study, modeling and back to the inner loop. As the project advanced the idea
was that the work would move on to the path of the lower outer loop and focus
more on documentation. Nevertheless, if needed it would be possible to go back
and do more literature studies even rather late in the development.
The project was planned according to Fig. 1.1 to make it possible to bring
in new knowledge even during the development phase.
The testings of the inner loops would follow a simple methodology. Since computer graphics largely works by the principle that what looks good is good, the only validation would be to look at the models and see if their appearance seemed okay. The tests would merely consist of running the Volvo Cars models through the program created and examining the outcome - if there was an outcome.
Once the development phase was over, all testing and programming would end, and the results would be analyzed and put into the documentation.
Chapter 2
Terminology
The following terminology will be used throughout this master's thesis report without further explanation.
Rendering
Rendering is the action of drawing the 3D computer graphics model on to the
2D screen. The following terminology can be associated with rendering.
Frames per second
Rendering speed is often measured in frames per second; it refers to how many images of the model can be drawn on the screen each second.
GPU rendering vs. CPU rendering
Rendering large models can be a time-consuming process. Over the years, the speed of rendering has been improved mainly by the introduction of hardware rendering performed on the GPU (Graphics Processing Unit) instead of the CPU (Central Processing Unit). GPUs have many parallel processors optimized for rendering purposes. Using the GPU's processors correctly can speed up rendering by hundreds or even thousands of frames per second.
To use GPU rendering, the program performing the rendering must support it. Both OpenGL and DirectX, the most widespread graphics programming packages, support GPU rendering.
Wireframe
When a model is rendered in wire frame, no surface properties are shown. This is illustrated in Fig. 2.1 on page 11.
Polygon Mesh
A polygon mesh is assumed to be a collection of vertices in three-dimensional space, and an associated collection of polygons composed of those vertices. The polygons can be seen as the building blocks of the model (see Fig. 2.1 on page 11). A simplification algorithm is intended to simplify a polygonal mesh by reducing the number of polygons in the mesh, while retaining as much fidelity to the model as possible. The following terminology can be associated with a polygon mesh.
Figure 2.1: 3D computer graphics model. In the model to the right, which is
rendered in wire frame, the triangles constituting the model can clearly be seen.
Vertices and coordinates
In this thesis report, the term vertices will be used for the x-, y- and z-values that constitute the points which are linked together to form the polygons.
The term coordinates refers to the integer numbers telling how the vertices are connected into polygons. This terminology is not standard in all literature, and the two terms may be used differently elsewhere.
Polygons and triangles
In this thesis report, the term polygon refers to the shapes that are formed when the vertices are linked together. There is no limit to how many vertices each polygon may have. As said, the polygons constitute the mesh.
Triangles are virtually the same thing as polygons, with the only difference that triangles have exactly three vertices.
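To make the terminology concrete, the following C++ sketch shows one minimal indexed-mesh representation, together with a simple fan split for turning convex polygons into triangles. The types and names are illustrative only - they are not the data structures of the thesis program:

    #include <cstddef>
    #include <vector>

    // A vertex is an (x, y, z) point in space.
    struct Vertex { float x, y, z; };

    // A triangle stores three indices ("coordinates") into the vertex list.
    struct Triangle { std::size_t v[3]; };

    // A polygon mesh: a collection of vertices plus index triples.
    struct Mesh {
        std::vector<Vertex>   vertices;
        std::vector<Triangle> triangles;
    };

    // Split a convex polygon (given as ordered vertex indices) into a
    // triangle fan. Rendering systems handle only triangles, so n-sided
    // polygons must be split like this before drawing.
    void triangulateFan(const std::vector<std::size_t>& polygon, Mesh& mesh) {
        for (std::size_t i = 1; i + 1 < polygon.size(); ++i)
            mesh.triangles.push_back({{polygon[0], polygon[i], polygon[i + 1]}});
    }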
Tessellated
A tessellated surface is a surface described by polygons as opposed to a surface
defined mathematically through NURBS or implicit functions.
Manifold
A manifold polygon mesh consists of a collection of edges, vertices and triangles
connected such that each edge is shared by at most two triangles. In a manifold
without boundaries, each edge is shared by exactly two triangles.
Figure 2.2: Manifold 3D Surface
Non-manifold
A non-manifold polygon mesh consists of a collection of edges, vertices and tri-
angles connected such that each edge may be shared by more than two triangles.
The triangles may also intersect each other arbitrarily.
Figure 2.3: Non-manifold 3D surface
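The edge condition in the two definitions above can be checked mechanically by counting how many triangles use each undirected edge. A minimal sketch, assuming the Mesh type from the previous listing (it tests only the edge-sharing condition, not the full vertex-neighborhood criteria of manifoldness):

    #include <map>
    #include <utility>

    // True if no undirected edge is shared by more than two triangles.
    // Edges used once lie on a boundary; edges used twice are interior.
    bool isEdgeManifold(const Mesh& mesh) {
        std::map<std::pair<std::size_t, std::size_t>, int> edgeCount;
        for (const Triangle& t : mesh.triangles) {
            for (int i = 0; i < 3; ++i) {
                std::size_t a = t.v[i], b = t.v[(i + 1) % 3];
                if (a > b) std::swap(a, b);   // undirected edge: order the pair
                if (++edgeCount[{a, b}] > 2) return false;
            }
        }
        return true;
    }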
Orientable
A surface is said to be orientable if all of the face normals of the mesh can be oriented consistently. The concept is easier to understand from its opposite, non-orientable (see next paragraph).
Non-orientable
The most common example of a non-orientable surface is the Moebius strip (Fig. 2.4 on page 13). It has only one surface and only one edge. Sliding a surface normal along the surface, the direction of the normal will have changed by 180 degrees when it reaches the point it started from. This is what makes the surface non-orientable.
Figure 2.4: Moebius strip
Topology
The topology of a mesh deals with the dependencies between its polygons. Simplified, it could be said that the topology of a model tells how many holes there are in the model.
Convex polygons
A planar polygon is said to be convex if it contains every line segment connecting any pair of its points. Walking around a convex polygon, all the turns around the corners are in the same direction.
Figure 2.5: Convex polygon
Concave polygons
A concave polygon is a polygon that is not convex. It must have at least four sides and one internal angle greater than 180°.
Figure 2.6: Concave polygon
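Convexity can be tested exactly as the definition above suggests: walk around the polygon and check that all turns have the same direction. A 2D sketch with hypothetical types:

    #include <cstddef>
    #include <vector>

    struct Point2 { double x, y; };

    // Convex iff all consecutive turns (cross products) share one sign.
    // Vertices are assumed to be given in boundary order.
    bool isConvex(const std::vector<Point2>& p) {
        bool hasPos = false, hasNeg = false;
        const std::size_t n = p.size();
        for (std::size_t i = 0; i < n; ++i) {
            const Point2& a = p[i], b = p[(i + 1) % n], c = p[(i + 2) % n];
            const double cross =
                (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
            if (cross > 0) hasPos = true;
            if (cross < 0) hasNeg = true;
            if (hasPos && hasNeg) return false;  // turns in both directions
        }
        return true;
    }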
Chapter 3
Frame of reference
3.1 What mesh simplification is not
Before digging deeper into mesh simplification, it is necessary to clear up some common misconceptions.
Mesh simplification means reducing the complexity of the 3D computer graphics model itself to a lower level-of-detail approximation of the original.
Reducing the entire 3D model should not be confused with reducing the complexity of the rendered image of the model. Today's 3D model viewers are usually very efficient when transforming a 3D model into the 2D image seen on the screen: the level of detail is adjusted according to the distance from which the model is viewed, the z-buffer makes sure that objects hidden behind other objects are never drawn on the screen, etc. These techniques are concerned with the mapping of the model from 3D to 2D. They depend on knowing from which direction the model is viewed, and they do nothing to the actual model, only to the picture of it drawn on the screen. The computer still needs to handle the same amount of data every time a calculation is performed on the model.
It is also important to keep clear the difference between reducing the model and compressing the file containing the model. There are many efficient techniques for compressing 3D model files. To illustrate: if a 3D model contains many copies of the same object, it is enough to describe the geometry of the first object and then refer back to it for the rest of them. However, when shown on a screen, the model is still drawn with the same number of polygons as before the compression. The model is not reduced; the file is compressed.
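The "describe once and refer back" technique mentioned above is what VRML provides through its DEF/USE mechanism. A minimal illustration (the node types below are standard VRML; the scene itself is made up) - the file never repeats the geometry, yet each instance is still rendered at full polygon count:

    #VRML V2.0 utf8
    # The geometry is declared once and given a name...
    DEF Wheel Shape {
      geometry Sphere { radius 0.3 }
    }
    # ...and re-used by reference. The file stays small, but every USE
    # is still drawn with the full number of polygons.
    Transform { translation 2 0 0  children [ USE Wheel ] }
    Transform { translation 4 0 0  children [ USE Wheel ] }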
The new standard file format for 3D computer models at Volvo Cars is Jupiter (.jt). Jupiter is a lightweight format - the files are small - because intelligent compression algorithms are used. However, when drawn on the screen, the .jt models are just as heavy as any other model containing the same number of polygons.
3.2 What mesh simplification is
Many computer graphics applications require complex, highly detailed models to serve their purposes. Collision detection simulation, for example, sometimes demands models made to an accuracy of millimeters and represented in very high resolution. Sometimes the resolution is so high that the human eye ceases to see the difference between the original model and an approximation (see Fig. 3.1 on page 15). Since the computational cost of using a model is directly related to its complexity, it is often desirable to use an approximation in place of the excessively detailed original. It must be possible to adapt the level of detail to fit the usage of the model.

Figure 3.1: Little difference can be seen between the original circles (the second rendered in wire frame) to the left and the approximations, containing only 0.5 per cent of the polygons, to the right.
Reducing the complexity of computer graphics models can be done through mesh simplification. Mesh simplification is the removal of triangles from the mesh. The more triangles a mesh has, the higher its resolution. High resolution is, however, not directly proportional to high accuracy. Take the example of Fig. 3.2 on page 15: the plane to the right is by no means better described than the plane to the left. Even if the plane to the right were simplified, with triangles taken away until only the two triangles of the left plane remained, the definition of the plane would be no less exact. There is no loss in accuracy, because before the simplification there was redundancy in the information. Thus there can be mesh simplification without decreased accuracy. The normal case is, however, that the removal of triangles creates a coarser approximation of the original model.

Figure 3.2: Two planes with different resolution but the same accuracy.
Over the last 10 to 15 years, a large number of articles have been published on the subject of mesh simplification. Every method consists of two parts: how to decide which triangles to remove, and how to remove them. Deciding which triangles to remove is called calculating the cost of reduction or simplification, and removing the triangles is simply called simplification or reduction.
Roughly, the methods presented can be divided into two categories based on the manner in which the cost is calculated: geometry-based and image-based. Far more research has been put into the geometry-based field than into the image-based one, which appeared only recently.
3.3 Geometry-based methods of mesh simplification
Common to the so-called geometry-based simplification processes is the use of a geometric 3D distance measure between the original and the simplified model to guide the simplification process. Minimizing this distance is equivalent to minimizing the error of the approximation. Consequently, the goal of the geometry-based methods is geometric fidelity. In reality, there are very few applications in which geometric fidelity is of much importance; normally it is more interesting that the approximation is visually similar to the original.
Nevertheless, research has long been confined to geometry-based methods. It was not until July 2000, when Lindstrom and Turk released their article on Image-Driven Simplification [10], that anything different appeared (see section 3.4 on page 18).
Even though visual similarity might be more interesting than geometric fidelity, that is not to say that the geometry-based methods are not useful.
Geometry-based methods have been the subject of research since the seventies. Below, an overview of what is judged to be the most important categories of these methods is presented. However, due to the abundance of publications within this field, the list is bound to be incomplete.
3.3.1 Vertex decimation
Vertex decimation was initially described by Schroeder et al. [4]. The method iteratively selects a vertex for removal, removes all adjacent triangles, and re-triangulates the resulting hole (see Fig. 3.3 on page 17). The first algorithm, presented by Schroeder in 1992, carefully preserved topology, which is restricting for multi-resolution rendering systems. However, in 1997 Schroeder presented a supplement to his article [9] describing a vertex decimation technique that did not maintain the topology of the models and could handle non-manifold surfaces, something that the first one could not. The cost of removing triangles - or, as Schroeder calls it, the error e - is based on the triangle's area a:

e_i = \sqrt{a_i}
This error measure has the effect of removing small, isolated triangles first.
Schroeder’s decimation technique has O(n) time complexity which makes it
suitable for large models. It also preserves high-frequency information such as
sharp edges and corners.
Figure 3.3: Vertex Decimation - The triangles are taken away and the hole is
re-triangulated.
3.3.2 Vertex clustering
The algorithm was initially described by Rossignac and Borrel [5]. A bounding box is placed around the original model and divided into a grid. Within each cell, the cell's vertices are clustered together into a single vertex, and the model's triangles are updated accordingly (see Fig. 3.4 on page 17). Instead of a uniform grid, an adaptive structure such as an octree¹ can be used. The process of vertex clustering can be very fast, but it can also make drastic topological changes to the mesh. The size of the grid provides an error bound. The error bound gives a measurement of the geometric difference between the original model and the approximation yielded.

¹ An octree is a hierarchical representation of 3D objects, designed to use less memory than representing every voxel of the object explicitly. Octrees are based on subdividing the full voxel space containing the represented object into eight octants by planes perpendicular to the three coordinate axes.

Figure 3.4: Vertex Clustering - the model is divided into a grid and the vertices within the same cell are clustered into one.
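For illustration, a minimal sketch of the clustering step, assuming the Mesh type from chapter 2. It simply snaps every vertex in a cell to the cell average; Rossignac and Borrel's grading of representative vertices and the removal of the resulting degenerate triangles are omitted:

    #include <array>
    #include <cmath>
    #include <map>
    #include <vector>

    // Cluster vertices into a uniform grid with the given cell size.
    // The cell size doubles as the error bound: no vertex moves farther
    // than one cell diagonal.
    void clusterVertices(Mesh& mesh, float cellSize) {
        std::map<std::array<long, 3>, std::vector<std::size_t>> cells;
        for (std::size_t i = 0; i < mesh.vertices.size(); ++i) {
            const Vertex& v = mesh.vertices[i];
            cells[{ (long)std::floor(v.x / cellSize),
                    (long)std::floor(v.y / cellSize),
                    (long)std::floor(v.z / cellSize) }].push_back(i);
        }
        for (auto& [cell, ids] : cells) {
            Vertex avg{0, 0, 0};
            for (std::size_t i : ids) {
                avg.x += mesh.vertices[i].x;
                avg.y += mesh.vertices[i].y;
                avg.z += mesh.vertices[i].z;
            }
            avg.x /= ids.size(); avg.y /= ids.size(); avg.z /= ids.size();
            for (std::size_t i : ids) mesh.vertices[i] = avg;  // snap to cluster
        }
        // Triangles whose three vertices fell into one cell are now
        // degenerate and would be removed in a subsequent pass.
    }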
3.3.3 Edge contraction
An edge of a triangle is contracted into a single point. The triangles supported by that edge become degenerate and are removed during the contraction (see Fig. 3.5 on page 18). The process is continued iteratively. Several researchers
have made use of this method in their algorithms and the biggest difference
between them lies in how the edge to be contracted is chosen. Examples of
edge contracting algorithms have been presented by Hoppe [6] and Ronfard
and Rossignac [7]. Garland and Heckbert later developed the edge contraction
technique even further [2]. They called it pair contraction and their algorithm
is capable of joining disconnected surfaces.
Figure 3.5: Edge Contraction - the two triangles sharing one edge in the middle are taken away.
3.3.4 Simplification envelopes
Cohen et al. [8] have described this simplification technique. The mesh is encapsulated between an inner and an outer surface. These surfaces define the boundaries that the mesh must stay within during the simplification process. There are a number of assumptions that the surface has to fulfill, which restricts the use of the algorithm. There is a local as well as a global version of the algorithm. Both try to construct new triangles from existing vertices by combining them so that the new triangles fit within the bounding envelopes. The local algorithm provides a fast method for generating approximations of large input meshes, whereas the complexity of the global algorithm is at least O(n²), which makes it unfit for large models. An advantage of the global algorithm is that the bounding envelopes supply an error bound. The simplification envelope algorithm can only handle manifold surfaces.
3.4 Image-based methods of mesh simplification
The notion of image-based or image-driven simplification was recently put forward by Peter Lindstrom and Greg Turk. With their article Image-Driven Simplification [10], they introduced a new concept for controlling the simplification process. Instead of using the geometric 3D distance measured between the original model and the approximation, Lindstrom and Turk guided their algorithm by measuring the difference between pictures of the original model and the approximated model. Turk and Lindstrom were the first to introduce such an approach, and so far they seem to be the only ones researching it.
Like many geometry-based methods, Lindstrom and Turk's uses the edge collapse operator to make incremental changes to the model. To evaluate which edge to collapse, i.e. the cost of taking it away, they use an image metric - a function that gives a measure of the distance between two images. Lindstrom and Turk use the root mean square (RMS) error to evaluate the difference between the pixels in two images. Since it is virtually impossible to capture the entire appearance of an object in a single image, the object is captured from 20 different directions (see Fig. 3.6 on page 19). The corresponding images of the original and the approximation are compared in order to evaluate which edges to collapse in the following iteration.
Figure 3.6: Image-Driven Simplification is guided by the comparison between
images of the original model and the approximated one. In the test accounted
for in their article [10] Turk and Lindstrom used 20 images from different angles.
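The image metric just described can be sketched as follows, assuming two equally sized grayscale renderings given as raw pixel buffers (the actual Lindstrom-Turk implementation compares luminance images from all 20 viewpoints and updates them incrementally):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Root mean square difference between two grayscale images of equal
    // size. 0 means identical; larger values mean the approximation looks
    // less like the original from this viewpoint.
    double rmsImageError(const std::vector<double>& original,
                         const std::vector<double>& simplified) {
        double sum = 0.0;
        for (std::size_t i = 0; i < original.size(); ++i) {
            const double d = original[i] - simplified[i];
            sum += d * d;
        }
        return std::sqrt(sum / original.size());
    }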
Image-driven simplification removes detail that has little effect on rendered images of a model. When some part of a model is invisible in all views, it will be drastically simplified. This feature is highly useful for CAE models that are to be reduced for visualization purposes, since they usually contain many layers of objects; taking away hidden interior objects is otherwise a difficult task.
The biggest drawback of the image-driven algorithm of Turk and Lindstrom is that it is slow. In their article, Turk and Lindstrom compare it with the Garland and Heckbert algorithm (see section 4.1.3 on page 28), and the image-driven one proves many times slower. Fig. 3.7 on page 20 shows a model of a donut reduced to 512 triangles. The results of the two algorithms are similar, but the processing times differ: the Turk and Lindstrom algorithm took almost five times as long (5.33 seconds compared to 1.04).
In the image-driven algorithm's defense, it should be said that when comparing it with a geometry-based algorithm plus an algorithm that removes interior objects, the time difference would probably not be that large.
Figure 3.7: To the left the Qslim algorithm by Garland and Heckbert, in the
middle Image-driven by Turk and Lindstrom and to the right the original model.
3.5 Major features of the mesh simplification algorithms
This section sums up the major features of the mesh simplification algorithms described above. The features are put into a table in order to make it easy to compare the different methods. Even though they were presented together in section 3.3, the Pair Contraction algorithm of Michael Garland and Paul Heckbert is kept separate from the Edge Contraction algorithms. The reason is that the Pair Contraction algorithm contains one crucial feature that Edge Contraction algorithms do not have.
The heading Error measurement in the table below refers to whether there is a measure of the geometric difference between the original model and the approximation produced by the algorithm - whether the quality can be expressed as a geometric measure.
The heading Fast refers to whether the algorithm is reasonably fast. A No in this column means that one of the significant features of the algorithm is that it is slow.
If the table says Depends, the answer can be both Yes and No, depending on which algorithm within that category is referred to. For example, there are Edge Contraction algorithms that can handle arbitrary input, and there are those that cannot.
3.6 VRML - Virtual Reality Modeling Language
VRML is the export format of the Volvo Cars CAE programs. In order to build an application that takes VRML as input, one must understand its syntax. To make this chapter easier to follow, it is recommended to take a look at the schematic picture of the VRML syntax in Appendix A.1 on page 50.
Table 3.1: The table shows the major features of the mesh simplification algorithms examined

Method                     Takes arbitrary  Joins disconnected  Error        Maintains  Fast
                           input            surfaces            measurement  topology
Vertex Decimation          Yes              No                  No           No         Yes
Vertex Clustering          Depends          Yes                 Yes          No         Yes
Edge Contraction           Depends          No                  No           Depends    Yes
Pair Contraction           Yes              Yes                 No           No         Yes
Simplification envelopes   No               No                  Depends      Yes        Depends
Image-Driven               Yes              Yes                 No           No         No
3.6.1 General description
The Virtual Reality Modeling Language (VRML) is a textual language for describing 3D scenes. VRML differs from programming languages like C/C++ in that, instead of describing how the computer should act through a series of commands, it describes how a 3D scene should look. At the highest level of abstraction, VRML is just a way for objects to read and write themselves. VRML defines a set of objects useful for doing 3D graphics, called nodes. The basic node categories are: Shape, Camera/Light, Property, Transformation, Grouping and WWW. Within each category, several subtypes are defined; to illustrate, a Transformation node can be of the subtype Transform, Rotation or Translation, each containing different property fields.
3.6.2 VRML as a scene graph
The nodes in a VRML world are organized in a hierarchical structure called a scene graph. The scene graph decides the ordering of the nodes. It also has a notion of state, which implies that nodes higher up in the hierarchy can affect nodes lower down. A scene graph has many advantages for 3D graphics purposes.
3.6.3 How geometry is described in VRML
The basic building blocks in VRML are shapes described by nodes and their accompanying fields and field values. The basic node type that defines the geometry of a VRML object is the Shape node. The Shape node typically contains a geometry and an appearance field. The geometry field describes the 3D structure, while the appearance field defines the surface properties. The surface properties are based on a material that can be either a color or a texture. The fields of a node can be defined either by values directly or by another node.
CHAPTER 3. FRAME OF REFERENCE 22
Predefined primitives
VRML provides several predefined primitive geometry nodes that can be used to define objects. Such primitives are Box, Cone, Cylinder and Sphere. These objects cannot be simplified directly, since their vertices and coordinates are not explicitly defined.
Vertices and coordinates
Except for the predefined primitives, the geometry of an object is described through vertices and coordinates. Each vertex sets a point in space - i.e. its x-, y- and z-values. The coordinates tell how the vertices are linked together into triangles, quadrilaterals or shapes with more corners. However, rendering systems can only handle triangles, so if the faces are not defined as such, they must be split before the object can be rendered.
The IndexedFaceSet node can be used as the value of the geometry field in the Shape node (see Appendix A.1 on page 50). The vertices are then declared in the coord field and the coordinates in the coordIndex field.
There are many variations of the VRML syntax. Nevertheless, the mesh simplification application will only accept input where the geometry is described in the IndexedFaceSet node.
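As an illustration, here is a minimal IndexedFaceSet of the kind the application accepts (compare Appendix A.1; the scene itself is made up): the coord field declares the vertices, and coordIndex links them into faces, each face terminated by -1.

    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.8 0.1 0.1 }
      }
      geometry IndexedFaceSet {
        coord Coordinate {
          point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]   # four vertices
        }
        coordIndex [ 0, 1, 2, -1,    # first triangle
                     0, 2, 3, -1 ]   # second triangle
      }
    }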
Normals
Normals in VRML are unit vectors used to direct the lighting and shading of the object. The normals are specified in the normal field of the Shape node; typically, the normal field value is given by a Normal node. If the normal field is empty, the normals are automatically computed for each triangle or coordinate in the triangle set, and the normalIndex and normalPerVertex fields are ignored. Otherwise, the Normal node holds the x-, y- and z-values of the normals, the normalIndex field tells which normal belongs to which face or coordinate, and the normalPerVertex field tells whether the normals apply per face or per coordinate.
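When the normal field is empty, the viewer must derive the face normals itself from the triangle vertices. A sketch of that computation for a single triangle, reusing the Vertex type from the chapter 2 listing:

    #include <cmath>

    // Unit normal of the triangle (a, b, c) via the cross product of two
    // edge vectors. Its direction depends on the vertex ordering, which is
    // why consistent orientation (chapter 2) matters for shading.
    Vertex faceNormal(const Vertex& a, const Vertex& b, const Vertex& c) {
        const Vertex e1{b.x - a.x, b.y - a.y, b.z - a.z};
        const Vertex e2{c.x - a.x, c.y - a.y, c.z - a.z};
        Vertex n{e1.y * e2.z - e1.z * e2.y,
                 e1.z * e2.x - e1.x * e2.z,
                 e1.x * e2.y - e1.y * e2.x};
        const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }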
3.6.4 The Volvo Cars VRML models
The Volvo Cars 3D VRML models that will be used in the tests later on have the following properties:
- The models consist of many smaller objects linked together by the scene graph.
- The files are in general very large and can easily contain millions of polygons.
- Nothing is known about how the models are exported from the CAE programs, and thus it is best to assume that the structure of the mesh is arbitrary, i.e. it can be manifold, non-manifold, orientable, non-orientable, etc.
- The models contain no textures, but the different parts have different colors.
- Because of the SCALE property in VRML, the objects will not necessarily have the size that the vertices of the triangles indicate, since they can be declared at one size and then scaled to another in the rendering process.
- The vertices and coordinates are declared in the IndexedFaceSet node.
Chapter 4
Results - research and development
This chapter presents the results of the research and development phase of the thesis. The results give the answers to research question one - Which of the presently existing mesh simplification algorithms conforms the most to the needs of Volvo Cars? - and two - What features does a mesh simplification program need in order to reduce the Volvo Cars 3D digital models?
4.1 Researching the mesh simplification field
At Volvo Cars, the 3D models are considered to be the originals, while the real, physical objects are the copies. The quality of the digital material is therefore important. The aim of the selection process is naturally to pick a mesh simplification algorithm that manages to reduce the resolution without compromising too much on quality.
It is also important to look at the constitution of the polygon mesh in the Volvo Cars models. Since nothing is known about the export functions of the CAE programs used at Volvo Cars, it is best to assume that it is arbitrary, i.e. manifold, non-manifold, orientable, non-orientable, etc. Thus the algorithm must be capable of handling such input.
The following section summarizes the prerequisites that a mesh simplification algorithm should fulfill in order to function on the Volvo Cars digital models. To understand some of the requirements, it is recommended to keep the properties of the Volvo Cars models in mind. These are stated in section 3.6.4 on page 22.
4.1.1 Requirements for the mesh simplification algorithm
- The algorithm should be capable of handling arbitrary input.
- It is preferable that the algorithm can join disconnected surfaces (see Fig. 4.1 on page 25).
Figure 4.1: The original model to the left is simplified with an algorithm that
disallows the joining of disconnected surfaces (middle) and with one that allows
it (right).
- The algorithm should be able to handle large models.
- It is preferable that the algorithm is reasonably fast.
- The algorithm should not cause any flipping over of triangles. A flipped-over triangle will not reflect light properly and will appear black in the rendered model (see the sketch after this list).
- It is preferable if the algorithm has a built-in handling of hidden objects.
- The algorithm should be fairly straightforward to implement.
- It is preferable that the algorithm produces a geometric measure of how
much the approximation differs from the original model.
- The manner in which triangles are taken away should be intelligent and
relate to the purpose of mesh simplification.
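The triangle-flip requirement in particular lends itself to a mechanical test: before a vertex move is accepted, each surviving incident triangle's normal is compared before and after the move. A hypothetical sketch, reusing the faceNormal helper from section 3.6.3 (not the thesis program's actual code):

    // Reject a proposed vertex move if it would flip the triangle whose
    // remaining corners are a and b. A contraction is only applied when
    // this returns true for every triangle that survives it.
    bool moveKeepsOrientation(const Vertex& a, const Vertex& b,
                              const Vertex& oldPos, const Vertex& newPos) {
        const Vertex before = faceNormal(a, b, oldPos);
        const Vertex after  = faceNormal(a, b, newPos);
        const float dot = before.x * after.x + before.y * after.y
                        + before.z * after.z;
        return dot > 0.0f;   // normal stayed on the same side: no flip
    }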
4.1.2 Choosing an algorithm
The requirements stated in the previous section automatically exclude some simplification algorithms. Which ones to leave out can be deduced by looking at the major features of the various algorithms.
To make it easier for the reader, the table from section 3.5 on page 20 is included again. If instructions on how to read the table are needed, the reader should turn back to section 3.5.
Note that there is a difference between the Error measurement of the table and the Error metrics mentioned further on. Error metrics refers to what guides the algorithm - how it is decided which triangles to take away. Error measurement refers to whether the algorithm produces a geometric measurement telling how much the approximated model differs from the original. An error measurement is interesting to have, since it makes it possible to set boundary values in terms of a geometric measure: it is possible to say that the model should be reduced until a boundary value of, say, 5 millimeters difference between the original and the approximation is reached, instead of just stating that 50% of the faces should be taken away.
Table 4.1: The table shows the major features of the mesh simplification algorithms examined

Method                     Takes arbitrary  Joins disconnected  Error        Maintains  Fast
                           input            surfaces            measurement  topology
Vertex Decimation          Yes              No                  No           No         Yes
Vertex Clustering          Depends          Yes                 Yes          No         Yes
Edge Contraction           Depends          No                  No           Depends    Yes
Pair Contraction           Yes              Yes                 No           No         Yes
Simplification envelopes   No               No                  Depends      Yes        Depends
Image-Driven               Yes              Yes                 No           No         No
Every algorithm has error metrics, but not all provide an error measurement.
The table tells us that the Simplification envelopes algorithm does not fulfill the requirement of arbitrary input. Arbitrary input is an important requirement, and not accepting it strongly speaks against the Simplification envelopes algorithm.
The Pair Contraction algorithm is an improvement of the Edge Contraction algorithms. As can be seen from the table, the Pair Contraction algorithm fulfills more requirements than the Edge Contraction algorithms, since it can join disconnected surfaces. In this case, the Edge Contraction algorithms have no other advantage over Pair Contraction, and they are thus excluded in favor of Pair Contraction.
This leaves four algorithms: Vertex Decimation, Vertex Clustering, Pair Contraction and Image-Driven.
The error metrics of Vertex Decimation do not seem convincing. The triangles are taken away in order of area, going from small to large. Even though it is plausible that taking away small triangles causes less harm than taking away large ones, there is no guarantee. It would be more interesting if the error metrics were based on the difference between the original and the approximation.
Vertex Clustering is known to be a fast and sloppy algorithm - sloppy in the sense that it can make drastic changes to the mesh. It does not have a thought-through methodology for deciding which triangles to take away. Instead, all the vertices within a grid cell are clustered together, no matter what the cost of such an operation might be. Since it is important that the approximated models differ as little as possible from their originals, the rather unscientific approach of the Vertex Clustering algorithm is unappealing.
There are three major arguments that decide between the Image-Driven and the Pair Contraction algorithms. The first argument can be found in the table: the Image-Driven algorithm is slow. As stated in section 3.6.4, the input models are usually very large, making the speed of the algorithm an issue. This obviously speaks against the Image-Driven algorithm. On the other hand, there is no claim that the application should be capable of working in real time, so the time argument is not strong enough to exclude this algorithm on its own.
The advantage of the Image-Driven algorithm over Pair Contraction is that it has a built-in handling of hidden objects. Looking at the structure of many of the Volvo Cars models (see section 5.2.2 on page 36), it becomes clear that such a feature could be very useful. However, one would want the choice of turning the hidden surface handler on and off, since there might be cases where the interior of a model should be maintained. With the Image-Driven algorithm, that would be impossible. It is also perfectly possible to add a hidden surface handler to any mesh simplification algorithm. Consequently, the built-in handling of hidden objects cannot be used as an argument for the Image-Driven algorithm.
The third argument is the most important one. As described, the Image-Driven algorithm is guided by the visual comparison between the original and the approximated model. This is an excellent technique if it can be ensured that only the visual matters. With the Volvo Cars application, there is no such guarantee. If the mesh simplification application were to be used only on presentation material, a visual comparison could be enough. However, there is a possibility that Volvo Cars would want to extend the use of mesh simplification and apply it to all their models - even the ones used for geometry simulation and hence collision detection. In such a case, geometry becomes more important than appearance, and guiding the algorithm by what looks good would not be sufficient.
Judging by the arguments described above, the algorithm that best fits the requirements is Garland and Heckbert's Pair Contraction [2]. The algorithm takes arbitrary input, joins disconnected surfaces, does not maintain topology, and is reasonably fast. The one major feature missing is the geometric error measure. It is, however, perfectly possible to add such a quality to the algorithm; Garland himself has published a paper [1] on how this could be done.
The Garland and Heckbert algorithm is guided by an error metric that carefully measures how much the mesh would change if a certain triangle were taken away. It is this error metric that gives the algorithm its credibility, and it is probably the reason why the algorithm has had such success within the computer graphics community. The error metrics of other algorithms, such as Vertex Clustering's or Vertex Decimation's, are not adapted to the actual purpose the way Pair Contraction's is. Garland and Heckbert have looked at what is really important in the context - a minimized difference between original and approximation - and let their algorithm be guided accordingly.
The Garland and Heckbert algorithm is presented in detail in the next section.
4.1.3 Mesh simplification algorithm of choice
Garland and Heckbert first presented their algorithm in an article [2] - Surface Simplification Using Quadric Error Metrics - presented at SIGGRAPH in 1997. Since then, some improvements have been made to it [3], primarily for the handling of colors and textures. However, the part of the algorithm used in this thesis work is entirely from the 1997 article.
The essence of the Garland and Heckbert algorithm is presented here.
Pair contraction
The Michael Garland and Paul S. Heckbert surface simplification algorithm is based on the iterative contraction of vertex pairs. A pair contraction, written (v1, v2) → v, moves the vertices v1 and v2 to the new position v, connects all their incident edges to v1, and deletes the vertex v2. If (v1, v2) is an edge, then one or more faces will be removed (see Fig. 4.2 on page 28). Otherwise, two previously separate sections of the model will be joined at v (see Fig. 4.3 on page 29). Using pair contraction instead of edge contraction makes it possible to merge individual components into a single object.
The algorithm is based on the assumption that, in a good approximation, points do not move far from their original positions. A pair is valid for contraction if either:
1. (v1, v2) is an edge, or
2. ||v1 − v2|| < t, where t is a threshold parameter.

Using a threshold of t = 0 gives a simple edge contraction algorithm (see Fig. 4.2 on page 28). Higher thresholds allow non-connected vertices to be paired (see Fig. 4.3 on page 29). If the threshold is too high, widely separated portions of the model can be connected, and O(n²) pairs could be created. The set of valid pairs is chosen at initialization time. When the contraction (v1, v2) → v is performed, every occurrence of v2 in a valid pair is replaced by v1, and duplicated pairs are removed.

Figure 4.2: Standard edge contraction (t = 0). The edge shared between the two shaded triangles is contracted into a single point. The shaded triangles become degenerate and are removed.
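The simplification as a whole is then a greedy loop over a priority queue of valid pairs, always contracting the cheapest pair first. A structural sketch, assuming the Mesh type from chapter 2; contract() stands in for the quadric machinery described in the following subsections:

    #include <functional>
    #include <queue>
    #include <vector>

    struct Pair {
        std::size_t v1, v2;   // a valid pair: an edge, or ||v1 - v2|| < t
        double cost;          // error of contracting (v1, v2) -> v
        bool operator>(const Pair& o) const { return cost > o.cost; }
    };

    // Placeholder: performs (v1, v2) -> v and re-costs the affected pairs.
    void contract(Mesh& mesh, const Pair& p);

    // Greedy simplification: repeatedly contract the cheapest valid pair
    // until the triangle budget (e.g. 20% of the original) is reached.
    // A real implementation also refreshes heap entries invalidated by
    // each contraction.
    void simplify(Mesh& mesh, const std::vector<Pair>& validPairs,
                  std::size_t targetTriangles) {
        std::priority_queue<Pair, std::vector<Pair>, std::greater<Pair>>
            heap(validPairs.begin(), validPairs.end());
        while (mesh.triangles.size() > targetTriangles && !heap.empty()) {
            const Pair p = heap.top();
            heap.pop();
            contract(mesh, p);
        }
    }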
Figure 4.3: Non-edge contraction (t > 0). When the threshold t > 0, non-edge pairs can be contracted and unconnected regions joined together.
Defining the cost of a contraction
In order to select which contraction to perform, Garland and Heckbert have introduced the notion of cost. To define the cost, they attempt to characterize the error at each vertex. In the original model, each vertex is the solution of the intersection of a set of planes - the planes of the triangles that meet at the vertex. A set of planes can thus be associated with each vertex, and the error can be defined as the sum of squared distances to its planes:
\Delta(v) = \Delta([v_x \; v_y \; v_z \; 1]^T) = \sum_{p \in \mathrm{planes}(v)} (p^T v)^2 \qquad (4.1)

where p = [a \; b \; c \; d]^T represents the plane defined by the equation
ax + by + cz + d = 0, with a^2 + b^2 + c^2 = 1. The error metric described above can be
rewritten in the quadratic form:

\Delta(v) = \sum_{p \in \mathrm{planes}(v)} (v^T p)(p^T v)
          = \sum_{p \in \mathrm{planes}(v)} v^T (p p^T) v
          = v^T \Big( \sum_{p \in \mathrm{planes}(v)} K_p \Big) v \qquad (4.2)

where K_p is the matrix:

K_p = p p^T = \begin{bmatrix} a^2 & ab & ac & ad \\ ab & b^2 & bc & bd \\ ac & bc & c^2 & cd \\ ad & bd & cd & d^2 \end{bmatrix} \qquad (4.3)
K_p can be used to find the squared distance of any point in space to the plane
p. The sum of all K_p can be represented by a single matrix Q, which then
represents the entire set of planes of a vertex. The error at vertex
v = [v_x \; v_y \; v_z \; 1]^T can then be expressed as the quadratic form:

\Delta(v) = v^T Q v \qquad (4.4)
For every contraction (v1, v2) → v a new matrix Q, approximating the error at
the new vertex v, must be derived. Garland and Heckbert have chosen to use
the simple additive rule Q = Q1 + Q2.
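As an illustration, the quadrics and the additive rule can be captured in a few lines of C++. The following is a minimal sketch written for this report - it is not the Qslim code or the thesis program, and the Quadric type with its members is an invention of the sketch.

    // Minimal sketch of a 4x4 error quadric, stored in full for clarity.
    // Assumes the plane p = [a b c d]^T is normalized so that
    // a^2 + b^2 + c^2 = 1 (equation 4.1).
    struct Quadric {
        double m[4][4] = {};

        // K_p = p p^T for one supporting plane (equation 4.3).
        static Quadric fromPlane(double a, double b, double c, double d) {
            const double p[4] = {a, b, c, d};
            Quadric q;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    q.m[i][j] = p[i] * p[j];
            return q;
        }

        // Additive rule: the quadric of a contracted pair is Q1 + Q2.
        Quadric operator+(const Quadric& o) const {
            Quadric q;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    q.m[i][j] = m[i][j] + o.m[i][j];
            return q;
        }

        // Error Delta(v) = v^T Q v for v = [x y z 1]^T (equation 4.4).
        double error(double x, double y, double z) const {
            const double v[4] = {x, y, z, 1.0};
            double e = 0.0;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    e += v[i] * m[i][j] * v[j];
            return e;
        }
    };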
Selecting v
Before the contraction (v1, v2) → v can be performed, a new position for v must
be chosen. The easy solution would be to select either v1, v2 or (v1 + v2)/2,
depending on which of them has the lowest value of ∆(v). However, Garland
and Heckbert go further and choose the v that minimizes ∆(v). The
error function ∆ is quadratic, so finding its minimum is a linear problem.
v is found by solving ∂∆/∂x = ∂∆/∂y = ∂∆/∂z = 0, which is the same as
solving for v in the following system of equations:
\begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} v = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad (4.5)
Assuming that the Q matrix is invertible gives the following solution to equation
4.5:
v = \begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad (4.6)
If Q is not invertible, the algorithm seeks the optimal vertex along the edge
v1v2. If that also fails, v is chosen from among the endpoints and the midpoint.
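A sketch of this placement step, reusing the Quadric sketch above, is given here. Since the last row of the system (4.5) only pins the homogeneous coordinate to 1, minimizing ∆(v) reduces to the 3x3 system Ax = -b, where A is the upper-left block of Q and b = (q14, q24, q34). The Vec3 type (redeclared here so the sketch is self-contained), the Cramer's-rule solve and the tolerance are choices of this sketch, not the thesis program's code.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // 3x3 determinant, used for the Cramer's rule solve below.
    static double det3(const double a[3][3]) {
        return a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
             - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
             + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
    }

    // Returns true and writes the minimizer of Delta(v) to out if Q is
    // invertible; otherwise the caller falls back to searching along the
    // edge v1v2 and finally to the endpoints and the midpoint.
    bool optimalPlacement(const Quadric& q, Vec3& out) {
        const double A[3][3] = {
            {q.m[0][0], q.m[0][1], q.m[0][2]},
            {q.m[1][0], q.m[1][1], q.m[1][2]},
            {q.m[2][0], q.m[2][1], q.m[2][2]},
        };
        const double b[3] = {q.m[0][3], q.m[1][3], q.m[2][3]};
        const double det = det3(A);
        if (std::fabs(det) < 1e-12)   // tolerance chosen arbitrarily here
            return false;

        double x[3];
        for (int col = 0; col < 3; ++col) {
            double Ai[3][3];
            for (int i = 0; i < 3; ++i)
                for (int j = 0; j < 3; ++j)
                    Ai[i][j] = (j == col) ? -b[i] : A[i][j];
            x[col] = det3(Ai) / det;  // Cramer's rule on A x = -b
        }
        out = {x[0], x[1], x[2]};
        return true;
    }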
Geometric interpretation of the cost
The level surface ∆(v) = ε, i.e. the set of all points whose error with respect to
Q is ε, represents a quadratic surface. Geometrically these are almost always
ellipsoids with v as the center of the ellipsoid (see Fig. 4.4 on page 30).
Figure 4.4: Geometric interpretation of quadratic error metrics.
4.2 Development of the application
The primary aim of the thesis is to examine if the Volvo Cars' 3D digital models
could be reduced to containing less than 20% of their original polygons using a mesh
simplification algorithm. The method (see section 1.1 on page 9) chosen to reach
this aim was to create a C++ program that would take the Volvo Cars' VRML
models as input and give simplified approximations of these as output.
The basis of the C++ program would be the implementation of an existing
mesh simplification algorithm.
Section 4.1 on page 24 gives an account of which algorithm was selected
and why. As seen, the choice fell on Michael Garland and Paul Heckbert's Pair
Contraction simplification with quadric error metrics [2]. The theory of the
algorithm is fully presented in section 4.1.3 on page 28.
The first program created consisted of the three parts visualized in Fig. 4.5
on page 31. Part one takes care of the user input from the DOS prompt and
the parsing of the VRML file containing the original model. To parse a file means
to read it and to extract information from it. The file is read and the model
is stored in memory as a scene graph structure. (Section 3.6 on page 20
tells more about this structure.) Part one then runs a loop that extracts the
vertices and coordinates from the first node in the scene graph. The polygons are
split into triangles and the information is sent on to Part two, where the actual
simplification takes place. Once the vertices and coordinates are simplified to
the desired percentage they are sent on to Part three, where they are reinserted into
the tree structure. The program then moves back to Part one and the next
node in the hierarchy is approached. The loop goes on until all nodes have been
simplified. Once finished, Part three writes the new VRML structure to a file.

Figure 4.5: The C++ program created for the mesh simplification application
consists of three parts and the user interface.
This is the basis of the program. To yield better results the program has been
altered slightly during testing. The next chapter accounts for these changes.
The specifics about each part of the application are accounted for in the
three following sections; a minimal sketch of the overall flow is given first.
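The sketch below only illustrates the three-part flow. The types and helper functions are hypothetical stand-ins for the real parts (the actual program uses the CyberVRML97 parser and the Qslim-based simplifier), and the stub bodies are placeholders.

    #include <string>
    #include <vector>

    struct Node { std::vector<double> vertices; std::vector<int> coordIndex; };
    struct SceneGraph { std::vector<Node> nodes; };

    // Part one: parse the VRML file into a scene graph (stub).
    SceneGraph parseVRML(const std::string& path) { return SceneGraph(); }
    // Part two: simplify one node to the given fraction of polygons (stub).
    void simplifyNode(Node& node, double keepFraction) {}
    // Part three: reinsert the new geometry and write it back out (stub).
    void writeVRML(const SceneGraph& g, const std::string& path) {}

    void simplifyModel(const std::string& in, const std::string& out,
                       double keepFraction) {
        SceneGraph graph = parseVRML(in);
        for (Node& node : graph.nodes)        // node by node, not the whole model
            simplifyNode(node, keepFraction);
        writeVRML(graph, out);
    }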
4.2.1 Part one
For Part one of the program there were three major problems that had to be
solved, and the essence of each will be accounted for here.
Node by node vs. entire model
The VRML model is simplified node by node and not the entire model at
once. This means that an object in the VRML scene graph is extracted,
simplified and reinserted into the hierarchy before the next object is approached.
There are advantages and disadvantages with this method. An advantage is
that there is no limit to the size of the entire model, since it is divided
into parts and the algorithm only treats a little at a time.
Simplifying only the vertices and coordinates of the same node does not
change their surface properties (since they are the same for all of them), and
thus there is no need to send that information along. It also preserves the
internal order of the objects and does not break the VRML scene graph apart.
Simplifying node by node also makes it possible to improve the speed of the
application by creating parallel threads in the program, so that nodes are read
from the VRML file, simplified and reinserted simultaneously.
The main disadvantage with node by node simplification appears when the
polygons are unevenly distributed in the model. If the entire model is simplified
at once, polygons will be taken away primarily from the denser parts of
the model, because the cost of removing them there is lower. When the model
is reduced in parts, every node is reduced to the same extent (50%, 70%,
90% etc.) no matter if it belongs to the dense part or not. This is a major
disadvantage and could lower the quality considerably.
Which information to extract
The only information extracted from the VRML node and sent on to the
next part of the program is the vertices (the x, y and z values of each point) and
the coordinates (the information about how the vertices are connected).
The VRML-model is reduced node by node and thus the information about
the surface properties does not need to be extracted and connected to each
triangle. All the triangles that are reduced at the same time have the same
surface properties and after simplification is performed they are put back into
their original node where their surface properties are described.
Not even the normals are taken out and connected to the triangles. Instead
they are taken away for each node in the VRML model. It is not necessary to
set the normals explicitly in VRML; if the normals are not set, the VRML
viewer will calculate new normals for each triangle.
How to split the polygons
In VRML a polygon can easily consist of more than 20 vertices. Since the Gar-
land and Heckbert algorithm (see section 4.1.3 on page 28) can only handle triangles,
the VRML polygons must be split before being sent on to simplification.
From VRML 1.3 and onward there is support for non-convex polygons. Nev-
ertheless, their use is not recommended, and in this application it has been
assumed that all polygons are convex. Dealing only with convex polygons makes
the splitting easier. From the sixth vertex of a polygon and onward, the splitting
can be done according to the following rule:

setCoordIndex(6th vertex, i, i + 1, −1)

To familiarize yourself with the VRML syntax, please have a look at Fig. A.1 on
page 50.
There are two special cases, for polygons with four and five vertices. In
larger polygons the first five vertices are treated according to the five-vertex
scheme and the rest according to the rule described above. A sketch of this
kind of splitting is given below.
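As an illustration of this kind of splitting, the following sketch triangulates a convex polygon as a triangle fan anchored at its first vertex. It only illustrates the idea; the exact scheme used in the thesis program, including its four- and five-vertex special cases, may differ.

    #include <cstddef>
    #include <vector>

    // Input: the vertex indices of one convex polygon, as listed in a VRML
    // coordIndex field (without the terminating -1).
    // Output: triangle indices appended to tris as flat triples.
    void fanTriangulate(const std::vector<int>& poly, std::vector<int>& tris) {
        for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
            tris.push_back(poly[0]);      // fan anchor: the first vertex
            tris.push_back(poly[i]);
            tris.push_back(poly[i + 1]);
        }
    }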
4.2.2 Part two
In Part two the actual simplification takes place. The code is an implementa-
tion of Garland and Heckbert's mesh simplification algorithm described in
section 4.1.3 on page 28.
The code uses functions from the Qslim package, which is Open Source
software put together by Garland himself. Appendix B tells more about the
Qslim package. The flow of the code in Part two can be summarized as follows:
- Compute the Q matrices for all the initial vertices
- Select all valid pairs
- Compute the optimal contraction target v for each valid pair (v1, v2). The
error v^T (Q1 + Q2) v of this target vertex becomes the cost of contracting
that pair.
- Place all the pairs in a heap ordered by cost with the minimum cost pair
on the top.
- Iteratively remove the pair (v1, v2) of least cost from the heap, contract
this pair, and update the cost of the pairs involving v1.
Once the vertices and coordinates have been simplified they are sent on to Part three. A condensed sketch of the loop above follows.
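The sketch below uses std::priority_queue as the cost-ordered heap; the Pair struct and the commented-out contraction step are stand-ins written for this report, not the Qslim package's actual interface.

    #include <functional>
    #include <queue>
    #include <vector>

    struct Pair {
        int v1, v2;    // the vertices of the valid pair
        double cost;   // v^T (Q1 + Q2) v at the optimal target position
        bool operator>(const Pair& o) const { return cost > o.cost; }
    };

    void simplify(const std::vector<Pair>& validPairs, int contractions) {
        // Min-heap: the pair with the lowest cost is always on top.
        std::priority_queue<Pair, std::vector<Pair>, std::greater<Pair> >
            heap(validPairs.begin(), validPairs.end());

        for (int n = 0; n < contractions && !heap.empty(); ++n) {
            Pair p = heap.top();
            heap.pop();
            // Contract p: move v1 to the optimal target, delete v2, replace
            // every occurrence of v2 in valid pairs by v1, and re-push the
            // pairs involving v1 with updated costs.
        }
    }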
4.2.3 Part three
The work of Part three is actually quite simple; no major difficulties were
encountered and no big problems had to be solved. Part three deletes the old
vertices and coordinates from the VRML node and inserts the new ones. This
was achieved simply by calling functions from the CyberVRML97 library. To
print the VRML scene graph back to the file, the print function of
CyberVRML97 was used.
Chapter 5
Results - testings
The previous chapter accounted for the choice of mesh simplification algorithm
and how it was implemented. Now it is time to use the application created
in order to fulfill the aim of the thesis - to examine if the Volvo Cars' 3D digital
models could be reduced to containing less than 20% of their original polygons using
a mesh simplification algorithm.
The results were produced in several sets. When Garland's Qslim pack-
age had been linked to the VRML parser and the interface between them was
functioning satisfactorily, the first round of testing was performed. Then followed
improvements of the application, and thereafter new tests.
The expectation was to be able to take away more than 80% of the polygons
from the original models without creating major degeneracies in the approxi-
mations.
The tests were performed on models that are used in the simulation process
at Volvo Cars.
In this chapter the testing and analysis of two different models will be ac-
counted for. The models have distinct characteristics but are both representative
of what the Volvo Cars' models might look like.
5.1 The size of the file
It is important to realize that the size of the file containing a 3D model is not
directly proportional to the number of triangles the model contains. (This fact
has been discussed previously in section 3.1 on page 14.) A file of 25 Mb can
actually contain a lighter model than a file of 20 Mb. It depends very much on
how compactly the file is written. When splitting the polygons of more
than three vertices into triangles, the VRML file becomes bigger in size, because
expressing the polygons as triangles is a less compact way of writing.
At Volvo Cars the size of a model is always measured by the size of
the file, which is misleading.
5.2 Result of the testings
For the first tests, small models of some 100 000 polygons were used. Simplifying
larger models (some millions of polygons) is very time consuming, and in the
beginning quick feedback is important.
5.2.1 The bracket model
The first object tested was a model of a so-called bracket, which is a small part
of the outer front of the car.
In the first tests the results were not at all satisfactory. A simplification
of merely 50% already created problems, and taking away more than 80% of the
original polygons proved unthinkable. The first conclusion drawn was that the
model did not have as high a resolution as first thought and could
therefore not be simplified as much. It seemed curious, however, that this small
model would contain so many polygons if it was not of high resolution. An
application was created that counted the polygons of each node in the VRML
models. When printing this information, some very interesting facts were discovered.
It turned out that the polygons were unevenly distributed throughout the model.
Some parts were as simplified as they could possibly be, while others were of
extreme resolution. In this particular case the parts modeled in very high
resolution were rather small, and thus difficult to discover merely by looking at
the model rendered in wire frame. This is illustrated in Fig. 5.1 on page 35:
the two small holds in the upper corners of the model actually constitute 88%
of the total number of polygons in the entire model.
Figure 5.1: Original model (91 638 polygons) with the high resolution hold to
the right. The hold almost appears not to be rendered in wire frame due to its
excessive number of polygons.
Simplifying the model in Fig. 5.1 by applying the same amount of reduction
to the entire model produced a very poor result, which can be seen in Fig. 5.2
on page 36. 88% of the polygons were taken away, and the appearance of the
approximation created is far from the original. Especially disturbing are the
large holes created when simplifying already low resolution parts of the model.
Figure 5.2: Approximation containing only 12% of the original polygons. Every
component is reduced to the same extent, leaving the holds with superfluous
polygons and the rest of the model with too few of them.
Setting the parameters differently, so that only the high resolution holds were
simplified while the rest of the model was left untouched, generated a completely
different result. Achieving the same amount of simplification (88% of the original
polygons taken away), the approximation produced this time, seen in Fig.
5.3 on page 36, is hard to tell apart from the original model in Fig. 5.1.
Figure 5.3: Approximation containing only 12% of the original polygons. The
holds are reduced to only containing 0.5% of the original polygons while the
rest of the model is maintained intact.
The result shown in Fig. 5.3 is indeed satisfactory. However, to reach such an
outcome the model had to be analyzed and the parameters of the simplification
algorithm set according to its structure. Since this is not an automated
procedure, it would be time consuming to do for every single model.
5.2.2 The robot model
The next object to be examined is a model of an industrial robot. Before a robot
can be used in production it must be decided how it should move and what
it should do. This is done through geometry simulation using the digital
models.
Analyzing the model
Not every model has unevenly distributed polygons. The robot model in Fig.
5.4 on page 37 has no such defects. Its polygons are evenly spread over the
model and the resolution is well balanced. Consequently there is not much to cut
away. However, there may still be a need to reduce the number of polygons. To
reach a good result one might be forced to look outside the scope of traditional
mesh simplification, and analyzing the model might give some clues to alternative
solutions.
Figure 5.4: The polygons of this robot are evenly distributed over the model
and the resolution is already rather low.
The model contains many small items, especially around the hook, as can
be seen in Fig. 5.5 on page 38. In the same figure it can also be seen that the
gray tube holders on top of the robot have an advanced shape. They are
not plain cylinders but rather have the shape of two cones merged together.
Describing such shapes demands more polygons than describing cylinders.
Examining the body of the robot, it is discovered that it is hollow. In Fig.
5.5 a cylindrical part is displayed, and it shows clearly that not only the outside
of the tube is modeled but also the inside. This kind of modeling demands
almost twice as many polygons as if the objects had been made solid
or only as a surface. The visual appearance of the outside of the model does
not depend on whether it is modeled as hollow or solid, nor does it matter for
applications such as collision detection.
Figure 5.5: Details of the robot displayed in Fig. 5.4 on page 37.
Test results
Simplifying the robot model from Fig. 5.4 by taking away 75% of its polygons
yields a rather dreadful result, illustrated in Fig. 5.6 on page 39.
The tube holders are badly damaged and look like a bird's nest. The back
side of the robot has lost its shape. The small gray tubes that run along the body
of the robot have in some places been replaced by one large triangle instead of
cylindrical tubes. Some of the small objects around the hook look more or less
the same, while others have disappeared completely. It is still possible to tell
what the model represents, but the approximation does not look professional.
Since simplifying the robot model straight off, without any adjustments, did
not yield a very good result, the implementation was improved in order to make
it possible to adjust the simplification to fit the robot model.
The first idea was to simplify separate objects within
the model differently depending on their size. The robot model contains many
small items. Normally the robots are shown in the context of a robot cell, where
there are several robots and other objects. In such a context the small items
cannot be seen.
The parameters were set so that the small items of the robot model were
completely taken away and the rest of the model simplified. The simplification
was set to produce the same amount of total simplification as in the previous
test - 75% of the polygons taken away. However, since the small items were
reduced harder, the larger objects were not as heavily simplified.
The approximated model displayed in Fig. 5.7 on page 40 shows the result
of the test with a heavier reduction of small items. The main body of the
robot is satisfactorily maintained, and the approximation of the tube holders is
not as bad as in Fig. 5.6, but not completely acceptable either. There are still
some degeneracies in them that are visually unappealing. However, the overall
appearance is much better.
Taking away small objects proved to be an interesting approach for some
applications - especially presentation material, since it only needs to look good
and not always be geometrically correct.

Figure 5.6: The robot from Fig. 5.4 has been simplified and has had 75% of
its polygons taken away.
To yield even better results for the robot model and other similar models,
it would be interesting to test some other methods. One approach would be to
try to eliminate the polygons that cannot be seen from the outside, such as
the ones inside the cylinder from Fig. 5.5. Implementing such a method would
probably make it possible to take away 30-40% of the polygons without any loss
in visual appearance. It would not be mesh simplification in its true sense but
would serve well to reduce the number of polygons in the models.
Another interesting approach would be the replacement of objects with simi-
lar but simpler ones. Take the tube holders of the robot model as an example:
the hollow objects on top of them (see Fig. 5.5) could be replaced by cylin-
ders of the same size demanding fewer polygons. Seen from a distance, such a
swap would not affect the visual appearance much. In the same model the
rods, modeled as narrow cylinders, could be represented as boxes, which do not
demand as many polygons.
5.2.3 Late testings
As the very last test, a larger model of a Volvo V70 body was tested. The model
contained 294 400 triangles and the file was 28 Mb. By Volvo standards this is
still not a large file, but compared with the other files tested it was considerably
bigger. Fig. 5.8 on page 41 shows the model rendered with surface properties and
Fig. 5.9 on page 41 shows the same model rendered in wire frame.
Figure 5.7: The robot from Fig. 5.4 has been reduced to containing only 25% of its
original polygons. The small objects have been reduced harder than the large
ones.
In the wire frame model in Fig. 5.9 it can clearly be seen that some parts
appear denser than others. These sections are modeled in a higher resolution,
and it is the high resolution parts that should primarily be simplified.
At the first attempt the model was simplified node by node, as described in
section 4.2.1 on page 32. Even though there were several trials with different
parameter settings, the results obtained were not satisfactory. As little
as a 50% simplification caused problems.
As a last try, the C++ program was changed so that the entire model was
reduced at once instead of node by node. The change yielded much better
results. The model in Fig. 5.10 on page 42 is simplified to containing 17% of
the original model's polygons. The result is satisfactory. However, looking at
the model closely one can see that there are small holes in the mesh. Some
were there in the original model as well, but others are new, especially those
along the horizontal line on the doors. These small holes are not acceptable in
an application such as collision detection. The holes are probably due to bad
connections between the different objects in the model, which make simplification
more difficult.
The wire frame rendering of the approximated model, seen in Fig. 5.11
on page 42, shows that the dense parts of the model are still of high resolution.
Playing around with the parameters, in order to avoid the problem with the
small holes in the mesh, would probably make it possible to simplify the model
some 10% more.
Figure 5.8: Volvo V70 original shaded. The model contains 294 400 triangles.
The file is 28 Mb.
Figure 5.9: Volvo V70 original wire frame. The denser parts are parts with
higher resolution.
The two models - the original in Fig. 5.8 and the approximation in Fig.
5.10 - do not have exactly the same color. This is because, when the C++
program was changed to test full-model simplification, no handling of colors
was implemented.
Figure 5.10: Volvo V70 simplified, shaded. The model is reduced to containing
17% of the original polygons.
Figure 5.11: Volvo V70 simplified, wire frame. As seen, the high resolution parts
are still rather dense. Nevertheless, simplifying more yields degeneracies in other
parts of the mesh.
Chapter 6
Conclusion
The primary aim of this thesis was to examine if the Volvo Cars' 3D digital
models could be reduced to containing less than 20% of their original polygons
using a mesh simplification algorithm. To reach this goal, the thesis focused on the
development of a C++ program performing mesh simplification based on an
existing algorithm. To accomplish the goal, three main research questions, to
be answered by the thesis, were put forth. The questions were:
Q1 Which of the presently existing mesh simplification algorithms conforms
the most to the needs of Volvo Cars?
Q2 What features does a mesh simplification program able to reduce the Volvo
Cars' 3D digital models need to have?
Q3 If it is not possible to reduce the Volvo Cars' 3D models by more than
80% using the mesh simplification program constructed, what character-
istics must be added?
The primary aim of the thesis has been achieved. The Results chapter accounts for
an analysis of Volvo Cars models reduced using a mesh simplification algorithm
implemented in a C++ program created for this thesis.
The three research questions have been studied and the three following sec-
tions will summarize their answers.
6.1 Evaluating the algorithm chosen
To start with, the mesh simplification field was surveyed and the six algorithms that
seemed most interesting were selected. Details about the algorithms are pre-
sented in section 3.3 on page 16 and section 3.4 on page 18.
Out of the methods described, the Garland and Heckbert quadric error
metrics algorithm (see section 4.1.3 on page 28) was considered the most
suitable one. The reasons for the choice are stated in section 4.1.2 on page
25. Garland and Heckbert's algorithm fulfilled all but one of the requirements put forth
in section 4.1.1 on page 24. The one requirement that was not fulfilled was
the one about built-in handling of hidden objects. The requirements said
that it would be preferable to have such a function, since one of the problems
with CAE-exported models is that they usually contain many layers of surfaces.
Section 7.2 in chapter 7 explains how a hidden surface algorithm could be added
to the Garland and Heckbert algorithm.
The Garland and Heckbert algorithm proved to be a good choice, primarily
for the following reasons:
- The algorithm could handle the input from every model it was tested on.
Garland and Heckbert claim that their algorithm can handle any arbitrary
input, and so far this has proved true.
- When there was enough resolution to start from, the algorithm produced
good results (see Fig. 5.3 on page 36 and Fig. 5.7 on page 40).
- There was no flipping over of faces.
- There was never any problem with the size of the input, not even when
models containing more than a million triangles were tested.
- The algorithm in itself does not present any obstacle to the improve-
ments found necessary after analyzing the results (see section 6.3 on page
45).
6.2 Evaluating the application created
The program created in this thesis focused on the implementation of the Gar-
land and Heckbert algorithm. The program is a good basis and it apparently
serves its purpose of simplifying models. The following improvements are sug-
gested for the existing C++ program in order to increase its performance and
usability.
- The VRML parser is too slow to handle large models within reasonable
time when the application is run on standard hardware. It is foremost the
reading from file that constitutes the bottleneck. One suggestion would
be to skip the CyberVRML97 package and create a new parser optimized
for the purpose.
- Multi-threading should be added to the program so that all three parts
of the application (see section 4.2 on page 31) could run simultaneously.
This would increase the speed of the application.
- To make it easier for the user, a GUI should be added to the application.
- As the last tests showed, better results can sometimes be obtained if
the entire model is simplified at once (see section 5.2.3 on page
39). The program should be expanded to fully handle this option. As
of today, the program only considers surface properties
when the model is simplified node by node (see section 4.2.1 on page 32).
6.3 Evaluating the results of the testing
The results (see chapter 5 on page 34) showed that it is possible to take away
more than 80% of a model's polygons without compromising too much in
quality. Nevertheless, for some models the following features should be added
to the application in order to achieve the desired simplification:
- Geometric error measure.
- Hidden surface removal.
- Replacement of complex shapes with geometrically similar but less
polygon-demanding ones.
In the next chapter it will be explained what these features consist of and how
they could be added to the application.
Chapter 7
Future work
7.1 Geometric error measure
Section 5.2.1 on page 35, which analyzes the simplification of the bracket model,
concludes that better results are achieved if not all parts of the model are re-
duced to the same extent. Optimizing the parameter settings for each model
is a tedious process and demands too much of the user; it must be done auto-
matically.
Looking at the model tested in section 5.2.1, one can see that its polygons are
unevenly distributed. The mission is to adjust the program to reduce the high
resolution parts to a greater extent and the less dense parts to a lower extent.
A geometric error measure tells how much the approximated model differs
from the original. Introducing such a measure to the algorithm would make
it possible to set the boundary of the simplification in millimeters instead of
in the number of polygons taken away. As an alternative to letting the program
continue the simplification until 80% of the polygons are taken away, the process
could continue until the approximation differs at most 2 millimeters from the
original. Such a method would imply that more polygons would automatically
be taken away from the denser parts of the model, since it takes longer before
the geometry changes there. A sketch of such a stopping rule is given below.
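As an illustration, such a bound could replace the percentage-based stopping rule in the contraction loop. The sketch below redeclares the Pair heap from the Part two sketch and rests on the assumption, made here purely for illustration, that the quadric cost approximates squared geometric deviation in model units of millimeters.

    #include <functional>
    #include <queue>
    #include <vector>

    struct Pair {
        int v1, v2;
        double cost;   // assumed here to approximate squared deviation (mm^2)
        bool operator>(const Pair& o) const { return cost > o.cost; }
    };

    void simplifyToTolerance(
            std::priority_queue<Pair, std::vector<Pair>,
                                std::greater<Pair> >& heap,
            double maxDeviationMm) {
        const double costBound = maxDeviationMm * maxDeviationMm;
        // Contract cheap pairs only: dense regions keep yielding low-cost
        // pairs for a long time, so they are automatically reduced harder.
        while (!heap.empty() && heap.top().cost <= costBound) {
            // contract heap.top() and update costs of pairs involving its v1
            heap.pop();
        }
    }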
There are various theories on how to apply a geometric error measure to a
mesh simplification algorithm. The most interesting article found in the research
for this thesis was one by Steve Zelinka and Michael Garland [1]. Their error
measure works as an add-on to Garland's simplification algorithm used in this
thesis and should be fairly straightforward to implement. Steve Zelinka even
offers free source code for the purpose.
7.2 Hidden surface removal
Section 5.2.2 on page 37 discusses the hidden surface problem. The robot model
analyzed in that section contains surfaces that never show (see Fig. 5.5 on page
38). Taking away the polygons of these surfaces would reduce the number of
polygons substantially without affecting the appearance of the model.
The problem with hidden surfaces could probably be solved in various ways.
One would be to take the idea that Lindstrom and Turk used for their Image-
Driven algorithm (see section 3.4 on page 18) and transform it into a hidden
surface algorithm. Instead of using the different-angle pictures of the object (see
Fig. 3.6 on page 19) to measure the difference between the original model and
the approximation, the images could be used to find out which polygons cannot
be seen from the outside. It is possible to program the GPU so that
it tells which polygons were never drawn when the pictures of the model
were rendered. A sketch of this idea follows.
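A hedged sketch of how the GPU could report this is given below, using OpenGL occlusion queries (available from OpenGL 1.5; the entry points are assumed to be loaded, for example through GLEW). The helper calls left as comments (setViewpoint, drawOccluders, drawTriangle) are hypothetical, not an existing API, and the whole fragment is only an illustration of the idea.

    #include <GL/glew.h>

    // Returns true if the triangle passed the depth test in at least one
    // of the rendered views; triangles that never pass are candidates for
    // hidden surface removal. Assumes a current GL context and a depth
    // buffer filled with the whole model for each view.
    bool triangleEverVisible(int triangle, int numViews) {
        for (int view = 0; view < numViews; ++view) {
            // setViewpoint(view);     // hypothetical: camera for this view
            // drawOccluders();        // hypothetical: render the whole model
            GLuint query = 0;
            glGenQueries(1, &query);
            glBeginQuery(GL_SAMPLES_PASSED, query);
            // drawTriangle(triangle); // hypothetical: the candidate triangle
            glEndQuery(GL_SAMPLES_PASSED);
            GLuint samples = 0;
            glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
            glDeleteQueries(1, &query);
            if (samples > 0)
                return true;
        }
        return false;
    }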
7.3 Replacement of complex shapes
The research of this thesis did not give rise to any ideas of how complex shapes
could be replaced by similar but simpler ones. One solution could be to look at
some sort of image-based technique where the shapes of the objects are compared
to standard shapes like cylinders and boxes. Another technique would be to
create bounding boxes around the objects and compare these with standard
shapes.
Bibliography
[1] S. Zelinka and M. Garland. Permission Grids: Practical, Error-Bounded
Simplification. ACM Transactions on Graphics, April 2002. Available from
http://graphics.cs.uiuc.edu/~garland
[2] Michael Garland and Paul Heckbert. Surface Simplification Using Quadric
Error Metrics. SIGGRAPH 97 Proc., pages 209-216, 1997. Available from
http://graphics.cs.uiuc.edu/~garland
[3] M. Garland and P. Heckbert. Simplifying Surfaces with Color and Texture
using Quadric Error Metrics. IEEE Visualization 98 Proc., 1998.
[4] William J. Schroeder, Jonathan A. Zarge and William E. Lorensen. Dec-
imation of triangle meshes. Computer Graphics (SIGGRAPH '92 Proc.),
26(2):65-70, July 1992.
[5] Jarek Rossignac and Paul Borrel. Multi-resolution 3D approximations for
rendering complex scenes. Modeling in Computer Graphics: Methods and
Applications, pages 455-465, 1993.
[6] Hugues Hoppe. Progressive meshes. SIGGRAPH '96 Proc., pages 99-108,
Aug. 1996.
[7] Rémi Ronfard and Jarek Rossignac. Full-range approximation of triangu-
lated polyhedra. Computer Graphics Forum, 15(3), Aug. 1996. Proc. Euro-
graphics '96.
[8] Jonathan Cohen et al. Simplification envelopes. SIGGRAPH '96 Proc.,
pages 119-128, Aug. 1996.
[9] William J. Schroeder. A topology modifying progressive decimation algo-
rithm. 8th Conference on Visualization '97 Proc., Oct. 1997.
[10] Peter Lindstrom and Greg Turk. Image-Driven Simplification. ACM Trans-
actions on Graphics, 19(3):204-241, July 2000. Available from
http://www.gvu.gatech.edu/people/faculty/greg.turk/
[11] K. Rule. 3D Graphics File Formats.
Appendix A
VRML syntax
The schematic picture in Fig. A.1 on page 50 shows a piece of VRML code.
Nodes and fields are boxed and numbered in order to make it easier to interpret
the VRML syntax. The following paragraph explains the key words of the code
shown in the picture.
The Shape node (1) contains the two fields appearance and geometry (2).
The geometry field is defined by the IndexedFaceSet node (3), which contains
the two fields coord and coordIndex (4) that define the vertices and coordinates.
The appearance field (2) value is set by the Appearance node (5), in whose
material field (6) the surface properties are set.
Figure A.1: Schematic picture of VRML syntax
Appendix B
Open Source packages
B.1 The VRML parser
The VRML parser is based on the CyberVRML97 library1 for C++, which
is Open Source software. The library makes it possible to read and write VRML
files, set and get the scene graph information, draw the geometries and con-
vert from the VRML file format to X3D. For the CyberVRML97 library to
compile, the OpenGL 1.1 library and the GLUT 3.x library must be installed.
CyberVRML97 also requires the xercesc package to be included. The
CyberVRML97 library comes with little documentation.
B.2 The Qslim package
The Qslim package2 was developed by Michael Garland, assistant professor in
the Department of Computer Science at the University of Illinois. The Qslim
package is based on the experimental software that Garland produced to test
his and Heckbert's mesh simplification algorithm, picked as the algorithm of
choice (see section 4.1.3 on page 28) in this Master thesis. Since it was built for
Garland's personal purposes, the code is not easily interpreted. Basically, it
comes without documentation and it is not to be considered industrially safe.
The code compiles on Unix as well as Windows systems, but it requires the
OpenGL and XForms libraries.
1Downloaded from http://cybergarage.org/vrml/cv97/cv97cc
2Downloaded from http://graphics.cs.uiuc.edu/~garland

More Related Content

Similar to Mesh Simplification of Complex VRML Models

disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1Pavel Prochazka
 
Lecture notes on mobile communication
Lecture notes on mobile communicationLecture notes on mobile communication
Lecture notes on mobile communicationInocentshuja Ahmad
 
Project final report
Project final reportProject final report
Project final reportALIN BABU
 
masteroppgave_larsbrusletto
masteroppgave_larsbruslettomasteroppgave_larsbrusletto
masteroppgave_larsbruslettoLars Brusletto
 
Machine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisMachine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisBryan Collazo Santiago
 
Scale The Realtime Web
Scale The Realtime WebScale The Realtime Web
Scale The Realtime Webpfleidi
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemkurkute1994
 
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...Dejan Ilic
 
Fulltext02
Fulltext02Fulltext02
Fulltext02Al Mtdrs
 
Nweke digital-forensics-masters-thesis-sapienza-university-italy
Nweke digital-forensics-masters-thesis-sapienza-university-italyNweke digital-forensics-masters-thesis-sapienza-university-italy
Nweke digital-forensics-masters-thesis-sapienza-university-italyAimonJamali
 
MSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverMSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverAkshat Srivastava
 
Finite_element_method_in_machining_processes
Finite_element_method_in_machining_processesFinite_element_method_in_machining_processes
Finite_element_method_in_machining_processesEllixies Fair
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGANikita Pinto
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile GraphicsJiri Danihelka
 

Similar to Mesh Simplification of Complex VRML Models (20)

disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1disertation_Pavel_Prochazka_A1
disertation_Pavel_Prochazka_A1
 
Lecture notes on mobile communication
Lecture notes on mobile communicationLecture notes on mobile communication
Lecture notes on mobile communication
 
KHAN_FAHAD_FL14
KHAN_FAHAD_FL14KHAN_FAHAD_FL14
KHAN_FAHAD_FL14
 
Project final report
Project final reportProject final report
Project final report
 
masteroppgave_larsbrusletto
masteroppgave_larsbruslettomasteroppgave_larsbrusletto
masteroppgave_larsbrusletto
 
thesis
thesisthesis
thesis
 
Machine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_ThesisMachine_Learning_Blocks___Bryan_Thesis
Machine_Learning_Blocks___Bryan_Thesis
 
diss
dissdiss
diss
 
Scale The Realtime Web
Scale The Realtime WebScale The Realtime Web
Scale The Realtime Web
 
Project report on Eye tracking interpretation system
Project report on Eye tracking interpretation systemProject report on Eye tracking interpretation system
Project report on Eye tracking interpretation system
 
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...
ILIC Dejan - MSc: Secure Business Computation by using Garbled Circuits in a ...
 
Thesis_Report
Thesis_ReportThesis_Report
Thesis_Report
 
Fulltext02
Fulltext02Fulltext02
Fulltext02
 
Nweke digital-forensics-masters-thesis-sapienza-university-italy
Nweke digital-forensics-masters-thesis-sapienza-university-italyNweke digital-forensics-masters-thesis-sapienza-university-italy
Nweke digital-forensics-masters-thesis-sapienza-university-italy
 
MSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land RoverMSc Thesis - Jaguar Land Rover
MSc Thesis - Jaguar Land Rover
 
MSc_Thesis
MSc_ThesisMSc_Thesis
MSc_Thesis
 
Finite_element_method_in_machining_processes
Finite_element_method_in_machining_processesFinite_element_method_in_machining_processes
Finite_element_method_in_machining_processes
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGA
 
Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile Graphics
 
LC_Thesis_Final (1).pdf
LC_Thesis_Final (1).pdfLC_Thesis_Final (1).pdf
LC_Thesis_Final (1).pdf
 

Mesh Simplification of Complex VRML Models

  • 1. Department of Science and Technology Institutionen för teknik och naturvetenskap Linköpings Universitet Linköpings Universitet SE-601 74 Norrköping, Sweden 601 74 Norrköping Examensarbete LITH-ITN-MT-EX--05/020--SE Mesh simplification of complex VRML models Rebecca Lagerkvist 2005-02-24
  • 2. LITH-ITN-MT-EX--05/020--SE Mesh simplification of complex VRML models Examensarbete utfört i medieteknik vid Linköpings Tekniska Högskola, Campus Norrköping Rebecca Lagerkvist Handledare Pär Klingstam Examinator Stefan Gustavson Norrköping 2005-02-24
  • 3. Rapporttyp Report category Examensarbete B-uppsats C-uppsats D-uppsats _ ________________ Språk Language Svenska/Swedish Engelska/English _ ________________ Titel Title Författare Author Sammanfattning Abstract ISBN _____________________________________________________ ISRN _________________________________________________________________ Serietitel och serienummer ISSN Title of series, numbering ___________________________________ Nyckelord Keyword Datum Date URL för elektronisk version Avdelning, Institution Division, Department Institutionen för teknik och naturvetenskap Department of Science and Technology 2005-02-24 x x LITH-ITN-MT-EX--05/020--SE http://www.ep.liu.se/exjobb/itn/2005/mt/020/ Mesh simplification of complex VRML models Rebecca Lagerkvist In a large part of their work Volvo Cars uses digital models - in the design process, geometry simulation, safety tests, presentation material etc. The models may be used for many purposes but they are normally only produced in one level of detail. In general, the level that suits the most extreme demands. High resolution models challenge rendering performances, transmission bandwidth and storage capacities. At Volvo Cars there is time, money and energy to be saved by adapting the the model’s level of detail to their area of usage. The aim of this thesis is to investigate if the Volvo Cars models can be reduced to containing less than 20% of its original triangles without compromising too much in quality. In the thesis, the mesh simplification field is researched and the simplification algorithm judged to best suit the needs of Volvo Cars is implemented in a C++ program. The program is used to test and analyze the Volvo Cars’ models. The results show that it is possible to take away more than 80% of the the model’s polygons hardly without affecting the appearance. mesh simplification, polygon reduction, Volvo Cars, Visualization, Computer Graphics
  • 4. Upphovsrätt Detta dokument hålls tillgängligt på Internet – eller dess framtida ersättare – under en längre tid från publiceringsdatum under förutsättning att inga extra- ordinära omständigheter uppstår. Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns det lösningar av teknisk och administrativ art. Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart. För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/ Copyright The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/ © Rebecca Lagerkvist
  • 5. Abstract In a large part of their work Volvo Cars uses digital models - in the design pro- cess, geometry simulation, safety tests, presentation material etc. The models may be used for many purposes but they are normally only produced in one level of detail. In general, the level that suits the most extreme demands. High res- olution models challenge rendering performances, transmission bandwidth and storage capacities. At Volvo Cars there is time, money and energy to be saved by adapting the the model’s level of detail to their area of usage. The aim of this thesis is to investigate if the Volvo Cars models can be re- duced to containing less than 20% of its original triangles without compromising too much in quality. In the thesis, the mesh simplification field is researched and the simplification algorithm judged to best suit the needs of Volvo Cars is implemented in a C++ program. The program is used to test and analyze the Volvo Cars’ models. The results show that it is possible to take away more than 80% of the the model’s polygons hardly without affecting the appearance.
  • 6. Preface The Master Thesis was conducted at Volvo Car Cooperation, Torslanda, and was initiated by Robert Jacobsson - Teamleader Geometry Simulation. The thesis is the final examination of the Master of Media Technology program at Link¨oping University1 . I would like to express my gratitude to the following people who helped me through and made this thesis possible: - Robert Jacobsson for initiating this project and for seeing a solution where other people saw a problem. - Johan Segeborn for putting much effort and energy into the project. - P¨ar Klingstam for giving the work the academic touch it needed. - Hˆakan Pettersson for being there when I could not support another Visual Studio link error. - Sven Rud´en and Jan Hallqvist for feed-back on the test results. - Stefan Gustavsson for his unlimited competence within computer graphics and that he never stopped believing in me. - Bj¨orn Andersson for well thought through feed-back on the report. - Michael Hemph for his great knowledge of C++. - Olle Hellgren and our soon-to-be-born for their support. Rebecca Lagerkvist G¨oteborg, March 6, 2005 1Civilingenj¨orsutbildningen i Medieteknik (180 po¨ang) 1
  • 7. Contents 1 Overview 6 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.2 Research approach . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2.1 Aim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2.2 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2.3 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2 Terminology 10 3 Frame of reference 14 3.1 What mesh simplification is not . . . . . . . . . . . . . . . . . . . 14 3.2 What mesh simplification is . . . . . . . . . . . . . . . . . . . . . 15 3.3 Geometry-based methods of mesh simplification . . . . . . . . . . 16 3.3.1 Vertex decimation . . . . . . . . . . . . . . . . . . . . . . 16 3.3.2 Vertex clustering . . . . . . . . . . . . . . . . . . . . . . . 17 3.3.3 Edge contraction . . . . . . . . . . . . . . . . . . . . . . . 17 3.3.4 Simplification envelopes . . . . . . . . . . . . . . . . . . . 18 3.4 Image-based methods of mesh simplification . . . . . . . . . . . . 18 3.5 Major features of the mesh simplification algorithms . . . . . . . 20 3.6 VRML - Virtual Reality Modeling Language . . . . . . . . . . . . 20 3.6.1 General description . . . . . . . . . . . . . . . . . . . . . . 21 3.6.2 VRML as a scene graph . . . . . . . . . . . . . . . . . . . 21 3.6.3 How geometry is described in VRML . . . . . . . . . . . . 21 3.6.4 The Volvo Cars VRML models . . . . . . . . . . . . . . . 22 4 Results - research and development 24 4.1 Researching the mesh simplification field . . . . . . . . . . . . . . 24 4.1.1 Requirements for the mesh simplification algorithm . . . . 24 4.1.2 Choosing an algorithm . . . . . . . . . . . . . . . . . . . . 25 4.1.3 Mesh simplification algorithm of choice . . . . . . . . . . 28 4.2 Development of the application . . . . . . . . . . . . . . . . . . . 31 4.2.1 Part one . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.2.2 Part two . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.2.3 Part three . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2
  • 8. 5 Results - testings 34 5.1 The size of the file . . . . . . . . . . . . . . . . . . . . . . . . . . 34 5.2 Result of the testings . . . . . . . . . . . . . . . . . . . . . . . . . 35 5.2.1 The bracket model . . . . . . . . . . . . . . . . . . . . . . 35 5.2.2 The robot model . . . . . . . . . . . . . . . . . . . . . . . 36 5.2.3 Late testings . . . . . . . . . . . . . . . . . . . . . . . . . 39 6 Conclusion 43 6.1 Evaluating the algorithm chosen . . . . . . . . . . . . . . . . . . 43 6.2 Evaluating the application created . . . . . . . . . . . . . . . . . 44 6.3 Evaluating the results of the testing . . . . . . . . . . . . . . . . 45 7 Future work 46 7.1 Geometric error measure . . . . . . . . . . . . . . . . . . . . . . . 46 7.2 Hidden surface removal . . . . . . . . . . . . . . . . . . . . . . . 46 7.3 Replacement of complex shapes . . . . . . . . . . . . . . . . . . . 47 A VRML syntax 49 B Open Source packages 51 B.1 The VRML parser . . . . . . . . . . . . . . . . . . . . . . . . . . 51 B.2 The Qslim package . . . . . . . . . . . . . . . . . . . . . . . . . . 51 3
  • 9. List of Figures 1.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.1 Shaded and wire frame polygon mesh . . . . . . . . . . . . . . . 11 2.2 Manifold 3D Surface . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.3 Non manifold 3D surface . . . . . . . . . . . . . . . . . . . . . . 12 2.4 Moebius strip . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.5 Convex polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.6 Concave polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 3.1 Same model different resolution . . . . . . . . . . . . . . . . . . . 15 3.2 Two planes with different resolution but the same accuracy. . . 15 3.3 Vertex Decimation . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.4 Vertex Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.5 Edge Contraction . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 3.6 Image-Driven Simplification . . . . . . . . . . . . . . . . . . . . . 19 3.7 Comparison Image-Driven/Geometry Driven . . . . . . . . . . . 20 4.1 Disconnected surfaces . . . . . . . . . . . . . . . . . . . . . . . . 25 4.2 Standard edge contraction . . . . . . . . . . . . . . . . . . . . . . 28 4.3 Non-edge contraction . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.4 Geometric interpretation of quadratic error metrics. . . . . . . . 30 4.5 Overview of the C++ program . . . . . . . . . . . . . . . . . . . 31 5.1 Bracket - original model . . . . . . . . . . . . . . . . . . . . . . . 35 5.2 Bracket - simplification 12% . . . . . . . . . . . . . . . . . . . . . 36 5.3 Bracket - adjusted simplification 12% . . . . . . . . . . . . . . . . 36 5.4 Robot - original . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 5.5 Robot - details of original . . . . . . . . . . . . . . . . . . . . . . 38 5.6 Robot - simplification 25% . . . . . . . . . . . . . . . . . . . . . . 39 5.7 Robot - adjusted simplification 25% . . . . . . . . . . . . . . . . 40 5.8 Volvo V70 original shaded . . . . . . . . . . . . . . . . . . . . . . 41 5.9 Volvo V70 original wire frame . . . . . . . . . . . . . . . . . . . . 41 5.10 Volvo V70 simplified shaded . . . . . . . . . . . . . . . . . . . . . 42 5.11 Volvo V70 simplified wire frame . . . . . . . . . . . . . . . . . . . 42 4
  • 10. 5 A.1 A piece of VRML code . . . . . . . . . . . . . . . . . . . . . . . . 50
  • 11. Chapter 1 Overview 1.1 Introduction In a large part of their work Volvo Cars uses digital models - in the design process, geometry simulation, safety tests, presentation material etc. The digital models are either produced at Volvo Cars or by one of their sup- pliers. The models may be used for many purposes but they are normally only produced in one level of detail. In general, the level that suits the most extreme demands like those of collision detection simulation. Few other applications de- mand such level of detail but since there is only one version of the model the same high resolution is used no matter what the purpose is. High resolution models challenge rendering performances, transmission band- width and storage capacities. To work with excessive resolution wastes money. It slows the work process and demands extra resources. At Volvo Cars there are no routines around the simplification of digital models. There is no software that automatically reduce the level of detail. When simplification is needed it is done manually by adjusting the settings in the Computer Aided Engineering (CAE) programs. This is laborious work and the result is not always satisfactory. For documentation and presentation purposes it is often needed to export the models from the CAE-programs to a file format that can be opened in the program Cosmo Player1 on a standard PC. The standard CAE-programs at Volvo Cars are RobCad and Catia ran on UNIX. The only export option from these programs are the Virtual Reality Modeling Language (VRML). The high level of detail of the models and the properties of the VRML format makes the files extremely large. Most of the time too large to be opened in Cosmo Player on standard hardware. Consequently, the VRML exports are not used and instead the presentation materials contain print-screen pictures of the models. Using static 2D pictures imply that the interactivity is lost, there is no possibility of making animations 1Volvo Cars standard Virtual Reality Modeling Language (VRML) browser 6
  • 12. CHAPTER 1. OVERVIEW 7 or to show different views of the object. To make it possible to use VRML the level of detail of the models must be lowered. Ideally, to containing less than 20% of the original polygons without losing too much in visual quality. Presentation material is important at Volvo Cars. It is the link between separate work groups like design, simulation and testing. The clearer the com- munication between these groups the better the work flow in the entire Volvo Cars organization. Volvo Cars needs to analyze their digital model work. There is an obvious need to lower the level of detail of the 3D models. Primarily, to be able to use VRML in presentation material but in a broader perspective also to ease the work of everyone concerned with the models. It is important to find out if the models carry redundant information. If that is the case they can be simplified without losing precision and the approxi- mations would be good enough even for a wider use than presentation material. 1.2 Research approach 1.2.1 Aim The primary goal of the thesis is to examine if the Volvo Cars’ 3D digital models could be reduced to containing less than 20% of its original polygons using a mesh simplification algorithm. To reach this goal the thesis focuses the development of a C++ program performing mesh simplification based on an existing algorithm. To accomplish the goal, three main research questions have been put forth. The questions are to be answered by the thesis. Q1 Which of the presently existing mesh simplification algorithms conforms the most to the needs of Volvo Cars? Q2 What features do a mesh simplification program able to reduce the Volvo cars’ 3D digital models need to have? Q3 If it is not possible to reduce the Volvo Cars’ 3D models by more than a 80% using the mesh simplification program constructed, what character- istics must be added? 1.2.2 Scope The building of software is a never ending process and it will always be possible to make improvements. To limit the extent, the thesis work has been bounded to comprise the following: - Literature study The literature study has been limited to include mesh simplification in the sense of how to reduce the number of polygons in a generic mesh. The study will not be directed towards specific cases such as multilayer meshes or hidden surface removal.
  • 13. CHAPTER 1. OVERVIEW 8 - Application development The application will be confined to accepting a limited input. It will only be developed to allow the VRML syntax that is most common to the Volvo Cars’ digital models. As far as possible the application will be based on Open Source material. Even if the code available is not optimized for the purpose of this thesis it will be taken advantage of. Reusing other peoples code is the only possible solution in order to develop such an extensive program as the mesh simplification application within the time span of the thesis. The mesh simplification program will not include a Graphic User Interface (GUI). - Testing The major tests will be performed on smaller models (100 000- 30000 polygons). Working with large models is very time consuming. In order to speed up the iteration process (see section ?? on page ??) it has been judge wise not to include larger models in the testing until perhaps the very last iteration cycle. 1.2.3 Method The work of the thesis can roughly be divided into three parts: - Conceptualization - Iterative development - Final analysis The first part of the thesis, the conceptualization, would be to define how the aim of the thesis - to examine if the Volvo Cars’ 3D digital models could be reduced to containing less than 20% of its original polygons using a mesh simplification algorithm - could be transformed into practical act. To examine if the Volvo Cars’ models could be reduced, a tool to perform the testing with is needed. The primer work of this thesis was therefore to create such a tool. Hence, the conceptualization consisted in trying to give a concrete form to how the mesh simplification algorithm could be implemented. The conceptualization would be done primarily through researching the field of mesh simplification software. To look at existing software - both commercial and open source - in order to create an idea of what the difficulties and actual problems with developing one would be. A classical literature study, with the aim of finding theories that apply to the research questions of section 1.2.1, would also be a part of the initial phase. The conceptualization phase was to be keept short. The thesis is an edu- cational project which means that there is little experience brought into it. To design software on an abstract basis demands a lot of know-how, more than was available in this project. Hence, the ambition was to early move into the development phase and there build the software by trial and error.
To successfully build something by trial and error there must be several trials. The development phase was therefore made iterative. What the development phase was to consist of can be seen in Fig. 1.1 on page 9.

Figure 1.1: The thesis' three phases. The development phase is shaded in gray.

The thought was to have outer and inner loops. The inner loop would consist of programming, testing and validation and would be repeated more often than the two outer loops. In the beginning of the development phase the iterations would mostly follow the upper outer loop and go through literature study, modeling and back to the inner loop. As the project advanced, the idea was that the work would move on to the path of the lower outer loop and focus more on documentation. Nevertheless, if needed it would be possible to go back and do more literature studies even rather late in the development. The project was planned according to Fig. 1.1 to make it possible to bring in new knowledge even during the development phase.

The testing of the inner loops would follow a simple methodology. Since computer graphics largely follows the principle that what looks good is good, the only validation would be to inspect the models and see whether their appearance seemed acceptable. The tests would merely consist of running the Volvo Cars models through the program created and examining the outcome, if there was an outcome.

Once the development phase was over, all testing and programming would end and the results would be analyzed and put into the documentation.
Chapter 2 Terminology

The following terminology will be used throughout this master thesis report without further explanation.

Rendering

Rendering is the action of drawing the 3D computer graphics model onto the 2D screen. The following terminology can be associated with rendering.

Frames per second Rendering speed is often measured in frames per second, which refers to how many pictures of the model can be drawn on the screen per second.

GPU rendering vs. CPU rendering Rendering large models can be a time consuming process. Over the years the speed of rendering has been improved mainly by the introduction of hardware rendering performed on the GPU (Graphics Processing Unit) instead of the CPU (Central Processing Unit). GPUs have many parallel processors optimized for rendering purposes. Using the GPU's processors correctly can speed up rendering by hundreds or even thousands of frames per second. To use GPU rendering, the program performing the rendering must support it. Both OpenGL and DirectX, the most widespread graphics programming packages, support GPU rendering.

Wireframe When a model is rendered in wire frame, no surface properties are shown. This is illustrated in Fig. 2.1 on page 11.
Polygon Mesh

A polygon mesh is assumed to be a collection of vertices in three-dimensional space, and an associated collection of polygons composed of those vertices. The polygons can be seen as the building blocks of the model (see Fig. 2.1 on page 11). A simplification algorithm is intended to simplify a polygonal mesh by reducing the number of polygons in the mesh, while retaining as much fidelity to the model as possible. The following terminology can be associated with a polygon mesh.

Figure 2.1: 3D computer graphics model. In the model to the right, which is rendered in wire frame, the triangles constituting the model can clearly be seen.

Vertices and coordinates In this thesis report the term vertices will be used for the x-, y- and z-values that constitute the points which are linked together to form the polygons. The term coordinates refers to the integer numbers telling how the vertices are connected into polygons. This terminology is not standard in all literature, and the two terms may be used differently elsewhere.

Polygons and triangles In this thesis report the term polygon refers to the shapes that are formed when the vertices are linked together. There is no limit to how many vertices each polygon may have. As said, the polygons constitute the mesh. A triangle is simply a polygon with exactly three vertices.
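To make the vertex and coordinate terminology concrete, the following minimal C++ sketch shows one possible way of storing a polygon mesh in this spirit. The type and field names are illustrative only, not taken from the thesis program; the index convention mirrors the VRML IndexedFaceSet node described in section 3.6.

    #include <vector>

    // A minimal indexed polygon mesh: the vertices hold x-, y- and z-values;
    // coordIndex lists, for each polygon, the indices of its vertices,
    // with -1 terminating every polygon (VRML style).
    struct Vertex { double x, y, z; };

    struct PolygonMesh {
        std::vector<Vertex> vertices;    // the points in 3D space
        std::vector<int>    coordIndex;  // e.g. 0 1 2 -1  0 2 3 -1 (two triangles)
    };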
Tessellated A tessellated surface is a surface described by polygons, as opposed to a surface defined mathematically through NURBS or implicit functions.

Manifold A manifold polygon mesh consists of a collection of edges, vertices and triangles connected such that each edge is shared by at most two triangles. In a manifold without boundaries, each edge is shared by exactly two triangles.

Figure 2.2: Manifold 3D surface

Non-manifold A non-manifold polygon mesh consists of a collection of edges, vertices and triangles connected such that an edge may be shared by more than two triangles. The triangles may also intersect each other arbitrarily.

Figure 2.3: Non-manifold 3D surface

Orientable A surface is said to be orientable if all of the face normals of a mesh can be oriented consistently. The concept of orientable is easier to understand from the concept of non-orientable (see next paragraph).

Non-orientable The most common example of a non-orientable surface is the Möbius strip (Fig. 2.4 on page 13). It has only one surface and only one edge. Sliding a surface normal along the surface, the direction of the normal will have changed by 180 degrees when it reaches the point it started from. This is what makes the surface non-orientable.

Figure 2.4: Möbius strip

Topology The topology of a mesh deals with the dependencies between its polygons. Simplified, it could be said that the topology of a model tells how many holes there are in the model.

Convex polygons A planar polygon is said to be convex if it contains all the line segments connecting any pair of its points. Walking around a convex polygon, all the turns at the corners are in the same direction.

Figure 2.5: Convex polygon

Concave polygons A concave polygon is a polygon that is not convex. It must have at least four sides and one internal angle that is greater than 180°.

Figure 2.6: Concave polygon
Chapter 3 Frame of reference

3.1 What mesh simplification is not

Before digging deeper into mesh simplification it is necessary to clear up some things that are often misconceived. Mesh simplification means reducing the complexity of the 3D computer graphics model itself, producing a lower level of detail approximation of the original. Reducing the entire 3D model should not be confused with reducing the complexity of the rendered image of the model. Today's 3D model viewers are usually very efficient when transforming a 3D model to a 2D image seen on the screen: level of detail is adjusted according to the distance from which the model is viewed, the z-buffer makes sure that objects hidden behind other objects are never drawn on the screen, etc. These techniques are concerned with the mapping of the model from 3D to 2D. They depend on knowing from which direction the model is viewed, and they do nothing to the actual model, only to the picture of it drawn on the screen. The computer still needs to handle the same amount of data every time a calculation is performed on the model.

It is also important to keep clear the difference between reducing the model and compressing the file containing the model. There are many efficient techniques to compress 3D model files. To illustrate: if a 3D model contains many copies of the same object, it is enough to describe the geometry of the first object and then refer back to it for the rest of them. However, when shown on a screen the model is still drawn with the same number of polygons as before the compression. The model is not reduced; the file is compressed.

The new standard file format for 3D computer models at Volvo Cars is Jupiter (.jt). Jupiter is a light format - the files are small - because intelligent compression algorithms are used. However, when the .jt models are drawn on the screen they are just as heavy as any other model containing the same number of polygons.
3.2 What mesh simplification is

Many computer graphics applications require complex, highly detailed models to serve their purposes. Collision detection simulation, for example, sometimes demands models made to an accuracy of millimeters and represented in very high resolution. Sometimes the resolution is so high that the human eye ceases to see the difference between the original model and an approximation (see Fig. 3.1 on page 15). Since the computational cost of using a model is directly related to its complexity, it is often desirable to use an approximation in place of the excessively detailed original. It must be possible to adapt the level of detail to fit the usage of the model. Reducing the complexity of computer graphics models can be done through mesh simplification.

Figure 3.1: Little difference can be seen between the original circles (the second rendered in wire frame) to the left and the approximations, containing only 0.5 per cent of the polygons, to the right.

Mesh simplification is the removal of triangles from the mesh. The more triangles a mesh has, the higher the resolution. High resolution is, however, not directly proportional to high accuracy. Take the example of Fig. 3.2 on page 15. The plane to the right is by no means better described than the plane to the left. Even if the plane to the right were simplified, with triangles taken away until only the two triangles of the plane to the left remained, the definition of the plane would be no less exact. There is no loss in accuracy, because before the simplification there was a redundancy in information. Thus there can be mesh simplification without decreased accuracy. The normal case is, however, that the removal of triangles creates a coarser approximation of the original model.

Figure 3.2: Two planes with different resolution but the same accuracy.
Over the last 10 to 15 years a large number of articles have been published on the subject of mesh simplification. Every method consists of two parts: how to decide which triangles to remove, and how to remove them. Deciding which triangles to remove is called calculating the cost of reduction or simplification, and removing the triangles is simply called simplification or reduction. Roughly, the methods presented can be divided into two categories based on the manner in which the cost is calculated: geometry-based and image-based. Far more research has been put into the geometry-based field than into the image-based one, which appeared only recently.

3.3 Geometry-based methods of mesh simplification

Common to the so-called geometry-based simplification processes is the use of a geometric 3D distance measure between the original and the simplified model to guide the simplification process. Minimizing this distance is equivalent to minimizing the error of the approximation. Consequently, the goal of the geometry-based methods is geometric fidelity. In reality there are very few applications in which geometric fidelity is of much importance. Normally it is more interesting that the approximation is visually similar to the original. Nevertheless, research was long confined to geometry-based methods: it was not until July 2000, when Lindstrom and Turk released their article on Image-Driven Simplification [10], that anything different appeared (see section 3.4 on page 18).

Even though visual similarity may be more interesting than geometric fidelity, this is not to say that the geometry-based methods are not useful. Geometry-based methods have been the subject of research since the seventies. Below, an overview of what is judged to be the most important categories of these methods is presented. However, due to the abundance of publications within this field the list is bound to be incomplete.

3.3.1 Vertex decimation

Initially described by Schroeder et al. [4]. The method iteratively selects a vertex for removal, removes all adjacent triangles, and re-triangulates the resulting hole (see Fig. 3.3 on page 17). The first algorithm, presented by Schroeder in 1992, carefully preserved topology, which is restricting for multi-resolution rendering systems. However, in 1997 Schroeder presented a supplement to his article [9] describing a vertex decimation technique that did not maintain the topology of the models and could handle non-manifold surfaces, something the first one could not. The cost of removing triangles, or as Schroeder calls it, the error $e$, is based on the triangle's area $a$:

$$e_i = \sqrt{a_i}$$

This error measure has the effect of removing small, isolated triangles first. Schroeder's decimation technique has $O(n)$ time complexity, which makes it suitable for large models. It also preserves high-frequency information such as sharp edges and corners.

Figure 3.3: Vertex Decimation - The triangles are taken away and the hole is re-triangulated.

3.3.2 Vertex clustering

An algorithm initially described by Rossignac and Borrel [5]. A bounding box is placed around the original model and divided into a grid. Within each cell, the cell's vertices are clustered together into a single vertex, and the model's triangles are updated accordingly (see Fig. 3.4 on page 17). Instead of a uniform grid, an adaptive structure such as an octree¹ can be used. The process of vertex clustering can be very fast, but it can also make drastic topological changes to the mesh. The size of the grid provides an error bound. The error bound gives a measurement of the geometric difference between the original model and the approximation yielded.

Figure 3.4: Vertex Clustering - The model is divided into a grid and the vertices within the same cell are clustered into one.

¹ Octree - a hierarchical representation of 3D objects, designed to use less memory than representing every voxel of the object explicitly. Octrees are based on subdividing the full voxel space containing the represented object into eight octants by planes perpendicular to the three coordinate axes.
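To illustrate the clustering step, the C++ sketch below shows how vertices can be assigned to cells of a uniform grid; all vertices that map to the same cell key are then merged into a single representative vertex. This is a minimal sketch with invented names, not code from any of the cited implementations. Note how the cell size directly supplies the error bound mentioned above: no vertex moves farther than the diagonal of its cell.

    #include <cmath>

    // Identifies the uniform-grid cell that a vertex falls into. Vertices
    // sharing a cell key are clustered into one representative vertex.
    struct CellKey {
        int x, y, z;
        bool operator==(const CellKey& o) const {
            return x == o.x && y == o.y && z == o.z;
        }
    };

    CellKey cellOf(double px, double py, double pz, double cellSize) {
        return { static_cast<int>(std::floor(px / cellSize)),
                 static_cast<int>(std::floor(py / cellSize)),
                 static_cast<int>(std::floor(pz / cellSize)) };
    }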
3.3.3 Edge contraction

An edge of a triangle is contracted into a single point. The triangles supported by that edge become degenerate and are removed during the contraction (see Fig. 3.5 on page 18). The process is continued iteratively. Several researchers have made use of this method in their algorithms, and the biggest difference between them lies in how the edge to be contracted is chosen. Examples of edge contracting algorithms have been presented by Hoppe [6] and Ronfard and Rossignac [7]. Garland and Heckbert later developed the edge contraction technique even further [2]. They called it pair contraction, and their algorithm is capable of joining disconnected surfaces.

Figure 3.5: Edge Contraction - The two triangles sharing one edge in the middle are taken away.

3.3.4 Simplification envelopes

Cohen et al. [8] have described this simplification technique. The mesh is encapsulated between an inner and an outer surface. The surfaces define the boundaries that the mesh must stay within during the simplification process. There are a number of assumptions that the simplification surface has to fulfill, which restricts the use of the algorithm. There is a local as well as a global version of the algorithm. Both algorithms try to construct new triangles from existing vertices by combining them so that the new triangles fit within the bounding envelopes. The local algorithm provides a fast method for generating approximations of large input meshes, whereas the complexity of the global algorithm is at least $O(n^2)$, which makes it unfit to apply to large models. An advantage of the global algorithm is that the bounding envelopes supply an error bound. The simplification envelope algorithm can only handle manifold surfaces.
3.4 Image-based methods of mesh simplification

The notion of image-based or image-driven simplification was recently put forward by Peter Lindstrom and Greg Turk. With their article Image-Driven Simplification [10] they introduced a new way of controlling the simplification process. Instead of using the geometric 3D distance measured between the original model and the approximation, Lindstrom and Turk guide their algorithm by measuring the difference between pictures of the original model and the approximated model. Turk and Lindstrom were the first to introduce such an approach, and so far they seem to be the only ones researching it.

Like many geometry-based methods, Lindstrom and Turk's uses the edge collapse operator to make incremental changes to the model. To evaluate which edge to collapse, i.e. the cost of taking it away, they use an image metric - a function that gives a measure of the distance between two images. Lindstrom and Turk use the root mean square (RMS) error to evaluate the difference between the pixels in two images. Since it is virtually impossible to capture the entire appearance of an object in a single image, the object is captured from 20 different directions (see Fig. 3.6 on page 19). The corresponding images of the original and the approximation are compared in order to evaluate which edges to collapse in the following iteration.

Figure 3.6: Image-Driven Simplification is guided by the comparison between images of the original model and the approximated one. In the tests accounted for in their article [10] Turk and Lindstrom used 20 images from different angles.

Image-driven simplification removes detail that has little effect on rendered images of a model. When some part of a model is invisible in all views it will be drastically simplified. This feature is highly useful for CAE models that are to be reduced for visualization purposes, since they usually contain many layers of objects. Taking away hidden interior objects is otherwise a difficult task.

The biggest drawback of the image-driven algorithm of Turk and Lindstrom is that it is slow. In their article Turk and Lindstrom compare it with the Garland and Heckbert algorithm (see section 4.1.3 on page 28), and the image-driven one proves many times slower. Fig. 3.7 on page 20 shows a model of a donut reduced to 512 triangles. The results of the two algorithms are similar but the processing times differ: the Turk and Lindstrom algorithm took almost five times as long (5.33 seconds compared to 1.04).

In the image-driven algorithm's defense it should be said that, when comparing it with a geometry-based algorithm plus an algorithm that takes away interior objects, the time difference probably would not be that large.
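As a concrete illustration of the image metric, the C++ sketch below computes the RMS difference between two equally sized grayscale images stored as flat arrays of pixel intensities. It is a minimal sketch of the general idea only, not Lindstrom and Turk's actual implementation, which compares renderings from 20 camera positions and handles color images.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Root mean square (RMS) difference between two grayscale images of
    // equal size. A value of 0 means the images are identical.
    double rmsDifference(const std::vector<double>& a,
                         const std::vector<double>& b) {
        double sum = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            const double d = a[i] - b[i];
            sum += d * d;
        }
        return std::sqrt(sum / static_cast<double>(a.size()));
    }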
Figure 3.7: To the left the Qslim algorithm by Garland and Heckbert, in the middle Image-Driven by Turk and Lindstrom, and to the right the original model.

3.5 Major features of the mesh simplification algorithms

This section sums up the major features of the mesh simplification algorithms described above. The features are put into a table in order to make it easy to compare the different methods. Even though they were presented together in section 3.3, the Pair Contraction algorithm of Michael Garland and Paul Heckbert is kept separate from the Edge Contraction algorithms. The reason is that the Pair Contraction algorithm contains one crucial feature that the Edge Contraction algorithms do not have.

The heading Error measurement in the table below refers to whether the algorithm produces a measure of the geometric difference between the original model and the approximation - that is, whether the quality can be told by a geometric measure. The heading Fast refers to whether the algorithm is reasonably fast; a No in this column means that one of the significant features of the algorithm is that it is slow. Where the table says Depends, the answer can be both Yes and No depending on which algorithm within that category is referred to. For example, there are Edge Contraction algorithms that can handle arbitrary input and there are those that cannot.

3.6 VRML - Virtual Reality Modeling Language

VRML is the export format of the CAE programs used at Volvo Cars. In order to build an application that takes VRML as input, one must understand its syntax. To make this chapter easier to follow, it is recommended to take a look at the schematic picture of the VRML syntax in Appendix A.1 on page 50.
Table 3.1: The table shows the major features of the mesh simplification algorithms examined.

    Method                   | Takes arbitrary input | Joins disconnected surfaces | Error measurement | Maintains topology | Fast
    Vertex Decimation        | Yes                   | No                          | No                | No                 | Yes
    Vertex Clustering        | Depends               | Yes                         | Yes               | No                 | Yes
    Edge Contraction         | Depends               | No                          | No                | Depends            | Yes
    Pair Contraction         | Yes                   | Yes                         | No                | No                 | Yes
    Simplification envelopes | No                    | No                          | Depends           | Yes                | Depends
    Image-Driven             | Yes                   | Yes                         | No                | No                 | No

3.6.1 General description

The Virtual Reality Modeling Language (VRML) is a textual language describing 3D scenes. VRML differs from programming languages like C/C++ in that, instead of describing how the computer should act through a series of commands, it describes how a 3D scene should look. At the highest level of abstraction, VRML is just a way for objects to read and write themselves. VRML defines a set of objects useful for doing 3D graphics. These objects are called nodes. The basic node categories are: Shape, Camera/Light, Property, Transformation, Grouping and WWW. Within each category there are several subtypes defined. To illustrate, a Transformation node can be of the subtype Transform, Rotation or Translation, each containing different property fields.

3.6.2 VRML as a scene graph

The nodes in the VRML world are organized in a hierarchical structure called a scene graph. The scene graph decides the ordering of the nodes. It also has a notion of state, which implies that nodes higher up in the hierarchy can affect nodes that are lower down. A scene graph has many advantages for 3D graphics purposes.

3.6.3 How geometry is described in VRML

The basic building blocks in VRML are shapes described by nodes and their accompanying fields and field values. The basic node type that defines the geometry of a VRML object is the Shape node. The Shape node typically contains a geometry and an appearance field. The geometry field describes the 3D structure, whereas the appearance field defines the surface properties. The surface properties are based on the material, which can be either a color or a texture. The fields of a node can be defined either directly by values or by another node.
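As an illustration, the following is a minimal hand-written VRML97 file of the kind just described: a Shape node whose appearance field is given by an Appearance node and whose geometry field is given by an IndexedFaceSet node (discussed further in the next subsection). The numbers are arbitrary example data, not taken from any Volvo Cars model.

    #VRML V2.0 utf8
    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.8 0.1 0.1 }
      }
      geometry IndexedFaceSet {
        coord Coordinate {
          point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]   # four vertices (x y z)
        }
        coordIndex [ 0, 1, 2, -1,  0, 2, 3, -1 ]    # two triangles; -1 ends each face
      }
    }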
Predefined primitives VRML provides several predefined primitive geometry nodes that can be used to define objects. Such primitives are Box, Cone, Cylinder and Sphere. These objects cannot be simplified directly, since their vertices and coordinates are not explicitly defined.

Vertices and coordinates Except in the case of the predefined primitives, the geometry of an object is described through vertices and coordinates. Each vertex sets a point in space, i.e. the x-, y- and z-values. The coordinates tell how the vertices are linked together into triangles, quadrilaterals or shapes with more corners. However, rendering systems can only handle triangles, and if the vertices are not defined as such there must be splitting before the object can be rendered. The IndexedFaceSet node can be used as the value of the geometry field in the Shape node (see Appendix A.1 on page 50). The vertices are then declared in the coord field and the coordinates in the coordIndex field. There are many variations to the VRML syntax. Nevertheless, the mesh simplification application will only accept input where the geometry is described in the IndexedFaceSet node.

Normals Normals in VRML are unit vectors used to direct the lighting and shading of the object. The normals are specified in the normal field of the Shape node. Typically the normal field value is specified by the Normal node. If the normal field is empty, the normals are automatically computed for each triangle or coordinate in the triangle set, and the normalIndex and normalPerVertex fields are ignored. Otherwise, if the field is not empty, the normalIndex field specifies the x-, y- and z-values of the normals and the normalPerVertex field tells whether the normals should be used for each face or each coordinate. (A sketch of the per-face normal computation is given at the end of this section.)

3.6.4 The Volvo Cars VRML models

The Volvo Cars 3D VRML models that will be used in the tests later on have the following properties:

- The models consist of many smaller objects linked together by the scene graph.

- The files are in general very large and can easily contain millions of polygons.

- Nothing is known about how the models are exported from the CAE programs, and it is thus better to assume that the structure of the mesh is arbitrary, i.e. it can be manifold, non-manifold, orientable, non-orientable etc.
- The models contain no textures, but the different parts have different colors.

- The SCALE property in VRML implies that the objects will not necessarily have the size that the vertices of the triangles indicate, since they can be declared at one size and then scaled to another in the rendering process.

- The vertices and coordinates are declared in the IndexedFaceSet node.
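As mentioned under Normals above, a VRML viewer computes one normal per face when none are given. A minimal C++ sketch of that computation, with illustrative names: the face normal of a triangle is the normalized cross product of two of its edge vectors.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // Face normal of the triangle (a, b, c): the normalized cross
    // product of the edge vectors (b - a) and (c - a).
    Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
        const Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
        const Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };
        Vec3 n = { u.y * v.z - u.z * v.y,
                   u.z * v.x - u.x * v.z,
                   u.x * v.y - u.y * v.x };
        const double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }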
Chapter 4 Results - research and development

This chapter presents the results of the research and development phase of the thesis. The results shown in this chapter give the answers to research question one - Which of the presently existing mesh simplification algorithms conforms the most to the needs of Volvo Cars? - and two - What features does a mesh simplification program need in order to be able to reduce the Volvo Cars 3D digital models?

4.1 Researching the mesh simplification field

At Volvo Cars the 3D models are considered to be the originals, whereas the real, physical objects are the copies. The quality of the digital material is therefore important. To pick a mesh simplification algorithm that manages to reduce the resolution without compromising too much on quality is naturally the aim of the selection process.

It is also important to look at the constitution of the polygon mesh in the Volvo Cars models. Since nothing is known about the export functions of the CAE programs used at Volvo Cars, it is better to assume that it is arbitrary, i.e. manifold, non-manifold, orientable, non-orientable etc. Thus the algorithm must be capable of handling such input.

The following section summarizes the prerequisites that a mesh simplification algorithm should fulfill in order to function on the Volvo Cars digital models. To understand some of the requirements it is recommended to keep the properties of the Volvo Cars models in mind. These are stated in section 3.6.4 on page 22.

4.1.1 Requirements for the mesh simplification algorithm

- The algorithm should be capable of handling arbitrary input.
- It is preferable that the algorithm can join disconnected surfaces (see Fig. 4.1 on page 25).

Figure 4.1: The original model to the left is simplified with an algorithm that disallows the joining of disconnected surfaces (middle) and with one that allows it (right).

- The algorithm should be able to handle large models.

- It is preferable that the algorithm is reasonably fast.

- The algorithm should not cause any flipping over of triangles. A flipped-over triangle will not reflect light properly and will appear black in the rendered model.

- It is preferable if the algorithm has a built-in handling of hidden objects.

- The algorithm should be fairly straightforward to implement.

- It is preferable that the algorithm produces a geometric measure of how much the approximation differs from the original model.

- The manner in which triangles are taken away should be intelligent and relate to the purpose of mesh simplification.

4.1.2 Choosing an algorithm

The requirements stated in the previous section automatically exclude some simplification algorithms. Which ones to leave out can be deduced by looking at the major features of the various algorithms. To make it easier for the reader, the table from section 3.5 on page 20 is included again; for instructions on how to read the table, turn back to section 3.5.

Note that there is a difference between the Error measurement of the table and the Error metrics mentioned further on. Error metrics refers to what guides the algorithm - how it is decided which triangles to take away. Error measurement refers to whether the algorithm produces a geometric measurement telling how much the approximated model differs from the original. An Error measurement is interesting to have since it makes it possible to set boundary values in terms of a geometric measure. It is then possible to say that the model should be reduced until a boundary value of, say, 5 millimeters difference between the original and the approximation is reached, instead of just stating that 50% of the faces should be taken away. Every algorithm has Error metrics, but not all provide an Error measurement.

Table 4.1: The table shows the major features of the mesh simplification algorithms examined.

    Method                   | Takes arbitrary input | Joins disconnected surfaces | Error measurement | Maintains topology | Fast
    Vertex Decimation        | Yes                   | No                          | No                | No                 | Yes
    Vertex Clustering        | Depends               | Yes                         | Yes               | No                 | Yes
    Edge Contraction         | Depends               | No                          | No                | Depends            | Yes
    Pair Contraction         | Yes                   | Yes                         | No                | No                 | Yes
    Simplification envelopes | No                    | No                          | Depends           | Yes                | Depends
    Image-Driven             | Yes                   | Yes                         | No                | No                 | No

The table tells that the Simplification envelopes algorithm does not fulfill the requirement of arbitrary input. Arbitrary input is an important requirement, and not accepting it speaks strongly against the Simplification envelopes algorithm.

The Pair Contraction algorithm is an improvement of the Edge Contraction algorithms. As can be seen from the table, the Pair Contraction algorithm fulfills more requirements than the Edge Contraction algorithms, since it can join disconnected surfaces. In this case the Edge Contraction algorithms have no other advantage over Pair Contraction, and they are thus excluded in favor of Pair Contraction.

This leaves four algorithms: Vertex Decimation, Vertex Clustering, Pair Contraction and Image-Driven. The error metrics of Vertex Decimation does not seem convincing. The triangles are taken away in order of area, going from small to large. Even though it is plausible that taking away small triangles causes less harm than taking away large ones, there is no guarantee. It would be more convincing if the error metrics were based on the difference between the original and the approximation.

Vertex Clustering is known to be a fast and sloppy algorithm - sloppy in the sense that it can make drastic changes to the mesh. It does not have a thought-through methodology for deciding which triangles to take away. Instead, all the vertices within a grid cell are clustered together no matter what the cost of such an operation might be. Since it is important that the approximated models differ as little as possible from their originals, the rather unscientific approach of the Vertex Clustering algorithm is unappealing.

There are three major arguments that decide between the Image-Driven and the Pair Contraction algorithms. The first argument can be found in the table.
The Image-Driven algorithm is slow. As stated in section 3.6.4, the input models are usually very large, making the speed of the algorithm an issue. This obviously speaks against the Image-Driven algorithm. On the other hand, there is no claim that the application should be capable of working in real time, and thus the time argument is not strong enough on its own to exclude this algorithm.

The advantage of the Image-Driven algorithm over Pair Contraction is that it has a built-in handling of hidden objects. Looking at the structure of many of the Volvo Cars models (see section 5.2.2 on page 36), it becomes clear that such a feature could be very useful. However, one would want the choice of turning the hidden surface handler on and off, since there might be cases where the interior of a model should be maintained. With the Image-Driven algorithm that would be impossible. It is also perfectly possible to add a hidden surface handler to any mesh simplification algorithm. Consequently, the built-in handling of hidden objects cannot be used as an argument for the Image-Driven algorithm.

The third argument is the most important one. As described, the Image-Driven algorithm is guided by the visual comparison between the original and the approximated model. This is an excellent technique if it can be ensured that only the visual matters. With the Volvo Cars application there is no such guarantee. If the mesh simplification application were to be used only on presentation material, a visual comparison could be enough. However, there is a possibility that Volvo Cars would want to extend the use of mesh simplification and apply it to all their models - even the ones used for geometry simulation and hence collision detection. In such a case geometry becomes more important than appearance, and guiding the algorithm by what looks good would not be sufficient.

Judging by the arguments described above, the algorithm that seems to best fit the requirements is the Garland and Heckbert Pair Contraction [2]. The algorithm takes arbitrary input, joins disconnected surfaces, does not maintain topology and is reasonably fast. The one major feature missing is the geometric error measure. It is, however, perfectly possible to add such a quality to the algorithm; Garland himself has published a paper [1] on how this could be done.

The Garland and Heckbert algorithm is guided by an error metric that carefully measures how much the mesh would change if a certain triangle were taken away. It is actually this error metric that gives the algorithm its credibility, and it is probably the reason why the algorithm has achieved such success within the computer graphics community. The error metrics of other algorithms, such as those of Vertex Clustering or Vertex Decimation, are not adapted to the actual purpose the way Pair Contraction's is. Garland and Heckbert have looked at what is really important in the context - a minimized difference between original and approximation - and let their algorithm be guided accordingly.

The Garland and Heckbert algorithm is presented in detail in the next section.
4.1.3 Mesh simplification algorithm of choice

Garland and Heckbert first presented their algorithm in an article [2] - Surface Simplification Using Quadric Error Metrics - published at SIGGRAPH in 1997. Since then some improvements have been made to it [3], primarily for the handling of colors and textures. However, the part of the algorithm used in this thesis work is entirely from the 1997 article. The essence of the Garland and Heckbert algorithm is presented here.

Pair contraction

The Michael Garland and Paul S. Heckbert surface simplification algorithm is based on the iterative contraction of vertex pairs. A pair contraction, written (v1, v2) → v, moves the vertices v1 and v2 to the new position v, connects all their incident edges to v1, and deletes the vertex v2. If (v1, v2) is an edge, then one or more faces are removed (see Fig. 4.2 on page 28). Otherwise, two previously separate sections of the model are joined at v (see Fig. 4.3 on page 29). Using pair contraction instead of edge contraction makes it possible to merge individual components into a single object.

The algorithm is based on the assumption that, in a good approximation, points do not move far from their original positions. A pair is valid for contraction if either:

1. (v1, v2) is an edge, or

2. ||v1 − v2|| < t, where t is a threshold parameter.

Using a threshold of t = 0 gives a simple edge contraction algorithm (see Fig. 4.2 on page 28). Higher thresholds allow non-connected vertices to be paired (see Fig. 4.3 on page 29). If the threshold is too high, widely separated portions of the model can be connected, and it could create $O(n^2)$ pairs. The set of valid pairs is chosen at initialization time. When the contraction (v1, v2) → v is performed, every occurrence of v2 in a valid pair is replaced by v1, and duplicated pairs are removed.

Figure 4.2: Standard edge contraction (t = 0). The edge shared between the two shaded triangles is contracted into a single point. The shaded triangles become degenerate and are removed.
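The validity test for a candidate pair is simple enough to state directly in code. The C++ sketch below (illustrative names, not from the thesis program) also introduces a small Vec3 type that the later sketches in this section reuse.

    #include <cmath>

    struct Vec3 { double x, y, z; };

    // A pair (v1, v2) is valid for contraction if it is an edge of the
    // mesh, or if the two vertices are closer than the threshold t.
    // t = 0 reduces the scheme to plain edge contraction.
    bool validPair(const Vec3& v1, const Vec3& v2, bool isEdge, double t) {
        const double dx = v1.x - v2.x;
        const double dy = v1.y - v2.y;
        const double dz = v1.z - v2.z;
        return isEdge || std::sqrt(dx * dx + dy * dy + dz * dz) < t;
    }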
Figure 4.3: Non-edge contraction (t > 0). When the threshold t > 0, non-edge pairs can be contracted and unconnected regions joined together.

Defining the cost of a contraction

In order to select which contraction to perform, Garland and Heckbert have introduced the notion of cost. To define the cost, they attempt to characterize the error at each vertex. In the original model, each vertex is the solution of the intersection of a set of planes, namely the planes of the triangles that meet at the vertex. A set of planes can thus be associated with each vertex, and the error can be defined as the sum of squared distances to its planes:

$$\Delta(v) = \Delta([v_x\ v_y\ v_z\ 1]^T) = \sum_{p \in \mathrm{planes}(v)} (p^T v)^2 \qquad (4.1)$$

where $p = [a\ b\ c\ d]^T$ represents the plane defined by the equation $ax + by + cz + d = 0$, with $a^2 + b^2 + c^2 = 1$. The error metric described above can be rewritten in quadratic form:

$$\Delta(v) = \sum_{p \in \mathrm{planes}(v)} (v^T p)(p^T v) = \sum_{p \in \mathrm{planes}(v)} v^T (p p^T) v = v^T \Big(\sum_{p \in \mathrm{planes}(v)} K_p\Big) v \qquad (4.2)$$

where $K_p$ is the matrix:

$$K_p = p p^T = \begin{bmatrix} a^2 & ab & ac & ad \\ ab & b^2 & bc & bd \\ ac & bc & c^2 & cd \\ ad & bd & cd & d^2 \end{bmatrix} \qquad (4.3)$$

$K_p$ can be used to find the squared distance of any point in space to the plane $p$. The sum of all $K_p$ can be represented by $Q$, which then represents the entire set of planes of the vertex. The error at vertex $v = [v_x\ v_y\ v_z\ 1]^T$ can then be expressed in the quadratic form:

$$\Delta(v) = v^T Q v \qquad (4.4)$$
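The quadric machinery of equations (4.1)-(4.4) translates almost directly into code. The following minimal C++ sketch (illustrative, not Garland's Qslim code) builds $K_p$ from a plane, adds quadrics together, and evaluates the error $\Delta(v) = v^T Q v$ at a point.

    // Symmetric 4x4 quadric, stored as a full matrix for clarity.
    struct Quadric {
        double m[4][4] = {};

        // K_p = p p^T for the plane p = [a b c d]^T with
        // ax + by + cz + d = 0 and a^2 + b^2 + c^2 = 1 (unit normal).
        static Quadric fromPlane(double a, double b, double c, double d) {
            const double p[4] = { a, b, c, d };
            Quadric q;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    q.m[i][j] = p[i] * p[j];
            return q;
        }

        // Q = Q1 + Q2: the additive rule used when contracting a pair.
        Quadric operator+(const Quadric& o) const {
            Quadric q;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    q.m[i][j] = m[i][j] + o.m[i][j];
            return q;
        }

        // Delta(v) = v^T Q v with v = [x y z 1]^T.
        double error(double x, double y, double z) const {
            const double v[4] = { x, y, z, 1.0 };
            double e = 0.0;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    e += v[i] * m[i][j] * v[j];
            return e;
        }
    };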
For every contraction (v1, v2) → v, a new matrix Q containing the approximation of the error at the new vertex v must be derived. Garland and Heckbert have chosen to use the simple additive rule Q = Q1 + Q2.

Selecting v

Before the contraction (v1, v2) → v can be performed, a new position for v must be chosen. The easy solution would be to select either v1, v2 or (v1 + v2)/2, depending on which of them has the lowest value of ∆(v). However, Garland and Heckbert go further than that and choose the v that minimizes ∆(v). The error function ∆ is quadratic, so finding its minimum is a linear problem. v is found by solving ∂∆/∂x = ∂∆/∂y = ∂∆/∂z = 0, which is the same as solving for v in the following system of equations:

$$\begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} v = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad (4.5)$$

Assuming that the Q matrix is invertible gives the following solution to equation 4.5:

$$v = \begin{bmatrix} q_{11} & q_{12} & q_{13} & q_{14} \\ q_{12} & q_{22} & q_{23} & q_{24} \\ q_{13} & q_{23} & q_{33} & q_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \qquad (4.6)$$

If Q is not invertible, the algorithm seeks the optimal vertex along the edge v1v2. If that also fails, v is chosen from the endpoints and the midpoint.

Geometric interpretation of the cost

The level surface ∆(v) = ε, i.e. the set of all points whose error with respect to Q is ε, is a quadric surface. Geometrically these surfaces are almost always ellipsoids with v as the center (see Fig. 4.4 on page 30).

Figure 4.4: Geometric interpretation of the quadric error metric.
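Reusing the Vec3 and Quadric types from the sketches above, the simple fallback strategy mentioned first - picking whichever of v1, v2 and the midpoint gives the lowest error - can be written as follows. This is a sketch of the fallback only; the full algorithm first tries the optimal position from equation (4.6).

    // Fallback placement for a contraction (v1, v2) -> v: evaluate the
    // combined quadric Q = Q1 + Q2 at v1, v2 and the midpoint, and keep
    // the candidate with the smallest error Delta(v).
    Vec3 choosePosition(const Quadric& q, const Vec3& v1, const Vec3& v2) {
        const Vec3 mid = { (v1.x + v2.x) / 2.0,
                           (v1.y + v2.y) / 2.0,
                           (v1.z + v2.z) / 2.0 };
        const Vec3 candidates[3] = { v1, v2, mid };
        Vec3 best = candidates[0];
        double bestErr = q.error(best.x, best.y, best.z);
        for (int i = 1; i < 3; ++i) {
            const double e = q.error(candidates[i].x, candidates[i].y, candidates[i].z);
            if (e < bestErr) { bestErr = e; best = candidates[i]; }
        }
        return best;
    }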
4.2 Development of the application

The primary aim of the thesis is to examine whether the Volvo Cars 3D digital models can be reduced to less than 20% of their original polygons using a mesh simplification algorithm. The method chosen to reach this aim (see section 1.2.3) was to create a C++ program that takes the Volvo Cars VRML models as input and gives simplified approximations of these as output. The basis of the C++ program is the implementation of an existing mesh simplification algorithm. Section 4.1 on page 24 gives an account of which algorithm was selected and why. As seen, the choice fell on Michael Garland's and Paul Heckbert's pair contraction simplification with quadric error metric [2]. The theory of the algorithm is fully presented in section 4.1.3 on page 28.

The first program created consisted of the three parts visualized in Fig. 4.5 on page 31.

Figure 4.5: The C++ program created for the mesh simplification application consists of three parts and the user interface.

Part one takes care of the user input from the DOS prompt and the parsing of the VRML file with the original model. To parse a file means to read it and extract information from it. The file is read and the model is stored in memory as a scene graph structure. (Section 3.6 on page 20 tells more about this structure.) Part one then runs a loop that extracts the vertices and coordinates from the first node in the scene graph. The polygons are split into triangles and the information is sent on to Part two, where the actual simplification takes place. Once the vertices and coordinates are simplified to the desired percentage they are sent on to Part three, where they are reinserted into the tree structure. The program then moves back to Part one and the next node in the hierarchy is approached. The loop goes on until all nodes have been simplified. Once finished, Part three writes the new VRML structure to a file.

This is the basis of the program. To yield better results the program was altered slightly during testing; the next chapter accounts for these changes. The specifics about each part of the application are accounted for in the three following sections.
4.2.1 Part one

For Part one of the program there were three major problems that had to be solved, and the essence of those is accounted for here.

Node by node vs. entire model

The VRML model is simplified node by node and not as one whole. This means that an object in the VRML scene graph is extracted, simplified and reinserted into the hierarchy before the next object is approached. There are advantages and disadvantages to this method. An advantage is that there is no limit to the size of the entire model, since it is divided into parts and the algorithm only treats a little at a time. Simplifying only the vertices and coordinates of the same node does not change their surface properties (since they are the same for all of them), and thus there is no need to send that information along. It also preserves the internal order of the objects and does not break the VRML scene graph apart. Simplifying node by node also makes it possible to improve the speed of the application by creating parallel threads in the program, so that nodes are read from the VRML file, simplified and reinserted simultaneously.

The main disadvantage of node-by-node simplification appears when the polygons are unevenly distributed in the model. If the entire model is simplified at the same time, polygons will be taken away primarily from the denser parts of the model, because the cost of removing them there is lower. When the model is reduced in parts, every node is reduced to the same extent (50%, 70%, 90% etc.) no matter whether it belongs to a dense part or not. This is a major disadvantage and can lower the quality considerably.

Which information to extract

The only information extracted from the VRML node and sent on to the next part of the program is the vertices (the x-, y- and z-values of each point) and the coordinates (the information about how the vertices are connected together). The VRML model is reduced node by node, and thus the information about the surface properties does not need to be extracted and connected to each triangle. All the triangles that are reduced at the same time have the same surface properties, and after the simplification is performed they are put back into their original node, where their surface properties are described.

Not even the normals are taken out and connected to the triangles. Instead they are removed from each node in the VRML model. It is not necessary to set the normals explicitly in VRML: if the normals are not set, the VRML viewer will calculate new normals for each triangle.

How to split the polygons

In VRML a polygon can easily consist of more than 20 vertices. Since the Garland and Heckbert algorithm (see section 4.1.3 on page 28) can only handle triangles, the VRML polygons must be split before being sent on to simplification.

From VRML 1.3 and onward there is support for non-convex polygons. Nevertheless, their use is not recommended, and in this application it has been assumed that all polygons are convex. Dealing only with convex polygons makes the splitting easier. From the sixth vertex of a polygon and onward, the splitting can be done as a fan from the first vertex: for each further vertex the triangle (v1, vi, vi+1) is emitted, i.e. setCoordIndex(v1, vi, vi+1, -1). To familiarize yourself with the VRML syntax, please have a look at Fig. A.1 on page 50. Polygons with four and five vertices are treated as two exceptional cases. In larger polygons the first five vertices are treated according to the five-vertex scheme and the rest according to the fan rule described above. (A runnable sketch of this kind of fan split is given at the end of this section.)

4.2.2 Part two

In Part two the actual simplification takes place. The code is an implementation of Garland's and Heckbert's mesh simplification algorithm described in section 4.1.3 on page 28. The code uses functions from the Qslim package, which is Open Source software put together by Garland himself. Appendix B tells more about the Qslim package. The flow of the code in Part two can be summarized as follows:

- Compute the Q matrices for all the initial vertices.

- Select all valid pairs.

- Compute the optimal contraction target v for each valid pair (v1, v2). The error $v^T(Q_1 + Q_2)v$ of this target vertex becomes the cost of contracting that pair.

- Place all the pairs in a heap ordered by cost, with the minimum cost pair on top.

- Iteratively remove the pair (v1, v2) of least cost from the heap, contract this pair, and update the costs of the pairs involving v1.

A skeleton of this loop is also sketched at the end of the section. Once the vertices and coordinates have been simplified they are sent on to Part three.

4.2.3 Part three

The work of Part three is actually quite simple. No major difficulties were encountered and no big problems had to be solved. The work of Part three is to delete the old vertices and coordinates from the VRML node and to insert the new ones. This was achieved simply by calling functions from the Cyber97VRML library. To print the VRML scene graph back to the file, the print function of Cyber97VRML was used.
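As a concrete illustration of the splitting, the C++ sketch below performs a plain triangle-fan split of a convex polygon given as a list of vertex indices. It is a minimal sketch only: the special treatment of four- and five-vertex polygons used in the thesis program is not reproduced here.

    #include <cstddef>
    #include <vector>

    // Triangle-fan split of a convex polygon. The input is the polygon's
    // vertex indices; the output is a flat, VRML-style index list where
    // each triangle is terminated by -1.
    std::vector<int> triangulateFan(const std::vector<int>& poly) {
        std::vector<int> tris;
        for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
            tris.push_back(poly[0]);      // fan apex: the first vertex
            tris.push_back(poly[i]);
            tris.push_back(poly[i + 1]);
            tris.push_back(-1);           // VRML face terminator
        }
        return tris;
    }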
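The heap-driven loop of Part two can likewise be sketched. The skeleton below shows the priority ordering and the main loop only; the contraction itself and the cost updates are indicated by comments, and the types are illustrative rather than taken from Qslim.

    #include <queue>

    // One candidate contraction with its precomputed cost.
    struct Pair {
        int v1, v2;    // indices of the vertex pair
        double cost;   // v^T (Q1 + Q2) v at the optimal target position
        bool operator<(const Pair& o) const {
            return cost > o.cost;  // reversed: makes priority_queue a min-heap
        }
    };

    void simplify(std::priority_queue<Pair>& heap, int targetFaces, int& faces) {
        while (faces > targetFaces && !heap.empty()) {
            const Pair p = heap.top();
            heap.pop();
            // Contract (p.v1, p.v2) -> v: move both vertices to the target
            // position, delete v2, remove degenerate triangles (decreasing
            // faces), recompute the costs of all pairs involving v1, and
            // push the updated pairs back onto the heap. (Omitted here.)
        }
    }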
Chapter 5 Results - testings

The previous chapter accounted for the choice of mesh simplification algorithm and how it was implemented. Now it is time to use the application created in order to fulfill the aim of the thesis: to examine whether the Volvo Cars 3D digital models can be reduced to less than 20% of their original polygons using a mesh simplification algorithm.

The results were produced in several sets. When Garland's Qslim package had been linked to the VRML parser and the interface between them was functioning satisfactorily, the first round of testing was performed. Then followed improvements of the application, and thereafter new tests.

The expectation was to be able to take away more than 80% of the polygons from the original models without creating major degeneracies in the approximations. The tests were performed on models that are used in the simulation process at Volvo Cars.

In this chapter the testing and analysis of two different models will be accounted for. The models have distinct characteristics but are both representative of what the Volvo Cars models may look like.

5.1 The size of the file

It is important to realize that the size of the file containing a 3D model is not directly proportional to the number of triangles the model contains. (This fact has been discussed previously in section 3.1 on page 14.) A file of 25 Mb can actually contain a lighter model than a file of 20 Mb; it depends very much on how compressed the writing of the file is. When the polygons of more than three vertices are split into triangles, the VRML file becomes bigger, because expressing the polygons as triangles is a less compressed way of writing. At Volvo Cars the size of the models is always measured by the size of the file, which is misleading.
5.2 Result of the testings

For the first tests, small models of some 100 000 polygons were used. Simplifying larger models (some millions of polygons) is very time consuming, and in the beginning quick feedback is important.

5.2.1 The bracket model

The first object tested was a model of a so-called bracket, a small part of the outer front of the car. In the first tests the results were not at all satisfactory. A simplification of merely 50% already created problems, and taking away more than 80% of the original polygons proved unthinkable.

The first conclusion drawn was that the model was not of as high resolution as first thought and therefore could not be simplified as much. It seemed curious, however, that this small model would contain so many polygons if it was not of high resolution. An application was created that counted the polygons of each node in the VRML models. When this information was printed, very interesting facts were discovered. It turned out that the polygons were unevenly distributed throughout the model. Some parts were as simplified as they could possibly be, whereas others were of extreme resolution. In this particular case the parts modeled in very high resolution were rather small and thus difficult to discover merely by looking at the model rendered in wire frame. This is illustrated in Fig. 5.1 on page 35: the two small holds in the upper corners of the model actually constitute 88% of the total number of polygons in the entire model.

Figure 5.1: Original model (91 638 polygons) with the high resolution hold to the right. The hold almost appears not to be rendered in wire frame due to the excessive number of polygons.
Simplifying the model in Fig. 5.1 by applying the same amount of reduction to the entire model produced a very poor result, which can be seen in Fig. 5.2 on page 36. 88% of the polygons are taken away and the appearance of the approximation created is far from the original. Especially disturbing are the large holes created when already low resolution parts of the model are simplified.

Figure 5.2: Approximation containing only 12% of the original polygons. Every component is reduced to the same extent, leaving the holds with superfluous polygons and the rest of the model with too few of them.

Setting the parameters differently, so that only the high resolution holds were simplified while the rest of the model was left untouched, generated a completely different result. Achieving the same amount of simplification (88% of the original polygons taken away), the approximation produced this time, to be seen in Fig. 5.3 on page 36, is hard to tell apart from the original model in Fig. 5.1.

Figure 5.3: Approximation containing only 12% of the original polygons. The holds are reduced to containing only 0.5% of the original polygons while the rest of the model is maintained intact.

The result shown in Fig. 5.3 is indeed satisfactory. However, to reach such an outcome the model had to be analyzed and the parameters of the simplification algorithm set according to its structure. Since this is not an automated procedure, it would be time consuming to do for every single model.

5.2.2 The robot model

The next object to be examined is a model of an industrial robot. Before robots can be used in production it must be decided how they should move and what they should do. This is done through geometry simulation using the digital models.
Analyzing the model

Not every model has unevenly distributed polygons. The robot model in Fig. 5.4 on page 37 has no such defects. Its polygons are evenly spread over the model and the resolution is well balanced. Consequently there is not much to cut away. However, there may still be a need to reduce the number of polygons. To reach a good result one might be forced to look outside the scope of traditional mesh simplification. Analyzing the model might give some clues to alternative solutions.

Figure 5.4: The polygons of this robot are evenly distributed over the model and the resolution is already rather low.

The model contains many small items, especially around the hook, as can be seen in Fig. 5.5 on page 38. In the same figure it can also be seen that the gray tube holders on top of the robot have an intricate shape. They are not plain cylinders; they have more the shape of two cones merged together. Describing such shapes demands more polygons than describing cylinders.

Examining the body of the robot, it is discovered that it is hollow. In Fig. 5.5 a cylindrical part is displayed, and it shows clearly that not only the outside of the tube is modeled but also the inside. This kind of modeling demands almost twice as many polygons as if the objects had been made solid or only as a surface. The visual appearance of the outside of the model does not depend on whether it is modeled as hollow or solid, nor does it matter for applications such as collision detection.
Figure 5.5: Details of the robot displayed in Fig. 5.4 on page 37.

Test results

Simplifying the robot model from Fig. 5.4 by taking away 75% of its polygons yields a rather dreadful result, illustrated in Fig. 5.6 on page 39. The tube holders are badly damaged and look like a bird's nest. The back side of the robot has lost its shape. The small gray tubes that run along the body of the robot have in some places been replaced by one large triangle instead of cylindrical tubes. Some of the small objects around the hook look more or less the same, whereas others have disappeared completely. It is still possible to tell what the model represents, but the approximation does not look professional.

Figure 5.6: The robot from Fig. 5.4 has been simplified and has had 75% of its polygons taken away.

Since simplifying the robot model straight off, without any adjustments, did not yield a very good result, the implementation was improved in order to make it possible to adjust the simplification to fit the robot model. The first idea was to simplify separate objects within the model differently depending on their size. The robot model contains many small items. Normally the robots are shown in the context of a robot cell, where there are several robots and other objects. In such a context the small items cannot be seen. The parameters were set so that the small items of the robot model were completely taken away and the rest of the model simplified. The simplification was set to produce the same amount of total simplification as in the previous test - 75% of the polygons taken away. However, since the small items were reduced harder, the larger objects were not as heavily simplified.

The approximated model displayed in Fig. 5.7 on page 40 shows the result of the test with a heavier reduction of small items. The main body of the robot is satisfactorily maintained and the approximation of the tube holders is not as bad as in Fig. 5.6, but not completely acceptable either. There are still some degeneracies in them that are visually unappealing. However, the overall appearance is much better.

Taking away small objects proved to be an interesting approach for some applications - especially presentation material, since it only needs to look good and not always be geometrically correct.

To yield even better results for the robot model and other similar models it would be interesting to test some other methods. One approach would be to try to eliminate the polygons that cannot be seen from the outside, such as the ones inside the cylinder from Fig. 5.5. Implementing such a method would probably make it possible to take away 30-40% of the polygons without any loss in visual appearance. It would not be mesh simplification in its true sense, but it serves well to reduce the number of polygons in the models.

Another interesting approach would be the replacement of objects with similar but simpler ones. Take the tube holders of the robot model as one example: the hollow objects on top of them (see Fig. 5.5) could be replaced by cylinders of the same size demanding fewer polygons. Seen from a distance, such a swap would not affect the visual appearance much. In the same model the rods, modeled as narrow cylinders, could be represented as boxes, which do not demand as many polygons.

5.2.3 Late testings

As the very last test, a larger model of a Volvo V70 body was tested. The model contained 294 400 triangles and the file was 28 Mb. By Volvo Cars standards this is still not a large file, but compared with the other files tested it was considerably bigger. Fig. 5.8 on page 41 shows the model rendered with surface properties and Fig. 5.9 on page 41 shows the same model rendered in wire frame.
Figure 5.7: The robot from Fig. 5.4 has been reduced to contain only 25% of its original polygons. The small objects have been reduced harder than the large ones.

In the wire frame rendering in Fig. 5.9 it can clearly be seen that some parts appear denser than others. These sections are modeled in a higher resolution, and it is the high resolution parts that primarily should be simplified.

At the first attempt the model was simplified node by node, as described in section 4.2.1 on page 32. Even though there were several trials with different parameter settings, the results obtained were not satisfactory. Even a simplification of as little as 50% caused problems.

As a last try, the C++ program was changed so that the entire model was reduced at once instead of node by node. The change yielded much better results. The model in Fig. 5.10 on page 42 is simplified to contain 17% of the original model's polygons. The result is satisfactory. However, looking at the model closely one can see that there are small holes in the mesh. Some were there in the original model as well, but others are new - especially those in the horizontal line on the doors. Such small holes are not acceptable in an application such as collision detection. The holes are probably due to bad connections between the different objects in the model, which make simplification more difficult. The wire frame rendering of the approximated model, shown in Fig. 5.11 on page 42, shows that the dense parts of the model are still of high resolution. Tuning the parameters to avoid the problem with the small holes in the mesh would probably make it possible to simplify the model some 10% more.
Figure 5.8: Volvo V70 original, shaded. The model contains 294 400 triangles. The file is 28 MB.

Figure 5.9: Volvo V70 original, wire frame. The denser parts are parts with higher resolution.

The two models - the original in Fig. 5.8 and the approximation in Fig. 5.10 - do not have exactly the same color. This is because no color handling was implemented when the C++ program was changed to test full-model simplification.
Figure 5.10: Volvo V70 simplified, shaded. The model is reduced to contain 17% of the original polygons.

Figure 5.11: Volvo V70 simplified, wire frame. As can be seen, the high resolution parts are still rather dense. Nevertheless, simplifying further yields degeneracies in other parts of the mesh.
Chapter 6

Conclusion

The primary aim of this thesis was to examine whether Volvo Cars' 3D digital models could be reduced to contain less than 20% of their original polygons using a mesh simplification algorithm. To reach this goal, the thesis focused on the development of a C++ program performing mesh simplification based on an existing algorithm. To accomplish the goal, three main research questions to be answered by the thesis were put forth:

Q1 Which of the presently existing mesh simplification algorithms best conforms to the needs of Volvo Cars?

Q2 What features does a mesh simplification program need in order to be able to reduce Volvo Cars' 3D digital models?

Q3 If it is not possible to reduce the Volvo Cars 3D models by more than 80% using the mesh simplification program constructed, what characteristics must be added?

The primary aim of the thesis has been achieved. The Results chapter presents an analysis of Volvo Cars models reduced using a mesh simplification algorithm implemented in a C++ program created for this thesis. The three research questions have been studied, and the three following sections summarize their answers.

6.1 Evaluating the algorithm chosen

To start with, the mesh simplification field was surveyed and the six algorithms that seemed most interesting were selected. Details about the algorithms are presented in section 3.3 on page 16 and section 3.4 on page 18. Out of the methods described, the Garland and Heckbert quadric error metrics algorithm (see section 4.1.3 on page 28) was considered the most suitable one. The reasons for the choice are stated in section 4.1.2 on page 25.

Garland and Heckbert's algorithm fulfilled all but one of the requirements put forth in section 4.1.1 on page 24.
The one requirement that was not fulfilled was the built-in handling of hidden objects. The requirements stated that it would be preferable to have such a function, since one of the problems with CAE-exported models is that they usually contain many layers of surfaces. Section 7.2 in chapter 7 explains how a hidden surface algorithm could be added to the Garland and Heckbert algorithm.

The Garland and Heckbert algorithm proved to be a good choice primarily for the following reasons:

- The algorithm could handle the input from every model it was tested on. Garland and Heckbert claim that their algorithm can handle any arbitrary input, and so far this has held true.

- When there was enough resolution to start from, the algorithm produced good results (see Fig. 5.3 on page 36 and Fig. 5.7 on page 40).

- There was no flipping over of faces.

- There was never any problem with the size of the input, not even when models containing more than a million triangles were tested.

- The algorithm in itself does not present any obstacle to the improvements found necessary after analyzing the results (see section 6.3 on page 45).

6.2 Evaluating the application created

The program created in this thesis focused on the implementation of the Garland and Heckbert algorithm. The program is a good basis and it clearly serves its purpose of simplifying models. The following improvements are suggested for the existing C++ program in order to increase its performance and usability.

- The VRML parser is too slow to handle large models within reasonable time when the application is run on standard hardware. It is foremost the reading from file that constitutes the bottleneck. One suggestion would be to skip the CyberVRML97 package and create a new parser optimized for the purpose.

- Multi-threading should be added to the program so that all three parts of the application (see section 4.2 on page 31) can run simultaneously. This would increase the speed of the application.

- To make the program easier to use, a GUI should be added to the application.

- As the late testings showed, better results can sometimes be obtained if the entire model is simplified at once (see section 5.2.3 on page 39). The program should be expanded to fully handle this option. As of today, the program only considers surface properties when the model is simplified node by node (see section 4.2.1 on page 32).
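For reference, the core of the Garland and Heckbert method that the program implements can be summarized in a few lines of code. The following is a minimal, illustrative C++ sketch of the quadric error metric itself - not the thesis program's actual implementation: each vertex accumulates a 4x4 quadric built from the planes of its incident triangles, and the cost of moving a vertex to a position v is the quadratic form v^T Q v.

    #include <array>

    // Minimal sketch of Garland and Heckbert's quadric error metric.
    // A quadric is a symmetric 4x4 matrix, here stored as a flat array.
    struct Quadric {
        std::array<double, 16> m{};

        // Fundamental quadric K = p p^T for the plane ax + by + cz + d = 0,
        // where (a, b, c) is the unit normal of a triangle.
        static Quadric fromPlane(double a, double b, double c, double d) {
            Quadric q;
            const double p[4] = {a, b, c, d};
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    q.m[i * 4 + j] = p[i] * p[j];
            return q;
        }

        // Quadrics are additive: a vertex's quadric is the sum over its
        // faces, and an edge contraction uses the sum of the two endpoint
        // quadrics.
        void add(const Quadric& other) {
            for (int i = 0; i < 16; ++i) m[i] += other.m[i];
        }

        // Error of placing the vertex at (x, y, z):
        // v^T Q v with v = (x, y, z, 1).
        double error(double x, double y, double z) const {
            const double v[4] = {x, y, z, 1.0};
            double e = 0.0;
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    e += v[i] * m[i * 4 + j] * v[j];
            return e;
        }
    };

Simplification then repeatedly contracts the edge whose merged vertex has the lowest error; this is also where a geometric stopping criterion such as the one discussed in section 7.1 would hook in.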
6.3 Evaluating the results of the testing

The results (see chapter 5 on page 34) showed that it is possible to take away more than 80% of a model's polygons without compromising too much in quality. Nevertheless, for some models the following features should be added to the application in order to achieve the desired simplification:

- A geometric error measure.

- Hidden surface removal.

- Replacement of complex shapes with geometrically similar but less polygon-demanding ones.

The next chapter explains what these features consist of and how they could be added to the application.
Chapter 7

Future work

7.1 Geometric error measure

Section 5.2.1 on page 35, which analyzes the simplification of the bracket model, concludes that better results are achieved if not all parts of the model are reduced to the same extent. Optimizing the parameter settings for each model is a tedious process and demands too much of the user; it must be done automatically.

Looking at the model tested in section 5.2.1, one can see that its polygons are unevenly distributed. The mission is to adjust the program to reduce the high resolution parts to a greater extent and the less dense parts to a lesser extent.

A geometric error measure tells how much the approximated model differs from the original. Introducing such a measure to the algorithm would make it possible to set the boundary of the simplification in millimeters instead of in number of polygons taken away. As an alternative to letting the program continue the simplification until 80% of the polygons are taken away, the process could continue until the approximation differs at most 2 millimeters from the original. Such a method would imply that more polygons are automatically taken away from the denser parts of the model, since it takes longer before the geometry changes there.

There are various theories on how to apply a geometric error measure to a mesh simplification algorithm. The most interesting article found in the research for this thesis was one by Steve Zelinka and Michael Garland [1]. Their error measure works as an add-on to Garland's simplification algorithm used in this thesis and should be fairly straightforward to implement. Steve Zelinka even offers free source code for the purpose.

7.2 Hidden surface removal

Section 5.2.2 on page 37 discusses the hidden surface problem. The robot model analyzed in that section contains surfaces that never show (see Fig. 5.5 on page 38).
Taking away the polygons of these surfaces would reduce the number of polygons substantially without affecting the appearance of the model.

The problem with hidden surfaces could probably be solved in various ways. One would be to take the idea that Lindstrom and Turk used for their Image-Driven algorithm (see section 3.4 on page 18) and transform it into a hidden surface algorithm. Instead of using the different-angle pictures of the object (see Fig. 3.6 on page 19) to measure the difference between the original model and the approximation, the images could be used to find out which polygons cannot be seen from the outside. It is possible to program the GPU so that it reports which polygons were never drawn when the pictures of the model were rendered; a sketch of such an approach is given at the end of this chapter.

7.3 Replacement of complex shapes

The research for this thesis did not give rise to any ideas of how complex shapes could be replaced by similar but simpler ones. One solution could be to look at some sort of image-based technique where the shapes of the objects are compared to standard shapes like cylinders and boxes. Another technique would be to create bounding boxes around the objects and compare these with standard shapes.
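To make the GPU idea in section 7.2 concrete, the following is a rough C++/OpenGL sketch - an assumed approach, not something implemented in this thesis - that uses standard occlusion queries (core since OpenGL 1.5) to detect triangles contributing no visible samples from one viewpoint. A practical version would render the model from several viewpoints, as in the image-driven method, and only discard triangles hidden in all of them.

    #include <GL/gl.h>
    #include <vector>

    // Returns the indices of triangles that produced no visible samples
    // from the current viewpoint. Assumes the whole model has already been
    // drawn once to fill the depth buffer (a depth pre-pass). One query per
    // triangle is slow, but keeps the sketch simple.
    std::vector<int> findHiddenTriangles(const std::vector<float>& verts,
                                         int triangleCount) {
        std::vector<int> hidden;
        glDepthFunc(GL_LEQUAL);  // let triangles pass against their own depth
        for (int t = 0; t < triangleCount; ++t) {
            GLuint query;
            glGenQueries(1, &query);
            glBeginQuery(GL_SAMPLES_PASSED, query);
            glBegin(GL_TRIANGLES);  // fixed-function style, as in 2005
            for (int v = 0; v < 3; ++v)
                glVertex3fv(&verts[(t * 3 + v) * 3]);
            glEnd();
            glEndQuery(GL_SAMPLES_PASSED);
            GLuint samples = 0;
            glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
            glDeleteQueries(1, &query);
            if (samples == 0) hidden.push_back(t);
        }
        return hidden;
    }

A triangle reported hidden from every rendered viewpoint is a candidate for removal before the actual mesh simplification starts.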
Bibliography

[1] S. Zelinka and M. Garland. Permission Grids: Practical, Error-Bounded Simplification. ACM Transactions on Graphics, April 2002. Available from http://graphics.cs.uiuc.edu/~garland

[2] M. Garland and P. Heckbert. Surface Simplification Using Quadric Error Metrics. SIGGRAPH 97 Proc., pages 209-216, 1997. Available from http://graphics.cs.uiuc.edu/~garland

[3] M. Garland and P. Heckbert. Simplifying Surfaces with Color and Texture using Quadric Error Metrics. IEEE Visualization 98 Proc., 1998.

[4] W. J. Schroeder, J. A. Zarge and W. E. Lorensen. Decimation of triangle meshes. Computer Graphics (SIGGRAPH '92 Proc.), 26(2):65-70, July 1992.

[5] J. Rossignac and P. Borrel. Multi-resolution 3D approximations for rendering complex scenes. Modeling in Computer Graphics: Methods and Applications, pages 455-465, 1993.

[6] H. Hoppe. Progressive meshes. SIGGRAPH '96 Proc., pages 99-108, Aug. 1996.

[7] R. Ronfard and J. Rossignac. Full-range approximation of triangulated polyhedra. Computer Graphics Forum, 15(3), Aug. 1996. Proc. Eurographics '96.

[8] J. Cohen et al. Simplification envelopes. SIGGRAPH '96 Proc., pages 119-128, Aug. 1996.

[9] W. J. Schroeder. A topology modifying progressive decimation algorithm. 8th Conference on Visualization '97 Proc., Oct. 1997.

[10] P. Lindstrom and G. Turk. Image-Driven Simplification. ACM Transactions on Graphics, 19(3):204-241, July 2000. Available from http://www.gvu.gatech.edu/people/faculty/greg.turk/

[11] K. Rule. 3D Graphics File Formats.
Appendix A

VRML syntax

The schematic picture in Fig. A.1 on page 50 shows a piece of VRML code. Nodes and fields are boxed and numbered in order to make it easier to interpret the VRML syntax. The following paragraph explains the key words of the code shown in the picture.

The Shape node (1) contains the two fields appearance and geometry (2). The geometry field is defined by the IndexedFaceSet node (3), which contains the two fields coord and coordIndex (4) defining the vertex coordinates and the face indices. The value of the appearance field (2) is set by the Appearance node (5), in whose material field (6) the surface properties are set.
Figure A.1: Schematic picture of VRML syntax
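The schematic figure does not reproduce in this text version, so as a substitute the following minimal VRML fragment shows the same structure that the paragraph above describes; the coordinate and color values are made up for illustration.

    Shape {
      appearance Appearance {
        material Material {
          diffuseColor 0.8 0.1 0.1
        }
      }
      geometry IndexedFaceSet {
        coord Coordinate {
          point [ 0 0 0,  1 0 0,  0 1 0,  0 0 1 ]
        }
        coordIndex [ 0, 1, 2, -1,  0, 2, 3, -1 ]
      }
    }

Each group of indices in coordIndex is terminated by -1 and refers back into the point list of the Coordinate node.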
Appendix B

Open Source packages

B.1 The VRML parser

The VRML parser is based on the CyberVRML97 library for C++, which is Open Source software (downloaded from http://cybergarage.org/vrml/cv97/cv97cc). The library makes it possible to read and write VRML files, set and get scene graph information, draw the geometries and convert from the VRML file format to X3D. For the CyberVRML97 library to compile, the OpenGL 1.1 library and the GLUT 3.x library must be installed. CyberVRML97 also requires the xercesc package to be included. The CyberVRML97 library comes with little documentation; a hypothetical usage sketch is given at the end of this appendix.

B.2 The Qslim package

The Qslim package was developed by Michael Garland, assistant professor in the Department of Computer Science at the University of Illinois (downloaded from http://graphics.cs.uiuc.edu/~garland). The Qslim package is based on the experimental software that Garland produced to test his and Heckbert's mesh simplification algorithm, which was picked as the algorithm of choice (see section 4.1.3 on page 28) in this Master's thesis. Since it was built for Garland's personal purposes, the code is not easily interpreted. It comes with essentially no documentation and is not to be considered industrially safe. The code compiles on Unix as well as Windows systems, but it requires the OpenGL and XForms libraries.
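Because the CyberVRML97 documentation is so sparse, the following is only a hypothetical sketch of how loading a VRML file with the library might look; the header path, the class name SceneGraph and the method load are assumptions based on the package's naming and may not match the installed version exactly.

    #include <iostream>
    // The header path below is an assumption; the actual include may differ.
    #include "SceneGraph.h"

    int main(int argc, char* argv[]) {
        if (argc < 2) {
            std::cerr << "usage: loader <file.wrl>" << std::endl;
            return 1;
        }
        SceneGraph sceneGraph;            // assumed class name
        if (!sceneGraph.load(argv[1])) {  // assumed method name
            std::cerr << "failed to parse " << argv[1] << std::endl;
            return 1;
        }
        // From here the program would walk the scene graph, looking for
        // IndexedFaceSet nodes to hand over to the simplification step.
        return 0;
    }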