GEOMETRY MANIPULATION: MORPHING BETWEEN TWO DIFFERENT OBJECTS
©2004 Ronny Burkersroda
THEORY
In the early nineties a movie impressed audiences with computer-generated
effects that had never been seen before. "Terminator 2 – Judgment Day" can be
called the beginning of photo-realistic computer graphics in movies. The most
important effects were the various transformations of the T-1000, the enemy
machine of the story. Those transformations were created with a technique called
'morphing'. Morphing can be done in image space, where one two-dimensional
image or video source is transformed into another, or in three dimensions, as it
was done for "Terminator 2", which means that one 3D mesh is transformed into
another. Neither version was originally meant for real time, but both are feasible
with today's graphics hardware. We will only look at an implementation of the 3D
version.
Vertex tweening is an easy way to move each vertex of a mesh independently of
the others. In addition to its source position, every vertex stores a relative or
absolute destination position vector. With a dynamic blending factor, which is the
same for all vertices at a given time, you can interpolate between source and
destination position. For a relative destination the formula looks like this:

Position_Output = Position_Source + Position_Destination ⋅ Factor

With an absolute destination position we first need to calculate the relative one:

Position_Output = Position_Source + ( Position_Destination − Position_Source ) ⋅ Factor

The positions are 3D vectors with x, y and z components, and the blending factor
is a scalar value. In this article we will only use relative destination vectors,
because that saves rendering time and code, as you can see by comparing the two
formulas above.
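As a minimal illustration, the two variants can be written as small helper
functions using the D3DX math types that appear later in the implementation; the
function names are hypothetical:

#include <d3dx9.h>

// relative destination: output = source + destination * factor
D3DXVECTOR3 TweenRelative( const D3DXVECTOR3& vct3Source,
    const D3DXVECTOR3& vct3Destination, FLOAT fFactor )
{
    return vct3Source + vct3Destination * fFactor;
}

// absolute destination: output = source + ( destination - source ) * factor
D3DXVECTOR3 TweenAbsolute( const D3DXVECTOR3& vct3Source,
    const D3DXVECTOR3& vct3Destination, FLOAT fFactor )
{
    return vct3Source + ( vct3Destination - vct3Source ) * fFactor;
}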
Using only this technique imposes a lot of limits, because start and target mesh
are identical apart from the vertex positions: the number of vertices is the same,
and the faces and attributes do not differ. Start and target mesh also share
materials, textures, shaders and other states. So plain vertex tweening is only
useful to animate objects in ways where mesh skinning fails.
To morph between two different objects we can still use vertex tweening, but we
will transform and render both meshes at once. Beforehand the destination
positions of the mesh vertices have to be calculated. This can be done by
projecting the vertices of the first mesh onto the second one and vice versa. We
use the vertex position as the origin for a ray-mesh intersection test, which is a
function that checks whether and where a ray intersects at least one face of a
mesh. If there are multiple intersections, the nearest one is a good choice for the
destination position. If there is no intersection at all, the source position should
be used as the destination; in that case the relative destination is the zero
vector.
For the intersection ray we also need a direction vector in addition to the
origin. This can be the normal of the vertex, or we can calculate a vector from the
origin towards the mesh or towards a user-defined center. We should also test the
inverted direction vector to find all possible intersections. That is not necessary
if the origin lies outside both meshes, because then we are not forced to use the
vertex position as the origin. For example, it is possible to use the bounding
sphere of a mesh:
Direction = normalize( Center − Position )
Origin = Center − Direction ⋅ Radius
This is very useful if you have complex objects like a helicopter with a cockpit
interior. Using the bounding sphere projects every vertex of one mesh onto the hull
of the other one; otherwise some hull vertices might be projected onto faces of the
interior. Choosing the best values always depends on the kind of mesh design.
After the destination vector has been computed, we store it in the vertex data.
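For illustration, the ray setup described above could look like the following
sketch; D3DXComputeBoundingSphere() would be called once per target mesh, and the
variable names (pvct3Positions, nVertices, nVertexSize describing its vertex data)
are hypothetical:

// compute center and radius of the target mesh's bounding sphere once
D3DXVECTOR3 vct3Center;
FLOAT       fRadius;
D3DXComputeBoundingSphere( pvct3Positions, nVertices, nVertexSize,
    &vct3Center, &fRadius );

// per vertex: ray from the sphere hull towards the center
D3DXVECTOR3 vct3Direction = vct3Center - vct3Position;
D3DXVec3Normalize( &vct3Direction, &vct3Direction );
D3DXVECTOR3 vct3Origin = vct3Center - vct3Direction * fRadius;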
Now we know where each vertex has to be moved to get an object with structures
like those of the other one. It is possible to tessellate the objects to increase
the accuracy of those structures, but you do not have to, because we want to keep
the good performance of optimized low-polygon objects. Other tricks to improve
quality are described later in this article.
After the preprocessing is done we are able to render the objects with the
morphing effect. This could be done by the application on the CPU, but we
concentrate on a vertex shader, because that improves performance on
DirectX-8-compatible graphics cards and still works in software vertex processing
on older hardware.
To render a morphing mesh we have to set the current interpolation factor as a
shader constant. For the target mesh the factor has to be inverted by subtracting
it from one, so both meshes are always in the same state. The shader then uses the
factor to interpolate between source and destination position. Other vertex
processing like lighting and texture coordinate transformation can be done as
usual. It is possible to render both objects every frame, or to render only the
start mesh until the factor reaches one half and then only the target one. Either
option alone can look strange or ugly, but there are optimizations that can be
applied (see the Optimizations section).
[Figure: Screenshot from LightBrain's game prototype "Rise of the Hero", which
implements the morphing effect originally created for that project.
©2004 LightBrain GmbH, Hamburg, Germany]
IMPLEMENTATION
I am using DirectX 9 to show an implementation of the morphing algorithm. The
D3DX extension library helps to save time, so the basic functions of 3D graphics
programming will not be implemented here. For experienced OpenGL programmers it
should be no problem to convert the code or to write their own program based on
the algorithm.
For the objects we can use D3DX extension meshes, which are accessed through the
ID3DXMesh interface. A mesh stores vertices, an index list of the triangle faces
between them, and a table of attributes for the triangles. The attributes are
identification numbers that divide the mesh into different subsets, which can be
rendered with different states such as materials or textures.
It is possible to load a mesh with the D3DXLoadMeshFromX[…] functions or to
define a mesh yourself by locking the vertex, index and attribute buffers and
filling in the data. For now we take the first route and load the meshes from
common DirectX .x files, which other programs are able to read and write. Beside
the meshes we get an array of materials, including texture file names, for all
subsets. A subset is rendered by setting material and textures first and then
calling ID3DXMesh::DrawSubset( nSubset ), where nSubset is the number of the
subset.
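A minimal loading and rendering sketch could look like this; error handling and
the creation of the textures named in pTextureFilename are omitted, and
pd3ddDevice stands for the Direct3D device pointer:

ID3DXMesh*   pmshMesh;
ID3DXBuffer* pbufMaterials;
DWORD        nMaterials;

// load mesh and material table from an .x file
D3DXLoadMeshFromX( "object.x", D3DXMESH_MANAGED, pd3ddDevice, NULL,
    &pbufMaterials, NULL, &nMaterials, &pmshMesh );

D3DXMATERIAL* pMaterials
    = (D3DXMATERIAL*) pbufMaterials->GetBufferPointer();

// one subset per material
for( DWORD nSubset = 0; nSubset < nMaterials; ++nSubset )
{
    pd3ddDevice->SetMaterial( &pMaterials[ nSubset ].MatD3D );
    // set the texture created from pMaterials[ nSubset ].pTextureFilename here
    pmshMesh->DrawSubset( nSubset );
}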
To preprocess the meshes we first have to enhance the vertex data, so the
relative destination position can be stored in it. There are two formats in
Direct3D to define the structure of a vertex. Flexible vertex formats are used for
fixed-function pipeline processing, which transforms the vertex data with a fixed
set of functions whose parameters can be set through Direct3D. Because the
possibilities of those functions were limited, vertex shaders were introduced, in
which the processing can be programmed freely. For vertex shaders there is a much
more flexible format: the vertex declaration, which allows us to include whatever
data is needed. Since we are using vertex shaders, we will also use such
declarations. At first they seem more complicated, but they make us more
compatible with other effects.
A declaration is defined by an array of D3DVERTEXELEMENT9 elements, and the
last one must contain the data of the D3DDECL_END() macro. Every other element
defines one data element of a vertex by setting the offset into the vertex data
(in bytes), the type of the data (e.g. D3DDECLTYPE_FLOAT3 for a 3D vector), a
method for hardware tessellation, the usage (e.g. position or texture coordinate)
and the usage index, in case more than one element of the same usage is stored.
Because those declarations are also used to pass vertex data on to the pipeline, a
stream number can be specified, too. In this way multiple vertex buffers can be
used to render one set of primitives. But our meshes contain only one vertex
buffer, so the stream number should be set to zero.
A common 3D vertex includes position and normal vector and one 2D texture
coordinate. The vertex declaration for such a vertex looks like this:
D3DVERTEXELEMENT9 pStandardMeshDeclaration[] =
{
{ 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_POSITION, 0 },
{ 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_NORMAL, 0 },
{ 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_TEXCOORD, 0 },
D3DDECL_END()
};
At the beginning of the vertex data (byte 0) there is a 3D vector for the standard
vertex position. It is followed by the 3D vector of the vertex normal, which is
placed at offset 12 because the position vector consists of 3 FLOAT values. A
FLOAT value has a size of 4 bytes (= sizeof( FLOAT )), and multiplying that by 3
elements results in 12 bytes. Because the normal has the same size, the texture
coordinate starts at offset 24 (= 2 vectors * 12 bytes). The texture coordinate,
however, is only a 2D vector, so the whole vertex has a size of 32 bytes
(= 2 vectors * ( 3 floats * 4 bytes ) + 1 vector * ( 2 floats * 4 bytes )). This is
important because we want to add an element for the destination position:
D3DVERTEXELEMENT9 pMorphingMeshDeclaration[] =
{
{ 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_POSITION, 0 },
{ 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_NORMAL, 0 },
{ 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_TEXCOORD, 0 },
{ 0, 32, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
D3DDECLUSAGE_POSITION, 1 },
D3DDECL_END()
};
Now we have added a second position vector, which must have a higher usage index
than the standard one. The whole enhancement can be done automatically with the
following steps: we use D3DXGetDeclVertexSize() to retrieve the vertex size of the
original declaration, and we walk through the declaration to find the highest
usage index of a position element. Next, the destination position element for
morphing can be written over the D3DDECL_END() entry; D3DXGetDeclLength() returns
the index of this entry increased by one. As usage index we take the highest index
found and add one to it. The last step is to write D3DDECL_END() at the end again.
If pStandardMeshDeclaration was the original declaration, it has now been enhanced
to pMorphingMeshDeclaration. You can see the routine in listing 1.
D3DVERTEXELEMENT9 pMeshDeclaration[ MAX_FVF_DECL_SIZE ];
DWORD nPosition = 0;
DWORD nUsageIndex = 0;
DWORD nOffset;
...
// process all declaration elements until end is reached
while( pMeshDeclaration[ nPosition ].Stream != 0xFF )
{
// check for higher index of a position usage
if(
( pMeshDeclaration[ nPosition ].Usage
== D3DDECLUSAGE_POSITION )
&& ( pMeshDeclaration[ nPosition ].UsageIndex
>= nUsageIndex )
)
nUsageIndex = pMeshDeclaration[ nPosition ].UsageIndex + 1;
// increase position in declaration array
++nPosition;
}
// get element number for new entry
nPosition = D3DXGetDeclLength( pMeshDeclaration ) - 1;
nOffset = D3DXGetDeclVertexSize( pMeshDeclaration, 0 );
// move end element
memmove( &pMeshDeclaration[ nPosition + 1 ] ,
&pMeshDeclaration[ nPosition ], sizeof( D3DVERTEXELEMENT9 ) );
// add new position element
pMeshDeclaration[ nPosition ].Stream = 0;
pMeshDeclaration[ nPosition ].Offset = nOffset;
pMeshDeclaration[ nPosition ].Type = D3DDECLTYPE_FLOAT3;
pMeshDeclaration[ nPosition ].Method = D3DDECLMETHOD_DEFAULT;
pMeshDeclaration[ nPosition ].Usage = D3DDECLUSAGE_POSITION;
pMeshDeclaration[ nPosition ].UsageIndex = nUsageIndex;
Listing 1. Enhancing the vertex declaration for a morphing mesh.
The next step is to clone the start mesh using the new declaration as a parameter.
ID3DXMesh::CloneMesh() creates a new mesh object with the same data as the
original one, but including space for the destination position. If you do not need
the original mesh any longer (e.g. for rendering it without morphing), it can be
released.
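In code, the cloning step might look like the following sketch; pMeshDeclaration
is the enhanced declaration from listing 1 and pd3ddDevice the Direct3D device:

ID3DXMesh* pmshMorphing;

// clone the mesh into the enhanced declaration to gain space for the
// destination position
pmshSource->CloneMesh( pmshSource->GetOptions(), pMeshDeclaration,
    pd3ddDevice, &pmshMorphing );

// the original mesh is not needed for morphing any longer
pmshSource->Release();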
The vertex buffer of the cloned mesh must now be locked, so we can calculate its
destination positions. Every vertex has to be projected onto the target mesh. The
D3DX extension library provides a function for this: D3DXIntersect() checks where
a ray intersects an extension mesh. We can use whatever ray origin and direction
we want and will get all possible projection points. As mentioned before, it is
usually best to take the nearest one. The source position has to be subtracted
from it to get the relative destination vector, which is then stored in the vertex
data (see listing 2). Fortunately, reading and writing vertex data is not as hard
as it seems. Vertex declarations make it easy to get the offset of a specific
vertex element. To retrieve the source position we look for an element of type
D3DDECLTYPE_FLOAT3 with usage D3DDECLUSAGE_POSITION and usage index 0; to get the
normal, the usage has to be D3DDECLUSAGE_NORMAL. Then we use that element's offset
to read the 3D vector from the vertex data. Accessing a specific vertex is
possible by doing the following:
VOID* pVertex = (BYTE*) pData + nVertexSize * nVertex;
pData is the start address of the vertex buffer data, nVertexSize is the size of
one vertex, which can be calculated by calling D3DXGetDeclVertexSize(), and
nVertex is the number of the vertex that should be accessed. pVertex stores the
address of this vertex and can be used to read and write the vectors:
D3DXVECTOR3 vct3SourcePosition
= *(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetSourcePosition );
...
*(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetDestinationPosition )
= vct3DestinationPosition;
The offsets, which we obtained from the vertex declaration, are stored in
nOffsetSourcePosition for the source and nOffsetDestinationPosition for the
destination position.
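Retrieving those offsets from the declaration of the cloned mesh could be done as
in this sketch; only the two position elements are searched here:

D3DVERTEXELEMENT9 pDeclaration[ MAX_FVF_DECL_SIZE ];
UINT              nOffsetSourcePosition      = 0;
UINT              nOffsetDestinationPosition = 0;

pmshMorphing->GetDeclaration( pDeclaration );

// scan the declaration for the two position elements
for( UINT nElement = 0; pDeclaration[ nElement ].Stream != 0xFF; ++nElement )
{
    if( pDeclaration[ nElement ].Usage == D3DDECLUSAGE_POSITION )
    {
        if( pDeclaration[ nElement ].UsageIndex == 0 )
            nOffsetSourcePosition = pDeclaration[ nElement ].Offset;
        else
            nOffsetDestinationPosition = pDeclaration[ nElement ].Offset;
    }
}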
ID3DXMesh*  pmshDestination;  // pointer to destination mesh interface
D3DXVECTOR3 vct3Source;       // source position (vertex input)
D3DXVECTOR3 vct3Destination;  // destination position (vertex output)
D3DXVECTOR3 vct3Direction;    // ray direction vector
D3DXVECTOR3 vct3Center;       // bounding sphere center (parameter)
FLOAT       fRadius;          // bounding sphere radius (parameter)
FLOAT       fDistance;        // distance from ray origin to the mesh
BOOL        bIntersection;    // intersection flag
...
// calculate direction from the vertex position to the sphere center
D3DXVec3Normalize( &vct3Direction,
    &D3DXVECTOR3( vct3Center - vct3Source ) );
// compute intersection with the destination mesh from a point outside on
// the bounding sphere, shooting towards the center
D3DXIntersect( pmshDestination,
&D3DXVECTOR3( vct3Center - vct3Direction * fRadius ),
&vct3Direction, &bIntersection, NULL, NULL, NULL, &fDistance,
NULL, NULL );
// check for intersection
if( bIntersection )
{
// calculate projected vector and subtract source position
vct3Destination = vct3Center + vct3Direction *
( fDistance - fRadius ) - vct3Source;
}
else
{
// set relative destination position to zero
vct3Destination = D3DXVECTOR3( 0.0f, 0.0f, 0.0f );
}
Listing 2. Calculating the destination position for a vertex
After the destination position vector of each vertex has been stored in the buffer
of the start mesh, the same has to be done with the target mesh, which is
projected onto the start one. Then the preprocessing is finished.
Now we need the vertex shader, which can transform a vertex of a morphing
mesh between source and destination position. At the beginning of the shader we
declare the version and inputs, which are loaded from the data of the vertex using
its declaration:
; declaration of required vertex shader version
vs_1_1
; declaration of the input registers
dcl_position0 v0 ; source position
dcl_position1 v1 ; destination position
...
At this point we are able to calculate all output values except oPos in any way we
want. The position output has to be an interpolated vector between source and
destination position. If the blend factor is stored in the shader constant c0, the
code can look like this:
...
; transform and project vertex to screen
mul r0.xyz, v1.xyz, c0.x ; blend destination vector
add r0.xyz, r0.xyz, v0.xyz ; add source position
mov r0.w, v0.w ; copy w component
...
First the relative destination vector is multiplied by the interpolation factor.
Next the source vector is added to the result. After that, r0.xyz contains a
vector that lies between source and destination position, as long as c0.x is a
value between 0 and 1. Finally we copy the unchanged w component of the source
position, which is normally 1.0.
Now r0 can be processed as if it contained the untransformed vertex position
(e.g. transformed from object to screen space).
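Putting the pieces together, the whole shader might look like the following
sketch. The register assignments for the texture coordinate (v2), the combined
world-view-projection matrix in c4 to c7 and the simple coordinate pass-through
are assumptions for illustration; lighting is omitted:

vs_1_1

dcl_position0 v0             ; source position
dcl_position1 v1             ; relative destination position
dcl_texcoord0 v2             ; texture coordinate (assumed layout)

; blend between source and destination position (c0.x = blend factor)
mul r0.xyz, v1.xyz, c0.x
add r0.xyz, r0.xyz, v0.xyz
mov r0.w, v0.w

; transform to clip space (world-view-projection matrix assumed in c4..c7)
dp4 oPos.x, r0, c4
dp4 oPos.y, r0, c5
dp4 oPos.z, r0, c6
dp4 oPos.w, r0, c7

; pass the texture coordinate through
mov oT0, v2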
The rendering code of your application has to set the constant of the blend factor,
which is c0.x in the shader above. This can be done with the following call:
IDirect3DDevice9::SetVertexShaderConstantF( 0,
&D3DXVECTOR4( fBlendFactor, 0.0f, 0.0f, 0.0f ), 1 );
Remember that you have to invert the blend factor for the target mesh by
calculating 1.0f - fBlendFactor. Now you are able to render the meshes the way you
want: up to a blend factor of one half you can draw only the start mesh and
afterwards only the target, or you render both at the same time with z-buffering
enabled. If your objects have semi-transparent faces or texels, you should, for
the second variant, draw the target mesh first and then the source one up to a
blend value of one half, and afterwards in reverse order. Either variant alone
will not look good for most kinds of objects.
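As an illustration of the first variant, the per-frame scheduling could look like
the following sketch; the function and variable names are hypothetical, and the
material and texture setup per subset is omitted:

void RenderMorphing( IDirect3DDevice9* pd3ddDevice, ID3DXMesh* pmshSource,
    ID3DXMesh* pmshTarget, DWORD nSubsets, FLOAT fBlendFactor )
{
    // draw only the source mesh in the first half, only the target afterwards
    ID3DXMesh* pmshDraw  = ( fBlendFactor < 0.5f ) ? pmshSource : pmshTarget;
    FLOAT      fConstant = ( fBlendFactor < 0.5f ) ? fBlendFactor
                                                   : 1.0f - fBlendFactor;

    // pass the blend factor to the vertex shader (c0.x)
    D3DXVECTOR4 vct4Factor( fConstant, 0.0f, 0.0f, 0.0f );
    pd3ddDevice->SetVertexShaderConstantF( 0, (const FLOAT*) &vct4Factor, 1 );

    for( DWORD nSubset = 0; nSubset < nSubsets; ++nSubset )
        pmshDraw->DrawSubset( nSubset );
}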
OPTIMIZATIONS
There are a lot of things we can do to get the best-looking morphing effect. I
will explain some of them here.
1. BLENDING THE ALPHA VALUE
The most powerful extension is easy to enable but difficult to get right: we
interpolate the alpha value of the object materials, too. For that, both objects
have to be rendered at the same time. Instead of using the same blend factor as
for morphing, we keep the alpha blend value of one mesh at 1; otherwise both
objects would become slightly transparent while they are morphing. When the
morphing starts, the start mesh stays opaque until the halfway point while the
target is fading in. At the halfway point the blend value of both objects is one,
and afterwards the start mesh fades out. The mesh that has the blend value of 1
has to be rendered first, so the semi-transparent pixels of the second one are
blended with the opaque pixels of the first.
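A small sketch of how the two alpha values could be derived from the morphing
factor, so that one mesh always stays opaque while the other one fades; the
variable names are hypothetical:

FLOAT fAlphaSource;
FLOAT fAlphaTarget;

if( fBlendFactor < 0.5f )
{
    fAlphaSource = 1.0f;                           // source stays opaque
    fAlphaTarget = fBlendFactor * 2.0f;            // target fades in
}
else
{
    fAlphaSource = ( 1.0f - fBlendFactor ) * 2.0f; // source fades out
    fAlphaTarget = 1.0f;                           // target stays opaque
}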
Because of possibly intersecting triangles, which can be semi-transparent, there
are still cases in which the morphing will look bad. This can happen if the start
and target mesh originally have semi-transparent materials. One way to improve it
is to blend the semi-transparent materials away, which works well in combination
with reflection mapping (see 4.). Here we first blend the transparencies of the
start mesh away, then fade in the target, next fade the start mesh out and finally
blend the transparencies of the target mesh back in. Then there are no overlapping
semi-transparent pixels between source and target mesh.
If you do not want semi-transparent materials to become opaque, you can take the
second approach. Here we use two render-target textures with an alpha channel and
render each object into one of them without alpha fading. Before that we have to
clear the alpha values of the textures to 0. Then the two textures are rendered on
a quad to the screen, blending between both with the morphing blend factor. You
should make sure that the texels of the textures are mapped exactly to the pixels
of the screen. If you also need the z-values of the morphing meshes (e.g. when
rendering other objects), the application can write those to the z-buffer while
rendering to the textures. To do that we have to use textures that have at least
the dimensions of the back buffer. But we do not need to use a quad as large as
the screen: transforming the bounding boxes of the meshes to screen space gives us
the rectangle we actually need to render. For that we also have to calculate the
correct texture coordinates.
For applications with anti-aliasing enabled we cannot render directly to the
textures, because render-target textures do not support multi-sampling. Since
DirectX 9.0b it has been possible to render to a multi-sampled surface like the
back buffer and then copy it to a render-target texture.
2. USING STEP OBJECTS
When you are morphing between objects with hard edges, the effect can also look
very angular. If you want a softer or more complex morphing, step objects can be
used: instead of interpolating directly between source and target mesh, we morph
several times in a chain. With two step objects it looks like this:
Start Object ⇔ First Step
First Step ⇔ Second Step
Second Step ⇔ Target Object
These are the pairs between which we have to morph and onto which the projections
have to be done. If you want a softer effect, the step meshes should be modeled
with a round shape. For example, you or the artist could create the same sphere
for both steps and edit each of them into a shape similar to the source or target
object, but softer.
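One possible way to drive such a chain is to map a global blend factor onto one
segment of the chain plus a local factor inside it; this helper is purely
illustrative and not part of the original implementation:

// map a global factor in [0,1] to a segment of the morphing chain
// (start -> step 1 -> step 2 -> target) and a local factor inside it;
// nSegments would be 3 for two step objects
void SelectMorphingSegment( FLOAT fGlobalFactor, UINT nSegments,
    UINT* pnSegment, FLOAT* pfLocalFactor )
{
    FLOAT fScaled  = fGlobalFactor * nSegments;
    UINT  nSegment = (UINT) fScaled;

    if( nSegment >= nSegments )          // clamp for fGlobalFactor == 1.0f
        nSegment = nSegments - 1;

    *pnSegment     = nSegment;
    *pfLocalFactor = fScaled - nSegment; // factor for the selected mesh pair
}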
3. TESSELLATING SOURCE AND TARGET MESH
To improve the accuracy of the mesh projection we can tessellate the source and
target morphing meshes before they are projected. Unfortunately this results in
many more vertices, but if vertex processing is not the bottleneck you can afford
it. It is good for objects with relatively large faces that are projected onto
several different faces of the other mesh. For the step objects we do not need it,
because those should already be optimized for their task. Source and target mesh,
however, are often optimized for being rendered separately at the best
quality-performance ratio and not for morphing.
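The article does not prescribe a particular tessellation method; one option
offered by D3DX is N-patch tessellation, sketched here with an assumed adjacency
buffer pAdjacency and an example segment count:

ID3DXMesh* pmshTessellated;

// refine the mesh before projection; 2.0f segments per edge is an example value
D3DXTessellateNPatches( pmshMesh, pAdjacency, 2.0f, FALSE,
    &pmshTessellated, NULL );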
4. MAPPING EFFECT TEXTURES
In the mind of a viewer our two morphing objects become one unit. To amplify this
feeling we can give the morphing objects a single texture or the same set of
textures, which are mapped the same way. It then seems as if the materials of the
start object are melting to become the new ones, so they are also one unit. We
have to blend the effect textures in at the beginning and out at the end. One
possibility to get the look of the T-1000 from "Terminator 2" is to use an
environmental cube map, which is reflected by the objects. In the first quarter
the original materials of the start mesh are faded out, so for half of the effect
we see only the reflections, and in the last quarter the target materials are
faded in.
[Figure: Morphing effect extended by two step objects, alpha blending, reflection
cube mapping and blooming.]
Another way is to use one or more "lightning" textures, which are mapped
spherically and animated or rotated over time to get an electricity effect. This
can also be enhanced by a particle effect.
5. USING PARTICLE SYSTEMS
If you want morphing for a fantasy styled game, in which you do not want to use
technical looking effects like reflections, then particles are suitable to create a
more magical effect. You can move some stars or flares around the objects. Or a
demon, which rises from the ground, could be surrounded by fire. There are a lot
of particle effects to imagine and a lot of them can enhance the morphing of two
objects.
A benefit of particles is that they are able to cover possible artifacts of a
morphing effect.
6. BLOOMING THE SCENE
Overexposure and blooming make the lighting of objects more realistic and help to
cover artifacts, too. You can use them to let an object become hot while morphing
or to intensify the specular highlights of reflecting surfaces.
7. INTERPOLATING OTHER DATA
Besides the position vector we can blend other vertex data such as the normal or
tangent vector, too. This is important if you want the lighting or reflections to
change correctly while morphing. Be careful with such direction vectors: because
of the interpolation they lose their unit length. If you need them for
calculations (e.g. lighting, reflection or transformation to tangent space), you
have to normalize these vectors after the interpolation.
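In the vertex shader this could look like the following fragment; the input
registers for the source normal (v3) and the relative destination normal (v4) are
assumptions for illustration:

dcl_normal0 v3               ; source normal
dcl_normal1 v4               ; relative destination normal
...
; blend the normal like the position and renormalize the result
mul r1.xyz, v4.xyz, c0.x     ; scale relative destination normal
add r1.xyz, r1.xyz, v3.xyz   ; add source normal
dp3 r1.w, r1, r1             ; squared length of the blended normal
rsq r1.w, r1.w               ; reciprocal length
mul r1.xyz, r1, r1.w         ; normalized normal, ready for lighting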
8. LOADING PRE-PROCESSED OBJECTS
Pre-processing the objects costs so much time that it cannot be done between the
rendering of two frames, and for objects with thousands of faces it noticeably
increases waiting times. So it is a good idea to pre-process an object combination
once and store the result to a file. The morphing objects can then be loaded from
that file by the application that wants to morph them. Like many other things,
this saves a lot of time, which for players simply appears to be part of the level
loading time.
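One simple option, sketched here with a hypothetical file name and layout, is to
write the raw vertex buffer of the pre-processed mesh to a custom file; index and
attribute data would be stored the same way in a complete implementation:

#include <stdio.h>

VOID* pData;
DWORD nVertices   = pmshMorphing->GetNumVertices();
DWORD nVertexSize = pmshMorphing->GetNumBytesPerVertex();

// lock the processed vertex buffer for reading and dump it to disk
pmshMorphing->LockVertexBuffer( D3DLOCK_READONLY, &pData );

FILE* pFile = fopen( "morphing_source.dat", "wb" );
fwrite( &nVertices,   sizeof( DWORD ), 1, pFile );
fwrite( &nVertexSize, sizeof( DWORD ), 1, pFile );
fwrite( pData, nVertexSize, nVertices, pFile );
fclose( pFile );

pmshMorphing->UnlockVertexBuffer();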
CONCLUSION
If you need to transform one object into another, this morphing algorithm is nice
eye candy for doing so. Unfortunately there are also some disadvantages:
Pro:
- Real-time rendering
- 3D meshes do not have to be changed
- Objects can have completely different attributes
- A lot of tricks to make the effect better

Contra:
- Pre-processing takes some time, but it can be removed from the final
  application if the source and target mesh combination is known beforehand
- Vertex projection has to be optimized manually for the best possible result
- Not flexible enough to work with every kind of object animation (skinned
  meshes should be much less of a problem than objects whose subsets are
  transformed completely independently)
On the CD you can find the complete implementation of a demo that presents the
morphing effect. There is also a library for pre-processing meshes, which can
easily be included in any Direct3D 9 project.
If you want to know more about LightBrain or "Rise of the Hero", visit the web
site www.lightbrain.de. Annotation 2013: "Rise of the Hero" was never completed,
and LightBrain was shut down roughly one year later.

More Related Content

What's hot

ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION
ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATIONICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION
ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATIONijcga
 
Image feature extraction
Image feature extractionImage feature extraction
Image feature extractionRushin Shah
 
3 d graphics with opengl part 2
3 d graphics with opengl  part 23 d graphics with opengl  part 2
3 d graphics with opengl part 2Sardar Alam
 
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGESDOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGEScseij
 
CSG for Polyhedral Objects
CSG for Polyhedral ObjectsCSG for Polyhedral Objects
CSG for Polyhedral ObjectsBen Trumbore
 
3 - A critical review on the usual DCT Implementations (presented in a Malays...
3 - A critical review on the usual DCT Implementations (presented in a Malays...3 - A critical review on the usual DCT Implementations (presented in a Malays...
3 - A critical review on the usual DCT Implementations (presented in a Malays...Youness Lahdili
 
A Novel Algorithm for Watermarking and Image Encryption
A Novel Algorithm for Watermarking and Image Encryption A Novel Algorithm for Watermarking and Image Encryption
A Novel Algorithm for Watermarking and Image Encryption cscpconf
 
Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014ijcsbi
 
Building 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesBuilding 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesShanglin Yang
 
3D Curve Project
3D Curve Project3D Curve Project
3D Curve Projectgraphitech
 
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPSIMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPSIJNSA Journal
 
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...ijcsit
 
An efficient approach to wavelet image Denoising
An efficient approach to wavelet image DenoisingAn efficient approach to wavelet image Denoising
An efficient approach to wavelet image Denoisingijcsit
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScsitconf
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGEScscpconf
 
A comprehensive survey of contemporary
A comprehensive survey of contemporaryA comprehensive survey of contemporary
A comprehensive survey of contemporaryprjpublications
 

What's hot (20)

ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION
ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATIONICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION
ICVG: PRACTICAL CONSTRUCTIVE VOLUME GEOMETRY FOR INDIRECT VISUALIZATION
 
Image feature extraction
Image feature extractionImage feature extraction
Image feature extraction
 
3 d graphics with opengl part 2
3 d graphics with opengl  part 23 d graphics with opengl  part 2
3 d graphics with opengl part 2
 
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGESDOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
DOMAIN SPECIFIC CBIR FOR HIGHLY TEXTURED IMAGES
 
CSG for Polyhedral Objects
CSG for Polyhedral ObjectsCSG for Polyhedral Objects
CSG for Polyhedral Objects
 
50120140501009
5012014050100950120140501009
50120140501009
 
3 - A critical review on the usual DCT Implementations (presented in a Malays...
3 - A critical review on the usual DCT Implementations (presented in a Malays...3 - A critical review on the usual DCT Implementations (presented in a Malays...
3 - A critical review on the usual DCT Implementations (presented in a Malays...
 
A Novel Algorithm for Watermarking and Image Encryption
A Novel Algorithm for Watermarking and Image Encryption A Novel Algorithm for Watermarking and Image Encryption
A Novel Algorithm for Watermarking and Image Encryption
 
Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014Vol 14 No 1 - July 2014
Vol 14 No 1 - July 2014
 
Building 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D ImagesBuilding 3D Morphable Models from 2D Images
Building 3D Morphable Models from 2D Images
 
3D Curve Project
3D Curve Project3D Curve Project
3D Curve Project
 
FEATURE EXTRACTION USING SURF ALGORITHM FOR OBJECT RECOGNITION
FEATURE EXTRACTION USING SURF ALGORITHM FOR OBJECT RECOGNITIONFEATURE EXTRACTION USING SURF ALGORITHM FOR OBJECT RECOGNITION
FEATURE EXTRACTION USING SURF ALGORITHM FOR OBJECT RECOGNITION
 
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPSIMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS
IMAGE ENCRYPTION BASED ON DIFFUSION AND MULTIPLE CHAOTIC MAPS
 
50120140501016
5012014050101650120140501016
50120140501016
 
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST...
 
An efficient approach to wavelet image Denoising
An efficient approach to wavelet image DenoisingAn efficient approach to wavelet image Denoising
An efficient approach to wavelet image Denoising
 
Image segmentation
Image segmentation Image segmentation
Image segmentation
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
 
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGESAUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
AUTOMATIC THRESHOLDING TECHNIQUES FOR SAR IMAGES
 
A comprehensive survey of contemporary
A comprehensive survey of contemporaryA comprehensive survey of contemporary
A comprehensive survey of contemporary
 

Viewers also liked

Viewers also liked (6)

Guia de aprendizaje gestion talento humano (2)
Guia de aprendizaje gestion talento humano (2)Guia de aprendizaje gestion talento humano (2)
Guia de aprendizaje gestion talento humano (2)
 
Avance1
Avance1Avance1
Avance1
 
Avance 1
Avance 1Avance 1
Avance 1
 
2013 04-29 - skema - présentation de x aucompte
2013 04-29 - skema - présentation de x  aucompte2013 04-29 - skema - présentation de x  aucompte
2013 04-29 - skema - présentation de x aucompte
 
Shadow copy
Shadow copyShadow copy
Shadow copy
 
Community Research and Analysis, TN Basic Economic Development Course 2013
Community Research and Analysis, TN Basic Economic Development Course 2013Community Research and Analysis, TN Basic Economic Development Course 2013
Community Research and Analysis, TN Basic Economic Development Course 2013
 

Similar to ShaderX³: Geometry Manipulation - Morphing between two different objects

Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeks
Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeksBeginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeks
Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeksJinTaek Seo
 
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...IOSR Journals
 
isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)David Tenorio
 
Hacking the Kinect with GAFFTA Day 3
Hacking the Kinect with GAFFTA Day 3Hacking the Kinect with GAFFTA Day 3
Hacking the Kinect with GAFFTA Day 3benDesigning
 
Wordoku Puzzle Solver - Image Processing Project
Wordoku Puzzle Solver - Image Processing ProjectWordoku Puzzle Solver - Image Processing Project
Wordoku Puzzle Solver - Image Processing ProjectSurya Chandra
 
DDGK: Learning Graph Representations for Deep Divergence Graph Kernels
DDGK: Learning Graph Representations for Deep Divergence Graph KernelsDDGK: Learning Graph Representations for Deep Divergence Graph Kernels
DDGK: Learning Graph Representations for Deep Divergence Graph Kernelsivaderivader
 
Best Techniques of Point cloud to 3D.pdf
Best Techniques of Point cloud to 3D.pdfBest Techniques of Point cloud to 3D.pdf
Best Techniques of Point cloud to 3D.pdfRvtcad
 
Point cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihangPoint cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihangLihang Li
 
Js info vis_toolkit
Js info vis_toolkitJs info vis_toolkit
Js info vis_toolkitnikhilyagnic
 
iCAMPResearchPaper_ObjectRecognition (2)
iCAMPResearchPaper_ObjectRecognition (2)iCAMPResearchPaper_ObjectRecognition (2)
iCAMPResearchPaper_ObjectRecognition (2)Moniroth Suon
 
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...cscpconf
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an ObjectAnkur Tyagi
 
Rhino Working with Meshes
Rhino Working with MeshesRhino Working with Meshes
Rhino Working with MeshesNYCCTfab
 
YARCA (Yet Another Raycasting Application) Project
YARCA (Yet Another Raycasting Application) ProjectYARCA (Yet Another Raycasting Application) Project
YARCA (Yet Another Raycasting Application) Projectgraphitech
 
Wavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile DevicesWavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile Devicescsandit
 
Gavrila_ICCV99.pdf
Gavrila_ICCV99.pdfGavrila_ICCV99.pdf
Gavrila_ICCV99.pdfDeepdeeper
 
Laplacian-regularized Graph Bandits
Laplacian-regularized Graph BanditsLaplacian-regularized Graph Bandits
Laplacian-regularized Graph Banditslauratoni4
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIRJET Journal
 
LogicProgrammingShortestPathEfficiency
LogicProgrammingShortestPathEfficiencyLogicProgrammingShortestPathEfficiency
LogicProgrammingShortestPathEfficiencySuraj Nair
 

Similar to ShaderX³: Geometry Manipulation - Morphing between two different objects (20)

Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeks
Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeksBeginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeks
Beginning direct3d gameprogramming09_shaderprogramming_20160505_jintaeks
 
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
Medial Axis Transformation based Skeletonzation of Image Patterns using Image...
 
isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)isvc_draft6_final_1_harvey_mudd (1)
isvc_draft6_final_1_harvey_mudd (1)
 
Hacking the Kinect with GAFFTA Day 3
Hacking the Kinect with GAFFTA Day 3Hacking the Kinect with GAFFTA Day 3
Hacking the Kinect with GAFFTA Day 3
 
Wordoku Puzzle Solver - Image Processing Project
Wordoku Puzzle Solver - Image Processing ProjectWordoku Puzzle Solver - Image Processing Project
Wordoku Puzzle Solver - Image Processing Project
 
DDGK: Learning Graph Representations for Deep Divergence Graph Kernels
DDGK: Learning Graph Representations for Deep Divergence Graph KernelsDDGK: Learning Graph Representations for Deep Divergence Graph Kernels
DDGK: Learning Graph Representations for Deep Divergence Graph Kernels
 
Best Techniques of Point cloud to 3D.pdf
Best Techniques of Point cloud to 3D.pdfBest Techniques of Point cloud to 3D.pdf
Best Techniques of Point cloud to 3D.pdf
 
Point cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihangPoint cloud mesh-investigation_report-lihang
Point cloud mesh-investigation_report-lihang
 
Js info vis_toolkit
Js info vis_toolkitJs info vis_toolkit
Js info vis_toolkit
 
iCAMPResearchPaper_ObjectRecognition (2)
iCAMPResearchPaper_ObjectRecognition (2)iCAMPResearchPaper_ObjectRecognition (2)
iCAMPResearchPaper_ObjectRecognition (2)
 
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
A NOVEL APPROACH TO SMOOTHING ON 3D STRUCTURED ADAPTIVE MESH OF THE KINECT-BA...
 
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object3D Reconstruction from Multiple uncalibrated 2D Images of an Object
3D Reconstruction from Multiple uncalibrated 2D Images of an Object
 
Rhino Working with Meshes
Rhino Working with MeshesRhino Working with Meshes
Rhino Working with Meshes
 
YARCA (Yet Another Raycasting Application) Project
YARCA (Yet Another Raycasting Application) ProjectYARCA (Yet Another Raycasting Application) Project
YARCA (Yet Another Raycasting Application) Project
 
Wavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile DevicesWavelet-Based Warping Technique for Mobile Devices
Wavelet-Based Warping Technique for Mobile Devices
 
Gavrila_ICCV99.pdf
Gavrila_ICCV99.pdfGavrila_ICCV99.pdf
Gavrila_ICCV99.pdf
 
A0280105
A0280105A0280105
A0280105
 
Laplacian-regularized Graph Bandits
Laplacian-regularized Graph BanditsLaplacian-regularized Graph Bandits
Laplacian-regularized Graph Bandits
 
Intelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial IntelligenceIntelligent Auto Horn System Using Artificial Intelligence
Intelligent Auto Horn System Using Artificial Intelligence
 
LogicProgrammingShortestPathEfficiency
LogicProgrammingShortestPathEfficiencyLogicProgrammingShortestPathEfficiency
LogicProgrammingShortestPathEfficiency
 

Recently uploaded

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxnull - The Open Security Community
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphNeo4j
 

Recently uploaded (20)

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptxMaking_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
Making_way_through_DLL_hollowing_inspite_of_CFG_by_Debjeet Banerjee.pptx
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge GraphSIEMENS: RAPUNZEL – A Tale About Knowledge Graph
SIEMENS: RAPUNZEL – A Tale About Knowledge Graph
 

ShaderX³: Geometry Manipulation - Morphing between two different objects

  • 1. GEOMETRY MANIPULATION: MORPHING BETWEEN TWO DIFFERENT OBJECTS MAY 7, 2013 THEORY In the early nineties a movie impressed the audience by showing computer generated effects, which have never been seen before. “Terminator 2 – Judgment Day” can be called the beginning of photo-realistic computer graphics in movies. The most important effects were the various transformations of T-1000, which was enemy machine in the story. Those transformations were made by a technique called ‘morphing’. That can be done in image space, where one two-dimensional image or video source is transformed into another. For “Terminator 2” it was done three-dimensional, which means that one 3D mesh is being transformed into another. Both versions are not meant to be used in real time, but it is able with today’s graphics hardware. We will only look at an implementation of the 3D version. Vertex tweening is an easy way to move a vertex of a mesh independently from others. Here every vertex has got a relative or absolute destination position vector beside the source one. With a dynamic blending factor, which is equal for all vertices at a time, you can interpolate between source and destination position. The formula looks like this for a relative destination: PositionOutput= PositionSource+ PositionDestination ⋅ Factor And with an absolute destination position we need to calculate the relative one: PositionOutput = PositionSource + ( PositionDestination − PositionSource )⋅Factor The positions are 3D vectors with x, y and z components and the blending factor is a scalar value. For the article we will only use relative destination vectors, because it saves rendering time and code as you can see by comparing the two formulas above. Using only this technique results in a lot of limits, because start and target mesh are the same beside vertex positions. That means there is no difference in number of vertices and the faces and attributes are the same. Start and target mesh have got equal materials, textures, shaders and further states. So vertex tweening is only useful to animate objects in a way, where mesh skinning fails. To morph between two different objects we can use vertex tweening, but we will transform and render both meshes at once. Beforehand the destination positions of the mesh vertices have to be calculated. This can be done by projecting the vertices of the first mesh to the second one and vice versa. We use the vertex position as origin for a ray-mesh-intersection test, which is a function that checks if and where a ray intersects at least one face of a mesh. If there are multiple intersections the nearest is a good vector to use as destination position. Without any intersection source should be used as destination. In this case the relative destination is the zero vector. For the intersection ray we also need a direction vector beside the origin. This can be the normal of a vertex or we calculate a vector from origin to the mesh or a user-defined center. We also should invert the direction vector to get all possible intersections. That is not needed if an origin is situated out of both meshes since ©2004 RONNY BURKERSRODA PAGE 1
  • 2. GEOMETRY MANIPULATION: MORPHING BETWEEN TWO DIFFERENT OBJECTS MAY 7, 2013 we do not have to use the vertex position as origin. For example it is possible to use the bounding sphere of a mesh: Direction = Center − Position Origin = −Direction⋅ Radius + Center This is very useful if you have got complex objects like helicopters with cockpit interior. Using the bounding sphere projects every vertex of one mesh to the hull of the other one. Otherwise it could be possible that some hull vertices are projected to faces of the interior. Choosing the best values always depends on the kind of mesh design. After the destination vector is computed we store it into the vertex data. Now we know where a vertex has to be moved to get an object that has structures like another one. It is possible to tessellate objects to increase the accuracy of those structures, but you do not have to, because we want to use the good performance of optimized low-polygon objects. Other tricks to improve quality are being descripted later in this article. After the preprocessing is done we are able to render the objects with the morphing effect. This can be done by the application but we are concentrating on vertex shader, because this improves performance on DirectX-8-compatible graphics cards and works in software mode for older hardware. We have to set the current interpolation factor as shader constant to render a morphing mesh. For the target one the factor has to be inverted by subtracting it from one. In this way both meshes are always at same state. Now we use the factor to interpolate between source and destination position. Other vertex processing like lighting and texture coordinate transformation can be done normally. It is possible to render both objects per frame. Or we only render the start mesh to the half and then the target one. This can look strange or ugly but there are optimizations, which can be done (see Optimization part). This is a screenshot from LightBrain’s game prototype “Rise of the Hero” implementing the morphing effect, which was originally created for that project. ©2004 LightBrain GmbH, Hamburg, Germany ©2004 RONNY BURKERSRODA PAGE 2
IMPLEMENTATION

I am using DirectX 9 to show an implementation of the morphing algorithm. The D3DX extensions help to save time, so the basic functions of 3D graphics programming will not be implemented here. For experienced OpenGL programmers it should be no problem to convert the code or to write their own program based on the algorithm.

For the objects we can use D3DX extension meshes, which are accessed through the ID3DXMesh interface. A mesh stores vertices, an index list that forms triangle faces between them and a table of attributes for the triangles. The attributes are identification numbers that divide the mesh into different subsets, which can be rendered with different states such as materials or textures. A mesh can either be loaded with the D3DXLoad[…]MeshX[…] functions or defined manually by locking the vertex, index and attribute buffers and filling in the data. For now we take the first way and load the meshes from common DirectX (.x) files, which other programs are able to read and write. Beside the mesh we get an array of materials including texture file names for all subsets. A subset is rendered by setting its material and textures first and then calling ID3DXMesh::DrawSubset( nSubset ), where nSubset is the number of the subset (a short sketch of this loading and rendering path follows below).

To preprocess the meshes we first have to enhance the vertex data, so the relative destination position can be stored in it. Direct3D knows two formats to define the structure of a vertex: Flexible vertex formats (FVF) are used for fixed-function pipeline processing, which transforms the vertex data with a fixed set of functions whose parameters are set through Direct3D. Because the possibilities of those functions are limited, vertex shaders were introduced, in which the processing can be programmed freely. For vertex shaders there is a much more flexible format: the vertex declaration, which allows us to include all data that is needed. Since we are using vertex shaders, we will also use such declarations. At first they seem more complicated, but they make it easier to stay compatible with other effects.

A declaration is defined by an array of D3DVERTEXELEMENT9 elements, and the last entry must contain the data of the D3DDECL_END() macro. Every other entry describes one data element of a vertex: the offset in the vertex data (in bytes), the type of the data (e.g. D3DDECLTYPE_FLOAT3 for a 3D vector), a method for hardware tessellation, the usage (e.g. position or texture coordinate) and the usage index, if more than one element of the same usage is stored. Because declarations are also used to pass vertex data on to the pipeline, a stream number can be specified, too; in this way multiple vertex buffers can be used to render one set of primitives. Our meshes contain only one vertex buffer, so the stream number is always zero.

A common 3D vertex includes a position and normal vector and one 2D texture coordinate. The vertex declaration for such a vertex looks like this:

D3DVERTEXELEMENT9 pStandardMeshDeclaration[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

At the beginning of the vertex data (byte 0) there is a 3D vector for the standard vertex position. It is followed by the 3D vector of the vertex normal, which is positioned at offset 12, because the position vector consists of 3 FLOAT values. A FLOAT has a size of 4 bytes (= sizeof( FLOAT )), and multiplied by 3 components this results in 12 bytes. Because the normal has the same size, the texture coordinate starts at offset 24 (= 2 vectors * 12 bytes). The texture coordinate, however, is only a 2D vector, so the whole vertex has a size of 32 bytes (= 2 vectors * ( 3 floats * 4 bytes ) + 1 vector * ( 2 floats * 4 bytes )).
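As a side note to the loading path mentioned above, the following is a rough sketch, under the assumption of a non-Unicode build, of loading an .x file and drawing it subset by subset; error handling and texture caching are left out and the function name is made up for illustration.

#include <d3dx9.h>

// Sketch: load a mesh from an .x file and draw it subset by subset.
// Error handling and texture caching are omitted for brevity.
HRESULT LoadAndDrawMesh( IDirect3DDevice9* pDevice, const char* pszFile )
{
    ID3DXMesh*   pMesh      = NULL;
    ID3DXBuffer* pMaterials = NULL;
    DWORD        nMaterials = 0;

    // load the mesh; adjacency and effect instances are not needed here
    HRESULT hResult = D3DXLoadMeshFromXA( pszFile, D3DXMESH_MANAGED, pDevice,
        NULL, &pMaterials, NULL, &nMaterials, &pMesh );
    if( FAILED( hResult ) )
        return hResult;

    D3DXMATERIAL* pMaterialData = (D3DXMATERIAL*) pMaterials->GetBufferPointer();

    for( DWORD nSubset = 0; nSubset < nMaterials; ++nSubset )
    {
        // set material and texture of the subset, then draw it
        IDirect3DTexture9* pTexture = NULL;
        if( pMaterialData[ nSubset ].pTextureFilename )
            D3DXCreateTextureFromFileA( pDevice,
                pMaterialData[ nSubset ].pTextureFilename, &pTexture );

        pDevice->SetMaterial( &pMaterialData[ nSubset ].MatD3D );
        pDevice->SetTexture( 0, pTexture );
        pMesh->DrawSubset( nSubset );

        if( pTexture )
            pTexture->Release();
    }

    pMaterials->Release();
    pMesh->Release();
    return S_OK;
}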
The vertex size of 32 bytes is important, because we want to add an element for the destination position:

D3DVERTEXELEMENT9 pMorphingMeshDeclaration[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    { 0, 32, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 1 },
    D3DDECL_END()
};

Now we have added a second position vector, which must have a higher usage index than the standard one. The whole enhancement can be done automatically with the following steps: We use D3DXGetDeclVertexSize() to retrieve the vertex size of the original declaration, and we walk through the declaration to find the highest usage index of a position element. Next the destination position element for morphing is written over the D3DDECL_END() entry, whose position is D3DXGetDeclLength() minus one. As usage index we take the highest found index and add one to it. The last step is to write a new D3DDECL_END() at the end. If pStandardMeshDeclaration was the original declaration, it has now been enhanced to pMorphingMeshDeclaration. You can see the routine in listing 1.

D3DVERTEXELEMENT9 pMeshDeclaration[ MAX_FVF_DECL_SIZE ];
DWORD             nPosition   = 0;
DWORD             nUsageIndex = 0;
DWORD             nOffset;

...

// process all declaration elements until end is reached
while( pMeshDeclaration[ nPosition ].Stream != 0xFF )
{
    // check for higher index of a position usage
    if( ( pMeshDeclaration[ nPosition ].Usage == D3DDECLUSAGE_POSITION )
        && ( pMeshDeclaration[ nPosition ].UsageIndex >= nUsageIndex ) )
        nUsageIndex = pMeshDeclaration[ nPosition ].UsageIndex + 1;

    // increase position in declaration array
    ++nPosition;
}

// get element number for new entry
nPosition = D3DXGetDeclLength( pMeshDeclaration ) - 1;
nOffset   = D3DXGetDeclVertexSize( pMeshDeclaration, 0 );

// move end element
memmove( &pMeshDeclaration[ nPosition + 1 ], &pMeshDeclaration[ nPosition ],
    sizeof( D3DVERTEXELEMENT9 ) );

// add new position element
pMeshDeclaration[ nPosition ].Stream     = 0;
pMeshDeclaration[ nPosition ].Offset     = nOffset;
pMeshDeclaration[ nPosition ].Type       = D3DDECLTYPE_FLOAT3;
pMeshDeclaration[ nPosition ].Method     = D3DDECLMETHOD_DEFAULT;
pMeshDeclaration[ nPosition ].Usage      = D3DDECLUSAGE_POSITION;
pMeshDeclaration[ nPosition ].UsageIndex = nUsageIndex;

Listing 1. Enhancing the vertex declaration for a morphing mesh.
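As a usage sketch for listing 1, the routine could be wrapped into a hypothetical helper (here called EnhanceMeshDeclaration()) and combined with ID3DXMesh::GetDeclaration() and the cloning step that is described next; the helper name and the D3DXMESH_MANAGED option are assumptions, not part of the original code.

#include <d3dx9.h>

// Hypothetical helper that contains exactly the code of listing 1.
void EnhanceMeshDeclaration( D3DVERTEXELEMENT9* pMeshDeclaration );

// Sketch: read the original declaration, append the destination position
// element and clone the mesh with the enhanced declaration.
HRESULT CreateMorphingMesh( ID3DXMesh* pSource, IDirect3DDevice9* pDevice,
    ID3DXMesh** ppMorphing )
{
    D3DVERTEXELEMENT9 pMeshDeclaration[ MAX_FVF_DECL_SIZE ];

    // retrieve the original vertex declaration of the mesh
    HRESULT hResult = pSource->GetDeclaration( pMeshDeclaration );
    if( FAILED( hResult ) )
        return hResult;

    // append the destination position element (listing 1)
    EnhanceMeshDeclaration( pMeshDeclaration );

    // clone the mesh with the enhanced declaration (see below)
    return pSource->CloneMesh( D3DXMESH_MANAGED, pMeshDeclaration, pDevice,
        ppMorphing );
}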
The next step is to clone the start mesh using the new declaration as a parameter. ID3DXMesh::CloneMesh() creates a new mesh object with the same data as the original one, but including space for the destination position. If you do not want to use the original mesh any longer (e.g. for rendering it without morphing), it can be released.

Now the vertex buffer of the cloned mesh must be locked, so we can calculate its destination positions. Every vertex has to be projected onto the target mesh. For this there is a D3DX function: D3DXIntersect() checks where a ray intersects a D3DX mesh. We can use any ray origin and direction we want and get all possible projection points; as mentioned before, it is most useful to take the nearest one. The source position has to be subtracted to get the relative destination vector, which can then be stored in the vertex data (see listing 2).

Fortunately, reading and writing vertex data is not as hard as it seems. Vertex declarations make it easy to get the offset of a specific vertex element: to retrieve the source position we look for an element of type D3DDECLTYPE_FLOAT3 with usage D3DDECLUSAGE_POSITION and usage index 0, and to get the normal the usage has to be D3DDECLUSAGE_NORMAL. Then we use the offset to read the 3D vector from the vertex data.
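The offset lookup described above could be implemented like this; the helper is not part of the original code and its name is made up for illustration.

#include <d3dx9.h>

// Hedged helper (assumption): search a vertex declaration for an element with
// the given usage and usage index and return its byte offset, or -1 if no such
// element exists.
INT GetElementOffset( const D3DVERTEXELEMENT9* pDeclaration,
    BYTE nUsage, BYTE nUsageIndex )
{
    for( UINT n = 0; pDeclaration[ n ].Stream != 0xFF; ++n )
    {
        if( ( pDeclaration[ n ].Usage == nUsage )
            && ( pDeclaration[ n ].UsageIndex == nUsageIndex ) )
            return pDeclaration[ n ].Offset;
    }
    return -1;
}

// usage example: offsets for source and destination position
// INT nOffsetSourcePosition      = GetElementOffset( pMeshDeclaration, D3DDECLUSAGE_POSITION, 0 );
// INT nOffsetDestinationPosition = GetElementOffset( pMeshDeclaration, D3DDECLUSAGE_POSITION, 1 );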
Accessing a specific vertex is possible by doing the following:

VOID* pVertex = (BYTE*) pData + nVertexSize * nVertex;

pData is the start address of the vertex buffer data, nVertexSize is the size of one vertex, which can be calculated by calling D3DXGetDeclVertexSize(), and nVertex is the number of the vertex that should be accessed. pVertex stores the address of this vertex and can be used to read and write the vectors:

D3DXVECTOR3 vct3SourcePosition = *(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetSourcePosition );
...
*(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetDestinationPosition ) = vct3DestinationPosition;

The offsets, which we got from the vertex declaration, are stored in nOffsetSourcePosition for the source and nOffsetDestinationPosition for the destination position.

ID3DXMesh*   pmshDestination;   // pointer to destination mesh interface
D3DXVECTOR3  vct3Source;        // source position (vertex input)
D3DXVECTOR3  vct3Destination;   // destination position (vertex output)
D3DXVECTOR3  vct3Direction;     // ray direction vector
D3DXVECTOR3  vct3Center;        // bounding sphere center (parameter)
FLOAT        fRadius;           // bounding sphere radius (parameter)
FLOAT        fDistance;         // distance from sphere to mesh
BOOL         bIntersection;     // intersection flag

...

// calculate direction from vertex position to sphere center
D3DXVec3Normalize( &vct3Direction, &D3DXVECTOR3( vct3Center - vct3Source ) );

// compute intersection with the destination mesh, starting from a point on the
// bounding sphere and going towards the center
D3DXIntersect( pmshDestination, &D3DXVECTOR3( vct3Center - vct3Direction * fRadius ),
    &vct3Direction, &bIntersection, NULL, NULL, NULL, &fDistance, NULL, NULL );

// check for intersection
if( bIntersection )
{
    // calculate projected vector and subtract source position
    vct3Destination = vct3Center + vct3Direction * ( fDistance - fRadius ) - vct3Source;
}
else
{
    // set relative destination position to zero
    vct3Destination = D3DXVECTOR3( 0.0f, 0.0f, 0.0f );
}

Listing 2. Calculating the destination position for a vertex.
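The glue code around listing 2 could look roughly like the following sketch; it assumes that the projection of listing 2 is wrapped into a hypothetical function ProjectVertex() and that the vertex size and element offsets have already been determined as described above.

#include <d3dx9.h>

// Hypothetical wrapper around listing 2: returns the relative destination
// vector for one source position.
D3DXVECTOR3 ProjectVertex( ID3DXMesh* pmshDestination,
    const D3DXVECTOR3& vct3Source, const D3DXVECTOR3& vct3Center, FLOAT fRadius );

// Sketch: lock the cloned morphing mesh, project every vertex onto the target
// mesh and store the relative destination position into the vertex data.
HRESULT ComputeDestinationPositions( ID3DXMesh* pmshMorphing,
    ID3DXMesh* pmshDestination, const D3DXVECTOR3& vct3Center, FLOAT fRadius,
    UINT nVertexSize, UINT nOffsetSourcePosition, UINT nOffsetDestinationPosition )
{
    BYTE* pData = NULL;
    HRESULT hResult = pmshMorphing->LockVertexBuffer( 0, (VOID**) &pData );
    if( FAILED( hResult ) )
        return hResult;

    for( DWORD nVertex = 0; nVertex < pmshMorphing->GetNumVertices(); ++nVertex )
    {
        BYTE* pVertex = pData + nVertexSize * nVertex;

        // read the source position and compute the relative destination (listing 2)
        D3DXVECTOR3 vct3Source = *(D3DXVECTOR3*)( pVertex + nOffsetSourcePosition );
        D3DXVECTOR3 vct3Destination = ProjectVertex( pmshDestination, vct3Source,
            vct3Center, fRadius );

        // store the relative destination position into the vertex
        *(D3DXVECTOR3*)( pVertex + nOffsetDestinationPosition ) = vct3Destination;
    }

    return pmshMorphing->UnlockVertexBuffer();
}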
After the destination position vector of each vertex has been stored in the buffer of the start mesh, the same has to be done with the target mesh, which is projected onto the start one. Then the preprocessing is finished.

Now we need the vertex shader, which transforms a vertex of a morphing mesh between source and destination position. At the beginning of the shader we declare the version and the inputs, which are loaded from the vertex data using its declaration:

; declaration of required vertex shader version
vs_1_1

; declaration of the input registers
dcl_position0 v0 ; source position
dcl_position1 v1 ; destination position
...

At this point we are able to calculate all output values except oPos in any way we want. The position output is an interpolated vector between source and destination position. If the blend factor is stored in the shader constant c0, the code can look like this:

...
; transform and project vertex to screen
mul r0.xyz, v1.xyz, c0.x   ; blend destination vector
add r0.xyz, r0.xyz, v0.xyz ; add source position
mov r0.w, v0.w             ; copy w component
...

First the relative destination vector is multiplied by the interpolation factor, then the source vector is added to the result. After that r0.xyz contains a vector that lies between source and destination position, as long as c0.x is a value between 0 and 1. At last we copy the unchanged w component of the source position, which is normally 1.0f. Now r0 can be processed as if it contained the untouched vertex position (e.g. for the transformation from object to screen space).

The rendering code of your application has to set the constant of the blend factor, which is c0.x in the shader above. This can be done with the following call:

IDirect3DDevice9::SetVertexShaderConstantF( 0, &D3DXVECTOR4( fBlendFactor, 0.0f, 0.0f, 0.0f ), 1 );

Remember that you have to invert the blend factor for the target mesh by calculating 1.0f - fBlendFactor. Now you are able to render the meshes the way you want: up to the half blend factor you can draw only the start mesh and afterwards only the target one, or you render both at the same time with z-buffering enabled. For the second variant you should draw the target mesh first and then the source one up to the half blend value, and afterwards in reversed order, if your objects have semi-transparent faces or texels.
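Putting the constants and draw calls together, a frame could be rendered roughly like this; the function is only a sketch, and transformation constants as well as per-subset materials and textures are assumed to be handled elsewhere.

#include <d3dx9.h>

// Hedged rendering sketch: draw start and target mesh with the morphing vertex
// shader. Only the blend factor in c0.x differs between the two meshes.
void DrawMorphingPair( IDirect3DDevice9* pDevice, IDirect3DVertexShader9* pShader,
    ID3DXMesh* pmshStart, DWORD nSubsetsStart,
    ID3DXMesh* pmshTarget, DWORD nSubsetsTarget, FLOAT fBlendFactor )
{
    pDevice->SetVertexShader( pShader );

    // start mesh: c0.x = fBlendFactor
    D3DXVECTOR4 vct4Factor( fBlendFactor, 0.0f, 0.0f, 0.0f );
    pDevice->SetVertexShaderConstantF( 0, (const FLOAT*) &vct4Factor, 1 );
    for( DWORD nSubset = 0; nSubset < nSubsetsStart; ++nSubset )
        pmshStart->DrawSubset( nSubset );     // material/texture setup omitted

    // target mesh: inverted factor, so both meshes are always in the same state
    vct4Factor.x = 1.0f - fBlendFactor;
    pDevice->SetVertexShaderConstantF( 0, (const FLOAT*) &vct4Factor, 1 );
    for( DWORD nSubset = 0; nSubset < nSubsetsTarget; ++nSubset )
        pmshTarget->DrawSubset( nSubset );    // material/texture setup omitted
}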
On its own, neither rendering variant will look good for most kinds of objects.

OPTIMIZATIONS

To get the best looking morphing effect there are a lot of things we can do. I will explain some of them here.

1. BLENDING THE ALPHA VALUE

The most powerful extension is easy to enable but difficult to get right: we interpolate the alpha value of the object materials, too. For that both objects have to be rendered at the same time. Instead of using the same blend factor as for morphing, we keep the alpha value of one mesh at 1; otherwise both objects would become slightly transparent while they are morphing. When the morphing starts, the start mesh stays opaque until the half, while the target is fading in; at the half the alpha value of both objects is 1, and afterwards the start mesh fades out (see the sketch at the end of this point). The mesh with the alpha value of 1 has to be rendered first, so the semi-transparent pixels of the second one are blended with the opaque pixels of the first.

Because triangles can intersect and can themselves be semi-transparent, there are still cases in which the morphing will look bad. This can happen if start and target mesh have semi-transparent materials to begin with. One way to improve this is to blend the semi-transparent materials away, which works well in combination with reflection mapping (see 4.): first the transparencies of the start mesh are blended away, then the target is faded in, next the start mesh is faded out and at last the transparencies of the target mesh are blended back in. Then there are no overlapping semi-transparent pixels between source and target mesh.

If you do not want semi-transparent materials to become opaque, there is a second way: we use two render-target textures with an alpha channel and render each object into one of them without alpha fading. Before that the alpha values of the textures have to be cleared to 0. Then the two textures are rendered on a quad to the screen, blending between both with the morphing blend factor. Here you should pay attention that the texels of the textures are mapped exactly to the pixels of the screen. If you also need the z values of the morphing meshes (e.g. when rendering other objects), the application can write them to the z-buffer while rendering to the textures. To do that we have to use textures that have at least the dimensions of the back buffer. We do not need to draw a quad as large as the screen, though: transforming the bounding boxes of the meshes to screen space gives us the rectangle we actually need to render, for which we also have to calculate the correct texture coordinates. For applications with anti-aliasing enabled we cannot render directly into the textures, because render-target textures do not support multi-sampling. Since DirectX 9.0b it is possible to render to a multi-sampled surface like the back buffer and then copy it to a render-target texture.
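The fading schedule described at the beginning of this point can be expressed in a small helper; this is an assumption about one possible implementation, not code from the article.

#include <windows.h>

// Sketch (assumption): derive material alpha values for the alpha-blending
// optimization. The start mesh stays opaque while the target fades in during
// the first half; in the second half the start mesh fades out.
struct MORPHALPHA
{
    FLOAT fSource;       // alpha of the start mesh
    FLOAT fTarget;       // alpha of the target mesh
    BOOL  bSourceFirst;  // render the opaque start mesh first?
};

MORPHALPHA ComputeMorphAlpha( FLOAT fBlendFactor )
{
    MORPHALPHA alpha;
    if( fBlendFactor <= 0.5f )
    {
        alpha.fSource      = 1.0f;                          // start mesh opaque
        alpha.fTarget      = fBlendFactor * 2.0f;           // target fades in
        alpha.bSourceFirst = TRUE;
    }
    else
    {
        alpha.fSource      = ( 1.0f - fBlendFactor ) * 2.0f; // start fades out
        alpha.fTarget      = 1.0f;                            // target opaque
        alpha.bSourceFirst = FALSE;
    }
    return alpha;
}

The resulting alpha values would be written into the material (or a shader constant) of the respective mesh before drawing, and the mesh with an alpha of 1 is drawn first.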
2. USING STEP OBJECTS

When you are morphing between objects with hard edges, the effect itself may look very angular, too. If you want a softer or more complex morphing, step objects can be used: instead of interpolating directly between source and target mesh, we morph several times in a row (a small sketch for chaining the stages follows after point 4). With two step objects the chain looks like this:

Start Object ⇔ First Step
First Step ⇔ Second Step
Second Step ⇔ Target Object

These are the object pairs we have to morph between and project onto each other. If you want a softer effect, the step meshes should be modeled with a round shape; for example you or the artist could create the same sphere for both steps and edit it into shapes that are similar to the source and target object but softer.

3. TESSELLATING SOURCE AND TARGET MESH

To improve the accuracy of the mesh projection we can tessellate the source and target meshes before they are projected. Unfortunately this results in many more vertices, but if vertex processing is not the bottleneck you can afford it. It helps for objects with relatively large faces that are projected onto different faces of the other mesh. For the step objects this is not needed, because they should already be optimized for their task; source and target mesh, however, are usually optimized for being rendered separately at the best quality-performance ratio and not for morphing.

4. MAPPING EFFECT TEXTURES

In the mind of the viewer the two morphing objects become one unit. To amplify this feeling we can give both morphing objects a single texture or the same set of textures, mapped the same way. The materials of the start object then seem to melt into the new ones, so they appear as one unit as well. The effect textures have to be blended in at the beginning and out at the end. One possibility to get the look of the T-1000 from “Terminator 2” is an environmental cube map that is reflected by the objects: in the first quarter the original materials of the start mesh are faded out, so for the middle half of the effect we only see the reflections, and in the last quarter the target materials are faded in. Another way is to use one or more “lightning” textures, which are mapped spherically and animated or rotated over time to get an electricity effect. This could also be enhanced by a particle effect.

(Screenshot: morphing effect extended by two step objects, alpha blending, reflection cube mapping and blooming)
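For the step-object chain of point 2, a global morph factor has to be split into the currently active pair and its local blend factor; the following helper is a sketch of one possible mapping, and its names are illustrative only.

#include <windows.h>

// Sketch (assumption): map a global morph factor onto a chain of morphing
// pairs when step objects are used. With nStages pairs (e.g. three pairs for
// two step objects), the function returns the active pair and its local factor.
void GetMorphStage( FLOAT fGlobalFactor, UINT nStages,
    UINT* pnStage, FLOAT* pfLocalFactor )
{
    // clamp the global factor to [0, 1]
    if( fGlobalFactor < 0.0f ) fGlobalFactor = 0.0f;
    if( fGlobalFactor > 1.0f ) fGlobalFactor = 1.0f;

    // scale to the number of stages and split into stage index and local factor
    FLOAT fScaled = fGlobalFactor * (FLOAT) nStages;
    UINT  nStage  = (UINT) fScaled;
    if( nStage >= nStages )          // keep the last stage at factor 1
        nStage = nStages - 1;

    *pnStage       = nStage;
    *pfLocalFactor = fScaled - (FLOAT) nStage;
}

With two step objects nStages would be 3, matching the chain shown in point 2.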
5. USING PARTICLE SYSTEMS

If you want morphing for a fantasy-styled game, in which technical-looking effects like reflections are out of place, particles are suitable to create a more magical effect. You can move some stars or flares around the objects, or a demon rising from the ground could be surrounded by fire. There are a lot of particle effects to imagine, and many of them can enhance the morphing of two objects. A further benefit of particles is that they can cover possible artifacts of the morphing effect.

6. BLOOMING THE SCENE

Overexposure and blooming make the lighting of objects more realistic and help to cover artifacts, too. You can use them to let an object appear to heat up while morphing or to intensify the specular highlights of reflecting surfaces.

7. INTERPOLATING OTHER DATA

Beside the position vector we can blend other vertex data like the normal or tangent vector, too. This is important if you want lighting or reflections to stay correct while morphing. Be careful with such direction vectors: because of the interpolation they lose their unit length. If you need them for calculations (e.g. lighting, reflection or the transformation to tangent space), you have to normalize these vectors after interpolation (see the sketch after point 8).

8. LOADING PRE-PROCESSED OBJECTS

The pre-processing of the objects costs so much time that it cannot be done between the rendering of two frames, and for objects with thousands of faces it also increases waiting times noticeably. So it is a good idea to pre-process an object combination once and store the result in a file; the application that wants to morph these objects can then simply load it. Like many other pre-computations this saves a lot of time, which players would otherwise experience as level loading time.
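For point 7, the renormalization can be illustrated CPU-side with D3DX (in the vertex shader the same can be achieved with a dp3/rsq/mul sequence); the function below is only a sketch and assumes the same relative-destination scheme as for positions.

#include <d3dx9.h>

// Sketch (assumption): interpolate a direction vector such as a normal the
// same way as the position and renormalize it afterwards, because the
// interpolation does not preserve unit length.
D3DXVECTOR3 BlendDirection( const D3DXVECTOR3& vct3Source,
    const D3DXVECTOR3& vct3RelativeDestination, FLOAT fBlendFactor )
{
    // same formula as for the position: source + relative destination * factor
    D3DXVECTOR3 vct3Result = vct3Source + vct3RelativeDestination * fBlendFactor;

    // restore unit length for lighting, reflection or tangent space calculations
    D3DXVec3Normalize( &vct3Result, &vct3Result );
    return vct3Result;
}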
CONCLUSION

If you need to transform one object into another, this morphing algorithm is a nice piece of eye candy to do it with. Unfortunately there are also some disadvantages:

Pro:
- Real-time rendering
- 3D meshes do not have to be changed
- Objects can have completely different attributes
- A lot of tricks to make the effect better

Contra:
- Pre-processing takes some time, but can be removed from the final application if the source and target mesh combination is known beforehand
- Vertex projection has to be optimized manually for the best possible result
- Not flexible enough to work with any kind of object animation (skinned meshes should be a much smaller problem than objects whose subsets are transformed completely independently)

You can look at the CD to find the complete implementation of a demo that presents the morphing effect. There is also a library, which can easily be included into any Direct3D 9 project, to pre-process meshes. If you want to know more about LightBrain or “Rise of the Hero”, visit the web site www.lightbrain.de.

Annotation 2013: “Rise of the Hero” has never been completed and LightBrain was shut down roughly one year later.