ShaderX³: Geometry Manipulation - Morphing between two different objects
 

Ronny Burkersroda

This article from 2004 was published in the book "ShaderX 3".

THEORY

In the early nineties a movie impressed audiences by showing computer-generated effects that had never been seen before. "Terminator 2 - Judgment Day" can be called the beginning of photo-realistic computer graphics in movies. The most important effects were the various transformations of the T-1000, the enemy machine of the story. Those transformations were made with a technique called "morphing". Morphing can be done in image space, where one two-dimensional image or video source is transformed into another. For "Terminator 2" it was done three-dimensionally, which means that one 3D mesh is transformed into another. Neither version was meant to be used in real time, but that is now feasible with today's graphics hardware. We will only look at an implementation of the 3D version.

Vertex tweening is an easy way to move a vertex of a mesh independently from the others. Every vertex stores a relative or absolute destination position vector beside the source one. With a dynamic blending factor, which is equal for all vertices at a time, you can interpolate between source and destination position. For a relative destination the formula looks like this:

PositionOutput = PositionSource + PositionDestination ⋅ Factor

With an absolute destination position we first need to calculate the relative one:

PositionOutput = PositionSource + ( PositionDestination - PositionSource ) ⋅ Factor

The positions are 3D vectors with x, y and z components, and the blending factor is a scalar value. In this article we will only use relative destination vectors, because that saves rendering time and code, as you can see by comparing the two formulas above.
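The relative-destination formula translates directly into code. The following is only a minimal sketch; the names Vec3 and TweenVertex are illustrative and not taken from the demo on the CD:

struct Vec3 { float x, y, z; };

// sketch: PositionOutput = PositionSource + PositionDestination * Factor,
// where the destination vector is stored relative to the source
Vec3 TweenVertex( const Vec3& source, const Vec3& relativeDestination,
                  float factor )
{
    Vec3 output;
    output.x = source.x + relativeDestination.x * factor;
    output.y = source.y + relativeDestination.y * factor;
    output.z = source.z + relativeDestination.z * factor;
    return output;
}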
Using only this technique is very limiting, because start and target mesh must be identical apart from the vertex positions: the number of vertices has to match, the faces and attributes are the same, and start and target mesh share materials, textures, shaders and further states. So vertex tweening on its own is only useful to animate objects in ways where mesh skinning fails.

To morph between two different objects we can still use vertex tweening, but we transform and render both meshes at once. Beforehand the destination positions of the mesh vertices have to be calculated. This can be done by projecting the vertices of the first mesh onto the second one and vice versa. We use the vertex position as the origin for a ray-mesh intersection test, which is a function that checks if and where a ray intersects at least one face of a mesh. If there are multiple intersections, the nearest one is a good vector to use as destination position. Without any intersection the source position should be used as destination; in this case the relative destination is the zero vector.

For the intersection ray we also need a direction vector beside the origin. This can be the normal of a vertex, or we calculate a vector from the origin to the center of the mesh or to a user-defined center. We should also invert the direction vector to get all possible intersections. That is not needed if the origin is situated outside both meshes, since then we do not have to use the vertex position as origin. For example, it is possible to use the bounding sphere of a mesh:

Direction = Center - Position
Origin = -Direction ⋅ Radius + Center

This is very useful if you have complex objects like helicopters with a cockpit interior. Using the bounding sphere projects every vertex of one mesh onto the hull of the other one. Otherwise it could happen that some hull vertices are projected onto faces of the interior. Choosing the best values always depends on the kind of mesh design. After the destination vector is computed, we store it in the vertex data.

Now we know where a vertex has to be moved to get an object that has structures like another one. It is possible to tessellate objects to increase the accuracy of those structures, but you do not have to, because we want to use the good performance of optimized low-polygon objects. Other tricks to improve quality are described later in this article.

After the preprocessing is done, we are able to render the objects with the morphing effect. This could be done by the application, but we concentrate on a vertex shader, because that improves performance on DirectX-8-compatible graphics cards and works in software mode for older hardware.

To render a morphing mesh we have to set the current interpolation factor as a shader constant. For the target mesh the factor has to be inverted by subtracting it from one; in this way both meshes are always in the same state. The shader then uses the factor to interpolate between source and destination position. Other vertex processing like lighting and texture coordinate transformation can be done as usual. It is possible to render both objects every frame, or we render only the start mesh up to the half and then only the target one. This can look strange or ugly, but there are optimizations that help (see the Optimizations section).

Screenshot from LightBrain's game prototype "Rise of the Hero", implementing the morphing effect, which was originally created for that project. ©2004 LightBrain GmbH, Hamburg, Germany
IMPLEMENTATION

I am using DirectX 9 to show an implementation of the morphing algorithm. The D3DX extensions will help me to save time, so the basic functions of 3D graphics programming will not be implemented here. For experienced OpenGL programmers it should be no problem to convert the code or to write an own program on the basis of the algorithm.

For the objects we are able to use extension meshes, which are accessed through the ID3DXMesh interface. A mesh stores vertices, an index list of triangle faces between them and a table of attributes for the triangles. The attributes are identification numbers, which divide the mesh into different subsets that can be rendered with various states like materials or textures.

It is possible to load a mesh with the D3DXLoadMesh[…]FromX[…] functions or to define an own mesh by locking vertex, index and attribute buffer and setting the data. For now we go the first way and load the meshes from common DirectX .x files, which other programs are able to read and write. Beside the meshes we get an array of materials, including texture file names, for all subsets. A subset is rendered by setting material and textures first and then calling ID3DXMesh::DrawSubset( nSubset ), where nSubset is the number of the subset.
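As a hedged sketch of this loading path, the following shows D3DXLoadMeshFromX() and per-subset rendering. The device pointer pd3ddvcDevice and the file name "object.x" are assumptions, and error handling and texture creation are omitted:

// sketch: load a mesh and its material table from an .x file
ID3DXMesh*   pmshObject    = NULL;
ID3DXBuffer* pbufMaterials = NULL;
DWORD        nMaterials    = 0;

D3DXLoadMeshFromX( "object.x", D3DXMESH_MANAGED, pd3ddvcDevice,
    NULL, &pbufMaterials, NULL, &nMaterials, &pmshObject );

D3DXMATERIAL* pMaterials
    = (D3DXMATERIAL*) pbufMaterials->GetBufferPointer();

// render every subset with its own material
for( DWORD nSubset = 0; nSubset < nMaterials; ++nSubset )
{
    pd3ddvcDevice->SetMaterial( &pMaterials[ nSubset ].MatD3D );
    // ... create and set the texture named by pTextureFilename ...
    pmshObject->DrawSubset( nSubset );
}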
To preprocess the meshes we first have to enhance the vertex data, so that the relative destination position can be stored in it. There are two formats in Direct3D to define the structure of a vertex. Flexible vertex formats are used for fixed-function pipeline processing, which transforms the vertex data with a fixed set of functions whose parameters can be set through Direct3D. Because the possibilities of those functions are limited, vertex shaders, in which the processing can be programmed, had been introduced. For vertex shaders there is a much more flexible format: the vertex declaration, which allows us to include all data that is needed. We are using vertex shaders, so we will also use such declarations. At first they seem more complicated, but they enable us to stay compatible with other effects.

A declaration is defined by an array of D3DVERTEXELEMENT9 elements, and the last one must contain the data of the D3DDECL_END() macro. Every other element defines one data element of a vertex by setting the offset in the vertex data (in bytes), the type of the data (e.g. D3DDECLTYPE_FLOAT3 for a 3D vector), a method for hardware tessellation, the usage (e.g. position or texture coordinate) and the usage index, in case more than one element of the same usage is stored. Because these declarations are also used to pass vertex data on to the pipeline, a stream number can be specified, too. In this way multiple vertex buffers can be used to render one set of primitives. But our meshes include only one vertex buffer, so the stream number should be set to zero.

A common 3D vertex includes a position and a normal vector and one 2D texture coordinate. The vertex declaration for such a vertex looks like this:

D3DVERTEXELEMENT9 pStandardMeshDeclaration[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_NORMAL, 0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};

At the beginning of the vertex data (byte 0) there is a 3D vector for the standard vertex position. It is followed by the 3D vector of the vertex normal, positioned at offset 12, because the position vector consists of 3 FLOAT values: a FLOAT has a size of 4 bytes (= sizeof( FLOAT )), and multiplying that by 3 elements results in 12 bytes. Because the normal has the same size, the offset of the texture coordinate is 24 (= 2 vectors * 12 bytes). The texture coordinate is only a 2D vector, so the whole vertex has a size of 32 bytes (= 2 vectors * ( 3 floats * 4 bytes ) + 1 vector * ( 2 floats * 4 bytes )). This is important, because we want to add an element for the destination position:

D3DVERTEXELEMENT9 pMorphingMeshDeclaration[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_NORMAL, 0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_TEXCOORD, 0 },
    { 0, 32, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT,
      D3DDECLUSAGE_POSITION, 1 },
    D3DDECL_END()
};

Now we have added a second position vector, which must have a higher usage index than the standard one. The whole enhancement can be done automatically with the following steps: We use D3DXGetDeclVertexSize() to retrieve the vertex size of the original declaration, and we walk through the declaration to find the highest usage index of a position element. Next, the destination position element for morphing can be written to the D3DDECL_END() entry; D3DXGetDeclLength() returns the number of this entry increased by one. As the usage index we take the highest found index plus one. The last thing is to write D3DDECL_END() at the new end. If pStandardMeshDeclaration was the original declaration, it has now been enhanced to pMorphingMeshDeclaration. You can see the routine in Listing 1.
D3DVERTEXELEMENT9 pMeshDeclaration[ MAX_FVF_DECL_SIZE ];
DWORD             nPosition   = 0;
DWORD             nUsageIndex = 0;
DWORD             nOffset;

...

// process all declaration elements until the end is reached
while( pMeshDeclaration[ nPosition ].Stream != 0xFF )
{
    // check for a higher index of a position usage
    if( ( pMeshDeclaration[ nPosition ].Usage == D3DDECLUSAGE_POSITION )
        && ( pMeshDeclaration[ nPosition ].UsageIndex >= nUsageIndex ) )
        nUsageIndex = pMeshDeclaration[ nPosition ].UsageIndex + 1;

    // increase position in declaration array
    ++nPosition;
}

// get element number for the new entry
nPosition = D3DXGetDeclLength( pMeshDeclaration ) - 1;
nOffset   = D3DXGetDeclVertexSize( pMeshDeclaration, 0 );

// move end element
memmove( &pMeshDeclaration[ nPosition + 1 ],
    &pMeshDeclaration[ nPosition ], sizeof( D3DVERTEXELEMENT9 ) );

// add new position element
pMeshDeclaration[ nPosition ].Stream     = 0;
pMeshDeclaration[ nPosition ].Offset     = nOffset;
pMeshDeclaration[ nPosition ].Type       = D3DDECLTYPE_FLOAT3;
pMeshDeclaration[ nPosition ].Method     = D3DDECLMETHOD_DEFAULT;
pMeshDeclaration[ nPosition ].Usage      = D3DDECLUSAGE_POSITION;
pMeshDeclaration[ nPosition ].UsageIndex = nUsageIndex;

Listing 1. Enhancing the vertex declaration for a morphing mesh.

The next step is to clone the start mesh, using the new declaration as parameter. ID3DXMesh::CloneMesh() creates a new mesh object with the same data as the original one, but including space for the destination position. If you do not want to use the original mesh any longer (e.g. for rendering it without morphing), it can be released.

The vertex buffer of the cloned mesh must be locked now, so we can calculate its destination positions. Every vertex has to be projected onto the target mesh. For this there is an extension function of Direct3D: D3DXIntersect() checks where a ray intersects an extension mesh. We can use any ray origin and direction we want and will get all possible projection points. As mentioned, it is most useful to take the nearest one. The source position has to be subtracted to get the relative destination vector, which can then be stored in the vertex data (see Listing 2).

Fortunately, reading and writing vertex data is not as hard as it seems. Vertex declarations make it easy to get the offset of a specific vertex element. To retrieve the source position we look for an element of type D3DDECLTYPE_FLOAT3, usage D3DDECLUSAGE_POSITION and usage index 0; to get the normal, the usage has to be D3DDECLUSAGE_NORMAL. Then we take the offset to read the 3D vector from the vertex data.
Accessing a specific vertex is possible by doing the following:

VOID* pVertex = (BYTE*) pData + nVertexSize * nVertex;

pData is the start address of the vertex buffer data, nVertexSize is the size of one vertex, which can be calculated by calling D3DXGetDeclVertexSize(), and nVertex is the number of the vertex that should be accessed. pVertex stores the address of this vertex and can be used to read and write the vectors:

D3DXVECTOR3 vct3SourcePosition
    = *(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetSourcePosition );
...
*(D3DXVECTOR3*)( (BYTE*) pVertex + nOffsetDestinationPosition )
    = vct3DestinationPosition;

The offsets, which we got from the vertex declaration, are stored in nOffsetSourcePosition for the source and nOffsetDestinationPosition for the destination position.

ID3DXMesh*  pmshDestination;  // pointer to destination mesh interface
D3DXVECTOR3 vct3Source;       // source position (vertex input)
D3DXVECTOR3 vct3Destination;  // destination position (vertex output)
D3DXVECTOR3 vct3Direction;    // ray direction vector
D3DXVECTOR3 vct3Center;       // bounding sphere center (parameter)
FLOAT       fRadius;          // bounding sphere radius (parameter)
FLOAT       fDistance;        // distance from sphere to mesh
BOOL        bIntersection;    // intersection flag

...

// calculate direction from vertex position to sphere center
D3DXVECTOR3 vct3ToCenter = vct3Center - vct3Source;
D3DXVec3Normalize( &vct3Direction, &vct3ToCenter );

// compute intersection with the destination mesh, casting from a point
// on the bounding sphere in direction of the center
D3DXVECTOR3 vct3Origin = vct3Center - vct3Direction * fRadius;
D3DXIntersect( pmshDestination, &vct3Origin, &vct3Direction,
    &bIntersection, NULL, NULL, NULL, &fDistance, NULL, NULL );

// check for intersection
if( bIntersection )
{
    // calculate projected vector and subtract source position
    vct3Destination = vct3Center + vct3Direction *
        ( fDistance - fRadius ) - vct3Source;
}
else
{
    // set relative destination position to zero
    vct3Destination = D3DXVECTOR3( 0.0f, 0.0f, 0.0f );
}

Listing 2. Calculating the destination position for a vertex.
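Putting the pieces together, the preprocessing loop could look like the following sketch. ComputeDestination() is a hypothetical wrapper around the code of Listing 2, and the offsets are the ones retrieved from the vertex declaration as described above:

// sketch: lock the cloned mesh and store a relative destination per vertex
BYTE* pData       = NULL;
DWORD nVertexSize = D3DXGetDeclVertexSize( pMeshDeclaration, 0 );

pmshSource->LockVertexBuffer( 0, (VOID**) &pData );

for( DWORD nVertex = 0; nVertex < pmshSource->GetNumVertices(); ++nVertex )
{
    BYTE* pVertex = pData + nVertexSize * nVertex;

    // read the source position from the vertex data
    D3DXVECTOR3 vct3Source
        = *(D3DXVECTOR3*)( pVertex + nOffsetSourcePosition );

    // project it onto the target mesh (hypothetical wrapper of Listing 2)
    D3DXVECTOR3 vct3Destination
        = ComputeDestination( pmshDestination, vct3Source );

    // write the relative destination position back
    *(D3DXVECTOR3*)( pVertex + nOffsetDestinationPosition )
        = vct3Destination;
}

pmshSource->UnlockVertexBuffer();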
After storing the destination position vector of each vertex into the buffer of the start mesh, the same has to be done with the target mesh, which is projected onto the start one. Then the preprocessing is finished.

Now we need the vertex shader, which transforms a vertex of a morphing mesh between source and destination position. At the beginning of the shader we declare the version and the inputs, which are loaded from the data of the vertex using its declaration:

; declaration of required vertex shader version
vs_1_1

; declaration of the input registers
dcl_position0 v0 ; source position
dcl_position1 v1 ; destination position
...

At this point we are able to calculate all output values but oPos in any way we want. The position output is an interpolated vector between source and destination position. If the blend factor is stored in the shader constant c0, the code can look like this:

...
; transform and project vertex to screen
mul r0.xyz, v1.xyz, c0.x  ; blend destination vector
add r0.xyz, r0.xyz, v0.xyz ; add source position
mov r0.w, v0.w             ; copy w component
...

First the relative destination vector is multiplied with the interpolation factor. Next the source vector is added to the result. After that, r0.xyz contains a vector that lies between source and destination position, if c0.x is a value between 0 and 1. At last we have to copy the unchanged w component of the source position; normally its value is 1.0f. Now r0 can be processed as if it contained the untransformed vertex position (e.g. for the transformation from object to screen space).

The rendering code of your application has to set the constant for the blend factor, which is c0.x in the shader above. This can be done with the following call:

IDirect3DDevice9::SetVertexShaderConstantF( 0,
    (const FLOAT*) &D3DXVECTOR4( fBlendFactor, 0.0f, 0.0f, 0.0f ), 1 );

Remember that you have to invert the blend factor for the target mesh by calculating 1.0f - fBlendFactor. Now you are able to render the meshes the way you want: up to the half blend factor you can draw only the start mesh and then only the target one, or you render both at the same time with activated z-buffering. For the second way you should draw the target mesh first and the source one second up to the half blend value, and afterwards reversed, if your objects have semi-transparent faces or texels. Either way alone will not look good for most kinds of objects.
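A hedged sketch of rendering both meshes in the same frame could look like this. The device pointer pd3ddvcDevice, the shader pvshMorphing and the subset counts are assumptions; setting the vertex declaration and the transformation constants is omitted:

// sketch: draw start and target mesh with mirrored blend factors
pd3ddvcDevice->SetVertexShader( pvshMorphing );

// start mesh: blend factor as it is
D3DXVECTOR4 vct4Factor( fBlendFactor, 0.0f, 0.0f, 0.0f );
pd3ddvcDevice->SetVertexShaderConstantF( 0, (const FLOAT*) &vct4Factor, 1 );
for( DWORD nSubset = 0; nSubset < nSubsetsSource; ++nSubset )
    pmshSource->DrawSubset( nSubset );

// target mesh: inverted factor, so both meshes are always in the same state
vct4Factor.x = 1.0f - fBlendFactor;
pd3ddvcDevice->SetVertexShaderConstantF( 0, (const FLOAT*) &vct4Factor, 1 );
for( DWORD nSubset = 0; nSubset < nSubsetsTarget; ++nSubset )
    pmshTarget->DrawSubset( nSubset );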
OPTIMIZATIONS

There are a lot of things we can do to get the best-looking morphing effect. Some of them are explained here.

1. BLENDING THE ALPHA VALUE

The most powerful extension is easy to enable, but difficult to make look good: we interpolate the alpha value of the object materials, too. For that, both objects have to be rendered at the same time. Instead of using the same blend factor as for morphing, we let the alpha blend value of one mesh be 1; otherwise both objects would become a little transparent while they are morphing. When the morphing starts, the start mesh stays opaque until the half, while the target is fading in. At the half the blend value of both objects is one, and afterwards the start mesh fades out. The mesh that has the blend value of 1 has to be rendered first, so the semi-transparent pixels of the second one are mixed with the opaque ones of the first.

Because of the possibility of intersecting triangles, which can be semi-transparent, there are some cases in which the morphing will still look bad. This can happen if start and target mesh have semi-transparent materials originally. One way to improve this is to blend the semi-transparent materials away, which works well in combination with reflection mapping (see 4.). Here we have to blend the transparencies of the start mesh away first, then fade the target in, next fade the start mesh out and at last blend the transparencies of the target mesh in. Then there are no overlapping semi-transparent pixels between source and target mesh.

If you do not want semi-transparent materials to become opaque, there is a second way: we use two render-target textures with alpha channel and render each object to one of them without alpha fading. Before that, we have to clear the alpha values of the textures to 0. Then the two textures are rendered on a quad to the screen, blending between both with the morphing blend factor. Here you should pay attention that the texels of the textures are mapped correctly to the pixels of the screen. If you also need the z-values of the morphing meshes (e.g. when rendering other objects), the application can write them to the z-buffer while rendering to the textures. To do that we have to use textures that have at least the dimensions of the back buffer. But we do not need to draw a quad as large as the screen: transforming the bounding boxes of the meshes to screen space gives us the rectangle we need to render. For that we also have to calculate the correct texture coordinates.

For applications with enabled anti-aliasing we cannot render directly to the textures, because those do not support multi-sampling. Since DirectX 9.0b it is possible to render to a multi-sampled surface like the back buffer and then copy it to a render-target texture.
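The fading curve described above can be written down as two small helpers. This is only a sketch of the described behavior, not code from the demo:

// start mesh: opaque until the half, then fading out (sketch)
FLOAT StartMeshAlpha( FLOAT fBlendFactor )
{
    return fBlendFactor <= 0.5f ? 1.0f : 2.0f * ( 1.0f - fBlendFactor );
}

// target mesh: fading in until the half, then opaque (sketch)
FLOAT TargetMeshAlpha( FLOAT fBlendFactor )
{
    return fBlendFactor >= 0.5f ? 1.0f : 2.0f * fBlendFactor;
}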
2. USING STEP OBJECTS

When you are morphing between objects with hard edges, the effect may look very angular, too. If you want a softer or more complex morphing, step objects can be used. Instead of interpolating directly between source and target mesh, we morph multiple times in a row. With two step objects it looks like this (a code sketch for selecting the active pair follows after section 4):

Start Object ⇔ First Step
First Step ⇔ Second Step
Second Step ⇔ Target Object

These are the objects between which we have to morph and which are projected onto each other. If you want a softer effect, the step meshes should be modeled with a round shape. Maybe you or the artist creates the same sphere for both steps and edits them into shapes similar to the source and target object, but softer.

3. TESSELLATING SOURCE AND TARGET MESH

To improve the accuracy of the mesh projection we can tessellate the source and target morphing meshes before they are projected. Unfortunately this results in many more vertices, but if vertex processing is not the bottleneck, you can afford it. This is good for objects that have relatively large faces, which are projected onto different faces of the other mesh. For the step objects we do not need it, because they should already be optimized for their task. But source and target mesh are often optimized for being rendered separately at the best quality-performance relation, and not for morphing.

4. MAPPING EFFECT TEXTURES

In the mind of a viewer our two morphing objects become one unit. To amplify this feeling we can give the morphing objects a single texture or the same set of textures, mapped in the same way. It then seems as if the materials of the start object melt to become the new ones, so they too appear as one unit. We have to blend the effect textures in at the beginning and out at the end. A possibility to get the look of the T-1000 from "Terminator 2" is to use an environmental cube map, which is reflected by the objects. In the first quarter the original materials of the start mesh are faded out, so for half of the effect we see only the reflections, and in the last quarter the target materials are faded in.

Another way is to use one or more "lightning" textures, which are mapped spherically and animated or rotated over time to get an electricity effect. This could also be improved by a particle effect.

Morphing effect extended by two step objects, alpha blending, reflection cube mapping and blooming (screenshot).
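Here is the pair-selection sketch announced in section 2. Given the number of consecutive morphing pairs in the chain and a global factor between 0 and 1, it returns the active pair and the local blend factor for it; all names are illustrative:

// sketch: map a global factor onto a chain of morphing pairs
void SelectMorphingPair( FLOAT fGlobalFactor, DWORD nPairs,
    DWORD* pnPair, FLOAT* pfLocalFactor )
{
    // scale the global factor to the number of pairs
    FLOAT fScaled = fGlobalFactor * (FLOAT) nPairs;
    DWORD nPair   = (DWORD) fScaled;

    // clamp, so a factor of exactly 1.0 stays in the last pair
    if( nPair >= nPairs )
        nPair = nPairs - 1;

    *pnPair        = nPair;
    *pfLocalFactor = fScaled - (FLOAT) nPair;
}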
5. USING PARTICLE SYSTEMS

If you want morphing for a fantasy-styled game, in which you do not want technical-looking effects like reflections, then particles are suitable to create a more magical effect. You can move some stars or flares around the objects, or a demon that rises from the ground could be surrounded by fire. There are a lot of particle effects to imagine, and many of them can enhance the morphing of two objects. A benefit of particles is that they are able to cover possible artifacts of a morphing effect.

6. BLOOMING THE SCENE

Overexposure and blooming make the lighting of objects more realistic and help to cover artifacts, too. You can use them to let an object become hot while morphing or to increase the specular highlights of reflecting surfaces.

7. INTERPOLATING OTHER DATA

Beside the position vector we are able to blend other vertex data, like the normal or tangent vector, too. This is important if you want the lighting or reflections to change correctly while morphing. Be careful with such direction vectors: because of the interpolation they lose their unit length. If you need them for calculations (e.g. lighting, reflection or the transformation to tangent space), you have to normalize these vectors after the interpolation (a short sketch follows after section 8).

8. LOADING PRE-PROCESSED OBJECTS

The pre-processing of the objects costs so much time that it cannot be done between the renderings of two frames. For objects with thousands of faces it also increases the waiting time noticeably. So it is a good idea to pre-process an object combination once and then store it to a file. The morphing objects can then be loaded from it by the application that wants to morph them. Like for many other things, this saves a lot of time, which for players appears to be part of the loading time of a level.
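As announced in section 7, renormalizing a blended direction vector with D3DX could look like the following sketch:

// sketch: blend a direction vector and restore its unit length
D3DXVECTOR3 BlendDirection( const D3DXVECTOR3& vct3Source,
    const D3DXVECTOR3& vct3RelativeDestination, FLOAT fFactor )
{
    D3DXVECTOR3 vct3Result = vct3Source + vct3RelativeDestination * fFactor;
    D3DXVec3Normalize( &vct3Result, &vct3Result );
    return vct3Result;
}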
CONCLUSION

If you need to transform one object into another, this morphing algorithm is a nice piece of eye candy to do it with. Unfortunately there are also some disadvantages:

Pro:
- Real-time rendering
- 3D meshes do not have to be changed
- Objects can have completely different attributes
- A lot of tricks to make the effect better

Contra:
- Pre-processing takes some time, but can be removed from the final application, if the source and target mesh combination is known beforehand
- Vertex projection has to be optimized manually for the best possible result
- Not flexible enough to work with any kind of object animation (skinned meshes should be a much smaller problem than objects whose subsets are transformed completely independently)

You can look at the CD to find the complete implementation of a demo that presents the morphing effect. There is also a library, which can easily be included into any Direct3D 9 project, to pre-process meshes.

If you want to know more about LightBrain or "Rise of the Hero", then visit the web site www.lightbrain.de.

Annotation 2013: "Rise of the Hero" has never been completed, and LightBrain was shut down roughly one year later.

©2004 Ronny Burkersroda