HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics

3D – The Basics

  • 3D–The BasicsUse of 3DDisplaying and Constructing 3D ModelsExamining 3D Software Tools
Use of 3D

In general, there are fundamental differences between movie- and game-generated assets. A primary concern is polygon count and efficiency. Currently the only way to model for video games is with polygons, which can require a denser mesh to emulate smoother or more natural-looking models such as humans and animals. NURBS models can be created, but they need to be converted and optimized to polygons for use in the game. In pre-rendered movies, any technique is allowed to create your models.

Movie models can be generated with up to millions of polygons, using several different techniques at once. A model consisting of NURBS and polygons as well as subdivision surfaces is normal and completely acceptable. Gaming models have to be more efficient in their use of modeled detail to maintain a manageable data set to render. The reasoning here is that an efficient, streamlined environment composed of lower-poly assets will render more smoothly and give better frame-to-frame renders during gameplay. In essence, your gaming system is a renderer that constantly has the task of rendering each frame of gameplay at 30 frames per second; some games hit the magic number of 60 frames a second. If this rate drops during the game, the result is a poor experience and hampered gameplay. This applies to PC games as well, although they will typically have more processing power to run higher-resolution models.

With constant innovation and improvement in next-gen consoles and technology, more advanced techniques and processes give us more detailed-looking models at a lower cost. One of these advances is the use of normal mapping. A normal map acts like a bump map, in that it adds surface detail without adding polygons. Normal maps go a step further because they actually replace the surface normal with new multi-channel data representing an X, Y, Z coordinate system. What this means is that we can create a high-resolution model of 2 or 3 million polygons and bake the high-resolution detail down to a normal map that retains the component-space data of that high-resolution model. It is then a process of creating a streamlined model that emulates the general proportions of the high-density model, but at a much more efficient poly count of 2,500, for example. Once the normal map data is applied to this low-res rendition of our high-res monster, the model immediately looks more complex geometrically, but at an affordable rendering cost.

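To make the normal-map idea concrete, here is a minimal sketch in plain Python (no particular engine is assumed, and the texel values and light direction are invented for illustration) showing how an RGB texel can be decoded back into a tangent-space normal and used in a simple Lambert lighting calculation:

import math

def decode_normal(r, g, b):
    """Map 8-bit RGB channels (0-255) back to a tangent-space normal in [-1, 1]."""
    n = (r / 255.0 * 2.0 - 1.0,
         g / 255.0 * 2.0 - 1.0,
         b / 255.0 * 2.0 - 1.0)
    length = math.sqrt(sum(c * c for c in n)) or 1.0
    return tuple(c / length for c in n)          # re-normalize to unit length

def lambert(normal, light_dir):
    """Simple diffuse term: how directly the decoded normal faces the light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# A flat texel (the pale blue typical of tangent-space normal maps) versus a tilted one.
flat  = decode_normal(128, 128, 255)
bumpy = decode_normal(200, 100, 230)
light = (0.0, 0.0, 1.0)   # light pointing straight at the surface (illustrative)

print(lambert(flat, light), lambert(bumpy, light))

The point of the sketch is only that the extra surface detail comes from per-texel data rather than extra polygons: the low-poly model's geometry never changes, but its lighting response does.
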
Movie productions also use normal mapping techniques, but the asset they use the normal map on is typically a more detailed model than the one used in games.

Another difference between movie and game modeling is the fact that not everything needs to be built for a movie or pre-rendered model. It is common practice for film to only build those elements in the scene that you can actually see on the screen. In a game environment, it is necessary to make most things viewable from 360 degrees. Can you imagine walking around your favorite game level and not seeing the back side of the 3D car you just walked up to? Or not being able to see the back of the character you just spoke to? It wouldn't keep you immersed in the game very long. In a movie, if the camera never travels to the rear of that set or never moves around the corner, it doesn't need to be built. The same is true for some aspects of the gaming world, like the far-off detail of the mountains, or implied buildings that you as a player can't actually get to in the game.

A common practice across the two disciplines is that of creating LOD models, or Level Of Detail models. In a game, when a character carrying a machine gun walks up to you from the far end of a long hallway, chances are it is not the same model for the entire journey of the character or gun. When it is far away, a lower-resolution model with lower-resolution textures is used. The reasoning is that the details cannot be discerned at that distance, so there is no need to use CPU time to render those higher-resolution elements. As the character approaches, there may be two or three changes that swap the model and textures out with higher and higher resolution assets, until it has walked right up to the camera. If done properly, these "swap outs" go unnoticed for the most part.

Movie modeling might use aspects of LODs too: there are close-up models and models built for distance shots. The main difference for film models is that the various LODs rarely have to blend seamlessly. Much of this decision-making lies in the story or action that needs to be conveyed for a given shot; the very next shot may require a completely different set of assets and details that didn't apply to the first. Typically there are three levels of modeling for movie models: Block, Medium and Detailed. Each stage identifies and solves different problems for the production.

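As a small illustration of the swap-out idea described above, here is a hedged sketch in plain Python; the distance thresholds, asset names and polygon budgets are all invented for the example, not taken from any real game:

# Hypothetical LOD table: (maximum distance in metres, asset name, polygon budget).
LOD_TABLE = [
    (10.0,  "soldier_lod0", 15000),       # close-up: full-detail model
    (30.0,  "soldier_lod1",  6000),       # mid-range swap
    (80.0,  "soldier_lod2",  2000),       # far swap
    (float("inf"), "soldier_lod3", 500),  # distant silhouette
]

def pick_lod(distance):
    """Return the first LOD whose distance threshold covers the character."""
    for max_dist, asset, polys in LOD_TABLE:
        if distance <= max_dist:
            return asset, polys
    return LOD_TABLE[-1][1:]

# The character walks down the hallway towards the camera.
for d in (75.0, 40.0, 25.0, 5.0):
    asset, polys = pick_lod(d)
    print(f"{d:5.1f} m -> {asset} ({polys} polygons)")

Real engines add hysteresis and blending so the swap is not visible, but the core decision is this simple distance test.
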
At the Block stage, the overall proportions are identified with a simple low-detail model. This helps to define the silhouette of the model and provides a low-resolution asset useful for animatics or test renders. Medium-level models take the next step by adding further details onto the Block model that help to define the finished look: additions like antennae, guns, rear-view mirrors or other details that do not define the general shape of the model. This stage helps to identify moving parts and areas that may require special attention from a technical artist. Finally there is the Detailed model, which contains all of the detailed parts and pieces on a higher-resolution chassis.

An example utilizing these ideas is a spaceship model that flies past the screen as it speeds towards its destination. Because we only see the one side of the ship, this is the only part that needs to be built. This close fly-by model needs a high amount of detail and geometry to look convincing.

There are really no concerns about efficiency in the movie-created asset: as long as the model can render, it is considered acceptable. For a pre-rendered sequence, render time can be extensive, but typically there are large render farms that can tackle the job. There is also the safety factor that any render anomaly on these models can be fixed in post, whereas the game model must work all the time, at every frame it is rendered in. Other stipulations sometimes burden the game model, such as the requirement that at times the game asset must be "water tight". What this means is that all of the vertices on the model need to be welded or merged; real-time shadows and advanced lighting can be complicated if a model is not sealed at the vertex level, and therefore take longer to compute. (A simple sketch of one way to check this follows below.)

It is a common expression that there is a time and place for everything, and nothing could be more true when discussing modeling for movies or games. There are certainly similarities between the two mediums, and many different approaches to solving the task at hand. As game systems become more and more advanced, these two approaches may become more and more alike. Perhaps one day there may be no distinction in the modeling process between the two.

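As a rough illustration of what "water tight" means in practice, here is a small plain-Python sketch with a made-up cube mesh. It flags a mesh as open if any edge is not shared by exactly two faces, which is one common way such a check can be done; real DCC tools have their own validators.

from collections import Counter

def open_edges(faces):
    """Count how many times each undirected edge is used by the faces.
    In a closed ("water tight") mesh every edge belongs to exactly two faces."""
    edge_use = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edge_use[tuple(sorted((a, b)))] += 1
    return [edge for edge, count in edge_use.items() if count != 2]

# A cube as six quads over eight vertex indices (illustrative data).
cube = [
    (0, 1, 2, 3), (4, 5, 6, 7),
    (0, 1, 5, 4), (2, 3, 7, 6),
    (1, 2, 6, 5), (0, 3, 7, 4),
]

print("water tight" if not open_edges(cube) else "open edges found")
print("open edges found" if open_edges(cube[:-1]) else "water tight")  # drop a face -> the mesh leaks
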
This next part is about 3D Museum and describes how they construct and present a 3D model.

Laser Scanning
The first step in building a three-dimensional (3D) model is to digitize the object. A high-speed, high-accuracy laser scanner (Minolta Vivid 910) is used, which not only samples the model with high precision but also provides rich color information. Due to its light weight, the 3D scanner can travel with them to other collections.

Data Processing
The raw 3D scan data needs to be processed to produce a complete surface model of the fossil. The crucial step is to accurately merge the individual scans into a single mesh. Most of their processing is done in Raindrop Geomagic Studio, but Rapidform has also been used.

Presentation
For research purposes, high-resolution 3D data is kept, but for data exchange via the web they reduce the file size, which guarantees fast and smooth loading of the 3D objects. Rapidform offers a 3D compression and publishing tool using ICF (INUS Compression Format). The two other file formats they provide, Wirefusion (WF) and 3D Compression (3DC), are based on VRML (Virtual Reality Modeling Language). 3DC files do not preserve the vertex colors of VRML files, leaving the fossil images monotone.

Sources:
http://www.siggraph.org/publications/newsletter/volume-41-number-2/modeling-techniques-movies-vs-games
http://en.wikipedia.org/wiki/Video_game
http://www.guardian.co.uk/life-in-3d/gaming-and-3d-technology
http://www.cyberjam.com/3d_interactive_media.html
http://3dmuseum.org/?page_id=241

3D Modelling Techniques

Drafting has come a long way from blueprints into the new world of 3D modelling, where files can be updated almost instantly and sent online through email. CAD designers create computer files with CAD software which can be read by manufacturing machines to produce products. The 3D CAD designer is the one who actually materializes the 3D model, and CAD drafting services offer a wide array of services to the public.

With recent advancements in technology, almost every type of technical drawing is done with the use of computers. Blueprints are still used in the field, and for other reasons, but all the drawings are done on a computer. In the past, if an update needed to be made to the blueprints, the draftsmen would have to either erase or start all over. With CAD, though, the draftsmen simply open the file and make the necessary changes. Another great feature is that the file can be saved to your computer, to some type of external hard drive, or online. Just make sure it's somewhere safe.

The person behind the scenes of 3D modelling is the CAD designer. They use special CAD software to create the 3D models. Within the software, the developers have incorporated tools for creating lines, circles, arcs and other 2D objects. The software also has commands for sculpting, cutting, revolving, mirroring and other 3D operations, and it can render images with color, texture, lighting and backgrounds. With all of this at the CAD designer's disposal, anything imagined can be designed.

Drafting encompasses many different practices and principles. There is mechanical drafting, architectural drafting, civil drafting, electrical drafting, structural drafting, drafting for plumbing, 3D modelling, and drafting for just about anything you can imagine. CAD software vendors have designed programs for each one of these fields and made special accommodations for each. For example, within architectural programs there are commands for creating walls, doors, roofs, slabs and other architectural features. This allows the CAD drafter to work much faster and be more efficient when drawing.

3D models have allowed the design process to be done more accurately and efficiently than in the past. Drafting has seen many changes over the years, and updates to CAD software are made routinely. These new types of blueprints are much more flexible and allow for changes to be made at a moment's notice. Once a design is complete it can go directly to the manufacturer to be developed.

CAD is used for everything from architecture to inventions and is the main tool used in any type of technical drawing. This technology allows engineers to examine work before production, and has made life safer for the general public.

Displaying and Constructing 3D Models

Modeling is the first part of the graphics pipeline. When we are modeling in 3D we are working in Cartesian space, and we build with shapes, starting from the most basic primitives: the cone, cylinder, sphere and box.

In 3D animation, a polygon is the same flat, many-sided shape as in 2D geometry, only these polygons are connected to build your 3D model. Individual polygons are stitched together along their sides or at their vertex points to create the full model. Think of it as putting together puzzle pieces to create a whole, except that rather than seeing a printed image on the pieces, you are instead forming another three-dimensional shape whose boundaries and volume are defined by smaller two-dimensional shapes. Polygons are the wrapper on the chocolate Easter bunny, the candy coating on your M&Ms.

More polygons in a model can mean more detail and smoother renders, but it can also mean longer render times and more problems caused by overlapping lines and vertices.

Application Programming Interface (API):
An Application Programming Interface (API) is a set of functions and rules that programs use to communicate with each other to do certain jobs, just like how a player communicates with a game by pressing a certain button to perform a certain action. (E.g. Direct3D, OpenGL; graphics pipeline, e.g. modelling, lighting, viewing, projection, clipping, scan conversion, texturing and shading, display; rendering techniques (radiosity, ray tracing); rendering engines; distributed rendering techniques; lighting; textures; fogging; shadowing; vertex and pixel shaders; level of detail.)

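To make the "polygons stitched together at vertices" idea concrete, here is a minimal sketch in plain Python (the triangle data is made up) of the usual vertex-list / face-list representation, where faces share vertices by index rather than duplicating them:

# A tiny mesh: two triangles sharing an edge, stored as a vertex list plus a face list.
# Each face is a tuple of indices into the vertex list, so shared corners are stored only once.
vertices = [
    (0.0, 0.0, 0.0),   # 0
    (1.0, 0.0, 0.0),   # 1
    (1.0, 1.0, 0.0),   # 2
    (0.0, 1.0, 0.0),   # 3
]
faces = [
    (0, 1, 2),   # first triangle
    (0, 2, 3),   # second triangle, sharing the edge 0-2 with the first
]

def vertex_positions(face, vertices):
    """Resolve a face's indices back to actual 3D coordinates."""
    return [vertices[i] for i in face]

print(len(faces), "polygons")
print(vertex_positions(faces[1], vertices))

Sharing vertices this way is what lets the polygons form one continuous surface instead of a pile of loose puzzle pieces.
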
Direct 3D:
Direct3D is only available for Windows 95 and up, and it renders 3D graphics, especially in gaming, using the graphics card. It all started in 1992 with Servan Keondjian, who started a company called RenderMorphics and developed a 3D graphics Application Programming Interface (API for short); it was used in medical imaging and CAD (computer-aided design) software. Two versions of this API were released, and in February 1995 Microsoft bought RenderMorphics. Early Direct3D rendered 3D geometry through a system of buffers, but the process was awkward, with complex stages that had to be set up manually, which is one reason many developers of the time preferred the simpler OpenGL interface.

Rendering:
Rendering is the way 3D objects, lighting and textures are displayed together to create an image or animation from the data sent by the 3D modeling program. There are four main types of rendering:
• Rasterization
• Ray casting
• Ray tracing
• Radiosity

Rasterization:
Rasterization is mainly used in real-time applications such as games. Rather than tracing the whole scene pixel by pixel as the ray-based methods do, it draws the geometry that you can see on screen and updates it as the view changes. A good example of rasterization is Oblivion as you travel across the land of Tamriel.

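As a toy illustration of "filling in the geometry you can see", here is a hedged sketch in plain Python (one hard-coded, already-projected triangle and a tiny character "framebuffer") that rasterizes a single 2D triangle with an edge-function test; real APIs such as Direct3D and OpenGL do this in hardware for millions of triangles per frame.

def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge A->B the point P lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(tri, width, height):
    """Fill a grid of characters with '#' wherever a pixel centre is inside the triangle."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    frame = [["." for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5          # sample at the pixel centre
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                frame[y][x] = "#"
    return frame

# One screen-space triangle, already projected to 2D (coordinates are illustrative).
triangle = [(2.0, 1.0), (17.0, 4.0), (6.0, 9.0)]
for row in rasterize_triangle(triangle, 20, 10):
    print("".join(row))
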
Raycasting:
Ray casting is similar to ray tracing, since the two share similar algorithms. What distinguishes them is that ray casting is a faster version of ray tracing that cannot follow secondary rays, whereas ray tracing can.

Raytracing:
Ray tracing is a technique that renders an image by casting rays into the scene; as a ray hits the geometry, the color value of that pixel is calculated. It can produce a high degree of visual realism, but it costs time to render the scene. It is capable of simulating a variety of visual effects such as reflection (glass, for example), scattering (where light rays hit the geometry and bounce back and scatter) and refraction (as with water or air, where light changes direction as it passes from one medium to another).

Example of using ray tracing: [image in the original post]

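To show the "cast a ray, shade the hit" idea in miniature, here is a hedged sketch in plain Python (the sphere, camera and light are invented) that fires one primary ray per character cell, intersects it with a single sphere and shades hits with a Lambert term. It follows no secondary rays, so strictly speaking it behaves like ray casting.

import math

def ray_sphere(origin, direction, centre, radius):
    """Return the nearest positive hit distance along the ray, or None if it misses."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c                       # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

WIDTH, HEIGHT = 40, 20
CAMERA = (0.0, 0.0, -3.0)
SPHERE_CENTRE, SPHERE_RADIUS = (0.0, 0.0, 0.0), 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)     # unit vector from the surface towards the light (upper right, camera side)
SHADES = ".:-=+*#%@"

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Map the character cell to a point on an image plane in front of the camera.
        x = (i / (WIDTH - 1)) * 2 - 1
        y = 1 - (j / (HEIGHT - 1)) * 2
        d = (x, y, 3.0)
        length = math.sqrt(sum(c * c for c in d))
        d = tuple(c / length for c in d)
        t = ray_sphere(CAMERA, d, SPHERE_CENTRE, SPHERE_RADIUS)
        if t is None:
            row += " "
        else:
            hit = tuple(CAMERA[k] + d[k] * t for k in range(3))
            normal = tuple((hit[k] - SPHERE_CENTRE[k]) / SPHERE_RADIUS for k in range(3))
            lambert = max(0.0, sum(normal[k] * LIGHT_DIR[k] for k in range(3)))
            row += SHADES[min(len(SHADES) - 1, int(lambert * len(SHADES)))]
    print(row)

Adding reflection, refraction or shadows would mean spawning further rays at each hit point, which is exactly where the render time starts to climb.
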
Ray tracing is best used for still images, special effects and TV; unfortunately it is not well suited to games.

Radiosity:
Radiosity is a technique that works with two kinds of light: incident light (where the light source hits the subject) and reflected light (where the light bounces off the subject's surface). It is used especially for interior-design renders.

Example of using radiosity: [image in the original post]
There is also a video example: http://www.youtube.com/watch?v=NO3uvnbwCKM

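As a very rough sketch of the bouncing-light idea in plain Python (the three "patches", their reflectances and the form-factor matrix are all invented numbers, not derived from real geometry), here is an iterative radiosity-style solve where each pass adds the light that patches reflect onto one another:

# Three abstract surface patches: patch 0 is an emitter (a light panel), 1 and 2 are walls.
emission    = [1.0, 0.0, 0.0]          # light emitted by each patch
reflectance = [0.0, 0.7, 0.5]          # fraction of incident light each patch reflects
# form_factor[i][j]: fraction of light leaving patch j that arrives at patch i (illustrative).
form_factor = [
    [0.0, 0.2, 0.2],
    [0.3, 0.0, 0.4],
    [0.3, 0.4, 0.0],
]

radiosity = emission[:]                 # start with direct emission only
for bounce in range(8):                 # a few bounces is usually enough to settle
    incident = [sum(form_factor[i][j] * radiosity[j] for j in range(3)) for i in range(3)]
    radiosity = [emission[i] + reflectance[i] * incident[i] for i in range(3)]
    print(f"bounce {bounce + 1}: " + ", ".join(f"{b:.3f}" for b in radiosity))

Notice that the walls brighten slightly with each bounce even though only patch 0 emits light; that indirect, reflected contribution is what radiosity adds over direct lighting.
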
How to apply a sample fog effect in 3ds Max:
Go to Rendering > Environment (hotkey 8). Underneath the Atmosphere section, click Add… and select Fog.

You can change the density to control how near or far the fog appears when you render the scene. I think this is not the best way of producing high-quality fog; it is better done in Adobe After Effects.

How to stop textures looking blurry in the viewport:
First, apply the textures in the Material Editor by dragging and dropping them onto the shaders, or click on the Maps section, then the Diffuse slot, and select the file.

Next, click Customize > Preferences, go to the Viewport tab and click Configure Driver.

Tick "Match Bitmap Size as Closely as Possible" in the Background Texture Size section, and tick the same option in the Download Texture Size section as well.

Finally, click on the Material Editor again and click the texture you want to see more clearly, to refresh it.

Progressive and Interlaced Scanning:
So what are progressive and interlaced scanning? They describe how images are displayed on our TV screens: the image is displayed rapidly, with the screen updating all the time. The same applies to computer monitors.

Progressive scan:
• The image is displayed rapidly and drawn in sequence
• Requires a higher refresh rate
• Associated with computer monitors
• The latest HD TVs can display progressive scan
• Can display fast-moving images well
• Requires a higher bandwidth (more data per image)

Frame buffer:
• This is the area of video memory where an image is stored ready to be transmitted to the display device; moving images are shown by flipping through stored frames (like a flipbook)
• Higher resolution and greater bit depth require more video memory to store the images

Interlaced scanning:
• Unlike progressive scanning, interlaced scanning takes half the bandwidth of non-interlaced (progressive) scanning
• Interlacing is used by all the analogue TV broadcast systems
• Interlaced scanning is done by drawing the even-numbered rows, then the odd-numbered rows (or vice versa; it makes no difference)

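To illustrate the even/odd-row idea and the frame-buffer memory point above, here is a small sketch in plain Python (the "frame" is just a list of labelled rows) that splits a frame into two interlaced fields and weaves them back together, followed by a quick memory calculation for one full frame buffer:

# A toy 6-row "frame"; each row is just a label so the interleaving is easy to see.
frame = [f"row {i}" for i in range(6)]

even_field = frame[0::2]        # rows 0, 2, 4 - drawn in one pass
odd_field  = frame[1::2]        # rows 1, 3, 5 - drawn in the next pass

def weave(even, odd):
    """Rebuild a full progressive frame from the two interlaced fields."""
    full = []
    for e, o in zip(even, odd):
        full.extend([e, o])
    return full

print("even field:", even_field)
print("odd field: ", odd_field)
print("woven back:", weave(even_field, odd_field))

# Frame buffer size example: a 1920x1080 image at 32 bits (4 bytes) per pixel.
width, height, bytes_per_pixel = 1920, 1080, 4
print(f"one full frame needs {width * height * bytes_per_pixel / 1_048_576:.1f} MiB of video memory")

Because each field carries only half the rows, an interlaced signal needs roughly half the bandwidth of a progressive one at the same nominal resolution, which is the trade-off listed above.
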
Vertex Lighting:
Vertex lighting (also known as Gouraud shading) is a method used to display and simulate the differing effects of light across the surface of a 3D object. This is done by calculating the light at the vertices of the subject relative to where the light source is pointing: the more vertices there are, the better the specular lighting; the fewer vertices there are, the further the quality falls short of high-poly specular lighting. (A small sketch of the per-vertex calculation appears at the end of this section.)

Distributed rendering:
Distributed rendering (also known as DR) is a technique in which many computers render the same scene, which helps reduce the rendering time it would otherwise take. V-Ray for 3ds Max is capable of this. The process uses TCP/IP, and when you are using V-Ray there are two roles you need to know about: render clients and render servers.

Render clients
The render client is the main source from which the render servers get their information; it divides the frames into pieces and spreads them across the render servers. It distributes data to the render servers for processing and collects the results.

Render servers
A render server is a computer that receives the information the render client has sent, processes it and sends the result back.

Clipping 3D:
Clipping controls which parts of the geometry, inside and outside, are displayed. In 3ds Max you can hide the inside of the geometry (its back faces): right click > Object Properties > tick Backface Cull.

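Picking up the vertex-lighting description above, here is a hedged sketch in plain Python (the triangle normals and light direction are invented) that computes a diffuse intensity at each vertex and then interpolates it across the face, which is the essence of Gouraud shading:

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vertex_intensity(normal, light_dir):
    """Per-vertex diffuse term: clamp the angle between the vertex normal and the light."""
    return max(0.0, dot(normal, light_dir))

# One triangle with a different (already normalized) normal at each vertex.
vertex_normals = [
    (0.0, 0.0, 1.0),
    (0.707, 0.0, 0.707),
    (0.0, 0.707, 0.707),
]
light_dir = (0.0, 0.0, 1.0)          # light shining straight at the face (illustrative)

intensities = [vertex_intensity(n, light_dir) for n in vertex_normals]

def shade_point(bary, intensities):
    """Gouraud shading: interpolate the vertex intensities using barycentric weights."""
    return sum(w * i for w, i in zip(bary, intensities))

print("vertex intensities:", [round(i, 3) for i in intensities])
print("centre of the face:", round(shade_point((1/3, 1/3, 1/3), intensities), 3))
print("near vertex 0:     ", round(shade_point((0.8, 0.1, 0.1), intensities), 3))

Because the lighting is only ever computed at the vertices and interpolated in between, a denser mesh gives the interpolation more sample points, which is why low-poly models lose specular quality with this method.
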
Sources:

http://animation.about.com/od/glossaryofterms/g/What-Is-A-3d-Polygon.htm
http://www.fastgraph.com/help/3D_clipping.html
http://en.wikipedia.org/wiki/Projective_geometry
http://www.google.co.uk/search?hl=en&q=what+is+clipping+3d%3F&meta
http://www.spot3d.com/vray/help/150SP1/distributed_rendering.htm
http://en.wikipedia.org/wiki/3D_computer_graphics
http://en.wikipedia.org/wiki/3D_model
http://www.best3dsolution.com/services/3d-rendering/
http://www.blender.org/
http://ezinearticles.com/?3D-Modeling-Technology&id=6102955