3D Article

Applications of 3D Today

3D in Games

3D in games has come a long way in recent years. It all started with a game called 3D Monster Maze, developed by Malcolm Evans in 1981 for the Sinclair ZX81 platform. The game awarded points for each step the player took without getting caught by the Tyrannosaurus rex that hunted them through the 16 by 16 cell, randomly generated maze.

We then got more advanced 3D graphics from games such as Spyro and Crash Bandicoot. This was when we started to realise the true potential of 3D in games. We were all amazed at what was possible: it was only 1996, and we were already getting these impressive graphics. Gaming was evolving at a rate unseen in any other form of media. The 3D aspect also opened up new avenues for games designers to create new genres and to build on tried and tested ones, such as FPS games, with masterpieces like GoldenEye.

We then got even more refined graphics from games like the Grand Theft Auto series and the Elder Scrolls series.
This was at the height of PS2 and Xbox gaming, but then Microsoft unveiled its new console, the Xbox 360, which would revolutionise console gaming. This is how we arrived at the hyper-realistic graphics of today, in games like Far Cry, Red Dead Redemption and Crysis.
3D in Movies and TV

Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, simulators and simulation generally. The visual scenes may be dynamic or static, and may be 2D, though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television. The term computer animation refers to dynamic CGI rendered as a movie, while the term virtual world refers to agent-based, interactive environments. Computer graphics software is used to make computer-generated imagery for movies and similar media.

The recent availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an internet subculture with its own set of global celebrities, clichés, and technical vocabulary.

Many popular TV shows and movies, such as Doctor Who and Avengers Assemble, have used CGI to create things that could not be achieved with live actors alone; these shots are usually mixed with live action to make them look more authentic.
CGI can create a real impact, as the results can seem almost real when done professionally. In the background of the Avengers Assemble poster, for example, the burning, crumbling city looks convincingly real and adds to the story and setting. Other popular examples of shows that use CGI are Primeval and 24.
3D in Animations

With the technology behind 3D animation, it is possible to make photorealistic 3D content that cannot be distinguished from a real photograph or video. 3D animation also provides a high level of control and flexibility, which makes the artistic freedom almost endless. Disney and Pixar are probably the best-known 3D animation makers, but there are many freelancers and smaller companies that create professional animations; examples include vr3 and kurodragon. Popular animations include A Bug's Life and Toy Story.

3D in Medicine

We brought you the first potentially negative use of 3D printers this morning with the revelation that one can make rare handcuff keys with a simple 3D printer or laser cutter. The technology is still really cool, but it must be used with great responsibility. Well, there is another use for 3D printers that has a lot of potential to be abused, but also a lot of potential to save lives. The 3D printer revolution has taken hold of Professor Lee Cronin at Glasgow University. He has many interests, but one of his most ambitious involves 3D printers. In an interview with The Guardian, he talks up 3D printers and their potential for revolutionising the medicine industry. His goal is to create "downloadable chemistry" so that people can print their own medicine at home. Of course, you can already see the problem here: prescription drug abuse is a major problem in many countries, especially in the US, and giving people easy access to those drugs is a potential hazard that must be addressed. Cronin dismisses such a scenario and instead focuses on the benefits such an innovation could have for society.
3D in Education

Gaia's 3D Visual Learning Solutions provide an interactive learning experience designed to make teaching easier and learning more fun. All of their 3D solutions are designed to complement the school curriculum and improve students' ability to learn.
3D in Architecture

3D architectural design is the final stage of design development that architects and interior designers favour in order to visualise their architectural drawings and creative design ideas. The resulting 3D images that professional 3D architects create can be astonishingly photo-realistic. This 3D technology is used mostly (but not exclusively) by architects, design studios and property developers for a variety of projects and plans, including hotel and property redevelopment, home improvements and commercial interior design. However, an increasing number of product designers, engineers, tradesmen and film studios are turning to 3D professionals for specialist help in bringing dynamic solutions to a wide range of situations.
Displaying 3D Polygon Animations

API

An application programming interface (API) is a protocol intended to be used as an interface by software components to communicate with each other. An API may include specifications for routines, data structures, object classes, and variables. An API specification can take many forms, including an international standard such as POSIX, vendor documentation such as the Microsoft Windows API, or the libraries of a programming language, e.g. the Standard Template Library in C++ or the Java API. Gartner predicts that by 2014, 75% of Fortune 500 enterprises will open an API.

Direct3D

Direct3D is a low-level API that you can use to draw triangles, lines, or points per frame, or to start highly parallel operations on the GPU. It hides different GPU implementations behind a coherent abstraction, but you still need to know how to draw 3D graphics. It is designed to drive a separate graphics-specific processor; newer GPUs have hundreds or thousands of parallel processors. It emphasises parallel processing: you set up a bunch of rendering or compute state and then start an operation, without waiting for immediate feedback from it and without mixing CPU and GPU operations.
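To make "a coherent abstraction over different GPU implementations" concrete, here is a conceptual sketch. It is not real Direct3D code, and the interface and class names are illustrative assumptions, but it mirrors the pattern described above: callers program against one interface, and vendor-specific implementations hide behind it.

```cpp
// Conceptual sketch (not real Direct3D code) of a low-level graphics API:
// one abstraction, many possible GPU-specific implementations behind it.
#include <cstdio>

struct Vertex { float x, y, z; };

// The abstraction the API defines...
class GraphicsDevice {
public:
    virtual ~GraphicsDevice() = default;
    virtual void drawTriangles(const Vertex* verts, int count) = 0;
};

// ...and one hypothetical vendor-specific implementation hidden behind it.
class FakeGpuDevice : public GraphicsDevice {
public:
    void drawTriangles(const Vertex* verts, int count) override {
        // A real driver would queue this work for the GPU and return
        // immediately; the CPU does not wait for the result.
        std::printf("queued %d vertices for rasterisation\n", count);
        (void)verts;
    }
};

int main() {
    Vertex tri[3] = {{-0.5f, -0.5f, 0}, {0.5f, -0.5f, 0}, {0, 0.5f, 0}};
    FakeGpuDevice gpu;
    GraphicsDevice& device = gpu;  // calling code sees only the abstraction
    device.drawTriangles(tri, 3);
}
```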
OpenGL

OpenGL (Open Graphics Library) is a cross-language, multi-platform API for rendering 2D and 3D computer graphics. The API is typically used to interact with a GPU to achieve hardware-accelerated rendering. OpenGL was developed by Silicon Graphics Inc. in 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, flight simulation, and video games. OpenGL is managed by the non-profit technology consortium Khronos Group.
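As a taste of what calling the API actually looks like, here is a minimal sketch that draws one coloured triangle using legacy fixed-function OpenGL with the GLUT windowing toolkit (assumed to be installed). Modern OpenGL replaces glBegin/glEnd with vertex buffers and shaders, but immediate mode keeps the example short.

```cpp
// Minimal legacy (fixed-function) OpenGL sketch: one shaded triangle.
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);                 // submit one primitive
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();                             // push the commands to the GPU
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutCreateWindow("OpenGL triangle");
    glutDisplayFunc(display);
    glutMainLoop();                        // hand control to the event loop
}
```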
Graphics Pipeline

In 3D computer graphics, the terms graphics pipeline and rendering pipeline most commonly refer to the way in which the 3D mathematical information contained within the objects and scenes is converted into images and video. The graphics pipeline typically accepts some representation of a three-dimensional primitive as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, and both describe very similar pipelines.

Stages of the graphics pipeline

Per-vertex lighting and shading

Geometry in the complete 3D scene is lit according to the defined locations of light sources, reflectance, and other surface properties. Some (mostly older) hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered; the lighting values between vertices are then interpolated during rasterisation. Per-fragment or per-pixel lighting, as well as other effects, can be done on modern graphics hardware as a post-rasterisation process by means of a shader program. Modern graphics hardware also supports per-vertex shading through the use of vertex shaders.
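A minimal sketch of the per-vertex (Gouraud) approach described above, assuming a simple Lambert diffuse term and made-up normals and light direction: each vertex is lit once, and the values in between are linearly interpolated, as a rasteriser would do.

```cpp
// Per-vertex (Gouraud) diffuse lighting sketch: light each vertex with
// Lambert's cosine law, then interpolate the results along an edge.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse intensity at one vertex: max(0, N . L)
float lambert(Vec3 normal, Vec3 toLight) {
    return std::max(0.0f, dot(normal, toLight));
}

int main() {
    Vec3 toLight = {0.0f, 0.0f, 1.0f};       // unit vector towards the light
    Vec3 n0 = {0.0f, 0.0f, 1.0f};            // normal facing the light
    Vec3 n1 = {0.7071f, 0.0f, 0.7071f};      // normal tilted 45 degrees

    float i0 = lambert(n0, toLight);         // 1.0
    float i1 = lambert(n1, toLight);         // ~0.707

    // Interpolate between the two lit vertices, as the rasteriser would
    for (float t = 0.0f; t <= 1.0f; t += 0.25f)
        std::printf("t=%.2f intensity=%.3f\n", t, i0 + t * (i1 - i0));
}
```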
Clipping

Geometric primitives that fall completely outside of the viewing frustum will not be visible and are discarded at this stage.

Projection Transformation

In the case of a perspective projection, objects that are distant from the camera are made smaller. This is achieved by dividing the X and Y coordinates of each vertex of each primitive by its Z coordinate (which represents its distance from the camera). In an orthographic projection, objects retain their original size regardless of distance from the camera.

Viewport Transformation

The post-clip vertices are transformed once again into window space. In practice, this transform is very simple: a scale (multiplying by the width of the window) and a bias (adding the offset from the screen origin). At this point, the vertices have coordinates that relate directly to pixels in a raster.

Scan Conversion or Rasterisation

Rasterisation is the process by which the 2D image-space representation of the scene is converted into raster format and the correct resulting pixel values are determined. From this point on, operations are carried out on each individual pixel. This stage is rather complex, involving multiple steps often referred to as a group under the name of the pixel pipeline.

Texturing, Fragment Shading

At this stage of the pipeline, individual fragments (or pre-pixels) are assigned a color based on values interpolated from the vertices during rasterisation, from a texture in memory, or from a shader program.

Display

The final colored pixels can then be displayed on a computer monitor or other display.
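The projection and viewport stages above reduce to a few lines of arithmetic. Here is a minimal sketch, assuming an illustrative window size and camera-space vertex; real pipelines use 4x4 matrices and homogeneous coordinates, but the perspective divide and the scale-and-bias are the heart of it.

```cpp
// Sketch of the projection and viewport stages: perspective divide
// (x/z, y/z), then scale-and-bias into pixel space.
#include <cstdio>

struct Vec3 { float x, y, z; };

int main() {
    const float width = 800.0f, height = 600.0f;  // assumed window size
    Vec3 v = {1.0f, 0.5f, 4.0f};                  // camera-space vertex; z = distance

    // Perspective projection: distant objects shrink (divide by z)
    float xNdc = v.x / v.z;                       // normalised device coords
    float yNdc = v.y / v.z;

    // Viewport transform: scale by half the window, bias to the centre
    float xPix = (xNdc + 1.0f) * 0.5f * width;
    float yPix = (1.0f - yNdc) * 0.5f * height;   // flip y: raster origin is top-left

    std::printf("pixel: (%.1f, %.1f)\n", xPix, yPix);
}
```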
Geometric Theory

Geometric theory covers vertices, lines, curves, edges, polygons, elements, faces, primitives and meshes (e.g. wireframes), together with coordinate geometry (two-dimensional and three-dimensional) and surfaces. Mesh construction covers box modelling, extrusion modelling, and the use of common primitives such as cubes, pyramids, cylinders and spheres.

Cartesian Coordinates

When working with three-dimensional software, we are creating a 3D interpretation of something on a flat 2D screen, and so every 3D package, such as 3ds Max, Blender or Maya, uses a Cartesian coordinate system to represent geometry in 3D space. Cartesian coordinates are also used throughout mathematics.

René Descartes was the French mathematician who developed the Cartesian coordinate system (1637). He did so because he wanted to merge algebra and Euclidean geometry, and his work played an important role in the development of analytic geometry, calculus and cartography.

2D and 3D Cartesian coordinate systems

As usual in maths, a 2D Cartesian coordinate system has two axes, X (horizontal) and Y (vertical); the point where the two meet is called the origin. A 3D Cartesian coordinate system, which you will find in 3D software such as 3ds Max, consists of three axes: X, Y and Z.
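To make the coordinate system concrete, here is a unit cube written out in Cartesian coordinates, with faces indexing into a vertex list. The layout is illustrative, but this vertex-plus-face form is also how the meshes in the next section are stored.

```cpp
// A unit cube in Cartesian (x, y, z) coordinates: 8 vertices and
// 6 quad faces that index into the vertex list.
#include <cstdio>

struct Vec3 { float x, y, z; };

int main() {
    Vec3 verts[8] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back face corners (z = 0)
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1},   // front face corners (z = 1)
    };
    int faces[6][4] = {
        {0,1,2,3}, {4,5,6,7},                 // back, front
        {0,1,5,4}, {2,3,7,6},                 // bottom, top
        {0,3,7,4}, {1,2,6,5},                 // left, right
    };
    for (int f = 0; f < 6; ++f) {
        std::printf("face %d:", f);
        for (int i = 0; i < 4; ++i) {
            Vec3 v = verts[faces[f][i]];
            std::printf(" (%g,%g,%g)", v.x, v.y, v.z);
        }
        std::printf("\n");
    }
}
```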
Mesh Construction

Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools, and a wide range of 3D graphics software packages are available for constructing polygon meshes. One of the more popular methods is box modeling, which uses two simple tools:

The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the centre and one on each edge, creating four smaller squares.

The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Performing the extrude operation on a square face would therefore create a cube connected to the surface at the location of the face.

A second common modeling method is sometimes referred to as inflation modeling or extrusion modeling. In this method, the user creates a 2D shape which traces the outline of an object from a photograph or a drawing, then uses a second image of the subject from a different angle and extrudes the 2D shape into 3D, again following the shape's outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head, duplicate the vertices, invert their location relative to some plane, and connect the two pieces together. This ensures that the model will be symmetrical.

Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modeling environment. Common primitives include:

Cubes
Pyramids
Cylinders
2D primitives, such as squares, triangles, and disks
Specialised or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot
Spheres, commonly represented in one of two ways: icospheres, which are icosahedrons with enough triangles to resemble a sphere, and UV spheres, which are composed of quads and resemble the grid seen on some globes; the quads are larger near the "equator" of the sphere and smaller near the "poles", eventually terminating in a single vertex (see the sketch after this section)

Finally, some specialised methods of constructing high- or low-detail meshes exist. Sketch-based modeling offers a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very expensive and are generally only used by researchers and industry professionals, but they can generate highly accurate sub-millimetric digital representations.
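As a sketch of how a UV sphere like the one described above can be generated, the following builds rings of latitude and longitude. The resolution constants are illustrative assumptions, and for brevity the pole vertices are left duplicated rather than merged into the single vertex a modeler would produce.

```cpp
// UV-sphere construction sketch: vertices laid out in latitude rings,
// each ring split into longitude segments.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

int main() {
    const int rings = 8, segments = 16;           // assumed resolution
    const float PI = 3.14159265f;
    std::vector<Vec3> verts;

    for (int r = 0; r <= rings; ++r) {
        float phi = PI * r / rings;               // 0 at north pole, PI at south
        for (int s = 0; s < segments; ++s) {
            float theta = 2.0f * PI * s / segments;
            verts.push_back({std::sin(phi) * std::cos(theta),
                             std::cos(phi),                    // y is "up"
                             std::sin(phi) * std::sin(theta)});
        }
    }
    std::printf("UV sphere: %zu vertices\n", verts.size());
}
```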
3D Development Software

Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces by using many polygons.

Curve modeling - Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points, and increasing the weight for a point will pull the curve closer to that point. Curve types include non-uniform rational B-splines (NURBS), splines, patches and geometric primitives.
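Curve modeling in miniature: the sketch below evaluates a cubic Bezier curve, a simpler (unweighted) relative of the NURBS curves named above. The control points are illustrative; moving one pulls the curve towards it without the curve necessarily passing through it, exactly as described.

```cpp
// Cubic Bezier evaluation via the Bernstein basis at parameter t.
#include <cstdio>

struct Vec2 { float x, y; };

Vec2 bezier(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
    float u = 1.0f - t;
    float b0 = u*u*u, b1 = 3*u*u*t, b2 = 3*u*t*t, b3 = t*t*t;
    return {b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
            b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y};
}

int main() {
    // Assumed control polygon: the curve starts at p0, ends at p3,
    // and is pulled towards (but does not touch) p1 and p2.
    Vec2 p0{0,0}, p1{1,2}, p2{3,2}, p3{4,0};
    for (float t = 0.0f; t <= 1.001f; t += 0.25f) {
        Vec2 p = bezier(p0, p1, p2, p3, t);
        std::printf("t=%.2f -> (%.2f, %.2f)\n", t, p.x, p.y);
    }
}
```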
Digital sculpting - Still a fairly new method of modeling, 3D sculpting has become very popular in the few short years it has been around. There are currently three types of digital sculpting: displacement, which is the most widely used among applications at the moment; volumetric; and dynamic tessellation. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the vertex positions through use of a 32-bit image map. Volumetric sculpting, based loosely on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to the voxel approach but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as the model has a new topology created over it once the model's form, and possibly its details, have been sculpted. The new mesh will usually have the original high-resolution mesh information transferred into displacement data or normal map data if it is intended for a game engine.

http://en.wikipedia.org/wiki/3D_modeling
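A toy sketch of the displacement idea described above: push each vertex of a mesh along its normal by a height sampled from a map. The flat grid and the sine-based height function here stand in for a real control mesh and a real 32-bit displacement image.

```cpp
// Displacement sketch: offset grid vertices along their normals by a
// sampled height. The height function is a stand-in for an image map.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for sampling a displacement map at (u, v)
float sampleHeight(float u, float v) {
    return 0.1f * std::sin(6.2832f * u) * std::sin(6.2832f * v);
}

int main() {
    const int n = 4;                              // tiny 4x4 grid for illustration
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            float u = float(i) / (n - 1), v = float(j) / (n - 1);
            Vec3 normal = {0, 0, 1};              // flat grid: normals point along z
            Vec3 p = {u, v, normal.z * sampleHeight(u, v)};
            std::printf("(%.2f, %.2f, %+.3f)\n", p.x, p.y, p.z);
        }
}
```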
Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modeling capabilities and a flexible plugin architecture, and runs on the Microsoft Windows platform. It is frequently used by video game developers, TV commercial studios and architectural visualization studios, and is also used for movie effects and pre-visualization. In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.

http://en.wikipedia.org/wiki/Autodesk_3ds_Max

Autodesk Maya, commonly shortened to Maya, is 3D computer graphics software that runs on Microsoft Windows, Mac OS and Linux. It was originally developed by Alias Systems Corporation (formerly Alias|Wavefront) and is currently owned and developed by Autodesk, Inc. It is used to create interactive 3D applications, including video games, animated films, TV series and visual effects. The product is named after the Sanskrit word maya (māyā), the Hindu concept of illusion.

http://en.wikipedia.org/wiki/Autodesk_Maya

CINEMA 4D is a 3D modeling, animation and rendering application developed by MAXON Computer GmbH of Friedrichsdorf, Germany. It is capable of procedural and polygonal/subdivision modeling, animating, lighting, texturing and rendering, and includes the common features found in 3D modeling applications.
http://en.wikipedia.org/wiki/Cinema_4D

SketchUp is a 3D modeling program optimized for a broad range of applications such as architectural, civil, mechanical, film and video game design, and is available in free as well as professional versions. The program highlights its ease of use, and an online repository of model assemblies (e.g. windows, doors, automobiles and entourage) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes drawing layout functionality, allows surface rendering in variable "styles", accommodates third-party "plug-in" programs that add other capabilities (e.g. near-photorealistic rendering), and enables placement of its models within Google Earth. In early 2012, Google, the current owner of SketchUp, announced it would sell the program to Trimble, a company formerly known for GPS location services.
http://en.wikipedia.org/wiki/SketchUp

Polygon Count and File Size

The two common measurements of an object's cost, or file size, are the polygon count and the vertex count. For example, a game character may range anywhere from 200-300 polygons to 40,000+ polygons. A high-end third-person console or PC game may use many vertices or polygons per character, while an iOS tower defence game might use very few.

Polygons vs. Triangles

When a game artist talks about the poly count of a model, they really mean the triangle count. Games almost always use triangles rather than polygons, because most modern graphics hardware is built to accelerate the rendering of triangles. The polygon count reported in a modeling app is always misleading, because a model's triangle count is higher. It is usually best, therefore, to switch the polygon counter to a triangle counter in your modeling app, so you are using the same counting method as everyone else.

Polygons do, however, have a useful purpose in game development. A model made mostly of four-sided polygons (quads) will work well with the edge-loop selection and transform methods that speed up modeling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However, different tools will create different triangle layouts within those polygons: a quad can end up either as a "ridge" or as a "valley" depending on how it is triangulated.
Artists need to examine a new model in the game engine carefully to see whether the triangle edges are turned the way they wish; if not, specific polygons can then be triangulated manually.

Triangle Count vs. Vertex Count

Vertex count is ultimately more important for performance and memory than triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as physical breaks in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks so the model can be sent in renderable chunks to the graphics card. Overuse of smoothing groups, over-splitting of UVs, and too many material assignments (and too much misalignment between these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance, and it can also increase the memory cost for the mesh because there are more vertices to send and store.

http://wiki.polycount.net/PolygonCount
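The counting rule above is easy to verify in code. This sketch assumes a fully connected triangle strip, where each extra triangle adds only one vertex; every UV seam or smoothing split would instead duplicate vertices along the cut and push the count up.

```cpp
// Vertex count for a fully connected triangle strip:
// 1 triangle -> 3 vertices, and each further triangle adds one more.
#include <cstdio>

int stripVertices(int triangles) { return triangles + 2; }

int main() {
    for (int n = 1; n <= 4; ++n)
        std::printf("%d triangle(s): %d shared vertices\n", n, stripVertices(n));
    // A seam cutting the strip in half would duplicate the vertices
    // along the cut, so the total would rise above triangles + 2.
}
```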
Rendering Time

Rendering is the final process of creating the actual 2D image or animation from the prepared scene. It can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialised, rendering methods have been developed, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to more advanced techniques such as scanline rendering, ray tracing and radiosity. Rendering may take from fractions of a second to days for a single image or frame. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.

Real-time

Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. in one frame. The primary goal is to achieve the highest possible degree of photorealism at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs in order to create the illusion of movement). Exploitations can be applied to the way the eye perceives the world, so the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate visual effects such as lens flares, depth of field or motion blur, which are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.

Non Real-time

Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method used in digital media and artistic works. Techniques have also been developed to simulate other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene; many layers of material may be rendered separately and integrated into the final shot using compositing software.

Reflection/Scattering - How light interacts with the surface at a given point.
Shading - How material properties vary across the surface.

http://en.wikipedia.org/wiki/3D_rendering
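The ray tracing mentioned above rests on one small kernel: deciding whether a ray hits a surface. The sketch below intersects a ray with a sphere by substituting the ray equation into the sphere equation and checking the quadratic's discriminant; the scene values are illustrative assumptions.

```cpp
// Ray-sphere intersection, the core test inside a ray tracer.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Returns the distance along the ray to the nearest hit, or -1 for a miss
float hitSphere(Vec3 origin, Vec3 dir, Vec3 centre, float radius) {
    Vec3 oc = sub(origin, centre);
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b*b - 4*a*c;                 // < 0 means the ray misses
    return disc < 0 ? -1.0f : (-b - std::sqrt(disc)) / (2.0f * a);
}

int main() {
    Vec3 eye{0,0,0}, dir{0,0,-1}, centre{0,0,-5};
    std::printf("hit at t=%.2f\n", hitSphere(eye, dir, centre, 1.0f)); // t=4.00
}
```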
