Virtual Reality

A small report on Virtual Reality, outlining the basic concepts of virtual reality technology.



Contents

Section 1: Introduction
1.1. The three I's of Virtual Reality
1.2. A short history of early Virtual Reality
1.3. Early commercial VR technology
1.4. The components of a VR system

Section 2: Input Devices
2.1. 3D Positional Trackers
2.1.1. Tracker performance parameters
2.1.2. Different types of trackers
2.2. Navigation and Manipulation Interfaces
2.3. Gesture interfaces

Section 3: Output Devices
3.1. Graphics Display
3.2. Haptic Feedback
3.2.1. Tactile feedback interfaces
3.2.2. Force Feedback Interfaces

Section 4: Computer Architecture for VR
4.1. The Rendering Pipeline
4.2. Haptic Rendering Pipeline

Section 5: Modelling
5.1. Geometric Modelling
5.1.1. Virtual Object Shape
5.1.2. Object Visual Appearance
5.2. Kinematic Modelling
5.3. Physical Modelling
5.3.1. Collision detection
5.3.2. Surface deformation
5.3.3. Force computation
5.4. Behaviour Modelling
5.5. Model Management
5.5.1. Level-of-Detail Management
5.5.2. Cell Segmentation

Section 6: VR Programming
6.1. Toolkits and Scene Graphs
6.2. Java 3D
6.2.1. Model Geometry and Appearance
6.2.2. Java 3D Scene Graphs

Conclusion
Section 1: Introduction

The scientific community has been working in the field of virtual reality (VR) for decades, having recognized it as a very powerful human-computer interface. A large number of publications, TV shows, and conferences have described virtual reality in various and (sometimes) inconsistent ways. This has led to confusion in the technical literature.

Then what is virtual reality? Let us first describe it in terms of functionality. It is a simulation in which computer graphics is used to create a realistic-looking world. Moreover, the synthetic world is not static, but responds to the user's input (gestures, verbal commands, etc.). This defines a key feature of virtual reality, which is real-time interactivity. Here real time means that the computer is able to detect a user's input and modify the virtual world instantaneously. People like to see things change on the screen in response to their commands and become captivated by the simulation.

Definition: Virtual reality is a high-end user-computer interface that involves real-time simulation and interactions through multiple sensorial channels. These sensorial modalities are visual, auditory, tactile, smell, and taste.

1.1 The three I's of Virtual Reality

Interactivity: Anybody who doubts the spellbinding power of interactive graphics has only to look at children playing video games. It was reported that two youngsters in the United Kingdom continued playing Nintendo even though their house was on fire!

Immersion: Interactivity and its captivating power contribute to the feeling of immersion, of being part of the action in the virtual world, that the user experiences. Virtual reality pushes this even further by using all human sensorial channels. Indeed, users not only see and manipulate graphic objects in the virtual world, they can also touch and feel them [Burdea, 1996].

Imagination: The extent to which a virtual reality application is able to solve a particular problem, that is, the extent to which a simulation performs well, therefore depends very much on the human imagination. The imagination part of VR refers to the mind's capacity to perceive nonexistent things.

Fig 1.1 The three I's of Virtual Reality
1.2 A short history of early Virtual Reality

Virtual reality is not a new invention; it dates back more than 40 years. In 1962, U.S. Patent #3,030,870 was issued to Morton Heilig for his invention entitled the Sensorama Simulator, which was the first virtual reality video arcade. As shown in Figure 1.2, this early VR workstation had 3D video feedback (obtained by using a pair of side-by-side 35mm cameras), motion, colour, stereo, aroma, wind effects, and a seat that vibrated. It was thus possible to simulate a motorcycle ride through New York City.

In 1981, NASA created an LCD-based HMD by reverse engineering a Sony Watchman TV and adding special optics to focus the image near the eyes. Even today HMDs use the same basic principle used by NASA in 1981.

1.3 Early Commercial VR Technology

The first company to sell VR products was VPL Inc. This company produced the DataGlove (fig 1.3) [VPL, 1987]. Its fibre-optic sensors allowed computers to measure finger and thumb bending, and thus interaction was possible through gestures.

Soon after this the game company Nintendo introduced a much cheaper PowerGlove for its gaming console. It used ultrasonic sensors to measure wrist position relative to the screen and conductive ink flex sensors to measure finger bending.

The first commercial head-mounted displays, called EyePhones, were introduced by VPL in the 1980s. These HMDs used LCD displays to produce a stereo image, but at extremely low resolution (360x240 pixels). In early 1991 Division Ltd. introduced the first integrated commercial VR workstation. It had a stereo display on an HMD, 3D sound, hand tracking, and gesture recognition.

Fig 1.2 Sensorama    Fig 1.3 VPL DataGlove
1.4 The Components of a VR System
Section 2: Input Devices

One of the three I's defining virtual reality stands for interactivity. In order to allow human-computer interaction it is necessary to use special interfaces designed to input the user's commands into the computer and provide feedback from the simulation to the user.

Today's VR interfaces are varied in functionality and purpose, as they address several human sensorial channels. For example, body motion is measured with 3D position trackers or sensing suits, hand gestures are digitized by sensing gloves, visual feedback is sent through stereo HMDs and large-volume displays, virtual sound is computed by 3D sound generators, etc. Some of these technologies are still under research. The aim of the researchers is to allow faster and more natural ways of interacting with the computer and thus overcome the communication bottleneck presented by the keyboard and mouse.

2.1 3D Positional Trackers

Many computer application domains, such as navigation, ballistic missile tracking, ubiquitous computing, robotics, biomechanics, architecture, computer-aided design (CAD), education, and VR, require knowledge of the real-time position and orientation of moving objects within some frame of reference [Hightower and Borriello, 2001].

Objects moving in 3D space have six degrees of freedom, three for translations and three for rotations. If a Cartesian coordinate system is attached to the moving object (as illustrated in Fig 2.1), then its translations are along the X, Y, and Z axes. Object rotations about these axes are called yaw, pitch, and roll, respectively.

Fig 2.1 System of coordinates of a moving 3D object

Definition: The special-purpose hardware used in VR to measure the real-time change in a 3D object's position and orientation is called a tracker.

VR applications typically measure the motion of the user's head, hands, or limbs for the purpose of view control, locomotion, and object manipulation. In the case of the head-mounted display illustrated in figure 2.2, the tracker receiver is placed on the user's head, so when the posture of the head changes so does the position of the receiver. The user's head motion is sampled by an electronic unit and sent to a host computer (in this case a graphics workstation). The computer calculates the new viewing direction of the virtual scene and renders an updated image.
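As a minimal illustration of the six degrees of freedom described above, a single tracker reading can be stored in a small record holding the three translations, the three rotations, and a timestamp. The class and field names in this sketch are illustrative only and do not belong to any real tracker API.

/* Sketch: one 6-DOF tracker sample - three translations plus three rotations.
   All names are illustrative placeholders. */
public class PoseSample {
    public final double x, y, z;           /* translations along X, Y, Z (e.g. millimetres) */
    public final double yaw, pitch, roll;  /* rotations about the three axes (e.g. degrees) */
    public final long timestampMillis;     /* when the electronic unit sampled the pose */

    public PoseSample(double x, double y, double z,
                      double yaw, double pitch, double roll,
                      long timestampMillis) {
        this.x = x; this.y = y; this.z = z;
        this.yaw = yaw; this.pitch = pitch; this.roll = roll;
        this.timestampMillis = timestampMillis;
    }
}

A stream of such samples, read at the tracker's update rate, is what the VR engine uses to recompute the viewing direction for each frame.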
Another VR sensorial modality that uses tracker information is 3D sound, which in figure 2.2 is presented through headphones. Tracker data allow the computer to collocate sound sources with the virtual objects the user sees in the simulation. This helps increase the simulation realism and the user's feeling of immersion in the synthetic world.

2.1.1 Tracker performance parameters

All 3D trackers, regardless of the technology they use, share a number of very important performance parameters, such as accuracy, jitter, drift, and latency. These are illustrated in figure 2.3.

Definition: Tracker accuracy represents the difference between the object's actual position and that reported by tracker measurements.
Fig 2.3 Tracker performance parameters: a) accuracy b) jitter c) drift d) latency

The more accurate a tracker, the smaller this difference is and the better the simulation follows the real user's actions. Accuracy is given separately for translation (millimetres) and rotation (degrees). Accuracy is typically not constant and degrades with distance from the origin of reference of the system of coordinates. The distance at which accuracy is acceptable defines the tracker operating range or working envelope. Accuracy should not be confused with resolution, which is the granularity, or minimum change in the tracked object's 3D position, that the sensor can detect. The sphere of repeatability is the envelope which encloses repeated measurements of a real object's stationary position. Repeatability depends on tracker jitter.

Definition: Tracker jitter represents the change in tracker output when the tracked object is stationary.

A noisy tracker makes accurate measurements difficult. Just like accuracy, jitter is not constant over the tracker work envelope, and is influenced by environmental conditions in the tracker's vicinity.
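Since jitter is the spread of the tracker output around a fixed true position, it can be estimated by taking repeated readings of a stationary object and computing their standard deviation. The sketch below is a minimal, self-contained illustration of that idea, not a vendor-specified procedure; the sample values are invented.

/* Sketch: estimate tracker jitter along one axis as the standard deviation
   of repeated readings of a stationary target. */
public class JitterEstimate {
    public static double jitter(double[] samples) {
        double mean = 0.0;
        for (double s : samples) mean += s;
        mean /= samples.length;
        double variance = 0.0;
        for (double s : samples) variance += (s - mean) * (s - mean);
        variance /= samples.length;
        return Math.sqrt(variance);  /* same units as the samples, e.g. millimetres */
    }

    public static void main(String[] args) {
        double[] readings = {10.02, 9.98, 10.01, 9.99, 10.00};  /* stationary target near 10 mm */
        System.out.println("jitter = " + jitter(readings) + " mm");
    }
}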
Definition: Tracker drift is the steady increase in tracker error with time.

The output of a tracker with drift measuring the position of a stationary object is shown in figure 2.3c. As time passes, the tracker inaccuracy grows, which makes its data useless. Drift needs to be controlled by periodically zeroing it using a secondary tracker that does not have drift.

Definition: Latency is the time delay between action and result. In the case of the 3D tracker, latency is the time between the change in object position/orientation and the time the sensor detects this change.

Minimal latency is desired, since large latencies have a serious negative effect on the simulation and can induce "simulation sickness". Latency can be minimized by:
1. Synchronizing the measurement, communication, rendering, and display loops; this method is called generation lock or 'genlock'.
2. Using faster communication lines.
3. Using a high update or sampling rate.

Definition: Tracker update rate represents the number of measurements that the tracker reports every second.

2.1.2 Different types of trackers
1. Mechanical tracker: A mechanical tracker consists of a serial or parallel kinematic structure composed of links interconnected using sensorized joints.
2. Magnetic tracker: A magnetic tracker is a noncontact position measurement device that uses magnetic fields produced by a stationary transmitter to determine the real-time position of a moving receiver element. There are two types: AC and DC.
3. Ultrasonic tracker: An ultrasonic tracker is a noncontact position measurement device that uses ultrasonic signals produced by a stationary transmitter to determine the real-time position of a moving receiver element.
4. Optical tracker: An optical tracker is a noncontact position measurement device that uses optical sensing to determine the real-time position of objects.
5. Hybrid inertial tracker: A hybrid tracker is a system that utilizes two or more position measurement technologies to track objects better than any single technology would allow.
Figure 2.4 Mechanical tracker    Figure 2.5 Optical tracker

2.2 Navigation and Manipulation Interfaces

Definition: A navigation/manipulation interface is a device that allows the interactive change of view to the virtual environment and exploration through the selection and manipulation of virtual objects.

2.2.1 Types of Navigation/Manipulation Interfaces
1. Tracker-based: Trackers offer more functionality to a VR simulation than simply measuring the real-time position of the user. Manipulation devices can be integrated with these trackers.
2. Trackballs: A trackball is a sensorized cylinder that measures the three forces and three torques applied by the user's hand on a compliant element.
3. 3D probes: Intuitive and inexpensive devices which allow either absolute or relative position control of objects.

2.3 Gesture interfaces

Definition: Gesture interfaces are devices that measure the real-time position of the user's fingers and wrists in order to allow natural, gesture-recognition-based interaction with the virtual world.

Devices used: Pinch Glove, 5DT Data Glove, Didjiglove, CyberGlove
Section 3: Output Devices

Now we look at special hardware designed to provide feedback from the simulation in response to the input. The sensorial channels fed back by these interfaces are sight, sound, and touch.

3.1 Graphics Display

Definition: A graphics display is a computer interface that presents synthetic world images to one or several users interacting with the virtual world.

Types of displays:
1. Personal graphics displays: A graphics display that outputs a virtual scene destined to be viewed by a single user.
   a. Head-Mounted Displays (HMDs) (fig 3.1)
   b. Hand-Supported Displays (HSDs)
   c. Floor-Supported Displays (FSDs)
   d. Desk-Supported Displays (DSDs)
2. Large-volume displays: Graphics displays that allow several users located in close proximity to simultaneously view images of the virtual world.
   a. Monitor-based (fig 3.2)
   b. Projector-based

Fig 3.1 Head-Mounted Display    Fig 3.2 Panoramic Display
3.2 Haptic Feedback

Definition: Touch feedback conveys real-time information on contact surface geometry, virtual object surface roughness, slippage, and temperature. It does not actively resist the user's contact motion and cannot stop the user from moving through a virtual surface.

Definition: Force feedback provides real-time information on virtual object surface compliance, object weight, and inertia. It actively resists the user's contact motion and can stop it.

3.2.1 Tactile feedback interfaces

These devices use many ways to stimulate the skin receptors, ranging from air blows, jets, electric impulses, vibrations, micro-pin arrays, and direct stimulation to functional neuromuscular stimulation, in order to provide tactile feedback.

Devices: Tactile mouse [Rosenberg and Martin, 2001], CyberTouch Glove [Immersion Co., 2000] (fig 3.3), temperature feedback glove

Fig 3.3 CyberTouch Glove

3.2.2 Force Feedback Interfaces

These devices provide substantial force to stop the user's motion against objects in the virtual world. This implies that these devices have larger actuators, heavier structures, larger complexity, and greater cost.

An important characteristic of force feedback is mechanical bandwidth.

Definition: The mechanical bandwidth of a force feedback interface represents the frequency of force and torque refreshes as felt by the user (through finger attachments, handles, gimbals, etc.).

Devices: Force Feedback Joystick, Phantom Arm, CyberGrasp Glove, CyberForce
Fig 3.4 CyberGrasp force feedback glove    Fig 3.5 CyberForce force feedback system
Section 4: Computer Architecture for VR

Now we look at the computing hardware supporting such real-time interaction, which we call the "VR Engine." This term is an abstraction corresponding to various physical hardware configurations, from a single computer to many networked computers supporting a given simulation.

Definition: The VR Engine is a key component of any VR system, which reads its input devices, accesses task-dependent databases, performs the required real-time computations to update the state of the virtual world, and feeds the result to the output displays.

During a VR simulation it is impossible to predict all users' actions and store all corresponding synthetic world states in memory. Therefore the virtual world is created in real time. For a smooth simulation at least 30 frames/second need to be displayed, so the VR engine needs to recompute the virtual world every 33 msec! This process alone represents a large computational load that needs to be handled in parallel with other tasks.

4.1 The Rendering Pipeline

The term rendering is usually associated with graphics. It represents the process of converting the 3D geometrical models populating the virtual world into a 2D scene presented to the user. The term can also be extended to rendering the haptic feedback of the VR system.

Graphics rendering has three fundamental stages, as illustrated in figure 4.1. The first stage is the application stage, which is done entirely in software by the CPU. It reads the world geometry database as well as the user's input mediated by the input devices. In response to the user's input the application stage may change the view of the simulation, change the orientation of virtual objects, or create/destroy objects.

The application stage results are fed to the geometry stage, which can be implemented in either hardware or software. This stage consists of model transformations, lighting computations, scene projection, clipping, and mapping.

The last stage in the graphics pipeline is the rasterizing stage, which is done in hardware in order to gain speed. This stage converts the vertex information output by the geometry stage into the pixel information needed by the video display. It also does anti-aliasing in order to smooth out jagged edges. This leads to a large computational load, and it is therefore performed in parallel.
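The ordering of the three stages can be pictured as a per-frame loop. The toy, self-contained sketch below illustrates only that control flow; the stage methods are placeholders and do not correspond to a real graphics API, and in practice the geometry and rasterizing stages are delegated to graphics hardware.

/* Toy sketch of the three-stage graphics rendering pipeline, one pass per frame. */
public class PipelineSketch {
    /* Application stage (software, CPU): read input, update the virtual world. */
    static String applicationStage(int frame) { return "world state for frame " + frame; }
    /* Geometry stage: model transforms, lighting, projection, clipping, mapping. */
    static String geometryStage(String world) { return "vertices of (" + world + ")"; }
    /* Rasterizing stage (hardware): vertices to pixels, anti-aliasing. */
    static String rasterizingStage(String vertices) { return "pixels of (" + vertices + ")"; }

    public static void main(String[] args) {
        for (int frame = 0; frame < 3; frame++) {   /* each pass must fit in ~33 ms for 30 fps */
            String world = applicationStage(frame);
            String vertices = geometryStage(world);
            System.out.println(rasterizingStage(vertices));
        }
    }
}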
Fig 4.1 Advanced graphics rendering pipeline

4.2 Haptic Rendering Pipeline

VR simulation systems implement additional sensorial modalities, such as haptics, that need to meet similar real-time constraints. This can be implemented through a multistage haptic rendering pipeline.

During the first stage of the haptic rendering pipeline the physical characteristics of the 3D objects are loaded from the database. These include surface compliance, smoothness, weight, surface temperature, etc. The first stage of the pipeline also performs collision detection to determine which virtual objects collide, if any. Only the objects affected by the collision are passed to the next stage of the pipeline.

The second stage calculates the change in characteristics of the virtual objects based on various simulation models. The more objects are altered, the larger the computational load on the pipeline.

The third and last stage is the haptic texturing stage, which renders the touch and force feedback components of the simulation to the output devices. This stage is largely hardware dependent.

Graphics accelerators: ATI Fire GL2 [2000], NVIDIA, 3Dlabs Wildcat II
Haptic rendering devices: CyberForce, temperature feedback glove, force feedback joysticks
Section 5: Modelling

Another important aspect is the modelling of the virtual world. This means first mapping the I/O devices to the simulation scene, then developing object databases for populating the world. This involves modelling object shape, appearance, kinematic constraints, intelligent behaviour, and physical characteristics. Finally, in order to maintain real-time interaction with the simulation, the model needs to be optimized during the model management step.

5.1 Geometric Modelling

5.1.1 Virtual Object Shape

The shape of virtual objects is determined by their 3D surface, which can be described in several ways. The vast majority of virtual objects have their surface composed of triangular meshes. Triangular meshes are preferred because they use shared vertices and are faster to render. Another way of representing shape is to use parametric surfaces.

Fig 5.1 Parametric surfaces

(Diagram: VR authoring tool stages - I/O mapping, geometric modelling, kinematic modelling, physical modelling, intelligent behaviour, model management.)
There are several methods by which object surfaces can be constructed:
Using toolkit editors
Importing CAD files
Creating surfaces with a 3D digitizer
Creating surfaces with a 3D scanner
Using online 3D object databases

5.1.2 Object Visual Appearance

The next step is to illuminate the scene such that the objects become visible. The appearance of an object will depend strongly on the type and placement of virtual light sources as well as on the object's surface.

Scene illumination: Local scene illumination treats the interactions between objects and light sources in isolation, neglecting the interdependences between objects. Global illumination models the inter-reflections between objects and shadows, resulting in a more realistic-looking scene.

Texturing: Texturing is a technique performed in the rasterizing stage of the graphics pipeline in order to modify the object model's surface properties, such as colour, specular reflection, or pixel normals.

5.2 Kinematic Modelling

Kinematic modelling determines the location of 3D objects with respect to a world system of coordinates as well as their motion in the virtual world. Object kinematics is governed by parent-child hierarchical relations, with the motion of a parent object affecting that of its children.

Homogeneous transformation matrices are used to express object translations, rotations, and scaling.

Object position is expressed using 3D coordinate systems. Such a system of coordinates is attached to the object, usually at its centre of gravity, and oriented along the object's axes of symmetry.

Fig 5.2 View of a 3D world
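As a concrete illustration of homogeneous transformations and parent-child motion, the minimal sketch below uses the Java 3D Transform3D class (Java 3D is covered in Section 6). The translation and rotation values are arbitrary, and composing the child's world transform as parent times local is shown only as the usual convention, not a procedure taken from the report.

/* Sketch: composing homogeneous transformations with Java 3D's Transform3D.
   A child object's world transform is its parent's transform multiplied by
   the child's own local transform. */
import javax.media.j3d.Transform3D;
import javax.vecmath.Vector3d;

public class KinematicsSketch {
    public static void main(String[] args) {
        Transform3D parent = new Transform3D();
        parent.setTranslation(new Vector3d(1.0, 0.0, 0.0));  /* parent shifted 1 unit along X */

        Transform3D childLocal = new Transform3D();
        childLocal.rotY(Math.PI / 2.0);                      /* child rotated 90 degrees about Y */

        Transform3D childWorld = new Transform3D(parent);
        childWorld.mul(childLocal);                          /* world = parent * local */
        System.out.println(childWorld);                      /* prints the resulting 4x4 matrix */
    }
}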
5.3 Physical Modelling

The next step in virtual world modelling is the integration of the objects' physical characteristics. These include weight, inertia, surface geometry, compliance, deformation mode, etc. These features bring more realism to the simulation.

5.3.1 Collision detection

The first stage of haptic rendering is collision detection, which determines whether two or more objects are in contact with each other. This can be considered a form of haptic clipping, since only objects that collide are processed by the haptic rendering pipeline.

There are two types of collision detection: approximate and exact.

5.3.2 Surface deformation

Collision detection is followed by collision response, which depends on the characteristics of the virtual objects in contact and on the particular application being developed. Surface deformation changes the 3D object's geometry interactively and thus needs to be coordinated with the graphics pipeline.

5.3.3 Force computation

When the user interacts with 3D object surfaces, he or she should feel the reaction force. These forces need to be computed by the haptic rendering pipeline and sent through haptic force feedback devices to the user. Force computation takes into account the type of surface contact, the kind of surface deformation, as well as the object's physical and kinematic characteristics. There are three main types of virtual objects:
Elastic virtual objects
Plastic virtual objects
Rigid virtual objects
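To make these steps concrete, the self-contained sketch below pairs an approximate collision test (overlapping bounding spheres) with a simple penalty-style contact force proportional to penetration depth. The spring model and the stiffness value are illustrative assumptions, not the force model prescribed by the report.

/* Sketch: approximate collision detection (bounding spheres) and a simple
   penalty contact force F = k * d, where d is the penetration depth.
   The stiffness k and all coordinates are illustrative values. */
public class HapticSketch {
    /* Approximate test: two spheres collide if their centres are closer
       than the sum of their radii. */
    static boolean spheresCollide(double[] c1, double r1, double[] c2, double r2) {
        double dx = c1[0] - c2[0], dy = c1[1] - c2[1], dz = c1[2] - c2[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz) < r1 + r2;
    }

    /* Penalty force: proportional to how deep the surfaces interpenetrate. */
    static double contactForce(double penetrationDepth, double stiffness) {
        return penetrationDepth > 0.0 ? stiffness * penetrationDepth : 0.0;
    }

    public static void main(String[] args) {
        double[] fingertip = {0.0, 0.0, 0.0};       /* tracked fingertip, radius 0.01 m */
        double[] ball = {0.0, 0.0, 0.08};           /* virtual ball, radius 0.10 m */
        boolean contact = spheresCollide(fingertip, 0.01, ball, 0.10);
        double depth = 0.01 + 0.10 - 0.08;          /* penetration depth in metres */
        double force = contactForce(depth, 500.0);  /* with k = 500 N/m */
        System.out.println("contact = " + contact + ", force = " + force + " N");
    }
}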
5.4 Behaviour Modelling

It is also possible to model object behaviour that is independent of the user's actions. This becomes critical in very large simulation environments, where users cannot possibly control all the interactions that are taking place.

Consider the modelling of a virtual office, for example. Such an office could have an automatic door, a clock, and a desk calendar, as well as furniture. The time displayed by the clock and the date shown by the calendar should be automatically adjusted by the VR Engine. The virtual door should open when the user walks into the office. All these behaviours have to be modelled into the objects and the virtual environment.

5.5 Model Management

Definition: Model management combines techniques to help the VR engine render complex virtual environments at interactive rates without a significant impact on the simulation quality.

5.5.1 Level-of-Detail Management

This technique involves using several versions of the same object with different polygon counts, representing the same object at different levels of detail. The reason for doing so stems from the realization that the human eye perceives less and less detail as objects move further away. Thus it would be wasteful to represent distant objects with a high level of detail.

LOD management can be further classified into static and adaptive.

Fig 5.3 LOD Management
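Static LOD management reduces to choosing, each frame, which version of an object to draw based on its distance to the viewer. The sketch below shows that selection with three illustrative distance thresholds; Java 3D also provides a DistanceLOD behavior node for the same purpose, but the thresholds and level count here are arbitrary examples, not values from the report.

/* Sketch: static distance-based level-of-detail selection.
   The thresholds and the three levels are illustrative values. */
public class LodSketch {
    /* Returns which version of the model to render; 0 is the most detailed. */
    static int selectLod(double distanceToViewer) {
        if (distanceToViewer < 5.0)  return 0;  /* full-resolution mesh */
        if (distanceToViewer < 20.0) return 1;  /* reduced polygon count */
        return 2;                               /* coarse stand-in */
    }

    public static void main(String[] args) {
        for (double d : new double[] {2.0, 10.0, 50.0}) {
            System.out.println("distance " + d + " -> LOD level " + selectLod(d));
        }
    }
}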
5.5.2 Cell Segmentation

When models are too large and cannot fit into RAM, they need to be rendered in such a way that the impact of memory swaps on the simulation frame rate is minimized. This involves partitioning the large model into smaller ones, then rendering those with static or adaptive LOD management.

There are two methods:
Automatic cell segmentation
Combined cell, LOD, and database methods

Fig 5.4 Cell segmentation: a car model
Section 6: VR Programming

The previous sections showed how to model the geometry, appearance, physical, and behaviour properties of virtual objects as well as their parent-child object hierarchy, all with the aim of reducing the frame rendering time. These are the basic components of programming, or authoring, a virtual environment.

Definition: An authoring environment is an application programming interface (API)-based method of creating a virtual world. A run-time environment subsequently allows the user's real-time interaction with the authored world.

6.1 Toolkits and Scene Graphs

A toolkit is an extendable library of object-oriented functions designed for VR simulations. Simulated objects are instances of classes and inherit their default attributes, thus simplifying the programming task. Modern general-purpose toolkits include WorldToolKit (WTK) [Sense8 Co., 2001], Java 3D [1998], the Virtual Reality Modelling Language (VRML), etc.

A scene graph is a hierarchical organization of the objects in the virtual world, together with the view to that world. The scene graph represents a tree structure, with nodes connected by branches. The topmost node in the hierarchy is the root, which is the parent node of the whole graph. The external nodes are called leaves; they typically represent visible objects. The internal nodes represent transformations, which position their child objects in space in relation to the root node or in relation to other objects. Any transformation applied to a given internal node will affect all its children. Scene graphs are not static and change to reflect the current state of the virtual world.

6.2 Java 3D

Java, introduced by Sun Microsystems in the mid-1990s, has become the programming environment of choice for platform-independent, highly distributed applications. Java 3D is one of the Java APIs, designed for object-oriented programming of interactive 3D graphics applications. Java 3D uses OpenGL and Direct3D low-level graphics library functions.

6.2.1 Model Geometry and Appearance

An object's 3D shape and appearance are specified by the Java 3D class Shape3D(), which is an extension of the scene-graph leaf nodes. The specific values within the Shape3D() class are set by functions called methods, namely setGeometry() and setAppearance().
Geometries can be constructed from scratch, using points, lines, triangles, quads, arrays, etc. For example, if a geometry is built using triangle arrays, then:

/* Create list of 3D coordinates for vertices */
Point3f[] myCoords = {
    new Point3f(0.0f, 0.0f, 0.0f),
    . . .
};
/* Create list of vertex normals for lighting */
Vector3f[] myNormals = {
    new Vector3f(0.0f, 0.0f, 0.0f),
    . . .
};
/* Create the triangle array */
TriangleArray myTris = new TriangleArray(myCoords.length,
    GeometryArray.COORDINATES | GeometryArray.NORMALS);
myTris.setCoordinates(0, myCoords);
myTris.setNormals(0, myNormals);
/* Assemble the shape */
Shape3D myShape = new Shape3D(myTris, myAppear);

An alternative way of setting object geometry is to import files in formats such as 3DS, DXF, NFF, and WRL. The geometry importing is done by methods called loaders. Loaders add the content of the loaded file to the scene graph as a single object. If the object needs to be segmented in order to maintain its intrinsic parent-child dependencies, then its parts need to be accessed individually by subsequent method calls. Let us consider a virtual hand geometry file Hand.wrl created with VRML. In order to access its parts we need to:

/* add file to scene graph */
Scene SC = loader.load("Hand.wrl");
BranchGroup Bg = SC.getSceneGroup();
/* access the finger subparts of the loaded model */
Thumb = Bg.getChild(0);
Index = Bg.getChild(1);
Middle = Bg.getChild(2);
Ring = Bg.getChild(3);
Small = Bg.getChild(4);
The object's appearance is specified with the Java 3D Appearance() class. The material and texture attributes need to be defined first and then grouped to form a new appearance, as in:

Mat = new Material();
Mat.setDiffuseColor(r, g, b);
Mat.setAmbientColor(r, g, b);
Mat.setSpecularColor(r, g, b);
/* import texture file */
TextLd = new TextureLoader("checkered.jpg", ..., ...);
Tex = TextLd.getTexture();
/* create the appearance and set it */
Appr = new Appearance();
Appr.setMaterial(Mat);
Appr.setTexture(Tex);
Geom.setAppearance(Appr);

6.2.2 Java 3D Scene Graphs

A Java 3D scene graph is constructed of Node objects in parent-child relationships forming a tree structure. The arcs of a tree form no cycles. Only one path exists from the root of a tree to each of the leaves; therefore, there is only one path from the root of a scene graph to each leaf node.

Each scene graph path in a Java 3D scene graph completely specifies the state information of its leaf. State information includes the location, orientation, and size of a visual object.

Each scene graph has a single VirtualUniverse. The VirtualUniverse object has a list of Locale objects. A Locale object provides a reference point in the virtual universe. Each Locale object may serve as the root of multiple subgraphs of the scene graph.

A BranchGroup object is the root of a subgraph, or branch graph. There are two different categories of scene subgraphs: the view branch graph and the content branch graph.

Scene Graph Hierarchy
o The VirtualUniverse, Locale, Group, and Leaf classes appear in this portion of the hierarchy. Other than the VirtualUniverse and Locale objects, the rest of a scene graph is composed of SceneGraphObject objects. SceneGraphObject is the superclass for nearly every Core and Utility Java 3D class. SceneGraphObject has two subclasses: Node and NodeComponent. The subclasses of Node provide most of the objects in the scene graph. A Node object is either a Group node or a Leaf node object. Group and Leaf are superclasses to a number of subclasses. Here is a quick look at the Node class, its two subclasses, and the NodeComponent class. After this background material is covered, the construction of Java 3D programs is explained.
Node Class
o The Node class is an abstract superclass of the Group and Leaf classes. The Node class defines some important common methods for its subclasses. Information on specific methods is presented in later sections after more background material is covered. The subclasses of Node compose scene graphs.

Group Class
o The Group class is the superclass used in specifying the location and orientation of visual objects in the virtual universe. Two of the subclasses of Group are BranchGroup and TransformGroup. In the graphical representation of the scene graph, the Group symbols (circles) are often annotated with BG for BranchGroups, TG for TransformGroups, etc.

Leaf Class
o The Leaf class is the superclass used in specifying the shape, sound, and behavior of visual objects in the virtual universe. Some of the subclasses of Leaf are Shape3D, Light, Behavior, and Sound. These objects can have no children but may reference NodeComponents.

Fig 6.1 Example of a Java 3D scene graph    Fig 6.2 Scene graph of a cockroach
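Putting these classes together, the following minimal sketch builds one content branch graph: a BranchGroup root, a TransformGroup (an internal Group node) that positions its child, and a Shape3D leaf standing in for the myShape object of Section 6.2.1. SimpleUniverse is a Java 3D utility class that creates the VirtualUniverse, Locale, and view branch graph for you; the translation value is arbitrary and the example is only a sketch of the hierarchy, not code from the report.

/* Sketch: a small Java 3D scene graph.
   SimpleUniverse (VirtualUniverse + Locale + view branch)
     -> BranchGroup (BG, root of the content branch)
          -> TransformGroup (TG, positions its child)
               -> Shape3D leaf (the visible object) */
import com.sun.j3d.utils.universe.SimpleUniverse;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Shape3D;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3f;

public class SceneGraphSketch {
    public static void main(String[] args) {
        Shape3D myShape = new Shape3D();                      /* stands in for the shape of Section 6.2.1 */

        Transform3D shift = new Transform3D();
        shift.setTranslation(new Vector3f(0.0f, 0.5f, 0.0f)); /* raise the object by 0.5 units */
        TransformGroup tg = new TransformGroup(shift);        /* internal Group node */
        tg.addChild(myShape);                                 /* Leaf node: the visible object */

        BranchGroup bg = new BranchGroup();                   /* root of the content branch graph */
        bg.addChild(tg);
        bg.compile();                                         /* let Java 3D optimize the branch */

        SimpleUniverse universe = new SimpleUniverse();       /* creates universe, Locale, view branch */
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(bg);                          /* attach the content branch to the Locale */
    }
}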
Conclusion

Useful applications of VR include training in a variety of areas (military, medical, equipment operation, etc.), education, design evaluation (virtual prototyping), architectural walk-throughs, human factors and ergonomic studies, simulation of assembly sequences and maintenance tasks, assistance for the handicapped, study and treatment of phobias (e.g., fear of heights), entertainment, and much more.

Like all great technologies, there is a monumental duality about it.

Virtual reality technology could represent the next step in the sociological evolution of humanity: a world where you can do anything, where you can enjoy things in the virtual world that you cannot even dream of in the real world, like driving the latest model of Mercedes without spending any money, and where every virtual desire of mankind can be satisfied for the cost of pennies.

On the other hand, virtual reality could be the greatest single threat to society. Imagine an entire modernized civilization leaving the "real" world for the "virtual" one: a nation of empty streets and empty schools as families spend their entire days plugged into a virtual reality machine, everybody living in their own world, happily and without tensions or sorrows, in a world tailored to their own taste.

In conclusion, whether virtual reality acts as a social evolution or a threat to society depends on the ways it is used. Enjoying a virtual drive of a Mercedes may cost the Mercedes company a sale and hurt the economy, whereas using the same technology for virtual training systems or in the field of medicine brings clear benefits.

References:

Main Reference: Burdea, G. C., "Virtual Reality Technology," 2nd Edition.

Other References:
Burdea, G., 1993, "Virtual reality systems and applications" [short course], Electro '93 International Conference, Edison, NJ.
VPL, 1987, DataGlove Model 2 User's Manual, VPL Research, Redwood City, CA.
Anon, 1998, "3D Navigational and Gesture Devices," VR News, Vol. 7(1), pp. 26-29.
Immersion Co., 2001, "CyberGlove," online at www.immersion.com/products/3d
InterSense, 2000b, "IS-900 Precision Motion Tracker," company brochure, InterSense Co.
Foxlin, E., 1998, "Motion tracking requirements and technology," pp. 163-210.
Monkman and P. Taylor, 1993, "Thermal Tactile Sensing," Vol. 9(3), pp. 313-318.
Bishop, 1986, "Fast Phong Shading Algorithm," pp. 103-105.
"Introduction to VR," www.csie.nctu.edu.tw/course_vr_2001/doc/VRcourse5.pdf
http://www.oracle.com/technetwork/java/javase/tech/desktop-documentation-jsp-138646.html
