ABSTRACT
Video games have been entertaining us for nearly 30 years, ever since Pong was introduced to
arcades in the early 1970s. Computer graphics have become much more sophisticated since then,
and soon game graphics will seem all too real. In the next decade, researchers plan to pull graphics
out of your television screen or computer display and integrate them into real-world environments.
This new technology, called augmented reality, will further blur the line between what is real and what
is computer-generated by enhancing what we see, hear, feel and smell. Augmented reality will truly
change the way we view the world. Picture yourself walking or driving down the street. With
augmented-reality displays, which will eventually look much like a normal pair of glasses, informative
graphics will appear in your field of view, and audio will coincide with whatever you see. These
enhancements will be refreshed continually to reflect the movements of your head. Augmented reality is
still in the early stages of research and development at various universities and high-tech companies.
Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented-reality
system, which can be described as "the Walkman of the 21st century". Augmented reality (AR) alters
your perception of the world by wrapping a layer around reality: a layer of augmentation.
They say reality is a hard pill to swallow, so how about reality served with a sugar coating of
information? Augmented Reality can help in multiple ways: provide critical information about your
surroundings, give you your personal babel fish, help soldiers survive wars, help law
enforcement officials catch criminals, avoid car accidents, train surgeons, and much more. But AR is
not just about this; it can be fun too, with limitless possibilities in gaming. Should you be
interested in not just experiencing AR but also creating that same magic, the last chapter will help you
take your first steps into AR programming. We tell you about the domain knowledge required for
starting off with each type of AR implementation, and we end with an introduction to some of the most
useful frameworks and toolkits available to begin your journey.
1. INTRODUCTION
Augmented reality (AR) refers to computer displays that add virtual information to a user's
sensory perception. Most AR research focuses on see-through devices, usually worn on the head, that
overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes
graphics over a real-world environment in real time.
Getting the right information at the right time and the right place is key in all these applications.
Personal digital assistants such as the Palm and the Pocket PC can provide timely information using
wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld
devices. But what makes Augmented Reality different is how the information is presented: not on a
separate display but integrated with the user's perceptions. This kind of interface minimizes the extra
mental effort that a user has to expend when switching his or her attention back and forth between
real-world tasks and a computer screen. In augmented reality, the user's view of the world and the
computer interface literally become one.
Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in
which views of the real world are combined in some proportion with views of a virtual environment.
Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes
the class of displays that consist primarily of a real-world environment enhanced with graphical
augmentations.
In Augmented Virtuality, real objects are added to a virtual environment. In Augmented
Reality, virtual objects are added to the real world. An AR system supplements the real world with virtual
(computer generated) objects that appear to co-exist in the same space as the real world. Virtual
Reality is a synthetic environment.
Fig 1: The reality-virtuality continuum, running from the real environment through Augmented Reality and Augmented Virtuality to the fully virtual environment.
1.1 Comparison between AR and virtual environments
The overall requirements of AR can be summarized by comparing them against the
requirements for Virtual Environments, for the three basic subsystems that they require.
1. Scene generator: Rendering is not currently one of the major problems in AR. VE systems have
much higher requirements for realistic images because they completely replace the real world with
the virtual environment. In AR, the virtual images only supplement the real world. Therefore,
fewer virtual objects need to be drawn, and they do not necessarily have to be realistically rendered
in order to serve the purposes of the application.
2. Display devices: The display devices used in AR may have less stringent requirements than VE
systems demand, again because AR does not replace the real world. For example, monochrome
displays may be adequate for some AR applications, while virtually all VE systems today use full
color. Optical see-through HMD's with a small field-of-view may be satisfactory because the user
can still see the real world with his peripheral vision; the see-through HMD does not shut off the
user's normal field-of-view. Furthermore, the resolution of the monitor in an optical see-through
HMD might be lower than what a user would tolerate in a VE application, since the optical
see-through HMD does not reduce the resolution of the real environment.
3. Tracking and sensing: While in the previous two cases AR had lower requirements than VE, that
is not the case for tracking and sensing. In this area, the requirements for AR are much stricter than
those for VE systems. A major reason for this is the registration problem.
Basic Subsystems        VR               AR
Scene Generator         More advanced    Less advanced
Display device          High quality     Low quality
Tracking and Sensing    Less advanced    More advanced

Table No.1: Comparison of requirements of Augmented Reality and Virtual Reality
2. HISTORY
• Although augmented reality may seem like the stuff of science fiction, researchers have been
building prototype systems for more than three decades. The first was developed in the 1960s by
computer graphics pioneer Ivan Sutherland and his students at Harvard University.
• In the 1970s and 1980s a small number of researchers studied augmented reality at institutions
such as the U.S. Air Force's Armstrong Laboratory, the NASA Ames Research Center and the
University of North Carolina at Chapel Hill.
• It wasn't until the early 1990s that the term "Augmented Reality" was coined by scientists at
Boeing who were developing an experimental AR system to help workers assemble wiring
harnesses.
• In 1996 developers at Columbia University developed 'The Touring Machine'.
• In 2001 MIT came up with a very compact AR system known as "MIThril".
• Presently research is being done in developing BARS (Battlefield Augmented Reality
Systems) by engineers at the Naval Research Laboratory, Washington D.C.
3. WORKING
An AR system tracks the position and orientation of the user's head so that the overlaid material
can be aligned with the user's view of the world. Through this process, known as registration, graphics
software can place a three-dimensional image of a teacup, for example, on top of a real saucer and keep
the virtual cup fixed in that position as the user moves about the room. AR systems employ some of the
same hardware technologies used in virtual reality research, but there is a crucial difference: whereas
virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.
Augmented Reality is still in an early stage of research and development at various universities
and high-tech companies. Eventually, possibly by the end of this decade, we will see the first
mass-marketed augmented-reality system, which one researcher calls "the Walkman of the 21st
century". What augmented reality attempts to do is not only superimpose graphics over a real
environment in real time, but also change those graphics to accommodate a user's head and
eye movements, so that the graphics always fit the user's perspective.
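To make the registration idea concrete, the following is a minimal sketch (in Python with NumPy, using made-up pose and camera values) of how a world-anchored virtual point, such as the teacup on the saucer, could be projected into the user's view from the tracked head pose. It is only an illustration of the principle, not any particular system's implementation.

```python
# Minimal registration sketch: project a world-anchored virtual point into
# screen coordinates using the tracked head pose and a pinhole camera model.
# Pose and camera values below are hypothetical placeholders.
import numpy as np

def project_point(point_world, R_head, t_head, focal_px, center_px):
    """Transform a 3-D world point into the head/camera frame, then project it."""
    # World -> camera: invert the head pose (R_head, t_head give the head in world coords).
    p_cam = R_head.T @ (point_world - t_head)
    if p_cam[2] <= 0:            # behind the viewer: nothing to draw
        return None
    u = focal_px * p_cam[0] / p_cam[2] + center_px[0]
    v = focal_px * p_cam[1] / p_cam[2] + center_px[1]
    return (u, v)

# Example: identity head orientation, head 2 m back from the origin along -Z.
R = np.eye(3)
t = np.array([0.0, 0.0, -2.0])
saucer_top = np.array([0.0, 0.0, 0.0])        # where the virtual teacup sits
print(project_point(saucer_top, R, t, focal_px=800.0, center_px=(640, 360)))
```

Because the projection is recomputed every time the tracker reports a new head pose, the virtual cup appears to stay fixed on the real saucer as the user moves about the room.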
Here are the three components needed to make an augmented-reality system work:
• Head-mounted display
• Tracking system
• Mobile computing power
3.1 Head-Mounted Display
Just as monitors allow us to see text and graphics generated by computers, head-mounted
displays (HMD's) will enable us to view graphics and text created by augmented-reality systems.
There are two basic types of HMD's:
• Video see-through
• Optical see-through
3.1.1 Optical see-through display
Fig 2: Optical see-through HMD conceptual diagram.
A simple approach to optical see-through display employs a mirror beam splitter: a half-silvered
mirror that both reflects and transmits light. If properly oriented in front of the user's eye, the
beam splitter can reflect the image of a computer display into the user's line of sight yet still allow light
from the surrounding world to pass through. Such beam splitters, which are called combiners, have
long been used in head-up displays for fighter-jet pilots (and, more recently, for drivers of luxury
cars). Lenses can be placed between the beam splitter and the computer display to focus the image so
that it appears at a comfortable viewing distance. If a display and optics are provided for each eye, the
view can be in stereo. Sony makes a see-through display that some researchers use, called the
"Glasstron".
3.1.2 Video see-through displays
Fig 3: Video see-through HMD conceptual diagram
In contrast, a video see-through display uses video mixing technology, originally developed for
television special effects, to combine the image from a head-worn camera with synthesized graphics.
The merged image is typically presented on an opaque head-worn display. With careful design the
camera can be positioned so that its optical path is close to that of the user's eye; the video image thus
approximates what the user would normally see. As with optical see-through displays, a separate
system can be provided for each eye to support stereo vision. Video composition can be done in more
than one way. A simple way is to use chroma-keying, a technique used in many video special effects.
The background of the computer graphics images is set to a specific color, say green, which none of
the virtual objects use. Then the combining step replaces all green areas with the corresponding parts
from the video of the real world. This has the effect of superimposing the virtual objects over the real
world. A more sophisticated composition would use depth information at each pixel for the real-world
images; it could combine the real and virtual images by a pixel-by-pixel depth comparison. This would
allow real objects to cover virtual objects and vice-versa.
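The two composition strategies just described can be illustrated with a short, hedged sketch in Python/NumPy, treating video frames simply as H x W x 3 colour arrays and depth maps as H x W arrays; the green key colour and the toy frames are arbitrary choices for the example.

```python
# Rough sketch of the two composition strategies, using NumPy arrays
# as stand-in video frames (H x W x 3 colour, H x W depth).
import numpy as np

def chroma_key(graphics, camera, key=(0, 255, 0)):
    """Replace every 'key'-coloured pixel of the rendered graphics
    with the corresponding pixel from the real-world camera frame."""
    mask = np.all(graphics == np.array(key, dtype=graphics.dtype), axis=-1)
    out = graphics.copy()
    out[mask] = camera[mask]
    return out

def depth_composite(graphics, g_depth, camera, c_depth):
    """Per-pixel depth comparison: whichever surface is nearer wins,
    so real objects can occlude virtual ones and vice versa."""
    nearer_real = c_depth < g_depth
    out = graphics.copy()
    out[nearer_real] = camera[nearer_real]
    return out

# Tiny 1x2 example: the left pixel is key-coloured, the right one is virtual.
gfx = np.array([[[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
cam = np.array([[[10, 20, 30], [40, 50, 60]]], dtype=np.uint8)
print(chroma_key(gfx, cam))   # the left pixel now comes from the camera frame
```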
3.1.3 Comparison of optical see-through and video see-through displays
Each approach to see-through display design has its pluses and minuses. Optical see-through
systems allow the user to see the real world with full resolution and field of view. But the overlaid
graphics in current optical see-through systems are not opaque and therefore cannot completely
obscure the physical objects behind them. As a result, the superimposed text may be hard to read against
some backgrounds, and three-dimensional graphics may not produce a convincing illusion.
Furthermore, although the eye focuses physical objects at different depths depending on their distance,
virtual objects are all focused in the plane of the display. This means that a virtual object that is intended
to be at the same position as a physical object may have a geometrically correct projection, yet the user
may not be able to view both objects in focus at the same time.
In video see-through systems, virtual objects can fully obscure physical ones and can be
combined with them using a rich variety of graphical effects. There is also no discrepancy between how
the eye focuses virtual and physical objects, because both are viewed on the same plane. The limitations
of current video technology, however, mean that the quality of the visual experience of the real world is
significantly decreased, essentially to the level of the synthesized graphics, with everything focusing at
the same apparent distance. At present, a video camera and display are no match for the human eye.
An optical approach has the following advantages over a video approach:
1. Simplicity: Optical blending is simpler and cheaper than video blending. Optical approaches have
only one "stream" of video to worry about: the graphic images. The real world is seen directly through
the combiners, and that time delay is generally a few nanoseconds. Video blending, on the other hand,
must deal with separate video streams for the real and virtual images. The two streams of real and
virtual images must be properly synchronized or temporal distortion results. Also, optical see-through
HMD's with narrow field-of-view combiners offer views of the real world that have little distortion.
Video cameras almost always have some amount of distortion that must be compensated for, along
with any distortion from the optics in front of the display devices. Since video requires cameras and
combiners that optical approaches do not need, video will probably be more expensive and
complicated to build than optical-based systems.
2. Resolution: Video blending limits the resolution of what the user sees, both real and virtual, to the
resolution of the display devices. With current displays, this resolution is far less than the resolving
power of the fovea. Optical see-through also shows the graphic images at the resolution of the display
devices, but the user's view of the real world is not degraded. Thus, video reduces the resolution of the
real world, while optical see-through does not.
3. Safety: Video see-through HMD's are essentially modified closed-view HMD's. If the power is cut
off, the user is effectively blind. This is a safety concern in some applications. In contrast, when power
is removed from an optical see-through HMD, the user still has a direct view of the real world. The
HMD then becomes a pair of heavy sunglasses, but the user can still see.
4. No eye offset: With video see-through, the user's view of the real world is provided by the video
cameras. In essence, this puts his "eyes" where the video cameras are. In most configurations the
cameras are not located exactly where the user's eyes are, creating an offset between the cameras and
the real eyes. The distance separating the cameras may also not be exactly the same as the user's
interpupillary distance (IPD). This difference between camera locations and eye locations introduces
displacements from what the user sees compared to what he expects to see. For example, if the cameras
are above the user's eyes, he will see the world from a vantage point slightly taller than he is used to.
Video blending offers the following advantages over optical blending:
1. Flexibility in composition strategies: A basic problem with optical see-through is that the virtual
objects do not completely obscure the real-world objects, because the optical combiners allow light
from both virtual and real sources. Building an optical see-through HMD that can selectively shut out
the light from the real world is difficult. Any filter that would selectively block out light must be placed
in the optical path at a point where the image is in focus, which obviously cannot be the user's eye.
Therefore, the optical system must have two places where the image is in focus: at the user's eye and
at the point of the hypothetical filter. This makes the optical design much more difficult and complex. No
existing optical see-through HMD blocks incoming light in this fashion. Thus, the virtual objects
appear ghost-like and semi-transparent. This damages the illusion of reality because occlusion is one
of the strongest depth cues. In contrast, video see-through is far more flexible about how it merges the
real and virtual images. Since both the real and virtual are available in digital form, video see-through
compositors can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the two
to simulate transparency.
2. Wide field-of-view: Distortions in optical systems are a function of the radial distance away from
the optical axis. The further one looks away from the center of the view, the larger the distortions get.
A digitized image taken through a distorted optical system can be undistorted by applying image-
processing techniques to unwarp the image, provided that the optical distortion is well characterized.
This requires a significant amount of computation, but this constraint will be less important in the future
as computers become faster. It is harder to build wide field-of-view displays with optical see-through
techniques. Any distortions of the user's view of the real world must be corrected optically, rather than
digitally, because the system has no digitized image of the real world to manipulate. Complex optics
are expensive and add weight to the HMD. Wide field-of-view systems are an exception to the general
trend of optical approaches being simpler and cheaper than video approaches.
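As an illustration of digitally unwarping a camera frame, the sketch below uses OpenCV's standard radial/tangential distortion model; the file name, camera matrix and distortion coefficients are hypothetical and would normally come from a one-time camera calibration.

```python
# Sketch of digitally unwarping a distorted camera frame with OpenCV.
# Camera matrix and distortion coefficients here are made up for illustration.
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png")          # the distorted video image (placeholder file)

fx = fy = 800.0                                 # focal length in pixels
cx, cy = 640.0, 360.0                           # principal point
camera_matrix = np.array([[fx, 0, cx],
                          [0, fy, cy],
                          [0,  0,  1]], dtype=np.float64)
dist_coeffs = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("camera_frame_undistorted.png", undistorted)
```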
3. Real and virtual view delays can be matched: Video offers an approach for reducing or avoiding
problems caused by temporal mismatches between the real and virtual images. Optical see-through
HMD's offer an almost instantaneous view of the real world but a delayed view of the virtual. This
temporal mismatch can cause problems. With video approaches, it is possible to delay the video of the
real world to match the delay from the virtual image stream.
4. Additional registration strategies: In optical see-through, the only information the system has
about the user's head location comes from the head tracker. Video blending provides another source of
information: the digitized image of the real scene. This digitized image means that video approaches
can employ additional registration strategies unavailable to optical approaches.
5. Easier to match the brightness of real and virtual objects: Both optical and video
technologies have their roles, and the choice of technology depends upon the application requirements.
Many of the assembly and repair prototypes use optical approaches, possibly because of the
cost and safety issues. If successful, the equipment would have to be replicated in large numbers to
equip workers on a factory floor. In contrast, most of the prototypes for medical applications use video
approaches, probably for the flexibility in blending real and virtual and for the additional registration
strategies offered.
3.2 Tracking and Orientation
The biggest challenge facing developers of augmented reality is the need to know where the user
is located in reference to his or her surroundings. There is also the additional problem of tracking the
movement of the user's eyes and head. A tracking system has to recognize these movements and project
the graphics related to the real-world environment the user is seeing at any given moment. Currently,
both video see-through and optical see-through displays typically have lag in the overlaid material due
to the tracking technologies currently available.
3.2.1 Indoor Tracking
Tracking is easier in small spaces than in large spaces. Trackers typically have two parts: one
worn by the tracked person or object and the other built into the surrounding environment, usually within
the same room. In optical trackers, the targets (LED's or reflectors, for instance) can be attached to the
tracked person or object, and an array of optical sensors can be embedded in the room's ceiling.
Alternatively, the tracked users can wear the sensors, and targets can be fixed to the ceiling. By
calculating the distance to each visible target, the sensors can determine the user's position and
orientation.
Researchers at the University of North Carolina at Chapel Hill have developed a very precise
system that works within 500 square feet. The HiBall Tracking System is an optoelectronic tracking
system made of two parts:
• Six user-mounted optical sensors.
• Infrared light-emitting diodes (LED's) embedded in special ceiling panels.
The system uses the known location of the LED's, the known geometry of the user-mounted optical
sensors and a special algorithm to compute and report the user's position and orientation. The system
resolves linear motion of less than 0.2 millimeters and angular motion of less than 0.03 degrees. It has
an update rate of more than 1500 Hz, and latency is kept at about one millisecond. In everyday life,
people rely on several senses, including what they see, cues from their inner ears and gravity's pull on
their bodies, to maintain their bearings. In a similar fashion, "hybrid trackers" draw on several
sources of sensory information. For example, the wearer of an AR display can be equipped with
inertial sensors (gyroscopes and accelerometers) to record changes in head orientation. Combining this
information with data from optical, video or ultrasonic devices greatly improves the accuracy of
tracking.
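The idea behind such hybrid trackers can be shown with a deliberately simplified sketch: a complementary filter that integrates a fast but drifting gyroscope every frame and nudges the result toward an occasional absolute optical fix. A single yaw angle stands in for full 3-D orientation, and all the numbers are invented for the example.

```python
# Toy complementary filter: fuse fast-but-drifting gyro integration with
# occasional absolute optical fixes. Not a real tracker, just the principle.
def fuse_orientation(yaw_prev, gyro_rate, dt, optical_yaw=None, blend=0.5):
    """Integrate the gyro every frame; pull toward the optical fix when one arrives."""
    yaw = yaw_prev + gyro_rate * dt              # high-rate inertial update
    if optical_yaw is not None:                  # low-rate absolute correction
        yaw = (1.0 - blend) * yaw + blend * optical_yaw
    return yaw

# Demo: the head is actually still, but the gyro has a 0.5 deg/s bias.
yaw = 0.0
for step in range(1000):                         # 10 s of frames at 100 Hz
    optical = 0.0 if step % 100 == 0 else None   # an absolute fix once per second
    yaw = fuse_orientation(yaw, gyro_rate=0.5, dt=0.01, optical_yaw=optical)
print(round(yaw, 3))   # much smaller than the ~5 degrees of uncorrected drift
```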
3.2.2 Outdoor Tracking
Head orientation is determined with a commercially available hybrid tracker that combines
gyroscopes and accelerometers with magnetometers that measure the earth's magnetic field. For
position tracking we take advantage of a high-precision version of the increasingly popular Global
Positioning System receiver.
A GPS receiver can determine its position by monitoring radio signals from navigation
satellites. GPS receivers have an accuracy of about 10 to 30 meters. An augmented reality system
would be worthless if the graphics projected were of something 10 to 30 meters away from what you
were actually looking at.
Users can get better results with a technique known as differential GPS. In this method, the
mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed
location on the earth. This transmitter broadcasts corrections based on the difference between the
stationary GPS antenna's known and computed positions. By using these signals to correct the satellite
signals, differential GPS can reduce the margin of error to less than one meter.
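A toy sketch of the differential-GPS correction, using invented local east/north coordinates in metres, shows the arithmetic involved: the base station's measured error is simply subtracted from the rover's raw fix.

```python
# Sketch of the differential-GPS idea: the base station knows its true position,
# computes a position from the same satellites, and broadcasts the difference;
# the rover subtracts that error from its own computed fix.
# Coordinates are hypothetical local east/north metres.
def dgps_correct(rover_computed, base_computed, base_known):
    """Apply the base station's measured error to the rover's raw fix."""
    error = (base_computed[0] - base_known[0], base_computed[1] - base_known[1])
    return (rover_computed[0] - error[0], rover_computed[1] - error[1])

base_known    = (1000.00, 2000.00)   # surveyed position of the reference antenna
base_computed = (1004.20, 1997.10)   # what GPS currently reports there
rover_raw     = (1250.70, 2310.40)   # uncorrected rover fix
print(dgps_correct(rover_raw, base_computed, base_known))
# approximately (1246.5, 2313.3): the shared atmospheric/orbit error is largely removed
```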
The system is able to achieve centimeter-level accuracy by employing real-time
kinematic GPS, a more sophisticated form of differential GPS that also compares the phases of the
signals at the fixed and mobile receivers. Trimble Navigation reports that they have increased the
precision of their global positioning system (GPS) by replacing local reference stations with what they
term a Virtual Reference Station (VRS). This new VRS will enable users to obtain centimeter-level
positioning without local reference stations; it can achieve long-range, real-time kinematic (RTK)
precision over greater distances via wireless communications wherever they are located. The real-time
kinematic technique is a way to use GPS measurements to generate positioning within one to two
centimeters (0.39 to 0.79 inches). RTK is often used as the key component in navigational systems or
automatic machine guidance.
Unfortunately, GPS is not the ultimate answer to position tracking. The satellite signals are
relatively weak and easily blocked by buildings or even foliage. This rules out useful tracking indoors
or in places like midtown Manhattan, where rows of tall buildings block most of the sky. GPS tracking
works well in wide open spaces with relatively low buildings.
3.3 Mobile Computing Power
For a wearable augmented reality system, there is still not enough computing power to create
stereo 3-D graphics. So researchers are using whatever they can get out of laptops and personal
computers, for now. Laptops are just now starting to be equipped with graphics processing units
(GPU's). Toshiba just added an NVIDIA chip to their notebooks that is able to process more than
17 million triangles per second and 286 million pixels per second, which can enable graphics-intensive
programs, such as 3-D games. But notebooks still lag far behind: NVIDIA has developed a custom
300-MHz 3-D graphics processor for Microsoft's Xbox game console that can produce 150 million
polygons per second, and polygons are more complicated than triangles. So you can see how far
mobile graphics chips have to go before they can create smooth graphics like the ones you see on your
home video-game system.
4. TYPES OF AUGMENTED REALITY
4.1 Projection-based AR
Just like anything else that is beyond our reach, projection-based AR feels more attractive (at
least as of now) compared to an AR app you can install on your phone. As is obvious from its name,
projection-based AR functions by projecting onto objects. What makes it interesting is the wide
array of possibilities. One of the simplest is the projection of light onto a surface. Speaking of lights,
surfaces and AR, did you ever think those lines on your fingers (which divide each finger into three
parts) could create 12 buttons? Have a look at the image and you will quickly grasp what we're talking
about. The picture depicts one of the simplest uses of projection-based AR, where light is fired onto a
surface and the interaction is done by touching the projected surface with a hand. The detection of where
the user has touched the surface is done by differentiating between an expected (or known) projection
image and the projection altered by the interference of the user's hand.
Projection-based AR can build you a castle in the air, or make a dialler on your hand
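A rough sketch of this touch-detection idea is shown below in Python with OpenCV: the frame the camera actually sees is differenced against the frame that was expected (the known projected image), and the largest region of disagreement is taken to be the user's hand. The file names are placeholders, and the sketch assumes the camera view has already been aligned with the projector's frame.

```python
# Touch detection for projection-based AR via image differencing (sketch only).
import cv2

expected = cv2.imread("expected_projection.png", cv2.IMREAD_GRAYSCALE)  # known projected image
observed = cv2.imread("camera_view.png", cv2.IMREAD_GRAYSCALE)          # what the camera sees

diff = cv2.absdiff(observed, expected)                  # where reality departs from the projection
_, occluded = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
occluded = cv2.medianBlur(occluded, 5)                  # suppress pixel noise

# The largest blob of disagreement is assumed to be the hand touching the surface.
contours, _ = cv2.findContours(occluded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    print("touch region roughly at", (x + w // 2, y + h // 2))
```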
4.2 Recognition-based AR
Recognition-based AR focuses on recognition of objects and then provides us more
information about the object. For example, when using your mobile phone to scan a barcode or QR code, you
actually use object-recognition technology. In fact, except for location-based AR systems, all other types
do use some form of recognition to detect the kind of object over which augmentation has to be
done. Recognition-based AR technology has varied uses as well. One of them is to detect the object in
front of the camera and provide information about it on screen. This is similar to the
AR apps for travellers (location browsers). However, the difference lies in the fact that AR location
browsers usually do not know about the objects that they see, while recognition-based AR apps do. A
second type of recognition-based AR application recognizes the object and replaces it with
something else.
Augmented Reality for simulating chemical formulae
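A minimal recognition-based example, assuming Python with OpenCV, is scanning a frame for a QR code and using the decoded text to decide what to overlay; the image file and the lookup table here are made up for illustration.

```python
# Sketch of recognition-based AR: detect and decode a QR code, then look up
# what to overlay. File name and overlay table are hypothetical.
import cv2

frame = cv2.imread("camera_frame.png")
detector = cv2.QRCodeDetector()
text, points, _ = detector.detectAndDecode(frame)

overlays = {"chem:H2O": "render 3-D water molecule", "chem:CO2": "render CO2 model"}
if text:
    print("recognised:", text, "->", overlays.get(text, "no overlay registered"))
    # 'points' gives the code's corner positions, so a 3-D model can be anchored to it.
```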
4.3 Outlining AR
Though the human eye is known to be the best camera in the world, there are limitations. We
cannot look at things for too long. We cannot see well in low-light conditions and, sure as anything,
your eye cannot see in infrared. For such cases, special cameras were built. Augmented reality apps
which perform outlining use such cameras. Once again, object recognition sits behind all that outlining
AR can do. Let us begin with a life-saving implementation example. When driving a car on a road
in foggy weather, the boundaries of the road may not be very visible to the human eye, leading to
mishaps. Advanced cameras tuned specially to see the surroundings in low-light conditions can be used
to outline the road boundaries within which the car should stay. Such a system would prove very useful
in avoiding accidents. With extra sensors capable of detecting objects around (e.g. by using
ultrasound), the overall risk of hitting a living object can be minimized as well. The technology can
help save pedestrian lives as well. Outlining people crossing the road on a HUD (Heads-Up
Display) windscreen can be more useful than having a separate infrared video feed.
Outlining the road can help save you from a mishap
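The outlining step itself can be approximated with an ordinary edge detector. The sketch below, in Python with OpenCV, runs Canny edge detection over a hypothetical low-light or infrared frame and paints the detected boundaries over the view; thresholds and file names are illustrative only.

```python
# Sketch of outlining AR: detect edges in a low-light / infrared frame and
# draw them on top of the view. Thresholds and file names are placeholders.
import cv2

road = cv2.imread("ir_road_frame.png")
gray = cv2.cvtColor(road, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)          # tame sensor noise first
edges = cv2.Canny(gray, threshold1=50, threshold2=150)

outlined = road.copy()
outlined[edges > 0] = (0, 255, 255)               # paint detected edges bright yellow
cv2.imwrite("road_outlined.png", outlined)
```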
5. APPLICATIONS
Only recently have the capabilities of real-time video image processing, computer graphics
systems and new display technologies converged to make possible the display of a virtual graphical
image correctly registered with a view of the 3-D environment surrounding the user. Researchers
working with AR systems have proposed them as solutions in many domains. The areas discussed
range from entertainment to military training. Many of the domains, such as medical, are also
proposed for traditional virtual reality systems. This section will highlight some of the proposed
applications of augmented reality.
5.1 Medical
Because imaging technology is so pervasive throughout the medical field, it is not surprising
that this domain is viewed as one of the more important ones for augmented reality systems. Most of the
medical applications deal with image-guided surgery. Pre-operative imaging studies, such as CT or MRI
scans of the patient, provide the surgeon with the necessary view of the internal anatomy. From these
images the surgery is planned. Visualization of the path through the anatomy to the affected area
where, for example, a tumor must be removed is done by first creating a 3-D model from the multiple
views and slices in the preoperative study. This is most often done mentally, though some systems will
create 3-D volume visualizations from the image study. AR can be applied so that the surgical team can
see the CT or MRI data correctly registered on the patient in the operating theater while the procedure
is progressing. Being able to accurately register the images at this point will enhance the performance
of the surgical team.
Another application for AR in the medical domain is in ultrasound imaging. Using an optical
see-through display, the ultrasound technician can view a volumetric rendered image of the fetus
overlaid on the abdomen of the pregnant woman. The image appears as if it were inside the
abdomen and is correctly rendered as the user moves.
Fig 6: Virtual fetus inside womb of pregnant patient.
Fig 7: Mockup of breast tumor biopsy. 3-D graphics guide needle insertion.
5.2 Entertainment
A simple form of augmented reality has been in use in the entertainment and news business
for quite some time. Whenever you are watching the evening weather report, the weather reporter is
shown standing in front of changing weather maps. In the studio, the reporter is actually standing in front of
a blue or green screen. This real image is augmented with the computer-generated maps using a
technique called chroma-keying. It is also possible to create a virtual studio environment so that the
actors appear to be positioned in a studio with computer-generated decor.
Movie special effects make use of digital computing to create illusions. Strictly speaking, with
current technology this may not be considered augmented reality because it is not generated in
real time. Most special effects are created off-line, frame by frame, with a substantial amount of user
interaction and computer-graphics system rendering. But some work is progressing in computer
analysis of live-action images to determine the camera parameters and use this to drive the
generation of the virtual graphics objects to be merged.
Princeton Electronics Billboard has developed an augmented reality system that allows
broadcasters to insert advertisements into specific areas of the broadcast image. For example, while
broadcasting a baseball game this system would be able to place an advertisement in the image so that
it appears on the outfield wall of the stadium. By using pre-specified reference points in the stadium,
the system automatically determines the camera angles being used and, referring to the pre-defined
stadium map, inserts the advertisement into the correct place. ARQuake, designed using the same
platform, blends users in the real world with those in a purely virtual environment. A mobile AR user
plays as a combatant in the computer game Quake, where the game runs with a virtual model of the
real environment.
Fig 8: AR in sports broadcasting. The annotations on the race cars and the yellow first-down
line are inserted into the broadcast in real time.
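The reference-point idea can be sketched with a planar homography: four surveyed points on the outfield wall, located in the current frame, define a perspective warp that pastes the advertisement onto the wall. The coordinates and file names below are made up, and a real system would also have to track the points and handle occlusion, which this sketch ignores.

```python
# Illustrative sketch of virtual advertisement insertion with a homography.
import cv2
import numpy as np

frame = cv2.imread("broadcast_frame.png")
ad = cv2.imread("advertisement.png")
h, w = ad.shape[:2]

ad_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])          # corners of the ad image
wall_in_frame = np.float32([[420, 180], [760, 195], [755, 310], [415, 290]])  # wall corners in this frame

H = cv2.getPerspectiveTransform(ad_corners, wall_in_frame)
warped = cv2.warpPerspective(ad, H, (frame.shape[1], frame.shape[0]))

mask = warped.sum(axis=2) > 0        # where the warped ad has pixels (treats pure black as transparent)
frame[mask] = warped[mask]
cv2.imwrite("broadcast_with_ad.png", frame)
```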
5.3 Engineering Design
Imagine that a group of designers is working on the model of a complex device for their
clients. The designers and clients want to do joint design reviews even though they are physically
separated. If each of them had a conference room equipped with an augmented reality
display, this could be accomplished. The physical prototype that the designers have mocked up is
imaged and displayed in the client's conference room in 3-D. The clients can walk around the display,
looking at different aspects of it. To hold a discussion, the client can point at the prototype to
highlight sections, and this will be reflected on the real model in the augmented display that the
designers are using. Or perhaps in an earlier stage of the design, before a prototype is built, the view in
each conference room is augmented with a computer-generated image of the current design built from
the CAD file describing it. This would allow real-time interaction with elements of the design so that
either side can make adjustments and changes that are reflected in the view seen by both groups.
5.4 Robotics and Telerobotics
In the domain of robotics and telerobotics, an augmented display can assist the user of the
system. A telerobotics operator uses a visual image of the remote workspace to guide the robot.
Annotation of the view would still be useful, just as it is when the scene is in front of the operator. There
is an added potential benefit. Since the view of the remote scene is often monoscopic, augmentation
with wire-frame drawings of structures in the view can facilitate visualization of the remote 3-D
geometry. If the operator is attempting a motion, it could be practiced on a virtual robot that is
visualized as an augmentation to the real scene. The operator can decide to proceed with the motion
after seeing the results. The robot motion could then be executed directly, which in a telerobotics
application would eliminate any oscillations caused by long delays to the remote site.
Fig 9: Virtual lines show a planned motion of a robot arm
5.6 Consumer Design
Virtual reality systems are already used for consumer design. Using perhaps more of a graphics
system than virtual reality, when you go to the typical home store wanting to add a new deck to your
house, they will show you a graphical picture of what the deck will look like. It is conceivable that a
future system would allow you to bring in a videotape of your house shot from various viewpoints in
your backyard, and in real time it would augment that view to show the new deck in its finished form
attached to your house. Or bring in a tape of your current kitchen, and the augmented-reality processor
would replace your current kitchen cabinetry with virtual images of the new kitchen that you are
designing.
6. ADVANTAGES AND LIMITATIONS
Advantages
• Augmented Reality is set to revolutionize the mobile user experience as gesture and touch
(multi-modal interaction) did in mobile phones. This will redefine the mobile user experience for
the next generation, making mobile search invisible and reducing search effort for users.
• Augmented Reality, like multi-modal interaction (gestural interfaces), has a long history of
usability research, analysis and experimentation and therefore has a solid history as an
interface technique.
• Augmented Reality improves mobile usability by acting as the interface itself, requiring little
interaction. Imagine turning on your phone or pressing a button where the space, people and
objects around you are "sensed" by your mobile device, giving you location-based or context-
sensitive information on the fly.
Limitations
• Current performance levels (speed) on today's [2009] iPhone or similar touch devices like the
Google G1 will take a few generations to make Augmented Reality feasible as a general
interface technique accessible to the general public.
• Content may obscure or narrow a user's interests or tastes. For example, knowing where
McDonald's or Starbucks is in Paris or Rome might not interest users as much as the "off the
beaten track" information that you might seek out in travel experiences.
• Privacy control will become a bigger issue than with today's information-saturation
levels. Walking up to a stranger or a group of people might reveal status, thoughts (tweets), or
other information that usually comes only with an introduction, causing unwarranted breaches
of privacy.
7. FUTURE ENHANCEMENTS
This section identifies areas and approaches that require further research to produce
improved AR systems.
Hybrid approaches
Future tracking systems may be hybrids, because combining approaches can cover
weaknesses. The same may be true for other problems in AR. For example, current registration
strategies generally focus on a single strategy. Future systems may be more robust if several
techniques are combined. An example is combining vision-based techniques with prediction. If the
fiducials are not available, the system switches to open-loop prediction to reduce the registration
errors, rather than breaking down completely. The predicted viewpoints in turn produce a more
accurate initial location estimate for the vision-based techniques.
Portability
It is essential that potential AR applications give the user the ability to walk around large
environments, even outdoors. This requires making the equipment self-contained and portable.
Existing tracking technology is not capable of tracking a user outdoors at the required accuracy.
Multimodal displays
Almost all work in AR has focused on the visual sense: virtual graphic objects and overlays.
But augmentation might apply to all other senses as well. In particular, adding and removing 3-D
sound is a capability that could be useful in some AR applications.
Perceptual and psychophysical studies
Augmented reality is an area ripe for psychophysical studies. How much lag can a user detect?
How much registration error is detectable when the head is moving? Besides questions on perception,
psychological experiments that explore performance issues are also needed. How much does
head-motion prediction improve user performance on a specific task? How much registration error is
tolerable for a specific application before performance on that task degrades substantially? Is the
allowable error larger while the user moves her head versus when she stands still? Furthermore, not
much is known about potential optical illusions caused by errors or conflicts in the simultaneous display
of real and virtual objects.
8. CONCLUSION
Augmented reality is far behind Virtual Environments in maturity. Several commercial
vendors sell complete, turnkey Virtual Environment systems. However, no commercial vendor
currently sells an HMD-based Augmented Reality system. A few monitor-based "virtual set" systems
are available, but today AR systems are primarily found in academic and industrial research
laboratories.
The first deployed HMD-based AR systems will probably be in the application of aircraft
manufacturing. Both Boeing and McDonnell Douglas are exploring this technology. The former uses
optical approaches, while the latter is pursuing video approaches. Boeing has performed trial runs with
workers using a prototype system but has not yet made any deployment decisions. Annotation and
visualization applications in restricted, limited range environments are deployable today, although
much more work needs to be done to make them cost effective and flexible.
Applications in medical visualization will take longer. Prototype visualization aids have been
used on an experimental basis, but the stringent registration requirements and ramifications of
mistakes will postpone common usage for many years. AR will probably be used for medical training
before it is commonly used in surgery.
The next generation of combat aircraft will have Helmet Mounted Sights with graphics
registered to targets in the environment. These displays, combined with short-range steerable missiles
that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights.
Instead of having to be directly behind his target in order to shoot at it, a pilot can now shoot at
anything within a 60-90 degree cone of his aircraft's forward centerline. Russia and Israel currently
have systems with this capability, and the U.S is expected to field the AIM-9X missile with its
associated Helmet-mounted sight in 2002.
Augmented Reality is a relatively new field, where most of the research efforts have occurred
in the past four years. Because of the numerous challenges and unexplored avenues in this area, AR
will remain a vibrant area of research for at least the next several years.
After the basic problems with AR are solved, the ultimate goal will be to generate virtual objects that
are so realistic that they are virtually indistinguishable from the real environment. Photorealism has
been demonstrated in feature films, but accomplishing this in an interactive application will be much
harder. Lighting conditions, surface reflections, and other properties must be measured automatically,
in real time.
9. REFERENCES
• Augmented Reality, Wikipedia. http://en.wikipedia.org/wiki/Augmented_reality
• Augmented Reality: A Practical Guide (2008). http://media.pragprog.com/titles/cfar/intro.pdf
• http://arcadia.eafit.edu.co/Publications/AugmentedRealityIADATEnglish.pdf
• http://upcommons.upc.edu/eprints/bitstream/2117/9839/1/IEM_number16_WN.2.pdf
• International Conference on Engineering Education & Research, Korea (2009). http://robot.kut.ac.kr/papers/DeveEduVirtual.pdf
• Jochim, S.: Augmented Reality in Modern Education. http://augmentedrealitydevelopmentlab.com/wpcontent/uploads/2010/08/ARDLArticle8.5-11Small.pdf
• Blalock, J., Carringer, J.: Augmented Reality Applications for Environmental Designers. In E. Pearson & P. Bohman (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications, pp. 2757-2762. Chesapeake, VA: AACE (2006).
• Dix, J., Finlay, J., Abowd, D., Beale, R.: Human-Computer Interaction. Third Edition, Pearson: Prentice Hall Europe (2004).
• Valenzuela, D., Shrivastava, P.: Interview as a Method for Qualitative Research. Presentation. http://www.public.asu.edu/~kroel/www500/Interview%20Fri.pdf
• Thomas, W.: A Review of Research on Project Based Learning. March (2000). http://www.bobpearlman.org/BestPractices/PBL_Research.pdf
• Shtereva, K., Ivanova, M., Raykov, P.: Project Based Learning in Microelectronics: Utilizing ICAPP. Interactive Computer Aided Learning Conference, 23-25 September, Villach, Austria (2009).
Tobias Schneck
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 

Recently uploaded (20)

Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 

AUGMENTED REALITY Documentation

  • 1. PDIT TIRUPATI AUGMENTED REALITY 1 ABSTRACT Video games have been entertaining us for nearly 30 years, ever since Pong was introduced to arcades in the early In 1970's. Computer graphics have become much more sophisticated since then. And soon, game graphics will seem all too real. In the next decade, researchers plan to pull graphics out of your television screen or computer display and integrate them into real- world environments. This new technology called augmented reality, will further blur the line between what is real and what is computer-generated by enhancing what we see, hear, feel and smell. Augmented reality will truly change the way we view the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with what very you see. These enhancements will be refreshed continually to reflect the moments of your head. Augmented reality is still in the early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade we will see the first mass-marketed augmented-reality system, which can be described as "the Walkman of the 21st Century". Augmented reality (AR) is the altering your perception of the world by wrapping a layer around reality. A layer of augmentation. They say reality is a hard pill to swallow. So how about reality served with a sugar coating of information? Augmented Reality can help in multiple ways – provide critical information about your surroundings, give you your personal babel fish, help soldiers survive wars better, help law enforcement officials catch criminals, avoid car accidents, train surgeons, and much more. But AR is not just about this, it can be fun too! Yep it has limitless possibilities in gaming Should you be interested in not just experiencing AR but also creating that same magic, the last chapter will help you take your first steps into AR programming. We tell you about the domain knowledge required for starting off with each type of AR implementation and we end with an introduction to some of the most useful frameworks / toolkits available to begin your journey.
Combining direct view, stereoscopic video and stereoscopic graphics, Augmented Reality describes the class of displays that consists primarily of a real-world environment with graphic enhancements or augmentations. In Augmented Virtuality, real objects are added to a virtual environment; in Augmented Reality, virtual objects are added to the real world. An AR system supplements the real world with virtual (computer-generated) objects that appear to co-exist in the same space as the real world, whereas Virtual Reality is an entirely synthetic environment.
Figure: The reality-virtuality continuum, from Real Environment through Augmented Reality and Augmented Virtuality to Virtual Environment.
1.1 Comparison between AR and virtual environments
The overall requirements of AR can be summarized by comparing them against the requirements for Virtual Environments (VE), for the three basic subsystems that they require.
1. Scene generator: Rendering is not currently one of the major problems in AR. VE systems have much higher requirements for realistic images because they completely replace the real world with the virtual environment. In AR, the virtual images only supplement the real world. Therefore, fewer virtual objects need to be drawn, and they do not necessarily have to be realistically rendered in order to serve the purposes of the application.
2. Display devices: The display devices used in AR may have less stringent requirements than VE systems demand, again because AR does not replace the real world. For example, monochrome displays may be adequate for some AR applications, while virtually all VE systems today use full color. Optical see-through HMDs with a small field of view may be satisfactory because the user can still see the real world with his peripheral vision; the see-through HMD does not shut off the user's normal field of view. Furthermore, the resolution of the monitor in an optical see-through HMD might be lower than what a user would tolerate in a VE application, since the optical see-through HMD does not reduce the resolution of the real environment.
3. Tracking and sensing: While in the previous two cases AR had lower requirements than VE, that is not the case for tracking and sensing. In this area, the requirements for AR are much stricter than those for VE systems. A major reason for this is the registration problem.

Basic subsystem          VR               AR
Scene generator          More advanced    Less advanced
Display device           High quality     Low quality
Tracking and sensing     Less advanced    More advanced

Table No. 1: Comparison of the requirements of Augmented Reality and Virtual Reality
2. HISTORY
• Although augmented reality may seem like the stuff of science fiction, researchers have been building prototype systems for more than three decades. The first was developed in the 1960s by computer graphics pioneer Ivan Sutherland and his students at Harvard University.
• In the 1970s and 1980s a small number of researchers studied augmented reality at institutions such as the U.S. Air Force's Armstrong Laboratory, the NASA Ames Research Center and the University of North Carolina at Chapel Hill.
• It wasn't until the early 1990s that the term "Augmented Reality" was coined by scientists at Boeing who were developing an experimental AR system to help workers assemble wiring harnesses.
• In 1996 developers at Columbia University developed 'The Touring Machine'.
• In 2001 MIT came up with a very compact AR system known as "MIThril".
• Presently, research is being done on developing BARS (Battlefield Augmented Reality Systems) by engineers at the Naval Research Laboratory, Washington D.C.
3. WORKING
An AR system tracks the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a three-dimensional image of a tea cup, for example, on top of a real saucer and keep the virtual cup fixed in that position as the user moves about the room (a small code sketch of this projection step is given at the end of this overview). AR systems employ some of the same hardware technologies used in virtual reality research, but there is a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.
Augmented Reality is still in an early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented reality system, which one researcher calls "The Walkman of the 21st century". What augmented reality attempts to do is not only superimpose graphics over a real environment in real time, but also change those graphics to accommodate a user's head and eye movements, so that the graphics always fit the perspective. Here are the three components needed to make an augmented-reality system work:
• Head-mounted display
• Tracking system
• Mobile computing power
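To make the registration step described above concrete, the following is a minimal sketch. It assumes a simple pinhole camera model and a hypothetical head tracker that reports the camera (head) pose as a rotation matrix and translation vector; the intrinsic parameters are illustrative placeholders, not values from any real device.

```python
import numpy as np

# Hypothetical camera intrinsics (focal length and principal point, in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_virtual_point(p_world, R_wc, t_wc):
    """Project a world-anchored virtual point into the current camera image.

    p_world : 3-vector, position of the virtual object (e.g. the tea cup) in world coordinates.
    R_wc, t_wc : rotation and translation mapping world coordinates into camera coordinates,
                 as reported by the head tracker for the current frame.
    Returns (u, v), the pixel at which the virtual object should be drawn.
    """
    p_cam = R_wc @ p_world + t_wc          # world -> camera coordinates
    u, v, w = K @ p_cam                    # pinhole projection
    return u / w, v / w

# Example: a virtual cup anchored 1 m in front of the world origin, viewed by a camera
# at the origin looking down the +Z axis (identity rotation, zero translation).
cup_world = np.array([0.0, 0.0, 1.0])
print(project_virtual_point(cup_world, np.eye(3), np.zeros(3)))
```

Because the pose is re-read every frame, the projected pixel position changes as the user's head moves, which is what keeps the virtual cup visually fixed on the real saucer.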
3.1 Head-Mounted Display
Just as monitors allow us to see text and graphics generated by computers, head-mounted displays (HMDs) will enable us to view graphics and text created by augmented-reality systems. There are two basic types of HMDs:
• Video see-through
• Optical see-through
3.1.1 Optical see-through display
Fig 2: Optical see-through HMD conceptual diagram.
A simple approach to optical see-through display employs a mirror beam splitter, a half-silvered mirror that both reflects and transmits light. If properly oriented in front of the user's eye, the beam splitter can reflect the image of a computer display into the user's line of sight yet still allow light from the surrounding world to pass through. Such beam splitters, which are called combiners, have long been used in head-up displays for fighter-jet pilots (and, more recently, for drivers of luxury cars). Lenses can be placed between the beam splitter and the computer display to focus the image so that it appears at a comfortable viewing distance. If a display and optics are provided for each eye, the view can be in stereo. Sony makes a see-through display that some researchers use, called the "Glasstron".
3.1.2 Video see-through displays
Fig 3: Video see-through HMD conceptual diagram.
In contrast, a video see-through display uses video mixing technology, originally developed for television special effects, to combine the image from a head-worn camera with synthesized graphics. The merged image is typically presented on an opaque head-worn display. With careful design the camera can be positioned so that its optical path is close to that of the user's eye; the video image thus approximates what the user would normally see. As with optical see-through displays, a separate system can be provided for each eye to support stereo vision.
Video composition can be done in more than one way. A simple way is to use chroma-keying, a technique used in many video special effects. The background of the computer graphics image is set to a specific color, say green, which none of the virtual objects use. The combining step then replaces all green areas with the corresponding parts from the video of the real world. This has the effect of superimposing the virtual objects over the real world. A more sophisticated composition would use depth information at each pixel of the real-world image; it could combine the real and virtual images by a pixel-by-pixel depth comparison. This would allow real objects to occlude virtual objects and vice versa.
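As a rough illustration of these two composition strategies, the sketch below combines a rendered frame with a camera frame, first by chroma-keying on a pure-green background and then by a per-pixel depth comparison. The frames, depth buffers and the exact key color are placeholders, not the output of any particular renderer.

```python
import numpy as np

def chroma_key(virtual_rgb, real_rgb, key=(0, 255, 0)):
    """Wherever the rendered frame shows the key color, show the camera frame instead."""
    is_background = np.all(virtual_rgb == np.array(key), axis=-1)   # H x W boolean mask
    out = virtual_rgb.copy()
    out[is_background] = real_rgb[is_background]
    return out

def depth_composite(virtual_rgb, virtual_depth, real_rgb, real_depth):
    """Per-pixel depth test: the nearer surface (real or virtual) wins at each pixel."""
    virtual_in_front = virtual_depth < real_depth                   # H x W boolean mask
    out = real_rgb.copy()
    out[virtual_in_front] = virtual_rgb[virtual_in_front]
    return out

# Tiny 2x2 example frames (RGB uint8) and per-pixel depths in meters.
virtual = np.array([[[0, 255, 0], [255, 0, 0]],
                    [[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
real = np.full((2, 2, 3), 128, dtype=np.uint8)
print(chroma_key(virtual, real))
print(depth_composite(virtual, np.full((2, 2), 2.0), real, np.full((2, 2), 1.0)))
```

The chroma-key version ignores occlusion entirely, while the depth version lets a real object pass in front of a virtual one, which is exactly the distinction drawn in the text.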
  • 8. PDIT,TIRUPATI AUGMENTED REALITY 8 obscure the physical objects behind them. As result, the superimposed text may be hard to read against some backgrounds, and three-dimensional graphics may not produce a convincing illusion. Furthermore, although focus physical objects depending on their distance, virtual objects are alt focused in the plane of the display. This means that a virtual object that is intended to be at the same position as a physical object may have a geometrically correct projection, yet the user may not be able to view both objects in focus at the same time. In video see-through systems, virtual objects can fully obscure physical ones and can be combined with them using a rich variety of graphical effects. There is also discrepancy between how the eye focuses virtual and physical objects, because both are viewed on same plane, the limitations of current video technology, however, mean that the quality of the visual experience of the real world is significantlydecreased, essentially to the levelof the synthesized graphics, witheverything focusingat the same apparent distance. At present, a video camera and display is no match for the human eye. An optical approach has the following advantages over a video approach 1. Simplicity': Optical blending is simpler and cheaper than video blending. Optical approaches have only one "stream" of video to worry about: the graphic images. The real world is seen directly through the combiners, and that time delay is generally a few nanoseconds. Video blending, on the other hand, must deal with separate video streams for the real and virtual images. The two streams of real and virtual images must be properly synchronized or temporal distortion results. Also, optical see through HMD's with narrow field of view combiners offer views of the real world that have little distortion. Video cameras almost always have some amount of distortion that must be compensated for, along with any distortion from the optics in front of the display devices. Since video requires cameras and combiners that optical approaches do not need, video will probably be more expensive and complicated to build than optical based systems. 2. Resolution: Video blending limits the resolution of what the user sees, both real and virtual, to the resolution of the display devices. With current displays, this resolution is far less than the resolving power of the fovea. Optical see-through also shows the graphic images at the resolution of the display devices, but the user's view of the real world is not degraded. Thus, video reduces the resolution of the real world, while optical see-through docs not. 3. Safety: Video see-throughHMD's arc essentially modified closed-view HMD's, If the power is cut off, the user is effectivelyblind. This is a safetyconcern in some applications. In contrast, when power is removed from an optical see-through HMD, the user still has a direct view of the real world. The HMD then becomes a pair of heavy sunglasses, but the user can still see.
4. No eye offset: With video see-through, the user's view of the real world is provided by the video cameras. In essence, this puts his "eyes" where the video cameras are. In most configurations the cameras are not located exactly where the user's eyes are, creating an offset between the cameras and the real eyes. The distance separating the cameras may also not be exactly the same as the user's interpupillary distance (IPD). This difference between camera locations and eye locations introduces displacements from what the user sees compared to what he expects to see. For example, if the cameras are above the user's eyes, he will see the world from a vantage point slightly taller than he is used to.
Video blending offers the following advantages over optical blending:
1. Flexibility in composition strategies: A basic problem with optical see-through is that the virtual objects do not completely obscure the real-world objects, because the optical combiners allow light from both virtual and real sources. Building an optical see-through HMD that can selectively shut out the light from the real world is difficult. Any filter that would selectively block out light must be placed in the optical path at a point where the image is in focus, which obviously cannot be the user's eye. Therefore, the optical system must have two places where the image is in focus: at the user's eye and at the point of the hypothetical filter. This makes the optical design much more difficult and complex. No existing optical see-through HMD blocks incoming light in this fashion. Thus, the virtual objects appear ghost-like and semi-transparent. This damages the illusion of reality because occlusion is one of the strongest depth cues. In contrast, video see-through is far more flexible about how it merges the real and virtual images. Since both the real and virtual are available in digital form, video see-through compositors can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the two to simulate transparency.
2. Wide field-of-view: Distortions in optical systems are a function of the radial distance away from the optical axis. The further one looks away from the center of the view, the larger the distortions get. A digitized image taken through a distorted optical system can be undistorted by applying image processing techniques to unwarp the image, provided that the optical distortion is well characterized (a small sketch of such digital undistortion appears at the end of this section). This requires a significant amount of computation, but this constraint will be less important in the future as computers become faster. It is harder to build wide field-of-view displays with optical see-through techniques. Any distortions of the user's view of the real world must be corrected optically, rather than digitally, because the system has no digitized image of the real world to manipulate. Complex optics is expensive and adds weight to the HMD. Wide field-of-view systems are an exception to the general trend of optical approaches being simpler and cheaper than video approaches.
3. Real and virtual view delays can be matched: Video offers an approach for reducing or avoiding problems caused by temporal mismatches between the real and virtual images. Optical see-through HMDs offer an almost instantaneous view of the real world but a delayed view of the virtual. This temporal mismatch can cause problems. With video approaches, it is possible to delay the video of the real world to match the delay from the virtual image stream.
4. Additional registration strategies: In optical see-through, the only information the system has about the user's head location comes from the head tracker. Video blending provides another source of information: the digitized image of the real scene. This digitized image means that video approaches can employ additional registration strategies unavailable to optical approaches.
5. Easier to match the brightness of real and virtual objects.
Both optical and video technologies have their roles, and the choice of technology depends upon the application requirements. Many of the assembly and repair prototypes use optical approaches, possibly because of the cost and safety issues. If successful, the equipment would have to be replicated in large numbers to equip workers on a factory floor. In contrast, most of the prototypes for medical applications use video approaches, probably for the flexibility in blending real and virtual and for the additional registration strategies offered.
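The digital unwarping mentioned in the wide field-of-view point above is, for instance, what OpenCV's undistortion routine does. The sketch below is a minimal example; the camera matrix, distortion coefficients and file names are made-up placeholders that would normally come from a prior camera calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3); real values
# would come from calibrating the head-worn camera, e.g. with cv2.calibrateCamera.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])

frame = cv2.imread("camera_frame.png")             # one frame from the head-worn camera
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
cv2.imwrite("camera_frame_undistorted.png", undistorted)
```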
3.2 Tracking and Orientation
The biggest challenge facing developers of augmented reality is the need to know where the user is located in reference to his or her surroundings. There is also the additional problem of tracking the movement of the user's eyes and head. A tracking system has to recognize these movements and project the graphics related to the real-world environment the user is seeing at any given moment. Currently both video see-through and optical see-through displays typically have lag in the overlaid material due to the tracking technologies currently available.
3.2.1 Indoor Tracking
Tracking is easier in small spaces than in large spaces. Trackers typically have two parts: one worn by the tracked person or object, and the other built into the surrounding environment, usually within the same room. In optical trackers, the targets (LEDs or reflectors, for instance) can be attached to the tracked person or object, and an array of optical sensors can be embedded in the room's ceiling. Alternatively, the tracked users can wear the sensors, and targets can be fixed to the ceiling. By calculating the distance to each visible target, the sensors can determine the user's position and orientation.
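The idea of recovering a position from distances to known targets can be sketched as a small least-squares trilateration. The beacon coordinates and measured ranges below are invented purely for illustration, and a real optical tracker would solve a richer problem that also recovers orientation.

```python
import numpy as np

def trilaterate(beacons, ranges, iterations=20):
    """Estimate a 3-D position from distances to known beacon positions (Gauss-Newton).

    beacons : (N, 3) array of known target positions (e.g. ceiling-mounted LEDs).
    ranges  : (N,) array of measured distances to each beacon.
    """
    p = beacons.mean(axis=0) - np.array([0.0, 0.0, 1.0])   # start just below the ceiling plane
    for _ in range(iterations):
        diffs = p - beacons                         # (N, 3)
        dists = np.linalg.norm(diffs, axis=1)       # predicted ranges
        residuals = dists - ranges                  # range errors
        J = diffs / dists[:, None]                  # Jacobian of ranges w.r.t. position
        step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
        p = p - step
    return p

beacons = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0], [0.0, 4.0, 3.0], [4.0, 4.0, 3.0]])
true_p = np.array([1.0, 2.0, 1.5])
ranges = np.linalg.norm(beacons - true_p, axis=1)   # simulated noise-free measurements
print(trilaterate(beacons, ranges))                 # should recover approximately true_p
```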
Researchers at the University of North Carolina at Chapel Hill have developed a very precise system that works within 500 square feet. The HiBall Tracking System is an optoelectronic tracking system made of two parts:
• Six user-mounted optical sensors.
• Infrared light-emitting diodes (LEDs) embedded in special ceiling panels.
The system uses the known locations of the LEDs, the known geometry of the user-mounted optical sensors and a special algorithm to compute and report the user's position and orientation. The system resolves linear motion of less than 0.2 millimeters and angular motion of less than 0.03 degrees. It has an update rate of more than 1500 Hz, and latency is kept at about one millisecond.
In everyday life, people rely on several senses, including what they see, cues from their inner ears and gravity's pull on their bodies, to maintain their bearings. In a similar fashion, "hybrid trackers" draw on several sources of sensory information. For example, the wearer of an AR display can be equipped with inertial sensors (gyroscopes and accelerometers) to record changes in head orientation. Combining this information with data from optical, video or ultrasonic devices greatly improves the accuracy of tracking.
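One common way to combine such sensors is a complementary filter: the gyroscope gives smooth, fast orientation changes but drifts over time, while the accelerometer (or an optical reference) gives an absolute but noisy estimate. The sketch below shows this idea for a single tilt angle; the sensor readings, bias and blending weight are illustrative assumptions, not values from any real tracker.

```python
import random

def complementary_filter(angle, gyro_rate, reference_angle, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro integration with a noisy absolute reference angle."""
    integrated = angle + gyro_rate * dt             # integrate the gyro rate (degrees)
    return alpha * integrated + (1.0 - alpha) * reference_angle

# Toy simulation: the head turns at 10 deg/s; the gyro has a 0.5 deg/s bias and the
# absolute reference (accelerometer or optical tracker) is noisy but unbiased.
true_angle, estimate = 0.0, 0.0
for _ in range(500):                                # 5 seconds at 100 Hz
    true_angle += 10.0 * 0.01
    gyro = 10.0 + 0.5                               # biased rate measurement
    reference = true_angle + random.gauss(0.0, 2.0) # noisy absolute measurement
    estimate = complementary_filter(estimate, gyro, reference, dt=0.01)
print(round(true_angle, 1), round(estimate, 1))     # estimate stays close to the true angle
```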
3.2.2 Outdoor Tracking
Head orientation is determined with a commercially available hybrid tracker that combines gyroscopes and accelerometers with magnetometers that measure the earth's magnetic field. For position tracking we take advantage of a high-precision version of the increasingly popular Global Positioning System (GPS) receiver. A GPS receiver can determine its position by monitoring radio signals from navigation satellites. Ordinary GPS receivers have an accuracy of about 10 to 30 meters. An augmented reality system would be worthless if the graphics projected were of something 10 to 30 meters away from what you were actually looking at.
Users can get better results with a technique known as differential GPS. In this method, the mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed location on the earth. This transmitter broadcasts a correction based on the difference between the stationary GPS antenna's known and computed positions. By using these signals to correct the satellite signals, differential GPS can reduce the margin of error to less than one meter.
A system can achieve centimeter-level accuracy by employing real-time kinematic (RTK) GPS, a more sophisticated form of differential GPS that also compares the phases of the signals at the fixed and mobile receivers. Trimble Navigation reports that they have increased the precision of their GPS by replacing local reference stations with what they term a Virtual Reference Station (VRS). The VRS enables users to obtain centimeter-level positioning without local reference stations; it can achieve long-range, real-time kinematic precision over greater distances via wireless communications wherever they are located. The real-time kinematic technique is a way to use GPS measurements to generate positioning within one to two centimeters (0.39 to 0.79 inches). RTK is often used as the key component in navigation systems or automatic machine guidance.
Unfortunately, GPS is not the ultimate answer to position tracking. The satellite signals are relatively weak and easily blocked by buildings or even foliage. This rules out useful tracking indoors or in places like midtown Manhattan, where rows of tall buildings block most of the sky. GPS tracking works well in wide open spaces with relatively low buildings.
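The differential correction itself is conceptually very simple, as the sketch below shows. The coordinates are small made-up offsets in meters in a local frame rather than real latitude/longitude values, and a real receiver would apply corrections per satellite rather than to a final position fix.

```python
def differential_correction(mobile_measured, base_measured, base_known):
    """Shift the mobile receiver's fix by the error observed at the fixed base station.

    All arguments are (east, north) positions in meters in a local reference frame.
    The base station knows its true surveyed position, so the difference between its
    measured and known positions estimates the error shared by both receivers.
    """
    error = (base_measured[0] - base_known[0], base_measured[1] - base_known[1])
    return (mobile_measured[0] - error[0], mobile_measured[1] - error[1])

# Example: both receivers currently see roughly the same 12 m east / -8 m north error.
base_known = (0.0, 0.0)
base_measured = (12.0, -8.0)
mobile_measured = (112.5, 41.0)
print(differential_correction(mobile_measured, base_measured, base_known))   # ~(100.5, 49.0)
```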
3.3 Mobile Computing Power
For a wearable augmented reality system, there is still not enough computing power to create stereo 3-D graphics, so researchers are using whatever they can get out of laptops and personal computers for now. Laptops are just now starting to be equipped with graphics processing units (GPUs); Toshiba has added an NVIDIA GPU to its notebooks that is able to process more than 17 million triangles per second and 286 million pixels per second, which can enable graphics-intensive programs such as 3D games. But notebooks still lag far behind: NVIDIA has developed a custom 300 MHz 3-D graphics processor for Microsoft's Xbox game console that can produce 150 million polygons per second, and polygons are more complicated than triangles. So you can see how far mobile graphics chips have to go before they can create smooth graphics like the ones you see on your home video-game system.
4. TYPES OF AUGMENTED REALITY
4.1 Projection-based AR
Just like anything else which is beyond our reach, projection-based AR feels more attractive (at least as of now) compared to an AR app you can install on your phone. As is obvious from its name, projection-based AR works by projecting onto objects. What makes it interesting is the wide array of possibilities. One of the simplest is the projection of light onto a surface. Speaking of lights, surfaces and AR, did you ever think those lines on your fingers (which divide each finger into three parts) could create 12 buttons? Have a look at the image and you will quickly grasp what we are talking about. The picture depicts one of the simplest uses of projection-based AR, where light is fired onto a surface and the interaction is done by touching the projected surface with the hand. The detection of where the user has touched the surface is done by differentiating between an expected (or known) projection image and the projection altered by the interference of the user's hand.
Projection-based AR can build you a castle in the air, or make a dialler on your hand.
4.2 Recognition-based AR
Recognition-based AR focuses on recognizing objects and then providing more information about them. For example, when using your mobile phone to scan a barcode or QR code, you are actually using object recognition technology. In fact, except for location-based AR systems, all other types use some kind of recognition system to detect the type of object over which the augmentation has to be done. Recognition-based AR technology has varied uses as well. One of them is to detect the object in front of the camera and provide information about it on screen. This is similar to the AR apps for travellers (location browsers). However, the difference lies in the fact that AR location browsers usually do not know about the objects they see, while recognition-based AR apps do. A second type of recognition-based AR application recognizes the object and replaces it with something else.
Augmented Reality used for simulating chemical formulae.
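Barcode and QR recognition of the kind mentioned above is available off the shelf. As a small illustrative sketch (assuming OpenCV is installed and "frame.png" is a placeholder camera frame containing a QR code), the detector below finds the code and decodes its payload; the returned corner points are exactly the anchor that an AR app would place its overlay onto.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")                     # placeholder camera frame with a QR code
detector = cv2.QRCodeDetector()
payload, corners, _ = detector.detectAndDecode(frame)

if corners is not None:
    # corners holds the four corner points of the code in image coordinates; this is the
    # anchor an AR application would warp its overlay onto.
    pts = corners.reshape(-1, 2).astype(np.int32)
    cv2.polylines(frame, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    print("decoded payload:", payload)
    cv2.imwrite("frame_annotated.png", frame)
```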
4.3 Outlining AR
Though the human eye is often called the best camera in the world, it has limitations. We cannot look at things for too long, we cannot see well in low-light conditions, and the eye certainly cannot see in infrared. For such cases, special cameras were built. Augmented reality apps which perform outlining use such cameras. Once again, object recognition sits behind all that outlining AR can do.
Let us begin with a life-saving implementation example. When driving a car on a road in foggy weather, the boundaries of the road may not be very visible to the human eye, leading to mishaps. Advanced cameras tuned specially to see the surroundings in low-light conditions can be used to outline the road boundaries within which the car should stay. Such a system would prove very useful in avoiding accidents. With extra sensors capable of detecting objects around the car (e.g. using ultrasound), the overall risk of hitting a living object can be minimized as well. The technology can help save pedestrian lives too: outlining people crossing the road on a HUD (head-up display) windscreen can be more useful than having a separate infrared video feed.
Outlining the road can help save you from a mishap.
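A very rough sketch of the outlining idea, assuming OpenCV and a placeholder low-light frame on disk: detect strong edges and draw them back over the frame so the boundaries stand out. A real driver-assistance system would use far more robust lane and obstacle detection, so this only illustrates the concept.

```python
import cv2

frame = cv2.imread("foggy_road.png")                # placeholder frame from a low-light camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)         # suppress noise before edge detection
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Overlay the detected edges in bright green so the outline is visible on the display.
outlined = frame.copy()
outlined[edges > 0] = (0, 255, 0)
cv2.imwrite("foggy_road_outlined.png", outlined)
```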
5. APPLICATIONS
Only recently have the capabilities of real-time video image processing, computer graphics systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. Researchers working with AR systems have proposed them as solutions in many domains. The areas discussed range from entertainment to military training. Many of the domains, such as medicine, have also been proposed for traditional virtual reality systems. This section highlights some of the proposed applications for augmented reality.
5.1 Medical
Because imaging technology is so pervasive throughout the medical field, it is not surprising that this domain is viewed as one of the more important ones for augmented reality systems. Most of the medical applications deal with image-guided surgery. Pre-operative imaging studies of the patient, such as CT or MRI scans, provide the surgeon with the necessary view of the internal anatomy. From these images the surgery is planned. Visualization of the path through the anatomy to the affected area where, for example, a tumor must be removed is done by first creating a 3D model from the multiple views and slices in the pre-operative study. This is most often done mentally, though some systems will create a 3D volume visualization from the image study. AR can be applied so that the surgical team can see the CT or MRI data correctly registered on the patient in the operating theater while the procedure is progressing. Being able to accurately register the images at this point will enhance the performance of the surgical team.
Another application for AR in the medical domain is in ultrasound imaging. Using an optical see-through display, the ultrasound technician can view a volumetric rendered image of the fetus overlaid on the abdomen of the pregnant woman. The image appears as if it were inside the abdomen and is correctly rendered as the user moves.
Fig 6: Virtual fetus inside the womb of a pregnant patient.
Fig 7: Mock-up of breast tumor biopsy; 3-D graphics guide needle insertion.
5.2 Entertainment
A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever you watch the evening weather report, the weather reporter is shown standing in front of changing weather maps. In the studio the reporter is actually standing in front of a blue or green screen. This real image is augmented with the computer-generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors appear to be positioned in a studio with computer-generated decoration.
Movie special effects make use of digital computing to create illusions. Strictly speaking, with current technology this may not be considered augmented reality because it is not generated in real time. Most special effects are created off-line, frame by frame, with a substantial amount of user interaction and computer graphics system rendering. But some work is progressing on computer analysis of the live-action images to determine the camera parameters and use this to drive the generation of the virtual graphics objects to be merged.
Princeton Electronic Billboard has developed an augmented reality system that allows broadcasters to insert advertisements into specific areas of the broadcast image. For example, while broadcasting a baseball game this system would be able to place an advertisement in the image so that it appears on the outfield wall of the stadium.
By using pre-specified reference points in the stadium, the system automatically determines the camera angles being used and, referring to the pre-defined stadium map, inserts the advertisement into the correct place. AR Quake, designed using the same platform, blends users in the real world with those in a purely virtual environment. A mobile AR user plays as a combatant in the computer game Quake, where the game runs with a virtual model of the real environment.
Fig 8: AR in sports broadcasting. The annotations on the race cars and the yellow first-down line are inserted into the broadcast in real time.
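A toy version of this kind of virtual advertisement insertion can be written with a perspective warp: given the image positions of four known reference points on the outfield wall for the current camera angle, warp the advertisement into place and composite it over the frame. The corner coordinates and file names below are placeholders, and a real broadcast system would also handle occlusion by players.

```python
import cv2
import numpy as np

frame = cv2.imread("broadcast_frame.png")           # placeholder broadcast frame
ad = cv2.imread("advertisement.png")                # placeholder advertisement image

h_ad, w_ad = ad.shape[:2]
# Corners of the ad image, and the image positions of the pre-surveyed reference points
# on the outfield wall for the current camera angle (made-up pixel coordinates).
src = np.float32([[0, 0], [w_ad, 0], [w_ad, h_ad], [0, h_ad]])
dst = np.float32([[420, 180], [610, 195], [605, 260], [415, 240]])

M = cv2.getPerspectiveTransform(src, dst)
h_f, w_f = frame.shape[:2]
warped = cv2.warpPerspective(ad, M, (w_f, h_f))

# Paste the warped ad wherever it has content.
mask = warped.sum(axis=2) > 0
frame[mask] = warped[mask]
cv2.imwrite("broadcast_with_ad.png", frame)
```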
5.3 Engineering Design
Imagine that a group of designers is working on the model of a complex device for their clients. The designers and clients want to hold joint design reviews even though they are physically separated. If each of them had a conference room equipped with an augmented reality display, this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D. The clients can walk around the display looking at different aspects of it. To hold the discussion the client can point at the prototype to highlight sections, and this will be reflected on the real model in the augmented display that the designers are using. Or perhaps, at an earlier stage of the design before a prototype is built, the view in each conference room is augmented with a computer-generated image of the current design built from the CAD file describing it. This would allow real-time interaction with elements of the design so that either side can make adjustments and changes that are reflected in the view seen by both groups.
5.4 Robotics and Telerobotics
In the domain of robotics and telerobotics an augmented display can assist the user of the system. A telerobotics operator uses a visual image of the remote workspace to guide the robot. Annotation of the view would still be useful, just as it is when the scene is in front of the operator. There is an added potential benefit: since the view of the remote scene is often monoscopic, augmentation with wire-frame drawings of structures in the view can facilitate visualization of the remote 3D geometry. If the operator is attempting a motion, it could be practiced on a virtual robot that is visualized as an augmentation to the real scene. The operator can decide to proceed with the motion after seeing the results. The robot motion could then be executed directly, which in a telerobotics application would eliminate any oscillations caused by long delays to the remote site.
Fig 9: Virtual lines show a planned motion of a robot arm.
5.6 Consumer Design
Virtual reality systems are already used for consumer design. Using perhaps more of a graphics system than virtual reality, when you go to a typical home store wanting to add a new deck to your house, they will show you a graphical picture of what the deck will look like. It is conceivable that a future system would allow you to bring in a video tape of your house shot from various viewpoints in your backyard, and in real time it would augment that view to show the new deck in its finished form attached to your house. Or bring in a tape of your current kitchen, and the augmented reality processor would replace your current kitchen cabinetry with virtual images of the new kitchen that you are designing.
6. ADVANTAGES AND LIMITATIONS
Advantages
• Augmented Reality is set to revolutionize the mobile user experience just as gesture and touch (multi-modal interaction) did in mobile phones. This will redefine the mobile user experience for the next generation, making mobile search invisible and reducing search effort for users.
• Augmented Reality, like multi-modal interaction (gestural interfaces), has a long history of usability research, analysis and experimentation, and therefore has a solid history as an interface technique.
• Augmented Reality improves mobile usability by acting as the interface itself, requiring little interaction. Imagine turning on your phone or pressing a button where the space, people and objects around you are "sensed" by your mobile device, giving you location-based or context-sensitive information on the fly.
Limitations
• Current performance levels (speed) on today's [2009] iPhone or similar touch devices like the Google G1 will take a few generations to make Augmented Reality feasible as a general interface technique accessible to the general public.
• Content may obscure or narrow a user's interests or tastes. For example, knowing where McDonald's or Starbucks is in Paris or Rome might not interest users as much as the "off the beaten track" information that you might seek out in travel experiences.
• Privacy control will become a bigger issue than with today's information saturation levels. Walking up to a stranger or a group of people might reveal status, thoughts (tweets) or other information that usually comes with an introduction, which might cause unwarranted breaches of privacy.
7. FUTURE ENHANCEMENT
This section identifies areas and approaches that require further research to produce improved AR systems.
Hybrid approaches
Future tracking systems may be hybrids, because combining approaches can cover weaknesses. The same may be true for other problems in AR. For example, current registration strategies generally focus on a single strategy. Future systems may be more robust if several techniques are combined, such as combining vision-based techniques with prediction. If the fiducials are not available, the system switches to open-loop prediction to reduce the registration errors, rather than breaking down completely (a minimal sketch of such a predictor appears at the end of this section). The predicted viewpoints in turn produce a more accurate initial location estimate for the vision-based techniques.
Portability
It is essential that potential AR applications give the user the ability to walk around large environments, even outdoors. This requires making the equipment self-contained and portable. Existing tracking technology is not capable of tracking a user outdoors at the required accuracy.
Multimodal displays
Almost all work in AR has focused on the visual sense: virtual graphic objects and overlays. But augmentation might apply to all the other senses as well. In particular, adding and removing 3-D sound is a capability that could be useful in some AR applications.
Perceptual and psychophysical studies
Augmented reality is an area ripe for psychophysical studies. How much lag can a user detect? How much registration error is detectable when the head is moving? Besides questions on perception, psychological experiments that explore performance issues are also needed. How much does head-motion prediction improve user performance on a specific task? How much registration error is tolerable for a specific application before performance on that task degrades substantially? Is the allowable error larger while the user moves her head versus when she stands still? Furthermore, not much is known about potential optical illusions caused by errors or conflicts in the simultaneous display of real and virtual objects.
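The open-loop prediction mentioned under the hybrid approaches above can be as simple as extrapolating the last known motion. The sketch below is a constant-velocity predictor over recent tracker samples, purely to illustrate the fallback idea; the sample poses and timings are invented.

```python
def predict_pose(samples, dt_ahead):
    """Constant-velocity extrapolation of a 1-D pose value (e.g. head yaw in degrees).

    samples  : list of (timestamp, value) pairs from the tracker, newest last
    dt_ahead : how far into the future to predict, in seconds
    """
    (t0, v0), (t1, v1) = samples[-2], samples[-1]
    velocity = (v1 - v0) / (t1 - t0)
    return v1 + velocity * dt_ahead

# The tracker reported yaw 10.0 deg at t = 0.00 s and 10.5 deg at t = 0.01 s; if the
# optical fiducials drop out, predict where the head will be one frame (10 ms) later.
history = [(0.00, 10.0), (0.01, 10.5)]
print(predict_pose(history, dt_ahead=0.01))   # 11.0
```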
8. CONCLUSION
Augmented reality is far behind Virtual Environments in maturity. Several commercial vendors sell complete, turnkey Virtual Environment systems. However, no commercial vendor currently sells an HMD-based Augmented Reality system. A few monitor-based "virtual set" systems are available, but today AR systems are primarily found in academic and industrial research laboratories.
The first deployed HMD-based AR systems will probably be in the application of aircraft manufacturing. Both Boeing and McDonnell Douglas are exploring this technology. The former uses optical approaches, while the latter is pursuing video approaches. Boeing has performed trial runs with workers using a prototype system but has not yet made any deployment decisions. Annotation and visualization applications in restricted, limited-range environments are deployable today, although much more work needs to be done to make them cost-effective and flexible. Applications in medical visualization will take longer. Prototype visualization aids have been used on an experimental basis, but the stringent registration requirements and the ramifications of mistakes will postpone common usage for many years. AR will probably be used for medical training before it is commonly used in surgery.
The next generation of combat aircraft will have helmet-mounted sights with graphics registered to targets in the environment. These displays, combined with short-range steerable missiles that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights. Instead of having to be directly behind his target in order to shoot at it, a pilot can now shoot at anything within a 60-90 degree cone of his aircraft's forward centerline. Russia and Israel currently have systems with this capability, and the U.S. is expected to field the AIM-9X missile with its associated helmet-mounted sight in 2002.
Augmented Reality is a relatively new field, where most of the research efforts have occurred in the past four years. Because of the numerous challenges and unexplored avenues in this area, AR will remain a vibrant area of research for at least the next several years. After the basic problems with AR are solved, the ultimate goal will be to generate virtual objects that are so realistic that they are virtually indistinguishable from the real environment. Photorealism has been demonstrated in feature films, but accomplishing this in an interactive application will be much harder. Lighting conditions, surface reflections and other properties must be measured automatically, in real time.