Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual
Reality as it is more commonly called. VE technologies completely immerse a
user inside a synthetic environment.
While immersed, the user cannot see the real world around him. In contrast, AR
allows the user to see the real world, with virtual objects superimposed upon or
composited with the real world.
AR graphics such as the free-kick radius and the offside line are already common in TV sports broadcasts.
• The head-mounted display (invented by Ivan Sutherland in 1968) was the first step in making AR a possibility
Prof. Tom Caudell
• Coined the term “Augmented Reality”
• Developed complex software at Boeing to help technicians assemble cables into aircraft
In 1999, Hirokazu Kato of the Nara Institute of Science and Technology released ARToolKit to the open-source community. Although the smartphone had yet to be invented, ARToolKit was what later allowed a simple handheld device with a camera and an internet connection to bring AR to the masses.
Common tracking approaches:
• GPS + Compass + Gyro + Accelerometer
• Marker (fiducial, frame, etc.)
• NFT (2D images)
• 3D (pre-trained point cloud)
• Live 3D (SLAM)
• Face, fingers, body
Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system.
ARToolKit was the first fully-featured toolkit for marker-based AR.
Markers work by having software recognise a particular pattern,
such as a barcode or symbol, when a camera points at it, and
overlaying a digital image at that point on the screen.
As the name implies, image targets are images that the AR
SDK can detect and track. Unlike traditional markers, data
matrix codes and QR codes, image targets do not need
special black and white regions or codes to be recognized.
The AR SDK uses sophisticated algorithms to detect and track
the features that are naturally found in the image itself.
GPS + Compass + Gyro + Accelerometer
Location-based AR is the ability of a particular device to record its position in the world and then offer data that is relevant to that location: finding your way around a city, remembering where you parked the car, naming the mountains around you or the stars in the sky.
Computer scientists get output images from computed tomography (CT) for a virtually produced image of the inner body. A modern spiral CT takes a series of X-ray images and reconstructs a 3-dimensional perspective from them.
A computer-aided tomogram is clearer than a normal X-ray photograph because it enables differentiation of the body's various types of tissue. The computer scientist then superimposes the saved CT scans on a real photo of the patient on the operating table. For surgeons, the impression produced is that of looking through the skin and through the various layers of the body in 3 dimensions and in color.
The “virtual watch” is created by real-time light-reflecting technology that allows the consumer to interact with the design by twisting their wrist for a 360-degree view. Shoppers will be able to “try on” 28 different watches from the Touch collection by the Swiss watchmaker Tissot, and can also experiment with different dials and straps.
Augmented Reality, be it for style or comfort: this is the virtual shopping mall of the future. You can sit at home, try clothing on in our virtual shop, and shop interactively. It is designed for both at-home and in-store use.
The military has been using displays in cockpits
that present information to the pilot on the
windshield of the cockpit or the visor of the flight
helmet. This is a form of augmented reality display.
Integrating drawings and cutouts with real-world
images provides context for an engineer.
Augmented reality also provides the ability to recreate the sights and sounds of the ancient world, allowing a tourist to experience a place in time as if he or she were actually present when a given event in history occurred. By viewing a site augmented by computer-generated images, the viewer can actually experience a historic place or event as if he or she had traveled back in time.
AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings into their view. Architectural sightseeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.
AR technology has been successfully used in various educational institutions as an add-on to textbook material or as a virtual 3D textbook in itself. Normally done with head-mounted displays, the AR experience allows students to “relive” events as they are known to have happened, without ever leaving their classroom. These apps can be implemented on the Android platform, but you need the backing of a course-material provider. Apps like these also have the potential to push AR to the forefront because they have a very large potential user base.
Word Lens has its limits. The
translation will have mistakes,
and may be hard to understand,
but it usually gets the point
across. If a translation fails, there
is a way to manually look up
words by typing them in. Word
Lens does not read very stylized
fonts, handwriting, or cursive.
There are many, many more uses of AR that cannot be categorized so easily. Most are still in the design and planning stages, but they have the potential to bring AR technology to the forefront of daily gadgets.
Tracking boils down to two questions about your interesting stuff: what to track, and where it is (its 3D pose).
Vuforia is an Augmented Reality framework developed by Qualcomm. The Vuforia platform uses superior, stable, and technically efficient computer-vision-based image recognition and offers the widest set of features and capabilities, giving developers the freedom to extend their visions without technical limitations. With support for iOS, Android, and Unity 3D, the Vuforia platform allows you to write a single native app that can reach the most users across the widest range of smartphones and tablets.
Tools & Services
• Target Management System
• App Development Guide
• Vuforia Web Services
These tools and services support the development process with the Vuforia platform. The platform consists of the Vuforia Engine (inside the SDK), the Target Management System hosted on the developer portal (Target Manager), and, optionally, the Cloud Target Database.
Cygwin is a Unix-like environment and command-line interface for Microsoft Windows. Cygwin provides native integration of Windows-based applications, data, and other system resources with applications, software tools, and data of the Unix-like environment.
Android apps are typically written in Java, with its elegant object-oriented design. At times, however, you need to overcome the limitations of Java, such as memory management and performance, by programming directly against the Android native interface. Android provides the Native Development Kit (NDK) to support native development in C/C++, besides the Android Software Development Kit (Android SDK), which supports Java development.
The NDK provides a set of system headers for stable native APIs that are guaranteed to be supported in all later releases of the platform:
libc (C library) headers
libm (math library) headers
JNI interface headers
libz (Zlib compression) headers
liblog (Android logging) header
OpenGL ES 1.1 and OpenGL ES 2.0 (3D graphics libraries) headers
a minimal set of headers for C++ support
OpenSL ES native audio libraries
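As a minimal sketch, two of these stable headers used together (the log tag is illustrative):

#include <android/log.h>   /* liblog (Android logging) header */
#include <math.h>          /* libm (math library) header */

/* Log the sine of an angle via the stable logging API. */
static void logSine(double radians)
{
    __android_log_print(ANDROID_LOG_INFO, "ARDemo",
                        "sin(%f) = %f", radians, sin(radians));
}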
• Download the Vuforia SDK (you need to accept the license agreement before the download can start)
• Extract the contents of the ZIP package into <DEVELOPMENT_ROOT>
• Adjust the Vuforia environment settings in Eclipse
Type of augmented reality: image targets
SDK of the demo: Vuforia
Mobile platform: Android (NDK)
3D content rendering: OpenGL ES
3D model: OBJ file format
• Android NDK applications include Java code and resource files as well as C/C++ source code and sometimes assembly code. All native code is compiled into a dynamic linked library (.so file) and then called from Java in the main program using the JNI mechanism.
NDK application development can be divided into five steps:
1. Create a sub-directory called "jni" and place all the native sources there.
2. Create an "Android.mk" file to describe the native sources to the NDK build system.
3. By default the build system will not automatically build for the x86 ABI. We will need to create a build file "Application.mk" to explicitly specify our build targets.
4. Build the native code by running the "ndk-build" script (in the NDK install directory) from the project's directory.
Note that the build system will automatically add the proper prefix and suffix to the corresponding generated file. In other words, a shared library module named 'DevFestArDemo' will produce libDevFestArDemo.so.
5. Load the native libs and make a few JNI calls out of the box. In the Java class, look for method declarations starting with "public native".
We create an ImageTargets class to use and manage the Augmented Reality SDK.
Initialize application GUI elements that are not related to AR.
InitQCARTask: an async task to initialize QCAR asynchronously.
Once QCAR is initialized, initialize the image tracker.
Initialize the AR application components.
These are the textures for our 3D model; here we simply specify where the texture files are located.
Do application initialization in native code (e.g. registering callbacks, etc.).
Create a texture for the 3D content and load it from the application's assets.
An async task then loads the tracker data.
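On the native side, texture creation is roughly the following sketch (assuming the Java side has already decoded the image into an RGBA pixel buffer; the function name is illustrative):

#include <GLES/gl.h>

/* Create an OpenGL ES texture from RGBA pixels decoded on the Java side. */
static GLuint createTexture(int width, int height, const unsigned char* pixels)
{
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    /* Linear filtering for minification and magnification. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Upload the pixel data. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return textureID;
}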
In this step we define our marker in the ImageTargets.cpp file. But first, let me explain the general structure and working principle of a marker.
Image targets can be created with the online Target Manager tool from JPG or PNG input images (only RGB or grayscale images are supported) of 2 MB or less in size. Features extracted from these images are stored in a database, which can then be downloaded and packaged together with your application. The database can then be used by Vuforia for runtime comparisons.
A feature is a sharp, spiked, chiseled detail in the image, such as the ones present in textured objects. The image analyzer represents features as small yellow crosses. Increase the number of these details in your image, and verify that the details are distributed evenly across it.
Adding a Target
Not enough features. More visual details are
required to increase the total number of features.
Poor feature distribution. Features are present in
some areas of this image but not in others. Features
need to be distributed uniformly across the image.
Poor local contrast. The objects in this image need sharper edges or clearly defined shapes in order to provide better local contrast.
This image is not suitable for detection and
tracking. We should consider an alternative image
or significantly modify this one.
Although this image may contain enough features
and good contrast, repetitive patterns hinder
detection performance. For best results, choose
an image without repeated motifs (even if rotated
and scaled) or strong rotational symmetry.
Loading our data sets into the image tracker.
Starting the camera device.
Starting the tracker to detect and track real-world objects in the camera video frames.
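In native code these three steps look roughly like the sketch below (based on the QCAR 2.x native API as used by the Vuforia samples; exact names can differ between SDK versions, and the dataset file name is illustrative):

#include <QCAR/TrackerManager.h>
#include <QCAR/ImageTracker.h>
#include <QCAR/DataSet.h>
#include <QCAR/CameraDevice.h>

/* Load the packaged dataset, then start the camera and the tracker. */
static bool startAR()
{
    QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
    QCAR::ImageTracker* imageTracker = static_cast<QCAR::ImageTracker*>(
        trackerManager.getTracker(QCAR::Tracker::IMAGE_TRACKER));
    if (imageTracker == NULL)
        return false;

    /* Create the dataset and load it from the app's assets. */
    QCAR::DataSet* dataSet = imageTracker->createDataSet();
    if (!dataSet->load("DevFestTargets.xml", QCAR::DataSet::STORAGE_APPRESOURCE))
        return false;
    imageTracker->activateDataSet(dataSet);

    /* Start the camera device, then start tracking. */
    if (!QCAR::CameraDevice::getInstance().init(QCAR::CameraDevice::CAMERA_DEFAULT))
        return false;
    QCAR::CameraDevice::getInstance().start();
    imageTracker->start();
    return true;
}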
OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL computer-graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics, such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU).
These are the GLSurfaceView.Renderer callbacks:
onDrawFrame(GL10 gl)
Called to draw the current frame.
onSurfaceChanged(GL10 gl, int width, int height)
Called when the surface changes size.
onSurfaceCreated(GL10 gl, EGLConfig config)
Called when the surface is created or recreated.
First, for each active (visible) trackable we create a
modelview matrix from its pose. Then we apply transforms
to this matrix in order to scale and position our model.
Finally we multiply it by the projection matrix to create the
MVP (model view projection) matrix that brings the 3D
content to the screen. Later in the code, we bind this MVP
matrix to the uniform variable in our shader. Each vertex of
our 3D model will be multiplied by this matrix, effectively
bringing that vertex from world space to screen space (the
transforms are actually object > world > eye > window).
Next, we need to feed the model arrays (vertices, normals,
and texture coordinates) to our shader. We start by binding
our shader, then assigning our model arrays to the attribute
fields in our shader
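Per frame, that looks roughly like the sketch below (QCAR 2.x style; the SampleUtils matrix helpers come from the Vuforia sample code rather than the SDK, and state, projectionMatrix, kObjectScale, and mvpMatrixHandle are assumed to be set up elsewhere as in the sample):

/* Build the MVP matrix for one visible trackable. */
const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
QCAR::Matrix44F modelViewMatrix =
    QCAR::Tool::convertPose2GLMatrix(result->getPose());

/* Position and scale the model relative to the target. */
SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
                                 &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);

/* MVP = projection * modelview. */
QCAR::Matrix44F modelViewProjection;
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);

/* Bind the MVP matrix to the uniform variable in our shader. */
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   (GLfloat*) &modelViewProjection.data[0]);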
I am using the obj2opengl tool for this. obj2opengl is a Perl script that reads a Wavefront OBJ file describing a 3D object and writes a C/C++ header file describing the object's face vertices, normals, and texture coordinates as simple arrays of floats, in a form suitable for use with OpenGL ES. It is also compatible with Java and the Android SDK libraries.
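The generated header looks roughly like this (a single hypothetical triangle for illustration; the real file has one entry per face vertex of the model):

/* helicopter.h -- illustrative excerpt of obj2opengl output */
unsigned int helicopterNumVerts = 3;

float helicopterVerts[] = {
    /* x, y, z per vertex, three vertices per triangle face */
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};
float helicopterNormals[] = {
    /* one unit normal per vertex */
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
};
float helicopterTexCoords[] = {
    /* u, v per vertex */
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
};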
In this step we create a folder named “Devfest” on the Desktop and put our model and the obj2opengl.pl file in it. We then need to install a Perl interpreter on our computer to run the obj2opengl.pl script. Now we open the Windows command prompt and run the script on our model (e.g. perl obj2opengl.pl helicopter.obj), as shown in the figure.
Now we have a “helicopter.h” file containing the OpenGL ES vertex arrays needed to implement our project. We add the helicopter.h file to the jni folder.
In this step we set up the vertex arrays in our OpenGL code, ImageTargets.cpp, to use our 3D model. Include the generated header in ImageTargets.cpp, then set the input data arrays and draw:
// Enable the client-side arrays, point them at the generated data, and draw
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*) helicopterTexCoords);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*) helicopterVerts);
glNormalPointer(GL_FLOAT, 0, (const GLvoid*) helicopterNormals);
glDrawArrays(GL_TRIANGLES, 0, helicopterNumVerts);
Now we add our activities and the required permissions (e.g. android.permission.CAMERA, plus android.permission.INTERNET if cloud recognition is used) to our project's AndroidManifest.xml file.
Now we run our augmented reality application.