Making Augmented Reality Applications with Android NDK

Making Augmented Reality Applications with Android NDK



DevFest'13 Istanbul Presentation.

Presentation Transcript

  • Agenda: Augmented Reality, Types of Augmented Reality, Frameworks, Vuforia, Android NDK, OpenGL ES, Demo
  • WHO AM I?
  • Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality as it is more commonly called. VE technologies completely immerse a user inside a synthetic environment. While immersed, the user cannot see the real world around him. In contrast, AR allows the user to see the real world, with virtual objects superimposed upon or composited with the real world.
  • Already common in TV shows: free-kick radius / offside lines, advertising spots, weather forecasts, stock tickers.
  • Prof. Tom Caudell
      • Worked on head-mounted displays, an early step in making AR a possibility
      • Coined the term “Augmented Reality”
      • Developed complex software at Boeing to help technicians assemble cables into aircraft
  • Hirokazu Kato
      • In 1999, Hirokazu Kato of the Nara Institute of Science and Technology released ARToolKit to the open source community.
      • Although the smartphone had yet to be invented, ARToolKit is what later allowed a simple handheld device with a camera and an internet connection to bring AR to the masses.
  • Ways of tracking:
      • GPS + compass + gyro + accelerometer
      • Marker (fiducial, frame, etc.)
      • NFT (2D images)
      • 3D (pre-trained point cloud)
      • Live 3D (SLAM)
      • Face, fingers, body
  • Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system. ARToolKit was the first fully-featured toolkit for marker-based AR. Markers work by having software recognise a particular pattern, such as a barcode or symbol, when a camera points at it, and overlay a digital image at that point on the screen.
  • As the name implies, image targets are images that the AR SDK can detect and track. Unlike traditional markers, data matrix codes, and QR codes, image targets do not need special black and white regions or codes to be recognized. The AR SDK uses sophisticated algorithms to detect and track the features that are naturally found in the image itself.
  • Location-based tracking (GPS + compass + gyro + accelerometer): location-based applications use the ability of a particular device to record its position in the world and then offer data that is relevant to that location: finding your way around a city, remembering where you parked the car, naming the mountains around you or the stars in the sky.
  • Computer scientists take output images from computed tomography (CT) for the virtually produced image of the inner body. A modern spiral CT makes several X-ray photographs from diverse perspectives and then reconstructs their 3-dimensional perspective. A computer-aided tomogram is clearer than a normal X-ray photograph because it enables differentiation of the body's various types of tissue. The computer scientist then superimposes the saved CT scans onto a real photo of the patient on the operating table. For surgeons the impression produced is that of looking through the skin and through the various layers of the body, in 3 dimensions and in color.
  • The “virtual watch” is created by real-time light-reflecting technology that allows the consumer to interact with the design by twisting their wrist for a 360-degree view. Shoppers are able to “try on” 28 different watches from the Touch collection by the Swiss watchmaker Tissot, and can also experiment with different dials and straps.
  • Fitting Reality is based on augmented reality, be it for style or for comfort. This is the virtual shopping mall of the future: you can sit at home, try clothing on in the virtual shop, and shop interactively. It is designed for both the at-home and the in-store experience.
  • The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or on the visor of the flight helmet. This is a form of augmented reality display.
  • Integrating drawings and cutouts with real-world images provides context for an engineer.
  • Augmented reality also provides the ability to recreate the sights and sounds of the ancient world, allowing a tourist to experience a place in time as if he or she were actually present when a given event in history occurred. By viewing a physical environment whose elements are augmented by computer-generated images, the viewer can experience a historic place or event as if he or she had traveled back in time.
  • AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings into their view. Architectural sightseeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.
  • AR technology has been successfully used in various educational institutes as an add-on to textbook material or as a virtual 3D textbook in itself. Normally done with head mounts, the AR experience allows students to “relive” events as they are known to have happened, without ever leaving their class. These apps can be implemented on the Android platform, but you need the backing of a course material provider. Apps like these also have the potential to push AR to the forefront because they have a very large potential user base.
  • Word Lens has its limits. The translation will have mistakes and may be hard to understand, but it usually gets the point across. If a translation fails, there is a way to manually look up words by typing them in. Word Lens does not read very stylized fonts, handwriting, or cursive.
  • There are many, many more uses of AR that cannot be categorized so easily. They are mostly still in the design and planning stages, but have the potential to move AR technology to the forefront of daily gadgets.
  • What to track, where it is (its 3D pose), and your interesting stuff.
  • Vuforia is an augmented reality framework developed by Qualcomm. The Vuforia platform uses superior, stable, and technically efficient computer-vision-based image recognition and offers the widest set of features and capabilities, giving developers the freedom to extend their visions without technical limitations. With support for iOS, Android, and Unity 3D, the Vuforia platform allows you to write a single native app that can reach the most users across the widest range of smartphones and tablets.
  • Device SDK
      • Android
      • iOS
      • Unity Extension
  • Tools & Services
      • Target Management System
      • App Development Guide
      • Vuforia Web Services
      • Support Forum
  • This diagram provides an overview of the application development process with the Vuforia platform. The platform consists of the Vuforia Engine (inside the SDK), the Target Management System hosted on the developer portal (Target Manager), and, optionally, the Cloud Target Database.
  • Cygwin is a Unix-like environment and command-line interface for Microsoft Windows. Cygwin provides native integration of Windows-based applications, data, and other system resources with applications, software tools, and data of the Unix-like environment.
  • Android apps are typically written in Java, with its elegant object-oriented design. At times, however, you need to overcome the limitations of Java, such as memory management and performance, by programming directly against the Android native interface. Android provides the Native Development Kit (NDK) to support native development in C/C++, alongside the Android Software Development Kit (Android SDK), which supports Java.
  • It provides a set of system headers for stable native APIs that are guaranteed to be supported in all later releases of the platform:
      • libc (C library) headers
      • libm (math library) headers
      • JNI interface headers
      • libz (Zlib compression) headers
      • liblog (Android logging) header
      • OpenGL ES 1.1 and OpenGL ES 2.0 (3D graphics library) headers
      • A minimal set of headers for C++ support
      • OpenSL ES native audio libraries
  • Setting up the environment:
      • Download the Vuforia SDK (you need to accept the license agreement before the download can start)
      • Extract the contents of the ZIP package and put it into <DEVELOPMENT_ROOT>
      • Adjust the Vuforia environment settings in Eclipse
  • <DEVELOPMENT_ROOT> then contains:
      • android-ndk-r8
      • android-sdk-windows
      • vuforia-sdk-android-xx-yy-zz
  • The demo:
      • Type of augmented reality: image-based
      • SDK: Vuforia
      • Mobile platform: Android (NDK)
      • 3D content rendering with OpenGL ES 1.1
      • The 3D model is in the .obj file format
    (3D Model / Marker)
  • Android NDK applications include Java code and resource files as well as C/C++ source code and sometimes assembly code. All native code is compiled into a dynamically linked library (.so file) and then called from Java in the main program using the JNI mechanism. NDK application development can be divided into five steps:
  • Creating a sub-directory called "jni" and placing all the native sources there.
      • Creating an Android.mk to describe our native sources to the NDK build system.
      • By default, the NDK build doesn't automatically build for the x86 ABI, so we also create a build file, Application.mk, to explicitly specify our build targets.
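A minimal sketch of what these two build files might look like for this demo. The module name and source file come from the slides; the linked libraries and ABI list are assumptions (the real Vuforia sample additionally imports the prebuilt Vuforia library):

```make
# jni/Android.mk -- minimal sketch, not the full Vuforia sample file
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := DevFestArDemo
LOCAL_SRC_FILES := ImageTargets.cpp
LOCAL_LDLIBS    := -llog -lGLESv1_CM   # Android logging + OpenGL ES 1.1
include $(BUILD_SHARED_LIBRARY)
```

```make
# jni/Application.mk -- list the ABIs to build for,
# since x86 is not built by default
APP_ABI := armeabi armeabi-v7a x86
```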
  • Building our native code by running the "ndk-build" script (in the NDK install directory) from our project's directory. Note that the build system automatically adds the proper prefix and suffix to the generated file; in other words, a shared library module named 'DevFestArDemo' will generate 'libDevFestArDemo.so'.
  • Loading the native libraries and making a few JNI calls out of the box. In the Java class, look for method declarations starting with "public native". (ImageTargets.java / ImageTargets.cpp)
  • We create an ImageTargets class to use and manage the augmented reality SDK, and initialize the application GUI elements that are not related to AR.
  • InitQCARTask: an async task to initialize QCAR asynchronously. Once QCAR is initialized, we initialize the image tracker.
  • Initializes AR application components. (ImageTargets.java)
  • This is the texture for our 3D model. We simply load the texture from the assets folder. (texture.png / ImageTargets.cpp)
  • Do application initialization in native code (e.g. registering callbacks). Create the texture for the 3D content and load it. (ImageTargets.cpp)
  • An async task to load the tracker data asynchronously. (ImageTargets.java)
  • In this step we define our marker in the ImageTargets.cpp file. But first, let me explain the general structure and working principle of markers.
  • Image targets can be created with the online Target Manager tool from JPG or PNG input images (only RGB or grayscale images are supported) 2 MB or less in size. Features extracted from these images are stored in a database, which can then be downloaded and packaged together with your application. The database can then be used by Vuforia for runtime comparisons.
  • Creating a device database / Adding a target
  • A feature is a sharp, spiked, chiseled detail in the image, such as the ones present in textured objects. The image analyzer represents features as small yellow crosses. Increase the number of these details in your image, and verify that the details create a non-repeating pattern. (Adding a target)
  • Not enough features: more visual details are required to increase the total number of features.
  • Poor feature distribution: features are present in some areas of the image but not in others. Features need to be distributed uniformly across the image.
  • Poor local contrast: the objects in the image need sharper edges or clearly defined shapes in order to provide better local contrast.
  • An image like this is not suitable for detection and tracking; we should consider an alternative image or significantly modify this one.
  • Although an image may contain enough features and good contrast, repetitive patterns hinder detection performance. For best results, choose an image without repeated motifs (even if rotated and scaled) or strong rotational symmetry.
  • Loading our data sets into the image tracker. (DevFestTest.xml / ImageTargets.cpp)
  • Starting the camera device, then starting the tracker to detect and track real-world objects in the camera video frames. (ImageTargets.cpp)
  • ImageTargets.cpp
  • OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU).
      • android.opengl.GLSurfaceView
      • android.opengl.GLSurfaceView.Renderer
          • onDrawFrame(GL10 gl): called to draw the current frame.
          • onSurfaceChanged(GL10 gl, int width, int height): called when the surface changes size.
          • onSurfaceCreated(GL10 gl, EGLConfig config): called when the surface is created or recreated.
  • First, for each active (visible) trackable we create a modelview matrix from its pose. Then we apply transforms to this matrix in order to scale and position our model. Finally we multiply it by the projection matrix to create the MVP (model-view-projection) matrix that brings the 3D content to the screen. Later in the code, we bind this MVP matrix to the uniform variable in our shader. Each vertex of our 3D model will be multiplied by this matrix, effectively bringing that vertex from world space to screen space (the transforms are actually object > world > eye > window).
  • Next, we need to feed the model arrays (vertices, normals, and texture coordinates) to our shader. We start by binding our shader, then assigning our model arrays to the attribute fields in our shader.
  • I am using the obj2opengl tool (by Heiko Behrens) for this. obj2opengl acts as a converter from model files to C/C++ headers: it is a Perl script that reads a Wavefront OBJ file describing a 3D object and writes a C/C++ include file describing the object, with the vertices of the faces, the normals, and the texture coordinates as simple arrays of floats, in a form suitable for use with OpenGL ES. It is compatible with Java and the libraries of the Android SDK.
  • In this step we create a folder named "Devfest" on the desktop and put our model and the converter script into it. We then install a Perl interpreter so that we can run Perl scripts on our computer. Finally, we open a Windows command prompt and run the converter.
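The conversion step might look like this at the Windows command prompt; a hedged sketch, since the exact script file name and options can differ depending on the obj2opengl version you downloaded:

```
cd Desktop\Devfest
perl obj2opengl.pl helicopter.obj
```

This should write a helicopter.h header next to the .obj file.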
  • Now we have a "helicopter.h" file, which contains the OpenGL ES vertex arrays, to use in our project. We add the helicopter.h file to the jni folder. (helicopter.h)
  • In this step we set the vertex arrays in our OpenGL code (ImageTargets.cpp) to use our 3D model. Include the generated arrays in ImageTargets.cpp, then set the input data arrays to draw:
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*) helicopterVerts);
    glNormalPointer(GL_FLOAT, 0, (const GLvoid*) helicopterNormals);
    glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*) helicopterTexCoords);
    glDrawArrays(GL_TRIANGLES, 0, helicopterNumVerts);
    (ImageTargets.cpp)
  • Now we add our activities and some permissions to our project's AndroidManifest.xml file. (AndroidManifest.xml)
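The camera permission is the essential one for an AR app. A minimal sketch of the relevant manifest entries; the activity declarations mentioned above are omitted here:

```xml
<!-- Inside <manifest>, before <application>: the AR app needs camera access. -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" android:required="true" />
```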
  • Now we run our augmented reality application.