This document contains the slides for COSC 426 Lecture 3 on augmented reality (AR) developer tools. It describes several low-level AR libraries, including ARToolKit, FLARToolKit, and SSTT, and then discusses AR authoring software such as osgART, Studierstube, MXRToolKit, DART, mARx, AMIRE, ComposAR, and iaTAR, which provide higher-level tools for building AR applications and experiences. The document also covers components of AR applications such as tracking and display, along with example ARToolKit applications.
3. Low Level AR Libraries
ARToolKit
Marker-based tracking
FLARToolKit
Flash version of ARToolKit
SSTT
Simple Spatial Template Tracking
Opira
Robust Natural Feature Tracking
4. What is ARToolKit?
Marker Tracking Library for AR applications
Open Source, Multi-platform (Linux, Windows, MacOS)
Overlays 3D virtual objects on real markers
Uses single tracking marker
Determines camera pose information (6 DOF)
ARToolKit Website
http://www.hitl.washington.edu/artoolkit/
http://artoolkit.sourceforge.net/
5. ARToolKit Software
ARToolKit version: 2.65 or later
Currently two license models
Open Source (GPL): ARToolKit 2.72
Commercial (ARToolWorks): ARToolKit 4.0
OS: Linux, Windows, MacOS X, iPhone/Android
Programming language: C
Related software
ARToolKit Professional: Commercial version
ARToolKitPlus: Advanced version
NyARToolkit: Java and C# version
FLARToolKit: Flash version
6. ARToolKit Family
ARToolKit
ARToolKit NFT
ARToolKit Plus
JARToolKit (Java)
ARToolKit (Symbian)
FLARToolKit (Flash)
NyToolKit - Java, C#, Android, WM
FLARManager (Flash)
7. ARToolKit contents
Libraries
libAR – tracking
libARvideo – video capturing
libARgsub – image/graphics drawing
libARmulti – multi-marker tracking
Utilities
Camera calibration
Marker training
8. ARToolKit Structure
Three key libraries:
AR32.lib – ARToolKit image processing functions
ARgsub32.lib – ARToolKit graphics functions
ARvideo.lib – DirectShow video capture class
9. Additional Software
To build an AR application you may need additional software
High level rendering library
Open VRML, Open Inventor, osgART, etc
Audio library
Fmod, etc
Peripheral support
10. What does ARToolKit Calculate?
Position of markers in the camera coordinates
Pose of markers in the camera coordinates
Output format
3x4 matrix format to represent the transformation matrix from the marker coordinates to the camera coordinates
16. An ARToolKit Application
Initialization
Load camera and pattern parameters
Main Loop
Step 1. Image capture and display
Step 2. Marker detection
Step 3. Marker identification
Step 4. Getting pose information
Step 5. Object interactions/simulation
Step 6. Display virtual objects
End Application
Camera shut down
17. Image capture: libARvideo
Return the pointer for captured image
ARUint8 *arVideoGetImage( void );
Pixel format and byte size are defined in config.h
#define AR_PIX_FORMAT_BGR
#define AR_PIX_SIZE 3
18. Graphics handling: libARgsub
Set up and clean up the graphics window
void argInit( ARParam *cparam, double zoom,
              int fullFlag, int xwin, int ywin,
              int hmd_flag );
void argCleanup( void );
cparam: camera parameter
zoom: zoom ratio
fullFlag: 0: normal, 1: full screen mode
xwin, ywin: create small window for debug
hmd_flag: 0: normal, 1: optical see-through mode
19. Graphics handling: libARgsub
Go into the iterative cycle
void argMainLoop(
    void (*mouseFunc)(int btn, int state, int x, int y),
    void (*keyFunc)(unsigned char key, int x, int y),
    void (*mainFunc)(void)
);
Swap buffers
void argSwapBuffers( void );
20. Graphics handling: libARgsub
Set the window for 2D drawing
void argDrawMode2D( void );
Set the window for 3D drawing
void argDrawMode3D( void );
void argDraw3dCamera( int xwin, int ywin );
Display image
void argDispImage( ARUint8 *image,
                   int xwin, int ywin );
21. Sample ARToolKit Applications
Ex. 1: Simple video display
Ex. 2: Detecting a marker
Ex. 3: Using a pattern
Ex. 4: Getting 3D information
Ex. 5: Virtual object overlay
22. Ex 1: Simple Video Display
Program : sample1.c
Key points
Loop structure
Video image handling
Camera parameter handling
Window setup
Mouse and keyboard handling
24. Sample1.c - mainLoop Function
if( (dataPtr = (ARUint8 *)arVideoGetImage()) == NULL ) {
    arUtilSleep(2);
    return;
}
argDrawMode2D();
argDispImage( dataPtr, 0, 0 );
arVideoCapNext();
argSwapBuffers();
25. Sample1.c – video initialization
Configure the video input
vconf = <video configuration string>
Start video capture
arVideoCapStart();
In init(), open the video
arVideoOpen( vconf );
arVideoInqSize(&xsize, &ysize);
When finished, close the video path
arVideoCapStop();
arVideoClose();
26. Changing Image Size
For input capture
vconf = "videoWidth=320,videoHeight=240";
Note – the camera must support this image size
For display
argInit( &cparam, 1.5, 0, 0, 0, 0 );
The second parameter means zoom ratio for display image size related to input image.
27. Ex. 2: Detecting a Marker
Program : sample2.c
Key points
Threshold value
Important external variables
arDebug – keep thresholded image
arImage – pointer for thresholded image
arImageProcMode – use 50% image for image processing
- AR_IMAGE_PROC_IN_FULL
- AR_IMAGE_PROC_IN_HALF
28. Sample2.c – marker detection
/* detect the markers in the video frame */
if( arDetectMarker(dataPtr, thresh,
                   &marker_info, &marker_num) < 0 ) {
    cleanup();
    exit(0);
}
for( i = 0; i < marker_num; i++ ) {
    argDrawSquare(marker_info[i].vertex, 0, 0);
}
29. Sample2.c – marker_info structure
typedef struct {
    int    area;
    int    id;
    int    dir;
    double cf;
    double pos[2];
    double line[4][3];
    double vertex[4][2];
} ARMarkerInfo;
30. Ex. 3: Using a Pattern
Program : sample3.c
Key points
Pattern files loading
Structure of marker information
- Region features
- Pattern id, direction
- Certainty factor
Marker identification
31. Making a pattern template
Use of utility program: mk_patt.exe
Show the pattern
Put the corner of red line segments on the left-top vertex of the marker
Pattern stored as a template in a file
1:2:1 ratio determines the pattern region used
33. Checking for known patterns
/* check for known patterns */
k = -1;
for( i = 0; i < marker_num; i++ ) {
    if( marker_info[i].id == patt_id ) {
        /* you've found a pattern */
        printf("Found pattern: %d\n", patt_id);
        if( k == -1 ) k = i;
        else
            /* make sure you have the best pattern
               (highest confidence factor) */
            if( marker_info[k].cf < marker_info[i].cf )
                k = i;
    }
}
34. Ex. 4 – Getting 3D information
Program : sample4.c
Key points
Definition of a real marker
Transformation matrix
- Rotation component
- Translation component
36. Finding the Camera Position
This function sets the transformation matrix from marker to camera into marker_trans[3][4].
arGetTransMat(&marker_info[k], marker_center,
              marker_width, marker_trans);
You can see the position information in the values of marker_trans[3][4].
Xpos = marker_trans[0][3];
Ypos = marker_trans[1][3];
Zpos = marker_trans[2][3];
38. Ex. 5 - Virtual Object Display
Program : sample5.c
Key points
OpenGL parameter setting
Setup of projection matrix
Setup of modelview matrix
39. Appending your own OpenGL code
Set the camera parameters to OpenGL Projection matrix.
argDrawMode3D();
argDraw3dCamera( 0, 0 );
Set the transformation matrix from the marker to the camera to the OpenGL ModelView matrix.
argConvGlpara(marker_trans, gl_para);
glMatrixMode(GL_MODELVIEW);
glLoadMatrixd( gl_para );
After calling these functions, your OpenGL objects are drawn in the real marker coordinates.
40. 3D CG Model Rendering
ARToolKit does not have a function to handle 3D CG models.
3rd party CG rendering software should be employed.
OpenVRML
OpenSceneGraph
etc
42. Loading Multiple Patterns
Sample File: LoadMulti.c
Uses object.c to load
Object Structure
typedef struct {
    char   name[256];
    int    id;
    int    visible;
    double marker_coord[4][2];
    double trans[3][4];
    double marker_width;
    double marker_center[2];
} ObjectData_T;
43. Finding Multiple Transforms
Create object list
ObjectData_T *object;
Read in objects - in init( )
read_ObjData( char *name, int *objectnum );
Find transform – in mainLoop( )
for( i = 0; i < objectnum; i++ ) {
    ..Check patterns
    ..Find transforms for each marker
}
44. Drawing Multiple Objects
Send the object list to the draw function
draw( object, objectnum );
Draw each object individually
for( i = 0; i < objectnum; i++ ) {
    if( object[i].visible == 0 ) continue;
    argConvGlpara(object[i].trans, gl_para);
    draw_object( object[i].id, gl_para );
}
45. Limitations of ARToolKit
Partial occlusions cause tracking failure
Affected by lighting and shadows
Tracking range depends on marker size
Performance depends on number of markers
cf. ARTag, ARToolKitPlus
Pose accuracy depends on distance to marker
Pose accuracy depends on angle to marker
46. ARToolKit in the World
Hundreds of projects
Large research community
47. FLARToolKit
Flash AS3 version of the ARToolKit
(was ported from NyARToolkit, the Java version of the ARToolkit)
Enables augmented reality in the browser
Uses Papervision3D as its 3D engine
Available at http://saqoosha.net/
Dual license, GPL and commercial license
54. AR Authoring
Software Libraries
osgART, Studierstube, MXRToolKit
Plug-ins to existing software
DART (Macromedia Director), mARx
Stand Alone
AMIRE, ComposAR, etc
Next Generation
iaTAR (Tangible AR)
56. BuildAR
http://www.buildar.co.nz/
Stand alone application
Visual interface for AR model viewing application
Enables non-programmers to build AR scenes
57. ImageTclAR
Adds AR components to ImageTcl
http://metlab.cse.msu.edu/imagetclar/
Modular Library (Scripting, Tcl)
Supports several tracking systems (vision, magnetic, inertial)
Easy to learn but little support, small community
58. DART
Designers AR Toolkit
http://www.cc.gatech.edu/dart/
http://www cc gatech edu/dart/
AR plug-in for Macromedia Director
Developed for designers
Visual programming
Scripting interface
59. Studierstube
Complete authoring tool
http://studierstube.icg.tu-graz.ac.at/
Framework (Low Level Programming, C++)
Modularity, Extensibility, Scalability, Heterogeneity
Support for wide range of trackers, displays, input
60. Metaio UnifEye SDK
Complete commercial authoring platform
http://www.metaio.com/products/
Offers viewer and editor tools
Visual interface and low level SDK
Delivery on desktop or mobile platforms
61. OSGART Programming Library
Integration of ARToolKit with a High-Level
Rendering Engine (OpenSceneGraph)
OSGART= OpenSceneGraph + ARToolKit
Supporting Geometric + Photometric Registration
62. osgART:Features
C++ (but also Python, Lua, etc.)
Multiple Video Input supports:
Direct (Firewire/USB Camera), Files, Network by
ARvideo, PtGrey, CVCam, VideoWrapper, etc.
Benefits of Open Scene Graph
Rendering Engine, Plug-ins, etc.
66. What is a Scene Graph?
Tree-like structure for organising a virtual world
e.g. VRML
Hierarchy of nodes that define:
Groups (and Switches, Sequences etc…)
Transformations
Projections
Geometry
…
And states and attributes that define:
Materials and textures
Lighting and blending
…
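The hierarchy idea can be illustrated with a toy scene graph. This is a sketch only: a real engine such as OpenSceneGraph stores full 4x4 matrices, states, and attributes, while here each node carries just a local translation:

```cpp
#include <array>
#include <memory>
#include <vector>

// Toy scene-graph node: a local translation plus children. A real node
// (e.g. osg::MatrixTransform) carries a full 4x4 matrix and state sets.
struct Node {
    std::array<double, 3> local{{0, 0, 0}};
    std::vector<std::unique_ptr<Node>> children;

    Node* addChild(double x, double y, double z) {
        children.push_back(std::unique_ptr<Node>(new Node));
        children.back()->local = {{x, y, z}};
        return children.back().get();
    }
};

// World position of a node = composition of all transforms on the path
// from the root; this is what a scene-graph traversal computes.
std::array<double, 3> worldPosition(const std::vector<const Node*>& path) {
    std::array<double, 3> p{{0, 0, 0}};
    for (const Node* n : path)
        for (int i = 0; i < 3; ++i) p[i] += n->local[i];
    return p;
}
```

Moving a parent node moves all of its children with it, which is why grouped transforms make structured worlds easy to manage.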
68. Benefits of a Scene Graph
Performance
Structuring data facilitates optimization
- Culling, state management, etc…
Abstraction
Underlying graphics pipeline is
hidden
Low-level programming (“how do I display this?”) replaced with
high-level concepts (“what do I want to display?”)
(Image: sgi)
69. About Open Scene Graph
http://www.openscenegraph.org/
Open-source scene graph implementation
Based on OpenGL
Object-oriented C++ following design pattern principles
Used for simulation, games, research, and industrial projects
Active development community
Maintained by Robert Osfield
~2000 mailing list subscribers
Documentation project: www.osgbooks.com
Uses the OSG Public License (similar to LGPL)
70. About Open Scene Graph (2)
Example uses: Pirates of the XXI Century, Flightgear, SCANeR,
3DVRII Research Institute, EOR, VRlab Umeå University
71. Open Scene Graph Features
Plugins for loading and saving
3D: 3D Studio (.3ds), OpenFlight (.flt), Wavefront (.obj)…
2D: .png, .jpg, .bmp, QuickTime movies
NodeKits to extend functionality
e.g. osgShadow
Cross-platform support for:
Window management (osgViewer)
Threading (OpenThreads)
72. Open Scene Graph Architecture
Scene graph and rendering functionality at the core
Plugins read and write 2D image and 3D model files
NodeKits extend core functionality, exposing higher-level node types
Inter-operability with other environments, e.g. Python
73. Some Open Scene Graph Demos
osgviewer osgmotionblur osgparticle
osgreflect osgdistortion osgfxbrowser
You may want to get the OSG data package:
Via SVN: http://www.openscenegraph.org/svn/osg/OpenSceneGraph-Data/trunk
74. Learning OSG
Check out the Quick Start Guide
Free PDF download at http://osgbooks.com/, Physical copy $13US
Join the mailing list:
http://www.openscenegraph.org/projects/osg/wiki/MailingLists
Browse the website: http://www.openscenegraph.org/projects/osg
Use the forum: http://forum.openscenegraph.org
Study the examples
Read the source?
76. What is osgART?
osgART adds AR to Open Scene Graph
Further developed and enhanced by:
Julian Looser
Hartmut Seichter
Raphael Grasset
Current version 2.0, Open Source
http://www.osgart.org
77. osgART Approach: Basic Scene Graph
Root
Transform
[  0.988  -0.031  -0.145    10.939
  -0.048   0.857  -0.512    29.859
   0.141   0.513   0.846  -226.733
   0       0       0         1     ]
3D Object
To add video see-through AR:
Integrate live video
Apply correct projection matrix
Update tracked transformations in realtime
79. osgART Approach: AR Scene Graph
Root
  Video Layer
    Video Geode
  Virtual Camera
    Transform
      3D Object
80. osgART Approach: AR Scene Graph
Root
  Video Layer: orthographic projection
    Video Geode: full-screen quad with live texture updated from video source
  Virtual Camera: projection matrix from tracker calibration
    Transform: transformation matrix updated from marker tracking in realtime
      3D Object
82. osgART Architecture
Like any video see-through AR library, osgART requires video
input and tracking capabilities.
Video Source (e.g. DirectShow) and Tracking Module (e.g. libAR.lib)
feed the AR Library, which the Application builds on
83. osgART Architecture
osgART uses a plugin architecture so that video sources and tracking
technologies can be plugged in as necessary
Video Plugins:
OpenCVVideo, VidCapture, CMU1394, PointGrey SDK, VidereDesign,
VideoWrapper, VideoInput, VideoSource, DSVL, Intranel RTSP
Tracker Plugins:
ARToolKit4, ARToolKitPlus, MXRToolKit, ARLib,
bazAR (work in progress), ARTag (work in progress)
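The plugin pattern can be illustrated with a self-contained toy registry. This is not osgART's actual API (its PluginManager loads shared libraries by string name at runtime, as the tutorial code later in the deck shows); the types here are invented for illustration:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Toy illustration of the plugin idea: components registered under
// string keys and created on demand, so new video sources or trackers
// can be added without changing application code.
struct Video {
    virtual ~Video() {}
    virtual std::string name() const = 0;
};

struct PluginRegistry {
    std::map<std::string, std::function<std::unique_ptr<Video>()>> factories;

    void add(const std::string& key,
             std::function<std::unique_ptr<Video>()> factory) {
        factories[key] = std::move(factory);
    }
    std::unique_ptr<Video> create(const std::string& key) const {
        auto it = factories.find(key);
        if (it == factories.end()) return nullptr;  // unknown plugin
        return it->second();
    }
};
```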
84. Basic osgART Tutorial
Develop a working osgART application from scratch.
Use ARToolKit 2.72 library for tracking and video
capture
85. osgART Tutorial 1: Basic OSG Viewer
Start with the standard Open Scene Graph Viewer
We will modify this to do AR!
86. osgART Tutorial 1: Basic OSG Viewer
The basic osgViewer…
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
int main(int argc, char* argv[]) {
// Create a viewer
osgViewer::Viewer viewer;
// Create a root node
osg::ref_ptr<osg::Group> root = new osg::Group;
// Attach root node to the viewer
viewer.setSceneData(root.get());
// Add relevant event handlers to the viewer
viewer.addEventHandler(new osgViewer::StatsHandler);
viewer.addEventHandler(new osgViewer::WindowSizeHandler);
viewer.addEventHandler(new osgViewer::ThreadingHandler);
viewer.addEventHandler(new osgViewer::HelpHandler);
// Run the viewer and exit the program when the viewer is closed
return viewer.run();
}
87. osgART Tutorial 2: Adding Video
Add a video plugin
Load, configure, start video capture…
Add a video background
Create, link to video, add to scene-graph
88. osgART Tutorial 2: Adding Video
New code to load and configure a Video Plugin:
// Preload the video and tracker
int _video_id = osgART::PluginManager::getInstance()->load("osgart_video_artoolkit2");
// Load a video plugin.
osg::ref_ptr<osgART::Video> video =
dynamic_cast<osgART::Video*>(osgART::PluginManager::getInstance()->get(_video_id));
// Check if an instance of the video stream could be created
if (!video.valid()) {
// Without video an AR application can not work. Quit if none found.
osg::notify(osg::FATAL) << "Could not initialize video plugin!" << std::endl;
exit(-1);
}
// Open the video. This will not yet start the video stream but will
// get information about the format of the video which is essential
// for the connected tracker.
video->open();
89. osgART Tutorial 2: Adding Video
New code to add a live video background
osg::Group* createImageBackground(osg::Image* video) {
osgART::VideoLayer* _layer = new osgART::VideoLayer();
_layer->setSize(*video);
osgART::VideoGeode* _geode = new osgART::VideoGeode(osgART::VideoGeode::USE_TEXTURE_2D, video);
addTexturedQuad(*_geode, video->s(), video->t());
_layer->addChild(_geode);
return _layer;
}
In the main function…
osg::ref_ptr<osg::Group> videoBackground = createImageBackground(video.get());
videoBackground->getOrCreateStateSet()->setRenderBinDetails(0, "RenderBin");
root->addChild(videoBackground.get());
video->start();
90. osgART Tutorial 3: Tracking
Add a Tracker plugin
Load, configure, link to video
Add a Marker to track
Load, activate
Tracked node
Create, link with marker via tracking callbacks
Print out the tracking data
91. osgART Tutorial 3: Tracking
Load a tracking plugin and associate it with the video plugin
int _tracker_id = osgART::PluginManager::getInstance()->load("osgart_tracker_artoolkit2");
osg::ref_ptr<osgART::Tracker> tracker =
dynamic_cast<osgART::Tracker*>(osgART::PluginManager::getInstance()->get(_tracker_id));
if (!tracker.valid()) {
// Without tracker an AR application can not work. Quit if none found.
osg::notify(osg::FATAL) << "Could not initialize tracker plugin!" << std::endl;
exit(-1);
}
// get the tracker calibration object
osg::ref_ptr<osgART::Calibration> calibration = tracker->getOrCreateCalibration();
// load a calibration file
if (!calibration->load("data/camera_para.dat"))
{
// the calibration file was non-existing or couldn't be loaded
osg::notify(osg::FATAL) << "Non-existing or incompatible calibration file" << std::endl;
exit(-1);
}
// set the image source for the tracker
tracker->setImage(video.get());
osgART::TrackerCallback::addOrSet(root.get(), tracker.get());
// create the virtual camera and add it to the scene
osg::ref_ptr<osg::Camera> cam = calibration->createCamera();
root->addChild(cam.get());
92. osgART Tutorial 3: Tracking
Load a marker and activate it
Associate it with a transformation node (via event callbacks)
Add the transformation node to the virtual camera node
osg::ref_ptr<osgART::Marker> marker = tracker->addMarker("single;data/patt.hiro;80;0;0");
if (!marker.valid())
{
// Without marker an AR application can not work. Quit if none found.
osg::notify(osg::FATAL) << "Could not add marker!" << std::endl;
exit(-1);
}
marker->setActive(true);
osg::ref_ptr<osg::MatrixTransform> arTransform = new osg::MatrixTransform();
osgART::attachDefaultEventCallbacks(arTransform.get(), marker.get());
cam->addChild(arTransform.get());
Add a debug callback to print out information about the tracked marker
osgART::addEventCallback(arTransform.get(), new osgART::MarkerDebugCallback(marker.get()));
94. osgART Tutorial 4: Adding Content
Now put the tracking data to use!
Add content to the tracked transform
Basic cube code
arTransform->addChild(osgART::testCube());
arTransform->getOrCreateStateSet()->setRenderBinDetails(100, "RenderBin");
95. osgART Tutorial 5: Adding 3D Model
Open Scene Graph can load some 3D formats directly:
e.g. Wavefront (.obj), OpenFlight (.flt), 3D Studio (.3ds), COLLADA
Others need to be converted
Support for some formats is much better than others
e.g. OpenFlight is good; .3ds is hit and miss.
Recommend native .osg and .ive formats
.osg – ASCII representation of scene graph
.ive – Binary OSG file. Can contain textures.
osgExp : Exporter for 3DS Max is a good choice
http://sourceforge.net/projects/osgmaxexp
Otherwise .3ds files from TurboSquid can work
96. osgART Tutorial 5: Adding 3D Model
Replace the simple cube with a 3D model
Models are loaded using the osgDB::readNodeFile() function
std::string filename = "media/hollow_cube.osg";
arTransform->addChild(osgDB::readNodeFile(filename));
Export to .osg
3D Studio Max
osgART
Note: Scale is important. Units are in mm.
97. osgART Tutorial 6: Multiple Markers
Repeat the process so far to track more than
one marker simultaneously
98. osgART Tutorial 6: Multiple Markers
Repeat the process so far to track more than one marker
Load and activate two markers
osg::ref_ptr<osgART::Marker> markerA = tracker->addMarker("single;data/patt.hiro;80;0;0");
markerA->setActive(true);
osg::ref_ptr<osgART::Marker> markerB = tracker->addMarker("single;data/patt.kanji;80;0;0");
markerB->setActive(true);
Create two transformations, attach callbacks, and add models
osg::ref_ptr<osg::MatrixTransform> arTransformA = new osg::MatrixTransform();
osgART::attachDefaultEventCallbacks(arTransformA.get(), markerA.get());
arTransformA->addChild(osgDB::readNodeFile("media/hitl_logo.osg"));
arTransformA->getOrCreateStateSet()->setRenderBinDetails(100, "RenderBin");
cam->addChild(arTransformA.get());
osg::ref_ptr<osg::MatrixTransform> arTransformB = new osg::MatrixTransform();
osgART::attachDefaultEventCallbacks(arTransformB.get(), markerB.get());
arTransformB->addChild(osgDB::readNodeFile("media/gist_logo.osg"));
arTransformB->getOrCreateStateSet()->setRenderBinDetails(100, "RenderBin");
cam->addChild(arTransformB.get());
100. Basic osgART Tutorial: Summary
Standard OSG Viewer Addition of Video Addition of Tracking
Addition of basic 3D graphics Addition of 3D Model Multiple Markers
102. FLARManager:
Makes building FLARToolkit apps easier
Is open-source, with a free and commercial license
Is designed to allow exploration of both augmented
reality and alternative controllers
Was initiated by Eric Socolofsky, developed with
contributions from the FLARToolkit community
Decouples FLARToolkit from Papervision3D
Configuration without recompilation, via xml config
103. FLARManager: features
Gives more control over application environment
Provides multiple input options
Robust multiple marker management
Supports multiple 3D frameworks
Offers features for optimization
Allows for customization
105. Websites
Software Download
http://artoolkit.sourceforge.net/
ARToolKit Documentation
http://www.hitl.washington.edu/artoolkit/
ARToolKit Forum
http://www.hitlabnz.org/wiki/Forum
ARToolworks Inc
http://www.artoolworks.com/
106. ARToolKit Plus
http://studierstube.icg.tu-graz.ac.at/handheld_ar/artoolkitplus.php
osgART
http://www.osgart.org/
FLARToolKit
http://www.libspark.org/wiki/saqoosha/FLARToolKit/
FLARManager
http://words.transmote.com/wp/flarmanager/
107. Books
Interactive Environments with Open-Source Software: 3D Walkthroughs
and Augmented Reality for Architects with Blender 2.43, DART 3.0
and ARToolKit 2.72 by Wolfgang Höhl
A Hitchhiker's Guide to Virtual Reality by Karen McMenemy and Stuart Ferguson
108. More Information
• Mark Billinghurst
– mark.billinghurst@hitlabnz.org
• Websites
– www.hitlabnz.org