A lecture on research directions in Augmented Reality as part of the COSC 426 class on AR. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
In these slides you can find the basic concepts of natural user interfaces, from the evolution of classic desktop-centered applications to more intuitive and natural ones.
Augmented Reality (AR) will be the next mobile computing platform, seamlessly merging the real world with virtual objects to support realistic, intelligent, and personalized experiences. Making this vision possible requires the next level of immersion, artificial intelligence, and connectivity within the thermal and power envelope of wearable glasses.
Learn more at: https://www.qualcomm.com/invention/cognitive-technologies/immersive-experiences/augmented-reality
Like every new technological convergence, Augmented Reality redefines the experience of the body through space, and of space through codes. The buzz surrounding AR today marks a point of convergence between mature technologies, an overload of the present's potential.
Virtual Reality refers to a high-end user interface that involves real-time simulation and interaction through multiple sensory channels. Virtual Reality is often used to describe a wide variety of applications, commonly associated with its immersive, highly visual 3D environments. The development of CAD software, graphics hardware acceleration, head mounted displays, data gloves and miniaturization have helped popularize the concept. Jaron Lanier coined the term Virtual Reality in 1987. Today Virtual Reality plays a big part in the everyday lives of the world's population.
Presentation for Handheld Librarian 3 as an intro to augmented reality services including what's happening in the fields of advertising, marketing, retail, shipping, gaming, and the wealth of GIS information overlay currently available. Social issues are briefly covered as well.
User Interfaces and User Centered Design Techniques for Augmented Reality and... — Stuart Murphy
We chose to explore virtual and augmented reality (VR and AR) due to their recent emergence into the mainstream areas of gaming, mobile applications and various other systems. We felt it important to distinguish between VR and AR in both areas of interaction design and user interface evaluation and creation techniques. As it is a topic of great passion for us, we wanted to convey the possibilities that this medium has to offer for interaction designers and UI developers.
AWE 2014 - The Glass Class: Designing Wearable Interfaces — Mark Billinghurst
Tutorial taught at the AWE 2014 conference, by Mark Billinghurst and Rob Lindeman on May 27th 2014. It provides an overview of how to design interfaces for wearable computers, such as Google Glass.
Google Glass, The META and Co. - How to calibrate your Optical See-Through He... — Jens Grubert
Slides from our ISMAR 2014 tutorial http://stctutorial.icg.tugraz.at/
Abstract:
Head Mounted Displays such as Google Glass and the META have the potential to spur consumer-oriented Optical See-Through Augmented Reality applications. A correct spatial registration of those displays relative to a user’s eye(s) is an essential problem for any HMD-based AR application.
At our ISMAR 2014 tutorial we provide an overview of established and novel approaches for the calibration of those displays (OST calibration) including hands on experience in which participants will calibrate such head mounted displays.
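The tutorial's own calibration code isn't reproduced here, but the least-squares step behind SPAAM-style OST calibration can be sketched: the user aligns an on-screen crosshair with known 3D points, and the 3x4 eye-display projection is recovered by direct linear transformation (DLT). This is a hedged illustration under my own assumptions; `solve_projection`, the synthetic matrix and the point set are invented for the example, not the tutorial's actual procedure.

```python
def solve_projection(pts3d, pts2d):
    """Recover a 3x4 projection P (with P[2][3] fixed to 1) from >= 6
    3D-to-2D crosshair alignments via inhomogeneous DLT, solving the
    normal equations with Gaussian elimination."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z]); b.append(v)
    n = 11
    # normal equations M x = r
    M = [[sum(row[i]*row[j] for row in A) for j in range(n)] for i in range(n)]
    r = [sum(row[i]*bi for row, bi in zip(A, b)) for i in range(n)]
    for c in range(n):                       # elimination with partial pivoting
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p], r[c], r[p] = M[p], M[c], r[p], r[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            M[k] = [mk - f*mc for mk, mc in zip(M[k], M[c])]
            r[k] -= f * r[c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):           # back substitution
        x[k] = (r[k] - sum(M[k][j]*x[j] for j in range(k + 1, n))) / M[k][k]
    return [x[0:4], x[4:8], x[8:11] + [1.0]]

def project(P, pt):
    """Apply a 3x4 projection to a 3D point, returning 2D display coords."""
    X, Y, Z = pt
    w = P[2][0]*X + P[2][1]*Y + P[2][2]*Z + P[2][3]
    return ((P[0][0]*X + P[0][1]*Y + P[0][2]*Z + P[0][3]) / w,
            (P[1][0]*X + P[1][1]*Y + P[1][2]*Z + P[1][3]) / w)
```

In practice each alignment is one crosshair/landmark pair; since P is only defined up to scale, fixing its last entry to 1 is harmless as long as the true entry is nonzero.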
The final lecture in the COSC 426 graduate course in Augmented Reality. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury on Sept. 19th 2012
VENTURI is a collaborative European project targeting the shortcomings of current Augmented Reality design; bringing together the forces of mobile platform manufacturers, technology providers, content creators, and researchers in the field.
VENTURI aims to place engaging, innovative and useful mixed reality experiences into the hands of ordinary people, by co-evolving next generation AR platforms and algorithms.
VENTURI plans to create a seamless and optimal user experience through a thorough analysis and evolution of the AR technology chain, spanning device hardware capabilities to user satisfaction.
The fifth lecture from the Augmented Reality Summer School taught by Mark Billinghurst at the University of South Australia, February 15th - 19th, 2016. This provides an overview of AR research directions.
In this talk, I will introduce the new concept of "ubiquitous Virtual Reality (UVR)" from the viewpoint of the Metaverse, and then explain how to realize Virtual Reality in physical space with context-aware Augmented Reality. In a UVR-enabled space it is possible to personalize content using the user's context as well as the environmental context, and then selectively share the augmented object, with additional text or 3D content, according to the user's social relationships. I will also explain some core technologies developed in the GIST U-VR Lab over the last 5 years and demonstrate U-VR applications such as DigiLog Book, Digilog Miniature, CAMAR Tour, etc.
A Translation Device for the Vision Based Sign Language — ijsrd.com
Sign language is very important for people with hearing and speaking deficiencies, generally called Deaf and Mute. It is the only mode of communication for such people to convey their messages, and it becomes very important for others to understand their language. This paper proposes a method for an application that recognizes the different signs of Indian Sign Language. The images show the palm side of the right and left hand and are loaded at runtime. The method has been developed for a single user. Real-time images are captured and stored in a directory; feature extraction is then performed on the most recently captured image to identify which sign the user articulated, using the SIFT (Scale-Invariant Feature Transform) algorithm. The input image is compared against the images already stored for each letter in the directory or database, and the result is produced from the matched key points; the outputs can be seen in the sections below. There are 26 signs in Indian Sign Language, one per alphabet letter, of which the proposed algorithm gave 95% accurate results for 9 letters, with their images captured at every possible angle and distance.
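The paper's SIFT pipeline isn't listed here, but the matching step it relies on, nearest-neighbour descriptor matching with Lowe's ratio test followed by voting for the letter with the most matches, can be sketched in plain Python. The toy 2D descriptors and the names `ratio_matches`/`classify_sign` are my own illustrative assumptions, not the authors' code.

```python
import math

def ratio_matches(query, stored, ratio=0.8):
    """Count query descriptors whose best match in `stored` passes
    Lowe's ratio test: best distance < ratio * second-best distance."""
    count = 0
    for q in query:
        dists = sorted(math.dist(q, s) for s in stored)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            count += 1
    return count

def classify_sign(query_desc, letter_db):
    """Pick the letter whose stored descriptor set attracts the most
    ratio-test matches from the captured image's descriptors."""
    return max(letter_db, key=lambda letter: ratio_matches(query_desc, letter_db[letter]))
```

With real images the descriptors would be 128-dimensional SIFT vectors; the matching and voting logic is unchanged.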
Building Mobile AR Applications Using the Outdoor AR Library (Part 1) — Mark Billinghurst
The first part of a tutorial given on November 21st at the MGIA symposium at Siggraph Asia 2013. This shows how to build Outdoor AR applications using the HIT Lab NZ's Outdoor AR library. For more information see http://www.hitlabnz.org/index.php/products/mobile-ar-framework/334
WorldKit: Rapid and Easy Creation of Ad-hoc Interactive Applications on Everyday Surfaces.
Instant access to computing, when and where we need it, has long been one of the aims of research areas such as ubiquitous computing. In this paper, we describe the WorldKit system, which makes use of a paired depth camera and projector to make ordinary surfaces instantly interactive. Using this system, touch-based interactivity can, without prior calibration, be placed on nearly any unmodified surface literally with a wave of the hand, as can other new forms of sensed interaction. From a user perspective, such interfaces are easy enough to instantiate that they could, if desired, be recreated or modified "each time we sat down" by "painting" them next to us. From the programmer's perspective, our system encapsulates these capabilities in a simple set of abstractions that make the creation of interfaces quick and easy. Further, it is extensible to new, custom interactors in a way that closely mimics conventional 2D graphical user interfaces, hiding much of the complexity of working in this new domain. We detail the hardware and software implementation of our system, and several example applications built using the library.
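WorldKit's implementation isn't shown in the abstract, but the basic depth-camera trick behind projected touch sensing can be sketched: snapshot the empty surface's depth once, then flag pixels inside an interactor region whose current depth sits a finger's thickness above the surface. The grid format, thresholds and the name `touch_active` are illustrative assumptions, not WorldKit's actual API.

```python
def touch_active(background, frame, region, near_mm=5, far_mm=40, min_pixels=3):
    """True if enough pixels in `region` (x0, y0, x1, y1, exclusive ends)
    are closer to the camera than the bare surface by a finger-like
    amount: near_mm <= (surface depth - current depth) <= far_mm.
    Depth maps are row-major grids of millimetre readings."""
    x0, y0, x1, y1 = region
    hits = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            lift = background[y][x] - frame[y][x]  # mm above the surface
            if near_mm <= lift <= far_mm:
                hits += 1
    return hits >= min_pixels
```

The lower bound rejects sensor noise at surface level, the upper bound rejects a hand merely hovering over the interactor, which is what lets touch work without per-surface calibration.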
Keynote talk by Mark Billinghurst at the 9th XR-Metaverse conference in Busan, South Korea. The talk was given on May 20th, 2024. It talks about progress on achieving the Metaverse vision laid out in Neal Stephenson's book, Snow Crash.
These are slides from the Defence Industry event organized by the Australian Research Centre for Interactive and Virtual Environments (IVE). This was held on April 18th 2024, and showcased IVE research capabilities to the South Australian Defence industry.
This is a guest lecture given by Mark Billinghurst at the University of Sydney on March 27th 2024. It discusses some future research directions for Augmented Reality.
Presentation given by Mark Billinghurst at the 2024 XR Spring Summer School on March 7 2024. This lecture talks about different evaluation methods that can be used for Social XR/AR/VR experiences.
Empathic Computing: Delivering the Potential of the Metaverse — Mark Billinghurst
Invited guest lecture by Mark Billinghurst given at the MIT Media Laboratory on November 21st 2023. This was given as part of Professor Hiroshi Ishii's class on Tangible Media.
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration — Mark Billinghurst
A talk given by Mark Billinghurst in the CLIPE workshop in Tübingen, Germany on April 27th 2023. This talk describes how virtual avatars can be used to support remote collaboration.
Empathic Computing: Designing for the Broader Metaverse — Mark Billinghurst
Keynote talk given by Mark Billinghurst at the CHI 2023 Workshop on Towards an Inclusive and Accessible Metaverse. The talk was given on April 23rd 2023.
Lecture 6 of the COMP 4010 course on AR/VR. This lecture is about designing AR systems. This was taught by Mark Billinghurst at the University of South Australia on September 1st 2022.
Keynote speech given by Mark Billinghurst at the ISS 2022 conference. Presented on November 22nd, 2022. This keynote outlines some research opportunities in the Metaverse.
Lecture 5 in the 2022 COMP 4010 lecture series. This lecture is about AR prototyping tools and techniques. The lecture was given by Mark Billinghurst from University of South Australia in 2022.
Lecture 4 in the 2022 COMP 4010 lecture series on AR/VR. This lecture is about AR Interaction techniques. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 3 in the 2022 COMP 4010 lecture series on AR/VR. This lecture provides an introduction for AR Technology. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 2 in the 2022 COMP 4010 Lecture series on AR/VR and XR. This lecture is about human perception for AR/VR/XR experiences. This was taught by Mark Billinghurst at the University of South Australia in 2022.
Lecture 1 for the 2022 COMP 4010 course on AR and VR. This course was taught by Mark Billinghurst at the University of South Australia in 2022. This lecture provides an introduction to AR, VR and XR.
Empathic Computing and Collaborative Immersive Analytics — Mark Billinghurst
Short talk by Mark Billinghurst on Empathic Computing and Collaborative Immersive Analytics, presented on July 28th 2022 at the Siggraph 2022 conference.
Lecture given by Mark Billinghurst on June 18th 2022 about how the Metaverse can be used for corporate training. In particular how combining AR, VR and other Metaverse elements can be used to provide new types of learning experiences.
3. The Future is with us
It takes at least 20 years for new technologies to go from the lab to the lounge.
"The technologies that will significantly affect our lives over the next 10 years have been around for a decade. The future is with us. The trick is learning how to spot it and use it. The commercialization of research, in other words, is far more about prospecting than alchemy."
Bill Buxton, Oct 11th 2004
5. Research Directions
Components: markerless tracking, hybrid tracking; displays, input devices
Tools: authoring tools, user generated content tools
Applications: interaction techniques/metaphors
Experiences: user evaluation, novel AR/MR experiences
7. Occlusion with See-through HMD
The Problem:
Occluding real objects with virtual
Occluding virtual objects with real
[Figure: real scene vs. current see-through HMD]
8. ELMO (Kiyokawa 2001)
Occlusive see-through HMD
Masking LCD
Real time range finding
10. ELMO Design
[Diagram: virtual images from LCD, depth sensing, LCD mask, real world, optical combiner]
Use LCD mask to block real world
Depth sensing for occluding virtual images
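ELMO's per-pixel masking decision can be sketched simply: wherever rendered virtual content is nearer than the range finder's measurement of the real world, the masking LCD goes opaque so the real scene cannot shine through the virtual object; everywhere else it stays clear. The millimetre depth grids and the `lcd_mask` name are my own illustrative assumptions, not Kiyokawa's implementation.

```python
def lcd_mask(real_depth, virtual_depth):
    """One mask bit per pixel: True = opaque (block the real world)
    where virtual content exists and is closer than the real scene.
    None in virtual_depth means no virtual content at that pixel,
    so the real world stays visible there."""
    return [[v is not None and v < r
             for v, r in zip(vrow, rrow)]
            for vrow, rrow in zip(virtual_depth, real_depth)]
```

The same comparison run the other way, real nearer than virtual, is what tells the renderer to cut the virtual image, giving mutual occlusion in both directions.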
14. Mobile BuildAR
Ideal authoring tool
Develop on PC, deploy on handheld
[Pipeline diagram: AR Scene authored on PC (BuildAR, Python) exported as XML; AR Player on mobile phone (Edgelib, stbES, Symbian/WM)]
17. Future Directions
Interaction Techniques
Input techniques: 3D vs. 2D input; pen/buttons/gestures
Collaboration techniques: simultaneous access to AR content
User studies…
24. Lucid Touch
Microsoft Research & Mitsubishi Electric Research Labs
Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C. LucidTouch: A See-Through Mobile Device. In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007, pp. 269-278.
26. Auditory Modalities
Auditory: auditory icons, earcons, speech synthesis/recognition
Nomadic Radio (Sawhney)
- combines spatialized audio
- auditory cues
- speech synthesis/recognition
27. Gestural interfaces
1. Micro-gestures (unistroke, smartPad)
2. Device-based gestures (tilt based examples)
3. Embodied interaction (eye toy)
28. Haptic Modalities
Haptic interfaces
Simple uses in mobiles? (vibration instead of ringtone)
Sony's Touchengine
- physiological experiments show you can perceive two stimuli 5 ms apart, and spaced as low as 0.2 microns
[Diagram: Touchengine n-layer actuator, 4 μm layers, 28 μm stack]
31. Multimodal Input
Combining speech and gesture builds on the strengths of each
- Speech: mode selection, group selection
- Gesture: direct manipulation
Key problem: command disambiguation
- "Move that chair" - which chair?
Use statistical methods for disambiguation
- Speech and gesture recognition provide multiple possibilities - need to look for most probable
SenseShapes detect object of interest (Olwal 2003)
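The statistical disambiguation described above can be sketched as naive-Bayes-style fusion: each recogniser yields a likelihood per candidate object, the per-object products are normalised, and the most probable referent wins. The probabilities and object names below are invented for illustration; this is not SenseShapes' actual algorithm.

```python
def disambiguate(*likelihoods):
    """Fuse per-object likelihoods from independent channels
    (speech, gesture, ...) by multiplying and normalising.
    Returns (best object, its posterior probability)."""
    objects = set.intersection(*(set(l) for l in likelihoods))
    fused = {o: 1.0 for o in objects}
    for channel in likelihoods:
        for o in objects:
            fused[o] *= channel[o]
    total = sum(fused.values()) or 1.0
    best = max(fused, key=fused.get)
    return best, fused[best] / total
```

For "move that chair", speech narrows the candidates to chairs while a pointing-ray score from gesture separates the two chairs, so neither channel alone resolves the command but the product does.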
39. Example: Visualizing Sensor Networks
Rauhala et al. 2007 (Linköping)
Network of Humidity Sensors
ZigBee wireless communication
Use Mobile AR to Visualize Humidity
42. Example: Sensor Input for AR Interaction
UbiComp sensor
- Light, temp, motion, sound
- RF connection
AR software plug-in
- Sensor input interacting with AR applications
43. uPart USB Bridge
Particle (http://particle.teco.edu), idle: 16 hours
49. UCAM: Architecture
[Architecture diagram: wear-UCAM components (wearSensor and wearServices such as light sensor, IR receiver, biosensor, UserProfileManager) and ubi-UCAM services (ubiTrack, ubiTV, MRWindow, media and light services, couch and door sensors, PDA, Tag-it) exchange conditional user contexts (Who/What/When/How) through context and network interfaces over BAN/PAN (BT) and TCP/IP (Discovery, Control, Event); each service bundles an Integrator, Manager, Interpreter and ServiceProvider, with vr-UCAM alongside, on top of the operating system]
50. Ubiquitous
[Diagram: taxonomy plotting Weiser's ubiquitous-to-terminal axis against Milgram's reality-virtuality continuum; UbiComp, Ubi AR and Ubi VR at the ubiquitous end, Mobile AR in between, Desktop AR and VR at the terminal end. From: Joe Newman]
51. Future Directions
Massive Multiuser
- Handheld AR for the first time allows extremely high numbers of AR users
Requires:
- New types of applications/games
- New infrastructure (server/client/peer-to-peer)
- Content distribution…
57. Leveraging Web 2.0
Content retrieval using HTTP
- XML encoded meta information
- KML placemarks + extensions
Queries
- Based on location (from GPS, image recognition)
- Based on situation (barcode markers)
- Queries also deliver tracking feature databases
Everybody can set up an AR 2.0 server
Syndication:
- Community servers for end-user content
- Tagging
- AR client subscribes to arbitrary number of feeds
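The client side of such an AR 2.0 query can be sketched with the standard library: parse KML placemarks from a subscribed feed and keep those within a radius of the device's GPS fix. The sample feed, coordinates and function names are invented for illustration; the only firm facts assumed are that KML 2.2 uses the namespace `http://www.opengis.net/kml/2.2` and stores coordinates as lon,lat[,alt].

```python
import math
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 fixes."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def placemarks_near(kml_text, lat, lon, radius_m):
    """Return (name, distance_m) for every KML Point placemark within
    radius_m of the given GPS fix, nearest first."""
    root = ET.fromstring(kml_text)
    found = []
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name", default="?")
        coords = pm.findtext(KML_NS + "Point/" + KML_NS + "coordinates")
        if not coords:
            continue
        plon, plat = map(float, coords.strip().split(",")[:2])  # lon,lat order
        d = haversine_m(lat, lon, plat, plon)
        if d <= radius_m:
            found.append((name, round(d)))
    return sorted(found, key=lambda t: t[1])
```

A real client would fetch each subscribed feed over HTTP and re-run the query as the GPS fix changes; the parsing and filtering stay the same.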
58. Content
Content creation and delivery
- Content creation pipeline
- Delivering previously unknown content
Streaming of
- Data (objects, multi-media)
- Applications
Distribution
- How do users learn about all that content?
- How do they access it?
59. Twitter 360
Twitter 360 (http://www.twitter-360.com)
AR to geo-locate Tweets around you
Better than Google Maps?
60. Scaling Up
AR on a City Scale
Using mobile phone as ubiquitous sensor
MIT Senseable City Lab
http://senseable.mit.edu/