In this talk, I will introduce the new concept of “ubiquitous Virtual Reality (UVR)” from the viewpoint of the Metaverse and then explain how to realize Virtual Reality in physical space with context-aware Augmented Reality. In a UVR-enabled space, augmentation can be personalized using the user’s context as well as the environment’s, and an augmented object, together with additional information (3D content as well as text), can be selectively shared according to the user’s social relationships. I will also explain some core technologies developed at the GIST U-VR Lab over the last five years and demonstrate U-VR applications such as the DigiLog Book, DigiLog Miniature, and CAMAR Tour.
Like every new technological convergence, Augmented Reality redefines the experience of the body through space, and of space through code. The buzz surrounding AR today marks a point of convergence among mature technologies, overloaded with the potential of the present.
A talk from the Intro Classes Track at AWE USA 2018 - the World's #1 XR Conference & Expo in Santa Clara, California May 30- June 1, 2018.
Steve Feiner (Columbia University): The Future of AR
What is AR and where is it going? This talk will provide an introduction to AR in the many forms that it has taken, from its birth 50 years ago until now. Along the way, I will provide some insight into the welter of terms, both old and new, that are currently being used and abused to refer to AR. And I will discuss what's to come, as AR researchers and practitioners explore collaboration, mobility, and context, in the march toward ubiquity.
http://AugmentedWorldExpo.com
Keynote speech by Mark Billinghurst at the Workshop on Transitional Interfaces in Mixed and Cross-Reality at the ACM ISS 2021 conference, given on November 14th, 2021.
Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory. The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking of the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one. Augmented reality is related to two largely synonymous terms: mixed reality and computer-mediated reality.
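The constructive/destructive overlay described above can be sketched as simple alpha blending of computer-generated pixels onto a camera frame. This is a toy, stdlib-only illustration, not any particular AR system's implementation; the frame and overlay values are made up:

```python
def composite(frame, overlay, alpha):
    """Blend a computer-generated overlay onto a camera frame.

    alpha = 0.0 leaves reality untouched, alpha = 1.0 fully masks it
    (a "destructive" overlay); values in between augment the scene.
    """
    return [
        [round((1 - alpha) * f + alpha * o) for f, o in zip(f_row, o_row)]
        for f_row, o_row in zip(frame, overlay)
    ]

# Toy 2x2 grayscale "camera frame" and virtual overlay (illustrative values).
frame = [[100, 100], [100, 100]]
overlay = [[255, 0], [0, 255]]
print(composite(frame, overlay, 0.5))  # -> [[178, 50], [50, 178]]
```

A virtual-reality display, by contrast, corresponds to alpha = 1.0 everywhere: the simulated scene fully replaces the camera frame.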
Creating Immersive and Empathic Learning Experiences - Mark Billinghurst
Keynote talk given by Mark Billinghurst at the International Conference on Teaching and Learning in Education, March 3rd 2016, in Kuala Lumpur, Malaysia. Talks about the use of AR and VR to provide educational experiences.
IN140703 Service Support Technologies, 6.10.2016 - Pirita Ihamäki
6.10.2016. The first part of Service Support Technologies goes through Virtual Reality, Virtual Prototyping, Components of a Virtual Prototype, the Top 5 Virtual Reality Gadgets of the Future, and Virtual Market Potential.
Design Approaches for Immersive Experiences AR/VR/MR - Mark Melnykowycz
Presented at inaugural International Investment Forum in Virtual, Augmented and Mixed Reality (#IIFVAR 2017) at Technopark Zurich, organized by the Swiss Society of Virtual and Augmented Reality (SSVAR). Here I presented an overview of how to design products for virtual, augmented, and mixed reality experiences. With a logical framework of user experience, theater/film, and game design, we can use the best tools of those disciplines to approach immersive design with an understanding of story structure, user state, and interaction mechanics.
More discussion of the elements of the talk are available here:
https://idezo.ch/design-approaches-immersive-experiences-iifvar-2017/
A presentation on augmented reality. It consists of an introduction, how AR works, components of AR, applications, limitations, recent developments, and a conclusion. All the best for your presentation!
A lecture on Mobile Augmented Reality. A lecture given by Mark Billinghurst at the University of Canterbury on Friday September 13th 2013. This is part of the COSC 426 graduate course on Augmented Reality.
Augmented reality is a computer-generated view that augments a real scene with additional information. This presentation explains the use of augmented reality in today's world.
CAMAR 2.0 (Context-aware Mobile Augmented Reality 2.0): R&D activities at GIST U-VR Lab, 2009; slides presented at the 12th MobileWebAppsCamp (Mobile UX and Mobile AR) in Seoul, Korea.
An Evaluation of the Use of Audio Guidance in Augmented Reality Systems Imple... - ijma
Recently, museums and historic sites have begun reaching out beyond their traditional audience groups, using more innovative digital display technology to find and attract a new audience. Virtual, mixed, and Augmented Reality (AR) technologies are becoming more ubiquitous in our society, and “virtual history” exhibits are starting to be available to the public. There are numerous studies focusing on AR; however, scant research is being done at historical sites. An initial experiment used repeated measures (ANOVA) to compare and rank three different types of AR devices used at a site of cultural heritage. A further experiment was then undertaken to observe participants using two different AR devices, with and without sound, to determine whether the device used or the presence of sound affected the usability of the device, or the user’s satisfaction with and preference for specific devices. Several surveys, including demographic and usability surveys, were provided in order to collect a range of user data. A two-way repeated measures ANOVA was used to analyze the quantitative data gathered. No significant effects were observed based on the quantitative data provided by the surveys, indicating that all devices were equally usable and satisfactory, and that sound did not have a significant impact in this instance. However, the qualitative data indicated that users may prefer using AR technology on a smartphone device, paired with sound.
Augmented Reality - Everything You Need to Know - Vaibhav Dwivedi
Hey everyone,
I made this presentation as part of my academic work and presented it to give introductory knowledge of Augmented Reality (AR) and the things everyone should at least know to be aware of this fascinating technology.
Augmented reality is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information.
Augmented Reality is a cutting-edge technology that reached a new peak after the massive success of the popular game "Pokémon Go". AR investment is estimated to cross $100 billion by the year 2020.
A lecture on research directions in Augmented Reality as part of the COSC 426 class on AR. Taught by Mark Billinghurst from the HIT Lab NZ at the University of Canterbury.
Augmented Reality (AR) in Daily Life: Expanding Beyond Gaming - Liveplex
The global fascination with AR was arguably ignited by the gaming industry, with titles like Pokémon GO captivating millions and showcasing the potential of immersive technology. However, the true power of AR lies not in its ability to entertain but in its capacity to enhance, transform, and simplify everyday tasks and experiences. Through the seamless integration of digital information with the physical environment, AR has emerged as a versatile tool, enriching user interactions across various domains.
2. Gwangju (光州), Korea: the city of Science & Technology, Light, Culture & Art, and Food
GIST is a research-oriented university
The U-VR Lab and CTI started in 2001 and 2005, respectively
3. Brief History
Personal History and Status of AR
Estimated 180M+ users by 2012
Major brands are taking keen interest
Consumers are hungry for apps
Timeline:
1968: HMD by Ivan Sutherland
1991: 1st ICAT
1992: 'AR' coined by Tom Caudell @ Boeing
1994: Continuum by Milgram
1998: 1st IWAR in SF, CA, USA
1999: 1st ISMR; 9th ICAT (Waseda U); ATR MIC Lab
2001: GIST U-VR Lab
2002: 1st ISMAR, Darmstadt
2004: 14th ICAT in Seoul
2005: GIST CTI
2006: 1st ISUVR
2007: Sony 'Eye of Judgment'
2008: Wikitude mAR Guide
2009: Sony 'EyePet'; LBS AR
2010: ISO/SC24/WG9; Qualcomm R&D Center
2011: Sony PS Vita
2012: KAIST U-VR Lab
4. Outline
Paradigm Shift : DigiLog with AR & Ubiquitous VR
DigiLog Applications and U-VR Core Technology
U-VR 2.0: What’s Next?
Summary and Q&A
5. Media vs. A-Reality
(S-)Media creates Perception
Perception is (A-)Reality
So, (S-)Media creates (A-)Reality
What does (S-) and (A-) mean?
S-Media : Smart, Social (CI)
A-Reality : Altered, Augmented
6. Computing History and My Perspective
Computing History:
60s Mainframe Computer: Text; sharing a computer
80s Personal Computer: CG/Image; individual usage; Information
90s Networked Computers: Multimedia; sharing over the Internet; Knowledge
00s Ubiquitous Computing: u-Media; human-centered; Intelligence; Emotion
10s U-VR Computing: s-Media; community-centered; Wisdom; Fun
Computing in the next 5-10 years:
Nomadic human: desktop-based UI -> Augmented Reality
Smart space: intelligence for a user -> wisdom for community
Smart media: personal emotion -> social fun
7. DigiLog and Ubiquitous VR
Is DigiLog-X a new Media?
DigiLog-X : Digital (Service/Content) over Analog Life
Media platform: Phone/TV/CE + Computer + …
HW platform: mobile network + Cloud + …
Service/Content platform: SNS + LBS + CAS + … over Web/App
UI/UX platform: 3D + AR/VR/MR + …
So, DigiLog-X is becoming a new Media !!!
How to realize Smart DigiLog?
Ubiquitous Virtual Reality = VR in smart physical space
Context-aware Mixed (Mirrored) Augmented Reality for smart DigiLog UI/UX
=> Mobile/wearable + Smart (context-aware) + AR + (for) Social Fun
8. Hype Cycle of AR 2011
Augmented Reality
• MIT's annual review: "10 Emerging Technologies 2007"
• Gartner: top 10 disruptive technologies 2008-12
• Juniper: mAR 1.4B downloads/y, revenue $1.5B/y by 2015 (11M in 2010)
[Hype cycle chart marking AR's position in 2008, 2009, and 2010]
9. Is AR Hype?
Google Trend (VR vs. AR)
A: Virtual Reality Embraced by Businesses
B: Another use for your phone: 'augmented reality'
C: Qualcomm Opens Austria Research Center to Focus on Augmented Reality
D: Qualcomm Launches Augmented Reality Application Developer Challenge
E: Review: mTrip iPhone app uses augmented reality
F: Toyota demos augmented-reality-enhanced car windows
10. What’s U-VR, MR & AR?
[Figure: dual space {R, R'}, showing real entities (RE) in the real space R, their counterparts (RE') and virtual entities (VE') in the virtual space R', and the links between them]
11. What’s U-VR, MR & AR?
Woo's Definition [11]: U-VR is
a 3D link between the dual (real & virtual) spaces, with additional information
CoI augmentation, not just sight: sound, haptics, smell, taste, etc.
a bidirectional UI for H2H/H2S/S2H/S2S communication in the dual spaces
[Figure: how to seamlessly link the dual spaces; U-Content and a CoI in the real space, with its social networks, are linked for seamless augmentation in the virtual space]
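The bidirectional link between a real entity and its virtual counterpart can be illustrated with a minimal sketch. All class, method, and entity names here are hypothetical, chosen only to mirror the RE/VE' terminology above, and are not the lab's actual API:

```python
class DualSpaceLink:
    """Toy model of a U-VR link: a real entity (RE) and its virtual
    counterpart (VE') share state in both directions."""

    def __init__(self):
        # real-entity id -> {"ve": virtual-entity id, "state": shared context}
        self.pairs = {}

    def link(self, re_id, ve_id):
        # Establish the link between the dual spaces.
        self.pairs[re_id] = {"ve": ve_id, "state": {}}

    def update_real(self, re_id, **context):
        # Context sensed in the real space propagates to the virtual entity.
        self.pairs[re_id]["state"].update(context)
        return self.pairs[re_id]

    def update_virtual(self, re_id, **content):
        # Content authored in the virtual space flows back to augment the RE.
        self.pairs[re_id]["state"].update(content)
        return self.pairs[re_id]

# Hypothetical example: a temple bell linked to its DigiLog Book counterpart.
link = DualSpaceLink()
link.link("bell@temple", "bell@digilog-book")
link.update_real("bell@temple", sound="tolling")
print(link.update_virtual("bell@temple", annotation="history of the bell"))
```

The single shared state dictionary is what makes the UI bidirectional in this toy: either space can write, and both read the same linked record.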
12. Outline
Paradigm Shift : DigiLog with AR & Ubiquitous VR
DigiLog Applications and U-VR Core Technology
U-VR 2.0: What’s Next?
Summary and Q&A
13. DigiLog Applications
DigiLog with AR for Edutainment
DigiLog with AR: interactive, flexible, interesting, direct experience, etc.
Edutainment
Education: learning, training, knowledge
Entertainment: fun, game, storytelling
Technological Challenges : It should …
Be simple to use and robust as a tool
Provide the user with clear and concise information
Enable the educator/tutor to input information in a simple and effective manner
Enable easy interaction between learners
Make complex procedures transparent to the learner
Be cost effective and easy to install
14. DigiLog @ U-VR Lab 2006
Garden Alive: an Emotionally Intelligent Interactive Garden
Intuitive interaction: TUIs seamlessly bridge the real garden to the garden in a virtual world
Educational purpose: users can evaluate what environmental conditions affect plant growth
Emotional sympathy with the users: the emotions of the virtual plants change based on the user's interaction, which maximizes user interest
Tangible user interfaces (a watering pot, a nutrient supplier, and the user's hands) control the garden: nutrients influence growth in different parts of the plant, and for a more natural interface users can interact with the virtual plants using hand gestures, four kinds of which can be recognized (e.g., grabbing). (Fig. 1: the overall system; Fig. 4: tangible user interfaces.)
Taejin Ha, Woontack Woo, "Garden Alive: An Emotionally Intelligent Interactive Garden," International Journal of Virtual Reality (IJVR), 5(4), pp. 21-30, 2006.
15. DigiLog @ U-VR Lab 2006
Garden Alive: an Emotionally Intelligent Interactive Garden
Demo
Demo video
◦ With Garden Alive, users experience excitement and emotional interaction that are difficult to feel in a real garden
• Various kinds of growing plants with different gene types according to generational evolution
• Changes of emotion reflecting the user's interaction, where the intelligent content can provide emotional feedback to the users
Taejin Ha, Woontack Woo, "Garden Alive: An Emotionally Intelligent Interactive Garden," International Journal of Virtual Reality (IJVR), 5(4), pp. 21-30, 2006.
16. DigiLog @ U-VR Lab 2010
Digilog book for temple bell tolling experience
Digilog Book: an augmented paper book that provides additional multimedia content
stimulating readers’ five senses using AR technologies
• Descriptions for multisensory AR contents; multisensory feedback; and vision-based manual input
Taejin Ha, Youngho Lee, Woontack Woo, "Digilog book for temple bell tolling experience based on interactive augmented reality," Virtual Reality, 15(4), pp. 295-309, 2010.
17. DigiLog @ U-VR Lab 2010
Digilog book for temple bell tolling experience
A ‘‘temple bell experience’’ book
◦ The temple bell experience book is expected to encourage readers to explore cultural heritage for education and entertainment purposes
Taejin Ha, Youngho Lee, Woontack Woo, "Digilog book for temple bell tolling experience based on interactive augmented reality," Virtual Reality, 15(4), pp. 295-309, 2010.
18. Digilog Applications 2010
Enhance Experience, Engage, Educate & Entertain
Hongkil Dong / Technologies in Chosun
Storytelling applications, integrated with Virtools*
19. Digilog Apps 2011
DigiLog Miniature
Storytelling application, integrated with Virtools*
20. Technical Challenges
CoI Localization:
Context of Interest (CoI): Space vs. Object
Accurate CoI Recognition and Tracking
3D Interaction
Ubiquitous Augmentation
LBS/SNS-based Authoring and Mash-up
Smart UI for Intuitive Visualization
AR-Infography + Organic UI
Networking and public DB management
U-VR ecosystem with SNS, LBS, CaS
HW wish list
Better camera/GPS/compass, CPU/GPU, I/O, battery
23. AR @ U-VR Lab 2008
Multiple 3D Object Tracking for Augmented Reality
Performance-preserving parallel detection and tracking framework
Stabilized 3D tracking by fusing detection and frame-to-frame tracking
Keypoint verification for occluded region removal
Y. Park, V. Lepetit and W.Woo, “Multiple 3D Object Tracking for Augmented Reality,” in Proc. ISMAR 2008, pp.117-120, Sep. 2008.
Y. Park, V. Lepetit and W.Woo, “Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking,” IEEE TVCG, 17(11):
1728-1735, 2011
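The performance-preserving idea above — cheap frame-to-frame tracking on the main thread, expensive detection in the background, with finished detections fused back in — can be sketched as a two-thread loop. Everything below is a toy stand-in (dummy `detect`/`track` functions, scalar "poses"), not the paper's code:

```python
import threading
import queue

def detect(frame):
    """Stand-in for slow, drift-free keypoint detection (hypothetical)."""
    return {"frame": frame, "pose": float(frame)}

def track(prev_pose):
    """Stand-in for fast frame-to-frame tracking (accumulates drift)."""
    return prev_pose + 1.0

pending = queue.Queue(maxsize=1)   # newest frame awaiting detection
detections = queue.Queue()         # finished detection results

def detector():
    while True:
        frame = pending.get()
        if frame is None:          # shutdown sentinel
            break
        detections.put(detect(frame))

worker = threading.Thread(target=detector)
worker.start()

pose, poses = 0.0, []
for frame in range(10):
    try:
        pending.put_nowait(frame)  # hand the frame over only if detector is idle
    except queue.Full:
        pass
    try:
        pose = detections.get_nowait()["pose"]  # fuse a finished detection
    except queue.Empty:
        pose = track(pose)                      # otherwise track from the last pose
    poses.append(pose)

pending.put(None)
worker.join()
print(len(poses))  # one pose estimate per frame, at tracking frame rate
```

The point of the bounded `pending` queue is that detection never stalls tracking: the main loop always emits a pose per frame, which mirrors the "performance-preserving" claim.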
24. AR @ U-VR Lab 2008
Multiple 3D Object Tracking for Augmented Reality
Multiple objects 3D tracking demonstration
Demo video
This video shows simultaneous multiple 3D object tracking which maintains frame rate. The video also
shows the effect of temporal keypoint verification.
Y. Park, V. Lepetit and W.Woo, “Multiple 3D Object Tracking for Augmented Reality,” in Proc. ISMAR 2008, pp.117-120, Sep. 2008.
Y. Park, V. Lepetit and W.Woo, “Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking,” IEEE TVCG, 17(11):
1728-1735, 2011
25. AR @ U-VR Lab 2009
Handling Motion-Blur in 3D Tracking and Rendering for AR
Generalized image formation model simulating motion-blur effect
Optimization derived using Efficient Second-order Minimization (ESM)
Automated exposure time evaluation
Y. Park, V. Lepetit and W.Woo, “ESM-Blur: Handling & Rendering Blur in 3D Tracking and Augmentation ,” in Proc. ISMAR 2009, pp.163-166,
Oct. 2009
Y. Park, V. Lepetit and W.Woo, “Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality,” IEEE TVCG, (to appear)
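The generalized image-formation model can be illustrated in one dimension: a motion-blurred signal is the temporal average of the sharp signal warped along the camera motion during the exposure time. A toy numpy sketch, where integer shifts of a 1-D signal stand in for the real image warp (not the paper's formulation):

```python
import numpy as np

def motion_blur(sharp, shift, samples=8):
    """Average `samples` shifted copies of a 1-D signal over the exposure."""
    acc = np.zeros_like(sharp, dtype=float)
    for s in np.linspace(0.0, shift, samples):
        acc += np.roll(sharp, int(round(s)))   # toy "warp" along the motion
    return acc / samples

signal = np.zeros(16)
signal[8] = 1.0                      # a single bright pixel
blurred = motion_blur(signal, shift=4)
print(blurred.nonzero()[0])          # energy smeared along the motion direction
```

Because the model is an average over the exposure, total intensity is preserved while the peak is attenuated, which is exactly what makes the blur length (and hence exposure time) recoverable by optimization.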
26. AR @ U-VR Lab 2009
Handling Motion-Blur in 3D Tracking and Rendering for AR
Comparison with ESM and augmentation with motion blur effect
Demo video
This video compares the proposed ESM-Blur and ESM-Blur-SE with ESM and illustrates the augmentation with motion-blur effects for 3D models under general motion.
Y. Park, V. Lepetit and W.Woo, “ESM-Blur: Handling & Rendering Blur in 3D Tracking and Augmentation ,” in Proc. ISMAR 2009, pp.163-166,
Oct. 2009
Y. Park, V. Lepetit and W.Woo, “Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality,” IEEE TVCG, (to appear)
27. AR @ U-VR Lab 2010
Scalable Tracking for Digilog Books
Fast and reliable tracking using a multi-core programming approach
Frame-to-frame tracking for fast performance: Bounded search
Two-step detection for scalability: “Image searching + Feature-level matching”
[Flowchart: the tracking thread (main) tracks points frame to frame, computes and decomposes a homography (H) into pose (R, t), and falls back to re-localization when too few points or no valid page ID remain; the detection thread (background) performs image searching and feature-level matching for 6-DOF pose in challenging viewpoints, counting inliers and returning the page ID]
K. Kim, V. Lepetit and W.Woo, “Scalable Planar Targets Tracking for Digilog Books,” The Visual Computer, 26(6-8):1145-1154, 2010.
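The detection stage above computes a planar homography from keypoint matches and counts inliers. A minimal numpy sketch of that geometric step, using plain DLT without RANSAC or the paper's image-search stage; all data here is synthetic:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography from >= 4 point matches (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)       # null vector = flattened homography
    return H / H[2, 2]

def count_inliers(H, src, dst, thresh=2.0):
    """Count matches whose reprojection error under H is below thresh."""
    pts = np.hstack([src, np.ones((len(src), 1))])
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return int((np.linalg.norm(proj - dst, axis=1) < thresh).sum())

# toy target: a pure scale-and-shift homography
H_true = np.array([[2.0, 0, 5], [0, 2.0, -3], [0, 0, 1]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
dst = (np.hstack([src, np.ones((5, 1))]) @ H_true.T)[:, :2]

H = homography_dlt(src[:4], dst[:4])
inliers = count_inliers(H, src, dst)
print(inliers)  # all 5 matches agree with the recovered H
```

Decomposing H into (R, t) for augmentation requires the camera intrinsics and is omitted here.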
28. AR @ U-VR Lab 2010
Scalable Tracking for Digilog Books
Demo video
Tracking performance: less than 10 ms tracking speed with 314 planar targets in a database (visualization of inliers)
HongGilDong, a Digilog Book application: storytelling application integrated with Virtools*
K. Kim, V. Lepetit and W.Woo, “Scalable Planar Targets Tracking for Digilog Books,” The Visual Computer, 26(6-8):1145-1154, 2010.
29. AR @ U-VR Lab 2010
Real-time Modeling and Tracking
Real-time SfM: in-situ modeling of various objects and collection of tracking data with real-time structure from motion
Interactive modeling: object insertion with minimal user interaction
Tracking multiple objects independently in real-time
[Flowchart: the foreground thread performs feature extraction and searching, frame-to-frame matching, feature matching with outlier rejection, pose update, and rendering; the background thread searches keyframes, triangulates new points, runs bundle adjustment, updates the map, and handles object modeling and multiple object tracking]
K. Kim, V. Lepetit and W. Woo, "Keyframe-based Modeling and Tracking of Multiple 3D Objects," International Symposium on Mixed and Augmented Reality (ISMAR), 2010.
2001 ~ 2010 Copyright@GIST U-VR Lab.
30. AR @ U-VR Lab 2010
Real-time Modeling and Tracking
ISMAR10 Extension
Demo video
Supporting various types of objects
Enhanced multiple object detection
K. Kim, V. Lepetit and W. Woo, "Keyframe-based Modeling and Tracking of Multiple 3D Objects," International Symposium on Mixed and Augmented Reality (ISMAR), 2010.
31. AR @ U-VR Lab 2011
Reconstruction, Registration, and Tracking for Digilog Miniatures
Fast and reliable 3D tracking based on the scalable tracker for digilog books
Tracking data: incremental 3D reconstruction of the target objects offline
Registration: fitting a planar surface to the reconstructed keypoints
[Flowchart. Offline process: SIFT feature extraction, incremental reconstruction, and bundle adjustment; collect keypoints, set local coordinates, and adjust scale. Online process: detection (target tracking) searches keyframes with a vocabulary tree and finds keypoints for P-n-P; frame-by-frame matching within search windows with outlier rejection, then pose update (R, t) by P-n-P with L-M minimization, adding keypoints if available]
K. Kim, N. Park and W.Woo, “Vision-based All-in-One Solution for AR and its Storytelling Applications,” The Visual Computer (submitted), 2011.
32. AR @ U-VR Lab 2011
Reconstruction, Registration, and Tracking for Digilog Miniatures
Demo video
Miniature I (Palace): 23 keyframes, 10,370 keypoints
Miniature II (Temple): 42 keyframes, 24,039 keypoints
Miniature III (Town): 82 keyframes, 80,157 keypoints
K. Kim, N. Park and W.Woo, “Vision-based All-in-One Solution for AR and its Storytelling Applications,” The Visual Computer (submitted), 2011.
33. AR @ U-VR Lab 2011
Depth-assisted Real-time 3D Object Detection for AR
Texture-less 3D Object Detection in Real-time
Robust Detection under varying lighting conditions
Scale difference detection
[Flowchart (Figure 3, overall procedure of the proposed method): RGB and depth images are matched against image and depth templates via gradient computation and template matching; 3D point registration follows, and if the registration error is small, the pose is computed. The shaded steps run in parallel on a GPU]
W. Lee, N. Park, W. Woo, "Depth-assisted Real-time 3D Object Detection for Augmented Reality," ICAT2011, 2011.
34. AR @ U-VR Lab 2011
Depth-assisted 3D Object Detection for AR (Nov. 30, Session 5)
Demo video
Multiple 3D object detection: texture-less 3D object detection and pose estimation; multiple target detection in real-time
Robust detection with different lighting conditions and scales: robust detection under varying lighting conditions; detection of scale differences between two similar objects
Available at : http://youtu.be/TgnocccmS7U
W. Lee, N. Park, W. Woo, “Depth-assisted Real-time 3D Object Detection for Augmented Reality,” ICAT2011, 2011
35. AR @ U-VR Lab 2011
Texture-less 3D object Tracking with RGB-D Cam
Object training while tracking: start without known 3D model
Stabilization using color image as well as depth map
Depth map enhancement around noisy boundary and surface
Y. Park, V. Lepetit and W.Woo, “Texture-Less Object Tracking with Online Training using An RGB-D Camera,” in Proc. ISMAR 2011, pp. 121-
126, Oct. 2011.
36. AR @ U-VR Lab 2011
Texture-less 3D object Tracking with RGB-D Cam
Tracking while training of texture-less objects
Demo video
This video shows the tracking of texture-less objects that are difficult to track using conventional keypoint-based methods. The tracking begins without a known 3D model of the object.
Y. Park, V. Lepetit and W.Woo, “Texture-Less Object Tracking with Online Training using An RGB-D Camera,” in Proc. ISMAR 2011, pp. 121-
126, Oct. 2011.
37. AR @ U-VR Lab 2011
In situ 3D Modeling for wearable AR
38. Interaction @ U-VR Lab 2009-10
Two-handed tangible interactions for augmented blocks
Cubical user interface based tangible interactions
Screw-driving (SD) method for free positioning
Block-assembly (BA) method using pre-knowledge
Augmented assembly guidance
Preliminary and interim guidance in BA
SD sequence
BA sequence
H. Lee, M. Billinghurst, and W. Woo, "Two-handed tangible interaction techniques for composing augmented blocks," Virtual Reality, Vol. 15, No. 2-3, pp. 133-146, Jun. 2010.
39. Interaction @ U-VR Lab 2009-10
Two-handed tangible interactions for augmented blocks
AR Toy Car Making:
Tangible Cube Interface based Screw-driving interaction
The screw-driving technique is based on the real-world situation where two or more real objects are joined together using a screw and screwdriver. Axis changes are supported with the help of an additional button and visual hints for 3D positioning.
Link: http://youtu.be/t0iVuNygqQw
40. Interaction @ U-VR Lab 2010
An Empirical Evaluation of Virtual Hand
Techniques for 3D Object Manipulation
Adopt Fitts’ law-based formal evaluation process
Extend the design parameters of the 1D scale Fitts’ law to 3D scale
Implement and compare standard TAR manipulation techniques
CUP method PADDLE method
CUBE method Ex_PADDLE method
Taejin Ha, Woontack Woo, "An Empirical Evaluation of Virtual Hand Techniques for 3D Object Manipulation in a Tangible Augmented Reality Environment," IEEE 3D User
Interfaces, pp. 91-98, 2010.
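The Fitts'-law-based evaluation above can be sketched numerically. Shown here is the standard 1-D Shannon form, ID = log2(D/W + 1) with MT = a + b·ID fit by least squares; the paper's extension to 3D design parameters is more involved, and the trial data below is synthetic:

```python
import numpy as np

def index_of_difficulty(D, W):
    """Shannon form of Fitts' index of difficulty (bits)."""
    return np.log2(D / W + 1.0)

def fit_fitts(ids, times):
    """Least-squares fit of MT = a + b * ID; returns (a, b)."""
    A = np.vstack([np.ones_like(ids), ids]).T
    (a, b), *_ = np.linalg.lstsq(A, times, rcond=None)
    return a, b

# synthetic, noise-free trials: three target distances, fixed width
ids = index_of_difficulty(np.array([8.0, 16.0, 32.0]), 2.0)
times = 0.2 + 0.1 * ids
a, b = fit_fitts(ids, times)
print(round(a, 3), round(b, 3))  # recovers the intercept and slope
```

In a real study, comparing the fitted slope b across the CUP, PADDLE, CUBE, and Ex_PADDLE techniques is what makes the evaluation "formal": a lower b means less added time per bit of difficulty.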
41. Interaction @ U-VR Lab 2011
An Interactive 3D Movement Path Manipulation Method
The control point allocation test properly generates 3D movement paths
The dynamic selection method effectively selects small and dense control points
Taejin Ha, Mark Billinghurst, Woontack Woo, "An Interactive 3D Movement Path Manipulation Method in an Augmented Reality Environment," Interacting with Computers, 2011
(in press).
42. Interaction @ U-VR Lab 2010-11
An Empirical Evaluation of Virtual Hand Techniques
Virtual Hand / 3D Path Manipulation
Demo video
Virtual hand:
Affordance could enhance usability by promoting the user's understanding
Instant triggering (e.g., button input) could help rapid manipulation
Selection can be made easier by expanding the selection area
3D path manipulation:
◦ A movement path can be constructed using only a small number of control points
◦ A movement path can be rapidly manipulated, with relatively reduced hand and arm movements, using increased effective distance
Taejin Ha, Woontack Woo, "An Empirical Evaluation of Virtual Hand Techniques for 3D Object Manipulation in a Tangible Augmented Reality Environment," IEEE 3D User
Interfaces, pp. 91-98, 2010.
Taejin Ha, Mark Billinghurst, Woontack Woo, "An Interactive 3D Movement Path Manipulation Method in an Augmented Reality Environment," Interacting with Computers, 2011
43. Interaction @ U-VR Lab 2011
ARWand: Phone-based 3D Object Manipulation in AR
Exploits a 2D touch screen, a 3-DOF accelerometer, and compass sensor information to manipulate 3D objects in 3D space
Design transfer functions to map the control space of mobile phones to an AR display
space
Taejin Ha, Woontack Woo, "ARWand: Phone-based 3D Object Manipulation in Augmented Reality Environment," ISUVR, pp. 44-47, 2011.
44. Interaction @ U-VR Lab 2011
ARWand: Phone-based 3D Object Manipulation in AR
Experiment and application
◦ Low control-to-display gain: sophisticated translation is possible, but it requires a significant amount of clutching
◦ High gain could reduce frequent clutching, but accurate manipulation becomes difficult
◦ Therefore, an optimal transfer function is needed that satisfies both fast and accurate manipulation
Taejin Ha, Woontack Woo, "ARWand: Phone-based 3D Object Manipulation in Augmented Reality Environment," ISUVR, pp. 44-47, 2011.
45. Interaction @ U-VR Lab 2011
Graphical Menus using a Mobile Phone for Wearable AR Systems
Classifying focusable menus via a mobile phone with stereo HMD
Display-referenced (DR)
Manipulator-referenced (MR)
Target-referenced (TR)
DR MR TR
H.Lee, D.Kim, and W.Woo, “Graphical Menus using a Mobile Phone for Wearable AR systems,” in Proc. ISUVR 2011, pp55-58, Jul. 2011.
46. Interaction @ U-VR Lab 2011
Graphical Menus using a Mobile Phone for Wearable AR Systems
Wearable menus on three focusable elements
Based on previous menu work, we determine display-, manipulator-, and target-referenced menu placement according to focusable elements within a wearable AR system. It is implemented using a mobile phone with a stereo head-mounted display.
Link: http://youtu.be/TVrE5ljlCYI
47. CAMAR 2009-10
Mobile AR: WHERE to augment?
Concept Context-aware Annotation (H. Kim)
Plan Recognition (Y. Jang) Multi-page Recognition (J.Park) LBS + mobile AR (W. Lee)
[Paper] Y. Jang and W. Woo, “Stroke-based semi-automatic region of interest detection for in-situ painting recognition", 14th International Conference on Human-Computer Interaction (HCII 2011), Jul. 9-14,
Orlando, USA, accepted.
[Patent] W. Woo, Y. Jang, "Semi-automatic region-of-interest detection algorithm using line-drawing interaction for in-situ painting recognition," 2010. (application pending)
48. CAMAR: Context-aware mobile AR
How to make CAMAR App’s more useful?
Impractical AR:
• 3D models placed in a webcam view with little or no interactivity
• Layered animation with little or no feedback
• MAR that uses solely GPS, compass, and accelerometer input
• MAR where geo-tagging doesn't serve an everyday purpose
Useful AR:
• Engaging, persistent experience for the user
• [LBS + SNS + MAR] drawing from a large DB with customization features
49. CAMAR 2.0: Context-aware mobile AR
[Diagram: sharing and mashup with direct, reflective, and planned responses]
50. Context Awareness @ U-VR Lab 2010
Context-aware Microblog Browser
Observe the properties of microblogs from large-scale data analysis
Propose the method that retrieves user-related hot topics of microblogs
[Flowchart: user context acquisition and local microblog retrieval (with Web contextual information) feed local recent hot topic detection, using TF-based detection and comparison with global data and previous local data; user interest inference draws on the user's and friends' microblogs and activity; similarity measurement between topics and interests then drives hot topic categorization, preference-based selection with re-ranking, and visualization of real-time local hot topics]
J. Han, X. Xie, and W. Woo, “Context-based Local Hot Topic Detection for Mobile User,” in Proc. of Adjunct Pervasive 2010, pp.001-004, May. 2010.
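The "preference inference based on TF" and "similarity measurement between topic and interest" steps can be sketched with term-frequency vectors and cosine similarity. The data and helper names below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def tf_vector(texts):
    """Term-frequency vector over a list of short texts."""
    return Counter(w for t in texts for w in t.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# toy user history and candidate local hot topics
history = ["great coffee downtown", "new coffee shop opening"]
topics = {"coffee festival": ["coffee festival this weekend"],
          "traffic jam": ["heavy traffic on highway"]}

profile = tf_vector(history)                  # user interest profile
ranked = sorted(topics,
                key=lambda t: cosine(profile, tf_vector(topics[t])),
                reverse=True)
print(ranked[0])  # the hot topic closest to the user's interests
```

Ranking candidate hot topics by similarity to the interest profile is the re-ranking idea in miniature; the real system additionally weights location and social context.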
51. Context Awareness @ U-VR Lab 2010
Context-aware Microblog Browser
Demo video
Dependence of microblogs on context:
User history is the most influential factor for inferring user interest
Location and social relationships are also important, and local social networking matters more than either
Microblog mobile browser:
Gathers user contexts from a mobile phone
Detects real-time local hot topics from microblogs
Selects hot topics related to user preference and activity
J. Han, X. Xie, and W. Woo, “Context-based Local Hot Topic Detection for Mobile User,” in Proc. of Adjunct Pervasive 2010, pp.001-004, May. 2010.
52. Context Awareness @ U-VR Lab 2011
Adaptive Content Recommendation
Recommend user-preferred content
Retrieve content efficiently using a hierarchical context model
J. Han, H. Schmidtke, X. Xie, and W. Woo, “Adaptive Content Recommendation using Hierarchical Context Model with Granularity for Mobile Consumer,” in Pers. Ubiqu. Comp
ut., pp.000-000, 2012. (Submitted)
53. Context Awareness @ U-VR Lab 2011
Adaptive Content Recommendation
Demo video
Hierarchical context model: a collection of directed acyclic graphs (DAGs) that represent partial-order relations and capture subtag-supertag hierarchies
Content recommender using the context model: retrieves tags related to the retrieved photos, collects tags and investigates their frequency, and displays a tag cloud with DAG structure using different font sizes
J. Han, H. Schmidtke, X. Xie, and W. Woo, “Adaptive Content Recommendation using Hierarchical Context Model with Granularity for Mobile Consumer,” in Pers. Ubiqu.
Comput., pp.000-000, 2012. (Submitted)
55. CAMAR @ U-VR Lab 2009
CAMAR Tag Framework: Context-Aware Mobile Augmented
Reality for Dual-reality Linkage
A novel tag concept that attaches a tag to an object as a reference point in dual reality for sharing information
H. Kim, W. Lee and W. Woo, “CAMAR Tag Framework: Context-Aware Mobile Augmented Reality Tag Framework for Dual-reality Linkage”, in ISUVR 2009, pp.39-42,
July 2009.
56. CAMAR @ U-VR Lab 2010
Real and Virtual Worlds Linkage through Cloud-Mobile
Convergence
Consider opportunities and requirements for
dual world linkage through CMCVR
Implement an object-based linkage module
prototype on a mobile phone
Evaluate the results of normalizing the obtained 3D points
A Model of Real and Virtual Worlds Linkage through CMCVR
Object modeling from real to virtual world Content authoring from virtual to real world
H. Kim and W.Woo, “Real and Virtual Worlds Linkage through Cloud-Mobile Convergence”, in Virtual Reality Workshop (CMCVR), pp.10-13, March. 2010.
57. CAMAR @ U-VR Lab 2010
Real and Virtual Worlds Linkage through Cloud-Mobile
Convergence
Demo video
Poster linkage from real to virtual world: ubiHome, a smart home test bed, and its virtual 3D ubiHome; two-dimensional objects such as posters
Dual art galleries: an art gallery test bed and its virtual 3D art gallery; two-dimensional objects such as structure shapes and picture frames
H. Kim and W.Woo, “Real and Virtual Worlds Linkage through Cloud-Mobile Convergence”, in Virtual Reality Workshop (CMCVR), pp.10-13, March. 2010.
58. CAMAR @ U-VR Lab 2010
Barcode-assisted Planar Object Tracking for Mobile AR
Embed information about a planar object into a barcode; this information is used to limit the image regions over which keypoint matching is performed between consecutive frames.
Tracking by detection (mobile): barcode detection + natural feature tracking
N.Park W.Lee and W.Woo, “Barcode-assisted Planar Object Tracking Method for Mobile Augmented Reality” in Proc. ISUVR 2011, pp.40-43, July. 2011.
http://www.youtube.com/watch?feature=player_profilepage&v=nho4y2yoASo, Barcode-assisted Planar Object Tracking Method for Mobile Augmented Reality, GIST CTI.
59. CAMAR @ U-VR Lab 2010
2D Detection/Recognition for mobile tagging
Semi-automatic ROI Detection for Painting Region
Robust to Illumination, View Direction/Distance Changes
Fast Recognition based on Local Binary Pattern (LBP) codes
In-Situ code enrollment for a detected new painting
[Flowchart: for paintings of various sizes, ROI* detection (rectangular shape) and LBP* code extraction; the extracted binary codes are matched against the code DB by Hamming distance; a new painting's code is enrolled into the DB, otherwise the object ID is returned]
* ROI = Region of Interest; LBP = Local Binary Pattern
Y. Jang and W. Woo, "A Stroke-based Semi-automatic ROI Detection Algorithm for In-Situ Painting Recognition", HCII2011,
Orlando, Florida, USA, July 9-14, 2011 (LNCS)
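The LBP-and-Hamming-distance recognition step can be sketched in a few lines of numpy: compute an 8-neighbour LBP code per pixel and compare paintings by the total number of differing bits. This is a simplified textbook LBP, not the paper's exact encoding:

```python
import numpy as np

def lbp_codes(img):
    """8-bit LBP code per interior pixel of a 2-D uint8 image patch."""
    c = img[1:-1, 1:-1]                       # center pixels
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
             img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
             img[2:, :-2], img[1:-1, :-2]]    # 8 neighbours, clockwise
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neigh):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def hamming(a, b):
    """Total differing bits between two equal-shape LBP code arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b).ravel()).sum())

patch = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy 5x5 "painting"
same = hamming(lbp_codes(patch), lbp_codes(patch))
print(same)  # identical paintings -> Hamming distance 0
```

Because LBP thresholds each neighbour against its center pixel, the codes are invariant to monotonic brightness changes, which is what gives the robustness to illumination claimed above.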
60. CAMAR @ U-VR Lab 2010
2D Detection/Recognition for mobile tagging
Demo video
Stroke-based ROI detection/recognition [1]:
Semi-automatic ROI detection for painting regions
Robust to illumination and view direction changes
In-situ painting code generation/enrollment
Fast recognition based on Local Binary Patterns (LBP)
ROI detection/recognition:
Touch-triggered painting detection/recognition
Robust to view distance changes
[1] http://www.youtube.com/watch?feature=player_detailpage&v=pGp-L2dbcYU
61. CAMAR @ U-VR Lab 2010
In Situ Video Tagging on Mobile Phones
In situ Planar Target Learning on Mobile Phones
Sensor-based Automatic Fronto-parallel View Generation
Fast Vanishing Point Computation
[Pipeline: input image; horizontal vanishing point estimation; fronto-parallel view generation; target learning on the mobile GPU; real-time detection]
W. Lee, Y. Park, V. Lepetit, W. Woo, "In-Situ Video Tagging on Mobile Phones," Circuit Systems and Video Technology, IEEE Trans. on, Vol. 21, No. 10, pp. 1487-1496, 2011.
W. Lee, Y. Park, V. Lepetit, W. Woo, "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones," ISMAR10, pp. 57-64, 2010.
62. CAMAR @ U-VR Lab 2010
In Situ Video Tagging on Mobile Phones
Demo video
In-situ augmentation of real-world objects: augmentation without a pre-trained database; fast target learning in a few seconds; real-time detection from novel viewpoints
Vertical target learning and detection: learning a vertical target from an arbitrary viewpoint; vanishing point-based fronto-parallel view generation; real-time detection from unseen viewpoints
Available at : http://youtu.be/vaaFhvfwet8
W. Lee, Y. Park, V. Lepetit, W. Woo, "In-Situ Video Tagging on Mobile Phones," Circuit Systems and Video Technology, IEEE Trans. on, Vol. 21, No. 10, pp. 1487-1496, 2011.
W. Lee, Y. Park, V. Lepetit, W. Woo, "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones," ISMAR10, pp. 57-64, 2010.
63. CAMAR @ U-VR Lab 2011
Interactive Annotation on Mobile Phones for Real and Virtual
Space Registration
Allows users to quickly capture the dimensions of a room
Operates at interactive frame rates on a mobile device and provides simple touch interaction
Serves as anchors for linking virtual information to
the real space represented by the room
H. Kim, G. Reitmayr and W.Woo, “Interactive Annotation on Mobile Phones for Real and Virtual Space Registration,” in Proc. ISMAR 2011, pp.265-266, Oct. 2011.
64. CAMAR @ U-VR Lab 2011
Interactive Annotation on Mobile Phones for Real and Virtual
Space Registration
Demo video
Demo #1: in an office room and a seminar room, capture the approximate dimensions of the room and annotate virtual content on rectangular areas of the room's surfaces
Demo #2: in an art gallery, load an AR zone-based room model and annotate virtual content on rectangular areas of the room's surfaces
Youtube share link http://www.youtube.com/watch?v=I00I-phmPbI
65. CAMAR @ U-VR Lab 2011
In-situ AR Mashup for AR Content Authoring
Easily create AR content from Web content
Context-based content recommendation
User-similarity, item similarity, social relationship
Configure AR content sharing setting
To Whom, When, in What conditions
H.Yoon and W.Woo, “CAMAR Mashup: Empowering End-user Participation in U-VR Environment,” in Proc. ISUVR 2009, pp.33-36, July. 2009. (Best Paper Award)
H.Yoon and W.Woo, “Concept and Applications of In-situ AR Mashup Content,” in Proc. SCI 2011, pp. 25-30, Sept. 2011.
66. CAMAR @ U-VR Lab 2011
In-situ AR Mashup for AR Content Authoring
In-situ Content Mashup
• Extract query keywords based on the object's context
• Content recommendation based on personal and social context
• Access related Flickr, Twitter, and Picasa content in-situ
H.Yoon and W.Woo, “CAMAR Mashup: Empowering End-user Participation in U-VR Environment,” in Proc. ISUVR 2009, pp.33-36, July. 2009. (Best Paper Award)
H.Yoon and W.Woo, “Concept and Applications of In-situ AR Mashup Content,” in Proc. SCI 2011, pp. 25-30, Sept. 2011.
67. Application Usage Prediction for Smartphones
Personalized application prediction based on context
Dynamic home screen: app recommendation and highlight
[Pipeline, using the frequency of applications under contexts C1-C3: data collection (sensory info, formatting, data recording) -> pre-processing (filtering, merging, discretization) -> feature selection (WrapperSubset, cfsSubClass) -> training and prediction (GTT, MFU/MRU, Bayesian model, SVM/C4.5)]
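The MFU baseline that the pipeline lists can be sketched in a context-conditioned form: predict the most frequently used app for the current context, falling back to the global MFU when the context is unseen. A toy illustration, not the paper's Bayesian or SVM models:

```python
from collections import Counter, defaultdict

class ContextMFU:
    """Most-frequently-used app predictor, conditioned on a context label."""

    def __init__(self):
        self.by_context = defaultdict(Counter)
        self.overall = Counter()

    def record(self, context, app):
        """Log one app launch under a context (e.g., time of day)."""
        self.by_context[context][app] += 1
        self.overall[app] += 1

    def predict(self, context):
        """Return the MFU app for this context, or the global MFU."""
        counts = self.by_context.get(context) or self.overall
        return counts.most_common(1)[0][0] if counts else None

m = ContextMFU()
m.record("morning", "news"); m.record("morning", "news")
m.record("evening", "game")
print(m.predict("morning"), m.predict("noon"))  # known vs. unseen context
```

A dynamic home screen would call `predict` with the current context and highlight the returned apps; the learned models in the slide replace the raw counts with richer features.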
68. Outline
Paradigm Shift : DigiLog with AR & Ubiquitous VR
Digilog Applications and U-VR Core
U-VR 2.0: What’s Next?
Summary and Q&A
70. What’s Next?
Where is this headed?
Computing in the next 5-10 years:
Nomadic human: desktop-based UI -> Augmented Reality
Smart space: intelligence for a user -> wisdom for the community => <STANDARD>
Responsive content: personal emotion -> social fun => <Social Issues>
Augmented content is king, and context is the queen consort controlling the king!
71. AR Standard
Interoperability (Standard)
W3C : HTML5 (ETRI)
http://www.w3.org/2010/06/w3car/report.html
ISO/IEC JTC1 SC24 : WG6,7,8 & WG9 (NEW on AR)
X3D(KU), XML(GIST)
ISO/IEC JTC1 SC29 :
X3D(ETRI) <Figure by. H. Jeon @ ETRI>
web3D :
X3D (Fraunhofer)
OGC :
KML & ARML
KARML (GATECH)
72. Social AR?
Issues of Social AR
Physical self along with a digital profile
Unauthorized Augmented Advertising
Privacy: Augmented Behavioral Targeting
Safety: Physical danger
Spam
74. Summary
Paradigm Shift : DigiLog with AR & Ubiquitous VR
DigiLog Applications and VR Core
U-VR 2.0: What’s Next?
Summary and Q&A
75. Q&A
“The future is already here. It is just not uniformly distributed”
by William Gibson (SF writer)
More Information
Woontack Woo, Ph.D.
Twitter: @wwoo_ct
Mail: wwoo@gist.ac.kr
Web: http://cti.gist.ac.kr
ISUVR 2012 @ KAIST, Aug. 22 - 25, 2012