UVR2011(icat2011)

In this talk, I will introduce a new concept of "ubiquitous Virtual Reality (UVR)" from the viewpoint of the Metaverse and then explain how to realize Virtual Reality in physical space with context-aware Augmented Reality. In a UVR-enabled space, augmentation can be personalized using the user's context as well as the environmental context, and the augmented object, together with additional information (3D content as well as text), can be selectively shared according to the user's social relationships. I will also explain some core technologies developed in the GIST U-VR Lab over the last 5 years and demonstrate U-VR applications such as DigiLog Book, DigiLog Miniature, CAMAR Tour, etc.

Transcript

  • 1. ICAT 2011 @ Osaka University (2011.11.28). Augmented Reality & DigiLog: Toward Ubiquitous Virtual Reality 2.0. Woontack Woo (禹 雲澤), Ph.D., http://twitter.com/wwoo_ct, GIST CTI/U-VR Lab, Gwangju, Korea.
  • 2. Gwangju (光州), Korea: the city of Science & Technology, Light, Culture & Art, and Food. GIST is a research-oriented university. The U-VR Lab and CTI started in 2001 and 2005, respectively.
  • 3. Brief History: Personal History and Status of AR. Estimated 180M+ AR users by 2012; major brands are taking keen interest; consumers are hungry for apps. Timeline: 1968 HMD by Ivan Sutherland; 1991 1st ICAT (Waseda U); 1992 'AR' coined by Tom Caudell @ Boeing; 1994 Continuum by Milgram; 1998 1st IWAR in SF, CA, USA; 1999 1st ISMR, 9th ICAT @ ATR; 1999 ATR MIC Lab; 2001 GIST U-VR Lab; 2002 1st ISMAR, Darmstadt; 2004 14th ICAT in Seoul; 2005 GIST CTI; 2006 1st ISUVR; 2007 Sony 'Eye of Judgment'; 2008 Wikitude mAR guide (LBS AR); 2009 Sony 'EyePet'; 2010 Qualcomm AR R&D Center; 2011 ISO/SC24/WG9; 2012 KAIST U-VR Lab, Sony PS Vita.
  • 4. Outline: Paradigm Shift: DigiLog with AR & Ubiquitous VR; DigiLog Applications and U-VR Core; U-VR 2.0: What's Next?; Summary and Q&A.
  • 5. Media vs. A-Reality. (S-)Media creates perception; perception is (A-)Reality; so, (S-)Media creates (A-)Reality. What do (S-) and (A-) mean? S-Media: Smart, Social (CI). A-Reality: Altered, Augmented.
  • 6. Computing History and My Perspective. Computing history: Mainframe Computer (60s; text; sharing a computer; information), Personal Computer (80s; CG/image; individual usage; knowledge), Networked Computers (90s; multimedia; sharing over the Internet; intelligence), Ubiquitous Computing (00s; u-Media; human-centered; wisdom), U-VR Computing (10s; s-Media; community-centered; emotion and fun). Computing in the next 5-10 years: Nomadic human: desktop-based UI -> Augmented Reality. Smart space: intelligence for a user -> wisdom for a community. Smart media: personal emotion -> social fun.
  • 7. DigiLog and Ubiquitous VR. Is DigiLog-X a new medium? DigiLog-X: Digital (Service/Content) over Analog Life. Media platform: Phone/TV/CE + Computer + ...; HW platform: mobile network + Cloud + ...; Service/Content platform: SNS + LBS + CAS + ... over Web/App; UI/UX platform: 3D + AR/VR/MR + ... So, DigiLog-X is becoming a new medium! How do we realize Smart DigiLog? Ubiquitous Virtual Reality = VR in smart physical space; context-aware Mixed (Mirrored) Augmented Reality for smart DigiLog UI/UX => Mobile/wearable + Smart (context-aware) + AR + (for) Social Fun.
  • 8. Hype Cycle of AR: Augmented Reality on the 2008-2011 hype cycles. MIT's annual review: "10 Emerging Technologies 2007". Gartner: top 10 disruptive technologies 2008-12. Juniper: mAR 1.4B downloads/year and $1.5B/year revenue by 2015 (11M downloads in 2010).
  • 9. Is AR Hype? Google Trends (VR vs. AR), with news events: A: Virtual Reality Embraced by Businesses; B: Another use for your phone: augmented reality; C: Qualcomm Opens Austria Research Center to Focus on Augmented Reality; D: Qualcomm Launches Augmented Reality Application Developer Challenge; E: Review: mTrip iPhone app uses augmented reality; F: Toyota demos augmented-reality-enhanced car windows.
  • 10. What's U-VR, MR & AR? (Diagram: dual space {R, R'} linking the real environment RE and the virtual environment VE with their augmented counterparts RE' and VE'.)
  • 11. What's U-VR, MR & AR? Woo's Definition [11]: U-VR is a 3D link between dual (real & virtual) spaces with additional information. CoI augmentation, not just sight: sound, haptics, smell, taste, etc. Bidirectional UI for H2H/H2S/S2H/S2S communication in dual spaces. (Diagram: U-Content in virtual space linked to the CoI in real space and to social networks; the question is how to link the dual spaces seamlessly.)
  • 12. Outline: Paradigm Shift: DigiLog with AR & Ubiquitous VR; DigiLog Applications and U-VR Core Technology; U-VR 2.0: What's Next?; Summary and Q&A.
  • 13. DigiLog Applications. DigiLog with AR for edutainment: interactive, flexible, interesting, direct experience, etc. Edutainment = Education (learning, training, knowledge) + Entertainment (fun, games, storytelling). Technological challenges: it should be simple to use and robust as a tool; provide the user with clear and concise information; enable the educator/tutor to input information in a simple and effective manner; enable easy interaction between learners; make complex procedures transparent to the learner; and be cost-effective and easy to install.
  • 14. DigiLog @ U-VR Lab 2006. Garden Alive: an Emotionally Intelligent Interactive Garden. Intuitive interaction: TUIs seamlessly bridge the real garden to the garden in a virtual world. Educational purpose: users can evaluate which environmental conditions affect plant growth. Emotional sympathy with the users: the emotional change of the virtual plants based on the user's interaction maximizes user interest. Tangible user interfaces include a watering pot, the user's hand, and a nutrient supplier; hand gestures (four kinds are recognized, e.g. grabbing) provide a natural interface to the virtual plants. Taejin Ha, Woontack Woo, "Garden Alive: An Emotionally Intelligent Interactive Garden," International Journal of Virtual Reality (IJVR), 5(4), pp. 21-30, 2006.
  • 15. DigiLog @ U-VR Lab 2006. Garden Alive demo (demo video). With Garden Alive, users experience excitement and emotional interaction that is difficult to feel in a real garden: various kinds of growing plants with different gene types across generational evolution, and changes of emotion reflecting the user's interaction, where the intelligent content provides emotional feedback to the users. Taejin Ha, Woontack Woo, "Garden Alive: An Emotionally Intelligent Interactive Garden," International Journal of Virtual Reality (IJVR), 5(4), pp. 21-30, 2006.
  • 16. DigiLog @ U-VR Lab 2010. Digilog Book for a temple bell tolling experience. Digilog Book: an augmented paper book that provides additional multimedia content stimulating readers' five senses using AR technologies. Descriptions for multisensory AR content, multisensory feedback, and vision-based manual input. Taejin Ha, Youngho Lee, Woontack Woo, "Digilog book for temple bell tolling experience based on interactive augmented reality," Virtual Reality, 15(4), pp. 295-309, 2010.
  • 17. DigiLog @ U-VR Lab 2010. Digilog Book for a temple bell tolling experience: a "temple bell experience" book. The temple bell experience book is expected to encourage readers to explore cultural heritage for education and entertainment purposes. Taejin Ha, Youngho Lee, Woontack Woo, "Digilog book for temple bell tolling experience based on interactive augmented reality," Virtual Reality, 15(4), pp. 295-309, 2010.
  • 18. Digilog Applications 2010: Enhance Experience, Engage, Educate & Entertain. HongGilDong and Technologies in Chosun: storytelling applications integrated with Virtools.
  • 19. Digilog Apps 2011: DigiLog Miniature, a storytelling application integrated with Virtools.
  • 20. Technical Challenges. CoI localization: Context of Interest (CoI), space vs. object; accurate CoI recognition and tracking. 3D interaction. Ubiquitous augmentation: LBS/SNS-based authoring and mash-up. Smart UI for intuitive visualization: AR infography + organic UI. Networking and public DB management. A U-VR ecosystem with SNS, LBS, and CAS. HW wish list: better camera/GPS/compass, CPU/GPU, I/O, battery.
  • 21. Tracking @ U-VR Lab 2007-8. BilliARd (2005); 2D picture tracking (K. Kim); 3D vase tracking (Y. Park); tracking (W. Baek); multiple-object tracking (Y. Park).
  • 22. Tracking @ U-VR Lab 2007-8. Page recognition (K. Kim); 2D tracking (K. Kim); tracking with modeling (K. Kim); tracking application (K. Kim); layer authoring (J. Park); ARtalet = AR + tale + -let (booklet).
  • 23. AR @ U-VR Lab 2008. Multiple 3D Object Tracking for Augmented Reality: a performance-preserving parallel detection and tracking framework; stabilized 3D tracking by fusing detection and frame-to-frame tracking; keypoint verification for occluded-region removal. Y. Park, V. Lepetit and W. Woo, "Multiple 3D Object Tracking for Augmented Reality," in Proc. ISMAR 2008, pp. 117-120, Sep. 2008. Y. Park, V. Lepetit and W. Woo, "Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking," IEEE TVCG, 17(11): 1728-1735, 2011.
  • 24. AR @ U-VR Lab 2008. Multiple 3D object tracking demonstration (demo video). This video shows simultaneous multiple 3D object tracking that maintains frame rate, and the effect of temporal keypoint verification. Y. Park, V. Lepetit and W. Woo, "Multiple 3D Object Tracking for Augmented Reality," in Proc. ISMAR 2008, pp. 117-120, Sep. 2008. Y. Park, V. Lepetit and W. Woo, "Extended Keyframe Detection with Stable Tracking for Multiple 3D Object Tracking," IEEE TVCG, 17(11): 1728-1735, 2011.
  • 25. AR @ U-VR Lab 2009. Handling Motion-Blur in 3D Tracking and Rendering for AR: a generalized image formation model simulating the motion-blur effect; a derivation that folds Efficient Second-order Minimization (ESM) into an optimization; automated exposure time estimation. Y. Park, V. Lepetit and W. Woo, "ESM-Blur: Handling & Rendering Blur in 3D Tracking and Augmentation," in Proc. ISMAR 2009, pp. 163-166, Oct. 2009. Y. Park, V. Lepetit and W. Woo, "Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality," IEEE TVCG (to appear).
  • 26. AR @ U-VR Lab 2009. Comparison with ESM and augmentation with a motion-blur effect (demo video). This video compares the proposed ESM-Blur and ESM-Blur-SE with ESM, and illustrates augmentation with a motion-blur effect for 3D models under general motion. Y. Park, V. Lepetit and W. Woo, "ESM-Blur: Handling & Rendering Blur in 3D Tracking and Augmentation," in Proc. ISMAR 2009, pp. 163-166, Oct. 2009. Y. Park, V. Lepetit and W. Woo, "Handling Motion-Blur in 3D Tracking and Rendering for Augmented Reality," IEEE TVCG (to appear).
  • 27. AR @ U-VR Lab 2010. Scalable Tracking for Digilog Books: fast and reliable tracking using a multi-core programming approach; frame-to-frame tracking with a bounded search for fast performance; two-step detection for scalability ("image searching + feature-level matching"); 6-DOF pose in challenging viewpoints. (Pipeline: a main tracking thread tracks points frame to frame and computes a homography from the inliers, while a background detection thread re-localizes by image searching and feature-level matching to recover the page ID and pose.) K. Kim, V. Lepetit and W. Woo, "Scalable Planar Targets Tracking for Digilog Books," The Visual Computer, 26(6-8): 1145-1154, 2010.
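To make the thread split concrete, here is a minimal Python/OpenCV sketch of a detection-thread/tracking-thread planar tracker in the spirit of the pipeline above. It assumes a single page image ("page.png") and illustrative thresholds; it is not the authors' implementation, and the final 6-DOF pose step (homography decomposition with camera intrinsics) is left as a comment.

```python
# Illustrative two-thread planar tracker (not the authors' code).
import threading
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical page image
kp_ref, des_ref = orb.detectAndCompute(page, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

lock = threading.Lock()
state = {"ref": None, "cur": None}  # matched reference/current points

def relocalize(frame):
    """Detection thread: feature-level matching against the page database."""
    kp, des = orb.detectAndCompute(frame, None)
    if des is None:
        return
    matches = matcher.match(des_ref, des)
    if len(matches) < 20:
        return
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches])
    dst = np.float32([kp[m.trainIdx].pt for m in matches])
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 4.0)
    if H is not None:
        keep = mask.ravel() == 1
        with lock:
            state["ref"], state["cur"] = src[keep], dst[keep]

def track(prev_frame, frame):
    """Main thread: frame-to-frame tracking, then pose from a homography."""
    with lock:
        if state["cur"] is None or len(state["cur"]) < 20:
            # Lost (or not yet found): re-localize in the background.
            threading.Thread(target=relocalize, args=(frame,), daemon=True).start()
            return None
        ref, p0 = state["ref"], state["cur"].reshape(-1, 1, 2)
    p1, ok, _ = cv2.calcOpticalFlowPyrLK(prev_frame, frame, p0, None)
    good = ok.ravel() == 1
    ref, cur = ref[good], p1.reshape(-1, 2)[good]
    with lock:
        state["ref"], state["cur"] = ref, cur
    if len(cur) < 20:
        return None
    H, _ = cv2.findHomography(ref, cur, cv2.RANSAC, 4.0)
    return H  # decompose with the camera intrinsics for the 6-DOF pose
```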
  • 28. AR @ U-VR Lab 2010. Scalable Tracking for Digilog Books: tracking performance, and HongGilDong, a Digilog Book application (demo video). Visualization of inliers; less than 10 ms tracking speed with 314 planar targets in a database; a storytelling application integrated with Virtools. K. Kim, V. Lepetit and W. Woo, "Scalable Planar Targets Tracking for Digilog Books," The Visual Computer, 26(6-8): 1145-1154, 2010.
  • 29. AR @ U-VR Lab 2010. Real-time Modeling and Tracking: real-time SfM; in-situ modeling of various objects and collection of tracking data with real-time structure from motion; object insertion with minimal user interaction; interactive modeling; tracking multiple objects independently in real time. (Pipeline: feature extraction and matching, keyframe searching, frame-to-frame matching with outlier rejection, pose update, new-point triangulation, bundle adjustment, and map update, split between foreground and background threads.) K. Kim, V. Lepetit and W. Woo, "Keyframe-based Modeling and Tracking of Multiple 3D Objects," ISMAR 2010.
  • 30. AR @ U-VR Lab 2010. Real-time Modeling and Tracking: ISMAR10 extension (demo video). Supports various types of objects; enhanced multiple-object detection. K. Kim, V. Lepetit and W. Woo, "Keyframe-based Modeling and Tracking of Multiple 3D Objects," ISMAR 2010.
  • 31. AR @ U-VR Lab 2011. Reconstruction, Registration, and Tracking for Digilog Miniatures: fast and reliable 3D tracking based on the scalable tracker for Digilog Books; tracking data from incremental offline 3D reconstruction of the target objects; registration by fitting a planar surface to the reconstructed keypoints. (Offline: SIFT feature extraction, incremental reconstruction, bundle adjustment; collect keypoints, set local coordinates, adjust scale. Online: extract SIFT features, search keyframes with a vocabulary tree, reject outliers and find keypoints, then PnP + Levenberg-Marquardt minimization with frame-by-frame matching and pose update (R, t).) K. Kim, N. Park and W. Woo, "Vision-based All-in-One Solution for AR and its Storytelling Applications," The Visual Computer (submitted), 2011.
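A hedged sketch of the online detection step described above, assuming the offline stage has produced 3D keypoints with SIFT descriptors: pose is recovered with RANSAC PnP and refined by Levenberg-Marquardt. Function and variable names are illustrative, not the paper's code.

```python
import cv2
import numpy as np

def detect_pose(frame_gray, pts3d, desc3d, K):
    """Pose from 2D-3D matches: SIFT matching, RANSAC PnP, then LM refinement.
    pts3d (Nx3) and desc3d (NxD) come from the offline reconstruction stage;
    K is the 3x3 camera intrinsic matrix."""
    sift = cv2.SIFT_create()
    kp, desc = sift.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, desc3d, k=2)
    matches = [p[0] for p in knn
               if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]  # ratio test
    if len(matches) < 6:
        return None
    obj = np.float32([pts3d[m.trainIdx] for m in matches])
    img = np.float32([kp[m.queryIdx].pt for m in matches])
    ok, rvec, tvec, inl = cv2.solvePnPRansac(obj, img, K, None,
                                             reprojectionError=4.0)
    if not ok or inl is None or len(inl) < 6:
        return None
    # Levenberg-Marquardt refinement on the inliers only.
    rvec, tvec = cv2.solvePnPRefineLM(obj[inl.ravel()], img[inl.ravel()],
                                      K, None, rvec, tvec)
    return cv2.Rodrigues(rvec)[0], tvec  # R (3x3), t (3x1)
```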
  • 32. AR @ U-VR Lab 2011. Reconstruction, Registration, and Tracking for Digilog Miniatures (demo video). Miniature I (Palace): 23 keyframes, 10,370 keypoints. Miniature II (Temple): 42 keyframes, 24,039 keypoints. Miniature III (Town): 82 keyframes, 80,157 keypoints. K. Kim, N. Park and W. Woo, "Vision-based All-in-One Solution for AR and its Storytelling Applications," The Visual Computer (submitted), 2011.
  • 33. AR @ U-VR Lab 2011. Depth-assisted Real-time 3D Object Detection for AR: texture-less 3D object detection in real time; robust detection under varying lighting conditions; detection of scale differences. (Pipeline, per Figure 3 of the paper: compute gradients on the RGB and depth images, match image & depth templates, register 3D points, and compute the pose only if the registration error is small; the shaded steps run in parallel on a GPU.) W. Lee, N. Park, W. Woo, "Depth-assisted Real-time 3D Object Detection for Augmented Reality," ICAT 2011, 2011.
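The "accept only if the registration error is small" test can be sketched as a simple depth-consistency gate. This is a loose stand-in for the paper's 3D point registration step, not its actual method; the function name, the coarse median alignment, and the 2 cm threshold are all illustrative.

```python
import numpy as np

def depth_gate(depth_patch, template_depth, threshold=0.02):
    """Accept a template-matching hit only if the observed depth patch agrees
    with the template's depth (in meters) after coarse alignment."""
    valid = (depth_patch > 0) & (template_depth > 0)  # ignore missing depth
    if valid.sum() < 100:
        return False
    # Align median depths (coarse translation along the viewing ray),
    # then require the remaining shape discrepancy to be small.
    residual = (depth_patch[valid] - np.median(depth_patch[valid])) \
             - (template_depth[valid] - np.median(template_depth[valid]))
    return float(np.mean(np.abs(residual))) < threshold
```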
  • 34. AR @ U-VR Lab 2011. Depth-assisted 3D Object Detection for AR (Nov. 30, Session 5) (demo video). Robust detection under different lighting conditions and scales: 3D texture-less object detection & pose estimation; multiple-target detection in real time; detection of scale differences between two similar objects. Available at: http://youtu.be/TgnocccmS7U. W. Lee, N. Park, W. Woo, "Depth-assisted Real-time 3D Object Detection for Augmented Reality," ICAT 2011, 2011.
  • 35. AR @ U-VR Lab 2011. Texture-less 3D Object Tracking with an RGB-D Camera: object training while tracking, starting without a known 3D model; stabilization using the color image as well as the depth map; depth-map enhancement around noisy boundaries and surfaces. Y. Park, V. Lepetit and W. Woo, "Texture-Less Object Tracking with Online Training using An RGB-D Camera," in Proc. ISMAR 2011, pp. 121-126, Oct. 2011.
  • 36. AR @ U-VR Lab 2011. Tracking while training of texture-less objects (demo video). This video shows the tracking of texture-less objects that are difficult to track using conventional keypoint-based methods; tracking begins without a known 3D object model. Y. Park, V. Lepetit and W. Woo, "Texture-Less Object Tracking with Online Training using An RGB-D Camera," in Proc. ISMAR 2011, pp. 121-126, Oct. 2011.
  • 37. AR @ U-VR Lab 2011. In-situ 3D modeling for wearable AR.
  • 38. Interaction @ U-VR Lab 2009-10. Two-handed tangible interactions for augmented blocks: cubical-user-interface-based tangible interactions; a screw-driving (SD) method for free positioning; a block-assembly (BA) method using pre-knowledge; augmented assembly guidance with preliminary and interim guidance in BA; SD and BA sequences. H. Lee, M. Billinghurst, and W. Woo, "Two-handed tangible interaction techniques for composing augmented blocks," Virtual Reality, Vol. 15, No. 2-3, pp. 133-146, Jun. 2010.
  • 39. Interaction @ U-VR Lab 2009-10. Two-handed tangible interactions for augmented blocks. AR Toy Car Making: tangible-cube-interface-based screw-driving interaction. The screw-driving technique is based on the real-world situation where two or more objects are joined together using a screw and screwdriver; axis changes are supported with the help of an additional button and visual hints for 3D positioning. Link: http://youtu.be/t0iVuNygqQw
  • 40. Interaction @ U-VR Lab 2010. An Empirical Evaluation of Virtual Hand Techniques for 3D Object Manipulation: adopts a Fitts'-law-based formal evaluation process; extends the design parameters of the 1D Fitts' law to 3D; implements and compares standard TAR manipulation techniques (CUP, PADDLE, CUBE, Ex_PADDLE). Taejin Ha, Woontack Woo, "An Empirical Evaluation of Virtual Hand Techniques for 3D Object Manipulation in a Tangible Augmented Reality Environment," IEEE 3D User Interfaces, pp. 91-98, 2010.
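For reference, the Shannon formulation of Fitts' law that such evaluations build on, where D is the distance to the target, W the target width, MT the movement time, and a, b empirically fitted constants; extending it to 3D, as the paper does, amounts to redefining D and W over 3D target geometry:

```latex
\mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right), \qquad
\mathrm{MT} = a + b \cdot \mathrm{ID}
```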
  • 41. Interaction @ U-VR Lab 2011. An Interactive 3D Movement Path Manipulation Method: a control-point allocation test shows that 3D movement paths are properly generated; a dynamic selection method effectively selects small and dense control points. Taejin Ha, Mark Billinghurst, Woontack Woo, "An Interactive 3D Movement Path Manipulation Method in an Augmented Reality Environment," Interacting with Computers, 2011 (in press).
  • 42. Interaction @ U-VR Lab 2010-11. Virtual hand techniques and 3D path manipulation (demo video). Affordance can enhance usability by promoting the user's understanding; instant triggering (e.g., button input) helps rapid manipulation; selection is made easier by expanding the selection area. A movement path can be constructed with only a small number of control points and rapidly manipulated with relatively reduced hand and arm movements by increasing the effective distance. Taejin Ha, Woontack Woo, "An Empirical Evaluation of Virtual Hand Techniques for 3D Object Manipulation in a Tangible Augmented Reality Environment," IEEE 3D User Interfaces, pp. 91-98, 2010. Taejin Ha, Mark Billinghurst, Woontack Woo, "An Interactive 3D Movement Path Manipulation Method in an Augmented Reality Environment," Interacting with Computers, 2011.
  • 43. Interaction @ U-VR Lab 2011. ARWand: Phone-based 3D Object Manipulation in AR: exploits a 2D touch screen, a 3-DOF accelerometer, and compass sensor information to manipulate 3D objects in 3D space; designs transfer functions that map the control space of a mobile phone to the AR display space. Taejin Ha, Woontack Woo, "ARWand: Phone-based 3D Object Manipulation in Augmented Reality Environment," ISUVR, pp. 44-47, 2011.
  • 44. Interaction @ U-VR Lab 2011. ARWand experiment and application: with a low control-to-display gain, sophisticated translation is possible but requires a significant amount of clutching; a high gain reduces frequent clutching, but accurate manipulation becomes difficult; therefore, an optimal control function is needed that allows both fast and accurate manipulation. Taejin Ha, Woontack Woo, "ARWand: Phone-based 3D Object Manipulation in Augmented Reality Environment," ISUVR, pp. 44-47, 2011.
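A minimal sketch of the kind of nonlinear control-to-display transfer function the slide argues for: low gain when the phone moves slowly (precision) and higher gain when it moves fast (less clutching). The shape of the function and all constants here are illustrative assumptions, not values from the ARWand paper.

```python
import math

def cd_gain(speed, g_min=0.5, g_max=4.0, k=8.0):
    """Velocity-dependent control-to-display gain (constants are illustrative)."""
    return g_min + (g_max - g_min) * (1.0 - math.exp(-k * speed))

def to_display_space(phone_delta_m, dt_s):
    """Map a phone displacement (meters over dt_s seconds) into AR display space."""
    speed = abs(phone_delta_m) / dt_s
    return cd_gain(speed) * phone_delta_m
```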
  • 45. Interaction @ U-VR Lab 2011. Graphical Menus using a Mobile Phone for Wearable AR Systems: classifying focusable menus for a mobile phone with a stereo HMD into display-referenced (DR), manipulator-referenced (MR), and target-referenced (TR) placements. H. Lee, D. Kim, and W. Woo, "Graphical Menus using a Mobile Phone for Wearable AR systems," in Proc. ISUVR 2011, pp. 55-58, Jul. 2011.
  • 46. Interaction @ U-VR Lab 2011. Graphical Menus using a Mobile Phone for Wearable AR Systems: wearable menus on three focusable elements. Based on previous menu work, we determine display-, manipulator-, and target-referenced menu placement according to the focusable elements within a wearable AR system; it is implemented using a mobile phone with a stereo head-mounted display. Link: http://youtu.be/TVrE5ljlCYI
  • 47. CAMAR 2009-10. Mobile AR: WHERE to augment? Concept; context-aware annotation (H. Kim); plan recognition (Y. Jang); multi-page recognition (J. Park); LBS + mobile AR (W. Lee). [Paper] Y. Jang and W. Woo, "Stroke-based semi-automatic region of interest detection for in-situ painting recognition," 14th International Conference on Human-Computer Interaction (HCII 2011), Jul. 9-14, Orlando, USA, accepted. [Patent] W. Woo, Y. Jang, "Semi-automatic region-of-interest detection algorithm using stroke interaction for in-situ painting recognition," 2010 (pending).
  • 48. CAMAR: Context-aware Mobile AR. How can we make CAMAR apps more useful? Impractical AR: 3D models placed in a webcam view with little or no interactivity; layered animation with little or no feedback; MAR that uses solely GPS, compass, and accelerometer input; MAR where geo-tagging doesn't serve an everyday purpose. Useful AR: an engaging, persistent experience for the user; [LBS + SNS + MAR] drawing from a large DB with customization features.
  • 49. CAMAR 2.0: Context-aware Mobile AR. (Diagram: sharing and mashup with direct, reflective, and planned responses.)
  • 50. Context Awareness @ U-VR Lab 2010. Context-aware Microblog Browser: observes the properties of microblogs from large-scale data analysis; proposes a method that retrieves user-related hot topics from microblogs. (Pipeline: acquire user context and contextual information from the Web; retrieve local microblogs in real time; detect local hot topics by term-frequency comparison with global data and with previous local data; categorize hot topics; infer the user's interests from preference and activity; measure the similarity between topic and interest for selection with re-ranking; visualize the result.) J. Han, X. Xie, and W. Woo, "Context-based Local Hot Topic Detection for Mobile User," in Proc. of Adjunct Pervasive 2010, pp. 1-4, May 2010.
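A toy sketch of the TF-comparison idea in the pipeline above: a term counts as a local hot topic when its recent local frequency stands out against both the global corpus and the previous local window. The whitespace tokenization, scoring formula, and smoothing constant are placeholders, not the paper's method.

```python
from collections import Counter

def term_freq(docs):
    """Relative term frequencies over a set of microblog posts."""
    counts = Counter(w.lower() for d in docs for w in d.split())
    total = sum(counts.values()) or 1
    return {w: n / total for w, n in counts.items()}

def local_hot_topics(local_recent, local_past, global_docs, top_k=10, eps=1e-6):
    """Score terms by how much their recent local TF exceeds global and
    previous-window local TF; return the top-k candidates."""
    now = term_freq(local_recent)
    past, glob = term_freq(local_past), term_freq(global_docs)
    score = {w: f * (f / glob.get(w, eps)) * (f / past.get(w, eps))
             for w, f in now.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]
```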
  • 51. Context Awareness @ U-VR Lab 2010. Context-aware Microblog Browser: dependence of microblogs on context, and a mobile microblog browser (demo video). User history is the strongest signal of user interest; location and the user's social relationships also matter, with local social networking more important than either. The browser gathers user context from a mobile phone, detects real-time local hot topics from microblogs, and selects hot topics related to the user's preference and activity. J. Han, X. Xie, and W. Woo, "Context-based Local Hot Topic Detection for Mobile User," in Proc. of Adjunct Pervasive 2010, pp. 1-4, May 2010.
  • 52. Context Awareness @ U-VR Lab 2011. Adaptive Content Recommendation: recommends user-preferred content; retrieves content efficiently using a hierarchical context model. J. Han, H. Schmidtke, X. Xie, and W. Woo, "Adaptive Content Recommendation using Hierarchical Context Model with Granularity for Mobile Consumer," Pers. Ubiquit. Comput., 2012 (submitted).
  • 53. Context Awareness @ U-VR Lab 2011. Adaptive Content Recommendation (demo video). Hierarchical context model: a collection of directed acyclic graphs representing a partial-order relation that captures subtag-supertag hierarchies. Content recommender using the context model: retrieves tags related to the retrieved photos, builds a tag cloud with the DAG structure, collects tags and counts their frequencies, and displays them with different font sizes. J. Han, H. Schmidtke, X. Xie, and W. Woo, "Adaptive Content Recommendation using Hierarchical Context Model with Granularity for Mobile Consumer," Pers. Ubiquit. Comput., 2012 (submitted).
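A compact sketch of the "collection of directed acyclic graphs" idea for tags: edges encode the subtag-supertag partial order, and a query at a coarse granularity expands to all finer subtags. The class structure and example tags are illustrative assumptions, not the paper's data model.

```python
class TagDAG:
    """Subtag -> supertag edges; one partial order in the model's collection."""
    def __init__(self):
        self.supertags = {}  # tag -> set of direct supertags

    def add_edge(self, subtag, supertag):
        self.supertags.setdefault(subtag, set()).add(supertag)
        self.supertags.setdefault(supertag, set())

    def ancestors(self, tag):
        """All coarser tags above `tag` (transitive supertags)."""
        out, stack = set(), [tag]
        while stack:
            for s in self.supertags.get(stack.pop(), ()):
                if s not in out:
                    out.add(s)
                    stack.append(s)
        return out

    def subtags_of(self, tag):
        """All finer tags below `tag`; a coarse query expands to these."""
        return {t for t in self.supertags if tag in self.ancestors(t)}

# Hypothetical usage:
#   g = TagDAG()
#   g.add_edge("Gyeongbokgung", "palace"); g.add_edge("palace", "heritage")
#   g.subtags_of("heritage")  -> {"palace", "Gyeongbokgung"}
```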
  • 54. CAMAR Applications 2009-11
  • 55. CAMAR @ U-VR Lab 2009. CAMAR Tag Framework: Context-Aware Mobile Augmented Reality for Dual-reality Linkage. A novel tag concept that attaches a tag to an object as a reference point in dual reality for sharing information. H. Kim, W. Lee and W. Woo, "CAMAR Tag Framework: Context-Aware Mobile Augmented Reality Tag Framework for Dual-reality Linkage," in ISUVR 2009, pp. 39-42, July 2009.
  • 56. CAMAR @ U-VR Lab 2010. Real and Virtual Worlds Linkage through Cloud-Mobile Convergence: considers opportunities and requirements for dual-world linkage through CMCVR; implements an object-based linkage module prototype on a mobile phone; evaluates the normalization of the obtained 3D points. A model of real and virtual world linkage through CMCVR: object modeling from the real to the virtual world, and content authoring from the virtual to the real world. H. Kim and W. Woo, "Real and Virtual Worlds Linkage through Cloud-Mobile Convergence," in Virtual Reality Workshop (CMCVR), pp. 10-13, March 2010.
  • 57. CAMAR @ U-VR Lab 2010. Poster linkage from the real to the virtual world (demo video). Dual test beds for real and virtual worlds: ubiHome, a smart-home test bed, with a virtual 3D ubiHome; and an art-gallery test bed with a virtual 3D art gallery. Two-dimensional objects such as posters, structural shapes, and picture frames are linked. H. Kim and W. Woo, "Real and Virtual Worlds Linkage through Cloud-Mobile Convergence," in Virtual Reality Workshop (CMCVR), pp. 10-13, March 2010.
  • 58. CAMAR @ U-VR Lab 2010. Barcode-assisted Planar Object Tracking for Mobile AR: information related to a planar object is embedded in a barcode and used to limit the image regions in which keypoint matching is performed between consecutive frames; tracking by detection (mobile), with barcode detection + natural feature tracking. N. Park, W. Lee and W. Woo, "Barcode-assisted Planar Object Tracking Method for Mobile Augmented Reality," in Proc. ISUVR 2011, pp. 40-43, July 2011. Video: http://www.youtube.com/watch?feature=player_profilepage&v=nho4y2yoASo (GIST CTI).
  • 59. CAMAR @ U-VR Lab 2010. 2D Detection/Recognition for mobile tagging: semi-automatic ROI (region of interest) detection for a painting region, robust to changes in illumination, view direction, and distance; fast recognition based on Local Binary Pattern (LBP) codes; in-situ code enrollment for a newly detected painting; supports various sizes of paintings. (Pipeline: detect a rectangular ROI, extract the LBP code, match codes by Hamming distance; if the painting is new, enroll its code in the DB, otherwise return the object ID.) Y. Jang and W. Woo, "A Stroke-based Semi-automatic ROI Detection Algorithm for In-Situ Painting Recognition," HCII 2011, Orlando, Florida, USA, July 9-14, 2011 (LNCS).
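A hedged sketch of LBP-code matching as in the pipeline above: each painting ROI becomes a binary code (here, per-cell pooled 8-neighbor LBP values thresholded over an 8x8 grid; the paper's exact code layout may differ), and recognition is nearest-neighbor by Hamming distance, enrolling the painting as new when no code is close enough.

```python
import numpy as np

def lbp_code(gray_roi, grid=8):
    """Binary code for a painting ROI (illustrative layout, not the paper's)."""
    g = np.asarray(gray_roi, dtype=np.int16)
    c = g[1:-1, 1:-1]
    # 8-neighbor LBP value per interior pixel.
    neigh = [g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:], g[1:-1, 2:],
             g[2:, 2:], g[2:, 1:-1], g[2:, :-2], g[1:-1, :-2]]
    bits = sum((n >= c).astype(np.uint16) << i for i, n in enumerate(neigh))
    # Pool to a grid x grid binary code by thresholding the per-cell mean.
    h, w = bits.shape
    cells = bits[:h // grid * grid, :w // grid * grid] \
        .reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (cells > cells.mean()).ravel()  # grid*grid-bit code

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def recognize(code, db, max_dist=8):
    """Nearest enrolled painting by Hamming distance; None means 'new, enroll'."""
    best = min(db, key=lambda pid: hamming(code, db[pid]), default=None)
    if best is None or hamming(code, db[best]) > max_dist:
        return None
    return best
```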
  • 60. CAMAR @ U-VR Lab 2010. Stroke-based ROI detection/recognition [1] (demo video). Semi-automatic ROI detection for a painting region; touch-triggered painting detection/recognition; robust to changes in illumination, view direction, and view distance; in-situ painting code generation/enrollment; fast recognition based on Local Binary Patterns (LBP). [1] http://www.youtube.com/watch?feature=player_detailpage&v=pGp-L2dbcYU
  • 61. CAMAR @ U-VR Lab 2010. In Situ Video Tagging on Mobile Phones: in-situ planar target learning on mobile phones; sensor-based automatic fronto-parallel view generation; fast vanishing-point computation. (Pipeline: estimate the horizontal vanishing points of the input image, generate a fronto-parallel view, learn the target on the mobile GPU, then detect in real time.) W. Lee, Y. Park, V. Lepetit, W. Woo, "In-Situ Video Tagging on Mobile Phones," IEEE Trans. on Circuits and Systems for Video Technology, Vol. 21, No. 10, pp. 1487-1496, 2011. W. Lee, Y. Park, V. Lepetit, W. Woo, "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones," ISMAR10, pp. 57-64, 2010.
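A sketch of the vanishing-point step: two horizontal vanishing points define the image of the plane's vanishing line, and mapping that line to the line at infinity yields a fronto-parallel (affinely rectified) view. This is the textbook projective rectification, simplified relative to the paper, which also exploits phone sensors; the function name is illustrative.

```python
import numpy as np
import cv2

def fronto_parallel(img, v1, v2):
    """Warp `img` so the plane through vanishing points v1, v2 looks fronto-parallel.
    v1, v2: vanishing points in homogeneous pixel coordinates, e.g. (x, y, 1)."""
    l = np.cross(np.asarray(v1, float), np.asarray(v2, float))  # vanishing line
    l = l / l[2]
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [l[0], l[1], 1.0]])  # sends the vanishing line to infinity
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))
```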
  • 62. CAMAR @ U-VR Lab 2010. In Situ Video Tagging on Mobile Phones (demo video). In-situ augmentation of real-world objects without a pre-trained database; learning a vertical target from an arbitrary viewpoint in a few seconds; vanishing-point-based fronto-parallel view generation; real-time detection from novel, unseen viewpoints. Available at: http://youtu.be/vaaFhvfwet8. W. Lee, Y. Park, V. Lepetit, W. Woo, "In-Situ Video Tagging on Mobile Phones," IEEE Trans. on Circuits and Systems for Video Technology, Vol. 21, No. 10, pp. 1487-1496, 2011. W. Lee, Y. Park, V. Lepetit, W. Woo, "Point-and-Shoot for Ubiquitous Tagging on Mobile Phones," ISMAR10, pp. 57-64, 2010.
  • 63. CAMAR @ U-VR Lab 2011. Interactive Annotation on Mobile Phones for Real and Virtual Space Registration: allows the user to quickly capture the dimensions of a room; operates at interactive frame rates on a mobile device with simple touch interaction; the annotations serve as anchors for linking virtual information to the real space represented by the room. H. Kim, G. Reitmayr and W. Woo, "Interactive Annotation on Mobile Phones for Real and Virtual Space Registration," in Proc. ISMAR 2011, pp. 265-266, Oct. 2011.
  • 64. CAMAR @ U-VR Lab 2011. Interactive Annotation on Mobile Phones for Real and Virtual Space Registration (demo videos). Demo #1, in an office room and a seminar room: capture the dimensions of a room, approximated as a rectangular volume, and annotate virtual content on rectangular areas of the room's surfaces. Demo #2, in an art gallery: load an AR zone-based room model and annotate virtual content on rectangular areas of the room's surfaces. YouTube: http://www.youtube.com/watch?v=I00I-phmPbI
  • 65. CAMAR @ U-VR Lab 2011. In-situ AR Mashup for AR Content Authoring: easily create AR content from Web content; context-based content recommendation using user similarity, item similarity, and social relationships; configure AR content sharing settings (to whom, when, and under what conditions). H. Yoon and W. Woo, "CAMAR Mashup: Empowering End-user Participation in U-VR Environment," in Proc. ISUVR 2009, pp. 33-36, July 2009 (Best Paper Award). H. Yoon and W. Woo, "Concept and Applications of In-situ AR Mashup Content," in Proc. SCI 2011, pp. 25-30, Sept. 2011.
  • 66. CAMAR @ U-VR Lab 2011. In-situ Content Mashup: extract query keywords based on the object's context; recommend content based on personal and social context; access related Flickr, Twitter, and Picasa content in situ. H. Yoon and W. Woo, "CAMAR Mashup: Empowering End-user Participation in U-VR Environment," in Proc. ISUVR 2009, pp. 33-36, July 2009 (Best Paper Award). H. Yoon and W. Woo, "Concept and Applications of In-situ AR Mashup Content," in Proc. SCI 2011, pp. 25-30, Sept. 2011.
  • 67. Application Usage Prediction for Smartphones: personalized application prediction based on context; a dynamic home screen with app recommendation and highlighting. (Procedure: data collection of sensory information and application launch frequencies; pre-processing with formatting, recording, filtering, merging, and discretization; feature and subset selection (wrapper, cfsSubset, GTT); training & prediction with a Bayesian model and SVM/C4.5, against MFU/MRU baselines.)
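A minimal stand-in for the prediction stage: after discretizing the sensory context, keep per-context launch counts and predict the most frequent apps for the current context, backing off to global MFU. This is essentially the MFU baseline from the slide conditioned on context; the Bayesian/SVM/C4.5 models named above would replace the counting, and all names here are illustrative.

```python
from collections import Counter, defaultdict

class AppPredictor:
    def __init__(self):
        self.by_context = defaultdict(Counter)  # discretized context -> app counts
        self.global_mfu = Counter()

    def record(self, context, app):
        """context: tuple of discretized sensor readings, e.g. ('morning', 'home')."""
        self.by_context[context][app] += 1
        self.global_mfu[app] += 1

    def predict(self, context, k=4):
        """Top-k apps to place on the dynamic home screen for this context."""
        ranked = [a for a, _ in self.by_context[context].most_common(k)]
        for a, _ in self.global_mfu.most_common():  # back off to global MFU
            if len(ranked) >= k:
                break
            if a not in ranked:
                ranked.append(a)
        return ranked
```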
  • 68. Outline: Paradigm Shift: DigiLog with AR & Ubiquitous VR; Digilog Applications and U-VR Core; U-VR 2.0: What's Next?; Summary and Q&A.
  • 69. U-VR 2.0 for an eco-system. (Diagram: dual space {R, R'} linking the real environment RE and the virtual environment VE with their augmented counterparts RE' and VE'.)
  • 70. What's Next? Where is this headed? Computing in the next 5-10 years: Nomadic human: desktop-based UI -> Augmented Reality. Smart space: intelligence for a user -> wisdom for a community => <STANDARD>. Responsive content: personal emotion -> social fun => <Social Issues>. Augmented Content is king, and Context is then the queen consort controlling the king!
  • 71. AR StandardInteroperability (Standard) W3C : HTML5 (ETRI)  http://www.w3.org/2010/06/w3car/report.htmlISO/IEC JTC1 SC24 : WG6,7,8 & WG9 (NEW on AR)  X3D(KU), XML(GIST)ISO/IEC JTC1 SC29 :  X3D(ETRI) <Figure by. H. Jeon @ ETRI>web3D :  X3D (Fraunhofer)OGC :  KLM & ARML  KARML (GATECH)
  • 72. Social AR? Issues of Social AR: the physical self along with a digital profile; unauthorized augmented advertising; privacy: augmented behavioral targeting; safety: physical danger; spam.
  • 73. What's NEXT? CAMAR 2.0 <Open + Participate + Share>: LBS + in-situ mashup + SNS + CAMAR for a sustainable AR eco-system; wearable CAMAR 2.0.
  • 74. Summary: Paradigm Shift: DigiLog with AR & Ubiquitous VR; DigiLog Applications and U-VR Core; U-VR 2.0: What's Next?; Summary and Q&A.
  • 75. Q&A. "The future is already here. It is just not uniformly distributed." by William Gibson (SF writer). More information: Woontack Woo, Ph.D.; Twitter: @wwoo_ct; Mail: wwoo@gist.ac.kr; Web: http://cti.gist.ac.kr. ISUVR 2012 @ KAIST, Aug. 22-25, 2012.
