A presentation I gave with Terrance Cohen at the GDC 2011 Smartphones Summit about lessons learned building a platform for geo-social augmented reality games.
GDC 2011 - Taking the Purple Pill - Lessons Learned Building a Platform for G... — Terrance Cohen
Augmented reality blends the virtual world with the real world on your smartphone's display. Social augmented reality games connect players in a shared persistent augmented reality world. Building a platform for client and server applications that supports these games presents new and interesting challenges. This presentation discusses problems encountered with unsuccessful approaches, and solutions that achieve compelling player experiences.
Virtual or real? AR Foundation best practices from Krikey - Unite Copenhagen ... — Unity Technologies
The AR Foundation toolkit has been critical for Krikey to build compelling AR games that function cross-platform, at scale. Krikey, an AR mobile gaming application, used dynamic ground plane detection and camera translation to enable users to play 3D games that interact with the real world. These slides cover some of the best practices Krikey developed while using AR Foundation.
Speakers:
Ketaki Shriram - Krikey
Jhanvi Shriram - Krikey
Watch the session on YouTube: https://youtu.be/5MKRuJEA1hI
The document discusses the history and process of special effects in filmmaking. It begins with a brief overview of how special effects have been used as far back as the 1700s by magicians and progressed to techniques like matte paintings and rear projection screens in early films. The document then focuses on modern special effects, highlighting CGI techniques used in films like Jurassic Park, Avatar, and Harry Potter to bring imaginary worlds and creatures to life. It also describes the multi-step post-production process that visual effects artists use to add effects like explosions and integrate computer graphics into live-action footage.
Virtual reality uses head-mounted displays to simulate a user's physical presence in an imaginary world. Google Cardboard is an inexpensive VR platform that works with smartphones, using cardboard frames with lenses and magnets to hold phones and allow users to view VR content through dedicated apps. It has been popular among developers and allows users to view locations in Street View and Earth, 360-degree videos on YouTube, and play games through their Android or iOS devices.
A presentation on the future format of filmmaking and Virtual Production technology.
Film is a larger-than-life canvas that makes heroes and celebrities. Depending on a film's scale, the team behind it ranges from 5 to 5,000 members. Emerging technologies help creatives explore their potential to the fullest by making films from a home PC or in their own indie studios, thanks to Virtual Production technology, which is reducing costs and giving individuals the scope to make films. I have coined the term Hybrid Films for this kind of filmmaking.
Special regards to companies like Epic Games and Reallusion, which are making flexible tools for creatives. The recent launch of MetaHumans is a big leap that will allow our heroes to stay young forever.
Cheers, and welcome to the world of Hybrid Movies, which is going to bring a drastic technological change.
This document describes a virtual reality studio that offers various VR/AR solutions for industries like automotive, architecture, and entertainment. The studio uses real-time rendering technology to create interactive and immersive virtual environments for applications such as VR pre-visualization, simulation, e-commerce, and augmented reality live performances. Key services mentioned include VR/AR production, real-time 3D modeling, motion capture, and collaboration tools to streamline creative workflows.
The document discusses Google Cardboard and virtual reality. Some key points:
- Google Cardboard was awarded the Cannes Lions Mobile Grand Prix for its role in advancing mobile VR experiences. Over 1 million Cardboard units have been distributed worldwide since 2014.
- The Cardboard SDK allows for the creation of VR apps and experiences for Android and iOS devices when placed in a Cardboard viewer. Tips are provided for effective VR design such as using reticles instead of cursors, considering depth for UI elements, and leveraging spatial audio.
- Google Spotlight Stories produces high-resolution 360 videos for Cardboard using specialized rigs with multiple cameras. Expeditions allows virtual field trips in VR classrooms. Various cameras and
Persistent world-scale AR experiences with ARCore Cloud Anchors and AR Founda... — Unity Technologies
In this session, Google engineers will walk through ARCore's Cloud Anchors feature, which will soon allow world-scale persistent augmented reality experiences. Cloud Anchors gives developers the ability to create AR experiences that are shared by multiple users both simultaneously and across a large time frame. With the new ARCore Extensions for AR Foundation, it is easier than ever to create shared AR experiences between users, regardless of what mobile device they are using. This presentation will include a special preview from our partners, Sybo and iDreamSky, about their new experience using persistent Cloud Anchors.
The document discusses Google Cardboard, a low-cost virtual reality headset developed by Google. It can turn smartphones into virtual reality displays. The cardboard headset contains lenses and magnets that allow users to view VR content on their phone through compatible apps. When placed in the headset, the phone's magnetometer detects button presses via magnet to control the VR experience. The headset allows users to explore various VR environments and experiences through apps like YouTube and Google Earth at a low price point, helping make VR more accessible.
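The magnet-button mechanism described above can be sketched in a few lines: the viewer's sliding magnet perturbs the phone's magnetometer, and a sharp jump in field magnitude is treated as a "click". The threshold and sample values below are illustrative, not real sensor data or the actual Cardboard SDK logic.

```python
import math

def detect_clicks(samples, threshold=25.0):
    """Return indices where the magnetic field magnitude jumps by more
    than `threshold` microtesla between consecutive (x, y, z) samples."""
    clicks = []
    prev = math.sqrt(sum(c * c for c in samples[0]))
    for i, (x, y, z) in enumerate(samples[1:], start=1):
        mag = math.sqrt(x * x + y * y + z * z)
        if abs(mag - prev) > threshold:
            clicks.append(i)  # sudden change -> treat as a button event
        prev = mag
    return clicks

# Synthetic stream: steady ambient field, one spike as the magnet moves.
stream = [(20.0, 5.0, 40.0)] * 5 + [(20.0, 5.0, 110.0)] + [(20.0, 5.0, 40.0)] * 4
print(detect_clicks(stream))  # -> [5, 6]
```

A real implementation would debounce the two edges of the spike into a single press; here both the onset and release show up as events.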
Royal Institution CS Masterclass - Mobile/VR development — David Bell
The document discusses an upcoming talk on mobile development and virtual reality. It will include a short introduction, demos of jQuery Mobile and A-Frame VR, and a design lab session. Various mobile development platforms and VR tools will be introduced. The lab session will involve building a basic mobile form using jQuery Mobile and creating VR experiences with A-Frame. Attendees will work in groups to design VR experiences using images, sounds, and portals.
The document discusses the history and uses of computer-generated imagery (CGI). It notes that the first CGI was created in the 1960s by Russian scientists to model a moving cat. CGI has since been used in films, television, and other media to create visuals that cannot be achieved in the real world. The process involves creating 3D models, assigning them properties, animating elements, and rendering the scene. CGI has benefits like prototyping and animation but also higher costs than practical effects. Popular software for CGI includes Maya, 3D Studio Max, and Blender.
The document discusses the history and use of computer generated imagery (CGI) in movies, from early uses in films like Star Wars to more modern applications. It covers how CGI has helped make certain sci-fi elements cheaper to create and boosted the popularity of big-budget sci-fi films. The document also examines techniques for creating realistic CGI characters and animation versus more cartoon-style animation.
Stop motion animation is achieved by recording individual still frames of motion and playing them back sequentially to create the illusion of continuous movement. Each frame is a slightly altered version of the previous frame with objects moved in small increments. This labor intensive process can involve manipulating physical objects, cutouts, or live actors frame-by-frame to simulate movement. Different stop motion techniques produce varied effects and involve altering material positions gradually over many frames to depict animation.
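The frame-by-frame increments described above can be planned numerically: given a move, a frame rate, and a shot length, each captured frame's object position follows directly. The function name and values are hypothetical, purely to illustrate the arithmetic an animator does before shooting.

```python
def plan_frames(start_x, end_x, fps=12, duration_s=2.0):
    """Return the x-position of the object in each captured frame,
    moving it by a fixed increment between frames."""
    n_frames = int(fps * duration_s)
    step = (end_x - start_x) / (n_frames - 1)  # nudge per frame
    return [start_x + i * step for i in range(n_frames)]

positions = plan_frames(0.0, 23.0, fps=12, duration_s=2.0)
print(len(positions))                # 24 frames to shoot
print(positions[1] - positions[0])   # 1.0 unit moved between frames
```

Shooting "on twos" (12 fps played back at 24 fps, each frame held twice) halves the number of setups at the cost of slightly choppier motion.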
Computer generated imagery (CGI) uses computers to generate animations and is a subset of animation. CGI animation is divided into photorealistic and non-photorealistic categories. The document discusses the evolution of CGI from early films like King Kong (1933) to modern films like King Kong (2005) through the introduction of 3D software and motion capture technology. It also covers current CGI techniques like cel-shaded animation and future advances in stereoscopic 3D films and interactive interfaces.
The document discusses various post-production processes like editing, sound editing, visual effects, and compositing. It describes the differences between visual effects and special effects, with visual effects referring to digital post-production techniques while special effects involve on-set mechanical or optical techniques. The document also provides examples of different visual effects techniques like chroma keying, rotoscoping, wire and rig removal, camera tracking, and matchmoving.
The document discusses visual effects (VFX) and computer generated imagery (CGI). It defines VFX as processes that create and manipulate imagery outside of live action shots. CGI refers to computer graphics used to create images for films, games, etc. Common VFX techniques mentioned include compositing, matte painting, animation, and chroma keying, which combines elements using blue or green screens. The document provides examples and discusses software used for VFX like After Effects and Nuke.
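The chroma-keying idea mentioned above reduces, at its simplest, to a per-pixel rule: if the green channel dominates red and blue by some margin, the pixel belongs to the screen and is replaced by the background. This is only a toy sketch; production keyers work in better-suited color spaces and soften matte edges.

```python
def chroma_key(fg_pixel, bg_pixel, margin=60):
    """Replace an (r, g, b) foreground pixel with the background pixel
    when green dominates both other channels by `margin`."""
    r, g, b = fg_pixel
    is_screen = g > r + margin and g > b + margin
    return bg_pixel if is_screen else fg_pixel

foreground = [(200, 40, 30), (10, 230, 20), (90, 80, 70)]
background = [(0, 0, 255)] * 3
composite = [chroma_key(f, b) for f, b in zip(foreground, background)]
print(composite)  # only the green-screen pixel is replaced
```

Blue screens use the symmetric test on the blue channel; green is preferred for digital cameras because their sensors carry more green resolution.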
The document provides an overview of the visual effects process for creating computer generated images (CGI) in films. It discusses the typical roles involved, software used, advantages of CGI, and basic terms like 2D and 3D animation. It then outlines the process which includes high resolution scanning, 3D modeling, motion capture, tracking, rotoscoping, matte painting, compositing, and combining elements to create the final shots. An example walkthrough is given of creating a scene from The Patriot using these techniques.
CGI stands for computer generated imagery and is used extensively in movies to create visual effects and animations that may not be possible through practical or live-action filming. There are two main uses of CGI in movies - realistic CGI, which aims to make computer graphics appear physically, photorealistically, or functionally realistic; and computer animation, which can be 2D or 3D. Popular animation techniques include tweening, morphing, and rendering. As technologies advance, the applications of CGI continue to evolve and allow for ever greater realism and new types of animated content in films.
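Tweening, named in the paragraph above, means generating the in-between frames from two keyframes. The linear version below is the simplest case and uses hypothetical names; animation tools also offer eased and spline interpolation.

```python
def tween(key_a, key_b, n_inbetweens):
    """Yield n_inbetweens (x, y) points linearly interpolated between
    two keyframe positions, excluding the keyframes themselves."""
    ax, ay = key_a
    bx, by = key_b
    for i in range(1, n_inbetweens + 1):
        t = i / (n_inbetweens + 1)  # fraction of the way from A to B
        yield (ax + (bx - ax) * t, ay + (by - ay) * t)

frames = list(tween((0.0, 0.0), (10.0, 4.0), 4))
print(frames)  # four evenly spaced in-between positions
```

Morphing extends the same idea from positions to whole shapes, interpolating corresponding points on two outlines while cross-dissolving the imagery.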
AR Foundation framework: product roadmap – Unite Copenhagen 2019 — Unity Technologies
Learn about the latest developments in AR Foundation, the Unity framework purpose-built for augmented reality (AR) development that lets you build your app once and deploy across mobile and wearable AR platforms. In this session you'll also hear about the roadmap for AR Foundation and what's in the works.
Speakers:
Mike Durand – Unity
Matt Fuad – Unity
Watch the session on YouTube: https://youtu.be/UkBXOff8Efo
The document provides a summary of various creative projects that utilize digital media and crowdsourcing including music videos created using fan-submitted content, interactive installations, data visualizations, and augmented/virtual reality works. It briefly describes projects such as the Johnny Cash Project music video, interactive light displays, location-based mobile games, and tools for visualizing information and trends on social networks. The document covers a wide range of genres including art, film, music, technology, and their intersections.
Nelson Zagalo from the University of Minho in Portugal gave a presentation on compositing at the University of Maribor in Slovenia. Compositing involves combining visual elements from different sources to create a single image, often making elements appear part of the same scene. It is used for special effects in film and to connect real and artificial images. Traditional film techniques included physical compositing, multiple exposures, rear projection, and matting. Digital techniques include blend operations, keying, alpha channels, mattes, masks, nesting, color correction, and motion tracking. Zagalo provided examples of how these techniques have been used in famous films.
The document discusses the technology behind the movie Avatar, including 3D camera technology, motion capture, computer graphics and visual effects. It then covers the introduction of 3D movies, including the use of 3D glasses and 3D projectors in theaters. Finally, it discusses the business model of Avatar, focusing on toys using augmented reality and the creation of the Na'vi language.
A brief look at simulation in VFX and games environments. I talk about some common concepts in simulation media, like crowd and fluid simulation in physics engines. I also cover the Interstellar case: the Gargantua black hole simulation was amazing and I wanted to share it. Finally, I touch on some simulation games that are commonly used for analysis of real-game environments.
Animatronics is the use of mechatronics to create machines that appear lifelike rather than robotic. The design process involves sculpting, mold making, armature fabrication, costuming, and programming. Animatronics are used in entertainment like movies and theme parks to bring characters and creatures to life. While animatronics provide realistic experiences, the technology is also very costly, complex, time-consuming, and requires skilled labor.
Animatronics refers to the use of robotic devices to emulate a human or an animal or bring lifelike characteristics to an otherwise inanimate object. Animatronic creations include animals (including dinosaurs), plants and even mythical creatures. A robot designed to be a convincing imitation of a human is more specifically labeled as an android. Modern animatronics have found widespread applications in movie special effects and theme parks and have, since their inception, been primarily used as a spectacle of amusement.
Social Studies: Pinterest & Instagram for Brands — Periscope
The Periscope Community Team presents their take on Pinterest and Instagram. Learn how your business or brand can use these rapidly-growing platforms to engage with and learn from brand enthusiasts.
The document discusses usage statistics and growth trends for several social media platforms: Instagram, Tumblr, Pinterest, Twitter, and Facebook. Some key findings include:
- Instagram has over 100 million users and reached that number faster than other platforms. It is popular among luxury brands.
- Tumblr has over 150 million users and its content usually links to different interests.
- Pinterest's users are 68% women and it is growing very quickly, especially among mothers and those interested in food, home, and style.
- Twitter has 500 million users and is effective for brand promotions and driving purchases.
- Facebook has over 1 billion users and its users spend the most time on the site each month compared to other
Editorial Calendar Template via Hootsuite — FlutterbyBarb
The document provides details on 6 social media templates that will save time, including the title, author, topic, deadline, publish time, images to be included, and suggested publishing channels for sharing the templates. The templates are to be shared by Evan LePage and will provide downloadable templates for social media use.
This document discusses augmented reality and its use as a marketing tool. Augmented reality digitally enhances the real world by adding layers of digital information, like videos and photos, onto real-world items. As a marketing tool, augmented reality can be used across a product's lifecycle to create emotional connections with customers and drive offline sales by letting brands engage with customers repeatedly. However, there are still some roadblocks to the adoption of augmented reality in marketing.
Grafdom, one of the leading digital media agencies in the Middle East, is celebrating its tenth anniversary of operations. Since its founding in September 2005, the Dubai-headquartered company has achieved remarkable industry acclaim over its decade-long track record. A decade of experience has transformed Grafdom from a new venture with big ambitions into a successful digital agency that leading brands, government departments, NGOs and entrepreneurs rely on for solutions that add value and offer a level of reliability that is unmatched. "We are grateful to all our clients, partners and most of all our outstanding team for being a part of this journey," said Farid Gasim, Director of Operations. "Our clients' success is, and will continue to be, our success. This milestone is a testament to Grafdom's commitment to innovation, long-term partnerships and business integrity. We look forward to another decade of emerging opportunities and new relationships." Today Grafdom has become a leading name with operations in 5 cities: Dubai, Abu Dhabi, Toronto, Lahore and Baku. The team at Grafdom has continued its pursuit of delivering cutting-edge digital media solutions, with award-winning website designs, viral social media campaigns and interactive mobile apps.
Social Robotics normally assumes visual feedback between robotic trainees and human trainers. Given that robots rarely have adequate visual perception/recognition, such systems are noisy and prone to judgment errors. One way to resolve this problem is to simplify the communication channel between humans and robots. This paper uses simple gravity+motion XYZV sensors ubiquitous in modern personal devices -- smartphones in particular -- to power gait-based exo-systems (power leg assist, etc). This paper discusses current work in progress on this topic, specifically (1) gait modeling and recognition of kick-in moments for hardware, (2) use of the XYZV channel in both directions, allowing humans to send feedback in realtime, (3) social robotics constructs and methods that support maximum flexibility in applications of the otherwise traditionally narrow-purpose hardware systems.
Augmented reality (AR) involves overlaying digital information and graphics onto the real world. This document discusses the history and key concepts of AR, including that it combines real and virtual elements in real-time and is interactive. It also examines some examples of early AR technologies from the 1960s to today. Common hardware components needed for AR like displays, tracking systems, and mobile computing power are outlined. Potential applications of AR in education, tourism, and shopping are also reviewed.
Presented at Softwarica College of IT, Kathmandu
This presentation includes:
1. About AR
a. Definition
b. Examples
c. Image Recognition and Tracking
d. SLAM (Simultaneous Localization and Mapping)
e. Difference between VR and AR
2. History of AR
3. Current Scenario of AR
a. Statistics
b. Mobile AR Examples
c. Magic Leap and Hololens
4. Getting Started with Unity
a. SDK Cheatsheet
[Pandora 22] ...Deliberately Unsupervised Playground - Milan LicinaDataScienceConferenc1
This talk will showcase some of the latest practices where technology is used in order to create more believable results with non-player characters and situations. Apart from game worlds, we will look into the usage of content created by bots which are displayed and executed in real-time experiences and interactive media. The goal of the presentation is to discuss the tools and approaches in storytelling for both tech and non-tech enthusiasts which can help them not only to automate but to widen their (creative) output.
Augmented reality (AR) is a live view of the physical world where virtual elements are overlaid, usually in real-time. AR systems consist of see-through displays, hardware, and software. Common types of AR include pattern tracking and using GPS and compass data. Building an AR application involves programming with an IDE and libraries like FLARManager. Platform-specific AR development requires using the Android or iOS SDK with AR libraries such as Qualcomm, NyARToolkit, or ARKit.
Presentation at Follow the Sun Conference. 14 April, 2011 Online.
Zagami, J. (2011). Augmenting Education [Presentation slides]. Retrieved from http://www.slideshare.net/j.zagami/augmenting-education
The document discusses augmenting education through augmented reality technologies. It provides timelines for when different augmented reality and mobile technologies may become widely used in education, ranging from one to five years. These include electronic books, simple augmented reality, gesture-based computing, and learning analytics. The document also provides background information on augmented reality, virtual reality, and how augmented reality can enhance the real world with computer-generated stimuli. It provides several examples of current educational applications that use augmented reality.
Augmented reality : Possibilities and Challenges - An IEEE talk at DA-IICTParth Darji
This presentation is a part of a talk I was invited to give on the topic of Augmented Reality and Virtual Worlds. This talk, organized by IEEE, aimed at introducing the technology to students and discuss the scope and research associated with it. Qualcomm's Vuforia platform is used as a prototype.
Introduction about Augmented Reality. This slides will provide knowledge about how Augmented Reality will work virtually using VR Glasses, Google Glass,etc.
This presentation is to understand what is Augmented Reality, its use and future. It also contains some slides to show how iOS developer create app with facility of AR using ARKit framework introduced in iOS 11.
This document provides an overview of Magic Leap and its augmented reality technology. It discusses Magic Leap's applications in gaming, entertainment, education and commerce. Magic Leap aims to create realistic 3D images that interact with the real world using eye tracking and natural hand gestures. It also plans to establish strategic control through intellectual property protection, network effects, economies of scale, and setting industry standards to potentially dominate the augmented reality market.
The presentation provides an overview of augmented reality technology, focusing on Magic Leap's approach. It discusses Magic Leap's augmented reality glasses, how their technology works to overlay digital objects that interact realistically in the physical environment. Applications are explored in gaming, entertainment, education and commerce, outlining the value propositions in each sector. Key competitors are also compared against Magic Leap.
“Any Sufficiently advanced technology is indistinguishable from magic”
Augmented Reality is set to revolutionize how we perceive reality be it the field of gaming or retail marketing, online retail stores or the adverts.
And one of the interesting aspect about this is that everyone's going to totally love it!
Augmented reality is changing the way we view the world -- or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. Similar devices and applications already exist, particularly on smart phones like the iPhone.
This document discusses augmented reality (AR), which enhances the real world with computer-generated perceptual information. AR is defined as enhancing the real world rather than completely replacing it like virtual reality. The document then discusses how AR works by superimposing graphics, audio and other senses onto real-world scenes. Examples of AR technologies like Hawk-Eye and Sixth Sense are provided. Applications of AR discussed include travel, transportation, medicine, and advertising. Limitations and the future of AR are also mentioned.
This document provides an overview of augmented reality (AR) including definitions, comparisons to virtual reality, history, applications, and the present and future of AR. It defines AR as an interactive experience that enhances the real world by overlaying computer-generated information such as images, text, and sounds. Examples of current AR applications discussed include gaming, medical, manufacturing, navigation, and defense uses. The future of AR is predicted to include expanded uses in social networking, marketing, advertising, and education as the technology continues to develop.
Augmented Reality: What is it and should I care?Kevin Cheng
The document discusses augmented reality (AR) and its potential future applications. It defines AR as live views of the real world with virtual elements mixed in. Recent advancements in mobile technology such as cameras, internet connectivity, GPS, and accelerometers have enabled new AR applications. Examples mentioned include using AR for art, product previewing, games, simulation, and training. Challenges include a lack of design patterns and technical limitations, but the market is predicted to grow significantly in coming years as smartphones become more common and the technology becomes more ubiquitous.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU
Taking the purple pill
1. Taking the Purple Pill: Lessons Learned Building a Platform for Geo-Social Augmented Reality Games. Terrance Cohen, Vice President, Game Platforms; Oriel Bergig, Vice President, R&D
9. Obligatory “What is AR” Slides. Ronald Azuma’s seminal work, “A Survey of Augmented Reality,” August 1997: http://www.cs.unc.edu/~azuma/ARpresence.pdf. This is your first stop if you have any interest in AR.
10. “In Augmented Reality, the user can see the real world around him, with computer graphics superimposed or composited with the real world. Instead of replacing the real world, we supplement it. Ideally, it would seem to the user that the real and virtual objects coexisted.”
12. Lesson: AR is Different. The Rendering Pipeline presents the Virtual world to the Real world: a one-way process. The Augmented Reality pipeline also presents the Real world to the Virtual world: a round-trip process.
19. Lesson: Bridging Tech Worlds. The standard pipeline: 3D Virtual Scene → Model Transform → Lighting → Camera Transform → Viewport Transform → Screen space (2D viewport).
20. Lesson: Bridging Tech Worlds: Model transform. The Real Scene (the world) adds a Real Object transform that drives the Model Transform. (HITLab: AR Pancho project)
21. Lesson: Bridging Tech Worlds: Lighting. Real lighting from the scene drives the Lighting stage. (VTT: Photorealistic rendering for Augmented Reality)
22. Lesson: Bridging Tech Worlds: Camera transform. The Real camera transform drives the virtual Camera Transform.
23. Lesson: Bridging Tech Worlds: The live feed. The camera image itself enters the pipeline and is composited into Screen space (2D viewport).
35. Lesson: Tangible Interaction: Printed Markers. You are FIVE steps away from an AR tangible experience. (Sketch shows the steps to print the marker.)
36. Lesson: In-Place AR Sketch a Marker LINK: Shape recognition and pose estimation for mobile augmented reality
Welcome everyone to GDC 2011, and thank you for joining us today. I’m Terrance Cohen, Vice President of Game Platforms for Ogmento, and with me today is Oriel Bergig, Vice President of Research & Development and co-founder of Ogmento. I come from the world of mass-market console game development; most recently I was the Lead Systems Engineer at Insomniac Games, focusing on the Resistance franchise. Less than half a year ago, I joined Ogmento to work on geo-social Augmented Reality games. I hope that after this hour you’ll understand why.
This time last year, most of us had smartphones. But today, anyone is likely to be walking around with a GHz processor in their pocket, a 3” screen or better, and best of all, a back-facing camera! This is a perfect storm for augmented reality!
What are Geo-Social Augmented Reality Games?
When I say Geo, you should think of FourSquare: Locations in the game are 1-to-1 with locations in the real world, and being there in the game means being there in the real world. The game is played in the real world.
So what are Social games? <You don’t know? GET OUT! 8-)> Alright, you probably know what social games are. But they’re hard to define, and are usually defined by example. So think of FarmVille, and we’ll define social games as games that integrate with a social network and use that network to enhance gameplay between players.
And when I say Augmented Reality or AR games, think of Eye of Judgement or Eye Pet where virtual objects and characters are registered and overlaid with features of the real world, and players interact with them.
This is a frame from Eye Pet, where players interact with virtual pets. You’ve seen those before: geographically-based games, social games, and AR games. But you probably haven’t seen them combined before. AR games invite the player to interact with virtual objects in the real world. Geo-social games invite the player to play the game at locations in the real world. So you remember why I’m here: the time is right, now that we have such capable devices. And for the first time in history, combining these features in a game is possible: it can run on the phone in your pocket. The hardware barrier is finally gone.
What do I mean when I talk about “Taking the Purple Pill”? Well, if the Red Pill represents Reality, and the Blue Pill represents Virtual Reality,
…Then the Purple Pill represents Augmented Reality – a blending of Real and Virtual Reality.
We are game developers, and AR is central to what we’ll be discussing, so let’s be a little more rigorous about our definition.AND I would be remiss if I didn’t have the obligatory “What is AR” slides… but there are only 2 of them 8-) !AR is effectively defined by Ronald Azuma’s seminal work, “A Survey of Augmented Reality”.This is your first stop if you have any interest in AR, and hopefully after this hour, you will!
According to Azuma’s definition. <read>
Now, we’re going to jump right into some of the lessons we’ve learned building a platform and games using geo, social, and augmented reality techniques.One big lesson that I learned is that Augmented Reality is different. It’s a “game changer” in many ways. And I think it takes some time to really get the subtle and the not-so-subtle ways that AR changes the way games are designed, developed, and maintained, to say nothing about how they’re played.So to think about how AR is different, I want to compare the Rendering Pipeline with the AR Pipeline.The purpose or job of the Rendering Pipeline is to present the Virtual world to the Real world.If you think of rendering as a part of the AR pipeline - that is, the tail-end of the pipeline - what AR adds to the beginning is: presenting the real world to the virtual world.At the end of the day, the Rendering Pipeline is a one-way process. Augmented Reality represents the full round-trip process.
Think of the rendering pipeline like this: <sweep hand to the right> With the standard 3D rendering pipeline, you have the virtual world rendered on the screen and presented to the player.
[End of Slide]
[Back story: the player & the scene => captured by the camera => features extracted, pose estimated, objects registered => virtual and real world rendered on the screen, and presented to the player.]
Now, here’s the round-trip pipeline represented by augmented reality. <pointing at features> The player is being seen by the camera, the AR algorithms register the player in the scene, and composite the player with objects in the virtual world. Then the blended real and virtual world is rendered on the screen, and presented back to the player.
[End of Slide]
[Back story: the player & the scene => captured by the camera => features extracted, pose estimated, objects registered => virtual and real world rendered on the screen, and presented to the player.]
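The one-way versus round-trip distinction can be sketched in code. This is purely a conceptual sketch, not the Ogmento platform's actual code: every function name here is a hypothetical placeholder standing in for the real capture, tracking, and rendering systems.

```python
# Conceptual sketch of the two pipelines described in the talk.
# All function names are illustrative placeholders, not a real AR API.

def render_pipeline(virtual_scene):
    """One-way: the virtual world is presented to the real world."""
    return ["rendered:" + obj for obj in virtual_scene]

# Stub stages so the sketch runs; real systems use computer vision here.
def extract_features(frame):
    return [px for px in frame if px == "marker"]

def estimate_pose(features):
    return {"markers_seen": len(features)}

def register(scene, pose):
    return [f"{obj}@{pose['markers_seen']}" for obj in scene]

def ar_pipeline(camera_frame, virtual_scene):
    """Round-trip: the real world first enters the virtual world
    (capture -> features -> pose -> registration), and only then is
    the blended scene rendered back out to the player."""
    features = extract_features(camera_frame)
    pose = estimate_pose(features)
    registered = register(virtual_scene, pose)
    return render_pipeline(registered) + [camera_frame]  # composite & present
```

The key structural point is that `ar_pipeline` wraps the ordinary `render_pipeline` as its tail end, with the real-to-virtual stages bolted on at the front.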
Alright. Now another lesson I learned in the process of developing geo-social AR games. We are not just bridging the real and virtual worlds in the context of a simulation. We are also bridging two sides of the real engineering world. These two sides are the graphics engineers and, on the other side of the room, the augmented reality scientists. And the two of them don’t talk. This is a story told to me by our Chief Scientist: he was at a conference, and the rendering engineers were on one side of the room, the AR scientists on the other, and they literally knew nothing about the algorithms being used by the people on the other side of the room.
So here’s an example of the disconnect between the two sides of the room. I call it “The Tech Demo Drop,” and here’s how it goes: The Chief Scientist appears out of thin air. He says, “Hey, take a look at this demo that I made last night.” We say, “Oh, wow! That’s fantastic! Thank you! So… how do we make a game out of this…” We look around, and the Chief Scientist has vanished.
And here’s an example of the tech demo drop…http://www.youtube.com/watch?v=dL4j3FoBykg
At Ogmento we are actually Bridging Worlds. At this point, I’d like to hand over the mic to the Vice President of R&D and co-founder of Ogmento, Oriel Bergig!
Let’s start with the rendering pipeline we all know, and then see how it changes when AR comes into play. We start with a 3D virtual scene. 3D models pass through a model transformation, then lighting is applied, and then a camera transformation. Now let’s see how Augmented Reality is going to affect this pipeline.
Now we have a real scene being imaged by a camera. In many cases in Augmented Reality we want the virtual object to be registered to something in the world. Let’s see a short movie of a project done at the HITLabNZ to demonstrate that. In the AR rendering pipeline, objects are transformed not only according to their animation, but also according to the transformation of an object in the real world.
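The registration step above amounts to a matrix composition: the virtual object's world transform is the tracked pose of the real object (say, a printed marker) composed with the object's own model transform. A minimal sketch with plain 4x4 row-major matrices; the names and the pure-translation example are illustrative, not taken from the Ogmento platform.

```python
# Sketch: registering a virtual object to a tracked real object.
# Matrices are 4x4 nested lists, row-major; names are hypothetical.

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Build a 4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Pose of the real object as estimated by the tracker, here a pure
# translation for simplicity: the marker sits 5 units in front of us.
real_object_pose = translation(0.0, 0.0, -5.0)

# The virtual model's own transform: hover 1 unit above the marker.
model_transform = translation(0.0, 1.0, 0.0)

# Registered transform: the virtual object now follows the real object.
registered = matmul(real_object_pose, model_transform)
```

When the tracker reports a new marker pose each frame, recomputing this product is what keeps the virtual object glued to the real one.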
You can do more if you can calculate the position of the real light sources from the camera image. In that case you no longer need to simulate where the light sources would be in a real scene: it is a real scene, and the lights are where the lights are.
The next step is the camera transform. If your camera is moving, that's where it affects the rendering pipeline. A mobile phone is a good example of a moving camera. In the AR rendering pipeline, the camera transform is done according to the real camera's transform.
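The pipeline described above can be sketched as a composition of transforms. This is an illustrative sketch in Python (not Ogmento's code, and all names are hypothetical): in a standard pipeline a vertex passes through model and then view transforms; in AR, the model transform is pre-multiplied by the tracked pose of a real-world anchor, and the view transform comes from the estimated pose of the real camera. Only translations are used here, and projection is omitted, to keep the arithmetic easy to follow.

```python
# Minimal sketch of the AR rendering pipeline (hypothetical names, illustrative only).
# Standard pipeline: world = model. AR pipeline: world = tracked_pose * model,
# and the view comes from the real camera's estimated pose.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def transform_point(m, p):
    """Apply a 4x4 transform to a 3D point."""
    v = [p[0], p[1], p[2], 1]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

model = translation(0, 1, 0)             # object sits 1 unit above its anchor
tracked_pose = translation(5, 0, 2)      # where vision/sensors say the real anchor is
view = translation(0, 0, -10)            # inverse of the (real) camera pose

ar_model = mat_mul(tracked_pose, model)  # register the object to the real anchor
mv = mat_mul(view, ar_model)             # model-view; projection omitted for brevity

print(transform_point(mv, (0, 0, 0)))    # -> (5, 1, -8)
```

When the tracked anchor moves in the real world, only `tracked_pose` changes, and the virtual object follows it, which is exactly the registration the slides describe.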
Now all that's left to do is inject the live feed into the pipeline to create the illusion that objects are registered to the real world. One good way of doing it is using a texture on a plane that serves as the camera view background. So every frame, 30 times a second, we replace the texture of that plane with the image that comes from the camera. One of the advantages of this approach is that you can now apply shaders and other effects to both the camera feed and the virtual objects. This is important because it gives both a similar appearance, which increases the illusion that the real and the virtual co-exist.
THIS IS A DEMO. Let us see a live demo of a 3D AR scene and of the model transformation we discussed. CAMERA OFF. Step 1: This is a Unity3D environment. Step 2: This is the light source, camera, object, and a plane. CAMERA ON. Step 3: This is me, injected onto the plane as a texture. Step 4: I want the object to rotate according to the rotation of a real object, so they become registered. ANCHOR. Step 5: Look at the camera preview window: they appear one on top of the other. Step 6: I'm about to rotate the card I have in my hand, and the virtual object will rotate such that it appears to be registered. ROTATE LIGHT SOURCE. If I rotate the light source, it creates a similar effect on the live feed and on the object, increasing the suspension of disbelief. This demo is also a very sophisticated way of bridging the worlds: it bridges AR with a game development environment, and you can see the final result, the AR view, in real time.
Ivan Sutherland, known for Sketchpad, created the Sword of Damocles. In 1968 he created the first computer system to demonstrate VR and AR. Got that? AR and VR started at the same time, so bridging the two is actually closing a circle.
WHERE DOES AR WORK WELL? Airplanes. The plane's rotation and location are accurately known. The target's location is accurately known. AR locking on a target is possible.
In 2007 Sony came out with The Eye of Judgment, the first commercial AR game.
Another type of AR can be done with Kinect; Augmented Reality for this platform is still under development. But let's take a second look at Kinect / Wii / Move. It is all about moving and interaction. BUT what if you could really move?
Of course, a mini head-mounted display is the answer. We saw it at GDC two years ago and… we are still waiting. And it wasn't see-through. The first transparent one appeared a month ago at CES, but it's still far from a product for playing games. Plus, even if it were ready, our target audience doesn't have it. So for me there is only one obvious answer!
My platform of choice is in your pocket. It is a perfect window to the world. It's… your mobile. So that's the platform of choice for Augmented Reality games in 2011: mobile phones.
The first thing that comes to mind when thinking about AR games on mobile phones is the camera transform. Our smartphones come with an accelerometer, gyroscope, GPS, and compass, so they have all the sensors needed to calculate the camera position and rotation, much like airplanes do. Well, almost: the hardware on a phone is not as good as the hardware on an airplane, so you need to do a lot of work to get these readings filtered. If we don't filter the readings, the objects will float around. The more we filter the readings, the steadier the objects become, but at the same time, when we move around, they will lag behind, breaking the illusion. There is no perfect solution for that. One good way is to use an extended Kalman filter.
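The steadiness-versus-lag trade-off above can be seen with something much simpler than the extended Kalman filter the talk mentions. This sketch (illustrative only, not Ogmento's filter) uses a plain exponential low-pass filter: a small smoothing factor suppresses jitter but makes the virtual objects lag behind a real camera turn, while a large one tracks the turn but lets noise through.

```python
# Illustrative sketch of sensor smoothing (not the production EKF).
# alpha near 0: heavy smoothing, steady objects, but they lag behind motion.
# alpha near 1: responsive tracking, but sensor jitter passes straight through.

def low_pass(readings, alpha):
    """Exponentially weighted moving average of a stream of sensor readings."""
    filtered = []
    estimate = readings[0]
    for r in readings:
        estimate = alpha * r + (1 - alpha) * estimate
        filtered.append(estimate)
    return filtered

# Camera yaw jumps from 0 to 10 degrees (the user turns the phone).
readings = [0, 0, 0, 10, 10, 10, 10, 10]

heavy = low_pass(readings, 0.2)   # steady, but still far from 10 after the turn
light = low_pass(readings, 0.9)   # almost caught up, but would pass noise through

print(round(heavy[-1], 2), round(light[-1], 2))  # -> 6.72 10.0
```

An extended Kalman filter improves on this by fusing all the sensors with a motion model, adapting how much it trusts each new reading, which is why it handles both jitter and fast motion better than a fixed smoothing factor.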
So let's assume we solved all the issues with the phone sensors. It is still limited, because you cannot tell what is in the image with the sensors alone. In the red screenshot, an object is floating in the air, not registered to anything specific. You can work around those issues in gameplay. In our game, in demon hunting mode, the objects move around quickly and your goal is to track them. This is good for overcoming noisy sensors and distracting the player from noticing that the objects are loosely registered to the real world. But that is limiting in terms of the game experiences you can create. So what if you want to do more? You want to really see what's in the image so you can register objects.
That's where you need to do image analysis and computer vision on every frame. OK, so we want CV. What can we do with it?
Tangible interaction! Remember the ARPancho? This project is a good example of tangible interaction, because the virtual character reacts to the card movements. Until not too long ago, the only available technology that could help you do that required printing.
Printing means you need to print a specific shape or image to be able to play! Meaning, you need to go through 5 steps before you can start the game. Publishing a mobile game on the App Store loses its purpose when you need to first install the game, then go to a website, download and open a PDF, print the PDF, and finally play! And don't forget to carry the print with you so you can be mobile. We are talking about mobile games, after all.
So what if you could create your markers anywhere by just sketching them? What if you could support almost any sketched shape or scribble? The sky is the limit for the possible art. In our game, the theme revolves around the 5 elements of a pentagram; each element has its unique magic. So we ask the players to sketch a pentagram, and a 3D pentagram appears, leaping off the page with the sketch. I've included a link to an award-winning paper describing sketch markers. So we can sketch any scribble we want, and it will become the marker, the hook to the real world. It will become your tangible interface to the virtual world! Is that all? What if we could do more with your scribbles…
Remember the tech demo drop with the garden leaping off the sketched page? That was the first attempt of that chief scientist. But in his head it was a simple example of a bigger concept: In-Place AR, where what you scribble becomes part of the game. It can be a blue circle that becomes a lake, a real leaf that becomes grass, or… a simple curve that becomes a race track. So if you did a good job of bridging the mindset, the next time the chief scientist comes back to the room, he demonstrates something that will blow you away, and it's actually a game demo. I want to demonstrate to you what he demonstrated to all of us: DEMO. Step 1: I am sketching a simple scribble. Step 2: I want this to be my race track. Step 3: I want a road to loft from this sketch. Step 4: Let's play. Step 5: Let's play against the computer. Step 6: I can transition from AR mode to AR-and-VR, or to a fully VR 3D mode. This unnamed game is demonstrated here for the first time. It is revealing a new genre of games: sketch interaction games. AFTER DEMO: Thank you very much. This is also the opportunity to thank Nate Hagbi, our chief scientist, for inventing all this great tech. I will now pass the mic back to Terrance so he can tell you more about the purple pill.
Isn’t that great? Sketch racing is an example of some very exciting sketch AR technology. It shows the potential of what AR games can be.But there’s more to it. We’re doing geo-social augmented reality games.
<>The next lesson that I want to discuss with you is this. Geo-Social games are different from other games. Just to review for a moment. Earlier we talked about Geo-Social games. We said that in geographically-based games like FourSquare, the game is played in the real world. And we said that social games like FarmVille integrate with a social network and use that network to enhance gameplay between players. I want to highlight here that regular games bring … the game … to the player. Geo-social games bring the player … to the game. Similar to the way AR games invite the player to interact with virtual objects in the real world, geo-social games invite the player to experience the game in the real world. Imagine moving around a city in an FPS, except that at the same time, you're actually moving around the city in the real world.
Now here's a game that imagines demonic forces forever trying to break through into our world. In Paranormal Activity: Sanctuary you investigate paranormal phenomena to identify hot spots of demonic activity called hellholes, then use your arcane powers to exorcise the evil spirits and create sanctuaries that protect against further attacks. But beware: if you let your guard down, you risk becoming possessed by dark spirits and turning against your former allies. Paranormal Activity: Sanctuary is Ogmento's latest title for the iPhone. It is available for free from the App Store now.
And now I will use Paranormal Activity: Sanctuary to demonstrate many of the themes we’ve been discussing here. We’ll focus on the geo-social and AR elements of the game, including a demonstration of a Sketch AR component that you can get on your device immediately… if you have an iPhone. (Don’t worry, we’re bringing our stuff to Android very soon, I promise!)Demo: Paranormal Activity: Sanctuary<looking at heatmap>As you can see, we're looking at a map. On the map, in the game, we're in Room 303 of the South Hall of the Moscone Center, because in the real world, that's where we are.This is the location-based part of the game - the geo. This game is played in the real world.You can also see that there are zones of depravity called hellholes - indicated in red.And there are zones of sanity called sanctuaries - indicated in blue.<selecting a hellhole>Each hellhole has a Predator - the actual player who has contributed the most depravity to the zone.And each sanctuary has a Guardian - the actual player who has contributed the most sanity to the zone.This is part of the social aspect of the game. You are battling real players for control of locations in the real world.There are also missions on the map, identified by pins. These missions can be based on what items you have, or they can be location-based. For example, you might have missions related to ghosts or spirits near a cemetery, or sacred text related missions near a library.Now we've created missions specifically for GDC. Some of them are here, and others are spread around San Francisco. 
After the presentation, please pick up a map at the front of the room - it will guide you to the most valuable missions around the area.This is what geo-social games are all about.<selecting the Terror mission>One of our missions is The Terror in Room 303, again, because that's where we are!We'll go on that mission…It says that the geo-social energy at the conference is exhilarating, but you need something to bring it into focus…So we're going to do it…<wait for it>We've reinforced our sanctuary/hellhole, and we've gained<select it>The Purple PillThe Purple Pill is not a drug, but a way of looking at the world. The real world augmented by the virtual, the virtual world enhanced by the real.Alright, now we're going to do some investigation. We're looking for demonic activity in our immediate location.<>And there we go, we photograph the demonic activity, which gives us experience.This is the type of AR browser that can be seen in other AR games.But next, we're going to cast a spell, and boost it with sketch AR.To do that, we draw a pentagram on a piece of paper. Then we look at it with our device, and allow it to boost the power of our spell.
Now we're going to go "under the hood" of geo-social games, and take a look at some of the big moving parts. Games played at locations in the real world mean using a geo-spatial database. In our case, we use PostGIS, an extension to PostgreSQL. A persistent world and missions mean using a transactional database back-end that can handle some heavy lifting. Paranormal Activity: Sanctuary uses Amazon Web Services, including Elastic Compute Cloud (EC2) and RDS. Of course, the game uses a "freemium" model, where players can make in-game purchases, which fund the development and maintenance. For that to work, we need a way of accepting payments. Apple's App Store makes that straightforward. It's currently a little trickier on Android, but promises to get easier soon as Google begins to support similar mechanisms. But to figure out what people like to buy, what makes them want to buy, and to get them to do more of that, there's also a whole layer of analytics and reporting. Sometimes I feel like I'm back building business applications… but then I remember that these database records don't represent telemarketing calls or insurance policies (true story), but hellholes and sanctuaries, missions completed, and spells cast. Which, let me tell you, is much cooler! But these are a lot of moving parts that need to work well together.
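The core question a geo-spatial database like PostGIS answers for a game like this is "which zones are near the player right now?" (in PostGIS that would typically be an `ST_DWithin` query). As a language-neutral illustration, here is a sketch of the underlying great-circle math with made-up zone data; none of this is the game's actual schema.

```python
# Hypothetical illustration of a "zones within N meters of the player" lookup,
# the kind of query the geo-spatial database answers. Zone data is made up.
import math

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def zones_near(zones, lat, lon, radius_m):
    """Return the names of all zones within radius_m of the player."""
    return [z["name"] for z in zones
            if haversine_m(lat, lon, z["lat"], z["lon"]) <= radius_m]

# Made-up zones around Moscone Center (roughly 37.7842, -122.4016).
zones = [
    {"name": "Hellhole: Room 303", "lat": 37.7842, "lon": -122.4016},
    {"name": "Sanctuary: Yerba Buena", "lat": 37.7850, "lon": -122.4020},
    {"name": "Hellhole: Golden Gate Park", "lat": 37.7694, "lon": -122.4862},
]

print(zones_near(zones, 37.7842, -122.4016, 500))
```

A real geo-spatial database does the same kind of test, but with spatial indexes so it never has to scan every zone, which matters once there are hellholes and sanctuaries all over the world.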
… and sometimes they don't! That's when someone gets a call like one of these:<play>These are not things you want to be woken up to hear. So that's when you contract with someone on the other side of the world, who is already awake to hear it, and can fix it.
So, the heart of some of the more complex data manipulations and analysis is a set of ETL processes, which stands for Extract, Transform, and Load. The ETL processes Extract data from your application databases and other source systems like GIS, Transform that data, and Load it into tables in your data warehouse and back into your application databases. Here's an example of our ETL processes. This is the ETL script for managing leaderboards, a feature coming soon to Paranormal Activity. You can see how events come into the script with location data. The geographical coordinates are checked against a geolocation cache, to see if we already have leaderboard data for that city/state/country. If we do, then the new value can just be added to that leaderboard. However, if the location isn't in our geolocation cache, then we need to do a reverse geolocation lookup on OpenStreetMap through MapQuest to figure out where the player is. Once that's done, we can populate the leaderboard and the geolocation cache with the new location.
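The cache-then-lookup step described above can be sketched in a few lines. This is an illustrative sketch, not the actual ETL script, and all the names are hypothetical: events carry raw coordinates, a cache maps coordinate buckets to resolved locations, and the expensive reverse-geocoding call (in production, OpenStreetMap via MapQuest) only runs on a cache miss.

```python
# Sketch of the geolocation-cache ETL step (hypothetical structures and names).

def make_cache_key(lat, lon, precision=2):
    """Bucket nearby coordinates together so the cache actually gets hits."""
    return (round(lat, precision), round(lon, precision))

def process_event(event, geo_cache, reverse_geocode, leaderboards):
    """Resolve an event's location via the cache, then update the leaderboard."""
    key = make_cache_key(event["lat"], event["lon"])
    if key not in geo_cache:                      # cache miss: remote lookup
        geo_cache[key] = reverse_geocode(event["lat"], event["lon"])
    city = geo_cache[key]
    leaderboards.setdefault(city, 0)
    leaderboards[city] += event["score"]          # add the new value in
    return city

# Stubbed reverse geocoder standing in for the remote service.
lookups = []
def fake_reverse_geocode(lat, lon):
    lookups.append((lat, lon))
    return "San Francisco, CA, US"

geo_cache, leaderboards = {}, {}
events = [
    {"lat": 37.7842, "lon": -122.4016, "score": 10},
    {"lat": 37.7844, "lon": -122.4018, "score": 5},   # same bucket: cache hit
]
for e in events:
    process_event(e, geo_cache, fake_reverse_geocode, leaderboards)

print(len(lookups), leaderboards)   # one remote lookup, combined score
```

The bucketing choice is the interesting design decision here: too coarse and distant players get lumped into one city, too fine and nearly every event triggers a remote lookup.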
Now we go from data transformation to data analysis and visualization. So who doesn't like a pretty picture? Here's a map of the locations of spell casts in Paranormal Activity after a week or so of activity. Now, when I first looked at this, I was wondering: why is there so much activity to the east, and on the west coast, but very little in between?<anyone>
Yup, that’s why.<>
Now we've talked about most of the ingredients in geo-social AR games. We've talked about the client components: sensors and GPS for location and orientation. The camera as a window to the real world. Maps to display the current location and the "play field". AR to register virtual objects in the real world. And all of the database and analytics to go with it.<hands scraping deviously>We have all of the necessary ingredients for our Witches' Brew!
So we end up using a lot of off-the-shelf and open-source software, particularly on the server side. And that can sometimes result in quite a tech soup. Don't fear it. This is not about everyone "doing it my way"; rather, interop is the name of the game. These tools are designed to work together, and some are even built around working together. For those of you who started in the mobile space, this may be old hat. You might be saying, "What's he talking about?" But for those of us who have made the transition from consoles, it's a brave new world. For us, it was more like, "Hello, welcome to Game Studio, Inc. Here's your compiler, and here's the platform SDK. The engine is written in C++, which is what you'll be using for the next 12 years of your life. Good luck!" From what I've seen so far, that's not mobile development, and certainly not geo-social AR game development. Tag cloud generated at http://www.tagxedo.com/ (words repeated to weight them): Java, C#, C++, Objective-C, Unity, Mono, Xcode, Eclipse, Ant, Maven, JNI, Roo, Django, Grails, Drools, Tomcat, Terracotta, MySQL, PostgreSQL, PostGIS, Amazon EC2, Android, iOS, GPS, Google, Apple, Flash, Adobe, XAML, OpenGL, OpenCV, Touch, iPhone, HTC, Motorola, Verizon, T-Mobile, AT&T, Sprint, NVIDIA, Tegra2, Acer, GDC, Sun, Oracle, Subversion, Apache, SpringSource, Pentaho, GeoIQ, FortiusOne.
So here’s a little example of embracing the tech soup.We’ll be looking at an example of calling into the Android SDK from Unity, which most of you probably know is a very popular game engine for mobile development.In this case, we’ll be turning the device’s camera on and off from Unity, something that Unity doesn’t inherently support.
So first you write a Unity script in C# which imports a native library that you also wrote, and exposes some set of functions from the native library to your Unity script. In this case, we have a script which, when its Start() is called, turns the device's camera on by calling setCameraOn(true), a function implemented in a native library.
Now, the native library is implemented in C++, using JNI, the Java Native Interface, which lets you manipulate Java objects from native code. In this case, we'll be manipulating objects that are instances of classes that we derived from Android SDK base classes. So we'll find the Android Activity class, get its currentActivity static reference, locate that object which is our Activity instance, find the PlaytimeControls member of the Activity, find the setCameraOn() method of the PlaytimeControls object, and call that method.
Now PlaytimeControls is a Java class that uses the Android SDK to keep track of the currentActivity, to manipulate a CameraPreview instance which is owned by the Activity, and to add and remove the CameraPreview from the Activity’s RelativeLayout. If you haven’t seen it before, it looks a little crazy. The 2nd time, it’s pretty straightforward, and the 3rd time, you’re writing it in your sleep.
So it’s almost time for us to finish here, and give you a chance to ask questions. Before we do, I want to review a little.Remember that we talked about Geo-Social Augmented Reality, and how AR as the Purple Pill blends the real and virtual worlds.
We talked about how AR is Different, and how we’re Bridging Worlds between engineering disciplines.
And we talked about bridging the worlds between game design and software engineering.We talked about selecting the right platform for AR games.
We saw how we can do sensor-based AR on a mobile platform, and why we need computer vision.
We talked about tangible interaction, and we demonstrated in-place augmented reality.
We talked about how Geo-Social is different and we saw a demo of how these elements came together in Paranormal Activity: Sanctuary.And then we looked under the hood of geo-social games.
Then we talked about data analysis for geo-social gaming, and how that is just one ingredient in the Witches’ Brew.And finally, we talked about how we embraced the tech soup and wrapped-up with an example of calling the Android SDK from Unity.
I sincerely hope you've enjoyed our presentation, and thank you for Taking the Purple Pill. This is Oriel Bergig, and I'm Terrance Cohen. You can follow us on Twitter. If you enjoyed your time here, please remember to fill out the evaluation forms and hand them to the awesome volunteers in the back. If you didn't enjoy it, there are some black receptacles on the floor in the back, so you can just go ahead and slip them right in there 8-). No, I'm only joking. Please return the evaluation forms to the volunteers in the back. We're going to take questions now. We're supposed to ask you to preface your questions or remarks with identification, such as "This is Cindy from Indianapolis…" And we're supposed to always repeat the question or summarize a comment for the benefit of participants who may not have heard it.