9. Mobile VR
SPLIT SCREEN
SMARTPHONE APPS
PHONE SERVES AS
PROCESSOR & DISPLAY
INTERNAL SENSORS
MEASURE ROTATION AND
HEAD ORIENTATION
LESS POWERFUL /
IMMERSIVE EXPERIENCE
10. Desktop VR
STANDARD DESKTOP PC
APPLICATIONS
HMD IS A SEPARATE
DISPLAY
INTERNAL & EXTERNAL
SENSORS MEASURE HEAD
ORIENTATION AND
POSITION
MORE POWERFUL
IMMERSIVE EXPERIENCE
17. Understanding the VR Web
The elements of today’s browser-based virtual reality ecosystem
@misslivirose
18. WebVR
• Experimental API for browsers to interact with VR hardware
• Identifies connected peripherals
• Headsets
• Tracking devices and display capabilities
• Passes in display data for rendering
• Allows application to receive headset transforms
• Currently supported in Firefox Nightly & Chromium
19. WebVR: Under the Hood
• VRDisplay: the base of all VR devices
• Capabilities include pose, positional tracking, near/far planes, orientation,
and more
• VRPose: the state of a headset/device at a given time
• VRStageParameters: values that describe a space for room scale
devices
• Extensions for Window interface and Gamepad interface
• And much, much more!
https://mozvr.com/webvr-spec/
20. WebVR Library & Three.JS
• Boilerplate code to be used with Three.js
• VRControls: apply headset transforms to the scene camera
• VREffect: how the camera should render a split-screen view
• WebVRManager: changing from VR to non-VR mode
• WebVR Polyfill: Implementation for mobile browsers without native
WebVR support
21. A-Frame
• Markup-styled language built on WebVR
• Entity-Component system for extensibility
• Support for 2D and 3D objects:
• Primitives, spherical images, video, model imports
<script src="aframe.min.js"></script>
<a-scene>
  <!-- VR code! -->
</a-scene>
22. Vizor Create
• Visual node-based programming for VR websites
• Multi-user editing and real time visualizations
23. Export to WebVR
[Diagram: Core Assets + Scripting → Generated Web Build → VR Browsers / Standard Browsers]
• Use JavaScript, C#, or Boo to write scripts in the Unity game engine
• Add the Jump Gaming WebVR plugin
• Build an HTML5 application using WebGL
• Render to the browser with support for VR browsers and desktop headsets
25. Content Alert:
This section contains videos from VR capture and may be unpleasant for those
who are extremely sensitive to motion sickness triggers
Good morning everyone!
My name is Liv Erickson, and I am so excited to be back in Scotland!
2007 was an exciting year for me. It was the year I took my first computer programming class, and it was the year that I visited Scotland for the first time! My high school had an exchange program with Dunfermline, and I was able to spend three weeks out here in the summer.
At 16 years old, my computer programming experience consisted of Java applications, and the web development experience that I had was customizing my Bebo and MySpace pages, and a Star Wars fan site that I made when episode 3 was coming. In 2007, I had just joined Facebook.
My career aspirations as a teenager were pretty simple.
I wanted to be a Jedi Knight.
Flash forward to university. I had, sadly, not been able to find a major that aligned well with my path to Jedi-Knighthood, so I settled on the next best thing: Computer Science. Where I learned more Java programming. But this time, I also had the freedom to explore all of the other areas of computer science as well, so I started to do a little application development. I wrote some bad C code and created a pretty unstable Linux shell for a computer systems class that just about everyone dreaded.
In what may not surprise you, I also played a fair number of video games.
When I graduated and moved to the San Francisco Bay Area, I was blown away by the amount of passion that everyone had for technology. I liked my job, but that was just it – I liked it. I wasn’t running home to continue reading app store reviews about how much Microsoft sucked, that’s for sure. So I set out on a mission – I was going to find that thing that I was really passionate about.
Now, normally, I’d probably advise against using YouTube to make life-altering career choices, but it turns out, that’s exactly what I did. I was sitting at home in my living room watching videos when I clicked on a related video that featured the Oculus Rift Developer Kit 2, which had just begun shipping, and I had what I consider my ‘lightbulb moment’. I quite literally felt as though the room lit up and the universe was telling me that I absolutely needed to work in virtual reality right that minute. So I searched “How to develop for virtual reality”, found a download for Unity, started the download, and bought tickets to my first virtual reality meetup, which conveniently met the next week.
I was so excited, I showed up early, while everyone was setting up. One guy saw me standing there, and I probably looked ridiculous, with my mouth wide open gaping at everything, and offered me a chance to demo what he was working on. I put on the headset, grabbed the motion controllers he gave me, and the room faded around me.
And I was a Jedi Knight, choosing the lightsaber I was going to train with.
Frankly, I think that I can skip a lot of what happened between then and now, because so much of it is a blur and you probably can imagine at this point how excited I was. As soon as I had that moment of living out a childhood dream, I was sold on the nature of this technology that would soon change my entire world. Three months after my first experience, while I was blogging just about every minute of my self-taught adventures in VR, I was offered the role I have now as a developer evangelist at Microsoft, focusing on virtual and augmented reality technologies. My job is as cool as it sounds – I get to play with, develop for, and then teach about the newest VR and AR platforms, tools, and environments, which is what brings me here to you today.
One area of virtual reality that I’m particularly fascinated by and passionate about is the advent of the VR Web. Today, I’m going to show you how easy it is to get started with developing for VR devices in the browser, and how you can use JavaScript and markup-styled languages to build cross-platform immersive experiences.
Before we dive into the specifics of browser-based VR, though, let’s take a second to ask the bigger question: why even consider VR at all? In 2012, Oculus broke Kickstarter records with the announcement of the ‘Rift’ development kits. HTC announced their own headset in partnership with Valve, the HTC Vive, in 2015. Both headsets went on the market to consumers earlier this year, and there are several other desktop and standalone headsets under development, with early prototypes expected to launch later this year. Sony’s PlayStation VR device is releasing in October, and a number of high- and low-end mobile VR headsets have been shipping since Google announced Cardboard at I/O 2014.
What all of these devices bring with them, in addition to being a wonderfully high-tech fashion accessory, is the ability to be a part of experiences in entirely new ways. Our digital worlds are no longer contained to a two dimensional screen or hidden behind layers of skeuomorphism and abstract interfaces. We have, for the first time, the ability to be completely present inside of virtual worlds, and to carry that technology around in our pockets.
And I am carrying that here with me today! If you haven’t had a chance to try out a VR device yet, I’ll be more than happy to show you a demo on the Gear VR sometime today. Just come find me.
Oftentimes, you’ll hear virtual reality touted as a new way to inspire empathy within and between users. VR redefines our relationship with technology.
Compared to more traditional computing platforms, virtual reality has added benefits that fundamentally change how we think about and engage with technology, big data, and software applications. We become part of the operating system and our own bodies become an input and output device to control experiences. We have the tools to visualize information in 3D around us, engaging the reptilian, spatial part of our brains that we use to describe and interact with the physical world around us.
For the first time, we are able to transform data into imagery and stand inside of big data, and to walk in someone else’s footsteps, seeing their world through their eyes at scale. We can watch multiple sides of a story play out around us, and sit in on experiences that were previously made impossible by geography, financial standing, or time.
With this generation of virtual reality technology, we are beginning to set the stage for the future of computing, storytelling, training, healthcare, and so much more.
Generally speaking, when we talk about VR today, we will be talking about headsets that fall into one of three categories. This is excluding a number of different ways that VR and immersive computing can be achieved, so it isn’t meant to be a comprehensive list, but it does cover the basics of the head mounted display (HMD) technology being used when we discuss virtual reality broadly.
The first, and most accessible form of VR technology today is Mobile VR – smartphone-based virtual reality headsets that are powered by a cell phone application. These devices range from $5 Cardboard headsets to $99+ headsets with integrated IMUs and input controllers, such as the Samsung Gear VR headset. Applications targeting mobile VR run as applications right on the phone, which also acts as the rendering display for the headset. At a minimum, mobile VR headsets have a place to hold the phone and lenses that distort the screen to be viewed in stereo.
[Show the Gear VR]
The second most common type of VR headset today is the Desktop VR headset – a specialized display device that is worn by the user and connected to a separate computer. These devices range from the $599 Rift to the $799 room-scale Vive headset, with more on the way. A desktop VR device generally contains specialized tracking hardware, a display, lenses, headstraps, and a standalone component to track position in 3D space as the user moves his or her body around. Applications run on the desktop PC, rendered by a GPU and displayed directly on the headset, and usually also render a third view to the monitor so people outside of the headset can see a version of what the viewer sees. In a similar vein, the PlayStation VR will be a head mounted display powered by the PlayStation 4, and shares many of the same properties as other desktop VR devices.
The last type of VR device is a Standalone VR headset – which is, as you probably can guess, exactly what it sounds like! These devices are self-contained and include all of the necessary components within a single device. One example of this is the Pico Neo, which runs an Android OS and has the bulk of the processing done in the controller.
The virtual and augmented reality industry is commonly referred to as the “Wild West,” or compared to the Gold Rush – and that’s true, to an extent. We’re early on in an industry with the potential to change the world, and that’s a truly inspiring and creative place to be. We haven’t yet reached a point where best practices are defined, there isn’t one major platform sitting on top of the entire industry, and there are so many approaches when it comes to creating VR content. So why consider the web as a platform for virtual reality applications in 2016?
With a native VR application, your rendering pipeline will look different across the HTC Vive, Oculus Rift, or any new hardware that may come out with a different resolution or pixel density. With native mobile VR, your support matrix would likely be even larger across different screen sizes. With a virtual reality web, it’s incredibly simple to start building out scenes that work regardless of which VR device they are viewed in, or whether they are viewed in a VR device at all, allowing you to focus on the content of the application rather than specific implementations on a per-device or per-platform basis.
One of the primary benefits of a solution that allows development of virtual reality applications on the web is the ability to design and develop applications with cross-platform functionality written in from the ground up. Advancements in the Firefox and Chromium browsers, as well as mobile browsers, are enabling web developers to let the browser handle all of the understanding of a given platform and render a specific VR experience accordingly.
This means that you don’t have to write your own implementations of how your scene reacts to different orientations across devices; the browser instead provides an implementation that remains consistent from device to device. All you need to worry about is using that information in a WebGL context.
As we look to virtual reality as a new platform and medium for creating experiences, we start to see where traditional distribution models are standing in the way of innovation. Consider investigative journalism pieces – today, we don’t consume a single news story as a download through an app store. We browse through different types of content through a number of different sources, and App Stores don’t curate that news content.
With VR, applications are often story-driven and focus around specific events or experiences. Some VR applications have struggled to get onto App Stores for mobile devices due to a “limited” scope of content, and the model of a single app for each individual VR experience is already proving to be somewhat less than ideal.
When you first begin building virtual reality applications, there is a lot to take in from a design perspective – learning new interaction patterns, 3D graphics, and different input mechanisms is already a lot of work. With browser-based virtual reality applications, you have the tools available to begin building in languages that you are already familiar with.
Broadly speaking, when we think about the VR web, it isn’t just a specific platform or framework – it’s the idea of our browser technology growing and adapting to support 3D environments as fluidly as our browsers handle traditional content. At its core, we’ll start by looking at the experimental WebVR API, then dive into the tools and ecosystem evolving around the API to make virtual reality development for the web as straightforward as possible.
At the core of just about all things related to the VR web is the aptly named “Web VR API”. This experimental API is a collaborative effort to set a series of standards in place for virtual reality device support on both desktop and mobile browsers. The WebVR API offloads the majority of the processing effort for tracking head mounted display hardware directly to the browser, so as developers we are able to prioritize content over custom implementations of applications to support the evolving VR device ecosystem.
Right now, WebVR is supported in Firefox Nightly, and specific builds of the Chromium browser, as well as a few other proprietary browsers for specific VR platforms or applications.
The WebVR specification breaks down virtual reality devices into a number of different components so that the hardware can be easily accessed by web applications. Some of the interfaces are related specifically to rendering a VR scene appropriately for a given headset, while others handle elements around device orientation, or extensibility, such as support for the Gamepad Interface.
For a desktop VR device, this may be separate hardware entirely from the device running the browser, but for a mobile VR device, capabilities of the phone itself would provide the implementation for the interfaces defined in the spec.
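To sketch how a page might talk to the API: `navigator.getVRDisplays()` returns a Promise of the connected `VRDisplay` objects, each of which exposes a `capabilities` field. The `listVRDisplays` helper and its `nav` parameter are illustrative conveniences (passing `navigator` in makes the sketch runnable outside a browser), not part of the spec itself.

```javascript
// Illustrative sketch: enumerating VR displays via the experimental WebVR API.
// In a browser you would call listVRDisplays(navigator); the `nav` parameter
// is an assumption of this sketch so it can run without real hardware.
function listVRDisplays(nav) {
  if (!nav.getVRDisplays) {
    // WebVR is not supported in this browser build
    return Promise.resolve([]);
  }
  return nav.getVRDisplays().then(function (displays) {
    return displays.map(function (d) {
      return {
        name: d.displayName,                      // e.g. the headset's name
        hasPosition: d.capabilities.hasPosition,  // positional tracking?
        canPresent: d.capabilities.canPresent     // can enter VR mode?
      };
    });
  });
}
```

On a machine with no headset attached (or a browser without the API), the helper simply resolves to an empty list, which is also how a page would decide to fall back to a non-VR rendering path.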
Developing for WebVR is relatively simple, and reuses quite a few of the same libraries and frameworks as 3D development for standard WebGL contexts, with WebVR supported in libraries like Three.JS and Babylon.JS. There are a few WebVR libraries that make it as simple as adding a few VR-specific management and rendering elements to a Three.JS scene. Other tools, which I’ll cover shortly, provide a wrapper around these elements to abstract even more of the VR-specific code away from the front-end layer.
The WebVR boilerplate template allows you to create standard WebGL scenes, using something like Three.JS, and replaces your traditional perspective or orthographic camera with a stereoscopic one that applies headset movement as transforms to the camera within the WebGL context through VRControls. Moving in and out of VR mode is handled by the WebVRManager element.
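The split-screen view that VREffect produces boils down to rendering the scene once per eye, each into half of the canvas. The `eyeViewport` helper below is hypothetical (not part of Three.JS or the WebVR spec); it just makes the per-eye viewport math explicit, with the boilerplate wiring sketched in comments.

```javascript
// Hypothetical helper showing the split-screen math behind something like
// VREffect: each eye is drawn into one horizontal half of the canvas.
function eyeViewport(canvasWidth, canvasHeight, eye) {
  var halfWidth = Math.floor(canvasWidth / 2);
  return {
    x: eye === 'left' ? 0 : halfWidth, // left eye -> left half, right eye -> right half
    y: 0,
    width: halfWidth,
    height: canvasHeight
  };
}

// In a Three.JS scene, the boilerplate wires this up roughly as:
//   var controls = new THREE.VRControls(camera); // headset pose -> camera transform
//   var effect = new THREE.VREffect(renderer);   // renders both eye views
//   function animate() {
//     controls.update();                         // apply the latest headset pose
//     effect.render(scene, camera);              // split-screen stereo render
//     requestAnimationFrame(animate);
//   }
```

For a 1920×1080 canvas, the left eye gets the viewport starting at x = 0 and the right eye the one starting at x = 960, each 960 pixels wide.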
One such tool, created by the MozVR team over at Mozilla, is A-Frame, a markup-styled virtual reality development language that uses HTML-styled tags and an extensible entity-component system to allow developers to begin creating for virtual reality without even touching the underlying WebVR code or interactions.
With one additional JavaScript file, your first VR application could be only a couple of lines of code away.
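As a sketch of what those few lines might look like, here is a complete A-Frame page using two of its standard primitives; the position and colors are arbitrary, and the script path mirrors the local `aframe.min.js` reference from the earlier slide.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame bundles Three.JS and the WebVR polyfill in one script -->
    <script src="aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- a red box floating in front of the default camera -->
      <a-box position="0 1 -3" color="red"></a-box>
      <!-- a solid-color sky so the scene isn't empty -->
      <a-sky color="#88CCEE"></a-sky>
    </a-scene>
  </body>
</html>
```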
Another tool is Vizor Create, a node-based visual editor for VR websites. This tool allows users to collaborate and view changes in real time, and provides an extensive interface for visually programming interactions between different scene elements.
While not immediately browser-based, one of the benefits of a game engine such as Unity is the presence of a powerful visual editing tool for 3D environments, unified scripting languages for different components, and built-in support for things like physics, lighting, and rendering. Unity provides a solution that allows you to use their editor for designing and developing your application using their own game engine system components, and build to an HTML5 application using WebGL directly from their engine.
Recently, Jump Gaming developed and released a free Unity plugin component that uses the HTML5 build option in Unity to include support for WebVR, enabling you to use Unity to build directly to a WebVR-enabled application for desktop browsers.
One of my first applications – this example was built to show the potential for integrating WebVR projects with existing web infrastructure. The photos are from the ISS, and the user can stand over Earth with a 3D environment built around them. You’ll see a similar project in just a few minutes, written with A-Frame, but this project was built on top of the WebVR API as it stood about a year ago.
[Show source code for the project]
[Talk about viewing data, changing the way we think about productivity]
I made this application when I got here to view some of my favorite photos from the week in a fun, immersive environment using A-Frame. Anyone want to take a guess at how many lines of code are in this app?
[Show A-Frame code and explain what’s going on, time permitting]
What we think about today as being a browser, or a VR platform, or a website – all of these things are going to evolve rapidly over the coming years as virtual reality becomes more and more widely adopted. Google announced that over 5 million Cardboard units had shipped globally – and that was back in January. New headsets are coming to the market regularly as improvements in graphics technology make hardware more readily available.
Custom browsers within virtual reality applications are enabling holographic mixed reality experiences with support for webpages that come to life in a three-dimensional space. We’re at the beginning – smart and connected devices are closing the gap between science fiction, and reality.
One of the areas I’m particularly excited about is what we’re able to do for visualizations and manipulating data in virtual environments. As big data computation becomes more affordable, the ways that we’re able to explore and interact with information will grow in new ways that haven’t yet been thought of. Imagine being able to visualize changes in the ocean floor over decades as if you’re standing on the bottom yourself, or creating large-scale simulations of environmental data from the past – and having access to that information in your pocket.
Similarly, as we begin to look at building smarter applications, the promise of machine learning and the VR web is also an area of particular interest for me. As we look at advancements in machine learning technologies, and think of their applications within adaptive and predictive environments, there is a lot of new information that we will begin to use to create custom experiences for users.
And in our connected world, being able to bring more and more devices together opens up even more potential for virtual and augmented environments. As we continue to utilize information about biometric devices, internet connected appliances, and more, we’ll start to see the digital and physical world blend together in completely innovative virtual environments that combine elements of the real world with computer-generated ones.
So what do we need to do to get there?
Right now, the VR web has a lot of promise and a lot of momentum. Challenges around performance and optimization are being addressed so that applications can hit the framerates required for desktop virtual reality experiences. Wider adoption into a standard and support for additional operating systems are still needed to create experiences that can be easily transported across computers, phones, and other devices that may be in the pipeline.
Visual editors for 3D web applications are starting to become more widely available and fuller featured. As we look to the next few months for additional improvements at the platform, browser, and framework layer, the viability of the web as a platform for rich immersive content will become even more solidified.
Of course, there’s only so much that I can cover in 45 minutes – so if you’re ready (and I hope you are!) to try your hand at building your own VR web applications, head over to the link above for resources and tutorials to get you started.
And with that – I will leave the stage to other speakers who are similarly passionate about and pushing forward in different areas of the web ecosystem. If you have any questions, I’ll be around at different sessions today and tomorrow. I am excited, not just for the next two days of amazing content and great presentations, but for the future of the web and the power that JavaScript holds for connecting our digital lives to new experiences and redefining reality.
Thank you!