The most important part of VR interaction is the person doing the interacting. Human-centered interaction design focuses on the human side of communication between user and machine: the interface from the user’s perspective. Focusing on users is more important for VR than for any other medium. When VR is done well, interactions can be brilliant and pleasurable, but when done badly, they can result in frustration, fatigue, and sickness. Many causes of bad VR are centered on a lack of understanding of human perception, intuitive interaction, design principles, and real users. Quality interactions enhance user understanding of what has just occurred, what is happening, what can be done, and how to do it. For optimal VR experiences, goals and needs must be efficiently achieved, and the experiences must be engaging and enjoyable.
Human-Centered Design for Immersive Interaction - Jason Jerald, PhD
Slide 6 - @TheVRBook, Jason Jerald, PhD
The Most Exciting VR Tech
• The Human Brain!
– ~100B neurons
– Thousands of synapses per neuron
– ~1B synapses per mm^3
– ~150,000 km of nerve fibers
– Six degrees of separation
Slide 7
Perception
• The Human Senses
– Sight
– Hearing
– Balance and physical motion
– Touch
– Proprioception
– Smell and Taste
– Multimodal
– Sensory Substitutions
Slide 9
The Iterative Perceptual Process
• With VR, we hijack the human perception-action system
– Human output = VR input (tracking)
– VR output = human input
Image courtesy of The VR Book (adapted from Goldstein [2007])
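The perception-action loop can be sketched in code. Below is a minimal, hypothetical frame loop (none of these function names come from a real SDK): each frame, tracked human output (head pose) becomes the application's input, and the rendered frame becomes the stimuli presented back to the human.

```python
# Minimal sketch of the VR perception-action loop (hypothetical API names).
# Human output (head/hand motion) -> VR input (tracking);
# VR output (the rendered frame) -> human input (the displayed stimuli).

def read_tracker():
    """Stand-in for a tracking system: returns head pose as (x, y, z, yaw)."""
    return (0.0, 1.7, 0.0, 0.0)

def simulate(world, head_pose, dt):
    """Advance the virtual world; the user's tracked pose is just another input."""
    world["time"] += dt
    world["head"] = head_pose
    return world

def render(world):
    """Stand-in for the renderer: produce the stimuli shown to the user."""
    return f"frame at t={world['time']:.3f}, head={world['head']}"

def run_frames(n, dt=1.0 / 90.0):
    """Run n frames of the loop at a nominal 90 Hz."""
    world = {"time": 0.0, "head": None}
    frames = []
    for _ in range(n):
        head = read_tracker()         # human output = VR input
        world = simulate(world, head, dt)
        frames.append(render(world))  # VR output = human input
    return frames
```

The point of the sketch is the data flow, not the implementation: tracking closes the loop, so any delay or mismatch between the two arrows is directly perceivable by the user.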
Slide 10
Factors of Adverse Health Effects
System
• Latency
• Calibration
• Tracking accuracy & precision
• Field of view
• Refresh rate
• Judder & flicker
• Display response & persistence
• Vergence/accommodation
• Stereoscopic cues
• Real-world peripheral vision
• Fit/weight/center of mass
• Motion platforms
• Hygiene
• Temperature
• Dirty screens
Individual
• History of motion sickness
• VR experience
• Health
• Thinking about sickness
• Belief
• Gender
• Age
• Mental model/expectations
• Interpupillary distance
• Not knowing
• Sense of balance
• Flicker fusion frequency threshold
• Real-world task experience
• Migraine history
Application
• Frame rate
• Locus of control
• Visual acceleration
• Physical head motion
• Duration
• Vection
• Binocular-occlusion conflict
• Virtual rotation
• Gorilla arm
• Standing vs. sitting
• Rest frames
• Binocular disparity
• VR entrance & exit
• Luminance
• Repetitive strain
Slide 11
Motion Sickness—The Basics
• The biggest problem is motion sickness
• It varies greatly from person to person
• As users:
– Stop at the first sign of sickness
– Be well hydrated, not hungry, and not hung over
– Watch out for colds and sinus congestion
• As creators:
– Maintain the frame rate of the display hardware 100% of the time
– Make interactions comfortable (initial signs of discomfort may very well lead to sickness)
– Minimize accelerations (e.g., roller-coaster motions) and virtual rotations
– Remind your users not to "tough it out"
– Wipe down equipment periodically
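The "maintain the frame rate 100% of the time" guideline is testable: log frame-present timestamps and flag any gap that exceeds the display's frame budget. Here is a small sketch of that check; the function name, refresh rate, and tolerance are illustrative choices, not from any particular engine.

```python
# Sketch: detect dropped frames against a target refresh rate.
# A frame counts as "dropped" when the gap between consecutive presents
# exceeds the display's frame budget by some tolerance factor.

def dropped_frames(present_times, refresh_hz=90.0, tolerance=1.25):
    """present_times: sorted timestamps (seconds) of frame presents.
    Returns the list of (previous, current) timestamp pairs whose gap
    exceeded budget * tolerance."""
    budget = 1.0 / refresh_hz  # ~11.1 ms at 90 Hz
    drops = []
    for prev, cur in zip(present_times, present_times[1:]):
        if cur - prev > budget * tolerance:
            drops.append((prev, cur))
    return drops
```

Run against timestamps captured during a play session, any nonempty result is a comfort bug to fix before tuning anything else.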
Slide 16
The Most Important Input Device!
• The Human Hands!
– A large proportion of the sensory and motor cortex is devoted to the hands
– Hand-tracking technology is simply the mediator that brings the hands into VR
Slide 17
Interaction Patterns
Selection
• Hand Selection
• Pointing
• Image-Plane
• Volume-Based
Viewpoint Control
• Walking
• Steering
• 3D Multi-Touch
• Automated
Manipulation
• Direct Hand
• Proxy
• 3D Tool
Indirect Control
• Widgets & Panels
• Non-Spatial
Compound
• Pointing Hand
• World-in-Miniature
• Multimodal
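Of these patterns, pointing is the easiest to illustrate in code: cast a ray from the hand and select the nearest object it intersects. The sketch below uses plain ray-sphere math for self-containment; a real implementation would use the engine's physics raycast, and all names here are made up for illustration.

```python
import math

# Sketch of the Pointing selection pattern: a ray from the hand selects
# the nearest sphere it intersects. Pure geometry, no engine assumed.

def ray_hits_sphere(origin, direction, center, radius):
    """Distance along the (normalized) ray to the sphere, or None if missed.
    Solves |origin + t*direction - center|^2 = radius^2 for the nearest t."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None  # ignore hits behind the hand

def select_by_pointing(origin, direction, spheres):
    """spheres: list of (name, center, radius). Returns the nearest hit name."""
    best = None
    for name, center, radius in spheres:
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None
```

Picking the nearest hit (smallest t) is what makes pointing feel predictable: the user always selects the first thing the ray touches, never something occluded behind it.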
Slide 34
Summary
• Widespread access to tracked hand-held controllers will take VR to the next level
– But only for applications that are designed well
• There are no universal answers
– Selecting devices and interaction patterns is project dependent
– Understand the tradeoffs and where each approach is appropriate
• Iterate. Iterate. Iterate.
• Have fun!
Slide 37
References
• Jerald, J. (2015). The VR Book: Human-Centered Design for Virtual Reality. Morgan & Claypool Publishers and ACM Books.
• Gabbard, J. (2014). Usability Engineering of Virtual Environments. In K. S. Hale and K. M. Stanney (Eds.), Handbook of Virtual Environments (2nd ed., pp. 721-747). CRC Press.
• Norman, D. (2013). The Design of Everyday Things, Expanded and Revised Edition. Basic Books.
• Pierce, J., Forsberg, A., Conway, M., Hong, S., Zeleznik, R., and Mine, M. R. (1997). Image Plane Interaction Techniques in 3D Immersive Environments. In ACM Symposium on Interactive 3D Graphics (pp. 39-44). ACM Press.
• Taylor, R., Jerald, J., VanderKnyff, C., Wendt, J., Borland, D., Marshburn, D., Sherman, W., and Whitton, M. (2010). Lessons about Virtual Environment Software Systems from 20 Years of VE Building. Presence: Teleoperators and Virtual Environments, 19(2), 162-178.
Slide 39
A New Artistic Medium
• The screen disappears
• The best VR applications are designed from the beginning for VR
• Today the technology is being sold
• Tomorrow the story and emotion will be sold
Slide 42
Example Pattern
• Walking Pattern
– A form of viewpoint control
– Specific walking techniques:
• Real-world walking
• Redirected walking
• Walking in place
• Treadmill walking
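Redirected walking, one of the techniques listed above, has a simple core idea that can be sketched in a few lines: multiply the user's physical rotation by a gain near 1.0, small enough to go unnoticed, so the virtual path diverges from the physical one and the user can be steered away from real walls. The function names and gain values below are illustrative only; real systems tune gains to stay under perceptual detection thresholds.

```python
# Sketch of the core of redirected walking: virtual rotation is the
# physical rotation scaled by a gain. Gains near 1.0 go unnoticed and
# gradually steer the user within the physical tracking space.

def redirect_yaw(virtual_yaw, physical_yaw_delta, rotation_gain=1.1):
    """Apply a rotation gain (unitless) to a physical yaw change (degrees)."""
    return virtual_yaw + physical_yaw_delta * rotation_gain

def walk_turn(steps, physical_step_deg=10.0, rotation_gain=1.1):
    """After `steps` physical turn increments, total virtual yaw (degrees)."""
    yaw = 0.0
    for _ in range(steps):
        yaw = redirect_yaw(yaw, physical_step_deg, rotation_gain)
    return yaw
```

With a gain of 1.1, a physical 90-degree turn reads as 99 virtual degrees; repeated over a walk, the mismatch accumulates into real steering without the user noticing any single step.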
What makes me qualified to give a tutorial on VR interactions?
20 years of VR experience on over 60 projects with over 30 organizations ranging from academics to startups to Fortune 500 Companies.
What I’ve learned is that every VR project is different. There are few absolute truths when it comes to VR. But theory, processes, and guidelines can be useful.
If you want to learn more about human-centered design for VR then check out my book at TheVRBook.net.
20 years of research summarized into 600 pages
Not necessarily a leisurely Sunday afternoon read, but dramatically simplified. No equations, no code—the focus is on the design from a human-centered approach.
Topics include human perception, understanding and reducing sickness, environmental design, interaction patterns, and iteration concepts.
Over 600 guidelines for designing VR applications, hundreds of definitions, and over 300 references pointing to more detail.
Go to TheVRBook.net to learn more about applying VR design concepts to your application
If you are teaching a VR course, then talk with me about getting a free evaluation copy.
I’d like to point out VR is not new!
Depending on how you define it, you could go back to prehistoric times, when cave dwellers drew on walls.
Here are some systems ranging from the 1830s to the 1990s. For example,
Wheatstone stereoscope
Brewster stereoscope from the 1860s
Flight simulators were really what got VR going in the 1900s
Sensorama in the 1960s, included a stereoscopic display, wide field of view, color, stereo sound, wind, smell, a motion/vibration platform
Tom Furness and Ivan Sutherland creating head-tracked computer rendered head-mounted displays in the 1960s
There has been quite a bit of VR over the years. However, for me there is one tech that is most exciting.
Many people are only vaguely aware of this.
Absolutely essential, without this VR is not possible.
Has been available in various forms for thousands of years.
The Human Brain!
A massive number of neurons—the basic unit of the nervous system.
However, the connection between those neurons is what is most impressive.
On average, there are thousands of connections (synapses) from a single neuron connecting to other neurons.
There are about a billion of these synapses in a cubic millimeter of brain matter.
Roughly 150,000 km of nerve fibers, enough to circle the earth multiple times!
A neuron can reach any other neuron on average by traveling through only six other neurons.
This results in a massively parallel machine.
Today's computers cannot compare; some claim they do, but that is comparing apples to oranges.
The result of this massively parallel machine called a brain, integrated with the body, is a perceiving and acting entity that enables us to interact with the world.
Our interactions with the real and virtual worlds start with perception.
VR is much more than just graphics! Especially when it comes to interacting.
Most VR creators focus primarily on visuals.
Some do audio, but mostly as an afterthought.
Lack of physical touch is one of the greatest challenges of VR.
Proprioception is extremely important for VR—the sense of where your body parts are located (e.g., close your eyes and touch your nose). We normally don't consciously think about proprioception, but when it's missing it's a problem.
Smell and taste are important for some applications but are difficult to do well with VR (although research and proofs of concept have shown them to be doable).
Bringing the senses together in a coherent manner is important to induce a sense of presence in users and for interactions to be intuitive. Spatial and temporal compliance are most important (e.g., proprioception matches visuals). Multimodal input/output can be challenging but effective.
This perception of the real (or virtual) world is not reality or "the truth."
Our perceptions are an interpretation of reality.
We do not perceive objects directly, for example, objects aren’t poking into our retinas, we instead perceive the photons bouncing off those objects into our eyes.
For VR, we hijack the human senses and replace real world cues with artificially created cues that the mind interprets as a form of reality.
Our sense of reality can be quite distorted, different for different people and under different circumstances.
A song can be quite emotional and have some significant meaning for one person while not for another person.
Here is a version of Goldstein’s iterative perception-action process.
This is a model of how distal stimuli out in the world reach our receptors and how we process them.
We can replace the real-world distal stimuli with computer-generated stimuli and sense the user’s actions in order to update those stimuli.
Human output becomes VR Input
VR output becomes human input
Many VR “experts” argue about what causes VR sickness as if there were one single factor.
The ones listed here are the ones we know about; many more have been hypothesized.
The leading hardware manufacturers are solving many of the system challenges.
E.g., latency is not much of a problem any more for the best HMD systems.
But we should still be aware of them to select the best hardware and to recognize problems when they occur.
Unfortunately we don’t have much control over individual factors.
But we should understand our target audience.
We can also influence their mental model.
The factors that we have the most control over are application factors.
Especially important for navigation schemes.
But bad non-navigation interfaces can cause problems as well.
For example, put the interface in a comfortable position in the torso reference frame near the waist.
We will talk about some of these throughout the talk.
There is much more to VR beyond displays
Hardware is varied and depends upon project goals
VR hand input is varied and depends upon project goals.
E.g., location-based entertainment can be very different from games in the home.
Here is an example of some work we did with Virtuix as shown on ABC’s Shark Tank that uses an omni-directional treadmill to navigate through the virtual world.
It is not clear if or how much this treadmill technique reduces motion sickness.
A great area of research that anyone in this audience could study.
More interaction patterns and techniques will be presented later in the course.
No single input device class has all qualities or is the ideal solution for all applications.
Most appropriate device depends on goals and the application/experience.
Hybrid and multimodal interaction can be ideal (if designed and integrated well), but more expensive and difficult to implement.
Currently tracked hand-held controllers are the most appropriate for most, but not all, VR experiences
I’ve seen some crazy VR input devices.
However, there is one type of input that is especially important for VR.
Clue: it was shown on previous slides, multiple times.
It turns out this input device is important for the real world as well.
The human hands!
A relatively large part of the human brain is dedicated to input/output of the hands
Hardware simply enables us to bring the hands into and interact with the experience.
Now let’s move on to some common interaction themes that occur across VR applications.
I looked at ~100 interaction techniques and categorized them into interaction patterns.
Very different from software design patterns. These patterns are from the user’s point of view.
I organize the patterns into 5 overarching pattern groups.
These 16 patterns can be further broken down into more specific interaction techniques.
The first thing people do when hand tracking becomes available is direct selection and manipulation.
Here is what I built back in 2013 just after the Oculus Kickstarter DK1 shipment
Quite a bit can be done just by tracking three points—the head and the two hands.
Here, I’m threatening Paul Mlyniec, the President of Digital ArtForms. Since his hands are tracked he doesn’t have to think of where the surrender button is. He just naturally raises his hands to non-verbally say “don’t shoot!”
Very similar to how we interact in the real world
Other parts of the body can be moved with inverse kinematics and animations. For example the feet move with a walking animation when pressing forward with joystick on a hand-held controller.
Inverse kinematics can be extremely compelling if done well and where appropriate.
Example of some work we did with Digital ArtForms, Sixense, and the Wake Forest School of Medicine to teach kids about how their brains work.
Learn by doing
Simply reach out to grab the brain parts: intersect the hand with the object and then push a button.
Can do so much more than you can with a real-world puzzle. E.g., integrating with digital media, smarter interactions, special effects.
Play movie
Pointing is normally considered to be non-realistic (unless simulating a laser pointer).
Enables selection at a distance
Here we have an exocentric view of the world from an egocentric perspective.
Here, pointing enables the user to shoot electricity from his hands to protect a friend in a smaller world (Work with Digital ArtForms)
Why do VR developers think user interface elements should go in the head or world reference frame?
A simple solution:
Put UI signifiers/labels in the hands.
Simply look at your hands when performing an action until you get good at the task—similar to real reality.
The signifiers are simple transparent textures with labels attached to a Razer Hydra.
A 3D model matching what the user physically feels would be better.
No finger tracking, but close enough for users to intuitively know what button does what by touch.
Enable users to turn on/off the visuals. Important for both novice and expert users.
VR experts argue about whether arms or realistic-looking hands are better.
Like many other great VR debates, the answer is it depends!
Arms are most appropriate when:
Realism is important
It can be assumed the user will always be facing in one direction or the torso is tracked.
Arms are not appropriate when:
Reaching further than the physical length of your arms is important
For example:
Reaching to pick up items off the floor from a seated position
You want to reach further into the distance than your physical arms allow.
The Go-Go technique is a hand selection technique with a one-to-one mapping within personal space, out to 2/3 of your physical arm length. Beyond 2/3 of your arm length, your virtual reach expands nonlinearly, enabling selection at a distance.
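This mapping can be sketched as follows (a minimal sketch; the 2/3 threshold follows the rule above, while the gain constant `k` and the exact quadratic growth term are illustrative choices in the spirit of the technique):

```python
def gogo_reach(d, arm_length, k=0.5):
    """Map physical hand distance d (from the torso) to virtual hand distance.

    Within 2/3 of arm length the mapping is one-to-one; beyond that,
    virtual reach grows nonlinearly. k is an illustrative gain constant
    that would be tuned per application.
    """
    threshold = (2.0 / 3.0) * arm_length
    if d <= threshold:
        return d  # 1:1 mapping within personal space
    # nonlinear growth beyond the threshold enables selection at a distance
    return d + k * (d - threshold) ** 2
```

With `arm_length=0.7`, a hand 0.4 m out maps one-to-one, while stretching to full physical arm length already reaches beyond it.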
Hand selection/manipulation can be quite compelling even when the hands don’t look like hands (i.e., 3D cursors).
With appropriate spatial and temporal compliance, 3D cursors feel like they are your hands—even though they don’t look at all like your hands.
The image-plane pattern is rarely used but can work quite well for selecting objects at a distance.
Shown here is the Head-Crusher Technique.
Think of the world as a 2D plane in front of the user.
Hold the hand up and place two fingers of the dominant hand around the object of interest.
Push a button on the non-dominant hand or verbally say “select.”
Requires closing one eye and can result in gorilla arm if used often.
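The underlying geometry can be sketched as a ray cast from the eye through the midpoint between the two finger tips (a minimal sketch against spheres only; all names are illustrative, and a real system would ray-cast the full scene graph):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def head_crusher_select(eye, finger_a, finger_b, spheres):
    """Return the name of the nearest sphere the user has 'crushed':
    cast a ray from the eye through the midpoint of the two finger tips
    and pick the closest hit. spheres: list of (name, center, radius)."""
    mid = tuple((a + b) / 2 for a, b in zip(finger_a, finger_b))
    d = _normalize(tuple(m - e for m, e in zip(mid, eye)))
    best = None
    for name, center, radius in spheres:
        oc = tuple(c - e for c, e in zip(center, eye))
        t = sum(o * di for o, di in zip(oc, d))  # distance along the ray
        if t < 0:
            continue  # behind the eye
        closest = tuple(e + t * di for e, di in zip(eye, d))
        miss2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if miss2 <= radius ** 2 and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best else None
```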
Widgets and panels often take desktop applications or metaphors and bring them into the virtual world.
Often not ideal, but easy to do.
Hand pointing via ray selection is most common and most usable.
Here I’m bringing in arbitrary desktop applications into a CAVE VR system.
If you want to do a quick mathematical calculation, you don’t want to have to exit the virtual world to bring up a calculator app, and it’s not worth writing an immersive 3D calculator app. Instead of exiting the virtual world, bring the existing application into VR.
There are many other forms of immersive widgets and panels. For example, hand-held panels, panels above the head, finger menus, marking menus, etc.
Putting a virtual panel on the hand works surprisingly well
Provides double dexterity, where the non-dominant hand provides a reference frame to work in.
Interface always available
Turn on/off with the click of a physical button
Examples here include traditional GUI elements (buttons, sliders, drop down menus) as well as more interesting GUI elements (dials, marking menus, color cube)
Similar to 2D multitouch
Think of manipulating the world as an object—grab with both hands to translate, orient, and scale the world
Thinking about manipulating the world as an object results in the user thinking of the world moving instead of moving through the world, resulting in reduced motion sickness.
Appropriate for abstract non-realistic interactions
Data independent. We’ve used this across a variety of applications including volumetric data—no polygons required because no intersection computations are required.
Solves gorilla arm: a user study found participants could go 4 hours with no reports of arm fatigue.
To translate, simply grab the world with one or both hands
To scale/zoom, “pinch” with two hands instead of two fingers
To rotate, grab the world with both hands and orbit about the midpoint between the hands as if it were a globe
Can do translation, scale, and rotation simultaneously with a single gesture
Interaction is direct and immediate—it starts as soon as you push the button, with no need to wait for the system to recognize the motion.
Results in the feeling that you are crawling through and stretching space.
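The per-frame math behind this two-handed grab can be sketched as follows (a minimal sketch assuming tracked hand positions as 3D tuples; rotation about the midpoint is omitted for brevity, and the function name is illustrative):

```python
import math

def two_hand_world_transform(prev_left, prev_right, cur_left, cur_right):
    """Compute the world translation and uniform scale for one frame of
    two-handed manipulation: translation follows the midpoint between the
    hands, and scale is the ratio of hand separations (the two-handed
    'pinch'). The transform is applied to the world, not to an object."""
    prev_mid = tuple((a + b) / 2 for a, b in zip(prev_left, prev_right))
    cur_mid = tuple((a + b) / 2 for a, b in zip(cur_left, cur_right))
    translation = tuple(c - p for c, p in zip(cur_mid, prev_mid))
    prev_sep = math.dist(prev_left, prev_right)
    cur_sep = math.dist(cur_left, cur_right)
    scale = cur_sep / prev_sep if prev_sep > 0 else 1.0
    return translation, scale
```

Because the update is computed every frame from hand deltas, the interaction starts the instant the buttons are pressed, with no gesture recognition delay.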
An example of 3D Multitouch shown at SIGGRAPH 2011. Non-HMD here but the 3D multitouch actually works better with HMDs as that is what it was originally designed for.
Play movie
The same interface is used in MakeVR (new company should be formed by VRDC 2016), an immersive CAD modeling program.
It makes 3D content creation and world building intuitive and accessible to anyone.
Currently works with Sixense STEM, Razer Hydras, and SpaceGrips.
Support coming for Oculus Touch controllers and HTC Vive controllers
For those not familiar with MakeVR, here is an example video of what is possible.
What is common to all VR Development?
Iteration is absolutely essential!
More so than for creating any other product
If you get it wrong here, you can make users sick or injure them.
You will get it wrong on first attempts.
What you think would work great often does not
A big part of what we are working on at NextGen Interactions is creating processes for providing better feedback to VR creators for their specific projects.
That starts with task analysis to really understand what users are trying to accomplish.
That leads to expert guidelines-based evaluations, the first step toward making sure the most important items are being done well. Next comes formative usability analysis, using techniques ranging from speak-out-loud protocols, where the user explains aloud what he is doing and thinking, to observational analysis of breaks in presence.
Then those who want more formalized results can conduct formal scientific studies to compare different techniques or conditions.
Widespread access to tracked hand-held controllers not only provides a large user base but provides developers with quality input devices that will lead to innovative interfaces that nobody has yet considered.
Just because anything is possible doesn’t mean everything will work. Viewpoint control especially requires caution due to risk of sickness and injury.
Iteration is absolutely essential for VR.
Most importantly have fun! Otherwise what is the point of reality, whether virtual reality or real reality?
If you want to learn more about human-centered design for VR then check out my book at TheVRBook.net.
20 years of research summarized into 600 pages
Not necessarily a leisurely Sunday afternoon read, but dramatically simplified. No equations, no code—the focus is on the design from a human-centered approach.
Topics include human perception, understanding and reducing sickness, environmental design, interaction patterns, and iteration concepts.
Over 600 guidelines for designing VR applications, hundreds of definitions, and over 300 references pointing to more detail.
Go to TheVRBook.net to learn more about applying VR design concepts to your application
If you are teaching a VR course, then talk with me about getting a free evaluation copy.
There are multiple health challenges of VR.
For now, we will focus on the worst offender—motion sickness.
Thinking about and understanding reference frames is important for human-centered interaction design.
Many of you computer graphics experts are intimately familiar with reference frames.
Screen space
Object space
World space
Reference frames define:
The axes that determine translation along x, y, & z as well as rotation in yaw, pitch, and roll.
How objects move relative to other objects (e.g., attached to the head, hand, or world).
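Placing an interface element in a reference frame can be sketched as a local-offset transform (a minimal 2D sketch on the ground plane; a real engine would use full 4x4 transforms in its scene graph, and the function name is illustrative):

```python
import math

def attach_in_frame(frame_pos, frame_yaw, local_offset):
    """Return the world (x, z) position of a UI element placed at
    local_offset within a reference frame (head, torso, or hand) that
    sits at frame_pos with heading frame_yaw (radians)."""
    ox, oz = local_offset
    c, s = math.cos(frame_yaw), math.sin(frame_yaw)
    # rotate the local offset by the frame's heading, then translate
    return (frame_pos[0] + c * ox + s * oz,
            frame_pos[1] - s * ox + c * oz)
```

The same local offset behaves very differently per frame: fed the head pose each frame, the element follows the gaze like screen space; fed the torso pose, it stays near the waist as the head turns.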
The image here looks complex in 2D but becomes intuitively obvious when immersed.
Head reference frame—similar to screen coordinates in 2D development.
Blue arrows in the torso reference frame represent the forward direction.
It is important for usability and comfort to place your interfaces in the appropriate reference frames.
We will discuss these reference frames and how they are best used in more detail.
In some cases reference frames are equivalent to each other. E.g., for a fully walkable system (e.g., HTC Vive), the virtual-world and real-world reference frames can be made equivalent.
Viewpoint control patterns are especially important due to challenges of motion sickness.
An example of a viewpoint control pattern is the walking pattern.
There are various ways to “walk” within VR. Example walking techniques are:
Real World walking, such as with the HTC Vive.
Redirected walking, such as with The Void.
Walking in Place, such as with Stompz.
Treadmill walking, such as with the Virtuix Omni.
A WIM (world-in-miniature) is basically a real-time map of the world a user is in.
It can be held in the hand or attached to the world or to your body (torso reference frame).
Can reach into the map and move things that then moves the corresponding object in the larger world
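That mapping can be sketched as a scale-and-offset transform from miniature to world coordinates (a minimal sketch assuming the miniature is axis-aligned with the world; names are illustrative):

```python
def wim_to_world(wim_point, wim_origin, world_origin, wim_scale):
    """Map a point in the world-in-miniature back to full-scale world
    coordinates: undo the miniature's scale about its origin, then apply
    the world origin. Moving a prop in the map moves the corresponding
    object 1/wim_scale times as far in the world."""
    return tuple(
        wo + (p - mo) / wim_scale
        for p, mo, wo in zip(wim_point, wim_origin, world_origin)
    )
```

For example, in a hand-held map rendered at 1:100 scale, a small nudge of a miniature prop moves the full-scale object 100 times as far.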
Take on role of robot.
Crawl through a body with a friend
Green user has carved out the midsection of a body to take a closer look inside.
Blue user follows him
Medical teleconsultation—two doctors at remote locations could examine a dataset and point things out that would be difficult to do in a 2D setting.
Precision is normally extremely difficult in VR.
No physical constraints, as there are with a mouse on a desk.
So instead we add artificial constraints—what we call jigs. Shown here as used in MakeVR. Think of carpenter’s tools that provide the user more precise interaction.
E.g., grids can be attached to polygon faces and then the grid spacing can be adjusted.
Jigs are constraint tools, similar to tools that real world carpenters use to build things.
They can constrain objects to a plane or a model’s surface.
Or snap objects to grid points that can be spaced as needed.
Or snap an object to another object.
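Grid snapping, the simplest jig, can be sketched as rounding to the nearest grid vertex (a minimal sketch; the function name and parameters are illustrative, and the spacing would be user-adjustable as described above):

```python
def snap_to_grid(point, spacing, grid_origin=(0.0, 0.0, 0.0)):
    """Snap a dragged 3D point to the nearest vertex of a grid with the
    given spacing, anchored at grid_origin (e.g., a grid attached to a
    polygon face)."""
    return tuple(
        o + round((p - o) / spacing) * spacing
        for p, o in zip(point, grid_origin)
    )
```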