Siggraph 2014: The Glass Class - Designing Wearable Interfaces
Course on Designing Wearable interfaces taught by Mark Billinghurst at Siggraph 2014. Presented on August 10th, 2014 from 10:45am - 12:15pm. The course focuses mainly on design guidelines and tools for rapid prototyping for Google Glass.
The Glass Class: Designing Wearable Interfaces
Mark Billinghurst
The HIT Lab NZ, University of Canterbury
The 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques
Mark Billinghurst
▪ Director of The HIT Lab NZ, University of
Canterbury
▪ PhD Univ. Washington
▪ Research on AR, mobile HCI,
Collaborative Interfaces, Wearables
▪ Joined Glass team at Google [x] in 2013
Course Goals
This course covers
▪ Introduction to head mounted wearable computers
▪ Understanding of current wearable technology
▪ Key design principles/interface metaphors
▪ Rapid prototyping tools
▪ Areas for future research
What You Won’t Learn
▪ Who the companies/universities in this space are
▪ (see the Siggraph exhibit floor for that)
▪ Designing for non-HMD based interfaces
▪ Watches, fitness bands, etc
▪ How to develop wearable hardware
▪ optics, sensor assembly, etc
▪ Evaluation methods
▪ Experimental design, statistics, etc
Schedule
• 10:45 am Introduction
• 10:55 am Technology Overview
• 11:05 am Design Guidelines
• 11:25 am Prototyping Tools
• 11:55 am Example Applications
• 12:05 pm Research Directions/Resources
A Brief History of Computing
Trend
▪ Smaller, cheaper, faster, more intimate
▪ Moving from fixed to handheld and onto body
1950s → 1980s → 1990s
Wearable Computing
▪ Computer on the body that is:
▪ Always on
▪ Always accessible
▪ Always connected
▪ Other attributes
▪ Augmenting user actions
▪ Aware of user and surroundings
The Ideal Wearable
▪ Persists and Provides Constant Access: Designed for
everyday and continuous use over a lifetime.
▪ Senses and Models Context: Observes and models the user's environment, mental state, and its own state.
▪ Augments and Mediates: Information support for the user in
both the physical and virtual realities.
▪ Interacts Seamlessly: Adapts its input and output modalities
to those most appropriate at the time.
Starner, T. E. (1999). Wearable computing and contextual awareness
(Doctoral dissertation, Massachusetts Institute of Technology).
History of Wearables
▪ 1960-90: Early Exploration
▪ Gamblers and custom-built devices
▪ 1990 - 2000: Academic, Military Research
▪ MIT, CMU, Georgia Tech, EPFL, etc
▪ 1997: ISWC conference starts
▪ 1995 – 2005+: First Commercial Uses
▪ Niche industry applications, Military
▪ 2010 - : Second Wave of Wearables
Origins - The Gamblers
• Thorp and Shannon (1961)
– Wearable timing device for roulette prediction
• Keith Taft (1972)
– Wearable computer for blackjack card counting
(Photos: belt computer, shoe input, glasses display)
Mobile AR: Touring Machine (1997)
▪ Columbia University
▪ Feiner, MacIntyre, Höllerer, Webster
▪ Combined
▪ See through head mounted display
▪ GPS tracking, Orientation sensor
▪ Backpack PC (custom)
▪ Tablet input
Feiner, S., MacIntyre, B., Höllerer, T., & Webster, A. (1997). A touring machine: Prototyping 3D mobile
augmented reality systems for exploring the urban environment. Personal Technologies, 1(4), 208-217.
Summary
Wearables are a new class of computing
Intimate, persistent, aware, accessible
Evolution over 50 year history
Backpack to head worn
Custom developed to consumer ready device
Enables new applications
Collaboration, memory, AR, industry, etc
Many head worn wearables are coming
User Experience
• Truly Wearable Computing
– Less than 46 grams
• Hands-free Information Access
– Voice interaction, Ego-vision camera
• Intuitive User Interface
– Touch, Gesture, Speech, Head Motion
• Access to all Google Services
– Map, Search, Location, Messaging, Email, etc
Types of Head Mounted Displays
Occluded
See-thru
Multiplexed
Multiplexed Displays
▪ Above or below line of sight
▪ Strengths
▪ User has unobstructed view of real world
▪ Simple optics/cheap
▪ Weaknesses
▪ Direct information overlay difficult
▪ Display/camera offset from eyeline
▪ Wide FOV difficult
Vuzix M-100
▪ Monocular multiplexed display ($1000)
▪ 852 x 480 LCD display, 15 deg. FOV
▪ 5 MP camera, HD video
▪ GPS, gyro, accelerometer
Strengths of Optical See-Through
▪ Simpler (cheaper)
▪ Direct view of real world
▪ Full resolution, no time delay (for real world)
▪ Safety
▪ Lower distortion
▪ No eye displacement
▪ see directly through display
Vuzix Wrap 1200DXAR
▪ Stereo video see-through display ($1500)
▪ Twin 852 x 480 LCD displays, 35 deg. FOV
▪ Stereo VGA cameras
▪ 3 DOF head tracking
Strengths of Video See-Through
▪ True occlusion
▪ Block image of real world
▪ Digitized image of real world
▪ Flexibility in composition, match time delays
▪ More registration, calibration strategies
▪ Wide FOV is easier to support
▪ wide FOV camera
Twiddler Input
▪ Chording or multi-tap input
▪ Possible to achieve 40 - 60 wpm after 30+ hours
▪ cf 20 wpm on T9, or 60+ wpm for QWERTY
Lyons, K., Starner, T., Plaisted, D., Fusia, J., Lyons, A., Drew, A., & Looney, E. W. (2004, April).
Twiddler typing: One-handed chording text entry for mobile phones. In Proceedings of the SIGCHI
conference on Human factors in computing systems (pp. 671-678). ACM.
Virtual Keyboards
▪ In air text input
▪ Virtual QWERTY keyboard up to 20 wpm
▪ Word Gesture up to 28 wpm
▪ Handwriting around 20-30 wpm
Markussen, A., et al. (2014). Vulture: A mid-air word-gesture keyboard. In Proceedings of CHI 2014.
Unobtrusive Input Devices
▪ GestureWrist
▪ Capacitive sensing, changes with hand shape
Rekimoto, J. (2001). Gesturewrist and gesturepad: Unobtrusive wearable interaction devices. In
Wearable Computers, 2001. Proceedings. Fifth International Symposium on (pp. 21-27). IEEE.
Skinput
Using bio-acoustic sensing to detect taps on the skin (related work uses EMG to detect muscle activity)
Tan, D., Morris, D., & Saponas, T. S. (2010). Interfaces on the go.
XRDS: Crossroads, The ACM Magazine for Students, 16(4), 30-34.
Issues to Consider
▪ Fatigue
▪ “Gorilla arm” from free-hand input
▪ Comfort
▪ People prefer to make small gestures near the waist
▪ Interaction on the go
▪ Can input be done while moving?
Design For the Device
• Simple, relevant information
• Complement existing devices
(Timeline: Forever → Last year → Last week → Now)
The Now machine
Focus on location, contextual and timely
information, and communication.
Don’t design an app
Glass OS uses a time-based model, not an app model.
The world is the experience
Get the interface and interactions out of the way.
It's like a rear view mirror
Don't overload the user. Stick to the absolutely essential, avoid long interactions. Be explicit.
Micro Interactions
The position of the display and limited input ability make longer interactions less comfortable. Using it shouldn’t take longer than taking out your phone.
Time Looking at Screen
Oulasvirta, A. (2005). The fragmentation of attention in mobile
interaction, and what to do with it. interactions, 12(6), 16-18.
Design for MicroInteractions
▪ Design interactions that take less than a few seconds
– Tiny bursts of interaction
– One task per interaction
– One input per interaction
▪ Benefits
– Use limited input
– Minimize interruptions
– Reduce attention fragmentation
Make it Glanceable
• Seek to rigorously reduce information density.
• Design for recognition, not reading.
(Examples: bad vs. good card layout)
Reduce the Number of Info Chunks
• You are designing for recognition, not reading.
• Reducing the total # of information chunks will
greatly increase the glanceability of your design.
Design single interactions < 4 s
Eye movements over the dense layout (5-6 info chunks): chunk 1: 1 fixation (230 ms); chunk 2: 1 (230 ms); chunk 3: 1 (230 ms); chunk 4: 3 (690 ms); chunk 5: 2 (460 ms); total ~1,840 ms.
Eye movements over the reduced layout (3 info chunks): chunk 1: 1-2 fixations (460 ms); chunk 2: 1 (230 ms); chunk 3: 1 (230 ms); total ~920 ms.
Don’t Get in the Way
• Enhance, not replace, real world interaction
Design for Interruptions
▪ Gradually increase engagement and attention load
▪ Respond to user engagement
Receiving SMS on Glass
Glass plays a “Bing” notification → user looks up → Glass shows the message → user taps → Glass starts a reply → user says the reply → swipe to finish.
Transparent displays are tricky
Colors are funny and inconsistent. You can only add light to a scene, not cover anything up. Motion can be disorienting. Clarity, contrast, brightness, visual field and attention are important.
Establish hierarchy with color
White is your <h1> and grey is your <h2> or <h3>.
Footer text - establishing time, attribution, or
distance - is the only place with smaller font size.
CityViewAR
• Using AR to visualize Christchurch city buildings
– 3D models of buildings, 2D images, text, panoramas
– ARView, Map view, List view
– Available on Android/iOS market
Important Note
▪ Most current wearables run Android OS
▪ eg Glass, Vuzix, Atheer, Epson, etc
▪ So many tools for prototyping on Android
mobile devices will work for wearables
▪ If you want to learn to code, learn
▪ Java, Android, Javascript/PHP
Typical Development Steps
▪ Sketching
▪ Storyboards
▪ UI Mockups
▪ Interaction Flows
▪ Video Prototypes
▪ Interactive Prototypes
▪ Final Native Application
(Each step increases fidelity and interactivity)
Glassware Flow Designer
• Features
– Design using common patterns and layouts
– Specify interactions and card flow
– Share with other designers
• Available from:
– https://developers.google.com/glass/tools-downloads/glassware-flow-designer
Screen Sharing
▪ Android Design Preview
– Tool for sharing screen content onto Glass
– https://github.com/romannurik/AndroidDesignPreview/releases
(Mac screen mirrored onto the Glass display)
Video Sketching
▪ Series of still photos in a movie format.
▪ Demonstrates the experience of the product
▪ Discover where the concept needs fleshing out.
▪ Communicate the experience and interface
▪ You can use whatever tools you like, from Flash to iMovie.
Limitations
▪ Positives
▪ Good for documenting screens
▪ Can show application flow
▪ Negatives
▪ No interactivity/transitions
▪ Can’t be used for testing
▪ Can’t deploy on wearable
▪ Can be time consuming to create
UXpin - www.uxpin.com
▪ Web based wireframing tool
▪ Mobile/Desktop applications
▪ Glass templates, run in browser
Proto.io - http://www.proto.io/
▪ Web based mobile prototyping tool
▪ Features
▪ Prototype for multiple devices
▪ Gesture input, touch events, animations
▪ Share with collaborators
▪ Test on device
Justinmind
▪ Native wireframing tool
▪ Build mobile apps without programming
▪ drag and drop, interface templates
▪ web based simulation
▪ test on mobile devices
▪ collaborative project sharing
▪ Templates for Glass, custom templates
Wireframe Limitations
▪ Can’t deploy on Glass
▪ No access to sensor data
▪ Camera, orientation sensor
▪ No multimedia playback
▪ Audio, video
▪ Simple transitions
▪ No conditional logic
Processing
▪ Programming tool for Artists/Designers
▪ http://processing.org
▪ Easy to code, Free, Open source, Java based
▪ 2D, 3D, audio/video support
▪ Processing For Android
▪ http://wiki.processing.org/w/Android
▪ Strong Android support, builds .apk file
Basic Processing Sketch
/* Notes comment */
//set up global variables
float moveX = 50;
//Initialize the Sketch
void setup (){
}
//draw every frame
void draw(){
}
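As a minimal sketch of how these pieces fit together (the window size and the circle animation are illustrative choices, not from the slides), the global moveX variable can drive a simple animation:

float moveX = 50;                     // global variable shared by setup() and draw()

void setup() {
  size(640, 360);                     // create the drawing window once
}

void draw() {
  background(0);                      // clear each frame
  ellipse(moveX, height/2, 40, 40);   // draw a circle at the current position
  moveX = moveX + 1;                  // move a little every frame
  if (moveX > width) moveX = 0;       // wrap around at the right edge
}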
Importing Libraries
▪ Can add functionality by Importing Libraries
▪ Java archives (.jar files)
▪ Include import code
import processing.opengl.*;
▪ Popular Libraries
▪ Minim - audio library, OCD - 3D camera views
▪ bluetoothDesktop - bluetooth networking
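For example, a small sketch using the Minim audio library might look like the following (the file name groove.mp3 is an assumed asset in the sketch's data folder):

import ddf.minim.*;                    // import the Minim audio library

Minim minim;
AudioPlayer player;

void setup() {
  minim = new Minim(this);             // create the library object
  player = minim.loadFile("groove.mp3"); // load an audio file from the data folder
  player.play();                       // start playback
}

void draw() {
}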
Processing and Glass
▪ One of the easiest ways to build rich
interactive wearable applications
▪ focus on interactivity, not coding
▪ Collects all sensor input
▪ camera, accelerometer, touch
▪ Can build native Android .apk files
▪ Side load onto Glass
Hello World Image
PImage img;  // Create an image variable

void setup() {
  size(640, 360);
  // load the ok glass home screen image
  img = loadImage("okGlass.jpg");
}

void draw() {
  // Display the image at its actual size at point (0,0)
  image(img, 0, 0);
}
Touch Pad Input
▪ Tap recognized as DPAD input
void keyPressed() {
  if (key == CODED) {
    if (keyCode == DPAD) {   // tap on the touch pad arrives as a D-pad key event
      // Do something ..
    }
  }
}
▪ Java code to capture rich motion events
▪ import android.view.MotionEvent;
Motion Event
//Glass Touch Events - reads from touch pad
public boolean dispatchGenericMotionEvent(MotionEvent event) {
  float x = event.getX();               // get x/y coords
  float y = event.getY();
  int action = event.getActionMasked(); // get code for action
  switch (action) {                     // let us know which action code shows up
    case MotionEvent.ACTION_MOVE:
      touchEvent = "MOVE";              // touchEvent, xpos, ypos and the scale factors are sketch globals
      xpos = myScreenWidth - x*touchPadScaleX;
      ypos = y*touchPadScaleY;
      break;
  }
  return super.dispatchGenericMotionEvent(event);
}
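A rough way to turn these raw events into tap and swipe gestures is to compare the x positions at ACTION_DOWN and ACTION_UP; the threshold and variable names below are illustrative assumptions, not part of the Glass SDK:

float downX;                  // x position where the finger landed
String gesture = "";

public boolean dispatchGenericMotionEvent(MotionEvent event) {
  switch (event.getActionMasked()) {
    case MotionEvent.ACTION_DOWN:
      downX = event.getX();              // remember where the touch started
      break;
    case MotionEvent.ACTION_UP:
      float dx = event.getX() - downX;   // horizontal travel on the touch pad
      if (dx > 100) gesture = "SWIPE FORWARD";
      else if (dx < -100) gesture = "SWIPE BACK";
      else gesture = "TAP";              // little movement = tap
      break;
  }
  return super.dispatchGenericMotionEvent(event);
}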
Sensors
▪ Ketai Library for Processing
▪ https://code.google.com/p/ketai/
▪ Support all phone sensors
▪ GPS, Compass, Light, Camera, etc
▪ Include Ketai Library
▪ import ketai.sensors.*;
▪ KetaiSensor sensor;
Using Sensors
▪ Setup in Setup( ) function
▪ sensor = new KetaiSensor(this);
▪ sensor.start();
▪ sensor.list();
▪ Event based sensor reading
void onAccelerometerEvent(…){
accelerometer.set(x, y, z);
}
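Put together, a minimal Ketai accelerometer sketch (the on-screen text layout is an illustrative choice) might look like this:

import ketai.sensors.*;       // Ketai sensor library

KetaiSensor sensor;
float ax, ay, az;             // latest accelerometer values

void setup() {
  size(640, 360);
  sensor = new KetaiSensor(this);
  sensor.start();             // begin listening to sensor events
  textSize(36);
}

void draw() {
  background(0);
  text("Accel: " + nf(ax, 1, 2) + ", " + nf(ay, 1, 2) + ", " + nf(az, 1, 2), 20, height/2);
}

void onAccelerometerEvent(float x, float y, float z) {
  ax = x; ay = y; az = z;     // store the most recent reading
}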
Using the Camera
▪ Import camera library
▪ import ketai.camera.*;
▪ KetaiCamera cam;
▪ Setup in Setup( ) function
cam = new KetaiCamera(this,640,480,15);
▪ Draw camera image
void draw() {  //draw the camera image
  image(cam, width/2, height/2);
}
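A minimal end-to-end camera sketch built from these pieces (the preview size and tap-to-start behaviour are assumptions, not Glass specifics) could be:

import ketai.camera.*;        // Ketai camera library

KetaiCamera cam;

void setup() {
  size(640, 480);
  cam = new KetaiCamera(this, 640, 480, 15);  // width, height, frame rate
}

void draw() {
  image(cam, width/2, height/2);  // draw the latest camera frame
}

void onCameraPreviewEvent() {
  cam.read();                 // read each new preview frame as it arrives
}

void mousePressed() {
  cam.start();                // tapping the touch pad starts the camera
}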
Native Coding
▪ For best performance need native coding
▪ Low level algorithms etc
▪ Most current wearables based on Android OS
▪ Need Java/Android skills
▪ Many devices have custom API/SDK
▪ Vuzix M-100: Vuzix SDK
▪ Glass: Mirror API, Glass Developer Kit (GDK)
Glassware Development
▪ Mirror API
▪ Server programming, online/web application
▪ Static cards / timeline management
▪ GDK
▪ Android programming, Java (+ C/C++)
▪ Live cards
▪ See: https://developers.google.com/glass/
Mirror API
▪ REST API
▪ Java servlet, PHP, Go, Python, Ruby, .NET
▪ Timeline based apps
▪ Static cards
- Text, HTML, media attachment (image & video)
▪ Manage timeline
- Subscribe to timeline notifications, contacts
- Location based services
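As a sketch of what a Mirror API call looks like at the HTTP level (the access token string is a placeholder you would obtain via OAuth 2.0, and error handling is omitted), the following Java snippet inserts a simple text card into the user's timeline:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class InsertCard {
  public static void main(String[] args) throws Exception {
    String accessToken = "YOUR_OAUTH2_ACCESS_TOKEN";   // placeholder, obtained via OAuth 2.0
    String body = "{\"text\": \"Hello from the Glass Class\"}";  // a simple static card

    URL url = new URL("https://www.googleapis.com/mirror/v1/timeline");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Authorization", "Bearer " + accessToken);
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);

    OutputStream out = conn.getOutputStream();
    out.write(body.getBytes("UTF-8"));   // send the JSON card
    out.close();

    System.out.println("Response code: " + conn.getResponseCode());  // 200/201 on success
  }
}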
GDK
▪ Glass Development Kit
▪ Android 4.0.3 ICS + Glass specific APIs
▪ Use standard Android Development Tools
▪ GDK add-on features
▪ Timeline and cards
▪ Menu and UI
▪ Touch pad and gesture
▪ Media (sound, camera and voice input)
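A minimal GDK-style card, shown here as a sketch (CardBuilder is part of the GDK on XE16 and later; the text content is illustrative):

import android.app.Activity;
import android.os.Bundle;
import com.google.android.glass.widget.CardBuilder;

public class HelloGlassActivity extends Activity {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Build a standard Glass card and use it as the activity's view
    setContentView(new CardBuilder(this, CardBuilder.Layout.TEXT)
        .setText("Hello from the GDK")
        .setFootnote("The Glass Class")
        .getView());
  }
}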
Glass Summary
▪ Use Mirror API if you need ...
▪ Use GDK if you need ...
▪ Or use both
Raspberry Pi Glasses
▪ Modify video glasses, connect to Raspberry Pi
▪ $200 - $300 in parts, simple assembly
▪ https://learn.adafruit.com/diy-wearable-pi-near-eye-kopin-video-glasses
Physical Input Devices
▪ Can we develop unobtrusive input devices ?
▪ Reduce need for speech, touch pad input
▪ Socially more acceptable
▪ Examples
▪ Ring, pendant, bracelet, gloves, etc
WearScript Features
• Community of Developers
• Easy development of Glass Applications
– GDK card format
– Support for all sensor input
• Support for advanced features
– Augmented Reality
– Eye tracking
– Arduino input
Summary
▪ Prototyping for wearables is similar to mobiles
▪ Tools for UI design, storyboarding, wireframing
▪ Android tools to create interactive prototypes
▪ App Inventor, Processing, etc
▪ Arduino can be used for hardware prototypes
▪ Once prototyped, native apps can be built
Challenges for the Future (2001)
▪ Privacy
▪ Power use
▪ Networking
▪ Collaboration
▪ Heat dissipation
▪ Interface design
▪ Intellectual tools
▪ Augmented Reality systems
Starner, T. (2001). The challenges of wearable
computing: Part 1. IEEE Micro,21(4), 44-52.
Starner, T. (2001). The challenges of wearable
computing: Part 2. IEEE Micro,21(4), 54-67.
Gesture Interaction With Glass
▪ 3 Gear Systems
▪ Hand tracking
▪ Hand data sent to glass
▪ Wifi networking
▪ Hand joint position
▪ AR application rendering
▪ Vuforia tracking
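One plausible way to prototype the "hand data sent to Glass over WiFi" step in Processing is with the oscP5 library; the /hand address pattern and message layout below are illustrative assumptions, not 3 Gear's actual protocol:

import oscP5.*;               // OSC networking library for Processing

OscP5 osc;
float handX, handY, handZ;    // latest received joint position

void setup() {
  size(640, 360);
  osc = new OscP5(this, 12000);   // listen for OSC packets on port 12000
}

void draw() {
  background(0);
  // map the received position to the screen for a quick visual check
  ellipse(map(handX, -0.5, 0.5, 0, width), map(handY, -0.5, 0.5, 0, height), 20, 20);
}

void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/hand")) {   // assumed address pattern
    handX = msg.get(0).floatValue();
    handY = msg.get(1).floatValue();
    handZ = msg.get(2).floatValue();
  }
}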
Capturing Behaviours
▪ 3 Gear Systems
▪ Kinect/Primesense Sensor
▪ Two hand tracking
▪ http://www.threegear.com
Performance
▪ Full 3d hand model input
▪ 10 - 15 fps tracking, 1 cm fingertip resolution
Modeling Cognitive Processes
• Model cognitive processes
– Based on cognitive psychology
• Use model to:
– Identify opportunity for wearable
– Predict user’s cognitive load
Typical Cognitive Model
1. Functional Modularity: cognitive system divided
into functionally separate systems
2. Parallel Module Operation: cognitive modules
operate in parallel, independent of each other
3. Limited Capacity: cognitive modules are limited in
capacity with respect to time or content
4. Serial Central Operation: central coordination of
modules (eg monitoring) is serial
Cognitive Interference
▪ Structural interference
▪ Two or more tasks compete for limited
resources of a peripheral system
- eg two cognitive processes needing vision
▪ Capacity interference
▪ Total available central processing
overwhelmed by multiple concurrent tasks
- eg trying to add and count at same time
Application of Cognitive Model
Cognitive load varies with context: busy street > escalator > café > laboratory.
But what if a wearable made wayfinding, path planning, estimating time to target, and collision avoidance easier?
Conclusions
• Wearable computing represents a fourth
generation of computing devices
• Google Glass is the first consumer wearable
– Lightweight, usable, etc
• A range of wearables will appear in 2014
– Ecosystem of devices
• Significant research opportunities exist
– User interaction, displays, social impact