The UX of Predictive Behavior for the IoT (2016: O'Reilly Designing for the IOT)

This presentation identifies challenges to the user experience design of smart devices (such as the Nest Thermostat, the Amazon Echo, the Edyn water monitor, etc.) that use machine learning to anticipate the needs of people and environments and adapt in response, and points to some potential design patterns to help address those challenges. The Internet of Things promises that by analyzing data from many sensors over time our experience of the world becomes better and more efficient. Our environment can predict our behavior, anticipate problems and needs, and maximize the chances of a desirable end result.
Though this notion of effortless automation is seductive (espresso machines that start just as you’re thinking it’s a good time for coffee; office lights that dim when it’s sunny and power is cheap), we don’t have good examples for designing user experiences of predictive systems. As a result, today it’s much easier to create systems that are confusing, unpredictable and uncontrollable.

Published in: Design

  1. 1. Good afternoon and thanks for having me here. In this talk I want to look at the design challenges of systems that anticipate users’ needs and then act on them. That means it sits at the intersection of the internet of things, user experience design and machine learning, and although people have dealt with one of those disciplines before, I don’t think they’ve ever been combined in quite the ways they are now, or with the current enthusiasm. The talk is divided into several parts: it starts with an overview of how I think Internet of Things devices are primarily components of services, rather than being self-contained experiences, how predictive behavior enables key components of those services, and then I finish by trying to identify user experience issues around predictive behavior and suggestions for patterns to ameliorate those issues. A couple of caveats: - My current work in this field focuses almost exclusively on the consumer internet of things, so I see most things through that lens. Predictive AI has a long history in industrial applications; it’s in the consumer space that we really see the UX issues. - I want to point out that few if any of the issues I raise are new. Though the terms “internet of things” and “machine learning” are hot right now, the ideas have been discussed in research circles for decades. Search for “ubiquitous computing,” “ambient intelligence,” and “pervasive computing” and you’ll see a lot of great thought in the space. If you’re really ambitious, you can read the Artificial Intelligence and Cybernetics works of the 50s and 60s and you’ll be surprised by the prescience of the people working in this space when the entire world’s compute power was about as much as my key fob. - There are a lot of ideas here, and I will almost certainly under-explain something. For that I apologize in advance. My goal here is to give you a general sense of how these pieces connect, rather than an in-depth explanation of any one of the pieces. - Finally, most of my slides don’t have words on them, so I’ll make the complete deck with a transcript available as soon as I’m done. 0
  2. 2. Let me begin by telling you a bit about my background. I’m a user experience designer. I was one of the first professional Web designers. This is the navigation for a hot sauce shopping site I designed in the spring of 1994. 1
  3. 3. I’ve also worked on the user experience design of a lot of consumer electronics products from companies you’ve probably heard of. 2
  4. 4. I wrote a couple of books based on my experience as a designer. One is a cookbook of user research methods, and the second describes what I think are some of the core concerns when designing networked computational devices. I’m also married to one of the authors of this book, so thinking about the impact of the design of connected devices on people is kind of a family business. 3
  5. 5. I also started a couple of companies. The first, Adaptive Path, was primarily focused on the web, and with the second one, ThingM, I got deep into developing hardware. 4
  6. 6. Today I work for PARC, the famous research lab that invented the personal computer, object-oriented software, the tablet computer, and the laser printer, as a principal in its Innovation Services group. We help companies reduce the risk of adopting novel technologies using a mix of social research, design and business strategy. 5
  7. 7. I want to start by focusing on what I feel is a key aspect of consumer IoT that’s often missed when people focus on the hardware of the IoT, which is that consumer IoT products have a very different business model than traditional consumer electronics. 6
  8. 8. Historically, a company made an electronic product, say a turntable, they found people to sell it for them, they advertised it and people bought it. That was traditionally the end of the company’s relationship with the customer until that person bought another thing, and all of the value of the relationship was in the device. With the IoT, the sale of the device is just the beginning of the relationship, and the physical thing holds almost no value for either the customer or the manufacturer. 7
  9. 9. When you have a multitude of connected devices and apps, value shifts to services, and the devices, software applications and websites used to access them—their avatars—become secondary. A camera becomes a really good appliance for taking photos for Instagram, while a TV becomes a nice Instagram display that you don’t have to log into every time, and a phone becomes a convenient way to check your friends’ pictures on the road. Hardware, physical things, become simultaneously more specialized and devalued as users see “through” each device to the service it represents. The avatars exist to get better value out of the service. 8
  10. 10. Amazon really gets this. Here’s a telling older ad from Amazon for the Kindle. It’s saying “Look, use whatever device you want. We don’t care, as long as you stay loyal to our service. You can buy our specialized devices, but you don’t have to.” 9
  11. 11. When Fire was released 5 years ago, Jeff Bezos even called it a service. 10
  12. 12. Most large-scale IoT products are service avatars. They use specialized sensors and actuators to support a service, but have little value—or don’t work at all—without the supporting service. SmartThings, which was acquired by Samsung, clearly states its service offering right up front on its site. The first thing they say about their product line is not what the functionality is, but what effect their service will achieve for their customers. Their hardware products’ functionality, how they will technically satisfy the service promise, is almost an afterthought. 11
  13. 13. Compare that to X10, their spiritual predecessor that’s been in the business for 30 years. All that X10 tells you is what the devices are, not what the service will accomplish for you. I don’t even know if there IS a service. Why should I care that they have “modules”? I shouldn’t, and I don’t. 12
  14. 14. Simply connecting existing stuff to the internet does not produce customer value… 13
  15. 15. Simple connectivity helps when you’re trying to maximize the efficiency of a fixed process, but that’s not a problem that most people have. We’ve been able to simply connect various devices to a computer since a Tandy Color Computer could turn lights off and on over X10 in 1983. Today you can buy a module from Particle, Electric Imp or a dozen other companies and integrate it in a month to connect any arbitrary device to the Internet. The problem is that that wasn’t very useful then, and it’s not very useful now. If you replace the Tandy with an iPhone and the lamp with a washing machine… 14
  16. 16. …or an egg carton, you still have the same problem, and it’s a user experience problem. The UX problem is that end users have to connect all the dots to coordinate between a wide variety of devices, and to interpret the meaning of all of these sensors to create personal value. For many simply connected products there is so little efficiency to be had relative to the cognitive load that it’s just not worth it. What’s worse, the extra cognitive load is exactly opposite to what the product promises, and customers feel intensely disappointed, perhaps even betrayed, when they realize how little they get out of such a product. That makes most such products effectively WORSE than useless. That promise gap is what distinguishes a gadget from a tool, why this egg carton is funny, and why Quirky, who made it, filed for bankruptcy after burning through hundreds of millions of dollars. 15
  17. 17. How do you make money in this space of dematerialized devices and cloud services? 16
  18. 18. One approach is to change from an ownership model to a subscription model. Now the device gives access to a desired end result, without the burdens of ownership or maintenance. The IoT technology is what provides an efficient way to track and charge for assets. Car sharing, bike sharing, Uber and AirBNB follow this model. You don’t use it every day, so why own it? High-end clothing is going this way. Do you really need to own that Prada handbag so you can use it twice a year? 17
  19. 19. Hewlett Packard’s printer division is really an ink company that also makes ink consumption devices. Similarly, Amazon is trying to corner the market on all consumables, whether they’re digital… 18
  20. 20. …or physical. Their Dash replenishment service can turn any device with consumables… 19
  21. 21. …into an automatic Amazon reordering machine. The Dash button is a networked computer whose only purpose is to be an avatar for products where it’s not yet economically feasible to include connected electronics, like a macaroni and cheese box. That’s going to change as the electronics get cheaper. Moreover, the button is a sensor for people’s intent, which then dovetails into the real business model, which is not just shipping you mints when you’re too lazy to leave the house…but identifying your buying patterns, your cravings, your impulses, so that they can predict them and ship you mints not when you ask for them, but when you want them. 20
  22. 22. I think the real value connected services offer is their ability to make sense of the world on our behalf, to reduce cognitive load by enabling people to interact with devices at a higher level than simple telemetry, at the level of intentions and goals, rather than data and control. Humans are not built to collect and make sense of huge amounts of data across many devices, or to articulate our needs as systems of mutually interdependent components. Computers are great at it. 21
  23. 23. The interesting thing is that this is not just theory. Prediction and response is at the heart of the value proposition of many of the most compelling IoT services, starting with the Nest. The Nest says that it knows you. How does it know you? It predicts what you’re going to want based on your past behavior. 22
  24. 24. Amazon’s Echo speaker says it’s continually learning. How is that? Predictive machine learning based on your actions and your words. 23
  25. 25. The Birdi smart smoke alarm says it will learn over time, which is again the same thing. 24
  26. 26. Jaguar, learning…AND intelligent. 25
  27. 27. The Edyn plant watering system adapts to every change. What is that adaptation? Predictive machine learning. 26
  28. 28. Canary, a home security service. 27
  29. 29. Cocoon, another home security system, knows. How does it know? Machine learning. 28
  30. 30. Here’s foobot, an air quality service. [I also like how one of its implicit service promises is to identify when your kids are smoking pot.] 29
  31. 31. Silk’s Sense adapts. 30
  32. 32. Mistbox sprays water into your air conditioner to reduce your energy bill. You’d think that’s a pretty simple process, but no, it’s always learning. 31
  33. 33. A number of companies are making chips that make machine learning much cheaper and more power-efficient, which means that it’s going to be very easy to install it in every device, from street lights to medical equipment to toys. It’s not just likely, it’s inevitable. Here’s one that was announced a couple of weeks ago. 32
  34. 34. 33
  35. 35. They do this through processes that have many names, but I’ll lump them all under Machine Learning, which is a big part of what used to be called Artificial Intelligence. Many of the core ideas here go back to the 1950s and it’s the basis of every email spam filter, so if you’ve had your spam automatically filtered, you’ve experienced the value of machine learning. A big part of Machine Learning is pattern recognition. We humans evolved very sophisticated faculties to rapidly identify visual images in all kinds of difficult conditions. You look at a picture of an orange on a red plate and you can tell instantly that it’s not a sunset, but until recently that was really, really hard for a computer. Because of a combination of Moore’s Law and some breakthroughs, computers have gotten much better at pattern recognition in the last couple of years. For a computer, recognizing something starts with a process where some basic attributes of an image are extracted, such as the shape of boundaries between clusters of pixels, or the dominant color of a patch of an image. These are called features in machine learning. By examining lots and lots of examples of features in an image, a machine learning system builds a statistical model of what that cluster represents. Basic forms of this kind of image recognition have been used industrially for decades. Lego has a completely automated factory that injection molds a million Lego bricks an hour, examines every single piece, automatically sorts, bags and boxes them, all using computer vision. That’s relatively old. Images from: Region-based Convolutional Networks for Accurate Object Detection and Semantic Segmentation, R. Girshick, J. Donahue, T. Darrell, J. Malik, IEEE Transactions on Pattern Analysis and Machine Intelligence; Real-Time Image and Video Processing: From Research to Reality by Kehtarnavaz and Gemadia 34
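To make the spam-filter example above concrete, here is a minimal sketch of feature extraction plus a statistical classifier. It assumes scikit-learn (my choice, not something named in the talk), and the example messages and labels are invented toy data:

    # Toy spam filter: CountVectorizer extracts features (word counts); the
    # classifier builds a statistical model of which features predict which label.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "win a free prize now", "cheap meds online",             # spam examples
        "lunch at noon tomorrow?", "notes from today's meeting", # legitimate mail
    ]
    labels = ["spam", "spam", "ham", "ham"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)   # a real filter would train on thousands of examples

    print(model.predict(["free prize meds"]))         # likely "spam"
    print(model.predict(["see you at the meeting"]))  # likely "ham"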
  36. 36. What’s new is a class of systems that understand the content of images. They don’t just look at features, but clusters of features, and clusters of clusters of features, and they can now identify an orange from the setting sun, or a person from an airplane, or a polar bear from a dalmatian. This is why Facebook asks you to say who is in an image. It’s not just for you, it’s for their face recognizer. Now here’s the interesting part: we’re built to identify patterns in visual phenomena, but we’re pretty bad at identifying them in other kinds of situations. For example, if you’ve ever tried to understand someone’s food sensitivities, it’s really hard to extract what that person is reacting to, even if you keep very careful track of what they’ve eaten. We’re just not built for it. It was never evolutionarily sufficiently important, so we didn’t evolve an organ for it. Computers, on the other hand, don’t care, and now that we’ve found really good ways to find patterns in visual images, these same techniques can find patterns in anything. Instead of a matrix of pixels, what if you had a matrix of medical prescriptions, with each row as the history of one person’s prescriptions from the first time that person went to the doctor for a problem, through when they were prescribed certain things, to when they got better, or they didn’t. The same kind of system could learn the typical pattern for prescribing, say, a wheelchair. It would essentially see the general shape of the sequence for the prescription of a chair over time and across many people. Then if you saw a wheelchair being prescribed that was outside of the typical pattern, you could identify it. That’s called anomaly detection. That’s in fact exactly how we built a system to identify Medicare fraud. People are terrible at that stuff, but computers are great. 35
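Here is a hedged, toy illustration of that anomaly-detection idea, again assuming scikit-learn. The two features per case and all of the numbers are made up purely for illustration; this is not the Medicare system described in the talk:

    # Toy anomaly detection: flag prescription histories that don't fit the
    # pattern the model learned from typical cases.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # 200 "typical" cases: a wheelchair prescribed after weeks of visits...
    typical = np.column_stack([
        rng.normal(45, 10, 200),   # ~45 days from first visit to prescription
        rng.normal(5, 1.5, 200),   # ~5 prior doctor visits
    ])
    # ...plus two suspicious ones: prescribed almost immediately, with no history.
    suspicious = np.array([[1.0, 0.0], [2.0, 1.0]])
    cases = np.vstack([typical, suspicious])

    detector = IsolationForest(contamination=0.02, random_state=0).fit(cases)
    flags = detector.predict(cases)   # +1 = fits the learned pattern, -1 = anomaly
    print(np.where(flags == -1)[0])   # the injected cases (indices 200, 201) should be among those flagged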
  37. 37. When one of the dimensions is time and another is the outcome of a series of actions, you can make a pattern recognizer that associates a sequence of actions with a set of statistical probabilities for possible outcomes based on data collected across a wide variety of similar situations. In other words, because people and machines behave in fairly consistent ways, these machine learning systems can increasingly predict the future and attempt to adapt the current situation to create a more desirable outcome. 36
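As a rough sketch of that sequence-to-outcome idea, here is a toy model that learns the probability that a morning’s sequence of actions ends with coffee being made. scikit-learn is assumed, and every action name and “morning” here is invented for illustration:

    # Toy sequence-to-outcome predictor: action sequences in, outcome probabilities out.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    mornings = [
        "wake shower kitchen", "wake shower radio", "wake shower kitchen",
        "wake kitchen",                                   # these ended with coffee
        "wake leave", "wake phone leave", "wake leave",   # these didn't
    ]
    outcomes = [1, 1, 1, 1, 0, 0, 0]   # 1 = coffee was made, 0 = it wasn't

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(mornings, outcomes)

    # Columns follow model.classes_, here [no coffee, coffee].
    print(model.predict_proba(["wake shower"])[0])  # should lean toward coffee
    print(model.predict_proba(["wake leave"])[0])   # should lean toward no coffee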
  38. 38. 37
  39. 39. As interesting as these issues are, I think that, more importantly, what they represent is that we’re entering into a new relationship with our device ecosystem, a sea change in our relationship to the built world. 38
  40. 40. Think of a sewing machine. It’s very complex, but it still only acts in response to us. 39
  41. 41. Computers acting autonomously erode this simple tool/user relationship. Predictive IoT is more than just recommending a new song, it’s acting on your behalf on the basis of its assumption about what you want, and what’s best for you. At the dawn of computing in the late 1940s cyberneticists like Norbert Wiener philosophized about the increasingly complex relationship between people and computers, and how it was fundamentally different than the way we interact with other kinds of machines. Developers working in supervisory control of manufacturing machines and robotics have had to deal with these questions pragmatically for about 30 years, but thanks to the Internet of Things, this is now a problem that everyone will have to grapple with going forward. Here’s a diagram by the greats Tom Sheridan and Bill Verplank from 1978, in which they illustrate four ways that semi-autonomous computers and humans can work together to solve a problem. 40
  42. 42. By 2000 Sheridan expanded these ideas with Parasuraman and Wickens to define a spectrum of responsibility between people and computers. It ranges from humans doing all the work (this is you writing an essay) to computers doing all the work completely autonomously (this is your car’s fuel injection controller). Of course the goal is to get a system to level 9 or 10. That’s the maximum reduction in cognitive load. However, for a system to qualify for that, it has to be very stable, its effects need to be highly predictable and, equally importantly, its role needs to be adequately embedded in society. It needs to be OK for a computer to take on that level of responsibility. At the airport we trust the monorail computers to work without human intervention, but we don’t trust the plane autopilot to do that, even though, as I understand it, planes can basically fly themselves these days. Predictive IoT devices generally fall between 5 and 7 on this scale right now. The problem is that this is the exact range where you’re maximizing someone’s cognitive load, but not necessarily doing all the work for them, so the result of the automation had better be worth it. This fundamentally undermines what we expect from our tools, and when that tool is trying to anticipate what we’re trying to do, it fundamentally changes our working relationship with it. 41
  43. 43. The ideal scenario these things paint is pretty seductive. Imagine a world of espresso machines that start brewing as you’re thinking it’s a good time for coffee; office lights that dim when it’s sunny to save energy, and mac and cheese that never runs out. The problem is that although the value proposition is of a better user experience, it’s unspecific in the details. Previous machine learning systems were used in areas such as predictive maintenance and finance. They were made by and for specialists. Now that these systems are for general consumers, we have some significant questions. How exactly will our experience of the world, our ability to use all the collected data, become more efficient and more pleasurable? We’re still early in our understanding of predictive devices, and in the discipline of what Aaron Shapiro of Huge has dubbed Anticipatory Design, so right now we have more problems than solutions. I want to start by articulating the issues I’ve observed in our work. 42
  44. 44. We’ve never had mechanical things that make significant decisions on their own. As devices adapt their behavior, how will they communicate that they’re doing so? Do we stick a sign on them that says “adapting”, like the light on a video camera says “recording”? Should my chair vibrate when adjusting to my posture? How will users, or just passers-by, know which things adapt? I could end up sitting uncomfortably for a long time, waiting for my chair to change, before realizing it doesn’t adapt on its own. How should smart devices set the expectation that they may behave differently in what appear to be identical circumstances? How do we know HOW intelligent these devices are? People already often project more smarts on devices than those devices actually have, so a couple of accurate predictions may imply a much better model than actually exists. How do we know we’re not just homesteading the uncanny valley here? 43
  45. 45. The irony in predictive systems is that they’re pretty unpredictable, at least at first. When machine learning systems are new, they’re often inaccurate and unpredictable, which is not what we expect from our digital devices. 60%-70% accuracy is typical for a first pass, but even 90% accuracy isn’t enough for a predictive system to feel right, since if it’s making decisions all the time, it’s going to be making mistakes all the time, too. It’s fine if your house is a couple of degrees cooler than you’d like, but what if your wheelchair refuses to go to a drinking fountain next to a door because it’s been trained on doors and it can’t tell that’s not what you mean in this one instance? For all the times a system gets it right, it’s on the mistakes that we judge it, and a couple of such instances can shatter people’s confidence. Anxiety is a kind of cognitive load, and a little doubt about whether a system is going to do the right thing is enough to turn a UX that’s right most of the time into one that’s more trouble than it’s worth. When that happens, you’ve more than likely lost your customer. Unfortunately, sooner than we think, such inaccurate predictive behavior isn’t going to be an isolated incident. Soon we’re going to have 100 connected devices simultaneously acting on predictions about us. If each is 99% accurate, then on average one of them is wrong at any given moment. So the problem is: How can you design a user experience to make a device still functional, still valuable, still fun, even when it’s spewing junk behavior? How can you design for uncertainty? Photo CC BY 2.0 photo 2011 Pop Culture Geek taken by Doug Kline: https://www.flickr.com/photos/popculturegeek/6300931073/ 44
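A quick check of that arithmetic, under the stated assumption of 100 independent devices that are each right 99% of the time:

    n, accuracy = 100, 0.99
    expected_wrong = n * (1 - accuracy)        # average number of devices wrong at any moment
    p_at_least_one_wrong = 1 - accuracy ** n   # chance that at least one is wrong right now
    print(round(expected_wrong, 2))            # 1.0
    print(round(p_at_least_one_wrong, 2))      # 0.63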
  46. 46. The last issue comes as a result of the previous two: control. How can we maintain some level of control over these devices, when their behavior is by definition statistical and unpredictable? On the one hand you can mangle your device’s predictive behavior by giving it too much data. When I visited Nest once they told me that none of the Nests in their office worked well because they’re constantly fiddling with them. In machine learning this is called overtraining. On the other hand, if I have no direct way to control it other than through my own behavior, how do I adjust it? Amazon and Netflix’s recommendation systems, which are a kind of predictive analytics system, give you some context about why they recommended something, but what do I do when my only interface is a garden hose? 45
  47. 47. Here are 7 patterns I’ve observed in developing predictive systems that I think map to the IoT. For most of these I’m going to be using examples from Nest and recommender systems like Amazon’s, Google’s and Netflix’s. Recommender systems have been around for more than a decade and they’ve been extensively studied. The move into predictive behavior is built on a combination of recommender systems and supervisory control, so I recommend not reinventing the wheel, but learning from those disciplines. 46
  48. 48. To build an effective anticipatory machine learning system, you need to know what to anticipate, and to do that you need to make a model of what people need, value and desire. Simply automating existing activities without understanding why people do them, what their goals are in doing them, misses the point of creating value. Predictability is very valuable, even when the predictability is in something that’s flawed. When we include anticipatory behavior in an experience, we’re essentially trading away an incredibly valuable commodity so that trade had better be worth it. To know whether it’s worth it, we need to have a model of what people value which we’re replacing or augmenting. 47
  49. 49. What goes into that mental model? There are lots of ways to structure how you represent people’s view of the world. It’s a significant focus of cognitive science, and I can’t do it justice, but here’s a nice list I grabbed from the intelligent agent literature. As a designer, many of these boil down to decisions. What decision will an anticipatory system help someone make? What decisions will it make on that person’s behalf? What are the parameters of that decision? For example, if I had a real-time blood glucose monitor and insulin pump that adjusted my blood glucose in real time, which of my decisions would it make for me? Which decisions would it tell me how to make? Which decisions would it give me advice about? Without a clearly articulated story about what decisions a system helps someone make, I believe you don’t have a clear story about what value it brings them. How do you figure out what those decisions are? You talk to people. User research. Ethnography. Leaving the office. 48
  50. 50. One of the great cliches in UX design is the search for delight, such as the seasonally changing backgrounds in Google Calendar. My definition for delight is that it’s functionality that subverts people’s near-term expectations, but supports their long-term needs and desires. This is particularly important in designing predictive systems, because if you subvert expectations WITHOUT supporting their needs, you get cognitive dissonance and you have violated their mental model. 49
  51. 51. Because machine learning means your tools adapt to you and learn from you, adaptive tools are more like apprentices than implements, and our use of them is more like a conversation than linear tool use. In fact, I heard one of Nest’s UX designers say that he considered users’ evolving relationships to the Nest as a conversation. This is especially relevant in the era of chatbots and voice UI. If you listen to a human conversation, it’s almost never a linear, straightforward, well-structured process. We stop, we rephrase, we ask for corrections, we talk past each other, we interrupt. More likely than not, this is how a predictive machine learning system will interact with people, from whom it will want guidance, confirmation, and who will ask it for recommendations or changes to its behavior. Ethnomethodologists and conversation analysts have been modeling how people talk to each other for about 40 years, so I’m going to borrow some of their concepts. • Sequence organization is about organizing action in time. What happens first, what happens next? How do the two parties expand on ambiguity? For example, if a home security system decides you’re not home, it can tell you “I see you’re driving away from home. I’m going to turn all the alarms on.” You can then say “All of them except for the back yard.” • Turn-taking is critical. We don’t just simply take turns when talking, we continuously provide feedback and correct. We have expectations for whose turn is next and what they’re supposed to do. “Ok, chair, I’m sitting here, now it’s your turn. Confirm you know I’m here. Warn me if you’re going to adjust.” • Repair is backtracking, clarifying, continuing after an interruption, etc. What happens when the expected sequence, either from the perspective of the person or the service, is broken and needs to be reconstructed? 50
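As a purely hypothetical sketch, not any real product’s API, here is how those three conversation-analysis moves might map onto the home-security exchange above; all of the names, zones and phrasings are invented:

    ZONES = {"front door", "garage", "back yard"}

    def propose_action(context):
        # Sequence organization: the system opens the sequence by stating what it
        # inferred and what it intends to do, then yields the turn to the person.
        if context == "driving away from home":
            return {"say": "I see you're driving away. I'm going to arm all the alarms.",
                    "plan": set(ZONES)}
        return None

    def take_user_turn(plan, reply):
        # Turn-taking and repair: the reply can narrow, confirm, or cancel the
        # proposed plan before the system acts on it.
        reply = reply.lower()
        if "except" in reply:                  # repair: narrow the proposed plan
            return {zone for zone in plan if zone not in reply}
        if reply.startswith(("ok", "yes")):    # confirmation
            return plan
        if reply.startswith("no"):             # rejection
            return set()
        return None                            # unrecognized: ask again

    proposal = propose_action("driving away from home")
    print(proposal["say"])
    print(take_user_turn(proposal["plan"], "All of them except for the back yard"))
    # expected: arms the front door and garage, leaves the back yard alone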
  52. 52. In addition to teaching apprentices about our needs, we also learn from apprentices what their capabilities are and why they made certain decisions, rather than others, when doing the things we taught them to do. This is both a part of how they learn about us and how we learn to work with them effectively. The BMW iDrive system was notorious for its UI, which didn’t tell you what it could or couldn’t do, and how to do it. You had a knob and that was basically it. How do I interrogate an adaptive system to understand what it can do, and ask it to explain what it just did? How do you know what Siri or Google Now have learned to do? Well, you use the app. But what about services for which you don’t have a display? Chatbots today are essentially command line interfaces. They know specific words and sequences, but what if those commands change over time? What if the device learns new things over time? 51
  53. 53. The next pattern is that you need a user story for every stage of the machine learning and prediction process, even for steps that seem invisible. How will you incentivize people to add their behavior data to the system at all? Why should I upload my car’s dashcam video to your traffic prediction system EVERY DAY? How will you communicate that you’re extracting features? I like the way that Google speech to text shows you partial phrases as you’re speaking into it, and how it corrects itself. That small bit of feedback tells people it’s pulling information out and it trains users how to meet the algorithm halfway. How do machine-generated classifications compare to people’s organization of the same phenomena? How is a context model presented to end users and developers? How will you get people to train it and tell you when the model is wrong? Does the final behavior actually match their expectation? 52
  54. 54. Since predictive systems are neither consistent nor clear about the reasons for their behavior, this can be really confusing. The same thing can behave differently in what appear to be similar circumstances. If we undermine people’s confidence in a system by violating their expectations, they’re likely to be disappointed and stop using it. When we’re dealing with a human or an animal, unpredictable behaviors are expected and tolerated, but that’s not the case with computers. What a predictive UX needs to do is set people’s expectations appropriately. It needs to explain the nature of the device, to describe what it is trying to predict, that it’s trying to adapt, that it’s going to sometimes be wrong, to explain how it’s learning, and how long it’ll take before it crosses over from creating more trouble than benefit to the reverse. Recommender systems, such as Google Now, describe why a certain kind of content was selected, and that sets the expectation that in the future the system will recommend other things based on other kinds of content you’ve requested. Nest’s FAQ kind of buries the information, but it does explain that you shouldn’t expect your thermostat to make a model of when you’re home or not until it’s been operating for a week or so. 53
  55. 55. About ten years ago Timo Arnall and his students tried to address a similar set of questions around interactions with RFID-enabled devices by creating an iconography system that communicated to potential users that these devices had functionality that was invisible from the outside. Perhaps we need something like this for behavior created by predictive systems? 54
  56. 56. Predictive behavior is all about time, about sequences of activities. Many predictive UX issues around expectations and uncertainty have time as their basis: what were you expecting to happen, and why? If it didn’t happen, why? If something else happened, or it happened at an unexpected time, why did that happen? Knowing that a device has acted on your behalf, and that it’s going to act—and HOW it’s going to act—in the future is important to giving people a model of how it’s working, setting their expectations, reducing the uncertainty. Nest, for example, has a calendar of its expected behavior, and it shows that it’s acting on your behalf to change the temperature, and when you can expect that temperature will be reached. 55
  57. 57. You have to give people a clear way to teach the system and tell it when its model is wrong. Statistical systems, by definition, don’t have simple rules that can be changed. There aren’t obvious handles to turn or dials to adjust, because everything is probabilistic. If the model is made from data collected by several devices, which device should I interact with to get it to change its behavior? Google Now asks whether I want more information from a site I visited, and Amazon shows an explanation of why it gave me a suggestion. Mapping this to the consumer IoT means way more explanation than we’re currently getting, which is either that a thing has happened, or it hasn’t. 56
  58. 58. Finally, don’t automate. These systems shouldn’t try to replace people, but to support them, to augment and extend their capabilities, to help them be better at what they want to do, not to replace them. For example, Ember, from Meshfire, is a machine learning assistant for social media management. It doesn’t try to replace the social media manager. Instead it manages the media manager’s todo list. It adds things that it thinks are going to be interesting, deletes old things, and reprioritizes the manager’s list based on what it thinks is important. I think this is a good model for how such systems can add value to a person’s experience without creating a situation where random, unexplained behaviors confuse people, frustrate them and make them feel powerless. Ember is an augmentation to the social media manager; it helps that person focus on what’s important so that they can be smarter about their decisions. It doesn’t try to be smarter than they are. How can our devices HELP us, rather than trying to replace us? 57
  59. 59. Finally, an antipattern: making people do all of the training. Asking them to identify whether a behavior is appropriate or not should be done selectively and infrequently. Yes, it will really help your supervised model’s accuracy to have people identify the correct positives from the false positives, but unless you’re paying these people, it’s incredibly annoying to have customers do it all the time. Last Friday one consumer IoT product with a machine learning system I’m playing with asked me to classify its output at 1:11PM, then again at 1:26, and again at 1:47 and again and again. I think it was on a roughly ten-minute sensing cycle, and at every cycle it tried to make a decision, and asked me to verify it. I’m sure it’s still doing it, but I turned off all notifications from it, and now I’m considering turning it off entirely. People will sometimes willingly act as sensors and actuators for your system, but because they are not machines, they will not do it all the time and you’re just going to have to find a better way to train your model. 58
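One way to be selective and infrequent is to ask only when the model is genuinely uncertain, and to cap how often you interrupt. Here is a hypothetical sketch of that idea; the threshold, daily budget and class names are all invented for illustration:

    from datetime import datetime, timedelta

    class SelectiveTrainer:
        def __init__(self, confidence_threshold=0.75, max_prompts_per_day=2):
            self.threshold = confidence_threshold
            self.max_prompts = max_prompts_per_day
            self.prompt_times = []

        def should_ask_user(self, prediction_confidence, now=None):
            # Only interrupt when the model is unsure AND we haven't already used
            # up today's small budget of interruptions.
            now = now or datetime.now()
            recent = [t for t in self.prompt_times if now - t < timedelta(days=1)]
            if prediction_confidence >= self.threshold or len(recent) >= self.max_prompts:
                return False          # confident enough, or we've asked enough today
            self.prompt_times = recent + [now]
            return True

    trainer = SelectiveTrainer()
    print(trainer.should_ask_user(0.95))  # False: confident, don't bother anyone
    print(trainer.should_ask_user(0.55))  # True: uncertain, ask this one time
    print(trainer.should_ask_user(0.50))  # True: second and last prompt of the day
    print(trainer.should_ask_user(0.52))  # False: budget used up, log it for later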
  60. 60. Finally, for me the IoT is not about the things, but the experience created by the services for which the things are avatars. 59
  61. 61. Ultimately we are using these tools to extend our capabilities, to use the digital world as an extension of our minds. To do that well we have to respect that as interesting and powerful as these technologies are, they are still in their infancy, and our job as entrepreneurs, developers and designers will be to create systems, services, that help people, rather than adding extra work in the name of simplistic automation. What we want to create is a symbiotic relationship where we, and our predictive systems, work together to create a world that provides the most value, for the least cost, for the most people, for the longest time. We are currently shoveling our old devices into this new medium. We have not yet figured out what the essential capabilities of this new medium are. Literal McLuhan quotation: "The content of the press is literary statement, as the content of the book is speech, and the content of the movie is the novel." 60
  62. 62. Thank you. 61
