Presenter: Jeremy Bowen, Microsoft, Creative Design Director
There's no question that the products we make in the near future will grow cheaper and smarter - which means more intelligence in more places. How much will our existing attitudes about interaction design apply in this new era of diverse, connected, intelligent devices? Are there new boundaries we need to set to maintain our privacy and our peace of mind? "The Well-Behaved Object" is a way of thinking about the relationship between us and the objects around us that augment our experience – and what it takes for the objects we use to be not just “intelligent”, but valuable, empowering, and meaningful.
Editor's Notes
I hope my sub-title was grandiose enough to pique your desire to call B.S. on me.
Developed while working in various capacities on the Windows team – IoT, Cortana/Natural language, Contextual awareness, Surface hardware team on future products.
It’s our job to look to the future, and be prepared for what’s on the horizon, right? Things like…
They’re the new “thing”, guys! They’re taking over!
What are we going to do with bazillions of connected devices?
How do we handle computational abilities that start to mirror or exceed our own?
I should throw in Virtual and Augmented Reality…
What does pervasive computing power mean for our privacy, and how do we manage it?
The future is going to be shiny and different!
I think we have a tendency to think that we have to take the old rules…
Burn them with fire…
And replace them with new rules.
The future will no doubt look different. As for all these new concepts, their shapes are new, the circumstances are novel, the implementations are foreign, but I don't think they require new rules when it comes to human-thing interaction.
When I have to think about the future, I prefer to focus on the human side of the question, rather than the technological. Regardless of what companies are able to pull off which product launches and grow which platforms and nurture which ecosystems and sustain which business models, our human needs remain. Luckily, we as a species change much slower than the nature and forms of our creations (or so it seems), so our minds, values, and needs can serve as a steady anchor in the storm of conjecture and Moore's Law.
That said, the objects we will be interacting with will have a level of sophistication and inter-connectedness that we haven't had to consider yet - but that simply forces us to articulate what we already know about what is valuable to us, and apply it to a broader set of circumstances.
I hope to be telling you things you already know.
I hope much of what I say seems obvious.
It's like something I heard once about songwriting…
I’m pretty sure I heard this from Hootie, as in Hootie & the Blowfish?
To further undermine my credibility…
I am not a researcher. All my data is anecdotal. I've done no studies. I'm freely stealing and mashing-up ideas that I've come across and mixing them with my own biases and perspective.
But, I hope to be sharing a way of thinking that will be useful, or at least interesting.
Frictionless environments are mostly theoretical – they’re nearly impossible to replicate. But they help us understand how physics really works.
What would happen if the right side never sloped up to the original height?
Separating principles
Helps us re-combine them into useful equations to solve problems and model reality
OK. I want to take us through a little thought experiment. So everyone take a deep breath… and forget everything you think you know about the world.
We’re going to a theoretical universe, free of many of the constraints and assumptions we might make in the real world.
You have a mass of nanobots that can form into any physical shape, size, weight, or material, and have any kind of functionality you desire. You can break the mass into as many different pieces as you want.
You can also set the boundaries for what each piece can know about, what it can and can't do.
What would you create?
Create a rectangular object you can hold in your hand that you can touch, and it will display information
Large piece of glass that can show immersive video and amazing sound reproduction. Maybe it can follow you around the house…
A soft fuzzy ball that…
…depending on how you squeeze it, stroke it, or thump it plays you different kinds of music
Or a button that when you push it…
…sushi appears!
This is our frictionless environment. You can make anything that can do anything that you can interact with in any way you please, and technological constraints, the friction in this case, won't stand in your way.
How are you deciding what is valuable?
What are the core elements?
I'm completely ignoring how any of this works. This is not from a technological perspective. But I believe that these are the elements that we humans need and expect from the objects we use.
I’m intentionally not using the word “devices”. I hope for a world where the distinction between an object and a device grows more and more obsolete.
*I want to pause here for a second: We’re going to be talking mostly in terms of physical objects, but I think it applies nearly as well to “virtual objects” - any product or service you interact with through a physical object – an app on a phone, AR objects in the room I see through a Hololens, the service I talk with through Cortana.
But stick with me in the physical realm for a bit, and we’ll talk about how we might translate these ideas for virtual objects or services.
We usually use “Intelligent” as a kind of throw-away word to describe something that is more sophisticated than we’ve encountered before. In many ways, it just means “better”.
I think we make a mistake when we think "Intelligence" is an objective ingredient in something. "Intelligence" is a value judgement that we users make about a thing. I like to think that a thing doesn't "have intelligence" - only our users can tell us if something is indeed "behaving intelligently."
But I think the real hope for intelligence - for us and for our users - is this…
Pigs aren’t flying just yet.
What do I mean by “elements”?
…like the periodic table describes the elements that compose our universe. Any given object in the universe contains a different combination in different proportions.
These are the four elements of an intelligent object.
Here’s a handy definition: An object is interactive to the degree that it responds to manipulation. A rock is an object, and much can be accomplished with a rock, but a rock is not interactive, as it doesn’t respond to manipulation. (Unless you take a chisel to it.)
“responding to manipulation” implies both directions
Q: What are some of the ways we give input to objects?
Here’s how I break it down…
Ways we can provide input using direct human-to-object contact
Bret Victor: A Brief Rant on the Future of Interaction Design – “Pictures Under Glass”
Input that the object can “see”, without us touching it or making sounds.
Q: What are the ways our objects give us feedback?
“Voice is going to take over” – we’ll be talking to everything. We’ll be doing all our interactions via talking and pointing.
We definitely ought to be able to do those things. But I see it as simply the long-tail of interactions we want.
Minority Report anecdote – Jaron Lanier?
So that’s the element of Interaction.
Or, the book it can read from.
This can be general information and status…
Or the object can have awareness of its environment, whether it’s its location…
Or an awareness of other devices in the environment
Objects can have awareness of presence and identity – who is around, and information related to them
It can be aware of requests and status.
For an object to be intelligent—from a user’s perspective—there must be a shared understanding between the user and the object.
An Italian may understand a lot about how to get to the airport, but unless we have a shared understanding of what I need, he can't help me. A computer may form lots of connections, capabilities, syntaxes, and reactions to things - it may have its own "understanding" of the world - but unless we have a shared understanding of what things mean, and what the outcomes should be, it won't help me, and I can't use it.
Info it receives either through its awareness, or through direct interaction.
There are a number of ways that objects can know how to respond to that info.
Basic if/then
Learning the broader implications of things.
Monica is my spouse, and therefore gets special permissions and treatment that others don’t get.
I’m working right now, which means I don’t want to be bothered with emails from TripAdvisor. (True story.)
Pull up the stuff from that project I’m working on with Chad.
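The if/then and contextual examples above could be sketched as a tiny rule function. This is purely illustrative: the function name, the shape of the context, and the rules are all invented here, reusing the talk's own examples (Monica as spouse, suppressing TripAdvisor while working):

```python
# Illustrative sketch: basic if/then rules enriched with learned context.
# All names and rules here are hypothetical examples, not a real API.

def should_notify(sender: str, context: dict) -> bool:
    # Learned implication: a spouse gets special treatment others don't.
    if context.get("relationships", {}).get(sender) == "spouse":
        return True
    # Contextual rule: while working, suppress known low-priority senders.
    if context.get("activity") == "working" and sender in context.get("low_priority", set()):
        return False
    return True

context = {
    "relationships": {"Monica": "spouse"},
    "activity": "working",
    "low_priority": {"TripAdvisor"},
}

print(should_notify("Monica", context))       # spouse always gets through
print(should_notify("TripAdvisor", context))  # suppressed while working
```

The point isn't the code itself, but that even simple rules only become "intelligent" when they're grounded in an awareness of relationships and current activity.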
Remembering past interactions, patterns, instructions…
This is a key thing that I think technology is pretty bad at. We have to start fresh so often. There's little sense of an accumulative relationship with our things. So much could be done if our things would simply remember, and be able to refer back to the past.
We're getting better at it, but I think this is an area with a lot of potential.
Using past patterns and understanding to determine what would be preferred or implied
Obviously, to reach a shared understanding between a user and an object, there has to be dialog.
Configuring a device to behave the way you want it to is rather one-directional. You are instructing it.
Much of what we do as UX designers is one-directional as well – we make sure the things you use are “discoverable”, “intuitive”. These are ways we try to make the user bend their understanding to the object.
But I would guess that we would tolerate a fair amount of training – in both directions – if it was conversational in nature. (And I don’t mean just in the natural-language sense of the word.)
This is, after all, how we interact with each other. To reach a shared understanding with someone, there is usually a “meeting in the middle”, where both sides are able to understand the other.
What the object actually accomplishes for you in the world.
Virtual outputs like communication, information display
Physical outputs like picking up a chair, playing music
There is obviously lots of room for technology to improve in each of these elements.
Having more flexible and natural interaction methods
Having more robust and useful awareness of environments, identities, preferences, goals, content
Developing a shared understanding in more intuitive, meaningful ways
Empowering us through more effective and significant impact
But should every object’s aim be to have each of these elements in full measure?
:Q: What kinds of problems might we face if everything did all of these things?
When do we want our objects to be dumb? Or simple?
Should every object have fully-flexible interaction methods?
Or do you want every object to have the same level of awareness? (Cameras in the bathroom?)
When does it become problematic for objects to have possibly a better understanding of you and your world than you have?
As for Impact, do we want every object to be capable of anything, like some enormous meta-Swiss-army knife?
Worse yet, if you put all four together, you have every Apocalyptic sci-fi movie.
(Don’t look too hard at these highly scientific charts – they’re mostly just to get a point across…)
When we consider these four elements in relation to what’s ideal for us, people are pretty good at some of these.
Our interaction methods are ideal, almost by definition. Maybe if we introduce mind-reading via technology…
Our ability to reach a shared understanding…
These gaps represent our opportunities in tech to better match our ideals, and in most cases, we’re simply catching up with what comes naturally to us.
The phrase comes from a heavy philosophy text of some sort; I learned it in the context of music composition classes.
When there are no limits to what you can do, decisions become very difficult to make.
It’s useful to our brains to associate functionality with certain objects.
TV remote
Light switch
Phone vs. laptop (how many of you check your email on your phone even when your browser’s open?)
Compartmentalization
Expanding the capabilities of a certain object means - by definition - that you increase its complexity.
It’s the reason in the nanobot exercise you would decide to make more than one object.
Even if our objects are capable of doing a lot of things, we will only allow them to do it when we can develop trust.
I have a whole other talk on this topic called “Doorstops & Bicycles” that discusses this aspect.
We derive meaning out of having control over things. The ability to make a decision is what gives us a sense of power and significance.
There are things we want to delegate – things we don’t particularly care about having direct control over
But there are things we want to retain control over, even if they could be delegated.
With that in mind – that we don’t want our objects to be mega-swiss-army-knives that can do anything, know everything, and understand us – possibly more than we understand ourselves – what makes an object well-behaved? It’s the balance between being frustratingly dumb and uncomfortably intelligent or presumptuous. I like to think of interactions in these terms.
How do we both expand the capabilities and set the limits for our objects in a way that is meaningful and appropriate?
We have our four elements – Interaction, Awareness, Understanding, and Impact – but for any given object, how it should leverage each element is different. We wouldn't have the same limits or requirements in every case.
The thing that filters and balances these elements is…
…THE OBJECT’S PURPOSE. The object’s purpose is what governs:
how we want to interact
what we want it to be aware of
how we reach a shared understanding, and what that understanding entails
what impact the object has in the world, whether virtually or physically.
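The relationship above can be pictured as a small data sketch: purpose at the center, scoping each of the four elements. This is purely illustrative; the class, field names, and example object are invented here, not anything from the talk:

```python
from dataclasses import dataclass, field

# Illustrative sketch: an object's purpose governs what it does with each
# of the four elements. All names below are hypothetical examples.

@dataclass
class WellBehavedObject:
    purpose: str
    interactions: set = field(default_factory=set)   # how we want to interact with it
    awareness: set = field(default_factory=set)      # what it may be aware of
    understanding: set = field(default_factory=set)  # the shared understanding it builds
    impact: set = field(default_factory=set)         # what it may do in the world

# A music speaker's purpose limits it: no cameras, no door locks.
speaker = WellBehavedObject(
    purpose="play music in this room",
    interactions={"voice", "touch"},
    awareness={"who is present", "time of day"},
    understanding={"music preferences"},
    impact={"play audio"},
)
```

The design point is that the limits aren't an afterthought: they fall naturally out of the purpose, rather than out of what the technology happens to make possible.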
It’s the reason you pick it up, or touch it, or speak to it. (or launch it, if it's UI.)
The reason you pull your phone out of your pocket to check your email while your laptop is on your lap.
The reason you play music out of those speakers in that room.
(The reason you open that app, instead of another.)
Products are created with a purpose in mind, and we usually buy them, or download them, or otherwise use them because we think we value the premise.
But once we actually start using them, what the creators wanted us to use them for is completely irrelevant.
The only thing that matters is what WE think it is good for – because that’s the reason we choose to use it.
A coffeemaker may be intended to make coffee, but if it doesn’t work well, it becomes a heavy object I use to prop open the door to the garage.
The Well-Behaved Object will have…
If all these things are true, you’ll have yourself a well-behaved object.
If it fails in any of these categories, it either doesn’t qualify to be an “intelligent” object, or it’s one that misbehaves.
This is a model that I hope can be applied as you work on your areas, or products. As I said at the beginning, I hope some of this is obvious – but I also hope it lends some new perspective.
I have one last topic I want to cover on this.
I’ve spent a lot of time emphasizing how intelligent objects ought to bend to what is ideal for the user’s purpose.
But that assumes the user knows what is ideal, or even what the purpose is…
Part of the value we provide as the product-makers is having some intuition and understanding of these issues, and figuring out what’s best on behalf of our users.
They buy or use the product, and discover how valuable it is.
One way to decide on the most valuable way to do something is to survey the user base and adjust a single product to the needs and desires of the majority.
A perfectly valid way to get the best bang for the buck.
On the other end of the spectrum, however, is to build products that are designed to learn and adjust themselves to the needs and desires of the individual.
Objects that are
Conversational
Inquisitive
Accommodating
whose priority is to learn and adapt.
Now, users should not need to be designers – they shouldn’t have to start from scratch and create valuable objects out of clean slates.
That exercise at the beginning with the nanobots is actually really hard, and we’re all likely to come up with really stupid ideas that wouldn’t last a week.
We should make products that are generically great – they’re great for a given purpose, intended for a given kind of person with given kinds of tasks. But that should be just the beginning – the baseline.
Truly intelligent objects – the well-behaved ones, anyway – will adapt to their user as much, if not more than, the user adapts to the object.
How we each understand the world is what makes us who we are.
We share a lot in common as humans – and that’s worth celebrating.
But our internal web of meaning, relationships, expectations, aims, etc., is what makes us each unique.
Like a favorite sweater or a well-worn journal, the most valuable things we use will be the ones that bear the imprint of our quirks, our patterns, our experiences, and our histories.
Thank you for coming!