6. Problem
Product
What is the right thing to do? Have we built it right?
Observation, Interviews, Surveys

7. What is the right thing to do? Have we built it right?
Observation, Interviews, Surveys
What do they do?
Why do they do it that way?
What problems do they suffer?
How have they tried to resolve these problems?
Where does our product fit in their work or life?
What problems does our product solve?
When & how is our product used?
What features are important?
How should our product look or behave?
Editor's Notes
Early days – we were called usability engineers. We were the people in white lab coats testing the usability of the product. The end result was that we often ended up finding out about pretty bad user experiences once it was too late, close to the end of the product lifecycle.
Tell the file I/O story.
What we learned was that separate teams of people, working separately and in an almost linear fashion, don't work. That approach doesn't help us deliver products that people actually want.
15 years later and things have changed. Everybody is Agile now and we are getting used to working collaboratively, in small batch sizes and iterating frequently and rapidly.
This is what we might call the Golden Age and the Everyone Age. There really has never been a better time to be working in UX. We don’t have to convince anyone why UX is important. Tell the story about when interviewing for a job in 1997, nobody knew what HCI was. Things have changed significantly since then.
I feel that design research thrives best when it happens right in the middle of the rapidly evolving design decision-making process. That increasingly tight collaborative dynamic with the other disciplines is, I think, the most exciting evolutionary trend. I don't feel like our team is getting left out of the loop because product teams are doing it themselves or because we don't scale. The opposite is happening. Our most exciting and influential work is still happening within these rapid interplay cycles between design, research, and engineering. We have a ton of data that points to this being the case. For example, we have seen a 76% jump in the number of user research lab sessions we've run for product teams within the last year, and that growth trend shows no signs of letting up. I think the single biggest kiss of death for user research in the world of modern development would be to try to thrive predominantly outside the agile engineering cadence/design loop.
But we still have problems (although the biggest ones are the best ones to have). If so many people are convinced that we need UX, how do we scale? Can we just hire more and more people? We can, but throwing more people at a problem isn’t always the best way to do things. That is just like going back to the days when we had separate teams with separate roles.
Instead we share the responsibilities – we focus on responsibilities, not roles. While I have UX researcher in my title, I am not the only one responsible for doing UX research. My whole team is responsible for it. While I have some recognized expertise amongst my team for doing UX research, everybody participates. That means that they interview customers, they observe customers. They share in modeling and designing the user experiences that we create.
And what's great about this is that it doesn't diminish the quality of the UX that we create. In fact, it improves the quality. Everybody on the team has unique perspectives, all of which help bring the problem we are trying to solve to life. In my experience, there is a rich conversation that takes place amongst the whole team about who the customer is, what problems they have, and how our product could solve them. It's not a UX-only conversation – it's a customer and product conversation, a conversation that needs to involve everyone who works on the product.
The other thing that has changed in the last 15 years is the advent of the cloud and big data. We now have the ability to deploy updates to users rapidly and can measure the effect of those changes via mechanisms like flighting and A/B testing. We now have a wealth of feedback available, and we can truly interact with and get feedback from large numbers of users, instead of the 5 or so that we were taught 15 years ago would be sufficient to find 80% of usability problems in a product.
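Both of the quantitative claims above can be made concrete with a little arithmetic. The sketch below is illustrative only: the flight sizes and success counts are made-up numbers, and the comparison uses a standard two-proportion z-test (one common way to analyze an A/B experiment, not necessarily the method any particular team uses). The second function is the Nielsen/Landauer model behind the "5 users find ~80% of problems" rule of thumb, with the commonly cited per-user discovery rate of 0.31.

```python
# Hypothetical example: comparing task-success rates between a control
# flight (A) and a treatment flight (B). All counts are made up.
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference
    between two observed proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1+erf(x/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

def expected_problems_found(n_users, discovery_rate=0.31):
    """Nielsen/Landauer model: proportion of usability problems found by
    n users, assuming each user independently uncovers a fixed fraction."""
    return 1 - (1 - discovery_rate) ** n_users

# 4,200 of 10,000 control users completed the task vs 4,460 of 10,000
# in the treatment flight: a difference telemetry can detect at scale.
z, p = two_proportion_z_test(4200, 10_000, 4460, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")

# And the classic rule of thumb: 5 lab participants find roughly 80%
# of usability problems under this model.
print(f"5 users: {expected_problems_found(5):.0%} of problems found")
```

Note that the two tools answer different questions: the z-test tells you *whether* behavior changed across a large population, while the small-n lab model is about *discovering* what the problems are in the first place.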
This is a great tool to add to our toolbox. Understanding the impact of design changes in aggregate, and then taking action on that understanding, is something we have never been able to do before.
But this is one more tool, not a replacement. Think about the story I told at the start about the pain and suffering we observed while developers were trying to write some simple code that would read and write text from a text file. Would we have been able to determine why people were struggling from telemetry alone? How would we even have known what they were trying to do? At the very best we would know that they were writing code against different classes but we wouldn’t know if they were successful because we wouldn’t know what they were trying to do. And even if we tried a huge experiment where lots of people tried the task, would we really understand the problems they had without actually watching them?
So how does this work in practice? How do we break things down so that we are working on the right thing?
We focus as a team on validating problem hypotheses and product hypotheses. Problem hypotheses are all about understanding the customer and the problems that they have. They help us determine if we are building the right thing. Do we understand the problems that the customer has? Do we understand the outcomes that they want to achieve? There is no point in building a product that doesn't solve a problem for a customer, or that doesn't help them achieve the outcome they desire. Product hypotheses, in turn, test whether the experiences we build actually solve those problems: whether we have built it right.
What matters is the constant interplay between formulating, testing, and reformulating both our customer assumptions/hypotheses and our product assumptions/hypotheses. In other words, the mindset of being in a continuous learning mode regarding who our customer is, what problems or needs they have, and how effective the product experiences we are creating are at meeting those needs. I love the Bill Buxton quote that the essence of design thinking is "making sure you are building the right thing and that you are building it correctly". Having a learning gearbox that constantly refines our understanding of customer needs, and of the experiences we are delivering relative to those needs, is the essence of what we should be all about as a discipline and as a company.