Welcome! Today we’re going to talk about the intersection of security & design and ways to help users make better choices and stay safe online.
My name is Jade Applegate, and I'm a User Experience Engineer. This is my first time in Raleigh; I've lived in the San Francisco Bay Area for the last four years.
You can find me on the internet @jadeapplegate.
Fastly is a real-time content delivery network; we serve dynamic and static content for the likes of Imgur, Pinterest, GitHub, and many others.
We are involved in and support several open source projects, and are planning to release two more in the beginning of 2016.
It’s a fantastic company, and we’re growing rapidly with offices in SF, NYC, London, and Tokyo! Come find me if you’d like to learn more.
Alright, with the introductions out of the way, let’s get started.
UX generally focuses on reducing friction for users on the ‘happy path’. Typically these things are viewed as a conversion or acquisition funnel. We are all familiar with this type of cycle.
As an example, if you have an e-commerce site, you would want a customer to create an account, add something to their cart, complete the check-out process, and even return again in the future. Your site would be optimized for this flow, because your business depends on getting people to buy things from your website.
But what if you put as much emphasis on designing and planning the UX for destructive actions on the site, such as deleting an item from the shopping cart, canceling an order, or deleting an entire account? Typically, these 'destructive actions' and warnings are given less thought because they don't contribute to the 'happy path' of customer conversion, and ultimately revenue. But in some cases, they are just as important for the user experience, especially when security is concerned.
Today, I’d like to skip the ‘happy path’ and talk about the opposite -- Adding Friction to the User Experience, and where that makes sense to do so. I’ll introduce you to some relevant security related research, and review some UX guidelines worth paying attention to.
UX should typically be frictionless -- until it isn’t! What does that mean?
Here are a few examples of best practices when it comes to destructive actions in applications.
This is where designers empathized with users and made the transition easier to navigate. They purposely put in a road block to confirm the action and bring awareness to it, even though it is not part of the 'happy path'.
These are all what I like to refer to as ‘the are you sure, there’s no turning back after this’ situations. Let’s take a look.
This first example is from Google’s design specs, and it’s regarding the language used when discarding a draft.
As you can see, it's important not to use 'Yes/No', because those labels are ambiguous.
You should instead use explicit language to help the user focus on the outcome of their decision.
This helps to eliminate any confusion and makes it easier for the user to understand exactly what will happen.
This second example comes from the Fastly app (currently in private beta), where we pop up a modal confirmation with the 'delete' action and the name of the thing you're deleting. Sometimes you want to double-check with the user before they complete a destructive action.
Note that the button says "Confirm and Delete" rather than "Yes", and "Cancel" instead of "No", based on what we learned in the previous slide about being specific.
This third example is from GitHub, when you're in the "DANGER ZONE". It's important to alert users that they are in a place where the changes they make have real consequences, and sometimes cannot be undone. Since the stakes are so high, when deleting a repo you even have to type in the name of the repo in order to proceed with the delete. This makes the user actively engage in the experience, and leaves less margin for error.
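That type-to-confirm pattern can be sketched in a few lines. To be clear, this is a hypothetical illustration, not GitHub's actual implementation; the function name and dialog wiring are my own assumptions.

```javascript
// Hypothetical sketch of a "type the name to confirm" check.
// The delete button stays disabled until the typed text matches exactly.
function canConfirmDelete(resourceName, typedName) {
  // Require an exact (trimmed) match so a distracted user can't
  // accidentally confirm with a partial or wrong name.
  return typedName.trim() === resourceName;
}
```

In a real UI you would call this on every input event and toggle the submit button's `disabled` attribute based on the result.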
These examples of adding friction are small but I wanted to point them out as good UX design and implementation to help users make better choices online. Users can be on autopilot, especially if they’re on a site they use frequently. They can be distracted and not realize the repercussions of clicking a button.
That’s why even though these are not ‘happy path’ actions, it’s important to purposely add friction, so that we can snap them out of it before they do something they didn’t intend to.
By putting yourself in their shoes and understanding what they will go through, you can design and implement a better flow. Let's take a deeper dive into a few case studies that share some of the same principles.
One thing that really intrigues me within the world of UX design is that seemingly small improvements can have a big payoff.
There are 2 fantastic studies done by researchers at Google that I’d like to share with you today, and they focus on improving browser warnings using UX and Design principles.
As we know, browsers show HTTPS authentication warnings (i.e., SSL warnings) when a user's information is at risk. These browser warnings are another example of the importance of adding friction to the user experience.
This first study, "Experimenting at Scale with Google Chrome's SSL Warning", published in 2014, focused on figuring out why user behavior differed so dramatically between the Chrome and Firefox browsers when SSL warnings were shown.
First, this study focused on the metric of click-through rate, so let's define it just so we're all on the same page. When an SSL warning appears, users have two options:
1) to abandon their destination and return to safety, or
2) to consider and then dismiss the warning and proceed to their intended destination
The percentage of times a user selects that second option and proceeds despite the warning is the click-through rate (CTR).
Typically a high CTR is a good thing, but in the case of ignoring an SSL warning, it's not. (You may be familiar with the term "adherence"; in this case, adherence to the warning is 100% minus the CTR.)
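The relationship between CTR and adherence is simple enough to express as a two-line helper. This is purely illustrative; the function names are mine, not from the study.

```javascript
// Click-through rate: the fraction of warning impressions where
// the user ignored the warning and proceeded anyway.
function clickThroughRate(proceeded, totalWarnings) {
  return proceeded / totalWarnings;
}

// Adherence is the complement: the fraction of users who heeded the warning.
function adherence(proceeded, totalWarnings) {
  return 1 - clickThroughRate(proceeded, totalWarnings);
}
```

For example, 75 click-throughs out of 100 warnings gives a CTR of 0.75 and an adherence of 0.25.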
Prior research showed that the Mozilla Firefox SSL warning had a much lower click-through rate than Chrome's: 33% in Firefox versus 70% in Chrome. The difference here is significant, and they wanted to know why, and how to improve, in order to keep users safer online.
The authors stated that their goal was to decrease the number of users who ignore the warnings in Chrome. Unfortunately, users struggle to understand and often disregard real SSL warnings.
In this study, several factors were investigated that could be responsible for the difference between the user behavior. We’ll explore two of those factors today:
1) the use of imagery
2) style choices
To test these factors, six different experimental SSL warnings were used in Google Chrome 29 and were seen by ~130,000 people. The warnings were designed to test several hypotheses about how users respond to design manipulations.
As I mentioned, the first factor tested was the use of imagery in the warning.
Their hypothesis was that since the brain's social response to human images is instinctive, including these images should evoke the feeling of being watched and thus reduce the CTR.
However, the results showed no change. Including human faces to indicate warning (a policeman and a criminal), as well as a red traffic light (to indicate stop), did not significantly change the CTR.
So let's investigate the other factor: styling choices.
In addition to testing slides with the images we just saw, the researchers used three different styles:
The first style tested was the existing Chrome warning (1)
The second style tested was a mock of the Firefox warning (2)
And the third style tested was the Firefox warning content with default Google Chrome styling applied (3)
Their hypothesis was that applying corporate style guidelines to a warning would increase the CTR, since warnings that resemble corporate products do not stand out as unusual.
BUT, what they saw was that changing the styling alone did not have an effect on the CTR.
We have ruled out the use of imagery and applying corporate styling. So, what makes the Firefox SSL warning so much more effective than Chrome's? Again, remember that the CTR in Firefox was 33%, whereas in Chrome it was 70%.
The researchers hypothesized that in Firefox, the warning’s text, layout, and/or default button choice must be responsible. They noticed that the Firefox warning appears to avoid technical jargon, identifies ways to mitigate the risk, hides technical details by default, and has a clear default choice.
Getting these elements right is pretty complicated from a UX/design standpoint, and they were explored by the same team in a subsequent study.
This follow-up study, "Improving SSL Warnings: Comprehension and Adherence", was published in 2015 and focused on SSL warnings in Chrome, with the goal of improving comprehension of the warning.
Based on the previous study, their working hypothesis was that by improving comprehension, the CTR would be reduced and they might be closer to matching the CTR of similar warnings in Firefox.
Their first goal was to help users understand the situation they were in, and, if that wasn't possible, to guide users to a safe path rather than have them guess at the best way to proceed.
This was tested on Chrome 36 and involved 7,500+ responses gathered through interviews conducted in a lab setting.
The three topics we’ll touch on are Comprehension, Language, and Opinionated Design.
The main goal of this study was to increase comprehension. To achieve this goal, the ideal warning would convey the following three things:
1) Source of the threat: An informed user would not evaluate how benign or malicious the destination website is, but would instead realize that the supposed attacker sits somewhere between the user's computer and the website's server.
2) What data is at risk: An informed user would consider the sensitivity of her data on the destination website, and understand that the risk applies to all data already on the website, not just new data that the user enters after clicking through the warning.
3) The potential for false positives: When weighing the likelihood of a false positive, an informed user would consider the website's reputation and whether the website normally works correctly.
Now that we know what needed to be understood, let's talk about the how.
In general, technical jargon should be avoided when designing a good user experience, since it's ineffective at communicating with a general audience.
People are more likely to read beyond the first sentence of a warning if it uses simple language, and advertisements and warnings that contain technical language hold less interest and are less likely to be remembered or obeyed.
As a non-tech example, when preparing to paint a room, people are more likely to follow the simple instruction to “open a window” than the more complex instruction to “use in a well-ventilated area”
Firefox uses fewer technical terms in its warning, while Chrome uses many, including "server", "operating system", "security certificate", and "trusted authority".
This can get very confusing.
For these reasons, the researchers decided that the language should follow three guidelines:
1) Brevity: Large quantities of text look like they will take effort to read, so people often read none of it. A complication here is explaining the threat succinctly. Given the choice, they figured that having users read some text is better than none.
2) Reading Level: Ideally, language should be at a 6th grade level in order to be well understood by a general audience.
3) Specific Risk Description: Previous research has shown that people are more likely to comprehend and comply with a warning if it describes the risks explicitly and unambiguously. When possible, it’s best to describe the data types at risk (passwords, messages, credit cards) rather than just saying “your information”
Based on these comprehension and language guidelines, here is the proposed warning (L) compared with the warning that was being used (R) in Chrome 36.
Let’s compare the proposed text on the left with the current text on the right.
Which would be easier for someone to act on? [READ aloud]
By applying the three language-related guidelines of brevity, reading level, and specific risk description, the warning boils down to a pretty succinct message.
Now that the researchers had decided on what information the warning should contain, the next step was to determine how that information should look. Here, they introduced the concept of Opinionated Design, which is the use of visual design cues to promote a recommended course of action.
Simply providing information without a clear instruction does not necessarily influence behavior. For example, people don’t always choose healthier products after reading nutritional labels.
There are two important concepts of Opinionated Design to emphasize:
1) Choice attractiveness: The researchers wanted the safe choice to be more visually attractive, so they used a familiar bright blue button (the same one used for other primary actions on Google) so that users would associate it with the default action.
2) Choice visibility: The 'unsafe' choice to proceed is hidden behind the 'Advanced' link.
Finding this ‘hidden’ choice requires effort, and the researchers believed that in doing so, the user would view it as ‘not recommended’.
A few downsides here: 1) there is an increased amount of effort required to ignore a false positive, and 2) users may not realize there is another choice hidden behind "Advanced".
So, let’s look at the final SSL warning.
As a result of optimizing for language comprehension and applying new styling based on the opinionated design principles, the proposed design was released as the new Google SSL warning, in Chrome 37.
With this new design, the Click Through Rate decreased from 70% to 42%, bringing it closer to the Firefox rate of 33%.
This meant that millions of additional users per month chose to act safely because of these warning design changes. The changes might seem small, but they had a huge impact in terms of security.
So, what is the purpose of all of this? Why should it matter to you?
First, you might not think to optimize something like this by default, because so much of the time we're focused on the 'happy path'. But, as we've seen, making small changes to something like a browser warning can impact the security of millions of users.
But looking back at the examples we explored at the beginning of the talk, you don't need an entire research team or millions of users to verify your results. These techniques are applicable to smaller interactions as well. Even the most basic JavaScript confirm dialog with an 'Are you sure?' prompt is better than no friction at all when it comes to helping users make better decisions around destructive actions.
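As a minimal sketch of that kind of friction, a destructive action can be gated behind a confirmation function. The function name and messages here are assumptions for illustration; the confirm function is injected so the snippet can run outside a browser, where you would simply pass `window.confirm`.

```javascript
// Gate a destructive action behind an explicit confirmation prompt.
// `confirmFn` would be window.confirm in a browser; it is injected
// here so the logic can be exercised without a real dialog.
function guardedDelete(itemName, onDelete, confirmFn) {
  const message = `Delete "${itemName}"? This cannot be undone.`;
  if (confirmFn(message)) {
    onDelete(itemName); // the user explicitly confirmed; run the action
    return true;
  }
  return false; // the user backed out; nothing was deleted
}
```

Note that the prompt names the specific item and the consequence, following the same explicit-language guideline we saw in the earlier examples.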
Secondly, this matters to your team or company because applying friction to the user experience where it makes sense has several upsides:
1. it results in fewer support requests, because you've optimized edge cases that were previously ignored, and it reduces user errors,
2. it provides a better UX regardless of whether the user is contributing to your conversion funnel,
3. Finally, it gives the user control over specific actions by enabling and empowering them. Think back to the “Danger Zone” from the Github example in the beginning. How badass do you feel when you’re in “The Danger Zone”?
If you’ve optimized for comprehension, you’re more comfortable giving the user control because you’re confident in their understanding.
Imagine emailing GitHub every time you wanted to rename or delete a repository. If you can give this type of control to your users, it takes some manual work off of your team. Fundamental changes to an account can be made by a confident user, rather than relying on an internal ad hoc process.
I hope that you’ll apply some of these topics to your next project, and think about adding friction to your user experience when it makes sense to do so.
Thanks for being such a great audience.
I can take a few questions now, if there are any.
You can also find me after this or on Twitter @jadeapplegate
I’ve also put my slides and other resources (including the research papers referenced) up on my github account: https://github.com/jadeapplegate/AllThingsOpen2015