UX Conversion Camp is the UK's only brand-only conversion event, organised by Keep It Usable. 2017 was the best yet! Here are the slides from our fantastic presenter, Pete Marriott of Rentalcars.com. For more information on UXCC, please visit www.uxconversioncamp.com
2. About Me
Pete Marriott
• Over a decade in analytics roles
• 6+ years of A/B testing
• Currently heading up Customer Insight at Rentalcars.com
“Daddy works on his computer & eats chocolate”
3. - World’s biggest online car rental service
- Compare 800 companies at over 49,000 locations
- Manchester-based, founded in 2004
- Part of the Priceline Group
7. Example Hypothesis
Because competitor analysis showed examples where the “payment method” selector had been removed, we expect that removing the card selection will lead more customers to complete the payment form. We’ll measure this using the number of completed bookings / conversion rate.
Variant A: 10,000 visits, 1,000 bookings, 10% conversion
Variant B: 10,000 visits, 1,200 bookings, 12% conversion
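To make those figures concrete, here is a minimal sketch (in Python, not the speaker's actual tooling) of a standard two-proportion z-test run on the slide's example numbers; the function name is illustrative.

```python
# Two-proportion z-test on the slide's example numbers, using only the
# Python standard library. A sketch for illustration, not production tooling.
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference in
    conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# The slide's figures: 10% vs 12% conversion on 10,000 visits each.
z, p = two_proportion_z_test(1_000, 10_000, 1_200, 10_000)
print(f"z = {z:.2f}, p = {p:.6f}")   # z ≈ 4.52, p < 0.0001: a real uplift
```

At these volumes the two-point uplift is comfortably significant; with ten times fewer visits the same rates would not reach significance, which is why traffic matters (more on that shortly).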
8. Why do we experiment?
- We want our features to drive revenue
- We want to improve the customer experience
- We want customer sentiment to drive product development
- We want to make product decisions based on facts
9. One other big reason: WE ARE OFTEN WRONG! (Although we aren’t alone.)
22. A negative or neutral result doesn’t necessarily mean that the hypothesis is invalid. The more you test, the more you’ll be able to spot when you should move on to another hypothesis.
When pulling together this presentation I asked my children what they thought I did at work…
As you’re about to find out going through this pack, I’m no designer.
Aside from eating chocolate – I currently head up Customer Insight at Rentalcars
I’m going to take you through 6 pointers from my experience.
Before I go on – I’ll just add that I may say things that some of you disagree with.
But that is okay…
For those that don’t know…
Basically, A/B testing is about presenting alternative versions of a product to customers and observing the results, helping you make a decision.
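As a minimal sketch of that mechanic, assuming a simple 50/50 split (the names and hashing scheme below are illustrative, not Rentalcars' implementation):

```python
# Deterministically bucket each visitor into variant A or B, so the same
# customer always sees the same version of the product.
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Hash the (experiment, user) pair into 'A' (control) or 'B'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# The same visitor lands in the same bucket on every page load.
print(assign_variant("visitor-42", "remove-card-selector"))
```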
This photo was taken in 2013 shortly after moving offices – obviously a bit of a joke, but the ethos & culture of A/B testing is evident.
Using competitor analysis, we may think that removing the drop-down (card selector) reduces friction and helps a customer move through the journey.
Some of the most successful companies in the world, also get things wrong.
I work with some of the most incredibly intelligent people and I can promise – they also get things wrong.
A/B Testing isn’t for everyone…
You need enough traffic so that the numbers behind your observed behaviours are statistically significant, and so that you can be confident the observed behaviour would continue to occur outside of the test.
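As a rough illustration of what "enough traffic" means, here is a hedged sketch of a conventional sample-size calculation (5% significance and 80% power are standard textbook choices, not the speaker's figures):

```python
# Approximate visits needed per variant to detect a given conversion uplift.
from statistics import NormalDist

def sample_size_per_variant(base_rate, uplift, alpha=0.05, power=0.80):
    """Two-proportion test: visits per variant needed to detect a move
    from base_rate to base_rate + uplift."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 5%
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    p1, p2 = base_rate, base_rate + uplift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / uplift ** 2)
    return int(n) + 1

# Detecting a 10% -> 12% move needs roughly 3,800 visits per variant.
print(sample_size_per_variant(0.10, 0.02))
```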
You need the right metrics in place to understand whether the test is a success or not.
Crisp, solid metrics are essential; these can range from commercial to emotive metrics. But it is imperative that you have something in place.
My first pointer, which may throw a few people.
Not only is it okay to be wrong, you have to be okay with being wrong.
That is a very alien concept to most people – we like to be experts, we’ve built careers off it.
If you’re not okay with being wrong – maybe A/B testing isn’t for you.
But that is part of the magic - you get to prove a change performs well, but you also get to prove when things don’t go well.
You are also able to observe the impact.
A/B testing helps you understand the decisions you make, and the potential impact of the ones you would have made.
A/B Testing is a culture shift.
Does everyone know what the HiPPO effect is? (HiPPO: the Highest Paid Person’s Opinion.)
A lot of people suggest that experimentation removes the HiPPO.
I’d rather look at it as empowering colleagues – we all have equal buy in / responsibility to the product.
Pointer number 2…
Build up your hypothesis and use that as your benchmark.
Fix the problem you started out trying to solve.
Set your expectations & goals out before the experiment.
Use these as your success criteria, remaining as neutral as possible.
* David Douglass (born 1932) is an American physicist with interests in condensed matter physics and climate change.
Has anyone heard of the Ikea effect?
I’ve just painted my bathroom and it took me a few days (it’s a small bathroom; I just took a while), investing that time and effort into the project.
Going back to my success criteria: my wife said that the colour wasn’t right and it didn’t go…
Rather than repainting in a different colour, I’d fallen in love with my effort and said no.
I’ve since had to invest more time into painting the tiles, buy new towels etc.
Be careful not to place too much emphasis on the work you’ve already completed if it doesn’t fit your success criteria.
You have to be ruthless
Pointer number 3…
Understand your success criteria – measure performance against that.
Monitor other behaviours alongside your success criteria which could impact the net benefit. Are cancellations / returns up, calls / complaints increased?
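A hedged sketch of that net-benefit check; the metric names and tolerances below are purely illustrative, not Rentalcars' real ones:

```python
# Judge a test by its primary success criterion AND its guardrail metrics.
PRIMARY = "conversion_rate"
GUARDRAILS = {"cancellation_rate": 0.005, "calls_per_booking": 0.010}  # max tolerated rise

def evaluate(control: dict, variant: dict) -> str:
    """Declare a win only if the primary metric improves and no guardrail regresses."""
    if variant[PRIMARY] <= control[PRIMARY]:
        return "no uplift on the success criterion"
    for metric, tolerance in GUARDRAILS.items():
        if variant[metric] - control[metric] > tolerance:
            return f"uplift, but {metric} regressed beyond tolerance"
    return "net win: primary metric up, guardrails healthy"

control = {"conversion_rate": 0.10, "cancellation_rate": 0.050, "calls_per_booking": 0.080}
variant = {"conversion_rate": 0.12, "cancellation_rate": 0.051, "calls_per_booking": 0.079}
print(evaluate(control, variant))   # net win
```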
It’s not enough to have trust in your tools; that trust must be well founded.
Build the trust in your tools by investing in them.
You can buy off-the-shelf products which can work great; we have preferred to build our own and continue to invest in them.
We have some great people working on our testing tools – there is a reason for that.
Do you evaluate during or at the end of an experiment?
There are some arguments to suggest leaving your experiment running for a certain time period and only reviewing once that has expired.
In my experience it is always preferable to observe performance during the experiment.
Why wait until you are sure that something is negative before taking action?
As long as you have enough information to understand why that behaviour occurred, we shouldn’t risk the impact to the customer.
In this instance I’d rather be much more reactive, and this is the main reason why I always monitor performance regularly.
Be careful as performance can be volatile whilst volumes are low.
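One way to picture that balance is a small monitoring sketch: report a confidence interval on the difference, but withhold judgement while volumes are low (the minimum-visits threshold here is an illustrative choice, not a rule):

```python
# Monitor a live test: interval on the conversion difference, with a
# volume guard so early volatility doesn't trigger a knee-jerk decision.
from statistics import NormalDist

def monitor(conv_a, n_a, conv_b, n_b, min_visits=1_000):
    if min(n_a, n_b) < min_visits:
        return "too early: rates are still volatile at low volume"
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = (p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b) ** 0.5
    z = NormalDist().inv_cdf(0.975)                  # 95% interval
    lo, hi = (p_b - p_a) - z * se, (p_b - p_a) + z * se
    if hi < 0:
        return f"variant looks harmful ({lo:+.1%} to {hi:+.1%}): consider stopping"
    return f"difference so far: {lo:+.1%} to {hi:+.1%}, keep watching"

print(monitor(80, 900, 70, 900))              # below the volume guard
print(monitor(1_000, 10_000, 850, 10_000))    # clearly negative: react
```

Repeated peeking at a fixed significance threshold does inflate false positives, so early stopping is best reserved for clear harm, which is the spirit of the point above.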
Admittedly not the best title for my fourth point…
Your research / data that helped build the hypothesis may be entirely valid.
The friction point you have observed may exist.
What a failed experiment tells you is that you failed to effectively alter that behaviour for mutual gain.
So, it could be a fault with the implementation – causing other issues.
It could be that the design doesn’t lend itself to solve the friction point.
Whether the experiment was positive or negative, there is always something to learn.
You should be using this insight to inform any future iteration.
I would also encourage revisiting old experiments.
A valid hypothesis may require multiple iterations before it becomes successful.
It’s also worth mentioning that what works for your customers today may not work the same tomorrow (or vice versa).
Search on Google (or any alternative) and you’ll no doubt see a plethora of blogs about best practice UX / best practice web design etc.
Some of you will probably hate me for this but….
In the world of A/B Testing, you shouldn’t assume that best practice is best for your customer.
Examples include happy families on landing pages, and sliders / hamburger menus on mobile.
I’m a white thirty-something from the North of England; my browsing behaviour will not necessarily be the same as anybody else’s in this room.
Each of our businesses will more than likely have different customer demographics.
Learn what you can about your own customers and let them create your product roadmap.
Often you see examples online about total page redesigns being A/B tested.
I’m going to use two examples of Rentalcars homepage over the years.
Given a blank canvas I don’t think any of our designers would have designed the website as it is today.
Certainly they wouldn’t now design it as it was…
So if we had moved directly from A to B and seen a performance uplift, how would we know what worked?
Was it the imagery, reviews, security info, etc.?
What is the optimum design / iteration? Unless you test iteratively, you’ll never know.
Also, you may catch elements that hinder performance / experience.
I’ll just leave this image here…
Not only is A/B Testing a way of improving your product – it is a culture.