The document summarizes how a year of SEO split testing changed the author's understanding of how SEO works. Some of the key findings from the split tests included: (1) Title tests often changed traffic 5-15% but 56% were negative, (2) Structured data sometimes increased traffic over 10% but most changes had no effect, and (3) The same changes often had different or no effects on different sites, highlighting there are no universal "best practices". Split testing improved relationships by making arguments less necessary.
25. CRO test: different users see different templates on the same pages.
[Diagram: the same page group shown with two templates, split by user.]
26. SEO test: different users see the same templates on different pages.
[Diagram: the page group split into two groups of pages, one per template.]
27. Then we measure the change in traffic, e.g. 3% daily increase
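The page-based split above can be sketched in code. This is a minimal illustration, not Distilled's actual tooling: it buckets pages (not users) deterministically by hashing the URL, so every visitor, including Googlebot, sees the same template on a given page. The `example.com` URLs and the even/odd hash split are assumptions for the sketch; real platforms stratify groups by historical traffic rather than splitting purely at random.

```python
import hashlib

def bucket(url: str) -> str:
    """Deterministically assign a page to control or variant.

    Hashing the URL (rather than the user) is what makes this an SEO
    test: a given page always serves the same template, but different
    pages in the group serve different templates.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

# Hypothetical group of similar template pages.
pages = [f"https://example.com/product/{i}" for i in range(10)]
groups = {url: bucket(url) for url in pages}
```

Because the assignment is a pure function of the URL, re-running the bucketing (e.g. on every request at the edge) always reproduces the same split.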
57. How much traffic do you
need for SEO split testing?
Rule of thumb: you need roughly 1000 organic sessions a day to
the group of pages you’re testing.
#INBOUND2018 @dom_woodman
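The rule of thumb above is easy to check before committing to a test. A minimal sketch, with made-up numbers standing in for a week of analytics data for the candidate page group:

```python
# Hypothetical daily organic-session counts for a candidate page group
# (in practice, pulled from your analytics platform).
daily_sessions = [1180, 950, 1320, 1040, 990, 870, 1110]

avg = sum(daily_sessions) / len(daily_sessions)
enough_traffic = avg >= 1000  # rule of thumb from the talk

print(f"avg {avg:.0f} sessions/day -> "
      f"{'testable' if enough_traffic else 'too small to test'}")
```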
85. Get a testing framework
You need to be able to iterate and learn from tests. Having a
shared framework will help with this.
86. You don’t need the why
Knowing the why is great (find it if you can), but you don’t need
it to take action.
87. More than you wanted to know
about titles and metas.
100. Title tests usually change
traffic between ~5 - 15%
This held roughly constant across site sizes and industries.
101. 56% of all title tag
changes were negative
Around 37% were null and only 6% were actually positive. It’s really
hard to write a good title.
103. You can’t stop testing
titles.
Titles exist in the context of the rest of the SERP; when everyone
copies your good title format (and they will), you’ll have to mix it
up again.
104. Title/meta effects typically
appear in 2-4 days.
It’s also easy to validate that the changes have been picked up,
because you can scrape them from the SERPs.
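The validation step is straightforward to automate: fetch the SERP (or the page itself) and extract the title. A minimal sketch of the extraction half, using only the standard library and a hard-coded HTML snippet in place of a real fetch; the title text is hypothetical:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Pull the <title> text out of an HTML document."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Stand-in for HTML fetched from a SERP snippet or the live page.
html = "<html><head><title>New Test Title | Example</title></head></html>"
parser = TitleParser()
parser.feed(html)

picked_up = parser.title == "New Test Title | Example"
```

Comparing the scraped title against the one you deployed confirms the 2-4 day pickup window before you start reading the traffic data.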
112. Visible content on the
initial page load matters
There is a lot of subtlety to how Google renders JS, which we
haven’t covered here.
118. The same changes have
different effects on
different sites.
This is the big one. There really isn’t best practice.
119. Reducing content on
non-article pages was
often null.
(Assuming it was topically similar.) We’ve saved clients a lot of
money by showing they could reduce content without an effect.
135. The same changes have
different effects on
different sites.
This is the big one. There really isn’t best practice.
137. Structured data has an
effect outside of rich
snippets
Occasionally we got big 10-15% wins on important templates; it
varied wildly and was mostly null (never negative).
138. Periodically re-challenge
your beliefs
I was lucky to test a successful site first. In a different order, I
might’ve given up on my hypothesis before finding the right site.
147. If you don’t have intent,
bells and whistles fail.
Five-star rich snippets increased traffic by 16% on the right site with the
right intent. When the intent wasn’t there, they appeared but did
nothing.
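For reference, star snippets like the ones tested above are driven by `AggregateRating` structured data. A minimal JSON-LD sketch with made-up product and rating values (the talk doesn't show its markup, so this is illustrative only):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
```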
155. Freshness does matter.
But I wouldn’t recommend faking last modified dates...
156. Agile testing can help
you take larger risks.
If you can measure and quickly roll out/roll back a test, you can try
things you might not normally feel comfortable doing.
166. You don’t need to argue,
when you can test.
When you have a solid testing framework and can build things quickly
and easily, testing is easier than arguing and removes arguments
from a relationship.
171. Happiness isn’t just up
and to the right
Instead, the focus turns to other metrics, like how many tests we’re
running and how we can improve the testing process.
172. A negative test you rolled
back is a bullet dodged
Negative tests feel like wasted time. Once you realise that without
testing you probably would’ve rolled the change out, it feels a lot better.
173. You’re about to be forced
out of silos.
You’re going to need to coordinate and work tightly with product and
QA teams if you’re not already.
176. The same changes have
different effects on
different sites.
Really can’t emphasize this one enough.
177. Get a testing framework.
Most tests fail or are null.
Having a framework will help you move faster and find those wins.
178. Testing will improve your
relationships.
You’ll have to spend less time arguing and it creates a culture of
curiosity.
179. Periodically re-challenge
your beliefs
You probably have some beliefs about what works or doesn’t work
which are wrong from blind chance. Re-test them.
180. Making changes to sections of pages on templates
● Making SEO changes with tag manager
● Cloudflare edge workers
● DistilledODN

General useful posts on testing frameworks & velocity
● Hypothesis framework
● Running a weekly growth meeting

Do it yourself
How does split testing work?
● How does SEO split testing work?

Examples
● Pinterest - Demystifying SEO with experiments
● Etsy - SEO title tag testing

Measuring SEO split tests
● Google’s original causal impact paper
● A DIY tool for measuring SEO split tests
● A walkthrough of the R CausalImpact library
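The measurement resources above all build a counterfactual: what the variant pages would have done without the change. A heavily simplified sketch of the idea, using the control group's pre/post growth as the counterfactual for the variant group (real analyses like Google's CausalImpact fit a full time-series model; all numbers here are made up):

```python
# Sessions before/after the test, for control and variant page groups.
pre_control, post_control = 10_000, 10_400
pre_variant, post_variant = 10_100, 11_110

# Seasonal/market drift, estimated from the untouched control group.
control_growth = post_control / pre_control

# Counterfactual: what the variant group would have done untouched.
expected_variant = pre_variant * control_growth

# Estimated effect of the change, net of that drift.
uplift = (post_variant - expected_variant) / expected_variant

print(f"estimated uplift: {uplift:+.1%}")
```

Comparing against the control's growth, rather than the variant's own pre-test traffic, is what separates the change's effect from seasonality and algorithm updates.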