Often individual updates seem random or arbitrary, or just reverse the fates of the previous update. But is there a general direction over time? What kind of sites have trended upwards, and was it algorithm updates that did that for them?
Google likes to say that if you make good sites and content, updates will reward you. But this isn’t really true. Clearly, as Google optimises and experiments with its algorithm, good sites cannot only ever move upwards. Sometimes they will be tested in a higher position than their relevance justifies, after which they must come down.
In recent times, we’ve also assumed that changes in rankings follow quickly from changes to sites. And yet, SEOs will often say that algorithm updates reward them for long-past changes. Which of these narratives is true? Are they compatible?
The return of the refresh
https://developers.google.com/search/blog/2019/08/core-updates
“Content that was impacted by one might not recover—assuming improvements have been made—until the next broad core update is released.”
Vague (e.g. Core updates)
• Google is telling us to expect flux.
• “Just make good content.”
• “EAT”
Specific (e.g. CWV, Mobilegeddon)
• Google is trying to change our behaviour.
• “X is now a ranking factor.”
• Often slow / non-event rollout.
A splash of vagaries - like a Core update
https://developers.google.com/search/blog/2022/08/helpful-content-update
“How can you ensure you're creating content that will be successful with our new update? By following our long-standing advice and guidelines to create content for people, not for search engines.”
A splash of specifics - like Page Experience
https://developers.google.com/search/blog/2022/08/helpful-content-update
• “Are you using extensive automation to produce content on many topics?”
…
• “Does your content promise to answer a question that actually has no answer…”
Some food for the tinfoil hats
“Does your content leave readers feeling like they need to search again to get better information from other sources?”
https://developers.google.com/search/blog/2022/08/helpful-content-update
https://moz.com/blog/click-based-seo-engagement-signals
Agenda
The return of the refresh
This is about Google, not about you
Everyone hurts, sooner or later
You should probably make some good content
This talk picks up just where Lily Ray left off at the end of yesterday’s sessions. Lily talked about the September Google algorithm updates, and what we can learn from them. I want to talk about how they fit into the bigger picture of Google updates over the last several years, with some stories from the mountain of data I have access to. I’d like to start by asking an important question.
Who would win in a fight? What I mean to ask, of course, is which, do you think, was bigger - Panda or Medic?
This is the last 8 years in Moz data. I think there’s a ton of fascinating stories in this chart alone, but I will highlight just a few.
You’ll note the relative quiet of the Panda and Penguin era
The incredible amount of flux in the run-up to mobile-first - not during, but before.
It turns out, the real mobilegeddon was the one we met along the way
Medic, and the surprising period of quiet that followed in 2019
Then the flux of early Covid. That’s just a few of the stories here. Now, it can be quite hard on the heatmap to see the size of 1-day spikes, so personally I find it useful to add a line chart over the top.
Something you can see, drawn like this…
Is that, for example, in August 2022, we had a Panda 4-scale fluctuation roughly once per week. Indeed, when Mozcast was first conceived, the idea was that 70 degrees would be a typical day. {click}
There were zero 70 degree days in the last two years.
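To make that idea concrete, here is a minimal sketch of how a flux “temperature” of this kind can be computed - assuming a simple day-over-day rank-change metric and a calibration constant so that a typical day reads 70 degrees. The delta metric and calibration are illustrative assumptions on my part, not Mozcast’s actual methodology.

```python
# Minimal sketch of a Mozcast-style "temperature". The delta metric and
# calibration below are illustrative assumptions, not Mozcast's actual
# methodology.

def serp_delta(yesterday: list[str], today: list[str]) -> float:
    """Fraction of top-10 positions whose URL changed since yesterday."""
    changes = sum(1 for a, b in zip(yesterday, today) if a != b)
    return changes / len(today)

def temperature(daily_deltas: list[float], baseline_delta: float) -> float:
    """Scale mean flux so that a 'typical' day reads 70 degrees."""
    mean_delta = sum(daily_deltas) / len(daily_deltas)
    return 70.0 * mean_delta / baseline_delta

# Toy SERPs for two keywords on consecutive days:
deltas = [
    serp_delta(["a", "b", "c"], ["a", "c", "b"]),  # 2 of 3 positions changed
    serp_delta(["x", "y", "z"], ["x", "y", "z"]),  # no change
]
print(temperature(deltas, baseline_delta=1 / 3))  # 70.0: an "average" day
```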
(This talk is primarily about) 11 core updates (including Medic)
So that’s 11 core updates, from August 2018 to May 2022, visualised here as a timeline, from Medic to now. At the time of the SearchLove slide deadline, the dust had not quite settled for the September update, but it does fit into the same overall picture.
But we all know the answer to Core updates, of course - "just make good content", right?
Well, I’ll show you the story of the Wall Street Journal - whatever your politics, this is a site that basically only publishes original, long-form content from authoritative authors.
Now, I should caveat that the charts I’m about to share use the Mozcast keyword set - so they may not be representative of every keyword the site ranked for.
In fact, almost certainly they aren’t, because that would index heavily on their branded terms.
But it’s still representative of how it did within these keywords, which are around 10k head terms in the US. And of course I’m only looking at sites which rank for a fair chunk of those keywords, for the sake of fair comparison.
But still - after this first Core update, which the industry called Medic, you can imagine them not feeling so well
Here’s the second update, not so bad.
Looking better - the SEO department is perhaps congratulating themselves internally right now.
Oh dear
Okay
Okay
Nice
Nice
Very nice
Whoops
Right. So this is a very turbulent ride for this site, with all of its good content.
Here’s another - CNBC
And here’s Reuters. So again - this is not general performance, it’s just within the Mozcast corpus. But the point remains: these are real keywords, 10,000 of them, for which these sites had a very rocky ride. These are original reporting news sites. Some of the most credible in the world. Almost nobody makes more “good content” than these guys.
So, you might be thinking: well, it’s not just good content, it’s EAT. Perhaps there were topics that the WSJ or Reuters were not really qualified to comment on, especially during a pandemic.
Try explaining that to the British National Health Service, any time before May this year. Maybe Google felt a British site was not appropriate in the US amidst a pandemic, but there’s as much up as there is down. And they’re still valid answers to the questions being asked. So the point remains. It feels like these sites are on a random walk.
This is what I mean by a random walk. It’s a term more commonly found in finance, referring to a series that just randomly moves up or down every day, and it can be the best way to model things like currency exchange rates.
This is also sometimes called Brownian noise, hence the title of this deck. Not to be confused with brown noise, which you might have been thinking of. This is what the sites that we just looked at appear to be doing, right?
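As a purely hypothetical illustration of that idea (not anything derived from the Mozcast data), here is a minimal random walk in that finance sense - each day’s move is random and independent of everything that came before:

```python
# A random walk: each day the series moves up or down by a random step,
# independent of everything that came before. Purely hypothetical numbers.
import random

def random_walk(start: float, days: int, step: float = 1.0) -> list[float]:
    """Generate a series that drifts up or down by a random amount per day."""
    values = [start]
    for _ in range(days):
        values.append(values[-1] + random.uniform(-step, step))
    return values

# A site's "visibility" over roughly four years of daily movements:
series = random_walk(start=100.0, days=1500)
print(min(series), max(series))  # wide swings, despite no underlying trend
```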
But as SEOs, it’s not very satisfying to think that algorithm updates are just random movement each time.
So are algorithm updates genuinely random? That’s what I’m going to spend the rest of this talk exploring
Via these 4 questions. Firstly, how can it be that authoritative sites, such as the ones I just showed you, are having such a rocky time? I want to consider a few possible explanations.
Also known as: the myth of winning every time. So I’d like to show you a few possible explanations.
Firstly, perhaps Google is doing this on purpose
It’s attractive sometimes to think that Google is just fucking with us
Google have good reason not to do this
And besides, if Google wanted to fuck with us, they have many better ways of doing that than randomly deflating authoritative sites (quote)
This is similar to what I was saying earlier about how Google can’t afford to present bad results
Google isn’t changing its results to punish or reward you, they’re doing it to hit their own KPIs
This is from those famous non-purveyors of good content we saw earlier, CNBC. I’ve referenced this interview from 2018 before because it’s an interesting insight into how Google engineers are thinking about algorithm updates. They’re testing against certain searcher satisfaction metrics. But the key bit there is *testing*, so there is likely to be some back and forth.
Indeed, Google kind of say this themselves.
So, that feels like it could be part of the story at least
The other big component here is something that Google admits to, but that has kind of fallen out of discussion among SEOs
The notion of an algorithm refresh
So this is terminology that most would say is no longer relevant to how Google works - if you search for Google data refreshes, you’ll find mostly stuff from 2012 and earlier.
This could be in a scenario where you make no changes at all, or one in which you make many. Point remains.
Remember that you never have any idea whether a result in position 2 is just behind, or miles behind, position 1
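A toy example of why that matters under a refresh - the scores here are entirely made up, since Google’s are hidden; the point is only that re-scoring flips close pairs far more often than distant ones:

```python
# Rankings expose the order, not the gap: a refresh that recomputes
# scores with small changes will often swap a close pair, and almost
# never a distant one. All scores here are hypothetical.
import random

def refresh(scores: dict[str, float], jitter: float = 0.02) -> list[str]:
    """Re-rank after recomputing each score with a small random change."""
    updated = {page: s + random.uniform(-jitter, jitter) for page, s in scores.items()}
    return sorted(updated, key=updated.get, reverse=True)

hidden_scores = {"page-1": 0.90, "page-2": 0.89, "page-3": 0.50}
# page-1 and page-2 may swap on any given refresh; page-3 stays put.
print(refresh(hidden_scores))
```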
So that could be another component
Okay, so maybe we’ve got some theories there for how the fluctuation might work
In my view, algorithm updates come in two flavours
The most obvious examples are HTTPS, Mobilegeddon, and the Page Experience update. Google is communicating very clearly what they want us to change.
Incidentally, these updates are a bluff. I talked about this with the page experience update - they can’t actually roll it out if the industry doesn’t react in time.
On their original rollout day of May 2021, they delayed, I’d suggest, simply because they’d end up punishing too many sites
Initially, very few sites had CrUX data.
By the actual rollout, that had improved dramatically - from 29 to 38%
As had the performance of those pages - from 30 to 36%
When you multiply out these two increases - from 29 to 38%, and from 30 to 36% - it’s a big increase in how many URLs would be eligible for a boost.
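As a rough back-of-the-envelope (treating the two percentages as independent, which is an assumption on my part): 0.29 × 0.30 ≈ 8.7% of URLs eligible at the original date, versus 0.38 × 0.36 ≈ 13.7% by the actual rollout - a relative increase of more than half.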
So basically it was a bluff - they waited until websites were ready for it
Meanwhile, having said initially that they’d punish URLs which failed any threshold, they ended up punishing only those that failed all 3, further easing the blow.
So that’s what specific updates tend to be like these days. Obviously, it wasn’t necessarily so back with Panda and Penguin, but then again, they weren’t generally announced with so much warning and literature.
Core updates on the other hand, are much vaguer
So, that leaves us with an obvious question about this new flavour of update from the start of September: the Helpful Content Update.
It has some features of a Core update - such as this incredibly banal advice
But also mentions some specific targeted behaviours, like the Page Experience update
There’s also this curious allusion to a popularly believed user signal ranking factor
And this bold claim, a bit like the Page Experience update saying it’d punish sites which failed any threshold. It’s hard to see how following this through wouldn’t represent an act of self-sabotage - I’d say this is a bluff.
And then also, more food for the increasingly sensible-looking conspiracists.
It’s attractive sometimes to think that Google is just fucking with us
This is unlikely, as Danny points out here
If they wanted to do that, there’s better ways
This is a bit like how people react to certain other updates.
(But the lack of pre-announcement doesn’t.) So it’s a bit of both.
This is before and after the Helpful Content update - after is the darker blue bars. Now, I’m sure some sites were affected, but broadly speaking, it was not enough to move the needle. Most examples we could find turned out to be false positives.
For example, WhiteHouse.gov, on the day they published the student loan relief plan. This is really important to keep in mind with any kind of algorithm update winners-and-losers analysis - normally it’s full of sites with better explanations for why they went up or down. Site migrations, sudden virality, that kind of thing.
But this brings us onto - if Core Updates are vague, and the Helpful Content update did nothing in particular, who should care about all this?
Well, about 20% of sites should expect to be firmly in the crosshairs for Core Updates.
What I mean by that is that only 10% of sites were impacted by none of the 11 updates
This chart shows, of the 11 core updates before this month, how many updates had affected how many sites.
But for sites that are affected by *most* updates, you’re only looking at about 20%. Of course, you can take that stat either way. And, incredibly, 0 were hit by all 11 updates.
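For what it’s worth, the tabulation behind a chart like this is simple - given a per-site flag for each of the 11 updates. The flags below are hypothetical placeholders, not the Mozcast data:

```python
# Bucket sites by how many of the 11 core updates affected them.
# The impact flags below are hypothetical placeholders.
from collections import Counter

impact_flags = {
    "site-a.example": [True, False, True, True, False, True, True, False, True, True, False],
    "site-b.example": [False] * 11,  # affected by none of the 11
    "site-c.example": [True, True, False, False, True, False, False, True, False, False, True],
}

hits_per_site = {site: sum(flags) for site, flags in impact_flags.items()}
distribution = Counter(hits_per_site.values())  # {updates_affected: n_sites}

print(distribution)
print(sum(1 for n in hits_per_site.values() if n == 0))  # sites hit by no update
```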
The second point you quickly notice is that Core updates are not only about YMYL, or only about medical verticals
These are the most affected sites in the Mozcast dataset, by the 11 core updates in aggregate. You’ll notice, I think, that this is not a list of medical sites.
This sort of analysis of single updates can really miss the whole story.
Even a bluff is short term
This is the same CNBC interview from earlier - Google engineers testing against searcher satisfaction metrics. The key bit is still *testing*: even a specific, announced change is subject to that back and forth.
A medic would easily take out a Panda
If you’re interested in some of the share of voice charts I’ve been sharing today, come down to the STAT booth and talk to myself or Duncan about it later