Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com
SEO in a Two Algorithm World
Get the presentation: bit.ly/twoalgo
State of Search
November 16th, 2015 8:00am
Dallas, TX
Remember
When…
We Had One Job
Perfectly Optimized Pages
The Search Quality
Teams Determined
What to Include in
the Ranking System
They decided
links > content
By 2007, Link Spam Was Ubiquitous
This paper/presentation
from Yahoo’s spam team in
2007 predicted a lot of what
Google would launch in
Penguin Oct, 2012
(including machine learning)
Even in 2012, It Felt Like Google Was Making Liars Out
of the White Hat SEO World
Via Wil Reynolds
Google’s Last 3 Years of
Advancements Erased a
Decade of Old School SEO
Practices
They Finally Launched Effective Algorithms to Fight
Manipulative Links & Content
Via Google
And They Leveraged Fear + Uncertainty of
Penalization to Keep Sites in Line
Via Moz Q+A
Google Figured Out Intent
Rand probably
doesn’t just want
webpages filled
with the word
“beef”
They Looked at Language, not Just Keywords
Oh… I totally
know this one!
They Predicted When We Want Diverse Results
He probably
doesn’t just
want a bunch of
lists.
They Figured Out When We Wanted Freshness
Old pages on this
topic probably
aren’t relevant
anymore
Their Segmentation of Navigational from Informational
Queries Closed Many Loopholes
Google Learned to ID Entities of Knowledge
And to Connect Entities to Topics & Keywords
Via Moz
Brands Became a Form of Entities
These Advancements Brought Google (mostly)
Back in Line w/ Its Public Statements
Via Google
During These Advances,
Google’s Search Quality
Team Underwent a
Revolution
Early On, Google Rejected Machine Learning in the
Organic Ranking Algo
Via Datawocky,
2008
Amit Singhal Shared Norvig’s Concerns About ML
Via Quora
In 2012, Google Published a Paper About How They Use ML to Predict Ad CTRs:
Via Google
2012
“Our SmartASS system is a
machine learning system. It
learns whether our users
are interested in that ad,
and whether users are going
to click on them.”
By 2013, It Was
Something Google’s
Search Folks Talked
About Publicly
Via SELand
As ML Takes Over More of Google’s Algo, the
Underpinnings of the Rankings Change
Via Colossal
Google is Public About How They Use ML in Image
Recognition & Classification
Potential ID Factors (e.g. color, shapes, gradients, perspective, interlacing, alt tags, surrounding text, etc.)
Training Data (i.e. human-labeled images)
Learning Process
Best Match Algo
Google is Public About How They Use ML in Image
Recognition & Classification
Via Jeff Dean’s Slides on Deep Learning; a Must Read for SEOs
Machine Learning in Search Could Work Like This:
Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.)
Training Data (i.e. good & bad search results)
Learning Process
Best Fit Algo
Training Data (e.g. good search results)
This is a good SERP – searchers rarely bounce, rarely short-click, and rarely need to enter other queries or go to page 2.
Training Data (e.g. bad search results!)
This is a bad SERP – searchers bounce often, click other results, rarely long-click, and try other queries. They’re definitely not happy.
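To make the idea concrete, here is a minimal sketch (mine, not Google’s) of how good/bad training labels could be assigned from the engagement signals just described. The metric names and thresholds are invented purely for illustration.

```python
# Illustrative sketch only: label a SERP as a "good" or "bad" training example
# from aggregate engagement signals. Thresholds are hypothetical, not Google's.

def label_serp(bounce_rate, short_click_rate, requery_rate, page2_rate):
    """Return 'good' if searchers rarely bounce, short-click, requery,
    or continue to page 2; otherwise 'bad'."""
    dissatisfaction = (bounce_rate + short_click_rate +
                       requery_rate + page2_rate) / 4.0
    return "good" if dissatisfaction < 0.25 else "bad"

# A SERP where searchers rarely bounce or short-click
print(label_serp(bounce_rate=0.10, short_click_rate=0.08,
                 requery_rate=0.12, page2_rate=0.05))   # -> "good"

# A SERP where searchers bounce often and try other queries
print(label_serp(bounce_rate=0.55, short_click_rate=0.60,
                 requery_rate=0.45, page2_rate=0.30))   # -> "bad"
```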
The Machines Learn to Emulate the Good Results & Try to Fix or Tweak the Bad Results
Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.)
Training Data (i.e. good & bad search results)
Learning Process
Best Fit Algo
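And here is a rough sketch of the “learning process” box itself, using an off-the-shelf classifier. The factor values, the labels, and the choice of logistic regression are illustrative assumptions, not a description of Google’s actual system.

```python
# A minimal sketch (not Google's system): fit a model that weights candidate
# ranking factors so it reproduces the good/bad labels on training SERPs.
# Factor names and numbers are made up for illustration.

from sklearn.linear_model import LogisticRegression

# Each row: [pagerank, tf_idf_relevance, topic_model_score, qdf_freshness, click_signal]
X = [
    [0.80, 0.70, 0.65, 0.20, 0.75],   # result from a "good" SERP
    [0.60, 0.90, 0.80, 0.10, 0.85],   # result from a "good" SERP
    [0.90, 0.20, 0.10, 0.05, 0.15],   # result from a "bad" SERP
    [0.30, 0.25, 0.30, 0.90, 0.10],   # result from a "bad" SERP
]
y = [1, 1, 0, 0]                      # 1 = good training label, 0 = bad

model = LogisticRegression().fit(X, y)

# The learned weights are the "best fit algo": how much each candidate factor
# matters is inferred from the training data rather than hand-tuned.
print(dict(zip(
    ["pagerank", "tf_idf", "topic_model", "qdf", "clicks"],
    model.coef_[0].round(2),
)))
```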
Deep Learning is Even More Advanced:
Dean says that with deep learning, they don’t have to tell the system what a cat is; the machines learn it, unsupervised, for themselves…
We’re Talking About
Algorithms that Build
Algorithms
(without human
intervention)
Googlers Don’t Feed in Ranking Factors… The Machines
Determine Those Themselves.
Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.)
Training Data (i.e. good search results)
Learning Process
Best Fit Algo
No wonder these guys are stressed about Google
unleashing the Terminators
Via CNET & Washington Post
What Does Deep Learning
Mean for SEO?
Googlers Won’t Know Why Something Ranks or
Whether a Variable’s in the Algo
He means other Googlers.
I’m Jeff Dean. I’ll know.
The Query Success Metrics Will Be All That Matters to the Machines
Long-to-Short Click Ratio
Relative CTR vs. Other Results
Rate of Searchers Conducting Additional, Related Searches
Metrics of User Engagement on the Page
Metrics of User Engagement Across the Domain
Sharing/Amplification Rate vs. Other Results
The Query Success Metrics Will Be All That Matters to the Machines
Long-to-Short Click Ratio
Relative CTR vs. Other Results
Rate of Searchers Conducting Additional, Related Searches
Metrics of User Engagement on the Page
Metrics of User Engagement Across the Domain
Sharing/Amplification Rate vs. Other Results
If lots of results on a SERP
do these well, and higher
results outperform lower
results, our deep learning
algo will consider it a
success.
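For illustration only, here is how a few of these query-success metrics could be computed from a simplified click log. The field names and the 30-second long-click threshold are assumptions, not known Google values.

```python
# Hedged sketch: compute long/short-click ratio and requery rate per URL
# from a toy click log. All names, values, and thresholds are hypothetical.

from collections import defaultdict

clicks = [
    # (query, clicked_url, dwell_seconds, led_to_another_search)
    ("best ramen", "ramenrater.com", 180, False),
    ("best ramen", "listicle.example.com", 8, True),
    ("best ramen", "ramenrater.com", 240, False),
    ("best ramen", "listicle.example.com", 12, True),
]

LONG_CLICK_SECONDS = 30  # assumed cutoff between a short click and a long click
stats = defaultdict(lambda: {"long": 0, "short": 0, "requery": 0, "total": 0})

for query, url, dwell, requeried in clicks:
    s = stats[url]
    s["total"] += 1
    s["long" if dwell >= LONG_CLICK_SECONDS else "short"] += 1
    s["requery"] += int(requeried)

for url, s in stats.items():
    long_short_ratio = s["long"] / max(s["short"], 1)
    requery_rate = s["requery"] / s["total"]
    print(f"{url}: long/short={long_short_ratio:.1f}, requery rate={requery_rate:.0%}")
```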
We’ll Be Optimizing Less
for Ranking Inputs
Unique Linking Domains
Keywords in Title
Anchor Text
Content Uniqueness
Page Load Speed
And Optimizing More for Searcher Outputs
High CTR for this position?
Good engagement?
High amplification rate?
Low bounce rate?
Strong pages/visit after landing on this URL?
People return to the site after an initial search visit?
These are likely to be the criteria of on-site SEO’s future…
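A hypothetical “searcher outputs” scorecard that pulls the questions above into a single checklist. Every metric name and threshold here is an assumed example, not an actual Google criterion.

```python
# Sketch of a per-URL "searcher outputs" scorecard based on the questions on
# this slide. Inputs and thresholds are placeholder assumptions.

def searcher_output_scorecard(url_metrics, position_avg_ctr):
    return {
        "CTR above position average":   url_metrics["ctr"] > position_avg_ctr,
        "Low bounce rate":              url_metrics["bounce_rate"] < 0.40,
        "Strong pages/visit":           url_metrics["pages_per_visit"] > 2.0,
        "Visitors return later":        url_metrics["return_visit_rate"] > 0.15,
        "Amplification (shares/visit)": url_metrics["shares_per_visit"] > 0.01,
    }

example = {
    "ctr": 0.12, "bounce_rate": 0.35, "pages_per_visit": 2.6,
    "return_visit_rate": 0.22, "shares_per_visit": 0.02,
}
for check, passed in searcher_output_scorecard(example, position_avg_ctr=0.09).items():
    print(f"{'PASS' if passed else 'MISS'}  {check}")
```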
OK… Maybe in the future. But,
do those kinds of metrics really
affect SEO today?
Remember Our Queries & Clicks Test from 2014?
Via Rand’s Blog
Since then, it’s been much harder to move the
needle with raw queries and clicks…
Case closed! Google says they don’t use clicks in the rankings.
Via Linkarati’s Coverage of SMX Advanced
But, what if we tried long
clicks vs.
short clicks?
Note SeriousEats,
ranking #4 here
11:39am on June 21st,
I sent this tweet:
40 Minutes & ~400
Interactions Later
Moved up 2 positions after 2+
weeks of the top 5 staying
static.
70 Minutes & ~500
Interactions Total
Moved up to #1.
It stayed there ~12 hours, then fell to #13+ for ~8 hours before returning to #4.
Google? You
messing with us?
Via Google Trends, we can see the relative impact
of the test on query volume
~5-10X normal volume
over 3-4 hours
BTW – This is hard to replicate.
600+ real searchers using a
variety of devices, browsers,
accounts, geos, etc. will not look
the same to Google as a Fiverr
buy, a clickfarm, or a bot. And
note how G penalized the page
after the test… They might not put
it back if they thought the site
itself was to blame for the click
manipulation.
OK… Maybe in the future. But,
do those kinds of metrics really
affect SEO today?
Via Bloomberg Business
The Future:
Optimizing for Two
Algorithms
The Best SEOs Have Always
Optimized to Where Google’s Going
Today, I Think We Know,
Better Than Ever, Where That Is
Welcome to your new home, the User/Usage Signals + ML Model Cabin
We Must Choose How to Balance Our Work…
Hammering on the Fading Signals of Old…
Or Embracing Those We
Can See On the Rise
Classic SEO (ranking inputs): Keyword Targeting, Links & Anchor Text, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device
New SEO (searcher outputs): Relative CTR, Short vs. Long-Click, Content Gap Fulfillment, Task Completion Success, Amplification & Loyalty, Branded Search & Traffic
5 New(ish) Elements of
Modern SEO
#1: Punching Above Your Ranking’s Average CTR
Optimizing the Title, Meta Description, & URL
a Little for KWs, but a Lot for Clicks
If you rank #3, but have a higher-
than-average CTR for that
position, you might get moved up.
Via Philip Petrescu on Moz
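A quick sketch of the “punching above your position’s average CTR” check. The benchmark CTR-by-position figures below are placeholders for illustration, not the numbers from the Petrescu study.

```python
# Illustrative check: does a result's CTR beat the average CTR for its ranking
# position? Benchmark figures are assumed placeholders.

AVG_CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def punches_above_average(position, impressions, clicks):
    ctr = clicks / impressions
    benchmark = AVG_CTR_BY_POSITION.get(position)
    if benchmark is None:
        return None  # no benchmark for this position in the placeholder table
    return ctr > benchmark

# e.g. ranking #3 with a 14% CTR vs. a ~10% positional average
print(punches_above_average(position=3, impressions=5000, clicks=700))  # True
```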
Every Element Counts
Does the title match
what searchers want?
Does the URL seem
compelling?
Do searchers
recognize & want to
click your domain?
Is your result fresh?
Do searchers want a
newer result?
Does the description
create curiosity &
entice a click?
Do you get the
brand dropdown?
Given Google Often Tests New Results Briefly on Page One…
It May Be Worth Repeated Publication on a Topic to Earn that High CTR
Shoot! My post only made it to #15…
Perhaps I’ll try again in a few
months.
Driving Up CTR Through Branding Or Branded
Searches May Give An Extra Boost
#1 Ad Spender
#2 Ad Spender
#4 Ad Spender
#3 Ad Spender
#5 Ad Spender
With Google
Trends’ new, more
accurate, more
customizable
ranges, you can
actually watch the
effects of events
and ads on search
query volume
Fitbit has been running ads on
Sunday NFL games that clearly
show in the search trends data.
#2: Beating Out Your Fellow SERP Residents on Engagement
Together, Pogo-Sticking & Long Clicks Might
Determine a Lot of Where You Rank (and for how
long)
Via Bill Slawski on Moz
What Influences Them?
An SEO’s Checklist for Better Engagement:
Speed, Speed, and More Speed
Delivers the Best UX on Every Browser
Compels Visitors to Go Deeper Into Your Site
Avoids Features that Annoy or Dissuade Visitors
Content that Fulfills the Searcher’s Conscious & Unconscious Needs
Via NY Times
e.g. this interactive
graph that asks visitors
to draw their best
guess likely gets
remarkable
engagement
e.g. Poor Norbert
does a terrible job
at SEO, but the
simplicity compels
visitors to go
deeper and to
return time and
again
Via VoilaNorbert
e.g. Nomadlist’s
superb, filterable
database of cities and
community for remote
workers.
Via Nomadlist
#3: Filling Gaps in Your Visitors’ Knowledge
Google’s looking for
content signals that a
page will fulfill ALL of
a searcher’s needs.
I think I know a
few ways to
figure that out.
ML models may note
that the presence of
certain words,
phrases, & topics
predict more
successful searches
e.g. a page about New York that doesn’t
mention Brooklyn or Long Island may
not be very comprehensive
If Your Content Doesn’t Fill the Gaps in Searchers’ Needs…
e.g. for this query, Google
might seek content that
includes topics like “text
classification,”
“tokenization,” “parsing,”
and “question answering”
Those Rankings Go to Pages/Sites That Do.
Moz’s Data Science Team
is Working on Something to
Help With This
The (alpha) tool extracts likely focal topics from a given page, which can then be compared vs. an engine’s top 10 results
In the meantime, check out AlchemyAPI or MonkeyLearn
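This is neither the Moz alpha tool nor AlchemyAPI/MonkeyLearn, just a self-contained sketch of the topic-gap idea: extract the prominent terms across top-ranking pages and flag the ones your page never mentions. The example texts are made up, and the prominence threshold is arbitrary.

```python
# Rough topic-gap sketch: find terms that are prominent across competitors'
# pages but missing from yours. Texts and the 0.15 threshold are assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer

top_ranking_pages = [
    "new york city guide covering manhattan brooklyn queens and long island",
    "things to do in new york from brooklyn bridge walks to long island beaches",
    "new york boroughs explained: manhattan, brooklyn, queens, the bronx",
]
my_page = "new york is a big city with many restaurants and hotels"

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(top_ranking_pages)

# Average TF-IDF weight of each term across the competing pages
terms = vectorizer.get_feature_names_out()
avg_weight = tfidf.mean(axis=0).A1
prominent = {t for t, w in zip(terms, avg_weight) if w > 0.15}

missing = sorted(t for t in prominent if t not in my_page.split())
print("Topics the top results cover that this page doesn't:", missing)
# e.g. a New York page that never mentions Brooklyn or Long Island gets flagged
```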
#4: Fulfilling the Searcher’s Task (not just their query)
Google Wants to Get Searchers Accomplishing Their Tasks Faster
(Diagram of a typical search journey: broad search, narrower search, even narrower search, website visits, a brand search, social validation, a highly-specific search, a type-in/direct visit, and finally completion of the task)
This is Their Ultimate Goal:
(Diagram: broad search → completion of task, collapsing all the sites or answers you probably would have visited/sought along that path)
If Google sees
that many
people who
perform these
types of
queries:
Eventually end
their queries on
the topic after
visiting Ramen
Rater…
The Ramen Rater
They might use the
clickstream data to
help rank that site
higher, even if it
doesn’t have
traditional ranking
signals
They’re definitely getting and storing it.
A Page That Answers the Searcher’s Initial Query
May Not Be Enough
Searchers performing this
query are likely to have the
goal of completing a
transaction
Google Wants to Send Searchers
to Websites that Resolve their
Mission
This is the only site
where you can reliably
find the back issues
and collector covers
#5: Earning More Shares, Links, & Loyalty per Visit
Pages that get lots of
social activity &
engagement, but few
links, seem to
overperform…
Google says they
don’t use social
signals directly, but
examples like these
make SEOs
suspicious
Even for insanely competitive
keywords, we see this type of
behavior when a URL gets
authentically “hot” in the
social world.
Data from Buzzsumo & Moz
show that very few articles
earn shares AND that links &
shares have almost no
correlation.
Via Buzzsumo & Moz
I suspect Google doesn’t
use raw social shares as
a ranking input, because
we share a lot of content
with which we don’t
engage:
Via Chartbeat
Google Could Be Using a Lot of Other Metrics/Sources to Get
Data That Mimics Social Shares:
Clickstream (from Chrome/Android)
Engagement (from Chrome/Android)
Branded Queries (from Search)
Navigational Queries (from Search)
Rate of Link Growth (from Crawl)
But I Don’t Care if It’s Correlation or Causation;
I Want to Rank Like These Guys!
BTW – Google Almost Certainly Classifies SERPs
Differently & Optimizes to Different Goals
These URLs have loads of shares & may have high
loyalty, but for medical queries, Google has different
priorities
Knowing What Makes Our Audience (and their
influencers) Share is Essential
From an analysis of
the 10,000 pieces of
content receiving the
most social shares on
the web by
Buzzsumo.
Knowing What Makes them Return (or prevents
them from doing so) Is, Too.
We Don’t Need “Better” Content… We Need “10X” Content.
Via Whiteboard Friday
Wrong Question:
“How do we make something as
good as this?”
Right Question:
“How do we make something 10X
better than any of these?”
10X Content is the Future, Because It’s the Only Way to Stand
Out from the Increasingly-Noisy Crowd
http://www.simplereach.com/blog/facebook-continues-to-be-the-biggest-driver-of-social-traffic/
The top 10% of content
gets all the social shares
and traffic.
Old School On-Site: Keyword Targeting, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device
Old School Off-Site: Link Diversity, Anchor Text, Brand Mentions, 3rd Party Reviews, Reputation Management
None of our old school tactics will get this done.
We Have to Go From This:
Wikipedia on Vince Carter (currently ranking #10 for “Vince Carter Dunks”)
ToThis:
Via ESPN
I’ve Been Curating a List of “10X” Content Over the Last
8 months… It’s All Yours:
bit.ly/10Xcontent
FYI that’s a capital “X”
Welcome to the
Two-Algorithm World of
2015
Algo 1: Google
Algo 2: Subset of Humanity
that Interacts With Your
Content
“Make Pages for People, Not
Engines.”
Terrible Advice.
Engines: Keyword Targeting, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device
People: Relative CTR, Short vs. Long-Click, Content Gap Fulfillment, Amplify & Return Rates, Task Completion Success
Optimize for Both:
Algo Input & Human Output
Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com
bit.ly/twoalgo
