SEO in a Two Algorithm World

  1. Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com SEO in a Two Algorithm World
  2. bit.ly/twoalgo Get the presentation:
  3. Remember When…
  4. We Had One Job
  5. Perfectly Optimized Pages
  6. The Search Quality Teams Determined What to Include in the Ranking System
  7. They decided links > content
  8. By 2007, Link Spam Was Ubiquitous. This paper/presentation from Yahoo’s spam team in 2007 predicted a lot of what Google would launch in Penguin (Oct. 2012), including machine learning
  9. Even in 2012, It Felt Like Google Was Making Liars Out of the White Hat SEO World Via Wil Reynolds
  10. Google’s Last 3 Years of Advancements Erased a Decade of Old School SEO Practices
  11. They Finally Launched Effective Algorithms to Fight Manipulative Links & Content Via Google
  12. And They Leveraged Fear + Uncertainty of Penalization to Keep Sites in Line Via Moz Q+A
  13. Google Figured Out Intent Rand probably doesn’t just want webpages filled with the word “beef”
  14. They Looked at Language, not Just Keywords Oh… I totally know this one!
  15. They Predicted When We Want Diverse Results He probably doesn’t just want a bunch of lists.
  16. They Figured Out When We Wanted Freshness Old pages on this topic probably aren’t relevant anymore
  17. Their Segmentation of Navigational from Informational Queries Closed Many Loopholes
  18. Google Learned to ID Entities of Knowledge
  19. And to Connect Entities to Topics & Keywords Via Moz
  20. Brands Became a Form of Entities
  21. These Advancements Brought Google (mostly) Back in Line w/ Its Public Statements Via Google
  22. During These Advances, Google’s Search Quality Team Underwent a Revolution
  23. Early On, Google Rejected Machine Learning in the Organic Ranking Algo Via Datawocky, 2008
  24. Amit Singhal Shared Norvig’s Concerns About ML Via Quora
  25. In 2012, Google Published a Paper About How They Use ML to Predict Ad CTRs: Via Google
  26. 2012 “Our SmartASS system is a machine learning system. It learns whether our users are interested in that ad, and whether users are going to click on them.”
  27. By 2013, It Was Something Google’s Search Folks Talked About Publicly Via SELand
  28. As ML Takes Over More of Google’s Algo, the Underpinnings of the Rankings Change Via Colossal
  29. Google is Public About How They Use ML in Image Recognition & Classification: Potential ID Factors (e.g. color, shapes, gradients, perspective, interlacing, alt tags, surrounding text, etc.) → Training Data (i.e. human-labeled images) → Learning Process → Best Match Algo
  30. Google is Public About How They Use ML in Image Recognition & Classification. Via Jeff Dean’s Slides on Deep Learning; a Must Read for SEOs
  31. Machine Learning in Search Could Work Like This: Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.) → Training Data (i.e. good & bad search results) → Learning Process → Best Fit Algo
  32. Training Data (e.g. good search results): This is a good SERP – searchers rarely bounce, rarely short-click, and rarely need to enter other queries or go to page 2.
  33. Training Data (e.g. bad search results!): This is a bad SERP – searchers bounce often, click other results, rarely long-click, and try other queries. They’re definitely not happy.
  34. The Machines Learn to Emulate the Good Results & Try to Fix or Tweak the Bad Results: Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.) → Training Data (i.e. good & bad search results) → Learning Process → Best Fit Algo
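The factors → training data → learning process pipeline described in these slides can be sketched as a toy supervised learner. Everything here is illustrative: the factor names, feature values, and labels are invented stand-ins, not Google’s actual signals or model.

```python
# Toy sketch of the "learning process" box: fit weights over hypothetical
# ranking factors using human-labeled good/bad results as training data.
# All factor names, values, and labels below are invented for illustration.

# Each row: [pagerank, tf_idf, click_rate, entity_match]; label 1=good, 0=bad
training = [
    ([0.9, 0.8, 0.7, 1.0], 1),
    ([0.8, 0.9, 0.6, 1.0], 1),
    ([0.2, 0.3, 0.1, 0.0], 0),
    ([0.1, 0.4, 0.2, 0.0], 0),
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
for _ in range(50):  # simple perceptron updates until the labels separate
    for features, label in training:
        predicted = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0
        error = label - predicted
        weights = [w + 0.1 * error * f for w, f in zip(weights, features)]
        bias += 0.1 * error

def score(features):
    # The learned weights play the role of the "best fit algo"
    return sum(w * f for w, f in zip(weights, features)) + bias
```

A positive score means the model groups a result with the human-labeled “good” examples; no ranking factor weights were hand-fed in, they emerged from the labeled data.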
  35. Deep Learning is Even More Advanced: Dean says by using deep learning, they don’t have to tell the system what a cat is; the machines learn, unsupervised, for themselves…
  36. We’re Talking About Algorithms that Build Algorithms (without human intervention)
  37. Googlers Don’t Feed in Ranking Factors… The Machines Determine Those Themselves. Potential Ranking Factors (e.g. PageRank, TF*IDF, Topic Modeling, QDF, Clicks, Entity Association, etc.) → Training Data (i.e. good search results) → Learning Process → Best Fit Algo
  38. No wonder these guys are stressed about Google unleashing the Terminators  Via CNET & Washington Post
  39. What Does Deep Learning Mean for SEO?
  40. Googlers Won’t Know Why Something Ranks or Whether a Variable’s in the Algo. He means other Googlers. I’m Jeff Dean. I’ll know.
  41. The Query Success Metrics Will Be All That Matters to the Machines: Long-to-Short Click Ratio; Relative CTR vs. Other Results; Rate of Searchers Conducting Additional, Related Searches; Metrics of User Engagement on the Page; Metrics of User Engagement Across the Domain; Sharing/Amplification Rate vs. Other Results
  42. The Query Success Metrics Will Be All That Matters to the Machines: Long-to-Short Click Ratio; Relative CTR vs. Other Results; Rate of Searchers Conducting Additional, Related Searches; Metrics of User Engagement on the Page; Metrics of User Engagement Across the Domain; Sharing/Amplification Rate vs. Other Results. If lots of results on a SERP do these well, and higher results outperform lower results, our deep learning algo will consider it a success.
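As a rough illustration, two of these query success metrics could be computed from a click log like the sketch below. The log format, the dwell times, and the 30-second long-click threshold are all assumptions for illustration, not known Google values.

```python
# Hypothetical click log for one SERP; dwell times and the long-click
# threshold are invented for illustration only.
clicks = [
    {"result": "A", "dwell_seconds": 95},   # long click: searcher stayed
    {"result": "A", "dwell_seconds": 120},
    {"result": "A", "dwell_seconds": 8},    # short click: quick bounce back
    {"result": "B", "dwell_seconds": 5},
    {"result": "B", "dwell_seconds": 12},
]
impressions = {"A": 1000, "B": 1000}
LONG_CLICK_SECONDS = 30  # assumed threshold

def long_to_short_ratio(result):
    longs = sum(1 for c in clicks
                if c["result"] == result and c["dwell_seconds"] >= LONG_CLICK_SECONDS)
    shorts = sum(1 for c in clicks
                 if c["result"] == result and c["dwell_seconds"] < LONG_CLICK_SECONDS)
    return longs / shorts if shorts else float("inf")

def ctr(result):
    # Relative CTR: compare each result's CTR against the others on the SERP
    return sum(1 for c in clicks if c["result"] == result) / impressions[result]
```

In this toy log, result A earns two long clicks for every short click while B earns none, exactly the kind of contrast a success-driven learner would reward.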
  43. We’ll Be Optimizing Less for Ranking Inputs: Unique Linking Domains, Keywords in Title, Anchor Text, Content Uniqueness, Page Load Speed
  44. And Optimizing More for Searcher Outputs: High CTR for this position? Good engagement? High amplification rate? Low bounce rate? Strong pages/visit after landing on this URL? People return to the site after an initial search visit? These are likely to be the criteria of on-site SEO’s future…
  45. OK… Maybe in the future. But, do those kinds of metrics really affect SEO today?
  46. Remember Our Queries & Clicks Test from 2014? Via Rand’s Blog
  47. Since then, it’s been much harder to move the needle with raw queries and clicks…
  48. Case closed! Google says they don’t use clicks in the rankings. Via Linkarati’s Coverage of SMX Advanced
  49. But, what if we tried long clicks vs. short clicks? Note SeriousEats, ranking #4 here
  50. 11:39am on June 21st, I sent this tweet:
  51. 40 Minutes & ~400 Interactions Later Moved up 2 positions after 2+ weeks of the top 5 staying static.
  52. 70 Minutes & ~500 Interactions Total Moved up to #1.
  53. Stayed ~12 hours, then fell to #13+ for ~8 hours, then back to #4. Google? You messing with us?
  54. Via Google Trends, we can see the relative impact of the test on query volume ~5-10X normal volume over 3-4 hours
  55. BTW – This is hard to replicate. 600+ real searchers using a variety of devices, browsers, accounts, geos, etc. will not look the same to Google as a Fiverr buy, a click farm, or a bot. And note how G penalized the page after the test… They might not put it back if they thought the site itself was to blame for the click manipulation.
  56. The Future: Optimizing for Two Algorithms
  57. The Best SEOs Have Always Optimized to Where Google’s Going
  58. Today, I Think We Know, Better Than Ever, Where That Is. Welcome to your new home, the User/Usage Signals + ML Model Cabin
  59. We Must Choose How to Balance Our Work…
  60. Hammering on the Fading Signals of Old…
  61. Or Embracing Those We Can See On the Rise
  62. Classic On-Site SEO (ranking inputs): Keyword Targeting, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device. New On-Site SEO (searcher outputs): Relative CTR, Short vs. Long-Click, Content Gap Fulfillment, Amplification & Loyalty, Task Completion Success
  63. 5 New(ish) Elements of Modern SEO
  64. #1: Punching Above Your Ranking’s Average CTR
  65. Optimizing the Title, Meta Description, & URL a Little for KWs, but a Lot for Clicks. If you rank #3, but have a higher-than-average CTR for that position, you might get moved up. Via Philip Petrescu on Moz
  66. Every Element Counts: Does the title match what searchers want? Does the URL seem compelling? Do searchers recognize & want to click your domain? Is your result fresh? Do searchers want a newer result? Does the description create curiosity & entice a click? Do you get the brand dropdown?
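The “punch above your position’s average CTR” idea can be sketched in a few lines. The per-position average CTRs below are invented placeholder numbers, not a published benchmark; real positional baselines vary widely by query type.

```python
# Invented per-position average CTRs; real positional baselines vary by query.
avg_ctr_by_position = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def outperforms(position, impressions, clicks):
    # True if this result's observed CTR beats the average for its position,
    # the hypothetical signal that could nudge it up the rankings
    return clicks / impressions > avg_ctr_by_position[position]
```

For example, a #3 result earning 140 clicks on 1,000 impressions (14% CTR) beats the invented 10% positional average, while 80 clicks would not.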
  67. Given Google Often Tests New Results Briefly on Page One… It May Be Worth Repeated Publication on a Topic to Earn that High CTR. Shoot! My post only made it to #15… Perhaps I’ll try again in a few months.
  68. Driving Up CTR Through Branding Or Branded Searches May Give An Extra Boost
  69. #1 Ad Spender #2 Ad Spender #4 Ad Spender #3 Ad Spender #5 Ad Spender
  70. With Google Trends’ new, more accurate, more customizable ranges, you can actually watch the effects of events and ads on search query volume. Fitbit has been running ads on Sunday NFL games that clearly show in the search trends data.
  71. #2: Beating Out Your Fellow SERP Residents on Engagement
  72. Together, Pogo-Sticking & Long Clicks Might Determine a Lot of Where You Rank (and for how long) Via Bill Slawski on Moz
  73. What Influences Them?
  74. An SEO’s Checklist for Better Engagement: Speed, Speed, and More Speed; Delivers the Best UX on Every Browser; Compels Visitors to Go Deeper Into Your Site; Avoids Features that Annoy or Dissuade Visitors; Content that Fulfills the Searcher’s Conscious & Unconscious Needs
  75. Via NY Times e.g. this interactive graph that asks visitors to draw their best guess likely gets remarkable engagement
  76. e.g. Poor Norbert does a terrible job at SEO, but the simplicity compels visitors to go deeper and to return time and again Via VoilaNorbert
  77. e.g. Nomadlist’s superb, filterable database of cities and community for remote workers. Via Nomadlist
  78. #3: Filling Gaps in Your Visitors’ Knowledge
  79. Google’s looking for content signals that a page will fulfill ALL of a searcher’s needs. I think I know a few ways to figure that out.
  80. ML models may note that the presence of certain words, phrases, & topics predict more successful searches
  81. e.g. a page about New York that doesn’t mention Brooklyn or Long Island may not be very comprehensive
  82. If Your Content Doesn’t Fill the Gaps in Searchers’ Needs… Those Rankings Go to Pages/Sites That Do. e.g. for this query, Google might seek content that includes topics like “text classification,” “tokenization,” “parsing,” and “question answering”
  83. Moz’s Data Science Team is Working on Something to Help With This. The (alpha) tool extracts likely focal topics from a given page, which can then be compared vs. an engine’s top 10 results
  84. In the meantime, check out AlchemyAPI or MonkeyLearn
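A bare-bones version of this kind of topic-gap check can be sketched as follows. The topic list and page text are invented examples, and real tools (like the ones named above) use far richer topic models than plain substring matching.

```python
# Which expected topic terms are missing from a page? Terms and page text are
# invented examples; real topic extraction goes well beyond substring matching.
expected_topics = {"text classification", "tokenization", "parsing",
                   "question answering"}

my_page = """Natural language processing lets machines read text.
We cover tokenization and parsing in depth."""

covered = {t for t in expected_topics if t in my_page.lower()}
missing = expected_topics - covered  # candidate gaps to fill for comprehensiveness
```

Here the page covers tokenization and parsing but misses text classification and question answering, so those become the content gaps to fill.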
  85. #4: Earning More Shares, Links, & Loyalty per Visit
  86. Pages that get lots of social activity & engagement, but few links, seem to overperform…
  87. Google says they don’t use social signals directly, but examples like these make SEOs suspicious
  88. Even for insanely competitive keywords, we see this type of behavior when a URL gets authentically “hot” in the social world.
  89. Data from Buzzsumo & Moz show that very few articles earn shares AND that links & shares have almost no correlation. Via Buzzsumo & Moz
  90. I suspect Google doesn’t use raw social shares as a ranking input, because we share a lot of content with which we don’t engage: Via Chartbeat
  91. Google Could Be Using a Lot of Other Metrics/Sources to Get Data That Mimics Social Shares: Clickstream (from Chrome/Android) Engagement (from Chrome/Android) Branded Queries (from Search) Navigational Queries (from Search) Rate of Link Growth (from Crawl)
  92. But I Don’t Care if It’s Correlation or Causation; I Want to Rank Like These Guys!
  93. BTW – Google Almost Certainly Classifies SERPs Differently & Optimizes to Different Goals. These URLs have loads of shares & may have high loyalty, but for medical queries, Google has different priorities
  94. Raw Shares & Links Are Fine Metrics… Via Buzzsumo
  95. But If the Competition Naturally Earns Them Faster, You’re Outta Luck 4 new shares/day 2 new shares/day 3 new shares/day 10 new shares/day
  96. And Google Probably Wants to See Shares that Result in Loyalty & Returning Visits
  97. New KPI #1: Shares & Links Per 1,000 Visits = (Shares + Links) ÷ Unique Visits × 1,000. Via Moz’s 1Metric
  98. New KPI #2: Return Visitor Ratio Over Time = # of Returning Visitors ÷ Total Visitor Sessions
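Both KPIs reduce to simple arithmetic; a minimal sketch, with invented traffic numbers in the usage lines:

```python
def shares_links_per_1000_visits(shares, links, unique_visits):
    # KPI #1: shares + links earned per 1,000 unique visits
    return (shares + links) / unique_visits * 1000

def return_visitor_ratio(returning_visitors, total_visitors):
    # KPI #2: the share of visitors who come back
    return returning_visitors / total_visitors

kpi1 = shares_links_per_1000_visits(45, 5, 10000)  # invented example numbers
kpi2 = return_visitor_ratio(300, 1200)
```

With these example numbers: 50 shares+links over 10,000 visits gives 5.0 per 1,000 visits, and 300 of 1,200 visitors returning gives a 0.25 ratio.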
  99. Knowing What Makes Our Audience (and their influencers) Share is Essential. From an analysis of the 10,000 pieces of content receiving the most social shares on the web by Buzzsumo.
  100. Knowing What Makes them Return (or prevents them from doing so) Is, Too.
  101. We Don’t Need “Better” Content… We Need “10X” Content. Via Whiteboard Friday Wrong Question: “How do we make something as good as this?” Right Question: “How do we make something 10X better than any of these?”
  102. 10X Content is the Future, Because It’s the Only Way to Stand Out from the Increasingly-Noisy Crowd http://www.simplereach.com/blog/facebook-continues-to-be-the-biggest-driver-of-social-traffic/ The top 10% of content gets all the social shares and traffic.
  103. Old School On-Site: Keyword Targeting, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device. Old School Off-Site: Link Diversity, Anchor Text, Brand Mentions, 3rd Party Reviews, Reputation Management. None of our old school tactics will get this done.
  104. #5: Fulfilling the Searcher’s Task (not just their query)
  105. Google Wants to Get Searchers Accomplishing Their Tasks Faster: Broad search → Narrower search → Even narrower search → Website visit → Website visit → Brand search → Social validation → Highly-specific search → Type-in/direct visit → Completion of Task
  106. This is Their Ultimate Goal: Broad search → [all the sites (or answers) you probably would have visited/sought along that path] → Completion of Task
  107. If Google sees that many people who perform these types of queries:
  108. Eventually end their queries on the topic after visiting Ramen Rater… The Ramen Rater
  109. They might use the clickstream data to help rank that site higher, even if it doesn’t have traditional ranking signals
  110. They’re definitely getting and storing it.
  111. A Page That Answers the Searcher’s Initial Query May Not Be Enough. Searchers performing this query are likely to have the goal of completing a transaction
  112. Google Wants to Send Searchers to Websites that Resolve their Mission This is the only site where you can reliably find the back issues and collector covers
  113. Welcome to the Two-Algorithm World of 2015
  114. Algo 1: Google
  115. Algo 2: Subset of Humanity that Interacts With Your Content
  116. “Make Pages for People, Not Engines.”
  117. Terrible Advice.
  118. Engines: Keyword Targeting, Quality & Uniqueness, Crawl/Bot Friendly, Snippet Optimization, UX / Multi-Device. People: Relative CTR, Short vs. Long-Click, Content Gap Fulfillment, Amplify & Return Rates, Task Completion Success
  119. Optimize for Both: Algo Input & Human Output
  120. Bonus Time!
  121. #1) I’ve Been Curating a List of “10X” Content Over the Last 6 Months… It’s All Yours: bit.ly/10Xcontent FYI that’s a capital “X”
  122. #2) Not all content earns links… Buzzsumo & Moz collaborated to find out what does: bit.ly/sharesvslinks No capitals in this one!
  123. Rand Fishkin, Wizard of Moz | @randfish | rand@moz.com bit.ly/twoalgo