
SearchLove San Diego 2017 | Will Critchlow | Knowing Ranking Factors Won't Be Enough: How To Avoid Losing Your Job to a Robot

Under Sundar Pichai, Google is doubling down on machine learning and artificial intelligence, and they're not the only ones. The impact of the robot revolution will not be limited to the ranking of search results, and the impacts on the job market are the subject of endless speculation. Will has been researching the parts of our digital marketing jobs that computers can do better than we can. In this talk, he'll explore the boundaries of human and computer capabilities and show you how to combine the strengths of both.



  1. 1. Knowing ranking factors won’t be enough How to avoid losing your job to a robot @willcritchlow
  2. 2. I’m going to tell you about a robot that understands ranking factors better than any of you ...but before I get to that, let’s look at a bit of history...
  3. 3. The other day I searched:
  4. 4. Unsurprisingly, I got an answer
  5. 5. But it got me thinking about how, in 2009, the results would have looked more like this.
  6. 6. In 2009, it would have looked more like this. With every title containing the keyphrase.
  7. 7. In 2009, it would have looked more like this. With every title containing the keyphrase. Most at the beginning.
  8. 8. OK. Maybe Wikipedia would have been #1.
  9. 9. We used to have a pretty good understanding of ranking factors
  10. 10. My mental model for c. 2009 ranking factors had three different modes:
  11. 11. My mental model for ~2009 ranking factors had three different modes: One in the hyper-competitive head One in the competitive mid-tail ...and one in the long-tail
  12. 12. One in the hyper-competitive head
  13. 13. One in the hyper-competitive head: tons of perfectly on-topic pages to choose from
  14. 14. One in the hyper-competitive head: so pick only perfectly-on-topic pages
  15. 15. One in the hyper-competitive head: ...and rank by authority (*). (*) Page authority, but the domain inevitably factors into that calculation, which is why so many homepages ranked.
  16. 16. One in the hyper-competitive head: this resulted in a mix of homepages of mid-size sites, and inner pages on huge sites
  17. 17. One in the hyper-competitive head: but the general way to move up was through increased authority
  18. 18. Kind of search result | Pages ranking | To move up...
      Head | Homepages of mid-size sites and inner pages of massive sites. All perfectly-targeted. | Improve authority.
      Mid-tail | |
      Long-tail | |
  19. 19. One in the hyper-competitive head One in the competitive mid-tail
  20. 20. One in the competitive mid-tail: a wealth of ROUGHLY on-topic pages to choose from
  21. 21. One in the competitive mid-tail: PERFECTLY on-topic could do well even on a relatively weak site
  22. 22. One in the competitive mid-tail: rank the roughly on-topic pages by authority x “on-topicness” (sketched below)
  23. 23. One in the competitive mid-tail: move up with better targeting or more authority
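A toy sketch of that mid-tail model (my illustration of the idea on these slides, not anything from Google; the numbers are made up): score each roughly on-topic candidate by authority multiplied by "on-topicness", and a perfectly-targeted page on a weaker site can come out on top.

```python
# Toy mid-tail ranking (illustrative only): score = authority x "on-topicness".
pages = [
    {"url": "big-site-rough-match", "authority": 80, "on_topic": 0.40},
    {"url": "weak-site-perfect-match", "authority": 45, "on_topic": 0.95},
    {"url": "mid-site-decent-match", "authority": 55, "on_topic": 0.70},
]
ranked = sorted(pages, key=lambda p: p["authority"] * p["on_topic"], reverse=True)
print([p["url"] for p in ranked])
# ['weak-site-perfect-match', 'mid-site-decent-match', 'big-site-rough-match']
# A perfectly-targeted page on a relatively weak site wins.
```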
  24. 24. Kind of search result | Pages ranking | To move up...
      Head | Homepages of mid-size sites and inner pages of massive sites. All perfectly-targeted. | Improve authority.
      Mid-tail | Perfectly on-topic pages on relatively weak sites plus roughly on-topic on bigger sites. | Improve targeting or authority.
      Long-tail | |
  25. 25. One in the competitive mid-tail One in the hyper-competitive head ...and one in the long-tail
  26. 26. ...and one in the long-tail: a site of arbitrary weakness could rank if it was the most relevant
  27. 27. ...and one in the long-tail: otherwise, massive sites rank with off-topic pages that mention something similar
  28. 28. ...and one in the long-tail: generally, move up with better targeting
  29. 29. Kind of search result | Pages ranking | To move up...
      Head | Homepages of mid-size sites and inner pages of massive sites. All perfectly-targeted. | Improve authority.
      Mid-tail | Perfectly on-topic pages on relatively weak sites plus roughly on-topic on bigger sites. | Improve targeting or authority.
      Long-tail | Arbitrarily-weak on-topic pages and roughly-targeted deep pages on massive sites. | Improve targeting.
  30. 30. So that was ~2009
  31. 31. It’s not so simple any more. Google is harder to understand these days.
  32. 32. PageRank (the first algorithm to use the link structure of the web) We know how we got to ~2009...
  33. 33. Information retrieval PageRank
  34. 34. Information retrieval PageRank Original research
  35. 35. Information retrieval PageRank Original research TWEAKS ...with growing complexity in subsequent years
  36. 36. When Amit left Google, there was a fascinating thread on Hacker News in discussion of this article
  37. 37. Particularly this comment from a user called Kevin Lacker (@lacker):
  38. 38. I was thinking about it like it was a math puzzle and if I just thought really hard it would all make sense. -- Kevin Lacker (@lacker)
  39. 39. Hey why don't you take the square root? -- Amit Singhal according to Kevin Lacker (@lacker)
  40. 40. oh... am I allowed to write code that doesn't make any sense? -- Kevin Lacker (@lacker)
  41. 41. Multiply by 2 if it helps, add 5, whatever, just make things work and we can make it make sense later. -- Amit Singhal according to Kevin Lacker (@lacker)
  42. 42. Why does this make the algorithm so hard to understand?
  43. 43. 3 big reasons: High-dimension, Non-linear, Discontinuous
  46. 46. High-dimension, Non-linear, Discontinuous
  47. 47. This is what a high-dimensional function looks like: you might know what any one of the levers does, but they can interact with each other in complex ways
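A minimal illustration of that interaction problem (a toy function invented for this example, not anything from Google): the effect of pulling one lever depends on where the other levers are set.

```python
# Toy high-dimensional interaction: each argument is one "lever" an SEO might pull.
def toy_score(title_match, links, speed):
    # The links term is multiplied by topicality, and speed amplifies an
    # on-topic page: no lever has a fixed effect in isolation.
    return title_match * (1 + 0.5 * links) + 0.2 * speed * title_match

print(toy_score(1.0, 10, 1.0))  # 6.2: building links helped this page
print(toy_score(0.0, 10, 1.0))  # 0.0: the same links do nothing off-topic
```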
  48. 48. High-dimension, Non-linear, Discontinuous
  49. 49. We sell custom cigar humidors. Our custom cigar humidors are handmade. If you’re thinking of buying a custom cigar humidor, please contact our custom cigar humidor specialists at custom.cigar.humidors@example.com What this needs is another mention of [cigar humidors]
  50. 50. With no mentions of [cigar] or [humidor], this page would be unlikely to rank. And yet you can clearly go too far and have the effect turn negative. This is called nonlinearity. The cigar example is taken directly from Google’s quality guidelines.
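A minimal sketch of that nonlinearity (an invented response curve, not Google's): a few mentions help, the effect peaks, and then it decays, so more mentions eventually score worse than fewer.

```python
# Toy nonlinear response: score rises with keyword mentions,
# peaks around k, then decays as stuffing sets in.
import math

def topical_score(mentions, k=3.0):
    return mentions * math.exp(-mentions / k)

for m in [0, 1, 3, 6, 12]:
    print(m, round(topical_score(m), 2))
# 0 -> 0.0, 1 -> 0.72, 3 -> 1.1 (peak), 6 -> 0.81, 12 -> 0.22
```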
  51. 51. High-dimension, Non-linear, Discontinuous
  52. 52. Discontinuities are steps in the function. Think about so-called “over-optimization” tipping points.
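And a toy discontinuity (again invented for illustration): score improves smoothly with optimization until a tipping point, where a step penalty kicks in.

```python
# Toy discontinuity: a step penalty at an "over-optimization" threshold
# makes the function jump rather than slope.
def score_with_tipping_point(optimization):
    base = 10 + 2 * optimization
    if optimization > 7:      # the tipping point
        return base - 15      # discrete penalty: a step, not a slope
    return base

print(score_with_tipping_point(7.0))  # 24.0
print(score_with_tipping_point(7.1))  # 9.2 -- tiny input change, big output jump
```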
  53. 53. Let’s put all this together into a practical example:
  54. 54. Think about category pages: Do you recommend removing “SEO text”? We’ve tested it, so we know the answer.
  55. 55. If you said “yes”, congratulations (+3.1% organic sessions in a split-test)
  56. 56. Unless you’re responsible for this site: no effect / possible negative effect.
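For context, a minimal sketch of how you might check whether a lift like +3.1% is real rather than noise (hypothetical daily-session numbers, not Distilled's actual methodology): bootstrap a confidence interval on the variant-vs-control lift.

```python
# Hypothetical split-test evaluation: compare organic sessions in the
# variant bucket against the control bucket and bootstrap the lift.
import random

control = [120, 95, 130, 110, 105, 98, 125, 140]   # daily sessions, control pages
variant = [124, 99, 133, 115, 108, 101, 130, 143]  # daily sessions, variant pages

def mean(xs):
    return sum(xs) / len(xs)

observed_lift = mean(variant) / mean(control) - 1  # ~ +3%

# Resample both buckets to see whether the lift is distinguishable from noise.
lifts = []
for _ in range(10_000):
    c = [random.choice(control) for _ in control]
    v = [random.choice(variant) for _ in variant]
    lifts.append(mean(v) / mean(c) - 1)
lifts.sort()
lo, hi = lifts[250], lifts[9750]  # ~95% interval
print(f"lift {observed_lift:+.1%}, 95% CI [{lo:+.1%}, {hi:+.1%}]")
```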
  57. 57. You’re thinking this to yourself right now: “No, but I’m still pretty good at this.”
  58. 58. I promised to tell you about a robot that is better than even experienced SEOs... Well. It turns out all we needed was a coin to flip. You’re all fired.
  59. 59. It’s only going to get worse under Sundar Pichai
  60. 60. Who knows who this is? (This is the only CC-licensed photo of him on the internet)
  61. 61. ENHANCE What about now?
  62. 62. John Giannandrea, Google’s head of search: Sundar’s choice to lead search after Amit. Previously ran machine learning.
  63. 63. ...and of course Jeff Dean is doing Jeff Dean things (cf. Chuck Norris)
  64. 64. Jeff Dean puts his pants on one leg at a time, but if he had more legs, you would see that his approach is O(log n). Source: Jeff Dean facts
  65. 65. Once, in early 2002, when the search back-ends went down, Jeff Dean answered user queries manually for two hours. Result quality improved markedly during this time.
  66. 66. When Jeff Dean goes on vacation, production services across Google mysteriously stop working within a few days. This was reportedly actually true
  67. 67. The original Google Translate was the result of the work of hundreds of engineers over 10 years.
  68. 68. The director of Translate, Macduff Hughes, said that it sounded to him as if maybe they could pull off a neural-network-based replacement in three years.
  69. 69. Jeff Dean said “we can do it by the end of the year, if we put our minds to it”.
  70. 70. Hughes: “I’m not going to be the one to say Jeff Dean can’t deliver speed.”
  71. 71. A month later, the work of a team of 3 engineers was tested against the existing system. The improvement was roughly equivalent to the improvement of the old system over the previous 10 years.
  72. 72. Hughes sent his team an email. All projects on the old system were to be suspended immediately. [Read the whole story]
  73. 73. Background reading: Backchannel, Bloomberg
  74. 74. How to avoid losing your job to a robot This is what you promised, Will.
  75. 75. Let’s start by understanding some robot weaknesses
  76. 76. What’s this?
  77. 77. Ooh. Ooh. I know this one. -- robot
  78. 78. “It’s a leopard. I’m like 99% sure.”
  79. 79. Computers are better than humans at classification, but struggle with adversaries. (Image labels: Cheetah, Leopard, Jaguar.) Read more about this here.
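The trick behind results like that leopard-print sofa is usually a gradient-based perturbation. A minimal FGSM-style sketch, where `gradient_of_loss` is a hypothetical stand-in for whatever gradient your framework computes, not a specific library API:

```python
# A minimal FGSM-style sketch. `image` and `gradient_of_loss` are numpy
# arrays; `gradient_of_loss` stands in for the gradient of the model's
# loss with respect to the input pixels.
import numpy as np

def fgsm_perturb(image, gradient_of_loss, epsilon=0.007):
    # Nudge every pixel a tiny step in the direction that increases the loss.
    # The change is imperceptible to a human, but can flip the model's label
    # (e.g. "sofa" -> "leopard, 99% sure").
    return np.clip(image + epsilon * np.sign(gradient_of_loss), 0.0, 1.0)
```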
  80. 80. Lesson: we expect adversarial abilities to take a step backwards. They will remain good at classifying bad links, but will be likely to fall prey to weird outcomes in adversarial situations.
  81. 81. Example: Remember Tay, the Microsoft chatbot that Twitter taught to be racist and sexist in less than a day? Read more here
  82. 82. We’re going to see new kinds of bugs
  83. 83. Rules of ML [PDF] outlines engineering lessons from getting ML into production at Google
  84. 84. Example lesson: There will be silent failures. “This is a problem that occurs more for machine learning systems than for other kinds of systems. Suppose that a particular table that is being joined is no longer being updated. The machine learning system will adjust, and behavior will continue to be reasonably good, decaying gradually. Sometimes tables are found that were months out of date, and a simple refresh improved performance more than any other launch that quarter! For example, the coverage of a feature may change due to implementation changes: for example a feature column could be populated in 90% of the examples, and suddenly drop to 60% of the examples. Play once had a table that was stale for 6 months, and refreshing the table alone gave a boost of 2% in install rate. If you track statistics of the data, as well as manually inspect the data on occasion, you can reduce these kinds of failures.”
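A minimal sketch of the mitigation that quote suggests (the names and thresholds here are my own, assumed for illustration): track feature-coverage statistics so a 90%-to-60% drop raises an alert instead of silently degrading the model.

```python
# Hypothetical coverage monitor: `rows` is a list of dicts of feature values.
def feature_coverage(rows, feature):
    populated = sum(1 for r in rows if r.get(feature) is not None)
    return populated / len(rows)

def check_coverage(rows, feature, baseline, tolerance=0.05):
    cov = feature_coverage(rows, feature)
    if cov < baseline - tolerance:
        # Fail loudly instead of letting the model "adjust" and decay quietly.
        raise RuntimeError(
            f"{feature} coverage is {cov:.0%}, expected ~{baseline:.0%}"
        )

# e.g. check_coverage(todays_examples, "page_speed_ms", baseline=0.90)
```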
  87. 87. That document also has a section on trying to understand what the machines are doing
  88. 88. But human explainability may not even be possible. Not every concept a neural network uses fits neatly into a concept for which we have a word. It’s not clear this is a weakness per se, but...
  89. 89. ...this means that engineers won’t always know more than we do about why a page does or doesn’t rank. The big knowledge gap of the future is data: clickthrough rates, bounce rates, etc.
  90. 90. As Tom Capper said, engineers’ statements can already be misleading
  91. 91. ...and remember the confounding split-tests. It’s already not always as simple as “feature X is good”. All of which means we may need to be more independent-minded and do more of our own research.
  92. 92. So how do we fight back?
  93. 93. Michael Lewis’s latest book is about Kahneman and Tversky. It recounts a story about a piece of medical software that existed in the 1960s.
  94. 94. It was designed to encapsulate how a range of doctors diagnosed stomach cancer from x-rays.
  95. 95. It proceeded to outperform those same doctors despite only containing their expertise. Real people have biases, and fool themselves. Encapsulate your own expert knowledge.
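A minimal sketch of that "model of the expert" idea (hypothetical data; the 1960s software predates this library, but the principle is the same): fit a simple model to an expert's own past judgments, then let the model apply those judgments consistently.

```python
# Hypothetical data: each row holds the cues the doctors said they use;
# each label is the doctor's own historical yes/no diagnosis.
from sklearn.linear_model import LogisticRegression

X = [[1, 0, 3], [0, 1, 1], [1, 1, 4], [0, 0, 0], [1, 0, 2], [0, 1, 3]]
y = [1, 0, 1, 0, 1, 0]

# The fitted model encapsulates the experts' policy and, unlike the experts,
# applies it identically every time -- which is why it outperformed them.
model = LogisticRegression().fit(X, y)
print(model.predict([[1, 0, 3]]))
```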
  96. 96. At Distilled, we use a methodology we call the balanced digital scorecard. This encapsulates our beliefs about how to build a high-performing business. Applying it helps avoid our own biases.
  97. 97. Also, while we are talking about books: The Checklist Manifesto describes another important tool for avoiding the same cognitive biases.
  98. 98. Focus on consulting skills. This means: getting things done, convincing organizations, applying general knowledge, learning new things. Computers are better at diagnosis than cure, so use case studies and creativity. I’ve written a few things about this (DistilledU module, writing better business documents, using split-tests to consult better).
  99. 99. We are going to need to be better than ever at debugging things. I wrote about debugging skills for non-developers here. A lot of the story of enterprise consulting is going to be about figuring out why things have gone wrong in the face of sparse or incorrect information from Google.
  100. 100. Disregard expert surveys. Firstly, there are all the problems outlined in the search result pairs study, both in the ability of experts to understand factors and in your ability to use the information even if they do. Secondly, they are skewed by another bias, the “law of small numbers”, from Lewis’s book. PS: I say this as a participant in many of them.
  101. 101. Equally, building your digital strategy on what Google tells you to do will become an even worse idea than it already is.
  102. 102. This is why we have been investing so much in split-testing Check out www.distilledodn.com if you haven’t already. The team will be happy to demo for you. We’re now serving ~1.5 billion requests / month, and recently published information covering everything from response times to our +£100k / month split test.
  110. 110. Let’s recap
      1. Even in a world of 200+ “classical” ranking factors, humans were bad at understanding the algorithm
      2. Machine learning will make this worse, and is accelerating under Sundar
      3. There are things computers remain bad at, and rankings will become more opaque even to Google engineers
      4. We remain relevant by:
         a. Using methodologies and checklists to capture human capabilities and avoid our biases
         b. Becoming great consultants and change agents
         c. Debugging the heck out of everything
         d. Avoiding being misled by experts or Google
         e. Testing!
  111. 111. Oh, and one more thing
  112. 112. What about that robot I promised you? The coin flip wasn’t really it
  113. 113. keras.io
  114. 114. The specifics of DeepRank: gather and process training data. We started with a broad range of unbranded keywords from our STAT rank tracking. For each of the URLs ranking in the top 10, we gathered key metrics about the domain and page, both from direct crawling and various APIs. We turned this into a set of pairs of URLs {A,B} with their associated keyword, metrics, and their rank ordering.
  116. 116. The specifics of DeepRank: train the model. We have so far trained on just 10 metrics for a relatively small sample (hundreds) of keywords. Our current version is only a few layers deep with only 10 hidden dimensions. The current training samples 30 pairs at a time and trains against them for 500 epochs.
  117. 117. The specifics of DeepRank: next steps. The next task is to get way more metrics for thousands of keywords. This will enable us to train a much deeper model for much longer without overfitting. We also have some more hyperparameter tuning to do.
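Putting those specifics together, here is a minimal Keras sketch of a DeepRank-style pairwise model. The RankNet-style architecture (a shared scoring tower, sigmoid of the score difference) is my assumption; the slides only specify 10 metrics per page, a few layers with 10 hidden dimensions, 30-pair batches, and 500 epochs.

```python
# A minimal sketch of a DeepRank-style pairwise ranker in Keras
# (architecture assumed; training data here is random placeholder).
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Subtract, Activation

N_METRICS = 10  # domain- and page-level metrics per URL

# Shared scoring tower: maps one page's metrics to a single relevance score.
metrics_in = Input(shape=(N_METRICS,))
h = Dense(10, activation="relu")(metrics_in)
h = Dense(10, activation="relu")(h)
score_out = Dense(1)(h)
scorer = Model(metrics_in, score_out)

# Pairwise wrapper: P(A outranks B) = sigmoid(score(A) - score(B)).
page_a = Input(shape=(N_METRICS,))
page_b = Input(shape=(N_METRICS,))
diff = Subtract()([scorer(page_a), scorer(page_b)])
prob = Activation("sigmoid")(diff)
model = Model([page_a, page_b], prob)
model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder training data: in the talk this comes from STAT rankings plus
# crawled/API metrics; random numbers here just make the sketch runnable.
X_a = np.random.rand(30, N_METRICS)
X_b = np.random.rand(30, N_METRICS)
y = np.random.randint(0, 2, size=(30, 1))  # 1 if page A ranked above page B
model.fit([X_a, X_b], y, batch_size=30, epochs=500, verbose=0)
```

A nice property of this design is that the shared tower also yields a standalone per-page score, so the same trained weights can rank a whole result set, not just compare pairs.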
  118. 118. To run the model, we input a pair of pages with their associated metrics.
  119. 119. (Diagram: the new input feeding into the model.)
  120. 120. We get back a probability of page A outranking page B. (Diagram: new input, model, probability-weighted predictions.)
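Continuing the sketch above, inference on a new pair would look something like this (the metric vectors are hypothetical):

```python
# Hypothetical metric vectors for a new pair of pages:
new_a = np.random.rand(1, N_METRICS)
new_b = np.random.rand(1, N_METRICS)
p = model.predict([new_a, new_b])[0, 0]
print(f"P(page A outranks page B) = {p:.2f}")
```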
  121. 121. The goal is a winning combination of human and machine. Human + computer beats computer (for now).
  122. 122. Let’s recap
      1. Even in a world of 200+ “classical” ranking factors, humans were bad at understanding the algorithm
      2. Machine learning will make this worse, and is accelerating under Sundar
      3. There are things computers remain bad at, and rankings will become more opaque even to Google engineers
      4. We remain relevant by:
         a. Using methodologies and checklists to capture human capabilities and avoid our biases
         b. Becoming great consultants and change agents
         c. Debugging the heck out of everything
         d. Avoiding being misled by experts or Google
         e. Testing!
      5. Human + robot is the only thing that has a chance of beating the robots
  123. 123. Questions: @willcritchlow
  124. 124. Image credits ● Mobius strip ● Confusion ● Signal box ● Cigar ● Discontinuity ● Confidence ● Burt Totaro ● Sundar Pichai ● John Giannandrea ● Chuck Norris ● Jeff Dean ● Fencing ● Keyboard ● Go ● Robot ● Leopard print sofa ● Leopard ● Bug ● Lego robots ● Iron Man ● San Diego
