Breaking Bad SEO - The Science of Crawl Space
Why Breaking Bad?
The story of a high school chemistry teacher who discovers he has cancer and starts producing crystal meth to earn fast cash to fund his treatment and leave some money for his family if he dies.
It got me thinking about SEO as a science: it requires knowledge, skill and regular testing to deliver the right results or improvements.
Even the lead characters reflect certain aspects of the SEO industry…
There is an episode called Crawl Space, which ties in nicely with this topic.
This version contains the notes on the slides.

  • Why Breaking Bad? The story of a high school chemistry teacher who discovers he has cancer and starts producing crystal meth to earn fast cash to fund his treatment and leave some money for his family if he dies. It got me thinking about SEO as a science: it requires knowledge, skill and regular testing to deliver the right results or improvements. Even the lead characters reflect certain aspects of the SEO industry…
  • Meet Walter, the chemistry teacher. Despite getting his hands pretty dirty throughout the series, Walt is fundamentally a good guy and, as the star of the show, should be considered a white hat SEO from a search perspective. As a scientist, Walt is methodical in his approach; he understands the principles and practices needed to achieve results and has the skill to produce the very best crystal meth.
  • Jesse Pinkman, on the other hand, is a black hat SEO. An ex-student of Walt's who failed chemistry at school and became a drug dealer, Jesse knows his industry and has the right contacts to get by, but he isn't really interested in delivering the very best results or long-term plans. So why "Crawl Space"?
  • The Totality of Possible URLs for a Website. Google regularly refers to crawl space. It's fundamentally about knowing your site architecture, and identifying your crawl space is the first step towards successful website optimisation. Why should we care about crawl space?
  • Because size matters. Google warns you about potential crawl space issues in GWT and outlines some of the implications: unnecessarily crawling a large number of duplicate content URLs, discovering undesired parts of your site, consuming more bandwidth than necessary, and a potential inability to completely index all of your site.
  • So check what's in the search engine indexes already - is it a realistic number of pages for your site? Do you recognise the URL structures/formats being returned? The most common contributor to a large crawl space is duplicate content (seen here with BooHoo.com), which can be caused by a number of valid and invalid reasons. For SEO, when it comes to URLs per page… (there can only be one!)
  • There can only be one.
  • But don't lose your head… Auditing your crawl space will help you understand the full mix of URLs and help you formulate a consistent implementation.
  • To build a picture of your site architecture and to discover your complete crawl space, you need to consider what contributes to your URL Universe – the place where all theoretically possible URLs exist.
    Invoked URLs: all of the URLs ever brought into existence, for any reason.
    Uninvoked URLs: URLs that haven't been invoked yet (but could be).
    Internally Linked URLs: all invoked URLs which are linked from other internal pages.
    URLs in Sitemaps: all of the invoked URLs which are currently included in a sitemap.
    Crawlable URLs: all the URLs in the Universe which could be crawled by the search engine.
    Indexable URLs: all the URLs in the Universe which could be indexed by the search engine.
    Canonical URLs: all the clean canonical URLs; these should be your crawlable unique pages.
    Indexed URLs: all crawled pages which are now in the search engine's index.
    Socially Shared URLs: all URLs being shared across social media platforms (Facebook posts, tweets, etc.).
    Organic Landing Page URLs: all indexed URLs which have driven traffic from organic search results.
    Externally Linked URLs: all URLs linked from another site.
    Mobile URLs, Translated/Regional URLs and Shortened URLs (external but internal).
    Domain Duplicates: aliased domains - www/non-www, http/https.
  • A lack of crawl space management can leave you wide open to threats, and can lead to a whole load of GWT warnings!
  • Who cares!? Even if social signals are not used in a ranking algorithm, social media and SEO are both able to drive discovery at scale. They both have enormous reach, so understanding the crawl space for search and social is crucial. URL aggregation is not necessarily required, but I'd recommend it. So let's take a look at how to tackle all this… Let's cook.
  • To get going we need all the components: a cook, a lab, the formula, ingredients and a recipe.
  • Just like Walt, you need to be the boss, the head chef, the daddy - ideally a scientist, but that's not a prerequisite if you have the right tools to help you take ownership of your crawl space. The Lab…
  • This is Jesse's RV – effectively a mobile meth lab. We need to set up our lab with the tools and equipment necessary to develop search marketing recipes, and link up a range of data sources to help us understand the crawl space: Webmaster Tools (Google & Bing), landing page data (analytics), linked URL data (WMT/Ahrefs/Majestic/OSE) and a website crawler (Xenu/Screaming Frog/DeepCrawl). DC's new Universal Crawl now includes Google Analytics landing page data, along with link equity and social tagging reports, to assess a comprehensive crawl space in one. The Formula…
  • Maximised + Minimised = Optimised. Discovery, management and optimisation of your crawl space is essential and lays the foundation for strong performance. All spaces need to be carefully defined and managed efficiently: maximise indexable space, minimise crawlable space, optimise canonical space. Thankfully the search engines empower you with some great ingredients to help you develop your recipes…
  • As SEOs, we have a whole host of juicy ingredients at our disposal to help cook up an optimised crawl space - picking the very best ingredients will make all the difference when testing your recipes. Use the custom controls in DC to extract and schedule regular data comparisons for each source. Overwrite robots.txt rules to assess the full crawl space as well as the one delivered to search engines. Schedule regular sitemap crawls and compare against internal and external links, and review canonical setup and index controls to maintain consistency (see the DeepCrawl Backlinks crawl and the OG tags & Hreflang DeepCrawl reports).
  • New Universal Crawl out of beta: capture and audit comprehensive website, XML sitemap and organic landing page data in one Universal Crawl, designed to capture and inform your crawl space optimisation for search and social using a wide range of data sources.
  • DC's Universal Crawl helps you quickly and easily understand your crawl space and identify URL sources, gaps, new formats and traffic value. You also get all the regular DC features, including fully customisable crawl settings, the ability to compare a test environment against the live site (to support QA), custom extraction tools and scheduled crawls that record change and impact and help quantify SEO deliverables. DeepCrawl automatically shows you what's changed between crawls so you can understand how much of the site is changing, and you might spot some URL formats which are changing very frequently and affecting crawl efficiency. Let's take a look at some recipes to control your crawl space…
  • DeepCrawl indexation reports help you quickly assess all crawlable URLs, unique pages and noindex pages, and identify URL parameters that you might not want indexed. Use Webmaster Tools data and site: checks to understand current search engine indexation, and use the Parameter Removal controls in DeepCrawl to test the impact of stripping parameters. Monitor crawl efficiency changes and, for a quick win, update your URL parameter settings in Webmaster Tools when you find the right formula.
  • Use the ‘Canonicalized Pages’ reports to ensure your canonical implementation is correct. Identify canonical URLs which aren't linked internally with the ‘Unlinked Canonical Pages’ report – you'd expect every canonical URL to be linked somewhere internally. The ‘Pages without Canonical Tag’ report shows you pages that are missing canonical tags; make sure there aren't any important pages in here.
  • Check the consistency of your canonical URLs against social URLs – are your OG and Twitter Card URLs consistent? DeepCrawl has a report called ‘Inconsistent Open Graph and Canonical URLs’ to automatically show any errors. You can also schedule custom extraction crawls to regularly assess social share equity changes for specific URLs using DC. Likewise for blog comments etc.
  • Use the DeepCrawl Min. Content/HTML Ratio reports to find potential Panda pages linked internally: all pages with less than a 10 percent content-to-HTML ratio.
  • Review and assess whether you can make your crawl more efficient by excluding certain pages. Check your non-canonical URLs and use Universal Crawl to see non-navigational pages that aren't driving traffic.
  • Understanding who's working with you and who's against you is important too. Review your domain aliases: test all registered domains to check whether they return a duplicate or a redirect, and check the www/non-www and HTTP/HTTPS configuration. Implement redirects or use cross-domain canonical setups, and monitor domains using Robotto.org.
  • DeepCrawl reports all ‘Disallowed URLs’ so you can easily see what's already being excluded. Test changes to your robots.txt file using the Robots Overwrite functionality and develop an optimised file that increases disallowed URLs and focuses your crawl on primary pages - then test the impact.
  • Check that your website's internal linking is working towards an optimised crawl space. Use the DeepCrawl internal broken links, redirected links, 4xx and 5xx error reports to identify internal links on your site that are broken or point to redirected URLs that may be affecting your crawl efficiency.
  • Identify pages ‘Only in Sitemaps’ and not linked internally, plus all pages linked internally which are not in the sitemaps. Schedule regular sitemap and comparative website crawls to assess what's changed; you can monitor how much of the site is changing and map this against performance. You might spot some URL formats which are changing frequently and impacting crawl efficiency.
  • By comparing sitemaps, GA and internal links, a Universal Crawl easily highlights URLs discovered ‘Only in Organic Landing Pages’ and not linked internally. You can add ‘Backlink Crawls’ to your DC projects: simply upload a comprehensive backlink profile URL list and let DC crawl and validate the URLs. This crawl also helps identify pages generating traffic but not necessarily linked internally, and it automatically applies inbound link equity metrics to all DC reports at a URL level – very useful.
  • The correct use and implementation of hreflang tags is important for effective international SEO, but it can be confusing and even experienced SEOs get it wrong.
  • DeepCrawl helps manage a seamless hreflang integration by detecting hreflang tags in sitemaps, headers and on-page, then showing a matrix of language alternatives for every page so you can see the gaps and inconsistencies in the setup.

Breaking Bad SEO - The Science of Crawl Space: Presentation Transcript

  • Breaking Bad SEO The Science of Crawl Space
  • Meet Walter… Meet Walter, the chemistry teacher. Despite getting his hands pretty dirty throughout the series, Walt is fundamentally a good guy and, as the star of the show, should be considered a white hat SEO from a search perspective. As a scientist, Walt is methodical in his approach; he understands the principles and practices needed to achieve results and has the skill to produce the very best crystal meth.
  • Jesse Pinkman Jesse Pinkman, on the other hand, is a black hat SEO. An ex-student of Walt's who failed chemistry at school and became a drug dealer, Jesse knows his industry and has the right contacts to get by, but he isn't really interested in delivering the very best results or long-term plans. So why "Crawl Space"?
  • The Totality of Possible URLs For a Website “[You]…have several options to optimize the “crawl space” (the totality of URLs on your site known to Googlebot) for unique content pages, reduce crawling of duplicative pages, and consolidate indexing signals.” Google Webmaster Central Blog http://googlewebmastercentral.blogspot.co.uk/2014/02/faceted-navigation-best-and-5-of-worst.html What is Crawl Space? Google regularly refers to crawl space. It's fundamentally about knowing your site architecture, and identifying your crawl space is the first step towards successful website optimisation. Why should we care about crawl space?
  • Size Does Matter… Google warns you about potential crawl space issues in GWT and outlines some of the implications… •Unnecessarily crawling a large number of duplicate content URLs •Discovering undesired parts of your site •Consuming more bandwidth than necessary •Potential inability to completely index all of your site.
  • domain.com domain.com/home domain.com/home/ domain.com/Home domain.com/?source=ppc domain.com/index.html How'd I Get so Many Pages? So check what's in the indexes already - is it a realistic number of pages for your site? Do you recognise the URL structures/formats being returned? The most common contributor to a large crawl space is duplicate content (seen here with BooHoo.com), which can be caused by a number of valid and invalid reasons.
  • For SEO… There Can Only Be One!
  • But Don’t Lose Your Head Auditing your Crawl Space will help you understand the full mix of URLs and help formulate a consistent implementation.
  • What Makes Up Your Crawl Space? To build a picture of your site architecture and to discover your complete crawl space, you need to consider what contributes to your URL Universe – the place where all theoretically possible URLs exist.
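
As a rough illustration of the URL Universe idea, the sketch below treats each URL source as a Python set and reports the gaps between them. The set names and example URLs are hypothetical placeholders; in practice each set would come from your crawler export, XML sitemaps, analytics landing-page data and link reports.

```python
# Minimal sketch: model parts of the URL Universe as named sets and report the gaps.
# The example data is hypothetical; load each set from your own crawl, sitemap,
# analytics and link exports in practice.

crawlable = {"/", "/shoes", "/shoes?colour=red", "/about"}
in_sitemaps = {"/", "/shoes", "/sale"}
organic_landing = {"/", "/shoes", "/old-promo"}
internally_linked = {"/", "/shoes", "/about"}

gaps = {
    "Only in sitemaps (not linked internally)": in_sitemaps - internally_linked,
    "Landing pages not linked internally": organic_landing - internally_linked,
    "Crawlable but missing from sitemaps": crawlable - in_sitemaps,
}

for label, urls in gaps.items():
    print(f"{label}: {sorted(urls) or 'none'}")
```
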
  • Poor management has implications: •Orphaned pages •Incomplete sitemaps •Dilution of backlinks and shares •Performance is harder to track •Limited volume of unique URLs in analytics •Crawl inefficiency for SEO •Traffic growth strategies just won’t work The Threats A lack of Crawl Space management can leave you wide open to threats, & lead to a load of GWT warnings!
  • Do social signals impact organic rankings? •Twitter? •Facebook? •Google+ Potentially Social and Backlink Synergy Who cares!? Even if social signals are not used in a ranking algorithm, social media and SEO are both able to drive discovery at scale. They both have enormous reach, so understanding the crawl space for search and social is crucial. URL aggregation is not necessarily required, but I'd recommend it. So let's take a look at how to tackle all this…
  • To get going we need all the components: •A Cook •A Lab •The Formula •Organic Ingredients •Recipe Let’s Cook!
  • Be the Boss (of URLs) •An SEO needs to be a great cook… • Make sure you have the right equipment • Follow an accurate formula • Use quality ingredients •Take ownership of your crawl space • Benchmark • URL Roadmap The Cook – YOU! Just like Walt, you need to be the boss, the head chef, the daddy, ideally a scientist – but not a prerequisite if you have the right tools to help you take ownership of your crawl space.
  • The Lab This is Jesse's RV – effectively a mobile meth lab. We need to set up our lab with the tools and equipment necessary to develop search marketing recipes, and link a range of data sources to understand the crawl space: •Webmaster Tools - Google & Bing •Landing page data (Analytics) •Linked URL data - WMT/Ahrefs/Majestic/OSE •Website crawler (Xenu/Screaming Frog/DeepCrawl) DC's new Universal Crawl now includes Google Analytics landing page data, along with link equity and social tagging reports, to assess a comprehensive crawl space in one.
  • Maximised + Minimised = Optimised Maximise indexable space •Increase volume of valuable pages •Increase crawl efficiency Minimise crawlable space •Define your crawl space •Identify and eliminate threats Optimise canonical space •A clean version of your website •Your URL à la carte The Formula Discovery, management & optimisation of crawl space is essential and lays the foundation for strong performance. All spaces need to be carefully defined and managed efficiently.
  • Use Organic Ingredients As SEOs, we have a whole host of juicy ingredients at our disposal to help cook up an optimised crawl space - picking the very best ingredients for your recipe will make all the difference in testing your recipes. Use the custom controls in DC to extract and schedule regular data comparisons for each source… Overwrite robots.txt rules to assess full crawl space as well as the one delivered to search engines. Schedule regular sitemap crawls and compare against internal and external links, review canonical setup and index controls to maintain consistency. - DeepCrawl Backlinks crawl - OG tags & Hreflang DeepCrawl report
  • So What’s the Recipe?
  • Crawl Space Solutions in One… New Universal Crawl is now out of Beta! Review website, XML sitemaps & organic landing page data in one Universal Crawl with Deep Crawl. Take advantage of a significant head start in defining, managing & optimising your crawl spaces. Universal Crawl is the New Heisenberg…
  • Deep Crawl Goes Universal
  • Deep Crawl Goes Universal DC's Universal Crawl helps you quickly and easily understand your crawl space and identify URL sources, gaps, new formats and traffic value. You also get all the regular DC features, including fully customisable crawl settings, the ability to compare a test environment against the live site (to support QA), custom extraction tools and scheduled crawls that record change and impact and help quantify SEO deliverables. DeepCrawl automatically shows you what's changed between crawls so you can understand how much of the site is changing, and you might spot some URL formats which are changing very frequently and affecting crawl efficiency. Let's take a look at some recipes to control your crawl space…
  • Understand what's in your crawl space… •Assess indexation reports •Review the current index •Test URL parameter changes •Quick improvements through GWT Parameter settings Identify Indexable & Non-indexable Parameters DeepCrawl indexation reports help you quickly assess all crawlable URLs, unique pages and noindex pages, and identify URL parameters that you might not want indexed. Use Webmaster Tools data and site: checks to understand current search engine indexation, and use the Parameter Removal controls in DeepCrawl to test the impact of stripping parameters. Monitor crawl efficiency changes and, for a quick win, update your URL parameter settings in Webmaster Tools when you find the right formula.
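
As a minimal sketch of the parameter-stripping idea (not DeepCrawl's own implementation), the snippet below normalises URLs against a hypothetical allow-list of parameters so that tracking and session parameters collapse duplicate URLs into one clean version. KEEP_PARAMS and the example URLs are assumptions for illustration.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical allow-list: parameters that genuinely change page content.
KEEP_PARAMS = {"page", "colour"}

def strip_parameters(url: str) -> str:
    """Drop query parameters outside the allow-list (tracking, session IDs, etc.)."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k in KEEP_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

urls = [
    "https://example.com/shoes?colour=red&utm_source=ppc",
    "https://example.com/shoes?sessionid=abc123",
]
print({u: strip_parameters(u) for u in urls})
```
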
  • Make sure you're not self-harming •Check canonical implementation •All canonical pages should be linked internally •Assess pages without canonical tags Canonical URL Configuration
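
A minimal sketch of a canonical self-check, assuming the requests and beautifulsoup4 packages are installed and using a hypothetical example.com URL: it fetches a page, reads the rel=canonical link and reports whether the page is self-referencing, canonicalised elsewhere or missing the tag.

```python
import requests
from bs4 import BeautifulSoup

def canonical_of(url):
    """Return the canonical URL declared on the page, or None if there isn't one."""
    html = requests.get(url, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").find("link", attrs={"rel": "canonical"})
    return tag.get("href") if tag else None

url = "https://example.com/shoes?colour=red"  # hypothetical page
canonical = canonical_of(url)
if canonical is None:
    print(f"{url}: no canonical tag")
elif canonical != url:
    print(f"{url}: canonicalised to {canonical}")
else:
    print(f"{url}: self-referencing canonical")
```
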
  • Which URLs Are Being Shared? Check the consistency of canonical URLs against social URLs – are your OG & TwitterCard URLs consistent? DeepCrawl has a report called ‘Inconsistent Open Graph and Canonical URLs’ to automatically show any errors. You can also schedule custom extraction crawls to regularly assess social share equity changes for specific URLs using DC. Likewise for blog comments etc.
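
The same kind of check can be extended to social URLs. The sketch below (again assuming requests and beautifulsoup4, with a hypothetical page URL) collects the canonical, og:url and twitter:url values from a page and flags any inconsistency between them.

```python
import requests
from bs4 import BeautifulSoup

def declared_urls(url):
    """Collect the URLs a page declares for search engines and social platforms."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    def value(tag, attr):
        return tag.get(attr) if tag else None
    return {
        "canonical":   value(soup.find("link", attrs={"rel": "canonical"}), "href"),
        "og:url":      value(soup.find("meta", attrs={"property": "og:url"}), "content"),
        "twitter:url": value(soup.find("meta", attrs={"name": "twitter:url"}), "content"),
    }

urls = declared_urls("https://example.com/shoes")  # hypothetical page
distinct = {v for v in urls.values() if v}
print("consistent" if len(distinct) <= 1 else f"inconsistent: {urls}")
```
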
  • Where’s your thin content hiding out? Identify Low Value Navigational Pages
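
A rough sketch of the content-to-HTML ratio check: the roughly 10 percent threshold echoes the earlier note, while requests, beautifulsoup4 and the example URLs are assumptions. It strips scripts and styles, measures visible text against the raw HTML size, and flags potentially thin pages.

```python
import requests
from bs4 import BeautifulSoup

THRESHOLD = 0.10  # flag pages under roughly 10% text-to-HTML

def content_html_ratio(url):
    """Visible text length divided by raw HTML length."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # ignore non-visible markup
    text = soup.get_text(" ", strip=True)
    return len(text) / max(len(html), 1)

for url in ["https://example.com/", "https://example.com/tag/red-shoes"]:  # hypothetical pages
    ratio = content_html_ratio(url)
    print(f"{ratio:.1%}  {'THIN?' if ratio < THRESHOLD else 'ok'}  {url}")
```
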
  • Take a good look at your analytics… •Review URLs delivering minimal traffic •Identify and assess URLs outside of your canonical setup Identify Low Value Non Navigational URLs Review and assess if you can make your crawl more efficient by excluding certain pages. Check your non-canonical URLs & use Universal Crawl to see non-navigational pages not driving traffic.
  • Identify Domain Aliases Understanding who’s working with you or who’s against you is important too. Review your domain aliases. Test all registered domains to check if they return a duplicate or redirect. Check www/non-www configuration and HTTP/HTTPS. Implement redirects or use cross domain canonical setups.
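
A minimal sketch of the alias test, assuming requests is installed and using example.com as a stand-in domain: it requests each www/non-www and HTTP/HTTPS variant without following redirects, so you can see which variants redirect and which serve duplicate content directly.

```python
import requests

# Hypothetical host: the usual alias variants worth testing.
variants = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for url in variants:
    try:
        r = requests.get(url, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"{url} -> error: {exc}")
        continue
    if 300 <= r.status_code < 400:
        print(f"{url} -> {r.status_code} redirect to {r.headers.get('Location')}")
    else:
        print(f"{url} -> {r.status_code} served directly (potential duplicate)")
```
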
  • Monitor your domain portfolio & keep alert! www.robotto.org Monitor domains using Robotto.org
  • Check all disallowed URLs… •Webmaster Tools •Deep Crawl Indexation Reports Disallowed URLs DeepCrawl reports all ‘Disallowed URLs’ so you can easily see what's already being excluded. Test changes to your robots.txt file using the Robots Overwrite functionality & develop an optimised file that increases disallowed URLs & focuses the crawl on primary pages - test the impact.
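
For a quick standalone check outside any crawler, Python's standard urllib.robotparser can show which of a sample of URLs your live robots.txt disallows for a given user agent. The site and sample URLs below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")  # hypothetical site
robots.read()

sample_urls = [
    "https://example.com/shoes",
    "https://example.com/checkout/basket",
    "https://example.com/search?q=red+shoes",
]

for url in sample_urls:
    allowed = robots.can_fetch("Googlebot", url)
    print(f"{'allowed' if allowed else 'DISALLOWED'}  {url}")
```
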
  • Review & Validate all linked URLs… Identify All Linked URLs Check that your website's internal linking is working towards an optimised crawl space. Use the DeepCrawl internal broken links, redirected links, 4xx and 5xx error reports to identify internal links that are broken or point to redirected URLs that may be affecting your crawl efficiency.
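
A minimal sketch of validating linked URLs, assuming requests is available and using a hypothetical list of internally linked URLs exported from a crawl: it issues HEAD requests without following redirects and flags broken (4xx/5xx) and redirected links.

```python
import requests

# Hypothetical list of internally linked URLs exported from a crawl.
internal_links = [
    "https://example.com/shoes",
    "https://example.com/old-category",
    "https://example.com/no-longer-here",
]

for url in internal_links:
    r = requests.head(url, allow_redirects=False, timeout=10)
    if 300 <= r.status_code < 400:
        print(f"REDIRECT {r.status_code}: {url} -> {r.headers.get('Location')}")
    elif r.status_code >= 400:
        print(f"BROKEN   {r.status_code}: {url}")
    else:
        print(f"OK       {r.status_code}: {url}")
```
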
  • Crawl your sitemap regularly… •Run analysis and compare: • Scheduled sitemap crawls • Scheduled website crawls • All validated, linked URLs Compare Sitemaps Identify pages ‘Only in Sitemaps’ and not linked internally, plus all pages linked internally which are not in the sitemaps. Schedule regular sitemap and comparative website crawls to assess what's changed; you can monitor how much of the site is changing and map this against performance. You might spot some URL formats which are changing frequently and impacting crawl efficiency.
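
A rough sketch of the sitemap comparison using only the standard library: it parses the loc entries from an XML sitemap and diffs them against a set of internally linked URLs. The sitemap URL and the linked set are hypothetical placeholders for your own crawl data.

```python
import urllib.request
import xml.etree.ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_url):
    """Extract <loc> entries from a standard XML sitemap."""
    with urllib.request.urlopen(sitemap_url, timeout=10) as resp:
        tree = ET.parse(resp)
    return {loc.text.strip() for loc in tree.iter(f"{NS}loc")}

in_sitemap = sitemap_urls("https://example.com/sitemap.xml")    # hypothetical sitemap
linked = {"https://example.com/", "https://example.com/shoes"}  # from your own crawl

print("Only in sitemap (not linked internally):", sorted(in_sitemap - linked))
print("Linked internally but missing from sitemap:", sorted(linked - in_sitemap))
```
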
  • Where’s the link equity? •Identify pages delivering traffic but not internally linked •Understand the link profile of all pages: • Crawl aggregated link data •DeepCrawl automatically applies link metrics to all reports: Compare Landing Page URLs to Linked URLs By comparing sitemaps, GA and internal links, a Universal Crawl easily highlights URLs discovered ‘Only in Organic Landing Pages’ and not linked internally. You can add ‘Backlink Crawls’ to your DC projects, simply upload a comprehensive backlink profile URL list and let DC crawl and validate the URLs. This crawl also helps identify pages generating traffic but not necessarily linked internally, plus it automatically applies inbound link equity metrics to all DC reports at a URL level – very useful.
  • Watch your Language… Check your Hreflang! International SEO Considerations The correct use and implementation of hreflang tags is important for effective international SEO, but it can be confusing and even experienced SEOs get it wrong.
  • • Universal Crawl tests implementation across: • Sitemaps • Headers • On-page • Review a matrix of language alternatives for each page • Assess gaps and inconsistencies in the setup • Review a ‘Pages Without Hreflang Tags’ report • See David Sottimano’s MOZ post on HrefLang: http://moz.com/blog/hreflang-behaviour-insights International SEO Considerations DeepCrawl helps manage a seamless HrefLang integration by detecting hreflang tags in sitemaps, headers and on-page before showing a matrix of language alternatives for every page so you can see the gaps and inconsistencies in the setup.
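
As a minimal on-page hreflang sketch (sitemap and HTTP-header hreflang would need separate handling; requests, beautifulsoup4 and the example URL are assumptions), the snippet below collects the declared language alternatives for a page and warns if the page does not reference itself. A fuller audit would fetch each alternate and confirm the return tags.

```python
import requests
from bs4 import BeautifulSoup

def hreflang_map(url):
    """Return {hreflang value: alternate URL} declared on the page itself."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return {
        tag["hreflang"]: tag.get("href")
        for tag in soup.find_all("link", rel="alternate", hreflang=True)
    }

page = "https://example.com/en-gb/"  # hypothetical English page
alternates = hreflang_map(page)
print(alternates)
if page not in alternates.values():
    print("Warning: no self-referencing hreflang entry for this page")
# A fuller audit would fetch each alternate and confirm it links back (return tags).
```
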
  • • Review your options and consider your URL Universe • Setup your lab • Google Analytics • Google/BING Webmaster Tools • Deep Crawl – Universal Crawl • Follow the formula: • Maximised + Minimised = Optimised • Develop and test new recipes to focus your crawl spaces. #BreakingBadSEO
  • Thanks, Keep in Touch…