Max Prin - SMX West 2017 - What to do when Google can't understand your JavaScript

  1. SEO Best Practices for JavaScript: What To Do When Google Can't Understand Your JavaScript (#SMX #23A2 @maxxeight)
  3. How Search Engines Typically Work
  4. Everything is there!
  7. Web Development Technologies
  8. Search Engines’ Mission: Serving the best result
  9. No page title, no content, etc.
  10. It’s in the DOM!
  11. How Search Engines Typically Work: Render (“Understanding web pages better”)
  13. So, what now?
  14. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt (a sketch follows below)
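Google renders pages, so it needs to fetch the same JavaScript and CSS files a browser would. A minimal robots.txt sketch, assuming hypothetical /js/ and /css/ directories:

```
# Hypothetical paths. Blocking them would stop Googlebot from
# rendering the page:
#   User-agent: *
#   Disallow: /js/
#   Disallow: /css/

# Instead, leave rendering resources crawlable:
User-agent: *
Allow: /js/
Allow: /css/
```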
  15. 15. #SMX #23A2 @maxxeight
  16. 16. #SMX #23A2 @maxxeight
  17. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html"> (contrast below)
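Crawlers discover URLs through href attributes; navigation that exists only inside an event handler leaves nothing to extract. A minimal contrast, with link.html as a placeholder target:

```html
<!-- Not crawlable: the URL lives only in JavaScript -->
<span onclick="window.location.href='link.html'">Next page</span>

<!-- Crawlable: a real anchor, which JavaScript can still enhance -->
<a href="link.html">Next page</a>
```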
  19. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html">
      – 1 unique “clean” URL per piece of content (and vice versa)
  20. URL Structures (with AJAX websites)
    • Fragment identifier: example.com/#url
      – Not supported. Ignored. URL = example.com
    • Hashbang: example.com/#!url (pretty URL)
      – Google and Bing will request example.com/?_escaped_fragment_=url (ugly URL)
      – The _escaped_fragment_ URL should return an HTML snapshot
    • Clean URL: example.com/url
      – Leverages the pushState() function from the History API
      – Must return a 200 status code when loaded directly
  21. History API - pushState()
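A minimal sketch of pushState-driven navigation for an AJAX site, assuming a hypothetical loadContent() helper and a #content container. It keeps the two properties the slides call for: one clean URL per piece of content, and URLs that remain addressable on their own:

```js
// Hypothetical helper: fetch a content fragment and inject it into the page
async function loadContent(url) {
  const res = await fetch(url);
  document.querySelector('#content').innerHTML = await res.text();
}

// Intercept clicks on normal, crawlable anchors and push a clean URL
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-ajax]');
  if (!link) return;
  event.preventDefault();
  history.pushState({ url: link.href }, '', link.href); // clean URL, no #!
  loadContent(link.href);
});

// Handle back/forward so every state stays addressable
window.addEventListener('popstate', (event) => {
  if (event.state) loadContent(event.state.url);
});
```

Because the anchors keep real href values, the same URLs work for crawlers and for users loading them directly (where the server must answer with a 200, per slide 20).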
  22. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html">
      – 1 unique “clean” URL per piece of content (and vice versa)
    • Rendering
      – Load content automatically, not based on user interaction (click, mouseover, scroll); see the sketch below
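Googlebot doesn't click, hover, or scroll, so content gated behind those events may never enter the rendered DOM. A small sketch of the difference, with a hypothetical /api/reviews endpoint and #reviews container:

```js
// Hypothetical helper: fetches reviews and writes them into the page
async function loadReviews() {
  const res = await fetch('/api/reviews');
  document.querySelector('#reviews').innerHTML = await res.text();
}

// Risky for indexing: the reviews only exist in the DOM after a click
// document.querySelector('#show-reviews').addEventListener('click', loadReviews);

// Safer: load the content automatically as soon as the page loads
document.addEventListener('DOMContentLoaded', loadReviews);
```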
  24. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html">
      – 1 unique “clean” URL per piece of content (and vice versa)
    • Rendering
      – Load content automatically, not based on user interaction (click, mouseover, scroll)
      – The 5-second rule
  25. The 5-second rule: test with Google Fetch & Render and PageSpeed Insights
  26. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html">
      – 1 unique “clean” URL per piece of content (and vice versa)
    • Rendering
      – Load content automatically, not based on user interaction (click, mouseover, scroll)
      – The 5-second rule
      – Avoid JavaScript errors (bots vs. browsers); see the sketch below
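A bot's renderer can be less forgiving than a desktop browser, and one uncaught error can halt script execution before the content is ever written. A hedged sketch of the defensive pattern, where renderMainContent() and setUpLazyImages() are hypothetical page functions:

```js
// Isolate the critical path so one failure doesn't blank the page
try {
  renderMainContent(); // hypothetical: writes the primary content
} catch (err) {
  console.error('Main content failed to render:', err);
}

// Feature-detect newer APIs instead of assuming they exist
if ('IntersectionObserver' in window) {
  setUpLazyImages(); // hypothetical enhancement, safe to skip for bots
}
```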
  28. The “Old” AJAX Crawling Scheme And HTML Snapshots
    • HTML snapshots are only required with uncrawlable URLs (#!); a serving sketch follows below
    • When used with clean URLs: 2 URLs requested for each piece of content (crawl budget!)
    • Served directly to (other) crawlers (Facebook, Twitter, LinkedIn, etc.)
    • Matching the content in the DOM
    • No JavaScript (except JSON-LD markup)
    • Not blocked from crawling
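A minimal sketch of serving a snapshot under the hashbang scheme, written with Express; the prerender() helper and file paths are hypothetical. As slide 20 notes, Google and Bing translate example.com/#!url into a ?_escaped_fragment_=url request:

```js
const express = require('express');
const app = express();

app.get('/', async (req, res) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // A crawler asked for the snapshot of example.com/#!<fragment>:
    // return fully rendered, JavaScript-free HTML matching the DOM
    res.send(await prerender(fragment)); // hypothetical prerender helper
  } else {
    // Normal visitors get the JavaScript-driven app shell
    res.sendFile(__dirname + '/index.html');
  }
});

app.listen(3000);
```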
  30. How To Make Sure Google Can Understand Your Pages
    • Crawling
      – Don’t block resources via robots.txt
      – onclick + window.location != <a href="link.html">
      – 1 unique “clean” URL per piece of content (and vice versa)
    • Rendering
      – Load content automatically, not based on user interaction (click, mouseover, scroll)
      – The 5-second rule
      – Avoid JavaScript errors (bots vs. browsers)
    • Indexing
      – Mind the order of precedence (SEO signals and content)
  31. Order of precedence: HTTP headers, HTML source, DOM, HTML snapshot (example below)
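The same signal can be declared at several of these layers, and they can contradict each other. A small illustration with hypothetical URLs, where the HTML source and the JavaScript-modified DOM disagree about the canonical:

```html
<!-- The HTML source declares one canonical... -->
<link rel="canonical" href="https://example.com/page-a">

<script>
  // ...but JavaScript rewrites it in the DOM. Whether a search engine
  // sees page-a or page-b depends on which layer it reads, hence
  // "mind the order of precedence".
  document.querySelector('link[rel="canonical"]').href =
    'https://example.com/page-b';
</script>
```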
  32. Tools For SEO And JavaScript
    • Google cache (unless HTML snapshots)
  34. Tools For SEO And JavaScript
    • Google cache (unless HTML snapshots)
    • Google Fetch & Render (Search Console)
      – Limited in terms of bytes (~200 KB)
      – Doesn’t show the HTML snapshot (DOM)
  36. Tools For SEO And JavaScript
    • Google cache (unless HTML snapshots)
    • Google Fetch & Render (Search Console)
      – Limited in terms of bytes (~200 KB)
      – Doesn’t show the HTML snapshot (DOM)
    • Fetch & Render As Any Bot (TechnicalSEO.com)
  38. Tools For SEO And JavaScript
    • Google cache (unless HTML snapshots)
    • Google Fetch & Render (Search Console)
      – Limited in terms of bytes (~200 KB)
      – Doesn’t show the HTML snapshot (DOM)
    • Fetch & Render As Any Bot (TechnicalSEO.com)
    • Chrome DevTools (JavaScript Console)
  40. Tools For SEO And JavaScript
    • Google cache (unless HTML snapshots)
    • Google Fetch & Render (Search Console)
      – Limited in terms of bytes (~200 KB)
      – Doesn’t show the HTML snapshot (DOM)
    • Fetch & Render As Any Bot (TechnicalSEO.com)
    • Chrome DevTools (JavaScript Console); a quick console snippet follows below
    • SEO crawlers
      – ScreamingFrog
      – Botify
      – Scalpel (Merkle proprietary tool)
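One quick check following the deck's Chrome DevTools suggestion: compare the rendered DOM against the raw HTML source (view-source:) to see what JavaScript added, and watch the Console for errors. For example, run in the DevTools Console:

```js
// copy() is a DevTools Console utility: puts the rendered DOM on the
// clipboard so it can be diffed against the raw HTML source
copy(document.documentElement.outerHTML);

// Quick sanity checks on rendered SEO signals
console.log(document.title);
console.log(document.querySelector('link[rel="canonical"]')?.href);
```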
  41. LEARN MORE: UPCOMING @SMX EVENTS. THANK YOU! SEE YOU AT THE NEXT #SMX