Search-Friendly Web Development at RubyNation


  • 1. Search Engines (comScore February 2011 Rankings): Google 65.6%, Yahoo 16.1%, Bing 13.1%, Ask 3.4%, AOL 1.7%
  • 2. Why?
  • 3. “SEO Expert”
  • 4. == “Spammer”
  • 5. White Hat vs. Black Hat
  • 6. Discovery + Navigation
  • 7. A Story...
  • 8. “Not my audience!”
  • 9. “Not my audience!”
  • 10. “Experts” Not Needed
  • 11. Professional Practices
      • User-Centric Design
      • Test-Driven Development
      • DRY and Maintainable Code
      • Server Performance
      • Client-Side Performance
      • Search Engine Considerations
  • 12. Six Simple Rules
      • Can’t outsmart Google (or Bing or Y!)
      • Follow Google’s advice
      • Obey conventions and standards
      • Stay away from hacks
      • Understand how search engines work
      • Think like a searcher
  • 13. Search Engine Pipeline
      • Crawling
      • Indexing
      • Ranking
  • 14. <crawling>
  • 15. Discovery
      • Links to your pages from other sites
      • Links to your pages from within your site
      • Your sitemap.xml
  • 16. Check internal links
      $ wget --mirror
  • 17. sitemap.xml
      • Tell search engines exactly what you want them to crawl
      • Limit per sitemap: 50,000 URLs, 10MB
      • Can specify multiple sitemaps with a sitemap index
  • 18. <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>http://www.example.com/</loc>
          <lastmod>2010-01-01</lastmod>
          <changefreq>monthly</changefreq>
          <priority>0.8</priority>
        </url>
      </urlset>
  • 19. Generating sitemap.xml
      • Write it by hand, stick it in public/
      • Build a controller, action, and route entry to respond to ‘sitemap.xml’. Use XML Builder to generate the entries. Cache it.
      • Importantly: Strive for 100% coverage.
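The generator suggested above can be sketched outside Rails as well. Here is a minimal plain-Ruby version using REXML from the standard library instead of Rails' XML Builder; the entries passed in are hypothetical examples:

```ruby
require "rexml/document"

# Minimal sketch of a sitemap.xml generator. In a Rails app this would
# live in a cached controller action; here it is a standalone function.
def sitemap_xml(entries)
  doc = REXML::Document.new
  doc << REXML::XMLDecl.new("1.0", "UTF-8")
  urlset = doc.add_element(
    "urlset",
    "xmlns" => "http://www.sitemaps.org/schemas/sitemap/0.9"
  )
  entries.each do |entry|
    url = urlset.add_element("url")
    url.add_element("loc").text = entry[:loc]
    url.add_element("lastmod").text = entry[:lastmod] if entry[:lastmod]
  end
  out = +""
  doc.write(out)
  out
end

xml = sitemap_xml([{ loc: "http://www.example.com/", lastmod: "2010-01-01" }])
```

In a real app the entries array would be built from your models (every public record, striving for the 100% coverage the slide calls for), and the result cached.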
  • 20. robots.txt
      • Exclusion rather than inclusion
      • robotstxt.org
      User-agent: *
      Disallow: /profile
  • 21. Be nice to the crawler
      • Be performant. Fast server response. Fast page load. Compress files. Use the If-Modified-Since header.
      • Non-www vs. www: pick one.
      • Ensure unique content. Use <link rel="canonical"/> where appropriate.
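The If-Modified-Since point can be illustrated with a small sketch (function and variable names here are hypothetical; in Rails you would typically reach for `fresh_when`/`stale?` instead): return 304 Not Modified when the crawler's cached copy is still current, so it spends its crawl budget on pages that actually changed.

```ruby
require "time"

# Sketch of conditional-GET logic. last_modified is the page's mtime;
# header is the raw If-Modified-Since value from the request, or nil.
def response_status(last_modified, header)
  return 200 if header.nil?            # no cached copy: send the full page
  cached_at = Time.httpdate(header)    # parse the HTTP-date header
  last_modified > cached_at ? 200 : 304
end

page_mtime = Time.httpdate("Sat, 01 Jan 2011 00:00:00 GMT")
response_status(page_mtime, "Tue, 01 Feb 2011 00:00:00 GMT")  # => 304
```

A 304 response carries no body, which is cheaper for both your server and the bot.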
  • 22. </crawling>
  • 23. <indexing>
  • 24. Don’t sabotage it
      • Don’t use a 302 redirect when you mean a 301 redirect.
      • Make sure images, video, Flash, Silverlight, and AJAX are accessible.
      • See the Google Webmaster Central Blog for details.
      • Region-specific content? Think about the bots.
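In a Rails app the 301-vs-302 distinction usually comes down to routing. A hypothetical config/routes.rb fragment (not runnable outside a Rails app; the paths are made up for illustration):

```ruby
# config/routes.rb (hypothetical paths). Rails' redirect helper issues
# 301 Moved Permanently by default -- the right status for content that
# has moved for good, so search engines carry ranking to the new URL.
get "/old-talks/:id", to: redirect("/talks/%{id}")

# Pass an explicit status only when the move really is temporary:
get "/promo", to: redirect("/talks", status: 302)
```

A 302 tells the engine to keep indexing the old URL, which is almost never what you want for a restructured site.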
  • 25. </indexing>
  • 26. <ranking>
  • 27. <title>
      • Most important element to search engines
      • Think long and hard about it
      • Keywords! Think like a searcher.
      • Best format: Page Title | Site Name
  • 28. URLs
      • Override to_param for pretty URLs.
      • Dashes are word separators, underscores are not. Use dashes.
      • International domains are treated as such.
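A minimal sketch of the to_param override (the Article model and its attributes are hypothetical): Rails calls #to_param when building URLs from route helpers, so returning an id-prefixed, dash-separated slug gives keyword-rich URLs while `Model.find(params[:id])` keeps working, since `to_i` stops at the first non-digit.

```ruby
# Plain-Ruby sketch; in Rails this would be an ActiveRecord model.
class Article
  attr_reader :id, :title

  def initialize(id, title)
    @id, @title = id, title
  end

  # Id-prefixed slug: non-alphanumeric runs become single dashes,
  # and any leading/trailing dash is stripped.
  def to_param
    slug = title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
    "#{id}-#{slug}"
  end
end

Article.new(42, "Search-Friendly Web Development").to_param
# => "42-search-friendly-web-development"
```

Note the slide's advice baked in: dashes, not underscores, separate the words.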
  • 29. <meta>
      • <meta name="description" content="..." />
      • Make it unique for every page. Use content_for.
      • Shown to users, doesn’t affect ranking.
      • <meta name="keywords" ... /> is ignored
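The per-page description idea can be sketched with plain ERB from the standard library (in Rails you would use content_for in the view and yield in the layout; the titles and descriptions below are made-up examples):

```ruby
require "erb"

# Layout template: each page supplies its own title and description.
layout = ERB.new(<<~HTML)
  <head>
    <title><%= page_title %> | Example Site</title>
    <meta name="description" content="<%= description %>" />
  </head>
HTML

page_title  = "Search-Friendly Web Development"
description = "Slides on crawling, indexing, and ranking for Rails apps."
html = layout.result(binding)
```

The description does not affect ranking, but it is often what searchers see under your link, so a unique one per page improves click-through.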
  • 30. Headings and Content
      • <h> tags should be used appropriately.
      • Page content should match what the <title> and <h> tags refer to.
      • Limit use of text-indent:-9999px and display:none in CSS.
  • 31. Rich Snippets
      • Microformats, RDFa, Microdata
  • 32. </ranking>
  • 33. Tools
      • Google Webmaster Tools
      • Bing Webmaster Tools
      • Yahoo! Site Explorer
  • 34. Five Takeaways
      • Think like a searcher
      • Create a sitemap.xml
      • Optimize your <title>s
      • Use Google Webmaster Tools
      • Read the Google Webmaster Blog