The Periodic Table of SEO Ranking Factors - Jenny Halasz

Technical factors still matter! Learn how we increased a client’s revenue by implementing only technical strategies.

Slide notes

  • 800%: the amount we increased a client’s revenue from organic search just by making technical changes. We cleaned up duplicate content, compressed images and scripts to save load time, and created and submitted new XML sitemaps.
  • Crawl budget is an important concept to understand. The search engines have millions of sites to crawl; they want to spend the most time on the ones that have frequent changes. Similarly, you want to set up clear signals about which are your most important pages and not waste time on duplicate content, redirect loops, or slow load time.
  • Crawl stats in Google Webmaster Tools can indicate health or “illness” of a site. The top one is an example of a site that may have a problem.
  • Which one is healthier? It’s a trick question. They are both healthy. While the one on the top shows declining numbers of pages crawled, they haven’t been making any changes to their content, so they don’t need Google to crawl all of those pages every day. It’s not like Google will forget about a page just because it hasn’t crawled it in a while.
  • The key here is to find out what’s normal for your site, and use this only as a diagnostic: an indicator, not a definitive measure of site health. (A rough spike-and-drop check is sketched after these notes.)
  • So what is the duplicate content problem, and what does it have to do with crawl budget? Because search engines use location (both literal, through site architecture, and figurative, through link value) as a signal of value, they get “confused,” for lack of a better word, whenever they encounter URLs that are different but have the same content on them.
  • One aside here is that sites rarely get in trouble for duplicate content unless they are doing it on purpose. Panda is often referred to as the “duplicate content” penalty, but that’s not correct. Panda is designed to identify low quality content; it just so happens that a lot of low quality content is also duplicative, either within sites or across sites. More often than not, the search engines will choose one version of the duplicate content, but that may not be the one you want it to be. And rel=canonical is not the only way to fix it; in fact, in most cases, it’s not even the best way. (A small canonical-check sketch follows these notes.) But you’ll learn more about that this afternoon in “The Crazy Complicated Technical Issues That Completely Sabotage the Best SEO Efforts”.
  • I jest, but the key here is that it’s all relative. If your site already loads in 2.5 seconds, improving your load time by a small fraction isn’t going to be that significant. If your site takes more than 6 seconds to load, you have a problem, and you need to fix it.
  • The amount of time it should take a page to load… on desktop. By the end of 2013, one third of all searches will come from mobile devices, and 59% of the public think a webpage should load in 3 seconds or less on their mobile. One Aberdeen study found that a 1-second delay in load time could result in a 7% loss in conversions and a 16% decrease in customer satisfaction. (A load-time spot check using these figures is sketched after these notes.)
  • Load times for the top U.S. retail websites are 22 percent slower than in December 2011, with a median load time for first-time visitors to a retail website’s home page at 7.25 seconds.
  • In December 2012, the median webpage contained 79 requests (such as images, HTML, and CSS/JavaScript files), an increase of 8.22% from December 2011 median of 73 requests.
  • This is an actual sitemap proposal that we recently received from a potential client. This is an example of how not to do site architecture. Do you see how many choices a visitor has to make? All so that the website could optimize for all of these variations of keywords. I won’t get into how you should do that, because that will be covered in a lot of detail in tomorrow’s session about content. But here’s the thing: suppose I was in an auto accident, broke my leg, and suffered a concussion. Which one of these do I choose?
  • It’s ok to create pages for users that happen to have keywords in them, but don’t make life any more difficult for users than it has to be. Clean up your XML sitemaps: only submit valid pages, use realistic update timeframes, and don’t put them all at 90% priority. (A small sitemap-generation sketch follows these notes.) Keep it Simple Sweetheart, but don’t believe that garbage about keeping your URLs under “x number of characters.”
  • Google is your boss, and it’s a five-year-old child. It doesn’t have any attention span, it doesn’t think it needs to understand complex concepts, and if it likes you, it will keep coming back.
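
The crawl-stats diagnostic described in these notes can be roughed out as a script. This is a minimal sketch, assuming daily pages-crawled counts have been exported from Webmaster Tools into a hypothetical crawl_stats.csv file with date and pages_crawled columns; the 14-day window and 50% threshold are arbitrary illustration values, not anything Google publishes.

    # Flag big spikes or drops in daily pages crawled, relative to a trailing average.
    # crawl_stats.csv is a hypothetical export with columns: date,pages_crawled
    import csv

    WINDOW = 14        # days used for the trailing baseline (arbitrary choice)
    THRESHOLD = 0.5    # flag days more than 50% away from the baseline (arbitrary choice)

    rows = []
    with open("crawl_stats.csv", newline="") as f:
        for row in csv.DictReader(f):
            rows.append((row["date"], int(row["pages_crawled"])))

    for i, (date, crawled) in enumerate(rows):
        history = [count for _, count in rows[max(0, i - WINDOW):i]]
        if not history:
            continue
        baseline = sum(history) / len(history)
        if baseline and abs(crawled - baseline) / baseline > THRESHOLD:
            print(f"{date}: {crawled} pages crawled vs. ~{baseline:.0f} baseline - worth a look")

The point, as the notes say, is to learn what is normal for your own site; the script only surfaces days that deviate from that.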
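
A quick way to see whether duplicate URL variants consolidate is to check which canonical URL each variant declares. A minimal sketch, assuming hypothetical example.com URLs; the pattern match is deliberately naive (it expects rel before href in the link tag), and rel=canonical is, as the notes point out, only one of several possible fixes.

    # Print the rel=canonical target declared by each URL variant.
    # The URLs below are hypothetical placeholders; substitute your own suspected duplicates.
    import re
    import urllib.request

    variants = [
        "https://www.example.com/widgets",
        "https://www.example.com/widgets/",
        "https://www.example.com/widgets?utm_source=newsletter",
    ]

    # Naive pattern: assumes rel="canonical" appears before href in the <link> tag.
    canonical_re = re.compile(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        re.IGNORECASE,
    )

    for url in variants:
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except OSError as exc:
            print(f"{url}: fetch failed ({exc})")
            continue
        match = canonical_re.search(html)
        print(f"{url} -> canonical: {match.group(1) if match else 'none declared'}")

If the variants report different canonicals, or none at all, that is the kind of confusion the notes describe.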
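
For the page speed figures, here is a rough spot check. A minimal sketch, assuming a hypothetical example.com URL and a hypothetical 2% baseline conversion rate; it times only the HTML download from one machine (a real page also loads images, CSS, and scripts), and the impact line simply applies the Aberdeen 7%-per-second figure quoted in the notes.

    # Time a single HTML fetch and roughly estimate conversion impact using the
    # ~7% loss per extra second figure quoted above. URL and conversion rate are placeholders.
    import time
    import urllib.request

    URL = "https://www.example.com/"
    TARGET_SECONDS = 3.0          # the 3-second expectation cited in the deck
    BASELINE_CONVERSION = 0.02    # hypothetical 2% conversion rate

    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=30).read()
    elapsed = time.perf_counter() - start

    print(f"Fetched {URL} in {elapsed:.2f}s (HTML only, single request)")
    if elapsed > TARGET_SECONDS:
        extra = elapsed - TARGET_SECONDS
        estimated_drop = BASELINE_CONVERSION * 0.07 * extra
        print(f"~{extra:.1f}s over target; rough conversion-rate drop: {estimated_drop:.2%}")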
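
The sitemap advice above (only valid pages, realistic update timeframes, varied priorities) can be illustrated with a short generator. A minimal sketch, assuming a hypothetical page list; the URLs, change frequencies, and priority values are placeholders, not recommendations for any particular site.

    # Generate a small XML sitemap with realistic changefreq values and varied priorities.
    # The page list is a hypothetical placeholder; include only live, canonical URLs.
    import xml.etree.ElementTree as ET

    pages = [
        ("https://www.example.com/",          "weekly",  "1.0"),
        ("https://www.example.com/services/", "monthly", "0.8"),
        ("https://www.example.com/blog/",     "daily",   "0.6"),
        ("https://www.example.com/contact/",  "yearly",  "0.3"),
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc, changefreq, priority in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "changefreq").text = changefreq
        ET.SubElement(url, "priority").text = priority

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
    print("Wrote sitemap.xml with", len(pages), "URLs")

Only submit files like this that list pages you actually want crawled.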

Transcript

  • 1. @JennyHalasz THE PERIODIC TABLE OF SEO RANKING FACTORS: 2013 EDITION. Why Technical SEO Still Works. Jenny Halasz, June 11, 2013
  • 2. @JennyHalasz
  • 3. @JennyHalasz 800%: the amount we increased a client’s revenue from organic search just by making technical changes. Cleaned up duplicate content, compressed images and scripts to save load time, and created and submitted new XML sitemaps.
  • 4. @JennyHalasz You Have to Crawl Before You Walk. Crawl budget is an important concept to understand. The search engines have millions of sites to crawl; they want to spend the most time on the ones that have frequent changes. Similarly, you want to set up clear signals about which are your most important pages and not waste time on duplicate content, redirect loops, or slow load time.
  • 5. @JennyHalasz Which one has a problem? Crawl stats in Google Webmaster Tools can indicate health or “illness” of a site. The top one is an example of a site that may have a problem.
  • 6. @JennyHalasz Now which one has a problem? Which one is healthier? It’s a trick question. They are both healthy. While the one on the top shows declining numbers of pages crawled, they haven’t been making any changes to their content, so they don’t need Google to crawl all of those pages every day. It’s not like Google will forget about a page just because it hasn’t crawled it in a while.
  • 7. @JennyHalasz What Do You Look For? Consistent crawl patterns; increased crawling when changes are made, decreased when the site is static; big spikes or drops can signal a problem. The key here is to find out what’s normal for your site, and use this only as a diagnostic: an indicator, not a definitive measure of site health.
  • 8. @JennyHalasz What is the Duplicate Content Problem? So what is the duplicate content problem, and what does it have to do with crawl budget? Because search engines use location (both literal, through site architecture, and figurative, through link value) as a signal of value, they get “confused,” for lack of a better word, whenever they encounter URLs that are different but have the same content on them. http://blog.bigmouthmedia.com/files/2011/12/SEO8.png
  • 9. @JennyHalasz Duplicate Content Can Cause: distributed link value, wasted crawl budget, search engine confusion, the wrong page to rank, poor user experience, and “thin” content penalties*. One aside here is that sites rarely get in trouble for duplicate content unless they are doing it on purpose. Panda is often referred to as the “duplicate content” penalty, but that’s not correct. Panda is designed to identify low quality content. It just so happens that a lot of low quality content is also duplicative, either within sites or across sites. More often than not, the search engines will choose one version of the duplicate content. But that may not be the one you want it to be. And rel=canonical is not the only way to fix it. In fact, in most cases, it’s not even the best way. But you’ll learn more about that this afternoon in “The Crazy Complicated Technical Issues That Completely Sabotage the Best SEO Efforts”.
  • 10. @JennyHalasz Page Speed. I jest, but the key here is that it’s all relative. If your site already loads in 2.5 seconds, improving your load time by a small fraction isn’t going to be that significant. If your site takes more than 6 seconds to load, you have a problem, and you need to fix it.
  • 11. @JennyHalasz 3 Seconds. The amount of time it should take a page to load… on desktop. By the end of 2013, one third of all searches will come from mobile devices. 59% of the public think a webpage should load in 3 seconds or less on their mobile.
  • 12. @JennyHalasz Radware Study (Spring 2013), http://bit.ly/1aVYqlo. Load times for the top U.S. retail websites are 22 percent slower than in December 2011, with a median load time for first-time visitors to a retail website’s home page at 7.25 seconds.
  • 13. @JennyHalasz Radware Study (Spring 2013), http://bit.ly/1aVYqlo. In December 2012, the median webpage contained 79 requests (such as images, HTML, and CSS/JavaScript files), an increase of 8.22% from the December 2011 median of 73 requests.
  • 14. @JennyHalasz Radware Study (Spring 2013). Pages are getting slower (and larger). Browsers are not keeping up: Firefox is fastest, followed by Chrome and IE9. Many site owners are not implementing potential performance improvements.
  • 15. @JennyHalasz URLs & Architecture. (Add stat about more than three choices – Peter Shankman)
  • 16. @JennyHalasz Best Practices for URLs: don’t create pages just for the sake of keywords; clarify which pages have value (XML sitemaps); KISS, but don’t panic about URL length. It’s ok to create pages for users that happen to have keywords in them, but don’t make life any more difficult for users than it has to be. Clean up your XML sitemaps: only submit valid pages, use realistic update timeframes, and don’t put them all at 90% priority. Keep it Simple Sweetheart, but don’t believe that garbage about keeping your URLs under “x number of characters.”
  • 17. @JennyHalasz 3 Takeaways! 1. Google is your boss: get things in order, have an agenda, don’t waste time. 2. User intent comes first: build site architecture based on what users want. 3. Speed matters: the average user’s attention span is 3 seconds; don’t waste it.
  • 18. @JennyHalasz THANK YOU! ARCHOLOGY. Jenny Halasz, President. http://www.archology.com, 919-747-4791, info@archology.com. Follow us @archologyweb