Here we take a look at server log file analysis for SEO, exploring not only the benefits but also the process of finding, gathering, shipping and analysing user agent logs.
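As a flavour of the kind of analysis involved, here is a minimal sketch (assuming a combined-format access log saved locally as access.log, a hypothetical filename) that counts requests by URL for user agents claiming to be Googlebot:

```python
import re
from collections import Counter

# Combined log format: ip - - [time] "METHOD /path HTTP/x" status bytes "referer" "user-agent"
LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"')

hits = Counter()
with open("access.log") as log:                 # hypothetical local copy of the server log
    for line in log:
        match = LINE.search(line)
        if match and "Googlebot" in match.group("agent"):
            hits[match.group("path")] += 1

for path, count in hits.most_common(20):        # most-crawled URLs by (claimed) Googlebot
    print(count, path)
```

Real analysis would also verify Googlebot by reverse DNS, but even this crude cut shows where crawl activity is actually going.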
Mobile-first goes beyond simply indexing in a search engine. It has several meanings, which traverse user behaviour, web design, adoption in different territories, adoption amongst user segments and adoption in different verticals. We need to be aware of these fundamental changes in search behaviour and adapt quickly.
A few of the recent findings we discovered whilst working on an SEO beast, covering crawling, server log file analysis, site speed optimization and database optimization. Technical SEO insights.
Technical SEO Myths Facts And Theories On Crawl Budget And The Importance Of ... | Dawn Anderson MSc DigM
There are a lot of myths, facts and theories on crawl budget, and the term is bandied around a lot. This deck looks to address some of those myths and also explores some additional theories around the concepts of 'crawl rank' and 'search engine embarrassment'.
Technical SEO - Gone is Never Gone - Fixing Generational Cruft and Technical ... | Dawn Anderson MSc DigM
You have a shiny new site and your brand is looking for a fresh start with its offering. It may be one of many past migrations, protocol switches and redirections you've undertaken historically. But then you find that things didn't quite go as you expected. You never really got back to where you wanted to be in organic search. Part of this is because 'Gone is never Gone'. Every URL that was ever known on your site is listed in the history logs of the Google search engine system, and those history logs are used to determine how much crawling time your site will be apportioned. You have inherited technical SEO debt and generational cruft, where everything gets blurred for Google in understanding which is the target URL for a particular term. This can be particularly prevalent when you migrate from one ecommerce platform to another, because past crawling rules developed for your site are no longer applicable but are still in the history and the crawl patterns discovered.
The document discusses different types of duplicate content that can exist on websites, including perfect duplicates, near duplicates, partial duplicates, and content inclusion. It explains that search engines like Google have developed techniques to detect and handle different types of duplicate content differently. For example, perfect duplicates are filtered out before being indexed, while near duplicates or those with different URLs but similar text (DUST) may be indexed but not crawled as frequently to save resources. The document also discusses challenges around detecting different types of duplicate content and how search engines aim to return the most relevant result from a cluster of near-duplicate pages for a given query.
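Google's actual duplicate-detection pipeline isn't public, but the general idea behind near-duplicate (DUST) detection can be illustrated with word shingles and Jaccard similarity; this is a toy sketch, not the search engine's implementation:

```python
def shingles(text, k=3):
    """Return the set of k-word shingles for a piece of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

page_a = "blue widget in stock with free delivery on all blue widgets"
page_b = "blue widget in stock with free delivery on all widgets"

print(f"{jaccard(shingles(page_a), shingles(page_b)):.2f}")  # values near 1.0 flag near-duplicates
```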
Dawn Anderson SEO Consumer Choice Crawl Budget Optimization Conflicts | Dawn Anderson MSc DigM
The document discusses how humans are drawn to choice but can be overwhelmed by too many options. It describes research showing people were more likely to buy jam from a display with 6 options than 24. The document also discusses how search engines struggle with the large amount of near-duplicate content on the web, which wastes crawling resources. It provides examples of research done at Google to better detect near-duplicate pages in order to improve search engine efficiency.
WHY YOU SHOULD CARE ABOUT TAKING CARE OF CRAWLS (INTELLIGENT USE OF CRAWL ALLOCATION (BUDGET)). Investigating 'crawl budget', 'crawl rank', 'crawl tank' and 'crawl scheduling' by search engines.
Creating Commerce Reviews and Considering The Case For User Generated Reviews | Dawn Anderson MSc DigM
This document discusses creating and considering user generated reviews. It notes that reviews disrupt the traditional path to purchase and are influential for consumers. There are two main types of reviews: professional reviews from a single entity that can be tracked, and user/consumer reviews from a crowd that may be biased. User reviews can build relevance and momentum through their numbers and natural language. They provide a full picture for consumers but come with responsibilities around moderation, fake reviews, and ensuring objectivity. In the end, both professional and user reviews have benefits and challenges to consider.
An infinite loop occurs when a sequence of instructions in a computer program loops endlessly without a terminating condition. The document discusses issues with having too many URLs on a site that can lead to crawl budget problems with search engines like Google. It provides tips on using XML sitemaps, internal linking strategies, canonicalization, and keeping content fresh to help search engines better crawl and index a large site within the finite crawl budget.
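One of the tips mentioned, keeping an XML sitemap limited to the URLs you actually want crawled, is easy to script; a minimal sketch using only the standard library (the URL list and lastmod dates are placeholders):

```python
import xml.etree.ElementTree as ET

# Placeholder list of canonical URLs with last-modified dates
pages = [
    ("https://www.example.com/", "2017-11-01"),
    ("https://www.example.com/category/widgets", "2017-10-28"),
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = lastmod

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```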
When multiple URLs are near-duplicates, Google will choose one and discard the others. This is to ensure that users are not annoyed by multiple URLs which have the same, or very nearly the same, content and meet a search query in the same context and with the same intent equally well. Many ecommerce sites fall foul of cannibalisation (cannibalization) of their own SEO success because their sites contain a number of mistakes around issues such as canonicalization, inconsistent internal link signals, semantic misalignment of intent in site categories and sections, incorrect implementation of hreflang internationalization, and other 'clues' which search engines use to identify 'the best' version or source of information for a search query. Here we identify some of the symptoms of SEO cannibalization and seek to find some solutions which can help to increase visibility for target pages in websites.
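One cheap way to surface the cannibalisation symptoms described above is to group a crawl export by page title and flag titles shared by more than one URL; the file and column names below are assumptions, not any particular crawler's format:

```python
import csv
from collections import defaultdict

urls_by_title = defaultdict(list)
with open("crawl_export.csv", newline="") as f:      # assumed columns: url, title
    for row in csv.DictReader(f):
        urls_by_title[row["title"].strip().lower()].append(row["url"])

# Titles carried by more than one URL are cannibalisation candidates worth reviewing
for title, urls in sorted(urls_by_title.items()):
    if len(urls) > 1:
        print(title)
        for url in urls:
            print("   ", url)
```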
Using 'page importance' in ongoing conversation with Googlebot to get just a bit more crawl budget as part of technical SEO strategy for ecommerce and enterprise SEO website projects
Bringing in the family to emphasise importance and win during crawling | Dawn Anderson MSc DigM
In a web which is bursting at the seams with content, we need strategies to turn Googlebot's head and emphasise 'Page Importance'. Bringing in family during crawling from a related sibling, auntie, uncle or parent URL perspective can help to drive signals around which are the key pages in a site. Furthermore, using strong hub pages to meet needs is also emphasised as part of this process.
Googlebot has been put on a diet of URLs by the Scheduler in the web crawling system. If your URLs are not on the list, Googlebot is not coming in. The increasing influx of content flooding the internet means that there is a need for prioritisation of the web pages and files visited. Are you telling Googlebot and the URL Scheduler that your content is less important than it is via technical SEO and architectural blunders? There's a real need to understand Googlebot's persona and that of the Scheduler, along with the jobs they do, in order to 'talk to the spider' and gain more from your time with it.
How to Perform SEO Audits for Maximized Efficiency & Value | alanbleiweiss
SEO audits require a standardized process to maximize efficiency, yet enough customization to ensure you find all the important issues. This deck is meant to help provide you with a standardized process to achieve that goal, from a world class forensic SEO audit professional.
Technical SEO - Generational cruft in SEO - there is never a new site when th... | Dawn Anderson MSc DigM
This document discusses how search engines like Google maintain historical records of URLs they have crawled, including metrics like crawl frequency and importance. This historical data is used to predict how often URLs should be recrawled and prioritize them in the crawling queue. Even URLs that return 404 or 410 responses may still be recrawled periodically since search engines never fully remove pages from their indexes. Managing URL history and prioritizing crawls becomes challenging at large scale due to the massive number of URLs maintained in search engine databases over time.
SEO - Stop Eating Your Words - Avoid Cannibalisation Of Your Sites | Dawn Anderson MSc DigM
Many webmasters and SEOs are stealing from their own potential to rank in Google Search because they end up ranking the wrong pages for search terms. They are competing with themselves in many cases. Google does not always know which page to rank from a site for a query, due to many things, one of which is 'too similar' content or 'wrong version' indexation. There are a number of ways to identify this problem and formulate strategies to fix it using on-page signals and your own site's power and relevance, with clues from within as to which page is most important for given queries.
Pubcon Vegas 2017 You're Going To Screw Up International SEO - Patrick Stox | patrickstox
The document discusses many of the common issues that can arise when implementing hreflang tags for internationalization, such as tools providing incorrect information; content being served from different URLs than indexed; duplicate pages causing problems; and it taking time for all language versions to be crawled. It emphasizes that internationalization is complex with multiple systems involved and recommends automating the process as much as possible to avoid manual errors, and to expect that problems will occur and need repeated checking.
Using Competitive Gap Analyses to Discover Low-Hanging Fruit | Keith Goode
Presented at Pubcon - Las Vegas on Tuesday, November 7th, 2017, for the panel Actionable SEO: Low-Hanging Fruit, this deck discusses the importance of competitive intelligence for keywords and links for finding opportunities that you may have missed.
A Technical Look at Content - PUBCON SFIMA 2017 - Patrick Stox | patrickstox
This document discusses various techniques for analyzing content quality, including those used by search engines like Google. It covers topics like keyword density, latent semantic analysis, TF-IDF, word embeddings, and more. The key message is that while search engines provide guides on quality content, there are many factors to consider and the best approach is to primarily focus on creating useful content for users.
Keeping Things Lean & Mean: Crawl Optimisation - Search Marketing Summit AU | Jason Mun
This document discusses crawl optimization and how to manage a website's crawl budget. It defines crawl optimization as controlling what content search engines can and cannot crawl and index. The document explains that a site's crawl budget is related to its PageRank, with higher ranked pages receiving more frequent crawls. It then presents a case study where an ecommerce site saw a spike in crawled and indexed pages that hurt organic performance. Investigation found the robots.txt file was missing, allowing unnecessary pages to be crawled. The document outlines various ways to identify and prevent crawl wastage, such as faceted navigation parameters and internal search results pages.
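A rough way to measure the crawl wastage described in that case study is to bucket crawled URLs by whether they carry faceted-navigation or internal-search parameters; the parameter names and example URLs here are assumptions:

```python
from urllib.parse import parse_qs, urlsplit

WASTE_PARAMS = {"colour", "size", "sort", "price", "q"}   # hypothetical facet/search parameters

def is_waste(url):
    """True when the URL carries a parameter we never want crawled."""
    return bool(WASTE_PARAMS & parse_qs(urlsplit(url).query).keys())

crawled = [                      # placeholder URLs, e.g. pulled from a log file or crawl export
    "/shoes?colour=red&sort=price",
    "/shoes",
    "/search?q=red+shoes",
]
waste = sum(is_waste(u) for u in crawled)
print(f"{waste} of {len(crawled)} crawled URLs look like crawl waste")
```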
An SEO's Guide to Website Migrations | Faye Watt | BrightonSEO's Advanced Tec...
In this presentation, Faye will take you through the necessary steps you need to take to ensure a successful website migration, how to avoid the loss of organic traffic and search visibility, and why migrations often fail.
This presentation includes:
How to redirect a domain
How to set up a local server
A migration timeline template
A migration checklist
The document discusses various techniques for improving website speed, organized into three main categories: transmission, rendering, and serving. Transmission-focused techniques include image compression, minification, HTTP compression, and expires headers. Rendering optimizations involve load order, lazy loading, and parallel downloads. Serving improvements involve using a CDN, disk caching, keep-alive headers, and pre-rendering. The document emphasizes testing techniques like Google PageSpeed Insights and HAR files to diagnose bottlenecks and measure the impact of changes.
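The HAR-file diagnosis mentioned at the end can be scripted in a few lines; a sketch assuming an export saved as page.har (a placeholder name), listing the heaviest responses first:

```python
import json

with open("page.har") as f:                      # HAR files are plain JSON
    entries = json.load(f)["log"]["entries"]

# Sort by response body size to spot the heaviest assets on the page
entries.sort(key=lambda e: e["response"]["bodySize"], reverse=True)
for entry in entries[:10]:
    print(entry["response"]["bodySize"], round(entry["time"]), entry["request"]["url"])
```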
SearchLove Boston 2018 - Tom Anthony - Hacking Google: what you can learn fro... | Distilled
Tom has long been fascinated with how the web works… and how he could break it. In this presentation, Tom will discuss some of the times that he has discovered security issues in Google, Facebook and Twitter. He will discuss compromising Search Console so that he could look up any penalty in the Manual Action tool, how he took control of tens of thousands of websites, and how he recently discovered a major bug that let him rank brand new sites on the first page with no links at all. Tom will outline how these exploits work, and in doing so share some details about the technical side of the web.
SearchLeeds 2018 - Steve Chambers - Stickyeyes - How not to F**K up a Migration | Branded3
This document provides guidance on how to successfully conduct a website migration without negatively impacting traffic, rankings, and revenue. It emphasizes the importance of pre-migration planning, actions, and post-migration checks. Key steps include defining responsibilities and resources, creating a project plan and checklist, setting up redirects, internal link updates, and benchmarking rankings. Post-migration, the document recommends checking robots.txt, redirects, log files, and conducting various tests through Google Search Console to identify any issues.
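The redirect checks described can be automated against a migration mapping; a hedged sketch assuming a headerless CSV of old_url,new_url pairs and the requests library:

```python
import csv
import requests

with open("redirect_map.csv", newline="") as f:          # assumed columns: old_url, new_url
    for old_url, new_url in csv.reader(f):
        response = requests.head(old_url, allow_redirects=False, timeout=10)
        location = response.headers.get("Location")
        ok = response.status_code == 301 and location == new_url
        print("OK  " if ok else "FAIL", old_url, "->", location, f"({response.status_code})")
```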
BrightonSEO 2017 - SEO quick wins from a technical check | Chloe Bodard
Actionable advice to drive more traffic and revenue to your site from carrying out a technical check. These slides look at technical issues such as 301 redirects, robots.txt files and crawl budget, sitemaps, canonical tags and optimisation, which can all be found by using the technical SEO tools, Screaming Frog, Deep Crawl, Google Search Console and Google Analytics.
What can you expect from these slides?
- To learn which key technical issues to look out for.
- To learn how these technical issues impact your SEO.
- To learn how to fix these technical issues for SEO quick wins.
Google's Search Signals For Page Experience - SMX Advanced 2021 Patrick Stox | Ahrefs
Patrick Stox is a product advisor, technical SEO, and brand ambassador at Ahrefs who writes for their blog and speaks at conferences. He summarizes that Google's Page Experience update is currently rolling out and focuses on core web vitals like LCP, FID, and CLS. He provides tips on optimizing pages for speed by prioritizing critical resources, using content delivery networks, caching, and other techniques.
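The Core Web Vitals field data he refers to can be pulled programmatically from Google's public PageSpeed Insights v5 endpoint; a sketch in which the page being tested is a placeholder:

```python
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
data = requests.get(API, params={"url": "https://www.example.com/", "strategy": "mobile"}, timeout=60).json()

# Field (CrUX) metrics for the page, when enough real-user data exists
metrics = data.get("loadingExperience", {}).get("metrics", {})
for name in ("LARGEST_CONTENTFUL_PAINT_MS", "FIRST_INPUT_DELAY_MS", "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
    if name in metrics:
        print(name, metrics[name]["percentile"])
```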
The slides of my talk at SEOkomm in Salzburg, Austria (November 2015).
It covers the basics of web crawling, with a focus on search engine bots. It sits in the SEO space as well as in the general webmaster world. Besides quotes from important influencers at Bing, Yandex and Google, it gives actionable advice on how you can influence the crawl budget allocation of search engine spiders.
In the context of AJAX/JavaScript crawling there is a minor excursion into the world of AngularJS.
The talk closes on some insights on how we wrote our own CMS in order to cover all the SEO needs we are facing (multilanguage, multi-template, caching, if-modified-since, etc.).
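The if-modified-since handling mentioned for the custom CMS comes down to comparing the request header with the page's last-modified timestamp and answering 304 when nothing has changed; a minimal sketch under that assumption (the timestamp is a placeholder):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

LAST_MODIFIED = datetime(2015, 11, 1, tzinfo=timezone.utc)   # placeholder: when the page last changed

def conditional_get(if_modified_since):
    """Return (status, headers) for a GET carrying an optional If-Modified-Since header."""
    if if_modified_since:
        try:
            if LAST_MODIFIED <= parsedate_to_datetime(if_modified_since):
                return 304, {}                                # unchanged: the bot saves a full download
        except (TypeError, ValueError):
            pass                                              # malformed header: fall through to 200
    return 200, {"Last-Modified": format_datetime(LAST_MODIFIED, usegmt=True)}

print(conditional_get("Sun, 01 Nov 2015 00:00:00 GMT"))       # -> (304, {})
print(conditional_get(None))                                  # -> (200, {...})
```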
If you like this talk, please follow me on twitter:
http://twitter.com/jhmjacob
And don't miss signing up for the Free Account of OnPage.org:
http://onpa.ge/V141p
Last but not least -> visit SEOkomm next year if you are around - it's worth it :)
http://www.seokomm.at/
Patrick Stox gives a presentation on how search works. He discusses how Google crawls and indexes websites, processes content, handles queries, and ranks results. Some key points include: Google's crawler downloads pages and files from websites; processing includes duplicate detection, link parsing, and content analysis; queries are understood through techniques like spelling correction and query expansion; and search results are ranked based on numerous freshness, popularity, and relevancy signals.
Learn advanced SEO tactics and strategies in this second installment of my Demand Quest course. Topics include local SEO, link building, and international SEO.
This document provides an overview of search engine optimization (SEO) in 3 easy steps. Step 1 is to understand how search engines work by learning about their bots, algorithms, and what they can and cannot see on a website. Step 2 is to make the website bot-friendly by providing clean HTML code without unnecessary JavaScript or Flash that bots cannot read. Step 3 is to build an SEO toolkit for WordPress by adding relevant metadata, keywords and linking to optimize the site's discoverability. The overall goal is to empower website owners to effectively optimize their own WordPress sites for search engines.
The CBC on a diet - Slimming down for a whole nation | Barbara Bermes
Talk at FITC Spotlight Web Performance and Optimization, March 16, 2013, Toronto.
Synopsis:
The CBC serves optimized content to millions of Canadians. We’ll share our experience & knowledge of optimizing content delivery for a high-scale & unpredictable audience. We will explain our performance stack from server-side optimization tricks to automated performance tools during deployment. We will discuss our challenges, findings and learnings of continually improving site delivery.
http://fitc.ca/presentation/the-canadian-public-broadcaster-on-a-diet-slimming-down-for-a-whole-nation/
This is the slide deck on how to perform log analysis with BigQuery. The companion guide, which has most of this information in written format, is here: https://www.distilled.net/resources/guide-to-log-analysis-with-big-query/
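For orientation, a minimal sketch of what a log query via the official google-cloud-bigquery client looks like; the project, dataset, table and column names are placeholders, not those used in the deck or guide:

```python
from google.cloud import bigquery

client = bigquery.Client()        # uses application-default credentials

sql = """
    SELECT url, COUNT(*) AS hits, COUNTIF(status = 404) AS not_found
    FROM `my-project.logs.access`          -- placeholder table of parsed log lines
    WHERE user_agent LIKE '%Googlebot%'
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 100
"""

for row in client.query(sql).result():
    print(row.url, row.hits, row.not_found)
```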
SearchLove London 2016 | Dom Woodman | How to Get Insight From Your Logs | Distilled
In the SEO industry, we obsess over everything Google says, from John Mueller dropping a hint in a Webmaster Hangout to the ranking data we spend £1000s to gather. Yet we ignore the data Google throws at us every day: the crawling data. For the longest time, site crawls, traffic data and rankings have been the pillars of SEO data gathering. Log files should join them as something everyone is doing. We'll go through how to get everything set up, look at some of the tools to make it easy and repeatable, and go through the kinds of analysis you can do to get insights from the data.
SearchLove Boston 2017 | Dom Woodman | How to Get Insight From Your Logs | Distilled
In the SEO industry, we obsess over everything Google says, from John Mueller dropping a hint in a Webmaster Hangout to the ranking data we spend $1000s to gather. Yet we ignore the data Google throws at us every day: the crawling data. For the longest time, site crawls, traffic data and rankings have been the pillars of SEO data gathering. Log files should join them as something everyone is doing. We'll go through how to get everything set up, look at some of the tools to make it easy and repeatable, and go through the kinds of analysis you can do to get insights from the data.
This document provides an overview of search engine optimization (SEO) techniques and Google algorithms. It discusses both white hat and black hat SEO. White hat SEO focuses on improving a site through high-quality content, technical optimization, and natural links. Black hat techniques like hidden text, keyword stuffing, and link manipulation are avoided. Major Google algorithms like Panda, Penguin, and Mobilegeddon are explained. The document also covers mobile SEO issues like mobile-first indexing and AMP. The overall message is that SEO success comes through following Google's guidelines rather than trying to game the system.
These are the training slides from the SEO training delivered by Noisy Little Monkey at their offices in Bristol.
They cover: technical, on-site, off-site, local and mobile SEO.
This document provides an overview of search engine optimization (SEO) strategies. It discusses what SEO is, why it is important given user behavior trends, and common barriers to implementing SEO. It then outlines potential benefits of doing SEO, such as increased traffic and brand awareness. The document also covers key on-page and off-page optimization techniques, how to research competitors, and tips for developing landing pages and backlinks. Overall, the document serves as a basic guide to SEO best practices for improving search engine rankings.
The Search Landscape is changing. Be prepared for the future by optimizing for featured snippets, compressing images to help improve page speed, leveraging machine learning, pruning your website, refurbishing your top content, and more.
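Of those suggestions, image compression is the most mechanical to act on; a hedged sketch using Pillow (file names and the quality setting are placeholders):

```python
from PIL import Image

# Re-encode a large hero image; quality around 80 is a common size/fidelity trade-off
image = Image.open("hero.jpg")
image.save("hero-optimised.jpg", "JPEG", quality=80, optimize=True, progressive=True)
```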