How I’d help Shepherd Neame improve their SEO (Organic presence)
1. How I’d help Shepherd Neame improve their SEO (Organic presence)
An SEO Audit
www.shepherdneame.co.uk
2. Contents
1. Introduction
2. Recent projects/successes
3. Awards
4. Quick Stats
5. Drop in Visibility
6. Technical on-site issues
7. Competitive landscape and off-page
8. Strategy
3. Quick Stats
“The Map is not the territory*”
Alfred Korzybski
Caveat: if this were a typical audit, I’d run through their analytics and start the assessment from there,
but for this audit I am relying exclusively on third-party data.
4. To investigate SN’s organic search performance, I observed the SEO Visibility reported
by Searchmetrics over the past 2 years:
5. Loss of keywords: these are your local pub names and geographical searches that have
been lost due to Google’s Pigeon algorithm change.
As you can see, the loss of these keywords matches the loss in traffic.
6. To find how many pages are actually out there, I crawled your entire site using
Screaming Frog’s SEO Spider, IIS Manager and Xenu Link Sleuth.
Xenu gave 20,326 pages (html/text), IIS 8,532, and Screaming Frog 16,711;
Google has indexed 5,210. This discrepancy may be due to the cdn subdomain.
The warning below suggests duplicate content issues.
7. A robots.txt file is used to restrict search engines from accessing specific sections of a site.
SN’s site helpfully provides an explanation!
Although not technically wrong, the site is using the
default Drupal robots.txt, which indicates that it’s not
been considered.
I’d remove the explanation and provide a link to the XML
sitemap (of which more later).
I would also remove the Crawl-delay line:
unless you have a very large site or spidering problems,
it’s not needed.
The entries are technically accurate, but they are
unnecessarily verbose, and many are no longer relevant
to the site.
As a general rule it’s better to noindex pages via the meta tags.
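As an illustration, a slimmed-down robots.txt along these lines would cover the essentials (the Disallow paths here are typical Drupal examples, not taken from the live file, and would need checking against the actual site structure):

```
User-agent: *
Disallow: /admin/
Disallow: /user/

Sitemap: http://www.shepherdneame.co.uk/sitemap.xml
```

Note there is no Crawl-delay line, and the Sitemap directive points crawlers straight at the XML sitemap.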
8. The XML sitemap complements the robots.txt file, the former focusing on
indexability/inclusion, the latter on exclusion.
The sitemap looks like a default setup and only has 140 pages, some of which
should not be there, e.g.:
http://www.shepherdneame.co.uk/pubs/search/51.2601450%2C0.8442802_300
http://www.shepherdneame.co.uk/pubs/search/51.2622513%2C-0.4672517_300
etc. These are search results pages and indicate duplication in the eyes of Google, as
the page titles and meta descriptions are the same (there are 746 of these pages with
matching titles and descriptions).
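To quantify this kind of duplication yourself, a short script along these lines can group pages that share a title and meta description; it assumes you have exported URL/title/description triples from a crawl (Screaming Frog can export exactly this), and the sample data below is illustrative only:

```python
from collections import defaultdict

def find_duplicates(pages):
    """Group pages by (title, meta description); keep groups with 2+ URLs."""
    groups = defaultdict(list)
    for url, title, description in pages:
        groups[(title, description)].append(url)
    return {key: urls for key, urls in groups.items() if len(urls) > 1}

# Illustrative data only -- real input would come from a crawl export.
crawl = [
    ("/pubs/search/51.26,0.84_300", "Pub Search", "Find a pub near you"),
    ("/pubs/search/51.26,-0.46_300", "Pub Search", "Find a pub near you"),
    ("/blog", "Brewing, beer and pub blog", "Read the latest news"),
]
duplicates = find_duplicates(crawl)
```

Run against the full crawl, this surfaces the 746 matching search-result pages (and any others) in one pass.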
9. After the pub search result pages (746), the worst directory for duplicate titles
is /blog.
There are 178 pages just listing the blog posts, the last being
http://www.shepherdneame.co.uk/blog?keys=&page=177&tag=/Early%20Bird
and the first being http://www.shepherdneame.co.uk/blog?keys=&tag=/Early%20Bird
which is exactly the same page as
http://www.shepherdneame.co.uk/blog (but has a rel canonical back to the /blog page,
as do all 178 pages).
This is poor practice and may lead search engines to ignore canonical directives on the
site, or to think that you only have one page.
A better implementation would be to remove the rel canonical, add sequential rel=next
and rel=prev tags, and change the title tags and meta descriptions to show the sequence:
10. Top Page Title: Brewing, beer and pub blog from Shepherd Neame
Top Page Meta Description: Read the latest Brewing, beer and pub news, from
Britain's Oldest Brewer – Shepherd Neame
Page 4 Title: Page 4 of 178 of Shepherd Neame’s Blog
Page 4 Meta Description: Read listing page 4 of the latest Brewing, beer and
pub news, from Britain's Oldest Brewer – Shepherd Neame
Implement the following tags:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
11.
12. Site speed is becoming increasingly important to Google as a ranking signal, and
additionally, since search engine crawlers have a limited crawl budget, they crawl quicker
sites more thoroughly and more regularly than slower sites.
Walmart.com found that for every second of improvement, conversion rate can improve by 2%.
SN’s site speed is decent enough but could still be improved with a few best practices
(e.g., eliminating render-blocking JavaScript, optimizing images, leveraging browser caching,
etc.).
http://www.webperformancetoday.com/2014/04/09/web-page-speed-affect-conversions-infographic/
In addition to suboptimal load times, many of the site’s pages have references to
inaccessible objects (i.e., objects that return 4xx HTTP status codes). I recommend fixing
these broken references because they unnecessarily waste both processing and network
resources.
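As a sketch of that clean-up step, given a map of referenced object URLs to the HTTP status codes a crawler reported (a crawl export provides this), the broken references can be pulled out like so — the URLs below are illustrative, not from the site:

```python
def broken_references(statuses):
    """Return URLs whose response code is in the 4xx client-error range."""
    return sorted(url for url, code in statuses.items() if 400 <= code < 500)

# Illustrative status map -- real data would come from a crawl export.
crawled = {
    "/images/hero.jpg": 200,
    "/css/old-theme.css": 404,
    "/js/tracking.js": 403,
    "/index.html": 200,
}
broken = broken_references(crawled)
```

The resulting list is what gets handed to the developers to fix or remove.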
Moz has a measure called DA (Domain Authority). It’s a model of reality: they have tried to match Google’s ranking signals and give a number for how likely a site is to rank.
Officially: “Domain Authority is a score (on a 100-point scale) developed by Moz that predicts how well a website will rank on search engines.”
For comparison:
taylor-walker.co.uk 45/100,
nicholsonspubs.co.uk 55/100,
SN is 59/100,
greeneking.co.uk is 62/100,
Fullers.co.uk is 64/100,
IIS Manager – Internet Information Services Manager – a Microsoft tool that crawls a website and analyses it.
Semrush – an SEM competitor-analysis tool.
As the graph shows, the site’s visibility decreased after the 12th December 2014. This date is important as it corresponds with a Google algorithm change dubbed “Pigeon” being expanded to the United Kingdom, Canada and Australia. The original update hit the United States in July 2014. The update was confirmed on the 22nd but may have rolled out as early as the 19th.
Pigeon – provides more useful, relevant and accurate local search results that are tied more closely to traditional web search ranking signals.
It is weighted towards Yelp and other local directories – Urbanspoon, OpenTable, TripAdvisor, Zagat, Kayak, etc.
It affects search results within both Google Maps search and Google web search, and improves the distance and location ranking parameters.
Prior to Pigeon, local results from these dense spaces were hard to parse. Now, with Pigeon’s increased specificity, the algorithm is more accurate.
Again we see another drop in visibility on the 30th July 2015 – this dip corresponds with another Google change dubbed Panda 4.2, which was a slow rollout.
Robots.txt is ONLY a way to tell a search engine bot which pages it’s allowed to spider – which pages it can and cannot “see”.
It does not tell search engines whether they can include those pages in their index.
There is a huge difference. For example, Google will sometimes list URLs that it’s not allowed to spider because they’re blocked by robots.txt – they still show up in the index.
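To actually keep a page out of the index while still letting bots crawl it, the page-level robots meta tag is the right tool, e.g.:

```
<!-- In the <head> of a page that should be crawled but not indexed -->
<meta name="robots" content="noindex, follow">
```

This is the meta-tag approach recommended earlier in preference to robots.txt exclusions.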
For the paginated pages the meta description can be left off, which is probably better from a time-management point of view.
On the first page, http://www.example.com/article?story=abc&page=1, you’d include in the <head> section:
<link rel="next" href="http://www.example.com/article?story=abc&page=2" />
On the second page, http://www.example.com/article?story=abc&page=2:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3" />
On the third page, http://www.example.com/article?story=abc&page=3:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2" />
<link rel="next" href="http://www.example.com/article?story=abc&page=4" />
And on the last page, http://www.example.com/article?story=abc&page=4:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3" />
A few points to mention:
The first page only contains rel=”next” and no rel=”prev” markup.
Pages two to the second-to-last page should be doubly-linked with both rel=”next” and rel=”prev” markup.
The last page only contains markup for rel=”prev”, not rel=”next”.
rel=”next” and rel=”prev” values can be either relative or absolute URLs (as allowed by the <link> tag). And, if you include a <base> link in your document, relative paths will resolve according to the base URL.
rel=”next” and rel=”prev” only need to be declared within the <head> section, not within the document <body>.
We allow rel=”previous” as a syntactic variant of rel=”prev” links.
rel="next" and rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts. Both declarations can be included in the same page. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
<link rel="canonical" href="http://www.example.com/article?story=abc&page=2" />
<link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123" />
<link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123" />
rel=”prev” and rel=”next” act as hints to Google, not absolute directives.
When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s), and rely on our own heuristics to understand your content.