How you can help SEO
Things Every Software Developer Should Know
Developers, listen up. You can’t ignore SEO.
60% of website traffic comes from search engines, so why do so many developers and designers ignore SEO requirements and build beautiful, but unfindable, websites?
Ref: http://searchengineland.com/60-direct-traffic-actually-seo-195415
Good News
As a developer, you don't have to know everything about how SEO works, because:
It is tough to see where to start
There is a lot to digest
And, just when you think you “know” something, Google releases another algorithm update.
But keep one thing in mind: Google uses robots to read websites, and its algorithms decide which sites show up in the results.
SEO is largely about how well a robot can read and understand your content, and that is clearly within the developer’s reach.
Just take care of:
Site Speed
Redirects
Semantic Markup
URL Structures
Crawler Access
Robots.txt
Javascript
Site Speed
Google is obsessed with speed.
Assets need to be as small as possible for transmission while maintaining high quality.
You should care about how many network requests are being made per page load.
You need to care about perceived page load: get content onto the screen as quickly as possible.
A global internet means not everyone is accessing your site on a broadband connection, and on mobile you can’t guarantee the transmission of data will even complete if it takes several round trips.
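For illustration, here is a minimal Node/Express sketch, assuming the express and compression npm packages (neither is named in the slides), that compresses responses and caches static assets so each page load moves less data and makes fewer requests:

import express from "express";
import compression from "compression";

const app = express();

// Gzip HTML, CSS and JS so less data travels over the wire.
app.use(compression());

// A long cache lifetime on static assets means repeat visits
// make fewer network requests.
app.use(express.static("public", { maxAge: "30d" }));

app.get("/", (_req, res) => {
  // Send the page shell first so content paints as early as possible.
  res.sendFile("index.html", { root: "public" });
});

app.listen(3000);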
Redirects
Why care about redirects anyway?
Nobody likes dead links, and they can easily happen when something major about the structure of your site changes (domain name, internal structure).
If a user goes to your site and gets a 404, they are not going to try subtle variations of the URL to get to the content; they will go on to the next site.
Even if the link isn’t dead, people don’t like jumping between five different URLs before getting to the content, and done poorly this results in multiple network requests, which is inefficient.
Redirects can be broken for months before someone notices.
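When a URL does have to change, a single permanent (301) redirect keeps users and crawlers on track. A minimal Express sketch (the paths are made up for illustration):

import express from "express";

const app = express();

// One hop, permanent: 301 tells search engines the content has moved
// for good and that the old URL's ranking signals should follow it.
app.get("/old-blog/:slug", (req, res) => {
  res.redirect(301, `/blog/${req.params.slug}`);
});

app.listen(3000);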
Semantic Markup
Semantic markup is excellent for SEO because you are literally giving the content on your page meaning that a search engine can easily understand.
We should care about this anyway, because search engines are not the only things looking at our sites. Assistive technologies such as screen readers can work with semantically marked-up documents much more easily.
For example, when you mark up content with an <aside> element, some assistive technologies know to leave it out of the main content when reading aloud to a visually impaired user.
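An illustrative page skeleton (the content is made up) showing how semantic elements, including <aside>, convey meaning to crawlers and screen readers alike:

<article>
  <h1>SEO for Developers</h1>
  <p>The main content of the page lives here.</p>
  <aside>
    <!-- Supplementary content: crawlers and assistive technologies
         can treat this differently from the main article text. -->
    <a href="/related-post">Related post</a>
  </aside>
</article>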
URL Structures
A good URL structure is good for SEO because it is used as part of the ranking algorithm on most search engines.
The URL will also appear in search results; if it makes sense, people are more likely to click on it than if it is a jumble of IDs and keywords.
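A made-up pair of URLs to show the difference:

Readable:   https://example.com/blog/seo-for-developers
Unreadable: https://example.com/index.php?id=4823&cat=7&ref=x1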
Crawler Access
Make sure that when Google does crawl your site, it spends its time on your important pages.
Good Site Architecture
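One standard technique for this (not covered in the slides, so treat it as an aside) is an XML sitemap listing the pages you most want crawled; the URLs below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://example.com/blog/seo-for-developers</loc>
    <priority>0.8</priority>
  </url>
</urlset>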
Robots.txt
This is the first file that a search engine requests when it crawls your site.
In this file you can specify the URLs that search engines should not crawl.
Have a look at the robots.txt of Amazon.in: http://www.amazon.in/robots.txt
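For reference, a small made-up robots.txt with the typical directives:

# Applies to all crawlers
User-agent: *
# Keep crawlers out of low-value or private areas (example paths)
Disallow: /admin/
Disallow: /search
# Optional pointer to the sitemap (hypothetical URL)
Sitemap: https://example.com/sitemap.xml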
Properly Set Up Your Staging and Live Sites
To avoid duplicate-content issues with your live content, you must noindex, nofollow your staging site.
If you can set up your staging sites behind a password-protected login screen, even better.
Create an “ignore” rule in your version control so the staging robots.txt does not get deployed over the one on the live server.
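As an illustration of the noindex, nofollow part, a staging server can combine a catch-all robots.txt with a robots meta tag (both snippets are illustrative):

# robots.txt served only on the staging host
User-agent: *
Disallow: /

<!-- added to the <head> of every staging page -->
<meta name="robots" content="noindex, nofollow">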
Be careful with single page applications (SPAs)
Angular, Meteor, MEAN, and other SPA stacks are hot, but a poorly implemented SPA can kill your organic search traffic.
These frameworks rely on JavaScript, and search engine crawlers can’t read everything on your page by default.
Use a headless browser such as PhantomJS to intercept the regular page request from a crawler and instead show a completely rendered page, just like any human user would see.
Build pre-rendering fallbacks for search bots (using tools like Prerender.io) so that robots can read your content despite the otherwise impenetrable JavaScript.
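A rough, hand-rolled sketch of that idea in Express (this is not the Prerender.io API; the bot list and snapshot directory are made up), serving pre-rendered HTML snapshots to known crawlers and the normal SPA shell to everyone else:

import express from "express";
import path from "path";

const app = express();

// Very rough bot detection; real setups match many more user agents.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i;

app.use((req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Serve an HTML snapshot generated earlier by a headless browser.
    const file = req.path === "/" ? "index.html" : `${req.path.slice(1)}.html`;
    return res.sendFile(path.join(__dirname, "snapshots", file));
  }
  next();
});

// Human visitors get the normal SPA shell and render client-side.
app.use(express.static("dist"));

app.listen(3000);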
You are the Experts
Ask Us Questions, All the Questions.
