
Crawling, Indexing and SEO - Paolo Ramazzotti


  1. Crawling > Indexing > RANKING. SEO Talk by Paolo Ramazzotti, www.gimasi.ch
  2. What we do. Consulting & training: • Inbound Marketing • SEO • Conversion Rate Optimization • Web Analytics • Social Media Marketing • Outbound Marketing • SEM / PPC / Multichannel Adv • Email Marketing • Software / Website Engineering
  3. Question time! What is the most important requirement for being found by Google?
  4. The organization of data
  5. The search process: Query → Google Index (Site A, Site B, Site C, Site D, Site E) → SERP
  6. Stairway to heaven: CRAWLING → INDEXING → RANKING
  7. What does "CRAWLING" mean? "Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.”
  8. What???
  9. CRAWLING: “Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
  10. CRAWLING: We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
  11. Can we “communicate” with the robots?
  12. ROBOTS.txt
  13. ROBOTS.txt: http://www.corriere.it/robots.txt
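A robots.txt file tells crawlers which paths on a site they may fetch. A minimal sketch of the common directives (all paths here are illustrative, not Corriere's actual file):

```text
# Illustrative robots.txt (example paths, not a real site's file)
User-agent: *            # rules for all crawlers
Disallow: /admin/        # keep crawlers out of this directory
Allow: /admin/public/    # exception inside the blocked directory

User-agent: Googlebot    # rules for one specific bot
Disallow: /test/

Sitemap: https://www.example.com/sitemap.xml   # where to find the sitemap
```

The file must live at the site root (e.g. `/robots.txt`), which is why it can be inspected for any site, as the Corriere example above shows.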
  14. CRAWLING: Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.”
  15. Is your site “crawlable”?
  16. Google wants to feel at home: • Robots.txt • XML Sitemap (pages, images) • HTML Sitemap • Breadcrumbs • Layout • Open a Google Webmaster Tools account NOW! • URL structure • UX & Block Level Analysis
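The XML Sitemap mentioned in the checklist above can be sketched as follows (URLs, dates, and priorities are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2014-05-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/services/logistics</loc>
    <lastmod>2014-04-15</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Submitting this file through Google Webmaster Tools (or referencing it from robots.txt) gives Googlebot the seed list of URLs that the crawl-process quote above describes.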
  17. Block Level Analysis
  18. What else?
  19. Navigation! About us: • Mission • Vision • Where we are • Contacts. Products: • International transport • Domestic shipping. Services: • Logistics • Warehouse
  20. Navigation! Home • Company XY (Mission, Vision) • International transport • Domestic shipping • Logistics • Warehouse • Contacts
  21. INDEXING: “When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page. PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank. Not all links are equal: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.”
  22. What???
  23. INDEXING: “When a user enters a query, our machines search the index for matching pages and return the results we believe are the most relevant to the user. Relevancy is determined by over 200 factors, one of which is the PageRank for a given page.
  24. INDEXING: PageRank is the measure of the importance of a page based on the incoming links from other pages. In simple terms, each link to a page on your site from another site adds to your site’s PageRank.
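The idea that each incoming link adds to a page's importance can be illustrated with a toy power-iteration PageRank. This is a simplified model, not Google's production algorithm, and the four-page link graph is invented:

```python
# Toy PageRank via power iteration (simplified sketch, not Google's real system).
# `links` maps each page to the pages it links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],  # D also links to C, so C accumulates the most incoming rank
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # rank is split over outgoing links
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

ranks = pagerank(links)
# C is linked to by A, B, and D, so it ends up with the highest rank.
print(max(ranks, key=ranks.get))  # → C
```

The "not all links are equal" point on the next slide shows up here too: a link from a page that itself has high rank (or few outgoing links) passes on a larger share.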
  25. Not all links are EQUAL!
  26. INDEXING: Google works hard to improve the user experience by identifying spam links and other practices that negatively impact search results. The best types of links are those that are given based on the quality of your content.”
  27. Google+ Brand Machine
  28. Watch out for these two! • Barracuda Check Tool: http://www.barracuda-digital.co.uk/panguin-tool/
  29. Google wants to know your profile: • Schema.org • rel=author and rel=publisher • Google Plus • Link Profile • Be a Brand!
  30. Microdata: http://www.schema.org
  31. Microdata: how many and which ones? http://schema.org/docs/full.html
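As a sketch, Schema.org microdata embeds typed properties directly into the page's HTML so that search engines can read structured facts about the entity. The organization name, address, and URL below are invented for illustration:

```html
<!-- Illustrative Schema.org microdata for an organization -->
<div itemscope itemtype="http://schema.org/Organization">
  <h1 itemprop="name">Example Logistics Srl</h1>
  <span itemprop="telephone">+41 00 000 00 00</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">Via Esempio 1</span>,
    <span itemprop="addressLocality">Lugano</span>
  </div>
  <a itemprop="url" href="http://www.example.com">www.example.com</a>
</div>
```

The full list of available types linked above shows which `itemtype` values exist; the debugging tool on the next slide validates that the markup is read as intended.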
  32. Microdata debugging: http://www.google.com/webmasters/tools/richsnippets
  33. Usage & user intent. Did the user find what they were looking for? • Bounce Rate (BR) • Click-Through Rate (CTR). UI and UX must be “sticky”, not “bouncy”.
  34. Backlink Profile Tools: https://ahrefs.com
  35. Paolo Ramazzotti. Thank You!
