2. Crawling is the process in which a search engine sends out software programs,
called search bots, robots, crawlers, or spiders, to find new content such as
images, videos, and text across the entire World Wide Web.
First, the crawlers fetch a few web pages and then follow the links on those
pages to other pages. In this way, they crawl the entire World Wide Web for new
or updated pages.
4. By moving from one link to another, they retrieve information about new or
updated pages.
They store the URLs they have discovered in a large database called
Caffeine.
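The link-following behaviour described above can be sketched as a breadth-first traversal. The snippet below is a toy illustration, not Google's crawler: it walks an in-memory "web" (all URLs here are made up) and records every URL it discovers, playing the role of the URL store.

```python
from collections import deque

# A toy "web": each URL maps to the list of links found on that page.
# These URLs are invented purely for illustration.
TOY_WEB = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com", "d.com"],
    "d.com": [],
}

def crawl(seed_urls, web):
    """Follow links breadth-first, recording every URL discovered."""
    discovered = set(seed_urls)        # plays the role of the URL database
    frontier = deque(seed_urls)
    while frontier:
        url = frontier.popleft()
        for link in web.get(url, []):  # "fetch" the page and read its links
            if link not in discovered:
                discovered.add(link)
                frontier.append(link)
    return discovered

found = crawl(["a.com"], TOY_WEB)
```

Starting from a single seed page, the crawler reaches every page linked from the pages it has already seen, which is exactly the "link to link" process the slide describes.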
6. What is Indexing?
Indexing is the process by which search engines organize the information in the
search library so that it can be fetched very quickly when users type in the
related search queries.
To understand indexing, think about the library in your school or college. The
books are indexed in a specific fashion so that when you ask the librarian for a
book, he or she can find it among a thousand different books in a matter of
seconds or minutes.
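The data structure behind this kind of fast lookup is commonly called an inverted index: instead of scanning every page for a word, the engine keeps a map from each word to the pages that contain it. A minimal sketch, with made-up page ids and text:

```python
def build_index(docs):
    """Map each word to the set of document ids that contain it."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(doc_id)
    return index

# Invented example pages for illustration.
docs = {
    "page1": "fresh coffee recipes",
    "page2": "coffee brewing guide",
}
index = build_index(docs)
# index["coffee"] → {"page1", "page2"}
```

Looking up a query word is now a single dictionary access rather than a scan of every document, which is what makes answering a query in fractions of a second possible.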
7. Indexing….
What information is used to arrange the books in a library?
It is generally the:
• Name of the author
• Title of the book
• ISBN code
• Subject name, and others
Google also arranges the information sent by web crawlers using different parameters, such as:
• The freshness of the content
• Keyword richness
• The informational depth
• How well the words match the document, etc.
8. Once a user enters a search query, Google scans through the pages in its
database that carry that information.
A score is then assigned to each relevant page.
The score is decided on the basis of algorithmic calculations over some 100 or
so signals. These are called ranking signals.
The pages are ranked according to these signals and fetched for the user
on the search results page.
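One simple way to picture "a score computed from many signals" is a weighted sum. The snippet below is only an illustration: the signal names, their values, and the weights are all invented here, since Google's actual signals and weights are not public.

```python
# Hypothetical per-page signal scores (0 to 1) and signal weights.
# Both are made up for illustration; real ranking signals are not public.
pages = {
    "pageA": {"freshness": 0.9, "keyword_match": 0.4, "depth": 0.7},
    "pageB": {"freshness": 0.5, "keyword_match": 0.9, "depth": 0.8},
}
weights = {"freshness": 0.3, "keyword_match": 0.5, "depth": 0.2}

def score(signals, weights):
    """Combine the signal values into one score via a weighted sum."""
    return sum(weights[name] * value for name, value in signals.items())

# Highest score first — this ordering is what the results page would show.
ranked = sorted(pages, key=lambda p: score(pages[p], weights), reverse=True)
```

Here pageB outranks pageA because its strong keyword match outweighs pageA's fresher content under these (assumed) weights; changing the weights changes the ordering, which is why ranking-algorithm updates can reshuffle results.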
RANKING OF PAGES
10. RANKING OF PAGES
REFER TO THE COMPLETE LIST OF
GOOGLE RANKING FACTORS
https://backlinko.com/google-ranking-factors