A web crawler, also known as a spider, is a program that systematically browses websites to index their content. The crawler retrieves web pages and passes the data to indexing applications, which build searchable databases of keywords and terms. When a user runs a search, the search engine queries this database of indexed pages to return relevant results.
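The crawl-and-index loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production crawler: it replaces network fetching with a small hypothetical in-memory set of pages (`PAGES`), and the `crawl` function, the `LinkExtractor` class, and the URLs are all invented for the example. The core ideas are real, though: a frontier queue of URLs to visit, a visited set to avoid re-crawling, link extraction to discover new pages, and a keyword-to-URL index that a search engine could query.

```python
from collections import deque
from html.parser import HTMLParser

# Hypothetical in-memory "web" so the sketch runs without network access.
# A real crawler would fetch each URL over HTTP instead.
PAGES = {
    "https://example.com/": '<a href="https://example.com/about">About</a> welcome home',
    "https://example.com/about": "about page text",
}

class LinkExtractor(HTMLParser):
    """Collects outgoing links and visible text from one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

    def handle_data(self, data):
        self.text.append(data)

def crawl(seed):
    """Breadth-first crawl from a seed URL, building a keyword -> URLs index."""
    index = {}                 # keyword -> set of URLs containing it
    frontier = deque([seed])   # URLs waiting to be visited
    visited = set()
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        html = PAGES.get(url)
        if html is None:       # unknown page: skip, as a real crawler would on a fetch error
            continue
        parser = LinkExtractor()
        parser.feed(html)
        # Naive tokenization: lowercase, split on whitespace.
        for word in " ".join(parser.text).lower().split():
            index.setdefault(word, set()).add(url)
        frontier.extend(parser.links)
    return index

index = crawl("https://example.com/")
print(sorted(index["about"]))
# → ['https://example.com/', 'https://example.com/about']
```

A search for "about" returns both pages, because the word appears in the link text of the home page and in the body of the about page. Real crawlers add much more on top of this skeleton: politeness delays, robots.txt handling, URL normalization, and ranking of the indexed results.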