A web crawler is a program that browses the World Wide Web methodically, following links from page to page and downloading each page so that a search engine can index it later. The crawler starts with a set of seed URLs and places them in a frontier of pages to visit. It then repeatedly selects a URL from the frontier, fetches the page, parses it for new links, and adds those links back to the frontier, continuing until the frontier is empty. Search engines rely on web crawlers to regularly update their databases and keep their indexes current.
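The loop described above can be sketched in a few dozen lines of Python. The sketch below uses only the standard library and makes several simplifying assumptions: the names `crawl`, `LinkExtractor`, and the `max_pages` limit are illustrative rather than part of any real crawler, the seed URL is a placeholder, and production concerns such as robots.txt handling, politeness delays, and duplicate-content detection are omitted.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags found while parsing a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=100):
    """Breadth-first crawl starting from seed_urls.

    Returns a dict mapping each fetched URL to its raw HTML, which a
    search engine could index later.
    """
    frontier = deque(seed_urls)       # URLs waiting to be fetched
    visited = set(seed_urls)          # URLs already added to the frontier
    pages = {}

    while frontier and len(pages) < max_pages:
        url = frontier.popleft()      # select the next URL from the frontier
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue                  # skip pages that fail to download

        pages[url] = html             # store the page for later indexing

        parser = LinkExtractor()      # parse the page for new links
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute.startswith("http") and absolute not in visited:
                visited.add(absolute)
                frontier.append(absolute)   # grow the frontier

    return pages


if __name__ == "__main__":
    # Hypothetical seed URL; replace with real starting points.
    results = crawl(["https://example.com"], max_pages=5)
    print(f"Fetched {len(results)} pages")
```

Using a queue for the frontier gives breadth-first traversal, so pages close to the seeds are fetched before deeper ones; the `visited` set prevents the same URL from being enqueued twice.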