Internet search engines work by using crawlers (also known as spiders) to automatically fetch web pages. A spider starts from an initial set of pages and follows their links to discover new ones; each fetched page is processed by indexing software, which records its contents in the search engine's database. When a user submits a query, the engine looks the terms up in this index rather than scanning every page, and returns the most relevant results.
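The pipeline above, crawl, index, then search, can be sketched in a few dozen lines. This is a minimal illustration, not a real engine: the "web" here is a hypothetical in-memory set of pages with made-up URLs and text, the crawler is a plain breadth-first traversal, and the index is a simple inverted index mapping each word to the pages that contain it.

```python
from collections import deque

# Hypothetical corpus standing in for the web: each "page" has
# some text and outgoing links. All names/content are illustrative.
PAGES = {
    "/home":  {"text": "welcome to the search demo", "links": ["/about", "/docs"]},
    "/about": {"text": "about this tiny search engine", "links": ["/home"]},
    "/docs":  {"text": "docs for the crawler and index", "links": ["/home", "/about"]},
}

def crawl(start):
    """Breadth-first crawl: begin at one page, follow links to discover the rest."""
    seen, queue = set(), deque([start])
    while queue:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(PAGES[url]["links"])
    return seen

def build_index(urls):
    """Inverted index: map each word to the set of pages containing it."""
    index = {}
    for url in urls:
        for word in PAGES[url]["text"].split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Return the pages that contain every word of the query."""
    results = None
    for word in query.lower().split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return sorted(results or [])

index = build_index(crawl("/home"))
print(search(index, "search"))  # → ['/about', '/home']
```

Real engines differ mainly in scale: crawlers respect politeness rules (robots.txt, rate limits), the index is distributed across many machines, and results are ranked by relevance signals rather than returned as a plain match list.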