This paper reviews design issues for search engines and web crawlers. Search engines rely on web crawlers to collect documents from the growing World Wide Web for storage and indexing. A search engine has three main components: the crawler, the indexer, and the query engine. Crawler-based search engines build their listings automatically through algorithms, whereas human-powered directories depend on manual organization. Designing efficient search engines and crawlers is challenging because of the diversity of web documents and changing user behaviors. A web crawler prioritizes URLs, downloads the corresponding pages, and extracts new links from them to keep the search engine's database up to date.
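The crawl loop described above can be illustrated with a minimal sketch. The example below is not any particular engine's implementation; it assumes a simple priority-by-depth policy as a stand-in for a real URL-scoring function, and it omits politeness delays, robots.txt handling, and persistent storage. The names crawl, LinkExtractor, and the in-memory index dictionary are illustrative only.

```python
import heapq
import urllib.request
import urllib.parse
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags in a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=50):
    # Frontier of (priority, url) pairs; lower value is fetched earlier.
    # Depth stands in here for a page-importance score.
    frontier = [(0, url) for url in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    index = {}  # url -> raw HTML; stands in for the indexer's input store

    while frontier and len(index) < max_pages:
        depth, url = heapq.heappop(frontier)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable or undecodable pages

        index[url] = html  # a real system would hand this off to the indexer

        # Extract new links and push them onto the frontier.
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                heapq.heappush(frontier, (depth + 1, absolute))

    return index
```

In a full system, the priority queue would be driven by an importance estimate (for example, link popularity or revisit frequency) rather than crawl depth, and the downloaded pages would flow into the indexer that the query engine later searches.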