This document is a project synopsis for developing an efficient web crawler: a program that browses the World Wide Web automatically. A web crawler starts from a list of seed URLs, visits each page, extracts its hyperlinks, and adds any new links to a queue called the crawl frontier; the process then repeats on the newly discovered links, so the crawler explores the web recursively. The synopsis introduces web crawlers and their uses, and lists the machine specifications the project requires.
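
To make the seed-URL/crawl-frontier loop concrete, the following is a minimal sketch of a breadth-first crawler using only Python's standard library. The seed URL, the `max_pages` limit, and the simple anchor-tag link extractor are illustrative assumptions, not part of the project specification; a production crawler would also need politeness delays, robots.txt handling, and deduplication by canonical URL.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href attribute of every anchor tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl: pop a URL from the frontier, fetch it,
    extract its hyperlinks, and push unseen ones onto the frontier."""
    frontier = deque(seed_urls)  # the crawl frontier
    visited = set()              # URLs already fetched
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable or malformed pages
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if urlparse(absolute).scheme in ("http", "https"):
                frontier.append(absolute)
    return visited


if __name__ == "__main__":
    # Hypothetical seed; any reachable HTML page works.
    print(crawl(["https://example.com"], max_pages=5))
```

The frontier here is a FIFO queue, which yields breadth-first traversal; swapping in a priority queue would let the crawler visit higher-value pages first, one of the usual levers for making a crawler efficient.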