A crawler (also called a spider) is a program that automatically fetches Web pages. Because most pages contain links to other pages, a spider can start from almost any page and still traverse a large number of diverse and distinct pages. Crawlers are typically used by search engines to discover and index Web content.
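A minimal sketch of this link-following traversal. The page store, URLs, and `fetch` callable below are illustrative assumptions standing in for real HTTP requests; a real crawler would also respect robots.txt, rate limits, and URL normalization.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first traversal: fetch a page, then queue its unseen links."""
    seen = {start_url}
    queue = deque([start_url])
    visited_order = []
    while queue and len(visited_order) < max_pages:
        url = queue.popleft()
        html = fetch(url)          # fetch() returns the page's HTML, or None
        if html is None:
            continue
        visited_order.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:  # enqueue each link not yet seen
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited_order

# Toy in-memory "web" standing in for real HTTP fetches (hypothetical data).
PAGES = {
    "/a": '<a href="/b">b</a> <a href="/c">c</a>',
    "/b": '<a href="/c">c</a>',
    "/c": '<a href="/a">a</a>',
}

print(crawl("/a", PAGES.get))  # → ['/a', '/b', '/c']
```

Starting from `/a`, the crawler reaches every page in the toy graph exactly once, even though the pages link back to each other; the `seen` set is what prevents revisiting.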
Characteristics of Streaming Media Stored on the Web
M. Li, M. Claypool, R. Kinicki, and J. Nichols
To appear in ACM Transactions on Internet Technology, 2005.