The document discusses how traditional ETL processes can become a bottleneck for big data analytics because of the sheer volume and unstructured nature of new data sources. It describes how Hadoop can relieve this bottleneck by ingesting and transforming data in parallel across a large cluster of commodity machines in a cost-effective way. Specifically, Hadoop allows huge amounts of structured and unstructured data to be loaded without predefined schemas, and ETL transformations to be performed at scale through MapReduce jobs. This lets organizations enrich their data warehouses and accelerate ETL for big data.
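The MapReduce-style ETL pattern described above can be illustrated with a minimal sketch. This is a hypothetical, single-machine simulation of the map, shuffle, and reduce phases (the input lines, field layout, and function names are all assumptions, not from the document): raw schema-less log lines are parsed in the map phase, grouped by key in the shuffle, and aggregated in the reduce phase.

```python
from collections import defaultdict

# Hypothetical raw input: schema-less log lines loaded as-is,
# including a malformed line that the map phase simply skips.
RAW_LOGS = [
    "2024-01-05 GET /home 200",
    "2024-01-05 GET /cart 500",
    "2024-01-06 POST /cart 200",
    "not a well-formed line",
]

def map_phase(line):
    """Map: parse one raw line into (date, status_code) pairs."""
    parts = line.split()
    if len(parts) == 4 and parts[3].isdigit():
        yield (parts[0], int(parts[3]))

def reduce_phase(date, statuses):
    """Reduce: aggregate per-day request and server-error counts."""
    errors = sum(1 for s in statuses if s >= 500)
    return {"date": date, "requests": len(statuses), "errors": errors}

def run_etl(lines):
    # Shuffle: group mapped pairs by key, as the framework would
    # across the cluster before invoking the reducers.
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_phase(line):
            groups[key].append(value)
    return [reduce_phase(k, groups[k]) for k in sorted(groups)]

result = run_etl(RAW_LOGS)
```

On a real Hadoop cluster the map and reduce functions would run as distributed tasks over data in HDFS, which is what allows the transformation step to scale with cluster size.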