Big data technologies exist because traditional relational databases are poorly suited to storing and analyzing the large, diverse, and fast-growing bodies of unstructured data now being generated. Hadoop is an open-source framework for the distributed processing of large data sets across clusters of commodity computers. It uses the Hadoop Distributed File System (HDFS) for storage and MapReduce as its programming model for distributed computation. Because it can handle high volumes and varieties of data arriving at high velocity, Hadoop can address both new problems and old ones more effectively than traditional databases and data warehouses.
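To make the MapReduce model concrete, here is a minimal single-process sketch of its three phases (map, shuffle, reduce) using the classic word-count example. This is illustrative only: a real Hadoop job would distribute these phases across a cluster, and the function names and sample input below are invented for this sketch.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input line.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values; here, sum the counts per word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "hadoop processes big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

The key property this illustrates is that map and reduce operate on independent keys, which is what lets Hadoop run them in parallel on many machines without coordination beyond the shuffle step.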