Solution

The key assumptions made by the Hadoop Distributed File System (HDFS) approach are:

1. High Volume: The volume of data stored in HDFS is expected to be very large, typically measured in terabytes or even petabytes.

2. High Availability: The cluster can contain thousands of file servers, and some of them may be down (unavailable) at any given instant.

3. Reliable Communication: Communication among working machines is assumed to be reliable.

4. High Throughput: The file system is not designed for interactive operations; instead, high throughput is preferred over low latency.

5. Redundancy: Servers are distributed across racks and data centers to ensure redundancy and, therefore, availability (see the sketch after this list).
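To make assumptions 4 and 5 concrete, here is a minimal sketch using the standard Hadoop Java client (org.apache.hadoop.fs.FileSystem) that writes a file while requesting three replicas of each block. The NameNode address hdfs://namenode:9000, the path /data/example.txt, and the replication factor are illustrative assumptions, not values taken from the text above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address; replace with your cluster's fs.defaultFS.
            conf.set("fs.defaultFS", "hdfs://namenode:9000");
            // Assumption 5: request 3 replicas so blocks survive individual
            // server (and, with rack awareness, whole-rack) failures.
            conf.set("dfs.replication", "3");

            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/data/example.txt");

            // Assumption 4: HDFS favors large, sequential, write-once files
            // (high throughput) rather than small interactive updates.
            try (FSDataOutputStream out = fs.create(path)) {
                out.writeUTF("hello, hdfs");
            }
            fs.close();
        }
    }

Because replication is handled by the file system itself, a client that writes a file this way does not need to know which servers or racks hold the copies; the NameNode places the replicas, which is what lets the cluster tolerate unavailable machines (assumption 2) transparently.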