HDFS is Hadoop's distributed file system, designed to store very large files across many machines. Files are split into large blocks (128 MB by default) and each block is replicated across the cluster (three copies by default) for reliability. A single NameNode manages the file system namespace and metadata, while DataNodes store the actual block data. If a DataNode fails, the blocks it held can be re-replicated from the surviving copies, so no data is lost. The default placement policy is rack-aware: the NameNode spreads replicas across racks so that the loss of one rack does not take out every copy, which also improves aggregate read bandwidth. HDFS works best for large, write-once (effectively immutable) files; workloads dominated by many small files or low-latency random access are better served by other storage systems.
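To make the block-splitting and rack-aware placement ideas concrete, here is a minimal Python sketch. It is not the real HDFS implementation: the topology, node names (`dn1` through `dn6`), and the deterministic hash-based node choice are illustrative assumptions; only the 128 MB block size, replication factor of 3, and the general shape of the default placement policy (first replica on the writer's node, the other two on a single remote rack) come from HDFS's documented defaults.

```python
import hashlib

BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size
REPLICATION = 3                 # HDFS default replication factor

# Hypothetical two-rack topology: rack name -> list of DataNodes.
TOPOLOGY = {
    "rack1": ["dn1", "dn2", "dn3"],
    "rack2": ["dn4", "dn5", "dn6"],
}

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (offset, length) pairs covering a file of file_size bytes."""
    return [(off, min(block_size, file_size - off))
            for off in range(0, file_size, block_size)]

def place_replicas(block_id, writer_node):
    """Pick replica nodes loosely following HDFS's default policy:
    first replica on the writer's node, second on a node in a different
    rack, third on a different node in that same remote rack."""
    writer_rack = next(r for r, ns in TOPOLOGY.items() if writer_node in ns)
    other_racks = [r for r in TOPOLOGY if r != writer_rack]
    # Deterministic pseudo-random choice keyed on the block id
    # (a stand-in for the NameNode's randomized selection).
    h = int(hashlib.md5(str(block_id).encode()).hexdigest(), 16)
    remote_rack = other_racks[h % len(other_racks)]
    remote_nodes = TOPOLOGY[remote_rack]
    second = remote_nodes[h % len(remote_nodes)]
    third = remote_nodes[(h + 1) % len(remote_nodes)]  # distinct from second
    return [writer_node, second, third]
```

For example, a 300 MB file splits into three blocks (128 MB, 128 MB, 44 MB), and each block's three replicas land on three distinct nodes spanning two racks, so losing either rack still leaves at least one copy readable.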