This document summarizes a research paper on fault tolerance techniques in big data tools. The paper surveys fault tolerance strategies including data replication, checkpointing, and automatic recovery, and examines how fault tolerance is realized in Hadoop and MongoDB. It also reviews several related papers on enhancing fault tolerance in Hadoop clusters and on the architecture and fault tolerance of geographically distributed Hadoop clusters. One proposed solution provides fault tolerance using commercial off-the-shelf (COTS) technology. The conclusion notes that critical components may become unavailable, and their states impossible to reproduce, owing to environmental constraints and specialized operating systems.
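The checkpoint-and-recover strategy mentioned above can be illustrated with a minimal sketch: a worker periodically persists its state so that, after a crash, a replacement process resumes from the last checkpoint rather than starting over. All names here (the `Worker` class, the checkpoint path) are hypothetical illustrations, not APIs from the paper or from Hadoop/MongoDB.

```python
import json
import os
import tempfile

class Worker:
    """Hypothetical worker that checkpoints its progress to disk."""

    def __init__(self, checkpoint_path):
        self.checkpoint_path = checkpoint_path
        self.state = {"processed": 0}

    def checkpoint(self):
        # Write atomically: dump to a temp file, then rename over the old
        # checkpoint, so a crash mid-write never corrupts the last good copy.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.checkpoint_path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.checkpoint_path)

    def recover(self):
        # Resume from the last successful checkpoint, if one exists.
        if os.path.exists(self.checkpoint_path):
            with open(self.checkpoint_path) as f:
                self.state = json.load(f)

    def process(self, items):
        # Checkpoint after each item so at most one item of work is lost.
        for _ in items:
            self.state["processed"] += 1
            self.checkpoint()
```

A replacement worker pointed at the same checkpoint file calls `recover()` and continues where the failed one stopped; real systems such as Hadoop combine this with data replication so the checkpointed state itself survives disk loss.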