Spark RDDs (Resilient Distributed Datasets) provide an immutable, fault-tolerant abstraction over data partitioned across a cluster. Persisting an RDD caches it in memory, so iterative algorithms and interactive queries can reuse a dataset instead of recomputing it from storage each time. Because RDDs are immutable, many tasks can read the same partitions in parallel without locking, and a cached copy can never be invalidated by an in-place update. Transformations are lazily evaluated: Spark records the lineage of operations and computes nothing until an action demands a result, which lets it pipeline transformations and avoid materializing intermediate data. In Scala, RDDs are also statically typed; element types are inferred at compile time rather than at runtime, so many errors are caught before a job ever runs.
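The interplay of laziness and caching can be sketched in plain Python, without Spark itself. The `ToyRDD` class and its methods below are illustrative inventions, not Spark's API; they only mimic the idea that transformations build a lineage and an action forces evaluation:

```python
# Toy sketch (no Spark) of two RDD ideas: lazy transformations and
# explicit caching. ToyRDD and its methods are hypothetical, not Spark's API.

class ToyRDD:
    def __init__(self, compute):
        self._compute = compute   # thunk: produces the data only when called
        self._cache = None        # populated on first evaluation if cached
        self._cached = False

    def map(self, f):
        # Lazy: returns a new ToyRDD wrapping this one; no work happens yet.
        return ToyRDD(lambda: [f(x) for x in self._materialize()])

    def filter(self, pred):
        return ToyRDD(lambda: [x for x in self._materialize() if pred(x)])

    def cache(self):
        self._cached = True
        return self

    def _materialize(self):
        if self._cached:
            if self._cache is None:
                self._cache = self._compute()   # computed once, then reused
            return self._cache
        return self._compute()

    def collect(self):
        # Action: forces evaluation of the whole lineage.
        return self._materialize()


base = ToyRDD(lambda: list(range(10))).cache()
evens = base.filter(lambda x: x % 2 == 0)   # nothing computed yet
doubled = evens.map(lambda x: x * 2)        # still lazy
print(doubled.collect())                    # [0, 4, 8, 12, 16]
```

Note how `filter` and `map` return instantly: evaluation is deferred until `collect`, mirroring how Spark only runs a job when an action such as `collect()` or `count()` is invoked.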