This document discusses data-intensive computing and its relationship to data curation and preservation. It defines data-intensive computing as I/O-bound computation on volumes of data too large to fit in memory. It describes the role of data infrastructures, including bringing compute to archived data through queries, scripts, or APIs. Approaches such as MapReduce, Hadoop, and Storm are presented as ways to make the best use of resources for data-intensive workloads. The conclusion is that data-intensive computing requires new approaches to parallel computing because of the huge data volumes involved, and that it offers new opportunities for data reuse and reduction.
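
To make the MapReduce model named above concrete, here is a minimal sketch of its map, shuffle, and reduce phases, using word counting as the classic example. The function names and the in-memory shuffle are illustrative assumptions, not the API of Hadoop or any specific framework; in a real cluster the shuffle would redistribute intermediate pairs across machines so that computation runs where the data lives.

```python
from collections import defaultdict

def map_phase(record):
    """Map: emit (word, 1) pairs for each word in one input record."""
    for word in record.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    """Reduce: combine all partial counts for one word."""
    return word, sum(counts)

def mapreduce(records):
    # Shuffle: group intermediate pairs by key. A real framework does
    # this across machines; here it is done in memory for illustration.
    groups = defaultdict(list)
    for record in records:
        for word, count in map_phase(record):
            groups[word].append(count)
    return dict(reduce_phase(w, c) for w, c in groups.items())

if __name__ == "__main__":
    data = ["big data needs big infrastructure",
            "data reuse and data reduction"]
    print(mapreduce(data))  # e.g. {'big': 2, 'data': 3, ...}
```

Because the map and reduce functions are independent per record and per key, the framework can parallelize each phase freely, which is what makes the model suit I/O-bound workloads over data too large for any single machine's memory.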