This document discusses GPU-accelerated data analysis and machine learning with RAPIDS. As Moore's Law slows and CPU performance gains taper off, GPUs excel at the highly parallel workloads typical of machine learning. RAPIDS is an open-source project whose libraries, including cuDF, cuIO, and cuML, cover the entire data science pipeline on GPUs, from data ingestion to modeling. It aims to provide a Python ecosystem for GPU data science that scales across multiple GPUs and nodes to accelerate end-to-end workflows.
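As a minimal sketch of what this looks like in practice: cuDF deliberately mirrors the pandas DataFrame API, so familiar pandas-style code can run on the GPU largely unchanged. The snippet below assumes a RAPIDS installation with a CUDA-capable GPU; when cuDF is unavailable it falls back to pandas, so the same code runs either way (the fallback pattern and the sample data are illustrative, not from the original document).

```python
# Sketch of a RAPIDS-style workflow: cuDF mirrors the pandas API,
# so the same DataFrame code runs on GPU (cudf) or CPU (pandas).
try:
    import cudf as xdf  # RAPIDS GPU DataFrame library (assumes CUDA GPU)
except ImportError:
    import pandas as xdf  # CPU fallback with the same API

# Build a small DataFrame; with cudf the data lives in GPU memory.
df = xdf.DataFrame({"x": [1.0, 2.0, 3.0, 4.0],
                    "y": [10.0, 20.0, 30.0, 40.0]})

# Boolean filtering and aggregation execute on the GPU under cudf.
filtered = df[df["x"] > 2.0]
print(float(filtered["y"].mean()))  # 35.0
```

The same drop-in principle extends to the rest of the pipeline: cuIO handles GPU-side file reading (e.g. CSV and Parquet), and cuML offers scikit-learn-like estimators, so a full ingest-to-model workflow can stay on the GPU.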