This document discusses running distributed TensorFlow on Kubernetes. It introduces Kubernetes as a container orchestration platform that schedules containers across a cluster, covers scheduling GPUs on Kubernetes nodes, and explains how distributed TensorFlow replicates models across machines for training. The remainder covers building Docker images for distributed TensorFlow jobs, specifying job configurations in Kubernetes YAML templates, and running distributed TensorFlow workflows on Kubernetes with workers, parameter servers, and shared storage. A minimal illustration of the worker/parameter-server layout follows below.
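The sketch below shows, in Python, how a distributed TensorFlow replica might describe the cluster of workers and parameter servers that the Kubernetes templates would create. It is a minimal sketch assuming the TF 1.x `tf.train` API; the host names (`worker-0`, `ps-0`) and port are hypothetical and would normally come from the Kubernetes Service names and the pod's environment, not from this document.

```python
import tensorflow as tf

# Hypothetical cluster layout: two workers and one parameter server.
# The host names stand in for the Kubernetes Services a YAML template might define.
cluster = tf.train.ClusterSpec({
    "worker": ["worker-0:2222", "worker-1:2222"],
    "ps": ["ps-0:2222"],
})

# Each replica starts a server for its own task. In a Kubernetes deployment,
# job_name and task_index would typically be injected via environment variables
# set in the pod spec rather than hard-coded as they are here.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Parameter-server tasks usually just block and serve variables:
# server.join()
```

In this arrangement, worker replicas compute gradients while parameter-server tasks hold and update the shared model variables; shared storage (for example, a network volume mounted into the pods) would hold checkpoints and training data.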