Deep learning projects require managing large datasets, heavyweight dependencies, complex experiments, and substantial codebases. This talk presents best practices for handling these tasks efficiently and reproducibly. Tools covered include the Creevey library for processing large collections of files; pip-tools and nvidia-docker for managing dependencies; and MLflow Tracking for tracking experiments.