This document summarizes research on modular multitask reinforcement learning with policy sketches. In this approach, each task is annotated with a policy sketch — a short sequence of named subtask symbols — which provides loose supervision that is decoupled from the environment's reward structure. Each symbol is associated with a modular subpolicy, and because subpolicies are shared across all tasks whose sketches mention them, neural networks can learn reusable behaviors across related tasks without explicit reward engineering for each subtask. Experiments show that agents achieve comparable performance on seen and unseen tasks by recombining learned subpolicies, and ablation studies demonstrate that the sketches are essential to this hierarchical decomposition. The main limitation is that the approach only applies to tasks a human can decompose into simpler sub-tasks.
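The composition mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the symbol names, toy state, and `execute` function are assumptions for exposition, not the paper's implementation; the real method trains neural subpolicies jointly with an actor-critic objective:

```python
# Hypothetical sketch: a policy sketch is a sequence of subtask symbols,
# and each symbol maps to a reusable subpolicy shared across tasks.
from typing import Callable, Dict, List

State = int  # toy state representation for illustration only


def make_subpolicy(delta: int) -> Callable[[State], State]:
    """A toy deterministic subpolicy that transforms the state."""
    return lambda s: s + delta


# One subpolicy per subtask symbol (illustrative names, not from the source).
subpolicies: Dict[str, Callable[[State], State]] = {
    "get_wood": make_subpolicy(1),
    "use_workbench": make_subpolicy(10),
    "get_iron": make_subpolicy(100),
}


def execute(sketch: List[str], state: State) -> State:
    """Run the subpolicies named by the sketch, in order."""
    for symbol in sketch:
        state = subpolicies[symbol](state)
    return state


# Two tasks reuse the same subpolicies in different combinations,
# which is what enables transfer to unseen sketches.
make_plank = ["get_wood", "use_workbench"]
make_axe = ["get_wood", "get_iron", "use_workbench"]

print(execute(make_plank, 0))  # 11
print(execute(make_axe, 0))    # 111
```

The key design point is that supervision lives entirely in the symbol sequences: no per-subtask reward is specified, and an unseen task is handled simply by executing a new sketch over already-learned modules.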