This thesis presents Reinforcement Learning Optimal Control (RLOC), a novel hierarchical reinforcement learning framework for controlling nonlinear dynamical systems. RLOC operates at two levels: at the low level, optimal control is used to learn local linear models of the dynamics and to derive the corresponding controllers; at the high level, a reinforcement learning agent treats these controllers as its actions and learns how to combine them to maximize long-term reward. The framework demonstrates that a small number of local controllers can solve global nonlinear control problems, providing a model for how the brain may bridge low-level control and high-level action selection. Evaluation on benchmark problems shows that RLOC is competitive with state-of-the-art control methods in both computational cost and solution quality.
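To make the two-level structure concrete, the following is a minimal sketch of the idea, not the thesis's actual implementation: discrete-time LQR controllers are derived around a few linearization points of a pendulum (the anchor angles, chunk length, reward, and tabular Q-learning agent are all illustrative assumptions), and a high-level agent learns which local controller to apply in each region of state space.

```python
# Illustrative sketch of a two-level RLOC-style controller (assumed setup).
import numpy as np
from scipy.linalg import solve_discrete_are

def local_lqr_gain(A, B, Q, R):
    """Low level: derive an LQR gain K for one local linear model (A, B)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u = -K (x - x_ref)

# Local linearizations of a pendulum (theta = 0 hanging down) around anchors.
dt, g, l = 0.05, 9.81, 1.0
anchors = np.array([0.0, np.pi / 2, np.pi])          # linearization points
gains, targets = [], []
for theta0 in anchors:
    A = np.array([[1.0, dt],
                  [-(g / l) * np.cos(theta0) * dt, 1.0]])  # local linear model
    B = np.array([[0.0], [dt]])
    gains.append(local_lqr_gain(A, B, Q=np.eye(2), R=np.array([[0.1]])))
    targets.append(np.array([theta0, 0.0]))          # each controller's anchor

def step(x, u):
    """True nonlinear pendulum dynamics (Euler step)."""
    theta, omega = x
    return np.array([theta + dt * omega,
                     omega + dt * (-(g / l) * np.sin(theta) + u)])

# High level: tabular Q-learning where each *action* is a local controller,
# held for a fixed chunk of low-level steps.
n_bins, n_ctrl, chunk = 16, len(gains), 10
Q_table = np.zeros((n_bins, n_ctrl))
disc = lambda x: int(np.clip((x[0] % (2 * np.pi)) / (2 * np.pi) * n_bins,
                             0, n_bins - 1))
goal = np.array([np.pi, 0.0])                        # swing up to upright

rng = np.random.default_rng(0)
alpha, gamma, eps = 0.1, 0.95, 0.2
for episode in range(500):
    x = np.array([rng.uniform(-0.1, 0.1), 0.0])      # start near hanging down
    for t in range(40):
        s = disc(x)
        a = rng.integers(n_ctrl) if rng.random() < eps else int(Q_table[s].argmax())
        for _ in range(chunk):                       # run the chosen controller
            u = -(gains[a] @ (x - targets[a])).item()
            x = step(x, np.clip(u, -5.0, 5.0))
        r = -np.linalg.norm(x - goal)                # reward: distance to goal
        s2 = disc(x)
        Q_table[s, a] += alpha * (r + gamma * Q_table[s2].max() - Q_table[s, a])
```

The key design point this sketch illustrates is the division of labour: the continuous control problem is handled entirely by the local linear controllers, so the high-level agent faces only a small discrete action set and can be learned with simple value-based methods.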