This document discusses stochastic optimal control and its information-theoretic dualities, highlighting two main solution approaches: stochastic dynamic programming and forward sampling of stochastic differential equations. It details the Hamilton-Jacobi-Bellman (HJB) equation and its central role in stochastic control problems, along with applications in fields such as finance and physics. It also examines the Legendre transform as a lens on optimal control and its relation to Bellman's principle of optimality; the key objects are sketched below.
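As a reference point, a standard form of these objects is sketched here in generic notation ($J$ for the optimal cost-to-go, $b$ and $\sigma$ for drift and diffusion, $\ell$ and $\phi$ for running and terminal cost); this is an assumed textbook formulation, not necessarily the document's own notation. For dynamics $dx_t = b(x_t,u_t)\,dt + \sigma(x_t)\,dW_t$, the HJB equation reads

\[
  -\partial_t J(x,t) \;=\; \min_{u}\Big[\, \ell(x,u) + b(x,u)^{\top}\nabla_x J + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}\nabla_x^2 J\big) \Big],
  \qquad J(x,T) = \phi(x).
\]

The information-theoretic duality alluded to above is, in its standard form, a Legendre-transform identity between the log-expectation (free energy) of a path cost $C$ under an uncontrolled path measure $p$ and a KL-regularized minimization over controlled measures $q$:

\[
  -\lambda \log \mathbb{E}_{p}\big[e^{-C/\lambda}\big]
  \;=\; \min_{q}\,\Big\{ \mathbb{E}_{q}[C] + \lambda\,\mathrm{KL}(q\,\|\,p) \Big\}.
\]

Under the usual assumptions (control-affine dynamics, quadratic control cost), the log transform $J = -\lambda \log \psi$ linearizes the HJB equation, and the Feynman-Kac formula expresses $\psi$ as an expectation over forward samples of the uncontrolled SDE, which is what makes the forward-sampling approach mentioned above possible.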