This document summarizes Deep Q-Networks (DQN), a deep reinforcement learning algorithm that achieved human-level performance on many Atari 2600 games, learning policies directly from raw pixels and rewards while using the same network architecture and hyperparameters across games. DQN rests on three key ideas: a deep neural network that approximates the action-value (Q) function, experience replay to improve data efficiency and break correlations between consecutive samples, and a separate target network to stabilize learning. DQN has inspired many follow-up algorithms, including Double DQN, Dueling DQN, prioritized experience replay, and noisy networks for better exploration.
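The three ideas above can be illustrated with a minimal sketch. This is a hypothetical toy setup, not the Atari pipeline: the "network" is a single linear layer represented as a NumPy matrix, and the environment transitions are random. It shows the replay buffer, the TD target `y = r + γ · max_a Q_target(s', a)`, and the periodic target-network sync.

```python
import random
from collections import deque

import numpy as np

# Toy Q-function Q(s) = s @ W, one output column per action.
GAMMA = 0.99
rng = np.random.default_rng(0)

# Online and target networks: the target is a lagged copy of the online weights.
online_W = rng.normal(size=(4, 2))   # 4-dim state, 2 actions
target_W = online_W.copy()

# Experience replay: store transitions, sample random minibatches.
replay = deque(maxlen=10_000)

def store(s, a, r, s_next, done):
    replay.append((s, a, r, s_next, done))

def sample(batch_size):
    batch = random.sample(replay, batch_size)
    s, a, r, s2, d = map(np.array, zip(*batch))
    return s, a, r, s2, d

def td_targets(r, s_next, done, W_target):
    # y = r + gamma * max_a Q_target(s', a); no bootstrap at terminal states.
    q_next = s_next @ W_target                      # shape (batch, n_actions)
    return r + GAMMA * (1.0 - done) * q_next.max(axis=1)

# Fill the buffer with transitions from a dummy random environment.
for _ in range(100):
    s = rng.normal(size=4)
    store(s, int(rng.integers(2)), float(rng.normal()), rng.normal(size=4), False)

s, a, r, s2, d = sample(32)
y = td_targets(r, s2, d, target_W)
print(y.shape)  # one TD target per sampled transition

# Periodically sync the target network to stabilize learning.
target_W = online_W.copy()
```

In a real implementation the online network would be trained by gradient descent on the squared error between `Q_online(s, a)` and `y`, with the target weights updated only every few thousand steps; that lag is what keeps the regression target from chasing itself.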