Deep Q-Network
A deep learning extension of Q-learning that uses neural networks to approximate the action-value function.
Overview
Deep Q-Network (DQN), introduced by DeepMind in 2013, combines Q-learning with deep neural networks to handle high-dimensional state spaces such as raw pixel input. The neural network approximates the Q-value function: it takes a state as input and outputs an estimated Q-value (expected return) for each available action.
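A minimal sketch of this state-to-Q-values mapping, using a tiny NumPy two-layer network; the layer sizes and weight initialization here are illustrative assumptions, not the architecture from the DQN paper:

```python
import numpy as np

# Sketch (assumed sizes): a two-layer MLP mapping a state vector to
# one Q-value per action, the core function approximator idea in DQN.
rng = np.random.default_rng(0)
STATE_DIM, HIDDEN, N_ACTIONS = 4, 16, 2  # illustrative dimensions

W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: state -> vector of Q-values, one per action."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

state = rng.normal(size=STATE_DIM)
q = q_values(state)
greedy_action = int(np.argmax(q))  # act greedily with respect to Q
```

Acting greedily on the argmax of the network's output is what turns the learned Q-values into a policy.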
Key Details
DQN introduced two key innovations: experience replay (storing past transitions in a buffer and training on random samples from it) and a target network (a delayed copy of the Q-network that provides stable training targets). DQN famously learned to play many Atari games at or above human level directly from raw pixels. Extensions include Double DQN, Dueling DQN, and Rainbow, which combines several of these improvements for better performance.
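The two stabilizers can be sketched with the standard library alone; the buffer capacity, sync interval, and stand-in "parameter" dicts below are illustrative assumptions rather than the original implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores transitions; random sampling breaks temporal correlation."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop off

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Stand-ins for network parameters (assumed names, not a real model):
online_params = {"w": 0.0}           # updated every training step
target_params = dict(online_params)  # delayed copy used for targets

buf = ReplayBuffer()
for t in range(100):
    buf.push(state=t, action=t % 2, reward=1.0, next_state=t + 1, done=False)
    online_params["w"] += 0.01       # pretend gradient step
    if t % 25 == 0:                  # periodic hard sync, as in DQN
        target_params = dict(online_params)

batch = buf.sample(32)               # minibatch of decorrelated transitions
```

Because the target network is only synced periodically, the targets it produces change slowly even while the online parameters move every step, which is what stabilizes the bootstrapped updates.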