Early Stopping
A regularization technique that halts training when validation performance stops improving, preventing the model from overfitting to the training data.
How It Works
Monitor validation loss (or another metric) after each epoch. If it hasn't improved for a set number of consecutive epochs (the patience), stop training and restore the model weights from the epoch with the best validation score.
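The loop above can be sketched as a small helper class. This is a minimal illustration, not tied to any particular framework; the EarlyStopping class, its step method, and the simulated loss values are all assumptions made for the example.

```python
class EarlyStopping:
    """Stop training when the monitored metric stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.best_state = None        # snapshot of the best model state
        self.epochs_without_improvement = 0

    def step(self, val_loss, model_state):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.best_state = model_state   # checkpoint the best weights
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Simulated validation losses: the model improves, then plateaus/overfits.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.53, 0.51, 0.54]
stopper = EarlyStopping(patience=3)
for epoch, loss in enumerate(val_losses):
    if stopper.step(loss, model_state={"epoch": epoch}):
        print(f"Stopping at epoch {epoch}; best loss {stopper.best_loss:.2f} "
              f"came from epoch {stopper.best_state['epoch']}")
        break
```

With patience set to 3, training halts three epochs after the loss of 0.50 at epoch 3, and the checkpoint from that best epoch is the one kept. In a real training loop, model_state would hold actual weights (e.g. a deep copy of the parameters) rather than a placeholder dict.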
Benefits
Simple, effective, and automatic. Saves compute by not training past the point of diminishing returns. Acts as implicit regularization; its main hyperparameter, patience, is typically easier to tune than an explicit penalty strength such as a weight-decay coefficient.