In statistical learning, regularisation is “any modification [made] to a learning algorithm that is intended to reduce its generalisation error but not its training error”.1 Regularisation is a central concern of machine learning, as important as optimisation.
Some key ideas:
- Weight decay (L2 regularisation) penalises large weights, preventing them from growing too much.
- Neuron dropout forces a neural network to learn more robust features.
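Both ideas can be sketched in a few lines of numpy. This is a minimal illustration, not a full training loop: `sgd_step_with_decay` and `dropout` are hypothetical helper names, and the hyperparameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight decay: adding the penalty (lambda/2) * ||w||^2 to the loss
# contributes lambda * w to the gradient, so each step also shrinks
# the weights toward zero.
def sgd_step_with_decay(w, grad, lr=0.1, weight_decay=0.01):
    """One SGD step; `weight_decay` is the L2 penalty coefficient lambda."""
    return w - lr * (grad + weight_decay * w)

# Dropout: during training, zero each activation with probability p and
# rescale the survivors by 1/(1-p) ("inverted dropout"), so the expected
# activation is unchanged. At evaluation time, dropout is disabled.
def dropout(x, p=0.5, training=True):
    if not training:
        return x
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)
```

Note that even when the loss gradient is zero, the decay term keeps pulling the weights toward the origin, which is exactly the "prevents weights from growing too much" behaviour described above.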
Footnotes
1. From Deep Learning, by Goodfellow, Bengio, Courville, and Bach. ↩