Five ML Concepts - #25

5 machine learning concepts. Under 30 seconds each.
| Resource | Link |
|---|---|
| Papers | Links in References section |
| Video | Five ML Concepts #25 |
References
| Concept | Reference |
|---|---|
| Label Smoothing | Rethinking the Inception Architecture (Szegedy et al. 2015) |
| Miscalibration | On Calibration of Modern Neural Networks (Guo et al. 2017) |
| Representation Learning | Representation Learning: A Review (Bengio et al. 2013) |
| Adversarial Examples | Intriguing properties of neural networks (Szegedy et al. 2013) |
| Double Descent | Deep Double Descent (Nakkiran et al. 2019) |
Today’s Five
1. Label Smoothing
Replacing hard one-hot labels with softened target distributions during training. Instead of 100% confidence in one class, distribute small probability to other classes.
Reduces overconfidence and can improve generalization.
Like allowing small uncertainty instead of absolute certainty.
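The softened targets can be sketched in a few lines of NumPy. This follows the uniform-smoothing recipe from the Szegedy et al. paper: each target keeps most of the mass on the true class and spreads ε uniformly over all classes (the function name is illustrative):

```python
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """Turn integer class labels into smoothed one-hot targets.

    Each row mixes the one-hot vector with a uniform distribution:
    the true class gets (1 - epsilon) + epsilon/K, every other
    class gets epsilon/K, so rows still sum to 1.
    """
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - epsilon) + epsilon / num_classes
```

With `epsilon=0.1` and 3 classes, the true class target becomes about 0.933 instead of 1.0, so the loss never pushes the model toward infinite logit gaps.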
2. Miscalibration
When predicted confidence does not match observed accuracy. A model that says “90% confident” should be right 90% of the time.
Modern neural networks tend to be overconfident. Temperature scaling can help.
Like a forecast that sounds certain but is often wrong.
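Temperature scaling, as studied by Guo et al., is a one-parameter fix: divide the logits by a temperature T fitted on a validation set before the softmax. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Soften (T > 1) or sharpen (T < 1) predicted probabilities.

    Dividing logits by T leaves the argmax unchanged, so accuracy
    is untouched; only the confidence values move.
    """
    return softmax(logits / T)
```

Because the ranking of classes is preserved, temperature scaling changes calibration without changing which class the model predicts.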
3. Representation Learning
Learning useful internal features automatically from raw data. Instead of hand-crafting features, the model discovers what matters.
The foundation of deep learning’s success across domains.
Like detecting edges before recognizing full objects.
4. Adversarial Examples
Inputs modified to cause incorrect predictions. Small, often imperceptible changes can flip model outputs.
A security concern and a window into model vulnerabilities.
Like subtle changes that fool a system without obvious differences.
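The cited Szegedy et al. paper finds such inputs via box-constrained optimization; a later one-step shortcut, the fast gradient sign method (FGSM), moves the input by ε in the sign of the loss gradient. A minimal NumPy sketch for binary logistic regression (weights and step size are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """One FGSM step against a logistic-regression model.

    For binary cross-entropy, the gradient of the loss w.r.t. the
    input x is (p - y) * w, where p is the predicted probability.
    Stepping epsilon in its sign direction increases the loss.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)
```

Each input coordinate moves by at most ε, so the perturbed example stays close to the original while the model's confidence in the true label drops.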
5. Double Descent
Test error that decreases, increases, then decreases again as model capacity grows. The classical bias-variance tradeoff captures only the first part.
Modern overparameterized models operate in the second descent regime.
Like getting worse before getting better—twice.
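The curve can be traced in miniature with minimum-norm least squares on random features: sweeping the feature count past the number of training points typically produces the peak at the interpolation threshold followed by a second descent. A minimal NumPy sketch (the regression task and feature map are illustrative choices, not from the paper):

```python
import numpy as np

def min_norm_fit_error(n_features, n_train=20, n_test=200,
                       noise=0.1, seed=0):
    """Test MSE of the minimum-norm least-squares fit on random
    Fourier features; model capacity is the number of features."""
    rng = np.random.default_rng(seed)
    # Toy 1-D task: y = sin(2*pi*x) with label noise on the train set
    x_tr = rng.uniform(-1, 1, n_train)
    x_te = rng.uniform(-1, 1, n_test)
    y_tr = np.sin(2 * np.pi * x_tr) + noise * rng.standard_normal(n_train)
    y_te = np.sin(2 * np.pi * x_te)

    # Random Fourier feature map shared by train and test
    w = 3.0 * rng.standard_normal(n_features)
    ph = rng.uniform(0, 2 * np.pi, n_features)
    phi = lambda x: np.cos(np.outer(x, w) + ph)

    # pinv gives the least-squares solution, and the minimum-norm
    # one once the system is underdetermined (n_features > n_train)
    theta = np.linalg.pinv(phi(x_tr)) @ y_tr
    return float(np.mean((phi(x_te) @ theta - y_te) ** 2))

# Capacity sweep across the interpolation threshold (n_train = 20)
errors = [min_norm_fit_error(p) for p in (5, 10, 20, 40, 80)]
```

Plotting `errors` against the feature counts typically shows the classical U-shape up to 20 features, a spike at the threshold, and declining error again as the model becomes overparameterized.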
Quick Reference
| Concept | One-liner |
|---|---|
| Label Smoothing | Softening targets to reduce overconfidence |
| Miscalibration | Confidence not matching accuracy |
| Representation Learning | Automatically learning useful features |
| Adversarial Examples | Inputs crafted to cause errors |
| Double Descent | Test error decreasing twice with model size |
Short, accurate ML explainers. Follow for more.
Part 25 of the Five ML Concepts series. View all parts | Next: Part 26 →
