Out-of-Distribution
Data or situations that are very different from what a model was trained on, causing it to make worse predictions or behave unpredictably.
In Plain English
Out-of-distribution (or OOD) refers to inputs that fall outside the range of normal examples a model learned from. Imagine training a face-recognition system only on daytime photos, then trying to use it at night: those low-light images are out-of-distribution. Models tend to perform poorly on this kind of unfamiliar data, yet they often don't warn you they're confused. This is why AI deployed in the real world needs safeguards, such as the confidence check sketched below: the world throws unexpected situations at AI far more often than training labs can anticipate.
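One simple safeguard is to have the system abstain when it isn't confident, since low confidence is a rough (if imperfect) signal that an input may be out-of-distribution. The sketch below is illustrative only; the function names and the 0.7 threshold are assumptions for the example, not part of any specific system.

```python
# A minimal sketch of one common OOD safeguard: refuse to act when the
# model's confidence falls below a threshold. The threshold and names
# here are illustrative assumptions.
import numpy as np

def softmax(logits):
    """Convert raw model scores into probabilities."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def predict_with_ood_guard(logits, threshold=0.7):
    """Return the predicted class, or None if the input looks out-of-distribution.

    A low maximum probability suggests the input is unlike the training
    data, so the system abstains instead of guessing.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    if probs.max() < threshold:
        return None  # abstain: possibly out-of-distribution
    return int(probs.argmax())

# A confident prediction vs. an ambiguous (possibly OOD) one.
print(predict_with_ood_guard([4.0, 0.5, 0.2]))   # -> 0 (confident)
print(predict_with_ood_guard([1.1, 1.0, 0.9]))   # -> None (abstain)
```

In practice, production systems often layer several such checks, because a model can also be confidently wrong on unfamiliar inputs.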
💡Real-World Example
A bank's fraud-detection system was trained on years of normal customer transactions. Then during the pandemic, spending patterns shifted dramatically—people bought groceries online instead of in stores, traveled less, and ordered supplies from new vendors. That sudden, unusual spending was out-of-distribution, and the system either flagged legitimate transactions as fraud or missed actual fraud because it had never seen patterns quite like these before.
