Techniques
Last updated: April 2026
Inference
The process by which a trained AI model processes new inputs and generates outputs.
In Plain English
Inference is when an AI model applies what it learned during training to new data. Every time you chat with ChatGPT or generate an image with Midjourney, the model is doing inference. Training happens once (and is very expensive), but inference happens every time you use the AI (and is much cheaper per use).
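The split between expensive one-time training and cheap per-use inference can be sketched in a few lines of Python. The weights below are made-up numbers standing in for the output of a completed training run; real models have billions of such parameters, but the shape of inference is the same: multiply learned weights by new inputs.

```python
import math

# Parameters produced by a (hypothetical) earlier training run.
# Training computes these once; inference only reuses them.
WEIGHTS = [0.8, -0.4, 1.2]
BIAS = 0.1

def predict(features):
    """Inference: apply the learned weights to a new input."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 / (1 + math.exp(-score))  # sigmoid turns the score into a probability

# Every call is one inference pass over fresh data.
print(round(predict([1.0, 2.0, 0.5]), 3))  # → 0.668
```

Note that `predict` never changes the weights; that is what makes each call so much cheaper than training, which must repeatedly adjust every parameter.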
💡Real-World Example
Training a frontier AI model can cost hundreds of millions of dollars. But each chat response (inference) costs only fractions of a cent.
