Diffusion Model
A generative AI system that creates new content by learning to reverse a process of adding noise to data.
In Plain English
Diffusion models work backward from chaos. Imagine taking a clear photo and gradually scrambling it into pure noise—pixel by pixel, more and more garbled—until nothing recognizable remains. A diffusion model is trained to undo that scrambling one small step at a time; at generation time it starts from random noise and carefully unscrambles it into a new, coherent image. These models power many modern image generators, including popular text-to-image tools. They're valued because they can generate highly detailed and diverse outputs, and the step-by-step process gives them fine control over what they create.
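The "scrambling" half of that story has a simple closed form, and a tiny sketch can make it concrete. The snippet below (an illustrative sketch, not a real generator: the schedule values, array sizes, and function names are all assumptions for this example) noises a stand-in "image" a little and a lot, showing that early steps barely disturb it while the final step is nearly pure static. In a real system, a trained neural network would then walk this process in reverse to generate images.

```python
import numpy as np

def linear_beta_schedule(T, beta_start=1e-4, beta_end=0.02):
    # How much noise is added at each step; a linear ramp is one common choice.
    return np.linspace(beta_start, beta_end, T)

def forward_noise(x0, t, alpha_bars, rng):
    # Closed-form forward process: jump straight from the clean image x0
    # to its noised version at step t, without simulating every step:
    #   x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1 - alpha_bars[t]) * eps

T = 1000
betas = linear_beta_schedule(T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative "how much signal survives"

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))  # stand-in for a tiny grayscale "image"

x_early = forward_noise(x0, 10, alpha_bars, rng)     # step 10: barely changed
x_late = forward_noise(x0, T - 1, alpha_bars, rng)   # final step: almost pure noise

# The early image still correlates strongly with the original...
print(np.corrcoef(x0.ravel(), x_early.ravel())[0, 1] > 0.9)
# ...while by the last step almost no original signal survives.
print(alpha_bars[-1] < 1e-4)
```

A trained model generates by reversing this: starting from pure noise like `x_late` and repeatedly predicting and subtracting a little noise until something like `x0` emerges.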
💡 Real-World Example
An AI image generator using a diffusion model starts with random colored static when you type 'a golden retriever in autumn leaves.' Over dozens of steps, it gradually refines that noise into increasingly clear pixels—fur emerges, leaves appear, colors sharpen—until a beautiful, original dog portrait materializes.
