AI Glossary

AI Hallucination

When an AI model confidently generates factually incorrect, fabricated, or nonsensical information that has no grounding in its training data or in reality.

Why It Happens

LLMs are trained to predict plausible next tokens, not to state verified facts. They interpolate patterns from their training data without a grounded model of the world, so when the correct answer is rare or absent, they tend to fill the gap with convincing-sounding but fabricated details.
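
To see the mechanism at the decoding level, consider the sketch below. The logits are invented for illustration (a real model scores an entire vocabulary); the point is that greedy decoding selects the most probable continuation, and "most probable in the training distribution" is not the same as "true".

```python
import math

# Illustrative only: invented logits for candidate completions of
# "The capital of Australia is ___". These numbers are made up to show
# the mechanism; they are not measured from any real model.
candidates = {
    "Sydney":    4.1,  # appears often in training text, so highly "plausible"
    "Canberra":  3.7,  # the correct answer, but less common in text
    "Melbourne": 2.9,
}

def softmax(logits: dict[str, float]) -> dict[str, float]:
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(candidates)
for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:10s} {p:.2f}")

# Greedy decoding picks the most probable token (the plausible but wrong
# "Sydney") because the training objective rewards likelihood, not truth.
print("Model answers:", max(probs, key=probs.get))
```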

Mitigation

- RAG: grounding answers in retrieved documents (a minimal sketch follows this list)
- Tool use: fact-checking claims via search or other external tools
- Chain-of-thought prompting: eliciting explicit step-by-step reasoning
- Fine-tuning on curated factual data
- Constitutional AI methods that train models to express uncertainty rather than guess
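
To make the RAG option concrete, here is a minimal sketch. Everything in it is a placeholder: DOCUMENTS is a toy corpus, the retriever is naive keyword overlap (production systems usually use embedding similarity over a vector index), and call_llm stands in for whatever model API you use.

```python
# Hypothetical helper: call_llm(prompt) wraps whatever model API you use.
DOCUMENTS = [
    "Canberra is the capital of Australia.",
    "Sydney is Australia's largest city by population.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))[:k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that confines the model to retrieved evidence."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

print(grounded_prompt("What is the capital of Australia?"))
# answer = call_llm(grounded_prompt("..."))  # hypothetical model call
```

Two levers do the work here: confining the model to retrieved evidence, and giving it an explicit "I don't know" escape hatch instead of forcing a guess.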


Last updated: March 5, 2026