AI Glossary

Confabulation

The tendency of an AI model to generate plausible-sounding but factually incorrect information; also known as hallucination, it is a major reliability challenge for language models.

Why It Happens

LLMs are trained to predict likely next tokens, not to verify facts. They blend patterns from training data, sometimes creating plausible but false combinations. They lack a mechanism to distinguish what they 'know' from what they're 'guessing'.
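The next-token mechanism above can be sketched with a toy frequency model (illustrative only, not a real LLM; the contexts and counts are invented for demonstration): the model emits whichever continuation was statistically most common in its training text, with no step that checks whether the result is true.

```python
# Toy next-token predictor: picks the most frequent continuation for a
# two-token context. Counts are made up purely for illustration.
next_token_counts = {
    ("the", "capital"): {"of": 90, "city": 10},
    # A frequent-but-wrong continuation can outscore a correct one,
    # because frequency in text is not the same as factual accuracy.
    ("capital", "of"): {"France": 60, "Australia": 40},
}

def predict_next(context):
    """Return the statistically likeliest next token, truth not consulted."""
    counts = next_token_counts.get(context, {})
    return max(counts, key=counts.get) if counts else None

print(predict_next(("capital", "of")))
```

The key point: nothing in `predict_next` verifies anything; it only ranks continuations by how often they co-occurred, which is exactly why fluent-but-false output is possible.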

Types

Factual errors: Wrong dates, names, or statistics.
Fabricated sources: Citing papers or URLs that don't exist.
Logical errors: Plausible-sounding but invalid reasoning.
Confident uncertainty: Stating uncertain information with high confidence.

Mitigation

Retrieval-Augmented Generation (RAG) grounds outputs in real documents.
Chain-of-thought prompting improves reasoning accuracy.
Calibrated confidence scoring helps flag uncertain outputs.
Human-in-the-loop verification for high-stakes applications.
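The RAG idea above can be sketched minimally (an illustrative toy, not a production pipeline; the document store, overlap scoring, and prompt wording are all invented for this example): retrieve relevant text first, then instruct the model to answer only from that retrieved context.

```python
# Minimal RAG sketch: ground the prompt in retrieved documents so the
# model can cite evidence instead of inventing facts. The store and
# keyword-overlap retriever are toy stand-ins for a real vector index.
documents = {
    "doc1": "The Transformer architecture was introduced in 2017.",
    "doc2": "RAG combines a retriever with a generator model.",
}

def retrieve(query, k=1):
    """Rank documents by naive word overlap with the query."""
    query_words = set(query.lower().split())
    def score(text):
        return len(query_words & set(text.lower().split()))
    ranked = sorted(documents.values(), key=score, reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Prepend retrieved evidence so the answer is grounded, not guessed."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nAnswer using only the context: {query}"

print(build_prompt("When was the Transformer introduced?"))
```

In a real system the keyword overlap would be replaced by embedding similarity search, but the structure is the same: retrieval narrows the model's answer space to verifiable source text.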


Last updated: March 5, 2026