Grounding (RAG/Search)
Connecting LLM responses to verifiable external sources to reduce hallucination.
Overview
Grounding in the context of LLMs refers to techniques that anchor model outputs to verifiable external information sources, reducing hallucination and improving factual accuracy. The most common grounding approach is retrieval-augmented generation (RAG), where documents relevant to the user's query are retrieved and included in the model's context so the answer can draw on them directly.
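The RAG flow described above can be sketched in a few lines. This is a minimal illustration, not a production retriever: the toy corpus, the keyword-overlap scoring (a stand-in for a real embedding or BM25 index), and the prompt template are all assumptions made for the example.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query and
    return the top k. A real system would use embeddings or BM25."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, docs):
    """Place the retrieved documents in the model's context so its
    answer is anchored to them rather than to parametric memory."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Photosynthesis converts light into chemical energy.",
    "The Eiffel Tower was completed in 1889.",
]
query = "How tall is the Eiffel Tower?"
docs = retrieve(query, corpus)
prompt = build_grounded_prompt(query, docs)
```

The resulting prompt would then be sent to the model; because the relevant fact ("330 metres") is present in the context, the model can answer from the source rather than guessing.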
Key Details
Grounding can also involve web search (e.g., Google's Gemini grounding with Google Search), knowledge graph lookup, database queries, or code execution for numerical claims. Effective grounding means citing sources so users can verify claims, and clearly distinguishing retrieved facts from model-generated content. Grounding is essential for enterprise applications where accuracy and auditability are paramount: legal, medical, financial, and compliance use cases.