AI Glossary

Human-in-the-Loop (HITL)

An AI system design where humans are involved in the decision-making loop, reviewing, correcting, or approving AI outputs before they take effect.

When to Use

- High-stakes decisions (medical diagnosis, legal, financial)
- When model confidence is low
- When errors are costly or irreversible
- During model training (active learning, RLHF)

Patterns

- Review queue: the AI flags items for human review.
- Confidence thresholds: automate high-confidence cases, escalate low-confidence ones.
- Feedback loops: human corrections improve the model over time.
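The review-queue and confidence-threshold patterns can be combined: act automatically above a cutoff, queue everything else for a human. A minimal sketch, assuming a fixed threshold of 0.9 and illustrative names (`Prediction`, `Router` are not from any library):

```python
# Hypothetical sketch: route model outputs by confidence.
# High-confidence predictions take effect automatically; the rest
# go to a human review queue. Threshold value is an assumption,
# tuned in practice to how costly errors are.
from dataclasses import dataclass, field

@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float  # model score in [0, 1]

@dataclass
class Router:
    threshold: float = 0.9
    review_queue: list = field(default_factory=list)
    auto_applied: list = field(default_factory=list)

    def route(self, pred: Prediction) -> str:
        if pred.confidence >= self.threshold:
            # High confidence: apply without human intervention.
            self.auto_applied.append(pred)
            return "auto"
        # Low confidence: a human reviews before anything takes effect.
        self.review_queue.append(pred)
        return "review"

router = Router(threshold=0.9)
router.route(Prediction("a1", "approve", 0.97))  # applied automatically
router.route(Prediction("a2", "deny", 0.55))     # queued for a human
```

Human decisions on queued items can then be fed back as labeled examples, closing the feedback loop described above.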


Last updated: March 5, 2026