AI Glossary

A/B Testing for ML

Comparing two ML model versions in production by routing traffic between them and measuring outcomes.

Overview

A/B testing for machine learning involves deploying two model versions simultaneously and randomly routing production traffic between them to measure which performs better on key metrics. Unlike offline evaluation, A/B testing captures real-world effects including user behavior changes, latency impacts, and business metric outcomes.
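Random routing is typically made deterministic per user so the same user always sees the same model version. A minimal sketch of hash-based traffic splitting (the function name and 50/50 split are illustrative, not from any particular framework):

```python
import hashlib

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically assign a user to model A or B.

    Hashing the user ID keeps the assignment stable across
    requests, so each user consistently sees one model version.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    # Map the hash to a uniform value in [0, 1)
    bucket = int(digest, 16) % 10_000 / 10_000
    return "model_a" if bucket < split else "model_b"
```

Because assignment depends only on the user ID, no per-user state needs to be stored, and the split ratio can be tuned (e.g. 0.05 for a cautious rollout).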

Key Details

Implementation requires traffic splitting infrastructure, statistical significance testing, and careful metric selection. Common approaches include random traffic splitting, multi-armed bandits (which adaptively route more traffic to better performers), and interleaving experiments. A/B testing is the gold standard for validating model improvements before full deployment but requires sufficient traffic and time for statistical power.
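For the significance-testing step, a common choice when the metric is a conversion rate is a two-proportion z-test. A self-contained sketch, using only the standard library (the function name and inputs are illustrative):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in conversion rates.

    conv_a / conv_b are conversion counts for each variant;
    n_a / n_b are the number of users exposed to each variant.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1,000 users on model A versus 150 out of 1,000 on model B yields z ≈ 3.38 and p < 0.001, so the improvement would be declared statistically significant at conventional thresholds.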

Related Concepts

model monitoring, model evaluation, canary deployment

Last updated: March 5, 2026