AI Glossary

Cross-Validation

A resampling technique that trains and evaluates a model on multiple splits of the data for robust performance estimates.

Overview

Cross-validation is a model evaluation technique that divides the dataset into k roughly equal folds, then trains the model k times, each time using a different fold as the test set and the remaining k-1 folds for training. The final performance is the average across all k folds, providing a more robust estimate than a single train-test split.
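The procedure above can be sketched in a few lines of pure Python. This is an illustrative toy, not a production implementation: the "model" is just a mean predictor scored with mean squared error, and the helper names (`k_fold_indices`, `cross_validate`) are made up for this example.

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal, contiguous folds."""
    folds = []
    base, extra = divmod(n, k)  # first `extra` folds get one more sample
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, k):
    """Train on k-1 folds, score on the held-out fold, average the k scores."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i in range(k):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f_i, f in enumerate(folds) if f_i != i for j in f]
        mean = sum(train) / len(train)  # "fit" a mean predictor on the training folds
        mse = sum((x - mean) ** 2 for x in test) / len(test)  # score on the test fold
        scores.append(mse)
    return sum(scores) / len(scores)  # average across all k folds
```

In practice one would shuffle the data before splitting and use a real estimator; the structure (k splits, hold one out, average) stays the same.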

Key Details

Common variants include k-fold (typically k=5 or k=10), stratified k-fold (preserving class distribution in each fold), leave-one-out (k = number of samples), and time-series cross-validation (respecting temporal ordering). Cross-validation helps detect overfitting and provides confidence intervals on model performance. It's standard practice for model selection and hyperparameter tuning.
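Of the variants listed, stratified k-fold is the least obvious to implement. A minimal sketch of the fold-assignment step, assuming a hypothetical helper `stratified_folds`: each class's indices are distributed round-robin across folds, so every fold keeps approximately the overall class distribution.

```python
from collections import defaultdict

def stratified_folds(labels, k):
    """Assign sample indices to k folds, preserving class ratios by
    distributing each class's indices round-robin across the folds."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

# Example: 6 samples of class 0 and 4 of class 1, split into 2 folds.
# Each fold receives 3 zeros and 2 ones, matching the 60/40 overall ratio.
labels = [0] * 6 + [1] * 4
folds = stratified_folds(labels, 2)
```

Library implementations (e.g. scikit-learn's StratifiedKFold) also shuffle within classes and handle edge cases such as classes with fewer samples than folds.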

Related Concepts

validation set, overfitting, model evaluation

Last updated: March 5, 2026