AI Glossary

Bayesian Inference

A statistical method that updates probability estimates as new evidence becomes available, using Bayes' theorem to combine prior knowledge with observed data.

How It Works

1. Start with a prior probability (initial belief).
2. Observe new data and compute the likelihood.
3. Apply Bayes' theorem: posterior = (likelihood × prior) / evidence.
4. The posterior becomes the new prior when more data arrives.
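The update cycle above can be sketched in a few lines. This is a minimal illustration, not from the glossary itself: it assumes a toy problem of inferring a coin's bias over a small discrete grid of candidate hypotheses.

```python
# Sequential Bayesian updating over a discrete set of hypotheses.
# (Toy example: inferring a coin's bias; the grid of candidate
# biases and the uniform prior are illustrative assumptions.)

def bayes_update(prior, likelihoods):
    # posterior ∝ likelihood × prior; the "evidence" term is the
    # normalizing sum over all hypotheses.
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    evidence = sum(unnorm)
    return [u / evidence for u in unnorm]

# Three hypotheses for P(heads), with a uniform prior.
biases = [0.25, 0.5, 0.75]
posterior = [1 / 3, 1 / 3, 1 / 3]

# Observe two heads in a row; the likelihood of "heads" under each
# hypothesis is just the candidate bias itself. After each update
# the posterior becomes the prior for the next observation.
for _ in range(2):
    posterior = bayes_update(posterior, biases)

print(posterior)  # belief shifts toward the 0.75-bias hypothesis
```

Each call normalizes by the evidence, so the output is always a valid probability distribution, and feeding the posterior back in as the next prior is exactly the loop described in the steps above.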

Applications in AI

Bayesian neural networks (uncertainty estimation), Gaussian processes (regression with uncertainty), Bayesian optimization (hyperparameter tuning), spam filtering (Naive Bayes), and probabilistic programming languages.
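Of the applications listed, spam filtering with Naive Bayes is the simplest to show concretely. The sketch below uses an invented toy corpus (the word counts, vocabulary, and 50/50 class prior are all illustrative assumptions, not real data): each word contributes a likelihood term, and the class with the higher posterior wins.

```python
import math

# Toy per-class word counts (hypothetical, for illustration only).
spam_counts = {"free": 4, "win": 3, "meeting": 1}
ham_counts = {"free": 1, "win": 1, "meeting": 5}
n_spam, n_ham = 8, 7  # total word tokens seen per class
VOCAB = 3             # vocabulary size, for add-one smoothing

def log_posterior(words, counts, total, prior):
    # log P(class) + sum of log P(word | class), with add-one
    # (Laplace) smoothing so unseen words don't zero out the product.
    return math.log(prior) + sum(
        math.log((counts.get(w, 0) + 1) / (total + VOCAB)) for w in words
    )

def classify(words, spam_prior=0.5):
    # "Naive" assumption: words are conditionally independent given
    # the class, so likelihoods multiply (sum in log space).
    spam = log_posterior(words, spam_counts, n_spam, spam_prior)
    ham = log_posterior(words, ham_counts, n_ham, 1 - spam_prior)
    return "spam" if spam > ham else "ham"

print(classify(["free", "win"]))  # → spam (under this toy corpus)
print(classify(["meeting"]))      # → ham
```

Working in log space avoids numerical underflow when many word likelihoods are multiplied, which is the standard trick in practical Naive Bayes implementations.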

Advantages

Naturally quantifies uncertainty (crucial for safety-critical applications). Works well with small datasets. Provides interpretable probability estimates. Mitigates overfitting, since the prior acts as a built-in, principled form of regularization.


Last updated: March 5, 2026