AI Glossary

Compute Budget

The total computational resources (measured in FLOPs, GPU-hours, or dollars) allocated for training or running an AI model.
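A rough way to connect these units is the common ~6·N·D approximation for dense-transformer training FLOPs (N = parameters, D = training tokens). The sketch below uses that rule of thumb plus assumed GPU throughput and utilization figures; all numbers are illustrative, not measured.

```python
# Rough training-cost estimate using the common ~6 * N * D FLOPs
# approximation (N = parameters, D = training tokens).

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens

def gpu_hours(total_flops: float,
              flops_per_gpu: float = 312e12,   # assumed peak throughput
              utilization: float = 0.4) -> float:  # assumed utilization
    """Convert total FLOPs to GPU-hours at the assumed throughput."""
    return total_flops / (flops_per_gpu * utilization) / 3600

# Illustrative budget: a 7B-parameter model trained on 2T tokens.
flops = training_flops(7e9, 2e12)
print(f"{flops:.2e} FLOPs, ~{gpu_hours(flops):,.0f} GPU-hours")
```

Converting GPU-hours to dollars is then a matter of multiplying by an hourly rate, which is how FLOPs, GPU-hours, and dollars end up as interchangeable units for the same budget.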

Training Compute

Training a frontier model now reportedly costs upwards of $100M; GPT-4's training run, for example, has been reported at over $100M. Compute-optimal training (the Chinchilla scaling result) showed that most earlier large models were undertrained relative to their parameter count.
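The Chinchilla result is often summarized as roughly 20 training tokens per parameter. Combined with the ~6·N·D FLOPs approximation, that ratio determines a compute-optimal model and dataset size for a given budget; the sketch below uses both approximations, which are rules of thumb from the scaling-law literature, not exact prescriptions.

```python
import math

# Chinchilla-style compute-optimal split: assume ~20 training tokens
# per parameter and total FLOPs C ~= 6 * N * D.
TOKENS_PER_PARAM = 20.0

def compute_optimal(budget_flops: float) -> tuple[float, float]:
    """Return (params, tokens) spending the budget at ~20 tokens/param."""
    # C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n = math.sqrt(budget_flops / (6.0 * TOKENS_PER_PARAM))
    return n, TOKENS_PER_PARAM * n

n, d = compute_optimal(1e24)  # illustrative 1e24-FLOP budget
print(f"~{n / 1e9:.0f}B params on ~{d / 1e12:.1f}T tokens")
```

A model much larger than this for the same budget would see too few tokens, which is the sense in which pre-Chinchilla models were "undertrained relative to their size."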

Inference Compute

Serving a model incurs a cost on every query. Techniques such as quantization, distillation, and caching reduce per-query cost. For many companies operating at scale, cumulative inference spending far exceeds the one-time cost of training.

Scaling Laws

Model performance improves predictably, roughly following power laws, as compute, data, and parameter count grow. Understanding these scaling laws helps allocate a fixed compute budget effectively across model size, dataset size, and training duration.
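"Predictably" here means the relationship is close to a power law, e.g. loss L(C) ≈ a·C^(−b), so it is a straight line in log-log space and can be extrapolated from small runs. The sketch below fits that form to synthetic data (the points and exponent are made up for illustration):

```python
import math

# Synthetic loss-vs-compute points drawn from L(C) = 4.0 * C^(-0.05).
compute = [1e18, 1e19, 1e20, 1e21]
loss = [4.0 * c ** -0.05 for c in compute]

# A power law is linear in log-log space, so an ordinary least-squares
# line through (log C, log L) recovers the coefficients a and b.
xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
a, b = math.exp(my - slope * mx), -slope

print(f"fitted L(C) = {a:.2f} * C^(-{b:.3f})")
```

In practice, fits like this from cheap small-scale runs are what let teams choose model and dataset sizes before committing the full budget.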

Last updated: March 5, 2026