AI Glossary

AI Chip

A specialized processor designed specifically for AI workloads, optimized for the matrix operations and parallel computation that neural networks require.

Major AI Chips

NVIDIA H100/B200: Dominant GPUs for AI training.
Google TPU v5: Custom ASIC optimized for TensorFlow/JAX workloads.
AMD MI300X: Competing data-center GPU.
AWS Trainium/Inferentia: Cloud chips for training and inference.
Apple Neural Engine: On-device inference accelerator.

Why Specialized?

AI workloads are dominated by matrix multiplication and convolutions. Dedicated hardware can perform these operations 10-100x more efficiently than general-purpose CPUs by using specialized compute units, high-bandwidth memory, and optimized interconnects.
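To see why matrix multiplication dominates, it helps to count operations. The sketch below (with hypothetical layer shapes, not drawn from any specific model) estimates the floating-point operations in a single dense-layer matmul, the workload these chips are built to accelerate:

```python
# Rough FLOP count for one dense (fully connected) layer, illustrating why
# matrix multiplication dominates AI workloads. Shapes here are hypothetical.

def matmul_flops(m: int, k: int, n: int) -> int:
    # An (m x k) @ (k x n) matmul computes m*n dot products of length k,
    # each needing ~k multiplies and ~k adds: roughly 2*m*k*n FLOPs total.
    return 2 * m * k * n

# Example: a batch of 512 tokens through a 4096 x 4096 projection.
flops = matmul_flops(512, 4096, 4096)
print(f"{flops / 1e9:.1f} GFLOPs per layer")  # → 17.2 GFLOPs per layer
```

Billions of such operations per layer, repeated across dozens of layers and millions of training steps, are what make specialized matmul hardware pay off.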

Last updated: March 5, 2026