AI Glossary

Confusion Matrix

A table that visualizes a classification model's predictions versus actual labels, showing true positives, true negatives, false positives, and false negatives.

Reading the Matrix

Rows represent actual classes, columns represent predicted classes. The diagonal shows correct predictions. Off-diagonal cells show errors. For binary classification: TP (correctly predicted positive), FP (incorrectly predicted positive), FN (missed positive), TN (correctly predicted negative).
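The layout above can be sketched directly for the binary case. This is a minimal illustration with made-up label vectors (the data is hypothetical, not from the text):

```python
# Tally the four confusion-matrix cells for binary labels (1 = positive, 0 = negative).
# y_true and y_pred are illustrative example vectors.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # correct positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct negatives

# Rows = actual class (negative, positive), columns = predicted class.
matrix = [[tn, fp],
          [fn, tp]]
print(matrix)  # [[3, 1], [1, 3]]
```

The diagonal (tn and tp) holds the correct predictions; the off-diagonal cells (fp and fn) hold the two error types.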

Derived Metrics

Precision: TP / (TP + FP) -- of all positive predictions, how many were correct?
Recall: TP / (TP + FN) -- of all actual positives, how many were caught?
F1 Score: 2 * (Precision * Recall) / (Precision + Recall) -- the harmonic mean of precision and recall.
Accuracy: (TP + TN) / (TP + TN + FP + FN) -- the fraction of all predictions that were correct.
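The formulas above, applied to a set of illustrative counts (the numbers are hypothetical):

```python
# Illustrative cell counts from a hypothetical binary classifier.
tp, fp, fn, tn = 3, 1, 1, 3

precision = tp / (tp + fp)                          # 3/4 = 0.75
recall = tp / (tp + fn)                             # 3/4 = 0.75
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.75
accuracy = (tp + tn) / (tp + tn + fp + fn)          # 6/8 = 0.75

print(precision, recall, f1, accuracy)
```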

When Accuracy Is Misleading

In imbalanced datasets (e.g., 99% negative), a model predicting 'always negative' gets 99% accuracy but is useless. The confusion matrix reveals this by showing zero true positives. Precision and recall are more informative in such cases.
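A quick sketch of the scenario described above, using a synthetic 99%-negative dataset and a degenerate model:

```python
# Synthetic imbalanced dataset: 990 negatives, 10 positives.
y_true = [0] * 990 + [1] * 10
y_pred = [0] * 1000  # a degenerate model that always predicts negative

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)  # 0.99 -- looks impressive
recall = tp / (tp + fn)             # 0.0  -- the model catches no positives
print(accuracy, recall, tp)
```

The matrix exposes the failure immediately: the true-positive cell is zero, even though accuracy is 99%. (Precision is undefined here, since the model makes no positive predictions at all.)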

Last updated: March 5, 2026