AI Glossary

Contrastive Learning

A self-supervised learning approach that trains models to bring similar examples closer together and push dissimilar examples apart in a learned representation space.

How It Works

Given an anchor example, contrastive learning creates positive pairs (augmented versions of the same data) and negative pairs (different data points). The model learns representations where positives are close and negatives are far apart.
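The anchor/positive/negative setup above can be sketched numerically. Below is a minimal, illustrative single-anchor contrastive loss in plain numpy (the InfoNCE form discussed under Key Methods); the embedding size, temperature, and toy "augmentation" are assumptions for the sketch, not part of any specific method.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss for one anchor: -log softmax of the
    positive's similarity against all candidates (positive + negatives)."""
    def norm(v):
        # Normalize so dot products become cosine similarities.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate([[a @ p], n @ a]) / temperature
    logits -= logits.max()                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                  # positive sits at index 0

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # stand-in for an augmented view
negatives = rng.normal(size=(4, 8))             # unrelated data points

close = info_nce_loss(anchor, positive, negatives)
far = info_nce_loss(anchor, rng.normal(size=8), negatives)
print(close < far)  # loss is lower when the positive really is similar
```

Minimizing this loss pushes the anchor's representation toward its positive and away from the negatives, which is exactly the geometry the paragraph above describes.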

Key Methods

SimCLR: Uses strong data augmentation to create two positive views of each image, treating other images in the batch as negatives.
MoCo: Maintains a momentum-updated key encoder and a queue of negative examples, decoupling the number of negatives from the batch size.
CLIP: Contrasts images against their text descriptions, learning a joint image-text embedding space.
InfoNCE: The most common contrastive loss function, used by the methods above.
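To make the SimCLR-style batch setup concrete, here is a minimal sketch of a batch contrastive loss in the spirit of SimCLR's NT-Xent objective, in plain numpy. The batch size, temperature, and noise-based "augmentations" are illustrative assumptions; real pipelines use image augmentations and a trained encoder.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Batch contrastive loss in the style of SimCLR's NT-Xent:
    row i of z1 is positive with row i of z2; every other embedding
    in the combined batch serves as a negative."""
    z = np.concatenate([z1, z2])                       # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine geometry
    sim = z @ z.T / temperature                        # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # Index of each row's positive partner in the concatenated batch.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logp[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))                  # a batch of 8 toy "images"
view1 = x + 0.1 * rng.normal(size=x.shape)    # two stochastic
view2 = x + 0.1 * rng.normal(size=x.shape)    # "augmentations" of each image
aligned = nt_xent(view1, view2)
mismatched = nt_xent(view1, view2[::-1])      # break the positive pairing
print(aligned < mismatched)
```

The loss is small when matched views sit close together in the embedding space and large when the pairing is scrambled, which is the training signal SimCLR optimizes.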

Impact

Contrastive learning enabled self-supervised models to match or exceed supervised learning on many vision benchmarks. It's the foundation of modern embedding models and multimodal systems like CLIP.

Last updated: March 5, 2026