AI Glossary

Dimensionality Reduction

Techniques that reduce the number of features in a dataset while preserving important information, enabling visualization and combating the curse of dimensionality.

Linear Methods

PCA (Principal Component Analysis): Projects data onto the directions of maximum variance; unsupervised. LDA (Linear Discriminant Analysis): Finds projections that maximize separation between known classes; supervised. Both are fast and well understood, but they can capture only linear structure in the data.
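As a concrete illustration, PCA can be sketched in a few lines of numpy as an eigendecomposition of the covariance matrix. This is a minimal sketch, not a production implementation; the function name `pca` and the toy dataset are illustrative.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its n_components directions of maximum variance."""
    # Center the data: principal directions are defined on mean-centered data.
    Xc = X - X.mean(axis=0)
    # Eigendecompose the covariance matrix (symmetric, so eigh applies).
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    # eigh returns eigenvalues in ascending order; keep the largest ones.
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(0)
# 200 points in 5-D, deliberately stretched along the first axis.
X = rng.normal(size=(200, 5)) * np.array([5.0, 1.0, 1.0, 1.0, 1.0])
Z, variances = pca(X, 2)  # Z has shape (200, 2)
```

The first returned variance dominates here because the data was stretched along one axis, which is exactly the structure PCA is designed to find.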

Non-linear Methods

t-SNE: Excellent for 2D/3D visualization of local cluster structure, though distances between clusters in the output are not reliable. UMAP: Typically faster than t-SNE and better preserves global structure. Autoencoders: Neural networks trained to reconstruct their input through a low-dimensional latent space, learning a compressed representation.
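A typical t-SNE workflow looks like the sketch below, assuming scikit-learn is installed; the two-blob dataset is illustrative. Note that the output coordinates are only meaningful for visualization, not as general-purpose features.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated 10-D Gaussian blobs of 30 points each.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(30, 10)),
    rng.normal(8.0, 1.0, size=(30, 10)),
])

# Embed into 2-D; perplexity must be smaller than the number of samples.
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)
```

The embedding should show the two blobs as distinct clusters in 2-D, which is the kind of local structure t-SNE preserves well.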

Applications

Visualizing high-dimensional embeddings. Preprocessing for ML models to reduce noise and computation. Compression of image and text data. Exploring the structure of and relationships within a dataset.
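The preprocessing use case can be sketched with a scikit-learn pipeline that reduces 64 pixel features to 16 principal components before classification. This is an illustrative sketch assuming scikit-learn; the component count and classifier are arbitrary choices, not recommendations.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# 8x8 digit images flattened to 64 features per sample.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce 64 features to 16 principal components, then classify.
model = Pipeline([
    ("pca", PCA(n_components=16)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

Keeping only the top components discards low-variance directions, which often carry more noise than signal, while cutting the classifier's input size by 4x.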


Last updated: March 5, 2026