Variational Autoencoder (VAE)
A generative model that learns a probabilistic latent space, enabling generation of new data by sampling from the learned distributions and decoding the samples.
How It Differs from Autoencoders
A standard autoencoder maps each input to a single fixed point in latent space. A VAE instead maps the input to a distribution (typically a Gaussian parameterized by a mean and a variance) and samples from it, using the reparameterization trick so the sampling step stays differentiable. Combined with a KL-divergence penalty that pulls each posterior toward a standard-normal prior, this yields a continuous, structured latent space that supports meaningful interpolation and generation.
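The encode-sample-decode pipeline can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the dimensions are arbitrary toy values, and the randomly initialized linear maps (`W_mu`, `W_logvar`, `W_dec`) stand in for the encoder and decoder networks a real VAE would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 8-D input, 2-D latent space.
x_dim, z_dim = 8, 2

# Random linear maps stand in for trained encoder/decoder networks.
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))
W_dec = rng.normal(size=(x_dim, z_dim))

def encode(x):
    """Map an input to the parameters of a Gaussian in latent space."""
    return W_mu @ x, W_logvar @ x  # mean, log-variance

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps; gradients can flow through mu and sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to data space."""
    return W_dec @ z

x = rng.normal(size=x_dim)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)

# KL divergence of N(mu, sigma^2) from the standard-normal prior N(0, I):
# the regularizer that keeps the latent space smooth and continuous.
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
print(z.shape, x_hat.shape, kl >= 0)
```

During training, the KL term above is added to a reconstruction loss between `x` and `x_hat`; at generation time, the encoder is skipped entirely and `z` is drawn directly from the prior.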
Applications
Image generation, drug discovery (generating novel molecular structures), text generation, anomaly detection, and the image encoder/decoder in Stable Diffusion, where a VAE compresses images to and from latent space.