Padding (in Neural Networks)
Adding extra values (typically zeros) around the edges of input data to control output dimensions in convolutions, or to the end of sequences to create uniform-length batches.
In CNNs
Zero-padding adds a border of zeros around the input so the output's spatial dimensions can be controlled. 'Same' padding adds just enough zeros that, at stride 1, the output size equals the input size. 'Valid' padding adds none, so each output dimension shrinks by kernel_size - 1.
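A minimal sketch of the two modes, using a hypothetical single-channel, stride-1 cross-correlation helper built on NumPy (odd kernel sizes assumed for 'same'):

```python
import numpy as np

def conv2d(image, kernel, padding="valid"):
    # Hypothetical helper: single-channel 2-D cross-correlation, stride 1.
    kh, kw = kernel.shape
    if padding == "same":
        # Pad with zeros so the output matches the input size (odd kernels).
        ph, pw = kh // 2, kw // 2
        image = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.ones((5, 5))
k = np.ones((3, 3))
print(conv2d(img, k, "valid").shape)  # (3, 3): output shrinks by kernel_size - 1
print(conv2d(img, k, "same").shape)   # (5, 5): input size preserved
```

Deep learning frameworks expose the same choice, e.g. a `padding` argument on their convolution layers.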
In NLP
Sequences of different lengths are padded to the longest length in a batch so they can be stacked into a single rectangular tensor. An attention mask records which positions hold real tokens versus padding, so the model can ignore padded positions (e.g., by masking out their attention scores) and the padding doesn't affect computation.
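A minimal pure-Python sketch of batching token-id sequences, assuming a pad id of 0 and illustrative token ids; the mask uses 1 for real tokens and 0 for padding, the convention most tokenizer libraries follow:

```python
def pad_batch(sequences, pad_id=0):
    # Pad each token-id sequence to the batch's max length and build
    # an attention mask: 1 = real token, 0 = padding.
    max_len = max(len(s) for s in sequences)
    padded = [s + [pad_id] * (max_len - len(s)) for s in sequences]
    mask = [[1] * len(s) + [0] * (max_len - len(s)) for s in sequences]
    return padded, mask

batch = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]  # illustrative ids
ids, mask = pad_batch(batch)
print(ids)   # [[101, 7592, 102, 0, 0], [101, 7592, 2088, 999, 102]]
print(mask)  # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```

Both lists now have uniform length and can be converted directly into a tensor, with the mask passed alongside it to the model.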