AI Glossary

Sequence-to-Sequence (Seq2Seq)

A model architecture that transforms one sequence into another, originally using encoder-decoder RNNs and now primarily using transformers.

Architecture

An encoder processes the input sequence into a representation. A decoder generates the output sequence from that representation, one token at a time. The encoder and decoder can be RNNs, transformers, or a mix.
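The encode-then-decode loop above can be sketched with a toy RNN in plain numpy. This is a minimal illustration, not a trained model: the weights are random, the vocabulary size, hidden size, and EOS token id are made-up assumptions, and a real system would learn the parameters and typically add attention.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 10, 8  # assumed toy sizes
EOS = 0                # assumed end-of-sequence / start token id

# Random, untrained parameters -- a real model would learn these.
W_embed = rng.normal(size=(VOCAB, HIDDEN))
W_enc = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_dec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_out = rng.normal(size=(HIDDEN, VOCAB)) * 0.1

def encode(src_tokens):
    """Encoder RNN: fold the whole input sequence into one hidden vector."""
    h = np.zeros(HIDDEN)
    for tok in src_tokens:
        h = np.tanh(W_embed[tok] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Decoder RNN: generate output tokens one at a time (greedy)."""
    out, tok = [], EOS  # seed the decoder with the start symbol
    for _ in range(max_len):
        h = np.tanh(W_embed[tok] + W_dec @ h)
        tok = int(np.argmax(h @ W_out))  # most likely next token
        if tok == EOS:                   # stop when the model emits EOS
            break
        out.append(tok)
    return out

print(decode(encode([3, 1, 4])))
```

Because the decoder feeds its own previous output back in at each step, generation is inherently sequential; swapping the recurrence for transformer layers changes the internals but not this one-token-at-a-time decoding loop.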

Applications

Machine translation (e.g., English to French), text summarization, question answering, code generation, and any other task that maps one sequence to another.

Evolution

The original seq2seq models used encoder-decoder RNNs; attention over encoder states was added shortly after and quickly became standard. Transformer-based seq2seq models (T5, BART, mBART) dramatically improved quality. Modern decoder-only LLMs have largely subsumed seq2seq capabilities through prompting.


Last updated: March 5, 2026