Multi-Task Learning
Training a single model to perform multiple related tasks simultaneously, sharing representations across tasks to improve generalization and efficiency.
Benefits
Shared representations capture common patterns. Related tasks provide implicit regularization. A single model replaces many specialized ones. Knowledge transfers between tasks.
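The classic architecture behind these benefits is hard parameter sharing: one shared encoder feeds several task-specific heads, and the gradients of every task's loss flow back into the shared weights. This is a minimal numpy sketch under toy assumptions (two hypothetical linear regression tasks, a linear shared layer, manually derived gradients); it illustrates the structure, not a production training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two regression tasks defined over the same inputs.
# (Hypothetical targets chosen only for illustration.)
X = rng.normal(size=(64, 4))
t1 = X.sum(axis=1)          # task 1: sum of the inputs
t2 = X[:, 0] - X[:, 1]      # task 2: difference of two inputs

# Shared encoder weights plus one linear head per task.
W = rng.normal(scale=0.1, size=(3, 4))   # shared: 4 inputs -> 3 features
a = rng.normal(scale=0.1, size=3)        # head for task 1
b = rng.normal(scale=0.1, size=3)        # head for task 2

def losses():
    H = X @ W.T
    return np.mean((H @ a - t1) ** 2), np.mean((H @ b - t2) ** 2)

lr, n = 0.01, len(X)
before = sum(losses())
for _ in range(500):
    H = X @ W.T                    # shared representation
    e1 = H @ a - t1                # task-1 residuals
    e2 = H @ b - t2                # task-2 residuals
    # Joint objective = MSE(task 1) + MSE(task 2).
    grad_a = 2 * H.T @ e1 / n
    grad_b = 2 * H.T @ e2 / n
    # Both task losses contribute gradient to the shared encoder:
    grad_H = (2 * np.outer(e1, a) + 2 * np.outer(e2, b)) / n
    grad_W = grad_H.T @ X
    a -= lr * grad_a
    b -= lr * grad_b
    W -= lr * grad_W
after = sum(losses())
print(f"joint loss: {before:.3f} -> {after:.3f}")
```

Because the summed loss is optimized jointly, each task acts as a regularizer on the shared encoder, which is the mechanism the benefits above describe.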
In LLMs
Large language models are inherently multi-task learners: instruction tuning on diverse tasks (translation, summarization, Q&A, coding) yields a general-purpose instruction-following capability that transfers to new tasks via prompting.
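What makes this work is that instruction tuning casts every task into one shared text-to-text format, so a single model trains on a mixed stream of tasks. A small sketch of such a mixture and a mixed-batch sampler, with entirely hypothetical example data and function names:

```python
import random

# Hypothetical instruction-tuning mixture: each task is cast into the
# same prompt/target format, so one model can train on all of them.
MIXTURE = [
    {"task": "translation",   "prompt": "Translate to French: Hello",        "target": "Bonjour"},
    {"task": "summarization", "prompt": "Summarize: The meeting ran long.",  "target": "Long meeting."},
    {"task": "qa",            "prompt": "Q: What is 2 + 2?",                 "target": "4"},
    {"task": "coding",        "prompt": "Write Python to reverse a list xs.", "target": "xs[::-1]"},
]

def sample_batch(mixture, size, seed=0):
    """Sample a training batch that mixes examples across tasks."""
    rng = random.Random(seed)
    return [rng.choice(mixture) for _ in range(size)]

batch = sample_batch(MIXTURE, size=8)
print(sorted({ex["task"] for ex in batch}))
```

Because every example shares the prompt/target schema, the training loop needs no task-specific branches, and new tasks can be added by appending examples to the mixture.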