Zero-Shot Reasoning
The ability of LLMs to solve reasoning tasks without any task-specific examples, often triggered by prompts like 'let's think step by step'.
Overview
Zero-shot reasoning refers to a language model's ability to perform complex reasoning tasks without being shown any examples. Research showed that simply adding 'Let's think step by step' to prompts dramatically improves reasoning performance on math, logic, and commonsense tasks.
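The technique can be sketched as plain prompt construction. Kojima et al. (2022) use a two-stage pipeline: first elicit a reasoning chain with the trigger phrase, then feed that chain back to extract a final answer. The sketch below assumes a caller-supplied `generate(prompt) -> str` completion function; `generate`, `zero_shot_cot`, and the exact trigger strings are illustrative, not a specific library's API.

```python
# Sketch of zero-shot chain-of-thought prompting (two-stage, per Kojima et al., 2022).
# `generate` is a stand-in for any LLM completion call -- an assumption here,
# not a real API.

REASONING_TRIGGER = "Let's think step by step."
ANSWER_TRIGGER = "Therefore, the answer is"

def build_reasoning_prompt(question: str) -> str:
    """Stage 1: append the trigger phrase to elicit a reasoning chain."""
    return f"Q: {question}\nA: {REASONING_TRIGGER}"

def build_answer_prompt(question: str, reasoning: str) -> str:
    """Stage 2: feed the reasoning back and prompt for the final answer."""
    return f"{build_reasoning_prompt(question)} {reasoning}\n{ANSWER_TRIGGER}"

def zero_shot_cot(question: str, generate) -> str:
    """Run both stages with a caller-supplied generate(prompt) -> str."""
    reasoning = generate(build_reasoning_prompt(question))
    return generate(build_answer_prompt(question, reasoning))
```

No task-specific examples appear anywhere in either prompt; the trigger phrase alone is what shifts the model toward step-by-step reasoning.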
Significance
This capability, demonstrated by Kojima et al. (2022), suggests that large language models have latent reasoning abilities that can be activated through appropriate prompting. It is a zero-shot variant of chain-of-thought prompting and is often cited as evidence of emergent abilities in large models.