Chain-of-Thought (CoT)
A prompting technique that encourages language models to show their reasoning step-by-step before arriving at a final answer, significantly improving performance on complex tasks.
How It Works
Instead of asking for a direct answer, you prompt the model to "think step by step" or provide examples that demonstrate reasoning chains. This lets the model break a complex problem into manageable intermediate steps, each of which conditions the next.
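A minimal sketch of the difference, assuming a generic chat-style API (`ask_model` here is a hypothetical stand-in for whatever LLM client you use):

```python
question = (
    "A shop sells pens at $3 each. Tom buys 4 pens and "
    "pays with a $20 bill. How much change does he get?"
)

# Direct prompting: the model must jump straight to the answer.
direct_prompt = f"{question}\nAnswer:"

# Zero-shot chain-of-thought: one added instruction elicits
# intermediate reasoning before the final answer.
cot_prompt = f"{question}\nLet's think step by step."

# answer = ask_model(cot_prompt)  # hypothetical API call
print(cot_prompt)
```

The only change is the trailing instruction; the model then emits its intermediate steps as ordinary tokens before stating the result.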
Why It Helps
Chain-of-thought gives the model more computation per problem (each reasoning token is an additional forward pass), makes implicit reasoning explicit and inspectable, and reduces errors on multi-step tasks like math, logic, and code generation.
Variants
Zero-shot CoT: simply appending "Let's think step by step." to the prompt.
Few-shot CoT: providing worked examples with explicit reasoning chains.
Self-consistency: sampling multiple reasoning paths and taking the majority answer.
Tree of Thoughts: exploring multiple reasoning branches, evaluating them, and backtracking as needed.
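Self-consistency is easy to sketch once you have several sampled completions. The snippet below assumes the final number in each chain is the answer (a common heuristic, not a fixed rule); the sampled chains here are made-up illustrations:

```python
from collections import Counter
import re

def majority_answer(completions):
    """Self-consistency: extract the final numeric answer from each
    sampled reasoning chain and return the most common one."""
    answers = []
    for text in completions:
        nums = re.findall(r"-?\d+(?:\.\d+)?", text)
        if nums:
            answers.append(nums[-1])  # treat the last number as the final answer
    return Counter(answers).most_common(1)[0][0] if answers else None

# Three hypothetical sampled chains for "4 pens at $3 each, paid with $20":
samples = [
    "4 * 3 = 12, so change is 20 - 12 = 8.",
    "Cost is 12 dollars; 20 - 12 = 8.",
    "4 pens cost 12; he gets 9 back.",  # one faulty chain is outvoted
]
print(majority_answer(samples))  # → 8
```

Because each chain is sampled independently, an occasional bad reasoning path gets outvoted by the paths that converge on the correct answer.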