Zero-Shot Prompting
Asking a language model to perform a task without providing any examples, relying entirely on the model's pre-trained knowledge and instruction-following ability.
How It Works
Simply describe the task clearly: 'Translate the following English text to French: [text]' or 'Classify this review as positive or negative: [review]'. No examples are needed; the model uses its training to understand and execute the instruction.
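A minimal sketch of this pattern in Python is below. It assumes the OpenAI Python client and an OPENAI_API_KEY in the environment; any chat-completion API would work the same way, and the model name is a placeholder rather than a recommendation.

```python
# Zero-shot prompting sketch: one instruction plus the input, no examples.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def zero_shot(instruction: str, text: str, model: str = "gpt-4o-mini") -> str:
    """Send a single task description and the input text, with no demonstrations."""
    response = client.chat.completions.create(
        model=model,  # placeholder model name, an assumption
        messages=[
            # One user message: the task description and the input, nothing else.
            {"role": "user", "content": f"{instruction}\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

# Usage: the prompt only describes the task; the model relies on its pre-training.
print(zero_shot("Translate the following English text to French:", "Good morning."))
print(zero_shot("Classify this review as positive or negative:",
                "The food was cold and bland."))
```

The key point is structural: the prompt contains an instruction and an input, and nothing that demonstrates the expected output.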
When It Works Best
Well-known tasks (translation, summarization, classification); clear, unambiguous instructions; and cases where the model has strong pre-training on similar tasks. For novel or complex tasks, few-shot prompting usually performs better; the sketch below contrasts the two prompt shapes.
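To make the contrast concrete, here is a hedged, dependency-free sketch of the same classification task posed zero-shot and few-shot; the example reviews and labels are illustrative only, not drawn from any dataset.

```python
# Zero-shot: the instruction alone must convey the task and output format.
ZERO_SHOT = (
    "Classify this review as positive or negative:\n"
    "Review: The battery died within a week.\n"
    "Label:"
)

# Few-shot: labeled examples precede the query so the model can infer
# the expected labels and format from demonstrations.
FEW_SHOT = (
    "Classify each review as positive or negative.\n\n"
    "Review: Absolutely loved it, would buy again.\n"
    "Label: positive\n\n"
    "Review: Arrived broken and support never replied.\n"
    "Label: negative\n\n"
    "Review: The battery died within a week.\n"
    "Label:"
)

print(ZERO_SHOT)
print("---")
print(FEW_SHOT)
```

For familiar tasks the zero-shot prompt is usually sufficient; the few-shot version earns its extra tokens when the task, label set, or output format is unusual enough that the instruction alone leaves room for interpretation.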