Zero-shot-CoT: Large Language Models are Zero-Shot Reasoners
NeurIPS 2022 · Google Research, Brain Team · arXiv:2205.11916
TL;DR
Zero-shot-CoT, a single zero-shot, task-agnostic prompt ("Let's think step by step.") that requires no step-by-step few-shot exemplars, elicits multi-step reasoning from large language models.
Motivations & Innovations
- The success of large language models is often attributed to (in-context) few-shot learning, popularly known as "prompting".
- With chain-of-thought (CoT) prompting, reasoning performance no longer stays flat across scales: it jumps up sharply with the size of the language model.
- Few-shot-CoT still requires hand-crafted, task-specific step-by-step exemplars; this paper shows that a single untuned trigger sentence is enough.

Approach: Two-stage Prompting

1st Prompt (Reasoning Extraction): append the trigger sentence "Let's think step by step." to the question, i.e., "Q: [question]. A: Let's think step by step.", and let the model generate a reasoning path.
2nd Prompt (Answer Extraction): feed the first prompt plus the generated reasoning back to the model, followed by an answer trigger such as "Therefore, the answer (arabic numerals) is", to extract the final answer in the desired format (see the sketch below).
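
The two prompts compose into a simple pipeline. Below is a minimal sketch, assuming a hypothetical `generate(prompt)` completion function (any LLM text-completion client will do); the prompt templates and the numeric answer-cleansing step follow the paper, but the function names are illustrative rather than the authors' code.

```python
import re
from typing import Callable


def zero_shot_cot(question: str, generate: Callable[[str], str]) -> str:
    """Two-stage Zero-shot-CoT prompting.

    `generate` is any text-completion call (hypothetical; bring your own
    LLM client) that maps a prompt string to the model's continuation.
    """
    # 1st prompt: reasoning extraction.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(reasoning_prompt)

    # 2nd prompt: answer extraction. Reuse the first prompt plus the
    # generated reasoning, then append the answer trigger.
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    answer_text = generate(answer_prompt)

    # Answer cleansing: keep the first number in the completion (the paper
    # applies a similar task-dependent cleanup to the raw completion).
    match = re.search(r"-?\d+(?:\.\d+)?", answer_text)
    return match.group(0) if match else answer_text.strip()
```

For other answer formats the trigger changes accordingly, e.g., "Therefore, among A through E, the answer is" for multiple-choice tasks.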
Experiments
Zero-shot-CoT vs Zero-shot: the single trigger sentence lifts accuracy dramatically, e.g., from 17.7% to 78.7% on MultiArith and from 10.4% to 40.7% on GSM8K with InstructGPT (text-davinci-002).

Comparison with other baselines: Zero-shot-CoT clearly outperforms standard Zero-shot and Few-shot prompting on reasoning benchmarks, though Few-shot-CoT with carefully crafted, task-specific exemplars remains the strongest.

Does model size matter for zero-shot reasoning?: Yes. Accuracy stays flat for small models and jumps up sharply once the model is large enough, mirroring the scaling behavior of Few-shot-CoT.

How does prompt selection affect Few-shot-CoT?: performance degrades when the exemplars come from a mismatched task type (e.g., commonsense exemplars on an arithmetic task), showing that Few-shot-CoT depends on careful, task-specific prompt engineering.
