Chain-of-thought (CoT) prompting is a method for breaking a complex problem into a series of simpler intermediate steps. When used to prompt large language models, it significantly improves performance on tasks that require multi-step reasoning, such as math word problems, commonsense reasoning, and symbolic manipulation. CoT prompting has set new state-of-the-art results on benchmarks such as GSM8K by having models show the series of intermediate steps used to arrive at an answer. The method enhances interpretability, is effective across different model sizes and conditions, and outperforms simpler prompting baselines, such as providing only the final equation.
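In practice, CoT prompting means prepending one or more worked exemplars whose answers spell out the reasoning, so the model imitates that step-by-step style on a new question. A minimal sketch is below; the exemplar text is illustrative, and no particular model API is assumed (the prompt string would be passed to whatever completion endpoint you use):

```python
# A minimal sketch of few-shot chain-of-thought prompting.
# The exemplar is illustrative; in real use, the returned prompt string
# would be sent to an LLM completion API of your choice.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar so the model imitates step-by-step reasoning."""
    return COT_EXEMPLAR + f"Q: {question}\nA:"

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. They used 20 to make lunch and bought "
    "6 more. How many apples do they have?"
)
print(prompt)
```

A standard (non-CoT) prompt would use the same structure but with an exemplar answer that states only "The answer is 11." without the intermediate arithmetic; the difference in the exemplar's answer style is what elicits the step-by-step reasoning.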