AIModels.fyi

Telling ChatGPT to "think step by step" doesn't (actually) help much

Chain-of-thought prompting isn't really that useful

aimodels-fyi
Jun 04, 2024

Chain-of-thought prompting (telling an LLM to “think step by step”) has been hailed as a powerful technique for eliciting complex reasoning from ChatGPT.

The idea is simple: provide step-by-step examples of how to solve a problem, and the model will learn to apply that reasoning to new problems.
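
To make the technique concrete, here is a minimal sketch of a few-shot chain-of-thought prompt sent through the OpenAI Python SDK. The worked tennis-ball example is the classic exemplar from the original chain-of-thought paper; the model name and the new question are just placeholders, not anything prescribed by the study discussed below.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One worked example ("exemplar") that spells out the reasoning steps explicitly.
cot_exemplar = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

# A new question the model should answer in the same step-by-step style.
new_question = (
    "Q: The cafeteria had 23 apples. They used 20 to make lunch and bought "
    "6 more. How many apples do they have?\n"
    "A:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model can be substituted
    messages=[
        {"role": "system", "content": "Answer the question. Think step by step."},
        {"role": "user", "content": cot_exemplar + new_question},
    ],
)

print(response.choices[0].message.content)
```

The zero-shot variant drops the worked example and keeps only the “think step by step” instruction; the question the new study raises is how much either variant actually buys you.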

But a new study says otherwise, finding that chain-of-thought's successes are far more limited and brittle than widely believed.

This study is making waves on AIModels.fyi, and especially on Twitter. Remember, people release thousands of AI papers, models, and tools daily. Only a few will be revolutionary. We scan repos, journals, and social media to bring them to you in bite-sized recaps, and this is one paper that’s broken through.

If you want someone to monitor and summarize these breakthroughs for you, you should become a paid subscriber. And for our pro members, read on to learn why CoT might be a waste of tokens!
