When does chain-of-thought prompting make AI worse?
CoT is super hot right now, but you shouldn't always use it.
A wave of ML papers has popularized a technique called chain-of-thought (CoT) prompting, where an AI model is asked to break down its reasoning step by step. While this approach often improves performance, researchers from Princeton University and NYU have found that in certain scenarios it can significantly hurt LLM performance, much like overthinking can sometimes impair human performance.
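To make the technique concrete, here's a minimal sketch of direct prompting versus CoT prompting, using the OpenAI Python client for illustration. The model name, question, and step-by-step instruction are placeholder choices of mine, not anything from the paper.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Direct prompting: just ask the question.
direct_answer = ask(QUESTION)

# Chain-of-thought prompting: the same question, plus an instruction
# to reason step by step before committing to a final answer.
cot_answer = ask(QUESTION + "\n\nLet's think step by step, then give the final answer.")

print("Direct:", direct_answer)
print("CoT:   ", cot_answer)
```

The only difference between the two calls is that extra "think step by step" instruction, which is exactly the knob the research suggests you shouldn't turn blindly.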
Let’s see what the researchers found and how to avoid this in our own work.