The Illusion of Thinking in LLMs
Apple researchers examine the strengths and limitations of reasoning models.
A key finding: reasoning models' accuracy "collapses" beyond certain task complexities.
Lots of important insights in this one. (Bookmark it!)
Here are my notes: