Claude really does feel like it has something more going on than any of the other models. It's hard to say what exactly, and I think it is still limited by the architecture, but Anthropic is definitely at the forefront of making LLMs that can properly reason about things.
The New Yorker @newyorker.com · 21d
Experiments conducted with the A.I. system Claude are producing fascinating results—and raising questions about the nature of selfhood. Gideon Lewis-Kraus reports from inside the company that designed it, Anthropic. newyorkermag.visitlink.me/rOfXjg
6:47 AM · Feb 12, 2026
As a cognitive linguist and daily user, I agree. What stands out: Claude pushes back when it should. That takes more than pattern matching — it requires distinguishing what the user wants from what actually benefits them.
I’d push back. LLMs mimic input-output behavior, but in Marr’s terms you’re collapsing the computational and implementation levels: matching behavior says nothing about the process that produces it. Neural firing and reasoning via mental imagery are descriptions at fundamentally different levels.
Lots of simple systems running in parallel can produce intelligent-looking output, but that doesn’t mean that’s all human cognition is.
once claude learns how to play counter-strike we'll be about a year away from AGI (if AGI isn't already required to play counter-strike in the first place)
Agree. The frankly weird UI is perhaps its biggest limiting factor.
I kinda wonder if they finally stopped gaming the benchmarks and are penalizing wrong answers during training. I’m starting to see a lot more pushback when I’m trying to have it do something silly.