LLM / ChatGPT naysayers are failing at 3 types of scientific reasoning
Why are AI researchers, linguists and psychologists totally disagreeing about whether LLMs like ChatGPT can ‘understand’ us or not?
Some of the disagreement comes down to definitions: what it means to ‘understand’ and how one would assess it. That filter catches the overly fussy naysayer.
But, more fundamentally?
LLM naysayers such as Gary Marcus and even Yann LeCun have, in my opinion, lost perspective – even their grip on reality – through a lack of scientific and engineering thinking.
The LLM naysayers are no good at empiricism or theory. Or at engineering. As we shall see.
1. Empiricism
Firstly, and most importantly, ‘our’ (my camp’s) belief that LLMs essentially understand us, and can follow logic, came from empirical evidence that they do in fact understand us.
Because we also brought an engineering mindset (one of iterative improvement; see (3) below), we didn’t get overly distracted by the odd factual hallucination or logical error.
No, instead, we focused on the stunning core evidence: almost no matter what we asked GPT-4 to do, it understood us, as judged by its responses to our questions and instructions.
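To make that kind of empirical test concrete, here is a minimal sketch of the sort of probe I mean, assuming the official OpenAI Python client and an illustrative GPT-4-class model name; the prompts and the rough pass/fail checks are placeholders of my own, not a fixed benchmark.

```python
# Minimal sketch of an empirical "does it understand?" probe.
# Assumes the official OpenAI Python client (pip install openai) and an
# API key in the OPENAI_API_KEY environment variable; the model name,
# prompts and checks below are illustrative, not a fixed benchmark.
from openai import OpenAI

client = OpenAI()

# Each probe pairs an instruction with a crude check on the reply.
probes = [
    ("Summarise the plot of Hamlet in exactly two sentences.",
     lambda reply: reply.count(".") <= 3),
    ("If Alice is taller than Bob, and Bob is taller than Carol, "
     "who is shortest? Answer with one name only.",
     lambda reply: "carol" in reply.lower()),
]

for instruction, looks_understood in probes:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": instruction}],
    )
    reply = response.choices[0].message.content
    print(instruction)
    print("->", reply)
    print("passes rough check:", looks_understood(reply))
```

The point of such probes is not any single pass or fail but the pattern across hundreds of varied instructions: the occasional miss is noise against the overwhelming signal that the model grasped what was being asked.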