Here is why the AI community doesn't have what it takes to solve AGI.
Honeybees and spiders are intelligent, yet they would fail the Turing Test. So would cats, dogs, and every other animal. The test measures conversational mimicry, not intelligence.
True intelligence has the ability to generalize. A bee doesn't need to be trained on millions of pictures of trees, flowers, cats, dogs, other insects, etc., because it can generalize. Deep learning (DL) cannot: it needs huge numbers of examples to recognize a class of objects, and even then it can be fooled by adversarial examples, small perturbations crafted specifically to flip its output.
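The adversarial-example effect doesn't require a deep network to demonstrate; even a toy linear classifier shows it. The weights, bias, and input below are invented for illustration, and the gradient-sign step is a minimal sketch of the FGSM idea (in high-dimensional image space the same trick works with perturbations too small to see):

```python
import numpy as np

# Hypothetical linear "model": pretend these weights were learned elsewhere.
w = np.array([0.5, -0.3, 0.8, 0.2])
b = 0.1

def predict(x):
    """Binary decision: +1 or -1 depending on which side of the hyperplane x falls."""
    return 1 if w @ x + b > 0 else -1

x = np.array([1.0, 0.5, 0.2, 0.3])
y = predict(x)  # the model's original prediction on x

# FGSM-style step: nudge every input dimension against the gradient of the
# model's score. For a linear model that gradient is just w itself.
eps = 0.5  # perturbation size; visible here, imperceptible in high dimensions
x_adv = x - eps * y * np.sign(w)

print(predict(x), predict(x_adv))  # the two predictions disagree
```

The point of the sketch: the perturbation is chosen from the model's own parameters, not from anything about the object being classified, which is exactly why statistical pattern matchers remain vulnerable.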
Optimization is the opposite of generalization. The brain doesn't optimize an objective function the way DL does; it uses a completely different approach to perception. It can build representations of objects on the fly, even ones it has never seen before. It can generalize over edges, borders, colors, positions, shadows, lighting, etc. It can recognize all bicycles from a single sample. How does it do it? It uses precisely timed spikes, i.e., discrete sensory events. Almost all of the information it needs to build a representation is encoded in the timing of the spikes.
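To make "information encoded in spike timing" concrete, here is a toy latency code, one of the simplest timing schemes discussed in the spiking-network literature: a stronger stimulus fires earlier, so the analog value can be recovered from the spike time alone. The encoding function and its constants are invented for this sketch, not a model of any real neuron:

```python
import numpy as np

T_MAX = 100.0  # arbitrary time window (ms) for this toy code

def encode(intensities):
    """Time-to-first-spike: higher intensity -> earlier spike."""
    return T_MAX / (1.0 + np.asarray(intensities, dtype=float))

def decode(spike_times):
    """Invert the latency code to recover the stimulus intensities."""
    return T_MAX / np.asarray(spike_times, dtype=float) - 1.0

stim = [0.2, 1.0, 5.0]       # three stimulus strengths
times = encode(stim)         # spike times, strictly decreasing with intensity
recovered = decode(times)    # equals stim up to floating-point error
```

Note that a single spike per channel carries the whole value; nothing is averaged over many trials, which is the appeal of timing codes over rate codes.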
Generalization is not the be-all of intelligence, of course, but it is essential.
If your AGI can't do these things, it's dead on arrival. Sorry. That's the problem with current AI, and it is why the AI community's claims about AGI are bogus. They don't have a clue how to solve generalization, and as long as they stick with DL or any other kind of function optimizer, they never will.