LLMs and Memory is Definitely All You Need: Google Shows that Memory-Augmented LLMs Can Simulate Any Turing Machine

A major breakthrough in LLM research.

Jesus Rodriguez
Published in Towards AI


I recently started an AI-focused educational newsletter that already has over 160,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

Large language models (LLMs) continue to push the limits of computational models one breakthrough at a time. How far could this go? Well, a recent research paper from AI researchers at Google Brain and the University of Alberta shows that it can go VERY FAR. Could we possibly simulate any algorithm using an LLM and memory? Can the combination of an LLM and memory be Turing complete?

In computing, the Turing machine embodies the idea of a universal computer: a machine capable of emulating the execution of any other computing device. Recent work has explored the relationship between LLMs and Turing machines, asking whether LLMs are inherently limited in the computations they can express within a single, bounded input. To address this, researchers have equipped LLMs with an external feedback loop, in which the model's outputs are processed and then fed back as subsequent inputs. This raises a pivotal question: does augmenting an LLM with an external feedback loop merely offer practical utility, or does it fundamentally expand the class of computations the model can perform?
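To make the feedback-loop idea concrete, here is a minimal sketch of such an outer loop. It is not code from the paper; the `query_llm` function and the `HALT` stopping convention are assumptions for illustration:

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM; assumed here for illustration."""
    raise NotImplementedError

def feedback_loop(initial_input: str, max_steps: int = 1000) -> str:
    """Repeatedly post-process the model's output and feed it back as input."""
    state = initial_input
    for _ in range(max_steps):
        output = query_llm(state)
        if output.strip() == "HALT":  # assumed halting convention
            return state
        state = output  # the processed output becomes the next prompt
    return state
```

The key point is that the model's weights never change: all the extra computational power, if any, must come from the loop around it.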

In their paper, the researchers from Google Brain and the University of Alberta delve into this question. They demonstrate that computational universality can be achieved by augmenting an LLM with an associative read-write memory. The study centers around the utilization of the…
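The paper's full construction sits behind the excerpt above, but the general shape of an associative read-write memory is easy to sketch: an unbounded key-value store that the outer loop reads and updates on the model's behalf. Everything below, including the command syntax and the `process_output` helper, is a hypothetical illustration rather than the authors' actual prompt protocol:

```python
import re

# Associative read-write memory: an unbounded key-value store that the
# outer loop consults and updates between LLM calls.
memory: dict[str, str] = {}

def process_output(output: str) -> str:
    """Apply write ('x = value') and read ('read x') commands found in the
    model's output, and assemble the next prompt. This command syntax is an
    assumption for illustration, not the paper's protocol."""
    next_prompt_parts = []
    for line in output.splitlines():
        write = re.match(r"^(\w+)\s*=\s*(.+)$", line)
        read = re.match(r"^read\s+(\w+)$", line)
        if write:
            memory[write.group(1)] = write.group(2)  # write value to memory
        elif read:
            key = read.group(1)
            next_prompt_parts.append(f"{key} = {memory.get(key, '')}")  # read back
        else:
            next_prompt_parts.append(line)  # pass other text through unchanged
    return "\n".join(next_prompt_parts)
```

With unbounded memory of this kind, the loop can in principle store arbitrary intermediate state, which is what lets a fixed LLM simulate an arbitrarily long computation rather than being capped by its context window.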

