Google DeepMind's CEO Demis Hassabis has listed a number of research breakthroughs or new capabilities he believes are needed for artificial general intelligence (AGI). Listening to a recent episode of the Google DeepMind podcast, I caught at least two that he mentioned:

  • Continual learning, a longstanding problem in AI: the ability to learn in every new moment, as humans do, rather than only every few months when a new training run happens (Hassabis sometimes seems to use the term "long-term memory" interchangeably with continual learning, but in the recent podcast he explicitly says continual learning)
  • World models, which, as I understand the term, is somewhere between a philosophical concept and a technical term, referring to some general ability to understand, predict, and model how the real world works, such as the way humans and other mammals have an intuitive understanding of physics

In a January interview, he also mentioned the following missing capabilities:

  • Reasoning, which he also discusses on the podcast, noting the contrast between chatbots' impressive performance on some advanced math problems and their frequent elementary mistakes[1]
  • Hierarchical planning, the ability to plan actions that are composed of sub-actions, which are composed of sub-sub-actions, and so on, in a nested hierarchy
  • The ability to creatively generate novel hypotheses or conjectures

Here's the quote from that interview:

The models today are pretty capable, but there are still some missing attributes: things like reasoning, hierarchical planning, long-term memory. There's quite a few capabilities that the current systems don't have. They're also not consistent across the board. They're very strong in some things, but they're still surprisingly weak and flawed in other areas. You'd want an AGI to have pretty consistent, robust behavior across the board for all cognitive tasks.

One thing that's clearly missing, and I always had as a benchmark for AGI, was the ability for these systems to invent their own hypotheses or conjectures about science, not just prove existing ones. They can play a game of Go at a world champion level. But could a system invent Go? Could it come up with relativity back in the days that Einstein did with the information that he had? I think today's systems are still pretty far away from having that kind of creative, inventive capability.

Remarkably, immediately following this, Hassabis says he thinks AGI is "probably three to five years away."

I'm skeptical of such predictions for (at least) two reasons:

1. Many such predictions have come and gone. At the very beginning of the field of artificial intelligence in the 1950s, a similarly ambitious goal was set for the first summer of AI research. Anthropic's CEO Dario Amodei predicted in March that AI would take over 90% of coding by September and, well, here we are. (It's not even true at Anthropic!)

2. Hassabis' discussion of the remaining research breakthroughs or missing capabilities seems incongruent with a prediction that those breakthroughs will be made or those capabilities will be developed in such a short time. Hassabis gives the impression he's not even sure he yet knows the full list of what's still missing. And he really thinks the longstanding problems in AI research that he did list will be solved in such a short time? I don't understand what could possibly justify such confidence.

I think it could be an interesting exercise, rather than (or in addition to) forecasting AGI as a single idea, to decompose AGI into research problems or capabilities like continual learning, world models, reasoning, hierarchical planning, and creative idea generation, and then ask people to forecast those things individually. My guess is that for many or most people, reframing the forecasting question this way would lead to longer timelines for AGI.

I think before venturing a guess, people who want to try forecasting when these research problems will be solved should look into how long AI researchers have been working on them and how much research has already been published. Many of them, in fact I think all of them, are decades old. For instance, I searched Google Scholar and found several papers on hierarchical reinforcement learning from the early 1990s. (Searching for the exact phrase filters out unrelated stuff, but also misses some relevant stuff.) There is more funding now, but it seems like the vast majority of it is being spent on scaling large language models (LLMs) and AI models that generate images and videos, and very little on discovering the new science that Hassabis says is necessary to get to AGI.

Hassabis' comments on the research that remains to be done dovetail with recent comments by another prominent AI researcher, Ilya Sutskever.

  1. ^

    A few days ago, I told GPT-5.2 Thinking I had misplaced my AirPods somewhere in my home and asked it if I could use Apple's Find Devices feature to make them play a noise while they were closed in the case. It said no, but offered this helpful advice:

    If they really are in the closed case, force a situation where sound can work. Open the case (lid open) and take at least one AirPod out (even briefly), then retry Play Sound.

    Other peculiarities I've noticed recently include a pattern of answering "yes" to questions that are not yes or no questions.
