It was subtle, but Microsoft's CEO Satya Nadella just said he doesn't believe in artificial general intelligence (AGI).

The interviewer, Dwarkesh Patel, asked this question:

We will eventually have models, if they get to human level, which will have this ability to continuously learn on the job. That will drive so much value to the model company that is ahead, at least in my view, because you have copies of one model broadly deployed through the economy learning how to do every single job. And unlike humans, they can amalgamate their learnings to that model. So there’s this sort of continuous learning exponential feedback loop, which almost looks like a sort of intelligence explosion.

If that happens and Microsoft isn’t the leading model company by that time… You’re saying that well, we substitute one model for another, et cetera. Doesn’t that then matter less? Because it’s like this one model knows how to do every single job in the economy, the others in the long tail don’t.

This was Satya Nadella's response:

Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed. It’s like databases. It’s always the thing, “Can one database be the one that is just used everywhere?” Except it’s not. There are multiple types of databases that are getting deployed for different use cases.

I think that there are going to be some network effects of continual learning—I call it data liquidity—that any one model has. Is it going to happen in all domains? I don’t think so. Is it going to happen in all geos? I don’t think so. Is it going to happen in all segments? I don’t think so. It’ll happen in all categories at the same time? I don’t think so. So therefore I feel like the design space is so large that there’s plenty of opportunity.

Nadella then steers the conversation toward practical business and technology concerns:

But your fundamental point is having a capability which is at the infrastructure layer, model layer, and at the scaffolding layer, and then being able to compose these things not just as a vertical stack, but to be able to compose each thing for what its purpose is. You can’t build an infrastructure that’s optimized for one model. If you do that, what if you fall behind? In fact, all the infrastructure you built will be a waste. You kind of need to build an infrastructure that’s capable of supporting multiple families and lineages of models. Otherwise the capital you put in, which is optimized for one model architecture, means you’re one tweak away, some MoE-like breakthrough that happens, and your entire network topology goes out of the window. That’s a scary thing.

Therefore you kind of want the infrastructure to support whatever may come in your own model family and other model families.

I looked back to see what Nadella said the last time he was on Dwarkesh Patel's podcast and Patel asked him about AGI, earlier this year in February 2025. Nadella said something similar then:

This is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?

Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

That's why I make this distinction, at least in my head: Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email.

But after having triaged my email, give me a higher-level cognitive labor task of, "Hey, these are the three drafts I really want you to review." That's a different abstraction.

When asked if Microsoft could ever have an AI serve on its board, Nadella says no, but maybe an AI could be a helpful assistant in Microsoft's board meetings:

It's a great example. One of the things we added was a facilitator agent in Teams. The goal there, it's in the early stages, is can that facilitator agent use long-term memory, not just on the context of the meeting, but with the context of projects I'm working on, and the team, and what have you, be a great facilitator?

I would love it even in a board meeting, where it's easy to get distracted. After all, board members come once a quarter, and they're trying to digest what is happening with a complex company like Microsoft. A facilitator agent that actually helped human beings all stay on topic and focus on the issues that matter, that's fantastic.

That's kind of literally having, to your point about even going back to your previous question, having something that has infinite memory that can even help us. You know, after all, what is that Herbert Simon thing? We are all bounded rationality. So if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.

Nadella is a charming interviewee, with a knack for framing his answers in a friendly, supportive, and positive way. But if you listen to (or read) what he actually says, he doesn't buy the idea that there will be AGI anytime soon.

In the recent interview, Nadella also expressed skepticism about OpenAI's projection that it will make $100 billion in revenue in 2027, but in his typical polite and cheerful way. The question from Dwarkesh Patel was:

Do you buy… These labs are now projecting revenues of $100 billion in 2027–28 and they’re projecting revenue to keep growing at this rate of 3x, 2x a year…

Nadella replied:

In the marketplace there’s all kinds of incentives right now, and rightfully so. What do you expect an independent lab that is sort of trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute and what have you.

And it’s a good thing. Someone’s going to take some risk and put it in there, and they’ve shown traction. It’s not like it’s all risk without seeing the fact that they’ve been performing, whether it’s OpenAI, or whether it’s Anthropic. So I feel great about what they’ve done, and we have a massive book of business with these chaps. So therefore that’s all good.

Comments



>Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

This comment does seem to point to a possible disagreement with the AGI concept. I interpreted some of the other comments a little differently though. For example:

>Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed.

I would say humans are general intelligences, but obviously different humans are good at different things. If we had the ability to cheaply copy people like software, I don’t think we would pick some really smart guy and deploy just him across the economy. I guess what Dwarkesh is saying is that continual learning will play out differently in AIs because we’ll be able to amalgamate everything AIs learn into a single model, but I don’t think it’s obvious that this will be the most efficient way to do things.
