It was subtle, but Microsoft's CEO Satya Nadella just said he doesn't believe in artificial general intelligence (AGI).

The interviewer, Dwarkesh Patel, asked this question:

We will eventually have models, if they get to human level, which will have this ability to continuously learn on the job. That will drive so much value to the model company that is ahead, at least in my view, because you have copies of one model broadly deployed through the economy learning how to do every single job. And unlike humans, they can amalgamate their learnings to that model. So there’s this sort of continuous learning exponential feedback loop, which almost looks like a sort of intelligence explosion.

If that happens and Microsoft isn’t the leading model company by that time… You’re saying that well, we substitute one model for another, et cetera. Doesn’t that then matter less? Because it’s like this one model knows how to do every single job in the economy, the others in the long tail don’t.

This was Satya Nadella's response:

Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match, and you shut shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed. It’s like databases. It’s always the thing, “Can one database be the one that is just used everywhere?” Except it’s not. There are multiple types of databases that are getting deployed for different use cases.

I think that there are going to be some network effects of continual learning—I call it data liquidity—that any one model has. Is it going to happen in all domains? I don’t think so. Is it going to happen in all geos? I don’t think so. Is it going to happen in all segments? I don’t think so. It’ll happen in all categories at the same time? I don’t think so. So therefore I feel like the design space is so large that there’s plenty of opportunity.

Nadella then steers the conversation toward practical business and technology concerns:

But your fundamental point is having a capability which is at the infrastructure layer, model layer, and at the scaffolding layer, and then being able to compose these things not just as a vertical stack, but to be able to compose each thing for what its purpose is. You can’t build an infrastructure that’s optimized for one model. If you do that, what if you fall behind? In fact, all the infrastructure you built will be a waste. You kind of need to build an infrastructure that’s capable of supporting multiple families and lineages of models. Otherwise the capital you put in, which is optimized for one model architecture, means you’re one tweak away, some MoE-like breakthrough that happens, and your entire network topology goes out of the window. That’s a scary thing.

Therefore you kind of want the infrastructure to support whatever may come in your own model family and other model families.

I looked back to see what Nadella said the last time he was on Dwarkesh Patel's podcast and was asked about AGI, earlier this year in February 2025. He said something similar then:

This is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?

Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

That's why I make this distinction, at least in my head: Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email.

But after having triaged my email, give me a higher-level cognitive labor task of, "Hey, these are the three drafts I really want you to review." That's a different abstraction.

When asked if Microsoft could ever have an AI serve on its board, Nadella says no, but maybe an AI could be a helpful assistant in Microsoft's board meetings:

It's a great example. One of the things we added was a facilitator agent in Teams. The goal there, it's in the early stages, is can that facilitator agent use long-term memory, not just on the context of the meeting, but with the context of projects I'm working on, and the team, and what have you, be a great facilitator?

I would love it even in a board meeting, where it's easy to get distracted. After all, board members come once a quarter, and they're trying to digest what is happening with a complex company like Microsoft. A facilitator agent that actually helped human beings all stay on topic and focus on the issues that matter, that's fantastic.

That's kind of literally having, to your point about even going back to your previous question, having something that has infinite memory that can even help us. You know, after all, what is that Herbert Simon thing? We are all bounded rationality. So if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.

Nadella is a charming interviewee, and he has a knack for framing his answers in a friendly, supportive, and positive way. But if you listen to (or read) what he actually says, he doesn't buy the idea that AGI is coming anytime soon.

Nadella also expressed skepticism about OpenAI's projection that it will make $100 billion in revenue in 2027, though in his typically polite and cheerful way. The question from Dwarkesh Patel was:

Do you buy… These labs are now projecting revenues of $100 billion in 2027–28 and they’re projecting revenue to keep growing at this rate of 3x, 2x a year…

Nadella said:

In the marketplace there’s all kinds of incentives right now, and rightfully so. What do you expect an independent lab that is sort of trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute and what have you.

And it’s a good thing. Someone’s going to take some risk and put it in there, and they’ve shown traction. It’s not like it’s all risk without seeing the fact that they’ve been performing, whether it’s OpenAI, or whether it’s Anthropic. So I feel great about what they’ve done, and we have a massive book of business with these chaps. So therefore that’s all good.
