It was subtle, but Microsoft's CEO Satya Nadella just said he doesn't believe in artificial general intelligence (AGI).

The interviewer, Dwarkesh Patel, asked this question:

We will eventually have models, if they get to human level, which will have this ability to continuously learn on the job. That will drive so much value to the model company that is ahead, at least in my view, because you have copies of one model broadly deployed through the economy learning how to do every single job. And unlike humans, they can amalgamate their learnings to that model. So there’s this sort of continuous learning exponential feedback loop, which almost looks like a sort of intelligence explosion.

If that happens and Microsoft isn’t the leading model company by that time… You’re saying that well, we substitute one model for another, et cetera. Doesn’t that then matter less? Because it’s like this one model knows how to do every single job in the economy, the others in the long tail don’t.

This was Satya Nadella's response:

Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game, set, match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, every day it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed. It’s like databases. It’s always the thing, “Can one database be the one that is just used everywhere?” Except it’s not. There are multiple types of databases that are getting deployed for different use cases.

I think that there are going to be some network effects of continual learning—I call it data liquidity—that any one model has. Is it going to happen in all domains? I don’t think so. Is it going to happen in all geos? I don’t think so. Is it going to happen in all segments? I don’t think so. It’ll happen in all categories at the same time? I don’t think so. So therefore I feel like the design space is so large that there’s plenty of opportunity.

Nadella then steers the conversation toward practical business and technology concerns:

But your fundamental point is having a capability which is at the infrastructure layer, model layer, and at the scaffolding layer, and then being able to compose these things not just as a vertical stack, but to be able to compose each thing for what its purpose is. You can’t build an infrastructure that’s optimized for one model. If you do that, what if you fall behind? In fact, all the infrastructure you built will be a waste. You kind of need to build an infrastructure that’s capable of supporting multiple families and lineages of models. Otherwise the capital you put in, which is optimized for one model architecture, means you’re one tweak away, some MoE-like breakthrough that happens, and your entire network topology goes out of the window. That’s a scary thing.

Therefore you kind of want the infrastructure to support whatever may come in your own model family and other model families.

I looked back to see how Nadella answered the last time Dwarkesh Patel asked him about AGI, during his previous appearance on the podcast in February 2025. Nadella said something similar then:

This is where I have a problem with the definitions of how people talk about it. Cognitive labor is not a static thing. There is cognitive labor today. If I have an inbox that is managing all my agents, is that new cognitive labor?

Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

That's why I make this distinction, at least in my head: Don't conflate knowledge worker with knowledge work. The knowledge work of today could probably be automated. Who said my life's goal is to triage my email? Let an AI agent triage my email.

But after having triaged my email, give me a higher-level cognitive labor task of, "Hey, these are the three drafts I really want you to review." That's a different abstraction.

When asked if Microsoft could ever have an AI serve on its board, Nadella says no, though he suggests an AI could be a helpful assistant in Microsoft's board meetings:

It's a great example. One of the things we added was a facilitator agent in Teams. The goal there, it's in the early stages, is can that facilitator agent use long-term memory, not just on the context of the meeting, but with the context of projects I'm working on, and the team, and what have you, be a great facilitator?

I would love it even in a board meeting, where it's easy to get distracted. After all, board members come once a quarter, and they're trying to digest what is happening with a complex company like Microsoft. A facilitator agent that actually helped human beings all stay on topic and focus on the issues that matter, that's fantastic.

That's kind of literally having, to your point about even going back to your previous question, having something that has infinite memory that can even help us. You know, after all, what is that Herbert Simon thing? We are all bounded rationality. So if the bounded rationality of humans can actually be dealt with because there is a cognitive amplifier outside, that's great.

Nadella is a charming interviewee, and he has a knack for framing his answers in a friendly, supportive, and positive way. But if you listen to (or read) what he actually says, he doesn't buy the idea that there will be AGI anytime soon.

In the recent interview, Nadella also expressed skepticism, in his typically polite and cheerful way, about OpenAI's projection that it will make $100 billion in revenue in 2027. Dwarkesh Patel's question was:

Do you buy… These labs are now projecting revenues of $100 billion in 2027–28 and they’re projecting revenue to keep growing at this rate of 3x, 2x a year…

Nadella replied:

In the marketplace there’s all kinds of incentives right now, and rightfully so. What do you expect an independent lab that is sort of trying to raise money to do? They have to put some numbers out there such that they can actually go raise money so that they can pay their bills for compute and what have you.

And it’s a good thing. Someone’s going to take some risk and put it in there, and they’ve shown traction. It’s not like it’s all risk without seeing the fact that they’ve been performing, whether it’s OpenAI, or whether it’s Anthropic. So I feel great about what they’ve done, and we have a massive book of business with these chaps. So therefore that’s all good.

Comments



>Today's cognitive labor may be automated. What about the new cognitive labor that gets created? Both of those things have to be thought of, which is the shifting…

This comment does seem to point to a possible disagreement with the AGI concept. I interpreted some of the other comments a little differently though. For example, 

>Your point, if there’s one model that is the only model that’s most broadly deployed in the world and it sees all the data and it does continuous learning, that’s game set match and you stop shop. The reality that at least I see is that in the world today, for all the dominance of any one model, that is not the case. Take coding, there are multiple models. In fact, everyday it’s less the case. There is not one model that is getting deployed broadly. There are multiple models that are getting deployed.

I would say humans are general intelligences, but obviously different humans are good at different things. If we had the ability to cheaply copy people like software, I don’t think we would pick some really smart guy and deploy just him across the economy. I guess what Dwarkesh is saying is that continual learning will play out differently for AIs, because we’ll be able to amalgamate everything the AIs learn into a single model. But I don’t think it’s obvious that this will turn out to be the most efficient way to do things.
