
Three Epoch employees – Matthew Barnett, Tamay Besiroglu, and Ege Erdil – have left to launch Mechanize, an AI startup aiming for broad automation of ordinary labour:

Today we’re announcing Mechanize, a startup focused on developing virtual work environments, benchmarks, and training data that will enable the full automation of the economy.

We will achieve this by creating simulated environments and evaluations that capture the full scope of what people do at their jobs. ...

Currently, AI models have serious shortcomings that render most of this enormous value out of reach. They are unreliable, lack robust long-context capabilities, struggle with agency and multimodality, and can’t execute long-term plans without going off the rails.

To overcome these limitations, Mechanize will produce the data and evals necessary for comprehensively automating work. Our digital environments will act as practical simulations of real-world work scenarios, enabling agents to learn useful abilities through RL. ...

The explosive economic growth likely to result from completely automating labor could generate vast abundance, much higher standards of living, and new goods and services that we can’t even imagine today. Our vision is to realize this potential as soon as possible.

Tweet from Matthew Barnett:

I started a new company with @egeerdil2 and @tamaybes that's focused on automating the whole economy. We're taking a big bet on our view that the main value of AI will come from broad automation rather than from "geniuses in a data center".

The Mechanize website is scant on detail. It seems broadly bad that the alumni from a safety-focused AI org have left to form a company which accelerates AI timelines (and presumably is based on/uses evals built at Epoch).

It seems noteworthy that Epoch AI retweeted the announcement, wishing the departing founders best of luck – which feels like a tacit endorsement of the move.

Habryka wonders whether payment would have had to be given to Epoch for use of their benchmarks suite.


I'm not sure I feel as concerned about this as others. tl;dr - They have different beliefs from Safety-concerned EAs, and their actions are a reflection of those beliefs.

It seems broadly bad that the alumni from a safety-focused AI org

Was Epoch ever a 'safety-focused' org? I thought they were trying to understand what's happening with AI, not taking a position on Safety per se.

 ...have left to form a company which accelerates AI timelines

I think Matthew and Tamay think this is positive, since they think AI is positive. As they say, they think explosive growth can be translated into abundance. They don't think that the case for AI risk is strong, or significant, especially given the opportunity cost they see from leaving abundance on the table.

Also important to note is what Epoch boss Jaime says in this very comment thread.

As I learned more and the situation unfolded I have become more skeptical of AI Risk.

The same thing seems to be happening with me, for what it's worth.

People seem to think that there is an 'EA Orthodoxy' on this stuff, but there either isn't as much as people think, or people who disagree with it are no longer EAs. I really don't think it makes sense to clamp down on 'doing anything to progress AI' as being a hill for EA to die on.

I think there are two competing failure modes:

(1) The epistemic community around EA, rationality, and AI safety, should stay open to criticism of key empirical assumptions (like the level of risks from AI, risks of misalignments, etc.) in a healthy way.

(2) We should still condemn people who adopt contrarian takes with unreasonable-seeming levels of confidence and then take actions based on them that we think are likely doing damage.

In addition, there's possibly also a question of "how much do people who benefit from AI safety funding and AI safety association have an obligation to not take unilateral actions that most of the informed people in the community consider negative." (FWIW I don't think the obligation here would be absolute even if Epoch had been branded as centrally 'AI safety,' and I acknowledge that the branding issue seems contested; also, it wasn't Jamie [edit: Jaime] the founder who left in this way, and of the people who went off to found this new org, Matthew Barnett, for instance, has been really open about his contrarian takes, so insofar as Epoch's funders had concerns about the alignment of employees at Epoch, it was also -- to some degree, at least -- on them to ask for more information or demand some kind of security guarantee if they felt worried. And maybe this did happen -- I'm just flagging that I don't feel like we onlookers necessarily have the info, and so it's not clear whether anyone has violated norms of social cooperation here or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)

or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)

Caveats up front: I note the complexity of figuring out what Epoch's own views are, as opposed to Jaime's [corrected spelling] view or the views of the departing employees. I also do not know what representations were made. Therefore, I am not asserting that Epoch did something or needs to do something, merely that the concern described below should be evaluated.

People and organizations change their opinions all the time. One thing I'm unclear on is whether there was a change in position here that should have created an obligation to offer to return and/or redistribute unused donor funds.

I note that, in February 2023, Epoch was fundraising through September 2025. I don't know its cash flows, but I cite that to show it is plausible they were operating on safety-focused money obtained before a material change to less safety-focused views. In other words, the representations to donors may have been appropriate when the money was raised but outdated by the time it was spent. 

I think it's fair to ask whether a donor would have funded a longish runway if it had known the organization's views would change by the time the monies were spent. If the answer is "no," that raises the possibility that the organization may be ethically obliged to refund or regrant the unspent grant monies.

I can imagine circumstances in which the answers are no and yes: for instance, suppose the organization was a progressive political advocacy organization that decided to go moderate left instead. It generally will not be appropriate for that org to use progressives' money to further its new stance. On the other hand, any shift here was less pronounced, and there's a stronger argument that the donors got the forecasting/information outputs they paid for.

Anyway, for me all this ties into post-FTX discussions about giving organizations a healthy financial runway. People in those discussions did a good job flagging the downsides of short-term grants without confidence in renewal, as well as the high degree of power funders hold in the ecosystem. But AI is moving fast; this isn't something more stable like anti-malarial work. So the chance of organizational drift seems considerably higher here.

How do we deal with the possibility that honest organizational changes will create an inconsistency with the implicit donor-recipient understanding at the time of the grant? I don't claim to have the answer, or to know how to apply it here.

By the way, the name is ‘Jaime’, not ‘Jamie’. The latter doesn't exist in Spanish and the two are pronounced completely differently (they share one phoneme out of five, when aligned phoneme by phoneme).

(I thought I should mention it since the two names often look indistinguishable in written form to people who are not aware that they differ.)

Thank you Pablo for defending the integrity of my name -- literally 😆

How common is it for such repayments to occur, what do you think the standard would be for the level of clarity of the commitment, and to whom would that commitment have to be made? For example, is there a case that 80,000 Hours should refund payments in light of their pivot to focus on AI? I know there are differences (their funder could support the move, etc.), but in the spirit of the thing, where is the line here?

Editing to add: One of my interests in this topic is that EA/rationalists seem to have some standards/views that diverge somewhat from what I would characterize as more "mainstream" approaches to these kinds of things. Re-reading the OP, I noticed a detail I initially missed:

Habryka wonders whether payment would have had to be given to Epoch for use of their benchmarks suite.

To me, this does seem to implicate a more mainstream view of a potential conflict of interest.

I think Matthew and Tamay think this is positive, since they think AI is positive.

I don't see how this alleviates concern. Sure they're acting consistently with their beliefs*, but that doesn't change the fact that what they're doing is bad.

*I assume, I don't really know

Intuitively, it seems we should respond differently depending on which of these three possibilities is true:

  1. They think that what they are doing is negative for the world, but do it anyway, because it is good for themselves personally.
  2. They do not think that what they are doing is negative for the world, but they believe this due to motivated cognition.
  3. They do not think that what they are doing is negative for the world, and this belief was not formed in a way that seems suspect.

From an act consequentialist perspective, these differences do not matter intrinsically, but they are still instrumentally relevant.[1]

  1. ^

    I don't mean to suggest that any one of these possibilities is particularly likely, or that they are all plausible. I haven't followed this incident closely. FWIW, my vague sense is that the Mechanize founders had all expressed skepticism about the standard AI safety arguments for a while, in a way that seems hard to reconcile with (1) or (2).

It suggests the concern is an object-level one, not a meta one. The underlying "vibe" I am getting from a lot of these discussions is that the people in question have somehow betrayed EA/the community/something else. That is a meta concern, one of norms. You could "betray" the community even if you are on the AI deceleration side of things. If the people in question or Epoch made a specific commitment that they violated, that would be a "meta" issue, and would be one regardless of their "side" on the deceleration question. Perhaps they did do such a thing, but I haven't seen convincing information suggesting this. I think that really the main explanatory variable here is in fact what "side" this suggests they are on. If that is the case, I think it is worth having clarity about it. People can do a bad thing because they are just wrong in their analysis of a situation or their decision-making. That doesn't mean their actions constitute a betrayal.

EDIT: I did not read the entire thing and now realise the author of this post said the same. I will still keep my feelings around this public. 

Hmm. This seems like a strange thing to work towards? Perhaps even harmful. Is this not just trying to push SOTA?

(Perhaps strange is not the right word to use here. I could see many reasons why you would want to do this, but I guess I had the intuition that people at Epoch would not want to do this). 

I've written a short-form here as well.

Responding here for greater visibility -- I'm responding to the idea in your short-form that the lesson from this is to hire for greater value alignment. 

Epoch's founder has openly stated that their company culture is not particularly fussed about most AI risk topics [edit: they only stated this today, making the rest of my comment here less accurate; see thread]. Key quotes from that post: 

  • "on net I support faster development of AI, so we can benefit earlier from it."
  • "I am not very concerned about violent AI takeover. I am concerned about concentration of power and gradual disempowerment."

So I'm not sure this is that much of a surprise? It's at least not totally obvious that Mechanize's existence is contrary to those values.

As a result, I'm not sure the lesson is "EA orgs should hire for value alignment." I think most EAs just didn't understand what Epoch's values were. If that's right, the lesson is that the EA community shouldn't assume that an organization that happens to work adjacent to AI safety actually cares about it. In part, that's a lesson for funders not to look just at the content of the proposal in front of them, but also at what the org as a whole is doing.

Epoch's founder has openly stated that their company culture is not particularly fussed about most AI risk topics

 

To be clear, my personal views are different from those of my employees or our company. We have a plurality of views within the organisation (which I think is important for our ability to figure out what will actually happen!)

I co-started Epoch to get more evidence on AI and AI risk. As I learned more and the situation unfolded I have become more skeptical of AI Risk. I tried to be transparent about this, though I've changed my mind often and it is time-consuming to communicate every update; see e.g.:

https://www.lesswrong.com/posts/Fhwh67eJDLeaSfHzx/jonathan-claybrough-s-shortform?commentId=X3bLKX3ASvWbkNJkH

I also strive to make Epoch's work relevant and useful to people regardless of their views. E.g. both AI 2027 and Situational Awareness rely heavily on Epoch work, even though I disagree with their perspectives. You don't need to agree with what I believe to find our work useful!

That post was written today though -- I think the lesson to be learned depends on whether those were always the values vs. a change from what was espoused at the time of funding.

Oh whoops, I was looking for a tweet they wrote a while back and confused it with the one I linked. I was thinking of this one, where he states that "slowing down AI development" is a mistake. But I'm realizing that this was also only in January, when the OpenAI funding thing came out, so it doesn't necessarily tell us much about historical values.

I suppose you could interpret some tweets like this or this in a variety of ways but it now reads as consistent with "don't let AI fear get in the way of progress" type views. I don't say this to suggest that EA funders should have been able to tell ages ago, btw, just trying to see if there's any way to get additional past data.

Another fairly relevant thing to me is that their work is on benchmarking and forecasting potential outcomes, something that doesn't seem directly tied to safety and which is also clearly useful to accelerationists. As a relative outsider to this space, it surprises me much less that Epoch would be mostly made up of folks interested in AI acceleration or at least neutral towards it, than if I found out that some group researching something more explicitly safety-focused had those values. Maybe the takeaway there is that if someone is doing something that is useful both to acceleration-y people and safety people, check the details? But perhaps that's being overly suspicious. 

And I guess also more generally, again from a relatively outside perspective, it's always seemed like AI folks in EA have been concerned with both gaining the benefits of AI and avoiding X risk. That kind of tension was at issue when this article blew up here a few years back and seems to be a key part of why the OpenAI thing backfired so badly. It just seems really hard to combine building the tool and making it safe into the same movement; if you do, I don't think stuff like Mechanize coming out of it should be that surprising, because your party will have guests who only care about one thing or the other.

Interesting that you chose not to name the org in question - I guess you wanted to focus on the meta-level principle rather than this specific case

Maybe I should have. I honestly don't know. I didn't think deeply about it.

To be honest, I don't necessarily think it's as bad as people claim, though I still don't think it was a great action relative to the available alternatives; at best, it is not the best thing you could decide to do to make AI safe.

One of my core issues, and a big crux here, is that I don't really believe you can succeed at the goal of automating the whole economy with cheap robots without also allowing actors to speed up the race to superintelligence/superhuman AI researchers a lot.

And if we put any weight on misalignment, we should be automating AI safety, not AI capabilities, so this is quite bad.

Jaime Sevilla admits that he supports Mechanize's effort for selfish reasons:

https://x.com/Jsevillamol/status/1913276376171401583

I selfishly care about me, my friends and family benefitting from AI. For some of my older relatives, it might make a big difference to their health and wellbeing whether AI-fueled explosive growth happens in 10 vs 20 years.

Edit: @Jaime Sevilla has stated that he won't go to Mechanize and will stay at Epoch; sorry for any confusion.

Saying that I personally support faster AI development because I want people close to me to benefit is not the same as saying I'm working at Epoch for selfish reasons.

I've had opportunities to join major AI labs, but I chose to continue working at Epoch because I believe the impact of this work is greater and more beneficial to the world.

That said, I’m also frustrated by the expectation that I must pretend not to prioritize those closest to me. I care more about the people I love, and I think that’s both normal and reasonable—most people operate this way. That doesn’t mean I don’t care about broader impacts too.

I don't think people are expecting you to pretend not to hold the values that you do; rather, they're disappointed that you hold those values, as welfare impartiality is a core value for a lot of EAs.

I don't think impartiality to the extent of not caring more about the people one loves is a core value for very many EAs? Yes, it's pretty central to EA that most people are excessively partial, but I don't recall ever seeing someone advocate full impartiality.

Jason

Some of the reaction here may be based on Jaime acting in a professional, rather than a personal, capacity when working in AI. 

There are a number of jobs and roles that expect your actions in a professional capacity to be impartial in the sense of not favoring your loved ones over others. For instance, a politician should not give any more weight to the effects of proposed legislation on their own mother than the effect on any other constituent. Government service in general has this expectation. One could argue that (like serving as a politician), working in AI involves handing out significant risks and harms to non-consenting others -- and that should trigger a duty of impartiality.

Government workers and politicians are free to favor their own mother in their personal life, of course. 

TFD

It seems like the view expressed reduces to an existing-person-affecting view. Is there any plausible mechanism by which an action by Epoch is supposed to impact Sevilla's friends/relatives specifically? I seriously doubt it. The only plausible mechanism would be that AI goes well instead of poorly, which would benefit all existing people. This makes the politician comparison, as stated, disanalogous. Would you say that if a politician said their motivation to become a politician was to make a better world for their children, for example, that would somehow violate their duties? Seems like a lot of politicians might have an issue if that were the case.

I think this suggests a risk that the real infraction here is honestly stating the consideration about friends and family. Is it really the case that no one leading AI safety orgs aiming for deceleration is motivated, at least partly, by the desire to protect their own friends and family from the consequences of AI going poorly? I will confess that is a big part of my own reason for being interested in this topic. I would be very surprised if the standard being suggested here were really as ubiquitous as these comments suggest.

I’d agree that a lot of people who care about AI safety do so because they want to leave the world a better place for their children (which encompasses their children’s wellbeing related to being parents themselves and having to worry about their own children’s future). But there’s no trade off between personal and impartial preferences there. That seems to me to be quite different from saying you’re prioritising eg your parents and grandparents getting to have extended lifespans over other people’s children’s wellbeing.

The discussion also isn’t about the effects of Epoch’s specific work, so I’m a bit confused by your argument relying on that.

From Jaime:

“But I want to be clear that even if you convinced me somehow that the risk that AI is ultimately bad for the world goes from 15% to 1% if we wait 100 years I would not personally take that deal. If it reduced the chances by a factor of 100 I would consider it seriously. But 100 years has a huge personal cost to me, as all else equal it would likely imply everyone I know [italics mine] being dead. To be clear I don't think this is the choice we are facing or we are likely to face.“

But there’s no trade off between personal and impartial preferences there. That seems to me to be quite different from saying you’re prioritising eg your parents and grandparents getting to have extended lifespans over other people’s children’s wellbeing.

I can see why you would interpret it this way given the context, but I read the statement differently. Based on my read of the thread, the comment was in response to a question about benefiting people sooner rather than later. This is why I say it reduces to an existing-person-affecting view (which, at least as far as I am aware, is not an unacceptable position to hold in EA). The question is functionally about current vs future people, not literally Sevilla's friends and family specifically. I think this matches the "making the world better for your children" idea. You can channel a love of friends and family into an altruistic impulse, so long as there isn't some specific conflict of interest where you're benefiting them specifically. I think the statement in question is consistent with that.

The discussion also isn’t about the effects of Epoch’s specific work, so I’m a bit confused by your argument relying on that.

I'm bringing this up because I think it's implausible that anything being discussed here has some specific relevance to Sevilla's friends and family as individuals (in support of my point above). In other words, due to the nature of the actions being taken, nothing here could benefit them specifically.

there’s no trade off between personal and impartial preferences there

In what way are any concrete actions that are relevant here prioritizing Sevilla's family over other people's children? Although I can see how it might initially seem that way I don't think that's what the statement was intended to communicate.

Have you read the whole Twitter thread including Jaime’s responses to comments? He repeatedly emphasises that it’s about his literal friends, family and self, and hypothetical moderate but difficult trade offs with the welfare of others.

TFD

When I click the link I see three posts that go Sevilla, Lifland, Sevilla. I based my comments above on those. I haven't read through all the other replies or posts responding to them. If there is relevant context in those or elsewhere, I'm open to changing my mind based on it.

He repeatedly emphasises that it’s about his literal friends, family and self, and hypothetical moderate but difficult trade offs with the welfare of others.

Can you say what statements lead you to this conclusion? For example, you quote him saying something I haven't seen, perhaps part of the thread I didn't read.

“But I want to be clear that even if you convinced me somehow that the risk that AI is ultimately bad for the world goes from 15% to 1% if we wait 100 years I would not personally take that deal. If it reduced the chances by a factor of 100 I would consider it seriously. But 100 years has a huge personal cost to me, as all else equal it would likely imply everyone I know [italics mine] being dead. To be clear I don't think this is the choice we are facing or we are likely to face.“

To me, this seems to confirm what I said above:

Based on my read of the thread, the comment was in response to a question about benefiting people sooner rather than later. This is why I say it reduces to an existing-person-affecting view (which, at least as far as I am aware, is not an unacceptable position to hold in EA). The question is functionally about current vs future people, not literally Sevilla's friends and family specifically.

Yes, Sevilla is motivated specifically by considerations about those he loves, and yes, there is a trade-off, but that trade-off is really about current vs future people. People who aren't longtermists, for example, would also face this same trade-off. I don't think Sevilla would be getting the same reaction here if he just said he isn't a longtermist. Because of the nature of the available actions, the interests of Sevilla's loved ones are aligned with those of current people (but not necessarily future people). The reason why "everyone [he] know[s]" will be dead is because everyone will be dead, in that scenario.

You might think that having loved ones as a core motivation above other people is inherently a problem. I think this is answered above by Jeff Kaufman:

I don't think impartiality to the extent of not caring more about the people one loves is a core value for very many EAs? Yes, it's pretty central to EA that most people are excessively partial, but I don't recall ever seeing someone advocate full impartiality.

I agree with this statement. Therefore my view is that simply stating that you're more motivated by consequences to your loved ones is not, in and of itself, a violation of a core EA idea.

Jason offers a refinement of this view. Perhaps what Kaufman says is true, but what if there is a more specific objection?

There are a number of jobs and roles that expect your actions in a professional capacity to be impartial in the sense of not favoring your loved ones over others. For instance, a politician should not give any more weight to the effects of proposed legislation on their own mother than the effect on any other constituent.

Perhaps the issue is not necessarily that Sevilla has the motivation itself, but that his role comes with a specific conflict-of-interest-like duty, which the statement suggests he is violating. My response was addressing this argument. I claim that the duty isn't as broad as Jason seems to imply:

It seems like the view expressed reduces to an existing-person-affecting view. Is there any plausible mechanism by which an action by Epoch is supposed to impact Sevilla's friends/relatives specifically? I seriously doubt it. The only plausible mechanism would be that AI goes well instead of poorly, which would benefit all existing people. This makes the politician comparison, as stated, disanalogous. Would you say that if a politician said their motivation to become a politician was to make a better world for their children, for example, that would somehow violate their duties? Seems like a lot of politicians might have an issue if that were the case.

Does a politician who votes for a bill and states they are doing so to "make a better world for their children" violate a conflict-of-interest duty? Jason's argument seems to suggest they would. Let's assume they are being genuine: they really are significantly motivated by care for their children, more than for a random citizen. They apply more weight to the impact of the legislation on their children than to others, violating Jason's proposed criterion.

Yet I don't think we would view such statements as disqualifying for a politician. The reason is that the mechanism by which they benefit their children really only operates by also helping everyone else. Most legislation won't have any different impact on their children compared to any other person. So while the statement nominally suggests a conflict of interest, in practice the politician's incentives are aligned: the only way that voting for this legislation helps their children is that it helps everyone, and that includes their children. If the legislation plausibly did have a specific impact on their child (for example, impacting an industry their child works in), then that really could be a conflict of interest. My claim is that there needs to be some greater specificity for a conflict to exist. Sevilla's case is more like the first case than the second, or at least that is my claim:

Is there any plausible mechanism by which an action by Epoch is supposed to impact Sevilla's friends/relatives specifically? I seriously doubt it. The only plausible mechanism would be that AI goes well instead of poorly, which would benefit all existing people.

So, what has Sevilla done wrong? My analysis is this. It isn't simply that he is more motivated to help his loved ones (the Kaufman argument). Nor is it something like a conflict of interest (my argument). In another comment on this thread I said this:

People can do a bad thing because they are just wrong in their analysis of a situation or their decision-making.

I think, at bottom, the problem is that Sevilla makes mistakes in his analysis and/or decision-making about AI. His statements aren't norm-violating; they are just incorrect (at least some of them are, in my opinion). I think it's worth having clarity about what the actual "problem" is.

The reason why "everyone [he] know[s]" will be dead is because everyone will be dead, in that scenario.

 

We are already increasing maximum human lifespan, so I wouldn't be surprised if many people who are babies now are still alive in 100 years. And even if they aren't, there's still the element of their wellbeing while they are alive being affected by concerns about the world they will be leaving their own children to.

Although I haven't thought deeply about the issue you raise, you could definitely be correct, and I think they are reasonable things to discuss. But I don't see their relevance to my arguments above. The quote you reference is itself discussing a quote from Sevilla that analyzes a specific hypothetical. I don't necessarily think Sevilla had the issues you raise in mind when he was addressing that hypothetical. I don't think his point was that, based on forecasts of life-extension technology, he had determined that acceleration was the optimal approach in light of his weighing of 1-year-olds vs 50-year-olds. I think his point is more similar to what I mention above about current vs future people. I took a look at more of the X discussion, including the part where that quote comes from, and I think it is pretty consistent with this view (although of course others may disagree). Maybe he should factor in the things you mention, but to the extent his quote is being used to determine his views, I don't think the issues you raise are relevant unless he was considering them when he made the statement. On the other hand, I think discussing those things could be useful in other, more object-level discussions. That's kind of what I was getting at here:

I think, at bottom, the problem is that Sevilla makes mistakes in his analysis and/or decision-making about AI. His statements aren't norm-violating; they are just incorrect (at least some of them are, in my opinion). I think it's worth having clarity about what the actual "problem" is.

I know I've been commenting here a lot, and I understand my style may seem confrontational and abrasive in some cases. I also don't want to ruin people's day with my self-important rants, so, having said my piece, I'll drop the discussion for now and let you get on with other things.

(Although if you would like to respond you are of course welcome; I just mean that I won't continue the back-and-forth afterwards, so as not to create pressure to keep responding.)

I don’t think you’re being confrontational, I just think you’re over-complicating someone saying they support things that might bring AGI forward to 2035 instead of 2045 because otherwise it will be too late for their older relatives. And it’s not that motivating to debate things that feel like over-complications.

I agree that there are no plausible circumstances in which anyone's relatives will benefit in a way not shared with a larger class of people. However, I do think groups of people differ in ways that are relevant to how important fast AI development vs. more risk-averse AI development is to their interests. Giving undue weight to the interests of a group of people because one's friends or family are in that group would still raise the concern I expressed above. 

One group that -- if they were considering their own interests only -- might rationally be expected to accept somewhat more risk than the population as a whole is those aged ~50-55+. As Jaime wrote:

For some of my older relatives, it might make a big difference to their health and wellbeing whether AI-fueled explosive growth happens in 10 vs 20 years.

A similar outcome could also happen if (e.g.) the prior generation of my family has passed on, I had young children, and as a result of prioritizing their interests I didn't give enough weight to older individuals' desire to have powerful AI soon enough to improve and/or extend their lives.

the prior generation of my family has passed on, I had young children

This seems to suggest that you think the politician's "making the world better for my children" statement would then also be problematic. Do you agree with that?

I'll be honest, this argument seems a bit too clever. Is the underlying problem with the statement really that it implies a set of motivations that might slightly up-weight a certain age group? One of the comments speaks of "core values" for EA. Is that really a core value? I'm pretty sure I recall reading an argument by MacAskill about how we should actually weight young people more heavily in various ways (I think it was voting), for example. I seriously doubt most EAs could claim that they are literally distributionally exact in weighting all morally relevant entities in every decision they make. I think the "core value" that exists probably isn't really this demanding, although I could be wrong.

Prioritising young people often makes sense from an impartial welfare standpoint, because young people have more years left, so there is more welfare to be affected. With voting in particular, it’s the younger people who have to deal with the longer term consequences of any electoral outcome. You see this in climate change related critiques of the Baby Boomer generation.


See e.g.

“Effective altruism can be defined by four key values: …

2. Impartial altruism: all people count equally — effective altruism aims to give everyone’s interests equal weight, no matter where or when they live. When combined with prioritisation, this often results in focusing on neglected groups…”

https://80000hours.org/2020/08/misconceptions-effective-altruism/

Prioritising young people often makes sense from an impartial welfare standpoint

Sure, I think you can make a reasonable argument for that, but if someone disagreed with that, would you say they lack impartiality? To me it seems like something that is up for debate, within the "margin-of-error" of what is meant by impartiality. Two EAs could come down on different sides of that issue and still be in good standing in the community, and wouldn't be considered to not believe in the general principle of impartiality. Likewise, I think we can interpret Jeff Kaufman's argument above as expressing a similar view about an individual's loved-ones. It is within the "margin-of-error" of impartiality to still have a higher degree of concern for loved-ones, even if that might not be living up to the platonic ideal of impartiality.

My point in bringing this up is, the exact reason why the statement in question is bad seems to be shifting a bit over the conversation. Is the core reason that Sevilla's statement is objectionable really that it might up-weight people in a certain age group?

Yeah that sounds right to me as a gloss 

I think it's a good thing that you're open about your motivations and I appreciate it.

I think Sharmake might be thinking you are one of the people that left Epoch to start Mechanize? (He says "admits that the reason he is working on this" in response to the main post, about Mechanize)

Ah, in case there is any confusion about this I am NOT leaving Epoch nor joining Mechanize. I will continue to be director of Epoch and work in service of our public benefit mission.

I incorrectly thought that you also left, I edited my comment.
