Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
I am skeptical that socioeconomic development increases animal welfare at any point. This is a bit like saying that there weren't any environmentalists before the Industrial Revolution and there have been a lot of environmentalists since, so clearly this whole industry thing must be really good for the environment.
This seems clearly wrong. If you believe that it would take a literal Manhattan Project for AI safety ($26 billion, adjusting for inflation) to reduce existential risk by a mere 1%, and you only care about the current 8 billion people dying, then you can save a present person's life for $325 in expectation, swamping any GiveWell-recommended charity.
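Taking the $26 billion and 1% figures above at face value, the implied cost per expected life saved is just:

$$\frac{\$26\,\text{billion}}{1\% \times 8\,\text{billion lives}} = \$325 \text{ per expected life saved}$$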
So, to be clear, you think that if LLMs continue to complete software engineering tasks of exponentially increasing length at an exponentially decreasing risk of failure, then that tells us nothing about whether LLMs will reach AGI?
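A minimal sketch of what that extrapolation implies, where the starting task length and the doubling time are purely illustrative assumptions, not claims about any particular benchmark:

```python
# Sketch of "exponentially increasing task lengths": under a constant
# doubling time, completable task length grows by orders of magnitude
# within a few years. Both parameters below are illustrative assumptions.

def task_length_hours(years_from_now: float,
                      current_length_hours: float = 1.0,   # assumed starting point
                      doubling_time_years: float = 0.6) -> float:  # assumed doubling time
    """Task length completable under pure exponential extrapolation."""
    return current_length_hours * 2 ** (years_from_now / doubling_time_years)

for years in (0, 1, 2, 3, 4, 5):
    print(f"t+{years}y: ~{task_length_hours(years):,.0f} hours")
```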
I expect most EAs who have enough money to consider investing it to already be investing it in index funds, which are, by design, already long the Magnificent Seven.
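The "by design" part is just cap weighting; a toy sketch with placeholder market caps (the numbers are made up for illustration, not real data):

```python
# Toy cap-weighted index: each holding's weight is its market cap divided
# by the total, so the largest caps dominate automatically. The caps below
# are hypothetical placeholders, not real figures.
caps = {"MegacapA": 3.0, "MegacapB": 2.5, "SmallcapC": 0.1}  # $ trillions, hypothetical
total = sum(caps.values())
weights = {name: cap / total for name, cap in caps.items()}
print(weights)  # the two megacaps carry almost all of the index weight
```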
It's called "online learning" in AI 2027 and "human-like long-term memory" in IABIED.