Yarrow Bouchard 🔸

1086 karma · Canada · medium.com/@strangecosmos

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Sequences (2)

Criticism of specific accounts of imminent AGI
Skepticism about near-term AGI

Comments (375)

Topic contributions (1)

I like the spirit behind this — do science rather than rely on highly abstract, semi-philosophical conceptual arguments. But I have my doubts about the feasibility of the recommended course of action.

It's hard for me to imagine that the sort of data you'd want isn't already covered in existing machine learning research.

Maybe more importantly, I'm really not sure this is the right scientific approach. For example, why not study the sort of real-world tasks that would need to be automated for a software intelligence explosion to occur? What are the barriers to those tasks being automated? For instance: data efficiency, generalization, the availability of examples of humans performing these tasks to train on, and continual learning/online learning. How much are deep learning systems and deep reinforcement learning systems improving on those barriers? This seems to get to the heart of the matter more than what is suggested above.

Ask: what is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by a reliable source such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

I haven’t thought about my exact probability too hard yet, but for now I’ll just say 90% because that feels about right.

People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble

Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031). 

If you know more about forecasting than I do, and especially if you can think of good ways to financially operationalize such a prediction, I'd encourage you to make a post about this.

For now, an informal poll:
 

What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

If I had to put a number on it, I'd say I'm, I don't know, maybe 85%–95% sure AI is in a bubble right now. The reason is that capabilities aren't improving much, scaling the training of models is running into all sorts of problems (pre-training running out of steam, data running out, RL training being super inefficient), and there are fundamental issues scaling can't overcome, such as data inefficiency, poor generalization, a lack of human example data in many domains, severe difficulties with models learning from video data, and a lack of continual learning or online learning.

Timing bubbles, or timing the market in general, is famously near impossible, so it's hard to put a specific timeline on when I think the bubble will pop, but I reckon it's gotta be within about 3 years, so by the end of 2028, or maybe a little later, but not much later. In theory, a bubble could go on for quite a long time, but this is a huge bubble. The level of investment is immense, as described in the Atlantic article quoted above. It would be hard to sustain this level of investment for very long without delivering financial results, or delivering proxies for financial results, like businesses' pilot projects with AI going well.

I suspect that when the AI bubble pops, for some people the idea of imminent AGI will suddenly shatter like a dropped glass; for others, it will be as if nothing happened at all; and for many, it will be somewhere in between. I can only hope that as many people as possible take it as an opportunity to return to fundamentals — go back to basics — and reexamine the case for near-term AGI from the ground up.

Did you find what you were looking for? 

Sorry for the very late reply to an ancient thread. Just want to point out one small thing that is not helping your case:

Many self-help interventions are incentivized against actually fixing people's problems (e.g. therapists stop getting paid if they permanently fix your problems). 

This is a typical pseudoscience/fake-medicine line: "The doctors and pharma companies want you to be sick! That's their business model!"

Doesn't add up.

Sorry, this is an incredibly late reply in a (by Internet standards) ancient comment thread. 

My point is about differentiation. If Jhourney is saying their work confers benefits on approximately the same level as the many meditation centres you can find all over the place, then I have no qualms with that claim. If Jhourney, or someone else, is saying that Jhourney's work confers benefits far, far higher than any or almost any other meditation centre or retreat on Earth, then I'm skeptical about that.

Transcendental Meditation, or TM, is an organization that claims far, far higher benefits from its techniques than other forms of meditation, insists on in-person teaching, and charges a very high fee. It's viewed by some people as essentially a scam and by others as a sort of luxury product that is not particularly differentiated from the commodity product.

I'm not saying Jhourney is like Transcendental Meditation. I'm just noting that similar claims have been made in the area of meditation before, by parties with a clear financial self-interest in making them, and those claims have not been borne out. So there is a certain standard of evidence a company like Jhourney has to meet, a certain level of warranted skepticism it has to overcome.

Hello, Matt. Let me just say I really appreciate your friendly, supportive, and positive approach to this conversation. It's very nice. Discussions on the EA Forum can get pretty sour sometimes, and I'm probably not entirely blameless in that myself.

You don't have to reply if you don't want, but I just wanted to follow up in case you still did.

Can you explain what you mean about the data efficiency of the new RL techniques in the papers you mentioned? You say it's more complex, but that doesn't help me understand.

By the way, did you use an LLM like Claude or ChatGPT to help write your comment? It has some of the hallmarks of LLM writing to me. I'm just saying this to help you — you may not realize how much LLMs' writing style sticks out like a sore thumb (depending on how you use them), and it will likely discourage people from engaging with you if they detect it. I keep encouraging people to trust themselves as writers and trust their own voice, and reassuring them that the imperfections of their writing don't make us, the readers, like it less; they make us like it more.

I am not familiar with the other two things you mentioned, but I'm very familiar with Future Perfect and overall I like it a lot. I think it was a good idea for Vox to start that. 

But Future Perfect is a small subset of what Vox does overall, and what Vox does — mainly explainer journalism, which is important, and which Vox does well — is just one part of news overall. 

Future Perfect is great, but it's also kind of just publishing articles about effective altruism on a news site — not that there's anything wrong with that — rather than an improvement on the news overall.

If you put the Centre for Effective Altruism or the attendees of the Meta Coordination Forum or the 30 people with the most karma on the EA Forum in charge of running the New York Times, it would be an utter disaster. Absolute devastation, a wreck, an institution in ruins.
