Yarrow Bouchard 🔸

1101 karma · Joined · Canada · medium.com/@strangecosmos

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism fits into my life and what it means to me.

Sequences
2

Criticism of specific accounts of imminent AGI
Skepticism about near-term AGI

Comments
381

Topic contributions
1

I just realized the way this poll is set up is really confusing. You're currently at "50% 100% probability", which, when you look at it on the number line, looks like 75%. Not the best tool to use for such a poll, I guess!

I don't know much about small AI startups. The bigger AI companies have a problem because their valuations have increased so much, and the investment they're making (e.g. into building datacentres) is reaching levels that feel unsustainable.

It's gotten to the point where AI investment, driven primarily by the large AI companies, has significant macroeconomic effects on the United States economy. The popping of an AI bubble could be followed by a U.S. recession.

However, in that case it would be complicated to say whether the popping of the bubble "caused" the recession, since a lot of other factors, such as tariffs, would be at play. Macroeconomics and financial markets are complicated and I know very little. I'm not nearly an expert.

I don't think small AI startups creating successful products and then large AI companies copying them and outcompeting them would count as a bubble. In that scenario, the total amount of revenue in the industry would be about the same as if the startups had succeeded; it would just flow to the bigger companies instead.

The bubble question is about the industry as a whole.

Yes, Daniel Kokotajlo did say that, but then he also said that if that happens, all the problems will be solved fairly quickly anyway (within 5-10 years), so AGI will only be delayed from maybe 2030 to 2035, or something like that.

Overall, I find his approach to this question to be quite dismissive of possibilities or scenarios other than near-term AGI, and overzealous in his belief that either scaling or sheer financial investment (or utterly implausible scenarios about AI automating AI research) will assuredly solve all roadblocks on the way to AGI in very short order. This is not really a scientific approach; it's just hand-wavy conceptual arguments and overconfident gut intuition.

So, I give Kokotajlo credit for thinking about this idea in the first place (which is a bit like giving a proponent of the COVID lab leak hypothesis credit for considering the idea that the virus could have originated naturally), but because he doesn't really think the consequences of even fundamental problems with the current AI paradigm could end up being particularly significant, I don't give him credit for a particularly good or wise consideration of this issue.

I'd be very interested in seeing the discussions of these topics from Carl Shulman and/or Paul Christiano that you're remembering. I am curious to know how deeply they reckon with this uncertainty. Do they mostly dismiss it and hand-wave it away like Kokotajlo? Or do they take it seriously?

In the latter case, it could be helpful for me because I'd have someone else to cite when I'm making the argument that these fundamental, paradigm-level considerations around AI need to be taken seriously when trying to forecast AGI.

Yes, I said "anyone else", but that was in the context of discussing academic research. But if we were to think more broadly and adjust for demographic variables like level of education (or years of education) and so on, as well as maybe a few additional variables like how interested someone is in science or economics, I don't really believe that people in effective altruism would do particularly better at reducing their own bias.

I don't think people in EA are, in general or across the board, particularly good at reducing their bias. If you do something like bring up a clear methodological flaw in a survey question, there is a tendency for some people to circle the wagons and try to deflect or downplay the criticism rather than simply acknowledge the mistake and try to correct it.

I think some people (not all and not necessarily most) in EA sometimes (not all the time and not necessarily most of the time) criticize others for perceived psychological bias or poor epistemic practices and act intellectually superior, but then make these sorts of mistakes (or worse ones) themselves, and there's often a lack of self-reflection or a resistance to criticism, disagreement, and scrutiny.

I worry that perceiving oneself as intellectually superior can lead to self-licensing, that is, people think of themselves as more brilliant and unbiased than everyone else, so they are overconfident in their views and overly dismissive of legitimate criticism and disagreement. They are also less likely to examine themselves for psychological bias and poor epistemic practices. 

But what I just said about self-licensing is just a hunch. I worry that it's true. I don't know whether it's true or not.

I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.

I'm not sure exactly what you were referencing Eliezer Yudkowsky as an example of — someone who is good at reducing his own bias? I think Yudkowsky has shown several serious problems with his epistemic practices, such as:
 

  • Expressing extremely strong views, getting proven wrong, and never discussing them again — not admitting he was wrong, not doing a post-mortem on what his mistakes were, just silence, forever
  • Responding to criticism of his ideas by declaring that the critic is stupid or evil (or similarly casting aspersions), without engaging in the object-level debate, or only engaging superficially
  • Being dismissive of experts in areas where he is not an expert, and not backing down from his level of extreme confidence and his sense of intellectual superiority even when he makes fairly basic mistakes that an expert in that area would not make
  • Thinking of himself as possibly literally the smartest person in the world, and definitely the smartest person in the world working on AI alignment, with nobody else even close; and, going back many years, thinking of himself as, in that sense, the most important person in the world, and possibly the most important person who has ever lived, with the fate of the entire world resting on him, personally, and only him (in any other subject area, such as pandemics or asteroids, and in any serious, credible intellectual community outside the rationalist community or EA, this would be seen as a sign of complete delusion)
     

Oh, sure. People will keep using LLMs. 

I don’t know exactly how you’d operationalize an AI bubble. If OpenAI were a public company, you could say its stock price goes down a certain amount. But private companies can control their own valuation (or the public perception of it) to a certain extent, e.g. by not raising more money so their last known valuation is still from their most recent funding round. 

Many public companies like Microsoft, Google, and Nvidia are involved in the AI investment boom, so their stocks can be taken into consideration. You can also look at the level of investment and data centre construction. 

I don’t think it would be that hard to come up with reasonable resolution criteria; it’s just that this is, of course, always a nitpicky thing with forecasting, and I haven’t spent any time on it yet.
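To make this a bit more concrete, here is a minimal sketch of one possible resolution criterion, assuming an equal-weighted basket of a few AI-exposed public stocks (MSFT, GOOGL, NVDA as placeholders) and an arbitrary 40% drawdown threshold. None of these choices comes from an established definition of a bubble popping; they're just illustrative assumptions.

```python
# Illustrative sketch only: "the bubble popped" is operationalized here as an
# equal-weighted basket of AI-exposed public stocks falling at least 40% from
# its running peak before January 1, 2031. The tickers, the threshold, and the
# date are placeholder assumptions, not an established criterion.
import pandas as pd

AI_BASKET = ["MSFT", "GOOGL", "NVDA"]   # placeholder basket of AI-exposed stocks
DRAWDOWN_THRESHOLD = 0.40               # placeholder definition of a "pop"
RESOLUTION_DATE = "2031-01-01"

def bubble_popped(prices: pd.DataFrame) -> bool:
    """prices: daily closing prices with a DatetimeIndex, one column per ticker."""
    prices = prices.loc[:RESOLUTION_DATE, AI_BASKET]
    # Equal-weighted index: normalize each ticker to 1.0 at the start, then average.
    index = (prices / prices.iloc[0]).mean(axis=1)
    # Fractional drop from the running peak.
    drawdown = 1 - index / index.cummax()
    return bool((drawdown >= DRAWDOWN_THRESHOLD).any())

# Example with made-up prices: a run-up followed by a crash resolves the question "yes".
if __name__ == "__main__":
    dates = pd.date_range("2025-01-01", periods=6, freq="MS")
    fake = pd.DataFrame(
        {"MSFT": [100, 120, 150, 160, 90, 85],
         "GOOGL": [100, 115, 140, 155, 95, 90],
         "NVDA": [100, 130, 170, 180, 80, 75]},
        index=dates,
    )
    print(bubble_popped(fake))  # True
```

A real criterion would probably also need to account for private valuations and the level of investment and data centre construction, which a stock-price drawdown alone doesn't capture.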

Two criticisms of this post.

First, calling GPT-4 "the first very weak AGI" is quite ridiculous. Either that's just false or the term "very weak AGI" doesn't mean anything. These kinds of statements are discrediting. They come across as unserious. 

Second, I find the praise of LessWrong in this post disturbing. There is no credible evidence of LessWrong promoting exceptionally rational thought — to quote Eliezer Yudkowsky, in some sense the test of rationality must be whether the supposedly exceptionally rational people are "currently smiling from on top of a giant heap of utility", which is not remotely true for LessWrong — and there is a lot of evidence of irrationality and outright delusion associated with LessWrong. LessWrong is also morally evil.

(I get the impression, based on some significant empirical evidence, that a large number of people are refraining from saying some obvious things out of some combination of fear and politeness, but I’m willing to take the hit in the distant hopes of disrupting this preference falsification.)

I like the spirit behind this — do science rather than rely on highly abstract, semi-philosophical conceptual arguments. But I have my doubts about the feasibility of the recommended course of action.

It's hard for me to imagine that the sort of data you'd want to know isn't already covered in existing machine learning research.

Maybe more importantly, I'm really not sure this is the right scientific approach. For example, why not study the sort of real-world tasks that would need to be automated for a software intelligence explosion to occur? What are the barriers to those tasks being automated? For instance: data efficiency, generalization, examples of humans performing these tasks to train on, or continual learning/online learning. How much are deep learning systems and deep reinforcement learning systems improving on those barriers? This seems to get to the heart of the matter more than what is suggested above.

Ask: what is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by a reliable source such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

I haven’t thought about my exact probability too hard yet, but for now I’ll just say 90% because that feels about right.

People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble

Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031). 

For those who know more about forecasting than I do, and especially for those who can think of good ways to financially operationalize such a prediction, I would encourage you to make a post about this.

For now, an informal poll:
 

What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?