Back when they were accepting them, I made a bet with the AI Futures team along similar (if simplified) lines:
1) US GDP Growth Limit: The 5-year rolling Compound Annual Growth Rate (CAGR) of US Real GDP will not exceed +20% for any 5-year period ending during the bet's term.
(approximately 1.5x the historical maximum 5-year rolling Real GDP CAGR of ~13.3%, set in 1943).
I also offered a simplified version: a '1.5x maximum historical' bet:
(1.1) maximum annual real US GDP growth will not exceed 28% for any of the next five years (based on 18.9% for 1942).
These would all resolve in 2030, but they could be pulled forward.
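For anyone who wants to check the arithmetic, here's a minimal sketch of how condition (1) would resolve; the GDP figures below are made-up placeholders, not actual BEA data:

```python
# Minimal sketch of how the 5-year rolling CAGR condition resolves.
# The real_gdp figures are made-up placeholders, not actual BEA data.

real_gdp = {2025: 23.0, 2026: 23.7, 2027: 24.5, 2028: 25.4, 2029: 26.3, 2030: 27.2}  # $T, illustrative

def rolling_cagr(series, end_year, window=5):
    """Compound annual growth rate over `window` years ending in `end_year`."""
    return (series[end_year] / series[end_year - window]) ** (1 / window) - 1

cagr = rolling_cagr(real_gdp, 2030)
print(f"5-year CAGR ending 2030: {cagr:.1%}")  # ~3.4% on these placeholder numbers
print("Condition (1) holds:", cagr <= 0.20)    # the bet loses only if growth goes truly vertical
```

To lose, real GDP would have to grow roughly 2.5x over five years (1.20^5 ≈ 2.49), versus about 1.87x (1.133^5) for the 1943 record.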
This article contains 45% recycled content.
And 100% recycled electrons.
I do hope there's an impact statement for the earth's electromagnetic spectrum.
If nothing else, I'll wait for this puppy to pop up on Manifold and ride it for the funsies. Even if Scott won't take it, there's liable to be someone who bites.
I've been thinking a lot lately about the hype around additive manufacturing a decade or so ago. People talked as if we were one step away from Star Trek replicators, and as if within only a few years we'd just go down to a fabrication store that would 3D print objects for us on the spot. Traditional manufacturing supply chains would begin to collapse, as hard-to-source parts were replaced by print-on-demand. A great creative force would be unleashed - the economy of makers - and we'd head into a new era.
In the end, maybe 10% of that happened? The retail attempts at 3D printing maker labs failed. The technology developed into an essential element of prototyping, but wasn't hugely scalable. I see some YouTubers who are in the maker space use it for certain custom items. But on the whole, it's still very context dependent, and hasn't disrupted a lot of established manufacturing, nor allowed for a wide range of custom-fab startups to come to the fore.
That seems very different. To the layperson, a CNC machine turning a block of metal into a part and a 3D printer making the same part are six of one, half a dozen of the other.
A computer that can drive a car or crack jokes or a million other things people thought a computer could never do is something quite different.
"Your opinion only matters if you make a bet" is perhaps the most pathetic of the Rationalist positions.
It's a reaction to the fact that making crazy predictions is a cheap form of clickbait. A bet is a tax on bullshit.
“No single BLS occupational category will have lost 50% or more of jobs between now and February 14th 2029”
This condition needs to be fixed because their categories can be extremely narrow. You should have a minimum size for a category to qualify such as “1 million jobs”.
To me, this seems like the most likely place for Freddie to lose the bet.
Most of the other stuff there is saying something like "the US economy will be recognizably the same critter in three years that it is now," which I think is almost guaranteed--only some kind of godawful catastrophe like a nuclear war or covid x 10 or something will keep that from happening three years from now, because even amazing new technology takes many years to get adopted across most industries.
But even if AI is not transformative for the whole economy, there are plenty of narrow fields where it might be transformative. A few examples that seem plausible to me (I don't know if any of these are BLS categories):
a. There's a set of people whose job is basically to write copy---low-effort stuff to go on a website or in an ad or (back when they were economically viable) in a print publication. Existing AI tools can do that now. They can't write as well as a very good writer, but most of the people writing that kind of copy weren't amazing writers either. The market for those folks has been bad for a long time, but it seems like it is going to (perhaps already has) evaporate.
b. The same for illustrators---if you want a good-enough image to use for a blog post or something, you can get it from an AI tool and not pay anyone. Again, this isn't giving you Norman Rockwell or something, but it's giving you a good-enough image and you don't have to pay anything (or much) for it.
c. Call center employees---it seems like more and more of this job is being automated away. And while regulatory or administrative barriers may slow down stuff like cab/Uber drivers being replaced by self-driving vehicles even when the technology is ready for prime time, everybody is already trying to squeeze every last penny out of what they spend on customer support, helpdesk, etc., employees. The companies that happily fire their whole staff in Utah and hire some service that runs out of India or Ireland or wherever else for 30% less will absolutely be willing to switch over to ChatBot3000 that costs another 30% less and fire the Irish/Indians/whomever. Errors that make customers angry or frustrated but don't cost money immediately have not stopped a race to the bottom on these jobs so far, so an occasional hallucination from the chat bot probably won't, either.
Even worse, you have a multiple-comparisons problem. Does anyone know how often it has happened in the past that one of the many BLS categories has fallen by 50% in a 3-year span?
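That's checkable in principle. A rough sketch of how one might run the base-rate check, assuming you've pulled BLS OEWS national employment by occupation into a CSV (the filename and column names here are hypothetical, and changes in occupational classifications across years would need handling):

```python
# Rough sketch: how often has a BLS occupational category lost >=50% of jobs
# in any 3-year span? Assumes a long-format CSV of OEWS national employment;
# the filename and column names are hypothetical.
import pandas as pd

df = pd.read_csv("oews_national.csv")  # assumed columns: occ_code, year, employment
wide = df.pivot(index="occ_code", columns="year", values="employment")

MIN_SIZE = 1_000_000  # minimum-size filter, per the fix suggested above
hits = set()
for start in wide.columns:
    if start + 3 not in wide.columns:
        continue
    base, later = wide[start], wide[start + 3]
    mask = (base >= MIN_SIZE) & (later <= 0.5 * base)  # halved from a big base
    hits.update(wide.index[mask])

print(f"Categories ever down >=50% over 3 years (from >=1M jobs): {len(hits)}")
```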
Now that I’ve got you here, Scott, can I request that you please unblock me on your blog? You missed context and unfairly blocked me for using the trigger phrase “I’m done here.” It came right after I made a point of the form “X, and I explicitly don’t want to discuss the adjacent issue Y here because it’s off point and contentious,” after which several people kept trying to engage me in a discussion of Y (I suspect in hopes they could get me to say something about Y that would let them dismiss my saying X).
I wouldn't take the bet, but I will tell you that AI has been really useful for me in my job. It is value-added if you know how to use it.
I use it every day at work and it's great. But I do think it's important to define terms. "AI is completely useless" is an untenable position in 2026. "AI will be widely adopted across multiple industries, but will fail to turbocharge GDP or cause mass unemployment (or, for that matter, make back the insane amounts being invested in it)" is quite defensible. I legitimately have no idea. My gut says Freddie would win this bet if it's kept to 3 years. In the next 20-30 years, I think we're all in for a pretty crazy ride.
Corporate insistence that AI works as advertised has and will continue to have a greater economic impact than the technology itself.
I used to say this as a joke, but now I'm actually beginning to believe that there is some kind of psychological connection between the evident cases of LLM-facilitated psychosis and the mass-media psychosis with regard to the capacity and trajectory of AI. And maybe I'm wrong, but it seems like the dividing line is people who regularly chat with LLMs vs. people who don't. I often hear that if you don't regularly use LLMs, you're going to be sadly ignorant of their true power, and whatnot. But is it possible that the experienced "insight" someone gets from harnessing the power of LLMs for every conceivable task, including idle talking, is actually the experience of their brain getting mushier in a very particular way that makes them highly suggestible to various crackpot theories like "you, personally, are a god" or "AIs are going to replace all jobs in the next decade"? It's at least apparent to me that those of my undergrad students who talk to LLMs a lot develop a massive mental blockade to understanding How LLMs Work; they simply cannot believe it's text prediction.
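For what it's worth, "it's text prediction" is demonstrable at toy scale. Below is a bigram counter, a vastly dumber cousin of an LLM, but the objective is the same idea: predict the next token from what came before. (Real models are transformers trained on enormous corpora; this is only meant to make the objective concrete.)

```python
# Toy next-token predictor built from bigram counts. Real LLMs are transformers
# over huge corpora, but the training objective is the same: predict the next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # tally which word follows which

def predict(word):
    """Most frequently observed next token after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' ('cat' and 'mat' tie at 2; first-seen wins)
print(predict("cat"))  # 'sat'
```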
Perhaps this is similar to how psychedelics create the illusion of a profound experience. (Which can be very convincing and persistent!)
Yeah that’s what I’m thinking. Imagine if there was a big question about whether a new kind of psychedelics was going to unlock a higher stage of human consciousness in the next 10 years. It’d be hard for there to be a balanced evaluation of the question if the only people who were having that conversation were, y’know, on drugs.
To be fair (correct me if I'm wrong) NYT guest writers don't get to write the annoying, clickbaity titles of their articles. (I wonder if AI writes them for the Times now?)
"‘We’re All Polyamorous Now. It’s You, Me and the A.I.’" quotes AI developers and researchers that the author interviewed for her Master's degree project, and a few stats on use, but the 'millions' that she implies are 'polyamourous' with AI now are not surveyed in any meaningful way.
I just had a professional development meeting about AI, as I'm sure many teachers have done or are doing or will do soon. A lot of it was reasonably skeptical, measured even, except that the entire project of technology in schools has led to worse outcomes and no gains and the last three decades of education have been made worse by laptops and phones and screens generally. Anyhow, though, the key is that it was 'skeptical' but not actually skeptical. It took as read things that I don't think are true at all, like the idea that LLMs are a step towards general intelligence. I think that's about as true as the idea that Richard Branson's Virgin Galactic trips were a step towards exploiting the mineral wealth of the Kuiper belt. It's absurd; the man was in the business of flying up really high and then taking a nosedive so people felt weightless, essentially skydiving with a plane around you and no feeling of wind, in order to mimic microgravity.
I feel really good about this metaphor because it is exactly the same way LLMs mimic intelligence. I had a guy who I know for a FACT specializes in poetry stand up and tell me that LLMs work the way human brains work, that the human brain is the most wonderful LLM of all. No, dude. This wasn't true when people in the 18th century spoke of clockwork, or in the 19th of galvanism, and it's not true today. We're a totally different thing. We're pretty good at faking that thing! We're great at faking a lot of things like that! I saw Tupac's ghost do a concert ten years ago! So it goes with LLM -> general artificial intelligence.
I dunno, man. When I saw the last Mission Impossible movie last year, about an AI that tries to eliminate humanity, I thought it was just a bit of fun. But now that that is actually happening I’m a little concerned.
The solution to your "wander around the store" problem does exist, and is implemented at Home Depot. You can use your phone to search for any product, and it will tell you the aisle and bin, and how many are still in stock. This makes sense for the consumers of that store: a contractor is there to get more cut-wheels, and he's not going to wander around and make an impulse buy of a drill. Wal-Mart, however, _wants_ you to wander around; they want to maximize your time in the store, walk you past as many products as possible, all in the hope that you buy more. Some people call this experience "shopping" and they find it pleasant. I am not one of those people.
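Under the hood, that kind of locator is just a keyed lookup; here's a minimal sketch, with invented data and field names rather than Home Depot's actual schema:

```python
# Minimal sketch of an in-store product locator: (store, SKU) -> aisle/bin/stock.
# All data and field names here are invented, not Home Depot's actual schema.
from dataclasses import dataclass

@dataclass
class Slot:
    aisle: int
    bin: str
    in_stock: int

inventory = {
    ("store_42", "cutoff-wheel-4.5in"): Slot(aisle=12, bin="B07", in_stock=34),
    ("store_42", "drill-18v"): Slot(aisle=3, bin="A01", in_stock=5),
}

def locate(store_id, sku):
    slot = inventory.get((store_id, sku))
    if slot is None:
        return "Not found"
    return f"Aisle {slot.aisle}, bin {slot.bin}, {slot.in_stock} in stock"

print(locate("store_42", "cutoff-wheel-4.5in"))  # Aisle 12, bin B07, 34 in stock
```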
Some, or even a lot of, skepticism is understandable. But I don't know how much clearer you could make it that you flat out aren't interested in turning your brain on when it comes to this topic. By this kind of logic, how do you know that a pregnant woman won't spontaneously give birth to a dog?? No, no, I don't want to hear any "speculation"; I'll believe it when I see it, because there's literally no other way to know. No, I will accept absolutely no arguments of any kind until I physically see a not-dog being born. And even if I do, that only happened to THIS woman, THIS time. We don't know a dog won't be born to the next one!
Like sure I guess that's one way to view the world. Have at it. At best you're rehashing philosophy about what knowledge even is and whether we can actually ever know anything.
It's fair to debate the appropriate balance of fear/skepticism/optimism/whatever, but to just flat out insist that no, this is definitely not a thing that anyone should be concerned or excited about because it literally can't do a single thing that's useful or relevant and never will is...you're living in your own little world at that point.
What you're doing in this article isn't skepticism, it's dogmatic opposition rooted in whatever ideological obsession you have with this topic, given how passionately you write about it in a way that seems to go far beyond "I'm tired of reading about it." Nobody could read these articles and conclude "Yes, this seems like a person with a suitable background who approached this topic with an open mind before rationally concluding that everyone else but him was insane."
Are we in a bubble? Is there too much hype? Is there not enough hype? Will it be transformative over 5/10/20/50 years? Yes, no, maybe; nobody can literally predict the future - touché, I guess, because what if tomorrow it rains donuts (prove me wrong)?
Every transformational technology of the past has taken many years, sometimes decades, to go from inception to becoming transformative in any real sense of the word - with those timelines generally getting longer the further back you go. We are barely a few years in. And whether a technology is transformative or not is almost entirely decoupled from how much or how loudly people talk about it. Something that seems lost on you.
If you have no interest in AI, you could just not talk about it. Not everyone needs to have an opinion about everything, especially if they can't be bothered to earnestly engage with the topic in good faith, one way or the other.
There's a lot of weird and misplaced hostility here. Also this isn't an article about AI, it's an article about media.