
My intuition is that driving is a domain narrow enough that it should not require AGI, and moreover that it should be solvable by a system with far less sophistication and reasoning capability than an AGI. SAE Level 5 autonomy — which requires a vehicle to be able to drive autonomously wherever and whenever a typical human driver could — has not been achieved by any company. All autonomous driving projects currently require a human in the loop, either in the driver’s seat or available to provide remote assistance.

In a world where AGI is achieved by, say, 2030 or 2035, what are the odds that Level 5 autonomy hasn’t been solved by 2023? My intuition is that autonomous vehicles should be relatively low-hanging fruit, plucked fairly early on the trajectory from AI solving video games to AI solving ~everything.

There are a few reasons why this intuition could be wrong:

  1. Maybe self-driving is actually an AGI-level problem or much closer to AGI-level than my intuition tells me. (I would rate this as highly plausible.)
  2. Maybe AI progress is such a steep exponential that the lag time between Level 5 autonomy and AGI is much shorter than my intuition tells me. (I would rate this as moderately plausible.)
  3. Perhaps Internet-scale data simply isn’t available to train self-driving AIs. (I would rate this as fairly implausible; it would be much more plausible if Tesla weren’t such a clear counterexample.)
  4. Robotics in general could prove to be either too hard or unimportant for an otherwise transformative or general AI. (I would rate this as highly implausible; it strikes me as special pleading.)
  5. Onboard compute for Teslas, which is a constraint on model size, is tightly limited, whereas LLMs that live in the cloud don’t have to worry nearly as much about the physical space they take up, the cost of the hardware, or their power consumption. (I would rate this as the most plausible objection, but I wonder why Tesla wouldn't put a ton of GPUs in the trunk of a car and see if that works.)
  6. Self-driving cars don’t get to learn through trial-and-error and become gradually more reliable, whereas LLMs do. (I would rate this as somewhat plausible; the counterargument is that Tesla's Autopilot is allowed to make mistakes, which humans can correct.)[1]

Please enumerate any additional reasons you can think of in the comments. Also, please present any arguments or evidence you can think of as to why I should accept any of the reasons given above.

[1] I owe both points (5) and (6) to a post by Daniel Kokotajlo.

Comments

This kinda overlaps with (2), but the end of 2035 is 12 years away. A lot can happen in 12 years! If we look back to 12 years ago, it was December 2011. AlexNet had not come out yet, neural nets were a backwater within AI, a neural network with 10 layers and 60M parameters was considered groundbreakingly deep and massive, the idea of using GPUs in AI was revolutionary, TensorFlow was still years away, doing even very simple image classification tasks would continue to be treated as a funny joke for several more years (literally—this comic is from 2014!), I don’t think anyone was dreaming of AI that could pass a 2nd-grade science quiz or draw a recognizable picture without handholding, GANs had not been invented, nor transformers, nor deep RL, etc. etc., I think.

So “AGI by 2035” isn’t like “wow that could only happen if we’re already almost there”; instead, it leaves tons of time for a whole different subfield of AI to develop from almost nothing.

(I'm making a case against being confidently skeptical about AGI by 2035, not a case for confidently expecting AGI by 2035.)

Great comment!

Great question!

My understanding was that self-driving cars are already less likely to get into accidents than humans are.

However, they certainly can't "drive autonomously wherever and whenever a typical human driver could", and adapting current self-driving technology to each new city is a costly, city-by-city process.

What does this tell us about how far we are from AGI? In particular, should this make us less enthusiastic about the generative AI direction than we might otherwise be? If it's so powerful, shouldn't we be able to use it to solve self-driving?

I guess it doesn't feel to me that we should make a huge update on this because anyone who is at all familiar with generative AI should already know it is incredibly unreliable without having to bring self-driving cars into the equation.

The question then becomes how insurmountable the unreliability problem is. There are certainly challenges here, but it's not clear that it is insurmountable. The short-timelines scenarios are pretty much always contingent on us discovering some kind of self-reinforcing loop. Is this likely? It's hard to tell, but there are already very basic techniques like self-consistency or reinforcement learning from AI feedback, so it isn't completely implausible. And it's not really clear to me why the lack of self-driving cars at present is a strong reason to believe that attempts to set up such a loop will fail.
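
As an aside, here is a minimal sketch of the self-consistency idea mentioned above; it is an illustration, not from the original comment, and `sample_answer` is a hypothetical stand-in for a real LLM call at nonzero temperature. The point is only that majority-voting over cheap, independent samples can push per-question reliability well above the single-sample rate.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Stand-in for a single stochastic LLM sample.

    A real loop would call a language model at nonzero temperature; here we
    simulate an unreliable answerer that is right 60% of the time and splits
    its errors between two wrong answers.
    """
    if random.random() < 0.6:
        return "correct"
    return random.choice(["wrong A", "wrong B"])

def self_consistent_answer(question: str, n_samples: int = 15) -> str:
    """Sample several times and return the majority-vote answer."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    trials = 1000
    wins = sum(self_consistent_answer("example question") == "correct"
               for _ in range(trials))
    print(f"single-sample accuracy ~0.60, majority-vote accuracy ~{wins / trials:.2f}")
```

Whether anything like this kind of aggregation scales to the reliability that driving demands is, of course, exactly the open question.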
