Matrice Jacobine

Student in fundamental and applied mathematics
647 karma · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments
103

Topic contributions
1

It's called online learning in AI 2027 and human-like long-term memory in IABIED.

I think it is bad faith to pretend that those who argue for near-term AGI have no idea about any of this when all the well-known cases for near-term AGI (including both AI 2027 and IABIED) name continual learning as the major breakthrough required.

I am skeptical that socioeconomic development increases animal welfare at any point. This is a bit like saying that there weren't any environmentalists before the Industrial Revolution, and there are a lot of environmentalists since then, so clearly this whole industry thing must be really good for the environment.

This seems clearly wrong. If you believe that it would take a literal Manhattan project for AI safety ($26 billion adjusting for inflation) to reduce existential risk by a mere 1% and only care about the current 8 billion people dying, then you can save a present person's life for $325, swamping any GiveWell-recommended charity.
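A minimal sketch of the arithmetic behind that figure, using only the numbers stated in the comment ($26 billion project cost, 1% absolute risk reduction, 8 billion people):

```python
# Back-of-the-envelope check of the cost-per-life claim above.
# All inputs are the comment's own stated assumptions, not established facts.
project_cost = 26e9      # dollars: inflation-adjusted Manhattan Project cost
risk_reduction = 0.01    # 1 percentage point reduction in existential risk
population = 8e9         # current people whose deaths are counted

expected_lives_saved = risk_reduction * population  # 80 million lives in expectation
cost_per_life = project_cost / expected_lives_saved

print(cost_per_life)  # 325.0 dollars per expected life saved
```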

The press on personalities like SBF and Rob Granieri certainly didn't help.

(As a datapoint, I had no idea who Rob Granieri was before reading this post, and I'm probably not the only one, because he doesn't seem to have ever been mentioned here before.)

What EAs typically fund as "global health and development" is this very low level. I am skeptical that what you say holds true at most higher levels. It seems to me that if this is your true reason, you should instead fund direct research into lab-grown meat, progress studies, or longtermism.

I think there are pretty good reasons to expect any reasonable axiology to be additive.

So, to be clear, you think that if LLMs continue to complete software engineering tasks of exponentially increasing lengths at exponentially decreasing risk of failure, then that tells us nothing about whether LLMs will reach AGI?

I expect most EAs who have enough money to consider investing it to already be investing it in index funds, which, by design, are already long the Magnificent Seven.

You could bet on shorter-term indicators, e.g. whether the METR trend will stop or accelerate.
