shlevy

@nuclearspaceheater

Here’s one: orthogonality thesis. People who disagree with you are not just dumber versions of yourself, and making them smarter without addressing their hostile values just gives those values more effective people to attack you with.

Setting aside concerns about the orthogonality thesis as applied to artificial agents, do you really think it’s reasonable to bring it to bear when the agents in question are all still basically structured by the same evolutionary and cultural pressures and the intelligence in question is nowhere near the “you can achieve any goal you could possibly pursue” threshold?

slatestarscratchpad

I agree with Shea that humans (especially same-culture humans) share enough common intuitions and goals that talking about orthogonality is kind of pushing it. If greater intelligence increases ability to pursue all goals equally, what exactly is so zero-sum that it makes up for us being better at feeding the hungry, healing the sick, treating mental illness, et cetera?

The particular thing I was talking about, having a human-friendly culture instead of an addictive bottom-of-entropy-well culture, isn’t exactly a man vs. man problem. There are some man vs. man elements, for example creating better addictive things, but overall it seems like the main problem is understanding what’s going on and coordinating to solve it.

It might also be helpful to review Hive Mind, which presents evidence that higher-IQ people are more likely to coordinate and cooperate in various game-theoretic situations - which seems to be a big part of the problem I’m talking about here.