As most of you know, there has been a recent controversy between the rationality community and some adjacent bloggers. The core argument seems to be that we have no authority to call ourselves rational, and instead we end up pushing an unsubstantiated worldview.
Scott argues that we at least try to be modest about our level of rationality.
While I don’t think the criticisms are perfectly on the mark, the response ends up ignoring the elephant in the room:
When controlling for IQ and environmental factors, are we more “rational” than the control group?
I’m not sure, and the fact that I’m not is in itself a little disconcerting.
If it were obvious that we are, and this whole schism were based on expecting us to have ten resident Elon Musks in our ranks, then I’d regard it as an unrealistic demand and think no more of it. However, we lack even one person who can hold a candle to Elon Musk; in fact, Elon won’t so much as reply to Yudkowsky on Twitter, despite their shared interests.
This is hardly the extent of our problems. We seem to be largely composed of XNTX high-IQ underachievers: people lacking the social skills, executive function, and/or long-term motivation to be successful in the world as it currently exists. I currently count myself as an example of this category.
Compounding this, upon hearing a critique like this we instinctively attempt to analyze the argument, pick apart trivial points and have a jolly good time contemplating the various ideas, staying in the comfort zone of discourse. Most of us have highly developed analytical skills and relatively little else, so I suppose it isn’t surprising that when all you have is a hammer, every problem looks like a nail.
The problem with the “look, we’re trying really hard, okay!” argument is that most of us actually aren’t. We attempt to look like we are trying. We talk about philosophy, epistemology, and cognitive biases, yet we don’t actually do shit. Quoting Miyamoto Musashi, via Yudkowsky:
“The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.”
The central idea here is “rationalists should win”, and by most measures, we really don’t. Anecdotally, we are bad enough at this that another rationalist told me in private that some people at EA meetups will think you are less competent if you identify as a rationalist.
This makes you wonder: wait a minute, why aren’t we winning?
I don’t have any satisfying answers to this question; I do, however, have a list of unsatisfying ones:
We haven’t been around for very long
LessWrong was started in 2009, and Yudkowsky had been posting on Overcoming Bias since late 2006. The only concrete, notable achievements to come out of this community of 5000+ top-tier STEM nerds are some obscure x-risk reduction non-profits and a few Patreon-supported insight bloggers.
The world isn’t fair
This has been known to nerds for as long as nerds have existed. The world doesn’t give a shit whether it is fair or not; we are explicitly meant to account for this.
We need more epistemic training
As far as I can see, we have the world’s best rationality training as a community average, yet people who are dumber, more biased, and less curious about the world do better than us, to the point that you have to wonder whether rationality and success in life are negatively correlated once you control for talent.
There are more important things than “winning”
Maybe there are, but judging by the personal complaints of community members, winning really should be high on the list of priorities, certainly higher than reading the latest piece of insight porn. The complaints in question: social loneliness (failures of community creation, friendship-making, and general social skills), romantic relationships (failures to market themselves, to identify and express their needs, to deal with emotional hangups), and personal achievement (failures to implement and apply themselves, to do what it takes to achieve their goals).
Here’s another question: what the fuck do we do with this information?
A good starting point would be to take a hard look in the mirror and then take action. Don’t compose a ten-page essay on how Tyler Cowen et al. are strawmanning us, don’t mutter some bullshit about using more Bayesian statistics; just go do the thing. Rationality is about winning, not rationalizing why we aren’t.
A few possible avenues:
Project Hufflepuff - An attempt to deal with some of the issues in the meatspace community. It is relatively new, so it remains to be seen whether it will work, but it seems to be heading in the right general direction.
Kernel project - My personal attempt to address the above problems and some other hard-to-define ones, namely that we do not have an in-person community that exists for a purpose other than socialising, and that none of the current hub locations make economic sense to live in unless you are a well-paid software engineer.
@ranma-official once summed it up nicely:
“hey guys we all like to make sane and rational decisions all the time. so let’s all move to the Bay Area so that we can pay the San Francisco rent prices. - by me, a rationalist.”

I was initially tempted to nitpick this, but it also says…
Compounding this, upon hearing a critique like this we instinctively attempt to analyze the argument, pick apart trivial points and have a jolly good time contemplating the various ideas, staying in the comfort zone of discourse.
…And I’d feel a bit silly going there :p
The general point of this seems reasonable. I like that it lists reasons we might not be doing as well as we’d like, as well as existing projects that are working on the problem. (Also, of course, CFAR is a thing. However, I understand if it wasn’t listed, since everyone already knows about it.)
Do any of my followers have more ideas to contribute on how we could be doing better?
After more than a dozen rationality meetups in Chicago this year, several of them explicitly CFAR-driven, I’m starting to think that socializing in meatspace has been regularly undervalued in the community. The community norm of long-ish, thought-out, edited online posts is good for karma generation and for insight as mental stimulus, but it’s crap for actually hashing out how you think you will win, or why you won.
You will not pick up double crux unless you are able to see each other’s faces and hear each other’s voices, and so your arguments will continue to suck. You will not notice surprise in a productive way without someone asking about a failure mode.