
I argue that you shouldn't accuse your interlocutor of being insufficiently truth-seeking. This doesn't mean you can't internally model their level of truth-seeking and use that for your own decision-making. It just means you shouldn't come out and say "I think you are being insufficiently truth-seeking".

What you should say instead

Before I explain my reasoning, I'll start with what you should say instead:

"You're wrong"

People are wrong a lot. If you think they are wrong, just say so. You should have a strong default toward this option.

"You're being intentional misleading"

For when you basically think they are lying, but maybe they technically aren't by some definitions of "lying".

What if they are being unintentionally misleading? That's usually just being wrong, so you should probably just say they are wrong. But if you really think the distinction is important, you can say they are being unintentionally misleading.

"You're lying"

For when they are lying.

You can also add your own flair to any of these options to spice things up a bit.

Why you shouldn't accuse people of being insufficiently truth-seeking

Clarity

It's not clear what you are even accusing them of. "Insufficient truth-seeking" could arguably be any of the options I mentioned above. Just be specific. If you really think what you're saying is so important and nuanced that you just need to incorporate some deep insight about truth, use the "add your own flair" option to sneak that stuff in.

Achieving your purpose in the discussion

The most common purposes you might have for engaging in the discussion, and why invoking "truth-seeking" doesn't serve them:

You want to discuss the object-level issue

You just fucked yourself, because the discussion is immediately going to center on whether they actually are insufficiently truth-seeking and whether that accusation was justified. You're going to have to gather The Fellowship, take your argument to Mordor, and throw it into the fire of NEVER GO META before you're ever going to be able to discuss the object level again.

You want to discuss your interlocutor's misconduct

You again fucked yourself because:

  1. It's not clear what misconduct you are accusing them of.
  2. Because of the ambiguity, they are going to try to make it seem like you're accusing them of more than you intended, and therefore actually it's you who isn't being truth-seeking, and you're even accusing them of that in bad faith!
  3. Because your statement is about "truth-seeking" instead of the actual misconduct, observers who agree with your interlocutor on the object level but might be sympathetic to your misconduct allegation are going to find it harder to agree with you on the meta issue. You are muddying the object-meta waters instead of tackling the meta-level issue you want to address head-on.

Conclusion

Don't accuse your interlocutor of being insufficiently truth-seeking. Just say they are wrong instead.

Comments

"Truthseeking" is a strange piece of jargon. I'm not sure what purpose it serves. It seems like the meaning of "truthseeking" ambiguates between "practicing good epistemology" and "being intellectually honest", as you describe. So, why not use one of those terms instead?

One thing that annoys me about the EA Forum (which I previously wrote about here) is that there's way too much EA Forum-specific jargon. One negative effect of this is that it makes it harder to understand what people are trying to say. Another negative effect is that it elevates a lot of interesting conjecture to the level of conventional wisdom. If you have some interesting idea in a blog post or a forum post, and then people are quick to incorporate that into the lingo, you've made that idea part of the culture, part of the conventional wisdom. And it seems like people do this too easily.

If you see someone using the term "truthseeking" on the EA Forum, then:

  1. There is no clear definition of this term anywhere that you can easily Google or search on the forum. There is a vague definition on the Effective Altruism Australia website. There is no entry for "truthseeking" in the EA Forum Wiki. The Wikipedia page for truth-seeking says, "Truth-seeking processes allow societies to examine and come to grips with past crimes and atrocities and prevent their future repetition. Truth-seeking often occurs in societies emerging from a period of prolonged conflict or authoritarian rule.[1] The most famous example to date is the South African Truth and Reconciliation Commission, although many other examples also exist."

  2. To the extent EA Forum users even have a clear definition of this term in their heads, they may be bringing along their own quirky ideas about epistemology or intellectual honesty or whatever. And are those good ideas? Who knows? Probably some are and a lot aren't. Making "truthseeking" a fundamental value and then defining "truthseeking" in your own quirky way elevates something you read on an obscure blog last year to the level of an idea that has been scrutinized and debated by a diverse array of scholars across the world for decades and stood the test of time. That's a really silly, bad way to decide which ideas are true and which are false (or dubious, or promising, or a mixed bag, or whatever).

  3. Chances are someone is using it passive-aggressively, or with the implication that they're more truthseeking than someone else. I've never seen someone say, "I wasn't being truthseeking enough and changed my approach." This kinda makes it feel like the main purpose of the word is to be passive-aggressive and act superior.

So, is this jargon anything but a complete waste of time?

Imo the biggest reason not to do this is that it's labeling the person or getting at their character. There's an implied threat that they will be dismissed out of hand because they are categorically in bad faith. It can be weaponized.

I agree. The OP is in some sense performance art on my part, where I take a proposition that I think people might generally justify with high-minded appeals to epistemology or community dynamics, and yet I give only selfish reasons for the conclusion.

At the same time, I do agree there are many altruistic reasons for the conclusion as well, such as yours. I think the specific issue with "truth-seeking" is that it has enough wiggle room that it might not necessarily be about someone's character (or at least less so than some of my alternatives). That means that, in the middle of a highly contentious discussion, people can convince themselves that using it is totally a great idea, more so than if they used something where the nature of the attack is more obvious.
