I investigated aspects of cross-cultural interactions (CCIs) in the EA community and wrote up some of my findings here (in the order I'd suggest reading them):

1. Evidence of Poor Cross-Cultural Interactions in the EA community

In this project, I investigated non-Western EAs’ perception of CCIs they had with Westerners, specifically:

  1. How often non-Westerners experienced CCI issues;
  2. What kinds of subtle acts of exclusion (SAEs) they had experienced;
  3. How their CCIs compared between EA and non-EA settings.

I collected an array of evidence from seven sources (e.g., personal anecdotes from interviews and a focus group, and statistics from three surveys conducted by others). Based on the evidence on CCIs I have collected so far, I believe that poor CCIs are likely a common but minor problem for most non-Westerners in the EA community.

If you’re interested in reading some vignettes shared by non-Western EAs, you can find them here and here.

2. Subtle Acts of Exclusion <> Microaggression and Internalised Racism

In this piece, my aim is to help readers gain a better understanding of MIR (which in my other writings I refer to as SAE). To that end, I’ve listed different types and examples of MIR, and since MIRs are easily misinterpreted, I’ve also listed some non-examples.

3. Some low-confidence takes about cross-cultural interactions between Western EAs and non-Western EAs

It’s hard to know the right way to address the experiences and examples described in the writings above. But for a select few of them, I have some higher-than-average (but still low-confidence) takes on how to improve cross-cultural interactions.

Comments



I looked at every link in this post, and the most useful one for me was this one, where you list examples of uncomfortable cross-cultural interactions from your interviewees. It was especially valuable to see all the examples together rather than just one or two.

I’m a Westerner, but I’m LGBT and a feminist, so I’m familiar with analogous social phenomena. Instances of discrimination or prejudice often have a level of ambiguity. Was that person dismissive toward me because of my identity characteristics or are they just dismissive toward everyone… or were they in a bad mood…? You form a clearer picture when you add up multiple experiences, and especially experiences from multiple people. That’s when you start to see a pattern.

As a person in an identity group that is discriminated against, sometimes you can have a weird feeling that, statistically, you know discrimination is happening, but you don’t know for sure exactly which events are discrimination and which aren’t. Some instances of discrimination are more clear — such as someone invoking a trope or cliché about your group — but any individual instance of someone talking over you, disregarding your opinion, not taking an interest in you, not giving you time to speak, and so on, is theoretically consistent with someone being generally rude or disliking you personally. Stepping back and seeing the pattern is what makes all the difference.

This might be the most important thing that people who do not experience discrimination don’t understand. Some people think that people who experience discrimination are just overly sensitive or are overreacting or are seeing malicious intent where it doesn’t exist. Since so many individual examples of discrimination or potential discrimination can be explained away as someone being generally rude, or in a bad mood, or just not liking someone personally — or whatever — it is possible to deny that discrimination exists, or at least that it exists to the extent that people are claiming.

But discerning causality in the real world is not always so clean and simple and obvious — that’s why we need clinical trials for drugs, for example — and the world of human interaction is especially complex and subtle. 

You could look at any one example on the list you gave and try to explain it away. I got the sense that your interviewees shared this sense of ambiguity. For example: "L felt uncertain about what factors contributed to that dynamic, but they suspected the difference in culture may play a part." When you see all the examples collected together, from the experiences of several different people, it is much harder to explain it all away.
