Going more meta on EA criticism

When outsiders criticize Effective Altruism, the criticism mostly revolves around the culture of the movement:

  • EA is demanding, in that it imposes an implausibly high and unfair moral burden on people, one that disrupts the normal course of their lives

  • EA is totalizing, in that EAs end up only thinking about EA and hanging out with EAs

  • EA lacks a certain vibe and aesthetic, which functions as a sort of reductio: if the vibe is off, then larger swaths of the philosophy or movement must be misguided.

Take Michael Nielsen’s Notes on Effective Altruism:

This is a really big problem for EA. When you have people taking seriously such an overarching principle, you end up with stressed, nervous people, people anxious that they are living wrongly. The correct critique of this situation isn't the one Singer makes: that it prevents them from doing the most good. The critique is that it is the wrong way to live. 

(demanding)

Similar themes are evident in Kerry Vaughan’s occasional complaints (totalizing) or in Aella’s recent thread (vibe).

I think there are obvious reasons to push back on these lines of critique: 

  • The “demanding” nature of EA can often be the result of a selection effect: naturally neurotic people are attracted to a movement that offers them clear goals, just as they are attracted to prestigious colleges or stratified careers. Also, just as some people suffer from the moral demands of EA, many others suffer from having no larger purpose. It would be worth trying to figure out which is more common.

  • The “totalizing” effect of EA seems overestimated by those outside the community. Plenty of members have well-balanced lives. And some enjoy going “all in” on a community; that’s as true for EA as it is for rock climbing. That seems fine if it works for them.

  • Reductio ad vibes seems like a suspicious line of argumentation. Lots of communities are accused of having undesirable vibes, and presumably the bar should be higher for showing that a community is actually undesirable.

But it seems worth getting a bit more meta. Why are these arguments so central in EA criticism, and even if they were true, how much should that matter?

***

When EAs criticize EA, it’s often about thinking and tactics:

  • Are EAs using good moral reasoning?

  • Are EAs correctly prioritizing causes?

  • Are EAs being rigorous in their thinking and employing good epistemic practices?

  • Is the movement employing its resources well?

This is obviously a very different flavor from the outside criticism. So which matters more: whether EAs are correctly identifying and solving some of the most important problems, or whether movement dynamics are ideal?

There’s an analogy I think is instructive here:

A firefighter is trying to put out a fire in a house that is about to erupt into flames. The firefighter would do well to hear “There isn’t actually a fire and here’s why” or “You have no idea how to fight fires” or even “You’re actually making the fire bigger.”

But to tell the firefighter:

“You shouldn’t feel like you need to put out every fire. There’s more to life than fires.”

“You’re quite extreme about this whole firefighting thing. All you do is drive back and forth from the fire station. Do you have any friends besides firefighters?”

“Well, I don’t know about the fire, but you’re being a bit cringe about this whole thing.”

These just sort of miss the mark entirely. There’s a fire! It could be the case that the firefighter should relax and enjoy life more, but this seems like a discussion worth having after the fire is out.

Is the firefighter doing something useful? This is the key question! I presume that most critics of EA don’t think there’s a fire at all, and I assume most EAs would welcome compelling criticism along those lines.

But that’s sort of the point: the object-level questions are key. Do nuclear war, biological catastrophe, or the risk from unaligned artificial intelligence represent major risks to human wellbeing in our or our children’s lifetimes? Or even beyond longtermism: is the current amount of human and animal suffering that takes place intolerable?

It’s not that I’m totally averse to the idea that “movement criticism” could take primacy over exigency. There are a couple of lines of criticism that would be obviously compelling:

  • Internal issues are so large that the movement is not able to achieve its goals

  • Groupthink dominates the movement (but even this would need to be accompanied by the proliferation of wrong beliefs)

And I’m open to arguments that it’s better to not work with any flawed group, even if they are doing good work.

But that’s not usually the tenor of the criticism. It’s usually just that EA has some movement problems, without much regard to the relevant scopes: there are a few too many stressed-out people; there’s a bit too much deferring. Yet most critics readily admit that EA is highly effective and organized; thus the need for criticism.

But it’s important to understand the burden here. You actually need to argue for why the movement problems are more significant than the object-level problems. “Preventing near-term catastrophe” fares reasonably well across most moral frameworks. So presumably it supersedes “people being a bit intense” in most cases! I’m very curious to hear arguments for why that might not be the case, but it really isn’t intuitive to me. Usually we’re willing to team up with people who kind of annoy us when the stakes are high enough.

What’s interesting about Kerry’s line of argument in particular is that he agrees with EAs on what I would consider the two most important issues: the risks of biological technology and artificial intelligence. 

I think we should find schisming over vibe in the face of massive problems quite uncompelling!

***

In a sense, I’m echoing Neel Nanda: Holy Shit, X Risk. He writes:

TL;DR If you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA. This clearly matters under most reasonable moral views and the common discussion of longtermism, future generations and other details of moral philosophy in intro materials is an unnecessary distraction.
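
To make concrete why these seemingly small probabilities carry so much weight, here is a rough back-of-the-envelope expected-value calculation (my illustration, not Neel’s; it assumes only the 1% figure from the quote and a current world population of roughly 8 billion):

$$0.01 \times 8\times10^{9} = 8\times10^{7}$$

That is, a 1% chance of an event that kills everyone alive corresponds to roughly 80 million deaths in expectation, before counting any future generations at all.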

As Neel argues, this version of EA sort of overrides even the most basic moral questions:

  • Is consequentialism right or wrong?

  • How altruistic or selfless should we be?

  • What is the value of future lives?

It looks more like: Should I avoid drunk driving?

So I don’t even think that arguments about the correctness of utilitarianism, suffering, quantification, etc., are worthwhile replacements for movement discourse. The question is the object-level one: is there a significant risk of an existential event in our lifetimes?

The world may not, in fact, look anything like the vulnerable one Neel describes. We may not be in exigent times. We may not be in the most important century. But this is clearly the most important argument to be having!

I’m sympathetic to why EAs worry about making Neel’s line their predominant pitch. Will MacAskill has given an example of why we should attempt to persuade people of our actual values, not their proximate implications:

[There’s a charity] called ScotsCare. It was set up in the 17th century after the union of England and Scotland. There were many Scots who migrated to London, and they were the indigent in London, so it made sense for it to be founded as a nonprofit ensuring that poor Scots had a livelihood, a way to feed themselves, and so on.

Is it the case in the 21st century that poor Scots in London are the biggest global problem? No, it's not. Nonetheless, ScotsCare continues to this day, over 300 years later. 

Presumably the value underlying ScotsCare was that of helping one’s fellow countrymen in need. But rather than ingraining that value into the charity, they ingrained a specific mandate: help Scots in London. Now the charity is failing to fulfill even its (far from ideal) moral value: it’s not helping the Scots in greatest need.

There’s a risk of the same thing happening to EA: If the movement became about simply preventing X-risks in our lifetimes, we would be giving up an enormous opportunity to potentially shape a better future. 

But while this might be true for the EA movement, something is clearly going wrong when we can’t focus and form broader coalitions around near-term catastrophes.

That’s the movement problem that needs to be solved.