Gatekeeping is Good and Everyone Already Agrees

Gatekeeping is one of the great sins of online life. It means something like “to exclude someone from a discussion.” But we exclude people from discussions all the time. So why is gatekeeping bad?

*** 

In its online usage, “gatekeeping” seems to have emerged in fan communities, where it described (usually male) fans of a franchise excluding (usually female) fans by imposing arbitrary knowledge requirements. Per Urban Dictionary:

When someone takes it upon themselves to decide who does or does not have access or rights to a community or identity.

Consider the example:

"Oh man, I love Harry Potter. I am such a geek!"

"Hardly. Talk to me when you're into theoretical physics."

The idea is that “geek” is a community or identity, and taking it upon oneself to decide that “loving Harry Potter” is insufficient to join that community or identity is wrong. 

Other pieces on gatekeeping are similar: Gatekeeping in gaming (Who is a real “gamer”?), the effect of TikTok on gatekeeping, whether teenage girls are cultural gatekeepers, gatekeepers who decide what food is disgusting. There’s even a large subreddit for documenting examples (though note that most of the examples are people getting mad about jokes). 

From Saturday Morning Breakfast Cereal by Zach Weinersmith

In the context of these online communities, “gatekeeping” seems quite annoying! Fan communities exist for mutual enjoyment of a hobby; they are not truth seeking in nature. Gatekeeping, in this sense and context, is discriminatory, and it’s silly because being a fan of something doesn’t and shouldn’t require memorizing arcane knowledge about that thing; it’s probably fine to let Harry Potter fans into the geek community. 

What’s weird is that this usage hasn’t been kept to fan communities. It’s leaked into broader intellectual discourse, so much so that it’s beginning to completely confuse other debates.

And it’s making discussions insane.

***

Think of it this way: We all believe gatekeeping is good in a trivial sense. If someone were hurling insults and spewing meaningless claims, we would prefer that they be excluded from the discussion. In important or high-stakes discussions, we might even think we have a duty to exclude them.

So we approve of gatekeeping.

That seems unfair: not all exclusion is gatekeeping. But that’s sort of my point: When you pack “to unjustly exclude” into the definition, you hide all the interesting claims you’re making. What is just exclusion?

My position is that the norms and questions around who should be allowed to participate in which discussion are highly complex and situational. They demand to be constantly negotiated. “Gatekeeping” is a crude concept that serves only to confuse these debates rather than clarify them.

***

Or consider it this way: Expertise has some merit in discussion. We all agree on this. Expertise might be captured in credentials, like degrees or publications, but those are only proxies. (Clearly an expert who is dismissed for a lack of credentials is dismissed wrongly.) Experts are people who actually know something about the subject at hand. Depending on the nature of the subject, we may prefer to keep the discussion more restricted to experts, in the true sense of the word.

Austere debates in math and science are obvious examples: when a non-expert who does not understand the subject disagrees with an expert, they are usually “not even wrong,” lacking even the requisite understanding to disagree. Even if they understood enough to argue, disagreeing with an epistemic superior—someone who is both better informed and better at aggregating information than you—is a hallmark sign of irrationality in the philosophy of disagreement: Why should you believe yourself over someone you know to be better informed and a better reasoner than you?

Math and science are too clear cut. Here’s a messier example: Consider the refrain from the early days of the invasion of Ukraine: “Name the 11 countries that border Ukraine”—the implicit claim being that people who couldn’t name the bordering countries didn’t deserve to opine on the invasion.

From r/gatekeeping

Is this a good discussion norm? I’m not sure; it seems like a bit of a gotcha question. Should one be forgiven for forgetting Moldova?

But it’s at least plausible that the opinion on a geopolitical conflict of someone who has no knowledge of the region is as bad as the opinion of someone with no mathematical understanding trying to argue about complex equations. It’s easy to imagine instances where we would ask someone not to comment on a country’s affairs if they didn’t know basic facts about it. The question is simply: which facts?

It’s not clear cut. But these debates—on what knowledge is requisite to participate in a debate—are the only way we can figure it out. Blanket bans on gatekeeping teach us nothing, and just make the discussion worse.

***

Another strange aspect of the gatekeeping debate is how closely it runs parallel to a debate about the value of experts. “Follow the experts,” we are told on COVID, usually rightly once the normal scientific process has sorted itself out. Certainly there might be specific failure points–reasonable people may differ on the best way to identify them–but nobody really believes that whether mRNA vaccines work is up for debate.

We have more intense forms of pro-expert discourse: There’s the discussion of “epistemic trespassing,” the briefly loved and now mostly reviled philosophical term. And isn’t Annie Lowrey’s Facts Man the great enemy of the gatekeepers?

So which is it? Should we do more to protect the status of experts? Or should we strike down the gatekeepers? Again, the “Don’t gatekeep” maxim teaches us little.

There’s an analogous debate around “platforming.” Many people–especially those who object to gatekeeping–suggest that media platforms should ban or censor certain people because of their allegedly harmful effects. Once again, we all believe in gatekeeping.

***

When we recognize this, we can discuss gatekeeping in the abstract as morally neutral. Some people might support more of it, some less, and all will disagree on when it should be used. But the notion that nobody should ever gatekeep would be seen as strange. 

This would be consistent with the first usage of “gatekeeping,” in a paper on food habits [1]:

Entering or not entering a channel and moving from one section of a channel to another is affected by a “gatekeeper.” For instance, in determining the food that enters the channel “buying” we should know whether the husband, the wife, or the maid does the buying. If it is the housewife, then the psychology of the housewife should be studied, especially her attitudes and behavior in the buying situation.

Understanding who the gatekeeper was and what motivated them could help one understand why some families ate certain foods and others didn’t. But the act of gatekeeping is implicitly understood as normal and inevitable. It was the consequences of that gatekeeping that were up for debate. It’s a small switch that would make some debates infinitely less mind-numbing.

***

One still might suspect that gatekeeping is more deeply wrong [2].

You could make that argument using Philip Tetlock’s research: Experts are very bad at making predictions. Our best forecasters are much better. This was recently confirmed in a meta-analysis from the new research consultancy Arb. So gatekeeping is bad because (even true) experts are overrated.

But the argument doesn’t really work: If expertise is good for some things and poor for others, it’s worth trying to figure out which is which. But answering those questions will only help us be better gatekeepers. We might figure out that we should always exclude experts when making predictions, and instead only allow well-calibrated forecasters through our gates.

Or perhaps you believe in competitive markets: Always give people more options, and let consumer choice sort it out. So more voices are always better in the market for ideas. There are a lot of reasons one might object to this point–it assumes an awfully competitive market for truth, and there’s the potential problem of choice fatigue–but there’s a more fundamental problem: gatekeepers themselves are competing. Some groups and communities will have very little curation in their discussions and some will have a lot. Consumers choose which they prefer. One can’t object to gatekeeping for competition’s sake, because gatekeeping is one of the dimensions on which communities compete.

***

All that is just to say: It’s time to stop worrying about gatekeeping and start worrying about how to be a good gatekeeper.

[1] I don’t believe that etymology has a special linguistic significance. I just note it here as a point of comparison. 

[2] In a funny way, the position sort of horseshoes around to something like “free speech absolutism.”

Going more meta on EA criticism

When outsiders criticize Effective Altruism, the criticism mostly revolves around the culture and movement:

  • EA is demanding, in that it imposes an implausible and unfairly high moral burden on people that disrupts the normal pace of their lives

  • EA is totalizing, in that EAs end up only thinking about EA and hanging out with EAs

  • EA lacks a certain vibe and aesthetic, which functions as a sort of reductio, implying that larger swaths of the philosophy or movement are misguided.

Take Michael Nielsen’s Notes on Effective Altruism:  

This is a really big problem for EA. When you have people taking seriously such an overarching principle, you end up with stressed, nervous people, people anxious that they are living wrongly. The correct critique of this situation isn't the one Singer makes: that it prevents them from doing the most good. The critique is that it is the wrong way to live. 

(demanding)

Similar themes are evident in Kerry Vaughan’s occasional complaints (totalizing) or in Aella’s recent thread (vibe).

I think there are obvious reasons to push back on these lines of critique: 

  • The “demanding” nature of EA can often be the result of a selection effect: people who are naturally neurotic are attracted to a movement that offers them clear goals, just as they are attracted to prestigious colleges or stratified careers. Also, just as people suffer from the moral demands of EA, so too do many suffer from having no larger purpose. It would be worth trying to figure out which is more common.

  • The “totalizing” effect of EA seems overestimated by those outside the community. Plenty of members have well-balanced lives. And some enjoy going “all in” on a community–it’s as true for EA as it is for rock climbing. That seems fine if it works for them.

  • Reductio ad vibes seems like a suspicious line of argumentation. Lots of communities are accused of having undesirable vibes, and presumably the bar should be higher for showing that a community is actually undesirable.

But it seems worth getting a bit more meta. Why are these arguments so central in EA criticism, and even if they were true, how much should that matter?

***

When EAs criticize EA, it’s often about thinking and tactics:

  • Are EAs using good moral reasoning?

  • Are EAs correctly prioritizing causes?

  • Are EAs being rigorous in their thinking and employing good epistemic practices?

  • Is the movement employing its resources well?

This is obviously a very different flavor than the outside criticism. So which matters more: Whether EAs are correctly identifying and solving some of the most important problems, or whether movement dynamics are ideal?

There’s an analogy I think is instructive here:

A fireman is trying to put out a fire in a house that is about to erupt into flames. The fireman would do well to hear “There isn’t actually a fire and here’s why” or “You have no idea how to fight fires” or even “You’re actually making the fire bigger.”

But to tell the fireman:

“You shouldn’t feel like you need to put out every fire. There’s more to life than fires”

“You’re quite extreme about this whole firefighting thing. All you do is drive back and forth from the fire station. Do you have any friends besides firefighters?”

“Well I don’t know about the fire but you’re being a bit cringe about this whole thing” 

These just sort of miss the mark entirely. There’s a fire! It could be the case that the fireman should relax and enjoy life more, but this seems like a discussion worth having after the fire is out. 

Is the firefighter doing something useful? This is the key question! I presume that most critics of EAs don’t think there’s a fire at all–and I assume most EAs would welcome compelling criticism along these lines. 

But that’s sort of the point: the object level questions are key. Do nuclear war, biological catastrophe, or the risk from unaligned artificial intelligence represent major risks to human wellbeing in our or our children’s lifetimes? Or even beyond longtermism: Is the current amount of human and animal suffering that takes place intolerable? 

It’s not that I’m totally averse to the idea that “movement criticism” could take primacy over exigency. There are a couple of lines of criticism that would be obviously compelling:

  • Internal issues are so large that the movement is not able to achieve its goals

  • Groupthink dominates the movement (though even this would need to be accompanied by the proliferation of wrong beliefs)

And I’m open to arguments that it’s better to not work with any flawed group, even if they are doing good work.

But that’s not usually the tenor of the criticism. It’s usually just that EA has some movement problems, without much regard to the relevant scopes: there are a few too many stressed-out people; there’s a bit too much deferring. Yet most critics readily admit that EA is highly effective and organized—thus the need for criticism.

But it’s important to understand the burden here. You actually need to argue for why the movement problems are more significant than the object-level problems. “Preventing near-term catastrophe” fares reasonably well across most moral frameworks. So presumably it supersedes “people being a bit intense” in most cases! I’m very curious to hear arguments for why that might not be the case, but it really isn’t intuitive to me. Usually we’re willing to team up with people who kind of annoy us when the stakes are high enough.

What’s interesting about Kerry’s line of argument in particular is that he agrees with EAs on what I would consider the two most important issues: the risks of biological technology and artificial intelligence. 

I think we should find schisming over vibe in the face of massive problems quite uncompelling!

***

In a sense, I’m echoing Neel Nanda’s Holy Shit, X Risk. He writes:

TL;DR If you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA. This clearly matters under most reasonable moral views and the common discussion of longtermism, future generations and other details of moral philosophy in intro materials is an unnecessary distraction.

As Neel argues, this version of EA sort of overrides even the most basic moral questions:

  • Is consequentialism right or wrong?

  • How altruistic or selfless should we be?

  • What is the value of future lives?

It looks more like: Should I avoid drunk driving?

So I don’t even think that arguments about the correctness of utilitarianism, suffering, quantification, etc., are worthwhile replacements for movement discourse. The question is the object-level one: is there a significant risk of an existential event in our lifetimes?

The world may not, in fact, look anything like the vulnerable one Neel describes. We may not be in exigent times. We may not be in the most important century. But this is clearly the most important argument to be having!

I’m sympathetic to why EAs worry about making Neel’s line their predominant pitch. Will MacAskill has given an example of why we should attempt to persuade people of our actual values, not their proximate implications:

[There’s a charity] called ScotsCare. It was set up in the 17th century after the union of England and Scotland. There were many Scots who migrated to London, and we were the indigent in London, so it made sense for it to be founded as a nonprofit ensuring that poor Scots had a livelihood, a way to feed themselves, and so on. 

Is it the case in the 21st century that poor Scots in London are the biggest global problem? No, it's not. Nonetheless, ScotsCare continues to this day, over 300 years later. 

Presumably the value underlying ScotsCare was helping one’s fellow countrymen in need. But rather than ingraining that value into the charity, its founders ingrained a specific mandate: help Scots in London. Now the charity is failing to fulfill even its (far from ideal) moral value: it’s not helping the Scots in greatest need.

There’s a risk of the same thing happening to EA: If the movement became about simply preventing X-risks in our lifetimes, we would be giving up an enormous opportunity to potentially shape a better future. 

But while this might be true for the EA movement, something is clearly going wrong when we can’t focus and form broader coalitions around near term catastrophes. 

That’s the movement problem that needs to be solved.

PASTA and Progress: The great irony

Epistemic status: low

A foremost goal of the Progress community is to accelerate progress. Part of that involves researching the inputs of progress; another part involves advocating for policies that promote progress.  A few of the common policy proposals include:

  • Fixing the housing supply problem

  • Improving and increasing research and development spending

  • Increasing immigration

  • Repealing excessive regulations, particularly in the energy sector

All of these would be very good and I support them. At the same time, any attempt to increase growth runs against a number of headwinds:

  • The US and other Western governments appear to be deeply sclerotic, leading to regulatory bloat and perhaps making change difficult

  • Population growth is collapsing in the US, due to both fewer births and less immigration. Under most growth models, people are the key source of new ideas.

  • Good ideas are (likely) getting harder to find. Growth on the frontier may simply get harder as we pick “low hanging fruit,” though obviously this is often debated.

The US has grown at 2.7% on average since the Reagan administration. The last 10 years have been more disappointing, at less than 2%. What could a successful Progress movement accomplish? Raising the rate to 2.5%? To 4%?
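To give a sense of what those numbers mean in practice, here’s a minimal back-of-the-envelope sketch (the 30-year horizon and the exact figures are my own illustration, not the author’s) of how compounding amplifies small differences in the growth rate:

```python
# Illustrative only: how much cumulative difference small changes
# in the annual growth rate make over a 30-year horizon.

def cumulative_growth(rate: float, years: int) -> float:
    """Multiple by which output grows at a constant annual rate."""
    return (1 + rate) ** years

for rate in (0.02, 0.025, 0.04):
    print(f"{rate:.1%} for 30 years -> output x{cumulative_growth(rate, 30):.2f}")

# 2.0% for 30 years -> output x1.81
# 2.5% for 30 years -> output x2.10
# 4.0% for 30 years -> output x3.24
```

Even the jump from 2% to 2.5% compounds into a meaningfully richer economy over a generation, which is why the question of where a Progress movement could plausibly move the rate matters so much.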

I should emphasize that I admire all of the policy and research currently being done by advocates of progress. But usually we approach Progress from the frame of the Great Stagnation: We used to grow quickly, then something happened around 1971, and now we grow slowly. But I wonder if we should also be considering different world views of where we stand in relation to the future.

I’m particularly interested in the view that we’re living in the Most Important Century. In this view, we are nearing a breakthrough that could overcome the headwinds of population decline and the ever more difficult search for new ideas: knowledge production via automation. 

Holden Karnofsky calls this AI system PASTA: Process for Automating Scientific and Technological Advancement. If PASTA or something similar were created, we might enter a period of increasing growth that would quickly usher in a radically different future. 

It may sound a bit far-fetched, but there hasn’t been a devastating argument made against it. Science sounds like something that would be hard to automate, but AI isn’t progressing the way we expected; rather than slowly working its way up from low-skilled to high-skilled labor, as was often anticipated, AI seems to be on a crash course with creative professions like writing (GPT systems) and now illustration (DALL-E). Machine learning is all about training by trial and error without precise instruction. And as impressive as current models are, they aren’t even 1% as big as human brains. But that will quickly change as computing power becomes cheaper (more on AI and bioanchors here).

Plus, when have friends of progress been averse to sci-fi-sounding futures?

If this seems compelling, Karnofsky’s post on PASTA (and the rest of the Most Important Century series) discusses these scenarios in much more detail. 

So should we just build PASTA and reap the rewards of Progress? No–more likely we should be extremely worried. There are serious risks from misaligned artificial intelligence, which could pose a threat to human civilization, and there are possibly also risks from humans colonizing the galaxy without sufficient ethical reflection on how to do so responsibly.

So we’re caught in a funny place: a lot of proximate growth goals look good but not world changing. And the “big bet” may be a suicide mission. I’m not sure what to make of all of this. The implication might simply be to work in AI alignment and policy. I think at a bare minimum it’s worth us being more curious about these discussions. 

There’s a big irony here: As pessimistic as EAs are about AI trajectories, they see the possibility of, in Karnofsky’s words, “a stable, galaxy wide civilization.” Wouldn’t it be silly if we were working on NSF spending when the takeoff began?

The Cheems Heuristic

Increasing agency is a classic chicken or egg problem: If you don’t have agency, how will you take control to increase your agency? If you already have the ability to become more agentic, perhaps you were never so bereft of agency in the first place.

So where does one begin?

It could be the case that a lack of agency is a deep problem. Maybe it’s genetic, or maybe agency has to be inculcated in children during their formative years, or maybe people develop ingrained mental barriers that limit their agency.

But I don’t really think any of that is true. I think it’s probably more similar to a bad attitude, or is even just a set of wrong beliefs. For example:

So maybe if people simply knew more bits of useful advice like these they would have more agency. We could just teach them, as many on Twitter do.

But I still suspect that lacking agency runs a bit deeper than that. There’s something more that prevents people from simply saying: “Oh, I can just email startups I want to work at? I’ll just do that.” That doesn’t sound like something someone who was just lacking agency would say.

I think that deeper problem is the Cheems Mindset. 

As Jeremy Driver recently wrote:

Broadly, personal cheems mindset is the reflexive decision for an individual to choose inaction over action, in particular finding reasons not to do things which have either high expected value, or a huge upside with very little downside risk. 

So how to escape it? Michael Story responded to Jeremy with my favorite piece of advice:

But again, what if you don’t already have the right friends, nor the agency to find them? I have a simple proposal: Just ask yourself, what would be the anti-cheems thing to do? Then always do that.

That way, next time you hear about a way to move more adeptly in the world—that you will likely get a response to your cold email—you won’t just think “Someone could do that.” You’ll think: I will do that. Because that’s the anti-cheems thing to do.

Tyler Cowen and the Lamplight Model of Talent Curation

Tony Kulesa writes that Tyler Cowen is “the best curator of talent in the world” and describes four components of Cowen’s approach that he thinks make Cowen’s success possible. The first is the most interesting:

Distribution: Tyler promotes the opportunity in such a way that the talent level of the application pool is extraordinarily high and the people who apply are uniquely earnest.

I think there’s a larger theory of talent curation here, which helps shed more light on how Cowen has done what he’s done. 

In my view, there are two fundamental and overlapping ways to curate talent. The first is the way we usually think about it. Everyone knows Harvard University is the best school in the world, so everyone applies to Harvard. There’s some implicit curation—don’t waste the $75 application fee if you have no chance of getting in—but so long as you have some chance it’s not a bad bet. So Harvard has a good brand that attracts everyone they would potentially want to enroll and many (20x) more.

The problem is that Harvard’s image and brand have no way to discern between the types of good students: the ones simply looking for a high-paying gig and the ones that are going to invent nuclear fusion. So to the extent Harvard cares about such differences at all, they have to find the right students through filtration. We can call this the Filtration Model of talent acquisition: Cast a wide net, and use internal mechanisms to find the talent. The success of their filtration depends on Harvard’s admissions department being big and good at figuring out which students are the ones they want.

This is where Kulesa’s notion of distribution comes into play. Cowen, intentionally and unintentionally, has taken a different approach than Harvard. Let’s call it the Lamplight Model. As opposed to the Filtration Model, the Lamplight Model seeks to do most of the filtration externally: Cast a narrow net, but one that catches the people you want. 

Cowen’s intellectual presence is representative of this approach. Marginal Revolution rewards careful readers, ones who can read between the lines of posts and notice recurring themes. Cowen doesn’t engage with typically viral or trendy topics, and when he does, his angle is often diametric to the popular discussion. Linked posts are often obscure, especially for a blog ostensibly about economics. But they begin making sense when you think about the world the way Cowen does: How does this new piece of information change your model of the world? What are the underlying incentives? Who gains status and who loses it?

So Marginal Revolution pushes away many potential readers but cultivates a deep loyalty among readers who see and think more like Cowen. He can convert some of those readers—usually young and ambitious ones—into Emergent Ventures (EV) recipients.

The structure of Emergent Ventures itself depends on the Lamplight Model: Cowen reviews every application himself, so receiving 40,000 applications a year (as Harvard does) simply isn’t feasible. He needs to use the Lamplight Model to do a lot of his filtering for him.

The Lamplight Model shows up in the data as higher acceptance rates. Kulesa writes:

It isn’t just a matter of more elite selection. In fact, Emergent Ventures has a higher acceptance rate than elite colleges. In May 2020, Tyler reported in an interview with Tim Ferriss that the award rate is ~10%. For comparison, the 2021 acceptance rates of Harvard, Princeton, and Yale were 5%, 6%, and 7%. It also isn’t a wider pool. At that time, he had only ~800 total applications since 2018.

Obviously, the 10% acceptance rate suggests that there is substantial filtering going on too—it’s always going to be a bit of both—but more of the work is happening externally. 

You can see a similar system working for the Thiel Fellowship, as described in a post on Strange Loop Canon. Thiel’s program immediately ticked off much of respectable society and thus drew little interest from conformists and status-seeking students. Instead, it attracted weirdos, the very people Thiel wanted to attract. It repelled the wrong people and attracted the right people. There are a number of familiar names even in the first cohort, including Laura Deming and Nick Cammarata. Vitalik Buterin was in the fourth cohort, in 2014.

YCombinator seems to have had a similar effect in their first cohort. Per Paul Graham:

That first batch included reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron Swartz, who had already helped write the RSS spec and would a few years later become a martyr for open access, and Sam Altman, who would later become the second president of YC.

Graham seems to identify the same effect I describe here:

I don't think it was entirely luck that the first batch was so good. You had to be pretty bold to sign up for a weird thing like the Summer Founders Program instead of a summer job at a legit place like Microsoft or Goldman Sachs.

Again, we see the power of not just attracting the right people but also repelling the wrong people.

This leads to interesting conclusions about publicity: Word of mouth is very good, as it generally means the desired group is attracting more people similar to themselves. But large-scale publicity, features in mass media outlets, and the like could be actively harmful to the program.

All of this suggests a difficult problem. Cowen’s reach seems to have been growing recently. This is good for the world, as Cowen is an important intellectual. It may even be good for Emergent Ventures initially, as its reach and network expand to an optimum level of word-of-mouth transmission. But if Emergent Ventures becomes a status symbol or meritocratic badge for non-weirdos, it could break the secret mechanisms of EV, just as minting billionaires faster than any VC could dream of might imperil the Thiel Fellowship. I don’t have a strong opinion on YCombinator, but the common opinion seems to be that it’s fallen victim to this problem, now just another career step for FAANG employees.

There are a few obvious ways to prevent this. Cowen could become less legible to non-weirdos. Or he could become more offensive to them. Both of these have obvious downsides as well, though. He could also hire an excellent admissions department and change the curation model. So it will be interesting to see whether EV is able to keep its track record and if other similar organizations that are beginning to pop up can emulate its success.

With the announcement of MothMinds, we might be entering a golden age of micro-granting programs. We’ll be able to see how these programs compare: the ones that are formal, with many people filtering applicants, versus the ones that are one person’s side project, where the program’s success depends on their Lamplight Effect. And someday, we will get to see who comes out of which.

Sane Charity

In Doing Two Things at Once, I wrote about the propensity towards hackneyed charity schemes: people just turning two things they like into a “charitable cause.” But the underlying issue is much more systemic.

Jason Crawford, in a discussion of funding models, compares non-profits and for-profits in terms of “metabolisms.” In a for-profit entity, there is a clear feedback loop: Profits (from sales) allow a business to generate returns for investors, who, in turn, provide capital to the company. The for-profit uses that capital, at least in part, to create products for customers. Customers buy those products to create sales for the company.

In a non-profit organization, there is a missing part of the product loop: There is usually no way that the beneficiaries of charity can return feedback to the non-profit and its donors. Obviously there’s been a big push for charities to collect data, but the data is only directionally connected to impact (it isn’t perfectly accurate) and that’s only somewhat connected to contributions (some donors will donate or not donate regardless of what the data shows). It’s nowhere near the tight signal that sales are. 

In an ideal world, the amount of contributions from donors to an organization would reflect the value that beneficiaries received from the organization. But we know that isn’t the case, because satisfying beneficiaries is rarely the sole goal of donors. Where investors in for-profits relentlessly pursue monetary returns, non-profit donors pursue other things like self-worth and status. That’s why people are allowed to pair two random things and call it a charity. Charity need not fit the needs of the beneficiaries, just the interests of donors.

I don’t consider this an Effective Altruist critique either. EA focuses on how we can do the most good. Part of that is by serving the needs of beneficiaries, but EA also raises the question of who the beneficiaries of charitable actions should be. Since EA looks to have the most impact, it would typically prefer to do health and poverty work in developing countries over developed ones, as the marginal dollar is able to do more good in the former. That sort of reasoning isn’t required for my argument here: Even if a charity is trying to help people who aren’t destitute (say, poets in Brooklyn), we should still be better attuned to what those beneficiaries actually want if we are to call our actions charity at all. 

There’s a classic essay on this subject, Jeffrey Friedman’s There is No Substitute for Profit and Loss. He focuses on the specific role of profits to push the point I’ve been making even further. It’s not simply that profit signals are a little bit better than an alternative feedback method like good data collection. Profit is what tells us whether any proxy—data, metrics, mission statements, management styles—is achieving results. We can’t use secondary metrics to deduce a magic formula which implies profitability, because it is profitability itself that guides the entrepreneur to serve their customers.

There are some new structures we could try to make charity better: We could create an organization that pays non-profits for making their beneficiaries better off. Such an organization could try to make donations function more like profits by distributing them in proportion to the value a non-profit creates for its beneficiaries. Perhaps such an organization would allow you to choose a group of beneficiaries you want to help and then donate accordingly. But the “accordingly” is the hard part. Do you survey people and ask what they valued? The information is inevitably of lower quality than the preferences they reveal through spending. Indeed, in Friedman’s terms, profits would be signaling to us the quality of our survey data: Do answers match revealed preferences?

Why not instead just give money directly? Or perhaps start a grant program? In those cases you could still take advantage of the corrective power of profits.

There can be an advantage to charity that isn’t tied to the desires of beneficiaries: Some things do more good than people value them at. Insecticide-treated malaria nets are the perfect example. Someone may place little value on a net, perhaps due to their time preference, but using a net would do a lot of good for recipients, and nets have positive externalities. So ignoring profits might be good for things with positive externalities, as they would otherwise be naturally undersupplied.

At the same time, the median charity scheme is probably creating much less value than that, so direct cash transfers likely do good on most margins. 

But this is all cheating. The reason bad nonprofits exist is that people have already shown they don’t care about the wellbeing of beneficiaries. I’m optimistic enough to believe that if my theoretical meta-charity made it clear that this was the place to give if you really cared about a certain group, some people would donate to it instead. But it still wouldn’t fix the underlying problem. After all, an organization with similar methods already exists: It’s called GiveWell + Open Phil.

In the end, it’s Hansonian. Charity, by and large, isn’t about charity.

Doing Two Things at Once

I have some nostalgia for old-school Effective Altruism, back when more of the discussion was around getting everyone to donate 10% of their earnings to Malaria Consortium or GiveDirectly [1]. 

I don’t miss old-EA just because I think those global health causes remain incredibly important (though they are). I miss the promise of a more sane world. Charity is a world, free from our usual sanity-forcing constraints of profit and loss, that can be especially insane. Consider this video which recently circulated of conceptual art classes for Afghan women: 

There’s a lot that could be said about the implicit politics of this video (many have had takes). But even if someone were basically sympathetic to this work, I can’t imagine anyone making the case that this was the most effective use of charitable resources. The classes clearly exist not because of a widespread demand among Afghan women for conceptual art classes over, say, improved healthcare for their children, but because of something else.

The classes are symptomatic of a broader tendency in philanthropy: randomly pairing two things. Usually, the pairing is two personal interests, in this case, Afghan women and conceptual art, but it’s far from the only example. 

Some of these are obviously contrived and unforgivable: giving musical instruments to people with food and housing insecurity; using art to end climate change. Others seem like someone just got a bit too clever for their own good. The famous example from Will MacAskill’s Doing Good Better is PlayPumps, merry-go-rounds that were supposed to pump water (they did not work well).

Sometimes these ideas sound more plausible but could be dismissed by even a cursory look at scale. Consider decarbonization initiatives in Africa, despite the fact that Africa produces a tiny share of overall carbon emissions.

Or consider anti-waste initiatives in the US and Europe, ostensibly about preventing marine pollution, despite the fact that the amount of marine waste those regions emit is negligible.

EA organizations like GiveWell discovered a solution for this: check whether a charity is impactful before donating to it. No matter how many “takedowns” of EA I read, I always come back to this: “Would you rather have people donating to something completely useless?”

I don’t mean to imply that there’s never a reason to skip up Maslow’s Hierarchy, that water must come before food and food before shelter. Refusing to help with anything but the bare necessities could constitute its own form of paternalism. Instead, it’s a matter of caring enough about the people you are trying to help to ask what they want, and caring enough about the problem you are trying to solve to actually figure out what it would take to make an impact on it.

So even if everyone isn’t going to become an EA, we might all consider two lesser mandates: don’t simply pair random things, and try to think a bit about what the person you’re trying to help actually wants and needs. Charitable giving is 2% of GDP, and we should probably try to make it not useless. [2]

[1] EA is better off focused on longtermism, all things considered.

[2] If EA is going to concentrate its intellectual energy on longtermism, I’m keen to see who picks up this project.

The Crux: Collaboration

Jason Crawford asks, “What’s the crux between EA and progress studies?” This is the final part of a short series of posts on the question. See Part 1 and Part 2.

So why should Effective Altruists and the Progress Community work together?

Collaboration is mutually beneficial. As Ben Todd explained in his 2021 EA Global speech, EA organizations often hire from outside EA. There are plenty of people interested in global health and animal welfare who don’t have EA philosophical commitments, but they do just as well in their roles as any EA would. So EA should generally direct people to do things that only EAs would or could do, like AI alignment or EA movement building. But that’s not the case with every area.

The Progress Community should be thinking along the same lines: What are the things that only people invested in human progress are willing to do? B2B SaaS companies might drive human progress, but there’s no reason the Progress Community should direct talent there, as plenty of just-as-talented people are able to do the same work. Collaboration allows groups to make use of their comparative advantages.

“5/5” engagement in EA by cause area. From Todd, EAG 2021. Certain cause areas demand more EA engagement than others.

But what should the Progress Community be focusing on? There’s a bit of a contradiction between my last two posts. In the first, I criticized the Progress Community for not guiding the actions of community members toward doing things that drive Progress. In the second, I criticized the Progress Community’s inability to respond to certain lines of argument from EA.

There’s probably a good case that Progress should just be a community of “doers” and leave the theorizing to EA. But there are some difficult theoretical questions at play so I’m inclined to think we should do both.

Doing, Revisited

Ben Todd writes that effective altruism needs more megaprojects. 

But there’s been a major constraint in EA: a lack of entrepreneurial talent dedicated to creating and scaling new organizations. At the risk of undue speculation, I strongly suspect that Progress has had more pull with founder and entrepreneur types. In that spirit, let’s look at some megaprojects that would make for ideal areas of collaboration. 

Energy, the Holy Grail of Progress

If I were to chip in on “What’s the ultimate project for Progress?”, my vote right now would be energy too cheap to meter. Progress Studies exists, in large part, as a response to the Great Stagnation, and there’s a reasonably strong case that its direct cause was a slowdown in energy production.

The Henry Adams curve represents the tendency of energy use to grow 7% year over year from around 1800. The trend broke in the early 1970s (WTF happened in 1971?), when it began to flatline. Virtually everything we care about is correlated with more energy. Getting back on the curve is imperative for progress to continue.

From J. Storrs Hall’s Where Is My Flying Car? Hall attributes the leveling off to nuclear regulation.

Will MacAskill has called investment in clean energy the “GiveDirectly of longtermism.” That is to say, it’s the baseline investment against which all others should be measured. That’s because clean energy’s benefits are threefold: it mitigates the harms of climate change; it betters people’s lives; and it preserves existing fossil fuel resources with which we could reindustrialize after a civilizational collapse.

Virtually everyone understands that the future of energy will need to be renewable. So EAs and the Progress Community should work on green energy too cheap to meter[1]. The Progress Community should be leading revolutionary projects and EA should be directing talent and resources towards them. 

Biorisk

The other obvious area for collaboration is biotech. It’s a cliché by now to point out that we seem to be in the beginnings of a massive revolution in biology. These new technologies will have clear applications to biorisk: early detection of pathogens, rapid vaccine development, antivirals, better PPE. There would also be shared gains from FDA reforms. Both progress and anti-risk measures would benefit from human challenge trials. Something like Fast Grants is a perfect example of EA-Progress crossover.

Fertility

There’s an emerging view in EA known as Aschenbrennerism (Leopold Aschenbrenner). The basic idea is that AGI timelines may be long and birthrates are quickly declining. Population growth is a major (perhaps the foremost) driver of economic growth, so population stagnation and decline could lead to a world with little to no growth. If that happens, it would extend our time of perils and lower the chance of human survival. Thus, Aschenbrennerism emphasizes raising fertility rates, lengthening productive years, and considering gene therapies and enhancements as top priorities.

These are excellent areas for collaboration. Fertility technologies, including artificial wombs, improved egg freezing, and better formula, are all areas where we need more entrepreneurship. They’ve also historically received less investment, plausibly because of gender bias in medicine. So it’s a very exciting space.

Immortality, the Black Swan Candidate

Balaji Srinivasan makes a strange yet intriguing argument that ending death is the tautological endgame of technology: technology has the proximate goal of ending scarcity, and mortality is the ultimate cause of scarcity. 

There are plausible EA and Progress cases for immortality. On EA terms, death is a massive opportunity cost. We also suffer from our own fear of death and from the deaths of our loved ones. On Aschenbrennerian terms, more healthy people bring more growth. That’s a good case for immortality as Progress too, though it may miss the fundamental point: Ending mortality might just be, as Srinivasan argues, definitional to progress. 

Even extending healthy life spans without ending mortality could be an important fertility technology: Parents may prefer having children in their late 40s if they could expect to live long enough to see them married. 

Life extension has been investigated as an EA cause (see here, for example), and the debate seems to be ongoing. I’m currently agnostic about the matter. But the Progress Community should clearly be directing people to work on life extension and build out institutions to support it. If EA becomes more interested in the future, the area should be well-prepared to absorb more money. 

Even more…

This list is far from complete. There are a number of for-profit companies that have made big improvements in traditional EA areas. Beyond Meat and Impossible Foods are making it easier for people to reduce the amount of meat they eat. Progress in lab-grown meat has been slower than some expected, but it still could be the most important animal welfare victory ever.

Even fintech companies have made important contributions to global poverty work. Sendwave makes it easier to transfer money to Africa and Asia, allowing immigrants to send remittances to their families more cheaply and easily. This will allow billions more dollars per year to go to families rather than to Western Union. They have already done a huge amount of good.

As we all know by now, biotech companies are incredibly helpful in global health work. The pharmaceutical company GlaxoSmithKline, working in partnership with the Walter Reed Army Institute of Research, recently had their malaria vaccine approved by the WHO. If the rollout continues to go well, it could redefine effective global health work. A number of other pharmaceutical companies are now working on universal vaccines, which would be one of the biggest public health victories ever. 

It shouldn’t be surprising that there’s so much overlap. Companies exist to solve problems. Sometimes they solve problems traditionally addressed by charities. For me, these crossovers deflate the EA-Progress conflict. Many of the goals of the wider EA community require technological solutions.

Thinking, Revisited

I propose that we have one Progress-oriented organization (or team within an EA organization) that researches longtermism with special attention to the concerns raised by EA.

There is a substantial chance that such an organization would conclude EA problems would be best solved by EA means, but both communities value Red Teaming as good epistemic practice, so why not see what a Progress-oriented team criticizing EA could discover? Researchers could tackle a number of questions, including a few that I raised in the last piece:

  • How can we develop better research norms for risky technologies like AI and Gain of Function?

  • How can we better understand technological trajectories? How should we think about the longterm relationship between offensive and defensive technology? What does Progress have to say about Nick Bostrom’s black balls?

  • How can we use progress (wealth + better institutions) to quickly address x-risks as they emerge?

There are a number of areas where Progress-style thinking could have corrected EA errors more quickly. The most obvious example is the time it took EA to appreciate the importance of economic growth for global health and poverty, as Cowen described. There’s also the case of natalism: There were some rumblings in the past that EAs shouldn’t have children, as it would mean money not going to EA causes. Some even went as far as to argue that population growth should be generally regarded as negative [2]. These arguments have been rightly rejected, but the influence of Aschenbrennerism, and Growth-oriented thinking more broadly, could have ended these discussions more quickly.

The future

We now have two new Grand Narratives for this century. The first is about the time of perils:

The second is about the end of the Great Stagnation and a new roaring 20s. As compiled by Caleb Watney:

In either account, these are very exciting and important times. I predict that the most interesting developments will come from people invested in the future—both optimistic and worried—working together to solve problems. This may be a trite place to end this series, but I’m much more excited to witness these developments than I am to continue pondering the exact philosophical relationship between EA and Progress.

[1] The obvious objection here is that the field is not neglected. I’m agnostic on this at the moment. I would suspect there are at least big gains in shaping the regulatory environment so that we can efficiently roll out fusion or modular nuclear reactors. It’s also plausible that there are dangers with energy being too cheap, as nefarious actors could use it for destructive purposes. 

[2] Anti-natalism, as far as I know, has never been institutionalized in EA. There’ve been some reports with anti-natalist viewpoints, but I won’t be linking them here as they have been rightly rejected by the community. 

The Crux: Thinking

Jason Crawford asks, “What’s the crux between EA and progress studies?” This is the second in a series of posts on the question from a few different angles. See Part 1.

I have some apprehensions in writing this post, mostly because it would be easy to tell a great story of a grand philosophical clash of ideas: Progress advocates driving the gears of history forward on one side, and Effective Altruists desperately trying to grind them to a halt on the other. But a story like that just wouldn’t be true to the actual disagreements at hand.

Telling the story of a grand conflict wouldn’t speak to the diversity of opinions in each community. It wouldn’t acknowledge the humility on both sides. It wouldn’t acknowledge that there has been too little engagement between EA and Progress to flesh out core disagreements. And it wouldn’t acknowledge the degree to which matters of mood and emphasis blend into the “truly” philosophical disagreements. So I want to write this post to acknowledge these realities while trying to draw out some current points of disagreement.

So first, do they disagree at all?

Mike McCormick likes to quip that “Progress is EA for neoliberals.” And I think that there is a lot of truth to this! To a certain extent, Progress Studies and EA are different moods, one that feels libertarian and another that feels left-leaning [1]. This is especially evident when one considers that early EA efforts were highly focused on redistribution and economic growth was essentially non-existent in the discussion. Per Cowen:

Not too long before he died, [Derek] Parfit gave a talk. I think it’s still on YouTube. I think it was at Oxford. It was on effective altruism. He spoke maybe for 90 minutes, and he never once mentioned economic growth, never talked about gains in emerging economies, never mentioned China.

I’m not sure he said anything in the talk that was wrong, but that omission strikes me as so badly wrong that the whole talk was misleading. It was all about redistribution, which I think has a role, but economic growth is much better when you can get it. So, not knowing enough about some of the social sciences and seeing the import of growth is where he was most wrong.

Based on the political demography of EA, we should expect some persistence in this neglect of economic growth [2]. According to the 2019 EA survey, 72% of respondents affiliated with the Left or Center-Left politically.

I’m not sure the Progress Community would look that different, but I do think Growth is clearly more salient for them.

These moods lead to differences in emphasis. Cowen, in his discussion with Rob Wiblin, mentions that earlier drafts of Stubborn Attachments had more discussion of existential risks, but he took them out because he thought growth had become the more underrated part of the discussion.

So the disagreement between Progress and EA might be more like Progress advocates reminding EAs about economic growth. Or in McCormick’s words, to be a bit more neoliberal. And EA might just be reminding them not to cause the apocalypse. Which one you choose might end up just being a matter of your mood affiliation. 

Asking the big question

If this were 2014, I would say the above characterization was pretty fair (though Progress Studies would not have formally existed). But since then, the EA community has become increasingly focused on longtermism, a cluster of theories that emphasize the importance of wellbeing not just for sentient creatures that exist today but also for those that will exist in the future. Will MacAskill informally defines it as “the view that positively influencing the longterm future is a key moral priority of our time.”

This transition has increased EA interest in existential risk (XR, x-risk), catastrophic risk, and different future scenarios. It’s also led to a debate about whether we’re in a unique “time of perils,” where risks are unusually high. It’s this instantiation of EA that Progress has some natural tension with. 

The problem with turning this into a full-fledged philosophical conflict is that there’s a lot of uncertainty about the relationship between existential risk and growth, even on the EA side of things. It’s just a new area of inquiry. My sense is that a lot of people suspect someone has worked out a good theory of it, but the work has only just begun.

In his post, Crawford asks, “Does XR consider tech progress default-good or default-bad?” This is sort of halfway between a mood question and an empirical question [3]; we have attitudes toward technological development that are only partially related to our predictions about technological development. I would even guess that some of the people I know who are most excited about technology also have higher subjective probabilities of tech-driven dystopian and catastrophic scenarios than people that don’t care for technology. 

I think EAs and the Progress Community would agree that tech progress is good if progress in safety inevitably outpaces progress in destructive capabilities (though some EAs might still wish to stop tech progress entirely to focus on safety, depending on the rate of safety progress). But a large part of the question is going to be answered by what is true about the nature of technological progress.

Again, because there’s a lot of uncertainty about the nature of technological progress on both sides, I don’t think we should overstate the existence of a big argument. If we knew it would soon bring the apocalypse, nobody would be in favor. If we knew it would prevent it, nobody would be in opposition. And few people have indifference curves on which they would trade off between risk and growth, so even saying that Progress advocates are willing to take more risk for more growth compared to EAs seems a bit premature. 

Also, I keep writing “the nature of technological progress,” but I’m not even sure that’s a real thing. It could be highly path dependent. It could depend on cultures or individual contributors. There have been some general models proposed: the EA-oriented philosopher Nick Bostrom writes about technological progress as an urn out of which we are drawing balls (which represent ideas, discoveries, and innovations). White balls are good or innocuous; they’re most of what we’ve pulled out so far. But there are also grey ones, which can be moderately harmful or bring mixed blessings. Nuclear bombs were a grey ball, maybe CRISPR too. But Bostrom hypothesizes that there are also black balls, technologies which inevitably destroy civilization when they’re discovered.

Bostrom could be right. But there could also be safety balls that make the world much safer (that destroy black balls, so to speak)! Maybe it’s easy to build spaceships fit to colonize the galaxy. Maybe it’s easy to create Dune-style shields. Maybe there’s something that ends the possibility of nuclear war. mRNA vaccine technology is clearly sort of like a safety ball, as it lowers pandemic risk.

Clearly this is an area that urgently needs further research, but it isn't even clear what the expectations for that research should be. Will we gain better insight into safety balls and their frequency relative to black balls? Will we figure out if and how those relative frequencies change over time? It doesn't seem like the sort of thing we'll ever be able to pin down, but it's really the core of the question [4].
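To see why those relative frequencies are the crux, here is a toy Monte Carlo sketch of the urn metaphor. It is my own illustration with made-up probabilities (P_BLACK, P_SAFETY, and the draw counts are all assumptions), not Bostrom's model:

```python
import random

# Toy sketch of the urn metaphor with made-up frequencies (not Bostrom's numbers).
# Each draw is a new technology; almost all are white/grey (ignored here), with a
# tiny chance of a civilization-ending black ball or a risk-cancelling safety ball.
P_BLACK = 0.001   # assumed chance a given technology is a black ball
P_SAFETY = 0.001  # assumed chance it instead neutralizes black balls

def one_history(max_draws=10_000):
    """Simulate one technological history; return which ball type comes up first."""
    for _ in range(max_draws):
        r = random.random()
        if r < P_BLACK:
            return "doom"
        if r < P_BLACK + P_SAFETY:
            return "safe"
    return "undecided"

runs = [one_history() for _ in range(10_000)]
for outcome in ("doom", "safe", "undecided"):
    print(outcome, runs.count(outcome) / len(runs))
```

In this toy race, the chance that doom arrives before safety is simply P_BLACK / (P_BLACK + P_SAFETY), so everything hinges on that ratio, which is exactly the quantity we currently have no way to estimate.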

So while I view the questions here as incredibly important, I don’t see them as points of disagreement. The actual points of disagreement right now are more on the margin: would we be better off with technological progress moving a bit more slowly or a bit more quickly? How much should the Precautionary Principle guide us?

Timelines

Perhaps the most prominent contribution to the growth vs x-risk question was made by Leopold Aschenbrenner. He argues that technological progress (really, economic growth) is by default good, because it propels us through our current time of perils into a safer world. 

But even if Aschenbrenner's model is correct (that we need to grow through our current time of perils to reach a safer world), the model makes for a poor argument against EA. If some talent and capital is willing to start overinvesting in safety early, they should obviously do so. That's EA. To really argue against EA, Progress advocates would need to argue that EA is overrating x-risk. That would be a hard argument to make! It seems pretty clear that the marginal person who is willing to work on either safety or growth should be working on safety. That's essentially the EA Longtermist position.

As Tyler Cowen said in a speech he gave at Stanford, “The longer a time horizon you have in mind, the more you should be concerned with sustainability” (in the EA sense of lower x-risk). That is to say, if humanity might be able to survive and flourish until the heat death of the Universe (or even just for a few million years), we should be obsessed with preventing extinction. We have time for many more industrial revolutions and ten-thousand-year growth cycles if that's the case; if there's a (non-extinction) civilizational collapse, maybe next time will go better. We just need to make sure we don't mess up so badly that we go completely extinct, and that we keep enough resources around with which to reindustrialize. Even a chance at creating that future is so important that growth in the next few hundred years is meaningless by comparison.

Just as growth loses its importance on such a long time horizon, it also loses its importance on a short horizon: if the world is ending next year, who cares about compounding returns?

Ironically, as Cowen argues, Progress as a movement to raise the growth rate makes the most sense if you think we are doomed to extinction but still have a few hundred years of runway left. That's enough time that the compound returns of growth really matter for human flourishing, and we don't have to worry that much about x-risk because there was never a chance at the galactic civilization anyway. For example, you might think there's a 0.1% chance of human extinction in any given year, given all the x-risks. That would imply about a 90% chance of surviving the next 100 years, and about a 37% chance of surviving the next 1000 years. The estimate Cowen gives in the Stanford speech is a life expectancy of 600-700 years.
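As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, assuming a constant 0.1% annual extinction risk, which is just the hypothetical figure above):

```python
import math

annual_risk = 0.001  # hypothetical 0.1% chance of extinction in any given year

# Probability of surviving N consecutive years under a constant annual risk
p_100 = (1 - annual_risk) ** 100     # ~0.905, i.e. about 90%
p_1000 = (1 - annual_risk) ** 1000   # ~0.368, i.e. about 37%

# Year by which the survival probability drops below 50%
median_years = math.log(0.5) / math.log(1 - annual_risk)  # ~693 years

print(f"P(survive 100 years)  ~ {p_100:.2f}")
print(f"P(survive 1000 years) ~ {p_1000:.2f}")
print(f"Median survival time  ~ {median_years:.0f} years")
```

Under this assumption, the year by which survival probability drops below 50% is roughly 693, which lands in the same 600-700 year range Cowen cites (the mean survival time under the same assumption would be closer to 1,000 years).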

Considering “small” risks

There’s two potential responses from Progress: The first is a model like Aschenbrenner’s, that suggests after a certain amount of compounding growth x-risk becomes infinitesimal. In this case, Progress advocates are sort of “right by chance” that our best path forward is to simply raise the rate of growth. 

The second is for Progress to simply reject EA as too Pascalian. For the unfamiliar, I'm referring to Pascal's Wager, the argument that believing in God is rational because the cost of doing so is low (belief) and the consequences of failing to do so (Hell) are immense. So believing in God has a high expected value (EV) even if the probability of God existing is very low. While some people see Pascalian logic as true, and as only “feeling wrong” because of something like scope insensitivity, it's usually seen as EV-gone-awry [5]. Even leaders in EA like Eliezer Yudkowsky and Holden Karnofsky tend to reject Pascalian logic, as Crawford notes in his post. My reading is that Yudkowsky and Karnofsky see the probabilities they are considering as far higher than any Pascalian range: at least a few percent, and probably in the tens of percent.

Consider a hypothetical set of policies that would cause us to grow at 1%. If we adopted this set of policies, we would survive the next 1000 years with a 2% probability. If we rejected them, we could grow at 2.5%, but would have only a 1% chance of surviving the next 1000 years. And if we survived the next 1000 years (the length of this hypothetical time of perils), we would create a stable galactic civilization where trillions of trillions of people would live flourishing lives for the next 10^100 years. In any EV calculation, the difference in value between growing at 1% and 2.5% would pale in comparison to even the 2% chance at more +EV than is worth trying to describe. But in the modal outcome of both scenarios, the 2.5% growth world would be much better than the 1% growth world. Cowen has described betting on futures with probabilities below 2% as Pascalian, so it's unclear whether he would be willing to adopt the policy set [6], [7].
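For concreteness, here is the same hypothetical as a toy expected-value calculation. Every number is an illustrative assumption (especially the stand-in value for a galactic civilization); the point is only that the survival term swamps the growth term:

```python
# Toy EV comparison for the hypothetical policy choice above (all numbers illustrative).
V_GALACTIC = 1e30              # stand-in value of a flourishing galactic civilization
v_modal_slow = 1.01 ** 1000    # proxy for the modal world after 1000 years at 1% growth
v_modal_fast = 1.025 ** 1000   # proxy for the modal world after 1000 years at 2.5% growth

ev_slow = 0.02 * V_GALACTIC + 0.98 * v_modal_slow   # safer, slower-growth policy set
ev_fast = 0.01 * V_GALACTIC + 0.99 * v_modal_fast   # riskier, faster-growth alternative

print(ev_slow > ev_fast)            # True: the galactic-civilization term dominates the EV
print(v_modal_fast / v_modal_slow)  # ~2.5 million: but the modal world is far richer at 2.5%
```

Any plausibly astronomical value for the galactic term makes the 2%-survival policy win on EV, which is exactly the tension with the modal-outcome intuition described above.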

But for others, 2% might still be within the realm of reasonable cost/benefit calculations. 2% is, after all, about your chance of getting into Y Combinator. There's no agreed-upon probability at which an EV calculation becomes Pascalian, but 2% certainly seems like an upper bound. Regardless, if there is any chance of a galactic civilization at a probability where you don't feel mugged, the expected value clearly favors investing marginal resources in safety instead of growth (if not actively slowing down progress altogether).

Also, I would be surprised if most people in the Progress Community were comfortable with 700 years as the modal outcome for humanity. I think most people interested in Progress also want a Galactic Civilization. So we should be thinking about whether there’s a safer way to get there.

The question of AGI

To make this all a bit more grounded, we can consider what artificial general intelligence (AGI) scenarios would justify Progress and which would justify EA. AI is a particularly interesting example here, even relative to other x-risks, because it could be a massive driver of economic growth but it also substantially raises x-risk [8].

Define “likely” and “not likely” by the lowest probability at which you do not feel you are being Pascalian. Keep in mind that two surveys found AI experts put the probability of AGI by 2100 between 70% and 80%. A much more conservative report from Tom Davidson at OpenPhil estimates “pr(AGI by 2100) ranges from 5% to 35%, with [a] central estimate around 20%.”

In the top left quadrant (where AGI this century is likely and alignment is likely too), we are in an AGI situation analogous to Cowen's world ending next year: just as we wouldn't worry about economic growth if the world were ending next year, we need not worry about economic growth if AGI will soon bring exponential growth. If we're in that world, the Progress enthusiast might as well kick back and relax.

On the other hand, if AGI is likely in the next 100 years and alignment is not likely, EAs have simply won the argument. We should heed Eliezer Yudkowsky and devote every ounce of human and financial capital to averting unfriendly AGI. From my vantage point, it's clear that members of the Progress community (including myself) haven't spent enough time deciding where they fall on this. It seems like the most fundamental practical disagreement between Progress and EA, yet one that has seen almost no debate.

This too seems like more a matter of mood than substance: plenty of people want to talk about AI, so we'll be the ones to talk about the industrial revolution. And there's a more defensible version, that Progress doesn't have a comparative advantage in discussing AI. But if AI is the central determinant of long-run growth, we need to be talking about it more! It might be the only important thing.

The Progress movement as it currently exists only truly makes sense in a world where AGI isn't possible on a short timeline (see also: The Irony of “Longtermism”). I use 100 years, but really it's any span long enough for compounding returns to start really paying off. In those worlds, we should care about making the world better for our children and grandchildren. Even then, however, it doesn't really make sense to pull marginal resources away from AI safety work. And based on our best current estimates, it's not likely that we are in either of those worlds.

Final thoughts

At the moment, I’m not sure there’s any true philosophical conflict between Progress and EA. I think there’s a lot of disagreement about points of emphasis, different attitudes towards precaution, and different suspicions about the nature of technological progress. 

Perhaps the best argument available to Progress advocates is that this longtermist framing is wrong entirely: all the big risks are actually unknown unknowns, so safety work on current margins is inevitably going to be misplaced. What we should be doing instead is driving the engine of progress forward and building up institutions so that we can react quickly to x-risks when they actually begin to materialize.

That leads to the view I am currently most sympathetic to: longtermism is philosophically true but a poor action-guiding principle, because it causes one to waste so much time and pursue so many dead ends that being a longtermist isn't even +EV. And because of that, we should focus our efforts on finding solutions to the specific problems that confront us on our current frontier. [9]

Still, on the whole, I don't think Progress is winning any arguments against EA [10]. But this is arguing on the wrong margin. As I said at the beginning, I think people are attracted to this argument because it looks like a Grand Intellectual War between two of our most interesting intellectual movements. But making it a war misses the point. Both groups should be much larger. Both share many of the same concerns. Both want to work on similar issues. Which raises the question: how should we be working together?

[1] Another way of getting at the left-leaning mood would be to make reference to EA's interest in top-down control. This has been stated very plainly in terms of “steering” or Bostrom's Singleton. I remain agnostic here as to whether those are good ideas and simply note their political feel. [11]

[2] I note that the emergence of longtermism as the central tenet of EA has changed this.

[3]  To be clear, I don’t mean “mood question” in a dismissive sense. The Great Stagnation (and Industrial Revolution!) might have been caused by changes of mood, so mood might be one of the most important forces in human history! We should be invested in changing mood in ways congruent with optimal outcomes. But we also shouldn’t be over-invested in small differences in mood that are not consequentially important. 

[4] I suspect we’re going to work around the question of the nature of technological development by thinking more about what the existence of black balls suggests about how we should structure scientific and political institutions. The Progress Community needs to be thinking about how we can structure institutions so people avoid black balls even if we’re not going to become EAs.

[5] Theological issues aside, obviously. 

[6] A number of these positions were ascertained in conversation with Cowen and any misrepresentations are a result of my misunderstanding. 

[7] Cowen seems to believe that matters of extinction are categorically separate from normal EV calculations. It should also be said that Cowen may reject a question like this as being too far removed from our choice set.

[8] I’m setting aside the geopolitical dimension and “value lock-in” issues of who discovers AGI for simplicity here, but one’s opinion of those issues would change the calculation. 

[9] It may be worth developing a Growth Based Prescription to X Risk. Please reach out if you’re interested in collaborating on this.

[10] The best engagement I have seen with Progress from EA is Holden Karnofsky in this post (See “rowing”).

[11] Sorry about the unlinked footnotes. Will fix.

The Crux: Action

Jason Crawford asks, “What's the crux between EA and progress studies?” This series of posts is my attempt to strike at the question from a few different angles.

There’s a number of new, online movements that are very interested in improving the modern world. I’m particularly interested and connected to two of them: Progress and Effective Altruism. I founded and serve as an editor for Works in Progress, where we seek to elevate important ideas for the future. In college, I founded the Brown University Effective Altruism chapter. I’d like to develop a more clear understanding of how the two movements relate to each other, pragmatically and philosophically, over a short series of posts. 

I want to start by making a distinction: When Patrick Collison and Tyler Cowen first wrote about progress studies, they called for an interdisciplinary academic movement. This meant opening up a discussion between different academics working on issues related to progress (like economic historians, developmental economists, and management scientists) to put together a more complete “science of progress.” 

Since then, there’s been enough popular interest in their idea to merit a larger mandate. We have a large pool of talent and capital that’s invested in the goal of accelerating progress (As Cowen and Collison wrote, "Progress Studies is closer to medicine than biology: The goal is to treat, not merely to understand."). When I write about “Progress” and “the Progress Community,” I’m going to be referring to this larger umbrella, rather than just people interested in doing “progress studies'' in the academic sense.

With that in mind, here’s a general picture of what the current Progress Community landscape might look like:

It’s with this understanding that we can begin to compare Progress to EA. 

Let's start by looking at each movement's actions. Both movements are interested in creating a community and directing that community toward certain ends. In Effective Altruism, it's relatively clear what that community and direction look like.

At the top of the funnel, a person might move some of their regular donations to a GiveWell charity. In the middle, a person might have taken the Giving What We Can pledge, done 80,000 Hours career consulting, and maybe pursued an EA bent to their work, or perhaps is involved with their local EA chapter doing community building. At the bottom of the funnel, a person might be working full time in an EA organization or doing serious earning-to-give.

To be clear, this has changed since EA began. In the early days, the movement was much more focused on earning-to-give and donating to global health charities. Since then (and since the involvement of a number of large donors), EA has become more focused on directing talent to found and staff EA organizations and projects, and most of those new organizations and projects are longtermist. But the EA community has always been able to direct their talent. 

Now let’s look at Progress:

I don't mean to be too pessimistic here: there are a lot of great projects taking place under the Progress umbrella, and a number of even more exciting projects to be announced in the next few months. But the number of people involved full-time with these projects is likely less than 20. We have largely failed to mobilize the talent and resources at the top and middle of the pipeline in the way that EA did even as an incipient movement. EA could immediately say, “Earn to give and donate to GiveWell,” while Progress has no such analogue. “Celebrate progress”?

This is all the more disappointing when we consider how well the Progress community has done at getting interesting ideas into the popular discourse. There are at least half a dozen high-quality blogs and Substacks devoted to progress, and the larger economics and rationality blogosphere routinely engages with it, not to mention major publications. This is one area where Progress has markedly outperformed EA so far. But if Progress is to be “more like medicine than biology,” we need to convert that intellectual enthusiasm into real-world action and change.

In terms of actions, the “crux” between EA and Progress might mean, “At a given level of involvement, which track would be more worthwhile?” If you're a highly skilled, charismatic engineer who wants to be fully committed to one movement or the other, it's a legitimately difficult question whether you should try to start a world-changing company or do AI safety research. But at the top and middle of the funnel, it isn't exactly clear what decision you would be making, or whether there is any conflict between the two movements at all.

So what should Progress be allocating talent toward, beyond its academic questions? The obvious answer would be fast-growing companies. And to a certain extent, Progress as a movement has seemed to be center-left people in tech learning to stop worrying and love capitalism: rediscovering the power of free markets as an engine of growth and the virtue of the entrepreneur (and the hardworking employee) in driving growth forward. It's moral permission to be a proud capitalist.

All of that is good and correct. Indeed, it is a core message of Tyler Cowen's Stubborn Attachments. But even if Cowen is right that the “common sense morality” of working hard and providing for your loved ones is a reasonable approximation of true morality, that goal alone can't do justice to the potential of Progress as a movement. For one, it's not in our comparative advantage to push that line, as it's already pushed by Standard American Values, religious institutions, etc. We also don't need a movement to push people to work at fast-growing startups; we already have stock options for that. There might be a place to help with the number of talented new grads who go into consulting and banking relative to tech, but that seems to have more to do with a skills mismatch created in higher education than with something a movement is equipped to solve.

A better idea for the Progress Community would be to coordinate people to solve problems, both technological and political, that would usually be intractable but for an unusual coordination of talent and capital. All founders do this coordination to some degree: they identify a problem that people have been unable to solve and coordinate resources to solve it. Especially good founders like Elon Musk are able to coordinate talent and capital to solve problems further afield, by coordinating even more talent and more capital. Progress as a movement and a network could do some of the founder's work for them, like getting all the engineers and VCs who want to work on flying cars in one place. We could convince people to raise their ambitions and career trajectories, identify far-afield, important problems, and then get people invested in actually solving them. In fact, this is sort of what EA looks like at the moment.

The political case may be even more interesting. There are a number of political and societal problems that aren't being solved despite everyone agreeing that they are big problems. Just like in tech, it will take talent, capital, and an organizing force to solve them. So we could also do things like get all the people and donors who want to fix housing or transit together.

In either case, the Progress community could begin by inspiring people to be more ambitious, creating a Schelling point for ambitious people that lowers the cost of search and coordination, and identifying people to do academic progress studies research, lead pro-progress policy advocacy, and found world-changing companies. In both the technological and the political case, the Progress community could become a place for definite optimism rather than indefinite optimism. I will return to these ideas for a mature Progress community in a future post.

But for now, any comparison between the EA and Progress communities needs to begin with this massive discrepancy in action and direction. Progress advocates can't credibly tell EAs to be more like them until what “being like them” means is better defined by their actions. At the same time, EAs can say that they will be the real vehicle of progress. Should the Progress Community agree with them? Should progress simply be a subcategory of EA longtermism?

These questions get into the two groups' thinking, which we will address next time.