Jason Crawford asks, “What’s the crux between EA and progress studies?” This is the final part of a short series of posts on the question. See Part 1 and Part 2.
So why should Effective Altruists and the Progress Community work together?
Collaboration is mutually beneficial. As Ben Todd explained in his 2021 EA Global talk, EA organizations often hire from outside EA. There are plenty of people interested in global health and animal welfare who don’t have EA philosophical commitments, and they do just as well in their roles as any EA would. EA should instead generally direct its people toward things that only EAs would or could do, like AI alignment or EA movement building, and leave the rest to the broader talent pool.
The Progress Community should be thinking along the same lines: what are people invested in human progress uniquely willing to do? B2B SaaS companies might drive human progress, but there’s no reason the Progress Community should direct talent there, since plenty of just-as-talented people are able to do the same work. Collaboration allows both groups to make use of their comparative advantages.
But what should the Progress Community be focusing on? There’s a bit of tension between my last two posts. In the first, I criticized the Progress Community for not guiding members toward actions that actually drive Progress. In the second, I criticized the Progress Community for its inability to respond to certain lines of argument from EA.
There’s probably a good case that Progress should just be a community of “doers” and leave the theorizing to EA. But there are some difficult theoretical questions at play, so I’m inclined to think we should do both.
Doing, Revisited
Ben Todd writes that effective altruism needs more megaprojects.
But there’s been a major constraint in EA: a lack of entrepreneurial talent dedicated to creating and scaling new organizations. At the risk of undue speculation, I strongly suspect that Progress has had more pull with founder and entrepreneur types. In that spirit, let’s look at some megaprojects that would make for ideal areas of collaboration.
Energy, the Holy Grail of Progress
If I were to weigh in on “What’s the ultimate project for Progress?”, my vote right now would be energy too cheap to meter. Progress Studies exists, in large part, as a response to the Great Stagnation, and there’s a reasonably strong case that its direct cause was a slowdown in energy production.
The Henry Adams curve represents the tendency of energy use to grow 7% year over year from around 1800. The trend broke in the early 1970s (WTF happened in 1971?), when it flatlined. Virtually everything we care about is correlated with more energy. Getting back on the Curve is imperative for progress to continue.
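To make the compounding concrete (a back-of-the-envelope illustration, taking the curve’s 7% figure at face value), growth at that rate doubles energy use roughly every decade:

$$E(t) = E_0 \cdot 1.07^{\,t - t_0}, \qquad t_{\text{double}} = \frac{\ln 2}{\ln 1.07} \approx 10.2 \text{ years}$$

Fifty years off the Curve, then, is not a marginal loss; it’s roughly five missed doublings, a factor of about 30 in energy use.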
Will MacAskill has called investment in clean energy the “GiveDirectly of longtermism.” That is to say, it’s the baseline investment against which all others should be measured. That’s because clean energy’s benefits are threefold: it mitigates the harms of climate change; it betters people’s lives; and it preserves existing fossil fuel reserves with which we could reindustrialize after a civilizational collapse.
Virtually everyone understands that the future of energy will need to be renewable. So EAs and the Progress Community should work on green energy too cheap to meter[1]. The Progress Community should be leading revolutionary projects and EA should be directing talent and resources towards them.
Biorisk
The other obvious area for collaboration is biotech. It’s a cliché by now to point out that we seem to be at the beginning of a massive revolution in biology. The new technologies will have clear applications to reducing biorisk: early detection of pathogens, rapid vaccine development, antivirals, better PPE. There would also be shared gains from FDA reform, and both progress and anti-risk measures would benefit from human challenge trials. Something like Fast Grants is a perfect example of EA-Progress crossover.
Fertility
There’s an emerging view in EA known as Aschenbrennerism (after Leopold Aschenbrenner). The basic idea is that AGI timelines may be long while birthrates are quickly declining. Population growth is a major (perhaps the foremost) driver of economic growth, so population stagnation and decline could lead to a world with little to no growth. That would extend our time of perils and lower the chance of human survival. Thus, Aschenbrennerism treats raising fertility rates, lengthening productive years, and pursuing gene therapies and enhancements as top priorities.
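The underlying logic can be sketched with a standard semi-endogenous growth model (my illustration; the original cites no specific model): if new ideas are produced by researchers drawn from a population $L$ growing at rate $n$, long-run growth is pinned to population growth,

$$\dot{A} = \delta A^{\phi} L^{\lambda}, \quad \phi < 1 \;\Longrightarrow\; g_A = \frac{\lambda n}{1 - \phi},$$

so as $n \to 0$, the steady-state growth rate $g_A$ falls to zero with it. That is the stagnation scenario Aschenbrennerism worries about.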
These are excellent areas for collaboration. Fertility technologies, including artificial wombs, improved egg freezing, and better infant formula, are all areas where we need more entrepreneurship. They’ve also historically received less investment, plausibly because of gender bias in medicine, which makes the space all the more promising.
Immortality, the Black Swan Candidate
Balaji Srinivasan makes a strange yet intriguing argument that ending death is the tautological endgame of technology: technology has the proximate goal of ending scarcity, and mortality is the ultimate cause of scarcity.
There are plausible EA and Progress cases for immortality. On EA terms, death is a massive opportunity cost. We also suffer from our own fear of death and from the deaths of our loved ones. On Aschenbrennerian terms, more healthy people bring more growth. That’s a good case for immortality as Progress too, though it may miss the fundamental point: Ending mortality might just be, as Srinivasan argues, definitional to progress.
Even extending healthy life spans without ending mortality could be an important fertility technology: parents might prefer having children in their late 40s if they could expect to live long enough to see those children married.
Life extension has been investigated as an EA cause (see here, for example), and the debate seems to be ongoing. I’m currently agnostic on the matter. But the Progress Community should clearly be directing people to work on life extension and to build out institutions that support it. If EA becomes more interested in the area down the line, it should be well-prepared to absorb more money.
Even more…
This list is far from complete. A number of for-profit companies have made big improvements in traditional EA areas. Beyond Meat and Impossible Foods are making it easier for people to reduce the amount of meat they eat. Progress in lab-grown meat has been slower than some expected, but it could still be the most important animal welfare victory ever.
Even fintech companies have made important contributions to global poverty work. Sendwave makes it easier to transfer money to Africa and Asia, allowing immigrants to send remittances to their families more cheaply and easily. That means billions more dollars per year going to families rather than to Western Union. They have already done a huge amount of good.
As we all know by now, biotech companies are incredibly helpful in global health work. The pharmaceutical company GlaxoSmithKline, working in partnership with the Walter Reed Army Institute of Research, recently had its malaria vaccine recommended by the WHO. If the rollout continues to go well, it could redefine effective global health work. A number of other pharmaceutical companies are now working on universal vaccines, which would be one of the biggest public health victories ever.
It shouldn’t be surprising that there’s so much overlap. Companies exist to solve problems, and sometimes they solve problems traditionally addressed by charities. These crossovers deflate the EA-Progress conflict for me. Many of the goals of the wider EA community require technological solutions.
Thinking, Revisited
I propose that we have one Progress-oriented organization (or team within an EA organization) that researches longtermism with special attention to the concerns raised by EA.
There is a substantial chance that such an organization would conclude that EA problems are best solved by EA means, but both communities value red teaming as good epistemic practice, so why not see what a Progress-oriented team critiquing EA could discover? Researchers could tackle a number of questions, including a few that I raised in the last piece:
How can we develop better research norms for risky technologies like AI and Gain of Function?
How can we better understand technological trajectories? How should we think about the long-term relationship between offensive and defensive technology? What does Progress have to say about Nick Bostrom’s “black balls”?
How can we use progress (wealth + better institutions) to quickly address x-risks as they emerge?
There are a number of areas where Progress-style thinking could have corrected EA errors more quickly. The most obvious example is how long it took EA to appreciate the importance of economic growth for global health and poverty, as Cowen described. There’s also the case of natalism: there were rumblings in the past that EAs shouldn’t have children, since that would mean money not going to EA causes. Some even went so far as to argue that population growth should generally be regarded as negative [2]. These arguments have rightly been rejected, but the influence of Aschenbrennerism, and of Growth-oriented thinking more broadly, could have ended those discussions much sooner.
The future
We now have two new Grand Narratives for this century. The first is about the time of perils:
Will MacAskill, Are we living at the hinge of history?
Holden Karnofsky, The “Most Important Century” Blog Series
The second is about the end of the Great Stagnation and a new roaring 20s. As compiled by Caleb Watney:
Noah Smith, Techno-optimism for the 2020s; Techno-optimism for 2022
Matthew Yglesias, Some optimism about America’s Covid response
Tyler Cowen, Is the Great Stagnation over?
Caleb Watney, Cracks in the Great Stagnation
In either account, these are very exciting and important times. I predict that the most interesting developments will come from people invested in the future, both the optimistic and the worried, working together to solve problems. This may be a trite place to end the series, but I’m much more excited to witness these developments than I am to keep pondering the exact philosophical relationship between EA and Progress.
[1] The obvious objection here is that the field is not neglected. I’m agnostic on this at the moment. I suspect there are at least big gains to be had in shaping the regulatory environment so that we can efficiently roll out fusion or modular nuclear reactors. It’s also plausible that there are dangers to energy being too cheap, as nefarious actors could use it for destructive purposes.
[2] Anti-natalism, as far as I know, has never been institutionalized in EA. There’ve been some reports with anti-natalist viewpoints, but I won’t be linking them here as they have been rightly rejected by the community.