One of the interesting emerging movements that I have my eye on at the moment is Effective Altruism (EA for short). This is a new form of philanthropy that looks to do the most good possible. Effective altruists give as much of their income as they can, and they think carefully about how they give to make sure that it is well used. A host of new organisations has sprung up in the last five or six years, dedicated to identifying critical interventions in development, and to encouraging more effective giving.
I’ve got a few nagging doubts about it, which I will come back to. Overall, I think it’s an exciting movement with massive potential and I’ve learned a lot from it. But it looks like there’s been a bit of controversy over the summer.
The buzz this year in the EA world is existential risk: the big global threats that could render humankind extinct and which, for a number of effective altruists, are the biggest priority. This year’s EA Global conference duly gave over some space to explore this, particularly the risk of artificial intelligence. Understandably, not everyone agreed with this level of attention. One attendee, Dylan Matthews, wrote a much-discussed piece in Vox suggesting the movement was going astray, and that global poverty was being described as “a rounding error”.
A bunch of commentators have since joined in, almost all of them quoting the Vox piece rather than the speakers or panellists at EA Global themselves. So this week I’ve been catching up a bit and trying to work out what the fuss is about.
The reasoning behind the focus on existential risk was explained by Oxford philosophy professor Nick Bostrom in his keynote presentation. He calls it the ‘astronomical waste’ argument, and it goes something like this: “the future is very big, potentially.” The planet could be habitable for a billion years, so there is vast potential for fulfilled human life. If our descendants spread out beyond the earth, then that future becomes orders of magnitude bigger. Assuming that in this future we have solved most problems and people are living good lives, then there are a potential 10^58 happy lives out there. This vast promise is referred to as humanity’s ‘cosmic endowment’.
Because the potential is so massive, even a very small action that advances us towards this endowment is significant. The numbers are big enough that improving our chances by just one thousandth of a percent still represents billions of fulfilled lives. Compared with that future, poverty and human misery today are a footnote.
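To see just how much weight the big numbers carry, here is a quick back-of-envelope check of that arithmetic (a sketch only – Bostrom’s 10^58 is a speculative upper bound, not a measurement):

```python
# Back-of-envelope check of the 'astronomical waste' arithmetic.
# 10^58 is Bostrom's speculative upper bound on future happy lives.
potential_lives = 10 ** 58

# "One thousandth of a percent" expressed as a fraction: 0.001% = 1e-5
improvement = 0.001 / 100

extra_expected_lives = potential_lives * improvement
print(f"{extra_expected_lives:.0e}")  # prints 1e+53
```

On these assumptions, a one-thousandth-of-a-percent improvement is worth around 10^53 expected lives – not merely “billions”, but trillions of trillions. That is the whole force of the argument: once you grant the 10^58 figure, almost any present-day cause looks small by comparison.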
Since effective altruists are committed to doing the most good, the argument goes one step further. The worst thing imaginable is an existential risk, because if humanity goes extinct, that cosmic endowment becomes unattainable. So reducing existential risk is more important than accelerating progress – and altruists who agree with the argument “should focus exclusively on minimizing existential risk”.
There are a number of different existential risks out there. The dominant threat of the last century was nuclear war, and that one remains, if to a lesser degree. Asteroid impact is another. Global pandemics and runaway climate change would be two more that Bostrom didn’t mention – an oversight, since we can actually do something about those. Instead, he focused on the one that looms largest among the Google employees and tech millionaires in the audience: artificial intelligence – the risk that we might create computers that one day take over the world.
That’s why the biggest global gathering of effective altruists so far had a whole session discussing artificial intelligence – and why so many people have questioned what the movement is up to.
What to make of this? Well, Dylan Matthews was actually there and says he was worried by the talk of AI. But the Google HQ conference is one of three. Melbourne’s has a whole session on parasitic worms, so perhaps we shouldn’t get hung up on the superintelligence panel. Animal rights is prominent too, and of course poverty reduction. Effective altruism is a broad movement, and there are various legitimate angles for philanthropy within it. It would be premature to say that the movement has been derailed by secular eschatology. But equally, we probably want to put a lid on superintelligence before it comes to dominate proceedings.
I can see why the conference wanted to talk about AI, since the topic casts its attendees as the potential heroes and saviours of the world. But there’s actually a pretty extremist viewpoint behind the astronomical waste argument, and it has some highly dubious underlying assumptions. More significantly, the focus on existential risk also fails on the effective altruists’ own terms. Let me go through a few of the problems:
- For a start, it shifts the focus from alleviating suffering to creating the conditions for future potential happiness – a far more flimsy basis for decision making.
- Secondly, the idea of the cosmic endowment happily assumes that we will have solved all of humanity’s problems in the future, and that those potential billions of human lives will be happy and fulfilled ones. To accept this argument, you have to believe that human perfectibility is not just possible, but inevitable. Proponents also seem to be making cost-benefit calculations based on the assumption that humanity will spread out and conquer space. This makes the whole idea inherently utopian, and undermines the movement’s ambition to be rational and scientific.
- Another principle of effective altruism is that donations go further when they are targeted at the poorest. This uses global inequality to our advantage, and many effective altruists choose not to give to local causes or the arts, knowing that the same money used in a developing world context does so much more good. This focus is lost when donations are aimed at reducing existential risk, because it treats humanity as a whole – and that’s if they’re right. If they’re wrong, then the donation that could have lifted someone out of poverty or saved a life today has instead supported the salary of an Oxford philosophy professor or a Silicon Valley computer scientist.
- Effective altruism relies on measuring impact, and this is very difficult with existential risk. It might be vaguely manageable with climate change or with disease prevention, but it’s completely impossible with an entirely theoretical threat like AI. You might as well fund contingency plans for a zombie apocalypse.
In short, the talk of AI is, given the audience and context, a self-indulgent distraction. A fun one, no doubt, but ultimately the prioritising of existential risk is a philosophical rabbit-hole. What matters now is that it doesn’t come to dominate the movement, particularly in the media. The panel on superintelligence included high-profile figures like Elon Musk, and it’s a sexy science fiction topic. Melbourne’s session on worms, not so much.
In catching up with the EA Global sessions, I’m beginning to put my finger on what it is that doesn’t sit right with me about effective altruism. The existential threat argument has thrown it into relief for me: effective altruism risks being philanthropy without compassion.
The Victorian philanthropists were ‘moved by the plight of the poor’, and much charity work is still done on this basis. Effective altruism is partly a response to the over-emotional TV appeals and to giving out of sentiment rather than reason. But if it goes too far the other way, then you have giving that is more about maths than about people. ‘Lives saved’ are discussed in the aggregate as if they represent the high score in a computer game. If you can ignore hunger, HIV, illiteracy or the billion people without sanitation, and instead give your millions to funding research into artificial intelligence, you need more compassion in your life. You may also need a slap upside the head.
There’s a passage in the Bible that springs to mind, where the apostle Paul suggests he could give everything he owns to the poor, but that nothing is gained if it isn’t done out of love. Compassion for others is part of what makes us human. It can’t be denied or suppressed. Surely truly effective altruism strikes a balance between compassion and reason. The real power lies in them working together, using compassion to motivate and reason to make sure our giving is well directed.
The second thing that still makes me uncomfortable about the effective altruism movement is the lack of connection to the developing world. It’s not universal and there are plenty of engaged agencies and individuals, but some definitely need to spend more time in the real world.
By giving rather than doing, there’s a distance between the money and the people who need it. There were no majority world voices at the conference. (The nearest I could find was former BBC journalist Rajesh Mirchandani speaking for the Center for Global Development, which clearly isn’t good enough.) I couldn’t find any attempt to see things from the perspective of the poor, or to work in partnership or consultation with those on the ground. All that comes later, presumably, with the recipient charities – but of course your giving has already decided who will be there and what they will be doing. That’s something the movement needs to look at going forward, or it will be repeating the errors of the aid and development world.
In conclusion, I don’t think effective altruism has lost its way, but I can see how it could. If it is to succeed in changing the way we give and influencing broader culture, it must break out of philosophical circles and Silicon Valley. It needs to engage the majority world, and it needs a human face.