Where is effective altruism going?

One of the interesting emerging movements that I have my eye on at the moment is Effective Altruism (EA for short). This is a new form of philanthropy that looks to do the most good possible. Effective altruists give as much of their income as they can, and they think carefully about how they give to make sure that it is well used. A host of new organisations has sprung up in the last five or six years, dedicated to identifying critical interventions in development, and to encouraging more effective giving.

I’ve got a few nagging doubts about it, which I will come back to. Overall, I think it’s an exciting movement with massive potential and I’ve learned a lot from it. But it looks like there’s been a bit of controversy over the summer.

The buzz in the EA world this year is existential risk – the big global threats that could render humankind extinct, which a number of effective altruists see as the biggest priority. This summer’s EA Global conference duly gave over some space to exploring the subject, particularly the risk of artificial intelligence. Understandably, not everyone agreed with this level of attention. One attendee, Dylan Matthews, wrote a much-discussed piece in Vox suggesting the movement was going astray, and that global poverty was being described as “a rounding error”.

A bunch of commentators have since joined in, almost all of them quoting the Vox piece rather than the speakers or panellists at EA Global themselves. So this week I’ve been catching up a bit and trying to work out what the fuss is about.

The reasoning behind the focus on existential risk was explained by Oxford philosophy professor Nick Bostrom in his keynote presentation. He calls it the ‘astronomical waste’ argument, and it goes something like this: “the future is very big, potentially.” The planet could be habitable for a billion years, so there is vast potential for fulfilled human life. If our descendants spread out beyond the earth, then that future becomes orders of magnitude bigger. Assuming that in this future we have solved most problems and people are living good lives, then there are a potential 10^58 happy lives out there. This vast promise is referred to as humanity’s ‘cosmic endowment’.

Because the potential is so massive, even a very small action that advances us towards this endowment is significant. The numbers are big enough that improving our chances by just one thousandth of a percent still represents billions of fulfilled lives. Compared with that future, poverty and human misery today is a footnote.
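
To make the scale of that claim concrete, here is a rough, purely illustrative sketch of the arithmetic in Python – the 10^58 figure is Bostrom’s, the 0.001% improvement is the example above, and the 7 billion population figure is just a round number of my own for comparison:

```python
# Purely illustrative sketch of the 'astronomical waste' arithmetic.
# The 10**58 figure is Bostrom's estimate of potential future lives;
# the 0.001% improvement is the illustrative figure from the paragraph above;
# the 7 billion population is simply a round number for comparison.

potential_future_lives = 10**58      # Bostrom's 'cosmic endowment'
improvement_in_odds = 0.001 / 100    # one thousandth of a percent

expected_lives_gained = potential_future_lives * improvement_in_odds
people_alive_today = 7 * 10**9

print(f"Expected future lives gained: {expected_lives_gained:.0e}")
print(f"Multiples of today's population: {expected_lives_gained / people_alive_today:.0e}")
```

On those assumptions the notional gain comes out at around 10^53 lives, dwarfing the present world population by dozens of orders of magnitude – which is exactly why, within this framing, present-day suffering starts to look like a footnote.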

Since effective altruists are committed to doing the most good, the argument goes one step further. The worst thing imaginable is an existential risk, because if humanity goes extinct, that cosmic destiny becomes unattainable. So reducing existential risk is more important than accelerating progress – and altruists who agree with the argument “should focus exclusively on minimizing existential risk”.

There are a number of different existential risks out there. The dominant threat of the last century was nuclear war, and it remains one, to a lesser degree. Asteroid impact is another. Global pandemics and runaway climate change would be two more that Bostrom didn’t mention – an oversight, since we can actually do something about those. Instead, he focused on the one that looms largest among the Google employees and tech millionaires in the audience: artificial intelligence – the risk that we might create computers that one day take over the world.

That’s why the biggest global gathering of effective altruists so far had a whole session discussing artificial intelligence – and why so many people have questioned what the movement is up to.

What to make of this? Well, Dylan Matthews was actually there and says he was worried by the talk of AI. But the Google HQ conference is one of three. Melbourne’s has a whole session on parasitic worms, so perhaps we shouldn’t get hung up on the superintelligence panel. Animal rights is prominent too, and of course poverty reduction. Effective altruism is a broad movement, and there are various legitimate angles for philanthropy. It would be premature to say that the movement has been derailed by secular eschatology. But equally, we probably want to put a lid on superintelligence before it comes to dominate proceedings.

I can see why the conference wanted to talk about AI, since it casts people like the attendees as the potential heroes and saviours of the world. But there’s actually a pretty extremist viewpoint behind the astronomical waste argument, and it has some highly dubious underlying assumptions. More significantly, the focus on existential risk also fails on the effective altruists’ own terms. Let me go through a few:

  • For a start, it shifts the focus from alleviating suffering to creating the conditions for future potential happiness – a far more flimsy basis for decision making.
  • Secondly, the idea of the cosmic endowment happily assumes that we will have solved all of humanity’s problems in the future, and that those potential billions of human lives will be happy and fulfilled ones. To accept this argument, you have to believe that human perfectibility is not just possible, but inevitable. Proponents also seem to be making cost-benefit calculations based on the assumption that humanity will spread out and conquer space. This makes the whole idea inherently utopian, and undermines the movement’s ambition to be rational and scientific.
  • Another principle of effective altruism is that donations go further when they are targeted to the poorest. This uses global inequality to our advantage, and many effective altruists choose not to give to local causes or the arts, knowing that the same money used in a developing world context does so much more good. This focus is lost when donations are aimed at reducing existential risk, because it treats humanity as a whole – and that’s if they’re right. If they’re wrong, then the donation that could have lifted someone out of poverty or saved a life today has instead supported the salary of an Oxford philosophy professor or a Silicon Valley computer scientist.
  • Effective altruism relies on measuring impact, and this is very difficult with existential risk. It might be vaguely manageable with climate change or with disease prevention, but it’s completely impossible with an entirely theoretical threat like AI. You might as well fund contingency plans for a zombie apocalypse.

In short, the talk of AI is, given the audience and context, a self-indulgent distraction. A fun one, no doubt, but ultimately the prioritising of existential risk is a philosophical rabbit-hole. What matters now is that it doesn’t come to dominate the movement, particularly in the media. The panel on superintelligence included high-profile figures like Elon Musk, and it’s a sexy science fiction topic. Melbourne’s session on worms, not so much.

In catching up with the EA Global sessions, I’m beginning to put my finger on what it is that doesn’t sit right with me about effective altruism. The existential threat argument has thrown it into relief for me: effective altruism risks being philanthropy without compassion.

The Victorian philanthropists were ‘moved by the plight of the poor’, and much charity work is still done on this basis. Effective altruism is partly a response to the over-emotional TV appeals and to giving out of sentiment rather than reason. But if it goes too far the other way, then you have giving that is more about maths than about people. ‘Lives saved’ are discussed in the aggregate as if they represent the high score in a computer game. If you can ignore hunger, HIV, illiteracy or the billion people without sanitation, and instead give your millions to funding research into artificial intelligence, you need more compassion in your life. You may also need a slap upside the head.

There’s a passage in the Bible that springs to mind, where the apostle Paul suggests he could give everything he owns to the poor, but that nothing is gained if it isn’t done out of love. Compassion for others is part of what makes us human. It can’t be denied or suppressed. Surely truly effective altruism strikes a balance between compassion and reason. The real power lies in the two working together, using compassion to motivate and reason to make sure our giving is well directed.

The second thing that still makes me uncomfortable about the effective altruism movement is the lack of connection to the developing world. It’s not universal and there are plenty of engaged agencies and individuals, but some definitely need to spend more time in the real world.

By giving rather than doing, there’s a distance between the money and the people who need it. There were no majority world voices at the conference. (The nearest I could find was former BBC journalist Rajesh Mirchandani speaking for the Centre for Global Development, which clearly isn’t good enough.) I couldn’t find any attempt to see things from the perspective of the poor, or to work in partnership or consultation with those on the ground. That all comes later, presumably, with the recipient charities – but of course your giving has already decided who will be there and what they will be doing. That’s something the movement needs to look at going forward, or it will be repeating the errors of the aid and development world.

In conclusion, I don’t think effective altruism has lost its way, but I can see how it could. If it is to succeed in changing the way we give and impacting broader culture, it must break out of philosophical circles and Silicon Valley. It needs to engage the majority world, and it needs a human face.

9 Comments on “Where is effective altruism going?”

  1. Simon JM August 27, 2015 at 1:52 am #

    The existential risk factor could just as well be applied to climate change issues, let alone worrying about AI – which I doubt will be here any time soon – or a future space-faring humanity. But regardless, don’t forget activism and systemic change, or it’s just a matter of triage on the Titanic.

    It occurred to me just the other night that humanity is passing a threshold: the point where the problem of the commons, cheating, social traps, and personal, government or business parasitism/exploitation of others or the environment can no longer be sustained, and unless a new paradigm of collective welfare (real socialism?) is taken up, we all go down the drain.

    Not sure it matters anyway, Jeremy – too many storms approaching, and we will be Red Queening – running flat out just to stay where we are – to avoid going backwards. Look at the anti-immigration feeling growing; and given the next crash is close, people will be hard up looking after themselves, let alone worrying about the most effective way to spend their charity money.

    • Jeremy Williams August 27, 2015 at 8:42 am #

      Yes, climate change could be considered an existential risk. Unfortunately the Silicon Valley audience at the conference are predisposed to see AI as the threat, since that’s what they’re into. Shame on them for claiming to be a scientific and rational movement, and then being so stunningly blind to the real issues, I say.

      Since people do give to charity though, it would really help matters if they gave well. One well-known example is the donkey sanctuary, which people insist on writing into their wills for sentimental reasons, giving us a charity funded to the tune of £25 million a year with really very little to do. That’s idiocy, and if people did just one minute of research on what to do with their money, charitable giving could be a much more powerful tool for good.

      • Simon JM August 27, 2015 at 12:18 pm #

        I just finished Smile or Die by Barbara Ehrenreich, about the American obsession with positive thinking and how it gives a flawed view of reality. So I’m not surprised that if you get a room full of US tech heads they will breeze past climate change and other real-world problems and fixate on the Singularity and its associated threat. Trouble is, reality doesn’t compromise, and they will have to deal with it sooner or later.

        Regarding the Donkey Sanctuary, I think cases like that are just self-indulgent feel-good exercises – though having said that, I’m partial to dog rescue charities.

        Lastly, at least for me, in principle I have no problem with EA, but I would wish to see it as just one aspect of an authentic ethical life which sees animal and human well-being as central to our lifestyle choices. But with so many distractions and personal biases, I think we need something extra to drive it home. Religion may once have been able to do it – and I admire the Pope and the Islamic discussion on climate change – but I’m not too sure what will redirect people’s gazes in a secular consumer society.

        • Jeremy Williams August 27, 2015 at 1:49 pm #

          A good book, as I remember it.
          https://makewealthhistory.org/2011/04/11/smile-or-die-by-barbara-ehrenreich/

          At its best, EA helps people to put aside their sentimentality and ask where their money can be most useful when giving. That may include animal charities: if you’re looking to reduce overall misery rather than just human misery, animal charities can be good value. I’m a supporter of Compassion in World Farming myself.

          And yes, EA is going to be one aspect of a broader change, and the movement is deluded if it thinks otherwise.

        • DevonChap August 29, 2015 at 8:32 pm #

          Most charitable giving is about making the giver feel better. Most ‘ethical’ activities are similar, so the people doing them can tell themselves they are better people. It doesn’t really matter whether it actually is beneficial; that isn’t the point.

          Of course that is what effective altruism is trying to alter, but while we can have our own views on how people spend their money, trying to control it – say by preventing them giving money to donkeys – would be profoundly illiberal, so we are stuck with persuasion. May I suggest William MacAskill’s Doing Good Better: How Effective Altruism Can Help You Make a Difference.

          • Jeremy Williams August 30, 2015 at 8:03 am #

            As far as I’m aware nobody is suggesting an attempt to control how people give their money. What EA is good at is showing people just how much good they could do in the world if they gave better. Once people realise that they could benefit hundreds of people with a well aimed donation, they’re going to start thinking more carefully about how they give.

            You still get to feel good about it, so that motive is not suppressed – it’s just more directed.

            Doing Good Better is on the reading list for the autumn.

          • Simon JM August 31, 2015 at 1:28 pm #

            The other thing that has occurred to me is that if you break down the underlying concept of EA, why should you stop at which charity is best to give to? The fullest application would be looking at your whole life, and at which actions have the best impact overall. The argument by those criticizing charity is that this is less than optimal and in fact just enables systemic problems. http://www.awdnews.com/economy/it-s-all-part-of-capitalism-how-philanthropy-perpetuates-inequality

            Of course taking on radical activism – was Occupy too radical? – might not by itself be the best route either, so at least to me this would, while incorporating some EA-style charity, necessitate a stronger personal commitment to one’s overall lifestyle as well as strong activism.

            Lastly, regarding Smile or Die and the above topic, I wonder how much positive thinking, combined with inequality-enabling philanthropy and NGO corruption, has to do with situations like this? http://www.filmsforaction.org/articles/resilience-is-futile-how-wellmeaning-nonprofits-perpetuate-poverty/

            While it’s nothing like what has happened with the Red Cross in Haiti, it at least to me had some aspects of blaming the victim – as if only they had the right resilient attitude, things would be better.

          • Jeremy Williams September 1, 2015 at 11:11 am #

            Yes, how we spend our time matters too. The Centre for Effective Altruism includes the organisation 80,000 Hours, which does just that.

            I understand the criticisms of philanthropy like Thorup’s, but they tend to assume that those in favour of philanthropy think of it as the one big solution. It’s not – we still need the radical and systemic change. But in the meantime, what’s the best thing for rich people to do with their surplus? Just sit on it, because giving it away perpetuates inequality? I don’t think so. Would we be better off if billionaires just spent the money on themselves, or hoarded it offshore? Clearly not.

            As with many things, the problem is not philanthropy, but over-reliance on it. EA won’t save the world, but it can help to make the best of our current situation while we work on other solutions.

Trackbacks/Pingbacks

  1. Doing Good Better, by William MacAskill | Make Wealth History - September 13, 2016

    […] lives, or to reduce suffering? Or is the best cause (though MacAskill doesn’t mention it) to prevent existential threat? There’s a mathsy rationalism that gives EA rather uncompromising edge, and that risks […]
