2020-07-26

EA ideas 3: uncertainty

2.0k words (7 minutes)
 

Moral uncertainty is uncertainty over the definition of good. For example, you might broadly accept utilitarianism, but still have some credence in deontological principles occasionally being more right.

Moral uncertainty is different from epistemic uncertainty (uncertainty about what is true: about our knowledge, its sources, and even about how uncertain we should be about these things). In practice these often mix – uncertainty over an action can easily involve both moral and epistemic uncertainty – but since is-ought confusions are a common trap in any discussion, it is good to keep these ideas firmly separate.


Dealing with moral uncertainty

Thinking about moral uncertainty quickly gets us into deep philosophical waters.

How do we decide which action to take? One approach is called “My Favourite Theory” (MFT), which is to act entirely in accordance with the moral theory you think is most likely to be correct. There are a number of counterarguments, many of which revolve around the problem of how we draw boundaries between theories: if you have 0.1 credence in each of 8 consequentialist theories and 0.2 credence in a deontological theory, should you really be a strict deontologist? (More fundamentally: say we have some credence in a family of moral systems with a continuous range of variants – say, differing by arbitrarily small differences in the weights assigned to various forms of happiness – does MFT require us to reject this family of theories in favour of ones that vary only discretely, since in the former case the probability of any particular variant being correct is infinitesimal?) For a defence of MFT, see this paper.

If we reject MFT, when making decisions we have to somehow make comparisons between the recommendations of different moral systems. Some regard this as nonsensical; others write theses on how to do it (some of the same ground is covered in a much shorter space here; this paper also discusses the same concerns with MFT that I mentioned in the last paragraph, and problems with switching to “My Favourite Option” – acting according to the option that is most likely to be right, summed over all moral theories you have credence in).

Another less specific idea is the parliamentary model. Imagine that all moral theories you have some credence in send delegates to a parliament, who can then negotiate, bargain, and vote their way to a conclusion. We can imagine delegates for a low-credence theory generally being overruled, but, on the issues most important to that theory, being able to bargain their way to changing the result.

(In a nice touch of subtlety, the authors take care to specify that though the parliament acts according to a typical 50%-to-pass principle, the delegates act as if they believe that the percent of votes for an action is the probability that it will happen, removing the perverse incentives generated by an arbitrary threshold.)
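To make the mechanics concrete, here is a minimal sketch in Python of that voting rule, with made-up credences and a single yes/no decision. The real proposal is about idealised negotiating delegates, not any particular implementation, so treat this as illustration only.

import random

# Toy version of the parliamentary model's voting rule described above.
# Credences and votes are made-up placeholders, not anyone's real views.
random.seed(1)
credences = {"utilitarianism": 0.6, "deontology": 0.3, "virtue ethics": 0.1}
votes_for_action = {"utilitarianism": 1.0, "deontology": 0.0, "virtue ethics": 1.0}

# Delegates are allocated in proportion to credence; the chance of taking the
# action equals its share of the votes, rather than a hard 50%-to-pass rule.
vote_share = sum(credences[t] * votes_for_action[t] for t in credences)
take_action = random.random() < vote_share
print(f"share of delegates voting yes: {vote_share:.0%}; act on it this time: {take_action}")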

As an example of other sorts of meta-ethical considerations, Robin Hanson compares the process of fitting a moral theory to our moral intuitions to fitting a curve (the theory) to a set of data points (our moral intuitions). He argues that there’s enough uncertainty over these intuitions that we should take heed of a basic principle of curve-fitting: keep it simple, or otherwise you will overfit, and your curve will veer off in one direction or another when you try to extrapolate.
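Hanson’s point is about moral theories, but the underlying curve-fitting behaviour is easy to demonstrate. Below is a toy Python illustration (nothing in it is specific to ethics; the straight-line “truth”, the noise level, and the polynomial degrees are arbitrary choices): a flexible model fits the data points at least as well as a simple one, but tends to extrapolate badly.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = x + rng.normal(0, 0.1, size=10)   # underlying truth: a straight line, plus noise

simple = np.polyfit(x, y, deg=1)      # two parameters
flexible = np.polyfit(x, y, deg=7)    # enough freedom to chase the noise

x_new = 1.5                           # extrapolate beyond the observed range
print("simple model at x=1.5:  ", np.polyval(simple, x_new))
print("flexible model at x=1.5:", np.polyval(flexible, x_new))
# The degree-7 fit hugs the data points more closely, but its prediction at
# x=1.5 will usually be far from the underlying line (which gives 1.5).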


Mixed moral and epistemic uncertainty

Cause X

We are probably committing a moral atrocity without being aware of it.

This is argued here. The first argument is that past societies have been unaware of serious moral problems and we don’t have strong enough reasons to believe ourselves exempt from this rule. The second is that there are many sources of potential moral catastrophe – there are very many ways of being wrong about ethics or being wrong about key facts – so though we can’t point to any specific likely failure mode with huge consequences, the probability that at least one exists isn’t low.

In addition to an ongoing moral catastrophe, it could be that we are overlooking an opportunity to achieve a lot of good for cheap. In either case there would be a cause, dubbed Cause X, which would be a completely unknown but extremely important way of improving the world.

(In either case, the cause would likely involve both moral and epistemic failure: we’ve both failed to think carefully enough about ethics to see what it implies, and failed to spot important facts about the world.)

“Overlooked moral problem” immediately invites everyone to imagine their pet cause. That is not what Cause X is about. Imagine a world where every cause you support triumphed. What would still be wrong about this world? Some starting points for answering this are presented here.

If you say “nothing”, consider MacAskill’s anecdote in the previous link: Aristotle was smart and spent his life thinking about ethics, but still thought slavery made sense.


Types of epistemic uncertainty

I use the term "epistemic uncertainty" because the concept is broader than just uncertainty over facts. For example, our brains are flawed in predictable ways, and dealing with this is different from dealing with being wrong or having incomplete information about a specific fact.

Flawed brains

A basic cause for uncertainty is that human brains make mistakes. Especially important are biases, which consistently make our thinking wrong in the same way. This is a big and important topic; the classic book is Kahneman’s Thinking, Fast and Slow, but if you prefer sprawling and arcane chains of blog posts, you’ll find plenty here. I will only briefly mention some examples.

The most important bias to avoid when thinking about EA may be scope neglect. In short, people don’t automatically multiply. It is the image of a starving child that counts in your brain, and your brain gives this image the same weight whether the number you see on the page has three zeros after it or six. Trying to reason about any big problem without being very mindful of scope neglect is like trying to captain a ship that has no bottom: you will sink before you move anywhere.

Many biases are difficult to counter, but occasionally someone thinks of a clever trick. Status quo bias is a preference for keeping things as they are. It can often be spotted through the reversal test. For example, say you argue that we shouldn’t lengthen human lifespans further. Ask yourself: should we then decrease life expectancy? If you think that we should have neither more nor less of something, you should also have a good reason for why it just so happens that we have an optimum amount already. What are the chances that the best possible lifespan for humans also happens to be the highest one that present technology can achieve?


Crucial considerations

A crucial consideration is something that flips (or otherwise radically changes) the value of achieving a general goal.

For example, imagine your goal is to end the raising of cows for meat, because you want to prevent suffering. Now say there’s a fancy new brain-scanner that lets you determine that, even though the cow ends up getting chucked into a meat grinder, on average the cow’s happiness is above the threshold at which existence becomes preferable to non-existence (assume this is a well-defined concept in your moral system). Your morals are the same as before, but now they’re telling you to raise more cows for meat.

An example of a chain of crucial considerations is whether or not we should develop some breakthrough but potentially dangerous technology, like AI or synthetic biology. We might think that the economic and personal benefits make it worth the expense, but a potential crucial consideration is the danger of accidents or misuse. A further crucial consideration might be that it’s better to have the technology developed internationally and in the open, rather than have advances made by rogue states.

There are probably many crucial considerations that are either unknown or unacknowledged, especially in areas that we haven’t thought about for very long.


Cluelessness

The idea of cluelessness is that we are extremely uncertain about the impact of every action. For example, making a car stop as you cross the street might affect a conception later that day, and might make the difference between the birth of a future Gandhi or Hitler later on. (Note that many non-consequentialist moral systems seem even more prone to cluelessness worries – William MacAskill points this out in this paper, and argues for it more informally here.)

I’m not sure I fully understand the concerns. I’m especially confused about what the practical consequences of cluelessness should be on our decision-making. Even if we’re mostly clueless about the consequences of our actions, we should base them on the small amount of information we do have. However, at the very least it’s worth keeping in mind just how big uncertainty over consequences can be, and there are a bunch of philosophy paper topics here.

For more on cluelessness, see the pieces linked above and the references therein.


Reality is underpowered

Imagine we resolve all of our uncertainties over moral philosophy, iron out the philosophical questions posed by cluelessness, confidently identify Cause X, avoid biases, find all crucial considerations, and all that remains is the relatively down-to-earth work of figuring out which interventions are most effective. You might think this is simple: run a bunch of randomised controlled trials (RCTs) on different interventions, publish the papers, and maybe wait for a meta-analysis to combine the results of all relevant papers before concluding that the matter is solved.

Unfortunately, it’s often the case that reality is underpowered (in the statistical sense): we can’t run the experiments or collect the data that we’d need to answer our questions.

To take an extreme example, there are many different factors that affect a country’s development. To really settle the issue, we might assemble groups of, say, a dozen countries each, give each group different amounts of the development factors (holding everything else fairly constant), watch them develop over 100 years, run a statistical analysis of the outcomes, and then draw conclusions about how much the factors matter. But try finding hundreds of identical countries with persuadable national leaders (and at least one country must have a science ethics board that lets this study go forward).
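To put rough numbers on what “underpowered” means, here is a back-of-the-envelope power calculation in Python. The group size, the between-country variation, and the true effect are all invented for illustration; the point is only that plausible numbers leave such a study with very little chance of detecting anything.

from scipy.stats import norm

n_per_group = 12    # countries per arm
sigma = 2.0         # assumed between-country sd of long-run growth (percentage points)
effect = 0.5        # assumed true effect of the intervention (percentage points)
alpha = 0.05        # significance threshold for a two-sided test

se = sigma * (2 / n_per_group) ** 0.5          # standard error of the difference in means
z_crit = norm.ppf(1 - alpha / 2)
power = norm.sf(z_crit - effect / se) + norm.cdf(-z_crit - effect / se)
print(f"power to detect the effect: {power:.0%}")   # about 9% with these numbers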

To make a metaphor with a different sort of power: the answers to our questions (on what effects are the most important in driving some phenomenon, or which intervention is the most effective) exist, sharp and clear, but the telescopes with which we try to see them aren’t good enough. The best we can do is interpret the smudges we do see, inferring as much as we can without the brute force of an RCT.

This is an obvious point, but an important one to keep in mind to temper the rush to say we can answer everything if only we run the right study.


Conclusions?

All this uncertainty might seem to imply two conclusions. I support one of them but not the other.

The first conclusion is that the goal of doing good is complicated and difficult (as is the subgoal of having accurate beliefs about the world). This is true, and important to remember. It is tempting to forget analysis and fall back on feelings of righteousness, or to switch to easier questions like “what feels right?” or “what does society say is right?”

The second conclusion is that this uncertainty means we should try less. This is wrong. Uncertainties may rightly redirect efforts towards more research, and reducing key uncertainties is probably one of the best things we can do, but there’s no reason why they should make us reduce our efforts.

Uncertainty and confusion are properties of minds, not reality; they exist on the map, not the territory. To every well-formed question there is an answer. We need only find it.

 


2020-07-25

EA ideas 2: expected value and risk neutrality

2.6k words (9 minutes)
 


The expected value (EV) of an event / choice / random variable is the sum, over all possible outcomes, of {value of outcome} times {probability of that outcome} (if all outcomes are equally likely, it is the average; if they’re not, it’s the probability-weighted average).

In general, a rational agent makes decisions that maximise the expected value of the things they care about. However, EV reasoning involves more subtleties than its mathematical simplicity suggests, in both the real world and in thought experiments.

Is a 50% chance of 1000€ exactly as good as a certain gain of 500€ (that is, are we risk-neutral)? And what about a third option: a 50% chance of gaining 2000€ combined with a 50% chance of losing 1000€? (All three have an expected value of 500€.)

Not necessarily. A bunch of research (and common sense) says people put decreasing value on an additional unit of money: the thousandth euro is worth more than the ten-thousandth. For example, average happiness scales roughly logarithmically with per-capita GDP. The thing to maximise in a monetary tradeoff is not the money, but the value you place on money; with a logarithmic relationship, the diminishing returns mean that more certain bets are better than naive EV-of-money reasoning implies. A related reason is that people weight losses more than gains, which makes the third case look worse than the first even if you don’t assume a logarithmic money->value function.
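Here is a small Python sketch of this point, using the three bets from above. The logarithmic money->value function and the baseline wealth of 10,000€ are illustrative assumptions, not claims about anyone’s actual preferences.

import math

wealth = 10_000   # assumed baseline wealth

bets = {
    "50% chance of 1000€":            [(0.5, 1000), (0.5, 0)],
    "certain 500€":                   [(1.0, 500)],
    "50% of +2000€, 50% of -1000€":   [(0.5, 2000), (0.5, -1000)],
}

def ev_money(bet):
    # plain expected value of the monetary outcome
    return sum(p * x for p, x in bet)

def ev_log_value(bet, w=wealth):
    # expected value of a logarithmic money->value function (diminishing returns)
    return sum(p * math.log(w + x) for p, x in bet)

for name, bet in bets.items():
    print(f"{name}: EV of money = {ev_money(bet):.0f}, EV of log-wealth = {ev_log_value(bet):.4f}")

Under the logarithmic value function the certain 500€ comes out best and the bet involving a possible loss comes out worst, matching the intuitions above, even though the EV of money is the same in all three cases.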

However, a (selfish) rational agent will still maximise EV in such decisions – not of money, but of what they get from it.

(If you’re not selfish and live in a world where money can be transferred easily, the marginal benefit curve of efficiently targeted donations is essentially flat for a very long time – a single person will quickly hit diminishing returns after getting some amount of money, but there are enough poor people in the world that enormous resources are needed before you need to worry about everyone reaching the point of very low marginal benefit from more money. To fix the old saying, albeit with some hit to its catchiness: “money can buy happiness only (roughly) logarithmically for yourself, but (almost) linearly in the world at large, given efficient targeting”.)

In some cases, we don’t need to worry about wonky thing->value functions. Imagine the three scenarios above, but instead of euros we have lives. Each life has the same value; there’s no reasonable argument for the thousandth life being worth less than the first. Simple EV reasoning is the right tool.


Why expected value?

This conclusion easily invites a certain hesitation. Any decision involving hundreds of lives is a momentous one; how can we be sure of exactly the right way to value these decisions, even in simplified thought experiments? What’s so great about EV?

A strong argument is that maximising EV is the strategy that leads to the greatest good over many decisions. In a single decision, a risky but EV-maximising choice can backfire – you might take a 50-50 bet that saves 1000 lives if it comes off, lose, and end up having done much worse than if you had chosen to certainly save 400. However, it’s a mathematical fact (the law of large numbers) that given enough such choices, the actual average value will tend towards the EV. So maximising EV is what results in the most value in the long run.
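For the sceptical, here is a crude Python simulation of that claim, using the invented numbers from the example above (1000 decisions and 2000 simulated runs are arbitrary choices):

import random

random.seed(0)
n_decisions = 1_000   # how many independent momentous choices you face
n_runs = 2_000        # how many times we simulate a whole run of such choices

def risky_total():
    # always take the 50-50 bet on saving 1000 lives (EV 500 per decision)
    return sum(1000 if random.random() < 0.5 else 0 for _ in range(n_decisions))

safe_total = 400 * n_decisions   # always take the certain 400
risky_wins = sum(risky_total() > safe_total for _ in range(n_runs))
print(f"risky strategy saves more lives in {risky_wins / n_runs:.1%} of simulated runs")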

You might argue that we’re not often met with dozens of similar momentous decisions. Say we’re reasonably confident the same choice will never pop up again, and certainly not many times; doesn’t the above argument no longer apply? Take a slightly broader view, though, and consider which strategy gets you the most value across all decisions you make (of which there will realistically be many, even if no single decision occurs twice): the answer is still EV maximisation. We could go on to construct crazier thought experiments – toy universes in which only one decision ever occurs, for example – and then the argument really does begin to break down (though you might try to save it by imagining the many hypothetical agents faced with the same choice, and applying a Kantian / rule-utilitarian principle: decide by asking which strategy would be right if it were the one adopted across all those countless hypothetical instances of the decision).

There are other arguments too. Imagine 1000 people are about to die of a disease, and you have to decide between a cure that will certainly cure 400 of them and an experimental one that, with equal probability, will either cure everyone or save no-one. Imagine you are one of these people. In the first scenario, you have a 40% chance of living; in the second, a 50% chance. Which would you prefer?

On a more mathematical level, von Neumann (an all-around polymath) and Morgenstern (co-founder of game theory with von Neumann) proved that, under fairly basic assumptions about what counts as rational behaviour, a rational agent acts as if they’re maximising the EV of some utility function.


Problems with EV

Diabolical philosophers have managed to dream up many challenges for EV reasoning. For example, imagine there are two dollars on the table. You toss a coin; if it’s heads you take the money on the table, if it’s tails the money on the table doubles and you toss again. You have a 1/2 chance of winning 2 dollars, a 1/4 chance of winning 4, a 1/8 chance of winning 8, and so on, for a total EV of 1/2 x 2 + 1/4 x 4 + … = 1 + 1 + … . The sum diverges to infinity.

Imagine a choice: one game of the “St. Petersburg lottery” described above, or a million dollars. You’d be crazy not to pick the latter.

Is this a challenge to the principle of maximising EV? Not in our universe. We know that whatever casino we’re playing at can’t have an infinite amount of money, so we’re wise to intuitively reject the St. Petersburg lottery. (This section on Wikipedia has a very nice demonstration of why, even if the casino is backed by Bill Gates’s net worth, the EV of the St. Petersburg game is less than $40.)
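The calculation is simple enough to sketch in Python. The doubling payoffs and halving probabilities are from the game as described above; the $100 billion bankroll is a stand-in assumption for “Bill Gates’s net worth”, and payouts are capped at whatever the casino actually has.

def st_petersburg_ev(bankroll):
    ev, payoff, prob = 0.0, 2.0, 0.5
    while payoff <= bankroll:
        ev += prob * payoff    # every round the casino can fully pay contributes exactly $1
        payoff *= 2
        prob /= 2
    ev += 2 * prob * bankroll  # all later rounds (total probability 2*prob) pay the capped bankroll
    return ev

print(f"EV with a $100 billion bankroll: ${st_petersburg_ev(100e9):.2f}")   # a little under $40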

The St. Petersburg lottery isn’t the weirdest EV paradox by half, though. In the Pasadena game, the EV is undefined (see the link for a definition, analysis, and an argument that such scenarios are points against EV-only decision-making). Nick Bostrom writes about the problems of consequentialist ethics in an infinite universe (or a universe that has a non-zero probability of being infinite) here.

There’s also the classic: Pascal’s wager, the idea that even if the probability of god existing is extremely low, the benefits (an eternity in heaven) are great enough that you should seek to believe in god and live a life of Christian virtue.

Unlike even Bostrom’s infinite ethics, Pascal’s wager is straightforwardly silly. We have no reason to privilege the hypothesis of a Christian god over the hypothesis – equally probable given the evidence we have – that there’s a god who punishes us exactly for what the Christian god rewards us for, or that god is a chicken and condemns all chicken-eaters to an eternity of hell. So even if you accept the mathematically dubious multiplication of infinities, Pascal’s wager doesn’t let you make an informed decision one way or another.

However, the general format of Pascal’s wager – big values multiplied by small probabilities – is the cause of much EV-related craziness, and dealing with such situations is a good example of how naive EV reasoning can go wrong. The more general case is often referred to as Pascal’s mugging, exemplified by the scenario (see link) where a mugger threatens to torture an astronomical number of people unless you give them a small amount of money.


Tempering EV extremeness with Bayesian updating

Something similar to Pascal’s mugging easily happens if you calculate EVs by multiplying together very rough guesses involving small probabilities and huge outcomes.

The best and most general approach to these sorts of issues is laid out here.

The key insight is to remember two things. First, every estimate is a probability distribution: if you measure a nail or estimate the effectiveness of a charity, the result isn’t just your best-guess value, but also the uncertainty surrounding it. Second, Bayesian updating is how you change your estimates when given new evidence (and hence you should pay attention to your prior: the estimate you have before getting the new information).

Using some maths detailed here, it can be shown that if your prior and measurement both follow normal distributions, then your new (Bayesian) estimate will be another normal distribution, with a mean (=expected value) that is an average of the prior and measurement means, weighted by the inverse variances of the two distributions. (Note that the link does it with log-normal distributions, but the result is the same; just switch between variables and their logarithms.)
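A minimal Python sketch of that update rule (the numbers below are made up; the only thing that matters is the inverse-variance weighting):

def bayes_update_normal(prior_mean, prior_var, meas_mean, meas_var):
    # posterior mean = average of prior and measurement means, weighted by inverse variances
    w_prior, w_meas = 1 / prior_var, 1 / meas_var
    post_var = 1 / (w_prior + w_meas)
    post_mean = post_var * (w_prior * prior_mean + w_meas * meas_mean)
    return post_mean, post_var

# sceptical prior centred on zero effect, with variance 1
print(bayes_update_normal(0.0, 1.0, 100.0, 1.0))       # precise measurement: estimate moves to 50
print(bayes_update_normal(0.0, 1.0, 100.0, 10_000.0))  # very rough guess: estimate barely moves (~0.01)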

Here’s an interactive graph that lets you visualise this.

The results are pretty intuitive. Let’s say our prior for the effectiveness of some intervention has a mean of zero. If we take a measurement with low variance, our updated estimate probability distribution will shift most of the way towards our new measurement, and its variance will decrease (it will become narrower):

Red is the probability distribution of our prior estimate. Green is our measurement. Black is our new belief, after a Bayesian update of our prior with the measurement. Dotted lines show the EV (=average, since the distributions are symmetrical) for each probability distribution. You can imagine the x-axis as either a linear or log scale.

If the same measurement has greater variance, our estimates shift less:


And if we have a very imprecise measurement – for example, we’ve multiplied a bunch of rough guesses together – our estimate barely shifts, even if the measurement itself is high:


Of course, we can argue about what our priors should be – perhaps, for many of the hypothetical scenarios with potentially massive benefits (for instance concerning potential space colonisation in the future), the variance of our prior should be very large, in which case even highly uncertain guesses will shift our best-guess EV a lot. But the overall point still stands: if you go to your calculator, punch in some numbers, and conclude you’ve discovered something massively more important than anything else, it’s time to think very carefully about how much you can really conclude.

Overall, I think this is a good example of how a bit of maths can knock off quite a few teeth from a philosophical problem.

(Here’s a link to a wider look at pitfalls of overly simple EV reasoning with a different framing, by the same author as this earlier link. And here is another exploration of the special considerations involved with low-probability, high-stakes risks.)


Risk neutrality

An implication of EV maximisation as a decision framework is risk neutrality: once you’ve measured things in units of what you actually care about (e.g. converting money to the value it has for you, as discussed above), you should be indifferent between a 10% chance of 10 “value units” and a 100% chance of 1, and you really should prefer a 10% chance of 11 value units over a certain 1, or a 50-50 bet between losing 10 and gaining 20 over a certain gain of 4.
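Written out (in Python, though it hardly needs it), the arithmetic behind those comparisons is just expected value in value units:

print(0.10 * 10)             # 1.0, the same EV as a certain 1
print(0.10 * 11)             # 1.1, better than a certain 1
print(0.5 * -10 + 0.5 * 20)  # 5.0, better than a certain 4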

This is not an intuitive conclusion, but I think we can be fairly confident in its correctness. Not only do we have robust theoretical reasons for using EV, but we can point to specific bugs in our brains that make us balk at risk neutrality: biases like scope neglect, which makes humans underestimate the difference between big and small effects, or loss aversion, which makes losses more salient than gains, or a simple preference for certainty.


Stochastic dominance (an aside)

Risk neutrality is not necessarily specific to EV maximisation. There’s a far more lenient, though also far more incomplete, principle of rational decision making that goes under the clumsy name of “stochastic dominance”: given options $$A$$ and $$B$$, if the probability of a payoff of $$X$$ or greater is at least as high under option $$A$$ as under option $$B$$ for every value of $$X$$ (and higher for at least one), then $$A$$ “stochastically dominates” $$B$$ and should be preferred. It’s very hard to argue against stochastic dominance.

Consider a risky and a safe bet; to be precise, call them option $$A$$, with a small probability $$p$$ of a large payoff $$L$$ (and nothing otherwise), and option $$B$$, with a certain small payoff $$S$$. Assume that $$pL > S$$, so EV maximisation says to take option $$A$$. However, we don’t have stochastic dominance: the probability of getting at least some small amount of value $$v$$ ($$v < S$$) is greater with $$B$$ than with $$A$$, whereas the probability of getting at least some large amount ($$S < v < L$$) is greater with option $$A$$.

The insight of this paper (summarised here) is that if we care about the total amount of value in the universe, are sufficiently uncertain about this total amount, and make some assumptions about its distribution, then stochastic dominance alone implies a high level of risk neutrality.

The argument goes as follows: we have some estimate of the probability distribution $$U$$ of value that might exist in the universe. We care about the entire universe, not just the local effects of our decision, so what we consider is $$A + U$$ and $$B + U$$ rather than $$A$$ and $$B$$. Now consider an amount of value $$v$$. The probability that $$A + U$$ exceeds $$v$$ is the probability that $$U > v$$, plus the probability that $$(v - L) < U < v$$ and $$A$$ pays off $$L$$ (we called this probability $$p$$ earlier). The probability that $$B + U$$ exceeds $$v$$ is the probability that $$U > v - S$$.

Is the first probability greater? This depends on the shape of the distribution of $$U$$ (to be precise, we’re asking whether $$P(U > v) + p P(v - L < U < v) > P(U > v - S)$$, which clearly depends on $$U$$). If you do a bunch of maths (which is present in the paper linked above; I haven’t looked through it), it turns out that this is true for all $$v$$ – and hence we have stochastic dominance of $$A$$ over $$B$$ – if the distribution of $$U$$ is wide enough and has a fat tail (i.e. trails off slowly as $$v$$ increases).
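Here is a rough numerical check of that claim in Python. All of the numbers are made up: option $$A$$ pays $$L$$ with probability $$p$$ (and nothing otherwise), option $$B$$ pays $$S$$ for sure, and the background value $$U$$ is modelled, purely for illustration, as a very wide heavy-tailed (Cauchy) distribution; the paper’s own argument is more careful about what $$U$$ should look like.

import numpy as np
from scipy.stats import cauchy

p, L, S = 0.01, 1000.0, 5.0        # note p * L > S, so EV favours A
U = cauchy(loc=0.0, scale=1e6)     # assumed background value: huge spread, fat tails

v = np.linspace(-5e6, 5e6, 100_001)               # grid of value thresholds
prob_A_exceeds = p * U.sf(v - L) + (1 - p) * U.sf(v)
prob_B_exceeds = U.sf(v - S)

gap = np.max(prob_B_exceeds - prob_A_exceeds)
print(f"max of P(B+U > v) - P(A+U > v) over the grid: {gap:.1e}")
# A maximum at or below zero on the grid is consistent with A stochastically
# dominating B; shrink U's scale far enough and the dominance disappears.

With these particular numbers the maximum comes out (slightly) negative, i.e. the risky option comes out ahead at every threshold on the grid; make $$U$$ much narrower and it no longer does.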

What’s especially neat is that this automatically excludes Pascal’s mugging. The smaller the probability $$p$$ of our payoff is, the more stringent the criteria get: we need a wider and wider distribution of $$U$$ before $$A$$ stochastically dominates $$B$$, and at some point even the most committed Pascalian must admit $$U$$ can’t plausibly have that wide a distribution.

It’s far from clear what $$U$$’s shape is, and hence how strong this reasoning is (see the links above for that). However, it is a good example of how easily benign background assumptions introduce risk neutrality into the problem of rational choice.


Implications of risk neutrality: hits-based giving

What does risk neutrality imply about real-world altruism? In short, that we should be willing to take risks.

A good overview of these considerations is given in this article. The key point:

[W]e suspect that, in fact, much of the best philanthropy is likely to fail.

For example, GiveWell thinks that the Deworm the World Initiative probably has low impact, but still recommends it as one of its top charities because there’s a chance of massive impact.

Hits-based giving comes with its own share of problems. As the article linked above notes, it can provide a cover for arrogance and make it harder to be open about decision-making. However, just as high-risk high-reward projects make up a disproportionate share of successes in scientific research and entrepreneurship, we shouldn’t be surprised if the bulk of returns on charity comes from a small number of risky bets.

 


EA ideas 1: rigour and opportunity in charity

2.2k words (8 minutes)

Effective altruism (EA) is about trying to carefully reason how to do the most good. On the practical side, EA has inspired the donation of hundreds of millions of dollars to impactful charities, and led to many new organisations focused on important causes. On the theoretical side, it has led to rigorous and precise thought on ethics and how to apply it in the real world.

The intellectual work that has come out of EA is valuable, especially in two ways.

First, much EA work is exceptional in the breadth and weight of the matters it considers. It is interdisciplinary, including everything from meta-ethics to interpreting studies on the effectiveness of vaccination programs in developing countries. Because of its motivation – finding and exploring the most important problems – it zeros in on the weightiest issues in any particular area. EA work is a goldmine of interesting writing, particularly if you find yourself drawn in a discipline-agnostic way to all the biggest questions.

Second, EA brings a scientific precision of argument that is often missing from discussions of abstract things (e.g. meta-ethics) or emotionally charged issues (e.g. saving lives).

This post explains the motivations behind EA, and has a table of contents for this post series.


Altruism, impartial welfarist good, and cause neutrality

I will have more to say in a later post about specific philosophical issues in defining what is moral. For now I will hope that the idea of an impartial welfare-oriented definition of good is sufficiently defensible that I will not be mauled to death by moral philosophers before that post (though if it doesn’t happen by then, it will certainly happen afterwards).

Impartial (in the sense of considering everyone fairly, and giving the same answer regardless of who’s doing the judging) and welfare-oriented (in the sense of valuing happiness, meaning, fulfilment of preferences, and the absence of suffering) good is an intuitive and fairly unobjectionable idea. Yet if we take it as a goal, it points towards a different idea of charity than the current norm.

Most charities are single-issue charities. This generally makes sense: better to have one organisation be really good at distributing malaria nets and one really good at advocating for taking nuclear weapons off high alert, than to have one organisation doing a mediocre job at both (malaria net delivery via ICBM?).

But the siloing of causes often goes further. If the effectiveness of an intervention is considered, it is often after choosing a cause area. To weigh cause areas against each other, to judge the needs of African children against, say, factory farmed pigs, seems like a faux pas at best, and a sin at worst (for a particularly incendiary tirade on the topic, see this article).

However, if we hold ourselves to an impartial welfarist idea of good, this judgement must be made. An artist might choose what to paint based on how they want to express themselves or on a sudden flash of inspiration. A would-be altruist refusing to weigh causes against each other and instead selecting them on the basis of passion or inspiration is acting like our artist. In the artist’s case it doesn’t matter, but the altruist, in doing so, implicitly values their own choice and/or self-expression over the good that their actions might do. This is not altruism by our definition of good.

Of course, people differ in their knowledge and talents, and these tend to align with inspiration. In the real world, it may well be that your greater ability, drive, and/or knowledge in one area outweighs the greater efficiency at which results convert to goodness in some other area. We will also see arguments for not placing all our bets on the same cause, and explore the enormous uncertainties that come in trying to compare causes. But the idea of cause-neutrality – that causes are comparable, and that making these comparisons is an important part of the job of any would-be altruist – remains.


Effectiveness

Focusing on the idea of impartial welfarist good also makes it clear that, in trying to do good, we should focus on the good our actions result in. This may seem like an obvious statement, but it is not true of much charitable work.

For example, we tend to emphasise the sacrifices of the donor over the benefits of the recipients. Consider old tales of people like Francis of Assisi. Their claim to virtue (and sainthood) comes from giving away all their possessions, but the question of how much good this did to the beggars doesn’t come up. This attitude continues in the many modern charity evaluators that focus on metrics like percentage of money spent on overhead costs. Paying big salaries to recruit the best management and administration may genuinely be a cost-effective way of increasing the total good done, but it conflicts with our stereotype of self-sacrificing do-gooders. Of course, there is virtue in selfless sacrifice, but we should remember that the goal of charity is to make recipients better off, not to rank donors.

As with many things humans do, acts of charity often aren't based on rational calculation. Some consider this a good thing: altruistic acts should come from hearts, not spreadsheets. This is wrong – if you care about impartial welfarist good.

It is a fact about our world that good charity is hard, and that charities have vast differences in cost-effectiveness. When one charity results in ten or a hundred times more healthy years of life per dollar spent than another, boring details of statistical effectiveness become important moral facts. (This is true not just of charities, but most kinds of projects that might impact many people – government policy, activism, and so on.)

When the difference in effectiveness between different interventions is often greater than the difference to doing nothing at all, and when these differences are often measured in lives, effectiveness considerations are critical in any attempt to do good.

There is a role for simple, comforting altruism, but this role isn’t making big decisions over how to benefit others. These decisions deserve more than goodwill. They deserve to be made right.


Opportunity

Debates over charitable giving often centre on questions of moral duty and obligation (a good example is Famine, Affluence, and Morality, Peter Singer’s classic paper that laid some of the foundations of what later became EA).

Another framing is to think of it as an opportunity. To someone who cares about impartial welfarist good, altruistic acts are not a burden but an opportunity to achieve valuable things. In particular, there are many reasons to think that we (as in developed-world humans of the early 21st century) have an exceptionally large opportunity to do good.

First, our values are better than those of people in preceding eras. This statement implies many philosophically contentious points, but for the time being I will not defend them, instead appealing to what I hope is a common-sense conviction: human morality isn’t so relative that it is impossible to differentiate modern secular humanist values from values that support war, slavery, and boundaries on personhood that exclude most people.

(Of course, this statement also suggests that our current moral views are far from perfect too. This is important, very likely true, and will be discussed at length in future posts. The fact that this is increasingly recognised is hopefully a hint that we are at least on the right track.)

Second, we have more resources than people in previous eras. There is also large variation in global income, meaning that if you happen to live in a rich country, you can help many others for cheap. A 2-adult, 1-child UK household with a total income of £30,000 is in the top 10% of the world income distribution and 7 times richer than the median global household.

Third, knowledge of what is effective has increased, and technology makes it easier to apply this knowledge. Today, GiveWell’s thorough charity research can multiply the impact of giving. Twenty years ago, there was no GiveWell. Two hundred years ago, donation guidance, if it existed, might have consisted of the church telling you to donate to it so it could convert people and push its social values.

Fourth, we may have an unprecedented ability to affect where civilisation is headed (for thoughts on this topic, see for example this link). The steepness of technological advancement increases the variance of possible future outcomes: in the next few decades we might nuke each other or engineer a pandemic – or we might set ourselves on a trajectory towards becoming a sustainable civilisation with billions of happy inhabitants that lasts until the stars burn down. Past eras didn’t have similar power, and if the future goes well humanity will no longer be as vulnerable to catastrophe as we are today, so people living roughly today might have exceptional leverage.


Common EA cause areas

The cause areas most frequently seen as important, and most specific to EA relative to what other charities focus on, are:

  • Global poverty, because the developing world is big, poor, and has many tractable problems with well-researched solutions.
  • Animal welfare, because it is largely ignored, and potentially huge in scope (depending on how much animal lives are valued).
  • Existential risk: focusing on avoiding human extinction or other irreversible civilisational collapse, because new technologies (AI and biotech in particular) make them scarily plausible. (Sometimes this is motivated even more strongly by long-termism: specifically caring about the overwhelming number of happy future lives that may come to exist over the long-term future if we don't mess things up.)

These are far from the only cause areas discussed in EA. Many EA-affiliated people argue either against some of the above, for the overwhelming importance of one relative to the others, or for entirely different causes.


Effective altruism in practice

In practice, EA can seem weird and theoretical.

The main reason for EA weirdness is that it casts a wide net. Everyone agrees that international peacekeeping is an important project, and also a serious one: it doesn’t get much more serious than world leaders intervening to get men with big guns to have big talks about their big disputes. On the other hand, the colonisation of space is important, but seems to have very little gravitas indeed; it’s something out of a science fiction novel. However, just as it’s a brute fact about the world that there are lots of violent people with big guns, it’s also a brute fact that space is big; both of these facts should be taken seriously when considering the long-run future. There might be a clear line between sci-fi and current affairs in a bookshop, but reality doesn't care about genre.

More generally, it’s important to keep in mind that every moral advance started out as a weird idea (for example, it was once considered crazy to suggest that women should get to vote).

Parts of EA are very theoretical. This, too, is by design. Future posts will show many cases where the way we resolve a very abstract issue has a big impact on which practical action is right – and in many of these cases it is unclear what the right resolution is. Finding out clearly matters.

If EA seems too theoretical or mathematical to you, consider two points. First, whatever the field, doing complex things in the real world tends to involve (or be built on) theoretical heavy lifting. Second, most charity efforts don’t pay much attention to theoretical issues; EA is at very least a helpful counterweight, and likely to uncover missed opportunities.

Whenever the goal is to do good, it is easy to be overwhelmed by feelings of righteousness and forget theoretical scruples. Unfortunately we don’t live in the simple world where what feels right is the same as what is right.

The core of effective altruism is not any particular moral theory or cause area, but a conviction that doing good is both important and difficult, and hence worthy of thought.


This post series:

  1. Rigour and opportunity in charity: this post.
  2. Expected value and risk neutrality: a rational agent maximises the expected value of what it cares about. Expected value reasoning is not free of problems, but, outside extreme thought experiments and applied carefully, it clears most of them, including "Pascal's mugging" (high-stakes, low-probability situations). Expected value reasoning implies risk neutrality. The most effective charity may often be a risky one, and gains from giving may be dominated by a few risky bets.
  3. Uncertainty: we are uncertain about both what is right and what is true (being mindful of the difference is often important). Moral uncertainty raises the question of how we should act when we have credence in more than one moral theory. Uncertainty about truth has many sources, including ones broader than uncertainty about specific facts, such as our biases or the difficulty of confirming some facts. These uncertainties suggest we are unaware of huge problems and opportunities.
  4. Utilitarianism: while not a necessary part of EA thinking, utilitarianism is the most successful description of the core of human ethics so far. In principle (if not practice, due to the complexity of defining utility), it is capable of deciding every moral question, an important property for a moral system. Our moral progress over the past few centuries can be summarised as a transition to more utilitarian morality.


(More coming)