2022-08-20

Effective Altruism in practice

 6.5k words (~17 minutes)

 

I've written about key ideas in Effective Altruism before. But that was the theory. How did EA actually come to exist, and what does it look like in practice?

 

... turns out it looks like a stylised light bulb with a heart.

 

Summary

  • The ideas underpinning EA came from many sources, including:

    • late-20th-century analytic moral philosophers like Peter Singer and Derek Parfit;
    • futurist/transhumanist thinkers like Nick Bostrom and Eliezer Yudkowsky focusing on risks from future technologies;
    • a few people working on evaluating charity effectiveness;
    • efforts starting around 2010 by a few Oxford philosophers including William MacAskill and Toby Ord that, sometimes unwittingly, gave structure and a name to a diverse cluster of ideas about how to maximise your positive impact.
  • Though EA is framed around the question of "what does the most good (according to an analytic and often quantitative framework based on impartial welfare-oriented ethics)?" rather than any particular answer to that question, in practice much (but not all!) EA effort focuses on one of the following, because many people have decided it is a particularly pressing and (outside EA) neglected problem:

    • reducing the risk of civilisation-wide catastrophe, especially from emerging technologies like advanced AI and biotechnology;
    • health and development in poor countries; and
    • animal welfare.
  • There is also a lot of work at the meta-level, including figuring out how people can have impactful careers, and trying to direct effort towards the above problems.
  • The funding for most EA-related projects and EA-endorsed charities comes from a combination of:

    • many individual small donors, in particular:

      • people who have taken the Giving What We Can pledge and therefore donate at least 10% of their income to highly effective charities;
      • people who explicitly pursue "earning-to-give" (getting a high-paying job in order to donate most of the proceeds to charities);
    • several foundations that derive their wealth from billionaires, including most prominently:

      • Open Philanthropy, mostly funded by Dustin Moskovitz who made his wealth from being a Facebook co-founder; and
      • FTX Foundation, funded by Sam Bankman-Fried and several other early employees at the crypto exchange FTX.
  • There is no monolithic EA organisation (though the Centre for Effective Altruism organises some common things like the EA Global conferences), but rather a large collection of organisations that mainly share:

    • a commitment to maximising their positive impact on the world;
    • a generally rigorous and quantitative approach to doing so; and
    • some link to the cluster of people and organisations in Oxford that first named the idea of Effective Altruism.
  • There are also many charities that have no direct relation to the EA movement, but were identified by charity evaluators like GiveWell as extremely effective, and have thus been extensively funded.
  • EA is very good at attracting talented people, especially ambitious young people at top universities.

  • EA culture leans intellectual and open, and places a strong emphasis on "epistemic rigour", i.e. being very careful about trying to figure out what is true, acknowledging and reasoning about uncertainties, etc.

  • Some "axes" within EA include:

    • "long-termists" who focus on possible grand futures of humanity and the existential risks that stand between us and those grand futures, and "near-termists" who work on clearer and more established things like global poverty and animal welfare;
    • a bunch of people and ideas all about frugality and efficient use of money, and another bunch of people and ideas about using the available funding to unblock opportunities for major impact; and
    • a historical tendency to be very good at attracting philosophy/research-type people who like wrestling with difficult abstract questions, versus a growing need to find entrepreneurial, operations, and policy people to actually do things in the real world.

The philosophers

In the beginning (i.e. circa the 1970s, when time is widely known to have begun), there were a bunch of philosophers doing interesting work. One of them was Peter Singer. Peter Singer proposed questions like this (paraphrasing, not quoting, and updated with recent numbers):

Imagine you're wearing a $5000 suit and you walk past a child drowning in a lake. Do you jump into the lake and save the child, even though it ruins your suit?

If you answered yes to the above, then consider this: it is possible to save a child's life in the developing world for $5000; what justification do you have for spending that money on the suit rather than saving the life?

The only difference between the two scenarios seems to be distance to the dying child (and method of death, etc., but ssshh); is that distance really morally significant?

(He is also known for arguing in favour of animal rights and abortion rights.)

Derek Parfit is another. He is particularly famous for the book Reasons and Persons, in which he asks questions (paraphrasing again) like this:

Is a moral harm done if you cause fewer people to exist in the future than otherwise might have? How should we reason about our responsibilities to future generations and non-existing people more generally?

Does there exist a number of people living mediocre (but still positive) lives such that this world is better than some smaller number of people living very good lives?

(He also talks about problems in the philosophy of personal identity, and the contradictions in moral philosophies based on self-interest.)

The transhumanists

Then, largely separately and around the 1990s, there came the transhumanists ("transhumanism" is a wide-reaching umbrella term for humanist thinking about radical future technological change). Perhaps the most notable are Nick Bostrom and Eliezer Yudkowsky.

Nick Bostrom thought long and hard about many wacky-seeming things with potentially cosmic consequences. He popularised the simulation hypothesis (the idea that we might all be living in a computer simulation). He argues against death (something I strongly agree with). He did lots of work on anthropic reasoning, which is about how we should update on information we get about the state of the world, taking into account that we wouldn't exist unless the state of the world allowed it. This leads to some thought experiments that I'd classify as infohazards because of their tendency to spark an unending discussion whenever they're described. Conveniently, he also coined the term "infohazard".

Most crucially for EA, though, Bostrom has worked on understanding existential risks, which are events that might destroy humanity or permanently and drastically reduce the capacity of humanity to achieve good outcomes in the future. In particular, he has worked on risks from advanced AI, which he boosted to popularity with the 2014 book Superintelligence.

Bostrom's style of argument is like a dry protein bar, leaning toward straightforward extrapolation of conclusions from premises, especially if the conclusions seem crazy but the premises seem self-evident. Sometimes, though, he does apply some literary flair to make an important point, and also occasionally writes poetry.

Eliezer Yudkowsky wanted to create a smarter-than-human AI as fast as possible, until he realised this might be a Bad Idea and said "oops" and switched to the problem of making sure any powerful AIs we create don't destroy human civilisation. He founded the Machine Intelligence Research Institute (MIRI) to work on that problem.

Yudkowsky also wrote a massive series of blog posts to try to teach people about how to reason well (for example, he covers a lot of ground from the cognitive biases literature), and then went on to try to convey the same lessons in what became the most popular work of Harry Potter fanfiction of all time. His writing and argument style tends toward flowing narratives that are usually both very readable and verbose (though whether you like them is quite hit-or-miss).

He has Opinions (note the capital). He is extremely pessimistic about the chances of solving the AI alignment problem.

Yudkowsky is affiliated much more strongly with the loose "Rationalist community" than with EA. This is a collection of online blogs that was sparked by Yudkowsky's writing, and later in particular also that of Scott Alexander, who has become internet-famous for his own reasons too. The central forum is LessWrong. Both EA and Rationalism involve lots of discussion about far-ranging abstract ideas that (for a certain type of person) are hard to resist; one blogger says "[t]he experience of reading LessWrong for the first time was brain crack" and goes on to propose that EA ideas are best spread by nerd-sniping (i.e. telling people about ideas they find so interesting that they literally can't help but think about them). Both EA and the Rationalists put an incredible amount of effort and weight into reasoning well, avoiding biases and fallacies, and being careful (and often quantitative) about uncertainties. However, EA focuses more on applying those things to do good in the real world to real people, while the Rationalist vibe is sometimes one of indulging in theorising and practising good thinking for their own sake. (This is not necessarily a criticism - I had fun discussing Lisp syntax in the comments section of the LessWrong version of my review of Structure and Interpretation of Computer Programs, even though arguing about parentheses isn't exactly going to save the world (or is it ... ?)). EA also tends to have a more explicit orientation towards seeking influence.

(I should also note that on the specific topic of AI risk, the Rationalist community is extremely impact-oriented, likely due to founder effects - or perhaps because AI risk is the EA cause area that is most full of juicy technical puzzles and philosophical confusions.)

More philosophers & EA gets a name

Brian Christian's The Alignment Problem mentions in chapter 9 some funny details about the sequence of events that led to the first few EA-by-name organisations. In 2009, then-Oxford-philosophy-student Will MacAskill had an argument about vegetarianism while in a broom closet. Unlike most arguments about vegetarianism, and echoing the vibe of much future EA thinking, this one was on the meta-level; the debate was not whether factory farming is bad, but how we should deal with the moral uncertainty around whether or not factory farming is ethical. MacAskill eventually started talking with Toby Ord (though in a graveyard rather than a broom closet), another philosophy student interested in questions around moral uncertainty.

Together with a third author, the two of them wrote a book on moral uncertainty. MacAskill and a philosophy-and-physics student called Benjamin Todd founded an organisation called 80 000 Hours to try to figure out how people can choose careers to have the greatest positive impact on the world. Toby Ord founded an organisation called Giving What We Can (GWWC) that encourages people to donate 10% of their income to exceptionally effective charities. GWWC estimates its roughly 8000 members have donated $277mn, and are likely to donate almost $3bn over their lifetimes.

As an umbrella organisation for both of these, they created the Centre for Effective Altruism. Originally the "Effective Altruism" part was intended purely as a descriptive part of the organisation's name, but it eventually came to stand more broadly for the general space of effectively altruistic efforts that at some point interacted with ideas from the original Oxford cluster.

Later, MacAskill wrote a book called Doing Good Better summarising ideas about why charity effectiveness is important and counterintuitive. Ord in turn wrote The Precipice, which summarises ideas about how mitigating existential risks to human civilisation is likely a key moral priority; after all, it would be bad if we all died.

Charity evaluators and billionaires

Independently from (and before) anything happening in Oxford broom closets, in 2006 hedge fund employees Holden Karnofsky and Elie Hassenfeld started thinking seriously about which charities to donate to. Upon discovering that this is a surprisingly hard problem, they founded GiveWell, an organisation focused on finding exceptionally effective charities. They ended up concentrating on global health (their list includes malaria prevention, vitamin supplementation, and cash transfers, all in developing countries).

A few years into GiveWell's existence, they were put in touch with Dustin Moskovitz and Cari Tuna. At the time, Facebook co-founder Moskovitz was the world's youngest self-made billionaire, and with his partner Tuna had started a philanthropic organisation called Good Ventures in 2011.

What followed was a cinematic failure of prioritisation, as recounted by Holden Karnofsky himself in this interview. The GiveWell founders decided that "[meeting the billionaires] just doesn't seem very high priority", and thought that "[n]ext time someone's in California we should definitely take this meeting, but [...] this isn't the kind of thing we would rush for [...]". However, Karnofsky realised this meeting was an excellent excuse to go on a date with a Californian he fancied (and later married), and as a result ended up making the trip sooner rather than later.

Moskovitz and Tuna turned out to have very simple preferences for charitable giving: they just wanted to do the most good possible. This was an excellent fit with GiveWell's philosophy, and soon Good Ventures partnered with GiveWell in what would later become Open Philanthropy (of which Karnofsky would become co-CEO). Open Philanthropy is a key funder of EA projects, though they fund unrelated things as well (always through a very EA lens of trying to rigorously and quantitatively maximise impact). They list all their grants here.

While studying physics at MIT, Sam Bankman-Fried (or "SBF"), already deeply interested in consequentialist moral philosophy, attended a talk by Will MacAskill on EA ideas. After stints at trading companies and the Centre for Effective Altruism, he founded the crypto-focused trading companies Alameda Research and then FTX, and ended up becoming the richest under-30 person in the world. (Though then the value of FTX fell in the crypto crash, and he recently turned 30 to boot.)

EDIT: In November 2022, both FTX and Alameda Research collapsed in a matter of days, and it became clear that FTX had committed major and flagrant financial fraud by transferring customer funds to Alameda, which Alameda then speculated with, and seems to have lost to the tune of billions of dollars. SBF is facing criminal charges. FTX and SBF have been condemned in harsh terms by those running many EA orgs and in countless EA Forum posts. Obviously, FTX and SBF have now very clearly become examples of what NOT to do. All of the following seem true: (a) our prior should be that people committing illegal and immoral actions that lead to extreme wealth and prestige for themselves are most likely acting mostly for the standard boring selfishly-evil reasons, (b) SBF probably had an easier time justifying his crimes because of the story that he could tell himself about doing good for the world, (c) publicly associating himself with EA, and receiving positive attention from EA organisations, helped make SBF appear moral and trustworthy, (d) there existed signals (in particular reports from Alameda's early days about cut-throat behaviour from SBF) that provided evidence about SBF's character before the FTX collapse, and (e) it is generally harder than it seems in hindsight to be right about whether a business is fraudulent (consider that countless venture capitalists poured billions into FTX, and presumably had an incentive to figure out if the entire thing was a scam). More information will come to light with time, and there are definitely lessons to be learned. Apart from this paragraph, I have not changed any part of this post.

SBF often emphasises that you're more likely to achieve outlier success in business if your goal is to donate the money effectively. There's little personal gain in going from $100M to $10B, so a selfish businessperson is likely to optimise something like "probability I earn more than [amount that lets me do whatever the hell I want for the rest of my life]", while a (mathematically-literate) altruistic one is far more compelled to simply shoot for the highest expected-value outcomes, even if they're risky. (The exception is the selfish businessperson who really likes competing in the billionaire rankings.)

SBF has also said - and is living proof of - the idea that if your strategy to do good is to earn money to donate, you should probably aim for the risky but high-value bets (e.g. starting a company and becoming a billionaire), rather than going into some high-paying finance job earning a crazy-high but non-astronomical salary. Many people persuaded by EA ideas have done the latter, but SBF contributed more than all of them combined. The maths probably still works out even after accounting for the fact that SBF's route was far more unlikely to work than a finance job (he thought FTX had an 80% chance of failure). This post argues so. Wave, a fintech-for-Africa company with strong EA representation in its founding team and a $1.7B valuation in 2021, is another example of EA business success.
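
To make that expected-value reasoning concrete, here is a toy calculation. The 80% failure chance is the figure attributed to SBF above; every other number is made up purely for illustration:

```python
# Toy expected-value comparison for earning-to-give routes.
# Illustrative numbers only; the 80% failure chance is the figure
# attributed to SBF above, the rest are hypothetical.

safe_salary_donations = 10e6         # lifetime donations from a high-paying finance job ($)

p_startup_success = 0.2              # 1 - 0.8 chance of failure
startup_donations_if_success = 10e9  # donations if the risky bet pays off ($)
startup_donations_if_failure = 0     # assume nothing to donate otherwise

ev_safe = safe_salary_donations
ev_risky = (p_startup_success * startup_donations_if_success
            + (1 - p_startup_success) * startup_donations_if_failure)

print(f"Expected donations, safe route:  ${ev_safe:,.0f}")   # $10,000,000
print(f"Expected donations, risky route: ${ev_risky:,.0f}")  # $2,000,000,000
```

The asymmetry described above comes from the fact that donated dollars have roughly linear value to recipients, whereas personal wealth has sharply diminishing returns, so a selfish optimiser would not take the same gamble.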

SBF and other senior FTX people (many of whom care deeply about EA ideas) launched the FTX Foundation, which in particular contains the Future Fund, which has quickly become a key funder of the more future-oriented and speculative parts of EA.

These days, being associated with tech billionaires isn't a ringing endorsement. However, consider a few things. First, the tech billionaires aren't the ones who came up with the ideas or set the agendas. Sports car enthusiast and sci-fi nerd Elon Musk decided that sexy cars and rockets are the most important projects in the world and directed his wealth accordingly; Moskovitz, SBF, & co. were persuaded by abstract arguments and donate their wealth to foundations where the selection of projects is done by people more knowledgeable in that than they are. Second, it seems unusually likely that the major EA donors really are sincere and committed to trying to do the most good; after all, if they wanted to maximise their popularity or acclaim, there are better ways of doing that than funding a loose cluster of people often trying to work specifically on the least-popular charitable causes (since those are most likely to contain low-hanging fruit). Finally, if some tech billionaires endorsing EA is evidence against EA being a good thing, then no tech billionaires endorsing EA must be evidence in favour of EA being a good thing. However cynical you are about tech billionaires, they're still smart people, so a few of them going "huh, this is the type of thing I want to spend all my wealth on" should be more promising than all of them going "nope I don't buy this".
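
The "evidence against / evidence in favour" step in that last argument is just a consequence of the law of total probability; a minimal sketch, writing G for "EA is a good thing" and E for "some tech billionaires endorse EA":

```latex
\[
  P(G) \;=\; P(G \mid E)\,P(E) \;+\; P(G \mid \neg E)\,P(\neg E)
\]
% P(G) is a weighted average of P(G | E) and P(G | ~E), so if
% endorsement lowers the probability that EA is good (P(G | E) < P(G)),
% then non-endorsement must raise it (P(G | ~E) > P(G)), and vice versa.
```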

(If EA has some top tech business people, why doesn't it have some top political people too, or even funders from outside tech? My guess is a combination of factors. Politicians skew old while EAs skew young (partly because EA itself is young). Both EAs and tech people tend to be technically/mathematically/intellectually-inclined (though many areas within EA are specifically about social science or the humanities). Both EAs and tech people tend to care less than average about social norms or prestige, while politicians tend to be selected out of the set of people who are willing to optimise very hard for prestige and popularity. Also, expect some policy-related efforts from EA; many EAs work or aim to work in non-political policy roles, and there have even been some political efforts, though there is much to learn in that field.)

Organisations

In addition to the previously-mentioned CEA, 80 000 Hours, Giving What We Can, GiveWell, Open Philanthropy, and FTX Foundation, organisations with a strong EA influence include (but are not limited to):

  • A large number of think-tanks and research institutes, especially ones where people think about the end of the world all day.

  • A large number of animal welfare charities, which I won't bother listing, except to point out the meta-level Animal Charity Evaluators.

  • A large number of global health charities, ranging from ones that are simply highly recommended (and funded) by GiveWell (in particular the Against Malaria Foundation, which routinely tops GiveWell rankings) to ones that also trace their roots solidly to EA.

  • Organisations working on AI risk, including:

    • Anthropic, working on interpreting machine learning models (a program led by Chris Olah) and more general empirically-grounded, engineering-based machine learning safety research.
    • Redwood Research, a smaller company also doing empirical machine learning safety work (and running great ML bootcamps on the side).
    • Centre for Human-compatible AI (CHAI), a research institute at UC Berkeley.
    • Machine Intelligence Research Institute (MIRI), the original AI safety organisation that was founded in 2000 and hence managed to snap up the enviable domain name "intelligence.org". MIRI's research leans much more mathematical and theory-based than that of most other AI alignment organisations.
    • Conjecture, a new organisation focusing on the work that is most relevant if advanced AI is surprisingly close.
    • (OpenAI and DeepMind, the two leading AI companies, both have safety teams that include people very committed to working on existential risk concerns. However, neither is primarily an AI safety company, and both weight advanced AI risks at a company-level less than the other companies on this list. OpenAI in particular currently sees AI risks more through the near-term lens of making sure AI systems and their benefits are widely accessible to everyone, rather than focusing on making sure AI systems don't doom us all (though I guess that too would be a suitably equitable outcome?).)
  • Alvea, a recent vaccine startup, with the eventual goal of enabling faster vaccine roll-out in the next pandemic.

  • Charity Entrepreneurship, a charity incubator that has incubated many charities, including for example Healthier Hens (farmed chicken welfare), the Happier Lives Institute (helping policymakers figure out how to increase people's happiness), and Lead Exposure Elimination Project (working to reduce lead exposure in developing countries).

  • SparkWave, an incubator for software companies that are solving important problems.

  • Effective Thesis, trying to save students from writing pointless theses.

  • Founders Pledge, which helps entrepreneurs commit to giving away money when they sell their companies, and to donating that money effectively (not to be confused with the more famous Giving Pledge). (So far, about $475M has been donated this way.)

  • Legal Priorities Project, which looks at the legal aspects of trying to do everything else.

  • ALLFED (ALLiance to Feed Earth in Disasters), which aims to be useful in situations where hundreds of millions of people or more are suddenly without food, and which has successfully found the best conceivable name for an organisation that does this.

  • Our World in Data (OWID), the world's best provider of data and graphs on important global issues. I'm not quite sure how interrelated they are with EA directly, but their founder posts on the EA Forum about OWID articles on very EA-related ideas, so there's definitely some overlap.

  • All-Party Parliamentary Group for Future Generations in the UK Parliament.

  • A bunch of organisations focused on getting people interested in the world's biggest problems and teaching them various skills:

    • Atlas Fellowships, a recent initiative for high-schoolers.
    • A collection of Existential Risk Initiatives running, among other things, summer internships where people (mostly undergraduate/postgraduate students) work with mentors on existential risk research: SERI (Stanford), CHERI (Switzerland), CERI (Cambridge), and a newer one at the University of Chicago which I can't yet find a website for, but which will almost certainly not help with the naming situation when it arrives. Thankfully, rumours say there will soon be a YETI (Yale Existential Threats Initiative), which is a cool and (thank god!) unconfusable name.

Since EA is not a monolithic centralised thing, there is plenty of fuzziness in what counts as an EA organisation, and definitely no official list (and therefore if you're reading this and your org is not on the list, you shouldn't complain - many great orgs were left out). The common features among many of them are:

  • Some causal link to stuff that at some point interacted with the original Oxford cluster.
  • Emphasis on taking altruistic actions with a focus on effectiveness.
  • Emphasis on quantifying the impact of altruistic actions.
  • Emphasis on a scope that is in some way particularly wide-ranging or unconventional, either in sheer size or time (existential risks, the long-run future), geography (focusing on the entire world and often particularly developing countries rather than the organisation's neighbourhood), or in what is cared about (farmed animal welfare, wild animal welfare, the lives of people in the far future, and whatever the hell these people are doing).

The biggest EA events are the Effective Altruism Global (EAG) conferences organised by CEA. These usually happen several times a year, mostly in the UK and the Bay Area, though locally-organised EAGx conferences have more diverse locations.

The Situation

EA has a strong presence especially at top universities. There are large and active EA student groups in the Bay Area, Cambridge, Oxford, and London, but also increasingly New York, Boston, and Berlin, and many smaller local groups (you can find them listed here). The profile of EA in the general public is very small. However, the concentration of talent is extremely high. Add to this the existence of funding bodies with tens of billions of dollars of assets that are firmly aligned with EA principles, and you can expect a lot of important, impactful work to come from people and organisations with some connection to EA in the coming years.

It's important to keep in mind that EA is not a centralised thing. There is no EA tsar, or any single EA organisation that runs the show, or any official EA consensus. It's a cluster of many people and efforts that are joined mainly by caring about the types of ideas I talk about here.

Demographics

This website has a good overview, based on whoever filled in a survey posted to the EA Forum. The gender ratio is unfortunately somewhat skewed (70% male); for comparison, this is roughly the same as for philosophy degrees and better than for software developers (90% male (!?)). Half are 25-34. Over 70% are politically left or centre-left, and few are centre-right (2.5%) or right (1%), though almost 10% are libertarians. Education levels are high, and the five most common degrees are, in order: CS, maths, economics, social science, and philosophy. Most are from western countries.

Culture

EA culture places a lot of weight on epistemics: being honest about your uncertainties, clear about what would make you change your mind on an issue, aware of biases and fallacies, trying to avoid group-think, focusing on the substance of the issue rather than who said it or why, and arguing with the goal of finding the truth rather than defending your pet argument or cause. This is a lofty set of goals. To an astonishing but imperfect extent, and more so than any other concentration of people or writing (except for the equally-good Rationalist community mentioned above) that I've ever had any exposure to, EA succeeds at this.

Related to this, but also turbo-charged by general cultural memes of "critiquing cherished ideas is important", there's a strong emphasis on constantly being on the lookout for ways in which you yourself or (in particular) common EA ideas might be wrong. If you read down the list of top-voted posts on the EA Forum, they are about:

  1. Potential failure modes resulting from the influx of money into EA.
  2. High EA spending being a problem for optics and epistemics.
  3. Things current EA community-building efforts are doing wrong, and why this is especially worrying.
  4. Reasons why some key concepts in EA are used misleadingly and unnecessarily.
  5. A list of critiques of EA that someone wants expanded.
  6. A catalogue of personal mistakes that someone made while trying to do good (the key one being that they focused too much on working only at EA organisations).
  7. An argument that standard EA ways of trying to help with developing country development are not as effective as other ways of helping.
  8. And only in 8th place, something that isn't a critique of EA: a post about the historical case of early nuclear weapons researchers mistakenly assuming they were in a race, and implications for today's AI researchers.

(If you adjust upvotes on EA Forum posts to account for how active the forum was at the time, the most popular post of all time is Effective Altruism is a Question (not an ideology). It's not a critique, but it's also very revealing.)

Right now, there's an active contest with $100k in prizes for the best critiques of EA. This sort of stuff happens enough that Scott Alexander satirises it here.

This might give the impression of EA as excessively-introspective and self-doubting. There is some truth to the introspectiveness part. However, the general EA attitude is also one of making bold (but reasoned) bets. Recall SBF's altruistically-motivated risk taking, or more generally the fact that one of Open Philanthropy's foundational ideas is to support reasonable-but-risky projects, or even more generally the way EA in general is set up around unconventional and ambitious attempts at doing good.

If I had to name the two most important obstacles to doing important things in the real world, they would be (1) reasoning poorly and not updating enough based on feedback/evidence, and (2) being too risk-averse and insufficiently ambitious. Some cultures, like the good parts of academia, do well on avoiding (1). Others - imagine for example gung-ho Silicon Valley tech entrepreneurs - do well on avoiding (2). Though EA culture varies a lot between places and organisations, on the whole it seems uniquely good at combining these two aspects.

There are differences in culture between different EA hubs/clusters. I mainly have experience of the UK (and especially Cambridge) cluster and the Bay Area one. In the Bay, there is significant overlap between the EA and Rationalist communities, whereas in the UK there's mainly just EA in my experience. The Bay also leans more AI-focused and maybe weirder on average (or perhaps it's just a European vs American culture thing), while in the UK there are many AI-focused people but also many focused on biological fields (biosecurity & alternative proteins) or policy.

"Long-termism" vs "near-termism"

In the history of EA, it's hard not to see an invasion of ideas from the planetary-scale futurism that people like Nick Bostrom and Eliezer Yudkowsky talked about, and that Toby Ord (author of The Precipice) and Will MacAskill (about to drop a new book on why we should prioritise the long-term future) increasingly focus on. Holden Karnofsky, who for a long time ran GiveWell, perhaps the most empirically-minded and global-health-focused EA organisation, is now co-CEO of Open Philanthropy, responsible specifically for the speculative futurist parts of Open Philanthropy's mission, and writes blog posts about the grand future of humanity and why the coming century may be especially critical (though he is careful to say that he doesn't consider the other half of Open Philanthropy's work, or global health and animal welfare charity more generally, unimportant).

Perhaps this makes sense. In the long run at least, it seems sensible to expect the largest-scale ideas to be the most important ones. The rate of technological progress, especially in AI, has also been shrinking what "the long run" means when expressed in years.

The common labels applied to the ends of the radical-future-technology-focused versus concrete-current-problem-focused axis are "long-termist" and "near-termist" respectively. The name "long-termist" comes from arguments that the key moral priority is making sure we get to a secure, sustainable, and flourishing future civilisation (since such a civilisation could be very large and long-lasting, and therefore enable an enormous amount of happiness and flourishing). However, the names are a bit misleading. All existential risk work is often lumped into the long-termist category, so we have "long-termist" AI safety people trying to prevent a catastrophe many of them think will probably happen in the next three decades if it happens at all, and "near-termist" global health and development people trying to help country development that plays out over a century.

(Many also point out that caring about existential risks does not require the long-termist philosophy.)

Frugality vs spending

The culture of the original Oxford cluster was very frugal, and focused on monetary donations. For example, after founding Giving What We Can (GWWC), Toby Ord donated everything he earned above £18 000 to charity (and has continued on a similar track since then). Because of the low available funding, the focus was very much on marginal impact - trying to figure out what existing opportunity could best use one extra dollar.

Since then, the arrival of billionaire funders has meant that funding worries have gone down.

(For example, "earning to give" - the idea that deliberately going into a high-earning job (often in finance) and then donating a significant fraction of your salary to top charities is one of the most effective ways to do good - has gone down a lot in 80 000 Hours' career rankings, even though many people pursued that path based on 80 000 Hours' earlier recommendation.)

The bottleneck has moved (or at least been widely perceived to move) from funding to the time of people working on the key problems; instead of focusing on where to allocate the marginal dollar, the focus has somewhat shifted to how to allocate the marginal minute of time. In particular, the core argument of "imagine how far this particular dollar could go if used to effectively improve health in developing countries" has been joined by the argument of "there are plausible civilisation-ending disasters that could happen in the coming decades and require hard work to solve; imagine how sad it would be if we failed to work fast enough because we didn't spend that one dollar".

As a concrete example, in January 2022 Redwood Research organised a machine learning bootcamp aimed at upskilling people for AI safety jobs (and will be running more in the future, something I strongly endorse). Thirty participants (including myself) were flown into Berkeley from around the world, and spent three weeks living in a hotel while taking daily high-reliability COVID tests that I'm pretty sure weren't entirely free (and of course spending the days programming hard and talking about AI alignment (and eating free snack bars at the office - or maybe that last part was just me)). This wasn't cheap, nor was it a typical way to spend charity money (Redwood is funded by Open Philanthropy). But if prediction markets are right that generally-capable AI starts emerging around the end of this decade, and you take one look at the current state of progress on the AI alignment problem, and you do happen to have access to funding - well, it would be sad if being too stingy is how our civilisation failed.

Concretely, to look at only one consequence, Redwood made several hires from the bootcamp, despite the fact that many of the participants (myself included) were still students or otherwise not looking for work. Given how difficult but important hiring is, especially for high-skill technical roles, and the serious possibility that progress by organisations like Redwood is important for solving AI safety problems that might play a big role in how the future of humanity plays out, this seems like a win.

However, at the same time, it is of course worth keeping in mind that humans are pretty good at thinking to themselves "man, wouldn't it be great if people like me had lots of money?" This, as well as the PR and culture problems of having lots of money sloshing around, are discussed in many EA Forum posts. We already saw that this one (by MacAskill) and this one are, respectively, the first- and second-most upvoted posts of all time on the EA Forum.

Ultimately, the whole point of Effective Altruism is, well, being effective about altruism. Whether EA funders spend quickly or slowly, and whichever causes they target, if they fail to find the best opportunities to do good with money, they haven't succeeded - and they know it.

(It should be noted that the GWWC criterion of donating 10% of your income to charity is met by many EAs, including ones far in space or culture from the original Oxford cluster, and global health is a leading donation target.)

Thinking vs doing

The fact that there are more resources - including not just funding but also the time of talented people - also means that the focus is less on marginal impact. If you have £10 and an hour, then figuring out what existing opportunity has the best ratio of good stuff per dollar is the best bet. But if you have, say, £10 000 000 and ten thousand work hours, then there's also the option of starting new projects and organisations.

(A lot of the weirdness of EA thinking comes from its marginalist nature. The things that are most valuable per marginal unit of money/time/effort are generally the things that are most neglected, and neglected things tend to seem weird because, by definition, few people care about them. For example, the early EA focus basically completely eschewed developed country problems because per-dollar marginal cost-effectiveness was highest in poor countries; from the outside, this may look like a strangely harsh and idiosyncratic selection of causes. With increasing resources, it makes more sense to pursue larger-scale changes, and larger-scale changes sometimes look like more traditional and intuitive causes. For example, while developing country health and projects trying to improve the long-term future are Open Philanthropy's main focuses, they spend some of their massive budget on US criminal justice reform, land-use policy, and immigration policy.) (Though note that the effectiveness of the criminal justice program has come under criticism.)
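
As a toy sketch of the marginalist logic in the parenthetical above (all names and numbers below are hypothetical, chosen only to illustrate the point): a small donor's money all flows to the single most cost-effective opportunity at the margin, while a mega-funder saturates that opportunity and starts funding broader, less neglected causes.

```python
# Toy illustration of marginal vs. large-scale giving.
# Each opportunity = (name, cost-effectiveness in "good units" per $, room for funding in $).
# All values are hypothetical.
opportunities = [
    ("bednets_in_poor_countries", 10.0, 50_000_000),
    ("deworming",                  8.0, 20_000_000),
    ("rich_country_policy_reform", 2.0, 2_000_000_000),
]

def allocate(budget):
    """Greedy allocation: fund the most cost-effective opportunity first,
    up to its room for funding, then move on to the next one."""
    allocation = {}
    for name, effectiveness, room in sorted(opportunities, key=lambda o: -o[1]):
        spend = min(budget, room)
        if spend > 0:
            allocation[name] = spend
            budget -= spend
    return allocation

print(allocate(10))           # marginal donor: everything goes to the single best option
print(allocate(500_000_000))  # mega-funder: the narrow top options are saturated,
                              # and broader, less neglected causes enter the portfolio
```

With the small budget only the top opportunity gets funded; with the large one, the less cost-effective but much larger cause absorbs most of the money, which mirrors how a big funder's portfolio can include causes a marginal donor would skip.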

Since EA now has the resources to start many new organisations, there's also starting to be a shift from EA being very research-oriented to having more and more real-world projects. Even though one of the key EA insights is that doing good requires lots of careful thinking in addition to good intentions and execution ability, the ultimate metric of success is actually improving the world, and that takes steps that aren't just research. I think EA has some headwinds to overcome here; as a movement inspired by, started by, and (early on) largely consisting of philosophers, it has been remarkably successful in appealing to philosophical people and researchers, but not to entrepreneurs or operations people to the same extent. I think it is a very welcome trend that this is starting to shift.

Exciting Attempt for Enabling Action on Essential Activities

EA is definitely not ideal, and it is also not guaranteed to survive. Like any real-world community, it is not a timeless platonic ideal of pure perfection that burst into the world fully formed, but rather something with an idiosyncratic history, that consists of real people, and has certain biases and cultural oddities. Still, I think it is probably the most exciting and useful thing in the world to be engaged with.