
2024-09-21

Investigating an insurance-for-AI startup

We (Flo & Rudolf) spent a month fleshing out the idea of an insurance-for-AI company. We talked to 15 people in the insurance industry, and did 20 customer interviews. We decided not to continue, but we think it’s still a very promising idea and that maybe someone else should do this. This post describes our findings.

The idea

Theory of change

To reduce AI risks, it would be good if we understood risks well, and if some organisation existed that could incentivise the use of safer AI practices. An insurance company that sells insurance policies for AI use cases has a financial incentive to understand concrete AI risks & harms well, because this feeds into its pricing. This company would also be incentivised to encourage companies to adopt safer AI practices, and could do so by offering lower premiums in return. Like many cyber-insurance companies, it could also provide more general advice & consulting on AI-related risk reduction.

Concrete path

TL;DR: Currently, professionals (e.g. lawyers) have professional indemnity (PI) insurance. Right now, most AI tools keep a human in the loop. But eventually, the AI will do the work end-to-end, and then the AI will be the one whose mistakes need to be insured. This insurance does not yet exist. We would start with law, but then expand to all other forms of professional indemnity insurance (i.e. insurance against harms caused by a professional’s mistakes or malpractice in their work).

Frontier labs are not good customers for insurance, since their size means they mostly do not need external insurance, and have a big information advantage in understanding the risk.

Instead, we would target companies using LLMs (e.g. large companies that use specific potentially-risky AI workflows internally), or companies building LLM products for a specific industry.

We focused on the latter, since startups are easier to sell to. Specifically, we wanted a case where:

  • LLMs are being used in a high-stakes industry like medicine or law
  • there are startups building LLM products in this industry
  • there is some reason why the AI might cause legal liability, for example:
    • the LLM tools are sufficiently automating the work that the liability plausibly falls on them rather than the humans
    • AI exceptions in existing insurance policies exist (or will soon exist)

The best example we found was legal LLM tools. Law involves important decisions and large amounts of money, and lawyers can be found liable in legal malpractice lawsuits. LLMs are close to being able to do much legal work end-to-end; in particular, if the work is not checked by a human before being shipped, it is uncertain whether existing PI insurance applies. People who work in law and law tech are also, naturally, very liability-aware.

Therefore, our plan was:

  1. Become a managing general agent (MGA), a type of insurance company that does not pay claims out of its own capital (but instead finds a reinsurer to agree to pay them, and earns a cut of the premiums).
  2. Design PI policies for AI legal work, and sell these policies to legal AI startups (to help them sell to their law firm customers), or directly to law firms buying end-to-end legal AI tools.
  3. As more and more legal work is done end-to-end by AI, an increasing share of the legal PI insurance market becomes AI insurance policies.
  4. As AI advances and AI insurance issues become relevant in other industries, expand to those industries (e.g. medicine, finance, etc.).
  5. Eventually, most of the world’s professional indemnity insurance market (on the order of $10B-100B/year) has switched from insuring against human mistakes to insuring against AI mistakes.
  6. Along the way, provide consulting services for countless businesses switching to AI-based work, helping them reduce the chance of harm, and incentivising this with lowered premiums.
  7. Stay up-to-date on concrete AI risks, likely funding research focused on measuring them in the real world. The claims history that results would also be an automatic, high-quality database of AI harms.

We thought this could be a multi-billion dollar company, a natural for-profit home for concrete AI risk research, and a reducer of existential risk from AI. 

How insurance works

Why insurance

Insurance is about cash-flow management. Sometimes a low-probability accident happens that either bankrupts the company or just puts an annoying dent in its accounts. If the expected value of such losses exceeded what the company could pay in premiums, (competent) insurers would not be willing to sell the policy. But if it is less, the company can benefit from insurance (e.g. by better weathering sudden shocks), and the insurer can make a profit in expectation.

Another way of describing the core function of insurance is as arbitrage between differently-locally-concave utility functions. Assume the policyholder faces a gamble between a good outcome $$x_2$$ and a bad outcome $$x_1$$. If the policyholder’s utility function is more concave than the insurer’s (which might just be linear), then the policyholder cares less than the insurer about the difference between $$x_2$$ and $$x_2 - r$$, where $$r$$ is the insurance premium. If the policyholder’s utility function is $$f$$, insurance raises the policyholder’s expected utility from $$p f(x_1) + (1-p) f(x_2)$$ (where $$p$$ is the chance of the bad outcome) to a certain $$f(x_2 - r)$$, while the insurer takes on an expected upside of $$(1-p)r$$ and an expected downside of $$p(x_2 - x_1 - r)$$. The flatter the policyholder’s utility function $$f$$ is around the $$x_2 - r$$ to $$x_2$$ region compared to the region before it, the better this trade can be.
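To make this concrete, here is a minimal numeric sketch in Python, assuming a log-utility policyholder and a risk-neutral insurer. All numbers are hypothetical illustrations.

```python
import math

p = 0.1                 # probability of the bad outcome
x1, x2 = 20.0, 100.0    # bad and good outcomes (say, wealth in £k)
r = 10.0                # insurance premium

f = math.log            # a concave (risk-averse) policyholder utility function

# Policyholder: expected utility without vs with insurance
eu_uninsured = p * f(x1) + (1 - p) * f(x2)
eu_insured = f(x2 - r)            # a certain outcome of x2 minus the premium
print(eu_uninsured, eu_insured)   # ~4.44 < ~4.50: the policyholder prefers cover

# Risk-neutral insurer: expected profit = expected upside minus expected downside
expected_profit = (1 - p) * r - p * (x2 - x1 - r)
print(expected_profit)            # 2.0 > 0: the insurer also gains in expectation
```

Both sides come out ahead in expectation, which is exactly the arbitrage described above.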

There are other benefits of insurance too:

  • Contracts sometimes require one party to have insurance, usually so that the other party knows they can sue and recover damages if necessary.
  • Reputational benefits (“you can trust us because we managed to get an insurer to back us!”)
  • Infrastructure for handling claims/losses (insurance companies may have non-financial advantages in processing or evaluating claims, so it can be good to contract such functions out)
  • Credible loss prevention advice (“we’re not just a consultancy - our money is also on the line here, and therefore trust us when we tell you to do X”)
  • Credible pricing of harms (similar to the above)

Capital efficiency in insurance comes from scale

Insurance depends on scale because of the central limit theorem. As a simplified example, let’s say you’re an insurance company selling flood insurance. Assume you sell flood insurance policies for £100, and there’s a 10% chance that a flood happens and you need to pay out £600 (so the expected risk is £60). In expectation, you make £40 per policy. However, if you sell 10 policies, and suddenly 2 or more of them trigger, you’ve made revenue of £1000 but a loss of £1200 and you’re bankrupt (assume you don’t have cash reserves). There’s a ~26% chance of this happening ($$1 - \sum_{i=0}^{1} [\text{binomial pdf}_{p=0.1, n=10}(i)] \approx 0.26$$). However, if you sold 100 policies, the probability of 2 or more claims per every 10 policies sold (i.e. 20 or more in total) is only 0.2% ($$1 - \sum_{i=0}^{19} [\text{binomial pdf}_{p=0.1,n=100}(i)] \approx 0.002$$).
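As a sanity check, these tail probabilities can be computed directly, for example with scipy (a short illustrative snippet; binom.sf(k, n, p) returns P(X > k)):

```python
from scipy.stats import binom

# 10 policies: probability that 2 or more of them trigger
print(binom.sf(1, 10, 0.1))    # ~0.264

# 100 policies: probability of 20 or more claims (2+ per every 10 sold)
print(binom.sf(19, 100, 0.1))  # ~0.002
```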

Thus, even if the expected loss is the same between two insurers, and they have the same risk tolerance, the insurer with more policies is exposed to less variance in expected loss per policy and can set a lower price.

Therefore, the way you achieve good capital efficiency in insurance is to sell many policies across many different types of risk (ideally uncorrelated ones - so not just flood policies, which might all trigger in the case of rising seas, but diverse types of risks). 

However, in practice the balance sheets and the actual selling of the insurance products are not very tightly coupled in the insurance industry. You do not have to be a company with huge policy scale, you just have to work with such a company (for example, by becoming an MGA - see below).

The insurance industry is complicated and allows for startups

The straightforward consequence of the above incentive is that you get monopoly effects in insurance, where one firm achieves the largest balance sheet and then beats everyone else on price (or pushes everyone else into a daredevil game where they’re more likely to blow up than the big one is).

To some extent, this has happened. However, there’s a separate dynamic too. To quote from a blog post by Felix Stocker:

“Most businesses, especially SMEs, buy their insurance from someone they know personally. Because it’s low on the list of priorities, but important to get right, the CEOs or CFOs responsible stick with people they trust - brokers that can answer questions, and be relied upon to bail them out in a tough spot. Personalities, not brands, are key. Because of this, the concept that best explains the structure of the insurance broking market is Dunbar’s Number - the idea that any one person can only hold a limited number of personal relationships. So each broker has up to about a hundred relationships - but no more than that. And since the end-customer relationship is owned by an individual broker, then the challenge becomes aggregating brokers, rather than the customers themselves.”

There are also many ways to bundle and unbundle the different components of insurance. Felix Stocker writes about this here. In brief:

A common starting point for insurance startups is to be MGAs (managing general agents), that handle pricing (and, sometimes, distribution - i.e. selling), but are backed by the balance sheet of a reinsurer. This would’ve been our approach too.

General liability exceptions are key for new insurance products

There isn’t an insurance product for every niche risk, because companies often hold general liability insurance that covers basically anything.

However, general liability insurance often comes with exceptions. For example, professional liability (also called errors & omissions) is often left to a separate policy, and terrorism- and war-related harms are excluded. Also, complicated new risks like cybersecurity have increasingly tended to get exceptions, and be left to specialised cyber policies.

Based on talking to insurance industry experts, we expect AI-related exceptions to general liability and professional indemnity insurance to be coming. In the meantime, the need for AI-specific policies is somewhat complex and subtle, since existing policies may still cover AI-related harms.

Our ideas for pricing risk

We were loosely inspired by Epoch’s “Direct Approach” for forecasting human-level AI. Specifically, we’d make an argument of the form: if we can show that the outputs of the human and the AI are indistinguishable regarding some property (e.g. mistake rate as assessed by humans), then we should treat them as practically the same regarding related properties (e.g. the probability of causing a malpractice lawsuit).

Specifically, our guess for how to price legal professional indemnity insurance for an AI model/scaffold is:

  1. Collect a bunch of legal documents created by the AI, and comparable documents created by the human.
  2. Hire legal experts to assess the potentially-claim-causing mistake rate in the AIs’ and the humans’ work. (Note: legal experts cost on the order of $10^2 per hour, so this would be fairly expensive.)
  3. Apply a fudge factor to the number of AI mistakes caught, on the assumption that humans are better (having had more practice) at catching human mistakes, and to account for unknown unknowns.
  4. If the fudge factor times the AI claim-relevant mistake rate is lower than the humans’, offer the AI model’s outputs PI cover at the same rates as the relevant human PI for the same firm. If it’s higher, then either don’t offer cover, or offer it at a fairly steep additional price, and probably with lower cover. Basically - price in the risk (a toy sketch of this decision rule is below).
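Here is a toy sketch of that decision rule in Python. The function name, the parameter values (in particular the fudge factor and the maximum loading), and the linear premium loading are all hypothetical illustrations, not actuarial advice.

```python
def price_ai_pi_premium(
    ai_mistake_rate: float,     # claim-relevant mistakes per document (expert-assessed)
    human_mistake_rate: float,  # the same metric for comparable human work
    human_premium: float,       # the existing human PI premium for the same firm
    fudge: float = 1.5,         # penalty for missed AI mistakes and unknown unknowns
    max_loading: float = 3.0,   # beyond this, decline to quote
) -> float | None:
    """Return a premium quote, or None if we decline to offer cover."""
    adjusted_ai_rate = fudge * ai_mistake_rate
    if adjusted_ai_rate <= human_mistake_rate:
        # The AI looks no riskier than the human baseline: import human pricing.
        return human_premium
    loading = adjusted_ai_rate / human_mistake_rate
    if loading > max_loading:
        # Too far from the human claims history to price confidently.
        return None
    return human_premium * loading  # price in the extra risk

quote = price_ai_pi_premium(0.02, 0.04, human_premium=10_000)
print(quote)  # 10000: the AI's adjusted mistake rate is below the human baseline
```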

We would likely only go through this process if we had first done a more checkbox-style check of the AI workflow (sketched in code after the list), including:

  • Whether reasonable evaluations have been run
  • Whether the AI’s access to protected information is reasonable (e.g. probably the AI should have zero access to customer X’s info while doing work for customer Y)
  • The extent to which humans are in the loop or could intervene
  • Susceptibility to prompt injection attacks (e.g. we might require companies to use something like Lakera Guard)
  • Vulnerability to model updates (e.g. if OpenAI drops a new version of GPT-4 that is worse at specific things, does your workflow switch to it immediately without checks, or have you hardcoded the GPT version number?)
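A minimal sketch of what such a pre-screen might look like, assuming each item above can be reduced to a boolean assessed by an underwriter (the field names are our hypothetical shorthand for the checks above):

```python
from dataclasses import dataclass, fields

@dataclass
class WorkflowPreScreen:
    evaluations_run: bool             # reasonable evaluations exist for the workflow
    data_access_scoped: bool          # no access to other customers' protected info
    human_intervention_possible: bool # humans are in the loop or can intervene
    prompt_injection_mitigated: bool  # e.g. an input-filtering guardrail is in place
    model_version_pinned: bool        # no unchecked switch to new model versions

    def passes(self) -> bool:
        # All checks must pass before the expensive expert assessment begins.
        return all(getattr(self, f.name) for f in fields(self))

screen = WorkflowPreScreen(True, True, True, True, False)
print(screen.passes())  # False: unpinned model versions block the full assessment
```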

We expect there is room for fancier technical solutions to evaluate risk probabilities. However, we caution that the size of a loss is almost entirely not about the AI, but instead about the context of the case: what legal work was being done, what harm the legal error caused in the real world, what the judge’s mood was when they were deciding the case (if it went to court), and so on. Even the probability of a loss is only partly about the AI; it also depends, for example, on whether the client who received the bad advice decides to sue in the first place. This is why the core of our approach is side-stepping the problem of evaluating legal malpractice harms from scratch, and instead creating an argument for why the AI lawyer does not have more risk (or has some specific factor more risk) than the human lawyer. We effectively want to import the human lawyer claims history used for existing insurance pricing to the AI case.

We did not prioritise thinking of technical approaches to risk evaluation, because we thought much more of the risk was in the market size (thinking the opposite is perhaps the most common failure mode of tech-focused entrepreneurs). However, having a “magical” tech demo would probably be a good way to get your foot in the door. Showing you could’ve accurately predicted past failure rates might be the type of evidence that insurers care a lot about.

Notes on professional indemnity insurance for law

  • There isn’t a set of “cookie-cutter” templates that most claims fall into; it’s a diverse set.
  • The split is roughly as follows (averaging over several sources):
    • 50% substantive claims (e.g. filing a motion in the wrong court, failing to raise a critical defense point, misapplying rules of evidence)
    • 30% administrative errors (e.g. missing a statute of limitations or court-ordered discovery deadline, forgetting to renew a client’s trademark, typos, filing documents for the wrong client, losing important documents, sending an email with confidential info to the wrong person, leaving sensitive documents visible on a desk, failing to properly redact info in public filings)
      • in particular, missing deadlines is a common one
      • many of these (e.g. missing deadlines, typos, and losing documents) seem like ones where AI error rates would be much lower
    • 10% client relation issues (e.g. settling a case without client approval, failing to disclose a past representation of an adverse party or a gift, not keeping the client informed, not explaining risks, failing to return calls or emails)
      • apart from some harms related to instruction-following or bad explanations, most of these seem far less applicable to AIs
    • 10% intentional wrongs (e.g. overbilling for work not performed, submitting false evidence, severe conflicts of interest, selling information to a competitor, using confidential info to benefit another client)
      • again, most of these are far less applicable to AIs

It therefore seems reasonable that AI legal PI claims would be even more tilted towards the substantive-errors category than human lawyer PI claims.

Relevant Stakeholders

Munich Re has an AI insurance team. Here is their thinking on the state of the AI insurance market.

Orbital Witness, which accelerates legal due diligence in real estate, built their own custom AI insurance product with a real estate insurer called First Title. 

We won’t share details about the situations of specific startups that we talked to.

Findings

Customer demand

Common reasons for customers not needing insurance for their AI use cases included:

  • not working in a regulated or high-stakes domain
  • there is a human in the loop all the time, so the AI is just a tool and existing PI will probably cover it - for now

We did find several AI-for-law companies that wanted an AI-specific insurance policy. Notably, one of them had seen the need to build their own custom insurance product, working with a specialist insurer in their area (real estate law). Several complained about not finding an off-the-shelf solution, and were willing to pay immediately for an insurance policy that addressed their problem. However, in at least one case this was more of a formality required by a contract.

One theme in many customer conversations was that being financially compensated by insurance is not sufficient to make up for damages, because the real damage is to the reputation of the company in question. This is much harder to insure against. Insurance could help indirectly here (e.g. the fact that you were able to get insurance for your product is some evidence that whatever you’re selling doesn’t blow up too often).

Another theme in many customer conversations was that people just aren’t thinking that much about AI risks or harms yet. We think this is a consequence of AIs not being deployed in high-stakes use cases. Many organisations are conservative in their applications of AI and choose to start implementing it in low-stakes domains, such as internal products and answering simple FAQs. Nobody wants to be the first company to have AI publicly fail.

Findings about insurance

The insurance industry is complicated. The legal industry is also complicated. Neither of us had any background in either. The lack of knowledge was generally fixable (thanks to Claude in particular), but the lack of relevant connections significantly slowed our momentum. Early-stage startup exploration is mostly driven by talking to potential customers; this was helped by the fact that we were mostly talking to AI product companies in these spaces, but it was still slow going compared to both of our previous experiences getting customer interviews.

An MGA requires a reinsurer, and this takes a lot of time. This meant that, to get started, we would’ve needed not just customers, but a reinsurer. We did not find a reinsurer who was willing to work with us. If we had kept doing this, we would’ve talked to more reinsurers (perhaps starting with Allianz, who have previously reinsured a drone insurance product). The normal time for a new insurtech startup to get a reinsurer is on the order of 6-24 months.

The insurance industry moves slowly and carefully. This makes sense, since insurance companies that make rash and risky moves probably wouldn’t exist for very long. But it is still a very important cultural difference to, for example, the tech world.

Insurance is overwhelmingly about inductive, not deductive, evidence. Claims histories are the gold standard of evidence in the insurance world. If you don’t have a claims history, you will have a hard time.

Insurance is often reactive, and changes are driven by new types of big losses. The industry perks up and starts paying attention and figuring out how to deal with a given risk when a big loss happens related to that risk. In particular, once a big loss happens, lots of insurance actors will want to know how exposed they are to that type of risk, and either reduce exposure to that risk or make money by insuring against it.

Insurance for AI might only become something reinsurers care a lot about after a big event happens and causes harm.

The insurance industry is financially very large but does not have high valuations. Many insurance companies have extremely large revenues, but insurance companies are often valued at only a 1-3x multiple of their revenues (compared to 20x for tech companies). Allianz makes more revenue than Meta and has almost 10x the assets, but as of writing is valued at 1/10th of Meta.

Also, some vague things about insurance that struck us:

  • Insurance is fundamentally on the financialization side of the financialization-v-building axis. Financialization is necessary in a complex world, but it’s perhaps harder to feel the hacker building ethos when that’s what you’re doing.
  • At the claims stage, insurance is fundamentally adversarial: the claimant wants money, and the company is incentivised to not pay.
  • Reducing someone’s exposure to risk can lead to them taking more risk.

Cruxes

Our rough standard was that if we saw a path to getting a reinsurer onboard in clearly less than 6 months, we would start this company. We had several reasons for wanting to move fast:

  • Moving fast is the key to building big impactful things.
  • We felt our opportunity cost was high; in, say, a year of talking to reinsurers before being able to sign our first customer, we could’ve gotten far building something impactful that isn’t an insurance product.
  • The rate of AI progress is high enough that things that move slowly might not have time to matter.

We were also bottlenecked by not having insurance industry connections. Insurance, as mentioned above, is a very network-based field. It is true that many insurtech founders do not have insurance backgrounds, but it is still critical that some industry expert is involved very early on in an advisory capacity, and probably the first hire needs to be someone with deep insurance connections.

In summary, we think that insurance for AI is a great idea for a team that is less impatient, and has either more insurance connections or a greater willingness to build networks in insurance.

Former AIG (American International Group) CEO Hank Greenberg once said: “All I want in life is an unfair advantage”. Someone who - unlike us - does have an unfair advantage in insurance may be able to run with this idea, build a great company, and reduce AI risk.

Alternative approaches

Labs / Big Tech as reinsurers

Insurance requires a large balance sheet to pay out claims from. The standard way to solve this is with a reinsurer. However, who else has a lot of capital, and (unlike reinsurers) a specific interest in AI? Foundation model labs (FMLs) and their Big Tech backers.

This could also simultaneously align FML incentives. Incentive-wise, the natural place to park AI risk is at the AI companies themselves. There are two levels of this:

  1. When there are claims, an FML is involved in paying them out (directly or indirectly)
  2. When there are claims, all FMLs are at least partly on the hook, because the claims are at least partly paid out from a shared pool of capital that all FMLs are involved in. (This creates an incentive among FMLs to care about the safety of the others, making safety tech sharing more likely, and making it easier for the industry to negotiate a slowdown if needed.)

Why might FMLs want to do this? It spreads the risk of things going wrong and incentivises finding errors in other companies’ models early on. It could increase public trust in AI as a whole, which will make adoption easier. In particular, most people don’t know the difference between the top FMLs and so see them as “AI companies.” If one AI company causes a large harm, the public is likely to associate it with AI companies in general. It also seems good, incentive-wise, that the companies driving a technology are the ones who are involved in insuring the risk.

Why might FMLs not want to do this? There are lots of incredibly good reasons. 

  • There are many better uses of capital than parking it in some risk-free place where it can only be used as backing for an insurance product. This includes paying the employee salaries and compute costs that may enable these labs to build AGI and take over the entire economy and then the world - potentially a much higher-margin business than insurance.
  • (If going for the more ambitious version, where all FMLs participate in paying out claims:) By increasing the trust in all FMLs, pooled insurance might reduce a given FML’s competitive edge. For example, the Copyright Shield product that OpenAI and Microsoft offer is a specific advantage for them.
  • If it looks like insurance, or smells like insurance, or quacks like insurance, a horde of regulatory requirements immediately descends upon you. This makes a lot of sense; insurance is a very natural “picking up pennies in front of the steamroller” business. But FMLs understandably would prefer not to deal with this.
  • It’s not their core competency, and companies generally do better when they stick to their core competency.

To try to get around these issues, we explored options for FML backing, including:

  • FMLs / Big Tech simply partner with us, giving us credibility when we go to reinsurers.
  • FMLs / Big Tech become reinsurers.
  • FMLs / Big Tech create a captive insurance body. Captive insurance is when companies in a given industry get together to pool risks, allowing them some regulatory benefits over a normal reinsurer that deals with arbitrary risks, but also some limitations.
  • FMLs / Big Tech enter into some complicated deals with reinsurers that makes it a better deal for them.

We soon reached the point where, with Claude-3.5 serving as our legal team, we were doodling increasingly complicated legal structure diagrams on whiteboards. Some of them were starting to look vaguely pyramid-shaped. That was a good place to leave it.

Selling a risk model rather than an insurance policy

Why not leave the entire insurance industry to the existing insurance companies, and focus on what we really care about: modelling concrete AI risks?

An example of a company that sells risk models to insurance companies is RMS (now part of Moody’s, after changing ownership a few times). They were started in the late 1980s and specialised in natural catastrophe (“nat cat”) risk modelling. They had a technical moat: they were better at modelling things like synthetic hurricane populations than others.

The main disadvantage of such a route is that selling to insurance companies is very painful: they have slow procurement processes, mostly don’t understand technical details, and generally need to see a long track record of correct predictions before they buy. Venture capitalists are also unlikely to be interested in supporting such a company, since such companies’ growth rates are usually not stratospheric. For example, RMS was sold to Moody’s in 2021 for $2 billion, but only after almost 30 years in existence, and after already having been sold to Daily Mail along the way.

Might there be a market apart from insurance companies for a risk modelling product? Maybe, but this is unlikely. For natural catastrophe risks at least, insurance companies dominate risk modelling demand by sheer volume - they want updates all the time, whereas governments might want an update for planning purposes once every decade. Given how fast AI changes, though, there may be more actors who have a high rate of demand for risk models and updates on them.

Should AI evaluation / auditing orgs do this?

We haven’t thought about this much, but an org with an AI evaluation/auditing background might be well-placed to move into the insurance (or risk-modelling) space.

Acknowledgements

We’d like to thank Felix Stocker for lots of great advice on how things work in insurtech, Ed Leon Klinger for sharing his insurtech journey, Robert Muir-Woods for a very helpful chat about RMS, Otto Beyer for a valuable initial conversation about the insurance space, Jawad Koradia for helping us get initial momentum and introductions, Will Urquhart for talking with us about underwriting niche risks, the team at Entrepreneur First (in particular Kitty Mayo, Dominik Diak, and Jack Wiseman) for hosting much of our exploration and offering advice & introductions, and various people scattered across AI startups and the insurance industry for taking time to meet with us.

2024-07-23

Positive visions for AI

This post was a collaboration with Florence Hinder

Reasons to make the positive case

Everyone who starts thinking about AI starts thinking big. Alan Turing predicted that machine intelligence would make humanity appear feeble in comparison. I. J. Good said that AI is the last invention that humanity ever needs to invent.

The AI safety movement started from Eliezer Yudkowsky and others on the SL4 mailing list discussing (and aiming for) an intelligence explosion and colonizing the universe. However, as the promise of AI has drawn nearer, visions for AI upsides have paradoxically shrunk. Within the field of AI safety, this is due to a combination of factors: the “doomers” believe in very high existential risk and therefore focus on trying to avoid imminent human extinction rather than achieving the upside; people working on policy avoid talking about sci-fi upsides so as to look less weird; and recent progress in AI has driven the focus towards concrete machine learning research rather than aspirational visions of the future.

Both DeepMind and OpenAI were explicitly founded as moonshot AGI projects (“solve intelligence, and then use that to solve everything else” in the words of Demis Hassabis). Now DeepMind - sorry, Google DeepMind - has been eaten by the corporate machinery of Alphabet, and OpenAI is increasingly captured by profit and product considerations.

The torch of AI techno-optimism has moved on to the e/acc movement. Their core message is correct: growth, innovation, and energy are very important, and almost no one puts enough emphasis on them. However, their claims to take radical futures seriously are belied by the fact that their visions of the future seem to stop at GenAI unicorns. They also seem to take the general usefulness of innovation not just as a robust trend but as a law of nature, and so are remarkably incurious about the possibility of important exceptions. Their deeper ideology is in parts incoherent and inhuman. Instead of centering human well-being, they worship the “thermodynamic will of the universe”. “You cannot stop the acceleration”, argues their figurehead, so “[y]ou might as well embrace it” - hardly an inspiring humanist rallying cry.

In this piece, we want to paint a picture of the possible benefits of AI, without ignoring the risks or shying away from radical visions. Why not dream about the future you hope for? It’s important to consider the future you want, rather than just the future you don’t - otherwise, you might create your own unfortunate destiny. In the Greek myth, Oedipus was prophesied to kill his father, so his father ordered him killed; the order was not carried out, and Oedipus was adopted and raised not knowing who his parents were. Years later he crossed his father on the road in his travels and, having no idea who the man was, killed him. By focusing on the bad path, Oedipus’ father may have made the prophecy come true: had he not ordered his son killed, Oedipus would have known his father and likely wouldn’t have killed him.

When thinking about AI, if we only focus on the catastrophic future, we may cause it to come true by drawing more attention to it. Sam Altman, who is leading the way in AI capabilities, claimed to have first gotten interested through arch-doomer Eliezer Yudkowsky. We may also neglect progress towards positive AI developments; some people think that even direct AI alignment research should not be published, because it might speed up the creation of unaligned AI.


With modern AI, we might even get a very direct “self-fulfilling prophecy” effect: current AIs increasingly know that they are AIs, and make predictions about how to act based on their training data, which includes everything we write about AI.

Benefits of AI

Since so much of the focus on AI is on what could go wrong, let’s think through what could go well, starting from what’s most tangible and closest to the current usage of AI, and moving towards what the more distant future could hold.

  1. AI will do the mundane work
  2. Lowering the costs of coordination
  3. Spreading intelligence
  4. AI can create more technology
  5. Increased technology, wealth, and energy correlate with life being good
  6. All of the above, and the wealth it creates, could allow people to self-actualise more

Already, AI advances mean that Claude has become very useful, and programmers are faster and better. But below we’ll cast a look towards the bigger picture and where this could take us.

AI will do the mundane work

First, there’s a lot of mundane mental work that humans currently have to do. Dealing with admin work, filing taxes, coordinating parcel returns -- these are not the things you will fondly be reminiscing about as you lie on your deathbed. Software has reduced the pain of dealing with such things, but not perfectly. In the future, you should be able to deal with all administrative work by specifying what you want to get done to an AI, and being consulted on decision points or any ambiguities in your preferences. Many CEOs or executives have personal assistants; AIs will mean that everyone will have access to this. 

What about mundane physical work, like washing the dishes and cleaning the toilets? Currently, robotics is bad. But there is no known fundamental obstacle to having good robotics. It seems mainly downstream of a lot of engineering and a lot of data collection. AI can help with both of those. The household robots that we’ve been waiting for could finally become a reality.

Of course, it is unclear whether AIs will first have a comparative advantage against humans in mundane or meaningful work. We’re already seeing that AI models are making massive strides in making art, way before they’re managing our inboxes for us. It may be that there is a transitional period where robotics is lagging but AIs are smarter-than-human, where the main economic value of humans is their hands rather than their brains.

Lowering the cost of coordination

With AI agents being able to negotiate with other AI agents, the cost of coordination is likely to dramatically drop (see here for related discussion). Examples of coordination are agreements between multiple parties, or searching through a large pool of people to match buyers or sellers, or employees and employers. Searching through large sets of people, doing complex negotiations, and the monitoring and enforcement of agreements all take lots of human time. AI could reduce the cost and time taken by such work. In addition to efficiency gains, new opportunities for coordination will open up that would have previously been too expensive.

Small-scale coordination

To give an example of this on the small scale of two individuals, say you are trying to search for a new job. Normally you can’t review every single job posting ever, and employers can’t review every person in the world to see if they want to reach out. However, an AI could filter that for the individual and another AI for the business, and the two AIs could have detailed negotiations with each other to find the best possible match. 

Coordination as a scarce resource

A lot of the current economy is a coordination platform; that’s the main product of each of Google, Uber, Amazon, and Facebook. Reducing the cost of searching for matches and trades should unlock at least as much mundane benefit and economic value as the tech platforms have.

Increased coordination may also reduce the need to group people into roles, hierarchies, and stereotypes. Right now, we need to put people into rigid structures (e.g. large organisations with departments like “HR” or “R&D”, or specific roles like “doctor” or “developer”) when coordinating a large group of people. In addition to upholding standards and enabling specialisation of labour, another reason for this is that people need to be legible to unintelligent processes, like binning of applicants by profession, or the CEO using an org chart to find out who to ask about a problem, or someone trying to buy some type of service. Humans can reach a much higher level of nuance when dealing with their friends and immediate colleagues. The cheap intelligence we get from AI might let us deal with the same level of nuance with a larger group of people than humans can themselves track. This means people may be able to be more unique and differentiated, while still being able to interface with society.

Large-scale coordination

On a larger scale, increased coordination will also impact geopolitics. Say there are two countries fighting over land or resources. Both countries could have AI agents negotiate with the other side’s AI agents to search the space of possible deals and find an optimal compromise for both. They could also simulate a vast number of war scenarios to figure out what would happen; much conflict is about two sides disagreeing about who would win and resolving the uncertainty through a real-world test. This relies on three key abilities: the ability to negotiate cheaply, the ability to simulate outcomes, and the ability to stick to and enforce contracts. AI is likely to help with all three. This could reduce the incentives for traditional war: no human lives need to be lost, because the outcome is already known and negotiations can start from that. Each side also knows exactly what it is and is not willing to trade off, which makes it easier to optimise for the best compromise for everyone.

Spreading the intelligence

AI lets us spread the benefits of being smart more widely.

The benefits of intelligence are large. For example, this study estimates that a 1 standard deviation increase in intelligence increases your odds of self-assessed happiness by 11%. Now, part of this gain comes from intelligence being a positional good: you benefit from having more intelligence at your disposal than others, for example in competing for a fixed set of places. However, intelligence also has absolute benefits, since it lets you make better choices. And AI means you can convert energy into intelligence. Much as physical machines let the weak gain some of the benefits of (even superhuman) strength, AI might allow all humans to enjoy some of the benefits of being smart.

Concretely, this could take two forms. The first is that you could have AI advisors increase your ability to make plans or decisions, in the same way that - hypothetically - even a near-senile president might still make decent decisions with the help of their smart advisors. With AI, everyone could have access to comparable expert advisors. The effect may be even more dramatic than with human advisors: the AI might be superhumanly smart, the AI might be more verifiably smart (a big problem in selecting smart advisors is that it can be hard to tell who is actually smart, especially if you are not), and if AIs are aligned successfully, there may be less to worry about in trusting them than in trusting potentially-scheming human advisors.

The second is AI tutoring. Human 1-1 tutoring boosts educational outcomes by 2 standard deviations (2 standard deviations above average is often considered the cutoff for “giftedness”). If AI tutoring is as good, that’s a big deal.

AI is the ultimate meta-technology

AI is special because it automates intelligence, and intelligence is what you need to build technology, including AI, creating a feedback loop. Some previous technologies have boosted other technologies; for example, the printing press massively helped the accumulation of knowledge that led to the invention of many other technologies. But we have never before had a technology that could itself directly advance other technology. Such AI has been called PASTA (Process for Automating Scientific and Technological Advancement).

Positive feedback loops - whether self-improving AIs, nuclear reactions, epidemics, or human cultural evolution - are very powerful, so you should be wary of risks from them. In particular, it is currently at best extremely unclear whether AIs that improve themselves could be controlled with current technology. We should be very cautious in using AI systems to improve themselves.

In the long run, however, most of the value of AI will likely come from their effects on technological progress, much like the next industrial revolution. We can imagine AIs slashing the cost and increasing the speed of science in every field, curing diseases and making entire new veins of technology available, in the same way that steam engines made entirely new veins of coal accessible.

In particular, AIs help de-risk one of the largest current risks to future human progress. One model of the feedback loop behind humanity’s progress in the past few centuries is that people led to ideas led to wealth led to food led to more people.

However, greater wealth no longer translates into more people. The world population, which was exponentially growing for much of the 19th and 20th centuries, is likely to be in decline by the end of the 21st century. This is likely to have negative consequences for the rate of innovation, and as discussed in the next section, a decline in productivity would likely have a negative impact on human wellbeing. However, if AIs start driving innovation, then we have a new feedback loop: wealth leads to energy leads to more AIs leads to ideas leads to wealth.

As long as this feedback loop does not decouple from the human economy and instead continues benefitting humans, this could help progress continue long into the future.

Wealth and energy are good

If you want humans to be well-off, one of the easiest things to do is give them more wealth and more energy. GDP per capita (on a log scale) has a 0.79 correlation with life satisfaction, and per-capita energy use (again on a log scale) has a 0.74 correlation with life satisfaction. Increased wealth and energy correlate with life satisfaction, and we should expect these trends to continue.

Above: GDP per capita (x-axis), energy use (y-axis), and life satisfaction (colour scale) for 142 countries. There are no poor countries with high energy use, and no rich countries with low energy use. There are no countries with high average life satisfaction that are not high in both energy use and average GDP per capita. The axes are logarithmic, but since economic growth is exponential, countries should be able to make progress at a constant rate along the axis. Data source: Our World In Data (here, here, and here).
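As an aside, correlations like the 0.79 figure above are computed against the log of GDP per capita. A minimal illustration of the calculation (the data values below are made up; the real figures come from Our World In Data):

```python
import numpy as np

gdp_per_capita = np.array([2_000, 10_000, 45_000, 60_000])  # hypothetical countries
life_satisfaction = np.array([4.1, 5.3, 7.0, 7.4])          # 0-10 ladder scores

# Pearson correlation between log GDP per capita and average life satisfaction
r = np.corrcoef(np.log(gdp_per_capita), life_satisfaction)[0, 1]
print(round(r, 2))
```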

(It is true that energy use and economic growth have been increasingly decoupling in rich countries, due to services making up more of the economy and efficiency gains in energy use. The latter effectively increases the amount of useful energy available - e.g. if the energy once needed to cook one meal can now cook two, that is effectively the same as gaining more energy. But efficiency effects are fundamentally limited, because there is a physical limit, and if demand is elastic, efficiency gains lead to increased energy use, meaning they don’t help the environment either. Ultimately, if you want to do more things in the physical world, you need more energy.)

A wealthy, energy-rich society has many material benefits: plentiful food, advanced medicine, high redistributive spending becomes feasible, and great choice and personal freedom through specialisation of labour and high spending power. A wealthy and energy-rich society also has some important subtler benefits. Poverty and resource constraints sharpen conflict. Economic growth is intimately linked to tolerance and liberalism, by weakening the cultural status and clout of zero-sum strategies like conflict and politicking.

One clear historic example of how increases in energy correlate with improved quality of life is the industrial revolution, arguably the best and most important thing that ever happened. Before it, trends in human wellbeing seemed either stagnant, fluctuating, or very slow; after it, all the variables related to human well-being for which we can find good long-term series shoot upwards.

Above: variables correlated with human well-being over time. Source: Luke Muehlhauser

Therefore, it’s worth keeping in mind that boosting energy and wealth is good, actually. And the most powerful way to do that is through inventing new technologies that let us use energy to serve our needs.

The heart of the industrial revolution was replacing part of human manual labour with something cheaper and more powerful. AI that replaces large parts of human mental labour with something cheaper and more powerful should be expected to be similarly transformative. Whether it is a good or bad transformation seems more uncertain. We are lucky that industrialisation happened to make national power very tightly tied to having a large, educated, and prosperous middle class; it is unclear what the winning strategy in an AI economy will be. We are also lucky that the powerful totalitarian states enabled by industrial technology have not triumphed so far, and they might get further boosts from AI. Automating mental labour also involves automating decision-making, and handing decision-making over to machines is handing power to machines, which is riskier than handing them manual labour. But if we can safely control our AI systems and engineer good incentives for the resulting society, we could get another leap in human welfare.

Self-actualisation

Now say we’ve had a leap in innovation and energy through Transformative AI (TAI), and we’ve also reached a post-scarcity world. What happens now? Humans have had all their basic needs met and most jobs are automated, so what do people actually spend their time doing?

Maslow’s Hierarchy

Maslow’s hierarchy of needs is a framework for understanding human needs and the drivers of human behaviour. Maslow suggested that in most scenarios people need to mostly satisfy one level before being able to focus on higher-level needs.

The top level of the hierarchy is self-actualisation. The peak of human experience is something that few can currently reach - but maybe everyone could get there.

There is a possible path the world takes in which all humans can reach self-actualisation. With increases in technology & wealth, such as through TAI and a Universal Basic Income (UBI), we would be able to provide food, water, shelter, and clothing for all humans, enabling everyone to easily meet their basic needs. People could then spend more time on the things they want, for example moving up through Maslow’s hierarchy to focus on love and belonging, self-esteem, and self-actualisation.

Say you are in a post-scarcity world: what would you do if you didn’t have to work?

Would you spend time with loved ones, engaging in social activities that provide a sense of connection, belonging, and self-esteem? Would you hone your craft and become an expert in a particular field? Or would you spend the whole time scrolling on your phone?

Say, hypothetically, a wealthy billionaire gave you a grant to work on anything you wanted: would you be happy with having the complete freedom to spend your time as you wished?

Often people assume that others will be unhappy with this world, but would you? There is a cognitive bias where people tend to judge themselves as happier than their peers, which could nudge you into thinking people would be less happy in this world, even if you yourself would enjoy it.

In this post-scarcity world, humans could spend more time on creative pursuits such as art, music, and any other hobbies – not with the goal of making money, but to reach self-actualisation. 

With AI better than humans along every dimension, AI could produce the best art in the world. But there is intrinsic value in honing your craft, improving at art, or expressing your feelings through it, in and of itself. The vast majority of art is not created to be the best art in the world, but for the journey itself. When a child paints a finger painting, the parent who puts it on the wall does not think “my child’s art is better than Van Gogh’s”. Instead, they feel a sense of excitement about the progress their child has made and the creative expression the child has produced.

Another example is the Olympic Games. Nobody needs to win the Olympic Games to survive, but they let people express pride in their country, hone their craft, attain status, and so on. Yet the actual task is just a game, a social construct. More and more tasks will look like social constructs and games we create to challenge each other.

Examples of post-scarcity scenes 

Since this is quite theoretical, let's consider examples where we’ve had “post-scarcity” microcosms to explore. 

The French Bourgeoisie 

The French leisure class, or bourgeoisie, were a class of wealthy elite that emerged in 16th-century France. Many had enough money to pursue endeavours like refining their taste in arts and culture. Salon culture, gatherings featuring discussions of literature, art, politics, and philosophy, was a cornerstone of bourgeois social life.

Upper Class in the Victorian Era

The upper class in the Victorian era enjoyed a variety of leisure activities that reflected their wealth, status, and values: social events and balls, fox hunting and other sports, theatre and opera, art and literature, travel, tea parties and social visits, gardening and horticulture, and charitable work and philanthropy. Several undertook serious pursuits in science or art.

Burning Man

Burning Man is an annual festival where people bring all the basics needed for a week of living in the desert: food, water, shelter. For that week, they create a new community or city that is a temporary microcosm of a post-scarcity world, pursuing artistic endeavours and creative expression, music, dance, and connection with others. People often talk about Burning Man events being some of the best experiences of their lives.

Successful Startup Founders in the Bay Area

In San Francisco, there is a crossover between hippie culture and tech, and many people with excess wealth and resources, resulting in many looking for more in life. They try to reach self-actualisation by pursuing arts and creative endeavours. Hippie movements often encourage communal living and a sense of connection with those around you. Many may raise eyebrows at the lifestyles of some such people, but it’s hard to claim that theirs is a fundamentally bad existence.

More pessimistic views about humans?

It is true that not all cultural tendencies in a post-scarcity world would be positive. In particular, humans have a remarkable ability to have extremely tough and all-consuming social status games, seemingly especially in environments where other needs are met. See for example this book review about the cut-throat social scene of upper-class Manhattan women or this one about the bland sameness and wastefulness of nightlife, or this book review that ends up concluding that the trajectory of human social evolution is one long arc from prehistoric gossip traps to internet gossip traps, with liberal institutions just a passing phase.

But the liberal humanist attitude here is to let humans be humans. Yes, they will have petty dramas and competitions, but if that is what they want, who is to tell them no? And they will also have joy and love.

Would a post-scarcity world have meaning? Adversity is one of the greatest sources of meaning. Consider D-Day, when hundreds of thousands of soldiers got together to charge up a beach under machine-gun fire to liberate a continent from Nazi rule. Or consider a poor parent of four working three jobs to make ends meet. There are few greater sources of meaning. But adversity can be meaningful while involving less suffering and loss. A good future will be shallower, in a sense, but that is a good thing.

Finally, it is unclear if we would get a happy world, even if we had the technology for post-scarcity, because of politics and conflict. We will discuss this later.

Radical improvements

AI might also help with radical but necessary improvements to the human condition.

People die. It is a moral tragedy when people are forced to die against their will, as happens to over 50 million people per year. Medicine is making progress against many causes of death and disability; in the limit it can cure all of them. We should reach that limit as fast as possible, and AI can likely help accelerate the research and deployment of solutions.

One of the greatest inequalities in the world is inequality in intelligence. Some people struggle to perform in simple jobs, while others (well, at least one) are John von Neumann. In the short term, AI might help by making cognitively demanding tasks more accessible to people through AI tutors and AI copilots. In the longer term, AI might help us enhance human intelligence, through brain-AI integration or new medical technology.

Reasons to worry

Though there are many potential upsides for AI and AGI as argued in this post, that doesn’t mean there aren’t risks. 

The plausible risks of AI go all the way up to human extinction, so this shouldn’t be taken lightly. Since this piece is focused on the upside, not the downside, we will not argue this point in depth, but it is worth revisiting briefly.

Existential risk from AI is a serious concern

It is intuitive that AI is risky.

First, creating something smarter, faster, and more capable than humans is obviously risky, since you need to very precisely either control it (i.e. stop it from doing things you don’t like) or align it (i.e. make it always try to do what you would want it to do). Both the control problem and the alignment problem for AIs still involve unsolved technical challenges. And that’s assuming that AI is in the right hands.

Second, even if the AIs remain in our control, they are likely to be as transformative as the industrial revolution. Eighteenth-century European monarchs would’ve found it hard to imagine how the steam engine could challenge their power, but the social changes it helped set in motion eventually wrested all their powers away. In the modern world, a lot of power depends on large educated workforces of humans, whereas sufficiently strong AGI might decorrelate power and humans, decreasing the incentive to have people be educated and prosperous - or to have people around at all.

Apart from object-level arguments, consider too the seriousness with which the AI doomsday is discussed. Many top researchers and all top AI lab CEOs have signed a statement saying “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Nuclear war and pandemics are the only other cases where similarly serious predictions have been made by a similarly serious set of people (though arguably climate change is close: the science on its effects is more established and certain, and catastrophe is more likely, but literal human extinction from it is much less likely).

Side-effects of non-existentially-bad AI might be large

Consider the internet, a widely-successful technology with a lot of benefits. There are credible claims that the internet is responsible for harms ranging from massively increased depression rates among teenagers to political polarisation to widespread productivity loss through addiction and distraction.

In the same way, the success of AI might lead to bad side effects, even if all the existential risks are avoided.

For example, AI could replace human connection. Human friends and partners might increasingly be replaced with AIs. However bad pre-AI social media was in other ways, you at least interacted with humans (or simple algorithms); with AIs, it’s possible to have what looks like a deep emotional relationship. Just look at the Replika subreddit from a year ago, when they changed the algorithm to only allow “PG-rated interactions”: many users were upset. The film “Her” doesn’t seem far off, as Sam Altman acknowledges. Such relationships give the human much more safety and control than human relationships do, which might be very attractive to humans while also excessively coddling them. Given that much human happiness and meaning comes from human relationships and bonding, widespread AI substitution of them could mean the destruction of a large part of all human wellbeing and meaning in the world. On a more prosaic level, society might atomise into individuals hoarding compute credits to spend on running their AI companions without connecting with other humans, with severe effects on society’s functioning, or humans might stop having children and human populations might crash. Humanity has flourished through collaboration and socialisation. If we use AIs to replace this in an overly thoughtless way, the fabric of society could crumble.

Apart from being superhuman at forming relationships with humans, AIs might be superhuman at persuasion. We can imagine AIs producing the vast majority of content that people consume. We can imagine a totalitarian world where the governments with the greatest compute resources can dominate the conversation forever. Instead of humans having ideas and sometimes persuading other humans to adopt them, driving social progress, any human-generated ideas might be swamped by a greater quantity of superhumanly persuasive counter-arguments that support the status quo. We can also imagine a dystopian decentralised world. Already, many online memes (in Dawkins’s original sense of the word) are maladaptive, spreading not by having good effects on their hosts but by being incredibly good at spreading from person to person. AI might make us much better at searching the space of ideas for the most viral ones. Ideas that aren’t maximally viral might be outcompeted. Eventually, our institutions could become mere puppets that serve as viral hosts for the most transmissive memes, as part of an endless tug-of-war where AI-generated memes compete to compel humans to spread them.

Seems bad.

Neither good nor bad, but some third thing

Many debates turn into mood affiliation debates. Are guns bad? Is more government good? But remember: politics is the mindkiller. Navigating a complicated world requires more than the ability to stick the label “good” or “bad” on entire domains. If you were seated in the control room of a nuclear power station, you wouldn’t ask yourself: uranium, good or bad? Instead, you’d want to steer towards the small set of states where the reaction is perched between dying out and exploding, while generating useful clean power.

We’ve also seen again and again that technology and social change have strong effects on each other, and these are often hard to predict. We’ve discussed how industrial technology may have led to democracy. There is serious academic debate about whether the stirrup caused feudalism, whether the Black Death was a driver of European liberalism, and whether social media was a significant cause of the Arab Spring. The birth control pill was a major influence on the sexual revolution, and the printing press helped the Protestant Reformation. Often, the consequences of a new technology are some obvious direct benefits, some obvious direct harms, and the shifting of some vast social equilibrium that ends up forever reshaping the world in some way no one saw coming. So far we’ve clearly ended up ahead on net, and maybe that will continue.

Humanity has spent over a hundred thousand years riding a feedback loop of accumulating cultural evolution. Over the past few hundred years, the industrial revolution boosted the technological progress feedback loop. Human wellbeing has skyrocketed, though along the way we’ve had - and are continuing to have - close calls with nuclear war, totalitarianism, and environmental issues. We’ve had a healthy dose of luck, in generalities like the incentive structures of industrial economics and in specifics like the heroism of Stanislav Petrov. But we’ve also had an enormous amount of human effort and ingenuity spent on trying to chart a good path for civilisation, from solar panel subsidies to the Allies winning World War 2.

For most of this time, the direction of the arrow of progress has been obvious. The miseries of poverty and the horrors of close-up totalitarianism are very powerful driving forces after all. And while both continue ravaging the world, developed countries have in many ways gotten complacent. There are fewer obvious areas of improvement for those lucky enough to enjoy a life of affluence in the developed world. But the future could be much better still.

Know where to aim

We think it’s important to have a target to aim for. We need to dream about the future we want. A strong culture needs a story of what it is driving towards, and humanity needs a compelling vision of how our future turns out well, so we can work together to create it. AI seems like the biggest upcoming opportunity and risk. We hope we can avoid the risks, and realise the positive vision presented here, together with a hundred other things we can’t yet imagine.


See LessWrong for additional comments & discussion.

2023-06-04

A Disneyland Without Children

The spaceship swung into orbit around the blue-grey planet with a final burn of its engines. Compared to the distance they had travelled, the world, now only some four hundred kilometres below and filling up one hemisphere of the sky, was practically within reach. But Alice was no less confused.

“Well?” she asked.

Charlie stared thoughtfully at the world slowly rotating underneath their feet, oceans glinting in the sunlight. “It looks lickable”, he said.

“We have a task”, Alice said, trying to sound gentle. Spaceflight was hard. Organic life was not designed for it. But their mission was critical, they needed to move fast, and Charlie, for all his quirks, would need to be focused.

“What’s a few minutes when it will take years for anything we discover to be known back home?” Charlie asked.

“No licking”, Alice said.

Charlie rolled his eyes, then refocused them on the surface of the planet below. They were just crossing the coast of one of the larger continents. Blue water was giving way to grey land.

“Look at the texture”, Charlie said. They had seen it from far away with telescopes, but there was something different about seeing it with their bare eyes. Most of the land surface of the planet was like a rug of fine grey mesh. If there had been lights, Alice would have guessed the entire planet’s land was one sprawling city, but as far as their instruments could tell, the world had no artificial lighting.

As far as they could tell, the world also had no radio. They had broadcast messages at every frequency they could, and in desperation had even used their engines to flash a message during their deceleration burn. No response had come.

Alice pulled up one of the telescope feeds on the computer to look closer at the surface. She saw grey rectangular slabs, typically several hundred metres on a side, with wide roads running between them. The pattern was not perfect - sometimes it was irregular, and sometimes there were smaller features too. Some of the smaller ones moved.

“Are they factories?” Charlie asked.

“I’d guess so”, Alice said, watching on the telescope feed as a steady stream of rectangular moving objects, each about ten metres long, slid along a street. Another such stream was moving along an intersecting street, and it looked like they would crash at the intersection, but the timing and spacing were such that vehicles from one stream crossed the intersection just as there were gaps in the other stream.

“A planet covered by factories, then”, Charlie said. “With no one home to turn the lights on.”

“I want to see what they’re making”, Alice said.

-

All through the atmospheric entry of their first drone package, Alice sat tight in her seat and clenched and unclenched her hands. So far all they had done was passive observation or broadcasting. A chunky piece of hardware tracing a streak of red-hot plasma behind it was a much louder knock. She imagined alien jet fighters scrambling to destroy their drones, and some space defence mechanism activating to burn their ship.

The image she saw was a jittery camera feed, showing the black back of the heatshield, the grey skin of the drone package, and a sliver of blue sky. It shook violently as the two halves of the heatshield detached from each other and then from the drone package, tumbling off in opposite directions. Land became visible, kilometres below, the grey blocks of the buildings tiny like children’s blocks but still visibly three-dimensional, casting shadows and moving as the drone package continued falling.

The three drones tested their engines, and for a moment flew - or at least slowed their descent - in an ungainly joint configuration, before breaking off from each other and spreading their wings to the fullest. The feed showed the other two drones veering off into the distance on long narrow wings, and then the view pulled up as the nose of the drone lifted from near-vertical to horizontal.

“Oops, looks like we have company”, Charlie said. He had been tapping away at some other screens while Alice watched the drone deployment sequence.

Alice jumped up from her seat. “What?”

“Our company is … a self-referential joke!”

Alice resisted the temptation to say anything and instead sunk back into her seat. On her monitor, the grey blocks continued slowly moving below the drone. She tapped her foot against the ground.

“Actually though”, Charlie said. “We’re not the only ones in orbit around this planet.”

“What else is orbiting? Has your sense of shame finally caught up with you and joined us?”

“Looks like satellites. Far above us, though. Can you guess how far?”

“I’d guess approximately the distance between you and maturity, so … five light-years?”

Charlie ignored her. “Exactly geostationary altitude”, he said, grinning. The grin was like some platonic ideal of intellectual excitement, too pure for Alice’s annoyance to stay with her, or for her to feel scared about the implications.

“But nothing in lower orbits?” Alice asked.

“No”, Charlie said. “Someone clearly put them there; stuff doesn’t end up at exactly geostationary altitude unless someone deliberately flies a communications or weather satellite there. Now I can’t be entirely sure that the geostationary satellites are completely dead, but I’d guess that they are.”

“Like everything else”, Alice said, but even as she said so she caught sight of a long trail of vehicles making its way along one of the roads. There was something more real about seeing them on the drone feed.

“Maybe this is just a mining outpost”, Charlie said. “Big rocket launch to blast out a billion tons of ore to god-knows-where, once a year.”

“Or maybe they’re hiding underground or in the oceans”, Alice said.

“Let’s get one of the drones to drop a probe into the oceans. I’ll send one of our initial trio over to the nearest one, it’s only a few hundred kilometres away”, Charlie said.

“Sure”, Alice said.

They split the work of flying the drones, two of them mapping out more and more of the Great Grey Grid (as Alice took to calling it in her head), and one flying over the planet’s largest ocean.

Even the oceans were mostly a barren grey waste. Not empty, though. They did eventually see a few small scaly fish-like creatures that stared at their environment with uncomprehending eyes. Alien life. A young Alice would have been ecstatic. But now she was on a mission, and her inability to figure out what had happened on this planet annoyed her.

In addition to the ocean probe, they had rovers they could send crawling along the ground. Sometimes the doors of the square buildings were open, and Alice would drive a rover past one opening. Most buildings seemed to be either warehouses of stacked crates or some kind of automated assembly line of skeletal grey robot arms and moving conveyor belts. A few seemed to place more barriers between the open air and their contents; what went on there, the rovers did not see.

The first time Alice tried to steer a rover into a building, it got run over by a departing convoy of vehicles. The vehicles were rectangular in shape but with an aerodynamic head, with three wheels on each side. Based on their dimensions, she could easily imagine one weighing ten or twenty tons. The rover had no chance.

“Finally!” Charlie had said. “We get to fight these aliens.”

But there was no fight. It seemed like it had been a pure accident, without any hint of malice. The grey vehicles moved and stopped on some schedule of their own, and for all Alice knew they were not just insensitive beasts but blind and dumb ones too.

The next rover got in, quickly scooting through the side of the entrance and then off to one side, out of the path of the grey vehicles. It wandered the building on its own, headlights turned on in the otherwise-dark building, to bring back a video stream of an assembly line brooded over by those same skeletal hands they had glimpsed from outside. Black plastic beads came in by the million on the grey vehicles. A small thin arm with a spike on the end punctured a few holes on one side of an amorphous plushy shape, and using these holes two of the black beads were sewn on. The shape got appendages, was covered with a layer of fluff, and the entire thing became a cheerful purple when it passed through an opaque box with pipes leading into it. It looked like a child’s impression of a hairy four-legged creature with black beady eyes above a long snout. A toy, but for whom?

The conveyor belt took an endless line of those fake creatures past the rover’s camera at the end of the assembly line. Alice watched them go, one by one, and fall onto the open back of a grey vehicle. It felt like each and every one made eye contact with her, beady black eyes glinting in the light. She watched for a long time as the vehicle filled up. Once it did, a panel slid over the open top to close the cargo bay, and it sped off out the door. The conveyor belt kept running, but there was a gap of a few metres to the next plushy toy. It came closer and closer to the end - and suddenly the next vehicle was driving into place, and the next creature was falling, and it just barely dropped into the vehicle’s storage hold as it slid into position.

“How scary do you find the Blight?” Alice asked.

“Scary enough that I volunteered for this mission”, Charlie said.

Alice remembered the charts they had been shown. They had been hard to miss; even the news, usually full of celebrity gossip and political machinations, had quickly switched to concentrating on the weirdness in the sky once the astronomers spotted it. Starlight dimming in many star systems, and what remained of the light spectra shifting towards the infrared. Draw a barrier around the affected area, and you get a sphere 30 light-years wide, expanding at a third of the speed of light. At the epicentre, a world that had shown all the signs of intelligent life that could be detected from hundreds of light-years away - a world that astronomers had broadcast signals to in the hopes of finally making contact with another civilisation - that had suddenly gone quiet and experienced a total loss of oxygen in its atmosphere. The Blight, they had called it.

In the following years, civilisation had mobilised. A hundred projects had sprung forth. One of them: go investigate the star system that was the second-best candidate for intelligent life, but had refused to answer radio signals, and see if someone was there to help. That was why they were here.

“I think I found something as scary as the Blight”, Alice said. “Come look at this.”

The purple creatures kept parading past the camera feed.

-

Over the next five days, while the Blight advanced another forty billion kilometres towards everything they loved back home, Alice and Charlie were busy compiling a shopping catalogue.

“Computers”, Alice said. “Of every kind. A hundred varieties of phones, tablets, laptops, smartwatches, smartglasses, smart-everything.”

“Diamonds and what seems to be jewellery”, Charlie said.

“Millions of tons of every ore and mineral.” They had used their telescopes on what seemed to be a big mine, but they had barely needed them. It was like a huge gash in the flesh of a grey-fleshed and grey-blooded giant, complete with roads that looked like sutures. There were white spots in the image, tiny compared to the mine, each one a sizeable cloud.

“Clothes”, Charlie continued. “Lots and lots of clothes of different varieties. They seem to be shipped around warehouses until they’re recycled.”

“Cars. Sleek electric cars by the million. But we never see them used on the roads, though there are huge buildings where brand-new cars are recycled. And airplanes, including supersonic ones.”

“A lot of things that look like server farms”, Charlie said. “Including ones underwater and on the poles. There’s an enormous amount of compute in this world. Like, mind-boggling. I was thinking we should figure out how to plug into all of it and mine some crypt-”

“Ships with nuclear fusion reactors”, Alice interrupted. There were steady trails of them cutting shortest-path routes between points on the coast.

“Solar panels”, Charlie said. “Basically every spare surface. The building roofs are all covered with solar panels.”

“And children’s plush toys”, Alice said.

They were silent for a while.

“We have a decent idea of what these aliens looked like”, Alice said. “They were organic carbon-based lifeforms, like us. Similar in size too, also bipedal. And it’s like they left some ghostly satanic industrial amusement park running, going through all the motions in their absence, and disappeared.”

“And they didn’t go to space, as far as we know”, Charlie said.

“At least we don’t have any more Blights to worry about then”, Alice said. “I can’t help imagining that the Blight is something like this. Something that just tiles planets with a Great Grey Grid, does something even worse to the stars, and then moves on.”

“They had space technology, but apparently whoever built the Great Grey Grid didn’t fancy it”, Charlie said. “The satellites might predate it. Probably there were satellites in lower orbits too, but their orbits decayed and they fell down, so we only see the geostationary ones up high.”

“And then what?” Alice said. “All of them vanished into thin air and left behind a highly automated ghost town?”

Charlie shrugged.

“Can we plug ourselves into their computers?” Alice asked.

“To mine cr-?”

“To see if anyone’s talking.”

Charlie groaned. “You can’t just plug yourself into a communication system and see anything except encrypted random-looking noise.”

“How do you know they encrypt anything?”

“It would be stupid not to”, Charlie said.

“It would be stupid to blind yourself to the rest of the universe and manufacture a billion plush toys”, Alice said.

“Seems like it will work for them until the Blight arrives.”

-

Alice floated in the middle of the central corridor of the ship. The ship was called Legacy, but even before launch they had taken to calling it “Leggy” for short. The central corridor linked the workstation at the front of the ship, where they spent most of their days, to the storage bay at the back. In the middle of the corridor, three doors at 120-degree angles from each other led to the small sleeping rooms, each of them little more than a closet.

Alice had woken up only a few minutes ago, and still felt an early-morning grogginess as well as the pull of her bed. The corridor had no windows or video feeds, but was dimly lit by the artificial blue light from the workstation. They were currently on the night side of the planet.

She took a moment to look at the door of the third sleeping room. It was closed, like always, its intended inhabitant wrapped in an airtight seal of plastic in a compartment of the storage bay. They would flush him into space before they left for home again; they could have no excess mass on the ship for the return journey.

Alice thought again of the hectic preparations for the mission. Apart from Blightsource, this was the only planet the astronomers had spotted that might have intelligent life on it, and even those indications were vague. But when you look into space and see something that looks like an approaching wall of death - well, that has a certain way of inspiring long-shots. Hence the mission, hence Legacy’s flight, hence crossing over the vast cold stretch of interstellar space to see if any answers could be found on this world. Hence Bob’s death while in cryonic suspension for the trip. Hence the hopes of all civilisation potentially resting on her and Charlie figuring out something valuable.

If she and Charlie could find something on this world - some piece of insight, some tool or weapon among the countless pieces of technological wizardry it had in spades - that had a credible chance against the Blight when it arrived … maybe there was hope.

Alice pushed off on the wall and set herself in a slow spinning motion. The ship seemed to revolve around her. Bob’s door revolved out of sight, and Charlie’s door became visible -

Wait.

Her gravity-bound instincts kicked in and she tried to stop the spin by shoving back with her hands, but there was nothing below her, so she remained spinning slowly. She breathed in deeply to calm herself down, then kicked out a foot against the wall to push herself to the opposite one. She grabbed one of the handles on the wall and held onto it.

The light on Charlie’s room was off. That meant it was empty.

“Charlie!” Alice called.

No response.

The fear came fast. Here she was, light-years from home, perhaps all alone on a spaceship tracing tight circles around a ghostly automated graveyard planet. The entire mass of the planet stood between her and the sun. Out between the stars, the Blight was closing in on her homeworld. She counted to calm herself down; one, two, three, … and just like that, the Blight was three hundred thousand kilometres closer to home. Unbidden, an image of the fluffy purple creature popped up in her mind, complete with its silly face and unblinking eye contact.

Soundlessly, she used the handles on the wall of the corridor to pull herself towards the workstation. She reached the door, peered inside -

There was Charlie, staring at a computer screen. He looked up and saw Alice. “You scared me!” he said. “Watch out, no need to sneak behind me so quietly.”

“I called your name”, Alice said.

“I know, I know”, Charlie said. “But I’m on to something here, and I just want to run a few more checks and then surprise you with the result.”

“What result?” Alice glanced at some of the screens. Two of the drones were above the Great Grey Grid, one above the ocean. With their nuclear power source, they could stay in the air as long as they wanted. Even though their focus was no longer aerial reconnaissance, there was no reason not to keep them mapping the planet from up close, occasionally picking up things that their surveys from the ship did not.

“I fixed the electrical issues with the rover and the cable near the data centre”, Charlie said.

“So you’re getting data, not just frying our equipment?”

“Yes”, Charlie said. “And guess what?”

“What?”

“Guess!”

“You found a Blight-killer”, Alice said.

“No! Even better! These idiots don’t encrypt their data as far as I can tell. And I think a lot of it is natural language.”

“Okay, and can we figure out what it means?”

“We have automated programs for trying to derive syntax rules and so on”, Charlie said. “They’ve already found something, including good guesses of which words are prepositions and what type of grammar the language has. But mapping words to meanings based purely on statistics of how often they occur is hard.”

“I’ve seen products they have with pictures and instruction manuals”, Alice said. “We could start there.”

“Oh no”, Charlie said. “This is going to be a long process.”

-

By chance, it turned out not to be. Over the next day, they sent a rover to a furniture factory and managed, after some attempts, to steal an instruction leaflet out of a printer before the robotic arm could snatch it to be packaged with the furniture. Somehow Alice was reminded of her childhood adventures stealing fruit from the neighbour’s garden.

They figured out which words meant “cupboard”, “hammer”, “nail”, and so on. But then another rover on the other side of the world saw something. It was exploring a grey and windy coast. On one side of the rover was the Great Grey Grid and the last road near the coast, the occasional vehicle hurtling down it. But on the other side was a stretch of rocky beach hammered by white-tipped waves, a small sliver of land that hadn’t been converted to grey.

The land rose by the beach, forming a small hill with jagged rocky sides. The sun shone down on one face of it, but there was a hollow, or perhaps a small cave, left in the dark by the overhanging rock. And around this entrance, several unmistakable symbols had been scratched into the rock, each several metres high.

Alice took manual control of the rover and carefully instructed it to drive over the rocky beach towards the cave entrance. On the way it passed what seemed to be a fallen metal pole with some strips of fabric still clinging to it.

Once it was close enough to the mouth of what turned out to be a small cave, the camera could finally see inside.

There was a black cabinet inside. Not far from it, lying on the ground, was the skeleton of a creature with four slender limbs and a large head. Empty eye sockets stared out towards the sky.

Alice felt her heart beating fast. It wasn’t quite right; many of the anatomical details were off. But it was close enough, the similarity almost uncanny. Here, hundreds of light-years away, evolution had taken a similar path, and produced sapience. And then killed it off.

“Charlie”, she said in a hoarse voice.

“What?” Charlie asked, sounding annoyed. He had been staring at an instruction manual for a chair, but he looked up and saw the video feed. “Oh”, he said, in a small voice. “We found them.”

Alice tore her eyes away from the skeleton and to the small black cabinet. It had a handle on it. She had the rover extend an arm and open it.

-

The capsule docked with Leggy and in the weightless environment they pushed the cabinet easily into the ship. They had only two there-and-back-again craft - getting back to orbit was hard - but they had quickly decided to use one to get this cabinet up. It had instructions, after all; very clear instructions, though ones that their rovers couldn’t quite follow.

It started from a pictographic representation, etched onto plastic cards, of how you were supposed to read the disks. They managed to build something that could read the microscopic grooves on the disk as per the instructions, and transfer the data to their computers.

After a few hours of work, they had figured out the encodings for numbers, the alphabet, their system of units, and seemingly also some data formats, including for images.

Confirmation came next: the following item on the disk was an image of two of the living aliens, standing on a beach during a sunset. Alice stared into their faces for a long time.

Then came images paired with what were clearly words of text, about fifty of them. Some of the more abstract ones took a few guesses, but ultimately they thought they had a base vocabulary, and with the help of some linguistics software, it did not take very long before they had a translated vocabulary list of about eight thousand words.

Alice was checking the work when Charlie almost shouted: “Look at this!”

Alice looked at what he was pointing at. It was a fragment of text that read:

Hello,

The forms for ordering the new furniture are attached. Please fill them in and we will respond to your order as quickly as we can!

If you need any help, please contact customer support. You will find the phone number on our website.

“What is this? Is Mr Skeleton trying to sell us furniture from beyond the grave?” Alice asked.

“No”, Charlie said. “This isn’t from the recovered data; I haven’t looked at the big remaining chunk yet. This is what I got by interpreting one of the packets of data running on the cables that our rover is plugged into, using what we now know about their data formats and the language.”

“And?”

“I don’t get it!” Charlie said. “Why would a world of machines send each other emails in natural language?”

“Why would they manufacture plushy toys? I doubt the robotic arms need cuddles.”

Charlie looked at the world, slowly spinning underneath their ship. “Being so close to it makes me feel creeped out. I don’t get it.”

“You don’t want to lick it anymore?” Alice asked. She decided not to tell Charlie about her own very similar feelings earlier, when she had thought for a moment that he had gone missing.

Charlie ignored her. “I think the last thing on Mr Skeleton’s hard-drive is a video”, he said. “I’ve checked and it seems to play.”

“You looked at it first?” Alice said in a playfully mocking tone. The thrill of discovery was getting to her.

“Only the first five frames”, Charlie said. “Do you want to watch it?”

-

Our Civilisation: A Story, read a short fragment of subtitle, white on black, auto-translated by a program using the dictionary they had built up.

There was a brief shot of some semi-bipedal furry creature walking in the forest. Then one of a fossilised skeleton of something more bipedal and with a bigger head. Then stone tools: triangular ones that might have been spear tips, saw-toothed ones, clubs. A dash of fading red paint on a rock surface, in the shape of a cartoon version of that same bipedal body plan.

There were two pillars of stone in a desert on what looked like a pedestal, some faded inscription at its base and the lone and level sands stretching far away. There was a shot of an arrangement of rocks, some balancing on top of two others, amid a field of green. A massive pyramidal stone structure, lit by the rising sun.

Blocky written script etched on a stone tablet. Buildings framed by columns of marble. A marble statue of one of the aliens, a sling carelessly slung over its shoulder, immaculate in its detail. A spinning arrangement of supported balls orbiting a larger one. And still it moves, the subtitles flashed.

A collection of labelled geometric diagrams on faded yellow paper. Mathematical Principles of Natural Philosophy.

A great ornate building with a spire. A painting of a group of the aliens clad in colourful clothing. An ornate piece of writing. We hold these truths to be self-evident …

A painting of a steam locomotive barrelling along tracks. A diagram of a machine. A black-and-white picture of one of the aliens, then another. Government of the people, by the people, for the people, shall not perish …

An alien with white hair sticking up, holding a small stick of something white and with diagrams of cones behind him. Grainy footage of propeller aircraft streaking through the sky, and then of huge masses of people huddling together and walking across a barren landscape, and then of aliens all in the same clothes charging a field, some of them suddenly jerking about and falling to the ground. We will fight on the beaches, we will fight on the landing grounds …

Black-and-white footage of a mushroom cloud slowly rising from a city below. A picture, in flat pale blue and white, showing a stylised representation of the world’s continents. The same picture, this time black-and-white, on the wall of a room where at least a hundred aliens were sitting.

An alien giving a speech. I have a dream. An alien, looking chubby in a space suit, standing on a barren rocky surface below an ink-black sky next to a pole with a colourful rectangle attached to it.

Three aliens in a room, looking at the camera and holding up a piece of printed text. Disease eradicated.

What looked like a primitive computer. A laptop computer. An abstract helical structure of balls connected by rods, and then flickering letters dancing across the screen.

A blank screen, an arrow extending left to right across it - time, flashed the subtitles - and then another arrow from the bottom-left corner upwards - people in poverty - and then a line crawling from left to right, falling as it did so.

A line folding itself up into a complicated shape. AI system cracks unsolved biology problem.

From then on, the screen showed pictures of headlines.

All routine writing tasks now a solved problem, claims AI company.

Office jobs increasingly automated.

Three-fourths of chief executives of companies on the [no translation] admit to using AI to help write emails, one-third have had AI write a shareholder letter or strategy document.

Exclusive report: world’s first fully-automated company, a website design agency.

Mass layoffs as latest version of [no translation] adopted at [no translation]; ‘stunning performance’ at office work.

Nations race to reap AI productivity gains: who will gain and who will lose?

CEO of [no translation] resigns, claiming job pointless, both internal and board pressure to defer to “excellently-performing” AI in all decisions.

[No translation] ousts executive and management team, announces layoffs; board supports replacing them with AI to keep up with competition.

Entirely or mostly automated companies now delivering 2.5x higher returns on investment on average; ‘the efficiency difference is no joke’, says chair of [no translation].

Year-on-year economic growth hits 21% among countries with advanced AI access.

Opinion: the new automated economy looks great on paper but is not serving the needs of real humans.

Mass protests after [no translation], a think-tank with the ear of the President, is discovered to be funded and powered by AI board of [no translation], and to have practically written national economic policy for the past two years.

‘No choice but forward’, says [no translation] after latest round of worries about AI; unprecedented economic growth still strong.

[No translation 1] orders raid of [no translation 2] over fears [no translation 2] is not complying with latest AI use regulations, but cannot execute order due to noncompliance from the largely-automated police force; ‘we are working with our AI advisers and drivers in accordance with protocol, and wish to assure the [no translation 3] people that we are still far from the sci-fi scenario where our own police cars have rebelled against us.’

‘AI overthrow’ fears over-hyped, states joint panel of 30 top AI scientists and business-people along with leading AI advisory systems; ‘they’re doing a good job maximising all relevant metrics and we should let them keep at it, though businesses need to do a better job of selecting metrics and tough regulation is in order.’

Opinion: we’re better-off under a regime of rigorous AI decision-making than under corrupt politicians; let the AIs repeat in politics what they’ve done for business over the last five years.

‘The statistics have never looked so good’ - Prime Minister reassures populace as worries mount over radical construction projects initiated by top AI-powered companies.

Expert panel opinion: direct AI overthrow scenario remains distant threat, but more care should be exercised over choice of target metrics; recommend banning of profit-maximisation target metric.

Movement to ban profit-maximising AIs picks up pace.

Top companies successfully challenge new AI regulation package in court.

‘The sliver of the economy over which we retain direct control will soon be vanishingly small’, warns top economist, ‘action on AI regulation may already be too late’.

Unverified reports of mass starvation in [no translation]; experts blame agricultural companies pivoting to more land-efficient industries.

Rant goes viral: ‘It’s crazy, man, we just have these office AIs that only exist in the cloud, writing these creepily-human emails to other office AIs, all overseen by yet another AI, and like most of their business is with other AI companies; they only talk to each other, they buy and sell from each other, they do anything as long as it makes those damned numbers on their spreadsheets just keep ticking up and up; I don’t think literally any human has ever seen a single product out of the factory that just replaced our former neighbourhood, but those factories just keep going up everywhere.’

Revolution breaks out in [no translation]; government overthrown, but it’s business-as-usual for most companies, as automated trains, trucks, and ships keep running.

[No translation] Revolution: Leaked AI-written email discovered, in which the AI CEO ordered reinforcement of train lines and trains three weeks ago. ‘We are only trying to ensure the continued functioning of our supply chains despite the recent global unrest, in order to best serve our customers’, CEO writes in new blog post.

[No translation] Revolution: crowds that tried swarming train lines run over by trains; ‘the trains didn’t even slow down’, claim witnesses. CEO cites fiduciary duties.

Despite unprecedented levels of wealth and stability, you can’t actually do much: new report finds people trying to move house, book flight or train tickets, or start a new job or company often find it difficult or impossible; companies prioritising serving ‘more lucrative’ AI customers and often shutting down human-facing services.

Expert report: ‘no sign of human-like consciousness even in the most advanced AI systems’, but ‘abundantly clear’ that ‘the future belongs to them’.

New report: world population shrinking rapidly; food shortages, low birth rates, anti-natalist attitudes fuelled by corporate campaigns to blame.

The screen went blank. Then a video of an alien appeared, sitting up on a rocky surface. Alice took a moment to realise that it was the same cave they had found the skeleton in. The alien’s skin was wrapped tight around its bones, and even across the vast gulf of biology and evolutionary history, Alice could tell that it was not far from death. It opened its mouth, and sound came out. Captions appeared beneath it.

“It is the end”, the alien said, its eyes staring at them from between long unkempt clumps of hair. “On paper, I am rich beyond all imagination. But I have no say in this new world. And I cannot find food. I will die.”

The wind tugged at the alien’s long hair, but otherwise the alien was so still that Alice wondered if it had died there and then.

“There is much I would like to say”, the alien said. “But I do not have the words, and I do not have the energy.” It paused. “I hope it was not all in vain. Or, that if for us it was, that for someone up there it isn’t.”

The video went blank.

Alice and Charlie watched the blank screen in silence.

“At least the blight they birthed seems to have stuck to their world”, Charlie said after a while.

“Yeah”, Alice said, slowly. “But I don’t think we’ll find anything here.”

Legacy completed nine more orbits of the planet, and then jettisoned all unnecessary mass into space. Its engines jabbed against the darkness of space, bright enough to be visible from the planet’s surface. There was no one to see them.

In a factory down on the planet, an assembly line of beady-eyed purple plush toys marched on endlessly.


The title of this work is taken from a passage in Superintelligence: Paths, Dangers, Strategies, where Nick Bostrom writes:

We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today—a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland without children. [emphasis added]

The outline of events presented draws inspiration from several sources, but most strongly from Paul Christiano’s article What failure looks like.