AI Benefits Post 1: Introducing “AI Benefits”

post by Cullen (Cullen_OKeefe) · 2020-06-22


This is a post in a series on "AI Benefits." It is cross-posted from my personal blog. For other entries in this series, navigate to the AI Benefits Blog Series Index page.

This post is also discussed on the Effective Altruism Forum. Links to those cross-posts are available on the Index page.

For comments on this series, I am thankful to Katya Klinova, Max Ghenis, Avital Balwit, Joel Becker, Anton Korinek, and others. Errors are my own.

If you are an expert in a relevant area and would like to help me further explore this topic, please contact me.

Introducing “AI Benefits”

Since I started working on the Windfall Clause report in summer 2018, I have become increasingly focused on how advanced AI systems might benefit humanity. My present role at OpenAI has increased my attention to this question, given OpenAI’s mission of “ensur[ing] artificial general intelligence (AGI) . . . benefits all of humanity.”

This post is the first in a series on “AI Benefits.” The series will be divided into two parts. In the first part, I will explain the current state of my thinking on AI Benefits. In the second part, I will highlight some of the questions on which I have substantial uncertainty.

My hope is that this series will generate useful discussion (and particularly criticism) that can improve my thinking. I am particularly interested in the assistance of subject-matter experts in the fields listed at the end of this series who can offer feedback on how to approach these questions. Although I do raise some questions explicitly, I also have uncertainty surrounding many of the framing assumptions made in this series. Thus, commentary on any part of this series is welcome and encouraged.

In the first few posts, starting with this one, I want to briefly explain what I mean by “AI Benefits” for the purposes of this series. The essential idea of AI Benefits is simple: AI Benefits means AI applications that are good for humanity. However, the concept has some wrinkles that need further explaining.

AI Benefits is Distinct from Default Market Outcomes

One important clarification is the distinction between AI Benefits, as I will be using the term, and other benefits from AI that arise through market mechanisms.

I expect AI to create and distribute many benefits through market systems, and in no way wish to reject markets as important mechanisms for generating and distributing social benefits. However, I also expect profit-seeking businesses to vigorously search for and develop profitable forms of benefits, which are thus likely to be generated absent further intervention.

Since I care about being beneficial relative to a counterfactual scenario in which I do nothing, I am more interested in discussing benefits that businesses are unlikely to generate from profit-maximizing activities alone. Thus, for the rest of this series I will be focusing on the subset of AI Benefits beyond what markets, populated by actors not motivated by social benefit, would likely provide by default.

There are several reasons why an AI Benefit might not be well provided for by profit-maximizing businesses.[1] The first is that some positive externalities from a Benefit might not be easily capturable by market actors.[2] A classic example of this might be using AI to combat climate change, a global public good. It is common for innovations to have such positive externalities. The Internet is a great example of this; its "creators" (whoever they are) likely captured very little of the benefits that flow from it.

Profit-maximizers focus on consumers' willingness to pay ("WTP"). However, a product for which rich consumers have high WTP can yield far smaller improvements to human welfare and happiness than a product aimed at poor consumers, despite the latter's low WTP. Accordingly, products benefitting poor consumers might be underproduced by profit-seekers relative to the social value (happiness and flourishing) they could create.[3] Actors less concerned with profits should fill this gap. Consumers' bounded rationality could also lead them to undervalue certain products.
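
To make the diminishing-marginal-utility point concrete, here is a minimal sketch in Python (my own illustration, not from any cited source). It assumes logarithmic utility of income, and the incomes and WTP figures are entirely hypothetical:

    import math

    def utility(income: float) -> float:
        # Assumed log utility: a common stand-in for diminishing
        # marginal utility of income.
        return math.log(income)

    def welfare_gain(income: float, benefit_value: float) -> float:
        # Utility gained if a product delivers `benefit_value` dollars'
        # worth of value to a consumer at the given income.
        return utility(income + benefit_value) - utility(income)

    # Hypothetical consumers: a rich one with $100,000 income and $1,000 WTP,
    # and a poor one with $1,000 income and $100 WTP.
    rich = welfare_gain(income=100_000, benefit_value=1_000)  # ln(1.01) ~= 0.0100
    poor = welfare_gain(income=1_000, benefit_value=100)      # ln(1.10) ~= 0.0953

    print(f"Rich consumer (WTP $1,000): {rich:.4f}")
    print(f"Poor consumer (WTP $100):   {poor:.4f}")
    # Despite tenfold-lower WTP, the poor consumer's welfare gain is
    # roughly tenfold larger.

Under this stylized assumption, ranking products by WTP inverts the ranking by welfare gain.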

This line of work also focuses on what benefactors can do unilaterally. Therefore, I largely take market incentives as given, even though the maximally beneficial thing to do might be to improve market incentives. (As an example, carbon pricing could correct the negative externalities associated with carbon emissions.) Such market-shaping work is very important to good outcomes, but it is less tractable for individual benefactors than for entire governments. However, a complete portfolio of Benefits might include funding advocacy for better policies.

A point of clarification: although by definition markets will generally not provide the subset of AI Benefits on which I am focusing, market actors can play a key role in providing AI Benefits. Some profit-driven firms already engage in philanthropy and other corporate social responsibility (“CSR”) efforts. Such behaviors may or may not be ultimately aimed at profit. Regardless, the resources available to large for-profit AI developers may make them adept at generating AI Benefits. The motivation, not the legal status of the actor, determines whether a beneficial activity counts as the type of AI Benefits I care about.

Of course, nonprofits, educational institutions, governments, and individuals also provide such nonmarket benefits. The intended audience for this work is anyone aiming to provide nonmarket AI Benefits, including for-profit firms acting in a philanthropic or beneficent capacity.


  1. Cf. Hauke Hillebrandt & John Halstead, Impact Investing Report 26–27 (2018), https://founderspledge.com/research/fp-impact-investing [https://perma.cc/7S5Y-7S6F]; Holden Karnofsky, Broad Market Efficiency, The GiveWell Blog (July 25, 2016), https://blog.givewell.org/2013/05/02/broad-market-efficiency/ [https://perma.cc/TA8D-YT69].

  2. See Hillebrandt & Halstead, supra, at 14–15.

  3. See id. at 12–14.

3 comments


comment by [deleted] · 2020-06-23

Just a note that your windfall clause link to your website is broken. https://cullenokeefe.com/windfall-clause takes me to a "We couldn't find the page you're looking for" error.

comment by Cullen (Cullen_OKeefe) · 2020-06-23

Thanks! Fixed.

comment by Donald Hobson (donald-hobson) · 2020-06-25

One possibility that I and many others consider likely is a singularity (Foom).

Its line of thinking runs as follows:

Once the first AI reaches human level, AGI, or some similar milestone, it can self-improve very rapidly.

This AI will rapidly construct self-replicating nanotech, or some better tech we haven't yet imagined, and become very powerful.

At this stage, what happens is basically whatever the AI wants to happen. Any problem short of quantum vacuum collapse can be quickly magicked away by the AI.

There will only be humans alive at this point if we have programmed the AI to care about humans. (A paperclip-maximising AI would disassemble the humans for raw materials.)

Any human decisions beyond this point are irrelevant, except to the extent that the AI is programmed to listen to humans.

There is no reason for anything resembling money or a free market to exist, unless the AI wants to preserve them for some reason. And even then it would be a toy market, under the total control of the AI.

If you do manage to get multiple competing AIs around, and at least one cares about humans, we become kings in the chess game of the AIs (pieces that are important to protect, but basically useless).