AI Benefits Post 3: Direct and Indirect Approaches to AI Benefits

post by Cullen (Cullen_OKeefe) · 2020-07-06


This is a post in a series on "AI Benefits." It is cross-posted from my personal blog. For other entries in this series, navigate to the AI Benefits Blog Series Index page.

This post is also discussed on the Effective Altruism Forum.

For comments on this series, I am thankful to Katya Klinova, Max Ghenis, Avital Balwit, Joel Becker, Anton Korinek, and others. Errors are my own.

If you are an expert in a relevant area and would like to help me further explore this topic, please contact me.

Direct and Indirect Approaches to AI Benefits

I have found it useful to distinguish between two high-level approaches to producing AI Benefits: direct and indirect. The direct approach, which dominates current discussions of AI for Good, is to apply AI technologies directly to some social problem. This is the natural way to think about creating AI Benefits: using the AI itself to do something beneficial.

However, AI companies should resist becoming the proverbial hammer to which every social problem looks like a nail. Some social problems are not yet, and may never be, best addressed through the application of AI. Other resources, particularly money, may be more useful in these contexts. Thus, in some circumstances, an AI developer might do the most good by maximizing its income (perhaps subject to some ethical side-constraints) and donating the surplus to an organization better positioned to turn spare resources into good outcomes. This is the indirect approach to AI Benefits.

Actors, including those aiming to be beneficial, have only finite resources. There will therefore often be a tradeoff between pursuing direct and indirect benefits, especially since Benefits are, by the working definition of this series, not profit-maximizing. A company that uses spare resources (compute, employee time, etc.) to build a socially beneficial AI application could presumably have used those same resources to earn a profit through its normal course of business.

The beneficial return on resources allocated directly versus indirectly will probably vary considerably between organizations. For example, an algorithmic trading firm might not be able to solve many neglected social problems directly with its software, but could easily donate a share of its profits to a charity serving the poor. Conversely, an NLP startup building a translation service for a language spoken primarily by a poor population might be unable to make much profit (due to users' low incomes) but could generate enormous benefits for that population by subsidizing its service. Moreover, as this example shows, the decision to develop one type of AI over another can make one or the other approach easier later on.
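To make the marginal nature of this comparison concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not an estimate: the benefit curves, the profit and charity multipliers, and the function names are all assumed for the sake of the example. The point is only that the decision is made increment by increment, allocating each unit of spare resources to whichever approach yields the larger marginal benefit.

```python
# Minimal sketch of the direct-vs-indirect allocation decision.
# All numbers and names are hypothetical illustrations, not estimates.


def direct_benefit(resources: float) -> float:
    """Hypothetical benefit from applying spare resources to a
    beneficial AI application (diminishing returns assumed)."""
    return 120 * (resources ** 0.5)


def indirect_benefit(resources: float) -> float:
    """Hypothetical benefit from earning profit with spare resources
    and donating it to an effective charity (roughly linear)."""
    profit_per_unit = 10.0      # revenue per unit of resources (assumed)
    benefit_per_dollar = 1.5    # charity's benefit per dollar (assumed)
    return resources * profit_per_unit * benefit_per_dollar


def allocate(total: float, step: float = 1.0) -> tuple[float, float]:
    """Greedily assign each increment of resources to whichever
    approach currently has the larger marginal benefit."""
    direct = indirect = 0.0
    for _ in range(int(total / step)):
        gain_direct = direct_benefit(direct + step) - direct_benefit(direct)
        gain_indirect = (indirect_benefit(indirect + step)
                         - indirect_benefit(indirect))
        if gain_direct >= gain_indirect:
            direct += step
        else:
            indirect += step
    return direct, indirect


direct, indirect = allocate(100.0)
print(f"direct: {direct}, indirect: {indirect}")
# Under these made-up curves, the first ~16 units go to the direct
# approach (steep early returns), and the rest go to donation.
```

In practice neither curve is known with any precision, and an organization with a different profit engine or a different beneficial application would see very different curves; the sketch only illustrates why the answer can differ sharply across organizations.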

Although this distinction may seem straightforward, comparing the two approaches often is not, owing to difficulties in measuring and comparing benefits.

As a final note, the community of AI Benefactors should be wary of excessive focus on Benefits that are easy to pursue, likely to succeed, or already in existence. Neglected problems may offer a higher initial return on investment. Furthermore, pursuing options with uncertain benefits can yield valuable information. Finally, many of the most beneficial applications of AI probably have not been invented yet, and will therefore require high-risk R&D efforts. The availability (and sometimes preferability) of indirect Benefits should not discourage high-risk, high-reward direct Benefits engineering efforts, though the ideal portfolio of AI Benefits across all beneficial organizations probably includes some of both.
