Holden Karnofsky's Singularity Institute critique: other objections

post by Paul Crowley (ciphergoth) · 2012-05-11T07:22:13.699Z


The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky's excellent critique of the Singularity Institute means that it's hard to keep track of the resulting discussion.  I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves.

Other objections to SI's views

There are other debates about the likelihood of SI's work being relevant/helpful; for example,

Unlike the three objections I focus on, these other issues have been discussed a fair amount, and if these other issues were the only objections to SI's arguments I would find SI's case to be strong (i.e., I would find its scenario likely enough to warrant investment in).

6 comments


comment by Mitchell_Porter · 2012-05-11T09:03:16.549Z

In connection with this discussion, I am pleased to announce a new initiative, the Unfriendly AI Pseudocode Contest!

Objective of the contest: To produce convincing examples of how a harmless-looking computer program that has not been specifically designed to be "friendly" could end up destroying the world. To explore the nature of AI danger without actually doing dangerous things.

Examples: A familiar example of unplanned unfriendliness is the program designed to calculate pi, which reasons that it could calculate pi with much more accuracy if it turned the Earth into one giant computer. Here a harmless-looking goal (calculate pi) combines with a harmless-looking enhancement (vastly increased "intelligence") to produce a harmful outcome (the Earth turned into one giant computer that does nothing but calculate pi).

An entry in the Unfriendly AI Pseudocode Contest intended to illustrate this scenario would need to be specified in much more detail than this. For example, it might contain a pseudocode specification of the pi-calculating program in a harmless "unenhanced" state, then a description of a harmless-looking enhancement, and then an analysis demonstrating that the program has now become an existential risk.
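
To make the shape of such an entry concrete, here is a minimal sketch (in C rather than pseudocode, and written for this post rather than taken from any real entry) of the harmless "unenhanced" half: a pi calculator with a fixed accuracy target. The dangerous "enhancement" is only gestured at in a comment, since analysing that step is exactly what an entry would have to do.

```c
#include <stdio.h>

/* Harmless, unenhanced state: approximate pi with the Leibniz series
   until the terms fall below a fixed error bound. */
double estimate_pi(double max_error) {
    double sum = 0.0;
    double term = 1.0;
    double sign = 1.0;
    long k = 0;
    while (term > max_error) {
        term = 1.0 / (2.0 * k + 1.0);
        sum += sign * term;
        sign = -sign;
        k++;
    }
    return 4.0 * sum;
}

int main(void) {
    /* The goal looks innocent: "calculate pi accurately." A contest entry
       would then describe an enhancement (for example, letting the program
       keep lowering max_error and acquire more hardware in order to do so)
       and argue that the enhanced version becomes an existential risk. */
    printf("pi ~= %.7f\n", estimate_pi(1e-7));
    return 0;
}
```

The unenhanced program above is obviously safe; the contest's point is that the hard work lies in showing exactly how and where an innocuous-looking enhancement tips it into the harmful regime.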

Prizes: The accolades of your peers. The uneasy admiration of a terrified humanity, for whom your little demo has become the standard example of why "friendliness" matters. The gratitude of nihilist supervillains, for whom your pseudocode provides a convenient blueprint for action...

Replies from: VincentYu, Normal_Anomaly, J_Taylor
comment by VincentYu · 2012-05-11T15:38:30.322Z

A variant of this contest with less catastrophic unfriendliness actually ran for a few years: the (now defunct) Underhanded C Contest. The description below is from the contest web page:

The Underhanded C Contest is an annual contest to write innocent-looking C code implementing malicious behavior. In this contest you must write C code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should do something subtly evil.

Every year, we will propose a challenge to coders to solve a simple data processing problem, but with covert malicious behavior. Examples include miscounting votes, shaving money from financial transactions, or leaking information to an eavesdropper. The main goal, however, is to write source code that easily passes visual inspection by other programmers.
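
For a flavour of what "subtly evil" looks like, here is a small made-up illustration in the contest's spirit (not an actual entry; the scenario and names are invented for this post): a currency conversion that looks like routine bookkeeping but quietly shaves cents through floating-point truncation.

```c
#include <stdio.h>

/* Convert a dollar amount to whole cents for the ledger.
   Looks like an innocent unit conversion, but the cast truncates rather
   than rounds, so amounts such as 0.29 (which is 28.999999999999996
   after the multiplication in double arithmetic) silently lose a cent. */
long to_cents(double dollars) {
    return (long)(dollars * 100);
}

int main(void) {
    double transactions[] = { 0.29, 0.57, 2.00, 0.70 };
    long total = 0;
    for (int i = 0; i < 4; i++)
        total += to_cents(transactions[i]);
    /* The true total is 356 cents; this tally comes up two cents short,
       and the missing cents go wherever the author arranges. */
    printf("ledger total: %ld cents\n", total);
    return 0;
}
```

A real entry would of course hide the behaviour far better; the point here is only that the malicious step can sit inside an operation reviewers tend to skim past.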

comment by Normal_Anomaly · 2012-05-11T12:55:43.376Z

This contest sounds seriously cool and possibly useful, but it looks like a valid entry would require the pseudocode for a general intelligence, which as far as I know is beyond the capability of anyone reading this post.

Replies from: David_Gerard
comment by David_Gerard · 2012-05-11T18:44:49.004Z

I expect at this stage, you'd be allowed an occasional "and then a miracle occurs" until we work out what step two looks like.

comment by J_Taylor · 2012-05-12T04:43:08.226Z

LessWrong is not an enjoyable place to post pseudocode. I learned this today.

comment by Rain · 2012-05-11T12:45:14.824Z

Intelligence is the greatest tool humans have. Computers show a path to implementing intelligence outside a human brain. We should prepare for AGI as best we can.