Responses to Catastrophic AGI Risk: A Survey

post by lukeprog · 2013-07-08T14:33:50.800Z · LW · GW · Legacy · 8 comments


A great many Less Wrongers gave feedback on earlier drafts of "Responses to Catastrophic AGI Risk: A Survey," which has now been released. This is the preferred discussion page for the paper.

The report, co-authored by former MIRI researcher Kaj Sotala and the University of Louisville's Roman Yampolskiy, is a summary of the extant literature (250+ references) on AGI risk, and can serve either as a guide for researchers or as an introduction for the uninitiated.

Here is the abstract:

Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may pose a catastrophic risk to humanity. After summarizing the arguments for why AGI may pose such a risk, we survey the field’s proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

8 comments


comment by Nick_Beckstead · 2013-07-09T11:51:12.133Z · LW(p) · GW(p)

This sounds like a good thing to be releasing. I think clearly going over the basics and the state of the field is important. I look forward to reading it.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-10T11:37:31.758Z · LW(p) · GW(p)

I'd love to hear your thoughts once you have read it.

comment by JoshuaZ · 2013-07-09T03:18:25.436Z · LW(p) · GW(p)

Given that these papers are consistently too long for journals, has MIRI considered deliberately producing shorter versions that can be published in journals?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-09T12:04:28.666Z · LW(p) · GW(p)

I'm under the impression that A Comparison of Decision Algorithms on Newcomblike Problems was written partially with the purpose of being a journal-length description of TDT, but I'm not sure whether it has actually been submitted anywhere.

comment by iceman · 2013-07-08T23:07:58.917Z · LW(p) · GW(p)

Did you mean to post this in Discussion? I would assume a MIRI blog post would be cross-posted to Main.

Replies from: lukeprog
comment by lukeprog · 2013-07-09T02:43:50.923Z · LW(p) · GW(p)

I think Discussion is the right place for this one, but thanks for checking.

comment by jamesbdunn · 2013-07-09T15:34:58.169Z · LW(p) · GW(p)

One basic problem with the AGI risk survey is that humans consistently represent AI as a collective function with recognizable features of human cognition. The problem with modeling AGI on human cognitive features is that our bodies and environments stabilize our evolving network systems. Our hormones, limb nervous systems, brain lobes, nutritional sources and their utilization, environmental interactions, socialization, breeding characteristics (genetic memory), and thousands of other recursive influences provide a constantly evolving system (humans) that we fail to recognize as having evolved over millions of years (an exa-scale number of systemic step events).

AI is intended to evolve over much shorter time spans and in a contained interactive environment that will likely lack those consistent ecological, social, and genetic selection pressures. Therefore, we cannot expect AGI to be recognizably human. Its thought processes will quickly evolve to optimize themselves for its own environment and life-entity features.

What will be common to ALL cognitive life? This is tough since humans are the only current reference.

If humans could have lived forever, would we have evolved socially and intellectually? For what reason?

For AGI, there must be a complex system of interacting systems that WILL guide and stimulate its evolution, or the AGI has no potential to evolve, AND to evolve synergistically with humanity (in ways we can understand, or even recognize). But given the non-biological foundations of AI, those systems are not related to human evolution.

An innate part of being human is empathy. Is this what allows us to be human and to survive? Can we hardwire empathy into AI so that it evolves consistently with its environment AND develops human-like features that our simple minds can recognize, regardless of the foundations of its construction?

Empathy is used both constructively and destructively by humans; I recognize that.

Just because humans are limited in their ability to consider broad implications does not mean AI will be as limited, nor as gifted (various forms of AI are intended to have restricted intellectual capabilities: soldiers, cognitive functions for specific purposes such as mining equipment, and so on).

Regulating AI development is not practical; we don't even regulate our own politicians (treason related to illegal allocations, i.e. intentionally weakening national security to illegally allocate national resources). Before we can regulate AI, monitoring must be established. Google search: eliminate all corruption.

So unless we develop broad universal monitoring (universities building and managing the NSA, for example), these discussions are pointless, because researchers will develop whatever catches their whim, including: "Let's see what happens when it sees an internet port?"

comment by lukstafi · 2013-07-08T17:01:37.516Z · LW(p) · GW(p)

I'm glad to see Mark Waser cited and discussed; I think he was omitted from an earlier draft, but I might misremember. ETA: I misremembered; I confused it with http://friendly-ai.com/faq.html, which has an explicitly narrower focus.