Desired articles on AI risk?

post by lukeprog · 2012-11-02T05:39:07.817Z · LW · GW · Legacy · 26 comments

Contents

  Forthcoming
  Desired
26 comments

I've once again updated my list of forthcoming and desired articles on AI risk, which currently names 17 forthcoming articles and books about AGI risk, and also names 26 desired articles that I wish researchers were currently writing.

But I'd like to hear your suggestions, too. Which articles not already on the list as "forthcoming" or "desired" would you most like to see written, on the subject of AGI risk?

Book/article titles reproduced below for convenience...

Forthcoming

Desired

26 comments

Comments sorted by top scores.

comment by Giles · 2012-11-02T15:19:12.975Z · LW(p) · GW(p)

"Why If Your AGI Doesn't Take Over The World, Somebody Else's Soon Will"

i.e. however good your safeguards are, it doesn't help if:

  • another team can take your source code and remove safeguards (and why they might have incentives to do so)
  • multiple discovery means that your AGI invention will soon be followed by 10 independent ones, at least one of which will lack the necessary safeguards

EDIT: "safeguard" here means any design feature put in to prevent the AGI from obtaining singleton status.

comment by Gedusa · 2012-11-02T15:44:56.184Z · LW(p) · GW(p)

Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)

"Hell Futures - When is it better to be extinct?" (not entirely serious)

Replies from: wedrifid
comment by wedrifid · 2012-11-03T00:22:26.799Z · LW(p) · GW(p)

"Hell Futures - When is it better to be extinct?" (not entirely serious)

Why (not serious)?

comment by Giles · 2012-11-02T17:58:37.947Z · LW(p) · GW(p)

I'd be interested to see a critique of Hanson's em world, but within the same general paradigm (i.e. not "that won't happen because intelligence explosion").

e.g.

  • ems would respect our property rights why exactly?
  • how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?

Replies from: DaFranker
comment by DaFranker · 2012-11-02T18:18:36.402Z · LW(p) · GW(p)

how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?

Yeah, I don't see how that assumption could last long.

Make me an upload, and suddenly you've got a bunch of copies learning a bunch of different things, and another bunch of copies experimenting and learning how to create diff patches for stable knowledge merging from multiple studying branch copies. It wouldn't be long before the trunk mind becomes a supergenius polyexpert, if not an outright general superintelligence, if it works.

That's just one random way things could go weird out of many others anyone could think of.

Replies from: Giles
comment by Giles · 2012-11-02T19:16:23.174Z · LW(p) · GW(p)

I think Hanson comes at this from the angle of "let's apply what's in our standard academic toolbox to this problem". I think there might be people who find this approach convincing who would skim over more speculative-sounding stuff, so I think that approach might be worth pursuing.

I really don't disagree with your analysis but I wonder which current academic discipline comes closest to being able to frame this kind of idea?

comment by amcknight · 2012-11-08T22:06:04.842Z · LW(p) · GW(p)

A Survey of Mathematical Ethics, which covers work in multiple disciplines. I'd love to know what parts of ethics have been formalized enough to be written mathematically and, for example, any impossibility results that have been shown.

Replies from: Caspar42, lukeprog
comment by Caspar Oesterheld (Caspar42) · 2016-05-06T22:50:46.219Z · LW(p) · GW(p)

Regarding impossibility results, there is now also Brian Tomasik's Three Types of Negative Utilitarianism.

There are also these two attempted formalizations of notions of welfare:

comment by lukeprog · 2012-11-17T09:47:32.321Z · LW(p) · GW(p)

impossibility results

Here's one.

comment by novalis · 2012-11-02T06:52:17.245Z · LW(p) · GW(p)

Something on logical uncertainty

Why the hard problems of AI (mainly, how to represent the world) are ever likely to be solved.

comment by SilasBarta · 2012-11-04T00:21:37.765Z · LW(p) · GW(p)

What are your requirements for the desired articles? Is it sufficient that, say, the respondent reads the abstracts of all relevant papers and then summarizes and cites them? If so, I can knock out a few of these soon.

comment by Manfred · 2012-11-02T08:26:31.594Z · LW(p) · GW(p)

"What Would AIXI Do With Infinite Computing Power and a Halting Oracle?"

This is a fun one, but even easier would be "What would AIXI do with Aladdin's genie?"

comment by jmmcd · 2012-11-04T18:43:28.693Z · LW(p) · GW(p)

"Experiments we could run today on a laptop which might tell us something about AI risk"

It might be very short, haha. But if not, it would be interesting!

comment by Bruno_Coelho · 2012-11-03T02:48:49.301Z · LW(p) · GW(p)

It seems that no one is working on papers about the convergence of values. On a scale of difficulty, the math problems seem to be the priority, but disagreement over values and preferences imposes a constraint on the implementation, more specifically on the "programmer writing code with black boxes" part.

comment by John_Maxwell (John_Maxwell_IV) · 2012-11-02T19:21:54.205Z · LW(p) · GW(p)

How does making LW posts about these compare to writing papers with a more academic focus?

comment by blogospheroid · 2012-11-02T16:01:32.065Z · LW(p) · GW(p)

What is the proportionality thesis in the context of Intelligence Explosion?

The one I googled says something about the worst punishments for the worst crimes.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-11-02T16:51:56.730Z · LW(p) · GW(p)

From David Chalmers' paper:

We might call this assumption a proportionality thesis: it holds that increases in intelligence (or increases of a certain sort) always lead to proportionate increases in the capacity to design intelligent systems. Perhaps the most promising way for an opponent to resist is to suggest that this thesis may fail. It might fail because there are upper limits in intelligence space, as with resistance to the last premise. It might fail because there are points of diminishing returns: perhaps beyond a certain point, a 10% increase in intelligence yields only a 5% increase at the next generation, which yields only a 2.5% increase at the next generation, and so on. It might fail because intelligence does not correlate well with design capacity: systems that are more intelligent need not be better designers. I will return to resistance of these sorts in section 4, under “structural obstacles”.
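
For concreteness (a minimal illustration, not part of Chalmers' text): in the diminishing-returns case above, the successive gains form a geometric series, so the cumulative improvement is bounded rather than explosive:

$$\prod_{n=0}^{\infty}\left(1 + 0.10 \cdot 2^{-n}\right) < \infty \quad \text{because} \quad \sum_{n=0}^{\infty} 0.10 \cdot 2^{-n} = 0.20.$$

In other words, intelligence would approach a finite ceiling a bit above the starting level instead of increasing without bound.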

Replies from: lukeprog
comment by lukeprog · 2012-11-02T17:45:29.312Z · LW(p) · GW(p)

Also note that Chalmers (2010) says that perhaps "the most promising way to resist" the argument for intelligence explosion is to suggest that the proportionality thesis may fail. Given this, Chalmers (2012) expresses "a mild disappointment" that of the 27 authors who commented on Chalmers (2010) for a special issue of Journal of Consciousness Studies, none focused on the proportionality thesis.

Replies from: blogospheroid
comment by blogospheroid · 2012-11-03T02:59:54.286Z · LW(p) · GW(p)

Thank you, Kaj and Luke! I am reading the singularity reply essay by Chalmers right now.

comment by negamuhia · 2012-11-04T14:57:46.855Z · LW(p) · GW(p)

"AI Will Be Maleficent By Default"

seems like a conclusion determined a priori (bad science, of the "I want this to be true" kind) rather than a research result (a good problem statement for AGI risk research). A better title would rephrase it as a research question:

"Will AI Be Maleficent By Default?"

Replies from: pengvado
comment by pengvado · 2012-11-05T19:16:19.780Z · LW(p) · GW(p)

If you've already done the research, and the wishlist entry is just for writing an article about it, then putting your existing conclusion in the title is fine.

comment by gwern · 2012-12-18T18:18:30.841Z · LW(p) · GW(p)

"Autonomous Technology and the Greater Human Good" by Steve Omohundro

Summary slides: http://selfawaresystems.files.wordpress.com/2012/12/autonomous-technology-and-the-greater-human-good.pdf

comment by vallinder · 2012-11-08T17:47:55.568Z · LW(p) · GW(p)

"What Would AIXI Do With Infinite Computing Power and a Halting Oracle?"

Is this problem well-posed? Doesn't the answer depend completely on the reward function?
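
For reference, here is a schematic form of AIXI's action rule, following Hutter's formulation rather than anything stated in this thread (treat the notation as an illustrative assumption): with horizon m, AIXI chooses

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where U is a universal monotone Turing machine and \ell(q) is the length of program q. The reward sequence r_k, ..., r_m enters the expression directly, so the agent's behavior does indeed depend on how rewards are defined, even with infinite computing power and a halting oracle.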

comment by diegocaleiro · 2012-11-05T12:34:21.567Z · LW(p) · GW(p)

I'd like to hear further commentary on "Stable Attractors of Technologically Advanced Civilizations" for a few reasons: 1) if it relates to cultural evolution, it relates to my master's; 2) it seems like a sociology/memetics problem, areas I studied for a while; 3) whoever would like to tackle it, I'd like to cooperate.

comment by DaFranker · 2012-11-02T14:38:29.324Z · LW(p) · GW(p)

I wish I were confident enough in my skills, knowledge and rationality to actually work on some of these papers. "Self-Modification and Löb's Theorem" is exactly what comes up when I query my brain for "awesome time spent in a cave dedicated entirely to solving X". All the delicious mind-bending recursion.

Hopefully in a few years I'll look back on this and chuckle. "Hah, to think that back then I thought of simple things like self-modification and Löb's Theorem as challenges! How much stronger I've become, now."

More on topic, I feel like there's a need for something addressing the specific AGI vs. specialized AI questions/issues. Among the things that come to mind: why a self-modifying "specialized" AI will, given enough time and computing power, just end up becoming a broken paperclip-maximizing AGI anyway, even if it's "grown" from weak machine learning algorithms.
