Desired articles on AI risk?
post by lukeprog · 2012-11-02T05:39:07.817Z · LW · GW · Legacy · 26 comments
I've once again updated my list of forthcoming and desired articles on AI risk, which currently names 17 forthcoming articles and books about AGI risk, and also names 26 desired articles that I wish researchers were currently writing.
But I'd like to hear your suggestions, too. Which articles not already on the list as "forthcoming" or "desired" would you most like to see written, on the subject of AGI risk?
Book/article titles reproduced below for convenience...
Forthcoming
- Superintelligence: Groundwork for a Strategic Analysis by Nick Bostrom
- Singularity Hypotheses, edited by Amnon Eden et al.
- Singularity Hypotheses, Vol. 2, edited by Vic Callaghan
- "General Purpose Intelligence: Arguing the Orthogonality Thesis" by Stuart Armstrong
- "Responses to AGI Risk" by Kaj Sotala et al.
- "How we're predicting AI... or failing to" by Stuart Armstrong & Kaj Sotala
- "A Comparison of Decision Algorithms on Newcomblike Problems" by Alex Altair
- "A Representation Theorem for Decisions about Causal Models" by Daniel Dewey
- "Reward Function Integrity in Artificially Intelligent Systems" by Roman Yampolskiy
- "Bounding the impact of AGI" by Andras Kornai
- "Minimizing Risks in Developing Artificial General Intelligence" by Ted Goertzel
- "Limitations and Risks of Machine Ethics" by Miles Brundage
- "Universal empathy and ethical bias for artificial general intelligence" by Alexey Potapov & Sergey Rodiono
- "Could we use untrustworthy human brain emulations to make trustworthy ones?" by Carl Shulman
- "Ethics and Impact of Brain Emulations" by Anders Sandberg
- "Envisioning The Economy, and Society, of Whole Brain Emulations" by Robin Hanson
- "Autonomous Technology and the Greater Human Good" by Steve Omohundro
Desired
- "AI Risk Reduction: Key Strategic Questions"
- "Predicting Machine Superintelligence"
- "Self-Modification and Löb's Theorem"
- "Solomonoff Induction and Second-Order Logic"
- "The Challenge of Preference Extraction"
- "Value Extrapolation"
- "Losses in Hardscrabble Hell"
- "Will Values Converge?"
- "AI Takeoff Scenarios"
- "AI Will Be Maleficent by Default"
- "Biases in AI Research"
- "Catastrophic Risks and Existential Risks"
- "Uncertainty and Decision Theories"
- "Intelligence Explosion: The Proportionality Thesis"
- "Hazards from Large Scale Computation"
- "Tool Oracles for Safe AI Development"
- "Stable Attractors for Technologically Advanced Civilizations"
- "AI Risk: Private Projects vs. Government Projects"
- "Why AI researchers will fail to hit the narrow target of desirable AI goal systems"
- "When will whole brain emulation be possible?"
- "Is it desirable to accelerate progress toward whole brain emulation?"
- "Awareness of nanotechnology risks: Lessons for AI risk mitigation"
- "AI and Physical Effects"
- "Moore's Law of Mad Science"
- "What Would AIXI Do With Infinite Computing Power and a Halting Oracle?"
- "AI Capability vs. AI Safety"
26 comments
comment by Giles · 2012-11-02T15:19:12.975Z · LW(p) · GW(p)
"Why If Your AGI Doesn't Take Over The World, Somebody Else's Soon Will"
i.e. however good your safeguards are, it doesn't help if:
- another team can take your source code and remove safeguards (and why they might have incentives to do so)
- multiple discovery means that your AGI invention will soon be followed by 10 independent ones, at least one of which will lack the necessary safeguards
EDIT: "safeguard" here means any design feature put in to prevent the AGI obtaining singleton status.
comment by Giles · 2012-11-02T17:58:37.947Z · LW(p) · GW(p)
I'd be interested to see a critique of Hanson's em world, but within the same general paradigm (i.e. not "that won't happen because intelligence explosion").
e.g.
- ems would respect our property rights why exactly?
- how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?
↑ comment by DaFranker · 2012-11-02T18:18:36.402Z · LW(p) · GW(p)
how useful is the analysis, given that the "ems behave just like fast copyable humans" assumption probably won't be valid for long?
Yeah, I don't see how that assumption could last long.
Make me an upload, and suddenly you've got a bunch of copies learning a bunch of different things, and another bunch of copies experimenting and learning how to create diff patches for stable knowledge merging from multiple studying branch copies. It wouldn't be long before the trunk mind becomes a supergenius polyexpert, if not an outright general superintelligence, if it works.
That's just one random way things could go weird, out of many others anyone could think of.
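As a purely illustrative aside (mine, not DaFranker's): the branch-and-merge scheme being gestured at can be reduced to a toy model in which "knowledge" is just a set of items and a "diff patch" is a set difference against the trunk; every name below is hypothetical:

```python
# Toy model of the branch/merge scenario: branch copies study different
# topics, each produces a patch (what it learned beyond the trunk), and
# the trunk merges the patches. The merge step is the part whose
# stability is the actual hard, unsolved problem in the scenario.

trunk = {"baseline skills"}

def run_branch(trunk_knowledge, new_material):
    """A branch copy starts from the trunk and studies something new."""
    return trunk_knowledge | set(new_material)

branches = [
    run_branch(trunk, ["molecular biology"]),
    run_branch(trunk, ["chip design"]),
    run_branch(trunk, ["negotiation"]),
]

patches = [branch - trunk for branch in branches]
for patch in patches:
    trunk |= patch

print(sorted(trunk))
```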
↑ comment by Giles · 2012-11-02T19:16:23.174Z · LW(p) · GW(p)
I think Hanson comes at this from the angle of "let's apply what's in our standard academic toolbox to this problem". There might be people who find this approach convincing but would skim over more speculative-sounding stuff, so that approach might be worth pursuing.
I really don't disagree with your analysis, but I wonder which current academic discipline comes closest to being able to frame this kind of idea.
comment by amcknight · 2012-11-08T22:06:04.842Z · LW(p) · GW(p)
A "Survey of Mathematical Ethics" that covers work in multiple disciplines. I'd love to know which parts of ethics have been formalized enough to be written mathematically and, for example, what impossibility results have been shown.
↑ comment by Caspar Oesterheld (Caspar42) · 2016-05-06T22:50:46.219Z · LW(p) · GW(p)
Regarding impossibility results, there is now also Brian Tomasik's Three Types of Negative Utilitarianism.
There are also these two attempted formalizations of notions of welfare:
- Daswani and Leike (2015): A Definition of Happiness for Reinforcement Learning Agents.
- Formalizing preference utilitarianism in physical world models, which I have written.
comment by SilasBarta · 2012-11-04T00:21:37.765Z · LW(p) · GW(p)
What are your requirements for the desired articles? Is it sufficient that, say, the respondent reads the abstracts of all relevant papers and then summarizes and cites them? If so, I can knock out a few of these soon.
comment by Bruno_Coelho · 2012-11-03T02:48:49.301Z · LW(p) · GW(p)
It seems that no one is working on papers about the convergence of values. On a scale of difficulty, the math problems seem to be the priority, but disagreement over values and preferences imposes a constraint on the implementation. More specifically, on the "programmer writing code with black boxes" part.
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-02T19:21:54.205Z · LW(p) · GW(p)
How does making LW posts about these compare to writing papers with a more academic focus?
comment by blogospheroid · 2012-11-02T16:01:32.065Z · LW(p) · GW(p)
What is the proportionality thesis in the context of Intelligence Explosion?
The one I googled says something about the worst punishments for the worst crimes.
↑ comment by Kaj_Sotala · 2012-11-02T16:51:56.730Z · LW(p) · GW(p)
From David Chalmers' paper:
We might call this assumption a proportionality thesis: it holds that increases in intelligence (or increases of a certain sort) always lead to proportionate increases in the capacity to design intelligent systems. Perhaps the most promising way for an opponent to resist is to suggest that this thesis may fail. It might fail because there are upper limits in intelligence space, as with resistance to the last premise. It might fail because there are points of diminishing returns: perhaps beyond a certain point, a 10% increase in intelligence yields only a 5% increase at the next generation, which yields only a 2.5% increase at the next generation, and so on. It might fail because intelligence does not correlate well with design capacity: systems that are more intelligent need not be better designers. I will return to resistance of these sorts in section 4, under "structural obstacles".
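To make the diminishing-returns case concrete, here is a small sketch I am adding for illustration (the function, parameters, and numbers are mine, not from Chalmers or the thread); it only compares the two growth patterns numerically, without modeling intelligence itself:

```python
# Toy comparison of Chalmers' two cases: per-generation gains that stay
# proportionate versus gains that halve each generation (10%, 5%, 2.5%, ...).

def total_growth(initial_gain, decay, generations):
    """Compound per-generation gains of the form (1 + g), where g is
    multiplied by `decay` after every generation."""
    level, gain = 1.0, initial_gain
    for _ in range(generations):
        level *= 1.0 + gain
        gain *= decay  # decay = 1.0 corresponds to the proportionality thesis
    return level

print(total_growth(0.10, 1.0, 50))  # proportionate gains: ~117x after 50 generations
print(total_growth(0.10, 0.5, 50))  # halving gains: ~1.21x, effectively a plateau
```

The first case compounds without bound, which is what the intelligence-explosion argument needs; the second converges, which is one way the proportionality thesis could fail.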
↑ comment by lukeprog · 2012-11-02T17:45:29.312Z · LW(p) · GW(p)
Also note that Chalmers (2010) says that perhaps "the most promising way to resist" the argument for intelligence explosion is to suggest that the proportionality thesis may fail. Given this, Chalmers (2012) expresses "a mild disappointment" that of the 27 authors who commented on Chalmers (2010) for a special issue of Journal of Consciousness Studies, none focused on the proportionality thesis.
↑ comment by blogospheroid · 2012-11-03T02:59:54.286Z · LW(p) · GW(p)
Thank you, Kaj and Luke! I am reading Chalmers' reply essay on the singularity right now.
comment by negamuhia · 2012-11-04T14:57:46.855Z · LW(p) · GW(p)
"AI Will Be Maleficent By Default"
seems like an a priori predetermined conclusion (bad science, of the "I want this to be true" kind) rather than a good problem statement for AGI risk research. A better title would be phrased as a research question:
"Will AI Be Maleficent By Default?"
comment by gwern · 2012-12-18T18:18:30.841Z · LW(p) · GW(p)
"Autonomous Technology and the Greater Human Good" by Steve Omohundro
Summary slides: http://selfawaresystems.files.wordpress.com/2012/12/autonomous-technology-and-the-greater-human-good.pdf
comment by diegocaleiro · 2012-11-05T12:34:21.567Z · LW(p) · GW(p)
I'd like to hear further commentary on "Stable Attractors of Technologically Advanced Civilizations", for a few reasons: 1) if it relates to cultural evolution, it relates to my master's; 2) it seems like a sociology/memetics problem, areas I studied for a while; 3) whoever would like to tackle it, I'd like to cooperate.
comment by DaFranker · 2012-11-02T14:38:29.324Z · LW(p) · GW(p)
I wish I were confident enough in my skills, knowledge and rationality to actually work on some of these papers. "Self-Modification and Löb's Theorem" is exactly what comes up when I query my brain for "awesome time spent in a cave dedicated entirely to solving X". All the delicious mind-bending recursion.
Hopefully in a few years I'll look back on this and chuckle. "Hah, to think that back then I thought of simple things like self-modification and Löb's Theorem as challenges! How much stronger I've become, now."
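For readers who haven't met the theorem the comment names, here is its standard statement (my addition, not part of the comment), with $\Box_{\mathrm{PA}} P$ abbreviating "$P$ is provable in PA":

$$\text{If } \mathrm{PA} \vdash \Box_{\mathrm{PA}} P \rightarrow P, \text{ then } \mathrm{PA} \vdash P.$$

The usual connection to self-modification is that an agent reasoning in a fixed proof system cannot simply adopt "whatever my (or my successor's) proofs establish is true" as a blanket principle: by Löb's theorem, a system that proved that schema for every $P$ would prove every $P$.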
More on topic, I feel like there's a need for something addressing the specific AGI vs. specialized AI questions and issues. Among the things that come to mind: why a self-modifying "specialized" AI will, given enough time and computing power, just end up becoming a broken paperclip-maximizing AGI anyway, even if it's "grown" from weak machine learning algorithms.