LINK: Quora brainstorms strategies for containing AI risk
post by Mass_Driver · 2016-05-26T16:32:02.304Z · LW · GW · Legacy
In case you haven't seen it yet, Quora hosted an interesting discussion of different strategies for containing or mitigating AI risk, sweetened by a $500 prize for the best answer. It attracted sci-fi author David Brin, U. Michigan professor Igor Markov, and several people with PhDs in machine learning, neuroscience, or artificial intelligence. Most people from LessWrong will disagree with most of the answers, but I think the discussion is useful as a quick overview of the variety of opinions that ordinary smart people have about AI risk.
1 comment
comment by Viliam · 2016-05-30T11:53:57.880Z · LW(p) · GW(p)
Looking at the answers...
We need multiple AIs of roughly equal power, so they can keep each other from becoming too dangerous.
Here are some fictional examples of x-risk, including Vogons and Cthulhu. That said, to control AI we need something like Asimov's laws, only better.
Something like inverse reinforcement learning, but more research is required.
The AI must care about consensus. Just like democracy. Or the human brain.
The main danger from technological change is economic. On the technical side, the answer is deep learning.
The previous answers are all wrong, read Yudkowsky to understand why. The solution is to interface human brains with computers, and evolve into cyborgs, ultimately leaving our original bodies behind.
The AI needs both logic and emotions. We can make it safe by giving it limited computing power.
The AI needs emotional intelligence. It needs to love and to be loved. Also, I had a chihuahua once.
We need to educate the AI just like we would educate an autistic child.
We should treat the AI as a foreign visitor, with hospitality. Here is a quote about foreigners by Derrida.
Each AI should have a unique identifier and a kill switch that the government can use to shut it down.
Make the AI police itself. Build it in space, so it doesn't compete for energy with humans. Don't let it self-modify.
I am an AI researcher, and you are just projecting your human fears onto computers. The real risk is autonomous weapon systems, but that has nothing to do with computers becoming self-aware.
We should build a hierarchy of AIs that will police its rogue members.
Tool AI.
We need high-quality research, development, and testing; redundant protections against system failure; and keeping engineers from sabotaging the project.
Read Superintelligence by Nick Bostrom.
The AI should consist of multiple subagents, each with a limited lifespan, trading information with each other. Any part showing signs of exponential growth will be removed automatically. Humans will be able to override the system at any time.
etc.