Future of Moral Machines - New York Times [link]
post by Dr_Manhattan · 2011-12-26T14:44:01.763Z · LW · GW · Legacy · 10 comments
http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/
10 comments
Comments sorted by top scores.
comment by Thomas · 2011-12-26T15:14:32.182Z · LW(p) · GW(p)
Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process.
What else would it be? Apart from the divine origin of thoughts, nothing has been offered as an alternative so far.
Replies from: TheOtherDave, Manfred
↑ comment by TheOtherDave · 2011-12-26T16:00:20.168Z · LW(p) · GW(p)
I distrust "what else would it be"-style arguments; they are ultimately appeals to inadequate imagination.
Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence; if intelligence weren't fundamentally a computational process it would have to fundamentally be something we don't yet understand.
Just to be clear, I'm not challenging the conclusion; given the sorts of things that intelligence does, and the sorts of things that computations do, that intelligence is a form of computation seems pretty likely to me. What I'm pushing back on is the impulse to play burden-of-proof tennis with questions like this, rather than accepting the burden of proof and trying to meet it.
Replies from: billswift
↑ comment by billswift · 2011-12-27T04:59:32.748Z · LW(p) · GW(p)
I can imagine a great many other things it could be, but in the real world people have to go by the evidential support. Your post is just a variation of the "argument from ignorance", as in "We don't know in detail what intelligence is, so it could be something else", even though you admit "Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence".
↑ comment by Manfred · 2011-12-26T17:24:45.740Z · LW(p) · GW(p)
Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked. The assumption is more that intelligence is not inherently mysterious, and that humans are not at some special, perfect point of intelligence.
Replies from: MileyCyrus
↑ comment by MileyCyrus · 2011-12-27T05:24:50.354Z · LW(p) · GW(p)
Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked
You can build a computer out of pretty much anything, including rubber bands.
comment by orthonormal · 2011-12-26T15:41:12.630Z · LW(p) · GW(p)
Has anyone read the book that the article was a self-promotion for? (I have mediocre expectations, given the article; but mediocre would be an improvement in high-status treatment of the issue.)
Replies from: Dr_Manhattan
↑ comment by Dr_Manhattan · 2011-12-27T13:26:44.043Z · LW(p) · GW(p)
"try before you buy" link
comment by Sush · 2011-12-27T17:18:57.100Z · LW(p) · GW(p)
On the subject of morality in robots, I would assume that when (if?) we devise a working cognitive model of an AI that is indistinguishable from a human in every observable circumstance, the chances of it developing or learning sociopathic behaviour would be no different from those of a human developing psychopathic tendencies (which, although I can provide no scientific proof, I imagine happens only in a minority of cases).
I know this is an abstraction that doesn't do justice to the work people are doing towards this model, but I think the complexities of AI are one of the things that lead certain people to the knee-jerk reaction that all post-singularity AIs will want to exterminate the human race (possessing a phobia because you don't understand something, etc.).
comment by Emile · 2011-12-26T17:09:58.741Z · LW(p) · GW(p)
The linked Department of Defense report, "Autonomous Military Robotics: Risk, Ethics, and Design", looks interesting (it doesn't seem to have been linked here before, though it's from 2008). I'll check it out.
Edit: I skimmed through the bits that looked interesting; there's an off-hand reference to "friendliness theory", but the difficult bits of getting a machine to have a correct morality seem glossed over (justified by the claim that these are supposed to be special-purpose robots with a definite mission and orders to obey, not AGIs - though some of the stuff they describe sounds "AI hard" to me). There's some mention of robots building other robots and running amok among the risks, and some references to Kurzweil.
comment by Dr_Manhattan · 2011-12-26T15:59:37.863Z · LW(p) · GW(p)
On the plus side for the article:
- Discussion of AI ethics in a major newspaper (we'll get out of the crank file any day now)
- Some good bridging of the inferential distance via discussion of physical robot interactions (self-driving cars, etc.)