Raw silicon ore of perfect emptiness
post by Pavitra · 2011-05-30T00:54:48.882Z
Does building a computer count as explaining something to a rock?
(If we still had open threads, I would have posted this there. As it is, I figure this is better than not saying anything.)
6 comments
comment by cousin_it · 2011-05-30T08:10:36.676Z
Does killing you and building a computer out of your atoms count as explaining something to you?
↑ comment by Paul Crowley (ciphergoth) · 2011-05-30T08:37:38.525Z
Or killing you and writing a message with your guts?
comment by XiXiDu · 2011-05-30T09:01:16.087Z
I'll ask the question so many people here hate to be asked: why is this being downvoted?
What does it mean to explain something, and what does it mean to understand something?
If you explain something to a human being, you actively alter the chemical and atomic configuration of that human. The sort of change caused by nurture becomes even more apparent if you consider how explanations can alter the course of a human life. If you explain the benefits of daily physical exercise to a teenager, it can dramatically change the bodily, including neurological, makeup of that human being.
Where do you draw the line? If you "explain" to an AI how to self-improve, the outcome will be a more dramatic change than that of a rock being converted into a Turing machine.
What would happen to a human being trying to understand as much as possible over the next 10^50 years? The upper limit on the information that can be contained within a human body is 2.5072178×10^38 megabytes, if it were used as perfect data storage. Consequently, any human living for a really long time would have to turn "rock" into memory in order to keep understanding more.
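(For reference, a plausible reconstruction of where a figure of that order of magnitude comes from, assuming it is the Bekenstein bound applied to a human body; the mass of roughly 70 kg and enclosing radius of roughly 1.1 m are my assumptions, not part of the original figure:

I \le \frac{2\pi c M R}{\hbar \ln 2} \approx 2.58\times10^{43}\,\tfrac{\text{bits}}{\text{kg}\cdot\text{m}} \times 70\,\text{kg} \times 1.1\,\text{m} \approx 2.0\times10^{45}\ \text{bits} \approx 2.5\times10^{38}\ \text{megabytes}

Any finite bound of this kind implies that accumulating understanding without limit eventually requires converting new matter into memory.)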
Similarly, imagine I were able to use a brain-computer interface to integrate external memory storage into your long-term memory. If this external memory contains certain patterns that you could use to recognize new objects, why wouldn't that count as an explanation of what constitutes those objects?
ETA
Before someone misinterprets what I mean: explaining friendliness to an AI obviously isn't enough to make it behave in a friendly way. But one can explain friendliness, even to a paperclip maximizer, by implementing it directly or by writing it down on paper and making the AI read it. Of course, that doesn't automatically make it part of its utility function.