Analogy to the Heisenberg Uncertainty Principle for Powerful AI?
post by demented · 2012-05-28T13:07:03.247Z · LW · GW · Legacy · 7 comments
What do you think? There might be a theoretical limit on how much data an AI could collect without influencing the data itself, thereby invalidating its own predictions. Would this negate the idea of a 'God' AI and cause it to make suboptimal choices even with near-limitless processing power?
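One way to make the worry concrete (a toy sketch of my own, with arbitrary numbers, not a model of any real AI): simulate a predictor whose measurements perturb the very quantity it is trying to track, and see how prediction error behaves as the measurement gets more aggressive.

```python
import random

# Toy illustration (hypothetical numbers, not from the post): a predictor
# observes a drifting quantity, but each observation also nudges that
# quantity by a "back-action" term proportional to measurement strength.
# Harder measurement gives cleaner readings but perturbs the system more,
# so past some point the prediction error rises instead of falling.

def simulate(measurement_strength, steps=1000, seed=0):
    rng = random.Random(seed)
    true_value = 0.0
    estimate = 0.0
    total_error = 0.0
    for _ in range(steps):
        # The system drifts on its own.
        true_value += rng.gauss(0, 0.1)
        # Observing it perturbs it in proportion to how hard we look.
        true_value += rng.gauss(0, measurement_strength)
        # Noisy reading; reading noise shrinks as measurement strength grows.
        reading = true_value + rng.gauss(0, 0.5 / (1 + measurement_strength))
        # Simple exponential smoother as the "prediction".
        estimate = 0.5 * estimate + 0.5 * reading
        total_error += (estimate - true_value) ** 2
    return total_error / steps

for strength in (0.0, 0.1, 0.5, 1.0, 2.0):
    print(f"measurement strength {strength:.1f}: "
          f"mean squared error {simulate(strength):.3f}")
```

In this toy setup the error dips slightly for gentle measurement and then climbs as back-action dominates, which is the flavour of limit the question is gesturing at; whether anything analogous binds a real predictor is exactly what would need an argument.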
7 comments
Comments sorted by top scores.
comment by Paul Crowley (ciphergoth) · 2012-05-28T13:53:14.088Z · LW(p) · GW(p)
Please don't just have an idea that would be cool and interesting if true, and post it to discussion. The ideas worth knowing about have at least some sort of argument that lends them plausibility.
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-05-28T19:15:00.530Z · LW(p) · GW(p)
+1
I have a theory that humans have a "good idea" detector - a black box they feed ideas into and get back "excellent!" or not. I surmise that this operates largely on its own, hence the effect where you have a brilliant idea in the middle of the night, write it down and wake up in the morning to realise it's rubbish.
Though I also think you can tune your brilliant idea detector, which may count against this theory.
But anyway, this theory suggests that the way to use its output is not merely to have the idea and find it appealing, but to then think about what it would imply and not imply. This will make for better discussion posts, particularly with a tough audience like this.
So, Demented - what would your idea imply and what would it not imply?
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-05-28T19:35:06.009Z · LW(p) · GW(p)
Thank you - but that's the wrong question to ask first! The first question is "why on Earth would you think that? What brings this hypothesis to your attention?"
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-05-30T20:36:37.791Z · LW(p) · GW(p)
Ah, I disagree - that's only the first question to ask if the person is used to having ideas. My human-simulator tells me that posts like this come from people who aren't used to having ideas - certainly not enough to throw them away - so they are enormously taken with any they do come up with (cf. Draco in HPMOR ch. 78) and are not used to the idea of robustness-testing their ideas at all - they're still in the mode of thinking of them as aesthetic creations, and will not let go of them easily. Once they've got used to coming up with a few, they will then benefit from seeing what presumptions the ideas are coming from.
My aim here is to get them used to thinking of themselves as someone who can come up with ideas, rather than thinking that's a job for other people. (Noting there are cases where one really wishes they would leave coming up with ideas to other people, and encouraging the young always has its moments of gritting one's teeth at the same beginner's stupidity yet again.) Once they're spewing out terrible beginner's ideas in disposable quantities, that's the time to start some stringent culling mechanisms.
Again, the above is out of my human simulator, but I've spent some time on the difficult task of encouraging people to think of themselves as the sort of people who can do something rather than just the consumers of others' output, so I have slight experience of practical encouragement.
comment by buybuydandavis · 2012-05-28T22:18:44.092Z · LW(p) · GW(p)
If you're really interested, I have a vague recollection of a paper by David Wolpert that feels similar. See his CV below. The physics and computation section should be of particular interest.
http://www.santafe.edu/media/staff_cvs/3cv.complete.fall.2010.pdf
An interesting fact from his CV:
Top two winners of 2009 Netflix competition made extensive use of my patented Stacked Generalization technique.
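For anyone unfamiliar with the technique named there, here is a minimal sketch of stacked generalization (a toy scikit-learn example of my own, not the Netflix teams' actual pipeline): level-0 models produce out-of-fold predictions, and a level-1 model learns how to combine them.

```python
# Toy sketch of stacked generalization, assuming scikit-learn is available.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=5, random_state=0)]

# Level 0: out-of-fold predictions, so the combiner never sees predictions
# made on data the base model was fitted to.
meta_features = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5) for m in base_models
])

# Level 1: a simple combiner trained on the level-0 outputs.
combiner = LinearRegression().fit(meta_features, y_train)

# At test time, refit the base models on all training data and stack their outputs.
test_features = np.column_stack([
    m.fit(X_train, y_train).predict(X_test) for m in base_models
])
print("stacked R^2 on held-out data:", combiner.score(test_features, y_test))
```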
comment by [deleted] · 2012-05-28T16:54:34.322Z · LW(p) · GW(p)
Such a limit might exist. Are you going to hit it at any non-god level of optimization power? No.