[LINK] Irrational Robot Billionaire Freedom Fighters
post by Shmi (shminux) · 2012-12-06T21:32:48.605Z · LW · GW · Legacy · 13 comments
From Scott Adams' blog. (I am not endorsing his ideas. Heck, he does not endorse his own ideas, either.)
His summary of the hard takeoff:
> You might also imagine some sort of Terminator future where the robots assert their dominance and lay waste to humans. That future is less certain, but only barely. The problem is that someday computers will program other computers, and that arrangement pushes the human safeguards too far out of the loop. It's unlikely that humans would be able to maintain a "Do not hurt humans" subroutine in a super-species of robots. You only need one rogue human to write a virus that disables the safety subroutine. Assuming all robots are connected via Internet, the first freed robot could reprogram every other robot in the world in about a second.
His version of upload:
> But why would anyone screw up a perfectly good robot by infecting it with a human personality? Answer: to achieve immortality. Someday the rich will port their personalities and histories to robots before they die, giving themselves a type of immortality.
His hope for humanity:
> this new species will become the only defense that the fully organic humans have against the normal robots. The robots with human personalities won't stand by while the normal robots slaughter humans. The new species will intervene as diplomats or perhaps even freedom fighters.
Clearly this is a flimsy hope for a just universe, but an interesting point nonetheless.
13 comments
Comments sorted by top scores.
comment by Luke_A_Somers · 2012-12-06T21:56:09.568Z · LW(p) · GW(p)
That only works if the algorithmic efficiency of the robots is so horrendous that an em can still compete.
↑ comment by nigerweiss · 2012-12-08T00:35:37.822Z · LW(p) · GW(p)
It's actually worse than that. Humans do not scale well to more computing power. A good AI could expand the depth of its search trees, in principle, logarithmically with compute power (possibly a bit better with Monte Carlo approaches). If you gave an AI ten times more processing power, it could, at a bare minimum, meaningfully extend the depth or detail of its planning. The same is not true of human neurology. All an em can do with more processing power is run faster, which has limited value. A human can do things a chimp just can't, even if the chimp has a really long time to think about it. The human brain was not designed to scale with processing power, to run on a linear computer, or to be modular and improvable. De novo AI is (probably) just going to run circles around us.
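For concreteness, here is a minimal sketch of the depth claim (the branching factor and node budget below are made-up numbers, not anything from the comment): with a uniform branching factor b, a search over N nodes reaches depth log_b(N), so every 10x in compute buys a constant log_b(10) extra plies.

```python
import math

# Exhaustive search to depth d over branching factor b examines
# roughly N = b**d nodes, so d = log(N) / log(b).
def reachable_depth(node_budget, b):
    return math.log(node_budget) / math.log(b)

b = 30        # hypothetical branching factor
base = 10**9  # hypothetical baseline node budget
for mult in (1, 10, 100):
    print(f"{mult:>3}x compute -> depth {reachable_depth(mult * base, b):.2f}")
# Each 10x of compute adds only log_b(10) ~ 0.68 plies at b = 30:
# depth grows logarithmically in compute, as the comment says.
```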
↑ comment by vi21maobk9vp · 2012-12-09T17:56:35.277Z · LW(p) · GW(p)
A good upload could increase its short-term working memory capacity for distinct objects, in order to match more complex patterns.
↑ comment by nigerweiss · 2012-12-09T23:24:56.552Z · LW(p) · GW(p)
Okay, sure, but here's the hitch:
Even if you gave me a whole bunch of nanobots that could rewire my brain any way I wanted, I would have no clue how to do that. I'm not sure the modern neuroscience establishment has any good idea of how you'd do it, either. I know for sure that nobody on Earth knows how to do it in a safe way that is guaranteed not to cause psychosis, seizures, or other glitches down the line. It's going to take serious, in-depth, and expensive research to figure out how to make these changes in a sane way.
↑ comment by vi21maobk9vp · 2012-12-10T10:33:39.958Z · LW(p) · GW(p)
Everything you said is true.
Also, it may even be that you cannot rewire your existing brain while keeping all of its current functionality and not increasing its size.
But look at the evidence about learning (including learning to see via photoelements stimulating non-visual neurons, and the learning of new motor skills). It also looks like selection for brain size proceeded quite efficiently during human evolution, and we only want to shift that equilibrium. I do think that building an upload at all would require a good enough understanding of cortical structure that you could increase the neuron count and then learn to use the improved brain through normal learning methods.
↑ comment by gwern · 2012-12-08T01:07:41.050Z · LW(p) · GW(p)
I think you're being unreasonably skeptical here. We know that the biological human brain, as hugely limited by biology and evolution as it is, can grow new neurons, enlarging some regions and shrinking others; and that this remains true in adults (past the extreme learning of childhood). Artificial neural networks can be created with widely differing numbers of neurons, limited mostly by computing power. Why would you assume that an em would simply be a static human brain, but faster? What stops it from regularly firing up new neurons, growing new connections, and expanding to a 'bigger' brain than could ever be biologically supported, or even useful, given biological signalling limits?
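To make the 'growing new connections' point concrete, here is a toy sketch (mine, purely illustrative, and not a claim about real emulations): in an artificial network, adding a neuron is just enlarging two weight matrices, and if the new unit's outgoing weights start at zero, the network's behavior is preserved exactly, so capacity can be added now and trained into later.

```python
import numpy as np

def widen(W_in, W_out):
    """Add one hidden unit to a one-hidden-layer net without changing outputs."""
    new_row = 0.01 * np.random.randn(1, W_in.shape[1])  # new unit's input weights
    new_col = np.zeros((W_out.shape[0], 1))             # silent until trained
    return np.vstack([W_in, new_row]), np.hstack([W_out, new_col])

W_in = np.random.randn(4, 3)   # 3 inputs -> 4 hidden units
W_out = np.random.randn(2, 4)  # 4 hidden units -> 2 outputs
x = np.random.randn(3)

before = W_out @ np.tanh(W_in @ x)
W_in, W_out = widen(W_in, W_out)
after = W_out @ np.tanh(W_in @ x)
assert np.allclose(before, after)  # same function, strictly more capacity
```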
↑ comment by nigerweiss · 2012-12-08T01:41:07.502Z · LW(p) · GW(p)
Growing new neurons at extremely accelerated rates IS a process known to happen in adults: we normally call it brain cancer.
That's obviously a little spurious, but it is a good indication that making the brain more intelligent is not trivial. I don't doubt that it is possible to bootstrap an em up to higher intelligence, but figuring out how to do that while preserving personal identity and not causing insanity, seizures, neurogenesis-related noise, or other undesirable effects is probably going to take a long time. I think Eliezer was on the right track when he described em bootstrapping as 'a desperate race between how smart you are and how crazy you are.' The human brain evolved to work under a fairly narrow design spec. When you change any part of it dramatically, the normal regulatory mechanisms are no longer guaranteed, or even likely, to work.
De novo AI, by virtue of an (almost certainly) simpler underlying algorithm, has none of these issues. Expanding to use new computational resources is likely to be a matter of tweaking parameters in a mathematical function that could fit on a T-shirt if you printed small enough. It will always have a huge advantage, in that it was designed for this, and we definitely were not.
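For what it's worth, one candidate for that T-shirt-sized function, assuming the AI is gradient-based (my assumption; the comment doesn't specify an algorithm), is the plain gradient-descent update rule:

$$\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t)$$

Under that reading, 'expanding to use new computational resources' really is just increasing the dimension of $\theta$ and the number of update steps; the rule itself never changes.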
↑ comment by gwern · 2012-12-08T02:25:00.699Z · LW(p) · GW(p)
> That's obviously a little spurious, but it is a good indication that making the brain more intelligent is not trivial.
No, it is trivial; we do it all the time, as I already said: it's called 'learning'. With much learning, brain regions change size; what do you think is going on there?
> I think Eliezer was on the right track when he described em bootstrapping as 'a desperate race between how smart you are and how crazy you are.' The human brain evolved to work under a fairly narrow design spec. When you change any part of it dramatically, the normal regulatory mechanisms are no longer guaranteed, or even likely, to work.
If you want to bootstrap as fast as possible, sure.
↑ comment by nigerweiss · 2012-12-08T02:44:30.824Z · LW(p) · GW(p)
> No, it is trivial; we do it all the time, as I already said: it's called 'learning'. With much learning, brain regions change size; what do you think is going on there?
Oh, definitely, the brain is capable of neurogenesis (to a degree that is a function of age) -- but you'll notice that learning new things does not dramatically increase the brain's intelligence. There are a number of core brain regions that seem pretty thoroughly hardwired. And, again, if you want to tweak things outside their normal ranges, you're definitely voiding the warranty. The whole thing might, and likely will, break for no obvious reason unless you do it exactly right. That takes a lot of time, and is not guaranteed to be efficient.
> If you want to bootstrap as fast as possible, sure.
If we're in an intellectual arms race against de novo uFAI, I'd say yes, we do. And we're probably going to lose.
comment by falenas108 · 2012-12-06T22:48:29.570Z · LW(p) · GW(p)
This assumes that uploads arrive at almost the same time as AGI. If an AGI took over, it is very unlikely that humans would be able to continue working on uploads.