Time to Exit the Sandbox
post by SquirrelInHell · 2017-10-24T08:04:14.478Z · LW · GW · Legacy · 7 comments
This is a link post for http://squirrelinhell.blogspot.com/2017/10/time-to-exit-sandbox.html
Comments sorted by top scores.
comment by Yosarian2 · 2017-10-24T22:35:20.946Z · LW(p) · GW(p)
I certainly think you're right that the conscious mind and conscious decisions can, to a large extent, rewrite a lot of the brain's programming.
I am surprised that you think that most rationalists don't think that. (That sentence is a mouthful, but you know what I mean.) A lot of rationalist writing is devoted to working on ways to do exactly that; a lot of people have written about how just reading the sequences helped them basically reprogram their own brain to be more rational in a wide variety of situations.
Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things? I know there are philosophers who think that maybe consciousness is irrelevant to behavior, but that philosophy seems very much at odds with LessWrong-style rationality and the way people on LessWrong tend to think about and talk about what consciousness is.
Replies from: SquirrelInHell
↑ comment by SquirrelInHell · 2017-10-25T09:57:40.412Z · LW(p) · GW(p)
Are there a lot of people in the rationalist community who think that conscious thought and decision making can't do major things?
It's not that they think it cannot do major things at all. They don't expect to be able to do them overnight, and yes, "major changes to subconscious programming overnight" is one of the things I've seen to be possible if you hit the right buttons. And of course, if you can do major things overnight, there are some even more major things you find yourself able to do at all, which you couldn't before.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-10-25T13:43:40.296Z · LW(p) · GW(p)
This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn't part of an AI hack the rest of it and take over the universe?
comment by Stuart_Armstrong · 2017-10-24T15:18:58.661Z · LW(p) · GW(p)
Some practical examples of what you mean could be useful.
Replies from: SquirrelInHell
↑ comment by SquirrelInHell · 2017-10-24T16:37:52.022Z · LW(p) · GW(p)
I'm planning to write some practical guides based on what I have learned; here's one: http://bewelltuned.com/tune_your_motor_cortex (it's a very powerful skill that I suspect is pretty close to impossible to discover using "normal" methods, though it seems possible to execute once you already know it)
comment by entirelyuseless · 2017-10-24T12:50:33.661Z · LW(p) · GW(p)
I entirely disagree that "rationalists are more than ready." They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.
(That said, AIs are unlikely to actually be fanatical.)
Replies from: SquirrelInHell
↑ comment by SquirrelInHell · 2017-10-24T14:11:49.052Z · LW(p) · GW(p)
Meh, kinda agree; I've added "(at least some of them!)" to the post.
I didn't mean "ready" in the sense of value alignment, but rather that by accessing more power they would grow instead of destroying themselves.