Free Will as Unsolvability by Rivals

post by Pavitra · 2011-03-28T03:28:31.885Z · LW · GW · Legacy · 11 comments


Nadia wanted to solve Alonzo. To reduce him to a canonical, analytic representation, sufficient to reconfigure him at will. If there was a potential Alonzo within potential-Alonzo-space, say, who was utterly devoted to Nadia, who would dote on her and die for her, an Alonzo-solution would make its generation trivial.

from True Names, by Cory Doctorow and Benjamin Rosenbaum

 

Warning: this post tends toward the character of mainstream philosophy, in that it relies on the author's intuitions to draw inferences about the nature of reality.

 

If you are dealing with an intelligence vastly more or less intelligent than yourself, there is no contest. One of you can play the other like tic-tac-toe. The stupid party's values are simply irrelevant to the final outcome.

If you are dealing with an intelligence extremely close to your own -- say, two humans within about five IQ points of each other -- then both parties' values will significantly affect the outcome.

If you are dealing with an intelligence moderately more or less intelligent than yourself, such as a world-class politician or an average eight-year-old child respectively, then the weaker intelligence might be able to slightly affect the outcome.
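
To make the tic-tac-toe image concrete, here is a minimal sketch (my own illustration, not from the post; the board encoding is a hypothetical 9-cell list) of what it means for the stronger party to have "solved" the weaker one: a full minimax search, against which the weaker player's preferences cannot budge the outcome past a draw.

```python
# Full minimax solver for tic-tac-toe. Against it, nothing the other
# player wants can change the solved result: optimal play yields at
# best a draw for the opponent.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, move) for `player`; value is +1/0/-1 from X's view."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player
        value, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if best is None or (player == 'X' and value > best[0]) or \
           (player == 'O' and value < best[0]):
            best = (value, m)
    return best

value, move = minimax([' '] * 9, 'X')
print(value, move)  # value 0: a draw; the weaker player's values are irrelevant
```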

 

If we formalize free will as the fact that what we want to do has a causal effect on what we actually do, then perhaps we can characterize the sensation of free will -- the desire to loudly assert in political arguments that we have free will -- as a belief that our values will have a causal effect on the eventual outcome of reality.
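
As a toy illustration of that formalization (my own sketch, not the author's; the agent names are hypothetical), "having a causal effect" can be read counterfactually: flip the desire and see whether the action flips with it.

```python
# An agent "has free will" under this formalization iff changing what it
# wants changes what it does.

def free_agent(desire):
    return desire  # action tracks desire

def autopilot_agent(desire):
    return 'stay'  # action ignores desire entirely

def desire_has_causal_effect(agent):
    # Counterfactual test: vary the desire, check whether the action varies.
    return agent('go') != agent('stay')

print(desire_has_causal_effect(free_agent))       # True
print(desire_has_causal_effect(autopilot_agent))  # False
```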

This matches the sense that facing a terrifyingly powerful intelligence, one that can solve us completely, strips away our free will, which in turn probably explains the common misconception that free will is incompatible with reductionism -- knowing that an explanation exists feels like having the explanation be known by someone. We don't want to be understood.

It matches the sense that a person's free will can be denied by forcing them into a straitjacket and tossing them in a padded cell. It matches the assumption that not having free will would feel like sitting at the wheel of a vehicle that was running on autopilot and refusing manual commands.

 

In general, we can distinguish three successive stages at which free will can be cut off:

1. between desire and action, as when the vehicle runs on autopilot and refuses manual commands;
2. between action and immediate effect, as with the straitjacket and the padded cell;
3. between immediate effect and the eventual outcome of reality, as when a rival intelligence solves you completely.

Probably the last two, and possibly all three, cannot remain cleanly separated under close scrutiny. But the model has such a deep psychological appeal that I think it must be useful somehow, if only as an intermediate step in easing lay folk into compatibilism, or in predicting and manipulating the vast majority of humans who believe or alieve it.

11 comments


comment by CronoDAS · 2011-03-29T00:37:33.099Z · LW(p) · GW(p)

If you are dealing with an intelligence vastly more or less intelligent than yourself, there is no contest. One of you can play the other like tic-tac-toe. The stupid party's values are simply irrelevant to the final outcome.

Possible counterexamples: the rabies virus is pretty good at getting mammals to do its bidding. Similarly, when the protozoan Toxoplasma gondii infects mice and rats, it causes the infected rodents to become attracted to, instead of averse to, the scent of cat urine, which assists the parasite in transferring itself to its preferred host.

Replies from: aleksiL
comment by aleksiL · 2011-03-30T13:16:10.323Z · LW(p) · GW(p)

Both of those seem to fit the pattern perfectly when you consider evolution as an actor.

Maybe we should be discussing optimization power instead of intelligence; evolution seems a pretty decent manipulator considering how stupid it is.

Replies from: Pavitra
comment by Pavitra · 2011-03-31T01:49:56.601Z · LW(p) · GW(p)

Yes, that's what I had in mind. Optimization power; strength of causation from desire to effect.

comment by Vladimir_Nesov · 2011-03-28T10:24:28.068Z · LW(p) · GW(p)

Consider a specific thought experiment where modeling others is important, for example the Prisoner's Dilemma. Even if your opponent is much smarter than you, smart enough to simulate your every thought, you can bargain with them using your access to the outcome-button. You can threaten to defect if they think too fast or in ways too difficult for you to follow (in particular, they won't be able to unconditionally simulate you if they comply). And so you'll rob them of this first-mover advantage by having your finger on the trigger. Your decision is still your own.

(This idea comes from the discussion of "unintended simulation" thought experiment variant on SIAI decision theory list.)
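
A minimal sketch of the bargaining structure Nesov describes (illustrative only; the payoff numbers are standard hypothetical Prisoner's Dilemma values, not anything from the comment): precommitting to defect whenever the smarter player runs an opaque simulation of you makes staying legible and cooperating their best response.

```python
# Payoffs as (you, them); the usual ordering T > R > P > S.
PAYOFF = {
    ('C', 'C'): (3, 3),   # mutual cooperation (R)
    ('C', 'D'): (0, 5),   # you exploited (S, T)
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),   # mutual defection (P)
}

def your_move(they_simulated_you: bool) -> str:
    # The precommitment: defect if they "think too fast to follow".
    return 'D' if they_simulated_you else 'C'

def their_payoff(simulate: bool) -> int:
    # Simulating lets them exploit -- but triggers your defection,
    # so the best they can then reach is mutual defection.
    yours = your_move(simulate)
    theirs = 'D' if simulate else 'C'
    return PAYOFF[(yours, theirs)][1]

print(their_payoff(simulate=True))   # 1: (D, D)
print(their_payoff(simulate=False))  # 3: (C, C) -- complying pays more
```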

comment by mutterc · 2011-03-28T17:18:54.695Z · LW(p) · GW(p)

We don't want to be understood.

+1. Look at the reaction (of just about anybody) to being reduced to a "type", no matter how well it fits.

comment by [deleted] · 2011-03-28T06:39:32.080Z · LW(p) · GW(p)

"You're not dead. You're as alive as I am."

Then words tumbled from him; more than in all the days since Relay. "True. The illusion of self-awareness. Happy automatons, running on trivial programs. I'll bet you never guess. From the inside, how can you? From the outside, from Old One's view--" He looked away from her, dizzy with a doubled vision.

Ravna drifted closer till her face was just centimeters from his. She floated free, except for one foot tucked into the floor. "Dear Pham, you are wrong. You've been at the Bottom, and at the Top, but never in between.... 'The illusion of self-awareness'? That's a commonplace of any practical philosophy in the Beyond. It has some beautiful consequences, and some scary ones. All you know are the scary ones. Think: the illusion must apply just as surely to the Powers."

"No. He could make devices like you and I."

-- Vernor Vinge, A Fire Upon The Deep (published in April 1992)

comment by [deleted] · 2011-03-30T20:07:36.309Z · LW(p) · GW(p)

Free Will as Unsolvability by Rivals

I thank you for this new insight; besides general rationality training, this is the kind of thing I visit LW for!

This matches the sense that facing a terrifyingly powerful intelligence, one that can solve us completely, strips away our free will, which in turn probably explains the common misconception that free will is incompatible with reductionism -- knowing that an explanation exists feels like having the explanation be known by someone. We don't want to be understood.

This might also explain why it took something as strong as religion to make hundreds of millions of people stop nominally believing in libertarian free will. The number of people who ceased to believe in it because of their belief in an omnipotent, omniscient "terrifyingly powerful intelligence" dwarfs the number who have done so on materialist grounds.

comment by Pavitra · 2011-05-27T02:20:08.406Z · LW(p) · GW(p)

I only just now found this pithy, relevant quote:

You have the right to remain cryptic.

...though of course it would be rude to suggest that the meaning I attribute to it was the intended one.

comment by Manfred · 2011-04-30T20:36:08.804Z · LW(p) · GW(p)

I agree that the feeling of free will is the mind projection bias, but I think we can make things simpler and less arbitrary by applying it to ourselves rather than rivals. It goes like "Why do I think I have free will? Because I cannot predict what I will do next. Therefore I am unpredictable." So "feels like free will" would be a property of any chaotic self-reflecting being with the mind projection bias.
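
A minimal diagonalization sketch of "I cannot predict what I will do next" (my own illustration, not Manfred's construction): any predictor the agent can consult and contradict is thereby defeated, so self-prediction fails for such an agent.

```python
def agent(predictor):
    """Consult a predictor of this agent, then do the opposite."""
    predicted = predictor(agent)
    return 'B' if predicted == 'A' else 'A'

def confident_predictor(a):
    # Any fixed prediction of the agent's next act...
    return 'A'

print(confident_predictor(agent))  # predicts 'A'
print(agent(confident_predictor))  # ...but the agent does 'B'
# (A predictor that tried to simulate agent() from inside itself would
# recurse forever instead -- either way, the prediction never lands.)
```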

This can be extended back to define free will itself, with the additional stipulation that free will just is whatever makes you feel like you have free will. Some people definitely reject this stipulation, but I think the other options are silly :P

comment by XiXiDu · 2011-03-28T15:41:09.727Z · LW(p) · GW(p)

Pavitra, I'd love to know what you think about my post on free will:

For example, children and some mentally handicapped people are not responsible in the same way as healthy adults. They cannot give consent or enter into legally binding contracts. One of the reasons for this is that they lack control and are easily influenced by others. Healthy adults exert greater control than children and handicapped people do. You experience, or possess, a greater extent of freedom in proportion to the influence and control you exert over the environment versus what the environment exerts over you.

In other words, I think a paperclip maximizer is dangerous because it has more free will, i.e. it is free to (not free from) realize what it wants, since its effect on the universe is much larger than that of humans. An agent's perception of being free is therefore correlated with its ability to realize its goals, i.e. with its probability of success.

Replies from: Pavitra
comment by Pavitra · 2011-03-29T18:30:26.049Z · LW(p) · GW(p)

Your linked post seems to be more about an agent interacting with a dumb-matter environment, and about the relationship between free will and determinism. My post is specifically about what happens when two agents interact with each other. The point I was trying to make is that the sense of indignation that accompanies the intuition of free will is tied to the desire to protect one's utility function from alteration in the presence of a hostile intelligence.

Your comment bridges the two nicely.