How can humans make precommitments?

post by Incorrect · 2011-09-15T01:19:36.748Z · LW · GW · Legacy · 22 comments


How can you precommit to something when the commitment would be carried out only after you know your commitment strategy has failed?

This would seem to make it impossible to commit to blackmail when carrying out the threat has negative utility. How can you possibly convince your rational future self to carry out a commitment once they know it has failed to deter?

You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful, but once you have learned your strategy has failed, what's to prevent you from just disregarding the strategy?

If a commitment strategy will fail, you don't want to make the commitment; but if you won't follow through even when the strategy fails, then you never made the commitment in the first place.

For example, in a nuclear war, why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.
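
To make the tension concrete, here is a toy expected-utility sketch in Python. All of the probabilities and payoffs are made-up illustrations, not claims about the real strategic situation:

```python
# Toy model of the deterrence time-inconsistency.
# All probabilities and payoffs are made-up illustrations.

P_ATTACK_IF_CREDIBLE = 0.01  # chance of attack when retaliation is credible
P_ATTACK_IF_NOT = 0.50       # chance of attack when it is not

U_PEACE = 0.0         # status quo
U_ATTACKED = -100.0   # attacked, no retaliation
U_RETALIATE = -150.0  # attacked and retaliating (extra casualties)

def expected_utility(p_attack, u_if_attacked):
    return (1 - p_attack) * U_PEACE + p_attack * u_if_attacked

# Ex ante, the credible commitment is clearly worth making:
print(expected_utility(P_ATTACK_IF_CREDIBLE, U_RETALIATE))  # -1.5
print(expected_utility(P_ATTACK_IF_NOT, U_ATTACKED))        # -50.0

# Ex post, once the attack has happened, following through is strictly worse:
print(U_RETALIATE < U_ATTACKED)  # True: that's the whole problem
```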

I'm not saying commitments aren't useful; I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?

I apologize if reading this makes it harder for any of you to make precommitments. I'm hoping someone has a better solution than simply tricking your future self.

22 comments


comment by Scott Alexander (Yvain) · 2011-09-15T05:22:53.187Z · LW(p) · GW(p)

You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful, but once you have learned your strategy has failed, what's to prevent you from just disregarding the strategy?

Disregarding it once will convince yourself and others that you will disregard it in the future, and remove your ability to make other precommitments.

The nuclear war example is more complicated, because presumably having a nuclear war will be the last thing you ever do. I would credit it to evolved instincts. Evolution "knows" that precommitments are important, so it gives us the desire to follow them even when it is not immediately rational to do so - for example, a lust for revenge that ought to be sufficient to make us retaliate in nuclear war, or a concept of "honor" that does the same.

comment by ArisKatsaris · 2011-09-15T10:45:30.083Z · LW(p) · GW(p)

I'm not saying commitments aren't useful; I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?

Our brain has several mechanisms by which we can make commitments: Honor. Pride. Duty. Guilt. You can put any of those emotional mechanisms in the service of enforcing commitments.

comment by Oscar_Cunningham · 2011-09-15T09:43:14.511Z · LW(p) · GW(p)

Because CDT isn't rational. You don't always have to act only for the sake of things that you can cause. If you're a transparent agent, then you sometimes have to become the kind of agent that will carry out a precommitment. If the commitment fails to deter, the rational thing to do is still to carry out your threat.
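
A minimal sketch of what transparency buys you, assuming the attacker can inspect the defender's policy before deciding - a toy illustration, not a formal decision theory:

```python
# Toy model: a transparent world where the attacker can inspect the
# defender's policy before choosing whether to attack.

def cdt_defender(attacked):
    # Re-decides ex post: retaliation only adds casualties, so never retaliate.
    return False

def committed_defender(attacked):
    # The kind of agent that carries out its precommitment,
    # even once deterrence has already failed.
    return attacked

def attacker_attacks(defender_policy):
    # Attacks exactly when it can see that retaliation won't follow.
    return not defender_policy(attacked=True)

for policy in (cdt_defender, committed_defender):
    print(policy.__name__, "-> attacked:", attacker_attacks(policy))
# cdt_defender -> attacked: True
# committed_defender -> attacked: False
```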

EDIT: No-one else in the thread appears to understand that you don't need an additional reason (like a third-party agreement) in order to carry out your threat.

comment by [deleted] · 2011-09-15T06:38:01.665Z · LW(p) · GW(p)

I'm not saying commitments aren't useful; I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?

People don't normally do things because that would be the rational thing to do. They do things because they believe themselves to be the kind of person who does such things. Usually you would have to train to overcome that bias, but in this case you can make it work in your favor. So here is a three-step program for learning to precommit:

  1. Convince yourself rationally that being able to precommit has great expected utility, and that hacking yourself to be able to precommit is a good thing.
  2. Make lots of small, easy-to-follow precommitments, like precommitting what to have for lunch. But always double-check that you will actually be able to do it and won't be inconvenienced by it. When it is time to follow through on them, remember that you're not doing it for the precommitments themselves but for a "higher good".
  3. When you have followed through on a precommitment, tell yourself aloud: "I am the kind of person who always follows through on precommitments."

That should make precommitments second nature to you.

comment by Manfred · 2011-09-15T05:59:53.223Z · LW(p) · GW(p)

Put the keys to the nuclear weapons in the hands of people who have been conditioned to retaliate as part of their job.

In terms of general ways of precommitting, there are a few options:

  • Get someone to punish you if you break your commitment. For example, you could sign a contract that says you'll have to pay a large fee if you don't bargain a car salesman down to a certain price - now they know that they must either sell it to you at that price or walk away, and so you win if that price is still profitable for them. (A toy sketch of this appears after the list.)
  • Start doing the stuff you want people to think you'll do, so that its cost is reduced if you have to make good on the threat. For example, you could position your army near the border to make the neighboring country stop stealing your cows.
  • Put control in the hands of a third person who does have an incentive to carry out the threat. For example, when you're acquiring a small company, don't send the CEO; instead, hire an independent negotiator who only gets paid if they bargain them down to a certain price. The CEO might not be willing to just walk away from the deal, but an independent negotiator can, and so the small company is more likely to capitulate.
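
Here is a rough sketch of the first device on the list; the car's value, the target price, and the penalty are all hypothetical figures:

```python
# Hypothetical numbers for the car-bargaining contract from the first bullet.

CAR_VALUE_TO_ME = 10_000  # what the car is worth to me
MY_TARGET = 8_500         # the price I precommit to
PENALTY = 5_000           # fee owed under the contract if I pay more

WALK_AWAY = 0  # utility of not buying at all

def my_utility(price, contract_signed):
    surplus = CAR_VALUE_TO_ME - price
    if contract_signed and price > MY_TARGET:
        surplus -= PENALTY  # the commitment device bites
    return surplus

# Without the contract, caving at $9,500 still beats walking away,
# so my threat to walk is not credible:
print(my_utility(9_500, contract_signed=False) > WALK_AWAY)  # True

# With the contract, caving is worse than walking away; the salesman
# can see that his only possible sale is at my target price:
print(my_utility(9_500, contract_signed=True) > WALK_AWAY)   # False
print(my_utility(8_500, contract_signed=True))               # 1500
```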
Replies from: khafra
comment by khafra · 2011-09-16T14:05:17.024Z · LW(p) · GW(p)

Also, rip the steering wheel off and chug a fifth of whisky.

Replies from: Manfred
comment by Manfred · 2011-09-16T14:39:39.552Z · LW(p) · GW(p)

True. But this is only good as a straight commitment, not a conditional commitment, which is what's necessary for most kinds of coercion.

comment by imaxwell · 2011-09-15T03:53:01.173Z · LW(p) · GW(p)

The most obvious solution is to coerce your future self, by creating a future downside of not following through that is worse than the future downside of following through. Nuclear deterrence is a tough one, but in principle this is no different from coercing someone else. (I guess one could ask if it's any more ethical, at that...)

comment by Eugine_Nier · 2011-09-15T02:05:05.377Z · LW(p) · GW(p)

Internalize the logic of why precommitments are useful.

Replies from: Incorrect
comment by Incorrect · 2011-09-15T02:10:02.433Z · LW(p) · GW(p)

I'm not sure what internalize means in this context. How is internalization accomplished?

Replies from: None
comment by [deleted] · 2011-09-15T02:36:52.894Z · LW(p) · GW(p)

By taking the idea of precommitments absolutely seriously. However, I'm not sure if it is actually possible in practice, and I doubt that the standard techniques for decompartmentalization are sufficient.

comment by [deleted] · 2011-09-15T04:54:29.020Z · LW(p) · GW(p)

See a lawyer and notary and sign a contract. Be skeptical of precommitments when this isn't a realistic option.

comment by Eugine_Nier · 2011-09-15T02:47:57.752Z · LW(p) · GW(p)

Another way to think about this: modify your utility function to care about your precommitments.

To use your example:

For example, in a nuclear war, why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.

Of course, not retaliating will ensure that the future of humanity is dominated by the evil values (if I didn't consider their values evil, why did I get into a nuclear standoff with them?) of someone who is, furthermore, willing to start a nuclear war.

I personally find that much more terrifying than the deaths of a few of their civilians in this generation.
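
One way to picture that modification is as an explicit bonus term for keeping your word; the weight below is an illustrative assumption, and whether a human can actually install such a term is the open question:

```python
# Illustrative utility function with an explicit term for honoring
# one's precommitments. The weight is an assumption, not psychology.

U_ATTACKED = -100.0   # attacked, no retaliation
U_RETALIATE = -150.0  # attacked and retaliating
COMMITMENT_WEIGHT = 60.0  # how much this agent cares about keeping its word

def utility(base_outcome, kept_commitment):
    return base_outcome + (COMMITMENT_WEIGHT if kept_commitment else 0.0)

# With the modified utility function, following through wins ex post:
print(utility(U_RETALIATE, kept_commitment=True))   # -90.0
print(utility(U_ATTACKED, kept_commitment=False))   # -100.0
```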

Replies from: Incorrect
comment by Incorrect · 2011-09-15T03:05:30.781Z · LW(p) · GW(p)

You can't always do it like that in the least convenient possible world.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-09-15T03:17:12.671Z · LW(p) · GW(p)

You seem to be misunderstanding the purpose of the "least convenient possible world". The idea is that if your interlocutor gives a weak argument and you can think of a way to strengthen it, you should attempt to answer the strengthened version. You should not be invoking the "least convenient possible world" to self-sabotage attempts to solve problems in the real world.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2011-09-15T09:38:48.361Z · LW(p) · GW(p)

No, this is a correct use of the LCPW. The question asked how keeping to precommitments is rationally possible when the effects of carrying out your threat are bad for you. You took one example and explained why, in that case, retaliating wasn't in fact negative utility. But unless you think that this will always be the case (it isn't), the request for you to move to the LCPW is valid.

Replies from: jhuffman
comment by jhuffman · 2011-09-15T18:29:27.017Z · LW(p) · GW(p)

Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean an end to humanity, so a preference for one set of values over another isn't applicable. This is somewhat explicit in a mutually-assured-destruction deterrence strategy, but nonetheless, once the other side pushes the button you have a choice to put an end to humanity or not. It's hard to come up with a utility function that prefers that, even considering a preference for meeting precommitments. It's like the 0th law of robotics - no utility evaluation can exceed the existence of humanity.

comment by wedrifid · 2011-09-15T06:50:48.197Z · LW(p) · GW(p)

This would seem to make it impossible to commit to blackmail when carrying out the threat has negative utility. How can you possibly convince your rational future self to carry out a commitment once they know it has failed to deter?

You put the answer in the title. We are humans, not rational agents. We have built-in mechanisms to handle this. Pride - embrace it. This actually becomes easier with experience. I've found that in times when I've tried to be a good little CDT agent and suppress my human instincts, it has gone badly for me. My personal psychology doesn't react well to the suppression, and I've been surprised how often failing to follow through with a threat (or what should be an implied threat) had more negative consequences than I anticipated. On this, my instincts and my ethics are aligned.

comment by shokwave · 2011-09-15T13:14:03.999Z · LW(p) · GW(p)

Use a third party, preferably a binding legal contract or similar.

comment by TrE · 2011-09-15T05:23:25.790Z · LW(p) · GW(p)

Ideally, your decision to follow that precommitment should be so strong that you don't really have a choice: retaliating is something you don't even think about but execute by default. With precommitments, you want to restrict your own decision possibilities.

If I hadn't dissolved the question already, I'd probably have come up with something like "by making precommitments, you want to undermine your free will so that once the event (a nuclear strike, etc.) has happened, you don't have a free choice anymore, because your free will is nonexistent in that situation".

comment by Anonymous9155 · 2011-09-15T02:56:22.611Z · LW(p) · GW(p)

How can humans make precommitments?

We can't.

Posted under a throwaway account to avoid impairing my ability to pretend to make precommitments that I'm not actually guaranteed to follow.

Replies from: pedanterrific
comment by pedanterrific · 2011-09-15T03:03:33.714Z · LW(p) · GW(p)

Agree, except I'm not concerned with preserving my ability to give obviously false promises.