Updating on hypotheticals

post by casebash · 2015-11-06T11:49:03.800Z · LW · GW · Legacy · 22 comments

This post is based on a discussion with ChristianKl on Less Wrong Chat. Thanks!

Many people disagreed with my previous writings on hypotheticals on Less Wrong (link 1, link 2). For those who still aren’t convinced, I’ll provide another argument for why you should take hypotheticals seriously. Suppose, as a way of selling someone on utilitarian ethics, you are discussing whether it would be okay to flick a switch to divert a train that would otherwise collide with and destroy an entire world (see the trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on the discussion.

This may seem unreasonable, but I suppose a person who believes their time is very valuable may not feel it is worth indulging the hypothetical that A->B unless the other person is willing to explain why this result would relate to how we should act in the real world. This is especially likely if they have had similar discussions before and so have a low prior that the other person will be able to relate it to the real world.

However, at this stage, they almost certainly have to update, in the sense that if you are following the rule of updating on new evidence, you have most likely already received new evidence. The argument is as follows: as soon as you have heard A->B (“if it would save a world, I would flick the switch”), your brain has already performed a surface-level evaluation of that argument. Realistically, the thinker in this situation probably knows that it is very tough to argue that we should allow an entire world to be destroyed instead of ending one life. Now, the fact that it is tough to argue against something doesn’t mean that it should be accepted. For example, many philosophical proofs, or either half of a mathematical paradox, seem very hard to argue against at first, yet we may have an intuitive sense that there is a flaw to be found if we are smart enough and look hard enough.

However, even if we aren’t confident in the logic, we still have to update our priors once we know that there is an argument for it that at least appears to check out. Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent, even if we think the chance of A->B being analogous to the real world is incredibly small, as there will always be *some* chance that it is analogous, assuming the other person isn’t talking nonsense. So even though the analogy hardly seems to fit the real world, and even though you’ve perhaps spent only a second thinking about whether A->B checks out, you’ve still got to update. I'll add one quick note: you only have to update on the first instance; when you see the same or a very similar problem again, you don't have to update again.
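
To make "update to some extent" concrete, here is a minimal sketch in odds-form Bayes. All the numbers are purely illustrative assumptions of mine, not figures from the discussion: treat "A->B would survive careful scrutiny" as the hypothesis and "the argument passes a quick surface-level check" as weak evidence.

```python
# A minimal, purely illustrative sketch of the "small update" claim.
# All numbers are hypothetical; nothing here comes from the original discussion.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability given evidence with the stated likelihood ratio
    (P(evidence | hypothesis) / P(evidence | not hypothesis))."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothesis: "A->B would survive careful scrutiny."
prior = 0.5  # suppose you start with no idea either way

# Evidence: "the argument passes a quick surface-level check."
# A sound argument passes such a check more often than a flawed one,
# but flawed arguments often pass it too -- so the ratio is only modestly above 1.
weak_ratio = 1.5

posterior = bayes_update(prior, weak_ratio)
print(f"{prior:.2f} -> {posterior:.2f}")  # 0.50 -> 0.60: a real but small update
```

The only point of the sketch is that the likelihood ratio for a surface-level check is above 1, so the posterior must move, however slightly; it says nothing about whether A is analogous to the real world.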

How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?” This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.

There’s one objection that I need to answer. Maybe you say that you haven’t considered A->B at all. I would be really skeptical of this. There is a small chance I’m committing the typical mind fallacy, but I’m pretty sure that your mind considered both A->B and “this is analogous to reality”, and that you decided to attack the second because you didn’t find a strong counter-argument against A->B. And if you did actually find a strong counter-argument, but chose to challenge the hypothetical instead, why not use your counter-argument? Why not engage with your opponent directly and take down their argument, since this is more persuasive than dodging the question? There are probably situations where challenging the hypothetical is reasonable, such as when the argument against A->B is very long and complicated but it is much easier to convince the other person that the situation isn’t analogous. However, I suspect such situations are relatively rare.

22 comments

Comments sorted by top scores.

comment by ChristianKl · 2015-11-06T12:40:29.071Z · LW(p) · GW(p)

Suppose, as a way of selling someone on utilitarian ethics, you are discussing whether it would be okay to flick a switch to divert a train that would otherwise collide with and destroy an entire world (see the trolley problem). The other person objects that this is an unrealistic situation and so there is no point wasting time on the discussion.

Philosophy has a very bad track record at reliably producing knowledge. Reliable knowledge is much more often produced by interaction with the real world and empirical learning.

If you allow people to substantially affect your beliefs with hypotheticals that have nothing to do with reality, I don't think you will update in the right direction.

Nassim Taleb has written a lot about how poor the track record is of people who hold beliefs detached from reality and based only on abstract reasoning.

“Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?”

A lot of things don't hold up in reality, but you won't find the flaw just by spending significant time thinking about the issue. That's why it's useful not to base too many of your beliefs on abstract arguments, but to ground as much as possible in interaction with the real world and exposure to real-world feedback.

Replies from: casebash
comment by casebash · 2015-11-06T12:53:02.607Z · LW(p) · GW(p)

I never said that you had to substantially update. I think I might have addressed these points to some extent in the previous two posts, although there is probably more to be said on this. The fact that philosophy has a poor track record is a good point. I'm not going to address it here and now though. If I wanted to address this properly it'd need its own post and I generally don't like to invest the effort until I see an issue come up on multiple occasions.

Replies from: ChristianKl
comment by ChristianKl · 2015-11-06T13:05:44.657Z · LW(p) · GW(p)

I never said that you had to substantially update.

This can be a bit motte-and-bailey. Without further definition of how much you think one should update it's hard to talk about it.

Replies from: casebash
comment by casebash · 2015-11-06T13:12:42.271Z · LW(p) · GW(p)

It isn't a motte-and-bailey, as I already acknowledged this limitation in the original post: "Obviously we will update to a much lesser degree than if we were confident in the logic, but we still have to update to some extent"

Replies from: ChristianKl
comment by ChristianKl · 2015-11-06T13:16:48.701Z · LW(p) · GW(p)

It would be useful to present an example and express your beliefs of how strongly you should update in numbers.

Replies from: EGarrett
comment by EGarrett · 2015-11-06T17:56:58.354Z · LW(p) · GW(p)

Just to underline here... philosophy has a bad track record because when it finds something concrete and useful, that gets split off into things like science and ethics, and very abstract things tend to be all that's left.

Hypotheticals are probably in the same class: useful when they apply to reality, sometimes entertaining or stimulating even when they don't... and in some cases neither. The third category is the one I ignore.

comment by ChristianKl · 2015-11-06T12:33:58.405Z · LW(p) · GW(p)

Can you give examples from LessWrong where you think people didn't understand hypotheticals and unfairly dismissed a hypothetical?

Replies from: casebash
comment by casebash · 2015-11-06T12:54:33.974Z · LW(p) · GW(p)

Thanks for the suggestion, but I think it is probably better to keep this abstract and not call particular people out. Also, if I'm being honest, trying to remember when I saw an example and then finding it seems like a lot of work too.

Replies from: casebash
comment by casebash · 2015-11-06T23:15:55.441Z · LW(p) · GW(p)

Okay, so I found an example. It's from a top level poster, so I think it's fine to use as an example.

comment by Richard_Kennaway · 2015-11-07T08:51:56.778Z · LW(p) · GW(p)

How does this play out? An intellectually honest response would be along the lines of: “Okay, your argument seems to check out at first glance, but I’m rather skeptical that it’d hold up if I spent enough time thinking about it. Anyway, supposing that it were true, why should the real world be anything like A?” This is much more honest than simply trying to dismiss the hypothetical by stating that A is nothing like reality.

These are the long and the short version of the same thought.

comment by Jiro · 2015-11-06T15:27:30.868Z · LW(p) · GW(p)

This is another case of Goodhart's Law. If you update your priors based on being asked about A->B, people can take advantage of that fact, which changes whether it is a good idea to update your prior on being asked about A->B.

Replies from: casebash
comment by casebash · 2015-11-06T15:36:56.506Z · LW(p) · GW(p)

That wasn't quite the argument. It was that you should update when asked about A->B if there appeared, prima facie, to be a solid argument for it, even if you were sure it had a flaw somewhere. So it was never automatic.

comment by casebash · 2015-11-06T13:10:52.195Z · LW(p) · GW(p)

Of those who have read all three posts, how has it affected your opinion on hypotheticals?

[pollid:1074]

Subtract one "No change, but I already agreed with you" as I needed to vote in order to be able to see results!

comment by [deleted] · 2015-11-09T11:33:29.473Z · LW(p) · GW(p)

Links 1 and 2 don't show :(

comment by Dagon · 2015-11-06T23:04:31.297Z · LW(p) · GW(p)

However, even if we aren’t confident in the logic, we still have to update our priors once we know that there is an argument for it that at least appears to check out.

I don't buy it.

Update is proportional to surprise. How surprising is it that your stoned housemate or random internet poster is attempting to pose this hypothetical, and what beliefs do you propose need updating?

Replies from: casebash
comment by casebash · 2015-11-06T23:13:24.741Z · LW(p) · GW(p)

It isn't surprising that they are proposing it; what is surprising is that their argument for A->B seems to check out at first glance. So if your previous model was “we have no idea whether A->B is true or not”, then you should be updating.

Replies from: Dagon
comment by Dagon · 2015-11-08T17:14:32.123Z · LW(p) · GW(p)

My previous model already incorporated the surface features of the hypothetical, that's how I got my initial reaction. What is new about THIS presentation of A->B that I didn't expect?

Is there a concrete example to use? I think there is a lot of variation possible across hypotheticals and across different participants' past experiences with hypotheticals. In a perfect agent, there is no update possible on fictional evidence. In real-world agents, it'll depend entirely on what hasn't already been considered.

Replies from: casebash
comment by casebash · 2015-11-10T05:50:49.318Z · LW(p) · GW(p)

Here's a concrete example. Imagine a trolley problem with one person on one track and a million people on the other. If Bob doesn't want to engage with the hypothetical because it is "unrealistic", then his mind has most likely already registered that it would be very hard to argue against switching were he to accept the hypothetical. Many people do this every time a hypothetical comes up and act as though they have no idea whether the hypothetical holds or not. However, this isn't quite true: Bob already knows that he would find it very hard to argue against the point being made; indeed, if it were easy to argue against, Bob would probably do that instead of dodging the hypothetical. So Bob has to update, but only on the first instance of such a problem.

Maybe stating it in terms of updating obscures things rather than clearing them up? I'm actually not sure I'd write this article the same way if I were writing it again.

Replies from: Dagon
comment by Dagon · 2015-11-10T20:19:07.581Z · LW(p) · GW(p)

I can't follow what priors Bob has in this case and what update you think he should make. I do think that in this example, the presenter of the hypothetical should update on the evidence that Bob doesn't find the fictional case worth discussing.

I think stating things in terms of beliefs (priors and updates) is extremely helpful when discussing communication and reflective knowledge. But I haven't seen the detail needed for it to be a compelling (or even understandable) point on this specific topic.

comment by Lumifer · 2015-11-06T15:46:10.428Z · LW(p) · GW(p)

You keep on insisting that people have to update. Why do they have to?

Replies from: casebash
comment by casebash · 2015-11-06T22:30:54.084Z · LW(p) · GW(p)

Well, you don't have to update. As a first approximation, I'll just say that a perfectly rational agent would update if updating were costless. I've now edited this into the article.

comment by cansumartin · 2015-11-07T11:57:34.333Z · LW(p) · GW(p)

Everyone should follow updated information in doing their own job.