Counterfactual trade
post by owencb · 2015-03-09T13:23:54.252Z
Counterfactual trade is a form of acausal trade between counterfactual agents. Compared to most acausal trade, it is practical to engage in even with limited computational and predictive power. In Section 1 I’ll argue that some human behaviour is at least interpretable as counterfactual trade, and explain how it could give rise to phenomena such as differing moral circles. In Section 2 I’ll engage in wild speculation about whether you could bootstrap something in the vicinity of moral realism from this.
Epistemic status: these are rough notes on an idea that seems kind of promising but that I haven’t thoroughly explored. I don’t think my comparative advantage is in exploring it further, but I do think some people here may have interesting things to say about it, which is why I’m quickly writing this up. I expect at least part of it has issues, and it may be that it’s handicapped by my lack of deep familiarity with the philosophical literature, but perhaps there’s something useful in here too. The whole thing is predicated on the idea of acausal trade basically working.
0. Set-up
Acausal trade is trade between two agents that are not causally connected. For this to work, each agent has to be able to predict the other’s existence and how it might act. This seems really hard in general, which inhibits the amount of this trade that happens.
If we had easier ways to make these predictions we’d expect to see more acausal trade. In fact I think counterfactuals give us such a method.
Suppose agents A and B are in scenario X, and A can see a salient counterfactual scenario Y containing agents A’ and B’ (where A is very similar to A’ and B is very similar to B’). Suppose also that from the perspective of B’ in scenario Y, X is a salient counterfactual scenario. Then A and B’ can engage in acausal trade (so long as A cares about A’ and B’ cares about B). Let’s call such trade counterfactual trade.
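To make the condition concrete, here is a minimal toy sketch in Python. Everything in it (the Agent fields, the salience and caring numbers, the threshold) is an illustrative assumption of mine rather than a standard formalism; it just encodes "trade happens when each side finds the other scenario salient enough and cares enough about its counterpart there".

```python
# Toy sketch of the counterfactual trade condition. All names and numbers
# here are illustrative assumptions, not part of any standard formalism.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    cares_about_counterpart: float    # weight placed on the counterpart agent (0..1)
    salience_of_other_scenario: float # how easily this agent pictures the other scenario (0..1)

def trade_occurs(a: Agent, b_prime: Agent, threshold: float = 0.5) -> bool:
    """A trades with B' iff each side both pictures the other scenario
    clearly enough and cares enough about its counterpart there."""
    return (a.salience_of_other_scenario * a.cares_about_counterpart > threshold
            and b_prime.salience_of_other_scenario * b_prime.cares_about_counterpart > threshold)

# A in scenario X, B' in the salient counterfactual scenario Y:
a = Agent("A", cares_about_counterpart=0.9, salience_of_other_scenario=0.8)
b_prime = Agent("B'", cares_about_counterpart=0.9, salience_of_other_scenario=0.7)
print(trade_occurs(a, b_prime))  # True: mutual salience and caring support trade
```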
Agents might engage in counterfactual trade either because they actually care about the agents in the counterfactuals (which seems plausible under some beliefs about a large multiverse), or because it’s instrumentally useful as a tractable decision rule that approximates what they’d ideally like to do better than similarly tractable alternatives.
1. Observed counterfactual trade
In fact, some moral principles could arise from counterfactual trade. The rule that you should treat others as you would like to be treated is essentially what you’d expect to get by trading with the counterfactual in which your positions are reversed. Note that I’m not claiming this is the reason people have this rule, only that it could be. I don’t know whether the distinction is important.
It could also explain the fact that people have lessening feelings of obligation to people in widening circles around them. The counterfactual in which your position is swapped with that of someone else in your community is more salient than the counterfactual in which your position is swapped with someone from a very different community -- and you expect it to be more salient to their counterpart in the counterfactual, too. This means that you have a higher degree of confidence in the trade occurring properly with people in close counterfactuals, hence more reason to help them for selfish reasons.
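A rough way to see why this produces widening circles, under assumptions of my own (that salience decays with social distance, and that the trade only goes through if both sides find the swap salient):

```python
# Toy model (my own assumptions, not from the post) of why counterfactual
# trade weakens over widening social circles: confidence that the trade
# "goes through" is the product of both parties' salience estimates, and
# salience decays with social distance.

def salience(social_distance: float, decay: float = 0.5) -> float:
    """Illustrative assumption: salience of the position-swap counterfactual
    falls off geometrically with social distance."""
    return decay ** social_distance

def expected_selfish_value_of_helping(social_distance: float,
                                      benefit_if_reciprocated: float,
                                      cost_of_helping: float) -> float:
    # Both you and your counterpart must find the swap salient for the
    # trade to occur; treat the two estimates as independent.
    p_trade = salience(social_distance) * salience(social_distance)
    return p_trade * benefit_if_reciprocated - cost_of_helping

for d in (1, 2, 3):  # close community member, acquaintance, distant stranger
    print(d, round(expected_selfish_value_of_helping(d, benefit_if_reciprocated=10,
                                                     cost_of_helping=1), 3))
# With these toy numbers, helping nearby people clears the bar; distant ones do not.
```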
Social shifts can change the salience of different counterfactuals and hence change the degree of counterfactual trade we should expect. (There is something like a testable prediction in this direction for the theory that humans engage in counterfactual trade, but I haven’t worked through the details enough to specify the test.)
2. Towards moral realism?
Now I will get even more speculative. As people engage in more counterfactual trade, their interests align more closely. If we are willing to engage with a very large set of counterfactual people, then our interests could converge to some kind of average of the interests of these people. This could provide a mechanism for convergent morality.
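The convergence claim can be illustrated numerically. The sketch below assumes (my assumption, not anything argued for above) that each round of trade replaces an agent’s effective interests with a fixed mixture of its own interests and its partners’; this is the standard DeGroot averaging dynamic, which converges to a single weighted average under mild conditions:

```python
# Rough numerical sketch of the convergence claim, assuming each round of
# counterfactual trade replaces an agent's effective interests with a
# mixture of its own and its trade partners' (a DeGroot-style dynamic).

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_issues = 5, 3
interests = rng.uniform(-1, 1, size=(n_agents, n_issues))  # each row: one agent's interests

# Mixing weights: keep most of your own interests, absorb a little of everyone else's.
self_weight = 0.8
W = np.full((n_agents, n_agents), (1 - self_weight) / (n_agents - 1))
np.fill_diagonal(W, self_weight)

for _ in range(200):
    interests = W @ interests

print(np.round(interests, 3))  # all rows (nearly) identical: interests have converged
```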
This would bear some similarities to moral contractualism with a veil of ignorance. There seem to be some differences, though. We’d expect to weigh the interests of others only to the extent to which they too engage (or counterfactually engage?) in counterfactual trade.
It also has some similarities to preference utilitarianism, but again with some distinctions: we would care less about satisfying the preferences of agents who cannot or would not engage in such trade (except insofar as our trade partners may care about the preferences of such agents). We would also care more about the preferences of agents with more power to affect the world. Note that “care less” here describes how we act, not our underlying values. If, for example, we start from a utilitarian position before engaging in counterfactual trade, then although we will end up putting less effort than before into helping those who will not trade, this will be compensated by our counterfactual trade partners putting more effort into it.
If this works, I’m not sure whether the result is something you’d want to call moral realism or not. It would be a morality that many agents would converge to, but it would be ‘real’ only in the sense that it is a weighted average of so many agents that any individual agent could only shift it infinitesimally.
19 comments
comment by gjm · 2015-03-09T14:16:02.037Z
What does "the counterfactual in which your positions are reversed" actually mean? Suppose I'm a white slaveowner in the early 19th-century US, pondering how I should treat my slaves. It's hard to see that there's any possible world that resembles this one except that I am a slave and the person who is now my slave is the owner (because the societal mechanisms that make some people slaves and some slaveowners are slanted in particular ways and there's no plausible way our positions could be interchanged without also making us utterly different people).
(For the avoidance of doubt: I do not in any way endorse slavery and I'd love to believe that counterfactual-me wouldn't be willing to be a slaveowner even in the early 19th-century US. It's just a useful example.)
comment by owencb · 2015-03-09T15:15:19.395Z
Good question. It is definitely underspecified (this is true of many counterfactuals as people think about them).
Where it's harder to get counterfactuals to work, this is likely to make them less salient, which means people are going to be less confident of having trade partners, so less likely to engage in trade (needing a better reward/cost ratio, I suppose).
comment by owencb · 2015-03-09T22:46:24.752Z
Note that you can potentially trade with counterfactuals that aren't strongly symmetric. You can trade with a counterfactual where the person who is your slave is in a position of power over you, even if that's not an owner/slave relationship.
comment by Lumifer · 2015-03-09T15:12:20.221Z
I don't quite understand this. "Counterfactual" means "does not exist" and "something made up".
Then A and B’ can engage in acausal trade
B' comes from the counterfactual scenario Y which means that neither Y nor B' exist in reality and they are just figments of A's imagination.
comment by owencb · 2015-03-09T15:26:38.359Z
I agree that not everyone will be interested in engaging in counterfactual trade. I gestured towards some reasons why you might be:
Agents might engage in counterfactual trade either because they actually care about the agents in the counterfactuals (which seems plausible under some beliefs about a large multiverse), or because it’s instrumentally useful as a tractable decision rule that approximates what they’d ideally like to do better than similarly tractable alternatives.
comment by Lumifer · 2015-03-09T15:29:48.034Z
My question isn't why would someone be interested, my question is how one can engage in trade with a figment of one's own imagination.
comment by evand · 2015-03-09T15:50:14.977Z
The result ends up looking like "I know they would have done the same for me".
comment by Lumifer · 2015-03-09T16:10:11.212Z
Who are "they"?
comment by evand · 2015-03-10T15:10:13.717Z
"My friend was in trouble, so I helped them out, even though I knew I would never be repaid. I know they would have done the same for me."
comment by Lumifer · 2015-03-10T16:43:15.974Z
Yes. Notice: a real friend, not a counterfactual one. Also "I knew I would never be repaid" makes this not a trade but just an altruistic act.
And "they would have done the same for me" is just games you play in your head. It could just as easily be "He probably wouldn't do that for me, but I don't care, he's my friend".
comment by Bobertron · 2015-03-11T11:19:14.235Z
No, your real friend is the one you helped. The friend that helps you in a counterfactual situation where you are in trouble is just in your head, not real. Your counterfactual friend helps you, but in return you help your real friend. The benefit you get is that once you really are in trouble, the future version of your friend is similar enough to the counterfactual friend that he really will help you. The better you know your friend, the likelier this is.
I'm not saying that that isn't a bit silly. But I think it's coherent. In fact it might be just a geeky way to describe how people often think in reality.
comment by owencb · 2015-03-10T12:08:19.124Z
The direct interpretation is that "they" are people elsewhere in a large multiverse. The fact that they can be pictured as a figment of imagination gives the agent evidence about their existence.
The instrumental interpretation is that one acts as though trading with the figment of one's imagination, as a method of trade with other real people (who also act this way), because it is computationally tractable and tends to produce better outcomes all-round.
comment by Lumifer · 2015-03-10T14:50:57.063Z
I am sorry, this makes no sense to me at all.
Playing games inside your own mind has nothing to do with trades with other real entities, acausal or not.
comment by owencb · 2015-03-10T18:28:54.222Z
The first version isn't inside your own mind.
If you think there is a large multiverse, then there are many worlds including people very much like you in a variety of situations (this is a sense of 'counterfactual' which isn't all in the mind). Suppose that you care about people who are very similar to you. Then you would like to trade with real entities in these branches, when they are able to affect something you care about. Of course any trade with them will be acausal.
In general it's very hard to predict the relative likelihoods of different worlds, and the likelihood of agents in them predicting the existence of your world. This provides a barrier to acausal trade. Salient counterfactuals (in the 'in the mind' sense) give you a relatively easy way of reasoning about a slice of worlds you care about, including the fact that your putative trade partner also has a relatively easy way of reasoning about your world. This helps to enable trade between these branches.
comment by Lumifer · 2015-03-10T18:40:55.461Z
When you say "multiverse" and "branches", do you specifically mean the MWI?
Can you walk me through an example of a trade where you explicitly label all the moving parts with what they are and where they are?
In particular, if you assume MWI and assume that other branches exist, then people-like-you in other branches are not counterfactual because you assume they exist to start with.
comment by evand · 2015-03-10T15:14:50.464Z
I give you things, and you give me things. The result is positive sum. That's trade. Causal and acausal trades both follow this pattern.
In the causal case, each transfer is conditional on the other transfer. Possibly in the traditional form of a barter transaction, possibly in the form of "if you don't reciprocate, I'll stop doing this in the future."
In the acausal case, it's predicated on the belief that helping out entities who reason like you do will be long-run beneficial when other entities who reason like you do help you out. There's no specific causal chain connecting two individual transfers.
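A toy sketch of the contrast (my own framing, with invented names):

```python
# Minimal sketch contrasting the two patterns. In the causal case a transfer
# is conditional on the other transfer; in the acausal case it is conditional
# on a prediction that entities reasoning the same way transfer too.

def causal_trade(you_transferred: bool) -> bool:
    # I give iff you actually gave (or I can punish you later if you don't).
    return you_transferred

def acausal_trade(predict_similar_reasoners_give: bool) -> bool:
    # I give iff I predict that agents running this same rule give;
    # no causal chain connects my transfer to any particular return.
    return predict_similar_reasoners_give

print(causal_trade(True), acausal_trade(True))  # both patterns yield giving
```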
comment by Lumifer · 2015-03-10T16:46:16.629Z
I give you things, and you give me things. The result is positive sum. That's trade.
Provided "I" and "you" are both real, existing entities and are not counterfactuals.
If you give things to a figment of your imagination and it gives things back to you, well, either you have something going on with your tulpa or you probably should see a psychotherapist :-/
comment by kokotajlod · 2015-06-05T19:56:53.534Z
This is an interesting idea! Some thoughts:
Doesn't acausal trade (like all trade) depend on enforcement mechanisms? I can see how two AIs might engage in counterfactual trade, since they can simulate each other and see that they self-modify to uphold the agreement, but I don't think a human would be able to do it.
Also, I'd like to hear more about motivations for engaging in counterfactual trade. I get the multiverse one, though I think that's a straightforward case of acausal trade rather than a case of counterfactual trade, since you would be trading with a really existing entity in another universe. But can you explain the second motivation more?