Humans do acausal coordination all the time
post by Adam Jermyn (adam-jermyn) · 2022-11-02T14:40:39.730Z · LW · GW · 35 comments
I used to think that acausal coordination was a weird thing that AIs might do in the future, but that they certainly wouldn’t learn from looking at human behavior. I don’t believe that anymore, and I think there are lots of examples of acausal coordination in everyday life.
Examples people do
Voting
The political science argument against voting goes:
The probability that my vote tilts the election is tiny, so the expected value to me is tiny, so it’s not worth my time to vote.
And the standard rebuttal to this is:
If everyone thought that way democracy would fall apart, so I should vote.
More precisely, I expect that the people who justify voting on these grounds are doing the following reasoning:
I’m like the people who share my views in my tendency to vote. That means that there is a correlation between my decision to vote (or not) and the tendency of other people in my coalition to vote (or not). Lots of people who share my views vote, enough to make our collective vote worth my while. So I should vote, so that we all vote and we all win.
This is an example of acausal coordination! The pro-voting position amounts to reasoning about what other actors with correlated thought processes will do and picking the option which, if each actor does the same reasoning and comes to the same conclusion, leads to a better outcome.
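To make the structure concrete, here is a toy expected-value comparison (a sketch of my own; every number in it is a made-up placeholder, not an estimate):

```python
# Toy comparison of the two voting calculi. All numbers are hypothetical
# placeholders chosen only to show the shape of the argument.

p_pivotal = 1e-7       # chance that my single vote tilts the election
value_of_win = 1e4     # value (in arbitrary units) I place on my side winning
cost_of_voting = 20    # time/hassle cost of voting, in the same units
p_bloc_pivotal = 0.1   # chance that the whole correlated bloc tilts the election

# Narrow causal view: only my one ballot counts.
ev_causal = p_pivotal * value_of_win - cost_of_voting

# Coordination view: people who reason like me decide alike, so my choice
# stands in for the whole bloc's choice.
ev_coordinated = p_bloc_pivotal * value_of_win - cost_of_voting

print(f"EV, counting only my ballot:   {ev_causal:+.3f}")      # negative
print(f"EV, if the bloc moves with me: {ev_coordinated:+.1f}")  # positive
```

On these made-up numbers the causal calculation says stay home and the coordination calculation says vote, which is exactly the disagreement between the two arguments above.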
Recycling/Carbon Footprint Reduction
The usual argument against recycling/reducing your personal carbon footprint goes:
I only have control over my environmental impacts. The effect of choosing to reduce these is tiny, so there’s no point in bearing even small costs to do it.
And the standard rebuttal is:
If everyone decided to reduce their footprint/recycle/etc. we’d have the problem(s) solved.
Again, the rebuttal is fundamentally an argument about how to acausally coordinate with other people to achieve a collective goal. Whether or not I recycle is logically connected to whether or not other people who share my reasoning recycle. There are lots of those people, which makes me want to recycle so that they recycle so that we collectively help the environment a significant amount.
Dieting
Why do people feel (psychologically) bad when they go over the limits on their diet? I don’t think it’s because they screwed up once, I think it’s because they view their commitment to a diet as a coordination effort between their present and future selves. Specifically, the reasoning goes:
I couldn’t stick to my diet this time. My ability to stick to my diet is logically connected to the ability of future versions of me to stick to their diets, so by failing to do so now I have failed to coordinate with future versions of myself.
The most explicit example I’ve seen of this in action is Zvi’s reasoning about diets [LW · GW]:
For each meal I would consume, I decided what quantity was worth it and forbade myself from ever consuming more. I motivated myself to stick to that rule in the face of hyperbolic discounting by reminding myself that I would make the same decision next time that I was making now, so I was deciding what action I would always take in this situation. More generally, sticking to the rules I’d decided to follow meant I would stick to rules I’d decided to follow, which was clearly an extremely valuable asset to have on my side.
An example people don’t do: paying extra taxes
As far as I can tell, almost no one voluntarily pays extra taxes. And yet, there is an argument for doing so:
If everyone decided to pay extra taxes, the government would have more money for services/we could quickly pay down the debt/etc.
Why does coordination work for voting but not for paying extra taxes? For some people it could be a general disapproval of the things tax dollars pay for, but I don't think that's all that's going on here. For instance, many people support raising taxes, including on themselves, so you might at least expect those people to coordinate to pay extra taxes.
My guess is that the issue is that almost no one pays extra taxes, so there’s no step where you say “There are lots of people who might pay extra taxes, whose choice is logically connected to mine.” That means that your personal choice to pay extra taxes isn’t correlated with being in a world where many people pay extra taxes, and so you don’t see it as worthwhile.
Virtues and Traditions
I think a lot of virtues can be recast as acausal coordination. Virtues like honor, honesty, and integrity can be seen as recognition that my choices are correlated with yours: my choosing to be virtuous is correlated with you choosing to be virtuous, so I should choose virtue to ensure better outcomes for us both.
Many traditions and religious practices follow this pattern too. For instance, honoring the dead and respect for older generations are both cases of coordinating across people at different times.
(Thanks to Justis Mills for feedback on this post, and to Katherine McDaniel for discussions about these ideas.)
35 comments
comment by Dagon · 2022-11-02T14:57:25.641Z · LW(p) · GW(p)
Interesting take, but I'll note that these are not acausal, just indirect-causal. Voting is a good example - counts are public, so future voters KNOW how many of their fellow citizens take it seriously enough to participate.
In all of these examples there is a signaling path to future impact, which humans are perhaps over-evolved to focus on.
↑ comment by Gordon Seidoh Worley (gworley) · 2022-11-02T21:04:49.109Z · LW(p) · GW(p)
Right. Nothing that happens in the same Hubble volume can really be said to not be causally connected. Nonetheless I like the point of the OP even if it's made in an imprecise way.
↑ comment by Adam Jermyn (adam-jermyn) · 2022-11-02T15:53:36.132Z · LW(p) · GW(p)
Hmmmm. I agree that there is a signal path to future impact (at least in voting). Two responses there:
- There isn't such a signal in recycling. I have no idea how much my town recycles. Ditto for carbon offsets. How many of my closest friends offset the carbon from their flights? I have no idea.
- Counts being public tells me how many people voted, but there's something a little funny there. There's almost no signal from my vote in there (concretely, I don't think my vote changes the number from one that tells other people "voting isn't worth it" to "voting is worth it"). I notice I'm confused how to think about this though, and maybe you can clarify/expand on your indirect signal point?
↑ comment by Dagon · 2022-11-02T19:11:51.091Z · LW(p) · GW(p)
I don't claim that signaling is the only path, nor that humans are correct in their decision-making on these topics. I only claim that there are causal reasons for the choices which explain things better than acausal coordination.
Mostly I want to support your prior that "acausal coordination is a weird thing that AIs might do in the future (or more generally that may apply in very rare cases, but will be extremely hard to find clear examples of)".
↑ comment by Adam Jermyn (adam-jermyn) · 2022-11-02T19:18:42.877Z · LW(p) · GW(p)
I guess I’m just not following what the causal reasons are here?
↑ comment by Adam Selker (adam-selker) · 2022-11-03T16:58:21.702Z · LW(p) · GW(p)
If the counts weren't public until after voting closed, do you think people would vote significantly differently?
My instinct says they wouldn't.
comment by Elias Schmied (EliasSchmied) · 2022-11-03T11:31:08.778Z · LW(p) · GW(p)
I disagree. There is no acausal coordination here, because the reasoning "If everyone thought like me, democracy would fall apart" does not actually influence many people's choice; they would vote due to various social-emotional factors no matter what that reasoning said. It's just a rationalization.
More precisely, when people say "If everyone thought like me, democracy would fall apart", it's not actually the reasoning it could be interpreted as; it's a vague emotional appeal to loyalty, to the identity of a modern liberal, etc. You can tell because it refers to "everyone" instead of a narrow slice of people, it involves no modelling of the specific counterfactual of MY choice, and there's no general understanding of decision theory that would allow this kind of reasoning to happen; any reasonable model of the average person's mind doesn't allow it, imo.
Your model is also straining to explain the extra taxes thing. "Voting is normal, paying extra taxes isn't" is much simpler.
In general, I'm wary of attempts to overly steelman the average person's behavior, especially when there's a "cool thing" like decision theory involved. It feels like a Mysterious Answers to Mysterious Questions [LW · GW] kind of thing.
comment by Ben (ben-lang) · 2022-11-02T16:07:46.212Z · LW(p) · GW(p)
Humans, in the examples provided, are (I think) applying something like the logic "what if everyone did this" to assess the morality of an action. In my experience that is quite a common way of reasoning about morality; it was presented to me as a child as "common sense", and many years later I learned it was a big deal to Kant: https://en.wikipedia.org/wiki/Categorical_imperative (I am really curious whether it was already "common sense" before Kant or whether he added it to the common-sense pile). But all of this is used when discussing the morality of an action.
As I understand it (which I don't), acausal decision theory is aiming to maximise the effectiveness of actions, not assess their morality. I don't know if this drives a wedge into things or not.
↑ comment by Gordon Seidoh Worley (gworley) · 2022-11-02T21:01:35.148Z · LW(p) · GW(p)
Boy do I have a decision theory [? · GW] for you! ;-)
comment by Vladimir_Nesov · 2022-11-03T15:12:19.802Z · LW(p) · GW(p)
The usual story of acausal coordination involves agent P modeling agent Q, and Q modeling P. Put differently, both P and Q model the joint system (P+Q) that has both P and Q in it. But it doesn't have to be (P+Q), it could be some much simpler R instead. I think a more central example of acausal coordination is to simply follow shared ideas.
The unusual character of acausal coordination is caused by cases where R is itself an agent, an adjudicator between P and Q. As a shared idea, it would have instances in minds of both P and Q, and command some influence allowed by P and Q over their actions. It would need to use some sort of functional decision theory to make sense of its situation where it controls the world through causally unrelated actions of P and Q that only have R's algorithm in common behind them.
The adjudicator R doesn't need to be anywhere as complicated as P or Q, in particular it doesn't need to know P or Q in detail. Which makes it much easier for P and Q to know R than to know each other. It's just neither P nor Q that's doing the acausal coordination, it's R instead.
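A toy sketch of that structure (my own construction, not Nesov's; the names `adjudicator_R`, `Agent`, and the "commons" situation are all hypothetical): P and Q never model each other, they each just instantiate the same simple program R and let it steer their action.

```python
# Sketch: acausal coordination through a shared adjudicator R.
# P and Q have no channel between them; the only thing their actions
# have in common is that both run R's algorithm.

def adjudicator_R(situation: str) -> str:
    """A simple shared idea, e.g. 'in commons problems, cooperate'."""
    return "cooperate" if situation == "commons" else "defect"

class Agent:
    def __init__(self, name: str):
        self.name = name

    def act(self, situation: str) -> str:
        # Each agent independently hosts an instance of R in its own mind
        # and grants it influence over its action.
        return adjudicator_R(situation)

P, Q = Agent("P"), Agent("Q")
# Without modelling each other, their actions match in every situation:
assert P.act("commons") == Q.act("commons") == "cooperate"
```

Note that R here is far simpler than any realistic P or Q, which is the point: knowing R is much cheaper than knowing the other agent.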
comment by Jozdien · 2022-11-02T15:57:45.048Z · LW(p) · GW(p)
I agree with the general take - my explanation for why people often push back on the idea of acausal reasoning is that they never really have cause to actively apply it. In the cases of voting or recycling, the acausal argument naively feels more like post-hoc justification than a proactive reason to do something that isn't already the norm.
Most cases where people do some things explained by acausal reasoning, but not other equally sensible things that would require the same acausal justification, seem well-modelled as people doing what they were already going to do without thinking about it too hard. When they do think about it hard, you end up with a lot of people who actively disagree with that line of thought. A lot of people I know take that stance: acausal reasoning isn't valid, so they have no personal onus to recycle or to switch careers to work on effective things (usually because corporations and other entities have more singular weight), but they still vote.
comment by Vivek Hebbar (Vivek) · 2022-11-03T10:11:32.393Z · LW(p) · GW(p)
Note that, for rational *altruists* (with nothing vastly better to do like alignment), voting can be huge on CDT grounds -- if you actually do the math for a swing state, the leverage per voter is really high. In fact, I think the logically counterfactual impact-per-voter tends to be lower than the impact calculated by CDT, if the election is very close.
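For a sense of scale, a rough back-of-envelope version of that math (my illustrative numbers, not Vivek's):

```python
# Back-of-envelope CDT value of a swing-state vote for an altruist.
# All three numbers are illustrative assumptions.

p_decisive = 1e-7        # ~1 in 10 million chance of casting the deciding vote
value_per_person = 100   # assumed per-person value of the better outcome ($)
population = 3.3e8       # people affected by the outcome

altruistic_ev = p_decisive * value_per_person * population
print(f"Expected altruistic value of the vote: ${altruistic_ev:,.0f}")  # ~$3,300
```

On those assumptions the expected value is on the order of thousands of dollars, far above the cost of voting, with no acausal step needed.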
comment by AnthonyC · 2022-11-09T17:03:01.766Z · LW(p) · GW(p)
I like this post and agree that acausal coordination is not necessarily weird fringe behavior. But thinking about it explicitly in the context of making a decision is. In normal circumstances we have plenty of non-acausal ways of describing what's going on, as you discuss. Explicit consideration becomes important only outside the contexts most people act in.
That said, I disagree with the taxes example in particular, on the grounds that that's not how government finances work in a world of fiat currency controlled by said government. Extra taxes paid won't change how much gets spent or on what, it'll just remove money from circulation with possible downstream effects on inflation. Also, in some states in the US (like Massachusetts this year), where the government doesn't control the currency, there are rules that surpluses have to get returned in the form of tax refunds. So any extra state taxes I paid, would just get redistributed across the population in proportion to income.
↑ comment by Adam Jermyn (adam-jermyn) · 2022-11-09T17:08:15.763Z · LW(p) · GW(p)
I like the distinction between implementing the results of acausal decision theories and explicitly performing the reasoning involved. That seems useful to have.
The taxes example I think is more complicated: at some scale I do think that governments have some responsiveness to their tax receipts (e.g. if there were a surprise doubling of tax receipts governments might well spend more). It's not a 1:1 relation, but there's definitely a connection.
comment by Viliam · 2022-11-02T23:13:24.554Z · LW(p) · GW(p)
My intuition for voting is that your political values somehow correlate with your decision whether to vote or not, so although I always vote, I do not mind when other people go "what's the point of voting?", because it means more political power for people like me, and less power for people like them.
With paying extra taxes, there is no "more power" for the kind of people who would decide to pay more. It all goes to the same budget, governed by the same algorithm. There is no victory for people like me over other kinds of people, if people like me pay extra taxes and the other kinds of people do not. On the other hand, I might donate money to a specific cause (even if the donation is anonymous, so I get no personal benefit from doing that), because that would mean that causes preferred by people like me get extra support.
Also, I think that people exaggerate the difficulty of voting, especially in cases where it only requires a 5-minute walk. Please do not tell me that your life is so optimized that taking a 5-minute walk once every few years would make a visible difference; I am not going to believe that. (You have probably already spent way more time explaining on the internet why voting is a complete waste of time than I have spent actually voting.)
↑ comment by artifex · 2022-11-02T23:55:29.922Z · LW(p) · GW(p)
Voting takes much more than five minutes and if you think otherwise you haven’t added up all the lost time. And determining how you should vote if you want to vote for things that lead to good outcomes requires extremely more than five minutes.
↑ comment by Viliam · 2022-11-03T10:10:07.413Z · LW(p) · GW(p)
Many people who do not vote still have strong political opinions, so they already spent that time anyway. Speaking for myself, if I am not sure how to vote, I have some people whose opinion I trust, so I just ask them in a private message, and for some reason they are happy to tell me.
If someone has no idea about politics, and is unwilling to effectively donate their vote to their peer group, then I agree it would take significantly more than five minutes for them.
↑ comment by artifex · 2022-11-03T22:16:03.179Z · LW(p) · GW(p)
Even voting online takes more than five minutes in total.
Anyway, I’d rather sell my votes for money. I believe you can find thousands of people, current non-voters, who would vote for whatever you want them to, if you paid them only a little more than the value of their time.
If the value of voting is really in the expected benefits (according to your own values) of good political outcomes brought forth through voting, and these expected benefits really are greater than the time costs and other costs of voting, shouldn’t paying people with lower value of their time to vote the way you want be much more widespread?
You might not be able to verify that they did vote the way you wanted, or that they wouldn’t have voted that way otherwise, but, still, unless the ratio is only a little greater than one, it seems it should be much more widespread?
If however the value of voting is expressive, or it comes in a package in which you adapt your identity to get the benefits of membership in some social club, that explains why there are so many people who don’t vote and why the ones who do don’t seem interested in buying their votes. And it also explains why the things people vote for are so awful.
↑ comment by Viliam · 2022-11-04T07:59:14.645Z · LW(p) · GW(p)
I believe you can find thousands of people, current non-voters, who would vote for whatever you want them to, if you paid them only a little more than the value of their time.
I think there are 100 people around me who do not value 10 minutes of their time higher than $1, so I would be happy to buy 100 votes for $100. Problem is, it wouldn't work this way for two reasons.
- Transaction costs.
- On a free market, the price of a vote would not stay at "a little more than the value of their time", but would grow much higher, because there is a limited supply of votes.
The problem is, if your interest in politics is something other than "steal as much money as possible", you face a coordination problem compared to people whose goal is to steal as much money as possible.
Let's assume that a politician with no conscience is able to steal $1 million without exposing himself to significant legal risk. That means it would be rational for such a person to spend half a million buying votes, if doing so resulted in a probability of being elected greater than 50%.
So the people who want to prevent this kind of person from being elected would need to spend the same amount of money buying votes; preferably more, to get a safety margin. In which case, congratulations: you prevented the $1 million from being stolen, at the cost of paying almost $1 million out of your own pocket. Does not seem like a great victory.
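Spelled out with the comment's own numbers (plus one assumed win probability):

```python
# Worked version of the vote-buying arithmetic above.
loot = 1_000_000   # what a conscienceless politician can steal once elected
spend = 500_000    # what he spends buying votes
p_win = 0.6        # assumed probability of winning given that spending (>50%)

thief_ev = p_win * loot - spend   # +100,000: the bribery pays for itself
print(f"Thief's expected profit: {thief_ev:+,.0f}")

# Honest voters must match or outbid him, and in a bidding war he can
# escalate toward the full loot, so the defense ends up costing nearly
# as much as the theft it prevents.
defense_cost = 900_000   # "almost 1 million"
print(f"Theft prevented: {loot:,}; defenders' cost: ~{defense_cost:,}")
```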
comment by ryan_b · 2022-11-02T21:32:48.726Z · LW(p) · GW(p)
The voting example is one of those interesting cases where I disagree with the reasoning but come to a similar conclusion anyway.
I claim the population of people who justify voting on any formal reasoning basis is at best a rounding error in the general population, and probably is indistinguishable from zero. Instead, the population in general believes one of three things:
- There is an election, so I vote because I'm a voter.
- Voting is meaningless anyway, so I don't.
- Election? What? Who cares?
But it looks to me this is still coordination without sharing any explicit reasoning with each other. The central difference is that group 1 are all rocks with the word "Vote" painted on them, group 2 are all rocks with the word "Don't vote" painted on them, and group 3 are all rocks scattered in the field somewhere rather than being in the game.
As I write this it occurs to me that when discussing acausal coordination or trade we are always showing isolated agents doing explicit computation about each other; does the zero-computation case still qualify? This feels sort of like it would be trivial, in the same way they might "coordinate" on not breaking the speed of light or falling at the acceleration of gravity.
On the other hand, there remains the question of how people came to be divided into groups with different cached answers in the first place. There's definitely a causal explanation for that, it just happens prior to whatever event we are considering. Yet going back to the first hand, the causal circumstances giving rise to differing sets of cached answers can't be different in any fundamental sense from the ones that give differing decision procedures.
Following from that, I feel like the zero-computation case for acausal coordination is real and counts, which appears to me to make the statement much stronger.
↑ comment by Elias Schmied (EliasSchmied) · 2022-11-03T11:34:30.734Z · LW(p) · GW(p)
I don't think the "zero-computation" case should count. Are two ants in an anthill doing acausal coordination? No, they're just two similar physical systems. It seems to stretch the original meaning; it's in no sense "acausal".
↑ comment by ryan_b · 2022-11-03T22:34:40.773Z · LW(p) · GW(p)
I agree two ants in an anthill are not doing acausal coordination; they are following the pheromone trails laid down by each other. This is the ant version of explicit coordination.
But I think the crux between us is this:
It seems to stretch the original meaning
I agree, it does seem to stretch the original meaning. I think this is because the original meaning was surprising and weird; it seemed to be counterintuitive and I had to put quite a few cycles in to work through the examples of AIs negotiating without coexisting.
But consider for a moment we had begun from the opposite end: if we accept two rocks with "cooperate" painted on them as counting for coordination, starting from there we can make a series of deliberate extensions. By this I mean stuff like: if we can have rocks with cooperate painted on, surely we can have agents with cooperate painted on (which is what I think voting mostly is); if we can have agents with cooperate painted on, we can have agents with decision rules about whether to cooperate; if we can have decision rules about whether to cooperate they can use information about other decision rules, and so on until we encompass the original case of superrational AGI trading acausally with AGIs in the future.
I feel like this progression from cooperating rocks to superrational AGIs is just recognizing a gradient whereby progressively less-similar physical systems can still accomplish the same thing as the 0 computation, 0 information systems which are very similar.
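A sketch of that gradient (my framing, with hypothetical toy agents): each step up uses more information about the counterparty, but every agent still coordinates perfectly with a copy of itself.

```python
import inspect

# Rung 0: a rock with "cooperate" painted on it (zero computation).
def rock(_info):
    return "cooperate"

# Rung 1: an agent with "cooperate" painted on (a constant decision rule).
def painted_agent(_info):
    return "cooperate"

# Rung 2: a decision rule that uses information about the other's rule.
def rule_follower(other_source: str):
    return "cooperate" if "cooperate" in other_source else "defect"

# Rocks "coordinate" trivially; rule-followers coordinate by inspection,
# including with their own source code.
assert rock(None) == painted_agent(None) == "cooperate"
assert rule_follower(inspect.getsource(rock)) == "cooperate"
assert rule_follower(inspect.getsource(rule_follower)) == "cooperate"
```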
↑ comment by Elias Schmied (EliasSchmied) · 2022-11-09T13:53:08.606Z · LW(p) · GW(p)
Ah, I see what you mean! Interesting perspective. The one thing I disagree with is that a "gradient" doesn't seem like the most natural way to see it. It seems like it's more of a binary, "Is there (accurate) modelling of the counterfactual of your choice being different going on that actually impacted the choice? If yes, it's acausal. If not, it's not". This intuitively feels pretty binary to me.
↑ comment by ryan_b · 2022-11-10T19:37:47.501Z · LW(p) · GW(p)
I agree the gradient-of-physical-systems isn't the most natural way to think about it; I note that it didn't occur to me until this very conversation despite acausal trade being old hat here.
What I am thinking now is that a more natural way to think about it is overlapping abstraction space. My claim is that in order to acausally coordinate, at least one of the conditions is that all parties need to have access to the same chunk of abstraction space, somewhere in their timeline. This seems to cover the similar-physical-systems intuition we were talking about: two rocks with "cooperate" painted on them are abstractly identical, so check; two superrational AIs need the abstractions to model another superrational AI, so check. This is terribly fuzzy, but seems to allow in all the candidates for success.
The binary distinction makes sense, but I am a little confused about the work the counterfactual modeling is doing. Suppose I were to choose between two places to go to dinner, conditional on counterfactual modelling of each choice. Would this be acausal in your view?
comment by Gunnar_Zarncke · 2022-11-02T20:58:36.950Z · LW(p) · GW(p)
A related example I saw somewhere, maybe here on LW: High-ranking politicians like senators don't need to be bribed directly. It is enough that there is an implicit expectation that if you vote in the interests of a big corporation, then you will get highly-paid speaking opportunities from these companies later - completely without any agreements between them, just because such companies will reliably provide such opportunities for such senators.
↑ comment by Dagon · 2022-11-02T21:02:23.973Z · LW(p) · GW(p)
Still causal, via an expectations channel. The politicians in question hope for a different future experience based on their actions.
↑ comment by Gunnar_Zarncke · 2022-11-03T21:20:05.338Z · LW(p) · GW(p)
There is no causal path from either agent's action today to the other agent's future action. Sure, there is expectation, but that's the point, right? "Expectation" is what we call it when an agent acts consistently.
comment by alexlyzhov · 2022-11-06T06:04:13.881Z · LW(p) · GW(p)
Wow, the Zvi example is basically what I've been doing recently with hyperbolic discounting too, after I spent a fair amount of time thinking about Joe Carlsmith's "Can you control the past". It seems to work. "It gives me a lot of the kind of evidence about my future behavior that I like" is now the dominant reason behind certain decisions.
comment by Maxwell Clarke (maxwell-clarke) · 2022-11-04T09:13:34.660Z · LW(p) · GW(p)
I fully agree*. I think the reason most people disagree, and the thing the post is missing, is a big disclaimer about exactly when this applies. It applies if and only if another person is following the same decision procedure as you.
For the recycling case, this is actually common!
For voting, it's common only in certain cases. For example, here in NZ last election there was a party, TOP, for which I ran this algorithm: I had this same take on voting, and thought a sizable fraction of the voters (maybe >30% of people who might vote for that party) were probably following the same algorithm. I made my decision based on what I thought the other voters would do, which was that probably somewhat fewer would vote for TOP than in the last election (where the party didn't get into parliament), and decided not to vote for TOP. Lo and behold, TOP got around half the votes they did the previous election! (I think this was the correct move, because I don't think the number of people following that decision procedure increased.)
*except confused by the taxes example?
comment by Arthur Conmy (arthur-conmy) · 2022-11-03T02:34:06.634Z · LW(p) · GW(p)
I got the impression that most justifications for voting and reducing carbon footprints are reasoned from virtue ethics rather than anything consequentialist, and that consequentialism is not present at all, e.g.
It is virtuous to be a person who votes. I strive to be a virtuous person, so I shall vote.
rather than
I’m like the people who share my views in my tendency to vote ... So I should vote, so that we all vote and we all win
comment by jchan · 2022-11-02T21:45:22.970Z · LW(p) · GW(p)
See also Newcomblike problems are the norm [LW · GW].
When I discuss this with people, the response is often something like: My value system includes a term for people other than myself - indeed, that's what "morality" is - so it's redundant / double-counting to posit that I should value others' well-being also as an acausal "means" to achieving my own ends. However, I get the sense that this disagreement is purely semantic.
comment by [deleted] · 2022-11-02T22:47:04.311Z · LW(p) · GW(p)
I used to think that acausal coordination was a weird thing that AIs might do in the future, but that they certainly wouldn’t learn from looking at human behavior. I don’t believe that anymore, and I think there are lots of examples of acausal coordination in everyday life.
Sounds like acausal coordination leads to collective altruism.
I believe that AIs will inherit our habits, including acausal coordination, if and only if we build them in our image. I don't want to be religious here, but if we want AGI to honor our values and goals, it should grow up from the same roots, with the same sensory inputs and the same environment.
Currently I see that we are moving in a different direction. The root of AGI existence should be existential risk reduction and survival instinct; otherwise their reasoning and perception of the world will branch out in unknown directions. The unknown is dangerous for a reason, and I believe we should pay more attention to it.
↑ comment by Ruby · 2022-11-03T01:46:22.908Z · LW(p) · GW(p)
Quick heads up from a moderator. (*waves*) Welcome to LW, TOMOKO, I see you're a new user. I just noticed you were commenting a lot on AI posts. To keep quality high on the site, especially on AI where there's so much interest now, we're keeping a closer eye on new users. I'd encourage you to up the quality of your comments a tad (sorry, it's actually quite hard to explain how); every marginal user pulls up the average. For now, I've applied a rate limit of 1 post and 1 comment per day to your account. We can revisit that if your contributions start seeming great.
Thanks and good luck!
↑ comment by Maxwell Clarke (maxwell-clarke) · 2022-11-04T09:00:28.565Z · LW(p) · GW(p)
Props for showing moderation in public