Does butterfly affect?

post by pchvykov · 2021-05-14T04:20:58.374Z · LW · GW · 21 comments

Can a butterfly really cause a hurricane?

Having worked in complexity science, I realize that I have a problem with the conventional understanding of the butterfly effect. Sure, in idealized deterministic chaotic systems, the butterfly effect does a great job illustrating the quasi-stochastic nature of chaos and its sensitivity to small perturbations. But in the real complex world, far from a mathematically idealized deterministic model, there is no reasonable sense in which we can say that a butterfly’s wings can cause a hurricane. Even if this isn’t surprising (intuitively, people won’t attribute a hurricane to a butterfly), I would argue that causally attributing World War I to the assassination of Archduke Ferdinand commits the same fallacy.

Consider a pot of supercooled water. Any minute impurity in it can seed the freezing transition of the entire pot. But can we really say that this impurity “caused” the transition? This seems to depend on the counterfactual world: what would have happened otherwise? If the pot is sitting in an isolated clean room where there would not typically be any dust or other perturbations, then this one particular impurity may be to blame for the phase transition. But out in the messy “real world,” abundant with impurities of all kinds, supercooled water would not survive long anyhow — if not this impurity, then another would have seeded the transition a moment later.
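
To make this concrete, here is a minimal toy sketch (treating impurity arrivals as a Poisson process with a made-up rate; an illustration, not a physical nucleation model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical impurity arrivals as a Poisson process; the first arrival
# freezes the whole pot. "rate" is a made-up parameter, not physics.
rate = 5.0  # impurities per second in the messy "real world"
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=100))

t_actual = arrivals[0]          # the impurity that "caused" the transition
t_counterfactual = arrivals[1]  # remove it: the next impurity seeds freezing

print(f"freezes at t={t_actual:.3f}s; without that impurity, at t={t_counterfactual:.3f}s")
# With a high arrival rate, removing any single impurity barely delays the
# transition; as rate -> 0 (the clean room), that one impurity is decisive.
```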

Now, this counterfactual definition of causality, formalized by Judea Pearl using tools from information theory, similarly poses problems for the classic interpretation of the butterfly effect. Would the hurricane have happened if not for the butterfly? To answer this question carefully, it isn’t enough to look at what the world would look like if we removed the butterfly but held everything else fixed. Instead, we must consider the full statistical ensemble of possible worlds, and quantify to what extent the butterfly shifts that ensemble. To model this in a mathematically idealized setting, we could take some deterministic chaotic system, add some small stochastic noise to it at all times t to generate the statistical ensemble of possibilities, and then consider the effect of some slight “butterfly” perturbation at a given time t*. In the typical scenario, the impact of this single perturbation will not rise above the impact of the persistent background noise inherent to any complex real-world system. As such, the impact of a butterfly’s wings will typically not rise above that of the persistent stochastic inputs affecting the Earth (such as quantum noise or perturbations from outer space).
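
Here is a rough sketch of that idealized setup, using the Lorenz system as the chaotic toy model (the noise and kick magnitudes are arbitrary illustrative choices):

```python
import numpy as np

def lorenz_step(s, dt=0.005):
    # One Euler step of the Lorenz system, a standard chaotic toy model.
    x, y, z = s
    return s + dt * np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])

def final_states(n_runs, n_steps=3000, noise=1e-3, kick=0.0, kick_at=100, seed=0):
    # Evolve an ensemble with persistent background noise at every step,
    # plus an optional one-time "butterfly" kick at step kick_at.
    rng = np.random.default_rng(seed)
    out = np.empty((n_runs, 3))
    for i in range(n_runs):
        s = np.array([1.0, 1.0, 1.0])
        for t in range(n_steps):
            s = lorenz_step(s) + noise * rng.standard_normal(3)
            if t == kick_at:
                s[0] += kick
        out[i] = s
    return out

base = final_states(200)             # no butterfly
pert = final_states(200, kick=1e-3)  # same seed => same noise realizations

# Per trajectory, chaos amplifies the kick into a macroscopic difference...
print("median per-run divergence:", np.median(np.linalg.norm(base - pert, axis=1)))
# ...but the *ensemble* barely shifts: the kick drowns in the background noise.
print("shift in ensemble mean:", np.abs(base.mean(0) - pert.mean(0)))
print("ensemble std:", base.std(0))
```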

This reasoning also undermines some of the reductionist, mechanistic perspective we often have on the world (especially in the proverbial “West”). Roughly speaking, it may often be quite difficult to cleanly attribute a real-world outcome to one particular cause. For example, regretful thinking like “if only I had…” often relies on holding the world fixed and only changing that one past decision — which is just as problematic as it is for butterflies and hurricanes. And so, perhaps indeed, “the devil is in the details” — it isn’t the one-time events that truly determine our lives, but rather how we guide their continual unfolding.

Do you buy the logic - and the conclusion? Or is this a bit of a straw-man argument, and people don't really think this way about causes? 

[cross-posted from my blog https://medium.com/bs3]

21 comments

Comments sorted by top scores.

comment by lsusr · 2021-05-14T07:50:08.312Z · LW(p) · GW(p)

Let us ignore quantum indeterminacy.

The butterfly effect comes from chaos theory. The word "cause" as used in chaos theory means: whether a change to a system's initial state will change its final state. Human intentional control is irrelevant.
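
A minimal sketch of this definition, with the Lorenz system standing in for "a system" (the equations, step counts, and the 1e-9 perturbation are illustrative assumptions):

```python
import numpy as np

def lorenz_final(x0, n_steps=10_000, dt=0.005):
    # Deterministically integrate the Lorenz system from initial state x0.
    s = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x, y, z = s
        s = s + dt * np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])
    return s

a = lorenz_final([1.0, 1.0, 1.0])
b = lorenz_final([1.0, 1.0, 1.0 + 1e-9])  # a "butterfly"-sized change

# In chaos theory's sense, the perturbation "caused" a different final state:
print(np.linalg.norm(a - b))  # O(1), despite the 1e-9 initial difference
```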

In this comment I use chaos theory's definition of "cause".

But can we really say that this impurity “caused” the transition?

Yes.

This seems to depend on the counterfactual world: what would have happened otherwise?

The water wouldn't have transitioned at that point in time. (Assuming there is no other perturbation.)

Would the hurricane have happened if not for the butterfly?

Probably not. (Rounds to zero.)

Do you buy the logic - and the conclusion?

No. You are conflating two different uses of the word "cause". One is a vague philosophical definition involving human intention. The other is a technical definition derived from mathematics.

You never defined the word "cause". I think that if you had rigorously defined the word "cause" at the beginning of your post, your argument would self-destruct. Either it would no longer apply to chaos theory or it would no longer apply to your scenarios involving human intentionality.

[Edit: This is overly harsh. See continued discussion.]

Or is this a bit of a straw-man argument, and people don't really think this way about causes?

A straw-man argument is performed with the wrong intentions. I don't think your argument is straw-man. Your intentions are fine. Your words are clear, coherent and falsifiable. Your argument is merely wrong.

[Edit: This is overly harsh. See continued discussion.]

Replies from: pchvykov
comment by pchvykov · 2021-05-15T00:45:44.752Z · LW(p) · GW(p)

Cool - thanks for your feedback! I agree that I could be more rigorous with my terminology. Nonetheless, I do think I have a rigorous argument underneath all this - even if it didn't come across. Let me try to clarify:

I did not mean to refer to human intentionality anywhere here. I was specifically trying to argue that the "chaos-theory definition of causality" you give, while great in idealized deterministic systems, is inadequate in the complex messy "real world." Instead, the rigorous definition I prefer is the counter-factual information theoretic one, developed by Judea Pearl, which I tried to outline here in layman's terms. This definition is entirely ill-posed in a deterministic chaotic system, but will work as soon as we have any stochasticity (from whatever source).

Does this address your point at all, or am I off-base?

Replies from: lsusr
comment by lsusr · 2021-05-15T01:54:11.895Z · LW(p) · GW(p)

You do address my point. This comment [LW(p) · GW(p)] helped too. I think I understand better what you're getting at now. I think you are trying to explain that tracing causation back with the precision of chaos theory is impossible in complex real-world situations of limited information, and that an alternative definition of causation is necessary to handle such contexts. Such contexts constitute the majority of practical experience.

I no longer believe your argument would self-destruct if you included a rigorous definition of causality. I understand now that your argument does not depend on human intentionality. Neither is it wrong.

Replies from: pchvykov
comment by pchvykov · 2021-05-15T04:53:05.996Z · LW(p) · GW(p)

wow, some Bayesian updating there - impressive! :)

comment by Richard_Kennaway · 2021-05-14T07:47:06.181Z · LW(p) · GW(p)

I commented [LW(p) · GW(p)] about this on a related post.

Replies from: pchvykov
comment by pchvykov · 2021-05-15T00:50:30.358Z · LW(p) · GW(p)

ah yes, great minds think alike! =)

What I really like about J. Pearl's counter-factual causality framework is that it gives a way to make these arguments rigorously, and even to precisely quantify "how much did the butterfly cause the tornado" - in bits!
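
One way this could be operationalized (a sketch under assumptions: a fair binary do-intervention, outcomes discretized into bins) is as the mutual information between the intervention and the outcome:

```python
import numpy as np

def causal_bits(outcomes_off, outcomes_on, bins=20):
    # Mutual information (in bits) between a fair binary intervention
    # (butterfly absent/present) and a binned scalar outcome. Zero bits
    # means the intervention does not shift the outcome ensemble at all.
    lo = min(outcomes_off.min(), outcomes_on.min())
    hi = max(outcomes_off.max(), outcomes_on.max())
    p0, _ = np.histogram(outcomes_off, bins=bins, range=(lo, hi))
    p1, _ = np.histogram(outcomes_on, bins=bins, range=(lo, hi))
    p0, p1 = p0 / p0.sum(), p1 / p1.sum()
    m = 0.5 * (p0 + p1)  # marginal outcome distribution over both settings

    def kl_bits(p, q):
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

    return 0.5 * kl_bits(p0, m) + 0.5 * kl_bits(p1, m)

# e.g. feed in final-state samples from simulations run with and without
# the perturbation; a result near 0 bits = the butterfly shifted nothing.
```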

comment by shminux · 2021-05-14T07:37:55.750Z · LW(p) · GW(p)

Would the hurricane have happened if not for the butterfly?

You are talking about counterfactuals, and those are a difficult problem to solve when there is only one deterministic or probabilistic world and nothing else. A better question is "Does a model where 'a hurricane would not have happened as it had, if not for the butterfly' make useful and accurate predictions about the parts of the world we have not yet observed?" If so, then it's useful to talk about a butterfly causing a hurricane; if not, then it's a bad model. This question is answerable, and as someone with expertise in "complexity science," whatever it might be, you are probably well qualified to answer it. It seems that your answer is "the impact of a butterfly’s wings will typically not rise above that of the persistent stochastic inputs affecting the Earth," meaning that the model where a butterfly caused the hurricane is not a useful one. In that clearly defined sense, you have answered the question you posed.

Replies from: pchvykov
comment by pchvykov · 2021-05-15T00:58:42.598Z · LW(p) · GW(p)

Yes!! Very cool - going even one meta level up. I agree that the usefulness of a proposed model is certainly the ultimate judge of whether it's "good" or not. To make this even more concrete, we could try to construct a game and compare the mean performance of two agents holding the two models we want to compare... I wonder if anyone's tried that... As far as I know, the counterfactual approach is "state of the art" for understanding causality these days - and it is a bit lacking for the reason you say. This could be a cool paper to write!

Replies from: shminux
comment by shminux · 2021-05-15T03:07:25.064Z · LW(p) · GW(p)

The counterfactual approach is indeed very popular, despite its obvious limitations. You can see a number of posts from Chris Leung here on the topic, for example. As for comparing performance of different agents, I wrote a post [LW · GW] about it some years ago, not sure if that is what you meant, or if it even makes sense to you. 

Replies from: pchvykov
comment by pchvykov · 2021-05-15T05:21:00.933Z · LW(p) · GW(p)

hmm, so what I was thinking is whether we could give an improved definition of causality based on something like "A causes B iff the model [A causes B] performs better than other models in some (all?) games / environments" - which may have a funny dependence on the game or environment we choose.
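
A toy sketch of what such a comparison might look like (the environment, the 10% noise level, and the scoring rule are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_environment(n, a_causes_b):
    # A is a fair coin; B follows A (with 10% noise) only if a_causes_b.
    a = rng.integers(0, 2, size=n)
    flip = rng.random(n) < 0.1
    b = (a ^ flip) if a_causes_b else rng.integers(0, 2, size=n)
    return a, np.asarray(b, dtype=int)

def log_score(p_b1_given_a, a, b):
    # Mean log2-likelihood of the observed B under a model's predictions.
    p = np.where(a == 1, p_b1_given_a(1), p_b1_given_a(0))
    return np.mean(np.where(b == 1, np.log2(p), np.log2(1 - p)))

causal = lambda a: 0.9 if a == 1 else 0.1  # model: "A causes B"
null = lambda a: 0.5                        # model: "A is irrelevant to B"

a, b = toy_environment(10_000, a_causes_b=True)
print("causal model score:", log_score(causal, a, b))  # ~ -0.47 bits
print("null model score:  ", log_score(null, a, b))    # = -1.0 bits
# Under the proposed definition, "A causes B" holds in this environment
# exactly because the causal model out-predicts the null one.
```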

Though as hard as the counterfactual definition is to work with in practice, this may be even harder... 

Your post may be related to this, though not the same, I think. I guess what I'm suggesting isn't directly about decision theory.

Replies from: shminux
comment by shminux · 2021-05-16T01:01:12.456Z · LW(p) · GW(p)

A causes B iff the model [A causes B] performs better than other models in some (all?) games / environments

There are two parts that go into this: the rules of the game, and its initial state. You can fix one or both, or vary one or both. And by "vary" I mean "come up with a distribution, draw an instance at random to use for a particular run," then see which runs cause what. For example, in physics you could start with general relativity and vary the gravitational constant, the cosmological constant, the initial expansion rate, the homogeneity levels, etc. Your conclusion might be something like "given this range of parameters, the inhomogeneities cause the galaxies to form around them; given another range of parameters, the universe might collapse or blow up without any galaxies forming." So, yes, as you said,

"A causes B" ... has a funny dependence on the game or environment we choose

In the Game of Life, given a certain setup, a glider can hit a stable block, causing its destruction. This setup could be unique or stable to a range of perturbations or even large changes, and it still would make sense to use the cause/effect concept.
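
A runnable sketch of that setup (the grid size, offsets, and hence the exact collision outcome are illustrative; different alignments leave different debris):

```python
import numpy as np

def life_step(g):
    # One Game of Life update on a toroidal grid.
    n = sum(np.roll(np.roll(g, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((n == 3) | ((g == 1) & (n == 2))).astype(np.uint8)

def run(with_glider, steps=60):
    g = np.zeros((20, 20), dtype=np.uint8)
    g[10:12, 10:12] = 1  # a stable 2x2 block (a still life)
    if with_glider:
        # A glider headed southeast, straight toward the block.
        for (r, c) in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
            g[r, c] = 1
    for _ in range(steps):
        g = life_step(g)
    return g

hit, no_hit = run(True), run(False)
print("block intact without glider:", bool(no_hit[10:12, 10:12].all()))
print("block intact with glider:   ", bool(hit[10:12, 10:12].all()))
# The counterfactual is well defined here: the only difference between the
# two runs is the glider, so whatever changed, the glider "caused" it.
```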

The counterfactuals in all those cases would be in the way we set up a particular instance of the universe: the laws and the initial conditions. They are counterfactual because in our world we only have the one run, and all others are imagined, not "real". However, if one can set up a model of our world where a certain (undetectable) variation leads to a stable outcome, then those would be counterfactuals. The condition that the variations are undetectable given the available resolution is essential; otherwise it would not look like the same world to us. I had a post about that [LW · GW], too.

An example of this "low-res" causing an apparent counterfactual is the classic [LW · GW]:

If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have

If you can set up a simulation with varying initial conditions that includes, as Eliezer suggests, a conspiracy to kill JFK, but varies in whether Oswald was a good/available tool for it, then, presumably, in many of those runs JFK would have been shot within a time frame not too different from our particular realization. In some others JFK would still have been killed, but poisoned or stabbed rather than shot, and so Lee Harvey Oswald would not be the butterfly you are describing. In the models where there is no conspiracy, Oswald would have been the butterfly, again as Eliezer describes. There are many other possible butterflies and non-butterflies in this setup, of course, from gusts of wind at the wrong time to someone discovering the conspiracy early.

Note that some of those imagined worlds are probably impossible physically, as in, when extrapolated into the past they would have caused macroscopic effects that are incompatible with observations. For example, Oswald missing the mark with his shot may have resulted from the rifle being of poor quality, which would have been incompatible with the known quality-control procedures in place when it was made.

Hope some of this makes sense.

comment by Donald Hobson (donald-hobson) · 2021-05-16T16:18:59.600Z · LW(p) · GW(p)

Instead, we must consider the full statistical ensemble of possible worlds, and quantify to what extent the butterfly shifts that ensemble.

add some small stochastic noise to it at all times t to generate the statistical ensemble of possibilities

In the typical scenario, the impact of this single perturbation will not rise above the impact of the persistent background noise inherent to any complex real-world system.

I think these quotes illustrate the mind projection fallacy. The "noise" is not an objective thing sitting out there in the real world; it is a feature of your own uncertainty.

Suppose you have a computational model of the weather. You make the simplifying assumption that water evaporation is a function only of air temperature and humidity. Whereas in reality, the evaporation depends on puddle formation and plant growth and many other factors. Out in the real world, the weather follows its own rules perfectly. Those rules are the equations of the whole universe. "Noise" is just what you call a hopefully-small effect you don't have the knowledge, compute, or inclination to calculate.

If you have a really shoddy model of the weather, it won't be able to compute much. If you add a butterfly's wing flaps to a current weather model, the knowledge of that small effect will be lost amid the mass of other small effects meteorologists haven't calculated. Adding or removing a butterfly's wingflap doesn't meaningfully change our predictions given current predictive ability. However, to a sufficiently advanced future weather predictor, that wingflap could meaningfully change the chance of a tornado. The predictor would need to be tracking every other wingflap globally, and much else besides.

We are modelling as probabilistic those processes that are actually deterministic but hard to calculate.

comment by johnswentworth · 2021-05-14T05:37:57.231Z · LW(p) · GW(p)

I'm strikethrough-ing this comment for being less kind than the author/post deserves. But it does make true and useful points, and I don't have the energy to rewrite it to be kinder right now, so I'm not deleting it outright.

The supercooled water example isn't actually an example of chaos. It's an example where the system is in a metastable state, and any perturbation causes it to switch to a more-stable state. Stable states are exactly what chaos isn't.

A better intuition for something chaos-like: imagine that we add together a whole bunch of numbers, then check whether the result is odd or even. Changing any single number from odd to even, or vice-versa, causes the end result to flip. Chaos is like that: one small perturbation can cause a large-scale change (like changing the path of a hurricane); there are a wide variety of possible small perturbations, any one of which could cause the large-scale outcome to change back and forth between possible outcomes.
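
A tiny sketch of that analogy:

```python
import random

nums = [random.randrange(100) for _ in range(1000)]
before = sum(nums) % 2          # the "large-scale outcome": odd or even
i = random.randrange(len(nums))
nums[i] += 1                    # perturb any single addend
after = sum(nums) % 2
assert after != before          # the outcome always flips, whichever i we pick
```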

there is no reasonable sense in which we can say that a butterfly’s wings can cause a hurricane... Now, this counterfactual definition of causality, formalized by Judea Pearl using tools from information theory, similarly poses problems for the classic interpretation of the butterfly effect... To answer this question carefully, it isn’t enough to look at what the world would look like if we removed the butterfly but held everything else fixed.

Um... no. Removing the butterfly and holding everything else (specifically all other initial conditions/"random" external inputs) fixed is exactly what Pearl's counterfactual framework says to do here. And that Pearl-style counterfactual does not give any troubles whatsoever interpreting chaos. A small perturbation can indeed cause a macroscopic change, in the exact sense of "cause" formalized by Pearl: the macroscopic change would not have happened without the small perturbation, holding everything else fixed.
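
A sketch of that Pearl-style counterfactual: fix the noise realization (same seed), change only the butterfly kick, and compare (the Lorenz system and all magnitudes here are illustrative assumptions):

```python
import numpy as np

def noisy_lorenz_final(kick, seed=0, n_steps=4000, dt=0.005, noise=1e-3):
    # Same seed => identical "external inputs" in both worlds; the ONLY
    # difference between the two runs is the butterfly kick at the start.
    rng = np.random.default_rng(seed)
    s = np.array([1.0, 1.0, 1.0])
    s[0] += kick
    for _ in range(n_steps):
        x, y, z = s
        s = s + dt * np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])
        s += noise * rng.standard_normal(3)
    return s

diff = np.linalg.norm(noisy_lorenz_final(0.0) - noisy_lorenz_final(1e-6))
print(diff)  # macroscopic: holding all else fixed, the tiny kick changed the outcome
```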

There is a perfectly reasonable sense in which small perturbations cause large changes in realistic chaotic systems, and Pearl's counterfactual framework is exactly the "reasonable sense" in question. If this is "unreasonable" in some sense, then this post has not actually made an argument for that or said in what sense it is unreasonable.

(I do not know if weather systems are chaotic enough that a small perturbation could cause a hurricane to not happen at all, but I'm pretty sure they're chaotic enough that a small perturbation could cause a hurricane's path to change significantly, e.g. send it to Florida rather than Mexico or vice-versa.)

(Side-note: in the supercooled water example, the counterfactual analysis would presumably say that any particular impurity did not cause the transition, precisely because there were likely other impurities which would have caused the transition anyway.)

I do think there's probably a way to turn the thing-you're-trying-to-say into a viable argument, but it isn't about chaos. In particular:

it may often be quite difficult to cleanly attribute a real-world outcome to one particular cause.

This is absolutely true. The number-adding analogy shows why: if changing any number would change the outcome, then there isn't really a sense in which one number caused the outcome more than any other. For each number, there is a well-defined counterfactual in which that number caused the outcome.

Counterfactual analysis - i.e. "holding the world fixed and only changing that one past decision"  - is not at all problematic, for butterflies or hurricanes or anything else. The mistake which I think you're trying to point to is in arbitrarily picking one particular cause to focus on, when any other cause is just as relevant.

Replies from: pchvykov
comment by pchvykov · 2021-05-15T01:14:18.125Z · LW(p) · GW(p)

I'm not sure why this was crossed out - seems quite civil to me... And I appreciate your thoughts on this!

I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl's counter-factual analysis, you're supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it's not about "holding everything else fixed," but rather about "what are all the possible other things that could have happened."

Replies from: johnswentworth
comment by johnswentworth · 2021-05-15T03:42:46.444Z · LW(p) · GW(p)

I believe that would be an interventional analysis, in Pearl's terms, not a counterfactual analysis.

I'm not sure why this was crossed out - seems quite civil to me...

I noticed this was only your fourth LW post, and you have the sort of knowledge and mindset which seems likely to yield very interesting posts in the future, so I didn't want to leave a comment which might discourage writing more posts. I'm glad it didn't come across too harsh. :)

Replies from: pchvykov
comment by pchvykov · 2021-05-15T05:29:55.930Z · LW(p) · GW(p)

cool - and I appreciate that you think my posts are promising! I'm never sure if my posts have any meaningful 'delta' - seems like everything's been said before. 

But this community is really fun to post for, with meaningful engagement and discussion =)

comment by MichaelLowe · 2021-05-15T15:00:57.936Z · LW(p) · GW(p)

Thank you for this interesting post. Could you clarify your assertion that the real world is not an idealized deterministic system? Of course we cannot model it as such, but ignoring quantum effects, the world is deterministic. In that sense it seems to me that we might never be able to confidently conclude that the butterfly caused the hurricane, but it could still be true. (And yes, in that Buddhist fable, my position has always been that trees do fall down, even if nobody sees it.)

comment by TAG · 2021-05-15T13:34:42.834Z · LW(p) · GW(p)

Even if this isn’t surprising (intuitively, people won’t attribute a hurricane to a butterfly), I would argue that causally attributing World War I to the assassination of Archduke Ferdinand commits the same fallacy.

If "cause" means that given A, B must necessarily happen , irrespective if other factors , then the butterfly wing and the assassination aren't causes. But there are other was of defining "cause" where they are!

But out in the messy “real world,” abundant with impurities of all kinds, supercooled water would not survive long anyhow — if not this impurity, then another would have seeded the transition a moment later

If "cause" means that given A, B must necessarily happen , and nothing but A could bring about B , then the impurity wasn't the cause. But there are other was of defining "cause" where it was!

comment by simon · 2021-05-15T02:03:52.818Z · LW(p) · GW(p)

Sure, the butterfly is really minor compared to everything else going on, and so only "causes" the hurricane if you unnaturally consider the butterfly as a variable while many more important factors are held fixed.

But, I don't believe the assassination of Franz Ferdinand is in the same category. While there's certainly a danger that hindsight could make certain events look more pivotal than they really were, the very fact that we have a natural-seeming chain of apparent causation from the assassination to the war is evidence against it being a "butterfly effect".

Replies from: pchvykov
comment by pchvykov · 2021-05-15T05:46:25.758Z · LW(p) · GW(p)

Yeah, I'm quite curious to understand this point too - certainly not sure how far this reasoning can be applied (and whether Ferdinand is too much of a stretch). I was thinking of this assassination as the "perturbation in a super-cooled liquid" - where it's really the overall geopolitical tension that was the dominant cause, and anything could have set off the global phase transition. Though this gets back to the limitations of counter-factual causality in the real world...

Replies from: TAG
comment by TAG · 2021-05-15T13:46:12.594Z · LW(p) · GW(p)

It's not determined by reality; it's determined by your interests. The geopolitical tension and the assassination are both reasonable answers, depending on the exact question.