Causality does not imply correlation
post by Richard_Kennaway · 2009-07-08T00:52:28.329Z · LW · GW · Legacy · 58 comments
It is a commonplace that correlation does not imply causality, however eyebrow-wagglingly suggestive it may be of causal hypotheses. It is less commonly noted that causality does not imply correlation either. It is quite possible for two variables to have zero correlation, and yet for one of them to be completely determined by the other.
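A minimal numerical sketch of this situation (my illustration, with A a smooth random waveform, as in the post's signal-generator example, and B = dA/dt computed from A alone):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
t = np.arange(0.0, 1000.0, dt)

# A: a smooth waveform (sum of sinusoids with random frequencies and phases)
freqs = rng.uniform(0.05, 0.5, 20)
phases = rng.uniform(0.0, 2 * np.pi, 20)
A = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

# B is completely determined by A: it is A's time derivative
B = np.gradient(A, dt)

print(np.corrcoef(A, B)[0, 1])  # close to zero, despite exact dependence
```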
58 comments
Comments sorted by top scores.
comment by Steve_Rayhawk · 2009-07-09T09:16:48.624Z · LW(p) · GW(p)
The actual relationship is: A causes B. Furthermore, there is no noise in the process. A is varying randomly, but B is deterministically caused by A and nothing else, and not by a complex process either.
[. . .] It does not matter what smooth waveform the signal generator puts out, it will have zero correlation with the current that it is the sole cause of.
This equivocates between the entire waveform A and the values of A at single points in time. The random value of the entire waveform A can be seen as the sole cause of the entire value of the waveform B, under one representation of the probability relations. But there is no representation under which the random value of A at a single point in time can be seen as the sole cause of the random value of B at that point in time. What could be a sole cause of the value of B at any point in time is the value of A at that time together with any one of three other variables: the value of a hidden low-pass-filtered white noise at that time, the value of A at an immediately preceding time in the continuum limit, or, if this is a second-order system, the value of B at an immediately preceding time in the continuum limit.
As entire waveforms, the random value of A is perfectly correlated with the random value of B (up to the rank of the covariance of B), because B is a deterministic linear transformation of A. As values at single points in time, the random value of A is uncorrelated with the random value of B.
So, marginalizing out the equivocation, either A is a sole deterministic cause of B, and A and B are perfectly correlated (but correlation is not logically necessary; see below), or A and B have zero correlation, and A is not a sole deterministic cause of B.
. . . Spirtes et al's Causation, Prediction, and Search. One of the axioms used in the last-mentioned is the Faithfulness Axiom. See the book for the precise formulation; informally put it amounts to saying that if two variables are uncorrelated, then they are causally independent. . . . The purpose of this article is to argue that this is not the case.
Emphasis added here and below.
Causation, Prediction, and Search, page 31:
Faithfulness Condition: Let G be a causal graph and P a probability distribution generated by G. <G, P> satisfies the Faithfulness Condition if and only if every conditional independence relation true in P is entailed by the Causal Markov Condition applied to G.
Wikipedia on correlation:
If the variables are independent then the correlation is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. Here is an example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X². Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated.
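That example is easy to check numerically (a sketch; a finite sample gives a correlation near, not exactly, zero):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 1_000_000)
Y = X ** 2  # Y is completely determined by X

print(np.corrcoef(X, Y)[0, 1])  # ~0: Cov(X, Y) = E[X^3] - E[X] E[X^2] = 0
```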
Spirtes's example on page 71 looks like a linear Gaussian causal system. In a linear Gaussian causal system, uncorrelation is equivalent to simple marginal independence and can imply complete conditional independence.
Theorem: In the long run, a bounded, differentiable real function has zero correlation with its first derivative. [. . .]
Notice that unlike the case that Spirtes considers, where the causal connections between two variables just happen to have multiple effects that exactly cancel, the lack of correlation between A and B is robust.
Yes, I think this is true for values of a function and its derivative sampled at single uniformly random times (for some limit sense of "uniform" and "a function").
comment by jimrandomh · 2009-07-08T13:43:58.025Z · LW(p) · GW(p)
Type error! Causal relationships between booleans imply correlation between them. Causal relationships between numbers imply correlation only if the relation is monotonic. Other types, such as strings, need not even have a meaningful definition of correlation, but they can nevertheless be causally related to each other.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2009-07-09T07:35:25.340Z · LW(p) · GW(p)
That is true, but not relevant. These numbers have, I think, as strong an independence as bit strings can have: no bit that can be extracted from A is correlated with a bit that is extracted from B.
comment by Mike Bishop (MichaelBishop) · 2009-07-08T13:53:35.374Z · LW(p) · GW(p)
There is a really interesting discussion/debate about Pearl's and Rubin's approaches to causal inference going on at Andrew Gelman's Blog. Part One. Part two. Part three.
Pearl is contributing in the comments.
Replies from: cousin_it
comment by CronoDAS · 2009-07-08T02:34:09.816Z · LW(p) · GW(p)
The output of a pseudorandom number generator is determined by the seed value, but good luck finding a correlation between them! ;)
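A rough check of this, using Python's standard Mersenne Twister (my illustration; any decent PRNG should behave the same way):

```python
import random

import numpy as np

seeds = np.arange(10_000)
# First output for each seed -- a deterministic function of the seed
outputs = np.array([random.Random(int(s)).random() for s in seeds])

print(np.corrcoef(seeds, outputs)[0, 1])  # ~0: determined, yet uncorrelated
```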
Replies from: SilasBarta, Richard_Kennaway
↑ comment by SilasBarta · 2009-07-08T17:17:35.442Z · LW(p) · GW(p)
As per my comment here, there is no statistical correlation between the PRNG's output and the seed value, but there is mutual information.
When someone says "no correlation" to mean "no statistical correlation", people hear "no correlation", which invokes that clump of conceptspace in their minds which implies "no mutual information". But that isn't true. There are other ways for variables to be related than statistical correlation, and mutual information is one way, and this is an important distinction to make before you get all giddy!
Replies from: CronoDAS
↑ comment by Richard_Kennaway · 2009-07-08T13:08:54.797Z · LW(p) · GW(p)
That is another way that correlations can fail to detect what is happening.
comment by mistercow · 2009-07-08T03:39:22.974Z · LW(p) · GW(p)
That is what A and B are: a randomly wandering variable A and its rate of change B.
Maybe I'm not quite understanding, but it seems to me that your argument relies on a rather broad definition of "causality". B may be dependent on A, but to say that A "causes" B seems to ignore some important connotations of the concept.
I think what bugs me about it is that "causality" implies a directness of the dependency between the two events. At first glance, this example seems like a direct relationship. But I would argue that B is not caused by A alone, but by both A's current and previous states. If you were to transform A so that a given B depended directly on a given A', I think you would indeed see a correlation.
I realize that I'm kind of arguing in a circle here; what I'm ultimately saying is that the term "cause" ought to imply correlation, because that is more useful to us than a synonym for "determine", and because that is more in line (to my mind, at least) with the generally accepted connotations of the word.
Replies from: SilasBarta, JGWeissman, Jack, Richard_Kennaway
↑ comment by SilasBarta · 2009-07-08T16:51:50.347Z · LW(p) · GW(p)
Maybe I'm not quite understanding, but it seems to me that your argument relies on a rather broad definition of "causality". B may be dependent on A, but to say that A "causes" B seems to ignore some important connotations of the concept.
Very true. Once again, in the context of a Richard Kennaway post, I'm going to have to recommend the use of more precise concepts. Instead of "correlation", we should be talking about "mutual information", and it would be helpful if we used Judea Pearl's definition of causality.
Mutual information between two variables means (among many equivalent definitions) how much you learn about one variable by learning the other. Statistical correlation is one way that there can be mutual information between two variables, but not the only way.
So, like what JGWeissman said, there can be mutual information between the two series even in the absence of a statistical correlation that directly compares time t in one to time t in the other. For example, there is mutual information between sin(t) and cos(t), even though d(sin(t))/dt = cos(t), and even though they're simultaneously uncorrelated (i.e. uncorrelated when comparing time t to time t). The reason there is mutual information is that if you know sin(t), a simple time-shift tells you cos(t).
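Numerically (a sketch of the sin/cos point over whole periods):

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)

print(np.corrcoef(np.sin(t), np.cos(t))[0, 1])              # ~0 over whole periods
print(np.corrcoef(np.cos(t), np.sin(t + np.pi / 2))[0, 1])  # 1.0: the time-shift recovers cos
```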
As for causation, the Pearl definition is (and my apologies I may not get this right) that:
"A causes B iff, after learning A, nothing else at the time of A or B gives you information about B. (and A is the minimal such set for which this is true)"
In other words, A causes B iff A is the minimal set of variables such that, conditional on A, B is independent of everything else.
So, anyone want to rephrase Kennaway's post with those definitions?
↑ comment by JGWeissman · 2009-07-08T05:51:33.078Z · LW(p) · GW(p)
But I would argue that B is not caused by A alone, but by both A's current and previous states.
This is the right idea. For small epsilon, B(t) should have a weak negative correlation with A(t - epsilon), a weak positive correlation with A(t + epsilon), and a strong positive correlation with the difference A(t + epsilon) - A(t - epsilon).
The function A causes the function B, but the value of A at time t does not cause the value of B at time t. Therefore the lack of correlation between A(t) and B(t) does not contradict causation implying correlation.
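A numerical check of these claims (my sketch; the exact magnitudes depend on the smoothness of A and on the choice of epsilon):

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 2000.0, dt)

# A: a smooth, randomly wandering waveform; B = dA/dt
A = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in zip(rng.uniform(0.05, 0.5, 30), rng.uniform(0.0, 2 * np.pi, 30)))
B = np.gradient(A, dt)

k = 10  # epsilon = k * dt
mid = slice(k, -k)
print(np.corrcoef(B[mid], A[mid])[0, 1])                     # ~0
print(np.corrcoef(B[mid], A[: -2 * k])[0, 1])                # weakly negative: A(t - eps)
print(np.corrcoef(B[mid], A[2 * k :])[0, 1])                 # weakly positive: A(t + eps)
print(np.corrcoef(B[mid], A[2 * k :] - A[: -2 * k])[0, 1])   # ~1: the central difference
```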
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T13:04:17.233Z · LW(p) · GW(p)
Therefore the lack of correlation between A(t) and B(t) does not contradict causation implying correlation.
Only trivially. Since B = dA/dt, the correlation between B and dA/dt is perfect. Likewise for any other relationship B = F(A): B correlates perfectly with F(A). But you would only compare B and F(A) if you already had some reason to guess they were related, and having done so would observe they were the same and not trouble with correlations at all.
If you do not know that B = dA/dt and have no reason to guess this hypothesis, correlations will tell you nothing, especially if your time series data has too large a time step -- as positively recommended in the linked paper -- to see dA/dt at all.
↑ comment by Jack · 2009-07-08T04:10:25.702Z · LW(p) · GW(p)
I don't think you are arguing in a circle. B is caused by current and previous As. Obviously we're not going to see a correlation unless we control for the previous state of A. Properly controlled, the relationship between the two variables will be one-to-one, won't it?
↑ comment by Richard_Kennaway · 2009-07-08T12:53:52.940Z · LW(p) · GW(p)
But I would argue that B is not caused by A alone, but by both A's current and previous states.
Consider not the abstract situation of B = dA/dt, but the concrete example of the signal generator. It would be a perverse reading of the word "cause" to say that the voltage does not cause the current. You can make the current be anything you like by suitably manipulating the voltage.
But let this not degenerate into an argument about the "real" meaning of "cause". Consider instead what is being said about the systems studied by the authors referenced in the post.
Lacerda, Spirtes, et al. do not use your usage. They talk about time series equations in which the current state of each variable depends on the previous states of some variables, but still they draw causal graphs which do not have a node for every time instant of every variable, but a node for every variable. When x(i+1) = b y(i) + c z(i), they talk about y and z causing x.
The reason that none of their theorems apply to the system B = dA/dt is that when I discretise time and put this in the form of a difference equation, it violates the precondition they state in section 1.2.2. This will be true of the discretisation of any system of ordinary differential equations. It appears to me that that is a rather significant limitation of their approach to causal analysis.
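For concreteness, here is what such a discretisation looks like under a forward difference (my sketch; see their section 1.2.2 for the precondition as they state it):

```latex
% Forward-difference discretisation of B = dA/dt with time step h:
\[
  B_i \;=\; \frac{A_{i+1} - A_i}{h}
  \qquad\Longleftrightarrow\qquad
  A_{i+1} \;=\; A_i + h\,B_i .
\]
% In the time-series form used by Lacerda, Spirtes, et al., each current value
% may depend only on previous states; here B_i involves A at the same instant
% (and the next one), so the dependency does not point strictly back in time.
```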
Replies from: mistercow
↑ comment by mistercow · 2009-07-08T13:52:39.487Z · LW(p) · GW(p)
Consider not the abstract situation of B = dA/dt, but the concrete example of the signal generator. It would be a perverse reading of the word "cause" to say that the voltage does not cause the current. You can make the current be anything you like by suitably manipulating the voltage.
But you can make a similar statement for just about any situation where B = dA/dt, so I think it's useful to talk about the abstract case.
For example, you can make a car's velocity anything you like by suitably manipulating its position. Would you then say that the car's position "causes" its velocity? That seems awkward at best. You can control the car's acceleration by manipulating its velocity, but to say "velocity causes acceleration" actually sounds backwards.
But let this not degenerate into an argument about the "real" meaning of "cause". Consider instead what is being said about the systems studied by the authors referenced in the post.
But isn't this really the whole argument? If the authors implied that every relationship between two functions implies correlation between their raw values, then that is, I think, self-evidently wrong. The question then, is do we imply correlation when we refer to causation? I think the answer is generally "yes".
Replies from: Cyan
↑ comment by Cyan · 2009-07-08T14:41:35.544Z · LW(p) · GW(p)
I think intervention is the key idea missing from the above discussion of which of the derivative function and the integrated function is the cause and which is the effect. In the signal generator example, voltage is a cause of current because we can intervene directly on the voltage. In the car example, acceleration is a cause of velocity because we can intervene directly on acceleration. This is not too helpful on its own, but maybe it will point the discussion in a useful direction.
comment by SilasBarta · 2009-07-08T01:24:07.735Z · LW(p) · GW(p)
In a further article I will exhibit time series for three variables, A, B, and C, where the joint distribution is multivariate normal, the correlation of A with C is below -0.99, and each has zero correlation with B. ...
And in the current comment section, I'm going to give away the answer, since I've run through the PCT demos. (Sorry, I don't know how to format for spoilers; will edit once I figure it out or someone tells me.)
You sure you didn't want to figure it out on your own? Okay, here goes. Kennaway is describing a feedback control system: a system that observes a variable's current value and outputs a signal that attempts to bring it back towards a reference value. A is an external disturbance. B is the deviation of the system from the reference value (the error). C is the output of the controller.
The controller C will push in the opposite direction of the disturbance A, so A and C will be strongly anti-correlated. Their combined effect is to keep B very close to zero with random deviations, so B is uncorrelated with both.
The disturbance and the controller jointly cause the error. So, we have A->B and C->B. The error also causes the controller to output what it does, so B->C. (I assume directed cycles are allowed since there are four possible connections and you said there are 16 possible graphs.)
Together, that's A-->B<-->C
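A simulation of such a loop (my sketch; the disturbance, the integral controller, and the gain are made up for illustration, not Kennaway's actual demo parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, gain = 0.001, 100.0
t = np.arange(0.0, 500.0, dt)

# A: a smooth random disturbance (sum of sinusoids with random phases)
A = sum(np.sin(2 * np.pi * f * t + p)
        for f, p in zip(rng.uniform(0.05, 0.5, 30), rng.uniform(0.0, 2 * np.pi, 30)))

B = np.zeros(t.size)  # error: deviation of the controlled variable from its reference
C = np.zeros(t.size)  # controller output
for i in range(1, t.size):
    B[i] = A[i] + C[i - 1]              # disturbance and controller jointly cause the error
    C[i] = C[i - 1] - dt * gain * B[i]  # integral controller leans against the error

print(np.corrcoef(A, C)[0, 1])                           # strongly negative, below -0.99
print(np.corrcoef(A, B)[0, 1], np.corrcoef(C, B)[0, 1])  # both ~0
```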
(In other news, Kennaway or pjeby will suggest I'm not giving due attention to Perceptual Control Theory.)
(Edit: some goofs)
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T12:22:42.797Z · LW(p) · GW(p)
You have read my mind perfectly and understood the demos! But I'll go ahead and make the post anyway, when I have time, because there are some general implications to draw from the disconnect between causality and correlation. Such as, for example, the impossibility of arriving at A-->B<-->C for this example from any existing algorithms for deriving causal structure from statistical information.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-07-08T17:21:08.258Z · LW(p) · GW(p)
the impossibility of arriving at A-->B<-->C for this example from any existing algorithms for deriving causal structure from statistical information.
Correct me if I'm wrong, but I think I already know the insight behind what you're going to say.
It's this: there is no fully general way to detect all mutual information between variables, because that would be equivalent to being able to compute Kolmogorov complexity (the length of the shortest program that outputs a given string), which would in turn be equivalent to solving the Halting problem.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T20:46:45.875Z · LW(p) · GW(p)
Correct me if I'm wrong
You're wrong. :-)
Kolmogorov complexity will play no part in the exposition.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-07-08T22:17:55.963Z · LW(p) · GW(p)
Check my comment: I was only guessing the underlying insight behind your future post, not its content.
I obviously leave room for the possibility that you'll present a more limited or more poorly-defended version of what I just stated. ;-)
comment by PhilGoetz · 2015-02-06T06:37:54.939Z · LW(p) · GW(p)
The proof is left as an exercise.
Do you actually have a proof?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2015-02-06T10:23:44.116Z · LW(p) · GW(p)
Do you actually have a proof?
Yes. But it's not deep; I recommend trying yourself before consulting the answer. It follows straightforwardly from the fact that the integral of x(dx/dt) is (x^2)/2. The rest is bookkeeping to eliminate edge cases.
I didn't trouble to state the result with complete precision in the OP. For reference, here is an exact formulation (Theorem 2 of the linked note):
Let x be a differentiable real function. If the averages of x and dx/dt over the whole real line exist, and the correlation of x and dx/dt over the whole real line exists, then the correlation is zero.
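Spelling out the computation behind this (a sketch of the bookkeeping, using the boundedness assumed in the OP's version of the theorem):

```latex
% Over an interval [a, b], using \int x (dx/dt) dt = x^2/2:
\[
  \frac{1}{b-a}\int_a^b x\,\frac{dx}{dt}\,dt
    = \frac{x(b)^2 - x(a)^2}{2(b-a)} \longrightarrow 0,
  \qquad
  \frac{1}{b-a}\int_a^b \frac{dx}{dt}\,dt
    = \frac{x(b) - x(a)}{b-a} \longrightarrow 0,
\]
% since x is bounded while b - a grows without bound. Hence the covariance
\[
  \overline{x\,x'} \;-\; \overline{x}\cdot\overline{x'} \;\longrightarrow\; 0,
\]
% and the correlation is zero provided the variances of x and x' converge to
% nonzero limits (the existence assumptions in the statement).
```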
Replies from: PhilGoetz, PhilGoetz, PhilGoetz
↑ comment by PhilGoetz · 2015-02-08T21:40:00.263Z · LW(p) · GW(p)
Let x be a differentiable real function.
I think precision would require you to state this in terms of a variable x and the function f(x). (EDIT: Sorry; please ignore this.)
If the averages of x and dx/dt over the whole real line exist,
This is a pretty harsh requirement! It will be true for constant functions, cyclic functions, symmetric functions, and maybe asymptotically-bounded functions. I don't think you can say it's true for y=x.
Replies from: gjm, Richard_Kennaway
↑ comment by Richard_Kennaway · 2015-02-08T23:35:03.819Z · LW(p) · GW(p)
I think precision would require you to state this in terms of a variable x and the function f(x).
gjm has read the note I linked; I suggest you do the same. That is what a link is for.
This is a pretty harsh requirement!
Not particularly. The speed of a car, the temperature of a room, the height of an aircraft: such things are all around you. Stating the property of the whole real line is an idealisation, but Theorem 1 of the note treats of finite intervals also, and there is a version of the theorems for time series.
I don't think you can say it's true for y=x.
In keeping with the terminology established at the note I linked, I take this to mean x=t. Yes, it is not true of x=t. This does not have an average over the whole real line.
Replies from: gjm, PhilGoetz
↑ comment by PhilGoetz · 2015-02-09T20:25:57.719Z · LW(p) · GW(p)
gjm has read the note I linked; I suggest you do the same. That is what a link is for.
I wish I hadn't made my comment about precision, which was too nitpicking and unhelpful. But as long as we're being snippy with each other:
To be excruciatingly precise: You just said you were being precise, then said "Let x be a differentiable real function." That isn't precise; you need to specify right there that it's a function of t. If you'd said the link stated it precisely, that would be different.
I admit that I would have interpreted it correctly by making the most-favorable, most-reasonable interpretation and assuming x was a function of t. But, because of the sorts of things I usually see done with x and t, I assumed that x was a function of time, and the function of interest was some function of x(t), and I jumped to the conclusion that you meant to say "Let f(x) be a differentiable real function." Which I would not have done had you in fact been precise, and said "Let x(t) be a differentiable real function."
↑ comment by PhilGoetz · 2015-02-16T07:02:40.126Z · LW(p) · GW(p)
Sorry that I sounded dismissive. It's a nice proof, and it wasn't obvious to me.
I am uncomfortable with using Pearson correlation to mean correlation. Consider y=sin(x), dy/dx = cos(x). These are "uncorrelated" according to Pearson correlation, but given one, there are at most 2 possibilities for the other. So knowing one gives you almost complete info about the other. So calling them "independent" seems wrong.
comment by Epictetus · 2015-02-06T04:16:04.916Z · LW(p) · GW(p)
Correlation only looks for linear relationships. For example, suppose we have a random variable X that takes values -2, -1, 1, or 2 each with probability 1/4. Define the random variable Y=X^2. The correlation is 0. Despite a functional relationship (causality if I've ever seen it), the two variables are uncorrelated.
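The computation, for the record:

```latex
\[
  E[X] = 0, \qquad
  E[XY] = E[X^3] = \tfrac{1}{4}\bigl((-2)^3 + (-1)^3 + 1^3 + 2^3\bigr) = 0,
\]
\[
  \operatorname{Cov}(X, Y) = E[XY] - E[X]\,E[Y] = 0 .
\]
```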
comment by Annoyance · 2009-07-08T16:17:53.475Z · LW(p) · GW(p)
This is a really spectacular post.
One quibble: in the case being discussed, one variable is actually a property of the other variable, rather than another thing that is affected by something else.
Is it really appropriate to say that A causes B when B is just a property of A?
Replies from: thomblake
↑ comment by thomblake · 2009-07-08T16:27:58.758Z · LW(p) · GW(p)
I was thinking this as well, but you could construct a situation that doesn't have this problem - like a mechanical system that relies on the derivative to perform some action deterministically.
Replies from: SilasBarta, Richard_Kennaway
↑ comment by SilasBarta · 2009-07-08T17:06:10.175Z · LW(p) · GW(p)
That's actually an interesting issue in control systems. IIRC, if you set up a system so that some variable B is a function of the time-derivative of A, B=f( dA(t)/dt ), and it requires you to know dA(T)/dt to compute B(T), such a system is called "acausal". I believe this is because you can't know dA(T)/dt until you know A(t) after time T.
So any physically-realizable system that depends on the time-derivative of some other value, is actually depending on the time-derivative at a previous point in time.
In contrast, there is no such problem for the integral. If I only know the time series of A(t) up to time T, then I know the integral of A up to time T, and such a relationship is not acausal.
In the general case, for a relationship between two systems where B is a function of A, the transfer function from A to B, num(s)/den(s), must be such that deg(num) <= deg(den), where deg() denotes the degree of a polynomial.
(The transfer function is the ratio of B to A in the Laplace domain, usually given the variable s to replace t. Multiplying by s in the Laplace domain corresponds to differentiation in the time domain, and dividing by s is integration.)
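As concrete instances of the degree condition (standard textbook examples, not from the comment): a pure differentiator is improper and hence not realizable, an integrator is proper, and the usual workaround for a derivative is a filtered approximation.

```latex
% Differentiator: H(s) = s, deg(num) = 1 > deg(den) = 0 -- improper ("acausal").
% Integrator: H(s) = 1/s, deg(num) = 0 <= deg(den) = 1 -- realizable.
% Filtered ("dirty") derivative: H(s) = s/(\tau s + 1), deg(num) = deg(den) = 1 --
% a realizable approximation of d/dt for frequencies below 1/\tau.
\[
  H_{\text{diff}}(s) = s, \qquad
  H_{\text{int}}(s) = \frac{1}{s}, \qquad
  H_{\text{filtered}}(s) = \frac{s}{\tau s + 1} .
\]
```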
(edit to clarify, then again to clarify some more)
↑ comment by Richard_Kennaway · 2009-07-08T16:50:08.913Z · LW(p) · GW(p)
You mean, like the mechanical (well, electronic) one I described?
B = dA/dt doesn't imply that B is the cause of A. As I pointed out, a current generator attached to a capacitor causes the voltage, the reverse of the first example, but the mathematical relation between voltage and current is the same.
"Cause" is an everyday concept that tends to dissolve when looked at too closely. The research on causal analysis of statistical data quite sensibly does not try to define it.
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-07-08T17:12:43.229Z · LW(p) · GW(p)
"Cause" is an everyday concept that tends to dissolve when looked at too closely. The research on causal analysis of statistical data quite sensibly does not try to define it.
Except for everyone following Pearl.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T21:41:54.877Z · LW(p) · GW(p)
Ah, ok. Applying his definition ("variable X is a probabilistic-cause of variable Y if P(y|do(x)) != P(y) for some values x and y") to the signal generator, it says that the voltage causes the current; in the current-source version, that the current causes the voltage. That's exactly what I would say as well.
Of course, his limitation to acyclic relations excludes from his analysis systems that are only slightly more complicated, such as .
Replies from: Steve_Rayhawk
↑ comment by Steve_Rayhawk · 2009-07-08T23:30:59.978Z · LW(p) · GW(p)
his limit to acyclic relations
That's what dynamic Bayesian networks are for. The current values of state variables of a system near stable equilibrium are not caused by each other; they are caused by past values. Dynamic Bayesian networks express this distinction with edges that pass forward in time.
excludes from his analysis systems that are only slightly more complicated, such as
The continuous-time limit of a dynamic Bayesian network can be a differential equation such as this.
(ETA) A dynamic Bayesian network is syntactic sugar for an ordinary Bayesian network that has the same structure in each of a series of time slices, with edges from nodes in each time slice to nodes in the next time slice. The Bayesian network that is made by unrolling a dynamic Bayesian network is still completely acyclic. Therefore, Bayesian networks have at least the representation power of finitely iterated systems of explicit recurrence relations and are acyclic, and continuum limits of Bayesian networks have at least the representation power of systems of differential equations and are acyclic. (Some representation powers that these Bayesian networks do not have are the representation powers of systems of implicit recurrence relations, systems of differential algebraic equations without index reduction, and differential games. Something like hybrid Bayesian-Markovian networks would have some of these representation powers, but they would have unphysical semantics (if physics is causal) and would be hard to use safely.)
(Dynamic Bayesian networks at the University of Michigan Chemical Engineering Process Dynamics and Controls Open Textbook ("ControlWiki"))
comment by ShardPhoenix · 2009-07-08T13:09:28.875Z · LW(p) · GW(p)
Is encryption another example, or do you have to take into account the full system including the key?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T13:46:31.610Z · LW(p) · GW(p)
As with CronoDAS's suggestion of a pseudorandom generator, this can easily yield variables possessing a strong causal connection but no correlation.
Correlations -- product-moment or any other statistical calculation -- are machines to detect relationships between variables that are obscured by passive fog. Random generators and cryptosystems are machines to defeat detection even by an adversary. It is not a surprise that crypto beats correlation.
More surprising is the existence of systems as simple as B = dA/dt which also defeat correlation. The scatter-plot looks like pure fog, yet there are no extraneous noise sources and no adversarial concealment. The relationship between the variables is simply invisible to the statistical tools used in causal analysis.
comment by SilasBarta · 2009-07-08T23:32:07.774Z · LW(p) · GW(p)
Whoa, what happened there? Someone made a comment here that made a lot of good points (others had made them, but this poster did it much better), and I voted it up, as did someone else, but now it's gone, and there's no "comment deleted" indication.
Is LW not always giving the most up-to-date site?
Replies from: Steve_Rayhawk, SilasBarta
↑ comment by Steve_Rayhawk · 2009-07-09T00:47:33.169Z · LW(p) · GW(p)
My use of the word "cause" was not technically accurate. I needed to find a way to word my comment that didn't use it that way.
↑ comment by SilasBarta · 2009-07-08T23:32:46.540Z · LW(p) · GW(p)
Okay ... and now that comment shows up again in "recent comments". Weird.
ETA: Okay, I guess that comment is actually a different one.
is confused
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-07-09T00:09:58.097Z · LW(p) · GW(p)
A comment that has no follow-ups can be deleted by the author without leaving a "comment deleted" placeholder.
comment by prase · 2009-07-08T09:54:10.740Z · LW(p) · GW(p)
Theorem: In the long run, a bounded, differentiable real function has zero correlation with its first derivative.
I don't understand the theorem. What does "in the long run" mean? Is it that, in the limit a, b → ∞,
\[ \frac{1}{b-a}\int_a^b f(x)\,f'(x)\,dx \;=\; \frac{1}{(b-a)^2}\left(\int_a^b f(x)\,dx\right)\left(\int_a^b f'(y)\,dy\right)? \]
Sorry for the quasi-TEX notation, even the underscore doesn't appear here. Is there any elegant way to write formulae on LW?
Replies from: Richard_Kennaway, Vladimir_Nesov
↑ comment by Richard_Kennaway · 2009-07-08T12:26:02.765Z · LW(p) · GW(p)
Not quite, it's that as a and b go to infinity,
\[ \frac{1}{b-a}\int_a^b f(x)\,f'(x)\,dx \]
goes to zero. Here \( \int_a^b f(x)\,f'(x)\,dx = \left[\,f(x)^2/2\,\right]_a^b \), which is bounded, while \( b-a \) is unbounded, QED.
LaTeX to Wiki might work, but LaTeX to LW comment doesn't.
Replies from: Vladimir_Nesov, prase
↑ comment by Vladimir_Nesov · 2009-07-08T12:37:08.847Z · LW(p) · GW(p)
\[ \frac{\int_a^b f(x)\,f'(x)\,dx}{b-a} \]
Source code:
!\f'(x\)dx}{b-a})
Formatting tutorial on the Wiki
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T13:13:27.225Z · LW(p) · GW(p)
I tried, but it didn't work for me. I could make a codecogs URL to exhibit the image in my browser, but it got munged when I tried the ![](...) embedding.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-07-08T13:16:26.011Z · LW(p) · GW(p)
The problem must be in escape character (see the last section of the wiki article). Try copy-pasting the code I gave above in your comment, and notice the placement of backslashes.
↑ comment by prase · 2009-07-08T13:43:31.795Z · LW(p) · GW(p)
The standard form for the correlation coefficient is
cov(x,y) = N(<xy> - <x><y>)
where N is a normalisation; it seems that you suppose that <ff'> = 0 and <f'> finite, then. <ff'> = 0 follows from boundedness, but for the derivative it's not clear. If <f'> on (a,b) grows more rapidly than (b-a), anything can happen.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-07-08T13:58:33.771Z · LW(p) · GW(p)
If <f'> on (a,b) grows more rapidly than (b-a)
This cannot happen. f is assumed bounded. Therefore the average of f' over the interval [a,b] tends to zero as the bounds go to infinity.
The precise, complete mathematical statement and proof of the theorem does involve some subtlety of argument (consider what happens if f = sin(exp(x))) but the theorem is correct.
↑ comment by Vladimir_Nesov · 2009-07-08T11:40:27.331Z · LW(p) · GW(p)
See the description on the Wiki of how to include LaTeX in comments.