Comments

Comment by Ilya Shpitser (ilya-shpitser) on Latent Variables and Model Mis-Specification · 2018-11-07T16:17:32.617Z · LW · GW

Hi Jacob. We (@JHU) read your paper on problems with ML recently!

---

"On the other hand, some people take robust statistical correlation to be the definition of a causal relationship, and thus do consider causal and counterfactual reasoning to be the same thing."

These people would be wrong: if A <- U -> B, then A and B are robustly correlated (due to the spurious association via U), but intuitively we would not call this association causal. Example: A and B are the eye colors of two siblings. The correlation is pretty stable, but not causal.
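If it helps, here is a toy simulation of exactly this structure (the numbers are made up; only the graph A <- U -> B matters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause U (think: parents' genes); there is no A -> B or B -> A edge.
U = rng.normal(size=n)
A = U + rng.normal(size=n)   # one sibling's trait (toy, continuous)
B = U + rng.normal(size=n)   # the other sibling's trait

# Robust, stable correlation (~0.5 here) despite a causal effect of exactly zero.
print(np.corrcoef(A, B)[0, 1])
```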

However, I don't see how the first part of the above sentence implies the part after "thus."

---

Causal and counterfactual reasoning intersect, but neither is a subset of the other. An example of counterfactual reasoning I do that isn't causal is missing data analysis. An example of causal reasoning that isn't counterfactual is the kind of work Phil Dawid does.

---

If you are worried about robustness to model misspecification, you may be interested in reading about multiply robust methods, based on the theory of semiparametric statistical models and influence functions. My friends and I have some papers on this. Here is the original paper (its first author is now at JHU) showing double ("two choose one") robustness in a missing data context:

https://pdfs.semanticscholar.org/e841/fa3834e787092e4266e9484158689405b7b0.pdf
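For a flavor of the "two choose one" idea, here is a minimal sketch in the missing-data setting (the simulated data and the two working models are my own toy choices, not anything from the paper): the AIPW-style estimator below remains consistent for E[Y] if either the propensity model or the outcome regression is correctly specified.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Simulated data: X always observed; Y missing at random given X (R = 1 means Y observed).
X = rng.normal(size=(n, 1))
Y = 2.0 * X[:, 0] + rng.normal(size=n)            # true E[Y] = 0
R = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))

# Working model 1: propensity score P(R = 1 | X).
ps = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]

# Working model 2: outcome regression E[Y | X], fit on complete cases only.
m = LinearRegression().fit(X[R == 1], Y[R == 1]).predict(X)

# Doubly robust estimate of E[Y]: consistent if *either* working model is right.
Y0 = np.where(R == 1, Y, 0.0)                     # missing Ys never enter (their R = 0)
print(np.mean(R * Y0 / ps - (R - ps) / ps * m))   # ~0.0
```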

Here is a paper on mediation analysis I was involved in that gets "three choose two" robustness:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4710381/

---

I don't know how counterfactuals get you around model misspecification. My take is that counterfactuals are sometimes of primary interest, in which case model misspecification is one of the issues you have to worry about.

Comment by Ilya Shpitser (ilya-shpitser) on No Really, Why Aren't Rationalists Winning? · 2018-11-05T15:54:53.585Z · LW · GW

"Rationalists are very good at epistemic rationality."

As people very good at _epistemic_ rationality, I am sure you realize that the relevant comparison is between success after one has been exposed to rationality and hypothetical success had one not been exposed.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-10-02T04:50:33.223Z · LW · GW

No, that wouldn't surprise me in 5 years. Nor would that count as "scary progress" to me. That's bipedalism, not strides towards general intelligence.

---

"Well, it makes sense."

That makes your beliefs a religion, my friend.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-29T02:17:29.589Z · LW · GW

I often hear this response: "I can't make bets on my beliefs about the Eschaton, because they are about the Eschaton."

My response to this response is: you have left the path of empiricism if you can't translate your insight about [topic] (in this case, "AI progress") into taking money, via bets with empirically verifiable outcomes, from folks without your insight.

---

If you are worried the world will change too much in 25 years, can you formulate a nearer-term bet you would be happy with? For example, something non-toy that DL+RL will do within 5 years.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-28T12:57:26.177Z · LW · GW

Well, I am fairly sure DL+RL will not lead to HLAI on any reasonable timescale that would matter to us. You are not sure. Seems to me we could turn this into a bet: I will gladly take the negation of almost any bet where you say DL+RL -> HLAI after X years.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-27T20:32:15.213Z · LW · GW

It's easy to tell -- they will run out of steam. Want to bet money on a concrete claim? I love money.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-27T16:28:45.022Z · LW · GW

I disagree. Why would it be simple? Even people who try to get self-driving cars to work (e.g. at Uber) are now using an "engineering stack" approach, rather than formal RL+DL.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-27T12:35:41.984Z · LW · GW

No.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-26T23:37:28.519Z · LW · GW

Sure, and RL is not a regression problem. The reason RL methods can do causality is that they can perform an essentially infinite number of experiments in toy worlds. DL can help RL scale up to more complex toy worlds, and some worlds that are not so toy anymore. But there, it's not DL on its own -- it's DL+RL.
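A toy illustration of that point (a hypothetical confounded world of my own construction): a pure regression of B on A gets the wrong answer, while an agent that intervenes on A recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounded toy world: U -> A, U -> B, and A -> B with true causal effect 1.0.
U = rng.normal(size=n)
A = U + rng.normal(size=n)
B = 1.0 * A + 2.0 * U + rng.normal(size=n)

# Pure regression (observational slope of B on A) is biased by U.
print(np.cov(A, B)[0, 1] / np.var(A))           # ~2.0, not the true 1.0

# An agent that can experiment sets A itself, cutting the U -> A edge.
A_do = rng.normal(size=n)                        # do(A): chosen by the agent
B_do = 1.0 * A_do + 2.0 * U + rng.normal(size=n)
print(np.cov(A_do, B_do)[0, 1] / np.var(A_do))  # ~1.0, the causal effect
```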

DL is very useful, indeed! One could use DL as a "subroutine" for causal analysis of the sort Pearl worries about; in fact, people do this now.

Point being it's no longer "DL", it's "DL-as-a-way-to-do-regression + other-methods-that-use-regressions-as-a-subroutine."

"Can you point to a particular task needing causal inference that you think these methods cannot solve?"

To answer this -- anything that's not a regression problem. At best, you can use DL as a subroutine in some larger algorithm that needs its own insights, unrelated to DL, to work. So why would DL get all the credit for solving the problem?

---

Dota is far from solved, so far.

Comment by Ilya Shpitser (ilya-shpitser) on Deep learning - deeper flaws? · 2018-09-26T18:52:35.409Z · LW · GW

You can't use regression methods for problems that are not regression problems, and causal inference is generally not a regression problem. It's not an issue of scale; it's an issue of the wrong tool for the job.