Comments

Comment by IlyaShpitser on Related Discussion from Thomas Kwa's MIRI Research Experience · 2023-10-07T20:35:06.554Z · LW · GW

Nate's an asshole, and this is cult dynamics.  Make your wisdom saving throws, folks.

Comment by IlyaShpitser on If influence functions are not approximating leave-one-out, how are they supposed to help? · 2023-09-22T15:51:38.946Z · LW · GW

Influence functions are for problems where you have a mismatch between the loss of the target parameter you care about and the loss of the nuisance function you must fit to get the target parameter.
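
To make that concrete, here is a minimal sketch in the standard semiparametric setting, estimating E[Y(1)] with an AIPW-style one-step correction (the simulated data, model choices, and estimator here are my own illustrative assumptions, not anything from the post):

```python
# Target: psi = E[Y(1)], a single number we care about.
# Nuisances: outcome regression mu(x) = E[Y | A=1, X=x] and propensity e(x) = P(A=1 | X=x),
# each fit by minimizing its own loss (squared error, log-loss), not a loss for psi itself.
# The (efficient) influence function supplies the correction term that bridges that mismatch.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 1))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))      # confounded treatment
Y = 2 * A + X[:, 0] + rng.normal(size=n)             # true E[Y(1)] = 2

mu_hat = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)   # outcome nuisance
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]      # propensity nuisance

plug_in = mu_hat.mean()                                 # naive plug-in of the nuisance
one_step = plug_in + np.mean(A / e_hat * (Y - mu_hat))  # plug-in + mean of the influence function term
print(plug_in, one_step)                                # both land near 2 here
```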

Comment by IlyaShpitser on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-21T18:34:13.193Z · LW · GW

It's simple. "You" (the rationalist community) are selected for being bad at making wisdom saving throws, so to speak.

You know, let's look at Yudkowsky, with all of his very public, very obvious character dysfunction and go "yes, this is the father figure/Pope I need to change my life."

The only surprise here is that the type of stuff you are agonizing about didn't happen earlier, and isn't happening more often.

Comment by IlyaShpitser on "Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities) · 2023-03-30T15:30:20.760Z · LW · GW

It's important to internalize that the intellectual world lives in the attention economy, like everything else.

Just like "content creators" on social platforms think hard about capturing and keeping attention, so do intellectuals and academics.  Clarity and rigor are part of that.


No one has time, energy (or crayons, as the saying goes) for half-baked ramblings on a blog or forum somewhere.

Comment by IlyaShpitser on On Investigating Conspiracy Theories · 2023-02-22T22:54:41.079Z · LW · GW

If you think you can beat the American __ Association over a long-run average, that's great news for you!  That means free money!

Being right is super valuable, and you should monetize it immediately.

---

Anything else is just hot air.

Comment by IlyaShpitser on The role of Bayesian ML in AI safety - an overview · 2023-01-28T00:25:17.354Z · LW · GW

Lots of Bayes fans, but can't seem to define what Bayes is.

Since Bayes' theorem is a reformulation of the chain rule, anything that is probabilistic "uses Bayes' theorem" somewhere, including all frequentist methods.
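
To spell that out in the two-variable case: the chain rule gives P(A, B) = P(B | A) P(A), so P(A | B) = P(A, B) / P(B) = P(B | A) P(A) / P(B), which is exactly Bayes' theorem.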

Frequentists also quantify uncertainty, via confidence sets and other tools.

Continuous updating has to do with "online learning algorithms," not Bayes.

---

Bayes is when the target of inference is a posterior distribution.  Bonus Bayes points: you don't care about frequentist properties like consistency of the estimator.
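
A minimal coin-flip sketch of the distinction (the Beta(1, 1) prior and the data here are just illustrative assumptions):

```python
# Same data, different targets of inference.
from scipy import stats

heads, n = 7, 10

# Frequentist target: a point estimate (plus its sampling properties, e.g. consistency).
mle = heads / n

# Bayesian target: the posterior distribution itself.
# With a Beta(1, 1) prior, the posterior over the coin's bias is Beta(1 + heads, 1 + n - heads).
posterior = stats.beta(1 + heads, 1 + n - heads)
print(mle, posterior.mean(), posterior.interval(0.95))
```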

Comment by IlyaShpitser on Logical Probability of Goldbach’s Conjecture: Provable Rule or Coincidence? · 2022-12-29T14:32:49.655Z · LW · GW

Does your argument fail for https://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture?

If so, can you explain why?  If not, it seems your argument proves too much, since a proof of this (weaker) claim exists.

Not that you asked my advice, but I would stay away from number theory unless you get a lot of training.

Comment by IlyaShpitser on What is causality to an evidential decision theorist? · 2022-04-17T19:27:05.784Z · LW · GW

For the benefit of other readers: this post is confused.

Specifically on this (although possibly also on other stuff): (a) causal and statistical DAGs are fundamentally not the same kind of object, and (b) no practical decision theory used by anyone includes the agent inside the DAG in the way this post describes.

---

"So if the EDT agent can find a causal structure that reflects their (statistical) beliefs about the world, then they will end up making the same decision as a CDT agent who believes in the same causal structure."

A -> B -> C and A <- B <- C reflect the same statistical beliefs about the world.
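
A quick simulated illustration of that point (a linear-Gaussian toy example of my own): both chains imply A is dependent on C marginally and independent of C given B, so observational data cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Chain A -> B -> C
A1 = rng.normal(size=n); B1 = A1 + rng.normal(size=n); C1 = B1 + rng.normal(size=n)
# Chain A <- B <- C
C2 = rng.normal(size=n); B2 = C2 + rng.normal(size=n); A2 = B2 + rng.normal(size=n)

def partial_corr(x, y, z):
    # correlation of x and y after linearly regressing out z
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

for A, B, C in [(A1, B1, C1), (A2, B2, C2)]:
    print(round(np.corrcoef(A, C)[0, 1], 2), round(partial_corr(A, C, B), 3))
# Both chains: A and C marginally dependent (corr ~0.58), independent given B (partial corr ~0).
```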

Comment by IlyaShpitser on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-10T16:48:03.943Z · LW · GW

If you think it's a hard bet to win, you are saying you agree that nothing bad will happen.  So why worry?

Comment by IlyaShpitser on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-08T22:54:25.057Z · LW · GW

Wanna bet some money that nothing bad will come of any of this on the timescales you are worried about?

Comment by IlyaShpitser on Ukraine Post #2: Options · 2022-03-11T02:35:10.295Z · LW · GW

Big fan of Galeev.

Comment by IlyaShpitser on 12 interesting things I learned studying the discovery of nature's laws · 2022-02-21T13:45:45.006Z · LW · GW

Some reading on this:

https://csss.uw.edu/files/working-papers/2013/wp128.pdf

http://proceedings.mlr.press/v89/malinsky19b/malinsky19b.pdf

https://arxiv.org/pdf/2008.06017.pdf

---

In my experience, it pays to learn how to think about causal inference like Pearl (graphs, structural equations), and also how to think about causal inference like Rubin (random variables, missing data).  Some insights only arise from a synthesis of those two views.

Pearl is a giant in the field, but it is worth remembering that he's unusual in another way (compared to a typical causal inference researcher) -- he generally doesn't worry about actually analyzing data.

---

By the way, Gauss not only figured out the normal distribution while trying to track down Ceres' orbit, he actually developed the least squares method, too!  So arguably the entire loss minimization framework in machine learning came about from thinking about celestial bodies.

Comment by IlyaShpitser on An Open Philanthropy grant proposal: Causal representation learning of human preferences · 2022-01-12T21:45:36.680Z · LW · GW

Classical RL isn't causal, because there's no confounding (although I think it is very useful to think about classical RL causally, for doing inference more efficiently).

Various extensions of classical RL are causal, of course.

A lot of interesting algorithmic fairness isn't really causal.  Classical prediction problems aren't causal.


However, I think domain adaptation, covariate shift, and semi-supervised learning are all causal problems.

---

I think predicting things you have no data on ("what if the AI does something we didn't foresee") is sort of an impossible problem via tools in "data science."  You have no data!

Comment by IlyaShpitser on An Open Philanthropy grant proposal: Causal representation learning of human preferences · 2022-01-11T18:12:31.174Z · LW · GW

A few comments:

(a) I think "causal representation learning" is too vague; this overview (https://arxiv.org/pdf/2102.11107.pdf) talks about a lot of different problems I would consider fairly unrelated under this same heading.

(b) I would try to read "classical causal inference" stuff.  There is a lot of reinventing of the wheel (often, badly) happening in the causal ML space.

(c) What makes a thing "causal" is a distinction between a "larger" distribution we are interested in, and a "smaller" distribution we have data on.  Lots of problems might look "causal" but really aren't (in an interesting way) if formalized properly.

Please tell Victor I said hi, if you get a chance :).

Comment by IlyaShpitser on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-04T02:49:20.339Z · LW · GW

I gave a talk at FHI ages ago on how to use causal graphs to solve Newcomb-type problems.  It wasn't even an original idea: Spohn had something similar in 2012.

I don't think any of this stuff is interesting, or relevant for AI safety.  There's a pretty big literature on model robustness and algorithmic fairness that uses causal ideas.

If you want to worry about the end of the world, we have climate change, pandemics, and the rise of fascism.

Comment by IlyaShpitser on $1000 USD prize - Circular Dependency of Counterfactuals · 2022-01-02T19:32:00.751Z · LW · GW

Counterfactuals (in the potential outcome sense used in statistics) and Pearl's structural equation causality semantics are equivalent.
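
A toy sketch of one direction of that equivalence (hypothetical structural equations, chosen only for illustration): the structural equation model generates the potential outcomes, and the factual outcome satisfies consistency, Y = Y(A).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
U_A, U_Y = rng.normal(size=n), rng.normal(size=n)    # exogenous noise

f_A = lambda u: (u > 0).astype(int)                  # structural equation for A
f_Y = lambda a, u: 2 * a + u                         # structural equation for Y

A = f_A(U_A)                                         # factual treatment
Y0, Y1 = f_Y(0, U_Y), f_Y(1, U_Y)                    # potential outcomes Y(0), Y(1)
Y = f_Y(A, U_Y)                                      # factual outcome

assert np.allclose(Y, np.where(A == 1, Y1, Y0))      # consistency: Y = Y(A)
```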

Comment by IlyaShpitser on Omicron: My Current Model · 2021-12-30T00:17:05.262Z · LW · GW

Could you do readers an enormous favor and put references in when you say stuff like this:

"Vitamin D and Zinc, and if possible Fluvoxamine, are worth it if you get infected, also Vitamin D is worth taking now anyway (I take 5k IUs/day)."

Comment by IlyaShpitser on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T02:03:33.506Z · LW · GW

"MIRI/CFAR is not a cult."

What does being a cult space monkey feel like from the inside?

This entire depressing thread is reminding me a little of how long it took folks who watch Rick and Morty to realize Rick is an awful abusive person, because he's the show's main character, and isn't "coded" as a villain.

Comment by IlyaShpitser on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T15:37:44.205Z · LW · GW

+1 to all this.

Comment by IlyaShpitser on Dominic Cummings : Regime Change #2: A plea to Silicon Valley · 2021-10-05T22:02:35.549Z · LW · GW

I am not going to waste my time arguing against formalism.  When it comes to things like formalism I am going to follow in my grandfather's footsteps, if it comes time to "have an argument" about it.

Comment by IlyaShpitser on Dominic Cummings : Regime Change #2: A plea to Silicon Valley · 2021-10-05T18:29:18.063Z · LW · GW

What Cummings is proposing is formalism with a thin veneer of Silicon Valley jargon, like "startups" or whatever, designed to be palatable to people like the ones who frequent this website.

He couldn't be clearer about where his influences are coming from; he cites them at the end.  It's Moldbug and Siskind (Siskind's email leaks show what his real opinions are; he's just being a bit coy).

The proposed system is not going to be more democratic, it is going to be more formalist.

Comment by IlyaShpitser on Dominic Cummings : Regime Change #2: A plea to Silicon Valley · 2021-10-04T21:11:05.040Z · LW · GW

Fascism is bad, Christian.

Comment by IlyaShpitser on Factors of mental and physical abilities - a statistical analysis · 2021-08-18T21:56:41.753Z · LW · GW

My response is that we have fancy computers and lots of storage -- there's no need to do psychometric models of the brain with one parameter anymore; we can leave that to the poor folks in the early 1900s.

How many parameters does a good model of the game of Go have, again?  The human brain is a lot more complicated, still.

There are lots of ways to show single-parameter models are silly, for example the discussions of whether Trump is "stupid" or not that keep going around in circles.

Comment by IlyaShpitser on Factors of mental and physical abilities - a statistical analysis · 2021-08-18T13:17:16.439Z · LW · GW

"Well, suppose that factor analysis was a perfect model. Would that mean that we're all born with some single number g that determines how good we are at thinking?"

"Determines" is a causal word.  Factor analysis will not determine causality for you.

I agree with your conclusion, though: g is not a real thing that exists.

Comment by IlyaShpitser on Question about Test-sets and Bayesian machine learning · 2021-08-09T17:53:42.135Z · LW · GW

Start here: https://en.wikipedia.org/wiki/Bayes_estimator

Comment by IlyaShpitser on We have some evidence that masks work · 2021-07-12T21:54:58.895Z · LW · GW

You should be doing stuff like this if you want to understand the effects of masks:

https://arxiv.org/pdf/2103.04472.pdf 

Comment by IlyaShpitser on Progress on Causal Influence Diagrams · 2021-07-01T16:35:46.809Z · LW · GW

https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).

---

Can't do stuff in the second paper without worrying about stuff in the first (unless your model is very simple).

Comment by IlyaShpitser on Progress on Causal Influence Diagrams · 2021-06-30T15:58:31.683Z · LW · GW

Pretty interesting.

Since you are interested in policies that operate along some paths only, you might find these of interest:

https://pubmed.ncbi.nlm.nih.gov/31565035/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330047/

We have some recent stuff on generalizing MDPs to have a causal model inside every state ('path dependent structural equation models', to appear in UAI this year).
 

Comment by IlyaShpitser on Trying to approximate Statistical Models as Scoring Tables · 2021-06-29T21:05:31.938Z · LW · GW

https://arxiv.org/pdf/1207.4124.pdf

Comment by IlyaShpitser on Deep limitations? Examining expert disagreement over deep learning · 2021-06-27T12:58:05.919Z · LW · GW

3: No, that will never work with DL by itself (e.g. as fancy regressions).

4: No, that will never work with DL by itself (e.g. as fancy regressions).

5: I don't understand this question, but people already use DL for RL, so the "support" part is already true.  If the question is asking whether DL can substitute for doing interventions, then the answer is a very qualified "yes," but the secret sauce isn't DL, it's other things (e.g. causal inference) that use DL as a subroutine.

---

The problem is, most folks who aren't doing data science for a living themselves only view data science advances through the lens of hype, fashion trends, and press releases, and so get an entirely wrong sense of what is truly groundbreaking and important.

Comment by IlyaShpitser on Bayesian and Frequentists Approaches to Statistical Analysis · 2021-04-26T20:52:59.430Z · LW · GW

"If there is, I don't know it."

There's a ton of work on general sensitivity analysis in the semi-parametric stats literature.

Comment by IlyaShpitser on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-31T14:02:59.155Z · LW · GW

If there is really both reverse causation and regular causation between Xr and Y, you have a cycle, and you have to explain what the semantics of that cycle are (not a deal breaker, but not so simple to do; for example, if you think the cycle really represents mutual causation over time, what you really should do is unroll your causal diagram so it's a DAG over time, and redo the problem there).
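
A tiny sketch of what unrolling looks like (my own toy dynamics, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5, 1000
X = np.zeros((T, n)); Y = np.zeros((T, n))
X[0], Y[0] = rng.normal(size=n), rng.normal(size=n)
for t in range(1, T):
    X[t] = 0.5 * Y[t - 1] + rng.normal(size=n)   # X_t <- Y_{t-1}
    Y[t] = 0.5 * X[t - 1] + rng.normal(size=n)   # Y_t <- X_{t-1}
# The unrolled graph over (X_0, Y_0, X_1, Y_1, ...) is acyclic, so ordinary DAG
# semantics and identification machinery apply, even though the "collapsed"
# picture X <-> Y looks cyclic.
```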

You might be interested in this paper (https://arxiv.org/pdf/1611.09414.pdf) that splits the outcome rather than the treatment (although I don't really endorse that paper).

---


The real question is, why should Xc be unconfounded with Y?  In an RCT you get lack of confounding by study design (but then we don't need to split the treatment at all).  But this is not really realistic in general -- can you think of some practical examples where you would get lucky in this way?

Comment by IlyaShpitser on We got what's needed for COVID-19 vaccination completely wrong · 2021-02-10T17:09:50.835Z · LW · GW

Christian, I don't usually post here anymore, but I am going to reiterate a point I made recently: advocating for a vaccine that isn't adequately tested is coming close to health fraud.

Testing requirements are fairly onerous, but that is for a good reason.

Comment by IlyaShpitser on Making Vaccine · 2021-02-09T19:40:16.318Z · LW · GW

Recommending this to others seems to be coming pretty close to health fraud.

The reasonably ponderous systems in place for checking if things work and aren't too risky are there for a reason.

Comment by IlyaShpitser on Covid 2/4: Safe and Effective Vaccines Aplenty · 2021-02-08T14:14:49.581Z · LW · GW

"That’s the test. Would you put it in your arm rather than do nothing? And if the answer here is no, then, please, show your work."


Seems to be an odd position to take, shifting the burden of proof onto the vaccine taker rather than the scientist.

---

I think a lot of people, you included, are way overconfident on how transmissible B.1.1.7 is.

Comment by IlyaShpitser on No nonsense version of the "racial algorithm bias" · 2021-01-22T13:43:49.231Z · LW · GW

90% of the work ought to go into figuring out what fairness measure you want and why.  Not so easy.  Also not really a "math problem."  Most ML papers on fairness just solve math problems.

Comment by IlyaShpitser on Covid 9/10: Vitamin D · 2020-09-13T15:58:22.764Z · LW · GW

A whole paper, huh.

---

I am contesting the whole Extremely Online Lesswrong Way<tm> of engaging with the world whereby people post a lot and pontificate, rather than spending all day reading actual literature, or doing actual work.

Comment by IlyaShpitser on Covid 9/10: Vitamin D · 2020-09-11T16:43:28.621Z · LW · GW

"Unless you’d put someone vulnerable at risk, why are you letting another day of your life go by not living it to its fullest? "

As soon as you start advocating behavior changes based on associational evidence, you leave the path of wisdom.

---

You sure seem to have a lot of opinions about statisticians being conservative about making claims, without bothering to read up on the relevant history and why this conservatism might have developed in the field.

Comment by IlyaShpitser on What is the interpretation of the do() operator? · 2020-08-28T01:42:00.269Z · LW · GW

You can read Halpern's stuff if you want an axiomatization of something like the responses to the do-operator.

Or you can try to understand the relationship of do() and counterfactual random variables, and try to formulate causality as a missing data problem (whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process).

Comment by IlyaShpitser on RFC: COVID-19 Statistical Guilt · 2020-08-06T15:18:05.594Z · LW · GW

http://www.mit.edu/~maxkw/pdfs/halpern2018towards.pdf

Comment by IlyaShpitser on Writing Causal Models Like We Write Programs · 2020-05-06T16:54:56.684Z · LW · GW

How is this different from just a regular imperative programming language with imperative assignment?

Causal models are just programs (with random inputs, and certain other restrictions if you want to be able to represent them as DAGs). The do() operator is just imperative assignment.
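
A minimal sketch of that reading (a hypothetical two-variable model, purely for illustration):

```python
import random

def model(do_x=None):
    u1, u2 = random.gauss(0, 1), random.gauss(0, 1)   # random inputs (exogenous noise)
    x = u1 if do_x is None else do_x                  # do(X := x) overwrites this assignment
    y = 2 * x + u2                                    # Y's line of the "program" is untouched
    return x, y

observational = [model() for _ in range(1000)]             # samples from P(X, Y)
interventional = [model(do_x=1.0) for _ in range(1000)]    # samples from P(X, Y | do(X = 1))
```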

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T16:13:58.334Z · LW · GW

Here are directions: https://www.instructables.com/id/The-Pandemic-Ventilator/

I think the sorts of people I want to see this site will know what to do with the information on it.

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T15:20:35.923Z · LW · GW

Medical information on covid-19: https://emcrit.org/ibcc/covid19/

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T15:19:24.610Z · LW · GW

https://panvent.blogspot.com/ <- Spread this to your biomedical engineering friends, or any hobbyist who can build things. We need to ramp up ventilator capacity, now. Even if these are only 80% as good as a high-tech ventilator, they are cheap to make and will save lives.

There's a long history of designing and making devices like these for Third World places that need them. We will need these soon, here and everywhere.

Comment by IlyaShpitser on Announcing the AI Alignment Prize · 2018-02-03T21:08:09.874Z · LW · GW

Some references to lesswrong, and value alignment there.

Comment by IlyaShpitser on Announcing the AI Alignment Prize · 2017-12-16T07:09:28.016Z · LW · GW

Anyone going to the AAAI ethics/safety conference?

Comment by IlyaShpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:54:55.011Z · LW · GW

One of my favorite examples of a smart person being confused about something is ET Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their own area.

Comment by IlyaShpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:05:05.785Z · LW · GW

You are really confused about statistics and learning, and possibly also about formal languages in theoretical CS. I neither want nor have time to get into this with you; I just wanted to point this out for your potential benefit.

Comment by IlyaShpitser on Teaching rationality in a lyceum · 2017-12-06T17:06:54.853Z · LW · GW

http://callingbullshit.org/syllabus.html

(This is not "Yudkowskian Rationality" though.)

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T21:49:30.790Z · LW · GW

Dear Christian, please don't pull rank on my behalf. I don't think this is productive to do, and I don't want to bring anyone else into this.