Posts

Comments

Comment by IlyaShpitser on Factors of mental and physical abilities - a statistical analysis · 2021-08-18T21:56:41.753Z · LW · GW

My response is we have fancy computers and lots of storage -- there's no need to do psychometric models of the brain with one parameter anymore, we can leave that to the poor folks in the early 1900s.

How many parameters does a good model of the game of Go have, again?  The human brain is a lot more complicated, still.

There are lots of ways to show single-parameter models are silly, for example the discussions of whether Trump is "stupid" or not that keep going around in circles.

Comment by IlyaShpitser on Factors of mental and physical abilities - a statistical analysis · 2021-08-18T13:17:16.439Z · LW · GW

"Well, suppose that factor analysis was a perfect model. Would that mean that we're all born with some single number g that determines how good we are at thinking?"

"Determines" is a causal word.  Factor analysis will not determine causality for you.

I agree with your conclusion, though: g is not a real thing that exists.

Comment by IlyaShpitser on Question about Test-sets and Bayesian machine learning · 2021-08-09T17:53:42.135Z · LW · GW

Start here: https://en.wikipedia.org/wiki/Bayes_estimator

Comment by IlyaShpitser on We have some evidence that masks work · 2021-07-12T21:54:58.895Z · LW · GW

Should be doing stuff like this, if you want to understand effects of masks:

https://arxiv.org/pdf/2103.04472.pdf 

Comment by IlyaShpitser on Progress on Causal Influence Diagrams · 2021-07-01T16:35:46.809Z · LW · GW

https://auai.org/uai2021/pdf/uai2021.89.preliminary.pdf (this really is preliminary, e.g. they have not yet uploaded a newer version that incorporates peer review suggestions).

---

Can't do stuff in the second paper without worrying about stuff in the first (unless your model is very simple).

Comment by IlyaShpitser on Progress on Causal Influence Diagrams · 2021-06-30T15:58:31.683Z · LW · GW

Pretty interesting.

Since you are interested in policies that operate along some paths only, you might find these of interest:

https://pubmed.ncbi.nlm.nih.gov/31565035/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6330047/

We have some recent stuff on generalizing MDPs to have a causal model inside every state ('path dependent structural equation models', to appear in UAI this year).
 

Comment by IlyaShpitser on Trying to approximate Statistical Models as Scoring Tables · 2021-06-29T21:05:31.938Z · LW · GW

https://arxiv.org/pdf/1207.4124.pdf

Comment by IlyaShpitser on Deep limitations? Examining expert disagreement over deep learning · 2021-06-27T12:58:05.919Z · LW · GW

3: No, that will never work with DL by itself (e.g. as fancy regressions).

4: No, that will never work with DL by itself (e.g. as fancy regressions).

5: I don't understand this question, but people already use DL for RL, so the "support" part is already true.  If the question is asking whether DL can substitute for doing interventions, then the answer is a very qualified "yes," but the secret sauce isn't DL, it's other things (e.g. causal inference) that use DL as a subroutine.

---

The problem is, most folks who aren't doing data science for a living themselves view data science advances only through the lens of hype, fashion trends, and press releases, and so get an entirely wrong sense of what is truly groundbreaking and important.

Comment by IlyaShpitser on Bayesian and Frequentists Approaches to Statistical Analysis · 2021-04-26T20:52:59.430Z · LW · GW

"If there is, I don’t know it."

There's a ton of work on general sensitivity analysis in the semi-parametric stats literature.

Comment by IlyaShpitser on Many methods of causal inference try to identify a "safe" subset of variation · 2021-03-31T14:02:59.155Z · LW · GW

If there is really both reverse causation and regular causation between Xr and Y, you have a cycle, and you have to explain what the semantics of that cycle are. (Not a deal breaker, but not so simple to do. For example, if you think the cycle really represents mutual causation over time, what you really should do is unroll your causal diagram so it's a DAG over time, and redo the problem there.)
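
As a toy sketch (my own, with made-up coefficients) of what unrolling means: each time-slice variable becomes its own node, so mutual causation between Xr and Y turns into an acyclic graph over time.

```python
import random

random.seed(0)

# Mutual causation Xr <-> Y, unrolled into a DAG over time:
# Xr_t depends on Y_{t-1} and Y_t depends on Xr_{t-1}, so each
# time-slice variable is a distinct node and there are no cycles.
T = 5
xr = [random.gauss(0, 1)]
y = [random.gauss(0, 1)]
for t in range(1, T):
    xr.append(0.5 * y[t - 1] + random.gauss(0, 0.1))
    y.append(0.5 * xr[t - 1] + random.gauss(0, 0.1))

print(len(xr), len(y))  # 5 5: one node per variable per time slice
```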

You might be interested in this paper (https://arxiv.org/pdf/1611.09414.pdf) that splits the outcome rather than the treatment (although I don't really endorse that paper).

---


The real question is, why should Xc be unconfounded with Y?  In an RCT you get lack of confounding by study design (but then we don't need to split the treatment at all).  But this is not really realistic in general -- can you think of some practical examples where you would get lucky in this way?

Comment by IlyaShpitser on We got what's needed for COVID-19 vaccination completely wrong · 2021-02-10T17:09:50.835Z · LW · GW

Christian, I don't usually post here anymore, but I am going to reiterate a point I made recently: advocating for a vaccine that isn't adequately tested is coming close to health fraud.

Testing requirements are fairly onerous, but that is for a good reason.

Comment by IlyaShpitser on Making Vaccine · 2021-02-09T19:40:16.318Z · LW · GW

Recommending this to others seems to be coming pretty close to health fraud.

The reasonably ponderous systems in place for checking if things work and aren't too risky are there for a reason.

Comment by IlyaShpitser on Covid 2/4: Safe and Effective Vaccines Aplenty · 2021-02-08T14:14:49.581Z · LW · GW

"That’s the test. Would you put it in your arm rather than do nothing? And if the answer here is no, then, please, show your work."


Seems to be an odd position to take, to shift the burden of proof onto the vaccine taker rather than the scientist.

---

I think a lot of people, you included, are way overconfident on how transmissible B.1.1.7. is.

Comment by IlyaShpitser on No nonsense version of the "racial algorithm bias" · 2021-01-22T13:43:49.231Z · LW · GW

90% of the work ought to go into figuring out what fairness measure you want and why.  Not so easy.  Also not really a "math problem."  Most ML papers on fairness just solve math problems.
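
A toy illustration (all counts invented) of why picking the measure is the hard part: the same classifier can satisfy one popular fairness criterion while violating another.

```python
# Confusion counts per group: (true_pos, false_pos, false_neg, true_neg).
# Numbers are made up purely for illustration.
groups = {
    "A": (40, 10, 10, 40),
    "B": (30, 20, 20, 30),
}

for name, (tp, fp, fn, tn) in groups.items():
    n = tp + fp + fn + tn
    positive_rate = (tp + fp) / n   # what demographic parity compares
    tpr = tp / (tp + fn)            # what equalized odds compares
    print(name, positive_rate, tpr)

# Both groups are flagged at rate 0.5, so demographic parity holds,
# but the true positive rates are 0.8 vs 0.6, so equalized odds fails.
```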

Comment by IlyaShpitser on Covid 9/10: Vitamin D · 2020-09-13T15:58:22.764Z · LW · GW

A whole paper, huh.

---

I am contesting the whole Extremely Online Lesswrong Way<tm> of engaging with the world whereby people post a lot and pontificate, rather than spending all day reading actual literature, or doing actual work.

Comment by IlyaShpitser on Covid 9/10: Vitamin D · 2020-09-11T16:43:28.621Z · LW · GW

"Unless you’d put someone vulnerable at risk, why are you letting another day of your life go by not living it to its fullest? "

As soon as you start advocating behavior changes based on associational evidence you leave the path of wisdom.

---

You sure seem to have a lot of opinions about statisticians being conservative about making claims, without bothering to read up on the relevant history and why this conservatism might have developed in the field.

Comment by IlyaShpitser on What is the interpretation of the do() operator? · 2020-08-28T01:42:00.269Z · LW · GW

You can read Halpern's stuff if you want an axiomatization of something like the responses to the do-operator.

Or you can try to understand the relationship of do() and counterfactual random variables, and try to formulate causality as a missing data problem (whereby a full data distribution on counterfactuals and an observed data distribution on factuals are related via a coarsening process).

Comment by IlyaShpitser on RFC: COVID-19 Statistical Guilt · 2020-08-06T15:18:05.594Z · LW · GW

http://www.mit.edu/~maxkw/pdfs/halpern2018towards.pdf

Comment by IlyaShpitser on Writing Causal Models Like We Write Programs · 2020-05-06T16:54:56.684Z · LW · GW

How is this different from just a regular imperative programming language with imperative assignment?

Causal models are just programs (with random inputs, and certain other restrictions if you want to be able to represent them as DAGs). The do() operator is just imperative assignment.
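
This correspondence is easy to make concrete. The sketch below is mine, not from the post: a toy linear model with invented coefficients, where each structural equation is one assignment and do() simply overwrites one of them.

```python
import random

def scm(do_x=None):
    # Exogenous noise terms: the program's random inputs.
    u_x = random.gauss(0, 1)
    u_y = random.gauss(0, 1)
    # Structural equations, one imperative assignment each.
    x = u_x
    if do_x is not None:
        x = do_x              # do(X = x) is just imperative reassignment
    y = 2 * x + u_y
    return x, y

random.seed(0)
samples = [scm(do_x=3.0) for _ in range(10_000)]
mean_y = sum(y for _, y in samples) / len(samples)
print(round(mean_y, 1))  # close to 2 * 3.0 = 6.0
```

Calling `scm()` with no argument samples the observational distribution; passing `do_x` samples the interventional one, with no change to the rest of the program.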

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T16:13:58.334Z · LW · GW

Here are directions: https://www.instructables.com/id/The-Pandemic-Ventilator/

I think the sorts of people I want to see this website will know what to do with the information on it.

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T15:20:35.923Z · LW · GW

Medical information on covid-19: https://emcrit.org/ibcc/covid19/

Comment by IlyaShpitser on LessWrong Coronavirus Link Database · 2020-03-17T15:19:24.610Z · LW · GW

https://panvent.blogspot.com/ <- Spread this to your biomedical engineering friends, or any hobbyist who can build things. We need to ramp up ventilator capacity, now. Even if they are 80% as good as a high tech one, but cheap to make, they will save lives.

There's a long history of designing and making devices like these for the Third World places that need them. We will need these soon, here and everywhere.

Comment by IlyaShpitser on Announcing the AI Alignment Prize · 2018-02-03T21:08:09.874Z · LW · GW

Some references to lesswrong, and value alignment there.

Comment by IlyaShpitser on Announcing the AI Alignment Prize · 2017-12-16T07:09:28.016Z · LW · GW

anyone going to the AAAI ethics/safety conf?

Comment by IlyaShpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-08T22:54:55.011Z · LW · GW

One of my favorite examples of a smart person being confused about something is ET Jaynes being confused about Bell inequalities.

Smart people are confused all the time, even (perhaps especially) in their area.

Comment by IlyaShpitser on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:05:05.785Z · LW · GW

You are really confused about statistics and learning, and possibly also about formal languages in theoretical CS. I neither want nor have time to get into this with you, just wanted to point this out for your potential benefit.

Comment by IlyaShpitser on Teaching rationality in a lyceum · 2017-12-06T17:06:54.853Z · LW · GW

http://callingbullshit.org/syllabus.html

(This is not "Yudkowskian Rationality" though.)

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T21:49:30.790Z · LW · GW

Dear Christian, please don't pull rank on my behalf. I don't think this is productive to do, and I don't want to bring anyone else into this.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T22:49:15.327Z · LW · GW

well, using philosophy i did the hard part and figured out which ones are good.

http://existentialcomics.com/comic/191

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-01T02:10:45.597Z · LW · GW

Who are you talking to? To the audience? To the fourth wall?

Surely not to me, I have no sway here.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-30T19:38:28.069Z · LW · GW

Your sockpuppet: "There is a shortage of good philosophers."

Me: "Here is a good philosophy book."

You: "That's not philosophy."

Also you: "How is Ayn Rand so right about everything."

Also you: "I don't like mainstream stuff."

Also you: "Have you heard that I exchanged some correspondence with DAVID DEUTSCH!?"

Also you: "What if you are, hypothetically, wrong? What if you are, hypothetically, wrong? What if you are, hypothetically, wrong?" x1000


Part of rationality is properly dealing with people-as-they-are. What your approach to spreading your good word among people-as-they-are led to is them laughing at you.

It is possible that they are laughing at you because they are some combination of stupid and insane. But then it's on you to first issue a patch into their brain that will be accepted, such that they can parse your proselytizing, before proceeding to proselytize.

This is what Yudkowsky sort of tried to do.


How you read to me is a smart young adult who has the same problem Yudkowsky has (although Yudkowsky is not so young anymore) -- someone who has been the smartest person in the room for too long in their intellectual development, and lacks the sense of scale and context to see where he stands in the larger intellectual community.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T23:13:32.336Z · LW · GW

Spirtes, Glymour, and Scheines, for starters. They have a nice book. There are other folks in that department who are working on converting mathematical foundations into an axiomatic system where proofs can be checked by a computer.

I am not going to do leg work for you, and your minions, however. You are the ones claiming there are no good philosophers. It's your responsibility to read, and keep your mouth shut if you are not sure about something.

It's not my responsibility to teach you.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-29T22:17:18.145Z · LW · GW

I know lots of folks at CMU who are good.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T23:40:01.837Z · LW · GW

Jerzy Neyman gets credit for lots of things, but in particular in my neck of the woods for inventing the potential outcome notation. This is the notation for "if the first object had not been, the second never had existed" in Hume's definition of causation.
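
A toy sketch (invented numbers) of that notation: each unit carries two potential outcomes, Y(0) and Y(1), and Hume's counterfactual ("if the first object had not been") is the one you never get to observe.

```python
import random

random.seed(0)

units = []
for _ in range(1000):
    y0 = random.gauss(0, 1)      # Y(0): outcome had the unit not been treated
    y1 = y0 + 1.5                # Y(1): outcome had the unit been treated
    a = random.random() < 0.5    # randomized treatment assignment
    observed = y1 if a else y0   # only one potential outcome is ever seen
    units.append((a, observed, y0, y1))

# Average treatment effect E[Y(1) - Y(0)]; exactly 1.5 here by construction.
ate = sum(y1 - y0 for _, _, y0, y1 in units) / len(units)
print(round(ate, 1))  # 1.5
```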

Comment by IlyaShpitser on Open thread, October 30 - November 5, 2017 · 2017-11-28T21:50:37.814Z · LW · GW

Oof.

Comment by IlyaShpitser on Open Letter to MIRI + Tons of Interesting Discussion · 2017-11-28T13:21:04.779Z · LW · GW

Hi, Hume's constant conjunction stuff I think has nothing to do with free lunch theorems in ML (?please correct me if I am missing something?), and has to do with defining causation, an issue Hume was worried about all his life (and ultimately solved, imo, via his counterfactual definition of causality that we all use today, by way of Neyman, Rubin, Pearl, etc.).

Comment by IlyaShpitser on LW 2.0 Open Beta Live · 2017-11-25T19:44:29.504Z · LW · GW

Sorry, did you say weird/esoteric technology?

https://www.destroyallsoftware.com/talks/wat

https://www.destroyallsoftware.com/talks/the-birth-and-death-of-javascript

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-22T21:25:26.140Z · LW · GW

I guess the way I would slice disciplines is like this:

(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)

(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.

Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.


Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.

Re: (b), why are we talking about (b) at all?

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-22T15:24:07.566Z · LW · GW

Yeah, credentials are a poor way of judging things.

They are not, though. It's standard "what LW calls 'Bayes' and what I call 'reasoning under uncertainty'" -- you condition on things associated with the outcome, since those things carry information. Outcome (O) -- having a clue, thing (C) -- credential. p(O | C) > p(O), so your credence in O should be computed after conditioning on C, on pain of irrationality. Specifically, the type of irrationality where you leave information on the table.


You might say "oh, I heard about how argument screens authority." This is actually not true though, even by "LW Bayesian" lights, because you can never be certain you got the argument right (or the presumed authority got the argument right). It also assumes there are no other paths from C to O except through argument, which isn't true.

It is a foundational thing you do when reasoning under uncertainty to condition on everything that carries information. The more informative the thing, the worse it is not to condition on it. This is not a novel crazy thing I am proposing, this is bog standard.
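
The toy arithmetic (all numbers invented) behind p(O | C) > p(O): conditioning on an informative signal moves the posterior, exactly as claimed.

```python
p_clue = 0.10                 # prior p(O): having a clue
p_cred_given_clue = 0.80      # p(C | O)
p_cred_given_not = 0.20      # p(C | not O)

# Bayes' rule: p(O | C) = p(C | O) p(O) / p(C)
p_cred = p_cred_given_clue * p_clue + p_cred_given_not * (1 - p_clue)
posterior = p_cred_given_clue * p_clue / p_cred

print(round(posterior, 3))  # 0.308, up from the prior of 0.10
```

Leaving C out of the computation means reporting 0.10 when the information in hand supports 0.308, which is the sense in which ignoring it leaves information on the table.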


The way the treatment of credentialism seems to work in practice on LW is a reflexive rejection of "experts" writ large, except for an explicitly enumerated subset (perhaps ones EY or other "recognized community thought leaders" liked).

This is a part of community DNA, starting with EY's stuff, and Luke's "philosophy is a diseased discipline."

That is crazy.

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-20T15:49:36.939Z · LW · GW

Throwing books at someone is generally known as "courtier's reply".

The issue here also is Brandolini's law:

"The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it."


The problem with the "courtier's reply" is you could always appeal to it, even if Scott Aaronson is trying to explain something about quantum mechanics to you, and you need some background (found in references 1, 2, and 3) to understand what he is saying.


There is a type 1 / type 2 error tradeoff here. Ignoring legit expert advice is bad, but being cowed by an idiot throwing references at you is also bad.

As usual with tradeoffs like these, one has to decide on a policy that is willing to tolerate some of one type of error to keep the error you care about to some desired level.


I think a good heuristic for deciding who is an expert and who is an idiot with references is credentialism. But credentialism has a bad brand here, due to LW's love affair with amateurism. One of the consequences of this love affair is that a lot of folks here make the above tradeoff badly (in particular, they ignore legit advice to read way too frequently).

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-18T20:00:26.435Z · LW · GW

Everything you say in your post, about Popper issues, demonstrates huge ignorance.

Do you even know the name of Popper's philosophy?

It seems that you're completely out of your depth.

The reason you have trouble applying reason is b/c u understand reason badly.


I have a thought. Since you are a philosopher, would your valuable time not be better spent doing activities philosophers engage in, such as writing papers for philosophy journals?

Rather than arguing with people on the internet?


If you are here because you are fishing for people to go join your forum, may I suggest that this place is an inefficient use of your time? It's mostly dead now, and will be fully dead soon.

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T19:26:47.312Z · LW · GW

I don't think you and I have much to talk about.

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T14:45:21.632Z · LW · GW

If you have a job and a family, and don't have time to get into what Popper actually said, maybe don't offer your opinion on what Popper actually said? That's just introducing bad stuff into a discussion for no reason.

Wovon man nicht sprechen kann, darüber muss man schweigen. ("Whereof one cannot speak, thereof one must be silent.")


"The virtue of silence."

Comment by IlyaShpitser on Less Wrong Lacks Representatives and Paths Forward · 2017-11-17T01:18:45.615Z · LW · GW

You should probably actually read Popper before putting words in his mouth.

"According to Popper, no matter how much scientific evidence we have in favor of e.g. theory of relativity, all it needs is one experiment that will falsify it, and then all good scientists should stop believing in it."

You found this claim in a book of his? Or did you read some Wikipedia, or what?

For example, this is a quote from the Stanford Encyclopedia of Philosophy:

Popper has always drawn a clear distinction between the logic of falsifiability and its applied methodology. The logic of his theory is utterly simple: if a single ferrous metal is unaffected by a magnetic field it cannot be the case that all ferrous metals are affected by magnetic fields. Logically speaking, a scientific law is conclusively falsifiable although it is not conclusively verifiable. Methodologically, however, the situation is much more complex: no observation is free from the possibility of error—consequently we may question whether our experimental result was what it appeared to be.

Thus, while advocating falsifiability as the criterion of demarcation for science, Popper explicitly allows for the fact that in practice a single conflicting or counter-instance is never sufficient methodologically to falsify a theory, and that scientific theories are often retained even though much of the available evidence conflicts with them, or is anomalous with respect to them.

You guys still do that whole "virtue of scholarship" thing, or what?

Comment by IlyaShpitser on Stupid Questions September 2017 · 2017-11-15T19:09:29.317Z · LW · GW

It is very annoying that

любой is translated both as "any" and "every."

какой-либо is closer to formal logical "there exists" or "any."

Comment by IlyaShpitser on Stupid Questions September 2017 · 2017-11-15T19:00:13.603Z · LW · GW

Крымская татарка? ("A Crimean Tatar?")

Я одессит, родился в Крыму. ("I'm an Odessan, born in Crimea.")

Comment by IlyaShpitser on Stupid Questions September 2017 · 2017-11-15T14:30:23.403Z · LW · GW

It is possible to say that, but the work is being done by "combination." You can also say "for every permutation of n" and that means something different.

Typically when you say "for every x out of 30, property(x) holds" it means something like:

"every poster on lesswrong is a human being" (or more formally, "for every poster on lesswrong, that poster is a human being"). (Note: this statement is meaningful but probably evaluates to false.)


Quantification is always over a set. If you are talking about permutations, you are first making a set of all permutations of 30 things (of which there are 30 factorial), and then saying "for every permutation in this set of permutations, some property holds."
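
A tiny concrete version of this (3 items rather than 30, so the set of permutations stays small):

```python
from itertools import permutations

items = [1, 2, 3]
perms = list(permutations(items))   # the set quantified over: 3! = 6 elements
print(len(perms))                   # 6

# "For every permutation p in this set, property(p) holds":
prop = lambda p: len(p) == len(items)
print(all(prop(p) for p in perms))  # True
```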


edit: realized your native language might be Ukrainian: I think a similar issue exists in Ukrainian quantifier adjectives.

Comment by IlyaShpitser on Stupid Questions September 2017 · 2017-11-14T23:09:40.007Z · LW · GW

"Every" doesn't need an order.

"For every x, property(x) holds" means "it is not the case that for any x, property(x) does not hold."

"For any x, property(x) holds" means "it is not the case that for every x, property(x) does not hold."

In Russian, quantifier adjectives are often implicit, which could be a part of the problem here. Native Russian speakers (like me) often have problems with this, also with definite vs indefinite articles in English.

edit: not only implicit but ambiguous when explicit, too!


Person below is right, "every" is sort of like an infinite "AND" and "any" is sort of like an infinite "OR."
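
The two dual definitions above can be checked mechanically, reading "every" as a finite AND (`all`) and "any", in the existential sense, as a finite OR (`any`):

```python
xs = [1, 2, 3, 4]
prop = lambda x: x % 2 == 0   # "x is even", an example property

# "for every x, prop(x)"  ==  "not (for any x, not prop(x))"
assert all(prop(x) for x in xs) == (not any(not prop(x) for x in xs))

# "for any x, prop(x)"  ==  "not (for every x, not prop(x))"
assert any(prop(x) for x in xs) == (not all(not prop(x) for x in xs))

print(all(prop(x) for x in xs), any(prop(x) for x in xs))  # False True
```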

Comment by IlyaShpitser on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-10T14:45:18.751Z · LW · GW

"I would still say that cause and effect is a subset of the kind of models that are used in statistics."

You would be wrong, then. The subset relation is the other way around. Bayesian networks are not causal models, they are statistical independence models.

Compressing information has nothing to do with causality. No experimental scientist talks about causality like that, in any field. There is a big literature on something called "compressed sensing," for example, but that literature (correctly) does not generally make claims about causality.

"I'm not aware of a theory or a model that uses vastly different entities to explain and to predict."

I am.

You can't tune causal models (e.g. trade off bias/variance properly) in any kind of straightforward way, because the parameter of interest is never observed, unlike in standard regression models. Causal inference is a type of unsupervised problem, unless you have experimental data.

Rather than arguing with me about this, I suggest a more productive use of your time would be to just read some stuff on causal inference. You are implicitly smuggling in some definition you like that nobody uses.
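
To make the subset claim concrete, here is a toy simulation of my own (made-up coefficients): correlation between X and Y is symmetric, but the two interventional distributions are not, and that asymmetry is exactly the extra content a causal model carries beyond a statistical one.

```python
import random

random.seed(1)

def sample(n, do_x=None, do_y=None):
    """Ground truth: X -> Y with Y = 2X + noise; do() overrides an equation."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1) if do_x is None else do_x
        y = 2 * x + random.gauss(0, 0.1) if do_y is None else do_y
        xs.append(x)
        ys.append(y)
    return xs, ys

mean = lambda v: sum(v) / len(v)

_, y_do = sample(10_000, do_x=1.0)
print(round(mean(y_do), 1))   # about 2.0: setting X moves Y

x_do, _ = sample(10_000, do_y=1.0)
print(abs(mean(x_do)) < 0.1)  # True: setting Y leaves X alone
```

No amount of fitting the observational distribution distinguishes this model from one with the arrow reversed; only the interventional quantities do.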

Comment by IlyaShpitser on Simple refutation of the ‘Bayesian’ philosophy of science · 2017-11-09T22:04:58.618Z · LW · GW

"'Explanation', as far as the concept can be modelled mathematically, is fitness to data and low complexity."

Nope. To explain, e.g. to describe "why" something happened, is to talk about causes and effects. At least that's the way people use that word in practice.

Prediction and explanation are very very different.