Posts

Suspiciously balanced evidence 2020-02-12T17:04:20.516Z · score: 49 (16 votes)
"Future of Go" summit with AlphaGo 2017-04-10T11:10:40.249Z · score: 3 (4 votes)
Buying happiness 2016-06-16T17:08:53.802Z · score: 37 (40 votes)
AlphaGo versus Lee Sedol 2016-03-09T12:22:53.237Z · score: 19 (19 votes)
[LINK] "The current state of machine intelligence" 2015-12-16T15:22:26.596Z · score: 3 (4 votes)
Scott Aaronson: Common knowledge and Aumann's agreement theorem 2015-08-17T08:41:45.179Z · score: 15 (15 votes)
Group Rationality Diary, March 22 to April 4 2015-03-23T12:17:27.193Z · score: 6 (7 votes)
Group Rationality Diary, March 1-21 2015-03-06T15:29:01.325Z · score: 4 (5 votes)
Open thread, September 15-21, 2014 2014-09-15T12:24:53.165Z · score: 6 (7 votes)
Proportional Giving 2014-03-02T21:09:07.597Z · score: 6 (14 votes)
A few remarks about mass-downvoting 2014-02-13T17:06:43.216Z · score: 27 (44 votes)
[Link] False memories of fabricated political events 2013-02-10T22:25:15.535Z · score: 17 (20 votes)
[LINK] Breaking the illusion of understanding 2012-10-26T23:09:25.790Z · score: 19 (20 votes)
The Problem of Thinking Too Much [LINK] 2012-04-27T14:31:26.552Z · score: 7 (11 votes)
General textbook comparison thread 2011-08-26T13:27:35.095Z · score: 9 (10 votes)
Harry Potter and the Methods of Rationality discussion thread, part 4 2010-10-07T21:12:58.038Z · score: 5 (7 votes)
The uniquely awful example of theism 2009-04-10T00:30:08.149Z · score: 38 (48 votes)
Voting etiquette 2009-04-05T14:28:31.031Z · score: 10 (16 votes)
Open Thread: April 2009 2009-04-03T13:57:49.099Z · score: 5 (6 votes)

Comments

Comment by gjm on On Suddenly Not Being Able to Work · 2020-08-26T14:52:07.639Z · score: 5 (3 votes) · LW · GW

Here's a link to the paper whose abstract is quoted there.

Their main reported result is a bit weird: allegedly players aren't more likely to make suboptimal moves in the online tournament, but when they do, their suboptimal moves are somewhat worse.

Comment by gjm on On Defining your Terms · 2020-08-19T10:36:39.797Z · score: 14 (6 votes) · LW · GW

A collection of grains of rice is a pile if and only if (1) a majority of the rice grains are supported by other grains rather than whatever surface the pile is on and (2) you can get from any grain to any other grain by stepping from grain to grain, each step happening between grains whose surfaces are no further apart than 10% of the diameter of a median rice grain in the pile.
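Condition (2) is just graph connectivity. As an illustrative sketch (the positions and threshold are invented, and centre-to-centre distance stands in for the surface-gap test), one could check it by breadth-first search:

```python
from collections import deque

def is_connected(grains, touch_dist):
    """Condition (2): every grain reachable from every other by
    steps between grains no further apart than touch_dist
    (centre-to-centre distance as a proxy for the surface gap)."""
    if not grains:
        return False
    seen = {0}
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(len(grains)):
            if j not in seen:
                dist = sum((a - b) ** 2
                           for a, b in zip(grains[i], grains[j])) ** 0.5
                if dist <= touch_dist:
                    seen.add(j)
                    queue.append(j)
    return len(seen) == len(grains)
```

On this test a tight row of grains counts as one connected cluster, while two widely separated grains do not — no grain count appears anywhere in the check.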

Of course I don't claim that this is The One True Definition of "pile", but it kinda annoys me that heaps/piles have been the standard example of fuzziness about number for centuries, and that everyone assumes any somewhat arbitrary definition would have to take the form "at least N objects". In fact I think what fuzzily distinguishes piles from non-piles is something else: you can definitely have a pile with 6 objects in it, and you can definitely have 100 objects of the same type that don't form a pile, and one can give somewhat-plausible clear-cut definitions with no arbitrary numbers-of-objects in them.

This is a mostly-irrelevant tangent, but it's maybe worth pointing out explicitly the phenomenon that when you have some notion that you leave undefined, it's possible to be mistaken about what sort of notion it is. If you're having an argument about piles or heaps then you will likely go astray if you start asking "well, how many grains of rice do we need to have before we definitely have a heap?". Compare: "how complex does a nervous system have to be before we assign moral significance to the organism whose nervous system it is?", which is fairly clearly wrong in the same sort of way as treating "pile" as purely numerical.

Comment by gjm on Survey Results: 10 Fun Questions for LWers · 2020-08-19T10:19:28.973Z · score: 19 (8 votes) · LW · GW

I think that as well as noting the means and medians of the questions about "how much karma" and "how many dollars", it's worth pointing out explicitly that in both cases the modal value was zero. I think the zeros reflect, at least for some of the people saying zero, positions that are somewhat different in kind from the non-zeros. (A rejection of the very idea that you decide whether a post is worth reading by looking at its score; the opinion that there's no need at all for the book.)

That won't always be true; someone who would pay $3 for the book will probably have answered zero. Still, the fraction of zeros seems like a relevant statistic here in both those cases.

Comment by gjm on When Truth Isn't Enough · 2020-08-18T15:16:44.845Z · score: 2 (1 votes) · LW · GW

It appears to have been something like "denotation OK, connotations iffy". (Someone objects to "iffy" in one of the comments.)

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-17T22:02:54.935Z · score: 2 (1 votes) · LW · GW

Noted. (I take it "this one" means this post rather than requesting that I not acknowledge having read this comment.)

I don't 100% promise to comply (e.g., if I see you saying something importantly false and no one else comments on it, I might do so) but I'll leave your posts alone unless some need arises that trumps courtesy :-).

Since in connection with this you publicly slandered me over on your website, I will add that I consider your analysis there of my motives and purposes to be extremely wrong.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T11:09:07.135Z · score: 2 (1 votes) · LW · GW

Yep, "en masse" is vague, and what it turns out curi actually did -- which is less drastic than what his use of the word "mirrored" and his past history with LW led me to assume -- was not so very en masse as I feared. My apologies, again, for not checking.

I didn't, of course, claim to know what happens in every jurisdiction; the point of my "in every jurisdiction I know of" was the reverse of what you're taking it to be.

I don't know anything much about the law in Tuvalu and Mauritius, but I believe they are both signatories to the Berne Convention, which means that their laws on copyright are probably similar to everyone else's. The Berne Convention requires signatories to permit some quotation, and its test for what exceptions are OK doesn't give a great deal of leeway to allow more (see e.g. https://www.keionline.org/copyright/berne-convention-exceptions-revisions), so the situation there is probably similar to that in the UK (which is where I happen to be and where the site you linked to is talking about).

The general rule about quoting in the UK is that you're allowed to quote the minimum necessary (which is vague, but that's not my fault, because the law is also vague). What I (wrongly) thought curi had done would not, I think, be regarded as the minimum necessary to achieve a reasonable goal. But, again, what he actually did is not what I guessed, and what he did is OK.

If someone sees something I wrote on Google and takes an interest in it, the most likely result is that they follow Google's link and end up in the place where I originally wrote it, where they will see it in its original context. If someone sees something I wrote that curi has "mirrored" on his own site, the most likely result is that they see whatever curi has chosen to quote, along with his (frequently hostile) comments of which I may not even be aware since I am not a regular there, and comments from others there (again, likely hostile; again, of which I am not aware).

None of that means that curi shouldn't be allowed to quote what I said (to whatever extent is required for reasonable criticism and review, etc.) but I hope it makes it clearer why I might be more annoyed by curi's "mirroring" than Google's.

(Thanks for the update; as it happens I didn't see your comment until after you posted it. Not that there's any reason why you need care, but I approve of how you handled that.)

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T10:50:44.602Z · score: 3 (2 votes) · LW · GW

I had not looked, at that point; I took "mirrored" to mean taking copies of whole discussions, which would imply copying other people's writing en masse. I have looked, now. I agree that what you've put there so far is probably OK both legally and morally.

My apologies for being a bit twitchy on this point; I should maybe explain for the benefit of other readers that the last time curi came to LW, he did take a whole pile of discussion from the LW slack and copy it en masse to the publicly-visible internet, which is one reason why I thought it plausible he might have done the same this time.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T01:00:36.512Z · score: 1 (2 votes) · LW · GW

Quoting is a copyright violation in every jurisdiction I know of, if it's done en masse. Evidence to the contrary, please?

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T00:33:40.051Z · score: 2 (1 votes) · LW · GW

I think that attempting to discuss something as broad as "the basics of induction" might be problematic just because the topic is so broad. People mean a variety of different things by terms like "induction" or "inductivism" and there's a great danger of talking past one another.

For instance, the sort of induction principle I would (tentatively) endorse doesn't at first glance look like an induction principle at all: it's something along the lines of "all else being equal, prefer simpler propositions". There are lots of ways to do something along those lines, some are better than others, I don't claim to know the One True Best Way to do it, but I think this is the right approach. This gets you something like induction because theories in which things change gratuitously tend to be more complex. But whether you would call me an inductivist, I don't know. I am fairly sure we don't disagree about everything in this area, and it's quite possible that our relevant disagreements are not best thought of as disagreements about induction, as opposed to disagreements about (say) inference or probability or explanation or simplicity that have consequences for what we think about induction.

(My super-brief answers to your questions about induction, taking "induction" for this purpose to mean "the way I think we should use empirical evidence to arrive at generalized opinions": It's trying to solve the problem of how you discover things about the world that go beyond direct observations. "Solve" might be too strong a word, but it addresses it by giving a procedure that, if the world behaves in regular ways, will tend to move your beliefs into better correspondence with reality as you get more evidence. (It seems, so far, as if the world does behave in regular ways, but of course I am not taking that as anything like a deductive proof that this sort of procedure is correct; that would be circular.) You do it by (1) weighting your beliefs according to complexity in some fashion and then (2) adjusting them as new evidence comes in -- in one idealized version of the process you do #1 according to a "universal prior" and #2 according to Bayes' theorem, though in practice the universal prior is uncomputable and applying Bayes in difficult cases involves way too much computation, so you need to make do with approximations and heuristics. I do not, explicitly, claim that the future resembles the past (or, rather, I kinda do claim it, but not as an axiom but as an inductive generalization arrived at by the means under discussion); I prefer simpler explanations, and ones where the future resembles the past are often simpler. For evidence to support one claim over another, it needs to be more likely when the former claim is true than when the latter is; of course this doesn't follow merely from its being consistent with the former claim. Most evidence is consistent with most claims.)
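Steps #1 and #2 above can be sketched in miniature. This is only a toy (the hypothesis names, description lengths, and likelihoods are all invented for illustration, and a real universal prior is uncomputable): prior mass proportional to 2^-k for a hypothesis of description length k bits, then one Bayes update.

```python
def complexity_bayes(code_lengths, likelihoods):
    """Toy complexity-weighted Bayes: prior mass proportional to
    2**-k for a hypothesis describable in k bits (step #1), then a
    single Bayes-rule update on the evidence (step #2)."""
    prior = {h: 2.0 ** -k for h, k in code_lengths.items()}
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Made-up description lengths (bits) and P(evidence | hypothesis);
# both hypotheses fit all past observations equally well.
post = complexity_bayes(
    {"future resembles past": 10, "rules change tomorrow": 25},
    {"future resembles past": 1.0, "rules change tomorrow": 1.0},
)
```

Since the evidence doesn't discriminate between the two, the simpler hypothesis wins purely on prior complexity weighting — which is the sense in which "the future resembles the past" comes out as a conclusion rather than an axiom.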

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-15T20:02:15.411Z · score: 3 (2 votes) · LW · GW

I also saved a copy of much of the Slack discussion. (Not all of it -- there was a lot -- but substantial chunks of the bits that involved me.) Somehow, I managed to save those discussions without posting other people's writing on the public internet without their consent.

You do not have my permission (or I suspect anyone else's) to copy our writing on LW to your own website. Please remove it and commit to not doing it again. (If you won't, I suspect you might be heading for another ban.)

(I haven't looked yet at the more substantive stuff in your comment. Will do shortly. But please stop with the copyright violations already. Sheesh.)

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-15T10:34:26.209Z · score: 2 (1 votes) · LW · GW

I do not know of any literature that I am confident says the exact same things about induction as I would. (There might well be some literature I would completely agree with; my opinions are not super-idiosyncratic. In this context, though, the relevant thing is probably what I think about what Popper thinks about induction, which is a much more specific topic, or even what I think about what you think about induction, equally specific but much less likely to be already addressed in the literature.)

We had some discussion before about Popper's argument where -- this summary should not be taken too literally, since its main purpose is to identify the argument in question -- he gives a certain additive decomposition of s(A|B), the support that evidence B gives to A, and calls one of the addends "deductive support" and the other "inductive support"; he then proves that the "inductive support" is a negative number, and says that therefore induction is nonsense. (I think I looked at that argument in particular because you said you found it convincing.) I find many things about his argument unsatisfactory; the most important, and the one I focused on before, is that I think all the work is being done by the names he uses ("deductive support", "inductive support") and I don't think the names accurately correspond to anything in reality. That is: his mathematical calculations are fine, it's really true that s(A|B) = s(A∨B|B) + s(A∨¬B|B), but there's no good reason for giving the two addends the names he does, and if you just called them "support term 1" and "support term 2" then no one would think that his argument offered anything remotely like a refutation of induction. He's implicitly assuming some proposition like "support term 2 is the best characterization of what empirical evidence B gives for A"; he gives no justification for anything like that, and without it his argument doesn't make any real contact with what he's trying to prove.
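If I've identified the argument correctly (this is my reconstruction: the Popper–Miller decomposition, with support defined as s(X|B) = Pr(X|B) − Pr(X) and the split s(A|B) = s(A∨B|B) + s(A∨¬B|B)), both the identity and the sign of the second addend are easy to check numerically over random joint distributions:

```python
import random

def support(p_x, p_x_given_b):
    # s(X|B) = Pr(X|B) - Pr(X)
    return p_x_given_b - p_x

def check_decomposition(trials=1000, seed=0):
    """Check, over random joint distributions on (A, B), that
    s(A|B) = s(A or B | B) + s(A or not-B | B), and that the
    second addend (the one labelled "inductive support") is
    always <= 0."""
    rng = random.Random(seed)
    for _ in range(trials):
        w = [rng.random() for _ in range(4)]  # A&B, A&~B, ~A&B, ~A&~B
        t = sum(w)
        p_ab, p_anb, p_nab, p_nanb = (x / t for x in w)
        p_a, p_b = p_ab + p_anb, p_ab + p_nab
        p_a_given_b = p_ab / p_b
        s_total = support(p_a, p_a_given_b)
        # Given B, "A or B" is certain; "A or not-B" holds iff A.
        s_deductive = support(p_a + p_b - p_ab, 1.0)
        s_inductive = support(1.0 - p_nab, p_a_given_b)
        assert abs(s_total - (s_deductive + s_inductive)) < 1e-9
        assert s_inductive <= 1e-9
    return True
```

The arithmetic really does come out as claimed every time; the dispute is over what the second addend deserves to be called, not over the algebra.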

This discussion was on Slack (which unfortunately hides all but the most recent messages unless you pay them, which LW doesn't). We had at least two other discussions going on concurrently, about whether our opinions about what propositions are true should be binary or something more like probabilistic and about the overall merits of "critical rationalism". The discussion largely broke down (in ways I am quite sure I saw as your fault and you saw as mine, which I don't propose to go into right now because this comment is long enough already) and then you got banned from the slack. At the point when it ended, I was happy to continue discussing Popper's argument or probabilistic beliefs or critical rationalism, but you were (I think) only interested in discussing two things: paths-forward-type methodologies, and how I was allegedly being an unsatisfactory participant in the discussion.

I don't think this comment thread would be a great venue for discussion of critical rationalism generally, of Popper's argument against induction, or of the idea that our opinions of propositions should generally be quantitative rather than binary (or, more strongly, that something like the language and techniques of probability theory are appropriate for quantifying them), what with it being nominally a discussion of how "the law of least effort contributes to the conjunction fallacy". If you are interested in pursuing any of those discussions, maybe I can make a post summarizing my position and we can proceed in comments there. But, fair warning, I will not have any interest in diverting the discussion to matters methodological, and while I will gladly undertake to argue in good faith I will not be giving any of the specific undertakings you frequently ask for.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-14T20:39:16.763Z · score: 5 (3 votes) · LW · GW

I think "this" in your last sentence is underspecified.

It is unfortunate that there are things that might be useful to say (because they convey potentially useful information) but that one usually can't or shouldn't because (1) they might cause offence or (2) they violate norms designed to reduce such offence-causing or (3) they would harm others' social status in a way we don't generally want people to be able to harm others' social status.

One "could" solve that problem by radically changing human nature such that people no longer get easily offended and can no longer be manipulated into changing their social-status judgements in inappropriate ways. Of course "could" is in scare-quotes there because no plausible way of changing human nature in such a way is in sight. (And if there were, I don't think I would want to trust either you or me with the human-nature-changing apparatus.)

Otherwise, I'm not sure what a solution to the problem might look like. People are offendable and manipulable, and if there's a way to design social norms to stop those buttons getting pushed excessively without sometimes preventing what might have been useful communication, I haven't seen any sign of it. And, as with the issue of wasting time on unproductive conversations / ending conversations prematurely, my feeling is that the problem you're focusing on is likely the wrong one because right now more harm is being done by rudeness and status-fights than by being unable to say otherwise-useful things that risk offending or status-fighting, and interventions that make those things more sayable will (in my view) likely do more harm than good.

----

Since you (more or less) asked for it, here is the brief version of my norm-breaking explanation of why I fear that discussions between us might be less fruitful than one would hope: 1. Our past discussions have not led anywhere useful; my perception (which I'm sure differs from yours) is that you have repeatedly attempted to switch from discussing issues that actually interest me (about, e.g., how well broadly-Bayesian approaches work, and about Popper's alleged refutation of empirical induction) to methodological issues (around e.g. what you call "paths forward"), and that you have repeatedly insisted that to discuss things with you others must adopt particular highly restrictive patterns of discussion which don't look to me as if they actually improve the discourse. 2. My tentative interpretation of this, and of other interactions of yours that I've observed -- and here comes the norm-breaking bit -- is that on some level you aren't really interested in discussion on an equal footing; you are looking for disciples not peers, you find discussion satisfactory only when you get to control the terms on which it happens, and when that doesn't seem possible you generally choose to engage in status-attacks on the other parties instead of discussing on equal terms. I don't think you are here on LW in order to have a discussion in which you and we might refine our ideas by correcting each others' errors; I think you are here to demonstrate your superiority and hopefully pick up some new disciples. 3. 
I have not been convinced, by what I have read of your writing and seen of your debating, that you are the intellectual superior of everyone around you as you seem to think you are, and in particular I am not convinced that you are my intellectual superior in ways that would justify treating you as a guru, nor that you are possessed of insights valuable enough that jumping through the methodological hoops you hold out would lead to gains that would justify the frustration involved.

Now, obviously I could be wrong about any or all of that; and even if I'm not, it's all perfectly compatible with your having some interesting ideas to share, useful critiques of things I currently believe, new information I don't have, etc. So I'm perfectly happy to engage with you on equal terms under something like the LW-usual norms of discussion; but jumping through your hoops would (1) be more annoying than any gains would justify, and (2) incentivize what I see as your harmful uncooperative and status-seeking behaviour patterns.

(If you disagree with my characterization of you -- which you may well do -- and care enough about changing it to take some time trying to do so -- which I suspect you don't -- then things that might change my mind include (1) providing recent examples where you engaged in discussion with someone who disagreed with you and decided that they were right and you were wrong and (2) providing recent examples where you conceded that someone else you were in discussion with was smarter than you, at least in some specific area relevant to the discussion. Or even, though of course it would be much less evidence, (3) providing recent examples where you had a discussion with someone you didn't already know to agree with you about most things and didn't attempt to lay down strict conditions they had to follow in order to continue the discussion. For the avoidance of doubt, I am not at all suggesting that you're obliged to do any of those things, nor that your not doing so would be strong evidence that my characterization of you is correct.)

Comment by gjm on The Best Educational Institution in the World · 2020-08-14T18:52:24.813Z · score: 2 (1 votes) · LW · GW

It looks like you've found that the EA Hotel was a good educational institution for you. What reason do you have to think that that generalizes, and how far?

(If "X worked really well for me" is enough to qualify something as a great educational institution, then we have plenty of great educational institutions already.)

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-14T17:23:26.774Z · score: 6 (3 votes) · LW · GW

LW is less constrained than most places by such social norms. However, to some extent those social norms are in place because breaking them tends to have bad results on net, and my experience is that a significant fraction of people who want to break them may think they are doing it to be frank and open and honest and discuss things rationally without letting social norms get in the way, but actually are being assholes in just the sort of way people who violate social norms usually are: they enjoy insulting people, or want to do it as a social-status move, or whatever. And a significant fraction of people who say they're happy for norms to be broken "at" them may think they are mature and sensible enough not to be needlessly offended when someone else says "I think you're pretty stupid" (or whatever), but actually get bent out of shape as soon as that happens.

If it's any consolation, I have my own opinions about the likely outcome of such a discussion, some of which I too might be socially prohibited from expressing out loud :-).

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-13T21:02:38.016Z · score: 2 (1 votes) · LW · GW

I think the whole "impasse chains" mechanism is unhelpful, and as you suspected I am not much more enthusiastic about using it with length 3 than with length 5. (Though if I were somehow required to have a discussion on those terms with either length 3 or length 5, I would indeed prefer length 3.)

The example you link to is interesting, but to me it seems like a fine example of how your approach doesn't work well! Not just because in that case the discussion ended up getting terminated as unconstructive -- of course that is inevitable given why you linked to it at all. But:

  • The "impasse chain" concept didn't end up actually being useful. The other person did things you found unhelpful; you declared an "impasse"; and then every time he responded you just stonewalled him and incremented your impasse count until it reached 5.
    • ... Even though the responses you reacted to in this way were (so it seems to me) clearly responsive to the things you said constituted an impasse; e.g., asking for examples of what you were wanting with the arithmetic-expression tree.
  • It's entirely possible that the discussion was never going to be productive and so ending it was a good idea -- but the particular complaints that provoked its ending seem strange to me. E.g., you asked him to "make a tree of 1-2+3" to see whether he understood the notion of tree you were using; while his first attempt was (1) wrong because he misread the formula and (2) garbled because he thought he could use indentation in a way that didn't actually work, what he subsequently did was perfectly reasonable, especially in the context of "idea trees". And while I don't think he explained what he was saying about externalities super-clearly, I think his placement of those two propositions in the tree was quite reasonable.

So this has mostly served to confirm my initial opinion that having a discussion with some sort of impasse-chain-based rules for termination would not be any sort of improvement on (1) having a discussion whose ending condition is informal as usual or (2) not having a discussion at all.

As for whether you deign to respond to my comments about the actual topic of the post whose comments we're in, that's up to you. My (perhaps biased) evaluation of my own track record is that I am not in the habit of abandoning discussions because I'm losing the argument. (I do sometimes lose interest for other reasons, and I don't think there is anything wrong with that.) But if you reckon my comments are low-quality and I'm likely to bail prematurely, you'll have to decide for yourself whether that's a risk you want to take.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-12T12:23:43.708Z · score: 2 (1 votes) · LW · GW

So, to summarize the proposal behind that link:

  • an "impasse", here, is anything that stops the original discussion proceeding fruitfully;
  • when you encounter one, you should switch to discussing the impasse;
  • that discussion may also reach an impasse, which you deal with the same way;
  • it's OK to give up unilaterally when you accumulate enough impasses-while-dealing-with-impasses-while-dealing-with-impasses;
  • you propose that a good minimum would be a chain of five or more impasses.

I think only a small minority of discussions are so important as to justify a commitment to continuing until the fifth chained impasse.

I do agree that there's a genuine problem you're trying to solve with this stuff, but I think your cure is very much worse than the disease. My own feeling is that for all but the most important discussions nothing special is needed; if anything, I think there are bigger losses from people feeling unable to walk away from unproductive discussions than from people walking away when there was still useful progress to be made, and so I'd expect that measures to make it harder to walk away will on balance do more harm than good.

(Not necessarily; perhaps there are things one could do that make premature walking-away harder but don't make not-premature walking-away harder. I don't know of any such things, and the phenomenon you alluded to earlier, that premature walking-away often feels like fully justified walking away to the person doing it, makes it harder to contrive them.)

I also think that, in practice, if A thinks B is being a bozo then having made a commitment to continue discussion past that point often won't result in A continuing; they may well just leave despite the commitment. (And may be right to.) Or they may continue, but adding resentment at being obliged to keep arguing with a bozo to whatever other things made them want to leave, and the ensuing discussion is not very likely to be fruitful.

I guess I haven't yet addressed one question you asked: would I like to address the premature-ending problem if it weren't too expensive? If there were a magical spell that would arrange that henceforth discussions wouldn't end when (1) further discussion would in fact be fruitful and (2) the benefits of that discussion would exceed the costs, for both parties -- then yes, I think I'd be happy for that spell to be cast. But I am super-skeptical about actually possible measures to address the problem, because I think the opposite problem (of effort going into discussions that are not in fact productive enough to justify that effort) is actually a bigger problem, and short of outright magic it seems very difficult to improve one of those things without making the other worse.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-12T00:05:56.619Z · score: 4 (2 votes) · LW · GW

On the basis of past discussions with you, I suspect that when you say "debate these matters to a rational conclusion" you may mean something like "commit to never ever deciding that the discussion is no longer providing anyone with enough enlightenment to be worth the effort involved". And the answer is: no, I do not want to make any such commitment, and I don't think anyone ever should, because it amounts to undertaking to give a potentially limitless amount of time and effort to something of finite value. (I doubt that anyone else here will be willing to make such a commitment either. If that's truly something you require in order to have a discussion then I think that's functionally equivalent to not being interested in discussion. Of course you don't have to be interested in discussion! But in that case maybe you should say so up front.)

My position on that methodological question has not changed appreciably since a previous discussion we had, though of course what you're wanting now is not necessarily the same as what you wanted then and so my opinion of what you want now might differ from my opinion about what you wanted then.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T15:52:27.487Z · score: 3 (2 votes) · LW · GW

It seems like you're saying that it's ... somehow improper to write something that implies doubt about your premises? (I don't think I'm assuming contrary premises to yours, though I don't see any particular reason why that would be a problem, but I am indeed not simply assuming that your premises are right.) If I've understood right, then I don't understand what problem you see. If not, then maybe you could explain what your objection actually is?

I'm also not sure quite what those "extensive other reasons" are. You say "LoLE hasn’t been tested in a controlled, blinded scientific setting". You tell us that it's been extensively debated and has stood up to criticism in (what you don't quite say explicitly but I'm pretty sure is) the pickup-artist community. I don't think LW should adopt a norm of accepting things merely because someone tells us they have stood up to criticism among pickup artists.

(Reason 1: it's just one smallish group of people; such groups can easily develop biases of many kinds, and it's not hard to imagine ways in which that could happen in that group in particular. Reason 2: we don't even know that LoLE has stood up to criticism in that community; all we know is that you say it has.)

Imagine that someone comes here and writes an article about how some cognitive bias is explained by the Law Of X, which is well known among { Trotskyites | evangelical Christians | radical feminists | burglars }. I don't think it would be reasonable for that person to respond to criticism of the article by saying: "it's not appropriate to question the truth and applicability of the Law of X -- it's well established, as { Trotskyites | evangelical Christians | radical feminists | burglars } can attest". Why is this case different?

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T15:33:05.403Z · score: 2 (1 votes) · LW · GW

What do you mean by "it's already a major part of our culture"? Specifically, do you mean (1) that conserving visible effort is a thing people do a lot in our culture (i.e., LoLE is right) or (2) that the idea that people conserve visible effort is well established in our culture (i.e., LoLE is well known)? (To me, "LoLE is a major part of our culture" means #2, but it sounds as if you may mean #1.)

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-11T15:28:39.273Z · score: 2 (1 votes) · LW · GW

In order to explain the conjunction fallacy (or other biases) LoLE is only any use if it does lead to actually conserving effort. (On the specific matter at hand; of course they may put effort into other things.) The alleged pattern is (unless I've badly misunderstood):

  • You ask me "which of these two things is more likely?"
  • If I think carefully through the options, I will see that it's more likely that Linda is a librarian simpliciter than that she's a feminist librarian.
  • But I don't do that, because I want to look like I'm doing everything effortlessly.
  • Instead I use some simple quick heuristic that lets me look effortless.
  • Unfortunately that leads me to the wrong answer.

And in this sequence of events, it's essential that I actually do put in less effort. If instead I had some way of looking as if I'm making no effort while actually doing the careful thinking that would get me the right answer, then I would get the right answer.

What am I missing here?

[EDITED to add:] Also: if the question at hand is why psychologists don't appeal to the LoLE as an explanation for the conjunction fallacy, and the answer (as I suggest) is that they already have what looks like an obvious explanation in terms of actually conserving effort, then it doesn't really matter that much whether the LoLE involves actual effort-conservation or merely apparent effort-conservation, no?

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T09:29:29.198Z · score: 5 (3 votes) · LW · GW

You suggest that the "law of least effort" has been ignored by academic researchers because they disapprove of people associated with it. It looks to me as if something different is going on.

I had a look at some writing about fallacies, heuristics and biases. I didn't find anything about the LoLE, but I likewise didn't find any other speculation about particular reasons why our brains tend to prefer low-effort approximate solutions most of the time.

That doesn't look to me like ignoring the LoLE, it looks like ignoring the question to which it's an answer: what are the factors that make us mostly avoid effort and hard concentration?

The two most likely reasons for this ignoring seem to me to be

  • that if what you're researching is what decisions people tend to make in particular situations, then the details of the underlying mechanisms might be interesting but aren't directly relevant and one might reasonably leave them for someone else to figure out; and
  • that it tends to feel like a question whose answer is so obvious (and maybe so trivial) that it doesn't need looking into further: of course we conserve effort; what else would any living thing do? If the question were explicitly raised, I suspect many would just mutter something about efficiency and evolution and move on.

Neither of those has anything to do with the perceived skeeviness of the people associated with a particular other view.

Incidentally, in Kahneman's "Thinking, fast and slow" you will find the following sentence: "This is how the law of least effort comes to be a law". Kahneman's LoLE is not the same as yours. He just means that people conserve effort so far as they can. And the "this is how" just amounts to stating that effort is disagreeable. I'm pretty sure this is the second of my reasons above: it simply hasn't occurred to Kahneman that "why do our brains prefer to conserve effort?" is a question that needs asking at all.

My feeling -- which of course I have no concrete evidence for -- is that if it's true that the pickup-artists' LoLE is ignored by psychologists even when they do consider the actual question it purports to be an answer to -- and I have no idea whether that's true, not having seen much consideration of that actual question -- then the more likely explanation is that they haven't thought of it at all, rather than that they've thought of it and dismissed it because they don't like the people advocating it.

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T08:56:30.738Z · score: 2 (1 votes) · LW · GW

Although the title of this post is "The Law of Least Effort Contributes to the Conjunction Fallacy", almost all of the actual post is dedicated to explaining what the LOLE is, to suggesting that it's neglected because of prejudice against the people it's associated with, and so forth.

The portion that actually purports to link the LOLE to the conjunction fallacy just says this:

LoLE encourages people to try to look casual, chill, low effort, even a little careless -- the opposite of tryhard. The experimental results of Conjunction Fallacy research fit these themes.

And, sure, something like that might be true, but you need to do better than that to be convincing. E.g., come up with some experimental design where, if your theory is right, the LoLE should have a stronger effect on some participants than others, and look for systematic correlation between expected-LoLE-ness and conjunction effect. (If your theory is right then it seems like this should apply to many other "effort-conserving" fallacies besides the conjunction effect, so maybe those should be tested for too.) And then actually do the experiment or persuade someone else to do it.

[EDITED to fix a typo.]

Comment by gjm on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-10T08:49:08.166Z · score: 3 (2 votes) · LW · GW

Your argument is that the "Law of Least Effort" motivates people not to try very hard, and not trying very hard produces conjunction-fallacy-like effects.

It seems to me that

  • this is an overcomplicated explanation, in that we don't need the "Law of Least Effort" to motivate people not to try very hard (conserving resources is valuable in itself, and effort is usually disagreeable), and
  • it doesn't explain why we get the conjunction fallacy in particular; it's not obvious that this particular sort of wrong answer is lower-effort than the right answer.

Comment by gjm on How to Respond to a Black Hole Astronomer's Gish Gallop · 2020-08-08T16:50:00.276Z · score: 6 (3 votes) · LW · GW

It's flattering that you call me a "black hole astronomer" (since it implies that what I've written about black hole astronomy looks like it's written by someone who works in the field), but I am not one.

Everything in my comment is in response to things you wrote. That makes it the exact reverse of a Gish gallop.

I do not have any sock puppets. (I can think of exactly one online thing where I have more than one account. It's an internet chess server that can serve you up puzzles and give you a numerical indication of how you're doing at solving them. I wanted two accounts so that I could use one for trying to solve the puzzles in limited time and another where I would think for as long as I needed before trying a move. I emailed the people who run the server to check they didn't mind. I mention all this just as an indication that I am unusually scrupulous about this sort of thing.)

So far as I can see, this article consists of (1) a little bit of "framing", (2) lengthy quotations from something I wrote, your response to it, and my response to that, and your (very short) response to that, and (3) a self-congratulatory remark about what a great job you did responding to me. I am at a loss to understand what you think that will achieve.

Comment by gjm on The New Scientific Method · 2020-08-05T11:40:24.594Z · score: 6 (3 votes) · LW · GW

If someone shows evidence of (1) being generally clueful and (2) having read what you wrote, and they "have missed the points I was making", then I suggest that you might find it advisable to explain what they misunderstood and how your intention differs from what they thought you said. Just saying "you misunderstood me" achieves nothing. (Unless your only goal is to attack someone who has disagreed with you, I suppose. There may be places where that's an effective tactic, but I think that here it's unlikely to make anyone think better of you or worse of them.)

Comment by gjm on The New Scientific Method · 2020-08-05T11:37:18.208Z · score: 4 (2 votes) · LW · GW

Thanks for the kind words! I guess it might be worth making into an article. (And I agree that if so it would be best to make it more standalone and less debate-y, though it might be worth retaining a little context.) I'm on holiday right now so no guarantees :-).

Comment by gjm on How Beliefs Change What We See in Starlight · 2020-08-05T07:53:48.443Z · score: 7 (5 votes) · LW · GW

You write that

In the case of gravitational waves, back in the 1970s, hundreds of independent research groups constructed simple devices to measure them and they all compared their results. Each research group thought that it had measured gravitational waves, but when the results were combined, they all had to conclude that no one had been measuring gravitational waves. They had all been measuring different sources of noise.

So far as I can tell, this is not in fact true. E.g., this article about Joseph Weber says that Weber claimed to have measured gravitational waves, lots of other labs tried to replicate his results, and they all said they hadn't been able to. So does this one. The latter cites this paper in Physical Review Letters in which one pair of collaborators say exactly that. Here's another article saying the same. I haven't found anything that claims that everyone thought they had found gravitational waves, or that combining everyone's results made it plain that that wasn't so.

(I also remark that the story you tell in this paragraph is awfully similar to the one you want to debunk. Lots of scientific observations so noisy that you can't get anything reliable from any of them, but when you put them together you get something usable. This is exactly the approach you poke fun at in, e.g., the start of your "Weber's ghost" post. Surely you can't consider that approach valid when it says "no gravitational waves here" but invalid when it says "gravitational waves here"...)

Comment by gjm on The Ghost of Joseph Weber · 2020-08-05T07:30:21.573Z · score: 2 (1 votes) · LW · GW

Calculating opportunity costs is great, but that isn't what you did.

Comment by gjm on The Conceited Folly of Certainty · 2020-07-28T23:52:06.569Z · score: 4 (3 votes) · LW · GW

Obviously this is a very tangential issue, but when you say

There may only be a single word in our language that triggers the thought of one specific person: genius.

I can't agree. I bet there are plenty of people, who when they hear "genius", think first of other people besides Einstein. Even within physics, I bet the fact that there's a biography of Feynman called "Genius" encourages some to think of him rather than Einstein. And there will be plenty of people who will pick, say, Dante or Gauss instead.

Other possibly better candidate words:

  • Names of schools of thought etc. Presumably we shouldn't allow terms like "Marxism" or "Euclidean", but how about e.g. "objectivism" -> Rand, "communism" -> Marx, "cyberpunk" -> Gibson, "cubism" -> Picasso.
  • Words that happen to be names of famous works, or near to them. "utopia" -> More, "wasteland" -> Eliot (even though his poem's name isn't exactly that), "tempest" -> Shakespeare, etc.
  • Words closely associated with super-duper-famous people. If you want to make people think of Einstein, I suspect "relativity" does better than "genius". If you want to make people think of Jesus and "Christianity" is cheating, "gospel" and "resurrection" probably work well.

I'm pretty sure all of these work better than ("genius", Einstein) if the question is "if asked to associate a person with this word, what fraction of people will pick the person I have in mind?". I think some of them also work better if the question is "if you just say this word, what fraction of people will immediately think of the person I have in mind?".

Comment by gjm on Why You Might Want a New Way to Visualize Biases · 2020-07-27T21:47:06.351Z · score: 14 (6 votes) · LW · GW

I think there's a terminological issue. What you have here is a new graphical notation for (certain kinds of statements about / structures involving) biases, not a way to visualize them. To me, at least, visualizing something means displaying the thing in a way that lets you see that thing itself more clearly; but here, the point isn't to see the biases more clearly -- what you're trying to clarify is a larger structure in which the biases are largely-unanalysed elements.

The criticism you got was mostly saying "this so-called way of visualizing biases doesn't actually show you anything about the biases" -- which is correct but, if I'm understanding you right, largely misses the point because the goal was never to provide a tool for understanding biases better; what you want to understand is bad habits, case histories, and the like. Biases are components of those things; you want a notation that includes a way of representing biases; but this isn't a visualization of biases any more than the usual notation for electrical circuits is a visualization of (say) resistors.

Comment by gjm on The Ghost of Joseph Weber · 2020-07-24T16:13:03.295Z · score: 5 (3 votes) · LW · GW

Your descriptions of what I said in the comments on "The New Scientific Method" are not accurate. They are like your purported quotations from Katie Bouman's talk (though at least you didn't put them in quotation marks this time): in condensing what I actually said into a brief and quotable form, you have apparently attempted to make it sound as silly as possible rather than summarizing as accurately as possible. I think you shouldn't do that.

(My description in terms of "weirdness" was meant to help to clarify what is going on in an algorithm that you criticized but apparently hadn't understood well. It turns out that it was a mistake to try to be as clear and helpful as possible, rather than writing defensively so as to make it as difficult as possible for someone malicious to pick things that sound silly.)

I already told you (in comments on that other post) what motivates me: bad science, and especially proselytizing bad science, makes me sad. It makes me especially sad when it happens on Less Wrong, which aims to be a home for good clear thinking. Having seen the previous iteration of Less Wrong badly harmed by political cranks who exploited the (very praiseworthy) local culture of taking ideas seriously even when they are nonstandard or appear bad at a first glance, I am not keen to leave uncriticized a post that is confidently wrong about so many things.

I don't know what anyone else may have done, but I at least have not downvoted all your comments and posts. I have downvoted some specific things that seem to me badly wrong; that's what downvoting is meant for. (As it happens, it looks to me as if you have downvoted all my comments on your posts.)

Comment by gjm on The Ghost of Joseph Weber · 2020-07-24T15:57:11.439Z · score: 6 (4 votes) · LW · GW

Dustin's point, as I understand it, is not that you overestimated or that you underestimated, nor that you didn't give a detailed accounting of all the facilities involved, it's that you're confusing two completely different questions. (1) How much did the EHT project cost? (2) How much did the telescopes used by the EHT project cost to build and run? You made a claim about #1 and when challenged on it offered some numbers relating to #2.

You do say one thing that purports to link them: "... if EHT kept the telescopes in operation when they would've otherwise lost funding ...". But that's one heck of a big if and I know of no reason to think that EHT kept any telescopes in operation that would otherwise have lost funding. And even if it did, that wouldn't justify including the cost of building the telescopes in your estimate of the cost of EHT, unless the telescopes in question were never used for anything other than EHT.

(One journalistic outlet has given a concrete estimate for the cost of the EHT project. They say 50 to 60 million dollars. I don't know where they got that estimate or how much to trust it, but it sounds much much more believable to me than your "billions of dollars".)

Comment by gjm on Swiss Political System: More than You ever Wanted to Know (I.) · 2020-07-19T14:55:59.336Z · score: 18 (11 votes) · LW · GW

This appears to be entirely untrue.

Here's a link to a point-by-point comparison of the two constitutions, wherein you can readily search for "supreme" and find e.g. that III.1 is exactly the same in the two: "The judicial power of the Confederate States shall be vested in one Supreme Court, and in such inferior courts as the Congress may, from time to time, ordain and establish. The judges, both of the Supreme and inferior courts, shall hold their offices during good behavior, and shall, at stated times, receive for their services a compensation which shall not be diminished during their continuance in office."

Comment by gjm on The New Scientific Method · 2020-07-18T19:05:49.775Z · score: 13 (7 votes) · LW · GW

No, we pointed out the blatant errors in what you wrote.

You've claimed in a couple of places, now, that I am engaging in misleading rhetoric, but so far the only thing you've said about that that's concrete enough to evaluate is that I wrote a lot ... in my response to something you wrote that was longer. If there are specific things I have written that you find misleading, I would be interested to know what they are and what in them you object to. I'm not trying to pull any rhetorical tricks, but of course that's no guarantee of not having written something misleading, and if so I would like to fix it.

Comment by gjm on The New Scientific Method · 2020-07-18T16:14:53.277Z · score: 6 (5 votes) · LW · GW

Just for fun, I improved the algorithm a bit. Here's what it does now:

Still not perfect -- there's some residual banding that is never going to be 100% removable but could be made much better with more understanding of the statistics of typical images, and something at the right-hand side that it seems like the existing algorithm ought to be handling better; I'm not quite sure why it doesn't. But I don't feel like taking a lot more time to debug it.

You're seeing its performance on the same input image that I used for testing it, so it's always possible that there's some overfitting. I doubt there's much, though; everything it does is still rather generic.

(In case anyone cares, the changes I made are: 1. At a couple of points it does a different sort of incremental row-by-row/column-by-column optimization, that tries to make each row/column in succession differ as little as possible from (only) its predecessor, and doesn't treat wrapping around at 255/0 as indicating an extra-big change. 2. There's an annoying way for the algorithm to get stuck in a local minimum, where there's a row discontinuity and a column discontinuity and fixing one of them would push some pixels past the 255/0 boundary because of the other discontinuity; to address this, there's a step that considers row/column pairs and tries making e.g. a small increase in pixel values right of the given column and a small decrease in pixel values right of the given row. Empirically, this seems to help get out of those local minima. 3. The regularizer (i.e., the "weirdness"-penalizing part of the algorithm) now tries not only to make adjacent pixels be close in value, but also (much less vigorously) to make some other nearby pixels be close in value.)
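(To make change 3 a little more concrete, here's roughly what the extended regularizer looks like. This is a sketch only: the 0.1 weight and the distance-2 neighbours are illustrative choices, not the exact values used.)

```python
import numpy as np

def weirdness(a):
    # Original penalty: sum of squared differences of adjacent pixels.
    return float((np.diff(a, axis=0) ** 2).sum() + (np.diff(a, axis=1) ** 2).sum())

def weirdness_v2(a, far_weight=0.1):
    # Extended penalty: as above, plus a much gentler penalty on
    # differences between pixels two steps apart, in each direction.
    w = weirdness(a)
    w += far_weight * float(((a[2:, :] - a[:-2, :]) ** 2).sum())
    w += far_weight * float(((a[:, 2:] - a[:, :-2]) ** 2).sum())
    return w
```

(The effect is just to make the regularizer prefer smoothness over a slightly wider neighbourhood, which helps with the banding.)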

Comment by gjm on The New Scientific Method · 2020-07-18T15:55:31.204Z · score: 7 (4 votes) · LW · GW

Kirsten is claiming that my algorithm is invalid, even though (so it seems to me) it demonstrably does a decent job of reconstructing the original image I gave it despite the corruption of the rows and columns. Of course she can still claim that the EHT team's reconstruction algorithm doesn't have that property, that it only gives plausible output images because it's been constructed in such a way that it can't do otherwise, but at the moment I'm not arguing that the EHT team's reconstruction algorithm is any good, I'm arguing only that one specific thing Kirsten claimed about it is flatly untrue: namely, that if the phases are corrupted in anything like the way Bouman describes then there is literally no way to get any actual phase information from the data. The point of the image reconstruction I'm demonstrating here is that you can have corruption with a similar sort of pattern that's just as severe but still be able to do a lot of reconstruction, because although the individual measurements' phases are hopelessly corrupt (in my example: the individual pixels' values are hopelessly corrupt) there is still good information to be had about their relationships.

[EDITED to add:] ... Or maybe I misunderstood? Perhaps by "This method is invalid" she means that somehow anything that has the same sort of shape as what I'm doing here is bad, even if it demonstrably gives good results. If so, then I guess my problem is that she hasn't enabled me to understand why she considers it "invalid". Her objections all seem to me like science-by-slogan: describe something in a way that makes it sound silly, and you've shown it's no good. Unfortunately, all sorts of things that can be made to sound silly turn out to be not silly at all, so when faced with a claim that something that demonstrably works is no good I'm going to need more than slogans to convince me.

Comment by gjm on The New Scientific Method · 2020-07-18T14:04:37.603Z · score: 2 (1 votes) · LW · GW

Hmm, there should be two images there but only one of them is showing up for me. Let me try again:

This should appear after "This helps a bit:". (I'm putting this in an answer rather than editing the previous comment because if there's some bug in image uploading or display then it seems like having only one image per comment might make it less likely to be triggered.)

Comment by gjm on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-07-18T12:18:33.870Z · score: 6 (3 votes) · LW · GW

Yup, all understood. (I think in practice any given use of the word is likely to have a bit of both meanings in it, with or without concomitant equivocation.)

[EDITED to add:] Maybe it's worth saying a little about your analogy. To whatever extent the analogy does more than merely illustrate your distinction between rationalist-1 and rationalist-2 (and I take it it is intended to do a little more, since the distinction was perfectly clear without it), it seems that you see yourself as being in something like the position of the Hypothetical Evangelical, asking us all "do ye not therefore err, because ye know not the Sequences, neither the power of rationality?". That of course is why I dedicated most of my comment to arguing against your actual claim, that those who seek truth through clear thinking must choose their concept-boundaries without any considerations other than minimizing average description length. And my position is a bit like one I remember holding not infrequently in the past, when I was (alas) a moderately-evangelical Christian: I can see how you see the Sequences as supporting your position, but I don't think that's the only or the best way to interpret the relevant bits of the Sequences, and to whatever extent Eliezer was saying the same thing as you I'm afraid I think that Eliezer was wrong. (Yes, I am suggesting, tongue somewhat but not wholly in cheek, a parallel between some Christians' equivocation between "God's will" and "what is written in the bible" and your appeal to the authority of Eliezer's posts about word and concepts when arguing for what seem to me inadvisably-extreme positions on how rationalists should use words.)

Incidentally, I'm aware that that's now twice in a row that I've responded very briefly and then edited in more substantive comments. I promise I'm not doing it out of any wish to deceive or anything like that. It's just that sometimes I'm right on the fence about how much it's worth saying.

Comment by gjm on The New Scientific Method · 2020-07-18T11:48:52.057Z · score: 17 (7 votes) · LW · GW

I haven't based my career on these methods. I don't know where you get that idea from.

[EDITED to add:] Oh, maybe I do know. Perhaps you're reasoning as follows: "This stuff is obviously wrong. Gareth is defending it. The only possible explanation is that he has some big emotional investment in its rightness. The most likely reason for that is that he does it for a living. Perhaps he's even on the LIGO or EHT project and is taking my criticisms personally." So, for the avoidance of doubt: my only emotional investment in this is that bad science (particularly proselytizing bad science) makes me sad; I have no involvement in the LIGO or EHT project, or any other similar Big Science project, nor have I ever had; my career is in no way based on image reconstruction algorithms like this one. (On the other hand: I have from time to time worked on slightly-related things; e.g., for a while I worked with phase reconstruction algorithms for digital holograms, which is less like this than it might sound but not 100% unlike it, and I do some image processing algorithm work in my current job. But if all the stuff EHT is doing, including their image reconstruction techniques, turned out to be 100% bullshit, the impact on my career would be zero.)

----

The method successfully takes a real image (I literally grabbed the first thing I found in my temp directory and cropped a 100x100 piece from it), corrupted with errors of the kind I described (which really truly do mean that the probability distribution of the values for any pair of pixels is uniform) and yields something clearly recognizable as a mildly messed-up version of the original image.

So ... What do you mean by "invalid"? I mean, you could mean two almost-opposite things. (1) You think I'm cheating, so any success in the reconstruction just means that the data aren't as badly corrupted as I say. (2) You think I'm failing, so when I say I've done the reconstruction with some success you disagree and think my output is as garbagey as the input looks and contains no real information about the (original, uncorrupted) image. (Maybe there are other possibilities too, but those seem the obvious ones.)

I'm happy to address either of those claims, but they demand different sorts of addressing (for #1 I guess I'd provide the actual code and input data, and some calculations showing that my claim about how random the pixel values are is true, making it as easy as possible for you to replicate my results if you care; for #2 I'd put a little more effort into the algorithm so that its reconstructions become more obviously good enough).

Comment by gjm on The New Scientific Method · 2020-07-18T10:22:06.219Z · score: 15 (8 votes) · LW · GW

Every part of my comment is in response to a corresponding part of what you wrote. So your accusation of Gish-galloping is exactly backwards. The reason why the Gish gallop works is that it's generally much quicker to make a bogus claim than to refute it. You made a lot of bogus claims; I had to write a lot in order to refute them all. (Well, I'm sure I didn't get them all.) ... And then you accuse me of doing a Gish gallop? Really?

If you think my logic is wrong, then please show where it's wrong. If your complaint is only that it's convoluted -- well, science and mathematics are complicated sometimes. I don't actually think what I'm saying is particularly convoluted, but I will admit to preferring precise reasoning over vague slogans like "no adding uncorrelated phase errors".

Of course I don't claim that literally every thing you wrote was wrong. But I do think (and that is what I said) that in none of the cases where you (mis)quoted Bouman and complained was your objection actually correct.

If there were "one point on which all the others rest" then indeed I could have picked it out and concentrated on that. But there isn't: you make a lot of complaints, and so far as I can see there isn't one single one on which all else depends. (If I had to identify one single thing on which all your objections rest, it would be something that you haven't actually said and indeed I'm only guessing: I think you don't believe black holes exist and therefore when someone presents what claims to be evidence of black holes you "know" it must be wrong and therefore go looking for all the ways they could have erred. But of course my attempt at diagnosis could be wrong.)

As for the assorted mere insinuations ("extreme black-and-white thinking", "lack of nuance", "convoluted logic", "weakness", "redundancies and redirects", "jargon-rich", "smoke and mirrors") -- well, other readers (if any there be) can make up their own minds about that. But for what it's worth, none of that was a goal; I try to write as simply and clearly as correctness allows. I'm sure I don't always achieve perfect success; who does?

Comment by gjm on The New Scientific Method · 2020-07-18T10:10:35.788Z · score: 7 (5 votes) · LW · GW

I did not say that resolution limitations could be overcome by just taking more data.

I did say that unsystematic noise can be dealt with by taking more data. This should not be controversial; it is standard procedure and for most kinds of noise it works extremely well. (For some kinds of noise the right way to use the extra data is more sophisticated than e.g. just averaging. There are possible noise conditions where no amount of extra data will help you because there is literally no information at all in your measurements, but that is not the situation the EHT is facing.)

You keep talking about "hard limits" and "cardinal sins" but I don't think you have understood the material you are looking at well enough to make those claims.

Comment by gjm on The New Scientific Method · 2020-07-18T10:07:01.188Z · score: 5 (3 votes) · LW · GW

You say "You admitted above that they added uncorrelated phase errors together. That is a cardinal sin of data analysis". I don't know where you learned data analysis, but this feels to me like a principle you just made up in order to criticize Bouman. She's got an algorithm that (among other things) does something whose effect is to add up some phase errors. It happens that doing so makes them cancel out. There is nothing whatsoever wrong with this. (The errors are systematically related to one another, though I don't think correlation is the right language to use to describe this relationship.)

Comment by gjm on The New Scientific Method · 2020-07-18T09:21:15.314Z · score: 22 (8 votes) · LW · GW

Your statement "This means you can't use it ... end of story" is simply objectively wrong.

Consider the following situation, which is analogous but easier to work with. You have a (monochrome) image, represented in the usual way as an array of pixels with values between (say) 0 and 255. Unfortunately, your sensor is broken in a way somewhat resembling these telescopes' phase measurements, and every row and every column of the sensor has a randomly-chosen pixel-value offset between 0 and 255. So instead of the correct pixel(r,c) value in row r and column c, your sensor reports pixel(r,c) + rowerror(r) + colerror(c).

The row errors or the column errors alone would suffice to make it so that every individual pixel value is, as far as its own measurement goes, completely unknown. Just like the phases of the individual interferometric measurements. BUT the errors are consistent across each row and each column, rather than being independent for each pixel, just as the interferometric errors are consistent for each telescope.

So, what happens if you take this hopelessly corrupted input image, where each individual pixel (considered apart from all the others) is completely random, and indeed each pair of pixels (considered apart from all the others) is completely random, and apply a simple-minded reconstruction algorithm? I tried it. Here's the algorithm I used: for each row in turn, see what offset minimizes "weirdness" (which I'll define in a moment) and apply it; likewise for each column in turn; repeat this until "weirdness" stops decreasing. "Weirdness" means sum of squares of differences of adjacent pixels; so I'm exploiting the fact that real-world images have a bit of spatial coherence. Here's the result:

On the left we have a 100x100 portion of a photo. (I used a fairly small bit because my algorithm is super-inefficient and I implemented it super-inefficiently.) In the middle we have the same image with all those row and column errors. To reiterate, every row is offset (mod 256) by a random 8-bit value; every column is offset (mod 256) by a random 8-bit value; this means that every pair of pixel values, considered on their own, is uniformly randomly distributed and contains zero information. But because of the relationships between these errors, even the incredibly stupid algorithm I described produces the image on the right; it has done 26 passes and you can already pretty much make out what's in the image. At this point, individual row/column updates of the kind I described above have stopped improving the "weirdness".

Not satisfied? I tweaked the algorithm so it's still stupid but slightly less so: now on all iterations past the 10th it also considers blocks of rows and columns either starting with the first or ending with the last, and tries changing the offsets of the whole block in unison. The idea is to help it get out of local minima where a whole block is wrongly offset and changing one row/column at an edge of the block just moves the errors from one place to another without reducing the overall amount of error. This helps a bit:

though some of the edges in the image are still giving it grief. I can see a number of ways to improve the results further and am confident that I could make a better algorithm that almost always gets the image exactly right (up to a possible offset in all the pixel values) for real pictures, but I think the above suffices to prove my point: although each individual measurement is completely random because of the row and column errors, there is still plenty of information there to reconstruct the image from.
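For anyone who wants to reproduce the experiment, here's a minimal sketch of the stupid algorithm described above (before the block tweak), with a small smooth synthetic image standing in for the photo so that it runs quickly:

```python
import numpy as np

rng = np.random.default_rng(42)

def weirdness(img):
    """Sum of squared differences of adjacent pixels (smaller = smoother)."""
    a = img.astype(np.int64)
    return int((np.diff(a, axis=0) ** 2).sum() + (np.diff(a, axis=1) ** 2).sum())

def corrupt(img, row_off, col_off):
    """Add a per-row and per-column offset (mod 256) to every pixel."""
    return (img.astype(np.int64) + row_off[:, None] + col_off[None, :]) % 256

def greedy_repair(img, max_iters=10):
    """Re-offset rows and columns, one at a time, whenever that reduces weirdness."""
    img = img.copy()
    best = weirdness(img)
    for _ in range(max_iters):
        improved = False
        for axis in (0, 1):
            for i in range(img.shape[axis]):
                for off in range(256):
                    trial = img.copy()
                    if axis == 0:
                        trial[i, :] = (trial[i, :] + off) % 256
                    else:
                        trial[:, i] = (trial[:, i] + off) % 256
                    w = weirdness(trial)
                    if w < best:
                        best, img, improved = w, trial, True
        if not improved:
            break
    return img

# A small, smooth synthetic "photo", kept away from the 0/255 wrap-around.
r, c = np.mgrid[0:16, 0:16]
clean = (120 + 40 * np.sin(r / 4.0) + 40 * np.cos(c / 4.0)).astype(np.int64)

row_off = rng.integers(0, 256, size=16)
col_off = rng.integers(0, 256, size=16)
scrambled = corrupt(clean, row_off, col_off)
repaired = greedy_repair(scrambled)
```

The greedy search reduces the weirdness from its scrambled value; escaping local minima reliably takes the block-offset tweak or something smarter, as described above.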

Comment by gjm on Telling more rational stories · 2020-07-17T18:44:03.572Z · score: 5 (4 votes) · LW · GW

Nitpick: you write "Eliezar" throughout where it should be "Eliezer".

Comment by gjm on The New Scientific Method · 2020-07-17T18:20:42.501Z · score: 16 (8 votes) · LW · GW

The anti-EHT portion of this seems about as bad, I'm afraid, as the anti-LIGO portion of your other anti-LIGO post. You point and laugh at various things said by Bouman, and you're wrong about all of them.

(Also, it turns out that all your quotations are not actually quotations. You've paraphrased Bouman, often uncharitably and often apparently without understanding what she was actually saying. I don't think you should do that.)

----

First, a couple from Bouman's TEDx talk before the EHT results came out. These you just say are "absurd" and suggest that she had "terrible teachers and no advisor", without offering any actual explanation of what's wrong with them, so I'll have to guess at what your complaint is.

Bouman says (TEDx talk, 11:00) "I can take random image fragments and assemble them like a puzzle to construct an image of a black hole". Perhaps your objection is to the idea that this amounts to making an image at random, or something. If you read the actual description of the CHIRP algorithm Bouman is discussing, you will find that nothing in it is anything like that crazy; Bouman is just trying to give a sketchy idea (for a lay audience with zero scientific or mathematical expertise) of what she's doing. I don't much like her description of it, but understood as an attempt to convey the idea to a lay audience there's nothing absurd about it.

Bouman says (TEDx talk, 6:40): "Some images are less likely than others and it is my job to design an algorithm that gives more weight to the images that are more likely.". Perhaps your objection is that she's admitting to biasing the algorithm so it only ever gives images that are consistent with (say) the predictions of General Relativity. Again, if you read the actual paper, it's really not doing that.

So what is it doing? Like many image-reconstruction algorithms, the idea is to search for a reconstruction that minimizes a measure of "error plus weirdness". The error term describes how badly wrong the measurements would have to be if reality matched the candidate reconstruction; the weirdness term (the fancy term is "regularizer") describes how surprising the candidate reconstruction is in itself. Obviously this framework is consistent with an algorithm with a ton of bias built in (e.g., the "weirdness" term simply measures how different the candidate reconstruction is from a single image you want to bias it towards) or one with no bias built in at all (e.g., the "weirdness" term is always zero and you just pick an image that minimizes the error). What you want, as Bouman explicitly says at some length, is to pick your "weirdness" term so that the things it penalizes are ones that are unlikely to be real even if we are quite badly wrong about (e.g.) exactly what happens in the middle of a galaxy.
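To make that framework concrete, here's a toy one-dimensional version of "error plus weirdness" reconstruction: denoise a signal by gradient descent on a data-fit term plus a smoothness regularizer. (A sketch only: CHIRP's real data term models interferometric measurements, its real weirdness term is a learned patch prior, and the lam and step-size values here are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy 1-D "measurement" of a smooth underlying signal.
true_signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
measured = true_signal + 0.3 * rng.standard_normal(100)

def objective(x, lam):
    error = np.sum((x - measured) ** 2)   # how badly wrong the measurements would have to be
    weirdness = np.sum(np.diff(x) ** 2)   # how surprising x is in itself
    return error + lam * weirdness

def reconstruct(lam, steps=2000, lr=0.02):
    """Gradient descent on error + lam * weirdness, starting from the raw data."""
    x = measured.copy()
    for _ in range(steps):
        grad_error = 2.0 * (x - measured)
        d = np.diff(x)
        grad_weird = np.zeros_like(x)
        grad_weird[:-1] -= 2.0 * d
        grad_weird[1:] += 2.0 * d
        x -= lr * (grad_error + lam * grad_weird)
    return x

smoothed = reconstruct(lam=5.0)
```

With lam = 0 this returns the raw data unchanged (the "no bias at all" extreme); larger lam trades fidelity to the measurements for smoothness.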

The "weirdness" term in the CHIRP algorithm is a so-called "patch prior", which means that you get it by computing individual weirdness measures for little patches of the image, and you do that over lots of patches that cover the image, and add up the results. (This is what she's trying to get at with the business about random image fragments.) The patches used by CHIRP are only 8x8 pixels, which means they can't encode very much in the way of prejudices about the structure of a black hole.

If you picked a patch prior that said that the weirdness of a patch is just the standard deviation of the pixels in the patch, then (at least for some ways of filling in the details) I think this is equivalent to running a moving-average filter over your image. I point this out just as a way of emphasizing that using a patch prior for your "weirdness" term doesn't imply any controversial sort of bias.
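Here's what a hand-built patch-based weirdness term of roughly that kind looks like in code, using per-patch standard deviation rather than anything learned (purely illustrative):

```python
import numpy as np

def patch_weirdness(img, patch=8):
    """Sum a simple weirdness measure (per-patch standard deviation) over 8x8 patches."""
    h, w = img.shape
    total = 0.0
    for r0 in range(0, h - patch + 1, patch):
        for c0 in range(0, w - patch + 1, patch):
            total += img[r0:r0 + patch, c0:c0 + patch].std()
    return total

rng = np.random.default_rng(0)
noise = rng.random((64, 64))      # spatially incoherent pixels in [0, 1)
r, c = np.mgrid[0:64, 0:64]
smooth = (r + c) / 126.0          # a spatially coherent ramp in the same range
```

As you'd hope, a spatially coherent image scores much lower than pixel noise, and local smoothness is the only kind of preference a term this small can encode.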

For CHIRP, they have a way of building a patch prior from a large database of images, which amounts to learning what tiny bits of those images tend to look like, so that the algorithm will tend to produce output whose tiny pieces look like tiny pieces of those images. You might worry that this would also tend to produce output that looks like those images on a larger scale, somehow. That's a reasonable concern! Which is why they explicitly checked for that. (That's what is shown by the slide from the TEDx talk that I thought might be misleading you, above.) The idea is: take several very different large databases of images, use each of them to build a different patch prior, and then run the algorithm using a variety of inputs and see how different the outputs are with differently-learned patch priors. And the answer is that the outputs look almost identical whatever set of images they use to build the prior. So whatever features of those 8x8 patches the algorithm is learning, they seem to be generic enough that they can be learned equally well from synthetic black hole images, from real astronomical images, or from photos of objects here on earth.

So, "an algorithm that gives more weight to the images that are more likely" doesn't mean "an algorithm that looks for images matching the predictions of general relativity" or anything like that; it means "an algorithm that prefers images whose little 8x8-pixel patches resemble 8x8-pixel patches of other images, and by the way it turns out that it hardly matters what other images we use to train the algorithm".

Oh, a bonus: you remember I said that one extreme is where the "weirdness" term is zero, so it definitely doesn't import any problematic assumptions about the nature of the data? Well, if you look at the CalTech talk at around 38:00 you'll see that Bouman actually shows you what you get when you do almost exactly that. (It's not quite a weirdness term of zero; they impose two constraints, first that the amount of emission in each place is non-negative, and second a "field-of-view constraint" which I assume means that they're only interested in radio waves coming from the region of space they were actually trying to measure.) And it still looks pretty decent and produces output with much the same form as the published image.

----

Then you turn to a talk Bouman gave at CalTech. You say that each of the statements you quote out of context "would disqualify an experiment", so let's take a look. With these you've said a bit more about what you object to, so I'm more confident that my responses will actually be responsive to your complaints than with the TEDx talk ones. These are in the same order as your list, which is almost but not quite the same as their order within the talk.

Bouman says (CalTech, 5:08) “this is equivalent to taking a picture of an orange on the moon.” I already discussed this in comments on your other post: you seem to think that "this seems impossibly hard" is the same thing as "this is actually impossibly hard", and that's demonstrably wrong because other things that have seemed as obviously difficult as getting an image of an orange on the moon have turned out to be possible, and the whole point of Bouman's talk is to say that this one turned out to be possible too. Of course it could turn out that she's wrong, but what you're saying here is that we should just assume she's wrong. That would have made us dismiss (for instance) radio, electronic computers, and the rocketry that would enable us to put an orange on the moon if we chose to do so.

Bouman says (CalTech, 14:40) “the challenge of dealing with data with 100% uncertainty.” Except that she doesn't, at least not anywhere near that point in the video. She does say that some things are off by "almost 100%", but she doesn't e.g. use the word "uncertainty" here. Which makes it rather odd that the first thing you do is to talk about their choice of using "uncertainty" to quantify things. So, anyway, you begin by suggesting that maybe what they're trying to do is to average data with itself several times to reduce its errors. That would indeed be stupid, but you have no reason to think that the EHT team was doing that (or anything like it) so this is just a dishonest bit of rhetoric. Then you say "They convinced themselves that they could use measurements with 100% uncertainty because the uncorrelated errors would cancel out and they called this procedure: closure of systematic gain error" and complain that it's only correlated errors that you can get rid of by adding things up, not uncorrelated ones. Except that so far as I can tell you just made up all the stuff about correlated versus uncorrelated errors.

So let's talk about this for a moment, because it seems pretty clear that you haven't understood what they're doing and have assumed the most uncharitable possible interpretation. You say: "But from what I could tell, this procedure was nothing more than multiplying together the amplitudes and adding the phase errors together and from what I learned about basic data analysis, you can only add together correlated errors that you want to remove from your data."

Nope, the procedure is not just multiplying the amplitudes and adding the phases, although it does involve doing something a bit like that, and the errors are correlated with one another in particular ways which is why the method works.

So, they have a whole lot of measurements, each of which comes from a particular pair of telescopes at a particular time (and at a particular signal frequency, but I think they pick one frequency and just work with that). Each telescope at each time has a certain unknown gain error and a certain unknown phase error, and the measurements they take (called "visibilities") are complex numbers with the property that if i,j are two telescopes then the effect of those errors is to multiply V(i,j) by g(i) g(j) exp(i(φ(i) − φ(j))), where g(i) and φ(i) are telescope i's gain and phase errors at that time. And now the point is that there are particular ways of combining the measurements that make the gains or the phases completely cancel out. So, e.g., if you take the product V(i,j) V(j,k) V(k,i) then the gain errors are still there but the phases cancel, so despite the phase errors in the individual measurements you know the true phase of that product. And if you take the product (V(a,b) V(c,d)) / (V(a,d) V(b,c)) then the phase errors are still there but the gains cancel, so despite the gain errors in the individual measurements you know the true magnitude of that product.

So: yes, the errors are correlated, in a very precise way, because the errors are per-telescope rather than per-visibility-measurement. That doesn't let you compute exactly what the measurements should have been -- you can't actually get rid of the noise -- but it lets you compute some other slightly less informative derived measurements from which that noise has been completely removed.
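Here's a quick numerical check of that cancellation, with made-up per-telescope gains and phases (nothing here comes from real EHT data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # four telescopes, labelled 0..3

# True visibilities (random complex numbers, purely illustrative) and
# per-telescope gain and phase errors.
true_vis = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gain = rng.uniform(0.2, 2.0, n)
phase = rng.uniform(0.0, 2.0 * np.pi, n)

def measured(i, j):
    """What the pair i,j actually records: truth times both telescopes' errors."""
    return gain[i] * gain[j] * np.exp(1j * (phase[i] - phase[j])) * true_vis[i, j]

# Closure phase around the triangle 0-1-2: the per-telescope phase errors cancel.
closure = measured(0, 1) * measured(1, 2) * measured(2, 0)
true_closure = true_vis[0, 1] * true_vis[1, 2] * true_vis[2, 0]

# Closure amplitude on telescopes 0,1,2,3: the per-telescope gains cancel.
amp = abs(measured(0, 1)) * abs(measured(2, 3)) / (abs(measured(0, 3)) * abs(measured(1, 2)))
true_amp = abs(true_vis[0, 1]) * abs(true_vis[2, 3]) / (abs(true_vis[0, 3]) * abs(true_vis[1, 2]))
```

The closure phase matches the true phase and the closure amplitude matches the true amplitude, up to rounding, even though every individual measurement's phase and amplitude are badly corrupted.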

(There will be other sources of noise, of course. But those aren't so large and their effects can be effectively reduced by taking more/longer measurements.)

Also, by the way, what she's saying is not that overall the uncertainty is 100% (again, I feel that I have to reiterate that so far as I can tell that particular formulation of the statement is one you just made up and not anything Bouman actually said) but that for some measurements the gain error is large. (Mostly because one particular telescope was badly calibrated.)

Bouman says (CalTech, 16:00) “the CLEAN algorithm is guided a lot by the user.” Yes, and she is pointing out that this is an unfortunate feature of the ("self-calibrating") CLEAN algorithm, and a way in which her algorithm is better. (Also, if you listen at about 35:00, you'll find that they actually developed a way to make CLEAN not need human guidance.)

Bouman says (CalTech, 19:30) “Most people use this method to do calibration before imaging, but we set it up to do calibration during imaging by multiplying the amplitudes and adding the phases to cancel out uncorrelated noise.” This is the business with products and quotients of visibilities that I described above.

Bouman says (CalTech, 31:40) “A data set will equally predict an image with or without a hole if you lack phase information.” Except she doesn't say that or anything like it: you just made that up. If this is meant to be a paraphrase of what she said, it's an incredibly bad one. But, to address what might be your point here (this is an instance where you just misquote, point and laugh, without explaining what problem you claim to see, so I have to guess), note that it is not the case that they lack phase information. Their phases have errors, as discussed above, and some of those errors may be large, but they are systematically related in a way that means that a lot of information about the phases is recoverable without those errors.

Bouman says (CalTech, 39:30) “The phase data is unusable and the amplitude data is barely usable.” Once again, that isn't actually what she says. I don't find her explanation in this part of the talk very clear, but I think what she's actually talking about is using the best-fit parameters they got as a way of inferring what those gain and/or phase errors are, not because they need to do that to get their image but because they can then do that multiple times using different radio sources and different image-processing pipelines and check that they get similar results each time -- it's another way to see how robust their process is, by giving it an opportunity to produce silly results -- and she's saying that the phases are too bad to be able to do that. Again, there absolutely is usable phase information even though each individual measurement's phase is scrambled.

If you look at about 40:50 there's a slide that makes this clearer. Their reconstruction uses "closure phases" (i.e., the quantities computed in the way I described above, where the systematic phase errors cancel) and "closure amplitudes" (i.e., the other quantities computed in the way I described above, where the systematic amplitude errors cancel) and "absolute amplitudes" (which are bad but not so bad you can't get useful information from them) -- but not the absolute phases (which are so bad that you can't get useful information from them without doing the cancellation thing). That slide also shows an image produced by not using the absolute amplitudes at all, just for comparison; the difference between the two gives some idea of how bad the amplitude errors are. (Mostly not that bad, as she says.)

Bouman says (CalTech, 36:20) “The machine learning algorithm tuned hundreds of thousands of ‘parameters’ to produce the image.” I'm pretty sure you misunderstood here, though her explanation is not very clear. (The possibility of misunderstanding is one reason why it is very bad that you keep presenting your paraphrases as if they are actual quotations.) What they had hundreds of thousands of was parameter values. That is, they looked at hundreds of thousands of possible combinations of values for their parameters, and for the particular bit of their process she's describing at this point they chose a (still large but) much smaller number of parameter-combinations and looked at all the resulting reconstructions. (The goal here is to make sure that their process is robust and doesn't give results that vary wildly as you vary the parameters that go into the reconstruction procedure.)

Comment by gjm on The Ghost of Joseph Weber · 2020-07-17T13:29:15.035Z · score: 8 (5 votes) · LW · GW

So, you did mean the Bouman talk I found. As I say, she wasn't "the leader of that project" and she did not say what you say she did.

The particular things that you claim there are "absurd" are not absurd, it's just that you don't understand the procedures they describe and are taking them in the most uncharitable way possible.

(I haven't listened to the CalTech talk so can't comment with any authority on what Bouman meant by all the things you quote her as having said there, but it is absolutely not true that "any single one of the statements [] would disqualify an experiment", and amusingly the single statement you choose to attack there at greatest length is the most obviously not-disqualifying. You say, and I quote, "Most sensible researchers would agree that if the resolution of your experiment is equivalent to taking a picture of an orange on the moon, this means that you cannot do your experiment.". You appear to be arguing that if something sounds impossibly hard, then you should just assume that it is, literally, impossibly hard and that it can never be done. Once upon a time, "equivalent to speaking in New York and being heard in Berlin" would have sounded like it meant impossibly hard. Once upon a time, "equivalent to adding up a thousand six-digit numbers correctly in a millisecond" would have sounded like it meant impossibly hard. Some things that sound impossibly hard turn out to be possible. The EHT folks claim that taking a picture with orange-on-the-moon resolution turns out to be possible. Of course they could be wrong but they aren't obviously wrong; what they're claiming breaks no known laws of physics, for instance. And obviously they aren't unaware that getting a picture of an orange on the moon is very difficult. So I think it's downright ridiculous to say that their project is unreasonable because they're trying to do something that sounds impossibly hard.)

Comment by gjm on The Ghost of Joseph Weber · 2020-07-16T01:45:19.814Z · score: 5 (5 votes) · LW · GW

There is a TED talk by (actually, an interview with, as part of TED2019) Sheperd Doeleman, head of the EHT collaboration, whose transcript you can read on the TED website. It doesn't say anything even slightly like that. Is there some other TED talk by her that you're referring to? (I can't find any evidence that there is another.)

The only other thing I can find that you conceivably might be referring to is a TEDx talk by Katie Bouman, from 2017 (before the EHT picture was produced). Her title is "How to take a picture of a black hole" and it includes a prediction of roughly what the picture might be expected to look like, and includes the words "my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements". Maybe that's what you mean?

She doesn't say "exactly", or even approximately, that applying the same pipeline to random input would generate a similar result. Quite the reverse; let me quote her again. "What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the centre of our galaxy." She says, in other words, that a key consideration in their work was not doing exactly what you say she said they did.

(Shortly after that bit there is a slide that, if wilfully misunderstood, might seem to fit your description. Its actual meaning is pretty much the reverse. I won't go into details right now because I don't know whether you saw that slide and misunderstood it; I don't know whether this is the TED talk you're referring to at all. But I guess this is it.)

Incidentally: Katie Bouman was a PhD student, was not an astronomer, and was certainly not the leader of the EHT project. The project was already happening and already funded, but I suppose you could call her talk "selling the project to the public" in the sense in which any attempt to describe anything neat one's doing is "selling the project". Bah.

Comment by gjm on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-07-15T20:51:23.114Z · score: 2 (1 votes) · LW · GW

Sure, whatever works for you. (Including not getting back to me later, of course. If what I wrote came across as trying to impose an obligation then I put it badly.) I hope the videoconference went well.

Comment by gjm on The Ghost of Joseph Weber · 2020-07-15T20:48:04.686Z · score: 8 (5 votes) · LW · GW

Nope, not playing any more of that game. If you want to make a point, make it. If you want to hint vaguely that you're smarter than me by posing as Socrates, go ahead if you wish but don't expect my cooperation.

Comment by gjm on Algorithmic Intent: A Hansonian Generalized Anti-Zombie Principle · 2020-07-15T07:55:58.380Z · score: -5 (6 votes) · LW · GW

(Why write a lengthy and potentially controversial piece if you know you haven't time to engage with responses? But --)

[EDITED to add:] Of course, maybe you have plenty of time to engage with other responses and something about mine specifically or about me specifically makes you value such engagement particularly little. In that case there'd be no particular inconsistency. I don't know of any reason why my comments, specifically, would be not worth engaging with -- but then I wouldn't, would I?

The statement "as an aspiring epistemic rationalist I am not allowed to do X" can be interpreted in three ways. (1) "Not doing X is part of what the word 'rationalist' means." (2) "Among rationalists, X is morally prohibited." (3) "X is in some fashion objectively wrong for everyone, and it happens that rationalists pay particular attention to that sort of wrongness."

The behaviour of others who consider themselves rationalists is relevant to #1 because the meaning of a word is determined by how it is actually used. It is relevant to #2 because what is prohibited in a given community is determined by the opinions of that community as a whole. It is only tangentially relevant to #3, and I suspect that #3 is your actual meaning; but (a) prima facie #1 and #2 are also possible, and (b) even with #3 I think one function of "as an aspiring epistemic rationalist" in what you wrote is to encourage readers who also think of themselves that way to feel bad about disagreeing, which I think they shouldn't and are less likely to if aware that either your usage of "rationalist" or your opinion about what "rationalists" are allowed to do is highly idiosyncratic.