post by [deleted]

This is a link post for

Comments sorted by top scores.

comment by Brendan Long (korin43) · 2020-08-04T16:17:40.026Z · LW(p) · GW(p)

I'm assuming it wasn't your goal, but this feels like too much of a personal attack on a particular scientist.

As far as I can tell, Dr. Bouman is just one of many users of the CLEAN algorithm, which Wikipedia says was invented in 1974.

The algorithm she was apparently instrumental in creating is CHIRP, which Wikipedia says is useful precisely because it doesn't require the user input that you're complaining about:

While the BSMEM and SQUEEZE algorithms may perform better with hand-tuned parameters, tests show CHIRP can do better with less user expertise.

Your points about how noisy this data is, and about the limitations of the algorithms used to construct these images, are really interesting, but the narrative of this article puts too much of the blame for the problems in astrophysics on a single scientist.

Replies from: lsusr
comment by lsusr · 2020-08-04T20:46:51.167Z · LW(p) · GW(p)

The photo especially makes it feel like a personal attack. It is not clear to me what purpose this photo serves other than to make the attack extra personal.

comment by Brendan Long (korin43) · 2020-08-04T19:50:39.000Z · LW(p) · GW(p)

It looks like gjm already explained, in a comment 18 days ago [LW(p) · GW(p)], how you're giving a misleading account of what these algorithms do and of how Dr. Bouman used them:

The "weirdness" term in the CHIRP algorithm is a so-called "patch prior", which means that you get it by computing individual weirdness measures for little patches of the image, and you do that over lots of patches that cover the image, and add up the results. (This is what she's trying to get at with the business about random image fragments.) The patches used by CHIRP are only 8x8 pixels, which means they can't encode very much in the way of prejudices about the structure of a black hole.

[...]

For CHIRP, they have a way of building a patch prior from a large database of images, which amounts to learning what tiny bits of those images tend to look like, so that the algorithm will tend to produce output whose tiny pieces look like tiny pieces of those images. You might worry that this would also tend to produce output that looks like those images on a larger scale, somehow. That's a reasonable concern! Which is why they explicitly checked for that. (That's what is shown by the slide from the TEDx talk that I thought might be misleading you, above.) The idea is: take several very different large databases of images, use each of them to build a different patch prior, and then run the algorithm using a variety of inputs and see how different the outputs are with differently-learned patch priors. And the answer is that the outputs look almost identical whatever set of images they use to build the prior. So whatever features of those 8x8 patches the algorithm is learning, they seem to be generic enough that they can be learned equally well from synthetic black hole images, from real astronomical images, or from photos of objects here on earth.
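(The robustness check described above can also be sketched. `fit_patch_prior` below fits simple patch statistics from one database; `reconstruct` is a hypothetical stand-in for the actual imaging pipeline, which I'm not reproducing here.)

```python
# Sketch of the check described above: build a separate patch prior from
# each image database and confirm the reconstructions barely change.
import numpy as np

def fit_patch_prior(images, patch=8):
    """Fit the mean and inverse covariance of 8x8 patches pooled across
    one training database."""
    patches = np.array([im[i:i + patch, j:j + patch].ravel()
                        for im in images
                        for i in range(0, im.shape[0] - patch + 1, patch)
                        for j in range(0, im.shape[1] - patch + 1, patch)])
    return patches.mean(axis=0), np.linalg.pinv(np.cov(patches, rowvar=False))

# databases = {"synthetic_black_holes": ..., "astronomical": ..., "photos": ...}
# priors = {name: fit_patch_prior(imgs) for name, imgs in databases.items()}
# recons = {name: reconstruct(measurements, prior) for name, prior in priors.items()}
# If the images in `recons` come out nearly identical, the learned patch
# statistics are generic rather than black-hole-specific.
```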

[...]

Oh, a bonus: you remember I said that one extreme is where the "weirdness" term is zero, so it definitely doesn't import any problematic assumptions about the nature of the data? Well, if you look at the CalTech talk at around 38:00, you'll see that Bouman actually shows you what you get when you do almost exactly that. (It's not quite a weirdness term of zero; they impose two constraints: first, that the amount of emission in each place is non-negative, and second, a "field-of-view constraint", which I assume means that they're only interested in radio waves coming from the region of space they were actually trying to measure. ... And it still looks pretty decent and produces output with much the same form as the published image.)
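(That "almost no prior" extreme is also easy to sketch: fit the data with only a non-negativity constraint and a restricted field of view. This assumes the standard linearized setup in which the measured visibilities are approximately a known linear function `A` of the image; `A`, `visibilities`, and `fov_mask` are all illustrative names, not anything from the actual EHT pipeline.)

```python
# Sketch of a reconstruction with no "weirdness" term: only non-negativity
# and a field-of-view constraint. Assumes visibilities ~ A @ image, with
# complex visibilities already split into real and imaginary rows of A.
import numpy as np
from scipy.optimize import nnls

def constrained_fit(A, visibilities, fov_mask):
    """Least-squares fit with emission >= 0, restricted to in-FOV pixels."""
    x_fov, _ = nnls(A[:, fov_mask], visibilities)  # nnls enforces x >= 0
    image = np.zeros(A.shape[1])
    image[fov_mask] = x_fov  # pixels outside the field of view stay zero
    return image
```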

[...]

Bouman says (CalTech, 16:00) “the CLEAN algorithm is guided a lot by the user.” Yes, and she is pointing out that this is an unfortunate feature of the ("self-calibrating") CLEAN algorithm, and a way in which her algorithm is better. (Also, if you listen at about 35:00, you'll find that they actually developed a way to make CLEAN not need human guidance.)

comment by lsusr · 2020-08-04T20:57:40.560Z · LW(p) · GW(p)

I was advised that the articles I first posted here on Less Wrong were too long, wide-ranging, and technical, so with this article, I'm trying a more scaled-back, focused, 'reveal culture' style. Does it work better than the approach in the article below?

This is better but it still contains two separate ideas. One idea is "I don’t really believe that black holes exist because I think that theorists got drunk on general relativity and invented them." The other idea has to do with the creation of this specific image.

I agree [LW(p) · GW(p)] with korin43 [LW(p) · GW(p)] that the second idea already "feels like too much of a personal attack on a particular scientist". However, if you really want to continue in that direction, then you should read all of the relevant scientific papers, write a post (or posts) explaining exactly, in mathematical terms,[1] how the algorithms work, and then explain the error in careful, unambiguous mathematical terms. Strip out everything else.[2] Less Wrong is read by theoretical physicists, quantitative hedge fund managers, specialists in machine learning, and so on. We can handle the math. We do not have time for unnecessary words.

I think it would be more constructive to go in the direction of "black holes do not exist". The problem is not that your articles are too technical. The problem is that they depend unnecessarily upon deep technical knowledge from unrelated domains. An article explaining why black holes do not exist would require technical knowledge in only a single domain.


  1. Less Wrong supports MathJax. ↩︎

  2. In your comments and articles, you often speculate on the motives and thought processes of people you disagree with. I think your writing would benefit from leaving this out and sticking to the facts: instead of arguing "[person] who espouses [popular idea] is wrong because ", write "[unpopular idea] is right because ". ↩︎

Replies from: korin43
comment by Brendan Long (korin43) · 2020-08-04T21:12:39.142Z · LW(p) · GW(p)

I would add that an article showing problems with either CLEAN or CHIRP would be interesting (especially if you can demonstrate them by actually running the algorithm, or by pointing at other people's results from doing so), but an article about both at the same time is needlessly complex.

comment by lsusr · 2020-08-04T21:25:28.192Z · LW(p) · GW(p)

I think that it is unfortunate that she was put out into the public eye at such an early stage of her career, before she could fully understand what she was doing. [emphasis mine]

This is yet another example of a personal attack. Please refrain from making unsubstantiated claims belittling other people.

Replies from: Raemon
comment by Raemon · 2020-08-04T21:29:16.378Z · LW(p) · GW(p)

This is a reminder to me to write up an article that looks at what the norms actually should be for speculating about other people's motivations. 

(I think there is some sort of general "Be careful psychologizing people, especially when criticizing them" norm that's sort of in the water, but I don't know that I've seen it written up clearly. I currently endorse Duncan Sabien's proposed norm of "if you're going to do that, clearly label your cruxes for what would change your mind about their psychology", but it's a non-obvious norm IMO and took me a while to come around on.)

comment by Dagon · 2020-08-06T17:48:48.873Z · LW(p) · GW(p)

Compartmentalization is a good way to make sure that no one ever understands the big picture

We're not asking for compartmentalization; we're asking for clearer composition. The argument about meta-level trends or aggregates depends entirely on multiple strong examples, each of which must be independently verifiable. It's fine to give pointers between the arguments, but they shouldn't actually depend on each other in a circular way.

comment by Brendan Long (korin43) · 2020-08-05T20:07:27.548Z · LW(p) · GW(p)

That's a good point. I think I'm more interested in your meta-point about science in general anyway, but the problem is that, at first glance, your supporting arguments seem to be wrong. Given that Dr. Bouman worked on one of several teams trying to avoid exactly the problem you're talking about by using multiple different methods, and that her CHIRP algorithm was created specifically to avoid the biases that CLEAN introduces, your meta-argument doesn't work unless you go deeper and make a stronger argument that CHIRP is biased or broken in the way you're claiming.