If I predict that a ghost will appear in the castle tower every midnight, I might put a camera with a timer in the tower and attempt to capture an image of the ghost. I could repeat this process every night for a year. Perhaps the ghost only shows up at midnight on certain holidays. My hypothesis about the ghost would be falsified if none of the images show a ghost.
BUT, if I combine all of the images I’ve taken into one image, so that a blurry, ghostly form begins to take shape as a result of all of the dust floating around in the individual images, can I announce that I have taken a picture of a ghost?
If I select only the subset of images that produce the most ghost-like form when combined, can I announce that I have taken a high-resolution picture of a ghost?
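As it happens, this selection effect is easy to demonstrate numerically. The sketch below is a toy model with made-up dimensions, not anyone's actual pipeline: it generates pure-noise "images", keeps only the ones that best match a ghost-shaped template, and averages them. The average resembles the template even though no input contained a ghost.

```python
import random

random.seed(0)
N = 16  # pixels per toy "image"

# A hypothetical ghost-shaped template: bright in the middle, dark at the edges.
template = [1.0 if 5 <= i <= 10 else -1.0 for i in range(N)]

def correlation(a, b):
    return sum(x * y for x, y in zip(a, b))

# 2000 images of pure Gaussian noise -- no ghost anywhere.
images = [[random.gauss(0, 1) for _ in range(N)] for _ in range(2000)]

# Average of ALL images: stays near zero everywhere.
avg_all = [sum(img[i] for img in images) / len(images) for i in range(N)]

# Average of only the 5% of images that best match the template:
# selection drags the mean toward the template's shape.
best = sorted(images, key=lambda img: correlation(img, template), reverse=True)[:100]
avg_best = [sum(img[i] for img in best) / len(best) for i in range(N)]

print(correlation(avg_all, template))   # near zero
print(correlation(avg_best, template))  # strongly positive
```

The cherry-picked average "contains" a ghost that was never in any single image, which is the statistical sin the post is describing.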
If you would like to hear this post read aloud, try this video.
I think that most people would say that the answer to those questions is ‘no’. Yet in the scientific community today, gravitational wave and black hole physicists are doing both of these things and getting praised for their work.
The Event Horizon Telescope (EHT) collaboration combined blurry telescope images into one image, fine-tuning calibration constants for regions of interest until what they wanted to see finally appeared.
The gravitational wave observatory LIGO-VIRGO filters its noisy data with a template of what it wants to see. In many ways, this is like taking a ghost-shaped filter and applying it to a photograph.
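Matched filtering is a real and standard signal-processing technique; the question raised here is about how it is used. A minimal sketch (toy signal, toy noise, not LIGO's pipeline) of sliding a template across noisy data to find the offset where it lines up best:

```python
import math
import random

random.seed(1)

n = 400
# Toy "ringdown" template: a decaying sinusoid, 60 samples long.
template = [math.sin(2 * math.pi * t / 20) * math.exp(-t / 30) for t in range(60)]

# Noisy data stream with the template buried at offset 150.
data = [random.gauss(0, 0.5) for _ in range(n)]
for t, v in enumerate(template):
    data[150 + t] += v

def matched_filter(data, template):
    # Correlate the template against every possible offset in the data.
    m = len(template)
    return [sum(data[i + j] * template[j] for j in range(m))
            for i in range(len(data) - m)]

out = matched_filter(data, template)
best = max(range(len(out)), key=lambda i: out[i])
print(best)  # peaks at (or very near) the injected offset of 150
```

The filter output peaks near the injected offset, which is why the technique is powerful, and also why, applied to pure noise with enough different templates, it can produce spurious peaks that must be carefully accounted for.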
What is fascinating about these multi-billion dollar projects is that these sorts of logic errors occur at every single layer of their experimental design and those who work on the projects show no awareness of the ways in which their work is out of line with the scientific method. In the LIGO-VIRGO experiment, a cursory examination reveals:
The lack of attention to earlier measurements,
The lack of a control variable or consistency with predictions,
The use of templates or tunable filters to extract what they want to see in noisy data,
The impossibility of distinguishing the source of what they are measuring,
The impossibility of determining the source’s properties,
The lack of awareness of the limitations of their theoretical approximations.
LIGO claims to have confirmed general relativity by using general relativity to construct theories about black holes. Is this circular reasoning? Is the dog still chasing its tail? Obviously yes, but some people require more convincing than that, so I will go through each of the points listed above.
In principle, the scientific method ensures that the community will correct its errors, but if it doesn’t study its own history, there is no guarantee that it won’t repeat its errors over and over again.
Back in the 1960s, hundreds of independent research groups constructed devices to measure gravitational waves and they all compared their results. Each research group thought that it had measured gravitational waves, but when the results were combined, they all had to conclude that no one had been measuring gravitational waves from deep space. They had all been measuring different sources of noise.
Fast forward fifty years and the physics community has given a Nobel Prize to a group that claimed to have measured gravitational waves with a device so expensive it is impossible to duplicate it enough times to make sure it isn’t just measuring noise.
I find it rather strange that the physics community keeps forgetting the science lesson in which the teacher says, “you must repeat your measurement under many different conditions for it to be worth anything” and “you can’t measure control variables.”
If you didn’t pay attention in science class, I’ll remind you of the purpose of a control variable. In an experiment, the control variable is not changed. The purpose of an experiment is not to study the control variable. The control is used for comparison.
In a well-designed experiment, one typically blinds the researchers to the type of variable they are measuring, so that they don’t bias their data collection. In medical research, this is called a double-blind controlled study. The control variable is usually a placebo and the researchers don’t know which patient got a placebo and which got the medicine. If you can’t conduct your research in this way, you should always doubt your conclusions because it was possible that you biased your result with your expectations.
These are fundamental principles of the scientific method, and expensive physics experiments like LIGO and EHT are blind to them. In fact, all of the modern physics experiments that attempt to measure the noise floor of empty space (LHC-Higgs, COBE-CMB, LIGO-VIRGO, BICEP, etc.) are dubious for similar reasons: they have no control variable because they are trying to measure the control variable. It is like a dog chasing its tail.
If your measurement tool is moving around by the same amount as the thing you want to measure, you aren’t going to be able to measure it very well.
If you can’t have a control variable, then at a minimum your experiment should reproduce some predictions, and LIGO has not done this either. As of December 2019, they had announced 50 detections since 2017, but as of May 2020 they stand by only 10 of those detections, having been forced to attribute many of the rest to known sources of noise. Compared with the papers they wrote predicting how many collisions they would expect to see after their upgrade, these numbers are wildly off the mark. They expected to see at least a few black hole collisions and at least one neutron star collision per month starting in August 2019. It has been nine months, and those predicted detections have not materialized.
Templates and Filtering Data
The purpose of the scientific method is to make sure that you are seeing things that are really there. A scientist does not want to be biased by what he wants to see, but I’m afraid that LIGO has built bias into their experimental design by using black-hole-shaped templates to filter and tune their data, sometimes by hand. In the second image below, you can see what their data looked like without their fine-tuned filtering, and it looks nothing like the ‘black hole ringdown signature’ that they published in their Nobel Prize-winning paper.
Then there is the issue of the scientific ethics of hand-tuned data:
“If LIGO did anything wrong,” [a LIGO supporter] added, “it was not making it crystal-clear that pieces of that [famous, Nobel Prize winning] figure were illustrative and the detection claim is not based on that plot.” [Respected Niels Bohr Institute researchers], however, accused LIGO scientists in an email of “misconduct” and making “the conscious decision not to inform the reader that they were violating one of the central canons of good scientific practice.”
When the Niels Bohr Institute research group told LIGO that there was unaccounted for correlated noise throughout their prize winning data, LIGO replied [paraphrasing], ‘we put the wrong data in our prize winning paper and if you look at the right data, you won’t see the correlated noise. Oh, and you forgot to use an FFT windowing function even though that is a mistake that no physicist would ever make.’
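For readers unfamiliar with the windowing point: taking a discrete Fourier transform of a finite record whose tone does not complete an integer number of cycles smears power across the spectrum (spectral leakage), and tapering the record with something like a Hann window suppresses that smearing. A toy DFT (made-up tone, not any detector data) makes the effect visible:

```python
import math

N = 128
freq = 10.3  # cycles per record: non-integer, so the tone sits between DFT bins
x = [math.sin(2 * math.pi * freq * n / N) for n in range(N)]

def dft_mag(x, k):
    # Magnitude of DFT bin k, computed directly from the definition.
    re = sum(v * math.cos(2 * math.pi * k * n / N) for n, v in enumerate(x))
    im = -sum(v * math.sin(2 * math.pi * k * n / N) for n, v in enumerate(x))
    return math.hypot(re, im)

# Hann window: tapers the record to zero at both ends.
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
xw = [v * w for v, w in zip(x, hann)]

# Total leakage in a band of bins far from the true frequency:
# substantial without a window, tiny with one.
leak_rect = sum(dft_mag(x, k) for k in range(35, 46))
leak_hann = sum(dft_mag(xw, k) for k in range(35, 46))
print(leak_rect, leak_hann)
```

The leakage far from the tone drops by orders of magnitude once the window is applied, which is why omitting a window is considered an elementary mistake in spectral analysis.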
Based on these issues, I think LIGO has used a ghost-shaped filter to help them see more ghosts, but this isn’t even the worst of their problems.
Distinguish the Source
Even if a ghostly apparition or a black hole collision really happens somewhere in the universe, LIGO has no way to distinguish its signature from some more mundane, local occurrence. That is, they cannot distinguish a deep-space gravitational wave from a more local wave, yet those who claim to measure black hole collisions in deep space believe that they can do this. If ghost hunters can’t know whether they measured a cloud of smoke or a ghost, how can LIGO know whether they measured the sun burping or colliding black holes?
A LIGOnaut or ghost hunter might respond to this criticism by pointing to the measurement they took of a gamma ray or ectoplasm burst occurring at the same time that they measured a ghostly collision signature, but there again, I see evidence of mass delusion. There were 4,700 authors, or one third of all astronomers in existence, on a paper about correlations between something that LIGO measured and something that other astronomers measured, but the timing of the signals doesn’t match up. If a message goes out to the ghost-hunting community to start looking for ectoplasm because someone thinks they saw a ghost, I think it is quite possible that ghost hunters might find a booger left by a kid and determine that it is ectoplasm.
This should worry people because false alarms about weak gamma ray detections regularly prompt LIGO to dig through their noise for evidence of signals that look like black hole mergers.
“The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However, observations from the INTEGRAL telescope’s all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, concluding that “this limit excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer.” If the signal observed by the Fermi GBM was genuinely astrophysical, SPI-ACS would have detected it with a significance of 15 sigma above the background. The AGILE space telescope also did not detect a gamma-ray counterpart of the event. A follow-up analysis of the Fermi report by an independent group, released in June 2016, purported to identify statistical flaws in the initial analysis, concluding that the observation was consistent with a statistical fluctuation or an Earth albedo transient on a 1-second timescale.” Fermi Gamma-ray Space Telescope – Wikipedia
The sort of gamma ray detection at Fermi that initiates the search for correlations with LIGO happens many times per day and when thousands of people are all expecting to see something buried in noise, I think a fluke is still a possibility. Call me crazy, but I think that skepticism is a good thing to hold onto when you are dealing with noisy, sensitive measurements taken by highly motivated groups of people.
Maybe I am just being too strict in my interpretation of the scientific method. After all, if something dark and mysterious happens in deep space and causes the sun to burp, and that causes the Earth’s core to gurgle, and that causes a lake to heat up, which causes a thunderstorm, which causes a lightning strike, which hits a Schumann resonance, and LIGO detects that, did LIGO detect a gravitational wave? By the standards of modern physics, many people would say ‘yes’. But modern physicists do lots of strange things in their attempts to measure unmeasurable things. Neutrinos, Higgs particles, cosmic microwave background radiation, oh my.
Determine the Source’s Properties
Suppose that I believe that the blips measured by LIGO really come from deep space black hole collisions and I believe that they have accurately estimated the size of the objects which collided, even though those estimates are absurdly larger than what they had expected to see based on other measurements and based on the theory of black holes.
Suppose that I blind myself to these errors. Can I believe in the extrapolations from these estimates which are used to determine the distance to the collision, or have they made mistakes there, as well? In their first gravitational wave announcement, they wrote that the collision was 1.3 billion light years away.
If you detect a wave at two locations and you know its propagation speed, you can determine the direction from which the wave came, but not the distance to the thing that caused the wave. LIGO claims that they can determine the size of the objects which created the wave they detected based on the frequency of the wave: lower frequencies correspond to larger objects. From there, you need to guess how far away such objects typically are. They use the estimates of how big they think the objects were, and how far away such objects typically are, to estimate how big the wave should be when it reaches us. Ignoring the circularity of this logic, an absolute measurement of the amplitude of the wave should then tell you more about how far away it was, and since they assume that the signal traveled at the speed of light, they can conclude when the event occurred.
There are a lot of assumptions in this chain of logic and while it might make sense at first glance, an absolute measurement of the amplitude of the wave isn’t really possible with their apparatus. Every red shift, amplification, or filtering of the signal is associated with a factor which adds an error to a determination of the absolute amplitude of the signal, and by tuning these difficult to determine error estimates, you can give yourself just about any result you want.
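To see how a calibration factor propagates, here is that inference chain reduced to its bare arithmetic. All numbers and the simple 1/distance scaling are illustrative assumptions for this sketch, not LIGO's actual calibration model:

```python
def inferred_distance(intrinsic_amplitude, measured_strain, calibration=1.0):
    """Distance estimate assuming wave amplitude falls off as 1/distance.

    `calibration` models an uncertain multiplicative factor in the
    measurement chain (amplifiers, filters, red shifts). All values
    here are made up for illustration.
    """
    return intrinsic_amplitude / (measured_strain * calibration)

A = 1.0e6   # assumed intrinsic amplitude at unit distance (made up)
h = 1.0e-3  # measured amplitude at the detector (made up)

d_nominal = inferred_distance(A, h)
d_miscal = inferred_distance(A, h, calibration=2.0)  # 2x calibration error
print(d_nominal, d_miscal)
```

A factor-of-two error anywhere in the amplitude chain becomes a factor-of-two error in the inferred distance, which is the sense in which tuning hard-to-determine calibration factors can move the final answer around.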
Surely the people who built LIGO weren’t so stupid as to ignore all of these issues. They must’ve had a fundamental, basic research justification for this experiment. It can’t have been built solely for the sake of the engineering byproducts and busy work. One would hope that the leaders of the project did not knowingly send a small army of students off on a decades-long, multi-billion dollar wild goose chase or ghost hunt.
I’ll unpack the theoretical basis of the experiment in four layers: general, colloquial, technical, and esoteric. Pick your favorite poison. The language degenerates quickly and the esoteric and technical arguments are, of course, the most tedious, so I’ll do them last.
Generally speaking, by conflating relative and absolute coordinate systems and language, LIGO has convinced an influential subset of people that impossible things are possible. We can imagine having an absolute, godlike perspective, but we cannot actually adopt one through our measurements because nothing is truly stationary. We can only measure things from a relative perspective and LIGO’s experimental design implicitly assumes it can take an absolute, godlike perspective. I find that thinking about bubbles helps illuminate the folly of this way of thinking.
Colloquially speaking, if you believe in the original Michelson-Morley experiment, then LIGO cannot measure the flexing of the Earth, because if the Earth stretches in one direction it must contract in the other, and their device is insensitive to this motion. Also, if light waves and matter waves change their shape in the same way at the same time, you can’t use light waves to measure matter waves. In contrast, you can measure how the Earth flexes by comparing the path traveled by a particle beam in a linear accelerator to that of a light pulse with a stable arrival time, showing how matter aether and luminiferous aether behave in different ways.
Technically speaking, previous experiments with Michelson interferometers suggested that in absolute space, light’s wavelength and the interferometer cavity dimensions change in the same way at the same time, while the amount of time required for light to be reflected by the mirrors is constant. In relative space, the cavity dimensions are constant because they track the changes of the general relativity coordinate system, while the amount of time the light requires to enter and exit a mirror changes (and you might wonder why they thought there would be an advantage to making the interferometer several kilometers long). In either case, the changes in one arm of the interferometer exactly counteract the changes in the other arm, such that when the Earth turns or a wave passes through, no change will be measured at the detector.
LIGO insists that they can bypass this issue by adding ‘fresh light’ to the system, which allows them to sample the changes in the cavity size. This complication makes the system too difficult for most people to visualize intuitively, and this causes them to fall back on mathematics, which opens the door to errors of approximation. From my perspective, those who claim to understand how the fresh-light concept works are akin to those who admire the emperor’s new clothes.
I’ll try to debunk the fresh-light concept in the language of reflection time. When a gravitational wave arrives, the reflection time at the first mirror and the combiner mirror are decreased at the same time that reflection time at the second mirror and the combiner mirror are increased. If you think sequentially, fresh light that hits the first mirror gets slowed down and it will be compared to older light that has been sped up on the second mirror. The difference between the fresh light and the old light will show up as a change at the detector. This is what would happen if the laser were pulsed, but LIGO’s laser is not pulsed, it is continuous.
LIGO has conflated a discrete process with its continuous-wave apparatus. If you think of simultaneous processes, as is required for an apparatus that uses continuous waves, there is no such thing as fresh light or old light – there is just light and you see that the concept is nonsense. They are using sequential thinking for a simultaneous process and if a person cannot mentally animate two processes occurring in parallel, then he will be more likely to believe in LIGO. In a broader sense, most of the mathematics we use in physics is an attempt to sequentially approximate simultaneous processes and if you think about entanglement or quantum mechanics from this perspective, the concepts lose their relativistic mystery.
At this point, a determined LIGOnaut might pull out some technical jargon. Complicated figures would be employed in an attempt to convince us that sequential, discrete analysis can be used to approximate their continuous, simultaneous process.
“Cannot light from a laser be considered pulsed but at a fast rate since it is produced with stimulated emission so that light entering the vacuum arms with the mirrors are pulsed at LIGO-VIRGO? By slowing the rate that light is emitted from a laser, pulsing is more obvious.”
But any non-LIGO physicist knows that stimulated emission in a laser means that photons are emitted in proportion to the square of the particle number. It is a collective, oscillatory effect in which large groups of particles oscillate in sync, making the waves they generate coherent rather than random. This process is not conducive to the emission of individual photons, as in an incoherent process. Pulses of laser light are produced through a different process, which involves giving a continuous wave a frequency chirp and sending it through a collection of dispersive elements, like prisms or diffraction gratings. If LIGO is using this justification, it is yet another instance of conflating concepts that should not be mixed. The rate of light emission would have to be slowed to the level of individual photons for this pulsing effect to be measurable within a system designed for coherent light, and the detector would need to be sensitive to changes in the number of individual photons arriving. Direct measurement of individual photons is not possible with their system. In any case, if the system is only sensitive to changes on the level of individual photons, then it makes no sense to have the powerful continuous wave in the interferometer at all.
I recall a similarly obtuse debate about the physics of the beam-splitter, but I’d rather not dive into that at the moment.
Esoterically speaking, in Lorentzian Maxwell’s equations, transverse and longitudinal waves have the same speed in free space, and just as in the original Michelson-Morley experiment, if you use those equations to describe the experiment, LIGO shouldn’t be able to measure anything. But if one decides that the continuous Lorentzian Maxwell’s equations in relative, Riemannian, curved space are merely an approximation of a more discrete, grainy, recursive system described by Galilean Maxwell’s equations in absolute, Cartesian, flat space, then there will be discontinuities between transverse and longitudinal waves, which might be measured as a sort of friction coefficient corresponding to the strength of the ambient gravitational or magnetic field. In olden times, one would call these discontinuities magnetic monopoles, but today we call them all sorts of things: displacement currents, positrons, electrons, matter, antimatter... black holes.
Perhaps the inventors of LIGO were trying to determine the properties of the magnetic monopoles filling space. That might be worth doing. “Wait!” You might say. “Magnetic monopoles do not exist. I learned that in my first physics classes.”
“If you’d known these equations 300 years ago, you’d have been very powerful.” “A student just like you came from this University and won a Nobel Prize.” She says the words “paradox,” “puzzle,” and “mystery” several times and is selling physics to impressionable young people as a discipline which can make them powerful, noble, and famous. Notice how she ignores the approximation inherent in setting the divergence of the magnetic field equal to zero in Gauss’s law.
Basically, magnetic monopoles are localized swirls of space and time that exist when you don’t try to directly measure them, or that exist in an instant but not over a measurable timestep. They are sort of like vortices or bubbles: ephemeral, ghostly little things. Modern physicists have taken to calling them ‘particles’ and confusing people about the concept of antimatter.

A black hole is a representation of the largest ‘particle’ we can think of, and when we imagine their collision in distant space, we picture matter and antimatter annihilating and releasing energy, just as we observe in our Earthly particle colliders. This is why the idea of a black hole is so appealing to the physics community. Yet just because an approximation like general relativity works on one length scale, there is no reason to believe that its approximations are valid on another. After all, space acts flat over some length scales and curved over others. That is why approximating black holes with general relativity and thinking that they literally exist in a mathematically perfect form is too much of a fanciful stretch for many people.

It is understandable that people are curious about them, because if black holes are like virtual particles in colliders, then they borrow energy from the vacuum and disappear as quickly as they form; however, if the vacuum is fed by an outside source of energy, then a black hole might form and remain for a long time. If black holes exist, it tells us something about whether positive or negative entropy rules the universe, and this has a certain quasi-religious, psychological impact on many people.
I don’t, of course, know how all physicists think, but I know two physicists very well: myself and my husband, a man who believes in LIGO’s measurements.
His funding sources have nothing to do with LIGO and he is very secure in his job, but he has been conditioned to believe that it is impolite, politically unwise, and not his business to think critically about anyone’s research other than his own. The politically astute thing to do in the case of LIGO is to approve of them and give them the benefit of the doubt without spending any time thinking about the issue, even though he would be an ideal person to deconstruct their engineering designs. He genuinely has not given the matter two seconds of thought.
He stopped listening to me early in my post-doctoral work when I began to have ideas independently of him. At that point, he refused to talk to me about anything related to work. Any time I tried to explain something, I got shut down. I soon figured out that the reason he shut me down was that he really couldn’t understand what I was saying. His mind works in a very narrow, specialized fashion and he couldn’t understand any of the associations I was making.
I find the following argument quite clear, but he can’t even force himself to read such a thing because he is so specialized.
If general relativity is a static approximation of a more dynamic system, then its predictions about black holes will be inaccurate. Black holes might not exist.
Even if general relativity is an adequate approximation of a dynamic system and black holes do exist, the method of creating an image out of noisy, contaminated data is still inaccurate.
Using a single result produced through faulty data analysis to claim confirmation of a hypothesis based on an extrapolation from an approximation is terrible science on multiple levels.
A theory might be supported through results produced with many independent experimental methods addressing many hypotheses about the theory, but using multiple data analysis methods with a single data set to claim confirmation of a theory is absurd.
Those who make images of black holes and gravitational wave measurements are doing all of these things and physicists like my husband refuse to worry about such matters outside of their narrow purview.
The best that science can do is to create a controlled experiment in which changing one variable causes another variable to change. Since this is not possible in astronomy, we must rely on astronomers to exercise restraint when they interpret their data. That restraint appears to be sorely lacking in the present day community.
I’ve been writing about these issues for a while, but I’m not sure how to hit the right buttons to get the message across.
In recent months a few physics apostates have begun to make noise about these matters, as well, noting that LIGO has produced nothing but false alarms over the past year. As in, once they got their systems in full operation, they were forced to rule out all of the things they would’ve ordinarily proclaimed to be black hole mergers. What does that say about all of their earlier, published detections? Nothing good.
A guy I met on Twitter named Thaddeus Guttierez is taking a more academic approach to being a LIGO gadfly by attempting to identify things like lightning strikes that occurred at the same time as LIGO’s claimed detections, and while he doesn’t appear to be in the mainstream academic community, he speaks their language. He sent me these links to mainstream LIGO gadflies:
A common thing for a young or aspiring physicist to say is, “If your ideas are so great, why haven’t you published them in a peer-reviewed journal?” They might not know that for an ex-physicist, ex-post-doc like me who is unaffiliated, publishing in a peer-reviewed journal is very expensive. APS journals charge two thousand dollars, and Springer journals only waive their fees if you are employed by an approved institute.
I found one, online peer-reviewed journal (frontiersin.org) that does not seem to charge. It appears to be a new model for scientific publishing in which the referees are not anonymous, but I’m a bit suspicious of it. As in all internet media, it is far too easy to distort an author’s impression of their work’s distribution. (I just checked into this a bit more closely, and they do charge a fee for publication – if your paper is accepted. I don’t know how much it is. I think they should pay me, not the other way around.)
After putting effort into debunking this stuff, I do ask myself why I bother and I think that I want to demystify the pop-sci nonsense used to lure young people into physics servitude. I think they might find better things to do with their time and I’d like to help them avoid the mistakes I made.
Thanks for presenting your thesis. However, one of your figures doesn't support your argument on closer inspection. The figure that you point to as being the 'unfiltered' data is measuring cross-correlation between the Hanford and Livingston datasets, so we should expect it to look completely different than the datasets themselves.
I also want to push back on a particular point - there's nothing wrong in principle with using a black-hole shaped filter to find black holes. You just have to adjust the prior based on the complexity of your filter.
If you click on the link to the article about the data source, I think you will find that you have misinterpreted that figure and that my interpretation is correct. Thank you for pointing out that the presentation can look ambiguous without that context.
I read the article. samshap is 100% right and you are 100% wrong.
[EDITED to add:] "What is asserted without evidence can be dismissed without evidence" -- but I might as well give some justification for my claim. Here is what the article says:
First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following:
(followed by the graph you provide here). Note two things.
First: "I begin by cross-correlating the Hanford and Livingston data" -- just as samshap says.
Second: "in a very narrow 0.02s window". That's about 1/10 of the time period represented by the main plots, which go from 0.25s to 0.45s "relative to September 14, 2015 at 09:50:45 UTC" (not that we can tell from your presentation, because you clipped off the bottom part of the figure which includes the time axes). So this could not possibly be an alternative to the other plots; the horizontal axes aren't in any way compatible.
The context for this is that the (LIGO-skeptical) Cresswell et al paper is looking at the time lags between LIGO observations, and claiming to cast doubt on the idea that seeing two very similar signals at the two detectors at a certain time-lag is evidence of anything. So, in particular, Cresswell et al try to show that you can get the same 7ms lag by looking at other things without the actual signal in it. (One of the things they look at is the residual noise from the LIGO data, after subtracting off the black-hole-merger model. This is why it's relevant that the actual best-fit model is better than the "illustrative" one -- because if you subtract off a crude model, what remains will have some real signal in it, so it's unsurprising if it shows some of the same temporal correlations as the actual signal does.) So now Ian Harry shows the cross-correlation graph for the LIGO data before subtracting off the fitted model, and after subtracting the (best) fitted model. The graph you reproduce here is the cross-correlation before subtracting the model; the next one (not reproduced here) is the cross-correlation after subtracting the model, which shows no 7ms spike.
Note that the context makes excellent sense of having a cross-correlation graph at this point in the article, and would make no sense at all of having a raw-LIGO-observation-data graph instead.
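For readers following along, the cross-correlation at issue is straightforward to sketch. In this toy version (synthetic noise with a shared burst, not the Hanford/Livingston data), a common transient arriving 7 samples later at the second "detector" produces a peak in the cross-correlation at lag 7:

```python
import random

random.seed(2)
n = 500
lag_true = 7

# A shared transient buried in independent noise at two "detectors",
# arriving 7 samples later at the second one.
burst = [random.gauss(0, 1) * 3 for _ in range(40)]
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
for t, v in enumerate(burst):
    a[200 + t] += v
    b[200 + lag_true + t] += v

def cross_correlation(a, b, lag):
    # Overlap of the two records when b is slid back by `lag` samples.
    return sum(a[i] * b[i + lag] for i in range(len(a) - lag))

peak = max(range(0, 30), key=lambda lag: cross_correlation(a, b, lag))
print(peak)
```

The peak recovers the true lag, which is the sense in which a spike at 7 ms in the cross-correlation of the two detectors' data is evidence of a common signal.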
This is what the author of the linked photo wrote. I find it quite clear.
"With that all said I try to reproduce Figure 7. First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following:
There is a clear spike here at 7ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method to extract the signal than matched-filtering, but it is completely signal independent, and illustrates how loud GW150914 is."
How the prize-winning figure was produced was much less clear, but I didn't go into the details because I wanted to give a larger perspective on the methods employed rather than bury everyone in tedium.
If you subtract chirped signals from one another with a slight phase shift, do you get a chirped signal that looks like the initial chirp? This is another method to get the signal in the prize winning figure. It does not require templates or hand tuning. That was the point I was trying to make.
(I wish you wouldn't keep calling it "the prize-winning figure". Obviously Nobel Prizes are not in fact awarded for figures, and I do not believe you have any evidence for the implied claim that if the figure had looked different then the LIGO team wouldn't have won the Nobel Prize.)
I'm not sure what point you're now making; it looks to me as if it has nothing to do with what we were talking about before. Are you saying that the LIGO team should have used a different technique to identify gravitational wave events? If so, that claim requires much more evidence than "I thought of another way to do it". Or are you saying that some plot they made is in fact the result of subtracting two related signals with a phase shift and that this is some sort of sign of incompetence or fraud or something? Or what?
In any case, it seems like you've given up defending your claim that the plot from Ian Harry's article is some sort of "original" less-cleaned-up version of the plot you keep calling "the prize-winning figure". Which is just as well, because that claim is indefensible.
Whereas I think you should try harder to explain it, because it's not making any sense to me as a justification for your (plainly incorrect) claim about that figure and right now my leading hypothesis is that you just don't understand the mathematics and/or the physics involved well enough to see what's going on and are trying to obfuscate, and there is a (not very high) limit to how much trouble I am willing to go to to understand something that seems likely not to be worth understanding.
I could, of course, be wrong about this. As I already mentioned, I am very fallible. Feel free to convince me.
I might as well answer your question about chirped signals. If you have a signal that looks like f(t)·sin(t + kt²), where f is a slowly varying function (compared with the chirpy factor), then subtracting a slightly time-shifted copy of it gives you roughly the derivative (times the size of the shift), which when f varies slowly is roughly f(t)·(1 + 2kt)·cos(t + kt²), which is indeed a chirped signal that resembles the initial chirp albeit with some extra variation in amplitude. If you have a phase-shifted version available instead of a time-shifted one, the resemblance is closer because the (1 + 2kt) factor goes away. So yes, subtracting chirpy signals with a small shift gives you similar-ish chirpy signals. Now, how does this give any reason to think that that plot is a less-processed version of "the prize-winning figure"?
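The claim above is easy to check numerically. A minimal sketch (a synthetic chirp with made-up envelope and chirp-rate parameters, chosen only for illustration): subtract a slightly time-shifted copy and compare the difference against the analytic approximation f(t)·(1 + 2kt)·cos(t + kt²) scaled by the shift.

```python
import numpy as np

t = np.linspace(0, 10, 10000)
k = 0.5

def chirp(t):
    # Slowly varying envelope times a chirpy factor.
    return np.exp(-((t - 5) / 3) ** 2) * np.sin(t + k * t**2)

dt = 0.01                           # small time shift
diff = chirp(t) - chirp(t - dt)     # roughly dt * d/dt[chirp]

# Analytic approximation when the envelope varies slowly:
# dt * f(t) * (1 + 2kt) * cos(t + kt^2)
approx = dt * np.exp(-((t - 5) / 3) ** 2) * (1 + 2 * k * t) * np.cos(t + k * t**2)

# The two should be nearly identical up to the neglected envelope-derivative term.
corr = np.corrcoef(diff, approx)[0, 1]
print(corr)
```

The correlation comes out very close to 1, i.e. the difference of two slightly shifted chirps is itself a chirp of the same family, as claimed.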
Nope, not playing any more of that game. If you want to make a point, make it. If you want to hint vaguely that you're smarter than me by posing as Socrates, go ahead if you wish but don't expect my cooperation.
Pretty much everything here seems wrong to me. Some comments, in rough order of appearance:
You call the EHT a multi-billion-dollar project. I don't think I believe you. Can you provide some actual figures?
You say that LIGO-VIRGO "filters their noisy data with a template of what they want to see". _Every_ kind of filtering can, with a sufficient lack of charity, be described that way. (E.g., even the most simple-minded moving-average filter amounts to saying that you're looking for signals with relatively little very high-frequency content, but you expect there to be high frequencies present in the noise.) There is nothing wrong with doing it, either; what matters is how you then analyse the results. If you think LIGO's analysis is wrong, you need to explain how it's wrong; making a complaint that amounts to "they filter their data" is no good; that's what everyone does and there's nothing wrong with it.
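To make the point concrete: a moving-average filter literally is a cross-correlation with a boxcar "template". A quick sketch (plain numpy, illustrative only) showing that the two descriptions are the same operation:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100)   # arbitrary noisy data

w = 5
boxcar = np.ones(w) / w        # the implicit "template" of a moving average

moving_avg = np.convolve(x, boxcar, mode="valid")          # "just smoothing"
template_filtered = np.correlate(x, boxcar, mode="valid")  # "template filtering"

# Identical outputs: the boxcar is symmetric, so convolution == correlation.
print(np.allclose(moving_avg, template_filtered))
```

So "they filter their data with a template" describes every linear filter ever applied; the question that matters is whether the subsequent statistical analysis is sound.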
You say that it's circular reasoning if you say you've confirmed GR by using GR to construct theories and then checking that your observations match the theories. It's not circular reasoning at all, it's how science works. You take a theory, you put some effort into working out what the theory says you should see, and you look at whether you see that or not. Again, it's very possible to do that wrongly -- confirmation bias is a thing -- but a complaint that amounts to "they claimed to have confirmed a theory by doing experiments based on that theory" is no good; that's what everyone does and there's nothing wrong with it.
You say the gravitational wave community has exhibited a "lack of attention to earlier measurements", on the basis that earlier measurements claimed to have found black holes and turned out to be wrong, and LIGO/VIRGO isn't doing the _exact same thing_ that made it possible to check that the earlier claims were wrong, namely combining large numbers of independent verifications. But (1) your description of those earlier measurements doesn't match what's in the article you link to (you say hundreds of independent groups all thought they'd found GWs and they only discovered they were wrong when they combined their results; the article says _one_ researcher claimed to have found GWs, everyone else disagreed, and when they looked they found errors in his analysis), and (2) it is not always the case that when something goes wrong and gets fixed, next time around you should apply the exact same fix in advance; sometimes there are better ways. Repeating an experiment N times reduces the noise by a factor of sqrt(N) (at least for certain common kinds of noise) and there may be ways to reduce it more effectively per dollar spent.
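The sqrt(N) claim above is the standard result for independent noise, and it's easy to verify numerically (toy numbers, nothing to do with any particular experiment):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, N, trials = 1.0, 100, 20000

# Each trial: average N independent measurements whose noise has std sigma.
means = rng.normal(0.0, sigma, size=(trials, N)).mean(axis=1)

empirical = means.std()            # scatter of the averaged measurement
predicted = sigma / np.sqrt(N)     # the sqrt(N) rule: 1.0 / 10 = 0.1
print(empirical, predicted)
```

The empirical scatter matches sigma/sqrt(N) closely, which is why repeating an experiment N times buys you only a factor-sqrt(N) noise reduction, and why other noise-reduction strategies can be more cost-effective.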
You say LIGO fails to use "control variables". This is nonsense. Anything they don't vary is a control variable, and "using control variables" is not a virtue. What you're actually describing in the paragraph beginning "In a well-designed experiment" is a control _group_ or simply a _control_. Some experiments use controls, some don't; it's not clear to me what it would _mean_ to use a control in the case of LIGO, and it seems to me that you could consider _all the times it doesn't detect anything_ to constitute control measurements.
You say LIGO "had announced 50 detections" as of 2019-12 but as of 2020-12 "are only standing by 10 of those". But you don't quote what they actually said, or provide any links. The "Open data from the first and second observing runs of Advanced LIGO and Advanced Virgo" paper published on 2020-12-25 says that those runs produced "11 confident detections and 14 marginal triggers". That doesn't look to me like a claim of 50 detections. Could you please be more specific about what they claimed in 2019-12 and what they said in 2020-05? I am betting that if there is anything resembling a "50 detections" claim, it was something like "50 candidate events" and confirming only 10 of them is in no way evidence of anything wrong.
You say "They expected to see at least a few black hole collisions and at least one neutron star collision per month since August 2019. It has been 9 months". (Obvious implication: they aren't seeing what they said they would see.) According to https://www.ligo.caltech.edu/news/ligo20200326, when they suspended their third observing run near the end of 2020-03 (because of COVID-19) they had seen 56 detections (I don't know whether this means candidates, fully confirmed detections, or what) in the ~400 days of run 3. That's about four per month. Seems to fit their prediction just fine.
You complain, again, about LIGO's use of "templates" and compare a couple of graphs to show how different less-filtered data look compared with their published plots. But your plot purporting to be "the same data ... with only a whitening filter and a Fourier transform" is no such thing. Look at the y-axis label: "Cross-correlation". This is the cross-correlation between the Hanford and Livingston signals. It is nothing remotely like the raw data, nor should it be.
So far as I can see, in the plot you show ("This was what LIGO published and used to win a Nobel Prize") the data in the top frames is _not_ the result of any sort of template-filtering at all. (I think there's some bandpass filtering, which is absolutely routine, and that's it.)
You quote some accusations of "misconduct" on the basis that "pieces of that figure were illustrative and the detection claim is not based on that plot". The thing that's "illustrative" is the _second_ row in the "Nobel Prize" plot, and the point of the remark about how it's only "illustrative" is that the properly-done fit (which _was_ the basis for the detection claim) matches the observed signals _better_. The point of the "illustrative" plots is to let you see by eye that the actually-observed signals have the right sort of shape.
You object to the LIGO researchers' response to the Copenhagen objectors because they said (in your paraphrase) "you forgot to use an FFT windowing function even though that is a mistake no physicist would ever make". Well, sometimes physicists make mistakes you wouldn't think they would. The relevant question here is: _Did_ Cresswell et al fail to do it, or didn't they? Green and Moffatt say their results look like they did fail. I haven't seen any rebuttal to that.
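For context, the windowing issue is about spectral leakage: taking an FFT of a finite chunk of data implicitly multiplies it by a rectangular window, which smears any off-bin tone across the whole spectrum; applying a taper such as a Hann window suppresses that leakage. A minimal illustration (synthetic tone, parameters chosen only to make the effect visible):

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 10.5 * n / N)   # tone halfway between FFT bins -> leakage

rect_spec = np.abs(np.fft.rfft(x))                 # no window (rectangular)
hann_spec = np.abs(np.fft.rfft(x * np.hanning(N))) # Hann-windowed

# Spectral energy far from the tone (bins 40 and above, vs. the 10.5-bin tone):
rect_leakage = rect_spec[40:].sum()
hann_leakage = hann_spec[40:].sum()
print(rect_leakage / hann_leakage)   # the unwindowed spectrum leaks far more
```

Forgetting the window leaves correlated leakage spread across the whole spectrum, which is exactly the kind of artifact Green and Moffatt say contaminated the Cresswell et al analysis.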
You say that the LIGO researchers have no way to distinguish distant gravitational waves from other possible sources nearer-by. Well, obviously one can never prove that an observation doesn't come from some currently unknown source producing signals by currently unknown means, but that criticism applies equally to _all_ observational science. We think we see a supernova a long way off, but maybe it's actually some thing much nearer to us that _just happens_ to have happened exactly in between us and the star we think went supernova. Sure, it's possible, but we have a simpler explanation! And so it is for LIGO. If you have a specific alternative theory for what LIGO has been detecting, let's hear it. (You kinda-sorta, I assume mostly frivolously, mention a possible string of events. "After all, if something dark and mysterious happens in deep space and causes the sun to burp and that causes the Earth's core to gurgle and that causes a lake to heat up which causes a thunderstorm which causes a lightning strike which hits a Schumann resonance and LIGO detects that ...". But obviously that's not relevant because (1) it has no details that would enable us to tell what sort of observations such a sequence of events might produce and (2) something _that_ local would not produce results that would fool LIGO except by extreme coincidence; that's why they have multiple detectors thousands of miles apart.)
You complain that the black-hole events LIGO claims to have found "are absurdly larger than what they had expected to see based on other measurements and based on the theory of black holes". I would like some details substantiating that complaint. I remark that LIGO would _always_ tend to detect larger-mass black-hole events for the obvious reason that they produce stronger gravitational waves and LIGO needs to be staggeringly sensitive to detect anything at all.
Your theoretical objections to the whole mode of operation of LIGO looks all wrong to me, in multiple ways, but I am not a general-relativist and won't get into that particular argument (but I remark that if you were right, that seems like the sort of error that I would expect The Scientific Establishment to pounce on instantly, so the fact that LIGO is generally a respectable high-prestige operation is evidence against).
And, while we're talking about the actual physics, the following sentence seems to me like evidence of hopeless confusion (on your part, I'm afraid, not that of the scientific establishment): "there will be discontinuities between transverse and longitudinal waves which might be measured as a sort of friction coefficient [...] In olden times, one would call these discontinuities magnetic monopoles, but today, we call them all sorts of things -- displacement currents, positrons, electrons, matter, antimatter, ... black holes". I don't think this makes contact with reality at any point.
Somewhere around here, I lost the will to live, so I've paid less detailed attention to the end than to the beginning.
(I am very fallible and the chances are that there is at least one mistake in what I've written above. But there would need to be one hell of a lot of mistakes for your complaints about gravitational wave detection to be convincing to me.)
(I did come to this topic with an ax to grind, but I tried to be a bit less polemical in the article I wrote above.)
In 1991, Congress agreed to fund the design phase of LIGO to the tune of 23 million dollars. The construction phase ended in 2002 and was funded by the NSF with an initial grant of 400 million dollars, making it the largest project ever funded by the NSF. The first version didn’t work, so more money was spent through other grant requests. I can’t find a tally of how far over budget they went or the other grants they were able to secure. The grand total between 1990 and 2010 could’ve been a billion dollars or more.
Over the 5 years between 2010 and 2015, another 620 million dollars was spent on the “Advanced LIGO upgrade”. More money from international collaborators also flowed in, but that is hard to tally.
Some first results were announced with great fanfare starting in 2017 and they plan to continue spending money on LIGO until it reaches its “design sensitivity” in 2021.
After announcing first results, the two project leaders collected millions of dollars in prize money.
Taken altogether, you have a ~2 billion dollar project which has supported probably a couple thousand professors, students, and scientific equipment manufacturers since the 1990s. On average that would be 35k per year per person - unevenly distributed. I find this appalling not because I hate scientists, but because I don’t believe in their interpretation of their experiment and because I think the education the people on this project received about how science should be done was very bad. They would’ve been better off paying 2000 people to just think about things carefully for 30 years rather than organizing them into an anthill that builds nonsense. Meanwhile, nobody in the scientific community can openly criticize LIGO, because if you want to get a grant, you will probably need approval from a LIGOnaut.
I didn't ask how much was spent on LIGO. I asked how much was spent on EHT. Those are very different projects.
(So I'm afraid everything you wrote above was irrelevant to the question I asked. I regret not making it clearer, though I confess I'm not quite sure how I could have made it clearer since what I wrote was "You call the EHT a multi-billion-dollar project. I don't think I believe you. Can you provide some actual figures?".)
Also, in case it wasn't obvious, the question about the cost of EHT was very much the least important part of what I wrote; obviously it doesn't make much difference to the rightness or wrongness of your claims about LIGO whether you got the size of the EHT project right or not.
I'm sorry, I didn't read as closely as I should've. You wrote a lot and I am happy to address individual points, but not all of them at once. I do not have the numbers on hand for EHT, but since it combined the efforts of ~10 radio-frequency telescopes, if EHT kept the telescopes in operation when they would've otherwise lost funding, it may have been quite expensive. The telescopes themselves have surely cost billions to build and operate. One was located at the South Pole.
I find EHT absolutely absurd for reasons that I didn't go into in this article, but I gave a talk about that project at IdaLabs in Berlin in March.
It's unclear if you're claiming that you have actual figures that show the EHT actually cost billions of dollars or if you're claiming that you think it's likely, but just a guess, that it kept all those radio telescopes "in business", or if you're taking back your claim that it cost billions of dollars.
The telescopes are not cheap, even if they are supported by the work of many institutes. The data center alone for this one is 80 million. For the telescope itself:
5,030 person hours have been spent working on site, by 27 dedicated team members (including 130 hours by the Student Army, a team of 7 students from Curtin University) since January 2012 to ‘build’ the telescope
7 km (4.3miles) of trenching has been dug
10 km of low voltage electrical cable has been laid
16 km of fibre optic cable has been laid – by hand
42 km of coaxial cable has been dragged and laid- by hand
9 tonnes of mesh (400 sheets) has been used to create the antenna bases – each lifted and placed by hand
4,608 RF connectors have been used – each secured by hand
It did not seem like you were making such an argument, nor was I asserting that you were making such an argument.
The telescope could have cost umpteen trillions of dollars and that fact alone would not support your claim that EHT cost billions of dollars.
I'm not sure how to understand the fact that the previous statement is obvious and yet you still made your comments. I feel like the most charitable interpretation that I can come up with still does not leave a good impression of your overall argument.
I'm not harping on this apparent mistake for no reason. It's just that of all the things described by gjm this seems like it might be the easiest to explicate.
It is the easiest to explain because gjm's other points demonstrated a deeper misunderstanding of what I view as fundamental epistemological issues. What is very concerning about EHT and LIGO is that they are used as training facilities for data scientists who apply these methods in other fields. If these flaws see the light of day, an entire industry is threatened - not just black hole astronomy.
I wrote an article about this for an introductory level audience kirstenhacker.wordpress.com/2020/05/28/how-beliefs-change-what-we-see-in-starlight/
So, you seem to continue to use a rhetorical device wherein you do not directly address the points that your interlocutors are bringing up and just answer the question you wish was asked.
For example, this comment I'm replying to here has almost zero bearing on what I said. Saying EHT is bad is not a way to address the argument that EHT did not cost billions of dollars. EHT may very well be bad, but that has no bearing on the subject at hand.
In your previous comment to me in this thread you did the same thing.
That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall.
In other words, there are too many unknowns and counterfactuals for that to even begin to be a useful way of calculating how much EHT cost.
In a way it's almost beside the point. You made the positive claim, seemingly without any solid facts, that it cost billions of dollars. When you were called on it, a way to increase others' confidence in your arguments and the facts you present would have been to say something like "you know, I shouldn't have left that in there, I withdraw that statement".
By not doing so and sticking to your guns you increase the weight others give to the idea that you're not being intellectually honest.
Your current tack might be useful in political rhetoric in some quarters, but it doesn't seem like it will be effective with your current audience.
" That is a way to make a rough estimate in the same way that providing the construction costs for a whole shopping mall is a way of providing a rough estimate of how much it costs for me to walk in the door of said mall. "
So you think that I grossly underestimated the cost by multiplying the cost of one of the cheaper facilities by the number of facilities? You are probably right, since some of the facilities were in rather inhospitable climes (the South Pole) -- and that would surely add to their cost.
I am most certainly sticking to my guns. I've seen no counter-arguments here that hold even a teaspoon of water.
I've got you insisting that my estimate of the project cost is dishonest because I don't have a detailed accounting of all ten facilities.
I've got gjm insisting that adding and subtracting uncorrelated errors to reduce the error of a measurement is a valid way to do error propagation. (he wrote this in the comments on my The New Scientific Method post)
I've got gjm insisting that organizing randomly scrambled phase data according to 'weirdness' is a valid experimental technique. (his comments on this can be found in my New Scientific Method post)
and I've got the moderator, Oliver, defending gjm's reasoning and insisting that my five articles on the practice of the scientific method do not deserve the 'scientific methods and philosophy' tag for which he is responsible. I believe that he considers himself to be an expert in 'many worlds' quantum mechanics.
In short, since I've arrived in this space dedicated to rationality, I've encountered three, rather hostile people who have managed to team up to give me a Karma of -87 by downvoting all of my comments and posts. I'd like to find out more about what motivates these people.
Dustin's point, as I understand it, is not that you overestimated or that you underestimated, nor that you didn't give a detailed accounting of all the facilities involved, it's that you're confusing two completely different questions. (1) How much did the EHT project cost? (2) How much did the telescopes used by the EHT project cost to build and run? You made a claim about #1 and when challenged on it offered some numbers relating to #2.
You do say one thing that purports to link them: "... if EHT kept the telescopes in operation when they would've otherwise lost funding ...". But that's one heck of a big if and I know of no reason to think that EHT kept any telescopes in operation that would otherwise have lost funding. And even if it did, that wouldn't justify including the cost of building the telescopes in your estimate of the cost of EHT, unless the telescopes in question were never used for anything other than EHT.
(One journalistic outlet has given a concrete estimate for the cost of the EHT project. They say 50 to 60 million dollars. I don't know where they got that estimate or how much to trust it, but it sounds much much more believable to me than your "billions of dollars".)
Whenever people come to vastly different estimates of how much a project costs, one that calculates opportunity cost and another that makes a naive estimate of accounting costs, there is a lesson to be learned about the importance of multidisciplinary education that trains people to think along multidimensional lines. This would prevent discussions like this one from happening so often.
Your descriptions of what I said in the comments on "The New Scientific Method" are not accurate. They are like your purported quotations from Katie Bouman's talk (though at least you didn't put them in quotation marks this time): in condensing what I actually said into a brief and quotable form, you have apparently attempted to make it sound as silly as possible rather than summarizing as accurately as possible. I think you shouldn't do that.
(My description in terms of "weirdness" was meant to help to clarify what is going on in an algorithm that you criticized but apparently hadn't understood well. It turns out that it was a mistake to try to be as clear and helpful as possible, rather than writing defensively so as to make it as difficult as possible for someone malicious to pick things that sound silly.)
I already told you (in comments on that other post) what motivates me: bad science, and especially proselytizing bad science, makes me sad. It makes me especially sad when it happens on Less Wrong, which aims to be a home for good clear thinking. Having seen the previous iteration of Less Wrong badly harmed by political cranks who exploited the (very praiseworthy) local culture of taking ideas seriously even when they are nonstandard or appear bad at a first glance, I am not keen to leave uncriticized a post that is confidently wrong about so many things.
I don't know what anyone else may have done, but I at least have not downvoted all your comments and posts. I have downvoted some specific things that seem to me badly wrong; that's what downvoting is meant for. (As it happens, it looks to me as if you have downvoted all my comments on your posts.)
When noneconomical language is used to obfuscate, it is necessary to paraphrase in order to restore clarity to the discussion and make the simple, silly, underlying errors easier to see.
I have made 6 posts on Less Wrong about physics experiments that I find to be particularly bad in their understanding of the scientific method and in their experimental design. You have chosen to defend two of those experiments at length. That you equate your defense of these experiments as an attack on 'bad science' (i.e. me) suggests that you may be suffering from cognitive dissonance and you are using projection to comfort yourself.
A lot of this reads like you are trying to apply the structure of an experiment to a thing that is, um, not an experiment. Like, we all learn the steps of an experiment in school (where they often incorrectly call the experimental method "the scientific method"). But there are whole sciences, like astronomy, and cosmology, and geology, that don't do experiments; they just make observations and analyze them in the context of what we already know from experiments in other areas of science. That is what LIGO does. We can't do experiments on gravitational waves, because we don't have the capacity to produce gravitational waves. All we can do is observe them. And that is still a perfectly valid scientific endeavor. And in particular, it is a scientific endeavor in which the notion of a "control" doesn't seem to make a whole lot of sense. Now, I don't have the technical competence to evaluate these kinds of high-level physics things for myself, I don't know the math of general relativity, so I'm not going to try. But I generally trust the scientific community, and I'm not going to update much on a blog post that seems to misunderstand what these things are trying to do.
Thank you for explaining what confused you about how I presented this topic. I tried to draw these themes together in the final paragraph:
"The best that science can do is to create a controlled experiment in which changing one variable causes another variable to change. Since this is not possible in astronomy, we must rely on astronomers to exercise restraint when they interpret their data. That restraint appears to be sorely lacking in the present day community."
But maybe I should emphasise this earlier. The social pressures within the LCDM cosmology establishment are rather unusual and they are elaborated within the links in the last paragraph.
Figuring out that past detections were false seems like a case of trying to replicate earlier findings, i.e. doing things right.
LIGO has no way to distinguish its signature from some more mundane, local occurence.
Why not? This?:
If you detect a wave at two locations and you know its propagation speed, you can determine the direction from which the wave came, but not the distance to the thing that caused the wave.
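The geometry behind that statement can be made concrete: a time delay Δt between two detectors separated by a baseline d constrains the arrival direction through cos θ = cΔt/d, but says nothing about the source's distance. A toy calculation with LIGO-like numbers (the ~3000 km baseline and 7 ms delay below are round illustrative values, not precise figures):

```python
import math

c = 299_792_458.0      # speed of light, m/s
baseline = 3.0e6       # detector separation, ~3000 km (rough, illustrative)
dt = 7.0e-3            # observed arrival-time delay, s

# Angle between the baseline and the wave's propagation direction.
cos_theta = c * dt / baseline
theta_deg = math.degrees(math.acos(cos_theta))
print(theta_deg)
```

Note that every source lying on the cone at angle θ around the baseline, at any distance, produces the same delay; that is why two detectors constrain direction (partially) but not distance.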
APS journals charge two thousand dollars
After putting effort into debunking this stuff, I do ask myself why I bother and I think that I want to demystify the pop-sci nonsense used to lure young people into physics servitude. I think they might find better things to do with their time and I’d like to help them avoid the mistakes I made.
Thanks for the interesting read. I absolutely lack the background to comment on your conclusions, but your post made me remember some questions I had on Black Holes that no physicist I talked to could answer, I never would have guessed the field had detractors.
If you don't mind me asking, are you also a climate change skeptic?
There is a TED talk by (actually, an interview with, as part of TED2019) Sheperd Doeleman, head of the EHT collaboration, whose transcript you can read on the TED website. It doesn't say anything even slightly like that. Is there some other TED talk by her that you're referring to? (I can't find any evidence that there is another.)
The only other thing I can find that you conceivably might be referring to is a TEDx talk by Katie Bouman, from 2017 (before the EHT picture was produced). Her title is "How to take a picture of a black hole" and it includes a prediction of roughly what the picture might be expected to look like, and includes the words "my role in helping to take the first image of a black hole is to design algorithms that find the most reasonable image that also fits the telescope measurements". Maybe that's what you mean?
She doesn't say "exactly", or even approximately, that applying the same pipeline to random input would generate a similar result. Quite the reverse; let me quote her again. "What would happen if Einstein's theories didn't hold? We'd still want to reconstruct an accurate picture of what was going on. If we bake Einstein's equations too much into our algorithms, we'll just end up seeing what we expect to see. In other words, we want to leave the option open for there being a giant elephant at the centre of our galaxy." She says, in other words, that a key consideration in their work was not doing exactly what you say she said they did.
(Shortly after that bit there is a slide that, if wilfully misunderstood, might seem to fit your description. Its actual meaning is pretty much the reverse. I won't go into details right now because I don't know whether you saw that slide and misunderstood it; I don't know whether this is the TED talk you're referring to at all. But I guess this is it.)
Incidentally: Katie Bouman was a PhD student, was not an astronomer, and was certainly not the leader of the EHT project. The project was already happening and already funded, but I suppose you could call her talk "selling the project to the public" in the sense in which any attempt to describe anything neat one's doing is "selling the project". Bah.
So, you did mean the Bouman talk I found. As I say, she wasn't "the leader of that project" and she did not say what you say she did.
The particular things that you claim there are "absurd" are not absurd, it's just that you don't understand the procedures they describe and are taking them in the most uncharitable way possible.
(I haven't listened to the CalTech talk so can't comment with any authority on what Bouman meant by all the things you quote her as having said there, but it is absolutely not true that "any single one of the statements would disqualify an experiment", and amusingly the single statement you choose to attack there at greatest length is the most obviously not-disqualifying. You say, and I quote, "Most sensible researchers would agree that if the resolution of your experiment is equivalent to taking a picture of an orange on the moon, this means that you cannot do your experiment." You appear to be arguing that if something sounds impossibly hard, then you should just assume that it is, literally, impossibly hard and that it can never be done. Once upon a time, "equivalent to speaking in New York and being heard in Berlin" would have sounded like it meant impossibly hard. Once upon a time, "equivalent to adding up a thousand six-digit numbers correctly in a millisecond" would have sounded like it meant impossibly hard. Some things that sound impossibly hard turn out to be possible. The EHT folks claim that taking a picture with orange-on-the-moon resolution turns out to be possible. Of course they could be wrong but they aren't obviously wrong; what they're claiming breaks no known laws of physics, for instance. And obviously they aren't unaware that getting a picture of an orange on the moon is very difficult. So I think it's downright ridiculous to say that their project is unreasonable because they're trying to do something that sounds impossibly hard.)