Friday, November 16, 2018

New paper claims that LIGO’s gravitational wave detection from a neutron star merger can’t be right


Two weeks ago, New Scientist warmed up the story about a Danish group’s claim that the LIGO collaboration’s signal identification is flawed. The story goes back to a paper published in the summer of 2017.

After the publication of this paper, however, the VIRGO gravitational wave interferometer came online, and in August 2017 both collaborations jointly detected another event. Not only was this event seen by the two LIGO detectors and the VIRGO detector, several telescopes also measured optical signals that arrived almost simultaneously and fit the hypothesis that the event was a neutron-star merger. For most physicists, including me, this detection removed any remaining doubts about LIGO’s event detection.

Now a few people have pointed out to me that the Journal of Cosmology and Astroparticle Physics (JCAP) recently published a paper by an Italian group which claims that the gravitational wave signal of the neutron-star merger event must be fishy:

    GRB 170817A-GW170817-AT 2017gfo and the observations of NS-NS, NS-WD and WD-WD mergers
    J. A. Rueda et al.
    JCAP 1810, 10 (2018), arXiv:1802.10027 [astro-ph.HE]

The executive summary of the paper is this. They claim that the optical signal does not fit with the hypothesis that the event is a neutron-star merger. Instead, they argue, it looks like a specific type of white-dwarf merger. A white-dwarf merger, however, would not result in a gravitational wave signal strong enough to be measurable by LIGO. So, they conclude, there must be something wrong with the LIGO event. (The VIRGO measurement of that event has a signal-to-noise ratio of merely two, so it doesn’t increase the significance all that much.)

I am not much of an astrophysicist, but I know a few things about neutron stars, most notably that they are more difficult to model theoretically than you may think. Neutron stars are not just massive balls that sit in space. They are rotating hot balls of plasma with pressure gradients that induce various phases of matter. And the equation of state of nuclear matter in the relevant ranges is not well understood. There’s tons of complex and even chaotic dynamics going on. In short, it’s a mess.

In contrast to this, the production of gravitational waves is a fairly well-understood process that does not depend much on exactly what the matter does. Therefore, the conclusion that I would draw from the Italian paper is that we are misunderstanding something about neutron stars. (Or at least they are.)

But, well, as I said, it’s not my research area. JCAP is a serious journal, and the people who wrote the paper are respected astrophysicists. It’s not folks you can easily dismiss. So I decided to look into this a bit.

First, I contacted the spokesperson of the LIGO collaboration, David Shoemaker. He is the same person who last year answered my question about the collaboration’s response to the Danish criticism by merely stating that he has full confidence in LIGO’s results. Since the Danish group raised the concern that the collaboration suffers from confirmation bias, this did little to ease my worries.

This time I asked Shoemaker for a comment on the Italian group’s new claim that the LIGO measurement conflicts with the optical measurements. It turns out that his replies landed in my junk folder until I publicly complained about the lack of response, which prompted him to try a different email account. Please see the update below.

Second, I noticed that the first version of the Italian group’s paper that is available on the arXiv heavily referenced the Danish group.


Curiously enough, these references seem to have entirely disappeared from the published version. I therefore contacted Andrew Jackson from the Danish group to hear whether he had something to say about the Italian group’s claims and whether he had heard of them. He didn’t respond.

Third, I contacted the corresponding author of the Italian paper, Jorge Rueda, but he did not correspond with me. I then moved on to the paper’s second author, Remo Ruffini, which proved more fruitful. According to Wikipedia, Ruffini is director of the International Centre for Relativistic Astrophysics Network and co-author of 21 textbooks about astrophysics and gravity.

I asked Ruffini whether he had been in contact with the LIGO collaboration about their findings on the neutron star merger. Ruffini did not respond to this question, though I asked repeatedly. When I asked whether they have any reason to doubt the LIGO detection, Ruffini referred me to (you’ll love this) the New Scientist article.

I subsequently got Ruffini’s permission to quote his emails, so let me just tell you what he wrote in his own words:

“Dear Sabine not only us but many people are questioning the Ligo People as you see in this link: the drama is of public domain. Remo Ruffini”

Michael Brooks, btw, who wrote the New Scientist article, knew about the story because I had written about it earlier, so it has now come full circle. After I informed Ruffini that I write a blog, he told me:

“we are facing the greatest dramatic disaster in all scientific world since Galileo. Do propagate this dramatic message to as many people as possible.”

Yo.

Update: Here is the response from Shoemaker that Google pushed into the junk folder (not sure why). I am sorry I complained about the lack of response without checking the junk folder - my bad.

He points out that there is a consensus in the community that the gravitational wave event in question can be explained as a neutron-star merger. (Well, I guess it’s a consensus if you disregard the people who do not consent.) He also asks me to mention (as I did earlier) that the data of the whole first observing run is available online. Alas, this data does not include the 2017 event that is under discussion here. For this event only a time-window is available. But for all I can tell, the Italians did not even look at that data.

Basically, I feel reassured in my conclusion that you can safely ignore the Italian paper.

Thursday, November 15, 2018

Modified gravity, demystified [video]

Here is the promised follow-up on my earlier video about dark matter. This time I explain how Modified Newtonian Dynamics gives rise to flat rotation curves and what’s the deal with the Tully-Fisher relation. I fixed my make-up issues, but now I put the microphone in the wrong place, hence the noise from my shirt. Sorry about that. Click on “CC” in the bottom bar to get English captions.
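For readers who prefer text to video: the core of the flat-rotation-curve argument is a short calculation (the standard deep-MOND reasoning, sketched here rather than transcribed from the video). Below the acceleration scale $a_0$, Modified Newtonian Dynamics replaces the Newtonian acceleration $a_N = GM/r^2$ by $a = \sqrt{a_N a_0}$. Setting this equal to the centripetal acceleration of a circular orbit gives

$$\frac{v^2}{r} = \frac{\sqrt{G M a_0}}{r} \quad \Rightarrow \quad v^4 = G M a_0 \,,$$

so the orbital velocity $v$ becomes independent of the radius $r$ (a flat rotation curve), and the scaling $v^4 \propto M$ is the baryonic Tully-Fisher relation.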


The problem with the language of the automatic transcription disappeared as spontaneously as it had appeared. As you can see, if it works, it works remarkably well, except that it’s missing all punctuation.

Update: Now available with German and Italian captions. Click on gear icon/subtitles in bottom bar to change language.

Sunday, November 11, 2018

Guest Post: Phillip Helbig reviews “Lost in Math”

[Phillip Helbig worked in cosmology and gravitational lensing at Hamburg and Jodrell Bank Observatories and the Kapteyn Astronomical Institute. Although no longer employed in academia, he regularly attends conferences and writes book reviews for The Observatory, as well as the occasional journal paper. Phillip is a regular commenter on this blog.]

I've read a huge number of popular-science books, and my first impression is that Sabine's book is very well written. One could easily think that Sabine is a native speaker (or, rather, a native writer) of English. The style is breezy without rambling, and direct quotations make it clear what the illustrious interviewees actually said, without any filter of interpretation (but see below for a caveat). Sabine's own position is very clear; this is almost an op-ed. Whether or not one agrees with her, this approach is preferable to introducing one's own biases into what might appear to the uninitiated as an objective description.

Enough praise; now for the critique. Let me emphasize, though, that I agree with everything which I don't discuss here, which is most of the book. In the interest of stimulating discussion, I'll concentrate on those few areas where I see things differently.

It is not always clear what needs to be explained. In discussions of fine-tuning and so on, one often reads about numerical coincidences, which imply that two numbers are roughly the same, but also about small (or large) numbers, which allegedly also need an explanation. (Since the inverse of a large ratio is a small ratio, I will speak only of small numbers in what follows.) It needs to be clear what is even potentially puzzling: it is always ratios near 1. In other words, if the smallness of some quantity is the result of a near cancellation, then that implies a ratio near 1 of the quantities which almost cancel; if the number is just small in relation to some other quantity because it has nothing to do with that other quantity, then it certainly needs no explanation.
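To put the same point in symbols: if a small number $c$ arises from a near cancellation, $c = a - b$ with $|c| \ll |a|$, then

$$\frac{b}{a} = 1 - \frac{c}{a} \approx 1 \,,$$

so the apparent puzzle of the small number is really the puzzle of a ratio near 1; a number that is small merely in comparison to some unrelated quantity involves no such ratio.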

Another aspect of the presentation I disagree with is the claim that the standard model has been "souped up" with dark matter and dark energy, as if these were some sort of epicycles, fudge factors brought in so that theory and observations match. On her blog, Sabine has often pointed out that general relativity says nothing about the sources of gravitation, so while dark matter might be interesting or even mysterious because we don't know what it is, it is not some sort of addition to general relativity. The same goes for the cosmological constant. Yes, Einstein initially introduced it as a fudge factor, and later abandoned it, but the universe is independent of the contingent history via which we have learned about it. From a mathematical point of view, one could just as easily have included the cosmological constant from the beginning. Indeed, in other areas of physics, what is not forbidden actually happens, and if someone claims that something doesn't happen, that some quantity is 0, etc., then the burden of proof is on the person making the claim. Actually, what is interesting is that no fudge factors have had to be introduced. Despite a huge amount of cosmological data, a model with just a few parameters (all of which were known even back when there was almost no data), derived when there were some data but considerably fewer than now, still fits the observations.
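In symbols: the cosmological constant enters Einstein's field equations as

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu} \,,$$

and by Lovelock's theorem the $\Lambda$-term is, besides $G_{\mu\nu}$ itself, the only term that can appear on the left-hand side in four dimensions (a symmetric, divergence-free tensor built from the metric and its first two derivatives). Setting $\Lambda = 0$ is therefore the extra assumption, not including the term.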

I have tremendous respect for George Ellis. However, I don't always agree with him, even on matters of science. I think that Sabine lets him off the hook too easily because they seem to agree on many issues. Ellis dismisses the idea that we could be living in a simulation, but is careful to point out that science cannot disprove the existence of God. One could just as well say that we cannot disprove that we are living in a simulation and dismiss the idea of God. Strictly speaking, one can disprove neither, but can use various arguments to discuss the probabilities of both. Also, after criticizing certain ideas as being non-scientific, Ellis says of one of his own ideas (that nothing is physically infinite): "There's no way I can prove it.... But we should use it as a principle." This isn't the place to argue with Ellis; my point is that if Sabine could be obstinate enough to stay in Weinberg's office even after he had essentially asked her to leave, she should have called out these two obvious contradictions on the part of Ellis. I think that this is a good example of confirmation bias. (Interestingly, Tegmark is also critical of the idea of physical infinity but, in contrast to Ellis, is a strong proponent of the multiverse.)

My main disagreement with Sabine concerns fine-tuning. I think that this is due to an unnecessary attachment to probability. People normally think that fine-tuning and low probability go hand in hand. As Sabine points out, though, without knowledge of the underlying probability distribution, one cannot say whether an anthropic explanation involving the multiverse leads to likely values. But is that even necessary? One can discuss fine-tuning for life, in the sense that slight changes of various parameters (within the otherwise allowed range) would lead to a universe incompatible with life. There can be absolutely no debate that the universe is fine-tuned in this sense. Whether the values we observe for the physical constants are likely in some sense is unknown, but also irrelevant. One must be careful not to confuse fine-tuning in the particle-physics sense of lack of naturalness (discussed above) with the case of values being within a small region of possible parameter space (regardless of how likely that small region is by some definition). As an aside, it is not true, as Sabine claims on p. 114, that fine-tuning goes away if one considers many changes simultaneously. A good discussion of fine-tuning, which also rebuts many common objections, including that one, can be found in the book by Lewis and Barnes.

It is also beside the point whether, also mentioned on p. 114, somewhere in parameter space there is another region compatible with life; the point is that most of it is not. A good comparison is with the "coincidence" that the Earth is just at the right distance from the Sun for the existence of life. The explanation is simple: there are many solar systems with planets at various distances from their stars. By chance, some will be at the right distance for life. It is also completely irrelevant how likely these are, as long as the probability is non-zero. The same goes for the multiverse. Given the multiverse (perhaps a daunting proposition), fine-tuning is not puzzling at all. A good case for the multiverse is made by Max Tegmark. (Lewis and Barnes mention the multiverse in a book about fine-tuning; Tegmark does the opposite.) I think that most examples of fine-tuning are real; again, the book by Lewis and Barnes is a good summary. In one famous case of alleged fine-tuning I disagree, and that is the flatness problem. I wrote an entire paper about that, so I won't say much about it here. I'm also sure that most of the people who have thought much about the multiverse don't make this simple mistake.

Suppose I flip a coin a hundred times and it comes up heads every time. It seems that Sabine would say that this outcome is just as probable as any other outcome (which is true) and therefore that there is no reason to assume that the coin is not fair (which is false). I think that most people would disagree with Sabine, and I agree with those people. If one must discuss probabilities in conjunction with fine-tuning, or vice versa, what is relevant is not the probability per se, but rather the probability relative to some situation which is important to us.
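To make the coin example quantitative: any specific sequence of 100 fair-coin flips has probability $2^{-100}$, so "all heads" is indeed no less probable than any other particular sequence. But the relevant comparison is between hypotheses: under a coin that always lands heads, the observed sequence has probability 1, so the likelihood ratio in favor of the biased coin is $2^{100} \approx 10^{30}$. It is this ratio, relative to an alternative we care about, that makes the outcome suspicious, not its absolute probability.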

A common theme in Sabine's book is that fundamental physics, having become "lost in math", has not made much progress in recent decades. This is a correlation, but is there a causation? Perhaps the problems are just really hard theoretically, and experimentally nothing is accessible at the moment. In neither case would this be the first time that something like this has happened. Thus, while I sympathize with the main theme of the book, I don't think that there is a watertight case for the claim.

Perhaps other approaches will be more successful, but the burden of proof is on those who make such claims. Yes, maybe progress is difficult due to lack of funding for those thinking outside the box, and without funding, it is difficult to prove whether an alternative approach would pay off.

Does beauty distract us from truth? Perhaps, in some cases, but in these I claim (probably agreeing with Sabine here) that it is not beauty per se, but rather a false sense of beauty. Aesthetics in some sense, perhaps something similar to Pirsig's "quality", has been a useful guide in some cases. At the end of the day, though, the route to truth doesn't matter; a successful theory is a successful theory regardless of the path through which it was arrived at.



Some comments from me:

First, as you can see, Phillip unfortunately used his review to propagate his own notion of fine-tuning. I therefore want to warn you that this is not the way most physicists use the word and therefore not the way I use the word in my book. Please don't let yourself get confused.

Second, Ellis correctly points out that the simulation hypothesis is not science because you cannot disprove it. This is totally in line with him saying that science cannot disprove god. And, yes, Ellis puts forward metaphysical principles, but in contrast to the other physicists I spoke to, he is aware that these are unprovable.

Third, I discuss the issue of fairness in the interview with Weinberg, using the example of poker. It's a useless objection because we have as little idea of what counts as "fair" in the multiverse as we have of the probability distribution. Neither notion makes sense scientifically.

Fourth, I address the often-raised claim that progress has slowed down because "the problems are just hard" right at the beginning of the book. To sum it up once again: no one can tell how much of the slow-down is due to the problems being harder, but certainly using flawed methodologies will not help.

Fifth, I don't think there is anything like a "false sense of beauty". You decide what is beauty for yourself. Just don't mistake your sense of beauty for a scientific criterion.

Saturday, November 10, 2018

Self-driving car rewarded for speed learns to spin in circles. Or, how science works like a neural net.

When I write about problems with the current organization of scientific research, I like to explain that science is a self-organizing, adaptive system. Unfortunately, that’s when most people stop reading because they have no idea what the heck I am talking about.

I have now realized there is a better way to explain it, one which has the added benefit of raising the impression that it’s both a new idea and easy to understand: Science works like a neural network. Or an artificial intelligence, just to make sure we have all the buzzwords in place. Of course that’s because neural networks really are adaptive systems, and neither of these is a new idea, but then even Coca-Cola sometimes redesigns its bottles.

In science, we have a system with individual actors that we feed with data. This system tries to optimize a certain reward function and gets feedback about how well it’s doing. Iterate, and the system will learn ways to achieve its goals by extrapolating patterns in the data.

Neural nets can be a powerful method to arrive at new solutions for data-intensive problems. However, whether the feedback loop gives the desired result strongly depends on how carefully you configure the reward function. To translate this back to my going on about the malaises of scientific research, if you give researchers the wrong incentives, they will learn unintended lessons.

Just the other day I came across a list of such unintended lessons learned by neural nets. Example: Reward a simulated car for continuously going at high speed, and it will learn to rapidly spin in a circle:
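To see how little it takes, here is a minimal toy sketch of the same failure mode (my own illustration, not the actual setup from that list; all names and parameters are invented). A “car” drives at constant speed in a circular arena and is rewarded for its speed at every time step. Driving straight eventually leaves the arena and ends the episode, while spinning in a tight circle collects reward forever, so even a brute-force search over steering rates “learns” to spin:

```python
import math

def episode_return(steering, steps=500, speed=1.0, arena_radius=5.0):
    """Total reward for driving at constant speed with a fixed steering
    rate (radians per step). Reward per step is the speed; the episode
    ends when the car leaves the circular arena."""
    x = y = heading = 0.0
    total = 0.0
    for _ in range(steps):
        heading += steering
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if math.hypot(x, y) > arena_radius:
            break  # crashed out of the arena, no more reward
        total += speed  # the misspecified reward: speed, not progress
    return total

# Crude "learning": brute-force search over constant steering rates.
best = max((i / 100 for i in range(101)), key=episode_return)
print(best, episode_return(best))  # a tight spin beats driving straight
```

The optimizer is doing exactly what it was told: the reward never mentioned progress toward anything, so progress is not what you get.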

Likewise, researchers rewarded for producing papers at a high frequency will learn to rapidly spin around their own axis by inventing and debating problems that don’t lead anywhere. Some recent examples from my own field are the black hole firewall, the non-naturalness of the Higgs mass, and the string theory swampland.

Here is another gem: “Agent pauses the game indefinitely to avoid losing.” I see close parallels to the current proliferation of theories that are impossible to rule out, such as supersymmetries and multiverses.

But it could be worse: at least we are not moving backward yet. Because now that I think about it, rediscovering long-known explanations would also be a good way to feign productivity.

Of course I know of the persistent myth that scientific research is evaluated by its ability to describe observations, so I must add some words on this: I know that’s what you were told, but it’s not how it works in practice. In practice, scientists and funding agencies alike must evaluate hypotheses prior to test, to decide what is worth the time and money of testing to begin with. And the only ones able to evaluate the promise of research directions are researchers themselves.

It follows that there is no external reward function which you can impose on scientists that will optimize the return on investment. The best – indeed the only – method at your disposal is to let scientists make the evaluation internally, and then use their evaluation to distribute funding. In doing this, you may want to impose constraints on how the funding is used, eg by encouraging researchers to study specific topics. Such external constraints will reduce the overall efficiency, but this may be justifiable for societal reasons.

In case you missed it, this solution – which I have written and spoken about for more than a decade now – could come right out of the neo-libertarian’s handbook. The current system is over-regulated and therefore highly inefficient. More regulations will not fix it. This is why I am personally opposed to top-down solutions, like requirements coming from funding agencies.

However, the longer the current situation goes on, the more people we will have in the system who are convinced that what they are doing is the right thing, and the longer it will take for the problem to resolve even if you remove the flawed incentives. Indeed, in my impression the vast majority of scientists today already falls into this category: They sincerely believe that publications and citations are reliable indicators for good research.

Why do these problems persist even though they have been known for decades? I think the major reason is that most people (and that includes scientists themselves) do not understand the operation of the systems that they are part of. Evolution simply did not equip us with an intuitive grasp of such systems.

Scientists in particular by and large think of themselves as islands. They do not take into account the manifold ways in which the information they obtain is affected by the networks they are part of, and neither do they consider that their assessment of this information is influenced by the opinions of others. This is a serious shortcoming in the present education of scientists.

Will drawing an analogy between scientific research and neural nets help them see the light? I don’t know. But maybe in the not-so-far future we will all be replaced by AIs anyway. At least those sometimes get debugged.

Thursday, November 08, 2018

I'm hiring: Postdoc in Quantum Foundations in Frankfurt, Germany

I am looking for a postdoctoral researcher to join me and my small group at the Frankfurt Institute for Advanced Studies for a project in quantum foundations.

This postdoc position is a two-year scholarship supported by the Franklin Fetzer Fund. The research is project-bound, ie the candidate will work on a particular topic under my supervision. The position comes with a modest travel budget.

Applicants should have a background in quantum foundations or quantum information, especially path integral formalism and decoherence theory. Applications should contain a CV, a list of publications, and at least two letters of recommendation. Documents should be sent by email to hossi@fias.uni-frankfurt.de with the subject “Postdoc 2018”.

The application deadline is December 7th, 2018.

The Frankfurt Institute for Advanced Studies is a non-profit research organization located on the North campus of the JW-Goethe University in Frankfurt, Germany. It is an international think-tank that brings together researchers pursuing a large variety of topics, ranging from physics to neuroscience to economics. The building is new, the people are friendly, and I am not remotely as terrible as they told you.

Further questions should be directed to hossi@fias.uni-frankfurt.de.

Monday, November 05, 2018

Book Review: “Rigor Mortis” by Richard Harris

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions
Richard Harris
Basic Books (April 4, 2017)

15 years in the foundations of physics taught me little about the universe and much about human behavior. I eventually poured my frustration into a book, which was published a few months ago under the title “Lost in Math” and which documents that bad methodologies survive in scientific communities simply because they enable the continued production of papers.

While the foundations of physics are the research area that I am personally most interested in, their lack of progress arguably has limited societal relevance. Who really cares if we will eventually manage to quantize gravity? If our fruitless attempts at least entertain the masses, maybe that’s justification enough to finance string theorists.

But the same problems exist in other research areas, and in some cases lives are at stake. In his book “Rigor Mortis,” the American science journalist Richard Harris takes a close look at what is going on in biomedicine and drug development. You may think that my experience with physicists should have warned me, but really I had no idea.

While I follow the popular science literature on drug development to some extent, it is certainly not a topic that I know a lot about. And those popular science accounts tend to be celebrations of the supposedly great breakthroughs, most of which we never hear of again. I was under the impression that, since in the life sciences you can at least experimentally check hypotheses, it can’t possibly be as bad as in the foundations of physics. Well, I was wrong.

In “Rigor Mortis,” Harris goes through the various kinds of flawed scientific methodologies that have spread in those research communities: poor experimental design, hypothesis-fishing, sloppy statistics, mislabeled cell lines, contaminated but still-used antibodies, the abundance of irreproducible results, outright fraud and misconduct, and long-retracted zombie papers that continue to be cited nevertheless. No longer do I wonder why the development of new drugs has basically stalled and why the alleged breakthrough discoveries never pan out.

Harris makes some efforts to convince the reader that the problem has been recognized and some people try to do something about it. While I appreciate the attempted optimism, that’s lipstick on a pig. Yes, there have been initiatives for this and that, and some of those have indeed partly addressed a specific problem. For example, requiring researchers to pre-register trials prevents them from later changing the hypothesis they were testing. But the overarching problem that the organization of scientific research is inefficient to the point of choking progress still exists and no one is doing anything about it.

Harris’ book is thankfully short on the actual research studies, where I say “thankfully” because I get lost easily in elaborations about molecules with unpronounceable names and 20 different enzymes that may or may not be doing this or that. For me, any article about drug development comes down to “this thing fits into that thing and we hope it will have this effect.” Harris does nothing of that sort and instead focuses on the way research is pursued.

I was also relieved to find that Harris largely spares the reader dreadful stories about patients who succumbed to their illness after long suffering. It’s not that I think those stories shouldn’t be told; they have their place. But personally I would prefer if popular science articles stayed clear of them. For me it’s a reason not to read an article if I have to fear someone may die in the next paragraph.

Harris interviewed a few people whose voices appear in some places. His writing is clean and clear and easy to follow, which is to say he writes better than I do, damn. It’s not a long book, but it’s full of information, and it’s scary. You should read it.

Thursday, November 01, 2018

Story about LIGO noise resurfaces in New Scientist

Cover of New Scientist, Nov 3rd 2018.
The current New Scientist issue has an “exclusive feature” under the headline: “DID WE REALLY FIND GRAVITATIONAL WAVES? Breakthrough physics result questioned.”

The article is by Michael Brooks and it’s a summary of a claim I wrote about last year, that the original 2015 gravitational wave detection by the LIGO collaboration was not a real signal.

This claim was made by a Danish group around the physicist Andrew Jackson. This group tried to reproduce the data analysis of the LIGO collaboration with the publicly available data and could not.

The New Scientist article quotes Duncan Brown at Syracuse, who until recently was a member of LIGO, reassuring the reader that the Danes are “credible scientists,” and Slava Mukhanov, who likewise emphasizes that the Danes are people “with a high reputation.” Slava is also on record stating that “There is no mistake” in the analysis of the Danish group. Peter Coles chimes in to say that “I think their paper is a good one and it’s a shame that some of the LIGO team have been so churlish in response.”

The New Scientist article then draws a comparison between the LIGO case and the BICEP case. BICEP looked for the so-called primordial gravitational waves, which are in a different wavelength regime than LIGO’s. Their supposed signal turned out to be merely noise.

The two measurements, however, work entirely differently because BICEP did not (attempt to) directly measure gravitational waves. Instead, it looked for a secondary signal that is the imprint of the primordial gravitational waves in the cosmic microwave background. The BICEP signal was contaminated by foreground from the Milky Way. The same problem does not exist for LIGO.

Michael Brooks in the New Scientist article then points out that this is the first time we are analyzing gravitational wave signals and it’s still early days, so if an independent analysis cannot reproduce the result that’s a problem.

Interestingly, Brooks seems to have found out that the key figure in the LIGO paper about the first discovery does not actually show the quantity that was used in the data analysis. I had been told about this previously, though I cannot now recall the details. (I believe it was something about the plotted quantity not actually showing the relevant significance. If anyone knows better, please leave a comment.)

The way that I heard about it was that some members of the collaboration wanted a pretty plot that “could be printed on a T-shirt,” ie they opted for beauty over scientific relevance. I don’t know if that’s what really happened, but it sounds plausible enough. I recall thinking at the time that if that’s true it was a dumb decision; clearly this move pissed off some people in the collaboration and those had no reason to keep their mouth shut forever.

For me the issue with the Danish group’s criticism was not whether the signal is real. LIGO people pointed out problems with the Danes’ analysis to me that even I could understand. No, the issue for me was that the collaboration didn’t make an effort to help others reproduce their analysis. They also did not put out an official response, indeed have not done so until today. I thought then, and still think, that this is entirely inappropriate for a scientific collaboration. It has not improved my opinion that whenever I raised the issue, LIGO folks would tell me they have better things to do.

We have here a group of researchers, not associated with the collaboration, who tried to follow the analysis methods that the collaboration reported and could not confirm the collaboration’s results. This should not happen. If the collaboration is not able to explain their procedures so that other scientists can find out what they’re doing, that is a problem that must be fixed. This is the first time anyone has analyzed data for gravitational wave signals, and the methodology needs to be clearly documented. Evidently, this is not presently the case.

The Danes btw haven’t been the only ones who tried to redo the LIGO analysis and didn’t manage to. I know this not because I’m obsessed with LIGO, but because people send me references about this. I also get plenty of emails and comments from cranks who think that LIGO is a fraud and just wasting tax-money and so on. All this is reason why I think the LIGO collaboration is doing a disservice to science by ignoring the matter.

I was thus happy to read in the New Scientist article that some people from the LIGO collaboration are at least working on a response. But, well, it’s certainly taking some time.

What happened after the Danish group made their claim in June last year is that the VIRGO collaboration joined LIGO’s search for gravitational waves. So now the analysis draws on data from three detection sites. They have since seen a gravitational wave event with an optical counterpart recorded in several telescopes. Brooks reports that the Danish group still doubts the detection because this event, which happened in August 2017, was originally labelled a “glitch”. The story about the glitches is indeed peculiar. The glitches are occasional false alarms in the detectors. They tend not to have the frequency spectrum of the real events, however. So it seems to me like a stretch that the Danes are holding on to their claim, and I am not sure why New Scientist dug this up now.

If you cannot (or do not want to) access the New Scientist piece, Jennifer Ouellette has an excellent summary on Ars Technica.



Update: The LIGO collab has published a brief response to the New Scientist piece on their website:
“1 Nov 2018 -- Claims in a paper by Creswell et al. of puzzling correlations in LIGO data have broadened interest in understanding the publicly available LIGO data around the times of the detected gravitational-wave events. The features presented in Creswell et al. arose from misunderstandings of public data products and the ways that the LIGO data need to be treated. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results. We are preparing a paper that will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their source properties.”