Saturday, February 28, 2015

Are pop star scientists bad for science?

[Image Source: Asia Tech Hub]

In January, Lawrence Krauss wrote a very nice feature article for the Bulletin of the Atomic Scientists, titled “Scientists as celebrities: Bad for science or good for society?” In his essay, he reflects on the rise to popularity of Einstein, Sagan, Feynman, Hawking, and deGrasse Tyson.

Krauss, not so surprisingly, concludes that scientific achievement is neither necessary nor sufficient for popularity, and that society benefits from scientists’ voices in public debate. He does not, however, address the other part of the question raised by his essay’s title: Is scientific celebrity bad for science?

I have to admit that people who idolize public figures just weird me out. It isn’t only that I am generally suspicious of groups of any kind and avoid crowds like the plague, but that there is something creepy about fans trying to outfan each other by insisting their stars are infallible. It’s one thing to follow the lives of popular figures, be happy for them and worry about them. It’s another thing to elevate their quotes to unearthly wisdom and preach their opinions like supernatural law.

Years ago, I unknowingly found myself in a group of Feynman fans who were just comparing notes about the subject of their adoration. In my attempt to join the discussion I happily informed them that I didn’t like Feynman’s books, didn’t like, in fact, his whole writing style. The resulting outrage over my blasphemy literally had me backing out of the room.

Sorry, have I insulted your hero?

An even more illustrative case is that of Michael Hale, who made a rather innocent joke about a photo of Neil deGrasse Tyson on Twitter and in reply got shot down with insults. You can find some (very explicit) examples in the writeup of his story, “How I Became Thousands of Nerds' Worst Enemy by Tweeting a Photo.” After blowing up on Twitter, his photo ended up on the Facebook page “I Fucking Love Science.” The best thing about the ensuing Facebook thread is the frustration of several people who apparently weren’t able to turn off notifications of new comments. The post has been shared more than 50,000 times, and Michael Hale now roasts in nerd hell somewhere between Darth Vader and Sauron.

Does this seem like scientists’ celebrity is beneficial to balanced argumentation? Is fandom ever supportive of rational discourse?

I partly suspect that Krauss, like many people of his age and social status, doesn’t fully realize the side effects that social media attention brings: the trolls in the blogosphere’s endless comment sections and the anonymous insults in the dark corners of forum threads. I agree with Krauss that it’s good that scientists voice their opinions in public. I’m not sure that celebrity is a good way to encourage people to think on their own. Neither, for that matter, are Facebook pages with expletives in the title.

Be that as it may, pop star scientists serve, as Steve Fuller bluntly put it, as “marketing”:
“The upshot is that science needs to devote an increased amount of its own resources to what might be called pro-marketing.”
Agreed. And for that reason, I am in favor of scientific celebrity, even though I doubt that idolization can ever bring insight. But let us now turn to the question of what ill effects celebrity can have on science.

Many of those who become scientists report getting their inspiration from popular science books, shows, or movies. Celebrities clearly play a big role in this pull. One may worry that the resulting interest in science is then very focused on a few areas that are the popular topics of the day. However, I don’t see this worry having much to do with reality. What seems to happen instead is that young people, once their interest is sparked, explore the details by themselves and find a niche that they fit in. So I think that science benefits from popular science and its voices by inspiring young people to go into science.

The remaining worry that I can see is that scientific pop stars affect the interests of those already active in science. My colleagues always outright dismiss the possibility that their scientific opinion is affected by anything or anybody. It’s a nice demonstration of what psychologists call the “bias blind spot”. It is well documented that humans pay more attention to information that they receive repeatedly and in particular if it comes from trusted sources. This was once a good way to extract relevant information in a group of 300 fighting for survival. But in the age of instant connectivity and information overflow, it means that our interests are easy to play.

If you don’t know what I mean, imagine that deGrasse Tyson had just explained that he read my recent paper and thinks it’s totally awesome. What would happen? Well, first of all, all my colleagues would instantly hate me and proclaim that my paper is nonsense without even having read it. Then, however, a substantial number of them would go and actually read it. Some of them would attempt to find flaws in it, and some would go and write follow-up papers. Why? Because the papal utterance would get repeated all over the place; they’d take it to lunch, they’d discuss it with their colleagues, they’d ask others for their opinion. And the more they discuss it, the more interesting it becomes. That’s how the human brain works. In the end, I’d have what the vast majority of papers never gets: attention.

That’s a worry you can have about scientific celebrity, but to be honest it’s a very constructed worry. That’s because pop star scientists rarely if ever comment on research that isn’t already very well established. So the bottom line is that while scientific celebrity could be bad for science, I don’t think it actually is, or at least I can’t see how.

The above-mentioned skewing of scientific opinions by selectively drawing attention to some works is, however, a real problem with the popular science media, which doesn’t shy away from commenting on research that is still far from being established. The better outlets, in an attempt to prove their credibility, stick preferably to papers by those who are already well known and decorate their articles with quotes from even better-known people. The result is a rich-get-richer trend. On the very opposite side, there’s a lot of trash media that seem to randomly hype nonsense papers in the hope of catching readers with fat headlines. This preferentially benefits scientists who shamelessly oversell their results. The vast majority of serious, high-quality research, in pretty much any area, goes largely unnoticed by the public. That, in my eyes, is a real problem which is bad for science.

My best advice, if you want to know what physicists really talk about, is to follow the physics societies, their blogs, or their journals. I find they are reliable and trustworthy sources of information, and usually very balanced, because they’re financed by membership fees, not click rates. Your first reaction will almost certainly be that their news is boring and that progress seems incremental. I hate to spell it out, but that’s how science really is.

Thursday, February 19, 2015

New experiment doesn’t see fifth force, rules out class of dark energy models

[Sketch of the new experiment. Fig. 1 from arXiv:1502.03888]

Three months ago, I told you about a paper that suggested a new way to look for certain types of dark energy fields, called “chameleon fields”. Chameleon fields can explain the observed accelerated expansion of the universe without the necessity of introducing a cosmological constant. Their defining feature is a “screening mechanism” that suppresses the field in the vicinity of matter. The chameleon field becomes noticeable between galaxies, where the average energy density is very low, but it is tiny and unmeasurable near massive objects, such as the Earth.
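To make the screening a bit more concrete, here is the standard setup from the chameleon literature, going back to Khoury and Weltman (this is background, not something taken from the paper discussed below). The field φ has a bare potential plus a coupling to the local matter density ρ, so that it moves in an effective potential

    V_eff(φ) = V(φ) + (ρ/M) φ,   with for example V(φ) = Λ^4 (1 + Λ^n/φ^n),

where M sets the coupling to matter and Λ the energy scale of the potential. The minimum of V_eff, and with it the field’s effective mass, shifts with ρ: in a dense environment the field becomes heavy and short-ranged, in a dilute one light and long-ranged. That is all the “screening” amounts to.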

Or so we thought, until it was pointed out by Burrage, Copeland and Hinds that the fields should be observable in vacuum chambers when measured not with massive probes but with light ones, such as single atoms. The idea is that the chameleon field inside a vacuum chamber would not be suppressed, or not very much suppressed, so that atoms in the chamber are subject to a modified gravitational field: the usual gravitational field plus the extra force from the chameleon.

You might not believe it, but half a year after they proposed the experiment, it has already been done, by a group of researchers from Berkeley and the University of Pennsylvania:
    Atom-interferometry constraints on dark energy
    Paul Hamilton, Matt Jaffe, Philipp Haslinger, Quinn Simmons, Holger Müller, Justin Khoury
    arXiv:1502.03888 [physics.atom-ph]
I am stunned, to say the least! I’m used to experiments taking a decade or two from the idea to planning. If they come into existence at all. So how awesome is this?

Here is what they did in the experiment.

They used a vacuum chamber in which there is an atom interferometer for a cloud of about 10 million cesium atoms. The vacuum chamber also contains a massive sphere. The sphere serves to suppress the field on one arm of the interferometer, so that a phase difference resulting from the chameleon field should become measurable. The atoms are each brought into superposition somewhere above the sphere and split into two wave packets. These are directed with laser pulses that make one wave packet go up – away from the sphere – and down again, and the other wave packet go down – towards the sphere – and up again. Then the phase shift between the two wave packets is measured. This phase shift contains information about the gravitational field on each path.
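For orientation, the textbook result for this kind of light-pulse atom interferometer (a standard formula, not a number taken from this particular paper) is that a constant acceleration a along the interferometer axis produces a phase shift

    Δφ ≈ k_eff a T^2,

where k_eff is the effective wavevector of the laser pulses and T the time between pulses. An additional chameleon acceleration sourced by the sphere simply adds to the Earth’s g in this formula, and it is this extra contribution that the comparison with the sphere moved away, described next, is designed to isolate.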

They also make a measurement in which the sphere is moved aside entirely, so as to determine the offset from the gravitational field of the Earth alone, which allows them to extract the (potential) influence of the chameleon field by itself.

Their data doesn’t contain any evidence for a fifth force, so they can exclude the presence of the chameleon field to a precision that derives from their data. Thanks to using atoms instead of more massive probes, their experiment is the first whose precision is high enough to rule out part of the parameter space in which a dark energy field could have been found. The models for chameleon fields have a second parameter, and part of this space isn’t excluded yet. However, if the experiment can be improved by some orders of magnitude, it might be possible to rule it out completely. This would mean that we could discard these models entirely.

It is always hard to explain how one can get excited about null results, but see: ruling out certain models frees up mental space for other ideas. Of course the people who have worked on it won’t be happy, but such is science. (Though Justin Khoury, who is the originator of the idea, co-authored the paper and so seems content contributing to its demise.) The chameleon isn’t quite dead yet, but I’m happy to report it’s on the way to the nirvana of deceased dark energy models.

Sunday, February 15, 2015

Open peer review and its discontents.

Some days ago, I commented on an arXiv paper that had been promoted by the arXiv blog (which, for all I know, has no official connection with the arXiv). This blogpost had an aftermath that gave me something to think about.

Most of the time when I comment on a paper that was previously covered elsewhere, it’s to add details that I found missing. More often than not, this amounts to a criticism which then ends up on this blog. If I like a piece of writing, I just pass it on with approval on twitter, G+, or facebook. This is to explain, in case it’s not obvious, that the negative tilt of my blog entries is selection bias, not that I dislike everything I haven’t written myself.

The blogpost in question pointed out shortcomings of a paper. Trying to learn from earlier mistakes, I was very explicit about what that means, namely that the conclusion in the paper isn’t valid. I’ve now written this blog for almost nine years, and it has become obvious that the careful and polite scientific writing style plainly doesn’t get the message across to a broader audience. If I write that a paper is “implausible,” my colleagues will correctly parse this and understand that I mean it’s nonsense. The average science journalist will read it as “speculative” and misinterpret it, either accidentally or deliberately, as some kind of approval.

Scientists also have a habit of weaving safety nets with what Peter Woit once so aptly called ‘weasel words’, ambiguous phrases that allow them in any instance to claim they actually meant something else. Who ever said the LHC would discover supersymmetry? The main reason you most likely perceive the writing on my blog as “unscientific” is the lack of weasel words. So I put my head out here at the risk of being wrong without means of backpedaling, and as a side effect I often come across as actively offensive.

If I got a penny each time somebody told me I’m supposedly “aggressive” because I read Strunk’s `Elements of Style,’ then I’d at least get some money for writing. I’m not aggressive, I’m expressive! And if you don’t buy that, I’ll hit some adjectives over your head. You can find them weasel words in my papers though, in the plenty, with lots of ifs and thens and subjunctives, in nested subordinate clauses with 5 syllable words just to scare off anybody who doesn’t have a PhD.

In reaction to my, ahem, expressive blogpost criticizing the paper, I very promptly got an email from a journalist, Philipp Hummel, who was writing an article about the paper for Spektrum.de, the German edition of Scientific American. His article has meanwhile appeared, but since it’s in German, let me summarize it for you. Hummel didn’t only write about the paper itself, but also about the online discussion around it, and about the author’s, my, and other colleagues’ reactions to it.

Hummel wrote by email that he found my blogpost very useful and that he had also contacted the author asking for a comment on my criticism. The author’s reply can be found in Hummel’s article. It says that he hadn’t read my blogpost, wouldn’t read it, and wouldn’t comment on it either, because he doesn’t consider this the proper ‘scientific means’ to argue with colleagues. The proper way for me to talk to him, he let the journalist know, is to either contact him directly or publish a reply on the arXiv. Hummel then asked me what I think about this.

To begin with, I find this depressing. Here’s a young researcher who explicitly refuses to address criticism of his work, and moreover thinks this is proper scientific behavior. I could understand that he doesn’t want to talk to me, evil aggressive blogger that I am, but that he refuses to explain his research to a third party isn’t only bad science communication, it actively damages the image of science.

I will admit I also find it slightly amusing that he apparently believes I must have an interest in talking to him, or in him talking to me. That all the people whose papers I have once commented on would show up wanting to talk is the stuff of my nightmares. I’m happy if I never hear from them again and can move on. There’s lots of trash out there that needs to be beaten.

That paper and its author, me, and Hummel, we’re of course small fish in the pond, but I find this represents a tension that presently exists in much of the scientific community. A very prominent case was the supposed discovery of “arsenic life” a few years ago. The study was exposed as flawed by online discussion. The arsenic authors refused to comment on this, arguing that:
“Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated […] This is a common practice not new to the scientific community. The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.”
Naïve as I am, I thought that theoretical physics was less 19th century than that. But now it seems to me this outdated spirit is still alive in the physics community too. There is a basic misunderstanding here about the necessity and use of peer review, and about the relevance of scientific publication.

The most important aspect of peer review is that it ensures that a published paper has been read at least by the reviewers, which otherwise wouldn’t be the case. Public peer review will never work for all papers, simply because most papers would never get read. It works just fine, though, for papers that receive much attention, and in these cases anonymous reviewers aren’t any better than volunteer reviewers with similar scientific credentials. Consequently, public peer review, when it takes place, should be taken at least as seriously as anonymous review.

Don’t get me wrong, I don’t think that all scientific discourse should be conducted in public. Scientists need private space to develop their ideas. I even think that most of us go out with ideas way too early, because we are under too much pressure to appear productive. I would never publicly comment on a draft that was sent to me privately, or publicize opinions voiced in closed meetings. You can’t hurry thought.

However, the moment you make your paper publicly available you have to accept that it can be publicly commented on. It isn’t uncommon for researchers, even senior ones, to have stage fright upon arxiv submission for this reason. Now you’ve thrown your baby into the water and have to see whether it swims or sinks.

Don’t worry too much, almost all babies swim. That’s because most of my colleagues in theoretical physics entirely ignore papers that they think are wrong. They are convinced that in the end only truth will prevail and thus practice live-and-let-live. I used to do this too. But look at the evidence: it doesn’t work. The arXiv is now full of paid research so thin a sneeze could wipe it out. We seem to have forgotten that criticism is an integral part of science; it is essential for progress, and for cohesion. Physics leaves me wanting more every year. It is over-specialized into incredibly narrow niches, getting worse by the day.

Yes, specialization is highly efficient for optimizing existing research programs, but it is counterproductive to the development of new ones. In the production line of a car, specialization allows you to optimize every single move and every single screw. And yet, you’ll never arrive at a new model by listening to people who do nothing all day but look at their own screws. For new breakthroughs you need people who know a little about all the screws and their places and how they belong together. In that production line, the scientists active in public peer review are the ones who look around and say they don’t like their neighbor’s bolts. That doesn’t make for a new car, all right, but at least they do look around and they show that they care. The scientific community stands to benefit much from this care. We need them.

Clearly, we haven’t yet worked out a good procedure for how to deal with public peer review and with these nasty bloggers who won’t shut up. But there’s no going back. Public peer review is here to stay, so better get used to it.

Wednesday, February 11, 2015

Do black hole firewalls have observable consequences?

In a paper out last week, Niayesh Afshordi and Yasaman Yazdi from Perimeter Institute study the possibility that black hole firewalls can be observed through their neutrino emission.
In their work, the authors assume that black holes are not black, but emit particles from some surface close to the black hole horizon. They look at the flux of particles that we should receive on Earth from this surface. They argue that highly energetic neutrinos, which have been detected recently but whose origin is presently unknown, might have originated at the black hole firewall.

The authors explain that of all the particles that might be produced in such a black hole firewall, neutrinos are the most likely to be measured on Earth, because neutrinos interact only very weakly and have the best chance of escaping from a hot and messy environment. In this sense their firewall has consequences different from what a hard surface would have, which is important because a hard surface rather than an event horizon has previously been ruled out. The authors make an ansatz for two different spectra of the neutrino emission, a power law and a black-body law. They then use the distribution of black holes to arrive at an estimate for the neutrino flux on Earth.
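Schematically, such an estimate has the form of a standard source-population calculation (I write this generically for illustration; it is not the authors’ actual expression): the flux on Earth is the assumed emission spectrum folded with the abundance of sources,

    Φ(E) ∝ ∫ dV n_BH (dN/dE) / (4π d^2),

where n_BH is the number density of black holes, d the distance to the volume element dV, and dN/dE the assumed emission spectrum per black hole, with dN/dE ∝ E^(-γ) for the power-law ansatz or a Planck form for the black-body case. The normalization and, for the power law, the index γ are then the knobs one can turn to compare with the measured high-energy neutrino events.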

Some experiments have recently measured neutrinos with very high energies. The origin of these highly energetic neutrinos is very difficult to explain from known astrophysical sources. It is presently not clear how they are produced. Afshordi and Yazdi suggest that these highly energetic neutrinos might come from the black hole firewalls if these have a suitable power-law spectrum.

It is a nice paper that tries to make contact between black hole thermodynamics and observation. This contact has so far only been made for primordial (light) black holes, but these have never been found. Afshordi and Yazdi instead look at massive and solar-mass black holes.

The idea of a black hole firewall goes back to a 2012 paper by Almheiri, Marolf, Polchinski, and Sully, hereafter referred to as AMPS. AMPS pointed out in their paper that what had become the most popular attempted solution to the black hole information loss problem is in fact internally inconsistent. They showed that four assumptions about black hole evaporation, which many people believed to all be realized in nature, could not simultaneously be true. These four assumptions are:
  1. Black hole evaporation does not destroy information.
  2. Black hole entropy counts microstates of the black hole.
  3. Quantum field theory is not significantly affected until close to the horizon.
  4. An infalling observer who crosses the horizon (when the black hole mass is still large) does not notice anything unusual.
The last assumption is basically the equivalence principle: an observer in a gravitational field cannot locally tell the difference from acceleration in flat space. A freely falling observer is not accelerated (by definition) and thus shouldn’t notice anything unusual. Assumption three is a rather weak version of the expectation that quantum gravitational effects can’t just pop up anywhere in space that is almost flat. Assumptions 3 and 4 are general expectations that don’t have much to do with the details of the attempted solution to the black hole information loss problem. The first two assumptions are what describe the specific scenario that supposedly solves the information loss problem. You may or may not buy into these.

These four assumptions are often assigned to black hole complementarity specifically, but that is mostly because the Susskind, Thorlacius, Uglum paper nicely happened to state these assumptions explicitly. The four assumptions are more importantly also believed to hold in string theory generally, which supposedly solves the black hole information loss problem via the gauge-gravity duality. Unfortunately, most articles in the popular science media have portrayed all these assumptions as known to be true, but this isn’t so. They are supported by string theory, so the contradiction is a conundrum primarily for everybody who believed that string theory solved the black hole information loss problem. If you do not, for example, commit to the strong interpretation of the Bekenstein-Hawking entropy (assumption 2), which is generally the case for all remnant scenarios, then you have no problem to begin with.

Now since AMPS have shown that the four assumptions are inconsistent, at least one of them has to be given up, and all the literature following their paper discussed back and forth which one should be given up. If you give up the equivalence principle, then you get the “firewall” that roasts the infalling observer. Clearly, that should be the last resort, since General Relativity is based on the equivalence principle. Giving up any of the other assumptions is more reasonable. Already for this reason, using the AMPS argument to draw the conclusion that black holes must be surrounded by a firewall is totally nutty.

As for my personal solution to the conundrum, I have simply shown that the four assumptions aren’t actually mutually incompatible. What makes them incompatible is a fifth assumption that isn’t explicitly stated in the above list, but enters later in the AMPS argument. Yes, I suffer from a severe case of chronic disagreeability, and I’m working hard on my delusions de grandeur, but this story shall be told another time.

The paper by Afshordi and Yazdi doesn’t make any use whatsoever of the AMPS calculation. They just assume that there is something emitting something close to the black hole horizon, and then they basically address the question of what that something must do to explain the excess neutrino flux at high energies. This scenario has very little to do with the AMPS argument. In the AMPS paper the outgoing state is Hawking radiation, with suitable subtle entanglements so that it is pure and the evolution unitary. The average energy of the emitted particles seen by the faraway observer (us) is still zilch for large black holes. It is in fact typically below the temperature of the cosmic microwave background, so the black holes won’t even evaporate until the universe has cooled some more (in some hundred billion years or so). It is only the infalling observer who notices that something strange is going on, and that only if you drop assumption 4, which, I remind you, is already nutty.

So what is the merit of the paper?

Upon inquiry, Niayesh explained that the motivation for studying the emission of a possible black hole firewall in more general terms goes back to an earlier paper that he co-authored, arXiv:1212.4176. In this paper they argue (as so many before) that black holes are not the end state of gravitational collapse, but that instead spacetime ends already before the horizon (if you want to be nutty, be bold at least). This idea has some similarities with the fuzzball idea.

I am generally unenthusiastic about such solutions because I like black holes, and I don’t believe in anything that is supposed to stabilize collapsing matter at small curvature. So I am not very convinced by their motivation. Their power-law ansatz is clearly constructed to explain a piece of data that currently wants an explanation. To me their idea makes more sense when read backwards: suppose the mysterious highly energetic neutrinos come from the vicinity of black holes. Given that we know the distribution of the black holes, what is the spectrum by which the neutrinos should have been emitted?

In summary: the paper doesn’t really have much to do with the black hole firewall; it deals instead with an alternative end state of black hole collapse and asks for its observational consequences. The good thing about their paper is that they are making contact with observation, which is rare in an area plagued by a lack of phenomenology, so I appreciate the intent. It would take more than a few neutrinos, though, to convince me that black holes don’t have a horizon.

Wednesday, February 04, 2015

Black holes don’t exist again. Newsflash: It’s a trap!

Several people have pointed me towards an article at phys.org about this paper
    Absence of an Effective Horizon for Black Holes in Gravity's Rainbow
    Ahmed Farag Ali, Mir Faizal, Barun Majumder
    arXiv:1406.1980 [gr-qc]
    Europhys.Lett. 109 (2015) 20001
Among other things, the authors claim to have solved the black hole information loss problem, and the phys.org piece praises them for using a “new theory.” The first author is quoted as saying: “The absence of an effective horizon means there is nothing absolutely stopping information from going out of the black hole.”

The paper uses a modification of General Relativity known under the name “rainbow gravity,” which means that the metric, and thus the space-time background, is energy-dependent. Dependent on which energy, you rightfully ask. I don’t know. Everyone who writes papers on this makes their own pick. Rainbow gravity is an ill-defined framework that has more problems than I can list here. In the paper the authors motivate it, amazingly enough, by string theory.

The argument goes somewhat like this: rainbow gravity has something to do with deformed special relativity (DSR), some versions of which have something to do with a minimal length, which has something to do with non-commutative geometry, which has something to do with string theory. (Check the paper if you don’t believe this is what they write.) This argument has more gaps than the sentence has words.

To begin with, DSR was formulated in momentum space. Rainbow gravity is supposedly a formulation of DSR in position space that in addition takes gravity into account. Except that it is known that the only ways to do DSR in position space in a mathematically consistent way lead either to violations of Lorentz invariance (ruled out) or to violations of locality (also ruled out).

This was once a nice idea that caused some excitement, but that was 15 years ago. As far as I am concerned, papers on the topic shouldn’t be accepted for publication any more unless these problems are solved, or at least an attempt is made to solve them. At the very least the problems should be mentioned in an article on the topic. The paper in question doesn’t list any of these issues. Rainbow gravity isn’t only not new, it is also not a theory. It once may have been an idea from which a theory might have been developed, but this never happened. Now it’s a zombie idea that doesn’t die, because journal editors think it must be okay if others have published papers on it too.

There is one way to make sense of rainbow gravity, which is in the context of running coupling constants. Coupling constants, including Newton’s constant, aren’t actually constant, but depend on the energy scale with which the physics is probed. This is a well-known effect that can be measured for the interactions in the standard model, and it is plausible that it should also exist for gravity. Since the curvature of spacetime depends on the strength of the gravitational coupling, the metric then becomes a function of the energy that it is probed with. This is to my knowledge also the only way to make sense of deformed special relativity. (I wrote a paper on this with Xavier and Roberto some years ago.) Alas, to see any effect from this you’d need to do measurements at Planckian (center-of-mass) energies, and the energy-dependent metric would only apply directly in the collision region.
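To illustrate what “running” means, take the textbook example from the standard model (the familiar one-loop QED result, quoted here for context; nothing in it is specific to rainbow gravity or to the paper): the fine-structure constant measured at momentum transfer Q grows logarithmically,

    α(Q^2) ≈ α / (1 - (α/3π) ln(Q^2/m_e^2))   for Q^2 >> m_e^2,

due to vacuum polarization. If Newton’s constant ran in an analogous way, G → G(E), a metric like the Schwarzschild solution would inherit that energy dependence through its prefactor, ds^2 = -(1 - 2G(E)M/r) dt^2 + ..., with E the center-of-mass energy of the probing process. That is the sense in which an “energy-dependent metric” can be made meaningful.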

In their paper the authors allude to some “measurement” that supposedly sets the energy in their metric. Unfortunately, there is never any observer doing any measurement, so one doesn’t know which energy it is; it’s just a word that they appeal to. What they do instead is make use of a relation known from some versions of DSR that prevents one from measuring distances below the Planck length. They then argue that if one cannot resolve structures below the Planck length, then the horizon of a black hole cannot, strictly speaking, be defined. That quantum gravity effects should blur out the horizon to a finite width is correct in principle.

Generally, all surfaces of zero width, like the horizon, are mathematical constructs. This is hardly a new insight, but it’s also not very meaningful. The “surface of the Earth,” for example, doesn’t strictly speaking exist either. You will still smash to pieces if you jump out of a window; you just can’t tell exactly where you will die. Similarly, that the exact location of the horizon cannot be measured doesn’t mean that the space-time no longer has a causally disconnected region. You just can’t tell exactly when you enter it. The authors’ statement that:
“The absence of an effective horizon means there is nothing absolutely stopping information from going out of the black hole.”
is therefore logically equivalent to the statement that there is nothing absolutely stopping you at the surface of the Earth when you jump out the window.

The paper also contains a calculation. The authors first point out that in the normal metric of the Schwarzschild black hole an infalling observer needs a finite time to cross the horizon, but for a faraway observer it looks like it takes an infinite time. This is correct. If one calculates the time in the faraway observer’s coordinates, it diverges as the infalling observer approaches the horizon. The authors then find out that it takes only a finite time to reach a surface that is still a Planck length away from the horizon. This is also correct. It’s also a calculation that is normally assigned to undergrad students.
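For the record, here is the back-of-the-envelope version of that undergrad calculation, keeping only the leading near-horizon behavior (a simplified sketch, not the authors’ own expressions). For radial infall towards a Schwarzschild black hole with horizon radius r_s = 2GM/c^2, the faraway observer’s coordinate time behaves near the horizon as

    t(r) ≈ const - (r_s/c) ln((r - r_s)/r_s),

which diverges as r → r_s, but only logarithmically. Cutting off the approach a Planck length l_P above the horizon therefore gives a finite result of order (r_s/c) ln(r_s/l_P), which for a solar-mass black hole is roughly a millisecond. All of the divergence sits in that last Planck length.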

They try to conclude from this that the faraway observer sees a crossing of the horizon in finite time, which doesn’t make sense because they’ve previously argued that one cannot measure exactly where the horizon is, though they never say who is measuring what and how. What it really means is that the faraway observer cannot tell exactly when the horizon is crossed. This is correct too, but since it takes an infinite time anyway, the uncertainty is also infinite. The authors then argue: “Divergence in time is actually an signal of breakdown of spacetime description of quantum theory of gravity, which occurs because of specifying a point in spacetime beyond the Planck scale.” The authors, in short, conclude that if an observer cannot tell exactly when he reaches a certain distance, he can never cross it. Thus the position at which the asymptotic time diverges is never reached, and the observer never becomes causally disconnected.

In their paper, this reads as follows:
“Even though there is a Horizon, as we can never know when a string cross it, so effectively, it appears as if there is no Horizon.”
Talking about strings here is just cosmetics; the relevant point is that they believe that if you cannot tell exactly when you cross the horizon, you will never become causally disconnected, which just isn’t so.

The rest of the paper is devoted to trying to explain what this means, and the authors keep talking about some measurements that are never done by anybody. If you did indeed make a measurement that reaches the Planck (center-of-mass) energy at the horizon, you could locally induce a strong perturbation, thereby denting away the horizon a bit, temporarily. But this isn’t what the authors are after. They are trying to convince the reader that the impossibility of resolving distances arbitrarily well, though without actually making any measurement, bears some relevance for the causal structure of spacetime.

A polite way to summarize this finding is that the calculation doesn’t support the conclusion.

This paper is a nice example, though, of what is going wrong in theoretical physics. It isn’t actually that the calculation is wrong, in the sense that the mathematical manipulations are most likely correct (I didn’t check in detail, but it looks good). The problem is not only that the framework they use is ill-defined (in their version it is plainly lacking necessary definitions, notably the transformation behavior under a change of coordinate frame and the meaning of the energy scale that they use), but that they moreover misinterpret their results.

The authors not only fail to mention the shortcomings of the framework they use, but also oversell it by trying to connect it to string theory, even though they should know that the type of uncertainty that results from their framework is known NOT to be valid in string theory. And the author of the phys.org article totally bought into this. The tragedy is of course that for the authors the overselling has worked out just fine, and they’ll most likely do it again. I’m writing this in the hope of preventing it, though at the risk that they’ll now hate me and never again cite any of my papers. This is how academia works these days, or rather, doesn’t work. Now I’m depressed. And this is all your fault for pointing out this article to me.

I can only hope that Lisa Zyga, who wrote the piece at phys.org, will learn from this that relying solely on the authors’ own statements is never good journalistic practice. Anybody working on black hole physics could have told her that this isn’t a newsworthy paper.