Tuesday, April 28, 2015

What should everybody know about the foundations of physics?

How do we best communicate research on the foundations of physics? That was the topic of a panel discussion at the conference I attended last week. Organized by Brendan Foster from FQXi, the panel consisted, besides me, of Matt Leifer and Dagomir Kaszlikowski, winner of last year’s FQXi video contest. And, yes, Matt was wearing his anti-quantum zealot shirt :)

It turned out that Matt and I had quite similar thoughts on the purpose of public outreach. I started by pointing out that we most often have two different aims, inspiration and education, which sometimes conflict with each other. To this Matt added a third aim, “activation,” by which he meant that we sometimes want people to react to our outreach message, for example by signing up for a newsletter, attending a lecture, or donating to a specific cause. Dagomir explained that making movies with sexy women is the way to better communicate physics.

As I laid out in an earlier blogpost, the dual goals of inspiration and education create a tension that seems inevitable. The presently most common way to inspire the masses is to entirely avoid technical terms and cut back on accuracy for the sake of catchy messages – and heaven forbid showing equations. Since the readers are never exposed to any technical terms or equations, they are doomed to forever remain in the shallow waters. This leads to an unfortunate gap in the available outreach literature: on the one hand we have the seashore with big headlines and very little detail, and in the far distance we have the island of education with what are basically summaries of the technical literature, already well above most people’s heads. There isn’t much in between, and most readers never learn to swim.

This inspiration-education gap is sometimes so large that it creates an illusion of knowledge among those who only read the inspirational literature. Add to this that many physicists who engage in outreach go to great lengths trying to convince the audience that it’s all really not so difficult, and you create a pool of people who are now terribly inspired to do research without having the necessary education. Many of them will just end up frustrated with the popular science literature that doesn’t help them gain any deeper knowledge. A small fraction become convinced that all the years it takes to get a PhD are really unnecessary, and that reading popular science summaries prepares them well for doing research on their own. These are the people who then go on to send me their new theory of quantum mechanics that solves the black hole paradox, or something like that.

The tension leading to this gap is one we have inherited from print media, which only allows a fixed level of technical detail, usually chosen to be low so as to maximize the possible audience. But now that personalization and customization are all in vogue, it would be possible to bridge this gap online. It would take effort, of course, but I think it would be worth it. To me, bridging this gap between inspiration and education is clearly one of the goals we should be working towards, to help people who are interested learn more gradually and build up their knowledge. Right now some bloggers are trying to fill the gap, but the filling is spotty and uncoordinated. We could do much better than that.

The other question that came up repeatedly during the panel discussion was whether we really need more inspiration. Several people, including Matt Leifer and Alexei Grinbaum, thought that physics has recently been very successful in reaching the masses, and yes, the Brian Cox effect and the Big Bang Theory were named in this context. I think they are right to some extent – a lot has changed in the last decades. Though we could always do better, of course. Alexei said that we should try to make the term “entanglement” as commonly used as “relativity.” Is that a goal we should strive for?

When it comes to inspiration, I am not at all sure that it is achievable, or even particularly useful, for everybody to know what a bipartite state is or what exactly the problem is with renormalizing quantum gravity. As I also said in the panel discussion, we are all first and foremost interested in what benefits us personally. One can’t eat quantum gravity and it doesn’t cure cancer, and that’s where most people’s interest ends. I don’t blame them. While I think that everybody needs a solid basic education in math and physics, and the present education leaves me wanting, I don’t think everybody needs to know in any detail what is going on at the research frontier.

What I really want most people to know about the foundations of physics is not so much exactly what research is being conducted, but what the foundational questions are to begin with, and why this research is relevant at all. I have the impression that much of the presently existing outreach effort doesn’t do this. Instead of giving people the big picture and the vision – and then a hand if they want to know more – public outreach is often focused on promoting very specific research agendas. The reason for this is mostly structural, because much of public outreach is driven by institutes or individuals who are, of course, pushing their own research. Very little public outreach is actually done primarily for the benefit of the public. Instead, it is typically done to increase visibility or to please a sponsor.

The other reason though is that many scientists don’t speak about their vision, or maybe don’t think about the big picture themselves all that much. Even I honestly don’t understand the point of much of the research in quantum foundations, so if you need any indication that public outreach in quantum foundations isn’t working all that well, please take me as a case study. As far as I can tell, there seem to be a lot of people in this field who spend time reformulating a theory that works perfectly fine, and then make really sure to convince everybody that their reformulation does exactly the same as quantum mechanics has always done.

Why, oh why, are they so insistent on finding a theory that is both realist and local, when it would be so dramatically more interesting to find a theory that allows for non-local information transfer and is still compatible with all the data we have so far? But maybe that’s just me. In any case, I wish that more people had the courage to share their speculations about what this research might lead to, in a hundred or a thousand years. Will we have come to understand that non-locality is real and in principle possible to exploit for communication? Will we be able to create custom-designed atomic nuclei?

As Matt and Brendan pointed out several times, it is unfortunate that there aren’t many scientific studies dedicated to finding out which public outreach practices are actually effective, and effective for what. Do the movies with sexy women actually get across any information? Does the inspiration they provide actually prompt people to change their attitude towards science? Do we succeed at all in raising awareness that research on the foundations of physics is necessary for sustainable progress? Or do we, despite our best intentions, just drive people into the arms of quantum quacks, because we hand them empty words but not enough detail to tell the science from the pseudoscience?

I enjoyed this panel discussion because, most often, the exchanges about public outreach that I have with my colleagues come down to them declaring that public outreach just takes time and money away from research. In the end, of course, these basic questions remain: Who does it, and who pays for it?

In summary, I think what we need is more effort to bridge the gap between inspiration and education, and I want to see more vision and less promotion in public outreach.

Friday, April 24, 2015

The problem with Poincaré-invariant networks

Following up on the discussions on these two previous blogposts, I've put my argument for why Poincaré-invariant networks in Minkowski space cannot exist into writing, or rather drawing. The notes are on the arXiv today.

The brief summary goes like this: We start in 1+1 dimensions. Suppose there is a Poincaré-invariant network in Minkowski space that is not locally infinitely dense. Then its nodes must be locally finite and, on the average, distributed in a Poincaré-invariant way. We take this distribution of points and divide it up into space-time tiles of equal volume. Due to homogeneity, each of these volumes must contain, on the average, the same number of nodes. Now we pick one tile (marked in grey).


In that tile we pick one node and ask where one of its neighbors is. For the network to be Poincaré-invariant, the distribution of the links to the neighbor must be Lorentz-invariant around the position of the node we have picked. Thus the probability distribution for the neighboring node must be uniform on each hyperbola at equal proper distance from the node. Since the hyperbolae are infinitely long, the neighbor is, with probability one, at infinite spatial distance and arbitrarily close to the lightcone. The same is the case for all neighbors.
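
If you want to see this divergence explicitly, here is a minimal numerical sketch (my own illustration, not taken from the notes). On the hyperbola at proper distance s from a node, points can be written as (t, x) = (s cosh η, s sinh η), and the Lorentz-invariant measure is uniform in the rapidity η. Since the total measure is infinite, I regulate it with a cutoff |η| < Λ and watch the neighbor's expected spatial distance blow up as the cutoff grows:

    import numpy as np

    # Sample a "neighbor" on the hyperbola of fixed proper distance s,
    # uniformly in rapidity (the Lorentz-invariant measure), with a
    # cutoff that regulates the infinite total measure.
    rng = np.random.default_rng(0)
    s = 1.0  # proper distance to the neighbor, arbitrary units

    for cutoff in [1, 5, 10, 20]:
        eta = rng.uniform(-cutoff, cutoff, size=100_000)
        x = s * np.sinh(eta)  # spatial coordinate of the neighbor
        print(f"cutoff {cutoff:2d}: mean |x| = {np.mean(np.abs(x)):.3g}")

    # The mean grows like (cosh(cutoff) - 1)/cutoff, i.e. exponentially.
    # Removing the cutoff pushes the neighbor out to spatial infinity,
    # arbitrarily close to the lightcone, as claimed above.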

Since the same happens for every other node, and there are infinitely many nodes in the lightlike past and future of the center tile, there are infinitely many links passing through the center tile, and due to homogeneity also through every other tile. Consequently the resulting network is a) highly awkward and b) locally infinitely dense.


This argument carries over to 3+1 dimensions; for details, see the paper. It also implies that there aren't any finite Poincaré-invariant triangulations, since their edges would have to form exactly the kind of network we've just seen cannot exist.

What does this mean? It means that whenever you are working with an approach to quantum gravity based on space-time networks or triangulations, you have to explain how you want to recover local Lorentz-invariance. Just saying "random distribution" doesn't make the problem go away. The universe isn't Poincaré-invariant, so introducing a preferred frame is not in and of itself unreasonable or unproblematic. The problem is to get rid of it at short distances, and to make sure it doesn't conflict with existing constraints on Lorentz-invariance violations.

I want to thank all those who commented on my earlier blogposts which prompted me to write up my thoughts.

Tuesday, April 21, 2015

Away Note

I will be travelling the next weeks, so blogging might be spotty and comment moderation slow. I'll first be in Washington DC, speaking at a conference on New Directions in the Foundations of Physics (somewhat ironically, after I just decided I've had enough of the foundations of physics). Then I'll be at PI and the University of Waterloo (giving the same talk, with more equations and less philosophy). And, yes, I've packed the camera and I'm trigger-happy ;)

Friday, April 17, 2015

A wonderful 100th anniversary gift for Einstein

This year, Einstein’s theory of General Relativity celebrates its 100th anniversary. 2015 is also the “Year of Light,” and fittingly so, because the first and most famous confirmation of General Relativity was the deflection of light by the Sun.

Since light carries energy and is thus subject to gravitational attraction, a ray of light passing by a massive body should be slightly bent towards it. This is so both in Newton’s theory of gravity and in Einstein’s, but Einstein’s deflection is larger by a factor of two. Because of this effect, the positions of stars appear to shift slightly when they stand close to the Sun, but the shift is absolutely tiny: the deflection of light from a star at the rim of the Sun is just about a thousandth of the Sun's diameter, and the deflection drops rapidly the farther the star’s position is from the rim.
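
In case you want to check the “thousandth” yourself, here is the quick back-of-the-envelope computation (my numbers, using standard solar values): Einstein's deflection at the rim is α = 4GM/(c²R), twice the Newtonian value.

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30   # solar mass, kg
    c = 2.998e8    # speed of light, m/s
    R = 6.963e8    # solar radius, m

    alpha = 4 * G * M / (c**2 * R)  # Einstein's deflection at the rim, in radians
    arcsec = alpha * 206265         # radians to arcseconds
    print(f"deflection at the rim: {arcsec:.2f} arcsec")  # about 1.75

    # The Sun's angular diameter is roughly 1920 arcsec, so the shift is
    # about 1.75/1920, indeed about a thousandth of the solar diameter.
    print(f"fraction of solar diameter: {arcsec / 1920:.4f}")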

In the year 1915, one couldn’t observe stars in such close vicinity of the Sun, because if the Sun does one thing, it’s shining really brightly, which is generally bad if you want to observe something small and comparatively dim next to it. The German astronomer Johann Georg von Soldner had calculated the deflection in Newton’s theory as early as 1801. His paper wasn’t published until 1804, and then with a very defensive final paragraph that explained:
“Uebrigens glaube ich nicht nöthig zu haben, mich zu entschuldigen, daß ich gegenwärtige Abhandlung bekannt mache; da doch das Resultat dahin geht, daß alle Perturbationen unmerklich sind. Denn es muß uns fast eben so viel daran gelegen seyn, zu wissen, was nach der Theorie vorhanden ist, aber auf die Praxis keinen merklichen Einfluß hat; als uns dasjenige interessirt, was in Rücksicht auf Praxis wirklichen Einfluß hat. Unsere Einsichten werden durch beyde gleichviel erweitert.”

[“Incidentally, I do not think it should be necessary for me to apologize for publishing this article, even though the result indicates that the deviation is unobservably small. We must pay as much attention to knowing what theoretically exists but has no noticeable influence in practice, as we are interested in what really affects practice. Our insights are equally increased by both.” - translation SH]
A century passed, and physicists now had somewhat more confidence in their technology, but they still had to patiently wait for a total eclipse of the Sun, during which they hoped to observe the predicted deflection of light.

Finally, in 1919, the British astronomer and relativity aficionado Arthur Stanley Eddington organized two expeditions to observe a solar eclipse whose zone of totality ran roughly along the equator. He himself travelled to Principe, an island in the Atlantic ocean, while a second team observed the event from Sobral in Brazil. The results of these observations were publicly announced in November 1919 at a meeting in London that made Einstein a scientific star overnight: the measured deflection of light fit the Einstein value, while it was much less compatible with the Newtonian bending.

As history has it, Eddington’s original data actually wasn’t good enough to make that claim with certainty. His measurements had huge error bars due to bad weather, and he also might have cherry-picked his data because he liked Einstein’s theory a little too much. Shame on him. Be that as it may, dozens of subsequent measurements proved his premature announcement correct. Einstein was right, Newton was wrong.

By the 1990s, one didn’t have to wait for solar eclipses any more. Data from compact radio sources, such as distant quasars, measured by very long baseline interferometry (VLBI), could now be analyzed for the effect of light deflection. In VLBI, one measures the time delay by which wavefronts from radio sources arrive at distant detectors that might be distributed all over the globe. The long baseline, together with a very exact timing of the signal’s arrival, allows one to pinpoint very precisely where the object is located – or seems to be located. In 1991, Robertson, Carter & Dillinger confirmed to high accuracy the light deflection predicted by General Relativity by analyzing VLBI data accumulated over 10 years.
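
To get a feeling for the scales involved, here is a rough sketch with numbers of my own choosing, for an idealized two-telescope setup (not the actual analysis of the paper): a wavefront from a distant source reaches two telescopes separated by a baseline B with a geometric delay τ = (B/c) sin θ, and the achievable angular precision follows from how accurately τ can be timed.

    import math

    c = 2.998e8                 # speed of light, m/s
    B = 8.0e6                   # baseline: 8000 km, roughly Earth-sized (assumed)
    theta = math.radians(30.0)  # source angle from the baseline normal (assumed)

    tau = B / c * math.sin(theta)  # geometric delay between the two telescopes
    print(f"geometric delay: {tau * 1e3:.1f} ms")

    # An assumed timing precision of 10 picoseconds translates into an
    # angular precision of roughly c * dtau / (B * cos(theta)):
    dtau = 10e-12
    dtheta = c * dtau / (B * math.cos(theta))  # radians
    print(f"angular precision: ~{dtheta * 206265 * 1e6:.0f} microarcseconds")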

But crunching data is one thing, seeing it is another, and so I wanted to share with you today a plot I came across by coincidence, in a paper from February by two researchers based in Australia.

They analyzed VLBI data from some selected radio sources over a period of 10 years. In the image below, you can see how the apparent position of the blazar 1606+106 moves around over the course of the year. Each dot is one measurement point; the “real” position can be inferred to lie at the point marked zero on the axes, in the middle of the circle.

Figure 2 from arXiv:1502.07395

How is that for an effect that was thought, two centuries ago, to be unobservable?

Publons

The "publon" is "the elementary quantum of scientific research which justifies publication" and it's also a website that might be interesting for you if you're an active researcher. Publons helps you collect records of your peer review activities. On this website, you can set up an account and then add your reviews to your profile page.

You can decide whether you want to actually add the text of your reviews, or not, and to which level you want your reviews to be public. By default, only the journal for which you reviewed and the month during which the review was completed will be shown. So you need not be paranoid that people will know all the expletives you typed in reply to that idiot last year!

You don't even have to add the text of your review at all; you just have to provide a manuscript number. Your review activity is then checked against the records of the publisher, or so is my understanding.

Since I'm always interested in new community services, I set up an account there some months ago. It goes really quickly and is totally painless. You can then enter your review activities on the website or, super conveniently, just forward the "Thank You" note from the publisher to some email address. The record then automatically appears on your profile within a day or two. I forwarded a bunch of "Thank You" emails from the last months, and now my profile page looks as follows:



The folks behind the website almost all have a background in academia and probably know it's pointless trying to make money from researchers. One expects of course that at some point they will try to monetize their site, but at least so far I have received zero spam, upgrade offers, or the dreaded newsletters that nobody wants to read.

In short, the site is doing exactly what it promises to do. I find the profile page really useful and will probably forward my other "Thank You" notes (to the extent that I can dig them up), and then put the link to that page in my CV and on my homepage.

Sunday, April 12, 2015

Photonic Booms: How images can move faster than light, and what they can tell us.

If you sweep a laser pointer across the moon, will the spot move faster than the speed of light? Every physics major encounters this question at some point, and the answer is yes, it will. If you sweep the laser pointer in an arc, the velocity of the spot increases with the distance to the surface you point at. Standing on Earth, you only have to rotate the laser through a full arc within a few seconds, and the spot will move faster than the speed of light on the moon!
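
You can check this with round numbers; here is a quick estimate of my own, treating the moon as a distant screen: a spot swept at angular speed ω moves across a surface at distance d with speed v = ωd.

    import math

    c = 2.998e8  # speed of light, m/s
    d = 3.844e8  # average Earth-moon distance, m

    omega = c / d                 # angular speed at which the spot reaches c
    period = 2 * math.pi / omega  # time for one full turn at that speed
    print(f"minimum angular speed: {omega:.2f} rad/s")
    print(f"one full turn in: {period:.1f} s")  # about 8 s: a few seconds indeed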



This simplified explanation would be all there is to say were the moon a disk, but the moon isn’t a disk, and this makes the situation more interesting. The speed of the spot also increases the closer the surface you aim at comes to being parallel to the beam’s direction. And so the spot’s speed increases without bound as it approaches the edge of the visible part of the moon.

That’s the theory. In practice of course your average laser pointer isn’t strong enough to still be visible on the moon.

This faster-than-light motion is not in conflict with special relativity, because the continuous movement of the spot is an illusion. What actually moves are the photons in the laser beam, and they always move at the speed of light. But different photons illuminate different parts of the surface in a pattern synchronized by the photons’ collective origin, which appears like a continuous movement that can happen at arbitrary speed. It isn’t possible in this way to exchange information faster than the speed of light, because information can only be sent from the source to the surface, not between the illuminated parts on the surface.

So much for the movement of the spot on the surface. Trick question: If you sweep a laser pointer across the moon, what will you see? Note the subtle difference – now you have to take into account the travel time of the signal.

Let us assume for the sake of simplicity that you and the moon are not moving relative to each other, and that you sweep from left to right. Let us also assume that the moon reflects diffusely in all directions, so you will see the spot regardless of where you are. This isn’t quite right, but it is good enough for our purposes.

Now, if you were to measure the speed of the spot on the surface of the moon, it would initially move faster than the speed of light on the left, then slow down as it approaches the place where the moon’s surface is orthogonal to the beam, then speed up again. But that’s not what you would see from Earth, because the very left and very right edges are also farther away, so the light takes longer to reach us. You would instead see a pair of spots appear close by the left edge and then separate, one of them disappearing at the left edge, the other moving across the moon to disappear at the other edge. The point where the spot pair seems to appear is the position where the velocity of the spot on the surface drops from above the speed of light to below.
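
This is easy to check numerically. Below is a toy model of my own making, nothing more than a sphere, a constant sweep rate, and the light travel time added back in: the arrival time at the observer fails to be monotonic during the sweep, and its minimum is exactly where the spot pair pops into existence.

    import numpy as np

    c = 2.998e8  # speed of light, m/s
    D = 3.844e8  # Earth-moon distance, m
    R = 1.737e6  # lunar radius, m
    omega = 0.5  # sweep rate in rad/s (assumed; slow enough that the pair
                 # creation happens close to the left edge)

    # Sweep the beam so that its impact parameter b runs across the disk.
    t = np.linspace(-R / (D * omega), R / (D * omega), 200_001)
    b = D * np.tan(omega * t)                   # impact parameter of the beam
    phi = np.arcsin(np.clip(b / R, -1.0, 1.0))  # spot's angle on the sphere
    L = np.sqrt((D - R * np.cos(phi))**2 + (R * np.sin(phi))**2)
    t_seen = t + L / c                          # arrival time at the observer

    # Where t_seen is non-monotonic, the observer sees several spots at once.
    # Its minimum marks the point where the spot pair appears:
    i = np.argmin(t_seen)
    print(f"pair creation at b/R = {b[i] / R:+.2f}")  # negative: left of center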


This pair creation of spots happens for the same reason you hear a sonic boom when a plane passes by faster than the speed of sound: the signal (the sound or the light) is slower than what is causing the signal (the plane or the laser spot on the surface of the moon). The spot pair creation is thus the signal of a “photonic boom,” a catchy phrase coined by Robert Nemiroff, Professor of astrophysics at Michigan Technological University, and one of the two people behind the Astronomy Picture Of the Day that clogs our Facebook feeds every morning.

The most surprising thing about this spot pair creation is that nobody ever thought it through until December 2014, when Nemiroff put out a paper in which he laid out the math of the photonic booms. The above considerations for a perfectly spherical surface can be put in more general terms, taking into account also relative motion between the source and the reflecting surface. The upshot is that the spot pair creation events carry information about the structure of the surface they are reflected on.

But why, you might wonder, should anyone care about spots on the moon? To begin with, if you were to measure the structure of any object, say an asteroid, by aiming laser beams at it and recording the reflections, you would have to take this effect into account. Maybe more interestingly, these spot pair creations probably occur in astrophysical situations. Nemiroff in his paper for example mentions the binary pulsar 3U 0900-40, whose x-rays may be scattering off the surface of its companion, a signal that one would misinterpret without knowing about photonic booms.

The above considerations don’t only apply to illuminated spots but also to shadows. Shadows can be cast, for example, by opaque clouds on reflecting nebulae, resulting in changes of brightness that may appear to move faster than the speed of light. There are many nebulae that show changes in brightness thought to be due to such effects, like for example Hubble's Variable Nebula (NGC 2261). Again, one cannot properly analyze these situations without taking into account the spot pair creation effect.

In the paper, Nemiroff hints at an upcoming paper “in preparation” with a colleague, so I think we will hear more about photonic booms in the near future.

In 2015, Special Relativity is 110 years old, but it still holds surprises for us.

This post first appeared on Starts with A Bang with the title "Photonic Booms".

Tuesday, April 07, 2015

No, the black hole information loss problem has not been solved. Even if PRL thinks so.

This morning I got several requests for comments on this paper, which apparently was published in PRL:
    Radiation from a collapsing object is manifestly unitary
    Anshul Saini, Dejan Stojkovic
    Phys.Rev.Lett. 114 (2015) 11, 111301
    arXiv:1503.01487 [gr-qc]
The authors claim they find “that the process of gravitational collapse and subsequent evaporation is manifestly unitary as seen by an asymptotic observer.”

What do they do to arrive at this groundbreaking result that solves the black hole information loss problem in 4 PRL-approved pages? The authors calculate the particle production due to the time-dependent background of the gravitational field of a collapsing mass-shell. Using a mass-shell is a standard approximation. It is, strictly speaking, unnecessary, but it vastly simplifies the calculation and is often used. They use the functional Schrödinger formalism (see e.g. section II of this paper for a brief summary), which is somewhat unusual, but its use shouldn’t make a difference to the outcome. They find that the time evolution of the particle production is unitary.
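
In case you haven't seen it before, the functional Schrödinger formalism is quickly stated. This is the generic setup for a free scalar field, in units with ħ = c = 1, and not the specifics of their model: the state is a wave-functional Ψ[φ, t] that obeys the usual Schrödinger equation, with the field momentum realized as a functional derivative.

    % Generic functional Schroedinger equation for a free scalar field
    % (the setup of the formalism, not the paper's specific model):
    \[
      i\,\frac{\partial}{\partial t}\,\Psi[\phi,t]
        \;=\; \int d^3x \left(
          -\frac{1}{2}\,\frac{\delta^2}{\delta\phi(x)^2}
          + \frac{1}{2}\,(\nabla\phi)^2
          + \frac{1}{2}\,m^2\phi^2
        \right) \Psi[\phi,t]
    \]
    % In a collapse background the coefficients become time-dependent,
    % but the Hamiltonian stays hermitian, so the evolution is unitary.
    % That, in essence, is all the authors' result amounts to.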

In the picture they use, they do not explicitly use Bogoliubov transformations, but I am sure one could reformulate their time evolution in terms of the normally used Bogoliubov coefficients, since both pictures have to be unitarily equivalent. There is an oddity in their calculation, which is that their field expansion doesn’t seem to contain anti-particles, or else I am misreading their notation, but this might not matter much as long as one keeps track of all branch cuts.

Due to the unusual picture that they use, one unfortunately cannot directly compare their intermediate results with the standard calculation. In the most commonly used Schrödinger picture, the operators are time-independent. In the picture used in the paper, part of the time-dependence is pushed into the operators. Therefore I don’t know how to interpret these quantities, and the paper offers no explanation of what observables they might correspond to. I haven’t actually checked the steps of the calculation, but it all looks quite plausible in both method and functional dependence.

What’s new about this? Nothing, really. The process of particle production in time-dependent background fields is unitary. The particles produced in the collapse process do form a pure state. They have to, because it’s a Hamiltonian evolution. The reason for the black hole information loss is not that the particle production isn’t unitary – Bogoliubov transformations are by construction unitary – but that the outside observer in the end doesn’t get to see the full state. He only sees the part of the particles that manage to escape. The trouble is that these particles are entangled with the particles that are behind the horizon and eventually hit the singularity.

It is this eventual destruction of half of the state at the singularity that ultimately leads to a loss of information. That’s why remnants or baby universes in a sense solve the information loss problem simply by preventing the destruction at the singularity, since the singularity is assumed to not be there. For many people this is a somewhat unsatisfactory solution because the outside observer still doesn’t have access to the information. However, since the whole state still exists in a remnant scenario, the time evolution remains unitary and no inconsistency with quantum mechanics ever arises. The new paper is not a remnant scenario; I am telling you this to explain that what causes the non-unitarity is not the particle production itself, but that the produced particles are entangled across the horizon, and part of them later become inaccessible, thereby leaving the outside observer with a mixed state (read: “information loss”).
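
For readers who want to see the missing step spelled out, here is the textbook version in formulas (standard material, not the calculation of the paper under discussion): for each mode of frequency ω, the pairwise created quanta form a two-mode squeezed state, entangled between the inside and the outside of the horizon.

    % Pure state of the pairwise created quanta; beta is the inverse
    % Hawking temperature (standard textbook result, not from the paper):
    \[
      |\psi\rangle = \sqrt{1-e^{-\beta\omega}}\,
        \sum_{n=0}^{\infty} e^{-n\beta\omega/2}\,
        |n\rangle_{\rm in}\otimes|n\rangle_{\rm out}
    \]
    % The density matrix of the full state has all the off-diagonal
    % terms. The asymptotic observer however only has access to the
    % outside modes, so one has to trace out the inside:
    \[
      \rho_{\rm out} = {\rm Tr}_{\rm in}\,|\psi\rangle\langle\psi|
        = \left(1-e^{-\beta\omega}\right)
        \sum_{n=0}^{\infty} e^{-n\beta\omega}\,
        |n\rangle_{\rm out}\langle n|_{\rm out}
    \]
    % The result is diagonal, exactly thermal, and mixed. It is this
    % partial trace that produces the information loss, and it is this
    % step that is missing in the paper.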

The authors never trace out the part behind the horizon, so it’s not surprising that they get a pure state. They just haven’t done the whole calculation. They write (p. 3): “Original Hawking radiation density matrix contains only the diagonal elements while the cross-terms are absent.” But the original matrix of the (full!) Hawking radiation contains off-diagonal terms; it’s a fully entangled state. It becomes a diagonal, mixed matrix only after throwing out the particles behind the horizon. One cannot directly compare the two matrices though, because the paper uses a different basis than one normally does.

So, in summary, they redid a textbook calculation by a different method and claimed they got a different result. That should be a warning sign. This is a 30+ year old problem; thousands of papers have been written about it. What are the odds that all these calculations have simply been wrong? Another warning sign is that they never explain just why they manage to solve the problem. They try to explain that their calculation has something in common with other calculations (about entanglement in the outgoing radiation only), but I cannot see any connection, and they don’t explain it either.

The funny thing about the paper is that I think the calculation, to the extent that they do it, is actually correct. But the authors omit the last step, which means they do not, as stated in the quote above, calculate what the asymptotic observer sees. The conclusion that this solves the black hole information loss problem is then a classic non sequitur.