
Thursday, December 31, 2015

Book review: “Beyond the Galaxy” by Ethan Siegel

Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe
By Ethan Siegel
World Scientific Publishing Co (December 9, 2015)

Ethan Siegel’s book is an introduction to modern cosmology that delivers all the facts without the equations. Like Ethan’s collection “Starts With a Bang,” it is well explained and accessible to readers without any prior knowledge of physics. But this accessibility doesn’t come without effort. This isn’t a book for the strolling pedestrian who likes being dazzled by the wonders of modern science; it’s a book for the inquirer who wants to turn over everything behind the display window of science news.

“Beyond the Galaxy” tells the history of the universe and the basics of the relevant measurement techniques. It explains the big bang theory and inflation, the formation of matter in the early universe, dark matter, dark energy, and briefly mentions the multiverse. Siegel elaborates on the cosmic microwave background and what we have learned from it, baryon acoustic oscillations, and supernova redshifts. For the most part, the book sticks closely to well-established physics and stays away from speculation, except when it comes to possible explanations for dark matter and dark energy.

Having said what the book contains, let me spell out what it doesn’t contain. This is not a book about astrophysics. You will not find elaborate discussions of all the known astrophysical objects and their physical processes. This is also not a book about particle physics. Ethan does not include dark matter direct detection experiments, and while some particle physics necessarily enters the discussion of matter formation, he sticks with the very essentials. It is also not a history book. Though Ethan does a good job giving the reader a sense of the timeline of discoveries, this is clearly not the focus of his interest.

Ethan might not be the most lyrical writer ever, but his explanations are unfailingly clear and comprehensible. The book is accompanied by numerous illustrations that are mostly helpful, though some of them contain more information than is explained in the text.

In short, Ethan’s book is the missing link between cosmology textbooks and popular science articles. It will ease your transition if you are attempting one, or, if that is not your intention, it will serve to tie together the patchy knowledge that news articles often leave us with. It is the ideal starting point if you want to get serious about digging into cosmology, or if you are just dissatisfied by the vagueness of much contemporary science writing. It is, in one word, a sciency book.

[Disclaimer: Free review copy, plus I write for Ethan once per month.]

Wednesday, December 30, 2015

How does a lightsaber work? Here is my best guess.

A lightsaber works by emitting a stream of magnetic monopoles. Magnetic monopoles are heavy particles that source magnetic fields. They are so far undiscovered, but many physicists believe they are real on theoretical grounds. For string theorist Joe Polchinski, for example, “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen.” Magnetic monopoles are so heavy, however, that they cannot be produced by any known process in the universe – a minor technological complication that I will come back to below.




Depending on the speed at which the monopoles are emitted, they will either escape or return to the saber’s hilt, which has the opposite magnetic charge. You could of course just blast your opponent with the monopoles, but that would be rather boring. The point of a lightsaber isn’t to merely kill your enemies, but to kill them with style.



So you are emitting this stream of monopoles. Since the hilt has the opposite magnetic charge, the monopoles drag magnetic field lines along behind them. Next you eject some electrically charged particles – electrons or ions – into this field with an initial angular velocity. These will circle in spirals around the magnetic field lines and, due to the circular motion, they will emit synchrotron radiation, which is why you can see the blade.

Due to the emission of light and the occasional collision with air molecules, the electrically charged particles slow down and eventually escape the magnetic field. That doesn’t sound particularly healthy, so you might want to make sure that their kinetic energy isn’t too high. To still get an emission spectrum with a significant contribution in the visible range, you then need a huge magnetic field. Which can’t really be healthy either, but at least it falls off inversely with the distance from the blade.
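To put a rough number on “huge,” here is a back-of-envelope estimate of my own (not from the post), assuming the electrons stay non-relativistic so that the emitted light peaks near the cyclotron frequency ω = eB/m:

```python
import math

# Back-of-envelope sketch: what field strength makes slow electrons radiate
# in the visible range? For non-relativistic electrons the emitted light
# peaks near the cyclotron frequency omega = e*B/m_e.
e_charge = 1.602e-19   # electron charge, C
m_e      = 9.109e-31   # electron mass, kg
c        = 2.998e8     # speed of light, m/s

wavelength = 500e-9                     # green light, m
omega = 2 * math.pi * c / wavelength    # required angular frequency, rad/s

B = m_e * omega / e_charge              # field giving that cyclotron frequency
print(f"required field: {B:.1e} T")     # roughly 2e4 tesla
```

That comes out to some twenty thousand tesla, far beyond any laboratory magnet, which indeed qualifies as “can’t really be healthy.”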

Letting the monopoles escape has the advantage that you don’t have to devise a complicated mechanism to make sure they actually return to the hilt. It has the disadvantage though that one fighter’s monopoles can be sucked up by the other’s saber if it has the opposite charge. Can the blades pass through each other? Well, if they both have the same charge, they repel. You couldn’t easily pass them through each other, but they would probably distort each other to some extent. How much depends on the strength of the magnetic field that keeps the electrons trapped.


Finally, there is the question of how to produce the magnetic monopoles to begin with. For this, you need a pocket-sized accelerator that generates collision energies at the Planck scale. The most commonly used method for this is a Kyber crystal. This also means that you need to know string theory to accurately calculate how a lightsaber operates. May the Force be with you.

[For more speculation, see also Is a Real Lightsaber Possible? by Don Lincoln.]

Tuesday, December 29, 2015

Book review: “Seven brief lessons on physics” by Carlo Rovelli

Seven Brief Lessons on Physics
By Carlo Rovelli
Allen Lane (September 24, 2015)

Carlo Rovelli’s book is a collection of essays about the fundamental laws of physics as we presently know them, and the road that lies ahead. General Relativity, quantum mechanics, particle physics, cosmology, quantum gravity, the arrow of time, and consciousness are the topics that he touches upon in this slim, pocket-sized, 79-page collection.

Rovelli is one of the founders of the research program of Loop Quantum Gravity, an approach to understanding the quantum nature of space and time. His “Seven brief lessons on physics” are short on scientific detail, but excel in capturing the fascination of the subject and its relevance for understanding our universe, our existence, and ourselves. In laying out the big questions driving physicists’ quest for a better understanding of nature, Rovelli makes clear how often-abstract contemporary research is intimately connected with the ancient desire to find our place in this world.

As a scientist, I would like to complain about numerous slight inaccuracies, but I forgive them since they are admittedly not essential to the message Rovelli is conveying, which is the value of knowledge for the sake of knowledge itself. The book is more a work of art and philosophy than of science; it’s the work of a public intellectual reaching out to the masses. I applaud Carlo for not dumbing down his writing, for not being afraid of using multi-syllable words and constructing nested sentences; it’s a pleasure to read. He seems to spend too much time on the beach playing with snail shells though.

I might have recommended the book as a Christmas present for your relatives who never quite seem to understand why anyone would spend their life pondering the arrow of time, but I was too busy pondering the arrow of time to finish the book before Christmas.

I would recommend this book to anyone who wants to understand how fundamental questions in physics tie together with the mystery of our own existence, or maybe just wants a reminder of what got them into this field decades ago.

[Disclaimer: I got the book as gift from the author.]

Sunday, December 27, 2015

Dear Dr B: Is string theory science?

This question was asked by Scientific American, hovering over an article by Davide Castelvecchi.

They should have asked Ethan Siegel. Because a few days ago he strayed from the path of awesome news about the universe to inform his readership that “String Theory is not Science.” Unlike Davide however, Ethan has not yet learned the fine art of not expressing opinions that marks the true science writer. And so Ethan dismayed Peter Woit, Lubos Motl, and me in one sweep. That’s a noteworthy achievement, Ethan!

Upon my inquiry (essentially a polite version of “wtf?”) Ethan clarified that he meant string theory has no scientific evidence speaking for it and changed the title to “Why String Theory Is Not A Scientific Theory.” (See URL for original title.)

Now, Ethan is wrong in believing that string theory doesn’t have evidence speaking for it, and I’ll come to this in a minute. But the main reason for his misleading title, even after the correction, is a self-induced problem of US science communicators. In reaction to the often-raised creationist claim that Darwinian natural selection is “just a theory,” they have bent over backwards trying to convince the public that scientists use the word “theory” to mean an explanation that has been confirmed by evidence to high accuracy. Unfortunately, that’s not how scientists actually use the word, never has been, and probably never will be.

Scientists don’t name their research programs following any such rules. Instead, which expression sticks is mostly coincidence. Brans-Dicke theory, scalar-tensor theory, terror management theory, and recapitulation theory are but a few examples of “theories” that have little or no evidence speaking in their favor. Maybe that shouldn’t be so. Maybe “theory” should be a title reserved only for explanations widely accepted in the scientific community. But looking up definitions before assigning names isn’t how language works. Peanuts also aren’t nuts (they are legumes), and neither are cashews (they are seeds). But, really, who gives a damn?

Speaking of nuts, the sensible reaction to the “just a theory” claim is not to conjure up rules according to which scientists allegedly use one word or the other, but to point out that any consistent explanation is better than a collection of 2000-year-old fairy tales that are neither internally consistent nor consistent with observation, and thus an entirely useless waste of time.

And science really is all about finding useful explanations for observations, where “useful” means that they increase our understanding of the world around us and/or allow us to shape nature to our benefit. To find these useful explanations, scientists employ the often-quoted method of proposing hypotheses and subsequently testing them. The role of theory development in this is to identify the hypotheses which are most promising and thus deserve being put to the test.

This pre-selection of hypotheses is a step often left out in descriptions of the scientific method, but it is highly relevant, and its relevance has only increased in the last decades. We cannot possibly test all randomly produced hypotheses – we have neither the time nor the resources. All fields of science therefore have tight quality controls for which hypotheses are worth paying attention to. The more costly the experimental test of new hypotheses becomes, the more relevant this pre-selection is. And it is in this step that non-empirical theory assessment enters.

Non-empirical theory assessment was the topic of the workshop that Davide Castelvecchi’s SciAm article reported on. (For more information about the workshop, see also Natalie Wolchover’s summary in Quanta, and my summary on Starts with a Bang.) Non-empirical theory assessment is the use of criteria that scientists draw upon to judge the promise of a theory before it can be put to experimental test.

This isn’t new. Theoretical physicists have always used non-empirical assessment. What is new is that in foundational physics it has remained the only assessment for decades, which hugely inflates the potential impact of even the smallest mistakes. As long as we have frequent empirical assessment, faulty non-empirical assessment cannot lead theorists far astray. But take away the empirical test, and non-empirical assessment requires utmost objectivity in judgement, or we will end up in a completely wrong place.

Richard Dawid, one of the organizers of the Munich workshop, has, in a recent book, summarized some non-empirical criteria that practitioners list in favor of string theory. It is an interesting book, but of little practical use because it doesn’t also assess other theories (so the scientist complains about the philosopher).

String theory arguably has empirical evidence speaking for it because it is compatible with the theories that we know, the standard model and general relativity. The problem, though, is that as far as the evidence is concerned, string theory so far isn’t any better than the existing theories. There isn’t a single piece of data that string theory explains which the standard model or general relativity doesn’t explain.

The reasons many theoretical physicists prefer string theory over the existing theories are purely non-empirical. They consider it a better theory because it unifies all known interactions in a common framework and is believed to solve consistency problems in the existing theories, like the black hole information loss problem and the formation of singularities in general relativity. Whether it is actually correct as a unified theory of all interactions is still unknown. And short of a uniqueness proof, no non-empirical argument will change anything about this.

What is known, however, is that string theory is intimately related to quantum field theories and gravity, both of which are well confirmed by evidence. This is why many physicists are convinced that string theory too has some use in the description of nature, even if that use may eventually not be to describe the quantum structure of space and time. And so, in the last decade string theory has come to be regarded less as a “final theory” and more as a mathematical framework to address questions that are difficult or impossible to answer with quantum field theory or general relativity. It has yet to prove its use on these accounts.

Speculation in theory development is a necessary part of the scientific method. If a theory isn’t developed to explain already existing data, there is always a lag between the hypotheses and their tests. String theory is just another such speculation, and it is thereby a normal part of science. I have never met a physicist who claimed that string theory isn’t science. This is a statement I have only come across by people who are not familiar with the field – which is why Ethan’s recent blogpost puzzled me greatly.

No, the question that separates the community is not whether string theory is science. The controversial question is how long is too long to wait for data supporting a theory? Are 30 years too long? Does it make any sense to demand payoff after a certain time?

It doesn’t make any sense to me to force theorists to abandon a research project because experimental test is slow to come by. It seems natural that in the process of knowledge discovery it becomes increasingly harder to find evidence for new theories. What one should do in this case though is not admit defeat on the experimental front and focus solely on the theory, but instead increase efforts to find new evidence that could guide the development of the theory. That, and the non-empirical criteria should be regularly scrutinized to prevent scientists from discarding hypotheses for the wrong reasons.

I am not sure who is responsible for this needlessly provocative title of the SciAm piece, just that it’s most likely not the author, because the same article previously appeared in Nature News with the somewhat more reasonable title “Feuding physicists turn to philosophy for help.” There was, however, not much feud at the workshop, because it was mainly populated by string theory proponents and multiverse opponents, who nodded to each other’s talks. The main feud, as always, will be carried out in the blogosphere...

Tl;dr: Yes, string theory is science. No, this doesn’t mean we know it’s a correct description of nature.

Thursday, December 24, 2015

Is light a wave or a particle?

2015 was the International Year of Light. In May, I came across this video by the Max Planck Society, in which some random people on the street in Munich were asked whether light is a wave or a particle. Most of them answered in German, but here is a rough translation of their replies:
    “Uh, physics. It's been a long time. I guess it’s... a particle. — Particle. — Particle. — A particle. — A particle. — Light is... a particle. — I had physics up to 12th grade. We discussed this for a whole year. But I still don’t know. — A wave. — A wave? — Is this a trick question? — It’s both! Wave-particle duality. You should know that. — The duality of light. — It acts as both. — It’s hard to quantify what it is. It’s energy. — I am fascinated that nature has paradoxes. That one finds out through physics that not everything can be computed.”
So I thought some explanation is in order:



This is the first time I’ve tried the new green screen. As you can see, it has indeed solved my eye-erasure problem. (And for the experts, I hope you’ll excuse my sloppiness in specifying the U(1) gauge group.)

On that occasion, I also want to wish you all Happy Holidays!


Like what you find on my blog? I want to kindly draw your attention to the donate button in the top right corner :o)

Saturday, December 19, 2015

Ask Dr B: Is the multiverse science? Is the multiverse real?

Kay zum Felde asked:
“Is the multiverse science? How can we test it?”
I added “Is the multiverse real” after Google offered it as autocomplete:


Dear Kay,

This is a timely question, one that has been much on my mind in the last years. Some influential theoretical physicists – like Brian Greene, Lenny Susskind, Sean Carroll, and Max Tegmark – argue that the appearance of multiverses in various contemporary theories signals that we have entered a new era of science. This idea however has been met with fierce opposition by others – like George Ellis, Joe Silk, Paul Steinhardt, and Paul Davies – who criticize the lack of testability.

If the multiverse idea is right, and we live in one of many – maybe infinitely many – different universes, then some of our fundamental questions about nature might never be answered with certainty. We might merely be able to make statements about how likely we are to inhabit a universe with some particular laws of nature. Or maybe we cannot even calculate this probability, but just have to accept that some things are as they are, with no possibility to find deeper answers.

What bugs the multiverse opponents most about this explanation – or rather lack of explanation – is that succumbing to the multiverse paradigm feels like admitting defeat in our quest for understanding nature. They seem to be afraid that merely considering the multiverse an option discourages further inquiries, inquiries that might lead to better answers.

I think the multiverse isn’t remotely as radical an idea as it has been portrayed, and that some aspects of it might turn out to be useful. But before I go on, let me first clarify what we are talking about.

What is the multiverse?

The multiverse is a collection of universes, one of which is ours. The other universes might be very different from the one we find ourselves in. There are various types of multiverses that theoretical physicists believe are logical consequences of their theories. The best known ones are:
  • The string theory landscape
    String theory doesn’t uniquely predict which particles, fields, and parameters a universe contains. If one believes that string theory is the final theory, and there is nothing more to say than that, then we have no way to explain why we observe one particular universe. To make the final theory claim consistent with the lack of predictability, one therefore has to accept that any possible universe has the same right to existence as ours. Consequently, we live in a multiverse.

  • Eternal inflation
    In some currently very popular models of the early universe, our universe is just a small patch of a larger space. As a result of a quantum fluctuation, the initially rapid expansion – known as “inflation” – slows down in the region around us and galaxies can form. But outside our universe inflation continues, and randomly occurring quantum fluctuations go on to spawn other universes – eternally. If one believes that this theory is correct and that we understand how the quantum vacuum couples to gravity, then, so the argument goes, the other universes are as real as ours.

  • Many worlds interpretation
    In the Copenhagen interpretation of quantum mechanics the act of measurement is ad hoc. It is simply postulated that measurement “collapses” the wave-function from a state with quantum properties (such as being in two places at once) to a distinct state (at only one place). This postulate agrees with all observations, but it is regarded as unappealing by many (including myself). One way to avoid this postulate is to instead posit that the wave-function never collapses. Instead it ‘branches’ into different universes, one for each possible measurement outcome – a whole multiverse of measurement outcomes.

  • The Mathematical Universe
    The Mathematical Universe is Max Tegmark’s brainchild, in which he takes the final theory claim to its extreme. Any theory that describes only our universe requires the selection of some mathematics among all possible mathematics. But if a theory is a final theory, there is no way to justify any particular selection, because any selection would require another theory to explain it. And so, the only final theory there can be is one in which all of mathematics exists somewhere in the multiverse.
This list might give the impression that the multiverse is a new finding, but that isn’t so. New is only the interpretation. Since every theory requires observational input to fix parameters or pick axioms, every theory leads to a multiverse. Without sufficient observational input, any theory becomes ambiguous – it gives rise to a multiverse.

Take Newtonian gravity: Is there a universe for each value of Newton’s constant? Or General Relativity: Do all solutions to the field equations exist? And Loop Quantum Gravity, like string theory, has free parameters and an infinite number of solutions, hence its own multiverse. It’s just that Loop Quantum Gravity never tried to be a theory of everything, so nobody worries about this.

What is new about the multiverse idea is that some physicists are no longer content with having a theory that describes observation. They now have additional requirements for a good theory: for example, that the theory have no ad hoc prescriptions like collapsing wave-functions; that it contain no small, large, or in fact any unexplained numbers; and that its initial conditions be likely according to some currently accepted probability distribution.

Is the multiverse science?

Science is what describes our observations of nature. But this is the goal, and not necessarily the case for each step along the way. And so, taking multiverses seriously, rather than treating them as the mathematical artifact that I think they are, might eventually lead to new insights. The real controversy about multiverses is how likely it is that new insights will eventually emerge from this approach.

Maybe the best example of how multiverses might become scientific is eternal inflation. It has been argued that the different universes might not be entirely disconnected, but can collide, thereby leaving observable signatures in the cosmic microwave background. Another example of testability comes from Mersini-Houghton and Holman, who have looked into potentially observable consequences of entanglement between different universes. And in a rather mind-bending recent work, Garriga, Vilenkin and Zhang have argued that the multiverse might give rise to a distribution of small black holes in our universe, which also has consequences that could become observable in the future.

As to probability distributions on the string theory landscape, I don’t see any conceptual problem with that. If someone could, based on a few assumptions, come up with a probability measure according to which the universe we observe is the most likely one, that would for me be a valid computation of the standard model parameters. The problem is of course to come up with such a measure.

Similar things could be said about all other multiverses. They don’t presently seem very useful to describe nature. But pursuing the idea might eventually give rise to observable consequences and further insights.

We have known since the dawn of quantum mechanics that it’s wrong to require all mathematical structures of a theory to directly correspond to observables – wave-functions are the best counter-example. How willing physicists are to accept non-observable ingredients of a theory depends on their trust in the theory and on their hope that it might give rise to deeper insights. But there isn’t a priori anything unscientific about a theory that contains elements that are unobservable.

So is the multiverse science? It is an extreme speculation, and opinions differ widely on how promising a route it is to deeper understanding. But speculations are a normal part of theory development, and the multiverse is scientific as long as physicists strive to eventually derive observable consequences.

Is the multiverse real?

The multiverse has some brain-bursting consequences. For example that everything that can happen does happen, and it happens an infinite amount of times. There are thus infinitely many copies of you, somewhere out there, doing their own thing, or doing exactly the same as you. What does that mean? I have no clue. But it makes for an interesting dinner conversation through the second bottle of wine.

Is it real? I think it’s a mistake to think of “being real” as a binary variable, a property that an object either has or has not. Reality has many different layers, and how real we perceive something depends on how immediate our inference of the object from sensory input is.

A dog peeing on your leg has a very simple and direct relation to your sensory input that does not require much decoding. You would almost certainly consider it real. By contrast, evidence for the quark model contained in a large array of data on a screen is a very indirect sensory input that requires a great deal of decoding. How real you consider quarks thus depends on your knowledge of, and trust in, the theory and the data. Or trust in the scientists dealing with the theory and the data, as it were. For most physicists the theory underlying the quark model has proved reliable and accurate to such high precision that they consider quarks as real as the peeing dog.

But the longer the chain of inference, and the less trust you have in the theories used for inference, the less real objects become. In this layered reality the multiverse is currently at the outer fringes. It’s as unreal as something can be without being plain fantasy. For some practitioners who greatly trust their theories, the multiverse might appear almost as real as the universe we observe. But for most of us these theories are wild speculations and consequently we have little trust in this inference.

So is the multiverse real? It is “less real” than everything else physicists have deduced from their theories – so far.

Wednesday, December 16, 2015

No, you don’t need general relativity to ride a hoverboard.

Image credit: Technologistlaboratory.
This morning, someone sent me a link to a piece that appeared on WIRED, arguing that you can’t ride a hoverboard without Einstein’s theory of General Relativity.

The hoverboards in question here are the currently fashionable two-wheeled motorized boards that are driven by shifting your weight. I haven’t tried one, but it sure looks like fun.

I would have ignored this article as your average internet nonsense, but it turns out the WIRED piece is written by someone named Rhett Allain who, according to the website, “is an Associate Professor of Physics at Southeastern Louisiana University.” Which makes me fear that some readers might actually believe what he wrote. Because he’s something with “professor” in his title, surely he must know the physics.

Now, the claim of the article is correct in the sense that if you took the laws of physics and removed general relativity then there would be no galaxy formation, no planet Earth, no people, and certainly no hoverboards. I don’t think though that Allain had such a philosophical argument in mind. Besides, on this ground you could equally well argue that you can’t throw a pebble without general relativity because there wouldn’t be any pebbles.

What Allain argues instead is that you somehow need the effects of gravity to be the same as those of acceleration, and that this sounds a little like general relativity, therefore you need general relativity.

You should find this claim immediately suspicious because if you know one thing about general relativity it’s that it’s hard to test. If you couldn’t “ride a hoverboard without Einstein’s theory of General Relativity,” then why bother with light deflection and gravitational lensing to prove that the theory is correct? Must be a giant conspiracy of scientists wasting taxpayers’ money I presume.

Image Credit: Jared Mecham
Another reason to be suspicious about the correctness of this argument is the author’s explanation that special relativity is special because “Well, before Einstein, everyone thought reference frames were relative.” I am hoping this was just a typographical error, but just to avoid any confusion: before Einstein time was absolute. It’s called special relativity because according to Einstein, time too is relative.

But to come back to the issue of gravity. What you need to ride a hoverboard is to balance the inertial force caused by the board’s acceleration with another force, for which you have pretty much only gravity available. If the board accelerates and pushes your feet forward (friction required), you’d better lean forward to shift your center of mass, because otherwise you’ll fall flat on your back. Lean forward too much and you fall on your nose, because gravity. Don’t lean enough and you’ll fall backwards, because inertia. To keep standing, you need to balance these forces.

This is basic mechanics and has nothing to do with General Relativity. That one of the forces is gravity is irrelevant to the requirement that you have to balance them so as not to fall. And even if you take into account that it’s gravity, Newtonian gravity is entirely sufficient. And it doesn’t have anything to do with hoverboards either. You can also see people standing on a train lean forward when the train accelerates, because otherwise they’d topple like dominoes. You don’t need to lean when sitting because the seat back balances the force for you.

What’s different about general relativity is that it explains that gravity is not a force but a property of space-time. That is, it deviates from Newtonian gravity. These deviations are ridiculously small corrections though, and you don’t need to take them into account for your average Joe on the hoverboard, unless possibly Joe is a neutron star.

The key ingredient to general relativity is the equivalence principle, a simplified version of which states that the gravitational mass is equal to the inertial mass. This is my best guess of what Allain was alluding to. But you don’t need the equivalence principle to balance forces. The equivalence principle just tells you exactly how the forces are balanced. In this case it would tell you the angle you have to aim at to not fall.
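For concreteness, here is a minimal sketch of that balance with my own made-up numbers (nothing here comes from Allain’s article): measuring the lean angle from the vertical, plain Newtonian force balance gives tan θ = a/g.

```python
import math

# Minimal sketch: the forward lean angle that balances the inertial force
# of the board's acceleration against gravity. Newtonian mechanics only:
# tan(theta) = a / g, with theta measured from the vertical.
g = 9.81                        # gravitational acceleration, m/s^2
for a in (0.5, 1.0, 2.0):       # assumed board accelerations, m/s^2
    theta = math.degrees(math.atan(a / g))
    print(f"a = {a:.1f} m/s^2  ->  lean about {theta:.1f} degrees forward")
```

A few degrees of lean for everyday accelerations, and not a single term from Einstein’s field equations anywhere in sight.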

In summary: The correct statement would have been “You can’t ride a hoverboard without balancing forces.” If you lean too much forward and write about General Relativity without knowing how it works, you’ll fall flat on your nose.

Friday, December 11, 2015

Quantum gravity could be observable in the oscillation frequency of heavy quantum states

Observable consequences of quantum gravity were long thought inaccessible to experiment. But we theorists might have underestimated our experimental colleagues. Technology has now advanced so much that macroscopic objects, weighing as much as a billionth of a gram, can be coaxed to behave as quantum objects. A billionth of a gram might not sound like much, but it is huge compared to the elementary particles that quantum physics is normally all about. It might indeed be enough to become sensitive to quantum gravitational effects.

One of the most general predictions of quantum gravity is that it induces a limit to the resolution of structures. This limit is at an exceedingly tiny distance, the Planck length of about 10⁻³³ cm. There is no way we can directly probe it. However, theoretically the presence of such a minimal length scale leads to a modification of quantum field theory. This is generally thought of as an effective description of quantum gravitational effects.

These models with a minimal length scale come in three types. One in which Poincaré-invariance, the symmetry of Special Relativity, is broken by the introduction of a preferred frame. One in which Poincaré-symmetry is deformed for freely propagating particles. And one in which it is deformed, but only for virtual particles.

The first two types of these models make predictions that have already been ruled out. The third one is the most plausible model because it leaves Special Relativity intact in all observables – the deformation only enters in intermediate steps. But for this reason, this type of model is also extremely hard to test. I worked on this ten years ago, but eventually got so frustrated that I abandoned the topic: Whatever observable I computed, it was dozens of orders of magnitude below measurement precision.

A recent paper by Alessio Belenchia et al now showed me that I might have given up too early. If one asks how such a modification of quantum mechanics affects the motion of heavy quantum mechanical oscillators, Planck-scale sensitivity is only a few orders of magnitude away.
    Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators
    Alessio Belenchia, Dionigi M. T. Benincasa, Stefano Liberati, Francesco Marin, Francesco Marino, Antonello Ortolan
    arXiv:1512.02083 [gr-qc]

The title of their paper refers to “non-locality” because the modification due to a minimal length leads to higher-order terms in the Lagrangian. In fact, there have to be terms up to infinite order. This is a very tame type of non-locality, because it is confined to Planck-scale distances. How strong the modification is, however, also depends on the mass of the object. So if you can get a quite massive object to display quantum behavior, then you can increase your sensitivity to effects that might be indicative of quantum gravity.
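To get a feeling for why mass helps, here is a rough comparison of my own (not taken from the paper), using only the standard value of the Planck mass:

```python
# Rough comparison: how much closer a nanogram-scale quantum oscillator gets
# to the Planck mass than an elementary particle does. The conjectured
# Planck-scale corrections grow with the mass of the object in a quantum state.
planck_mass = 2.18e-8    # kg, sqrt(hbar*c/G)
electron    = 9.11e-31   # kg
nanogram    = 1.0e-12    # kg, roughly the mass of the oscillating disk

print(f"electron / Planck mass: {electron / planck_mass:.1e}")   # ~4e-23
print(f"nanogram / Planck mass: {nanogram / planck_mass:.1e}")   # ~5e-5
# The macroscopic oscillator sits some 18 orders of magnitude closer to the
# Planck mass than an electron does.
```

Eighteen orders of magnitude is the difference between hopeless and merely very hard.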

This has been tried before. A bad example was this attempt, which implicitly used models of either the first or second type, which are ruled out by experiment already. A more recent and much more promising attempt was this proposal. However, they wanted to test a model that is not very plausible on theoretical grounds, so their test is of limited interest. As I mentioned in my blogpost however, this was a remarkable proposal because it was the first demonstration that sensitivity to Planck-scale effects can now be reached.

The new paper uses a system that is pretty much the same as that in the previous proposal. It’s a small disk of silicon, weighing a nanogram or so, that is trapped in an electromagnetic potential and cooled down to some mK. In this trap, the disk oscillates at a frequency that depends on the mass and the potential. This is a pure quantum effect – it is observable and it has been observed.

Belenchia et al calculate how this oscillation would be modified if the non-local correction terms were present, and find that the oscillation is no longer simply harmonic but becomes more complicated (see figure). They then estimate the size of the effect and come to the conclusion that, while it is challenging, existing technology is only a few orders of magnitude away from reaching Planck-scale precision.

The motion of the mean-value, x, of the oscillator's position in the potential as a function of time, t. The black curve shows the motion without quantum gravitational effects, the red curve shows the motion with quantum gravitational effects (greatly enlarged for visibility). The experiment relies on measuring the difference.
I find this a very exciting development because both the phenomenological model being tested here and the required experimental precision seem plausible to me. I have recently had some second and third thoughts about the model in question (it’s complicated) and believe that it has some serious shortcomings, but I don’t think that these matter in the limit considered here.

It is very likely that we will see more proposals for testing quantum gravity with heavy quantum-mechanical probes, because once sensitivity reaches a certain parameter range, there suddenly tend to be loads of opportunities. At this point I have become tentatively optimistic that we might indeed be able to measure quantum gravitational effects within, say, the next two decades. I am almost tempted to start working on this again...

Saturday, December 05, 2015

What Fermilab’s Holometer Experiment teaches us about Quantum Gravity.

Tl;dr: Nothing. It teaches us nothing. It just wasted time and money.

The Holometer experiment at Fermilab just published the results of their search for holographic space-time foam. They didn’t find any evidence for noise that could be indicative of quantum gravity.

The idea of the experiment was to find correlations in quantum gravitational fluctuations of space-time by using two very sensitive interferometers and comparing their measurements. Quantum gravitational fluctuations are exceedingly tiny, and in all existing models they are far too small to be picked up by interferometers. But the head of the experiment, Craig Hogan, argued that, if the holographic principle is valid, then the fluctuations should be large enough to be detectable by the experiment.

The holographic principle is the idea that everything that happens in a volume can be encoded on the volume’s surface. Many physicists believe that the principle is realized in nature. If that were so, it would indeed imply that the fluctuations have correlations. But these correlations are not of the type that the experiment could test for. They are far too subtle to be measurable in this way.

In physics, all theories have to be expressed in form of a consistent mathematical description. Mathematical consistency is an extremely strong constraint when combined with the requirement that the theory also has to agree with all observations we already have. There is very little that can be changed in the existing theories that a) leads to new effects and b) does not spoil the compatibility with existing data. It’s not an easy job.

Hogan didn’t have a theory. It’s not that I am just being grumpy – he said so himself: “It's a slight cheat because I don't have a theory,” as quoted by Michael Moyer in a 2012 Scientific American article.

From what I have extracted from Hogan’s papers on the arxiv, he tried twice to construct a theory that would capture his idea of holographic noise. The first violated Lorentz invariance and was thus already ruled out by other data. The second violated basic properties of quantum mechanics and was thus already ruled out too. In the end he seems to have given up on finding a theory. Indeed, it’s not an easy job.

Searching for a prediction based on a hunch rather than a theory makes it exceedingly unlikely that something will be found. That is because there is no proof that the effect would even be consistent with already existing data, which is difficult to achieve. But Hogan isn’t a no-one; he is head of Fermilab’s Center for Particle Astrophysics. I assume he got funding for his experiment by short-circuiting peer review. A proposal for such an experiment would never have passed peer review – it simply doesn’t live up to today’s quality standards in physics.

I wasn’t the only one perplexed about this experiment becoming reality. Hogan relates the following anecdote: “Lenny [Susskind] has an idea of how the holographic principle works, and this isn’t it. He’s pretty sure that we’re not going to see anything. We were at a conference last year, and he said that he would slit his throat if we saw this effect.” This is a quote from another Scientific American article. Oh, yes, Hogan definitely got plenty of press coverage for his idea.

Ok, so maybe I am grumpy. That’s because there are hundreds of people working on developing testable models for quantum gravitational effects, each of whom could tell you about more promising experiments than this. It’s a research area by the name of quantum gravity phenomenology. The whole point of quantum gravity phenomenology is to make sure that new experiments test promising ranges of parameter space, rather than just wasting money.

I might have kept my grumpiness to myself, but then the Fermilab Press release informed me that “Hogan is already putting forth a new model of holographic structure that would require similar instruments of the same sensitivity, but different configurations sensitive to the rotation of space. The Holometer, he said, will serve as a template for an entirely new field of experimental science.”

An entirely new field of experimental science, based on models that either don’t exist or are already ruled out and that, when put to the test, morph into new ideas requiring ever higher sensitivity. That scared me so much I thought somebody had to spell it out: I sincerely hope that Fermilab won’t pump any more money into this unless the idea goes through rigorous peer review. It isn’t just annoying. It’s a slap in the face of many hard-working physicists whose proposals for experiments are of much higher quality but who don’t get funding.

At the very least, if you have a model for what you test, you can rule out the model. With the Holometer you can’t even rule out anything, because there is no theory and no model that would be tested with it. So what we have learned is nothing. I can only hope that at least this episode draws some attention to the necessity of having a mathematically consistent model. It’s not an easy job. But it has to be done.

The only good news here is that Lenny Susskind isn’t going to slit his throat.

Thursday, December 03, 2015

Peer Review and its Discontents [slide show]

I have made a slide-show of my Monday talk at the Munin conference and managed to squeeze a one-hour lecture into 23 minutes. Don't expect too much; nothing happens in this video, it's just me mumbling over the slides (no singing either ;)). I was also recorded on Monday, but if you prefer the version with me walking around and talking for 60 minutes you'll have to wait a few days until the recording goes online.



I am very much interested in finding a practical solution to these problems. If you have proposals to make, please get in touch with me or leave a comment.

Tuesday, December 01, 2015

Hawking radiation is not produced at the black hole horizon.

Stephen Hawking’s “Brief History of Time” was one of the first popular science books I read, and I hated it. I hated it because I didn’t understand it. My frustration with this book is a big part of the reason I’m a physicist today – at least I know who to blame.

I don’t hate the book any more – admittedly Hawking did a remarkable job of sparking public interest in the fundamental questions raised by black hole physics. But every once in a while I still want to punch the damned book. Not because I didn’t understand it, but because it convinced so many other people they did understand it.

In his book, Hawking painted a neat picture for black hole evaporation that is now widely used. According to this picture, black holes evaporate because pairs of virtual particles nearby the horizon are ripped apart by tidal forces. One of the particles gets caught behind the horizon and falls in, the other escapes. The result is a steady emission of particles from the black hole horizon. It’s simple, it’s intuitive, and it’s wrong.

Hawking’s is an illustrative picture, but nothing more than that. In reality – you will not be surprised to hear – the situation is more complicated.

The pairs of particles – to the extent that it makes sense to speak of particles at all – are not sharply localized. They are instead blurred out over a distance comparable to the black hole radius. The pairs do not start out as points, but as diffuse clouds smeared all around the black hole, and they only begin to separate when the escapee has retreated from the horizon a distance comparable to the black hole’s radius. This simple image that Hawking provided for the non-specialist is not backed up by the mathematics. It contains an element of the truth, but take it too seriously and it becomes highly misleading.

That this image isn’t accurate is not a new insight – it’s been known since the late 1970s that Hawking radiation is not produced in the immediate vicinity of the horizon. Already in Birrell and Davies’ textbook it is clearly spelled out that taking the particles far away from the black hole and tracing them back to the horizon – thereby increasing (“blueshifting”) their frequency – does not deliver an accurate description of the horizon area. The two parts of the Hawking pairs blur into each other in the horizon area, and to meaningfully speak of particles one should instead use a different, local, notion of particles. Better yet, one should stick to calculating actually observable quantities like the stress-energy tensor.

That the particle pairs are not created in the immediate vicinity of the horizon was necessary to solve a conundrum that bothered physicists back then. The temperature of the black hole radiation is very small, but that is its value far away from the black hole. For this radiation to have been able to escape, it must have started out with an enormous energy close to the black hole horizon. But if such an enormous energy were located there, then an infalling observer should notice it and burn to ashes. This however violates the equivalence principle, according to which the infalling observer shouldn’t notice anything unusual upon crossing the horizon.
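To give a sense of just how small the far-distance temperature mentioned above is, here is a quick estimate of my own (not from the post), using the standard formula for the Hawking temperature measured far from the hole, T = ħc³/(8πGMk_B):

```python
import math

# The Hawking temperature of a black hole, as measured far away from it:
# T = hbar * c^3 / (8 * pi * G * M * k_B)
hbar  = 1.055e-34   # J*s
c     = 2.998e8     # m/s
G     = 6.674e-11   # m^3 kg^-1 s^-2
k_B   = 1.381e-23   # J/K
M_sun = 1.989e30    # kg, one solar mass

T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar-mass black hole: {T:.1e} K")  # ~6e-8 K
```

Some sixty billionths of a kelvin for a solar-mass black hole, which is why naively blueshifting it back to the horizon produces such an implausibly enormous energy.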

This problem is resolved by taking into account that tracing back the outgoing radiation to the horizon does not give a physically meaningful result. If one instead calculates the stress-energy in the vicinity of the horizon, one finds that it is small and remains small even upon horizon crossing. It is so small that an observer would only be able to tell the difference to flat space on distances comparable to the black hole radius (which is also the curvature scale). Everything fits nicely, and no disagreement with the equivalence principle comes about.

[I know this sounds very similar to the firewall problem that has been discussed more recently but it’s a different issue. The firewall problem comes about because if one requires the outgoing particles to carry information, then the correlation with the ingoing particles gets destroyed. This prevents a suitable cancellation in the near-horizon area. Again however one can criticize this conclusion by complaining that in the original “firewall paper” the stress-energy wasn’t calculated. I don’t think this is the origin of the problem, but other people do.]

The actual reason that black holes emit particles, the one that is backed up by mathematics, is that different observers have different notions of particles.

We are used to a particle either being there or not being there, but this is only the case so long as we move relative to each other at constant velocity. If an observer is accelerated, his definition of what a particle is changes. What looks like empty space for an observer at constant velocity suddenly seems to contain particles for an accelerated observer. This effect, named after Bill Unruh – who discovered it almost simultaneously with Hawking’s finding that black holes emit radiation – is exceedingly tiny for accelerations we experience in daily life, thus we never notice it.
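Just how exceedingly tiny? A quick estimate of my own (not from the post), using the Unruh temperature T = ħa/(2πck_B) for an everyday acceleration:

```python
import math

# The Unruh temperature seen by an observer with constant acceleration a:
# T = hbar * a / (2 * pi * c * k_B)
hbar = 1.055e-34   # J*s
c    = 2.998e8     # m/s
k_B  = 1.381e-23   # J/K

a = 9.81           # an everyday acceleration, m/s^2
T = hbar * a / (2 * math.pi * c * k_B)
print(f"Unruh temperature at a = 9.81 m/s^2: {T:.0e} K")  # ~4e-20 K
```

About 10⁻²⁰ kelvin, which is indeed far too small for anyone to notice.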

The Unruh effect is very closely related to the Hawking effect by which black holes evaporate. Matter that collapses to a black hole creates a dynamical space-time that gives rise to an acceleration between observers in the past and in the future. The result is that the space-time around the collapsing matter, which did not contain particles before the black hole was formed, contains thermal radiation in the late stages of collapse. The Hawking radiation that is emitted from the black hole is the same state as the vacuum that initially surrounded the collapsing matter – it only looks different to late-time observers.

That, really, is the origin of particle emission from black holes: what is a “particle” depends on the observer. Not quite as simple, but dramatically more accurate.

The image provided by Hawking with the virtual particle pairs close by the horizon has been so stunningly successful that now even some physicists believe it is what really happens. The knowledge that blueshifting the radiation from infinity back to the horizon gives a grossly wrong stress-energy seems to have gotten buried in the literature. Unfortunately, misunderstanding the relation between the flux of Hawking-particles in the far distance and in the vicinity of the black hole leads one to erroneously conclude that the flux is much larger than it is. Getting this relation wrong is for example the reason why Mersini-Houghton came to falsely conclude that black holes don’t exist.

It seems about time someone reminds the community of this. And here comes Steve Giddings.

Steve Giddings is the nonlocal hero of George Musser’s new book “Spooky Action at a Distance.” For the past two decades or so he’s been on a mission to convince his colleagues that nonlocality is necessary to resolve the black hole information loss problem. I spent a year in Santa Barbara a few doors down the corridor from Steve, but I liked his papers better when he was still on the idea that black hole remnants keep the information. Be that as it may, Steve knows black holes inside and out, and he has a new note on the arxiv that discusses the question of where Hawking radiation originates.

In his paper, Steve collects the existing arguments for why we know the pairs of Hawking radiation are not created in the vicinity of the horizon, and he adds some new arguments. He estimates the effective area from which Hawking radiation is emitted and finds it to be a sphere with a radius considerably larger than that of the black hole. He also estimates the width of the wave-packets of Hawking radiation and shows that it is much larger than the separation of the wave-packet’s center from the horizon. This nicely fits with some earlier work of his that demonstrated that the partner particles do not separate from each other until after they have left the vicinity of the black hole.

All this supports the conclusion that Hawking particles are not created in the near vicinity of the horizon, but instead come from a region surrounding the black hole that extends out to a few times the black hole’s radius.

Steve’s paper has an amusing acknowledgement in which he thanks Don Marolf for confirming that some of their colleagues indeed believe that Hawking radiation is created close by the horizon. I can understand this. When I first noticed this misunderstanding I also couldn’t quite believe it. I kept pointing towards Birrell-Davies but nobody was listening. In the end I almost thought I was the one who got it wrong. So, I for sure am very glad about Steve’s paper because now, rather than citing a 40 year old textbook, I can just cite his paper.

If Hawking’s book taught me one thing, it’s that sticky visual metaphors can be a curse as much as they can be a blessing.

Friday, November 27, 2015

Away note

I’ll be traveling the next two weeks. First I’ll be going to a conference on “scholarly publishing” in the picturesque city of Tromsø. The “o” with the slash is Norwegian, and the trip is going to beat my personal farthest-north record, currently held by Reykjavik (or some village with an unpronounceable name a little north of that).

I don’t have the faintest clue why they invited me, to give a keynote lecture of all things, in the company of some Nobelprize winner. But I figured I’d go and tell them what’s going wrong with peer review; at least that will be entertaining. Thanks to a stomach bug that my husband brought back from India, by means of which I lost an estimated 800 pounds in 3 days, “tell them what's going wrong with peer review” is so far pretty much the whole plan for the lecture.

The week after I’ll be going to a workshop in Munich on the question “Why trust a theory?”. This event is organized by the Munich Center for Mathematical Philosophy, where I already attended an interesting workshop two years ago. This time the workshop is dedicated to the topics raised in Richard Dawid’s book “String Theory and the Scientific Method" which I reviewed here. The topic has since been a lot on my mind and I’m looking forward to the workshop.

Monday, November 23, 2015

Dear Dr B: Can you think of a single advancement in theoretical physics, other than speculation, since the early 1980's?

This question was asked by Steve Coyler, who was a frequent commenter on this blog before facebook ate him up. His full question is:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politzer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?”
Dear Steve:

What counts as “advancement” is somewhat subjective – one could argue that every published paper is an advancement of sorts. But I guess you are asking for breakthroughs that have generated new research areas. I also interpreted your question to have an emphasis on “theoretical,” so I will leave aside mostly experimental advances, like electron lasers, attosecond spectroscopy, quantum dots, and so on.

Admittedly your question pains me considerably. Not only does it demonstrate you have swallowed the stories about a crisis in physics that the media warm up and serve every couple of months. It also shows that I haven’t gotten across the message I tried to convey in this earlier post: the topics which dominate the media aren’t the topics that dominate actual research.

The impression you get about physics from reading science news outlets is extremely distorted. The vast majority of physicists have nothing to do with quantum gravity, twistors, or the multiverse. Instead they work in fields that are barely if ever mentioned in the news, like atomic and nuclear physics, quantum optics, material physics, plasma physics, photonics, or chemical physics. In all these areas theory and experiment are very closely tied together, and the path to patents and applications is short.

Unfortunately, advances in theoretical physics get pretty much no media coverage whatsoever. They only make it into the news if they were experimentally confirmed – and then everybody cheers the experimentalists, not the theorists. The exceptions are the higher speculations that you mention, which are deemed news-worthy because they supposedly show that “everything we thought about something is wrong.” These headlines are themselves almost always wrong.

Having said that, your question is difficult for me to answer. I’m not a walking and talking encyclopedia of contemporary physics, and in the early 1980s I was in Kindergarten. The origin of many research areas that are hot today isn’t well documented because their history hasn’t yet been written. This is to warn you that I might be off a little with the timing on the items below.

I list for you the first topics that come to my mind, and I invite readers to submit additions in the comments:

  • Topological insulators. That’s one of the currently hottest topics in physics, and many people expect a Nobelprize to go into this area in the near future. A topological insulator is a material that conducts only on its surface. They were first predicted theoretically in the mid 80s.

  • Quantum error correction, quantum logical gates, quantum computing. The idea of quantum computing came up in the 1980s, and most of the understanding of quantum computation and quantum information is only two decades old. [Corrected date: See comment by Matt.]

  • Quantum cryptography. While the first discussion of quantum cryptography predates the 1980s, the field really only took off in the last two decades. It is also one of the hottest topics today because the first applications are now coming up. [Corrected date: See comment by Matt.]

  • Quantum phase transitions, quantum critical points. I haven’t been able to find out exactly when this was first discussed, but it’s an area that has flourished in the last 20 years or so. This is work mainly led by theory, not experiment.

  • Metamaterials. While materials with a negative refraction index were first discussed in the mid-60s, this wasn’t paid much attention to until the late 1990s, when further theoretical work demonstrated that materials with negative permittivity and permeability should exist. The first experimental confirmation came in 2000, and since then the field has exploded. This is another area which will probably see a Nobel Prize in the near future. You may have read about this in the news under the headline “invisibility cloak.”

  • Dirac (Weyl) materials. These are materials in which excitations behave like Dirac (Weyl) fermions. Graphene is an example. Again I don’t know exactly when this was first predicted, but I think it was after 1980.

  • Fractional Quantum Hall Effect. The theoretical explanation was provided by Laughlin in 1983, and he was awarded the Nobel Prize in 1998, together with two experimentalists. [Added, see comment by Flavio.]

  • Inflation. Inflation is the rapid expansion in the early universe, a theoretical prediction that served to solve a lot of problems. It was developed in the early 1980s.

  • Effective field theory/Renormalization group running. While the origins of this framework go back to Wilson in 1975, the field only took off in the mid-90s. This topic too is about to become hot because the breakdown of effective field theory is one of the possible explanations for the unnatural parameters of the Standard Model indicated by recent LHC data.

  • Quantum Integrable Systems. This is a largely theoretical field that is still waiting to see its experimental prime-time. One might argue that the first papers on the topic were already written by Bethe in the 1930s, but most of the work has been done in the last 20 years or so.

  • Conformal field theory. Like the previous topic, this area is still heavily dominated by theory and is waiting for its time to come. It started taking off in the mid-1990s. It was the topic of one of the first-ever arXiv papers.

  • Geometrical frustration, spin glasses. Geometrically frustrated materials have a large entropy even at zero temperature. You may have read about these in the context of monopoles in spin ice. Much of the theoretical work on this started only in the mid-1980s and it’s still a very active research area.

  • Cosmological Perturbation Theory. This is the mathematical framework necessary to describe the formation of structures in the universe. It was developed starting in the 1980s.

  • Gauge-gravity duality (AdS/CFT). This is a relation between different types of field theories which was discovered in the late 1990s. Its applications are still being explored, but it’s one of the most promising research directions in quantum field theory at the moment.
If you want to get a visual impression of what is going on in physics, you can browse arXiv papers using Paperscape.org. There you see all arXiv papers as dots. The larger the dot, the more citations. The images in this blogpost are screenshots from Paperscape.

You can follow this blog on facebook here.

Tuesday, November 17, 2015

The scientific method is not a myth

Heliocentrism, natural selection, plate tectonics – much of what is now accepted fact was once controversial. Paradigm-shifting ideas were, at their time, often considered provocative. Consequently, the way to truth must be to piss off as many people as possible by making totally idiotic statements. Like declaring that the scientific method is a myth, which was most recently proclaimed by Daniel Thurs on Discover Blogs.

Even worse, his article turns out to be a book excerpt. This hits me hard after just having discovered that someone by the name of Matt Ridley also published a book full of misconceptions about how science supposedly works. Both fellows seem to have the same misunderstanding: the belief that science is a self-organized system and therefore operates without method – in Thurs’ case – and without governmental funding – in Ridley’s case. That science is self-organized is correct. But to conclude from this that progress comes from nothing is wrong.

I blame Adam Smith for all this mistaken faith in self-organization. Smith used the “invisible hand” as a metaphor for the regulation of prices in a free market economy. If the actors in the market have full information and act perfectly rationally, then all goods should eventually be priced at their actual value, maximizing the benefit for everyone involved. And ever since Smith, self-organization has been successfully used out of context.

In a free market, the value of the good is whatever price this ideal market would lead to. This might seem circular but it isn’t: It’s a well-defined notion, at least in principle. The main argument of neo-conservatism is that any kind of additional regulation, like taxes, fees, or socialization of services, will only lead to inefficiencies.

There are many things wrong with this ideal of a self-regulating free market. To begin with, real actors are neither perfectly rational nor do they ever have full information. And then the optimal prices aren’t unique; instead there are infinitely many optimal pricing schemes, so one needs an additional selection mechanism. But oversimplified as it is, this model, now known as equilibrium economics, explains why free markets work well, or at least better than planned economies.

No, the main problem with trust in self-optimization isn’t the many shortcomings of equilibrium economics. The main problem is the failure to see that the system itself must be arranged suitably so that it can optimize something, preferably something you want to be optimized.

A free market needs, besides fiat money, rules that must be obeyed by actors. They must fulfil contracts, aren’t allowed to have secret information, and can’t form monopolies – any such behavior would prevent the market from fulfilling its function. To some extent violations of these rules can be tolerated, and the system itself would punish the dissidents. But if too many actors break the rules, self-optimization would fail and chaos would result.

Then of course you may want to question whether the free market actually optimizes what you desire. In a free market, future discounting and personal risk tend to be higher than many people prefer, which is why all democracies have put in place additional regulations that shift the optimum away from maximal profit to something we perceive as more important to our well-being. But that’s a different story that shall be told another time.

The scientific system in many regards works similarly to a free market. Unfortunately the market of ideas isn’t as free as it should be to really work efficiently, but by and large it works well. As with market economies though, it only works if the system is set up suitably. And then it optimizes only what it’s designed to optimize, so you had better configure it carefully.

The development of good scientific theories and the pricing of goods are examples of adaptive systems, and so is natural selection. Such adaptive systems generally work in a circle of four steps:
  1. Modification: A set of elements that can be modified.
  2. Evaluation: A mechanism to evaluate each element according to a measure. It’s this measure that is being optimized.
  3. Feedback: A way to feed the outcome of the evaluation back into the system.
  4. Reaction: A reaction to the feedback that optimizes elements according to the measure by another modification.
With these mechanisms in place, the system will be able to self-optimize according to whatever measure you have given it, by iterating the cycle of steps one through four.
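To make the cycle concrete, here is a minimal toy sketch in Python. The function names and the trivial “usefulness” measure are my own inventions, purely for illustration; any real adaptive system is of course far messier.

    import random

    def adapt(elements, evaluate, modify, steps=1000):
        # Repeatedly run the four-step cycle described above.
        for _ in range(steps):
            candidate = modify(random.choice(elements))   # 1. modification
            scores = [evaluate(e) for e in elements]      # 2. evaluation
            worst = scores.index(min(scores))             # 3. feedback
            if evaluate(candidate) > scores[worst]:       # 4. reaction
                elements[worst] = candidate
        return elements

    # Toy example: "hypotheses" are numbers, "usefulness" is closeness to a target.
    target = 42.0
    hypotheses = [random.uniform(0, 100) for _ in range(10)]
    best = adapt(hypotheses,
                 evaluate=lambda h: -abs(h - target),
                 modify=lambda h: h + random.gauss(0, 1))

Note that which measure gets optimized is entirely determined by the evaluate function you plug in – which is exactly the point being made here.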

In the economy the set of elements are priced goods. The evaluation is whether the goods sell. The feedback is the vendor being able to tell how many goods sell. The reaction is to either change the prices or improve the goods. What is being optimized is the satisfaction (“utility”) of vendors and consumers.

In natural selection the set of elements are genes. The evaluation is whether the organism thrives. The feedback is the dependence of the amount of offspring on the organisms’ well-being. The reaction is survival or extinction. What is being optimized are survival chances (“fitness”).

In science the set of elements are hypotheses. The evaluation is whether they are useful. The feedback is the test of hypotheses. The reaction is that scientists modify or discard hypotheses that don’t work. What is being optimized in the scientific system depends on how you define “useful.” It used to mean predictive power, yet if you look at high energy physics today you might be tempted to think it’s instead mathematical elegance. But that’s a different story that shall be told another time.

That some systems optimize a set of elements according to certain criteria is not self-evident and doesn’t come from nothing. There are many ways systems can fail at this, for example because feedback is missing or a reaction isn’t targeted enough. A good example of missing feedback is the administration of higher education institutions. They operate incredibly inefficiently, to the extent that the only way one can work with them is by circumvention. The reason is that, in my own experience, it’s next to impossible to fix obviously nonsensical policies or to boot incompetent administrative personnel.

Natural selection, to take another example, wouldn’t work if genetic mutations scrambled the genetic code too much, because whole generations would be entirely unviable and no feedback would be possible. Or take the free market. If we all agreed that, starting tomorrow, we no longer believe in the value of our currency, the whole system would come down.

Back to science.

Self-optimization by feedback in science, now known as the scientific method, was far from obvious to people in the Middle Ages. It seems difficult to fathom today how they could not have known. But to see how this could be, you only have to look at fields that still don’t have a scientific method, like much of the social and political sciences. They’re not testing hypotheses so much as trying to come up with narratives or interpretations, because most of their models don’t make testable predictions. For a long time, this is exactly what the natural sciences also were about: They were trying to find narratives, they were trying to make sense. Quantification, prediction, and application came much later, and only then could the feedback cycle be closed.

We are so used to rapid technological progress now that we forget it didn’t use to be this way. For someone living 2000 years ago, the world must have appeared comparatively static and unchanging. The idea that developing theories about nature allows us to shape our environment to better suit human needs is only a few hundred years old. And now that we are able to collect and handle sufficient amounts of data to study social systems, the feedback on hypotheses in this area will probably also become more immediate. This is another opportunity to shape our environment better to our needs, by recognizing just which setup makes a system optimize which measure. That includes our political systems as well as our scientific systems.

The four steps that an adaptive system needs to cycle through don’t come from nothing. In science, the most relevant restriction is that we can’t just randomly generate hypotheses, because we wouldn’t be able to test and evaluate them all. This is why science heavily relies on education standards and peer review, and requires new hypotheses to fit tightly into existing knowledge. We also need guidelines for good scientific conduct, reproducibility, and a mechanism to give credit to scientists with successful ideas. Take away any of that and the system wouldn’t work.

The often-depicted cycle of the scientific method, consisting of hypothesis generation and subsequent testing, is incomplete and lacks details, but it’s correct in its core. The scientific method is not a myth.


Really I think today anybody can write a book about whatever idiotic idea comes to their mind. I suppose the time has come for me to join the club.

Monday, November 16, 2015

I am hiring: Postdoc in AdS/CFT applications to condensed matter

I am hiring a postdoc for a 3-year position based at Nordita in Stockholm. The research is project-bound, funded by a grant from the Swedish Research Council. I am looking for someone with a background in AdS/CFT applications to condensed matter and/or analogue gravity. If you want to know what the project is about, have a look at these recent papers. It’s a good contract with full benefits. Please submit your application documents (CV, research interests, at least two letters) here. Further questions should be addressed to hossi[at]nordita.org

Thursday, November 12, 2015

Mysteriously quiet space baffles researchers

The Parkes Telescope. [Image Source]

Astrophysicists have completed the most precise search yet for the gravitational wave background created by supermassive black hole mergers. But the expected signal isn’t there.


Last month, Lawrence Krauss spread the rumor that the newly upgraded gravitational wave detector LIGO had seen its first signal. The news spread quickly – and was shot down almost as quickly. The new detector still had to be calibrated, a member of the collaboration explained, and a week later it emerged that the signal was probably a test run.


While this rumor caught everybody’s attention, a surprise find from another gravitational wave experiment almost drowned in the noise. The Parkes Pulsar Timing Array Project just published results from analyzing 11 years’ worth of data in which they expected to find evidence for gravitational waves created by mergers of supermassive black holes. The sensitivity of their experiment is well within the regime where the signal was predicted to be present. But the researchers didn’t find anything. Spacetime, it seems, is eerily quiet.

The Pulsar Timing Array project uses the 64 m Parkes radio telescope in Australia to monitor regularly flashing light sources in our galaxy. Known as pulsars, such objects are thought to be created in some binary systems, where two stars orbit around a common center. When a neutron star succeeds in accreting mass from the companion star, an accretion disk forms and starts to emit large amounts of particles. Due to the rapid rotation of the system, this emission is beamed into one particular direction. Since we can only observe the signal when it is aimed at our telescopes, the source seems to turn on and off at regular intervals: a pulsar has been created.

The astrophysicists on the lookout for gravitational waves use the fastest-spinning pulsars as enormously precise galactic clocks. These millisecond pulsars rotate so reliably that even minuscule disturbances of spacetime measurably distort their pulses. Much like buoys move with waves on the water, pulsars move with gravitational waves when space and time are stretched. In this way, the precise arrival times of the pulsars’ signals on Earth get distorted. The millisecond pulsars in our galaxy are thus nothing but a huge gravitational wave detector that nature has given us for free.

Take the pulsar with the catchy name PSR J1909-3744. It flashes us every 2.95 milliseconds, a hundred times in the blink of an eye. And, as the new experiment reveals, it does so to a precision within a few microseconds, year after year after year. This tells the researchers that the noise they expected from supermassive black hole mergers is not there.
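Just to put rough numbers to this (the blink duration and year length below are my own ballpark figures):

    period = 2.95e-3                 # pulse period of PSR J1909-3744, in seconds
    blink = 0.3                      # rough duration of an eye blink, in seconds (assumption)
    print(blink / period)            # about 100 pulses per blink

    seconds_per_year = 3.15e7
    print(1e-6 / (11 * seconds_per_year))   # a microsecond over 11 years: fractional stability ~3e-15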

The reason for this missing signal is a great puzzle right now. Most known galaxies, including our own, seem to host huge black holes with masses of more than a million times that of our Sun. And in the vastness of space and on cosmological timescales, galaxies bump into each other every now and then. If that happens, they most often merge into a larger galaxy and, after some period of turmoil, the new galaxy will have a supermassive binary black hole at its center. These binary systems emit gravitational waves, which should be found throughout the entire universe.

The prevalence of gravitational waves from supermassive binary black holes can be estimated from the probability for a galaxy to host a black hole and the frequency with which galaxies merge. The emission of gravitational waves in these systems is a consequence of Einstein’s theory of General Relativity. Combine the existing observations with the calculation for the emission, and you get an estimate for the background noise from gravitational waves. The pulsar timing should be sensitive to this noise. But the new measurement is inconsistent with all existing models for the gravitational wave background in this frequency range.
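For readers who want the usual shorthand for such an estimate: the stochastic background from a population of inspiralling supermassive binaries is conventionally parametrized as a power law in frequency (this is the standard textbook form, not a formula quoted from the paper under discussion),

    h_c(f) = A \left( \frac{f}{1\,\mathrm{yr}^{-1}} \right)^{-2/3},

where the characteristic strain amplitude A encodes the merger rate and the black hole masses. What pulsar timing arrays measure – or in this case fail to measure – constrains A.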

Gravitational waves are one of the key predictions of General Relativity, Einstein’s masterwork which celebrates its 100th anniversary this year. They have never been detected directly, but the energy loss that gravitational waves must cause has been observationally confirmed in stellar binary systems. A binary system acts much like a gravitational antenna: it constantly emits radiation, except that instead of electromagnetic waves it is gravitational waves that the system sends into space. As a consequence of the constant loss of energy, the stars move closer together and the rotation frequency of the binary system increases. In 1993 the Physics Nobel Prize went to Hulse and Taylor for pioneering this remarkable confirmation of General Relativity.
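For completeness, the rate of this energy loss follows from Einstein’s quadrupole formula. For two masses m_1 and m_2 on a circular orbit of separation r, the standard textbook result for the radiated power is

    P = \frac{32}{5} \, \frac{G^4}{c^5} \, \frac{(m_1 m_2)^2 (m_1 + m_2)}{r^5},

which is tiny for ordinary systems but steadily shrinks the orbit of a compact binary, speeding up its rotation exactly as observed in the Hulse-Taylor system.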

Ever since, researchers have tried to find other ways to measure the elusive gravitational waves. The amount of gravitational waves they expect depends on their wavelength – roughly speaking, the longer the wavelength, the more of them should be around. The LIGO experiment is sensitive to wavelengths of the order of some thousand km. The network of pulsars however is sensitive to wavelengths of several lightyears, corresponding to 10^16 meters or even more. At these wavelengths astrophysicists expected a much larger background signal. But this is now excluded by the recent measurement.
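To translate these wavelengths into the frequencies usually quoted, one simply uses λ = c/f; the specific numbers below are just order-of-magnitude illustrations:

    c = 3.0e8                            # speed of light in m/s
    for wavelength in (3.0e6, 1.0e16):   # ~3000 km (LIGO band) vs ~a lightyear (pulsar timing)
        print(wavelength, c / wavelength)
    # 3.0e6  m -> ~100 Hz   (LIGO band)
    # 1.0e16 m -> ~3e-8 Hz  (the nanohertz band of pulsar timing arrays)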

Estimated gravitational wave spectrum. [Image Source]

Why the discrepancy with the models? In their paper the researchers offer various possible explanations. To begin with, the estimates for the number of galaxy mergers or supermassive binary black holes could be wrong. Or the supermassive black holes might not be able to form close-enough binary systems in the mergers. Or it could be that the black holes sit in an environment full of interstellar gas, which would reduce the time during which they emit gravitational waves. There are many astrophysical scenarios that might explain the observation. An absolute last resort is to reconsider what General Relativity tells us about gravitational-wave emission.

 You have just witnessed the birth of a new mystery in physics.


[This post previously appeared at Starts with a Bang.]