Monday, November 23, 2015

Dear Dr B: Can you think of a single advancement in theoretical physics, other than speculation, since the early 1980's?

This question was asked by Steve Coyler, who was a frequent commenter on this blog before facebook ate him up. His full question is:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politizer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?”
Dear Steve:

What counts as “advancement” is somewhat subjective – one could argue that every published paper is an advancement of sorts. But I guess you are asking for breakthroughs that have generated new research areas. I also interpreted your question to have an emphasis on “theoretical,” so I will leave aside mostly experimental advances, like electron lasers, attosecond spectroscopy, quantum dots, and so on.

Admittedly your question pains me considerably. Not only does it demonstrate you have swallowed the stories about a crisis in physics that the media warm up and serve every couple of months. It also shows that I haven’t gotten across the message I tried to convey in this earlier post: the topics which dominate the media aren’t the topics that dominate actual research.

The impression you get about physics from reading science news outlets is extremely distorted. The vast majority of physicists have nothing to do with quantum gravity, twistors, or the multiverse. Instead they work in fields that are barely if ever mentioned in the news, like atomic and nuclear physics, quantum optics, material physics, plasma physics, photonics, or chemical physics. In all these areas theory and experiment are very closely tied together, and the path to patents and applications is short.

Unfortunately, advances in theoretical physics get pretty much no media coverage whatsoever. They only make it into the news if they were experimentally confirmed – and then everybody cheers the experimentalists, not the theorists. The exceptions are the higher speculations that you mention, which are deemed news-worthy because they supposedly show that “everything we thought about something is wrong.” These headlines are themselves almost always wrong.

Having said that, your question is difficult for me to answer. I’m not a walking and talking encyclopedia of contemporary physics, and in the early 1980s I was in Kindergarten. The origin of many research areas that are hot today isn’t well documented because their history hasn’t yet been written. This is to warn you that I might be off a little with the timing on the items below.

I list for you the first topics that come to my mind, and I invite readers to submit additions in the comments:

  • Topological insulators. That’s one of the currently hottest topics in physics, and many people expect a Nobel Prize to go to this area in the near future. A topological insulator is a material that conducts only on its surface. They were first predicted theoretically in the mid-1980s.

  • Quantum error correction, quantum logic gates, quantum computing. The idea of quantum computing came up in the 1980s, and most of the understanding of quantum computation and quantum information is only two decades old. [Corrected date: See comment by Matt.]

  • Quantum cryptography. While the first discussion of quantum cryptography predates the 1980s, the field really only took off in the last two decades. It is also one of the hottest topics today because the first applications are now appearing. [Corrected date: See comment by Matt.]

  • Quantum phase transitions, quantum critical points. I haven’t been able to find out exactly when this was first discussed, but it’s an area that has flourished in the last 20 years or so. This is work mainly led by theory, not experiment.

  • Metamaterials. While materials with a negative refraction index were first discussed in the mid-1960s, this wasn’t paid much attention to until the late 1990s, when further theoretical work demonstrated that materials with negative permittivity and permeability should exist. The first experimental confirmation came in 2000, and since then the field has exploded. This is another area which will probably see a Nobel Prize in the near future. You have read about this in the news under the headline “invisibility cloak.”

  • Dirac (Weyl) materials. These are materials in which excitations behave like Dirac (Weyl) fermions. Graphene is an example. Again I don’t really know when this was first predicted, but I think it was after 1980.

  • Fractional Quantum Hall Effect. The theoretical explanation was provided by Laughlin in 1983, and he was awarded a Nobel Prize in 1998, together with two experimentalists. [Added, see comment by Flavio.]

  • Inflation. Inflation is the rapid expansion in the early universe, a theoretical prediction that served to solve a lot of problems. It was developed in the early 1980s.

  • Effective field theory/Renormalization group running. While the origins of this framework go back to Wilson in 1975, the field only really took off in the mid-1990s. This topic too is about to become hot, because the breakdown of effective field theory is one of the possible explanations for the unnatural parameters of the Standard Model indicated by recent LHC data.

  • Quantum Integrable Systems. This is a largely theoretical field that is still waiting to see its experimental prime-time. One might argue that the first papers on the topic were already written by Bethe in the 1930s, but most of the work has been done in the last 20 years or so.

  • Conformal field theory. Like the previous topic, this area is still heavily dominated by theory and is waiting for its time to come. It started taking off in the mid-1990s. It was the topic of one of the first-ever arXiv papers.

  • Geometrical frustration, spin glasses. Geometrically frustrated materials have a large entropy even at zero temperature. You have read about these in the context of monopoles in spin-ice. Much of the theoretical work on this started only in the mid 1980s and it’s still a very active research area.

  • Cosmological Perturbation Theory. This is the mathematical framework necessary to describe the formation of structures in the universe. It was developed starting in the 1980s.

  • Gauge-gravity duality (AdS/CFT). This is a relation between different types of field theories which was discovered in the late 1990s. Its applications are still being explored, but it’s one of the most promising research directions in quantum field theory at the moment.
If you want to get a visual impression of what is going on in physics, you can browse arxiv papers using Paperscape. There you see all arxiv papers as dots; the larger the dot, the more citations. The images in this blogpost are screenshots from Paperscape.

You can follow this blog on facebook here.

Tuesday, November 17, 2015

The scientific method is not a myth

Heliocentrism, natural selection, plate tectonics – much of what is now accepted fact was once controversial. Paradigm-shifting ideas were, at their time, often considered provocative. Consequently the way to truth must be pissing off as many people as possible by making totally idiotic statements. Like declaring that the scientific method is a myth, which was most recently proclaimed by Daniel Thurs on Discover Blogs.

Even worse, his article turns out to be a book excerpt. This hits me hard after just having discovered that someone by the name of Matt Ridley also published a book full of misconceptions about how science supposedly works. Both fellows seem to have the same misunderstanding: the belief that science is a self-organized system and therefore operates without method – in Thurs’ case – and without governmental funding – in Ridley’s case. That science is self-organized is correct. But to conclude from this that progress comes from nothing is wrong.

I blame Adam Smith for all this mistaken faith in self-organization. Smith used the “invisible hand” as a metaphor for the regulation of prices in a free market economy. If the actors in the market have full information and act perfectly rational, then all goods should eventually be priced at their actual value, maximizing the benefit for everyone involved. And ever since Smith, self-organization has been successfully used out of context.

In a free market, the value of the good is whatever price this ideal market would lead to. This might seem circular but it isn’t: It’s a well-defined notion, at least in principle. The main argument of neo-conservatism is that any kind of additional regulation, like taxes, fees, or socialization of services, will only lead to inefficiencies.

There are many things wrong with this ideal of a self-regulating free market. To begin with, real actors are neither perfectly rational nor do they ever have full information. And then the optimal prices aren’t unique; instead there are infinitely many optimal pricing schemes, so one needs an additional selection mechanism. But oversimplified as it is, this model, now known as equilibrium economics, explains why free markets work well, or at least better than planned economies.

No, the main problem with trust in self-optimization isn’t the many shortcomings of equilibrium economics. The main problem is the failure to see that the system itself must be arranged suitably so that it can optimize something, preferably something you want to be optimized.

A free market needs, besides fiat money, rules that must be obeyed by actors. They must fulfil contracts, aren’t allowed to have secret information, and can’t form monopolies – any such behavior would prevent the market from fulfilling its function. To some extent violations of these rules can be tolerated, and the system itself would punish the dissidents. But if too many actors break the rules, self-optimization would fail and chaos would result.

Then of course you may want to question whether the free market actually optimizes what you desire. In a free market, future discounting and personal risk tends to be higher than many people prefer, which is why all democracies have put in place additional regulations that shift the optimum away from maximal profit to something we perceive as more important to our well-being. But that’s a different story that shall be told another time.

The scientific system in many regards works similarly to a free market. Unfortunately the market of ideas isn’t as free as it should be to really work efficiently, but by and large it works well. As with market economies though, it only works if the system is set up suitably. And then it optimizes only what it’s designed to optimize, so you had better configure it carefully.

The development of good scientific theories and the pricing of goods are examples for adaptive systems, and so is natural selection. Such adaptive systems generally work in a circle of four steps:
  1. Modification: A set of elements that can be modified.
  2. Evaluation: A mechanism to evaluate each element according to a measure. It’s this measure that is being optimized.
  3. Feedback: A way to feed the outcome of the evaluation back into the system.
  4. Reaction: A reaction to the feedback that optimizes elements according to the measure by another modification.
With these mechanisms in place, the system will be able to self-optimize according to whatever measure you have given it, by reiterating a cycle going through steps one to four.
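To make the four steps concrete, here is a minimal toy sketch in Python; the elements, the measure, and all names are invented for illustration and not taken from any particular system:

```python
import random

def adapt(elements, evaluate, modify, n_cycles=1000):
    """Generic adaptive cycle: modification, evaluation, feedback, reaction."""
    for _ in range(n_cycles):
        candidate = modify(random.choice(elements))   # 1. modification
        score = evaluate(candidate)                   # 2. evaluation
        worst = min(elements, key=evaluate)           # 3. feedback: compare with the rest
        if score > evaluate(worst):                   # 4. reaction: keep what scores better
            elements.remove(worst)
            elements.append(candidate)
    return max(elements, key=evaluate)

# Toy example: the "hypotheses" are numbers, and the measure rewards closeness to 42.
best = adapt(
    elements=[random.uniform(0, 100) for _ in range(10)],
    evaluate=lambda x: -abs(x - 42),
    modify=lambda x: x + random.gauss(0, 1),
)
print(best)  # after enough cycles this lands close to 42
```

The same skeleton fits prices, genes, and hypotheses alike; what gets optimized is entirely determined by the measure you plug in as evaluate.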

In the economy the set of elements are priced goods. The evaluation is whether the goods sell. The feedback is the vendor being able to tell how many goods sell. The reaction is to either change the prices or improve the goods. What is being optimized is the satisfaction (“utility”) of vendors and consumers.

In natural selection the set of elements are genes. The evaluation is whether the organism thrives. The feedback is the dependence of the number of offspring on the organism’s well-being. The reaction is survival or extinction. What is being optimized are survival chances (“fitness”).

In science the set of elements are hypotheses. The evaluation is whether they are useful. The feedback is the test of hypotheses. The reaction is that scientists modify or discard hypotheses that don’t work. What is being optimized in the scientific system depends on how you define “useful.” It once used to mean predictive, yet if you look at high energy physics today you might be tempted to think it’s instead mathematical elegance. But that’s a different story that shall be told another time.

That some systems optimize a set of elements according to certain criteria is not self-evident and doesn’t come from nothing. There are many ways systems can fail at this, for example because feedback is missing or a reaction isn’t targeted enough. A good example of missing feedback is the administration of higher education institutions. They operate incredibly inefficiently, to the extent that the only way one can work with them is by circumventing them. The reason is that, in my own experience, it’s next to impossible to fix obviously nonsensical policies or to boot incompetent administrative personnel.

Natural selection, to take another example, wouldn’t work if genetic mutations scrambled the genetic code too much, because whole generations would be entirely unviable and feedback would not be possible. Or take the free market: if we all agreed that from tomorrow on we no longer believe in the value of our currency, the whole system would come down.

Back to science.

Self-optimization by feedback in science, now known as the scientific method, was far from obvious to people in the Middle Ages. It seems difficult to fathom today how they could not have known. But to see how this could be, you only have to look at fields that still don’t have a scientific method, like much of the social and political sciences. They’re not testing hypotheses so much as trying to come up with narratives or interpretations, because most of their models don’t make testable predictions. For a long time, this is exactly what the natural sciences were about too: they were trying to find narratives, they were trying to make sense. Quantification, prediction, and application came much later, and only then could the feedback cycle be closed.

We are so used to rapid technological progress now that we forget it wasn’t always this way. For someone living 2000 years ago, the world must have appeared comparatively static and unchanging. The idea that developing theories about nature allows us to shape our environment to better suit human needs is only a few hundred years old. And now that we are able to collect and handle sufficient amounts of data to study social systems, the feedback on hypotheses in this area will probably also become more immediate. This is another opportunity to shape our environment better to our needs, by recognizing just which setup makes a system optimize what measure. That includes our political systems as well as our scientific systems.

The four steps that an adaptive system needs to cycle through don’t come from nothing. In science, the most relevant restriction is that we can’t just randomly generate hypotheses because we wouldn’t be able to test and evaluate them all. This is why science heavily relies on education standards, peer review, and requires new hypotheses to tightly fit into existing knowledge. We also need guidelines for good scientific conduct, reproducibility, and a mechanism to give credits to scientists with successful ideas. Take away any of that and the system wouldn’t work.

The often-depicted cycle of the scientific method, consisting of hypotheses-generation and subsequent testing, is incomplete and lacks details, but it’s correct in its core. The scientific method is not a myth.

Really I think today anybody can write a book about whatever idiotic idea comes to their mind. I suppose the time has come for me to join the club.

Monday, November 16, 2015

I am hiring: Postdoc in AdS/CFT applications to condensed matter

I am hiring a postdoc for a 3-year position based at Nordita in Stockholm. The research is project-bound, funded by a grant from the Swedish Research Council. I am looking for someone with a background in AdS/CFT applications to condensed matter and/or analogue gravity. If you want to know what the project is about, have a look at these recent papers. It’s a good contract with full benefits. Please submit your application documents (CV, research interests, at least two letters) here. Further questions should be addressed to hossi[at]

Thursday, November 12, 2015

Mysteriously quiet space baffles researchers

The Parkes Telescope. [Image Source]

Astrophysicists have concluded the most precise search yet for the gravitational wave background created by supermassive black hole mergers. But the expected signal isn’t there.

Last month, Lawrence Krauss rumored that the newly updated gravitational wave detector LIGO had seen its first signal. The news spread quickly – and was shot down almost as quickly. The new detector still had to be calibrated, a member of the collaboration explained, and a week later it emerged that the signal was probably a test run.

While this rumor caught everybody’s attention, a surprise find from another gravitational wave experiment almost drowned in the noise. The Parkes Pulsar Timing Array Project just published results from analyzing 11 years’ worth of data in which they expected to find evidence for gravitational waves created by mergers of supermassive black holes. The sensitivity of their experiment is well within the regime where the signal was predicted to be present. But the researchers didn’t find anything. Spacetime, it seems, is eerily quiet.

The Pulsar Timing Array project uses the 64 m Parkes radio telescope in Australia to monitor regularly flashing light sources in our galaxy. Known as pulsars, such objects are thought to be created in some binary systems, where two stars orbit around a common center. When a neutron star succeeds in accreting mass from the companion star, an accretion disk forms and starts to emit large amounts of particles. Due to the rapid rotation of the system, this emission goes into one particular direction. Since we can only observe the signal when it is aimed at our telescopes, the source seems to turn on and off in regular intervals: A pulsar has been created.

The astrophysicists on the lookout for gravitational waves use the fastest-spinning pulsars as enormously precise galactic clocks. These millisecond pulsars rotate so reliably that their pulses get measurably distorted even by minuscule disturbances in spacetime. Much like buoys move with waves on the water, pulsars move with the gravitational waves when space and time are stretched. In this way, the precise arrival times of the pulsars’ signals on Earth get distorted. The millisecond pulsars in our galaxy are thus nothing but a huge gravitational wave detector that nature has given us for free.

Take the pulsar with the catchy name PSR J1909-3744. It flashes us every 2.95 milliseconds, a hundred times in the blink of an eye. And, as the new experiment reveals, it does so to a precision within a few microseconds, year after year after year. This tells the researchers that the noise they expected from supermassive black hole mergers is not there.
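For scale: a blink of the eye lasts roughly a third of a second, so

\[
\frac{0.3\ \mathrm{s}}{2.95\ \mathrm{ms}} \approx 100
\]

pulses arrive in the time it takes you to blink.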

The reason for this missing signal is a great puzzle right now. Most known galaxies, including our own, seem to host huge black holes with masses of more than a million times that of our Sun. And in the vastness of space and on cosmological timescales, galaxies bump into each other every now and then. If that happens, they most often combine into a larger galaxy and, after some period of turmoil, the new galaxy will have a supermassive binary black hole at its center. These binary systems emit gravitational waves which should be found throughout the entire universe.

The prevalence of gravitational waves from supermassive binary black holes can be estimated from the probability that a galaxy hosts a black hole and the frequency with which galaxies merge. The emission of gravitational waves in these systems is a consequence of Einstein’s theory of General Relativity. Combine the existing observations with the calculation for the emission, and you get an estimate for the background noise from gravitational waves. The pulsar timing should be sensitive to this noise. But the new measurement is inconsistent with all existing models for the gravitational wave background in this frequency range.

Gravitational waves are one of the key predictions of General Relativity, Einstein’s masterwork which celebrates its 100th anniversary this year. They have never been detected directly, but the energy loss that gravitational waves must cause has been observationally confirmed in stellar binary systems. A binary system acts much like a gravitational antenna: it constantly emits radiation, just that instead of electromagnetic waves it is gravitational waves that the system sends into space. As a consequence of the constant loss of energy, the stars move closer together and the rotation frequency of binary systems increases. In 1993 the Physics Nobel Prize went to Hulse and Taylor for pioneering this remarkable confirmation of General Relativity.

Ever since, researchers have tried to find other ways to measure the elusive gravitational waves. The amount of gravitational waves they expect depends on their wavelength – roughly speaking, the longer the wavelength, the more of them should be around. The LIGO experiment is sensitive to wavelengths of the order of some thousand km. The network of pulsars however is sensitive to wavelengths of several lightyears, corresponding to 10^16 meters or even more. At these wavelengths astrophysicists expected a much larger background signal. But this is now excluded by the recent measurement.
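To put rough numbers on this, convert wavelength to frequency with f = c/λ:

\[
f_{\rm LIGO} \sim \frac{3\times10^{8}\ \mathrm{m/s}}{3\times10^{6}\ \mathrm{m}} \sim 100\ \mathrm{Hz},
\qquad
f_{\rm pulsars} \sim \frac{3\times10^{8}\ \mathrm{m/s}}{10^{16}\ \mathrm{m}} \sim 3\times10^{-8}\ \mathrm{Hz}.
\]

Pulsar timing arrays thus probe the nanohertz band, waves with periods of years rather than milliseconds.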

Estimated gravitational wave spectrum. [Image Source]

Why the discrepancy with the models? In their paper the researchers offer various possible explanations. To begin with, the estimates for the number of galaxy mergers or supermassive binary black holes could be wrong. Or the supermassive black holes might not be able to form close-enough binary systems in the mergers. Or it could be that the black holes experience an environment full of interstellar gas, which would reduce the time during which they emit gravitational waves. There are many astrophysical scenarios that might explain the observation. An absolutely last resort is to reconsider what General Relativity tells us about gravitational-wave emission.

 You have just witnessed the birth of a new mystery in physics.

[This post previously appeared at Starts with a Bang.]

Tuesday, November 10, 2015

Dear Dr. B: What do physicists mean when they say time doesn’t exist?

That was a question Brian Clegg didn’t ask but should have asked. What he asked instead in a recent blogpost was: “When physicists say many processes are independent of time, are they cheating?” He then answered his own question with yes. Let me explain first what’s wrong with Brian’s question, then I’ll tell you something about the existence of time.

What is time-reversal invariance?

The problem with Brian’s question is that no physicist I know would ever say that “many processes are independent of time.” Brian, I believe, didn’t mean time-independent processes but time-reversal invariant laws. The difference is important. The former is a process that doesn’t depend on time. The latter is a symmetry of the equations determining the process. Having a time-reversal invariant law means that the equations remain the same when the direction of time is reversed. This doesn’t mean the processes remain the same.

The mistake is twofold. Firstly, a time-independent process is a very special case. If you watch a video that shows a still image, it doesn’t matter if you watch it forward or backward. So, yes, time-independence implies time-reversal invariance. But secondly, if the underlying laws are time-reversal invariant, the processes themselves are reversible, but not necessarily invariant under this reversal. You can watch any movie backwards with the same technology that you can watch it forwards, yet the plot will look very different. The difference is the starting point, the “initial condition.”

The fundamental laws of nature, for all we know today, are time-reversal invariant*. This means you can rewind any movie and watch it backwards using the same laws. The reason that movies look very different backwards than forwards is a probabilistic one, captured in the second law of thermodynamics: entropy never decreases. In large open systems, it instead tends to increase. The initial state is thus almost always very different from the final state.
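A quick way to see what time-reversal invariance of a law means: Newton’s equation of motion,

\[
m\,\frac{d^2x}{dt^2} = -\nabla V(x),
\]

contains time only through a second derivative, so replacing t by −t leaves the equation unchanged. Every solution run backwards is again a solution; which of the two you actually see is decided by the initial condition, not by the law.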

Probabilities enter not through the laws themselves, but through these initial conditions. It is easy enough to set up a bowl with flour, sugar, butter, and eggs (initial condition), and then mix it (the law) to a smooth dough. But it is for all practical purposes impossible to set up dough so that a reverse-spinning mixer would separate the eggs from the flour.

In principle the initial state for this unmixing exists. We know it exists because we can create its time-reversed version. But you would have to arrange the molecules in the dough so precisely that it’s impossible to do. Consequently, you never see dough unmixing, never see eggs unbreaking, and facelifts don’t make you younger.

It is worth noting that all of this is true only in very large systems, with a large number of constituents. This is always the case for daily life experience. But if a system is small enough, it is indeed possible for entropy to decrease every once in a while just by chance. So you can ‘unmix’ very small patches of dough.
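If you like to tinker, here is a toy sketch in Python (not a physical simulation; the numbers and the coarse-graining are made up for illustration): twenty “flour” and “egg” particles get shuffled by random swaps, which is a reversible rule. The coarse-grained entropy grows from the ordered start and, because the system is tiny, occasionally dips back down. And since every swap can be undone, replaying the recorded swaps backwards recovers the ordered initial state exactly; the “unmixing” initial condition exists, it just has to be prepared with absurd precision.

```python
import random
from math import log, comb

def mixing_entropy(cells):
    """Coarse-grained entropy: split the row into blocks and add the log of the
    number of ways each block's flour/egg composition could be realized."""
    block = 5
    S = 0.0
    for i in range(0, len(cells), block):
        k = sum(cells[i:i + block])          # number of "egg" particles in this block
        S += log(comb(block, k))             # Boltzmann's log of the multiplicity
    return S

N = 20                                       # small system: fluctuations stay visible
cells = [0] * (N // 2) + [1] * (N // 2)      # ordered start: flour left, eggs right
swaps = []

for step in range(200):
    i, j = random.randrange(N), random.randrange(N)
    cells[i], cells[j] = cells[j], cells[i]  # the "law": a reversible swap
    swaps.append((i, j))
    if step % 20 == 0:
        print(step, round(mixing_entropy(cells), 2))  # rises from zero, then fluctuates

# The dynamics is reversible: undoing the swaps in opposite order
# recovers the ordered initial state exactly.
for i, j in reversed(swaps):
    cells[i], cells[j] = cells[j], cells[i]
assert cells == [0] * (N // 2) + [1] * (N // 2)
```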

What does any of this have to do with the existence of time? Not very much. Time arguably does exist. In a previous blogpost I explained that the property of being real isn’t binary (true or false), but graded from “mostly true” to “most likely false.” Things don’t simply exist or not exist; they exist at various levels of immediacy, depending on how detached they are from direct sensory exploration.

Space and time are something we experience every day. Einstein taught us space and time are combined in space-time, and its curvature is the origin of gravity. We move around in space-time. If space-time wasn’t there, we wouldn’t be there because there wouldn’t be any “there” to be at, and since space and time belong together time exists the same way as space does.

Claiming that time doesn’t exist is therefore misusing language. In General Relativity, time is a coordinate, one that is relevant to obtain predictions for observables. It isn’t uniquely defined, and it is not itself observable, but that doesn’t make it non-existent. If you’d ask me what it means for time to exist, I’d say it’s the Lorentzian signature of the metric, and that is something which we need for our theories to work. Time is, essentially, the label to order frames in the movie of our universe.
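Concretely, in flat space-time the line element is

\[
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,
\]

and the minus sign in front of the time direction is the Lorentzian signature: it is what formally distinguishes time from the three dimensions of space.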

Why do some physicists say that time isn’t real?

When physicists say that time doesn’t exist they mean one of two things: 1) The passage of time is an illusion, or 2) Time isn’t fundamental.

As to 1). In our current description of the universe, the past isn’t fundamentally different from the future. It will look different in outcome, but it will be made of the same stuff and it will work the same way. There is no dividing line that separates past from future and demarcates the present moment.

Our experience of there being a “present” comes from the one-sidedness of memory-formation. We can only form memories of things from a time when entropy was smaller, so we can’t remember the future. The perception of time passing comes from the update of our memory in the direction of entropy increase.

In this view, every moment in time exists in some way, though from our personal experience at each moment most of them are remote from experience (the past) or inaccessible from experience (the future). The perception of existence itself is time-dependent and also individual. You might say that the future is so remote to your perception, and prediction so close to impossible, that it is on the level of non-existence. I wouldn’t argue with you about that, but if you learn some more General Relativity your perception might shift.

Now this point of view irks some people, by which I mean Lee Smolin. Lee doesn’t like it that the laws of nature we know today do not give a special relevance to a present moment. He argues that this signals there is something missing in our theories, and that time should be “real.” What he means by that is that the laws of nature themselves must give rise to something like a present moment, which is not presently the case.

As to 2). We know that General Relativity cannot be the fundamental theory of space and time because it breaks down when gravity becomes very strong. The underlying theory might not have a notion of time, instead space and/or time might be emergent – they might be built up of something more fundamental.

I have some sympathy for this idea because I find it plausible that Euclidean and Lorentzian signatures are two different phases of the same underlying structure. This necessarily implies that time isn’t fundamental, but that it comes about in some phase transition.

Some people say that in this case “time doesn’t exist” but I find this extremely misleading. Any such theory would have to reproduce General Relativity and give rise to the notion of time we presently use. Saying that something isn’t real because it’s emergent is a meaningless abuse of terminology. It’s like saying the forest doesn’t exist because it’s made of trees.

In summary, time is real in a well-defined way, has always been real, and will always be real. When physicists say that time isn’t real they normally use it as a short-hand to refer to specific properties of their favorite theories, for example that the laws are time-reversal invariant, or that space-time is emergent. The one exception is Lee Smolin who means something else. I’m not entirely sure what, but he has written a book or two about it.

* Actually they’re not, they’re CPT invariant. But if you know the difference, then I don’t have to explain it to you.

Monday, November 09, 2015

Another new social networking platform for scientists: Loop

Logo of Loop Network website.

A recent issue of Nature magazine ran an advertisement feature for “Loop,” a new networking platform to “maximize the impact of researchers and their discoveries.” It’s an initiative by Frontiers, an open access publisher. Of course I went and got an account to see what it does and I’m here to report back.

In the Nature advert, the CEO of Frontiers interviews herself and answers the question “What makes Loop unique?” with “Loop is changing research networking on two levels. Firstly, researchers should not have to go to a network; it should come to them. Secondly, researchers should not have to fill in dozens of profiles on different websites.”

So excuse me for expecting a one-click registration that would make use of one of my dozens of other profiles. Instead I had to fill in a lengthy registration form that, besides name and email, asked not only for my affiliation and country of residence and job description and field of occupation, domain, and speciality, but also for my birthdate, before I had to confirm my email address.

Since that network was so good at “coming to me” it wasn’t possible after registration to import my profile from any other site, Google Scholar, ORCID, Linkedin, ResearchGate, or whathaveyou, facebook, G+, twitter if you must. Instead, I had to fill in my profile yet another time. Neither, for all I can tell, can you actually link your other accounts to the Loop account.

If you scroll down the information pages, it turns out what the integration refers to is “Your Loop profile is discoverable via the articles you have authored on and in the Frontiers journals.” Somewhat underwhelming.

Then you have to assemble a publication list. I am lucky to have a name that, for all I know, isn’t shared by anybody else on the planet, so it isn’t so difficult to scan the web for my publications. The Loop platform came up with 86 suggested finds. These appeared in separate pop-up windows. If you have ever done this process before you can immediately see the problem: Typically in these lists there are many duplicate entries. So going through the entries one by one without seeing what is already approved means you have to memorize all previous items. Now I challenge you to recall whether item number 86 had appeared before on the list.

Finally done with this, what do you have there? A website that shows a statistic for how many people have looked at your profile (on this site, presumably), how many people have downloaded your papers (from this site, presumably), and a number of citations which shows zero for me and for a lot of other profiles I looked at. A few people have a number there from the Scopus database. I conclude that Loop doesn’t have its own citation metric, nor does it use the one from Google Scholar or Spires.

As to the networking, you get suggestions for people you might know. I don’t know any of the suggested people, which isn’t surprising because we already noticed they’re not importing information, so how are they supposed to know who I know? I’m not sure why I would want to follow any of these people, or why that would be any better than following them elsewhere, or not at all. I followed some random person just because. If that person actually did something (which he doesn’t, much like everybody else whose profile I looked at), presumably it would appear in my feed. From that angle, it looks much like any other networking website. There is also a box that asks me to share something with my network of one.

In summary, for all I can tell this website is as useless as it gets. I don’t have the faintest clue what they think it’s good for. Even if it’s good for something it does a miserable job at telling me what that something is. So save your time.

Sunday, November 08, 2015

10 things you should know about black holes [video]

Since I had the blue screen up already, I wanted to try out some things to improve my videos. I'm quite happy with this one (finally managed to export it in a reasonable resolution), but I noticed too late I should have paid more attention to the audio. Sorry about that. Next time I'll use an external mic. I have also decided to finally replace the blue screen with a green screen, which I hope will solve the problem with the eye erasure.

Friday, November 06, 2015

New music video


I know you can hardly contain the excitement about my new lipstick and the badly illuminated blue screen, so please enjoy my newest release, exclusively for you, dear reader.

I actually wrote this song last year, but then I mixed myself into a mush. In the hope that I’ve learned some things since, I revisited this project and gave it a second try. Though I’m still not quite happy with it (I never seem to get the vocals right), I strongly believe there’s merit to finishing things up. Also, if I have to hear this thing once again my head will implode (though at least that would put an end to the concussion symptoms and neck pain I caused myself with the hair shaking). Lesson learned: hitting your forehead against a wall isn’t really pleasant. If you feel like engaging in it, you should at the very least videotape it, because that justifies just about anything stupid.

Sunday, November 01, 2015

Dumb Holes Leak

Tl;dr: A new experiment demonstrates that Hawking radiation in a fluid is entangled, but only in the high frequency end. This result might be useful to solve the black hole information loss problem.

In August I went to Stephen Hawking’s public lecture in the fully packed Stockholm Opera. Hawking was wheeled onto the stage, placed in the spotlight, and delivered an entertaining presentation about black holes. The silence of the audience was interrupted only by laughter to Hawking’s well-placed jokes. It was a flawless performance with standing ovations.

In his lecture, Hawking expressed hope that he will win the Nobel Prize for the discovery that black holes emit radiation. Now called “Hawking radiation,” this effect should have been detected at the LHC had black holes been produced there. But the time has come, I think, for Hawking to update his slides. The ship to the promised land of micro black holes has long left the harbor, and it sank – the LHC hasn’t seen black holes; it has not, in fact, seen anything besides the Higgs.

But you don’t need black holes to see Hawking radiation. The radiation is a consequence of applying quantum field theory in a space- and time-dependent background, and you can use some other background to see the same effect. This can be done, for example, by measuring the propagation of quantum excitations in Bose-Einstein condensates. These condensates are clouds of about a billion or so ultra-cold atoms that form a fluid with basically zero viscosity. It’s as clean a system as it gets to see this effect. Handling and measuring the condensate is a big experimental challenge, but what wouldn’t you do to create a black hole in the lab?

The analogy between the propagation of excitations on background fluids and in a curved space-time background was first pointed out by Bill Unruh in the 1980s. Since then, many concrete examples have been found for condensed-matter systems that can be used as stand-ins for gravitational fields; they are summarily known as “analogue gravity systems” – this is “analogue” as in “analogy,” not as opposed to “digital.”

In these analogue gravity systems, the quantum excitations are sound waves, and the corresponding quantum particles are called “phonons.” A horizon in such a space-time is created at the boundary of a region in which the velocity of the background fluid exceeds the speed of sound, thereby preventing the sound waves from escaping. Since these fluids trap sound rather than light, such gravitational analogues are also called “dumb,” rather than “black” holes.

Hawking radiation was detected in fluids a few years ago. But these measurements only confirmed the thermal spectrum of the radiation and not its most relevant property: the entanglement across the horizon. The entanglement of the Hawking radiation connects pairs of particles, one inside and one outside the horizon. It is a pure quantum effect: The state of either particle separately is unknown and unknowable. One only knows that their states are related, so that measuring one of the particles determines the measurement outcome of the other particle – this is Einstein’s “spooky action at a distance.”
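Schematically, for each mode of frequency ω the Hawking process produces a two-mode squeezed state. In units with ħ = c = k_B = 1, and with κ the surface gravity (or its acoustic analogue), it takes the form

\[
|\psi\rangle \;\propto\; \sum_{n=0}^{\infty} e^{-\pi n \omega/\kappa}\, |n\rangle_{\rm inside}\, |n\rangle_{\rm outside},
\]

so the number of quanta on the two sides of the horizon is perfectly correlated, while either side on its own looks exactly thermal at the Hawking temperature T = κ/2π.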

The entanglement of Hawking radiation across the horizon is the origin of the black hole information loss problem. In a real black hole, the inside partner of the entangled pair eventually falls into the singularity, where it gets irretrievably lost, leaving the state of its partner undetermined. In this process, information is destroyed, but this is incompatible with quantum mechanics. Thus, by combining gravity with quantum mechanics, one arrives at a result that cannot happen in quantum mechanics. It’s a classical proof by contradiction, and signals a paradox. This headache is believed to be remedied by the still-missing theory of quantum gravity, but exactly what the remedy is nobody knows.

In a new experiment, Jeff Steinhauer from the Israel Institute of Technology measured the entanglement of the Hawking radiation in an analogue black hole; his results are available on the arxiv.

For this new experiment, the Bose-Einstein condensate was trapped and put in motion with laser light, making it an effectively one-dimensional system in flow. In this trap, the condensate had a low density in one half and a higher density in the other half, achieved by a potential step from a second laser. The speed of sound in such a condensate depends on the density, so that a higher density corresponds to a higher speed of sound. The high density region thus allowed the phonons to escape and corresponds to the outside of the horizon, whereas the low density region corresponds to the inside of the horizon.
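For a dilute condensate, the standard (Bogoliubov) expression for the speed of sound is

\[
c_s = \sqrt{\frac{g\,n}{m}},
\]

where n is the atomic density, m the atomic mass, and g the interaction strength. The denser half of the cloud therefore has the higher sound speed, and the acoustic horizon sits where the flow velocity reaches c_s.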

The figure below shows the density profile of the condensate:

Figure 1 from 1510.00621. The density profile of the condensate. 

In this system then, Steinhauer measured correlations of fluctuations. These flowing condensates don’t last very long, so to get useful data, the same setting must be reproduced several thousand times. The analysis clearly shows a correlation between the excitations inside and outside the horizon, as can be seen in the figure below. The entanglement appears in the grey lines on the diagonal from top left to bottom right. I have marked the relevant feature with red arrows (ignore the green ones, they indicate matches between the measured angles and the theoretical prediction).

When Steinhauer analyzed the dependence on the frequency, he found a correlation only in the high frequency end, not in the low frequency end. This is as intriguing as it is confusing. In a real black hole all frequencies should be entangled. But if the Hawking radiation were not entirely entangled across the horizon, that might allow information to escape. One has to be careful, however, not to read too much into this finding.

To begin with, let us be clear, this is not a gravitational system. It’s a system that shares some properties with the gravitational case. But when it comes to the quantum behavior of the background, that may or may not be a useful comparison. Even if it were, the condensate studied here is not rotationally symmetric, as a real black hole would be. Since the rotational symmetry is essential for the red-shift in the gravitational potential, I actually don’t know how to interpret the low frequencies. Possibly they correspond to a regime that real black holes just don’t have. And then the correlation might just have gotten lost in experimental uncertainties – limitations from finite system size, number of particles, noise, etc. – on which the paper doesn’t have much detail.

The difference between the analogue gravity system, which is the condensate, and the real gravity system is that we do have a theory for the quantum properties of the condensate. If gravity were quantized in a similar way, then studies like the one done by Steinhauer might indicate where Hawking’s calculation fails – for it must fail if the information paradox is to be solved. So I find this a very interesting development.

Will Hawking and Steinhauer get a Nobel Prize for the discovery and detection of the thermality and entanglement of the radiation? I think this is very unlikely, because right now it isn’t clear whether this is even relevant for anything. Should this finding turn out to be key to developing a theory of quantum gravity, however, that would be groundbreaking. And who knows, maybe Hawking will again be invited to Stockholm.

Thursday, October 29, 2015

What is basic science and what is it good for?

Basic science is, above all, a stupid word. It sounds like those onions we sliced in 8th grade. And if people don’t mistake “basic” for “everybody knows,” they might think instead it means “foundational,” that is, dedicated to questioning the present paradigms. But that’s not what the word refers to.

Basic science refers to research which is not pursued with the aim of producing new technologies; it is sometimes, more aptly, referred to as “curiosity driven” or “blue skies” research. The NSF calls it “transformative,” the ERC calls it “frontier” research. Quite possibly they don’t mean exactly the same, which is another reason why it’s a stupid word.

A few days ago, Matt Ridley wrote an article for the Wall Street Journal in which he argues that basic research, to the extent that it’s necessary at all, doesn’t need governmental funding. He believes that it is technology that drives science, not the other way round. “Deep scientific insights are the fruits that fall from the tree of technological change,” Ridley concludes. Apparently he has written a whole book with this theme, which is about to be published next week. The WSJ piece strikes me as shallow and deliberately provocative, published with the only aim of drawing attention to his book, which I hope has more substance and not just more words.

The essence of the article seems to be that it’s hard to demonstrate a correlation, not to mention causation, between tax-funded basic science and economic growth. Instead, Ridley argues, in many examples scientific innovations originated not in one single place, but more or less simultaneously in various different places. He concludes that tax-funded research is unnecessary.

Leaving aside for a moment that measures of economic growth can mislead about a country’s prosperity, it is hardly surprising that a link between tax-funded basic research and economic growth is difficult to find. It must come as a shock to nationalists, but basic research is possibly the most international profession in existence. Ideas don’t stop at country borders. Consequently, to make use of basic research, you don’t yourself need to finance it. You can just wait until a breakthrough occurs elsewhere and then pay your people to jump on it. The main reason we so frequently see examples of simultaneous breakthroughs in different groups is that they build on more or less the same knowledge. Scientists can jump very quickly.

But the conclusion that this means one does not need to support basic research is just wrong. It’s a classic demonstration of the “free rider” problem. Your country can reap the benefits of basic research elsewhere, as long as somebody else does the thinking for you. But if every country does this, innovation would run dry, eventually.

Besides this, the idea that technology drives science might have worked in the last century, but it no longer works today. The times when you could find new laws of nature by dabbling with some equipment in the lab are over. To make breakthroughs today, you need to know what to build, and you need to know how to analyze your data. Where will you get that knowledge if not from basic research?

The technologies we use today, the computer that you sit in front of – semiconductors, lasers, liquid crystal displays – are based on last century’s theories. We still reap the benefits. And we all do, regardless of whether our nation paid the salary of one of quantum mechanics’ founding fathers. But if we want progress to continue in the next century, we have to go beyond that. You need basic research to find out which direction is promising, which is a good investment. Otherwise, you’ll waste lots of time and money.

There is a longer discussion one can have about whether some types of basic research have any practical use at all. It is questionable, for example, whether knowing about the accelerated expansion of the universe will ever lead to a better phone. From my perspective, the materialistic focus is as depressing as it is meaningless. Sure, it would be nice if my damned phone battery wouldn’t die in the middle of a call, and, yeah, I want to live forever watching cat videos on my hoverboard. But I fail to see what it’s ultimately good for. The only meaning I can find in being thrown into this universe is to understand how it works and how we are part of it. To me, knowledge is an end unto itself. Keep your hoverboard, just tell me how to quantize gravity.

Here is a simple thought experiment. Suppose all tax-funded basic research were to cease tomorrow. What would go missing? No more stories about black holes, exoplanets, or loophole-free tests of quantum entanglement. No more string theory, no multiverses, no theories of everything, no higgsinos, no dark matter, no cosmic neutrinos, extra-dimensions, wormholes, or holographic universes. Except for a handful of lucky survivors at partly privately funded places – like Perimeter Institute, the KAVLI institutes, and some Templeton-funded initiatives, which in no way would be able to continue all of this work – this research would die quickly. The world would be a poorer place, one with no hope of ever understanding this amazing universe that we live in.

Democracy is a funny thing, you know, it’s kind of like an opinion poll. Basic research is tax-funded in all developed countries. Could there be any clearer expression of the people’s opinion? They say: we want to know. We want to know where we come from, and what we are made of, and what’s the fate of our universe. Yes, they say, we are willing to pay taxes for that, but please tell us. As someone who works in basic research, I see my task as delivering to this want.

Monday, October 26, 2015

Black holes and academic walls

Image credits: Paul Terry Sutton
According to Einstein you wouldn’t notice crossing a black hole horizon. But now researchers argue that a firewall or a brick wall would be in your way. Have they entirely lost their minds?

Tl;dr: Yes.

It is hard, sometimes, to understand why anyone would waste time on a problem as academic as black hole information loss. And I say that as someone who spent a significant part of the last decade pushing this very problem around in my head. Don’t physicists have anything better to do, in a world that is suffering from war and disease, bad grammar even? What drives these researchers, other than the hope to make headlines for solving a 40-year-old conundrum?

Many physicists today work on topics that, like black hole information loss, seem entirely detached from reality. Black holes only succeed in destroying information once they entirely evaporate, and that won’t happen for the next 100 billion years or so. What drives these researchers is not making tangible use of their insights, but the recognition that someone today has to pave the way for the science that will become relevant in a hundred, a thousand, or ten thousand years from now. And as I scan the human mess in my news feed, the unearthly cleanliness of the argument, the seemingly inescapable logic leading to a paradox, admittedly only adds to its appeal.

If black hole information loss was a cosmic whodunit, then quantum theory would be the victim. Stephen Hawking demonstrated in the early 1970s that when one combines quantum theory with gravity, one finds that black holes must emit thermal radiation. This “Hawking radiation” is composed of particles that besides their temperature do not contain any information. And so, when a black hole entirely evaporates all the information about what fell inside must ultimately be destroyed. But such destruction of information is incompatible with the very quantum theory one used to arrive at this conclusion. In quantum theory all processes can happen both forward and backward in time, but black hole evaporation, it seems, cannot be reversed.
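The temperature of this radiation is set by the black hole’s mass,

\[
T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times10^{-8}\ \mathrm{K}\,\frac{M_\odot}{M},
\]

absurdly small for astrophysical black holes, which is why the radiation itself has never been observed directly.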

This presented physicists with a major conundrum, because it demonstrated that gravity and quantum theory refused to combine. It didn’t help either to try to explain away the problem by alluding to the unknown theory of quantum gravity. Hawking radiation is not a quantum gravitational process, and while quantum gravity does eventually become important in the very late stages of a black hole’s evaporation, the argument goes that by this time it is too late to get all the information out.

The situation changed dramatically in the late 1990s, when Maldacena proposed that certain gravitational theories are equivalent to gauge theories. Discovered in string theory, this famed gauge-gravity correspondence, though still mathematically unproved, does away with the problem because whatever happens when a black hole evaporates is equivalently described in the gauge theory. The gauge theory however is known to not be capable of murdering information, thus implying that the problem doesn’t exist.

While the gauge-gravity correspondence convinced many physicists, including Stephen Hawking himself, that black holes do not destroy information, it did not shed much light on just exactly how the information escapes the black hole. Research continued, but complacency spread through the ranks of theoretical physicists. String theory, it seemed, had resolved the paradox, and it was only a matter of time until details would be understood.

But that wasn’t how things panned out. Instead, in 2012, a group of four physicists, Almheiri, Marolf, Polchinski, and Sully (AMPS), demonstrated that what was thought to be a solution is actually also inconsistent. Specifically they demonstrated that four assumptions, generally believed by most string theorists to all be correct, cannot in fact be simultaneously true. These four assumptions are that:
  1. Black holes don’t destroy information.
  2. The Standard Model of particle physics and General Relativity remain valid close to the black hole horizon.
  3. The amount of information stored inside a black hole is proportional to its surface area.
  4. An observer crossing the black hole horizon will not notice it.
The second assumption rephrases the statement that Hawking radiation is not a quantum gravitational effect. The third assumption is a conclusion drawn from calculations of the black hole microstates in string theory. The fourth assumption is Einstein’s equivalence principle. In a nutshell, AMPS say that at least one of these assumptions must be wrong. One of the witnesses is lying, but who?
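Quantitatively, assumption 3) is usually expressed through the Bekenstein-Hawking entropy,

\[
S_{\rm BH} = \frac{k_B\,c^3 A}{4\,G\hbar},
\]

with A the area of the horizon. For the special black holes where string theory can count microstates explicitly, the count reproduces exactly this value, which is part of why assumption 3) is so hard to give up.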

In their paper, AMPS suggested, maybe not quite seriously, giving up on the least contested of these assumptions, number 4). Giving up on 4), the other three assumptions imply that an observer falling into the black hole would encounter a “firewall” and be burnt to ashes. The equivalence principle however is the central tenet of general relativity and giving it up really is the last resort.

For the uninitiated observer, the lying witness is obviously 3). In contrast to the other assumptions, which are consequences of theories we already know and have tested to high precision, number 3) comes from a so-far untested theory. So if one assumption has to be dropped, then maybe it is the assumption that string theory is right about the information content of black holes, but that option isn’t very popular with string theorists...

And so within a matter of months the hep-th category of the arxiv was cluttered with attempts to reconcile the disagreeable assumptions with one another. Proposed solutions included everything from just accepting the black hole firewall to the multiverse to elaborate thought-experiments meant to demonstrate that an observer wouldn’t actually notice being burnt. Yes, that’s modern physics for you.

I too, of course, have an egg in the basket. I found the witnesses all to be convincing; none of them seemed to be lying. And taking them at face value, it finally occurred to me that what made the assumptions seemingly incompatible was an unstated fifth assumption. Just as witnesses’ accounts might suddenly all make sense once you realize the victim wasn’t killed at the place where the body was found, the four assumptions suddenly all make sense when you do not require the information to be saved in a particular way (that the final state is a “typical” state). Instead, the requirement that energy must be locally conserved near the horizon makes the firewall impossible and at the same time shows exactly how black hole evaporation remains compatible with quantum theory.

I think nobody really liked my paper because it leads to the rather strange conclusion that somewhere near the horizon there is a boundary which does alter the quantum theory, yet in a way that isn’t noticeable for any observer near the black hole. It is possible to measure its effects, but only in the far distance. And while my proposal did resolve the firewall conundrum, it didn’t do anything about the black hole information loss problem. I mentioned in a side-note that in principle one could use this boundary to hand information into the outgoing radiation, but that would still not explain how the information would get into the boundary to begin with.

After publishing this paper, I vowed once again to never think about black hole evaporation again. But then last month an arxiv preprint appeared by ‘t Hooft, who was one of the first to dabble in black hole thermodynamics. In his new paper ‘t Hooft proposes that the black hole horizon acts like a boundary that reflects information, a “brick wall” as New Scientist calls it. This new idea was inspired by Stephen Hawking’s recent suggestion that much of the information falling into black holes continues to be stored on the horizon. If that is so, then giving the horizon a chance to act can allow the information to leave again.

I don’t think that bricks are much of an improvement over fire, and I’m pretty sure that this exact realization of the idea won’t hold up. But after all the confusion, this might eventually allow us to better understand just how the horizon interacts with the Hawking radiation and how it might manage to encode information in it.

Fast forward a thousand years. At the end of the road there is a theory of quantum gravity that will allow us to understand the behavior of space and time on the shortest distance scales and, so many hope, the origin of quantum theory itself. Progress might seem incremental, and sometimes history leads us in circles, but what keeps physicists going is the knowledge that there must be a solution.

[This post previously appeared at Starts with a Bang.]

Monday, October 19, 2015

Book review: Spooky Action at a Distance by George Musser

Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time--and What It Means for Black Holes, the Big Bang, and Theories of Everything
By George Musser
Scientific American, To be released November 3, 2015

“Spooky Action at a Distance” explores the question Why aren’t you here? And if you aren’t here, what is it that prevents you from being here? Trying to answer this simple-sounding question leads you down a rabbit hole where you have to discuss the nature of space and time with many-world proponents and philosophers. In his book, George reports back what he’s found down in the rabbit hole.

Locality and non-locality are topics as confusing as they are controversial, both in- and outside the community, and George’s book is a great introduction to an intriguing development in contemporary physics. It’s a courageous book. I can only imagine how much of a headache writing it must have been, after I once organized a workshop on nonlocality and realized that no two people could agree on what they even meant by the word.

George is a very gifted writer. He gets across the most relevant concepts the reader needs to know on a nontechnical level, with a light and unobtrusive humor. The historical background is nicely woven together with the narrative, and the reader gets to meet many researchers in the field: Steve Giddings, Fotini Markopoulou, and Nima Arkani-Hamed, to mention only a few.

In his book, George lays out how the attitude of scientists towards nonlocality has gone from acceptance to rejection and makes a case that now the pendulum is swinging back to acceptance again. I think he is right that this is the current trend (thus the workshop).

I found the book somewhat challenging to read because I was constantly trying to translate George’s metaphors back into equations, and I didn’t always succeed. But then that’s a general problem I have with popular science books, and I can’t blame George for it. I have another complaint though, which is that George covers a lot of different research in rapid succession without adding qualifiers about these research programs’ shortcomings. There’s quantum graphity and string theory and black holes in AdS and causal sets and then there’s many worlds. The reader might be left with the mistaken impression that these topics are somehow all related to each other.

Spooky Action at a Distance starts out as an Ode to Steve Giddings and ends as a Symphony for Arkani-Hamed. For my taste it’s a little too heavy on person-stories, but then that seems to be the style of science writing today. In summary, I can only say it’s a great book, so go buy it, you won’t regret it.

[Disclaimers: Free review copy; I know the author.]

Fade-out ramble: You shouldn’t judge a book by its subtitle, really, but whoever is responsible for this title-inflation, please make it stop. What’s next? Print the whole damn book on the cover?

Monday, October 12, 2015

A newly proposed table-top experiment might be able to demonstrate that gravity is quantized

Tl;dr: Experimentalists are bringing increasingly massive systems into quantum states. They are now close to masses where they might be able to just measure what happens to the gravitational field.

Quantum effects of gravity are weak, so weak they are widely believed to not be measurable at all. Freeman Dyson indeed is fond of saying that a theory of quantum gravity is entirely unnecessary, arguing that we could never observe its effects anyway. Theorists of course disagree, and not just because they’re being paid to figure out the very theory Dyson deems unnecessary. Measurable or not, they search for a quantized version of gravity because the existing description of nature is not merely incomplete – it is far worse, it contains internal contradictions, meaning we know it is wrong.

Take the century-old double-slit experiment, the prime example for quantum behavior. A single electron that goes through the double-slit is able to interact with itself, as if it went through both slits at once. Its behavior is like that of a wave which overlaps with itself after passing an obstacle. And yet, when you measure the electron after it went through the slit it makes a dot on a screen, like a particle would. The wave-like behavior again shows up if one measures the distribution of many electrons that passed the slit. This and many other experiments demonstrate that the electron is neither a particle nor a wave – it is described by a wave-function from which we obtain a probability distribution, a formulation that is the core of quantum mechanics.

Well understood as this is, it leads to a so-far unsolved conundrum.

The most relevant property of the electron’s quantum behavior is that it can go through both slits at once. It’s not that half of the electron goes one way and half the other. Neither does the electron sometimes take this slit and sometimes the other. Impossible as it sounds, the electron goes fully through both slits at the same time, in a state referred to as quantum superposition.

Electrons carry a charge and so they have an electric field. This electric field also has quantum properties and moves along with the electron in its own quantum superposition. The electron also has a mass. Mass generates a gravitational field, so what happens to the gravitational field? You would expect it to also move along with the electron, and go through both slits in a quantum superposition. But that can only work if gravity is quantized too. According to Einstein’s theory of General Relativity though, it’s not. So we simply don’t know what happens to the gravitational field unless we find a theory of quantum gravity.

It’s been 80 years since the question was first raised, but we still don’t know what the right theory is. The main problem is that gravity is an exceedingly weak force. We notice it so prominently in our daily life only because, in contrast to the other interactions, it cannot be neutralized. But the very reason that planet Earth doesn’t collapse to a black hole is that much stronger forces than gravity prevent this from happening. The electromagnetic force, the strong nuclear force, and even the supposedly weak nuclear force are all much more powerful than gravity.
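Just to put a number on this, here is a back-of-the-envelope sketch of my own (with textbook values for the constants), comparing the gravitational to the electrostatic attraction between an electron and a proton; it is not part of the proposal discussed below.

```python
# Rough comparison of gravity vs electromagnetism for an electron-proton pair.
# The constants are standard textbook values; only the order of magnitude matters.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31    # electron mass, kg
m_p = 1.673e-27    # proton mass, kg
e   = 1.602e-19    # elementary charge, C

# The separation drops out of the ratio, since both forces scale as 1/r^2.
ratio = (G * m_e * m_p) / (k_e * e**2)
print(f"F_gravity / F_electric ≈ {ratio:.1e}")   # roughly 4e-40
```

A suppression by some forty orders of magnitude is what anyone hoping to measure the gravitational field of a quantum object is up against.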

For the experimentalist this means they either have an object heavy enough so its gravitational field can be measured. Or they have an object light enough so its quantum properties can be measured. But not both at once.

At least that was the case so far. But the last decade has seen enormous progress in experimental techniques to bring heavier and heavier objects into quantum states and measure their behavior. And in a recent paper, a group of researchers from Italy and the UK propose an experiment that might just be the first feasible measurement of the gravitational field of a quantum object.

Almost all researchers who work on the theory of quantum gravity expect that the gravitational field of the electron behaves like its electric field, that is, it has quantum properties. They are convinced of this because we have a well-working theory to describe this situation. Yes, I know, they told you nobody has quantized gravity, but that isn’t true. Gravity was quantized in the 1960s by DeWitt, Feynman, and others, using a method known as perturbative quantization. However, the result one gets with this method only works when the gravitational field is weak, and it breaks down when gravity becomes strong, such as at the Big Bang or inside black holes. In other words, this approach, while well understood, fails us exactly in the situations we are interested in the most.

Because of this failure in strong gravitational fields, perturbatively quantized gravity cannot be a fundamentally valid theory; it requires completion. It is this completion that is normally referred to as “quantum gravity.” However, when gravitational fields are weak, which is definitely the case for the little electron, the method works perfectly fine. Whether it is realized in nature though, nobody knows.

If the gravitational field is not quantized, one has instead a theory known as “semi-classical gravity,” in which the matter is quantized but gravity isn’t. Though nobody can make much sense of this theory conceptually, it’s infuriatingly hard to disprove. If the gravitational field of the electron remained classical, it would be sourced by the electron’s probability distribution, spread over both slits, rather than going through the slits in a quantum superposition along with the electron.

To see the difference, suppose you put a (preferably uncharged) test particle in the middle between the slits to see where the gravitational pull goes. If the gravitational field is quantized, then in half of the cases the test particle will move towards the left slit, and in the other half towards the right one (which would also destroy the interference pattern). If the gravitational field is classical however, the test particle won’t move at all, because it’s pulled equally to both sides.
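To make the distinction explicit, here is a toy simulation of that thought experiment. The unit “kick,” the 50/50 branching, and the test particle sitting exactly in the middle are illustrative assumptions of mine, not part of the actual proposal discussed below.

```python
import random

# Toy model: a test particle midway between the slits receives a unit "kick"
# towards whichever slit the gravitational field points to in a given run.
N = 10_000

# Quantized gravity: the field is correlated with the electron, so measuring
# the test particle finds it kicked left (-1) or right (+1), at random.
quantized = [random.choice([-1, +1]) for _ in range(N)]

# Semi-classical gravity: the field is sourced by the *average* probability
# distribution, half at each slit, so the pull cancels in every single run.
semiclassical = [0.0 for _ in range(N)]

for name, kicks in [("quantized", quantized), ("semi-classical", semiclassical)]:
    mean = sum(kicks) / N
    typical = sum(abs(k) for k in kicks) / N
    print(f"{name:15s}  mean kick = {mean:+.2f}   typical |kick| per run = {typical:.2f}")
```

The averages agree in both cases; the difference only shows up run by run, which is also why the quantized field would destroy the interference pattern while the semi-classical one would not.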

So the difference between quantized and semi-classical gravity is observable. Unfortunately, even for the most massive objects that can be pushed through double slits, like large molecules, the gravitational field is far too weak to be measurable.

In the new paper now, the researchers propose a different method. They consider a tiny charged disk of osmium with a mass of about a nano-gram, held in a trap by electromagnetic fields. The disk is cooled down to about a hundred mK, which brings it into its lowest possible energy state. Above this ground level there are discrete energy levels for the disk, much like the electron orbits around the atomic nucleus, except that the level spacing is tiny. The important point is that the exact energy values of these levels depend on the gravitational self-interaction of the whole object. Measure the spacing of the energy levels precisely enough, and you can figure out whether the gravitational field was quantized or not.

Figure 1 from arXiv:1510.01696. Depicted are the energy levels of the disk in the potential, and how they shift with the classical gravitational self-interaction taken into account, for two different scenarios of the distribution of the disk’s wave-function.

For this calculation they use the Schrödinger-Newton equation, which is the non-relativistic limit of semi-classical gravity incorporated in quantum mechanics. In an accompanying paper they have worked out the description of multi-particle systems in this framework, and demonstrated how the system approximately decomposes into a center-of-mass variable and the motions relative to the center of mass. They then calculate how the density distribution is affected by the gravitational field caused by its own probability distribution, and finally the energy levels of the system.
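For orientation only, here is a dimensional estimate of my own, not the calculation from the paper: the natural frequency scale of gravitational self-interaction for a homogeneous body of density ρ is √(Gρ), independent of its size. The trap frequency below is an assumed number, purely for illustration.

```python
import math

G      = 6.674e-11      # m^3 kg^-1 s^-2
rho_Os = 2.259e4        # bulk density of osmium, kg/m^3

# Characteristic frequency of gravitational self-interaction (dimensional analysis).
omega_sn = math.sqrt(G * rho_Os)                       # ~1.2e-3 rad/s
print(f"sqrt(G*rho) ≈ {omega_sn:.1e} rad/s ≈ {omega_sn / (2 * math.pi) * 1e3:.2f} mHz")

# If the self-gravity acts roughly like an extra harmonic term on top of an
# assumed trap frequency, the fractional shift of the level spacing is of order:
omega_trap = 2 * math.pi * 1e5                          # assumed trap frequency, rad/s
print(f"relative level shift ~ (omega_sn/omega_trap)^2 ≈ {(omega_sn / omega_trap)**2:.0e}")
```

This naive estimate looks hopelessly small. If I understand the authors’ argument correctly, the multi-particle treatment changes the picture because what matters is how sharply the individual osmium nuclei are localized in the crystal rather than the bulk density of the disk, and that is what pushes the effect towards the measurable range.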

I haven’t checked this calculation in detail, but it seems both plausible that the effect should be present, and that it is large enough to potentially be measurable. I don’t know much about these types of experiments, but two of the authors of the paper, Hendrik Ulbricht and James Bateman, are experimentalists, and I trust they know what current technology allows one to measure.

Suppose they make this measurement and, as expected, do not find the additional shift of the energy levels that should exist if gravity were unquantized. This would not, strictly speaking, demonstrate that perturbatively quantized gravity is correct, but merely that the Schrödinger-Newton equation is incorrect. However, since these are the only two alternatives I am aware of, it would in practice be the first experimental confirmation that gravity is indeed quantized.

Tuesday, October 06, 2015

Repost in celebration of the 2015 Nobel Prize in Physics: Neutrino masses and angles

It was just announced that this year's Nobel Prize in physics goes to Takaaki Kajita from the Super-Kamiokande Collaboration and Arthur B. McDonald from the Sudbury Neutrino Observatory (SNO) Collaboration “for the discovery of neutrino oscillations, which shows that neutrinos have mass.” On this occasion, I am reposting a brief summary of the evidence for neutrino masses that I wrote in 2007.

Neutrinos come in three known flavors. These flavors correspond to the three charged leptons, the electron, the muon and the tau. The neutrino flavors can change during the neutrino's travel, and one flavor can be converted into another. This happens periodically. The neutrino flavor oscillations have a certain wavelength, and an amplitude which sets the probability of the change to happen. The amplitude is usually quantified by a mixing angle θ. Here, sin²(2θ) = 1, i.e. θ = π/4, corresponds to maximal mixing, which means one flavor changes completely into another, and then back.

This neutrino mixing happens when the mass eigenstates of the Hamiltonian are not the same as the flavor eigenstates. The wavelength λ of the oscillation turns out to depend (in the relativistic limit) on the difference in the squared masses Δm² (not the square of the difference!) and the neutrino's energy E as λ = 4πE/Δm². The larger the energy of the neutrinos, the larger the wavelength. For a source with a spectrum of different energies around some mean value, one has a superposition of various wavelengths. Over distances larger than the typical oscillation length corresponding to the mean energy, this averages out the oscillation.
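In the two-flavor case, the corresponding survival probability is the textbook expression P = 1 − sin²(2θ) sin²(Δm²L/4E). Below is a small sketch evaluating it for ballpark KamLAND-like numbers; the factor 1.27 is the usual conversion when L is in km, E in GeV and Δm² in eV², and the baseline and energies are rough choices of mine, not the collaboration’s analysis.

```python
import math

def survival_probability(L_km, E_GeV, dm2_eV2, theta):
    """Two-flavor survival probability P(nu_e -> nu_e),
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with L in km, E in GeV, dm2 in eV^2."""
    return 1.0 - math.sin(2 * theta)**2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV)**2

dm2   = 8e-5                  # eV^2, roughly the solar mass-square difference
theta = math.radians(33.9)    # mixing angle
L     = 180.0                 # km, roughly the average reactor distance

for E_MeV in (2.0, 4.0, 6.0):
    P = survival_probability(L, E_MeV * 1e-3, dm2, theta)
    print(f"E = {E_MeV:.0f} MeV:  survival probability ≈ {P:.2f}")

# Oscillation wavelength lambda = 4*pi*E/dm2 (natural units); in km/GeV/eV^2 units:
lam_km = math.pi * 4e-3 / (1.27 * dm2)
print(f"oscillation length at 4 MeV ≈ {lam_km:.0f} km")
```

Because the reactor anti-neutrinos come with a spread of energies, these rapid wiggles get washed out in the measured rate, as described above.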

The plot below from the KamLAND Collaboration shows an example of an experiment to test neutrino flavor conversion. The KamLAND neutrino sources are several Japanese nuclear reactors that emit electron anti-neutrinos with a very well known energy and power spectrum, with a mean value of a few MeV. The average distance to the reactors is ~180 km. The plot shows the ratio of the observed electron anti-neutrinos to the number expected without oscillations. The KamLAND result is the red dot. The other data points are from earlier experiments at other locations that did not find a drop. The dotted line is the best fit to this data.

[Figure: KamLAND Collaboration]

One sees however that there is some kind of degeneracy in this fit, since one can shift around the wavelength and stay within the error bars. These reactor data however are only one of the measurements of neutrino oscillations that have been made during the last decades. There are a lot of other experiments that have measured deficits in the expected solar and atmospheric neutrino flux. Especially important in this regard were the SNO data, which confirmed that not only were there fewer solar electron neutrinos than expected, but that they actually showed up in the detector with a different flavor, and the KamLAND analysis of the energy spectrum, which clearly favors oscillation over decay.

The plot below depicts all the currently available data for electron neutrino oscillations, which places the mass-square difference around 8×10⁻⁵ eV², and θ at about 33.9° (i.e. the mixing is with high confidence not maximal).

[Figure: Hitoshi Murayama, see here for references on the used data]

The lines on the top indicate excluded regions from earlier experiments; the filled regions are allowed values. You see the KamLAND 95% CL area in red, and SNO in brown. The remaining island in the overlap is pretty much constrained by now. Given that neutrinos are such elusive particles, and this mass scale is incredibly tiny, I am always impressed by the precision of these experiments!

To fit the oscillations between all three known neutrino flavors, one needs three mixing angles and two mass-square differences (the overall mass scale factors out and does not enter; neutrino oscillations are thus not sensitive to the total neutrino masses). All the presently available data has allowed us to tightly constrain the mixing angles and mass-square differences. The only outlier (which was thus excluded from the global fits) is famously LSND (see also the above plot), so MiniBooNE was designed to check on their results. For more info on MiniBooNE, see Heather Ray's excellent post at CV.

This post originally appeared in December 2007 as part of our advent calendar A Plottl A Day.

Friday, October 02, 2015

Book Review: “A Beautiful Question” by Frank Wilczek

A Beautiful Question: Finding Nature's Deep Design
By Frank Wilczek
Penguin Press (July 14, 2015)

My four year old daughter recently discovered that equilateral triangles combine to larger equilateral triangles. When I caught a distracted glimpse of her artwork, I thought she had drawn the baryon decuplet, an often used diagram to depict relations between particles composed of three quarks.

The baryon decuplet doesn’t come easy to us, but the beauty of symmetry does, and how amazing that physicists have found it tightly woven into the fabric of nature itself: Both the standard model of particle physics and General Relativity, our currently most fundamental theories, are in essence mathematically precise implementations of symmetry requirements. But next to being instrumental for the accurate description of nature, the appeal of symmetries is a human universal that resonates in art and design throughout cultures. For the physicist, it is impossible not to note the link, not to see the equations behind the art. It may be a curse or it may be a blessing.

For Frank Wilczek it clearly is a blessing. In his most recent book “A Beautiful Question,” he tells the success story of symmetries in physics, and goes on to answer his question whether “the world embodies beautiful ideas” with a clear “Yes.”

Lara’s decuplet
Wilczek starts from the discovery of basic mathematical relationships like Pythagoras’ theorem (not shying away from explaining how to prove it!) and proceeds through the history of physics along selected milestones such as musical harmonies, the nature of light and the basics of optics, Newtonian gravity and its extension to General Relativity, quantum mechanics, and ultimately the standard model of particle physics. He briefly touches on condensed matter physics, graphene in particular, and has an interesting digression about the human eye’s limited ability to decode visual information (yes, the shrimp again).

In the last chapters of the book, Wilczek goes into quite some detail about the particle content of the standard model, and in just which way it seems to be not as beautiful as one may have hoped. He introduces the reader to extended theories, grand unification and supersymmetry, invented to remedy the supposed shortcomings of the standard model. The reader who is not familiar with the quantum numbers used to classify elementary particles will likely find this chapter somewhat demanding. But whether or not one makes the effort to follow the details, Wilczek gets his message across clearly: Striving for beauty in natural law has been a useful guide, and he expects it to remain one, even though he is careful to note that relying on beauty has on various occasions led to plainly wrong theories, such as the attempt to explain planetary orbits with the Platonic solids, or the idea of developing a theory of atoms based on the mathematics of knots.

“A Beautiful Question” is a skillfully written reflection, or “meditation” as Wilczek puts it. It is well structured and accompanied by many figures, including two inserts with color prints. The book also contains an extensive glossary, recommendations for further reading, and a timeline of the discoveries mentioned in the text.

My husband’s decuplet.
The content of the book is unique in the genre. David Goldberg’s book “The Universe in the Rearview Mirror: How Hidden Symmetries Shape Reality,” for example, also discusses the role of symmetries in fundamental physics, but Wilczek gives more space to the connection between aesthetics in art and science. “A Beautiful Question” picks up and expands on the theme of Steven Weinberg’s 1992 book “Dreams of a Final Theory,” which also expounded the relevance of beauty in the development of physical theories. More than 20 years have passed, but the dream is still as elusive today as it was back then.

For all his elaboration on the beauty of symmetry though, Wilczek’s book falls short of spelling out the main conundrum physicists face today: We have no reason to be confident that the laws of nature which we have yet to discover will conform to the human sense of beauty. Neither does he spend many words on aspects of beauty beyond symmetry; Wilczek only briefly touches on fractals, and never goes into the rich appeal of chaos and complexity.

My mother used to say that “symmetry is the art of the dumb,” which is maybe somewhat too harsh a criticism of the standard model, but seeing that reliance on beauty has not helped us within the last 20 years, maybe it is time to consider that the beauty of the answers might not reveal itself as effortlessly as the tiling of the plane does to a four year old. Maybe the inevitable subjectivity in our sense of aesthetic appeal that has served us well so far is about to turn from a blessing to a curse, misleading us as to where the answers lie.

Wilczek’s book contains something for every reader, whether that is the physicist interested to learn how a Nobel Prize winner thinks of the connection between ideas and reality, or the layman wanting to know more about the structure of fundamental law. “A Beautiful Question” reminds us of the many ways that science connects to the arts, and invites us to marvel at the success our species has had in unraveling the mysteries of nature.

[An edited version of this review appeared in the October issue of Physics Today.]

Service Announcement: Backreaction now on facebook!

Over the years the discussion of my blogposts has shifted over to facebook. To follow this trend and to make it easier for you to engage, I have now set up a facebook page for this blog. Just "like" the page to get the newest blogposts and other links that I post :)

Thursday, October 01, 2015

When string theorists are out of luck, will Loop Quantum Gravity come to rescue?

Tl;dr: I don’t think they want rescuing.

String theorists and researchers working on loop quantum gravity (LQG) each like to point out how their own attempt to quantize gravity is better than the other's. In the end though, they’re both trying to achieve the same thing – consistently combining quantum field theory with gravity – and it is hard to pin down just exactly what makes strings and loops incompatible. Other than egos, that is.

The obvious difference used to be that LQG works only in 4 dimensions, whereas string theory works only in 10 dimensions, and that LQG doesn’t allow for supersymmetry, which is a consequence of quantizing strings. However, several years ago the LQG framework was extended to higher dimensions, and it can now also include supergravity, so that objection is gone.

Then there’s the issue with Lorentz-invariance, which is respected in string theory, but whose fate in LQG has been the subject of much debate. Recently though, some researchers working on LQG have argued that Lorentz-invariance, used as a constraint, leads to requirements on the particle interactions, which then have to become similar to some limits found in string theory. This should come as no surprise to string theorists, who have been claiming for decades that there is one and only one way to combine all the known particle interactions...

Two doesn’t make a trend, but I have a third, which is a recent paper that appeared on the arxiv:
Bodendorfer argues in his paper that loop quantization might be useful for calculations in supergravity and thus relevant for the AdS/CFT duality.

This duality relates certain types of gauge theories – similar to those used in the standard model – with string theories. In the last decade, the duality has become exceedingly popular because it provides an alternative to calculations which are difficult or impossible in the gauge theory. The duality is normally used only in the limit where one has classical (super)gravity (λ to ∞) and an infinite number of color charges (Nc to ∞). This limit is reasonably well understood. Most string theorists however believe in the full conjecture, which is that the duality remains valid for all values of these parameters. The problem is though, if one does not work in this limit, it is darned hard to calculate anything.

A string theorist, they joke, is someone who thinks three is infinitely large. Being able to deal with a finite number of color charges is relevant for applications because the strong nuclear force has 3 colors only. If one keeps the size of the space-time fixed relative to the string length (which corresponds to fixed λ), a finite Nc however means taking into account string effects, and since the string coupling gs ~ λ/Nc goes to infinity with λ when Nc remains finite, this is a badly understood limit.
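To spell out the scaling (the standard dictionary, quoted from memory and up to factors of 4π): λ = g_YM² Nc ~ gs Nc, and the AdS curvature radius L relates to the string length via L⁴/α′² ~ λ. The classical supergravity limit takes both Nc and λ to infinity while gs goes to zero; keeping Nc finite and sending λ to infinity instead drives gs ~ λ/Nc to infinity, which is the badly understood, strongly coupled string regime.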

In his paper, Bodendorfer looks at the limit of finite Nc and λ to infinity. It’s a clever limit in that it gets rid of the string excitations, and instead moves the problem of small color charges into the realm of super-quantum gravity. Loop quantum gravity is by design a non-perturbative quantization, so it seems ideally suited to investigate this parameter range where string theorists don’t know what to do. But it’s also a strange limit in that I don’t see how to get back the perturbative limit and classical gravity once one has pushed gs to infinity. (If you have more insight than me, please leave a comment.)

In any case, the connection Bodendorfer makes in his paper is that the limit of Nc to ∞ can also be obtained in LQG by a suitable scaling of the spin network. In LQG one works with a graph whose edges carry representation labels, l. The graph describes space-time, and these labels enter the spectrum of the area operator, so that the average quantum of area increases with the label. When one keeps the network fixed, the limit of large l then blows up the area quanta and thus the whole space, which corresponds to the limit of Nc to infinity.
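For comparison, in the familiar 3+1 dimensional SU(2) formulation (Bodendorfer’s higher-dimensional construction uses a different gauge group, so take this only as an illustration of the scaling), a surface punctured by network edges with spin labels j_i has area A = 8πγ ℓ_P² Σ_i √(j_i(j_i+1)), with γ the Barbero-Immirzi parameter and ℓ_P the Planck length. Scaling up all the representation labels at fixed graph therefore scales up every quantum of area, and with it the total area, which is the blow-up that gets identified with the large Nc limit.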

So far, so good. If LQG could now be used to calculate certain observables on the gravity side, then one could further employ the duality to obtain the corresponding observables in the gauge theory. The key question is though whether the loop quantization actually reproduces the same limit that one would obtain in string theory. I am highly skeptical that this is indeed the case. Suppose it was. This would mean that LQG, like string theory, must have a dual description as a gauge theory also outside the classical limit in which they both agree (they had better do). The supersymmetric version of LQG used here has the matter content of supergravity. But it is missing all the framework that in string theory eventually gives rise to branes (stacks thereof) and compactifications, which seem so essential to obtain the duality to begin with.

And then there is the problem that in LQG it isn’t well understood how to get back classical gravity in the continuum limit, which Bodendorfer kind of assumes to be the case. If that doesn’t work, then we don’t even know whether in the classical limit the two descriptions actually agree.

Despite my skepticism, I think this is an important contribution. In the absence of experimental guidance, the only way we can find out which theory of quantum gravity is the correct description of nature is to demonstrate that there is only one way to quantize gravity that reproduces General Relativity and the Standard Model in the suitable limits while being UV-finite. Studying how the known approaches do or don’t relate to each other is a step towards understanding whether one has any options in the quantization, or whether we do indeed already have enough data to uniquely identify the sought-after theory.

Summary: It’s good someone is thinking about this. Even better this someone isn’t me. For a theory that has only one parameter, it seems to have a lot of parameters.