
Thursday, August 03, 2017

Self-tuning brings wireless power closer to reality

Cables under my desk.
One of the unlikelier fights I picked while blogging was with an MIT group that aimed to wirelessly power devices – by tunneling:
“If you bring another resonant object with the same frequency close enough to these tails then it turns out that the energy can tunnel from one object to another,” said Professor Soljacic.
They had proposed a new method for wireless power transfer using two electric circuits in magnetic resonance. But there’s no tunneling in such a resonance. Tunneling is a quantum effect. Single particles tunnel. Sometimes. But kilowatts definitely don’t.

I reached out to the professor’s coauthor, Aristeidis Karalis, who told me, even more bizarrely: “The energy stays in the system and does not leak out. It just jumps from one to the other back and forth.”

I had to go and calculate the Poynting vector to make clear the energy is – as always – transmitted from one point to another by going through all points in between. It doesn’t tunnel, and it doesn’t jump either. For the powering device the MIT guys envisioned, with its resonant coils, the energy flow is focused between the coils’ centers.

The difference between “jumping” and “flowing” energy is more than just words. Once you know that energy is flowing, you also know that if you’re in its way you might get some of it. And the more focused the energy, the higher the possible damage. This means large devices have to be close together and the energy must be spread out over large surfaces to comply with safety standards.

Back then, I did some estimates. If you want to transfer, say, 1 Watt, and you distribute it over a coil with radius 30 cm, you end up with a density of roughly 1 mW/cm². That already exceeds the safety limit (in the frequency range 30-300 MHz). And that’s leaving aside that there usually must be much more energy in the resonance field than what’s actually transmitted. And 30 cm isn’t exactly handy. In summary, it’ll work – but it’s not practical and it won’t charge the laptop without roasting what gets in the way.
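For the curious, here is that estimate in a few lines of Python. It's my own back-of-the-envelope sketch and assumes the transferred power is spread uniformly over the coil's cross-section; the actual near-field of a resonant coil is peaked, so local values are higher than this average:

    import math

    power_W = 1.0        # power to be transferred
    radius_cm = 30.0     # coil radius from the estimate above

    area_cm2 = math.pi * radius_cm**2
    density_mW_cm2 = power_W * 1000.0 / area_cm2
    # prints ~0.35 mW/cm² as a uniform average; the peaked near-field pushes
    # local values towards the roughly 1 mW/cm² quoted above
    print(f"average density: {density_mW_cm2:.2f} mW/cm²")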

The MIT guys meanwhile founded a company, Witricity, and dropped the tunneling tale.

Another problem with using resonance for wireless power is that the efficiency depends on the distance between the circuits. It doesn’t work well when they’re too far, and not when they’re too close either. That’s not great for real-world applications.

But in a recent paper published in Nature, a group from Stanford put forward a solution to this problem. And even though I’m not too enchanted by transferring power by magnetic resonance, it is a really neat idea:
Usually the resonance between two circuits is designed, meaning the receiver’s and sender’s frequencies are tuned to work together. But in the new paper, the authors instead let the frequency of the sender range freely – they merely feed it energy. They then show that the coupled system will automatically tune to a resonance frequency at which efficiency is maximal.

The maximal efficiency they reach is the same as with the fixed-frequency circuits. But it works better for shorter distances. While the usual setting is inefficient both at too short and too long distances, the self-tuned system has a stable efficiency up to some distance, and then decays. This makes the new arrangement much more useful in practice.
Efficiency of energy transfer as a function of distance between the coils (schematic). Blue curve is for the usual setting with pre-fixed frequency. Red curve is for the self-tuned circuits.

The group didn’t only calculate this, they also did an experiment to show it works. One limitation of the present setup, though, is that it works only in one direction, so it’s still not too practical. But it’s a big step forward.

Personally, I’m more optimistic about using ultrasound for wireless power transfer than about the magnetic resonance because ultrasound presently reaches larger distances. Both technologies, however, are still very much in their infancy, so hard to tell which one will win out.

(Note added: Ultrasound not looking too convincing either, ht Tim, see comments for more.)

Let me not forget to mention that in an ingenious paper which was completely lost on the world I showed you don’t need to transfer the total energy to the receiver. You only need to send the information necessary to decrease entropy in the receiver’s surrounding, then it can draw energy from the environment.

Unfortunately, I could think of how to do this only for a few atoms at a time. And, needless to say, I didn’t do any experiment – I’m a theoretician after all. While I’m sure in a few thousand years everyone will use my groundbreaking insight, until then, it’s coils or ultrasound or good, old cables.

Friday, July 28, 2017

New paper claims string theory can be tested with Bose-Einstein-Condensates

Fluorescence image of a Bose-Einstein-Condensate. Image Credits: Stefan Kuhr and Immanuel Bloch, MPQ
String theory is infamously detached from experiment. But in a new paper, a group from Mexico put forward a proposal to change that:
    String theory phenomenology and quantum many–body systems
    Sergio Gutiérrez, Abel Camacho, Héctor Hernández
    arXiv:1707.07757 [gr-qc]
Before we go on, let me be clear that they don’t want to test string theory itself, but the presence of additional dimensions of space, which is a prediction of string theory.

In the paper, the authors calculate how additional space-like dimensions affect a condensate of ultra-cold atoms, known as a Bose-Einstein-Condensate. At such low temperatures, the atoms transition to a state where their quantum wave-function acts as one and the system begins to display quantum effects, such as interference, throughout.

In the presence of extra-dimensions, every particle’s wave-function has higher harmonics because the extra-dimensions have to close up, in the simplest case like circles. The particles’ wave-functions have to fit into the extra dimensions, meaning their wave-length must be an integer fraction of the circumference.

Each of the additional dimensions has a radius of about a Planck length, which is 10⁻³⁵ m, or 15 orders of magnitude smaller than what even the LHC can probe. To excite these higher harmonics, you correspondingly need an energy of 10¹⁵ TeV, or 15 orders of magnitude higher than what the LHC can produce.
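If you want to check these orders of magnitude yourself, here is a minimal sketch with standard constants. Whether you land at 10¹⁵ or 10¹⁶ TeV depends on factors of 2π and on whether you use the Planck mass or the reduced Planck mass, so take the output as order-of-magnitude only:

    import math

    hbar_c_GeV_m = 1.973e-16      # hbar*c in GeV·m
    planck_length_m = 1.6e-35     # Planck length
    lhc_energy_TeV = 13.0         # LHC collision energy

    # energy of a mode whose wavelength is one Planck length, converted to TeV
    harmonic_energy_TeV = hbar_c_GeV_m / planck_length_m / 1000.0
    orders_above_lhc = math.log10(harmonic_energy_TeV / lhc_energy_TeV)

    print(f"excitation energy ~ 1e{math.log10(harmonic_energy_TeV):.0f} TeV")
    print(f"that is ~{orders_above_lhc:.0f} orders of magnitude above the LHC")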

How do the extra-dimensions of string theory affect the ultra-cold condensate? They don’t. That’s because at those low temperatures there is no way you can excite any of the higher harmonics. Heck, even the total energy of the condensates presently used isn’t high enough. There’s a reason string theory is famously detached from experiment – because it’s a damned high energy you must reach to see stringy effects!

So what’s the proposal in the paper then? There isn’t one. They simply ignore that the higher harmonics can’t be excited and make a calculation. Then they estimate that one needs a condensate of about a thousand particles to measure a discontinuity in the specific heat, which depends on the number of extra-dimensions.

It’s probably correct that this discontinuity depends on the number of extra-dimensions. Unfortunately, the authors don’t go back and check what mass per particle the condensate would need for this to work. I’ve put in the numbers and get something like a million tons. That gigantic mass becomes necessary because it has to combine with the minuscule temperature of about a nano-Kelvin to give a geometric mean that exceeds the Planck mass.

In summary: Sorry, but nobody’s going to test string theory with Bose-Einstein-Condensates.

Wednesday, July 19, 2017

Penrose claims LIGO noise is evidence for Cyclic Cosmology

Noise is the physicists’ biggest enemy. Unless you are a theorist whose pet idea masquerades as noise. Then you are best friends with noise. Like Roger Penrose.
    Correlated "noise" in LIGO gravitational wave signals: an implication of Conformal Cyclic Cosmology
    Roger Penrose
    arXiv:1707.04169 [gr-qc]

Roger Penrose made his name with the Penrose-Hawking theorems and twistor theory. He is also well-known for writing books with very many pages, most recently “Fashion, Faith, and Fantasy in the New Physics of the Universe.”

One man’s noise is another man’s signal.
Penrose doesn’t like most of what’s currently in fashion, but believes that human consciousness can’t be explained by known physics and that the universe is cyclically reborn. This cyclic cosmology, so his recent claim, gives rise to correlations in the LIGO noise – just like what’s been observed.

The LIGO experiment consists of two interferometers in the USA, separated by about 3,000 km. A gravitational wave signal should pass through both detectors with a delay determined by the time it takes the gravitational wave to sweep from one US-coast to the other. This delay is typically of the order of 10ms, but its exact value depends on where the waves came from.
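The 10 ms are nothing mysterious, they are just the light-travel time between the two sites; a one-line check, using the 3,000 km quoted above:

    # maximal delay between the two LIGO detectors = light-travel time between the sites
    separation_m = 3.0e6      # ~3,000 km
    c_m_per_s = 3.0e8         # speed of light
    print(f"{separation_m / c_m_per_s * 1000:.0f} ms")   # ~10 ms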

The correlation between the two LIGO detectors is one of the most important criteria used by the collaboration to tell noise from signal. The noise itself, however, isn’t entirely uncorrelated. Some sources of the correlations are known, but some are not. This is not unusual – understanding the detector is as much part of a new experiment as is the measurement itself. The LIGO collaboration, needless to say, thinks everything is under control and the correlations are adequately taken care of in their signal analysis.

A Danish group of researchers begs to differ. They recently published a criticism on the arXiv in which they complain that after subtracting the signal of the first gravitational wave event, correlations remain at the same time-delay as the signal. That clearly shouldn’t happen. First and foremost it would demonstrate a sloppy signal extraction by the LIGO collaboration.

A reply to the Danes’ criticism by Ian Harry from the LIGO collaboration quickly appeared on Sean Carroll’s blog. Ian pointed out some supposed mistakes in the Danish group’s paper. Turns out though, the mistake was on his side. Once corrected, Harry’s analysis reproduces the correlations which shouldn’t be there. Bummer.

Ian Harry did not respond to my requests for comment. Neither did Alessandra Buonanno from the LIGO collaboration, who was also acknowledged by the Danish group. David Shoemaker, the current LIGO spokesperson, let me know he has “full confidence” in the results, and also, the collaboration is working on a reply, which might however take several months to appear. In other words, go away, there’s nothing to see here.

But while we wait for the LIGO response, speculations abound about what might cause the supposed correlation. Penrose beat everyone to it with an explanation, even Craig Hogan, who has run his own experiment looking for correlated noise in interferometers and whom I was counting on.

Penrose’s cyclic cosmology works by gluing the big bang together with what we usually think of as the end of the universe – an infinite accelerated expansion into nothingness. Penrose conjectures that both phases – the beginning and the end – are conformally invariant, which means they possess a symmetry under a stretching of distance scales. Then he identifies the end of the universe with the beginning of a new one, creating a cycle that repeats indefinitely. In his theory, what we think of as inflation – the accelerated expansion in the early universe – becomes the final phase of acceleration in the cycle preceding our own.

Problem is, the universe as we presently see it is not conformally invariant. What screws up conformal invariance is that particles have masses, and these masses also set a scale. Hence, Penrose has to assume that eventually all particle masses fade away so that conformal invariance is restored.

There’s another problem. Since Penrose’s conformal cyclic cosmology has no inflation, it also lacks a mechanism to create temperature fluctuations in the cosmic microwave background (CMB). Luckily, however, the theory also gives rise to a new scalar particle that couples only gravitationally. Penrose named it “erebon” after Erebos, the ancient Greek god of darkness, and it is this particle that gives rise to the new phenomenology.

Erebos, the God of Darkness,
according to YouTube.
The erebons have a mass of about 10⁻⁵ gram because “what else could it be,” and they have a lifetime determined by the cosmological constant, presumably also because what else could it be. (Aside: Note that these are naturalness arguments.) The erebons make up dark matter and their decay causes gravitational waves that seed the CMB temperature fluctuations.

Since erebons are created at the beginning of each cycle and decay away through it, they also create a gravitational wave background. Penrose then argues that a gravitational wave signal from a binary black hole merger – like the ones LIGO has observed – should be accompanied by noise-like signals from erebons that decayed at the same time in the same galaxy. Just that this noise-like contribution would be correlated with the same time-difference as the merger signal.

In his paper, Penrose does not analyze the details of his proposal. He merely writes:
“Clearly the proposal that I am putting forward here makes many testable predictions, and it should not be hard to disprove it if it is wrong.”
In my impression, this is a sketchy idea and I doubt it will work. I don’t have a major problem with inventing some particle to make up dark matter, but I have a hard time seeing how the decay of a Planck-mass particle can give rise to a signal comparable in strength to a black hole merger (or why several of them would add up exactly for a larger signal).

Even taking this at face value, the decay signals wouldn’t only come from one galaxy but from all galaxies, so the noise should be correlated all over and at pretty much all time-scales – not just at the 12 ms delay the Danish group has claimed. Worst of all, the dominant part of the signal would come from our own galaxy, so why haven’t we seen this already?

In summary, one can’t blame Penrose for being fashionable. But I don’t think that erebons will be added to the list of LIGO’s discoveries.

Thursday, July 13, 2017

Nature magazine publishes comment on quantum gravity phenomenology, demonstrates failure of editorial oversight

I have a headache and
blame Nature magazine for it.
For about 15 years, I have worked on quantum gravity phenomenology, which means I study ways to experimentally test the quantum properties of space and time. Since 2007, my research area has had its own conference series, “Experimental Search for Quantum Gravity,” which most recently took place in September 2016 in Frankfurt, Germany.

Extrapolating from whom I personally know, I estimate that about 150-200 people currently work in this field. But I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral, who just wrote a comment for Nature magazine complaining that the research area doesn’t exist.

In their comment, titled “Witness gravity’s quantum side in the lab,” Marletto and Vedral call for “a focused meeting bringing together the quantum- and gravity-physics communities, as well as theorists and experimentalists.” Nice.

If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. The above mentioned conference series is only the most regular meeting on quantum gravity phenomenology. Also the Marcel Grossmann Meeting has sessions on the topic. Indeed, I am writing this from a conference here in Trieste, which is about “Probing the spacetime fabric: from concepts to phenomenology.”

Marletto and Vedral point out that it would be great if one could measure gravitational fields in quantum superpositions to demonstrate that gravity is quantized. They go on to lay out their own idea for such experiments, but their interest in the topic apparently didn’t go far enough to either look up the literature or actually put in the numbers.

Yes, it would be great if we could measure the gravitational field of an object in a superposition of, say, two different locations. Problem is, heavy objects – whose gravitational fields are easy to measure – decohere quickly and don’t have quantum properties. On the other hand, objects which are easy to bring into quantum superpositions are too light to measure their gravitational field.

To be clear, the challenge here is to measure the gravitational field created by the objects themselves. It is comparably easy to measure the behavior of quantum objects in the gravitational field of the Earth. That has something to do with quantum and something to do with gravity, but nothing to do with quantum gravity because the gravitational field isn’t quantized.

In their comment, Marletto and Vedral go on to propose an experiment:
“Likewise, one could envisage an experiment that uses two quantum masses. These would need to be massive enough to be detectable, perhaps nanomechanical oscillators or Bose–Einstein condensates (ultracold matter that behaves as a single super-atom with quantum properties). The first mass is set in a superposition of two locations and, through gravitational interaction, generates Schrödinger-cat states on the gravitational field. The second mass (the quantum probe) then witnesses the ‘gravitational cat states’ brought about by the first.”
This is truly remarkable, but not because it’s such a great idea. It’s because Marletto and Vedral believe they’re the first to think about this. Of course they are not.

The idea of using Schrödinger-cat states has most recently been discussed here. I didn’t write about the paper on this blog because the experimental realization faces giant challenges and I think it won’t work. There is also Anastopoulos and Hu’s CQG paper about “Probing a Gravitational Cat State” and a follow-up paper by Derakhshani, both of which likewise go unmentioned. I’d really like to know how Marletto and Vedral think they can improve on the previous proposals. Letting a graphic designer make a nice illustration to accompany their comment doesn’t really count for much in my book.

The currently most promising attempt to probe quantum gravity indeed uses nanomechanical oscillators and comes from the group of Markus Aspelmeyer in Vienna. I previously discussed their work here. This group is about six orders of magnitude away from being able to measure such superpositions. The Nature comment doesn’t mention it either.

The prospects of using Bose-Einstein condensates to probe quantum gravity have been discussed back and forth for two decades, but it is clear that this isn’t presently the best option. The reason is simple: Even if you take the largest condensate that has been created to date – something like 10 million atoms – and you calculate the total mass, you are still way below the mass of the nanomechanical oscillators. And that’s leaving aside the difficulty of creating and sustaining the condensate.
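To put numbers on this, here is a small sketch of the mass comparison. It is mine, not from any of the papers mentioned, and it assumes Rb-87 atoms for the condensate and a nanogram as a rough reference mass for a nanomechanical oscillator:

    n_atoms = 1e7                 # ~10 million atoms, the largest condensates to date
    m_atom_kg = 87 * 1.66e-27     # mass of a Rb-87 atom (my assumption for the species)
    m_bec_kg = n_atoms * m_atom_kg

    m_oscillator_kg = 1e-12       # ~1 nanogram, a rough reference value, not a specific device

    print(f"condensate mass: {m_bec_kg:.1e} kg")                         # ~1.4e-18 kg
    print(f"oscillator / condensate: {m_oscillator_kg / m_bec_kg:.0e}")  # ~1e6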

There are some other possible gravitational effects for Bose-Einstein condensates which have been investigated, but these come from violations of the equivalence principle, or rather the ambiguity of what the equivalence principle in quantum mechanics means to begin with. That’s a different story though because it’s not about measuring quantum superpositions of the gravitational field.

Besides this, there are other research directions. Paternostro and collaborators, for example, have suggested that a quantized gravitational field can exchange entanglement between objects in a way that a classical field can’t. That too, however, is a measurement which is not presently technologically feasible. A proposal closer to experimental test is that by Belenchia et al, laid out in their PRL about “Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators” (which I wrote about here).

Others look for evidence of quantum gravity in the CMB, in gravitational waves, or search for violations of the symmetries that underlie General Relativity. You can find a little summary in my blogpost “How Can we test Quantum Gravity”  or in my Nautilus essay “What Quantum Gravity Needs Is More Experiments.”

Do Marletto and Vedral mention any of this research on quantum gravity phenomenology? No.

So, let’s take stock. Here, we have two scientists who don’t know anything about the topic they write about and who ignore the existing literature. They faintly reinvent an old idea without being aware of the well-known difficulties, without quantifying the prospects of ever measuring it, and without giving proper credits to those who previously wrote about it. And they get published in one of the most prominent scientific journals in existence.

Wow. This takes us to a whole new level of editorial incompetence.

The worst part isn’t even that Nature magazine claims my research area doesn’t exist. No, it’s that I’m a regular reader of the magazine – or at least have been so far – and rely on their editors to keep me informed about what happens in other disciplines. For example with the comments pieces. And let us be clear that these are, for all I know, invited comments and not selected from among unsolicited submissions. So, some editor deliberately chose these authors.

Now, in this rare case when I can judge their content’s quality, I find the Nature editors picked two people who have no idea what’s going on, who chew up 30-year-old ideas, and omit relevant citations of timely contributions.

Thus, for me the worst part is that I will henceforth have to suspect that Nature’s coverage of other research areas is just as miserable as this.

Really, doing as much as Googling “Quantum Gravity Phenomenology” is more informative than this Nature comment.

Sunday, July 09, 2017

Stephen Hawking’s 75th Birthday Conference: Impressions

I’m back from Cambridge, where I attended the conference “Gravity and Black Holes” in honor of Stephen Hawking’s 75th birthday.

First things first, the image on the conference poster, website, banner, etc is not a psychedelic banana, but gravitational wave emission in a black hole merger. It’s a still from a numerical simulation done by a Cambridge group that you can watch in full on YouTube.



What do gravitational waves have to do with Stephen Hawking? More than you might think.

Stephen Hawking, together with Gary Gibbons, wrote one of the first papers on the analysis of gravitational wave signals. That was in 1971, briefly after gravitational waves were first “discovered” by Joseph Weber. Weber’s detection was never confirmed by other groups. I don’t think anybody knows just what he measured, but whatever it was, it clearly wasn’t gravitational waves. Also Hawking’s – now famous – area theorem stemmed from this interest in gravitational waves, which is why the paper is titled “Gravitational Radiation from Colliding Black Holes.”

Second things second, the conference launched on Sunday with a public symposium, featuring not only Hawking himself but also Brian Cox, Gabriela Gonzalez, and Martin Rees. I didn’t attend because usually nothing of interest happens at these events. I think it was recorded, but haven’t seen the recording online yet – will update if it becomes available.

Gabriela Gonzalez was spokesperson of the LIGO collaboration when the first (real) gravitational wave detection was announced, so you have almost certainly seen her. She also gave a talk at the conference on Tuesday. LIGO’s second run is almost done now, and will finish in August. Then it’s time for the next scheduled upgrade. Maximal design sensitivity isn’t expected to be reached until 2020. Above all, in the coming years, we’ll almost certainly see much better statistics and smaller error bars.

The supposed correlations in the LIGO noise were worth a joke by the session’s chairman, and I had the pleasure of talking to another member of the LIGO collaboration who recognized me as the person who wrote that upsetting Forbes piece. I clearly made some new friends there^^. I’d have some more to say about this, but will postpone this to another time.

Back to the conference. Monday began with several talks on inflation, most of which were rather basic overviews, so really not much new to report. Slava Mukhanov delivered a very Russian presentation, complaining about people who complain that inflation isn’t science. Andrei Linde then spoke about attractors in inflation, something I’ve been looking into recently, so this came in handy.

Monday afternoon, we had Jim Hartle speaking about the No-Boundary proposal – he was not at all impressed by Neil Turok et al’s recent criticism – and Raphael Bousso about the ever-tightening links between general relativity and quantum field theory. Raphael’s was probably the most technical talk of the meeting. His strikes me as a research program that will still be running in the next century. There’s much to learn and we’ve barely just begun.

On Tuesday, besides the already mentioned LIGO talk, there were a few other talks about numerical general relativity – informative but also somehow unexciting. In the afternoon, Ted Jacobson spoke about fluid analogies for gravity (which I wrote about here), and Jeff Steinhauer reported on his (still somewhat controversial) measurement of entanglement in the Hawking radiation of such a fluid analogy (which I wrote about here.)

Wednesday began with a rather obscure talk about how to shove information through wormholes in AdS/CFT that I am afraid might have been somehow linked to ER=EPR, but I missed the first half so not sure. Gary Gibbons then delivered a spirited account of gravitational memory, though it didn’t become clear to me if it’s of practical relevance.

Next, Andy Strominger spoke about infrared divergences in QED. Hearing him speak, the whole business of using soft gravitons to solve the information loss problem suddenly made a lot of sense! Unfortunately I immediately forgot why it made sense, but I promise to do more reading on that.

Finally, Gary Horowitz spoke about all the things that string theorists know and don’t know about black hole microstates, which I’d sum up as: they know less than I thought they did.

Stephen Hawking attended some of the talks, but didn’t say anything, except for a garbled sentence that seems to have been played back by accident and stumped Ted Jacobson.

All together, it was a very interesting and fun meeting, and also a good opportunity to have coffee with friends both old and new. Besides food for thought, I also brought back a conference bag, a matching pen, and a sinus infection which I blame on the air conditioning in the lecture hall.

Now I have a short break to assemble my slides for next week’s conference and then I’m off to the airport again.

Tuesday, June 20, 2017

If tensions in cosmological data are not measurement problems, they probably mean dark energy changes

Galaxy pumpkin.
Src: The Swell Designer
According to physics, the universe and everything in it can be explained by but a handful of equations. They’re difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit.

Physicists have spent much brain-power on the question where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even in the realm of science.

One of the key questions when it comes to the parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, and that would change the whole story that we currently tell about our universe.

The best known of the fundamental parameters that dictate how the universe behaves is the cosmological constant. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, constant. If it isn’t, it is more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small non-zero value is extremely well established by measurement, well enough that a Nobel Prize was awarded for its discovery in 2011.

The Nobel Prize winners Perlmutter, Schmidt, and Riess, measured the expansion rate of the universe, encoded in the Hubble parameter, by looking at supernovae distributed over various distances. They concluded that the universe is not only expanding, but is expanding at an increasing rate – a behavior that can only be explained by a nonzero cosmological constant.

It is controversial, though, exactly how fast the expansion is today, or how large the current value of the Hubble constant, H₀, is. There are different ways to measure this constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the Hubble constant is by using the cosmic microwave background (CMB). The small temperature fluctuations in the CMB spectrum encode the distribution of plasma in the early universe and the changes of the radiation since. From fitting the spectrum with the parameters that determine the expansion of the universe, physicists get a value for the Hubble constant. The most accurate of such measurements is currently that from the Planck satellite.

Another way to determine the Hubble constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their discovery, and the precision of this method has since been improved. These two ways to determine the Hubble constant give results that differ with a statistical significance of 3.4σ. That’s a probability of less than one in a thousand that the difference is due to random data fluctuations.
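In case you want to check the “one in a thousand,” here is the conversion from σ to probability, assuming Gaussian statistics and a two-sided tail (whether one or two tails is appropriate is a matter of convention):

    from scipy.stats import norm

    sigma = 3.4
    p_two_sided = 2 * norm.sf(sigma)   # Gaussian tail probability, both sides
    print(f"p = {p_two_sided:.1e}")    # ~7e-4, less than one in a thousand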

Various explanations for this have since been proposed. One possibility is that it’s a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. For many astrophysicists, this is an indicator that something’s amiss either with the Planck measurement or the data analysis.

Or maybe it’s a real effect. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to changes in the cosmological constant.

That the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But the currently most popular explanation for the data tension seems to be that the cosmological constant changes in time.

A group of researchers from Spain, for example, claims that they have a stunning 4.1 σ preference for a time-dependent cosmological constant over an actually constant one.

This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations they might have previously tried. (The theoretical physicist’s variant of post-selection bias.)

Moreover, they fit their model not only to the two above mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell what is the reason their model seems to work better. A couple of cosmologists who I asked why this group’s remarkable results have been ignored complained that the data analysis is opaque.

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing.

Weak gravitational lensing happens when a foreground galaxy distorts the images of farther away galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing which is caused by massive nearby objects – such as black holes – and deforms point-like sources to partial rings. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the shapes of galaxies.

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ₈. Their data, too, is in conflict with the CMB data from the Planck satellite.

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that, ahead of all other explanations, the one that works best is that the cosmological constant changes with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are cautious and most of them bet on a systematic problem with the Planck data. However, if the Planck measurement receives independent confirmation, the next best bet is on time-dependent dark energy. It wouldn’t make our future any brighter though. The universe would still expand forever into cold darkness.


[This article previously appeared on Starts With A Bang.]

Update June 21: Corrected several sentences to address comments below.

Wednesday, June 14, 2017

What’s new in high energy physics? Clockworks.

Clockworks. [Img via dwan1509].
High energy physics has phases. I don’t mean phases like matter has – solid, liquid, gaseous and so on. I mean phases like cranky toddlers have: One week they eat nothing but noodles, the next week anything as long as it’s white, then toast with butter but it must be cut into triangles.

High energy physics is like this. Twenty years ago, it was extra dimensions, then we had micro black holes, unparticles, little Higgses – and the list goes on.

But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.

The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” though the idea just took a blow and I’m not sure it’ll go much farther.

The origins of the model go back to late 2015, when the term “clockwork mechanism” was coined by Kaplan and Rattazzi, though Cho and Im pursued a similar idea and published it at almost the same time. In August 2016, clockworks were picked up by Giudice and McCullough, who advertised the model as “a useful tool for model-building applications” that “offers a solution to the Higgs naturalness problem.”

Gears. Img Src: Giphy.
The Higgs naturalness problem, to remind you, is that the mass of the Higgs receives large quantum corrections. The Higgs is the only particle in the standard model that suffers from this problem because it’s the only scalar. These quantum corrections can be cancelled by subtracting a constant so that the remainder fits the observed value, but then the constant would have to be very finely tuned. Most particle physicists think that this is too much of a coincidence and hence search for other explanations.
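To get a feeling for the numbers, here is a toy illustration of the tuning. It is my own and deliberately crude: it assumes the corrections are of the order of the Planck mass squared, which is the worst case usually quoted:

    m_higgs_GeV = 125.0        # observed Higgs mass
    m_planck_GeV = 1.22e19     # Planck mass, the assumed size of the corrections

    # relative precision to which the subtraction constant must cancel the
    # corrections so that the observed Higgs mass is left over
    tuning = (m_higgs_GeV / m_planck_GeV) ** 2
    print(f"required relative precision: {tuning:.0e}")   # ~1e-34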

Before the LHC turned on, the most popular solution to the Higgs naturalness issue was that some new physics would show up in the energy range comparable to the Higgs mass. We now know, however, that there’s no new physics nearby, and so the Higgs mass has remained unnatural.

Clockworks are a mechanism to create very small numbers in a “natural” way, that is from numbers that are close to 1. This can be done by copying a field multiple times and then coupling each copy to two neighbors so that they form a closed chain. This is the “clockwork” and it is assumed to have couplings with values close to 1 which are, however, asymmetric among the chain neighbors.

The clockwork’s chain of fields has eigenmodes that can be obtained by diagonalizing the mass matrix. These modes are the “gears” of the clockwork and they contain one massless particle.

The important feature of the clockwork is now that this massless particle’s mode has a coupling that scales with the clockwork’s coupling taken to the N-th power, where N is the number of clockwork gears. This means even if the original clockwork coupling was only a little smaller than 1, the coupling of the lightest clockwork mode becomes small very fast when the clockwork grows.

Thus, clockworks are basically a complicated way to make a number of order 1 small by exponentiating it.
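If you want to see the exponentiation at work, here is a minimal numerical sketch. It assumes the standard scalar clockwork potential V = (m²/2) Σ_j (φ_j - q φ_{j+1})², and the values of q and N are arbitrary example choices:

    import numpy as np

    q, N = 3.0, 10                   # coupling asymmetry and number of gears (examples)
    M2 = np.zeros((N + 1, N + 1))    # mass-squared matrix in units of m²
    for j in range(N):
        M2[j, j] += 1.0
        M2[j + 1, j + 1] += q**2
        M2[j, j + 1] -= q
        M2[j + 1, j] -= q

    eigvals, eigvecs = np.linalg.eigh(M2)
    zero_mode = eigvecs[:, 0] / eigvecs[0, 0]    # massless mode, normalized to the first site

    print(f"smallest eigenvalue:    {eigvals[0]:.1e}")           # ~0, the massless gear
    print(f"component on last site: {abs(zero_mode[-1]):.1e}")   # ~q**(-N)
    print(f"q**(-N):                {q**(-N):.1e}")

Anything attached to the far end of the chain therefore couples to the massless mode with a strength suppressed by 1/q^N, which is the exponential suppression described above.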

I’m an outspoken critic of arguments from naturalness (and have been long before we had the LHC data) so it won’t surprise you to hear that I am not impressed. I fail to see how choosing one constant to match observation is supposedly worse than introducing not only a new constant, but also N copies of some new field with a particular coupling pattern.

Either way, by March 2017, Ben Allanach reports from the Rencontres de Moriond – the most important annual conference in particle physics – that clockworks are “getting quite a bit of attention” and are “new fertile ground.”

Ben is right. Clockworks contain one light and weakly coupled mode – difficult to detect because of the weak coupling – and a spectrum of strongly coupled but massive modes – difficult to detect because they’re massive. That makes the model appealing because it will remain impossible to rule it out for a while. It is, therefore, a perfect playground for phenomenologists.

And sure enough, the arXiv has since seen further papers on the topic. There’s clockwork inflation and clockwork dark matter, a clockwork axion and clockwork composite Higgses – you get the picture.

But then, in April 2017, a criticism of the clockwork mechanism appears on the arXiv. Its authors Craig, Garcia Garcia, and Sutherland point out that the clockwork mechanism can only be used if the fields in the clockwork’s chain have abelian symmetry groups. If the group isn’t abelian the generators will mix together in the zero mode, and maintaining gauge symmetry then demands that all couplings be equal to one. This severely limits the application range of the model.

A month later, Giudice and McCullough reply to this criticism essentially by saying “we know this.” I have no reason to doubt it, but I still found the Craig et al criticism useful for clarifying what clockworks can and can’t do. This means in particular that the supposed solution to the hierarchy problem does not work as desired because to maintain general covariance one is forced to put a hierarchy of scales into the coupling already.

I am not sure whether this will discourage particle physicists from pursuing the idea further or whether more complicated versions of clockworks will be invented to save naturalness. But I’m confident that – like a toddler’s phase – this too shall pass.

Wednesday, June 07, 2017

Dear Dr B: What are the chances of the universe ending out of nowhere due to vacuum decay?

    “Dear Sabine,

    my names [-------]. I'm an anxiety sufferer of the unknown and have been for 4 years. I've recently came across some articles saying that the universe could just end out of no where either through false vacuum/vacuum bubbles or just ending and I'm just wondering what the chances of this are occurring anytime soon. I know it sounds silly but I'd be dearly greatful for your reply and hopefully look forward to that

    Many thanks

    [--------]”


Dear Anonymous,

We can’t predict anything.

You see, we make predictions by seeking explanations for available data, and then extrapolating the best explanation into the future. It’s called “abductive reasoning,” or “inference to the best explanation” and it sounds reasonable until you ask why it works. To which the answer is “Nobody knows.”

We know that it works. But we can’t justify inference with inference, hence there’s no telling whether the universe will continue to be predictable. Consequently, there is also no way to exclude that tomorrow the laws of nature will stop and planet Earth will fall apart. But do not despair.

Francis Bacon – widely acclaimed as the first to formulate the scientific method – might have reasoned his way out by noting there are only two possibilities. Either the laws of nature will break down unpredictably or they won’t. If they do, there’s nothing we can do about it. If they don’t, it would be stupid not to use predictions to improve our lives.

It’s better to prepare for a future that you don’t have than to not prepare for a future you do have. And science is based on this reasoning: We don’t know why the universe is comprehensible and why the laws of nature are predictive. But we cannot do anything about unknown unknowns anyway, so we ignore them. And if we do that, we can benefit from our extrapolations.

Just how well scientific predictions work depends on what you try to predict. Physics is the currently most predictive discipline because it deals with the simplest of systems, those whose properties we can measure to high precision and whose behavior we can describe with mathematics. This enables physicists to make quantitatively accurate predictions – if they have sufficient data to extrapolate.

The articles that you read about vacuum decay, however, are unreliable extrapolations of incomplete evidence.

Existing data in particle physics are well-described by a field – the Higgs-field – that fills the universe and gives masses to elementary particles. This works because the value of the Higgs-field is different from zero even in vacuum. We say it has a “non-vanishing vacuum expectation value.” The vacuum expectation value can be calculated from the masses of the known particles.
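For the record, the number that comes out of this: the standard way to quote it is via the Fermi constant, using the tree-level relation v = (√2 G_F)^(-1/2). A one-line check (my sketch):

    G_F = 1.166e-5                 # Fermi constant in GeV⁻²
    v = (2**0.5 * G_F) ** -0.5     # tree-level relation v = (sqrt(2)*G_F)^(-1/2)
    print(f"v = {v:.0f} GeV")      # ~246 GeV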

In the currently most widely used theory for the Higgs and its properties, the vacuum expectation value is non-zero because the Higgs has a potential with a local minimum at a non-zero value of the field.

We do not, however, know that the minimum which the Higgs currently occupies is the only minimum of the potential and – if the potential has another minimum – whether the other minimum would be at a smaller energy. If that was so, then the present state of the vacuum would not be stable, it would merely be “meta-stable” and would eventually decay to the lowest minimum. In this case, we would live today in what is called a “false vacuum.”

Image Credits: Gary Scott Watson.


If our vacuum decays, the world will end – I don’t know a more appropriate expression. Such a decay, once triggered, releases an enormous amount of energy – and it spreads at the speed of light, tearing apart all matter it comes in contact with, until all vacuum has decayed.

How can we tell whether this is going to happen?

Well, we can try to measure the properties of the Higgs’ potential and then extrapolate it away from the minimum. This works much like Taylor series expansions, and it has the same pitfalls. Indeed, making predictions about the minima of a function based on a polynomial expansion is generally a bad idea.

Just look for example at the Taylor series of the sine function. The full function has an infinite number of minima at exactly the same value but you’d never guess from the first terms in the series expansion. First it has one minimum, then it has two minima of different value, then again it has only one – and the higher the order of the expansion the more minima you get.
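You can see this explicitly by counting the local minima of the truncated series on some fixed interval; a little sketch of mine, where the interval and the orders shown are arbitrary choices:

    import math
    import numpy as np

    x = np.linspace(-10, 10, 20001)

    def taylor_sin(x, order):
        # partial sum x - x^3/3! + x^5/5! - ... up to the given order
        return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
                   for k in range((order + 1) // 2))

    for order in (3, 5, 7, 9):
        y = taylor_sin(x, order)
        is_min = (y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])   # discrete local minima
        print(f"order {order}: {is_min.sum()} local minima on [-10, 10]")
    # prints 1, 2, 1, 2; none of which tells you much about the minima of sin(x)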

The situation for the Higgs’ potential is more complicated because the coefficients are not constant, but the argument is similar. If you extract the best-fit potential from the available data and extrapolate it to other values of the Higgs-field, then you find that our present vacuum is meta-stable.

The figure below shows the situation for the current data (figure from this paper). The horizontal axis is the Higgs mass, the vertical axis the mass of the top-quark. The current best-fit is the upper left red point in the white region labeled “Metastability.”
Figure 2 from Bednyakov et al, Phys. Rev. Lett. 115, 201802 (2015).


This meta-stable vacuum has, however, a ridiculously long lifetime of about 10⁶⁰⁰ times the current age of the universe, give or take a few billion billion billion years. This means that the vacuum will almost certainly not decay until all stars have burnt out.

However, this extrapolation of the potential assumes that there aren’t any unknown particles at energies higher than what we have probed, and no other changes to physics as we know it either. And there is simply no telling whether this assumption is correct.

The analysis of vacuum stability is not merely an extrapolation of the presently known laws into the future – which would be justified – it is also an extrapolation of the presently known laws into an untested energy regime – which is not justified. This stability debate is therefore little more than a mathematical exercise, a funny way to quantify what we already know about the Higgs’ potential.

Besides, from all the ways I can think of humanity going extinct, this one worries me least: It would happen without warning, it would happen quickly, and nobody would be left behind to mourn. I worry much more about events that may cause much suffering, like asteroid impacts, global epidemics, nuclear war – and my worry-list goes on.

Not all worries can be cured by rational thought, but since I double-checked you want facts and not comfort, fact is that current data indicates our vacuum is meta-stable. But its decay is an unreliable prediction based on the unfounded assumption that either there are no changes to physics at energies beyond the ones we have tested, or that such changes don’t matter. And even if you buy this, the vacuum almost certainly wouldn’t decay as long as the universe is hospitable for life.

Particle physics is good for many things, but generating potent worries isn’t one of them. The biggest killer in physics is still the 2nd law of thermodynamics. It will get us all, eventually. But keep in mind that the only reason we play the prediction game is to get the best out of the limited time that we have.

Thanks for an interesting question!

Wednesday, May 31, 2017

Does parametric resonance solve the cosmological constant problem?

An oscillator too.
Source: Giphy.
Tl;dr: Ask me again in ten years.

A lot of people asked for my opinion about a paper by Wang, Zhu, and Unruh that recently got published in Physical Review D, one of the top journals in the field.


Following a press-release from UBC, the paper has attracted quite some attention in the pop science media, which is remarkable for such a long and technically heavy work. My summary of the coverage so far is “bla-bla-bla parametric resonance.”

I tried to ignore the media buzz a) because it’s a long paper, b) because it’s a long paper, and c) because I’m not your public community debugger. I actually have my own research that I’m more interested in. Sulk.

But of course I eventually came around and read it. Because I toyed with a similar idea a while ago and it worked badly. So, clearly, these folks outscored me, and after some introspection I thought that instead of being annoyed by the attention they got, I should figure out why they succeeded where I failed.

Turns out that once you see through the math, the paper is not so difficult to understand. Here’s the quick summary.

One of the major problems in modern cosmology is that vacuum fluctuations of quantum fields should gravitate. Unfortunately, if one calculates the energy density and pressure contained in these fluctuations, the values are much too large to be compatible with the expansion history of the universe.

This vacuum energy gravitates the same way as the cosmological constant. Such a large cosmological constant, however, should lead to a collapse of the universe long before the formation of galactic structures. If you switch the sign, the universe doesn’t collapse but expands so rapidly that structures can’t form because they are ripped apart. Evidently, since we are here today, that didn’t happen. Instead, we observe a small positive cosmological constant and where did that come from? That’s the cosmological constant problem.

The problem can be solved by introducing an additional cosmological constant that cancels the vacuum energy from quantum field theory, leaving behind the observed value. This solution is both simple and consistent. It is, however, unpopular because it requires fine-tuning the additional term so that the two contributions almost – but not exactly – cancel. (I believe this argument to be flawed, but that’s a different story and shall be told another time.) Physicists therefore have tried for a long time to explain why the vacuum energy isn’t large or doesn’t couple to gravity as expected.

Strictly speaking, however, the vacuum energy density is not constant, but – as you expect of fluctuations – it fluctuates. It is merely the average value that acts like a cosmological constant, but the local value should change rapidly both in space and in time. (These fluctuations are why I’ve never bought the “degravitation” idea according to which the vacuum energy decouples because gravity has a built-in high-pass filter. In that case, you could decouple a cosmological constant, but you’d still be stuck with the high-frequency fluctuations.)

In the new paper, the authors make the audacious attempt to calculate how gravity reacts to the fluctuations of the vacuum energy. I say it’s audacious because this is not a weak-field approximation and solving the equations for gravity without a weak-field approximation and without symmetry assumptions (as you would have for the homogeneous and isotropic case) is hard, really hard, even numerically.

The vacuum fluctuations are dominated by very high frequencies corresponding to a usually rather arbitrarily chosen ‘cutoff’ – denoted Λ – where the effective theory for the fluctuations should break down. One commonly assumes that this frequency roughly corresponds to the Planck mass, m_p. The key to understanding the new paper is that the authors do not assume this cutoff, Λ, to be at the Planck mass, but at a much higher energy, Λ >> m_p.

As they demonstrate in the paper, massaged into a suitable form, one of the field equations for gravity takes the form of an oscillator equation with a time- and space-dependent coupling term. This means, essentially, space-time at each place has the properties of a driven oscillator.

The important observation that solves the cosmological constant problem is then that the typical resonance frequency of this oscillator is Λ²/m_p, which is by assumption much larger than the main frequency of fluctuations the oscillator is driven by, which is Λ. This means that space-time resonates with the frequency of the vacuum fluctuations – leading to an exponential expansion like that from a cosmological constant – but it resonates only with higher harmonics, so that the resonance is very weak.

The result is that the amplitude of the oscillations grows exponentially, but it grows slowly. The effective cosmological constant they get by averaging over space is therefore not, as one would naively expect, Λ, but (omitting factors that are hopefully of order one) Λ exp(−Λ²/m_p). One hence uses a trick quite common in high-energy physics, that one can create a large hierarchy of numbers by having a small hierarchy of numbers in an exponent.

In conclusion, by pushing the cutoff above the Planck mass, they suppress the resonance and slow down the resulting acceleration.
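The mechanism itself, weak growth for a strongly detuned parametric drive, is easy to see in a toy model. Below is my own sketch of a parametrically driven oscillator, x'' + w0²(1 + eps cos(wd t)) x = 0, which has nothing to do with the paper's full general-relativistic calculation but shows the effect they exploit: the amplitude explodes when the drive is near twice the natural frequency and barely moves when it is far detuned.

    import numpy as np
    from scipy.integrate import solve_ivp

    def max_amplitude(wd, w0=1.0, eps=0.2, t_max=200.0):
        # integrate x'' + w0^2 * (1 + eps*cos(wd*t)) * x = 0 from x=1, x'=0
        rhs = lambda t, y: [y[1], -w0**2 * (1 + eps * np.cos(wd * t)) * y[0]]
        sol = solve_ivp(rhs, (0, t_max), [1.0, 0.0], max_step=0.01)
        return np.max(np.abs(sol.y[0]))

    print(f"drive at 2.0*w0 (resonant): {max_amplitude(2.0):.1e}")   # grows by orders of magnitude
    print(f"drive at 0.3*w0 (detuned):  {max_amplitude(0.3):.1e}")   # stays of order one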

Neat, yes.

But I know you didn’t come for the nice words, so here’s the main course. The idea has several problems. Let me start with the most basic one, which is also the reason I once discarded a (related but somewhat different) project. It’s that their solution doesn’t actually solve the field equations of gravity.

It’s not difficult to see. Forget all the stuff about parametric resonance for a moment. Their result doesn’t solve the field equations if you set all the fluctuations to zero, so that you get back the case with a cosmological constant. That’s because if you integrate the second Friedmann-equation for a negative cosmological constant you can only solve the first Friedmann-equation if you have negative curvature. You then get Anti-de Sitter space. They have not introduced a curvature term, hence the first Friedmann-equation just doesn’t have a (real valued) solution.
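For reference, these are the two equations in question, written in the standard form for a homogeneous and isotropic universe with scale factor a, spatial curvature k, energy density ρ and pressure p (my notation, not necessarily the paper's):

    \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2} + \frac{\Lambda}{3},
    \qquad
    \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) + \frac{\Lambda}{3}.

With ρ = p = 0, a negative Λ and k = 0, the first equation has no real solution; you need k < 0, which is the negatively curved Anti-de Sitter case I just referred to.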

Now, if you turn back on the fluctuations, their solution should reduce to the homogeneous and isotropic case on short distances and short times, but it doesn’t. It would take a very good reason for why that isn’t so, and no reason is given in the paper. It might be possible, but I don’t see how.

I further find it perplexing that they rest their argument on results that were derived in the literature for parametric resonance on the assumption that solutions are linearly independent. General relativity, however, is non-linear. Therefore, one generally isn’t free to combine solutions arbitrarily.

So far that’s not very convincing. To make matters worse, if you don’t have homogeneity, you have even more equations that come from the position-dependence and they don’t solve these equations either. Let me add, however, that this doesn’t worry me all that much because I think it might be possible to deal with it by exploiting the stochastic properties of the local oscillators (which are homogeneous again, in some sense).

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I’d be willing to ignore this issue if the rest was working fine, but seeing that it doesn’t, it just adds to my misgivings.

The other major problem with their approach is that the limit they work in doesn’t make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as ‘semi-classical gravity’ in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

In conclusion, “bla-bla-bla parametric resonance” is a pretty accurate summary.

How serious are these problems? Is there something in the paper that might be interesting after all?

Maybe. But the assumption (see below Eq (42)) that the fields that source the fluctuations satisfy normal energy conditions is, I believe, a non-starter if you want to get an exponential expansion. Even if you introduce a curvature term so that you can solve the equations, I can’t for the hell of it see how you average over locally approximately Anti-de Sitter spaces to get an approximate de Sitter space. You could of course just flip the sign, but then the second Friedmann equation no longer describes an oscillator.

Maybe allowing complex-valued solutions is a way out. Complex numbers are great. Unfortunately, nature’s a bitch and it seems we don’t live in a complex manifold. Hence, you’d then have to find a way to get rid of the imaginary numbers again. In any case, that’s not discussed in the paper either.

I admit that the idea of using a de-tuned parametric resonance to decouple vacuum fluctuations and limit their impact on the expansion of the universe is nice. Maybe I just lack vision and further work will solve the above mentioned problems. More generally, I think numerically solving the field equations with stochastic initial conditions is of general interest and it would be great if their paper inspires follow-up studies. So, give it ten years, and then ask me again. Maybe something will have come out of it.

In other news, I have also written a paper that explains the cosmological constant and I haven’t only solved the equations that I derived, I also wrote a Maple work-sheet that you can download and check the calculation for yourself. The paper was just accepted for publication in PRD.

As far as my self-reflection is concerned, I concluded I might be too ambitious. It’s much easier to solve equations if you don’t actually solve them.


I gratefully acknowledge helpful conversations with two of this paper’s authors, who have been very, very patient with me. Sorry I didn’t have anything nicer to say.

Friday, May 26, 2017

Can we probe the quantization of the black hole horizon with gravitational waves?


Tl;dr: Yes, but the testable cases aren’t the most plausible ones.

It’s the year 2017, but we still don’t know how space and time get along with quantum mechanics. The best clue so far comes from Stephen Hawking and Jacob Bekenstein. They made one of the most surprising finds that theoretical physics saw in the 20th century: Black holes have entropy.

It was a surprise because entropy is a measure for unresolved microscopic details, but in general relativity black holes don’t have details. They are almost featureless balls. That they nevertheless seem to have an entropy – and a gigantically large one at that – strongly indicates that black holes can be understood only by taking into account quantum effects of gravity. The large entropy, so the idea goes, quantifies all the ways the quantum structure of black holes can differ.

The Bekenstein-Hawking entropy scales with the horizon area of the black hole and is usually interpreted as a measure for the number of elementary areas of size Planck-length squared. A Planck-length is a tiny 10^-35 meters. This area-scaling is also the basis of the holographic principle, which has dominated research in quantum gravity for some decades now. If anything is important in quantum gravity, this is.

This interpretation implies that the area of the black hole horizon always has to be a multiple of the elementary Planck area. However, since the Planck area is so small compared to the size of astrophysical black holes – ranging from some kilometers to some billion kilometers – you’d never notice the quantization just by looking at a black hole. If you got to look at it to begin with. So it seems like a safely untestable idea.
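
To get a feeling for the numbers, here is a small back-of-the-envelope sketch in Python; the black hole mass is just an illustrative choice:

# How many Planck areas fit on the horizon of a stellar-mass black hole,
# and how big a change of one Planck area is relative to the total.
import math

G = 6.674e-11       # gravitational constant, m^3/(kg s^2)
c = 2.998e8         # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
M_sun = 1.989e30    # solar mass, kg

l_planck_sq = hbar * G / c**3            # Planck length squared, ~2.6e-70 m^2

M = 10 * M_sun                           # illustrative stellar-mass black hole
r_s = 2 * G * M / c**2                   # Schwarzschild radius, ~30 km
area = 4 * math.pi * r_s**2              # horizon area, ~1e10 m^2

n = area / l_planck_sq                   # number of Planck areas, ~4e79
print(f"Planck areas on the horizon: {n:.1e}")
print(f"relative change from one Planck area: {1/n:.1e}")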

A few months ago, however, I noticed an interesting short note on the arXiv in which the authors claim that one can probe the black hole quantization with gravitational waves emitted from a black hole, for example in the ringdown after a merger event like the one seen by LIGO:
    Testing Quantum Black Holes with Gravitational Waves
    Valentino F. Foit, Matthew Kleban
    arXiv:1611.07009 [hep-th]

The basic idea is simple. Assume it is correct that the black hole area is always a multiple of the Planck area and that gravity is quantized so that it has a particle – the graviton – associated with it. If the only way for a black hole to emit a graviton is to change its horizon area in multiples of the Planck area, then this dictates the energy the black hole loses when the area shrinks, because the area is fixed by the black hole’s mass. The Planck-area quantization hence sets the frequency of the emitted graviton.
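
To see the scale this implies, here is a rough numerical sketch. I write the area spacing as dA = alpha * l_P^2 with alpha a free parameter of order one; both this convention and the remnant mass are my own illustrative choices, so the numerical prefactor may differ from the one used in the paper:

# Emission frequency implied by quantizing the horizon area in steps
# dA = alpha * l_P^2, for a Schwarzschild black hole with A = 16*pi*(G*M/c^2)^2.
import math

G = 6.674e-11
c = 2.998e8
hbar = 1.055e-34
h = 2 * math.pi * hbar
M_sun = 1.989e30

alpha = 1.0                  # free model parameter, illustrative
M = 60 * M_sun               # remnant mass comparable to the LIGO mergers

dA = alpha * hbar * G / c**3                  # one area step
dM = dA * c**4 / (32 * math.pi * G**2 * M)    # from dA/dM = 32*pi*G^2*M/c^4
f = dM * c**2 / h                             # frequency of the emitted quantum

print(f"frequency per area step: {f:.1f} Hz")  # ~5 Hz, scales as alpha/M
# For comparison, the classical ringdown of such a black hole peaks at a few
# hundred Hz, so for alpha of order one to a hundred this falls into or near
# the band LIGO is sensitive to (roughly 10 Hz to a few kHz).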

A gravitational wave is nothing but a large number of gravitons. According to the area quantization, the wavelengths of the emitted gravitons are of the order of the black hole radius, which is what one expects to dominate the emission during the ringdown. However, so the authors argue, the spectrum of the gravitational wave should be much narrower in the quantum case.

Since the model that quantizes the black hole horizon in Planck-area chunks depends on a free parameter, it would take two measurements of black hole ringdowns to rule out the scenario: The first to fix the parameter, the second to check whether the same parameter works for all measurements.

It’s a simple idea but it may be too simple. The authors are careful to list the possible reasons for why their argument might not apply. I think it doesn’t apply for a reason that’s a combination of what is on their list.

A classical perturbation of the horizon leads to the simultaneous emission of a huge number of gravitons, and there is no good reason why every single one of them must have exactly the frequency that corresponds to a change of one Planck area, as long as the total energy adds up properly.

I am not aware, however, of a good theoretical treatment of how this classical limit emerges from the area-quantization. It might indeed not work in some of the more audacious proposals we have recently seen, like Gia Dvali’s idea that black holes are condensates of gravitons. Scenarios like Dvali’s might then be testable with the ringdown characteristics. I’m sure we will hear more about this in the coming years as LIGO accumulates data.

What this proposed test would probe, therefore, is a failure to reproduce general relativity for large oscillations of the black hole horizon. Clearly, it’s something that we should look for in the data. But I don’t think black holes will release their secrets quite as easily.

Friday, May 19, 2017

Can we use gravitational waves to rule out extra dimensions – and string theory with it?

Gravitational waves, computer simulation. Credits: Henze, NASA

Tl;dr: Probably not.

Last week I learned from New Scientist that “Gravitational waves could show hints of extra dimensions.” The article is about a paper which recently appeared on the arXiv.

The claim in this paper is nothing short of stunning. Authors Andriot and Gómez argue that if our universe has additional dimensions, no matter how small, then we could find out using gravitational waves in the frequency regime accessible by LIGO.

While LIGO alone cannot do it, because the measurement requires three independent detectors, upcoming experiments could soon either confirm or forever rule out extra dimensions – and kill string theory along the way. That, ladies and gentlemen, would be the discovery of the millennium. And, almost equally stunning, you heard it first from New Scientist.

Additional dimensions are today primarily associated with string theory, but the idea is much older. In the context of general relativity, it dates back to the work of Kaluza and Klein in the 1920s. I came across their papers as an undergraduate and was fascinated. Kaluza and Klein showed that if you add a fourth space-like coordinate to our universe and curl it up to a tiny circle, you don’t get back general relativity – you get back general relativity plus electrodynamics.

In the presently most widely used variants of string theory one has not one, but six additional dimensions and they can be curled up – or ‘compactified,’ as they say – to complicated shapes. But a key feature of the original idea survives: Waves which extend into the extra dimension must have wavelengths in integer fractions of the extra dimension’s radius. This gives rise to an infinite number of higher harmonics – the “Kaluza-Klein tower” – that appear like massive excitations of any particle that can travel into the extra dimensions.

The mass of these excitations is inversely proportional to the radius (in natural units). This means if the radius is small, one needs a lot of energy to create an excitation, and this explains why we haven’t yet noticed the additional dimensions.
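
For a sense of scale, here is a small Python sketch using the standard conversion m c^2 ≈ ħc/R; the radii below are illustrative choices, not values from any particular model:

# Mass scale of the first Kaluza-Klein excitation for a given radius R,
# using m*c^2 ~ hbar*c / R.
hbar_c_eV_m = 1.97327e-7    # hbar*c in eV*m (= 197.327 MeV*fm)

for radius_m in (1e-6, 1e-12, 1e-18):
    mass_eV = hbar_c_eV_m / radius_m
    print(f"R = {radius_m:.0e} m  ->  first KK mass ~ {mass_eV:.1e} eV")
# A micrometer-sized dimension corresponds to ~0.2 eV; a radius of
# 1e-18 m already corresponds to ~200 GeV.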

In the most commonly used model, one further assumes that the only particle that experiences the extra-dimensions is the graviton – the hypothetical quantum of the gravitational interaction. Since we have not measured the gravitational interaction on short distances as precisely as the other interactions, such gravity-only extra-dimensions allow for larger radii than all-particle extra-dimensions (known as “universal extra-dimensions”). In the new paper, the authors deal with gravity-only extra-dimensions.

From the current lack of observation, one can then derive bounds on the size of the extra-dimension. These bounds depend on the number of extra-dimensions and on their intrinsic curvature. For the simplest case – the flat extra-dimensions used in the paper – the bounds range from a few micrometers (for two extra-dimensions) to a few inverse MeV for six extra dimensions (natural units again).

Such extra-dimensions do more, however, than giving rise to a tower of massive graviton excitations. Gravitational waves have spin two regardless of the number of spacelike dimensions, but the number of possible polarizations depends on the number of dimensions. More dimensions, more possible polarizations. And the number of polarizations, importantly, doesn’t depend on the size of the extra-dimensions at all.
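
The counting itself is simple: a massless spin-2 field in D spacetime dimensions has D(D-3)/2 independent polarizations. A minimal sketch:

# Independent polarizations of a massless spin-2 field (the graviton)
# in D spacetime dimensions: D*(D-3)/2.
def graviton_polarizations(D: int) -> int:
    return D * (D - 3) // 2

for D in (4, 5, 6, 10):
    print(f"D = {D:2d}: {graviton_polarizations(D)} polarizations")
# D = 4 gives the familiar 2 (plus and cross); every additional dimension
# adds more, no matter how small that dimension is.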

In the new paper, the authors point out that the additional polarization of the graviton affects the propagation even of the non-excited gravitational waves, i.e., the ones that we can measure. The modified geometry of general relativity gives rise to a “breathing mode,” that is, a gravitational wave which expands and contracts synchronously in the two (large) dimensions perpendicular to the direction of the wave. Such a breathing mode does not exist in normal general relativity, but it is not specific to extra-dimensions; other modifications of general relativity also have a breathing mode. Still, its non-observation would indicate no extra-dimensions.

But an old problem of Kaluza-Klein theories stands in the way of drawing this conclusion. The radii of the additional dimensions (also known as “moduli”) are unstable. You can assume that they have particular initial values, but there is no reason for the radii to stay at these values. If you shake an extra-dimension, its radius tends to run away. That’s a problem because then it becomes very difficult to explain why we haven’t yet noticed the extra-dimensions.

To deal with the unstable radius of an extra-dimension, theoretical physicists hence introduce a potential with a minimum at which the value of the radius is stuck. This isn’t optional – it’s necessary to prevent conflict with observation. One can debate how well-motivated that is, but it’s certainly possible, and it removes the stability problem.

Fixing the radius of an extra-dimension, however, will also make it more difficult to wiggle it – after all, that’s exactly what the potential was made to do. Unfortunately, in the above-mentioned paper the authors don’t have stabilizing potentials.

I do not know for sure what stabilizing the extra-dimensions would do to their analysis. This would depend not only on the type and number of extra-dimension but also on the potential. Maybe there is a range in parameter-space where the effect they speak of survives. But from the analysis provided so far it’s not clear, and I am – as always – skeptical.

In summary: I don’t think we’ll rule out string theory any time soon.

[Updated to clarify breathing mode also appears in other modifications of general relativity.]

Tuesday, May 16, 2017

“Not a Toy” - New Video about Symmetry Breaking

Here is the third and last of the music videos I produced together with Apostolos Vasileiadis and Timo Alho, sponsored by FQXi. The first two are here and here.


In this video, I am be-singing a virtual particle pair that tries to separate and, quite literally, reflects on the inevitable imperfection of reality. The lyrics of this song went through an estimated ten thousand iterations until we finally settled on one. After this, none of us was in the mood to fight over a second verse, but I think the first has enough words already.

With that, I have reached the end of what little funding I had. And unfortunately, the Germans haven’t yet figured out that social media benefits science communication. Last month I heard a seminar on public outreach that didn’t so much as mention the internet. I kid you not. There are foundations here who’d rather spend 100k on an event that reaches 50 people than a tenth of that to reach 100 times as many people. In some regards, Germans are pretty backwards.

This means from here on you’re back to my crappy camcorder and the always same three synthesizers unless I can find other sponsors. So, in your own interest, share the hell out of this!

Also, please let us know which video was your favorite and why because among the three of us, we couldn’t agree.

As previously, the video has captions which you can turn on by clicking on CC in the YouTube bottom bar. For your convenience, here are the lyrics:

Not A Toy

We had the signs for taking off,
The two of us we were on top,
I had never any doubt,
That you’d be there when things got rough.

We had the stuff to do it right,
As long as you were by my side,
We were special, we were whole,
From the GUT down to the TOE.

But all the harmony was wearing off,
It was too much,
We were living in a fiction,
Without any imperfection.

[Bridge]
Every symmetry
Has to be broken,
Every harmony
Has to decay.

[Chorus]
Leave me alone, I’m
Tired of talking,
I’m not a toy,
I’m not a toy.

Leave alone now,
I’m not a token,
I’m not a toy,
I’m not a toy.

[Interlude]
We had the signs for taking off
Harmony was wearing off
We had the signs for taking off
Tired of talking
Harmony was wearing off
I’m tired of talking.


[Repeat Bridge]
[Repeat Chorus]

Friday, April 21, 2017

No, physicists have not created “negative mass”

Thanks to BBC, I will now for several years get emails from know-it-alls who think physicists are idiots not to realize the expansion of the universe is caused by negative mass. Because that negative mass, you must know, has actually been created in the lab:


The Independent declares this turns physics “completely upside down”


And if you think that was crappy science journalism, The Telegraph goes so far as to insist it’s got something to do with black holes

Not that they offer so much as a hint of an explanation of what black holes have to do with anything.

These disastrous news items purport to summarize a paper that recently got published in Physical Review Letters, one of the top journals in the field:
    Negative mass hydrodynamics in a Spin-Orbit-Coupled Bose-Einstein Condensate
    M. A. Khamehchi, Khalid Hossain, M. E. Mossman, Yongping Zhang, Th. Busch, Michael McNeil Forbes, P. Engels
    Phys. Rev. Lett. 118, 155301 (2017)
    arXiv:1612.04055 [cond-mat.quant-gas]

This paper reports the results of an experiment in which the physicists created a condensate that behaves as if it has a negative effective mass.

The little word “effective” does not appear in the paper’s title – and not in the screaming headlines – but it is important. Physicists use the qualifier “effective” to indicate something that is not fundamental but emergent, and the exact definition of such a term is often a matter of convention.

The “effective radius” of a galaxy, for example, is not its radius. The “effective nuclear charge” is not the charge of the nucleus. And the “effective negative mass” – you guessed it – is not a negative mass.

The effective mass is merely a handy mathematical quantity to describe the condensate’s behavior.

The condensate in question here is a supercooled cloud of about ten thousand rubidium atoms. To derive its effective mass, you look at the dispersion relation – i.e., the relation between energy and momentum – of the condensate’s constituents, and take the second derivative of the energy with respect to the momentum. That quantity is called the inverse effective mass. And yes, it can take on negative values.
 
If you plot the energy against the momentum, you can read off the regions of negative mass from the curvature of the resulting curve. This is easy to see in Fig 1 of the paper, reproduced below. I added the red arrow to point to the region where the effective mass is negative.
Fig 1 from arXiv:1612.04055 [cond-mat.quant-gas]

As to why that thing is called effective mass, I had to consult a friend, David Abergel, who works with cold atom gases. His best explanation is that it’s a “historical artefact.” And it’s not deep: It’s called an effective mass because in the usual non-relativistic limit E = p^2/(2m), so if you take two derivatives of E with respect to p, you get the inverse mass. Then, if you do the same for any other relation between E and p, you call the result an inverse effective mass.
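
To make this concrete, here is a minimal numerical sketch. The dispersion relation below is a toy model I made up for illustration, not the measured band structure of the spin-orbit-coupled condensate; the point is only to show how the curvature, and with it the effective mass, changes sign:

# Effective mass from a dispersion relation: 1/m_eff = d^2E/dp^2.
import numpy as np

p = np.linspace(-2.0, 2.0, 2001)       # momentum, arbitrary units
E = 0.5 * p**2 - 0.15 * p**4           # toy dispersion, arbitrary units

dE = np.gradient(E, p)                 # dE/dp
d2E = np.gradient(dE, p)               # d^2E/dp^2 = inverse effective mass

p_neg = p[(d2E < 0) & (p > 0)]
print(f"effective mass is positive near p = 0 (d2E/dp2 = {d2E[1000]:.2f})")
print(f"effective mass turns negative for |p| > {p_neg.min():.2f}")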

It's a nomenclature that makes sense in context, but it probably doesn’t sound very headline-worthy:
“Physicists created what’s by historical accident still called an effective negative mass.”
In any case, if you use this definition, you can rewrite the equations of motion of the fluid. They then resemble the usual hydrodynamic equations with a term that contains the inverse effective mass multiplied by a force.

What this “negative mass” hence means is that if you release the condensate from a trapping potential that holds it in place, it will first start to run apart. And then no longer run apart. That pretty much sums up the paper.

The remaining force which the fluid acts against, it must be emphasized, is then not even an external force. It’s a force that comes from the quantum pressure of the fluid itself.

So here’s another accurate headline:
“Physicists observe fluid not running apart.”
This is by no means to say that the result is uninteresting! Indeed, it’s pretty cool that this fluid self-limits its expansion thanks to long-range correlations which come from quantum effects. I’ll even admit that thinking of the behavior as if the fluid had a negative effective mass may be a useful interpretation. But that still doesn’t mean physicists have actually created negative mass.

And it has nothing to do with black holes, dark energy, wormholes, and the like. Trust me, physics is still upside up.

Wednesday, April 19, 2017

Catching Light – New Video!

I have many shortcomings, like leaving people uncertain whether they’re supposed to laugh or not. But you can’t blame me for lack of vision. I see a future in which science has become a cultural good, like sports, music, and movies. We’re not quite there yet, but thanks to the Foundational Questions Institute (FQXi) we’re a step closer today.



This is the first music video in a series of three, sponsored by FQXi, for which I’ve teamed up with Timo Alho and Apostolos Vasileiadis. And, believe it or not, all three music videos are about physics!

You’ve met Apostolos before on this blog. He’s the one who, incredibly enough, used his spare time as an undergraduate to make a short film about gauge symmetry. I know him from my stay in Stockholm, where he completed a master’s degree in physics. Apostolos then, however, decided that research wasn’t for him. He has since founded a company – Third Panda – and works as a freelance videographer.

Timo Alho is one of the serendipitous encounters I’ve made on this blog. After he left some comments on my songs (mostly to point out they’re crappy) it turned out not only is he a theoretical physicist too, but we were both attending the same conference a few weeks later. Besides working on what passes as string theory these days, Timo also plays the keyboard in two bands and knows more than I about literally everything to do with songwriting and audio processing and, yes, about string theory too.

Then I got a mini-grant from FQXi that allowed me to coax the two young men into putting up with me, and five months later I stood in the hail, in a sleeveless white dress, on a beach in Crete, trying to impersonate electromagnetic radiation.

This first music video is about Einstein’s famous thought experiment in which he imagined trying to catch light. It takes on the question of how much can be learned by introspection. You see me in the role of light (I am part of the master plan), standing in for nature more generally, and Timo as the theorist trying to understand nature’s workings while barely taking notice of it (I can hear her talk to me at night).

The two other videos will follow in early May and mid-May, so stay tuned for more!

Update April 21: 

Since several people asked, here are the lyrics. The YouTube video has captions - to see them, click on the CC icon in the bottom bar.

[Chorus]
I am part of the master plan
Every woman, every man
I have seen them come and go
Go with the flow

I have seen that we all are one
I know all and every one
I was here when the sun was born
Ages ago

[Verse]
In my mind
I have tried
Catching light
Catching light

In my mind
I have left the world behind

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

In my mind I have been trying
Catching light outside of time
I collect it in a box
Collect it in a box

Every time I close my eyes
All of nature's open wide
I can hear her
Talk to me at night

[Repeat Chorus]

[Interlude, Einstein recording]
The scientific method itself
would not have led anywhere,
it would not even have been formed
Without a passionate striving for a clear understanding.
Perfection of means
and confusion of goals
seem in my opinion
to characterize our age.

[Repeat Chorus]