
Tuesday, October 30, 2012

ESQG 2012 - Conference Summary

Conference Photo: Experimental Search for Quantum Gravity 2012.

The third installment of our conference "Experimental Search for Quantum Gravity" has just come to an end. It was good to see both familiar faces and new ones, sharing a common interest and excitement about this research direction. This time around the event was much more relaxed for me because most of the organizational work was done, masterfully, by Astrid Eichhorn, and the administrative support at Perimeter Institute worked flawlessly. In contrast to 2007 and 2010, this time I also gave a talk myself, albeit a short one, about the paper we discussed here.

All the talks were recorded and can be found on PIRSA. (The conference-collection tag isn't working properly yet, I hope this will be fixed soon. You'll have to go to "advanced search" and search for the dates Oct 22-26 to find the talks.) So if you have a week of time to spare don't hesitate to blow your monthly download limit ;o) In the unlikely event that you don't have that time, let me just tell you what I found most interesting.

For me, the most interesting aspect of this meeting was the recurring question about the universality of effective field theory. Deformed special relativity, you see, has returned in the reincarnation "relative locality," which boldly abandons locality altogether now that the problem can no longer be ignored. It still doesn't have, however, an effective field theory limit. A cynic might say "how convenient," considering that 5th order operators in Lorentz-invariance violating extensions of the standard model are so tightly constrained you might as well call them ruled out.

If you're not quite as cynical, however, you might take into account the possibility that the effective field theory limit indeed just does not work. That, it was pointed out repeatedly -- among others by David Mattingly, Stefano Liberati and Giovanni Amelino-Camelia -- would actually be more interesting than evidence for some higher order corrections. If we find data that cannot be accommodated within the effective field theory framework, such as evidence for delayed photons without evidence for 5th order Lorentz-invariance violating operators, that would give us quite something to think about.

I agree: Clearly one shouldn't stop looking just because one believes nothing can be found. I have to add, however, that the mere absence of an effective field theory limit doesn't convince me there is none. I want to know why such a limit can't be made before I believe in this explanation. For all I know it might be absent just because nobody has made an effort to derive it. After all, there isn't much of an incentive to do so. As the German saying goes: Don't saw off the branch you're sitting on. That having been said, I understand that it would be exciting, but I'm too much of a skeptic myself to share the excitement.

A related development is the tightening of constraints on an energy-dependence of the speed of light. Robert Nemiroff gave a talk about his and his collaborators' recent analysis of the photon propagation times from distant gamma ray bursts (GRBs). We discussed this paper here. (After some back and forth it finally got published in PRL.) The bound isn't the strongest in terms of significance, but makes it to 3σ. The relevance of this paper is the proposal of a new method to analyse the GRB data, one that, given enough statistics, will allow for tighter constraints. And, most importantly, it delivers constraints both on scenarios in which highly energetic photons are slower than photons of lower energy, and on the case in which they are faster. For an example of how that is supposed to happen, see Laurent Freidel's talk.
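For orientation, the kind of effect such analyses constrain is usually parametrized, to first order, by an energy-dependent photon speed. This is the standard parametrization, not a formula from Nemiroff's paper:

```latex
v(E) \approx c\left(1 \pm \frac{E}{E_{\rm QG}}\right), \qquad
\Delta t \approx \pm\,\frac{\Delta E}{E_{\rm QG}}\,\frac{D}{c},
```

where E_QG is the scale at which quantum gravitational effects are expected to become relevant (naively the Planck energy), D is the distance to the source (over cosmological distances one has to fold in the expansion history), and the sign covers both the "slower" and the "faster" case.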

A particularly neat talk was delivered by Tobias Fritz, who summarized a simple proof that a periodic lattice cannot reproduce isotropy for large velocities, and does so without making use of an embedding space. Though his argument so far works for classical particles only, I find it interesting because with some additional work it might become useful to quantify just how well a discretized approach reproduces isotropy or, ideally, Lorentz-invariance, in the long-distance limit.

Another recurring theme at the conference was dimensional reduction at short distances, which has recently become quite popular. While there are by now several indications (most notably from Causal Dynamical Triangulation and Asymptotically Safe Gravity) that at short distances space-time might have fewer than three spatial dimensions, the ties to phenomenology are so far weak. It will be interesting to see though how this develops in the coming years, as clearly the desire to make contact with experiment is present. Dejan Stojkovic spoke on the model of "Evolving Dimensions" that he and his collaborators have worked on and that we previously discussed here. There has, however, for all I can tell, not been progress on the fundamental description of space-time necessary to realize these evolving dimensions.

Noteworthy is also that Stephon Alexander, Joao Magueijo and Lee Smolin have for a while now been poking around at the possibility that gravity might be chiral, i.e. that there is an asymmetry between left- and right-handed gravitons, which might make itself noticeable in the polarization of the cosmic microwave background. I find it difficult to tell how plausible this possibility is, though Stephon, Lee and Joao all delivered their arguments very well. The relevant papers, I think, are this and this.

I very much enjoyed James Overduin's talk on tests of the equivalence principle, as I agree that this is one of the cases in which pushing the frontiers of parameter space might harbor surprises. He has a very readable paper on the arxiv about this here. And Xavier Calmet is among the brave who haven't given up hope on seeing black holes at the LHC, arguing that the quantum properties of these objects might not be captured by thermal decay at all. I agree with him of course (I pointed this out already in this post 6 years ago), yet I can't say that this lets me expect the LHC will see anything of that sort. More details about Xavier's quantum black holes are in his talk or in this paper.

As I had mentioned previously, the format of the conference this year differed from the previous ones in that we had more discussion sessions. In practice, these discussion sessions turned into marathon sessions with many very brief talks. Part of the reason for this is that we would have preferred the meeting to last 5 days rather than 4 days, but that wasn't doable with the budget we had available. So, in the end, we had the talks of 5 days squeezed into 4 days. There's a merit to short and intense meetings, but I'll admit that I prefer less busy schedules.

Wednesday, October 24, 2012

The Craziness Factor

Hello from Canada, and sorry for the silence; I'm here for the 2012 conference on Experimental Search for Quantum Gravity, and the schedule is packed. As with the previous two installments of the conference, we have experimentalists and theorists mixed together, which has the valuable benefit that you actually get to speak to people who know what the data means.

I learned yesterday from Markus Risse, for example, that the Auger Collaboration has a paper in the making to fit the penetration depth data, which it had earlier been claimed could be explained neither with protons nor with heavier ions or compositions thereof. It turns out the data can be fitted with a composition of protons and ions after all, though we'll have to wait for the paper to learn how well this works.

Today I just want to pick up an amusing remark by Holger Müller from Berkeley, who gave the first talk on Monday, about his experiments in atom interferometry. He jokingly introduced the "Craziness Factor" of a model, arguing that a preferred frame, and the violations of Lorentz-invariance it induces, have a small craziness factor.

Naturally, this led me to wonder what terms contribute to the craziness factor. Here's what came to my mind:
    + additional assumptions not present in the Standard Model and General Relativity. Bonus: if these assumptions are unnecessary
    + principles and assumptions of the Standard Model and General Relativity dropped. Bonus: without noticing
    + problems ignored. Bonus: problems given a name
    + approach has previously been tried. Bonus: and abandoned, multiple times
    + additional parameters. Bonus: parameters with unnatural values, much larger or smaller than one, without any motivation
    + model does not describe the real world (Euclidean, 2 dimensions, without fermions, etc). Bonus: Failure to mention this.
    + each time the model is being referred to as "speculative," "radical" or "provocative". Bonus: By the person who proposed it.
    + model has been amended to agree with new data. Bonus: multiple times.
And here is what decreases the craziness factor:
    - problems addressed. Bonus: Not only the author worries about these problems.
    - relations learned, insights gained. Bonus: If these are new relations or insights, rather than reproductions of findings from other approaches.
    - Simplifications over standard approach. Bonus: If it's an operational, not a formal simplification.
    - Data matched. Bonus: Additional predictions made.
In practice, perceived craziness has a subjective factor. The more you hear about a crazy idea, the less crazy it seems. Or maybe your audience just gets tired of objecting.
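Just for fun, the tally above can be cast as a little scoring function. The one-point weights are my own arbitrary choice, not part of Müller's remark:

```python
# Tongue-in-cheek "craziness factor" tally, following the lists above.
# Each term is a pair (applies, bonus): one point if it applies,
# one extra point if the bonus condition holds. The weights are arbitrary.

def craziness_factor(increases, decreases):
    """Sum the plus-terms, subtract the minus-terms."""
    score = 0
    for applies, bonus in increases:
        if applies:
            score += 1 + (1 if bonus else 0)
    for applies, bonus in decreases:
        if applies:
            score -= 1 + (1 if bonus else 0)
    return score

# Example: a model with one unnecessary extra assumption and one
# plain extra assumption, which matches data but predicts nothing new.
plus = [(True, True), (True, False)]
minus = [(True, False)]
print(craziness_factor(plus, minus))  # prints 2
```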

Friday, October 19, 2012

Turbulence in a 2-dimensional Box: Pretty

Physicists like systems with fewer than the three spatial dimensions that we are used to. Not so much because that's easier, but because it often brings in qualitatively new features. For example, in two dimensions vortices in a fluid fulfill a conservation law that does not hold in three dimensions.

The vorticity of a fluid is a local quantity that measures, roughly, how the fluid spins around each point. In a two-dimensional system, the only spinning that can happen is around the axis perpendicular to the two dimensions of the system. That is, if you have fluid in a plane, the vorticity is a vector that is always perpendicular to the plane, so the only thing that matters is the length and direction of this vector. In two dimensions, in the absence of viscosity, the integral of the squared vorticity is a conserved quantity, called the enstrophy.
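In formulas, as a standard textbook sketch (not specific to the paper below): for a flow u = (u_x, u_y) in the plane, the vorticity has a single component, and for an inviscid fluid both the energy and the enstrophy are conserved:

```latex
\omega = \partial_x u_y - \partial_y u_x, \qquad
E = \frac{1}{2}\int |\mathbf{u}|^2\, dA, \qquad
Z = \frac{1}{2}\int \omega^2\, dA .
```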

Pictorially this means if you create a vortex - a point that is itself at rest but around which the fluid spins - you can only do that in pairs that spin in opposite direction.

This neat paper:
    Dynamics of Saturated Energy Condensation in Two-Dimensional Turbulence
    Chi-kwan Chan, Dhrubaditya Mitra, Axel Brandenburg
    Phys. Rev. E 85, 036315 (2012)
    arXiv:1109.6937 [physics.flu-dyn]
studies what happens if you put a 2-dimensional fluid in a box with periodic boundary conditions, and disturb it by a force that is random in direction but acts at a distinct frequency. In two dimensions, the energy that enters the system at the scale of the driving force cascades to longer wavelengths, the so-called inverse cascade. However, in a box of finite size there's a longest wavelength that will fit in. So the energy "condenses" into this longest possible wavelength. At the same time, the random force creates turbulence that leads to the formation of two oppositely rotating vortices.
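Why the energy ends up at long wavelengths can be seen from Fjørtoft's classic argument, which is standard two-dimensional turbulence lore rather than anything specific to this paper. In Fourier space the energy and the enstrophy are

```latex
E = \int E(k)\, dk, \qquad Z = \int k^2 E(k)\, dk ,
```

so moving energy towards large wavenumbers k would increase the enstrophy. Since both quantities are conserved, energy must on average flow to small k, that is to long wavelengths, while enstrophy flows to short ones.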

Below is a plot of the vorticity of the fluid in the box. The two white/red and white/blue swirls are the vortices.
Fig 1 from arXiv:1109.6937.
Pseudocolor plot of vorticity of fluid in 2-dimensional box,
showing condensation into long wavelength modes.
My mom likes to say "symmetry is the art of the stupid", and she's right in that symmetry all by itself is usually too strict to be interesting. Add a little chaos to symmetry however and you get a good recipe for beauty.

Wednesday, October 17, 2012

Book Review: "Soft Matter" by Roberto Piazza

Soft Matter: The stuff that dreams are made of
By Roberto Piazza
Springer (April 11, 2011)

Some months ago I had a conversation about nematic films. Or was trying to have one. Unfortunately I didn't have the faintest clue what the conversation was about. Neither, to my shame, did I understand much of the papers on the subject. Then I came across a review of Roberto Piazza's book "Soft Matter" and thought it sounded like just what I needed to learn some new vocabulary.

Roberto Piazza is professor of Condensed Matter Physics at the Politecnico di Milano, and his book isn't your typical popular science book. It is instead a funny mixture of popular science book and what you might find in a textbook introduction, minus the technical details. In some regards this mixture works quite well. For example, Piazza is not afraid to introduce terminology and even uses an equation here and there. In other regards, however, it does not work well: the book introduces far too much terminology at a quite breathless pace. It's a case in which less would have been more.

The book covers a lot of terrain: colloids, aerosols, polymers, liquid crystals, glasses and gels, and in the last chapter amino acids, proteins, and the basic functions of cells. The concepts are first briefly introduced, and later chapters then add examples and more details. In principle this is a good structure. Unfortunately, the author has a tendency to pack the pages with too much information, information that isn't always conducive to the flow of the text, and doesn't spend enough time on clarifying the information he wants to get across, or that I believe he might have wanted to get across.

The text is accompanied by several color figures, which are in most cases helpful, but there could have been more, especially to show the molecular structures that are often explained in words. The book comes with a glossary that is very useful. It does not, however, come with references or suggestions for further reading, so readers who want to know more about a topic are left on their own.

In summary, the book is a useful introduction to soft matter, but it isn't exactly a captivating read. Especially in the last chapter, where Piazza goes on about proteins and their functions - while constantly reminding the reader that he's not a biologist - I had to resist the temptation to skip some pages. Not because the topic is uninteresting, but because the presentation is unstructured and wasteful with words, and thus wasteful with the reader's time.

That having been said, lack of structure and too many words is just the type of criticism you'd expect from a German about an Italian, so take that with a grain of salt ;o) And, yes, now I know what nematic films are. I'd give this book three out of five stars.

Thursday, October 11, 2012

PRL on "Testing Planck-Scale Gravity with Accelerators"

With astonishment I saw the other day that Phys. Rev. Lett. published a paper I had come across on the arxiv earlier this year.
I had ignored this paper for a simple reason. The author proposes a test for effects that are already excluded, by many orders of magnitude, by other measurements. The "Planck-Scale Gravity" that he writes about is nothing but 5th order Lorentz-invariance violating operators. These are known to be extremely tightly constrained by astrophysical measurements. And the existing bounds are much stronger than the constraints that can be reached, in the best case, by the tests proposed in the paper. We already know there's nothing to be found there.

The author himself mentions the current astrophysical constraints, in the PRL version at least, though not in the arxiv version - peer review isn't entirely useless. But he fails to draw the obvious conclusion: The test he proposes will not test anything new. He vaguely writes that
"The limits, however, are based on assumptions about the origin, spatial or temporal distribution of the initial photons, and their possible interactions during the travel. Another critical assumption is a uniformly distributed birefringence over cosmological distances... In contrast to the astrophysical methods, an accelerator Compton experiment is sensitive to the local properties of space at the laser-electron interaction point and along the scattered photon direction."
He leaves the reader to wonder then what model he wants to test. One in which the vacuum birefringence just so happens to be 15 orders of magnitude larger at the collision point than anywhere else in space where particles from astrophysical sources might have passed through? Sorry, but that's a poor way of claiming to test a "new" regime. At the very least, I would like to hear a reason why we should expect an effect so much larger. Space-time here on Earth as well as in interstellar space is, for what quantum gravitational effects are concerned, essentially flat. Why should the results be so dramatically different?

I usually finish with a sentence saying that it's always good to test a new parameter regime, no matter how implausible the effect. In this case, I can't even say that, because it's just not testing a new parameter regime. The only good thing about the paper is that it drives home the point that we can test Planck scale effects. In fact, we have already done so, and Lorentz-invariance violation is the oldest example of this.

Here's one of the publication criteria that PRL lists on the journal website:
"Importance.
Important results are those that substantially advance a field, open a significant new area of research, or solve - or take a crucial step toward solving - a critical outstanding problem, and thus facilitate notable progress in an existing field."
[x] Fails by a large amount.

Thanks to Elangel for the pointer.

Monday, October 08, 2012

Towards an understanding of the Sun's Butterfly Diagram

The layered structure of the sun.
Click to enlarge. Image credits: NASA
It's hot, round, and you're not supposed to stare at it: The Sun has attracted curiosity since we crawled out of the primordial pond. And even though we now have a pretty good idea of how the Sun does its job, some puzzles stubbornly remain. One of them is where sunspots form and how their location changes with the solar cycle. A recent computer simulation has now managed to reproduce a pattern that brings us a big step closer to understanding this.

The Sun spins about a fixed axis, but since it's not solid its rotation frequency is not uniform: At the visible surface, the equator rotates in about 27 days whereas close to the poles it takes 35 days. The plasma that forms the Sun is held together by its own gravitational pull, with a density that is highest in the center. In this high density core, the Sun creates energy by nuclear fusion. Around that core there's a layer, the radiative zone, where the density is already too low for fusion, and the heat created in the core is just passed on outwards by radiative transfer. Further outside, where the density is even lower, the plasma passes on the heat by convection, basically cycles of hot plasma moving upwards and cooler plasma moving downwards. Even further outside, there's the photosphere and the corona.

The physics of the convection zone is difficult because the motion of the plasma is turbulent, so it's hard to understand analytically and numerical simulations require enormous computing power. Some generic features are well understood. For example the granularity of the sun's surface comes about by a mechanism similar to Rayleigh–Bénard convection: In the middle of the convection cell there's the hot plasma rising and towards the outside of the cell there's the cooler plasma moving down again.


It has also been known for more than a century that sunspots are not only cooler than the normal surface of the Sun, but are also regions with strong magnetic fields. They arise in pairs with opposite magnetic polarity. Sunspot activity follows a cycle of roughly 11 years, after which the polarity switches. So the magnetic cycle is actually 22 years, on average.

A big puzzle that has remained is why sunspots are created predominantly at low latitudes (below 30°N and above 30°S) and why, over the course of the solar cycle, their production region moves towards the equator. When one plots the latitude of the sunspots over time, this creates what is known as the "Butterfly diagram", shown below.


You can find a monthly update of the butterfly diagram on the NASA website. The diagram for the magnetic field strength follows the same pattern, except for the mentioned switch in polarity, see for example page 54 of this presentation. On the slide, note that in the higher latitudes the magnetic fields move towards the poles rather than towards the equator.

Numerical simulations of the convection zone have been performed since the early 80s, but so far something has always left the scientists wanting. Either the sunspots didn't move as they should, or the rotation wasn't faster towards the equator, or the necessary strong and large-scale magnetic fields were not present, or something else just didn't come out right.

At Nordita in Stockholm, there's a very active research group around Axel Brandenburg that has developed a computer code to simulate the physics of the convection zone. It's called the "pencil code" and is now hosted by Google Code; for more information see here. Basically, it's an integration of the (non-linear) hydrodynamics equations that govern the plasma, with magnetic fields added. In the video below you see the result of a very recent simulation that he did with his collaborators in Helsinki:


The colors show the strength of the magnetic field (toroidal component), with white and blue being the strongest fields, blue for one polarity and white for the other. Two things you should be able to see in the video: First, the rotation is faster at the equator than at the poles, second, the spots of strong magnetic fields in low latitudes migrate towards the equator. One can't see it very well in the video, but in the higher latitudes the magnetic fields do move towards the poles, as they should. In the time-units shown in the top-left corner, about 600 time steps correspond to one solar cycle. A computation like this, Axel tells me, takes several weeks, run on 512 to 2048 cores.

Details on how the movie was made can be found in this paper
    Cyclic magnetic activity due to turbulent convection in spherical wedge geometry
    Petri J. Käpylä, Maarit J. Mantere, Axel Brandenburg
    arxiv: 1205.4719
The model has six parameters that quantify the behavior of the plasma. For some of these parameters, the values that would be realistic in the sun are too extreme to simulate. So instead, one uses different values and hopes to still capture the essential behavior. The equations and boundary conditions can be found in the paper, see eqs (1)-(4) and (6)-(11).
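Schematically, and only as an indicative sketch (the precise terms, including the viscous and diffusive ones, are in the paper's equations), the system being integrated is of compressible MHD type:

```latex
\frac{\partial \mathbf{A}}{\partial t} = \mathbf{u}\times\mathbf{B} - \eta\mu_0\mathbf{J}, \qquad
\frac{D\ln\rho}{Dt} = -\nabla\cdot\mathbf{u}, \qquad
\frac{D\mathbf{u}}{Dt} = \mathbf{g} - 2\,\boldsymbol{\Omega}_0\times\mathbf{u}
  - \frac{1}{\rho}\nabla p + \frac{1}{\rho}\,\mathbf{J}\times\mathbf{B} + \text{(viscous)},
```

with the magnetic field B = ∇ × A, the current density J = ∇ × B/μ₀, and an additional entropy equation for the heat transport.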

The calculation doesn't actually simulate the whole convection zone, but only a wedge of it with periodic boundary conditions. In the video this wedge is just repeated. The poles are missing because there the coordinate system becomes pathological. In the part that they simulate, they use 128 x 256 x 128 points. A big assumption here is that the small scales, those too small to be captured at this resolution, don't matter for the essential dynamics.

If you find the video too messy, you can see the trend of the magnetic fields nicely in the figure below, which shows the average strength of the magnetic field by latitude as a function of time.

Fig 3 from arxiv:1205.4719.


Not all is sunny of course. For example, if you gauge the timescale with the turnover time in the convection zone, which can be inferred from other observations, the length of the magnetic solar cycle comes out at about 33 years instead of 22. And while the reason for the faster rotation towards the equator can be understood from the anisotropy of the turbulence (with longitudinal velocity fluctuations dominating over latitudinal ones), the butterfly trend is not (yet) analytically well understood. Be that as it may, I for one am impressed by how much we have been able to learn about the solar cycle despite the complicated turbulent behavior in the convection zone.

The original movie (in somewhat better resolution) and additional material can be found on Petri's website. Kudos to Axel and Amara for keeping me up to date on solar physics.

Thursday, October 04, 2012

ESQG 2012 Update

Our conference on "Experimental Search for Quantum Gravity" now has its schedule online. As you can see, this year's format is somewhat different from the previous installments. Based on Astrid's suggestions, we have only a few long talks and otherwise many discussion sessions with short (10-15 min) contributions. I'm curious to see how this goes.

Personally, I find discussion sessions to be of limited use. Participants usually like them for the social touch, but in my experience they tend to be dominated by the same people saying the same things. And I guess I just prefer prepared talks, for they are usually better structured and convey information better. Which is why, if I add discussion sessions to a conference I'm organizing, I do my best to encourage participants, and especially the discussion leaders, to prepare some questions and arguments in advance. Maybe mixing the discussions with short contributions is a good way to avoid these pitfalls. Either way, I think it is worthwhile to try a different format.

Monday, October 01, 2012

Clearly foggy

"I am ... rather skeptical about "popular" science in general, in particular when I bump into those books pretending to address in "popular" language formidable mathematical conjectures, or esoteric concepts such as black holes, superstrings, and dark matter. Quite often, skimming through their first chapters, the non-professional reader gets the impression that everything is as clear as day, to realize well before the end that it is in fact quite a foggy day."