
Sunday, October 30, 2011

Interna

Lara and Gloria are now 10 months old. They can both stand as long as they have something to hold on to, and they take little steps along the walls. Yesterday Lara dared to take her hands off the table and surprised herself by standing, wobbly, but all on her own.

The babies' first visit to the dentist featured a doctor who informed us that they don't yet have teeth, gave us two tiny toothbrushes, and handed us a booklet that promises to explain everything you ever wanted to know about babies' teeth - as long as you speak Swedish. Since Lara prefers my thumb over her own, I can testify that the first tooth is now well on its way, but we're still waiting for it to see the light of day.

Recently, the little ones have developed an interest in books and chewed Dan Brown's "Da Vinci Code" to pieces, and Stefan has taken on the task of teaching the babies some physics. For a week or so now, Gloria has taken delight in bringing her toys to us, just to take them back immediately. It becomes increasingly noticeable that the girls now understand quite a few words, especially the essentials: yes, no, good, bad, come, play, milk, daddy.

Lara and Gloria have coped well with the flights to and from Stockholm, much better than Superdaddy, who has developed a contact allergy to Scandinavian Airlines SAS. The Lufthansa end of the trip in Frankfurt is flawlessly family friendly. The SAS end in Arlanda is a complete disaster. Despite the twin stroller being clearly marked for 'Delivery at Gate', it ended up on the oversized baggage belt, and Stefan had to carry the baggage, the baby seat and the two girls through the airport, much to the amusement of the SAS staff.

Upon inquiry, we learned that in the late 19th century a 73 year old labor union member strained an ankle when lifting a bag tagged for gate claim. Or so. Ever since then, employees at Arlanda airport have refused to bring anything exceeding 7 kg to the gate, strollers included. Not that anybody bothered to inform us about that or offered any help. We will certainly have reason to celebrate if Lufthansa takes over SAS, as rumor has it.

Yes, parenthood changes you. I, for example, have developed the unfortunate habit of looking into strangers' noses to see if there's something in need of being picked out. Stefan, meanwhile, has worked on a theory of snot clumping, according to which the size of a snot clump does not depend on the nose. He's now collecting data ;o)

Thursday, October 27, 2011

The future of the seminar starts with w

I've learned a new word: webinar. Stefan has had a few. Maybe it's contagious.

A webinar, so I've learned, is a web-based seminar. It's a hybrid of video conference and desktop sharing. If you know the International Loop Quantum Gravity Seminar (ILQGS) series, that is the Pleistocene predecessor of the webinar. To take part, you download the slides prior to the seminar, then dial in to hear what the speaker has to say. One thing he'll be telling you is when to go to the next slide.

A webinar, in contrast, makes use of more advanced desktop sharing. Somebody plays the role of a moderator and shares a desktop, not necessarily his own, with all participants - for example the speaker's PowerPoint presentation, but it might also be a software demonstration or pictures from your latest trip to the Pleistocene or whatever. So you don't have to switch slides on your own and can pleasantly doze off. Just take care not to hit the keyboard, for a webinar is interactive and you might accidentally ask the question "Ghyughgggggggggggggggggg?"

In principle one could stream the audio right along with the desktop and also combine it with video. However, sharing videos of the participants has limits in both bandwidth and usefulness. If you're giving a seminar to an audience of 100 people, you neither want nor need a video of every single one of them picking their nose. Much more useful is the option to virtually 'raise a hand' and ask a question, either by audio or through a chat interface.

The webinar interface that Stefan has gathered some experience with is called WebEx. In the webinars that Stefan has attended, the audio was not streamed along with the desktop sharing over the web. Instead, participants submit a phone number at which the software will call them. That has the disadvantage that you have to be on the phone in addition to sitting at the computer. (You also need to have a phone line to begin with.) It has the advantage, however, that if the web connection breaks down you can still try to figure out the problem on the phone. WebEx is not a free service - I suppose one primarily pays for the bandwidth that allows many participants, since desktop sharing and video conferencing with a few people is doable on Skype as well. Google brings up some free offers for webinar software, but I don't know any of them. Let me know if you've tried some of these free services; I'd be interested to hear how good or bad they are.

From the speaker's side, the situation requires some adaptation if one is used to 'real' seminars. One has to stop oneself from mumbling into the laptop. For pointing at some item, one has to use the cursor, which is possible but not ideal. One would wish for an easy way to enlarge the pointer so it is more easily visible.

From the side of the audience, there's the general temptation of leaving to get a coffee and forgetting to come back, because who will notice anyway. One is also left wondering how many of the participants are sitting in bed or have just replaced themselves with a piece of software that will ask the occasional question. It is actually more a comment than a question...

From both sides there is the necessity to get used to the software which is typically the main obstacle for applications to spread.

If one wants to combine a webinar with a real seminar, new technological hurdles are in the way, but they aren't too difficult to overcome. The shared desktop can be projected as usual, and the audio needs to go through a speaker. The question is how to deal with questions from the 'real' audience, which requires good A/V equipment on location.

In any case, the technology is clearly there, and one already finds some webinar offers online. The APS for example has some webinars with career advice, and Physics World also has a few listed. Most of the webinars that I have come across so far are, however, software demonstrations. But now that increasingly many institutions routinely record seminars and make them available online, I think webinars are the next step that we might see spreading through academia. I for sure would appreciate the possibility to easily log in to one or the other seminar from home while I am on parental leave.

However, if the nomenclature develops as it did with weblogs, we'll end up sitting in 'binars': you're either in or you're not.

Do you have any experience with webinars? Would you consider attending, giving, or organizing one?

Friday, October 21, 2011

Interna

After the previous posts were somewhat heavy in content, for relaxation let me just show you some photos from a recent weekend excursion.

[Click to badastronate]

My month back at work is almost over, and we'll be commuting back to Germany in the coming days so Superdaddy can reappear in his office chair.

Monday, October 17, 2011

Super Extra Loop Quantum Gravity

In the summer, I noted a recent paper that scored remarkably low on the bullshit index:
    Towards Loop Quantum Supergravity (LQSG)
    Norbert Bodendorfer, Thomas Thiemann, Andreas Thurn

    Bullshit Index : 0.08
    Your text shows no or marginal indications of 'bullshit'-English.

But what is the paper actually about? It is an attempt to make contact between Loop Quantum Gravity (LQG) and Superstring Theory (ST). Both are approaches to a quantization of gravity, one of the big open problems in theoretical physics. LQG attacks the problem directly by a careful selection of variables and quantization procedure. String theory does not only aim at quantizing gravity, but at the same time at unifying the other three interactions of the standard model, by taking as fundamental the strings that give it its name. If quantizing gravity and unifying the standard model interactions are actually related problems, then string theorists are wise to attack them together. Yet we don't know if they are related. In any case, it has turned out that gravity is necessarily contained in ST.

Both theories still struggle to reproduce general relativity and/or the standard model, and to make contact with phenomenology, though for very different reasons. This raises the question of how the theories compare to each other, and whether they give the same results for selected problems. Unfortunately, so far this has not been possible to answer, because LQG has been developed for a 3+1 dimensional space-time, while ST, famously or infamously depending on your perspective, necessitates 6 additional dimensions that then have to be compactified. ST is also, as the name says, supersymmetric. It should be noted that both these features, supersymmetry and extra dimensions, are not optional but mandatory for ST to make sense.

I've always wondered why one hasn't extended LQG to higher dimensions, since the idea of extra dimensions is appealing, and somebody in the field who should have known better once told me it would be straightforward to do. It is however not so, because one of the variables (a certain SU(2) Yang-Mills connection) used in the quantization procedure relies on a property (the equivalence of the fundamental and adjoint representations of SU(2)) that is fulfilled only in 3 spatial dimensions. So it took many years and two brilliant young students, Norbert Bodendorfer and Andreas Thurn, to come up with a variable that could be used in an arbitrary number of dimensions and to work through the maths, which, as you can imagine, didn't get easier. It required working around the difficulty that SO(1,D) is not compact and digging out a technique for gauge unfixing, a procedure that I had never heard of before.
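Roughly, the dimension counting behind this obstruction (my own paraphrase of the standard argument, not taken from the paper) is that the connection takes values in the rotation algebra, whose dimension matches the number of spatial directions only in three dimensions:

    \[
    \dim \mathrm{SO}(D) \;=\; \frac{D(D-1)}{2} \;=\; D \quad \Longleftrightarrow \quad D = 3 .
    \]

Only for D = 3 can the spatial directions be paired one-to-one with the generators of the rotation group, which is, as far as I understand it, what the construction of the Ashtekar-type connection variable exploits; in higher dimensions one needs a different set of variables.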

Compared to the difficulty of adding dimensions, going supersymmetric can be done by simply generalizing the appropriate matter content which is contained in the supergravity actions, and constructing a supersymmetry constraint operator.

Taken together, this in principle allows one to compare the super extra loop quantized gravity to string theory, to which supergravity is a low-energy approximation, though concrete calculations have yet to follow. One of the tasks on the to-do list is the entropy of extremal supersymmetric black holes, to see if LQG reproduces the ST results. (Or if not, which might be even more interesting.) Since LQG is a manifestly non-perturbative approach, this relation to string theory might also help fill in some blanks in the AdS/CFT correspondence in areas where neither side of the duality is weakly coupled.

Friday, October 14, 2011

AdS/CFT confronts data

One of the most persistent and contagious side-effects of string theory has been the conjectured AdS/CFT correspondence (that we previously discussed here, here and here). The briefest of all brief summaries is that it is a duality that allows one to swap the strong coupling limit of a conformal field theory (CFT) for (super)gravity in a higher dimensional Anti-de Sitter (AdS) space. Since computation at strong coupling is difficult, this is at the very least a useful computational tool. It has been applied to some condensed matter systems and also to heavy ion physics, where one wants to know the properties of the quark gluon plasma. Now, the theory that one has to deal with in heavy ion collisions is QCD, which is neither supersymmetric nor conformal, but there have been some arguments for why the correspondence should still be approximately okay.
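Very schematically (this is the textbook relation for the original case of N=4 super Yang-Mills, with all numerical factors dropped), the reason strong gauge coupling maps to classical gravity is that the 't Hooft coupling sets the AdS curvature radius in string units,

    \[
    \lambda \;=\; g_{\mathrm{YM}}^2 N \;\sim\; \frac{R^4}{\alpha'^2} ,
    \]

so large \(\lambda\) means the AdS radius R is much larger than the string length and the classical supergravity description is reliable, precisely where perturbation theory on the gauge theory side fails.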

The great thing about the application of AdS/CFT to heavy ion physics is that it made predictions for the LHC's heavy ion runs that are now being tested. One piece of data that is presently coming in is the distribution of jets in heavy ion collisions, but first some terminology.

A heavy ion is an atom with a high atomic number stripped of all its electrons; typically one uses lead or gold. Compared to a proton, a heavy ion is a large clump of bound nucleons (neutrons and protons). The ions are accelerated and brought to collision; they may collide head-on or only peripherally, which is quantified in a number called "centrality." When the ions collide, they temporarily form a hot, dense soup of quarks and gluons called the "quark gluon plasma." This plasma rapidly expands and cools, and the quarks and gluons form hadrons again (in a process called "hadronization" or also "fragmentation") that are then detected. The temperature of the plasma depends on the energy of the colliding ions that is provided by the accelerator. At RHIC the temperature is about 350 MeV, in the LHC's heavy ion program it is about 500 MeV. The task of heavy ion physicists is to extract information about matter at nuclear densities and such high temperatures from the detected collision products.
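To get a feeling for these numbers, here is a minimal back-of-the-envelope conversion of such temperatures into Kelvin (my own illustration; a temperature quoted in MeV is just an energy divided by the Boltzmann constant):

    # Convert a plasma temperature quoted in MeV to Kelvin: T[K] = E / k_B.
    K_B_MEV_PER_K = 8.617e-11  # Boltzmann constant in MeV per Kelvin

    def mev_to_kelvin(t_mev):
        return t_mev / K_B_MEV_PER_K

    for label, t in [("RHIC", 350.0), ("LHC heavy ion run", 500.0)]:
        print("%s: %.0f MeV is about %.1e K" % (label, t, mev_to_kelvin(t)))

Both come out at a few times 10^12 Kelvin, several hundred thousand times hotter than the center of the Sun.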

A (di)jet is a pair of back-to-back correlated showers of particles, a typical signature of perturbative QCD. It is created when a pair of outgoing partons (quarks or gluons) hadronizes and produces a bunch of particles that then hit the detector. Since QCD is confining, the primary, colored, particles never reach the detector. In contrast to proton-proton collisions, in heavy ion collisions the partons first have to go through the quark gluon plasma before they can make a jet. Thus, the distribution of momenta of the observed jets depends on the properties of the plasma, in particular on the energy loss that the partons undergo.

Different models predict a different energy loss and a different dependence of that energy loss on the temperature of the medium. Jets are a weak-coupling QCD feature, and strictly speaking in the strong coupling limit that AdS/CFT describes there are no jets at all. What one can do, however, is use a hybrid model in which one just extracts the energy loss in the plasma from the conformal theory. This energy loss scales with L^3 T^4, where L is the length that the partons travel through the medium and T is the temperature. All other models for the energy loss scale with smaller powers of the temperature.
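Just to illustrate how much leverage the temperature scaling gives, here is a minimal sketch (my own numbers, using the rough plasma temperatures quoted above and ignoring the path length and all prefactors):

    # How strongly the parton energy loss grows from RHIC to LHC temperatures
    # under different assumed scalings dE ~ T^n (path length and prefactors dropped).
    T_RHIC, T_LHC = 350.0, 500.0  # approximate plasma temperatures in MeV

    for n, label in [(2, "dE ~ T^2"), (3, "dE ~ T^3"), (4, "dE ~ T^4 (AdS/CFT-inspired)")]:
        print("%s: factor %.2f increase from RHIC to LHC" % (label, (T_LHC / T_RHIC) ** n))

A model tuned to the RHIC data thus predicts a noticeably different change in the suppression at the LHC depending on the power of T, and that difference is what the figure below makes visible.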

Heavy ion physicists like to encode observables in terms of how much they differ from the corresponding observables for collisions of the ion's constituents. The "nuclear suppression factor," denoted R_AA and plotted in Thorsten Renk's figure below (Slide 17 of this talk), is basically the ratio of the cross-section for jets in lead-lead over the same quantity for proton-proton (normalized to the number of nucleon-nucleon collisions), and it's depicted as a function of the average transverse momentum (pT) of the jets. The black dots are the ALICE data, the solid lines are fits from various models. The orange line at the bottom is AdS/CFT.


[Picture credit: Thorsten Renk, Slide 17 of this presentation]
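For reference, a minimal sketch of what the plotted quantity means (variable names are mine; real analyses use centrality-dependent values for the number of binary collisions):

    def nuclear_suppression_factor(yield_AA, yield_pp, n_coll):
        # R_AA: per-collision jet yield in lead-lead relative to proton-proton.
        #   yield_AA : jet yield per lead-lead event at a given pT
        #   yield_pp : jet yield per proton-proton event at the same pT
        #   n_coll   : average number of binary nucleon-nucleon collisions per lead-lead event
        return yield_AA / (n_coll * yield_pp)

R_AA = 1 would mean a lead-lead collision behaves like a sum of independent nucleon-nucleon collisions; R_AA < 1 signals suppression, i.e. energy loss of the partons in the plasma.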

As the saying goes, a picture speaks a thousand words, but since links and image sources have a tendency to deteriorate over time, let me spell it out for you: The AdS/CFT scaling does not agree with the data at all.

A readjustment of parameters might move the whole curve up or down, but the slope would still be off. Another problem with the AdS/CFT model is that the model parameters needed to fit the RHIC data are very different from the ones needed for the LHC. The model that does best is Yet another Jet Energy-loss Model (YaJEM), which works with in-medium showers (I know nothing about that code). It is described in some detail in this paper. It doesn't only fit the observed scaling well, it also does not require a large readjustment of parameters from RHIC to LHC.

Of course, there are always caveats to a conclusion. One might criticize, for example, the way that AdS/CFT has been implemented into the code. But the scaling with temperature is such a general property that I don't think nagging at the details will be of much use here. Then one may want to point out that the duality is actually only good in the large-N limit, and N=3 isn't so large after all. And that is right, so maybe one would have to take correction terms more seriously. But that would then require calculating string contributions, and one would lose the computational advantage that AdS/CFT seemed to bring.

Some more details on the above figure are in Thorsten Renk's proceedings from the Quark Matter 2011, on the arxiv under 1106.2392 [hep-ph].

Summary: I predict that the application of the AdS/CFT duality to heavy ion physics is a rapidly cooling area.

Wednesday, October 12, 2011

New constraints on energy-dependent speed of light from gamma ray bursts

Two weeks ago, an arXiv preprint came out with a new analysis of the highest-energy gamma ray bursts (GRBs) observed with the Fermi telescope. This paper puts forward a bound on an energy-dependent speed of light that is an improvement of 3 orders of magnitude over existing bounds. It rules out a class of models for Planck-scale effects. If you know the background, just scroll down to "News" to read what's new. If you need a summary of why this is interesting and links to earlier discussions, you'll find that in the "Avant-propos".

Avant-propos

Deviations from Lorentz-invariance are the best studied case of physics at the Planck scale. Such deviations can have two different expressions: Either an explicit breaking of Lorentz-invariance that introduces a preferred restframe, or a so-called deformation that changes Lorentz-transformations at high energies without introducing a preferred restframe.

Such new effects are parameterized by a mass scale that, if the effect is of quantum gravitational origin, one would expect to be close to the Planck mass. Extensions of the standard model that explicitly break Lorentz-invariance are already very strongly constrained, to 9 orders of magnitude above the Planck mass. Such constraints are derived by looking for effects on particle physics that are a consequence of higher order operators in the standard model.

Deformations of special relativity (DSR) evade that type of constraint, basically because there is no agreed-upon effective limit from which one could actually read off higher order operators and calculate such effects. It is also difficult, if not impossible, to make sense of DSR in position space without ruining locality, and these models have so-far unresolved issues with multi-particle states. So, as you can guess, there's some controversy among theorists about whether DSR is a viable model for quantum gravitational effects. (See also this earlier post.) But those are arguments from theory, so let's have a look at the data.

Some models of DSR feature an energy-dependent speed of light. That means that photons travel with different speeds depending on their energy. This effect is very small: in the best case, it scales with the photon's energy over the Planck mass, which, even for photons in the GeV range, is a factor of 10^-19. But the total time difference between photons of different energies can add up if the photons travel over a long distance. Thus the idea is to look at photons with high energies coming to us from far away, such as those emitted by GRBs. It turns out that in this case, with distances of some Gpc and energies of some GeV, an energy-dependent speed of light can become observable.
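For a rough idea of the numbers, here is a back-of-the-envelope sketch (my own illustration, assuming the simplest first-order dispersion Δt ≈ (E/M_Planck)·D/c and ignoring the cosmological redshift factors a careful treatment would need):

    # Rough size of the arrival time delay for a first-order energy-dependent
    # speed of light: delta_t ~ (E / M_Planck) * (D / c).
    M_PLANCK_GEV = 1.22e19   # Planck mass in GeV
    GPC_IN_M = 3.086e25      # one gigaparsec in meters
    C = 3.0e8                # speed of light in m/s

    def delay_seconds(photon_energy_gev, distance_gpc):
        return (photon_energy_gev / M_PLANCK_GEV) * (distance_gpc * GPC_IN_M / C)

    print(delay_seconds(1.0, 1.0))    # about 0.01 s for a 1 GeV photon over 1 Gpc
    print(delay_seconds(30.0, 1.0))   # about 0.25 s for a 30 GeV photon

Since GRBs show time structure down to milliseconds, delays of this size are in principle detectable.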

There are two things one should add here. First, not all cases of DSR actually have an energy-dependent speed of light. Second, not in all cases does it scale the same way. That is, the case discussed above is the most optimistic one when it comes to phenomenology, the one with the most striking effect. For that reason, it's also the case that has been talked about the most.

There had previously been claims, based on analyses of GRB data, that the scale at which the effect becomes important had been constrained to about 100 times the Planck mass. This would have been a strong indication that the effect, if it is a quantum gravitational effect, is not there at all, ruling out a large class of DSR models. However, we discussed here why that claim was on shaky ground, and indeed it didn't make it through peer review. The presently best limit from GRBs is just about at the Planck scale.

News

Now, three researchers from Michigan Technological University have put forward a new analysis that has appeared on the arxiv:
    Limiting properties of light and the universe with high energy photons from Fermi-detected Gamma Ray Bursts

    By Robert J. Nemiroff, Justin Holmes, Ryan Connolly
    arXiv:1109.5191 [astro-ph.CO]

Previous analyses had studied the difference in arrival times between the low and high energy photons. In the new study, the authors have looked exclusively at the high energy photons, noting that the typical difference in energies between photons within the GeV range is about the same as that between photons in the GeV and the MeV range, and for the delay it's only the difference that matters. Looking at the GeV range has the added benefit that there is basically no background.

For their analysis, they have selected a subsample of the roughly 600 GRBs that Fermi has detected so far. From all these events, they have looked only at those that have numerous photons in the GeV range to begin with. In the end they consider only 4 GRBs (080916C, 090510A, 090902B, and 090926A). From the paper, it does not really become clear how these were selected, as this paper reports at least 19 events with statistically significant contributions in the GeV range. One of the authors of the paper, Robert Nemiroff, explained upon my inquiry that they selected the 4 GRBs with the best high energy data, meaning numerous particles that have been identified as photons with high confidence.

The authors then use a new kind of statistical analysis to extract information from the spectrum, even though we know little to nothing about the emission spectrum of the GRBs. For their analysis, they study exclusively the arrival times of the high energy photons. Just by looking at Figure 2 of their paper you can see that on occasion two or three photons of different energies arrive almost simultaneously (up to some measurement uncertainty). They study two methods of extracting such a bunch from the data and then quantify its reliability by testing it against a Monte Carlo simulation: if one assumes a uniform distribution and just sprinkles photons over the time interval of the burst, a bunch is very unlikely to happen by coincidence. Thus, one concludes with some certainty that this 'bunching' of photons must have been present already at the source and was maintained during propagation. An energy-dependent dispersion would tend to wash out such correlations, as it would increase the time difference between photons of different energies. Then, from the duration of the bunch of photons and its spread in energy, one can derive constraints on the dispersion that this bunch can have undergone.
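To make the Monte Carlo idea concrete, here is a minimal toy version of such a test (my own sketch, not the authors' code): sprinkle the same number of photons uniformly over the burst duration many times and count how often a bunch as tight as the observed one appears by chance.

    import random

    def chance_of_tight_bunch(n_photons, burst_duration, bunch_size, bunch_width,
                              n_trials=10000, seed=42):
        # Fraction of uniform-emission trials that contain `bunch_size` photons
        # within a time window of `bunch_width` (same units as burst_duration).
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            times = sorted(rng.uniform(0.0, burst_duration) for _ in range(n_photons))
            if any(times[i + bunch_size - 1] - times[i] <= bunch_width
                   for i in range(n_photons - bunch_size + 1)):
                hits += 1
        return hits / float(n_trials)

    # Toy numbers: 15 GeV-range photons spread over a 30 s burst, asking how often
    # 3 of them fall within 20 milliseconds purely by chance.
    print(chance_of_tight_bunch(15, 30.0, 3, 0.02))

If that fraction is tiny, the observed bunch very likely reflects genuine time structure at the source rather than a coincidence of uniform emission.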

Clearly, what one would actually want to do is a Monte Carlo analysis with and without the dispersion and see which one fits the data better. Yet one cannot do that, because one doesn't know the emission spectrum of the burst. Instead, the procedure the authors use just aims at extracting a likely time variability. In that way, they can identify in particular one very short substructure in GRB 090510A that in addition also has a large spread in energy. From this (large energy difference but small time difference) they then extract a bound on the dispersion and, assuming a first order effect, a bound on the scale of possible quantum gravitational effects that is larger than 3060 times the Planck scale. If this result holds up, this is an improvement by 3 orders of magnitude over earlier bounds!
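The logic of that last step, schematically (again my own sketch with made-up illustrative numbers, not the values used in the paper): inverting the delay estimate from above, a bunch of duration Δt with energy spread ΔE that survived a travel distance D is only compatible with a dispersion scale of at least roughly ΔE·(D/c)/Δt.

    # Invert the first-order dispersion formula: if photons differing by
    # delta_e_gev arrive within delta_t_s after traveling distance_gpc,
    # the scale of the effect must be at least this many Planck masses.
    M_PLANCK_GEV = 1.22e19
    GPC_IN_M = 3.086e25
    C = 3.0e8

    def min_scale_in_planck_units(delta_e_gev, delta_t_s, distance_gpc):
        travel_time_s = distance_gpc * GPC_IN_M / C
        return (delta_e_gev * travel_time_s / delta_t_s) / M_PLANCK_GEV

    # Illustrative only: an energy spread of ~30 GeV arriving within ~1 ms
    # from ~2 Gpc away already gives several hundred Planck masses.
    print(min_scale_in_planck_units(30.0, 1e-3, 2.0))

Shorter bunches, larger energy spreads, or larger distances push the bound up accordingly.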

Comments

The central question, however, is what the confidence level for this statement is. The bunching they have looked at in each GRB is a 3σ effect, i.e. it would appear coincidentally only in one out of 370 cases that they generated in Monte Carlo trials: "Statistically significant bunchiness was declared when the detected counts... occurred in less than one in 370 equivalent Monte Carlo trials." Yet they are extracting their strong bound from one dataset (GRB) out of a (not randomly chosen) subsample of all recorded data. But the probability of finding such a short bunch by pure coincidence in one out of 20 cases is higher than the probability of finding it coincidentally in just one. Don't misunderstand me, it might very well be that the short-timed bunch in GRB 090510A has a probability of less than one in 370 to appear just coincidentally in the data we have so far; I just don't see how that follows from the analysis that is in the paper.
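A quick illustration of this 'look elsewhere' worry (my own numbers: roughly 20 bursts with good GeV data, each tested at the 1-in-370 level):

    # If each of ~20 independent bursts is tested for bunching at the
    # 1-in-370 chance level, the probability that at least one of them
    # shows such a bunch purely by coincidence is considerably larger.
    p_single = 1.0 / 370.0
    n_bursts = 20

    p_at_least_one = 1.0 - (1.0 - p_single) ** n_bursts
    print(round(p_at_least_one, 3))   # about 0.053, i.e. roughly a 1-in-19 chance

So the chance of finding one such bunch somewhere in the sample is at the percent level, not at the 3σ level quoted for a single burst.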

To see my problem, consider the case (and I am not saying this has anything to do with reality) that the GRB had a completely uniform emission in some time window and then suddenly stopped. The only two parameters are the time window and the total number of photons detected. In the low energy range, we detect a lot of photons, and the probability that the variation we see happened just by chance even though the emission was uniform is basically zero. In the high energy range we detect sometimes a handful, sometimes 20 or so photons. If you assume a uniform emission, the photons we measure will, simply by coincidence, sometimes come in a bunch if you measure enough GRBs, dispersion or not. That is, the significance of one bunch in one GRB depends on the total size of the sample, which is not the same significance that the authors have referred to. (You might want to correlate the spectrum at high energies with the better statistics at low energies, but that is not what has been done in this study.)

The significance that is referred to in the paper is how well their method extracts a bunch from the high energy spectrum. The significance I am asking for is a different one, namely the confidence with which a detected bunch actually tells us something about the spectrum of the burst.

Summary

The new paper suggests an interesting new method to extract information about the time variability of the GRB in the GeV range, by estimating the probability that the observed bunched arrivals of photons might have occurred just by chance. That allows one to bound a possible Planck scale effect very tightly. Since I have written some papers arguing on theoretical grounds that there should be no Planck scale effect in the GRB spectra, I would be pleased to see an observational confirmation of my argument. Unfortunately, the statistical significance of this new claim is not entirely clear to me: I am not sure how to translate the significance that is referred to in the paper into the significance of the bound. Robert Nemiroff has shown infinite patience in explaining the reasoning to me, but I still don't understand it. Let's see what the published version of the paper says.

Wednesday, October 05, 2011

Away note

I'll be away the rest of the week for a brief trip to Jyväskylä, Finland. I'm also quite busy next week, so don't expect to hear much from me.

For your distraction, here are some things that I've come across that you might enjoy:

Sunday, October 02, 2011

FAZ: Interview with German member of OPERA collaboration

The German newspaper Frankfurter Allgemeine Zeitung (FAZ) has an interesting interview with Caren Hagner from the University of Hamburg. Hagner is a member of the OPERA collaboration and talked to the journalist Manfred Lindinger. The interview is in German, and I thought most of you would probably miss it, so here are some excerpts (translation mine):
Frau Hagner, you are the leader of the German group of the OPERA experiment. But one searches in vain for your name on the preprint.

I and a dozen colleagues did not sign the preprint. I have no reservations about the experiment. I just think it was premature to go public with the results for such an unusual effect as faster-than-light travel. One should have done more tests. But then the publication would have taken at least 2 months longer. I and other colleagues from the OPERA collaboration wanted these tests to be done.

What tests?

First, a second independent analysis. In particle physics, if one believes one has discovered a new particle or effect, then in general there is not only one group analyzing the data but several. And if they all get the same result, then one can be convinced it is right. That has not been the case with OPERA.

Why?

Because there hasn't been time. For an effect like faster-than-light travel, the analysis should certainly be double-checked. Maybe there is a bug in the program [...] The majority of the collaboration preferred a quick publication.

Hagner also says that the statistical analysis (matching the proton spectrum with that of the neutrinos) should be redone with different techniques, and that this is currently under way. She further points out that the results come from only one of OPERA's two detection methods, the scintillation tracker. The other detector, the spectrometer, should yield an independent measurement that could be compared to the first, but that would take about 2 months.

The final question is also worth quoting:
[If true], might satellite navigation in the future be based on neutrino beams rather than light?

Yes, maybe. But then our GPS devices would weigh some thousand tons.