Saturday, December 29, 2012

Happy Birthday Lara and Gloria!

Today our two beautiful girls are two years old! We have two cakes with two candles each, and the apartment is full of balloons, awaiting the grandparents' visit.



During the last year, Lara and Gloria have learned to walk and to run and to jump, to dance and to climb. For a few weeks now they have been able to climb out of their cribs, so the time has come to upgrade the beds. We're browsing the IKEA catalog as I type, so to speak. In their explorations, they have also suffered the occasional bruise or scratch, but luckily no major injuries. We too have gotten our share of bruises and scratches, mostly from being hit with toys in a moment of inattentiveness.

For us, this year has been much more work than the first, because for most of the time we couldn't leave the kids unattended for even a second; they would inevitably tear something down, break something, spill something, or topple over with a chair. It has gotten better during the last months. They now know fairly well what they can do safely, and they are careful not to touch anything that might be glass. We can let them walk around in the apartment now, so long as we remember to lock away the detergents and knives.

The girls are slow in learning to talk, though our pediatrician says this isn't so uncommon with twins. Gloria refers to herself as "Goo-kie" and to Lara as "Gah-kie," for reasons we don't know. They can now both eat by themselves, though it's better not to leave them alone with the spaghetti.

When I contemplate the human brain, I am always torn between frustration about its shortcomings and amazement about how well it works. Watching the kids, what astonishes me most is how quickly and flawlessly they learn to identify objects. If you look at some picture book, the drawings are not usually too precise. Yet the kids have no problem matching items from the books with our household items. And many items, such as most animals or large vehicles, they have never seen in reality, yet if they glimpse so much as part of a tiger in an illustration, they'll announce "tee-ga". They also find Stefan's and my photos instantly, even in the tiniest thumbnail versions among dozens.

A source of amusement for us is how they construct causal and temporal relations. If they want to watch the washing machine for example, they sometimes pile laundry in front of it, not necessarily laundry that actually needs washing. If I forget to close the blinds for their afternoon nap, they'll scream and point at the window. When I come back from my morning run and the kids are already up, Lara will come and say "Mama. Ouh." and point to the shower. If they want Stefan to read them a book, they'll put a pillow on the floor where he usually sits.

The next year will bring many changes for the kids and for us, not only because of the new beds but also because they will spend more time with other adults and other children. We haven't yet completely solved our daycare problem, but it looks like it will resolve soon. Now it's time to light the candles :o)

Wednesday, December 26, 2012

Book review: “Phantoms in the Brain” by Ramachandran and Blakeslee

Phantoms in the Brain: Probing the Mysteries of the Human Mind
By V. S. Ramachandran, S. Blakeslee
William Morrow Paperbacks (1999)

Yes, I’ve read another neuroscience book. This one has been recommended to me as one of the classics in the field, though a little dated by now. It didn’t disappoint.

Ramachandran takes the reader on an engaging tour through the functions of the brain by using case studies on patients with brain damage. At first I thought this would be a collection of heart-wrenching stories and bizarre peculiarities – nobody really doubts shit happens if an iron pole leaves a hole in your head. But I was surprised to learn how reproducible the effects of localized brain damage are, how strange, and how much can be learned from the unfortunate patients.

The book starts with phantom limbs and our body image in general, followed by sections on denial, memory, religious thought, emotions, laughter, delusions and hallucinations. The chapters usually start with a patient or several, and are followed by an explanation of what brain regions are involved and what they do, to the extent that one knows. Ramachandran then often discusses some experiments that he and his collaborators did to shed light on puzzles going along with the condition, sometimes leading to insights that could help the patient or at least provide a basis for the development of treatments. He also adds his own speculations and hunches, which I find very interesting. He is usually very clear in demarcating where actual knowledge ends and his speculation starts.

I was very pleased by his sober discussion of “qualia,” and his careful treading on the question of religion and the mind-body interaction in general. His argumentation is overall very balanced; he comes across as an open-minded scientist who isn’t pushing any particular agenda, but is simply driven by curiosity. I didn’t find his elaborations on the nature of consciousness too enlightening, but I guess consciousness is to neuroscience what the cosmological constant is to physics: everybody’s got an opinion about it and nobody finds anybody else’s opinion convincing.

The book is well written and reads very smoothly. It is however in places somewhat repetitive in that some patients reappear and one has to read through a summary. Some readers might appreciate that, especially if they had put aside the book for a bit, but it switches my brain into jah-jah-you-already-told-me-that mode. (The brain region for this is between the yawn-campus and the facebook-lobe.) I also have to complain that Ramachandran is quite vague on what research has been done in the field apart from his own studies, and is too focused on his own work. Since the book is now more than a decade old, maybe there just wasn't all that much at the time. Still, I would have hoped for a somewhat broader survey.

(The co-author Sandra Blakeslee is credited in the acknowledgements for “making the book accessible for a wider readership.” The book is written in the first person narrative.)

Altogether, I learned quite a lot from this book, and especially the section on denial has given me something to think about. I’d give this book four out of five stars.

This TED talk by Ramachandran will give you a good impression of what the book is about (the part about synesthesia is not in the book):

Monday, December 24, 2012

Merry Christmas!

We wish all our readers happy holidays and a merry Christmas, and if you're not celebrating Christmas, we wish you a good time anyway.


The girls are now old enough to take note of what is going on, so this year I've been thinking about our Christmas traditions.

In Germany, Christmas is celebrated on the evening of December 24th, the "holy night", with presents being deposited below the tree and opened either before or after dinner. A very common Christmas dinner here is goose with red cabbage and dumplings. The presents are attributed not to Santa Claus but to the "Christuskind" (Christ child), usually depicted as a little angel. (I recall being quite confused as to whether it's a boy or a girl.) Saint Nicholas' (Nikolaus) day, on the other hand, is in Germany not Christmas but December 6th. He delivers his goodies into boots that you place in front of the door overnight. However, Saint Nikolaus comes with a dark brother, Knecht Ruprecht, who will slap the kids if they haven't been nice.

So tell me something about your Christmas tradition and how you celebrate!

Friday, December 21, 2012

Large Extra Dimensions - Not Dead Yet

Fifteen years ago, large extra dimensions were in vogue.

The idea that our universe may have additional spatial dimensions so small we have not yet been able to observe them dates back more than a century. This idea received a tremendous boost by the realization that String Theory actually requires such additional dimensions for consistency, but they were normally assumed to be wrapped up to sizes about a Planck length, or 10⁻³⁵ m. That’s so small you can forget about it. (Forgetting about them being the reason to wrap them up to begin with.)

Then in 1998/99 some smart physicists realized that if there are extra dimensions, they could be much larger than the Planck length, and we wouldn’t have noticed. Better still, if these dimensions have the right size this would explain why gravity is so much weaker than the other interactions in the standard model, a problem called the “hierarchy problem” that causes physicists sleepless nights.

In these scenarios with large extra dimensions, the stuff that we are made of (quarks, electrons and so on) sits on a slice with three spatial dimensions, which is called a “brane”. This matter does not normally notice the additional dimensions, but gravity does. This has the result that gravity is weak at long distances, but becomes much stronger at short distances, leading to a “lowered Planck scale” and quantum gravitational effects that are much larger than naively expected. Thus the excitement. (There are different models with different realizations of this, but the details won’t concern us in the following. For details read this earlier post.)
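
For orientation, the generic relation behind the "lowered Planck scale" is the standard ADD estimate (order-one factors dropped; the concrete models differ in the details):

```latex
M_{\rm Pl}^2 \;\simeq\; M_*^{\,2+n}\, V_n \;\sim\; M_*^{\,2+n}\,(2\pi R)^n
```

Here M_* is the higher-dimensional (lowered) Planck scale, n the number of large extra dimensions, and V_n their volume. A large radius R thus allows M_* to lie far below the observed Planck mass M_Pl.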

If one buys into this, one however has a new problem: the question of why the extra dimensions have exactly this size. But sometimes finding a new way to formulate an old question can be a big step forward, so this should not deter us from exploring the idea.

Models with large extra dimensions also made predictions for the LHC due to the lowered Planck scale, most strikingly graviton and black hole production. In 2012, now that the end of the world is near, we know that nothing like this has been seen.

As I explained in this earlier post, it is quite rare that experiment can falsify a model, even if you might have heard so. Normally a model has free parameters that should be determined by experiment, or, if nothing is found, be constrained by experiment. That the LHC has not found evidence for large extra dimensions doesn’t falsify the idea, but it certainly “implausifies” it by constraining the parameters into an uninteresting range. Which is another way of saying, move on, there’s nothing to see here.

So you might think large extra dimensions are dead. But Cliff Burgess begs to differ. In two recent arXiv papers, he and his collaborators have put forward an extra dimensional model that offers an intriguing new perspective:

    Accidental SUSY: Enhanced Bulk Supersymmetry from Brane Back-reaction
    C. P. Burgess, L. van Nierop, S. Parameswaran, A. Salvio, M. Williams
    arXiv:1210.5405

    Running with Rugby Balls: Bulk Renormalization of Codimension-2 Branes
    M. Williams, C.P. Burgess, L. van Nierop, A. Salvio
    arXiv:1210.3753
The key point is that they are trying in the first place not to solve the hierarchy problem, but the cosmological constant problem: Why is the cosmological constant that we observe small and nonzero? The cosmological constant term in general relativity describes the energy of the vacuum and this energy should receive contributions all the way up to the Planck energy. This would be a huge contribution, leading to a very strong curvature of our universe, incompatible with observation. Why aren’t these Planck energetic quantum contributions there?
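
To put a number to "huge": naive dimensional analysis, which is all this back-of-envelope argument is, gives

```latex
\frac{\rho_{\rm vac}^{\rm naive}}{\rho_{\rm vac}^{\rm obs}}
\;\sim\; \frac{M_{\rm Pl}^4}{\left(10^{-3}\,{\rm eV}\right)^4}
\;\sim\; 10^{120}
```

which is the often-quoted mismatch of roughly 120 orders of magnitude.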

Burgess and his collaborators argue that a plausible reason is that space-time has additional dimensions, and the full space-time is not Lorentz-invariant. In other words, it’s a scenario with branes in higher dimensions. In such a situation, the troublesome quantum contributions, which normally, due to Lorentz-invariance, take on the form of a cosmological constant term, might not make themselves noticeable on the brane, which is where we live.

The example that they give is that of a cosmic string. If one calculates the metric that the string induces, one finds that space is flat but has a defect angle that depends on the string tension. The string itself however is unaffected by what it does to the background. The scenario that Burgess et al construct is basically a higher-dimensional version of this, where our universe plays the role of the string and creates a defect, but no curvature is induced in our universe itself.

Concretely, they have two additional large extra dimensions. (There might be more than that, but if they are much smaller their presence does not matter for the argument.) These additional dimensions have the topology of a sphere. At each of the two poles of the sphere sits a brane, one of which you can interpret as our universe. As in the case of the cosmic string, the matter density on the branes induces a defect angle for the sphere, creating a manifold which they call a “rugby ball”. The radius of the sphere is flux-stabilized, which leaves one free parameter (a combination of the radius and the dilaton field).

These extra dimensions induce a vacuum energy on the brane, which is essentially the Casimir energy of this compact space, and this energy depends on the radius of the sphere. To use this scenario to get the right value of the cosmological constant, the radius should be of the order of about 5 μm, which is somewhat below current measurement precision (45 μm), but not so far below.
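
Deviations from Newton's law at such short distances are usually quoted in terms of a Yukawa-type correction, the standard parametrization used in torsion-balance experiments:

```latex
V(r) = -\,\frac{G\, m_1 m_2}{r}\left[\,1 + \alpha\, e^{-r/\lambda}\,\right]
```

where λ is the range of the new contribution, here set by the size of the extra dimensions, and α its strength relative to ordinary gravity.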

But what about the troublesome quantum corrections?

Supersymmetry must be broken on the brane (because we don’t see it) but is intact away from it. Supersymmetry solves the cosmological constant problem in the sense that it brings all the troublesome contributions in the bulk to zero. What remains to be shown though is that the cosmological constant on the brane does not receive large correction terms, which depends on the way the branes are coupled.

Cliff and his collaborators have shown that, in the scenario they constructed, the cosmological constant on the brane (read “in our universe”) does receive correction terms from high energies, but due to the way the branes are coupled these corrections are highly suppressed and do not ruin the smallness of the effective cosmological constant; they do not induce a large curvature. Think of the example with the cosmic string that stands in for higher dimensional branes. The geometry on the string (or brane) is flat regardless of the value of the tension. The large quantum corrections are there, but they contribute to the tension rather than inducing a curvature.

Now once you have fixed the radius of the “rugby ball” so that the cosmological constant matches with observation, you can use this to calculate the value of the lowered Planck scale. It turns out to be at least 10 TeV, so we wouldn’t see gravitons or black holes at the LHC. (Keep in mind that the LHC collides protons, which are composite particles. The average energy per individual collision of quarks or gluons is in most cases far below the total energy in the proton collision which is usually quoted. That’s why everybody wants a lepton collider.) However, since the string scale is somewhat below the Planck scale, one would expect to see string excitations at the LHC, though still at fairly high energies; we wouldn’t have seen them yet.
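
As a rough cross-check of these numbers, here is a back-of-envelope version of that last step, using the generic relation between the four-dimensional Planck mass and the lowered Planck scale for two extra dimensions of radius R. All order-one factors and the model-specific rugby-ball details are dropped, so take the output as an order-of-magnitude estimate only:

```python
# Back-of-envelope estimate of the lowered Planck scale for two large extra
# dimensions of radius R, using M_Pl^2 ~ M_*^4 (2*pi*R)^2. Order-one factors
# and the model-specific (rugby-ball) details are dropped.

import math

hbar_c = 1.97e-16   # GeV * m, converts lengths to inverse energies
M_Pl = 1.22e19      # GeV, four-dimensional Planck mass
R = 5e-6            # m, extra-dimensional radius suggested by the cosmological constant

circumference = 2 * math.pi * R / hbar_c      # in GeV^-1
M_star = math.sqrt(M_Pl / circumference)      # lowered Planck scale in GeV

print(f"lowered Planck scale ~ {M_star / 1e3:.0f} TeV")   # prints ~ 9 TeV
```

Reassuringly, this lands in the 10 TeV ballpark quoted above.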

So to sum up, what this model achieves is the following: 1) It provides a setting in which there is a small cosmological constant whose small value is not ruined by large quantum corrections. 2) It makes the prediction that we should see corrections to Newton’s law not too far beyond present measurement precision. 3) It gives a plausible reason why we haven’t seen evidence for extra dimensions at the LHC so far but 4) predicts that we should see some glimpses of it in form of string excitations within the next years.

This doesn’t convince me to start working on large extra dimensions again, but it does convince me that large extra dimensions aren’t dead yet.

Monday, December 17, 2012

The Usefulness of Useless Knowledge

Abraham Flexner was one of the founders of the Institute for Advanced Study in Princeton. The other day I came across a wonderful essay, titled “The Usefulness of Useless Knowledge” (PDF) that he wrote in 1939, on the relevance of curiosity-driven basic research:
    “Much more am I pleading for the abolition of the word "use," and for the freeing of the human spirit. To be sure, we shall thus free some harmless cranks. To be sure, we shall thus waste some precious dollars. But what is infinitely more important is that we shall be striking the shackles off the human mind and setting it free...”
Flexner goes through historic examples in which progress came about not because scientists were thinking about applications, but because they were driven to understand nature, resulting in unforeseen breakthroughs that changed our lives. Needless to say, his examples (Maxwell's equations, Bose-Einstein Condensation, atom spectroscopy...) could not take into account the most stunning developments in the later part of the century, based on our increasingly better understanding of quantum mechanics that underlies pretty much all the little technological gadgets we put under the tree.

But despite his essay being 70 years old, the points are as timely today as they were then, and they have only grown more pressing: Without basic research, progress is not sustainable and applications will eventually run dry. Believing that applied research produces technological advances is like saying electricity comes from the holes in your outlet.

Flexner was also ahead of his time in clearly realizing that science is a community enterprise, driven by social dynamics and the interaction of experts, and not by single individuals working on their own:
    “[O]ne must be wary in attributing scientific discovery wholly to any one person. Almost every discovery has a long and precarious history. Someone finds a bit here, another a bit there. A third step succeeds later and thus onward till a genius pieces the bits together and makes the decisive contribution. Science, like the Mississippi, begins in a tiny rivulet in the distant forest. Gradually other streams swell its volume. And the roaring river that bursts the dikes is formed from countless sources.”

Wednesday, December 12, 2012

AdS/CFT predicts the quark gluon plasma is unstable

The gauge-gravity duality is a spin-off from string theory and has attracted considerable attention for its potential to describe the quark gluon plasma produced in heavy ion collisions. The last news we heard about this was that the AdS/CFT prediction for the energy loss of quarks or gluons passing through the plasma does not agree with the data. The AdS/CFT community has so far been disappointingly silent on this issue, which has now been known for more than a year.

Meanwhile however, there has been an interesting new development pointed out by Brett McInnes in his papers
    Fragile Black Holes and an Angular Momentum Cutoff in Peripheral Heavy Ion Collisions
    Brett McInnes
    arXiv:1201.6443

    Shearing Black Holes and Scans of the Quark Matter Phase Diagram
    Brett McInnes
    arXiv:1211.6835
The dual description of the quark gluon plasma is a black hole in AdS space. Since the plasma resides in a beam pipe in a background metric that is to excellent approximation flat, the black hole that has to be used to describe it is a planar black hole. If one used a “normal” spherical black hole, then the background for the plasma would have a spherical symmetry too.

These planar black holes appear alien at first sight because they have an infinitely extended planar horizon and are nothing like the real black holes that we have for example in the center of our galaxy. The planar black holes cannot in fact exist in an asymptotically flat space; they need the asymptotic AdS-space. So they might be alien in the context of astrophysics, but they make a lot of sense as a dual description for the quark gluon plasma.
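
For concreteness, the textbook example of such a planar black hole (a "black brane") in five-dimensional AdS space has the metric

```latex
ds^2 = \frac{L^2}{z^2}\left(-f(z)\,dt^2 + d\vec{x}^{\,2} + \frac{dz^2}{f(z)}\right),
\qquad f(z) = 1 - \frac{z^4}{z_h^4},
\qquad T = \frac{1}{\pi z_h}
```

with a horizon at z = z_h that extends infinitely in the flat x-directions; its Hawking temperature T is identified with the temperature of the plasma on the boundary. (This is the generic setup; the sheared black holes in the papers above are more involved.)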

Brett now notes the following. The quark gluon plasma that is created in heavy ion collisions generically has an angular momentum when the nuclei do not collide centrally. In particular, this angular momentum comes in the form of a shear, that is, a non-trivial gradient of the velocity component parallel to the beam axis. The reason is, essentially, that the colliding heavy ions are approximately spherical (in their rest frame) and the number of constituent particles that take part in the collision depends on the distance from the beam axis. Thus arises a velocity profile.

So the quark gluon plasma has a shear. But this shear then should also be present in the dual description, i.e. for the black hole. In his paper, Brett studies such a sheared black hole in AdS space – and the interesting thing is that he finds it to be unstable. If one takes into account that pairs of branes can be produced in the AdS background, then one can see that in fact an infinite number of brane pairs can be produced, because the brane action is unbounded from below.

But what does this mean?

The description of the quark gluon plasma that the AdS/CFT duality offers does not take into account that the formation, and subsequent fragmentation into hadrons, is a dynamical, time-dependent process. Brett thus argues that in a realistic situation it takes a while after the formation of the plasma until the system is affected by the instability. He estimates the time it takes for the instability to develop and finds that for currently existing experiments at RHIC and at the LHC the plasma is stable for a time longer than it exists in the collision zone anyway. So there is nothing to observe in these experiments.

The relevant quantity here is the chemical potential. At RHIC and LHC it is very small, essentially because the collision is so highly energetic that very many particle-antiparticle pairs are created. However, for some upcoming new experiments, such as the ones planned at FAIR, that operate at a comparably low collision energy, the instability might become observable for realistic values of the impact parameter!

Brett is however very careful to point out that while the theoretical argument for the instability is solid, one should not take the numbers obtained from his estimate too seriously. Since a truly dynamic treatment of the system is presently not feasible, what he does instead is to calculate the time it takes for signals of an impending instability to propagate in the AdS background. One should not expect the result to be very precise.

Be that as it may, this opens the exciting possibility that upcoming experiments might observe an effect that could only be anticipated by use of the AdS/CFT duality.

Thursday, December 06, 2012

How liquid crystals handle conflicting boundary conditions

Two weeks ago, we discussed nematic films: thin layers of liquid crystals in solution, dropped on a substrate. These thin films are pretty to look at in polarized light, but they also teach us a lot about the behavior of the molecules in solution. Because the thin films can be easily manipulated with electric and magnetic fields, or changes in temperature and boundary conditions, they are excellent experimental territory.

This is interesting physics not only because we use liquid crystals and other types of soft matter in many every-day applications, but also because of their closeness to biological systems: Most of your body is molecules in solution, and most of your body’s processes depend on the organization and interaction of these molecules. Granted, most molecules in biological systems are larger and more complex than the molecules in these thin layers, but one has to start somewhere.

So let us look again at the image of the nematic film that we have seen earlier. Note the not quite regular stripes. Interesting.

Thin Nematic Film
Image source: arXiv:1010.0832 [cond-mat.soft]

This, it turns out, is not the only type of regular structure that one can find in nematic films if they are thin enough. Sometimes one also finds little squares.

Source. Image Credits: Oleg Lavrentovich

This behavior has puzzled physicists since it was first observed, almost 20 years ago. In particular, it has not been understood theoretically at which thickness of the film such modulations start to appear and what determines their size. The width of the film is typically at least 20 times or so larger than the length of the molecules, so this cannot be the relevant scale.

This puzzle is what Oksana Manyuhina, now a postdoc at Nordita, and collaborators studied in their paper
    Instability patterns in ultrathin nematic films: comparison between theory and experiment
    O. V. Manyuhina, A.-M. Cazabat and M. Ben Amar
    Eur. Phys. Lett. 92, 16005 (2010)
    arXiv:1010.0832 [cond-mat.soft] 
The neat thing is how straightforward their analysis is.

The nematic films are described by a vector field for the molecules' orientation. The direction of the vector field does not matter, only its orientation. The system tries to minimize energy, which depends on the orientation of neighboring molecules relative to each other. Up to 2nd order in derivatives of the vector field, there is a handful of terms that can be written down, with constants to parameterize their relative strength.
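
Written out, these are the terms of the standard Frank–Oseen elastic energy for the orientation (director) field n; the surface terms discussed next come on top of this bulk part:

```latex
F_{\rm bulk} = \frac{1}{2}\int d^3r\,\Big[\,K_1\,(\nabla\!\cdot\!\mathbf{n})^2
+ K_2\,\big(\mathbf{n}\!\cdot\!(\nabla\!\times\!\mathbf{n})\big)^2
+ K_3\,\big|\mathbf{n}\!\times\!(\nabla\!\times\!\mathbf{n})\big|^2\,\Big]
```

where the constants K₁, K₂, K₃ parameterize the cost of splay, twist and bend deformations, respectively.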

The relevant new ingredient for understanding the structures in the thin films is the boundary terms. The substrate below the film and the air above it have different chemical properties that lead to conflicting preferences for the molecules: At the liquid interface the molecules want to be parallel to the surface, while at the air interface they want to be orthogonal to the surface.

Surface terms had been investigated before to account for the appearance of quasi-periodic structures, but without success. It was found instead that there should be structures at arbitrarily long wavelengths, in conflict with what the experiments show. In the paper mentioned above, a new term was added that introduces an energy penalty for relative angles between neighboring molecules in the plane of the interface. This has the effect that solutions for the vector field are no longer isotropic in the plane.

Once the expression for the energy is written down, one considers a perturbation of the system by rotating each molecule by a small angle, and does a linear stability analysis. This way one finds the energetically preferred configuration, at least as long as the linear approximation is good. And indeed, these configurations show a quasi-periodic behavior that sets in at some specific width of the film! Exactly when it sets in depends on the coupling constant in front of the new term, which can be nicely fitted with the data.
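
To illustrate the logic of such a linear stability analysis (this is not the actual calculation of the paper; the coefficients below are made-up placeholders), one expands the energy to second order in a small periodic rotation of the molecules with in-plane wavenumber q and checks, as a function of the film thickness h, when that quadratic coefficient first turns negative for some finite q:

```python
# Toy linear stability scan. The functional form below is purely illustrative:
# a bulk elastic cost ~ K q^2, a stabilizing term W, and a destabilizing
# anchoring term ~ -(c/h) q that grows as the film gets thinner, mimicking
# the conflicting boundary conditions. These are NOT the paper's coefficients.

import numpy as np

def quadratic_coefficient(q, h, K=1.0, W=0.5, c=1.0):
    # Second-order energy coefficient of a perturbation with wavenumber q
    # in a film of thickness h (all quantities in arbitrary units).
    return K * q**2 - (c / h) * q + W

qs = np.linspace(1e-3, 50, 5000)
for h in np.linspace(2.0, 0.05, 400):          # scan from thick to thin films
    lam = quadratic_coefficient(qs, h)
    if lam.min() < 0:                          # an unstable periodic mode exists
        q_star = qs[lam.argmin()]              # wavenumber of the fastest-growing mode
        print(f"modulations set in below h ~ {h:.2f}, "
              f"wavelength ~ {2 * np.pi / q_star:.2f} (arbitrary units)")
        break
```

The thickness at which the minimum first crosses zero marks the onset of the instability, and the wavenumber at which it does so sets the period of the stripes.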

Below is a schematic image of how the molecules try to arrange themselves with the conflicting boundary conditions. You have to imagine the liquid substrate on bottom and air on top. The little rods represent the molecules of the liquid crystal. Note how, at the bottom, they are parallel to the surface while at the top they alternate between trying to remain parallel to the lower layers and trying to be orthogonal to the air interface – that is what causes the quasi-periodic structures.

Image credits: Oksana Manyuhina
This is such a nice example of how theoretical physics is supposed to work: An experimental result that can’t be explained. A mathematical model for the system, and an analysis that shows it can correctly describe the observations. In the process we learned about the relevance of boundary conditions, and that one should keep in mind that configurations of a system need not respect the symmetries of the Hamiltonian (here: isotropy in the plane parallel to the substrate).

Wednesday, December 05, 2012

I'm populär

The December issue of the Swedish magazine "Populär Astronomi" (Popular Astronomy) has a researcher profile about me. You can download a PDF here. For all I can tell, it's nicely written and accurate. If, in contrast to me, you actually speak Swedish, I would like to hear your opinion... I met with the journalist, Anna Davour, during our cosmology program last month, after a glass of wine or two. It was a pleasure to talk to her and she did an excellent job.

The reason I'm grinning so stupidly in the photo is that we were looking for a whiteboard that would serve as background, and, when we found one, realized that none of us actually knew what the equations were about. Good thing one can't read them anyway. (We got as far as: It's a Hamiltonian. And something with spin couplings.)

And I have no clue what the header is supposed to mean.

Friday, November 30, 2012

The Holey Grail and its Dual: From String Theory to Strange Metals

Quantum Gravity: The Holey Grail.
Jonathan Granot’s colloquium slides have an amusing typo rendering quantum gravity the “holey grail of physicists.” How appropriate, I thought, and many of these holes are black. But typo aside, how holy really is this grail to physicists? After all, other areas of physics have their own holey grails: quantum computing for example, or high temperature superconductivity.

High temperature superconductors are badly understood theoretically, yet this understanding might allow us one day to create superconducting materials that save energy by avoiding resistive losses in long distance power lines. Imagine the potential! (Alternatively, read this.)

Presently the temperature at which these materials become superconducting is “high” only to the physicist who spends his days playing with liquid nitrogen: The transition temperature below which superconductivity sets in, also called the critical temperature, is in all known cases below -70°C. (The value depends on properties of the material as well as external fields.)

“Normal” superconductivity is described by the theory of Bardeen, Cooper and Schrieffer. At low temperatures, but in the non-superconducting phase, these metals are well described as Fermi liquids. But metals that display high temperature superconductivity are an entirely different story, and one that is largely unwritten.

One thing we know from experiment is that high temperature superconductors are “strange metals” whose electric resistance in the normal, non-superconducting, phase increases linearly with the temperature rather than with the square of the temperature. The latter is what one finds for a Fermi liquid with weakly coupled quasi-particles. Thus, plausibly the reason for our lacking theoretical understanding is that strange metals are strongly coupled systems, which are notoriously hard to understand. “But darling,” said the string theorist, “I can explain everything.” And so he puts a black hole into an Anti-de Sitter (AdS) space and looks at the boundary.

The celebrated AdS/CFT correspondence makes it possible to deal with strongly coupled systems by mapping them to a weakly coupled gravitational system in a space-time with one more dimension. This is computationally more manageable, or at least one hopes so. So far, this correspondence, also called “duality”, between the gravity in the AdS space and the strongly coupled theory on the boundary of this space (thus one dimension less) is an unproved conjecture put forward by Juan Maldacena. However, it has been extensively tested for a few cases and many people are confident that it captures a deep truth about nature (though they might disagree on the extent to which it holds). We previously discussed this idea here and here.

For a high-temperature superconductor, one puts a planar black hole in the AdS space and decorates it with some U(1) vector fields and a scalar field, φ, and then goes on to calculate the free energy for different configurations of the scalar field. For temperatures above a critical value, the free energy is minimal if the scalar field vanishes identically. However, if the temperature drops below this critical value, configurations with a non-vanishing scalar field minimize the free energy, so the system must make a transition. In the figure below, you see the free energy, F, (with some normalization) as a function of the temperature (again with some normalization) for the case of φ = 0 (dotted line) and a case with non-vanishing φ (solid line). The latter solution doesn’t exist for all values of the temperature. But note that when it exists, its free energy is lower than that of the φ=0 solution.

[Image credits: Hartnoll, Herzog and Horowitz]
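
Schematically, the minimal bulk theory used in these holographic superconductor constructions, as introduced by Hartnoll, Herzog and Horowitz (I'm quoting the generic form here; the various papers study variants of it), is Einstein gravity with a negative cosmological constant, a Maxwell field and a charged scalar:

```latex
S = \int d^4x\, \sqrt{-g}\left[\frac{1}{2\kappa^2}\left(R + \frac{6}{L^2}\right)
- \frac{1}{4}F_{\mu\nu}F^{\mu\nu}
- \left|\nabla\psi - i q A\,\psi\right|^2
- m^2|\psi|^2\right]
```

The plot above compares the free energies of black hole solutions with ψ = 0 and ψ ≠ 0 in such a model.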

For these different configurations one can then calculate thermodynamic quantities of interest, such as the electric conductivity (AC and DC) or heat conductivity, and… compare the results with actual measurements.

As you can tell already from my brief summary, this approach to understand strange metals is, presently, far too rough to give quantitative predictions. It can however describe qualitative behavior, such as the scaling of the resistance with temperature that is so puzzling. And that it does quite well!

A bunch of smart people have been studying the strange metal duals for a couple of years now, among others Subir Sachdev, Sean Hartnoll, Hong Liu (who wrote a recent article for Physics Today on the topic), Shamit Kachru, Gary Horowitz, and a group here at Nordita around Lárus Thorlacius.

An exciting recent development is that Horowitz et al added a lattice structure by a periodic boundary condition, which is a big step towards modeling more realistic systems. Amazingly, despite the simplicity of the model, the scaling they find for the optical conductivity (the ability of photons to pass through a material) as a function of the photon’s frequency is in excellent agreement with experiment. (See “Optical Conductivity with Holographic Lattices,” Gary T. Horowitz, Jorge E. Santos and David Tong, arXiv:1204.0519 [hep-th].)

One of the side-effects of commuting from Heidelberg to Stockholm is that the door sign with my name spontaneously relocates when I’m not at the institute, and I acquire new office mates in this process. Which is how I came to talk to Blaise Goutéraux, who arrived at Nordita this fall.

Blaise is among the AdS/CFT correspondents of Nordita’s “subatomic” group. (In fact, at this point I seem to be the only one in this group who doesn’t have anything to do with bulks and branes.) Blaise has taken on another challenge in this area, which is to describe the landscape of holographic quantum critical points, from which the strange metallic behavior at finite temperature is believed to originate. For this, he is working with more complicated geometries that exhibit different scaling behaviors from AdS.

What do we learn from this? The AdS/CFT correspondence is a useful tool, and if you’ve got a hammer quantum critical points might start looking like nails. But the only reason we call the bulk theory gravitational is that we first encountered a theory of this type when we wanted to describe the gravitational interaction. Leaving aside this scientific history, in the end it’s just a mathematical model to calculate observables that can be compared to experiment. And that’s all fine with me.

The big question is however whether this approach will ever be able to deliver quantitative predictions. For this, a connection would have to be made to the microscopic description of the material, a connection to the theories we already know. While this is not presently possible, one can hope that one day it will be. Then one could no longer think of the duality as merely a useful computational tool with an educated guess for the geometry – the bulk theory would have to be a truly equivalent description for whatever is going on with the lattice of atoms on the boundary. But the cases for which the AdS/CFT correspondence has been well tested are very different from the ones that are being used here, and the connection to string theory, the original inspiration for the duality, has almost vanished. It wouldn’t be the first time though that physicists’ intuitions are ahead of formal proof.

Saturday, November 24, 2012

Is a tabletop search for Planck scale signals feasible?

In a recent arxiv paper, Jacob Bekenstein from the Hebrew University of Jerusalem proposed a tabletop experiment to test Planck scale signals:

    Is a tabletop search for Planck scale signals feasible?
    Jacob D. Bekenstein
    arXiv:1211.3816 [gr-qc]

The idea is roughly the following: Take a single photon and spread its wavefunction by suitable lenses, then let it hit some macroscopic solid block, for example a crystal. Focus the photon and detect it.

Since the crystal has a refractive index, the photon has to deposit momentum into it. This momentum will be evenly spread through the crystal, distributed by phonons, and be returned to the photon upon exit. Essentially, the block reacts not as single atoms but as one piece (though it cannot do so instantly; the distribution of momentum must take a finite amount of time).

If you give the crystal a momentum for the duration of the photon’s passage, it will move, but since it’s macroscopically heavy, it will move only a tiny distance. If you look at the shift of its center-of-mass, the distance it will move scales with the energy of the incoming photon over the mass of the block.
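
A crude version of that scaling with made-up but representative numbers (not the parameters Bekenstein uses) already lands in the interesting regime: taking the transferred momentum to be of order E/c for the time ~nL/c the photon spends inside a crystal of length L, one gets

```python
# Rough order-of-magnitude estimate of the center-of-mass shift described
# above, with representative (not Bekenstein's) numbers: the block picks up
# momentum ~ E/c for a time ~ n*L/c, so it shifts by roughly E*n*L/(M*c^2).

E = 2.0 * 1.6e-19        # J, a ~2 eV visible photon
M = 1e-3                 # kg, a gram-sized crystal
L = 1e-2                 # m, crystal length
n = 1.5                  # refractive index
c = 3e8                  # m/s

delta_x = E * n * L / (M * c**2)
print(f"center-of-mass shift ~ {delta_x:.1e} m")   # ~ 5e-35 m
print("Planck length        ~ 1.6e-35 m")
```

So with everyday-sized blocks and single optical photons the center-of-mass displacement is naturally of the order of the Planck length, and choosing the parameters suitably pushes it below.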

Bekenstein puts in the numbers and finds that with presently available technology, the energy of a single photon could be so tiny that the distance the crystal moves would have to be smaller than the Planck length. This, he argues, would “occasionally be at odds with the non-smooth texture of spacetime on Planck scale.” If that is so, the photon would not be able to traverse the crystal, leading to an unexpected, and observable, decrease in the transmission probability.

He also estimates sources of noise that could move the block oh-so-slightly and affect the probability of the photon passing through, thus rendering the outcome inconclusive. Bekenstein argues that by cooling the block to a few Kelvin, which is cold indeed but still feasible, the noise could be kept under control. This might seem implausible at first sight, but note that the thermal noise for the motion of the center-of-mass itself is not the problem, because the photon spends only a very short time inside the crystal. The relevant question is whether the center-of-mass moves in that short duration.

So far, so brilliant. The proposed experiment is an excellent example for a model-independent test. It is so model-independent in fact that I don’t know which model could be constrained by it.

The usual expectation from Planck-scale fluctuations is that they lead to a position uncertainty that cannot become smaller than the Planck length. This does not forbid you to move an item by distances less than the Planck length, it just tells you that the position of the crystal wasn’t defined to a precision better than the Planck length to begin with.

Now, if space-time was a discrete regular lattice with Planck-length spacing then you could not move the crystal, as a rigid block, by anything shorter than the Planck length. Already if the lattice isn’t regular, this is no longer true. But even if the lattice was regular, the crystal would have to be very rigid indeed, so as to not allow any relative shift among atoms that could account for the motion of the center-of-mass. For example, if your block has a number of particles about Avogadro’s number, 10²³, and you move one out of 10¹⁵ of these atoms by a distance of 10⁻²⁰ m (that’s less than the size of a proton and less than the LHC can probe), you’d move the center of mass by about a Planck length. Now I don’t know much about crystals, but it seems quite implausible to me that the effective description of phonons on the lattice should be sensitive to such tiny shifts at all (even worse if the block is not a crystal but some amorphous solid).

Besides this, I don’t understand how the “rejection” of the photon should come about if one took the path integral of all possible trajectories and scatterings in the crystal, none of which is sensitive to Planck scale effects.

In summary: The proposed table-top experiment tests a quantity, the shift of the crystal’s center-of-mass, which is of the order of the Planck length. It is unclear however if there is any plausible model for the phenomenology of quantum gravity that would be constrained by such a measurement. Is a tabletop search for Planck scale signals feasible? Maybe. Is it possible with the proposed experiment? Probably not. Does it have anything to do with Planck mass black holes? No.

Monday, November 19, 2012

There's no free lunch - and no free news either

I read last week that the German daily newspaper "Frankfurter Rundschau" declared bankruptcy. While it's not the first and probably not the last newspaper to throw in the towel, this saddened me considerably because it's the newspaper I grew up with. Some years ago, when back in Europe, I checked their website and found it confusing to the point of being useless. I never gave it a second look, and haven't bought a print issue in forever. So, to make matters worse, now I feel personally responsible for sinking a newspaper I actually thought was pretty good. I also haven't bothered you for a while with my terribly insightful diagrams, so here are two to depict the problem.


The first one shows the present situation of online news providers. We get the news "for free" because they're paid for by advertisement revenue. But this money has to come from somewhere, so we pay for it with the products that are being advertised. Now nobody really likes all the advert clutter around or even covering the news, and advertisement techniques are shifting. The problem is then that if newspaper advertisement doesn't yield results, and companies cut it out of the cycle, they cut off your news feed with it. What bothers me even more is that long before this happens, newspapers have a large incentive to produce content that increases the number of people clicking on adverts. It is questionable whether this benefits the quality of information.


The second diagram shows how the situation would look if we managed to get over the idea that information is free. All content has to be produced somewhere by somebody, and that somebody needs to live from something. It would make more sense to pay directly for news, because then the feedback loop isn't distorted by product sales. If you cut out the marketing here, you cut yourself off from information about products and services, which would lead to incentives for more sensible advertisement rather than to incentives for more traffic-generating content aggregation.

Most providers of online news actually represent a mixture of these two cases, but the first case has become very dominant within the last decade or so. During the last years there has been a trend towards subscriptions for online content, notably realized by the NYT paywall. Now the NYT is a very prominent newspaper with a large readership, and that this seems to be working for them doesn't mean it will work for everybody. The problem is that the subscribers still pay, implicitly, for the advertisement costs with the purchase of products. As long as there is news financed entirely or to a large extent by adverts, capitalism predicts people will prefer it (unless it is of considerably worse quality, that is), and it will be very difficult for pay-for-content news providers to generate enough revenue.

Thursday, November 15, 2012

Book review: “Brain Bugs” by Dean Buonomano

Brain Bugs: How the Brain's Flaws Shape Our Lives
By Dean Buonomano
W. W. Norton & Company (August 6, 2012)


We have to thank natural selection for putting a remarkably well-working and energy-efficient computing unit between our ears. Our brains have allowed us to not only understand the world around us, but also shape nature to suit our needs. However, the changes humans have brought upon the face of the Earth, and in orbit around it, have taken place on timescales much shorter than those on which natural selection works efficiently. And with this comes the biggest problem mankind is facing today: We are changing our environment faster than we can adapt to it - evolution is lagging behind.

The human body did not evolve to sit in an office chair all day long, nor did we have time to adapt to an overabundance of food, to travel across time zones, or to writing text messages while driving on a six-lane highway. We have absolutely no experience in governing the lives of billions of people and their impact on ecological systems. These are not situations our brains are well suited to comprehend.

There are four ways to deal with this issue. First, ignore it and wait for evolution to catch up. Not a very enlightened approach, as we might go extinct in its execution. Second, the Amish approach: keep the environment in a state that our brains evolved to deal with. Understandable, but not for the curious, and not realistically what most people will sign up for. Third, tweak our brains and speed up evolution. Unfortunately, our scientific knowledge isn't yet sufficient for this, at least not without causing even larger problems. This then leaves the fourth: Learn about our shortcomings and try to avoid mistakes by recognizing and preventing situations in which we are prone to make errors of judgement.

I recently reviewed Daniel Kahneman's book "Thinking, Fast and Slow", which focuses on a particular type of shortcoming in our judgement, namely that we're pretty bad at intuitively estimating risks and making statistical assessments. Dean Buonomano's book includes the biases that are the focus of Kahneman's work, but offers a somewhat broader picture, covering other "brain bugs" that humans have, such as memory lapses, superstition, phobias, and imitative learning. Buonomano is very clear in pointing out that all these "bugs" are actually "features" of our brains and beneficial in many if not most situations. But sometimes what is a useful feature, such as learning from others' mishaps, can go astray, as when watching the movie “Jaws” leaves people more afraid of being eaten by sharks than of falling victim to heart attacks.

Dean Buonomano is professor of neurobiology and psychology at UCLA. His book is easy to follow and well written. It moves forward swiftly, which I appreciated very much because it turns out I knew almost everything he wrote about already, a clear sign that I have too many subscriptions in my reader. The illustrations are sparse but useful, the endnotes are helpful, and the reference list is extensive.

I have only one issue to take with this book, which is that Buonomano leaves the reader with little indication of how well established the research is that he writes about. In some cases he offers neurological explanations for "brain bugs" that I suspect are actually quite controversial among specialists - it would be surprising if it wasn't so. He has an interesting opinion to offer on the origin of religious beliefs that he clearly marks as his own, but in other instances he is not as careful. Since I'm not an expert on the topic, but generally suspicious about results from fields with noisy data, small samples, and large media attention, I'm none the wiser as far as the durability of the conclusions is concerned.

In summary: This book gives you a good overview on biases and shortcomings of the human brain in a well-written and entertaining way. You will not get a lot of details about the underlying scientific research, but this is partly made up for with a good reference list. I'd say this book deserves four out of five stars.

Monday, November 12, 2012

Thin Nematic Films: Liquid Beauty

Image source: arXiv:1010.0832 [cond-mat.soft]

No, it's not a moon passage in front of an exoplanet. It's a thin nematic film. Let me explain.

Between condensed matter physics and chemistry, between solids and liquids, there is soft condensed matter. Soft condensed matter deals with the behavior of materials like gels, glasses, surfactants, or colloids. Typically these are fairly large molecules, possibly floating in some substrate, that can assemble into even larger structures. Understanding this assembly, the existence of different phases, and also the motion of the molecules is mathematically challenging due to the complexity of the system.

But taking on this challenge is rewarding: Soft matter is all around you, from toothpaste to body lotion to salad dressing. It is even quite literally in your veins. One of the best known examples of soft matter, however, is probably not blood, but liquid crystals.

Liquid crystals are rod-like molecules whose chemical structure encourages them to collectively align. How well this works depends on variables in the environment, for example temperature and magnetic fields. Liquid crystals have different phases; the transition between them depends on these environmental variables. In the so-called nematic phase, molecules are locally aligned but still free to move around, and the orientation might change over long distances.

To make the molecule orientations visible, one shines polarized light on a thin film of liquid crystals on some type of substrate and takes the image through a polarization filter. The liquid crystal molecules change the polarization of the light depending on the molecules' orientation, so different light intensities become a measure of the orientation of the molecules.

For the images we are looking at here, we have the substrate below the liquid crystal and air above it. These two different surfaces cause a conundrum for the molecules in the liquid crystal, because they would prefer to align parallel to the substrate, but perpendicular to the air surface. Now if the film is fairly thick - "thick" meaning a μm or more - the molecules manage to align along threads that bend to achieve this orientation, though there are occasional topological defects in this arrangement, places where the molecules change orientation abruptly. This is what you see for example in the image below.

[Picture Credits: Oleg Lavrentovich from the Liquid Crystal Institute at Kent State University, for more pictures see here.]

But this behavior changes if the film becomes very thin, down to a tenth of a μm or so. Then the competing boundary conditions from the two interfaces come into conflict with the molecules' desire to align, which breaks the symmetry in the plane of the liquid and leads to the formation of periodic structures, like the ones you see in the first image. In this example, the nematic film does not cover the whole area shown; it is a drop that covers only the parts where you see the periodic structures. This has the merit that one can see that the structures are always oriented perpendicular to the drop's boundary.

The typical molecules in these films are not very large. In the example here, it's 6CB, with the chemical structure C19H21N. The size of this molecule is much smaller than the width of the film at which the effect sets in, so this cannot be the relevant scale. The question at which width the instability sets in has been studied in this paper, from which the image was also taken. It's an intriguing effect that can teach us a lot about the behavior of these molecules, not to mention that it's pretty.

Thursday, November 08, 2012

CMB anisotropy, 13 years later

Sean wrote a wonderful post about the recent measurement of the anisotropies in the cosmic microwave background from the South Pole Telescope. I am so impressed by the data. To give you a visual impression of just how dramatically the measurements have improved, I've dug out an old plot from 1999. (Note the square root in the vertical axis though.)

Fig1: CMB power spectrum, 1999, data from Python (star) and Viper (box). Image source.
Fig 2: CMB power spectrum, 2012, data from WMAP7 and SPT.  Image source.
For more information about the CMB anisotropies, please refer to this or this earlier post.

Tuesday, November 06, 2012

Program: Run your own!

This month, we have a program on "Perspectives of Fundamental Cosmology" here at Nordita, which I've been organizing together with Martin Bojowald, Kristina Giesel and Mairi Sakellariadou. Since it's a format for scientific meetings that is not so common, I thought it would be worthwhile to tell you a few words about it.

The purpose of running a program is to get researchers together for an extended amount of time, to give them the opportunity not only to get to know each other and share their ideas, but also to have the time and space to work on these ideas, discuss them, and find new collaborators. A workshop or a conference is usually too short and the schedule too packed to really allow participants much constructive exchange. And in contrast to a school, the talks and lectures at a program are usually focused on a specific topic and its challenges. The programs are really meant to move a field forward, and to allow people to work on this actively. Though, if you have a student who is beginning his or her own research agenda, sending him or her to a program on the topic will make for a good start.

The programs at Nordita are very similar to the programs at the KITP in Santa Barbara, and in some cases a longer program goes together with a shorter workshop or conference on the same or a closely related topic.

Does the idea of getting together with like-minded researchers for 4 weeks in Stockholm to dig into a problem sound good to you? You can submit a proposal for your own program here; this year's deadline is November 15. The topic should be in theoretical physics or a closely related area of the natural sciences. If your proposal is selected, you'll get a grant to invite people and can basically arrange the schedule as you please. And let me not forget to mention that while Santa Barbara has the nicer beaches, Stockholm is arguably a more interesting place than Goleta.

If you want to know more about Nordita, check out our information brochure (pdf, 4.5 MB).

Friday, November 02, 2012

Interna

Gloria trying out my
running shoes.
Fall has come to Germany and with it a bunch of bad news. The grant application that I had written in spring didn't go through, and the Swedes want EUR 1,500 in additional taxes for the calendar year 2011. My last grandparent died, so now another generation of my family is in the cemetery, "watching the radishes from below," as the Germans say so aptly. Also our landlord died, unexpectedly, last month. Now his wife owns the building, but she isn't up to dealing with the details and handed over responsibility to an apartment management company. We're awaiting the changes this might bring, and I for once am glad I insisted on writing every little detail into the lease, thinking to myself: what if he dies and his wife can't recall what we agreed upon.

We're also fighting again with the German "Familienkasse" for our child benefits. They had informed us at the beginning of the year (after a full year of struggle with them) that Stefan would finally get the usual monthly rate, retroactively back to the girls' birth. Alas, after a few months they stopped paying, and he never saw a cent for the first year. They didn't give any reason for this.

After we had waited for a while to see if any information would trickle down our direction, I finally lost patience and spent an hour or so trying to get somebody on the phone. Amazingly enough, they have no waiting loop, but just disconnect you if all lines are busy. Yes, that's right, I actually had to call their number over and over again. And then all I got was a call center where they evidently had no information in Stefan's files about what was going on. So much for German efficiency.

When I asked whether they could maybe connect me to the local office that was actually responsible for this nonsense, they said no, they can't connect me, and there's no way to reach that office by phone; I can only appear there in person if I really want to. Or rather my husband can, as it's actually his application.

As much as I like my iPhone, it's a serious disadvantage that you can't slam down the receiver.

By coincidence I then came across a website of the European Union where they offer a service called SOLVIT whose sole purpose seems to be to help with this type of communication problem between national institutions of the European Union. So now I submitted our case. I heard from them within 24 hours and they promised they'll take on the problem. I'm curious if they'll manage to sort this out, stay tuned.

The kids meanwhile are having fun taking apart the furniture and pushing all the buttons they can get their hands on. Everything that beeps is particularly interesting, for example the microwave and the babyphone. To help align Lara's gaze, she now has to wear an eye patch for a few hours a day. We were expecting protest, but she doesn't seem to mind. The biggest problem is that it hurts when torn off. Needless to say, Gloria will cry and scream until she also gets an eye patch, which we put on her cheek. Stefan and I also sometimes wear one. Lara probably meanwhile thinks it's a strange kind of fashion.

Our November program on "Perspectives of Fundamental Cosmology" starts on Monday, and the coming weeks will be very busy for us. After that, I hope things slow down towards the end of the year.

Lara with her eye patch.


Tuesday, October 30, 2012

ESQG 2012 - Conference Summary

Conference Photo: Experimental Search for Quantum Gravity 2012.

The third installment of our conference "Experimental Search for Quantum Gravity" has just wrapped up. It was good to see both familiar faces and new ones, sharing a common interest and excitement about this research direction. This time around the event was much more relaxing for me because most of the organizational work was done, masterfully, by Astrid Eichhorn, and the administrative support at Perimeter Institute worked flawlessly. In contrast to 2007 and 2010, this time I also gave a talk myself, albeit a short one, about the paper we discussed here.

All the talks were recorded and can be found on PIRSA. (The conference-collection tag isn't working properly yet, I hope this will be fixed soon. You'll have to go to "advanced search" and search for the dates Oct 22-26 to find the talks.) So if you have a week of time to spare don't hesitate to blow your monthly download limit ;o) In the unlikely event that you don't have that time, let me just tell you what I found most interesting.

For me, the most interesting aspect of this meeting was the recurring question about the universality of effective field theory. Deformed special relativity, you see, has returned reincarnated as "relative locality," boldly abandoning locality altogether once the problem could no longer be ignored. It still does not, however, have an effective field theory limit. A cynic might say "how convenient," considering that 5th order operators in Lorentz-invariance violating extensions of the standard model are so tightly constrained you might as well call them ruled out.

If you're not quite as cynical, however, you might take into account the possibility that the effective field theory limit indeed just does not work. That, it was pointed out repeatedly -- among others by David Mattingly, Stefano Liberati and Giovanni Amelino-Camelia -- would actually be more interesting than evidence for some higher order corrections. If we find data that cannot be accommodated within the effective field theory framework, for example evidence for delayed photons without evidence for 5th order Lorentz-invariance violating operators, that would give us quite something to think about.

I agree: Clearly one shouldn't stop looking just because one believes nothing can be found. I have to add however that the mere absence of an effective field theory limit doesn't convince me there is none. I want to know why such a limit can't be made before I believe in this explanation. For all I know it might be absent just because nobody has made an effort to derive it. After all, there isn't much of an incentive to do so. As the German saying goes: Don't saw off the branch you sit on. That having been said, I understand that it would be exciting, but I'm too skeptical myself to share the excitement.

A related development is the tightening of constraints on an energy-dependence of the speed of light. Robert Nemiroff gave a talk about his and his collaborators' recent analysis of the photon propagation times from distant gamma ray bursts (GRBs). We discussed this paper here. (After some back and forth it finally got published in PRL.) The bound isn't the strongest in terms of significance, but it makes it to 3σ. The relevance of this paper is the proposal of a new method to analyse the GRB data, one that, given enough statistics, will allow for tighter constraints. And, most importantly, it delivers constraints both on scenarios in which highly energetic photons are slower and on those in which they are faster than photons of lower energy. For an example of how that is supposed to happen, see Laurent Freidel's talk.
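To get a feeling for why gamma ray bursts are the probe of choice here, a back-of-the-envelope estimate helps. This is my own illustration, not part of Nemiroff's analysis: a first-order, Planck-scale energy dependence of the photon speed accumulates over gigaparsec distances into a delay of the order of a tenth of a second, which is within reach of timing measurements. A minimal sketch in Python, ignoring the proper cosmological redshift integration and using toy numbers:

    # Back-of-the-envelope delay from a linear energy dependence of the photon
    # speed, v(E) ~ c * (1 - E/E_QG), over a distance D. Toy numbers; the
    # cosmological redshift integration is ignored.
    c = 3.0e8                    # speed of light in m/s
    E_QG = 1.22e19               # quantum gravity scale in GeV, here set to the Planck energy
    E_photon = 10.0              # photon energy in GeV, typical for the highest-energy GRB photons
    D = 1.0 * 3.086e25           # distance to the burst: 1 Gpc in meters (illustrative)

    delay = (E_photon / E_QG) * D / c
    print(f"expected delay: {delay:.3f} seconds")   # comes out at roughly 0.1 s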

A particularly neat talk was delivered by Tobias Fritz, who summarized a simple proof that a periodic lattice cannot reproduce isotropy for large velocities, and did so without making use of an embedding space. Though his argument so far works for classical particles only, I find it interesting because with some additional work it might become useful to quantify just how well a discretized approach reproduces isotropy or, ideally, Lorentz-invariance, in the long-distance limit.

Another recurring theme at the conference was dimensional reduction at short distances, which has recently become quite popular. While there are by now several indications (most notably from Causal Dynamical Triangulation and Asymptotically Safe Gravity) that at short distances space-time might have fewer than three spatial dimensions, the ties to phenomenology are so far weak. It will be interesting to see though how this develops in the coming years, as clearly the desire to make contact with experiment is present. Dejan Stojkovic spoke on the model of "Evolving Dimensions" that he and his collaborators have worked on and that we previously discussed here. There has, however, as far as I can tell, not been progress on the fundamental description of space-time necessary to realize these evolving dimensions.

Noteworthy is also that Stephon Alexander, Joao Magueijo and Lee Smolin have for a while now been poking around at the possibility that gravity might be chiral, i.e. that there is an asymmetry between left- and right-handed gravitons, which might make itself noticeable in the polarization of the cosmic microwave background. I find it difficult to tell how plausible this possibility is, though Stephon, Lee and Joao all delivered their arguments very well. The relevant papers I think are this and this.

I very much enjoyed James Overduin's talk on tests of the equivalence principle, as I agree that this is one of the cases in which pushing the frontiers of parameter space might harbor surprises. He has a very readable paper on the arxiv about this here. And Xavier Calmet is among the brave who haven't given up hope on seeing black holes at the LHC, arguing that the quantum properties of these objects might not be captured by thermal decay at all. I agree with him of course (I pointed this out already in this post 6 years ago), yet I can't say that this lets me expect the LHC will see anything of that sort. More details about Xavier's quantum black holes are in his talk or in this paper.

As I had mentioned previously, the format of the conference this year differed from the previous ones in that we had more discussion sessions. In practice, these discussion sessions turned into marathon sessions with many very brief talks. Part of the reason for this is that we would have preferred the meeting to last 5 days rather than 4 days, but that wasn't doable with the budget we had available. So, in the end, we had the talks of 5 days squeezed into 4 days. There's a merit to short and intense meetings, but I'll admit that I prefer less busy schedules.

Wednesday, October 24, 2012

The Craziness Factor

Hello from Canada and sorry for the silence. I'm here for the 2012 conference on Experimental Search for Quantum Gravity, and the schedule is packed. As with the previous two installments of the conference, we have experimentalists and theorists mixed together, which has the valuable benefit that you actually get to speak to people who know what the data means.

I learned yesterday from Markus Risse, for example, that the Auger Collaboration has a paper in the making that fits the penetration depth data, which it had earlier been claimed could not be explained with protons, heavier ions, or compositions thereof. It turns out the data can be fitted with a composition of protons and ions after all, though we'll have to wait for the paper to learn how well this works.

Today I just want to pick up an amusing remark by Holger Müller from Berkeley, who gave the first talk on Monday about his experiments in atom interferometry. He jokingly introduced the "Craziness Factor" of a model, arguing that a preferred frame, and the violations of Lorentz-invariance it induces, have a small craziness factor.

Naturally, this led me to wonder what terms contribute to the craziness factor. Here's what came to my mind:
    + additional assumptions not present in the Standard Model and General Relativity. Bonus: if these assumptions are unnecessary
    + principles and assumptions of the Standard Model and General Relativity dropped. Bonus: without noticing
    + problems ignored. Bonus: problems given a name
    + approach has previously been tried. Bonus: and abandoned, multiple times
    + additional parameters. Bonus: parameters with unnatural values, much larger or smaller than one, without any motivation
    + model does not describe the real world (Euclidean, 2 dimensions, without fermions, etc). Bonus: Failure to mention this.
    + each time the model is being referred to as "speculative," "radical" or "provocative". Bonus: By the person who proposed it.
    + model has been amended to agree with new data. Bonus: multiple times.
And here is what decreases the craziness factor:
    - problems addressed. Bonus: Not only the author worries about these problems.
    - relations learned, insights gained. Bonus: If these are new relations or insights, rather than reproductions of findings from other approaches.
    - Simplifications over standard approach. Bonus: If it's an operational, not a formal simplification.
    - Data matched. Bonus: Additional predictions made.
In practice, perceived craziness has a subjective factor. The more you hear about a crazy idea, the less crazy it seems. Or maybe your audience just gets tired of objecting.
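Just for fun, here's a toy sketch of how one might tally such a score; the items follow the lists above, but the weights (one point per item, one per bonus, minus one per mitigating item) are of course entirely made up:

    # Toy tally of the craziness factor sketched above. One point per item,
    # one per bonus, minus one per mitigating item; the weights are made up.
    def craziness(crazy_items, crazy_bonuses, sane_items, sane_bonuses):
        return len(crazy_items) + len(crazy_bonuses) - len(sane_items) - len(sane_bonuses)

    # Example: three of the bad items, one bonus, one redeeming feature.
    score = craziness(
        ["additional assumptions", "problems ignored", "previously tried"],
        ["problems given a name"],
        ["data matched"],
        [],
    )
    print(score)   # prints 3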

Friday, October 19, 2012

Turbulence in a 2-dimensional Box: Pretty

Physicists like systems with fewer than the three spatial dimensions that we are used to. Not so much because that's easier, but because it often brings in qualitatively new features. For example, in two dimensions vortices in a fluid fulfill a conservation law that does not hold in three dimensions.

The vorticity of a fluid is a local quantity that measures, roughly, how much the fluid spins around each point. In a two dimensional system, the only spinning that can happen is around the axis perpendicular to the two dimensions of the system. That is, if you have fluid in a plane, the vorticity is a vector that is always perpendicular to the plane, so the only thing that matters is the length and direction of this vector. In two dimensions now, the integral of the squared vorticity is a conserved quantity, called the enstrophy.

Pictorially this means that if you create a vortex - a point that is itself at rest but around which the fluid spins - you can only do so in pairs that spin in opposite directions.
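For the numerically inclined, here is a minimal sketch (my own illustration, not taken from the paper below) of how one computes the vorticity and the enstrophy of a 2-dimensional velocity field on a periodic grid; the test field is a pair of counter-rotating vortices:

    import numpy as np

    # Vorticity and enstrophy of a 2D velocity field on a doubly periodic grid.
    # The test field is a pair of counter-rotating (Taylor-Green-like) vortices.
    N, L = 256, 2 * np.pi
    dx = L / N
    x = np.arange(N) * dx
    X, Y = np.meshgrid(x, x, indexing="ij")

    u = np.cos(X) * np.sin(Y)       # x-component of the velocity
    v = -np.sin(X) * np.cos(Y)      # y-component of the velocity

    # periodic central differences
    def ddx(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)

    omega = ddx(v) - ddy(u)         # in 2D the vorticity has a single component

    circulation = omega.sum() * dx * dx            # integral of omega: zero on a periodic box
    enstrophy = 0.5 * (omega**2).sum() * dx * dx   # integral of omega^2, up to the factor 1/2

    print(f"total circulation: {circulation:.1e}")
    print(f"enstrophy: {enstrophy:.2f}")           # close to 2*pi^2 for this field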

This neat paper:
    Dynamics of Saturated Energy Condensation in Two-Dimensional Turbulence
    Chi-kwan Chan, Dhrubaditya Mitra, Axel Brandenburg
    Phys. Rev. E 85, 036315 (2012)
    arXiv:1109.6937 [physics.flu-dyn]
studies what happens if you put a 2-dimensional fluid in a box with periodic boundary conditions and disturb it by a force that is random in direction but acts at a distinct frequency. Due to the inverse cascade characteristic of two-dimensional turbulence, the energy that enters the system at the frequency of the driving force is transferred to longer wavelengths. However, in a box of finite size there's a longest wavelength that will fit in. So the energy "condenses" into this longest possible wavelength. At the same time, the random force creates turbulence that leads to the formation of two oppositely rotating vortices.
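To make the setup more concrete, here is a bare-bones pseudospectral sketch of the 2-dimensional vorticity equation in a doubly periodic box, stirred at a fixed wavenumber with random direction and phase. To be clear, this is my own toy illustration, not the code used in the paper; a real study would run far longer, with dealiasing and a proper time stepper:

    import numpy as np

    # 2D vorticity equation d(omega)/dt + u.grad(omega) = nu*Laplacian(omega) + f
    # in a doubly periodic box, stirred at a fixed wavenumber with random
    # direction and phase. Toy parameters, forward Euler, no dealiasing.
    N, L, nu, dt, kf = 64, 2 * np.pi, 1e-3, 1e-3, 10.0
    x = np.linspace(0.0, L, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")

    k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers on a 2*pi box
    KX, KY = np.meshgrid(k, k, indexing="ij")
    K2 = KX**2 + KY**2
    K2_safe = K2.copy()
    K2_safe[0, 0] = 1.0                            # avoid division by zero at k = 0

    # integer wavevectors with |k| close to the forcing wavenumber
    shell = [(m, n) for m in range(-12, 13) for n in range(-12, 13)
             if abs(np.hypot(m, n) - kf) < 0.5]

    def rhs(w_hat):
        """d(omega_hat)/dt from advection and viscosity."""
        psi_hat = w_hat / K2_safe                  # streamfunction: Laplacian(psi) = -omega
        u = np.fft.ifft2(1j * KY * psi_hat).real   # u =  d(psi)/dy
        v = np.fft.ifft2(-1j * KX * psi_hat).real  # v = -d(psi)/dx
        wx = np.fft.ifft2(1j * KX * w_hat).real
        wy = np.fft.ifft2(1j * KY * w_hat).real
        return -np.fft.fft2(u * wx + v * wy) - nu * K2 * w_hat

    rng = np.random.default_rng(1)
    omega_hat = np.zeros((N, N), dtype=complex)    # start from a fluid at rest
    for step in range(5000):
        m, n = shell[rng.integers(len(shell))]     # random direction on the forcing shell
        stir = 0.5 * np.cos(m * X + n * Y + rng.uniform(0, 2 * np.pi))
        omega_hat += dt * (rhs(omega_hat) + np.fft.fft2(stir))

    omega = np.fft.ifft2(omega_hat).real           # vorticity field, cf. the figure below
    print("max |vorticity|:", np.abs(omega).max())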

Below is a plot of the vorticity of the fluid in the box. The two white/red and white/blue swirls are the vortices.
Fig 1 from arXiv:1109.6937.
Pseudocolor plot of vorticity of fluid in 2-dimensional box,
showing condensation into long wavelength modes.
My mom likes to say "symmetry is the art of the stupid", and she's right in that symmetry all by itself is usually too strict to be interesting. Add a little chaos to symmetry however and you get a good recipe for beauty.

Wednesday, October 17, 2012

Book Review: "Soft Matter" by Roberto Piazza

Soft Matter: The stuff that dreams are made of
By Roberto Piazza
Springer (April 11, 2011)

Some months ago I had a conversation about nematic films. Or was trying to have. Unfortunately I didn't have the faintest clue what this conversation was about. Neither, to my shame, did I understand much of the papers on the subject. Then I came across a review of Roberto Piazza's book on "Soft Matter" and thought it sounded like just what I needed to learn some new vocabulary.

Roberto Piazza is professor of Condensed Matter Physics at the Politecnico di Milano, and his book isn't your typical popular science book. It is instead a funny mixture of popular science book and what you might find in a textbook introduction, minus the technical details. In some regards this mixture works quite well. For example, Piazza is not afraid to introduce terminology and even uses an equation here and there. In other regards, however, this mixture does not work well. The book introduces far too much terminology at a quite breathless pace. It's a case in which less would have been more.

The book covers a lot of terrain: colloids, aerosols, polymers, liquid crystals, glasses and gels, and in the last chapter amino acids, proteins, and the basic functions of cells. The concepts are first briefly introduced, and then later chapters offer examples and more details. In principle this is a good structure. Unfortunately, the author has a tendency to pack the pages with too much information, information that isn't always conducive to the flow of the text, and doesn't spend enough time on clarifying the information he wants to get across, or that I believe he might have wanted to get across.

The text is accompanied by several color figures, which are in most cases helpful, but there could have been more, especially to show molecular structures that are often explained only in words. The book comes with a glossary that is very useful. It does not, however, come with references or suggestions for further reading, so readers who want to know more about a topic are left on their own.

In summary, the book is a useful introduction to soft matter, but it isn't exactly a captivating read. Especially in the last chapter, where Piazza goes on about proteins and their functions - while constantly reminding the reader that he's not a biologist - I had to resist the temptation to skip some pages. Not because the topic is uninteresting, but because the presentation is unstructured and wasteful with words, and thus wasteful of the reader's time.

That having been said, lack of structure and too many words is just the type of criticism you'd expect from a German about an Italian, so take that with a grain of salt ;o) And, yes, now I know what nematic films are. I'd give this book three out of five stars.

Thursday, October 11, 2012

PRL on "Testing Planck-Scale Gravity with Accelerators"

With astonishment I saw the other day that Phys. Rev. Lett. published a paper I had come across on the arxiv earlier this year:
I had ignored this paper for a simple reason. The author proposes a test for effects that are already excluded, by many orders of magnitude, by other measurements. The "Planck-Scale Gravity" that he writes about is nothing but 5th order Lorentz-invariance violating operators. These are known to be extremely tightly constrained by astrophysical measurements. And the existing bounds are much stronger than the constraints that can be reached, in the best case, by the tests proposed in the paper. We already know there's nothing to be found there.

The author himself mentions the current astrophysical constraints (in the PRL version at least, not in the arxiv version; peer review isn't entirely useless). But he fails to draw the obvious conclusion: The test he proposes will not test anything new. He vaguely writes that
"The limits, however, are based on assumptions about the origin, spatial or temporal distribution of the initial photons, and their possible interactions during the travel. Another critical assumption is a uniformly distributed birefringence over cosmological distances... In contrast to the astrophysical methods, an accelerator Compton experiment is sensitive to the local properties of space at the laser-electron interaction point and along the scattered photon direction."
He leaves the reader to wonder then what model he wants to test. One in which the vacuum birefringence just so happens to be 15 orders of magnitude larger at the collision point than anywhere else in space where particles from astrophysical sources might have passed through? Sorry, but that's a poor way of claiming to test a "new" regime. At the very least, I would like to hear a reason why we should expect an effect so much larger. Space-time here on Earth, as well as in interstellar space, is, as far as quantum gravitational effects are concerned, essentially flat. Why should the results be so dramatically different?

I usually finish with a sentence saying that it's always good to test a new parameter regime, no matter how implausible the effect. In this case, I can't even say that, because it's just not testing a new parameter regime. The only good thing about the paper is that it drives home the point that we can test Planck scale effects. In fact, we have already done so, and Lorentz-invariance violation is the oldest example of this.

Here's one of the publication criteria that PRL lists on the journal website:
"Importance.
Important results are those that substantially advance a field, open a significant new area of research or solve–or take a crucial step toward solving – a critical outstanding problem, and thus facilitate notable progress in an existing field."
[x] Fails by a large amount.

Thanks to Elangel for the pointer.

Monday, October 08, 2012

Towards an understanding of the Sun's Butterfly Diagram

The layered structure of the sun. Image credits: NASA
It's hot, round, and you're not supposed to stare at it: The Sun has attracted curiosity since we crawled out of the primordial pond. And even though we now have a pretty good idea of how the Sun does its job, some puzzles stubbornly remain. One of them is where sunspots form and how their location changes with the solar cycle. A recent computer simulation has now managed to reproduce a pattern that brings us a big step closer to understanding this.

The Sun spins about a fixed axis, but since it's not solid its rotation frequency is not uniform: At the visible surface, the equator rotates in about 27 days, whereas close to the poles it takes 35 days. The plasma that forms the Sun is held together by its own gravitational pull, with a density that is highest in the center. In this high density core, the sun creates energy by nuclear fusion. Around that core there's a layer, the radiative zone, where the density is already too small for fusion, and the heat created in the core is just passed on outwards by radiative transfer. Further outside, where the density is even lower, the plasma passes on the heat by convection, basically cycles of hot plasma moving upwards and cooler plasma moving downwards. Even further outside, there's the photosphere and the corona.
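As a concrete toy version of the differential rotation just described, one can parametrize the surface rotation period with the usual sin² dependence on latitude. The functional form is standard for such fits, but the coefficients below are simply chosen to reproduce the 27 and 35 days quoted above, so treat this as an illustration rather than a measured profile:

    import numpy as np

    # Toy surface rotation period: 27 days at the equator, 35 days at the poles.
    # The sin^2 form is the standard leading term in such fits; the coefficients
    # are just chosen to match the two numbers quoted in the text.
    def rotation_period_days(latitude_deg):
        lat = np.radians(latitude_deg)
        return 27.0 + 8.0 * np.sin(lat) ** 2

    for lat in (0, 30, 60, 90):
        print(f"{lat:2d} deg: {rotation_period_days(lat):.1f} days")
    # 0 deg: 27.0, 30 deg: 29.0, 60 deg: 33.0, 90 deg: 35.0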

The physics of the convection zone is difficult because the motion of the plasma is turbulent, so it's hard to understand analytically, and numerical simulations require enormous computing power. Some generic features are well understood. For example, the granularity of the sun's surface comes about through a mechanism similar to Rayleigh–Bénard convection: in the middle of a convection cell hot plasma rises, and towards the outside of the cell cooler plasma moves down again.


It has also been known for more than a century that sunspots are not only cooler than the normal surface of the sun, but are also regions with strong magnetic fields. They arise in pairs with opposite magnetic polarity. Sunspot activity follows a cycle of roughly 11 years, after which the polarity switches. So the full magnetic cycle is actually 22 years, on average.

A big puzzle that has remained is why sunspots are created predominantly at low latitudes (below 30°N / above 30°S) and why, over the course of the solar cycle, their production region moves towards the equator. When one plots the latitude of the sunspots over time, this creates what is known as the "butterfly diagram", shown below


You can find a monthly update of the butterfly diagram on the NASA website. The diagram for the magnetic field strength follows the same pattern, except for the mentioned switch in polarity, see for example page 54 of this presentation. On the slide, note that in the higher latitudes the magnetic fields move towards the poles rather than towards the equator.
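If you want to see how the characteristic wing shape comes about, a toy model is enough: let spots emerge around ±30° at the start of each 11-year cycle and drift towards the equator with some scatter. This is purely illustrative and is not fitted to any data; real emergence latitudes come from sunspot catalogs like the one behind NASA's diagram:

    import numpy as np
    import matplotlib.pyplot as plt

    # Toy butterfly diagram: the emergence latitude drifts from about 30 degrees
    # towards the equator over each 11-year cycle, in both hemispheres.
    rng = np.random.default_rng(42)
    cycle_years = 11.0
    times, lats = [], []

    for t in np.arange(0.0, 33.0, 0.05):              # three cycles
        phase = (t % cycle_years) / cycle_years        # 0 at cycle start, 1 at the end
        mean_lat = 30.0 * (1.0 - phase)                # drift towards the equator
        for sign in (+1, -1):                          # northern and southern hemisphere
            n_spots = rng.poisson(2)
            times.extend([t] * n_spots)
            lats.extend(sign * (mean_lat + 3.0 * rng.standard_normal(n_spots)))

    plt.scatter(times, lats, s=1, color="k")
    plt.xlabel("time in years")
    plt.ylabel("latitude in degrees")
    plt.show()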

Numerical simulations of the convection zone have been made since the early 80s, but so far something has always left the scientists wanting. Either the sunspots didn't move as they should, or the rotation wasn't faster towards the equator, or the necessary strong and large-scale magnetic fields were not present, or something else just didn't come out right.

At Nordita in Stockholm, there's a very active research group around Axel Brandenburg, which has developed a computer code to simulate the physics of the convection zone. It's called the "pencil code" and is now hosted by Google code; for more information see here. Basically, it's an integration of the (non-linear) hydrodynamics equations that govern the plasma, with magnetic fields added. In the video below you see the result of a very recent simulation done with his collaborators in Helsinki:


The colors show the strength of the magnetic field (toroidal component), with white and blue being the strongest fields, blue for one polarity and white for the other. Two things you should be able to see in the video: first, the rotation is faster at the equator than at the poles; second, the spots of strong magnetic fields at low latitudes migrate towards the equator. One can't see it very well in the video, but in the higher latitudes the magnetic fields do move towards the poles, as they should. In the time units shown in the top-left corner, about 600 time steps correspond to one solar cycle. A computation like this, Axel tells me, takes several weeks, run on 512 to 2048 cores.

Details on how the movie was made can be found in this paper
    Cyclic magnetic activity due to turbulent convection in spherical wedge geometry
    Petri J. Käpylä, Maarit J. Mantere, Axel Brandenburg
    arxiv: 1205.4719
The model has six parameters that quantify the behavior of the plasma. For some of these parameters, values that would be realistic for the sun are too large to simulate. So instead one uses different values and hopes to still capture the essential behavior. The equations and boundary conditions can be found in the paper, see eqs (1)-(4) and (6)-(11).

The calculation doesn't actually simulate the whole convection zone, but only a wedge of it, with periodic boundary conditions. In the video this wedge is just repeated. The poles are missing because there the coordinate system becomes pathological. In the part that they simulate, they use 128 x 256 x 128 points. A big assumption made here is that the scales too small to be captured at this resolution don't matter for the essential dynamics.

If you find the video too messy, you can see the trend of the magnetic fields nicely in the figure below, which shows the average strength of the magnetic fields by latitude as a function of time.

Fig 3 from arxiv:1205.4719.


Not all is sunny of course. For example, if you gauge the timescale with the turnover time in the convection zone, which can be inferred from other observations, the length of the magnetic solar cycle comes out at about 33 years instead of 22. And while the reason for the faster rotation towards the equator can be understood from the anisotropy of the turbulence (with longitudinal velocity fluctuations dominating over latitudinal ones), the butterfly trend is not (yet) analytically well understood. Be that as it may, I am certainly impressed by how much we have been able to learn about the solar cycle despite the complicated turbulent behavior in the convection zone.

The original movie (in somewhat better resolution) and additional material can be found on Petri's website. Kudos to Axel and Amara for keeping me up to date on solar physics.