
Monday, July 30, 2012

So I made a video

I've been trying to convince some people here at Nordita that it would be great if we had a couple of brief videos explaining what research we're doing, in addition to the seminars and lectures that we have online. You can tell that I miss PI's active public outreach program...

After some soul-searching I figured there's no way around coming up with a video myself. Ideally one that a) leaves plenty of space to do better, and b) makes it very clear I'm not the person to record or edit any video. So here it is:


There is nothing happening in this video, except me standing there and talking, so don't expect much action.

As you can easily see, I still haven't figured out how to turn off the automatic brightness adjustment. That's because it's not my video camera, I have no manual, and the menu description is cryptic at best.

While I'm at it, I want to draw your attention to this nice blog run by Claire Thomas, a physics graduate student at UC Berkeley, who collects videos of researchers explaining why they do what they do. So get inspired, turn on your camera, and tell us what you're working on!

Saturday, July 28, 2012

ORCID: Working towards a global researcher ID

I didn't change my family name when I got married six years ago; it would have been a complication on my publication list that I didn't want to spend brain time on. Stefan's family name, Scherer, is much more common than mine. Indeed, there is another physicist named Stefan Scherer, and to make matters worse, the other Stefan Scherer works on topics quite similar to those our Stefan Scherer worked on before he left academia. A case for middle names then. Still, the occasional mixup has happened.

Thus, while I'm the only Hossenfelder on the arxiv and my Google scholar profile basically assembles itself, I'm sympathetic to the problem of author identification. The arXiv helpfully offers an author ID. I don't know how many people actually use it, and in any case it's of limited use as long as publishers don't use it.

So here's an interesting initiative then: ORCID - the Open Researcher and Contributor ID. The aim of this initiative is to create a global and interdisciplinary registry for authors. It's run by a non-profit organization with a board of directors that seems to bring together several key institutions, and looks quite trustworthy to me. On their website one finds:

"The central goal of the Open Researcher and Contributor ID non-profit organization (ORCID) is to solve the long-standing name ambiguity problem in scholarly communication. Accurate attribution is a fundamental pillar of the scholarly record. Global identification infrastructure exists for content but not for the producers of that content, creating challenges in establishing the identity of authors and other contributors and reliably linking them to their published works.

The core mission of ORCID is to rectify this by creating a central registry of unique identifiers for individual researchers and an open and transparent linking mechanism between ORCID and other current author identifier schemes. This registry will be a centralized identity system for collecting and managing information describing i) contributors themselves and ii) relationships between contributors and their scholarly publications as well as various other types of academic output."
I didn't find much on the website in terms of procedure, so I don't know how they are assembling their database. I guess that as an author you don't actually have to do much yourself, though at some point you might be sent a notification asking you to have a look at your data and check that it's accurate - at least that would be my guess. There's some information on the website about how academic institutions can support this initiative; it vaguely mentions a fee, but gives no details. Either way, it looks to me like this global author ID is well under way and has the potential to simplify many researchers' and publishers' lives.

Wednesday, July 25, 2012

Neutral Kaons and Quantum Gravity Phenomenology

Earlier this year, there was an interesting program at the KITP on "Bits, Branes and Black Holes." Unfortunately I couldn't be there, for reasons that are presently happily taking apart the new IKEA catalogue. However, many audios and videos are online, and meanwhile there are also some papers on the arxiv picking up the discussions from the program.

One of the most interesting developments is a revival of the idea that black hole evolution might just not be unitary. Recall: if one takes Hawking's semi-classical calculation of black hole evaporation, one has a hard time explaining how information that falls into a black hole can come out again. (And if you don't recall, read this.) There is the option to just accept that information doesn't come back out. However, this would be in conflict with unitarity, one of the sacred principles of quantum mechanics. But nothing really is sacred to a theoretical physicist with a headache, so why not do without unitarity? Well, there is an argument dating back to the early 80s by Banks, Susskind and Peskin that this would go along with a violation of energy conservation.

Each time this argument came up, I recall somebody objecting. Personally, I am not very convinced that's the right way to go, so I was never motivated enough to look into this option. But interestingly, Bill Unruh has now offered a concrete counter-example showing that it is possible to have decoherence without violating energy conservation (which you can find on the arXiv here), and it seems to have gone some way towards convincing people that it is possible. It seems to me quite likely at this point that non-unitary black hole evaporation will again increase in popularity in the next years, so this is a good time to tell you about neutral Kaons. Stay with me for some paragraphs and the link will become clear.

Black hole evaporation seems non-unitary when taking Hawking's calculation all the way to the end stage because the outcome is always thermal radiation no matter what one started with - it's a mixed state. One could have started for example with a pure state that collapsed to a black hole. Unitary evolution will never give you a mixed state from a pure state.

But what if we'd take it seriously that black hole evaporation is not unitary? It would mean that, once you take gravity into account, it might be possible to observe decoherence in quantum systems when there shouldn't be any according to normal quantum mechanics. Everything moves through space-time and, in principle, that space-time should undergo quantum fluctuations. So it's not a nice and smooth background, but what has become known as "space-time foam" - a dynamic, constantly changing background, a background in which Planck scale black holes might be produced and decay all the time.

This idea calls for a phenomenological model, a bottom-up approach that modifies quantum mechanics in such a way as to take into account this decoherence induced by the background. In fact, a model for this was proposed already in the early 80s by Ellis et al in their paper "Search for Violations of Quantum Mechanics." It is relatively straightforward to reformulate quantum mechanics in terms of density matrices and allow for an additional, non-unitary term in the evolution equation. As usual for phenomenological models, this modification comes with free parameters that quantify the deviations. For quantum gravitational effects, you should expect the parameters to be a number of order one times the necessary powers of the Planck mass. (If that doesn't make sense, watch this video explaining natural units.)
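For the record, the modification is of the form (written down from memory, so don't nail me down on signs and conventions)

\[
\frac{\partial\rho}{\partial t} \;=\; -\,i\,[H,\rho] \;+\; \delta H\,\rho \,,
\]

where the extra term δH is not of Hamiltonian form. For the two-state Kaon system, δH can be parametrized by three real constants, usually called α, β and γ, which have dimension of energy - these are the free parameters I just mentioned.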

This brings us to the question of how to look for such effects.

A decisive feature of quantum mechanics is the oscillation between eigenstates, which is observable if the state in which a particle is produced is a superposition of these eigenstates. Decoherence is the loss of phase information, so the oscillation is sensitive to decoherence. Neutrino oscillations are an example of an oscillation between two Hamiltonian eigenstates. However, neutrinos are difficult to observe - it takes a lot of patience to collect enough data because they interact so weakly. In addition, at the typical energies at which we can produce them, the oscillation wavelength is of the order of a kilometer to some hundred kilometers - not really lab friendly.
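For comparison, the standard two-flavor oscillation probability - plain quantum mechanics, no quantum gravity anywhere - reads

\[
P(\nu_a \to \nu_b) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right),
\]

and decoherence would damp the oscillating factor. With the measured mass square differences and the energies we can produce, the oscillation length 4πE/Δm² is what gives the kilometer-and-above scales just mentioned.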

Enter the neutral Kaons. The Kaons are hadrons; they are composites of quarks. One of the two neutral Kaons has the quark content strange plus anti-down, the other down plus anti-strange. Thus, even though they are neutral, they are not their own anti-particles; instead, each is the anti-particle of the other. These Kaons are, however, not eigenstates of the Hamiltonian. Naively, one would expect the CP eigenstates that can be constructed from them to be the eigenstates of the Hamiltonian. Alas, the CP eigenstates are not Hamiltonian eigenstates either, because the weak interaction breaks CP invariance.
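Explicitly, in one common phase convention, the CP eigenstates are the combinations

\[
|K_1\rangle = \tfrac{1}{\sqrt{2}}\left(|K^0\rangle + |\bar K^0\rangle\right), \qquad
|K_2\rangle = \tfrac{1}{\sqrt{2}}\left(|K^0\rangle - |\bar K^0\rangle\right),
\]

with CP eigenvalues +1 and -1 respectively. The actual mass eigenstates are, up to normalization, K_S ≈ K_1 + ε K_2 and K_L ≈ K_2 + ε K_1, where ε is the small CP violating parameter that will appear again below.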

The way to see this is to note that the state with CP eigenvalue +1 can decay into two pions, which is the preferred decay channel. The one with eigenvalue -1 needs (at least) three pions. Since the three pion decay has much less phase space available - three pion masses almost add up to the Kaon mass - it is much less likely, which means that the CP -1 state lives longer.

Experiment indeed shows that there is a long-lived and a short-lived Kaon state. These measured particles are the mass eigenstates of the Hamiltonian. But if you wait for the short-lived states to have pretty much all decayed, you can show that the long-lived one can still do a two pion decay. In other words, the CP eigenstates are not identical to the mass eigenstates, and the CP +1 state mixes back in. This indirect proof of CP violation in the weak interaction got Cronin and Fitch the Nobel Prize in 1980.

The same process can be used to find signs of decoherence. That's because the additional, decoherence-inducing term in the evolution equation enters the prediction of the observables, eg the ratio of the decay rates in the two pion channel. The relevant quantity of the neutral Kaon system that enters here is the difference in the decay widths, which happens to be really small, of the order of 10⁻¹⁴ GeV, times the CP violating parameter ε², which is about 10⁻⁶ - and we know these are values that can be measured with presently available technology.

This has to be compared to the expectation for the size of the effect if it was a quantum gravitational effect, which would be of the order M²/m_Pl, where M is the mass of the Kaons (about 500 MeV) and m_Pl is the Planck mass. If you put in the numbers, you'll find that they are of about the same order of magnitude. There's some fineprint here that I omitted (most importantly, there are three parameters, so you need several different observables), but roughly you can see that it doesn't take a big step forward in measurement precision to be sensitive to this correction. In fact, presently running experiments are now on the edge of being sensitive to this potential quantum gravitational effect, see eg this recent update.
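If you want to check the numbers yourself, here's the back-of-the-envelope arithmetic (round numbers only, and keep the above fineprint in mind):

# Back-of-the-envelope comparison of the two scales (all energies in GeV).
# The input values are the round numbers quoted in the text, nothing more precise.

delta_gamma = 1e-14   # difference of the neutral Kaon decay widths, ~10^-14 GeV
epsilon_sq  = 1e-6    # CP violating parameter squared, eps^2 ~ 10^-6
m_kaon      = 0.5     # neutral Kaon mass, about 500 MeV
m_planck    = 1.2e19  # Planck mass, about 1.2 x 10^19 GeV

sensitivity = delta_gamma * epsilon_sq   # scale the Kaon observables are sensitive to
qg_estimate = m_kaon**2 / m_planck       # naive size of a quantum gravitational effect

print(f"experimental sensitivity : {sensitivity:.1e} GeV")   # ~1e-20 GeV
print(f"quantum gravity estimate : {qg_estimate:.1e} GeV")   # ~2e-20 GeV

Both come out at around 10⁻²⁰ GeV, which is the whole point: the two scales are comparable.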

To come back to the opening paragraphs, the model that is being used here has the somewhat unappealing feature that it does not automatically conserve energy. It is commonly assumed that energy is statistically conserved, for example Ellis et al write "[A]t our level of sophistication the conservation of energy or angular momentum must be put in by hand as a statistical constraint." Mavromatos et al have worked out a string-theory inspired model, the D-particle foam model, in which energy should be conserved if the recoil is taken into account, but the effective model has the same property that individual collisions may violate energy conservation. It will be interesting to see whether these models receive an increased amount of attention now.

I like this example of neutral Kaon oscillations because it demonstrates so clearly that quantum gravitational effects are not necessarily too small to be detected in experiments, and it is likely we'll hear more about this in the near future.

Monday, July 23, 2012

2012 Statistics from the German Science Foundation

The German Science Foundation (DFG) has recently released statistics and tables about science funding in Germany and, in some cases, the European Union. You can find all the numbers on this website. If they have an English version, I couldn't find it, so let me pick out for you some graphics that I found interesting.

First, here's a graphic for the national investment in research and development as a percentage of the GDP by country.

From top to bottom the list shows Israel, Finland, Sweden, Japan, Korea, Denmark, Switzerland, Germany, USA, Austria, Iceland, OECD total, France, Australia, Belgium, Canada, EU-27, Great Britain, Slovenia, Netherlands, Norway. I'm not surprised to see Sweden scoring high, but I am surprised that the Netherlands invest less than Great Britain. The color code from top to bottom says universities, other research institutes, industry, private non-profit.

The second graphic shows the distribution of ERC grants by country and field of research. The color code is: orange - humanities and social sciences, red - life sciences, green - natural sciences, blue - engineering. It would be interesting to see these numbers compared to the population, but there is no such graph. The text does say, however, that Israel and Switzerland have secured a very large number of grants relative to their population. I have no clue why there's an arrow pointing to Iceland, maybe just so you don't miss it.


Finally, let me pick out a third graphic. It shows the fraction of women among those contributing to DFG projects (principal investigator, co-PI and so on). The fields shown are from left to right: humanities, social sciences, biology, medicine, veterinary medicine, chemistry, physics, mathematics, geology, mechanical engineering, computer science and electronics, architecture. The horizontal line at 15% with the label "Durchschnitt" is the average.


As usual, the female ratio in physics is on the lower end, something like 7 or 8%. I don't know what's wrong with architecture, which seems to have an even lower ratio. The text accompanying the graphic says that this fraction is the same as, or similar to, the fraction of women among the applicants. You can apply for funding with the DFG as soon as you have a PhD. The fraction one sees in the graphic is, however, more representative of the female ratio in tenured faculty. Not surprisingly so, because it is difficult to get institutional funding (except possibly scholarships) without faculty support, and few try. (I did. Unsuccessfully.)

On the lighter side, I note that the Germans have adopted the English word "gender analysis" and made it into "Gender-Analyse."

Wednesday, July 18, 2012

Watching Ytterbium

Absorption image of Yb ion.
If you know anything about atoms you know they're small. And if you know a little more you know that the typical size of an atom sounds Swedish - it's a few Ångström, or 10⁻¹⁰ meters.

The first actual images of atoms went around the world two decades or so ago, taken with scanning tunneling microscopes. These microscope images require careful preparation of the sample, and they also take time. It is highly desirable to find a method that works faster and is more flexible for small samples, ideally without a lot of preparation and without damaging the sample.

Taking an image with a scanning tunneling microscope doesn't have a lot in common with watching something the way we are used to. For the average person, "watching" means detecting photons that have been scattered off objects. Quantum mechanics sets a limit on how well you can "watch" an atom absorbing and releasing photons of some energy. That's because the absorption of a photon will excite an electron and temporarily put it into a level with higher energy. Alas, these excited levels have some lifetime and don't decay instantaneously. As long as the electron is in the excited state, it can't absorb another photon.

So you might conclude it's hopeless trying to watch a single atom. But a group of experimentalists from Australia have found a nifty way to do exactly that. Their paper was published in Nature two weeks ago.

So how do you do it? First, get some Ytterbium. Strip off an electron, so you have a positively charged ion, and put it into an ion trap in ultra-high vacuum. Then laser cool your ion to a few mK (that's really, really cold).

Ytterbium has a resonance at 370 nm (in the near ultraviolet). At that frequency you can excite an Yb electron from the S ground state to the P excited state. Alas, when it decays, the electron has a probability of 1/200 of not going back into the ground state but ending up in a metastable D state of intermediate energy. The lifetime of the excited P state is some nanoseconds, but that of the metastable state is much, much longer, about 50 microseconds. So if you just keep exciting your atom at 370 nm, before long you'll have kicked it into the metastable state, where it stays, and you can't watch anything anymore at that frequency. So what's the experimentalist to do? They drive the transition out of the metastable state with a second laser at the right wavelength, in this case at 935.2 nm (in the near infrared), to get the electron back into the ground state.
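To get a feeling for why the second laser is so essential, here's a toy estimate with the numbers from above (the P state lifetime is assumed to be 8 nanoseconds for concreteness, and the actual excitation dynamics is ignored entirely):

# Toy estimate: how long does the ion stay "bright" without the repump laser?
# Illustrative numbers only, taken from the text (P lifetime assumed ~8 ns).

p_shelve = 1.0 / 200   # probability per decay to end up in the metastable D state
tau_p    = 8e-9        # lifetime of the excited P state in seconds (assumed)
tau_meta = 50e-6       # lifetime of the metastable state, ~50 microseconds

scatters_before_dark = 1.0 / p_shelve           # on average ~200 photons
bright_time = scatters_before_dark * 2 * tau_p  # very roughly a few microseconds

print(f"photons scattered before going dark : {scatters_before_dark:.0f}")
print(f"bright period (rough)               : {bright_time*1e6:.1f} microseconds")
print(f"dark period without repumping       : {tau_meta*1e6:.0f} microseconds")
# Without the 935.2 nm laser the ion spends most of its time in the dark
# metastable state, and there is hardly anything left to image.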

Actually, to excite the atom you don't need incident light of exactly the right frequency, and in fact that's not what they use. The absorption probability has a finite width and is not an infinitely sharp peak. That means there's a small probability that the atom will absorb light of slightly smaller frequency and then emit it at the resonance frequency. The actual light the experimentalists used is thus not at 370 nm, but at 369.5 nm. That has the merit that you can in principle tell (with a certain probability) which light was absorbed and reemitted and which was never absorbed to begin with. The detuning also gives you a handle on how strongly you can afford to disturb your atom, for every time a photon scatters off it, it gets a recoil and moves. You don't want it to move too much, otherwise you'll get a blurry image.

So here, then, is how you take your image. Shine the slightly detuned light on the ion while driving the transition back from the metastable state to the ground state, and measure the photons at the resonance frequency. Do the same thing without driving the transition back from the metastable state. This has the effect that the probability that the ion can absorb anything is really small, so you get essentially a background image. Then subtract the two images, and voila. While you do that, you'd better try not to have too much fluctuation in the intensity of the light.
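Just to make the subtraction concrete, here is a minimal toy version of the procedure (made-up numbers, and certainly not the authors' actual analysis):

# Toy version of the background subtraction: one exposure with the repump laser on
# (the ion absorbs the probe light), one with the repump off (the ion is mostly dark),
# then subtract. Made-up numbers throughout.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
illumination = 100.0                       # mean probe counts per pixel

absorption = np.zeros(shape)
absorption[30:34, 30:34] = 10.0            # toy dip where the ion casts its shadow

image_repump_on  = rng.poisson(illumination - absorption)  # ion absorbing
image_repump_off = rng.poisson(illumination, shape)        # ion mostly dark

difference = image_repump_off.astype(float) - image_repump_on
# Up to shot noise, the difference image contains just the ion's absorption imprint -
# provided the probe intensity does not drift between the two exposures.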

The merit of this method is its flexibility, and it's also reasonably fast, with illumination times between 0.05 and 1 second. The authors write that with more improvement this method might be useful to study the dynamics of nucleic acids.

Monday, July 16, 2012

Bekenstein-Hawking entropy, strong and weak form

At the recent Marcel Grossmann meeting, I had been invited to give a talk about my 2009 paper with Lee on the black hole information loss paradox. (For a brief summary of the paper, see here.)

It occurred to me in some conversations after my talk that I lost part of the younger audience in the step where I was classifying solution attempts by the strong and weak form of the Bekenstein-Hawking entropy. Rarely have I felt so old as when I realized that the idea that the entropy of a black hole is proportional to its area, and the holographic principle which is based on it, have been beaten into young heads so efficiently that the holographic principle has already been elevated to a property of Nature - despite the fact that it has the status of a conjecture, a conjecture based on a particular interpretation of the black hole entropy.
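For reference, the entropy in question is

\[
S_{\rm BH} \;=\; \frac{A}{4\, l_{\rm Pl}^2} \;=\; \frac{c^3 A}{4 G \hbar}\,,
\]

where A is the area of the black hole horizon and l_Pl is the Planck length (I've set Boltzmann's constant to one).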

The holographic principle says, in brief, that the information about what happens inside a volume of space is encoded on its surface. It's like the universe is a really bad novel - just by reading the author's name and the blurb on the cover you can already tell the plot.

The holographic principle plays a prominent role in string theory, gravity is known to have some "holographic" properties, and the idea just fits so perfectly with the Bekenstein-Hawking entropy. So there is this theoretical evidence. But whether or not quantum gravity is actually holographic is an open question, given that we don't yet know which theory of quantum gravity is correct. If you read the wikipedia entry on the holographic principle, however, you might get a very different impression than that it is a conjecture.

The most popular interpretation of the Bekenstein-Hawking entropy is that it counts the number of microstates of the black hole. This interpretation seems to have become so popular that many people don't even know there are other interpretations. But there are: Scholarpedia has a useful list that I don't need to repeat here. They come in two different categories, one in which the Bekenstein-Hawking entropy is a property of the black hole and its interior (the strong form), and one in which it is a property of the horizon (the weak form). If it is a property of the horizon, there is, most importantly, no reason why the entropy of the black hole interior, or the information it can store, should be tied to the black hole's mass by the Bekenstein-Hawking formula. If the weak interpretation is true, a black hole of a certain mass can store an arbitrary amount of information.

If Hawking radiation does indeed not contain any information, as Hawking's calculation seems to imply and which is the origin of the black hole information loss paradox to begin with, then you're forced to believe in the weak form. That is because if the black hole loses mass then, according to the strong form of the Bekenstein-Hawking entropy, its capacity to store information decreases, and that information has to go somewhere if it's not destroyed. So it has to come out, and then one has some explaining to do as to just how it comes out.

There is a neat and simple argument making this point in a paper by Don Marolf, the "Hawking radiation cycle":
"[O]ne starts with a black hole of given mass M, considers some large number of ways to turn this into a much larger black hole (say of mass M′), and then lets that large black hole Hawking radiate back down to the original mass M. Unless information about the method of formation is somehow erased from the black hole interior by the process of Hawking evaporation, the resulting black hole will have a number of possible internal states which clearly diverges as M′ → ∞. One can also arrive at an arbitrarily large number of internal states simply by repeating this thought experiment many times, each time taking the black hole up to the same fixed mass M′ larger than M and letting it radiate back down to M. We might therefore call this the ‘Hawking radiation cycle’ example. Again we seem to find that the Bekenstein-Hawking entropy does not count the number of internal states."

Let me also add that there exist known solutions to Einstein's field equations that violate the holographic bound, though it is unclear if they are physically meaningful, see this earlier post.

While I admit that the strong form of the Bekenstein-Hawking entropy seems more appealing due to its universality and elegance, I think it's a little premature to discard other interpretations. So next time you sit in a talk on the black hole information loss problem, keep in mind that the Bekenstein-Hawking entropy might not necessarily be a measure for the information that a black hole can store.

For a good discussion of both these interpretations and their difficulties, see "Black hole entropy: inside or out?" by Ted Jacobson, Donald Marolf and Carlo Rovelli.

Thursday, July 12, 2012

Cabibbo what?

I recently came across an old report from Nordita covering the years 1957-1982. It's in Swedish, and for all I can tell it covers the mission, the organization, and the research areas that were pursued back then: atomic and nuclear physics, condensed matter, and astrophysics. Somewhere in the middle of the little booklet one finds this photo:


The photo has no caption and I have no clue who the people are. I suspect it was taken sometime in the 70s. The woman in the photo is the only female face that appears in the whole booklet. Wondering what a caption might have read, I thought it could be something like "Cabibbo what? Forget about that, how about tonight?", while the guy on the far right clearly feels like slapping his forehead ;o)

Anyway, Stefan and I couldn't really figure out what the multiplet is that they have on the blackboard there, the one with the two L's and the N in the middle. Anybody have a good guess? Or does anybody actually know who's in the photo? Seeing that they look pretty young, they might actually still be alive. Or maybe you have a suggestion for an alternative caption...

Update: Somebody on FB indeed recognized people in the photo! The person on the very right is Finn Ravndal, and the woman is Cecilia Jarlskog.

Tuesday, July 10, 2012

100 years ago: The discovery of cosmic rays

Already in 1785, Charles Coulomb pointed out a puzzle that would take more than a century to solve: An electrically charged conductor will lose its charge with time, even if the only way to discharge is through air, which was generally considered a good insulator.

In 1900 the two Germans Julius Elster and Hans Geitel, and independently the Scotsman Charles Wilson, offered the explanation that air is made partly conductive by the presence of ionizing radiation. It was known at this time that the Earth contains slightly radioactive substances that create a natural background radiation. This was believed to be the origin of the ionizing radiation.

The meteorologist Franz Linke, with support from Geitel and Elster, set out to test this hypothesis. If the radiation is emitted by the Earth, its intensity should drop with distance from the ground. In 1902 and 1903 Linke, on board a balloon, found tentative evidence that, after an initial decrease between 1000 and 3000 meters, the intensity of ionizing radiation increased again. He nevertheless concluded that the origin of the ionization should primarily be sought on Earth. Linke's research was followed up on by Theodor Wulf, who measured the intensity, among other places, high up in the Alps, and found no evidence for an increase of intensity caused by "cosmic radiation" - he was the first to coin the term.

But the situation remained inconclusive. Wulf himself went on to measure, on top of the Eiffel tower, the discharge of a charge insulated by air. He predicted that at that height (about 300 m) the radiation should be about 74% less than on the ground. Instead, he found it to be only 13% less. And in 1910, the Italian physicist Pacini argued that, if the ionizing radiation is emitted by the solids in the Earth, then there should be less of it over the sea. That, however, was not the case either.

On August 7th 1912, Victor Hess and his colleague Kolhörster started on the final one of several balloon rides, and this final one led up to 5350 meters. Despite an oxygen mask, Hess reported feeling disoriented, and in fact accidentally turned off one of his detectors already below 4000 m. Nevertheless, his measurement clearly showed an increase in the ionization. This was the first conclusive evidence for cosmic radiation.

Then the First World War spelled a time-out for academic curiosity. It wasn't until 1921 that the American physicist Robert Millikan, together with Ira Bowen, got back to this line of research. Their first balloon flights also found an increase in the ionizing radiation, though less pronounced than what Hess had found. The New York Times celebrated him as the discoverer of "Millikan radiation." Needless to say, Hess and Kolhörster were not amused.

The measurement of ionizing radiation dramatically improved with the invention of the Geiger counter in 1928 and the spread of cloud chambers. By 1930 there was little controversy left about the existence of cosmic radiation. Victor Hess was awarded the Nobel Prize in physics in 1936.

Today, cosmic radiation is the true high energy frontier, and it has led to a great many discoveries, starting with the positron and the muon, and later the pion, up to the invaluable knowledge that atmospheric neutrinos have brought to the standard model of particle physics. And, who knows, maybe the first evidence for physics beyond the standard model will come from the cosmic ray frontier too.

Sunday, July 08, 2012

Interna

The past month has been very busy for us, and it will unfortunately remain that way for some more weeks, after which I hope time pressure will ease off.

Our two lovely ladies are still not willing to speak to us. They have, however, developed other communication channels, or maybe I've just become good at guessing what they want. They now both have four molars, and Gloria is finally getting her missing front teeth (the outer ones on the bottom, nicely visible in the photo to the right).

The developing brain of the human infant is a mystery as well as a miracle, and one of the least well understood properties of this development is childhood amnesia, the fact that adults' earliest memories normally date back to the age of 2-4 years, but not before that. We do of course learn many things before that age which remain with us, but they do not come in the form of episodic memory, in which we recall our self being in a certain situation. Nobody really knows what exactly the reason for childhood amnesia is, or which functions are necessary for the formation of episodic memory. It is generally believed that it is connected to self-awareness and also language development, which comes with the ability to conceive of and understand narratives.

There is, interestingly, some research showing that the onset of memories differs between cultures and also between genders, see eg this pdf (women tend to recall more details). There is a line of research in which it has been suggested that early autobiographical memory formation depends on the way in which parents talk about the past and encourage their children to do the same. It is also well known that emotionally intense events can be recalled back to a very early age. Generally, high emotional impact is conducive to memory formation.

My earliest memory, I believe, is being bitten by a hamster. (I also recall having been told repeatedly to not stick my fingers into the cage, but, well.) I must have been roughly 3 years or so at that time. I also recall falling down the stairs, but that must have been later. I have a bunch of memories of my younger brother when he was old enough to walk, but not old enough to talk, which also dates me at about 3 years. Interestingly enough, I have absolutely no memory of my parents till past the age of 4. Which fits well with my perception that the girls do not so much take note of me as a person, but as a freely available service that's just around, like the air to breathe, but nothing that really requires attention.

Needless to say, I am wondering what one day will be Lara and Gloria's earliest memory.

Wednesday, July 04, 2012

Hello, Higgs. What now?

CMS 7 TeV + 8 TeV diphoton channel.
Source: Phil Gibbs
So they've found the Higgs. Not that this announcement was much of a surprise today, after lots of rumors had trickled into the blogosphere during the last weeks. A milestone, they will write in the history books, a symphony of global collaboration and combined effort, a triumph of the human mind it was, finding the particle responsible for the origin of mass, roughly where expected and with roughly the properties expected.

Roughly, but not exactly, as it seems: apparently there are too few tau/anti-tau decays, and, as we've known for some while, the mass is somewhat heavy.

There are some good summaries here, here and here.

There will now follow years and years of analysis of LHC data and theorizing; thousands of papers and hundreds of cubic meters of coffee will be needed to get a clearer picture. We've learned something - now we can revise our understanding of nature. And in the end, we'll be left with a puzzle, an open question, and a theory that requires higher energies to really test it.

And so, strangely, on this sunny day for high energy particle physics, I feel somewhat blue about the prospects. It's been almost two decades since the last discovery of a particle that we presently believe is elementary, the top quark in 1995, which was the year I finished high school. It's been a long way and an enormous effort to that little bump in the above plot. There isn't so much more we can do with hadron colliders. If we try really hard, we can ramp up the energy a little and improve the luminosity a little. Of course what we want next is a lepton collider like the ILC that will complete the picture that the LHC delivers.

But we have a diminishing return on investment. Not so surprisingly - it's a consequence of our ever better understanding that it takes more effort to find something new. And to make that effort of blue sky fundamental research, we need societies who can afford it. There's an economic question here about the way mankind will develop: whether or not we'll be able to take care of our survival needs and still have enough resources left to push back the boundary of nature's secrets.

If I look at the ongoing disaster that the European Union has turned into, and at our inability to fix problems with the global financial system, our inability to help billions of people who live in poverty and who lack health care, our inability to find a global perspective on global problems - and our ignorance of most of these issues too - I am far from certain that we will be able to continue to afford that investment. And if we can't, then we lose a major source of innovation, and we risk getting stuck entirely.

So on this day of triumph for fundamental research, I really hope we do get our act together and manage to address the problems we have in governing a global society, for the sake of science.

Sunday, July 01, 2012

Workshop on Nonlocality, Summary

Sorry for the silence. I've been stuck in the workshop on nonlocality that I organized here at Nordita, and it seemed somewhat rude to blog through talks of people I invited to speak.

It all went well, except that my co-organizer cancelled two days before the start of the workshop, so I had to be the sole entertainer of a group of 27 people. A group that, to my own shame, was almost entirely male except for one student, which, however, I only realized when I was standing in front of them. I have public speaking anxiety, one of the most common anxieties there is, but really somewhat unfortunate for a scientist. People tell me my talks are okay, but the more ancient parts of my brain still think the smart thing to do when a group of guys stares at me is to run really fast, and the Scandinavians are a particularly scary audience. Anyway, I think I managed to pull it off, minus the usual projector glitches.

My interest in non-locality comes about because it shows up in different approaches to understand the quantum structure of space time, and it plays a role in many attempts to resolve the black hole information loss problem too. It is, in that, comparable to the minimal length that I've been working on for, ah, a hundred years or so, at least in somebody's reference frame. Nonlocality and the minimal length both seem to be properties of nature deeply connected to quantum gravity, even though we don't yet really understand the details, and they're also related to each other.

Nonlocality comes in many different variants, and the purpose of the workshop was to shed some light on the differences and features. The most common forms of nonlocality are:

  • Quantum mechanical entanglement. The type of nonlocality that we find in standard quantum mechanics - though with no information exchange over space-like distances.
  • Quantum field theory, non-commuting operators on space-like separated points. This can ruin the causal structure of your theory and should be approached with great caution.
  • Quantum field theory, higher-order Lagrangians, which show up in many models and approaches but bring a lot of problems with them too. Gariy Efimov and Leonardo Modesto spoke about realizations of this, and how these problems might be remedied. A certain book by Gariy Efimov that was published the year I was born, in Russian, and was never translated into English, plays a central role here. It's so much a cliché I couldn't not mention it - I'll probably end up having to dig out the damned book and learn Russian, or at least pipe it into Google (as Leonardo apparently did).
  • Quantum mechanics and quantum field theory, non-commuting operators for space and time themselves, ie non-commutative space time in its many variants. This might or might not be related to the previous two points. The problem is that many approaches towards such a quantum space time are not yet at a point where they can deal with quantum fields, so the relation is not clear. Klaus Fredenhagen gave a very interesting talk about the spectrum of area and volume operators in a non-commutative space-time. Michael Wohlgenannt, Michele Arzano, Jerzy Kowalski-Glikman and his student Tomasz Trzesniewski spoke about other versions of this idea.
  • The whole AdS/CFT bulk-brane stuff, black hole complementarity and so on. Larus Thorlacius spoke about that. Unfortunately, Samir Mathur, who had intended to come to the workshop, couldn't make it, so the topic was very underrepresented.
  • It might have passed you by, but Giovanni Amelino-Camelia, Lee Smolin and Kowalski-Glikman, together with a steadily increasing number of co-workers, have cooked up something they call "the principle of relative locality," essentially to cure the problems with nonlocality in DSR (see this earlier post for details). The idea is, roughly, that the notion of what constitutes a point depends on the location of the observer. I've tried and failed to make sense of this - it seems to me just DSR in disguise - but who knows, I might be wrong, and maybe they're onto something big. They too can't do quantum field theory on that space (yet), so it's not well understood how this notion of nonlocality relates to the above ones. Lee went so far as to claim relative locality solves the black hole information loss problem, but I think at this point they don't even have a proper definition of what constitutes a black hole in this scenario to begin with, so it seems a little premature to claim victory.
  • A failure to reproduce a local space-time that occurs in lattice or network approaches, which runs under the name of "disordered locality." You can imagine it like tiny wormholes distributed over a nicely smooth space-time, except that the wormholes have no geometry themselves because, fundamentally, space-time isn't a manifold. Fotini has been on to this for a while, but since she couldn't come, the topic only came up once or twice in the discussion.
  • As Ingemar Bengtsson reminded us, trapped surfaces in General Relativity have some non-local features already.
We had a couple more talks that touched on several of these topics: emergent gravity by Lorenzo Sindoni and Olaf Dreyer, black hole information loss by Jonathan Oppenheim, and a talk by Luis Garay about a stochastic model for nonlocality in quantum mechanics that I found very interesting.

Lastly, I should mention we had two discussion sessions that picked up the topics from the talks, one moderated by George Musser, one by Olaf Dreyer.

It was an interesting group of people that mixed better than I had expected. I had been a little afraid they would just talk past each other, but it seems they found some overlap on many different points. I certainly learned a lot from this meeting, and it has given me food for more thought. 

There are some slides of talks on the website; we hope to receive some more during the next week.