Tuesday, September 03, 2013

What is Special Relativity?

I got issues. Here’s one. I don’t like what people say about special relativity. Because we’re friends, special relativity and I.

I got issues with certain people in particular, those writing popular science books. Sometimes I feel like I have to thank every physicist who takes the time to write a book. But, well, I got issues. Also, I got sunglasses and a haircut, see photo.

I’m presently reading “The Universe in the Rearview Mirror” (disclaimer: free copy) and here we go again. Yet another writer who gives special relativity a bad name.

Here’s the issue.

Ask some theoretical physicist what special relativity is and they’ll say something like “It’s the dynamics in Minkowski space” or “It’s the special case of general relativity in flat space”. (Representative survey taken among our household members, p=0.0003). But open a pop science book and they’ll try to tell you special relativity applies only to inertial frames, only to observers moving with constant velocities.

Now, as with all nomenclature it’s of course a matter of definition, but referring to special relativity as being only good for inertial frames is a bad terminology, and not only because it doesn’t agree with the modern use of the word. The problem is that general relativity is commonly, both among physicists and in the pop sci literature, referred to as Einstein’s theory of gravity, rubber sheet and all. Einstein famously used the equivalence principle to arrive at his theory of gravity and that principle says essentially: “The effects of gravity are locally indistinguishable from acceleration in flat space.” With the equivalence principle, all you need to do is to take acceleration in flat space and glue it locally to a curved space, and voila there’s general relativity. I’m oversimplifying somewhat, all right, but if you know a thing or two about tensor bundles that’s essentially it.

The issue is, if you don’t know how to describe acceleration in flat space then the equivalence principle doesn’t gain you anything. So if you’ve been told special relativity works only for constant velocities, it’s impossible to understand all the stuff about angels pulling lifts and so on. You also mistakenly come to believe that to resolve the twin paradox you need to take into account gravity, which is nonsense.

Yes, historically Einstein first published special relativity for inertial frames, after all that’s the simplest case, and that’s where the name comes from. But the essence of special relativity isn’t inertial frames, it’s the symmetry of Minkowski space. It’s absolutely no problem to apply special relativity to accelerated bodies. Heck, you can do Galilean relativity for accelerated bodies! All you need is to know what a derivative is. You can also, for that matter, do Galilean relativity in arbitrary coordinate frames. In fact, most first semester exercises seem to consist basically of such coordinate transformations, or at least that’s my recollection. So don’t try to tell me that the ‘general’ of relativity has something to do with the choice of coordinates.
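To make the point concrete, here’s the standard textbook treatment of constant proper acceleration, done entirely within special relativity (this is the usual worked example, not something from the book under discussion):

```latex
% Hyperbolic motion: a body with constant proper acceleration a in
% flat Minkowski space, parametrized by its proper time \tau.
\begin{align}
  x(\tau)    &= \frac{c^2}{a}\,\cosh\!\frac{a\tau}{c}, &
  c\,t(\tau) &= \frac{c^2}{a}\,\sinh\!\frac{a\tau}{c}.
\end{align}
% One checks that x^2 - (ct)^2 = c^4/a^2, so the worldline is a
% hyperbola, and the four-acceleration has constant magnitude a.
% Nothing but Minkowski space and derivatives went into this --
% no curved spacetime, no general relativity.
```

The same construction resolves the twin paradox: integrate the proper time along each worldline and the accelerated twin simply comes out younger, gravity nowhere in sight.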

So yes, historically special relativity started out being about constant velocities. But insisting – more than 100 years later – that special relativity is about inertial frames, and only about inertial frames, is like insisting a telephone is a device to transfer messages about cucumber salad, just because that happened to be the first thing that ever went through a phone line. It’s an unnecessarily confusing terminology.

Since special relativity is busy boosting your rocket ships with laser cannons and so on, on her* behalf I want to ask you for somewhat more respect. Special relativity is perfectly able to deal with accelerated observers.


*German nouns come in three genders: male, female and neuter. Special relativity, or theory in general, is a female noun. Time is female, space is male. The singularity is female, the horizon is male. Intelligence is female, insanity male. Don’t shoot the messenger.

Friday, August 30, 2013

Should you write a science blog?

I get asked a lot how I keep up the blogging. It might be the second most asked question right after “What happened to your hair?” (Answer: It’s a natural disaster, get used to it.) The third frequently asked question, especially by students, is “Do you have any advice if I want to start blogging?” Yeah, I do, but I’m not sure you want to hear it.

I used to think there should really be more scientists blogging. That’s because for me science journalism is not so much a source of information as a source of news. It tells me where the action is and points in a direction. If it seems interesting I’ll go and look up the references, but if it’s not a field close to my own I prefer it if somebody who actually works on the topic offers an opinion. And I don’t mean a cropped sentence with a carefully chosen adjective and politically correct grammar. In some research areas, quantum gravity one of them, there really aren’t many researchers offering first-hand opinions. Shame on you.

So yeah, I think there should be more scientists blogging. But over the years I’ve seen quite a few of them starting to blog like penguins start to fly. If I had a penny for every deserted science blog I’ve seen I’d be wondering why some deranged British tourist stuffed their coins into my pockets. What’s so difficult about writing a blog, I hear you asking now. You’re asking the wrong person, said the flying penguin, but what blogger would I be if I only had opinions on things I know something about? So here’s my 5 cents (about 4.27 pennies).

As everybody in quantum gravity knows, first there’s the problem of time. So here’s

    Advice #1: Don’t start blogging if you don’t have the time.

Do you really want to invest the time you could be teaching your daughter basketball? Do you really think it’s more important than rewriting that grant proposal for the twentieth time? If you had the time to write a blog wouldn’t you rather use it to learn Chinese, train for a marathon, or become an expert in power napping? If you answered yes to any of these questions, thank you and good bye. Also, give me my money back. If you answered yes to all of these questions, I suggest you touch base with the local drug scene.

But how much time will it take, is your next question. Depends on your ambition of course, said the penguin and flapped her wings. You should produce at least one post a week if you ever want to get off the ground, which brings me to

    Advice #2: Don’t start blogging if you don’t like writing.

The less you like writing, the longer it will take and the more time becomes an issue. The more time becomes an issue, the more you’ll hate blogging and especially those people who seem to produce blogposts, seemingly effortlessly, 5 times a day, apparently while cooking for a family of twelve and jetting around the globe in a self-made, wooden plane sponsored by their three million subscribers.

Are you sure you like writing? No, I didn’t mean you gave it a thumbs up on Facebook. Are you really sure you like the process of converting thought into keyboard clatter? Ok, good start. But just because you like it doesn’t mean it’s easy.

I’ll admit it took me years to realize it, but evidently I have a lot of colleagues who fight with words. Did you notice that this blog has a second contributor? Yes, it does. It’s just that the frequency of my posts is a factor 300 or so higher than his. He can be forgiven for making himself scarce because he’s got a full-time job and two kids and a wife who blogs rather than doing the laundry. But mostly the problem is that he’s fighting with words.

Words – Once upon a time I went to a Tai Chi class. The first class was also the last because I realized quickly that my back problem wasn’t up to the task of throwing people around. I used the opportunity though to punch the trainer straight into the solar plexus a second before he had finished his encouragement to do so. I hope he learned not to use more words than necessary. But I also took away a lesson, one that’s been useful for my writing: Don’t take hits frontally, deflect them and use their momentum. So here’s my

    Advice #3: Don’t be afraid of words.

Words aren’t your enemies. If they come at you, use their momentum and go with it. That’s easier said than done, I know, especially if you’re a scientist and have been trained to be precise and accurate and to decorate every sentence with 20 references and footnotes. But don’t think you actually have to be a good writer. Because most likely your readers aren’t good readers either, which is only fair. If you can really write well, you shouldn’t blog, you should… you should… write my damned grant proposal. What I mean is if you try to blog like you write research articles, you’ll almost certainly turn out to be a flying penguin, so don’t overthink it.

However, nobody is born flying, so here’s

    Advice #4: Be patient.

It takes time until you’re integrated into the blogosphere. You can help your integration by using social networks to make yourself, your expertise, and your blog known. Unless you are already well known in your field, it will probably take at least a year, more likely several years, till readership catches on. Until then, make contacts, make friends, learn from others, have fun. Above everything, don’t call a blogpost a blog, that’s like mistaking the weather for the climate.

If you still think you want to write a blog, then go ahead. I honestly don’t think it takes more than that: Time, a good relationship with the written word, and patience. The main reason I’m still blogging is that I like writing and verbal Tai Chi doesn’t take me a lot of effort. It arguably also helps that since 2006 I’ve been employed at pure research institutes and don’t have teaching duties, see advice #1.

Then let me address some worries. This might be more an issue for the, eh, more senior people, but it should be said:

    Advice #5: Don't be afraid of the technology.

As with everything in life, you can make it arbitrarily complicated if you want, but as long as you have an IQ above 70 you'll find some way to blog. It really is not difficult. Another worry that newcomers seem to have is that they’ll run out of ideas, so let me assure you

    Advice #6: Don’t worry that you’ll run out of things to say.

Topics will come flying at you faster than you can get out of the way. There’s always somebody who’s said something about something that you also want to say something about. There’s always some science writer who got it so totally wrong. There’s always somebody’s seminar that was interesting and somebody’s paper that you just read. And if all of that fails, there’s always somebody who has thrown sexist comments around, ten things you wish you had known when you were twenty, and down at the very bottom of the list there’s blogging advice. So don’t worry, just take notes when you come across something interesting or have an idea for a blogpost. I pin post-its to my desk.

Yes, in principle you can fill your blog with something other than words. This might work if you have a lot of visual content, pictures, videos, infographics, applets, etc. Alas, the way things have developed, the primarily visual stuff has migrated to other platforms and blogs are today the format primarily used for verbal content. And since the spread of Twitter, Facebook and Google+, sharing links with brief comments has also left the blogosphere. Blogging started out mostly being about writing, and it has boomeranged back to that.

Having said that however, blogging of course isn’t only about writing, it’s also about reading. So here’s my

    Advice #7: Care about your readers.

They’ll give you feedback as to whether you’re expressing yourself clearly. If the comments don’t have any relation to the content of your posts, you’re not expressing yourself clearly enough. If insults pile up in your comment section, you’re expressing yourself too clearly. If you’re not getting any comments, see advice #4. However, please

    Advice #8: Don’t be afraid of your readers.

If everybody liked what you write, somebody would hate it just because everybody likes it, so it’s futile. If I’ve learned one thing from blogging, it’s that misunderstandings are unavoidable. They’re part of the process and that’s a two-way process. Just don’t take hits frontally, use their momentum. That misunderstanding really makes a good topic for your next blogpost, no?

You’ll have noticed that I didn’t say anything about content. That’s because the content is up to you. It really doesn’t matter all that much what you write because blog readers are self-selecting. The ones who’ll stay are the ones who like what you write. If it matters to you to attract a sizeable audience then you should spend some time thinking about content, but I’m not the right penguin to give advice on that. I basically just write what comes to my mind, minus some self-censorship for the sake of my readers’ sanity. You don’t really want to know how I lost my virginity, do you?

So should you write a science blog?

You and I both might think you should blog, but that’s wishful thinking. Be honest and ask yourself if you really want to write a blog. Without motivation it’ll be painful both for you and your readers. I wouldn’t want to eat in a restaurant where the cook hates cooking and I wouldn’t want to read a blog where the writer hates writing. If you’re not sure though, I want to encourage you to give it a try because writing might just change your life.

For me the blogging has been very useful, especially because it has taught me to quickly extract the main points of other people’s work and to coherently summarize them, which in turn has made it much easier for me to recall this information later. I have also over the years made many friends through this blog, some of whom I have met in person and whose friendship I value very much. I see a lot of cynicism these days about the emptiness of social networking. But I appreciate social media for making it so much easier to stay in touch with people I know who are scattered all over the planet.

Homework assignment: Open the book closest to you on a random page and take the first noun that you see. Imagine it’s a chapter title in your autobiography. Write that chapter.

Wednesday, August 28, 2013

Can we test quantum gravity with gravitational bremsstrahlung?

When A falls into the black hole, B gets a thermally distributed headache.
If Blogger had space for a subtitle it would be “A paper I can’t make up my mind about”. A few months ago a paper appeared on the arxiv that proposed to test quantum gravitational effects with neutrino oscillations.

    Quantum Gravity effect on neutrino oscillations in a strong gravitational field
    Jonathan Miller, Roman Pasechnik
    arXiv:1305.4430 [hep-ph]
Models in quantum gravity phenomenology span a spectrum that reaches from conservative but boring to interesting but flaky. This craziness factor is of course somewhat subjective, but the paper in question at first sight seemed to fall somewhere in the middle. In a nutshell, the authors are arguing that neutrino oscillation would be affected in the vicinity of black holes by interaction with gravitons and that this may cause a potentially observable phase distortion. For this they made the assumption that it’s the neutrino mass eigenstates (not flavor eigenstates) that couple to the graviton, and then they had some rather vague explanation that the type of this coupling would depend on the fundamental theory of gravity and thus could be used as a test.

Since it wasn’t originally really clear to me what assumption they made on top of perturbatively quantized gravity and why, I had a longer exchange with the authors in which they patiently answered my dumb questions. They updated the paper two months later and version two is a remarkable improvement over the first version. Alas, I’m still not sure the effect is real. But neither can I find a reason why it’s not real. Let me explain.

First, forget about the neutrino oscillation. That really isn’t so relevant, it’s just that neutrinos can deliver a particularly clean signal because they interact weakly with other stuff. Second, calling the gravitational field that they are concerned with a “strong” field is somewhat misleading. The term is commonly used to mean fields in the Planckian regime, but the field they talk about is that of a solar mass black hole close to the horizon. That’s strong compared to the field you just sit in, but still far off the Planckian regime. Also forget the stuff about collapse in the abstract, it doesn’t make much sense to me.

But then, note that while it’s often said that gravity is a weak interaction, that’s a sloppy statement. Yes, that little fridge magnet and its electromagnetic interaction can overcome the gravitational pull of the whole planet Earth. But if you slam the door the magnet falls down, meaning the forces are quite comparable. How strong gravity is depends on how much mass you accumulate. In the paper the authors make the point that the cross-section for gravitational bremsstrahlung (that’s exchange of a virtual graviton and emission of a real graviton) is tiny for the masses of elementary particles, all right. But if you put in the mass of a solar mass black hole as one of the interacting ‘particles,’ the cross-section becomes comparable to other cross-sections in the standard model.
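The scaling argument is easy to check on the back of an envelope. Below is my own rough estimate, not the paper’s calculation: the dimensionless gravitational coupling of two masses is α = G·m₁·m₂/(ħc), and the neutrino mass used here (~0.1 eV) is an assumed illustrative value.

```python
# Rough comparison of the dimensionless gravitational coupling
# alpha = G * m1 * m2 / (hbar * c) for elementary particles versus
# the case where one 'particle' is a solar mass black hole.
# Illustration of the scaling argument only, not the paper's numbers.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

def alpha_grav(m1, m2):
    """Dimensionless gravitational coupling of two masses (in kg)."""
    return G * m1 * m2 / (hbar * c)

m_proton = 1.673e-27   # kg
m_nu = 1.8e-37         # assumed ~0.1 eV neutrino mass, in kg
M_sun = 1.989e30       # kg, a solar mass black hole

# Two protons: ~6e-39, hopelessly weak compared to alpha_em ~ 1/137.
print(alpha_grav(m_proton, m_proton))
# Neutrino and solar mass black hole: order 1e9 -- no longer small.
print(alpha_grav(m_nu, M_sun))
```

The second number is why the authors can even discuss observability: with a solar mass on one leg of the vertex, the coupling isn’t weak at all.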

The original calculation of this cross-section goes back to a paper from the 1960s. It uses just perturbatively quantized gravity and is, besides the coupling constants, the indices on the propagators, and the polarization tensors, very similar to the respective QED effect. Having said that, there’s no particular reason the bremsstrahlung should be coherent, or at least I don’t see one. This would then mean that a particle passing by the black hole experiences a phase blurring, essentially because the background field is not in fact classical: the interaction with the gravitational source is mediated by virtual gravitons. Or so the idea goes. They then claim that this effect is large enough to be potentially observable.

Having pulled out the origin of their proposed effect however, the paper suddenly moved to the very conservative end of the spectrum. On that conservative end, you typically find lots of effects that are almost certainly there, but way too small to be observable. If it was possible to find evidence for the quantization of the gravitational background field, evidence that virtual gravitons have been exchanged, this would be amazing.

However, my headache with the paper, which prevails in its revised version, is the following. Treating the black hole as a point particle is almost certainly a bad approximation. In some sense one might say a black hole is as close to a perfect point particle as we’ll ever get. But the distance at which the particle passes by the ‘point’ that is at the center of the black hole is large, much larger than the wavelength of the particle. It takes some time for the particle to pass by the horizon. It shouldn’t exchange one graviton at a fairly high energy (comparable to that of the neutrino in the black hole rest frame) with non-negligible probability; instead it should exchange a lot of very low energy gravitons. This must be so simply because the equivalence principle prevents you from noticing anything on distance scales below the curvature radius. If this passage by the black hole was treated correctly, the effect would almost certainly get smaller. The question is how much smaller. It seems implausible it would vanish completely.

For me this also raises the question whether the cross-section would depend on what you believe is inside the black hole or at its horizon respectively. Eg if you’re a fuzzball fan, the coupling might look quite different than when you believe in baby universes. And let me not even get started on firewalls.

So I’m quite convinced that the effect is actually much smaller than they say, but this only raises the question just how small it is. I’m also not sure whether such an effect, if it exists, would be truly a sign for the quantization of the gravitational field. I mean, to first approximation the graviton exchange just has the effect that the particle moves on a geodesic. If you take into account that the particle itself is quantum and not a point particle, you should also notice some dispersion in a non-homogeneous background. But that’s not a signal for the quantization of the background, just for the quantum nature of the particle. Ie, it is conceivable that even if the effect is real, it’s not a signal for quantum gravity.

Having said that, if the particle acquires a phase blurring as a correction from the quantum nature of the background field, the same effect should exist for charged particles passing by large charged objects at comparable distances. Let me know if you have a useful reference.

The paper is now on the pile with unsettled cases...

Sunday, August 25, 2013

Can we measure scientific success? Should we?

My new paper.
Measures for scientific success have become a hot topic in the community. Many scientists have spoken out in view of the increasingly widespread use of these measures. They largely all agree that the attempt to quantify, even predict, scientific success is undesirable if not flawed. In this blog’s archive, you find me too banging the same drum.

Scientific quality assessment, so the argument goes, can’t be left to software crunching data. An individual’s promise can’t be summarized in a number. Success can’t be predicted on past achievements, look at all the historical counterexamples. Already Einstein said. I’m sure he said something.

I’ve had a change of mind lately. I think science needs measures. Let me explain.

The problem with measures for scientific success has two aspects. One is that measures are used by people outside the community to rank institutions or even individuals for justification and accountability. That’s problematic because it’s questionable whether this leads to smart research investments, but I don’t think it’s the root of the problem.

The aspect that concerns me more, and that I think is the root of all evil, is that any measure for success feeds back into the system and affects the way science is conducted. The measure will be taken on by the researchers themselves. Rather than defining success individually, scientists are then encouraged to work towards an external definition of scientific achievement. They will compare themselves and others on these artificially created scales. So even if a quantifiable marker of scientific output was once an indicator for success, its predictive power will inevitably change as scientists work specifically towards it. What was meant to be a measure instead becomes a goal.

This has already happened in several cases. The most obvious examples are the number of publications or the number of research grants obtained. On the average, both are plausibly correlated with scientific success. And yet a scientist who increases her paper output doesn’t necessarily increase the quality of her research, and employing more people to work on a certain project doesn’t necessarily mean its scientific relevance increases.

A correlation is not a causation. If Einstein didn’t say that, he should have. And another truth that comes courtesy of my grandma is that too much of a good thing can be a bad thing. My daughter reminds me we’re not born with that wisdom. If sunlight falls on my screen and I close the blinds, she’ll declare that mommy is tired. Yesterday she poured a whole bottle of body lotion over herself.

Another example comes from Lee Smolin’s book “The Trouble with Physics”. Smolin argued that the number of single authored papers is a good indicator for a young researcher’s promise. He’s not alone in this belief. Most young researchers are very aware that a single authored paper will put a sparkle on their publication list. But maybe a researcher with many single authored papers is just a bad collaborator.

Simple measures, too simple measures, are being used in the community. And this use affects what researchers strive for, distracting them from their actual task of doing good research.

So, yes, I too dislike attempts to measure scientific success. But if we all agree that it stinks why are we breathing the stink? Why are not only funding agencies and other assessment ‘exercises’ using these measures, but why are scientists themselves using them?

Ask any scientist if they think the number of papers shows a candidate’s promise and they’ll probably say no. Ask if they think publications in high impact journals are indicators for scientific quality and they’ll probably say no. Look at what they do, and the length of the publication list and occurrence of high impact journals on that list is suddenly remarkably predictive of their opinion. And then somebody will ask for the h-index. The very reason that politically savvy researchers tune their score on these scales is that, sadly, it does matter. Analogies to natural selection are not coincidental. Both are examples of complex adaptive systems.
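For readers who haven’t met it: the h-index is the largest number h such that h of a researcher’s papers have at least h citations each. A minimal sketch, with made-up citation counts, shows both how it’s computed and why it’s such a blunt instrument:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Two hypothetical publication lists, both with 46 citations in total:
print(h_index([25, 8, 5, 4, 3, 1]))  # prints 4
print(h_index([45, 1, 0, 0, 0, 0]))  # prints 1
```

Same total citations, wildly different scores: the index rewards a broad spread of moderately cited papers over one influential one, which is exactly the kind of built-in value judgment that becomes a goal once people optimize for it.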

The reason for the widespread use of oversimplified measures is that they’ve become necessary. They stink, all right, but they’re the smallest evil among the options we presently have. They’re the least stinky option.

The world has changed and the scientific community with it. Two decades ago you’d apply for jobs by carrying letters to the post office, grateful for the sponge so you wouldn’t have to lick all those stamps. Today you apply by uploading application documents within seconds all over the globe, and I'm not sure they still sell lickable stamps. This, together with increasing mobility and connectivity, has greatly inflated the number of places researchers apply to. And with that, the number of applications every place gets has skyrocketed.

Simplified measures are being used because it has become impossible to actually do the careful, individual assessment that everybody agrees would be optimal. And that has led me to think that instead of outright rejecting the idea of scientific measures, we have to accept them and improve them and make them useful to our needs, not to those of bean counters.

Scientists, in hiring committees or on some funding agency’s review panel, have needs that presently just aren’t addressed by existing measures. Maybe one would like to know the overlap of a person’s research topics with those represented at a department? How often have they been named in acknowledgements? Do you share common collaborators? What administrative skills does the candidate bring? Is there somebody in my network who knows this person and could give me a firsthand assessment? Do they have experience with conference organization? What’s their h-index relative to the typical h-index in their field? What would you like to know?

You might complain these are not measures for scientific quality and that’s correct. But science is done by humans. These aren’t measures for scientific quality, they’re indicators for how well a candidate might fit an open position and a new environment. And that, in turn, is relevant for both their success and that of the institution.

Today, personal relations are highly relevant for successful applications. That is a criterion being used in the absence of better alternatives. We can improve on that by offering possibilities to quantify, for example, the proximity of research areas. This can provide a fast way to identify interesting candidates one might not have heard of before.

And so I think “Can we measure scientific success?” is the wrong question to ask. We should ask instead what measures serve scientists in their profession. I’m aware there are by now several alt-metrics on offer, but they don’t address the issue, they merely take into account more data sources to measure essentially the same thing.

That concerns the second aspect of the problem, the use of measures within the community. As for the first aspect, the use of measures by accountants who are not scientists themselves: The reason they use certain measures for success or impact is that they believe scientists themselves regard them as useful. Administrators use these measures simply because they exist and because scientists, for lack of better alternatives, draw upon them to justify and account for their success or that of their institution. If you have argued that the value of your institute is in the amount of papers produced or conferences held, in the number of visitors pushed through or distinguished furniture bought, you’ve contributed to that problem. Yes, I’m talking about you. Yes, I know not using these numbers would just make matters worse. That’s my point: They’re a bad option, but still the best available one.

So what to do?

Feedback in complex systems and network dynamics have been studied extensively during the last decade. Dirk Helbing recently had a very readable brief review in Nature (pdf here) and I’ve tried to extract some lessons from it.
  1. No universal measures.
    Nobody has a recipe for scientific success. Picking a single measure bears a great risk of failure. We need a variety so that the pool remains heterogeneous. There is a trend towards standardized measures because people love ordered lists. But we should have a large number of different performance indicators.
  2. Individualizable measures.
    Measures must be individualizable, so that they can take into account local and cultural differences as well as individual opinions and different purposes. You might want to give importance to the number of single authored papers. I might want to give importance to science blogging. You might think patents are of central relevance. I might think a long-term vision is. Maybe your department needs somebody who is skilled in public outreach. Somebody once told me he wouldn’t hire a postdoc who doesn’t like Jazz. One size doesn’t fit all.
  3. Self-organized and network solutions.
    Measures should take into account locations and connections in the various scientific networks, be it social networks, coauthor networks, or networks based on research topics. If you’re not familiar with somebody’s research, can you find somebody you trust to give you a frank assessment? Can I find a link to this person’s research plans?
  4. No measure is ever final.
    Since the use of measures feeds back into the system, they need to be constantly adapted and updated. This should be a design feature and not an afterthought.
Some time between Pythagoras and Feynman, scientists had to realize that it had become impossible to check the accuracy of all experimental and theoretical knowledge that their own work depended upon. Instead they adopted a distributed approach in which scientists rely on the judgment of specialists for topics in which they are not specialists themselves; they rely on the integrity of their colleagues and the shared goal of understanding nature.

If humans lived forever and were infinitely patient, then every scientist could track down and fact-check every detail that their work makes use of. But that’s not our reality. The use of measures to assess scientists and institutions represents a similar change towards a networked solution. Done the right way, I think that measures can make science fairer and more efficient.

Wednesday, August 21, 2013

Physics Outreach Event at Kungsträdgården, Sep 7

Yes, there are Swedish umlauts in the header. Our local readers might be interested to hear that Nordita will take part in the biennial outreach event "Fysik i Kungsan", which is scheduled for Sept 7, 11am to 5pm, at Kungsträdgården, Stockholm. Here are some impressions from two years ago:



If you're in the area, it would be nice to see you there! I'll be the audio stream for a poster on the question "What is Quantum Gravity" so that's your opportunity to ask me everything you ever wanted to ask. I'll be there only part of the day though, because some months ago I signed up for a 10k run that happens to be the same Saturday.

Sunday, August 18, 2013

Researchers and coffee consumption

You might have seen this collection of 40 world maps in your news feed recently. It's interesting and worth a look. When I scrolled down the list, I thought it looked like the number of researchers (per million inhabitants) is correlated with the coffee consumption (in kg per capita). So I pulled down the data and plotted it in Excel, and here we go:

Coffee consumption vs number of researchers. The red dot is Germany.

I passionately hate Excel and I have no idea how to convince it to give me a p-value, but I've seen worse correlations being published. More coffee consumption linked to more research!
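For anyone who also refuses to fight with Excel: the correlation coefficient and the t-statistic behind a p-value are easy to compute by hand. The numbers below are invented for illustration; they are not the UNESCO/WRI data from the plot.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# made-up illustrative numbers, NOT the actual country data
coffee_kg = [2.1, 4.5, 6.8, 8.2, 9.9, 12.0]
researchers = [800, 1500, 2600, 3100, 4200, 4800]

r = pearson_r(coffee_kg, researchers)
n = len(coffee_kg)
# t-statistic for the null hypothesis of zero correlation
t = r * sqrt((n - 2) / (1 - r ** 2))
print(f"r = {r:.3f}, t = {t:.2f}")
```

To turn t into an actual p-value, feed it into the survival function of a t-distribution with n-2 degrees of freedom, eg `scipy.stats.t.sf(t, n - 2)`, doubled for a two-sided test.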

If you want to play with the data, you can download the Excel sheet here. I've left out Singapore from the table because I wasn't sure whether the entry "0" meant there's no data, or nobody in Singapore drinks coffee. I've made a second plot where I left out the 15 main coffee export countries (according to Wikipedia), but visually it doesn't make much of a difference so I'm not showing you the graph. (It's in the Excel sheet.) According to chartsbin.com, the data on researchers per million inhabitants is from the UNESCO Institute for Statistics, and the data on coffee consumption is from the World Resources Institute.

Don't take this too seriously. I'd guess that you'd find a similar correlation for many consumer goods. It has some amusement value though :o)

Thursday, August 15, 2013

You are likely special and your friends probably not normal

Squaring the melon. Image source.
I think of myself as a very average person. I like the music on the radio and enjoy books on bestseller lists. I’m somewhat short but not unusually so, my reaction time is average for my age, and I look as old as I am.

Yes, I thought I was normal. Then I read that the average person is cognitively biased to think they’re special. Now I have a problem. I can either think I’m normal then I’m not, or I can think I’m special then I’m normal. Either way, I’m facing mental inconsistency. That shit bothers me. Is this normal?

Things you think about when stuck in small town traffic that’s suffered cardiac arrest by way of garbage truck blockage.

But, I thought, what are the odds of being normal?

Let’s take any variable with a normal distribution and define somebody as “normal” if they are within, say, 2σ of the mean. You are probably normal, by definition. Now let’s take N uncorrelated variables that are similarly distributed, like income, follicle density, number of friends on facebook, annual coffee consumption, amount of clothes owned, spectral distribution of these clothes, average number of words spoken per minute, time spent sleeping before the age of ten, and so on and so forth.

I’m sure you could list a few hundred such individual characteristics if somebody pointed a gun at your head. The probability that you’re average according to all N characteristics is (0.95)^N. This means that if you look at about 400 different characteristics by which people celebrate their individuality, the probability that anybody is normal is roughly one in a billion.
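The back-of-the-envelope number is a one-liner to check (assuming each trait independently keeps you within 2σ with probability 0.95):

```python
# probability of being "normal" (within 2σ) on every one of N independent traits
p_single = 0.95
for N in (10, 100, 400):
    print(f"N = {N:3d}: P(normal on all traits) = {p_single ** N:.2e}")
# at N = 400 the probability has dropped to about 1.2e-9
```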

This means there’s probably no normal person living on Earth today. In other words, it’s normal to be special.

That’s why the teenager from across the street’s got a million followers on YouTube, our downstairs neighbor wears shoes in two different sizes, and my colleague has meaningful conversations with moths. That’s why my older daughter is obsessed with boogers, that seventy year old just finished a marathon in 3 hours, the blonde woman is an undercover agent in search of pressure cookers, and the garbage truck driver can probably recite Goethe, backwards, in Latin. Which, for all I know, is exactly what he’s been doing instead of driving the damned truck.

Let’s not miss an educational opportunity here and mention that’s also why, if you analyze a dataset according to sufficiently many properties you’ll almost certainly eventually find something special about it. Or, if you study correlations between sufficiently many parameters you’ll eventually find a correlation. Being special really is normal.

And I - I have a particle data booklet in the glove box. What are the odds?

Monday, August 12, 2013

Book Review: “Information is Beautiful” by David McCandless

Information is Beautiful (New Edition)
By David McCandless
Collins (6 Dec 2012)

The more information, the more relevant it becomes to present it in human-digestible form, whence springs the flood of infographics in your news feed. There are good examples and bad examples of data visualization, and McCandless’ graphics are among the cleanest, neatest and best-designed ones that I’ve come across. McCandless describes himself as a “data journalist” and “information designer” and with that fills a niche in the economic ecosystem that isn’t presently populated by many.

The book is a print version of examples from his website. It’s not the kind of book you read front to back, but one that you browse through for the sake of curiosity, for distraction, or in search of a conversation topic. It does this job quite well; it also looks good, feels nice and is interesting. Some of the graphics in the book are, however, quite useless or seem to be based on data, or interpretations of data, that I find questionable. This is to say, the emphasis of these graphics is on design, not on science.

I got this book as a gift and spent a cozy afternoon with it on the couch, something I haven’t yet managed to achieve with digital media. (Not to mention that I’d rather have the kids wreck a book than a screen, should I fall asleep over it.) I’m more interested in the science of information than the design of information, and from the scientific side the graphics leave me wanting. But they’re an interesting reflection on contemporary thought and I’d say the book is well worth the price.

Monday, August 05, 2013

Are physicists hot or not?

It has become trendy to study scientists. Two weeks ago, a group of network researchers published a paper in “Scientific Reports” that aims to analyze to what extent scientists pay attention to what is trendy.


The title is however misleading for several reasons.

The most obvious reason is that the analysis presented in the paper was performed exclusively on papers published in the Physical Review journals (in the years 1976-2009), meaning the word ‘scientists’ would better be replaced with ‘physicists’. Even that would be misleading though, because it’s questionable that papers published in the Physical Review are representative of the whole of physics. Physical Review is a high quality journal and it tends to be conservative. If your research is speculative or on a highly specialized topic then it might not be your journal of choice, or so a friendly editor will write before marking your manuscript as “no longer under consideration.” Besides this, the sample also includes the “rapid communication” Physical Review Letters with the declared policy that topics have to be “of broad interest” -- clearly not representative of physics at large, if you excuse the sarcasm.

But to understand what the authors mean by “hot”, let us look at what they have done. They quantify the physicists’ ‘tracing’ of hotness by asking whether the probability that a new paper falls on some topic depends on the number of papers already published on that topic. Topics are identified by the PACS number of the paper (a paper can thus belong to several fields). If new papers are not evenly distributed over existing topics, but those topics with many publications already are more likely to attract new ones than random chance would suggest, this is known as preferential attachment. It’s more commonly known as the “rich get richer effect” and can be quantified by fitting a power-law to the distribution.
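Preferential attachment is easy to simulate. The following toy model is my own much-simplified sketch with invented parameters, not the analysis of the paper: each new paper picks a topic with probability proportional to the number of papers the topic already has, and the outcome is compared to uniform random choice.

```python
import random

def simulate(num_topics=50, num_papers=20000, alpha=1.0, seed=1):
    """Toy model: each new paper picks a topic with probability
    proportional to (papers already on that topic)**alpha.
    alpha=1 is preferential attachment, alpha=0 is uniform choice."""
    rng = random.Random(seed)
    counts = [1] * num_topics  # seed every topic with one paper
    for _ in range(num_papers):
        weights = [c ** alpha for c in counts]
        pick = rng.choices(range(num_topics), weights=weights)[0]
        counts[pick] += 1
    return counts

pref = simulate(alpha=1.0)  # rich get richer
flat = simulate(alpha=0.0)  # every topic equally likely
print(max(pref), max(flat))
```

With alpha = 1 a few topics run away with a disproportionate share of the papers; fitting a power law to the resulting distribution of topic sizes is how the scaling in such studies is quantified.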

The authors find that the physics papers in their sample do show preferential attachment, ie who has will be given. The effect is not as pronounced as for some social networks (eg Flickr) where similar studies have been done, but it clearly exists. They have further looked at the scaling in subsamples broken down by the country of origin of the first author and done the same analysis separately for a selection of four countries: Japan, China, Germany and the USA. They find that the preferential attachment is the strongest for China, followed by Japan, Germany, USA. Yes, that’s right. According to this study, Americans are less likely to follow “hot” topics than Germans.

In the introduction of the paper the authors remark “It is believed among many scientists that there are many more Chinese scientists that are followers than original thinkers compared with many other countries.” I find this an interesting statement for a scientific paper, seeing that it’s little more than spelling out a perceived stereotype. Though they may be forgiven their bleak view of Chinese scientists since, for all I can tell, the authors are all Chinese, or are at least working in China. They interpret the results of their study as confirming this stereotype.

It should be mentioned that the sample which the authors analyzed also contains comments, replies and errata that I’d have thought should be mostly evenly distributed over topics. I would guess that if these were taken out of the sample, the overall effect would increase somewhat.

But is this preferential attachment a sign that physicists follow “hot” topics?

What this analysis actually shows is that Physical Review preferably publishes papers on topics that already have a literature base. I wouldn’t call that tracing of “hotness”, I’d call it conservative. If you wanted to quantify how eager physicists are to jump on ‘hot’ topics, you’d have to measure how likely new papers are to be in rapidly growing fields, as opposed to fields with many publications already. And to add my own perceived stereotype, I’d be very surprised if you’d find the Germans jump faster than the Americans.

In summary, this study isn’t uninteresting but the interpretation of the data is highly misleading.

Friday, August 02, 2013

Video about Loop Quantum Cosmology

The below YouTube video on Loop Quantum Cosmology was brought to my attention.


I'm not sure who this video is aimed at. It seems to me you'll only understand what they are talking about when you already know what they are talking about, in which case you're not learning anything new, except possibly that Abhay Ashtekar's family moved a lot. The visuals are pretty much useless, the audio is on occasion extremely bad, and who is that person with the flipboard? They somehow fail to mention that Loop Quantum Gravity isn't the same as Loop Quantum Cosmology, so there's a glaring gap in the narrative. I don't know what the CMB anomalies mentioned at the end have to do with anything. And why are people still discussing black holes at the LHC? That topic is as dead as dead can be, not to mention that Loop Quantum Gravity didn't have much, if anything, to do with it anyway. At around 32 minutes they start talking about phenomenology, and it gets more interesting then. Note how very carefully they avoid making predictions...

That having been said, I'm not at all sure you want to spend 45 minutes on this. But please ignore my opinion and make up your own. I'm just in a foul mood because I couldn't find my favorite socks this morning.

Wednesday, July 31, 2013

Book review: “The Blank Slate” by Steven Pinker

The Blank Slate: The Modern Denial of Human Nature
By Steven Pinker
Penguin Books; Reprint edition (August 26, 2003)

“The Blank Slate” is an ode to human nature and scientific thought in public debate. Pinker masterfully summarizes what scientists have learned about the genetic and environmental influence on human behavior, both as individuals and as members of groups, and what it means for the organization of our lives. In a nutshell he argues that humans are not born “blank slates” and can’t be made to fit any utopian society or be trained any behavior. Denying human nature isn’t only futile, it makes us miserable and, needless to say, it’s unscientific.

The book is an interesting and well-written review of research on the nature-nurture debate. But for me its real value is that Pinker puts the scientific knowledge into the historical and social context and relates it to “hot buttons” in public debate, such as gender equality, war, or parenting advice.

I’ve been asked at least half a dozen times if I read “The Blank Slate” and now I know why. The arguments Pinker makes are arguments I’ve made myself many times, if less elegantly and not backed up with references as thoroughly. You’ll find there my ambivalent attitude to feminism, my dislike of postmodernism, my hesitation to take on pretty much any parenting advice, my conflicted relation to capitalism, and my refusal to accept morals or social norms as rules. I’ve had a hard time finding something to disagree with. I was almost relieved to find that Pinker believes in free will, just because I don’t.

So for the most part, Pinker’s arguments haven’t been new to me. That might partly be because the book is ten years old now and its content was conceivably reflected in other literature I read after it appeared. But Pinker connected for me many dots whose relation wasn’t previously clear to me. Much of the public discussion about the “hot buttons” he refers to took place in the United States, though these topics arguably surface also in Europe, if not with quite as much noise. Before I read the book I never really understood why these issues were controversial to begin with and why people are evidently unable to have a reasoned discussion about them. The world makes a lot more sense to me after having read the book, because I now understand better the historical and social origin of these tensions.

It took me a long time to get through the book. One reason is simply that the paperback version is 430 pages in small font, I’m short-sighted and it’s just many tiny words. Another reason is that Pinker’s arguments become somewhat repetitive after the first 100 pages or so. Yes, yes, I got it, I wanted to say, can we please move on because I got another ten books waiting. This repetition of the main theme is however greatly alleviated by Pinker’s fluid and witty writing. You really don’t want to miss a page.

In summary, the book is a must-read, it’s a classic already. If you read one non-fiction book from the last decade, that’s the one.

Monday, July 29, 2013

Starspotting

Stars have no privacy. Not only does NASA unashamedly collect extensive records of stars’ data, they’re not even denying they do. NASA’s surveillance program has the sole intent to stare at stars, and to stare intensely, just for the sake of collecting knowledge.

NASA’s Kepler satellite has been looking for more than three years at a small patch of the Milky Way that hosts an estimated 145,000 stars similar to our own sun. The data that Kepler gathered, and still gathers, is analyzed for transits of planets that temporarily block part of the star’s surface and diminish its emission. The precision with which this detection can meanwhile be done is simply amazing. The Kepler mission has so far found more than 2000 planet candidates that are now subject to closer investigation.

Ray Jayawardhana gave a great lecture on exoplanets at Nordita’s recent workshop for science writers, slides are here. The progress in the field in the last decades can’t be called anything but stellar.

To see just how much progress has been made, look at page 13 of Ray's second lecture. You see there a time-series of measurements of the flux from some star observed with Kepler. You clearly see the dips when the planet covers part of the surface, a decrease that isn’t more than a tenth of a percent.

Image: Lisa Esteves.
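The size of such a dip follows from simple geometry: the planet blocks a fraction of the stellar disk equal to the squared ratio of the radii. A minimal sketch with a hypothetical planet (the numbers are mine, not from Ray's slides):

```python
# transit depth = fraction of the stellar disk blocked = (R_planet / R_star)^2
R_star = 1.0      # stellar radius (units cancel in the ratio)
R_planet = 0.032  # hypothetical planet with ~3% of the star's radius
depth = (R_planet / R_star) ** 2
print(f"transit depth = {depth:.2%}")  # ≈ 0.10%, the order of magnitude quoted above
```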


A decade ago that would have been an amazing observation all by itself. Now look at the (red marked) data taken between the transits. If the planet doesn’t cover part of the star’s surface it will reflect light that is in principle also observable. This reflection should be largest when the planet is just about to vanish behind the star, and then dip. That means there should be a fine-structure in the flux between the transits, at about two orders of magnitude smaller still than the already small transit signal. And in fact, the data and data analysis are meanwhile so good that even the vanishing of the planet behind the star can be measured, as you can see on page 14 of Ray's slides.

Image: Lisa Esteves.



I went away from Ray’s lecture thinking I should read his book, but was also left wondering whether the close monitoring of the stars shouldn’t pick up starspot activity, and what we know about the solar-like activity cycles of stars other than our own.

So I looked for literature and found two good reviews on starspots: a Living Review by Svetlana Berdyugina and an Astronomy and Astrophysics Review by Klaus Strassmeier. In comparison to exoplanets, starspots seem to be a fairly small research area. For those of you who don’t want to dig through 90+ pages, and for my own record, here are the most interesting facts that I learned from my reading:
  • Prior to the Kepler mission, about 500 spotted stars were known and had been analyzed. (I expect that this number will go up dramatically when the Kepler data is taken into account.)
  • That starspots exist was first proposed in 1667 by the French astronomer Ismaël Boulliau.
  • The first starspots were recorded in the 1940s, but not recognized as such. In the late 60s and early 70s, several research groups independently proposed star spots as an explanation for certain types of light variability of stars that had been observed.
  • Some white dwarfs show spectral variations on a timescale of hours and days that are believed to be caused by starspots. Similar structures are probably present on the surfaces of neutron stars as well.
  • Monitoring starspots makes it possible to extract the rotation period of the star. According to presently existing models of stellar evolution, the rotation of stars changes with mass and age. Starspots can thus provide relevant data to find out which of these models is correct and thereby teach us how stars like our own evolve.
  • Doppler imaging of the star’s emission spectrum allows, in principle, reconstruction of the latitude of the spot, though in practice the reconstruction can be difficult.
  • There are two different types of spots: the spots at low to mid latitude that we know from the sun, and polar spots that cover a pole of the star. The polar spots are thought to be caused by magnetic fields produced by a distinctly different mechanism than the other spots.
  • The polar spots can be HUGE. Look at this amazing image that is a reconstruction from Doppler imaging. This polar spot is about 10,000 times larger than typical spots on our sun:

    Image: Strassmeier, Astron. Astrophys.,347, 225–234 (1999)
    In: Berdyugina, "Starspots: A Key to the Stellar Dynamo"


  • Out of 65 stars whose surface emission was analyzed with the Doppler technique, 36 had polar spots.
  • The polar spots can be very long lived and survive up to a decade.
  • There seems to exist no strong correlation between temperature and size of the star spots.
  • Most spotted stars have a cycle similar to that of the sun with a period in the range 3-21 years. Some stars probably have longer cycles, but existing observations don’t yet allow extracting it. But about a third of the observed stars seem to have no cycle or have one with large variations between the periods.
  • The lifetime of a starspot is probably not determined by the decay of its magnetic field but by the surface shear due to differential rotation.
  • Spots on tidally locked binary systems live longer (on the average several months) than spots on single main-sequence stars (on the average several weeks).
  • Solar-type stars show the most starspot activity when the surface temperature is in the range 4900 - 6400 K.
  • Some starspots spin. This has not (yet) been observed on our sun’s spots.
There’s much that astrophysicists can learn about our own sun from other stars like our own. It seems to me this scientific invasion of solar privacy deserves to be called intelligence service :o)

Tuesday, July 23, 2013

How stable is the photon? Yes, the photon.

Light. Source: Povray tutorial.
I never really got into the kitten-mania that has befallen the internet. But I do think there’s such a thing as a cute paper, and here is a particularly nice example:
The photon is normally assumed to be massless. While a photon mass breaks gauge invariance and seems unappealing from a theoretical perspective, in the end it’s an experimental question whether photons have a mass. And while the mass of the photon is tightly constrained by experiment, to below about 10^-18 eV, the mere possibility that it may be non-zero brings up another very basic question. If the photon has a mass it can decay into other particles, for example a pair of the lightest neutrino and its anti-partner, or other so-far undiscovered particles beyond the standard model. But if decay is possible, what are the bounds on the life-time of the photon? That’s the question Julian Heeck set out to address in his paper.

If the photon is unstable and decays into other particles, then the number density of photons in the cosmic microwave background (CMB) should decrease while the photons are propagating. But then, the energy density of the spectrum would no longer fit the almost perfectly thermal Planck curve that we observe. One can thus use the CMB measurements to constrain the photon lifetime.

If one uses the largest photon mass presently consistent with experiment, the photon lifetime is 3 years in the rest frame of the photon. If one calculates the γ-factor (ie, the time dilation) to obtain the lifetime of light in the visible spectrum, it turns out to be at least 10^18 years.
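That factor is quickly estimated: for an ultra-relativistic particle the time dilation factor is γ ≈ E/m. A back-of-the-envelope sketch, assuming the photon mass sits at its experimental upper bound and taking a typical visible-light photon of ~2 eV:

```python
# order-of-magnitude estimate of the lab-frame lifetime of a visible photon
m_photon_eV = 1e-18   # experimental upper bound on the photon mass
E_visible_eV = 2.0    # typical energy of a visible-light photon (assumed)
tau_rest_yr = 3.0     # rest-frame lifetime bound quoted above

gamma = E_visible_eV / m_photon_eV  # time dilation factor E/m
tau_lab_yr = gamma * tau_rest_yr
print(f"gamma = {gamma:.1e}, lab-frame lifetime = {tau_lab_yr:.1e} years")
```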

It’s rare to find such a straight-forward and readable paper addressing an interesting question in particle physics. “Cute” was really the only word that came to my mind.

Friday, July 19, 2013

You probably have no free will. But don’t worry about it.

Railroad tracks. Image source.
Anybody who believes in reductionism and that the standard model of particle physics is correct to excellent precision must come to the conclusion that free will is an illusion. Alas, denial of this conclusion is widely spread, documented in many attempts to redefine “free will” so that it can somehow be accommodated in our theories. I find it quite amusing to watch otherwise sensible physicists trying to wriggle out of the consequences of their own theories. We previously discussed Sean Carroll’s attempt, and now Carlo Rovelli has offered his thoughts on free will in the context of modern physics in a recent Edge essay.

Free will can only exist if there are different possible futures and you are able to influence which one becomes reality. This necessitates to begin with that there are different possible futures. In a deterministic theory, like all our classical theories, this just isn’t the case - there’s only one future, period. The realization that classically the future is fully determined by the present goes back at least to Laplace, and it’s still as true today as it was then.

Quantum mechanics in the standard interpretation has an indeterministic element that is a popular hiding place for free will. But quantum mechanical indeterminism is fundamentally random (as opposed to random by lack of knowledge). It doesn’t matter how you define “you” (in the simplest case, think of a subsystem of the universe), “you” won’t be able to influence the future because nothing can. Quantum indeterminism is not influenced by anything, and what kind of decision making is that?

Another popular hiding place for free will is chaos. Yes, many systems in nature are chaotic and possibly the human mind has chaotic elements to it. In chaotic systems, even smallest mistakes in knowledge about the present will lead to large errors in the future. These systems rapidly become unpredictable because in practice measurement always contains small mistakes and uncertainties. But in principle chaos is entirely deterministic. There’s still only one future. It’s just that chaotic behavior spoils predictability in practice.

That brings us to what seems to me like the most common free will mirage, the argument that it is difficult if not impossible to make predictions about human behavior. Free will, in this interpretation, is that nobody, possibly not even you yourself, can tell in advance what you will do. That sounds good but is intellectually deeply unsatisfactory.

To begin with, it isn’t at all clear that it’s impossible to predict human behavior, it’s just presently not possible. Ten thousand years ago people couldn’t predict lunar eclipses; by this logic, the moon must have had free will back then. And who or what makes the prediction anyway? If no human can predict your behavior, but a computer cluster of an advanced alien civilization could, would you have free will? Would it disappear if the computer is switched on?

And be that as it may, these distractions don’t change anything about the fact that “you” don’t have any influence on what happens in the future, whether or not somebody else knows what you’ll do. Your brain is still but a machine acting on input to produce output. The evaluation of different courses of action can plausibly be interpreted as “making a choice,” but there’s no freedom to the result. This internal computation that evaluates the results of actions might indeed be impossible to predict, but the freedom is clearly an illusion.

To add another buzzword, it also doesn’t help to refer to free will as an “emergent property”. Free will almost certainly is an emergent property, unless you want to argue that elementary particles also have free will. But emergent properties of deterministic systems still behave deterministically. In principle, you could do without the “emergent” variables and use the fundamental ones, describing eg the human brain in terms of the standard model. It’s just not very practical. So appealing to emergence doesn’t help, it just adds a layer of confusion.

Rovelli in his essay now offers a new free will argument that is interesting.

First he argues that free will can be exercised when external constraints on choice are absent. He doesn’t explain what he means by “external constraints” though, and I’m left somewhat confused about this. Is, for example, alcohol intoxication a constraint that’s “external” to your decision making unit? Is your DNA an external constraint? Is a past event that induced stress trauma an external constraint? Be that as it may, this part of Rovelli’s argument is a rephrasing of the idea that free will means it isn’t possible to predict what course of action a person will take from exclusively external observation. As we’ve seen above, this still doesn’t mean there are different future options for you to choose from, it just means prediction is difficult.

Then Rovelli alludes to the above mentioned idea that free will is “emergent”, but he does so with a new twist. He argues that “mental states” are macroscopic and can be realized by many different microscopic arrangements. If one just uses the information in the mental states - which is what you might experience as “yourself” - then the outcome of your decisions might not be fully determined. Indeed, that might be so. But it’s like saying if you describe a forest as a lot of trees and disregard the details, then you’ll fail to predict an impending pest infection. Which brings us to the question whether forests have free will. And once again, failure to predict by disregarding part of the degrees of freedom in the initial state doesn’t open any future options.

In summary, according to our best present theories of nature humans don’t have free will in the sense explained above, which in my opinion is the only sensible meaning of the phrase. Now you could dismiss this and claim there must be something about nature then that these theories don’t correctly describe and that’s where free will hides. But that’s like claiming god hides there. It might be possible to construct theories of nature that allow for free will, as I suggested here, but we presently have absolutely zero evidence that this is how nature works. For all we know, there just is no free will.

But don’t worry.

People are afraid of the absence of free will not because it’s an actual threat to well-being, but because it’s a thought alien to our self-perception. Most people experience the process of evaluating possible courses of action not as a computation, but as making a decision. This raises the fear that if they don’t have free will they can no longer make decisions. Of course that’s wrong.

To begin with, if there’s no free will, there has never been free will, and if you’ve had a pleasant and happy life so far there is really no reason why this should change. But besides this, you still make your decisions. In fact, you cannot not make decisions. Do you want to try?

And the often raised concern about moral hazard is plainly a red herring. There’s this idea that if people had no free will “they” could not be made responsible for their decisions. Scare quotes because this suggests there are two different parts of a person, one making a decision and the other one, blameless, not being able to affect that decision. People who commit crimes cause pain to other people, therefore we take actions to prevent and deter crime, for which we identify individuals who behave problematically and devise suitable reactions. But the problem is their behavior and that needs to be addressed regardless of whether “they” have a freedom in their decision.

I believe that, instead of making life miserable, accepting the absence of free will will improve our self-perception and with it mutual understanding and personal well-being. This acceptance lays a basis for curiosity about how the brain operates and what part of decision making is conscious. It raises awareness of the information that we receive and its effect on our thoughts and resulting actions. Accepting the absence of free will doesn’t change how you think, it changes how you think about how you think.

I hope that this made you think and wish you a nice weekend :o)

Monday, July 15, 2013

More mysteries in cosmic rays, and a proposed solution

The most energetic particle collisions that we observe on our planet are created by particles from outer space that hit atomic nuclei in Earth’s upper atmosphere. The initial particle produces a large number of secondary particles which decay or scatter again, creating what is called a cosmic ray shower. The shower rains down on the surface, where it is measured in large arrays of detectors. The challenge for the theoretical physicist is to reconstruct the cosmic ray shower so that it is compatible with all data. In practice this is done with numerical simulations into which enters our knowledge about particle physics from collider experiments.

Cosmic ray shower, artist's impression. Source: ASPERA

One of the detectors is the Pierre Auger Observatory whose recent data has presented some mysteries.

One mystery we already discussed previously. The “penetration depth” of the shower, ie the location where the maximal number of secondary particles is generated, doesn’t match expectation. It doesn’t match when one assumes that the primary particle is a proton, and Shaham and Piran argued that it can’t be matched either by assuming that the primary is some nucleus or a composite of protons and nuclei. The problem is that using heavier nuclei as primaries would change the penetration depth to fit the data, but at the expense that the width of the distribution would no longer fit the data. Back then, I asked the authors of the paper if they could give me a confidence level so I’d know how seriously to take this discrepancy between data and simulation. They never came back to me with a number though.

Now here’s an interesting new paper on the arXiv that adds another mystery: Pierre Auger sees too many muons.


In the paper the authors go through possible explanations for this mismatch between data and our understanding of particle physics. They discuss the influence of several parameters on the shower simulation and eventually identify one that has the potential to influence both the penetration depth and the number of muons. This parameter is the total energy in neutral pions.

Pions are the lightest mesons, that is, particles composed of a quark and an antiquark. They are produced abundantly in highly energetic particle collisions. Neutral pions have a very short lifetime and decay almost immediately into photons, which means essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. Reducing the fraction of energy in neutral pions also changes the rate at which secondary particles are produced, and with it the penetration depth.
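To see why the energy fraction in neutral pions is such an effective knob, one can use the standard Heitler-Matthews back-of-the-envelope picture of a hadronic shower. To be clear: this is not the model of the paper, and the numbers below are made up for illustration. Each interaction produces n_tot pions; the neutral ones decay to photons immediately, while the charged ones re-interact until their energy drops to a critical energy, below which they decay to muons.

```python
import math

def muon_number(E0, xi, n_tot, charged_fraction):
    """Heitler-Matthews toy estimate of the muon number in a shower.

    The shower branches with multiplicity n_tot per interaction; only
    charged pions feed the muon count. They re-interact until their
    energy drops to the critical energy xi, then decay to muons,
    which gives N_mu = (E0/xi)**beta with
    beta = ln(n_charged)/ln(n_tot).
    """
    n_ch = charged_fraction * n_tot
    beta = math.log(n_ch) / math.log(n_tot)
    return (E0 / xi) ** beta

# Isospin symmetry says 1/3 of pions are neutral, so 2/3 are charged:
standard = muon_number(1e10, 1.0, 50, 2 / 3)

# Suppress the neutral pions, say to 10% of the total:
suppressed = muon_number(1e10, 1.0, 50, 0.9)

# Fewer neutral pions -> larger beta -> more muons at the ground.
assert suppressed > standard
```

Because beta sits in the exponent, even a modest shift in the neutral-pion energy fraction changes the muon number substantially, which is exactly why the authors single this parameter out.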

This of course raises the question why the total energy in neutral pions should be smaller than present shower simulations predict. In their paper, the authors suggest that a possible explanation might be chiral symmetry restoration.

The breaking of chiral symmetry accounts for the biggest part of the masses of nucleons. The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to lie close to the confinement transition, that being the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

In their paper, the authors assume that the cosmic ray shower produces a phase with chiral symmetry restoration which suppresses the production of pions relative to baryons. They demonstrate that this can be used to fit the existing data, and it fits well. They also make a prediction that could be used to test this model, which is a correlation between the number of muons and the penetration depth in individual events.

They make it very clear that they have constructed a “toy model” that is quite ad hoc and mainly meant to demonstrate that the energy fraction in neutral pions is a promising parameter to focus on. Their model raises some immediate questions. For example, it isn’t clear to me in which sense a cosmic ray shower produces a “phase” in any meaningful way, and they also don’t discuss to what extent their assumption about the chirally restored phase is compatible with the data we have from heavy-ion physics.

But be that as it may, it seems that they’re onto something and that cosmic rays are about to teach us new lessons about the structure of elementary matter.

Tuesday, July 09, 2013

The unshaven valley

A super model. Simple, beautiful, but not your reality.
If you want to test quantum gravity, the physics of the early universe is promising, very promising. Back then, energy densities were high, curvature was large, and we can expect that effects were relevant which we’d never see in our laboratories. It is thus not so surprising that there exist many descriptions of the early universe within one or the other approach to quantized gravity. The intention is to test compatibility with existing data and, ideally, make new predictions and arrive at new insights.

In a recent paper, Burgess, Cicoli and Quevedo contrasted a number of previously proposed string theory models for inflation with the new Planck data (arXiv:1306.3512 [hep-th]). They conclude that by and large most of these models are still compatible with the data because our observations seem to be fairly generic. In the trash bin goes everything that predicted large non-Gaussianities, and the jury is still out on the primordial tensor modes, because Planck hasn’t yet published the data. It’s the confrontation of models with observation that we’ve all been waiting for.

The Burgess et al paper is very readable if you are interested in string inflation models. It is valuable for pointing out difficulties with some of these approaches, which gives the reader a somewhat broader perspective than just data fitting. Interesting for a completely different reason is the introduction of the paper with a subsection “Why consider such complicated models?” that is a forward defense against Occam’s razor. I want to spend some words on this.

Occam’s razor is the idea that from among several hypotheses with the same explanatory power the simplest one is the best, or at least the one that scientists should continue with. This sounds reasonable until you ask for definitions of the words “simple” and “explanatory power”.

“Simple” isn’t simple to define. In the hard sciences one may try to replace it with small computational complexity, but that neglects that scientists aren’t computers. What we regard as “simple” often depends on our education and familiarity with mathematical concepts. Eg you might find Maxwell’s equations much simpler when written with differential forms, if you know how to deal with stars and wedges, but that’s really just cosmetics. Perceived simplicity also depends on what we find elegant, which is inevitably subjective. Most scientists tend to find whatever it is that they are working on simple and elegant.
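For concreteness, here is that comparison, in SI units; conventions for signs and for which factors get absorbed into the current vary:

```latex
% Maxwell's equations in vector-calculus notation (SI units):
\[
  \nabla \cdot \vec{E} = \rho/\epsilon_0 , \qquad
  \nabla \times \vec{B} - \mu_0 \epsilon_0 \, \partial_t \vec{E} = \mu_0 \vec{J} ,
\]
\[
  \nabla \cdot \vec{B} = 0 , \qquad
  \nabla \times \vec{E} + \partial_t \vec{B} = 0 .
\]
% The same content with differential forms, F the field-strength two-form:
\[
  \mathrm{d}F = 0 , \qquad \mathrm{d}{\star}F = {\star}J .
\]
```

The two forms carry exactly the same physical content; what changes is only which piece of mathematical machinery carries the indices.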

Replacing “simple” with the number of assumptions in most cases doesn’t help remove the ambiguity, because it just raises the question of what counts as a necessary assumption. Think of quantum mechanics. Do you really want to count all assumptions about convergence properties of hermitian operators on Hilbert spaces and so on that no physicist ever bothers with?

There’s one situation in which “simpler” seems to have an unambiguous meaning, which is if there are assumptions that are just entirely superfluous. This seems to be the case that Burgess et al are defending against, which brings us to the issue of explanatory power.

Explanatory power raises the question of what should be explained with that power. It’s one thing to come up with a model that describes existing data. It’s another thing entirely whether that model is satisfactory, again an inevitably subjective notion.

ΛCDM for example fits the available data just fine. For the theoretician however it’s a highly unsatisfactory model because we don’t have a microscopic explanation for what is dark matter and dark energy. Dark energy in particular comes with the well-known puzzles of why it’s small, non-zero, and became relevant just recently in the history of the universe. So if you want to shave model space, should you discard all models that make additional assumptions about dark matter and dark energy because a generic ΛCDM will do for fitting the data? Of course you shouldn’t. You should first ask what the model is supposed to explain. The whole debate about naturalness and elegance in particular hinges on the question of what requires an explanation.

I would argue that models for dark energy and dark matter aim to explain more than the available data and thus should not be compared to ΛCDM in terms of explanatory power. These models that add onto the structure of ΛCDM with “unnecessary” assumptions are studied to make predictions for new data, so that experimentalists know what to look for. If new data comes in, then what requires an explanation can change from one day to the next. What was full of seemingly unnecessary assumptions yesterday might become the simplest model tomorrow. Theory doesn’t have to follow experiment. Sometimes it’s the other way round.

The situation with string inflation models isn’t so different. These models weren’t constructed with the purpose of being the simplest explanation for available data. They were constructed to study and better understand quantum effects in the early universe, and to see whether string theoretical approaches are consistent with observation. The answer is, yes, most of them are, and still are. It is true of course that there are simpler models that describe the data. But that leaves aside the whole motivation for looking for a theory of quantum gravity to begin with.

Now one might try to argue that a successful quantization of gravity should fulfill the requirement of simplicity. To begin with, that’s an unfounded expectation. There really is no reason why more fundamental theories should be simpler in any sense of the word. Yes, many people expect that a “theory of everything” will, for example, provide a neat and “simple” explanation for the masses of particles in the standard model and ideally also for the gauge groups and so on. They expect a theory of everything to make some presently ad-hoc assumptions unnecessary. But really, we don’t know that this has to be the case. Maybe it just isn’t so. Maybe quantum gravity is complicated and requires the introduction of 10^5 new parameters, who knows. After all, we already know that the universe isn’t as simple as it possibly could be just by virtue of existing.

But even if the fundamental theory that we are looking for is simple, this does not mean that phenomenological models on the path to this theory will be of increasing simplicity. In fact we should expect them to be less simple by construction. The whole purpose of phenomenological models is to bridge the gap between what we know and the underlying fundamental theory that we are looking for. On both ends, there’s parsimony. In between, there are approximations, unexplained parameter values, and inelegant ad-hoc assumptions.

Phenomenological models that are not strictly derived from, but normally motivated by, some approach to quantum gravity are developed with the explicit purpose of quantifying effects that have so far not been seen. This means they are not necessary to explain existing data. Their use is to identify promising new observables to look for, like eg tensor modes or non-Gaussianity.

In other words, even if the fundamental theory is simple, we’ll most likely have to go through a valley of ugly, not-so-simple, unshaven attempts. Applying Occam’s razor would cut short these efforts and greatly hinder scientific progress.

It’s not that Occam’s razor has no use at all, just that one has to be aware it marks a fuzzy line, because scientists don’t normally agree on exactly what requires an explanation. For every model that offers a genuinely new way of thinking about an open question, there follow several hundred small variations of the original idea that add little or no new insight. Needless to say, this isn’t particularly conducive to progress. This bandwagon effect is greatly driven by present publication tactics and largely a social phenomenon. Occam’s razor would be applicable, but of course everybody will argue that their contribution adds large explanatory value, and we might be better off to err on the unshaven side.

If a ball rolls in front of your car, the simplest explanation for your observation, the one with the minimal set of assumptions, is that there’s a ball rolling. From your observation of it rolling you can make a fairly accurate prediction where it’s going. But you’ll probably brake even if you are sure you’ll miss the ball. That’s because you construct a model for where the ball came from and anticipate new data. The situation isn’t so different for string inflation models. True, you don’t need them to explain the ball rolling; the Planck data can be fitted by simpler models. But they are possible answers to the question where the ball came from and what else we should watch out for.

In summary: Occam’s razor isn’t always helpful to scientific progress. To find a fundamentally simple theory, we might have to pass through stages of inelegant models that point us in the right direction.

Friday, July 05, 2013

Video: The origin of galactic magnetic fields, Oliver Gressel

The third of our videos about researchers at Nordita! Here you meet Oliver Gressel from our astrophysics group. Oliver works on numerical simulations of turbulence and magnetic fields in plasma. We previously discussed how this improves our understanding of sunspots and the solar dynamo. With the same tools, Oliver studies a different plasma, the interstellar plasma that permeates our galaxies. In the video, he explains what we know and don't know about galactic magnetic fields and what we can learn from the numerical simulations.



This video appeared in the recent issue of the Nordita Newsletter, the full version of which you can read here. (The "Feature" might look familiar to readers of this blog...)

I am also excited to let you know that we'll be producing four more videos later this year. One of them will probably be about the 'subatomic' group; appearance of black holes is entirely possible :p

If you have any news items from the Nordic countries that are relevant to research in physics or related fields (eg mathematical physics, biophysics) and that you would like to see included in the next issue of the newsletter (due early October), just send me an email to hossi at nordita dot org.

Wednesday, July 03, 2013

Interna

Last month I gave a seminar in Bielefeld on models with a minimal length scale. This seminar was part of a series organized within the framework of the new research training group "Models of Quantum Gravity". The initiative is funded by the German Science Foundation (DFG), and several universities in Northern Germany take part in it. I find this a very interesting development. The Germans seem to finally have recognized the need to support research in quantum gravity generally, rather than singling out specific approaches, and this initiative looks promising to me. Let's hope it is fruitful.

My trip to Bielefeld was interesting also in another aspect. When I was about to get on the way back to Heidelberg, the car wouldn't start. After some cursing and fruitless attempts to decode the erratic blinking of the panel lights, I called the closest Renault dealer. (Actually, I first called my husband to yell at him, just because that was the first thing that came to my mind.) The Renault guy said, Guten Tag and tough luck, we'll have to tow the car, but it's five to five now so please call back tomorrow morning.

So I unexpectedly had to spend the night out of town, which I took as an excuse to buy really expensive underwear. They towed the car the next morning, figured out that the battery had died in a short-circuit that blew up some wiring, and I made it back home with 24 hours delay. The irony in this was that I had taken Stefan's car because I was afraid mine would break down and I'd get stranded in Bielefeld.

Tomorrow I'm giving a seminar in Aachen and I hope that this time the car won't break down... Later this month I will try to listen in at a black hole conference in Frankfurt. Unfortunately, this happens to be during the week when our daycare place has summer break, so the logistics are nontrivial. In September I'll be in Helsinki for another seminar. In October I'm at a conference in Vienna. In November I'm attending a workshop in the UK, which for all I can tell doesn't have a webpage, and I'm not entirely sure what it is about either.

There's been some discussion in the blogosphere lately about the difficulty of combining the necessary travel to seminars and conferences with family demands. And yeah, what do you expect, it's not easy and it's not fun.

Sometimes when I'm writing these Internas about work-family issues, I feel like a case study in the making.

The girls are doing fine and have adjusted well to the new daycare place. So far, we're very happy with it. It's a nice and fairly large place with a playground and much space to run around. They're very well organized and it's not exceedingly costly either.

Some weeks ago the kids were ill, and I called in at the daycare place to say we're not coming. When somebody picked up the phone and I heard a male voice, my first thought was that I must have dialed a wrong number. Needless to say, I then felt bad for my own stereotyping, and that I was apparently surprised the childcare business is not exclusively run by women. If you Google for the job description "Kindererzieher" in German, auto-complete gives you as first hit the female ending of the word.

To be fair to myself though, the guy hadn't been there previously. He was only there as a temporary replacement, and normally a woman called Stephanie would answer the phone. In any case, I later had an interesting conversation with him about gender imbalance in education. His explanation for why there are so few men in his profession was simply that it's badly paid. "You can't feed a family from this." I'm not sure that really explains much though.

Lara and Gloria's vocabulary has grown exponentially in the last month. No day passes without them trying out new words. At this point we actually have to be careful what we tell them, because they'll go around and tell everybody who'll listen that the mommycar is broken, and will shamelessly repeat my complaints that the neighbors don't separate their garbage. They have meanwhile pretty much taken over the whole apartment. There doesn't seem to be any place that's not occupied with toys or other child paraphernalia. And I, I spend a considerable amount of time collecting building blocks and lego pieces, a genuinely sisyphean activity.

In summary, life is busy.


Bedtime!

Tuesday, July 02, 2013

“String theorists have to sit in the back.”

In March, Lawrence Krauss took part in a discussion hosted by an Islamic group at University College London. He evidently did not expect what he saw upon arrival, that all women were sitting in the back and (according to this newspaper report) men were prevented from sitting among the women. This YouTube clip captures Krauss’ refusal to be part of such an event, and his request that the segregation be ended. His demands were eventually met.

There is some disagreement in the newspapers on whether or not the gender segregation was voluntary. Be that as it may, I can understand Krauss’ reaction and would probably have done the same.

The reason I’m telling you this is not that I’ve suddenly become an activist for women’s rights in Islam, but that a month later Krauss gave a public lecture in Stockholm (I was not there). He was introduced by a guy called Christer Sturmark, and I recommend you listen to this yourself, at 1:20 – 2:10 min:


“I now realize that maybe I should have warned Professor Krauss that our audience here is also segregated. String theorists have to sit in the back. I hope that’s okay.”
So there’s this guy, Sturmark, who tells us on his website that he’s editor of a journal for cultural and intellectual debates and who, on Wikipedia, is described as “prominent debater on religion and humanism in Swedish media.” This “intellectual” evidently thinks it’s funny to pretend that string theorists have to sit in the back of the room, like suppressed and disadvantaged women in certain religious groups. To make matters worse, he is clearly reading off his introduction, so it’s not like this was the kind of spontaneous joke that came out wrong. It was a deliberately made comparison. It was probably made because he thought it would amuse the audience. And though I can’t say that they were exactly rolling on the floor, you can hear some laughter in the recording.

Sturmark doesn’t seem to have an education in physics (according to Wikipedia he has a BA in computer science), so it appears fair to say that he probably doesn’t know what he’s talking about. And evidently he thought it okay to make jokes about string theory without knowing what he’s talking about. Because everybody does it, right? Imagine he’d have said “Material scientists have to sit in the back. I hope that’s okay.” Haha. Wait. WTF?

My problem isn’t so much with Sturmark himself – the world is full of guys who think they’re oh-so-smart and who need a haircut. He’s hardly the first to make fun of string theorists, and he probably won’t be the last. No, my problem is the impression that jokes and condescending remarks about string theorists have become acceptable in general.

This isn’t an “intellectual debate”. This is a sickening way of making brainless jokes about a whole group of scientists. Yes, some of the stuff that they work on will turn out to have no relevance for our understanding of nature. The same can be said about literally all research areas. Yes, some of them seem to have gone off the deep end. But one shouldn’t extrapolate from single points of data.

No, I’m not a string theorist. No, I’ve never even worked on string theory. No, I’m not married to a string theorist either. Or if I am, he’s hiding it well. Yes, I think more attention should be paid to making contact with experiment, that’s why I work on the phenomenology of quantum gravity. I want to know how to describe quantum effects of space and time and, whether you like it or not, string theory was and still is among the best candidates.

I think it’s really bad taste to make fun of scientists just because they are interested in certain research questions. What worries me much more than the bad taste though is that scientists of course take note of public opinion, consciously and unconsciously. Scientists make deliberate efforts to keep discussions and evaluations objective in order to be able to make accurate assessments of the promise of certain research directions. And this striving for objectivity is greatly skewed by public ridicule.

In comparison to the struggles for women’s rights this is a petty issue of course. But it’s about a topic, quantum gravity, I care deeply about, even if the biggest part of the world doesn’t.

Having said that, if you watch the first 15 minutes or so of Krauss’ lecture you’ll note that he makes a whole series of jokes and fails to elicit laughter from the Scandinavians, which is quite amusing in its own right. If you’ve ever given a talk somewhere in Northern Europe, you can probably relate. He didn’t exactly help his situation with his self-deprecating remarks about the USA - Italians might have been laughing their butts off, but Swedes are much too polite for this. Be that as it may, Krauss’ lecture is well structured and well delivered, though I guess that most of you won’t actually learn anything new from it.