Pages

Friday, October 29, 2010

This and That

With the weekend approaching, here are some distractions to kill your remaining working hours till 5 pm:
  • A very nice applet that zooms you through the scales of the universe, all the way down to the Planck length.

  • An interesting recollection by Robert Weisbrot of Edward Witten's way to physics:

    "I am reminded of a friend from the early 1970s, Edward Witten. I liked Ed, but felt sorry for him, too, because, for all his potential, he lacked focus. He had been a history major in college, and a linguistics minor. On graduating, though, he concluded that, as rewarding as these fields had been, he was not really cut out to make a living at them. He decided that what he was really meant to do was study economics. And so, he applied to graduate school, and was accepted at the University of Wisconsin. And, after only a semester, he dropped out of the program. Not for him. So, history was out; linguistics, out; economics, out. What to do? This was a time of widespread political activism, and Ed became an aide to Senator George McGovern, then running for the presidency on an anti-war platform. He also wrote articles for political journals like the Nation and the New Republic. After some months, Ed realized that politics was not for him, because, in his words, it demanded qualities he did not have, foremost among them common sense. All right, then: history, linguistics, economics, politics, were all out as career choices. What to do? Ed suddenly realized that he was really suited to study mathematics. So he applied to graduate school, and was accepted at Princeton. I met him midway through his first year there--just after he had dropped out of the mathematics department. He realized, he said, that what he was really meant to do was study physics; he applied to the physics department, and was accepted.

    I was happy for him. But I lamented all the false starts he had made, and how his career opportunities appeared to be passing him by. Many years later, in 1987, I was reading the New York Times magazine and saw a full-page picture akin to a mug shot, of a thin man with a large head staring out of thick glasses. It was Ed Witten! I was stunned. What was he doing in the Times magazine? Well, he was being profiled as the Einstein of his age, a pioneer of a revolution in physics called "String Theory." Colleagues at Harvard and Princeton, who marvelled at his use of bizarre mathematics to solve physics problems, claimed that his ideas, popularly called a "theory of everything," might at last explain the origins and nature of the cosmos. Ed said modestly of his theories that it was really much easier to solve problems when you analyzed them in at least ten dimensions. Perhaps. Much clearer to me was an observation Ed made that appeared near the end of this article: every one of us has talent; the great challenge in life is finding an outlet to express it. I thought, he has truly earned the right to say that. And I realized that, for all my earlier concerns that he had squandered his time, in fact his entire career path--the ventures in history, linguistics, economics, politics, math, as well as physics--had been rewarding: a time of hard work, self-discovery, and new insight into his potential based on growing experience."


    [Via Michael Nielsen, via Hacker News. Read the full speech here.]

  • You might already have read it on Nature News: astronomers have found the most massive neutron star to date, with about 2 solar masses. When I read this, a bell rang faintly in the dusty back of my head. I've since figured out what was ringing: Smolin's Cosmological Natural Selection predicts an upper mass limit for neutron stars of 1.6 solar masses (see hep-th/0612185, section 3.2).

  • Some months ago I was sent a link to an April Fools' Day paper, funny-haha, physicists' style. That paper has now resurfaced on my desk: Schrödinger's Cat is not Alone. It's a humorous take on the interpretation of quantum mechanics and cat dynamics. Not the sort of humor that deepens my laugh wrinkles, but I thought some of you might find it amusing.

  • Here's something that did give me a good laugh. Real life absurdity:
    Nurses find the weirdest stuff. [Via Bora].

I wish you all a nice weekend!

Saturday, October 23, 2010

Short-term thinking

Science, especially fundamental research, used to be a pastime of the rich. Within the last century, its potential for innovation has been discovered. Today, fundamental research is widely recognized as an investment our societies make into the future. While this societal support and appreciation has opened the stage for everybody to participate, it came with a side-effect. Research is increasingly confined and run by the same rules that have been efficient for the producing and service-providing parts of our economies: the standards used by corporations and companies, the framework that policy makers are used to thinking in. While this is not a complete disaster - after all, science does still work remarkably well - the problem is that this approach does not work for all sorts of research.

I have discussed on this blog many times the differences and similarities between the "Marketplace of Ideas" and the free marketplace of products. The most relevant difference is the property the system should optimize. For our economies it is profit, and - if you believe the standard theory - this ideally results in a most efficient use of resources. One can debate how well the details work, but by and large it has indeed worked remarkably well. In the academic system, however, the property to optimize is "good research" - a vague notion with subjective value. Before nature's judgement on a research proposal is available, what does or doesn't constitute good research is fluid and determined by the scientific community, which is also the first consumer of that research. Problems occur when one tries to impose fixed criteria for the quality of research, some measure of success. That sets incentives which can only divert the process of scientific discovery (or invention?) from the original goal.

That is, as I see it, the main problem: setting wrong incentives. Here, I want to focus on a particular example, that of accountability and advance planning. In many areas of science, projects can be planned ahead and laid out in advance in details that will please funding agencies. But everybody who works in fundamental research knows that attempting to do the same in this area is a complete farce. You don't know where your research will take you. You might have an idea of where to start, but then you'll have to see what you find. I've found that some researchers, forced to come up with a 3-year, 5-point plan, apply for grants after a project has already been finished, just not yet published, and then spend the grant on what is actually their next project. Of course that turns the whole system ad absurdum, and few can afford the luxury of delaying publication.

The side-effect of such 3-year pre-planned grants is that researchers adapt to the requirements and think in 3-year pre-plannable projects. Speaking of setting incentives. The rest is good old natural selection. The same is true for the 2- or 3-year postdoc positions that thousands of promising young researchers are applying for just this month. If you sow short-term commitment, you reap short-term thinking. And that's disastrous for fundamental research, because the questions we really need answers to will remain untouched, except by those courageous few scientists who willingly risk their future.

Let us look at where the trends are going: The share of researchers in the USA holding faculty positions 7 years after obtaining their degree has dropped from 90% in '73 to 60% in 2006 (NSF statistics, see figure below). The share of full-time faculty declined from 88% in the early 1970s to 72% in 2006. Meanwhile, postdocs and others in full-time nonfaculty positions constitute an increasing percentage of those doing research at academic institutions, having grown from 13% in 1973 to 27% in 2006.



The American Association of University Professors (AAUP) has compiled similar data showing the same trend, see the figure below depicting the share of tenured (black), tenure-track (grey), non-tenured (stripes) and part-time (dots) faculty for the years 1975, 1989, 1995 and 2007 [source] (click to enlarge).


In their summary of the situation, the AAUP uses clear words: "The past four decades have seen a failure of the social contract in faculty employment... Today the tenure system [in the USA] has all but collapsed... the majority of faculty work in subprofessional conditions, often without basic protections for academic freedom."

In their report, the AAUP is more concerned with the quality of teaching, but these numbers also mean that more and more research is done by people on temporary contracts, who already have to think about applying for the next job at the time they start the current one. Been there, done that. And I am afraid this shifting of weight towards short-term thinking will have disastrous consequences for the fundamental research that gets accomplished, if it doesn't have them already.

In the context of setting wrong incentives and short-term thinking, another interesting piece of data is Pierre Azoulay et al.'s study
    Incentives and Creativity: Evidence from the Academic Life Sciences
    By Pierre Azoulay, Joshua S. Graff Zivin, Gustavo Manso
    (PDF here)

In their paper, the authors compared the success of researchers in the life sciences funded under two different programs: the Howard Hughes Medical Institute (HHMI), which "tolerates early failure, rewards long-term success, and gives its appointees great freedom to experiment," and the National Institutes of Health (NIH), with "short review cycles, pre-defined deliverables, and renewal policies unforgiving of failure." Of course the interpretation of the results depends on how appropriate you find the measure used for scientific success, the number of high-impact papers produced under the grant. Nevertheless, I find it telling that, after adjusting for researchers' average qualification, the HHMI program, which funds 5 years with good chances of renewal, produces a better high-impact output than the NIH's 3-year grants.

And speaking of telling tales, let me quote for you from the introduction of Azoulay et al.'s paper, which contains the following nice anecdote:
"In 1980, a scientist from the University of Utah, Mario Capecchi, applied for a grant at the National Institutes of Health (NIH). The application contained three projects. The NIH peer-reviewers liked the first two projects, which were building on Capecchi's past research efforts, but they were unanimously negative in their appraisal of the third project, in which he proposed to develop gene targeting in mammalian cells. They deemed the probability that the newly introduced DNA would ever find its matching sequence within the host genome vanishingly small, and the experiments not worthy of pursuit.

The NIH funded the grant despite this misgiving, but strongly recommended that Capecchi drop the third project. In his retelling of the story, the scientist writes that despite this unambiguous advice, he chose to put almost all his efforts into the third project: "It was a big gamble. Had I failed to obtain strong supporting data within the designated time frame, our NIH funding would have come to an abrupt end and we would not be talking about gene targeting today." Fortunately, within four years, Capecchi and his team obtained strong evidence for the feasibility of gene targeting in mammalian cells, and in 1984 the grant was renewed enthusiastically. Dispelling any doubt that he had misinterpreted the feedback from reviewers in 1980, the critique for the 1984 competitive renewal started, "We are glad that you didn't follow our advice."

The story does not stop there. In September 2007, Capecchi shared the Nobel prize for developing the techniques to make knockout mice with Oliver Smithies and Martin Evans. Such mice have allowed scientists to learn the roles of thousands of mammalian genes and provided laboratory models of human afflictions in which to test potential therapies."

Tuesday, October 19, 2010

If you're interested in the phenomenology of quantum gravity...

... you might want to check out my recent paper

It's not very technical, so don't hesitate to have a look. It's basically a summary of interesting developments and hopefully explains why I like working in the area. If you're not from the field, you might stumble over one or the other expression, but I think you'll still get a pretty good impression of what it's all about.

Saturday, October 16, 2010

Science changes, for real

I recently read an interesting article by Jeffrey R. Young in the Chronicle of Higher Education, titled Crowd Science Reaches New Heights. It tells the story of Alexander S. Szalay, professor of physics and astronomy at Johns Hopkins University, who has played a major role in making astronomical data a public resource. The whole article is very readable, so if you have the time, check it out. Here, I just want to quote a snippet that documents vividly how much science has changed within the last decade:

"The astronomical community did not believe we would ever really make the data public," says Mr. Szalay. The typical practice in the mid-1990s was to guard data because it was so difficult to get telescope time, and scholars did not want to get scooped on an analysis of something they gathered.

One incident demonstrates the mood at the time. A young astronomer saw a data set in a published journal and wanted to reanalyze it, so he asked his colleague for the numbers. The scholar who published the paper refused, so the junior scholar took the published scatterplot, guessed the numbers, and published his own analysis. The original scholar was so upset that he called for the second journal to retract the young scholar's paper.

Mr. Szalay said that astronomers changed their minds once the first big data sets hit the Web, starting with some images from NASA, followed by the official release of the first Sloan survey results in 2000.

I was surprised by that anecdote, but then I only started working in physics in '97. I do recall converting one or the other figure into a table to be able to reuse the data - an extremely annoying procedure, even with the use of suitable software. However, these were figures from decade-old textbooks, whose data I needed to check whether a code I had written would produce a sufficiently good fit. And 5 years back or so, when I had a phase of sudden interest in neutrino physics, I noticed that while one finds plenty of papers on the results of Monte-Carlo simulations to fit neutrino experiments, the data used is not listed for all experiments. In one case, I ended up browsing a pile of Japanese PhD theses (luckily in English) till I found the tables in the appendix of one, and then I had to type them off. I'm not sure how much the situation in that area has changed since. But change is inevitably on its way...

Wednesday, October 13, 2010

test* the hypothes*

I recently came across a study in the sociology of science and have been wondering how to interpret the results:
    Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data
    By Daniele Fanelli
    PLoS ONE 5(4): e10271.
There are many previous studies showing that papers are more likely to get published and cited if they report "positive results." Fanelli has now found a correlation between the likelihood of reporting positive results and the total number of papers published, in a sample of papers with a corresponding author in the USA, published in the years 2000-2007, across all disciplines. The papers were sampled by searching the Essential Science Indicators database with the query "test* the hypothes*", and the sample was then separated into positive and negative results by individual examination (both by the author and by an assistant). The result was as follows:
In a random sample of 1316 papers that declared to have “tested a hypothesis” in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors: those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure... [T]hese results support the hypothesis that competitive academic environments increase not only the productivity of researchers, but also their bias against “negative” results.

When I read that, I was somewhat surprised about the conclusion. Sure, such a result would "support" the named hypothesis in the sense that it didn't contradict it. But it seems to me like jumping to conclusions. How many other hypotheses can you come up with that are also supported by the results? I'll admit that I hadn't even read the whole paper when I made up the following ones:
  • Authors who publish negative results are sad and depressed people and generally less productive.

  • A scientist who finds a negative result wants more evidence to convince himself his original hypothesis was wrong; thus the study takes longer and, in toto, fewer papers are published.

  • Stefan suggested that the folks who published more papers are of the sort who hand out a dozen shallow hypotheses to their students to be tested, hypotheses which are likely to be confirmed. (Stefan used the, unfortunately untranslatable, German expression "Dünnbrettbohrer," which literally means "thin board driller.")

After I had read the paper, it turned out Fanelli had something to say about Stefan's alternative hypothesis. Before I come to that, however, I have to say that I have an issue with the term "positive result." Fanelli writes that he uses it to "indicate all results that support the experimental hypothesis." That doesn't make a lot of sense to me, as one could simply negate the hypothesis and find a positive result. If it were that easy to circumvent a harder-to-publish, less-likely-to-be-cited summary of one's research results, nobody would ever publish a result that's "negative" in that sense. I think that in most cases a positive result should be understood as one that confirms a hypothesis that "finds something" (say, an effect or a correlation) rather than one that "finds nothing" (we've generated/analyzed loads of data and found noise). I would agree that this isn't well-defined, but I think in most cases there would be broad agreement on what "finds something" means, and a negation of the hypothesis wouldn't make the reader buy it as a "positive result." (Here is a counter-example.) The problem is then of course that studies which "find nothing" are just as important as the ones that "find something," so the question of whether there's a bias in which ones get published matters.

Sticking with his own interpretation, Fanelli considers the possibility that researchers who come to a positive result, and in that sense show themselves correct, are just the smarter ones, who are also more productive. He further assumes that the more productive ones are more likely to be found at elite institutions. With his own interpretation this alternative hypothesis doesn't make a lot of sense, because when the paper goes out, who knows what the original hypothesis was anyway? You don't need to be particularly smart to just reformulate it. That reformulation however doesn't turn a non-effect into an effect, so let's instead consider my interpretation of "positive result." Fanelli argues that the explanation that people smart enough to do an experiment where something is to be found are also the ones who publish more papers generally doesn't explain the correlation, for two reasons: First, since he assumes these people will be at elite institutions, there should be a correlation with R&D expenditure, which he didn't find. Second, because this explanation alone (without any bias) would mean that in states where 95% - 100% of published results were positive, the smart researchers hardly ever misjudged the outcome of an experiment in advance, and the experiment was always such that the result was statistically significant, even though other studies have shown that this is not generally the case.

To the alternative hypothesis that Stefan suggested, Fanelli writes:
A possibility that needs to be considered in all regression analyses is whether the cause-effect relationship could be reversed: could some states be more productive precisely because their researchers tend to do many cheap and non-explorative studies (i.e. many simple experiments that test relatively trivial hypotheses)? This appears unlikely, because it would contradict the observation that the most productive institutions are also the more prestigious, and therefore the ones where the most important research tends to be done.
Note that he first speaks about "states" (which is what actually went into his study) and later about "institutions." Is it indeed the case that the most productive states (that would be DC, AZ, MD, CA, IL) are also the ones where the most important research is done? It's not that I entirely disagree with this argument, but I don't think it's particularly convincing without clarifying what "most important research" means. Is it maybe research that is well cited? And didn't we learn earlier that positive results tend to get cited better? Seems a little circular, doesn't it?

In the end, I wasn't really convinced by Fanelli's argument that the correlation he finds is a result of systematic bias, though it does sound plausible, and he did verify his own hypothesis.

Let me then remark on the sample he's used. While Fanelli has good arguments that the sample is representative for the US states, it is not clear to me that it is in addition also representative for "all disciplines." The phrase "test the hypothesis" might just be more commonly used in some fields, e.g. medicine, than in others, e.g. physics. The thing is that in physics, what is actually a negative result often comes in the form of a bound on some parameter or a higher precision in confirming some theory. Think of experiments that are "testing the hypothesis" that Lorentz-invariance is broken. There's an abundance of papers that do nothing but report negative results and more negative results (no effect, nothing new, Lorentz-invariance still alive). Yet I doubt these papers would have shown up in the keyword search, simply because the exact phrase is rarely used. More commonly it would be formulated as "constraining parameters for deviations from Lorentz-invariance" or something similar.
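For illustration, the wildcard query roughly corresponds to the following regular expression (my rendering, not the database's actual matching rules), which shows why the typical physics phrasing slips through:

```python
# Rough regex rendering of the wildcard query "test* the hypothes*":
# "test" plus any suffix, then "the", then "hypothes" plus any suffix.
import re

pattern = re.compile(r"\btest\w*\s+the\s+hypothes\w*", re.IGNORECASE)

phrases = [
    "we test the hypothesis that ...",
    "testing the hypotheses of ...",
    "constraining parameters for deviations from Lorentz-invariance",
]
matches = [bool(pattern.search(p)) for p in phrases]
print(matches)  # the physics-style phrasing in the last entry does not match
```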

That is not to say, however, that I think there's no bias for positive results in physics. There almost certainly is one, though I suspect you find more of it in theoretical than in experimental physics, and the phrase "testing the hypothesis" again would probably not be used. The thing is, I suspect that a great many attempts to come up with an explanation or a model that fails when confronted with the data never get published. And if they do, it's highly plausible that these papers don't get cited much, because it's unlikely that many people will invest further time into a model that was already shown not to work. However, I would argue that such papers should have their own place. That's because, as it is, it very likely happens that many people try the same ideas and all find them to fail. They could save time and effort if the failure were explained and documented once and for all. So, I'd be all in favor of a journal for "models that didn't work."

Sunday, October 10, 2010

Cosmic Strings

The appeal of string theory is in the simplicity of the idea. The devil, as usual, is in the details that follow. But one-dimensional objects are common in physical systems, and sometimes have little to do with string theory as the candidate theory of everything. The Lund string model, for example, is an effective description for the fragmentation of color flux-tubes resulting in hadronization. And then there are cosmic strings.

Cosmic strings are stable, macroscopic, one-dimensional objects of high energy density that might have been created in the early universe. It was originally suggested by Kibble in 1976 that such objects could form in symmetry-breaking phase transitions in quantum field theory that would take place when the universe was young and hot. These strings then form a network of (infinitely) long strings and loops that evolves with the expansion of the universe. It was thought for a while that strings might seed the density perturbations leading to the large-scale structures we see today, but this turned out not to be consistent with the increasingly better data. While we now know that cosmic strings cannot have dominated in the early universe, some of them might still have been present, and might still be present today.

The topic rose to new attention when it was found that cosmic strings might alternatively also be created in a string theory scenario in the early universe and then grow to macroscopic sizes. That is interesting because cosmic strings have a bunch of possibly observable consequences. For the purpose of testing string theory, the question is of course whether one could distinguish a cosmic string created by ordinary quantum field theory from a cosmic super-string-theory-string.

Two of the most outstanding observables are that cosmic strings create peculiar gravitational lensing effects and can, while they move around, create cusps that release bursts of gravitational waves. There are other, more subtle, signatures, such as the creation of small non-Gaussianities in the cosmic microwave background (CMB) and some influence on the CMB tensor-modes, but the gravitational lensing and gravitational wave bursts have so far gotten the most attention due to the already good experimental prospects of detecting them.

As far as the lensing is concerned, every now and then a candidate is found where the lens might have been a cosmic string, though none of them has survived scrutiny; CSL-1, for example, later turned out to be merely two similar galaxies in close vicinity. In any case, the gravitational lensing wouldn't allow us to tell whether we're looking at a super-string or not.
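For a sense of scale: a straight cosmic string produces a conical spacetime with deficit angle 8πGμ/c², so a source behind the string appears as a double image with a separation of this order. A back-of-envelope sketch, with an illustrative tension value and geometric factors of order one dropped:

```python
# Back-of-envelope: conical deficit angle of a straight cosmic string,
# delta = 8*pi*(G mu), with G mu the dimensionless string tension.
# The image separation for a source behind the string is of this order.
import math

Gmu = 1e-7                               # dimensionless string tension (illustrative)
deficit_rad = 8 * math.pi * Gmu          # deficit angle in radians
deficit_arcsec = deficit_rad * (180 / math.pi) * 3600

print(f"deficit angle ~ {deficit_arcsec:.2f} arcsec")
```

For tensions around Gμ ~ 10⁻⁷ this lands at roughly half an arcsecond, i.e. within reach of optical surveys, which is why lensing candidates like CSL-1 attracted attention in the first place.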

There are however differences between fundamental and non-fundamental cosmic strings that have been pointed out in recent years. These stem from the presence of additional spatial dimensions in super-string theory, which alter the evolution of the string network, resulting in a denser network today. That might give one hope that bursts of gravitational radiation are more likely to occur. Recently though, a more detailed study has been done, examining the motion of the string and the gravitational radiation emitted, taking into account the additional dimensions:


In their analysis, the researchers found that the presence of compactified extra dimensions larger than the width of the string dampens the gravitational wave emission. The effect depends on the number of extra dimensions, and the damping can be several orders of magnitude. While this is interesting in the sense that the signal carries information about the sort of string one is dealing with, it unfortunately means that the signal is also far less likely to be detected at all. The strength of the damping also depends on the ratio of the width of the string to the size of the extra dimensions, though this dependence is hidden within the model and not obvious from the results. I wrote to one of the authors of the above paper, Ruth Gregory, who explained that simulating the dynamics of a thick string was quite a challenge, which is why they had to resort to an empirical model.

A signal of cosmic strings would be tremendously exciting either way. But so far the prospects of being able to unambiguously assign such a signal to string theory seem slim.

Tuesday, October 05, 2010

From IGNobel to the Nobel Prize

Congratulations to Andre Geim of the University of Manchester, the first winner of both the IGNobel and the Nobel Prize in Physics!

Back in 2000, Andre Geim shared the IgNobel Prize with Sir Michael Berry for his celebrated levitating frog experiment.

Today, ten years later, he has been awarded the Nobel Prize in Physics for 2010, together with Konstantin Novoselov, "for groundbreaking experiments regarding the two-dimensional material graphene".

Graphene, as this chicken-wire single-atom carbon layer is called, is a cool material for theorists and experimentalists alike - just have a look at Google to see how popular and important this stuff has become.

It seems to me that the way Geim and Novoselov discovered graphene in 2004 - by using adhesive tape to peel a single layer of carbon atoms off a piece of graphite, the "Scotch tape method" - and the levitating frog clearly show the same playful attitude towards physics, a great way to do science!



For a first reading about Graphene, check out Carbon Wonderland by Andre Geim and Philip Kim, Scientific American April 2008, and Graphene: Exploring Carbon Flatland by Andre Geim and Allan MacDonald, Physics Today 60 (2007) 35-41.

More technical papers can also be found on the website of Geim's group at Manchester.


Monday, October 04, 2010

Einstein on the discreteness of space-time

I recently came across this interesting quotation by Albert Einstein:
“But you have correctly grasped the drawback that the continuum brings. If the molecular view of matter is the correct (appropriate) one, i.e., if a part of the universe is to be represented by a finite number of moving points, then the continuum of the present theory contains too great a manifold of possibilities. I also believe that this too great is responsible for the fact that our present means of description miscarry with the quantum theory. The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem, which corresponds to nothing “real”. But we still lack the mathematical structure unfortunately. How much have I already plagued myself in this way!”

It's from a 1916 letter to Hans Walter Dällenbach, a former student of Einstein. (Unfortunately the letter is not available online.) I hadn't been aware Einstein thought (at least then) that a continuous space-time is not “real.” It's an interesting piece of history.

Friday, October 01, 2010

Experimental Search for Quantum Gravity - Workshop Summary

With some delay, here's finally the summary of our summer workshop on Experimental Search for Quantum Gravity. Most of the delay is due to the videos only having been uploaded two weeks ago, but you can now find the links to the recordings and slides on the conference website.

The phenomenology of quantum gravity is a still fairly young research field, and it is good to see it attracting more interest and effort every year. Experimental tests, also in the form of constraints, are an important guide in our search for a theory of quantum gravity. The challenge is that gravity is such a weak force compared to the other interactions that quantum effects of gravity are extremely difficult to detect - they become important only at the Planck scale, at energies 16 orders of magnitude above what the Large Hadron Collider (LHC) will reach. However, during the last decade, proposals have been put forward for how quantum gravity could be testable nevertheless.

To that end, a number of models have been developed that are arguably at different levels of sophistication and plausibility, not to mention man-hours. As you can guess, this makes the field very lively, with many controversies still waiting to be settled. So far, none of these models has actually been rigorously derived from a candidate theory of quantum gravity. Instead, they are means to capture specific features that the fundamental theory has been argued to have. Such phenomenological models should thus be understood as simplifications, and one would expect them to be incomplete, leaving questions open for the fundamental theory to answer.

The best place to look for quantum gravitational effects is in regions of strong curvature, that is, towards the center of black holes or towards the first moments of the universe. Since black hole interiors are hidden from observation by the horizon, this leaves the early universe as the best place to look. It is thus not surprising that the bulk of effort has been invested in cosmology, most notably in the form of String Cosmology and Loop Quantum Cosmology. The typical observables to look for are the amplitudes of tensor modes in the cosmic microwave background (CMB) and non-Gaussianities.

The other area of quantum gravity phenomenology that has attracted a lot of attention is violations and deformations of Lorentz invariance. These have been argued to appear in many approaches to quantum gravity, including Loop Quantum Gravity (LQG), String Theory, non-commutative geometry, and emergent gravity - thus the large interest in the subject. However, the details are subtle. As I mentioned, no actual derivation exists from either LQG or string theory, so don't jump to conclusions. Violations of Lorentz invariance, which have a preferred restframe, can be captured in an effective field theory and are testable to extremely high precision with particle physics experiments (both collider and astrophysics), which allows us to tightly constrain them despite the smallness of the Planck scale. Deformations of Lorentz invariance have no preferred frame and have been argued not to be expressible as effective field theories, thus evading the tight constraints on Lorentz invariance violations. Deformations of Lorentz invariance generically lead to a modification of the dispersion relation and an energy-dependent speed of light, which may be observable in gamma ray burst events. As you know from my earlier writing, there's some discussion at the moment about the consistency of these models, and Lee Smolin gave a nice talk on that. Giovanni Amelino-Camelia summarized some of the recent work in the field, and added an interesting new proposal.
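To give an idea of what such a modified dispersion relation looks like, here is a commonly used first-order parametrization (a generic ansatz with a dimensionless parameter ξ, not derived from any particular model; signs and order-one factors vary between papers):

```latex
% Planck-scale modified dispersion relation, first order in E/M_Pl:
E^2 = p^2 c^2 + m^2 c^4 + \xi \, \frac{E^3}{M_{\rm Pl} c^2} + \dots
% For photons (m = 0) this yields an energy-dependent group velocity
v(E) \approx c \left( 1 + \xi \, \frac{E}{M_{\rm Pl} c^2} \right) ,
% so two photons with energy difference \Delta E, after traveling a
% distance L, arrive with a time difference of order
\Delta t \sim \frac{\Delta E}{M_{\rm Pl} c^2} \, \frac{L}{c} .
```

For GeV photons from a gamma ray burst at cosmological distance, Δt comes out roughly in the range of milliseconds - tiny, but the enormous travel time acts as a lever arm that brings it within reach of timing measurements.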

Besides these areas into which most of the work has been invested, there are a number of interesting models based on ideas about the fundamental structure of space-time. There is, for example, the causal sets approach, which is Lorentz-invariant yet results in diffusion, aspects of which may be observable in the CMB polarization; Fay Dowker spoke about this at the workshop. Again, note however that the diffusion equation is motivated by, though not yet actually derived from, the causal sets approach. Then there are the quantum graphity models, which I personally find very promising. Unfortunately, Fotini Markopoulou could not make it to our meeting. I am reasonably sure though that we'll hear more about that model and its phenomenological implications in the future. And there are models of space-time foam leading to decoherence and/or CPT violation, and models of space-time granularity leading to modifications of the Eötvös experiment (preprint here) - and I won't attempt to make this a complete listing because I'd inevitably forget somebody's pet model.

A class of models that should be discussed separately are those with a lowered Planck scale. In scenarios with large extra dimensions, it can happen that quantum gravitational effects are not actually as feeble as we think they are from extrapolating the strength of gravity over 16 orders of magnitude. (For details, see my earlier post on such models.) It might instead be that the Planck scale is just around the corner, making it accessible to collider experiments. A lot of work has been done in this area, and these models are now being tested at the LHC. Thomas Rizzo gave a great talk on these prospects, and Marco Cavaglia spoke about the production of mini black holes in particular.
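The dilution of gravity in these scenarios can be summarized in one schematic relation (ADD-type large extra dimensions; geometry factors of order one are dropped, so take it as an estimate only):

```latex
% Effective 4-dimensional Planck mass M_Pl in terms of the true
% higher-dimensional gravitational scale M_* and n extra dimensions
% of radius R:
M_{\rm Pl}^2 \sim M_*^{\,n+2} \, R^n
```

If the extra dimensions are large in units of 1/M_*, the true scale M_* can lie far below the apparent M_Pl; for M_* near a TeV and n = 2, for example, R comes out of order a millimeter or below, which is why tabletop tests of Newton's law at short distances are also relevant here.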

Then there's the possibility that we already have observational evidence for quantum gravity - we just haven't recognized it for what it is. Stephon Alexander talked about a model that generates the neutrino masses and the cosmological constant, and makes additional predictions. Can you ask for more? (Preprint here.) And Greg Landsberg gave a talk about his recent work, trying out the idea that on short scales space-time is not higher- but lower-dimensional (preprint here). This idea has been around for some years now (even New Scientist noticed), but my impression is that it so far lacks a really good phenomenological model.

We had three discussion sessions during the week: one on the question of what principles might be violated by quantum gravity, one on experiments and thought experiments, and one on the future of particle physics. Unfortunately the recording of the last one, which was the liveliest, failed, but check out the other two. The discussions went very well, and I think they served their purpose of letting people get to know each other and exchange opinions on the central questions of the field.

Altogether, I am very pleased with the workshop. Despite a number of organizational glitches, it went very smoothly. The experimentalists mixed well with the theorists, we covered a fair share of the relevant topics, and it didn't rain on the BBQ. To offer some self-criticism: this year we had a lack of string phenomenology. Some may want to count Mavromatos as "stringy," but we didn't have anybody speaking on string cosmology, for instance. That was not by design but by chance, since, as usual, some of the people we invited declined or could eventually not make it. One of the lessons that I personally have drawn from this workshop is that there is some degeneracy in the predictions of various models, which should be sorted out by combining several predictions. This has been done well in the case of extra dimensional models, where a lot of effort has been invested into clearly distinguishing the signatures of different scenarios. Similar studies are however missing when it comes, for example, to quantum gravity phenomenology in the early universe as predicted by different models.

In any case, I hope that we will have more workshops in this series in the future. I'll keep you posted. And I'm sure, one day the workshop will come when we'll actually have evidence to discuss...

Monday, September 27, 2010

Discovery or Invention?

Some years back I was shortlisted for a job and gave a seminar, doing my best to leave a good impression. In the questions following my talk, somebody asked if I think the process of science is one of discovery or one of invention. "Both," I said, leaving everyone in the audience, including myself, somewhat confused.

In more detail, the question is the following. In the process of science, we accumulate knowledge: observations, applications, theories. But this knowledge - does it exist before we have made it our own, so that it is merely up to us to discover it? Or is this knowledge genuinely new, coming into existence only once we are thinking about and working with it?

If one goes down this slope it can become somewhat slippery, and you might end up at the question of whether all of science is a human construct, tainted by the biases of our consciousness and social effects (invention) - or whether science in its essence, ideally, is pure and objective, without human baggage (discovery). You don't have to slide down the slope though, because going there neglects that either way we're doing our best to make science as useful for our purposes as possible, trying to reduce biases and social effects.

Take for example quantum field theory. If you believe that mathematics has an existence independent of human consciousness, you would argue that quantum field theory existed before we knew of it, and we discovered and then used it. If you don't believe that Plato's world of ideas is real, then you'd instead call it an invention of the human mind. Or take some application, for example the LASER (which just celebrated its 50th birthday). Was the construction of such an instrument always an existing possibility, one merely discovered by humans? Or is it an invention, a possibility that only came into existence thanks to our ingenuity?

This admittedly philosophical question, which is eventually one about the meaning of creativity, has a correspondence in the arts. In interviews, I've sometimes found painters or writers saying that the "idea" for their work was waiting for them; they were just the ones who brought it to paper or canvas - the discoverers and the medium bringing it to our attention, but not the inventors. Some even speak of a mental "place" that they visit to find their ideas, a place they apparently believe is not a creation of their own mind. Others however describe the creative process as entirely self-made, often involving trial and error, many studies and improvements - the making of something genuinely novel that has never existed before, an invention.

I, like many physicists I think, believe that reality exists independently of us. It is thus out there for us to discover. Reality doesn't care about the quirks of the human brain or the problems of our societies. So that would put me on the side of the discoverers. However, it is not that simple. Whatever we do, our discoveries are shaded by human perception. Whatever we observe, we observe it with human senses or human instruments. And the theories we write down are stories that humans tell, meant to describe the real world rather than actually being the real world. To illustrate that, let me recycle an image I used in my earlier post on Models and Theories, see left. You first need to discover. But once you measure it, once you write it down and make it suitable for human use, you're creating a - necessarily imperfect - image, may that be a collection of data points or a theory. And that's the part of the process which is an invention.

I would argue for example that while we have discovered quantum mechanical effects which exist somewhere in "the real world out there," the theory we have to explain them is a human construct, an invention. It uses variables and language that are specific to our species, it carries the history of particle-wave duality, it is suited to describe the data that we have measured with our devices. An alien civilization might discover the same effects, but they might invent a different story to explain them, a theory that explains their data in possibly entirely different ways.

I don't even think that mathematics itself is free of human baggage, or that it will remain the best way to describe Nature if you could fast forward some hundred thousand years. We just think today it is because we cannot possibly imagine anything else that would work better. But I think we should always keep in mind the Principle of Finite Imagination: Human imagination has limits set by our cognitive abilities. Excluding a possibility because we cannot today imagine it neglects that time may bring significant changes to our cognition or species.

I didn't get the job. I doubt it had anything to do with my inability to explain my reply to this particular question. But still, I wished I had been able to express myself better back then.

Now it's your turn: Discovery or Invention?

Saturday, September 25, 2010

Dance your PhD

"Dance your PhD," believe it or not, is a contest for the best presentation of a PhD topic as a dance video in the categories physics, chemistry, biology, and social sciences. Dancing mathematics, it seems, would have been too easy. Here's an example from physics: "Generation and detection of high-energy phonons by superconducting junctions" by Irwin Singer:

Electrons and Phonons in Superconductors: A Love Story. from Irwin Singer on Vimeo


You can look at more submissions on this website.

The topic of my PhD thesis was "Black Holes in Extra Dimensions: Properties and Detection." (IsMyThesisHotOrNot?!) I'm afraid a video wouldn't have properly captured extra dimensional dancing. I suppose I would have tried to represent collapse and subsequent radiation, increasing temperature, and a final decay with dancers coming together in the center of a room, and later leaving the scene again. More likely though, I wouldn't have spent time on this.

I'm not really sure what to think of such efforts to bring science closer to the public. The above video about the superconductor, frankly, would have been equally instructive without the dancers. Most of the other videos, if you check them out, don't communicate more than a sentence or two of information about the thesis topic. Not so surprisingly - dancing is hardly a good way to get across complex science.

Now don't get me wrong, I'm sure everybody has had a lot of fun with these videos, and one or two people learned a complicated new word they hadn't known before. But let's reverse the roles of art and science for a moment. It's like trying to get people interested in a Van Gogh by showing them a spectral analysis of the colors used. Science is beautiful in itself. But to see the beauty you must understand. The value of artists' representations is that skilled art can capture more than the written or spoken word alone. But these dance videos, at least to me, are less. In any case, they might serve as a weekend distraction ;-)

Thursday, September 23, 2010

Spaces

Love for mathematics divides people like nothing else.

Sure, people can develop a fascination for the most bizarre things - knitting, collecting Hard-Rock-Cafe shirts, or photoshopping their wives into shape - and leave others puzzled about their obsession. But when it comes to mathematics, indifference and puzzlement are replaced with plain rejection. There are those for whom mathematics is the essence of everything, the language in which the book of Nature is written and in which the secrets of the universe are encoded. And then there are those who believe that the lover of mathematics is narrow-minded, that the world is so much more, so vastly more complex than what mathematics can possibly capture, that anybody who thinks incomprehensible, abstract symbols capture elementary truths must be seriously disturbed.

And sure, people can argue furiously about politics or about who has the best pizza in town, but in no case I can think of do you find a comparably utter lack of understanding for the other side as when it comes to the power of mathematics. The lack of understanding is probably so complete that one can't even argue about it. Over and over I have found people who reject the notion of mathematics as a universal language, and who discard it as insufficient for reality. They are dead wrong to do so, of course, but since I've encountered this attitude over and over again, I want to dedicate some paragraphs to what I believe is the origin of this divide.

At the very beginning is, of course, school education. Unfortunately, what's called mathematics in school has little to do with mathematics. It would more aptly be called calculation. Don't get me wrong: it is essential knowledge to be able to multiply fractions and calculate percentages, but it has about as much to do with mathematics as spreading your arms has with being a pilot. The problem is, that's about all most people ever get to know of mathematics. The actual heart of math, however, is not number crunching or solving quadratic equations; it's abstraction - the development of an entirely self-referential, logically consistent language, detached from the burden of reality.

Let me focus on an example that those of you with a high school education will have met: vector spaces. A vector space is basically a set of elements equipped with a structure allowing for an operation called addition and a second operation that's multiplication with a scalar. These operations have to fulfill certain criteria, which you can look up somewhere if you've forgotten them, but they're not so relevant for the following. What's relevant is how abstract this notion of a vector space already is. The vector space really is that definition, and nothing else. And it's taught in school! Of course, by the time pupils come across a definition of a vector space, most know examples already and have a mental picture. My math teacher used pens to visualize vectors. But nothing in the definition of a vector space tells you it ought to be three-dimensional, or that the elements be coordinate vectors (pens).
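To see how little the definition actually pins down, here's a toy sketch (my own example, not from the text): real-valued functions form a vector space under pointwise operations, even though none of its "vectors" is an arrow or a triple of coordinates.

```python
# Real-valued functions on a set form a vector space: "vectors" here are
# functions, not arrows, yet the vector-space axioms all hold.

def add(f, g):
    """Vector addition: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """Scalar multiplication: (a * f)(x) = a * f(x)."""
    return lambda x: a * f(x)

f = lambda x: x * x   # one "vector"
g = lambda x: 3.0     # another "vector": a constant function

h = add(scale(2.0, f), g)   # the vector 2f + g, itself again a function
print(h(1.0))  # 2*1 + 3 = 5.0
```

This is, by the way, exactly the kind of vector space that shows up in quantum mechanics: wave functions are vectors in precisely this sense.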

Given that vector spaces are such a simple concept, one introduced even in school, I was surprised to learn how late in the history of science the notion came along. The phase space in physics is essentially a higher dimensional vector space whose elements aren't only coordinate vectors but also momenta (and which, by virtue of this, has some additional "symplectic" structure). Knowing what you know today, this hardly sounds like a revolutionary concept. But in the middle of the 19th century it was. In his (highly readable) Physics Today article on the history of phase space, David Nolte writes:

Today it is natural for us to assign each variable its own axis in a generalized multidimensional space. But in the 1700s it was not natural. [...] Cayley in his 1843 paper titled "Chapters in the Analytical Geometry of (n) Dimensions" was the first to take the bold step of referring to a geometry of more than 3 dimensions. After that, the stage was set for the "invention" of multiple dimensions when Grassmann developed the concept of an n-dimensional vectorspace in 1844.

The vector space that you've heard of in school is a result of many generations of abstractions. Yet, it is still an extremely special case of what the mathematician considers a "space."

If you go out on the street and ask random passers-by what they associate with "space," you might hear office space, or the space they don't have in their living room after they bought the drum set for the eldest son. You might hear parking space, or the space between two letters in a sentence, or maybe outer space. That's the real world, and at first sight it seems indeed like a selection of complex and very different notions of space. But in fact all these spaces that you encounter in the real world are highly specific. They are three- or lower-dimensional. They are flat to excellent precision. They come with a distance measure that allows you to tell if the new couch will fit next to the drum set. The general space in mathematics, however, may do away with all of these properties we are so used to. Imagine an infinite dimensional space. Imagine one without distances. Imagine what it would be to try to park your car in one.

In physics one does encounter more general spaces than the standard 3-dimensional vector space. The best known example is probably the Hilbert space of quantum mechanics, which can easily be infinite dimensional. But even in physics, the realm we deal with is only a tiny part of all that mathematics has to offer. The functions we deal with are typically nicely differentiable, and so are the manifolds we put them on, blessing us with plenty of additional structure. The differential equations we have are typically not higher than 2nd order, our spaces are Hausdorff, and almost all of the pathological examples you come across in mathematics the physicist never has to bother with.

This of course brings one to the question: if the world of mathematics contains so much more, then where is it? Does it exist, somewhere - that space without distances, that module, that left-invariant subgroup? I have some sympathy for Tegmark's Mathematical Universe, which posits that all of mathematics must exist somehow, somewhere in the multiverse. My central objection to Tegmark's idea is just that it's not insightful and plain useless.

If you've scrolled down to this last paragraph, shame on you. The point of all the words above was that during the history of science we have come to realize it is the world of mathematics that is vastly larger than what the real world has to offer, not the other way round.

Wednesday, September 22, 2010

1+1=2+ɛ... +δ

I have a confession to make. I've been unduly withholding information from you. But I think the time has come to fill you in, and I hope the following will clarify some things about my recent absence and brevity. During the last months, I've been more of a bumblebee than a bee: yes, I'm pregnant! Here's a short summary of what was written between the lines.

Late April, due to an overactive Icelandic volcano, Stefan and I had to take a 3000 km road trip from Sweden to Germany and back. Unlike other primates, humans belong to the category of "nonseasonal breeders." Nevertheless, for as long as there have been birth statistics, seasonal variations of 10-20% in human live births have been documented. Confusingly though, the seasonal peak has shifted over the decades and seems to depend on age and other demographic factors as well. So spare me the comments about spring feelings. As far as humans are concerned, "spring feelings" is a cocktail.

Briefly before my trip to Perimeter Institute in May, I got a positive result on a pregnancy test. Pregnancy tests are sensitive to the human chorionic gonadotrophin (hCG) hormone that is detectable after implantation of the fertilized egg. Modern pregnancy tests are amazingly accurate. You can now even get them with a digital display, and they provide estimates for the weeks since conception. They work up to 4 days before the missed period. In a few years, pregnancy tests will probably have speech output and automatically submit the result to your Twitter feed.

I dubbed the little clump of rapidly dividing cells Epsilon.

My stay at Perimeter Institute in May was, mildly speaking, scientifically not very productive. I started getting very sick shortly after my arrival. It's a miracle I managed to sit through the workshop on the Laws of Nature - and even to write a blogpost about it. My flight back to Stockholm was a nightmare. I spent 14 hours holding onto a sick bag, pressing a lemon scented face towel to my nose, trying to escape the food smell. Folk wisdom says nausea during early pregnancy is worse with girls and twins.

A word on counting the pregnancy length. Historically, the length of the pregnancy is not counted from conception (the date of which is typically unknown), but from the first day of the last menstrual period. This is known as "gestational age" and is used by all doctors, books, and most other references (with a few exceptions). The gestational age, which I'll also use in the following, is approximately two weeks longer than the actual age of the embryo, also known as the "embryonic age."

At about 7 weeks, the heart starts beating.

Back in Stockholm, at about 8 weeks, I went to the first ultrasound exam. The doctor didn't speak English too well. "You know," he said, "you have twilights." - "I have what?" He turned the screen around and showed me the image (see left). "Twins!" I said. And so to Epsilon was added Delta. If you haven't seen an ultrasound image before, dark means little reflection (sonolucent). Here, the black ovals are basically liquid-filled: the amniotic sacs. In the lower one you see very nicely the umbilical cord. The embryos are about 18 mm (crown to rump).

A twin pregnancy has advantages and disadvantages. There's a long list of possible additional complications with multiple pregnancies, which is why it's automatically considered a "high risk pregnancy." The advantage is that the mother gets additional screenings to prevent these complications.

Just in time for our summer workshop, the nausea finally started fading. Clothes started getting a little tight though, as an observant eye might have noticed on the conference photo.

The average length of a human pregnancy is 40 weeks (gestational), and it's roughly divided into 3 trimesters, each bringing its own challenges. The first trimester ends at week 12, the second at week 27. From 9 weeks on, the embryo is called a fetus. The risk of miscarriage during the first trimester is very high; estimates range from one in 8 to one in 5 pregnancies ending in spontaneous abortion at that early stage. In the first 12 weeks, all the major organs are formed and working, though it takes several more months for them to mature - the last organ to fully mature is the lung. If anything goes wrong in the critical first steps, the embryo is not viable. Some reasons for miscarriage are known and treatable, but by and large the doctors don't know much and can do even less at these stages. This is my second pregnancy. The first one ended in a miscarriage at 11 weeks. Of course I knew the statistics. But I didn't realize what it means till it happened to me and I suddenly learned how many people I knew had had the same experience before.

At 14 weeks, I went to the second ultrasound. The doc found two placentas, which means it could be fraternal or identical twins, but more likely fraternal. In the ultrasound image to the left you see them lying on top of each other, both heads to the right. (I've blurred out some details in the header for privacy reasons.) Crown to rump length is about 8 cm.

After that, Epsilon and Delta got too large to fit on a single ultrasound image. From then on, the docs measure the sizes of individual organs instead. The next ultrasound, "the big one" as they call it, is at 20 weeks. With this ultrasound, the fetuses' organs are screened to detect possible problems. The outcome is some sort of risk assessment. In our case, the doctors didn't find any reason for concern. Kidneys and bladders were working properly. They did an ultrasound of the hearts, and measured the blood flow. They showed us the main arteries, the brains, the stomachs, the legs and hands. I was less impressed by the technique itself than by the resolution and, most of all, by what the doctor managed to read out of the pictures. We had this "big ultrasound" done in Germany (it has to be done by some particular week in case a severe problem calls for a late abortion), so Stefan could come and watch.

The doctor told us Epsilon and Delta are two girls.

Since then, Epsilon and Delta, and so my belly, have been steadily growing. The little ones are well and kicking (both me and each other), while I'm having some trouble coping with the adjustment. My sudden hospitalization some weeks ago was very likely related to the pregnancy, though the actual problem remained mysterious. Worse, only a week later I had the questionable honor of spending another night in a hospital, this time for a completely different reason. Again the doctors insisted on many tests but eventually found nothing in particular. The pregnancy bible helpfully tells me I should be feeling great during the 2nd trimester. Well, so much for statistics.

In any case, my employer is informed now, the wheels of bureaucracy are turning, and if everything works out alright I'll go on maternity leave in mid-November. As you know, Stefan lives in Germany while I live in Sweden, so our situation is a little complicated. The health insurance issues are a big annoyance, and that's only part of the problem. Luckily though, both the Swedes and the Germans are very generous with parental leave, so we'll have some time to reorganize our lives. For now, Stefan is moving into a larger apartment while I'm enjoying the pleasantries of maternity wear.

Thursday, September 16, 2010

Visit to CERN

Last week, the CERN Book Fair gave me the opportunity for a short trip back to CERN. I had been there a few times in 2000/2001, working with the Geant4 collaboration and contributing snippets of computer code to this huge simulation package, and I really enjoyed visiting CERN again, alas for a much too short stay.

Back then, ten years ago, we could visit the LEP detectors in their caverns, and now the LHC is running - but actually, these giant changes are nearly invisible on the Meyrin site of CERN:


There is the dark dome of the CERN Globe looming in the background, and the blue thing on the lawn is an LHC magnet, but otherwise, not much has changed. Unfortunately, the beautiful terrace of the cafeteria was a construction site, and the dusty glass display with Tim Berners-Lee's original web server had vanished from the side wing of the restaurant.


I remember that back then, I could stroll around a few of the older experimental halls, so I had the naive idea that I could try to find the famous hydrogen bottle that feeds the LHC. But of course, as the machine is running, there is no access to the accelerators: Défense d'entrer/No Entry.


Instead, I walked to the Computer Centre, where we had our temporary offices when collaborating with the Geant4 group.


The stairs in the entrance hall lead to a visitors gallery, which allows a great view into the Computer Centre's huge machine hall:




Downstairs, next to the user helpdesk, there is now a small exhibition of historical computer hardware: magnetic tapes, giant floppy disks, clumsy looking equipment:


And there I did spot it again: The black NeXT workstation, the very first web server:


It seems that a web server was expected to be always available right from the beginning: With a red pen, Tim Berners-Lee has written a warning note on the NeXT:


This machine is a server. DO NOT POWER IT DOWN !!


Coming back to more recent times, Sabine told me of a talk she had heard just last week on ATLAS results, and of an ATLAS web page with plots and papers presented in talks during this summer.

I browsed around a bit, and while I do not want to say anything about the physics discussed in the papers, I realized that nearly all of them cite a Geant4 paper on which I am a coauthor. Great, I thought, that should boost the citation statistics - and then I realized that I am a coauthor on a "Topcite 1000+" paper!

Monday, September 13, 2010

This and That

Some things that entered my sphere of thought recently:
  • Dorothy Bishop, Professor of developmental neuropsychology at Oxford, is offering an "Orwellian Prize for Journalistic Misrepresentation" for the article in an English-language national newspaper that most inaccurately reports a piece of academic work. Judgement will be based on a points scoring system, as follows:

    • Factual error in the title: 3 points
    • Factual error in a subtitle: 2 points
    • Factual error in the body of the article: 1 point

    You can find details on the nomination on Prof Bishop's blog.

  • The most recent issue of New Scientist features an article about Internet addiction. We discussed this topic a few months back in my post Addicted, where I argued one should be careful to distinguish between substance abuse and compulsive disorders, and that "addiction" is a rather sloppy expression. The New Scientist article doesn't really say anything new but offers a summary of the present state of the discussion:
    "For almost as long as there's been information technology, there have been arguments over whether it is possible to become addicted to it.

    One definition of behavioural addiction is a recurring compulsion to act in specific ways which may have detrimental impacts on the person's well-being - there are well catalogued examples of people's internet activity fitting that pattern.

    The idea of behavioural addiction is not universally accepted, however. Psychiatry's bible - the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) - prefers the classification "impulse control disorder", differentiating the conditions from physical addictions, such as cocaine or alcohol addiction.

    The question of whether internet addiction should be included as a diagnosed condition in the next edition, DSM-V, is a hot debate right now.

    Some people, such as psychiatrist Jerald Block, based in Portland, Oregon, argue that internet addicts show behaviour consistent with other addictive disorders, such as excessive use, withdrawal and negative social impacts.

    Others, including Ronald Pies, at Tufts University School of Medicine in Boston, Massachusetts, argue there have been insufficient controlled studies of internet addiction to show that withdrawal symptoms are genuinely physiological. They suspect that the negative social effects attributed to excessive internet use may have other underlying causes, such as depression or obsessive compulsive disorder."

  • Perimeter Institute for Theoretical Physics is inviting applications for Postdoctoral Research positions. For more information please visit this website, and good luck :-)

  • Something to first make you laugh and then make you think: What exactly is a doctorate? A graphic representation by Matt Might. By the time you've finished your 3rd postdoc, you might have made it to a few pimples on the body of knowledge ;-) [Thanks to Christine]

  • Just because it's one of the more absurd stories I've read recently: A 75-year-old German psychiatrist attempted to kiss his patient and later claimed it was supposed to be a "therapeutic kiss" meant as "shock therapy." The woman turned away fast enough that the attempt aimed at her lips landed on her cheek instead. Then she sued the doctor for sexual harassment. The psychiatrist had to pay a fine of EUR 3,500.

  • If you don't know how to arrange your marathon training with your 80 hours/week office job, what you need is a treadputer. Zeitgeist!

Wednesday, September 08, 2010

“Most Courageous Postdoc” Prize

Some months back, I got a mini-grant from the Foundational Questions Institute, FQXi. The proposal was to award a prize to the “most courageous postdoc.” The details are now settled and we're ready for nominations! You can find all the details on this website.

The deadline is September 30th. Note that you cannot nominate yourself.

The criterion for nomination is that the candidate shall have demonstrated extraordinary passion for unraveling the secrets of the universe and the big unsolved problems in FQXi's focus areas, and have done so by
  • Dedicating time and effort to developing their own theories despite the great risk this brings and its lack of benefit to one’s career;
  • Following one’s interests in fundamental physics despite the local environment being unsupportive or even outright discouraging;
  • Disagreeing with the consensus despite the likelihood of being ridiculed; and,
  • Publicly demonstrating one’s passion.

The idea for this prize actually goes back to a tongue-in-cheek footnote that I wrote more than two years ago to a blogpost in a state of utter frustration (can't recall exactly what I was frustrated about though):
    “The Sabine Hossenfelder Award recognizes annually the most courageous postdoc in theoretical physics. Courage can be shown by stubbornly working on topics where there hasn't been progress for several decades (with or without outcome), changing fields and starting all over again (with or without success), public political involvement (with or without impact), questioning the consensus, criticising the majority opinion, or disagreeing with established senior researchers. Courage should not be confused with stupidity, in neither category.”

I sincerely believe that science, and theoretical physics in particular, needs young researchers who are a little crazy, who take a risk and do their own thing, even if it looks completely hopeless or insane to everybody else. I hope that this little prize serves to show that their efforts are not wasted, but appreciated.

Thursday, September 02, 2010

In the hospital

Monday morning, I woke up in the hospital with an IV-drip in one arm and a nurse taking my blood pressure on the other arm.

What happened?

Sunday, I was about to fly back to Stockholm. I hadn’t been feeling too great, but then I generally haven’t been feeling great lately. My blood pressure has been at the lower end of healthy since I was a teenager. It runs in the family. People like to tell me low blood pressure is good. I usually ask them to try going to work when they can hardly stand upright, let alone speak.

I would have classically fainted and dropped to the floor, except that the moment my circulatory system decided to shut down all non-essential functions, I was on board an Airbus, seatbelt fastened, tray table securely stowed. In fact, we were already headed for the runway. So there was no dropping. When I could see again through the black clouds, I was lying across several seats. Somebody was pushing an oxygen mask on my face, somebody else was taking my blood pressure. They later told me it read 70 over 30. 150 people had to wait while I was carried back out of the plane. An ambulance brought me to the airport hospital.

Several people poked holes into my arms before they found a vein to put me on an IV drip. They measured my blood sugar; it came out low but still normal. Blood pressure went up some twenty points or so. I was told they’d keep me there for some hours and pump half a liter of isotonic fluid into my blood stream, confident I’d be labeled “fit to fly” after this and able to take the next flight to Stockholm. What happened instead was that my blood pressure hit bottom again. They put me on a glucose drip, back into the ambulance, and brought me to the next hospital, suspecting internal bleeding or pulmonary embolism. I had held on to my hand baggage, but my checked-in bag was meanwhile on the way to Stockholm.

In the hospital, I was handed over to a doctor who took me off the glucose drip and did a few exams. She found nothing of concern, then poked more holes into my arms trying to take blood. Eventually she used a butterfly needle (a tiny needle commonly used for children) and managed to extract some drops. Having done that, she went to get some forms to note down my medical history. The second she left the room, I got sick and my blood pressure plummeted again. They hastily put me back onto the drip, blood pressure down to 62 over 35, body temperature down to 34°C (93°F). “Centralized,” somebody mumbled, shadowy figures in white coats around my bed. An internist pushed electrodes onto my chest to take an EKG. They gave me some injection which, remarkably enough, raised my blood pressure back to 100 over 70 within a minute. The EKG turned out to be normal.

I had to stay for the night with my blood pressure being monitored, not even allowed to go to the restroom without a nurse because they were afraid I might faint. Blood pressure finally stabilized around 90 over 50-something. The blood work came back with some minor aberrations; I was prescribed a stack of mineral pills. They asked me a lot of questions: Has this happened before? Did I not drink enough during the day? Maybe I had eaten something funny? Ever had problems with the thyroid gland? Afraid of flying? No to all of the above.

I am still in the hospital. Over the last few days, they’ve done numerous tests and collected a seemingly endless amount of numbers, notes and graphs in a large folder with my name on it. They checked my heart and lungs and found nothing of concern. I am sharing the room with a woman who is here for hypertension – her blood pressure is more than twice as high as mine.

After 3 days, I asked the nurse if there’s any internet connection available in the building. She stared at me in disbelief. “Internet?” she asked, as if nobody had ever dared ask such an outlandish question before. Luckily I have my BlackBerry with me. Stefan, who came to bring me clothes and sweets, told me the main entrance is cluttered with signs prohibiting cell phone use. Well, I said, I didn’t come in through the main entrance and nobody told me. After 4 days I sneaked out of the hospital with an IV needle in my arm and a device on my chest recording my heart rate, and bought a USB internet stick. (Thanks to Phil for the suggestion!) So here I am again, hitting “mark all as read” on my Google Reader, which announced 1000+ unread items.

The reason I’m telling you this is that last night, listening to my roommate snoring, I decided to put this blog on a break. I feel like I need some time to find equilibrium. As you probably know, I live alone in Stockholm, and of course I’m wondering what might have happened had I not been around people. While it’s a relief the docs didn’t find a serious problem, not knowing why it happened means to me it can happen again. Comments on this blog will remain open, and I encourage you to have a look at our archives, but you might not hear much from me for a while. I hope you understand. I’ll be back.

If the result of yesterday’s test comes out okay the docs say I can go this afternoon. I hope I’ll be able to make it back to Stockholm and find my bag. And that the health insurance will cover…

Thursday, August 26, 2010

Body Worlds

Yesterday, Stefan and I went to see the "Body Worlds" exhibition, which is currently in Offenbach, close to Frankfurt, Germany. Body Worlds is a traveling exhibition that displays human bodies and body parts that have been preserved using a technique called plastination. Basically, it works by washing all bodily fluids and fat out of the tissue with acetone, and then replacing these fluids with silicone. That is to say, the exhibits are not anatomical models but actually real. The method of plastination used for these purposes was invented by Gunther von Hagens, then at the University of Heidelberg. His work there was likely the inspiration for the horror movie "Anatomy," starring Franka Potente, which still causes me the occasional nightmare.

The exhibition itself was absolutely non-nightmarish. It had in fact a high educational value, and at least for me no yuck-factor. Besides that, it also had a missionary theme: documenting and explaining the process of aging, and not only the complexity but also the fragility of the body. Besides many whole-body exhibits in fancy positions - dancing, playing saxophone, jumping over fences, during intercourse (must be 16 or older to see that) - they had all the organs separately, some showing various illnesses and diseases (fatty liver, cancerous uterus, smoker's lung), as well as artificial joints. Some of the organs were cut into small slices or in half, so you could see inside. It is quite amazing, really, to see all the muscles, ligaments, and nerves. What I found most stunning was the capillary system, which retains the shape of the body after plastination (see picture to the left, more here).

It is not allowed to take photos of the exhibits. The ones you see here are from this and that url and there's some more on the website bodyworlds.com. Alternatively, do a Google image search for Body Worlds and get a nice selection.

What I found somewhat annoying about the exhibition is that in all of the full-body exhibits there were necessarily parts missing for better visibility (or possibly because they were just missing? Who knows what these people died from.) To begin with, most of the skin had been removed, but sometimes also one or the other muscle, or this or that ligament. Unfortunately, there was nowhere to find a detailed explanation of which parts had been removed, so I was sometimes left wondering if there shouldn't be another muscle on that leg or another part on that spine. Also, I could have done without the photos of happy 100-year-old men water skiing, proclaiming that happiness is the key to a long life. On the other hand, I learned one or the other thing. For example, I wasn't aware the liver lies so closely below the diaphragm. And did you know that your testicles are doomed to shrink after you've passed your mid-40s? Or, more amusingly, that two centuries ago it was believed sperm is produced in the brain? Because, you see, that's where the soul is located, and how could it be produced elsewhere? (Of course today we're more enlightened and know that the male soul sits in the testicles ;-).)

The bodies that are used for plastination stem from people who donated them during their lifetime by signing the necessary forms. You can indeed donate your own body if you want to be conserved for educational purposes. Presently, more than 10,000 people have signed up, and I suspect that most of them will not be used for exhibitions but rather for anatomy courses. On the other hand, you might become famous post-mortem on Lady Gaga's stage. Apparently, the Lady has expressed interest in a decoration consisting of human bodies. In the exhibition guide, there's a selection of donors summarizing their motivations, which range from a love of science, through some sort of immortality, to admiration of von Hagens' work. The anonymity of the end product's origin, I guess, filters out most narcissistic motivations. As for me, I'm signed up for organ donation, in various countries, and prefer to maximize my educational value during my lifetime.

Monday, August 23, 2010

Testing the foundations of quantum mechanics

If you know one thing about quantum mechanics, it's Born's rule: The probability of a measurement outcome is the absolute square of the wave-function's amplitude. It is the central axiom of quantum mechanics and what makes it quantum. If you have a superposition of states, the total amplitude is the sum of the amplitudes of these states. Taking the square to obtain the probability means you will not only get the square of each single amplitude - which would be the classical result - but also mixed terms. These mixed terms are responsible for the interference in the famous double-slit experiment and yield the well-known pattern with multiple maxima, rather than the two maxima reproducing the two slits that you'd get were the particles classical. (Dr. Quantum shows you what I mean.)

This rule has been implicitly tested countless times, since it enters literally every calculation in which quantum effects are relevant. But it is not usually tested for parameterized deviations the way, say, Einstein's field equations are. Now, however, a group of physicists (from the Institute for Quantum Computing and Perimeter Institute in Waterloo, Canada, the Laboratoire de Nanotechnologie et d'Instrumentation Optique in Troyes, France, and the Institut für Experimentalphysik in Innsbruck, Austria) has tested Born's rule for deviations stemming from higher-order interference, which serves to constrain possible modifications of quantum mechanics. Their results were published in a recent issue of Science.

The short summary is that they haven't found any deviation to a precision of one in a hundred. But their method is really neat and worth spending a paragraph on.

The experimental setup the group used is a triple-slit through which single photons pass. To compute the probability of measuring a photon at a particular location on the detector screen in usual quantum mechanics, you square the sum of the wave-functions originating from each of the three slits. You get several mixed terms, but they are all second order in the wave-function. If Born's rule holds, this allows you to express the probability for the three-slit experiment as a sum of probabilities obtained by leaving only one slit open and by leaving combinations of two slits open. Thus, what the clever experimentalists do is a series of measurements with each single slit open, with each combination of two slits open, and with all three slits open, and then check whether the probabilities add up. And they do, to very good precision.
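This sum rule is easy to check numerically. Below is a minimal sketch (the single-slit amplitudes are made-up illustrative values, not the experiment's data): under Born's rule, the probability for any set of open slits is the absolute square of the summed amplitudes, and the third-order interference term, sometimes called the Sorkin parameter, then cancels exactly. A deviation from zero in this quantity is precisely what the triple-slit experiment bounds.

```python
# Hypothetical single-slit amplitudes at one point on the detector screen
# (arbitrary complex numbers chosen for illustration)
psi = {'A': 0.6 + 0.2j, 'B': -0.3 + 0.5j, 'C': 0.1 - 0.4j}

def prob(slits):
    """Born's rule: P = |sum of amplitudes of the open slits|^2."""
    return abs(sum(psi[s] for s in slits)) ** 2

# Third-order interference: the three-slit probability minus all two-slit
# probabilities plus all one-slit probabilities. Born's rule forces this
# combination to vanish, because only pairwise mixed terms appear.
kappa = (prob('ABC')
         - prob('AB') - prob('AC') - prob('BC')
         + prob('A') + prob('B') + prob('C'))

print(abs(kappa) < 1e-12)  # True: no higher-order interference
```

Algebraically the cancellation is exact for any choice of amplitudes; in floating point it holds up to rounding, which is why the check uses a small tolerance rather than comparing to zero.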

So, there's nothing groundbreaking to report here in terms of novel discoveries, but I very much like the direct test of the foundations of quantum mechanics this experiment constitutes. I think we could use more tests in this direction, and higher precision will come with time.

Friday, August 20, 2010

Book review: You are not a Gadget by Jaron Lanier

You are not a Gadget - A Manifesto
By Jaron Lanier
Knopf (January 12, 2010)

Jaron Lanier is an interdisciplinary computer scientist who doesn't shy away from crossing over into the arts. He could probably be described as a creative intellectual: he is known for his work on virtual reality, less known for his music, and now he has written a book. More details on Lanier's bio are on his website.

Lanier is a man with opinions, and that's basically what his book is about: despite being called "A Manifesto," it is really a collection of opinion pieces. Lanier is a skeptic, concerned about many developments in software and information technology and their impact on human societies. I am very sympathetic to the points he is trying to make. Unfortunately, he doesn't make them well.

Lanier, for example, bemoans the "locked-in" effect, in which a piece of software, despite being far from optimal or even plain annoying, becomes so widespread that at some point it is more or less impossible to replace or change; it simply would be too much effort. That is of course true, but it is hardly a new problem, nor one specific to software. The same problem has hindered and still hinders progress in many other aspects of our lives. Take tax laws, for example. A mess. You want to throw them out and start all over from scratch. Yet: too much effort and resistance. In practice, you fiddle with something here or there. Or, even worse, take norms and standards. Surely it would be less annoying if the world could agree on one paper format or one standard for power outlets. But the effort for such a change would be enormous. That is not to say Lanier isn't making a correct point. It is a good point, and one we should pay more attention to. It's just that he misses the larger societal context and complains about an ancient problem without offering any new insight.
"If you love a medium made of software, there's a danger that you will become entrapped in someone else's recent careless thoughts. Struggle against that."

Another large concern of his is that the present organization of the internet, the spread of easy-to-use templates as well as making money per advertisement hampers creativity.

About the former point: it is of course true that the availability of default websites has decreased expressions of individual design. On the other hand, it's what allowed the vast majority of people to set up a website in the first place, and let me add that I know plenty of people with a PhD who insist they aren't able to understand html or css style sheets. It's a matter of convenience. And in addition, it is actually a great relief that one can generally at least open and read these standardized websites. Lanier is concerned that making use of imperfect software will change your humanity to adapt to the software, instead of the other way round. I can't help but have the impression that this concern is born of observing a specific community of people rather than the average person. In any case, the scientist in me hears the rhetoric and waits for the evidence. Yet, no evidence is to be found in Lanier's book.

"Am I accusing all those hundreds of millions of users of social networking sites of reducing themselves in order to be able to use the services? Well, yes, I am."

Don't people also "reduce themselves" by buying a mass-produced car that comes in one of 5 colors, where the only option to customize it is to put a sticker on the bumper? The vast majority of people on the planet have neither the interest, nor the skills, nor the money to individualize every detail of their average life. The artist might find that sad, but that's reality.

In any case, the latter point is of course a crucial one. You know that I too have frequently warned about the side effects of the now common way of financing online presences via adverts. People often claim the internet is democratic, then they claim this sort of financing per adverts is just capitalism in action. As a matter of fact, the internet is neither democratic, nor is what you're seeing a sensible capitalistic system, simply because people are not paid for their work. They are instead being paid by accidental clicks on banners that pop up on the screens of visitors who might have been looking for something entirely different to begin with. It's a feedback mechanism that one has no reason to expect to lead to any outcome beneficial for our societies.

Again, however, Lanier misses the larger context. He puts forward a concrete proposal for how to allow artists to earn from their work better than is the case today, basically some system of micro-payments. That is all well and nice, but it only addresses part of the problem. What frankly concerns me much more than whether Lanier's musical friends can make a living is that the present organization erodes one of the most essential foundations of democratic societies: journalism. This issue is only mentioned in passing at some point in Lanier's book. More generally, it is well known that some services, especially those that are essential to the foundations of our societies, are better offered as public services than as private ones. As far as I am concerned, the best solution is probably a mixture. I find it particularly disingenuous that Lanier then claims "the only alternative [to some version of the proposal he is advocating] would be to establish some form of socialism."

Lanier also has a proposal for how to improve our financial systems, which I don't feel competent to judge. I can't help but think that again he has missed the relevant point. The problem is not coming up with some proposal for improvement. Everybody I know seems to have some idea for how to improve our financial system; it's just that most of them don't get their ideas printed in books. No, the problem is that the present political and economic system has no mechanism by which such proposals can be considered and tested for viability and promise. The problem lies on a much deeper level.

It goes on like this. Lanier is a computer scientist, all right, and he clearly knows his field, but again and again he fails to put his proposals or arguments into the larger context and contrast them with the realities of politics and social dynamics. For example, he bemoans that the programming language LISP has fallen out of favor, though in his opinion it is essential to realizing some of the proposals he is making. It strikes me as similar to the complaint that we're not all speaking Esperanto.
"Wikipedia, for instance, works on what I call the Oracle illusion, in which knowledge of human authorship of a text is suppressed in order to give the text superhuman validity. Traditional holy books work in precisely the same way and present many of the same problems."

His criticism of the benefits of using the knowledge of large groups, though strongly expressed, remains superficial. In my opinion, he is throwing out the baby with the bathwater by not clearly explaining exactly what he is critical of and why, where the benefits are and what the drawbacks are. It is not very insightful.

To make matters worse, the book is very incoherently written. It is subdivided into 12 chapters that contain vaguely related short subsections on various topics. Ironically, given that Lanier is outspokenly critical of the blogosphere, the whole thing reads more like a collection of blogposts than a book. I am sure that all these little pieces fit perfectly together in Lanier's intellectually creative mind, but I had a hard time seeing a line of thought. Somewhere he elaborates on a research project he is working on with a friend on the relation between olfaction and language. That's certainly interesting, but I can't avoid the impression that Lanier just wrote down whatever crossed his mind. The book finally ends unexpectedly, without even so much as an attempt at drawing a conclusion or summarizing the argument. There are pretty much no references in the book to back up his claims or at least justify his concerns.

That is not to say, though, that the book is uninteresting. See, having spent the money to buy it and the time to read it, I am inclined to find something of value in it now. Lanier touches on many important points, and I hope the book makes people think. However, exactly because I think the theme of Lanier's book is important, it is all the more disappointing that it is so badly argued.