
Thursday, January 31, 2013

Interna

January has been busy, as you can probably tell from the frequency of my posts. Lara and Gloria are now in half-day daycare for 4 hours on weekdays. The transition went fairly well, and I think they like it there. The nanny clearly has more time and patience to play with the kids than I do, and the place is also better suited than our apartment, where computers, books, pens, and other stuff that you don't want in your toddlers' hands are lying in every corner. The nanny is from Spain, so the kids learn some Spanish along the way. They seem to understand a few words, but don't yet speak any.

We have now replaced the baby cribs with larger beds that the kids can get in and out of on their own. This took some getting used to. They wake up during the night considerably more often than before, and sometimes wander around, so recently we haven't been getting as much sleep as we would like. That explains half of my silence. The other big change this month was that, now that the kids are two years old and we have to pay for their flight tickets, we've given up commuting to Stockholm together, and this is the first month of me trying to commute alone. Stefan has support from the babysitter and the grandparents while I'm away, but we're still trying to find the best way to arrange things. It's proved difficult to find a good solution for our issues with non-locality.

I have a case of recurring sinus infection, which puts me in a generally grumpy mood, and the kids have permanently runny noses, for which I partly blame myself and partly the daycare. Besides this, I am in the process of writing a proposal for what the European Research Council calls the "Consolidator Grant," and it's taking up a lot of time I'd rather spend on something else. My review on the minimal length scale has now been published in Living Reviews in Relativity. I have been very impressed by how smooth and well-organized their review and publication process was. Needless to say, every time I now see a paper on the arXiv on a topic covered by the review, I dread the day I'll have to update the thing.

The girls are finally beginning to actually convey information with what they say. They ask for things they are looking for, they say "mit" (with) to tell us what we should take along, they complain if they're hungry and have learned the all-important word "put" (kaputt, broken). We haven't made much progress with the potty training though, unless naming the diaper content counts.

Sunday, January 27, 2013

Misconceptions about the Anthropic Principle

I keep coming across statements about the anthropic principle leaving its mark on physics that strike me as ill-informed, most recently in a book I am presently reading, “The Edge of Physics” by Anil Ananthaswamy:
“The anthropic principle – the idea that our universe has the properties it does because we are here to say so and that if it were any different, we wouldn’t be around commenting on it – infuriates many physicists, including [Marc Davis from UC Berkeley]. It smacks of defeatism, as if we were acknowledging that we could not explain the universe from first principles. It also appears unscientific. For how do you verify the multiverse? Moreover, the anthropic principle is a tautology. “I think this explanation is ridiculous. Anthropic principle… bah,” said Davis. “I’m hoping they are wrong [about the multiverse] and that there is a better explanation.””
The anthropic principle has been employed in physics as a proposed explanation for the values of parameters in our theories. I’m no fan of the anthropic principle because I don’t think it will lead to big insights. But it’s neither useless nor a tautology, nor does it acknowledge that the universe can’t be explained from first principles.
  1. The anthropic principle doesn’t necessarily have something to do with the multiverse.

    The anthropic principle is true regardless of whether there is a multiverse or not, and regardless of what the fundamentally correct explanation for the values of parameters in our theories turns out to be. The reason it is often mentioned in combination with the multiverse is that proponents of the multiverse argue it is the only explanation, and that no further explanation is needed or worth looking for.

  2. The anthropic principle most likely cannot explain the values of all parameters in our theories.

    There are a lot of arguments floating around that go like this: If the value of parameter x was just a little larger or smaller we’d be fucked. The problem with these arguments is that varying only one out of two dozen parameters ignores most possible combinations of parameters. You’d really have to consider modifications of all parameters together to be able to conclude there is only one combination supportive of life, which is however not a presently feasible calculation. And even though this calculation is not feasible, the claim that there is really only one combination of parameters that will create a universe hospitable to life is on shaky ground already, because this paper put forward a universe that seems capable of creating life and yet is entirely different from our own. And Don Page had something to say about this too.

    The anthropic principle might however still work for some parameters if their effect is almost independent of what the other parameters do.

  3. The anthropic principle is trivial, but that doesn’t mean it’s useless.

    Mathematical theorems, lemmas, and corollaries are results of derivations following from assumptions and definitions. They are essentially the assumptions expressed differently: always true, and sometimes trivial. But often they are surprising and far from obvious, though that is inevitably a subjective statement. Complaining that something is trivial is like saying “It’s just sound waves” and referring to everything from engine noise to Mozart.

    And so, while the anthropic principle might strike you as somewhat silly and trivially true, it can be useful, for example to rule out values that certain parameters of our theories could have. The most prominent example is probably the cosmological constant which, if it were too large, wouldn’t allow the formation of structures large enough to support life. This is not an empty conclusion. It’s akin to me seeing you drive to work by car every morning and concluding you must be old enough to have a driver’s license. (You might just be stubbornly disobeying laws, but the universe can’t do that.) Though this probably doesn’t work for all parameters, see 2.

  4. The anthropic principle does not imply a causal relation.

    Though the word “because” suggests so, there’s no causation in the anthropic principle. An everyday example of “because” not implying an actual cause: I know you’re sick because you’ve got a cough and a runny nose. This doesn’t mean the runny nose caused you to be sick. Instead, it was probably some virus. Alas, you can carry a virus without showing symptoms, so it’s not like the virus is the actual “cause” of my knowing. Likewise, that there is somebody here to observe the universe did not cause a life-friendly universe to come into existence. (And the reverse, that a life-friendly universe caused our existence, isn’t the case either, because life-friendly doesn’t mean interested in science, see 3. Besides, it’s not like the life-friendly universe sat somewhere out there and then decided to come into existence to produce some humans.)

  5. The applications of the anthropic principle in physics actually have nothing to do with life.

    As Lee Smolin likes to point out, the mention of “life” in the anthropic principle is entirely superfluous verbal baggage (my words, not his). Physicists don’t usually have a lot of business with the science of self-aware conscious beings. They talk about the formation of large-scale structures or atoms; don’t even expect large molecules. However, talking about “life” is arguably catchier.

  6. The anthropic principle is not a tautology in the rhetorical sense.

    It does not use different words to say the same thing: A universe might be hospitable to life and yet life might not feel like coming to the party, or none of that life might ever ask a why-question. In other words, getting the parameters right is a necessary but not a sufficient condition for the evolution of intelligent life. The rhetorically tautological version would be “Since you are here asking why the universe is hospitable to life, life must have evolved in that universe that now asks why the universe is hospitable to life.” Which you can easily identify as a rhetorical tautology because now it sounds entirely stupid.

  7. It’s not a new or unique application.

    Anthropic-type arguments, based on the observation that there exists somebody in this universe capable of making an observation, are not only used to explain free parameters in our theories. They sometimes appear as “physical” requirements. For example: we assume there are no negative energies because otherwise the vacuum would be unstable and we wouldn’t be here to worry about it. And requirements like locality, separation of scales, and well-defined initial value problems are essentially based on the observation that otherwise we wouldn’t be able to do any science, if there was anybody to do anything at all.

Thursday, January 24, 2013

Hurdles for women in physics

Time Magazine's Person of the Year in 2012 was Barack Obama, the dullest choice they could possibly have made. I would have cast my vote for Malala Yousafzai, who made it onto the list of runners-up. Among the runners-up one could also find particle physicist Fabiola Gianotti ("The Discoverer"), who had the eyes of the world on her when she announced the discovery of the Higgs last year. It was pretty cool, I thought, to find a particle physicist on that list.

Alas, the article, if you read it, is somewhat funny. To begin with, you might get the impression she was selected for heroically fighting a toothache. And then there is this remark:
“Physics is a male-dominated field, and the assumption is that a woman has to overcome hurdles and face down biases that men don’t. But that just isn’t so. Women in physics are familiar with this misconception and acknowledge it mostly with jokes.”
This pissed me off enough to write a letter to the editor. I only learned by coincidence the other day that it appeared in the Jan 21 issue of the US edition. (Needless to say, we get the European edition.) Below is the full comment I wrote and the shortened version that appeared. There are many other things one could have mentioned, but I wanted to keep it brief.
“As a particle physicist, it was exhilarating for me to see Fabiola Gianotti on your list of runners-up, but I was very dismayed by Kluger's statement it is a "misconception" that women in physics face hurdles men don't.

Yes, instances in which I have been mistaken by my male colleagues for the secretary or catering personnel can be "acknowledge[d] mostly with jokes," though these incidents arguably reveal biases and not everybody finds them amusing. But the assertion that women in physics do not "have to overcome hurdles... that men don't" speaks past the reality of academia and is no laughing matter.

In this field the competition for tenure usually plays out in the mid to late thirties, and is accompanied not only by hard work but also frequently by international moves. Men can postpone their family planning until after they have secured positions. Women can't. I am very lucky to live in a country with generous parental leave and family benefits. But I do have female colleagues in other countries who faced severe problems because of unrealistic expectations on their work performance and lack of governmental support while raising small children.

Both genders face the tension between having a family and securing tenure, but the timing is markedly more difficult for women. You have done a great disservice to female physicists by denying this "hurdle" exists.”

Thursday, January 17, 2013

How a particle tells time

One of the first things you learn about quantum mechanics is that particles have a wavelength, and thus a frequency. If the particle is at rest, this frequency is the Compton frequency, which is proportional to the particle’s rest mass. It appears in the wavefunction of the particle at rest as a phase. This basically means the particle oscillates, even if it doesn’t move, with a frequency directly linked to its mass.
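
In formulas, this is nothing more exotic than the standard phase evolution of an energy eigenstate (a textbook relation, not something specific to the experiment discussed below):

```latex
\psi(t) \propto e^{-i\, m c^{2} t/\hbar}\,,
\qquad
f_{\rm C} = \frac{m c^{2}}{h}\,,
```

where m is the rest mass and f_C the Compton frequency.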

The precision of atomic clocks in use today relies on the precise measurement of transition frequencies between energy levels in atoms, which serve as a reference for an oscillator. But via the Compton frequency, the mass of a (stable) particle is also a reference for an oscillator. Can one therefore use a single particle to measure the passing of time?

This is the question Holger Müller and his collaborators from the University of California, Berkeley, have addressed in a neat experiment that was published in a recent issue of Science:
    A Clock Directly Linking Time to a Particle's Mass
    Shau-Yu Lan, Pei-Chen Kuan, Brian Estey, Damon English, Justin M. Brown, Michael A. Hohensee, Holger Müller
    Science, DOI: 10.1126/science.1230767
As you can tell from the title of the article, the answer is yes, one can use a single particle to measure time! They have done it, with the particle in question being a cesium atom, and call it a “Compton clock.” The main difficulty is that the oscillation frequency is very high, far beyond what is directly measurable today. To make it indirectly measurable, they had to cleverly combine two main ingredients, an atomic interferometer and a frequency comb.
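
To get a feeling for just how high, here is a minimal back-of-the-envelope sketch; the constants are standard values, nothing is taken from the paper itself:

```python
# Back-of-the-envelope estimate of the cesium Compton frequency.
# Standard constants; illustrative only, not numbers from the paper.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
u = 1.66053907e-27   # atomic mass unit, kg

m_cs = 132.905 * u   # mass of a cesium-133 atom, kg
f_compton = m_cs * c**2 / h

print(f"Compton frequency of Cs-133: {f_compton:.2e} Hz")
# -> about 3.0e25 Hz; for comparison, optical frequencies are
#    "only" around 1e14 to 1e15 Hz.
```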

The atomic interferometer works as follows. The atom is hit by two laser pulses, one with a frequency a little higher than the laser’s direct output frequency, and one with a frequency a little lower. This splits the wavefunction of the atom. A couple more precisely timed laser pulses are then used to let the wavefunction converge again. It interferes with itself, and the interference pattern can be measured by repeating this process.

The relevant aspect of the atom interferometry here is that the phase accumulated by each part of the wavefunction depends on the output frequency of the laser, on the difference in frequency between the two pulses (tiny in comparison to the output frequency), as well as on the path taken. The path-dependent phase itself depends on the mass of the atom because the two parts of the wavefunction are not at rest relative to each other. The experimentalist can then turn a knob and change the difference between the frequencies of the two pulses until the interference pattern vanishes. If the interference pattern vanishes, one has a fixed relation between the mass of the particle, the output frequency of the laser, and the difference between the pulse frequencies.
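
The mass-dependence enters through the photon recoil. For orientation, here is a rough sketch of the relevant frequency scale, assuming the cesium D2 line at 852 nm (my choice for illustration, not a number quoted from the paper):

```python
import math

# Rough scale of the measurable, mass-dependent frequency in the
# interferometer: the photon recoil frequency. The 852 nm wavelength
# (cesium D2 line) is an assumption for illustration.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
u = 1.66053907e-27       # atomic mass unit, kg

m_cs = 132.905 * u                  # cesium-133 mass, kg
k = 2 * math.pi / 852e-9            # laser wavenumber, 1/m
omega_r = hbar * k**2 / (2 * m_cs)  # recoil frequency, rad/s

print(f"recoil frequency: {omega_r / (2 * math.pi):.0f} Hz")
# -> about 2000 Hz: tiny compared to the Compton frequency, but
#    easily countable, which is what makes the scheme workable.
```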

So far, so good. If one now knows the frequency of the laser, one can measure the particle’s mass by looking at the frequency split of the pulses needed to make the interference vanish. Alas, this is not what one wants for the purpose of a clock, which should not rely on an additional, external measurement.

This is where the frequency comb comes in. In 2005, frequency combs brought a Nobel Prize to John Hall and Theodor Hänsch. Before the invention of the frequency comb, it was not possible to accurately determine absolute frequencies in the optical range. Relative frequencies, yes, but not absolute ones. They’re just too fast to be counted by any electronic means. Frequency combs address this issue by relating very high optical frequencies to considerably lower frequencies, which can then be counted. This is done by pulsing a low frequency signal. If one takes the Fourier transform of such a pulsed signal, one obtains (ideally) a series of peaks – the frequency comb – whose positions are exactly known (these are the higher harmonics of the low frequency signal). If one knows the pulse pattern of the frequency comb, one can then substitute the measurement of a very high frequency with that of a considerably lower frequency. Ingenious!
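
To see why a pulsed signal has a comb-like spectrum, here is a toy numerical illustration (plain numpy, with made-up numbers, unrelated to any actual laser hardware):

```python
import numpy as np

# Toy illustration: the spectrum of a pulsed signal is a comb whose
# teeth sit at integer multiples of the repetition rate.
f_rep = 100.0                    # pulse repetition rate, Hz (toy value)
fs = 100_000.0                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # one second of signal

# Gaussian pulses repeating every 1/f_rep seconds
phase = (t * f_rep) % 1.0
signal = np.exp(-((phase - 0.5) ** 2) / (2 * 0.02**2))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The strong spectral lines: the "teeth" of the comb
teeth = freqs[spectrum > 0.5 * spectrum[1:].max()]
print(teeth[:5])   # -> [  0. 100. 200. 300. 400.]
```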

And more ingenuity: Müller and his collaborators use a frequency comb to self-reference the (tiny) difference between the laser pulses to the output frequency of the laser. The relation between both is then known and given by the pulse pattern of the frequency comb. This way, one gets rid of one parameter and has a direct relation between a measurable frequency and the mass of the particle: It’s a clock!

As far as the precision of this clock is concerned, however, it is orders of magnitude below today’s state-of-the-art atomic clocks. So unless there are truly dramatic improvements to atom interferometry, nobody is going to use the Compton clock in practice any time soon.

But this clock works both ways. It doesn’t only relate a mass to a time (an oscillation frequency), but also the other way round. Thus, one can use the Compton clock to measure mass if one has a time reference. With the "Avogadro Project", an enormously precisely manufactured silicon crystal containing an accurately known number of atoms, one can scale up a single atom to a large number and macroscopic masses. This way the Compton clock might one day be used to define a standard of mass.
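
The scaling-up itself is just counting atoms. A minimal sketch (illustrative numbers; the real Avogadro Project uses isotopically enriched silicon-28 spheres machined to extreme precision):

```python
# Counting atoms turns a single-atom mass into a macroscopic one.
u = 1.66053907e-27       # atomic mass unit, kg
m_si28 = 27.9769 * u     # mass of one silicon-28 atom, kg

n_atoms = 2.15e25        # roughly the atom count of a ~1 kg sphere
print(n_atoms * m_si28)  # -> about 1.0 kg
```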

Monday, January 14, 2013

Soft Science Envy

If I look at a correlation plot in biology, sociology, or psychology, I can understand what they mean by “physics envy.” Physics is the field of precision measurement, the field of hard facts, the field of unambiguous conclusions – at least that’s what it looks like from the outside. The history of neutron lifetime measurements tells a different story, one in which convergence clearly had a social parameter (note that the jumps in measurements over the years are outside the error bars). But in the end the facts won, and isn't the shrinking of error bars just so amazing? That's the side of physics envy that is understandable.

There is the occasional physicist who puts his skills to use in biology, chemistry, neuroscience or the social sciences, economics, sociology and fancy new interdisciplinary mixtures thereof. Needless to say, people working in these fields aren’t always pleased about the physicists stomping on their grass, and more often than not they’re quite unsupportive.

(Comic: SMBC.)
That’s the ugly side of physics envy. It is a great stumbling block for interdisciplinary research. You really need a masochistic gene and a high tolerance for criticism to try.

Physics envy has led many researchers in other fields to develop mathematical models that create the illusion of control and precision – even if the system in question doesn’t allow for such precision. That’s the hazardous side of physics envy.

But after having read Kahneman’s and Ramachandran's books, I have clearly developed a soft science envy!

Kahneman tells the reader throughout his book how he cooked up hypotheses and ways to test them in the blink of an eye. His hypotheses were frequently triggered by reflecting on the shortcomings of his own perceptions, then assuming he’s an average person. He won the Nobel Prize for Economics for the insight that human decisions can be inconsistent. Ramachandran, who made a career of learning about neurobiology from patients with brain damage, literally has the subjects of his papers walking into his office. This is not to belittle the insights that we have gained from their creativity and the benefits that they have brought. But the flipside of physics envy is that not only are the facts hard, the way to them is too.

Tuesday, January 08, 2013

Conform and be funded?

A recent issue of Nature magazine featured a study by Joshua Nicholson and John Ioannidis that looked at the citation counts of principal investigators (PIs) funded by the US National Institutes of Health (NIH).
    Research grants: Conform and be funded
    Joshua M. Nicholson, John P. A. Ioannidis
    Nature 492, 34–36 (06 December 2012) doi:10.1038/492034a
Ioannidis is no unknown; he previously published a paper, "Why Current Publication Practices May Distort Science," that we discussed here, and he is the author of the essay "Why Most Published Research Findings Are False". The Nature article is unfortunately subscription only, so let me briefly summarize what it says before commenting.

Nicholson and Ioannidis analyzed papers published between 2001 and 2012 in the life and health sciences, catalogued by the Scopus database. They looked at those that had received more than 1,000 citations by April 2012 and had an author affiliation in the United States. They found 700 papers and 1,172 authors matching this query.

The NIH invites PIs of funded projects to become members of study sections. The purpose of NIH study sections is to evaluate scientific merit. Nicholson and Ioannidis found that of the 1,172 top-cited authors, only 72 were currently members of study sections, and most of these 72 (as expected) currently received NIH funding. However, these 72 top-cited scientists are merely 0.8% of all section members. Maybe more insightful is that they further randomly selected 200 of the top-cited papers and excluded those with authors in a study section. Of the remaining top-cited authors, only 40% are currently receiving NIH funding.

In a nutshell, this is to say that the majority of authors of research articles in the life and health sciences that were top-cited within the last decade do not currently receive NIH funding.

That's as far as the facts are concerned. Now let's see how Nicholson and Ioannidis interpret this finding and what they conclude. In the beginning of the article, they are careful to point out that scientific success is difficult to measure and the citation count should be regarded with caution:
    "The influence of scientific work is difficult to measure, and one might have to wait a long time to understand it. One proxy measurement is the number of citations that scientific publications receive. Using citation metrics to appraise scientists and their work has many pitfalls... However, one uncontestable fact is that highly cited papers (and thus their authors) have had a major influence, for whatever reason, on the evolution of scientific debate and on the practice of science."
However, towards the end of the paper they write:
    "The mission of the NIH is to support the best scientists, regardless of whether they are young, old or in industry... Such innovative thinkers should not have so much trouble obtaining funding as principal investigators. One cannot assume that investigators who have authored highly cited papers will continue to do equally influential work in the future. However, a record of excellence may be the best predictor of future quality, and it would seem appropriate to give these scientists the opportunity of funding their projects."
Note how authoring a highly cited paper has now become synonymous with being an "innovative thinker" and "may be the best predictor of future quality". In fact, they go even farther than that by arguing that all authors of highly cited papers should have their projects NIH funded (apparently regardless of what the project is):
    "Funding all scientists who are key authors of unrefuted papers that have 1,000 or more citations would be a negligible amount in the big picture of the NIH budget, simply because there are very few such people. This could foster further important discoveries that would otherwise remain unfunded in the current system."
I find the above closing paragraph of the article simply stunning. They seriously argue that something must be wrong with NIH funding -- according to their elaboration it's a "networked system" in which "exceptionally creative ideas may have difficulty surviving" -- because the NIH does not automatically fund projects of authors whose papers gathered more than 1,000 citations within the last decade.

Now, I know nothing about funding problems in the life sciences. Maybe they have a good reason to hold a grudge against NIH peer review practice. Be that as it may, the facts simply do not support their arguments. I am tempted to say it actually speaks in favor of the NIH that they do not pay so much attention to the citation count because, as the authors write themselves, it's a questionable measure: It measures not only innovative thinking, but also fashions and mere usefulness (reviews and illustrative diagrams tend to gather lots of citations); it moreover picks up social dynamics, the popularity of the authors, and the amount of secondary work that is created, irrespective of whether that work is particularly insightful.

Many top-cited works are created because somebody was fast enough to jump onto a topic about to take off. Is that a sign of not being "conform", as the title of the article suggests? I am trying to imagine somebody arguing that all top-cited physicists should get their projects funded without peer review. And then trying to publish that as an essay in Nature.

Friday, January 04, 2013

Gravitational bar detectors set limits to Planck-scale physics - Really?

Contains 10^-31% juice.
Three weeks ago, Nature Physics published, to my surprise, another paper on quantum gravity phenomenology, by Marin et al. The appearance of the word “macroscopic” in the title should be a warning sign.

As we discussed previously, there are recurring attempts in the literature on quantum gravity phenomenology to amplify normally tiny and unobservable effects by using massive systems. This is tempting because in macroscopic terms the Planck mass is 10^-5 g and easy to reach. The problem with this attempt is that such a scaling-up of quantum gravitational effects with the total mass of a system isn't only implausible as an amplification, it is known to be wrong. The next two paragraphs contain technical details; you can skip them if you want.

The reason this amplification for massive systems appears in the literature is that such a scaling is what you, naively, get in approaches with non-linear Lorentz-transformations on momentum space that have been motivated by quantum gravity. If Lorentz-transformations act non-linearly, the normal, linear sum of momenta is no longer invariant under Lorentz-transformations and thus does not constitute a suitable total momentum for objects composed of many constituents.

It is possible to introduce a modified sum, and thus total momentum, that is invariant. But this total momentum receives a correction term that grows faster than the leading order term with the number of constituents. The correction term is suppressed by the Planck mass, but if the number of constituents is large enough, the additional term will become larger than the (normal) leading order term. This would mean that momenta of macroscopic objects would not add linearly, in conflict with what we observe. This issue has been called the “soccer ball problem”; accepting it is not an option. Either this model is just wrong, or, as most people working on it believe, multi-particle states are subtle and the correction terms stay small for reasons that are not yet well understood. To get rid of these terms, a common ad-hoc assumption is to also scale the Planck mass with the number of constituents so that the correction terms remain small. Be that as it may, it's not something that makes sense to use for “observable predictions”.
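
Schematically (this is shorthand of mine, not the formula of any specific model): if the momenta of two particles combine non-linearly as p ⊕ q ≈ p + q + pq/m_Pl, then for N constituents with typical momentum p the total momentum picks up a term growing like N²:

```latex
P_{\rm total} \approx N p + \frac{N(N-1)}{2}\,\frac{p^{2}}{m_{\rm Pl}}\,,
\qquad
\frac{\text{correction}}{\text{leading term}} \sim \frac{N p}{m_{\rm Pl}}\,,
```

so no matter how small the typical momentum p is, the correction eventually dominates for large enough N. That is the soccer-ball problem in one line.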

Earlier last year, Nature published a paper in which the questionable scaling was used to make “predictions” for massive quantum oscillators. Since this prediction is not based on a sound model, it is very implausible that anything like this will be observed.

The authors of the new paper now propose to precisely measure the ground state energy of a gravitational wave detector, AURIGA. In theories with a modified commutation relation between position and momentum operators, this energy receives correction terms. Alas, such modified commutation relations either break Lorentz-invariance, in which case they are very tightly constrained already and nothing interesting is to be found there, or Lorentz-invariance is deformed, which leads to the necessity of modifying the addition law, and we're back to the soccer-ball problem.
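
For concreteness, the generic form of such a modified commutation relation, in a parameterization common in this literature (with β a dimensionless number, presumed of order one; not necessarily the exact form used in the paper), is

```latex
[\hat{x}, \hat{p}] = i\hbar \left( 1 + \beta\,\frac{\hat{p}^{2}}{m_{\rm Pl}^{2} c^{2}} \right),
```

which shifts the energy levels of any quantum system, including the ground state of an oscillator mode, by corrections of relative order β⟨p²⟩/(m_Pl² c²).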

So you might suspect that the new paper by Marin et al suffers from a similar problem as the previous one. And you'd be wrong. It's much better than that.

The authors explicitly acknowledge the necessity of understanding multi-particle states in the models that they aim to test, and present their proposal as a method to resolve a theoretical impasse. And while they talk about very massive objects indeed (the detector bars have a mass of about 10^5 kg), they do not scale up the effect with the mass (see eq 4). Needless to say, this means that the effect they get is incredibly tiny, about 33 orders of magnitude away from where you would expect quantum gravitational effects to become relevant. They modestly write: “Our upper limit... is still far from forbidding new physics at the Planck scale.”

Here's the amazing thing. For all I can tell, not knowing much about the AURIGA detector, the paper is perfectly plausible and the constraint indeed makes sense. I have nothing to complain about. In fact, they even cite my review, in which I explained the problem with massive systems.

The only catch is of course that the limit they obtain really isn't much of a limit. If Nature Physics were consistent in their publication decisions, they should now go on and publish all limits on Planck scale physics that are less than 34 orders of magnitude away from being tested. I am very much looking forward to this. There are literally hundreds of papers that compute corrections due to modified commutation relations for all sorts of quantum mechanics problems. I should know, because they're all listed and cited in my review. Expect an exponential growth of papers on the topic. (I am already dreading the day I have to update my review.) Few of them ever bother to put in the numbers and look for constraints, because rough estimates show that they're far, far away from being able to test Planck scale effects.

The best constraint on these types of models is, needless to say, my own, which is a stunning 56 orders of magnitude better than the one published in Nature.

So it seems that for once I have nothing to complain about. It's a great paper and it's great that it was published in Nature Physics. Now I encourage you all to compute Planck scale corrections to your favorite quantum mechanics problem by adding an additional term to the commutation relation, and to submit your results to Nature. How about the g-2, or the Casimir effect? Oh, and don't forget that somebody should think about the soccer-ball problem...

Tuesday, January 01, 2013

Private Funding for Science – A Good Idea?

Two years ago Warren Buffett asked the community of the super-rich to make a “Giving Pledge”: to commit to donating half of their money to charity. His effort made headlines, and some fellow billionaires joined Buffett’s pledge, among others Bill Gates, George Lucas and Mark Zuckerberg.

The wealthy Europeans however have remained skeptical, for good reasons. Money brings influence – influence that can conflict with democratic decisions, a fact that Europeans seem to be more acutely aware of than Americans. The German Peter Krämer, who I guess counts as rich though not as super-rich, said about Buffett’s pledge:
    “In a democratic nation, one cannot allow billionaires to decide as they please which way donations are used. It is the duty of the government, and thus in the end that of the citizens, to make the right decisions.” [Source]
Instead, Krämer argues that taxes should be raised for the upper class. Since nobody is listening to his wish of being taxed, he launched his own charitable project “Schools for Africa.”

The NYT last month raised the question: “[C]an charity efficiently and fairly take the place of government in important areas? Or does the power of wealthy patrons let them set funding priorities in the face of government cutbacks?” In the replies, Chrystia Freeland from Thomson Reuters relates how a wealthy American philanthropist coined the term “self-tax” for charitable donations, and she gets to the heart of the problem:
    “From the point of view of the person writing the check, the appeal of the self-tax is self-evident: you get to choose where your money goes and you get the kudos for contributing it.

    But for society as a whole, the self-tax is dangerous. For one thing, someone needs to pay for a lot of unglamourous but essential services, like roads and bank regulation, which are rarely paid for by private charity.

    Even more crucially, the self-tax is at odds with a fundamental democratic principle -- the idea that we raise money collectively and then, as a society, collectively choose how we will spend it.”
The same discussion must be had about private funding of science.

Basic research, with its dramatically high failure rate, is for the most part an “unglamorous” brain exercise whose purpose as well as appeal is difficult to communicate. Results can take centuries to even be recognized as results. The vast majority of researchers and research findings will not even make a footnote in the history of science. Basic research rarely makes sexy headlines, and if it does, it's because somebody misspelled "hadron". All that makes it an essential, yet unlikely, target of private donations.

Even Jeffrey Sachs, after some trial and error, came around to realize that raw capitalism left to its own devices may fail people and societal goals. Basic investments like infrastructure, education, and basic research are tax-funded because they're in the category where the market works very badly, where pay-offs are too far into the future for tangible profits.

The solution to this shortcoming of capitalism cannot be to delegate decisions to the club of billionaires and hope they will be wise and well-meaning. Money is not a good. It’s a virtual tool to direct the investment of real resources: labor, energy, time. The central question is not whose money it is, but how resources are best put to use.

We previously discussed a specific type of private funding of science: crowdfunding. The problem with crowdfunding is that chances of funding depend primarily on the skilled presentation of a project, and not on its potential scientific relevance.

A recent article in Time Magazine, “Crowdfunding a Cure” (subscription only), reported on a trend in the United States in which online services allow patients and their relatives to raise money to pay for medical treatments, organ donations, or surgeries. One obvious problem with this approach is fraud. (If you think nobody would possibly want to fake cancer, think twice and read this.) What bothers me even more is the same issue as with the crowdfunding of science: You better be popular and good at social networking if you want to raise enough money for a new kidney. Last week’s issue of Time Magazine published a reader’s comment from Claes Molin, Sweden. This is how crowdfunding medical treatments looks from the Scandinavian perspective:
    “It is moving to read about the altruism displayed by crowdfunding for medical procedures, and I don’t doubt the sincerity of the donors. But the steps described to raise money, including displaying personal details for strangers to see and remembering to say “thank you,” sound a lot like being forced to beg. I understand that values differ, but government-funded health care would let people keep their dignity, along with their peace of mind, in the face of life-threatening disease.”
A thesis project isn’t as serious as a life-threatening disease, but the root of the problem with either kind of crowdfunding is the same. Crowdfunding is neither an efficient nor a fair way to distribute money, and thus the resources that follow it. It is a simple way, a presently popular way, and a last resort for those who have been failed by their government. But researchers shouldn’t be forced to waste time on marketing, just as patients shouldn’t be forced to waste time on illustrating their suffering, and in neither case should success depend on the popularity of the presentation.

Be that as it may, crowdfunding is and will most likely remain a drop in the drying lake of science funding. I strongly doubt it has the potential to significantly change the direction of scientific research; there just isn’t enough money to go round in the crowd. Paying attention to private funding by wealthy individuals is much more pressing.

Wealthy donors often drive their own agenda. This carries a high risk that some parts of research, the “unglamorous” but essential parts, simply do not receive attention, and that researchers’ interests are systematically skewed to the disadvantage of scientific progress.

The German association of science foundations (“Deutscher Stifterverband für die Wissenschaft”) is, loosely speaking, a head organization for private donors to science that manages funds. (Note that the German use of the word “science” encompasses the natural and social sciences as well as the humanities and mathematics.)

I once spent a quite depressing hour browsing through the full list of, in total, 560 foundations that they have to date (this includes foundations exclusively for scholarships and prizes). 56 of them are listed under natural sciences and engineering. There isn’t a single one remotely related to quantum gravity or physics beyond the standard model. The two that come closest are the Andrejewski Foundation, which hands out a total of EUR 9000 per year to invite lecturers on topics relating math and physics, and the Schmidt Foundation for basic research in the natural sciences in general, which however has an even smaller budget. (Interestingly, their fund is distributed by the German Research Foundation and, so I assume, subject to the standard peer review.)

Then what do people donate to in the natural sciences? Most donors, it seems, donate to very specific topics that are closely related to their own interest. Applications of steel for example. Railroad development. The improvement of libraries at technical universities. The scientific cooperation between Hungary and Germany. And so on.

So much for the vision of the wealthy. To be fair, however, the large foundations are not to be found in this list; they do their own management. And there are indeed the occasional billionaires with an interest in basic research in physics, such as Kavli, Lazaridis, Tschira, and Templeton. And, more recently, Yuri Milner with his sur-prizes.

If you work like me in a field that seems constantly underfunded, where you see several hundred applications for two-year positions and people uproot families every other year to stay in academia, you are of course grateful to anybody who eases financial pressures.

But what price is the scientific community paying?

Money sets incentives and affects researchers’ scientific interests by offering funding, jobs, or rewards. The recurring debate over the influence of the Templeton Foundation touches on this tension. And what effect will Milner’s prizes have on the coming generation of scientists? We have a lot to lose in this game if we allow the vanity of wealthy individuals to influence what research is conducted tomorrow.

There is another problem with private funding, which is lack of financial stability. One of the main functions of governmental funding of basic research is its sustained, continuous availability and reliability. High quality research builds on educational and technological infrastructure and expertise. It withers away if funding runs dry, and once people have moved elsewhere or to other occupations, rebuilding this infrastructure and attracting bright people is difficult and costly. Private donations are ill-suited to address this issue. A recent Nature Editorial “Haste not Speed” comments on the problem of stability with US funding in particular:
    “[W]hen it comes to funding science, predictability is more of a virtue than speed, and stability better than surprise.”
All this is not to say that I disapprove of private funding. But as always, one has to watch out for unwanted side-effects. So here’s my summary of side-effects:
  • Interests of wealthy individuals can affect research directions leading to an inefficient use of resources, leaving essential areas out of consideration. Keep in mind that the relevant question is not whose money it is, but how it is best used to direct investment of resources into an endeavor, science, with the aim of serving our societies.
  • When it comes to delicate questions like which scientific project is most promising, somebody’s personal interest or experience is not a good basis for decisions. Short-circuiting peer review saves time and effort in the short run, but individual opinion is unlikely to lead to scientifically more desirable outcomes.
  • Eyeing and relying on private donations is tempting for governments and institutional boards, especially when times are rough. This slope can be slippery and lead to a situation where scientists are expected to “beg for money,” which is not a good use of their time and skills, and unlikely to result in fair and useful funding schemes.
  • The volume of private funding and the interests of donors tend to be unstable, which makes it particularly ill-suited for areas like basic research where expertise needs sustained financial commitment.
So what is a researcher to do? If somebody offered to fund my project I probably wouldn’t say no: Obviously, I am convinced of the relevance of my own research! Nor would I expect anybody else to say no.

But whenever the situation calls for it, scientists should insist on standard quality control and peer review, and discourage funding schemes that circumvent input from the scientific community. Otherwise we’re passively agreeing to waste collective effort. The standard funding scheme is taxation channeled to funding agencies. The next best thing is donations to existing funding agencies or established institutions, not bound to a specific purpose. Private foundations and their review processes are not necessarily bad, but should be treated carefully, especially when they are more opaque than transparent. And crowdfunding, hip as it sounds, will not work for the unglamorous, dry, incremental investigations that form the backbone of basic research.