Pages

Tuesday, March 12, 2013

Interna

Lara with her new glasses.
When you last heard from Lara and Gloria, they could utter only a few single words. Within a couple of weeks they have transitioned to speaking full sentences, answering questions with "yes" and "no", and expressing themselves very clearly. "Jacke an, Bagger gucke" (Jacket on, watch digger), they might say when they want to go for a walk. They still refer to each other as Gaakie and Gookie though. And they are struggling with German grammar, especially with finding the right articles.

Lara now has glasses that are meant to help correct her squinting. She wears them without complaint. It probably helps her acceptance that I too wear glasses.

The half-day daycare solution is working reasonably well, except that it's prohibitively expensive. The nanny has taught the kids to drink from a cup, to wash their hands, to paint and to jump. I'm sure our downstairs neighbors are as excited about the jumping as the kids. My commuting to Stockholm is not working quite so well. It leaves all of us terribly exhausted and is a huge waste of time, not to mention money. The time that I gain by having the kids in daycare is mostly spent on catching up on life's overhead, paperwork, the household, piles of unread papers and unanswered emails that wait for me upon return.

That having been said, I have a bunch of trips coming up. On March 15 I'm in Bergen giving a seminar, apparently on the topic "Siste nytt om kvantegravitasjon" (Latest news on quantum gravity). On April 12 I'm in Reykjavik. I haven't been able to find anything resembling a seminar schedule on the department website, but it's the same seminar as in Bergen. In May George and I are running the previously mentioned Workshop for Science Writers in Stockholm, and at the end of May I'll be attending a workshop on "Quantum Gravity in Perspective" in Munich. I have some more trips coming up, but plans haven't proceeded further than that. If you're located in any of these places and feel like meeting up, send me a note.

Besides this, I've been told that the current issue of the Finnish magazine Tähdet ja avaruus ("Stars and Space") has an article by Laura Koponen about quantum gravity, featuring Renate Loll, Robert Brandenberger, and me. It's in Finnish so I have no clue what it says, but the photos look nice. Though... something about the photo of me didn't feel quite right, and after some forehead frowning it occurred to me that the NorthFace logo on my shirt fell victim to Finnish photoshopping. I actually like it better this way; I prefer my clothes without logos if possible. In any case, should you by any chance speak Finnish and have read the article, let me know what you think.


Friday, March 08, 2013

Upcoming Science Writers Workshop at Nordita

Recently, I've seen and heard a lot of talk about the relevance of science communication. Of course I totally believe it's relevant. I also totally believe Elvis was right asking for a little less conversation and a little more action. So George Musser and I decided to run a workshop that actually communicates science: physics in particular, astrophysics and cosmology specifically.

Our "Workshop for Science Writers: Astrophysics and Cosmology" will take place May 27-29, 2013, in Stockholm. It is hosted and mainly funded by Nordita, and co-funded by the Swedish Research Council, Vetenskapsrådet. All the relevant information is on our website:
The organization is well under way, and we have meanwhile assembled a great list of lecturers, whom we will bring together with a selection of excellent science writers. The details of the schedule aren't settled yet, but we are planning lectures focused on recent developments and on running and upcoming experiments, followed by question and answer sessions. I am very much looking forward to this workshop; I am not myself an expert in the area and expect to learn a great deal.

Space for this meeting is limited, so we will select participants from among those who register online. The application deadline is March 31st. So if this sounds interesting to you, either as a physicist or as a science writer, you can fill in this application form.

This isn't the typical workshop that I normally organize. It's somewhat of a challenge for me to figure out the needs of science writers. George's suggestions have been invaluable while I've mostly taken care of the local issues. We're still in the midst of preparation though. I'll keep you updated on how it's going and you can expect some coverage of the event on this blog.

Wednesday, March 06, 2013

23 and Me

This is the century in which personal DNA sequencing became affordable. And so it was unavoidable that curiosity would finally have me sign up at 23andMe, spit in a plastic tube, and see what's in my genes. Primarily, I just wanted to know how it works. So here's how it works, for those of you who share my curiosity and are thinking of having a look at your genetic information too.

How does it work?

First thing you do is order a spit kit. It contains a plastic tube with some preservative and exact instructions on how to send it back to the lab. 23andMe is located in California. They ship outside the US, but not to all countries; you can find a full list here. The spit kit presently costs US$99. To this you have to add the shipping and customs cost for a "human sample," which comes to US$79.95.

I ordered the spit kit on January 4th. It was shipped January 10th and arrived in Germany within a few days. They ask for quite an amount of saliva, so it's not really done with "just spitting." It took me half an hour or so to fill the tube up to the mark.

There's a number on the spit kit that you have to register on the website. For this you have to set up an account if you haven't already done so. Then you close the tube and seal it into a plastic bag with a biohazard logo, which goes into a padded envelope. The spit kit comes with customs forms that have to be filled in. (If you live in the US, the procedure is easier.) To send it back to the lab, you have to drop off the envelope at a DHL Express station. So if you're thinking of doing this, you might want to check where the closest one to your place is.

On January 18, I received an email saying the sample had arrived in the lab. They tell you the analysis takes on average 6 weeks. On March 4th, after exactly two months, I got the results. It should be said that they don't actually sequence your whole DNA. They look for about a million SNPs that are known or suspected to be interesting for one reason or another.

What do you get?

First thing you see when you log in to view your results is the question whether you want to opt out of receiving health information. If you do, you only get information about your genetic ancestry.

Once logged in, you can browse the raw data if you like; this will give you a long list with the names of SNPs, their positions, and your genotype. For an average user like me, who doesn't know terribly much about genetics, this isn't very useful though. What's more useful is the summary that tells you what's known about your genotypes, what this means, and how reliable this information is.
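For the curious, here is a rough sketch of what browsing such a raw-data list might look like in code. The file layout (tab-separated rsid, chromosome, position, genotype, with `#` comment lines) and the example entries are illustrative assumptions, not taken from my actual export:

```python
import csv
import io

# Hypothetical snippet of a raw-data export: comment lines start with '#',
# data lines are tab-separated with rsid, chromosome, position, genotype.
raw = io.StringIO(
    "# example raw data, format assumed for illustration\n"
    "rs4988235\t2\t136608646\tAA\n"
    "rs1815739\t11\t66328095\tCT\n"
)

# Index the SNPs by their rsid, skipping comment lines.
snps = {}
for rsid, chrom, pos, genotype in csv.reader(
        (line for line in raw if not line.startswith("#")), delimiter="\t"):
    snps[rsid] = {"chr": chrom, "pos": int(pos), "genotype": genotype}

print(snps["rs4988235"]["genotype"])  # AA
```

With the list in a dictionary like this, looking up what is known about a particular SNP name from the literature becomes a one-line lookup.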

In the "Health" menu, you have the categories "Disease Risk," "Carrier Status," "Drug Response" and "Traits." Disease risk and drug response are self-explanatory. Carrier status tells you whether you carry any known mutations responsible for heritable genetic diseases (which you might not necessarily develop yourself but could pass on to your kids). Disease risks come as a percentage likelihood of developing some disease, and they tell you whether your risk is higher or lower than average. In addition, the results are labelled with stars telling you roughly how reliable the conclusion from existing research is. Drug response gives you a list of drugs you are likely to respond to more or less than average, which is valuable medical information.

The first three categories in the "Health" menu contain more details than I'm comfortable sharing publicly, so let me instead show you a screenshot of the "Traits" list, which you could summarize as fun facts:


Blue eyes, curly hair, and, no, I don't use deodorant. I've always assumed the rest of the world is just somewhat weird when it comes to their armpits.

Now let's look at the ancestry, which you see in the screenshot in the left menu. The "relative finder" isn't working yet; it says they're still processing my data. As far as I know I haven't lost any relatives, so I'm not expecting to find many. The ancestry composition tells you where your genes came from about 500 years ago; it looks like this:



So, I'm European, but then you already knew that. From what I know of my family, I'd have expected more East European and less North European though; I'm somewhat surprised about this. Who knows what my ancestors have been up to.

And then you can trace your maternal and paternal line. The maternal line comes down through mitochondrial DNA which is exclusively inherited from the mother. Allegedly, if you look back long enough, we all go back to the same woman, referred to as Mitochondrial Eve. But there have been a few mutations since and the line has split, which allows some localization. 23andMe lists your haplogroup and shows its estimated distribution about 500 years ago:
Again, it looks more Nordic than I'd have expected.

The paternal line is traced via the Y-chromosome. So I'll have to convince a male relative to spit for this information. I think I know what my younger brother will get as a birthday present ;o)

The website

The website is very functional, readable, and works well. What I appreciate very much is that they don't just give you a likely correlation between your genotype and some trait, but, if you click on an item, you get a list of scientific papers and a short summary of the research status. So you don't have to believe what they tell you but can make up your own mind.

You can also, if you find the time, fill out a few dozen surveys that they use to find cross-correlations between what you report and your genetic information. Participation is entirely voluntary. They've found some links this way; e.g., the "curly hair" SNP that you see in the first image (the one that appears with the 23andMe logo) is such a case. So you can actively contribute to research in the area, which I find a nice twist.

Taken together, I'd say it's worth the money. I had previously toyed with the idea of signing up with 23andMe, but before January 2013 you had to get a subscription for the webpage in addition to the cost of the sequencing and shipping.

It is, by the way, entirely coincidental that my favicon looks pretty much like the 23andMe logo. I've used this icon since 1997, I believe; it's supposed to be a mixture of an x and a lightcone.

Thursday, February 28, 2013

The simulation hypothesis and other things I don’t believe

Some years ago at SciFoo I sat through a session by Nick Bostrom, director of the Future of Humanity Institute, who elaborated on the risk that we live in a computer simulation and somebody might pull the plug, thereby deliberately or accidentally erasing all of mankind.

My mind keeps wandering back to Bostrom’s session. You might think that discussing the probability of human extinction due to war, disease or accident is a likely cause of insomnia. The simulation hypothesis in particular is the stuff that dreams and nightmares are made of - a modern religion with an omnipotent programmer. In this light, it is not so surprising that the simulation hypothesis is popular on the internet, though Keanu Reeves clearly had a role in this popularity, which now gives me an excuse to decorate my blog with his photo.

But while I do sometimes get headaches over questions concerning the nature of reality, the simulation hypothesis is not among the things that keep me up at night (neither is Keanu Reeves, thanks for asking). After some soul searching I realized that I don't believe in the simulation hypothesis for the same reason I don't believe in alien abductions. Before science fiction literature and its alien characters became popular, there was no such thing as alien abduction. Instead, people commonly thought they were possessed by demons. It is believed today that sleep paralysis is a likely origin of such hallucinations and out-of-body experiences, an interesting topic in its own right, but the point here is that popular culture creates hypotheses, and present culture is a collective limit to our imagination.

People today ponder the idea that reality is a computer simulation in the same way that post-Newtonian intellectuals thought of the universe as a clockwork. The clockwork universe theory seems bizarre today, now that we know many things that Newtonian mechanics cannot describe. But then people used to wear strange wigs and women stood around in dresses barely able to walk, let alone breathe, so what did they know. And chances are, 200 years from now the simulation hypothesis will seem equally bizarre as the idea to transfer fat from the butt to the lips or take notes by rubbing graphite on paper.

A more scientific way to phrase this is that the simulation hypothesis creates a coincidence problem, much like the coincidence problem for the cosmological constant. For the cosmological constant the coincidence problem is this: Throughout the expansion of the universe, matter dilutes and the constant stays constant. Why do we just happen to live in a period when both have about the same value? For the simulation hypothesis the coincidence problem is this: Why do we just happen to live in a period where we discover the very means by which the universe is run?
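For reference, the cosmological side of this analogy can be stated in one line of standard cosmology (not specific to this argument): the matter density dilutes with the scale factor $a(t)$ while the cosmological constant contributes a fixed energy density,

```latex
% Matter dilutes as the universe expands; the constant stays constant:
\rho_{\text{matter}} \propto a(t)^{-3}, \qquad
\rho_\Lambda = \frac{\Lambda c^4}{8\pi G} = \text{const},
% yet today, after the ratio has changed by orders of magnitude,
\rho_{\text{matter}} \sim \rho_\Lambda .
```

That the two densities happen to be comparable in the present epoch, despite evolving so differently, is the coincidence problem referred to above.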

To me, it’s too much of a coincidence to be plausible. I will put this down as a corollary of the Principle of Finite Imagination, “Just because humans do not or cannot imagine something doesn’t mean it does not or cannot exist.” Corollary: “If humans put forward a hypothesis based on something they have just learned to imagine, it is most likely a cultural artifact and not of fundamental relevance.” Though the possibility exists that present-day human imagination is the pinnacle of scientific insight, the wish to be special vastly amplifies belief in this possibility.

That having been said, another way to approach the question is to ask for scientific evidence of the simulation hypothesis. There has been some work on this, and occasionally it appears on the arxiv, such as this paper last year which studied the possibility that The Simulator runs state-of-the-art lattice QCD. I find it peripherally interesting and applaud the authors for applying scientific thought to vagueness (for other attempts at this, check their reference list). Alas, the scenario that Bostrom has in mind is infinitely meaner than theirs. As he explains in this paper, to save on computational power only that part of reality is simulated that is currently being observed:
“In order to get a realistic simulation of human experience, much less [than simulating the entire universe] is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.”
So you’d never observe any effects of finite lattice spacing because whenever you look all symmetries are restored. Wicked. It also creates other scientific problems.

To begin with, unless you want to populate the simulation by hand, you need a process by which self-awareness is created out of simpler bits. And to prevent self-aware beings from noticing the simulation’s limits, you then need a monitoring program that identifies when the self-aware parts attempt to make an observation, and exactly which observation. Then you need to provide them with this observation, so that it is the same as they would have gotten had you run the full simulation. This might work fine in some cases, say, vacuum fluctuations, because nobody really cares what a vacuum fluctuation does when you’re not looking. If you have a complex system, however, reducing the complexity systematically and then blowing it back up is difficult if not impossible.

Take a system that’s still fairly simple, like a galaxy. If nobody is pointing a telescope at it, you don’t want to bother with its time evolution. But then how do you make sure that observations at different times are consistent? And then there’s the possibility that somewhere in the galaxy that humans weren’t observing, intelligent life developed that would one day land on planet Earth. If your simulation by design doesn’t take into account events like this, it’s strangely anthropocentric. It also raises the question of why to bother with 7 billion people to begin with. Would not an island do, with the rest of us popping in and out of existence to amuse the islanders? This reminds me, I have to book a flight to Iceland.

To avoid these problems, The Simulator would use a much simpler method: deter observations that might test the limits, much like it is difficult to reach the boundary of Dark City. And suddenly it makes sense, doesn’t it? All the recent budget cuts to research funding, even in areas like theoretical physics, possibly the most cost-efficient insight engine, running on little more than graphite rubbed on paper. It’s all to deter us from discovering the boundaries of our simulation. Now if saying hello to the programmer who runs the simulation we live in isn’t an argument to support basic research, then I don’t know what is. I’ll leave you with this thought and book my flight before I pop out of existence again.

Saturday, February 23, 2013

Book review: "The Theoretical Minimum" by Susskind and Hrabovsky

The Theoretical Minimum: What You Need to Know to Start Doing Physics
By Leonard Susskind, George Hrabovsky
Basic Books (January 29, 2013)

Susskind made his lecture notes into a book and did a great job. His book is explicitly not aimed at students but at everybody with an interest in physics who wants to expand their toolkit and start speaking the language of physicists.

The book primarily covers classical mechanics: momentum and forces, energy and potentials, up to the principle of least action, Hamiltonian mechanics and Poisson brackets. In content it is very similar to the lecture notes that I learned from; it might also remind you of Goldstein's classic book on classical mechanics. However, what's special about Susskind's book is that he introduces along the way all the mathematical concepts that are needed, from vectors and functions through integration and differentiation. The book is thus very self-contained and yet really brief and to the point, which is quite an achievement.

It seems pretty obvious that there will be a sequel to this book that continues this educational effort.

I appreciate this book very much. It would have been dramatically useful for me when I was a teenager, because there is a gap in the physics literature between the high school level and the level aimed at students, a gap this book can bridge. However, if you think this book will bring you up to speed with modern physics, you've got it wrong. It's a long way to quantum field theory and there really are no shortcuts. Susskind's book, and the ones that will probably follow, might however be the shortest route, the one of least action so to speak.

That having been said, I'm not a teenager anymore and frankly don't have much use for the book. Which is why I'll give away my copy for free. The book will go to the first person who has a mailing address in Europe and leaves a comment on this blogpost telling us why you want the book and what your interest in physics is.

Update: The book is gone.

Wednesday, February 20, 2013

Thumbs up for the Cambridge University Press Customer Service

Some years ago, I bought a copy of Stephani et al's book "Exact Solutions of Einstein's Field Equations" from Cambridge University Press. It's pretty much an encyclopedia of all that's known about Einstein's field equations. It's the type of book you turn to for advice when you've got a problem, not a textbook you read front to back. So I hope you'll forgive me when I say it took me a few months to notice that the copy I bought was a misprint, with several empty pages towards the middle. These are the more obscure parts of the book, whose physical applications are, at least to me, somewhat unclear, and I thought I would just never need whatever should have been printed on these pages anyway.

Over the years, however, I developed the distinct paranoia that whenever I was looking for something that I could not find in Stephani's book, it was certainly printed on the missing pages. Some time last week, frustrated by yet another intractable set of equations one gets without a good ansatz for the metric, I wrote to Cambridge University Press customer service, complaining about the misprint, with the above photo attached.

Needless to say, several years after purchasing the book I don't have a receipt. Nevertheless, I got a reply within 24 hours, with an apology for the misprint. Alas, they said, the hardcover version that I have is out of print; would a paperback be okay? "Sure," I wrote back. They asked for my shipping address, and a week later I had a brand new copy, all for free. Now if I don't find an answer to a problem I was looking for, I have no empty pages to blame any more.

Sunday, February 17, 2013

The Future of Peer Review

This week's cover of The Economist.
A year ago, I told you what I think is the future of scientific peer review: peer review that is conducted independently of the submission of a manuscript to a journal. You would get a report from an institution offering such a service, possibly an already existing publisher, possibly a new institution specifically created for this purpose. This report you could then use together with the submission of your paper to a journal, but you could also use it with open access databases. You could even use it along with your grant proposals if that seems suitable. I call it pre-print peer review.

I argued earlier that, irrespective of what you think about this, it's going to happen. You just have to extrapolate the present situation: There is a lot of anger among scientists about publishers who charge high subscription fees. And while I know some tenured people who simply don't bother with journal publication any more and just upload their papers to the arXiv, most scientists need the approval stamp that a journal publication presently provides: it shows that peer review has taken place. The easiest way to break this dependence on journals is to offer peer review by other means. This will make the peer review process more to the point and more effective.

The benefit of this change over other, more radical, changes that have been proposed is that it stays very close to the present model in that the procedure of peer review itself need not be changed. It's just the provider that changes.

I am thus really excited that the recent issue of Nature reports that one such service exists now and another one is about to be created:
The one that already exists is called Peerage of Science, based in Jyväskylä, Finland. Yeah, right, the Nordic people, they're always a little faster than the rest of the world. Peerage of Science seems to have launched a little more than a year ago, but this is the first time I've heard of it. The one in the making is US-based, and the project is managed by a guy called Keith Collier.

Of course it's difficult to say whether such a change will catch on. Academia has a large inertia, and a lot depends on whether people will accept independent reviews. But I am confident, so let me make a prediction, just for the fun of it: in 5 years there will be a dozen such services, some run by publishers. In ten years, most peer review will take place this way.

Tuesday, February 12, 2013

The end of science is near, again.

The recent Nature issue has a comment by Dean Keith Simonton, who is a professor of psychology at UC Davis. Ah, wait, according to his website he isn't just a professor, he is a Distinguished Professor. His piece is subscription only, so let me briefly summarize what he writes. Simonton notes that it has become rare for new disciplines of science to be created:
“Our theories and instruments now probe the earliest seconds and farthest reaches of the Universe, and we can investigate the tiniest of life forms and the shortest-lived of subatomic particles. It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline alongside astronomy, physics, chemistry and biology. For more than a century, any new discipline has been a hybrid of one of these, such as astrophysics, biochemistry or astrobiology. Future advances are likely to build on what is already known rather than alter the foundations of knowledge. One of the biggest recent scientific accomplishments is the discovery of the Higgs boson – the existence of which was predicted decades ago.”
He argues that scientific progress will not stall, but what’s going to happen is that we’ll be filling in the dots in a landscape whose rough features are now known:
“Just as athletes can win an Olympic gold medal by beating the world record only by a fraction of a second, scientists can continue to receive Nobel prizes for improving the explanatory breadth of theories or the preciseness of measurements.”
I have some issues with his argument.

First, he doesn’t actually discuss scientific genius or any other type of genius. He is instead talking about the foundation of knowledge, which he seems to imagine as the building blocks of scientific disciplines. While it seems fair to say that the creation of a new scientific discipline scores high on the genius scale, it’s not a necessary criterion. Simonton acknowledges
“[I]f anything, scientists today might require more raw intelligence to become a first-rate researcher than it took to become a genius during… the scientific revolution in the sixteenth and seventeenth century, given how much information and experience researchers must now acquire to become proficient.”
but one is still left wondering what he means by genius to begin with, or why it appears in the title of his comment if he doesn’t explain or discuss it.

Second, I am unhappy with his imagery of the foundations of knowledge, some version of which I must have myself, as I believe in reductionism. The foundation is, always, whatever is currently the most fundamental theory, and it presently resides in physics. Other disciplines have their own “knowledge” that exists independently of physics only because deriving that “knowledge” from physics is not presently possible, or, if it were, would be entirely impractical.

The difference between these two images matters: in Simonton’s image there’s each discipline and its knowledge. In my image there’s physics and the presently unknown relations between physics and other theories (and thereby of these theories among each other). You see then what Simonton is missing: yes, we know the very large and the very small quite well. But our understanding of complex systems and their behavior has only just begun. If we come to understand better the complex systems that are the subject of study in disciplines like biology, neuroscience and politics, this might not create a new discipline, in that the name would probably not change. But it has the potential to vastly increase our understanding of the world around us, in stark contrast to the incremental improvements that Simonton believes we’re headed towards. Simonton’s argument is akin to saying that once one knows the anatomy of the human body, the rest of medicine is just details.

Third, he has a very limited imagination. I am imagining extraterrestrial life making use of chemistry entirely alien to ours, with cultures entirely different from ours, or disembodied conscious beings floating through the multiverse. You can see what I’m saying: there’s more to the universe than we have seen so far and there is really no telling what we’ll find if we keep on looking.

Fourth, he is underestimating the relevance of what we don’t know. Simonton writes
“The core disciplines have accumulated not so much anomalies as mere loose ends that will be tidied up one way or another. A possible exception is theoretical physics, which is as yet unable to integrate gravity with the other three forces of nature.”
I guess he deserves credit for having heard of quantum gravity. Yes, the foundations are incomplete. But that's not a small missing piece; it's huge, and nobody knows how huge.

To draw upon an example I used earlier, imagine that our improved knowledge of the fundamental ingredients of our theories would allow us to create synthetic nuclei (molecei) that would not have been produced by any natural processes anywhere in the universe. They would have their own chemistry, their own biology, and would interact with the matter we already have in novel ways. Now you could complain that this would be just another type of chemistry rather than a new discipline, but that’s just nomenclature. The relevant point is that this would be a dramatic discovery affecting all of the natural sciences. You never know what you’ll find if you follow the loose ends.

In summary: It might be true what Simonton says, that we have made pretty much all major discoveries and everything that is now to come will be incremental. Or it might not be true. I really do not see what evidence his “thesis”, as he calls it, is based upon, other than stating the obvious, that the low hanging fruits are the first to be eaten.

Aside: John Barrow in his book “Impossibility” discussed the three different scenarios of scientific progress: progress ending, asymptotically stagnating, or forever expanding. I found it considerably more insightful than Simonton’s vague comment.

Friday, February 08, 2013

Book review "The Edge of Physics" by Anil Ananthaswamy

The Edge of Physics: A Journey to Earth's Extremes to Unlock the Secrets of the Universe
By Anil Ananthaswamy
Mariner Books (January 14, 2011)

In "The Edge of Physics", Ananthaswamy takes the reader on a trip to some of the presently most exciting experiments in physics: the Soudan Mine, where physicists are looking for direct detection of dark matter; Lake Baikal, with its underwater neutrino detectors; the Square Kilometre Array in South Africa; the VLT in Chile; the IceCube Neutrino Observatory at the South Pole; and others more, before he finishes his travels at CERN in Geneva.

Along this trip one learns a lot not only about the scenery, but also about physics and the history of physics. Ananthaswamy doesn't add the experiments as an afterthought to elaborations on quantum mechanics and special relativity; the experiments and the people working on them take the lead. His theoretical explanations are brief but to the point. The appendix contains the shortest summaries of the Standard Model and the Concordance Model that I've ever seen. He explains enough that the reader can understand which new physics the experiments are looking for and why it is relevant, but always quickly comes back to show how this search proceeds in reality.

I found this book hugely enjoyable because it is not your typical popular science book. I didn't have to make my way through yet another chapter that promises to explain general relativity without equations, and I learned quite a few things along the way. It's amazing how many details experimentalists have to think about that would never have occurred to me. Ananthaswamy tells stories of people who found their destiny, stories of courage, stories of trial and error, and some quite dramatic accidents and near-accidents. It's a very well written narrative.

I have only one complaint about this book, which is that it would have very much benefited from some illustrations, be it to explain the CMB power spectrum, the generations and families in the Standard Model, the thermal history of the universe, or sketches of the experiments and their parts.

In summary, I can recommend this book to everybody with an interest in contemporary physics or the history of physics. If you have no clue about particle physics or cosmology whatsoever, you might not be able to follow some of the explanations, which are really brief. But even then you'll still take something away from this book. I'd give "The Edge of Physics" 5 out of 5 stars.

Tuesday, February 05, 2013

Consequences of using the journal impact factor

An interesting paper that should be mandatory literature for everybody making decisions on grant or job applications, especially for those impressed by high-profile journals on publication lists:
It's a literature review that sends a clear message about the journal impact factor. The authors argue the impact factor is useless in the best case and harmful to science in the worst case.

The annually updated Thomson Reuters journal impact factor (IF) is, in principle, the number of citations to articles in a journal divided by the number of all articles in that journal. In practice, there is some ambiguity about what counts as an "article", which is subject to negotiation with Thomson Reuters. For example, journals that publish editorials will not want them to count among the articles because they are rarely cited in the scientific literature. Unfortunately, this freedom of negotiation results in a lack of transparency that casts doubt on the objectivity of the IF. While I knew that, the problem seems to be worse than I thought. Brembs and Munafò quote some findings:
"For instance, the numerator and denominator values for Current Biology in 2002 and 2003 indicate that while the number of citations remained relatively constant, the number of published articles dropped...

In an attempt to test the accuracy of the ranking of some of their journals by IF, Rockefeller University Press purchased access to the citation data of their journals and some competitors. They found numerous discrepancies between the data they received and the published rankings, sometimes leading to differences of up to 19% [86]. When asked to explain this discrepancy, Thomson Reuters replied that they routinely use several different databases and had accidentally sent Rockefeller University Press the wrong one. Despite this, a second database sent also did not match the published records. This is only one of a number reported errors and inconsistencies [87,88]."
(For references in this and the following quotes, please see Brembs and Munafò's paper.)
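Taken at face value, the IF is simple arithmetic, which makes the negotiation over the denominator all the more consequential. A toy calculation (all numbers below are hypothetical) illustrates the effect:

```python
# Toy impact-factor calculation with hypothetical numbers, illustrating how
# the negotiated choice of "citable items" in the denominator shifts the IF.

def impact_factor(citations, citable_items):
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    return citations / citable_items

citations = 3000         # hypothetical citations to the journal
research_articles = 500  # hypothetical research articles (cited often)
editorials = 100         # hypothetical editorials (rarely cited)

# Counting editorials among the citable items lowers the IF:
if_with_editorials = impact_factor(citations, research_articles + editorials)
if_without_editorials = impact_factor(citations, research_articles)

print(if_with_editorials)     # 5.0
print(if_without_editorials)  # 6.0
```

Whether this hypothetical journal's IF is 5.0 or 6.0 thus hinges on a bookkeeping decision, not on any change in how often it is cited.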

That is already a bad starting point. But more interesting is that, even though there are surveys confirming that the IF captures quite well researchers' perception of high impact, if one looks at the numbers, it actually doesn't tell much about the promise of articles in these journals:

"[J]ournal rank is a measurable, but unexpectedly weak predictor of future citations [26,55–59]... The data presented in a recent analysis of the development of [the] correlations between journal rank and future citations over the period from 1902-2009 reveal[s that]... the coefficient of determination between journal rank and citations was always in the range of ~0.1 to 0.3 (i.e., very low)."
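For orientation on what such a coefficient of determination means: with R² ≈ 0.2, journal rank would account for only about 20% of the variance in citations. A quick sanity check with synthetic data (not the study's actual data) shows how weak such a relation is:

```python
import numpy as np

# Synthetic data with a true R^2 of 1/(1+4) = 0.2, mimicking the weak
# relation between journal rank and citations reported in the review.
rng = np.random.default_rng(0)
n = 10_000
rank = rng.normal(size=n)                  # stand-in for journal rank
citations = rank + 2 * rng.normal(size=n)  # mostly noise, a little signal

r = np.corrcoef(rank, citations)[0, 1]
r_squared = r ** 2
print(round(r_squared, 2))  # close to 0.2
```

A predictor at this level leaves four fifths of the variation in citations unexplained.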
And that is despite there being reasons to expect a correlation because high profile journals put some effort into publicizing articles and you can expect people to cite high IF journals just to polish their reference list. However,
"The only measure of citation count that does correlate strongly with journal rank (negatively) is the number of articles without any citations at all [63], supporting the argument that fewer articles in high-ranking journals go unread...

Even the assumption that selectivity might confer a citation advantage is challenged by evidence that, in the citation analysis by Google Scholar, only the most highly selective journals such as Nature and Science come out ahead over unselective preprint repositories such as ArXiv and RePEc (Research Papers in Economics) [64]."
So IFs of journals in publication lists don't tell you much. That scores as useless, but what's the harm? Well, there are some indications that studies published in high-IF journals are less reliable, i.e. more likely to contain exaggerated claims that cannot later be reproduced.
"There are several converging lines of evidence which indicate that publications in high ranking journals are not only more likely to be fraudulent than articles in lower ranking journals, but also more likely to present discoveries which are less reliable (i.e., are inflated, or cannot subsequently be replicated).

Some of the sociological mechanisms behind these correlations have been documented, such as pressure to publish (preferably positive results in high-ranking journals), leading to the potential for decreased ethical standards [51] and increased publication bias in highly competitive fields [16]. The general increase in competitiveness, and the precariousness of scientific careers [52], may also lead to an increased publication bias across the sciences [53]. This evidence supports earlier propositions about social pressure being a major factor driving misconduct and publication bias [54], eventually culminating in retractions in the most extreme cases."
The "decline effect" (effects getting less pronounced in replications) and the problems with reproducibility of published research findings have recently gotten quite some attention. The consequences for science that Brembs and Munafò warn of are:
"It is conceivable that, for the last few decades, research institutions world-wide may have been hiring and promoting scientists who excel at marketing their work to top journals, but who are not necessarily equally good at conducting their research. Conversely, these institutions may have purged excellent scientists from their ranks, whose marketing skills did not meet institutional requirements. If this interpretation of the data is correct, we now have a generation of excellent marketers (possibly, but not necessarily also excellent scientists) as the leading figures of the scientific enterprise, constituting another potentially major contributing factor to the rise in retractions. This generation is now in charge of training the next generation of scientists, with all the foreseeable consequences for the reliability of scientific publications in the future."
Or, as I like to put it, you really have to be careful what secondary criteria (publications in journals with high impact factor) you use to substitute for the primary goal (good science). If you use the wrong criteria you'll not only fail to reach an optimal configuration, but also make it increasingly harder to ever get there, because you're changing the background on which you're optimizing (selecting for people with non-optimal strategies).

It should clearly give us something to think about that even Gordon Macomber, the new head of Thomson Reuters, warns of depending on publication and citation statistics.

Thanks to Jorge for drawing my attention to this paper.

Thursday, January 31, 2013

Interna

January has been busy, as you can probably tell from the frequency of my posts. Lara and Gloria are now in half-daycare for four hours on weekdays. The transition went fairly well, and I think they like it there. The nanny clearly has more time and patience to play with the kids than I do, and the place is also better suited than our apartment, where computers, books, pens, and other stuff that you don't want in your toddlers' hands are lying in every corner. The nanny is from Spain and so the kids learn some Spanish along the way. They seem to understand a few words, but don't yet speak any.

We have now replaced the baby cribs with larger beds that the kids can get in and out of on their own. This took some getting used to. They wake up in the night considerably more often than previously, and sometimes wander around, so recently we haven't been getting as much sleep as we would like. That explains half of my silence. The other big change this month was that, now that the kids are two years old and we have to pay for their flight tickets, we've given up commuting to Stockholm together, and this is the first month of me trying to commute alone. Stefan has support from the babysitter and the grandparents while I'm away, but we're still trying to find the best way to arrange things. It's proved difficult to find a good solution for our issues with non-locality.

I have a case of recurring sinus infection which puts me in a generally grumpy mood, and the kids have a permanently runny nose, for which I partly blame myself and partly the daycare. Besides this, I am in the process of writing a proposal for what the European Research Council calls the "Consolidator Grant", and it's taking up a lot of time I'd rather spend on something else. My review on the minimal length scale has now been published in Living Reviews in Relativity. I have been very impressed by how smoothly and well-organized their review and publication process went. Needless to say, now every time I see a paper on the arXiv on a topic covered by the review, I'm dreading the day I have to update this thing.

The girls are finally beginning to actually convey information with what they say. They ask for things they are looking for, they say "mit" (with) to tell us what we should take along, they complain if they're hungry and have learned the all-important word "put" (kaputt, broken). We haven't made much progress with the potty training though, unless naming the diaper content counts.

Sunday, January 27, 2013

Misconceptions about the Anthropic Principle

I keep coming across statements about the anthropic principle leaving its mark on physics that strike me as ill-informed, most recently in the book I am presently reading, “The Edge of Physics” by Anil Ananthaswamy:
“The anthropic principle – the idea that our universe has the properties it does because we are here to say so and that if it were any different, we wouldn’t be around commenting on it – infuriates many physicists, including [Marc Davis from UC Berkeley]. It smacks of defeatism, as if we were acknowledging that we could not explain the universe from first principles. It also appears unscientific. For how do you verify the multiverse? Moreover, the anthropic principle is a tautology. “I think this explanation is ridiculous. Anthropic principle… bah,” said Davis. “I’m hoping they are wrong [about the multiverse] and that there is a better explanation.””
The anthropic principle has been employed in physics as a proposed explanation for the values of parameters in our theories. I’m no fan of the anthropic principle because I don’t think it will lead to big insights. But it’s neither useless nor a tautology nor does it acknowledge that the universe can’t be explained from first principles.
  1. The anthropic principle doesn’t necessarily have something to do with the multiverse.

    The anthropic principle is true regardless of whether there is a multiverse and regardless of what fundamentally is the correct explanation for the values of parameters in our theories. The reason it is often mentioned in combination with the multiverse is that proponents of the multiverse argue it is the only explanation available, and that no further explanation is necessary or worth looking for.

  2. The anthropic principle most likely cannot explain the values of all parameters in our theories.

    There are a lot of arguments floating around that go like this: if the value of parameter x was just a little larger or smaller, we’d be fucked. The problem with these arguments is that small variations of one out of two dozen parameters probe only a tiny slice of the possible combinations of parameters. You’d really have to vary all parameters together to be able to conclude there is only one combination supportive of life, which is however not a presently feasible calculation. And even setting feasibility aside, the claim that there is really only one combination of parameters that will create a universe hospitable to life is on shaky ground already, because this paper put forward a universe that seems capable of creating life and yet is entirely different from our own. And Don Page had something to say about this too.

    The anthropic principle might however still work for some parameters if their effect is almost independent of what the other parameters do.

  3. The anthropic principle is trivial, but that doesn’t mean it’s useless.

    Mathematical theorems, lemmas and corollaries are results of derivations following from assumptions and definitions. They essentially are the assumptions, just expressed differently, always true and sometimes trivial. But often, they are surprising and far from obvious, though that is inevitably a subjective statement. Complaining that something is trivial is like saying “It’s just sound waves” and referring to everything from engine noise to Mozart.

    And so, while the anthropic principle might strike you as somewhat silly and trivially true, it can be useful, for example, to rule out values that certain parameters of our theories could have. The most prominent example is probably the cosmological constant which, if it were too large, wouldn’t allow the formation of structures large enough to support life. This is not an empty conclusion. It’s akin to me seeing you drive to work by car every morning and concluding you must be old enough to have a driver’s license. (You might just be stubbornly disobeying laws, but the universe can’t do that.) Though this probably doesn’t work for all parameters, see 2.

  4. The anthropic principle does not imply a causal relation.

    Though “because” suggests so, there’s no causation in the anthropic principle. An everyday example of “because” not implying an actual cause: I know you’re sick because you’ve got a cough and a runny nose. This doesn’t mean the runny nose caused you to be sick. Instead, it was probably some virus. Of course, you can carry a virus without showing symptoms, so it’s not like the virus is the actual “cause” of my knowing either. Likewise, that there is somebody here to observe the universe did not cause a life-friendly universe into existence. (And the reverse, that a life-friendly universe caused our existence, isn’t the case either, because life-friendly doesn’t mean interested in science, see 3. Besides, it’s not like the life-friendly universe sat somewhere out there and then decided to come into existence to produce some humans.)

  5. The applications of the anthropic principle in physics have actually nothing to do with life.

    As Lee Smolin likes to point out, the mention of “life” in the anthropic principle is entirely superfluous verbal baggage (my words, not his). Physicists don’t usually have a lot of business with the science of self-aware conscious beings; they talk about the formation of large-scale structures or atoms. Don’t even expect large molecules. However, talking about “life” is arguably catchier.

  6. The anthropic principle is not a tautology in the rhetorical sense.

    It does not use different words to say the same thing: A universe might be hospitable to life and yet life might not feel like coming to the party, or none of that life might ever ask a why-question. In other words, getting the parameters right is a necessary but not a sufficient condition for the evolution of intelligent life. The rhetorically tautological version would be “Since you are here asking why the universe is hospitable to life, life must have evolved in that universe that now asks why the universe is hospitable to life.” Which you can easily identify as rhetorical tautology because now it sounds entirely stupid.

  7. It’s not a new or unique application.

    Anthropic-type arguments, based on the observation that there exists somebody in this universe capable of making an observation, are not only used to explain free parameters in our theories. They sometimes appear as “physical” requirements. For example: we assume there are no negative energies because otherwise the vacuum would be unstable and we wouldn’t be here to worry about it. And requirements like locality, separation of scales, and well-defined initial value problems are essentially based on the observation that otherwise we wouldn’t be able to do any science, if there was anybody to do anything at all.

Thursday, January 24, 2013

Hurdles for women in physics

Time Magazine's Person of the Year in 2012 was Barack Obama, the dullest choice they could possibly have made. I would have cast my vote for Malala Yousafzai, who made it onto the list of runners-up. Among the runners-up one could also find particle physicist Fabiola Gianotti ("The Discoverer"), who had the eyes of the world on her when she announced the discovery of the Higgs last year. It was pretty cool, I thought, to find a particle physicist on that list.

Alas, the article, if you read it, is somewhat funny. To begin with you might get the impression she was selected for heroically fighting a toothache. And then there is this remark:
“Physics is a male-dominated field, and the assumption is that a woman has to overcome hurdles and face down biases that men don’t. But that just isn’t so. Women in physics are familiar with this misconception and acknowledge it mostly with jokes.”
This pissed me off enough to write a letter to the editor. I only learned coincidentally the other day that it appeared in the Jan 21 issue of the US edition. (Needless to say, we get the European edition.) Below is the full comment I wrote and the shortened version that appeared. There are many other things one could have mentioned, but I wanted to keep it brief.
“As a particle physicist, it was exhilarating for me to see Fabiola Gianotti on your list of runners-up, but I was very dismayed by Kluger's statement it is a "misconception" that women in physics face hurdles men don't.

Yes, instances in which I have been mistaken by my male colleagues for the secretary or catering personnel can be "acknowledge[d] mostly with jokes", though these incidences arguably reveal biases and not everybody finds them amusing. But the assertion that women in physics do not "have to overcome hurdles... that men don't" speaks past the reality of academia and is no laughing matter.

In this field the competition for tenure usually plays out in the mid to late thirties, and is not only accompanied by hard work but also frequently by international moves. Men can postpone their family planning until after they have secured positions. Women can't. I am very lucky to live in a country with generous parental leave and family benefits. But I do have female colleagues in other countries who faced severe problems because of unrealistic expectations on their work-performance and lack of governmental support while raising small children.

Both genders face the tension between having a family and securing tenure, but the timing is markedly more difficult for women. You have done a great disservice to female physicists by denying this "hurdle" exists.”

Thursday, January 17, 2013

How a particle tells time

One of the first things you learn about quantum mechanics is that particles have a wavelength, and thus a frequency. If the particle is at rest, this frequency is the Compton frequency, which is proportional to the particle’s rest mass. It appears in the wavefunction of the particle at rest as a phase. This basically means the particle oscillates, even if it doesn’t move, with a frequency directly linked to its mass.

The precision of atomic clocks in use today relies on the precise measurement of transition frequencies between energy levels in atoms which serve as reference for an oscillator. But via the Compton wavelength, the mass of a (stable) particle is also a reference for an oscillator. Can one therefore use a single particle to measure the passing of time?

This is the question Holger Müller and his collaborators from the University of California, Berkeley, have addressed in a neat experiment that was published in a recent issue of Science:
    A Clock Directly Linking Time to a Particle's Mass
    Shau-Yu Lan, Pei-Chen Kuan, Brian Estey, Damon English, Justin M. Brown, Michael A. Hohensee, Holger Müller
    Science, DOI: 10.1126/science.1230767
As you can tell from the title of the article, the answer is yes, one can use a single particle to measure time! They have done it, with the particle in question being a cesium atom, and call it a “Compton clock.” The main difficulty is that the oscillation frequency is very high, far beyond what is directly measurable today. To make it indirectly measurable, they had to cleverly combine two main ingredients: an atom interferometer and a frequency comb.
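To see just how high this frequency is, one can put in numbers. A back-of-the-envelope sketch (constants rounded, cesium mass approximate):

```python
# Back-of-the-envelope: the Compton frequency nu = m*c^2/h of a cesium atom.
# Constants are rounded CODATA values; the mass is approximate.

h = 6.62607e-34     # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
u = 1.66054e-27     # atomic mass unit, kg
m_cs = 132.905 * u  # mass of cesium-133, kg (approximate)

nu_compton = m_cs * c**2 / h  # Compton frequency in Hz
print(f"{nu_compton:.1e}")    # about 3e25 Hz
```

At some 10^25 Hz this is roughly ten orders of magnitude above optical frequencies, which are themselves already far too fast for any electronic counter.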

The atom interferometer works as follows. The atom is hit by two laser pulses, one with a frequency a little higher than the laser’s direct output frequency, and one with a frequency a little lower. This splits the wavefunction of the atom. A couple more precisely timed laser pulses are then used to let the wavefunction converge again. It interferes with itself, and the interference pattern can be measured by repeating this process.

The relevant aspect of the atom interferometry here is that the phase accumulated by each part of the wavefunction depends on the output frequency of the laser, the difference in frequency between the two pulses (tiny in comparison to the output frequency), as well as on the path taken. The path-dependent phase itself depends on the mass of the atom, because the two parts of the wavefunction are not at rest relative to each other. So the experimentalist can turn a knob and change the difference between the frequencies of the two pulses until the interference pattern vanishes. If the interference pattern vanishes, one has a fixed relation between the mass of the particle, the output frequency of the laser, and the difference between the pulse frequencies.

So far, so good. If one now knows the frequency of the laser, one can measure the particle’s mass by looking at the frequency split of the pulses needed to get the interference to vanish. Alas, this is not what one wants for the purpose of a clock, which should not rely on an additional, external, measurement.

This is where the frequency comb comes in. In 2005, frequency combs brought a Nobel Prize to John Hall and Theodor Hänsch. Before the invention of the frequency comb, it was not possible to accurately determine absolute frequencies in the optical range. Relative frequencies, yes, but not absolute ones; they’re just too fast to be counted by any electronic means. Frequency combs address this issue by relating very high optical frequencies to considerably lower frequencies, which can then be counted. This is done by pulsing the signal at a low repetition frequency. If one takes the Fourier transform of such a pulsed signal, one obtains (ideally) a series of peaks – the frequency comb – whose positions are exactly known (they are the higher harmonics of the low repetition frequency). If one knows the pulse pattern of the frequency comb, one can then substitute the measurement of a very high frequency with that of a considerably lower one. Ingenious!
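The comb structure is easy to see numerically. This numpy sketch (toy parameters, nothing to do with actual laser frequencies) computes the spectrum of a periodic pulse train and finds its peaks at integer multiples of the repetition frequency:

```python
import numpy as np

# Spectrum of a pulse train: peaks sit at harmonics of the repetition rate.
f_rep = 10                   # repetition frequency, Hz (toy value)
fs = 1000                    # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)  # one second of signal

# Narrow Gaussian pulses repeating every 1/f_rep seconds:
signal = sum(np.exp(-((t - t0) ** 2) / (2 * 0.001 ** 2))
             for t0 in np.arange(0, 1, 1 / f_rep))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The ten strongest spectral components all lie at multiples of f_rep:
peaks = freqs[np.argsort(spectrum)[-10:]]
print(sorted(peaks))  # all multiples of 10 Hz
```

Locating a high frequency relative to one of these exactly known comb lines then replaces counting the high frequency itself.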

And more ingenuity: Müller and his collaborators use a frequency comb to self-reference the (tiny) difference between the laser pulses with the output frequency of the laser. The relation between the two is then known and given by the pulse pattern of the frequency comb. This way, one gets rid of one parameter and has a direct relation between a measurable frequency and the mass of the particle: it’s a clock!

As far as the precision of this clock is concerned, however, it is orders of magnitude below today’s state-of-the-art atomic clocks. So unless there are truly dramatic improvements to atom interferometry, nobody is going to use the Compton clock in practice any time soon.

But this clock works both ways. It doesn’t only relate a mass to a time (an oscillation frequency), but also the other way round. Thus, one can use the Compton clock to measure mass if one has a time reference. With the "Avogadro Project", which manufactures enormously precise silicon crystals containing an accurately known number of atoms, one can scale up from a single atom to macroscopic masses. This way the Compton clock might one day be used to define a standard of mass.

Monday, January 14, 2013

Soft Science Envy

If I look at a correlation plot in biology, sociology or psychology, I can understand what they mean by “physics envy.” Physics is the field of precision measurement, the field of hard facts, the field of unambiguous conclusions – at least that’s what it looks like from the outside. The neutron lifetime (see image to the right) tells a different story, one in which convergence clearly had a social parameter (note that jumps in measurements over the years lie outside the error bars). But in the end the facts won, and isn't the shrinking of the error bars just amazing? That's the side of physics envy that is understandable.

There is the occasional physicist who puts his skills to use in biology, chemistry, neuroscience or the social sciences, economics, sociology and fancy new interdisciplinary mixtures thereof. Needless to say, people working in these fields aren’t always pleased about the physicists stomping on their grass, and more often than not they’re quite unsupportive.

Source: SMBC.
That’s the ugly side of physics envy. It is a great stumbling block for interdisciplinary research. You really need a masochistic gene and a high tolerance for criticism to try.

Physics envy has led many researchers in other fields to develop mathematical models that create the illusion of control and precision – even if the system under question doesn’t allow for such precision. That’s the hazardous side of physics envy.

But after having read Kahneman’s and Ramachandran's books, I have clearly developed a soft science envy!

Kahneman tells the reader throughout his book how he cooked up hypotheses and ways to test them in the blink of an eye. His hypotheses were frequently triggered by reflecting on the shortcomings of his own perceptions, then assuming he’s an average person. He won the Nobel Prize for Economics for the insight that human decisions can be inconsistent. Ramachandran, who made a career learning about neurobiology from patients with brain damage, literally has the subjects of his papers walking into his office. This is not to belittle the insights that we have gained from their creativity and the benefits that they have brought. But the flipside of physics envy is that not only are the facts hard, the way to them is too.

Tuesday, January 08, 2013

Conform and be funded?

A recent issue of Nature magazine featured a study by Joshua Nicholson and John Ioannidis that looked at the citation counts of principal investigators (PIs) funded by the US National Institutes of Health (NIH).
    Research grants: Conform and be funded
    Joshua M. Nicholson, John P. A. Ioannidis
    Nature 492, 34–36 (06 December 2012) doi:10.1038/492034a
Ioannidis is no unknown; he previously published a paper "Why Current Publication Practices May Distort Science" that we discussed here, and is the author of the essay "Why Most Published Research Findings Are False". The Nature article is unfortunately subscription-only, so let me briefly summarize what it says before commenting.

Nicholson and Ioannidis analyzed papers published between 2001 and 2012 in the life and health sciences, catalogued by the Scopus database. They looked for those that had received more than 1,000 citations by April 2012 and had an author affiliation in the United States. They found 700 papers and 1,172 authors matching this query.

The NIH invites PIs of funded projects to become members of study sections, whose purpose is to evaluate scientific merit. Nicholson and Ioannidis found that of the 1,172 top-cited authors, only 72 were currently members of study sections, and most of these 72 (as expected) currently received NIH funding. However, these 72 top-cited scientists are merely 0.8% of all section members. Maybe more insightful is that they further randomly selected 200 of the top-cited papers and excluded those with authors in a study section. Of the remaining top-cited authors, only 40% are currently receiving NIH funding.

In a nutshell, this is to say that the majority of authors of research articles in the life and health sciences that were top-cited within the last decade do not currently receive NIH funding.

That's as far as the facts are concerned. Now let's see how Nicholson and Ioannidis interpret this finding and what they conclude. In the beginning of the article, they are careful to point out that scientific success is difficult to measure and the citation count should be regarded with caution:
    "The influence of scientific work is difficult to measure, and one might have to wait a long time to understand it. One proxy measurement is the number of citations that scientific publications receive. Using citation metrics to appraise scientists and their work has many pitfalls... However, one uncontestable fact is that highly cited papers (and thus their authors) have had a major influence, for whatever reason, on the evolution of scientific debate and on the practice of science."
However, towards the end of the paper they write:
    "The mission of the NIH is to support the best scientists, regardless of whether they are young, old or in industry... Such innovative thinkers should not have so much trouble obtaining funding as principal investigators. One cannot assume that investigators who have authored highly cited papers will continue to do equally influential work in the future. However, a record of excellence may be the best predictor of future quality, and it would seem appropriate to give these scientists the opportunity of funding their projects."
Note how authoring a highly cited paper has now become synonymous with being an "innovative thinker" and "may be the best predictor of future quality". In fact, they go even farther than that by arguing that all authors of highly cited papers should have their projects NIH-funded (apparently regardless of what the project is):
    "Funding all scientists who are key authors of unrefuted papers that have 1,000 or more citations would be a negligible amount in the big picture of the NIH budget, simply because there are very few such people. This could foster further important discoveries that would otherwise remain unfunded in the current system."
I find the above closing paragraph of the article simply stunning. They seriously argue that something must be wrong with NIH funding -- according to their elaboration it's a "networked system" in which "exceptionally creative ideas may have difficulty surviving" -- because the NIH does not automatically fund projects of authors with papers who gathered more than 1,000 citations within the last decade.

Now, I know nothing about funding problems in the life sciences. Maybe they have a good reason to hold a grudge against NIH peer review practice. Be that as it may, the facts simply do not support their arguments. I am tempted to say it actually speaks in favor of the NIH that they do not pay so much attention to the citation count because, as the authors write themselves, it's a questionable measure: it captures not only innovative thinking, but also fashions and sheer usefulness (reviews and illustrative diagrams tend to gather lots of citations); it moreover picks up social dynamics, the popularity of the authors, and the amount of secondary work that is created, irrespective of whether that work is particularly insightful.

Many top-cited works are created because somebody was fast enough to jump onto a topic about to take off. Is that a sign of not being "conform", as the title of the article suggests? I am trying to imagine somebody arguing that all top-cited physicists should get their projects funded without peer review. And trying to publish this as an essay in Nature.

Friday, January 04, 2013

Gravitational bar detectors set limits to Planck-scale physics - Really?

Contains 10^-31% juice.
Three weeks ago, Nature Physics published, to my surprise, another paper on quantum gravity phenomenology:
The appearance of the word “macroscopic” in the title should be a warning sign.

As we discussed previously, there are recurring attempts in the literature on quantum gravity phenomenology to amplify normally tiny and unobservable effects by using massive systems. This is tempting because in macroscopic terms the Planck mass is 10^-5 g and easy to reach. The problem with this attempt is that such a scaling-up of quantum gravitational effects with the total mass of a system isn't only implausible as an amplification, it is known to be wrong. The next two paragraphs contain technical details; you can skip them if you want.

The reason this amplification for massive systems appears in the literature is that such a scaling is what you naively get in approaches with non-linear Lorentz-transformations on momentum space that have been motivated by quantum gravity. If Lorentz-transformations act non-linearly, then the normal, linear sum of momenta is no longer invariant under Lorentz-transformations and thus does not constitute a suitable total momentum for objects composed of many constituents.

It is possible to introduce a modified sum, and thus total momentum, that is invariant. But this total momentum receives a correction term that grows faster than the leading order term with the number of constituents. The correction term is suppressed by the Planck mass, but if the number of constituents is large enough, the additional term will become larger than the (normal) leading order term. This would mean that momenta of macroscopic objects would not add linearly, in conflict with what we observe. This issue has been called the “soccer ball problem”; accepting it is not an option. Either this model is just wrong, or, as most people working on it believe, multi-particle states are subtle and the correction terms stay small for reasons that are not yet well understood. To get rid of these terms, a common ad-hoc assumption is to also scale the Planck mass with the number of constituents so that the correction terms remain small. Be that as it may, it's not something that makes sense to use for “observable predictions”.
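Schematically, the problem can be seen from a toy composition law of the following type (an illustrative form only; the exact expression depends on the model, and indices and factors of c are suppressed):

```latex
p \oplus q \;\approx\; p + q + \frac{p\,q}{m_{\rm Pl}}
```

For N constituents of comparable momentum p, the pairwise correction terms add up to roughly N^2 p^2/(2 m_Pl), while the leading term grows only like N p. The correction therefore dominates as soon as the total momentum exceeds the Planck momentum, which in everyday units is only about 6.5 kg m/s -- easily surpassed by a kicked soccer ball. Rescaling m_Pl to N m_Pl keeps the ratio of the two terms fixed, which is the ad-hoc fix referred to above.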

Earlier last year, Nature published a paper in which the questionable scaling was used to make “predictions” for massive quantum oscillators. Since this prediction is not based on a sound model, it is very implausible that anything like this will be observed.

The authors of the new paper now propose to precisely measure the ground state energy of a gravitational wave detector, AURIGA. In theories with a modified commutation relation between position and momentum operators, this energy receives correction terms. Alas, such modified commutation relations either break Lorentz-invariance, in which case they are very tightly constrained already and nothing interesting is to be found there. Or Lorentz-invariance is deformed, which leads to the necessity to modify the addition law and we're back to the soccer-ball problem.
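A commonly studied form of such a modified commutation relation is the following (a representative example from this class of models, not necessarily the exact expression used in the paper):

```latex
[\hat{x}, \hat{p}] \;=\; i\hbar \left( 1 + \beta_0 \, \frac{\hat{p}^{\,2}}{(m_{\rm Pl}\, c)^2} \right)
```

Here beta_0 is a dimensionless parameter expected to be of order one if the modification becomes relevant at the Planck scale. For a harmonic oscillator, this modification shifts the energy levels, including the ground state, by terms proportional to beta_0, which is why a precision measurement of an oscillator's ground-state energy can be turned into a (weak) constraint on beta_0.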

So you might suspect that the new paper by Marin et al suffers from a similar problem as the previous one. And you'd be wrong. It's much better than that.

The authors explicitly acknowledge the necessity to understand multi-particle states in the models that they aim at testing, and present their proposal as a method to resolve a theoretical impasse. And while they talk about very massive objects indeed (the detector bars have a mass of about 10^5 kg), they do not scale up the effect with the mass (see eq 4). Needless to say, this means that the effect that they get is incredibly tiny, about 33 orders of magnitude away from where you would expect quantum gravitational effects to become relevant. They modestly write “Our upper limit... is still far from forbidding new physics at the Planck scale.”

Here's the amazing thing. For all I can tell, not knowing much about the AURIGA detector, the paper is perfectly plausible and the constraint indeed makes sense. I have nothing to complain about. In fact they even cite my review in which I explained the problem with massive systems.

The only catch is of course that the limit they obtain really isn't much of a limit. If Nature Physics were consistent in their publication decisions, they should now go on and publish all limits on Planck-scale physics that are less than 34 orders of magnitude away from being tested. I am very much looking forward to this. There are literally hundreds of papers that compute corrections due to modified commutation relations for all sorts of quantum mechanics problems. I should know because they're all listed and cited in my review. Expect an exponential growth of papers on the topic. (I am already dreading the day I have to update my review.) Few of them ever bother to put in the numbers and look for constraints because rough estimates show that they're far, far away from being able to test Planck-scale effects.

The best constraint on these types of models is, needless to say, my own, which is a stunning 56 orders of magnitude better than the one published in Nature.

So it seems that for once I have nothing to complain about. It's a great paper and it's great it was published in Nature Physics. Now I encourage you all to compute Planck-scale corrections to your favorite quantum mechanics problem by adding an additional term to the commutation relation, and submit your results to Nature. How about the g-2, or the Casimir effect? Oh, and don't forget that somebody should think about the soccer-ball problem...

Tuesday, January 01, 2013

Private Funding for Science – A Good Idea?

Two years ago Warren Buffett asked the community of the super-rich to make a “Giving Pledge”: to commit to donating half of their money to charity. His effort made headlines, and some fellow billionaires joined Buffett’s pledge, among others Bill Gates, George Lucas and Mark Zuckerberg.

Money bags. WPClipart.
The wealthy Europeans however have remained skeptical, for good reasons. Money brings influence -- influence that can conflict with democratic decisions, a fact that Europeans seem to be more acutely aware of than Americans. The German Peter Krämer, who I guess counts as rich though not as super-rich, said about Buffett’s pledge:
    “In a democratic nation, one cannot allow billionaires to decide as they please which way donations are used. It is the duty of the government, and thus in the end that of the citizens, to make the right decisions.” [Source]
Instead, Krämer argues that taxes should be raised for the upper class. Since nobody is listening to his wish of being taxed, he launched his own charitable project “Schools for Africa.”

The NYT last month raised the question “[C]an charity efficiently and fairly take the place of government in important areas? Or does the power of wealthy patrons let them set funding priorities in the face of government cutbacks?” In the replies, Chrystia Freeland from Thomson Reuters relates how a wealthy American philanthropist coined the term “self-tax” for charitable donations, and she puts her finger on the problem:
    “From the point of view of the person writing the check, the appeal of the self-tax is self-evident: you get to choose where your money goes and you get the kudos for contributing it.

    But for society as a whole, the self-tax is dangerous. For one thing, someone needs to pay for a lot of unglamourous but essential services, like roads and bank regulation, which are rarely paid for by private charity.

    Even more crucially, the self-tax is at odds with a fundamental democratic principle -- the idea that we raise money collectively and then, as a society, collectively choose how we will spend it.”
The same discussion must be had about private funding of science.

Basic research, with its dramatically high failure rate, is for the most part an “unglamorous” brain exercise whose purpose as well as appeal is difficult to communicate. Results can take centuries to even be recognized as results. The vast majority of researchers and research findings will not even make a footnote in the history of science. Basic research rarely makes sexy headlines. And if it does, it's because somebody misspelled hadron. All that makes it an essential, yet unlikely, target of private donations.

Even Jeffrey Sachs, after some trial and error, came around to realize that raw capitalism left to its own devices may fail people and societal goals. Basic investments like infrastructure, education, and basic research are tax-funded because they're in the category where the market works very badly, where pay-offs are too far into the future for tangible profits.

The solution to this shortcoming of capitalism cannot be to delegate decisions to the club of billionaires and hope they are wise and well-meaning. Money is not a good. It’s a virtual tool to direct investment of real resources: labor, energy, time. The central question is not whose money it is, but how resources are best put to use.

We previously discussed a specific type of private funding of science: crowdfunding. The problem with crowdfunding is that chances of funding depend primarily on the skilled presentation of a project, and not on its potential scientific relevance.

A recent article in Time Magazine “Crowdfunding a Cure” (subscription only) reported a trend from the United States in which online services allow patients and their relatives to raise money to pay for medical treatments, organ donations, or surgeries. One obvious problem with this approach is fraud. (If you think nobody would possibly want to fake cancer, think twice and read this.) What bothers me even more is the same issue as with the crowdfunding of science: You better be popular and good at social networking if you want to raise enough money for a new kidney. Last week’s issue of Time Magazine published a reader’s comment from Claes Molin, Sweden. This is how crowdfunding medical treatments looks from the Scandinavian perspective:
    “It is moving to read about the altruism displayed by crowdfunding for medical procedures, and I don’t doubt the sincerity of the donors. But the steps described to raise money, including displaying personal details for strangers to see and remembering to say “thank you,” sound a lot like being forced to beg. I understand that values differ, but government-funded health care would let people keep their dignity, along with their peace of mind, in the face of life-threatening disease.”
A thesis project isn’t as serious as a life-threatening disease, but the root of the problem with crowdfunding is the same in either case. Crowdfunding is neither an efficient nor a fair way to distribute money, and thus the resources that follow it. It is a simple way, a presently popular way, and a last resort for those who have been failed by their government. But researchers shouldn’t be forced to waste time on marketing any more than patients should be forced to waste time illustrating their suffering, and in neither case should success depend on the popularity of the presentation.

Be that as it may, crowdfunding is and will most likely remain a drop in the drying lake of science funding. I strongly doubt it has the potential to significantly change the direction of scientific research; there just isn’t enough money to go round in the crowd. Paying attention to private funding by wealthy individuals is much more pressing.

Wealthy donors often drive their own agenda. This bears a high risk that some parts of research, the “unglamorous” but essential parts, simply do not receive attention, and that researchers’ interests are systematically skewed to the disadvantage of scientific progress.

The German association of science foundations (“Deutscher Stifterverband für die Wissenschaft”) is, loosely speaking, a head organization for private donors to science that manages funds. (Note that the German use of the word “science” encompasses the natural and social sciences as well as the humanities and mathematics.)

I once spent a quite depressing hour browsing the full list of, in total, 560 foundations that they manage to date (this includes foundations exclusively for scholarships and prizes). 56 of them are listed under natural sciences and engineering. There isn’t a single one even remotely related to quantum gravity or physics beyond the standard model. The two that come closest are the Andrejewski Foundation, which hands out a total of EUR 9,000 per year to invite lecturers on topics relating mathematics and physics, and the Schmidt Foundation for basic research in the natural sciences in general, which however has an even smaller total budget. (Interestingly, their fund is distributed by the German Research Foundation and, so I assume, subject to the standard peer review.)

Then what do people donate to in the natural sciences? Most donors, it seems, donate to very specific topics that are closely related to their own interest. Applications of steel for example. Railroad development. The improvement of libraries at technical universities. The scientific cooperation between Hungary and Germany. And so on.

So much for the vision of the wealthy. To be fair, however, the large foundations are not to be found in this list; they do their own management. And there are indeed the occasional billionaires with an interest in basic research in physics, such as Kavli, Lazaridis, Tschira, and Templeton. And, more recently, Yuri Milner with his sur-prizes.

If you work like me in a field that seems constantly underfunded, where you see several hundred applications for two-year positions and people uproot families every other year to stay in academia, you are of course grateful to anybody who eases financial pressures.

But what price is the scientific community paying?

Money sets incentives and affects researchers’ scientific interests by offering funding, jobs, or rewards. The recurring debate over the influence of the Templeton Foundation touches on this tension. And what effect will Milner’s prizes have on the coming generation of scientists? We have a lot to lose in this game if we allow the vanity of wealthy individuals to influence what research is conducted tomorrow.

There is another problem with private funding, which is lack of financial stability. One of the main functions of governmental funding of basic research is its sustained, continuous availability and reliability. High quality research builds on educational and technological infrastructure and expertise. It withers away if funding runs dry, and once people have moved elsewhere or to other occupations, rebuilding this infrastructure and attracting bright people is difficult and costly. Private donations are ill-suited to address this issue. A recent Nature Editorial “Haste not Speed” comments on the problem of stability with US funding in particular:
    “[W]hen it comes to funding science, predictability is more of a virtue than speed, and stability better than surprise.”
All this is not to say that I disapprove of private funding. But as always, one has to watch out for unwanted side-effects. So here’s my summary of side-effects:
  • Interests of wealthy individuals can affect research directions leading to an inefficient use of resources, leaving essential areas out of consideration. Keep in mind that the relevant question is not whose money it is, but how it is best used to direct investment of resources into an endeavor, science, with the aim of serving our societies.
  • When it comes to delicate questions like which scientific project is most promising, somebody’s personal interest or experience is not a good basis for decisions. Short-circuiting peer review saves time and effort in the short run, but individual opinion is unlikely to lead to scientifically more desirable outcomes.
  • Eyeing and relying on private donations is tempting for governments and institutional boards, especially when times are rough. This slope can be slippery and lead to a situation where scientists are expected to “beg for money,” which is not a good use of their time and skills, and is unlikely to result in fair and useful funding schemes.
  • The volume of private funding and the interests of donors tend to be unstable, which makes it particularly ill-suited for areas like basic research where expertise needs sustained financial commitment.
So what is the researcher to do? If somebody offered to fund my project I probably wouldn’t say no: obviously, I am convinced of the relevance of my own research! Neither would I expect anybody else to say no.

But whenever the situation calls for it, scientists should insist on standard quality control and peer review, and discourage funding schemes that circumvent input from the scientific community. Otherwise we’re passively agreeing to waste collective effort. The standard funding scheme is taxation channeled to funding agencies. The next easiest thing is donations to existing funding agencies or established institutions, not bound to a specific purpose. Private foundations and their review processes are not necessarily bad, but should be treated carefully, especially when more opaque than transparent. And crowdfunding, hip as it sounds, will not work for the unglamorous, dry, incremental investigations that form the backbone of basic research.