
Tuesday, March 30, 2010

Against Measure

The recent Nature issue has an opinion piece by Julia Lane:

Julia Lane is the director of the Science of Science and Innovation Policy programme at the National Science Foundation. This programme is a very timely initiative to support research with the aim of better understanding the process of research itself.

If you've been reading this blog for a while, you know that this is exactly what I've been saying over and over again: we need a scientific approach to understanding the academic system, much like we have one for understanding the economic system. We are wasting time, money and effort because we don't understand the social dynamics of our communities and don't know what benefits knowledge discovery. Instead of a scientific investigation of these matters, we spend our time on useless complaints. Clearly, funding agencies should be interested in how to use their resources more efficiently. It's thus no surprise the NSF has such a program. It's a surprise other agencies don't.

Thus, I agree with the general thrust of Lane's article. Current metrics to assess scientific productivity are poor. They are used even though this is known. There are no international standards. Many possible measures of creativity and productivity are not used at all. Lane writes:
Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes. [...]

Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes and even the production of YouTube videos. Many of these are more up-to-date measures of activity than citations.

She also points out that there are many differences between fields that have to be accounted for, and that the development of good metrics is an interdisciplinary effort at the intersection of the social and natural sciences. I would have added the computer sciences to that. I agree with all of this. What I don't agree with is the underlying assumption that measuring scientific productivity is under all circumstances good and useful to begin with. Julia Lane doesn't actually provide a justification; she just writes:
"Scientists are often reticent to see themselves or their institutions labelled, categorized or ranked. Although happy to tag specimens as one species or another, many researchers do not like to see themselves as specimens under a microscope — they feel that their work is too complex to be evaluated in such simplistic terms. Some argue that science is unpredictable, and that any metric used to prioritize research money risks missing out on an important discovery from left field. It is true that good metrics are difficult to develop, but this is not a reason to abandon them. Rather it should be a spur to basing their development in sound science. If we do not press harder for better metrics, we risk making poor funding decisions or sidelining good scientists."

True, this is no reason to abandon metrics. But let me ask the other way 'round: Where is the scientific evidence that the use of metrics to measure scientific success is beneficial for progress?

I don't have any evidence one way or the other (well, if my proposal under Ms Lane's program had been approved, I might have). So instead I'll just have to offer some arguments why the mere use of metrics can be counterproductive. First, let me be clear that scientific research can be very different from one field to the other. We also previously discussed Alexander Shneider's suggestion that science proceeds in various different stages. The stages basically distinguish different phases of the creative process. For some of these stages, the use of metrics can be useful. Metrics are useful if it is uncontroversial what constitutes progress or good research. This will be the case in the stages where a research field is established. That's basically the paper-production, problem-solving phase. It's not the "transformative" and creative stage.

One has to be very clear on one point: metrics are not external to the system. The measurement does affect the system. Julia Lane actually provides some examples of that. Commonly known as "perverse incentives," it's what I've referred to as a mismatch between primary goals and secondary criteria: You have a primary goal. That might be fuzzy and vague. It's something like "good research" or "insight" or "improved understanding." Then you try to quantify it by use of some measure. If you use that measure, you have now defined for the community what success means. You dictate to them what "good research" is. It's 4 papers per year. It's 8 referee reports and 1 YouTube video. It doesn't matter what it is or how precise you make it; the point is that this measure in turn becomes a substitute for the primary goal:
"The Research Performance Progress Report (RPPR) guidance helps by clearly defining what agencies see as research achievements, asking researchers to list everything from publications produced to websites created and workshops delivered."

So you're telling me what I should be achieving? And then you want me to spend time counting my peas?

Measures for achievements are fine if you have good reason to believe that your measure (and you could adapt it when things go astray) is suitably aligned with what you want. But the problem arises in cases where you don't know what you want. Lane alludes to this when she mentions researchers who think their work is "too complex" (to be accurately measured) and that "science is unpredictable." But then she writes this is no reason to abandon metrics. I already said that metrics do have their use, so the conclusion cannot be to abandon them altogether. But merely selecting what one measures tells people what they should spend their time on. If it's not measured, what is it good for? Even if these incentives are not truly "perverse" in that they lead the dynamics of the system totally astray, they divert researchers' interests. And there's the rub: how do you know that diversion is beneficial? Where's the evidence? You with your science metric, please tell me, how do you know what are the optimal numbers a researcher has to aim at? And if you don't know, how dare you tell me what I should be doing?

The argument that I've made previously in my post "We only have ourselves to judge each other" is that what you should be doing instead is just make sure the system can freely optimize, at least within some externally imposed constraints that basically set the goals of research within the context of society. The last thing you should be doing is to dictate to researchers what the right thing to do is, because you don't know. How can you know, if they don't know?

And yes, that's right, I'm advocating laissez-faire for academia. All you out there who scream for public accountability, you have completely missed the point. It's not that scientists don't want to be accountable. There's just no sensible way to account for their work without that accounting hindering progress. Call that the measurement problem of academia if you like.

Bottom line: Before you ask for more scientific science metrics, deliver scientific evidence that the use of such metrics is beneficial for scientific progress to start with.

Sunday, March 28, 2010

Experiments with GPS

The biggest mystery in the universe is clearly the male brain. What happens if you leave my husband alone with a GPS receiver? He'll spend several hours measuring the position of a table. As far as I'm concerned, the table is on the patio. Besides this, I'm every product developer's nightmare since instead of reading manuals or tutorials I randomly click or push buttons till I've figured out what they're good for. That's a good procedure for finding every single way to crash the system, but usually not a particularly efficient one for actually using the device or software. Stefan instead goes and reads the manual!

In this case, he's been playing around with a brand new Garmin eTrex H GPS receiver. Among other things it allows you to store a time sequence of position measurements, a so-called track, which you can then upload to Google Maps to see which part of the forest you've been straying around in. This cute high-tech gadget is shown in the photo. On the display (click to enlarge) you can see a schematic map of the sky with the GPS satellites in view, and the strengths of the signals received from these satellites indicated in the bars at the bottom of the screen. Moreover, the display says that the accuracy of the position measurement is 5 meters.

Now, Stefan wanted to figure out the meaning of this accuracy, and whether it can be improved by averaging over many repeated measurements. So, he put the GPS receiver on said patio table and had the device measure the position every second over a duration of a bit less than 3 hours. Then he downloaded the measurement series with several thousand data points, plotted them, and computed the average value.
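In case you want to play with such a track yourself, here is a minimal sketch of what the analysis could look like (not Stefan's actual script; the file name and column layout are made up for illustration): read the logged positions, convert them to meters in a local tangent plane, and compute the mean, the scatter, and the number of distinct positions.

    import numpy as np

    R_EARTH = 6371000.0  # mean Earth radius in meters

    # two columns: latitude and longitude in decimal degrees, one row per second
    lat, lon = np.loadtxt("track.csv", delimiter=",", unpack=True)

    # flat-earth approximation around the mean position (instead of a proper UTM
    # conversion); good to well below a meter over a few tens of meters
    lat0, lon0 = np.radians(lat.mean()), np.radians(lon.mean())
    x = R_EARTH * np.cos(lat0) * (np.radians(lon) - lon0)  # east-west, in meters
    y = R_EARTH * (np.radians(lat) - lat0)                 # north-south, in meters

    print("mean offset (m):", x.mean(), y.mean())
    print("standard deviation (m):", x.std(), y.std())
    print("distinct positions:", len(np.unique(np.column_stack([x, y]), axis=0)))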

Remarkably, this simple experiment delivered clear evidence that spacetime is discrete! Shown in the figure below are the longitude and latitude of the data points, transformed to metric UTM coordinates, as blue crosses. The yellow dot is the average value, and the ellipse has half-axes of 1 standard deviation. Several thousand measurements correspond to just 16 different positions.

This 3-d figure shows the weights of the data points from the above figure:

Of course, the position measurements in the time series are not really statistically independent, so one has to be careful when interpreting the result. If one repeats a position measurement after the short time interval of just one second, one expects a very similar result since the signals used likely come from the same satellites which haven't changed their position much. Over the course of time, however, the satellites whose signal one receives are likely to change. To see this effect, Stefan computed the autocorrelation function of the measurement series, shown in the figure below:



The autocorrelation function, a function of the time delay τ between two measurements, tells you how long it takes until you can consider the measurements to be uncorrelated. The closer the autocorrelation function is to zero, the less correlated the measurements. A positive value indicates that measurements separated by τ tend to deviate to the same side of the average value, a negative value that they tend to deviate to opposite sides.
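For the curious, the normalized autocorrelation described above can be computed in a few lines (a sketch, continuing with the x and y arrays from the snippet further up): for each lag, correlate the deviation from the mean at time t with the deviation at time t + τ.

    import numpy as np

    def autocorrelation(series, max_lag):
        d = series - series.mean()
        norm = np.sum(d * d)
        return np.array([np.sum(d[:len(d) - k] * d[k:]) / norm
                         for k in range(max_lag + 1)])

    # e.g. autocorrelation(y, max_lag=5400) for lags up to 1.5 hours at 1 Hz sampling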

How do we interpret these results?

The origin of the discreteness of the measuring points is likely a result of rounding or some artificially imposed uncertainty. (The precision of commercial devices is usually limited so as to prevent their use for military purposes.) It remains somewhat unclear though whether the origin is in the device's algorithm or already in the signal received.

The initial drop of the autocorrelation functions in the figure means that after roughly half an hour, the position measurements are statistically independent. Why the autocorrelation function does not simply fall to zero, but instead indicates complete anticorrelation in the y coordinate (latitude) data for a delay of about 1.5 hours and seems to hint at a periodicity, is not entirely clear to us - more data and a more sophisticated analysis are clearly necessary.

Anyway, finally, to finish this little experiment, Stefan uploads the graphs to blogger. And then asks his wife to weave a story around it. The biggest mystery in the universe...

Saturday, March 27, 2010

So what do you do?

When I arrived in Canada for my stay at Perimeter Institute, the border officer marked my customs form and shoved me over to Canada Immigration. “I'm not immigrating!” I tried to say, but had to get in the back office and in another queue. I'm not sure why they suddenly found I look suspicious. I've been crossing the Canadian border dozens of times during the last years, and they usually just ask me what I do for a living and stamp my passport. Maybe they were confused by my expired work permit. In any case, I ended up with a young, blonde officer in an intimidating uniform who made his way through my passport.

“What did you do in the USA?” - “I worked there.”

“What do you do?" - “I'm a theoretical physicist.”

“Ah. What's the purpose of your trip to Canada?” - “Ooohm. Collaboration.”

I was actually about to say “I live here” before I recalled I recently moved to Sweden. After a day's trip, at 3 am, I'm not exactly in the best shape.

“Who do you work with?” - “Some people at Perimeter Institute. That's in Waterloo.”

“What do you work on?” Excellent question actually. What do I know? “Dark matter.” Or something like this. Maybe.

“Dark. Matter. Humm. What do you do for that?” - “Huh?” - “How do you work?” - “Well. We do calculations and things. Write papers. I mean, journal articles.”

“How do you write a paper?”

Border officer wants to know how to write a paper. Is there a restroom here?

“Oohm. We sit around and talk, and write things on the blackboard, and read others' papers and work through equations and sometimes there's numerical fits to do. And so.”

Guy grins. I wonder why. He probably has a degree in physics and two dozen published papers. I think I'm not doing this very convincingly. “Oh, and we think.” At least we try. Now stop babbling.

“What was the name of the place again?” He types something into the computer. Grins again. Clicks. Turns the monitor around and shows me this website.

“Here you are. You dyed your hair.” I did. What was the first hit? My blog? Then he stamps my passport.

“Welcome to Canada.”

Thursday, March 25, 2010

If you watch one video today...

...watch this: Nature by Numbers



via Grrlscientist. For details on the science behind the video, see here.

Wednesday, March 24, 2010

Update on the origin of highly energetic cosmic rays

In a previous post I reported on data from the Pierre Auger Observatory. They studied the correlation between the arrival directions of ultra-high-energy cosmic rays (with energies above 55 × 10¹⁸ eV) and known active galactic nuclei. They found that the observed distribution has less than 1% probability to have happened by chance if the arrival directions were isotropic (the same in all directions). Thus, it seems likely, if not overwhelmingly likely, that these highly energetic cosmic rays originate from active galactic nuclei. At least some of them. There are some puzzles in the data though, for example while Centaurus A has an excess of events, the Virgo cluster (home to M 87) does not. So it's not that easy.

The total number of such highly energetic events is however tiny. The previous analysis was based on a total of 27 events. Meanwhile, Auger has more data and they've redone their analysis with 31 additional events. The news is that there's no news. One might have expected the correlation to become better, but it didn't. The observed correlation still has a less than 1% probability of occurring by chance if the arrival directions were isotropically distributed, but that's it. (A back-of-the-envelope sketch of how such a chance probability is estimated is at the end of this post.) The situation is depicted in the figure below. Shown is the number of events and their angular distance to the nearest active galactic nucleus (in the VCV catalogue). The brown shaded region is the average expectation from isotropy. You see how the data has an excess at small angles. The hatched data shows events from directions close to the galactic plane (I suppose because the error might be higher there).

[Picture Credits: J.D. Hague, AUGER Collaboration]


You can find the details and the above figure in this paper:
    Astrophysical Sources of Cosmic Rays and Related Measurements with the Pierre Auger Observatory
    arXiv:0906.2347v2 [astro-ph.HE]
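As an aside, here is roughly how such a chance probability can be estimated (a sketch with placeholder numbers, not the Auger values; the collaboration's actual analysis additionally accounts for the detector exposure and penalizes for the scan over analysis parameters): if under isotropy each event has probability p of falling within the correlation angle of some AGN, the probability of finding k or more correlated events out of N is a cumulative binomial.

    from scipy.stats import binom

    N, k, p = 31, 12, 0.21        # events, correlated events, chance probability per event
    print(binom.sf(k - 1, N, p))  # probability of at least k correlated events under isotropy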

Monday, March 22, 2010

Publish or Publish? Or maybe Publish?

Thomson Reuters has compiled some interesting data comparing scientific activity across countries, meant "to inform policymakers about the changing landscape of the global research base." Shown below is the annual number of papers with at least one author address in the respective countries. The papers counted are all those indexed by Web of Science, covering most leading scientific journals. See China on the rise:



(Click to enlarge.) The USA is not in the above figure; its number of papers is higher still by about a factor of 4: during the period shown, the US output increased from 265,000 to 340,000 publications. Note that Russia's output actually decreased during that period. The increase in the number of Chinese publications is even more dramatic when you normalize each curve to its 1982 level:



China's share of world publications (2004-2008) is largest in materials science (20.83%), chemistry (16.90%), and physics (14.16%).

For details on the figures, download the full report on China here, and the report on Russia here.

Sunday, March 21, 2010

ESA's Living Planet Program

The European Space Agency (ESA) dedicates its efforts not only to observing space, but also to observing planet Earth from above. The "Living Planet Program" is part of these efforts. This program consists of several missions, among them the Earth Explorer missions, which themselves consist of one or several satellites. I won't even try to imagine the complexity of the administrative issues behind these layers of organization. According to the blurb

"The Earth Explorer missions are designed to address key scientific challenges identified by the science community blahblahblah breakthrough technology in observing techniques blabla peer-reviewed selection process blabla gives Europe an excellent opportunity for international cooperation blablabla"

The first satellite, GOCE (Gravity field and steady-state Ocean Circulation Explorer), was launched on March 17, 2009. Its task is to provide precision measurements of the Earth's gravitational field. The strength of the Earth's gravitational field depends on the local density of matter and thus contains geological information. The precision of these measurements is quite amazing. ESA's marketing department moreover lets us know that "the sleek, elegant aerodynamic design of GOCE immediately sets it apart from most other satellites." See illustration to the left. Book a test-drive at your nearest dealership.

The second satellite (see picture to the right) was launched on November 2, 2009. Dubbed SMOS (Soil Moisture and Ocean Salinity), it will provide global maps of (guess) soil moisture and ocean salinity, one of the purposes being to improve "extreme-event forecasting." I suppose "extreme event" is PR-speak for "natural disaster."

The third satellite, CryoSat-2, is scheduled to launch on 8 April 2010. Its mission is to measure variations in the thickness of continental ice sheets. It was originally scheduled to launch February 25, but the launch was postponed due to technical problems. The website lets us know the satellite "is currently being 'babysat' in the integration facilities by two team members."

The remaining three missions are in the planning phase. They comprise Swarm: three satellites measuring the Earth's magnetic field to provide insight into its inner dynamics; ADM-Aeolus: measuring wind fields and hopefully contributing to improvements in weather forecasting; and EarthCARE: measuring Earth's radiative balance. To me EarthCARE sounds more like an immunization program by Doctors without Borders.

ESA is an international organization with 18 member states: Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Norway, Portugal, Spain, Sweden, Switzerland and the United Kingdom, and Canada takes part in some projects under a cooperation agreement.

I find it amazing when I imagine what amount of knowledge and attention to detail is necessary to put a satellite into orbit. I'm not even sure what I'm more impressed by, the "breakthrough technology" or the complexity of global collaboration and organization that makes it possible.

Saturday, March 20, 2010

Interna

I'm back in Sweden, working on the jetlag. When I arrived last weekend, Sweden was still covered by snow. But the last days the snow has been melting in places, just to then freeze again, converting most walkways into outdoor skating arenas. Today however is the first day this year I am looking out of the window and do not see a closed white surface. Yes, spring is coming!



Some of you have been asking how the pothole situation in Stockholm compares to Waterloo. I haven't seen potholes. At least in the area I live there hasn't been any salt on the streets either. (You notice it when you get home and the shoes leave stains.) The absence of salt contributes noticeably to the outdoor skating experience. They do extensively put gravel on the streets, but that doesn't melt the snow, so it just adds layer upon layer upon layer. Walking on it is possible to some extent, but if you try to push a stroller, ride a bike, or pull a suitcase (guess which one is me), you can forget about it. The extensive salting Canadians engage in seems to wreck the upper layers of the pavement, contributing to pothole growth. However, in the last 2 years no-salt ice-melters have become more common. They are allegedly better for the environment. And for your dog. Don't know about the potholes though.

It seems however the Germans have had a lot of potholes this year due to the unusually cold winter. Well, what the Germans call potholes wouldn't even count as a surface crack in Canada. Note: if you can't store a turkey in it, it's not a pothole. But either way, it's caused considerable damage to the streets and most of the expenses fall to the municipalities. One German town that's short on money is thus seeking sponsorship for potholes. For EUR 50, they'll tar over a hole and put a plaque with your name on it. If you ever wanted to own a pothole - a GERMAN pothole - here's your chance!

Wednesday, March 17, 2010

What I learned today

We had an interesting talk today by Antti Niemi from Uppsala University modestly titled "Can Theory of Everything Explain Life?" It was about string theory of a somewhat different kind. The string in this case is a protein and what the theory should explain is its folding. The talk was basically a summary of this paper: "A phenomenological model of protein folding." In a nutshell the idea is to put a U(1) gauge theory on a discretized string (the protein), define a gauge-invariant free energy and minimize it. The claim is that this provides a good match to available data.
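To be clear, the following is not the model in the paper, just a toy version of the general strategy it describes: parametrize a discretized chain by bond and torsion angles, write down some energy for them (the one below is an arbitrary illustrative choice), and minimize it numerically.

    import numpy as np
    from scipy.optimize import minimize

    N = 20  # number of segments of the discretized chain

    def toy_free_energy(params):
        kappa, tau = params[:N], params[N:]               # bond and torsion angles
        stiffness = np.sum((kappa[1:] - kappa[:-1])**2)   # penalize abrupt changes in bending
        potential = np.sum((kappa**2 - 1.0)**2)           # double-well potential in the bond angle
        coupling = 0.1 * np.sum(tau**2 * kappa**2)        # bend-torsion coupling
        return stiffness + potential + coupling

    x0 = np.random.default_rng(0).normal(size=2 * N)      # random initial configuration
    result = minimize(toy_free_energy, x0)
    print("minimal toy free energy:", result.fun)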

I know next to nothing about protein folding, so it's hard for me to tell how good the model is. From the data he showed, I wasn't too impressed that one can fit a scatter plot with two maxima with a function that has 5 free parameters, but then that fit is not in the paper and I didn't quite catch the details. One thing I learned from this talk though is that PDB sometimes doesn't stand for Particle Data Book, but for Protein Data Bank. If you know more about protein folding than I do, let me know what you think. I found it quite interesting.

Something else that I learned in the talk is that the DNA of the bacterium Escherichia coli is a closed string rather than an open string (see picture). I think I had heard that before. There are enzymes that act on the DNA, so-called topoisomerases, that don't change the DNA sequence but the topology of the string. In other words, these enzymes can produce knots. Simple knots, but still. I think I had also heard that before. However, I thought the topology-change of the DNA is a process that is useful for the winding/unwinding and reading/reproducing of the DNA. It seems however that the topology of the DNA affects the synthesis of proteins, in particular the folding and function of the proteins. This probably isn't really news for anybody who works in the field, but I actually didn't know that the topology of the DNA, not only its sequence, has functional consequences. Alas, that flashed by only briefly and wasn't really the content of the talk. But I find it intriguing.

Monday, March 15, 2010

The Frog Spawn Picture

Raise your hand if you've seen this picture before:

Illustration of a heavy ion collision among two lead nuclei at a beam energy of 160 GeV per nucleon. Generated by Henning Weber, ITP Frankfurt, using the UrQMD (Ultra-relativistic Quantum Molecular Dynamics) code.


It's an illustration of a heavy ion collision among two lead nuclei at a beam energy of 160 GeV per nucleon. I've seen this picture over and over again in talks, leaflets, and even printed in books, typically as a motivation for why heavy ion physics is the thing to do and the quark gluon plasma is cool. Followed by a praise of whatever model it is that the speaker/author is working on.

Now raise your hand if you know who made the picture and how it was made?

Nobody? Well, there's the rub. As often as I've seen the picture, just as often the credits were missing. If it is credited to anybody, it's credited to a CERN press release of February 2000 that summed up the results of the CERN-SPS heavy-ion program before the start of RHIC at Brookhaven. The picture was used as an illustration for that.

As it happens, Stefan and I know exactly who spent weeks on this figure and how it came about, since at that time we were sharing an office with the unknown, uncredited physicist behind it. The picture was made by Henning Weber, then a PhD candidate at the Institute for Theoretical Physics in Frankfurt. Henning has meanwhile left academia, which is why you've probably never heard of him. The picture is a visualization of data generated with a numerical model called UrQMD (Ultra-relativistic Quantum Molecular Dynamics). From the code's numerical output, which is usually just a looong list of numbers, Henning made a couple of videos showing what a heavy ion collision looks like in this model. He still has his website up; you can look at the videos here.

The grey balls are the color-neutral hadrons that the initial nuclei consist of. Shown is a not-quite-central collision, one in which the nuclei have a non-zero impact parameter. The red, blue and green balls represent temporarily unconfined quarks in the collision region. After the collision, the quarks hadronize again. Well, actually, the UrQMD code does not explicitly treat color degrees of freedom, so the colors are an artistic rendition of what are technically called "preformed hadrons". Below is a screenshot of one of Henning's movies. Same picture, but with modest credits to the Frankfurt UrQMD group at the bottom:

Illustration of a heavy ion collision among two lead nuclei at a beam energy of 160 GeV per nucleon. Generated by Henning Weber, ITP Frankfurt, using the UrQMD (Ultra-relativistic Quantum Molecular Dynamics) code.


So what happened? Well, Henning was asked by his supervisor to provide a visually appealing picture for an upcoming CERN press conference. Henning sat down and spent some days and nights on the picture that he would later refer to as the "frog spawn picture," because said supervisor insisted on making the balls semi-transparent giving them the appearance of fish eggs. This association was even stronger after the relativistic squeeze of the nuclei was removed. More accurately, the nuclei should be flattened to about 1/10 of the initial size in the direction of motion – the Lorentz gamma factor for the SPS fixed-target collisions at 160 GeV/nucleon is, in the center-of-mass frame, γ = 9.2.
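(For those who want to check that number: with the nucleon mass m_N ≈ 0.94 GeV and the beam energy E_lab = 160 GeV per nucleon on a fixed target,

    \sqrt{s_{NN}} = \sqrt{2 m_N E_{\rm lab} + 2 m_N^2} \approx 17.4~\mathrm{GeV}, \qquad
    \gamma_{\rm cm} = \frac{\sqrt{s_{NN}}}{2 m_N} \approx 9.2 \,.

)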

The picture then was sent to CERN and used in the press release. And somewhere along the line the bottom with the credits vanished.

Saturday, March 13, 2010

Poll: Do you believe in extraterrestrial life?

I always take the overnight flight from Toronto to Frankfurt. I hate overnight flights because I never manage to sleep on planes, but the flight has a bonus: It lands in Germany around sunrise, so one gets a spectacularly beautiful top-down view of clouds bathed in orange to pink colors. This morning, we passed through several thin layers of clouds before we landed in a grey and foggy, typically German spring morning. It's still grey and foggy now, and my inner clock is wondering if it's morning or evening and if I've had breakfast or if the airline muffin at 4am doesn't count. Coffee, I think. Coffee is always a good idea.

In any case, gazing at the planet from above always makes me think about how fragile our life-enabling environment is. How thin the layer of gas that we breathe. How lucky we are to be at the right distance from the sun. How amazing the sheer number of lifeforms that came about in all their diversity, now crammed on the planet's surface in a not always peaceful coexistence. And of course there's the question: did this happen elsewhere? So here's a weekend poll. I'm a believer. I think there's intelligent life out there, and sooner or later we'll find it. Or it will find us. What do you think?

Wednesday, March 10, 2010

This and That

  • The Nobel Foundation goes YouTube.
    Have you ever wanted to ask a Nobel Laureate a question? Now, here's your chance! Ask a Nobel Laureate is offering you a unique opportunity to communicate with some of the world's most brilliant minds. The current participating Nobel Laureate is Albert Fert, Nobel Prize in Physics 2007 "for the discovery of Giant Magnetoresistance", which forms the basis of the memory storage system found in your computer.

    Albert Fert will answer a selection of your uploaded video questions [...] Upload your video question no later than March 19, 2010.

  • Online Colleges has put together a list of "100 Amazing Videos for Teaching and Studying Physics" that might be worth a look.

  • Definitely worth a look is this totally amazing music video to OK Go's song "This Too Shall Pass."

  • Louisiana State University has two openings for faculty positions in loop quantum gravity. Details are here. [Thanks to Christine].

Sunday, March 07, 2010

What's in a book?

In the comments to my recent post "Addicted!," Steven, Christine and I were discussing the value of science books. Steven let us know he has "a sick addiction to Science books" and Christine writes "I want to die reading a book." Around the same time, Sean Carroll over at Cosmic Variance polled his readers for what got them interested in science. The results show that the biggest chunk of the cake is science books, followed by SF/Fantasy which I suppose also has a book fraction. And in a recent Nature Editorial, titled "Back to books," it is argued that "Researchers should be recognized for writing books to convey and develop science," and
"[R]esearchers should embrace the book as another means of expressing not only their insights but also their visions.... It is time to bring the book back into the science mainstream. This needn't be a mass movement: just a dedicated few, but more of them, could fulfil the reasonable hope that their books will inspire a new generation. And they should be encouraged to do so."

I had a mixed reaction to the Nature article. On the one hand, I have previously argued that researchers' contributions to advancing scientific discourse inside their community, across communities and with the public should be better acknowledged. As should other community services and public services in general. I proposed that instead of expecting researchers to be all-rounders who do a little of everything, which is then generally done sloppily, one should allow for specialization in tasks, which would have the benefit of letting researchers focus on what they're best at. Besides that, it could counteract specialization in topic.

On the other hand I was wondering who is supposed to read all these books. How many books does the world really need? Isn't everybody writing a book, or wanting to write one, anyway? I would be concerned that with more popular science books and textbooks coming out, these would just become increasingly specialized. Which publisher wants to print the 50th criticism of string theory, whether or not it's an expression of somebody's insight?

I can still picture myself standing in the library as a teenager, staring at all these aisles full of books and books and more books. Where to start? I couldn't possibly read them all, so now what? 20 years later though, things have changed. Today you can quickly and easily check online for book recommendations and learn about other people's opinions. It might not always work too well, but you don't have to rely on one librarian and otherwise make good guesses. The other thing that I didn't know when I was 13 is that it doesn't matter much where you start. If you trace back references you'll find what you were looking for (unless you really started in an entirely disconnected part of the citation network).

Thus, I think I shouldn't be too concerned that we'll end up having too many books. So what do I look for in a popular science book?
  • Information
    Most obviously, a book isn't a random string of ASCII characters, but contains information. On the other hand, research papers also do that. There is the issue of subscriptions though. While I'm fortunate to have access to the majority of subscription journals, a book can provide information that might not otherwise be freely available to everyone.

  • Knowledge
    But besides that, a book ideally provides the reader with an overview on a subject, and with explanations beyond the information. With a well written book one can draw upon the knowledge of an expert who is familiar with his field.

  • Time and effort
    A book is usually more carefully edited than journal articles. Ideally, great care has been taken to erase any sloppiness and to make it accessible to the expected readership. That includes avoiding gaps in the argumentation, inconsistent or unintroduced terminology, and disconnected branches of arguments.

  • Personal account
    A book can do a good job making science more human. It's a place to tell the story behind the result, to make science fun, add a dose of humor and some anecdotes. To me that's one of the main reasons to read a popular science book rather than, say, a review article in a journal. I want to know if the researchers ever had any doubts and what difficulties they faced. What do they believe but don't know? What was their inspiration? I want to know what they were thinking, what they were or are concerned about, and what their hopes and dreams are.

  • The big picture
    Finally, in a book it is possible to embed one's research into the big picture in a way that's not possible in research articles. What world-view has the writer obtained from his or her research? What relevance do they believe their results could have, in a decade, in a century, in a thousand years? What is their vision? How does their research relate to other disciplines, and do they see any connections? How do they see themselves in the history of science? What did their research change their opinion on?

These are points I value very much in books. Textbooks are somewhat different. One of the big disadvantages of textbooks in active research fields is that they are rapidly outdated. The Living Reviews are a great step towards alleviating this problem. Unfortunately, I find them hard to read. I'd rather have a printed textbook with an update option, e.g. a website where one could download chapters added over the course of the years.

Thursday, March 04, 2010

The Box-Problem in Deformed Special Relativity

As you know, I'm presently visiting Perimeter Institute. I was asked to speak in today's Quantum Gravity group meeting about one of my recent papers:
Since it's not a technically very demanding argument, I thought writing up my thoughts in the form of a blog post has a double benefit: It will help me organize what I'm about to say, and you'll get a summary of the paper. I will leave out some subtleties here; for details please look at the paper. This paper was the result of an argument I had about the usefulness or uselessness of thought experiments, which triggered this earlier post. In the following I will explain why combining an energy-dependent speed of light with observer-independence is inconsistent with experimentally well-established theories, to much higher precision than previously thought.

Deformed Special Relativity

An energy-dependent speed of light without the introduction of a preferred frame is a feature of what's become known as "Deformed Special Relativity" (DSR). If the energy-dependence is first order in the photon's energy over the Planck energy, then it would become observable in the travel time of highly energetic photons from distant gamma ray bursts. This prediction has received a considerable amount of attention. We previously discussed it here, here, here, here and most recently here. The reason that a tiny Planck scale effect grows to observable magnitude is that it adds up over the long distance the photons travel.
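A rough estimate shows why (ignoring cosmological expansion, and setting the dimensionless coefficient of the modification to one): for a photon of energy E from a source at distance D, the delay relative to a low-energy photon is about

    \Delta t \simeq \frac{E}{E_{\rm Pl}} \, \frac{D}{c} \sim \frac{10~\mathrm{GeV}}{10^{19}~\mathrm{GeV}} \times 10^{17}~\mathrm{s} \sim 0.1~\mathrm{s},

using E ≈ 10 GeV and D ≈ 1 Gpc, so D/c ≈ 10^17 s. Tiny compared to the total travel time, but within reach of timing measurements on gamma ray bursts.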

The amazing thing about this prediction is that there is actually no (agreed upon) DSR model in position space. DSR, pioneered by Giovanni Amelino-Camelia and later Kowalski-Glikman, Lee Smolin and João Magueijo, is a theory that is formulated in momentum space. It is motivated by the desire to modify the usual Lorentz transformations such that the Planck energy is an invariant of the transformation and appears the same to all observers. This works very well in momentum space. One obtains in these models a modified dispersion relation, from which the energy-dependence of the speed of light follows.
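A commonly used example (conventions and the exact numerical coefficient differ between models): for a massless particle, a dispersion relation modified at first order in the energy over the Planck energy,

    E^2 \simeq p^2 c^2 \left(1 - \xi\,\frac{E}{E_{\rm Pl}}\right)
    \quad\Rightarrow\quad
    v = \frac{\partial E}{\partial p} \simeq c\left(1 - \tilde{\xi}\,\frac{E}{E_{\rm Pl}}\right),

with a coefficient of order ξ whose precise value depends on the form of the modification; for positive ξ, higher-energy photons are slower.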

It is indeed interesting that it is possible to do that in an observer-independent way. The new transformations ensure that the dispersion relation in its new form is the same for all observers. The one important point to take from this is that the energy-dependence of the speed of light is also invariant when changing reference frames. Since the energy changes, this means it's the functional relation that all observers agree on: if I change from one inertial frame to another and the energy of a photon changes (by use of the new transformation) from E to E', then the speed of the photon changes to c'(E'), which is the same as c(E'). That's what it means for the speed to be observer-independent. You cannot achieve this invariance of different speeds with ordinary Lorentz transformations, but the "deformed" ones do the trick.

But that's all in momentum space. So how was the prediction made for the photons propagating from the gamma ray burst towards Earth, arguably in position space? Well, one just used the c(E) that one obtains from the momentum space treatment. Now there are several technical arguments one can raise for why this is not a good idea (see eg this paper and references therein), but it later occurred to me there's a simpler way to see why this doesn't work. For this, let's just assume that one can indeed have c'(E') = c(E), and I'll tell you what a mess you get.

The Box Problem

Here's a very simple thought experiment. It is an extreme case but exemplifies the problem that is also present in less extreme situations. Suppose you have a photon with Planck energy and a speed of light that decreases with increasing energy. Since the function c(E) should be monotonic and we don't want to introduce another scale, it's plausible that it drops to zero. So a photon with Planck energy, which is now the maximal possible energy, has zero speed; it doesn't move. I take that photon and put it inside a box. The box represents a classical, macroscopic system, one for which we know there's nothing funny going on. It also takes into account that there's a finite precision to our measurements (the size of the box).

Now I change into a different reference frame, one in which the box moves relative to me. The photon's energy is by construction invariant under the transformation. If its speed is also invariant, it means the photon is at rest in the other frame as well. This means, however, that it can't remain inside the box. This is sketched in the space-time diagram below; the grey shaded area is the box, the red dashed line is the funny photon:


This peculiar transformation behavior was clear to Giovanni Amelino-Camelia already since the early days of DSR. In a 2002 paper he wrote:
"It is unclear which type of spacetime picture is required by the DSR framework. We should be prepared for a significant “revolution” in the description of spacetime ... In typical DSR theories ... one observer could see two particles ... moving at the same speed and following the same trajectory (for [this observer the particles] are “near” at all times), but the same two particles would have different velocities according to a second observer so they could be “near” only for a limited amount of time."

So far, so good. Then he writes
"For the particles we are able to study/observe, whose energies are much smaller than the Planck scale, and for the type of (relatively small) boosts we are able to investigate experimentally, this effect can be safely neglected."

And this unfortunately is not true for the following reason.

In the above figure you might not be too worried about the lines diverging. After all, it's hard to get two worldlines to be and stay perfectly parallel anyway, and it could take a very long time for them to diverge. But let's bring a third particle into the game. You can think of it as an electron that interacts with the photon inside the box, which could be a detector. For simplicity, think of that particle as being low energetic, such that additional DSR effects are irrelevant (that's not a crucial point). Then look at the same system from a different rest frame (the green line is the electron):

What happens is that what was one space-time event (the black circle) in one rest frame splits up into three different events. This generically happens if you have three lines: it is a very special case if they all meet in one point. Make a small change to one line (a different transformation behavior than the other lines), and they'll fail to meet in one point, instead pairwise meeting in three points. What this means is that not only is there no clear definition of "rest" or of what it means for two particles to be "near." It's far worse: the notion of an event itself is ill-defined; it becomes an observer-dependent notion. Note that with the finite size of the box I've acknowledged that there's some fundamentally finite precision with which we can decide what's an event. But the DSR non-locality is well above that limit. The non-locality is in fact macroscopically large.
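Here is a toy numerical illustration of that event splitting (units with c = 1, all numbers purely illustrative). The assumption, as in the extreme case above, is that the Planck-energy photon has zero speed and that this zero speed is the same for all observers, while the low-energy electron transforms ordinarily.

    import numpy as np

    def boost(t, x, v):
        """Ordinary Lorentz boost of an event (t, x) to a frame moving with velocity v."""
        g = 1.0 / np.sqrt(1.0 - v**2)
        return g * (t - v * x), g * (x - v * t)

    u, t0, v = 0.5, 1.0, 0.3   # electron speed, crossing time, and velocity of the new frame S'

    # In frame S: the photon sits at x = 0 (worldline through the event (0, 0)),
    # the electron moves as x = u * (t - t0), so both meet at the event A = (t0, 0).
    A_prime = boost(t0, 0.0, v)                 # Lorentz image of the meeting event in S'

    # In S': by the DSR assumption the photon is still at rest, on the line x' = 0
    # through the boosted event (0, 0) -> (0, 0). The electron transforms ordinarily:
    u_prime = (u - v) / (1.0 - u * v)           # relativistic velocity addition
    t_meet = A_prime[0] - A_prime[1] / u_prime  # where the electron worldline reaches x' = 0
    B_prime = (t_meet, 0.0)

    print("Lorentz image of the meeting event:  ", A_prime)
    print("actual photon-electron meeting in S':", B_prime)
    # The two disagree: what is a single interaction event in S is not a single event in S'.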

Consequences

And that's bad. That's really bad, because it totally messes up particle physics in regimes where we've tested it to very high accuracy. Whether two particles did or didn't interact inside a detector had better not be observer-dependent, because that interaction could have real-world consequences. You can't talk it away. In my paper the particle interaction in the box triggers a bomb. It blows up the lab in one frame and not in the other.

The later sections of my paper study a more realistic setting of the same problem with actually achievable particle energies and relative velocities. It turns out that, if DSR had a first-order modification of the speed of light that was indeed observable in the measurements from gamma-ray bursts, then the mismatch in the location of the events would be of the order of a km. Not exactly what I'd call safely negligible. The irony is that what makes this mismatch so large is exactly what makes the time delay of the gamma ray burst's photons observable in the first place: the long distance traveled.

The setup of the experiment in my paper might seem rather complicated, but that's because I've been very careful to choose a situation in which there's no loophole. So far nobody has found one. It follows from my considerations that DSR with an energy-dependent speed of light that has modifications to first order in the energy over the Planck mass is ruled out.

It should be emphasized that the problem outlined here does not occur in theories with a modified dispersion relation that actually break Lorentz invariance and introduce a preferred frame. But then, there are already tight constraints on Lorentz-invariance violations. There are also versions of DSR in which the speed of light is not energy-dependent. These form a subclass of the general case and also do not suffer from the problem discussed here. This subclass is what I've been working on for some while. Needless to say, I still think it's the only case that makes sense :-)

I'm curious to hear what arguments will be raised in the discussion this afternoon.

Wednesday, March 03, 2010

Gravity is Entropy is Gravity is...

I've been thinking recently about Erik Verlinde's paper "On the Origin of Gravity and the Laws of Newton" (arXiv:1001.0785v1 [hep-th]). Some of you will have noticed it's been discussed in the blogosphere before. In a post on "Expanding Crackpottery," Peter Woit found it to be a "sad example of 'unconventional physics'" and expresses his concern about "prominent string theorists taking up dubious lines of research." Luboš Motl finds some links to "octopi swimming in the spin foam," Robert Helling writes bluntly "If it were not for the author, I would have rated it as pure crackpottery," and of course New Scientist reports.

Now the idea that gravity has a relation to thermodynamics is not new. It's been around for decades and has maybe most clearly been pointed out by Ted Jacobson. I thus fail to see what's so particularly offensive about Verlinde's recent contribution. Jacobson's paper however is 15 years old, and not much seems to have come out of it, at least not that I know of. So I'll admit that I read Verlinde's paper feeling obliged to follow what's going on in my research area, thinking it's either wrong or meaningless or both.

In the course of my reading it became clearer to me what was the actual content of the paper. And what not. If you are interested, I have written up my notes and uploaded them here:

Here is a short summary: With a suitable definition of quantities, describing gravity by a Newtonian potential and describing it as an entropic force in terms of an "entropy," a "temperature" and "holographic screens" are equivalent. One can go back and forth between the two descriptions. The direction Verlinde has shown in his paper is the more difficult and more surprising one. That it works both ways relies on the particularly nice properties that harmonic functions have. Formally, one can also do this identification for electrostatics. In this case however one finds that the "temperature" can be negative and that the "entropy" can decrease without any work being done.
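For orientation, this is the chain of identifications in Verlinde's original argument (in his notation; as noted below, not all of these ingredients are actually needed): the entropy change for a particle of mass m displaced by Δx near the screen, the Unruh temperature, and the entropic force,

    \Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x, \qquad
    k_B T = \frac{\hbar a}{2\pi c}, \qquad
    F\,\Delta x = T\,\Delta S \;\;\Rightarrow\;\; F = m a,

and, for a spherical screen of area A = 4πR² carrying N = Ac³/(Għ) bits, equipartition E = ½ N k_B T together with E = Mc² turns the same entropic force into F = GMm/R².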

Some assumptions made in the paper are actually not necessary. For example, the equipartition theorem for "bits on the screen," and the explanation for why the change in entropy is proportional to the distance the particle is shifted. One doesn't actually need that. The equivalence works pretty well for the case of Newtonian gravity, but when it comes to General Relativity things are getting somewhat murky.

The biggest problem is that Verlinde's argument to show that gravity can be derived from a thermodynamical description already makes use of quantities that belong on the gravity side. To really show that gravity can be obtained from thermodynamics, one would have to express the gravitational quantities in terms of thermodynamical ones. Unfortunately, Verlinde does it the other way 'round. He also has to use a lot of previous knowledge from General Relativity. It does not seem entirely impossible to actually do this derivation, but there are some gaps in his argument.

In any case, let us assume for a moment that these gaps can be filled. Then the interesting aspect clearly is not the equivalence. The interesting aspect is that the thermodynamical description of gravity might continue to hold where we cannot use classical gravity, and that it might provide a bridge to a statistical mechanics description of a possibly more fundamental underlying theory. The thermodynamical description might then be advantageous for example in regimes where effects of quantum gravity become strong. Maybe more banally, you might ask what's the gravitational field of a superposition state, a state that is neither "here" nor "there." It can't have a gravitational field in the classical sense. But the thermodynamical description might still work. However, the treatment Verlinde offers is purely classical. It's not even quantum particles in a classical background, it's classical particles in a classical background. So there's still a way to go.

Taken together, I can see the appeal. However, at the present level it is very unclear to me what the physical meaning of the quantities used is. I would like to add that Erik Verlinde has kindly replied to my queries and patiently clarified several points in his argument.

Update March 04: The notes are now on the arXiv: 1003.1015v1 [gr-qc]