Tuesday, February 28, 2012

Everything is amazing and nobody writes errata

I am working on a review which had me digging out papers going back to 1920. It is very interesting to see the ideas that were suggested and then found not to work, or were forgotten and then rediscovered.

In an objective sense the papers become less readable the older they are. For one, because at the beginning of the 20th century many papers were still written in French, German or Russian, and the notation and terminology have changed since. Added to this, back then people were discussing problems whose answers we know today, and it can be difficult to follow their trains of thought. And then there's the physical readability that deteriorates. Printouts of scans, especially in small fonts with the toner low, can give me a headache that is not conducive to my attention.

On one scanned paper that I read, overactive software had removed background noise, and in the process also erased all punctuation marks. In the text that was merely annoying, but unfortunately the authors had used dots and primes for derivatives.

However, in a subjective sense the papers seem to be getting less readable the newer they are, and almost discontinuously so. The style of writing has been changing.

Everything written before roughly 1990 is carefully motivated, edited, referenced and explained. One also very frequently finds errata, or constructive comments in the next issue of the journal, a practice that seems to have fallen somewhat out of fashion after that. By the late 1990s, most papers are difficult to understand if one doesn't happen to work on a closely related topic or at least follow it: the motivation is often entirely missing or very narrow, common arguments are omitted and apparently just assumed to be known, variables are never introduced and assumed to conform to some standard notation (that in 100 years nobody will recall), and technical terms are neither explained nor referenced, yet hardly anybody ever seems to cite the textbooks that would explain them.

Needless to say, this is not the case for all papers, there are exceptions, but by and large that has been my impression. It's actually not so bad when you are familiar with the topic. In fact, I am often relieved if I don't have to read yet another introduction that says the same thing! But it is likely that to the reader not familiar with the topic, which in some decades might be pretty much all readers, the relevance and argumentation remain unclear.

So then I've been wondering why it seems that by the mid-1990s the style in which scientific papers were written changed. Here are some explanations that I came up with:
  1. That's just me. Everybody else thinks that the newer a paper, the more understandable it is, and people back then only wrote confusing garble.

  2. Selection bias. The old papers that are still cited today, or are at least at the root of citation trees, are the most readable ones.

  3. Specialization. There are many more physicists today than in 1920, and calculations have been getting more complicated. It would however take up too much space to explain all technical details or terminology from scratch, which makes papers increasingly opaque. There is certainly some truth to that, but it doesn't quite explain why the change seems to have happened so suddenly.

  4. Typesetting changes. Stefan pointed out that in the 1990s LaTeX became widely used. Before that, many papers were typed by secretaries on typewriters and the equations put in by hand, then the draft was sent by mail. The ease and speed of the process today breeds carelessness.

  5. Distribution changes. The pulse of academic exchange has been quickening. Today, researchers don't write to be understood in 100 years, they write to get a job next year. Errata or comments don't work towards that end. They don't add motivations because the people they want to reach share their opinion that the topic is relevant anyway.

Most likely it's a combination of the above. What do you think?

I've been wondering whether the future of the paper is an assembly of building blocks. Why does everybody have to write the motivation or the explanation of techniques all over again? I am thinking that in ten years, when you download a paper, you will be able to choose the level of detail that you want, and then get the paper customized for the knowledge you bring. That won't always work, but for research fields in stages 3 and 4, it might work quite well.

Saturday, February 25, 2012

Gambini and Pullin: A First Course in Loop Quantum Gravity

Shortly after Christmas, I found in my mailbox a copy of Rodolfo Gambini and Jorge Pullin's new book "A First Course in Loop Quantum Gravity" and a note "Sent with compliments of the Editor." It's a lean book with a total of 183 pages, and that includes the index and the references.

A book like this has been long overdue. There are Rovelli's and Thiemann's books of course, and there is Kiefer's book too, but to read one of these you have to make a major commitment, both in terms of time and in terms of finances.

Gambini and Pullin's book, in contrast, is a masterwork of omission, and as every science blogger knows, that's a very under-appreciated art. The book comes at an affordable US$45.99, which, for a hardcover textbook, really isn't much.

You need to bring only basic knowledge to benefit from the book. It starts very slowly, with special relativity and electromagnetism, and goes on to general relativity, Yang-Mills theory and quantum field theory. That is about the first half of the book. The rest then deals with the reformulation of general relativity in terms of Ashtekar's variables, the quantization, the loop representation, loop quantum cosmology and black holes, and a word or two on spin foams and the problem of time.

I think this book will be very useful for every undergraduate student who is considering working on quantum gravity and wants to know some more about the topic. Needless to say, the brevity comes at a price: many intricate points are covered only very superficially and the reader is referred to the literature. If you really want to know all the details, this is not the right book for you. But if you want to find out whether you want to know the details, this is the place to start.

Notably, the book finishes with a chapter on open issues and controversies, stating clearly and honestly that "Loop Quantum Gravity is an incomplete theory. We do not know if the current formulation... really describes nature or not." But, Gambini and Pullin add, "[I]t is refreshing that in loop quantum gravity there are not many things that can be tweaked to make the theory agree with experiment... This is in contrast to string theory, where the theory has evolved considerably over time and is now perceived by some as having too much freedom to be a predictive theory."

You can also read there:
"In this modern day and age a lot of the disagreements about loop quantum gravity are aired in blogs and discussion groups on the Internet. In general we do not recommend people learn from the controversy there. Because blogs and Internet discussion groups are unedited opinions of people, some of which have questionable training or knowledge of the details, and are usually written relatively quickly, in many circumstances they contain highly inaccurate statements."

With that note of caution, in this relatively quickly written and unedited blogpost, I recommend Gambini and Pullin's book if you want to know more about Loop Quantum Gravity without getting drowned in details.

Gloria also liked it.

Friday, February 24, 2012

Six years Backreaction

It's been six years since I sat in my Santa Barbara office on a Friday morning, clicking the "publish" button on Blogger's editor for the first time. It's been six eventful years, and I want to take the opportunity of this anniversary to thank all our readers and especially the commenters for their contributions!

Over the years, the content of this blog has slightly changed. At present it's mainly a reprocessing of books/papers/articles I read and find interesting enough to discuss, mixed with the occasional family update. I have diverted most of my link dumps to Twitter, Google plus, and Facebook.

Some months ago, I finally updated the template and I've found it's an improvement. Unfortunately, the comment count in the sidebar couldn't be carried over to the new template. But the archive list is much cleaner now and the labels now work properly.

I was looking at our blog archive the other day, and was wondering whether it would be interesting to recycle some of the older posts (with updates if applicable). I don't want to bore you with repetitions, but my guess is that most of our readers have not been around for six years, and very few people ever look at the archive. Can you give me some feedback on that? Would you be interested if I picked out the occasional post from the archive and brushed it up for a new discussion?

Wednesday, February 22, 2012

Pragmatic Paradigms

I used to consider myself a pragmatist. But some months ago I learned that pragmatism is an American school of thought, which threw me into an identity crisis. Germany is after all "das Land der Dichter und Denker," the country of poets and thinkers. I'm not living up to my ancestry. Clearly, I have to reinvent myself. The Free Will Function is testimony to my attempt. There doesn't seem to be much that is less pragmatic than debating the existence of free will. Except possibly the multiverse.

My attitude towards the landscape problem had been based on pragmatic neglect: I can't figure out what this discussion is good for, so why bother? The landscape problem, in one sentence, is that a supposedly fundamental theory delivers not only a description of the one universe we inhabit but also of many, maybe infinitely many, additional universes. The collection of all these universes is often called the multiverse.

There are many versions of such multiverses; Max Tegmark has layered them into 4 levels and Brian Greene has written a book about them. String theory infamously won't let its followers ignore the inelegant universes, but everybody else can still ignore the followers. At least that was my way of dealing with the issue. Until I heard a talk by Keith Dienes.

Dienes has been working on making probabilistic statements about properties of possible string theory vacua, and is one of the initiators and participants of the "string vacuum project." Basically, he and his collaborators have been randomly sampling models and looking at how often they fulfill certain properties, like how often one gets the standard model gauge groups or chiral fermions, and whether these features are statistically correlated. I can't recall the details of that talk; you can either watch it here or read the paper here. But what I recall is the sincerity with which Dienes expressed his belief that, if the landscape is real, then in the end probabilistic statements might be the only thing we can make. There won't be any other answer to our questions. Call it a paradigm change.

Dienes might be wrong of course. String theory might be wrong and its landscape a blip in the history of physics. But that made me realize that I, as many other physicists, favor a particular mode of thinking, and the landscape problem just doesn't fit in. So what if he's right, I thought, would I just reject the idea because I've been educated under an outdated paradigm?

Now, realizing that I'm getting old didn't make me a multiverse enthusiast. As I argued in this earlier post, looking for a right measure in the landscape, one according to which we live in a likely place, isn't much different from looking for some other principle according to which the values of parameters we measure are optimal in some sense. If that works, it's fine with me, but I don't really see the intellectual advantage of believing in the reality of the whole parameter space.

So while I remain skeptical of the use of the multiverse, I had to wonder whether Dienes is right, and I am stuck with old-fashioned, pragmatic paradigms.

I was trying to continue to ignore string theorists and their problems. Just that, after trying for a while, I had to admit that I think Tegmark and Greene are right: The landscape isn't a problem of string theory alone.

As I've argued in this post, every theory that we currently know has a landscape problem, because we always have to make some assumptions about what constitutes the theory to begin with. We have to identify mathematical objects with reality. Without these assumptions, in the end the only requirement that is left is mathematical consistency, and that is not sufficient to explain why we see what we see; there is too much that is mathematically consistent but does not describe our observations. All theories have that problem; it's just more apparent with some than with others.

Normally I just wouldn't care but, if you recall, I was trying not to be so pragmatic. This then leaves me with two options: I can either believe in the landscape, or I can believe that mathematics isn't fundamentally the right language to describe nature.

While I was mulling over German pragmatism and the mathematical effectiveness of reason, Lee Smolin wrote a paper on the landscape problem.

The paper excels in the use of lists and bullet points, and argues a lot with principles and fallacies and paradigms. So how could I not read it?

Lee writes we're stuck with the Newtonian paradigm, a theme that I've heard Paul Davies deliver too. We've found it handy to deal with a space of states and an evolution law acting on it, but that procedure won't work for the universe itself. If you believe Lee, the best way out is cosmological natural selection. He argues that his approach to explaining the parameters in the standard model is preferable because it conforms to Leibniz's principle of sufficient reason:
    Principle of Sufficient Reason.
    For every property of nature which might be otherwise, there must be a rational reason which is sufficient to explain that choice.

That reason cannot be one of logical conclusion, otherwise one wouldn't need the principle. Leibniz explains that his principle of sufficient reason is necessary "in order to proceed from mathematics to physics."

Lee then argues, basically, that Leibniz's principle favors some theories over others. I think he's both right and wrong. He is right in that Leibniz's principle favors some theories over others. But he's wrong in thinking that there is sufficient reason to apply the principle to begin with. The principle of sufficient reason itself has a landscape problem, and in addition it is strangely anthropocentric.

As Leibniz points out, the "sufficient reason" cannot be a strictly logical conclusion; for that one wouldn't need his principle. The sufficient reason can eventually only be a social construct, based on past observation and experience, and it will be one that's convincing for human scientists in particular. It doesn't help to require the sufficient reason to be "rational," as this is just another undefined adjective.

Take as an example the existence of singularities. We like to think that a theory that results in singularities is unphysical, and thus cannot fundamentally be a correct description of nature. For many physicists, singularities or infinite results are "sufficient reason" to discard a theory. It's unphysical, it can't be real: That is not a logical conclusion, and exactly the sort of argument that Leibniz is after. But, needless to say, scientists don't always agree on when a reason is "sufficient." Do we have sufficient reason to believe that gravity has to be quantized? Do we have sufficient reason to believe that black holes bounce and create baby universes? Do we have sufficient reason to require that the Leibniz cookie has exactly 52 teeth?

Do we have any reason to believe that a human must be able to come up with a rational reason for a correct law of nature?

The only way to remove the ambiguity in the principle of sufficient reason would be to find an objective measure for "sufficient," and then we're back where we started: We have no way to prefer one "sufficiency" over another, except that some work better than others. As Popper taught us, one can't verify a theory. One can only fail to falsify it and thereby gain confidence. Yet how much confidence is "sufficient" to make a reason "rational" is a sociological question.

So in the end, one could read Leibniz's principle as one of pragmatism.

Thus reassured in my German pragmatism, I thought going through this argument might not have been very useful, but at least it would make a good blogpost.

Saturday, February 18, 2012

Kate Findlay's LHC Quilts

Via George Musser via Symmetry Magazine come these wonderful images of quilts by artist Kate Findlay, who let herself be inspired by LHC physics.

Read the full article here.

Thursday, February 16, 2012

Pre-Print Peer Review

Nature News recently titled "Rebel academics ponder how to break free of commercial publishers". The rebels would be better off if they'd read this blog, because we have discussed a solution to their problem here!

The solution is Pre-Print Peer Review (PPPR). The idea is as simple as it is obvious: Scientists and publishers alike would benefit if we just disentangled the quality assessment from the selection for journal publication. There is no reason why peer review should be tied to the publishing process, so don't tie it. Instead, create independent institutions (ideally several) that mediate peer review. These institutions may be run by scientific publishers. In fact, that would be the easiest and fastest way to do it, and the way most likely to succeed, because the infrastructure and expertise are already in place.

The advantages of PPPR over the present system: no more loss of time (and thereby cost) through repeated reviews at different journals, and the reports could also be used for non-peer-reviewed open-access databases, or for grant applications.

Editors of scientific journals could still decide for themselves if they want to follow the advice of these reports. Initially, it is likely they will be skeptical and insist on further reports. The hope is that over time, PPPR would gain trust, and the reports would become more widely accepted.

In contrast to more radical options, PPPR has a good chance of success because it is very close to the present system and would work very similarly. And it offers advantages for everybody involved.

I have a longer outline of the idea here, comments and suggestions are very welcome!

Tuesday, February 14, 2012

Updated science symbol

Following some suggestions in the comments, I have made an updated version of the science symbol. I've added a hint of arrows to the circle and a touch of color. I think it looks much better now, more dynamic.

You can also have that carved in stone...


Pendolski suggested adding something in the middle to represent knowledge. I was thinking that in the middle you can add a symbol for your specific profession. You might for example want to point out that you're not just a scientist, but a rocket scientist.

If you like the symbol, feel free to use it. I'm using Corel Draw, and you can download the source file here. You will probably need the fonts Life BT and Book Antiqua.

Sunday, February 12, 2012

Does science need a universal symbol?

Paul Root Wolpe is searching for a universal symbol for science. He must be serious, because he has set up a Facebook page, though one can't say the success of that page is overwhelming.

I'm not sure we really need a universal symbol for science, but I don't think it would hurt either. Either way, once the question was in my head, it got me thinking about what would make a good symbol for science. Here's what I came up with:


It has the merit that you can put some electron orbits around it, or a galaxy in the middle. Here is somebody else who has made a suggestion. It looks a little illuminati-ish to me though ;o) Something else that crossed my mind is to use an existing symbol, for example ∀ ("for all").

What do you think, would a symbol for science come in handy? Would you put it on your bumper?

Thursday, February 09, 2012

When I grow up I want to be a physicist

The other day I talked to a young woman who is about to finish high school, so the time is coming to decide what education to pursue after that. What does a theoretical physicist actually do?, she asked. And while I was babbling away, I recalled how little I knew myself about what a physicist does when I was a young student.

Of course I knew that professors give lectures. And I had read a bunch of popular science books and biographies, from which I concluded that theoretical physics requires a lot of thinking. The physicists I had read about also wrote many books, and articles and, most of all, letters. They really wrote a lot of letters, these people. There was also the occasional mention of a conference, where talks had to be given. And I could have learned from these historical narratives that, even back then, physicists moved a lot, but I blamed that on one or the other war. I never asked who organized these conferences or hired these people.

While one could say that my family is scientifically minded, when I grew up I didn't know anybody who worked in scientific research or in academia whom I could have asked what their daily life looks like. Today, it is easier for young people with an interest in science to find out what a profession entails in practice, and if you are thinking about a career in science, I really encourage you to look around. Piled Higher and Deeper has documented the sufferings of PhD students as humorously as aptly, and postdocs from many areas of science write blogs. When I finished high school, I didn't even know what a postdoc is! At the higher career levels, bloggers are still sparse, but they are there, and they tell you what theoretical physicists do.

Yes, they give lectures. They also give seminars, and attend seminars. They write articles and read articles, and review articles. They also write the occasional book, though that isn't very common in the early career stages. They attend conferences and workshops, and also organize conferences and workshops. They travel a lot. They sit on committees for all sorts of organizational and administrative purposes.

To some extent, the books I had read contained a little of all of that. What they did not tell me anything about was one thing that theoretical physicists today spend a lot of time on: writing proposals. They write and write and write proposals, to fund their own research or their research group, their students and postdocs, or their conferences, or maybe just their own book, or long-term stays. If you want to be a theoretical physicist, you had better get used to the idea that a big part of your job will consist of asking for money, again and again and again. And then, somebody also has to review these proposals...

You will not be surprised to hear that theoretical physicists no longer write a lot of letters. I don't know how their email frequency compares to that of the general population, but this touches on one aspect of research in theoretical physics that you read about very, very little on blogs. That is how tightly knit the community really is, and how much people talk to each other and exchange ideas.

At least on the blogs that I read, it's like an unwritten code: You don't blog about conversations with your peers, except possibly under special circumstances (like for an interview). Most of these conversations are considered private and sharing them inappropriate, even if confidentiality was not explicitly asked for. I think this is good because there needs to be room for privacy. However, this might give the reader a somewhat distorted picture of what research looks like. It is really a lot about exchanging ideas, it is a lot about asking questions, and about building on other people's arguments. A lot of research is communication with colleagues. So, if you try to catch a taste of theoretical physics from reading blogs, keep in mind that most bloggers will not pull their nonblogging colleagues into a public discussion.

Oh, yes, and in the remaining time - the time not spent on reading papers, sitting in seminars, organizing conferences or writing proposals or reports or blogging - in that time, they think.

If you are considering becoming a scientist: Check out this wonderful tumblr site that shows you some photos of real scientists!

Tuesday, February 07, 2012

The Free Will Function

Last year's FQXi conference was a memorable event for me. Not only because it doesn't happen all that often that my conversation partners abandon their arguments to hurry away and find some more of these pills against motion sickness. But also because I was reassured that I am not the only physicist with an interest in questions that might become relevant only so far into the future that the time spent on them rests precariously on the edge between philosophy and insanity.

Scott Aaronson (who blogs at Shtetl-Optimized) went ahead and gave a talk about free will (summarized by George Musser here), which in turn encouraged me to write up my thoughts on the topic, though I've still hidden them in physics.hist-ph (which I previously didn't even know existed!):

So, here's the executive summary.

In my previous post on free will, I explained that I don't buy the explanation that in a deterministic universe, or one with a random element, free will exists as an emergent property. If you want to call something that emerges from the fundamentally deterministic or probabilistic laws that we know "free will," then I can't prevent you from doing that, but there isn't anything free about your will anymore. If free will is as real as a baseball, then you have as much freedom in making decisions as a baseball has to change its trajectory in Newtonian mechanics, namely none.

You might seek comfort in the fact that it is quite plausible that nobody can predict what you are doing, but this isn't freedom, it's just that nobody is able to document your absence of freedom. If your "will" is a property of a system that emerges from some microscopic laws, its laws might be for all practical purposes unknown, but in principle they still exist. If time evolution is deterministic, any choice that you make now strictly followed from an earlier state of the universe. If time evolution has a probabilistic element, as in quantum mechanics, then choices that you make now need not follow from earlier times, but you didn't have any choice either, because the non-deterministic ingredient was just random.

Needless to say, I have greatly simplified. Notably, I've omitted everything about consciousness and the human brain. Look, I'm a particle physicist, not a neurologist. The exact workings of the brain and the question whether quantum mechanics is relevant for biological processes don't change anything about the actual root of the problem: There is no room for anything or anybody to make a decision in the fundamental laws of Nature that we know.

One way out of this problem is to believe in what is known as "strong emergence." That would be if the laws of the macroscopic emergent system (e.g. "you") did not follow from the microscopic laws. The only people I have met who managed to make sense of this idea are philosophers. There is presently no formal way to achieve such behavior and no known example of how this could work. (We discussed here a paper that made an attempt in this direction, but note that the assumption of an infinite rather than a large system is crucial for that to work.) But yes, finding an example of strong emergence would be a possibility. Just that I couldn't find one.

My paper is much simpler than that. In my paper I just pointed out that there exist time evolutions that are neither deterministic nor probabilistic - not just in practice, but in principle. Functions that do that for you are just functions physicists don't normally deal with. The functions that we normally use are solutions to differential equations: either they can be forward-evolved or they can't, and that dichotomy is exactly the problem. Yet there are lots of functions which don't fall into this category. These are functions that can be forward-evolved, yet you have no way to ever find out how. They are deterministic, yet you cannot determine them.

Take for example a function that spits out one digit of the number π every second, but you don't know when it started or when it will end. You can record as much output from that function as you want, you'll never be able to tell what number you get in the next second: π is (presumably) a normal number, so every string that you record, no matter how long, will keep reappearing elsewhere in the digit sequence. If you don't know that the number is π, you won't even be able to find out what number the algorithm is producing.

The algorithm is well-defined and it spits out numbers in a non-random fashion that, if you knew the algorithm, would be perfectly determined. But even if somebody monitors all output for an arbitrarily long amount of time to arbitrarily good precision, it remains impossible to predict what the next output will be. This has nothing to do with chaos, where it's the practical impossibility of measuring to arbitrary precision that spoils predictability: Chaos is still deterministic. The same initial conditions will always give the same result; you just won't be able to know them well enough to tell. Chaos, too, doesn't allow you to make a choice, it just prevents you from knowing.
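
To make this concrete, here is a minimal toy version in Python - my illustration, not from the paper, and it assumes the mpmath library is installed for the digits of π. The generator is fully deterministic, yet an observer who records its output without knowing the secret offset, or even that the source is π, cannot predict the next digit:

    import random
    import mpmath

    def free_will_function(n_digits=1000):
        # Precompute a long run of pi's digits at arbitrary precision.
        mpmath.mp.dps = n_digits + 2
        digits = str(mpmath.mp.pi).replace(".", "")[:n_digits]
        # The "secret": where in the digit sequence the stream begins.
        # It is chosen once and never revealed to the observer.
        start = random.randrange(n_digits // 2)
        for d in digits[start:]:
            yield int(d)

    # The observer's record: deterministic, but unpredictable without the secret.
    stream = free_will_function()
    record = [next(stream) for _ in range(20)]
    print(record)

Of course, this toy only hides a finite secret; the point is that an observer who doesn't know the source is π cannot infer the rule from any finite record.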

But what if you made your decisions according to a function like the one I described? Then your decisions would not be random, but they wouldn't be determined by the state of the universe at any earlier time either (nor at any later time, for that matter). You need to have your function to complete the time evolution, which is why I call it the "Free Will Function."

This is far too vague an idea to be very plausible, but I think it is interesting enough to contemplate. If you would like to believe in free will, yet your physics training has so far prevented you from making sense of it, maybe this will work for you!

If you found this interesting, George Musser has storified the topic, so you can continue reading.

Sunday, February 05, 2012

The Hoganmeter

Almost three years ago, I wrote about Craig Hogan's holographic noise. If you look up Hogan's papers on the arXiv, you'll see that he has been on the topic of holographic noise at least since 2004. His time came in 2009, when the GEO600 gravitational wave interferometer detected unexplained excess noise, just at the right frequency to be "holographic." New Scientist reported.

Unfortunately, the GEO600 noise vanished after a new readout method was employed by the collaboration. Hogan corrected his prediction for the noise by finding a factor in his calculation, so that the noise was no longer in the GEO600 range.

But that wasn't the end of the story. Craig Hogan is a man with a mission - a holographic mission. Last year, I mentioned that Hogan got a grant to follow his dreams and is now building his own experiment at Fermilab, the "Holometer," especially designed to detect the holographic noise. (Which might or might not have something to do with holographic foam. More likely not.)

In the February issue, Scientific American's cover story asks the question "Is Space Digital?" The article by Michael Moyer reports on Hogan's Holometer. I really like Scientific American. Look, we have a subscription to the PRINT version, do I need to say more? But this is the worst article I've ever read in this magazine.

To begin with, it is entirely uncritical, and one doesn't actually learn anything about the, in principle quite interesting, question whether space may be digital. One doesn't even learn why the question is interesting. But even worse is that the article is also factually wrong in several places. You can read there for example:
"[Hogan] begins by explaining how the two most successful theories of the 20th century - quantum mechanics and general relativity - cannot possibly be reconciled. At the smallest scales, both break down into gibberish."

Where to start? Moyer probably meant "quantum field theory" rather than "quantum mechanics." One might forgive that, since this simplification is often made in science journalism.

I am more puzzled that Hogan allegedly explained that quantum field theory and general relativity cannot possibly be reconciled. Last time I looked there were literally thousands of people working on such a reconciliation; they would surely be interested to learn of Hogan's insight. Therefore, I doubt that this is what Hogan was saying, especially since his experiment is supposed to test for such a reconciliation. More likely, he was laying out the main difficulties in quantizing gravity. Which brings me to the next misleading statement: one might indeed say that gravity breaks down at small distances, which could mean anything from the formation of singularities at high densities to the breakdown of the perturbative expansion. But it's news to me that quantum mechanics "breaks down to gibberish" at short scales.
"The Planckscale is not just small - it is the smallest."

Depends on whether you are talking about the Planck length or the Planck mass!
"The laws of quantum mechanics say that any black hole smaller than a Planck length must have less than a single quantum of energy."

The laws of quantum mechanics don't say anything about black holes. And probably neither Hogan nor Moyer has ever heard of monsters.

And then I came to this sentence:
"[P]hysicists mostly agree that the holographic principle is true"

Michael Moyer's evidence comes from talking to Craig Hogan and Leonard Susskind. He also quotes Jacob Bekenstein, Raphael Bousso and Herman Verlinde.

I am always stunned by how easily science writers lose perspective. The vast majority of physicists work in condensed matter physics, nuclear and atomic physics, solid state physics, plasma physics, optics or quantum optics, and astrophysics, half of them in experiment. The idea that space may be digital is a fringe idea of a fringe idea of a speculative subfield of a subfield. I'm not saying it's not interesting. I'm just saying that if you actually went and asked a representative sample of physicists, I guess you'd find that most don't care about the holographic principle and wouldn't agree on any statement about it.

Anyway, the best part of Moyer's article is a quotation from Hogan about the motivation for his experiment:
"It's a slight cheat because I don't have a theory."

Indeed, if you look at the Holometer website, you find an extensive list of two articles, both unpublished, one of which boasts 25 revisions in 2 years.

Hogan is also quoted as saying
"Things have been stuck for a long time. How do you unstick things? Sometimes they get unstuck with an experiment."

That is true, and exactly the reason why I am working on the phenomenology of quantum gravity! But normally, before investing money into an experiment, it is worthwhile to check whether the hypothesis that would lead to a signal in the experiment would also lead to other effects that we should already have seen. Unfortunately, this is difficult to tell without a theory! The criticism in my post from three years ago was essentially that Hogan's scenario breaks Lorentz invariance, and we know that Lorentz-invariance violation is already very tightly constrained. Maybe there is a way to avoid the existing constraints, but I'd really like to know how.

I admit that I admire Hogan for his passion, perseverance, and also his honesty to admit that he doesn't exactly know what he's doing or why, just that he feels like it has to be done. He is the archetypal American with a hands-on, high-risk, high-gain attitude. He also looks good on the photo in the Scientific American article, is director of the Fermilab Center for Particle Astrophysics, and probably doesn't care a lot about peer review.

Of course I hope he succeeds, because I really want to see some positive evidence for quantum gravity phenomenology in my lifetime!

And hey, you know, I too have an idea for an experiment that can revolutionize our understanding of the world. And mine did even get published.

Thursday, February 02, 2012

No evidence for spacetime foam

It goes under the name spacetime foam or fuzz, or sometimes graininess or, more neutrally, fluctuations: The general expectation that, due to quantum gravitational effects, spacetime on very short distance scales is not nice and smooth but, well, fuzzy or grainy or foamy.

If that seems somewhat fuzzy to you, it's because it is. Absent an experimentally verified and generally accepted theory of quantum gravity, nobody really knows what exactly spacetime foam looks like. A case for the phenomenologists then! And, indeed, over the decades several models have been suggested to describe spacetime's foamy, fuzzy grains, based on a few simple assumptions. The idea is not that these models are fundamentally the correct description of spacetime but that they can, ideally, be tested against data, so that we can learn something about the general properties that we should expect the fundamental theory to deliver.

One example of such a model, going back to Amelino-Camelia in 1999, is that spacetime foam causes a particle to make a random walk in the direction of propagation. For each step of a distance of the Planck length, lP, the particle randomly gains or loses another step. This is a useful model because random walks in one dimension are very well understood: over a total distance L that consists of N = L/lP steps, the average deviation from the mean is the length of one step times the square root of the number of steps. Thus, over a distance L, a particle deviates by a distance

    δL ≈ α lP √N = α √(lP L),

where I have put in a dimensionless constant α; for a quantum gravitational effect, we would expect it to be of order one. Keep in mind that c=1 on this blog.
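
As a quick sanity check of the √N scaling - nothing more than an illustration - here is a minimal random-walk simulation in Python:

    import numpy as np

    rng = np.random.default_rng(42)
    n_walks = 100  # independent walks to average over
    for N in (100, 10_000, 1_000_000):
        # n_walks one-dimensional walks of N steps of +1 or -1 each
        steps = 2 * rng.integers(0, 2, size=(n_walks, N), dtype=np.int8) - 1
        rms = np.sqrt((steps.sum(axis=1).astype(float) ** 2).mean())
        print(f"N = {N:>9}: rms deviation = {rms:8.1f}, sqrt(N) = {N**0.5:8.1f}")

The root-mean-square deviation tracks √N, which is all the model needs.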

While simple, this model is not, and probably was never meant to be, particularly compelling. Leaving aside that it's not Lorentz-invariant, there is no good reason why the steps should be discrete or be aligned in one direction. One might have hoped that this general idea would be worked out to become somewhat more plausible. Yet that never happened, because the model was ruled out pretty much as soon as it was proposed. The reason is that if you consider the detection of light rays from some source, the deviation from the normal propagation on the light cone will have the effect of allowing different phases of the emitted light to arrive at once. The average phase blur is

    Δφ ≈ 2π δL/λ ≈ 2π α √(lP L)/λ,

where λ is the wavelength of the light. If the phase blur is comparable to 2π, the interference pattern would be washed out.

As it happens, however, interference patterns, known as Airy rings, can be observed on objects as far as some Gpc away from Earth. If you put in the numbers, for such a large distance and α ≈ 1, the phase should be entirely smeared out. Yet an actual Hubble Space Telescope image of such a distant quasar (Figure 3 of astro-ph/0610422) clearly shows the interference rings. And there goes the random walk model.
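
To see this explicitly - a back-of-the-envelope check with standard constants, not taken from the paper:

    import math

    l_P = 1.6e-35   # Planck length in meters
    L   = 3.1e25    # ~1 Gpc in meters
    lam = 5.0e-7    # optical wavelength, ~500 nm

    deltaL = math.sqrt(l_P * L)   # random-walk deviation for alpha = 1
    print(deltaL)                 # ~2e-5 m
    print(deltaL / lam)           # phase blur in units of 2*pi: ~44

A phase blur of some forty times 2π would leave no interference pattern at all.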

There is a similar model, going back to Wheeler and later Ng and Van Dam, that the authors have called the "holographic foam" model. (Probably because everything holographic was chic in the late 1990s. Except for the general scaling it has little to do with holography.) In any case, the main feature of this model is that the deviation from the mean grows with the 3rd root, rather than the square root, of N. Thus, the effects are smaller.
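
Spelled out in the same notation as above (my paraphrase of the scaling, with α again expected to be of order one):

    δL ≈ α lP N^(1/3) = α lP^(2/3) L^(1/3)

For L ≈ 1 Gpc this is of order 10^-15 m, far below an optical wavelength, which is why ruling this model out takes the extra telescope sensitivity discussed below.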

It is amazing, though, how quickly smart people can find ways to punch holes in your models. Already in 2003 it was pointed out that, with some classical optics formulas from the late 19th century, modern telescopes allow one to set much tighter bounds. Roughly speaking, the reason is that a telescope with diameter D focuses a much larger part of the light's wavefront than just one wavelength λ. The telescope is very sensitive to phase-smearing all over its opening. Telescopes are for example sensitive to air turbulence, a problem that the Hubble Space Telescope does not have.

The sensitivity of a telescope to such phase distortions can be quantified by a pure number known as the "Strehl ratio." The closer the Strehl ratio is to 1, the closer the telescope's images are to those of an ideal telescope, which shows a point-like source as a perfect Airy pattern. A non-ideal telescope will cause an image degradation, most importantly a smearing of the intensity. The same effect would be caused by the holographic spacetime fuzz. Thus, up to the telescope's limit on image quality, the additional phase distortion would be observable: it lowers the Strehl ratio of images of very far-away objects such as quasars. (Though, if it were observed, one wouldn't know exactly what its origin was.)
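
For orientation, the commonly quoted form of the Maréchal approximation - I cannot vouch that it is exactly the expression used in the paper, see the PS below - relates the Strehl ratio S to the root-mean-square phase error σ of the wavefront:

    S ≈ exp(-σ²)

So even a modestly degraded S translates directly into an upper bound on any additional phase blur.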

The relevant point is that, using the telescope's sensitivity to image degradation, one gains an additional factor of D/λ ≈ 10^8. In their paper:

the authors have presented an analysis of the images of 157 high-redshift (z > 4) quasi-stellar objects. They found no blurring. With that, the holographic foam model is also ruled out. Or, to be precise, the parameter α is constrained to a range that is implausible for quantum gravitational effects.

As it is often the case in the phenomenology of quantum gravity, the plausible models are difficult, if not impossible, to constrain by data. And the implausible ones nobody misses when they are ruled out. This is a case of the latter.

Thanks to Neil for reminding me of that paper.


PS: We were not able to find a derivation of the exact expression for the phase blurring as a function of the Strehl ratio, Eq. (5), that is used in the paper. We got as far as learning that it's called the Maréchal approximation. If you know of a useful reference, we'd be interested!