
Tuesday, May 28, 2013

Have your multiverse and eat it

The recent results from the Planck mission have caused a flurry of activity among theoretical physicists, documented on the arXiv in an increasing number of papers that update the constraints on various cosmological models. Of particular interest is the question of which models of inflation are favored by the data. Interestingly, the simplest potentials for the scalar field that causes inflation are already ruled out or disfavored. For a summary, see Jester’s post Planck about inflation.

Paul Steinhardt and collaborators have taken this as a reason to argue that the data actually hints at cyclic models.
    Inflationary paradigm in trouble after Planck2013
    Anna Ijjas, Paul J. Steinhardt, Abraham Loeb
    1304.2785

    Planck 2013 results support the simplest cyclic models
    Jean-Luc Lehners, Paul J. Steinhardt
    1304.3122
The argument in these papers goes as follows.

The potentials for the inflaton field that are necessary to fit the Planck data are not simple, in that they require finetuning, i.e. delicately adjusted parameters. The finetuning has to produce a suitably flat plateau in the potential, and a power law with coefficients of order one isn’t going to do this. If you randomly picked the potential, it would be very unlikely that you’d get a suitably finetuned one.

This, Steinhardt et al argue, is a serious problem because the “inflationary paradigm” draws its justification from our universe being a “likely” outcome of quantum fluctuations that are blown up to produce the structures we see. If the potential, or the initial value of the scalar field, is unlikely, this erodes the basis of believing in the inflationary paradigm to begin with. In the paper this unlikeliness is quantified, and it is noted that the unlikeliness of the initial value of the scalar field can be recast as an unlikeliness of the potential. Then they go on to argue that cyclic models are preferable because in these cases natural parameter ranges for coefficients in the potential are still compatible with the data (they do not comment on how natural these models are in other respects). They then identify observables that could further solidify the case.

There are two gaps in this argument. The first gap is between “inflation” and “inflationary paradigm.”

Inflation is a model that describes the observations in our universe very well, using a familiar framework built on quantum field theory and general relativity. The inflationary paradigm that they refer to (not an expression that is common in the scientific literature) adds a requirement beyond the explanation of observation, namely the likeliness of the model.

To begin with, speaking about probabilities only makes sense if one has an ensemble. So to even refer to unlikeliness, you have to believe in a distribution over a set of possibilities, a multiverse. And for that you must have faith in your model, faith that extends beyond and before and beneath our universe, faith that the model holds outside everything we have ever observed, and that you can actually use it to make a statement about likeliness.

Besides this, the inflaton potential is normally not expected to be fundamental, but rather an effective description valid a few orders of magnitude below the Planck scale. If you want to say anything about the probability of finding a particular potential, you would first have to know the fundamental degrees of freedom and the UV-completion of the theory. Just taking potentials and attempting to assign them a probability doesn’t make a lot of sense.

So talking about probabilities is already a bad starting position. From this starting position, Steinhardt et al then argue that the inflationary paradigm says that we should find our universe to be likely. But by going from inflation to the inflationary paradigm, one is no longer talking about testing a model that explains observations. In their own words
“The usual test for a theory is whether experiment agrees with model predictions. Obviously, inflationary plateau-like models pass this test.”
That should be the last sentence of a scientific paper. Alas, there’s a next sentence, and it starts with “However…”
“However, this cannot be described as a success for the inflationary paradigm, since, according to inflationary reasoning, this particular class of models is highly unlikely to describe reality.”
Note the leap from “theory” to “paradigm”. (Let me not ask what “reality” means, I know it’s an unfair question.)

The second gap in the argument is that you could use it to rule out pretty much any model anybody has ever proposed.

In this earlier post I explained that all presently existing theories inevitably lead to a multiverse, a large space of possibilities. It’s just that this multiverse is more apparent in some approaches than in others.

The reason a multiverse is inevitable is that we always need something to specify a theory to begin with. Call it basic axioms or postulates. We need something to start with. And in the context of the theory you’re working with, that postulated basis is an uncaused cause: It was written down with the explicit purpose of explaining observations. If you take away that purpose because you’ve misunderstood what science is all about, you are left with only mathematical consistency. And then, layer by layer, you are forced to include everything into your theory that is mathematically consistent. That’s what Tegmark called the “Mathematical Universe.”

Steinhardt et al’s elaboration about the possible shape of potentials is an example of this mathematical multiverse beneath the basis. They take away one postulate and replace it by a larger space of mathematical possibilities. Instead of postulating a specific (purpose-bound) real-valued, differentiable scalar function, they replace it with the space of all continuous functions (though they’re not too explicit about the requirements). But why stop there? Why not take the space of all functions and randomly pick one of these? Almost all functions on the real axis are discontinuous in infinitely many places, which is a fancy way of saying that the probability of getting a continuous one upon random picking is zero. Look, I just ruled out both the “inflationary paradigm” and Steinhardt’s cyclic models without referring to any data at all.

To be fair however, Steinhardt et al are just fighting inflation with its own weapons. It is arguably true that the literature is full of arguments about naturalness and how inflation solves this or that philosophical conundrum. If you believe in the multiverse, or eternal inflation specifically, I think you should take the argument put forward in these papers seriously. For the rest of us, those who see inflation as a model with the purpose to describe observations in our universe, there’s no reason to make these leaps of faith. And that’s what they are - at least for now. One never knows what the data will bring.

Wednesday, May 22, 2013

Who said it first? The historical comeback of the cosmological constant

I finished high school in 1995, and the 1998 evidence for the cosmological constant from supernova redshift data was my first opportunity to see physicists readjusting their worldview to accommodate new facts. Initially met with skepticism - as are all unexpected experimental results - the nonzero value of the cosmological constant was nevertheless quickly accepted. (Unlike, e.g., neutrino oscillations, where the situation remained murky, and people remained skeptical, for more than a decade.)

But how unexpected was that experimental result really?

I learned only recently that by 1998 it might not have been so much of a surprise. Already in 1990, Efstathiou, Sutherland and Maddox argued in a Nature paper that a cosmological constant is necessary to explain large-scale structures. The abstract reads:
"We argue here that the successes of the [Cold Dark Matter (CDM)] theory can be retained and the new observations accommodated in a spatially flat cosmology in which as much as 80% of the critical density is provided by a positive cosmological constant, which is dynamically equivalent to endowing the vacuum with a non-zero energy density. In such a universe, expansion was dominated by CDM until a recent epoch, but is now governed by the cosmological constant. As well as explaining large-scale structure, a cosmological constant can account for the lack of fluctuations in the microwave background and the large number of certain kinds of object found at high redshift."
By 1995 a bunch of tentative and suggestive evidence had piled up that led Krauss and Turner to publish a paper titled "The Cosmological Constant is Back".

I find this interesting for two reasons. First, it doesn't seem to be very widely known; it's also not mentioned in the Wikipedia entry. Second, taking into account that there must have been preliminary data and rumors even before the 1990 Nature paper was published, this means that by the late 1980s the cosmological constant likely started to seep back into physicists' brains.

Weinberg's anthropic prediction dates to 1987, which likely indeed predated the observational evidence. Vilenkin's 1995 refinement of Weinberg's prediction was timely, but one is led to suspect he anticipated the 1998 results from the then already available data. Sorkin's prediction of a small positive cosmological constant in the context of Causal Sets seems to date back to the late 80s, but the exact timing is somewhat murky. There is a paper here which dates to 1990 and contains the prediction (scroll to the last paragraph), which leads me to think that at the time of writing he likely didn't know about the recent developments in astrophysics that would later render this paper a historically interesting prediction.

Monday, May 20, 2013

Guestpost: Howard Burton - "Justified Optimism"

[Howard Burton, founding director of Perimeter Institute, has a new project, Ideas Roadshow, a weekly magazine dedicated to ideas of all types and shapes. Rather than spreading fractured pieces with little content, the Ideas Roadshow is for those who are looking for substance, and who want to know more than the catchy phrases. The magazine will be published in text and as video (streaming and downloadable).]

A fairly common reaction when I tell people what I’m doing now with Ideas Roadshow is a quizzical raising of the eyebrows followed by a wry little smile.

“Well, good luck,” they say sceptically. (I certainly think that’s needed right now.) “But you know Howard, the internet is not exactly about substance. We live in a sound bite world. How do you think you’re going to make money from this? Who is going to watch it?”

One of the few benefits of careening into advanced middle age is that I’ve witnessed enough by now to recognize that any references to recent golden ages are wildly exaggerated. I don’t remember being brought up in a world awash in substantive, measured discussions of the latest issues in neuroscience or public policy. My high school experience didn’t consist of teachers having to forcibly detach kids from their iPhones, but there was no shortage of ways for us to waste our hours and avoid doing what we were supposed to: Donkey Kong manages to kill time just as well as Angry Birds.

That’s not to say that, by some objective measure, things aren’t getting worse. In some ways they certainly are.

It’s true that newspapers almost everywhere are in deep financial trouble and those that have managed to stay afloat are devoting less and less of their time and resources to long-form analysis and more to mindless knee-jerk responses to the ever-increasing amount of “breaking news”.

But it’s also true that there are now far more effective and salubrious ways for a young ambitious musician to gain a popular audience than by being forced to cavort with sleazy record executives.

Technology, of course, is but a tool. That is so obvious as to border on the cliché. But that doesn’t mean that the message doesn’t sometimes get overlooked.

The notion that, somehow as a result of our developing technology, virtually nobody on planet Earth actually cares anymore about engaging in the world of ideas, is, of course, simply ludicrous. It can’t be true. And it bloody well isn’t.

What technology has done, however, is change the way that those who are interested interact with the world of ideas. In particular, one decidedly ironic effect of the internet has been to intellectually ghettoize people. So while it’s now trivial to meaningfully interact with like-minded people living on the other side of the world, it’s also the case that one is much less likely to be confronted with interesting and stimulating ideas outside of one’s own self-selected area of interest.

Often the most illuminating and stimulating experiences happen when we are forced to encounter people who hold radically different approaches or interests to our own. But the more time we spend with our like-minded friends, the less likely such encounters are going to be.

This is the core issue. It has, of course, been commented on before. But somehow I don’t think it’s appreciated as much as it should be.

Conventional newspapers are not collapsing because nobody cares about general ideas. Conventional newspapers are collapsing because their principal revenue stream – print advertising revenue – has dried up. Advertisers are naturally much keener to ensure that their message is being delivered to their particular target audience, which naturally argues for a segmented, specialized approach to sponsorship. Now that technology allows for detailed methods to precisely deliver content and measure its impact, advertisers are increasingly unwilling to participate in scattershot approaches that will clearly be hugely less efficient and effective.

All quite reasonable. But the solution for those who seek a general level of stimulation, for those who are keen to be at play in the world of ideas, is not to bemoan the logic of the marketplace or fall back on dreamy reminiscences of some mythical golden age, but to simply capitalize on the opportunities afforded.

Twenty years ago, or even ten, it would have been completely inconceivable to create a program where one travels the world and records substantial conversations with a diverse range of fascinating people. Camera technology would have made it prohibitively expensive to develop a professional-quality product; and even had that been somehow circumvented, it would have been virtually impossible to disseminate the results with anywhere near the range necessary to make it profitable.

People interested in ideas have always been a small minority, so to make it work one has to scale globally, or at least nationally. How could a private start-up even attempt such a thing? We’d have had to effectively take over a TV station. Inconceivable.

Recent technology has allowed both of these fundamental obstacles to be overcome. We can not only film for a fraction of the cost of ten years ago, we can also fit all of our cameras, lights and gear into two travelling cases that we can easily take anywhere. And once we’ve made our videos and eBooks, we can easily market them to ideas-oriented consumers worldwide.

Of course, just because structural impediments are eliminated, success is hardly guaranteed. One still has to make a product that people actually like. And then one has to establish a new brand and market it successfully.

But let’s be very clear: those are the issues. Not that we are all too superficial now. Or that nobody cares about ideas. That’s just silly.

Starting something new is always a challenge. But there are challenges and then there are challenges.

Being at the front end of a new wave of global niche market digital media products is one thing. But it’s not like some unknown guy trying to build a theoretical physics institute in the middle of nowhere from scratch.

Now that, surely, is impossible.

Friday, May 17, 2013

Dimensional Reduction

Dimensionally reduced scientist.
“Science is the only news,” Stewart Brand wrote. My reading of this sentence is that science, the exploration of nature and natural law, is the ultimate source of inspiration. Developing a model and studying its properties can be like discovering a new world, and the discoveries that are the most fascinating are the ones that are surprising and unintuitive.

Probability amplitudes and wavefunctions are examples of such surprising and unintuitive properties, examples that are now a century old and that have changed the way we think about the world. Holography is a more recent example. And gathering momentum in the quantum gravity community right now is dimensional reduction.

Dimensional reduction means that on short distances the dimension of space-time decreases. To quantify what this means one has to be very careful with defining “dimension.”

The way we normally think about the dimension of space is to picture how lines spread out from a point. How quickly the lines dilute into their environment tells us something about the spheres we can draw around the point. The dimension of these spheres can be used to define the “Hausdorff dimension” of a space. The faster the lines dilute with distance, the larger the Hausdorff dimension.
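In formulas, the heuristic version of this goes as follows: if V(r) is the volume enclosed by a sphere of radius r around the point, the Hausdorff dimension is read off from the scaling

$$ V(r) \propto r^{d_H}, \qquad d_H = \lim_{r\to 0} \frac{\ln V(r)}{\ln r}. $$

(The rigorous definition of the Hausdorff dimension goes through covering measures, but this scaling version is what matters here: the faster the lines dilute, the faster the volume grows with the radius, and the larger d_H.)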

The notion of dimension that is relevant for the effect of dimensional reduction is not the Hausdorff dimension, but instead the “spectral dimension.” The spectral dimension can be found by first getting rid of the Lorentzian signature and going to Euclidean space. One then watches a random walker who starts at one point and measures the probability for him to return to that point. The smaller the average return probability, the higher the probability that he’ll get lost, and the higher the number of dimensions. One can define the spectral dimension from the average return probability.
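In formulas: if P(σ) denotes the average return probability after diffusion time σ, then in flat d-dimensional Euclidean space

$$ P(\sigma) = \frac{1}{(4\pi\sigma)^{d/2}}, \qquad d_s(\sigma) \equiv -2\,\frac{\mathrm{d}\ln P(\sigma)}{\mathrm{d}\ln\sigma}, $$

with the convention common in the literature, so that in flat space d_s = d at all scales.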

Normally, for a flat, classical space, both notions of dimension are identical. However, there have been several approaches toward quantum geometry that found that the spectral dimension at short distances goes down from four to two. The return probability for short walks is larger than expected. One says that the spectral dimension “runs”, meaning it depends on the distance at which space-time is probed.

Surprising. Unintuitive.

This strange behavior was first found in Causal Dynamical Triangulations (hep-th/0505113), where one does a numerical simulation of an actual random walk in Euclidean space; a stripped-down toy version of such a simulation is sketched below. In other approaches one does not need a numerical simulation; it is possible to study the spectral dimension analytically, as explained after the sketch.
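Here is the stripped-down toy version mentioned above: a random walk on a flat two-dimensional lattice, with the spectral dimension estimated from how the return probability falls off with the number of steps. This only illustrates the definition; the actual CDT simulations walk on the triangulated quantum geometry, not on a fixed flat lattice.

    # Toy estimate of the spectral dimension of flat 2d space from a random walk
    # on a square lattice. For flat space the result should come out close to 2.
    import numpy as np

    rng = np.random.default_rng(1)

    def return_probability(dim, steps, walkers):
        """Fraction of walkers sitting at the origin again after `steps` steps."""
        pos = np.zeros((walkers, dim), dtype=int)
        for _ in range(steps):
            axis = rng.integers(0, dim, size=walkers)   # pick a random direction...
            sign = rng.choice([-1, 1], size=walkers)    # ...and a random orientation
            pos[np.arange(walkers), axis] += sign
        return np.mean(np.all(pos == 0, axis=1))

    # Spectral dimension: d_s = -2 dlnP(sigma)/dln(sigma), estimated here from a
    # finite difference between two walk lengths (both even, so returns are possible).
    dim, walkers = 2, 200_000
    s1, s2 = 100, 200
    p1 = return_probability(dim, s1, walkers)
    p2 = return_probability(dim, s2, walkers)
    d_s = -2 * (np.log(p2) - np.log(p1)) / (np.log(s2) - np.log(s1))
    print(f"estimated spectral dimension: {d_s:.2f} (expected ~{dim})")

On a quantum geometry, the analogous measurement performed with short walks is what comes out below the topological dimension.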

The behavior of the random walk is governed by a differential equation, the diffusion equation, in which there enters the metric of the background space-time. In approaches to quantum gravity in which the metric is quantized, it is then the expectation value of the metric operator that enters the diffusion equation. From the diffusion equation one calculates the return probability for the random walk.
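Schematically (conventions differ somewhat between papers, so take this as a sketch), the diffusion equation and the return probability read

$$ \frac{\partial}{\partial\sigma} K(x,x';\sigma) = \Delta_g\, K(x,x';\sigma), \qquad K(x,x';0) = \frac{\delta(x-x')}{\sqrt{g}}, $$
$$ P(\sigma) = \frac{1}{V}\int \mathrm{d}^d x\, \sqrt{g}\; K(x,x;\sigma), $$

where Δ_g is the Laplacian of the (Euclideanized) background metric g; in the quantized case, g is replaced by the expectation value of the metric operator.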

This way, one can then infer the spectral dimension also in Asymptotically Safe Gravity (hep-th/0508202). Interestingly, one finds the same drop from four to two spectral dimensions. Yet another indication comes from Loop Quantum Gravity, where the scaling of the area operator with length changes at short distances. It is somewhat questionable whether the notion of a metric makes sense at all in this regime, but if one nevertheless constructs the diffusion equation from this scaling, one again finds that the spectral dimension drops from four to two (0812.2214). And Horava-Lifshitz gravity is maybe the best studied case where one finds dimensional reduction (0902.3657).

Surprising. Unintuitive. It is difficult to interpret this behavior. Maybe a good way to picture it, as Calcagni, Eichhorn and Saueressig suggested, is to think of the quantum fluctuations of space-time hindering a particle’s random walk and slowing it down. It wouldn’t have to be that way. Quantum fluctuations could also be kicking the particle around wildly, thus increasing the spectral dimension rather than decreasing it. But that’s not what the theory tells us. One shouldn’t take this picture too seriously though, because we’re talking about a random walk in Euclidean space, so it’s not an actual physical process.

It seems strange that such entirely different approaches to quantum gravity would share a behavior like this. Maybe our theories are trying to teach us a lesson about a very general property of quantum space-time. But then again, the spectral dimension does not say all that much about the theory. There are many different types of random walks that give rise to the same spectral dimension. And while these different approaches to quantum gravity share the same scaling behavior for the spectral dimension, they differ in the type of random walk that produces this scaling (1304.7247).

So far, this is an entirely theoretical observation. It is interesting to speculate whether one can find experimental evidence for this scaling behavior. In fact, this recent paper by Amelino-Camelia et al aims to “explore the cosmological implications” of running spectral dimensions. At least that is what the first sentence of the abstract says. If you read the second sentence though, you’ll notice that what they actually explore are modified dispersion relations. And while modified dispersion relations lead to a running spectral dimension, the opposite is not necessarily the case. But is there any better indication for a topic being hot than that people use it in the first sentence of an abstract to draw the reader’s interest?

Tuesday, May 14, 2013

A star-rating for scientific news?

Gary Gutting's recent post What Do Scientific Studies Show? at the NYT blogs is utterly unremarkable. Or so I thought, being clearly biased because the guy is a professor of philosophy, and I - I'm at the other end of the circle. But then he puts forward a proposal I think is brilliant: A labeling system for scientific news "that made clear a given study’s place in the scientific process", ranging from the speculative idea and preliminary results all the way to established scientific theory.

I like the idea because it would be an easy way to solve a tension in science news, which is that what's new and exciting, and therefore likely to make headlines, is also often controversial and likely to be refuted later. The solution can't be to not report what's new and exciting, but to find a good way to make clear that, while interesting and promising, this isn't (yet) established scientific consensus.

23andMe has a star rating to indicate how reliable a correlation between a genetic sequence and certain traits/diseases is, based on what has been reported in the scientific literature. (See my earlier blogpost for screenshots showing what that looks like.) They have a white paper laying out the criteria for assessing the scientific status of these correlations. The 23andMe rating serves a similar purpose as the proposed rating for science news. It is handy as a quick orientation, and it is a guide for those who can't or don't want to dig into the scientific literature themselves. It doesn't tell you to disregard results with few stars, just to keep in mind that this might turn out to be a data glitch, and to enjoy or worry with caution.

I think that such a label indicating how established a scientific result or idea is would be easy to use. Writers could just assign it themselves with help from the researchers they have been in contact with while working on a piece. That might not always be very accurate, but undoubtedly bloggers would add their voice. There would most likely be a service popping up to aggregate all ratings on a given topic/press release (probably weighted by the source). I am guessing it would be pretty much self-organized because we're all so very used to these ratings for other purposes.
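Just to illustrate: in its simplest form, such an aggregation would be nothing more than a weighted average. The sources and weights below are of course entirely made up.

    # Hypothetical sketch of aggregating star ratings for one press release,
    # weighting each rating by how much one trusts its source.
    ratings = [
        ("science journalist", 4, 0.6),      # (source, stars out of 5, trust weight)
        ("researcher in the field", 2, 1.0),
        ("blogger", 3, 0.8),
    ]
    aggregate = sum(stars * w for _, stars, w in ratings) / sum(w for _, _, w in ratings)
    print(f"aggregated rating: {aggregate:.1f} out of 5")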

Do you think such a labeling would be helpful? If so, what criteria would you require for zero to five stars?

Saturday, May 11, 2013

Basic research is vital

Last month I had the flu. I was down with a fever of more than 40°C, four days in a row. Needless to say, it was a public holiday.

While the body is struggling to recover from illness, priorities shift. Survive first. Drink. Eat. Stand upright without fainting. Feed the kids because they can’t do it themselves. Two days earlier, I was thinking of running a half-marathon, now happy to make it to the bathroom. Forgotten the parking ticket and the tax return.

We see the same shift of priorities on other levels of our societies. If a system, be it an organism or a group of people, experiences a potential threat to its existence, energy is redirected to the essential needs, to survival first. An unexpected death in the family requires time for recovery and reorganization. A nation that is being attacked redirects resources to the military.

The human body’s defense against viruses does not require conscious control. It executes a program that millions of years of evolution have optimized, a program we can support with medication and nutrition. But when it comes to priorities of our nations, we have no program to follow. We first have to decide what is necessary for survival, and what can be put on hold while we recover.

The last years have not been good years economically, neither in the European Union, nor in North America. We all feel the pressure. We’re forced to focus our priorities. And every week I read a new article about cuts in some research budget.

“Europe's leaders slash proposed research budget,” I read. “Big cuts to R&D budgets [in the UK],” I read. “More than 50 Nobel laureates are urging [the US] Congress to spare the federal science establishment from the looming budget cut,” I read.

An organism befallen by illness manages a shortage of energy. A nation under economic pressure manages a shortage of money. But money is only the tool for the management. And it is a complicated tool, its value influenced by many factors, including psychological ones, and it is not just under national management. In the end, its purpose is to direct labor. And here is the real energy of our nations: Humans, working. It is the amount of working hours in different professions that budget cuts manage.

In reaction to a perceived threat, nations shift priorities and redirect human labor. They might aim at sustainability. At independence from oil imports. They invest in public health. Or they cut back on these investments. When the pressure rises, what is left will be the essentials. Energy and food, housing and safety. Decisions have to be made. The people who assemble weapons are not available to water the fields.

How vital is science?

We all know that progress depends on scientific research. Somebody has to develop new technologies. Somebody has to test whether they are safe to use. Everybody understands what applied science does: In goes brain, out comes what you’ll smear into your face or wear on your nose tomorrow.

But not everybody understands that this isn’t all of science. Besides the output-oriented research, there is the research that is not conducted with the aim of developing new technologies. It is curiosity-driven. It follows the loose ends of today's theories, it aims to understand the puzzle that is the data. Most scientists call it basic or fundamental research. The NSF calls it transformative research, the ERC frontier research. Sometimes I’ve heard the expression blue-skies research. Whatever the name, its defining property is that you don’t know the result before you’ve done the research.

Since many people do not understand what fundamental research is or why it is necessary, if science funding is cut, basic research suffers most. Politicians lack the proper words to justify investment into something that doesn’t seem to have any tangible outcome. Something that, it seems, just pleases the curiosity of academics. “The question is academic” has come to mean “The world doesn’t care about its answer.”

A truly shocking recent example comes from Canada:
“Scientific discovery is not valuable unless it has commercial value," John McDougall, president of the [Canadian National Research Council], said in announcing the shift in the NRC's research focus away from discovery science solely to research the government deems "commercially viable". [Source: Toronto Sun] [Update: He didn't literally say this as the Sun quoted it, see here for the correct quote.]
Oh, Canada. (Also: Could somebody boot the guy, he’s in the wrong profession.)

Do they not understand how vital basic research is for their nation? Or do they decide not to raise the point? I suspect that at least some of those involved in such a decision approve cutting back on basic research not because they don’t understand what it’s good for, but because they believe their people don’t understand what it’s good for. (And they would be wrong, if you scroll down and look at the poll results...)

I suspect that scientists are an easy target; they usually don’t offer much resistance. They're not organized, not to say disorganized. Scientists will try to cope until it becomes impossible and then pack their bags and their families and move elsewhere. And once they’re gone, Canada, you’ll have to invest much more money than you save now to get them back.

Do they really not know that basic research, in one sentence, is the applied research in 100 years?

It isn’t possible, in basic research, to formulate a commercial application as a goal, because nobody can make predictions or formulate research plans over 100 years. There are too many unknown unknowns, the system is too complex, there are too many independent knowledge seekers in the game. Nobody can tell reliably what is going to happen.

They say “commercially viable”, but what they actually mean is “commercially viable within 5 years”.

The scientific theories that modern technology and medicine are based on – from LCD displays and DVD players to spectroscopy and magnetic resonance imaging, from laser surgery to quantum computers – none of them would exist had scientists pursued “commercial viability”. Without curiosity-driven research, we deliberately ignore paths to new areas of knowledge. Applied research will inevitably run dry sooner or later. Scientific progress is not sustainable without basic research.

As your mother told you, if you have a fever, watch your fluid intake. Even if you are tired and don’t feel like moving a finger, drink that glass of water. The woman with the flu who didn’t drink enough today is the woman in the hospital on an IV-drip tomorrow. And the nation under economic pressure that didn’t invest in basic research today is the nation that will wish there were a global IV-drip for its arteries tomorrow.

And here are some other people saying the same thing in fewer words [via Steve Hsu]:



I know that on this blog a post like this preaches to the choir. So today I have homework for you. Tell your friends and your neighbors and the other parents at the daycare place. Tell them what basic research is and why it’s vital. And if you don’t feel like talking, send them a link or show them a video.

Wednesday, May 08, 2013

What do "most physicists" work on?

It always amazes me how skewed the image of physics research in the popular press is. To begin with, the amount of coverage is totally unrepresentative of the actual amount of research on a given topic. Controversial and outright fantastic topics are typically hotly discussed, as is everything that captures the public imagination. On the other hand, down-to-earth research like soft condensed matter or statistical mechanics rarely makes headlines.

The field I work in myself, quantum gravity, is among the over-represented fields. If you believe what you read, the quest for quantum gravity has become the "holy grail" of theoretical physicists all over the planet, and we're all working on it because the end of science is near and there's nothing else left to do.

Since coverage by the media is driven by popularity and not by relevance, one can expect such a skewed representation. It probably isn't much different in other areas of our lives. (Who actually wears those wacky clothes that fashion designers celebrate?) What bothers me much more than the skewed selection of topics is how their relevance is misrepresented even in these articles. I must have read hundreds of times that "many physicists" believe this or that, while in reality most physicists couldn't care less and probably have no opinion whatsoever.

Here are some examples:
"According to the current thinking of many physicists, we are living in one of a vast number of universes. We are living in an accidental universe. We are living in a universe uncalculable by science."
Alan Lightman, The Accidental Universe.
"The team’s verdict, published in July 2012, shocked the physics community."
Zeeya Merali, in a recent Nature issue, Astrophysics: Fire in the hole!. We note in passing that the article doesn't have much, if anything, to do with astrophysics.
"Most physicists believe that space is not smooth, but it is rather composed of incredibly small subunits, much like a painting made of dots. This micro-landscape is believed to host numerous black holes..."
Mihai Andrei, in an article titled Finding black holes at a quantum scale about a deeply flawed paper by Jacob Bekenstein. (Which, depressingly, got published in PRD.)

But why limit ourselves to physicists, let's be bold:
"Many scientists claim that mega-millions of other universes, each with its own laws of physics, lie out there, beyond our visual horizon. They are collectively known as the multiverse."
George F. R. Ellis, Scientific American, Does the Multiverse Really Exist? "They" presumably refers to the "other universes," and not to the "many scientists".

So then let's try to quantify "most physicists" by estimating an upper bound on the fraction of physicists who are working on these topics, a sub-area of quantum gravity. The topics in question here tend to appear on the arXiv under hep-th cross-linked to gr-qc, or the other way round. That there is no subject category for "quantum gravity" should already tell you that there aren't all that "many" people working on it. First, let us have a look at the arXiv submission rates:


The left graph shows the total number of submissions, the right shows the percentage. Blue, which presently accounts for about 10%, is high energy physics and collects hep-th+hep-ph+hep-lat+hep-ex. Note that for historical reasons hep is likely to be over-represented in the arXiv statistics relative to the actual distribution of researchers. In hep, pretty much every paper goes on the arXiv, but the same is not true in other areas (at least not yet). Also, hep tends to be a very productive and communicative field, so looking at the number of arXiv submissions rather than researchers probably leads to an over-estimate. Be that as it may, the topics we are looking for almost certainly occupy less than 10% of researchers.

More data that tell you that the vast majority of physicists aren't working on anything related to quantum gravity can be obtained from the number of members in sections of the German Physical Society. The section on Particle Physics (which includes beyond-the-standard-model physics and quantum gravity) has about 2,500 members. The section on Quantum Optics and Photonics has more than 3,000 members, Physics of Semiconductors 3,800, Low Temperature Physics 1,450, Atomic Physics together with Hadronic and Nuclear Physics come to about 3,000, Material Physics together with Chemical and Polymer Physics and Thin Films another 3,500. Not all sections have membership numbers online, so this doesn't cover the full spectrum. But this already tells you that "most physicists" don't even do high energy physics, certainly not quantum gravity, and have no business with multiverses, firewalls, or "micro-landscapes of black holes".

But we can try to get a better estimate by seeing how many papers are cross-linked from hep-th to gr-qc, assuming that the opposite cross-linking is similarly frequent. For this, we look at the submission statistics of gr-qc for the first four months of the year 2013. It lists the submissions as well as the cross-lists. Click on any of the months, select "show all" and count the number of times "cross-list from hep-th" appears on the page. The numbers I get for January to April are: 70, 71, 52 and 67. If you look at the titles, you'll note that the papers you find this way fit well with the topics we're looking for.

Comparing these numbers with the total arXiv submissions per month (about 7,500), we can estimate that it's about 1%. Multiply by two to account for gr-qc papers cross-linked to hep-th.
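For those who want the arithmetic spelled out, here it is in a few lines of Python (the numbers are just the ones quoted above):

    # Back-of-the-envelope estimate: fraction of arXiv papers that are hep-th
    # cross-listed to gr-qc, using the counts for January-April 2013 quoted above.
    cross_lists = [70, 71, 52, 67]   # monthly "cross-list from hep-th" counts in gr-qc
    total_per_month = 7500           # rough total arXiv submissions per month

    fraction = sum(cross_lists) / len(cross_lists) / total_per_month
    # Multiply by two to also account for gr-qc papers cross-listed to hep-th.
    print(f"one direction: {fraction:.1%}, both directions: {2 * fraction:.1%}")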

Now this is a rather crude estimate, and I have mentioned several reasons why it's inaccurate:
    1) Some fields of research are not as well represented on the arXiv as hep-th is. This means the 2% is still an over-estimate.
    2) Some fields might be more productive in paper output than others. If hep-th is on the more productive side, the 2% is even more of an over-estimate.
    3) Not every paper in the area we're looking for might be hep-th cross-linked to gr-qc or vice versa. This leads to an under-estimate.
    4) On the other hand, not every paper cross-listed this way is about quantum gravity or related topics, which again makes it an over-estimate.
    5) There are probably more people following the literature than actively working on it, which also leads to an under-estimate.

However, even if you'd add up all these errors, you would still be left to conclude that the above quoted uses of "most physicists" or "physics community" are extremely inaccurate and misleading.

Monday, May 06, 2013

What is a microfiber cloth and how does it work?

Microfibre cloths have become really popular in recent years. I just got one as a promotional gift from the phone company. They’re handy for cleaning glasses, all kinds of screens, windows, mirrors and plastic surfaces, quickly and without the use of water. If one gets dirty, put the cloth in the laundry, add detergent, and it’s as good as new.

But what are microfibers and what can they do that a Kleenex can’t?

Cotton cloths or paper wipes are mostly made of cellulose. Cellulose is a polymer, a long molecule that repeats a shorter structure up to some thousand times. Cellulose is hydrophilic, meaning it likes to bind to water molecules. What it doesn’t like though is binding to fat molecules, which are themselves hydrophobic and don’t like to bind to water.

Cotton and paper tissues thus work badly for removing fatty stains, such as fingerprints from glasses. If you want to get rid of these you have to use water and detergents on the cloth. Detergents are made of molecules that allow mixing water with fatty substances. With the detergent, the wet cotton cloth does a good job with the grease. Except that then it takes a long time to dry because now all the water molecules are attached to the cellulose polymers.

Cross-section of single microfibre,
electron microscope. Image Source.
Microfibers are also polymers, but that’s where the similarities end. Microfibers are synthetic polymers and usually much longer than the cellulose fibers obtained from organic materials. They are also about an order of magnitude thinner, typically only a few micrometers.

The microfibres used for cleaning cloths are normally a mixture of polyester and polyamide. Polyesters like to bind to fat, which is why the cloths can be used to wipe away grease without adding detergents. Polyesters however don’t like to bind water. Some polyamides do, but the water absorption of the microfibre cloths comes mostly from a clever production technique that increases the surface area of the fibres and allows for capillary action to suck up the water.

This technique works as follows. The long microfibers are not produced separately, but in a mixture of polyesters and polyamides that are arranged as alternating wedges, much like slices of a cake. These mixed fibres are later split up by high-pressure water jets (the image above shows the result). This procedure makes it possible to produce much finer fibres than could be made directly, and since the microfibers are thin to begin with, it creates very porous materials that have a large surface in a small volume.
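To see why thinner fibres and finer pores help, two textbook relations are enough: the surface-to-volume ratio of a cylindrical fibre of radius r, and Jurin's law for the capillary rise h in a pore of radius r,

$$ \frac{A}{V} = \frac{2\pi r L}{\pi r^2 L} = \frac{2}{r}, \qquad h = \frac{2\gamma\cos\theta}{\rho\, g\, r}, $$

with γ the surface tension, θ the contact angle, ρ the density of water and g the gravitational acceleration. Both grow as r shrinks: splitting the fibres into finer strands increases the surface available to grab grease and strengthens the capillary suction on water.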

Cross-section of microfibre cloth, electron
microscope image. Source: hotrodworks.net
These split fibres are then woven or pressed into textiles (see image right). The resulting cloth is lightweight and binds to fat so you can wipe those fingerprints away easily. The material sucks up water, but since most of the water is stored in the pores between the fibres rather than binding directly to them (as is the case with cotton), microfibre cloths dry much faster than cotton.

Microfibres are not a new invention. The production technique goes back to research in the 1950s, but it wasn’t until the early 90s that they were marketed to households, a trend apparently started by the Swedes. During the last decade or so, microfibers have become quite common, especially for cleaning purposes, and, because they dry quickly, for sport and outdoor clothes.

So the next time you wipe the earwax off your display, I hope you appreciate the science behind this not-so-simple cloth.

Wednesday, May 01, 2013

Interna

Lara, putting on her shoes.
May 1st is a national holiday both in Sweden and in Germany. A good opportunity, I thought, to update you on our attempts at normal family life.

Lara and Gloria are now talking basically non-stop. Half of the time we have no idea what they are trying to say; the other half consists of refusals. Gloria literally wakes up in the morning yelling "Nein-nein-nein". Saying it's difficult to get her dressed, fed, and to daycare makes quantizing gravity sound like an easy task. Yesterday she insisted on going in her pajamas. Good mother that I am, I thought that was a brilliant idea.

Gloria is proud of her new hat.
Lara isn't quite as difficult as Gloria, but she is very easily distracted. If I ask her to get into the stroller, she'll first spend five minutes inspecting the stones by the road or take off her shoes and put them back on, just because. Time clearly flows very differently when you're two years old than when you're forty. I try to use the occasions to check my email. Time flows through my iPhone, I'm sure it does.

We finally made progress on our daycare issue, though it's presently only half a solution. A new daycare place opened in the area, and due to my time spent on the phone last year, asking people to please write down my name and call me back if the situation unexpectedly changed, somebody indeed recalled my name and we made it to the top of the list for the new place. So there'll be another adaptation phase at another place, but this time it's a full-day care that will indeed cover our working hours. It is also, I should add, considerably less expensive than the present solution with a self-employed nanny. This, I hope, will make my commuting easier for Stefan to cope with.

I'm really excited about the workshop for science writers that I'm organizing with George. We now have an (almost) complete schedule, I've ordered food and drinks and sorted out the lab visit, and I'm very much looking forward to the meeting. Directly after this workshop, I'll attend another workshop in Munich, "Quantum Gravity in Perspective", where I'll be speaking about the phenomenology of quantum gravity. I have some more trips upcoming this summer, to Bielefeld and Aachen and, in fall, to Vienna to speak at a conference on "Emergent Quantum Mechanics."

I was invited to take part in this KITP workshop on black hole firewalls but I eventually decided not to go. Partly because I'm trying to keep my travels limited so as not to burden Stefan too much with the childcare. But primarily because I don't believe that anything insightful will come out of this debate. It seems to me there are more fruitful research topics to explore, and this discussion is a waste of time. I also never liked SoCal in late summer; too dry for my central-European genes.


Lara and Gloria, eating cookies at a visit to the zoo.

We'll be away for the next couple of days because Stefan's brother is getting married. This means a several-hour-long road trip with two toddlers who don't want to sit still for a minute; we're all looking forward to it...