
Wednesday, January 31, 2018

Physics Facts and Figures

Physics is old. Together with astronomy, it’s the oldest scientific discipline. And the age shows. Compared to other scientific areas, physics is a slowly growing field. I learned this from a 2010 paper by Larsen and von Ins. The authors counted the number of publications per scientific area. In physics, the number of publications grows at an annual rate of 3.8%, which means it currently takes 18 years for the body of physics literature to double. For comparison, the growth rates for publications in electrical engineering and in technology are 9% and 7.5%, respectively, with doubling times of 8 years and 9.6 years.

The total number of scientific papers closely tracks the total number of authors, irrespective of discipline. The relation between the two can be approximately fit by a power law, so that the number of papers is equal to the number of authors to the power of β. But this number, β, turns out to be field-specific, which I learned from a more recent paper: “Allometric Scaling in Scientific Fields” by Dong et al.

In mathematics the exponent β is close to one, which means that the number of papers increases linearly with the number of authors. In physics, the exponent is smaller than one, approximately 0.877. And not only that, it has been decreasing over the last ten years or so. This means we are seeing diminishing returns here: more physicists result in a less-than-proportional growth of output.
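If you want to check these figures yourself, here is a minimal sketch in Python. It uses only the numbers quoted above; nothing in it comes from the papers directly.

    from math import log

    # Doubling time implied by an annual growth rate of 3.8%.
    growth_rate = 0.038
    doubling_time = log(2) / log(1 + growth_rate)
    print(f"Doubling time at 3.8% annual growth: {doubling_time:.1f} years")  # roughly 18

    # Sub-linear scaling: with papers ~ authors**beta and beta = 0.877,
    # doubling the number of authors multiplies the number of papers by
    # 2**beta, which is less than 2.
    beta = 0.877
    print(f"Factor for the papers when the authors double: {2**beta:.2f}")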

Figure 2 from Dong et al., Scientometrics 112, 1 (2017) 583. β is the exponent by which the number of papers scales with the number of authors.

The paper also found some fun facts. For example, a few sub-fields of physics are statistical outliers in that their researchers produce more than the average number of papers. Dong et al quantified this by a statistical measure that unfortunately doesn’t have an easy interpretation. Either way, they offer a ranking of the most productive sub-fields in physics, which is (in order):

(1) Physics of black holes, (2) Cosmology, (3) Classical general relativity, (4) Quantum information, (5) Matter waves, (6) Quantum mechanics, (7) Quantum field theory in curved spacetime, (8) General theory and models of magnetic ordering, (9) Theories and models of many-electron systems, (10) Quantum gravity.

Isn’t it interesting that this closely matches the fields that tend to attract media attention?

Another interesting piece of information that I found in the Dong et al paper is that in all sub-fields the exponent relating the number of citations to the number of authors is larger than one, approximately 1.1. This means that, on average, the more people work in a sub-field, the more citations they receive. I think this is relevant information for anyone who wants to make sense of citation indices.

A third paper that I found very insightful for understanding the research dynamics in physics is “A Century of Physics” by Sinatra et al. Among other things, they analyzed how frequently sub-fields of physics reference their own or other sub-fields. The most self-referential sub-fields, they conclude, are nuclear physics and the physics of elementary particles and fields.

Papers from these two sub-fields also have by far the lowest expected “ultimate impact,” which the authors define as the typical number of citations a paper attracts over its lifetime, where the lifetime is the typical number of years in which the paper attracts citations (see figure below). In nuclear physics (labelled NP in the figure) and particle physics (EPF), interest in papers is short-lived and the overall impact remains low. By this measure, the category with the highest impact is electromagnetism, optics, acoustics, heat transfer, classical mechanics and fluid dynamics (labeled EOAHCF).

Figure 3 e from Sinatra et al, Nature Physics 11, 791–796 (2015).

A final graph from the Sinatra et al paper which I want to draw your attention to concerns the productivity of physicists. As we saw earlier, the exponent that relates the total number of papers to the total number of authors is somewhat below 1 and has been falling in the recent decade. However, if you look at the number of papers per author, you find that it has been sharply rising since the early 1990s, ie, basically ever since there was email.

Figure 1 e from Sinatra et al, Nature Physics 11, 791–796 (2015)

This means that the reason physicists seem so much more productive today than they used to be is that they collaborate more. And maybe that’s not so surprising, because there is a strong incentive for it: If you and I each write a paper, we both have one paper. But if we agree to co-author each other’s paper, we’ll both have two. I don’t mean to accuse scientists of deliberate gaming, but it’s obvious that accounting for papers by their number puts single authors at a disadvantage.

So this is what physics is, in 2018. An ageing field that doesn’t want to accept its dwindling relevance.

Thursday, January 25, 2018

More Multiverse Madness

The “multiverse” – the idea that our universe is only one of infinitely many – enjoys some credibility, at least in the weirder corners of theoretical physics. But there are good reasons to be skeptical, and I’m here to tell you all of them.

Before we get started, let us be clear what we are talking about, because there isn’t only one multiverse but several. The most commonly discussed ones are: (a) The many worlds interpretation of quantum mechanics, (b) eternal inflation, and (c) the string theory landscape.

The many worlds interpretation is, guess what, an interpretation. At least to date, it makes no predictions that differ from other interpretations of quantum mechanics. So it’s up to you whether you believe it. And that’s all I have to say about this.

Eternal inflation is an extrapolation of inflation, which is an extrapolation of the concordance model, which is an extrapolation of the present-day universe back in time. Eternal inflation, like inflation, works by inventing a new field (the “inflaton”) that no one has ever seen because we are told it vanished long ago. Eternal inflation is a story about the quantum fluctuations of the now-vanished field and what these fluctuations did to gravity, which no one really knows, but that’s the game.

There is little evidence for inflation, and zero evidence for eternal inflation. But there is a huge number of models for both because the available data don’t constrain the models much. Consequently, theorists theorize the hell out of it. And the more papers they write about it, the more credible the whole thing looks.

And then there’s the string theory landscape, the graveyard of disappointed hopes. It’s what you get if you refuse to accept that string theory does not predict which particles we observe.

String theorists originally hoped that their theory would explain everything. When it became clear that didn’t work, some string theorists declared that if they can’t do it then it’s not possible, hence everything that string theory allows must exist – and there’s your multiverse. But you could do the same thing with any other theory if you don’t draw on sufficient observational input to define a concrete model. The landscape, therefore, isn’t so much a prediction of string theory as a consequence of string theorists’ insistence that theirs is a theory of everything.

Why, then, does anyone take the multiverse seriously? Multiverse proponents usually offer the following four arguments in favor of the idea:

1. It’s falsifiable!

[Image: Our bubble universe. Credit: NASA/WMAP]

There are certain cases in which some version of the multiverse leads to observable predictions. The most commonly named example is that our universe could have collided with another one in the past, which could have left an imprint in the cosmic microwave background. There is no evidence for this, but of course this doesn’t rule out the multiverse. It just means we are unlikely to live in this particular version of the multiverse.

But (as I explained here) just because a theory makes falsifiable predictions doesn’t mean it’s scientific. A scientific theory should at least have a plausible chance of being correct. If there are infinitely many ways to fudge a theory so that the alleged prediction is no more, that’s not scientific. This malleability is a problem already with inflation, and extrapolating this to eternal inflation only makes things worse. Lumping the string landscape and/or many worlds on top of it doesn’t help parsimony either.

So don’t get fooled by this argument, it’s just wrong.

2. Ok, so it’s not falsifiable, but it’s sound logic!

Step two is the claim that the multiverse is a logical consequence of well-established theories. But science isn’t math. And even if you trust the math, no deduction is better than the assumptions you started from, and neither string theory nor inflation is well-established. (If you think they are, you’ve been reading the wrong blogs.)

I would agree that inflation is a good effective model, but so is approximating the human body as a bag of water, and see how far that gets you making sense of the evening news.

But the problem with the claim that logic suffices to deduce what’s real runs deeper than personal attachment to pretty ideas. The much bigger problem which looms here is that scientists mistake the purpose of science. This can nicely be demonstrated by a phrase in Sean Carroll’s recent paper. In defense of the multiverse he writes “Science is about what is true.” But, no, it’s not. Science is about describing what we observe. Science is about what is useful. Mathematics is about what is true.

Fact is, the multiverse extrapolates known physics by at least 13 orders of magnitude (in energy) beyond what we have tested and then adds unproved assumptions, like strings and inflatons. That’s not science, that’s math fiction.

So don’t buy it. Just because they can calculate something doesn’t mean they describe nature.

3. Ok, then. So it’s neither falsifiable nor sound logic, but it’s still business as usual.

The gist of this argument, also represented in Sean Carroll’s recent paper, is that we can assess the multiverse hypothesis just like any other hypothesis, by using Bayesian inference.

Bayesian inference is a way of probability assessment in which you update your information to arrive at the most likely hypothesis. Eg, suppose you want to know how many people on this planet have curly hair. For starters you would estimate it’s probably less than the total world population. Next, you might assign equal probability to all possible percentages to quantify your lack of knowledge. This is called a “prior.”

You would then probably think of people you know and give a lower probability for very large or very small percentages. After that, you could go and look at photos of people from different countries and count the curly-haired fraction, scale this up by population, and update your estimate. In the end you would get reasonably accurate numbers.

If you replace words with equations, that’s how Bayesian inference works.
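For the concretely minded, here is a minimal sketch of the curly-hair example in Python. All numbers are invented for illustration; the flat Beta(1, 1) prior stands in for assigning equal probability to all percentages.

    # Flat prior over the curly-haired fraction: Beta(1, 1).
    prior_alpha, prior_beta = 1.0, 1.0

    # Hypothetical data: among 200 people in the photos, 45 have curly hair.
    n_people, n_curly = 200, 45

    # Bayesian update: a Beta prior combined with count data gives a Beta
    # posterior (conjugacy), so updating amounts to adding counts.
    post_alpha = prior_alpha + n_curly
    post_beta = prior_beta + (n_people - n_curly)

    posterior_mean = post_alpha / (post_alpha + post_beta)
    print(f"Posterior mean for the curly-haired fraction: {posterior_mean:.3f}")

    # Scale up by a rough world-population figure.
    world_population = 7.6e9
    print(f"Estimated number of curly-haired people: {posterior_mean * world_population:.2e}")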

You can do pretty much the same for the cosmological constant. Make some guess for the prior, take into account observational constraints, and you will get some estimate for a likely value. Indeed, that’s what Steven Weinberg famously did, and he ended up with a result that wasn’t too badly wrong. Awesome.

But just because you can do Bayesian inference doesn’t mean there must be a planet Earth for each fraction of curly-haired people. You don’t need all these different Earths because in a Bayesian assessment the probability represents your state of knowledge, not the distribution of an actual ensemble. Likewise, you don’t need a multiverse to update the likelihood of parameters when taking into account observations.

So to the extent that it’s science as usual you don’t need the multiverse.

4. So what? We’ll do it anyway.

The fourth, and usually final, line of defense is that if we just assume the multiverse exists, we might learn something, and that could lead to new insights. It’s the good, old Gospel of Serendipity.

In practice this means that multiverse proponents insist on interpreting probabilities for parameters as those of an actual ensemble of universes, ie the multiverse. Then they have the problem of where to get the probability distribution from, a thorny issue since the ensemble is infinitely large. This is known as the “measure problem” of the multiverse.

To solve the problem, they have to construct a probability distribution, which means they must invent a meta-theory for the landscape. Of course that’s just another turtle in the tower and will not help to find a theory of everything. And worse, since there are infinitely many such distributions, you had better hope they find one that doesn’t need more assumptions than the standard model already has, because if it did, the multiverse would be shaved off by Occam’s razor.

But let us assume the best possible outcome, that they find a measure for the multiverse according to which the parameters of the standard model are likely, and this measure indeed needs fewer assumptions than just postulating the standard model parameters. That would be pretty cool and I would be duly impressed. But even in this case we don’t need the multiverse! All we need is the equation to calculate what’s presumably a maximum of a probability distribution. Thus, again, Occam’s razor should remove the multiverse.

You could then of course insist that the multiverse is a possible interpretation, so you are allowed to believe in it. And that’s all fine by me. Believe whatever you want, but don’t confuse it with science.


The multiverse and other wild things that physicists believe in are subject of my upcoming book “Lost in Math” which is now available for preorder.

Wednesday, January 17, 2018

Pure Nerd Fun: The Grasshopper Problem

[Image: illustration of a grasshopper. Credit: awesomedude.com]

It’s a sunny afternoon in July and a grasshopper lands on your lawn. The lawn has an area of a square meter. The grasshopper lands at a random place and then jumps 30 centimeters. Which shape must the lawn have so that the grasshopper is most likely to land on the lawn again after jumping?

I know, sounds like one of these contrived but irrelevant math problems that no one cares about unless you can get famous solving it. But the answer to this question is more interesting than it seems. And it’s more about physics than it is about math or grasshoppers.

It turns out the optimal shape of the lawn greatly depends on how far the grasshopper jumps compared to the square root of the area. In my opening example this ratio would have been 0.3, in which case the optimal lawn-shape looks like an inkblot:

From Figure 3 of arXiv:1705.07621



No, it’s not round! I learned this from a paper by Olga Goulko and Adrian Kent, which was published in the Proceedings of the Royal Society (arXiv version here). You can of course rotate the lawn around its center without changing the probability of the grasshopper landing on it again. So, the space of all solutions has the symmetry of a disk. But the individual solutions don’t – the symmetry is broken.

You might know Adrian Kent from his work on quantum foundations, so how come his sudden interest in landscaping? The reason is that problems similar to this appear in certain types of Bell-inequalities. These inequalities, which are commonly employed to identify truly quantum behavior, often end up being combinatorial problems on the unit sphere. I can just imagine the authors sitting in front of this inequality, thinking, damn, there must be a way to calculate this.

As so often, the problem isn’t mathematically difficult to state but dang hard to solve. Indeed, they haven’t been able to derive a solution. In their paper, the authors offer estimates and bounds, but no full solution. Instead what they did (you will love this) is to map the problem back to a physical system. This physical system they configure so that it will settle on the optimal solution (ie optimal lawn-shape) at zero temperature. Then they simulate this system on the computer.

Concretely, they simulate the lawn of fixed area by randomly scattering small squares over a template space that is much larger than the lawn. They allow a certain interaction between the little pieces of lawn, and then they calculate the probability for the pieces to move, depending on whether or not such a move will improve the grasshopper’s chance to stay on the green. The lawn is allowed to temporarily go into a less optimal configuration so that it does not get stuck in a local minimum. In the computer simulation, the temperature is then gradually decreased, which means that the lawn freezes and thereby approaches its optimal configuration.
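Their annealing procedure is more involved than I can do justice to here, but the quantity it optimizes is easy to estimate yourself. The sketch below (my own toy code, not theirs) does a Monte Carlo estimate of the grasshopper’s probability of staying on the lawn for one fixed shape, a disc of unit area. Comparing such estimates across shapes is, in essence, what the annealing automates.

    import numpy as np

    rng = np.random.default_rng(0)

    def retention_probability_disc(d, n_samples=200_000):
        """Estimate the probability that a grasshopper starting at a uniformly
        random point on a disc of unit area is still on the disc after one
        jump of length d in a uniformly random direction."""
        R = 1.0 / np.sqrt(np.pi)  # radius of a disc with area 1

        # Uniform points on the disc (sqrt of a uniform number gives a
        # radial distribution that is uniform in area).
        r = R * np.sqrt(rng.random(n_samples))
        phi = 2 * np.pi * rng.random(n_samples)
        x, y = r * np.cos(phi), r * np.sin(phi)

        # Jump of length d in a random direction.
        theta = 2 * np.pi * rng.random(n_samples)
        x_new, y_new = x + d * np.cos(theta), y + d * np.sin(theta)

        return np.mean(x_new**2 + y_new**2 <= R**2)

    # Per the paper, the disc is *not* the optimal shape for d = 0.3.
    print(retention_probability_disc(0.3))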

In the video below you see examples for different values of d, which is the above mentioned ratio between the distance the grasshopper jumps and the square root of the lawn-area:





For very small d, the optimal lawn is almost a disc (not shown in the video). For increasingly larger d, it becomes a cogwheel, where the number of cogs depends on d. If d increases above approximately 0.56 (the inverse square root of π), the lawn starts falling apart into disconnected pieces. There is a transition range in which the lawn doesn’t seem to settle on any particular shape. Beyond 0.65, there comes a shape which they refer to as a “three-bladed fan”, and after that come stripes of varying lengths.

This is summarized in the figure below, where the red line is the probability of the grasshopper to stay on the lawn for the optimal shape:
Figure 12 of arXiv:1705.07621

The authors did a number of checks to make sure the results aren’t numerical artifacts. For example, they checked whether the lawn’s shape depends on using a square grid for the simulation. It doesn’t: a hexagonal grid gives the same results. They told me by email they are looking into the question of whether the limited resolution might hide that the lawn shapes are actually fractal, but there doesn’t seem to be any indication for that.

I find this a super-cute example for how many surprises seemingly dull and simple math problems can harbor!

As a bonus, you can get a short explanation of the paper from the authors themselves in this brief video.

Tuesday, January 16, 2018

Book Review: “The Dialogues” by Clifford Johnson

Clifford Johnson is a veteran of the science blogosphere, a long-term survivor, around already when I began blogging and one of the few still at it today. He is a professor in the Department of Physics and Astronomy at the University of Southern California (in LA).

I had the pleasure of meeting Clifford in 2007. Who’d have thought back then that 10 years later we would both be in the midst of publishing a popular science book?

Clifford’s book was published by MIT Press just two months ago. It’s titled The Dialogues: Conversations about the Nature of the Universe and it’s not just a book, it’s a graphic novel! Yes, that’s right. Clifford doesn’t only write, he also draws.

His book is a collection of short stories which are mostly physics-themed, but also touch on overarching questions, like how science works or what the purpose of basic research is to begin with. I would characterize these stories as conversation starters. They are supposed to make you wonder.

But just because it contains a lot of pictures doesn’t mean The Dialogues is a shallow book. On the contrary, a huge amount of physics is packed into it, from electrodynamics to the multiverse, the cosmological constant, a theory of everything, and gravitational waves. The reader also finds references for further reading in case they wish to learn more.

I found the drawings are put to good use and often add to the explanation. The Dialogues is also, I must add, a big book. With more than 200 illustrated pages, it seems to me that offering it for less than $30 is a real bargain!

I would recommend this book to everyone who has an interest in the foundations of physics. Even if you don’t read it, it will still look good on your coffee table ;)




Win a copy!

I bought the book when it appeared, but later received a free review copy. Now I have two and I am giving one away for free!

The book will go to the first person who submits a comment to this blogpost (not elsewhere) listing 10 songs that use physics-themed phrases in the lyrics (not just in the title). Overly general words (such as “moon” or “light”) or words that are non-physics terms which just happen to have a technical meaning (such as “force” or “power”) don’t count.

The time-stamp of your comment will decide who was first, so please do not send your list to me per email. Also, please only make a submission if you are willing to provide me with a mailing address.

Good luck!

Update:
The book is gone.

Wednesday, January 10, 2018

Superfluid dark matter gets seriously into business

[Image: very dark fluid]

Most matter in the universe isn’t like the stuff we are made of. Instead, it’s a thinly distributed, cold medium that rarely interacts, either with itself or with other kinds of matter. It also doesn’t emit light, which is why physicists refer to it as “dark matter.”

A recently proposed idea, according to which dark matter may be superfluid, has now become more concrete, thanks to a new paper by Justin Khoury and collaborators.

Astrophysicists invented dark matter because a whole bunch of observations of the cosmos do not fit with Einstein’s theory of general relativity.

According to general relativity, matter curves space-time and, in return, the curvature dictates the motion of matter. Problem is, if you calculate the response of space-time to all the matter we know, then the observed motions don’t fit the prediction from the calculation.

This problem exists for galactic rotation curves, for velocity distributions in galaxy clusters, for the properties of the cosmic microwave background, for galactic structure formation, for gravitational lensing, and probably for some more that I’ve forgotten or never heard of in the first place.

But dark matter is only one way to explain the observation. We measure the amount of matter and we observe its motion, but the two pieces of information don’t match up with the equations of general relativity. One way to fix this mismatch is to invent dark matter. The other way to fix this is to change the equations. This second option has become known as “modified gravity.”

There are many types of modified gravity and most of them work badly. That’s because it’s easy to break general relativity and produce a mess that’s badly inconsistent with the high-precision tests of gravity that we have done within our solar system.

However, it has been known since the 1980s that some types of modified gravity explain observations that dark matter does not explain. For example, the effects of dark matter in galaxies become relevant not at a certain distance from the galactic center, but below a certain acceleration. Even more perplexing, this threshold of acceleration is related to the cosmological constant. Both of these features are difficult to account for with dark matter. Astrophysicists have also established a relation between the brightness of certain galaxies and the velocities of their outermost stars. Named the “baryonic Tully-Fisher relation” after its discoverers, it is also difficult to explain with dark matter.
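The relation to the cosmological constant is an order-of-magnitude coincidence that you can check on the back of an envelope. Here is a sketch with standard ballpark values (my numbers, not taken from any of the papers discussed here): the acceleration threshold is comparable to the speed of light times the Hubble rate, and hence to the acceleration scale set by the cosmological constant.

    import math

    c = 3.0e8                  # speed of light in m/s
    H0 = 70 * 1e3 / 3.086e22   # Hubble constant, roughly 70 km/s/Mpc, in 1/s
    a0 = 1.2e-10               # empirical acceleration threshold in m/s^2

    print(f"c * H0          = {c * H0:.2e} m/s^2")
    print(f"c * H0 / (2 pi) = {c * H0 / (2 * math.pi):.2e} m/s^2")
    print(f"a0              = {a0:.2e} m/s^2")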

On the other hand, modified gravity works badly in other cases, notably in the early universe where dark matter is necessary to get the cosmic microwave background right, and to set up structure formation so that the result agrees with what we see.

For a long time I have been rather agnostic about this, because I am more interested in the structure of fundamental laws than in the laws themselves. Dark matter works by adding particles to the standard model of particle physics. Modified gravity works by adding fields to general relativity. But particles are fields and fields are particles. And in both cases, the structure of the laws remains the same. Sure, it would be great to settle just exactly what it is, but so what if there’s one more particle or field.

It was a detour that got me interested in this: Fluid analogies for gravity, a topic I have worked on for a few years now. Turns out that certain kinds of fluids can mimic curved space-time, so that perturbations (say, density fluctuations) in the fluid travel just like they would travel under the influence of gravity.

The fluids under consideration here are usually superfluid condensates with an (almost) vanishing viscosity. The funny thing is now that if you look at the mathematical description of some of these fluids, they look just like the extra fields you need for modified gravity! So maybe, then, modified gravity is really a type of matter in the end?

I learned about this amazing link three years ago from a paper by Lasha Berezhiani and Justin Khoury. They have a type of dark matter which can condense (like vapor on glass, if you want a visual aid) if a gravitational potential is deep enough. This condensation happens within galaxies, but not in intergalactic space, because there the potential isn’t deep enough. The effect that we assign to dark matter, then, comes partly from the gravitational pull of the fluid and partly from the actual interaction with the fluid.

If the dark matter is superfluid, it has long range correlations that give rise to the observed regularities like the Tully-Fisher relation and the trends in rotation curves. In galaxy clusters, on the other hand, the average density of (normal) matter is much lower and most of the dark matter is not in the superfluid phase. It then behaves just like normal dark matter.

The main reason I find this idea convincing is that it explains why some observations are easier to account for with dark matter and others with modified gravity: It’s because dark matter has phase transitions! It behaves differently at different temperatures and densities.

In solar systems, for example, the density of (normal) matter is strongly peaked and the gradient of the gravitational field near a sun is much larger than in a galaxy on the average. In this case, the coherence in the dark matter fluid is destroyed, which is why we do not observe effects of modified gravity in our solar system. And in the early universe, the temperature is too high and dark matter just behaves like a normal fluid.

In 2015, the idea of superfluid dark matter was still lacking details. But two months ago, Khoury and his collaborators came out with a new paper that fills in some of the missing pieces.

Their new calculations take into account that in general the dark matter will be a mixture of superfluid and normal fluid, and both phases will make a contribution to the gravitational pull. Just what the composition is depends on the gravitational potential (caused by all types of matter) and the equation of state of the superfluid. In the new paper, the authors parameterize the general effects and then constrain the parameters so that they fit observations.

Yes, there are new parameters, but not many. They claim that the model can account for all the achievements of normal particle dark matter, plus the benefits of modified gravity on top.

And while this approach very much looks like modified gravity in the superfluid phase, it is immune to the constraint from the measurement of gravitational waves with an optical counterpart. That is because both gravitational waves and photons couple the same way to the additional stuff and hence should arrive at the same time – as observed.

It seems to me, however, that in the superfluid model one would in general get a different dark matter density if one reconstructs it from gravitational lensing than if one reconstructs it from kinematic measurements, that is, from the motion of stars and gas. That is because the additional interaction with the superfluid is felt only by the baryons. Indeed, this discrepancy could be used to test whether the idea is correct.

Khoury et al don’t discuss the possible origin of the fluid, but I like the interpretation put forward by Erik Verlinde. According to Verlinde, the extra-fields which give rise to the effects of dark matter are really low-energy relics of the quantum behavior of space-time. I will admit that this link is presently somewhat loose, but I am hopeful that it will become tighter in the next years. If so, this would mean that dark matter might be the key to unlocking the – still secret – quantum nature of gravity.

I consider this one of the most interesting developments in the foundations of physics I have seen in my lifetime. Superfluid dark matter is without doubt a pretty cool idea.

Tuesday, January 09, 2018

Me, elsewhere

Beginning in 2018, I will no longer write for Ethan Siegel’s Forbes collection “Starts With a Bang.” Instead, I will write a semi-regular column for Quanta Magazine, the first of which – about asymptotically safe gravity – appeared yesterday.

In contrast to Forbes, Quanta Magazine keeps the copyright, which means that the articles I write for them will not be mirrored on this blog. You actually have to go over to their site to read them. But if you are interested in the foundations of physics, take my word that subscribing to Quanta Magazine is well worth your time, not so much because of me, but because their staff writers have so far done an awesome job of covering relevant topics without succumbing to hype.

I also wrote a review of Jim Baggott’s book “Origins: The Scientific Story of Creation” which appeared in the January issue of Physics World. I much enjoyed Baggott’s writing and promptly bought another one of his books. Physics World doesn’t want me to repost the review in text, but you can read the PDF here.

Finally, I wrote a contribution to the proceedings of a philosophy workshop I attended last year. In this paper, I summarize my misgivings with arguments from finetuning. You can now find it on the arXiv.

If you want to stay up to date on my writing, follow me on Twitter or on Facebook.

Wednesday, January 03, 2018

Sometimes I believe in string theory. Then I wake up.

They talk about me.
[Image: Grumpy Rainbow Unicorn]

And I can’t blame them. Because nothing else is happening on this planet. There’s just me and my attempt to convince physicists that beauty isn’t truth.

Yes, I know it’s not much of an insight that pretty ideas aren’t always correct. That’s why I objected when my editor suggested I title my book “Why Beauty isn’t Truth.” Because, duh, it’s been said before and if I wanted to be stale I could have written about how we’re all made of stardust, aah-choir, chimes, fade and cut.

Nature has no obligation to be pretty, that much is sure. But the truth seems hard to swallow. “Certainly she doesn’t mean that,” they say. Or “She doesn’t know what she’s doing.” Then they explain things to me. Because surely I didn’t mean to say that much of what goes on in the foundations of physics these days is a waste of time, did I? And even if I did, could I please not say so publicly, because some people have to earn a living from it.

They are “good friends,” you see? Good friends who want me to believe what they believe. Because believing has bettered their lives.

And certainly I can be fixed! It’s just that I haven’t yet seen the elegance of string theory and supersymmetry. Don’t I know that elegance is a sign of all successful theories? It must be that I haven’t understood how beauty has been such a great guide for physicists in the past. Think of Einstein and Dirac and, erm, there must have been others, right? Or maybe it’s that I haven’t yet grasped that pretty, natural theories are so much better. Except possibly for the cosmological constant, which isn’t pretty. And the Higgs-mass. And, oh yeah, the axion. Almost forgot about that, sorry.

But it’s not that I don’t think unified symmetry is a beautiful idea. It’s a shame, really, that we have these three different symmetries in particle physics. It would be so much nicer if we could merge them into one large symmetry. Too bad that the first theories of unification led to the prediction of proton decay and were ruled out. But there are a lot of other beautiful unification ideas left to work on. Not all is lost!

And it’s not that I don’t think supersymmetry is elegant. It combines two different types of particles and how cool is that? It has candidates for dark matter. It alleviates the problem with the cosmological constant. And it aids gauge coupling unification. Or at least it did until LHC data interfered with our plans to prettify the laws of nature. Dang.

And it’s not that I don’t see why string theory is appealing. I once set out to become a string theorist. I do not kid you. I ate my way through textbooks and it was all totally amazing, how much you get out of the rather simple idea that particles shouldn’t be points but strings. Look how much of the theory’s construction is dictated by consistency alone. And note how neatly it fits with all that we already know.

But then I got distracted by a disturbing question: Do we actually have evidence that elegance is a good guide to the laws of nature?

The brief answer is no, we have no evidence. The long answer is in my book and, yes, I will mention the-damned-book until everyone is sick of it. The summary is: Beautiful ideas sometimes work, sometimes they don’t. It’s just that many physicists prefer to recall the beautiful ideas which did work.

And not only is there no historical evidence that beauty and elegance are good guides to find correct theories, there isn’t even a theory for why that should be so. There’s no reason to think that our sense of beauty has any relevance for discovering new fundamental laws of nature.

Sure, if you ask those who believe in string theory and supersymmetry and in grand unification, they will say that of course they know there is no reason to believe a beautiful theory is more likely to be correct. They still work on them anyway. Because what better could they do with their lives? Or with their grants, respectively. And if you work on it, you better believe in it.

I concede that not all math is equally beautiful and not all math is equally elegant. I have yet to find anyone, for example, who thinks Loop Quantum Gravity is more beautiful than string theory. And isn’t it interesting that we share this sense of what is and isn’t beautiful? Shouldn’t it mean something that so many theoretical physicists agree beautiful math is better? Shouldn’t it mean something that so many people believe in the existence of an omniscient god?

But science isn’t about belief, it’s about facts, so here are the facts: This trust in beauty as a guide, it’s not working. There’s no evidence for grand unification. There’s no evidence for supersymmetry, no evidence for axions, no evidence for moduli, for WIMPs, or for dozens of other particles that were invented to prettify theories which work just fine without them. After decades of search, there’s no evidence for any of these.

It’s not working. I know it hurts. But now please wake up.

Let me assure you I usually mean what I say and know what I do. Could I be wrong? Of course. Maybe tomorrow we’ll discover supersymmetry. Not all is lost.

Monday, December 25, 2017

Merry Christmas!


We wish you all happy holidays! Whether or not you celebrate Christmas, we hope you have a peaceful time to relax and, if necessary, recover.

I want to use the opportunity to thank all of you for reading along, for giving me feedback, and for simply being interested in science in a time when that doesn’t seem to be normal anymore. A special “Thank you” to those who have sent donations. It is reassuring to know that you value this blog. It encourages me to keep my writing available here for free.

I’ll be tied up with family business during the coming week – besides the usual holiday festivities, the twins’ 7th birthday is coming up – so blogging will be sparse for a while.

Monday, December 18, 2017

Get your protons right!

The atomic nucleus consists of protons and neutrons. The protons and neutrons are themselves made of three quarks each, held together by gluons. That much is clear. But just how do the gluons hold the quarks together?

The quarks and gluons interact through the strong nuclear force. The strong nuclear force does not have only one charge – like electromagnetism – but three charges. The charges are called “colors” and often assigned the values red, blue, and green, but this is just a way to give names to mathematical properties. These colors have nothing to do with the colors that we can see.

Colors are a handy terminology because the charges blue, red, and green can combine to neutral (“white”), and so can a color and its anti-color (blue and anti-blue, green and anti-green, and so on). The strong nuclear force is mediated by gluons, each of which carries a color and an anti-color. That the gluons themselves carry color charge means that, unlike photons, they also interact among each other.

The strong nuclear force has the peculiar property that it gets stronger the larger the distance between two quarks, while it gets weaker on short distances. A handy metaphor for this is a rubber string – the more you stretch it, the stronger the restoring force. Indeed, this string-like behavior of the strong nuclear force is where string-theory originally came from.

The strings of the strong nuclear force are gluon flux-tubes: connections between two color-charged particles along which the gluons preferentially travel. The energy of the flux-tubes is proportional to their length. If you have a particle (called a “meson”) made of a quark and an anti-quark, then the flux tube is focused on a straight line connecting the quarks. But what if you have three quarks, like inside a neutron or a proton?

According to the BBC, gluon flux-tubes (often depicted as springs, presumably because rubber is hard to illustrate) form a triangle.


This is almost identical to the illustration you find on Wikipedia:
Here is the proton on Science News:


Here is Alan Stonebreaker for the APS:



This is the proton according to Carole Kliger from the University of Maryland:

And then there is Christine Davies from the University of Glasgow who pictured the proton for Science Daily as an artwork resembling a late Kandinsky:


So which one is right?

At first sight it seems plausible that the gluons form a triangle because that requires the least stretching of strings that each connect two quarks. However, this triangular – “Δ-shaped” – configuration cannot connect three quarks and still maintain gauge-invariance. This means it violates the key principle of the strong force, which is bad and probably means this configuration is not physically possible. The Y-shaped flux-tubes on the other hand don’t suffer from that problem.

But we don’t have to guess, because this is physics and one can calculate it. This calculation cannot be done analytically, but it is tractable by computer simulations. Bissey et al reported the results in a 2006 paper: “We do not find any evidence for the formation of a Delta-shaped flux-tube (empty triangle) distribution.” The conclusion is clear: The Y-shape is the preferred configuration.
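As a purely geometric aside (this is my own toy sketch, not part of the gauge-invariance argument or the lattice calculation): the junction of a Y-shaped tube that minimizes the total tube length sits at the Fermat point of the quark triangle, and that total length is always shorter than the triangle perimeter a Δ-configuration would have to cover. Since the energy of the flux-tubes is proportional to their length, the Y is also the cheaper way to connect all three quarks at this cartoon level. The quark positions below are made up for illustration.

    import numpy as np

    # Hypothetical quark positions in a plane (arbitrary units).
    quarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.9]])

    def fermat_point(points, n_iter=200):
        """Weiszfeld iteration: find the point minimizing the summed distance
        to the given points, i.e. the junction of a minimal Y-shaped network."""
        x = points.mean(axis=0)  # start at the centroid
        for _ in range(n_iter):
            d = np.linalg.norm(points - x, axis=1)
            w = 1.0 / np.maximum(d, 1e-12)
            x = (points * w[:, None]).sum(axis=0) / w.sum()
        return x

    junction = fermat_point(quarks)
    y_length = np.linalg.norm(quarks - junction, axis=1).sum()

    # Delta configuration: tubes along all three sides of the triangle.
    delta_length = sum(np.linalg.norm(quarks[i] - quarks[(i + 1) % 3])
                       for i in range(3))

    print(f"Y-shaped total length:     {y_length:.3f}")
    print(f"Delta-shaped total length: {delta_length:.3f}")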

And there’s more to learn! The quarks and gluons in the proton don’t sit still, and when they move then the center of the Y moves around. If you average over all possible positions you approximate a filled Δ-shape. (Though the temperature dependence is somewhat murky and subject of ongoing research.)

The flux-tubes also do not always exactly lie in the plane spanned by the three quarks but can move up and down into the perpendicular direction. So you get a filled Δ that’s inflated to the middle.

This distribution of flux tubes has nothing to do with the flavor of the quarks, meaning it’s the same for the proton and the neutron and all other particles composed of three quarks, such as the one containing two charm-quarks that was recently discovered at CERN. How did CERN picture the flux tubes? As a Δ:



Now you can claim you know quarks better than CERN! It’s either a Y or a filled triangle, but not an empty triangle.

I am not a fan of depicting gluons as springs because it makes me think of charged particles in a magnetic field. But I am willing to let this pass as creative freedom. I hope, however, that it is possible to get the flux-tubes right, and so I have summed up the situation in the image below:



Tuesday, December 12, 2017

Research perversions are spreading. You will not like the proposed solution.

[Image: The ivory tower from The Neverending Story]

Science has a problem. The present organization of academia discourages research that has tangible outcomes, and this wastes a lot of money. Of course scientific research is not exclusively pursued in academia, but much of basic research is. And if basic research doesn’t move forward, science by and large risks getting stuck.

At the root of the problem is academia’s flawed reward structure. The essence of the scientific method is to test hypotheses by experiment and then keep, revise, or discard the hypotheses. However, using the scientific method is suboptimal for a scientist’s career if they are rewarded for research papers that are cited by as many of their peers as possible.

To the end of producing popular papers, the best tactic is to work on what already is popular, and to write papers that allow others to quickly produce further papers on the same topic. This means it is much preferable to work on hypotheses that are vague or difficult to falsify, and stick to topics that stay inside academia. The ideal situation is an eternal debate with no outcome other than piles of papers.

You see this problem in many areas of science. It’s the origin of the reproducibility crisis in psychology and the life sciences. It’s the reason why bad scientific practices – like p-value hacking – prevail even though they are known to be bad: Because they are the tactics that keep researchers in their jobs.

It’s also why in the foundations of physics so many useless papers are written, thousands of guesses about what goes on in the early universe or at energies we can’t test, pointless speculations about an infinitude of fictional universes. It’s why theories that are mathematically “fruitful,” like string theory, thrive while approaches that dare introduce unfamiliar math starve to death (adding vectors to spinors, anyone?). And it is why physicists love “solving” the black hole information loss problem: because there’s no risk any of these “solutions” will ever get tested.

If you believe this is good scientific practice, you would have to find evidence that the possibility to write many papers about an idea is correlated with this idea’s potential to describe observation. Needless to say, there isn’t any such evidence.

What we witness here is a failure of science to self-correct.

It’s a serious problem.

I know it’s obvious. I am by no means the first to point out that academia is infected with perverse incentives. Books have been written about it. Nature and Times Higher Education seem to publish a comment about this nonsense every other week. Sometimes this makes me hopeful that we’ll eventually be able to fix the problem. Because it’s in everybody’s face. And it’s eroding trust in science.

At this point I can’t even blame the public for mistrusting scientists. Because I mistrust them too.

Since it’s so obvious, you would think that funding bodies take measures to limit the waste of money. Yes, sometimes I hope that capitalism will come and rescue us! But then I go and read that Chinese scientists are paid bonuses for publishing in high-impact journals. Seriously. And what are the consequences? As the MIT Technology Review relays:
    “That has begun to have an impact on the behavior of some scientists. Wei and co report that plagiarism, academic dishonesty, ghost-written papers, and fake peer-review scandals are on the increase in China, as is the number of mistakes. “The number of paper corrections authored by Chinese scholars increased from 2 in 1996 to 1,234 in 2016, a historic high,” they say.”

If you think that’s some nonsense the Chinese are up to, look at what goes on in Hungary. They now have exclusive grants for top-cited scientists. According to a recent report in Nature:
    “The programme is modelled on European Research Council grants, but with a twist: only those who have published a paper in the past five years that counted among the top 10% most-cited papers in their discipline are eligible to apply.”
What would you do to get such a grant?

To begin with, you would sure as hell not work on any topic that is not already pursued by a large number of your colleagues, because you need a large body of people able to cite your work to begin with.

You would also not bother to criticize anything that happens in your chosen research area, because criticism would only serve to decrease the topic’s popularity, hence working against your own interests.

Instead, you would strive to produce a template for research work that can easily and quickly be reproduced with small modifications by everyone in the field.

What you get with such grants, then, is more of the same. Incremental research, generated with a minimum of effort, with results that meander around the just barely scientifically viable.

Clearly, Hungary and China introduce such measures to excel in national comparisons. They don’t only hope for international recognition, they also want to recruit top researchers hoping that, eventually, industry will follow. Because in the end what matters is the Gross Domestic Product.

Surely in some areas of research – those which are closely tied to technological applications – this works. Doing more of what successful people are doing isn’t generally a bad idea. But it’s not an efficient method to discover useful new knowledge.

That this is not a problem exclusive to basic research became clear to me when I read an article by Daniel Sarewitz in The New Atlantis. Sarewitz tells the story of Fran Visco, lawyer, breast cancer survivor, and founder of the National Breast Cancer Coalition:
    “Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer. It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says.”
So, no, lemmings chasing after fruitless topics are not a problem only in basic research. Also, the above mentioned overproduction of useless models is by no means specific to high energy physics:
    “Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality?

    Consider, for example, a 2012 report in Science showing that an Alzheimer’s drug called bexarotene would reduce beta-amyloid plaque in mouse brains. Efforts to reproduce that finding have since failed, as Science reported in February 2016. But in the meantime, the paper has been cited in about 500 other papers, many of which may have been cited multiple times in turn. In this way, poor-quality research metastasizes through the published scientific literature, and distinguishing knowledge that is reliable from knowledge that is unreliable or false or simply meaningless becomes impossible.”

Sarewitz concludes that academic science has become “an onanistic enterprise.” His solution? Don’t let scientists decide for themselves what research is interesting, but force them to solve problems defined by others:
    “In the future, the most valuable science institutions […] will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be.”
As one of the academics who believe that understanding how nature works is valuable for its own sake, I think the cure that Sarewitz proposes is worse than the disease. But if Sarewitz makes one thing clear in his article, it’s that if we in academia don’t fix our problems soon, someone else will. And I don’t think we’ll like it.

Wednesday, December 06, 2017

The cosmological constant is not the worst prediction ever. It’s not even a prediction.

Think fake news and echo chambers are a problem only in political discourse? Think again. You find many examples of myths and falsehoods on popular science pages. Most of them surround the hype of the day, but some of them have been repeated so often they now appear in papers, seminar slides, and textbooks. And many scientists, I have noticed with alarm, actually believe them.

I can’t say much about fields outside my specialty, but it’s obvious this happens in physics. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive myth. Another one is that inflation solves the flatness problem, or that there is a flatness problem to begin with.

I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.

Before I go and make my case, please ask yourself: If the cosmological constant was such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That’s because there never was such a prediction.

The myth has it that if you calculate the contribution to the cosmological constant from the vacuum fluctuations of the standard model of particle physics, the result is 120 orders of magnitude larger than what is observed. But this is wrong on at least 5 levels:

1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.

The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.

(And if you still believe in naturalness arguments, buy my book.)

2. The calculation in the standard model cannot be trusted.

Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven’t yet seen them, these will play a role for the result. And we don’t know if there are such particles.

3. It’s idiotic to quote ratios of energy densities.

The 120 orders of magnitude refers to a ratio of energy densities. But not only is the cosmological constant usually not quoted as an energy density (but as a square thereof), in no other situation do particle physicists quote energy densities. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.
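To spell out the arithmetic: an energy density scales like the fourth power of an energy, so the often-quoted factor shrinks accordingly when translated into an energy ratio,

    \[
    \frac{\rho_{\rm theory}}{\rho_{\rm obs}} \sim 10^{120},
    \qquad \rho \propto E^{4}
    \quad\Longrightarrow\quad
    \frac{E_{\rm theory}}{E_{\rm obs}} \sim \bigl(10^{120}\bigr)^{1/4} = 10^{30}.
    \]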

4. The 120 orders of magnitude are wrong to begin with.

The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
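Schematically (and only schematically; the careful version is in the referenced paper, and the expression below is the textbook one-loop form rather than a quote of that equation), the renormalized vacuum energy density looks like

    \[
    \rho_{\rm vac} \;\sim\; \sum_i \frac{n_i\, m_i^{4}}{64\pi^{2}}\,
    \ln\!\left(\frac{m_i^{2}}{\mu^{2}}\right),
    \]

where the sum runs over the particles of the standard model, n_i counts the degrees of freedom of particle i (with opposite signs for bosons and fermions), and μ is the renormalization scale. The sum is dominated by the heaviest known particles, which is why the mismatch comes out at the 13 orders of magnitude in energy quoted above, not at the mythical 120.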

5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.

There’s a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That’s because these are known to be useful to quantify the match of a theory with data. If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero.

In summary: No prediction, no problem.

Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.

The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.

But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...

I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.

In this case, you can calculate the likelihood for observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the probability of the mere fact that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I’m not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)

I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.

Thursday, November 30, 2017

If science is what scientists do, what happens if scientists stop doing science?

“Is this still science?” has become a recurring question in the foundations of physics. Whether it’s the multiverse, string theory, supersymmetry, or inflation, concerns abound that theoreticians have crossed a line.

Science writer Jim Baggott called the new genre “fairy-tale science.” Historian Helge Kragh coined the term “higher speculations,” and Peter Woit, more recently, suggested the name “fake physics.” But the accused carry on as if nothing’s amiss, arguing that speculation is an essential part of science. And I? I have a problem.

On the one hand, I understand the concerns about breaking with centuries of tradition. We used to follow up each hypothesis with an experimental test, and the longer the delay between hypothesis and test, the easier it is for pseudoscience to gain a foothold. On the other hand, I agree that speculation is a necessary part of science and that new problems sometimes require new methods. Insisting on ideals of the past might mean getting stuck, maybe forever.

Even more important, I think it’s a grave mistake to let anyone define what we mean by doing science. Because who gets to decide what’s the right thing to do? Should we listen to Helge Kragh? Peter Woit? George Ellis? Or to the other side, to people like Max Tegmark, Sean Carroll, and David Gross, who claim we’re just witnessing a little paradigm change, nothing to worry about? Or should we, heaven forbid, listen to some philosophers and their ideas about post-empirical science?

There have been many previous attempts to define what science is, but the only definition that ever made sense to me is that science is what scientists do, and scientists are people who search for useful descriptions of nature. “Science,” then, is an emergent concept that arises in communities of people with shared work practices. “Communities of practice,” as the sociologists say.

This brings me to my problem. If science is what scientists do, then how can anything that scientists do not be science? For a long time it seemed to me that in the end we won’t get around having to settle on a definition of science and hold on to it, regardless of how much I’d prefer a self-organized solution.

But as I was looking for a fossil photo to illustrate my recent post about what we mean by “explaining” something, I realized that we are witnessing the self-organized solution right now: It’s a lineage split.

If some scientists insist on changing the old-fashioned methodology while others stick with it, the community will fall apart into two lineages. Let us call the two camps “conservatives” and “progressives.” Each of them will insist they are the ones pursuing the more promising approach.

Based on this little theory, let me make a prediction for what will happen next: The split will become more formally entrenched. Members of the community will begin taking sides, if they haven’t already, and will make an effort to state their research philosophy upfront.

In the end, only time will tell which lineage will survive and which one will share the fate of the Neanderthals.

So, if science is what scientists do, what happens if some scientists stop doing science? You see it happening as we speak.

Sunday, November 26, 2017

Astrophysicist discovers yet another way to screw yourself over when modifying Einstein’s theory

Several people have informed me that phys.org has once again uncritically promoted a questionable paper, in this case by André Maeder from UNIGE. This story goes back to a press release by the author’s home institution and has since been hyped by a variety of other low-quality outlets.

From what I gather from Maeder’s list of publications, he’s an astrophysicist who recently had the idea to revolutionize cosmology by introducing a modification of general relativity. The paper that is now making headlines studies observational consequences of a model he introduced in January and claims to explain away the need for dark matter and dark energy. Both papers contain a lot of fits to data but no consistent theory. Since the man is known in the astrophysics community, however, the papers got published in ApJ, one of the best journals in the field.

For those of you who merely want to know whether you should pay attention to this new variant of modified gravity, the answer is no. The author does not have a consistent theory. The math is wrong.

For those of you who understand the math and want to know what the problem is, here we go.

Maeder introduces a conformal prefactor in front of the metric. You can always do that as an ansatz to solve the equations, so there is nothing modified about this, but also nothing wrong. He then looks at empty de Sitter space, which is conformally flat, and extracts the prefactor from there.

He then uses the same ansatz for the Friedmann Robertson Walker metric (eq 27, 28 in the first paper). Just looking at these equations you see immediately that they are underdetermined if the conformal factor (λ) is a degree of freedom. That’s because the conformal factor can usually be fixed by a gauge condition and be chosen to be constant. That of course would just give back standard cosmology and Maeder doesn’t want that. So he instead assumes that this factor has the same form as in de Sitter space.

Since he doesn’t have a dynamical equation for the extra field, my best guess is that this effectively amounts to choosing a weird time coordinate in standard cosmology. If you don’t want to interpret it as a gauge choice, then an equation is missing. I can’t tell which is the case because the equations just appear from nowhere; neither of the papers contains a Lagrangian, so it remains unclear what is a degree of freedom and what isn’t. Either way, the claims that follow are wrong. (The model is also, of course, not actually scale invariant, so the name is somewhat of a misnomer.)
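For readers who want the notation spelled out, here is the standard bookkeeping, in my own conventions rather than a reconstruction of Maeder’s equations (27) and (28). A conformal rescaling multiplies the metric by a squared prefactor; de Sitter space in conformal time is manifestly conformally flat; and a spatially flat FRW metric already has exactly this form, with the scale factor playing the role of the conformal factor:

    \tilde g_{\mu\nu} = \lambda^2(x)\, g_{\mu\nu},
    \text{de Sitter:}\quad ds^2 = \frac{1}{H^2\eta^2}\left(-d\eta^2 + d\vec x^{\,2}\right), \qquad \lambda_{\rm dS}(\eta) = -\frac{1}{H\eta},
    \text{flat FRW:}\quad ds^2 = a^2(\eta)\left(-d\eta^2 + d\vec x^{\,2}\right).

Multiplying the FRW line element by yet another prefactor that has no dynamics of its own merely redefines a(η) or the time coordinate, which is the gauge point made above; giving the prefactor dynamics instead requires an equation of motion, and that is what appears to be missing.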

Maeder later also uses the same de Sitter prefactor for galactic solutions, which makes even less sense. You shouldn’t be surprised that he can fit some observations once the scale of the cosmological constant is put into galactic models by hand, because this link has been known since the 1980s. If there is something new to learn here, it didn’t become clear to me what it is.

Maeder’s papers have a remarkable number of observational fits and pretty plots, which I guess is why they got published. He clearly knows his stuff. He also clearly doesn’t know a lot about modifying general relativity. But I do, so let me tell you it’s hard. It’s really hard. There are a thousand ways to screw yourself over with it, and Maeder just discovered the one thousand and first one.

Please stop hyping this paper.

Wednesday, November 22, 2017

How do you prove that Earth is older than 10,000 years?

Mesosaurus fossil, probably dating back to the early Permian period, roughly 300 million years ago. [Image source]
Planet Earth formed around 4.5 billion years ago. The first primitive forms of life appeared about 4 billion years ago. Natural selection did the rest, giving rise to species increasingly better adapted to their environment. Evidence, as they say, is overwhelming.

Or is it? Imagine planet Earth began its existence a mere 10,000 years ago, with all fossil records already in place and the carbon-14 already partway through its decay. From there on, however, evolution proceeded as scientists tell us. How would you prove this story wrong?

You can’t.

I know it hurts. But hang on there, band aid follows below.

You can’t prove this story wrong because of the way our current theories work. These theories need two ingredients: 1) A configuration at any one moment in time, called the “initial condition,” and 2) A hypothesis for how this initial configuration changes with time, called the “evolution law.”

You can run the evolution law backwards to figure out from the present configuration what happened earlier in time. But there’s no way you can tell whether an earlier configuration actually existed or whether it is merely a convenient story. In theories of this type – and that includes all theories in physics – you can therefore never rule out that at some earlier time the universe evolved by an entirely different law – maybe because God or The Programmer assembled it – and was then suddenly switched on to reproduce our observations.
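To make the “run the law backwards” step concrete, here is a minimal sketch with radioactive decay as the evolution law. The numbers are purely illustrative, and the code is mine, not something taken from any paper discussed here:

    import math

    C14_HALF_LIFE_YEARS = 5730.0  # half-life of carbon-14, in years

    def fraction_remaining(age_years):
        """Forward evolution law: fraction of the original C-14 left after age_years."""
        return 0.5 ** (age_years / C14_HALF_LIFE_YEARS)

    def inferred_age(fraction):
        """The same law run backwards: the age inferred from a measured C-14 fraction."""
        return -C14_HALF_LIFE_YEARS * math.log2(fraction)

    print(fraction_remaining(10_000))   # about 0.298 of the C-14 left after 10,000 years
    # A sample retaining 30% of its original C-14 "looks" roughly 10,000 years old,
    # whether it really sat there for 10,000 years or was created yesterday with
    # exactly that C-14 content already in place.
    print(round(inferred_age(0.30)))    # about 9953

The backwards calculation is perfectly well defined either way; what it cannot do is certify that the earlier configuration was ever actually there.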

I often hear people argue such creation-stories are wrong because they can’t be falsified, but this makes about as much sense as organic salt. No, they are not wrong, but they are useless.

Last week, I gave a talk at the department of History and Philosophy at the University of Toronto. My talk was followed by a “response” from a graduate student who evidently spent quite some time digging through this blog’s archives to classify my philosophy of science. I didn’t know I had one, but you never stop learning.

I learned that I am sometimes an anti-realist, meaning I don’t believe in the existence of an external reality. But I’d say I am neither a realist nor an anti-realist; I am agnostic about whether or not reality exists or what that even means. I don’t like to say science unveils “truths” about “reality” because this just brings on endless discussions about what is true and what is real. To me, science is about finding useful descriptions of the world, where by “useful” I mean they allow us to make predictions or explain already existing observations. The simpler an explanation, the more useful it is.

That scientific theories greatly simplify the stories we tell about the world is extremely important and embodies what we even mean by doing science. Forget all about Popperism and falsification, just ask what’s the most useful explanation. Saying that the world was created 10,000 years ago with all fossils in place is useless in terms of explaining the fossils. If you, on the other hand, extrapolate the evolution law back in time 4 billion years, you can start with a much simpler initial condition. That’s why it’s a better explanation. You get more out of less.

So there’s your band aid: Saying that the world was created 10,000 years ago with everything in place is unfalsifiable but also useless. It is quantifiably not simple: you need to put a lot of data into the initial condition. A much simpler, and thus scientifically better, explanation is that planet Earth is billions of years old and Darwinian evolution did its work.
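If you want to see the “quantifiably” part made vivid, here is a loose analogy, and no more than that, in terms of description length. The generator below is a hypothetical stand-in for “simple initial condition plus evolution law,” not a model of anything geological:

    import zlib

    def generated_record(n):
        # Stand-in for "evolution law applied to a simple starting point":
        # a long, highly structured record produced by a short rule.
        return bytes((i * i) % 251 for i in range(n))

    record = generated_record(100_000)

    raw_size = len(record)                           # store everything explicitly
    compressed_size = len(zlib.compress(record, 9))  # exploit the regularity

    print(raw_size, compressed_size)
    # The structured record shrinks to a small fraction of its raw size: a short
    # rule plus a simple seed carries the same information as a long explicit
    # list, which is the sense in which it is the simpler story.

Nothing in this toy depends on the particular rule; the point is only that regularity is what makes a short description possible.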

I’m not telling you this because I’ve suddenly developed an interest in Creationism. I am telling you this because I frequently encounter similar confusions surrounding the creation of the universe. This usually comes up in reaction to me pointing out that there is nothing whatsoever wrong with finetuned initial conditions if you do not have a probability distribution to quantify why the conditions are supposedly unlikely.

People often tell me that a finetuned initial condition doesn’t explain anything and thus isn’t scientific. Or, even weirder, that accepting finetuned initial conditions would reduce science itself to absurdity.

But this is just wrong. Finetuned initial conditions are just as good, or bad, explanations as not-finetuned ones. What is decisive isn’t whether the initial condition is finetuned, but whether it’s simple. According to current nomenclature, that is not the same thing. Absent a probability distribution, for example, an initial value of 1.0000000[00] for the curvature density parameter is scientifically just as good as an initial value of 0.0000001[00]… because both are equally simple. Current thinking among cosmologists, in contrast, has it that the latter is much worse than the former.
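For context, the bookkeeping behind that “current thinking” is the flatness problem: in the standard expansion history the deviation from perfect flatness grows with time,

    \Omega_k(t) \equiv 1 - \Omega(t) = -\frac{k}{\dot a^2}, \qquad |\Omega_k| \propto t \ \ \text{(radiation era)},

so a universe as flat as ours today needs |\Omega_k| to be many orders of magnitude smaller at very early times. The point of the paragraph above is that, absent a probability distribution over initial values, a very small number is neither unlikely nor complicated.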

This confusion about what it means for a scientific theory to be useful sits deep and has caused a lot of cosmologists to cook up stories about the early universe based on highly questionable extrapolations into the past.

Take, for example, inflation, the idea that the early universe underwent a phase of rapid expansion. Inflation conjectures that before a certain moment in our universe’s history there was a different evolution law, assigned to a newly invented scalar field called the “inflaton.” But this conjecture is scientifically problematic because it conjures up an evolution law in a past where we have no way of testing it. It’s not so different from saying that if you rolled back time more than 10,000 years, you wouldn’t find planet Earth but a god waving a magic wand, or what have you.

A bold conjecture like inflation can only be justified if it leads to an actually simpler story, but on that the jury is out. Inflation was meant to solve several finetuning problems, but this doesn’t bring a simplification, it’s merely a beautification. The price to pay for this prettier theory, though, is that you now have at least one, if not several, new fields and their potentials, and some way to get rid of them again, which is arguably a complication of the story.

I wrote in a recent post that inflation seems justifiable after all because it provides a simple explanation for certain observed correlations in the cosmic microwave background (CMB). Well, that’s what I wrote some weeks ago, but now I am not so sure it is correct, thanks in no small part to a somewhat disturbing conversation I had with Niayesh Afshordi at Perimeter Institute.

The problem is that in cosmology there really aren’t a lot of data. There are but a few numbers. It’s a simple story already without inflation. And so, the current status is that I just don’t know whether or not inflation is a good theory. (But check back next month.)

Let me emphasize that the concordance model (aka ΛCDM) does not suffer from this problem. Indeed, it makes a good example for a scientifically successful theory. Here’s why.

For the concordance model, we seek the combination of dark matter, normal matter, and cosmological constant (as well as a handful of other parameters) that best fits current observations. But what do we mean by best fit? We could use any combination of parameters to define the dynamical law, and then use it to evolve the present-day observations back in time. And regardless of what the law is, there is always an initial state that will evolve into the present one.
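Concretely, the dynamical law being adjusted is the Friedmann equation in its textbook parametrization; nothing here is specific to any one analysis:

    H^2(z) = H_0^2 \left[ \Omega_r (1+z)^4 + \Omega_m (1+z)^3 + \Omega_k (1+z)^2 + \Omega_\Lambda \right],

where H_0 and the density fractions \Omega are among the parameters tuned until the cosmic microwave background, supernovae, and large-scale structure are all accounted for by one and the same history.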

In general, however, the initial state will be a mess, for example because the fluctuations of the cosmic microwave background (radiation) are not related in any obvious way to the structures we observe (matter). If you pick the parameters correctly, on the other hand, these two types of structure belong together (higher densities of matter corresponding to hotter spots in the cosmic microwave background). This match is a great simplification of the story – it explains something.

But the further you try to turn back time in the early universe, the harder it becomes to obey the scientific credo of storytelling: that you should seek only simpler explanations, not more complicated ones. The problem is that the story we presently have is already very simple. This really is my biggest issue with eternal inflation and the multiverse, cyclic cosmologies, bounces, and so on and so forth. They are stories, all right, but they aren’t simplifying anything. They just add clutter, like the programmer who set up our universe so that it looks the way it looks.

On some days I hope something scientific will eventually come out of these stories. But today I am just afraid we have overstepped the limits of science.

Sunday, November 12, 2017

Away Note

I am overseas the coming week, giving a seminar at Perimeter Institute on Tuesday, a colloquium in Toronto on Wednesday, and on Thursday I am scheduled to “make sense of mind-blowing physics” with Natalie Wolchover in New York. The latter event, I am told, has a live webcast starting at 6:30 pm Eastern, so dial in if you fancy seeing my new haircut. (Short again.)

Please be warned that things on this blog will go very slowly while I am away. On this occasion I want to remind you that I have comment moderation turned on. This means comments will not appear until I manually approve them. I usually check the queue at least once per day.


(The above image is the announcement for the New York event. Find the seven layout blunders.)