
Thursday, May 11, 2017

A Philosopher Tries to Understand the Black Hole Information Paradox

Is the black hole information loss paradox really a paradox? Tim Maudlin, a philosopher from NYU and occasional reader of this blog, doesn’t think so. Today, he has a paper on the arXiv in which he complains that the so-called paradox isn’t one, and that physicists don’t understand what they are talking about.
So is the paradox a paradox? If you mean whether black holes break mathematics, then the answer is clearly no. The problem with black holes is that nobody knows how to combine them with quantum field theory. It would better be called a problem than a paradox, but nomenclature rarely follows logical argumentation.

Here is the problem. The dynamics of quantum field theories is always reversible. It also preserves probabilities which, taken together (assuming linearity), means the time-evolution is unitary. That quantum field theories are unitary depends on certain assumptions about space-time, notably that space-like hypersurfaces – a generalized version of moments of ‘equal time’ – are complete. Space-like hypersurfaces after the entire evaporation of black holes violate this assumption. They are, as the terminology has it, not complete Cauchy surfaces. Hence, there is no reason for time-evolution to be unitary in a space-time that contains a black hole. What’s the paradox then, Maudlin asks.
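The chain of reasoning above – linear, probability-preserving, reversible evolution is unitary – can be checked on a toy finite-dimensional system. Here is a minimal sketch (my own illustration, not part of the original argument), which builds a random Hermitian "Hamiltonian" and verifies that the resulting evolution operator is invertible and norm-preserving:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian" for a 4-state toy system
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Time evolution U = exp(-iH), built from the spectral decomposition of H
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals)) @ V.conj().T

# Reversible: U is invertible, with inverse U†
assert np.allclose(U @ U.conj().T, np.eye(4))

# Probability-preserving: normalized states stay normalized
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
```

The evaporation scenario doesn't break this algebra; it removes the premise – a complete Cauchy surface – from which the unitary time-evolution of the field theory is derived in the first place.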

First, let me point out that this is hardly news. As Maudlin himself notes, this is an old story, though I admit it’s often not spelled out very clearly in the literature. In particular the Susskind-Thorlacius paper that Maudlin picks on is wrong in more ways than I can possibly get into here. Everyone in the field who has their marbles together knows that time-evolution is unitary on “nice slices”– which are complete Cauchy-hypersurfaces – at all finite times. The non-unitarity comes from eventually cutting these slices. The slices that Maudlin uses aren’t quite as nice because they’re discontinuous, but they essentially tell the same story.

What Maudlin does not spell out however is that knowing where the non-unitarity comes from doesn’t help much to explain why we observe it to be respected. Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. For all these Earthlings know, there are lots of black holes throughout the universe and their current hypersurface hence isn’t complete. Worse still, in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then, according to Maudlin’s argument, we’d have no reason to even expect a unitary evolution because the mathematical requirements for the necessary proof aren’t fulfilled. But we do.

So that’s what irks physicists: If black holes violated unitarity all over the place, how come we don’t notice? This issue is usually phrased in terms of the scattering-matrix, which poses a concrete question: If I could create a black hole in a scattering process, how come we never see any violation of unitarity?

Maybe we do, you might say, or maybe it’s just too small an effect. Yes, people have tried that argument, which is the whole discussion about whether unitarity maybe just is violated etc. That’s the place where Hawking came from all these years ago. Does Maudlin want us to go back to the 1980s?

In his paper, he also points out correctly that – from a strictly logical point of view – there’s nothing to worry about because the information that fell into a black hole can be kept in the black hole forever without any contradictions. I am not sure why he doesn’t mention that this isn’t a new insight either – it’s what goes in the literature as a remnant solution. Now, physicists normally assume that inside of remnants there is no singularity because nobody really believes the singularity is physical, whereas Maudlin keeps the singularity; but from the outside perspective that’s entirely irrelevant.

It is also correct, as Maudlin writes, that remnant solutions have been discarded on spurious grounds with the result that research on the black hole information loss problem has grown into a huge bubble of nonsense. The most commonly named objection to remnants – the pair production problem – has no justification because – as Maudlin writes – it presumes that the volume inside the remnant is small for which there is no reason. This too is hardly news. Lee and I pointed this out, for example, in our 2009 paper. You can find more details in a recent review by Chen et al.

The other objection against remnants is that this solution would imply that the Bekenstein-Hawking entropy doesn’t count microstates of the black hole. This idea is very unpopular with string theorists who believe that they have shown the Bekenstein-Hawking entropy counts microstates. (Fyi, I think it’s a circular argument because it assumes a bulk-boundary correspondence ab initio.)

Either way, none of this is really new. Maudlin’s paper is just reiterating all the options that physicists have been chewing on forever: Accept unitarity violation, store information in remnants, or finally get it out.

The real problem with black hole information is that nobody knows what happens to it. As time passes, you inevitably come into a regime where quantum effects of gravity are strong, and nobody can calculate what happens then. The main argument we are seeing in the literature is whether quantum gravitational effects become noticeable before the black hole has shrunk to a tiny size.

So what’s new about Maudlin’s paper? The condescending tone by which he attempts public ridicule strikes me as bad news for the – already conflict-laden – relation between physicists and philosophers.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers, people who weren’t discussing physics, but the philosophical underpinning of physicists’ arguments. Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me, or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach gargle, trying to will my stomach not to gargle and instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound-waves in certain kinds of fluids, which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational tests of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t a speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

Then came a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have now, for the third time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes to physics beyond the standard model. The survey results, however, will only be published in January, so at present I can’t tell you more than that. But once the paper is available you’ll read about it on this blog.

The next talk was by Radin Dardashti who warned us ahead that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to qfts).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, ie we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete, which would mean it’s not a consistent theory up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer about recent developments in modelling particle dark matter. In astrophysics – like in particle-physics – the trend is to go away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – be my judge. But the workshop helped me see the reason I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Tuesday, September 06, 2016

Sorry, the universe wasn’t made for you

Last month, game reviewers were all over No Man’s Sky, a new space adventure launched to much press attention. Unlike previous video games, this one calculates players’ environments from scratch rather than revealing hand-crafted landscapes and creatures. The calculations populate No Man’s Sky’s virtual universe with about 10^19 planets, all with different flora and fauna – at least that’s what we’re told, not like anyone actually checked. That seems a giganormous number but is still fewer than the number of planets in the actual universe, estimated at roughly 10^24.



Users’ expectations of No Man’s Sky were high – and were highly disappointed. All the different planets, it turns out, still get a little repetitive with their limited set of options and features. It’s hard to code a universe as surprising as reality and run it on processors that occupy only a tiny fraction of that reality.

Theoretical physicists, meanwhile, have the opposite problem: The fictive universes they calculate are more surprising than they’d like them to be.

Having failed on their quest for a theory of everything, in the area of quantum gravity many theoretical physicists now accept that a unique theory can’t be derived from first principles. Instead, they believe, additional requirements must be used to select the theory that actually describes the universe we observe. That, of course, is what we’ve always done to develop theories – the additional requirements being empirical adequacy.

The new twist is that many of these physicists think the missing observational input is the existence of life in our universe. I hope you just raised an eyebrow or two because physicists don’t normally have much business with “life.” And indeed, they usually only speak about preconditions of life, such as atoms and molecules. But that the sought-after theory must be rich enough to give rise to complex structures has become the most popular selection principle.

Known as the “anthropic principle,” this argument allows physicists to discard all theories that can’t produce sentient observers, on the rationale that we don’t inhabit a universe that lacks them. One could of course instead just discard all theories with parameters that don’t match the measured values, but that would be so last century.

The anthropic principle is often brought up in combination with the multiverse, but logically it’s a separate argument. The anthropic principle – that our theories must be compatible with the existence of life in our universe – is an observational requirement that can lead to constraints on the parameters of a theory. This requirement must be fulfilled whether or not universes for different parameters actually exist. In the multiverse, however, the anthropic principle is supposedly the only criterion by which to select the theory for our universe, at least in terms of probability so that we are likely to find ourselves here. Hence the two are often discussed together.

Anthropic selection had a promising start with Weinberg’s prescient estimate for the cosmological constant. But the anthropic principle hasn’t solved the problem it was meant to solve, because it does not single out one unique theory either. This has been known for at least a decade, but the myth that our universe is “finetuned for life” still hasn’t died.

The general argument against the success of anthropic selection is that all evidence for the finetuning of our theories explores only a tiny space of all possible combinations of parameters. A typical argument for finetuning goes like this: If parameter X was only a tiny bit larger or smaller than the observed value, then atoms couldn’t exist or all stars would collapse or something similarly detrimental to the formation of large molecules. Hence, parameter X must have a certain value to high precision. However, these arguments for finetuning – of which there exist many – don’t take into account simultaneous changes in several parameters and are therefore inconclusive.
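The loophole can be made explicit with a deliberately artificial toy model (my own illustration; the criterion below is invented, not drawn from any physical calculation). Suppose "viability" depends only on the difference of two parameters. A scan of one parameter at a fixed value of the other then suggests finetuning, even though a whole one-dimensional family of parameter combinations is viable:

```python
# Toy viability criterion: "atoms exist" iff |x - y| < 0.01.
# Entirely made up -- it only illustrates the logic of the argument.
def viable(x, y):
    return abs(x - y) < 0.01

y0 = 0.5  # hold the second parameter at its "observed" value

# One-parameter scan: x appears finely tuned to about 1% precision
window = [x / 10000 for x in range(10001) if viable(x / 10000, y0)]
print(min(window), max(window))  # roughly 0.49 to 0.51

# Joint variation: every value of y admits a viable x, so the
# "tuned" combination is one point on an entire line of options
assert all(viable(y, y) for y in [0.1, 0.3, 0.7, 0.9])
```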

Importantly, besides this general argument there also exist explicit counterexamples. In the 2006 paper A Universe Without Weak Interactions, Harnik, Kribs, and Perez discussed a universe that seems capable of complex chemistry and yet has fundamental particles entirely different from our own. More recently, Abraham Loeb from Harvard argued that primitive forms of life might have been possible already in the early universe under circumstances very different from today’s. And a recent paper (ht Jacob Aron) adds another example:

    Stellar Helium Burning in Other Universes: A solution to the triple alpha fine-tuning problem
    By Fred C. Adams and Evan Grohs
    1608.04690 [astro-ph.CO]

In this work the authors show that some combinations of fundamental constants would actually make it easier for stars to form Carbon, an element often assumed to be essential for the development of life.

This is a fun paper because it extends the work of Fred Hoyle, who was the first to use the anthropic principle to make a prediction (though some historians question whether that was his actual motivation). He understood that it’s difficult for stars to form heavy elements because the chain is broken in the first steps by Beryllium. Beryllium has atomic number 4, but the version that’s created in stellar nuclear fusion from Helium (with atomic number 2) is unstable and therefore can’t be used to build even heavier nuclei.

Hoyle suggested that the chain of nuclear fusion avoids Beryllium and instead goes from three Helium nuclei straight to carbon (with atomic number 6). Known as the triple-alpha process (because Helium nuclei are also referred to as alpha-particles), the chances of this happening are slim – unless the Helium merger hits a resonance of the Carbon nucleus. Which it does if the parameters are “just right.” Hoyle hence concluded that such a resonance must exist, and that was later experimentally confirmed.

Adams and Grohs now point out that there are other sets of parameters altogether for which Beryllium is simply stable and the Carbon resonance doesn’t have to be finely tuned. In their paper, they do not deal with the fundamental constants that we normally use in the standard model – they instead discuss nuclear structure, which has constants that are derived from the standard model constants but are quite complicated functions thereof (if known at all). Still, they have basically invented a fictional universe that seems at least as capable of producing life as ours.

This study is hence another demonstration that a chemistry complex enough to support life can arise under circumstances that are not anything like the ones we experience today.

I find it amusing that many physicists believe the evolution of complexity is the exception rather than the rule. Maybe it’s because they mostly deal with simple systems, at or close to equilibrium, with few particles, or with many particles of the same type – systems that the existing math can deal with.

It makes me wonder how many more fictional universes physicists will invent and write papers about before they bury the idea that anthropic selection can single out a unique theory. Fewer, I hope, than there are planets in No Man’s Sky.

Monday, August 15, 2016

The Philosophy of Modern Cosmology (srsly)

[Image: model of inflation. Source: umich.edu]
I wrote my recent post on the “Unbearable Lightness of Philosophy” to introduce a paper summary, but it got somewhat out of hand. I don’t want to withhold the actual body of my summary though. The paper in question is


Before we start I have to warn you that the paper speaks a lot about realism and underdetermination, and I couldn’t figure out what exactly the authors mean with these words. Sure, I looked them up, but that didn’t help because there doesn’t seem to be an agreement on what the words mean. It’s philosophy after all.

Personally, I subscribe to a philosophy I’d like to call agnostic instrumentalism, which means I think science is useful and I don’t care what else you want to say about it – anything from realism to solipsism to Carroll’s “poetic naturalism” is fine by me. In newspeak, I’m a whateverist – now go away and let me science.

The authors of the paper, in contrast, position themselves as follows:
“We will first state our allegiance to scientific realism… We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that we accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs.”
But rather than explaining what this means, the authors next admit that this definition contains “vague words,” and apologize that they “will leave this general defense to more competent philosophers.” Interesting approach. A physics-paper in this style would say: “This is a research article about General Relativity which has something to do with curvature of space and all that. This is just vague words, but we’ll leave a general defense to more competent physicists.”

In any case, it turns out that it doesn’t matter much for the rest of the paper exactly what realism means to the authors – it’s a great paper also for an instrumentalist because it’s long enough so that, rolled up, it’s good to slap flies. The focus on scientific realism seems somewhat superfluous, but I notice that the paper is to appear in “The Routledge Handbook of Scientific Realism” which might explain it.

It also didn’t become clear to me what the authors mean by underdetermination. Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data (which is also what Wikipedia offers by way of definition). But the question of what’s necessary to explain data isn’t a simple yes-or-no question – it’s a question that needs a quantitative analysis.

In theory development we always have a tension between simplicity (fewer assumptions) and precision (better fit) because more parameters normally allow for better fits. Hence we use statistical measures to find out in which case a better fit justifies a more complicated model. I don’t know how one can claim that a model is “underdetermined” without such quantitative analysis.
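For concreteness, here is one standard such measure, the Akaike information criterion, applied to polynomial fits of noisy straight-line data (a generic statistics sketch of my own, not an example from the text). Extra parameters always shrink the residuals, but the AIC's penalty term 2k should favor the simpler, true model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: a straight line plus Gaussian noise
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

def aic(degree):
    """AIC = n*log(RSS/n) + 2k for a least-squares polynomial fit,
    where k = degree + 1 is the number of fitted parameters."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    return x.size * np.log(rss / x.size) + 2 * (degree + 1)

# Higher-degree fits always have smaller residuals, but typically
# a larger AIC: the better fit doesn't justify the extra parameters.
scores = {d: round(aic(d), 1) for d in range(1, 6)}
print(scores)
```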

The authors of the paper for the most part avoid the need to quantify underdetermination by using sociological markers, ie they treat models as underdetermined if cosmologists haven’t yet agreed on the model in question. I guess that’s the best they could have done, but it’s not a basis on which one can discuss what will remain underdetermined. The authors for example seem to implicitly believe that evidence for a theory at high energies can only come from processes at such high energies, but that isn’t so – one can also use high precision measurements at low energies (at least in principle). In the end it comes down, again, to quantifying which model is the best fit.

With this advance warning, let me tell you the three main philosophical issues which the authors discuss.

1. Underdetermination of topology.

Einstein’s field equations are local differential equations which describe how energy-densities curve space-time. This means these equations describe how space changes from one place to the next and from one moment to the next, but they do not fix the overall connectivity – the topology – of space-time*.

A sheet of paper is a simple example. It’s flat and it has no holes. If you roll it up and make a cylinder, the paper is still flat, but now it has a hole. You could find out about this without reference to the embedding space by drawing a circle onto the cylinder and around its perimeter, so that it can’t be contracted to zero length while staying on the cylinder’s surface. This could never happen on a flat sheet. And yet, if you look at any one point of the cylinder and its surrounding, it is indistinguishable from a flat sheet. The flat sheet and the cylinder are locally identical – but they are globally different.

General Relativity thus can’t tell you the topology of space-time. But physicists don’t normally worry much about this because you can parameterize the differences between topologies, compute observables, and then compare the results to data. Topology is, in that, no different than any other assumption of a cosmological model. Cosmologists can, and have, looked for evidence of non-trivial space-time connectivity in the CMB data, but they haven’t found anything that would indicate our universe wraps around itself. At least so far.

In the paper, the authors point out an argument raised by someone else (Manchak) which claims that different topologies can’t be distinguished almost everywhere. I haven’t read the paper in question, but this claim is almost certainly correct. The reason is that while topology is a global property, you can change it on arbitrarily small scales. All you have to do is punch a hole into that sheet of paper, and whoops, it’s got a new topology. Or if you want something without boundaries, then identify two points with each other. Indeed you could sprinkle space-time with arbitrarily many tiny wormholes and in that way create the most abstruse topological properties (and, most likely, lots of causal paradoxes).

The topology of the universe is hence, like the topology of the human body, a matter of resolution. On distances visible to the eye you can count the holes in the human body on the fingers of your hand. On shorter distances though you’re all pores and ion channels, and on subatomic distances you’re pretty much just holes. So, asking what’s the topology of a physical surface only makes sense when one specifies at which distance scale one is probing this (possibly higher-dimensional) surface.

I thus don’t think any physicist will be surprised by the philosophers’ finding that cosmology severely underdetermines global topology. What the paper fails to discuss though is the scale-dependence of that conclusion. Hence, I would like to know: Is it still true that the topology will remain underdetermined on cosmological scales? And to what extent, and under which circumstances, can the short-distance topology have long-distance consequences, as eg suggested by the ER=EPR idea? What effect would this have on the separation of scales in effective field theory?

2. Underdetermination of models of inflation.

The currently most widely accepted model for the universe assumes the existence of a scalar field – the “inflaton” – and a potential for this field – the “inflation potential” – in which the field moves towards a minimum. While the field is getting there, space is exponentially stretched. At the end of inflation, the field’s energy is dumped into the production of particles of the standard model and dark matter.

This mechanism was invented to solve various finetuning problems that cosmology otherwise has, notably that the universe seems to be almost flat (the “flatness problem”), that the cosmic microwave background has the almost-same temperature in all directions except for tiny fluctuations (the “horizon problem”), and that we haven’t seen any funky things like magnetic monopoles or domain walls that tend to be plentiful at the energy scale of grand unification (the “monopole problem”).

Trouble is, there’s loads of inflation potentials that one can cook up, and most of them can’t be distinguished with current data. Moreover, one can invent more than one inflation field, which adds to the variety of models. So, clearly, the inflation models are severely underdetermined.

I’m not really sure why this overabundance of potentials is interesting for philosophers. This isn’t so much philosophy as sociology – that the models are underdetermined is why physicists get them published, and if there was enough data to extract a potential that would be the end of their fun. Whether there will ever be enough data to tell them apart, only time will tell. Some potentials have already been ruled out with incoming data, so I am hopeful.

The questions that I wish philosophers would take on are different ones. To begin with, I’d like to know which of the problems that inflation supposedly solves are actual problems. It only makes sense to complain about finetuning if one has a probability distribution. In this, the finetuning problem in cosmology is distinctly different from the finetuning problems in the standard model, because in cosmology one can plausibly argue there is a probability distribution – it’s that of fluctuations of the quantum fields which seed the initial conditions.

So, I believe that the horizon problem is a well-defined problem, assuming quantum theory remains valid close by the Planck scale. I’m not so sure, however, about the flatness problem and the monopole problem. I don’t see what’s wrong with just assuming the initial value for the curvature is tiny (finetuned), and I don’t know why I should care about monopoles given that we don’t know grand unification is more than a fantasy.

Then, of course, the current data indicates that the inflation potential too must be finetuned which, as Steinhardt has aptly complained, means that inflation doesn’t really solve the problem it was meant to solve. But to make that statement one would have to compare the severity of finetuning, and how does one do that? Can one even make sense of this question? Where are the philosophers if one needs them?

Finally, I have a more general conceptual problem that falls into the category of underdetermination, which is to what extent the achievements of inflation are actually independent of each other. Assume, for example, you have a theory that solves the horizon problem. Under which circumstances does it also solve the flatness problem and give the right tilt for the spectral index? I suspect that the assumptions for this do not require the full mechanism of inflation, with potential and all, and almost certainly not a very specific type of potential. Hence I would like to know what’s the minimal theory that explains the observations, and which assumptions are really necessary.

3. Underdetermination in the multiverse.

Many models for inflation create not only one universe, but infinitely many of them, a whole “multiverse”. In the other universes, fundamental constants – or maybe even the laws of nature themselves – can be different. How do you make predictions in a multiverse? You can’t, really. But you can make statements about probabilities, about how likely it is that we find ourselves in this universe with these particles and not any other.

To make statements about the probability of the occurrence of certain universes in the multiverse one needs a probability distribution or a measure (in the space of all multiverses or their parameters respectively). Such a measure should also take into account anthropic considerations, since there are some universes which are almost certainly inhospitable for life, for example because they don’t allow the formation of large structures.

In their paper, the authors point out that the combination of a universe ensemble and a measure is underdetermined by observations we can make in our universe. It’s underdetermined in the same way that, if I give you a bag of marbles and tell you the most likely pick is red, you can’t tell what’s in the bag.
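The bag-of-marbles analogy can be made concrete in a few lines (a toy sketch; the bag contents are made up for illustration):

```python
# Two different "ensembles with a measure" (bags with hypothetical
# contents) make the very same prediction about the only observable
# we have: which color is the most likely pick.
from collections import Counter

def most_likely_pick(bag: Counter) -> str:
    """Return the color with the highest count, i.e. the most probable draw."""
    return max(bag, key=bag.get)

bag_a = Counter(red=3, blue=1)            # one possible multiverse + measure
bag_b = Counter(red=5, blue=2, green=1)   # a very different one

# Both "theories" predict the same observation...
print(most_likely_pick(bag_a), most_likely_pick(bag_b))  # red red
# ...so observing "red" cannot distinguish between them.
```

Any number of further bags would make the same prediction, which is exactly the underdetermination the authors have in mind.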

I think physicists are well aware of this ambiguity, but unfortunately the philosophers don’t address why physicists ignore it. Physicists ignore it because they believe that one day they can deduce the theory that gives rise to the multiverse and the measure on it. To make their point, the philosophers would have had to demonstrate that this deduction is impossible. I think it is, but I’d rather leave the case to philosophers.

For an agnostic instrumentalist like me, a different question is more interesting: whether one stands to gain anything from taking a “shut-up-and-calculate” attitude to the multiverse, even if one distinctly dislikes it. Quantum mechanics too uses unobservable entities, and that formalism – however much you detest it – works very well. It really adds something new, regardless of whether or not you believe the wave-function is “real” in some sense. As far as the multiverse is concerned, I am not sure about this. So why bother with it?

Consider the best-case multiverse outcome: Physicists eventually find a measure on some multiverse according to which the parameters we have measured are the most likely ones. Hurray. Now forget about the interpretation and think of this calculation as a black box: You put math in one side and out comes a set of “best” parameters on the other side. You could always reformulate such a calculation as an optimization problem which allows one to calculate the correct parameters. So, independent of the thorny question of what’s real, what do I gain from thinking about measures on the multiverse rather than just looking for an optimization procedure straight away?

Yes, there are cases – like bubble collisions in eternal inflation – that would serve as independent confirmation for the existence of another universe. But no evidence for that has been found. So for me the question remains: under which circumstances is doing calculations in the multiverse an advantage rather than unnecessary mathematical baggage?

I think this paper is a good example of the difference between philosophers’ and physicists’ interests, which I wrote about in my previous post. It was a good (if somewhat long) read and it gave me something to think about, though I will need some time to recover from all the -isms.

* Note added: The word “connectivity” in this sentence is a loose stand-in for readers who do not know the technical term “topology.” It does not refer to the technical term “connectivity.”

Friday, August 12, 2016

The Unbearable Lightness of Philosophy

Philosophy isn’t useful for practicing physicists. On that, I am with Steven Weinberg and Lawrence Krauss, who have expressed similar opinions. But I think it’s an unfortunate situation, because physicists – especially those who work on the foundations of physics – could use help from philosophers.

Massimo Pigliucci, a professor of philosophy at CUNY-City College, has ingeniously addressed physicists’ complaints about the uselessness of philosophy by declaring that “the business of philosophy is not to advance science.” Philosophy, hence, isn’t just useless, it’s useless on purpose. I applaud. At least that means it has a purpose.

But I shouldn’t let Massimo Pigliucci speak for his whole discipline.

I’ve been told that, as far as physics is concerned, there are presently three good philosophers roaming Earth: David Albert, Jeremy Butterfield, and Tim Maudlin. It won’t surprise you to hear that I have some issues to pick with each of these gentlemen, but mostly they seem reasonable indeed. I would even like to nominate a fourth Good Philosopher, Steven Weinstein from UoW, with whom even I haven’t yet managed to disagree.

The good Maudlin, for example, had an excellent essay last year on PBS NOVA, in which he argued that “Physics needs Philosophy.” I really liked his argument until he wrote that “Philosophers obsess over subtle ambiguities of language,” which pretty much sums up all that physicists hate about philosophy.

If you want to know “what follows from what,” as Maudlin writes, you have to convert language into mathematics and thereby remove the ambiguities. Unfortunately, philosophers never seem to take that step, hence physicists’ complaints that it’s just words. Or, as Arthur Koestler put it, “the systematic abuse of a terminology specially invented for that purpose.”

Maybe, I admit, it shouldn’t be the philosophers’ job to spell out how to remove the ambiguities in language. Maybe that should already be the job of physicists. But regardless of whom you want to assign the task of reaching across the line, presently little crosses it. Few practicing physicists today care what philosophers do or think.

And as someone who has tried to write about topics on the intersection of both fields, I can report that this disciplinary segregation is meanwhile institutionalized: The physics journals won’t publish on the topic because it’s too much philosophy, and the philosophy journals won’t publish because it’s too much physics.

In a recent piece on Aeon, Pigliucci elaborates on the demarcation problem, how to tell science from pseudoscience. He seems to think this problem is what underlies some physicists’ worries about string theory and the multiverse, worries that were topic of a workshop that both he and I attended last year.

But he got it wrong. While I know lots of physicists critical of string theory for one reason or another, none of them would go so far as to declare it pseudoscience. No, the demarcation problem that physicists worry about isn’t the one between science and pseudoscience. It’s the one between science and philosophy. It is not without irony that Pigliucci in his essay conflates the two fields. Or maybe the purpose of his essay was an attempt to revive the “string wars,” in which case, wake me when it’s over.

To me, the part of philosophy that is relevant to physics is what I’d like to call “pre-science” – sharpening questions sufficiently so that they can eventually be addressed by scientific means. Maudlin, in his above-mentioned essay, expressed a very similar point of view.

Philosophers in that area are necessarily ahead of scientists. But they also never get the credit for actually answering a question, because for that they first have to hand it over to scientists. Like a psychologist, the philosopher of physics thus succeeds by eventually making themselves superfluous. It seems a thankless job. There’s a reason I preferred studying physics instead.

Many of the “bad philosophers” are those who aren’t quick enough to notice that a question they are thinking about has been taken over by scientists. That this failure to notice can evidently persist, in some cases for decades, is another institutionalized problem that originates in the lack of communication between the two fields.

Hence, I wish there were more philosophers willing to make it their business to advance science and to communicate across the boundaries. Maybe physicists would complain less that philosophy is useless if it weren’t useless.

Saturday, August 06, 2016

The LHC “nightmare scenario” has come true.

The recently deceased diphoton bump. Img Src: Matt Strassler.

I finished high school in 1995. It was the year the top quark was discovered, a prediction dating back to 1973. As I read the articles in the news, I was fascinated by the mathematics that allowed physicists to reconstruct the structure of elementary matter. It wouldn’t have been difficult to predict in 1995 that I’d go on to do a PhD in theoretical high energy physics.

Little did I realize that for more than 20 years the provisional-looking standard model would remain the undefeated world champion of accuracy, irritatingly successful in its arbitrariness and yet impossible to surpass. We added neutrino masses in the late 1990s, but this idea dates back to the 1950s. The prediction of the Higgs, discovered in 2012, originated in the early 1960s. And while the poor standard model has been dismissed as “ugly” by everyone from Stephen Hawking to Michio Kaku to Paul Davies, it’s still the best we can do.

Since I entered physics, I’ve seen grand unified models proposed and falsified. I’ve seen loads of dark matter candidates not being found, followed by a ritual parameter adjustment to explain the lack of detection. I’ve seen supersymmetric particles being “predicted” with constantly increasing masses, from a few GeV, to some 100 GeV, to LHC energies of a few TeV. And now that the LHC hasn’t seen any superpartners either, particle physicists are more than willing to once again move the goalposts.

During my professional career, all I have seen is failure. A failure of particle physicists to uncover a more powerful mathematical framework to improve upon the theories we already have. Yes, failure is part of science – it’s frustrating, but not worrisome. What worries me much more is our failure to learn from failure. Rather than trying something new, we’ve been trying the same thing over and over again, expecting different results.

When I look at the data what I see is that our reliance on gauge-symmetry and the attempt at unification, the use of naturalness as guidance, and the trust in beauty and simplicity aren’t working. The cosmological constant isn’t natural. The Higgs mass isn’t natural. The standard model isn’t pretty, and the concordance model isn’t simple. Grand unification failed. It failed again. And yet we haven’t drawn any consequences from this: Particle physicists are still playing today by the same rules as in 1973.

For the last ten years you’ve been told that the LHC must see some new physics besides the Higgs because otherwise nature isn’t “natural” – a technical term invented to describe the degree of numerical coincidence of a theory. I’ve been laughed at when I explained that I don’t buy into naturalness because it’s a philosophical criterion, not a scientific one. But on that matter I got the last laugh: Nature, it turns out, doesn’t like to be told what’s presumably natural.

The idea of naturalness that has been preached for so long is plainly not compatible with the LHC data, regardless of what else may yet be found in the data to come. And now that naturalness is in the way of moving predictions for so-far undiscovered particles – yet again! – to higher energies, particle physicists, opportunistic as always, are suddenly more than willing to discard naturalness to justify the next larger collider.

Now that the diphoton bump is gone, we’ve entered what has become known as the “nightmare scenario” for the LHC: The Higgs and nothing else. Many particle physicists thought of this as the worst possible outcome. It has left them without guidance, lost in a thicket of rapidly multiplying models. Without some new physics, they have nothing to work with that they haven’t already had for 50 years, no new input that can tell them in which direction to look for the ultimate goal of unification and/or quantum gravity.

That the LHC hasn’t seen evidence for new physics is to me a clear signal that we’ve been doing something wrong, that our experience from constructing the standard model is no longer a promising direction to continue. We’ve maneuvered ourselves into a dead end by relying on aesthetic guidance to decide which experiments are the most promising. I hope that this latest null result will send a clear message that you can’t trust the judgement of scientists whose future funding depends on their continued optimism.

Things can only get better.

[This post previously appeared in a longer version on Starts With A Bang.]

Monday, July 04, 2016

Why the LHC is such a disappointment: A delusion by name “naturalness”

Naturalness, according to physicists.

Before the LHC turned on, theoretical physicists had high hopes the collisions would reveal new physics besides the Higgs. The chances of that happening get smaller by the day. The possibility still exists, but the absence of new physics so far has already taught us an important lesson: Nature isn’t natural. At least not according to theoretical physicists.

The reason that many in the community expected new physics at the LHC was the criterion of naturalness. Naturalness, in general, is the requirement that a theory should not contain dimensionless numbers that are either very large or very small. If that is so, then theorists will complain the numbers are “finetuned” and regard the theory as contrived and hand-made, not to say ugly.

Technical naturalness (originally proposed by ‘t Hooft) is a formalized version of naturalness, applied in particular in the context of effective field theories. Since you can convert any number much larger than one into a number much smaller than one by taking its inverse, it’s sufficient to consider small numbers in the following. A theory is technically natural if all suspiciously small numbers are protected by a symmetry. The standard model is technically natural, except for the mass of the Higgs.

The Higgs is the only (fundamental) scalar we know and, unlike all the other particles, its mass receives quantum corrections of the order of the cutoff of the theory. The cutoff is assumed to be close to the Planck energy – which means the estimated mass is some 15 orders of magnitude larger than the observed mass. This too-large mass of the Higgs could be remedied simply by subtracting a similarly large term. This term, however, would have to be delicately chosen so that it almost, but not exactly, cancels the huge Planck-scale contribution. It would hence require finetuning.
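To get a feeling for the numbers, here is a back-of-the-envelope sketch (the one-loop factor of 1/(4π) and the dropped order-one couplings are schematic assumptions of mine, not exact results):

```python
import math

# Schematic one-loop estimate: the quantum correction to the Higgs
# mass is of order cutoff/(4*pi), with order-one couplings dropped.
cutoff  = 1.22e19   # GeV, Planck energy (the assumed cutoff)
m_higgs = 125.0     # GeV, observed Higgs mass

delta_m = cutoff / (4 * math.pi)          # estimated correction, ~1e18 GeV
orders  = math.log10(delta_m / m_higgs)   # roughly 15-16 orders of magnitude

# Since the finetuning happens in the mass-squared, the bare term
# must cancel the correction to about twice that many digits:
digits = 2 * orders
print(f"correction exceeds observation by ~10^{orders:.0f}; "
      f"cancellation to ~{digits:.0f} digits required")
```

That is the “delicate choice” referred to above: a subtraction that agrees with the Planck-scale contribution to some thirty-odd decimal places but not exactly.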

In the framework of effective field theories, a theory that is not natural is one that requires a lot of finetuning at high energies to get the theory at low energies to work out correctly. The degree of finetuning can be, and has been, quantified in various measures of naturalness. Finetuning is thought of as unacceptable because the theory at high energy is presumed to be more fundamental. The physics we find at low energies, so the argument goes, should not be highly sensitive to the choice we make for that more fundamental theory.

Until a few years ago, most high energy particle theorists therefore would have told you that the apparent need to finetune the Higgs mass means that new physics must appear near the energy scale where the Higgs is produced. The new physics, for example supersymmetry, would avoid the finetuning.

There’s a standard tale they tell about the use of naturalness arguments, which goes somewhat like this:

1) The electron mass isn’t natural in classical electrodynamics, and if one wants to avoid finetuning, this means new physics has to appear at around 70 MeV. Indeed, new physics appears even earlier, in the form of the positron, rendering the electron mass technically natural.

2) The difference between the masses of the neutral and charged pion is not natural because it’s suspiciously small. To prevent finetuning one estimates that new physics must appear around 700 MeV, and indeed it shows up in the form of the rho meson.

3) The lack of flavor changing neutral currents in the standard model means that a parameter which could a priori have been anything must be very small. To avoid fine-tuning, the existence of the charm quark is required. And indeed, the charm quark shows up in the estimated energy range.
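The first two estimates can be reproduced on the back of an envelope (a rough sketch; the order-one prefactors are conventional choices of mine and the resulting cutoffs only indicative):

```python
import math

alpha = 1 / 137.036    # fine-structure constant
m_e   = 0.511          # MeV, electron mass

# 1) Classical electrodynamics: the electron self-energy is of order
#    alpha * cutoff. Requiring that it not exceed the electron mass
#    itself gives cutoff ~ m_e / alpha, i.e. ~70 MeV.
cutoff_electron = m_e / alpha

# 2) Pion mass splitting: the electromagnetic correction to the charged
#    pion's mass-squared is ~ (3*alpha/(4*pi)) * cutoff^2. Equating it
#    to the observed splitting gives the scale where new physics must
#    enter -- within a factor of order one of the rho meson at 775 MeV.
m_pi_charged = 139.57  # MeV
m_pi_neutral = 134.98  # MeV
delta_m_sq = m_pi_charged**2 - m_pi_neutral**2
cutoff_pion = math.sqrt(delta_m_sq * 4 * math.pi / (3 * alpha))

print(f"electron: cutoff ~ {cutoff_electron:.0f} MeV")  # ~70 MeV
print(f"pion:     cutoff ~ {cutoff_pion:.0f} MeV")      # ~850 MeV
```

With these conventions the pion estimate comes out somewhat above the ~700 MeV quoted in the tale, which is the usual fate of order-of-magnitude arguments.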

Of these three examples, only the last was an actual prediction (Glashow, Iliopoulos, and Maiani, 1970). To my knowledge, this is the only prediction that technical naturalness has ever given rise to – the other two examples are postdictions.

Not exactly a great score card.

But well, given that the standard model – in hindsight – obeys this principle, it seems reasonable enough to extrapolate it to the Higgs mass. Or does it? Seeing that the cosmological constant, the only other known example where the Planck mass comes in, isn’t natural either, I am not very convinced.

A much larger problem with naturalness is that it’s a circular argument and thus a merely aesthetic criterion. Or, if you prefer, a philosophical criterion. You cannot make a statement about the likelihood of an occurrence without a probability distribution. And that distribution already necessitates a choice.

In the currently used naturalness arguments, the probability distribution is assumed to be uniform (or at least approximately uniform) in a range that can be normalized to one by dividing by suitable powers of the cutoff. Any other type of distribution, say one that is sharply peaked around small values, would require the introduction of such a small value into the distribution already. But such a small value justifies itself by the probability distribution, just like a number close to one justifies itself by its probability distribution.

Naturalness, hence, becomes a chicken-and-egg problem: Put in the number one, get out the number one. Put in 0.00004, get out 0.00004. The only way to break that circle is to just postulate that some number is somehow better than all other numbers.
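The prior-dependence is easy to see numerically (a toy sketch; the allowed range below is made up for illustration): whether a number like 0.00004 counts as “finetuned” depends entirely on the distribution one assumes.

```python
import math

x = 4e-5                 # a "suspiciously small" dimensionless parameter
lo, hi = 1e-6, 1.0       # assumed allowed range (chosen for illustration)

# Probability of drawing a value at least as small as x...

# ...under a uniform prior on [lo, hi]: tiny, so x looks "finetuned".
p_uniform = (x - lo) / (hi - lo)

# ...under a log-uniform prior on [lo, hi]: sizable, so x is unremarkable.
p_log = math.log(x / lo) / math.log(hi / lo)

print(f"uniform prior:     P(value <= {x}) = {p_uniform:.1e}")
print(f"log-uniform prior: P(value <= {x}) = {p_log:.2f}")
```

The same measured number is improbable under one prior and perfectly ordinary under another; the verdict “finetuned” is a statement about the prior, not about the number.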

The number one is indeed special in that it’s the unit element of the multiplication group. One can try to exploit this to come up with a mechanism that prefers a uniform distribution with an approximate width of one, by introducing a probability distribution on the space of probability distributions, leading to a recursion relation. But that just leaves one to explain why that particular mechanism.

Another way to see that this can’t solve the problem is that any such mechanism will depend on the basis in the space of functions. E.g., you could try to single out a probability distribution by demanding that it equal its own Fourier transform. But the Fourier transform is just one of infinitely many basis transformations in the space of functions. So again, why exactly this one?

Or you could try to introduce a probability distribution on the space of transformations among bases of probability distributions, and so on. Indeed, I’ve played around with this for a while. But in the end you are always left with an ambiguity: either you have to choose the distribution, or the basis, or the transformation. It’s just pushing the bump around under the carpet.

The basic reason there’s no solution to this conundrum is that you’d need another theory for the probability distribution, and that theory per assumption isn’t part of the theory for which you want the distribution. (It’s similar to the issue with the meta-law for time-varying fundamental constants, in case you’re familiar with this argument.)

In any case, whether or not you buy my conclusion, it should give you pause that high energy theorists don’t ever address the question of where the probability distribution comes from. Suppose there indeed was a UV-complete theory of everything that predicted all the parameters in the standard model. Why then would you expect the parameters to be stochastically distributed to begin with?

This missing probability distribution, however, isn’t my main issue with naturalness. Let’s just postulate that the distribution is uniform and admit it’s an aesthetic criterion, alrighty then. My main issue with naturalness is that it’s a fundamentally nonsensical criterion.

Any theory that we can conceive of which describes nature correctly must necessarily contain hand-picked assumptions which we have chosen “just” to fit observations. If that wasn’t so, all we’d have left to pick assumptions would be mathematical consistency, and we’d end up in Tegmark’s mathematical universe. In the mathematical universe, then, we’d no longer have to choose a consistent theory, ok. But we’d instead have to figure out where we are, and that’s the same question in a different guise.

All our theories contain lots of assumptions, like Hilbert spaces and Lie algebras and Hausdorff measures and so on. For none of these is there any explanation other than “it works.” In the space of all possible mathematics, the selection of this particular math is infinitely finetuned already – and it has to be, for otherwise we’d be lost again in Tegmark space.

The mere idea that we can justify the choice of assumptions for our theories in any other way than requiring them to reproduce observations is logical mush. The existing naturalness arguments single out a particular type of assumption – parameters that take on numerical values – but what’s worse about this hand-selected assumption than any other hand-selected assumption?

This is not to say that naturalness is always a useless criterion. It can be applied in cases where one knows the probability distribution, for example for the typical distances between stars or the typical quantum fluctuations in the early universe, etc. I also suspect that it is possible to find an argument for the naturalness of the standard model that does not necessitate postulating a probability distribution, but I am not aware of one.

It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating to nature what’s supposedly natural.

Monday, May 09, 2016

Book review: “The Big Picture” by Sean Carroll

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself
Sean Carroll
Dutton (May 10, 2016)

Among the scientific disciplines, physics is unique: Concerned with the most fundamental entities, its laws must be respected in all other areas of science. While there are many emergent laws which are interesting in their own right – from neurobiology to sociology – there is no doubt they all have to be compatible with energy conservation. And the second law of thermodynamics. And quantum mechanics. And the standard model better be consistent with whatever you think are the neurological processes that make you “you.” There’s no avoiding physics.

In his new book, The Big Picture, Sean explains just why you can’t ignore physics when you talk about extrasensory perception, consciousness, god, afterlife, free will, or morals. In the first part, Sean lays out what, to our best current knowledge, the fundamental laws of nature are and what their relevance is for all other emergent laws. In the later parts he then goes through the consequences that follow from this.

On the way from quantum field theory to morals, he covers what science has to say about complexity, the arrow of time, and the origin of life. (If you attended the 2011 FQXi conference, parts will sound very familiar.) Then, towards the end of the book, he derives advice from his physics-based philosophy – which he calls “poetic naturalism” – for finding “meaning” in life and finding a “good” way to organize our living together (scare quotes because these words might not mean what you think they mean). His arguments rely heavily on Bayesian reasoning, so you better be prepared to update your belief system while reading.

The Big Picture is, above everything, a courageous book – and an overdue one. I have had many arguments about exactly the issues that Sean addresses in his book – from “qualia” to “downwards causation” – but I neither have the patience nor the interest to talk people out of their cherished delusions. I’m an atheist primarily because I think religion would be wasting my time, time that I’d rather spend on something more insightful. Trying to convince people that their beliefs are inconsistent would also be wasting my time, hence I don’t. But if I did, I almost certainly wouldn’t be able to remain as infallibly polite as Sean.

So, I am super happy about this book. Because now, whenever someone brings up Mary the Confused Color Scientist, who can’t tell sensory perception from knowledge about that perception, I’ll just – politely – tell them to read Sean’s book. The best thing I learned from The Big Picture is that apparently Frank Jackson, the philosopher who came up with the Color Scientist, eventually himself conceded that the argument was wrong. The world of philosophy does indeed sometimes move! Time, then, to stop talking about qualia.

I really wish I had found something to disagree with in Sean’s book, but the only quibble I have (you won’t be surprised to hear) is that I think what Sean-The-Compatibilist calls “free will” doesn’t deserve being called “free will.” Using the adjective “free” strongly suggests an independence from the underlying microscopic laws, and hence a case of “strong emergence” – which is an idea that should go into the same bin as qualia. I also agree with Sean however that fighting about the use of words is moot.

(The other thing I’m happy about is that, leaving aside the standard model and general relativity, Sean’s book has almost zero overlap with the book I’m writing. *wipes_sweat_off_forehead*. Could you all please stop writing books until I’m done, it makes me nervous.)

In any case, it shouldn’t come as a surprise that I agree so wholeheartedly with Sean, because I think everybody who open-mindedly looks at the evidence – i.e., all we currently know about the laws of nature – must come to the same conclusions. The main obstacle in conveying this message is that most people without training in particle physics don’t understand effective field theory, and consequently don’t see what it implies for the emergence of higher-level laws. Sean does a great job overcoming this obstacle.

I wish I could make myself believe that after the publication of Sean’s book I’ll never again have to endure someone insisting there must be something about their experience that can’t be described by a handful of elementary particles. But I’m not very good at making myself believe in exceedingly unlikely scenarios, whether that’s the existence of an omniscient god or the ability of humans to agree on how unlikely this existence is. At the very least however, The Big Picture should make clear that physicists aren’t just arrogant when they say their work reveals insights that reach far beyond the boundaries of their discipline. Physics indeed has an exceptional status among the sciences.

[Disclaimer: Free review copy.]

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.
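The aside that chaos is deterministic can be seen in one line of dynamics. The logistic map is a standard textbook example (not from the original post): it is chaotic, yet identical initial conditions always produce identical futures.

```python
def logistic_trajectory(x0: float, steps: int = 50, r: float = 4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Chaos: nearby starting points diverge quickly...
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)
print(abs(a[-1] - b[-1]))   # the tiny initial difference has been amplified

# ...but the dynamics is fully deterministic: the same past always
# produces the same future. Unpredictable in practice, no freedom anywhere.
assert logistic_trajectory(0.2) == logistic_trajectory(0.2)
```

Practical unpredictability, in other words, comes from our ignorance of the initial condition, not from any loophole in the deterministic law.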

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave your hands, yell “magic,” and insist that there are systems which are exempt from the laws of nature. And that these systems somehow have something to do with human brains.

The only known example of a law that is neither deterministic nor random comes from myself. But it’s a baroque construct meant as a proof of principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might, for example, settle on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. It remains to be seen whether it will remain impossible, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is insisting that on comparably large scales, like those between molecules responsible for chemical interactions in your brain, there are smaller components which may have a remaining influence. If you don’t keep track of these smaller components, the behavior of the larger components might not be predictable. You can then say “free will is emergent” because of “higher-level indeterminism.” It’s like saying that if I give you a robot and don’t tell you what’s in it, then you can’t predict what the robot will do, and consequently it must have free will. I haven’t managed to muster sufficient intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before, and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails from readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and doesn’t mean.

It doesn’t mean that you are not making decisions or choices. Free will or not, you have to do the thinking to arrive at a conclusion whose answer you previously didn’t know. Absence of free will doesn’t mean, either, that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides, if you don’t have free will, you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a widespread myth that free will skepticism erodes the foundation of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does Not Believe in Free Will?” The piece was written by Azim F. Shariff, a Professor of Psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. The authors don’t explicitly say this, but the message the article sends is pretty clear: if you so much as suggest that free will doesn’t exist, you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is deeply flawed. These psychology studies all work the same way: the study participants engage in some activity during which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of engaging in “wrongdoing” is tested and compared to that of a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will; it primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions, because “you” – that mass of neurons – are making the, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from further wrongdoing. They will either try to fix you or lock you up. In other words, you will be held responsible. None of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I keep harping on the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. The free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without it, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this, ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think – buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter, though, I side with Socrates: “The unexamined life is not worth living.”

Tuesday, December 29, 2015

Book review: “Seven brief lessons on physics” by Carlo Rovelli

Seven Brief Lessons on Physics
By Carlo Rovelli
Allen Lane (September 24, 2015)

Carlo Rovelli’s book is a collection of essays about the fundamental laws of physics as we presently know them, and the road that lies ahead. General Relativity, quantum mechanics, particle physics, cosmology, quantum gravity, the arrow of time, and consciousness are the topics that he touches upon in this slim, pocket-sized, 79-page collection.

Rovelli is one of the founders of the research program of Loop Quantum Gravity, an approach to understanding the quantum nature of space and time. His “Seven brief lessons on physics” are short on scientific detail, but excel at capturing the fascination of the subject and its relevance for understanding our universe, our existence, and ourselves. In laying out the big questions that drive physicists’ quest for a better understanding of nature, Rovelli makes clear how often-abstract contemporary research is intimately connected with the ancient desire to find our place in this world.

As a scientist, I would like to complain about numerous slight inaccuracies, but I forgive them since they are admittedly not essential to the message Rovelli conveys: the value of knowledge for its own sake. The book is more a work of art and philosophy than of science; it’s the work of a public intellectual reaching out to the masses. I applaud Carlo for not dumbing down his writing, for not being afraid of multi-syllable words and nested sentences; it’s a pleasure to read. He seems to spend too much time on the beach playing with snail-shells, though.

I might have recommended the book as a Christmas present for your relatives who never quite seem to understand why anyone would spend their life pondering the arrow of time, but I was too busy pondering the arrow of time to finish the book before Christmas.

I would recommend this book to anyone who wants to understand how fundamental questions in physics tie together with the mystery of our own existence, or maybe just wants a reminder of what got them into this field decades ago.

[Disclaimer: I got the book as gift from the author.]

Saturday, December 19, 2015

Ask Dr B: Is the multiverse science? Is the multiverse real?

Kay zum Felde asked:
“Is the multiverse science? How can we test it?”
I added “Is the multiverse real?” after Google offered it as autocomplete:


Dear Kay,

This is a timely question, one that has been much on my mind in the last years. Some influential theoretical physicists – like Brian Greene, Lenny Susskind, Sean Carroll, and Max Tegmark – argue that the appearance of multiverses in various contemporary theories signals that we have entered a new era of science. This idea however has been met with fierce opposition by others – like George Ellis, Joe Silk, Paul Steinhardt, and Paul Davies – who criticize the lack of testability.

If the multiverse idea is right, and we live in one of many – maybe infinitely many – different universes, then some of our fundamental questions about nature might never be answered with certainty. We might merely be able to make statements about how likely we are to inhabit a universe with some particular laws of nature. Or maybe we cannot even calculate this probability, but just have to accept that some things are as they are, with no possibility to find deeper answers.

What bugs the multiverse opponents most about this explanation – or rather lack of explanation – is that succumbing to the multiverse paradigm feels like admitting defeat in our quest for understanding nature. They seem to be afraid that merely considering the multiverse an option discourages further inquiries, inquiries that might lead to better answers.

I think the multiverse isn’t remotely as radical an idea as it has been portrayed, and that some aspects of it might turn out to be useful. But before I go on, let me first clarify what we are talking about.

What is the multiverse?

The multiverse is a collection of universes, one of which is ours. The other universes might be very different from the one we find ourselves in. There are various types of multiverses that theoretical physicists believe are logical consequences of their theories. The best known ones are:
  • The string theory landscape
    String theory doesn’t uniquely predict which particles, fields, and parameters a universe contains. If one believes that string theory is the final theory, and there is nothing more to say than that, then we have no way to explain why we observe one particular universe. To make the final theory claim consistent with the lack of predictability, one therefore has to accept that any possible universe has the same right to existence as ours. Consequently, we live in a multiverse.

  • Eternal inflation
    In some currently very popular models for the early universe, our universe is just a small patch of a larger space. As a result of a quantum fluctuation, the initially rapid expansion – known as “inflation” – slows down in the region around us and galaxies can form. But outside our universe inflation continues, and randomly occurring quantum fluctuations go on to spawn off other universes – eternally. If one believes that this theory is correct and that we understand how the quantum vacuum couples to gravity, then, so the argument goes, the other universes are as real as ours.

  • Many worlds interpretation
    In the Copenhagen interpretation of quantum mechanics the act of measurement is ad hoc. It is simply postulated that measurement “collapses” the wave-function from a state with quantum properties (such as being in two places at once) to a distinct state (at only one place). This postulate agrees with all observations, but it is regarded as unappealing by many (including myself). One way to avoid this postulate is to instead posit that the wave-function never collapses. Instead, it ‘branches’ into different universes, one for each possible measurement outcome – a whole multiverse of measurement outcomes.

  • The Mathematical Universe
    The Mathematical Universe is Max Tegmark’s brainchild, in which he takes the final theory claim to its extreme. Any theory that describes only our universe requires the selection of some mathematics among all possible mathematics. But if a theory is a final theory, there is no way to justify any particular selection, because any selection would require another theory to explain it. And so, the only final theory there can be is one in which all mathematics exists somewhere in the multiverse.

This list might give the impression that the multiverse is a new finding, but that isn’t so. Only the interpretation is new. Since every theory requires observational input to fix parameters or pick axioms, any theory without sufficient observational input becomes ambiguous – it gives rise to a multiverse.

Take Newtonian gravity: Is there a universe for each value of Newton’s constant? Or General Relativity: Do all solutions to the field equations exist? Loop Quantum Gravity, too, has an infinite number of solutions with different parameters, just like string theory. It’s just that Loop Quantum Gravity never tried to be a theory of everything, so nobody worries about this.

What is new about the multiverse idea is that some physicists are no longer content with having a theory that describes observation. They now have additional requirements for a good theory: for example, that it have no ad hoc prescriptions like collapsing wave-functions; that it contain no small, no large, or indeed no unexplained numbers at all; or that its initial conditions be likely according to some currently accepted probability distribution.

Is the multiverse science?

Science is what describes our observations of nature. But that is the goal, not necessarily an achievement of each step along the way. And so, taking multiverses seriously, rather than treating them as the mathematical artifact that I think they are, might eventually lead to new insights. The real controversy about multiverses is how likely it is that new insights will eventually emerge from this approach.

Maybe the best example of how multiverses might become scientific is eternal inflation. It has been argued that the different universes might not be entirely disconnected but can collide, thereby leaving observable signatures in the cosmic microwave background. Another example of testability comes from Mersini-Houghton and Holman, who have looked into potentially observable consequences of entanglement between different universes. And in a rather mind-bending recent work, Garriga, Vilenkin, and Zhang have argued that the multiverse might give rise to a distribution of small black holes in our universe, with consequences that could become observable in the future.

As to probability distributions on the string theory landscape, I don’t see any conceptual problem with that. If someone could, based on a few assumptions, come up with a probability measure according to which the universe we observe is the most likely one, that would for me be a valid computation of the standard model parameters. The problem is of course to come up with such a measure.

Similar things could be said about all other multiverses. They don’t presently seem very useful to describe nature. But pursuing the idea might eventually give rise to observable consequences and further insights.

We have known since the dawn of quantum mechanics that it’s wrong to require all mathematical structures of a theory to directly correspond to observables – wave-functions are the best counter-example. How willing physicists are to accept non-observable ingredients of a theory depends on their trust in the theory and on their hope that it might give rise to deeper insights. But there isn’t a priori anything unscientific about a theory that contains elements that are unobservable.

So is the multiverse science? It is an extreme speculation, and opinions differ widely on how promising a route to deeper understanding it is. But speculation is a normal part of theory development, and the multiverse is scientific as long as physicists strive to eventually derive observable consequences.

Is the multiverse real?

The multiverse has some brain-bursting consequences. For example, that everything that can happen does happen, and it happens an infinite number of times. There are thus infinitely many copies of you, somewhere out there, doing their own thing, or doing exactly the same as you. What does that mean? I have no clue. But it makes for an interesting dinner conversation through the second bottle of wine.

Is it real? I think it’s a mistake to think of “being real” as a binary variable, a property that an object either has or has not. Reality has many different layers, and how real we perceive something depends on how immediate our inference of the object from sensory input is.

A dog peeing on your leg has a very simple and direct relation to your sensory input that does not require much decoding. You would almost certainly consider it real. By contrast, evidence for the quark model contained in a large array of data on a screen is a very indirect sensory input that requires a great deal of decoding. How real you consider quarks thus depends on your knowledge of, and trust in, the theory and the data. Or trust in the scientists dealing with the theory and the data, as it were. For most physicists the theory underlying the quark model has proved reliable and accurate to such high precision that they consider quarks as real as the peeing dog.

But the longer the chain of inference, and the less trust you have in the theories used for inference, the less real objects become. In this layered reality the multiverse is currently at the outer fringes. It’s as unreal as something can be without being plain fantasy. For some practitioners who greatly trust their theories, the multiverse might appear almost as real as the universe we observe. But for most of us these theories are wild speculations and consequently we have little trust in this inference.

So is the multiverse real? It is “less real” than everything else physicists have deduced from their theories – so far.

Saturday, May 30, 2015

String theory advances philosophy. No, really.

I have a soft side, and I don’t mean my Snoopy pants, though there is that. I mean I have a liking for philosophy because there are so many questions that physics can’t answer. I never get far with my philosophical ambitions though because the only reason I can see for leaving a question to philosophers is that the question itself is the problem. Take for example the question “What is real?” What does that really mean?

Most scientists are realists and believe that the world exists independent of them. On the very opposite end there is solipsism, the belief that one can only be sure that one’s own mind exists. And then there’s a large spectrum of isms in the middle. Philosophers have debated the nature of reality for thousands of years, and you might rightfully conclude that it just isn’t possible to make headway on the issue. But you’d be wrong! As I learned at a recent conference where I gave a talk about dualities in physics, string theory has indeed helped philosophers make progress in this ancient debate. However, I couldn’t make much sense of the interest my talk got until I read Richard Dawid’s book, which put things into perspective.

I’d call myself a pragmatic realist and an opportunistic solipsist, which is to say that I sometimes like to challenge people to prove to me that they’re not a figment of my imagination. So far nobody has succeeded. It’s not so much self-focus that makes me contemplate solipsism, but a deep mistrust in the reliability of human perception and memory, especially my own, because who knows if you exist at all. Solipsism never was very popular, which might be because it makes you personally responsible for all that is wrong with the world. It is also possibly the most unproductive mindset you can have if you want to get research done, but I find it quite useful for dealing with the more bizarre comments that I get.

My biggest problem with the question of what is real, though, isn’t that I evidently sometimes talk to myself, but that I don’t know what “real” even means – which is also why most discussions about the reality of time or the multiverse seem void of content to me. The only way I have ever managed to make sense of reality is in layers of equivalence classes, so let me introduce you to my personal reality.

Equivalence classes are what mathematicians use to collect things with similar properties. They’re basically a weaker form of equality, often denoted with a tilde, ~. For example, all natural numbers that divide evenly by seven are in the same equivalence class: while 7 ≠ 21, we have 7 ~ 21. They’re not the same numbers, but they share a common property. The good thing about using equivalence classes is that, once they are defined, one can derive relations for them. They play an essential role in topology – but I digress, so back to reality.
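
As a concrete illustration (a minimal Python sketch of my own, using the divisible-by-seven relation just mentioned): 7 and 21 are unequal as numbers, yet equivalent, and the relation partitions the naturals into classes.

```python
def equivalent(a, b):
    # a ~ b iff a and b leave the same remainder under division by 7;
    # the numbers that divide evenly by seven all share remainder 0.
    return a % 7 == b % 7

print(7 == 21)            # False: not the same numbers...
print(equivalent(7, 21))  # True: ...but in the same equivalence class

# The relation sorts the naturals into seven classes:
classes = {}
for n in range(20):
    classes.setdefault(n % 7, []).append(n)
print(classes[0])  # [0, 7, 14]
```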

Equivalence classes help because, while I can’t make sense of the question what is real, the question what is “as real as” makes sense. The number seven isn’t “as real as” my shoe, and the reason I’m saying this is the physical interaction I can have with my shoe but not with seven. That’s why – you won’t be surprised to hear – I want to argue here that the best way to think about reality is to think about physics first.

As I laid out in an earlier post, in physics we talk about direct and indirect measurements, but the line that separates these is fuzzy. Roughly speaking, the more effort is necessary to infer the properties of the object measured, the more indirect the measurement. A particle that hits a detector is often said to be directly measured. A particle whose existence has to be inferred from decay products that hit the detector is said to be indirectly measured. But of course there are many other layers of inference in the measurement. To begin with there are assumptions about the interactions within the detector that eventually produce a number on a screen, then there are photons that travel to your retina, and finally the brain activity resulting from these photons.

The reason we don’t normally mention all these many assumptions is that we assign them an extremely high confidence level. Reality then, in my perspective, has confidence levels like our measurements do, from very direct to very indirect. The most direct measurement, the first layer of reality, is what originates in your own brain. The second layer is direct sensory input: a photon, the fabric touching your skin, the pressure fluctuations in the air perceived as sound. The next layer is the origin of these signals – say, the screen emitting the photon. The layer after that is whatever processor gave rise to that photon, and so on. Depending on how solipsistic you feel, you can imagine these layers extending outward or inward.
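
One toy way to picture this layering (my own illustration; the numbers are made up, not measured): assign each inference step its own confidence and let the overall confidence be their product, so that stacking layers can only shrink it.

```python
def chained_confidence(layers):
    # Each inference layer passes on only a fraction of our certainty,
    # so adding layers can only lower the overall confidence.
    result = 1.0
    for c in layers:
        result *= c
    return result

# Purely illustrative numbers:
direct = chained_confidence([0.99])                     # photon hits retina
indirect = chained_confidence([0.99, 0.95, 0.9, 0.7])  # detector, screen, theory, model
print(direct > indirect)  # True: the longer the chain, the less "real"
```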

The more layers there are, the harder it becomes to reconstruct the origin of a signal and the less real the origin appears. A person appears much more real if they are stepping on your feet, rather than sending an image of a shoe. Also, as optical illusions tell us, the signal reconstruction can be quite difficult which twists our perception of reality. And let us not even start with special relativistic image distortions that require quite some processing to get right.

Our assessment of how direct or indirect a measurement is, and of how real the object measured appears, is not fixed and may change over time with technological advances. For example, it was historically much debated whether atoms can be considered real if they cannot be seen by eye. But modern electron microscopes can now produce images of single atoms – a much more direct measurement than inferring the existence of atoms from chemical reactions. As the saying goes, “seeing is believing.” Seeing photos from the surface of Mars has likewise moved Mars into another equivalence class of reality, one much closer to our sensory input. Doesn’t Mars seem so much more real now?

[Surface of Mars. Image Source: Wikipedia]


Quarks have posed a particular problem for the question of reality since they cannot be directly measured due to confinement. In fact, many people in the early days of the quark model, Gell-Mann himself included, didn’t believe quarks were real but thought of them as calculational devices. I don’t really see the difference. We infer their properties through various layers of reasoning. Quarks are not in a reality class anywhere close to direct sensory input, but they have certainly become more real to us as our confidence in the theory necessary to extract information from the data has increased. These theories are now so well established that quarks are considered as real as other particles that are easier to measure, fapp – for all practical physicists.

It’s at about the advent of quantum field theory that the case for scientific realism starts getting complicated. Philosophers separate into two major camps, ontological realism and structural realism. The former holds that the objects of our theories are somehow real, the latter that it’s the structure of the theory instead. Effective field theories basically tell you that ontological realism only makes sense in layers, because you may have different objects depending on the scale of resolution. But even then – with seas of virtual particles, different bases in the Hilbert space, and different pictures of time-evolution – the objects that should be at the core of ontological realism seem ill-defined. And that’s not even taking into account that the notion of a particle also depends on the observer.

From what I can extract from Dawid’s book, it hasn’t been looking good for ontological realism for a while, but it’s an ongoing debate, and it’s here that string theory became relevant.

Some dualities between different theories have been known for a long time. A duality can relate theories that have different field content and different symmetries. That by itself is a death knell for anything ontological, for if you have two different fields by which you can describe the same physics, what is the rationale for calling one more real than the other? Dawid writes:
“dualities… are thoroughly incompatible with ontological scientific realism.”
String theory now not only has popularized the existence of dualities and forced philosophers to deal with that, it has also served to demonstrate that theories can be dual to each other that are structurally very different, such as a string theory in one space and a gauge-theory in a space of lower dimension. So one is now similarly at a loss to decide which structure is more real than the other.

To address this, Dawid suggests thinking instead of “consistent structure realism,” by which he seems to mean that we need to take the full “consistent structure” (i.e., string theory) and interpret it as the fundamentally “real” thing.

As far as I am concerned, both sides of a duality are equally real, or equally unreal, depending on how convincing you think the inference of either theory from existing data is. They’re both in the same equivalence class; in fact, the duality itself provides the equivalence relation. So suppose you have convincing evidence that some string-theory-derived duality is a good description of nature: does that mean the whole multiverse is equally real? No, because the rest of the multiverse only follows through an even longer chain of reasoning. You must either come up with a mechanism that produces the other universes (as in eternal inflation or the many worlds interpretation) and then find support for that, or the multiverse moves to the same class of reality as the number seven – somewhere behind Snoopy and the Yeti.

So the property of being real is not binary; rather, it is infinitely layered. It is also relative and changes over time, for the effort you must make to reconstruct a concept or an image isn’t the same that I might have to make. Quarks become more real the better we understand quantum chromodynamics, in the same way that you are more real to yourself than you are to me.

I still don’t know if strings as the fundamental building blocks of elementary particles can ever reach a reality level comparable to quarks, or if there is any conceivable measurement at all, no matter how indirect. Though one could rightfully argue that in some people’s mind strings already exist beyond any doubt. And if you’re a brain in a jar, that’s all that matters, really.