Tuesday, August 21, 2018

Roger Penrose still looks for evidence of universal rebirth

Roger Penrose really should have won a Nobel Prize for something. Though I’m not sure for what. Maybe Penrose tilings. Or Penrose diagrams. Or the Penrose process. Or the Penrose-Hawking singularity theorems. Or maybe just because there are so many things named after Penrose.

And at 87 he’s still at it. Penrose has a reputation for saying rude things about string theory, has his own interpretation of quantum mechanics, and doesn’t like inflation, the idea that the early universe underwent a rapid phase of exponential expansion. Instead, he has his own theory called “conformal cyclic cosmology” (CCC).

According to Penrose’s conformal cyclic cosmology, the universe goes through an infinite series of “aeons,” each of which starts with a phase resembling a big bang, then forms galactic structures as usual, then cools down as stars die. In the end the only things that are left are evaporating black holes and thinly dispersed radiation. Penrose then conjectures a slight change to particle physics that allows him to attach the end of one aeon to the beginning of another, so that everything starts anew with the next bang.

This match between one aeon’s end and another’s beginning necessitates the introduction of a new field – the “erebon” – that makes up dark matter and decays throughout the coming aeon. We previously met the erebons because Penrose argued their decay should create noise in gravitational wave interferometers. (Not sure what happened to that.)

If Penrose’s CCC hypothesis is correct, we should also be able to see some left-over information from the previous aeon in the cosmic microwave background around us. To that end, Penrose has previously looked for low-variance rings in the CMB, which he argued should be caused by collisions between supermassive black holes in the aeon prior to ours. That search, however, turned out to be inconclusive. In a recent paper with Daniel An and Krzysztof Meissner he has now suggested looking for a different signal instead.

The new signal that Penrose et al are looking for consists of points in the CMB at the places where supermassive black holes evaporated in the previous aeon. He and his collaborators called these “Hawking Points” in memory of the late Stephen Hawking. The idea is that when you glue together the end of the previous aeon with the beginning of ours, you squeeze together the radiation emitted by those black holes, and that makes a blurry point at which the CMB temperature is slightly increased.

Penrose estimates that the total number of such Hawking Points in the cosmic microwave background should be about a million. The analysis in the paper, covering about 1/3 of the sky, finds tentative evidence for about 20. What happened to the rest remains somewhat unclear; presumably they are too weak to be observed.

They look for these features by generating fake “normal” CMBs, following standard procedure, and then trying to find Hawking Points in these simulations. They have now done about 5000 such simulations, but none of them, they claim, has features similar to the actually observed CMB. This makes their detection highly statistically significant, with a probability of less than 1/5000 that the Hawking Points they find in the CMB are due to random chance.
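For readers who wonder what such a simulation-based significance means in practice, here is a minimal sketch of the general logic. This is a toy version with a made-up test statistic and made-up numbers, not the pipeline of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def hotspot_score(sky):
    # Toy test statistic: the largest local temperature excess in a map.
    return sky.max()

# One "observed" map with an injected hotspot, plus 5000 maps from the null model.
# The real analysis uses full simulated CMB skies; here it's just Gaussian noise.
observed = rng.normal(size=10_000)
observed[1234] += 5.0  # fake "Hawking point" for illustration

null_scores = np.array([hotspot_score(rng.normal(size=10_000)) for _ in range(5000)])

# p-value: how often the null simulations look at least as extreme as the data.
p = (1 + np.sum(null_scores >= hotspot_score(observed))) / (1 + len(null_scores))
print(f"p <= {p:.5f}")  # if no simulation beats the data, this is 1/5001
```

If none of the 5000 simulated skies reproduces the feature, all one can honestly quote is an upper bound of roughly 1/5000 on the p-value, which is what the “less than 1/5000” statement amounts to.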

In the paper, the authors also address an issue that I am guessing was raised by someone else somewhere, which is that in CCC there shouldn’t be a CMB polarization signal like the one BICEP was looking for. This signal still hasn’t been confirmed, but Penrose et al pre-emptively claim that in CCC there should also be a polarization, and that it should go along with the Hawking Points, because:
“primordial magnetic fields might arise in CCC as coming [...] from galactic clusters in the previous aeon […] and such primordial magnetic fields could certainly produce B-modes […] On the basis that such a galactic cluster ought to have contained a supermassive black hole which could well have swallowed several others, we might expect concentric rings centred on that location”
Quite a collection of mights and coulds and oughts.

Like Penrose, I am not a big fan of inflation, but I don’t find conformal cyclic cosmology well-motivated either. Penrose simply postulates that the known particles have a so-far unobserved property (so the physics becomes asymptotically conformal) because he wants to get rid of all gravitational degrees of freedom. I don’t see what’s wrong with that, but I also can’t see any good reason for why that should be correct. Furthermore, I can’t figure out what happens with the initial conditions or the past hypothesis, which leaves me feeling somewhat uneasy.

But really I’m just a cranky ex-particle-physicist with an identity crisis, so I’ll leave the last words to Penrose himself:
“Of course, the theory is “crazy”, but I strongly believe (in view of observational facts that seem to be coming to light) that we have to take it seriously.”

Monday, August 20, 2018

Guest Post: Tam Hunt questions Carlo Rovelli about the Nature of Time

Tam Hunt.
[Tam Hunt, photo on the right, is a renewable energy lawyer in Santa Barbara and an “affiliate” in the Department of Brain and Cognitive Sciences at UCSB. (Scare quotes his, not mine, make of this what you wish.) He has also published some papers about philosophy and likes to interview physicists. Below is an email interview he conducted with Carlo Rovelli about the Nature of Time. Carlo Rovelli is director of the quantum gravity group at Marseille University in France and author of several popular science books.]

TH: Let me start by asking why discussions about the nature of time should matter to the layperson?

CR: There is no reason it “should” matter. People have the right to be ignorant, if they wish to be. But many people prefer not to be ignorant. Should the fact that the Earth is not flat matter for normal people? Well, the fact that Earth is a sphere does not matter during most of our daily lives, but we like to know.

TH: Are there real-world impacts with respect to the nature of time that we should be concerned with?

CR: There is already technology that has been strongly impacted by the strange nature of time: the GPS in our cars and telephones, for instance.
Carlo Rovelli.

TH: What inspired you to make physics and the examination of the nature of time a major focus of your life's work?

CR: My work on quantum gravity has brought me to study time. It turns out that in order to solve the problem of quantum gravity, namely understanding the quantum aspects of gravity, we have to reconsider the nature of space and time. But I have always been curious about the elementary structure of reality, since my adolescence. So, I have probably been fascinated by the problem of quantum gravity precisely because it required rethinking the nature of space and time.

TH: Your work and your new book continue and extend the view that the apparent passage of time is largely an illusion because there is no passage of time at the fundamental level of reality. Your new book is beautifully and clearly written -- even lyrical at times -- and you argue that the world described by modern physics is a “windswept landscape almost devoid of all trace of temporality.” (P. 11). How does this view of time pass the “common sense” test since everywhere we look in our normal waking consciousness there is nothing but a passage of time from moment to moment to moment?

CR: Thanks. No, I do not argue that the passage of time is an illusion. “Illusion” may be a misleading word. It makes it seem that there is something wrong about our common-sense views on time. There is nothing wrong with it. What is wrong is to think that this view must hold for the entire universe, or that it is valid at all scales and in all situations. It is like the flat Earth: Earth is almost perfectly flat at the scale of most of our daily life, so, there is nothing wrong in considering it flat when we build a house, say. But on larger scales the Earth just happens not to be flat. So with time: as soon as we look a bit farther than our myopic eyes allow, we see that it works differently from what we thought.

This view passes the “common sense” test in the same way in which the fact that the Earth rotates passes the “common sense” view that the Earth does not move and the Sun moves down at sunset. That is, “common sense” is often wrong. What we experience in our “normal waking consciousness” is not the elementary structure of reality: it is a complex construction that depends on the physics of the world but also on the functioning of our brain. We have difficulty in disentangling one from the other.

“Time” is an example of this confusion; we mistake for an elementary fact about physics what is really a complex construct due to our brain. It is a bit like colors: we see the world in combinations of three basic colors. If we question physics as to why the colors we experience are combinations of three basic colors, we do not find any answer. The explanation is not in physics, it is in biology: we have three kinds of receptors in our eyes, sensitive to three and only three frequency windows, out of the infinite possibilities. If we think that the three-dimensional structure of colors is a feature of reality external to us, we confuse ourselves.

There is something similar with time. Our “common sense” feeling of the passage of time is more about ourselves than the physical nature of the external world. It regards both, of course, but in a complex, stratified manner. Common sense should not be taken at face value, if we want to understand the world.

TH: But is the flat Earth example, or similar examples of perspectival truth, applicable here? It seems to me that this kind of perspectival view of truth (that the Earth seems flat at the human scale but is clearly spherical when we zoom out to a larger perspective) isn’t the case with the nature of time because no matter what scale/perspective we use to examine time there is always a progression of time from now to now to now. When we look at the astronomical scale there is always a progression of time. When we look at the microscopic scale there is always a progression of time.

CR: What indicates that our intuition of time is wrong is not microscopes or telescopes. It’s clocks. Just take two identical clocks indicating the same time and move them around. When they meet again, if they are sufficiently precise, they do not indicate the same time anymore. This demolishes a piece of our intuition of time: time does not pass at the same “rate” for all the clocks. Other aspects of our common-sense intuition of time are demolished by other physics observations.

TH: In the quote from your book I mentioned above, what are the “traces” of temporality that are still left over in the windswept landscape “almost devoid of all traces of temporality,” a “world without time,” that has been created by modern physics?

CR: Change. It is important not to confuse “time” and “change.” We tend to confuse these two important notions because in our experience we can merge them: we can order all the change we experience along a universal one-dimensional oriented line that we call “time.” But change is far more general than time. We can have “change,” namely “happenings,” without any possibility of ordering sequences of these happenings along a single time variable.

There is a mistaken idea that it is impossible to describe or to conceive change unless there exists a single flowing time variable. But this is wrong. The world is change, but it is not [fundamentally] ordered along a single timeline. Often people fall into the mistake that a world without time is a world without change: a sort of frozen eternal immobility. It is in fact the opposite: a frozen eternal immobility would be a world where nothing changes and time passes. Reality is the contrary: change is ubiquitous but if we try to order change by labeling happenings with a time variable, we find that, contrary to intuition, we can do this only locally, not globally.

TH: Isn’t there a contradiction in your language when you suggest that the common-sense notion of the passage of time, at the human level, is not actually an illusion (just a part of the larger whole), but that in actuality we live in a “world without time”? That is, if time is fundamentally an illusion isn’t it still an illusion at the human scale?

CR: What I say is not “we live in a world without time.” What I say is “we live in a world without time at the fundamental level.” There is no time in the basic laws of physics. This does not imply that there is no time in our daily life. There are no cats in the fundamental equations of the world, but there are cats in my neighborhood. Nice ones. The mistake is not using the notion of time [at our human scale]. It is to assume that this notion is universal, that it is a basic structure of reality. There are no micro-cats at the Planck scale, and there is no time at the Planck scale.

TH: You argue that time emerges: “Somehow, our time must emerge around us, at least for us and at our scale.” As such, how do you reconcile the notion of emergence of time itself with the fact that the definition of emergence necessarily includes change over time? That is, how is it coherent to argue that time itself emerges over time?

CR: The notion of “emergence” does not always include change over time. For instance we say that if you look at how humans are distributed on the surface of the Earth, there are some general patterns that “emerge” by looking at a very large scale. You do not see them at the small scale, you see them looking at the large scale. Here “emergence” is related to the scale at which something is described. Many concepts we use in science emerge at some scale. They have no meaning at smaller scales.

TH: But this kind of scale emergence is a function solely of an outside conscious observer, in time, making an observation (in time) after contemplating new data. So aren’t we still confronted with the problem of explaining how time emerges in time?

CR: There is no external observer in the universe, but there are internal observers that interact with one another. In the course of this interaction, the temporal structure that they ascribe to the rest may differ. I think that you are constantly misunderstanding the argument of my book, because you are not paying attention to the main point: the book does not deny the reality of change: it simply confronts the fact that the full complexity of the time of our experience does not extend to the entire reality. Read the book again!

TH: I agree that common sense can be a faulty guide to the nature of reality but isn’t there also a risk of unmooring ourselves from empiricism when we allow largely mathematical arguments to dictate our views on the nature of reality?

CR: It is not “largely mathematical arguments” that tell us that our common-sense idea of time is wrong. It is simple brute facts. Just separate two accurate clocks and bring them back together, and this shows that our intuition about time is wrong. When the GPS system was first installed, some people doubted the “delicate mathematical arguments” indicating that time on the GPS satellites runs faster than at sea level: the result was that the GPS did not work [when it was first set up]. A brute fact. We have direct factual evidence against the common-sense notion of time.

Empiricism does not mean taking what we see with the naked eye as the ultimate reality. If it were so, we would not believe that there are atoms or galaxies, or the planet Uranus. Empiricism is to take seriously the delicate experience we gather with accurate instruments. The idea that we risk unmooring “ourselves from empiricism when we allow largely mathematical arguments to dictate our views on the nature of reality” is the same argument that was used against Galileo when he observed with the telescope, or used by Mach to argue against the real existence of atoms. Empiricism is to base our knowledge of reality on experience, and experience includes looking into a telescope, looking into an electron microscope, where we actually can see the atoms, and reading accurate clocks. That is, using instruments.
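[A quick back-of-the-envelope check of the GPS numbers Rovelli mentions, using standard textbook values and ignoring orbital eccentricity and the Earth’s rotation, reproduces the familiar result that satellite clocks run ahead by roughly 38 microseconds per day:]

```python
# Rough estimate of the relativistic rate difference for a GPS satellite clock
# relative to a clock on the ground (textbook values, circular orbit assumed).
G, M, c = 6.674e-11, 5.972e24, 2.998e8      # SI units
r_ground, r_orbit = 6.371e6, 2.657e7        # meters; GPS orbital radius ~26,600 km

v_orbit = (G * M / r_orbit) ** 0.5          # orbital speed, roughly 3.9 km/s

gravitational = G * M / c**2 * (1 / r_ground - 1 / r_orbit)  # higher clock runs faster
kinematic = v_orbit**2 / (2 * c**2)                          # moving clock runs slower

print(f"{(gravitational - kinematic) * 86400 * 1e6:.0f} microseconds per day")
# prints roughly 38 -- the correction GPS has to apply to work at all
```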

TH: I’m using “empiricism” a little differently than you are here; I’m using the term to refer to all methods of data gathering, whether directly with our senses or indirectly with instruments (but still mediated through our senses, because ultimately all data comes through our human senses). So what I’m getting at is that human direct experience, and the constant passage of time in our experience, is as much data as are the data from experiments like the 1971 Hafele-Keating experiment, which used clocks traveling in opposite directions on airplanes circling the globe. And we cannot discount either category of experience. Does this clarification of “empiricism” change your response at all?

CR: We do not discount any category of experience. There is no contradiction between the complex structure of time and our simple human experience of it. The contradiction appears only if we extrapolate our experience and assume it captures a universal aspect of reality. In our daily experience, the Earth is flat and we take it to be flat when we build a house or plan a city; there is no contradiction between this and the round Earth. The contradiction comes if we extrapolate our common-sense view of the flat Earth beyond the small region where it works well. So, we are not discounting our daily experience of time, we are just understanding that it is an approximation to a more complicated reality.

TH: There have been, since Lorentz developed his version of relativity, which Einstein adapted into his Special Theory of Relativity in 1905, interpretations of relativity that don’t render time an illusion. Isn’t the Lorentz interpretation still valid since it’s empirically equivalent to Special Relativity?

CR: I think you refer here to the so-called neo-Lorentzian interpretations of Special Relativity. There is a similar case in the history of science: after Copernicus developed his system in which all planets turn around the Sun and the Earth moves, there were objections similar to those you mention: “the delicate mathematical arguments” of Copernicus cannot weigh as much as our direct experience that the Earth does not move.

So, Tycho Brahe developed his own system, where the Earth is at the center of the universe and does not move, the Sun goes around the Earth, and all the other planets rotate around the Sun. Nice, but totally useless for science and for understanding the world: a contorted and useless attempt to save the common-sense view of a motionless Earth, in the face of overwhelming opposite evidence.

If Tycho had his way, science would not have developed. The neo-Lorentzian interpretations of Special Relativity do the same. They hang on to the wrong extrapolation of a piece of common sense.

There is an even better example: the Moon and the Sun in the sky are clearly small. When astronomers in antiquity, like Aristarchus, came up with estimates of the size of the Moon and the Sun, it was a surprise, because it turned out that the Moon is big and the Sun even bigger than the Earth itself. This was definitely the result of “largely mathematical arguments.” Indeed it was a delicate calculation using geometry, based on the angles under which we see these objects. Would you say that the fact that the Sun is larger than the Earth should not be believed because it is based on a “largely mathematical argument” and contradicts our direct experience?

TH: But in terms of alternative interpretations of the Lorentz transformations, shouldn’t we view these alternatives, if they’re empirically equivalent as they are, in the same light as the various different interpretations of quantum theory (Copenhagen, Many Worlds, Bohmian, etc.)? All physics theories have two elements: 1) the mathematical formalisms; 2) an interpretive structure that maps those formalisms onto the real world. In the case of alternatives to Special Relativity, some have argued that we don’t need to adopt the Einstein interpretation of the formalisms (the Lorentz transformations) in order to use those formalisms. And since Lorentz’s version of relativity and Einstein’s Special Relativity are thought to be empirically equivalent, doesn’t a choice between these interpretations come down to a question of aesthetics and other considerations like explanatory power?

CR: It is not just a question of aesthetics, because science is not static, it is dynamic. Science is not just models. It is a true continuous process of better understanding reality. A better version of a theory is fertile: it takes us ahead; a bad version takes us nowhere. The Lorentzian interpretation of special relativity assumes the existence of entities that are unobservable and undetectable (a preferred frame). It is contorted, implausible, and in fact it has been very sterile.

On the other hand, realizing that the geometrical structure of spacetime is altered has led to general relativity, to the prediction of black holes, gravitational waves, the expansion of the universe. Science is not just mathematical models and numerical predictions: it is developing increasingly effective conceptual tools for making sense and better understanding the world. When Copernicus, Galileo and Newton realized that the Earth is a celestial body like the ones we see in the sky, they did not just give us a better mathematical model for more accurate predictions: they understood that man can walk on the moon. And man did.

TH: But doesn’t the “inertial frame” that is the core of Einstein’s Special Relativity (instead of Lorentz’s preferred frame) constitute worse “sins”? As Einstein himself states in his 1938 book The Evolution of Physics, inertial frames don’t actually exist because there are always interfering forces; moreover, inertial frames are defined tautologically (p. 221). Einstein’s solution, once he accepted these issues, was to create the general theory of relativity and avoid focusing on fictional inertial frames. We also have the cosmic frame formed by the Cosmic Microwave Background that is a very good candidate for a universal preferred frame now, which wasn’t known in Einstein’s time. When we add the numerous difficulties that the Einstein view of time results in (stemming from special not general relativity), the problems in explaining the human experience of time, etc., might it be the case that the sins of Lorentzian relativity are outweighed by Special Relativity’s sins?

CR: I do not know what you are talking about. Special Relativity works perfectly well, is very heavily empirically supported, there are no contradictions with it in its domain of validity, and has no internal inconsistency whatsoever. If you cannot digest it, you should simply study more physics.

TH: You argue that “the temporal structure of the world is not that of presentism,” (p. 145) but isn’t there still substantial space in the scientific and philosophical debate for “presentism,” given different possible interpretations of the relevant data?

CR: There is a tiny minority of thinkers who try to hold on to presentism, in the contemporary debate about time. I myself think that presentism is de facto dead.

TH: I’m surprised you state this degree of certainty here when in your book you acknowledge that the nature of time is one of physics’ last remaining large questions. Andrew Jaffe, in a review of your book for Nature, writes that the issues you discuss “are very much alive in modern physics.”

CR: The debate on the nature of time is very much alive, but it is not a single debate about a single issue, it is a constellation of different issues, and presentism is just a rather small side of it. Examples are the question of the source of the low initial entropy, the source of our sense of flow, the relation between causality and entropy. The non-viability of presentism is accepted by almost all relativists.

TH: Physicist Lee Smolin (another loop quantum gravity theorist, as you know) has argued for views quite different from yours, in his book Time Reborn, for example. In an interview with Smolin I did in 2013, he stated that “the experience we have of time flowing from moment into moment is not an illusion but one of the deepest clues we have as to the nature of reality.” Is Smolin part of the tiny minority you refer to?

CR: Yes, he is. Lee Smolin is a dear friend of mine. We have collaborated repeatedly in the past. He is a very creative scientist and I have much respect for his ideas. But we disagree on this. And he is definitely in the minority on this issue.

TH: I’ve also been influenced by Nobel Prize winner Ilya Prigogine’s work and particularly his 1997 book, The End of Certainty: Time, Chaos and the New Laws of Nature, which opposes the eternalist view of time as well as reversibility in physics. Prigogine states in his book that reversible physics and the notion of time as an illusion are “impossible for me to accept.” He argues that whereas many theories of modern physics are reversible in the time variable t, this is an empirical mistake because in reality the vast majority of physical processes are irreversible. How do you respond to Prigogine and his colleagues’ arguments that physics theories should be modified to include irreversibility?

CR: That he is wrong, if this is what he writes. There is no contradiction between the reversibility of the laws that we have and the irreversibility of the phenomena. All phenomena we see follow the laws we have, as far as we can see. The surprise is that these laws allow also other phenomena that we do not see. So, something may be missing in our understanding --and I discuss this at length in my book-- but something missing does not mean something wrong.

I do not share the common “block universe” eternalist view of time either. What I argue in the book is that the presentist versus eternalist alternative is a fake alternative. The universe is neither evolving in a single time, nor static without change. Temporality is just more complex than either of these naïve alternatives.

TH: You argue that “the world is made of events, not things” in part II of your book. Alfred North Whitehead also made events a fundamental feature of his ontology, and I’m partial to his “process philosophy.” If events—happenings in time—are the fundamental “atoms” of spacetime (as Whitehead argues), shouldn’t this accentuate the importance of the passage of time in our ontology, rather than downgrade it as you seem to otherwise suggest?

CR: “Time” is a stratified notion. The existence of change, by itself, does not imply that there is a unique global time in the universe. Happenings reveal change, and change is ubiquitous, but nothing states that this change should be organized along the single universal uniform flow that we commonly call time. The question of the nature of time cannot be reduced to a simple “time is real”, “time is not real.” It is the effort of understanding the many different layers giving rise to the complex phenomenon that we call the passage of time.

Monday, August 13, 2018

Book Review: “Through Two Doors at Once” by Anil Ananthaswamy

Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality Hardcover 
By Anil Ananthaswamy
Dutton (August 7, 2018)

The first time I saw the double-slit experiment, I thought it was a trick, an elaborate construction with mirrors, cooked up by malicious physics teachers. But no, it was not, as I was soon to learn. A laser beam pointed at a plate with two parallel slits will make 5 or 7 or any odd number of dots aligned on the screen, their intensity fading the farther away they are from the middle. Light is a wave, this experiment shows, it can interfere with itself.

But light is also a particle, and indeed the double-slit experiment can be, and has been, done with single photons. Perplexingly, these photons will create the same interference pattern; it gradually builds up from single dots. Strange as it sounds, the particles seem to interfere with themselves. The most common way to explain the pattern is that a single particle can go through two slits at once, a finding so unintuitive that physicists still debate just what the results tell us about reality.
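If you want to see how the fringes can build up from individual dots, a toy simulation does the trick. This is just an illustration with invented numbers, not a model of any experiment described in the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double slit: slit separation d, slit width a, wavelength lam, screen distance L.
d, a, lam, L = 50e-6, 10e-6, 500e-9, 1.0

def intensity(x):
    # Standard two-slit pattern: cos^2 fringes under a single-slit envelope (max = 1).
    beta = np.pi * d * x / (lam * L)
    alpha = np.pi * a * x / (lam * L)
    return np.cos(beta) ** 2 * np.sinc(alpha / np.pi) ** 2

# Register photons one at a time by rejection sampling from the intensity.
hits = []
while len(hits) < 5000:
    x = rng.uniform(-0.05, 0.05)        # position on the screen, in meters
    if rng.uniform() < intensity(x):
        hits.append(x)

counts, _ = np.histogram(hits, bins=60)
print(counts)  # the fringe structure only becomes visible after many individual dots
```

Each accepted sample is one detected photon; only the histogram of many such single hits shows the interference pattern.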

The double-slit experiment is without doubt one of the most fascinating physics experiments ever. In his new book “Through Two Doors at Once,” Anil Ananthaswamy lays out both the history and the legacy of the experiment.

I previously read Anil’s 2013 book “The Edge of Physics,” which got him a top rank on my list of favorite science writers. I like Anil’s writing because he doesn’t waste your time. He says what he has to say, doesn’t make excuses when it gets technical, and doesn’t wrap the science into layers of flowery cushions. He also has good taste in deciding what the reader should know.

A book about an experiment and its variants might sound like a laundry list of technical details of increasing sophistication, but Anil has picked only the best of the best. Besides the first double-slit experiment, and the first experiment with single particles, there’s also the delayed choice, the quantum eraser, weak measurement, and interference of large molecules (“Schrödinger’s cat”). The reader of course also learns how to detect a live bomb without detonating it, what Anton Zeilinger did on the Canary Islands, and what Yves Couder’s oil droplets may or may not have to do with any of that.

Along with the experiments, Anil explains the major interpretations of quantum mechanics, Copenhagen, Pilot-Wave, Many Worlds, and QBism, and what various people have to say about this. He also mentions spontaneous collapse models, and Penrose’s gravitationally induced collapse in particular.

The book contains a few equations, and Anil expects the reader to cope with sometimes rather convoluted setups of mirrors and beam splitters and detectors, but the heavier passages are balanced with stories about the people who did the experiments or worked on the theories. The result is a very readable account of the past and current status of quantum mechanics. It’s a book with substance and I can recommend it to anyone who has an interest in the foundations of quantum mechanics.

[Disclaimer: free review copy]

Tuesday, August 07, 2018

Dear Dr B: Is it possible that there is a universe in every particle?

“Is it possible that our ‘elementary’ particles are actually large scale aggregations of a different set of something much smaller? Then, from a mathematical point of view, there could be an infinite sequence of smaller (and larger) building blocks and universes.”

                                                                      ~Peter Letts
Dear Peter,

I love the idea that there is a universe in every elementary particle! Unfortunately, it is really hard to make this hypothesis compatible with what we already know about particle physics.

Simply conjecturing that the known particles are made up of smaller particles doesn’t work well. The reason is that the masses of the constituent particles must be smaller than the mass of the composite particle, and the lighter a particle, the easier it is to produce in particle accelerators. So why then haven’t we seen these constituents already?

One way to get around this problem is to make the new particles strongly bound, so that it takes a lot of energy to break the bond even though the particles themselves are light. This is how it works for the strong nuclear force which holds quarks together inside protons. The quarks are light but still difficult to produce because you need a high energy to tear them apart from each other.

There isn’t presently any evidence that any of the known elementary particles are made up of new strongly-bound smaller particles (usually referred to as preons), and many of the models which have been proposed for this have run into conflict with data. Some are still viable, but with such strongly bound particles you cannot create something remotely resembling our universe. To get structures similar to what we observe you need an interplay of both long-distance forces (like gravity) and short-distance forces (like the strong nuclear force).

The other thing you could try is to make the constituent particles really weakly interacting with the particles we know already, so that producing them in particle colliders would be unlikely. This, however, causes several other problems, one of which is that even the very weakly interacting particles carry energy and hence have a gravitational pull. If they are produced at any substantial rates at any time in the history of the universe, we should see evidence for their presence but we don’t. Another problem is that by Heisenberg’s uncertainty principle, particles with small masses are difficult to keep inside small regions of space, like inside another elementary particle.
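To put a rough number on that last point (a generic uncertainty-principle estimate, not tied to any particular preon model): confining a constituent to a region of size Δx forces a momentum spread of at least

$$\Delta p \gtrsim \frac{\hbar}{\Delta x}\,, \qquad \Delta p\,c \gtrsim \frac{\hbar c}{\Delta x} \approx \frac{200~\mathrm{MeV\,fm}}{\Delta x}\,.$$

For Δx around 10⁻¹⁹ m, roughly the distance scale probed at the LHC, this comes out to a few TeV. A constituent much lighter than that will not quietly sit inside such a small volume; its kinetic energy would vastly exceed its rest mass.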

You can circumvent the latter problem by conjecturing that the inside of a particle actually has a large volume, kinda like Mary Poppins’ magical bag, if anyone recalls this.



Sounds crazy, I know, but you can make this work in general relativity because space can be strongly curved. Such cases are known as “baby universes”: They look small from the outside but can be huge on the inside. You then need to sprinkle a little quantum gravity magic over them for stability. You also need to add some kind of strange fluid, not unlike dark energy, to make sure that even though there are lots of massive particles inside, from the outside the mass is small.

I hope you notice that this was already a lot of hand-waving, but the problems don’t stop there. If you want every elementary particle to have a universe inside, you need to explain why we only know 25 different elementary particles. Why aren’t there billions of them? An even bigger problem is that elementary particles are quantum objects: They get constantly created and destroyed and they can be in several places at once. How would structure formation ever work in such a universe? It is also generally the case in quantum theories that the more variants there are of a particle, the more of them you produce. So why don’t we produce humongous amounts of elementary particles if they’re all different inside?

The problems that I listed do not of course rule out the idea. You can try to come up with explanations for all of this so that the model does what you want and is compatible with all observations. But what you then end up with is a complicated theory that has no evidence speaking for it, designed merely because someone likes the idea. It’s not necessarily wrong. I would even say it’s interesting to speculate about (as you can tell, I have done my share of speculation). But it’s not science.

Thanks for an interesting question!

Wednesday, August 01, 2018

Trostpreis (I’ve been singing again)

I promised my daughters I would write a song in German, so here we go:

 

“Trostpreis” means “consolation prize”. This song was inspired by the kids’ announcement that we have a new rule for pachisi: Adults always lose. I think this conveys a deep truthiness about life in general.

 After I complained the last time that the most frequent question I get about my music videos is “where do you find the time?” (answer: I don’t), I now keep getting the question “Do you sing yourself?” The answer to this one is, yes, I sing myself. Who else do you think would sing for me?

(soundcloud version here)

Monday, July 30, 2018

10 physics facts you should have learned in school but probably didn’t

[Image: Dreamstime.com]
1. Entropy doesn’t measure disorder, it measures likelihood.

Really, the idea that entropy measures disorder is not at all helpful. Suppose I make a dough: I break an egg and dump it on the flour, add sugar and butter, and mix it all until the dough is smooth. Which state is more orderly, the broken egg on flour with butter over it, or the final dough?

I’d go for the dough. But that’s the state with higher entropy. And if you opted for the egg on flour, how about oil and water? Is the entropy higher when they’re separated, or when you shake them vigorously so that they’re mixed? In this case the better-sorted configuration has the higher entropy.

Entropy counts the number of “microstates” that give the same “macrostate” (strictly speaking, it is proportional to the logarithm of that number). Microstates contain all details about a system’s individual constituents. The macrostate on the other hand is characterized only by general information, like “separated in two layers” or “smooth on average”. There are a lot of states for the dough ingredients that will turn to dough when mixed, but very few states that will separate into eggs and flour when mixed. Hence, the dough has the higher entropy. Similar story for oil and water: Easy to unmix, hard to mix, hence the unmixed state has the higher entropy.
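Here is a toy count that makes the point, with N molecules that can each sit in the left or right half of a box (a standard textbook-style illustration, with Boltzmann’s constant set to 1):

```python
from math import comb, log

N = 100  # number of molecules
# Macrostate: how many molecules are in the left half. Microstates: which ones they are.
for n_left in (100, 90, 50):
    W = comb(N, n_left)          # number of microstates compatible with this macrostate
    print(f"{n_left:3d} in the left half: W = {W:.3e}, entropy = log W = {log(W):.1f}")
```

The evenly mixed macrostate has vastly more microstates than the sorted one, hence the higher entropy, and hence it is the overwhelmingly more likely state to find the system in.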

2. Quantum mechanics is not a theory for short distances only, it’s just difficult to observe its effects on long distances.

Nothing in the theory of quantum mechanics implies that it’s good on short distances only. It just so happens that large objects we observe are composed of many smaller constituents and these constituents’ thermal motion destroys the typical quantum effects. This is a process known as decoherence and it’s the reason we don’t usually see quantum behavior in daily life.

But quantum effects have been measured in experiments spanning hundreds of kilometers, and they could span longer distances if the environment were sufficiently cold and steady. They could even stretch across entire galaxies.

3. Heavy particles do not decay to reach a state of smallest energy, but to reach a state of highest entropy.

Energy is conserved. So the idea that any system tries to minimize its energy is just nonsense. The reason that heavy particles decay, if they can, is simply that they can. If you have one heavy particle (say, a muon) it can decay into an electron, a muon-neutrino and an electron anti-neutrino. The opposite process is also possible, but it requires that the three decay products come together. It is hence unlikely to happen.

This isn’t always the case. If you put heavy particles in a hot enough soup, production and decay can reach equilibrium with a non-zero fraction of the heavy particles around.

4. Lines in Feynman diagrams do not depict how particles move, they are visual aids for difficult calculations.

Every once in a while I get an email from someone who notices that many Feynman diagrams have momenta assigned to the lines. And since everyone knows one cannot at the same time measure the position and momentum of a particle arbitrarily well, it doesn’t make sense to draw lines for the particles. It follows that all of particle physics is wrong!

But no, nothing is wrong with particle physics. There are several types of Feynman diagrams and the ones with the momenta are for momentum space. In this case the lines have nothing to do with paths the particles move on. They really don’t. They are merely a way to depict certain types of integrals.

There are some types of Feynman diagrams in which the lines do depict the possible paths that a particle could take, but also in this case the diagram itself doesn’t tell you what the particle actually does. For that, you have to do the calculation.

5. Quantum mechanics is non-local, but you cannot use it to transfer information non-locally.

Quantum mechanics gives rise to non-local correlations that are quantifiably stronger than those of non-quantum theories. This is what Einstein referred to as  “spooky action at a distance.”

Alas, quantum mechanics is also fundamentally random. So, while you have those awesome non-local correlations, you cannot use them to send messages. Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit.

6. Quantum gravity becomes relevant at high curvature, not at short distances.

If you estimate the strength of quantum gravitational effects, you find that they should become non-negligible if the curvature of space-time is comparable to the inverse of the Planck length squared. This does not mean that you would see this effect at distances close to the Planck length. I believe the confusion here comes from the term “Planck length.” The Planck length has the unit of a length, but it’s not the length of anything.

Importantly, that the curvature gets close to the inverse of the Planck length squared is an observer-independent statement. It does not depend on the velocity by which you move. The trouble with thinking that quantum gravity becomes relevant at short distances is that it’s incompatible with Special Relativity.

In Special Relativity, lengths can contract. For an observer who moves fast enough, the Earth is a pancake of a width below the Planck length. This would mean we should either have long since seen quantum gravitational effects, or Special Relativity must be wrong. Evidence speaks against both.
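Just to put a number on it (rough figures only): the boost factor at which the Earth’s diameter would Lorentz-contract to below the Planck length is

$$\gamma \approx \frac{D_\oplus}{\ell_P} \approx \frac{1.3\times 10^{7}~\mathrm{m}}{1.6\times 10^{-35}~\mathrm{m}} \approx 8\times 10^{41}\,,$$

absurdly large, but perfectly allowed in principle. That is exactly why tying quantum gravity to a distance rather than to curvature gets you into trouble with observer-independence.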

7. Atoms do not expand when the universe expands. Neither does Brooklyn.

The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected. The systems that are being torn apart are those larger than the size of galaxy clusters. The clusters themselves still hold together under their own gravitational pull. So do galaxies, solar systems, planets and of course atoms. Yes, that’s right, atomic forces are much stronger than the pull of the whole universe.

8. Wormholes are science fiction, black holes are not.

The observational evidence for black holes is solid. Astrophysicists can infer the presence of a black hole in various ways.

The easiest way may be to deduce how much mass must be combined in some volume of space to cause the observed motion of visible objects. This alone does not tell you whether the dark object that influences the visible ones has an event horizon. But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object. You can also use black holes as extreme gravitational lenses to test that they comply with the predictions of Einstein’s theory of General Relativity. This is why physicists are excitedly looking forward to the data from the Event Horizon Telescope.

Maybe most importantly, we know that black holes are a typical end-state of certain types of stellar collapse. It is hard to avoid them, not hard to get them, in general relativity.

Wormholes, on the other hand, are space-time deformations for which we don’t know of any way they could come about in natural processes. Their presence also requires negative energy, something that has never been observed, and that many physicists believe cannot exist.

9. You can fall into a black hole in finite time. It just looks like it takes forever.

Time slows down as you approach the event horizon, but this doesn’t mean that you actually stop falling before you reach the horizon. This slow-down is merely what an observer in the distance would see. You can calculate how much time it takes to fall into a black hole, as measured by a clock that the infalling observer herself carries. The result is finite. You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in.
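For the record, the standard textbook result for radial free fall from rest at radius r₀ gives a proper time to reach the center (the horizon is crossed even earlier) of

$$\tau = \frac{\pi}{2}\sqrt{\frac{r_0^{3}}{2GM}}\,,$$

which for a solar-mass black hole and r₀ ≈ 30 km comes out to about half a millisecond.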

10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it.

So I said that energy is conserved, but that is only approximately correct. It would be entirely correct for a universe in which space does not change with time. But we know that in our universe space expands, and this expansion results in a violation of energy conservation.

This violation of energy conservation, however, is so minuscule that you don’t notice it in any experiment on Earth. It takes very long times and long distances to notice. Indeed, if the effect was any larger we would have noticed much earlier that the universe expands! So don’t try to blame your electricity bill on the universe, but close the window when the AC is running.

Monday, July 23, 2018

Evidence for modified gravity is now evidence against it.

Hamster. Not to scale.
Img src: Petopedia.
It’s day 12,805 in the war between modified gravity and dark matter. That’s counting the days since the publication of Mordehai Milgrom’s 1983 paper. In this paper he proposed to alter Einstein’s theory of general relativity rather than conjecturing invisible stuff.

Dark matter, to remind you, consists of hypothetical clouds of particles that hover around galaxies. We can’t see them because they neither emit nor reflect light, but we do notice their gravitational pull because it affects the motion of the matter that we can observe. Modified gravity, on the other hand, posits that normal matter is all there is, but the laws of gravity don’t work as Einstein taught us.

Which one is right? We still don’t know, though astrophysicists have been on the case for decades.

Ruling out modified gravity is hard because it was invented to fit observed correlations, and this achievement is difficult to improve on. The idea which Milgrom came up with in 1983 was a simple model called Modified Newtonian Dynamics (MOND). It does a good job fitting the rotation curves of hundreds of observed galaxies, and in contrast to particle dark matter this model requires only one parameter as input. That parameter is an acceleration scale which determines when the gravitational pull begins to differ markedly from that predicted by Einstein’s theory of General Relativity. Based on his model, Milgrom also made some predictions which have held up so far.
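In its simplest form the MOND recipe reads (one common way of stating it; the exact interpolation between the two regimes differs between implementations):

$$a \approx a_N \ \ \text{for}\ a_N \gg a_0\,, \qquad a \approx \sqrt{a_N\, a_0}\ \ \text{for}\ a_N \ll a_0\,, \qquad a_0 \approx 1.2\times 10^{-10}~\mathrm{m/s^2}\,,$$

where $a_N$ is the usual Newtonian acceleration. In the low-acceleration regime this gives $v^4 = G M a_0$ for circular orbits, that is, asymptotically flat rotation curves with a fixed relation between velocity and baryonic mass.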

In a 2016 paper, McGaugh, Lelli, and Schombert analyzed data from a set of about 150 disk galaxies. They identified the best-fitting acceleration scale for each of them and found that the distribution is clearly peaked around a mean value:

Histogram of best-fitting acceleration scale.
Blue: Only high quality data. Via Stacy McGaugh.


McGaugh et al conclude that the data contains evidence for a universal acceleration scale, which is strong support for modified gravity.

Then, a month ago, Nature Astronomy published a paper titled “Absence of a fundamental acceleration scale in galaxies“ by Rodrigues et al (arXiv-version here). The authors claim to have ruled out modified gravity with at least 5 σ, ie with high certainty.

That’s pretty amazing given that two months ago modified gravity worked just fine for galaxies. It’s even more amazing once you notice that they ruled out modified gravity using the same data from which McGaugh et al extracted the universal acceleration scale that’s evidence for modified gravity.

Here is the key figure from the Rodrigues et al paper:

Figure 1 from Rodrigues et al


Shown on the vertical axis is their best-fit parameter for the (log of the) acceleration scale. On the horizontal axis are the individual galaxies. The authors have sorted the galaxies so that the best-fit value increases monotonically from left to right, so the increase itself is not relevant information. Relevant is that if you compare the error margins marked by the colors, the best-fit values for the galaxies on the far left of the plot are incompatible with the best-fit values for the galaxies on the far right.

So what the heck is going on?

A first observation is that the two studies don’t use the same data analysis. The main difference lies in the priors for the parameters, which are the acceleration scale of modified gravity and the stellar mass-to-light ratio. Where McGaugh et al use Gaussian priors, Rodrigues et al use flat priors over a finite bin. The prior is the assumption you make about the likely distribution of a parameter, which you then feed into your model to find the best-fit parameters. A bad prior can give you misleading results.

Example: Suppose you have an artificially intelligent infrared camera. One night it issues an alert: Something’s going on in the bushes of your garden. The AI tells you the best fit to the observation is a 300-pound hamster, the second-best fit is a pair of humans in what seems a peculiar kind of close combat. Which option do you think is more likely?

I’ll go out on a limb and guess the second. And why is that? Because you probably know that 300-pound hamsters are somewhat of a rare occurrence, whereas pairs of humans are not. In other words, you have a different prior than your camera.

Back to the galaxies. As we’ve seen, if you start with an unmotivated prior, you can end up with a “best fit” (the 300-pound hamster) that’s unlikely for reasons your software didn’t account for. At the very least, therefore, you should check that the resulting best-fit distribution of your parameters doesn’t contradict other data. The Rodrigues et al analysis indeed raises the concern that their best-fit distribution for the stellar mass-to-light ratio doesn’t match commonly observed distributions. The McGaugh paper, on the other hand, starts with a Gaussian prior, which is a reasonable expectation, and hence their analysis makes more physical sense.
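To see how the prior enters, here is a toy fit of a single parameter, once with a flat and once with a Gaussian prior. This only illustrates the general mechanism; it is not the analysis of either paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake measurements of one parameter (think: acceleration scale) with known scatter.
true_value, scatter = 1.2, 0.3
data = true_value + scatter * rng.normal(size=30)

grid = np.linspace(0.0, 5.0, 2001)
step = grid[1] - grid[0]

log_like = -0.5 * np.sum((data[:, None] - grid[None, :])**2, axis=0) / scatter**2

def posterior_mean(log_prior):
    logp = log_like + log_prior
    p = np.exp(logp - logp.max())
    p /= p.sum() * step                      # normalize the posterior on the grid
    return np.sum(grid * p) * step

flat = np.zeros_like(grid)                   # flat prior over the grid
gauss = -0.5 * (grid - 1.0)**2 / 0.5**2      # Gaussian prior centered somewhere else

print("flat prior:    ", round(posterior_mean(flat), 3))
print("gaussian prior:", round(posterior_mean(gauss), 3))
```

With data this informative the two posterior means barely differ, which anticipates the next point: the priors turn out not to matter much here.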

Having said this, though, it turns out the priors don’t make much of a difference for the results. Indeed, as far as the numbers are concerned, the results in both papers are pretty much the same. What differs is the conclusion the authors draw from them.

Let me tell you a story to illustrate what’s going on. Suppose you are Isaac Newton and an apple just banged on your head. “Eureka,” you shout, and postulate that the gravitational potential fulfils the Poisson equation.* Smart as you are, you assume that the Earth is approximately a homogeneous sphere, solve the equation, and find an inverse-square law. It contains one free parameter which you modestly call “Newton’s constant.”

You then travel around the globe, note down your altitude and measure the acceleration of a falling test-body. Back home you plot the results and extract Newton’s constant (times the mass of the Earth) from the measurements. You find that the measured values cluster around a mean. You declare that you have found evidence for a universal law of gravity.

Or have you?

A week later your good old friend Bob knocks on the door. He points out that if you look at the measurement errors (which you have of course recorded), then some of the measurement results are incompatible with each other at five sigma certainty. There, Bob declares, I have ruled out your law of gravity.

Same data, different conclusion. How does this make sense?

“Well,” Newton would say to Bob, “You have forgotten that besides the measurement uncertainty there is theoretical uncertainty. The Earth is neither homogeneous nor a sphere, so you should expect a spread in the data that exceeds the measurement uncertainty.” – “Ah,” Bob says triumphantly, “But in this case you can’t make predictions!” – “Sure I can,” Newton speaks and points to his inverse-square law, “I did.” Bob frowns, but Newton has run out of patience. “Look,” he says and shoves Bob out of the door, “Come back when you have a better theory than mine.”

Back to 2018 and modified gravity. Same difference. In the Rodrigues et al paper, the authors rule out that modified gravity’s one-parameter law fits all disk galaxies in the sample. This shouldn’t come as much of a surprise. Galaxies aren’t disks with bulges any more than the Earth is a homogeneous sphere. It’s such a crude oversimplification it’s remarkable it works at all.

Indeed, it would be an interesting exercise to quantify how well modified gravity does in this set of galaxies compared to particle dark matter with the same number of parameters. Chances are, you’d find that particle dark matter too is ruled out at 5 σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you galaxy dynamics involves loads of complicated astrophysics and it’s rather unrealistic that one parameter will account for the variety in any sample.

Without the comparison to particle dark matter, therefore, the only thing I learn from the Rodrigues et al paper is that a non-universal acceleration scale fits the data better than a universal one. And that I could have told you without even looking at the data.

Summary: I’m not impressed.

It’s day 12,805 in the war between modified gravity and dark matter and dark matter enthusiasts still haven’t found the battle field.


*Dude, I know that Newton isn’t Archimedes. I’m telling a story not giving a history lesson.

Monday, July 16, 2018

SciMeter.org: A new tool for arXiv users

Time is money. It’s also short. And so we save time wherever we can, even when we describe our own research. All too often, one word must do: You are a cosmologist, or a particle physicist, or a string theorist. You work on condensed matter, or quantum optics, or plasma physics.

Most departments of physics use such simple classifications. But our scientific interests cannot be so easily classified. All too often, one word is not enough.

Each scientist has their own, unique research interests. Maybe you work on astrophysics and cosmology and particle physics and quantum gravity. Maybe you work on condensed matter physics and quantum computing and quantitative finance.

Whatever your research interests, now you can show off their full breadth, not in one word, but in one image. On our new website SciMeter, you can create a keyword cloud from your arXiv papers. For example, here is the cloud for Stephen Hawking’s papers:




You can also search for similar authors and for people who have worked on a certain topic, or a set of topics.

As I promised previously, on this website you can also find out your broadness value (it is listed below the cloud). Please note that the value we quote on the website is in standard deviations from the average, so that negative values of broadness are below average and positive values above. Also keep in mind that we measure broadness relative to the total average, ie for all arXiv categories.
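In case the normalization is unclear, the quoted number is a plain z-score. A hypothetical sketch of the convention (the names here are made up for illustration, not SciMeter’s actual code):

```python
# Hypothetical illustration of the quoted convention: broadness reported as a z-score,
# i.e. in standard deviations from the all-arXiv average.
def broadness_score(raw_broadness: float, arxiv_mean: float, arxiv_std: float) -> float:
    return (raw_broadness - arxiv_mean) / arxiv_std

print(broadness_score(0.9, 0.7, 0.2))  # 1.0 -> one standard deviation above average
```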

While this website is mostly aimed at authors in the field of physics, we hope it will also be of use to journalists looking for an expert or for editors looking for reviewers.

The software for this website was developed by Tom Price and Tobias Mistele, who were funded on an FQXi minigrant. It is entirely non-profit and we do not plan on making money with it. This means maintaining and expanding this service (eg to include other data) will only be possible if we can find sponsors.

If you encounter any problems with the website, please do not submit the issue here, but use the form that you find on the help page.

Wednesday, July 11, 2018

What's the purpose of working in the foundations of physics?

That’s me. Photo by George Musser.
Yes, I need a haircut.
[Several people asked me for a transcript of my intro speech that I gave yesterday in Utrecht at the 19th UK and European conference on foundations of physics. So here it is.]

Thank you very much for the invitation to this 19th UK and European conference on Foundations of physics.

The topic of this conference combines everything that I am interested in, and I have seen the organizers have done an awesome job lining up the program. From locality and non-locality to causality, the past hypothesis, determinism, indeterminism, and irreversibility, the arrow of time and presentism, symmetries, naturalness and finetuning, and, of course, everyone’s favorites: black holes and the multiverse.

This is sure to be a fun event. But working in the foundations of physics is not always easy.

When I write a grant proposal, inevitably I will get to the part in which I have to explain the purpose of my work. My first reaction to this is always: What’s the purpose of anything anyway?

My second thought is: Why do only scientists get this question? Why doesn’t anyone ask Gucci what’s the purpose of the Spring collection? Or Ed Sheeran what’s the purpose of singing about your ex-lover? Or Ronaldo what’s the purpose of running after a leather ball and trying to kick it into a net?

Well, you might say, the purpose is that people like to buy it, hear it, watch it. But what’s the purpose of that? Well, it makes their lives better. And what’s the purpose of that?

If you go down the rabbit hole, you find that whenever you ask for purpose you end up asking what’s the purpose of life. And to that, not even scientists have an answer.

Sometimes I therefore think maybe that’s why they ask us to explain the purpose of our work. Just to remind us that science doesn’t have answers to everything.

But then we all know that the purpose of the purpose section in a grant proposal is not to actually explain the purpose of what you do. It is to explain how your work contributes to what other people think its purpose should be. And that often means applications and new technology. It means something you can build, or sell, or put under the Christmas tree.

I am sure I am not the only one here who has struggled to explain the purpose of work in the foundations of physics. I therefore want to share with you an observation that I have made during more than a decade of public outreach: No one from the public ever asks this question. It comes from funding bodies and politicians exclusively.

Everyone else understands just fine what’s the purpose of trying to describe space and time and matter, and the laws they are governed by. The purpose is to understand. These laws describe our universe; they describe us. We want to know how they work.

Seeking this knowledge is the purpose of our work. And, if you collect it in a book, you can even put it under a Christmas tree.

So I think we should not be too apologetic about what we are doing. We are not the only ones who care about the questions we are trying to answer. A lot of people want to understand how the universe works. Because understanding makes their lives better. Whatever is the purpose of that.

But I must add that through my children I have rediscovered the joys of materialism. Kids these days have the most amazing toys. They have tablets that take videos – by voice control. They have toy helicopters – that actually fly. They have glittery slime that glows in the dark.

So, stuff is definitely fun. Let me say some words on applications of the foundations of physics.

In contrast to most people who work in the field – and probably most of you – I do not think that whatever we discover next in the foundations will remain pure knowledge, detached from technology. The reason is that I believe we are missing something big about the way that quantum theory cooperates with space and time.

And if we solve this problem, it will lead to new insights about quantum mechanics, the theory behind all our fancy new electronic gadgets. I believe the impact will be substantial.

You don’t have to believe me on this.

I hope you will believe me, though, when I say that this conference gathers some of the brightest minds on the planet and tackles some of the biggest questions we know.

I wish all of you an interesting and successful meeting.

Sunday, July 08, 2018

Away Note

I’ll be in Utrecht next week for the 19th UK and European Conference on Foundations of Physics. August 28th I’ll be in Santa Fe, September 6th in Oslo, September 22nd I’ll be in London for another installment of the HowTheLightGetsIn Festival.

I have been educated that this festival derives its name from Leonard Cohen’s song “Anthem” which features the lines
“Ring the bells that still can ring
Forget your perfect offering
There is a crack in everything
That’s how the light gets in.”
If you have read my book, the crack metaphor may ring a bell. If you haven’t, you should.

October 3rd I’m in NYC, October 4th I’m in Richmond, Kentucky, and the second week of October I am at the International Book Fair in Frankfurt.

In case our paths cross, please say “Hi” – I’m always happy to meet readers irl.

Thursday, July 05, 2018

Limits of Reductionism

Almost forgot to mention that I won 3rd prize in the 2018 FQXi essay contest “What is fundamental?”

The new essay continues my thoughts about whether free will is or isn’t compatible with what we know about the laws of nature. For many years I was convinced that the only way to make free will compatible with physics is to adopt a meaningless definition of free will. The current status is that I cannot exclude that it’s compatible.

The conflict between physics and free will is that, to our best current knowledge, everything in the universe is made of a few dozen particles (give or take a few more for dark matter) and we know the laws that determine those particles’ behavior. They all work the same way: If you know the state of the universe at one time, you can use the laws to calculate the state of the universe at all other times. This implies that what you do tomorrow is already encoded in the state of the universe today. There is, hence, nothing free about your behavior.

Of course nobody knows the state of the universe at any one time. Also, quantum mechanics makes the situation somewhat more difficult in that it adds randomness. This randomness would prevent you from actually making a prediction for exactly what happens tomorrow even if you knew the state of the universe at one moment in time. With quantum mechanics, you can merely make probabilistic statements. But just because your actions have a random factor doesn’t mean you have free will. Atoms randomly decay and no one would call that free will. (Well, no one in their right mind anyway, but I’ll postpone my rant about panpsychic pseudoscience to some other time.)

People also often invoke chaos to insist that free will is a thing, but please note that chaos is predictable in principle; it’s just not predictable in practice, because it makes a system’s behavior highly dependent on the exact values of the initial conditions. The initial conditions, however, still determine the behavior. So neither quantum mechanics nor chaos brings back free will into the laws of nature.
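
If you want to see the difference between determinism and practical predictability for yourself, here is a little toy example (mine, not anything from the essay): the logistic map is fully deterministic, yet two initial conditions that differ by one part in ten billion end up on entirely different trajectories after a few dozen steps.

# Toy illustration of deterministic chaos: the logistic map x -> r*x*(1-x).
# The rule is fully deterministic, but nearby initial conditions diverge fast.
def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)   # tiny change in the initial condition

for n in (0, 20, 40, 60):
    print(n, abs(a[n] - b[n]))         # the difference grows from 1e-10 to order one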

Now, there are a lot of people who want you to accept watered-down versions of free will, eg that you have free will because no one can in practice predict your behavior, or because no one can tell what’s going on in your brain, and so on. But I think this is just verbal gymnastics. If you accept that the current theories of particle physics are correct, free will doesn’t exist in a meaningful way.

That is as long as you believe – as almost all physicists do – that the laws that dictate the behavior of large objects follow from the laws that dictate the behavior of the object’s constituents. That’s what reductionism tells us, and let me emphasize that reductionism is not a philosophy, it’s an empirically well-established fact. It describes what we observe. There are no known exceptions to it.

And we have methods to derive the laws of large objects from the laws for small objects. In this case, then, we know that predictive laws for human behavior exist; it’s just that in practice we can’t compute them. It is the formalism of effective field theories that tells us just how the behavior and interactions of large objects relate to the behavior and interactions of their smaller constituents.

There are a few examples in the literature where people have tried to find systems for which the behavior on large scales cannot be computed from the behavior at small scales. But these examples use unrealistic systems with an infinite number of constituents and I don’t find them convincing cases against reductionism.

It occurred to me some years ago, however, that there is a much simpler example for how reductionism can fail. It can fail simply because the extrapolation from the theory at short distances to the one at long distances is not possible without inputting further information. This can happen if the scale-dependence of a constant has a singularity, and that’s something which we cannot presently exclude.

By singularity I here do not mean a divergence, ie that something becomes infinitely large. Such situations are unphysical and not cases I would consider plausible for realistic systems. But functions can have singularities without anything becoming infinite: A singularity is merely a point beyond which a function cannot be continued.
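
As a purely mathematical toy illustration (my example, not one from the essay), think of a running parameter of the form

g(μ) = g₀ √(1 − μ/μ*) , defined only for μ ≤ μ* .

The value of g stays finite everywhere it is defined and simply drops to zero at μ* (though its slope blows up there), but as a real function it cannot be continued to scales μ > μ*. If the scale-dependence of some constant behaved like this at a scale between elementary particles and humans, then extrapolating the short-distance theory upward would stall at μ* and require new input there.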

I do not currently know of any actual theory in which this happens. But I also don’t know of a way to exclude it.

Now suppose you want to derive the theory for the large objects (think humans) from the theory for the small objects (think elementary particles), but in your derivation you find that one of the functions has a singularity at some scale in between. This means you need new initial values past the singularity. It’s a clean example for a failure of reductionism, and it implies that the laws for large objects indeed might not follow from the laws for small objects.

It will take more than this to convince me that free will isn’t an illusion, but this example for the failure of reductionism gives you an excuse to continue believing in free will.

Full essay with references here.

Thursday, June 28, 2018

How nature became unnatural

Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples for such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.

Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was made clear already in a 1994 paper by Anderson and Castano.
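
To see how much hinges on the unspecified distribution, here is a little toy calculation (my numbers, not the Anderson and Castano analysis) that asks how probable a dimensionless parameter smaller than one in ten thousand is. The answer depends entirely on whether you assume a prior that is uniform in the parameter or uniform in its logarithm.

# Toy illustration: how "unlikely" a small parameter is depends on the assumed prior.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
threshold = 1e-4                              # "unnaturally small" parameter value

uniform = rng.uniform(0.0, 1.0, N)            # prior uniform on [0, 1]
log_uniform = 10 ** rng.uniform(-8, 0, N)     # prior uniform in log10 on [1e-8, 1]

print("P(x < 1e-4 | uniform prior)     =", np.mean(uniform < threshold))      # ~1e-4
print("P(x < 1e-4 | log-uniform prior) =", np.mean(log_uniform < threshold))  # ~0.5
# Same observation, different prior, opposite verdict on "naturalness".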

The Standard Model of particle physics, or the mass of the Higgs-boson more specifically, is unnatural in the above-described way, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several experiments before it, has not found any evidence for supersymmetric particles. This means that, according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently have no reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example for unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, now that the LHC has ruled it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden variable models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.

What’s remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. These and other examples like this leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.

I was therefore excited to see that James Wells has a new paper on the arXiv. In it, Wells lays out the problem of the missing probability distribution with several simple examples. And in contrast to me, Wells isn’t a no-one; he’s a well-known American particle physicist and a professor at the University of Michigan.

So, now that a man has said it, I hope physicists will listen.



Aside: I continue to have technical troubles with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and replying erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread to remind myself to reply, which is what I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to manually replace all these links which will take a while. I am very sorry for the resulting ugliness.

Saturday, June 23, 2018

Particle Physics now Belly Up

Particle physics. Artist’s impression.
Professor Ben Allanach is a particle physicist at Cambridge University. He just wrote an advertisement for my book that appeared on Aeon some days ago under the title “Going Nowhere Fast”.

I’m kidding of course, Allanach’s essay has no relation to my book. At least not that I know of. But it’s not a coincidence he writes about the very problems that I also discuss in my book. After all, the whole reason I wrote the book was that this situation was foreseeable: The Large Hadron Collider hasn’t found evidence for any new particles besides the Higgs-boson (at least not so far), so now particle physicists are at a loss for how to proceed. Even if they find something in the data that’s yet to come, it is clear already that their predictions were wrong.

Theory-development in particle physics for the last 40 years has worked mostly by what is known as “top-down” approaches. In these approaches you invent a new theory based on principles you cherish and then derive what you expect to see at particle colliders. This approach has worked badly, to say the least. The main problem, as I lay out in my book, is that the principles which physicists used to construct their theories are merely aesthetic requirements. Top-down approaches, for example, postulate that the fundamental forces are unified or that the universe has additional symmetries or that the parameters in the theory are “natural.” But none of these assumptions are necessary, they’re just pretty guesses.

The opposite of a top-down approach, as Allanach lays out, is a “bottom-up” approach. For that you begin with the theories you have confirmed already and add possible modifications. You do this so that the modifications only become relevant in situations that you have not yet tested. Then you look at the data to find out which modifications are promising because they improve the fit to the data. It’s an exceedingly unpopular approach because the data have just told us over and over and over again that the current theories are working fine and require no modification. Also, bottom-up approaches aren’t pretty, which doesn’t help their popularity.

Allanach, like several other people I know, has stopped working on supersymmetry, an idea that has for a long time been the most popular top-down approach. In principle it’s a good development that researchers in the field draw consequences from the data. But if they don’t try to understand just what went wrong – why so many theoretical physicists believed in ideas that do not describe reality – they risk repeating the same mistake. It’s of no use if they just exchange one criterion of beauty for another.

Bottom-up approaches are safely on the scientific side. But they also increase the risk that we get stuck with the development of new theories because without top-down approaches we do not know where to look for new data. That’s why I argue in my book that some mathematical principles for theory-development are okay to use, namely those which prevent internal contradictions. I know this sounds lame and rather obvious, but in fact it is an extremely strong requirement that, I believe, hasn’t been pushed as far as we could push it.

This top-down versus bottom-up discussion isn’t new. It has come up each time the supposed predictions for new particles turned out to be wrong. And each time the theorists in the field, rather than recognizing the error in their ways, merely adjusted their models to evade experimental bounds and continued as before. Will you let them get away with this once again?

Tuesday, June 19, 2018

Astrophysicists try to falsify multiverse, find they can’t.

Ben Carson, trying to make sense of the multiverse.
The idea that we live in a multiverse – an infinite collection of universes from which ours is merely one – is interesting but unscientific. It postulates the existence of entities that are unnecessary to describe what we observe. All those other universes are inaccessible to experiment. Science, therefore, cannot say anything about their existence, neither whether they do exist nor whether they don’t exist.

The EAGLE collaboration now knows this too. They recently published results of a computer simulation that details how the formation of galaxies is affected when one changes the value of the cosmological constant, the constant which quantifies how fast the expansion of the universe accelerates. The idea is that, if you believe in the multiverse, then each simulation shows a different universe. And once you know which universes give rise to galaxies, you can calculate how likely we are to be in a universe that contains galaxies and also has the cosmological constant that we observe.

We already knew before the new EAGLE paper that not all values of the cosmological constant are compatible with our existence. If the cosmological constant is too large in magnitude, the universe either collapses quickly after its formation, so that galaxies never form (if the constant is negative), or it expands so quickly that structures are torn apart before galaxies can form (if the constant is positive).

What is new is that, by using computer simulations, the EAGLE collaboration can quantify and also illustrate just how structure formation changes with the cosmological constant.

The quick summary of their results is that if you turn up the cosmological constant and keep all other physics the same, then making galaxies becomes difficult once the cosmological constant exceeds about 100 times the measured value. The authors haven’t looked at negative values of the cosmological constant because (so they write) that would be difficult to include in their code.

The image below, from their simulation, shows an example of the gas density. On the left you see a galaxy prototype in a universe with zero cosmological constant. On the right the cosmological constant is 30 times the measured value. In the right image, structures are smaller because the gas halos have difficulty growing in a rapidly expanding universe.

From Figure 7 of Barnes et al., MNRAS 477, 3727–3743 (2018).
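
The galaxy formation itself genuinely needs a big simulation, but you can get a feeling for what a larger cosmological constant does to the background expansion with a few lines of code. The following sketch (my own parameter choices and a crude Euler integration, nothing to do with the EAGLE code) estimates when accelerated expansion sets in for 1, 30, and 100 times the measured constant: the larger the constant, the earlier the runaway expansion, and the less time halos have to grow.

# Toy Friedmann equation (flat universe, matter + cosmological constant):
#   da/dt = H0 * a * sqrt(Omega_m / a**3 + Omega_L).
# Expansion starts to accelerate once Omega_L > Omega_m / (2 a**3); a larger
# constant pushes that moment to earlier times.
from math import sqrt

def time_until_acceleration(lambda_factor, omega_m=0.3, omega_l0=0.7,
                            H0=1.0, a0=1e-3, dt=1e-5):
    omega_l = omega_l0 * lambda_factor
    a_acc = (omega_m / (2.0 * omega_l)) ** (1.0 / 3.0)   # scale factor at onset
    a, t = a0, 0.0
    while a < a_acc:                                      # crude Euler integration
        a += dt * H0 * a * sqrt(omega_m / a**3 + omega_l)
        t += dt
    return a_acc, t

for factor in (1, 30, 100):   # 1x, 30x, 100x the measured cosmological constant
    a_acc, t = time_until_acceleration(factor)
    print(f"Lambda x{factor:>3}: acceleration sets in at a = {a_acc:.2f}, t = {t:.2f}/H0")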

This, however, is just turning knobs on computer code, so what does this have to do with the multiverse? Nothing really. But it’s fun to see how the authors are trying really hard to make sense of the multiverse business.

A particular headache for multiverse arguments, for example, is that if you want to speak about the probability of an observer finding themselves in a particular part of the multiverse, you have to specify what counts as observer. The EAGLE collaboration explains:
“We might wonder whether any complex life form counts as an observer (an ant?), or whether we need to see evidence of communication (a dolphin?), or active observation of the universe at large (an astronomer?). Our model does not contain anything as detailed as ants, dolphins or astronomers, so we are unable to make such a fine distinction anyway.”
But even after settling the question whether dolphins merit observer-status, a multiverse per se doesn’t allow you to calculate the probability of finding this or that universe. For this you need additional information: a probability distribution or “measure” on the multiverse. And this is where the real problem begins. If the probability of finding yourself in a universe like ours is small, you may think that disfavors the multiverse hypothesis. But it doesn’t: It merely disfavors the probability distribution, not the multiverse itself.
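
A toy version of the problem (my numbers, nothing from the EAGLE paper) goes like this: take the very same ensemble of simulated universes and weight it with two different measures. The probability of finding a constant as small as ours changes, although not a single physical fact about the ensemble has changed.

# Toy measure problem: the same set of "universes", weighted two different ways.
import numpy as np

rng = np.random.default_rng(0)
lam = 10 ** rng.uniform(-2, 4, 100_000)   # cosmological constant in units of ours
has_galaxies = lam < 100                  # crude stand-in for a galaxy-formation cut

# Measure A: every universe counts equally.
ours_a = np.mean(has_galaxies & (lam < 1)) / np.mean(has_galaxies)
# Measure B: weight each universe by 1/lambda.
w = 1.0 / lam
ours_b = np.sum(w * (has_galaxies & (lam < 1))) / np.sum(w * has_galaxies)

print("P(constant below ours | galaxies), measure A:", round(ours_a, 2))  # ~0.5
print("P(constant below ours | galaxies), measure B:", round(ours_b, 2))  # ~0.99
# Same universes, same physics; the "prediction" changes with the measure alone.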

The EAGLE collaboration elaborates on the conundrum:
“What would it mean to apply two different measures to this model, to derive two different predictions? How could all the physical facts be the same, and yet the predictions of the model be different in the two cases? What is the measure about, if not the universe? Is it just our own subjective opinion? In that case, you can save yourself all the bother of calculating probabilities by having an opinion about your multiverse model directly.”
Indeed. You can even save yourself the bother of having a multiverse to begin with because it doesn’t explain any observation that a single universe wouldn’t also explain.

The authors eventually find that some probability distributions make our universe more, others less probable. Not that you need a computer cluster for that insight. Still, I guess we should applaud the EAGLE people for trying. In their paper, they conclude: “A specific multiverse model must justify its measure on its own terms, since the freedom to choose a measure is simply the freedom to choose predictions ad hoc.”

But of course a model can never justify itself. The only way to justify a physical model is that it fits observation. And if you make ad hoc choices to fit observations, you may as well just choose the cosmological constant to be what we observe and be done with it.

In summary, the paper finds that the multiverse hypothesis isn’t falsifiable. If you paid any attention to the multiverse debate, that’s hardly surprising, but it is interesting to see astrophysicists attempting to squeeze some science out of it.

I think the EAGLE study makes a useful contribution to the literature. Multiverse proponents have so far argued that what they do is science because some versions of the multiverse are testable in our universe, for example by searching for entanglement between universes, or for evidence that our universe has collided with another one in the past.

It is correct that some multiverse types are testable, but to the extent that they have been tested, they have been ruled out. This, of course, has not ruled out the multiverse per se, because there are still infinitely many types of multiverses left. For those, the only thing you can do is make probabilistic arguments. The EAGLE paper now highlights that these can’t be falsified either.

I hope that showcasing the practical problem, as the EAGLE paper does, will help clarify the unscientific basis of the multiverse hypothesis.

Let me be clear that the multiverse is a fringe idea in a small part of the physics community. Compared to the troubled scientific methodologies in some parts of particle physics and cosmology, multiverse madness is a minor pest. No, the major problem with the multiverse is its popularity outside of physics. Physicists from Brian Greene to Leonard Susskind to Andrei Linde have publicly spoken about the multiverse as if it were best scientific practice. And that well-known physicists pass the multiverse off as science isn’t merely annoying, it actively damages the reputation of science. A prominent example for the damage that can result comes from the 2015 Republican Presidential Candidate Ben Carson.

Carson is a retired neurosurgeon who doesn’t know much physics, but what he knows he seems to have learned from multiverse enthusiasts. On September 22, 2015, Carson gave a speech at a Baptist school in Ohio, informing his audience that “science is not always correct,” and then went on to justify his science skepticism by making fun of the multiverse:
“And then they go to the probability theory, and they say “but if there’s enough big bangs over a long enough period of time, one of them will be the perfect big bang and everything will be perfectly organized.””
In an earlier speech he cheerfully added: “I mean, you want to talk about fairy tales? This is amazing.”

Now, Carson has misunderstood much of elementary thermodynamics and cosmology, and I have no idea why he thinks he’s even qualified to give speeches about physics. But really this isn’t the point. I don’t expect neurosurgeons to be experts in the foundations of physics and I hope Carson’s audience doesn’t expect that either. Point is, he shows what happens when scientists mix fact with fiction: Non-experts throw out both together.

In his speech, Carson goes on: “I then say to them, look, I’m not going to criticize you. You have a lot more faith than I have… I give you credit for that. But I’m not going to denigrate you because of your faith and you shouldn’t denigrate me for mine.”

And I’m with him on that. No one should be denigrated for what they believe in. If you want to believe in the existence of infinitely many universes with infinitely many copies of yourself, that’s all fine with me. But please don’t pass it off as science.


If you want to know more about the conflation between faith and knowledge in theoretical physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”