
Thursday, July 13, 2017

Nature magazine publishes comment on quantum gravity phenomenology, demonstrates failure of editorial oversight

I have a headache and blame Nature magazine for it.
For about 15 years, I have worked on quantum gravity phenomenology, which means I study ways to experimentally test the quantum properties of space and time. Since 2007, my research area has had its own conference series, “Experimental Search for Quantum Gravity,” which most recently took place in September 2016 in Frankfurt, Germany.

Extrapolating from whom I personally know, I estimate that about 150-200 people currently work in this field. But I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral, who just wrote a comment for Nature magazine complaining that the research area doesn’t exist.

In their comment, titled “Witness gravity’s quantum side in the lab,” Marletto and Vedral call for “a focused meeting bringing together the quantum- and gravity-physics communities, as well as theorists and experimentalists.” Nice.

If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. The above mentioned conference series is only the most regular meeting on quantum gravity phenomenology. Also the Marcel Grossmann Meeting has sessions on the topic. Indeed, I am writing this from a conference here in Trieste, which is about “Probing the spacetime fabric: from concepts to phenomenology.”

Marletto and Vedral point out that it would be great if one could measure gravitational fields in quantum superpositions to demonstrate that gravity is quantized. They go on to lay out their own idea for such experiments, but their interest in the topic apparently didn’t go far enough to either look up the literature or actually put in the numbers.

Yes, it would be great if we could measure the gravitational field of an object in a superposition of, say, two different locations. Problem is, heavy objects – whose gravitational fields are easy to measure – decohere quickly and don’t have quantum properties. On the other hand, objects which are easy to bring into quantum superpositions are too light to measure their gravitational field.

To be clear, the challenge here is to measure the gravitational field created by the objects themselves. It is comparably easy to measure the behavior of quantum objects in the gravitational field of the Earth. That has something to do with quantum and something to do with gravity, but nothing to do with quantum gravity because the gravitational field isn’t quantized.

In their comment, Marletto and Vedral go on to propose an experiment:
“Likewise, one could envisage an experiment that uses two quantum masses. These would need to be massive enough to be detectable, perhaps nanomechanical oscillators or Bose–Einstein condensates (ultracold matter that behaves as a single super-atom with quantum properties). The first mass is set in a superposition of two locations and, through gravitational interaction, generates Schrödinger-cat states on the gravitational field. The second mass (the quantum probe) then witnesses the ‘gravitational cat states’ brought about by the first.”
This is truly remarkable, but not because it’s such a great idea. It’s because Marletto and Vedral believe they’re the first to think about this. Of course they are not.

The idea of using Schrödinger-cat states has most recently been discussed here. I didn’t write about the paper on this blog because the experimental realization faces giant challenges and I think it won’t work. There is also Anastopoulos and Hu’s CQG paper about “Probing a Gravitational Cat State” and a follow-up paper by Derakhshani, which likewise go unmentioned. I’d really like to know how Marletto and Vedral think they can improve on the previous proposals. Letting a graphic designer make a nice illustration to accompany their comment doesn’t really count for much in my book.

The currently most promising attempt to probe quantum gravity indeed uses nanomechanical oscillators and comes from the group of Markus Aspelmeyer in Vienna. I previously discussed their work here. This group is about six orders of magnitude away from being able to measure such superpositions. The Nature comment doesn’t mention it either.

The prospects of using Bose-Einstein condensates to probe quantum gravity have been discussed back and forth for two decades, but it is clear that this isn’t presently the best option. The reason is simple: Even if you take the largest condensate that has been created to date – something like 10 million atoms – and you calculate the total mass, you are still way below the mass of the nanomechanical oscillators. And that’s leaving aside the difficulty of creating and sustaining the condensate.
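
If you want to put rough numbers on this, here is a minimal back-of-envelope sketch in Python. The rubidium-87 atoms, the nanogram-scale oscillator mass, and the 10-million-atom condensate are my own illustrative assumptions, not figures taken from any of the papers mentioned above.

    # Back-of-envelope comparison: mass of a large Bose-Einstein condensate
    # versus a typical nanomechanical oscillator. All numbers are illustrative
    # assumptions, not values from the experiments discussed in the text.
    ATOMIC_MASS_UNIT_KG = 1.66e-27
    n_atoms = 1e7                          # "something like 10 million atoms"
    m_atom = 87 * ATOMIC_MASS_UNIT_KG      # assuming rubidium-87, a common BEC species

    bec_mass = n_atoms * m_atom            # ~1.4e-18 kg
    oscillator_mass = 1e-12                # assumed nanogram-scale oscillator

    print(f"BEC mass:        {bec_mass:.1e} kg")
    print(f"oscillator mass: {oscillator_mass:.1e} kg")
    print(f"ratio:           {oscillator_mass / bec_mass:.0e}")
    # The condensate comes out roughly six orders of magnitude lighter.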

There are some other possible gravitational effects for Bose-Einstein condensates which have been investigated, but these come from violations of the equivalence principle, or rather the ambiguity of what the equivalence principle in quantum mechanics means to begin with. That’s a different story though because it’s not about measuring quantum superpositions of the gravitational field.

Besides this, there are other research directions. Paternostro and collaborators, for example, have suggested that a quantized gravitational field can exchange entanglement between objects in a way that a classical field can’t. That too, however, is a measurement which is not presently technologically feasible. A proposal closer to experimental test is that by Belenchia et al, laid out in their PRL about “Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators” (which I wrote about here).

Others look for evidence of quantum gravity in the CMB, in gravitational waves, or search for violations of the symmetries that underlie General Relativity. You can find a little summary in my blogpost “How Can we test Quantum Gravity”  or in my Nautilus essay “What Quantum Gravity Needs Is More Experiments.”

Do Marletto and Vedral mention any of this research on quantum gravity phenomenology? No.

So, let’s take stock. Here, we have two scientists who don’t know anything about the topic they write about and who ignore the existing literature. They faintly reinvent an old idea without being aware of the well-known difficulties, without quantifying the prospects of ever measuring it, and without giving proper credits to those who previously wrote about it. And they get published in one of the most prominent scientific journals in existence.

Wow. This takes us to a whole new level of editorial incompetence.

The worst part isn’t even that Nature magazine claims my research area doesn’t exist. No, it’s that I’m a regular reader of the magazine – or at least have been so far – and rely on their editors to keep me informed about what happens in other disciplines. For example with the comment pieces. And let us be clear that these are, for all I know, invited comments and not selected from among unsolicited submissions. So, some editor deliberately chose these authors.

Now, in this rare case when I can judge their content’s quality, I find the Nature editors picked two people who have no idea what’s going on, who chew up 30-year-old ideas, and omit relevant citations of timely contributions.

Thus, for me the worst part is that I will henceforth have to suspect Nature’s coverage of other research areas is as miserable as this.

Really, doing as much as Googling “Quantum Gravity Phenomenology” is more informative than this Nature comment.

Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak. [Img Src]
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means for the curvature-parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
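
If you want to check that order of magnitude yourself, here is a minimal sketch of the standard estimate. It assumes the curvature parameter grows like a^2 during radiation domination and like a during matter domination, and it estimates the Planck-era scale factor from the ratio of today’s CMB temperature to the Planck temperature; the specific input values are rough numbers I picked for illustration.

    # Rough estimate of how small the curvature parameter must have been
    # at the Planck era, given |Omega_k| < 0.005 today. Assumes |Omega_k|
    # grows like a^2 during radiation domination and like a during matter
    # domination (standard textbook scaling); inputs are rough values.
    T_today = 2.35e-4      # CMB temperature today in eV (~2.7 K)
    T_planck = 1.22e28     # Planck temperature in eV (~1.22e19 GeV)
    a_planck = T_today / T_planck   # scale factor at the Planck era, with a_today = 1
    a_equality = 3.0e-4             # matter-radiation equality (z ~ 3400)

    growth = (a_equality / a_planck)**2 / a_equality   # total growth of |Omega_k|
    print(f"growth factor since the Planck era: {growth:.0e}")          # ~8e59
    print(f"required |Omega_k| back then:       {0.005 / growth:.0e}")  # ~6e-63, i.e. 'smaller than 10^-60 or so'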

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.
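
For a sense of scale, here is a minimal sketch of that dilution, assuming exponential expansion at a roughly constant Hubble rate, in which case the curvature parameter shrinks by a factor exp(-2N) after N e-folds; the e-fold numbers in the loop are just illustrative.

    import math

    # Dilution of the curvature parameter during inflation, assuming
    # |Omega_k| shrinks by exp(-2N) after N e-folds of exponential expansion.
    def curvature_dilution(n_efolds):
        return math.exp(-2 * n_efolds)

    for n in (60, 120, 230):
        print(f"N = {n:3d} e-folds: dilution ~ 10^{math.log10(curvature_dilution(n)):.0f}")
    # The canonical ~60 e-folds give ~52 orders of magnitude; a dilution by
    # ~200 orders of magnitude corresponds to somewhat over 200 e-folds.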

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability to get that specific number is zero regardless of its value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you want to pick parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we’d manage to measure the curvature parameter today to a precision of 60 digits after the point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example for this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is 15 orders of magnitude smaller than the Planck mass because that means you’d need two constants to cancel each other for 15 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs was actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say that inflation is bunk. Rather I said that the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudoanonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Wednesday, April 26, 2017

Not all publicity is good publicity, not even in science.

“Any publicity is good publicity” is a reaction I frequently get to my complaints about flaky science coverage. I find this attitude disturbing, especially when it comes from scientists.

[img src: gamedesigndojo.com]


To begin with, it’s an idiotic stance towards journalism in general – basically a permission for journalists to write nonsense. Just imagine having the same attitude towards articles on any other topic, say, immigration: Simply shrug off whether the news accurately reports survey results or even correctly uses the word “immigrant.” In that case I hope we agree that not all publicity is good publicity, neither in terms of information transfer nor in terms of public engagement.

Besides, as United Airlines and Pepsi recently served to illustrate, sometimes all you want is that they stop talking about you.

But, you may say, science is different. Scientists have little to lose and much to win from an increased interest in their research.

Well, if you think so, you either haven’t had much experience with science communication or you haven’t paid attention. Thanks to this blog, I have a lot of first-hand experience with public engagement due to science writers’ diarrhea. And most of what I witness isn’t beneficial for science at all.

The most serious problem is the awakening after overhype. It’s when people start asking “Whatever happened to this?” Why are we still paying string theorists? Weren’t we supposed to have a theory of quantum gravity by 2015? Why do physicists still not know what dark matter is made of? Why can I still not have a meaningful conversation with my phone, where is my quantum computer, and whatever happened to negative mass particles?

That’s a predictable and widespread backlash from disappointed hope. Once excitement fades, the consequence is a strong headwind of public ridicule and reduced trust. And that’s for good reasons, because people were, in fact, fooled. In IT development, it goes under the (branded but catchy) name Hype Cycle.

[Hype Cycle. Image: Wikipedia]

There isn’t much data on it, but academic research plausibly goes through the same “trough of disillusionment” when it falls short of expectations. The more hype, the more hangover when promises don’t pan out, which is why, eg, string theory today takes most of the fire while loop quantum gravity – though in many regards even more of a disappointment – flies mostly under the radar. In the valley of disappointment, then, researchers are haunted both by dwindling financial support and by their colleagues’ snark. (If you think that’s not happening, wait for it.)

This overhype backlash, it’s important to emphasize, isn’t a problem journalists worry about. They’ll just drop the topic and move on to the next. We, in science, are the ones who pay for the myth that any publicity is good publicity.

In the long run the consequences are even worse. Too many never-heard-of-again breakthroughs leave even the interested layman with the impression that scientists can no longer be taken seriously. Add to this a lack of knowledge about where to find quality information, and inevitably some fraction of the public will conclude scientific results can’t be trusted, period.

If you have a hard time believing what I say, all you have to do is read comments people leave on such misleading science articles. They almost all fall into two categories. It’s either “this is a crappy piece of science writing” or “mainstream scientists are incompetent impostors.” In both cases the commenters doubt the research in question is as valuable as it was presented.

If you can stomach it, check the I-Fucking-Love-Science facebook comment section every once in a while. It's eye-opening. On recent reports from the latest LHC anomaly, for example, you find gems like “I wish I had a job that dealt with invisible particles, and then make up funny names for them! And then actually get a paycheck for something no one can see! Wow!” and “But have we created a Black Hole yet? That's what I want to know.” Black Holes at the LHC were the worst hype I can recall in my field, and it still haunts us.

Another big concern with science coverage is its impact on the scientific community. I have spoken about this many times with my colleagues, but nobody listens even though it’s not all that complicated: Our attention is influenced by what ideas we are repeatedly exposed to, and all-over-the-news topics therefore bring a high risk of streamlining our interests.

Almost everyone I ever talked to about this simply denied such influence exists because they are experts and know better and they aren’t affected by what they read. Unfortunately, many scientific studies have demonstrated that humans pay more attention to what they hear about repeatedly, and we perceive something as more important the more other people talk about it. That’s human nature.

Other studies have shown that such cognitive biases are neither correlated nor anti-correlated with intelligence. In other words, just because you’re smart doesn’t mean you’re not biased. Some techniques are known to alleviate cognitive biases, but the scientific community does not presently use these techniques. (Ample references eg in “Blind Spot,” by Banaji, Greenwald, and Martin.)

I have seen this happening over and over again. My favorite example is the “OPERA anomaly” that seemed to show neutrinos could travel faster than the speed of light. The data had a high statistical significance, and yet it was pretty clear from the start that the result had to be wrong – it was in conflict with other measurements.

But the OPERA anomaly was all over the news. And of course physicists talked about it. They talked about it on the corridor, and at lunch, and in the coffee break. And they did what scientists do: They thought about it.

The more they talked about it, the more interesting it became. And they began to wonder whether or not there might be something to it after all. And if maybe one could write a paper about it because, well, we’ve been thinking about it.

Everybody who I spoke to about the OPERA anomaly began their elaboration with a variant of “It’s almost certainly wrong, but...” In the end, it didn’t matter they thought it was wrong – what mattered was merely that it had become socially acceptable to work on it. And every time the media picked it up again, fuel was added to the fire. What was the result? A lot of wasted time.

For physicists, however, sociology isn’t science, and so they don’t want to believe social dynamics is something they should pay attention to. And as long as they don’t pay attention to how media coverage affects their objectivity, publicity skews judgement and promotes a rich-get-richer trend.

Ah, then, you might argue, at least exposure will help you get tenure because your university likes it if their employees make it into the news. Indeed, the “any publicity is good” line I get to hear mainly as justification from people whose research just got hyped.

But if your university measures academic success by popularity, you should be very worried about what this does to your and your colleagues’ scientific integrity. It’s a strong incentive for sexy-yet-shallow, headline-worthy research that won’t lead anywhere in the long run. If you hunt after that incentive, you’re putting your own benefit over the collective benefit society would get from a well-working academic system. In my view, that makes you a hurdle to progress.

What, then, is the result of hype? The public loses: Trust in research. Scientists lose: Objectivity. Who wins? The news sites that place an ad next to their big headlines.

But hey, you might finally admit, it’s just so awesome to see my name printed in the news. Fine by me, if that's your reasoning. Because the more bullshit appears in the press, the more traffic my cleaning service gets. Just don’t say I didn’t warn you.

Friday, March 31, 2017

Book rant: “Universal” by Brian Cox and Jeff Forshaw

Universal: A Guide to the Cosmos
Brian Cox and Jeff Forshaw
Da Capo Press (March 28, 2017)
(UK Edition, Allen Lane (22 Sept. 2016))

I was meant to love this book.

In “Universal” Cox and Forshaw take on astrophysics and cosmology, but rather than using the well-trodden historic path, they offer do-it-yourself instructions.

The first chapters of the book start with every-day observations and simple calculations, with the help of which the reader can estimate eg the radius of Earth and its mass, or – if you let a backyard telescope with a 300mm lens and equatorial mount count as every-day items – the distance to other planets in the solar system.

Then, the authors move on to distances beyond the solar system. With that, self-made observations understandably fade out, but are replaced with publicly available data. Cox and Forshaw continue to explain the “cosmic distance ladder,” variable stars, supernovae, redshift, solar emission spectra, Hubble’s law, the Hertzsprung-Russell diagram.

Set apart from the main text, the book has “boxes” (actually pages printed white on black) with details of the example calculations and the science behind them. The first half of the book reads quickly and fluidly and reminds me in style of school textbooks: They make an effort to illuminate the logic of scientific reasoning, with some historical asides, and concrete numbers. Along the way, Cox and Forshaw emphasize that the great power of science lies in the consistency of its explanations, and they highlight the necessity of taking into account uncertainty both in the data and in the theories.

The only thing I found wanting in the first half of the book is that they use the speed of light without explaining why it’s constant or where to get it from, even though that too could have been done with every-day items. But then maybe that’s explained in their first book (which I haven’t read).

For me, the fascinating aspect of astrophysics and cosmology is that it connects the physics of the very small scales with that of the very large scales, and allows us to extrapolate both into the distant past and future of our universe. Even though I’m familiar with the research, it still amazes me just how much information about the universe we have been able to extract from the data in the last two decades.

So, yes, I was meant to love this book. I would have been an easy catch.

Then the book continues to explain the dark matter hypothesis as a settled fact, without so much as mentioning any shortcomings of LambdaCDM, and not a single word on modified gravity. The Bullet Cluster is, once again, used as a shut-up argument – a gross misrepresentation of the actual situation, which I previously complained about here.

Inflation gets the same treatment: It’s presented as if it’s a generally accepted model, with no discussion given to the problem of under-determination, or whether inflation actually solves problems that need a solution (or solves the problems period).

To round things off, the authors close the final chapter with some words on eternal inflation and bubble universes, making a vague reference to string theory (because that’s also got something to do with multiverses you see), and then they suggest this might mean we live in a computer simulation:

“Today, the cosmologists responsible for those simulations are hampered by insufficient computing power, which means that they can only produce a small number of simulations, each with different values for a few key parameters, like the amount of dark matter and the nature of the primordial perturbations delivered at the end of inflation. But imagine that there are super-cosmologists who know the String Theory that describes the inflationary Multiverse. Imagine that they run a simulation in their mighty computers – would the simulated creatures living within one of the simulated bubble universes be able to tell that they were in a simulation of cosmic proportions?”
Wow. After all the talk about how important it is to keep track of uncertainty in scientific reasoning, this idea is thrown at the reader with little more than a sentence which mentions that, btw, “evidence for inflation” is “not yet absolutely compelling” and there is “no firm evidence for the validity of String Theory or the Multiverse.” But, hey, maybe we live in a computer simulation, how cool is that?

Worse than demonstrating slippery logic, their careless portrayal of speculative hypotheses as almost settled is dumb. Most of the readers who buy the book will have heard of modified gravity as dark matter’s competitor, and will know the controversies around inflation, string theory, and the multiverse: It’s been all over the popular science news for several years. That Cox and Forshaw don’t give space to discussing the pros and cons in a manner that at least pretends to be objective will merely convince the scientifically-minded reader that the authors can’t be trusted.

The last time I thought of Brian Cox – before receiving the review copy of this book – it was because a colleague confided to me that his wife thinks Brian is sexy. I managed to maneuver around the obviously implied question, but I’ll answer this one straight: The book is distinctly unsexy. It’s not worthy of a scientist.

I might have been meant to love the book, but I ended up disappointed about what science communication has become.

[Disclaimer: Free review copy.]

Wednesday, March 15, 2017

No, we probably don’t live in a computer simulation

According to Nick Bostrom of the Future of Humanity Institute, it is likely that we live in a computer simulation. And one of our biggest existential risks is that the superintelligence running our simulation shuts it down.

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lack of knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature.

First, to get it out of the way, there’s a trivial way in which the simulation hypothesis is correct: You could just interpret the presently accepted theories to mean that our universe computes the laws of nature. Then it’s tautologically true that we live in a computer simulation. It’s also a meaningless statement.

A stricter way to speak of the computational universe is to make more precise what is meant by ‘computing.’ You could say, for example, that the universe is made of bits and an algorithm encodes an ordered time-series which is executed on these bits. Good - but already we’re deep in the realm of physics.

If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. This might be somebody’s universe, maybe, but not ours. You either have to overthrow quantum mechanics (good luck), or you have to use qubits. [Note added for clarity: You might be able to get quantum mechanics from a classical, nonlocal approach, but nobody knows how to get quantum field theory from that.]

Even from qubits, however, nobody’s been able to recover the presently accepted fundamental theories – general relativity and the standard model of particle physics. The best attempt to date is that by Xiao-Gang Wen and collaborators, but they are still far away from getting back general relativity. It’s not easy.

Indeed, there are good reasons to believe it’s not possible. The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren’t necessarily small and have been looked for – and nothing’s been found.

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation.

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

So, yes, I think artificial consciousness is on the horizon. I also think it’s possible to convince a mind with cognitive abilities comparable to that of humans that their environment is not what they believe it is. Easy enough to put the artificial brain in a metaphoric vat: If you don’t give it any input, it would never be any wiser. But that’s not the environment I experience and, if you read this, it’s not the environment you experience either. We have a lot of observations. And it’s not easy to consistently compute all the data we have.

Besides, if the reason you build artificial intelligences is consultation, making them believe reality is not what it seems is about the last thing you’d want.

Hence, the first major problem with the simulation hypothesis is to consistently create all the data which we observe by any means other than the standard model and general relativity – because these are, for all we know, not compatible with the universe-as-a-computer.

Maybe you want to argue it is only you alone who is being simulated, and I am merely another part of the simulation. I’m quite sympathetic to this reincarnation of solipsism, for sometimes my best attempt of explaining the world is that it’s all an artifact of my subconscious nightmares. But the one-brain-only idea doesn’t work if you want to claim that it is likely we live in a computer simulation.

To claim it is likely we are simulated, the number of simulated conscious minds must vastly outnumber those of non-simulated minds. This means the programmer will have to create a lot of brains. Now, they could separately simulate all these brains and try to fake an environment with other brains for each, but that would be nonsensical. The computationally more efficient way to convince one brain that the other brains are “real” is to combine them in one simulation.

Then, however, you get simulated societies that, like ours, will set out to understand the laws that govern their environment to better use it. They will, in other words, do science. And now the programmer has a problem, because it must keep close track of exactly what all these artificial brains are trying to probe.

The programmer could of course just simulate the whole universe (or multiverse?) but that again doesn’t work for the simulation argument. Problem is, in this case it would have to be possible to encode a whole universe in part of another universe, and parts of the simulation would attempt to run their own simulation, and so on. This has the effect of attempting to reproduce the laws on shorter and shorter distance scales. That, too, isn’t compatible with what we know about the laws of nature. Sorry.

Stephen Wolfram (from Wolfram research) recently told John Horgan that:
    “[Maybe] down at the Planck scale we’d find a whole civilization that’s setting things up so our universe works the way it does.”

I cried a few tears over this.

The idea that the universe is self-similar and repeats on small scales – so that elementary particles are built of universes which again contain atoms and so on – seems to hold a great appeal for many. It’s another one of these nice ideas that work badly. Nobody’s ever been able to write down a consistent theory that achieves this – consistent both internally and with our observations. The best attempt I know of is limit cycles in theory space, but to my knowledge that too doesn’t really work.

Again, however, the details don’t matter all that much – just take my word for it: It’s not easy to find a consistent theory for universes within atoms. What matters is the stunning display of ignorance – not to mention arrogance – demonstrated by the belief that for physics at the Planck scale anything goes. Hey, maybe there’s civilizations down there. Let’s make a TED talk about it next. For someone who, like me, actually works on Planck scale physics, this is pretty painful.

To be fair, in the interview, Wolfram also explains that he doesn’t believe in the simulation hypothesis, in the sense that there’s no programmer and no superior intelligence laughing at our attempts to pin down evidence for their existence. I get the impression he just likes the idea that the universe is a computer. (Note added: As a commenter points out, he likes the idea that the universe can be described as a computer.)

In summary, it isn’t easy to develop theories that explain the universe as we see it. Our presently best theories are the standard model and general relativity, and whatever other explanation you have for our observations must first be able to reproduce these theories’ achievements. “The programmer did it” isn’t science. It’s not even pseudoscience. It’s just words.

All this talk about how we might be living in a computer simulation pisses me off not because I’m afraid people will actually believe it. No, I think most people are much smarter than many self-declared intellectuals like to admit. Most readers will instead correctly conclude that today’s intelligentsia is full of shit. And I can’t even blame them for it.

Monday, November 28, 2016

This isn’t quantum physics. Wait. Actually it is.

Rocket science isn’t what it used to be. Now that you can shoot someone to Mars if you can spare a few million, the colloquialism for “It’s not that complicated” has become “This isn’t quantum physics.” And there are many things which aren’t quantum physics. For example, making a milkshake:
“Guys, this isn’t quantum physics. Put the stuff in the blender.”
Or losing weight:
“if you burn more calories than you take in, you will lose weight. This isn't quantum physics.”
Or economics:
“We’re not talking about quantum physics here, are we? We’re talking ‘this rose costs 40p, so 10 roses costs £4’.”
You should also know that Big Data isn’t Quantum Physics and Basketball isn’t Quantum Physics and not driving drunk isn’t quantum physics. Neither is understanding that “[Shoplifting isn’t] a way to accomplish anything of meaning,” or grasping that no doesn’t mean yes.

But my favorite use of the expression comes from Noam Chomsky who explains how the world works (such is the modest title of his book):
“Everybody knows from their own experience just about everything that’s understood about human beings – how they act and why – if they stop to think about it. It’s not quantum physics.”
From my own experience, stopping to think and believing one understands other people effortlessly is the root of much unnecessary suffering. Leaving aside that it’s quite remarkable some people believe they can explain the world, and even more remarkable others buy their books, all of this is, as a matter of fact, quantum physics. Sorry, Noam.

Yes, that’s right. Basketballs, milkshakes, weight loss – it’s all quantum physics. Because it’s all happening by the interactions of tiny particles which obey the rules of quantum mechanics. If it wasn’t for quantum physics, there wouldn’t be atoms to begin with. There’d be no Sun, there’d be no drunk driving, and there’d be no rocket science.

Quantum mechanics is often portrayed as the theory of the very small, but this isn’t so. Quantum effects can stretch over large distances and have been measured over distances up to several hundred kilometers. It’s just that we don’t normally observe them in daily life.

The typical quantum effects that you have heard of – things whose position and momentum can’t be measured precisely, are both dead and alive, have a spooky action at a distance and so on – don’t usually manifest themselves for large objects. But that doesn’t mean that the laws of quantum physics suddenly stop applying at a hair’s width. It’s just that the effects are feeble and human experience is limited. There is some quantum physics, however, which we observe wherever we look: If it wasn’t for Pauli’s exclusion principle, you’d fall right through the ground.

Indeed, a much more interesting question is “What is not quantum physics?” For all we presently know, the only thing not quantum is space-time and its curvature, manifested by gravity. Most physicists believe, however, that gravity too is a quantum theory, just that we haven’t been able to figure out how this works.

“This isn’t quantum physics,” is the most unfortunate colloquialism ever because really everything is quantum physics. Including Noam Chomsky.

Monday, November 07, 2016

Steven Weinberg doesn’t like Quantum Mechanics. So what?

A few days ago, Nobel laureate Steven Weinberg gave a one-hour lecture titled “What’s the matter with quantum mechanics?” at a workshop for science writers organized by the Council for the Advancement of Science Writing (CASW).

In his lecture, Weinberg expressed a newfound sympathy for the critics of quantum mechanics.
“I’m not as happy about quantum mechanics as I used to be, and not as dismissive of the critics. And it’s a bad sign in particular that those physicists who are happy about quantum mechanics, who see nothing wrong with it, don’t agree with each other about what it means.”
You can watch the full lecture here. (The above quote is at 17:40.)


It’s become a cliché that physicists in their late years develop an obsession with quantum mechanics. On this account, you can file Weinberg together with Mermin and Penrose and Smolin. I’m not sure why that is. Maybe it’s something which has bothered them all along, they just never saw it as important enough. Maybe it’s because they start paying more attention to their intuition, and quantum mechanics – widely regarded as non-intuitive – begins itching. Or maybe it’s because they conclude it’s the likely reason we haven’t seen any progress in the foundations of physics for 30 years.

Whatever Weinberg’s motivation, he likes neither Copenhagen, nor Many Worlds, nor decoherent or consistent histories, and he seems to be allergic to pilot waves (1:02:15). As for QBism, which Mermin finds so convincing, that doesn’t even seem noteworthy to Weinberg.

I learned quantum mechanics in the mid-1990s from Walter Greiner, the one with the textbook series. (He passed away a few weeks ago at age 80.) Walter taught the Copenhagen Interpretation. The attitude he conveyed in his lectures was what Mermin dubbed “shut up and calculate.”

Of course I, like most other students, spent some time looking into the different interpretations of quantum mechanics – nothing’s more interesting than the topics your prof refuses to talk about. But I’m an instrumentalist by heart and also I quite like the mathematics of quantum mechanics, so I never had a problem with the Copenhagen Interpretation. I’m also, however, a phenomenologist. And so I’ve always thought of quantum mechanics as an incomplete, not fundamental, theory which needs to be superseded by a better, underlying explanation.

My misgivings about quantum mechanics are pretty much identical to the ones which Weinberg expresses in his lecture. The axioms of quantum mechanics, whatever interpretation you choose, are unsatisfactory for a reductionist. They should not mention the process of measurement, because the fundamental theory should tell you what a measurement is.

If you believe the wave-function is a real thing (psi-ontic), decoherence doesn’t solve the issue because you’re left with a probabilistic state that needs to be suddenly updated. If you believe the wave-function only encodes information (psi-epistemic) and the update merely means we’ve learned something new, then you have to explain who learns and how they learn. None of the currently existing interpretations address these issues satisfactorily.

It isn’t so surprising I’m with Weinberg on this because despite attending Greiner’s lectures, I never liked Greiner’s textbooks. That we students were more or less forced to buy them didn’t make them any more likable. So I scraped together my Deutsche Marks and bought Weinberg’s textbooks, which I loved for the concise mathematical approach.

I learned both general relativity and quantum field theory from Weinberg’s textbooks. I also later bought Weinberg’s lectures on Quantum Mechanics which appeared in 2013, but haven’t actually read them, except for section 3.7, where he concludes that:
“[T]oday there is no interpretation of quantum mechanics that does not have serious flaws, and [we] ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.”
It’s not much of a secret that I’m a fan of non-local hidden variables (aka superdeterminism), which I believe to be experimentally testable. To my huge frustration, however, I haven’t been able to find an experimental group willing and able to do that. I am therefore happy that Weinberg emphasizes the need to find a better theory, and to also look for experimental evidence. I don’t know what he thinks of superdeterminism. But superdeterminism or something else, I think probing quantum mechanics in new regimes is the best shot we presently have at making progress on the foundations of physics.

I therefore don’t understand the ridicule aimed at those who think that quantum mechanics needs an overhaul. Being unintuitive and feeling weird doesn’t make a theory wrong – we can all agree on this. We don’t even have to agree it’s unintuitive – I actually don’t think so. Intuition comes with use. Even if you can’t stomach the math, you can build your quantum intuition for example by playing “Quantum Moves,” a video game that crowd-sources players’ solutions for quantum mechanical optimization problems. Interestingly, humans do better than algorithms (at least for now).

[Weinberg (left), getting some kind of prize or title. Don't know for what. Image: CASW]
So, yeah, maybe quantum physics isn’t weird. And even if it is, being weird doesn’t make it wrong. Maybe you therefore don’t think it’s a promising research avenue to pursue. Fine, then don’t. But before you make jokes about physicists who rely on their intuition, let us be clear that being ugly doesn’t make a theory wrong either. And yet it’s presently entirely acceptable to develop new theories with the only aim of prettifying the existing ones.

I don’t think for example that numerological coincidences are problems worth thinking about – they’re questions of aesthetic appeal. The mass of the Higgs is much smaller than the Planck mass. So what? The spatial curvature of the universe is almost zero, the cosmological constant tiny, and the electric dipole moment of the neutron is for all we know absent. Why should that bother me? If you think that’s a mathematical inconsistency, think again – it’s not. There’s no logical reason for why that shouldn’t be so. It’s just that to our human sense it doesn’t quite feel right.

A huge amount of work has gone into curing these “problems” because finetuned constants aren’t thought of as beautiful. But in my eyes the cures are all worse than the disease: Solutions usually require the introduction of additional fields and potentials for these fields and personally I think it’s much preferable to just have a constant – is there any axiom simpler than that?

The difference between the two research areas is that there are tens of thousands of theorists trying to make the fundamental laws of nature less ugly, but only a few hundred working on making them less weird. That in and by itself is reason to shift focus to quantum foundations, just because it’s the path less trodden and more left to explore.

But maybe I’m just old beyond my years. So I’ll shut up now and go back to my calculations.

Wednesday, September 21, 2016

We understand gravity just fine, thank you.

Yesterday I came across a Q&A on the website of Discover magazine, titled “The Root of Gravity - Does recent research bring us any closer to understanding it?” Jeff Lepler from Michigan has the following question:
Q: “Are we any closer to understanding the root cause of gravity between objects with mass? Can we use our newly discovered knowledge of the Higgs boson or gravitational waves to perhaps negate mass or create/negate gravity?”
A person by name Bill Andrews (unknown to me) gives the following answer:
A: “Sorry, Jeff, but scientists still don’t really know why gravity works. In a way, they’ve just barely figured out how it works.”
The answer continues, but let’s stop right there where the nonsense begins. What’s that even supposed to mean, scientists don’t know “why” gravity works? And did the Bill person really think he could get away with swapping “why” for a “how” and nobody would notice?

The purpose of science is to explain observations. We have a theory by name General Relativity that explains literally all data of gravitational effects. Indeed, that General Relativity is so dramatically successful is a great frustration for all those people who would like to revolutionize science a la Einstein. So in which sense, please, do scientists barely know how it works?

For all we can presently tell gravity is a fundamental force, which means we have no evidence for an underlying theory from which gravity could be derived. Sure, theoretical physicists are investigating whether there is such an underlying theory that would give rise to gravity as well as the other interactions, a “theory of everything”. (Please submit nomenclature complaints to your local language police, not to me.) Would such a theory of everything explain “why” gravity works? No, because that’s not a meaningful scientific question. A theory of everything could potentially explain how gravity can arise from more fundamental principles, similar to how, say, the ideal gas law arises from the statistical properties of many atoms in motion. But that still wouldn’t explain why there should be something like gravity, or anything, in the first place.

Either way, even if gravity arises within a larger framework like, say, string theory, the effects of what we call gravity today would still come about because energy-densities (and related quantities like pressure and momentum flux and so on) curve space-time, and fields move in that space-time. Just that these quantities might no longer be fundamental. We’ve known how this works for 101 years.

After a few words on Newtonian gravity, the answer continues:
“Because the other forces use “force carrier particles” to impart the force onto other particles, for gravity to fit the model, all matter must emit gravitons, which physically embody gravity. Note, however, that gravitons are still theoretical. Trying to reconcile these different interpretations of gravity, and understand its true nature, are among the biggest unsolved problems of physics.”
Reconciling which different interpretations of gravity? These are all the same “interpretation.” It is correct that we don’t know how to quantize gravity so that the resulting theory remains viable also when gravity becomes strong. It’s also correct that the force-carrying particle associated with the quantization – the graviton – hasn’t been detected. But the question was about gravity, not quantum gravity. Reconciling the graviton with unquantized gravity is straightforward – it’s called perturbative quantum gravity – and it’s exactly the reason most theoretical physicists are convinced the graviton exists. It’s just that this reconciliation breaks down when gravity becomes strong, which means it’s only an approximation.
“But, alas, what we do know does suggest antigravity is impossible.”
That’s correct on a superficial level, but it depends on what you mean by antigravity. If you mean by antigravity that you can let any of the matter which surrounds us “fall up” it’s correct. But there are modifications of general relativity that have effects one can plausibly call anti-gravitational. That’s a longer story though and shall be told another time.

A sensible answer to this question would have been:
“Dear Jeff,

The recent detection of gravitational waves has been another confirmation of Einstein’s theory of General Relativity, which still explains all the gravitational effects that physicists know of. According to General Relativity the root cause of gravity is that all types of energy curve space-time and all matter moves in this curved space-time. Near planets, such as our own, this can be approximated to good accuracy by Newtonian gravity.

There isn’t presently any observation which suggests that gravity itself emerges from another theory, though it is certainly a speculation that many theoretical physicists have pursued. There thus isn’t any deeper root for gravity because it’s presently part of the foundations of physics. The foundations are the roots of everything else.

The discovery of the Higgs boson doesn’t tell us anything about the gravitational interaction. The Higgs boson is merely there to make sure particles have mass in addition to energy, but gravity works the same either way. The detection of gravitational waves is exciting because it allows us to learn a lot about the astrophysical sources of these waves. But the waves themselves have proved to be as expected from General Relativity, so from the perspective of fundamental physics they didn’t bring news.

Within the incredibly well confirmed framework of General Relativity, you cannot negate mass or its gravitational pull. 
You might also enjoy hearing what Richard Feynman had to say when he was asked a similar question about the origin of the magnetic force:


This answer really annoyed me because it’s a lost opportunity to explain how well physicists understand the fundamental laws of nature.

Sunday, July 24, 2016

Can we please agree what we mean by “Big Bang”?


Can you answer the following question?

At the Big Bang the observable universe had the size of:
    A) A point (no size).
    B) A grapefruit.
    C) 168 meters.

The right answer would be “all of the above.” And that’s not because I can’t tell a point from a grapefruit, it’s because physicists can’t agree what they mean by Big Bang!

For someone in quantum gravity, the Big Bang is the initial singularity that occurs in General Relativity when the current expansion of the universe is extrapolated back to the beginning of time. At the Big Bang, then, the universe had size zero and an infinite energy density. Nobody believes this to be a physically meaningful event. We interpret it as a mathematical artifact which merely signals the breakdown of General Relativity.

If you ask a particle physicist, they’ll therefore sensibly put the Big Bang at the time where the density of matter was at the Planck scale – about 80 orders of magnitude higher than the density of a neutron star. That’s where General Relativity breaks down; it doesn’t make sense to extrapolate back farther than this. At this Big Bang, space and time were subject to significant quantum fluctuations and it’s questionable that even speaking of size makes sense, since that would require a well-defined notion of distance.
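
As a quick sanity check on that figure, here is a minimal sketch comparing the Planck density c^5/(ħG^2) with an assumed typical neutron-star density of about 5×10^17 kg/m^3; both inputs are my own rough numbers, not values from the text.

    import math

    # Ratio of the Planck density to a typical neutron-star density.
    hbar = 1.05e-34   # J s
    G = 6.67e-11      # m^3 kg^-1 s^-2
    c = 3.0e8         # m / s

    planck_density = c**5 / (hbar * G**2)   # ~5e96 kg/m^3
    neutron_star_density = 5e17             # assumed typical value, kg/m^3

    print(f"Planck density: {planck_density:.1e} kg/m^3")
    print(f"orders of magnitude above a neutron star: "
          f"{math.log10(planck_density / neutron_star_density):.0f}")   # ~79, i.e. 'about 80'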

Cosmologists tend to be even more conservative. The currently most widely used model for the evolution of the universe posits that briefly after the Planck epoch an exponential expansion, known as inflation, took place. At the end of inflation, so the assumption goes, the energy of the field which drives the exponential expansion is dumped into particles of the standard model. Cosmologists like to put the Big Bang at the end of inflation because inflation itself hasn’t been observationally confirmed. But they can’t agree how long inflation lasted, and so the estimates for the size of the universe range between a grapefruit and a football field.

Finally, if you ask someone in science communication, they’ll throw up their hands in despair and then explain that the Big Bang isn’t an event but a theory for the evolution of the universe. Wikipedia engages in the same obfuscation – if you look up “Big Bang” you get instead an explanation for “Big Bang theory,” leaving you to wonder what it’s a theory of.

I admit it’s not a problem that bugs physicists a lot because they don’t normally debate the meaning of words. They’ll write down whatever equations they use, and this prevents further verbal confusion. Of course the rest of the world should also work this way, by first writing down definitions before entering unnecessary arguments.

While I am waiting for mathematical enlightenment to catch on, I find this state of affairs terribly annoying. I recently had an argument on Twitter about whether or not the LHC “recreates the Big Bang,” as the popular press likes to claim. It doesn’t. But it’s hard to make a point if no two references agree on what the Big Bang is to begin with, not to mention that it was neither big nor did it bang. If biologists adopted physicists’ standards, they’d refer to infants as blastocysts, and if you complained about it they’d explain both are phases of pregnancy theory.

I find this nomenclature unfortunate because it raises the impression we understand far less about the early universe than we do. If physicists can’t agree whether the universe at the Big Bang had the size of the White House or of a point, would you give them 5 billion dollars to slam things into each other? Maybe they’ll accidentally open a portal to a parallel universe where the US Presidential candidates are Donald Duck and Brigitta MacBridge.

Historically, the term “Big Bang” was coined by Fred Hoyle, a staunch believer in steady state cosmology. He used the phrase to make fun of Lemaitre, who, in 1927, had found a solution to Einstein’s field equations according to which the universe wasn’t eternally constant in time. Lemaitre showed, for the first time, that matter caused space to expand, which implied that the universe must have had an initial moment from which it started expanding. They didn’t then worry about exactly when the Big Bang would have been – back then they worried whether cosmology was science at all.

But we’re not in the 1940s any more, and precise science deserves precise terminology. Maybe we should rename the different stages of the early universe “Big Bang,” “Big Bing” and “Big Bong.” This idea has much potential by allowing further refinement to “Big Bång,” “Big Bîng” or “Big Böng.” I’m sure Hoyle would approve. Then he would laugh and quote Niels Bohr, “Never express yourself more clearly than you are able to think.”

You can count me in the Planck epoch camp.

Monday, July 04, 2016

Why the LHC is such a disappointment: A delusion by name “naturalness”

Naturalness, according to physicists.

Before the LHC turned on, theoretical physicists had high hopes the collisions would reveal new physics besides the Higgs. The chances of that happening get smaller by the day. The possibility still exists, but the absence of new physics so far has already taught us an important lesson: Nature isn’t natural. At least not according to theoretical physicists.

The reason that many in the community expected new physics at the LHC was the criterion of naturalness. Naturalness, in general, is the requirement that a theory should not contain dimensionless numbers that are either very large or very small. If that is so, then theorists will complain the numbers are “finetuned” and regard the theory as contrived and hand-made, not to say ugly.

Technical naturalness (originally proposed by ‘t Hooft) is a formalized version of naturalness which is applied in the context of effective field theories in particular. Since you can convert any number much larger than one into a number much smaller than one by taking its inverse, it’s sufficient to consider small numbers in the following. A theory is technically natural if all suspiciously small numbers are protected by a symmetry. The standard model is technically natural, except for the mass of the Higgs.

The Higgs is the only (fundamental) scalar we know and, unlike all the other particles, its mass receives quantum corrections of the order of the cutoff of the theory. The cutoff is assumed to be close to the Planck energy – that means the estimated mass is 15 orders of magnitude larger than the observed mass. This too-large mass of the Higgs could be remedied simply by subtracting a similarly large term. This term, however, would have to be delicately chosen so that it almost, but not exactly, cancels the huge Planck-scale contribution. It would hence require finetuning.
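To see how delicate that cancellation would have to be, here is a minimal order-of-magnitude sketch, with all loop factors and couplings dropped and the cutoff set to the Planck energy by assumption:

    # Rough illustration of the finetuning of the Higgs mass: the bare
    # mass-squared term has to cancel a Planck-scale quantum correction
    # almost, but not exactly. Order-of-magnitude only.
    m_higgs = 125.0   # observed Higgs mass, GeV
    cutoff  = 1.2e19  # assumed cutoff at the Planck energy, GeV

    correction = cutoff**2                # quantum correction ~ cutoff^2, in GeV^2
    bare_term  = m_higgs**2 - correction  # what has to be put in "by hand"
    precision  = correction / m_higgs**2  # how exact the cancellation must be

    print(f"quantum correction ~ {correction:.1e} GeV^2")
    print(f"required bare term ~ {bare_term:.1e} GeV^2")
    print(f"the two must cancel to about 1 part in {precision:.0e}")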

In the framework of effective field theories, a theory that is not natural is one that requires a lot of finetuning at high energies to get the theory at low energies to work out correctly. The degree of finetuning can be, and has been, quantified in various measures of naturalness. Finetuning is thought of as unacceptable because the theory at high energy is presumed to be more fundamental. The physics we find at low energies, so the argument goes, should not be highly sensitive to the choice we make for that more fundamental theory.

Until a few years ago, most high energy particle theorists therefore would have told you that the apparent need to finetune the Higgs mass means that new physics must appear near the energy scale where the Higgs is produced. The new physics, for example supersymmetry, would avoid the finetuning.

There’s a standard tale they have about the use of naturalness arguments, which goes somewhat like this:

1) The electron mass isn’t natural in classical electrodynamics, and if one wants to avoid finetuning this means new physics has to appear at around 70 MeV. Indeed, new physics appears even earlier in the form of the positron, rendering the electron mass technically natural.

2) The difference between the masses of the neutral and charged pion is not natural because it’s suspiciously small. To prevent finetuning one estimates new physics must appear around 700 MeV, and indeed it shows up in the form of the rho meson.

3) The lack of flavor changing neutral currents in the standard model means that a parameter which could a priori have been anything must be very small. To avoid fine-tuning, the existence of the charm quark is required. And indeed, the charm quark shows up in the estimated energy range.

From these three examples only the last one was an actual prediction (Glashow, Iliopoulos, and Maiani, 1970). To my knowledge this is the only prediction that technical naturalness has ever given rise to – the other two examples are post-dictions.

Not exactly a great score card.
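For what it’s worth, the 70 MeV figure from the first example is easy to reproduce: without the positron, the electron’s electromagnetic self-energy grows roughly like the fine-structure constant times the cutoff, and demanding that this correction not exceed the electron mass itself caps the cutoff at about m_e/alpha. A rough sketch, with all factors of order one dropped:

    # Naturalness bound for the electron mass in classical electrodynamics:
    # new physics should appear below roughly m_e / alpha.
    m_electron = 0.511      # electron mass, MeV
    alpha      = 1 / 137.0  # fine-structure constant

    print(f"new physics expected below ~{m_electron / alpha:.0f} MeV")  # ~70 MeV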

But well, given that the standard model – in hindsight – obeys this principle, it seems reasonable enough to extrapolate it to the Higgs mass. Or does it? Seeing that the cosmological constant, the only other known example where the Planck mass comes in, isn’t natural either, I am not very convinced.

A much larger problem with naturalness is that it’s a circular argument and thus merely an aesthetic criterion. Or, if you prefer, a philosophical criterion. You cannot make a statement about the likelihood of an occurrence without a probability distribution. And that distribution already necessitates a choice.

In the currently used naturalness arguments, the probability distribution is assumed to be uniform (or at least approximately uniform) over a range that can be normalized to one by dividing by suitable powers of the cutoff. Any other type of distribution, say, one that is sharply peaked around small values, would already require the introduction of such a small value in the distribution itself. But that small value would then be justified only by the probability distribution, just as a number close to one is justified only by its probability distribution.

Naturalness, hence, becomes a chicken-and-egg problem: Put in the number one, get out the number one. Put in 0.00004, get out 0.00004. The only way to break that circle is to just postulate that some number is somehow better than all other numbers.

The number one is indeed a special number in that it’s the unit element of the multiplication group. One can try to exploit this to come up with a mechanism that prefers a uniform distribution with an approximate width of one by introducing a probability distribution on the space of probability distributions, leading to a recursion relation. But that just leaves one having to explain why that particular mechanism.

Another way to see that this can’t solve the problem is that any such mechanism will depend on the basis in the space of functions. E.g., you could try to single out a probability distribution by asking that it be the same as its Fourier transform. But the Fourier transform is just one of infinitely many basis transformations in the space of functions. So again, why exactly this one?

Or you could try to introduce a probability distribution on the space of transformations among bases of probability distributions, and so on. Indeed I’ve played around with this for some while. But in the end you are always left with an ambiguity, either you have to choose the distribution, or the basis, or the transformation. It’s just pushing around the bump under the carpet.

The basic reason there’s no solution to this conundrum is that you’d need another theory for the probability distribution, and that theory per assumption isn’t part of the theory for which you want the distribution. (It’s similar to the issue with the meta-law for time-varying fundamental constants, in case you’re familiar with this argument.)

In any case, whether you buy my conclusion or not, it should give you pause that high energy theorists don’t ever address the question of where the probability distribution comes from. Suppose there indeed was a UV-complete theory of everything that predicted all the parameters in the standard model. Why then would you expect the parameters to be stochastically distributed to begin with?

This missing probability distribution, however, isn’t my main issue with naturalness. Let’s just postulate that the distribution is uniform and admit it’s an aesthetic criterion, alrighty then. My main issue with naturalness is that it’s a fundamentally nonsensical criterion.

Any theory that we can conceive of which describes nature correctly must necessarily contain hand-picked assumptions which we have chosen “just” to fit observations. If that wasn’t so, all we’d have left to pick assumptions would be mathematical consistency, and we’d end up in Tegmark’s mathematical universe. In the mathematical universe then, we’d no longer have to choose a consistent theory, ok. But we’d instead have to figure out where we are, and that’s the same question in a different guise.

All our theories contain lots of assumptions like Hilbert spaces and Lie algebras and Hausdorff measures and so on. For none of these is there any explanation other than “it works.” In the space of all possible mathematics, the selection of this particular math is infinitely fine-tuned already – and it has to be, for otherwise we’d be lost again in Tegmark space.

The mere idea that we can justify the choice of assumptions for our theories in any other way than requiring them to reproduce observations is logical mush. The existing naturalness arguments single out a particular type of assumption – parameters that take on numerical values – but what’s worse about this hand-selected assumption than any other hand-selected assumption?

This is not to say that naturalness is always a useless criterion. It can be applied in cases where one knows the probability distribution, for example for the typical distances between stars or the typical quantum fluctuations in the early universe, etc. I also suspect that it is possible to find an argument for the naturalness of the standard model that does not necessitate postulating a probability distribution, but I am not aware of one.

It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating to nature what’s supposedly natural.

Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty sure. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object in a process with the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one in the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.
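To put rough numbers on this, here is a minimal Newtonian estimate of the head-to-toe tidal acceleration right at the horizon. The 2 meter observer height and the roughly 4 million solar masses for Sgr A* are my assumed inputs, and the estimate is only good to order of magnitude:

    # Tidal acceleration across an observer of height L at the horizon,
    # a ~ 2*G*M*L / r_s^3, with Schwarzschild radius r_s = 2*G*M/c^2.
    G     = 6.674e-11  # m^3 kg^-1 s^-2
    c     = 2.998e8    # m/s
    M_sun = 1.989e30   # kg
    L     = 2.0        # assumed height of the infalling observer, m

    def tidal_at_horizon(mass_kg):
        r_s = 2 * G * mass_kg / c**2         # Schwarzschild radius, m
        return 2 * G * mass_kg * L / r_s**3  # head-to-toe acceleration, m/s^2

    print(f"solar-mass black hole:    ~{tidal_at_horizon(M_sun):.0e} m/s^2")
    print(f"Sgr A* (4 million M_sun): ~{tidal_at_horizon(4e6 * M_sun):.0e} m/s^2")

The first number is billions of times the gravitational acceleration on Earth; the second is smaller than what your inner ear copes with on an ordinary day.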

It takes you a finite time to reach the horizon of a black hole. For an outside observer, however, you seem to be moving slower and slower and will never quite reach the black hole, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanishing. It might just take a few hundred billion years.

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers have different descriptions of the process of falling into a black hole because they both use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity. Which is indeed a highly contested area of current physics research. According to black hole complementarity the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence, it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information into the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. It’s why I think if we ever want to see progress on the foundations of physics we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, March 23, 2016

Hey Bill Nye, Please stop talking nonsense about quantum mechanics.

Bill Nye, also known as The Science Guy, is a popular science communicator in the USA. He has appeared regularly on TV and, together with Corey Powell, produced two books. On Twitter, he has gathered 2.8 million followers, which ranks him somewhere between Brian Cox and Neil deGrasse Tyson. This morning, a video of Bill Nye explaining quantum entanglement was pointed out to me:



The video seems to be part of a series in which he answers questions from his fans. Here we have a young man by the name of Tom from Western Australia calling in. The transcript starts as follows:
Tom: Hi, Bill. Tom, from Western Australia. If quantum entanglement or quantum spookiness can allow us to transmit information instantaneously, that is faster than the speed of light, how do you think this could, dare I say it, change the world?

Bill Nye: Tom, I love you man. Thanks for the tip of the hat there, the turn of phrase. Will quantum entanglement change the world? If this turns out to be a real thing, well, or if we can take advantage of it, it seems to me the first thing that will change is computing. We’ll be able to make computers that work extraordinarily fast. But it carries with it, for me, this belief that we’ll be able to go back in time; that we’ll be able to harness energy somehow from black holes and other astrophysical phenomenon that we observe in the cosmos but not so readily here on earth. We’ll see. Tom, in Western Australia, maybe you’ll be the physicist that figures quantum entanglement out at its next level and create practical applications. But for now, I’m not counting on it to change the world.
I thought I must have slept through Easter and it’s already April 1st. I replayed this like 5 times. But it didn’t get any better. So what else can I do but take to my blog in the futile attempt to bring sanity back to earth?

Dear Tom,

This is an interesting question which allows one to engage in some lovely science fiction speculation, but first let us be clear that quantum entanglement does not allow us to transmit information faster than the speed of light. Entanglement is a non-local correlation that forces particles to share properties, potentially over long distances. But there is no way to send information through this link because the particles are quantum mechanical and their properties are randomly distributed.
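If you want to convince yourself of this, here is a toy Python sketch of the measurement statistics of a singlet pair. The angles and the sampling procedure are mine, chosen only for illustration, and this is not a Bell test. The point is that while the outcomes on the two sides are correlated, Alice’s local statistics stay at 50/50 no matter which setting Bob chooses, so she cannot read any message out of her own data:

    # Toy simulation of singlet-pair measurements: the joint outcomes are
    # correlated (correlation = -cos(angle difference)), but Alice's
    # marginal is always 50/50, independent of Bob's choice of setting.
    import math, random

    def run(alice_angle, bob_angle, trials=100_000):
        plus, corr = 0, 0
        for _ in range(trials):
            a = random.choice([+1, -1])                    # Alice's outcome: random
            p_opposite = math.cos((alice_angle - bob_angle) / 2) ** 2
            b = -a if random.random() < p_opposite else a  # Bob's correlated outcome
            plus += (a == +1)
            corr += a * b
        return plus / trials, corr / trials

    for bob_angle in (0.0, math.pi / 4, math.pi / 2):
        frac, corr = run(0.0, bob_angle)
        print(f"Bob at {bob_angle:.2f} rad: Alice sees +1 in {frac:.3f} of trials, "
              f"correlation {corr:+.3f}")

The correlation does change with Bob’s setting, but the only way to see that is to bring both measurement records together, which requires ordinary slower-than-light communication.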

Quantum entanglement is a real thing, we know this already. This has been demonstrated in countless experiments, and while multi-particle correlations are an active research area, the basic phenomenon is well-understood. But entanglement does not imply a spooky “action” at a distance – this is a misleading historical phrase which lives on in science communication just because it has a nice ring to it. Nothing ever acts between the entangled particles – they are merely correlated. That entanglement might allow faster-than-light communication was a confusion in the 1950s, but it’s long been understood that quantum mechanics is perfectly compatible with Einstein’s theory of Special Relativity in which information cannot be transmitted faster than the speed of light.

No, it really can’t. Sorry about that. Yes, I too would love to send messages to the other side of the universe without having to wait some billion years for a reply. But for all we presently know about the laws of nature, it’s not possible.

Entanglement is the relevant ingredient in building quantum computers, and these could indeed dramatically speed up information processing and storage capacities, hence the effort that is being made to build one. But this has nothing to do with exchanging information faster than light, it merely relies on the number of different states that quantum particles can be brought into, which is huge compared to those of normal computers. (Which also work only thanks to quantum mechanics, but normal computers don’t use quantum states for information processing.)
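To get a crude sense of the scale involved (a counting statement only, not an explanation of where quantum speed-ups actually come from): the state of n qubits is described by 2^n complex amplitudes, whereas n classical bits are, at any moment, just one out of 2^n strings.

    # Number of amplitudes needed to describe an n-qubit state.
    for n in (10, 50, 300):
        print(f"{n} qubits: about {2.0**n:.1e} amplitudes")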

Now let us forget about the real world for a moment, and imagine what we could do if it was possible to send information faster than the speed of light, even though this is to our best present knowledge not possible. Maybe this is what your question really was?

The short answer is that you are likely to screw up reality altogether. Once you can send information faster than the speed of light, you can also send it back in time. If you can send information back in time, you can create inconsistent histories, that is, you can create various different pasts, a problem commonly known as “grandfather paradox:” What happens if you travel back in time and kill your grandpa? Will Marty McFly be born if he doesn’t get his mom to dance with his dad? Exactly this problem.

Multiple histories, or quantum mechanical parallel worlds, are a commonly used scenario in the science fiction literature and movie industry, and they make for some mind-bending fun. For a critical take on how these ideas hold up to real science, I can recommend Xaq Rzetelny’s awesome article “Trek at 50: The quest for a unifying theory of time travel in Star Trek.”

I have no fucking clue what Bill thinks this has to do with harnessing energy from black holes, but I hope this won’t discourage you from signing up for a physics degree.

Dear Bill,

Every day I get emails from people who want to convince me that they have found a way to create a wormhole, harness vacuum energy, travel back in time, or that they know how to connect the conscious mind with the quantum, whatever that means. They often argue with quotes from papers or textbooks which they have badly misunderstood. But they no longer have to do this. Now they can quote Bill The Science Guy who said that quantum entanglement would allow us to harness energy from black holes and to travel back in time.

Maybe you were joking and I didn’t get it. But if it’s a joke, let me tell you that nobody in my newsfeed seems to have found it funny.

Seriously, man, fix that. Sincerely,

B.

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave your hands, yell “magic,” and insist that there are systems which are exempt from the laws of nature. And these systems somehow have something to do with human brains.

The only known example of a law that is neither deterministic nor random is one I constructed myself. But it’s a baroque construct meant as a proof of principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might settle for example on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. It remains to be seen whether it will remain impossible, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is to insist that on comparably large scales, like those of the molecules responsible for chemical interactions in your brain, there are smaller components which may have a remaining influence. If you don’t keep track of these smaller components, the behavior of the larger components might not be predictable. You can then say “free will is emergent” because of “higher level indeterminism.” It’s like saying that if I give you a robot and I don’t tell you what’s in the robot, then you can’t predict what the robot will do, and consequently it must have free will. I haven’t managed to bring up sufficient amounts of intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails by readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and doesn’t mean.

It doesn’t mean that you are not making decisions or are not making choices. Free will or not, you have to do the thinking to arrive at a conclusion, the answer to which you previously didn’t know. Absence of free will doesn’t mean either that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides this, if you don’t have free will you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a widespread myth that free will skepticism erodes the foundation of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does Not Believe in Free Will?” The piece is written by Azim F. Shariff, a Professor of Psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. They don’t explicitly say this, but the message the article sends is pretty clear: If you do as much as suggest that free will doesn’t exist you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is highly flawed. These psychology studies always work the same way. The study participants are engaged in some activity in which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of engaging in “wrongdoing” is tested and compared to a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will, it instead primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions because “you” – that mass of neurons – are making, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from more wrongdoing. They will either try to fix you or lock you up. In other words, you will be held responsible. Nothing of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I am hitting on the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. Free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without free will, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this, ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think; buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter though, I side with Socrates: “The unexamined life is not worth living.”