Wednesday, September 12, 2018

Book Review: “Making Sense of Science” by Cornelia Dean

Making Sense of Science: Separating Substance from Spin
By Cornelia Dean
Belknap Press (March 13, 2017)

It’s not easy being a science journalist. On the one hand, science journalists rely on good relations with scientists. On the other hand, their next article may be critical of those scientists’ work. On the one hand, they want to get the details right. On the other hand, they have tight deadlines and an editor who scraps the one paragraph that took a full day to write. That’s four hands already, and I wasn’t even counting the hands they need to write.

Like most scientists, I used to think that a bogus headline is the writer’s fault. But the more science writers I got to know, the better my opinion of them became. Unlike scientists, journalists strongly adhere to professional guidelines. They want to get things right, and they want the reader to know the truth. If they get something wrong, the misinformation almost always comes from scientists themselves.

The amount of misinformation about research in my own discipline is so high that no one who doesn’t work in the field has a chance to figure out what’s going on. Naturally this makes me wonder how much I can trust the news I read about other research areas. Cornelia Dean’s book “Making Sense of Science” tells the reader what to look out for.

Cornelia Dean has been a science writer for the New York Times for 30 years and she knows her job. The book begins with a general introduction, explaining what science is, how it works, and why it matters. She then moves on to conflicts of interest, checking sources, difficulties in assessing uncertainty and risk, scientific evidence in court, pitfalls of statistical analysis and analytical modeling, overconfident scientists, and misconduct.

The book is full of examples, proceeds swiftly, and reads well. The chapters end with bullet-point lists of items to recall, which is helpful if you, like me, tend to switch books halfway through and then forget what you already read.

“Making Sense of Science” also offers quick summaries of topics that are frequently front-page news: climate change, genetically modified crops, organic food, and cancer risk. While I found these summaries well done, they seem somewhat randomly selected. I guess they are mostly there because the author is familiar with those topics.

The biggest shortcoming of the book is its lack of criticism of both the scientific disciplines and journalism itself. While the author acknowledges that she and her colleagues often operate under time pressure and that shit happens, she doesn’t assess how much of a problem this is or which outlets are more likely to suffer from it. She also doesn’t mention that even scientists who do not take money from industry have agendas to push, and that both the scientists and the writers profit from big headlines.

In summary, I found the book very useful, especially where the discussion of risk assessment is concerned, but it presents a suspiciously clean and sanitized picture of journalism.

Sunday, September 09, 2018

I’m now older than my father has ever been

Old photo.
My father died a few weeks shy of his 42nd birthday. Went to bed one night, didn’t wake up the next morning. The death certificate says heart failure. Family gossip says it was a history of clinical depression that led to obesity and heavy drinking. They tell me I take after him. They may not be entirely wrong.

I’ve had trouble with my blood pressure ever since I was a teenager. I also have fainting episodes. One time I infamously passed out on a plane as it was approaching the runway. The pilot had to cancel take-off and call an ambulance. Paramedics carried me off the plane, wheeled me away, and then kept me in the hospital for a week. While noteworthy for the trouble I had getting hold of a bag that traveled without me, this was neither the first nor the last time my blood pressure suddenly gave out for no particular reason. I’ve been on the receiving end of epinephrine shots more than once.

Besides being a constant reminder that life is short, having a close relative who died young from heart failure has also added a high-risk stamp to my medical documents. This blessed me with countless extra exams, thanks to which I now know that some of my heart valves don’t close properly and the right chambers are enlarged. I also have a heart arrhythmia.

My doctors say I’m healthy, which really means they don’t know what’s wrong with me. Maybe I just have a fickle vagus nerve that pulls the plug every once in a while. Whatever the cause of my indisposition, I’ve spent most of my life in the awareness that I may not wake up tomorrow.

Today I woke up to find I reached the end of my subconscious life-expectation. In two weeks I’ll turn 42. I have checked off almost all boxes on my to-do list for life. Plant a tree, have a child, write a book. The only unchecked item is visiting New Zealand. But besides this, folks, I feel like I’m done here.

And what the heck do I do now with the rest of my life?

I didn’t really think about this until a few people asked what I plan on doing now that my book has been published. My current contract will run out next year, and then what? Will I write another book? Apply for another grant? Do something entirely different? To which my answer was, I have no idea. Ask me anything about quantum gravity and I may have a smarter reply.

I worry about the future, of course, constantly. Oh yes, I am a great worrier. But the future I worry about is not mine, it’s that of mankind. I’m just a blip in the symphony, a wheel in the machinery, a node in a giant information-processing network. Science, to me, is our collective attempt to accurately understand the laws of nature. It’s not about me, it’s not about you, it’s about us; it’s about whether the human race will last or whether we’re just too dumb to figure out how the world works.

Some days I am optimistic, but today I fear we are too dumb. Interactions of humans in large groups have consequences that we do not intuitively grasp, a failure that underlies not only Twitter witch-hunts and viral fake news, but is also the reason why science works so inefficiently. I’m not sure we can fix this. Scientists have known for decades that the pressure to work on topics that produce results quickly and that are well-cited supports the widespread use of bad methodologies. But they do nothing about it except for the occasional halfhearted complaint.

Unsurprisingly, taxpayers who are financing research bubbles with zero return on investment have taken the cue. Some of them conclude, not entirely incorrectly, that much of the scientific enterprise is corrupt and its conclusions cannot be trusted. If we carry on like this, science skeptics are bound to become more numerous. And that’s how it will end, the great human civilization: not with a bang and not with a whimper, but with everyone yelling at each other that someone else was responsible for doing something about it.

And if not even scientists can learn that social feedback influences their decisions, how can we expect the same of people who have not been trained to objectively evaluate evidence? Most scientists still believe their enterprise is governed by an invisible hand that will miraculously set things right should they go astray. They believe science self-corrects. Hahaha. It does not, of course. Someone, somewhere, has to actually do the correcting. Someone has to stand up and say: “This isn’t good science. We shouldn’t do this. Stop it.” Hence my book.

I used to think old people must hate all younger people because who wouldn’t rather be young. Now that I’ve reached a certain age myself I find the opposite is true. Not only am I relieved that my hyperactive brain is slowing down, making it much easier for me to focus on one thing at a time. I also love young people. They give me hope, hope that I lost in my own generation. Kids, I know you inherit a mess. I am sorry. Now hand me the wine.

But getting older also has an awkward side, which is that younger people ask me for advice. Worse, I get invited to speak about my experience as a woman in science. I am supposed to be a role model now, you see, I am supposed to encourage young women to follow my footsteps. If only I had something encouraging to say; if only those footsteps would lead elsewhere than nowhere. I decline these invitations. My advice, ladies, is to find your own way. And keep in mind, life is short.

Today’s advice to myself is to come up with an idea for how I’ll make a living next year. But after two weeks of travel, four lectures, and two interviews, with a paper and an essay and two blogposts squeezed in between, I am only tired. I have also quite possibly had one glass of wine too many.

Maybe I’ll make a plan tomorrow, first thing when I wake up. If I wake up.

Wednesday, September 05, 2018

Superfluid dark matter passes another check: strong gravitational lensing

Physicists still haven’t figured out what dark matter is made of, if anything. The idea that it’s made of particles which interact so weakly we haven’t yet measured them works well to explain some of the observational evidence. Notably, the motions of galaxies bound to clusters and the features of the cosmic microwave background fit straightforwardly with theories of particle dark matter. The galaxies themselves, not so much.

Astronomers have found that galaxies exhibit regularities that are difficult to accommodate in theories of particle dark matter, for example the Tully-Fisher relation and the Radial Acceleration Relation. These observed patterns in the measurements don’t follow all that easily from the simple models of particle dark matter. To reproduce them, theorists have to invoke additional effects assigned to various astrophysical processes, notably stellar feedback. While these processes arguably exist, it isn’t clear that they actually act in galaxies in the amounts necessary to explain the observations.

In the past 20 years or so, astrophysicists have improved computer simulations for galaxy formation until everything fit with the data, sometimes adapting the models to new observations. These computer simulations now contain about a dozen or so parameters (there are various simulations and not all of them list the parameters, so it’s hard to tell exactly) and the results agree well with observation.

But I find it somewhat hard to swallow that regularities which seem generic in galaxies follow from the theory only after much fiddling. Indeed, the very fact that it took astrophysicists so long to get galaxies right tells me that the patterns in our observations are not generic to particle dark matter. It signals that the theories are missing something important.

One of the proposals for the missing piece has long been that gravity must be modified. But I, like many theorists, have not been particularly convinced by this idea, the reason being that it’s hard to change anything about Einstein’s theory of general relativity without running into conflict with the many high-precision measurements that are in excellent agreement with the theory. On the other hand, modified gravity works dramatically well for galaxies and explains the observed regularities.

For a long time I’ve been rather agnostic about the whole issue. Then, three years ago, I read a paper in which Berezhiani and Khoury proposed that dark matter is a superfluid. The reason I even paid attention to this had nothing to do with dark matter; at the time I was working on superfluid condensates that can mimic gravitational effects and I was looking for inspiration. But I have since become a big fan of superfluid dark matter – because it makes so much sense!

You see, the superfluid that Berezhiani and Khoury proposed isn’t just any superfluid. It has an interaction with normal matter, and this interaction creates a force. This force looks like modified gravity. Indeed, I think it is justified to call it modified gravity, because the pull acting on galaxies is now no longer that of general relativity alone.

However, to get the stuff to condense, you need sufficient pressure, and the pressure comes from the gravitational attraction of the matter itself. Only if you have matter sufficiently clumped together will the fluid become a superfluid and generate the additional force. If the matter isn’t sufficiently clumped, or is just too warm, it’ll not condense.

This simple idea works remarkably well to explain why the observations that we assign to dark matter seem to fall into two categories: Those that fit better to particle dark matter and those that fit better to modified gravity. It’s because the dark matter is a fluid with two phases. In galaxies it’s condensed. In galaxy clusters, most of it isn’t condensed because the average potential isn’t deep enough. And in the early universe it’s too warm for condensation. On scales of the solar system, finally, it doesn’t make sense to even speak of the superfluid’s force, it would be like talking about van der Waals forces inside a proton. The theory just isn’t applicable there.
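To give a rough idea of what this extra force does inside galaxies, here is a minimal numerical sketch. It does not use the actual Berezhiani-Khoury equations; it merely illustrates the Modified Newtonian Dynamics (MOND) limit that the superfluid force reproduces in the condensed phase, using a standard interpolation function and made-up but typical galaxy parameters:

    import numpy as np

    # Minimal sketch (not the actual Berezhiani-Khoury equations): in the
    # condensed phase the superfluid force mimics MOND, whose deep-regime
    # acceleration is g = sqrt(g_N * a0), giving flat rotation curves.
    G  = 4.30e-6   # Newton's constant in kpc (km/s)^2 / M_sun
    a0 = 3.7e3     # MOND scale, ~1.2e-10 m/s^2, in (km/s)^2 / kpc
    M  = 1e11      # baryonic mass of a typical galaxy, in solar masses

    r   = np.linspace(1, 50, 100)    # radius in kpc
    g_N = G * M / r**2               # Newtonian acceleration
    g   = g_N / 2 + np.sqrt(g_N**2 / 4 + g_N * a0)   # "simple" interpolation

    v_newton = np.sqrt(g_N * r)      # Keplerian, falls off as 1/sqrt(r)
    v_mond   = np.sqrt(g * r)        # flattens out at (G * M * a0)**0.25

    print(f"at 50 kpc: Newton {v_newton[-1]:.0f} km/s, MOND-like {v_mond[-1]:.0f} km/s")
    print(f"asymptotic flat velocity: {(G * M * a0)**0.25:.0f} km/s")  # ~200 km/s

The Newtonian curve keeps falling while the MOND-like curve levels off at about 200 km/s, which is roughly what one observes for a galaxy of this mass.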

I was pretty excited about this until it occurred to me that there’s a problem with this idea. The problem is that we know, at least since the gravitational wave event GW170817 with an optical counterpart, that gravitational waves travel at the same speed as light, to good precision. This by itself is easy to explain with the superfluid idea: light just doesn’t interact with the superfluid. There could be various reasons for this, but regardless of what the reason is, it’s simple to accommodate this in the model.

This has the consequence, however, that light which travels through the superfluid region of galaxies will not respond to the bulk of what we usually refer to as dark matter. The superfluid does have mass and therefore also a gravitational pull. Light notices that and will bend around it. But most of the dark matter that we infer from the motion of normal matter is “phantom matter,” an “impostor field.” It’s really due to the additional force from the superfluid. And light will not respond to this.
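A back-of-the-envelope version of the resulting mismatch, with hypothetical numbers purely for illustration: stars on a flat rotation curve “feel” an enclosed dynamical mass M = v^2 r / G, but light only bends around the mass that is actually there, the baryons plus whatever the superfluid itself weighs.

    # Hypothetical numbers for illustration only.
    G = 4.30e-6            # Newton's constant in kpc (km/s)^2 / M_sun
    v, r = 200.0, 50.0     # flat rotation velocity (km/s) at radius r (kpc)

    M_dyn     = v**2 * r / G   # mass inferred from stellar motion, ~4.7e11
    M_baryons = 1e11           # assumed baryonic mass

    print(f"dynamical mass: {M_dyn:.1e} M_sun")
    print(f"phantom part, invisible to light: {M_dyn - M_baryons:.1e} M_sun")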

As a result, the amount of dark matter inferred from lensing on galaxies should not match the amount of dark matter inferred from the motion of stars. My student, Tobias Mistele, and I hence set out to have a look at strong gravitational lensing. We just completed our paper on this, and it’s now available on the arXiv.
    Strong lensing with superfluid dark matter
    Sabine Hossenfelder, Tobias Mistele
    arXiv:1809.00840 [astro-ph.GA]
It turns out that the observations from strong gravitational lenses are not hard to accommodate with superfluid dark matter. The reason is, loosely speaking, that the amount of superfluid can be adjusted or, somewhat more technically, that the additional fields require additional initial conditions and those allow us to always find solutions that fit the data.

This finding hence exemplifies why criticisms of modified gravity which insist that there is only one way to fit a galaxy are ill-founded. If you modify gravity by introducing additional fields – and that’s how almost all modifications of gravity work – the additional fields will have additional degrees of freedom and generally require additional initial conditions. There will hence usually be several solutions for galaxies. Indeed, some galaxies may, by some statistical fluke, not have attracted enough of the fluid for it to condense to begin with, though we have found no evidence of that.

We have been able to fit all lenses in our sample – 65 in total – except for one. The one outlier is a near-miss. It could be off for a variety of reasons, either because the measurement is imprecise or because our model is overly simplistic. We assume, for example, that the distribution of the superfluid is spherically symmetric and time-independent, which almost certainly isn’t the case. Actually, it’s remarkable that it works at all.

Of course that doesn’t mean that the model is off the hook; it could still run into conflict with data that we haven’t checked so far. That observations based on the passage of light should show an apparent lack of dark matter might have other observable consequences, for example for gravitational redshift. Also, we have only looked at one particular sample of galaxies and those have no detailed data on the motion of stars. Galaxies for which there is more data will be more of a challenge to fit.

In summary: So far so good. Suggestions for what data to look at next are highly welcome.


Further reading: My Aeon essay “The Superfluid Universe”, and my recent SciAm article with Stacy McGaugh “Is dark matter real?”

Monday, September 03, 2018

Science has a problem, and we must talk about it

Bad stock photos of my job.
A physicist is excited to
have found a complicated way
of writing the number 2.
When Senator Rand Paul last year proposed that non-experts participate in review panels which award competitive research grants, my first reaction was to laugh. I have reviewed my share of research proposals, and I can tell you that without experience in the respective discipline you can’t even judge whether the proposal is feasible, not to mention promising.

I nodded to myself when I read that Jeffrey Mervis, reporting for Science Magazine, referred to Sen. Paul’s bill as an “attack on peer review,” and Sean Gallagher from the American Association for the Advancement of Science called it “as blatant a political interference into the scientific process as it gets.”

But while Sen. Paul’s cure is worse than the disease (and has, to date, luckily not passed the Senate), I am afraid his diagnosis is right. The current system is indeed “baking in bias,” as he put it, and it’s correct that “part of the problem is the old adage publish or perish.” And, yes, “We do have silly research going on.” Let me tell you.

For the past 15 years, I have worked in the foundations of physics, a field which has not seen progress for decades. What happened 40 years ago is that theorists in my discipline became convinced the laws of nature must be mathematically beautiful in specific ways. By these standards, which are still used today, a good theory should be simple, and have symmetries, and it should not have numbers that are much larger or smaller than one, the latter referred to as “naturalness.”

Based on such arguments from beauty, they predicted that protons should be able to decay. Experiments have looked for this since the 1980s, but so far not a single proton has been caught in the act. This has ruled out many symmetry-based theories. But it is easy to amend these theories so that they evade experimental constraints, hence papers continue to be written about them.

Theorists also predicted that we should be able to detect dark matter particles, such as axions or weakly interacting massive particles (WIMPs). These hypothetical particles have been searched for in dozens of experiments with increasing sensitivity – unsuccessfully. In reaction, theorists now write papers about hypothetical particles that are even harder to detect.

The same criteria of symmetry and naturalness led many particle physicists to believe that the Large Hadron Collider (LHC) should see new particles besides the Higgs boson, for example supersymmetric particles or dark matter candidates. But none were seen. The LHC data is not yet fully analyzed, but it’s already clear that if something hides in the data, it’s not what particle physicists thought it would be.

You can read the full story in my book “Lost in Math: How Beauty Leads Physics Astray.”

Most of my colleagues blame the lack of progress on the maturity of the field. Our theories work extremely well already, so testing new ideas is difficult, not to mention expensive. The easy things have been done, they say, we must expect a slowdown.

True. But this doesn’t explain the stunning profusion of blundered predictions. It’s not like we predicted one particle that wasn’t there. We predicted hundreds of particles, and fields, and new symmetries, and tiny black holes, and extra dimensions (in various shapes, and sizes, and widths), none of which were there.

This production of fantastic ideas has been going on for so long it has become accepted procedure. In the foundations of physics we now have a generation of researchers who make a career of studying things that probably don’t exist. And instead of discarding methods that don’t work, they write ever more papers of ever less relevance. Instead of developing theories that better describe observations, they develop theories that are harder to falsify. Instead of taking risks, they stick to ideas that are popular with their peers.

Of course I am not the first to figure out that beauty doesn’t equal truth. Indeed, most physicists would surely agree that using aesthetic criteria to select theories is not good scientific practice. They do it anyway. Because all their colleagues do it. And because they all do it, this research will get cited and published, and then it will be approved by review panels which take citations and publications as a measure of quality. “Baked in bias” is a pretty good summary.

This acceptance of bad scientific practice to the benefit of productivity is certainly not specific to my discipline. Look for example at psychologists whose shaky statistical analyses now make headlines. The most prominent victim is Amy Cuddy’s “Power Posing” hypothesis, but the problem has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016 “statisticians and other scientists have been writing on the topic for decades.”

Commenting on this “False Positive Psychology,” Joseph Simmons, Leif Nelson, and Uri Simonsohn wrote, “Everyone knew it was wrong.” But I don’t think so. Not only have I myself spoken to psychologists who thought their methods were fine because it’s what they were taught to do; the claim also doesn’t make sense. Had psychologists known their results were likely statistical artifacts, they’d also have known other groups could use the same methods to refute their results.

Or look at Brian Wansink, the Cornell Professor with the bottomless soup bowl experiment. He recently drew unwanted attention to himself with a blogpost in which he advised a student to try harder getting results out of data because it “cost us a lot of time and our own money to collect.” Had Wansink been aware that massaging data until it delivers is not sound statistical procedure, he’d probably not have blogged about it.

What is going on here? In two words: “communal reinforcement,” more commonly known as group-think. The headlines may say “research shows” but it doesn’t: researchers show. Scientists, like all of us, are affected by their peers’ opinions. If everyone does it, they think it’s probably ok. They also like to be liked, not to mention that they like having an income. This biases their judgement, but the current organization of the academic system does not offer protection. Instead, it makes the problem worse by rewarding those who work on popular topics.

This problem cannot be solved by appointing non-experts to review panels – that merely creates incentives for research that’s easy to comprehend. We can impose controls on statistical analyses, and enforce requirements for reproducibility, and propose better criteria for theory development, but this is curing the symptoms, not the disease. What we need is to finally recognize that scientists are human, and that we don’t do enough to protect scientists’ ability to make objective judgements.

We will never get rid of social biases entirely, but simple changes would help. For starters, every scientist should know how being part of a group can affect their opinion. Grants should not be awarded based on popularity. Researchers who leave fields of declining promise need encouragement, not punishment because their productivity may dwindle while they retrain. And we should generally require scientists to name both advantages and shortcomings of their hypotheses.

Most importantly, we should not sweep the problem under the rug. As science denialists become louder both in America and in Europe, many of my colleagues publicly cheer for their profession. I approve. On the flipside, they want no public discussion of our problems because they are afraid of funding cuts. I disagree. The problems with the current organization of research are obvious – so obvious even Sen. Paul sees them. It is pretending the problem doesn’t exist, not acknowledging it and looking for a solution, that breeds mistrust.

Tl;dr: Academic freedom risks becoming a farce if we continue to reward researchers for working on what is popular. Denying the problem doesn’t help.

Sunday, August 26, 2018

Dear Dr B: What does the universe expand into?

    “When the universe expands, into what is it expanding? In what medium is it expanding? Is the universe like a bubble in a higher dimension something?
    [Anonymous], Indiana, USA”

This is a very good question and one, I should add, that I get frequently. It is, I believe, in no small part caused by the common illustrations of a curved universe: it’s a rubber sheet with a bowling ball on it, it’s an inflating balloon, or – in the rarer case that someone tries to illustrate negative curvature – it’s a potato chip (because really I have no idea what a saddle looks like).

But in each of these cases what the illustration actually shows is a two-dimensional surface embedded in a non-curved (“flat”) three-dimensional space. That’s good because you can draw it, but it’s bad because it raises the impression that to speak of curvature you need to put the surface into a larger space. That, however, isn’t so: Curvature is a property of the surface itself.

To get an idea of how this works, consider the simplest example of a curved surface, a ball. On the ball’s surface the angles of triangles will not add up to 180 degrees. You can calculate the curvature from measuring all the angles in all triangles that you could draw onto the ball. This is a measurement which can be done entirely on the surface itself. Or by ants crawling on the surface, if you wish, to use another common analogy.
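If you want to see this in numbers, here is a small worked example using the textbook Gauss-Bonnet relation: on a sphere of radius R, the amount by which a triangle’s angles exceed 180 degrees equals the triangle’s area divided by R^2. Take the triangle with one corner at the north pole and two corners on the equator, a quarter turn apart, so that all three angles are 90 degrees:

    import numpy as np

    # On a sphere of radius R, the angular excess of a triangle equals
    # its area divided by R^2 (Gauss-Bonnet). This triangle covers one
    # eighth of the sphere and has three right angles.
    R = 1.0
    angles = np.radians([90, 90, 90])
    excess = angles.sum() - np.pi        # pi/2, i.e. 90 degrees too much
    area_from_excess = excess * R**2
    area_exact = 4 * np.pi * R**2 / 8    # one eighth of the full sphere

    print(f"angle sum: {np.degrees(angles.sum()):.0f} degrees")   # 270
    print(f"area from angles: {area_from_excess:.4f}")            # 1.5708
    print(f"area directly:    {area_exact:.4f}")                  # 1.5708

The two numbers agree: the ants can determine the curvature from angle measurements alone, without ever leaving the surface.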

Curvature, hence, is an intrinsic property of the surface – you do not need the embedding space to define it or to measure it. Also note that curvature is a local property; it can change from one place to the next. It just so happens that a ball has constant curvature.

General relativity uses the same notion of local, intrinsic curvature, just that in this case we aren’t dealing with two dimensions of space and ants crawling on it, but with three dimensions of space, one dimension of time, and humans crawling around in it. So the math is more complicated, and all the properties of space-time are collected in something called the curvature tensor, but that is still an entirely internal construct. We can measure it by tracking the motion of particles, and it’s this curvature that creates the effect we usually refer to as gravity.

Now, what cosmologists mean when they speak of the expansion of the universe is a trend of certain measurement results that, using Einstein’s equations, can be interpreted as being due to an increasing distance between galaxies. Again, this expansion is an entirely internal notion. It is defined and measured in our universe. You do not have to embed this four dimensional space-time into anything else to quantify it. You do not need a medium and you do not need a larger space. Einstein’s theory is entirely self-contained with a four-dimensional, internally curved space-time.
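If you want to see how an entirely internal notion of expansion works, here is a toy calculation – a sketch of the standard comoving-coordinate bookkeeping, with made-up numbers. Galaxies sit at fixed comoving coordinates, physical distances are the comoving ones multiplied by a scale factor a(t), and every observer then finds that every other galaxy recedes with a velocity proportional to its distance:

    import numpy as np

    # Toy model of expansion measured from the inside: galaxies at fixed
    # comoving positions x, physical distances d = a(t) * |x - x_home|.
    rng = np.random.default_rng(0)
    x = rng.uniform(-100, 100, size=(10, 3))   # comoving positions, arbitrary units

    a, a_dot, dt = 1.0, 0.07, 1e-3             # scale factor, growth rate (made up)
    H = a_dot / a                              # Hubble rate

    d_now   = a * np.linalg.norm(x - x[0], axis=1)
    d_later = (a + a_dot * dt) * np.linalg.norm(x - x[0], axis=1)
    v = (d_later - d_now) / dt                 # measured recession velocities

    print(np.allclose(v[1:], H * d_now[1:]))   # True: v = H * d, Hubble's law

No embedding space appears anywhere in this bookkeeping, and the same law comes out no matter which galaxy we choose as home.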

While you do not have to embed space-time in a higher-dimensional flat space, you can. Indeed, it can be mathematically proved that you can embed any curved four-dimensional space-time into a ten-dimensional flat space-time. The reason physicists don’t normally do this is that the additional dimensions are superfluous, and they don’t aid the math either.

Black hole embedding diagram.
Only the surface itself has physical meaning.
The surrounding space is for visual purposes.
[Image source: Quora]  
We do, however, on occasion use what is called an “embedding diagram,” which can be useful to visualize the extrinsic curvature of certain slices of space-time. This is, for example, what gives rise to the idea that when matter collapses to a black hole, space develops a long throat with a bubble that eventually pinches off. But please keep in mind that these are merely visual aids. They have their uses as such, but one has to be very careful in interpreting them because they depend on the chosen embedding.

Now you ask: what does the universe expand into? It doesn’t expand into anything, it just expands. That the universe expands is a statement about what happens inside the universe, supported by measurements inside the universe. It’s an entirely internal notion that does not require us to speak of an outside of the universe or of a medium into which it is embedded.

Thanks for an interesting question!

Away Note

I’ll be traveling the next two weeks. First I am in Santa Fe, giving both a colloquium and a public lecture, and then I am in Oslo, giving two talks, one at the Kavli Symposium and one at the public library.

Later in September I’ll be in London at HowTheLightGetsIn. The first week of October, I’ll be in NYC and afterwards in Lexington, Kentucky. The week after that I’ll be at the international book fair in Frankfurt, and in early November I’ll be in Berlin (details to come).

I have been advised that giving talks about my book is private business, so please note that the next two weeks I am officially on vacation for the first time since 2008 (which was our two-years-late honeymoon trip).

Vacation or not, it is foreseeable that I will be offline for extended periods, so please prepare for a slow time on this blog.


Tuesday, August 21, 2018

Roger Penrose still looks for evidence of universal rebirth

Roger Penrose really should have won a Nobel Prize for something. Though I’m not sure for what. Maybe Penrose-tilings. Or Penrose diagrams. Or the Penrose process. Or the Penrose-Hawking singularity theorems. Or maybe just because there are so many things named after Penrose.

And at 87, he’s still at it. Penrose has a reputation for saying rude things about string theory, has his own interpretation of quantum mechanics, and doesn’t like inflation, the idea that the early universe underwent a rapid phase of exponential expansion. Instead, he has his own theory, called “conformal cyclic cosmology” (CCC).

According to Penrose’s conformal cyclic cosmology, the universe goes through an infinite series of “aeons,” each of which starts with a phase resembling a big bang, then forms galactic structures as usual, then cools down as stars die. In the end, the only things left are evaporating black holes and thinly dispersed radiation. Penrose then conjectures a slight change to particle physics that allows him to attach the end of one aeon to the beginning of another, and everything starts anew with the next bang.

This match between one aeon’s end and another’s beginning necessitates the introduction of a new field – the “erebon” – that makes up dark matter and decays throughout the coming aeon. We previously met the erebons because Penrose argued their decay should create noise in gravitational wave interferometers. (Not sure what happened to that.)

If Penrose’s CCC hypothesis is correct, we should also be able to see some left-over information from the previous aeon in the cosmic microwave background (CMB) around us. To that end, Penrose has previously looked for low-variance rings in the CMB, which he argued should be caused by collisions between supermassive black holes in the aeon prior to ours. That search, however, turned out to be inconclusive. In a recent paper with Daniel An and Krzysztof Meissner, he has now suggested looking for a different signal instead.

The new signal that Penrose et al are looking for consists of points in the CMB at the places where, in the previous aeon, supermassive black holes evaporated. He and his collaborators call these “Hawking Points” in memory of the late Stephen Hawking. The idea is that when you glue together the end of the previous aeon with the beginning of ours, you squeeze together the radiation emitted by those black holes, and that makes a blurry point at which the CMB temperature is slightly increased.

Penrose estimates that the total number of such Hawking Points in the entire cosmic microwave background should be about a million. The analysis in the paper, covering about 1/3 of the sky, finds tentative evidence for about 20. What happened to the rest remains somewhat unclear; presumably the signals are too weak to be observed.

They look for these features by generating fake “normal” CMBs, following standard procedure, and then trying to find Hawking Points in these simulations. They have now done about 5000 such simulations, but none of them, they claim, has features similar to the actually observed CMB. This makes their detection statistically significant, with a chance of less than 1 in 5000 that the Hawking Points they find in the CMB are merely a random fluctuation.
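The logic behind that number is a generic Monte Carlo null test; the sketch below is my own illustration of it, not the authors’ actual pipeline. You simulate many skies without the effect, compute the same test statistic you computed for the data, and count how often chance alone does at least as well:

    import numpy as np

    # Generic Monte Carlo significance test (illustration only, not the
    # authors' pipeline): if none of n_sims null simulations beats the
    # data, one can only quote an upper bound on the p-value.
    rng = np.random.default_rng(42)

    def test_statistic(sky):
        return sky.max()   # stand-in: strength of the most prominent hot spot

    n_sims   = 5000
    observed = 6.0         # hypothetical value measured on the real data
    null_stats = np.array([test_statistic(rng.normal(size=10_000))
                           for _ in range(n_sims)])

    n_exceed = (null_stats >= observed).sum()          # almost surely 0 here
    print(f"p < {(n_exceed + 1) / (n_sims + 1):.1e}")  # ~2.0e-04, i.e. < 1/5000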

In the paper, the authors also address an issue that I am guessing was raised by someone else somewhere, which is that in CCC there shouldn’t be a CMB polarization signal like the one BICEP was looking for. This signal still hasn’t been confirmed, but Penrose et al pre-emptively claim that in CCC there should also be a polarization, and that it should go along with the Hawking Points because:
“primordial magnetic fields might arise in CCC as coming [...] from galactic clusters in the previous aeon […] and such primordial magnetic fields could certainly produce B-modes […] On the basis that such a galactic cluster ought to have contained a supermassive black hole which could well have swallowed several others, we might expect concentric rings centred on that location”
Quite a collection of mights and coulds and oughts.

Like Penrose, I am not a big fan of inflation, but I don’t find conformal cyclic cosmology well-motivated either. Penrose simply postulates that the known particles have a so-far unobserved property (which makes the physics asymptotically conformally invariant) because he wants to get rid of all gravitational degrees of freedom. I don’t see anything wrong with that, but I also can’t see any good reason why it should be correct. Furthermore, I can’t figure out what happens with the initial conditions, or the past hypothesis, which leaves me feeling somewhat uneasy.

But really I’m just a cranky ex-particle-physicist with an identity crisis, so I’ll leave the last words to Penrose himself:
“Of course, the theory is “crazy”, but I strongly believe (in view of observational facts that seem to be coming to light) that we have to take it seriously.”

Monday, August 20, 2018

Guest Post: Tam Hunt questions Carlo Rovelli about the Nature of Time

Tam Hunt.
[Tam Hunt, photo on the right, is a renewable energy lawyer in Hawaii and an “affiliate” in the Department of Psychological and Brain Sciences at UC Santa Barbara. (Scare quotes his, not mine, make of this what you wish.) He has also published some papers about philosophy and likes to interview physicists. The below is an email interview he conducted with Carlo Rovelli about the Nature of Time. Carlo Rovelli is director of the quantum gravity group at Marseille University in France and author of several popular science books.]

TH: Let me start by asking why discussions about the nature of time should matter to the layperson?

CR: There is no reason it “should” matter. People have the right to be ignorant, if they wish to be. But many people prefer not to be ignorant. Should the fact that the Earth is not flat matter for normal people? Well, the fact that Earth is a sphere does not matter during most of our daily lives, but we like to know.

TH: Are there real-world impacts with respect to the nature of time that we should be concerned with?

CR: There is already technology that has been strongly impacted by the strange nature of time: the GPS in our cars and telephones, for instance.
Carlo Rovelli.

TH: What inspired you to make physics and the examination of the nature of time a major focus of your life's work?

CR: My work on quantum gravity has brought me to study time. It turns out that in order to solve the problem of quantum gravity, namely understanding the quantum aspects of gravity, we have to reconsider the nature of space and time. But I have always been curious about the elementary structure of reality, since my adolescence. So, I have probably been fascinated by the problem of quantum gravity precisely because it required rethinking the nature of space and time.

TH: Your work and your new book continue and extend the view that the apparent passage of time is largely an illusion because there is no passage of time at the fundamental level of reality. Your new book is beautifully and clearly written -- even lyrical at times -- and you argue that the world described by modern physics is a “windswept landscape almost devoid of all trace of temporality.” (p. 11). How does this view of time pass the “common sense” test since everywhere we look in our normal waking consciousness there is nothing but a passage of time from moment to moment to moment?

CR: Thanks. No, I do not argue that the passage of time is an illusion. “Illusion” may be a misleading word. It makes it seem that there is something wrong about our common-sense views on time. There is nothing wrong with it. What is wrong is to think that this view must hold for the entire universe, or that it is valid at all scales and in all situations. It is like the flat Earth: Earth is almost perfectly flat at the scale of most of our daily life, so, there is nothing wrong in considering it flat when we build a house, say. But on larger scales the Earth just happens not to be flat. So with time: as soon as we look a bit farther than our myopic eyes allow, we see that it works differently from what we thought.

This view passes the “common sense” test in the same way in which the fact that the Earth rotates passes the “common sense” view that the Earth does not move and the Sun moves down at sunset. That is, “common sense” is often wrong. What we experience in our “normal waking consciousness” is not the elementary structure of reality: it is a complex construction that depends on the physics of the world but also on the functioning of our brain. We have difficulty in disentangling one from the other.

“Time” is an example of this confusion; we mistake for an elementary fact about physics what is really a complex construct due to our brain. It is a bit like colors: we see the world in combinations of three basic colors. If we question physics as to why the colors we experience are combinations of three basic colors, we do not find any answer. The explanation is not in physics, it is in biology: we have three kinds of receptors in our eyes, sensitive to three and only three frequency windows, out of the infinite possibilities. If we think that the three-dimensional structure of colors is a feature of reality external to us, we confuse ourselves.

There is something similar with time. Our “common sense” feeling of the passage of time is more about ourselves than the physical nature of the external world. It regards both, of course, but in a complex, stratified manner. Common sense should not be taken at face value, if we want to understand the world.

TH: But is the flat Earth example, or similar examples of perspectival truth, applicable here? It seems to me that this kind of perspectival view of truth (that the Earth seems flat at the human scale but is clearly spherical when we zoom out to a larger perspective) isn’t the case with the nature of time because no matter what scale/perspective we use to examine time there is always a progression of time from now to now to now. When we look at the astronomical scale there is always a progression of time. When we look at the microscopic scale there is always a progression of time.

CR: What indicates that our intuition of time is wrong is not microscopes or telescopes. It’s clocks. Just take two identical clocks indicating the same time and move them around. When they meet again, if they are sufficiently precise, they do not indicate the same time anymore. This demolishes a piece of our intuition of time: time does not pass at the same “rate” for all the clocks. Other aspects of our common-sense intuition of time are demolished by other physics observations.

TH: In the quote from your book I mentioned above, what are the “traces” of temporality that are still left over in the windswept landscape “almost devoid of all traces of temporality,” a “world without time,” that has been created by modern physics?

CR: Change. It is important not to confuse “time” and “change.” We tend to confuse these two important notions because in our experience we can merge them: we can order all the change we experience along a universal one-dimensional oriented line that we call “time.” But change is far more general than time. We can have “change,” namely “happenings,” without any possibility of ordering sequences of these happenings along a single time variable.

There is a mistaken idea that it is impossible to describe or to conceive change unless there exists a single flowing time variable. But this is wrong. The world is change, but it is not [fundamentally] ordered along a single timeline. Often people fall into the mistake that a world without time is a world without change: a sort of frozen eternal immobility. It is in fact the opposite: a frozen eternal immobility would be a world where nothing changes and time passes. Reality is the contrary: change is ubiquitous but if we try to order change by labeling happenings with a time variable, we find that, contrary to intuition, we can do this only locally, not globally.

TH: Isn’t there a contradiction in your language when you suggest that the common-sense notion of the passage of time, at the human level, is not actually an illusion (just a part of the larger whole), but that in actuality we live in a “world without time”? That is, if time is fundamentally an illusion isn’t it still an illusion at the human scale?

CR: What I say is not “we live in a world without time.” What I say is “we live in a world without time at the fundamental level.” There is no time in the basic laws of physics. This does not imply that there is no time in our daily life. There are no cats in the fundamental equations of the world, but there are cats in my neighborhood. Nice ones. The mistake is not using the notion of time [at our human scale]. It is to assume that this notion is universal, that it is a basic structure of reality. There are no micro-cats at the Planck scale, and there is no time at the Planck scale.

TH: You argue that time emerges: “Somehow, our time must emerge around us, at least for us and at our scale.” As such, how do you reconcile the notion of emergence of time itself with the fact that the definition of emergence necessarily includes change over time? That is, how is it coherent to argue that time itself emerges over time?

CR: The notion of “emergence” does not always include change over time. For instance we say that if you look at how humans are distributed on the surface of the Earth, there are some general patterns that “emerge” by looking at a very large scale. You do not see them at the small scale, you see them looking at the large scale. Here “emergence” is related to the scale at which something is described. Many concepts we use in science emerge at some scale. They have no meaning at smaller scales.

TH: But this kind of scale emergence is a function solely of an outside conscious observer, in time, making an observation (in time) after contemplating new data. So aren’t we still confronted with the problem of explaining how time emerges in time?

CR: There is no external observer in the universe, but there are internal observers that interact with one another. In the course of this interaction, the temporal structure that they ascribe to the rest may differ. I think that you are constantly misunderstanding the argument of my book, because you are not paying attention to the main point: the book does not deny the reality of change: it simply confronts the fact that the full complexity of the time of our experience does not extend to the entire reality. Read the book again!

TH: I agree that common sense can be a faulty guide to the nature of reality but isn’t there also a risk of unmooring ourselves from empiricism when we allow largely mathematical arguments to dictate our views on the nature of reality?

CR: It is not “largely mathematical arguments” that tell us that our common-sense idea of time is wrong. It is simple brute facts. Just separate two accurate clocks and bring them back together, and this shows that our intuition about time is wrong. When the GPS global positioning system was first deployed, some people doubted the “delicate mathematical arguments” indicating that time on the GPS satellites runs faster than at sea level: the result was that the GPS did not work [when it was first set up]. A brute fact. We have direct factual evidence against the common-sense notion of time.

Empiricism does not mean taking what we see with the naked eye as the ultimate reality. If it were so, we would not believe that there are atoms, or galaxies, or the planet Uranus. Empiricism is to take seriously the delicate experience we gather with accurate instruments. The idea that we risk unmooring “ourselves from empiricism when we allow largely mathematical arguments to dictate our views on the nature of reality” is the same argument used against Galileo when he observed with the telescope, or used by Mach to argue against the real existence of atoms. Empiricism is to base our knowledge of reality on experience, and experience includes looking into a telescope, looking into an electron microscope, where we actually can see the atoms, and reading accurate clocks. That is, using instruments.
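[To put numbers to the GPS example: the following back-of-the-envelope estimate uses the standard weak-field formulas with textbook values for Earth’s gravitational parameter and the GPS orbital radius. The satellite clock runs fast because it sits higher in Earth’s gravitational potential, and slightly slow because it moves.]

    # Standard weak-field estimate of the GPS clock rates (textbook numbers).
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    c  = 2.998e8         # speed of light, m/s
    R  = 6.371e6         # Earth's radius, m
    r  = 2.6571e7        # GPS orbital radius, m

    grav = GM / c**2 * (1 / R - 1 / r)   # gravitational blueshift (fractional)
    vel  = -(GM / r) / (2 * c**2)        # velocity time dilation, v^2 = GM / r

    day = 86400
    print(f"gravity:  +{grav * day * 1e6:.1f} microseconds/day")          # +45.7
    print(f"velocity: {vel * day * 1e6:.1f} microseconds/day")            # -7.2
    print(f"net:      +{(grav + vel) * day * 1e6:.1f} microseconds/day")  # +38.5

[Uncorrected, those roughly 38 microseconds per day would translate into kilometers of position error, which is why the correction is built into the system.]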

TH: I’m using “empiricism” a little differently than you are here; I’m using the term to refer to all methods of data gathering, whether directly with our senses or indirectly with instruments (but still mediated through our senses because ultimately all data comes through our human senses). So what I’m getting at is that human direct experience, and the constant passage of time in our experience, is as much data as are data from experiments like the 1971 Hafele-Keating experiment using clocks traveling opposite directions on airplanes circling the globe. And we cannot discount either category of experience. Does this clarification of “empiricism” change your response at all?

CR: We do not discount any category of experience. There is no contradiction between the complex structure of time and our simple human experience of it. The contradiction appears only if we extrapolate our experience and assume it captures a universal aspect of reality. In our daily experience, the Earth is flat and we take it to be flat when we build a house or plan a city; there is no contradiction between this and the round Earth. The contradiction comes if we extrapolate our common-sense view of the flat Earth beyond the small region where it works well. So, we are not discounting our daily experience of time, we are just understanding that it is an approximation to a more complicated reality.

TH: There have been, since Lorentz developed his version of relativity, which Einstein adapted into his Special Theory of Relativity in 1905, interpretations of relativity that don’t render time an illusion. Isn’t the Lorentz interpretation still valid since it’s empirically equivalent to Special Relativity?

CR: I think you refer here to the so-called neo-Lorentzian interpretations of Special Relativity. There is a similar case in the history of science: after Copernicus developed his system in which all planets turn around the Sun and the Earth moves, there were objections similar to those you mention: the “delicate mathematical arguments” of Copernicus cannot weigh as much as our direct experience that the Earth does not move.

So, Tycho Brahe developed his own system, where the Earth is at the center of the universe and does not move, the Sun goes around the Earth and all the other planets rotate around the Sun. Nice, but totally useless for science and for understanding the world: a contorted and useless attempt to save the common sense-view of a motionless Earth, in the face of overwhelming opposite evidence.

If Tycho had his way, science would not have developed. The neo-Lorentzian interpretations of Special Relativity do the same. They hang on to the wrong extrapolation of a piece of common sense.

There is an even better example: the Moon and the Sun in the sky are clearly small. When, in antiquity, astronomers like Aristarchus came up with estimates of the sizes of the Moon and the Sun, it was a surprise, because it turned out that the Moon is big and the Sun even bigger than the Earth itself. This was definitely the result of “largely mathematical arguments.” Indeed, it was a delicate calculation using geometry, based on the angles under which we see these objects. Would you say that the fact that the Sun is larger than the Earth should not be believed because it is based on a “largely mathematical argument” and contradicts our direct experience?

TH: But in terms of alternative interpretations of the Lorentz transformations, shouldn’t we view these alternatives, if they’re empirically equivalent as they are, in the same light as the various different interpretations of quantum theory (Copenhagen, Many Worlds, Bohmian, etc.)? All physics theories have two elements: 1) the mathematical formalisms; 2) an interpretive structure that maps those formalisms onto the real world. In the case of alternatives to Special Relativity, some have argued that we don’t need to adopt the Einstein interpretation of the formalisms (the Lorentz transformations) in order to use those formalisms. And since Lorentz’s version of relativity and Einstein’s Special Relativity are thought to be empirically equivalent, doesn’t a choice between these interpretations come down to a question of aesthetics and other considerations like explanatory power?

CR: It is not just a question of aesthetics, because science is not static, it is dynamic. Science is not just models. It is a true continuous process of better understanding reality. A better version of a theory is fertile: it takes us ahead; a bad version leads nowhere. The Lorentzian interpretation of special relativity assumes the existence of entities that are unobservable and undetectable (a preferred frame). It is contorted, implausible, and in fact it has been very sterile.

On the other hand, realizing that the geometrical structure of spacetime is altered has led to general relativity, to the prediction of black holes, gravitational waves, the expansion of the universe. Science is not just mathematical models and numerical predictions: it is developing increasingly effective conceptual tools for making sense and better understanding the world. When Copernicus, Galileo and Newton realized that the Earth is a celestial body like the ones we see in the sky, they did not just give us a better mathematical model for more accurate predictions: they understood that man can walk on the moon. And man did.

TH: But doesn’t the “inertial frame” that is the core of Einstein’s Special Relativity (instead of Lorentz’s preferred frame) constitute worse “sins”? As Einstein himself states in his 1938 book The Evolution of Physics, inertial frames don’t actually exist because there are always interfering forces; moreover, inertial frames are defined tautologically (p. 221). Einstein’s solution, once he accepted these issues, was to create the general theory of relativity and avoid focusing on fictional inertial frames. We also have the cosmic frame formed by the Cosmic Microwave Background that is a very good candidate for a universal preferred frame now, which wasn’t known in Einstein’s time. When we add the numerous difficulties that the Einstein view of time results in (stemming from special not general relativity), the problems in explaining the human experience of time, etc., might it be the case that the sins of Lorentzian relativity are outweighed by Special Relativity’s sins?

CR: I do not know what you are talking about. Special Relativity works perfectly well, is very heavily empirically supported, there are no contradictions with it in its domain of validity, and has no internal inconsistency whatsoever. If you cannot digest it, you should simply study more physics.

TH: You argue that “the temporal structure of the world is not that of presentism,” (p. 145) but isn’t there still substantial space in the scientific and philosophical debate for “presentism,” given different possible interpretations of the relevant data?

CR: There is a tiny minority of thinkers who try to hold on to presentism, in the contemporary debate about time. I myself think that presentism is de facto dead.

TH: I’m surprised you state this degree of certainty here when in your book you acknowledge that the nature of time is one of physics’ last remaining large questions. Andrew Jaffe, in a review of your book for Nature, writes that the issues you discuss “are very much alive in modern physics.”

CR: The debate on the nature of time is very much alive, but it is not a single debate about a single issue, it is a constellation of different issues, and presentism is just a rather small side of it. Examples are the question of the source of the low initial entropy, the source of our sense of flow, the relation between causality and entropy. The non-viability of presentism is accepted by almost all relativists.

TH: Physicist Lee Smolin (another loop quantum gravity theorist, as you know) has argued for views quite different from yours, in his book Time Reborn, for example. In an interview I did with Smolin in 2013, he stated that “the experience we have of time flowing from moment into moment is not an illusion but one of the deepest clues we have as to the nature of reality.” Is Smolin part of the tiny minority you refer to?

CR: Yes, he is. Lee Smolin is a dear friend of mine. We have collaborated repeatedly in the past. He is a very creative scientist and I have much respect for his ideas. But we disagree on this. And he is definitely in the minority on this issue.

TH: I’ve also been influenced by Nobel Prize winner Ilya Prigogine’s work, and particularly his 1997 book, The End of Certainty: Time, Chaos and the New Laws of Nature, which opposes the eternalist view of time as well as reversibility in physics. Prigogine states in his book that reversible physics and the notion of time as an illusion are “impossible for me to accept.” He argues that whereas many theories of modern physics are reversible in the time variable t, this is an empirical mistake, because in reality the vast majority of physical processes are irreversible. How do you respond to Prigogine and his colleagues’ arguments that physics theories should be modified to include irreversibility?

CR: That he is wrong, if this is what he writes. There is no contradiction between the reversibility of the laws that we have and the irreversibility of the phenomena. All phenomena we see follow the laws we have, as far as we can see. The surprise is that these laws also allow other phenomena that we do not see. So, something may be missing in our understanding -- and I discuss this at length in my book -- but something missing does not mean something wrong.

I do not share the common “block universe” eternalist view of time either. What I argue in the book is that the presentist versus eternalist alternative is a fake alternative. The universe is neither evolving in a single time, nor static without change. Temporality is just more complex than either of these naïve alternatives.

TH: You argue that “the world is made of events, not things” in part II of your book. Alfred North Whitehead also made events a fundamental feature of his ontology, and I’m partial to his “process philosophy.” If events—happenings in time—are the fundamental “atoms” of spacetime (as Whitehead argues), shouldn’t this accentuate the importance of the passage of time in our ontology, rather than downgrade it as you seem to otherwise suggest?

CR: “Time” is a stratified notion. The existence of change, by itself, does not imply that there is a unique global time in the universe. Happenings reveal change, and change is ubiquitous, but nothing states that this change should be organized along the single universal uniform flow that we commonly call time. The question of the nature of time cannot be reduced to a simple “time is real”, “time is not real.” It is the effort of understanding the many different layers giving rise to the complex phenomenon that we call the passage of time.

Monday, August 13, 2018

Book Review: “Through Two Doors at Once” by Anil Ananthaswamy

Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality
By Anil Ananthaswamy
Dutton (August 7, 2018)

The first time I saw the double-slit experiment, I thought it was a trick, an elaborate construction with mirrors, cooked up by malicious physics teachers. But no, it was not, as I was soon to learn. A laser beam pointed at a plate with two parallel slits will make 5 or 7 or any odd number of dots aligned on the screen, their intensity fading the farther away they are from the middle. Light is a wave, this experiment shows: it can interfere with itself.
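
For readers who like to see the math behind the dots (standard optics, nothing specific to the book): with slit separation d and wavelength λ, the bright spots sit where the path difference between the two slits is a whole number of wavelengths,

\[ d \sin\theta_n = n\lambda, \qquad n = 0, \pm 1, \pm 2, \dots \]

The central maximum at n = 0 plus symmetric pairs on either side is why the count of visible dots comes out odd.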

But light is also a particle, and indeed the double-slit experiment can be, and has been, done with single photons. Perplexingly, these photons will create the same interference pattern; it will gradually build up from single dots. Strange as it sounds, the particles seem to interfere with themselves. The most common way to explain the pattern is that a single particle can go through two slits at once, a finding so unintuitive that physicists still debate just what the results tell us about reality.

The double-slit experiment is without doubt one of the most fascinating physics experiments ever. In his new book “Through Two Doors at Once,” Anil Ananthaswamy lays out both the history and the legacy of the experiment.

I previously read Anil’s 2013 book “The Edge of Physics” which got him a top rank on my list of favorite science writers. I like Anil’s writing because he doesn’t waste your time. He says what he has to say, doesn’t make excuses when it gets technical, and doesn’t wrap the science into layers of flowery cushions. He also has good taste in deciding what the reader needs to know.

A book about an experiment and its variants might sound like a laundry list of technical details of increasing sophistication, but Anil has picked only the best of the best. Besides the first double-slit experiment, and the first experiment with single particles, there’s also the delayed choice, the quantum eraser, weak measurement, and interference of large molecules (“Schrödinger’s cat”). The reader of course also learns how to detect a live bomb without detonating it, what Anton Zeilinger did on the Canary Islands, and what Yves Couder’s oil droplets may or may not have to do with any of that.

Along with the experiments, Anil explains the major interpretations of quantum mechanics, Copenhagen, Pilot-Wave, Many Worlds, and QBism, and what various people have to say about this. He also mentions spontaneous collapse models, and Penrose’s gravitationally induced collapse in particular.

The book contains a few equations, and Anil expects the reader to cope with sometimes rather convoluted setups of mirrors and beam splitters and detectors, but the heavier passages are balanced with stories about the people who performed the experiments or who worked on the theories. The result is a very readable account of the past and current status of quantum mechanics. It’s a book with substance and I can recommend it to anyone with an interest in the foundations of quantum mechanics.

[Disclaimer: free review copy]

Tuesday, August 07, 2018

Dear Dr B: Is it possible that there is a universe in every particle?

“Is it possible that our ‘elementary’ particles are actually large scale aggregations of a different set of something much smaller? Then, from a mathematical point of view, there could be an infinite sequence of smaller (and larger) building blocks and universes.”

                                                                      ~Peter Letts
Dear Peter,

I love the idea that there is a universe in every elementary particle! Unfortunately, it is really hard to make this hypothesis compatible with what we already know about particle physics.

Simply conjecturing that the known particles are made up of smaller particles doesn’t work well. The reason is that the masses of the constituent particles must be smaller than the mass of the composite particle, and the lighter a particle, the easier it is to produce in particle accelerators. So why then haven’t we seen these constituents already?

One way to get around this problem is to make the new particles strongly bound, so that it takes a lot of energy to break the bond even though the particles themselves are light. This is how it works for the strong nuclear force, which holds quarks together inside protons. The quarks are light but still difficult to produce because you need a high energy to tear them away from each other.

There isn’t presently any evidence that any of the known elementary particles are made up of new strongly-bound smaller particles (usually referred to as preons), and many of the models which have been proposed for this have run into conflict with data. Some are still viable, but with such strongly bound particles you cannot create something remotely resembling our universe. To get structures similar to what we observe you need an interplay of both long-distance forces (like gravity) and short-distance forces (like the strong nuclear force).

The other thing you could try is to make the constituent particles interact only very weakly with the particles we already know, so that producing them in particle colliders would be unlikely. This, however, causes several other problems, one of which is that even very weakly interacting particles carry energy and hence have a gravitational pull. If they were produced at any substantial rate at any time in the history of the universe, we should see evidence of their presence, but we don’t. Another problem is that, by Heisenberg’s uncertainty principle, particles with small masses are difficult to keep inside small regions of space, like the inside of another elementary particle.
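
To put a number on that last problem, here is a back-of-the-envelope sketch (my own illustration; the proton-sized confinement region is an assumption made for the sake of the estimate):

# Minimum momentum of a particle confined to a proton-sized region,
# estimated from Heisenberg's uncertainty principle (order of magnitude).
hbar = 1.0545718e-34   # reduced Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
MeV = 1.602176634e-13  # one MeV in Joule

dx = 0.8e-15           # confinement region: roughly a proton radius, in m
p_min = hbar / dx      # from Delta x * Delta p >~ hbar
E_min = p_min * c      # energy scale for a light, relativistic particle

print(f"E_min ~ {E_min / MeV:.0f} MeV")   # ~250 MeV, about 500 electron masses

A constituent much lighter than the particle it is supposed to sit inside would hence need an enormous kinetic energy just to stay put.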

You can circumvent the latter problem by conjecturing that the inside of a particle actually has a large volume, kinda like Mary Poppins’ magical bag, if anyone recalls this.



Sounds crazy, I know, but you can make this work in general relativity because space can be strongly curved. Such cases are known as “baby universes”: They look small from the outside but can be huge on the inside. You then need to sprinkle a little quantum gravity magic over them for stability. You also need to add some kind of strange fluid, not unlike dark energy, to make sure that even though there are lots of massive particles inside, from the outside the mass is small.

I hope you notice that this was already a lot of hand-waving, but the problems don’t stop there. If you want every elementary particle to have a universe inside, you need to explain why we know only 25 different elementary particles. Why aren’t there billions of them? An even bigger problem is that elementary particles are quantum objects: They get constantly created and destroyed and they can be in several places at once. How would structure formation ever work in such a universe? It is also generally the case in quantum theories that the more variants there are of a particle, the more of them you produce. So why don’t we produce humongous amounts of elementary particles if they’re all different inside?

The problems that I listed do not of course rule out the idea. You can try to come up with explanations for all of this so that the model does what you want and is compatible with all observations. But what you then end up with is a complicated theory that has no evidence speaking for it, designed merely because someone likes the idea. It’s not necessarily wrong. I would even say it’s interesting to speculate about (as you can tell, I have done my share of speculation). But it’s not science.

Thanks for an interesting question!

Wednesday, August 01, 2018

Trostpreis (I’ve been singing again)

I promised my daughters I would write a song in German, so here we go:

 

“Trostpreis” means “consolation prize”. This song was inspired by the kids’ announcement that we have a new rule for pachisi: Adults always lose. I think this conveys a deep truthiness about life in general.

After I complained the last time that the most frequent question I get about my music videos is “where do you find the time?” (answer: I don’t), I now keep getting the question “Do you sing yourself?” The answer to this one is, yes, I sing myself. Who else do you think would sing for me?

(soundcloud version here)

Monday, July 30, 2018

10 physics facts you should have learned in school but probably didn’t

[Image: Dreamstime.com]
1. Entropy doesn’t measure disorder, it measures likelihood.

Really, the idea that entropy measures disorder is totally unhelpful. Suppose I make a dough: I break an egg and dump it on the flour, add sugar and butter, and mix until the dough is smooth. Which state is more orderly, the broken egg on flour with butter over it, or the final dough?

I’d go for the dough. But that’s the state with higher entropy. And if you opted for the egg on flour, how about oil and water? Is the entropy higher when they’re separated, or when you shake them vigorously so that they’re mixed? In this case it’s the better-sorted, separated state that has the higher entropy.

Entropy is (the logarithm of) the number of “microstates” that give the same “macrostate”. Microstates contain all details about a system’s individual constituents. The macrostate, on the other hand, is characterized only by general information, like “separated in two layers” or “smooth on average”. There are a lot of states for the dough ingredients that will turn to dough when mixed, but very few states that will separate into egg and flour when mixed. Hence, the dough has the higher entropy. Similar story for oil and water: easy to unmix, hard to mix, hence the unmixed state has the higher entropy.
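
If you want to see the counting in action, here is a toy example (my own, with coins instead of dough): call the exact sequence of heads and tails the microstate, and the total number of heads the macrostate.

from math import comb, log

N = 100  # coins; microstate = exact sequence, macrostate = number of heads

for heads in (0, 10, 50):
    omega = comb(N, heads)   # number of microstates realizing this macrostate
    print(f"{heads:3d} heads: Omega = {omega:.3e}, S = ln(Omega) = {log(omega):.1f}")

# The 50-heads macrostate is realized by about 1e29 microstates, the
# 0-heads macrostate by exactly one. "Smooth" wins by a landslide.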

2. Quantum mechanics is not a theory for short distances only, it’s just difficult to observe its effects over long distances.

Nothing in the theory of quantum mechanics implies that it’s good on short distances only. It just so happens that large objects we observe are composed of many smaller constituents and these constituents’ thermal motion destroys the typical quantum effects. This is a process known as decoherence and it’s the reason we don’t usually see quantum behavior in daily life.

But quantum effects have been measured in experiments spanning hundreds of kilometers, and they could span longer distances if the environment were sufficiently cold and steady. They could even span entire galaxies.

3. Heavy particles do not decay to reach a state of smallest energy, but to reach a state of highest entropy.

Energy is conserved. So the idea that any system tries to minimize its energy is just nonsense. The reason heavy particles decay, if they can, is that the decay products can be arranged in many more ways: the decay is simply the more likely process. If you have one heavy particle (say, a muon), it can decay into an electron, a muon-neutrino, and an electron anti-neutrino. The opposite process is also possible, but it requires that the three decay products come together. It is hence unlikely to happen.

This isn’t always the case. If you put heavy particles in a hot enough soup, production and decay can reach equilibrium with a non-zero fraction of the heavy particles around.

4. Lines in Feynman diagrams do not depict how particles move, they are visual aids for difficult calculations.

Every once in a while I get an email from someone who notices that many Feynman diagrams have momenta assigned to the lines. And since everyone knows one cannot at the same time measure the position and momentum of a particle arbitrarily well, it doesn’t make sense to draw lines for the particles. It follows that all of particle physics is wrong!

But no, nothing is wrong with particle physics. There are several types of Feynman diagrams and the ones with the momenta are for momentum space. In this case the lines have nothing to do with paths the particles move on. They really don’t. They are merely a way to depict certain types of integrals.

There are some types of Feynman diagrams in which the lines do depict the possible paths that a particle could take, but also in this case the diagram itself doesn’t tell you what the particle actually does. For that, you have to do the calculation.
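
Schematically, and just as an illustration of the kind of integral meant here (standard quantum field theory bookkeeping, not tied to any particular diagram): in a momentum-space diagram, an internal line for a scalar particle of mass m stands for a propagator factor, and each closed loop stands for an integral over the unconstrained momentum k,

\[ \frac{i}{k^2 - m^2 + i\epsilon}, \qquad \int \frac{d^4 k}{(2\pi)^4}. \]

The “line” is bookkeeping for these factors, not a trajectory.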

5. Quantum mechanics is non-local, but you cannot use it to transfer information non-locally.

Quantum mechanics gives rise to non-local correlations that are quantifiably stronger than those of non-quantum theories. This is what Einstein referred to as “spooky action at a distance.”

Alas, quantum mechanics is also fundamentally random. So, while you have those awesome non-local correlations, you cannot use them to send messages. Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit.
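
The quantifiable version of this statement is the CHSH inequality (standard textbook material): for measurement settings a, a′ on one side and b, b′ on the other, the combination of correlators

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b') \]

satisfies |S| ≤ 2 in every local non-quantum theory, while quantum mechanics can reach |S| = 2√2. Experiments confirm the quantum value. But because each side’s individual outcomes remain perfectly random, the correlations cannot carry a message.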

6. Quantum gravity becomes relevant at high curvature, not at short distances.

If you estimate the strength of quantum gravitational effects, you find that they should become non-negligible when the curvature of space-time is comparable to the inverse of the Planck length squared. This does not mean that you would see effects at distances close to the Planck length. I believe the confusion here comes from the term “Planck length.” The Planck length has the unit of a length, but it’s not the length of anything.
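
For concreteness, the Planck length is the combination of constants

\[ \ell_{\mathrm{Pl}} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \]

and the criterion is that quantum gravity becomes relevant when the curvature approaches \(1/\ell_{\mathrm{Pl}}^2\), not that anything special happens at distances of \(\ell_{\mathrm{Pl}}\).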

Importantly, the statement that the curvature gets close to the inverse of the Planck length squared is observer-independent. It does not depend on the velocity at which you move. The trouble with thinking that quantum gravity becomes relevant at short distances is that this is incompatible with Special Relativity.

In Special Relativity, lengths can contract. For an observer who moves fast enough, the Earth is a pancake with a width below the Planck length. If short distances were the criterion, we should either have seen quantum gravitational effects long ago, or Special Relativity must be wrong. Evidence speaks against both.

7. Atoms do not expand when the universe expands. Neither does Brooklyn.

The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected. The systems that are being torn apart are those larger than the size of galaxy clusters. The clusters themselves still hold together under their own gravitational pull. So do galaxies, solar systems, planets and of course atoms. Yes, that’s right, atomic forces are much stronger than the pull of the whole universe.
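
Just how much stronger? A rough, order-of-magnitude estimate (my own numbers): compare the Coulomb force binding the electron in a hydrogen atom to the residual pull that the expansion exerts across one Bohr radius, which is of order m_e H₀² r (taking the cosmic acceleration scale to be of order H₀²).

from math import log10

k_e = 8.9875e9    # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
r = 5.29e-11      # Bohr radius, m
H0 = 2.2e-18      # Hubble constant, 1/s (roughly 68 km/s/Mpc)

F_coulomb = k_e * e**2 / r**2   # electric force binding the electron
F_cosmic = m_e * H0**2 * r      # order of the expansion's pull across the atom

print(f"Coulomb: {F_coulomb:.1e} N, expansion: {F_cosmic:.1e} N")
print(f"ratio: about 10^{log10(F_coulomb / F_cosmic):.0f}")

Nearly seventy orders of magnitude. Brooklyn is safe.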

8. Wormholes are science fiction, black holes are not.

The observational evidence for black holes is solid. Astrophysicists can infer the presence of a black hole in various ways.

The easiest way may be to deduce how much mass must be combined in some volume of space to cause the observed motion of visible objects. This alone does not tell you whether the dark object that influences the visible ones has an event horizon. But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object. You can also use black holes as extreme gravitational lenses to test that they comply with the predictions of Einstein’s theory of General Relativity. This is why physicists are excitedly looking forward to the data from the Event Horizon Telescope.

Maybe most importantly, we know that black holes are a typical end-state of certain types of stellar collapse. It is hard to avoid them, not hard to get them, in general relativity.

Wormholes, on the other hand, are space-time deformations that, as far as we know, cannot come about by any natural process. Their presence also requires negative energy, something that has never been observed, and that many physicists believe cannot exist.

9. You can fall into a black hole in finite time. It just looks like it takes forever.

Time slows down as you approach the event horizon, but this doesn’t mean that you actually stop falling before you reach the horizon. The slow-down is merely what an observer in the distance sees. You can calculate how much time the fall into a black hole takes, as measured by a clock the infalling observer herself carries. The result is finite. You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in.
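
To attach a number (a standard result for Schwarzschild black holes, quoted without derivation): once you are inside the horizon of a black hole of mass M, the longest proper time you can possibly experience before reaching the center is

\[ \tau_{\mathrm{max}} = \pi \frac{GM}{c^3} \approx 1.5 \times 10^{-5}\,\mathrm{s} \times \frac{M}{M_\odot}, \]

so for a stellar-mass black hole the final leg of the trip takes a few microseconds on your own clock, even though your friend outside never sees you arrive.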

10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it.

So I said that energy is conserved, but that is only approximately correct. It would be entirely correct for a universe in which space does not change with time. But we know that in our universe space expands, and this expansion results in a violation of energy conservation.

This violation of energy conservation, however, is so minuscule that you don’t notice it in any experiment on Earth. It takes very long times and long distances to notice. Indeed, if the effect was any larger we would have noticed much earlier that the universe expands! So don’t try to blame your electricity bill on the universe, but close the window when the AC is running.
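
How tiny? A quick estimate (my own, for illustration): the energy of a free photon falls with the expansion as 1/a, so over a time Δt much shorter than the age of the universe it loses a fraction of roughly H₀Δt.

H0 = 2.2e-18  # Hubble constant, 1/s

for label, dt in [("1 second", 1.0),
                  ("1 year", 3.156e7),
                  ("1 million years", 3.156e13)]:
    print(f"{label:>15}: dE/E ~ {H0 * dt:.1e}")

# Even over a million years a photon loses only about 7e-5 of its
# energy to the expansion; on laboratory timescales the effect is
# hopelessly unmeasurable.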

Monday, July 23, 2018

Evidence for modified gravity is now evidence against it.

Hamster. Not to scale.
Img src: Petopedia.
It’s day 12,805 in the war between modified gravity and dark matter. That’s counting the days since the publication of Mordehai Milgrom’s 1983 paper, in which he proposed to alter the laws of gravity rather than conjecturing invisible stuff.

Dark matter, to remind you, refers to hypothetical clouds of particles that hover around galaxies. We can’t see them because they neither emit nor reflect light, but we do notice their gravitational pull because it affects the motion of the matter that we can observe. Modified gravity, on the other hand, posits that normal matter is all there is, but that the laws of gravity don’t work as Einstein taught us.

Which one is right? We still don’t know, though astrophysicists have been on the case for decades.

Ruling out modified gravity is hard because it was invented to fit observed correlations, and this achievement is difficult to improve on. The idea which Milgrom came up with in 1983 was a simple model called Modified Newtonian Dynamics (MOND). It does a good job fitting the rotation curves of hundreds of observed galaxies, and in contrast to particle dark matter this model requires only one parameter as input. That parameter is an acceleration scale which determines when the gravitational pull begins to be markedly different from that predicted by Einstein’s theory of General Relativity. Based on his model, Milgrom also made some predictions which have held up so far.
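
For reference, Milgrom’s rule in its simplest form: where the Newtonian acceleration a_N is large compared to a₀ you recover Newton, and deep below a₀ the actual acceleration becomes

\[ a \approx \sqrt{a_N\, a_0}, \qquad a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m/s^2}. \]

For a circular orbit (a = v²/r, a_N = GM/r²) this gives v⁴ = G M a₀, a rotation velocity independent of radius: flat rotation curves from a single universal parameter.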

In a 2016 paper, McGaugh, Lelli, and Schombert analyzed data from a set of about 150 disk galaxies. They identified the best-fitting acceleration scale for each of them and found that the distribution is clearly peaked around a mean value:

Histogram of best-fitting acceleration scale.
Blue: Only high quality data. Via Stacy McGaugh.


McGaugh et al conclude that the data contains evidence for a universal acceleration scale, which is strong support for modified gravity.

Then, a month ago, Nature Astronomy published a paper titled “Absence of a fundamental acceleration scale in galaxies” by Rodrigues et al (arXiv version here). The authors claim to have ruled out modified gravity at more than 5σ, i.e. with high certainty.

That’s pretty amazing given that two months ago modified gravity worked just fine for galaxies. It’s even more amazing once you notice that they ruled out modified gravity using the same data from which McGaugh et al extracted the universal acceleration scale that’s evidence for modified gravity.

Here is the key figure from the Rodrigues et al paper:

Figure 1 from Rodrigues et al


Shown on the vertical axis is their best-fit parameter for the (log of the) acceleration scale. On the horizontal axis are the individual galaxies. The authors have sorted the galaxies so that the best-fit value increases monotonically from left to right, so the increase itself is not relevant information. Relevant is that, if you compare the error margins marked by the colors, the best-fit values for the galaxies on the very left side of the plot are incompatible with the best-fit values for the galaxies on the very right side of the plot.

So what the heck is going on?

A first observation is that the two studies don’t use the same data analysis. The main difference is the priors for the distribution of the parameters, which are the acceleration scale of modified gravity and the stellar mass-to-light ratio. Where McGaugh et al use Gaussian priors, Rodrigues et al use flat priors over a finite bin. The prior is the assumption you make about the likely distribution of a parameter, which you then feed into your model to find the best-fit parameters. A bad prior can give you misleading results.

Example: Suppose you have an artificially intelligent infrared camera. One night it issues an alert: Something’s going on in the bushes of your garden. The AI tells you the best fit to the observation is a 300-pound hamster, the second-best fit is a pair of humans in what seems a peculiar kind of close combat. Which option do you think is more likely?

I’ll go out on a limb and guess the second. And why is that? Because you probably know that 300-pound hamsters are somewhat of a rare occurrence, whereas pairs of humans are not. In other words, you have a different prior than your camera.

Back to the galaxies. As we’ve seen, if you start with an unmotivated prior, you can end up with a “best fit” (the 300-pound hamster) that’s unlikely for reasons your software didn’t account for. At the very least, therefore, you should check that the resulting best-fit distribution of your parameters doesn’t contradict other data. The Rodrigues et al analysis raises exactly this concern: their best-fit distribution for the stellar mass-to-light ratio doesn’t match commonly observed distributions. The McGaugh paper, on the other hand, starts with a Gaussian prior, which is a reasonable expectation, and hence their analysis makes more physical sense.
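
Here is a toy version of the issue (entirely my own construction, not the analysis of either paper): fit a single parameter to a handful of noisy data points, once with a flat prior over a wide bin and once with a Gaussian prior centered on what is independently known. The “best fit” can come out differently.

import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(1.0, 2.0, size=5)   # a few noisy measurements of one parameter

mu = np.linspace(-10.0, 10.0, 2001)   # candidate parameter values
# Gaussian log-likelihood of the data for each candidate value
loglike = -0.5 * (((data[:, None] - mu[None, :]) / 2.0) ** 2).sum(axis=0)

flat_prior = np.ones_like(mu)                         # flat over the bin [-10, 10]
gauss_prior = np.exp(-0.5 * ((mu - 1.0) / 0.5) ** 2)  # informed prior around 1.0

for name, prior in [("flat", flat_prior), ("gaussian", gauss_prior)]:
    posterior = np.exp(loglike - loglike.max()) * prior
    print(f"{name:>8} prior: best-fit value = {mu[posterior.argmax()]:.2f}")

With sparse, noisy data the prior moves the peak; with precise data both choices agree. That is the 300-pound hamster in a few lines of numpy.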

Having said this, it turns out the priors don’t make much of a difference for the results. Indeed, as far as the numbers are concerned, the results in both papers are pretty much the same. What differs is the conclusion the authors draw from them.

Let me tell you a story to illustrate what’s going on. Suppose you are Isaac Newton and an apple just banged on your head. “Eureka,” you shout, and postulate that the gravitational potential fulfills the Poisson equation.* Smart as you are, you assume that the Earth is approximately a homogeneous sphere, solve the equation, and find an inverse-square law. It contains one free parameter which you modestly call “Newton’s constant.”
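
For the record, the equation in question and its solution outside a homogeneous sphere of mass M (standard textbook results, not part of the story):

\[ \nabla^2 \Phi = 4\pi G \rho, \qquad \Phi(r) = -\frac{GM}{r}, \qquad g(r) = \frac{GM}{r^2}. \]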

You then travel around the globe, note down your altitude and measure the acceleration of a falling test-body. Back home you plot the results and extract Newton’s constant (times the mass of the Earth) from the measurements. You find that the measured values cluster around a mean. You declare that you have found evidence for a universal law of gravity.

Or have you?

A week later your good old friend Bob knocks on the door. He points out that if you look at the measurement errors (which you have of course recorded), then some of the measurement results are incompatible with each other at five sigma certainty. There, Bob declares, I have ruled out your law of gravity.

Same data, different conclusion. How does this make sense?

“Well,” Newton would say to Bob, “You have forgotten that besides the measurement uncertainty there is theoretical uncertainty. The Earth is neither homogeneous nor a sphere, so you should expect a spread in the data that exceeds the measurement uncertainty.” – “Ah,” Bob says triumphantly, “But in this case you can’t make predictions!” – “Sure I can,” Newton speaks and points to his inverse-square law, “I did.” Bob frowns, but Newton has run out of patience. “Look,” he says and shoves Bob out of the door, “Come back when you have a better theory than mine.”

Back to 2018 and modified gravity. Same difference. In the Rodrigues et al paper, the authors rule out that modified gravity’s one-parameter law fits all disk galaxies in the sample. This shouldn’t come as much of a surprise. Galaxies aren’t disks with bulges any more than the Earth is a homogeneous sphere. It’s such a crude oversimplification it’s remarkable it works at all.

Indeed, it would be an interesting exercise to quantify how well modified gravity does in this set of galaxies compared to particle dark matter with the same number of parameters. Chances are, you’d find that particle dark matter too is ruled out at 5 σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you galaxy dynamics involves loads of complicated astrophysics and it’s rather unrealistic that one parameter will account for the variety in any sample.

Without the comparison to particle dark matter, therefore, the only thing I learn from the Rodrigues et al paper is that a non-universal acceleration scale fits the data better than a universal one. And that I could have told you without even looking at the data.

Summary: I’m not impressed.

It’s day 12,805 in the war between modified gravity and dark matter and dark matter enthusiasts still haven’t found the battle field.


*Dude, I know that Newton isn’t Archimedes. I’m telling a story not giving a history lesson.