Thursday, May 31, 2018

New results confirm old anomaly in neutrino data

The collaboration of a neutrino experiment called MiniBooNE just published their new results:
    Observation of a Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment
    MiniBooNE Collaboration
    arXiv:1805.12028 [hep-ex]
It’s a rather unassuming paper, but it deserves a signal boost because for once we have an anomaly that did not vanish with further examination. Indeed, it actually increased in significance, now standing at a whopping 6.1σ.

MiniBooNE was designed to check the results of an earlier experiment called LSND, the Liquid Scintillator Neutrino Detector experiment that ran in the 1990s. The LSND results were famously incompatible with the results of the bulk of other neutrino experiments. So incompatible, indeed, that the LSND data are usually excluded from global fits – they just don’t fit.

All the other experimental data could be neatly fitted by assuming that the three known types of neutrinos “oscillate,” which means they change back and forth into each other as they travel. Problem is, in a three-neutrino oscillation model there are at most nine parameters (the masses, the mixing angles, and a few phases, the latter depending on the type of neutrino). These parameters are not sufficient to also fit the LSND data.
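To make the oscillation idea concrete, here is a minimal sketch of the textbook two-flavor appearance probability, P = sin²(2θ)·sin²(1.27 Δm² L/E); the numbers in the usage example are purely illustrative, not a fit to LSND or MiniBooNE data:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor appearance probability, e.g. P(nu_mu -> nu_e).

    Standard approximation: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km, and E in GeV.
    """
    return (math.sin(2 * theta) ** 2
            * math.sin(1.27 * delta_m2_ev2 * length_km / energy_gev) ** 2)

# Illustrative values only (not a fit to any experiment's data):
p = oscillation_probability(theta=0.01, delta_m2_ev2=1.0,
                            length_km=0.5, energy_gev=0.8)
```

Fitting experiments then amounts to finding values of θ and Δm² that reproduce the observed appearance probabilities at each experiment’s L/E; it is LSND’s preferred parameter region that clashes with the rest of the global fit.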

See figure below for the LSND trouble: The turquoise/yellow areas in the figure do not overlap with the red/blue ones.

[Figure: Hitoshi Murayama]

The new data from MiniBooNE now confirm that this tension is real. It’s not a quirk of the LSND experiment. These data can (to my best knowledge) not be fitted with the standard framework of three types of neutrinos (one for the electron, one for the muon, and one for the tau). Fitting these data requires either new particles (sterile neutrinos) or some kind of symmetry violation, typically CPT violation or Lorentz-invariance violation (or both).

This is the key figure from the new paper. See how the new results agree with the earlier LSND results. And note the pale grey line indicating that this area is “ruled out” by other experiments (assuming the standard explanation).

[Figure 5 from arXiv:1805.12028]

So this is super-exciting news: An old anomaly was re-examined and confirmed! Now it’s time for theoretical physicists to come up with an explanation.

Monday, May 28, 2018

What do physicists mean when they say the laws of nature are beautiful?

Simplicity in photographic art.
“Monday Blues Chat”
By Erin Photography
In my upcoming book “Lost in Math: How Beauty Leads Physics Astray,” I explain what makes a theory beautiful in the eyes of a physicist and how beauty matters for their research. For this, I interviewed about a dozen theoretical physicists (full list here) and spoke to many more. I also read every book I could find on the topic, from Chandrasekhar’s “Truth and Beauty” to McAllister’s “Beauty and Revolution in Science” and Orrell’s “Truth or Beauty”.

Turns out theoretical physicists largely agree on what they mean by beauty, and it has the following three aspects:

Simplicity:

A beautiful theory is simple, and it is simple if it can be derived from few assumptions. Currently common ways to increase simplicity in the foundations of physics are unifying different concepts or adding symmetries. To make a theory simpler, you can also remove axioms; this will eventually result in one or the other version of a multiverse.

Please note that the simplicity I am referring to here is absolute simplicity and has nothing to do with Occam’s razor, which merely tells you that of two theories that achieve the same result, you should pick the simpler one.

Naturalness:

A beautiful theory is also natural, meaning it does not contain numbers without units that are either much larger or much smaller than 1. In physics-speak you’d say “dimensionless parameters are of order 1.” In high energy particle physics in particular, theorists use a relaxed version of naturalness called “technical naturalness” which says that small numbers are permitted if there is an explanation for their smallness. Symmetries, for example, can serve as such an explanation.
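To make the “of order 1” criterion concrete, here is a toy sketch; the two-orders-of-magnitude cutoff is my own arbitrary choice for illustration, not a standard definition:

```python
import math

def is_natural(x, tolerance_decades=2.0):
    """Crude check whether a dimensionless number is 'of order 1':
    within `tolerance_decades` orders of magnitude of 1.
    (Toy illustration; the cutoff is an arbitrary choice.)"""
    return abs(math.log10(abs(x))) <= tolerance_decades

# Higgs mass over Planck mass, roughly 125 GeV / 1.2e19 GeV ~ 1e-17:
ratio = 125.0 / 1.22e19
natural_example = is_natural(0.5)    # order 1 -> counts as natural
unnatural_example = is_natural(ratio)  # ~1e-17 -> fails the check
```

With this tolerance, the Higgs-to-Planck mass ratio fails the check by a wide margin, which is essentially the hierarchy problem phrased in this language.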

Note that in contrast to simplicity, naturalness is an assumption about the type of assumptions, not about the number of assumptions.

Elegance:

Elegance is the fuzziest aspect of beauty. It is often described as an element of surprise, the “aha-effect,” or the discovery of unexpected connections. One specific aspect of elegance is a theory’s resistance to change, often referred to as “rigidity” or (misleadingly, I think) as the ability of a theory to “explain itself.”

In no way do I mean to propose this as a definition of beauty; it is merely a summary of what physicists mean when they say a theory is beautiful. General relativity, string theory, grand unification, and supersymmetry score high on all three aspects of beauty. The standard model, modified gravity, or asymptotically safe gravity, not so much.

But while physicists largely agree on what they mean by beauty, in some cases they disagree on whether a theory fulfills the requirements. This is the case most prominently for quantum mechanics and the multiverse.

For quantum mechanics, the disagreement originates in the measurement axiom. On the one hand it’s a simple axiom. On the other hand, it covers up a mess, namely the problem of defining just what a measurement and a measurement apparatus are.

For the multiverse, the disagreement is over whether throwing out an assumption counts as a simplification if you have to add it again later because otherwise you cannot describe our observations.

If you want to know more about how arguments from beauty are used and abused in the foundations of physics, my book will be published on June 12th and then it’s all yours to peruse!

Wednesday, May 23, 2018

This is how I pray

I know, you have all missed my awesome chord progressions, but do not despair, I have relief for your bored ears.


I am finally reasonably happy with the vocal recordings, if not with the processing, or the singing, or the lyrics. But I’m working on that. The video was painful, both figuratively and literally; I am clearly too old to crouch on the floor for extended periods and had to hobble up stairs for days after the filming.

The number one comment I get on my music videos is, sadly enough, not "wow, you are so amazingly gifted" but "where do you find the time?". To which the answer is "I don't". I did the filming for this video on Christmas, the audio on Easter, and finished on Pentecost. If it wasn't for those Christian holidays, I'd never get anything done. So, thank God, now you know how I pray.

Friday, May 18, 2018

The Overproduction Crisis in Physics and Why You Should Care About It

[Image: Everett Collection]
In the years of World War II, the National Socialists murdered about six million Jews. The events did not have a single cause, but most historians agree that a major reason was social reinforcement, more widely known as “group think.”

The Germans who went along with the Nazis’ organized mass murder were not psychopaths. By all accounts they were “normal people.” They actively or passively supported genocide because at the time it was a socially acceptable position; everyone else did it too. And they did not personally feel responsible for the evils of the system. It eased their mind that some scientists claimed it was only rational to prevent supposedly genetically inferior humans from reproducing. And they hesitated to voice disagreement because those who opposed the Nazis risked retaliation.

It’s comforting to think that was Then and There, not Now and Here. But group-think isn’t a story of the past; it still happens and it still has devastating consequences. Take for example the 2008 mortgage crisis.

Again, many factors played together in the crisis’ buildup. But oiling the machinery were bankers who approved loans to applicants who likely couldn’t pay the money back. It wasn’t that the bankers didn’t know the risk; they thought it was acceptable because everyone else was doing it too. And anyone who didn’t play along would have put themselves at a disadvantage, by missing out on profits or by getting fired.

A vivid account comes from an anonymous Wall Street banker quoted in a 2008 NPR broadcast:
“We are telling you to lie to us. We’re hoping you don't lie. Tell us what you make, tell us what you have in the bank, but we won’t verify. We’re setting you up to lie. Something about that feels very wrong. It felt wrong way back then and I wish we had never done it. Unfortunately, what happened ... we did it because everyone else was doing it.”
When the mortgage bubble burst, banks defaulted by the hundreds. In the following years, millions of people would lose their jobs in what many economists consider the worst financial crisis since the Great Depression of the 1930s.



It’s not just “them,” it’s “us” too. Science isn’t immune to group-think. On the contrary: Scientific communities are an ideal breeding ground for social reinforcement.

Research is currently organized in a way that amplifies, rather than alleviates, peer pressure: Measuring scientific success by the number of citations encourages scientists to work on what their colleagues approve of. Since the same colleagues are the ones who judge what is and isn’t sound science, there is safety in numbers. And everyone who does not play along risks losing funding.

As a result, scientific communities have become echo chambers of like-minded people who, maybe not deliberately but effectively, punish dissidents. And scientists don’t feel responsible for the evils of the system. Why would they? They just do what everyone else is also doing.

The reproducibility crisis in psychology and in biomedicine is one of the consequences. In both fields, an overreliance on data with low statistical power and improper methods of data analysis (“p-value hacking”) have become common. That these statistical methods are unreliable has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016, “statisticians and other scientists have been writing on the topic for decades.”

So why then did researchers in psychology continue using flawed methods? Because everyone else did it. It was what they were taught; it was generally accepted procedure. And psychologists who’d have insisted on stricter methods of analysis would have put themselves at a disadvantage: They’d have gotten fewer results with more effort. Of course they didn’t go the extra mile.
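Why lax methods reliably produce “findings” can be illustrated with a toy simulation (my own sketch, not taken from any of the studies mentioned): if a study on pure noise runs many tests and reports any one that crosses p < 0.05, most studies will report something “significant.”

```python
import random

def false_positive_rate(num_experiments=2000, tests_per_study=20,
                        samples=30, seed=1):
    """Simulate p-hacking: per study, run several tests on pure noise and
    count the study as 'positive' if ANY test crosses the 5% threshold.

    With 20 independent tests, roughly 1 - 0.95**20 ~ 64% of null studies
    report a 'significant' finding, even though no effect exists.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_experiments):
        found = False
        for _ in range(tests_per_study):
            # Two groups drawn from the SAME distribution: no real effect.
            a = [rng.gauss(0, 1) for _ in range(samples)]
            b = [rng.gauss(0, 1) for _ in range(samples)]
            diff = abs(sum(a) / samples - sum(b) / samples)
            se = (2 / samples) ** 0.5  # std. error of difference of means
            if diff / se > 1.96:  # nominal two-sided alpha = 0.05
                found = True
                break
        hits += found
    return hits / num_experiments
```

Each individual test keeps its nominal 5% error rate; it is the freedom to run many tests and report the best one that inflates the study-level rate.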

The same problem underlies an entirely different popular-but-flawed scientific procedure: “mouse models,” i.e., using mice to test the efficacy of drugs and medical procedures.

How often have you read that Alzheimer’s or cancer has been cured in mice? Right – it’s been done hundreds of times. But humans aren’t mice, and it’s no secret that mouse results – while not uninteresting – often don’t transfer to humans. Scientists only partly understand why, but that mouse models are of limited use for developing treatments for humans isn’t controversial.

So why do researchers continue to use them anyway? Because it’s easy and cheap and everyone else does it too. As Richard Harris put it in his book Rigor Mortis: “One reason everybody uses mice: everybody else uses mice.”

It happens here in the foundations of physics too.

In my community, it has become common to justify the publication of new theories by claiming the theories are falsifiable. But falsifiability is a weak criterion for a scientific hypothesis. It’s necessary, but certainly not sufficient, for many hypotheses are falsifiable yet almost certainly wrong. Example: It will rain peas tomorrow. Totally falsifiable. Also totally nonsense.

Of course this isn’t news. Philosophers have gone on about this for at least half a century. So why do physicists do it? Because it’s easy and because all their colleagues do it. And since they all do it, theories produced by such methods will usually get published, which officially marks them as “good science”.

In the foundations of physics, the appeal to falsifiability isn’t the only flawed method that everyone uses because everyone else does. There are also those theories which are plainly unfalsifiable. Another example is arguments from beauty.

In hindsight it seems perplexing, to say the least, but physicists published tens of thousands of papers with predictions for new particles at the Large Hadron Collider because they believed that the underlying theory must be “natural”. None of those particles were found.

Similar arguments underlie the belief that the fundamental forces should be unified because that’s prettier (no evidence for unification has been found) or that we should be able to measure particles that make up dark matter (we didn’t). Maybe most tellingly, physicists in these communities refuse to consider the possibility that their opinions are affected by the opinions of their peers.

One way to address the current crises in scientific communities is to impose tighter controls on scientific standards. That’s what is happening in psychology right now, and I hope it’ll also happen in the foundations of physics soon. But this is curing the symptoms, not the disease. The disease is a lack of awareness of how we are affected by the opinions of those around us.

The problem will reappear until everyone understands the circumstances that benefit group-think and learns to recognize the warning signs: People excusing what they do by saying everyone else does it too. People refusing to take responsibility for what they think are “evils of the system.” People unwilling to even consider that they are influenced by the opinions of others. We have all the warning signs in science – have had them for decades.

Accusing scientists of group-think is standard practice of science deniers. The tragedy is, there’s truth in what they say. And it’s no secret: The problem is easy to see for everyone who has the guts to look. Sweeping the problem under the rug will only further erode trust in science.



Read all about the overproduction crisis in the foundations of physics and what you – yes you! – can do to help in my book “Lost in Math,” out June 12, 2018.

Tuesday, May 15, 2018

Measuring Scientific Broadness

I have a new paper out today and it wouldn’t have happened without this blog.

A year ago, I wrote a blogpost declaring that “academia is fucked up,” to quote myself because my words are the best words. In that blogpost, I made some suggestions for how to improve the situation, for example by offering ways to quantify scientific activity other than counting papers and citations.

But ranting on a blog is like screaming in the desert: when the dust settles you’re still in the desert. At least that’s been my experience.

Not so this time! In the week following the blogpost, three guys wrote to me expressing their interest in working on what I suggested. One of them I never heard of again. The other two didn’t get along and one of them eventually dropped out. My hopes sank.

But then I got a small grant from the Foundational Questions Institute and was able to replace the drop-out with someone else. So now we were three again. And I could actually pay the other two, which probably helped keep them interested.

One of the guys is Tom Price. I’ve never met him, but – believe it or not – we now have a paper on the arXiv.
    Measuring Scientific Broadness
    Tom Price, Sabine Hossenfelder

    Who has not read letters of recommendations that comment on a student's `broadness' and wondered what to make of it? We here propose a way to quantify scientific broadness by a semantic analysis of researchers' publications. We apply our methods to papers on the open-access server arXiv.org and report our findings.

    arXiv:1805.04647 [physics.soc-ph]

In the paper we propose a way to measure how broad or specialized a scientist’s research interests are. We quantify this by analyzing the words they use in the titles and abstracts of their arXiv papers.

Tom tried several ways to quantify the distribution of keywords, and so our paper contains four different measures for broadness. We eventually picked one for the main text, but checked that the other ones give largely comparable results. In the paper, we report the results of various analyses of the arXiv data. For example, here is the distribution of broadness over authors:
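For illustration, one simple way to quantify such a keyword distribution is the Shannon entropy of an author’s keyword counts: a flat distribution (many topics) scores high, a peaked one (one topic) scores low. I should stress this is my own hypothetical sketch, not necessarily any of the four measures in the paper:

```python
import math
from collections import Counter

def broadness_entropy(keyword_lists):
    """Toy broadness score: Shannon entropy (in bits) of the keyword
    distribution pooled over an author's papers. This is a generic way
    to quantify spread over topics, NOT the measure from arXiv:1805.04647.
    """
    counts = Counter(word for paper in keyword_lists for word in paper)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# One author repeats the same topic; the other spreads over four keywords:
narrow = broadness_entropy([["lattice", "qcd"], ["lattice", "qcd"]])
broad = broadness_entropy([["lattice", "qcd"], ["chaos", "networks"]])
```

In this sketch the second author spreads over four distinct keywords and scores higher than the first, who repeats the same two.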

It’s a near-perfect normal distribution!

I should add that you get this distribution only after removing collaborations from the sample, which we have done by excluding all authors with the word “collaboration” in the name and all papers with more than 30 authors. If you don’t do this, the distribution has a peak at small broadness.
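The collaboration filter can be sketched like this; the dict layout is my own stand-in for the actual arXiv metadata, chosen just for illustration:

```python
def filter_collaborations(papers, max_authors=30):
    """Drop papers that look like collaboration output, following the
    criteria described above: any author name containing 'collaboration',
    or more than `max_authors` authors. (The dict layout is a hypothetical
    stand-in for the real metadata format.)"""
    kept = []
    for paper in papers:
        authors = paper["authors"]
        if len(authors) > max_authors:
            continue
        if any("collaboration" in name.lower() for name in authors):
            continue
        kept.append(paper)
    return kept
```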

We also looked at the average broadness of authors in different arXiv categories, where we associate an author with a category if it’s the primary category for at least 60% of their papers. By that criterion, we find physics.plasm-ph has the highest broadness and astro-ph.GA the lowest one. However, we have only ranked categories with more than 100 associated authors to get sensible statistics. In this ranking, therefore, some categories don’t even appear.
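The 60% association rule is easy to state in code (again a sketch of my own, not the paper’s implementation):

```python
from collections import Counter

def author_category(primary_categories, threshold=0.6):
    """Associate an author with a category if it is the primary category
    of at least `threshold` (here 60%) of their papers, else None."""
    if not primary_categories:
        return None
    category, count = Counter(primary_categories).most_common(1)[0]
    return category if count / len(primary_categories) >= threshold else None
```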

That’s why we then also looked at the average broadness associated with papers (rather than authors) that have a certain category as the primary one. This brings physics.pop-ph to the top, while astro-ph.GA stays on the bottom.

That astrophysics is highly specialized also shows up in our list of keywords, where phrases like “star-forming” or “stellar mass” are among those of the lowest broadness. On the other hand, the keywords “agents”, “chaos,” “network”, or “fractal” have high values of broadness. You find the broadest and the most specialized words in the table below. See the paper for the full list.


We also compared the average broadness associated with authors who have listed affiliations in certain countries. The top scores of broadness go to Israel, Austria, and China. The correlation between the h-index and broadness is weak. Neither did we find a correlation between broadness and gender (where we associated genders by common first names). Broadness also isn’t correlated with the Nature Index, which is a measure of a country’s research productivity.

A correlation we did find though is that researchers whose careers suddenly end, in the sense that their publishing activity discontinues, are more likely to have a low value of broadness. Note that this doesn’t necessarily say much about the benefits of some research styles over others. It might be, for example, that research areas with high competition and few positions are more likely to also be specialized.

Let me be clear that it isn’t our intention to declare that the higher the broadness, the better. Indeed, there might well be cases when broadness is distinctly not what you want. Depending on which position you want to fill or which research program you have announced, you may want candidates who are specialized in a particular topic. Offering a measure for broadness, so we hope, is a small step toward honoring the large variety of ways to excel in science.

I want to add that Tom did the bulk of the work on this paper, while my contributions were rather minor. We have another paper coming up in the next weeks (or so I hope), and we are also working on a website where everyone will be able to determine their own broadness value. So stay tuned!

Friday, May 11, 2018

Dear Dr B: Should I study string theory?

Strings. [image: freeimages.com]
“Greetings Dr. Hossenfelder!

I am a Princeton physics major who regularly reads your wonderful blog.

I recently came across a curious passage in Brian Greene’s introduction to a reprint edition of Einstein's Meaning of Relativity which claims that:
“Superstring theory successfully merges general relativity and quantum mechanics [...] Moreover, not only does superstring theory merge general relativity with quantum mechanics, but it also has the capacity to embrace — on an equal footing — the electromagnetic force, the weak force, and the strong force. Within superstring theory, each of these forces is simply associated with a different vibrational pattern of a string. And so, like a guitar chord composed of four different notes, the four forces of nature are united within the music of superstring theory. What’s more, the same goes for all of matter as well. The electron, the quarks, the neutrinos, and all other particles are also described in superstring theory as strings undergoing different vibrational patterns. Thus, all matter and all forces are brought together under the same rubric of vibrating strings — and that’s about as unified as a unified theory could be.”
Is all this true? Part of the reason I am asking is that I am thinking about pursuing String Theory, but it has been somewhat difficult wrapping my head around its current status. Does string theory accomplish all of the above?

Thank you!

An Anonymous Princeton Physics Major”

Dear Anonymous,

Yes, it is true that superstring theory merges general relativity and quantum mechanics. Is it successful? Depends on what you mean by success.

Greene states very carefully that superstring theory “has the capacity to embrace” gravity as well as the other known fundamental forces (electromagnetic, weak, and strong). What he means is that most string theorists currently believe there exists a specific model for superstring theory which gives rise to these four forces. The vague phrase “has the capacity” is an expression of this shared belief; it glosses over the fact that no one has been able to find a model that actually does what Greene says.

Superstring theory also comes with many side-effects which all too often go unnoticed. To begin with, the “super” isn’t there to emphasize the theory is awesome, but to indicate it’s supersymmetric. Supersymmetry, to remind you, is a symmetry that postulates all particles of the standard model have a partner particle. These partner particles were not found. This doesn’t rule out supersymmetry because the particles might only be produced at energies higher than what we have tested. But it does mean we have no evidence that supersymmetry is realized in nature.

Worse, if you make the standard model supersymmetric, the resulting theory conflicts with experiment. The reason is that doing so enables flavor-changing neutral currents which have not been seen. This became clear in the mid-1990s, long enough ago that it’s now one of the “well known problems” that nobody ever mentions. To save both supersymmetry and superstrings, theorists postulated an additional symmetry, called “R-parity,” that simply forbids the worrisome processes.

Another side-effect of superstrings is that they require additional dimensions of space, nine in total. Since we haven’t seen more than the usual three, the other six have to be rolled up or “compactified” as the terminology has it. There are many ways to do this compactification and that’s what eventually gives rise to the “landscape” of string theory: The vast number of different theories that supposedly all exist somewhere in the multiverse.

The problems don’t stop there. Superstring theory does contain gravity, yes, but not the normal type of gravity. It is gravity plus a large number of additional fields, the so-called moduli fields. These fields are potentially observable, but we haven’t seen them. Hence, if you want to continue believing in superstrings you have to prevent these fields from making trouble. There are ways to do that, and that adds a further layer of complexity.

Then there’s the issue with the cosmological constant. Superstring theory works best in a space-time with a cosmological constant that is negative, the so-called “Anti de Sitter spaces.” Unfortunately, we don’t live in such a space. For all we presently know the cosmological constant in our universe is positive. When astrophysicists measured the cosmological constant and found it to be positive, string theorists cooked up another fix for their theory to get the right sign. Even among string-theorists this fix isn’t popular, and in any case it’s yet another ad-hoc construction that must be added to make the theory work.

Finally, there is the question of how much the requirement of mathematical consistency can possibly tell you about the real world to begin with. Even if superstring theory is a way to unify general relativity and quantum mechanics, it’s not the only way, and without experimental tests we won’t know which one is the right way. Currently the best-developed competing approach is asymptotically safe gravity, which requires neither supersymmetry nor extra dimensions.

Leaving aside the question whether superstring theory is the right way to combine the known fundamental forces, the approach may have other uses. The theory of strings has many mathematical ties with the quantum field theories of the standard model, and some think that the gauge-gravity correspondence may have applications in condensed matter physics. However, the dosage of string theory in these applications is homeopathic at best.

This is a quick overview. If you want more details, a good starting point is Joseph Conlon’s book “Why String Theory?” On a more general level, I hope you’ll excuse me if I mention that the question of what makes a theory promising is the running theme of my upcoming book “Lost in Math.” In the book I go through the pros and cons of string theory and supersymmetry and the multiverse, and also discuss the relevance of arguments from mathematical consistency.

Thanks for an interesting question!

With best wishes for your future research,

B.

Thursday, May 03, 2018

Book Review: “The Only Woman In the Room” by Eileen Pollack

The Only Woman in the Room: Why Science Is Still a Boys Club
By Eileen Pollack
Beacon Press (15 Sep 2015)

Eileen Pollack set out to become an astrophysicist but switched to a career in writing after completing her undergraduate degree. In “The Only Woman In The Room” she explores the difficulties she faced that eventually led her to abandon science as a profession.

Pollack’s book is mostly a memoir, and an oddly one-sided one at that. At least for the purpose of the book, she looks at everything from the perspective of gender stereotypes. It’s about the toys she didn’t get, and the teachers who didn’t let her skip a year, and the boys who didn’t like nerdy girls, and the professors who didn’t encourage her, and so on.

I had some difficulties making sense of the book. For one, Pollack is 20 years older than I am and grew up in a different country. In the book she assumes the reader understands the context, but frankly I have no idea whatsoever what American school education looked like in the 1960s. I also missed most of the geographic, religious, and cultural references but wasn’t interested enough to look up every instance.

Leaving aside that Pollack clearly writes for people like her to begin with, the rest of the story didn’t make much sense to me either. The reader learns in a few sentences that Pollack in her youth develops an eating disorder. She also seems to have an anxiety disorder, is told (probably erroneously) that she has too high testosterone levels, and later regularly sees a therapist. But these asides never reappear in her narrative. Since it’s exceedingly unlikely her problems simply vanished, there must have been a lot going on that the reader is not told about.

The story of the book is that Pollack sets out to track down her former teachers, professors, and classmates, and hear what they’ve been up to and what, if anything, changed about the situation for women in physics. Things did change, it turns out: The fraction of female students and faculty has markedly increased and many men have come to see the good in that. Pollack concludes with a somewhat scattered list of suggestions for further improvement.

Pollack does mention some studies on gender disparities, but her sample seems skewed to confirm her beliefs and she does not discuss the literature in any depth. She entirely avoids the more controversial questions, like whether some gender differences in performance are innate, whether it’s reasonable to assume women and men should be equally represented in all professions, or whether affirmative action is compatible with constitutional rights.

Despite this, the book has its uses. It sheds light on the existing problems, and (as Google will tell you) in reaction many women have spoken about and compared their experiences. For me, the value of the book has been to let me see my discipline through somebody else’s eyes.

I found it surprising just how different Pollack’s story is from my own, though my interests seem to be very close to hers. I’ve been told from as early as I can recall that I’m not social enough, that I don’t play with the other kids enough, that I’m too quiet, don’t integrate well, am bad at group work, and “will never make it at the university” unless I “learn to work with others.” I am also the kind of person who doesn’t give a shit what other people think I should do, so I went and got a PhD in physics.

The problem that Pollack blames most for her dropping out – that professors didn’t encourage her to pull through courses she had a hard time with – is a problem I never encountered because I didn’t get bad marks to begin with. I didn’t have friends among the students either, but I was just glad they left me alone. And where I am from, university is tuition-free, so while my money was short, financing my education was never a headache for my family.

Like Pollack, I have a long string of DSM classifiers attached to me and spent several years in therapy, but it never occurred to me to blame my profs for that. When doctors checked my testosterone levels (which has happened several times over the decades) I didn’t conclude I must be a man, but that it’s probably a standard check for certain health problems. And since now you wonder, my hormone levels are perfectly normal. Or at least that’s what I thought until I read that Pollack had a crush on pretty much every one of her profs. Maybe I’m abnormal in that I never fancied my profs. Or that I never worried I might not find a guy if I study physics.

Nevertheless, Pollack is right of course that we have a long way to go. Gender disparities which reinforce stereotypes are still omnipresent, and now that I am mother of two daughters I don’t have to look far to see the problems. The kids’ teachers are all women except for the math teacher. The parents who watch their toddlers at the playground are almost exclusively mothers. And I get constantly told I am supposedly aggressive, sometimes for doing nothing more than looking the way I normally look, that is, mildly dismayed at the bullshit men throw at me. But I’m not quite old enough to write a memoir, so let me leave it at this.

I’d recommend this book for anyone who wants to understand why some women perceive science and engineering as extremely unwelcoming workplaces.