Monday, December 25, 2017
Merry Christmas!
We wish you all happy holidays! Whether or not you celebrate Christmas, we hope you have a peaceful time to relax and, if necessary, recover.
I want to use the opportunity to thank all of you for reading along, for giving me feedback, and for simply being interested in science in a time when that doesn’t seem to be normal anymore. A special “Thank you” to those who have sent donations. It is reassuring to know that you value this blog. It encourages me to keep my writing available here for free.
I’ll be tied up with family business during the coming week – besides the usual holiday festivities, the twins’ 7th birthday is coming up – so blogging will be sparse for some while.
Monday, December 18, 2017
Get your protons right!
The quarks and gluons interact through the strong nuclear force. Unlike electromagnetism, which has only one type of charge, the strong nuclear force has three. The charges are called “colors” and often assigned the values red, blue, and green, but this is just a way to give names to mathematical properties. These colors have nothing to do with the colors that we can see.
Colors are a handy terminology because the charges blue, red, and green can combine to neutral (“white”) and so can a color and its anti-color (blue and anti-blue, green and anti-green, and so on). The strong nuclear force is mediated by gluons which each carry two types of colors. That the gluons themselves carry a charge means that, unlike the photon, they also interact among each other.
The strong nuclear force has the peculiar property that it gets stronger the larger the distance between two quarks, while it gets weaker on short distances. A handy metaphor for this is a rubber string – the more you stretch it, the stronger the restoring force. Indeed, this string-like behavior of the strong nuclear force is where string theory originally came from.
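To put a rough formula behind the rubber-string picture, here is the standard Cornell-type parametrization of the potential between a static quark and an anti-quark (a textbook form fitted on the lattice and to quarkonium spectra, not anything specific to this post): a Coulomb-like piece that dominates at short distances, plus a linearly rising piece whose coefficient, the string tension, is the energy cost per unit length of flux-tube,

$$V(r) \;\approx\; -\frac{4}{3}\,\frac{\alpha_s}{r} \;+\; \sigma\, r\,, \qquad \sigma \approx 1\ \text{GeV/fm}\,.$$

The linear term is what the “energy proportional to length” statement below refers to.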
The strings of the strong nuclear force are gluon flux-tubes: connections between color-charged particles along which the gluons preferentially travel. The energy of the flux-tubes is proportional to their length. If you have a particle (called a “meson”) made of a quark and an anti-quark, then the flux tube is focused on a straight line connecting the quarks. But what if you have three quarks, like inside a neutron or a proton?
According to the BBC, gluon flux-tubes (often depicted as springs, presumably because rubber is hard to illustrate) form a triangle.
This is almost identical to the illustration you find on Wikipedia:
Here is the proton on Science News:
Here is Alan Stonebreaker for the APS:
This is the proton according to Carole Kliger from the University of Maryland:
And then there is Christine Davies from the University of Glasgow who pictured the proton for Science Daily as an artwork resembling a late Kandinsky:
So which one is right?
At first sight it seems plausible that the gluons form a triangle, because that requires the least stretching of strings that each connect two quarks. However, this triangular – “Δ-shaped” – configuration cannot connect three quarks and still maintain gauge-invariance. That means it violates a key principle of the strong force, which is bad and probably means this configuration is not physically possible. The alternative, a “Y-shaped” configuration in which the three flux-tubes meet at a junction in the middle, does not suffer from that problem.
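For some intuition about the energetics, here is a little toy comparison (purely geometric and entirely mine, assuming the tube energy simply scales with the total tube length at a fixed string tension, which ignores the junction and all the actual quantum field theory): it compares the total tube length of the Δ topology, which is the triangle’s perimeter, with that of the Y topology, whose junction sits at the Fermat point of the triangle. The quark positions are made up for illustration.

```python
# Toy geometric comparison, not a QCD computation: if the flux-tube energy
# were simply proportional to total tube length (same string tension for all
# tubes), compare the Delta topology (three pairwise tubes = the triangle's
# perimeter) with the Y topology (tubes meeting at the Fermat point).
import numpy as np

quarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])  # hypothetical positions (fm)

# Delta topology: total tube length = perimeter of the triangle
delta_length = sum(np.linalg.norm(quarks[i] - quarks[(i + 1) % 3]) for i in range(3))

# Y topology: junction at the Fermat point, found by Weiszfeld iteration
junction = quarks.mean(axis=0)  # start at the centroid
for _ in range(200):
    d = np.linalg.norm(quarks - junction, axis=1)
    junction = (quarks / d[:, None]).sum(axis=0) / (1.0 / d).sum()
y_length = np.linalg.norm(quarks - junction, axis=1).sum()

print(f"Delta (perimeter):   {delta_length:.3f} fm")
print(f"Y (Fermat junction): {y_length:.3f} fm")  # always shorter than the perimeter
```

For any triangle the Y total comes out shorter than the perimeter, so the Y-shape also wins the naive length count, quite apart from the gauge-invariance argument.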
But we don’t have to guess around because this is physics and one can calculate it. The calculation cannot be done analytically, but it is tractable in computer simulations. Bissey et al. reported the results in a 2006 paper: “We do not find any evidence for the formation of a Delta-shaped flux-tube (empty triangle) distribution.” The conclusion is clear: The Y-shape is the preferred configuration.
And there’s more to learn! The quarks and gluons in the proton don’t sit still, and as they move, the center of the Y moves around with them. If you average over all possible positions, you approximately get a filled Δ-shape. (Though the temperature dependence is somewhat murky and a subject of ongoing research.)
The flux-tubes also do not always lie exactly in the plane spanned by the three quarks but can move up and down in the perpendicular direction. So you get a filled Δ that is inflated toward the middle.
This distribution of flux tubes has nothing to do with the flavor of the quarks, meaning it’s the same for the proton and the neutron and all other particles composed of three quarks, such as the one containing two charm-quarks that was recently discovered at CERN. How did CERN picture the flux tubes? As a Δ:
Now you can claim you know quarks better than CERN! It’s either a Y or a filled triangle, but not an empty triangle.
I am not a fan of depicting gluons as springs because it makes me think of charged particles in a magnetic field. But I am willing to let this pass as creative freedom. I hope, however, that it is possible to get the flux-tubes right, and so I have summed up the situation in the image below:
Tuesday, December 12, 2017
Research perversions are spreading. You will not like the proposed solution.
The ivory tower from The Neverending Story
At the root of the problem is academia’s flawed reward structure. The essence of the scientific method is to test hypotheses by experiment and then keep, revise, or discard the hypotheses. However, using the scientific method is suboptimal for a scientist’s career if they are rewarded for research papers that are cited by as many of their peers as possible.
To the end of producing popular papers, the best tactic is to work on what already is popular, and to write papers that allow others to quickly produce further papers on the same topic. This means it is much preferable to work on hypotheses that are vague or difficult to falsify, and stick to topics that stay inside academia. The ideal situation is an eternal debate with no outcome other than piles of papers.
You see this problem in many areas of science. It’s the origin of the reproducibility crisis in psychology and the life sciences. It’s the reason why bad scientific practices – like p-value hacking – prevail even though they are known to be bad: Because they are the tactics that keep researchers in the job.
It’s also why in the foundations of physics so many useless papers are written, thousands of guesses about what goes on in the early universe or at energies we can’t test, pointless speculations about an infinitude of fictional universes. It’s why theories that are mathematically “fruitful,” like string theory, thrive while approaches that dare introduce unfamiliar math starve to death (adding vectors to spinors, anyone?). And it is why physicists love “solving” the black hole information loss problem: because there’s no risk any of these “solutions” will ever get tested.
If you believe this is good scientific practice, you would have to find evidence that the possibility to write many papers about an idea is correlated with this idea’s potential to describe observation. Needless to say, there isn’t any such evidence.
What we witness here is a failure of science to self-correct.
It’s a serious problem.
I know it’s obvious. I am by no means the first to point out that academia is infected with perverse incentives. Books have been written about it. Nature and Times Higher Education seem to publish a comment about this nonsense every other week. Sometimes this makes me hopeful that we’ll eventually be able to fix the problem. Because it’s in everybody’s face. And it’s eroding trust in science.
At this point I can’t even blame the public for mistrusting scientists. Because I mistrust them too.
Since it’s so obvious, you would think that funding bodies take measures to limit the waste of money. Yes, sometimes I hope that capitalism will come and rescue us! But then I go and read that Chinese scientists are paid bonuses for publishing in high-impact journals. Seriously. And what are the consequences? As the MIT Technology Review relays:
- “That has begun to have an impact on the behavior of some scientists. Wei and co report that plagiarism, academic dishonesty, ghost-written papers, and fake peer-review scandals are on the increase in China, as is the number of mistakes. “The number of paper corrections authored by Chinese scholars increased from 2 in 1996 to 1,234 in 2016, a historic high,” they say.”
If you think that’s some nonsense the Chinese are up to, look at what goes on in Hungary. They now have exclusive grants for top-cited scientists. According to a recent report in Nature:
- “The programme is modelled on European Research Council grants, but with a twist: only those who have published a paper in the past five years that counted among the top 10% most-cited papers in their discipline are eligible to apply.”
Imagine you wanted to win such a grant. To begin with, you would sure as hell not work on any topic that is not already pursued by a large number of your colleagues, because you need a large body of people who are able to cite your work in the first place.
You would also not bother to criticize anything that happens in your chosen research area, because criticism would only serve to decrease the topic’s popularity, hence working against your own interests.
Instead, you would strive to produce a template for research work that can easily and quickly be reproduced with small modifications by everyone in the field.
What you get with such grants, then, is more of the same. Incremental research, generated with a minimum of effort, with results that meander around the just barely scientifically viable.
Clearly, Hungary and China introduce such measures to excel in national comparisons. They don’t only hope for international recognition, they also want to recruit top researchers hoping that, eventually, industry will follow. Because in the end what matters is the Gross Domestic Product.
Surely in some areas of research – those which are closely tied to technological applications – this works. Doing more of what successful people are doing isn’t generally a bad idea. But it’s not an efficient method to discover useful new knowledge.
That this is not a problem exclusive to basic research became clear to me when I read an article by Daniel Sarewitz in The New Atlantis. Sarewitz tells the story of Fran Visco, lawyer, breast cancer survivor, and founder of the National Breast Cancer Coalition:
- “Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer. It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says.”
- “Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality?
Consider, for example, a 2012 report in Science showing that an Alzheimer’s drug called bexarotene would reduce beta-amyloid plaque in mouse brains. Efforts to reproduce that finding have since failed, as Science reported in February 2016. But in the meantime, the paper has been cited in about 500 other papers, many of which may have been cited multiple times in turn. In this way, poor-quality research metastasizes through the published scientific literature, and distinguishing knowledge that is reliable from knowledge that is unreliable or false or simply meaningless becomes impossible.”
Sarewitz concludes that academic science has become “an onanistic enterprise.” His solution? Don’t let scientists decide for themselves what research is interesting, but force them to solve problems defined by others:
- “In the future, the most valuable science institutions […] will link research agendas to the quest for improved solutions — often technological ones — rather than to understanding for its own sake. The science they produce will be of higher quality, because it will have to be.”
Wednesday, December 06, 2017
The cosmological constant is not the worst prediction ever. It’s not even a prediction.
I can’t say much about fields outside my specialty, but physics certainly has its share of myths that get repeated until everyone believes them. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive one. Another is that inflation solves the flatness problem, or that there is a flatness problem to begin with.
I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.
Before I go and make my case, please ask yourself: If the cosmological constant was such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That’s because there never was such a prediction.
The myth has it that if you calculate the contribution to the cosmological constant from the vacuum fluctuations of the standard model of particle physics, the result is 120 orders of magnitude larger than the observed value. But this is wrong on at least 5 levels:
1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.
The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.
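Schematically, the bookkeeping goes like this (just a sketch of the standard argument, with metric signature (−,+,+,+) and the vacuum contribution written as $\langle T_{\mu\nu}\rangle = -\rho_{\rm vac}\, g_{\mu\nu}$): split the source of Einstein’s equations into matter plus vacuum, and the vacuum piece merely shifts the constant,

$$G_{\mu\nu} + \Lambda_{\rm bare}\, g_{\mu\nu} = 8\pi G \left(T^{\rm matter}_{\mu\nu} - \rho_{\rm vac}\, g_{\mu\nu}\right) \quad\Longrightarrow\quad \Lambda_{\rm obs} = \Lambda_{\rm bare} + 8\pi G\, \rho_{\rm vac}\,.$$

Only $\Lambda_{\rm obs}$ is measurable. Whatever value a standard-model calculation assigns to $\rho_{\rm vac}$, the free parameter $\Lambda_{\rm bare}$ can absorb it.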
(And if you still believe in naturalness arguments, buy my book.)
2. The calculation in the standard model cannot be trusted.
Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven’t yet seen them, these will play a role for the result. And we don’t know if there are such particles.
3. It’s idiotic to quote ratios of energy densities.
The 120 orders of magnitude refer to a ratio of energy densities. But the cosmological constant isn’t usually quoted as an energy density to begin with (in natural units it has the dimension of a squared energy, that is, the square root of an energy density), and in no other situation do particle physicists quote energy densities. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.
4. The 120 orders of magnitude are wrong to begin with.
The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
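To see where numbers like these come from, here is a back-of-the-envelope version of the estimate (my own rough sketch, not the calculation in the paper linked above): keep only the top-quark term, use the standard one-loop prefactor $m^4/(64\pi^2)$, set the logarithm to one, and compare with an observed dark-energy density of roughly $(2.3\ \text{meV})^4$.

```python
# Rough order-of-magnitude sketch of the estimate discussed above, not the
# full calculation: top-quark term only, one-loop prefactor m^4/(64 pi^2),
# logarithm set to one. All quantities in natural units (GeV).
import math

m_top = 173.0                             # top-quark mass, GeV
rho_vac = m_top**4 / (64 * math.pi**2)    # rough standard-model contribution, GeV^4

rho_obs = (2.3e-12)**4                    # observed dark-energy density ~ (2.3 meV)^4, GeV^4

orders_density = math.log10(rho_vac / rho_obs)
print(f"energy-density ratio: ~10^{orders_density:.1f}")      # prints about 52.7
print(f"energy ratio:         ~10^{orders_density / 4:.1f}")  # prints about 13.2
```

That lands at about 52 to 53 orders of magnitude in energy density and about 13 in energy, in line with the numbers above; the logarithm and the lighter particles that this sketch drops shift the result by an order of magnitude or so.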
5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.
There’s a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That’s because these are known to be useful to quantify the match of a theory with data. If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero.
In summary: No prediction, no problem.
Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.
The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.
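Written out, this is just elementary statistics applied to the statement above (nothing beyond it): adding a constant, which is all the free parameter $\Lambda_{\rm bare}$ can do, shifts the mean but leaves the fluctuations untouched,

$$\rho_{\rm vac} \to \rho_{\rm vac} + c:\qquad \langle\rho_{\rm vac}\rangle \to \langle\rho_{\rm vac}\rangle + c\,,\qquad \big\langle\left(\rho_{\rm vac}-\langle\rho_{\rm vac}\rangle\right)^2\big\rangle \to \big\langle\left(\rho_{\rm vac}-\langle\rho_{\rm vac}\rangle\right)^2\big\rangle\,.$$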
But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...
I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.
In this case, you can calculate the likelihood for observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the likelihood that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I’m not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)
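The scaling behind this is the standard Boltzmann-fluctuation estimate (quoted here only for orientation, still without putting in any numbers): the probability of fluctuating out of equilibrium into a state whose entropy lies $\Delta S$ below the equilibrium value is exponentially suppressed,

$$P \;\sim\; e^{-\Delta S}\,,$$

and a universe as large and as ordered as ours corresponds to an enormous $\Delta S$, which is why small and messy fluctuations are overwhelmingly more probable than the whole thing.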
I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.