We can classify the most notable breakthroughs this way: electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark model (experiment-led), electro-weak unification (theory-led), the Higgs boson (theory-led).
That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.
Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.
Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well-documented, e.g., for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases when the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.
That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and therefore would break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.
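To make the consistency argument concrete (a textbook sketch, not the historical route): the triangle gauge anomalies of the electro-weak theory cancel only if the electric charges of the fermions in each generation sum to zero,

```latex
\sum_{\text{fermions}} Q
= \underbrace{3\left(\tfrac{2}{3} - \tfrac{1}{3}\right)}_{\text{quark doublet, 3 colors}}
+ \underbrace{\bigl(0 + (-1)\bigr)}_{\text{lepton doublet}}
= 0 .
```

With the muon and its neutrino but only three quarks (up, down, strange) known, the sum fails for the second generation; a fourth quark restores the cancellation.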
This, I must emphasize, is not what historically happened. The argument from gauge-anomaly cancellation came only after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked is that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.
What can we learn from this?
Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”
But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.
The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.
The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.
At least that’s my conclusion.
It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.
Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the argument I offered will help you come to your own conclusion.
(You can find more details and references on all of this in my book.)
Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but to decide when to consider the problem solved.)
The Cosmological Constant
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.
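For scale (rounded textbook values): a naive estimate of the vacuum energy density from quantum field theory, cut off at the Planck mass, compared with the observed value gives

```latex
\rho_{\text{vac}}^{\text{QFT}} \sim M_{\text{Pl}}^4 \sim 10^{76}\,\text{GeV}^4,
\qquad
\rho_{\Lambda}^{\text{obs}} \sim 10^{-47}\,\text{GeV}^4,
```

a mismatch of roughly 120 orders of magnitude. The size of the mismatch, however, is not by itself an inconsistency.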
The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and the other forces in the standard model. There is nothing contradictory about this, hence not a good problem.
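To put numbers on the hierarchy (standard rounded values): the ratio of the Planck mass to the electro-weak scale is

```latex
\frac{M_{\text{Pl}}}{v} \approx \frac{1.2 \times 10^{19}\,\text{GeV}}{246\,\text{GeV}} \approx 5 \times 10^{16},
```

or, equivalently, the gravitational attraction between two protons is weaker than their electromagnetic repulsion by about 36 orders of magnitude. Large, but by itself not contradictory.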
Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg angle from breaking a large symmetry group does or does not require an explanation.
Quantum Gravity
Quantum gravity removes an inconsistency and is hence a solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.
Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.
Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.
Quantum Field Theory
There are various problems with quantum field theories where we lack a good understanding of how the theory works and that require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought only a few years ago (see, e.g., here).
The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.
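Schematically (the standard way to exhibit the tension): unitary Schrödinger evolution of a system together with its measurement apparatus produces a superposition,

```latex
\bigl(\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle\bigr)\otimes|\text{ready}\rangle
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle\otimes|\text{up}\rangle
+ \beta\,|{\downarrow}\rangle\otimes|\text{down}\rangle,
```

while the measurement postulate asserts a single definite outcome, with probability |α|² or |β|². If the apparatus is itself made of quantum constituents, both statements cannot follow from the same dynamics.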
The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.
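The underlying observation (standard cosmology, schematic): in a decelerating Friedmann universe the deviation from spatial flatness grows with time,

```latex
|\Omega - 1| = \frac{|k|}{a^2 H^2},
\qquad |\Omega - 1| \propto t \;\;\text{(radiation era)},
```

so the near-flatness observed today requires an initial value of order 10⁻⁶⁰ near the Planck time. Calling that value “unlikely” presumes a probability distribution over initial conditions, which is exactly what is missing.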
The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.
Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the claim that a certain initial condition is unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.
The Strong CP Problem
Is a naturalness problem, like the hierarchy problem, and not a problem of inconsistency.
Beyond this, there are always a variety of anomalies where data disagree with theory. Those can linger at low significance for a long time, and it’s difficult to decide how seriously to take them. For those I can only give you the general advice that you listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.