Monday, September 03, 2018

Science has a problem, and we must talk about it

[Image: bad stock photo of my job. A physicist is excited to have found a complicated way of writing the number 2.]
When Senator Rand Paul last year proposed that non-experts participate in review panels which award competitive research grants, my first reaction was to laugh. I have reviewed my share of research proposals, and I can tell you that without experience in the respective discipline you can’t even judge whether the proposal is feasible, not to mention promising.

I nodded to myself when I read that Jeffrey Mervis, reporting for Science Magazine, referred to Sen. Paul’s bill as an “attack on peer review,” and Sean Gallagher from the American Association for the Advancement of Science called it “as blatant a political interference into the scientific process as it gets.”

But while Sen. Paul’s cure is worse than the disease (and has, to date, luckily not passed the Senate), I am afraid his diagnosis is right. The current system is indeed “baking in bias,” as he put it, and it’s correct that “part of the problem is the old adage publish or perish.” And, yes, “We do have silly research going on.” Let me tell you.

For the past 15 years, I have worked in the foundations of physics, a field which has not seen progress for decades. What happened 40 years ago is that theorists in my discipline became convinced the laws of nature must be mathematically beautiful in specific ways. By these standards, which are still used today, a good theory should be simple, and have symmetries, and it should not have numbers that are much larger or smaller than one, the latter referred to as “naturalness.”

Based on such arguments from beauty, they predicted that protons should be able to decay. Experiments have looked for this since the 1980s, but so far not a single proton has been caught in the act. This has ruled out many symmetry-based theories. But it is easy to amend these theories so that they evade experimental constraints, hence papers continue to be written about them.

Theorists also predicted that we should be able to detect dark matter particles, such as axions or weakly interacting massive particles (WIMPs). These hypothetical particles have been searched for in dozens of experiments with increasing sensitivity – unsuccessfully. In reaction, theorists now write papers about hypothetical particles that are even harder to detect.

The same criteria of symmetry and naturalness led many particle physicists to believe that the Large Hadron Collider (LHC) should see new particles besides the Higgs boson, for example supersymmetric particles or dark matter candidates. But none were seen. The LHC data is not yet fully analyzed, but it’s clear already that if something hides in the data, it’s not what particle physicists thought it would be.

You can read the full story in my book “Lost in Math: How Beauty Leads Physics Astray.”

Most of my colleagues blame the lack of progress on the maturity of the field. Our theories work extremely well already, so testing new ideas is difficult, not to mention expensive. The easy things have been done, they say, we must expect a slowdown.

True. But this doesn’t explain the stunning profusion of blundered predictions. It’s not like we predicted one particle that wasn’t there. We predicted hundreds of particles, and fields, and new symmetries, and tiny black holes, and extra dimensions (in various shapes, and sizes, and widths), none of which were there.

This production of fantastic ideas has been going on for so long it has become accepted procedure. In the foundations of physics we now have a generation of researchers who make a career of studying things that probably don’t exist. And instead of discarding methods that don’t work, they write more and more papers of decreasing relevance. Instead of developing theories that better describe observations, they develop theories that are harder to falsify. Instead of taking risks, they stick to ideas that are popular with their peers.

Of course I am not the first to figure out that beauty doesn’t equal truth. Indeed, most physicists would surely agree that using aesthetic criteria to select theories is not good scientific practice. They do it anyway. Because all their colleagues do it. And because they all do it, this research will get cited, will get published, and then it will be approved by review panels which take citations and publications as a measure of quality. “Baked in bias” is a pretty good summary.

This acceptance of bad scientific practice to the benefit of productivity is certainly not specific to my discipline. Look for example at psychologists whose shaky statistical analyses now make headlines. The most prominent victim is Amy Cuddy’s “Power Posing” hypothesis, but the problem has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016, “statisticians and other scientists have been writing on the topic for decades.”

Commenting on this “False Positive Psychology,” Joseph Simmons, Leif Nelson, and Uri Simonsohn wrote, “Everyone knew it was wrong.” But I don’t think so. For one, I have myself spoken to psychologists who thought their methods were fine because it’s what they were taught to do. For another, it doesn’t make sense: had psychologists known their results were likely statistical artifacts, they’d also have known other groups could use the same methods to refute their results.
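The inflation that Simmons, Nelson, and Simonsohn describe is easy to quantify with a toy calculation. The numbers below are illustrative assumptions on my part, not figures from their paper: if a researcher has, say, 20 defensible ways to analyze data in which no real effect exists, and reports whichever analysis reaches p < 0.05, the chance of a spurious “finding” is about 64 percent, not 5.

```python
import random

random.seed(0)

ALPHA = 0.05            # conventional significance threshold
TESTS_PER_STUDY = 20    # hypothetical number of defensible analysis choices
N_STUDIES = 100_000     # Monte Carlo repetitions

# Each "study" has no real effect, so every individual analysis has a
# 5% chance of a false positive. The study "succeeds" if any one of
# its analysis variants comes out significant.
false_positive_studies = sum(
    any(random.random() < ALPHA for _ in range(TESTS_PER_STUDY))
    for _ in range(N_STUDIES)
)

simulated = false_positive_studies / N_STUDIES
analytic = 1 - (1 - ALPHA) ** TESTS_PER_STUDY

print(f"simulated: {simulated:.3f}")   # close to the analytic value
print(f"analytic:  {analytic:.3f}")    # about 0.642
```

The analytic line is just the family-wise error rate for independent tests; real analysis choices are correlated, so the true inflation is somewhat smaller, but the qualitative point stands.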

Or look at Brian Wansink, the Cornell Professor with the bottomless soup bowl experiment. He recently drew unwanted attention to himself with a blogpost in which he advised a student to try harder to get results out of data because it “cost us a lot of time and our own money to collect.” Had Wansink been aware that massaging data until it delivers is not sound statistical procedure, he’d probably not have blogged about it.

What is going on here? In two words: “communal reinforcement,” more commonly known as group-think. The headlines may say “research shows” but it doesn’t: researchers show. Scientists, like all of us, are affected by their peers’ opinions. If everyone does it, they think it’s probably ok. They also like to be liked, not to mention that they like having an income. This biases their judgement, but the current organization of the academic system does not offer protection. Instead, it makes the problem worse by rewarding those who work on popular topics.

This problem cannot be solved by appointing non-experts to review panels – that merely creates incentives for research that’s easy to comprehend. We can impose controls on statistical analyses, and enforce requirements for reproducibility, and propose better criteria for theory development, but this is curing the symptoms, not the disease. What we need is to finally recognize that scientists are human, and that we don’t do enough to protect scientists’ ability to make objective judgements.

We will never get rid of social biases entirely, but simple changes would help. For starters, every scientist should know how being part of a group can affect their opinion. Grants should not be awarded based on popularity. Researchers who leave fields of declining promise need encouragement, not punishment because their productivity may dwindle while they retrain. And we should generally require scientists to name both advantages and shortcomings of their hypotheses.

Most importantly, we should not sweep the problem under the rug. As science denialists become louder both in America and in Europe, many of my colleagues publicly cheer for their profession. I approve. On the flip side, they want no public discussion about our problems because they are afraid of funding cuts. I disagree. The problems with the current organization of research are obvious – so obvious even Sen. Paul sees them. It is pretending the problem doesn’t exist, not acknowledging it and looking for a solution, that breeds mistrust.

Tl;dr: Academic freedom risks becoming a farce if we continue to reward researchers for working on what is popular. Denying the problem doesn’t help.

100 comments:

Peter said...

Does any measure come close to predicting who will step out of the box in exactly the right direction in ten years time?

Space Time said...

You say that in the past 40 years, in the foundations of physics, things have gone the wrong way (because of the sense of aesthetics of the majority of the practitioners). I have two questions. 1. What is the field "foundations of physics" exactly? 2. What was done in this field prior to the decline of the last 40 years?

Uncle Al said...

http://bit.ly/NSF_IDEA_MACHINE
… Uncle Al proposes removing all carbon dioxide from air within 10 years using existing chemical waste.

Physics theorizes (Bayesian inference, "one need not look"). Physics properly predicts then seeks to verify (given "accepted theory") not falsify (contradicting "accepted theory"). Verification builds infrastructure, falsification is unquantifiable risk.

"testing new ideas is difficult, not to mention expensive" One day to empirically falsify quantum mechanics with a diffraction grating and Hund's paradox. One day to empirically falsify dark matter and the Equivalence Principle. DOI:10.1002/anie.201704221 using these molecules (stereograms). No contradiction of prior observations. Challenge postulates not derivations.

"Academic freedom risks becoming a farce if we" only fund the politics of Rapunzel Haboob LaShatiqua Hernandez.

marten said...

".....Paul's cure is worse than the disease."

Sorry if I missed it, but what is the disease he wishes to cure? As far as I know he is an expert on cataract and glaucoma surgery.

Filippo Salustri said...

Thank you for a great article. I would suggest, however, that the problem lies more in the human nature of the scientists than in science itself. I would suggest that the scientific body of knowledge is in excellent shape, and that the methods of science are just fine generally speaking. The problem, it seems to me, is that scientists just aren't reflecting enough on the nature of their own work and the underlying assumptions to it all.
In other words, I think it's more a question of how to *be* a scientist rather than how to *do* science.
Caveat: I'm not a scientist; I'm an engineer. I apply the same distinction (being an engineer versus doing engineering work) in my teaching. I just see significant parallels between engineering and science in this case.

Joey Blau said...

OK, but it is true that symmetries do occur in nature, and that certain characteristics are conserved. (As you know.)

Since it worked in the past, people tried it again and again.

I guess you are saying that scientists are clinging to these ideas because of a lack of imagination or daring to seek another path. But it is hard to go forward without a roadmap.

Has not the LHC team announced plans to survey large numbers of collisions with many filters turned off? To try and get the data to say something new? Is this what you want?

And if LHCb is measuring and remeasuring strange decay branches seeking to find a new SM discrepancy, is that not productive if not exactly promising work?

Or perhaps you are pointing at the theorists instead of the experimenters. I guess flogging supersymmetry, expecting the horse to finally win a race, is a bit sad, but they do what they know. (As you pointed out.)

ahmed mohammed said...

Good article.
Thanks Sabine.

Sabine Hossenfelder said...

Space Time,

With foundations of physics I refer to the parts of physics concerned with the most fundamental laws, that's currently parts of high energy particle physics, quantum gravity, quantum foundations, and cosmology. What was done in this field prior to the development of the standard model? That's not a question I can answer in a blog comment, I suggest you consult a history book.

Sabine Hossenfelder said...

marten,

The bias he complains about.

Sabine Hossenfelder said...

Joey,

I don't think that lack of imagination is the problem. The problem is that whatever their imagination, they are by and large forced to work on what's popular and productive. Of course there are some exceptions to this rule - there are a few lucky ones who have no such pressures. Unfortunately these are the ones the public most often hears about. But what creates the big bubbles are the thousands of unnamed researchers who do "more of the same" because it's what they can get funding for.

Sabine Hossenfelder said...

Fillipo,

Depends on what problem you are referring to. There are certainly problems in the foundations of physics that we want a solution for and these are scientific problems. Part of the reason that progress is slow is arguably that these are difficult problems. But, yes, I agree with you that it adds on top of this that physicists "aren't reflecting enough on the nature of their own work and the underlying assumptions to it all".

Filippo Salustri said...

Sabine,
The problems of the foundations of physics are the edges of scientific investigation. There are plenty of those. As an engineer, I defer as graciously as possible to scientists on that front. Where I think perhaps I can contribute more is to the problems of the "enterprise" of science because it is quite close to the "enterprise" of (some types of) engineering. It is to these latter problems that I was referring.
So, for instance, the whole notion of "publish or perish" is systemic in both science and engineering. Similarly with issues of peer review, and granting/funding systems. This may be a place where engineering researchers and scientists might collaborate more (presenting a united front and all that).

Sabine Hossenfelder said...

Filippo,

In principle I'd be all in favor of that. In practice, there is no front in science. The vast majority just goes along with it. And those who can't get themselves to abandon their notion of what is promising research leave.

Filippo Salustri said...

Sabine,
Well, for what it's worth, I think you're doing great work on the science communication front and also on the... dare I call it "philosophy of science"? Just for lack of a better term. Metascience? Science praxis?
Anyways, whatever you want to label it, I applaud and support your work.

Pavel Nadolsky said...

Dear Sabine, you are trying to solve an optimization problem about the distribution of finite resources, not an ultimate truth problem. A system based entirely on predictable regularities can be gamed. Grants must be awarded based in part on utilitarian considerations (popularity, promise, etc), and in part on random choice. No perfect solution exists, but the probability of successful investment in research can be maximized up to a point through relatively simple procedures. See, for example, a solution to a related problem of security screening in airports: https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2010.00452.x

doktor Boktor said...

You know what branch of science suffers badly from this problem? Economics.

Since it is so hard to understand human behavior in markets, economists tend to assume coldly rational, hyper-intelligent, perfectly informed and purely self-serving behavior. Partly because that sort of behavior is easiest to define and solve equations for. It results in many beautiful equations that sometimes vaguely resemble reality.

Lockley said...

Thank you for your review of "Two Doors at Once" and the recent contribution to "Quanta" magazine. Those efforts contribute to making the points you emphasize in this post.

Certainly "Lost in Math" makes it clear that a different attitude toward research enterprises is required. Original insight is needed to stimulate progress. The question becomes: How are new insights to be generated when the tools used are only sharpened by years of study in familiar ground?

Unknown said...

Dr. Hossenfelder and Filippo,

I follow Dr. Hossenfelder and a number of doctors on twitter, and a debate always seems to surface on whether science can be political. People seem to always want to demarcate these matters of "publish or perish" and the limits of the funding system as non-scientific administrative matters. The phrase "science is a tool" comes up often. But, it does seem plausible that politics in the form of bias can enter into the scientific method and analysis without causing blatant ethical concerns (e.g. misuse of statistical methods). Would you say that this topic seems to provide a definitive answer on the subject?

Unknown said...

Researchers and scientists should be free to work on whatever they find interesting and promising. Having somebody tell them what to do and what not to do comes across as patronising, arrogant, and I don't see how that might help science or progress in any way. We are not all idiots that just follow fashion and spit out hundreds of papers just for the sake of it.

Michael John Sarnowski said...

Hi Sabine, I started a thread on cosmoquest forum to see the interest in discussing your ideas about defects in space-time. The main argument for not discussing it was that there was not much hope for verifying anything about the defects. Basically the same arguments against beauty. The big difference I see, is that, granular space-time, defects in space-time, or discrete space-time, is really an understudied area of physics. I think that the problem with discussing defects in space-time is that it is associated with ether theory.

JimV said...

As you probably already know, I think trial-and-error, plus memory, is the fundamental process of progress; and that trial and error requires good selection rules to identify the errors. This is easy in computer-programming: run test data sets; if the program crashes or produces the wrong results--error. It seems to me the scientists you're talking about are spinning their cogitative wheels waiting for more test results. There are more experiments that could be made, but they have gotten more and more expensive. LIGO took a lot of time and money to construct, and is still being expanded. An array of radio-telescopes in space might be the next big thing, but it will take a lot of time and money.

Meanwhile, people grow up wanting to be scientists, and to get there they have to write a thesis on some new idea; and to stay there, more papers on newish ideas. So current results seem inevitable, until the next set of new, important data arrives (if it does). By the nature of things, there will always be more wrong ideas to work on and write about than right ideas (about the unknown).

I can't much blame people for that, as long as they maintain some perspective about their chances of being right. My friend Mario likes to say about people he doesn't trust, "That guy acts like he believes his own resume!" However, there again, that seems to be a big part of human nature.

Rand Paul should keep his red, crusty beak out of science. Other than that, I'm not sure what should be done.

SRP said...

It seems that these arguments could benefit from a clearer framing. From an economic point of view, you could frame the perceived problem as a decline in real scientific productivity, where that is defined NOT as the number of papers per input but rather as relevant, accurate discoveries about the physical world per input.

You point to two types of evidence for the existence of this problem: First, a large, growing number of theoretical papers in the foundations of physics pursuing the same ideas for decades despite lack of empirical confirmation, with these theories becoming more complex over time in order to make them less testable and hence not yet falsified. Second, the credibility and reproducibility crises in a number of experimental sciences (you could have pointed to biomedical and biological research as well as psychology here).

(These are both basically issues about accuracy, rather than relevance. Another criticism coming back into fashion [as in the Sarewitz article you linked a while ago] is that publication relevance is declining also, with an increasing number of papers published that don't connect outside their very narrow contexts to contribute to broader knowledge or to applications.)

Your general causal explanation for these two different manifestations is identical: a) Scientists' incentives to publish and b) an increasing disconnection of publication from accuracy. The specific causal mechanisms for b) in the two realms, however, must be different.

In theoretical foundations of physics you claim that there is a herding equilibrium around a particular aesthetic or heuristic for building models, one that worked in the past but hasn't been successful for the last several decades. People follow it because other people follow it and will cite them in turn, whereas nonconforming ideas will languish, and the cost of doing experiments prevents data from quickly adjudicating such prejudices.

In experimental sciences, the proximate cause is not herding around a common failed heuristic (although such might exist--for example, there are evolutionary biologists very critical of molecular-reductionist approaches prevalent in the field). Instead, the age-old inability of peer review to screen out practices that enable researchers to consciously or unconsciously bias their results combines with a newly reduced chance of other researchers trying to replicate one's work because of the gigantic and growing number of researchers and subspecialties and publications. In effect, the problem is "anti-herding"--not enough groups working on the same experimental systems, too much reward for novelty over checking published results for accuracy, etc.

What you seem to be saying is that despite these different mechanisms leading to low productivity--one with excessive herding, the other with excessive dispersion--they could both be cured by getting at the common factor of scientists' incentive to get published and cited as much as possible. It isn't clear if you think this either a necessary or sufficient condition for solving the problem. Most "reformers" in the experimental area work on ways to detect biased methods, to archive data for others to analyze, etc.; that is, they work on reconnecting accuracy and publication. Would there be an equivalent sort of reform of reconnection possible in the theoretical foundations of physics? Is that what would happen if people in your field took your thesis about the poverty of "naturalness" and aesthetics to heart?

Parsing the issues in some way, if not this exact way, would help make the discussion move forward in my opinion, as we could stop going in circles and merely collecting inevitable gripes and dissatisfactions with the practice of science.

sean s. said...

The science community must consider itself on notice. As long as it depends on public funds, it will need to justify how it is using those funds. If it cannot get its house in order, outsiders will be ever more tempted to meddle. Like it or not ...

sean s.

sean s. said...

... and I second Filippo's remark. You, Sabine, are doing a great job. Please do keep it up.

sean s.

Unknown said...

Naive idea from a non-expert: how about making every scientific paper include an advocatus diaboli section where the authors discuss how their findings might _not_ be valid. Everybody would be motivated to demonstrate they thought about at least the most likely or plausible objections. That might help a little.

John Mark Morris said...

Bee,

What is the process by which research directions are set out strategically? Are there any governing or advising bodies and at what scope (national, university, etc)? Where can the researchers, the funders, and the public go to express their views as input to the research strategy? It seems to me that such governance/advisement at large scale could potentially be helpful.

What I am thinking about is something like 5-10 year views on the various bets (investments) that are most promising. And I think there should be a bucket of 20%-40% for new ideas. (my favorite is forming competing teams to go back 100 years and follow non-Copenhagen paths forward).

What do you think? Does something like this exist? Could it be effective?

Marko

p.s., I agree having lay people review grant proposals is a non-starter.

Lawrence Crowell said...

Science has had some falsehoods advanced as truth before. In paleontology the classic case is Piltdown Man, which turned out to be a human skull with a chimpanzee jaw. This fake was meant to promote the idea that cranial development was more ancient. The forgery was eventually uncovered. There have been in physics various oddball ideas, such as N-rays 100 years ago and more recently the cold fusion flapdoodle. These things have been with us all along.

With the foundations of physics I think the real problem is the scale of physics at the foundation. In the case of quantum gravitation the energy necessary to probe that physics is 15 orders of magnitude larger than what we can work with. As I see it the only possible hope is that in the coalescence of black holes the quantum hair on their horizons interact in a way that is observable in gravitational radiation. This would be a form of gravitational memory, where the final position of test masses is changed after the passing of a gravitational wave. Gravitons produced by the interaction of gravitational quantum hair on horizons might in this way be detected.

Kaluza-Klein theory has been around since the 1920s, and Einstein was impressed with it. Essentially, a metric with an additional fifth dimension, with the assumption that there is no derivative of spacetime variables with respect to this fifth dimension, derives electric and magnetic fields from the connection term. The Riemann curvature for spacetime leads to products of these fields that recovers the Yang-Mills Lagrangian. This assumption that spacetime variables do not differentiate with respect to the fifth dimension is the cyclicity condition that this fifth dimension is a tiny circle. For more complex gauge fields this curled up internal space is found in Calabi-Yau manifolds. This segues into string and M theory and has led to the so-called landscape. The number of possible compact topologies is enormous, about 10^{500}.
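[For readers who want to see the mechanics of the reduction sketched above, here is the standard textbook form, with the scalar (dilaton) mode frozen to a constant for simplicity; conventions vary between sources:]

```latex
% Kaluza-Klein ansatz: a 5D metric with the cylinder condition \partial_y = 0
ds_5^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu
       + \left(dy + \kappa\, A_\mu(x)\,dx^\mu\right)^2

% The 5D Ricci scalar then splits into 4D gravity plus a Maxwell term:
R^{(5)} = R^{(4)} - \frac{\kappa^2}{4}\, F_{\mu\nu}F^{\mu\nu},
\qquad F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
```

So the electromagnetic field strength emerges from the off-diagonal metric components, as the comment says; non-abelian generalizations replace the circle by a curved internal manifold.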

This and other examples from theoretical physics seem in some ways compelling, but as yet experimental evidence is thin. Curiously, the axion is hard to find not because it is very massive, but because its mass is tiny. It would be surprising if none of these things had anything to do with the observable world. It could be that our low energy large scale world is one of such highly broken symmetry that these structures are highly obscured. The idea of the swampland by Cumrun Vafa is interesting, for string theory really works in anti de Sitter spacetimes with negative curvature and vacuum energy. This might mean that de Sitter spacetime is a sort of “accident” in this setting where symmetry breaking is so extreme that quantum field theory is locally not consistent with any form of quantum gravity.

Who knows where this will lead. Maybe some sort of observation will lead to greater clarity. On the other hand there is no reason to think that nature cares whether we are happy with things, and we may end up in nests of quibbles that lead nowhere.

Ian Miller said...

I think the problem is more that of funding than social. I have sat on a funding panel, and to make life simpler for the panel (this was not any of my doing) the applicant tended to nominate peer reviewers, and the reviewers were organised elsewhere before the panel saw anything. What resulted was great reviews if the applicant was doing something very similar to what the reviewer was doing, especially if the application had cited them, and poorer reviews if the applicant was going off somewhere that was not in the reviewer's interest. Further, the reviewers tended to have quite different grading scales. Thus, say, out of five, some would think 3 was a good proposal; others would think anything under 4 was a kiss of death. I noticed most of the panelists had their biases, and invariably they wanted proposals that would fit in with current thinking. The last thing they wanted was to fund something that might be laughed out as "impossible"; and remember, they would generally not be familiar enough with the details to know. In short, they played safe.

The second point is, they want productivity. Unfortunately, productivity means the number of papers, and panelists simply do not have the time to read them. This means that quantity has a value all of its own. You only get quantity by "cranking the handle", and you can only do that on well-trodden paths, picking up some variation on a theme or something from the edges.

To illustrate what I mean, let me suggest some hypothetical proposals. I am going to suggest they would never get funded. First, a proposal comes in that wants to replace quantum field theory. The proposal acknowledges there have been successes, but argues that there are too many fields, and secondly, that the predicted value of the cosmological constant is out by about 120 orders of magnitude. However, the panel has considerable difficulty understanding what the applicant is actually going to do, other than to note that this will take a number of years. My feeling is nobody would waste time making such an application, and would have to work privately until they got a long way down the road. A second hypothetical proposal: suppose someone proposed that gravity was a property emergent from wave-particle duality? The problem here is that there is no reason to think progress will be made, but on the other hand, if you can work up a proposal with sufficient detail that progress is assured, why put it out for peer review and have competition to get there? My point is the funding system is working against you.

Unknown said...

The system of science described is simply the system developing entropy, a non-differentiating propensity; maybe it feels better to call it bias. The 21st century has more detection of this system property, and yet now that too feeds the drift to even more entropy... trying to correct the system, to increase differentiation, will only elaborate the drift. Try to be original as 1 n... now have 1000 fold n try it. 999 or so will simply get stuck in the growing entropy loop. And there's nothing you can do about it.

Sabine Hossenfelder said...

Marko,

Strategy works in some fields of research and not in others. I have had this discussion many times with colleagues, how it's patently ridiculous to have to come up with a 5-year plan and milestones and so on for a research project you just propose to carry out. Chances are that once you begin working on it, you'll find that step 1 on your great taskplan doesn't work, and there goes the rest of your plan. That's why so many people actually only go and apply for work they have half done already, because in that case they have at least a chance to postdict what they will have been doing. Needless to say, this means more often than not that what you get in a proposal is "more of the same." What you will definitely not get is people trying something really new.

Frankly I think that some funding agencies merely want applicants to jump through hoops to deter too many from applying in the first place.

Strategy planning is a good idea if you're head of a lab or some large institute and you have to figure out where to direct your efforts in the long run. You want a plan for that. I understand this. But you don't want to tell people what knob to turn on Wednesdays. There's too much micromanagement in academia already. The trouble with top-level macromanagement is that they pay too much attention to oversimplified indicators of "success" (like the number of papers).

Now if you are asking for strategy on the large scale (how much into this field, how much into that one), that brings up the question of who gets to make the decision. That's a really difficult issue because on the one hand you want people who are expert enough to understand what's doable and promising, while on the other hand you don't want them to be too involved, because then they'll just go and promote their own ideas. This is why in my book I suggest that we dedicate some jobs to "review positions": people who have the task of developing an understanding of a field but who should themselves remain uninvolved, without any stakes in the debate. Such people presently basically don't exist; the reason is that, at the moment, the only way to finance yourself to follow the research is to do research yourself.

And, yeah, sure, in addition to this you may want to get input from other bodies who have other interests/needs. Best,

B.

Sabine Hossenfelder said...

Ian,

What you describe agrees with my experience. I am saying that this is driven by social bias because we should all know that humans feel reassured in their opinions when those opinions are shared by many others. This means that reviewers should be explicitly encouraged to work against this bias, and preferably we should use reviewers who have no agendas of their own to push.

As I said above, that's unfortunately pretty much impossible at present. It's not so surprising, then, that what it comes down to in the end is a popularity vote. The more people already work in a field, the more likely you are to get reviewers who will be supportive of financing more work in that field. This is, in a nutshell, where bubbles come from. Best,

B.

Maarten Havinga said...

Lawrence Crowell,

What exactly is the connection between the axion and theories like Kaluza-Klein, Yang-Mills, Calabi-Yau, and M-theory? Are you suggesting such a connection, or is it coincidence that you start talking about the axion as if it were the subject of the preceding paragraph as well?

Pascal said...

Bee, what do you suggest theoretical physicists should be doing instead of proposing new fields / new particles / extra dimensions / alternative universes..?

I am not asking you to propose your own Theory of Everything in the comments to this post, but rather what kind of activity in a broad sense should be pursued (since all of the above has apparently failed).

Sabine Hossenfelder said...

Pascal,

As I say in my book: A) Before you try to solve a problem, make sure it really is a problem. That is to say, they should stop trying to fix aesthetic issues in the existing theories and instead make sure they have a mathematically well-defined problem. Yes, I am saying we need more math, not less. B) Don't forget that it's only science if you describe nature. That is to say, you should always try to make contact with observation. It seems like an obvious thing to say, but there seem to be whole disciplines where people have given up even trying, at which point you can rightfully ask whether it's still science.

Nick W said...

The quantum-gravity/dark-matter/dark-energy issue almost certainly has something to do with causal sets (Sorkin, Rideout, etc.) in a sort of computational universe. Or: the fundamentals of the universe are information, and spacetime plus matter emerge from the evolution of that information as partially ordered events.

The dynamics of which element comes next in the set (from the perspective of an observer in the set) can probably be predicted by some sort of complex geometry like Arkani-Hamed's amplituhedron. Geometry, time, and gravity emerge from the way the events occur. Early in the set, the unfolding looks like "inflation", and at large scales, the dynamics of the set gives the appearance of dark matter and energy.

Yeah, it's crazy and I'm a pseudo-intellectual weirdo on the internet. But it still sounds more reasonable to me than "magic inflation fields" which disappear for no reason, "parallel universes" we can't see, and "dark particles" we can't find. Why does no one else see this? Why does Ethan Siegel keep writing articles about dark matter and dark energy and inflation?

Unknown said...

Dear Sabine,
What else did you expect from science? From my perspective, the 20th century has been a wonderful exception to the rule. The rule has always been that amidst large crowds of mediocre workers, religious leaders, suspect politicians, and others who vociferously expressed their dubious views on what the world looks like, every now and then a more insightful, deeper thinker arose who told us what we should be doing, what numerous mistakes had been made, and how it could be done better. When, finally, these people were heard, others came to profit from the new lessons and made headlines; thus science made some steady progress, with its ups and downs.
Not so in the 20th century. I think some sort of phase transition took place around 1900, when there happened to be a couple of real masters who managed to get new messages across to their peers and the public. Science reached the stage where precise measurements, accurate theoretical analysis, the rejection of several old prejudices, and new means of spreading information reinforced each other, so that good science made a large impact on society. Now we are back at the stage of wild ideas and bold, but wrong, predictions.

Unknown said...

Yes, the criterion of beauty often comes up. "Occam's razor" is often used as an argument: if it isn't beautiful, it can't be true. That's nothing new; we've seen it before. Weren't the five elements of the Greeks more beautiful than the 92 elements Mendeleev tried to defend? Weren't circles more beautiful than ellipses for describing the motion of planets? And so on. I remember the time when we had to defend the existence of scalar fields in our models of the subatomic particles. Scalars were ugly; gauge fields and fermions were kosher. Theories that required renormalization and perturbation expansions were also frowned upon as ugly, but now we know that they work far better than the concoctions people came up with to describe hadrons and leptons in the old days. Quarks were rejected because they could not be detected experimentally.
You may be right about researchers following their peers blindly. String theorists claim that they understand black holes, but I can tell you they do not. I have shown how to do black holes right, but they continue with their fuzzballs and firewalls. I know how to handle firewalls, and why this is the only correct way, but not many seem to be at all interested; it looks as if it's because I'm not applying string theory.

servant said...

But economics isn't a science. Most of the recent questioning of science is because economics and psychology produce mostly irreproducible results... they're not sciences...

Lying sweetly said...

Sabine,
I like both the style and content of your posts. You have laid out your boundaries and frontiers for exploration well enough for us to understand the direction you are traveling in your career and in each post.

That said, this most recent post is discussing a problem that neither you nor the commenters seem to have properly identified for what it is: sociology. Apologies for invoking soft (really soft) science.

Nobody operates in a vacuum, obviously. But the nature of the setting in which they do operate never really gets discussed at the level which matters. Issues like careers, funding, publication, peer review, field-specific and science-wide politics, and many others, only hint at the global picture of how science should function and yet systemically fails across all fields to do a good job. The raw material for any scientific endeavor is not data, it is people, in situ. By that I mean their education, working environment, and anything else that can distort their decision making processes.

Without an overview of the sociology of scientific settings and funding, including the politics, no one will ever clean up the mess that scientists have made for themselves. And for god’s sake don’t ask a sociologist to help. They suffer from the same problems.

As I see it, only a broad discussion in the scientific community stands a chance of having an impact, but who in the hell would fund or otherwise offer their voice to a large scale discussion that would end so many “careers” and lines of “research”?

Are we really stuck with what we've got, or can someone come up with a way out of this impasse?

Jeff Jones said...

If one gives the topic of scientific advancement even cursory thought, he must come to the conclusion that it is an absolute miracle that ANYTHING useful comes from research. There is so much bias, peer pressure, and good old politics in the scientific community that real advancement is crippled. From the hoaxes of spontaneous appearance of life and its first silly cousin, Darwinian evolution, to the current flavor of the month, anthropogenic CO2-caused global warming, the career-ending indictment of "settled science" prevents most true science from taking place. Real data and real scientific results are summarily tossed onto the garbage heap in favor of politically correct constructs.

S Johnson said...

"Don't forget that it's only science if you describe nature. That is to say you should always try to make contact to observation."

But isn't it the case that the prevailing understanding of QM/QFT is that physics does not subscribe to realism? That physics is about correlating data, about predicting probabilities of experimental outcomes? That there are no descriptions of nature, there is only the consistency of the formalism? If so, then the conclusion follows that any mathematical exercise that incorporates, or can (in principle at least) be reduced to, QM/QFT is therefore in "contact with observation."

Indeed, not just in contact with observation, but in contact with decades of the most precise, best-attested, most fundamental observations - so much so that any competing proposition is incompetent, nonscientific, in denying the correct theory. The conclusion is that any mathematical development of QM/QFT is the only science, because it builds on exactly that contact with observations. (GR seems to be tacked onto the Standard Model, but is deemed wrong for purposes of fundamental theoretical analysis, no?)

It is my understanding that antirealism borders on being official policy. It seems that in practice most working scientists actually use an unexamined so-called naive realism. But that's sort of an irrelevance. Isn't it possible that the problem is not so much an obsession with math per se? That the love of math is a default because nature is denied as the object of adoration?

ZombieHero said...

I think Paul's idea has a lot of merit in so far as the person acts as a Devil's Advocate. Experts, especially in niche fields tend to be insular and groupish. They are humans after all.
How do you break that up? With a Devil's Advocate.

Adding in someone who is a non-expert should, key word should, make the panel of experts explain their reasoning.

You don't have to be an expert in particle physics to find faulty reasoning and bad logic.

That's how I see Paul's idea working. Making the experts question their own assumptions and expose studies with bad assumptions.

It's not that studies with bad assumptions aren't valid or couldn't be useful, but we live in a world with finite resources and infinite wants.

I see the shock from scientists more as shock that people actually think scientists are just like regular, normal people, with built-in biases that can blind them from seeing the truth. Scientists have been told how awesome and better they are than everyone else so often that they probably believe it themselves. They are fooled by their own rationality into thinking that the "others", us plebs, aren't qualified to discuss matters that they deem within their own magisteria.

Ironically, their insular nature and tribalism shows us just how much like everyone else they really are.

David Schroeder said...

It seems that the field of paleoanthropology, like physics, has also been stuck in a multi-decade group-think rut. As an avid follower of the forums at a genetics site (23andMe), I have noticed that one knowledgeable poster there is highly critical of the greatly simplified, highly linear model of human evolution that has been in place for a very long time. Because this model was so entrenched, dissenters were largely excluded from publication in the mainstream journals - that is, until recent sequencing of archaic hominid genomes changed the picture entirely. It turns out that modern humans throughout Eurasia carry bits and pieces of the Neanderthal and Denisovan genomes. Europeans, for example, have between 2% and 4% of their genomes from Neanderthals. Thus, as that knowledgeable poster points out, we are now learning that our evolution was a lot more "messy" than previously believed.

Uncle Al said...

@Lawrence Crowell: If M-theory, etc., were Cauchy horizons, 10^500 compact topologies are an unlimited stack of ^500s, none of them being empirical.

@Ian Miller: "Wave-particle duality" excludes particle structure. C60 diffracts, as does MW = 10,123 (810 atoms), DOI:10.1039/c3cp51500a. Physics excludes geometric chirality via Hund's paradox. Falsify quantum mechanics by transiently excluding dissipation from a molecular beam of homochiral molecules during diffraction (properly, interference), then not obtaining an exiting racemic mixture.

An exiting racemic mixture contradicts chemistry and thermodynamics. How is this not the most interesting experiment ever?

crevo said...

"This problem cannot be solved by appointing non-experts to review panels – that merely creates incentives for research that’s easy to comprehend."

Is that a bad thing? Is it possible that part of the problem is that the community itself is so enclosed in its own terminology and thinking habits that it can't get air from outside? Perhaps what is needed is, instead of advancing the field, to take some time to rethink the field so you can explain it well to others, and at a level at which they can make informed contributions.

As an example, calculus was taught exclusively to mathematicians in the early 19th century, but by the end of that century it was being taught to other groups, and by the early 20th century it had found its way into some high-school curricula. Now it is no longer a thing of mystery. Along the way, methods of explanation were improved, the important details were sorted out from the unimportant ones, etc. Perhaps if physics followed the same path, rather than everyone looking for the next "new" physics, it would start advancing, because it would in fact allow people outside the field to be sufficiently versed in the subject to make good critiques.

Lawrence Crowell said...

@ Maarten Havinga,

The question you ask would require considerable length just to give a bibliography. I mentioned these things because Bee wrote of them in her article.

I will say that axions are sort of neat, and I would love it if these little gems were found. They were first proposed in the Peccei–Quinn theory as a solution to the strong CP problem, the question of why CP violation does not show up in the strong nuclear interaction, or QCD. In effect the axion field "relaxes away" the CP-violating term of the strong force. The QCD Lagrangian is interesting, for it is similar to that for edge states in topological insulators and the quantum Hall effect. Remember that Maldacena demonstrated that conformal field theories exist on the conformal boundary of anti-de Sitter spacetimes. The dynamical equation for the axion is coupled to the electromagnetic field, so axions may show up as anomalous photons.
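Schematically (conventions for the coupling constant g_aγ vary between authors), that coupling of the axion to the electromagnetic field is the term

```latex
\mathcal{L}_{a\gamma} \;=\; -\frac{g_{a\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
\;=\; g_{a\gamma}\, a\, \vec{E}\cdot\vec{B} ,
```

which is why, in a background electromagnetic field, axions and photons can convert into one another - the "anomalous photons" mentioned above.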

Calabi-Yau manifolds appear within forms of Kaluza-Klein theories, and the wrapping of D-branes involves Calabi-Yau manifolds. Look these up in Wikipedia for some fair overviews of these subjects.

akidbelle said...

Hi Sabine,

The math never comes first. More math will certainly be needed eventually, but before that you need to have an idea of what the next physics is about, and only then in what language to express it. E.g., at present, I understand you believe that it will be new particles and fields. If that is not the physical problem, you are stuck forever...

So, in my opinion, the question is what amount of personal beliefs are you ready to sacrifice to really get to something new and real. I mean not you specifically, but also any other theoretical physicist. I understand that the answer is currently "nothing".

Best,
J.

lagunastreets said...

The spread of magical thinking has funded fusion physics for years and years; the result is good-paying research jobs leading to more project hubris despite the lack of success. That is academic welfare, not research.

Just_Write said...

Sorry, but it looks like most of these people need to get a proper job and stop conning others with Imaginary Ideas.

JeanTate said...

Bee, I have a practical question, based on the following:

I am a scientist, an astronomer, but not an astrophysicist (and certainly not a psychologist or theoretical physicist). I work with one of the most successful online citizen science projects, the Zooniverse. Right now my efforts are focused on obtaining, reducing, and analyzing data from the Hubble (and the VLA, and ...), as part of a long-running project into a rare and poorly-understood class of extra-galactic radio sources. This will likely keep me busy for at least the next couple of years, though I will also participate in astronomical research done by my (online!) colleagues.

Other than "talk about it" (per the title of your blog post), what - specifically - would you suggest that I do differently?

SteveB said...

Thanks for this article Sabine.

I liked your book very much, where this theme is a main point.

Questions:

1. How often are grants awarded just because the work that will be done (while maybe not meaningful in the great scheme of things) trains the PhD candidates doing the work in the techniques of physics that they would ultimately need to teach and/or provide relevance for industrial jobs? Is that a legitimate reason to award grants?

2. What do the characters prefixed to the last line in your post mean? T l ; d r : Am I just too old? Does my Chrome browser on a Windows 7 system just not understand?

Sabine Hossenfelder said...

Unknown, ZombieHero,

Yes, the ideas of having a "devil's advocate" and of a section in your paper that lists shortcomings are, I think, good steps towards limiting bias that comes, essentially, from overconfidence. I have listed similar items in the appendix of my book.

Sabine Hossenfelder said...

SteveB,

1. You always have to explain the relevance of the research, the expected impact, the novelty, and so on if you apply for grants. But for PhD students it's often the case that the money comes from a grant some more senior researcher already has, and they often have some flexibility in how to use the funding, as long as they can come up with a reason for how this relates to their research project. Scholarships are an entirely different thing. There are private scholarships that are given to candidates for reasons that have basically nothing to do with their research projects.

2. It means "too long; didn't read" and you could easily have found out by asking your search engine of choice.

Sabine Hossenfelder said...

JeanTate,

Well, I like to believe that close contact with data keeps research healthy, so I don't think you have much to worry about. Yes, "talk about it" scores high on my list. As I also say in the appendix of my book, I would like to add "resist" whenever you notice someone advocating measuring scientific success by the number of publications or the number of citations. These are oversimplified measures, flawed for well-known reasons, yet they continue to be used in research evaluation. If you notice this happening ("this person's research isn't getting cited well"), point out the obvious problem with it (it benefits the growth of research bubbles). And on the flip side, just because a research area produces a lot of well-cited papers doesn't mean it's interesting science. It just means that a lot of people have managed to write a lot of papers.
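To see just how oversimplified these indicators are: the most popular one, the h-index (the largest h such that someone has h papers with at least h citations each), reduces a whole research career to a few lines of arithmetic. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Walk the citation counts from most-cited down; the rank i is a
    # candidate h as long as the i-th paper still has >= i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

A number that crude cannot tell you whether any of those papers solved a problem.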

Sabine Hossenfelder said...

crevo,

It is of course correct that non-experts should have some input into where tax money goes, but this input shouldn't come at the level of actually evaluating proposals. That doesn't make any sense: if you evaluate a proposal, an interesting-sounding non-technical summary is handy but not decisive. You need to be able to tell whether the project is doable at all, whether the budget is realistic, and whether the PI knows what they are talking about. You can't do such an evaluation if you don't know the research area and aren't familiar with the literature. If you simply pour money into interesting-sounding topics, you'll get a return on investment that's even worse than at present.

Maarten Havinga said...
This comment has been removed by the author.
Unknown said...

Juries decide cases, often difficult and technical ones, without special preparation. That's the way the system works.

You see, there is a bullshit sensor within humans. It works without regard for the apparent complexity of the subject.

R. Feynman once said “You don't understand something unless you can teach it to freshmen.” Yup. If you can not make it clear to a jury, or a science grant approval panel (more educated than freshmen), you don't understand what you are doing.

Liralen said...

@unknown: “Researchers and scientists should be free to work on whatever they find interesting and promising. Having somebody tell them what to do and what not to do comes across as patronising, arrogant, and I don't see how that might help science or progress in any way. We are not all idiots that just follow fashion and spit out hundreds of papers just for the sake of it.”

Of course you are free to work on whatever you find interesting and promising.

If you want somebody else to pay you to do it, you are not free, by definition.

This does not mean that I agree things should be this way. It’s simply a statement of what is.

I live in the state that elected Rand Paul, but I am not a supporter, and have always voted for his opponents because I pretty much despise him. Primarily because I view him as an outsider and an idiot (he was not born in Kentucky, but moved here when he was 30 years old).

Ayn Rand, for whom Rand Paul was named, wrote the book “Atlas Shrugged”. According to the wiki, “The book depicts a dystopian United States in which private businesses suffer under increasingly burdensome laws and regulations.” I read it in school when I was a child and hence can agree with that description, but now I am wondering whether the required reading backfired in my case.

It did make me think.

It made me think it was wrong.

Liralen said...

P.S to my previous comments

https://www.youtube.com/watch?v=8PaoLy7PHwk

Lying sweetly said...

What UNKNOWN said, THIS -->
-----------------
You see, there is a bull shit sensor within humans. It works without regard for the apparent complexity of the subject.

R Feynman once said “You don't understand something unless you can teach it to freshmen.” Yup. If you can not make it clear to a jury, or a science grant approval panel ( more educated than freshmen), you don't understand what you are doing.
---------------------
Of course Feynman also said:
There is no historical question being studied in physics at the present time.
We do not have a question, "Here are the laws of physics, how did they get that way?" We do not imagine, at the moment, that the laws of physics are somehow changing with time, that they were different in the past than they are at present. Of course they may be, and the moment we find they are, the historical question of physics will be wrapped up with the rest of the history of the universe, and then the physicist will be talking about the same problems as astronomers, geologists, and biologists.

I think Feynman may have been trying to stay away from issues that frequently plague other sciences, issues such as the effects of sociology on a branch of science, and why science (or any particular branch of science) exists at all. Those questions are important ones and not easily answered.
Of course science exists in a setting and of course settings influence modes, theories, education, research, funding and so forth. Those influences may be more important than any other single topic, at least until they are well understood. And yet, there is no general discussion about the influences of the settings in which science is practiced.

Reasonably, Sabine wants to make sure that scientists make (and maintain) contact with observation (reality). But observation, in and of itself, always takes place in a setting, whether the setting is scientific or not. The setting starts with the mental state of the observer and expands faster than the speed of light :-P, all of which has an effect on what is observed and what is concluded from the observations.

Is it not exceedingly strange that there are no support groups where scientists can go and get a reality check on their own minds? Have discussions about how to reduce bias? You can never eliminate bias, but you can certainly reduce it. Wouldn't such options make science sharper, more meaningful, more broadly supported?

Peter Donnelly said...

OMG, that article about Amy Cuddy was amazing, riveting.

JeanTate said...

Thanks Bee.

Somewhat off-topic, but perhaps not by much (our discussion here includes non-scientists' input on research proposals): from what I've done in various Zooniverse projects, working with citizen scientists, I see a persistent (if relatively low-level) concern: paywalls. Sometimes papers which depend critically on the unpaid work of thousands of citizen-scientist volunteers appear in journals which demand $$$ from those who did that critical work to be able to read them. Many millions of taxpayer $$$ (and other currencies) go into producing papers which those taxpayers must pay again to read. And while the motivations of some leading US figures, in their promotion of some recent legislation seemingly to do with open science, are anything but pro-science, the idea that science needs to be open resonates far beyond those guys' (they're all guys, I think) political base.

Greater openness would, I think, also result in wider discussion of the sorts of shortcomings of theoretical physics, psychology (and more) that your blog post so lucidly describes.

JeanTate said...

Details of the "some recent legislation" I mentioned in my last post: I was thinking of H.R.1430 — 115th Congress (2017-2018), the so-called "HONEST Act" (link: https://www.congress.gov/bill/115th-congress/house-bill/1430). Although its explicit scope is the US' EPA, and its intent very different from its words, these words look good:

"... all scientific and technical information relied on to support such action is the best available science, specifically identified, and publicly available in a manner sufficient for independent analysis and substantial reproduction of research results."

lagunastreets said...

MEMO: Corporate has determined a lucrative opportunity to stimulate grant funded research into inflated Newtonian devices. Effective immediately no further reference shall be made to the Second Law punishable by termination of service. Violators shall be demoted, marginalized and ostracized. Consult your group-think coordinator for Kool-aid and further instructions.

Liralen said...

@JeanTate. I am not aware of a case where "... all scientific and technical information relied on to support such action is the best available science, specifically identified, and publicly available in a manner sufficient for independent analysis and substantial reproduction of research results." is not already available, per existing laws.

Can you provide an example where that is not true? My knowledge is limited to air quality, and I'm struggling to remember a situation where that information was not already provided.



JeanTate said...

@Liralen. As I said earlier, what I am intimately familiar with is astronomy, extra-galactic astronomy in particular, with a strong Zooniverse (online citizen science) flavor. In astronomy in general, papers - mostly funded by taxpayer funds - in good, peer-reviewed journals are public ... but you have to pay to read them! Over the last half century or so, there has been a big change in making the underlying data a paper depends on public ... if only after a "proprietary period" (nearly always reasonably short), and again also sometimes behind paywalls.

So, my strong desire to talk about openness in science goes beyond just having papers and the underlying data "public"; it is also about ensuring that both are freely available (available for free), especially if taxpayer funds were used to create them.

There's a curious story here which Bee may well appreciate: a couple of my side projects involve looking at what I call "myths" (yeah, provocative), views widely accepted within the community which - when you dig into them - turn out to be based on decades' old work, on tiny samples, and often relying on data that has never been made public (in any sense). It can take no more than a day's work, using modern data (bigger, better, higher quality samples, etc), to show that some are indeed myths. Writing a paper to "expose" such myths is, however, Herculean ... since the views are so entrenched, one has to do vastly more work than the astronomers who kick-started the myth (often unwittingly) ever had to do.

Sabine Hossenfelder said...

JeanTate,

What "myths" are you referring to concretely?

JeanTate said...

Bee,

The ones I've investigated so far are surely of interest only to the few thousand astronomers (and perhaps some astrophysicists) who get excited about galaxies. I'm a co-author on a paper already submitted to MNRAS on one such long-standing, widely held concept (we show that reality is a bit different); I'll be happy to apprise y'all of it once it's public (the lead author chose not to put the preprint on arXiv). Fair warning: most readers of your blog will likely find this trivial and boring.

Two examples of what were once widely held ideas which have been shown - in the last decade or so - to be quite wrong (i.e. they would have remained "myths" but for detailed work): that (normal) elliptical galaxies are "all the same" (except for those with an AGN) - they're not; they are a quite heterogeneous class, with a big divide between "fast rotators" (i.e. more like spirals) and "slow rotators" - and that (normal) galaxies with discs (spirals, lenticulars) all have discs with an exponential profile (Sersic index = 1) - they don't; two-component model fits show discs have Sersic indices covering a wide range.
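For readers who don't speak galaxy: the Sersic profile describes how a galaxy's surface brightness falls off with radius, with the index n setting the shape. A minimal sketch (using the rough asymptotic approximation b_n ≈ 2n - 1/3, adequate for n above roughly 0.4; a real fit would use a proper tool such as astropy's Sersic1D model):

```python
import math

def sersic(R, I_e, R_e, n):
    """Sersic surface-brightness profile I(R).

    I_e is the brightness at the effective radius R_e, and n is the
    Sersic index. b_n is approximated by b_n ~ 2n - 1/3.
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * math.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

# At R = R_e the profile returns I_e by construction.
print(sersic(3.0, I_e=1.0, R_e=3.0, n=1.0))  # prints 1.0
```

Setting n = 1 gives the classic exponential disc and n = 4 the de Vaucouleurs profile traditionally fitted to ellipticals; the point of the two-component fits above is that real discs are not all pinned to n = 1.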

I don't really know (I am far more interested in what observations tell us than how best to account for these, in astrophysical models), but suspect that untangling the mythical parts could lead to some pretty interesting insights into what makes galaxies, and what makes them tick. Kinda like how Tully-Fisher may be now viewed more as a curious outcome (epiphenomenon?) rather than something fundamental ...

rene anand said...

Incisive analysis, Sabine! Science as practiced does have a very BIG problem. To encourage diversity of exploration, half of the budget of any agency should be given out to many more scientists, as smaller grants, each large enough to sustain entirely new experiments or areas (including those at the nexus of different fields of inquiry). The odds would then be cast in favor of creativity and innovation, not just regression to the mean or the status quo. My 2 cents.

SteveMobile Hayes said...

Picking up from the NYT article
register hypothesis beforehand so that researchers could not fish around for a new hypothesis if they turned up some unexpected finding

That seems absurd to me. If, for example, all the subjects died, it would be reasonable to report that. Or if many of them got divorced or pregnant or ... Yes, it would need further exploration to eliminate alternative explanations, but nonetheless the unexpected finding is worth reporting.

Unknown said...

Even if someone made real insight into the further foundations of physics, how could the enterprise of science possibly be awakened to that fact?

In my experience an attempt to discuss basic principles leads to the criticism of being over-simple. It seems that over-complicated, fantastical theories draw more attention.

In addition, when I have asked many people where the next fundamental breakthrough will be made, they already have a firm idea of where the solution will occur, never mind how. Anyone with such certainty should be making the effort themselves instead of carelessly dismissing the hard work of others.

Sabine, where would you recommend a person like myself discuss results with a truly open-minded individual or group? So far I've come up completely empty. Thanks, Sean

Sabine Hossenfelder said...

Sean,

By my experience people who look for physicists who are "truly open-minded" really mean physicists who are willing to waste time with sub-standard ideas.

Sabine Hossenfelder said...

SteveMobile,

That you register hypotheses beforehand doesn't mean you cannot report unexpected findings. It just means that people will know these findings were not what you originally set out to study.

Unknown said...

I suppose that I should have added more detail. It has been impossible to even start a conversation.

Wouldn't a new idea necessitate a new vocabulary and way of thinking? Such things are easily dismissed as sub-standard for their inherent nonconformity. And such a thing reeks of elitism and close-mindedness.

Are you open to new ways of thinking or only those which fit comfortably into your education and world view?

sean s. said...

Sabine;

By my experience people who look for physicists who are "truly open-minded" really mean physicists who are willing to waste time with sub-standard ideas.

That may often be true, but it need not be.

Any idea that is unpopular, or novel, or which many do not agree with is likely to be called “sub-standard”.

But the only way out of the rut, the only way out of group-think is to consider ideas many think “sub-standard”; or to support those who do.

sean s.

Filippo Salustri said...

I feel compelled to argue against Unknown's comments.

"Wouldn't a new idea necessitate a new vocabulary and way of thinking?" No, not necessarily. There has never been an argument that successfully defended this as a universal claim. Indeed, I have argued, and continue to argue, that the best way to demonstrate one kind of value of a new idea is to be able to frame it in the existing language. Not only does it provide an analytic bridge to help determine the validity of the new idea, but it provides a learning bridge for people to understand the idea well enough to promote accepting the idea.

It may be the case for some ideas that new vocabulary or thinking is needed, but it is certainly not the universal claim implied by Unknown.

"Such things are easily dismissed as sub-standard for their inherent nonconformity." This is why I hate the passive voice. Dismissed by whom? And why? If and when a good, rational, and evidence-based argument is made by some agent that demonstrates a substantive reason to dismiss the idea, then it's perfectly fair to dismiss it, at least until the idea is re-ideated to address the identified shortcomings. It's not about dismissing the idea, it's about the reasons why it's dismissed or accepted.

It also has to do with human nature. It is human nature to prefer the "safety" of the known. Some people crave that safety more than others. Those people will resist new ideas not out of any particular malice, elitism, or close-mindedness; they will dismiss the idea simply because that's how they are.

Tossing out a requirement for one to change one's "way of thinking" (a spectacularly vague term) as one might change one's trousers (or skirt) is far too casual for what might well constitute very deep cognitive changes. Even if one is open to changing one's "way of thinking", executing the actual change can take years of conscious effort. Great patience is required if you don't want to alienate those whose "way of thinking" you're trying to change.

"And such a thing reeks of elitism and close-mindedness." Seriously? No, it doesn't. There are myriad reasons why, in any specific case, this claim would fail. This kind of over-generalization is fallacious and, quite frankly, harmful to both those trying to bring about actual good change and those struggling to keep up.

David Halliday said...

Unknown (Sean?):

You are likely correct that «a new idea [may] necessitate a new vocabulary and [likely new] way[s] of thinking». However, all purveyors of all previous «new idea[s]» figured out how to "connect" their «new idea[s]» to what the scientific community already knew, and how to make the «new idea[s]» relevant to said scientific community.

It's not really all that difficult, if the ideas are well founded, and well formed, and the person proposing such is well versed in the most current science (at least within the relevant areas).

David Halliday said...

Sabine:

I found it interesting that you advocate that «we should generally require scientists to name both advantages and shortcomings of their hypotheses» (emphasis added). I would have thought that, by now, Richard Feynman's admonition would be well known, at least by scientists: «I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you are maybe wrong, that you ought to do when acting as a scientist.» (from his "Cargo Cult Science" talk, emphasis added)

Sabine Hossenfelder said...

sean,

Of course exceptions may exist in principle, but I have yet to see one in practice. As in any area of science, physics has quality standards, and you can assess whether an idea meets these standards regardless of whether or not you like the idea. The current standard has it that you need a mathematical formulation, and that you need to demonstrate the absence of conflicts with existing data, or at least provide a plausible reason for the absence of such conflict. Whenever someone asks me to be "open-minded" they really mean "here, please read my 40-page pamphlet with many pictures and three randomly selected equations".

Math is a wonderful tool and it has proved to be incredibly powerful. That's why we use it, and if someone doesn't understand what the math is good for, I think it is very justified to ignore them. They haven't walked the walk, and looking into it would be a waste of time.

Sabine Hossenfelder said...

I want to second David's comment that new ideas need to be connected to the already existing ones in order to be taken seriously by the community, and I think that's a reasonable expectation. (I actually explain that in my book.)

Sabine Hossenfelder said...

David,

I deliberately phrased that in terms of 'advantage' and 'shortcomings' rather than 'right' and 'wrong'. You see, the whole book is about theory-assessment in cases where you cannot (yet) tell right from wrong. If you'd know, that would settle the case. Take supersymmetry or string theory as example. It's not that these ideas are wrong. The question is whether it's promising to study them.

Now what is happening is that people forget about all the unfortunate mishaps in the pursuit of these ideas (which I list in my book) and only talk about the benefits. That makes them overly enthusiastic about the prospects of these ideas. Hence, I say, they should do some regular exercise to remind themselves (and others) of the actual situation.

John Mark Morris said...

From my experience it is impossible to introduce a new idea from outside the field of physics. Physicists are good at broadcasting information but will not engage an amateur. This is a mistake.

David Halliday said...

Well, Sabine, I'm not actually disagreeing that «we should generally require scientists to name both advantages and shortcomings of their hypotheses» (emphasis added).

Additionally, I don't believe Richard Feynman's admonition is so much about «'right'» vs. «'wrong'», in any larger sense, such as even whether one is actually «'right'» or «'wrong'», in one's «hypotheses», etc.

What I understand him as saying is that we, as scientists, should be more than just honest about our work and its potential for success; we should go (a bit) "overboard" in pointing out the (potential) weaknesses in our own ideas, hypotheses, theories, etc.

Basically, if we are following Richard's admonition (which, incidentally, he gave his "Cargo Cult Science" talk as a commencement address to graduating scientists), then we should already be «nam[ing]» the «shortcomings of [our] hypotheses» (emphasis added).

That was the intent of my comment.

I'm sorry that it appears not to have been conveyed more completely.

Unknown said...

Sean, thanks for the support. This article and the threaded conversation are all about trying to break the problems of group thinking and the like.

A minority opinion can be fully fleshed out and supported by mathematics and still be accused of being substandard. This is the bias Sabine refers to and wishes to puncture.

Certainty and narrowness are the enemies here, not me and my differing perspective.

Sean

Unknown said...

Filippo,

I appreciate your comments (active voice). Sometimes I prefer to express myself that way to avoid my argument looking like a personal attack, hence the passive tone.

Any idea that is truly new will require an alteration in what we think, even if only in a small way. We sometimes hold so tightly to our current way of looking at things that anything alien is immediately suspect. And I strongly disagree that all new things can be expressed in our current language. There will always be an area of contradiction.

I do agree with you that the human failings we all share permeate the enterprise of science. But surely these are the very things that we are attempting to identify and overcome.

Speaking once again to Sabine's topic, it is my opinion that there is a pervasive attitude in our field to seek out safe courses of study: for example, adding to the enormous data and research for things like dark matter. The field has grown to the point where it cannot be questioned on fundamental grounds anymore, despite the lingering fundamental questions. Becoming a member of the legion of truth-holders is definitely an aspect of them-versus-us elitism.

As for your admonitions about overgeneralization being fallacious and harmful, this medium has its shortcomings. Please be mindful that I am attempting to communicate within these limitations.

Sean

Unknown said...

Hi David,

You are quite right that we must be able to recognize new ideas as being somewhat conforming to current models. For physicists this means that mathematics must remain the language of commonality. I would firmly require that these ideas must be able to be written down and shared in order to be validated. Wishful thinking and creativity are insufficient.

There is still the fact that looking at old things in a new and different way is difficult. Allow me to ask the following straightforward question, which has a simple, yet less obvious, answer.

Which is more fundamental, position or velocity? (I omit acceleration to make it easier)
Why?

Sean

Unknown said...

Sabine,

When working on fundamental ideas, it seems to me that the math becomes simpler. I agree wholeheartedly that it remains necessary as a medium for communication.

Can't it be the case that this simplicity hides complexity behind it? Or perhaps that the simplest principles have the most far reaching consequences? They are the most powerful, for want of a better word.

As a time-management skill, dismissing what doesn't have immediately apparent value is a better strategy than pausing a moment to try to understand if there is anything substantive inside. But is this the best way to serve science?

Sean

Unknown said...

David and Sabine,

It has been my experience that at the onset people start off with the proper reservations about the work they are doing. They initially state that there are shortcomings and potentially other explanations. Yet this is purely lip service. Once they engage, they focus solely on moving things in a positive direction without ever revisiting the fundamental, and often incompletely answered, questions.

It seems that we have spent decades building several houses of cards just for their own sake. Several, if not all, are possibly fantasy. Building an ever more complex structure is the madness of purists, who toil for their own sake and will never capitulate to stopping. Usefulness is a measure for true physicists, who seem to be in short supply these days.

Sean

Phillip Helbig said...

"From my experience it is impossible to introduce a new idea from outside the field of physics. Physicists are good at broadcasting information but will not engage an amateur. This is a mistake."

That might be your experience, but it doesn't hold in general. Most physicists are familiar with amateurs sending them self-published theories of everything (I even have one in hardcover). These are usually recognizable as crackpot work within a couple of seconds, so one doesn't even have to refuse to consider them: practically no time is wasted.

If you really think that you have something to contribute "from outside", then write it up and submit it to a journal. While there might be some journals which won't accept papers from people without an institutional affiliation, there are enough which do. (I think that, as a matter of principle, no-one should publish in a journal which, as a matter of principle, requires an institutional affiliation. Scientific ideas should be evaluated based on content alone.)



Filippo Salustri said...

Sean,
I appreciate you fleshing out your response. I for the most part agree with you.

Matters internal to your field are for people in your field and I will stand aside there (I'm an engineer).

More broadly, on the point of new language, one can think of it from a student's point of view. When a student first encounters a new abstract concept, in science, or in engineering, or pretty much anything else, the concept is explained with language that can seem new to the student. I remember the first time I learned about quantum mechanics: it made tremendous sense to me because I had a great teacher, and the language used built up the concept from things I already knew about. But the first time I learned about plastic behaviour of materials, far more relevant to mechanical engineering than QM, I was completely flummoxed, because I had a very weak teacher who couldn't ground the concept well.

So yes, there is new language with new concepts, but if it's properly framed/translated/explained with respect to old/existing language, then I think it can be done. Not for everything, but for most things.

I look forward to seeing how scientists resolve their internal struggles. Engineering faces many of the same problems; perhaps your solutions will transfer - via some new language and new ways of thinking :-) - to engineering too.

Liralen said...

@JeanTate
My apologies. Upon review, you were simply citing the legislation as best practices, whereas I was upset with the legislation because it required what already exists with respect to air quality rules.

I am so sorry. I think we are kindred spirits and the fact that I should quarrel with you over this is awful.


David Halliday said...

Unknown (Sean) and John Mark Morris:

In my experience, it's not that "outsiders" cannot contribute to various areas of science.

In my experience, the problem seems to nearly invariably "boil down" to an issue of effective communication: Those that have worthwhile contributions and can communicate them well, succeed; those that do not have well formed ideas, or are unable to communicate their ideas effectively, do not succeed.

Unfortunately, this does mean that there may be those with worthwhile contributions that do not succeed due to poor communication.

One historic example I can think of is Michael Faraday: brilliant experimentalist, but unable to formulate his ideas into the language of Mathematics. (Fortunately, he was able to communicate the results of his experiments, both through demonstrations as well as writings.)

In his case, it took an intermediary to translate his work into the appropriate mathematical language.

So, if one is having trouble contributing, as an "outsider", one needs to take an honest assessment of both their desired contribution and their communication skills.

Unfortunately, so far, my experience has been very much as Phillip Helbig describes:
«Most physicists are familiar with amateurs sending them self-published theories of everything (I even have one in hardcover). These are usually recognizable as crackpot within a couple of seconds, so one doesn't even have to choose not to consider them at all, because practically no time is wasted.»

Namely, the usual problem is both their desired contribution and their communication skills. (I believe most of us try hard to look past poor communication skills, if we can see a worthwhile contribution "hidden" within.)

Note: Please do not ever be offended at being referred to as "amateur". The root of the term "amateur" means love, so an "amateur" is one that does something purely for the love of it!

Unfortunately, the principal problem (what, I believe, leads Phillip Helbig to use the term «crackpot») is that in nearly all cases the purveyor of the "contribution" is in violation of Richard Feynman's First Principle: «The first principle is that you must not fool yourself—and you are the easiest person to fool.» (also from his "Cargo Cult Science" commencement address)

That's where an honest assessment of both their desired contribution and their communication skills is most important.

David Halliday said...

Unfortunately, Liralen, I have seen actual cases where the EPA's own Science Advisory Committee "pushed" the EPA into poor scientific practice: namely, in the particular case I am thinking of right now, placing stronger emphasis upon "positive, though statistically insignificant, trends" (emphasis added).

Surely a Science Advisory Committee should know better!

(Instead, we, as State Agencies, had to take the "brunt" of this poor science.)

John Mark Morris said...

David Halliday,

Point taken on Feynman’s first principle. It is very difficult to differentiate your own ideas from self delusion.

However, on communication, the physicists I have attempted to engage are either mindlessly crushing ants beneath the soles of their shoes or aligning their magnifying glass so as to sadistically enjoy their rejection of amateurs rather than the amateurs' ideas. Either way, it is rather unpleasant for the loving amateur. If 1000 good-to-great amateur ideas were proposed to physicists today, it would be quite encouraging if even 1 were recognized.

Lastly it seems disingenuous to ask an amateur to write in the language of mathematics.

sean s. said...

David Halliday;

Your comment to Unknown and JMM is well written and spot on, imho.

... and I am not the Sean you’re replying to ...

sean s.

Unknown said...

Communication remains a hurdle to affinity. Likely this will always be part of the human problem. Point well taken.

Sean

Unknown said...

Filippo, like you, I have experienced both good teachers and bad ones.

I would like to clarify something. Instead of teaching someone an idea entirely new to them, I am referring to the introduction of a new fundamental idea. This necessitates an additional step: namely, the diminishment of the old way of thinking about the topic. Such a thing is extremely difficult and requires an active effort on the part of the reader.

If only it were as easy as just filling in the blanks.

Cheers, Sean

David Halliday said...

John Mark Morris:

I'm sorry you feel like «the physicists [you] have attempted to engage are either mindlessly crushing ants beneath the soles of their shoes or aligning their magnifying glass so as to sadistically enjoy their rejection of amateurs rather than the amateurs ideas. Either way it is rather unpleasant for the loving amateur.»

Unfortunately, scientists in general, and physicists more particularly, are human: they have all the all-too-common faults, including pride.

However, a word of caution: effective communication goes both ways.

Just as you wish to be properly understood, you should try diligently to understand those you are trying to communicate with.

It is far too easy to misinterpret others!

One problem I have noticed—first in communicating with my own wife—is that scientists have been trained to recognize that attacks upon their ideas are not attacks upon themselves. This is most certainly not how the general public "sees" such things.

As a result, we, scientists, tend to be rather "ruthless", blunt, forthright, etc., in our attacks on each other's ideas! It's how science is done. It's about "survival of the fittest" ideas.

So, my caution, is to try to divest attacks upon your ideas from attacks upon you, personally, if you possibly can.

You end with:
«Lastly it seems disingenuous to ask an amateur to write in the language of mathematics.»

As Sabine, and others, have pointed out, Mathematics is the language of scientific communication.

However, as my Michael Faraday example was intended to point out, there are ways for even those that are not versed in that language to communicate to scientists.

Michael Faraday was not trained or schooled as a scientist. He certainly wasn't trained in the language of Mathematics.

This did get in the way of him communicating his findings—particularly his ideas about fields and how they interact, including their interactions with charges.

Fortunately, he was a meticulous researcher, and he was able to express his ideas in diagrams and descriptions, coupled to his copious and exacting experimental findings.

This, then, allowed another scientist—well versed in the language of Mathematics of the time—to "translate" Michael's diagrams and descriptions into mathematical form. (The mathematical forms were subsequently reworked by others into the form we know them today.)

In point of fact, this other scientist pointed out that the exacting nature of Michael's work was actually quite mathematical; it was simply not expressed in the usual mathematical symbols and forms.

So, "the moral of the story" is that there is, indeed, hope even for those that are not versed in the language of Mathematics, provided one is sufficiently careful, exacting, and complete!

However, I am not going to mislead you into thinking it will be easy.

The other problem with not being conversant in the language of Mathematics is that this places a severe barrier to such a person being able to be conversant in the knowledge the scientist should already know when trying to move the science forward: it's like Richard Feynman's analogy of trying to "crack" a safe, and having someone kibitzing about what combinations should be tried, with no knowledge of what has already been tried and what is already known about the combinations of such a safe.

(See Feynman's talk on "Scientific Method" at https://youtu.be/EYPapE-3FRw . The analogy is somewhere in the latter half, as I recall.)

Peter John said...

Physics isn't math. Math doesn't have to be real. Physics must be real.

Liralen said...

@David Halliday: I completely agree that policy-relevant science is an oxymoron, a concept I first read about in the last Bush administration, but that was not what I was referring to in my comment.

Jean Tate’s response to my request for clarification was that “In astronomy in general, papers - mostly funded by taxpayer funds - in good, peer-reviewed journals are public ... but you have to pay to read them! Over the last half century or so, there has been a big change in making the underlying data a paper depends on public ... if only after a "proprietary period" (nearly always reasonably short), and again also sometimes behind paywalls.

So, my strong desire to talk about openness in science goes beyond just having papers and the underlying data "public", it is also all about ensuring that both are also freely available (available for free), especially if tax-payer funds were used to create them.”

Those papers are available for the EPA rules I've reviewed, even if the EPA ignored them and eventually concluded things like mercury is not harmful from an air quality perspective. It’s only harmful if you eat fish.

The EPA’s conclusion made me laugh. But the documents they based their decisions on were at least available for public review.