Monday, August 03, 2015

Dear Dr. B: Can you make up anything in theoretical physics?

“I am a PhD student in neuroscience and I often get the impression that in physics "everything is better". E.g. they replicate their stuff, they care about precision, etc. I've always wondered to what extent that is actually true, as I obviously don't know much about physics (as a science). I've also heard (but to a far lesser extent than physics being praised) that in theoretical physics you can make up anything because there is no way of testing it. Is that true? Sorry if that sounds ignorant, as I said, I don't know much about it.”

This question was put to me by Amoral Atheist at Neuroskeptic’s blog.

Dear Amoral Atheist:

I appreciate your interest because it gives me an opportunity to lay out the relation of physics to other fields of science.

About the first part of your question. The uncertainty in data is very much tied to the objects of study. Physics is such a precise science because it deals with objects whose properties are pretty much the same regardless of where or when you test them. The further you take stuff apart, the simpler it gets, because to our best present knowledge we are made of only a handful of elementary particles, and these few particles are all alike – the electrons in my body behave exactly the same way as the electrons in your body.

If the objects of study get larger, there are more ways the particles can be combined and therefore more variation in the objects. As you go from elementary particle physics to nuclear and atomic physics to condensed matter physics, then chemistry and biology and neuroscience, the variety in construction becomes increasingly important. It is more difficult to reproduce a crystal than it is to reproduce a hydrogen atom, and it is even more difficult to reproduce cell cultures or tissue. As variety increases, expectations for precision and reproducibility go down. This is the case already within physics: Condensed matter physics isn’t as precise as elementary particle physics.

Once you move past a certain size, beyond the messy regime where human society lies, things become easier again. Planets, stars, and galaxies as a whole can also be described with high precision, because for them the details (of, say, the organisms populating the planets) don’t matter much.

And so the standards for precision and reproducibility in physics are much higher than in any other science not because physicists are smarter or more ambitious, but because the standards can be higher. Lower standards for statistical significance in other fields are nothing researchers should be blamed for; they come with the data.

It is also the case, though, that since physicists have been dealing with statistics and experimental uncertainty at such high precision for hundreds of years, they sometimes roll their eyes about the erroneous handling of data in other sciences. It is for example a really bad idea to choose a way to analyze data only after you have seen the results, and you should never try several methods until you find a result that crosses whatever level of significance is standard in your field. In that respect I suppose it is true that in physics “everything is better,” because the training in statistical methodology is more rigorous. In other words, one is led to suspect that the trouble with reproducibility in other fields of science is partly due to preventable problems.
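
To see just how quickly this goes wrong, here is a minimal simulation – only a sketch with made-up numbers (the sample size, the number of studies, and the number of attempts are all invented for illustration), not anyone’s actual study – of what happens when you keep re-analyzing pure noise until something crosses p < 0.05:

    # A sketch (illustrative only): trying several analysis variants on pure
    # noise until one crosses p < 0.05 inflates the false-positive rate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 2_000   # simulated studies with no real effect at all
    n_samples = 30      # sample size per group (made-up number)
    n_variants = 20     # analysis attempts per study (made-up number)

    false_positives = 0
    for _ in range(n_studies):
        for _ in range(n_variants):
            # Each attempt is an independent look at noise; real p-hacking
            # (outlier rules, subgroups, covariates) is correlated across
            # attempts, but similar in spirit.
            a = rng.normal(size=n_samples)
            b = rng.normal(size=n_samples)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                false_positives += 1
                break  # stop at the first "significant" result and publish

    print(f"Null studies reporting p < 0.05: {false_positives / n_studies:.2f}")
    # With 20 independent tries this approaches 1 - 0.95**20, about 0.64,
    # far above the nominal 5 percent.

The nominal significance level means nothing once the number of attempts goes unreported.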

About the second part of your question. The relation between theoretical physics and experimental physics goes both ways. Sometimes experimentalists have data that need a theory to explain them. And sometimes theorists have come up with a theory that needs new experimental tests. This way, theory and experiment evolve hand in hand. Physics, like any other science, is all about describing nature. If you make up a theory that cannot be tested, you’re just not doing very interesting research, and you’re not likely to get a grant or find a job.

Theoretical physicists, as they “make up theories,” are not free to just do whatever they like. The standards in physics are high, both in experiment and in theory, because so much data is known so precisely. New theories have to be consistent with all the high precision data accumulated over hundreds of years, and theories in physics must be cast in the form of mathematics; this is an unwritten rule, but one that is rigorously enforced. If you come up with an idea and are not able to formulate it in mathematical terms, nobody will take you seriously and you will not get published. This is for good reason: Mathematics has proved to be an enormously powerful way to ensure logical coherence and prevent humans from fooling themselves with wishful thinking. A theory lacking a mathematical framework is today considered very low standard in physics.

The requirements that new theories both be in agreement with all existing data and be mathematically consistent – i.e., not lead to internal disagreements or ambiguities – are not easy to fulfil. Just how hard it is to come up with a theory that improves on the existing ones and meets these requirements is almost always underestimated by people outside the field.

There is, for example, very little you can change about Einstein’s theory of General Relativity without ruining it altogether. Almost everything you can imagine doing to its mathematical framework has dire consequences that lead either to mathematical nonsense or to blatant conflict with data. Something as seemingly innocuous as giving a tiny mass to the normally massless carrier field of gravity can entirely spoil the theory.
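
For the technically inclined, here is what “giving a tiny mass to the carrier field of gravity” means schematically – a sketch only, with conventions and numerical factors omitted. One adds the Fierz-Pauli mass term to linearized General Relativity:

    \mathcal{L} = \mathcal{L}_{\mathrm{lin.\,GR}}[h_{\mu\nu}] - \frac{m^2}{4}\left(h_{\mu\nu}h^{\mu\nu} - h^2\right), \qquad h \equiv \eta^{\mu\nu}h_{\mu\nu}

Even this, the unique ghost-free mass term at linear order, runs into trouble: the limit m → 0 does not smoothly reproduce massless General Relativity – the prediction for the bending of light differs by a finite amount (the vDVZ discontinuity) – and generic interacting extensions develop an unstable ghost mode. That is the sense in which a seemingly tiny modification spoils the theory.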

Of course there are certain tricks you can learn that help you invent new theories that are not in conflict with data and are internally consistent. If you want to invent a new particle for example, as a rule of thumb you better make it very heavy or make it very weakly interacting, or both. And make sure you respect all known symmetries and conservation laws. You also better start with a theory that is known to work already and just twiddle it a little bit. In other words, you have to learn the rules before you break them. Still, it is hard and new theories don’t come easily.

Dark matter is a case in point. Dark matter was first spotted in the 1930s. Eighty years later, after the work of tens of thousands of physicists, we have but a dozen possible explanations for what it may be that are now subject to further experimental test. If it were true that in theoretical physics you “can make up anything,” we’d have hundreds of thousands of theories for dark matter! It turns out though that most ideas don’t meet the standards, and so they are discarded very quickly.

Sometimes it is very difficult to test a new theory in physics, and it can take a long time to find out how to do it. Pauli, for example, invented a particle, now called the “neutrino,” to explain some experiments that physicists were confused about in the 1930s, but it took almost three decades to actually find a way to measure this particle. Again, this is a consequence of just how much physicists know already. The more we know, the more difficult it becomes to find unexplored tests for new ideas.

It is certainly true that some theories that have been proposed by physicists are so hard to test they are almost untestable, like, for example, parallel universes. These are extreme outliers though, and, as I have complained earlier, that they are featured so prominently in the press is extremely misleading. There are few physicists working on this, and the topic is very controversial. The vast majority of physicists work in down-to-earth fields like plasma physics or astroparticle physics and have no business with the multiverse or parallel universes (see my earlier post “What do most physicists work on?”). These are thought-provoking topics, and I find it interesting to discuss them, but one shouldn’t mistake them for being central to physics.

Another confusion that often comes up concerns the relevance of physics to other fields of science, and the discussion at Neuroskeptic’s blog post is a sad example. It is perfectly okay for physicists to ignore biology in their experiments, but it is not okay for biologists to ignore physics. This isn’t because physicists are arrogant; it is because physics studies objects in their simplest form, where their more complicated behavior doesn’t play a role. The opposite is not the case: The simple laws of physics don’t just go away when you get to more complicated objects; they remain important.

For this reason you cannot just go and proclaim that human brains somehow exchange signals and store memory in some “cloud,” because there is no mechanism, no interaction, by which this could happen that we wouldn’t already have seen. No, I'm not narrow-minded, I just know how hard it is to find an unexplored niche in the known laws of nature to hide some entirely new effect that has never been observed. Just try yourself to formulate a theory that realizes this idea, a theory which is both mathematically consistent and consistent with all known observations, and you will quickly see that it can’t be done. It is only when you discard the high standards of physics that you really can “make up anything.”

Thanks for an interesting question!

22 comments:

  1. I want to make some points (rants) about the statistics quality discussion, because it's worse than "the data are inherently noisy" and "people are poorly trained in statistics". (Source: I'm a physicist who switched to neuroscience and later infectious disease (where I'm staying).) Please forgive the generalizations below. By "people", "the theorists" ... I mean that the field norms for minimal acceptable science by tenure-track faculty and their people are ....

    My experience in neuroscience is that people want to skip steps. The experimentalists are incentivized by short-sighted funding models, paper-chasing, and the desire to do most experiments within a single lab to act as if many small-sample-size studies will somehow find reliable effects in noisy data. As long as everyone agrees it's okay to assume that p<0.05 without multiple testing correction means you've found something interesting, everyone can keep publishing on schedule and keep grants for individual projects around a million dollars (animals are expensive!). This leads to things like assuming that the only signal that matters is the difference in averages across brains from two different treatments, while the variability among brains within a treatment goes unexamined.

    On the theory side, there are problems of similar scope. First, most theory is not done by trained theorists, and so there are a lot of poorly defined verbal models that everyone can interpret in different ways to different ends. Second, the theorists trained in physics, comp sci, and math are also often skipping steps, by pretending that superficial analogies between the math of the month and coarse neurological structures and aggregate neural data somehow indicate that the theory/model is close to the biology.

    The best experimental research programs spend years dissecting well-defined systems piece by piece, consuming many grad students in the process. It's only after similar systems in multiple animals have been done well that we seem to get reliable synthesis of ideas about how the system works (photoreception and the first few stages of visual processing are a good example). The best theory/modeling right now is either closely tied to a specific experiment (rare, because experiment and theory don't talk enough) or is openly less concerned with biological realism but aimed at understanding basic features of how to compute with high-dimensional dynamical systems. The gap in scope between the best experiment and theory is often large because that's where the field actually is: still collecting essential facts, pretty far from the Newtonian revolution. Neuroscience is incredibly complex compared to physics, so it's no one's fault that deep synthesis is far away, but it's silly to pretend it's close in grant applications, paper discussions, and pop science articles.

    These problems plague medical and global health research too, but at least there, when people are dying, the junk can only propagate so far before we must acknowledge it's junk.

  2. Hi MFamulare,

    Thanks for your interesting comment, though it is somewhat depressing, especially to hear that these problems are driven by funding models and paper chasing. I suppose I hoped the grass was greener on the other side. Best,

    B.

  3. Dear Sabine,
    thanks for your comments, that was a very nice and accurate description of what it means to do research in physics.


    I would just like to add a little comment on your paragraph:
    "...And so the standards for precision and reproducibility in physics are much higher than in any other science not because physicists are smarter or more ambitious, but because the standards can be higher. Lower standards ..."

    That is very true, but the converse is also true: the standards are high because physicists choose to study only problems which, first, can be mathematically modelled and, second, are experimentally tractable under such very high standards.
    Whatever else there is gets considered beyond the scope of physics.
    This is one of the big legacies of modern science since the 16th century (and one of the reasons for the big advances since then). It is usually an unwritten rule, but a very powerful razor to guide the directions of research (and eliminate any temptation of pursuing pseudosciences and/or metaphysical problems).

    The wonderful thing is that nature responds positively to this rule: if you treat her with rigour, she rewards you with a bounty of unexpected results and beautiful theories beyond your initial imagination.

    cheers!

  4. It's worth pointing out that I'm more pessimistic than average on this issue. The average take is probably much closer to your view that things are inherently noisy and we do the best we can.

    And thank you for your blog. I've been (mostly) silently reading since circa 2006!

  5. "Condensed matter physics isn’t as precise as elementary particle physics."
    Could you elaborate on this? At first instance, I disagree strongly.

  6. Hi Erik, I'm thinking of stuff like electric/optic resistivity/conductivity, temperature/pressure dependence thereof, viscosity, elasticity, magnetic phases and so on. Compare that to, say, the muon g-2. What was on your mind? Best,
    B.

  7. Hi Sabine,

    I tend NOT to share your optimistic remark " that in physics [...] the training in statistical methodology is more rigorous."

    I don't think compulsory courses in statistical methodology are the norm for physics students. Five years after my PhD in optics I personally feel I have almost no clue about statistics, beyond putting error bars more or less automatically on a graph.

    Your remark is hopefully valid in the field of high-energy and particle physics, but most of the "nano-research" community, for example, is not properly trained in statistics.

  8. So Bee, do you believe physicists should be open and frank about the current situation?
    It's just that the person was clearly referring to the fact that no theories have produced predictions for decades. Reading your reply, it's as if that weren't true.
    Do you think you might have misled him?

  9. Chris,

    I think you didn't understand what I wrote, or maybe you didn't read it. What is misleading here is not me but the press, which suggests that there is something at odds in physics when what they are actually referring to is a fringe area. Your statement that "no theories have been producing predictions for decades" is wrong in a ridiculous way. Why, for example, do you think everybody tried to create graphene? Because it was predicted to have awesome properties. Predictions of General Relativity and the standard model have been confirmed to extremely high precision, for decades, over and over again – here is an example just from today. In condensed matter physics and quantum optics they have now been able to confirm all kinds of effects and quasi-particles that were predicted to be there but had never been seen before, such as decoherence in action or magnetic monopoles. The problem is that the press doesn't like headlines that say physicists calculated weak gravitational lensing statistics and found what they expected. But even in pop sci speech we read "Einstein was right" over and over again. So please, what are you referring to? Best,

    B.

  10. I have estimated the number of live dark matter explanations (apart from fine details like exact parameter values) at about a dozen several times in the last couple of weeks, and it is encouraging to see someone else independently making the same guesstimate.

    FWIW, I read the new arXiv abstracts pretty much every day and would offer a somewhat less charitable view of how grounded theoretical physics papers are in practice.

    First, there is an entire cottage industry in "toy models" that admittedly aren't claimed to be mathematically consistent (e.g. the original MOND theory) or aren't claimed to be consistent with existing data, offered to show that it might be possible to come up with a mathematically consistent version consistent with the data some day. In GR, for example, it is very common to prove results in 2 or 3 or >5 or 10 dimensions without establishing that the conjecture is true in the 4 dimensions we live in. (Of course, in lattice QCD it actually is standard procedure to model your system with the wrong number of quarks and color charges and the wrong pion mass calibrations a few times, in order to extrapolate the results to the true parameters of real-world QCD, because the math is apparently easier that way.)

    Second, there are lots of papers that drift into pure mathematics: for example, cataloging every possible finite group, or asking whether the generalization of Lorentz invariance to a system of arbitrarily many branes with a non-abelian connection is unique. These are the tool-and-die makers of the field, busy inventing tools that might someday be relevant to some problem but have no direct application to physics at all.

    Third, the standard that new theories be consistent with existing evidence isn't honored very rigorously in the field. For example, I'd guess that at least a third of new dark matter papers propose solutions that are inconsistent with one or another solidly established piece of empirical evidence, mostly because specialists in one sub-field are often unfamiliar with the literature in another. For example, it is very common to see a particle physicist propose a dark matter candidate that various astronomy observations have ruled out, as if it were viable.

    Your point that a lot of theoretical physics involves theme-and-variations papers exploring one of the modest number of promising avenues for new physics that have already been the subject of significant investigation (e.g. SUSY variants, the leading GUT variants, see-saw models with right-handed neutrinos, the odd relic Technicolor paper, f(R) gravity papers, massive gravity papers, etc.) is well taken, however.

    Lastly, theoretical physics does seem to have more than its fair share of what I would call "hand waving" papers that make proposals while ignoring the "elephants in the room" that make a particular approach much more problematic than is acknowledged. For example, briefly acknowledging that the mathematics as formulated in a theory has too many ghosts, but dismissing that concern because an undescribed function of an unstated variable can be assumed to exist and fix all of the problems "because science", since ghosts aren't observed in the real world.

    So yes, there are more real standards in theoretical physics than one might naively assume, but there is certainly a lot of fairly wacky stuff out there as well, often with no viable game plan by which the paper could ever contribute to anything that could ever actually describe the real world mathematically. All told, I'd estimate that perhaps 20%-40% or more of all theoretical physics preprints are arguably in that category for one reason or another.

  11. (This isn't to say that the experimentalists are always better. For example, there really needs to be an entire category for proposed experiments, which may or may not ever be done unless someone reads the paper and decides to grant a multi-million or multi-billion dollar funding line, as opposed to the results of experiments that have already been done. There is also quite a cottage industry in writing review papers based on experimental results announced at recent professional conferences that are pretty much identical to half a dozen other review papers of precisely the same things, written by someone else who went to the same conference sessions.)

  12. Hi Bee,

    Of course you are right that measurements of g-2 in particle physics are extremely precise. However, what about the extremely precise measurements of e²/h using the quantum Hall effect, which can even be used as a standard for electrical resistance? This shows that condensed matter physics can be just as precise. Or perhaps I am missing some orders of magnitude, that could be true :)

    Regards,

    Erik

  13. Erik,

    Sure, you can make very precise measurements in cond-mat, especially at low temperatures, and certainly also some in astrophysics (think spectral lines) and so on. I didn't mean to insult condensed matter physicists, but as a rule of thumb: more possible variety means more uncertainty means less precision. The same is already the case within particle physics. The LHC collides hadrons, which are not elementary particles, and the need to model the distribution of quarks and gluons brings in additional uncertainty. The collision energy is also used somewhat inefficiently because it gets redistributed among the various constituents, which is one of the main reasons many physicists would rather see a lepton collider. Best,

    B.

  14. Andrew,

    Which arXiv categories are you reading?

    You are confusing toy models with phenomenological models. A toy model is normally a mathematical abstraction that is meant for conceptual understanding or educational purposes, or maybe to understand a particular limit. A phenomenological model, on the other hand, is a model that describes something observable (the "phenomenon") but often does not derive from a more fundamental theory. Phenomenological models typically appear as pre-stages, call them placeholders, for more complete, and more consistent, theories. They play an important role in bridging the gap between theory and experiment but shouldn't be misunderstood as the last word spoken. Best,

    B.

  15. Hi Bee,

    Thank you for your clarification, a very reasonable explanation. Rock on with your blog!

    Best,

    Erik

  16. "You are confusing toy models with phenomenological models."

    You are, of course, right. I was being sloppy in my language and as a result conflating two separate issues.

    First, there are quite a few papers with "toy models", which as you note, generally aren't intended to replicate reality and instead just demonstrate an idea.

    Second, there are quite a few papers with "phenomenological models" which show numerical relationships of physical quantities without necessarily having a theoretical basis, and which also often have a limited domain of applicability, outside of which the crude version of the phenomenological model would contradict known physical laws or data.

    The main point of mentioning them at all is simply to demonstrate that lots of papers that are perfectly publishable don't live up to the high ultimate objective of a theoretical physicist: to produce work that is both mathematically consistent and consistent with all evidence observed to date, within error bars. Papers that actually meet that high standard are probably a minority of the total.

    In terms of what I read, I usually start with hep-ex, then lattice, then astrophysics, then phenomenology, and then theory, skipping the rest of the categories absent some odd whim to look at something different (I can barely make sense of a lot of the condensed matter stuff, and for some reason find a lot of the nuclear and atomic stuff comparatively boring). I always read titles, always skim abstracts, and read at least some of the PDFs of the most interesting ones, maybe half a dozen a day; more on a good day with interesting postings, less on a busy day in the office or one with uninteresting postings.

  17. Hi Sabine - I didn't say there were no predictions in the last few decades, or no good work. And I couldn't have meant that predictive theories like Relativity had stopped predicting in the domain they used to, because that would be really nonsensical.

    I think the press are pretty positive. If you go and look at any news portal, everything scientists want to announce seems to get announced.

    When you say this is all about some fringe activity: this 'fringe' is the edge of knowledge, right? Fine, the work done in earlier periods still has underived consequences, or exists for checking new resolutions.

    At the edge of knowledge things have slowed dramatically. Theories at that edge have not been predictive for decades now. Strings have been around for 50 years.

    That chap was tongue-in-cheek in his wording, and you could easily have opted to ignore him on those grounds. But you opted to take him seriously, as if he meant the whole domain of physics.

    But he obviously was asking about what is going on at the edge of knowledge.

    If by 'fringe' you meant ideas like the multiverse, the thing is that that is just a word, and you'd really have to take the essence of what theories like that are about before taking the measure of them.

    I think we're talking about any theory that depends on an infinite resource right at its explanatory core, without which the theory collapses. A multiverse theory is an infinity theory.

    And it is not fringe. The current standard view of cosmology is an infinity theory, for example.

  18. Hi,

    Thank you very much for responding to my question so exhaustively. I appreciate it. And sorry for my late response.

    From what you are writing I think that physics is indeed better (i.e. more committed to scientific methods) than (some) other sciences (e.g. neuroscience).

    Data in other fields might be inherently noisier, but of course it is problematic when the noise is not described (e.g. with confidence intervals), so that you can’t know to what degree you can trust the data. Data are described as being clear (in neuroscience) even when they (clearly ;) ) aren’t.

    “It is for example a really bad idea to choose a way to analyze data only after you have seen the results, and you should never try several methods until you find a result that crosses whatever level of significance is standard in your field.”

    Yes, the second is frequently done… and I worry about that too. The first: For the kind of data I’m working with, it is not possible to see anything (any results) before you have analyzed them. If it were, why would you need to analyze them?* Is this different in physics? Or did I misunderstand?

    “The requirements that new theories both be in agreement with all existing data and be mathematically consistent – i.e., not lead to internal disagreements or ambiguities – are not easy to fulfil. Just how hard it is to come up with a theory that improves on the existing ones and meets these requirements is almost always underestimated by people outside the field.”

    I can imagine that it is, but what did you mean with your comment on Neuroskeptic’s blog then? “I work in theoretical physics. In some sense it's even worse there. You can make up whatever you want and if it gets published and it's shown to be wrong, you can say, "well, I was dumb," rather than "I deliberately fabricated a theory because I wanted it to look nice," though that would be a good summary of most papers.”

    *(Other than to potentially know error-rates or degrees of belief. But that might work much better in physics than in other sciences:
    http://www.fisme.science.uu.nl/staff/christianb/downloads/meehl1967.pdf Meehl: Theory-Testing in Psychology and Physics: A Methodological Paradox
    http://ist-socrates.berkeley.edu/~maccoun/PP279_Cohen1.pdf Cohen: The earth is round (p<0.05) ).

  19. Reply to MFamulare :

    Thank you so much for your comment, I can relate very much to it!

    “My experience in neuroscience is that people want to skip steps. The experimentalists are incentivized by short-sighted funding models, paper-chasing”

    As I wrote in the comment Dr. B. responded to, I’m a PhD student, so obviously my practical knowledge of the field is limited (I’ve only worked in one department so far, two if you count my master’s thesis), but that is my experience as well and I’m shocked by it.

    The simplest example (and one I can give without any danger and without blaming anyone specific) is: I’ve given this https://youtu.be/HRLxtJeopDs (8min video link) as a talk at our institute. I was prepared for an interesting/tough discussion, but the only response I got was: Well, yeah, but we have to publish. That was it! Nothing else.

    (And the much worse thing is that I’m stuck with replicating something which won’t replicate... ...)

    “As long as everyone agrees it's okay to assume that p<0.05 without multiple testing correction means you've found something interesting, everyone can keep publishing on schedule and keep grants for individual projects around a million dollars (animals are expensive!). This leads to things like assuming that the only signal that matters is the difference in averages across brains from two different treatments, while the variability among brains within a treatment goes unexamined.”

    Well, we do fMRI (on humans), so we have to do multiple testing corrections (for the voxels tested), and the variability among brains is usually taken into account. But then, if the desired result is not achieved, the “same” analysis is repeated many times with slightly different parameters, and for that no correction is made. Furthermore, most studies are underpowered to find the desired effect, so it is questionable what they find, if they do find something after all (e.g. http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html ).

    I’ve only come to all these problems quite recently, so it might be that I’m under- or overestimating them, but they keep me awake at night: We study humans; the studies are not (always) pleasant for them, even though they of course give consent. But if they knew, they might withdraw their consent. We use money that might be better used otherwise, e.g. one big study instead of 10 or 100 small ones, one planned study instead of 100 studies in which the hypotheses are randomly guessed. I don’t know how to justify that.
    “On the theory side, there are problems of similar scope. First, most theory is not done by trained theorists”

    Yes, there are. I was just recently thinking a lot about theory making (and have not come to a conclusion), but I hear the term “trained theorists” for the first time. The problem I was aware of is that most research (that I know about) in neuroscience is done without any, or with a very weak, theoretical basis, i.e. it does not test a theory. Of course, on the other hand, it might be that there are simply no theories yet (because of no or insufficiently conclusive data), so that first data have to be collected and then brought together into a model or theory. But it might be helpful to be clear about that, because the statistics (can) also differ. (http://www.ncbi.nlm.nih.gov/pubmed/26172257 , http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3610572/ )

    What I also hate is that when some results do not fit you always have to explain them, even when there is no known explanation (other than measurement error, imprecise instruments, instructions for participants that might have been unclear, etc.; so errors, basically). I am not saying that there is no explanation at all, just that the “made up explanations” often don’t make sense, and it would be much more honest to admit that the result might be due to some error (and if one could at least locate the error, that would be better; but instead some wild theories are made up). Of course I don’t want to say that it is not worth thinking about possible explanations, just that error is often more likely!

  20. Re: "I think we're talking about any theory that depends on an infinite resource right at its explanatory core, without which the theory collapses. A multiverse theory is an infinity theory."

    This is above my physics pay-grade, but doesn't the expanding-universe/Big Bang theory assume an infinite supply of (new) space, and doesn't quantum mechanics assume an infinite supply of virtual particles? Personally, I have nothing against getting "something from nothing" (especially of the form zero --> -E + E). Some people seem to consider it a basic principle of logic that we can't, but I consider it an assumption, not a conclusion. Either we can (in some circumstances) or we can't, and if we can then yes, there is an infinite resource. If we can't, then how do we explain Hubble's Constant and the Casimir effect?

    I suppose some theories require more infinite resources than others, but I don't feel that the concept is ridiculous per se, if that was implied.

  21. Chris,

    No, you are just plainly wrong, and I find it stunning that you continue to disagree with me even though the facts are in your face. The 'edge of knowledge' isn't only in the search for a theory of everything. There is an 'edge of knowledge' in every discipline and in every area of physics. And I am telling you that the vast majority of physicists have no business whatsoever with the multiverse, a theory of everything, and similar distractions. If you think anything else, it just serves to prove my point about how misleading the representation of physics in the press is.

    No, it is plainly not true that everything that goes on in physics appears in the press. Here is a simple example. This is totally awesome work, uncontroversial, innovative, with potential applications in drug development - not a word in the press. Why? Of course you can say that's a singular case, but it isn't. The vast majority of physics research in fields like condensed matter physics, plasma physics, statistical mechanics, and so on never receives the slightest mention. The public by and large believes these fields don't exist or are tiny. The opposite is true: This is what most physicists work on.

    Your idea that "the current standard view of cosmology is an infinity theory" is another of these bizarre mistaken beliefs that you get from reading too much popular science news. The current standard view of cosmology is that inflation is an effective model and everything beyond that is philosophical speculation. Before you continue to disagree with me, ask yourself where you get your impression from and what reason you have to believe that it is actually reflective of the community. Best,

    B.

  22. "The vast majority of physicists work in down-to-earth fields like plasma physics or astroparticle physics."

    How has nobody mentioned how beautiful this was? It was a great pun that did not deserve to be ignored. I am embarrassed on behalf of all human beings that no one complimented this pun earlier.

