
Wednesday, January 08, 2020

Update January 2020

A quick update on some topics that I previously told you about.


Remember how I explained the issue with the missing electromagnetic counterparts to gravitational wave detections? In a recent paper, a group of physicists from Russia claimed they had evidence for the detection of a gamma-ray event coincident with the gravitational wave detection from a binary neutron star merger. They say they found it in the data from the INTEGRAL satellite mission.

Their analysis was swiftly criticized informally by other experts in the field, but so far there is no formal correspondence about this. So the current status is that we are still missing confirmation that the LIGO and Virgo gravitational wave interferometers indeed detect signals from outer space.

So much for gravitational waves. There is also news about dark energy. Last month I told you that a new analysis of the supernova data showed they can be explained without dark energy. The supernova data, to remind you, are the major evidence that physicists have for dark energy. And if that evidence does not hold up, that's a big deal, because the discovery of dark energy was awarded a Nobel Prize in 2011.

However, that new analysis of the supernova data was swiftly criticized by another group. This criticism, to be honest, did not make much sense to me, because it picked on the use of the coordinate system, which was basically the whole point of the original analysis. In any case, the authors of the original paper then debunked the criticism. And that is still the status today.

Quanta Magazine was happy to quote a couple of astrophysicists saying that the evidence for dark energy from supernovae is sound, without giving further reasons.

Unfortunately, this is a very common thing to happen. Someone, or a group, goes and challenges a widely accepted result. Then someone else criticizes the new work. So far, so good. But after this, what frequently happens is that everybody else, scientists as well as the popular science press, will just quote the criticism as having sorted out the situation, simply so that they do not have to think about the problem themselves. I do not know for sure, but I am afraid that this is what's going on here.

I was about to tell you more about this, but something better came to mind. The lead author of the supernova paper, Subir Sarkar, is located in Oxford, and I will be visiting Oxford next month. So I asked if he would be up for an interview, and he kindly agreed. You will thus have him explain his work himself.

Speaking of supernovae, there was another paper just a few days ago that claimed that supernovae are actually not very good standard candles, and that indeed their luminosity might just depend on the average age of the star that goes supernova.

Now, if you look at more distant supernovae, the light has had to travel for a long time to reach us, which means they are on average younger. So, if younger stars that go bang have a different luminosity than older ones, that introduces a bias in the analysis that can mimic the effect of dark energy. Indeed, the authors of that new paper also claim that one does not need dark energy to explain the observations.
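
To get a feeling for the size of the effect that any such bias would have to mimic, here is a minimal sketch in Python. It compares the supernova distance modulus in a flat universe with dark energy to one without; the parameter values are merely illustrative and not taken from any of the papers I mentioned.

```python
# Minimal sketch: magnitude difference between an accelerating universe
# (with dark energy) and a decelerating, matter-only one. Illustrative
# parameter values; not from any of the papers discussed in this post.
import numpy as np
from scipy.integrate import quad

H0 = 70.0        # Hubble constant in km/s/Mpc (illustrative)
C = 299792.458   # speed of light in km/s

def luminosity_distance(z, omega_m, omega_lambda):
    """Luminosity distance in Mpc for a spatially flat FRW universe."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_lambda)
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C / H0) * comoving

def distance_modulus(z, omega_m, omega_lambda):
    """mu = 5 log10(d_L / 10 pc); the factor 1e5 converts Mpc to units of 10 pc."""
    return 5.0 * np.log10(luminosity_distance(z, omega_m, omega_lambda) * 1e5)

for z in (0.1, 0.3, 0.5, 1.0):
    delta_mu = distance_modulus(z, 0.3, 0.7) - distance_modulus(z, 1.0, 0.0)
    print(f"z = {z:.1f}: supernovae appear {delta_mu:.2f} mag fainter with dark energy")
```

Any astrophysical effect that dims distant supernovae by a few tenths of a magnitude, whether dark energy or an age-dependent luminosity, enters the data in exactly the same way; that is the whole problem.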

This gives me somewhat of a headache because these are two different reasons why dark energy might not exist, which raises the question of what happens if you combine them. Maybe that makes the expansion too slow? Also, I said this before, but let me emphasize again that the supernova data are not the only evidence for dark energy. Someone’s got to do a global fit of all the available data before we can draw conclusions.

One final point for today: the well-known particle physicist Mikhail Shifman has an article on the arXiv that could best be called an opinion piece. It is titled “Musings on the current status of high energy physics”. In this article he writes “Low-energy supersymmetry is ruled out, and gone with it is the concept of naturalness, a basic principle which theorists cherished and followed for decades.” And in a footnote he adds “By the way, this principle has never been substantiated by arguments other than aesthetical.”

This is entirely correct and one of the main topics in my book “Lost in Math”. Naturalness, to remind you, was the main reason so many physicists thought that the Large Hadron Collider should see new particles besides the Higgs boson. Which has not happened. The principle of naturalness is now pretty much dead because it’s just in conflict with observation.

However, the particle physics community has still not analyzed how it could possibly be that such a large group of people for such a long time based their research on an argument that was so obviously non-scientific. Something has seriously gone wrong here and if we do not understand what, it can happen again.

207 comments:

  1. Thanks for your very interesting article.
    A minor issue: the link to the paper about SN Ia as standard candles seems to be pointing to a different paper.

    1. Thanks for pointing that out; I have fixed the link now.

  2. In former times, people had different feelings about soundness than modern people do. See e.g. the seventh chord

    https://en.wikipedia.org/wiki/Seventh_chord

    So maybe people may find the physics aesthetic afterwards.

    1. A good point. Wasn't the fourth once considered the work of Satan?

    2. Yes, in the form of the tritone.

      https://de.wikipedia.org/wiki/Tritonus

      But in the English wiki version it is only mentioned with:

      "Johann Joseph Fux cites the phrase in his seminal 1725 work Gradus ad Parnassum, Georg Philipp Telemann in 1733 describes, "mi against fa", which the ancients called "Satan in music""

  3. "So the current status is that we are still missing confirmation that the LIGO and Virgo gravitational wave interferometers indeed detect signals from outer space."

    But this is a case where absence of evidence is not evidence of absence; there can be gravitational-wave events without electromagnetic counterparts.

    1. Phillip Helbig 8:47 AM, January 08, 2020

      Unlike universal fine-tuning where absence of evidence could well be evidence of absence. How's that paper coming along in which you will assume that the values of physical constants follow certain probability distributions *without any justification whatsoever*? You should get in touch with the Templeton Foundation - they pay cranks a fortune to publish nonsense.

    2. Absence of evidence is never evidence of absence in a logical sense. The paper is coming along fine. As you can see from another comment, I've just published a 27-page paper on a completely different topic; polemic against crackpots has a lower priority. How many scientific papers have you published? However, I don't make the assumption which you claim I do.

      The Templeton Foundation does pay cranks to publish nonsense. However, there are many (otherwise) sensible people who have taken Templeton money. As far as I can tell, it hasn't affected their outlook. Opinions vary. Some don't want to take Templeton money at all. Some are happy to take laundered Templeton money (e.g. not accepting it directly, but going to a conference funded by it). Some take it directly, perhaps arguing that it is better that they have it than someone else.

      However, Socrates would have sent you flying for your lack of grasp of logic. A more legitimate question would be who funds you to write hateful comments?

    3. Absence of evidence is absolutely evidence of absence. Absence of PROOF is not PROOF of absence.

      sean s.

    4. Phillip Helbig 11:19 AM, January 10, 2020

      "How many scientific papers have you published?"
      None. I'm not a scientist. But, unlike you and some professional physicists on here and a Physics Nobel Prize winner and the Astronomer Royal who is paid with my taxes, I do at least know what modus ponens is and that the standard of truth in natural science is observation. Maybe I should publish a paper (of 3 lines) on how modus ponens works and send it to Brian Schmidt and Martin Rees...

      "However, I don't make the assumption which you claim I do."
      Yes, you do. You assume it is physically possible for physical constants like the Cosmological Constant to take values other than the only one that has been measured in the only universe we know of. This is pure assumption. Also, you think that if someone points this out to you, they are claiming that the value of the Cosm. Const. obeys a Dirac delta probability distribution. Another basic, basic logical blunder. I and others are pointing out to you that we *don't know* what all the physically possible values of the Cosm. Const. are - the only known physically possible value is the one that has been measured.

      This is one step of simple logic, and yet you don't understand it. The interesting scientific question is how can you have a brain and yet not understand one step of logic. It's a bit like my dog staring at the back door when he wants to go out - the operation of the door handle completely transcends his brain's comprehension. Similarly, the 3 steps of modus ponens appear to transcend your brain's limits.

      "crackpots"
      Merriam: Crackpot: one given to eccentric or lunatic notions.
      So go ahead and tell me what lunatic notions I am given to.
      For example, Luke Barnes is a crackpot because he believes that a supernatural being created the universe and then 13.7 billion years later supernaturally impregnated a human woman. And 1/3 of the physics book he wrote was about this! Yet, in your review of the book, you failed to point out that Luke Barnes is mad and should seek professional help for his delusions, rather than describe them in a popular physics book.

      "your lack of grasp of logic"
      Yes, but yet again you don't point out where my lack of logic is. You claim the universe is fine-tuned, that I am a crackpot and illogical but provide no evidence of any of it.

      " who funds you to write hateful comments?"
      All my comments on here have been pointing out the truth - that religions are delusions, that science has refuted all religions, that there is no physical evidence of fine-tuning, string theory, multiverses, MWI, inflation, that the C-13 resonance cannot be shown to be an example of the Anthropic Principle. All these are plainly true. You presumably find this interest in the truth strange because you don't share it.

      "who funds you"
      I fund me. And I also presumably funded you with my taxes at one point. So I'd prefer it if you learned modus ponens or gave me my money back.

    5. Yes, absence of evidence *can* be evidence of absence. This is basic probability theory: if one expects to see certain evidence if the hypothesis is true, but not if it is false, then absence of that expected evidence makes the hypothesis less likely.

      In math: If Pr(E | H) > Pr(E | not H), then Pr(not E | not H) > Pr(not E | H), and using Bayes' Rule, Pr(H | not E) < Pr(H). That is, absence of the expected evidence yields a posterior probability for H that is lower than the prior probability.

      An example: Suppose I manage an apartment complex, and somebody tells me that squatters are living in unit 37B. I open the door to that unit, step inside, and find... nothing. No litter on the floor. No blankets or mattresses anywhere. No toiletries in the bathroom. No food in the refrigerator. The complete absence of any evidence of occupation is strong evidence that there are, in fact, no squatters.
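
      A small numeric check of this, with probabilities that are made up purely for illustration (nothing depends on the specific values):

      ```python
      # If E is more likely under H than under not-H, then *not* observing E
      # must lower the probability of H. Illustrative numbers only.
      prior_h = 0.5         # Pr(H)
      p_e_given_h = 0.8     # Pr(E | H): evidence expected if H is true
      p_e_given_nh = 0.2    # Pr(E | not H)

      # Law of total probability: Pr(not E)
      p_not_e = (1 - p_e_given_h) * prior_h + (1 - p_e_given_nh) * (1 - prior_h)

      # Bayes' rule: Pr(H | not E) = Pr(not E | H) * Pr(H) / Pr(not E)
      posterior_h = (1 - p_e_given_h) * prior_h / p_not_e

      print(f"Pr(H) = {prior_h:.2f}  ->  Pr(H | not E) = {posterior_h:.2f}")
      # 0.50 -> 0.20: the posterior drops, exactly as argued above.
      ```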

    6. Response to Steven Evans' latest comment here.

      "you don't point out where my lack of logic is"

      I'm going to point to one such instance, which is somewhat subtle:

      "You assume it is physically possible for physical constants like the Cosmological Constant to take values other than the only one that has been measured in the only universe we know of."

      This is a failure to understand what experimentalists and observers can do. Take G, the gravitational constant. It has been measured by many teams, many times, using many different methods. The results reported are a value and an uncertainty (sometimes a distribution of that uncertainty). Those values and uncertainties are largely consistent with each other, but at least some people claim that they (or that a subset) are not.

      It is illogical to claim, from the observational/experimental results, that G is a constant ... it could vary, from place to place or time to time, by amounts that are consistent with the uncertainties.

      That it is, indeed, fixed (i.e. has a single value, expressible to an arbitrary number of significant digits) is a common assumption, not a logical deduction.
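
      For what it's worth, a sketch of how such consistency checks are usually done, via an inverse-variance weighted mean and a reduced chi-squared; the numbers below are invented for illustration and are not real G measurements:

      ```python
      # Consistency check for repeated measurements of a "constant".
      # The values are invented for illustration; NOT actual G measurements.
      import numpy as np

      values = np.array([6.6742, 6.6739, 6.6755, 6.6730])  # in 1e-11 m^3 kg^-1 s^-2
      sigmas = np.array([0.0005, 0.0007, 0.0003, 0.0009])  # quoted 1-sigma errors

      weights = 1.0 / sigmas**2
      mean = np.sum(weights * values) / np.sum(weights)    # weighted mean
      mean_sigma = np.sqrt(1.0 / np.sum(weights))          # its uncertainty

      # Reduced chi-squared: ~1 means the scatter matches the quoted errors;
      # much larger values hint at underestimated errors or hidden systematics.
      chi2_red = np.sum(((values - mean) / sigmas)**2) / (len(values) - 1)

      print(f"weighted mean = {mean:.4f} +/- {mean_sigma:.4f}")
      print(f"reduced chi^2 = {chi2_red:.2f}")
      ```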

    7. JeanTate 11:19 AM, January 11, 2020
      Yes, Physics is an empirical subject and measurements have finite precision. Within those limits G, c, etc., are currently considered to be constants in this universe. That has nothing to do with the question of fine-tuning.
      Fine-tuning is the question of whether it is physically possible that the constants could have completely different values but have been tuned in some way to give us a universe of the nature we observe.
      There is no evidence whatsoever of universal fine-tuning, but bizarrely Phillip Helbig, Luke Barnes, the Astronomer Royal, a Physics Nobel Laureate, Physics Today, etc., don't seem to understand this simple fact.

    8. Thanks Steven Evans.

      Sorry, but your logic is still flawed. For example "the constants could have completely different values": 1.0001 and 1.0002 are just as "completely different" as 1.0001 and 100.01 are, from the perspective of logic.

      Also, as Lawrence Crowell points out in a comment here, it is a failure of logic to equate "tuning" with "tuned". As you do: "have been tuned in some way" vs "no evidence whatsoever of universal fine-tuning".

      Minor point: "Within those limits G, c, etc, are currently considered to be constants in this universe." Currently, c is defined to be constant, unlike G (and some others).

      Finally, re "There is no evidence whatsoever of universal fine-tuning": this is factually incorrect (note this is about tuning, not tuned). A more accurate statement would explain the context, which could include one or another model (preferably based on some physical theory) in which fine tuning exists as a possibility. Evidence may then be cited, either in support of such a model or as showing it to be inconsistent. There is a historical precedent: the fine-structure constant (see the Wikipedia entry for this for a quick introduction).

    9. JeanTate 10:16 AM, January 12, 2020

      Yes, "fine-tuning" refers to the possibility of statistical happenstance (e.g. the multiverse) as well as the possibility of some tuning mechanism (e.g. a creator). But there is no evidence of either, and you have provided none in your comment so I don't know what the point of your comment is.

      Constants are measured up to a certain precision. Fine-tuning is talking about the physical possibility that the constants could have taken values *outside* this empirically verified range. There is no evidence that such values are possible, and consequently *zero* evidence of tuning. I explained this in my previous comment. Now I have explained it again. It's a blindingly obvious point. Don't be a time-waster.

      The fine-structure constant is not an example of fine-tuning. It has a measured value (to a certain precision, I'm so glad that you are wasting my time by making me write the small print which everyone knows) and that is its only known physically possible value. So it is not known that it can be "tuned". This is primary school level logic. Did you not go to primary school?

      If you think there is evidence of universal fine-tuning, then present it. There is none on the physical record, so this would be a Nobel Prize-winning comment.

      You don't seem to understand basic logic or basic science. Maybe I could teach a class to you, Phillip Helbig, Lawrence, Luke Barnes, Brian Schmidt, and the Astronomer Royal on these basic matters.

    10. Steven Evans,

      A key "point" of my (now several) comments is that many of your descriptions (or diatribes) are illogical. Yet you are big on logic. I detect, in sequences of your comments, some weaving and dodging, wherein you walk back some earlier statements without explicitly acknowledging doing so.

      Another of your tactics, strawmen: "If you think there is evidence of universal fine-tuning, then present it."

      Just one more, for now: "There is no evidence that such values are possible". This is factually incorrect. The history of the fine-structure constant is enough to show this. It is also factually incorrect in that there are papers on possible variations in the values of the fine-structure constant, by cosmic history and location (I'd be happy to cite some if you can't find them yourself). It is true that such evidence is weak and not widely accepted ... but it is enough to demolish your blanket statements.

    11. JeanTate 10:59 AM, January 14, 2020

      "A key "point" of my (now several) comments is that many of your descriptions (or diatribes) are illogical."
      You have pointed out no problems with logic. You write nonsense. I explain to you why it's nonsense. Then you simply move on to writing more nonsense, as you have done here. And now I will explain why you are talking nonsense again.

      "Another of your tactics, strawmen: "If you think there is evidence of universal fine-tuning, then present it.""

      So you mean that you agree that there is no evidence of universal fine-tuning?

      "Just one more, for now: "There is no evidence that such values are possible". This is factually incorrect. "
      Again you are missing the point about fine-tuning. At any time the fine-structure constant has been considered to be only one particular constant value and the reason it takes that value is not known. This is not fine-tuning. If it turns out that the fine-structure constant does indeed vary by time and location then it's not a universal constant as currently thought. But that has nothing to do with fine-tuning.

      You seem to think that pointing out that what are currently thought to be constants may not be constants has some connection with fine-tuning. It doesn't. There may be perfectly good non-tuning physical reasons for the values the variable takes or the value the constant takes, unless you show otherwise. Which you haven't.

      I never ever tire of teaching people basic, basic, basic logic. I'm going to update my job on LinkedIn to Special Needs Teacher.

    12. The fine structure constant
      "is what it is".

      The sun does not revolve around the earth.
      The purpose of the universe is not to produce human beings.
      Particularly, not this lot! 🙂

      Actually, of all the physics constants discovered/invented/defined by people, the fine structure constant "runs".

      Of the many definitions of the fine structure constant, I prefer v1/c, where v1 is the velocity of the electron in the ground state of the hydrogen atom.

      v/c is the natural series expansion parameter in 'special relativity'.

      If you consider the relative velocities of the particles during an interaction, you can 'easily see' why the fine structure constant runs in qed calculations.
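
      That definition is easy to check numerically from the standard constants; note this gives the familiar low-energy value of alpha, not its running:

      ```python
      # alpha from standard constants, and the Bohr-model ground-state speed
      # v1 = alpha * c, so that alpha = v1 / c as stated above.
      import math
      import scipy.constants as const

      alpha = const.e**2 / (4 * math.pi * const.epsilon_0 * const.hbar * const.c)
      v1 = alpha * const.c

      print(f"alpha   = {alpha:.9f}")      # ~0.007297, i.e. ~1/137.036
      print(f"1/alpha = {1 / alpha:.3f}")  # ~137.036
      print(f"v1      = {v1:.4e} m/s")     # ~2.19e6 m/s, about 0.7% of c
      ```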

    13. Greg Feild 2:57 PM, January 15, 2020

      "you can 'easily see' why the fine structure constant runs in qed calculations. "
      So the fine-structure constant may not actually be constant, but there's still no evidence it's fine-tuned.

    14. I think I was arguing against fine tuning, although I guess I am not sure what it means!

  4. "Speaking of supernovae. There was another paper just a few days ago that claimed that actually supernovae are not very good standards for standard candles, and that indeed their luminosity might just depend on the average age of the star that goes supernova.

    Now, if you look at more distant supernovae, the light has had to travel for a long time to reach us, which means they are on average younger. So, if younger stars that go bang have a different luminosity than older ones, that introduces a bias in the analysis that can mimic the effect of dark energy. Indeed, the authors of that new paper also claim that one does not need dark energy to explain the observations."


    This will confuse people who don't know what it means already. Type Ia supernovae (which are the ones which matter here) are white dwarfs onto which matter accretes until it blows up. Typical stars which end up as white dwarfs live for several billion years (more massive ones have much shorter lifetimes and blow up as a different type of supernova). A given white dwarf can go supernova several times. We have no way of knowing how old a given supernova is, or rather the white dwarf involved. What does increase with redshift is the age of the universe at which the star formed. (Since star formation is an ongoing process, a given nearby supernova could be younger in its own life than a more distant one.)

    But I'm not completely sure since the paper you link to is "Search for Advanced LIGO Single Interferometer Compact Binary Coalescence Signals in Coincidence with Gamma-Ray Events in Fermi-GBM" which is probably not the one you mean.

    1. Sorry for the wrong reference, I have fixed the link now.

    2. Right. As I suspected, it's not the age of the star which goes supernova, but, in essence, the age of the universe when that star formed.

    3. A given white dwarf can go supernova several times.

      You're confusing nova outbursts (a brief period of runaway thermonuclear burning on the surface of a white dwarf of material accreted from a companion), which can indeed happen multiple times, with Type Ia supernovae, which involve the wholesale thermonuclear detonation of the entire white dwarf (or of both white dwarfs in the case of white-dwarf mergers) and necessarily only happen once.

    4. "You're confusing nova outbursts (a brief period of runaway thermonuclear burning on the surface of a white dwarf of material accreted from a companion), which can indeed happen multiple times, with Type Ia supernovae, which involve the wholesale thermonuclear detonation of the entire white dwarf (or of both white dwarfs in the case of white-dwarf mergers) and necessarily only happen once."

      Yes, you're right. But apart from that, what I wrote should be OK. In particular, the white dwarf can exist for some shorter or longer time before the supernova happens.

  5. The supposition might be that heavy element abundance might affect Ia event characteristics: not so many heavy elements were available the farther back you go. A second supposition might be that the repeated events might not be identical, with a definite trend from early outbursts to later outbursts. So maybe the material from the contributing star that accumulates on the surface of the white dwarf is drawn from successively different layers and has a somewhat different composition each time. A mechanism like this is not too hard to imagine, and the results could easily be skewed. In fact, both of these mechanisms might be at work.

    1. Of perhaps similar significance: while overall metal abundance correlates with a galaxy's age, for an individual star its local environment may be more important.

      For example, metal abundance in Main Sequence stars may rise rapidly (in cosmic time) in regions of rapid star-formation, e.g. a nuclear star-burst, but much more slowly in a star cluster in the outskirts of a low mass spiral galaxy say.

    2. There are so many variables, it comes as a surprise to me that these events can actually serve as "standards". On the other hand, if they are approximately standard, they served their purpose. But now it seems that astronomers/cosmologists need more precise data, and approximately standard candles are no longer good enough!

    3. The good news: the numbers of observed SNe 1a continues to increase, rather quickly. More good news: while SNe 1a's may lose their place as a leading method to estimate the values of cosmological parameters (at least to z ~2), there are other methods whose precision and accuracy may soon become comparable if not better.

      The bad news (for SNe 1a's): more data has also resulted in the need to nail down yet more systematics (e.g. when will SALT2 become ancient history?). And unless and until formation scenarios can be unambiguously tied to observed light curves (and spectra, and ...) 1a's will continue to be dogged with charges that they cannot be sufficiently good standards.

  6. Two comments, this first on supernovae, Type 1a in particular, and cosmology tests in general. Next will be on LIGO, GWR, etc.

    You mention two papers on SNe 1a, both of which are touted as casting doubt on the existence of dark energy (DE). We can say that both these examine "systematics" which had not previously been looked at (or at least, not in as much detail).

    However, there are many - possibly dozens - of similar papers, but which reach different conclusions, namely that SNe 1a data is consistent with the Planck CMB analyses and its published value of the DE parameter (say). These papers do not result in breathless PRs, excited write-ups in pop-sci sites, etc.

    What to make of all this?

    That the bane of an astronomer's work - systematics - has reared its ugly head once again. Big time.

    FWIW, I think both the Sarkar+ and Kang+ papers suffer from inadequate treatment of some systematics (even not recognizing some of them); I'd be happy to write down a few for you to ask Sarkar directly about.

    "This gives me somewhat of a headache because these are two different reasons for why dark energy might not exist. Which raises the question what happens if you combine them. Maybe that makes the expansion too slow?"

    The short answer is that until you do actually combine them, you cannot guess what will happen. But perhaps a more interesting question might be something like what systematics have been ignored or overlooked in both papers?

    "Also, I said this before, but let me emphasize again that the supernova data are not the only evidence for dark energy. Someone’s got to do a global fit of all the available data before we can draw conclusions."

    This happens all the time; there must be a dozen papers from 2019 alone which at least sorta fit this bill.

    It's a somewhat ironic situation ... the number of reported SNe 1as is already considerable (thousands), and keeps growing rapidly. However, creating a database of SNe, with well-known and understood uncertainties, and of uniform quality, is a nightmare.

    The best news is that this is an extremely hot topic, so an awful lot of independent research is being done, by a great many terrific experimentalists (a.k.a. astronomers).

    1. With today's review paper in hand (see below), two of the obvious questions to ask Sarkar are:
      - did you check the JLA dataset for 1a SNe that are inconsistent with the Phillips relation?
      - how thoroughly did you check SALT2 for biases?

    2. Another: in your paper, the 1a SNe are distributed very unevenly across the sky (see Figure 1). And two of the datasets have extremely restricted distributions (all but one of the HST 1a's are from a single tiny region; all the SNLS are from four very small regions). Only the SDSS dataset samples a lot of the sky.

      To what extent did you examine these highly non-random distributions for possible biases? How did you account for these distributions in your analyses?

    3. Jean Tate,

      I'm curious, did you do a similar systematics analysis with respect to the original SNe 1a papers from 1998 that established the DE paradigm?

    4. bud rap,

      I recall running the data in one of the two papers through my own tests, but can't find the results (almost certainly lost in a PC crash over a decade ago now). From memory, it looked pretty good. I doubt that I ran tests anything like those of Sarkar+; I almost certainly thought that there were far too few 1a's to do any such analyses.

    5. One more, fairly major (don't know why I missed it): What are your plans to perform similar analyses on the Pantheon dataset?

  7. Re electromagnetic ("photons") signatures from BNS (binary neutron star) mergers detected in LIGO, this joint Fermi GBM and LIGO team paper is of direct interest: "A Joint Fermi-GBM and LIGO/Virgo Analysis of Compact Binary Mergers From the First and Second Gravitational-wave Observing Runs" (https://arxiv.org/abs/2001.00923).

    Here are the last three sentences of the abstract:

    All search methods recover the short gamma-ray burst GRB 170817A occurring ~1.7 s after the binary neutron star merger GW170817. We also present results from a new search seeking GBM counterparts to LIGO single-interferometer triggers. This search finds a candidate joint event, but given the nature of the GBM signal and localization, as well as the high joint false alarm rate of 1.1×10−6 Hz, we do not consider it an astrophysical association. We find no additional coincidences.

    Note that the two observing runs (O1 and O2) are now ancient history! September 2015 to January 2016, and November 2016 to August 2017, respectively.
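
    To put that false alarm rate in perspective, here is a back-of-the-envelope conversion into expected chance coincidences; the observing time is a round-number assumption on my part (O1 and O2 together spanned roughly a year):

    ```python
    # Expected number of chance coincidences at the quoted false alarm rate.
    far_hz = 1.1e-6           # joint false alarm rate from the abstract, in Hz
    t_obs = 365.25 * 86400    # ~1 year of observing time in seconds (assumption)

    expected_false = far_hz * t_obs
    print(f"expected chance coincidences: ~{expected_false:.0f}")
    # Tens of false coincidences expected per year at this rate, which is
    # why the candidate is not considered an astrophysical association.
    ```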

  8. Is the metallic content of white dwarfs higher for more recent supernovae?

    1. Good question!

      Unfortunately, white dwarfs (WD) are intrinsically faint, so we can obtain spectra from those quite close to us (in the Milky Way) only. And those spectra are only indirectly helpful in determining metal content (Note: to astronomers, "metal" means any element other than H or He).

      Why?

      Because of their high density (and being a gas/plasma), the elements in a WD get sorted quickly (in terms of their lifetimes), H at the top, then He, etc. There are multiple reasons to think that the bulk of a WD is composed of carbon+oxygen, or oxygen+neon+magnesium (there are rare exceptions); these compositions should be independent of the age of the WD (or its progenitor).

      For SNe 1a, a still open question is what proportion form in the single-degenerate (SD) scenario, where the WD accretes matter from a companion star, and what proportion in the double-degenerate (DD) one, where two WDs merge/collide. Or indeed if there are other possible progenitor scenarios.

      I do not know if the metal content of 1a's has been studied closely enough to be able to say with confidence how and why any observed metal content variation arises.

  9. White dwarf stars around a main sequence companion can exhibit a nova event multiple times. Hydrogen accreted onto the surface can build up and flash in a fusion event. A Type Ia supernova occurs when accreted material raises the mass above about 1.4 solar masses. At this point the degenerate electron pressure can no longer support the mass and it implodes. This then induces a fusion event where the white dwarf explodes. From my understanding, it is completely gone.

    IK Pegasi B, only 154 light years away, is a white dwarf close to exceeding the Chandrasekhar limit. If this explodes we may get decent measurements of this standard candle. If we are fortunate enough that this SN Ia happens, we may get better calibration of these events.
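
    For reference, the limiting mass referred to above is the Chandrasekhar mass; in its standard textbook form (my addition, in LaTeX),

    ```latex
    % Chandrasekhar mass; mu_e is the mean molecular weight per electron,
    % with mu_e = 2 for a carbon-oxygen white dwarf.
    M_{\mathrm{Ch}} \approx \frac{5.83}{\mu_e^{2}}\, M_{\odot}
                    \approx 1.46\, M_{\odot} \quad (\mu_e = 2)
    ```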

  10. Mikhail Shifman seems on target with respect to HEP, but quite behind the state of the art regarding what is going on with respect to dark matter. For example, an exclusively gravitational attraction from dark matter is all but ruled out, and non-particle based gravity modification approaches (which he situates on his map in the farthest corners of Siberia) look rather more promising than he suggests.

    1. "an exclusively gravitational attraction from dark matter is all but ruled out"

      May I ask, why? Specifically, what is the observational and/or experimental evidence for this?

    2. Twelve reasons, and more studies:

      (1) Most precisely on point: "We show that the scaling laws among the structural properties of the dark and luminous matter in galaxies are too complex to derive from two inert components that just share the same gravitational field." Paolo Salucci and Nicola Turini, "Evidences for Collisional Dark Matter In Galaxies?" (July 4, 2017), https://arxiv.org/abs/1707.01059. Also, inferred dark matter distributions are more tightly correlated with luminous matter distributions than is possible with GR alone; see e.g. Paolo Salucci, "The distribution of dark matter in galaxies" (November 21, 2018), https://arxiv.org/abs/1811.08843 and Edo van Uitert, et al., "Halo ellipticity of GAMA galaxy groups from KiDS weak lensing" (October 13, 2016), https://arxiv.org/abs/1610.04226

      (2) Dark matter is inconsistent with the behavior of wide binary stars; see Hernandez et al., http://arxiv.org/abs/1105.1873

      (3) It has long been proven analytically that dark matter halos governed solely by gravity should have Navarro-Frenk-White (NFW) distributions, but inferred dark matter halos are much closer to an inconsistent pseudo-isothermal (ISO) distribution; see e.g. Marie Korsaga, et al., https://arxiv.org/abs/1809.06306, Antonino Del Popolo et al., https://arxiv.org/abs/1808.02136, and Katz, https://arxiv.org/abs/1605.05971, https://arxiv.org/abs/1710.01375, and https://arxiv.org/abs/1707.09689

      (4) Gravity alone predicts the wrong abundance of globular clusters; Peter Creasey, et al., "Globular Clusters Formed within Dark Halos I: present-day abundance, distribution and kinematics" (June 28, 2018), https://arxiv.org/abs/1806.11118

      (5) Galaxies form too late in these theories; Manoj K. Yennapureddy and Fulvio Melia, "A Cosmological Solution to the Impossibly Early Galaxy Problem" (March 19, 2018), https://arxiv.org/abs/1803.07095

      (6) It can't explain bulge formation in galaxies; Alyson M. Brooks, et al., "Bulge Formation via Mergers in Cosmological Simulations" (November 12, 2015), https://arxiv.org/abs/1511.04095

      (7) Baryon effects can't save cold dark matter models; https://arxiv.org/abs/1706.03324

      (8) Cold dark matter models don't explain the astronomy data; https://arxiv.org/pdf/1305.7452v2.pdf

      (9) The Bullet Cluster is a huge problem for DM with just gravity (Jounghun Lee, et al., https://arxiv.org/abs/1003.0939), and so is El Gordo (https://iopscience.iop.org/article/10.1088/0004-637X/800/1/37)

      (10) The too-big-to-fail and satellite-plane problems; Marcel S. Pawlowski, Benoit Famaey, David Merritt, and Pavel Kroupa, "On the persistence of two small-scale problems in ΛCDM" (October 27, 2015), https://arxiv.org/abs/1510.08060

      (11) 21cm signals are inconsistent with this; https://www.nature.com/articles/nature25792

      (12) Dwarf galaxy diversity; Isabel M.E. Santos-Santos, et al., "Baryonic clues to the puzzling diversity of dwarf galaxy rotation curves" (November 20, 2019, submitted to MNRAS), https://arxiv.org/abs/1911.09116

    3. More problems: https://arxiv.org/pdf/1306.0913.pdf and https://arxiv.org/abs/1109.3187 (a laundry list of galaxy-scale CDM problems, some of which carry over to WDM). WDM has problems too: https://arxiv.org/pdf/1202.1282.pdf and https://arxiv.org/abs/1208.0008. Proposed warm dark matter annihilation signals also turned out to be false alarms (https://arxiv.org/abs/1408.1699 and https://arxiv.org/abs/1408.4115), and WDM can't be a source of annihilation signals (https://arxiv.org/abs/1504.01195). More WDM constraints: https://arxiv.org/abs/1306.2314. And there are more constraints on WDM from many studies that pin it to a mass range ruled out by other studies: https://dispatchesfromturtleisland.blogspot.com/2013/05/xenon-100-contradicts-all-positive.html

    4. Thanks andrew. Lots of fun papers to read!

      Pet peeve: the very first (https://arxiv.org/abs/1707.01059) is
      a) not apparently published in a relevant peer-reviewed journal, and
      b) it's not hard to find reasons why it - in its arXiv version - would not be accepted (double pet peeve: too many sloppy mistakes, not necessarily a problem on their own, but in my experience very likely to hide at least one fatal error).

      But like I said, lots of fun reading ahead! :-)

    5. If everything rested on a single paper or even a single lead author, this could be a problem. But, given that so many authors in so many papers on more than a dozen independent grounds show the inconsistency of DM that interacts only via gravity with experimental evidence, only physics sociology can explain why that hypothesis isn't considered a crackpot idea at this point.

  11. I would call your attention to a third independent issue with dark energy measurements, in addition to the two mentioned in the main post, from a 2015 paper. It notes that there are two subtypes of Type Ia supernova that are hard to distinguish in the visible light spectrum but are clear at other wavelengths, and that when this distinction is accounted for, the mean estimate for dark energy is much smaller and not entirely inconsistent with zero. The paper is Peter A. Milne, Ryan J. Foley, Peter J. Brown, and Gautham Narayan, "The Changing Fractions of Type Ia Supernova NUV–Optical Subclasses with Redshift", The Astrophysical Journal, 2015; 803 (1): 20, DOI: 10.1088/0004-637X/803/1/20.

  12. Regarding your last paragraph, I personally think analyzing that is a waste of time and energy. Scientists, the same as everyone else, have exhibited the same patterns of human behavior and bias for all of recorded history. There is no sound reason to think any of that will change if it is pointed out to anyone. Eventually our errors collapse under their mounting weight and we move forward, albeit slowly. It's unfortunate but very true. The only people who will ever improve this behavior are those who constantly look inwards and become more aware of it in themselves, because every one of us does this naturally, and it takes constant effort and reasoning to subdue.

  13. The LHC wasn't powerful enough - it was a really shitty attempt by humans to detect beyond-Standard-Model physics - I'm sure you agree.

    If not, which supersymmetric particle did you expect at 10 TeV?

    Yeah, none, so shut up with the criticism - they will appear at closer to 100 TeV, almost surely.

    (I am from the future)

  14. Regarding: "Someone’s got to do a global fit of all the available data before we can draw conclusions."

    There are different research groups who have built the inhomogeneous void structure of the universe into comprehensive cosmological models. They argue that such models must be based on “an exact solution to a Buchert average of the Einstein equations with backreaction”. After a decade or more of development, they claim that their model fits all data as well as the standard Lambda CDM model, and possibly better for some data sets. Recent papers include
    2009: arXiv:0909.0749
    2017: arXiv:1706.07236

  15. Addressing several comments here.

    First, again, even if the supernova data are completely wrong, there is still a strong case for the standard model. Those who claim that the supernova data are wrong, for whatever reason, (such that a correct analysis would indicate no acceleration, or even no positive cosmological constant) need to explain why they "just happen" to fit well with the concordance model when the latter is based only on other tests (i.e. "recalculating" the concordance model by neglecting the affected observations; the fact that one gets the same result, more or less, is why it is called the concordance model).

    One should also check whether "two wrongs make a right", i.e. whether paper A, saying that something is wrong, is correct, as is paper B, but the effects more or less cancel, so that taking both into account would really not change the result very much.

    With regard to back reaction, grossly inhomogeneous models, etc: again, if large-scale* inhomogeneities are tricking us into believing in a positive cosmological constant, either because lensing effects influence the apparent-magnitude--redshift relation or because the inhomogeneities actually cause acceleration themselves, why does the resulting model just happen to be the concordance model if the latter is derived by neglecting observations influenced by the inhomogeneities?

    If the supernova data had indicated a universe without a cosmological constant, my guess is that some of these studies would not have been made.

    ___
    * As far as small-scale inhomogeneities go, in principle they can also affect the apparent-magnitude--redshift relation, even drastically, but in our Universe the effect appears to be almost negligible where cosmology is concerned; see my recent review on this topic.

    1. Thanks for your review, Phillip Helbig, very interesting.

      I think I've got this right, but just to make sure: estimates of the values of relevant cosmological parameters from BAO research will not be affected by realistic inhomogeneities in the universe we live in, because the angular scales in BAO are so large that realistic inhomogeneities average out. Is that a more or less accurate summary?

      I'm also interested in your take on GWR (gravitational wave radiation): your review seems to be about photons only (at least explicitly), but it seems to me that it should also be almost directly applicable to GWR too. And as LIGO-like sources of GWR will be as small as SNe (angular size) if not smaller, what you write about 1a SNe should apply to GWR sources too, right?

      I'll leave neutrinos to a later comment.

    2. The thought occurred to me that if there are differences in SNe Ia in the early universe, it is due to what is called metallicity. Ancient white dwarf stars probably had fewer heavy elements beyond lithium, what the astrophysicists refer to as metallicity, and so should have more helium. This would, I think, lead to higher luminosity of these SNe Ia. These lighter elements are further to the left of the so-called binding energy curve. This means the runaway fusion should produce more energy. This would then mean there is more of a loss of apparent luminosity due to accelerated expansion.

      Of course there is already a discrepancy between 67 km/s/Mpc from the CMB and 74 km/s/Mpc from local measurements. This might actually widen the gap. If ancient white dwarf stars had more "bang" because they consisted of lighter elements, this would be more in favor of dark energy.

    3. "I think I've got this right, but just to make sure: estimates of the values of relevant cosmological parameters from BAO research will not be affected by realistic inhomogeneities in the universe we live in, because the angular scales in BAO are so large that realistic inhomogeneities average out. Is that a more or less accurate summary?"

      Yes. We know that our Universe is well described by a Friedmann model, so (despite some claims from the backreaction crowd) inhomogeneities must be small, so things on larger angular scales will always be a fair sample. The interesting thing is that it looks like the Universe is homogeneous on very small scales as well, even though we know that it is not, at least as far as luminous matter (and dark matter associated with it) is concerned. There are at least two possible explanations: 1) a large amount of dark matter could be very smoothly distributed, 2) although far from homogeneous, the combination of over- and underdensities traversed by light from high redshift results in the same apparent magnitude as if the Universe were homogeneous.

      "I'm also interested in your take on GWR (gravitational wave radiation): your review seems to be about photons only (at least explicitly), but it seems to me that it should also be almost directly applicable to GWR too. And as LIGO-like sources of GWR will be as small as SNe (angular size) if not smaller, what you write about 1a SNe should apply to GWR sources too, right?"

      Right. As far as we know, gravitational waves are affected in the same manner as light. However, in gravitational-wave events, one knows the absolute luminosity. The uncertainties are still large, but the fact that Schutz's method gives a Hubble constant of about 70 from GWs is encouraging. In contrast to supernovae and other conventional sources, there should be no selection effects with regard to GWs: they are not absorbed or outshined.

      "I'll leave neutrinos to a later comment."

      I'll stay tuned.
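
      To put a rough number on Schutz's method mentioned above: the GW amplitude gives the luminosity distance directly, and the host galaxy gives the recession velocity. The figures below are approximate values for GW170817 and its host NGC 4993, so the result is illustrative only.

      ```python
      # Low-redshift "standard siren" estimate of the Hubble constant.
      # Approximate values for GW170817 / NGC 4993; illustrative only.
      d_gw = 44.0        # luminosity distance from the GW amplitude, in Mpc (approx.)
      v_hubble = 3000.0  # Hubble-flow velocity of the host galaxy, in km/s (approx.)

      h0 = v_hubble / d_gw  # valid at low redshift, where v ~ H0 * d
      print(f"H0 ~ {h0:.0f} km/s/Mpc")  # close to 70, as quoted above
      ```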

    4. Note re independent methods of estimating the values of cosmological parameters, particularly dark energy: this recent paper points to an interesting development "On the accuracy of time-delay cosmography in the Frontier Fields Cluster MACS J1149.5+2223 with supernova Refsdal"
      Link: https://arxiv.org/abs/2001.02232

      Last two sentences of the abstract: "We also present the interesting possibility of measuring the value of the equation-of-state parameter w of the dark energy density, currently with a 30% uncertainty. We conclude that time-delay cluster lenses have the potential to become soon an alternative and competitive cosmological probe."

    5. "We also present the interesting possibility of measuring the value of the equation-of-state parameter w of the dark energy density"

      Interestingly, when first discussing the idea of using the time delay to measure cosmological parameters, Refsdal, back in 1966 (about a year and a half before I was born; he later moved to Hamburg and even later I became his student there), thought of a supernova. It turned out that QSOs, being more common, were what was used, until this Supernova Refsdal (others are named after other people, not necessarily scientists). This is a particularly interesting system. In general, there are an odd number of images in a gravitational-lens system, though one (the third or fifth, etc.) is often too faint to see. In this case, it isn't. Moreover, the others were observed before the last one, so models could predict where and when it would appear.

      On the other hand, pretty much any cosmological test can measure w by looking to see whether any dark-energy term is not actually constant. There is no convincing theory, though, as to why it should vary and, if so, how. Common parameterizations are just that, something to measure, and don't reflect any firm theoretical prediction.
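
      For concreteness, the most widely used of these parameterizations is probably the Chevallier-Polarski-Linder (CPL) form:

      ```latex
      % CPL parameterization of the dark-energy equation of state. Here a is
      % the scale factor and w0, wa are fit parameters without a firm
      % theoretical motivation; a cosmological constant has w0 = -1, wa = 0.
      w(a) = w_0 + w_a \, (1 - a), \qquad a = \frac{1}{1 + z}
      ```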

    6. "back in 1966 (about a year and a half before I was born"

      Should be a year and a half before. Maybe something to do with time delay. :-)

  16. " that’s a big deal because the discovery of dark energy was awarded a nobel prize in 2011. "

    Having read Brian Schmidt's foreword in "A Fortunate Universe", the conclusion has to be that he is a completely illogical dimwit. I would be wholly unsurprised if his measurements and conclusions were all wrong.

    1. May I be so bold as to ask: which of Brian Schmidt's "measurements and conclusions" are you referring to?

      Also, if by "measurements" you mean something like the astronomical work he did (together with many colleagues), specifically the observations, how could they be "all wrong"?

    2. His Nobel Prize-winning work on supernovae and dark energy for starters, and anything else he has published. His foreword to "A Fortunate Universe" raises questions about his mental competence.

    3. Thanks.

      The paper cited by the Nobel committee, with Schmidt as an author, is "Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant". The lead author - usually the one who takes final responsibility for the content - is Adam G. Riess, who also got a Nobel. The other authors are: Alexei V. Filippenko, Peter Challis, Alejandro Clocchiatti, Alan Diercks, Peter M. Garnavich, Ron L. Gilliland, Craig J. Hogan, Saurabh Jha, Robert P. Kirshner, B. Leibundgut, M. M. Phillips, David Reiss, Robert A. Schommer, R. Chris Smith, J. Spyromilio, Christopher Stubbs, Nicholas B. Suntzeff, and John Tonry.
      Link: https://iopscience.iop.org/article/10.1086/300499/pdf

      Adam Riess, and these other co-authors: are they all also "completely illogical dimwits"?

      In simple terms, the "measurements" are the digital data collected from detectors mounted on various telescopes. Can you explain, in some detail, how these could be "all wrong"?

    4. Jean, yes, Evans thinks that they are all dimwits. Read his other comments; the guy is a crackpot troll. It's not worth replying except in cases where the reply might help someone else. If anyone has serious mental issues, it's him.

    5. JeanTate 9:09 AM, January 10, 2020

      Well before the current criticisms of the Schmidt et al. supernova data, there were question marks about whether they had sufficient data to draw the conclusion of dark energy. Fine - it was pioneering work, but maybe Nobel Prizes shouldn't have been handed out until there were more data to back up the claimed dark energy conclusion. It looks like the physicists you list may have jumped the gun, and that suspicion has been around since the paper was first published.

      Brian Schmidt, as a Nobel Prize laureate, wrote the foreword for a popular physics book which contains 3 main claims: universal fine-tuning, existence of a multiverse, and that baby Jesus' papa made the universe.
      There is zero evidence of universal fine-tuning or the multiverse and writing about silly, primitive superstitions in a physics book is the mark of a delusional madman. So why did Schmidt give it the imprimatur of a laureate?
      In the foreword Schmidt writes this kind of palpably unscientific drivel:
      "the constants of nature that underpin the equations tuned to values that allow our remarkable Universe to exist in a form where we, humanity, can study it"
      Drivel - it is not known that the constants are tuned. They may have a physical explanation. Nor are the prerequisites for intelligent life known.
      "A slight change to these constants"
      Drivel - what does he mean by "slight"? And it is not known that the constants could physically be any other values than those measured.
      "We seem to be truly fortunate to be part of our universe"
      Drivel - we don't have any idea what the probability of human existence is.
      "A seemingly perfectly rational argument to come to terms with this streak of good luck.."
      Drivel - there is no information about probabilities for existence of the universe or humans, so no known "good luck".
      "But what happens when, as with our Universe, there is only one thing to observe?"
      Moronic drivel - in this case you can only conclude that there is only one universe
      "You will see that humanity appears to be part of a remarkable set of circumstances"
      Drivel - there is no evidence they are "remarkable". They may appear remarkable to the ignorant.
      "involving a special time"
      Drivel - nobody has any idea whether intelligent life has existed at other times.
      "around a special planet"
      Drivel - exoplanets like Earth are being discovered every month
      "which orbits a special star"
      The moron doesn't even know the basic facts of his own subject - there are billions of Sun-like stars.

      So in putting his name to a book written by a delusional madman and writing illogical, ignorant nonsense in the foreword, Schmidt has marked himself out as a complete moron. I would check any work he has published; I would check any of his contributions to any published work.
      That the CUP published "A Fortunate Universe" and that the editor of Physics Today thought it would be a good idea to get another Templeton-funded delusional lunatic, Marcelo Gleiser, to write an insane review of it are also bizarre.

      Imagine if Stephen Hawking had written a chapter in A Brief History claiming witchcraft was true, or Feynman had written a volume 4 with some lectures on voodoo. It is completely f*cked that a Nobel Laureate would put his name to such drivel, that CUP would print superstitions as science, and that the editor of Physics Today thinks it's a good idea to get one corrupt sociopath to review the lunatic work of another corrupt sociopath.

      Something is rotten in the state of Denmark.

    6. Phillip Helbig 11:21 AM, January 10, 2020

      "Jean, yes, Evans thinks that they are all dimwits. "
      Don't put words in my mouth - that's naughty. People who believe in fine-tuning when there is zero evidence are dimwits e.g. you, Brian Schmidt, Martin Rees, etc.

      People who believe silly primitive fairy tales are true are delusionally insane and should get professional help e.g. Luke Barnes, Catholics, Protestants, Muslims, etc.

      "the guy is a crackpot troll."
      But yet again you fail to point out any lunatic belief I hold to support your crackpot theory; meanwhile you think fine-tuning is true without any evidence, which *is* lunacy. And pointing out the truth is not trolling, although I understand it may be inconvenient for people like you who are trying to get away with claiming utter nonsense, for example that there is evidence for universal fine-tuning.

      You could of course just give your claimed evidence for fine-tuning, but you always avoid doing so because you know I would tear your nonsense to pieces using 0.000001% of my brain power. You are intellectually terrified of me.

    7. There is a difference between fine tuning and a fine tuner. There might be statistical reasons for how this cosmos we observe has the RG attractor points on the false vacuum we observe. It can all be "tuned" as it is by statistical happenstance.

      Fine tuning would be strange if we existed in a universe that was hostile to our being. An analogue would be if we lived on Venus, a crucible of a place, and existed there in spite of the impossible conditions.

      Fine tuning is rather subtle. I am not convinced of Barnes' argument, and I see no convincing reason to assume a fine tuner.

    8. Lawrence Crowell 2:41 AM, January 12, 2020

      But there is *zero* evidence that the universe is subject to fine-tuning either of the everyday meaning variety (tuning a piano) or statistical happenstance variety (a lottery win due to large number of participants). The RG attractor points on a false vacuum (whatever that is!) is an observation. It is not known why this observation is what it is, nor if it could physically be different, and so neither type of fine-tuning can be concluded. This is the situation for *every* claimed universal fine-tuning.

      Where the less intelligent go wrong is that they don't understand that if they present even a million examples of
      If such-and-such a physical ratio or constant was "only" 0.0000001% different then the nature of the universe would be completely different,
      this adds *zero* confidence to the hypothesis that the universe is fine-tuned because all such statements are conditional.

      However, the layperson would certainly be convinced by such fallacious arguments like this, which is why cranks like Luke Barnes and Marcelo Gleiser make such arguments. It's profitable.

      Why the physics community, like Brian Schmidt and the editor of Physics Today, has allowed them to get away with their nonsense is a failure of duty, though. The review of "A Fortunate Universe" should read: there is *zero* evidence of universal fine-tuning; there is *zero* evidence of a multiverse; and, if Luke Barnes thinks baby Jesus' papa made the universe he needs to get professional help for his delusions. Does the editor of Physics Today not know that both Luke Barnes, the author, and Marcelo Gleiser, the reviewer, are both funded by the Templeton Foundation to the tune of hundreds of thousands of dollars?

      "I am not convinced of Barnes' argument"
      He doesn't have an argument. He's lying through his teeth.

      Delete
    9. I've pointed out the mistakes in Evans's logic many times for all to read. This is just a note to say that I won't respond to any more similar comments from him, as it is clear that he is immune to changing his mind. In other words, absence of future comments on such topics should not be confused with evidence for the absence of further arguments against his logic, as I'm sure most or all here can understand.

      Nevertheless, if someone has a specific question about something he and I disagree on, and is genuinely interested in learning something, then I'm happy to point that out. Send me an email in case I miss it. :-|

      Another commentator pointed out that fine-tuning does not imply a fine-tuner. This is one thing which Evans gets wrong. Also, he doesn't differentiate the physical world from its interpretation by a specific person. In other words, he claims that since Barnes is a theist, everything he says must be wrong, but this is obviously not true: he probably believes that the Hubble constant is between 60 and 80, say. Similarly, Christian apologists have argued that many natural phenomena (adaptation of living things to their environments, say) are evidence of God. There are, of course, other explanations (evolution is one). The non-existence of God does not imply that such adaptations do not exist.

      Of course, this does not prove that fine-tuning is not real. There is a large literature on this topic, and even serious scientists disagree on some points. One shouldn't get the impression, though, that the most vocal must be right. However, this is disagreement on another level; Evans is more akin to a creationist who sees evidence of debate on a fine point of evolution theory as an admission that the entire science is a scam.

      Delete
    10. Phillip Helbig 4:10 AM, January 13, 2020

      "Another commentator pointed out that fine-tuning does not imply a fine-tuner. This is one thing which Evans gets wrong. "

      I've never claimed fine-tuning would necessarily imply a fine-tuner. Again your low level of reading comprehension is letting you down. Maybe you've been reading too much German and forgotten English.

      "There is a large literature on this topic, and even serious scientists disagree on some points."

      Absolute nonsense. No serious scientist thinks there is any evidence of universal fine-tuning, because there clearly is no evidence. To show fine-tuning one would at least have to show that it is physically possible for fundamental constants or laws to be different to what they have been observed to be in the one universe we know of.

      This is not the case.

      So no fine-tuning has been demonstrated.

      Or are you claiming that there is evidence that fundamental physics could be different?

      " sees evidence of debate on a fine point of evolution theory "

      Evolution theory has mountains of evidence. Fine-tuning has *none*. There are no fine points of debate. There is *zero* evidence. To compare the scientific status of fine-tuning with that of evolution is to mark yourself out as a delusional lunatic of the order of Luke Barnes. Which is as mad as a human being can be.

      " This is just a note to say that I won't respond to any more similar comments from him"
      You have never responded. Every time you fail to provide any evidence of fine-tuning, witter on about irrelevant points, then run away with your tail between your legs intellectually defeated.

      Delete
    11. Phillip Helbig 4:10 AM, January 13, 2020

      " This is just a note to say that I won't respond to any more similar comments "

      If you are too intellectually terrified to argue with me, why don't you get your friend, crazy Luke, for whom you wrote that sycophantic, dishonest drivel of a review, to come and argue?

      Crazy Luke is a professional academic with a big, big brain and has been thinking about this question for decades, whereas I have only thought about the question for 5 minutes.

      Or is crazy Luke intellectually terrified of me, too? I remember last time he appeared in these comments he ran away pretty quickly...

      Delete
    12. Lawrence Crowell 2:41 AM, January 12, 2020

      "Fine tuning is rather subtle."
      It's not, though. There is no evidence of any fine-tuning mechanism, and there is no evidence of fundamental constants or laws, etc., that have been shown to have an extremely low probability which might hint at a multiverse, because no such probabilistic information is available. **None**. Also, assuming fine-tuning and considering what the consequences might be (e.g. positing a multiverse) has also led nowhere.

      Or can you provide examples of these mysterious subtleties that you claim? I don't think so.

      Researching fine-tuning or the multiverse is a wild goose chase.

      Delete
    13. Steven Evans wrote: "If you are too intellectually terrified to argue with me,"

      I am not Phillip Helbig, but I too will not be writing any further comments in response to yours. Not here on Backreaction anyway.

      My reason?

      For whatever reason - including, possibly, me not clicking on the Publish button - several of my responses to your comments have not appeared.

      So if nonsense like this (bold added) goes unchallenged, I have no interest in any further discussion: "I would be wholly unsurprised if his [Brian Schmidt's] measurements [...] were all wrong"

      Delete
    14. JeanTate 10:46 AM, January 13, 2020

      " I have no interest in any further discussion: "I would be wholly unsurprised if his [Brian Schmidt's] measurements [...] were all wrong"

      I pointed out that Brian Schmidt gave the imprimatur of a Nobel Laureate to a popular Physics book which claims there is evidence of fine-tuning and a multiverse - there is none. The book also claimed that baby Jesus' papa "created" the universe - this is delusional nonsense and the person who wrote it needs to seek the help of a psychiatrist. Again, why would Brian Schmidt put his name to delusionally insane ravings? I also pointed out line by line that Schmidt's foreword was nonsense - he claimed the Sun is a "special star"??!

      I conclude that he's a bit thick and we should be wary of all the work he has published or jointly published - there's no reason to trust any of his work.

      You are welcome to explain why we should trust him.

      Delete
    15. Steven Evans wrote to JeanTate:

      >I conclude that he's a bit thick and we should be wary of all the work he has published or jointly published - there's no reason to trust any of his work.

      Steve, I can get at least as angry as you at conventional, orthodox Christians: I had horrible nightmares as a child because of attempts to "scare the Hell out of me."

      But, empirically, there is a minority of scientists who claim to believe in the Virgin Birth, the Incarnation, etc. who are nonetheless competent scientists. It is a fairly small minority but they do exist: one I have known personally is the relativist Don Page.

      I find this odd myself, but somehow they "compartmentalize" and keep their science separate from their faith.

      On the other hand, empirically, I have not known any competent natural scientist who is a "Young Earth Creationist": perhaps that just contradicts so many sciences -- from biology to astronomy to geology -- that that is just not possible.

      By the way, I also know a lot of physicians and engineers -- much higher rate of religious believers than among natural scientists. That makes sense, I think -- engineers and physicians are usually involved in following existing practices, not in questioning and challenging accepted beliefs in their fields as scientists are.

      Do you think it is possible that my empirical observations are correct, that someone like Brian Schmidt can hold goofy ideas on Sunday morning but still be a competent scientist Monday through Friday?

      All the best,

      Dave

      Delete
    16. This may all depend upon what you think fine tuning is. At this time there is no reason to think the fine structure constant α = (1/4πε)e^2/ħc should be α ≈ 1/137. The speed of light is really one light-second per second and so is unity, and the Planck constant is similarly unity, so that a spread in momentum or wave vector multiplied by a spread in the conjugate configuration variable is of order one. So these are fixed. The electrostatic unit e^2/4πε is the real subject. How is it this electric charge assumed the value it has on the low energy physical vacuum?

      There is the subject of renormalization group (RG) flow. The method of renormalization with a cutoff has a self-similar structure to it with each N-graph. This self-similar pattern in how a gauge coupling constant is computed is what leads to this idea of RG flow. As of now there is no real way of computing how the RG flow ends. Why do radiative corrections for QED stop at α ≈ 1/137? At the EW energy this is around α ≈ 1/128, and in the standard model it will approach unity as the energy reaches a GUT or Planck scale. Why does the flow stop where it does?
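
      For orientation, that running is easy to reproduce at leading-log order; a rough sketch, where the effective quark masses are crude assumed values (a serious evaluation treats the hadronic part with dispersion relations):

      ```python
      # One-loop (leading-log) running of the QED fine structure constant
      # from alpha ~ 1/137 in the IR up to the Z mass, where alpha ~ 1/128.
      import math

      ALPHA_INV_IR = 137.036   # 1/alpha at low energy
      M_Z = 91.19              # GeV

      # (effective mass in GeV, N_c * Q^2) for charged fermions below M_Z
      fermions = [
          (0.000511, 1.0),           # e
          (0.1057,   1.0),           # mu
          (1.777,    1.0),           # tau
          (0.3, 3 * (2 / 3) ** 2),   # u  (crude constituent-like mass)
          (0.3, 3 * (1 / 3) ** 2),   # d
          (0.5, 3 * (1 / 3) ** 2),   # s
          (1.5, 3 * (2 / 3) ** 2),   # c
          (4.5, 3 * (1 / 3) ** 2),   # b
      ]

      delta = (2 / (3 * math.pi)) * sum(w * math.log(M_Z / m) for m, w in fermions)
      print(f"1/alpha(M_Z) ~ {ALPHA_INV_IR - delta:.1f}")   # ~128.4
      ```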

      A related question is how it is that the cosmological constant is Λ ≈ 10^{-52} m^{-2}. The asymptotic Hubble constant H = c√(Λ/3) is then due to the vacuum structure of the universe, which is in turn determined by the zero point energy of quantum fields. The question of the couplings is then most likely related to that of the cosmological constant, or equivalently the Hubble constant.
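
      The numbers hang together; a quick check, assuming the commonly quoted Λ ≈ 1.1×10^{-52} m^{-2} and Ω_Λ ≈ 0.7:

      ```python
      # For a Lambda-dominated (de Sitter) universe the expansion rate
      # tends to H = c*sqrt(Lambda/3); dividing by sqrt(Omega_Lambda)
      # roughly recovers today's H0 ~ 67 km/s/Mpc.
      import math

      C = 2.998e8            # speed of light, m/s
      LAMBDA = 1.1e-52       # cosmological constant, m^-2 (assumed value)
      KM_PER_MPC = 3.086e19  # kilometres per megaparsec

      def to_km_s_mpc(H):
          """Convert an expansion rate from 1/s to km/s/Mpc."""
          return H * KM_PER_MPC

      H_inf = C * math.sqrt(LAMBDA / 3)   # asymptotic rate, 1/s
      print(f"de Sitter rate: {to_km_s_mpc(H_inf):.0f} km/s/Mpc")                   # ~56
      print(f"H0 estimate   : {to_km_s_mpc(H_inf / math.sqrt(0.7)):.0f} km/s/Mpc")  # ~67
      ```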

      The RG flow is similar in ways to the Navier-Stokes theory of fluid flow. We might suppose that these radiative corrections are similar to scales on which one gets turbulent flow. If we informally “map” this to the logistic map then there are domains of regular dynamics interspersed with regions of chaotic dynamics. The absolute end of regular dynamics is governed by the Feigenbaum number δ = 4.669…, which might have some bearing on the theory of RG flow. However, this would tell us about the UV end of the RG flow. The IR low energy end of the flow is not known. Maybe some sort of duality between the UV and IR scales is at work. It might also reflect some sort of extremal principle with complexity.
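
      As an aside, δ itself is easy to reproduce numerically from the logistic map; a minimal self-contained sketch, nothing to do with RG flow as such:

      ```python
      # Estimate the Feigenbaum constant delta = 4.6692... from the logistic
      # map x -> r*x*(1-x): locate the "superstable" parameters R_n whose
      # period-2^n cycle passes through x = 1/2, then form the gap ratios
      # (R_n - R_{n-1}) / (R_{n+1} - R_n), which converge to delta.

      def orbit_from_half(r, n_iter):
          """Iterate the logistic map n_iter times starting at x = 1/2."""
          x = 0.5
          for _ in range(n_iter):
              x = r * x * (1.0 - x)
          return x

      def superstable(r_lo, r_hi, period):
          """Bisect for the r in (r_lo, r_hi) where f^period(1/2) = 1/2."""
          g = lambda r: orbit_from_half(r, period) - 0.5
          for _ in range(100):
              mid = 0.5 * (r_lo + r_hi)
              if g(r_lo) * g(mid) <= 0.0:
                  r_hi = mid
              else:
                  r_lo = mid
          return 0.5 * (r_lo + r_hi)

      R = [2.0, superstable(3.1, 3.3, 2)]   # periods 1 and 2
      for n in range(2, 8):                 # periods 4, 8, ..., 128
          step = (R[-1] - R[-2]) / 4.0      # crude guess for the next gap
          R.append(superstable(R[-1] + 0.6 * step, R[-1] + 1.2 * step, 2 ** n))

      for n in range(1, len(R) - 1):
          print(f"n={n}: delta ~ {(R[n] - R[n-1]) / (R[n+1] - R[n]):.4f}")
      ```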

      So my point is that this is a subject with big open questions. Call this fine tuning or whatever, you can’t just wave this issue away. I am not arguing for any sort of infinite disembodied consciousness “up there,” but the issue is sticky. If the fine structure constant were different the structure of bio-molecules would be different. Even a tiny difference would mean complex chains would fold very differently. So I am not proposing a mechanism for fine tuning, but just stating there are open questions here.

      Delete
    17. PhysicistDave 1:26 AM, January 14, 2020

      Dave, I take your point, and can't refute your empirical observation that the same human brain can be both hyper rational and delusional (Godel being a tragic example). But you probably haven't read everything I've written, and my actual point a few comments above is that Brian Schmidt wrote nonsense *about physics* in his foreword to "A Fortunate Universe" and put his name, that of a Nobel Laureate, to a Physics book in which it is claimed that baby Jesus' papa made the universe(!!!). It is also my empirical observation that there are religious lunatics who *do* let their beliefs corrupt their science - Luke Barnes, who is one of the authors of "A Fortunate Universe", and Marcelo Gleiser, who reviewed it for Physics Today, are two clear examples of such corrupt scientists. The M.O. is blatant - you publish some figures from some models not actually about the universe, but then claim this tells us something about the universe in a popular book or in a national newspaper.

      Are you aware that Luke Barnes has announced to the Sydney Herald that he has shown definitively that the universe is fine-tuned and that the multiverse is not physically possible??

      The guy is lying through his teeth. The Physics community should be outing his lies and pushing for his sacking. The same goes for Gleiser.

      Delete
    18. Steven Evans, take a step back for a moment. Ignoring the weaving and ducking you do to refine (shall I say) your initial comments, noodle on this: one person (Brian Schmidt in this case, but there are many others you've attacked) is so sneaky and powerful that they can trick photons into behaving in ways that are consistent with his theology. Not only that, but he's so smart that no one on his supernova team notices the trickery with photons. Nor how he subtly manipulates analyses of the data. Further, by some morphic resonance (shades of Sheldrake!), he twists the minds of an independent team so they come up with much the same conclusions. And the reviewers of the submitted papers.

      That's some powerful mind!

      Alternative: all other members of the two research teams are part of a giant cabal ... they manipulated the data, bought off the reviewers, etc. Not quite as giant a conspiracy as pulling off the Apollo Moon Hoax, but still very impressive.

      So, are you peddling some sort of conspiracy theory or not? Or, despite calling him a dimwit, attributing to Schmidt near god-like powers?

      Delete
    19. Thanks for your insightful comment, Lawrence Crowell.

      To look at this from a slightly higher level: what is a universal constant, or fine tuning, is theory dependent. If one day we have a "better" theory, we may conclude that some sorts of fine tuning are consistent with that theory, that the value of G, say, in our universe is an accident, something quite random. Or that the very concept of G is as quaint as phlogiston.

      Delete
    20. Lawrence Crowell 6:27 AM, January 14, 2020

      "This may all depend upon what you think fine tuning is. "
      I think we all know, but ok... We're talking about universal fine-tuning. And we're talking about a lottery win or a piano being tuned, applied to the value of a fundamental constant, for example. You will first need to show that the constant can physically take different values (not just the one measured in this universe), otherwise it is an abuse of language to refer to tuning. Then you will need to show that the probability of the constant having its measured value is minuscule (lottery win), or you will need to demonstrate the actual process by which the constant is set to its specific value (piano tuning).

      "How is it this electric charge assumed the value it has on the low energy physical vacuum?"
      So this is an unexplained observation. You have shown no suggestion of fine-tuning.

      "Maybe some sort of duality between the UV and IR scales is at work. It might also reflect some sort of extremal principle with complexity."
      These are questions that are currently unanswered. You haven't shown any suggestion of fine-tuning.

      " Call this fine tuning or whatever, you can’t just have wave dismiss this issue away."
      Nobody would call this fine-tuning. It's not clear why you would even consider referring to these open questions as fine-tuning.

      "If the fine structure constant were different the structure of bio-molecules would be different. "
      And here you make the classic fine-tuner's basic logical blunder numero un. You don't know if it is physically possible for the fine structure constant to be different from its measured value. Everything you write after "If" is not known to refer to physical reality and is therefore not natural science. It is sci-fi.

      "Even a tiny difference would mean complex chains would fold very differently. "
      And I am grateful to you for providing the fine-tuner's classic logical blunder numero deux. Again, you simply do not know if there can be any difference, not even what you might consider a "tiny" one. So again you are not writing about physical reality, you are writing more sci-fi.

      All other claimed fine-tunings contain the same basic logical blunders. You have failed to provide any evidence of what could possibly be referred to as fine-tuning, so will you now finally admit the utterly obvious, what is as plain as the nose on one's face, that there is *zero* evidence that the universe is fine-tuned?
      (I assume you understand that I'm not claiming the universe *isn't* fine-tuned, just that there is no evidence that it is.)

      Delete
    21. JeanTate 10:40 AM, January 14, 2020

      I explained why it was wrong of Brian Schmidt to put his name to "A Fortunate Universe", giving a book of absolute drivel and lies the imprimatur of a Nobel Laureate. I also pointed out line by line that his foreword to the book is full of blatant scientific mistakes and schoolboy errors. I can only conclude that someone who writes such nonsense is extremely dim and is not to be trusted as a scientist. I presume he had major influence over and made significant contributions to the supernova data paper because he received a Nobel Prize for it. I am simply saying that I'm not surprised that this paper is facing criticisms given the apparent low IQ of Brian Schmidt. Stupid people are more likely to make stupid mistakes.

      Delete
    22. JeanTate 10:47 AM, January 14, 2020

      "Thanks for your insightful comment, Lawrence Crowell."
      Which didn't show any fine-tuning.

      "To look at this from a slightly higher level: what is a universal constant, or fine tuning, is theory dependent. "
      The speed of light takes the same value (up to a high degree of precision, I presume) wherever it has been measured in the universe. It's physically a universal constant, independent of any theory.
      If it is not physically possible for the speed of light to be any other value than the one measured, then it cannot be accused of being fine-tuned, because tuning implies the possibility of different values. Again, this is not dependent on any theory.

      Delete
    23. "If it is not physically possible for the speed of light to be any other value than the one measured, then it cannot be accused of being fine-tuned, because tuning implies the possibility of different values. Again, this is not dependent on any theory."

      I come late to this party, so please forgive me if I’ve missed something. But isn’t the question of “fine tuning” about fine tuning of our universe?

      If the answer to that is “yes” then the question should be “is it physically possible for the speed of light to be a different value in a different universe?” I don’t know how that could be definitively answered, much less definitively excluded.

      In our universe, the speed of light is what it is. COULD other universes exist where the speed of light is different? I don’t know; I don’t know how we could know given our current knowledge.

      Our universe may be the one and only universe; it may be one of many. At this point I don’t think we know that either.

      If (as it seems to be agreed) there is no “fine tuner” then this whole matter does come down to the process by which our universe came into being: how variable is/was that process? Another interesting question for which we seem to have no data.

      sean s.

      Delete
    24. Lawrence wrote Jan 14 2020
      "If the fine structure constant were different the structure of bio-molecules would be different. Even a tiny difference would mean complex chains would fold very differently"
      For the Planck charge to remain constant, a change in the value of 1/√α (the Planck charge is e/√α) implies that the electric charge would change accordingly.

      Protein chain folding, for instance, is quite small beer. All other Planck units would remain unchanged.
      However, the size and energy levels of atoms would change greatly.
      Imagine an electron with a charge e/3; then, to conserve the Planck charge (≈ 11.7 e), 1/α would increase by the square of 3, i.e. the inverse fine structure constant would increase to about 1233.

      Atoms would be larger. Take the hydrogen atom.
      The Rydberg energy is ~m*e^4, and 1/(1/3)^4 = 81, so the ionisation potential of 13.6 eV would be 81-fold less from the charge alone.
      The Bohr radius changes as ~1/(m*e^2).
      Does the electron get its rest mass from dragging its electric field? If so, the mass of the electron would be 1/3 that of its extant cousin, the classical e. The Bohr radius for this H atom would then be 27-fold larger, ca. 14 Angstrom.
      The Rydberg energy for this allo H would be 1/243 of 13.6 eV. Its ionisation temperature would be only ca. 130 Kelvin, microwave radiation in molecular galactic clouds. The nuclear charges of 1/3 would translate to the quark charges proportionally, and the H nuclear mass would be reduced (by 1/3, perhaps).
      In such a universe a reduced electric charge would greatly increase the number of stable elements, to perhaps threefold the 92 we have.
      Yet the cosmic scale size would remain unchanged. The Earth's distance from its star would remain unchanged. h*c would remain unchanged: as h goes to zero the universe looks less quantum in character, and as c correspondingly goes to infinity it appears less relativistic, i.e. more classical.
      Could we observe the natural occurrence of such elements? Protein unfolding etc. denatures in water at ca. 373 Kelvin in 4 minutes as the egg boils; that's ca. 0.5 eV, or ca. 12 kilocalories (~50 kJ) per mole. An allo protein with 1/3 e would denature at ca. 4.7 Kelvin!
      Could we observe such radiation in the microwave spectrum? The radiation we do in fact observe is due to classical molecular rotations rather than the much higher electronic transitions of the classical electron. An allo H atom would hardly reside here on Earth but would exist peripheral to galaxies, and has indeed been proposed as a dark matter candidate. Could whole galaxies exist were alpha to change markedly? It would be difficult to observe them. An allo galaxy at the distance of Andromeda would be far less bright than even a nearby low surface brightness galaxy. Additionally, were it seen in the far infrared it would masquerade as a far distant galaxy, z > 10! In other words, presently invisible.

      Interestingly the H atomic spin flip would remain unchanged and it would appear in the H 21 cm line. Conventionally, the HI line refers to neutral H. An allo H atom (a new periodic table atom, Group 1?) would form a halo around every galaxy and we would presume it is conventional hydrogen (H).

      Otherwise Lawrence's comment is highly appropriate: a change in the fine structure constant would have enormous consequences for our perception of the universe.
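
      For anyone who wants to check the arithmetic, a minimal sketch of the scalings above, under the same purely hypothetical assumptions (charge e → e/3, electron mass m → m/3); none of this is known to be physically realizable:

      ```python
      # Numerical restatement of the scalings in the comment above:
      # Rydberg energy ~ m*e^4, Bohr radius ~ 1/(m*e^2), 1/alpha ~ 1/e^2.
      RYDBERG_EV = 13.6      # hydrogen ionisation energy, eV
      BOHR_ANGSTROM = 0.529  # Bohr radius, Angstrom
      ALPHA_INV = 137.036    # inverse fine structure constant

      q = 1 / 3              # hypothetical charge scaling
      m = 1 / 3              # hypothetical electron-mass scaling

      print(f"1/alpha         : {ALPHA_INV / q**2:.0f}")            # ~1233
      print(f"Rydberg (eV)    : {RYDBERG_EV * m * q**4:.3f}")       # 13.6/243 ~ 0.056
      print(f"Bohr radius (A) : {BOHR_ANGSTROM / (m * q**2):.1f}")  # 27x larger, ~14
      print(f"denaturing (K)  : {373 * q**4:.1f}")                  # ~4.6, the 1/81 scaling
      ```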

      Delete
    25. The ionization energy of hydrogen is E_i = 13.6 eV and this is dependent on e^4. So a small difference with e → e + δe would mean this ionization energy is changed by E’_i ≈ E_i(1 + 4δe/e). Similarly the Bohr radius would change by -2δe/e.
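
      (The coefficients just come from expanding (e + δe)^4 and 1/(e + δe)^2 to first order; a one-line check, e.g. with sympy:)

      ```python
      # First-order coefficients: E ~ e^4 gives +4*de/e, a_Bohr ~ 1/e^2 gives -2*de/e.
      import sympy as sp

      e, de = sp.symbols('e de', positive=True)
      print(((e + de)**4 / e**4).series(de, 0, 2).removeO())   # 1 + 4*de/e
      print((e**2 / (e + de)**2).series(de, 0, 2).removeO())   # 1 - 2*de/e
      ```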

      The point about polypeptide folding is that the dihedral angle between amino acid residue groups is dependent on a dipole moment. If the electric charge changes the change in the angle is amplified down the chain. It is a bit like chaos theory in a way, except instead of a temporal effect it is spatial.

      If coupling constants were different this would be a markedly different world. I am not sure what Evans means by saying there is no fine-tuning problem. There seems to be a legitimate question over why the RG flows have their attractor points at the values we observe on the IR physical vacuum.

      Delete
    26. sean s. 12:21 PM, January 14, 2020

      " But isn’t the question of “fine tuning” about fine tuning of our universe?"
      In this case, yes.

      "If the answer to that is “yes” then the question should be “is it physically possible for the speed of light to be a different value in a different universe?”
      In a different universe or in a different cycle of a bouncing universe or in a different patch of the universe that contains the observable universe, etc. Anyway, these physical constants are only known to take one value, so the question of fine-tuning is a non-starter, as you seem to agree in your comment.

      Delete
    27. Graham Dungworth 1:23 PM, January 14, 2020

      "Otherwise Lawrence's comment is highly appropriate , a change in the fine structure constant would have enormous consequences to our perception of the universe."

      "Would" having replaced Lawrence's "If" as the key word here. You are confusing a model with reality. Yes, in some model maybe it would have enormous consequences. However, in physical reality, which is the subject of Physics, it's not known that the fine structure constant or any other physical constant could be different from its one measured value. So there is no evidence of fine-tuning and Lawrence's comment was highly irrelevant.

      You should read the blog author's book, "Lost in Math". As should Lawrence.

      Delete
    28. Lawrence Crowell 7:46 PM, January 14, 2020

      As I have pointed out before, if my aunt had b*lls, she'd be my uncle.

      I have no idea about the technical details, fascinating though they seem, but I just do a scan of your comment for "would" and "if" and once again find you are not talking about physical reality.

      "So a small difference with e → e + δe would mean this ionization energy is changed by E’_i ≈ E_i(1 + 4δe/e). Similarly the Bohr radius would change by -2δe/e."

      Key word: "would". There is no such "small" difference in e, therefore it is not known that the ionization energy could be different, nor that the Bohr radius could be different. More sci-fi.

      "If the electric charge changes the change in the angle is amplified down the chain."
      Key word: "If". You don't know that the electric charge can be any other value, so your conclusion is once again sci-fi.

      "If coupling constants were different this would be a markedly different world. "
      Key words: "If" and "would". It's not known that the coupling constants can be different. So your conclusion is sci-fi.

      "I am not sure what Evans is meaning by saying there is no fine-tuning problem."
      I am saying that there is no evidence of fine-tuning. You seem to be demonstrating fine-tuning in a model not reality. You can show in your model that if you change e a little then this has far-reaching consequences in the model. But in reality, it is simply not known that e can be any different to its measured value. It adds nothing to your argument when you claim that the change required in the model for large consequences in the model is "tiny". It's still not known to be possible in physical reality.

      "There seems to be a legitimate question over why the RG flows have their attractor points at the values we observe on the IR physical vacuum."
      I'll take your word that the question is legitimate. But it may have a perfectly good non-tuning physical answer that does not require a multiverse or any tuning mechanism. It may be the only physically possible arrangement.

      I get the impression that your logic is backwards.
      The polypeptide folding depends on the dihedral angle between amino acid residue groups, which is dependent on a dipole moment, which depends on the electric charge. That's just a scientific explanation based on observation. You don't then get to say that if the charge were different, everything would be different, because it's not known that it is physically possible for the charge to be different.

      Physics has taken all the innumerable observed phenomena in the universe and reduced them to a few dozen constants and laws, and a small dense soup of quarks 13.7 billion years ago. Of course, if you posit changes in this base it may theoretically imply wide-ranging consequences, because the small fundamental base explains a lot of phenomena. But the point is that it is not known that any of these fundamental laws and constants can be any different ***in physical reality***.

      Do you agree that there is no evidence of fine-tuning, yet? Or are you going to write another comment with lots of ifs, woulds and coulds?

      Delete
    29. Folks,

      Steven Evans is right, of course, though I wish he could express himself a little more politely. Speaking about "fine-tuning" requires a measure. It is impossible to ever obtain any evidence for a measure on the space of possible laws of nature because we can only observe one point in that space. Therefore, talking about fine-tuning is mathematically ill-defined and scientifically useless.

      This is patently obvious and frankly I don't understand why we still have to talk about this. For the sake of making progress, unless you can explain what is wrong with the above argument that proves fine-tuning considerations are generally and necessarily unscientific by construction, please avoid further comments.

      Delete
    30. Correction: I should have been clearer that I am referring to fine-tuning in the laws of nature (i.e., the content of your discussion). Fine-tuning arguments are fine if you do have a statistical sample. E.g., you can very well talk about fine-tuning in (say) the properties of a galaxy, because we have statistics about the properties of galaxies. You can likewise talk about "fine-tuning" in, say, human architecture because we have much data that demonstrates certain configurations are unlikely to occur other than by human design, and so on.

      Delete
    31. Steven Evans is right

      About what? About the low IQ of Brian Schmidt? About his (demonstrably false if you read the book) claims that the book by Lewis and Barnes says things which it doesn't?

      of course, though I wish he could express himself a little more politely

      Conventional wisdom holds that violence is the last refuge of the incompetent. If his arguments are so good, why hasn't he written a refereed journal paper pointing out that Weinberg, Rees, Schmidt, Carr, and so on are delusional low-IQ dimwits?

      Speaking about "fine-tuning" requires a measure. It is impossible to ever obtain any evidence for a measure on the space of possible laws of nature because we can only observe one point in that space. Therefore, talking about fine-tuning is mathematically ill-defined and scientifically useless.


      This is a misunderstanding on your part. As promised, since your "screams for explanation" paper has now been published (though in a journal I hadn't previously heard of), I'll write a reply in a respectable journal, which I think is a better forum for discussion than a comments thread where non-scientists verbally abuse others. I'll also write a paper explaining your misunderstanding. I'll report back when they have been published, hopefully this year (there are several others already in progress ahead of them in the queue). Some other people have the same misunderstanding as well, so you are in good company. But if you really believe it, then you should go on record, naming names, in a refereed-journal paper, saying that Rees, Schmidt, Carr, Weinberg etc. are deluded on the question of fine tuning, the multiverse, the anthropic principle, and combinations of these.

      This is patently obvious and frankly I don't understand why we still have to talk about this.


      As you said yourself in another context, "saying that it is obvious doesn't make it obvious". :-)


      For the sake of making progress, unless you can explain what is wrong with the above argument that proves fine-tuning considerations are generally and necessarily unscientific by construction, please avoid further comments.

      Delete
    32. But … Sabine opened the door to where I am in part arguing. Does this statistical sample space exist? It is not clear, but if you work within the assumption that it does, say the landscape etc., then asking how things are fine-tuned is not unreasonable. The flip side, without the assumption of a sample space, is the question of how the vacuum of this observable universe has the ZPE or cosmological constant observed. Unless there is some obstruction that makes the question unanswerable, which BTW I think is very possible, this is a reasonable question.

      Delete
    33. Lawrence,

      The statistical sample space "exists" as a mathematical construct. It does not exist in the scientific sense.

      Delete
    34. Phillip,

      You are either deliberately or mistakenly using my words out of context. I did not merely say it is "obvious" by way of explaining it. I FIRST explained it and then said it is obvious.

      I am eagerly waiting to see how you will manage to derive statistics from a sample of one.
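
      To make the "sample of one" point as bluntly as possible, a toy illustration (the single data value is of course made up):

      ```python
      # Sample statistics beyond the mean are simply undefined for n = 1.
      import statistics

      observed_laws = [1.0]   # we observe exactly one set of laws/constants

      print(statistics.mean(observed_laws))   # fine: 1.0
      try:
          statistics.stdev(observed_laws)     # any notion of spread needs n >= 2
      except statistics.StatisticsError as err:
          print(f"no spread from a sample of one: {err}")
      ```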

      Aside, Synthese is one of the oldest and best-known journals in the philosophy of science. I am somewhat shocked to hear you don't know it.

      Delete
    35. You are either deliberately or mistakenly using my words out of context. I did not merely say it is "obvious" by way of explaining it. I FIRST explained it and then said it is obvious.

      I DID include a smiley. :-)

      I am eagerly waiting to see how you will manage to derive statistics from a sample of one.

      Again, a misunderstanding. I'll keep folks here updated.

      Aside, Synthese is one of the oldest and best-known journals in the philosophy of science. I am somewhat shocked to hear you don't know it.

      I'm not (yet) a philosopher. However, there are some interesting things in philosophy-of-science journals, so I've been expanding my reading, which is where (purely by chance) I came across your paper.

      Delete
    36. The sample space is hypothetical, but then again atomic theory was hypothetical for 2000 years.

      Delete
    37. Evans wrote: There is no such "small" difference in e, therefore it is not known that the ionization energy could be different, nor that the Bohr radius could be different. More sci-fi.

      The Rydberg number and ionization energy and the Bohr radius are standard results. If one puts in different numbers than known, say by adding some small value, the result is easy to find.

      The fine structure constant is reduced at high energy, and known for GeV and TeV energy. This rescaling of the electric charge and charge renormalization is well known. So talking about "changing" charge is known, and called screening. With QCD there is an anti-screening.

      To my mind the relevant question is how the RG flow of these running gauge parameters came to have the attractor points they have as physics approaches the IR scale. I don't think this is an illegitimate question. If an answer exists it is either due to some sample space of possible outcomes, say on the landscape, or there is some grand RG scheme that determines them. Of course there is a third possibility, which is that there is no decidable answer. Either way, the question is not wrong, and with due diligence an answer should be had.

      Delete
    38. Steven Evans wrote:

      “these physical constants are only known to take one value, so the question of fine-tuning is a non-starter”

      As long as all we know about is our universe, I agree with you and Sabine that, as Sabine wrote, “talking about fine-tuning is mathematically ill-defined and scientifically useless.”

      sean s.

      Delete
    39. Lawrence Crowell wrote:
      >The sample space is hypothetical, but then again atomic theory was hypothetical for 2000 years.

      Yes, but the atomic theory was an idea about the actually existing physical world that seemed to be either true or not true. I.e., you could either take a grain of sand and divide it into finer and finer pieces indefinitely or, at some point, you would get to a smallest piece that could not be divided further.

      And it actually does work more or less like that: electrons, protons, and neutrons really cannot be broken down into smaller constituents (there are no free quarks).

      The atomic hypothesis was one of two plausible hypotheses about the nature of the world: it was a real hypothesis about the actual physical world that could, in principle, be directly tested and confirmed or refuted.

      No one will ever take a photograph of a bubble-chamber event of your hypothetical sample space. No one knows how to weigh or measure part of your sample space. Even talking in this way is a "category error": the "sample space" does not exist out there in the physical world in the sense that electrons, and water, and beluga whales do.

      Treating statements about this sample space as relevant to scientific knowledge is, at the very least, a very different sort of activity than what natural science has hitherto pursued.

      Delete
    40. Phillip Helbig 4:38 AM, January 15, 2020

      "About what? About the low IQ of Brian Schmidt?"
      In a comment above I listed a whole bunch of schoolboy errors in Brian Schmidt's foreword of "A Fortunate Universe". Or do you think the Sun is a special star, too?

      " About his (demonstrably false if you read the book) claims that the book by Lewis and Barnes says things which it doesn't?"
      Lewis and Barnes claim the universe is fine-tuned. What's the evidence, please? I don't see any.

      "Conventional wisdom holds that violence is the last refuge of the incompetent."
      I have given reasons for all my comments, even the ad hominem ones. Calling Physicists out for being incompetent or corrupt is not violence.

      "If his arguments are so good, why hasn't he written a refereed journal paper pointing out that Weinberg, Rees, Schmidt, Carr, and so on are delusional low-IQ dimwits?"
      I explained this before. Why do I have to keep repeating the same points to you? Weinberg is a Physics god and is well aware, and has stated, that there is no evidence of fine-tuning. Carr admitted to me in an email that there was no physical evidence of fine-tuning for the reason Dr H. gives - the information does not exist to calculate probabilities.

      "which I think is a better forum for discussion than a comments thread where non-scientists verbally abuse others."
      i.e. you will run away again. Your paper will be based on an assumption that constants can be other values than the ones measured, which is not known, and then you will claim here that you have shown fine-tuning, which you won't have.

      "Some other people have the same misunderstanding as well, so you are in good company."
      Whereas you are in bad company - mostly people funded by the Templeton Foundation.

      "a refereed-journal paper, saying that Rees, Schmidt, Carr, Weinberg etc. are deluded on the question of fine tuning, the multiverse, the anthropic principle, and combinations of these."

      Weinberg and Carr are well aware that there is no evidence for universal fine-tuning or the multiverse. Again your reading comprehension is letting you down. The anthropic principle is a simple tautology and has led to no significant scientific knowledge, as was discussed in the recent post and comments on it.

      I am really looking forward to this promised paper of yours.

      Delete
    41. Lawrence Crowell 10:39 AM, January 15, 2020

      "The sample space is hypothetical,"

      Right, so you agree that universal fine-tuning is pure speculation. Are you going to admit that explicitly?

      Lawrence Crowell 2:29 PM, January 15, 2020
      "If one puts in different numbers than known, say by adding some small value, the result is easy to find."
      If you put in different numbers than known, then you are not talking about physical reality, are you?

      "The fine structure constant is reduced at high energy, and known for GeV and TeV energy. This rescaling of the electric charge and charge renormalization is well known. So talking about "changing" charge is known, and called screening. With QCD there is an anti-screening."
      I bow to your superior knowledge, but that's still not fine-tuning. There may be non-tuning physical reasons for the screening and anti-screening, or no accessible reasons.

      "If an answer exists it is either due to some sample space of possible outcomes, say on the landscape, or there is some grand RG scheme that determines them. of course there is a third possibility, which is there is not decidable answer. "
      I completely agree. So why were you claiming this was an example of fine-tuning? Are you going to come out and explicitly state that universal fine-tuning is pure speculation (and speculation that has led to no physical knowledge to boot)?

      Delete
    42. Sabine Hossenfelder 3:35 AM, January 15, 2020

      "This is patently obvious and frankly I don't understand why we still have to talk about this."

      It's a non-issue on the physical record, but that's not what is being told to the world outside physics journals. Brian Schmidt gave his stamp of approval to "A Fortunate Universe"; and the only 2 reviews of the book I know of (in Physics Today and The Observatory) are apologias for the nonsense; Luke Barnes has also announced to the Sydney Herald that the universe is fine-tuned, but the multiverse is extremely unlikely. Apparently, physicists can just lie to the media with complete impunity. It's not in a published journal paper so it's all just "opinion".

      The only way to deal with these lies to the non-physics world is to send a letter to the editor of Physics Today pointing out the silly schoolboy errors in Marcelo Gleiser's review, Brian Schmidt's foreword and the book itself. The letter would need signatories who significantly outgun Gleiser and Schmidt reputationally. But presumably such a letter would be difficult to effect.
      If this were medical science, Luke Barnes, Geraint Lewis, Brian Schmidt and Marcelo Gleiser would all be struck off for dishonesty. In most jobs, if one were as utterly incompetent as these four, one would be summarily dismissed. I'm glad at least none of them are receiving any of my taxes.

      Delete
  17. Sabine, I got wind of the "something that happened" as you call it, during my student days in the 1980s. It was a great personal crisis because it gradually dawned on me that I did not want to be a part of what was going on. And that's all I'll say about it - it was a palpable pathology. I have my own ideas about what happened, which I will keep to myself.

    -drl

    ReplyDelete
  18. Perfect timing!

    A review paper, by Ruiter, appeared on arXiv today (10 January 2020): "Type Ia Supernova Sub-classes and Progenitor Origin"
    Link: https://arxiv.org/abs/2001.02947

    A particularly interesting sentence: "It is probably wise to keep in mind that using lower redshift (higher metallicity) SNe Ia to standardize SNe Ia at large redshift (lower metallicity) in order to perform precise calculations of cosmological quantities may then lead to difficult to trace uncertainties in the derivation of cosmological parameters."

    Note that while metallicity has, on average, increased over cosmic time, SNe Ia progenitors will almost certainly have a range of metallicities, at all redshifts.

    ReplyDelete
    Replies
    1. The authors appear to agree with my assessment on redshift and metallicity. This has concerned me in the past. I generally figured this might just be statistical noise. However, with the discrepancy between the CMB value H = 67 km/s/Mpc and the galactic and SN Ia values of H = 74 km/s/Mpc, this is an important issue.

      Delete
    2. "However, with the discrepancy between CMB H = 67km/sec-Mpc and galactic and sn1 data with H = 74km/sec-Mpc this is an important issue."

      Indeed. And it's not just a tension between those two methods, but also other methods. The good news is that all methods used to estimate values of cosmological parameters are getting a LOT of scrutiny! And it's no surprise that many systematics - ignored when uncertainties were nowhere near 1% - are turning out to be important.

      Re "my assessment on red shift and metallicity": I would like to stress, again, that metalicity can vary within a galaxy, so it may turn out that the more important relationship (with regard to 1a SNe) is with birth environment, and that redshift is merely an imperfect proxy. It may also turn out that formation channel (for the binary or triple systems which end up being 1a's) is much more closely correlated with redshift.

      A recent paper which looks at how metallicity varies in our own Milky Way galaxy (or rather, a particular subset of stars): "On the Metallicity Gradient in the Galactic Disk"
      Link: https://arxiv.org/abs/2001.02916

      Delete
    3. Thanks! If high redshift SN Ia progenitors are systematically less metallic and so have a different intrinsic magnitude from low redshift ones, presumably there is a systematic correction that can be discovered with sufficient observations, so we can hope ....

      Delete
    4. It makes sense that a white dwarf core with lighter elements, say mostly helium and little carbon and oxygen, would produce more energy in a SN1 event. I am not enough of an astrophysicist to comment in depth. A white dwarf formed from PopII stars may have far fewer elements heavier than helium, but PopI stars differ from PopII stars in this regard by a percent or so. It is really not that huge. So one might wonder by what factor this would change the absolute luminosity of these SN1 events.

      A data set with lots of these events would maybe wash this variation out statistically. On the other hand, if SN1 events from white dwarfs produced by PopII stars do have a large enough luminosity difference from those produced by PopI stars, then this is significant.

      The disparity between the Hubble factor from the CMB and from other methods, from galactic redshifts to SN1s, may suggest that something odd is occurring with the vacuum of the cosmos. A variation or increase in the vacuum energy density would lend support to phantom energy and an asymptotic explosive expansion or big rip of space in the comparatively near future. By near I mean maybe tens of billions of years. This will have profound implications for quantum gravitation and cosmology.

      Delete
  19. Recently, using principles elucidated by Maxwell, I posted on the fridge a cartoon from the New Yorker magazine. In the cartoon, Eve in the Garden of Eden confronts the snake. Her reaction is "Holy shit, a talking snake!" Well Holy Shit! Here we all are in a talking universe.

    If you don't have some level of surprise at a universe that talks back to you, then you are missing the point. Some people find science and math to be the common language of the universe. Others have to carry on the conversation through religion or even poetry. It's not so much that every possible approach is equally valid, it's that the talking universe presents us all with such a remarkable existential challenge that many individuals don't handle the challenge in the best possible way. So if old stories from old books will draw people to the campfire, so be it.

    ReplyDelete
  20. Sabine,

    Have you seen the article in the current Scientific American “Quantum Slits Open New Doors”, which seems to imply that superposition is not quite correct in QM?

    Their core claim:

    >”Our research, however, has revealed a flaw in the way physicists have traditionally dealt with wave-equation calculations when they are applied to the double-slit experiment. Imagine the classic experiment and let the two slits be named A and B, respectively. The solutions to the wave equation describing a particle in this system can be labeled ψA when slit A is open and ψB when slit B is open. What happens when both slits are open? It is common practice in textbooks to call the solution ψA + ψB to represent the fact that the particle is in a superposition state in which it is passing through both slits. This is indeed an application of the superposition principle, though an incomplete one. The reason is simple: The situation with two slits open at once is not the same as the combination of having the slits open separately. We know that when they are open at the same time, a particle in some ways passes through both and interacts with itself, and we cannot represent these interactions by simply adding the two solutions.”

    As far as I can tell, all that is really going on here is the old distinction between “first quantization” and “second quantization.” The EM field is not the quantum wave function but a real physical field, which can induce currents in the metal they use to block the microwaves, and those currents can then produce further EM radiation. As everyone today is supposed to know, you have to do what we nowadays just call “field quantization” to do quantum mechanics correctly in such a case: if you treat the “first-quantized” fields – whether Klein-Gordon or Maxwell – as actual quantum wave functions, things do not really work.

    It turns out that all of this has been a bit of a cottage industry, going back over a decade: see, for example, the 2008 paper on the arXiv, ”Testing Born’s Rule in Quantum Mechanics with a Triple Slit Experiment”: as that paper makes clear, they do seem to be challenging both QM superposition and Born’s rule.

    It seems to me that this has little to do with QM per se and is really just exploring nooks and crannies of Maxwell’s equations (boundary conditions at conductors can be challenging!).

    What really perplexes me is the author’s claim in the SciAm article that her “three-slit” experiment has implications for quantum computing. Of course it is true that if one has a more than two-state system, one can store more than one bit in each example of that system. But what this has to do with her claims about superposition and Born’s rule in the three-slit experiment completely escapes me.

    I’ll be interested to hear your take.

    All the best,

    Dave

    ReplyDelete
    Replies
    1. Dave,

      I haven't followed this but I know that there's been some discussion about whether or not this or that experiment actually produces a superposition state since the 1980s or so. Usually it comes down to some interaction with the system setup or a self-interaction.

      This is an important discussion of course if you want to properly predict the outcome of an experiment, but in and by itself it cannot solve the measurement problem. I can explain in more detail why that is, but I am not sure it's interesting to you.

      I don't know what this has to do with quantum computing per se. Of course, to use a quantum computer, you need to know what state you have prepared, so in that sense it matters, but that's not specific to quantum computing.

      In any case, thanks for pointing out, will have a look at this.

      Delete
    2. Sabine,

      Well, superficially, it would be absolutely crucial in understanding the measurement problem -- if Born's rule fails, even textbook QM is just wrong on measurement and we are basically trying to solve the wrong problem (we all want, at least, an explanation of why Born's rule works).

      In fact, I think it is a red herring. I think they are just noticing some mildly interesting features of classical EM.

      But I do wonder if we are now going to see a blizzard of "How to do QM now that Born's rule is disproven" papers on the arXiv.

      All the best,

      Dave

      Delete
    3. Dave,

      To say the obvious, Born's rule has worked just fine so far and it's not going to stop working tomorrow because Scientific American published some essay. Born's rule almost certainly fails in certain circumstances, namely, as you say, when we are ignoring processes that can't be ignored. It's certainly good to better understand this but it won't change the world.

      The reason this cannot solve the measurement problem is that for this you'd need some interaction that is either non-local or relies on future input. This is obvious if you think about it for a bit. You can't use a local interaction (say, with components of the system setup) to tell the prepared state what detector eigenstates it is *going to be measured with* (in the future). It just doesn't work. Best,

      Sabine

      Delete
    4. PhysicistDave,

      Sorry to ruin your feng shui, but I need to remind you that 10 days have passed since you made the claim:

      "of course you can freely specify initial conditions at different points of space and change them in one region without having to change them everywhere!"

      And the multiply requested proof is still just:

      "It hardly bears mentioning"

      I think the time has come to either present the proof for your assertion or retract it.

      Regards!

      Delete
    5. Dave,

      “... they do seem to be challenging both QM superposition and Born’s rule.”
      This might have been their goal, but so far superposition and Born’s rule are still alive and kicking.
      They just calculated the amplitude more precisely. Instead of taking only ψ_A+ψ_B+ψ_C they did the path integral, see here (pdf) eq. (6), (8) and (10), i.e. also adding the amplitudes of the purple path(s) in fig. 2 – a clear sign that superposition still works.

      Delete
    6. Andrei wrote to me:
      >I think the time has come to either present the proof for your assertion or retract it.

      I already presented it on the other blog, Andrei: you just lack the math background to understand it.

      I may post another version here if you ask sweetly.

      But, frankly, you bore me.

      I find Jay Yablon's mistakes much more interesting.

      Delete
    7. Reimond wrote to me:
      >They just calculated the amplitude more precisely. Instead of taking only ψ_A+ψ_B+ψ_C they did the path integral...

      Yeah, I know but the path integral is just another way to solve Schrödinger's equation: you should not get a different answer.

      In any case, the microwave experiment presented in SciAm seems to be essentially a classical EM experiment, and it has a classical EM explanation as I suggested above: classical currents and charges induced in the metal barrier.

      I don't think any of us have linked to the initial technical paper reporting the microwave results: "Measuring the deviation from the superposition principle in interference experiments".

      In the abstract, they claim, "We report the first measurement of a deviation (as big as 6%) from the superposition principle in the microwave domain using antennas as sources and detectors of the electromagnetic waves. We also show that our results can have potential applications in astronomy."

      I have done desktop experiments with microwaves and slitted barriers myself.

      Given the decades of experience we have with microwaves and the more than a century and a half of experience with Maxwell's equations and EM radiation... well, if they really have discovered a novel 6% effect in a classical physics situation but not explainable by classical physics nor even by standard QM but instead by a violation of quantum superposition and/or Born's rule...

      Well, folks, if that is all true, we are all looking at something bigger than Michelson-Morley or the black-body problem. Fasten your seat belts.

      Or just maybe it has the simple classical-physics explanation I suggested above.

      I think this whole thing will end up in the same category as "cold fusion" and "faster-than-light neutrinos."

      But my guess is that that will only occur after a few thousand pointless papers pop up on the arXiv and after a big brouhaha in the pop-science press. And some people are going to get very angry.

      Their paper does allude to the possibility of the kind of effects I am discussing:
      "Another recent work [19] has shown the measurementof a non-zero Sorkin parameter in a completely different experimental scheme using fundamentally different amplification techniques for an enhanced effect. Their triple slit experiment was done using single photons of 810 nm wavelength by enhancing the electromagnetic near-fields in the vicinity of slits through excitation of surface plasmons in the material used for etching the slits. Thus, they enhanced the Sorkin parameter by using near field components of the photon wave function and material induced effects..."

      I think they are too cavalier in dismissing such effects in their own experiment.

      By the way, there is a whole slew of related papers on the arXiv: I have found that useful search words on Google are: arxiv Sorkin parameter
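
      For concreteness: with naive superposition plus Born's rule, the three-slit "Sorkin" combination vanishes identically, so any nonzero value flags exactly the kind of extra contribution discussed above. A toy sketch with made-up amplitudes (purely illustrative numbers, not data from any experiment):

      ```python
      # If the amplitude with several slits open were exactly the sum of
      # the single-slit amplitudes, the Born rule makes
      #   eps = P_ABC - P_AB - P_AC - P_BC + P_A + P_B + P_C
      # vanish identically; a nonzero eps signals an extra contribution.
      import cmath
      import random

      random.seed(1)
      amp = {s: cmath.rect(random.random(), random.uniform(0, 2 * cmath.pi))
             for s in "ABC"}   # made-up single-slit amplitudes

      def prob(slits, extra=0.0):
          """Born-rule probability for a given set of open slits."""
          return abs(sum(amp[s] for s in slits) + extra) ** 2

      def sorkin(extra=0.0):
          multi = lambda s: prob(s, extra)   # extra path only when >1 slit open
          return (multi("ABC") - multi("AB") - multi("AC") - multi("BC")
                  + prob("A") + prob("B") + prob("C"))

      print(f"naive superposition: eps = {sorkin():.2e}")      # ~0 (roundoff)
      print(f"with an extra path : eps = {sorkin(0.05):.2e}")  # nonzero
      ```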

    8. PhysicistDave,

      "I already presented it on the other blog, Andrei: you just lack the math background to understand it."

      No, you just assumed what you needed to prove (the changed state is consistent with EM equations) and then insisted on a red herring (EM is time reversible, etc.).

      "I may post another version here if you ask sweetly."

      There is no reason for me to ask "sweetly". You either present the proof or you are de facto defeated.

    9. Andrei wrote to me:
      >No, you just assumed what you needed to prove (the changed state is consistent with EM equations) and then insisted on a red herring (EM is time reversible, etc.).

      Well... since you were making claims about classical EM, yes, I did assume that what we were talking about was consistent with the EM equations! All I did was point out the initial conditions that can be chosen for the EM equations. Obvious to anyone who knows classical EM. But not you.

      Duh...

      And then Andrei accused me of:
      >a red herring (EM is time reversible, etc.)

      Duh...

      You see, uh, well, really, y'know, well, okay you don't know, the EM equations are time-reversible (as I have said, you have to replace B by -B, since B is an "axial vector").

      You have proved my case. The "debate" is now over. Yes, I am just appealing to the very basic properties of the diff equs that constitute classical EM, which you just denied. Anyone who understands classical EM and the diff equs describing it knows that what you just maintained is nonsense and what I said is right.

      Anyone who does not understand this... I just don't care about them.

      Andrei also wrote:
      > You either present the proof or you are de facto defeated.

      I have not entered into a contest with you -- of all people! -- in which I can be "defeated." It is not my obligation to "disprove" every crackpot in the world who posts something idiotic on the Web. And it is certainly not my obligation to convince you that you are wrong or stand "defeated"!

      I figured out the trick played by guys like you a long time ago, Andrei. It is always possible for you guys to say, "I'm not convinced, so you lose -- nah, nah, nah, nah!"

      Nope, you lose, Andrei, because you just admitted that your case hinges on the EM equations not applying and not being time reversible.

      But we were simply talking about situations in which classical EM is true (i.e., the classical EM equations do apply) and those equations most certainly are time reversible, which is obvious to anyone who knows classical EM.

      Consider yourself victorious as you wish.

      I have no respect for you. Our conversation is over.

      All hail the winner of the debate, the brilliant Andrei!

    10. @PD Will you drop the attacks on people? Your tone is insufferable.

      -drl

    11. drl wrote to me:
      >Will you drop the attacks on people? Your tone is insufferable.

      Oh, what you are seeing is my profoundly restrained response to people I consider liars or arrogant ignoramuses.

      I have a great deal of sympathy for people who are ignorant but not arrogant in their ignorance.

      And I am willing to tolerate people who are arrogant but brilliant.

      But for people like bud rap, antooneo, Andrei, etc. who hurl blatant libels at scientists who know much, much more than they do... well, I see no reason not to tell them the truth about their behavior and their level of ignorance.

      You seem to think all of us scientists should show them more courtesy than they show us.

      Why?

      I do not think you can find a case where I was insulting to someone until he was insulting towards scientists or blatantly lied.

      You seem to think I am insufferable when I inform some people who have been insulting towards me and many others that our conversation is over? I do not have the right to not communicate with people I find rude?

      Quite frankly, I have not appreciated your tone, which I have repeatedly found offensive, but I will not try to insist you change it.

      If you are offended by my tone, please do not read anything I post. Then everyone can be happy.

    12. PhysicistDave,

      "since you were making claims about classical EM, yes, I did assume that what we were talking about was consistent with the EM equations!"

      Let me refresh your memory a little bit.

      We have an EM system A-B containing two subsystems A and B separated by an arbitrary distance. This system satisfies the EM equations.

      You claim that you can have another system A'-B', where:

      A' = A, and
      B' ≠ B'

      My claim (supported by the argument you were unable to refute) is that A'-B' cannot exist (does not satisfy EM equations).

      You simply asserted that A'-B' is valid (satisfies EM equations). There was no proof provided for this assertion.

      "EM equations are time-reversible"

      Sure they are. IF the state satisfies EM equations it will satisfy them both in the future and the past. But you didn't show that A'-B' satisfies them.

      "You have proved my case."

      No, I have proven my case. Your "case" is just an unjustified assertion.

      "The "debate" is now over"

      If you fail to prove A'-B' is valid, it is over. And you lost it.

      "your case hinges on the EM equations not applying and not being time reversible"

      That's ridiculous.

      "I have no respect for you."

      I've noticed. You use ad hominems instead of arguments. I don't care. It is the arguments that matter. And you have repeatedly failed to address them.

      "All hail the winner of the debate, the brilliant Andrei!"

      Sabine and the other participants here will judge who the winner is.

    13. Sorry, there is a typo here:

      A' = A, and
      B' ≠ B'

      The correct version is

      A' = A, and
      B' ≠ B

    14. PhysicistDave,

      "I do not think you can find a case where I was insulting to someone until he was insulting towards scientists or blatantly lied."

      Such a case is easy to find. Me. I've never insulted you or anybody else and I did not lie. Yet, you repeatedly insulted me.

      But I understand you. You are a sore loser. Your failure to address my arguments with evidence from physics makes you angry. The first step in getting rid of that anger would be to behave like a man and accept you were wrong. You may then retract your false claims here and on Mateus' blog. He has every right to share the bitter fruits of defeat with you, as he is even more clueless on the subject than you are.

    15. @PD I didn't say you were offensive; I have great respect for your obvious knowledge, which implies equally hard work. I didn't take offense. I said it was insufferable, because it shuts down all discussion to adopt such a tone. Yes, you are right. But what happens when such discussions happen between fellow experts? It is not so easy to be dismissive. I am sorry for my outburst.

      -drl


    16. drl,

      May I ask your opinion about my argument above? Do you agree or not that a system A-B satisfying EM equations cannot be replaced with a system A'-B' (where the states of A and A' are identical and the states of B and B' are different) while keeping A'-B' consistent with EM equations?

      Thanks!

    17. @Andrei all you can argue from "EM equations" is what they imply. On the classical level, there is no way to have isolated point charge centers (hopeless divergence). On the quantum electrodynamic level, there is no way to have isolated point charge centers without wishful thinking (logarithmic divergence, for which there is hope in the form of renormalization). Clearly, all arguments are bound to be vague until there is a form of electrodynamics without ambiguities.

      -drl

    18. drl,

      I think we can ignore for now the problem of point charges. I did not claim that the systems consist of point charges anyway. Let's assume that the charges are not very close to each other so that we need not bother with calculating the fields at their location. What is your opinion about my question?

      Take for example the very simple case where subsystem A consists of one charge, and subsystem B is just some region of space at an arbitrary distance from A. We are in a frame where A is stationary. The state at B will be represented by the electric field originating at A and no magnetic field (A is stationary). If I try to change the state of B, say by making the electric field zero, while keeping A unchanged, the EM equations would not be satisfied, contrary to Dave's claim. What do you think?

    19. drl wrote to Andrei:

      >Andrei all you can argue from "EM equations" is what they imply. On the classical level, there is no way to have isolated point charge centers (hopeless divergence). On the quantum electrodynamic level, there is no way to have isolated point charge centers without wishful thinking (logarithmic divergence, for which there is hope in the form of renormalization).

      drl, in the usual form of Maxwell's equations, we deal with charge densities and current densities, so we do not have to deal with point charges. I think that dealing with densities may go back to Maxwell himself.

      So, at least in terms of the equations, we can ignore the point-charge issue.

      Andrei's error is that he is using what he vaguely remembers from frosh physics class and not looking carefully at Maxwell's equations, in particular at the time derivative terms in the equations. Anyone who does so realizes that, of course, you can change things over here without immediately affecting things over there.

      This not only follows from the equations; it is, after all, merely common sense.

      If this were not true, every time some electric charges were moved around here on earth, there would have to be some immediate change in E or B fields over near Alpha Centauri. We would have faster-than-light signaling!

      I actually knew an engineering prof at Stanford who convinced himself this was true.

      Of course it is not true. Nothing at all will change over at Alpha Centauri for 4.3 years, given that it is 4.3 light-years from us.

      This is all obvious to anyone who is at all technically literate. I think Andrei has convinced himself otherwise because he has a half-baked but incorrect theory of how physical influences are transmitted inside the light-cone.

      I laid all this out on the other guy's blog on which Andrei can unfortunately no longer comment.

      I have delayed posting all the details here because:
      A) It is obvious to anyone with a decent STEM education (Alpha Centauri and all that)
      B) I did lay this out on the other guy's blog, and I do not appreciate the venomous way Andrei responded. So, I decided I would just let Andrei dig himself in deeper and deeper with his own obviously false beliefs.

      I am not sure Sabine even wants me to repost here an argument I already posted elsewhere and which is completely obvious anyway.

      Dave

    20. drl,

      Dave's error consists in confusing a physical change inside our universe (the normal time evolution of the EM system) with a counterfactual change that requires "replacing" our universe with another one that is similar to ours in some respects (the state of the A subsystem) and different in other respects (the state of the B subsystem). There are two main differences between the two scenarios:

      1. A physical change takes time; a counterfactual change does not.
      2. A physical change does not alter the past, while a counterfactual change does (because EM is deterministic).

      Let me now present Dave's claim:

      "you can freely specify initial conditions at different points of space and change them in one region without having to change them everywhere! It hardly bears mentioning."

      The first question we need to ask is what scenario could in principle justify the above assertion.

      Case 1: A physical change inside our universe. You change the subsystem B (say you rotate a detector). That change requires time, so subsystem A's state will also change (the electrons and quarks will continue moving). Conclusion: this cannot justify Dave's claim. He is correct that a change of B will not cause any immediate effect on A, but this is irrelevant because A will continue its normal time evolution anyway, and the result will be that A'≠A and B'≠B.

      Case 2: A counterfactual change. We imagine a different universe where everything on Alpha Centauri (our subsystem A) is the same, but our subsystem B (Earth) is different. If such a universe is possible, Dave's assertion is justified. But is it possible? My simple example above shows that it is not, and Dave did not provide any evidence. What Dave needs to do is to provide a proof that such a counterfactual universe would satisfy EM equations. No such proof is presented, not here and not on Mateus' blog. And I am sure that no such proof will ever be presented because it does not exist.

    21. @PD Yes, Maxwell always deals with densities, but that's never done at the basic level, and as far as I know, all charged leptons are treated as point sources. I did some work on a purely density-based version of EM where the point charge is never introduced. It is rather amazing that this is possible and everything works, including the Lorentz force dynamics.

      -drl

    22. drl,

      On the "basic level" no one deals with charged leptons using classical EM.

    23. @SH, Einstein once said "Das Elektron ist ein Fremder in die Elektrodynamik!" - the electron is a stranger in electrodynamics. This thought really penetrated, because as usual he was right. It's not about classical vs. quantum.

      -drl

    24. drl,

      Einstein was right, but you evidently don't understand what he said, otherwise you wouldn't be going on about supposed problems in describing electrons with a classical theory.

    25. @SH Maybe :) But 1) Yes there is still a regime in which all you need is classical theory (e.g. a galactic core) and 2) I'm not interested in describing electrons with a classical theory - although I would be in good company - Dirac tried it several times :)

      -drl

    26. drl,

      What's your opinion about the Bopp-Podolsky modification of classical electrodynamics? Feynman seems to agree that the theory is consistent. See here:

      https://www.feynmanlectures.caltech.edu/II_28.html

      His only objection is that:

      "The theory of Bopp has never been made into a satisfactory quantum theory."

      But since we are discussing a classical setup, this should not bother us.

    27. drl,

      Andrei wrote to you:
      >Let me now present Dave's claim:

      >[Dave]"you can freely specify initial conditions at different points of space and change them in one region without having to change them everywhere! It hardly bears mentioning."

      >[Andrei]The first question we need to ask is what scenario could in principle justify the above assertion.
      ...
      >Case 2: A counterfactual change. We imagine a different universe where everything on Alpha Centauri (our subsystem A) is the same, but our subsystem B (Earth) is different. If such a universe is possible, Dave's assertion is justified. But is it possible? My simple example above shows that it is not, and Dave did not provide any evidence. What Dave needs to do is to provide a proof that such a counterfactual universe would satisfy EM equations. No such proof is presented, not here and not on Mateus' blog. And I am sure that no such proof will ever be presented because it does not exist.

      That, drl, is indeed the claim, and it can indeed be proven mathematically; it is in fact obvious to anyone with a good understanding of Maxwell's equations.

      We're getting to the point where Andrei has pinned down what he is claiming clearly enough that it would not be too hard to explain the proof. I've been waiting for him to be completely clear and explicit about what he is claiming to be the case before I try to point out his error: otherwise, I know he will just move the goalposts.

      Now, if Andrei can ever reach the point where he seems willing to actually consider the possibility that a Stanford PhD who is writing a monograph on electromagnetism just might possibly know something Andrei does not know... well, if Andrei ever gets there, I will consider posting an explanation here of the proof I already posted on the other fellow's blog. (It really is obvious and goes without saying to anyone who understands Maxwell's equations.)

      There is a game that some people play on the Web that might be called “beat the physicist,” which consists of saying something that is definitely false in terms of physics. And the non-physicist “wins” this game simply by never, ever admitting he is wrong.

      You see this with Andrei declaring that I have been beaten if I do not respond to his satisfaction in explaining his errors, even though I have already posted explanations on that other fellow's blog.

      So, I think that perhaps the best thing I can do is admit that Andrei has indeed won this game – he has refused to exert the effort to see why he is wrong, and so, by the terms of the game, he wins.

      But in fact, it remains true that anyone who understands Maxwell's equations does know that, yes, in Andrei's terms:

      >A counterfactual change. We imagine a different universe where everything on Alpha Centauri (our subsystem A) is the same, but our subsystem B (Earth) is different. If such a universe is possible, Dave's assertion is justified.

      This is obvious to anyone who understands “initial conditions” for differential equations.

      Eppur si muove.

      I am frankly sick of Andrei and hereby concede that I am beaten. More than you can know.

      All the best,

      Dave

    28. Andrei, Dave,

      I can't really figure out what it is that you are even discussing. Dave is entirely correct, of course: the equations of classical EM have solutions for arbitrary initial conditions (provided these are not overconstrained, needless to say). That's trivial to prove and can be found in any textbook.

      As Andrei says, a change in initial conditions is not a change that is actually physically possible. That's correct.

      If you have a theory that rules out certain types of initial conditions, you generically violate statistical independence which makes the theory superdeterministic. This is also correct.

      I fail to see however why or how classical EM would be superdeterministic.

      My best interpretation is that what Andrei means with his systems A and B are not the initial conditions in these systems, but actually the time evolution of the whole system. In that case it is correct that you generically can't change them independently because that would no longer solve the equations of motion. What this says, however, is not that the theory is superdeterministic, but just that the dynamical law constrains what time-evolutions are even possible.

      Please let me know in case I misinterpreted one of you.

    29. Sabine,

      The issue is how free you are to specify initial conditions at t=0 in classical EM.

      Andrei is (finally) actually pretty clear about what he is claiming on this.

      Andrei wants to consider two separate cases in terms of the initial conditions, the second case being a "counter-factual" "different universe," as he says.

      In fact, you can simply think of it as just two separate problems that the professor assigns the student to solve with different initial conditions -- there is no need to get metaphysical.

      So, the issue is how much choice there is mathematically in setting initial conditions at t=0 in classical EM.

      My claim is that the professor can have the initial conditions (E and B fields, charge densities and current densities) be the same at t=0 for the two problems he has assigned in one region of space but different in another region of space that is far away.

      I assume that it is obvious to you that the professor can indeed do this: e.g., both problems could have identical localized light waves in region A but one problem could have no light waves in region B while the second problem does have some localized light in region B.

      This is trivially obvious.

      Andrei denies this.

      The reason he denies it is interesting, but wrong. I'll try to summarize what his misunderstanding is:

      No matter how far apart Region A and Region B are, their light cones overlap at some point in the distant past.

      So, if you “run time backwards,” then the current initial conditions in Region A will impact part of the interior of Region B's light cone in the past. But whatever impacts Region B's light cone in the past will impact Region B in the present. Therefore, setting initial conditions in Region A impacts Region B's past and therefore Region B's present.

      Therefore, if the professor, in his second problem, specifies different initial conditions in Region A, the professor also must change the initial conditions he specifies in Region B.

      QED.

      Now, of course, you and I know that, mathematically, the professor can in fact alter the initial conditions in Region A and leave them the same in region B: the argument I just laid out must be wrong.

      Andrei has been very clear as to what he is claiming is true but not so clear as to what his argument is as to why it is true, so he may object to how I have formulated his “proof.”

      Anyway, forget about changing the initial conditions in the real world and just think about what initial conditions the professor can specify in problem 1 vs. problem 2, and you will get the mathematical point at issue. Andrei and I have (finally!) gotten it down to this pure math issue of specifying initial conditions at t=0 for differential equations.

      So, what does all this have to do with whether classical EM is an example of “superdeterminism”?

      In my opinion, nothing at all.

      In Andrei's opinion, it seems, everything.

      I do not intend to challenge Andrei's metaphysical beliefs about superdeterminism: metaphysics is beyond my pay grade.

      But the concrete mathematical issue of how the professor can alter the initial conditions between problem 1 and problem 2 does have a definite, mathematical answer. And on that, as I assume you know, Andrei is simply incorrect.

      It is worth thinking about why his back-and-forth-in-time light-cone argument is wrong; in any case, he may feel that I have not accurately formulated his argument on that point.

      Tomorrow, I will post a definite numerical example that meets Andrei's
      A' = A, and
      B' ≠ B

      requirements.

      Andrei will not believe how initial conditions can be specified by the professor at t=0 for Maxwell's equations unless given a specific concrete example.

      Of course, he probably will not accept it even then!

      I hope this clarifies at least a bit what is being debated.

      Again, just think of it as an issue of how the professor can vary the initial conditions between problem 1 and problem 2, and it becomes clearer.

      All the best,

      Dave

    30. Hi Dave,

      The reason I was rather suddenly asking about this is that I am giving a talk next week in which I say a few words about the issue of counterfactuals. Metaphysics isn't as irrelevant as you seem to think, so I am sympathetic to Andrei's point on that.

      However, at least the way you have summarized his argument, it's trivially wrong. Since we seem to agree on that, I don't see the need to explain why. In any case, thanks for the clarification. Best,

      Sabine

    31. Sabine wrote to me:
      >Metaphysics isn't as irrelevant as you seem to think, so I am sympathetic to Andrei's point on that.

      Oh, I don't say that metaphysics is irrelevant: it may be of the essence of the disagreement between you and Andrei.

      But metaphysics is irrelevant to whether Andrei is right or wrong on this particular issue: he is just trivially wrong on this particular matter. His understanding of differential equations is... limited.

      When I said metaphysics was "above my pay grade" (old American slang, again!), I just meant that metaphysical issues were not something I was able to resolve.

      Whether it is free will or consciousness or superdeterminism, I have found that, for better or for worse, people almost never change their opinions. Or at least whatever I say almost never changes their opinions!

      All the best,

      Dave

    32. Sabine,

      "Dave is entirely correct of course, the equations of classical EM have solutions for arbitrary initial conditions (provided these are not overconstrained, needless to say). That's trivial to prove and can be found in any textbook."

      The positions/momenta of charges are indeed parameters that can be freely chosen for the initial state. The E and B fields, on the other hand, cannot be specified independently of the positions/momenta of charges. For example, an initial state that contains a charge and a null electric field everywhere is not a valid state. So, in the case of classical EM we have a supplementary constraint that does not exist in non-field Newtonian mechanics. This is the reason I maintain classical EM is superdeterministic.

      "If you have a theory that rules out certain types of initial conditions, you generically violate statistical independence which makes the theory superdeterministic. This is also correct."

      As noted above, some initial conditions that you can write down are not physically possible, so EM is superdeterministic.

      "My best interpretation is that what Andrei means with his systems A and B are not the initial conditions in these systems, but actually the time evolution of the whole system."

      No, it's the initial state that is constrained.

      PhysicistDave:

      "Andrei has been very clear as to what he is claiming is true but not so clear as to what his argument is as to why it is true, so he may object to how I have formulated his “proof.”"

      I think it is better to specify exactly what I have in mind when speaking about "setting initial conditions in Region A impacts Region B's past". Please take a look at Feynman's lecture:

      "Solutions of Maxwell’s Equations with Currents and Charges" here:

      https://www.feynmanlectures.caltech.edu/II_21.html

      Now look at the equations 21.1 that give the "fields produced by a charge which moves in any arbitrary way". If I want to change the fields in one region (the left-hand side of the equations), I also need to change the right-hand side (the position, velocity, and acceleration of the charge) at a time in the past that depends on the distance to the charge.

      There is one more thing I need to stress. The systems A and B are assumed to have a finite, non-zero volume. If A is just one point it's trivial to keep the fields the same there while changing B. However, the atom that emits the entangled particles in a Bell test must occupy some non-zero volume, so taking A to be a single point in space would not be a relevant situation.

      I'm looking forward to seeing Dave's example.

    33. Andrei:

      Of course you can't choose the fields independently of the charges, because the charges are sources for the fields (then you can add solutions to the free equation on top of that). I don't know why you think that's superdeterministic. As I said above, if you overconstrain your initial conditions you will of course find that there are no solutions.

      Look, it is arguably the case (and has been noted many times before) that the assumption of statistical independence is highly suspect because in almost all cases the detector and the prepared state have been in causal contact for a long time. Quite possibly what you want to say is that the detector has long-range fields that are de facto correlated with the prepared state, even if the fields are too small to be measurable. That's kind of like saying, well, the position of Mercury does in fact influence the Sunday lottery drawing, it's just that the influence is really tiny.

      That's all well and fine, but it doesn't really address the major challenge of superdeterminism, which is that you need to find a reason why you do not observe certain measurement outcomes (those that result in superpositions of detector eigenstates). E.g., for some reason certain states are either very unlikely or don't exist at all, and you need to find a mechanism that makes that happen. Best,

      Sabine

    34. Sabine,

      "Of course you can't chose the fields independently of the charges, because the charges are sources for the fields (then you can add solutions to the free equation on top of that). I don't know why you think that's superdeterministic."

      Superdeterminism requires a violation of statistical independence between the hidden variable (say the spins of the entangled particles) and the settings of the detector. Let's show that:

      1. The polarization of an EM wave depends on the way the electron in the atom that emits those particles accelerates.

      2. The way the electron accelerates depends on the field configuration at the place where the atom is located. (Lorentz force)

      3. The field configuration at the place where the atom is located depends on the past detector state (Feynman's equations 21.1)

      4. From 1-3 it follows that the polarization of the entangled particles depends on the past state of the detector.

      5. EM is deterministic, so the past state of the detector determines its present "instantaneous" state.

      6. From 4,5 it follows that the hidden variable depends on the instantaneous state of the detector.

      Now, Jochen argued earlier that what I have proven here is not a violation of statistical independence, but only the fact that the microscopic state of the detector correlates with the hidden variable. But this is enough, since under such conditions one cannot claim that statistical independence holds (it might or it might not), and Bell's theorem fails (it is based on a premise that cannot be shown to be true).

      Regards!

    35. Andrei,

      As I said above, you are correct that statistical independence is generically violated just due to well-known physics and should therefore be considered a highly suspect assumption for a fundamental theory. It is arguably a fine assumption on the macroscopic level (like the tobacco trials that people like to go on about).

      But as I said above, that observation in and by itself doesn't help because saying that statistical independence can be violated doesn't give you a theory that is any better than quantum mechanics. Take the example you just gave. You simply state that "the microscopic state of the detector correlates with the hidden variable". But you haven't told us what the hidden variable is or how it determines the measurement outcome. It's the latter point that's the difficult one.

    36. Sabine,

      "As I said above, you are correct that statistical independence is generically violated just due to well-known physics"

      Great, this is all I intended to prove, and this is what Dave disagrees with. I think this result is very important. It shows that Bell's theorem is simply irrelevant as a tool to select which theories might be able to reproduce QM. We are basically in the same situation as Einstein was when the EPR paper was published. My personal preference goes with stochastic electrodynamics, a theory that was able to reproduce many "quantum" results, such as black-body radiation or the specific heat of solids.

      "It is arguably a fine assumption on the macroscopic level (like the tobacco trials that people like to go on about)."

      Yes, but we can provide the reason why it is so. Statistical independence (SI) holds whenever field interactions can be ignored. Clearly, the EM interaction between the electrons and quarks inside macroscopic, "neutral" objects will not have an observable effect on the center of mass of those objects, for statistical reasons. This is why rigid-body mechanics with contact forces (which obeys SI) is a good approximation. When fields become noticeable (in the quantum domain, but also in gravitational systems like planetary systems) SI fails. For example, the positions of two orbiting stars do not obey SI.

      We are therefore in the situation that we have plenty of examples where SI fails (EM or gravitational systems) but also situations in which it holds (coin flips, tobacco experiments, etc.). So, to say that one should not trust tobacco trials because of SI violations in a Bell test is fallacious. The reasonable position is to assume SI only where there are reasons to do so.

    37. Sabine,

      "You simply state that "the microscopic state of the detector correlates with the hidden variable". But you haven't told us what the hidden variable is or how it determines the measurement outcome. It's the latter point that's the difficult one."

      I think that you assume an unnecessary burden here. It is the job of those who try to falsify a theory to justify their assumptions. If they can't (which is the case here), the reasonable position is to ignore the Bell test completely, as just some random experiment for which a prediction cannot be calculated due to computational limitations. One cannot prove that QM correctly predicts that a duck can quack, yet you don't see the physics community working on that experiment. We just have to accept that experiments involving more than a few particles cannot be modeled on the computers we have, so we may never know how to "explain" them. More to the point, explaining a Bell test in classical EM would imply calculating the fields of all particles at the locus of the emission. It will probably remain an impossible task. This is no reason, however, to abandon the theory. All theories are in the same situation in regards to many experiments.

    38. Andrei,

      Well, I agree with you to some extent, as I think we have noticed before. The tobacco argument and all similar arguments that rest on drawing conclusions about the quantum realm from the classical realm are obviously logically wrong. It is beyond me why people continue to make them. I likewise agree that it is absolutely not helpful to obsess about Bell's theorem because Bell's theorem tells you next to nothing once you realize that statistical independence is easy to violate.

      Having said that, this in and by itself does not give us a theory that is any better than quantum mechanics. For me this is the main challenge here: What does the theory actually look like?

      Best,

      Sabine

    39. drl and Sabine,

      I promised to post a full explanation of the matter concerning initial conditions for Maxwell's equations. Alas, this will probably take much more than the 4096-character limit, which I usually try to adhere to. And I apologize in advance to fellow physicists and other STEM people for belaboring the totally obvious, but some folks are not aware of the obvious.

      Maxwell's equations in SI units are:

      (1) div E = ρ / ε0

      (2) div B = 0

      (3) curl E = - ∂B/∂t

      (4) c^2 curl B = j/ε0 + ∂E/∂t

      Here, E and B are the electric and magnetic fields, ρ is the net electric charge density, and j is the net electric current density.

      If you want to think about how you would solve these equations in an actual concrete situation, it is very useful to re-write the last two equations as:

      (5) ∂B/∂t = - curl E

      (6) ∂E/∂t = c^2 curl B - j/ε0

      The reason for re-writing them in this way can be understood if you think about how you would solve these equations on a computer. You would start with the values of E, B, ρ, and j at a particular point in time, and then step forward (or backward as you wish) in small time steps using equations (5) and (6) that refer to the derivative of E and B with respect to time to tell you how to change E and B at each time step.

      The simplest approach to doing this is called the “forward Euler method” and can be proven, under normal situations, to give the right answer in the limit as the time step Δt goes to zero.

      The important thing to grasp here is that all the terms on the right-hand sides of equations (5) and (6) can be anything at all: they are only there to tell you how to update E and B with each time step.
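
      To make this stepping-forward picture concrete, here is a minimal NumPy sketch of that update loop. It is only an illustration of the logic of equations (5) and (6): I have set c = ε0 = 1, taken j = 0, and used a naive centered-difference curl, with no attention to the stability conditions a production solver (e.g., a Yee/FDTD scheme) would have to respect:

        import numpy as np

        N, dx, dt = 32, 1.0, 0.1    # grid points per side, spacing, time step

        E = np.zeros((3, N, N, N))  # E[k] is the k-th Cartesian component
        B = np.zeros((3, N, N, N))

        def curl(F):
            """Centered-difference curl of a vector field F[3, N, N, N]."""
            d = lambda k, ax: np.gradient(F[k], dx, axis=ax)
            return np.array([d(2, 1) - d(1, 2),   # dFz/dy - dFy/dz
                             d(0, 2) - d(2, 0),   # dFx/dz - dFz/dx
                             d(1, 0) - d(0, 1)])  # dFy/dx - dFx/dy

        for step in range(100):
            # eq. (6) with j = 0 and c = eps0 = 1, then eq. (5)
            E, B = E + dt * curl(B), B - dt * curl(E)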

      Equations (1) and (2), however, are different: there are no time derivatives here, only spatial derivatives, so these constrain the E and B fields at one point in time.

      The constraints are, however, quite simple: equation (1) says that, to use Faraday's picture of “lines of force,” E fields must start or end only on electric charges. Lines of force for the B fields cannot start or end at all: they must either continue on out to infinity or loop back and connect to themselves.

      These are the only constraints on the E and B fields at the initial point in time at which the professor chooses to tell the students the values of E, B, ρ, and j.

      Now, someone who remembers frosh physics might say, “No, no, that's not true: I remember that the E field around a charge has to obey Coulomb's law!” No, that is not true – that is only for the time-independent case. But Maxwell's equations allow the fields to vary in time in accordance with Equations (5) and (6). And no matter how crazy the E and B fields are – as long as the lines of force for B fields never end, and the lines of force for E fields begin or end only on electric charges as required by Equation (1) – the E and B fields can start out as crazy and twisted as you like, and Equations (5) and (6) will tell you how the E and B fields change in time in such a way as to satisfy Maxwell's equations.

      And that is it. Classical electromagnetism just is Maxwell's equations.

      Anyone who does not get this does not understand classical electromagnetism.

      Anyone who denies this is ignorant of physics.

      (End of part 1)
      (Cont. below)

    40. (Beginning of part 2)
      (Cont. from above)

      What about the change in time of ρ and j?

      Well, normally we would just specify the initial position and velocity of each chunk of charge (point charges cannot really exist in classical electromagnetism – the infinite energy in the E field for a point charge requires a negative infinite bare mass for the point charge, and this leads to mechanical instability – see Rohrlich's Classical Charged Particles). We would then just update the velocity of the chunks of charge via

      (7) F = dp/dt

      where F can be any possible source of force.

      Andrei has specified that he wants the only forces to be electromagnetic, so

      (8) F = q (E + v x B)

      which is fine.
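
      The corresponding update for the chunks of charge might look like the following sketch (non-relativistic, forward Euler again; the samplers E_at(x) and B_at(x) are hypothetical stand-ins for interpolating the grid fields to the charge's position):

        import numpy as np

        def push(x, v, q, m, E_at, B_at, dt):
            """One forward-Euler step for a chunk of charge, eqs. (7)-(8)."""
            F = q * (E_at(x) + np.cross(v, B_at(x)))  # Lorentz force, eq. (8)
            v = v + dt * F / m                        # eq. (7) with p = m*v
            x = x + dt * v
            return x, v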

      Again the central point here is that the initial E and B fields can be anything at all with the only constraints being that they must satisfy Equations (1) and (2).

      So, how does this relate to Andrei's claim?

      In his own words, Andrei claims:

      >A counterfactual change. We imagine a different universe where everything on Alpha Centaury (our subsystem A) is the same, but our subsystem B (Earth) is different. If such a universe is possible, Dave's assertion is justified. But is it possible? My simple example above shows that it is not, and Dave did not provide any evidence. What Dave needs to do is to provide a proof that such a counterfactual universe would satisfy EM equations.

      Note that Andrei claims that such a counter-factual is not possible. A single example would show that he is wrong and that it is indeed possible. I will give one such single example.

      In fact, anyone who understands my single example should immediately grasp that there are an infinite number of such examples that are arbitrarily complicated. But one single example is sufficient to prove that Andrei's claim of impossibility is false.

      Here again is another description by Andrei of the basic point at issue (and correcting what he said was a typo):
      >[Andrei] “Let me refresh your memory a little bit.

      >We have an EM system A-B containing two subsystems A and B an arbitrary distance away. This system satisfies EM equations.

      >”You claim that you can have another system A'-B', where:

      >A' = A, and
      >B' ≠ B

      >”My claim (supported by the argument you were unable to refute) is that A'-B' cannot exist (does not satisfy EM equations).

      >”You simply asserted that A'-B' is valid (satisfies EM equations). There was no proof provided for this assertion.”

      I am going to give a very simple, easily calculated counter-example to Andrei's claim, and one counter-example suffices. But, again, this could be made as complex as the reader wishes.

      Subsystem A of my system is located in a sphere one light-year in radius around the sun. Subsystem B of my system is located in a sphere one light-year in radius around Alpha Centauri. The edges of the spheres are of course more than two light-years (more than 12 trillion miles) apart from each other.

      I am going to specify E and B fields everywhere in the universe that obey Equations (1) and (2), and therefore are allowable initial conditions for Maxwell's equations.

      (End of part 2)
      (Cont. below)

    41. (Beginning of part 3)
      (Cont. from above)

      First, the initial conditions the professor specifies for problem 1:

      ρ and j are zero everywhere in the universe.

      The E and B fields are zero at time t=0 everywhere except inside the spheres centered on the sun and Alpha Centauri.

      In the sphere centered on the sun (this will be subsystem A), the E field is zero at time t=0, but the B field is given by (units of length are light-years):

      Bx = M * (-y) * f(r)
      By = M * (x) * f(r)
      Bz = 0

      where f(r) = (r^4 – 2*r^3 + r^2)

      and where the (x,y,z) coordinate system is centered on the sun and

      r = sqrt (x^2 + y^2 + z^2)

      I've chosen this functional form just to make the needed calculation easy: the only constraint that exists is that div B must be zero, and this is just first-year calculus to check with this simple functional form. The functional form has the needed partial derivatives existing and continuous everywhere: in particular, B will vanish at the edge of the sphere. (I simply chose the polynomial f(r) to vanish quadratically at r = 0 and r = 1).

      For fun, I'll set M to be one gigaTesla.

      In the sphere centered on Alpha Centauri (i.e., subsystem B), the magnetic field is just the same, but of course with the coordinates centered on Alpha Centauri.

      Again, the only constraints that must be checked are that div E is zero (trivial, since E is zero) and that div B is zero, which is checked exactly as for subsystem A.
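
      For anyone who would rather let a computer do the first-year calculus, here is a short SymPy check (my addition, not part of the original argument; the identical calculation, with E in place of B, covers the field specified for subsystem B' below):

        import sympy as sp

        x, y, z, M = sp.symbols("x y z M")
        r = sp.sqrt(x**2 + y**2 + z**2)
        f = r**4 - 2*r**3 + r**2

        Bx, By, Bz = -M*y*f, M*x*f, 0   # the proposed initial field
        div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
        print(sp.simplify(div_B))       # prints 0: constraint (2) holds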

      Now, the initial conditions the professor specifies for problem 2:

      For the sphere centered on the sun (this is subsystem A' which must be the same as subsystem A), the conditions are just as before: the E field is zero and

      Bx = M * (-y) * f(r)
      By = M * (x) * f(r)
      Bz = 0

      But, for subsystem B', we must change things.

      So, for subsystem B', centered on Alpha Centauri, the professor specifies the field values at time t=0 to be: the magnetic field now vanishes in the sphere of radius one light-year around Alpha Centauri, but the electric field is given by:

      Ex = V * (-y) * f(r)
      Ey = V * (x) * f(r)
      Ez = 0

      where, for fun, we will let V be 1 teraVolt/meter and where, of course, x, y, z, and r are centered on Alpha Centauri.

      In the sphere around Alpha Centauri, the only constraint equations that must be checked are (1) and (2): since the B field is zero, Equation (2) is trivially satisfied and, since there is no charge anywhere in the universe, Equation (1) merely specifies that div E is zero, which can be checked as a trivial calculus problem.

      My examples satisfy Maxwell's equations and, obviously, Andrei's provision that:

      >A' = A, and
      >B' ≠ B

      Therefore this shows that what Andrei claimed was impossible is in fact possible. QED.

      Now, of course, I have obviously used an elephant gun to kill a gnat. The only thing that required any cleverness at all was to come up with a functional form for E and B fields that vanished outside of a sphere and that had zero divergence. And that was pretty easy.

      The only real point that needs to be understood is that Equations (5) and (6) (or (3) and (4)) are time evolution equations, not constraint equations. The only constraint equations that must be satisfied by the initial conditions are Equations (1) and (2).

      If you don't get that, you do not get Maxwell's equations.

      Simple though this example may be, it does prove Andrei's claim to be false: one counter-example disproves a universal claim.

      One possible objection is that my different subsystems could never have come into existence. A simple way of disproving that is to just take any of my initial conditions for any of my subsystems and “run time backwards” (i.e., let the time step Δt be negative) and you will calculate what past conditions led to these field values at t=0.

      Indeed, that is the actual assignment for which the professor specified the initial conditions (hint: think M. Fourier – you will find that incoming light rays happened to converge to give these field values).

      (End of part 3)
      (Cont. below)

    42. (Beginning of part 4)
      (Cont. from above)

      Now I assume that Andrei will say something along the lines of, “But, but, that's trivial, there is no real content there!” Yes, I have been saying it is trivial all along. But you do have to understand that Equations (5) and (6) are time evolution equations, not constraints on the initial conditions.

      And, I also suspect that Andrei will say something along the lines of, “But you do not even have any non-zero charges or currents!” Indeed, but that is easily fixed.

      Here are the professor's problems 3 and 4.

      For problem 3, you the student can specify whatever initial values you like for the E and B fields and for j and ρ, in both the spheres around the sun and Alpha Centauri and, indeed, throughout the universe, subject only to the constraint Equations (1) and (2).

      For problem 4, you leave everything just as you chose to specify for problem 3, except that in the sphere around Alpha Centauri, you add to the E field you already had the additional E field:

      ΔEx = V * (-y) * f(r)
      ΔEy = V * (x) * f(r)
      ΔEz = 0

      Since the divergence of ΔE is zero, it will not alter the left-hand side of Equation (1) and so, since we did not alter the right-hand side at all, Equation (1) will continue to hold.

      So, I have now taken arbitrarily complicated states for A and B and altered B but not A, and yet still obeyed the constraint equations.

      But of course that too will not satisfy Andrei.

      Andrei will want to actually add some charges to sphere B around Alpha Centauri, and, since Andrei remembers Coulomb's law, he is sure that this will alter the E fields near the sun, if ever so slightly.

      Nope: Coulomb's law is static; it relies on the fact that curl E is zero because ∂B/∂t is zero in a static situation. But Maxwell's equations are fully dynamic: a failure of Coulomb's law to give the E field because curl E is non-zero is merely a sign that the time evolution equation will kick in to produce a time-varying magnetic field.

      But... don't the lines of force have to spread out spherically from a charge, so that some of them will get all the way to the sun from Alpha Centauri? No, only in the static case.

      In fact, here is an E field that is non-zero only in the half-space x > 0, and that yet satisfies the constraint Equation (1) for a charge q at the origin:

      for x > 0,

      Ex = 3q/(2π ε0) * x^3/r^5
      Ey = 3q/(2π ε0) * x^2 * y/r^5
      Ez = 3q/(2π ε0) * x^2 * z/r^5

      while for x < 0, E = 0.

      Of course, the curl of this E field is non-zero, so it will correspond to a time-varying B field. But, it is elementary to confirm that it has divergence of zero, except at the origin.
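
      A quick SymPy check of that claim, for anyone who wants it (the overall factor 3q/(2π ε0) drops out of the divergence, so I set it to 1):

        import sympy as sp

        x, y, z = sp.symbols("x y z")
        r = sp.sqrt(x**2 + y**2 + z**2)

        Ex, Ey, Ez = x**3/r**5, x**2*y/r**5, x**2*z/r**5   # region x > 0
        div_E = sp.diff(Ex, x) + sp.diff(Ey, y) + sp.diff(Ez, z)
        print(sp.simplify(div_E))   # prints 0 (valid away from the origin)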

      Andrei does not like this crazy shape for the E field? Well, conceptually, just draw the field lines for the normal Coulomb solution while within the one light-year sphere around Alpha Centauri, but, after you leave that sphere, bend the field lines away from the sun so that they do not enter Region A.

      By the way, far from being a bizarre or outré idea, this is actually what happens if, say, an electron-positron pair is created near Alpha Centauri and the two then rapidly separate by, say, a few kilometers. Close in you have the Coulomb field, further out a dipole field, but out here near the sun it will actually be more than four years before any change at all occurs due to what happened at Alpha Centauri (speed-of-light limitation).

      Now, of course, after all the effort I have put into this, Andrei will still deny that I am correct. I think that maybe the people to whom this will be clear are those who have actually written computer code to model the time evolution of a system with constraints on the initial conditions. Having had to think this through for themselves, they should find it self-evident. Any competent physicist should be able to grasp it.

      Andrei, alas, has told us he is a chemist...

      Dave

    43. There is, I guess, a misunderstanding here that lies in an implicit assumption that was not stated explicitly enough.
      Dave evolves forward separate patches A and B, which is of course correct, but Andrei and, I guess, Sabine are thinking about the entire evolution of a world that is defined by the very initial conditions.

      Let’s get meta-physical:
      A world where only EM rules is deterministic and everything that ever happens is (pre-)determined (1).
      Within the time evolution of this deterministic world there simply is no place where one can arbitrarily choose to put a charge here or there. But of course, this restriction on later initial conditions does not automatically make it superdeterministic (2).

      Since nothing can later be freely and arbitrarily chosen, “cause and effect” also has no real meaning, since there is no agency, and I guess the same holds for counterfactuals.
      A purely classical deterministic world thus also has its problems.

      And I guess that's the reason why there is QM and a Born rule in the measurement. The clash between QM and the classical (now GR) is the dynamics. GR is the defender of (smooth) spacetime and the destroyer of superpositions (when the cat gets too fat). The measurement acts as a permanent phase transition, a process with a threshold that triggers reductions all the “time” and thus defines a fractal surface – our world.

      Back to reality:
      Reductionism works; therefore differential equations (DE) are so successful.
      But we need to be careful when putting stuff together again, i.e. integrating. Analytic solutions are rare; even a simple ordinary non-linear DE needs a process with a threshold to be integrated. (A threshold to determine Δt like in Runge-Kutta, and Δt needs to be non-zero, otherwise we would not come very far ;-)



      -----------------
      (1) There might be problems like:
      - how to choose the very initial conditions on the very Cauchy surface (does it exist at all?), so that later on there are no conflicts like over-constraining.
      - what is the time in the time evolution, whose time per se. (per se – very meta-physical wording ;-)
      (2) For superdeterminism you also need a model/theory/explanation that reproduces e.g. the distant correlations in a measurement as QM does. As Sabine said here: “... the best way is to stick with quantum mechanics to the extent possible ...”.

    44. Reimond wrote:
      >Dave evolves forward separate patches A and B which is of course correct, but Andrei and I guess Sabine are thinking about the entire evolution of a world that is defined by the very initial conditions.

      Well, not quite. I was not actually trying to use the time-evolution equations: I was merely pointing out that the time-evolution equations do not constrain the initial conditions in any way.

      Of course, I did need to point out that if anyone did want to know the state of the universe at any time in the future or the past, they could indeed use the time-evolution equations.

      The only things that constrain the initial conditions are the two divergence equations, and what I showed in infinite detail is what anyone familiar with vector analysis is already supposed to know: specifying only the divergence of a vector field is hardly sufficient to specify the field at all, and, indeed, specifying the divergence in a sphere centered on one point tells you nothing at all about the vector field in a disjoint sphere.

      You need the divergence plus something else, and that something else is just not there in the initial conditions for Maxwell's equations. Maxwell's equations are very "loose" in terms of initial conditions.

      I assume Sabine already knows all this. As to what Andrei was thinking, he was quite explicit, and indeed he correctly specified what he needed for his argument.

      This all started in a discussion about whether classical EM could produce Bell-type entanglement at a distance, and, for that, Andrei needed (and thought he had) some constraint between two space-like separated regions imposed by Maxwell's equations.

      I proved that such a constraint does not exist.

      Reimond also wrote:
      >Reductionism works; therefore differential equations (DE) are so successful.
      >But we need to be careful when putting stuff together again, i.e. integrating. Analytic solutions are rare; even a simple ordinary non-linear DE needs a process with a threshold to be integrated. (A threshold to determine Δt like in Runge-Kutta, and Δt needs to be non-zero, otherwise we would not come very far ;-)

      Well, when I actually do modeling, for obvious reasons I prefer Runge-Kutta to forward (!) Euler, but I did not mention that because it would have further complicated an already lengthy explanation. And the existence and convergence theorems are easier to prove by taking the Δt goes to zero limit for forward Euler.

      Reimond also wrote:
      >Since nothing can later on be freely, arbitrarily chosen, also “cause and effect” has no real meaning, since there is no agency and I guess same holds for counterfactuals.
      >A purely classical deterministic world thus also has its problems.

      Yeah, anyone who wants to posit a superdeterministic theory needs to specify very carefully what they actually mean by "repeating" a Bell-type experiment. It can never be truly repeated exactly, so what do they mean by "repeating" it? And since the entire evolution of the universe is fully determined in a superdeterministic theory, the initial conditions tell the whole story. Perhaps they need to carefully specify a special set of initial conditions? Or maybe they can appeal to some sort of ergodicity to prove that the Bell-like experiments work out in terms of the long-term statistics?

      In any case, Bell-like entanglement is a statistical concept, so somehow superdeterminists need to get statistics out of a deterministic theory.

      I was not trying to address those broader issues but merely address the specific way that Andrei thought he could get entanglement in classical EM. That cannot be done.

      All the best,

      Dave

    45. Reimond,

      My argument that proves classical EM to be a superdeterministic theory is this:

      "1. The polarization of an EM wave depends on the way the electron in the atom that emits those particles accelerates.

      2. The way the electron accelerates depends on the field configuration at the place where the atom is located. (Lorentz force)

      3. The field configuration at the place where the atom is located depends on the past detector state (Feynman's equations 21.1)

      4. From 1-3 it follows that the polarization of the entangled particles depends on the past state of the detector.

      5. EM is deterministic, so the past state of the detector determines its present "instantaneous" state.

      6. From 4,5 it follows that the hidden variable depends on the instantaneous state of the detector."

      As a side issue, it is not based on the static Coulomb equation, but on the general equations relating the position/velocity/acceleration of a charge to its field at some arbitrary location. The expressions (eq. 21.1) can be found in this Feynman lecture:

      "Solutions of Maxwell’s Equations with Currents and Charges" here:

      https://www.feynmanlectures.caltech.edu/II_21.html

      I agree that PhysicistDave has found a potential loophole in my argument. He is right, it is possible to conceive some source-free fields in distant locations that are independent of each other. I agree with his first example. The second describes a particle/antiparticle creation, a process that is not possible in classical EM (it implies a violation of the superposition principle), but this is not important since, indeed, a single example is enough.

      The question is now to what extent the scenarios described by PhysicistDave are expected to occur during a Bell test. It's clear that source-free fields such as those presented by PhysicistDave cannot appear/disappear from one experimental run to another. A Bell test that can actually be performed will be fully covered by my argument.

      So, while I thank Dave for the time and effort to present his examples, and while I accept that at least the first one is valid, I do not see how this actually disproves my argument. In order to do that, Dave should also present a reason why such source-free fields are expected to make a sudden appearance during Bell tests.

    46. “... in a superdeterministic theory, the initial conditions tell the whole story”
      This already holds in an (exclusively) deterministic theory - without super.

    47. Andrei wrote to me:
      >I agree that PhysicistDave has found a potential loophole in my argument. He is right, it is possible to conceive some source-free fields in distant locations that are independent of each other. I agree with his first example.

      Okay, Andrei, I want to give you credit for having the courage to admit that I proved that point!

      Andrei also wrote:
      >The second describes a particle/antiparticle creation, a process that is not possible in classical EM (it implies a violation of the superposition principle), but this is not important since, indeed, a single example is enough.

      Well, you can just have some negative and positive charge lying on top of each other and then pulled apart -- e.g., by an ambient E field. Same effect as pair creation, or at least as close as you can get in classical physics.

      Andrei also wrote:
      >The question is now to what extent the scenarios described by PhysicistDave are expected to occur during a Bell test.

      Well... you cannot actually do a Bell test in classical EM -- no stable atoms, for example. One of the reasons for QM (e.g., the Bohr model) was to try to understand how atoms could be stable when classical EM forbade that.

      Andrei also wrote:
      >It's clear that source-free fields such as those presented by PhysicistDave cannot appear/disappear from one experimental run to another.

      Oh, yeah, they can: they are basically just EM radiation zipping in from elsewhere.

      Andrei also wrote:
      > In order to do that, Dave should also present a reason why such source-free fields are expected to make a sudden appearance during Bell tests.

      Well, why not? Life happens.

      Seriously, I do not think classical EM is at all relevant to Bell tests simply because you cannot do any sorts of tests without atoms to make up polarizers, Stern-Gerlach apparatuses, etc.

      And, even if you could carry out Bell-type experiments, my argument should suffice to show you that you could imagine the features "over here" that represent your apparatus and the system to be measured having been different, while leaving unchanged the features "over there" that you are going to be measuring.

      If you are worried about the Bell-type measurements being sequentially repeated and therefore affecting future measurements, just carry them out simultaneously at very widely spatially separated experimental sites (here, Alpha Centauri, Sirius, Aldebaran, etc.).

      The truth is that, if you have wide separations for one-shot experimental runs, then in classical, deterministic EM, you do not get any particular prediction at all: it all just depends on the initial conditions of the universe long ago.

      I suppose that you might argue that that is all that is happening in any superdeterministic theory: the Bell inequality is violated simply because you choose very special initial conditions guaranteed to violate the Bell inequality.

      But, I think I shall leave it to you and Sabine to debate whether that is "really" what superdeterminism means!

      In any case, classical EM is not going to automatically give you a violation of Bell's inequality, though perhaps you can force it to if you carefully choose appropriate initial conditions.

      Assuming you could actually run Bell experiments without atoms, which I seriously doubt.

      Dave

    48. PhysicistDave,

      "Well... you cannot actually do a Bell test in classical EM"

      This is not my point. The idea is that in order to falsify a theory you need both the theoretical prediction of the theory and the experimental data. I agree that classical EM would not violate the inequality if fields of the type you propose are present. But we still need to have an actual Bell test performed in those conditions. It might be the case that the inequalities would not be violated experimentally either.

      "One of the reasons for QM (e.g., the Bohr model) was to try to understand how atoms could be stable when classical EM forbade that."

      I think that the evidence is much weaker than usually assumed. There is no rock-solid proof that classical EM cannot explain atoms. We know that a particular model (electron and nucleus are point charges, they have no magnetic moment, there are no external fields) fails.

      A Polish physicist, Michał Gryziński, published a paper presenting a classical atomic model:

      "Radially Oscillating Electron-the Basis of the Classical Model of the Atom"

      Physical Review Letters 14 (26): 1059.

      Unfortunately I cannot access the paper, but it appears that it is the electron's spin that provides the solution.

      There is also the SED (stochastic electrodynamics) approach, where the electron recovers the energy lost as radiation from the surrounding EM fields.

      "Oh, yeah, they can: they are basically just EM radiation zipping in from elsewhere."

      I think it is an absolute requirement for your example to work that the fields are source-free (basically generated at the Big-Bang). If the gamma rays are produced by some accelerating charges, those charges will produce effects at both A and B subsystems, so adding a charge at B in this way would imply a change of the gamma ray source (say it's C), and, by my argument, that will also imply a change at A.

      "And, even if you could carry out Bell-type experiments, my argument should suffice to show you that you could imagine the features "over here" that represent your apparatus and the system to be measured having been different, while leaving the features "over there" that you are going to be measuring."

      Yes, that is true, but it is still the case that we need to have experimental data from such a regime. It might be the case that either QM would not work under such conditions or that, if those fields were taken into account, QM's prediction would be different as well.

      "If you are worried about the Bell-type measurements being sequentially repeated and therefore affecting future measurements, just carry then out simultaneously at very widely spatially separated experimental sites (here, Alpha Centauri, Sirius, Aldebaran, etc.)."

      No, I do not assume anything more than I presented in my argument, no "memory" effect, etc.

      "Bell inequality is violated simply because you choose very special initial conditions guaranteed to violate the Bell inequality."

      Not at all. My argument now is that, if no source-free fields are present, it cannot be shown that classical EM respects the Bell inequality; and if such fields are present, we have no experimental data. In the end one cannot use Bell to rule out classical EM.

    49. Andrei wrote to me:
      >This is not my point. The idea is that in order to falsify a theory you need both the theoretical prediction of the theory and the experimental data. I agree that classical EM would not violate the inequality if fields of the type you propose are present. But we still need to have an actual Bell test performed in those conditions. It might be the case that the inequalities would not be violated experimentally either.

      Well, the real world is not a classical EM world! So I do not know how you could do the sort of tests you are thinking of.

      One key point that I did not mention in the previous post was that Bell-inequality tests make use of individual photons or (in principle) spin-1/2 particles.

      There are no individual photons in classical EM, and, of course, the idea of spin that is h-bar/2 makes no sense at all in classical physics (h-bar is zero). So again, tests of the Bell inequality make no sense in classical EM.

      More broadly, the Bell inequality refers to experiments in which the answer is yes/no: the single photon gets through the polarizer or it does not (or the spin 1/2 particle is deflected up or down by the Stern-Gerlach apparatus). Again, makes no sense in classical physics.
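
      As a toy illustration of those yes/no statistics (a minimal sketch; the deterministic local model and the angles are just standard textbook choices, nothing from this thread):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def lhv_outcome(angle, lam):
          # deterministic +/-1 outcome fixed by the shared hidden variable lam
          return np.sign(np.cos(angle - lam))

      def lhv_corr(a, b, n=200_000):
          lam = rng.uniform(0, 2 * np.pi, n)  # hidden variable, set at the source
          return np.mean(lhv_outcome(a, lam) * lhv_outcome(b, lam))

      def chsh(E):
          # CHSH combination at the standard angle choices
          a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
          return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

      print("local model |S|:", abs(chsh(lhv_corr)))  # ~2 (the local bound), up to Monte Carlo noise
      print("quantum |S|    :", abs(chsh(lambda a, b: -np.cos(a - b))))  # 2*sqrt(2)
      ```

      The deterministic local model can at best saturate |S| ≤ 2, while the singlet correlation E(a,b) = -cos(a-b) reaches 2√2; the point here is that the comparison presupposes discrete ±1 outcomes in the first place.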

      Andrei also wrote:
      >[Dave] "One of the reasons for QM (e.g., the Bohr model) was to try to understand how atoms could be stable when classical EM forbade that."

      >[Andrei]I think that the evidence is much weaker than usually assumed. There is no rock-solid proof that classical EM cannot explain atoms.

      Well, most physicists disagree. In classical EM, the electron radiates continuously, and very, very rapidly spirals into the nucleus.

      The result is elementary and pretty unassailable.
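
      For the record, the standard textbook estimate: combining the Larmor radiation formula with a circular classical orbit, the time for an electron to spiral from the Bohr radius a_0 into the nucleus is

      \[
      t_{\mathrm{fall}} \;=\; \frac{a_0^{\,3}}{4\,r_e^{\,2}\,c} \;\approx\; 1.6\times10^{-11}\ \mathrm{s},
      \]

      with r_e ≈ 2.8 × 10⁻¹⁵ m the classical electron radius: a classical hydrogen atom collapses in tens of picoseconds.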

      There is a good reason Gryzinski's ideas are not taken seriously: according to classical EM (which is what you are resting your case on!), radially accelerated electrons would radiate.

      Andrei also wrote:
      >I think it is an absolute requirement for your example to work that the fields are source-free (basically generated at the Big-Bang). If the gamma rays are produced by some accelerating charges, those charges will produce effects at both A and B subsystems, so adding a charge at B in this way would imply a change of the gamma ray source (say it's C), and, by my argument, that will also imply a change at A.

      No, not at all. It is trivial to have those fields produced by sources at some time in the past. I hope I do not need to write a lengthy set of posts to prove this, and that you might now consider accepting that I actually do know a lot more than you on this subject!

      What I keep trying to tell you is that Maxwell's equations are much “looser” than you realize. You can produce all sorts of weird and marvelous situations that may seem counter-intuitive to students who were trained to think of static E and B fields as the general model for E and B fields: unfortunately, the intro “weeder” courses do give that false impression, and few students, except for (some) physics majors, ever get beyond that.

      Andrei also wrote:
      >...it is still the case that we need to have experimental data from such a regime. It might be the case that either QM would not work under such conditions or that, if those fields were taken into account, QM's prediction would be different as well.

      Well, as I explained above, I do not think that it is logically possible to test the Bell inequality in a universe governed solely by classical EM simply because:

      A) Bell-inequality tests assume features (such as individual photons) that do not exist in classical EM.

      B) We have no access to a universe in which classical EM is true.

      Anyway, my concern was to point out your error in your A' = A / B' ≠ B argument, which you agree I have done. I don't claim to be an expert on superdeterminism, though I think I have just pointed out insurmountable problems you will have trying to show violations of Bell's inequality in classical EM.

      Dave

    50. Weird question about A and B: why so big (e.g. sphere 2 light-years in diameter)?

      Couldn't the gedankenexperiment be done with spheres 1 m in diameter? 0.0001 m? Or, perhaps, 1 Mpc (megaparsec)?

    51. JeanTate wrote to me:
      >Weird question about A and B: why so big (e.g. sphere 2 light-years in diameter)?

      Hi, Jean!

      The answer is partly my physicist's sense of humor (might as well make it big!), partly to choose some definite number to make the matter as concrete as possible, partly to emphasize that this is not just a tiny little local effect, and mainly to emphasize that it would actually be a whole year before we here deep in the Solar System would see any EM effect at all from anything outside the sphere (and several years before we would see anything from the Alpha Centauri sphere).

      I did want to make clear that I was not just guaranteeing things here to be independent of things there for only an instant.

      Jean also asked:
      >Couldn't the gedankenexperiment be done with spheres 1 m in diameter? 0.0001 m? Or, perhaps, 1 Mpc (megaparsec)?

      Sure, except, since I needed the spheres not to overlap, if I had made each sphere a Megaparsec, I would have needed the other sphere a lot further away than Alpha Centauri!

      I hope it is clear that there is nothing novel at all in what I was claiming: almost all physicists (and, I suppose, astronomers) would take for granted what I proved: things over there cannot affect things over here until after the speed-of-light delay. And, indeed, even after the speed-of-light delay, it is only when certain "reasonable" assumptions about EM emitters and absorbers between ourselves and Alpha Centauri, about the density of ambient radiation, etc. are true that we can “see” Alpha Centauri.

      For example, if you think about using EM effects to find out what was happening four light years away well before the "time of last scattering"... well, it could not be done.

      My main goal was just to prove that Andrei's claim that you could not have A' = A / B' ≠ B was trivially and obviously wrong.

      Of course, obviously, in the real world, there will usually be some connection between EM effects here and on Alpha Centauri (i.e., we can see Alpha Centauri), but that is not a direct consequence of Maxwell's equations alone but also of the current state of the universe here and now (e.g., interstellar space nowadays is fairly transparent). And, so, Andrei could not draw his conclusions from Maxwell's equations alone.

      Indeed, as I said, I am pretty sure Andrei cannot get what he wants in a world governed solely by classical EM, since you cannot do the standard Bell-inequality tests without individual photons, which are not present in classical EM.

      So, what I proved was hardly profound, just pointing out what we all know, but Andrei wanted it spelled out in detail.

      All the best,

      Dave

    52. PhysicistDave,

      "No, not at all. It is trivial to have those fields produced by sources at some time in the past. I hope I do not need to write a lengthy set of posts to proves this and that you might now consider accepting that I actually do know a lot more than you on this subject!"

      I agree that you know much more than me, but I still think that producing those fields in the past (from field sources, charges) would not work. Do you think you can show that A' = A / B' ≠ B if all the fields in the universe are associated with field sources/charges? In this case the 21.1 equations here:

      https://www.feynmanlectures.caltech.edu/II_21.html

      seem to imply the contrary.

      According to Feynman:

      "If a charge moves in an arbitrary way, the electric field we would find now at some point depends only on the position and motion of the charge not now, but at an earlier time"

      So, in order to change E at B you need to change the "position and motion of the charge" (presumably located at A) in the past. But then you cannot get A=A' because the theory is deterministic.

      Sorry, I do not want to abuse your time; if you can point me to a solution of this conundrum in the literature, I'll do my best to research it there.

      "There is a good reason Gryzinski's ideas are not taken seriously: according to classical EM (which is what you are resting your case on!), radially accelerated electrons would radiate."

      This is what I think as well, but I find it strange that the reviewers at PHYSICAL REVIEW LETTERS could not spot such an error. Gryzinski published more papers related to his model there, some of them with hundreds of citations, yet I could not find a paper debunking his theory. It is also the case that his model seems to give reasonably good predictions, as shown in papers by different authors, like:

      Classical electron ionization cross sections
      A E Kingston
      Proceedings of the Physical Society, Volume 89, Number 1

      "The cross section for electron ionization of the first five excited states of hydrogen are calculated using Gryzinski's classical theory. These cross sections are compared with cross sections calculated using the first Born approximation. The classical cross sections are in good agreement with the quantal cross sections except at very large impact energies where the classical cross sections do not have the correct fall-off. If the correct fall-off is added to the classical cross sections, these cross sections differ only slightly from the Born cross sections at high impact energies."

      Or this one:

      Gryziński Electron-Impact Ionization Cross-Section Computations for the Alkali Metals
      Robert H. McFarland
      Phys. Rev. 139, A40 – Published 5 July 1965

      "The absolute cross sections for ionization by electron impact have been calculated by classical theory for the alkaline metals. The results are in agreement with earlier experimental measurements."

      Or this one:

      Single and double ionization of neon 2s- and 2p-subshells by proton and electron impact
      M. Eckhardt & K. -H. Schartner
      Zeitschrift für Physik A: Atoms and Nuclei, volume 312, pages 321–328 (1983)

      "Absolute single and double ionization cross sections of neon 2s- and 2p-subshells for proton (40–900 keV) and electron impact (0.2–10 keV) have been measured using photon spectroscopy in the spectral range of the vacuum ultraviolet. Cross sections for double ionization decrease more rapidly with increasing impact energy than cross sections for single ionization. No definite asymptotic energy dependence of a Bethe-Fano-plot could be found for double ionization in contrast to single ionization. The experimental results are compared with theoretical predictions of the shake-off model and Gryzinski's classical binary encounter theory."

      And there are more.

    53. PhysicistDave,

      I found a paper apparently dealing with the stability of Gryzinski's classical atom:

      Spin-dynamical theory of the wave-corpuscular duality
      International Journal of Theoretical Physics, volume 26, pages 967–980 (1987)

      "The assumption that translations of the electron are accompanied by spin precession enables a deterministic description of electron diffraction and quantization of atomic systems. It is shown that the electromagnetic field of the precessing electron is responsible for modulation of the beam intensity of an electron scattered from a system of charges and for mechanical stability of the orbital motion of electrons in bound states."

    54. Andrei wrote to me:
      >This is what I think as well, but I find it strange that the reviewers at PHYSICAL REVIEW LETTERS could not spot such an error.

      Unfortunately, this is standard operating procedure for reviewers and has been for at least forty years.

      For example, back in 1981 when I was finishing my doctorate at Stanford, I ran across a paper in Phys. Rev. Lett. that claimed to have proven an important point about confinement in QCD. The paper had an elementary algebra error: the authors had divided by a quantity that could be proven to be equal to zero. I wrote up a note to PRL: they did not bother to correct the error.

      I talked it over with a couple of profs at Stanford. My thesis advisor told me not to bother to read PRL because it was filled with garbage. Another prof told me that no one took the paper seriously, so it did not matter: he proved to be correct, since what should have been an important result (if it had been true!) simply sank into oblivion.

      So, no, reviewers do not check papers for correctness. From public comments by reviewers that I have seen, reviewers judge whether papers are about a subject of genuine interest, whether they are too long, etc. But, contrary to what you might think, for theory papers the reviewers do not generally check the math.

      Sad, I know (though remember that reviewers are generally unpaid).

      There is a huge amount of utter nonsense in the published literature (not to mention on the arXiv!). Congratulations: you have found more of it!

      Anyway, Gryzinski’s approach is so obviously nonsensical that it belongs up there with Velikovskian celestial mechanics and Young Earth Creationism: his charges would radiate. But they don’t.

      Eppur si muove ("and yet it moves").

      Andrei also wrote:
      > I agree that you know much more than me, but I still think that producing those fields in the past (from field sources, charges) would not work.

      Well, you are wrong: this is a standard result in EM theory. If you insist, I will offer you a crude outline of the proof, but the details involve replacing E and B by the scalar and vector potentials, imposing the Lorentz condition, Fourier transforming, and then finding the Green’s functions in k-space. Are you familiar with everything I just mentioned? If you are, I can summarize the proof for you. If not... well, I cannot very well teach you several hundred pages of EM theory here in Sabine’s comments section!
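
      For onlookers who do have that background, the skeleton of the construction (a sketch only, suppressing the pole prescription that selects the retarded solution): in Lorenz gauge Maxwell's equations collapse to

      \[
      \Box A^{\mu} = \mu_0 J^{\mu}
      \quad\Rightarrow\quad
      \tilde{A}^{\mu}(\omega,\mathbf{k}) = \frac{\mu_0\,\tilde{J}^{\mu}(\omega,\mathbf{k})}{\mathbf{k}^2 - \omega^2/c^2}
      \quad\Rightarrow\quad
      A^{\mu}(t,\mathbf{x}) = \frac{\mu_0}{4\pi}\int \! d^3x'\,
      \frac{J^{\mu}\!\left(t - |\mathbf{x}-\mathbf{x}'|/c,\ \mathbf{x}'\right)}{|\mathbf{x}-\mathbf{x}'|},
      \]

      to which one may add any solution of the homogeneous (source-free) equation; that homogeneous freedom, plus the freedom to arrange past sources, is what the argument exploits.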

      You misunderstand the quote from Feynman: he is just talking about speed of light delay (you see the charge as it was, not as it is now, due to speed of light delay): if you read the whole sentence, he makes that clear.

      You wrote:
      >So, in order to change E at B you need to change the "position and motion of the charge" (presumably located at A) in the past. But then you cannot get A=A' because the theory is deterministic.

      Andrei, I proved my result from Maxwell’s equations: if Feynman disagreed, it would just prove that he made a mistake – Maxwell trumps Feynman. Anyway, Feynman did not disagree.

      I suppose your mistake is in thinking that somehow the field at B must be controlled by a charge in A in the past (hence your “presumably at A” comment). That is obviously not true: lots and lots of charges can influence the current state of B, not just the ones in A’s past.

      I cannot point you to some place that corrects your misconception because it is not even clear why you think there is a problem here: lots of stuff in the past affects the current state of A and B, and I proved that it is mathematically possible to do so in a way that brings about A' = A / B' ≠ B. For some reason, you seem to think you see some problem with this, but it is not clear what this supposed problem is.

      I proved rigorously from the differential equations that you can indeed have a state with A' = A / B' ≠ B. I realize that you have lots and lots of confusion about how EM works: I am quite certain I will never cure your confusion. But your confusion does not alter the proof.

      Dave

    55. Dave wrote: "My thesis advisor told me not to bother to read PRL because it was filled with garbage" and "There is a huge amount of utter nonsense in the published literature (not to mention on the arXiv!)." It is interesting to read what Max Planck had to say regarding journal submissions to Annalen der Physik: "Even if a paper was strongly fanciful and arbitrarily carried through it could still have significance if it treated some puzzling phenomena in a stimulating way." Also, "he seldom rejected a paper outright" and "physicists needed freedom, for otherwise they would never find new and better ways." Planck noted "it is impossible in advance to know if an idea will prove fruitful for later work, and so it is best to be tolerant toward unfamiliar work." Rejection rate during Planck's tenure as advisor for Annalen der Physik was between 5 and 10 % (Volume 2, page 311, Intellectual Mastery of Nature, by Jungnickel and McCormmach).

    56. PhysicistDave,

      "Well, you are wrong: this is a standard result in EM theory. If you insist, I will offer you a crude outline of the proof, but the details involve replacing E and B by the scalar and vector potentials, imposing the Lorentz condition, Fourier transforming, and then finding the Green’s functions in k-space. Are you familiar with everything I just mentioned?"

      I am not familiar with Green's functions. It might be better if I reformulate my argument in the light of what you have proven. Here it is:

      Premise 1: All fields in the universe are associated with charges (no source-free fields exist) - I do not know if this is true or not, but let's assume it is true. I will propose a different argument in the opposite case.

      Premise 2: The fields at some arbitrary location are given by the equations 21.1 in Feynman's lecture. For multiple charges we just add them according to the superposition principle.

      Premise 3: Space is continuous. This implies that in any non-zero volume (say the volume occupied by the source of the entangled particles) there is an infinite number of points.

      Premise 4: Charge is quantized (experimental fact).

      Premise 5: The number of charges in the universe is finite.

      6. From 1, 2, 3, 4 it follows that the field configuration in any volume of space is equivalent to an infinite number of equations, so the system is overdetermined. This implies that the solution is unique.

      Conclusion: The field configuration at the source location completely determines the state of the universe (charge positions/ velocities).

      Hopefully this argument is clearer and you can more easily spot the error, without overly complex calculations.

      I need to stress the importance of charge not being a continuous quantity. If it is, it's trivial to find examples where different charge distributions determine the same field (a charged sphere has the same field as a point charge located at the center of the sphere).
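
      (To spell that example out: by Gauss's law, outside a spherically symmetric charge Q of radius R the field is

      \[
      \mathbf{E}(r) \;=\; \frac{Q}{4\pi\epsilon_0\,r^2}\,\hat{\mathbf{r}}, \qquad r > R,
      \]

      identical to that of a point charge Q at the center, so the exterior field cannot distinguish the two distributions.)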

      If the complex proof you mention still applies under the above restrictions (quantized charge, no source-free fields) please point me to that material. I'll do my best to understand it.

      About Gryzinski, I am willing to accept that a lot of nonsensical papers were published in PRL. I find it harder to accept how different authors were able to apply his model and get results that are in agreement with experiments. Still, given the fact that I cannot access the paper, I have to drop this point. There is still the SED approach (the electron gets energy from the external fields), which blocks the so-called falsification of classical EM by the atom's stability.

  21. The measurement problem again! I thought about taking a thermodynamic view, to see if that might make sense. Consider an electron, unobserved, and pretend it is in equilibrium or so close to it that it obeys the linear dynamics of the wave equation. It encounters a detector, say a phosphor screen. We regard the measurement as a forcing influence, driving the electron out of equilibrium into a state so far from equilibrium that its state description must be non-linear. Reverse time at this point and we see an electron far from thermodynamic equilibrium evolving in a non-linear way (and this sort of behavior can be quite complex and unpredictable) into an equilibrium state describable by a wave function. The "forcing" measurement now looks like energy dissipation. What about the electron after measurement? It is no longer being observed, the measurement was an event in the "past". Back in equilibrium? Linear dynamics again? NOW reverse time and you see a quiescent electron in thermodynamic equilibrium suddenly entering a non-equilibrium state, dissipating energy (where did the energy come from?) and returning to equilibrium. The problem seems to be in the tiny interval composing the measurement. If the arrow of time is ambiguous, not clearly defined, in that tiny interval, the time reversed picture could be symmetric. I don't know whether it helps to examine the problem in this way-- it was just an idea.

  22. I am with Asher Peres who writes: "Quantum theory does not provide a universal description of nature. It merely is a set of rules for computing the probabilities of occurrence of definite outcomes, in tests that follow definite preparations. Anyone who wants to see more than that in quantum theory does so at his own risk." (page 446, Relativistic Measurements, in Fundamental problems in Quantum Theory, 1995).

    Replies
    1. Gary Allen wrote:

      >I am with Asher Peres who writes: "Quantum theory does not provide a universal description of nature. It merely is a set of rules for computing the probabilities of occurrence of definite outcomes, in tests that follow definite preparations. Anyone who wants to see more than that in quantum theory does so at his own risk."

      Well, in practice, that is pretty much what all physicists do: In Dave Mermin's immortal phrase, "Shut up and calculate!"

      The problem though is that the standard textbook rules make a distinction between the physical world subject to the quantum rules and us, the observers, who are not part of the quantum world governed by the continuous, unitary transformation brought about by Schrödinger's equation. We break the purely unitary quantum behavior, forcing a system (randomly) into one of a set of possible eigenstates, etc.

      No randomness in the Schrödinger equation: only randomness when we god-like beings intervene in the physical world.

      In a sense, we humans are gods who stand outside the physical (quantum) world.

      Now, of course there have been lots of attempts to say that "the observer" does not need to be a human being: it can just be some apparatus that records the result.

      But that does not work: in principle, the apparatus itself can be described by QM, and this can actually be shown to make a difference in practice -- both by QM techniques such as "quantum erasure" and the fact that we actually do use QM to describe many of the devices we really do use to make measurements (e.g., semiconductor electronics: the behavior of electrons and holes in semiconductors is explained by QM).

      Maybe that is the end of it. Maybe we humans are just different sorts of things (ectoplasm anyone?) from the gross, mundane physical world.

      But natural science for over four centuries -- from Harvey's discovery of the circulation of the blood to Darwin's discovery that humans evolved to the breaking of the genetic code -- has placed us humans as part of the natural order.

      And so a lot of us think that physicists should not accept the QM textbook approach that puts us human observers outside of the material world.

      I suppose you could say our motivation is (anti)-theological. We distrust a scientific theory that places us outside of nature.

      And hence MWI, GRW, Bohmian mechanics, and all the rest -- none of which has garnered general support among physicists (usually for good technical reasons).

      But it is also no longer the case that the overwhelming majority of physicists are satisfied with textbook QM.

      And I think we are right to be wary of a physical theory that seems to not treat humans as part of the natural world. Can you see the reason for our discomfort?

      All the best,

      Dave

    2. Gary,

      By the way, here is an essay from three years ago by my old professor (and Nobel laureate) Steve Weinberg. The takeaway section:

      >”It seems to me that the trouble with this approach is not only that it gives up on an ancient aim of science: to say what is really going on out there. It is a surrender of a particularly unfortunate kind. In the instrumentalist approach, we have to assume, as fundamental laws of nature, the rules (such as the Born rule I mentioned earlier) for using the wave function to calculate the probabilities of various results when humans make measurements. Thus humans are brought into the laws of nature at the most fundamental level. According to Eugene Wigner, a pioneer of quantum mechanics, “it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.”

      >”Thus the instrumentalist approach turns its back on a vision that became possible after Darwin, of a world governed by impersonal physical laws that control human behavior along with everything else. It is not that we object to thinking about humans. Rather, we want to understand the relation of humans to nature, not just assuming the character of this relation by incorporating it in what we suppose are nature’s fundamental laws, but rather by deduction from laws that make no explicit reference to humans. We may in the end have to give up this goal, but I think not yet.”

      I hope you will agree that he and I are making much the same point (you can judge who is more eloquent!). I will say that I have been bothered by all this for far longer than Steve has been; however, his comments in recent years (I believe Sabine's book first made me aware of his discomfort with QM) have helped sharpen my own thinking.

      Now if I were a Hegelian, I'd start prattling on about how textbook QM is alienating us from our natural being as part of the natural order. But, since I'm not, I'll leave it at that.

      Dave

    3. Sidney Coleman gave a lovely lecture, Quantum Mechanics in Your Face. I was thrilled to find it on Youtube (1994, APS). His lecture should be required viewing for all. World Scientific will publish the lecture in a book, Theoretical Physics in Your Face: Selected Correspondence of Sidney Coleman (June 2020). Steven Weinberg begins his Volume One, Quantum Theory of Fields, with these words: "First, some good news, Quantum Field Theory is based on the same quantum mechanics that was invented by Schrodinger, Heisenberg, Pauli, Born and others in 1925-1926." In Steven Weinberg's essay he writes "perhaps the problem is merely one of words."
      With that phrase, I concur.

    4. Gary Alan,

      Sidney's arguments for many-worlds, both for his "Definiteness operator" and for his analysis of asymptotic probability, fail quite dramatically and for very obvious reasons: in both cases, at the end of the argument he appeals to standard measurement postulates from textbook quantum mechanics!

      But the whole idea of MWI was to be rid of the textbook measurement postulates and rely on the Schrödinger equation alone: why bother to add on to all that the additional (and obviously speculative) hypothesis of many-worlds if we still need the textbook measurement assumptions?

      It would be as if we insisted on reviving the ether in relativity.

      "Sire, I have no need of that hypothesis."

      There is also a technical problem with his "Definiteness operator." He wants the Definiteness operator to have an eigenvalue of 1 for the state |+1> and also for the state |-1>.

      But then, it will also have an eigenvalue of 1 for the state 1/sqrt(2) ( |+1> + |-1> ).
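
      Spelled out, this is just linearity:

      \[
      D\,|{+}1\rangle = |{+}1\rangle,\quad D\,|{-}1\rangle = |{-}1\rangle
      \;\;\Longrightarrow\;\;
      D\left(\alpha\,|{+}1\rangle + \beta\,|{-}1\rangle\right) = \alpha\,|{+}1\rangle + \beta\,|{-}1\rangle,
      \]

      so every superposition is an eigenstate with eigenvalue 1 as well.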

      I.e., it cannot explain why we perceive the world as being in certain states rather than linear combinations of those states. But this is the whole point of the Schrödinger cat paradox, the Wigner friend paradox, etc.

      I think Sidney also misunderstood what Bell thought he had proved. If you read through Bell's Speakable and Unspeakable in Quantum Mechanics, Bell thought that his inequality applied to any system, classical or not, in which influences do not travel faster than light.

      Since QM violates the inequality, in QM influences must travel faster than light. QED

      Sidney gave no argument at all against this, just saying that the experiments proved that the world is not classical. But, if you read Bell, Bell most assuredly did not assume the world is classical in deriving the inequality, but only that influences do not travel faster than light!

      There is an interesting sociological observation here: Sidney's lecture is over a quarter century old.

      For over sixty years now, physicists have been arguing over MWI -- there have been two longstanding and seemingly insurmountable problems, the ones Sidney tried to address, the "preferred-basis" problem and the "probability-measure" problem.

      Periodically, someone claims to have solved one or both of these problems, proponents of MWI get enthusiastic, and then a careful look shows that, no, the solutions do not work.

      And then a few years later the cycle repeats.

      I have given reasons that I think stand on their own to show that Sidney made serious errors.

      But, putting me aside, Sidney was a brilliant and highly respected physicist. If he had resolved the problems of MWI, word would have gotten out. People would not still be arguing about it.

      But they are.

      So, even if you reject my arguments, you should ask yourself this: if Sidney was right in 1994, why do so few physicists today cite him as having solved the problems of MWI?

      Dave

    5. Thanks for your thoughtful reply, Dave. As usual, I enjoyed reading your comments. Sidney Coleman's lecture, to my mind, was eventful in re-awakening interest, at the time, for the entire debate. Foundations of Physics has just published: Nonlocality versus Modified Realism (by Zwirn, arXiv 1812.06451). Here is the abstract: "A large number of physicists now admit that quantum mechanics is a non local theory. The EPR argument and the many experiments (including recent loophole free tests) showing the violation of Bell's inequalities seem to have confirmed convincingly that quantum mechanics cannot be local. Nevertheless, this conclusion can only be drawn inside a standard realist framework assuming an ontic interpretation of the wave function and viewing the collapse of the wave function as a real change in the physical state of the system. We show that this standpoint is not mandatory and that if the collapse is not considered an actual physical change, it is possible to recover locality."
      It is an interesting paper.


    6. Gary Alan quoted from a recent abstract:

      >"A large number of physicists now admit that quantum mechanics is a non local theory. The EPR argument and the many experiments (including recent loophole free tests) showing the violation of Bell's inequalities seem to have confirmed convincingly that quantum mechanics cannot be local."

      Well... in all honesty, I sometimes feel as if I am in a quantum superposition between thinking "Of course, it is non-local!" vs. "Nah, there has to be some subtle way out of this apparent non-locality."

      Which is why I am not in the heat of the battle over "super-determinism": on the one hand, I have trouble seeing how it can work... on the other hand, maybe Sabine is onto something here.

      The key problem is that non-locality should obviously violate relativity.

      And, yet, the no-signalling theorem in QM and quantum field theory assures us that no detectable violations of relativity can occur.

      So, how can there be a violation of relativity that can never, in principle, be detected?

      Incidentally, it is rarely remarked that something like this is already built into quantum field theory: it is possible for a physical effect to occur between two points that are space-like separated (i.e., a signal would have to go faster than light). In technical terms, the vacuum expectation value of φ*(x) φ(y) can be non-zero even when a signal from x to y would go faster than light.

      Anyone who seriously studied QFT saw this at some point, but people rarely think about it.
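
      Quantitatively, for a free scalar field of mass m the standard result (see, e.g., the causality discussion in Peskin and Schroeder) is that at spacelike separation r the amplitude is non-zero but exponentially small:

      \[
      \langle 0|\,\phi(x)\,\phi(y)\,|0\rangle \;=\; \frac{m}{4\pi^2 r}\,K_1(mr) \;\sim\; e^{-mr}
      \qquad (mr \gg 1),
      \]

      with K_1 a modified Bessel function.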

      So, how is this possible?

      Well, in one frame of reference, a particle goes from point x to point y (where y is later in time than x), but in another frame of reference (where x is later in time than y), an anti-particle goes from point y to point x. All this is handled with the "time-ordering" prescription in Feynman diagrams, and, of course, it only works if you have anti-particles.

      This is also tied in to the strange properties of fermions (basically the Pauli exclusion principle).

      The student is supposed to see for himself or herself how this all works, though I don't think that most do.

      One book that actually does spell it out, rather than hoping the student will put it all together, is Tom Banks' Modern Quantum Field Theory: A Concise Introduction, Section 1.2. In Banks' words (p. 5),
      >"Our formula for the emission/absorption amplitude is thus covariant, but it poses the following paradox: it is non-zero when the separation between the emission and absorption points is space-like. The causal order of two space-like separated points is not Lorentz-invariant (Problem 2.1), so this is a real problem.

      >"The only known solution to this problem is to impose a new physical postulate: every emission source could equally well be an absorption source (and vice versa)."

      Banks goes on to explain how this requires anti-particles (unless, like photons, a particle is its own anti-particle).

      Anyway, I had figured this out for myself many years ago, as I assume many students did, but Banks' book is the first time I have seen it explained clearly and explicitly early in the book.

      Let me make clear that this feature of QFT, which is intimately tied in to the existence of anti-particles, the spin-statistics theorem, etc., does not explain the entanglement issues ("Bell non-locality") connected to Bell's theorem.

      And yet I have this nagging feeling that Nature is trying to tell us something, that QFT is telling us that past and future and causality are all a bit more intricate than we realize, and if only we would think this through carefully...

      Well, I haven't been able to do anything with the idea!

      I will look at the paper. Thanks for the reference.

      All the best,

      Dave

      The extent to which quantum mechanics is universal is a measure of how quantum bits are conserved. If quantum entropy S = -k Tr(ρ log ρ) is fundamentally constant, then QM has some level of universality to it. I leave it as a QM exercise to show that ρ = |ψ⟩⟨ψ| transformed or evolved by a unitary operator will keep entropy invariant. This may still be a local principle bounded by the cosmological horizon or limits on observability in the universe, but conservation of qubits may be the firmest anchor we can ever put physics and cosmology on.
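
      A quick numerical check of that exercise (an illustrative sketch; it uses a random mixed state rather than |ψ⟩⟨ψ| so that the entropy is non-zero, and sets k = 1):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def random_density_matrix(d):
          # rho = A A† / Tr(A A†) is Hermitian, positive, and has unit trace
          a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
          rho = a @ a.conj().T
          return rho / np.trace(rho).real

      def random_unitary(d):
          # QR decomposition of a random complex matrix gives a unitary Q
          q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
          return q

      def von_neumann_entropy(rho):
          # S = -Tr(rho log rho), computed from the eigenvalues of rho
          p = np.linalg.eigvalsh(rho)
          p = p[p > 1e-12]
          return float(-np.sum(p * np.log(p)))

      rho = random_density_matrix(4)
      u = random_unitary(4)
      print(von_neumann_entropy(rho))                   # some value S
      print(von_neumann_entropy(u @ rho @ u.conj().T))  # the same S
      ```

      Unitary conjugation ρ → UρU† leaves the eigenvalues of ρ, and hence the entropy, unchanged.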

      For N quantum bits the number of possible entanglements is given by the integer partition function. For N large this is approximated by the Hardy-Ramanujan formula. This also describes the density of string states and the number of statistical partitions of states on a black hole event horizon. So, this motivates a physical hypothesis GRAVITY = QUANTUM, and even if there is a locality in knowing how qubits are conserved, due to event horizons etc, there is an underlying nonlocality that makes QM a global principle.
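
      For reference, the Hardy-Ramanujan asymptotic in question is

      \[
      p(N) \;\sim\; \frac{1}{4N\sqrt{3}}\,\exp\!\left(\pi\sqrt{\frac{2N}{3}}\right),
      \qquad N \to \infty .
      \]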

      The practice of doing QM calculations and interpretations is a matter of how one chooses to work. Whether one chooses a ψ-epistemic or ψ-ontic interpretation is a matter of choice, and that choice can include the Mermin “shut up and calculate.”

      QM turns out to have a certain ambiguity over basis with measurement. The Frauchiger and Renner result illustrates how observers of observers can have an ambiguity in their reports on what their respective observers are observed to measure. It appears that QM does not as a result give a perfectly objective outcome, and this reflects how there is an ambiguity in the basis of measurement.

  23. I should have mentioned this-- if we regard the electron post-measurement as back in thermodynamic equilibrium, there ought to be an energy dissipation event post-measurement. If so, then time reversal would make that event look like a measurement, and the measurement would still look like energy dissipation. Now everything is neat and symmetric in time reversal, no matter which way you go. Mirror images?? Still just an idea...

  24. Most of the time, when somebody criticizes a model in science, the critics are wrong and the model is correct. So not bothering to look at the criticism carefully is an entirely rational thing to do for things like the criticism of the supernova analyses, where the criticism is relatively new. It's only when an incorrect hypothesis stands up for years and years (like for naturalness) that you can deduce that something is really wrong.

    Replies
    1. It is rational if your goal is to save time. If your goal is to be a good scientist, it is not.

    2. If your goal is to be a good scientist, you cannot possibly investigate every non-conventional theory that anybody has. You only have 24 hours in a day (and less than that if you want to eat, sleep, and enjoy yourself). If everybody dropped everything to investigate anomalous results, science would grind to a halt. Ideally, a few scientists would investigate it, and if they found this criticism held up, more would, and eventually it would get to be the accepted wisdom.

      The fact that things have gone wrong in high-energy physics does not mean that things are going wrong in other fields when valid criticisms of the consensus theories are not accepted by the community right away. I don't think that astrophysics/cosmology is suffering from the same problems that high energy physics is.

    3. This is not a "non-conventional" theory, it's a peer-reviewed re-analysis of evidence for which a Nobel Prize was given out.

      Astrophysics/cosmology is a wide field. Parts of it are doing well. Other parts, not so much. But that's entirely irrelevant because striving to avoid cognitive biases is something that all scientists should do, regardless of how well Peter Shor thinks their field is doing.

    4. You misunderstood me. By "non-conventional theory", I meant the idea that there is no dark energy. And it's a massive waste of people's effort if every cosmologist drops everything else and starts looking at the Sakar et al. and Kang et al. papers; you just need a few cosmologists to do so. And some of them may be doing so; it's way too early to tell.

    5. Peter,

      Well, you have a funny use of the word "theory" if you use it to refer to the results of a data analysis. Be that as it may, you realize that you are documenting the very problem that I am talking about? That everybody thinks somebody else should look at the "non-conventional" result because nobody wants to "waste" their time. "Waste" their time with what is the job of a scientist: To look at new evidence and think about it.

      Yeah, I certainly hope that there are cosmologists looking at this right now as we speak. Nevertheless, it does not help if the popular science media preemptively declares "nothing to see here".

    6. FWIW, while having some cosmologists look into the role of type Ia SNe in estimating cosmological parameter values in more detail is certainly welcome, it may be more helpful for some of them to get their hands dirty by listening to their astrophysicist and astronomer colleagues. Helping to understand scenarios by which white dwarfs become supernovae, for example. Or seeking to understand how observed behavior relates to those scenarios. IMO, the Sakar+ paper is a good example of a missed opportunity in this regard.

  25. For the 3-slit experiment go to arXiv:gr-qc/9401003v2, 14 May 1994, Quantum Mechanics as Quantum Measure Theory, Rafael D. Sorkin.

    Replies
    1. Arun,

      That is one of many of these papers on the arXiv. In some sense, I think the paper you link to is indeed the root, but with a huge caveat. What Sorkin seems to be doing is creating a very abstract framework to consider alternative probability measures of which the textbook QM measure (Born's rule) is one among many.

      Testing the basic postulates of QM is of course perfectly legit (e.g., Bell's theorem) and has the virtue that it often involves a very cheap tabletop experiment.

      There has been a mutation, however, in the last quarter century since the paper you link to. People have taken Sorkin's initial framework and plastered onto it some erroneous theoretical calculations (the erroneous path-integral calculations) and now some classical EM experiments that they are wrongly claiming disprove Born's rule.

      Sorkin probably should not be held responsible for these stupid errors.

      As Sorkin says in his paper:
      > If some more general form of dynamics than quantum mechanics is at work in nature, it should show itself in a failure of the sum rule for which the three-slit discussion is a prototype.

      I.e., Sorkin was setting out a general framework for discussing alternatives to QM, but some of the later authors took one of the calculational techniques of QM (the path-integral approach) and claimed that this QM calculational technique itself contradicted QM. That would merely show that QM is mathematically inconsistent, since the path-integral approach is based on the postulates of QM.

      Now of course maybe behind the scenes Sorkin has actually been encouraging this nonsense, and I am letting him off the hook too easily!

      But based on the information we have it is not Sorkin but his epigones who have screwed up royally.

    2. Dave,

      You said here “... path integral is just another way to solve Schrödinger's equation: you should not get a different answer”, which is of course correct.

      And yes, it is “... essentially a classical EM experiment ...”, therefore their propagator (kernel, Green's function), eq. (4), is that of the Helmholtz equation.

      In a linear differential equation (like e.g. the wave, heat, Schrödinger, or Klein-Gordon equation) superposing different solutions gives another solution. But the implicit assumption in doing so is that the boundary conditions stay the same.

      Opening a third slit changes the boundary conditions – therefore, I find “Quantum Slits Open New Doors” quite catchy. The wave, the “particle”, can go more ways – more convoluted ways.
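
      In symbols, for a generic linear operator L (just restating the point):

      \[
      L\,u_1 = 0 \;\wedge\; L\,u_2 = 0 \;\Longrightarrow\; L(\alpha u_1 + \beta u_2) = 0,
      \]

      but α u_1 + β u_2 also carries the superposed boundary data, so solutions obtained for two-slit boundary conditions do not simply add up to the solution for the three-slit boundary.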

      Sure, their wording “deviation ... from the superposition principle” is also a bit convoluted and prone to confusion, but it does not at all mean that something is wrong with the superposition principle, as I said here.

      The problem lies in the restricted way (i.e. assuming the same boundary conditions) the term “superposition” is used – this implicit assumption was not stated explicitly enough in the 2018 paper.

      In the 2014 paper (pdf) they explicitly say: “In discussions which invoke the “zeroness” of κ, it is implicitly assumed that only classical paths contribute to the interference. In his seminal work [17], Sorkin had also assumed that the contribution from nonclassical paths was negligible. Now, what is the effect of nonclassical paths on κ?”
      “As mentioned before, in calculating κ one inherently assumes contributions only from the classical straight line paths as shown in green in Fig. 2.” [emphasis added]
      (κ is the Sorkin parameter; “classical paths” = green lines = ψ_A, ...; “nonclassical paths” = purple lines = “sub-leading” in 2018)

      Now, since in the 2018 paper “... deviation (as big as 6%) from the superposition principle ...” refers to κ=-0.06 in Figure 3, the term “superposition principle” refers to adding only “classical paths” (green lines, ψ_A,...).

      And yes, at first sight “deviation ... from the superposition principle” sounds very strange.
      They simply expressed it more dramatically than it is - very clever, because likely to be much talked about.
      (And obviously it works - without you having pointed it out I still would not have known what the Sorkin parameter is ;-). Here Sorkin is in the audience.

      Anyway, they efficiently pointed out that “sub-leading” paths also have a measurable contribution in a microwave triple-slit experiment – which is nice, but does not at all shake the world.

    3. Dave,

      you said here ”... we all want, at least, an explanation of why Born's rule works”.

      Well, I could at least point to a paradox, which A. Landé pointed out in 1975, that would arise if the “squaring” did not work.

      Galilean relativity and de Broglie would be in conflict, see here.
      The resolution is that ψ is complex and Galilean invariance only requires |ψ(t,x)|²=|ψ’(t’,x’)|², not ψ(t,x)=ψ’(t’,x’), see here.
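
      Explicitly (the standard textbook form; the sign of the phase depends on the boost convention): under x' = x - vt the free-particle wave function transforms as

      \[
      \psi'(x',t) \;=\; \exp\!\left[\tfrac{i}{\hbar}\left(-m v x + \tfrac{1}{2} m v^2 t\right)\right]\psi(x,t),
      \]

      a pure phase, so |ψ'|² = |ψ|² even though ψ' ≠ ψ.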

      Just another aspect of all the “squaring” here and here.

      Further, the probabilities, via “squaring”, finally also save SR from superluminal messaging in the measurement. Well, at least if Bell non-locality holds, i.e. if we assume statistical independence (and reject MWI).

      An explanation how non-locality in QM is realized (what mechanism is behind) would be nice, but maybe we just need to accept/postulate it as with c=const. And maybe as with the ether it turns out to be superfluous to look for a mechanism.

    4. Just realized that e.g. this link here in a browser located in the US will probably be redirected to “...google.com/books...”.
      Replace it by “...google.de/books...” then clicking on “+” (enlarge) should “work” (uncovering more pages – and do not ask me why ;-).

    5. Reimond wrote to me:
      >[Dave] said here ”... we all want, at least, an explanation of why Born's rule works”.

      >[Reimond]: Well, I could at least point to a paradox, which A. Landé pointed out in 1975, that would arise if the “squaring” did not work.

      Well, perhaps I should have said "why Born's rule is necessary at all."

      We have this perfectly nice continuous differential equation, the Schrödinger equation.

      And then we have to bollix it all up by adding the various measurement postulates, including the Born rule. And worse than that, these measurement postulates seem, as Steve Weinberg eloquently explains, to take us humans out of nature and give us a position such that nothing ever definitively happens in nature until we make a measurement! (Okay: perhaps humans or cats, a la Schrödinger.)

      Alas, the various attempts to keep only the Schrödinger equation and not have the measurement postulates -- notably MWI -- fail for various technical reasons.

      I mean, this ain't right!

      I don't have the answer, but I am glad to see that each succeeding generation of thoughtful and reflective physicists, from Einstein to Weinberg to our young host on this blog, keeps realizing that something is wrong.

      Dave

    6. Dave,

      “... until we make a measurement!” – exactly, the “we” is the problem.
      With observer-independent reductions triggered all the “time”, it would not be ”... about us doing measurements ...”, as I said here.

    7. Reimond wrote to me:
      >With observer-independent reductions triggered all the “time”, it would not be ”... about us doing measurements ...”, as I said here [a link to Penrose's The Road to Reality].

      Could be. Or GRW.

      The problem of course is to come up with something that is definite enough to be empirically testable. We don't want a solution that allows its proponents to keep moving the goalposts forever no matter what experimental results we get.

      I also take seriously attempts a la Wigner to connect the measurement problem to consciousness: the Copenhagen interpretation seems to privilege consciousness, so let's take seriously the possibility that they were right. After all, no one (yet) understands consciousness: maybe the two mysteries are connected.

      The problem again is how to make that idea definite enough that we can test it experimentally.

      I'm actually more hopeful than I was forty years ago: the first step in solving a problem is recognizing you have a problem. Now that the foundations of quantum mechanics are a respectable research area ... well, put a bunch of bright people on a problem and they may solve it.

      All the best,

      Dave

    8. The Copenhagen interpretation is the cause of all our current controversy and confusion.

      Better to wait for a physical explanation that is physically testable.

      Physics has to be rational, logical and reasonable (and understandable!).


  26. "the elements in a WD get sorted quickly" - are we sure that there is no convection?

    Replies
    1. White dwarfs - or at least that part beneath their atmospheres and a thin outer layer - are isothermal. This is due to their very high thermal conductivity, in turn due to the fact that this part of the star is electron degenerate. This is the equilibrium state ... things get interesting when an outer skin of hydrogen ignites, for example.

    2. There can be convection in a very narrow region within the non-degenerate atmosphere, so that hydrogen and helium there can sometimes be mixed. But the vast majority of the mass is degenerate carbon and oxygen (or O-Ne-Mg), where, as Jean points out, high conductivity obviates the possibility of convection.
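
      (To put a formula behind "obviates" – the usual Schwarzschild criterion, stated schematically: convection sets in only where the actual temperature gradient exceeds the adiabatic one,

      $$\nabla \equiv \frac{d\ln T}{d\ln P} > \nabla_{\mathrm{ad}}\,,$$

      and in the degenerate interior electron conduction transports heat so efficiently that the actual gradient stays far below ∇_ad – the bulk is stably stratified and nearly isothermal.)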


  27. I don't know if my message was sent, so I'm rewriting it, but without the links. I believe this message fits because your post casts doubt on certainties and also criticizes the obfuscation that mathematics and pragmatism have brought over philosophy.

    I think we must point out that there is an effort by serious and respectable physicists, such as the Nobel laureate Hannes Alfvén, which departs from the premise that the dominant interaction dynamics of the universe is plasma dynamics. From their modeling they unify the physics of the heavens and the earth (just like Newton), and they do not require dark matter, dark energy, or other elements unobserved in the sub-lunar world to explain cosmological effects. For them, it's just a matter of scale.

    Do you know this theoretical and epistemological line? Like you, they believe that current physics is "Lost in Math."

    The experimental, philosophical, and sociological aspects of their struggle within the scientific academy for at least some room for debate are well exposed in the documentary "Universe - The Cosmology Quest." I recommend this video to those who sincerely care about the philosophical and sociological aspects of physics, including persecution, historical distortion, and fake news.

    Replies
    1. chaib,

      How is this related to the Velikovskian (and anti-science) nonsense of the "Electric Universe"? Or to Alfvén's own ideas on cosmology?

      From your own reading, what is the explanation for the Cosmic Microwave Background, by these "serious and respectable physicists"? The Hubble redshift-distance relationship?

  28. Camlibel+ (2020), "Pantheon update on a model-independent analysis of cosmological supernova data" is an interesting addition to the discussion of DE and SNe 1a.
    Link: https://arxiv.org/abs/2001.04408

    Unlike Sakar+, they use the Pantheon 1a dataset; Pantheon has more 1a's than JLA (~1000 vs ~700). Both use SALT2 for standardization.

    While Camlibel+ do not incorporate a "correction" for the CMB dipole, they do attempt to derive constraints on cosmology somewhat independently of models; in particular they seek to make a "model-independent determination of the expansion history of the universe". Their conclusions are ... interesting.

    Re the question of whether SNe 1a's are consistent with the existence of DE, their answer is "yes, but ...", with the "but" part having to do with which theory of gravity you choose (GR gives an unambiguous "yes").

  29. chaib:

    Please stop trying to submit posts about your theory. It is not a technical problem; I do not approve such comments. You are wasting both your time and mine. Please read our comment rules.

  30. Hot off the presses: Smith+ (2020), "First Cosmology Results using Type Ia Supernovae from the Dark Energy Survey: The Effect of Host Galaxy Properties on Supernova Luminosity" (link: https://arxiv.org/abs/2001.11294).

    tl;dr version: yet another, previously unknown, SNe 1a systematic ...

  31. Having lots more SNe 1a to analyze will obviously lead to much tighter constraints on cosmological parameter values (including "dark energy" ones), right? After all, the two best current datasets (JLA, Pantheon) each have only ~1k objects in them. And the Vera Rubin Observatory (a.k.a. LSST) will deliver thousands of new SNe 1a's, so all good, right?

    Of course, but ...

    ... it won't be so easy! :(

    Graziani+ (2020) "Peculiar velocity cosmology with type Ia supernovae" (link: https://arxiv.org/abs/2001.09095) illustrates one reason why.

    The reasons are buried deep in the text of the paper, and unless you know what's going on, in terms of observations and "best models" of SNe 1a, you will surely miss some key factors. Consider this snippet: "It is therefore expected that one can obtain a photometrically typed SNeIa sample with a few percent non-Ia contamination during the LSST era. This might not be sufficient for classical Hubble diagram analyses given that the contaminating populations might evolve as a function of redshift in an uncontrolled way. For peculiar velocity studies that are made within a very limited redshift range, the first order impacts of non-Ia contamination simply are: (1) an increase of the sample magnitude scatter and (2) a larger fraction of peculiar velocity outliers."

    I admire the optimism. However, when the observational data starts to pile up, it will be discovered that there are several flies in the ointment, spoilers that no one thought about, and effects that turned out to be larger than anticipated. That is, if this new work is typical of observational astronomy of the last half century or more. Of course, in the meantime, the question of what the 1a progenitors are may be definitively settled, or ... :)

    Replies
    1. Jean,

      Thanks for keeping us informed.

      I was mildly surprised when it turned out the CC was non-zero, and I will be mildly surprised if it turns out that that result was wrong (and then there is the matter of getting a single age for the universe that agrees with the oldest stars, etc.).

      But, hopefully, everyone agrees that the empirical data gets the final say.

      All the best,

      Dave

