
Saturday, September 05, 2020

What is a singular limit?

Imagine you bite into an apple and find a beheaded worm. Eeeh. But it could have been worse. If you had found only half a worm in the apple, you’d now have the other half in your mouth. And a quarter of a worm in the apple would be even worse. Or a hundredth. Or a thousandth. If we extrapolate this, we find that the worst apple of all is one without a worm.

Eh, no, this can’t be right, can it? What went wrong?

I borrowed the story of the wormy apple from Michael Berry, who has used it to illustrate a “singular limit”. In this video, I will explain what a singular limit is and what we can learn from it.


A singular limit is also sometimes called a “discontinuous limit”, and it means that as some variable gets closer and closer to a certain point, you do not get a good approximation for the value of a function at that point. In the case of the apple, the variable is the length of the worm that remains in the apple, and the point you are approaching is a worm-length of zero. The function is what you could call the yuckiness of the apple. The yuckiness increases the less worm is left in the apple, but then it suddenly jumps to totally okay. This is a discontinuity, or a singular limit.

You can simulate such a function on your smartphone easily if you punch in a positive number smaller than one and square it repeatedly. This will eventually give zero, regardless of how close your original number was to 1. But if you start from 1 exactly, you will stay at 1. So, if you define a function from the limit of squaring a number infinitely often, that is, f(x) = lim_{n→∞} x^(2^n) with n a natural number, then this function makes a sudden jump at x = 1.
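
If you would rather see this on a computer than on a phone, here is a minimal sketch of the same repeated squaring (in Python, my choice of tooling, not part of the original post):

```python
# Repeated squaring: after n squarings the running value equals x^(2^n).
# Every starting value below 1 collapses to 0.0, while exactly 1 stays at 1 -- the jump at x = 1.
for x in [0.9, 0.99, 0.999999, 1.0]:
    y = x
    for _ in range(60):   # n = 60 squarings
        y = y * y
    print(x, "->", y)     # prints 0.0 for every x < 1, but 1.0 for x = 1
```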

This is a fairly obvious example, but singular limits are not always easy to spot. Here is an example from John Baez that will blow your mind, trust me, even if you are used to weird math. Look at this integral. It looks like a pretty innocent integral over the positive, real numbers. You are integrating the function sin(t)/t, and the result turns out to be π/2. Nothing funny going on.
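
A quick symbolic check of this first integral (using SymPy, which is just one way to verify it, not part of the original argument):

```python
from sympy import symbols, sin, integrate, oo

t = symbols('t', positive=True)
print(integrate(sin(t) / t, (t, 0, oo)))   # prints pi/2
```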

You can make this integral a little more complicated by multiplying the function you are integrating with another function. This other function is just the same function as previously, except that the integration variable is divided by 101. If you integrate the product of these two functions, the result is π/2 again. You can multiply these two functions by a third function, in which you divide the integration variable by 201. The result is π/2 again. And so on.

We can write these integrals in a nicely closed form because zero times 100 plus 1 is just one, so the first factor fits the same pattern. For an arbitrary number of factors, which we can call N, you get the integral from 0 to infinity over the product, for n from 0 to N, of the factors sin(t/(100n+1))/(t/(100n+1)). And you can keep on evaluating these integrals, which will give you π/2, π/2, π/2, until you give up at N equals 2000 or what have you. It certainly looks like this series just gives π/2 regardless of N. But it doesn’t. When N takes on this value:
    15,341,178,777,673,149,429,167,740,440,969,249,338,310,889
The result of the integral is, for the first time, not π/2, and it never equals π/2 for any N larger than that. You can find a proof for this here. The details of the proof don’t matter here; I am just telling you about this to show that mathematics can be far weirder than it appears at first sight.
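
For the curious: the linked proof rests on a Borwein-type criterion, namely that the integral stays exactly π/2 as long as the sum of 1/(100n+1) for n from 1 to N does not exceed 1. Here is a sketch of estimating where that sum first crosses 1 (using mpmath; the digamma identity for the partial sum is standard, and treating the asymptotic solution as exact is a shortcut of mine):

```python
from mpmath import mp, mpf, digamma, exp

mp.dps = 60
a = mpf(1) / 100

def partial_sum(N):
    # sum_{n=1}^{N} 1/(100*n + 1), written via the digamma function
    return (digamma(N + 1 + a) - digamma(1 + a)) / 100

# The integral equals pi/2 while partial_sum(N) <= 1.  Since digamma(x) ~ ln(x),
# the sum first exceeds 1 near N ~ exp(100 + digamma(1 + a)):
N_est = exp(100 + digamma(1 + a))
print(N_est)               # about 1.5e43, the same size as the 44-digit integer quoted above
print(partial_sum(N_est))  # very close to 1, i.e. right at the threshold
```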

And this matters because a lot of physicists act like the only numbers in mathematics are 2, π, and Euler’s number. If they encounter anything else, then that’s supposedly “unnatural”. Like, for example, the strength of the electromagnetic force relative to the gravitational force between, say, an electron and a proton. That ratio turns out to be about ten to the thirty-nine. So what, you may say. Well, physicists believe that a number like this just cannot come out of the math all by itself. They call it the “Hierarchy Problem”, and it supposedly requires new physics to “explain” where this large number comes from.
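
As a quick sanity check of that ratio, here is a back-of-the-envelope calculation with CODATA values via scipy.constants (my own check, not part of the argument):

```python
from scipy.constants import e, epsilon_0, G, m_e, m_p, pi

coulomb = e**2 / (4 * pi * epsilon_0)   # electric force between electron and proton, times r^2
newton = G * m_e * m_p                  # gravitational force between the same pair, times r^2
print(coulomb / newton)                 # ~2.3e39, i.e. "ten to the thirty-nine"
```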

But pure mathematics can easily spit out numbers that large. There isn’t a priori anything wrong with the physics if a theory contains a large number. We just saw one such oddly specific large number coming out of a rather innocent-looking integral series. This number is of the order of magnitude 10^43. Another example of a large number coming out of pure math is the order of the monster group, which is about 10^53. So the integral series is not an isolated case. It’s just how mathematics is.

Let me be clear that I am not saying these particular numbers are somehow relevant for physics. I am just saying if we find experimentally that a constant without units is very large, then this does not mean math alone cannot explain it and it must therefore be a signal for new physics. That’s just wrong.

But let me come back to the singular limits, because there’s more to learn from them. You may put the previous examples down as mathematical curiosities, but they are very vivid demonstrations of how badly naïve extrapolations can fail. And this is something we encounter not merely in mathematics, but also in a lot of physical systems.

I am not here thinking of the man who falls off the roof and, as he passes the 2nd floor, thinks “so far, so good”. In this case we know full well that his good luck will soon come to an end, because the surface of the earth is in the way of his well-being. We have merely ignored this information because otherwise it would not be funny. So, this is not what I am talking about. I am talking about situations where we observe sudden changes in a system that are not due to just willfully ignoring information.

An example you are probably familiar with is phase transitions. If you cool down water, it is liquid, liquid, liquid, until suddenly it isn’t. You cannot extrapolate from the water being liquid to it being a solid. It’s a pattern that does not continue. There are many such phase transitions in physical systems, where the behavior of a system suddenly changes, and they usually come along with observable properties that make sudden jumps, like entropy or viscosity. These are singular limits.

Singular limits are all over the place in condensed matter physics, but in other areas physicists seem to have a hard time acknowledging their existence. An example that you find frequently in the popular science press is calculations in a universe with a negative cosmological constant, the so-called Anti-de Sitter space, which falsely give the impression that these calculations tell us something about the real world, which has a positive cosmological constant.

A lot of physicists believe the one case tells us something about the other because, well, you could take the limit from a very small but negative cosmological constant to a very small but positive cosmological constant, and then, so they argue, the physics should be kind of the same. But. We know that the limit from a small negative cosmological constant to zero and then on to positive values is a singular limit. Space-time has a conformal boundary for all values strictly smaller than zero, but no longer for exactly zero. We have therefore no reason to think these calculations that have been done for a negative cosmological constant tell us anything about our universe, which has a positive cosmological constant.

Here are a few examples of such misleading headlines. They usually tell stories about black holes or wormholes because that’s catchy. Please do not fall for this. These calculations tell us nothing, absolutely nothing, about the real world.

27 comments:

  1. I would not count AdS spaces out as having something to do with physics. It is the case that AdS spacetimes have a conformal timelike boundary, while Minkowski spacetime has ℐ^{±∞}. However, the near horizon condition on a Kerr black hole is AdS_2×S^2 and for the region between the horizons of two black holes near coalescence it is AdS_4. These are approximations, but it does illustrate there is some possible role for these spacetimes with the Boulware vacua of black holes. Of course, it is clear we do not live in an AdS_4 cosmology. Though AdS_5 ≃ CFT_4 suggests that the observable universe is an Einstein space on the AdS_5 boundary, or maybe a Lanczos junction in AdS_5.

    I tend to concur with the statement about large numbers. I think a natural gravitational coupling constant is m_{higgs}/m_{planck} ≃ 4.0×10^{-19}, or the square of this. There is something odd with the Higgs mass. If the bare mass were a little more, the renormalization group flow would push the Higgs mass near the Planck scale. There is something I think very strange here, and it sounds similar to a singular limit.

    LC

  2. An interesting visualization:

    https://youtu.be/sD0NjbwqlYw

  3. I agree that discussion of 'singular limits' does not get as much attention in fundamental physics as it should. Such singular limits might be very important in quantum gravity. A quantum spacetime might yield large scale observables whose 'classical' limit may not correspond to standard GR.

  4. Sabine,

    Fascinating! From John Baez’s site I gather that this integral behaves like an exceptionally long fuse, burning at a snail’s pace across a vast mathematical plateau until it finally falls off at the far edge.

    -----

    Regarding the hierarchy problem, I note that in terms of observed force strengths, the electromagnetic force suffers from its own curiously self-inflicted version of the hierarchy problem.

    Here is what I mean: On paper, the electromagnetic force appears to be vastly more powerful and long-range than the other forces. A naïve observer from some other universe might look at this situation and conclude that the electromagnetic force is so overpowering that it will shred any emerging structures, making our universe barren and boring.

    However, on closer examination she would discover that at least at cosmic scales, the electromagnetic force is much weaker than gravity! From the great galactic walls down to puny planetoids, it is gravity that dominates, not electromagnetism.

    But how can that be, given the ghostlike weakness of gravity in comparison to the electromagnetic force?

    On further examination she uncovers the reason: The electromagnetic force in our universe is almost entirely self-canceling. The numbers of negative and positive charges match, and geometrically they tend to pair off and inexplicably stabilize at a singular limit of about one tenth of a nanometer. The resulting charge pairs are so tiny that the electromagnetic force loses most of its long-range impact, allowing negligible gravity to dominate instead. It is the ultimate underdog story!

    But she is also puzzled. Why don’t the evenly matched electrons and positrons just annihilate each other, removing electric charge entirely from our universe?

    Then she sees it: Those aren’t positrons! Somehow, a strange and much heavier particle with the same charge as a positron has replaced almost every positron in our universe. The quantum incompatibility of these heavier particles, these protons, keeps their electric charges ever so slightly at bay from those of the electrons, making full charge cancellation impossible.

    In all of the universes she has observed, she has never seen such an extreme taming of a force! In combination with quantum mechanics, the nominally all-powerful electromagnetic force has been transformed into something many orders of magnitude weaker, but also capable of far greater complexity. It has become chemistry. Within the quite limited regions of the universe in which this weakened version of the electromagnetic force has full play, it enables almost unlimited levels of complexity, and thus also of life. She is delighted: This odd universe is not completely barren after all!

    -----

    The bottom line is that a significant hierarchy problem can exist even within a single force. The nearly complete self-cancelation of electric charge in our universe demonstrates how this can happen, since it leaves the electromagnetic force so operationally toothless that at cosmic scales even ghostlike gravity can dominate over it.

    Which brings me to my final and perhaps most disturbing observation: What if the extreme weakness of observed gravity is similarly not inherent, but is instead just another example of incomplete self-cancelation?

    That is, what if observed gravity is the singular limit of an unimaginably more powerful and binary supergravity force, one that like the electromagnetic force includes two opposite extremes? These extremes would strive to annihilate each other fully, but like electric charges in hydrogen atoms would be unable to complete the job due to one or more out-of-whack quantum numbers, e.g. opposing geometric vectors for time. In such a model the exquisitely balanced cosmological constant of our universe would not be the result of hyperinflation or blind luck, but would instead be a necessary outcome of the incomplete self-annihilation of binary supergravity.

    Replies
    1. Regarding: “In all of the universes she has observed, she has never seen such an extreme taming of a force! In combination with quantum mechanics, the nominally all-powerful electromagnetic force has been transformed into something many orders of magnitude weaker, but also capable of far greater complexity.”

      This opinion may have been discredited through recent observations of universal magnetism. EMF may hold sway over gravity as a major force in forming the universe from its very beginning.

      https://www.quantamagazine.org/the-hidden-magnetic-universe-begins-to-come-into-view-20200702/

      The Hidden Magnetic Universe Begins to Come Into View

      “Last year, astronomers finally managed to examine a far sparser region of space — the expanse between galaxy clusters. There, they discovered the largest magnetic field yet: 10 million light-years of magnetized space spanning the entire length of this “filament” of the cosmic web. A second magnetized filament has already been spotted elsewhere in the cosmos by means of the same techniques. “We are just looking at the tip of the iceberg, probably,” said Federica Govoni of the National Institute for Astrophysics in Cagliari, Italy, who led the first detection.”



    2. @ Bollinger: I responded to this post, but below at:

      http://backreaction.blogspot.com/2020/09/what-is-singular-limit.html?showComment=1599394869790#c2636735410559537673

      @ Axil: This phenomenology is a good and bad thing. The relevant research paper is at

      https://arxiv.org/pdf/2004.09487.pdf

      The measurements of these magnetic fields are from polarization of dust in intergalactic spacetime. That will muddle CMB measurements, much the same as happened with the BICEP2 results back in 2015. The claim is that these very extensive magnetic fields, about 10^{-20} gauss and thus very weak, provide enough dynamics to account for the disparity between H = 67.4 km/s/Mpc from CMB data and H = 74.0 km/s/Mpc from local measurements. So large-scale magnetic fields contribute about 10% of the dynamics of dark energy, or Λ-induced gravitational expansion. I think physically this is a manifestation of how lines of magnetic field tend to repel each other, and so this gives some contribution to the expansion of the cosmos.

      As a disclaimer, this does not support the Arp cosmology or the claims of the “electric universe” crowd. That stuff is sort of analogous to flat-Earth ideology.

    3. Nope. Like fish in water, we are so accustomed to the incomprehensibly moderating effects of localized electric charge cancellation that we mistake electromagnetism merely competing with wimpy gravity as evidence of the full power of electromagnetism.

      It isn't. Even Richard Feynman, who was about as aware of the extremity of the difference as anyone in the last century, showed genuine shock in the audio version of his Lectures when he translated the "billion-billion-billion-billion" (his phrase) difference in magnitude between these two forces into a human-scale example.

      If you search on the phrase:

      Feynman "1-1 Electrical forces"

      ... you will find both the above quote and Feynman's description of the result of his calculation:

      If you were standing at arm’s length from someone and each of you had one percent more electrons than protons, the repelling force would be incredible. How great? Enough to lift the Empire State Building? No! To lift Mount Everest? No! The repulsion would be enough to lift a “weight” equal to that of the entire earth!
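
      (A rough numerical check of Feynman's figure, under assumptions he does not spell out, such as a 70 kg body and one metre of separation, both my guesses:)

      ```python
      from scipy.constants import e, m_p, epsilon_0, pi, g

      body_mass = 70.0      # kg, assumed
      separation = 1.0      # m, "arm's length", assumed
      nucleons = body_mass / m_p          # treat the body as ~4e28 nucleons
      electrons = nucleons / 2            # roughly one electron per two nucleons
      excess_q = 0.01 * electrons * e     # one percent electron excess, in coulombs

      force = excess_q**2 / (4 * pi * epsilon_0 * separation**2)
      earth_weight = 5.97e24 * g          # mass of the Earth times standard gravity
      print(force / earth_weight)         # within an order of magnitude of the Earth's weight
      ```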

    4. I am presuming you are responding to Axil. This has to do with magnetic fields. These are actually weaker than electric fields by a factor of the speed of light. This does though mean the magnetic field is still a lot stronger than the gravitational field. The thing that makes magnetic fields have such long-range influence is that there are no magnetic monopoles. I should have indicated this.

      With the electric field the equality of + and - charges means the field tends to saturate out. Any region with excess charge tends to attract the opposite charge, and the field damps out. The universe also should not have excess charge, for that would mean lines of electric field would wrap around the universe in a disastrous way. Then again, maybe there is a singular limit that prevents the disaster ;-).

      If there were magnetic monopoles around, these lines of magnetic field would terminate on them, and this would tend to "eat up" these large long-range magnetic fields. This then may be evidence of no magnetic monopoles on a cosmological scale.

    5. Hi Lawrence, and Axil,

      My apologies to both of you! I usually remember to put a salutation at the top, both for etiquette and for clarity of reference. I slipped that time!

      Please note that I'm not trying to debate the impact of magnetic fields on large-scale cosmic structures. I am simply stating a much dumber, simpler fact: At least in our universe, our most powerful forces -- strong and electric -- have both demonstrated an uncanny knack for almost completely obliterating themselves at the cosmic size scale. This leaves forces that are literally billions of billions of billions of billions times weaker as the de facto winners for determining the large-scale structure of the universe.

      Another way to think of it is to realize that if these larger forces had not vacated the cosmic-scale premises so completely, forces like gravity and galactic-scale magnetic fields literally would not even be detectable.

      This is important in the context of hierarchy arguments because we tend to overlook the fact that equally baffling limits and dynamic ranges exist within single forces. We accept as givens that electric charges are: equal in number, mostly smoothly distributed, and begin to dominate only when you get down to the incredibly tiny atomic scale.

      But seriously, and especially with regards to that last issue of the size limit at which electric charge abruptly stops self-annihilating, why should the electric force really kick in only at the atomic scale? Why not at the scale of, say, planets or even galaxies?

      Yes, I know: Planck's constant is why! But why couldn't Planck's constant, which might more accurately be called the volume constant, be so large that atoms would be the size of galaxies? And that is just one number, one constant to twiddle with to make electromagnetism vastly more impactful!

      The bottom line is that there is more than just one flavor of hierarchy problem in physics -- but to understand them, the first step is to notice that they exist.

    6. This is because the charges of gauge fields come in opposites. They also transform by roots in an elliptic fashion. Think of the electric charge and its gauge group U(1), which is just the unit circle in the complex plane. The real valued parts reflect the two charges or weights and they are connected by a circle.

      Gravitation is a hyperbolic gauge-like group. The "charge" is mass, and there is an opposite charge of negative mass. However, these are not simply connected as the transformations are hyperbolic. Physically this is manifested by the nonexistence of negative mass or violations of the Hawking-Penrose energy conditions --- at least no violations globally.

      The electric field is more present than we know. Ever walked outside and run into a spider web that spans some distance across the porch? Ever wonder how the spider did that? A spider spins a tendril of web that has a slight charge polarization and will point along very weak electric fields. This sticky web then reaches some object, and the spider has made a span to start a web. In fact spiders use this to fly; they will travel considerable distances through the air this way. The Earth and planets have considerable electrodynamics, where, when electric fields become strong, there can be dielectric breakdown; we call it lightning.

    7. This is a second post to address a separate question. There are gauge coupling constants, the basic one being the electric charge, or the fine structure constant α = e^2/(4πε_0ħc), where the gauge charge can adjust by renormalization group flow. Then in particular there are constants such as the Planck constant ħ and the speed of light c. The speed of light is a way to convert time into space, e.g. c = 1 light year per year. The Planck constant intertwines quantum uncertainty in momentum with quantum uncertainty in position by ΔpΔx ≥ ħ/2. In natural units these are just unity, or 1. They are not RG running parameters.
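
      (For reference, plugging CODATA values into that formula reproduces the familiar 1/137; scipy.constants even ships the tabulated value for comparison:)

      ```python
      from scipy.constants import e, epsilon_0, hbar, c, pi, fine_structure

      alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
      print(alpha, 1 / alpha)    # ~0.0072974, ~137.036
      print(fine_structure)      # the tabulated CODATA value, for comparison
      ```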

  5. Interesting; I wonder if the limits discussed recently in Decoherence have a singular limit that might solve the Measurement Problem.

    Because the limits there produce an "unreal" system, but every one of the near-infinite disturbances in the path must produce a real system. So it looks to me like a singular limit (the unreal system) is being claimed as a discontinuity.

    Or, I just realized, that may be an invalid approach to averaging; since every system after disturbance must lie on the unit circle, the "average" outcome must also lie on the unit circle; perhaps as the average angle (radians) moved from the original system; but still on the unit circle; never at any point in the interior. (Or perhaps be non-existent if a random encounter results in 100% cancellation).

    Replies
    1. I agree with you, and in the decoherence thread on 4:22 AM, August 22, 2020 I wrote: ... I suspect that it is unrealistic to suppose that a particular electron has an average value of exp(iθ) which is zero. A series of buffetings should merely move the path traced out along the [unit] circle for a single individual particle?

      And the implication for me may be that such decoherence only applies to aggregates of entities (e.g. a laser beam) and not to individual entities (like an electron or photon)?

      Austin Fearnley

  6. This comment has been removed by the author.

  7. Gravitation is not really that weak. It appears weak because the masses of elementary particles are so small compared to the Planck mass scale. If the masses of elementary particles were comparable to the Planck mass then gravitation would be extraordinarily strong, and in fact the universe would be composed largely of black holes. In this world, where the proton is 19 orders of magnitude smaller in mass than the Planck mass, we require a large number of protons to get any appreciable gravitational force. In fact, because the coupling is GMm, if the mass of the proton were comparable to the Planck mass, gravitation would be 10^{38} times stronger, which is a number close to the ratio of the electrostatic and gravitational forces.
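
    (A quick check of that factor, assuming the usual definition of the Planck mass as sqrt(ħc/G):)

    ```python
    from scipy.constants import hbar, c, G, m_p

    m_planck = (hbar * c / G) ** 0.5    # ~2.18e-8 kg
    print((m_planck / m_p) ** 2)        # ~1.7e38, close to the 10^{38} quoted above
    ```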

    In teaching elementary classes I demonstrate this weakness with a rubber ball. I drop it on the table, it bounces back, and I illustrate how this shows the weakness of gravity. Students are a bit unclear until I explain the time it took to fall and how the bounce happened in a much shorter time period. The bounce is due to the electrostatic forces in the molecules of the ball and the material of the table. Then I use the electrostatic attraction of pith balls to show how a little bit of charge can oppose a gravitational force involving the entire Earth.

    As I indicate above, there is something odd with the Higgs field. There is something strange with the renormalization of a φ^4 theory, or potential V(φ) = -μ^2φ^2 + λφ^4. While technicolor theory seems not applicable, this would make the Higgs field in line with gauge forces. In particular, for gauge-like gravitation this would connect the Higgs field and this φ^4 theory in line with gravitation.

    There might be an alternative route for technicolor. There is also the Thirring fermion with potential V(ψ) = ψ^4 for ψ a fermion. This gives rise to the sine-Gordon equation, which is a soliton wave equation with analogues to gravitational waves. In the nice little book by Feynman and Weinberg, Elementary Particles and the Laws of Physics, there is a little discussion of this. A Lagrangian term ψ^4, or in terms of raising and lowering operators (b^†)^4 + b^4, corresponds to the physics of a state composed of 4 spin-½ fermions, which if “colorless” is equivalent to a graviton.

    The Thirring fermion and the Higgs field might then be connected by supersymmetry. If you do not like SUSY then maybe on a 2-dimensional surface such as a stretched horizon with anyons. There is a lot here to think about and work on in my opinion.

    I might be off on this, but the breakdown of ∫dt Π_n sin(t/n)/(t/n) appears in some way related to the problem of overshoot in Fourier analysis. However, where I may be off-base is that the failure of this integration occurs for very large n, while the overshoot in Fourier analysis is not as obscure.

  8. And now for something completely different. This morning as I was first waking a question popped into my mind, perhaps because I recently saw an overview of the Unruh effect. Why should photons be special? Do Unruh and Hawking radiation and any other horizon effects that may exist also involve radiation other than electromagnetic or can they in the right circumstances? Are, for example, gluons, W and Z bosons and Higgs bosons and, if they exist, gravitons also emitted?

    Replies
    1. Hawking radiation contains all particles. Could you please not post random questions here but in some forum. My comment sections are to discuss the topics of my blogposts. You find the comment rules here. Thank you.

  9. Pure mathematics is an infinite subject, and the Egan/Baez problem is very interesting, but for the life of me I cannot see how it is of any use in the real world. I'll stick to physics.

  10. The complex plane seems to be a dangerous place to keep one's footing. To cross Morecambe Bay's sand flats you need a guide. Else it might work out at 1000 safe steps followed by the next step sinking you in quicksand. QM is linear, but it does occupy the complex plane landscape, so is it surprising if it ends up in quicksand on a measurement?

    There is a simple fractal formula, which I cannot remember but which was shown in a textbook on fractals, that I programmed decades ago on an Amstrad. The fractal stays finite around zero for about 5000 iterations and then zooms off to apparent infinity. I don't know if it would ever come back again. Sounds similar to the Baez case, but 5000 is a relatively small number. A larger number is how many years it takes the universe to come to a crunch. Assuming the space metric continually refreshes using an interplay of information between fermions and bosons, then at some time in the future there may not be any fermions left to take part in the calculations. So the metric of space disappears (Penrose's CCC end of cycle) after quite a large number of years, an even larger number of seconds, and I have no idea about the number of refreshment iterations.

    A fractal needs a Z^n term (where n = 2, 3, etc) to produce catastrophe effects. Likewise, Baez's function has component functions multiplied together so catastrophe is not surprising. But the large n is surprising.
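
    (I do not know which formula the book used; as a stand-in, the Mandelbrot iteration z → z^2 + c with a point just off the set's boundary shows the same kind of behavior, bounded for thousands of iterations and then off to infinity:)

    ```python
    # Iterate z -> z^2 + c for a point just outside the Mandelbrot set's "neck" at -0.75.
    # The choice c = -0.75 + 0.001i is purely illustrative, not the book's formula.
    c = -0.75 + 0.001j
    z = 0
    n = 0
    while abs(z) <= 2 and n < 100_000:
        z = z * z + c
        n += 1
    print(n)   # roughly 3000 iterations before the orbit finally escapes
    ```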

  11. Hi Sabine,

    I guess you mean "ten to the power thirty-nine", not only "ten to the thirty-nine".

    Best,
    J.

    Replies
    1. Sabine: Yes it does. "X to the N" always means X raised to the Nth power.

      The only quibble is some people insist on the adjective form instead of the numeral.

      So we say "X to the third" instead of "X to the three".

      Or in this case "Ten to the thirty-ninth". But that is a quibble, "ten to the thirty-nine" is fine.

    2. Dr Castaldo:

      Thanks for the clarification, much appreciated.

  12. In condensed matter physics, the same issues are also at play if you do theoretical work. All the results on critical exponents based on renormalization group arguments are non-rigorous, because one has to assume that there exists a non-singular mapping from the (known or unknown) exact mathematical model describing whatever physical system one is studying to some effective field theory that is studied using RG methods. If that's the case then the non-analytical behavior of the model will be due to some critical fixed point.

    Even for some models that can be solved exactly, like spin chains using the Bethe Ansatz, one makes certain assumptions when one considers the continuum limit to infinite system size that have not been rigorously proven.

    While rigorous results do exist in this field, the people interested in getting to such results usually end up proving results that were known for a long time.

  13. That integral limit was really unexpected :). It gets you thinking that consciousness could be a singular limit just like that. A limit on the number of neurons/connections. You can have any number of neurons/connections lower than a magic value and nothing happens, but somewhere around the magic value, HOP! out pops consciousness...

  14. DJ Industrial Average chart, 1925-1935.
    Sometimes provokes Generalized Cause/Effect analysis and extrapolations.
    Like when the lunch box has a wormy apple.

  15. Dr Hossenfelder,

    Once again I find your posts to be very insightful, thank you so very much. I do not believe that anybody can argue with the statement that mathematics is the language of the universe. It is important for us to keep in mind, as you have just pointed out, that mathematics is the universe's language, not ours. The universe has been kind enough to give us clues and guidance in how we should proceed. Ignoring these clues and reinterpreting the language of the universe so that it fits the picture of the universe we want to have will simply lead us to dead ends. There is also the cascading effect, in that one breakdown in understanding the universe leads to others down the line.

    If I may be so bold as to ask a few questions: what major discovery is in our future based on the directions we are going now? Are we betting that a big breakthrough in the direction we have chosen for the universe will give us the next big discovery? Or will it be a cascading failure that sends science chasing its tail?


