Model of Inflation. [Image source: umich.edu]

But if you follow the equations back in time, general relativity eventually stops working. Therefore, no one presently knows how the universe began. Indeed, we may never know.

Since the days of Einstein, physicists have made much progress detailing the history of the universe. But the deeper they try to peer into our past, the more difficult their task becomes.

This difficulty arises partly because new data are harder and harder to come by. The dense matter in the early universe blocked light, so we cannot use light to look back to any time earlier than the formation of the cosmic microwave background. For even earlier times, we can make indirect inferences, or hope for new messengers, like gravitational waves or neutrinos. This is technologically and mathematically challenging, but these are challenges that can be overcome, at least in principle. (Says the theorist.)

The more serious difficulty is conceptual. When studying the universe as a whole, physicists face the limits of the scientific method: The further back in time they look, the simpler their explanations become. At some point, then, there will be nothing left to simplify, and so there will be no way to improve their explanations. The question isn’t whether this will happen, but when.

The miserable status of today’s theories for the early universe makes me wonder whether it has already happened. Cosmologists have hundreds of theories, and many of those theories come in several variants. It’s not quite as bad as in particle physics, but the situation is similar in that cosmologists, too, produce loads of ill-motivated models for no reason other than that they can get them published. (And they insist this is good scientific practice. Don’t get me started.)

The currently most popular theory for the early universe is called “inflation”. According to inflation, the universe once underwent a phase in which volumes of space increased exponentially in time. This rapid expansion then stopped in an event called “reheating,” at which the particles of the standard model were produced. After this, particle physics continues in the familiar way.

Inflation was originally invented to solve several finetuning problems. (I wrote about this previously, and don’t want to repeat it all over again, so if you are not familiar with the story, please check out this earlier post.) Y’all know that I think finetuning arguments are a waste of time, so naturally I think these motivations for inflation are no good. However, just because the original reason for the idea of inflation doesn’t make sense doesn’t mean the theory is wrong.

Ever since the results of the Planck mission in 2013, it hasn’t looked good for inflation. After the results appeared, Anna Ijjas, Paul Steinhardt, and Avi Loeb argued in a series of papers that the models of inflation which are compatible with the data themselves require finetuning, and therefore bring back the problem they were meant to solve. They popularized their argument in a 2017 article in Scientific American, provocatively titled “Pop Goes the Universe.”

The current models of inflation do not work simply by assuming that the universe underwent a phase of exponential expansion; they moreover introduce a new field – the “inflaton” – that supposedly caused this rapid expansion. For this to work, it is not sufficient to just postulate the existence of this field; the field must also have a suitable potential. This potential is basically a function (of the field) and typically requires several parameters to be specified.

Most of the papers published on inflation are then exercises in relating this inflaton potential to today’s cosmological observables, such as the properties of the cosmic microwave background.
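As a rough illustration of how such an exercise works: under the standard slow-roll approximation, a given potential determines observables like the scalar spectral index n_s and the tensor-to-scalar ratio r. Here is a minimal Python sketch; the Starobinsky-type plateau potential and the evaluation point are my illustrative choices, not anything from the papers discussed in this post.

```python
# Sketch: mapping an inflaton potential to CMB observables via the
# slow-roll approximation (reduced Planck units, M_pl = 1). The
# Starobinsky-type plateau potential is an illustrative choice.
import math

def V(phi, V0=1.0):
    """Starobinsky-type plateau potential."""
    return V0 * (1.0 - math.exp(-math.sqrt(2.0 / 3.0) * phi)) ** 2

def dV(phi, h=1e-6):
    return (V(phi + h) - V(phi - h)) / (2 * h)

def d2V(phi, h=1e-4):
    return (V(phi + h) - 2 * V(phi) + V(phi - h)) / h**2

def slow_roll_observables(phi):
    eps = 0.5 * (dV(phi) / V(phi)) ** 2   # first slow-roll parameter
    eta = d2V(phi) / V(phi)               # second slow-roll parameter
    n_s = 1.0 - 6.0 * eps + 2.0 * eta     # scalar spectral index
    r = 16.0 * eps                        # tensor-to-scalar ratio
    return n_s, r

# field value roughly 55 e-folds before the end of inflation (assumed)
n_s, r = slow_roll_observables(phi=5.5)
print(f"n_s = {n_s:.3f}, r = {r:.4f}")
```

Plateau potentials of this type land near n_s ≈ 0.97 with small r, which is the region the Planck data favor.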

Now, in the past week two long papers about all those inflationary models appeared on the arXiv:

**Cosmic Inflation: Trick or Treat?**

By Jerome Martin

arXiv:1902.05286 [astro-ph.CO]

**Inflation after Planck: Judgement Day**

By Debika Chowdhury, Jerome Martin, Christophe Ringeval, Vincent Vennin

arXiv:1902.03951 [astro-ph.CO]

The first paper, by Jerome Martin alone, is a general overview of the idea of inflation. It is well-written and a good introduction, but if you are familiar with the topic, there is nothing new to see here.

The second paper is more technical. It is a thorough re-analysis of the issue of finetuning in inflationary models and a response to the earlier papers by Ijjas, Steinhardt, and Loeb. The main claim of the new paper is that the argument by Ijjas *et al*, that inflation is “in trouble,” is wrong because it confuses two different types of models, the “plateau models” and the “hilltop models” (referring to different types of the inflaton potential).

According to the new analysis, the models most favored by the data are the plateau models, which do not suffer from finetuning problems, whereas the hilltop models do (in general) suffer from finetuning but are not favored by the data anyway. Hence, they conclude, inflation is doing just fine.

The rest of the paper analyses different aspects of finetuning in inflation (such as quantum contributions to the potential), and discusses further problems with inflation, such as the trans-Planckian problem and the measurement problem (as it pertains to cosmological perturbations). It is a very balanced assessment of the situation.

The paper uses standard methods of analysis (Bayesian statistics), but I find this type of model-evaluation generally inconclusive. The problem with such analyses is that they do not take into account the prior probability for the models themselves but only for the initial values and the parameters of the model. Therefore, the results tend to favor models which shove unlikeliness from the initial condition into the model (e.g. the type of function for the potential).
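To illustrate the worry, here is a toy Python example of a Bayesian evidence calculation. Both “models” fit the data equally well, but the one whose parameter prior is tuned to sit on the measurement wins the Bayes factor, because the evidence integral only sees the parameter prior, not how contrived the model choice itself is. The models, priors, and numbers are all invented for illustration.

```python
# Toy Bayesian evidence: Z = ∫ L(θ) p(θ) dθ with a flat prior.
# The evidence penalizes wide parameter priors, but carries no
# penalty for a model whose prior was chosen to match the data.
import math

def evidence(loglike, prior_lo, prior_hi, n=10000):
    """Midpoint-rule estimate of the evidence under a flat prior."""
    width = prior_hi - prior_lo
    total = 0.0
    for i in range(n):
        theta = prior_lo + (i + 0.5) * width / n
        total += math.exp(loglike(theta))
    return total / n  # flat-prior average of the likelihood

data = 0.97  # pretend measurement (e.g. a spectral index)
loglike = lambda t: -0.5 * ((data - t) / 0.01) ** 2  # same likelihood for both

# Model A: narrow prior tuned to sit on the data (unlikeliness hidden in the model)
zA = evidence(loglike, 0.96, 0.98)
# Model B: honest wide prior over the same parameter
zB = evidence(loglike, 0.5, 1.5)

print(f"Bayes factor A/B = {zA / zB:.1f}")  # A "wins" purely via its prior choice
```

The Bayes factor rewards Model A only because its prior was chosen after seeing where the data lie, which is exactly the kind of model-level unlikeliness the evidence integral never penalizes.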

This is most obvious when it comes to the so-called “curvature problem,” or the question why the universe today is spatially almost flat. You can get this outcome without inflation, but it requires you to start with an exponentially small value of the curvature already (curvature density, to be precise). If you only look at the initial conditions, then that strongly favors inflation.
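The arithmetic behind this is simple: during N e-folds of inflation the comoving Hubble radius shrinks by e^N, so the curvature density Ω_k ∝ 1/(aH)² is suppressed by e^(−2N). A back-of-envelope sketch, where the e-fold count and the target value are standard illustrative orders of magnitude, not numbers from the paper:

```python
# Flatness problem, back of the envelope: Ω_k ∝ 1/(aH)^2 shrinks as
# e^{-2N} over N e-folds of inflation. To reach a tiny Ω_k you either
# postulate the tiny number as an initial condition, or start from
# Ω_k ~ 1 and inflate for ~60-70 e-folds.
import math

omega_k_initial = 1.0   # O(1) curvature density before inflation (assumed)
N = 70                  # e-folds of inflation (illustrative)
omega_k_final = omega_k_initial * math.exp(-2 * N)
print(f"After {N} e-folds: Omega_k ~ {omega_k_final:.1e}")  # ~ 1e-61
```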

But of course inflation works by postulating an exponential suppression that comes from the dynamical law. And not only this, it furthermore introduces a field which is strictly speaking unnecessary to get the exponential expansion. I therefore do not buy into the conclusion that inflation is the better explanation. On the very contrary, it adds unnecessary structure.

This is not to say that I think inflation is a bad idea. It’s just that I think cosmologists are focusing on the wrong aspects of the model. Finetuning arguments will forever remain ambiguous because they eventually depend on unjustifiable assumptions. What’s the probability for getting any particular inflaton potential to begin with? Well, if you use the most common measure on the space of all possible functions, then all so-far considered potentials have probability zero. This type of reasoning just does not lead anywhere. So why waste time talking about finetuning?

Instead, let us talk about those predictions whose explanatory value does not depend on finetuning arguments, of which I suspect (but do not know) that ET-correlations in the CMB power spectrum are an example. Since finetuning debates will remain unsolvable, it would be more fruitful to focus on those benefits of inflation that can be quantified unambiguously.

In any case, I am sure the new paper will make many cosmologists happy, and encourage them to invent many more models for inflation. Sigh.

What about Higgs inflation? Since we know from the LHC that the Higgs field exists, perhaps the Higgs is the inflaton field.

Are there better alternatives to inflation? I understand that bounce cosmology has been suggested.

Also, doesn't inflation depend on the details of quantum gravity? There are a lot of papers on loop quantum cosmology and inflation.

E.g.:

- “Inflation in Loop Quantum Cosmology,” A. Bhardwaj, arXiv (gr-qc), Dec 17, 2018: “... is the same scalar field that is responsible for the bounce in Loop Quantum Cosmology (LQC).”
- “Inflation and Loop Quantum Cosmology,” A. Barrau, arXiv (gr-qc), Nov 24, 2010: “On the other hand, loop quantum cosmology is very successful: it solves ... Recent results can let us hope that inflation and LQC could mutually ...”
- “Loop quantum cosmology and slow roll inflation,” A. Ashtekar, arXiv (gr-qc), Dec 21, 2009: “... bang is replaced by a quantum bounce which is followed by a robust phase of super-inflation.”

Maybe if you combine LQC with Higgs inflation you get the universe we observe.

Proving inflation is clearly as probable as you understanding the vital importance of punctuation.

I've read the papers you cited, and am nowhere near any closer to understanding the rationale for inflation. My bad.

Isn't it at least possible that the inflation event, which supposedly took place in the first 10^(-32) second or so of creation, places cosmologists at a point where only a workable quantum gravity theory is required? And since we currently have no such theory, isn't it all just speculation at this time?

Many thanks for your wonderful blog!

Sabine,

Alan Guth proposed his initial inflation model (since superseded, of course) when he was at SLAC, at the same time I was a grad student at SLAC. As a result, I've followed inflation pretty much from the beginning: I remember one of Alan's early talks when I tried to wrap my head around the concepts of the flatness problem, the horizon problem, etc.

It's always seemed to me that inflation is an intriguing idea, the details of which would be difficult to work out or to confirm observationally. When I talk about cosmology with non-physicists, I say that cosmologists seem to think that inflation is the most promising approach they have, but that I remain a bit skeptical.

I must admit that I do find the concept of "eternal inflation" aesthetically appealing in that it seems to avoid the need for a "moment of Creation," whether in some theological sense or in the sense of the universe "tunneling out of nothing" -- I rather doubt the latter can be made to work (I had an amusing discussion with Heinz Pagels about the latter issue, also back around 1980). Of course, Alan's 2007 paper claims that eternal inflation cannot extend infinitely far into the past, but I have never been able to understand that paper.

Also, I wonder if you could elaborate a bit on your statement "And not only this, it furthermore introduces a field which is strictly speaking unnecessary to get the exponential expansion"?

All the best,

Dave

Dave,

I didn't want to get into the issue of eternal inflation/multiverse here because I think it's somewhat tangential to the question of how our universe began. I may get back to this some other time.

What I mean by the remark you ask about is just that from a purely axiomatic point of view you merely need to assume a phase of exponential expansion that ends at some point. You do not also need to assume that this phase was caused by a specific field which had a certain potential.

Also, in this paper, the authors argue that you can achieve the same effect with a speed of sound that's faster than the speed of light. With that you basically push the burden from the expansion of space into the properties of the field.

It's not that I am advocating this or saying I find it particularly convincing. My point is simply that if you focus on only one or two measurement values (for this argument: small curvature, spectral index) you can't tell apart different models, and the currently used ones are not the most minimalistic ones. Instead they are the ones that use the common narrative of BSM pheno (introduce new fields and guess their interactions).

Dave: Before Alan Guth's paper, Demos Kazanas wrote a paper in 1980 arguing that exponential expansion of the universe solves the horizon and flatness problems. See http://adsabs.harvard.edu/abs/1980ApJ...241L..59K

DeleteThe basic idea of inflation has a decent amount though not complete amount of observational support. The issue is what the physics of the inflation is. Inflation does solve primarily the flatness problem or horizon problem. How is it that regions of the universe with z > 1 and causally disconnected have the same degree of isotropy and further the same sorts of local physics? Inflation entered in as a way that a tiny region about 10^{-26}m in radius could expand in 10^{-30} seconds into a meter radius region, which then permitted all these far regions to have once been causally connected. Further phenomenology so far tracks observations, even with the unfortunate dusting problem with BICEPII four years ago. However, follow on work with this is narrowing the role of dust vs possible gravitational waves generated during inflation.

I tend to favor the slow-roll idea where the inflaton field φ evolves, but is changed with time by a type of friction. The field φ in the Lagrangian density is really φ/a^{3/2}, where a is the scale factor of the FLRW metric. The kinetic energy term is considered to be largely time-varying and spatial variations are very small, though with detailed work these are computed to look at anisotropies. Ignoring that, the time variation of this is

∂_t(φa^{-3/2}) = a^{-3/2}[(∂_tφ) - (3/2)φ(∂_ta)a^{-1}]

The kinetic energy term then has a term that varies as -(∂_tφ)×(∂_ta)/a = -(∂_tφ)H, which attenuates the inflaton field. Here H = (∂_ta)/a is the Hubble parameter, which during inflation was enormous. This attenuation means there is this slow roll down a potential ramp. Then this vacuum is a false vacuum and collapses into a very small value with the generation of particles and radiation within a pocket region. In this scenario we are in a moderately aged pocket world or so-called universe.
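The slow-roll dynamics described above can be integrated numerically in a few lines. The following is a minimal sketch in reduced Planck units, using a quadratic potential V = ½m²φ²; the potential, mass, and initial field value are my illustrative choices, not the commenter's model.

```python
# Minimal sketch of "Hubble friction": integrate the homogeneous
# inflaton equation φ̈ + 3Hφ̇ + V'(φ) = 0 with H² = (φ̇²/2 + V)/3,
# for the illustrative potential V = ½ m² φ² (reduced Planck units).
import math

m = 1e-5                 # inflaton mass (illustrative)
phi, dphi = 16.0, 0.0    # large initial field value, field at rest
dt = 0.01 / m            # timestep small compared to the friction time 1/H
N = 0.0                  # accumulated e-folds

while True:
    V = 0.5 * m**2 * phi**2
    H = math.sqrt((0.5 * dphi**2 + V) / 3.0)   # Friedmann equation
    if 0.5 * (dphi / H) ** 2 >= 1.0:           # ε ≥ 1: slow roll ends
        break
    ddphi = -3.0 * H * dphi - m**2 * phi       # -3Hφ̇ is the friction term
    dphi += ddphi * dt
    phi += dphi * dt
    N += H * dt                                # dN = H dt

print(f"inflation ends near phi ≈ {phi:.2f} with N ≈ {N:.0f} e-folds")
```

Starting from rest, the field quickly relaxes onto the slow-roll attractor φ̇ ≈ -V'/(3H) and inflates for several dozen e-folds before the friction can no longer hold it.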

The inflationary spacetime can be thought of as a de Sitter spacetime with a very large Λ(φ, ∂_tφ) ~ 10^{45}cm^{-2} or so. A quantum fluctuation in this manifold generates an event horizon, which is an instability in the vacuum, and this starts the slow roll and vacuum transition. This dS spacetime is continually cranking out these pocket worlds. We are now talking about eternal inflation and the so-called multiverse, which is a term I disdain. The idea is nifty and I have no reason to doubt it, though there is absolutely no empirical data to support this. This multiverse is then connected to the string landscape or the huge number of Calabi-Yau spacetimes possible. However, for all we know every single one of these pockets being generated in this dS spacetime could be physically the same as what we observe. We have no way at this time of knowing. Guth and Vilenkin also demonstrated the dS manifold is not eternal in the past, so this was generated by some physics. Further, the dS vacuum may not be eternally stable, and some conflicting data might be telling us this pocket world as a low energy dS vacuum is unstable.

Medieval theologians debated how many angels can dance on the head of a pin. String theorists replaced the angels with Calabi-Yau manifolds.

Who needs a Genie in the bottle when you have the multiverse? Anything you wish is in the multiverse.

The multiverse, which is a term BTW that I dislike, is something that comes out of inflation. These low energy vacuum regions or pocket worlds may have different vacua and thus different gauge and particle physics. For a number of reasons I question this, and think they may only differ according to attractors or endpoints in their RG flows.

Lawrence,

As is stressed in the paper, there are scenarios of inflation that do not give rise to a multiverse.

When Kepler came up with his laws of planetary motion, physicists of the day had the choice of either saying "hmmm, it seems like God likes ellipses" or trying to find some deeper law that would account for them. Newton found universal gravitation and it was a great scientific advance.

Similarly, I think, cosmologists today have the choice of either saying "God likes flatness, uniform temperature, tiny initial density fluctuations, etc" or trying to find a deeper law that explains them. Inflation is such an idea.

Of course nobody has figured out exactly how to link it to deeper physics, but anybody with a better idea should chime in.

CIP,

I am afraid you entirely missed the point. I am telling you that it's not clear what you even mean by inflation "explains" it. Why do you think postulating a phase of exponential expansion is a better explanation than just postulating an exponentially small initial value?

CIP.

I think the important difference is that Newton just completely nailed Kepler's laws in a *very precise* quantitative manner. I.e., Newton did not just show that Kepler's three laws *might* be consistent with *some* force law given Newton's laws of motion; rather Newton showed in great detail that an inverse square force accounted *exactly* for Kepler's first and third laws (Kepler's second law, essentially conservation of angular momentum, is, of course, true for any central force).

Most of the great achievements in science -- Maxwell's equations, plate tectonics, special relativity, the atomic theory, quantum mechanics, etc. -- have these properties. They make clear, unambiguous, and usually quantitatively precise predictions. They tie together wildly different observations that one might have thought were simply isolated brute facts. And, once one sees how precisely the theories work and how they tie together such disparate facts, it actually becomes very difficult to doubt that the theories are basically true.

The really successful scientific theories just do not require a Bayesian analysis that, based on your personal priors, tells you the degree of confidence you should have in the theories. They do not require surveying the landscape of possible theories to judge which is most likely true. They are not based on arguments along the line of "Well, but do you have a better suggestion for something that might work?"

I fear that too many philosophers of science and logicians look for very precise criteria that can be carefully used to decide which scientific theory is slightly more probable than another. And, alas, too many physicists, in the cases of superstring theory and inflationary cosmology, are following the same line.

The recent example of all this was Polchinski's famous Bayesian calculation that it is 94% certain that the multiverse exists (having known Joe in his undergrad days, I am pretty certain this was meant as a joke, poking fun at the tendency I am criticizing).

It is a quite amazing fact that we actually do have a number of scientific theories whose success is just so overwhelming that it would be silly to try to estimate the probability that they are basically false (yes, they can be refined, but basically false? -- no).

That's the little secret of the scientific method: happily we have successful theories that just cannot be doubted by any intelligent, informed person.

The arguments for inflation -- Well, there are so many possible inflationary models that we can probably *somehow* fit the data! Do you have any alternative suggestion for solving the horizon problem? etc. -- just do not have that irresistible, overwhelming power of the truly successful scientific theories.

@Physicist Dave - It seems to me that inflation, in more or less any of its incarnations, does an excellent job of explaining both the extreme uniformity of the early universe and the small deviations from that uniformity. If regions that have been out of causal contact since very early in the history of the universe were previously in thermal contact, that explains uniformity up to quantum fluctuations. If inflation took place, that explains why they have been out of contact since and how those quantum fluctuations grew to measurable size.

CIP,

You say inflation "does an excellent job of explaining..."

The problem is that inflation is not one single theory. It is more a framework for theories.

We need a *particular* inflationary theory (the inflationists gotta choose just one and stick with it!) that gives clean, unambiguous predictions about a variety of apparently unrelated phenomena that can be precisely confirmed by observation.

That, after all, is what Newtonian mechanics plus Newton's law of gravitation did. Or Maxwellian electrodynamics, or plate tectonics.

Once you know all the phenomena that are very precisely shown to be explained by, say, plate tectonics (paleomagnetic bands on the seafloor, the actual current measurements that show continents to be separating, the observations at mid-oceanic ridges, the seismologic observations at subduction zones, and an enormous amount more), you would have to be pretty dense to doubt that plate tectonics is true.

I myself lived through the plate tectonics revolution. When I first started learning about geology in the early '60s, I read about Wegener's continental drift and found it an interesting speculation. When I took geology from Gene Shoemaker in the fall of 1973 and learned all the evidence for plate tectonics... well, to call plate tectonics just speculation would have been eccentric. It was a fact. (Plate tectonics was not the same as Wegener's speculations, by the way.)

Inflationary theories (and the plural here is important!) are, at best, at the stage of Wegener's speculations about moving continents. Maybe the inflationists will eventually narrow it down to one single theory and show that this theory is indubitably true. I hope so: as I said above, I like the inflationary approach aesthetically, just as I liked Wegener's speculations.

Maybe. Someday. In the future. But not yet.

Dave

It seems to me that inflation explains several things that otherwise require individual explanations or postulations, similarly to the way universal gravitation explains all those ellipses. Inflation explains isotropy, uniformity (horizon), flatness, monopole suppression and the initial density fluctuation spectrum. Otherwise, you need to specify these seemingly diverse parameter values individually. Not only that, but inflation seems to be motivated by a partially understood fundamental process, quantum field theoretic symmetry breaking.

Of course you are free to prefer different theories or specifications for all these things, but I can't think of any time in the history of science where this has proven a winning strategy.

CIP,

Indeed, one could try to argue that inflation is the better explanation for the combined observations because in that case it serves as a simplification. I would consider this a good argument, if it is supported by data. But this is not how finetuning arguments work, and I am not aware of anyone who has actually quantified the claim you make.

My writing above on inflation stopped at the question of fine tuning. The fine tuning problem is in one sense not that mysterious. What would be mysterious is if we lived in a universe that forbade our existence. Schopenhauer aside, who in his misanthropic sense of things argued such, we do live in a universe that permits the occurrence of organic molecules and large molecular biological systems.

Cosmology is a different sort of science, for so far we only have one subject to look at. We also, in spite of monikers like multiverse etc, have ultimately this idea of “one existential system” we call the universe. In the setting of an inflationary dS manifold we can think of this with all its pocket worlds as “the universe.” So cosmology is about trying to find the entirety of physics in a single global ontology or epistemology, take your choice in quantum mechanics, that connects to local physics. Fine tuning is then a problem to trying to connect this sort of physics of global cosmology to local physics of interaction we study in a lab.

The holographic principle of black holes indicates that any system that approaches a black hole becomes less localized as seen by an asymptotic observer. The optical lensing of spacetime spreads any wave function or for that matter a local field amplitude across the near horizon region. Quantum field theory with its assumptions of Wightman conditions to remove quantum nonlocality may no longer be applicable. These were imposed in part to remove nonlocal quantum physics, which in high energy is on a very small scale from the physics one observes with detectors on a larger scale.

The best thing to come out of superstring theory is Maldacena's correspondence between the anti-de Sitter spacetime of dimension N and the conformal field theory on the boundary in N - 1 dimensions. This gives me a sense that superstring theory has maybe far less to do with TeV scale physics and a lot more to do with quantum cosmology. In effect this connects a global physics of cosmology in the bulk of an AdS spacetime with the local conformal field theory on the boundary with one dimension less. This is a quantum spacetime version of the Gauss-Bonnet theorem! If one expands the AdS action S = ∫d^4x\sqrt{-g}R with R_{abcd}R^{abcd} as instantons and dual terms you get the Euler and Hirzebruch characteristics. Then in the AdS/CFT correspondence the difference between the topological numbers from quantum gravity in the nonlocal AdS bulk and the local topological numbers on the boundary is zero. Fantastic, if you think about it!

The connection between locality and nonlocality defines both the dS and AdS spacetimes. The AdS spacetime is one part of a hyperboloid of two sheets, and the dS a hyperboloid of one sheet.

http://www.network-graphics.com/images/math/hyper_parts_m.jpg

In the momentum-energy representation these meet at I^± in momentum-energy spacetime at the Planck scale. So the dS spacetime is a sort of patching of two AdS's with the transition to positive Λ, which in turn has two causal regions. Hence a holographic screen with a positive junction in AdS_n will contain a dS_{n-1}. Since these all connect to the physics of the boundary CFT, I think this may constrain the physics. I could write a lot more on this, and this has analogues to edge states and gaps in solid state physics. This might address matters of fine tuning.

Particle physics has been formulated with the idea of localizing physical structures and identifying what we call particles as the smallest possible localized elements. This has a long history where elementary units at one time turn out to be composite structures later; given some nod to Anaxagoras' ancient idea. Yet as modern theory indicates the understanding of elementary particles as localized elements requires understanding spacetime physics. The holographic principle, the AdS ~ CFT and the nonlocality of quantum gravity dual to locality of field theory means we are really dealing with a new meaning to terms such as local and nonlocal.

Lawrence, I am pretty sure that Sabine will not want a detailed discussion of your theory here!

But, if I can ask just one question: You said, "The holographic principle of black holes indicates that any system that approaches a black hole becomes less localized as seen by an asymptotic observer. The optical lensing of spacetime spreads any wave function or for that matter a local field amplitude across the near horizon region." Suppose you have a super-super purely classical black hole with a huge radius. Everything that happens as you go through the event horizon is then rather gentle (e.g., tidal forces). (The reason I specified "purely classical" is to ignore questions such as the "firewall" issue: just a plain old MTW, Hawking and Ellis, Schwarzschild-solution black hole.)

Are you sure that everything you said is true then? Rather than give us all a detailed lecture here, perhaps you could give a link to a place where this is dealt with carefully.

I'm skeptical of what you've said, but willing to admit I might be missing something.

Dave

As an undergraduate I started reading a text on quantum field theory. I was rather dumbstruck by this. I had taken a course in special relativity and introduction to general relativity as well as quantum mechanics. I was filled with the Einsteinian concept that space and time were transformed into each other and really were not that fundamentally different. Then upon reading QFT I was gobsmacked with the fact QFT treated space and time very differently. QFT was still relativistic, but how quantum fields were treated on spatial manifolds was different than those in a time-ordered sense inside the light cone. Then in graduate school this was all presented again. Now of course timelike and spacelike curves can't be transformed into each other by Lorentz boosts, but it seemed as if the nonlocal aspects of quantum physics, which has no preference for space or time, were broken by relativity in a way that seemed unnatural. Einsteinian equivalence of space and time ran amok with quantum physics.

Of course this invokes naturalness that Sabine is trying to abolish, but ... .

This was done by imposing the condition that field amplitudes, which are operators that act on a Fock space basis to construct quantum states and observables, on a spatial surface and outside the light cone have zero commutator. The commutator is only nonzero for amplitudes with a timelike separation. Now there is clearly a difference, for fields separated by timelike direction can be causally connected. We do all sorts of fancy stuff with time ordered products in path integrals and products of variations of the sort d/dj on a path integral so the jA source term defines such products. Nonlocal physics is completely swept under the rug, and this is one reason the commutators outside the light cone are set to zero. Further, these nonlocal quantum connections at high energy interactions become scrambled at larger scales with the decay of daughter particles and out to the detectors. The detectors on the LHC are many meters in length and far above the scale where measurable nonlocality would exist at around the 10^{-17}cm scale. This avoids as well nonlocal connections of fields muddling up the causal time ordering of fields. So quantum mechanics is in a sense neutered to derive QFT. However, for high energy physics this works very well --- until the black hole enters the picture.

I find Lubos Motl to be amusing when he says any talk of nonlocality is done by "anti-quantum zealots." He clearly has drunk the QFT Kool-Aid a bit too deeply. Quantum nonlocality is performed at very low energy with photons. Massless particles are conformally invariant, they have no Compton wavelength that defines an absolute scale of length or 1/L as a mass does, and this means one can get nonlocal physics on a lab scale. We don't do measurements of nonlocality at the MeV through TeV scale. --- More below, as I exceeded the 4096 limit

Continued ---- A black hole does something odd. The g_{00} = 1 - 2m/r metric component is, at least for weak gravity, a bit like an index of refraction. Take a wine glass with a stem, cut off the cup part and you are left with the base and stem. Now think of this as a lens you place over pictures or look through. The regions of the scene you observe the stem is centered over are spread around in a ring of sorts. Things such as the Einstein rings observed by weak lensing by Elliptical galaxies are cases of this. For strong lensing things get a bit strange with multiple passes of photons, but I will ignore that now. As a result quantum particles near the horizon of a black hole will exhibit physics that is time dilated, appearing very slowed down, and spread out so the transverse direction to the radius is very large. The convenience of QFT in ignoring nonlocality is lost. This gets us to the matter of quantum gravitation as a field theory with the metric or curvature as the field that defines the light cone and in QFT the light cone sets these field amplitude conditions. Quantum gravitation is then very nonlocal.

This is a part of my thesis on nonlocality of quantum gravitation at the UV scale as dual to the local physics of quantum fields at the IR scale. The AdS ~ CFT physics suggests much the same. I think this is an aspect of how quantum physics and gravitation are identical, with spacetime built up from quantum entanglements. Thus quantum entanglement and its quotient-space constructions have a duality to quotient constructions of gauge moduli spaces and the "mod-diffeos" as used in Polyakov path integrals and the rest.

There’s a video in which KThorne and possibly LSuskind et al. explore this very thing. To paraphrase from my jumbled recollection, LC’s “asymptotic observer” is Bob, and as you say the “huge radius” is to afford Alice a gentle fall per “the FLRW metric.” According to LS’s crystal ball (beta model: AdS/CFT(FLRW(GR))), Bob, who stays far away in flat space (to satisfy ADM safety protocols), over epochal time sees Alice smeared onto the horizon. So, to investigate this apparent violation of Einstein’s equivalence principle (alpha inviolate), KT built, to spite Hawking, as maximally complex a numerical simulation as could be mustered (let’s call it gamma trial; I believe it also stars in a Hollywood movie) in order to—to predict the unpredictable—report on intrepid Alice’s abridged version of events during her fall to Hades, to wit: she indeed passes through the horizon unnoticed and only at some depth do things get dicey (which certainly has to do with Heisenberg’s uncertainty, or the particle/wave duality—which are likely the same, and probably because in holography the existential smear of Alice that Bob observed has one less degree of freedom than Alice’s experiential reality—ostensibly, and maybe a bit too with Pauli’s exclusion—just in case) as space becomes mass-like and vice versa and her forward view gets lensed all around. Now the model collective is not yet simple enough (nor comprehensible enough for that matter) to be wholly true, i.e. to accurately reflect omega’s unique design; nevertheless that specific finding is rather apt in that it supports LC’s assertion of ambiguous locality. © And by way of more recent evidence there’s this: https://www.quantamagazine.org/how-our-universe-could-emerge-as-a-hologram-20190221/

Lawrence,

You wrote, "This gets us to the matter of quantum gravitation as a field theory with the metric or curvature as the field that defines the light cone and in QFT the light cone sets these field amplitude conditions. Quantum gravitation is then very nonlocal."

So, what happens to the light cone in quantum gravity? Does it get smeared out and thereby wreck locality? I don't know; I suppose no one knows.

Sabine has been ruminating on such issues: I would like to know her thoughts.

You also wrote, "As a result quantum particles near the horizon of a black hole will exhibit physics that is time dilated, appearing very slowed down, and spread out so the transverse direction to the radius is very large."

Well, Kruskal-Szekeres coordinates make things look more normal at the horizon. But, then we get into the firewall, etc.

One general comment: there are two different meanings of "locality" in these discussions. One is the sort of locality involved in Bell's theorem; the other is the locality that requires spacelike (anti)commutators to be zero. Apples and oranges. Standard QFT is certainly "non-local" in Bell's sense and certainly "local" in the sense of the spacelike (anti)commutators. I've seen truly brilliant physicists confuse these two meanings and thereby tie themselves up in knots.

Dave

PhysicistDave,

It's an interesting question that Ford studied 25 years ago. The paper is here. Somewhat depressingly, it turns out the effect is far too small to be observable. There are other types of space-time fluctuations that produce larger effects, but those are not as well motivated.

Interesting to see the report on the work by Xi Dong and Eva Silverstein. This is essentially what I am saying. They do not seem to go further, as I do, with symmetry-protected topological states and the like. Dang, that is one problem with this: if you really get a good idea, you most often know it is a good idea because somebody else does it too.

The holographic screen reduces a dimension and time dilates these frames. I can set this up here. The invariant momentum-energy interval m^2 = E^2 - p^2, where I set the speed of light c = 1, lets me write

E = sqrt{p_r^2 + sum_i p_i^2 + m^2}.

Here I separate out the radial momentum p_r from the others. This momentum is boosted by p_r → e^{t/4M} p_r and so becomes huge. So I can pull that out:

E = p_r sqrt{1 + [sum_i p_i^2 + m^2]/p_r^2} ≈ p_r + ½[sum_i p_i^2 + m^2]/p_r.

The last line is due to the binomial theorem. Now I get

(E - p_r)p_r = ½ sum_ip_i^2 + ½m^2.

The p_r is Lorentz boosted with the e^{t/4M}, so the left-hand side is essentially a boosted energy E'. The right-hand side is just a classical nonrelativistic energy or Hamiltonian. The dynamics on the holographic screen is then a sort of Newtonian physics.
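If it helps, the binomial-expansion step above is easy to check numerically. This is just a sketch with illustrative numbers of my own choosing (units with c = 1): for large p_r, the exact (E - p_r)p_r should converge to ½(sum_i p_i^2 + m^2).

```python
import math

# Check the binomial expansion numerically: for large radial momentum p_r,
# (E - p_r) * p_r approaches (sum_i p_i^2 + m^2) / 2.
# Units: c = 1; the values below are arbitrary illustrative numbers.
m = 1.0          # rest mass
p_perp2 = 4.0    # sum of squared transverse momenta, sum_i p_i^2

for p_r in [10.0, 100.0, 1000.0]:
    E = math.sqrt(p_r**2 + p_perp2 + m**2)
    exact = (E - p_r) * p_r
    approx = 0.5 * (p_perp2 + m**2)
    print(p_r, exact, approx)
```

As p_r grows (the e^{t/4M} boost), the exact and approximate values agree to higher and higher accuracy.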

I wrote the following on Physics Stack Exchange

https://physics.stackexchange.com/questions/257476/how-did-the-universe-shift-from-dark-matter-dominated-to-dark-energy-dominate/257542#257542

which I think I have quoted before on this blog. Anyway, it illustrates how the FLRW metric for the accelerated cosmos can be derived with just Newtonian mechanics. There is the loss of the k/a^2 term that comes with general relativity, but k = 0 if space is really flat. This seems to reflect how our universe is some set of fields on a holographic screen with a de Sitter metric. In fact this screen would be in a 5-dimensional space, such as an anti-de Sitter spacetime AdS_5. This holographic screen could be due to a Lanczos junction with positive energy bounding a causal wedge in AdS_5. A negative junction would produce AdS_4, and the dS_4 and AdS_4 would be related to each other in a way similar to what I mention and what Xi Dong and Eva Silverstein have examined.
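The Newtonian energy equation along these lines can be integrated in a few lines. This is a toy sketch with parameters of my own choosing (Omega_m = 0.3, Omega_Λ = 0.7, units with H0 = 1), not taken from the Stack Exchange answer; it just shows the transition from matter-dominated deceleration to dark-energy acceleration in the flat (k = 0) case.

```python
import math

# Toy sketch (my own parameters): the Newtonian energy equation for a
# test shell, (da/dt)^2 = Om/a + OL*a^2 in units H0 = 1, is the flat
# (k = 0) Friedmann equation. Integrate it forward with Euler steps and
# find where the expansion switches from decelerating to accelerating.
Om, OL = 0.3, 0.7
a, dt = 1e-3, 1e-4
prev_adot, a_acc = None, None
while a < 3.0:
    adot = math.sqrt(Om / a + OL * a**2)
    if prev_adot is not None and adot > prev_adot and a_acc is None:
        a_acc = a    # scale factor at which acceleration begins
    prev_adot = adot
    a += adot * dt
print("acceleration begins at a ~", a_acc)
print("analytic value (Om/(2*OL))^(1/3) =", (Om / (2 * OL)) ** (1 / 3))
```

The numerical crossover matches the analytic condition d/da(Om/a + OL a^2) = 0, i.e. a^3 = Om/(2 OL).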

You are right in part in saying that locality in QFT is different from the Bell type of locality with his inequalities, but it is not really apples and oranges. It might better be oranges and grapefruits. When we do P = p + iA to make the momentum covariant, the commutator of the gauge potentials is

[A^a_μ, A^b_ν] = C^{ab}_c ε_{μν}^σ A^c_σ.

The field tensor is F_{μν} = ∂_μA_ν - ∂_νA_μ + g[A_μ, A_ν], where I have suppressed the internal space indices. Now consider the Aharonov-Bohm effect. This is a quantum phase shift that occurs when a solenoid obstructs the path of a charged quantum particle. The particle exhibits a phase ψ → ψ e^{i∮A·dx}. Now Stokes' rule says

∮A·dx = ∫∫ F_{ij}d(area)^{ij} = ∫∫B·da

where this magnetic field is the element in the field tensor. For a magnetic monopole, with the solenoid having a tail, we require that the field be such that (g/ħ)∮A·dx = 2πn, for n = 0, 1, 2, …, with g the "gluon charge" or color. In this way the Dirac monopole tail has no contributing physics. The area is the opening area of the solenoid, and this determines a magnetic monopole charge μ, with (g/ħ)∫∫B·da = gμ/ħ. With the Dirac monopole this gives a Bohr-Sommerfeld quantization.

The Aharonov-Bohm effect is a case of nonlocal physics, where a magnetic field confined in a solenoid induces an interference or phase shift of a wave function. Now assume we have a flat gauge potential, ∂_μA_ν = 0, so the magnetic field is due to the commutator. Stokes' rule then implies that a gauge potential evaluated along points of a loop is equal to a commutator of that gauge potential with a gauge potential evaluated on all other points that sweep out the area. If we assume the Wightman conditions, all of this is problematic. Yet the Aharonov-Bohm effect is experimentally known. I think this means there is nonlocal physics with gauge fields at spacelike separation.
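The Stokes relation behind the Aharonov-Bohm phase can be illustrated numerically. This is a small sketch with toy numbers of my own (units with ħ = charge = 1): outside an ideal solenoid of flux Φ the potential is A_φ = Φ/(2πr), and the loop integral ∮A·dx around any path enclosing the solenoid returns Φ even though B = 0 everywhere on the path — the nonlocal character of the phase in a nutshell.

```python
import math

# Numerical check: outside an ideal solenoid of flux Phi, A_phi = Phi/(2*pi*r).
# Integrate A·dx around a closed loop enclosing the solenoid; the result is
# Phi even though the magnetic field vanishes on the loop itself.
# (Toy numbers of my own choosing; units with hbar = charge = 1.)
Phi = 2.5
N = 100000
loop_integral = 0.0
for k in range(N):
    theta = 2 * math.pi * k / N
    r = 3.0 + math.cos(5 * theta)        # a wobbly, non-circular loop
    # A·dx along the loop is A_phi * r * dtheta; the radial part of dx
    # contributes nothing because A has no radial component.
    loop_integral += (Phi / (2 * math.pi * r)) * r * (2 * math.pi / N)
print(loop_integral)
```

The quantum phase picked up by the particle is then exp(i ∮A·dx), independent of the loop's shape.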

What happens to the light cone? Bee has discussed quantization on the large here. Roger Penrose wrote about this in his Road to Reality as well. With Penrose, I don't think his R-process is fundamental, but rather a sort of effective theory if one ignores conservation of quantum information. He makes some arguments for possible experiments on the superposition of large masses and putative spacetime physics.

One way to look at the issue of quantum gravity and the big obstruction we face is that quantum information is nice in that it fundamentally has no natural units. This means it has no imposition of a length scale. Gravitation, on the other hand, has the Planck scale. The Planck scale really tells us there is a cut-off scale for locating a quantum bit. This is the scale where a black hole radius r = 2GM/c^2 is equal to the Compton wavelength of the black hole, λ = ħ/Mc. Just equate 2λ = r, where the 2 is from a Nyquist requirement, to get M = sqrt{ħc/G}. Use the Heisenberg uncertainty ΔEΔt ≈ ħ to get the Planck time, and then the Planck length ℓ_p = sqrt{Għ/c^3}, and find this tiny distance ℓ_p = 1.6×10^{-33} cm. This is odd in a way, for spacetime physics, particularly if we are to think of matter and fields as derived from spacetime, should be conformally invariant. That would mean there is no scale where the physics is different, yet masses with their Compton wavelengths do set scales where the physics is very different. We have this contradiction of sorts! Strangely, I have not seen anyone make a fuss over this. So something is indeed odd here.
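The back-of-envelope derivation above is easy to reproduce with CODATA values of the constants. A quick sketch:

```python
import math

# Reproduce the back-of-envelope Planck-scale estimate: setting twice the
# Compton wavelength equal to the Schwarzschild radius,
#   2*hbar/(M*c) = 2*G*M/c^2  =>  M = sqrt(hbar*c/G),
# and then l_p = sqrt(G*hbar/c^3). CODATA values in SI units:
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2

M_planck = math.sqrt(hbar * c / G)
l_planck = math.sqrt(G * hbar / c**3)
print("Planck mass:", M_planck, "kg")
print("Planck length:", l_planck * 100, "cm")
```

This indeed gives the tiny ℓ_p ≈ 1.6×10^{-33} cm quoted above, along with a Planck mass of about 2.2×10^{-8} kg.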

Interconnectedness is “certainly” interconnectedness—regardless of UV or IR regime, decay rates notwithstanding. Just ask Bohr, who’s not burdened with having to forget QCD(QFT(QED)). (If unavailable, LS might do).

Perhaps one cannot escape fine-tuning. There is the fine-tuning of the parameters of a single theory (and one may want theories where this fine-tuning is minimized), but there is also the higher-level "fine-tuning" in the construction of a theory itself from a language that can generate a large collection of theories. The latter seems to be the case in fundamental physics, leading to lots of theories with no resolution.

[ Sorry for philosophical interruption. :) ]

I'm surprised that you don't like Bayesian methods; there is nothing in principle stopping you from putting a prior on the models themselves. You could do some sort of hierarchical modelling where the models are defined in terms of hyperparameters. Of course, to do that you would need a measure on that space and then a parameterization, which is of course a very hard problem. But that's not in principle a problem for Bayesian inference. Plus, the alternative is frequentist methods, which just ignore priors altogether.
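As a toy illustration of putting a prior over models themselves (entirely my own construction, not anything from the post): compare two models for a coin, M0 saying it is fair and M1 giving the bias a uniform prior. With equal prior odds, the posterior odds are the Bayes factor, i.e. the ratio of marginal likelihoods.

```python
import math

# Toy Bayesian model comparison: M0 fixes p = 0.5, M1 puts a uniform
# prior on p. The marginal likelihood of M1 integrates the binomial
# likelihood over that prior; the Bayes factor is the ratio.
heads, tosses = 14, 20

def binom_lik(p):
    return math.comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

evidence_M0 = binom_lik(0.5)
# Marginal likelihood of M1 via midpoint-rule integration over the prior
N = 10000
evidence_M1 = sum(binom_lik((k + 0.5) / N) for k in range(N)) / N
print("Bayes factor M1/M0:", evidence_M1 / evidence_M0)
```

The hard problem flagged in the comment is exactly the choice of that prior and measure; here the uniform prior is an arbitrary stand-in.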

It may well turn out that cosmic inflation indeed took place. But at this point the impetus for the theory probably has as much to do with metaphysics as physics - the motivation being an attempt to avoid the implications (including ID) of what appears to be an outrageously fine-tuned universe.

Let us look at this posit:

"The universe hasn’t always been this way. That the cosmos as a whole evolves, rather than being eternally unchanging, is without doubt one of the most remarkable scientific insights of the past century. It follows from Einstein’s theory of general relativity: Einstein’s theory tells us that the universe must expand. As a consequence, in the early universe matter must have been compressed to high density."

First, certain attributes of the universe are fundamental and others are emergent. The fundamental attributes have been the same and unchanging from the beginning of the universe. The emergent properties will change over time to maintain the fundamental properties as unchanging.

It gives me a warm feeling to think of the universe as flat with a constant energy density. Energy can be positive or negative, but the sum of the positive and negative energy must always add up to zero.

The density of matter and energy in the universe is fundamental and will self-adjust, as emergent properties self-adjust, to maintain the fundamental properties at a constant value, and that constant value is zero.

I think the hangup in thinking about this subject is the assumption that all matter and energy must have existed from the very first instants of the universe's existence. This is a crazy idea. It is more likely that there is a process that has always existed, and is still operating to this very day, that can create matter and energy from the vacuum.

Sometimes conditions in the universe exist where lots of matter and energy are derived from the vacuum, and the size of the universe expands apace to keep the density of matter/energy constant.

And sometimes conditions are such that only a small amount of energy and matter is created, and the universe does not inflate so much in response.

The posit that "As a consequence, in the early universe matter must have been compressed to high density" may not be true if we recognize that matter and energy can be created at any time and at any place in the universe. The size of the universe, as an emergent property, will automatically adjust to keep the fundamental properties constant: the density of matter and energy, and the flatness of the universe.

I read Alan Guth's book a couple of years ago, and from that I remember that he wanted to solve the monopole problem, not the fine-tuning problem, which came out as an extra benefit.

That's right. But as I have explained in this earlier blogpost, magnetic monopoles may simply not exist, so this is a weak motivation. (In fact it is a motivation that I find is rarely mentioned these days.)

Thanks for the link to the blog post. Nice and clear. What I don't understand is your criticism of the inflaton field as an ad-hoc construction which is unnecessary. To me it seems to be a very important link between the early universe and the creation of the elementary particles using standard field theories. I am the amateur here, but it would be interesting to hear your view on this.

Some people are looking for monopoles in an unorthodox way :-)

https://cerncourier.com/cms-beam-pipe-to-be-mined-for-monopoles/

"Einstein’s theory tells us that the universe must expand." Technical point: No, it doesn't. In fact, the first relativistic cosmological model, by Einstein himself, was static. Yes, it is unstable, as pointed out by Eddington. Of course, the ideal model can be unstable and still static, because there is no way to perturb it, but it is true that it doesn't work as an approximation, because an approximation to the Einstein model can be perturbed. (Interestingly, the Einstein-de Sitter model is also an unstable fixed point in phase space, but as far as I know no one has used this as an argument against it.)

The universe could also contract. The fact that it is expanding is not due to the laws of GR, but to the initial conditions.
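The instability of the Einstein static model mentioned above can be made concrete with a toy integration. This is a sketch under my own normalization (units with 4πG = c = 1), keeping only the dust acceleration equation a'' = a(Λ - ρ)/3 with ρ ∝ a^{-3}: the static solution has Λ = ρ, and a tiny nudge to the scale factor runs away.

```python
# Toy demonstration that the Einstein static universe is unstable.
# Units with 4*pi*G = c = 1 (my normalization). For dust,
#   a'' = a * (Lambda - rho) / 3,  rho = Lambda / a**3  (a_E = 1),
# so a = 1 is a static solution; perturb it by 0.1% and integrate.
Lam = 1.0                      # = rho at the static equilibrium
a, v, dt = 1.001, 0.0, 1e-3    # small perturbation, initially at rest
for _ in range(20000):
    rho = Lam / a**3
    acc = a * (Lam - rho) / 3.0
    v += acc * dt
    a += v * dt
print("scale factor grew from 1.001 to:", a)
```

The perturbation grows exponentially, as Eddington showed; a perturbation toward smaller a would instead drive collapse, consistent with the point that expansion versus contraction is set by initial conditions, not by the laws of GR.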

Phillip,

I think we discussed this before. It is correct that this statement depends on the choice of constants and initial values. But if you know that, then you don't need the extra explanation, so it's superfluous to mention.

"This is most obvious when it comes to the so-called “curvature problem,” or the question why the universe today is spatially almost flat. You can get this outcome without inflation, but it requires you to start with an exponentially small value of the curvature already (curvature density, to be precise). If you only look at the initial conditions, then that strongly favors inflation."

What you are doing here is succumbing to a fine-tuning argument, since the exponentially small value is actually not disfavoured (though wrong fine-tuning arguments say that it is disfavoured). Much more on this can be found in Marc Holman's review of this topic (free to read on arXiv; now published in Foundations of Physics).

Phillip,

I believe we have discussed this before. Let me therefore just say that this refers to a statement you find in the paper which I am summarizing.

Sabine, you wrote:

"But of course inflation works by postulating an exponential suppression that comes from the dynamical law. And not only this, it furthermore introduces a field which is strictly speaking unnecessary to get the exponential expansion."

Did you mean to write "suppression" rather than "expansion" as the final word here? Otherwise you confuse me greatly. In the GR framework, don't you need something like a field to get the negative equation of state that drives the exponential expansion? And it can't be a vanilla cosmological constant if the exponential phase ends. Of course, if you abandon GR you can posit an arbitrary early history for the universe, but the motive for inflation disappears because the whole cosmological story depends on assuming that GR is valid. E.g. rather than exponential expansion you could arbitrarily halt the expansion at some point in the optically-thick phase to allow thermalisation of the visible universe.

Hi Paddy,

Sorry for being unclear. I did mean expansion. Let me explain: your statement assumes the validity of GR to begin with, which may not be correct. It may be a modification of gravity, it may be a completely different theory, the source may not be a field, it may be some emergent something, and so on.

Let me add that I am personally happy to think of inflation as an effective description that models a large variety of cases, I just think one has to be careful stating what one can reliably infer from the data. Is the field really necessary? If you only look at the value of the curvature (density), then no.

Is there any progress on understanding dark energy? The last I heard, they were questioning the underlying data from Perlmutter and Riess...

ReplyDeleteSabine,

Not sure you understand my point. Sure, throw out GR if you like, but the only reason you thought an exponential expansion was needed was by using GR to extrapolate the expansion backwards in time far earlier than is supported by any actual evidence. (High-quality evidence supporting GR as a good approximation does not exist for epochs prior to recombination; poor-quality evidence in the form of He nucleosynthesis gets you back to a few seconds.) If you think you know anything about earlier epochs, you are relying on GR.

Hi Paddy,

Sorry, I think you are right: I don't understand your point. I agree with what you say, in that if the only thing you care about is consistency with observation, then you could stop the time evolution according to GR & SM at some density higher than what we have tested already, continue it with whatever else you want, and just put in an initial condition that works.

But in general this will not be a simplification. I commented on this previously here. If you allow yourself to posit arbitrary initial conditions and arbitrary laws that can be as complicated as you wish, you are really not doing science any more. (The example I use is creationism.)

Now, the simplest continuation of laws that you can think of is just using the laws that we already have evidence for. Alas, this is *not* what inflation does. Inflation uses a different law, based on the introduction of a new field. So you need to explain how this addition is justified. What I am saying is that the argument from fine-tuning doesn't hold when it comes to the curvature parameter, because you could trade off the complication in the dynamical law against a complication of the initial condition. One therefore needs a different kind of reasoning.

Please let me know if you disagree with any of that, then maybe we can move forward from that.

No problems with any of this. For what it's worth, I am moderately sceptical about current cosmological theory, but I think less so than you. That is, I think the evidence in favour of GR is indeed good enough that it is sensible to talk about the early universe (we agree on that, perhaps). I absolutely agree that the inflationary explanation for flatness is more complicated (per Roger Penrose) than just positing a flat universe ab initio. Where I disagree with you is that if you are going to accept the GR-motivated logic for early exponential expansion, I think you have to accept the GR inference that there is an actual quantum field. Given the evidence for dark matter, it's clear that the combination (GR + standard model) fails, and I would put money (but not my house) on there being new physics not included in the standard model. In which case, why not an inflaton as well?

The main reason I think inflation is worth testing is its explanation for structure formation (which requires a quantum field and not just exponential expansion), which has at least a small amount of predictive success (a power spectrum remarkably close to scale-free). It is a fair criticism that the available test (i.e. B-mode polarisation) is not make-or-break: a convincing detection would be very strong evidence pro (and would also fix the energy scale), but a non-detection would just rule out some classes of model. On the other hand, there is not an infinite sequence of B-mode experiments on offer, because within a decade or two we will hit unavoidable measurement limits (lensing foreground).

Paddy,

You can do exponential expansion with a constant. Not only is this not a quantum field, it's not a field. But my point is a much weaker one than what you seem to think. I am merely saying that it's an unnecessary assumption and hence should not be made. It is sufficient to say "there was a phase of almost exponential expansion and it ended". And scientists should not make unnecessary assumptions.

Yes, I do think it's good to look for B-modes.