## Wednesday, August 28, 2019

### Solutions to the black hole information paradox

In the early 1970s, Stephen Hawking discovered that black holes can emit radiation. This radiation allows black holes to lose mass and, eventually, to entirely evaporate. This process seems to destroy all the information that is contained in the black hole and therefore contradicts what we know about the laws of nature. This contradiction is what we call the black hole information paradox.

After discovering this problem more than 40 years ago, Hawking spent the rest of his life trying to solve it. He passed away last year, but the problem is still alive and there is no resolution in sight.

Today, I want to tell you what solutions physicists have so far proposed for the black hole information loss problem. If you want to know more about what exactly the problem is, please read my previous blogpost.

There are hundreds of proposed solutions to the information loss problem; I can't possibly list them all here. But I want to tell you about the five most plausible ones.

1. Remnants.

The calculation that Hawking did to obtain the properties of the black hole radiation makes use of general relativity. But we know that general relativity is only approximately correct. It eventually has to be replaced by a more fundamental theory, which is quantum gravity. The effects of quantum gravity are not relevant near the horizon of large black holes, which is why the approximation that Hawking made is good. But it breaks down eventually, when the black hole has shrunk to a very small size. Then, the space-time curvature at the horizon becomes very strong and quantum gravity must be taken into account.

Now, if quantum gravity becomes important, we really do not know what will happen because we don’t have a theory for quantum gravity. In particular we have no reason to think that the black hole will entirely evaporate to begin with. This opens the possibility that a small remainder is left behind which just sits there forever. Such a black hole remnant could keep all the information about what formed the black hole, and no contradiction results.

2. Information comes out very late.

Instead of simply ceasing to evaporate when quantum gravity becomes relevant, the black hole could also begin to leak information in that final phase. Some estimates indicate that this leakage would take a very long time, which is why this solution is also known as a “quasi-stable remnant”. However, it is not entirely clear just how long it would take; after all, we don’t have a theory of quantum gravity. This second option removes the contradiction for the same reason as the first.

3. Information comes out early.

The first two scenarios are very conservative in that they postulate new effects will appear only when we know that our theories break down. A more speculative idea is that quantum gravity plays a much larger role near the horizon and the radiation carries information all along, it’s just that Hawking’s calculation doesn’t capture it.

Many physicists prefer this solution over the first two for the following reason. Black holes do not only have a temperature, they also have an entropy, called the Bekenstein-Hawking entropy. This entropy is proportional to the area of the black hole. It is often interpreted as counting the number of possible states that the black hole geometry can have in a theory of quantum gravity.
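
For reference, the Bekenstein-Hawking entropy is the standard formula (not specific to this post) relating entropy to the horizon area A, with the Planck area setting the scale:

```latex
% Bekenstein-Hawking entropy: proportional to the horizon area A,
% not the enclosed volume, measured in units of the Planck area.
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 G \hbar}
             \;=\; \frac{k_B\, A}{4\,\ell_P^2},
\qquad \ell_P^2 \equiv \frac{G\hbar}{c^3}
```

That the entropy scales with area rather than volume is exactly what makes the holographic interpretation below attractive.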

If that is so, then the entropy must shrink when the black hole shrinks, and this is not the case for either the remnant or the quasi-stable remnant.

So, if you want to interpret the black hole entropy in terms of microscopic states, then the information must begin to come out early, when the black hole is still large. This solution is supported by the idea that we live in a holographic universe, which is currently popular, especially among string theorists.

4. Information is just lost.

Black hole evaporation, it seems, is irreversible, and that irreversibility is inconsistent with the dynamical law of quantum theory. But quantum theory does have its own irreversible process, which is the measurement. So, some physicists argue that we should just accept that black hole evaporation is irreversible and destroys information, not unlike quantum measurements do. This option is not particularly popular because it is hard to include an additional irreversible process into quantum theory without spoiling conservation laws.

5. Black holes don’t exist.

Finally, some physicists have tried to argue that black holes are never created in the first place in which case no information can get lost in them. To make this work, one has to find a way to prevent a distribution of matter from collapsing to a size that is below its Schwarzschild radius. But since the formation of a black hole horizon can happen at arbitrarily small matter densities, this requires that one invents some new physics which violates the equivalence principle, and that is the key principle underlying Einstein’s theory of general relativity. This option is a logical possibility, but for most physicists, it’s asking for too much.
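
The claim that a horizon can form at arbitrarily small matter densities is easy to check numerically: the Schwarzschild radius grows linearly with mass, so the mean density enclosed by the horizon falls as 1/M². Here is a minimal sketch with standard constants (the particular masses are just illustrative):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(m):
    """r_s = 2GM/c^2."""
    return 2 * G * m / c**2

def mean_density(m):
    """Mass divided by the volume enclosed by the horizon; scales as 1/M^2."""
    r = schwarzschild_radius(m)
    return m / (4 / 3 * math.pi * r**3)

print(schwarzschild_radius(M_sun))   # ~3 km for one solar mass
print(mean_density(M_sun))           # ~2e19 kg/m^3, well above nuclear density
print(mean_density(1e8 * M_sun))     # ~2e3 kg/m^3, roughly the density of water or rock
```

So a sufficiently heavy distribution of matter, no denser than ordinary rock, can already lie inside its own Schwarzschild radius, which is why preventing horizon formation requires new physics at low curvature.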

Personally, I think that several of the proposed solutions are consistent; that includes options 1-3 above, as well as other proposals such as those by Horowitz and Maldacena, ‘t Hooft, or Maudlin. This means that this is a problem which just cannot be solved by relying on mathematics alone.

Unfortunately, we cannot experimentally test what is happening when black holes evaporate because the temperature of the radiation is much, much too small to be measurable for the astrophysical black holes we know of. And so, I suspect we will be arguing about this for a long, long time.
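
To put a rough number on "much, much too small": the Hawking temperature falls with the black hole's mass as T = ħc³/(8πGMk_B), a standard result. A quick estimate for a solar-mass black hole (the constants are standard approximate values):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.3806e-23    # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(m):
    """T = hbar c^3 / (8 pi G m k_B); heavier black holes are colder."""
    return hbar * c**3 / (8 * math.pi * G * m * k_B)

print(hawking_temperature(M_sun))  # ~6e-8 K
```

That is almost eight orders of magnitude colder than the 2.7 K cosmic microwave background, so the signal is hopelessly swamped for any astrophysical black hole.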

1. "To make this work, one has to find a way to prevent a distribution of matter from collapsing to a size that is below its Schwarzschild radius." -- But wait, when some papers say "Black holes don’t exist" they probably mean not that something has to prevent collapse to a size < R_schw, but that the Schwarzschild metric does not occur in nature in the first place. E.g., if one assumes the existence of a scalar field, then one has the Janis-Newman-Winicour solution which, unlike the Schwarzschild solution, does not have a horizon (thus no information paradox).

1. As I have said before, the problem is not the horizon, it's the singularity. The horizon does not destroy information, it merely makes it irretrievable from the outside. The problem is caused by geodesic incompleteness.

The Schwarzschild metric is static and of course does not occur in nature in the first place. It is also not the metric that Hawking used in his calculation.

2. Sabine, doesn't every wave function collapse destroy information? I mean, before the collapse you have a wave function with several components, each with its own complex amplitude; afterwards you have a wave function which is just one eigenstate.

2. "... and I suspect that we will argue about this problem for a long long time." Dear Sabine, I cannot agree more. Since it seems that it will be like that, I would like to propose stepping back a bit: the proposed solutions to the paradox imply either that we do not know the processes that occur in what we call a black hole, or that these objects do not exist. Would it be possible to speculate on the formation of these entities instead of taking their existence and properties for granted? Yes, there are pictures showing that, very close to them, the distortion of space-time is huge, as GR predicts, but we don't really know if there is a horizon, much less a singularity. What happens at the moment when, at the point of highest density inside a collapsing star, a first unit of Planck volume turns black? When does the horizon appear? How does it evolve? Perhaps by trying to solve these questions we can better focus on how to arrive at a theory of quantum gravity. Maybe we should rethink why talking about what is beyond the cosmic horizon is theology while talking about what is beyond the event horizon is not. By the way, what is the difference between the two? Thank you very much, Sabine.

3. Dear Dr. Bee,

I don't know how the Unruh effect works, so this may be moot. Is it possible that the information loss happens in an approximation of the formulae? Just like 'real' thermal radiation isn't without information, it's just that we basically decide not to care about single-particle information and integrate over lots of them.

How is this with Hawking Radiation? Is there any large-N-approximation that destroys the information? The point with remnants is that you need to calculate microcanonical (as far as I know), but even that doesn't usually care about separate particles, only about conservation of energy.

1. All of this is conceivably possible. You are welcome to write your own paper with your own solution.

4. In solution 5, one can count the change of coordinates that I mentioned in a comment to a previous post ("How we know that Einstein's General Relativity cannot be quite right").
It seems to get rid of *all* singularities of the Schwarzschild solution (horizon + central singularity).
If that is correct, we have no singularity, no black hole and no information paradox.

1. That is a big "if". Note also that the singularity at the horizon is merely a coordinate singularity, like the north pole. Nothing special happens there.

5. I have always asked: how tiny must a black hole be in order to show quantum gravity effects? People would say Planck size, I presume, but the firewall seems to say that problems with quantum gravity corrections appear much earlier. At the horizon?
Information loss in quantum mechanics seems like quite a lot of trouble, and despite that, Hawking himself tried the superscattering operator idea. About modifying quantum mechanics: has someone asked how much quantum mechanics would have to be modified in order to avoid the BHIP? And where does semiclassical gravity break down?

1. You are assuming that the firewall is real.

2. The firewall is an obstruction. Once a black hole has emitted about half its mass as quantum radiation, there is a quantum mechanical inconsistency. Since the lifetime of a black hole is proportional to the cube of its mass, this means the problem happens at 7/8ths of its duration. Given that a stellar-mass black hole exists for 10^{67} years, there are no black holes with this problem now.
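
The 7/8ths figure follows from the cube scaling alone. As a minimal sketch, assuming only that the remaining lifetime scales as M³:

```python
# If the lifetime scales as t ∝ M^3, then a hole whose mass has dropped
# to a fraction f of its initial value has f^3 of the total lifetime
# still ahead of it. At half mass that is (1/2)^3 = 1/8 remaining,
# so the half-mass point sits at 7/8 of the total duration.
def elapsed_fraction(mass_fraction):
    """Fraction of total lifetime already elapsed when the mass has
    dropped to mass_fraction of its initial value."""
    return 1 - mass_fraction**3

print(elapsed_fraction(0.5))  # 0.875, i.e. 7/8
```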

This problem comes about because Hawking radiation is entangled with the black hole. However, by the time half the mass is gone, more of the previously emitted radiation is entangled with more recent radiation, and this is in turn entangled with the black hole. So the "old" Hawking radiation has gone from a bipartite entanglement into a tripartite entanglement. This is not a unitary process.

The firewall is put in place as a way to stop Hawking radiation. However, this violates the equivalence principle, for if you reach this region where the horizon was you cease to exist or something happens that is "unfortunate."

One possible form of a firewall is a completely extremal black hole. The outer and inner horizons merge, and in effect this is a sort of naked singularity with zero temperature. The snag I see with this is that nature would have arrived at zero temperature, which violates the third law of thermodynamics. So any sort of firewall violates the 3rd law of thermodynamics, and this suggests the firewall is only a formal patch we humans impose to "save" QM at the expense of GR and thermodynamics.

6. If one throws a test mass into a black hole there will be a response with gravitational radiation. If that mass is small enough the gravitational radiation emitted should be more in the form of gravitons, or quantized forms of gravitational waves. Conversely, if a black hole emits a quantum of mass-energy there should also be a response in the form of gravitational radiation. Currently a semi-classical fix is done with back reaction, which is to just redefine the metric by hand. The gravitational waves, unless they impart no information onto test masses or the spacetime background, will leave a signature. If so then this is information that ultimately comes from the black hole. I think it is then possible that information apparently lost in a black hole is absorbed into spacetime itself. Spacetime may be emergent from entanglement, and this process would then establish a different entanglement structure. I think in doing this the problem of firewall with bipartite entanglements, say between a black hole and a Hawking photon, apparently shifting to a tripartite entanglement may be removed because the process is tripartite throughout.

If quantum bits or degrees of freedom, say quantum numbers for color or flavor etc, are transformed into BMS charges or entanglement with spacetime, this might conserve information. It would also mean that this process is time reversible. It does not mean things of course are time reversal invariant, unless the universe should turn out to be a torus that permits quanta to re-converge and the past is maybe reconstructed. Your long lost friend who went into a black hole may emerge in part in spacetime entanglements and random Hawking radiation, but this does not mean this person is accessible. In general the universe is going to get colder and darker with cosmic time.

The best thing I can see about these proposed solutions with remnants and the like is that they strike me as wrong. The emission of radiation from a black hole should in some ways be not qualitatively different from the emission of a photon by a hydrogen atom. What does make things difficult is that spacetime is involved. If Bohr had solved the problem of EM field divergence with the classical atom he might have proposed there was some “remnant final state” for the atom that is the ground state. Fortunately Bohr proposed something far more elegant.

7. Dr. Bee, is the crux of the problem based on "pure state" particles entering the black hole but emerging as "mixed state" radiation? I've never really understood the unitarity issue.

8. What do you make of Maudlin's argument that the paradox is entirely conceptual and that no real problem exists?

1. The problem with Maudlin's argument is that if black holes violate unitarity, then Planck-scale interactions should be able to produce virtual black holes, which would also violate unitarity. And this is very hard to reconcile with Maudlin's argument.

9. Dear Sabine,

Your videos keep getting better and better! Always clear and concise explanations - you make it sound simple!

10. There's something I've never understood about the black hole information paradox. As I understand, in the reference frame of an observer outside the event horizon, any object takes infinite time to cross the event horizon. If this is correct, how is information actually lost?

I must be missing something basic here. But what?

11. As a non-scientist I have wondered for some time why there was no mention of the possibility of an evaporating singularity losing enough mass to reenter normal space time before total evaporation. I now see that you and others have not only considered it, but it's at your top spot. Very interesting and thank you for your videos. A+.

12. I have heard of another solution: the suggestion that, if we apply our suspicions from string theory to black holes, then there is no event horizon or singularity, but rather a "fuzzball" of quantum strings compressed together. These hairs on the surface then reach out across a sufficiently indistinct region of space to leak information out of the black hole and into the surrounding universe. What are your opinions on fuzzballs?

13. Chuck Norris could escape from a Black Hole. However, the Black Hole cannot escape from Chuck Norris !

14. This comment has been removed by the author.

15. Bee,

(1) Could black hole remnants be a component of dark matter, especially if primordial black holes have all evaporated by now?

(2) When you say black holes don't exist, does this mean observed black holes are something else, like string theory fuzzballs or dark energy stars?

(3) How do contenders for QG such as string theory and LQG address the black hole information paradox?

1. Black holes of any sort have been ruled out as a significant contributor to dark matter. The search for microlensing of background light through the galactic halo has pretty much ruled this out.

Fuzzballs or quantum hair on black holes etc are variants of quantum states on event horizons.

String theory with holography seemed to offer solutions, but the failure of monogamy of bipartite entanglements has led to this obstruction called the firewall. LQG has other problems. In fact, it can't evaluate quantum amplitudes well, such as building a black hole from Feynman diagrams. Vafa showed how this can be done in string theory.

2. (1) Black holes may be ruled out, but I was alluding to black hole remnants or micro black holes as dark matter. I'm not aware of any papers on this.

(2) String theory papers based on AdS/CFT have problems which Bee talked about earlier: AdS in our dS, as well as results that only work in the extremal case.

(3) There are papers in LQG that apply LQG to BH physics, asking what should happen to information in a BH and building a theory based on this.

3. You can read this wikipedia article and look at the references: https://en.wikipedia.org/wiki/Primordial_black_hole It was announced rather recently, based on microlensing studies, that primordial black holes from about 10^{15} kg up to the much larger 10^{30} kg have been ruled out as a major contributor to dark matter. This does not mean primordial black holes do not exist, but they do not exist in sufficient number to be considered a MACHO dark matter candidate of significance.

The role of the anti-de Sitter (AdS) spacetime is odd. If one considers a cone, a double cone connected at their points, the de Sitter (dS) spacetime is the hyperboloid on the outside that is a single sheet, whereas there are two copies of anti-de Sitter spacetimes in the two cones. The AdS and dS spacetimes meet at I^±∞, or a conformal boundary. As such the AdS quantum information equivalency to CFT_{n-1} on the boundary ∂AdS_n means the dS_n carries the same quantum data.

There are some subtle issues here that I don't think have been well studied. String theory exists in various forms, and the simplest is the bosonic string. This exists in a Lorentz spacetime of 26 dimensions, which some people have difficulty with. However, this connects to some interesting developments with the Fischer-Griess “Monster” group, with automorphisms of the 24-dimensional Leech lattice embedded in this 26-dim spacetime, monstrous moonshine, the Langlands program, and other matters. This string theory has some oddities, such as a negative zero point energy and a first excited state that is a tachyon. What does this mean? A tachyon just means the vacuum state is unstable. So the supersymmetric version of strings has a zero energy vacuum, where supersymmetry is broken for positive energy. This leads into some questions we might ask, and one is: if the bosonic string vacuum symmetry is broken, should not the same hold for the supersymmetric string? If the corresponding SUSY strings are broken, where during inflation they should be strongly broken, they have positive vacuum energy and it is natural to think these correspond to dS spacetimes.

In the structure of quantum mechanics there is a theorem that QM requires positive operator valued measures (POVMs). This just means the identity operator can be partitioned, 1 = Σ_n E_n, so that the density matrix gives positive probabilities p_n = tr(E_n ρ), and this rule may be extended to other operators to define the Born rule. So this just means we have positive probabilities, and for the vacuum energy it indicates that ⟨T⟩ = tr(Tρ), where we think of T as giving the T^{00} stress-energy, is positive. However, we know the AdS spacetime has a cosmological constant Λ < 0, which implies a negative vacuum. While the bosonic string might want to live here, and by some extension string theory in general, the occurrence of a tachyon state means instability. On the other hand, we have trouble making string theory fit in dS spacetime, and as far as I can see supersymmetry must be broken.

This is really a form of a frustrated system, and the two “light cone” conditions are a case of a Haldane-like chain or system with two quantum phases. The AdS and dS are two phases, and neither is completely stable. The only stable configuration is the flat Minkowski spacetime. What this means is not entirely clear. There is Gibbons-Hawking radiation, where a dS spacetime with a cosmological horizon emits a sort of Unruh radiation and the cosmological constant fades to zero over an enormous period of time, such as 10^{10^{100}} years or maybe more. The other alternative is that the vacuum violently quantum tunnels into a lower energy state, maybe the AdS form, and this expands throughout the dS spacetime in some light cone. If this is coming our way we can't know about it, and suddenly everything we know, including ourselves, would cease to exist. Another way this could happen is some form of phantom energy or a big rip.

16. In the end, the singularity is no problem; the problem is the situation where we have fewer singularities than information units. You can consider every elementary particle as a singularity.

Hence the formation of the event horizon seems to be the key issue.

17. Sabine, the original calculation by Hawking, as far as I recall, is for a static observer at a given radius, extrapolated to infinity to get the radiation flux at infinity and hence the evaporation rate. What has always confused me is that there are no ideal static observers, as any static observer needs a source of thrust to stay in place, which affects the metric and thus the result of this rather sensitive calculation. Admittedly, the thrust required vanishes in the limit of infinite distance, but, as you have pointed out a time or two right here, the limit is not necessarily the value. Continuity and all that. If, instead of starting with a static observer, one takes an inertial one in a circular orbit, calculates the radiation seen by them, and takes the same limit, the results don't seem to match, even though they are the same observer at infinity.

I am also not very convinced in the related effect, Unruh radiation, because it seems to neglect the backreaction from any thrust sources that would be required to keep a detector uniformly accelerating. The best I could find to model an accelerating observer is the Kinnersley rocket metric with constant mass (that requires that half the radiation is negative energy, but that's a different issue). Sadly, I was unable to reliably calculate the analog of the Unruh radiation for that metric.

1. Sergei: why doesn't the limit match when you calculate the Hawking radiation seen by somebody in a circular orbit? Do you actually have a reference for this, or are you just extrapolating from popular articles describing Hawking radiation?

2. Peter: Sadly, I cannot locate references now, beyond what's described in https://journals.aps.org/prd/abstract/10.1103/PhysRevD.88.104023 and https://journals.aps.org/prd/abstract/10.1103/PhysRevD.89.104002.

To quote the latter: "in the circular case, even at large, negative energies, the Boulware rate does not align with the Hartle-Hawking and Unruh rates"

There should be an honest calculation somewhere of the excitation profile for a test observer (detector) in a circular orbit in the Schwarzschild metric, but the second paper is the best reference I have found.

18. If a complex system repeatedly fails over time to produce self-consistent results, one analytical strategy is to focus not on the results, but on whether there is a flaw in the initial system that is generating the inconsistent outcomes. Thus what strikes me most about the black hole information problem is not the debate per se, but the dependence of that debate on casual acceptance of mathematical singularities within the Kruskal-Szekeres coordinate system. It is the KS system that made such debates possible in the first place.

The usual interpretation of KS is that even though time appears to stop for someone falling into a black hole, that is only relative to an outside observer. To the person falling in, it's a smooth, continuous fall down to the singularity. Problem solved!

However, if I understand the symmetries of this argument correctly, the infalling observer will see the passage of time in the still-visible outside universe accelerate to nearly infinite speed as she nears the event horizon. What does that mean in practice?

If she can stay intact, it means she should observe the end of the universe before she reaches the event horizon. That is a problem, since it's hard to see how any observer in any frame could watch the universe end without the universe actually, well… ending. Furthermore, since Hawking radiation limits the lifespan even of galactic black holes, chances are good she won't get that far before she finds herself sliced, diced, and radiated back out as Hawking radiation.

(But what about matter that was already in the black hole when it collapsed? Easy: Since the absolute event horizon forms at the center of the star and moves outwards, nothing ever begins de facto "inside" the frozen time horizon.)

It is for such reasons that I find the "antipodes" interpretation recently proposed by Gerard 't Hooft persuasive, especially since he addresses the interpretation of KS mathematics. In his approach, a small enough black hole would become a time-symmetric version of the ultimate smoothie blender, taking in particle and state information and immediately spitting out new particles with correlated info and no info loss. This symmetry would work well for the scenario I just described, where the only thing an infalling observer would ever actually see is a flash and boom as she hit the event horizon and was converted into pure radiation.

The Wikipedia article on Kruskal-Szekeres coordinates (dated 28 Aug 2019) includes a nicely succinct summary (below) of 't Hooft's interpretation of KS. His antipode papers are also readily available on arXiv.

"... [Gerard] 't Hooft proposed [in a Feb 2019 arXiv paper] that the regions I and III, and II and IV are just mathematical artefacts coming from choosing branches for roots rather than parallel universes ... If we think of regions III and IV as having spherical coordinates but with a negative choice for the square root ... then we just correspondingly use opposite points on the sphere to denote the same point in space."

1. Terry Bollinger wrote:
>If she can stay intact, it means she should observe the end of the universe before she reaches the event horizon. That is a problem, since it's hard to see how any observer in any frame could watch the universe end without the universe actually, well… ending.

Yes, but long before the "end of the universe," the Hawking evaporation will have shrunk the black hole to a tiny fraction of its original size (it may in fact disappear).

So, in fact the Hawking effect prevents your dilemma from occurring.

I believe people have calculated what your infalling observer will see given the Hawking effect (of course, as you say, she actually gets torn apart by tidal forces before the end in any scenario), though I cannot recall a reference offhand.

Terry also wrote:
>This symmetry would work well for the scenario I just described, where the only thing an infalling observer would ever actually see is a flash and boom as she hit the event horizon and was converted into pure radiation.

It is worth remembering that infalling matter is not somehow magically converted into the Hawking radiation.

As Sabine has said, the Hawking radiation comes from the difference between the original and new(er) vacuums: in effect, the infalling matter pumps the vacuum and this distorts the vacuum and produces new particles.

So, where does the energy for the Hawking radiation come from? The infalling matter is getting closer and closer to the speed of light (in a local frame), which implies that the energy would be tending towards infinity (standard special relativity). Except... the infalling matter is deeper and deeper in the gravitational well, and the negative potential energy exactly compensates (I oversimplify -- it's really the change in g00, but the effect is the same). In the classical case.

This is well-understood for a classical collapse.

I am pretty sure (i.e., I cannot prove this quantitatively -- maybe Sabine can) that in the case of Hawking evaporation, the infalling matter does not speed up quite as fast as it should (i.e., there is a drag from pumping the vacuum), which causes the "kinetic energy" to be slightly overwhelmed by the negative gravitational "potential energy" so as to lower the total energy (AKA mass) of the black hole. (Again, "kinetic energy" and "potential energy" are the wrong terms: we're actually talking the special-relativistic increase of energy as you approach the speed of light vs. the g00 effect of gravitational time dilation).

This is, I think, implicit in much of what is written in the field (I hasten to add that the last paragraph may be entirely wrong: I am open to correction).

However, this does not solve the information-loss (or, as Sabine prefers, time-reversal) paradox. All the infalling matter is still there in a tighter and tighter blob, simply having sacrificed some of its energy of motion to fuel the Hawking radiation.

So, now the question is what happens to this blob as its size approaches the Planck radius and quantum effects necessarily dominate. And to answer that, we need a quantum theory of gravity (we think). Which we do not have.

In short, I do not think we need 't Hooft's proposals, but we do still need a quantum theory of gravity (I think).

Dave

2. Dave, good points and great discussion, thanks!

I agree 100% that Tina (she needs a name) would never witness the end of the universe. To be honest, the only reason I mention that idea is because it can be used as part of a block universe explanation for why the KS transition is viable. The idea is that sure, you briefly get to glimpse the end of time, but since the end of time already exists in a block universe, there's no paradox for you then to pop back into the "now" of the more-or-less ordinary spacetime inside the event horizon. I do not think that view actually works, for multiple reasons, but I also respect it as a well-structured and principles-based approach to resolving a difficult problem.

The only part of your response that I'm unsure about is when you ask what happens when the shrinking black hole blob reaches Planck radius.

My question back is this: Why would that ever happen?

More specifically, as the black hole evaporates it will eventually reach roughly the mass-energy of, say, an electron. At that point I cannot see any reason why standard, ordinary quantum mechanics would not take over fully, including in particular the voracious qubit-centered refusal of standard QM to allow any one state to persist when other states that meet all applicable conservation laws are also available. Thus what should happen at that point is that standard QM should take charge of whatever is left of the black hole — and whether that is a Planck singularity or a very small cosmic disco ball truly doesn't matter — and simply convert it into whatever combination of electrons, neutrinos, and photons best conserves the last dregs of quantum numbers left in the dying hole.

Even speculatively, I am not aware of any mechanism that would move this process further down the size scale, let alone into the Planck domain, even if you accept that the original singularity was itself on the Planck scale. Or stated another way: there is no inertia to the Hawking evaporation process, no mechanism by which it can "overshoot" into the Planck domain once its mass-energy reaches the scale at which the tardigrade-like voraciousness of particle-scale quantum physics takes over. (And yes, that was an Ant-Man reference… :)

Schrödinger-style QM truly is ferociously unforgiving in these things. The one and only way we know for overcoming the built-in ambiguity of the qubit edifices that we call wavefunctions is to invest enormous mass-energy resources that enable the definition of details at smaller and smaller scales. In the specific case of an evaporating black hole, the hole initially has an enormous mass-energy advantage over ordinary QM, but it then loses this advantage simply by becoming quantum-scale in its total mass-energy near the end of its existence.

I believe 't Hooft was getting at a closely related point in his 2018 paper on the discreteness of black hole microstates.[1] There he argued, for the first time I think, against the need to include Planck-scale string fuzziness when describing at least some classes of event horizons. In the first sentence of his abstract he asserts "It is explained that, for black holes much heavier than the Planck mass, black hole microstates can be well understood without string theory."

[1] G. 't Hooft, "Discreteness of Black Hole Microstates" (2018), https://arxiv.org/abs/1809.05367

>Even speculatively, I am not aware of any mechanism that would move this process further down the size scale, let alone into the Planck domain, even if you accept that the original singularity was itself on the Planck scale.

Well... all cautious physicists add a caveat about not knowing what happens at the Planck scale simply because we just don't know!

Maybe, as you say, it just evaporates completely and there is no Planck-scale issue. Maybe not. No one knows.

Two problems with the "evaporates-completely" scenario:

As Sabine has emphasized, this violates time-reversal invariance. But both GR and QM allow time reversal, so something must be wrong.

The other point that I have been emphasizing is that there is no magic wand that converts the infalling matter into outgoing Hawking radiation. In fact, the Hawking radiation seems to come from a slight drag force on the infalling matter: the infalling matter does not speed up quite as much as it should.

So, what happens to the infalling matter?

People have come up with various possible answers: e.g., it pops out in a white hole in another universe.

I think it is fair to say that none of these answers seems all that plausible to most physicists.

I hasten to add again that what I have said here may be completely wrong: indeed, almost certainly I (and most physicists) am missing some essential part of the problem.

After all, there are no paradoxes in nature, only in the minds of physicists.

Dave

19. I must confess total confusion about this subject. Is "information" as used by theoretical physicists a borrowed term like quark or plasma? As I understand information it is totally outside the domain of physical laws.

1. Remarkably, "information" in theoretical physics means exactly the same thing as bits and bytes in your computer or smart phone.

Information is just history, a record of something that occurred in the past. Real world information is usually less neatly packaged than the bits and bytes of a smart phone, but both provide records of events that occurred in the past. And as it happens, information in the form of simple 0-or-1 bits shows up in some of the deepest mysteries of physics.

Take for example the case of a quantum physicist who has constructed a two-hole electron interference apparatus. As part of her testing, she adds detectors to both holes to see if the electron went left or right. Her recording of the path of an electron thus requires exactly one bit of information, e.g. by assigning a 0 if the electron went left and 1 if it went right.

The reason why this "Well, duh!" example links information to some of the deepest mysteries of physics is that if she then removes the hole detectors, the final location of the electron instead becomes a fully symmetric pattern that reflects the locations of both holes, but says nothing about which hole the electron went through. This symmetric signature is the infamous "wave interference pattern" that folks talk about in quantum experiments. Instead of having a well-defined value of 0 or 1, the history of the electron is in this case a superposition of 0 and 1 that reflects the equal importance of both paths. This odd, quantum-only version of a bit is called a qubit.
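The which-path bit and its erasure can be sketched numerically. The following toy model is my own illustration, not anything from the comment itself, and all the numbers (wavelength, hole separation, screen distance) are arbitrary choices: adding wave amplitudes first reproduces the fringes, while adding probabilities, as happens once a which-hole bit has been recorded, washes them out.

```python
import numpy as np

# Toy two-hole experiment: each hole contributes a wave amplitude
# at screen position x. Units are arbitrary.
lam, d, L = 1.0, 5.0, 20.0            # wavelength, hole separation, screen distance
x = np.linspace(-2, 2, 401)           # positions along the screen

# Phase of each wave at the screen, set by its path length from the hole
phi1 = 2 * np.pi * np.hypot(L, x - d / 2) / lam
phi2 = 2 * np.pi * np.hypot(L, x + d / 2) / lam
psi1 = np.exp(1j * phi1)
psi2 = np.exp(1j * phi2)

# No which-path detectors: amplitudes add first, then square -> fringes
coherent = np.abs(psi1 + psi2) ** 2

# Detectors record the 0-or-1 "which hole" bit: probabilities add
# instead, and the fringe pattern disappears (a flat profile)
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

# Fringe visibility: near 1 with interference, 0 for the flat pattern
visibility = (coherent.max() - coherent.min()) / (coherent.max() + coherent.min())
```

The single recorded bit is exactly what destroys the symmetric pattern: once the two paths are distinguishable, the cross term between the amplitudes is gone.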

You might think that qubits are a rare, exotic phenomenon, but nothing could be further from the truth. A key reason why electrons orbit stably around atomic nuclei is because their orbital paths are described by qubits, that is, by wavefunctions that allow all physically possible orbits to exist at once. In chemistry we call such fluffy, puffy, wave-like qubit orbits "orbitals." If qubits were not in charge of electron orbits, chemistry as we know it would cease to exist. Even worse, without the fluffiness that qubit ambiguity creates, atomic volume would cease to exist, and the entire earth would turn into a marble-sized black hole.

Finally, when individual histories are spread so widely over a region of space that each subregion has the same smooth profile, the region is said to have high entropy. For example, the sound of one person speaking over a broadcast system in an auditorium would be a "low entropy" state, while the indecipherable white noise of everyone in the auditorium trying to speak at once would be a "high entropy" state. Entropy is important in physics because a history that has been spread too widely is a history that, like Humpty Dumpty, cannot easily be recovered. Sabine has talked about how this statistical irreversibility due to entropy is what gives us an arrow of time even though the underlying laws are time-reversal symmetric.
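The auditorium analogy maps directly onto Shannon's formula, H = -sum of p_i log2(p_i). As a minimal sketch (my own, with an arbitrary number of "microphones" sampling the room), a sound distribution concentrated in one spot has zero entropy, while a perfectly spread-out one has the maximum:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a probability distribution p (zero entries ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

n = 256  # number of subregions ("microphones") sampling the auditorium

# One speaker: essentially all the sound comes from one place -> low entropy
one_speaker = np.zeros(n)
one_speaker[0] = 1.0

# Everyone talking at once: sound spread evenly -> maximum entropy
crowd = np.full(n, 1.0 / n)

low = shannon_entropy(one_speaker)   # 0.0 bits
high = shannon_entropy(crowd)        # log2(256) = 8.0 bits
```

The "Humpty Dumpty" point is the second case: once the distribution is uniform, no subregion carries any distinguishing record of what was said.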

Related to this point, don't let quantum philosophers confuse you into thinking that "observation" requires sentient observers. The simple truth is that once a quantum system exchanges information with any large bit of thermal matter, the entropic irreversibility of that thermal matter will ensure that the quantum system remains "observed" for the rest of the history of the universe. In other words, the Great Observer of quantum mechanics is nothing more than ordinary, everyday thermal matter, scrambling quantum wave functions until they are beyond recovery.

The brilliant but underappreciated Boltzmann realized that the accumulation of entropy, of information distributed so widely that it can never be recovered, is linked deeply to the definition of time itself. If you go back to that idea that information is history, this connection makes a lot of sense.

2. Addendum: To understand why information in the computer world and information in physics feel very different, here's an observation that may help: People and computers use information to build models of reality, while the physical world uses information to build itself.

Think back to the two-hole electron diffraction experiment. After passing through the holes, the electron leaves a mark on a screen. That mark creates new information that will help determine the next round of history — the next round of causality. Every classical impact of a molecule, tick of a clock, death of a microbe, or explosion of a supernova leaves its own unique information signature, and each such signature changes forever the possible futures of our universe.

These histories run very deep. For example, if you have a ring composed of gold, platinum, or tungsten, you are wearing information created when two neutron stars collided violently enough to synthesize and spread metals heavier than iron through local interstellar space, where they later contributed to the formation of our sun and planet. Your ring records many other events as well, some as recent as how you acquired it, others going back to who made the ring and where its metal was mined, and eventually all the way to the violent star collision that created its metal in the first place. The wealth of information recorded in even one tiny physical object is astonishingly complex and often truly ancient. Each such object is a snippet of the vast fabric of cause and effect that led to our current universe.

In contrast, when information is used for modeling, the stories are much narrower and more focused. Ironically, they can also be far more consequential. Tina, the hypothetical physicist in several of my comments, was able to transform the screen impact image of the electron into an extremely succinct one-bit model of that history. Well-designed models allow us to predict complex future events with remarkable accuracy. In the case of an electron passing through the two-hole interference device, a single bit was enough to capture what turned out to be the most critical detail of how the electron interacted with the device. By expanding that bit to include qubit data, it can even help estimate the likelihood of future quantum events for that electron. This huge reduction in complexity and accompanying amplification of predictive power is another reason why computer information feels so different from the richer but also harder-to-decipher information of physical objects.

Finally, an ability to predict the future well enough to survive another day is a huge part of what makes life possible, even if the earliest of those predictions were no more complex than cyanobacteria knowing when to rise in the water to catch the morning sun. But for any such predictive capabilities to have come about, we first had to be in a universe that makes simple-but-powerful prediction possible. Structurally, that is not as easy as it sounds. If we did not live in a universe that is both "chunky" and "simple" at multiple levels and along multiple paths, no living thing would have survived long enough for us to be worrying about such issues. It is in this area that life is perhaps the most tightly intertwined with physics.

20. Hi Sabine!!!

I hope your day is going well. Mine is... well, I have a moment. Wow, black holes. Really...? Maybe I can cut through a bit of the hoopla with some really scientific-sounding words.

1.) A black hole is a phenomenon*. When presented with a phenomenon, we, as humans, then (predictably) conjecture*.

If we subtract assumptions, presumptions, theories (crazy or otherwise), and 'ad hoc' or 'sketchy' math, then what we actually scientifically 'know' about black holes (anything: generation/demise, mechanism, structure, etc.) is next to nothing (and that 'next' is an iota).

And as for arguing: argument exists in the void between unverifiability and hard scientific evidence, fact and proof. People who like to argue all day might be best suited as lawyers... or, I don't know, politicians? Just wonderin'.

* these terms should be understood or 'looked up' to make sense of the above statements.

All Love,

Oh, and yeah, I could get 'all technical on ya' anytime. ;)

21. A black hole is the physical archetype of mother love; it's the syringe hole of the makers which keeps all homes in balance. Each garden or galaxy is constantly harvested for a type of energy yet unknown to public science. Dark energy is invisible to our physical instruments because it's not wholly in time-space, but interacts with it as a carrier of said undiscovered energy.

22. "and other proposals such as those by Horowitz and Maldacena, ‘t Hooft, or Maudlin. "

I do not understand the solution of Maudlin. He seems to argue that the information stays within the event horizon and after evaporation, the event horizon "lives on" in the past. I cannot see how "unitary evolution" remains meaningful in this interpretation.

't Hooft argues that the space inside the event horizon does not exist for us at the outside. Nothing ever really crosses the horizon, at least not in finite time. This sounds like the "Black hole complementarity" of Susskind. This position does get rid of the singularity at the center of the black hole. That space simply "does not exist" for us.

I can see how this works for material that falls into the black hole. It stays outside the horizon until it evaporates again. But I do not understand how this would work for material that was already within the radius of the event horizon before it formed.

1. Perhaps you missed the pages and pages and pages of comments on Sabine's post about Maudlin's paper.

23. I must be missing something fundamental here. Isn't it the case that according to Einstein, from an external observer point of view any matter falling into a black hole will take infinitely long to reach the event horizon? It seems to follow that as far as we are concerned, the information loss is indefinitely deferred, because for us nothing ever actually falls *into* a black hole -- it's all piled up at the event horizon. Why is that not the case?

1. Black hole evaporation takes a finite time; the "infinitely long" result is a classical calculation that does not take into account semi-classical effects like evaporation.
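For a sense of just how finite, the standard semi-classical estimate for the lifetime of a Schwarzschild black hole of mass M is t = 5120 pi G^2 M^3 / (hbar c^4). A quick back-of-the-envelope computation (my own sketch; it ignores greybody factors and the number of particle species emitted) gives an enormous but finite answer for a solar-mass hole:

```python
import math

# Physical constants (SI units)
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def evaporation_time_years(mass_kg):
    """Hawking's leading-order lifetime estimate for a Schwarzschild hole."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / 3.156e7   # seconds per year

t_sun = evaporation_time_years(M_sun)   # roughly 1e67 years
```

Finite, but vastly longer than the current age of the universe, which is why the evaporation of astrophysical black holes is unobservable in practice even though it resolves the "piled up forever at the horizon" picture in principle.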

2. Hi MLA!
It's a legitimate argument, explored by Thorne, Price and Macdonald in their "Membrane Paradigm" book (1986) and described in an article in Scientific American in April 1988.

The Membrane Paradigm deliberately describes black hole physics "naively" as it appears to play out as seen by a distant observer. If we drop a large rock into a black hole, the observer sees the rock to be progressively redshifted and time-dilated, and never to actually reach the horizon. Drop another rock on top, and the same is seen. But the two rocks are never seen to touch. So the first rock doesn't just appear to have a rate of timeflow that tends to zero, its height also appears to tend to zero, and all the matter that has ever fallen into the hole appears to be still there, converging on a thin membrane of Planck-scale thickness.

The Membrane Paradigm treats this apparent, effectively two-dimensional shell of matter as real. So when you drop a third rock into the hole, and the far side of the hole emits a gravity-wave, the signal doesn't pass through the hole but ripples around the membrane.

The Membrane Paradigm is in some ways the ancestor of 't Hooft's Holographic Principle - it replaces interior 3D physics with a two-dimensional surface description.

24. Here is an interesting paper that strikes a bit at the heart of quantum gravity.

Bell’s Theorem for Temporal Order
Magdalena Zych, ∗ Fabio Costa,
Igor Pikovski, and Caslav Brukner

https://www.nature.com/articles/s41467-019-11579-x.pdf

or one can read the preprint with larger type if you are like me and going half blind:

https://arxiv.org/pdf/1708.00248.pdf

The paper illustrates how a mass in a superposition of two locations, with a superposition of gravity fields, has a time dilation effect at different locations. The Bell theorem and quantum violations of the Bell inequalities mean there is an ambiguity in time ordering. This is related to Sabine's past posts on measuring superpositions of gravity fields.

This appears commensurate with what I see as the nonlocality of quantum gravitation. Unlike QFT where there are zero commutator conditions imposed on fields outside the light cone, with q-gravity this is an artificial imposition. The locality of quantum fields is a way we humans place theoretical models in spacetime, but nature does not fundamentally obey this locality.

25. @J: I believe most firewall proponents don't expect firewalls to have observable effects that can be seen from outside the black hole. (Which makes this prediction very problematic, as it can't be tested experimentally.)

26. A question for Sabine, or any knowledgeable reader: solution 5 (black holes do not exist) is not your favourite option, but still you include it in your list of the 5 most plausible options among "several hundred".
This raises the question of what phenomena have created the gravitational waves detected in the last couple of years.
This detection was presented as a proof of the existence of black holes, but is the argument really ironclad?
If it is, solution 5 would surely have to be discarded.

27. The existence of an event horizon depends on the assumption that there is no gravitational field energy density in the matter free space outside a spherically symmetric mass. Eliminate that assumption and you get the Yilmaz metric, no event horizon, no singularity and no information paradox. It is time for physicists to leave this black rabbit hole.

28. Sabine,

in your exemplified solution no. 5 you state: " ... this requires that one invents some new physics which violates the equivalence principle, and that is the key principle underlying Einstein’s theory of general relativity."

Why is this a point? It has long been known that this equivalence principle is not valid. There is a simple consideration showing this:
Assume an observer who has a charged object with him. This charged object will radiate if it is accelerated (maybe together with the observer). But when resting in a gravitational field, it will not radiate (one reason is that there is no energy available to permit this). So it is generally possible to distinguish between gravitation and acceleration. And this case alone falsifies Einstein’s GR.

I have once presented this consideration to several professors of relativity and never got an explanation for this conflict.

1. @antooneo
and references therein).
The answer seems to be nontrivial, but that does not justify throwing the baby out with the bathwater.

2. Supposing you don't have any net electric field and no difference in the gravitational potential between the observer (you) and the charge, you won't see any radiation from the charge. You can think of inertia as a repulsive force. The gravitational energy goes into the particle and the inertia goes out. The flow-in equals the flow-out. Einstein's GR does not contain any model of particles. It is all scalar fields, and the strong equivalence principle holds for point particles and mass-energy. If particles are not point particles, the fields are vectorial and the gravitational field could potentially diverge from a point source only at the scale of the particle. Inertia is only observed in the direction of the acceleration, and it is relative.

The behavior of a charged object is only one conflict. The other one is dilation. In a gravitational field there is dilation. This can easily be proven these days by use of an atomic clock. But there is no dilation with respect to acceleration. Regarding motion, dilation depends only on the actual speed, not on acceleration. That was for instance proven at the muon storage ring at CERN. The lifetime of the particles was extended by an amount related to the speed in the ring. If the huge acceleration in the ring also had an influence, then, in comparison to a gravitational field, their lifetime would have been extended by another factor of 100 to 1000.
This is another proof that gravitation is different from acceleration.

@MarkusM:
This is only one conflict with Einstein’s GRT out of several. And the necessity of just this equivalence was an essential reason for Einstein to stay with the non-existence of an ether, even though he had admitted to Hendrik Lorentz that rotation (e.g. the Foucault pendulum) cannot be understood without an ether. So the point here is not at all to “throw the baby out with the bathwater”.

@MC Squared:
According to Maxwell the radiation of an accelerated charge does not depend on the existence of an electric field. And there is no need to assume that the gravitational field is different for the particle and the observer. Please look at any experiment which accelerates e.g. electrons.
How does gravitational energy go into a particle? What do you mean by this? And what does it mean saying “inertia goes out”? Does the particle lose inertia in specific situations? Please explain.

4. It is just a matter of mass-energy. Gravity attracts and inertia pushes. Energy being conserved locally and the observer and the charge being at the same potential (equivalent to be in the same frame), let's say from the point of view of the observer, then either both radiate or both don't. Where is gravity in Maxwell equations? Anyway, only a third observer seeing you and the charge redshifted can see any gradient of gravitational or EM radiation. If the charge emits radiation then you also do because you are also made of charged particles. Neglecting any delay, there is no net exchange of radiation. You are both at the same temperature. Here, I also neglected any acceleration due to gravity between the observer and the charge. The only problem is the neutrinos which are not supposed to have any charge... All other elementary particles are either bosons, having a charge or being virtual.

5. Just a correction to avoid confusion.

"Anyway, only a third observer seeing you and the charge redshifted can see any gradient"

I should have written "redshifting" not "redshifted". It is the change in the redshift which must be considered.

29. >I must be missing something fundamental here. Isn't it the case that according to Einstein, from an external observer point of view any matter falling into a black hole will take infinitely long to reach the event horizon? It seems to follow that as far as we are concerned, the information loss is indefinitely deferred, because for us nothing ever actually falls *into* a black hole -- it's all piled up at the event horizon. Why is that not the case?

Because black holes are not eternal; they evaporate, so you can't take an infinite time to fall in from the perspective of an outside observer.

(1) There is never any matter inside a black hole when it forms.
(2) Event horizons are best understood as impenetrable but hungry force fields.
(3) Event horizons are unique, momentum-like configurations of spacetime.

I'll elaborate the first point below, and the second and third in my next comment.

----------
(1) There is never any matter inside a black hole when it forms.

As best I've been able to figure out from reading older articles on the topic, the whole reason for abandoning the earlier "frozen star" model of black holes and moving to singularities was the same concern Rob van Son mentioned earlier in this thread: What happens to matter inside the black hole when it forms?

The KS coordinate system, or more precisely the prevalent interpretation of its meaning, seems mostly to have been an attempt to deal with precisely this issue. Even though the horizon appears as an infinite-time coordinate singularity to distant observers, the idea behind KS was that on either side of that horizon, time would get back to a more or less normal state of affairs. Thus matter that was inside the event horizon when it formed would be more like a long-lost cousin in a distant land, still carrying on with life but unable to communicate. This idea made people happy, and seemed to provide better symmetry for understanding what happened to matter that was inside the event horizon when the black hole formed. As long as you can persuade yourself not to worry too much about what happens when the infalling observer sees the universe end, e.g. by invoking a block universe, KS made it possible to believe that (as with breaking the sound barrier) the act of passing through the event horizon was no big deal for the person doing it. This also made people happy.

The problem with all of this is astonishingly simple: The real event horizon, the one that is truly out of touch with the universe, always begins as a point at the center of whatever is collapsing. There is never a scenario in which matter is already stuck inside of this true or absolute event horizon. What got folks a bit flummoxed was that for an external observer in one frame of motion, the event horizon of a collapsing star would appear first as a large spherical region of space. This is a false horizon, however, since that same spherical volume remains in contact with other frames, at least until the growing absolute event horizon expands enough to catch up with it. At that later point the false and true event horizons appear to become one.

Let's be blunt: This has nothing to do with math. As best I've been able to tell, the misconception that some matter "starts" inside the black hole when it forms was the main reason why folks thought that, at the very least, there should be some interpretation of KS coordinates that would allow an outsider to join anyone already on the inside. But also as best I can tell, the idea that matter could begin on the inside of the hole was itself the result of an incorrect analysis that confused single-frame event horizons (which mean nothing) with absolute event horizons (which mean everything). In the end, it all became an issue not of math, but of human interpretation of that math. As Feynman adamantly pointed out more than once, interpretation is where math stops being a pile of equations and is transformed into a tool for gaining a deeper understanding of physics.

[TO BE CONTINUED…]

31. [CONTINUED…]

----------
(2) Event horizons are best understood as impenetrable but hungry force fields.

Simplified but basically correct visual images make navigating complex dynamics easier. If nothing can get through an event horizon, what you have in effect is an impenetrable, spherically symmetric force field. This field is generated by the interactions of matter and energy in the black hole atmosphere (yes, black holes have atmospheres). At the same time, this force field is "hungry" in the sense that it absorbs matter to grow in size.

Why is all of this important? Because it leads to mathematically quantifiable, testable predictions. I'll get to that after my third point.

----------
(3) Event horizons are unique, momentum-like configurations of spacetime.

A more precise way to describe event horizons is to use 't Hooft's infall-to-antipode momentum functions as the jumping-off point for describing event horizons as a second configuration of spacetime, one in which matter behaves differently than in flat spacetime. (This is my interpretation, please do not blame him.) This configuration has a binary, two-way holographic (Fourier) relationship with the opposite extremum of flat space. That's another way of saying that an event horizon is a type of momentum space, like that of electrons delocalized in mirrors and metals. However, this momentum space applies to all matter that falls into it. Thus the deeper reason for why an event horizon is impenetrable is that it is spacetime, just a different and (to us) far more localized form of it.

Since it is the delocalized electrons that reflect light in mirrors, I like to call this the Dark Mirror model of black holes. (Cosmic Disco Balls also works… :) The Dark Mirror model nicely allows particles to remain incredibly hot in terms of their energy states, yet thermodynamically cool like delocalized electrons in metals.

In the Dark Mirror model, time dilation is a consequence of a balance between flat space with ordinary time and momentum space with unique dynamics that do not contribute to the information elaboration behind causal time in flat space.

-----
I said this is predictive, and I meant it. Here's a question: Why do supernovas blow up spectacularly when a black hole forms at their center?

Oddly, there does not seem to be a really good answer for that. Intuitively, one would think that the black hole would just immediately suck everything into it.

The experimentally verifiable hypothesis is this: If the absolute event horizon behaves like an irresistibly expanding impenetrable force field that starts at the center of the collapsing star, then it is the formation and rapid expansion of the black hole event horizon that violently explodes the star.

Here's how the momentum interpretation plays into that. The most important parameter for measuring explosive yield is how quickly the event horizon expands. The second most important parameter is how much of that impulse is absorbed or "eaten" by the black hole as it expands. This consumption rate is determined by the dynamics of transitioning ("delocalizing" in solid-state terminology) matter from flat space into momentum space.

't Hooft's papers show that an infalling particle cannot reach its final state until momentum waves with finite propagation velocities spread across the spherical event horizon surface and refocus on the other (antipode) side of the hole. This delay means that black holes cannot absorb matter at an unlimited rate.

The prediction is this: Once a mathematically accurate model of how quickly an event horizon can absorb matter is defined, it can be combined with event horizon expansion models to estimate total explosive yield. Good models of both should accurately predict the behavior of stars that create black holes when they explode.

1. With the implosion of a star there is a caustic point in spacetime and inside the imploding material where the singularity and the event horizon form. Ellis and Hawking have this in their seminal text Large Scale Structure of Space-Time. More work has been done with this with numerical computations using the Raychaudhuri equations for fluid motion.

The event horizon is not a force field. General relativity is really about removing the idea of gravity as a force. Instead it is about geodesic flows. An event horizon is just a surface that is inaccessible to any possible geodesic, or even accelerated world line or path, in some region.

2. Terry Bollinger:

You say: "There is never any matter inside a black hole when it forms."

What happens if I'm sitting peacefully on a planet, and somebody throws 100-ton concrete blocks at it simultaneously from all directions? Why won't I be inside the black hole when it forms? Do I automatically get teleported out? Or do I get converted into gravitons which then radiate outwards and reach the event horizon just as it forms?

Note: this question isn't original to me. I originally saw it in a lecture of 't Hooft's, where he used TV sets rather than 100 ton concrete blocks.

The distinction between the absolute event horizon and false, local-observer-only horizons is independent of the velocities of implosion. In fact, infalling neutronium at the core of a collapsing star arguably would make what you just described look pretty paltry.

The point-like origin of the absolute event horizon is instead a topological issue related to SR signal delays. It might be possible to stretch it into a line by using a cylindrical collapse topology -- hmm, that's an interesting question actually -- but even then its initial volume would still be zero.

4. Also, specifically to the first part of your question: You just exactly invoked the false horizon issue, which as I noted has been quite popular (but also incorrect) with many physicists for a long time.

So, what happens is this: If you throw in enough blocks -- which will not be easy, since compared to neutronium they are like wisps of smoke -- the absolute event horizon will form first at the center of your planet, expand outwards at nearly the speed of light, and flatten you far flatter than any bug. In fact, it will flatten you so extraordinarily well that even the protons and neutrons of your atoms will get squashed into quasi two dimensional momentum wave functions that spread out -- "delocalize", just like electrons in a mirror -- over the entire (rather small) surface of the resulting black hole.

In short, you turn into a working component of a very dark mirror, one that absorbs almost everything that comes in, but also occasionally spits a few Hawking temperature microwave photons back out.

As retirements go it's a very placid and nearly timeless one, once you get past all the squishing unpleasantries.

5. @Terry: this requires faster-than-light communication. For the absolute event horizon to know it is time to form, it needs to become aware of the infalling blocks, which are far enough away that this requires a violation of Einstein's Theory of Relativity.

That doesn't mean that this isn't the correct solution, but it certainly needs a lot more working out before it can be accepted.

6. Peter,

To convert your excellent critique-the-model question into something that could actually be explored using computer simulations, try this:

You are the first astronomer with a spaceship capable of traveling into the void between stars. You have just stopped at a nice spot to make observations.

Alas, you are also cosmically unlucky. Your very first instrument sweep reveals that you somehow picked the only spot in the history of the universe where four spin-free, very quiet neutron stars, each one just a few kilotons away from collapsing into a black hole, are all heading straight at each other. When they hit, they will form a perfect tetrahedral cage centered on you in your observation chair… and you have no time to run!

So, what will happen?

The false-horizon interpretation of black hole formation says that as the stars get close, an outside observer will see a fairly large spherical event horizon form. This horizon will be large enough to include all of you and most of the four neutron stars. Thus you will already be inside the black hole as it forms, and so will never be heard from again. (You will also very soon afterwards get squished big time by four, count 'em four, neutron stars.)

However, the reason this horizon is false is because it gets lopsided when observed by other spacecraft that happen to be whizzing nearby at close to the speed of light. Consequently, none of the horizons that they (or you) see can be the real one, since by summing all of the information from the near-c spaceships it's still possible to see deep within the tetrahedron of neutron stars. The only location where none of the spacecraft can obtain information begins as a point located in the pit of your stomach, since with your exceptionally bad luck you happened to be sitting exactly at the center of the tetrahedron. That location is the beginning of the absolute event horizon, since it is the only point that all observers agree has fallen out of communication with the rest of the universe. (Note that to verify whether this is really where the absolute event horizon forms, this model or a simpler variant using two or three stars could be simulated on a supercomputer.)

The question is this: What happens when this absolute event horizon expands?

In the prevalent interpretation of KS coordinates, you fall into and are quickly consumed by the pit of your stomach, a reversal of normal digestion dynamics.

In the dark mirror interpretation, the infinite time delay of the KS boundary is reinterpreted as an energy incentive for converting matter from energy-costly 3D xyz wavefunctions into much flatter (and thus energetically preferable) quasi-2D momentum-space wavefunctions. The resulting hologram, which is really more of a multi-particle Fermi sea, still resides within our universe, but is extremely inaccessible due to the huge energy barrier of returning to 3D. Incidentally, if you are the one who fell in, don't get too optimistic about still being around after the conversion. That's because the intense gravity that drives the holographic transformation also first slices you up at the subatomic level.

Bottom line: In the dark mirror interpretation, instead of getting digested or teleported or anything like that, you just explode. This is immediately followed by your pieces being converted into delocalized matter on the surface of a spherical dark mirror (or a cosmic disco ball, either description works pretty well).

7. @PeterShor (hmm, does that syntax work here?),

Good concern, thanks! It's 4am, so just a quick addendum on the speed of light issue.

The absolute event horizon definition has been around for quite a while, so I'm pretty sure it is OK in terms of light delays. But your point is I think a bit more subtle than that, so I'll have to think on it more when I'm actually awake... :)

Regarding the definition of absolute event horizons, the only thing I've added is the assertion that if any event horizon is capable of irreversibly cutting off access to the outside universe, then only the absolute event horizon can qualify. That is because as I described in the thought experiment using neutron stars, all of the other event horizons are porous to information flow if you choose observers in the right frames.

8. Peter and Terry,

Wikipedia's article on "Event Horizon" includes a relevant quote from Hawking:
> This strict definition of EH has caused information and firewall paradoxes; therefore Stephen Hawking has supposed an apparent horizon to be used, saying "gravitational collapse produces apparent horizons but no event horizons" and "The absence of event horizons mean that there are no black holes - in the sense of regimes from which light can't escape to infinity."

>The black hole event horizon is teleological in nature, meaning that we need to know the entire future space-time of the universe to determine the current location of the horizon, which is essentially impossible.

This has been my understanding for a long time: the definition of horizon is a subtle one, and you have to be very careful in using it.

Specifically, it is hard to make the definition local.

Dave

9. Peter,

The issue of your being inside the shell of incoming blocks is essentially the collapsing dust shell scenario, which has been dealt with extensively in the literature.

To see some of the recent work on this scenario relevant to the Hawking radiation issue, go to the arXiv and search on "Valentina Baccetti" and follow the reference chains from those papers.

I myself see nothing wrong with Baccetti's work (although that does not mean it is right!). I take it from one of Baccetti's acknowledgements that she knows Sabine, so Sabine can comment on this work if she wishes.

The basic issue is how you paste the volume inside of the shell, which, if you ignore Hawking radiation, remains essentially Minkowskian (due to Birkhoff's theorem), to the volume outside, which remains Schwarzschildian (also due to Birkhoff's theorem). I have myself gone through the calculations for this in the classical case (i.e., no Hawking radiation) and painfully convinced myself that the formulae in the literature are indeed correct for this case.

In a nutshell, my understanding of these formulae is that for you sitting inside the collapsing shell, life goes on as usual (i.e., Minkowskian) but your clocks go slower and slower compared to those far outside the collapsing shell. Until, of course, the shell hits you: then you are toast.
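The clock-rate statement has a simple form in the classical thin-shell case: matching the flat interior to the exterior Schwarzschild metric gives an interior clock rate of sqrt(1 - r_s/R) relative to a distant clock, where R is the shell radius and r_s its Schwarzschild radius. A minimal numerical sketch (geometric units; the radii are arbitrary illustrative values):

```python
import math

rs = 1.0  # Schwarzschild radius of the shell's total mass (geometric units)

# Rate of a clock in the flat interior of a thin shell of radius R,
# relative to a clock far away: sqrt(1 - rs/R). This follows from
# matching the interior Minkowski metric to the exterior Schwarzschild
# metric at the shell; the interior clock freezes as R -> rs.
for R in (100.0, 10.0, 2.0, 1.1, 1.001):
    rate = math.sqrt(1.0 - rs / R)
    print(f"R = {R:8.3f} rs  ->  interior clock rate = {rate:.4f}")
```

As the shell approaches its own Schwarzschild radius, the interior rate drops toward zero, which is the "slower and slower clocks" statement above in quantitative form.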

But, Hawking radiation will change things, and I am not sure there is yet a consensus on all the details if one allows Hawking radiation. You of course still come to a tragic end, but the details of the dynamics are different.

Dave

10. Peter,

Regarding whether there is a speed-of-light problem in the formation of the absolute event horizon, the absolute event horizon would not come into existence until sufficient gravitational effects from incoming bodies had propagated at light speed to the location where it forms. Thus the initial formation of the absolute event horizon would be subject to speed-of-light delays.

If your concern is that the formation of the false event horizon would prevent the center from "seeing" the impact of new masses approaching, recall that it really is a false event horizon, one that remains leaky to information. Information can still get out, and the right observers in the right frames with the right instruments would still be able to watch the increasing gravitational impact of new incoming masses right up until when the growing absolute event horizon obliterates even that view.

Regarding 't Hooft's views on event horizons, here's a 't Hooft quote about event horizons starting out as points. It's from a paper [1] he published about a year ago. The description is in the last sentence of the paper, which is a bit surprising considering that it's a fairly radical departure from any number of older descriptions of black hole formation:

"As we only considered black holes close to equilibrium, the question how antipodal identification switches on in the black hole formation process was not answered, but we may assume that this will be genuine Planckian physics that is not yet understood. Note that, when a black hole forms, the horizon starts out as almost a single point, where 'antipodal identification' would only span Planckian distance scales."

11. The definition of "absolute horizon" that doesn't come into existence until sufficient gravitational effects have propagated at light speed to the location where it forms is not the standard definition of "absolute horizon". I expect 't Hooft knows the difference.

12. Peter, I humbly bow to your greater expertise on this point! I admit that since someone as familiar with the GR math as 't Hooft now seems to accept event horizons as being initially point-like (see the reference to his paper in my last comment), I've been willing to accept the expanding event horizon interpretation of black hole growth as fully consistent with earlier mathematical descriptions of the absolute event horizon. But I'll look it up and dig into it more. If you happen to know a good paper reference or two I would be most appreciative.

13. @PhysicistDave,

Thanks! The Valentina Baccetti et al papers are excellent.

In their 2017 paper "Do event horizons exist?", Baccetti, Mann, and Terno wrote:

"In other words, r_g is a hypothetical surface that the shell gets very close to but never crosses. Neither trapped surface, nor horizon nor singularity ever form. The distance ϵ_* ∝ C^(−1) [18] grows as the shell evaporates. If the radiation stops at some point, then the remaining shell will collapse into a black hole."

My approach differs a bit: I am proposing that when the stress-energy tensor crosses a certain threshold, the result is not collapse, but a reorganization of matter at the quantum level due to momentum space becoming energetically favorable over xyz flat space. Particles are squeezed so flat that their only energetically attractive escape is to delocalize, becoming momentum waves on a sphere whose surface expands as more matter is added.

If you accept my deeper premise (I do not expect you to!) that spacetime is nothing more than a cooperatively maintained universal ledger for ensuring absolute conservation of quantum numbers, then what we have been calling a black hole is actually a dark mirror, a second organization of spacetime.

The ability of particles to separate indefinitely in xyz space without any added energy penalty makes Dirac's flat space very attractive. Momentum cooperatives in contrast are rare, since separation in momentum space has a huge energy cost. That is why the Fermi surface electrons in silver sit at energies of several electron volts, far above thermal energies.

However, when the energy gain from flattening matter by delocalizing it perpendicular to the gravity vector gets high enough, it should transcend the energy cost of deepening Fermi seas. The result would resemble 't Hooft's infall-antipode momentum waves, though the geometry would be smoother since waves behave differently in even-numbered spaces. Solid state folks who have looked closely at electron conduction in quasi-2D spheres might be able to help here.

If spacetime is just a quantum number conservation ledger, then it cannot form singularities, or Planck foam, or infinitely dense space, or infinite universes, or worm holes, or Planck-scale strings (though nucleon-scale Regge trajectory strings are a much more interesting question), or any of the other infinity/infinitesimal driven issues that have plagued physics for half a century. Spacetime can only reorganize the numbers in the ledger, with wavefunction Fourier symmetry telling us in advance what the two main options are: flat xyz space and momentum space.

Now, watch carefully: If the worst gravity can ever do is force matter to reorganize, then "Neither trapped surface, nor horizon nor singularity ever form." Thus even if I arrived at them by starting from a much more radical set of principles, I don't think my main outcomes are much in conflict with Baccetti, Mann, and Terno.

Experimentally, the dark mirror concept is richly predictive. Dark mirrors can get very dark, but they never leave the universe. They are sheltered from classical time, but never fully escape it. They do not erase information. Their highly conductive surfaces can retain both magnetic fields and atmospheres. They should blow the stars that created them into smithereens. Finally, Hawking radiation becomes an analogue of the Dirac flat-space tension: If you bend the 2D sphere surface too sharply, high-momentum-state particles start finding it energetically favorable to pop back into xyz space, akin to mundane field emission.

Black holes have only mass, spin, and charge, but dark mirrors can have complex internal states due to particles continuing to exist and interact within them. When you ponder why quasar black holes were so energetic in the early universe and are so quiet now, exploring the possibility of stable and unstable multi-particle band states and band interactions in dark mirrors could have interesting predictive power.

14. Terry wrote:
>I am proposing that when the stress-energy tensor crosses a certain threshold, the result is not collapse, but a reorganization of matter at the quantum level due to momentum space becoming energetically favorable over xyz flat space.

Terry, the problem is that this is too vague an idea to be physics until you put it into math. And, you also have the problem that there are a bunch of different types of elementary particles (quarks, leptons, gauge bosons, Higgs), and you need to deal with all of these (in mathematical detail).

I saw a video of Feynman the other day talking with Fred Hoyle, and one of the things Feynman mentioned that he had learned from John Wheeler is that you should always make the minimal changes in existing theories that you possibly can in trying to understand new phenomena.

The problem, he explained, is that there are just too many wild and crazy new ideas that you can try, and you get lost in the forest of new ideas. And, indeed, usually established science turns out to be sufficient: usually, you do not really need radical new ideas at all.

As you quote from the Baccetti paper, "Neither trapped surface, nor horizon nor singularity ever form." And they are just using standard GR. Perhaps when well-understood in standard GR, all the problems go away. Or not -- but you won't know unless you first try to solve the problems using existing theories. (To anyone who has not read the paper, I believe that Baccetti et al. are not claiming that they have shown there are never horizons or singularities but rather that these do not show up in the particular model they are analyzing.)

Terry also wrote:
>When you ponder why quasar black holes were so energetic in the early universe and are so quiet now, exploring the possibility of stable and unstable multi-particle band states and band interactions in dark mirrors could have interesting predictive power.

When you look at black holes, what you are actually seeing is the accretion disks. And the accretion disks can be very flamboyant indeed! Probably, quasars had much more spectacular accretion disks in the early days than they have today: indeed, that is what one would expect.

15. "I am proposing that when the stress-energy tensor crosses a certain threshold, the result is not collapse, but a reorganization of matter at the quantum level due to momentum space becoming energetically favorable over xyz flat space."

A horizon can form at any matter density. Even our universe, at its current density, would be a black hole with a radius of about 10 billion light years, if it were static and had a center. GR does not require high curvatures to generate a horizon.

We could be living inside a black hole without noticing it (for some time). Johannes Koelman described it quite nicely here:
https://hammockphysicist.wordpress.com/2017/01/29/how-to-stomach-a-black-hole/

"The scenario usually considered that leads to ‘finding yourself inside a black hole’ is the scenario of a static black hole. A black hole in eternal existence. The only way to find yourself inside such a black hole is by falling into it. Yet there is an alternative scenario leading to the situation of ‘being inside a black hole’. It’s the scenario of a black hole growing from inside you. And this alternative scenario gives a better insight into the global spacetime nature of black holes. "
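The ~10-billion-light-year figure above is easy to sanity-check: a uniform ball of density ρ equals its own Schwarzschild radius 2GM/c² when r = sqrt(3c²/(8πGρ)). A quick sketch (the density value is an assumed round number near the critical density, not taken from the thread):

```python
import math

G   = 6.67430e-11   # gravitational constant, m^3 / (kg s^2)
c   = 2.99792458e8  # speed of light, m / s
ly  = 9.4607e15     # metres per light year
rho = 8.6e-27       # assumed mean cosmic density, kg / m^3 (~critical density)

# Setting r = 2 G M / c^2 with M = (4/3) pi r^3 rho and solving for r:
#   r = sqrt(3 c^2 / (8 pi G rho))
r = math.sqrt(3 * c**2 / (8 * math.pi * G * rho))
print(f"horizon-scale radius ~ {r / ly:.1e} light years")
```

The result lands on the order of 10^10 light years, consistent with the claim that no exotic density is needed for a horizon at cosmic scales.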

16. @PhysicistDave,

Thanks! That was a well-written and constructive response, though I'm surprised and a bit disappointed in myself that I came over in any way as anti-math. I'm busy today but should have some comments by tomorrow.

17. Terry wrote to me:
>I'm surprised and a bit disappointed in myself that I came over in any way as anti-math.

Oh, I'm not accusing you of being "anti-math." I'm merely making the generic point that ideas that seem interesting are a dime a dozen... until you try to make them precise by phrasing them in mathematical terms.

It's amazing how many ideas seem crystal clear and obviously true until you try to put them in math and then you find out you do not even have a consistent idea at all.

Einstein was once asked whether he kept a notebook to write down all his brilliant ideas. He replied that he did not have enough brilliant ideas to need a notebook!

Dave

18. Hi, PhysicistDave!
Hawking was correct to say that replacing event horizons with apparent horizons allows information to migrate outwards through a horizon, solving the black hole information paradox. It turns the horizon into an "acoustic" horizon, described by an acoustic metric, and acoustic metrics generate the classical counterpart of Hawking radiation.

What Hawking neglected to mention (in the texts I've seen) is that we cannot make that substitution without losing special relativity. SR-based gravitational physics generates an absolute horizon.

So while Hawking's solution technically works, it hasn't been gratefully embraced by the physics community, because in order to use it, you have to drop both Einstein's special and general theories of relativity, and go over to a rewritten version of GR that doesn't reduce exactly to SR (an idea that doesn't fill most physicists with wild enthusiasm).

So yes, there's a technical fix available, but the cost is generally considered to be too high to be worth paying. One ends up breaking an awful lot of other things.

32. Whether this would go anywhere, who knows, but there might be a "quantum neural network solution" developed based on a mapping between a black hole quantum field model and "an artificial quantum neural network based on gravity-like synaptic connections":

Black Holes as Brains: Neural Networks with Area Law Entropy
https://arxiv.org/abs/1801.03918

1. Quantum hair on the stretched event horizon of a black hole, or if you prefer a variant of the idea called fuzzballs, is spread around the event horizon. Lorentz contraction squeezes qubits onto a surface and Einstein lensing means they appear around the horizon. So the Planck unit of area any qubit occupies is squeezed and folded, and this has some analogues with aspects of chaos theory such as the cat map or the "horseshoe dynamics" of folding phase space. These qubits that form quantum hair of course interact and enter into entanglements in ways similar to a neural network.

I doubt, though, that black holes are conscious entities. There is an entertaining science fiction short story by Varley, "Lollipop and the Tar Baby" as I recall the title, about a conscious micro-black hole. On balance I think black holes have less consciousness than a box of bricks.

33. Some commenters have mentioned that for a distant observer, it takes an infinite amount of time for infalling matter to reach the event horizon. This classical result relies on the assumption that the time variable "t" in the Schwarzschild solution represents time as seen by a distant observer. This assumption is usually justified by the fact that when r (the radial coordinate) goes to infinity, the Schwarzschild metric coincides with the "flat" metric from special relativity. One under-appreciated fact is that there are other time variables (namely Eddington's time t_E) that have this property. If t_E is interpreted as time seen by a distant observer, it now takes *finite* time for infalling matter to reach the horizon. So it is perhaps time to reconsider the accepted picture...
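The contrast between the two time variables can be made concrete with a toy numerical check (a sketch in geometric units, not from the comment): along an ingoing radial light ray, the elapsed Schwarzschild coordinate time tracks the tortoise coordinate and diverges logarithmically as r approaches r_s, while the ingoing Eddington-Finkelstein time advances only by r0 - r.

```python
import math

rs = 1.0        # Schwarzschild radius (geometric units, G = c = 1)
r0 = 10.0 * rs  # starting radius of an ingoing radial light ray

def tortoise(r):
    """Regge-Wheeler tortoise coordinate r* = r + rs*ln(r/rs - 1)."""
    return r + rs * math.log(r / rs - 1.0)

# Along an ingoing null ray, v = t + r*(r) is constant, so the elapsed
# Schwarzschild coordinate time from r0 to r is r*(r0) - r*(r), which
# blows up as r -> rs. The ingoing Eddington-Finkelstein time advances
# only by r0 - r, which stays finite.
for eps in (1e-2, 1e-4, 1e-8):
    r = rs + eps
    dt_schwarzschild = tortoise(r0) - tortoise(r)
    dt_eddington = r0 - r
    print(f"r - rs = {eps:.0e}:  dt_Schw = {dt_schwarzschild:7.2f}   dt_EF = {dt_eddington:.4f}")
```

Halving eps adds roughly rs*ln(2) to the Schwarzschild time, so it grows without bound, while the Eddington-Finkelstein time converges to r0 - rs.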

34. Information is energy. Energy cannot be destroyed, but it can change form. So I don't see the problem.

35. I am on the same page as Maudlin. Why is this a problem at all?

Say an entangled particle ends up in a black hole (ignoring time issues). Particle A is entangled with particle B (far away). The information of particle B is lost forever. They are entangled, but it becomes irrelevant, we will never be able to know anything anymore about A through B.
Now the black hole shrinks. As long as A is still out there, the information isn't lost. Say the black hole eventually completely evaporates. New information spits out in the form of new particles. They miss their particle/antiparticle counterpart, but nothing changes about the balance of information.

Remark: I wonder if the "spooky action at a distance" is fundamentally severed after B has passed the event horizon.

36. Information loss might not be the only thing that the black hole suffers during Hawking radiation. Other imbalances might result. I am assuming that infalling Hawking radiation particles carry a positive charge into the black hole. During the evaporation of the black hole through Hawking radiation, what adjustment mechanism(s) keeps the accounting of the charge of the black hole in balance, and how does this mechanism keep the black hole from developing a massive negative energy bubble that the vacuum around the black hole finds disruptive?

37. Hi Bee, many thanks for these interesting posts. I've one question: I've heard, that for an external observer, far away from the event horizon of the black hole, time evolves slower at points in space which are closer to the horizon. The extreme case: time evolution stops at the horizon.
If that's true, then nothing crosses the event horizon as far as external observers are concerned. This leads to the question: why is information lost?

38. The nature of the singularity at the core of the black hole might tell us something about what is inside the black hole. If the vacuum existed inside the black hole, then the Hawking radiation mechanism would be active near the surface of the singularity. Such a Hawking radiation mechanism would be creating mass and energy inside the black hole, which would cause the black hole to expand its gravitational potential over time. So the black hole must not allow the vacuum to exist within its event horizon.

39. Just like there really is no singularity of "infinite" density at the center of a black hole (it's just a mathematical figment from Einstein's classical equations), so too there may be no evaporation of a black hole from Hawking radiation. Consider for example a 10 solar mass black hole located in some typical area of the Milky Way. Over a billion years, will this black hole accrete more mass than it loses from Hawking radiation? The answer, it seems to me, is that there is far more accretion than radiation by orders of magnitude. So where is the information loss problem? Is it down in the realm of insignificant digits? There is no real problem.

My own view is that a black hole may evaporate, but by quantum mechanical means other than Hawking radiation. A black hole is simply a stellar corpse which cannot be supported by the quantum mechanical effects of electron degeneracy pressure (white dwarf) or neutron degeneracy pressure (neutron star). So, does the corpse collapse into an object of infinite density which allows nothing to escape? Balderdash! There must be another level of degeneracy pressure which prevents the singularity. Quark degeneracy? Preon degeneracy? Unknown at this point.

So, what would likely happen inside the event horizon if we assume this third quantum mechanical pressure? On black hole formation, all the incoming matter and energy from the stellar corpse would collapse inward at high velocity until it reached the third degeneracy pressure, then it would be forced to bounce off violently, just like a supernova explosion. But this supernova explosion would be gravitationally contained. By Einstein's equivalence principle, substantial matter being accelerated outward by the bounce will slightly counteract the gravity of all the infalling matter which formed the event horizon before the bounce. This will cause the event horizon to jiggle or oscillate just enough to allow some radiation to escape; no visible light, just high-energy, low-frequency radio waves.

And then what happens? The cycle repeats because the explosion is contained, so all the matter will inflow again, and be violently bounced off again, ad infinitum, almost like a spherical combustion engine. All the while, the event horizon is breathing in and out in response to tremendous amounts of mass being accelerated in and out, with radiation leaving in graduated pulses almost like a pulsar. Now this is the real radiation leaving a black hole! It's not like tiny Hawking radiation.

If this is true, then a black hole could indeed evaporate, and the so-called "problem" of Hawking radiation would pale in comparison to the actual reality of nature.

Experimental verification: Look for high-energy, low-frequency radio-wave pulsations near suspected black holes. Are pulsars beaming their energy, or radiating isotropically? Are quasars glimpses into the interior of black holes because of relativistic effects?

In any case, Hawking radiation seems such a quaint and minor problem to me.

1. Henry wrote:
>Just like there really is no singularity of "infinite" density at the center of a black hole (it's just a mathematical figment from Einstein's classical equations), so too there may be no evaporation of a black hole from Hawking radiation. Consider for example a 10 solar mass black hole located in some typical area of the Milky Way. Over a billion years, will this black hole accrete more mass than it loses from Hawking radiation? The answer, it seems to me, is that there is far more accretion than radiation by orders of magnitude. So where is the information loss problem?

I think you, and probably a lot of people, misunderstand the point of the paradox.

Yes, solar-mass black holes are actually (much) cooler than the cosmic microwave background radiation, so they are taking in more mass from the CMB alone than they radiate through Hawking radiation.
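This comparison follows from the standard Hawking temperature formula T_H = ħc³/(8πGMk_B); a back-of-envelope sketch using standard constants:

```python
import math

hbar  = 1.054571817e-34  # reduced Planck constant, J s
c     = 2.99792458e8     # speed of light, m / s
G     = 6.67430e-11      # gravitational constant, m^3 / (kg s^2)
kB    = 1.380649e-23     # Boltzmann constant, J / K
M_sun = 1.989e30         # solar mass, kg
T_CMB = 2.725            # cosmic microwave background temperature, K

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

for m in (1.0, 10.0):
    T = hawking_temperature(m * M_sun)
    print(f"{m:4.0f} M_sun: T_H = {T:.2e} K  vs  T_CMB = {T_CMB} K")
```

A solar-mass hole comes out around 6e-8 K, seven to eight orders of magnitude colder than the CMB, so such holes absorb far more than they radiate today.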

But, that's not the point. The issue is a conceptual one having to do with contradictions in our theories, not with some observational paradox.

Trying to weld QM and GR together via Hawking radiation does not quite work: there is a paradox.

So, why should we physicists care? After all, we know that Hawking's analysis is not based on a full theory of quantum gravity.

The answer is that we hope trying to understand the paradox will give us clues that will help us move forward in understanding quantum gravity.

Henry also wrote:
> So, does the corpse collapse into an object of infinite density which allows nothing to escape? Balderdash! There must be another level of degeneracy pressure which prevents the singularity. Quark degeneracy? Preon degeneracy? Unknown at this point.

The answer I was given was that there are very general arguments having to do with pressure vs. density and the speed of sound necessarily being no greater than light speed that show that this strategy will fail.

Was I totally convinced? No, but no one has yet taken the approach you (and decades ago, I myself) suggested and made it work.

In fact, I think it might work when you get down to the Planck scale and therefore produce a Planckian remnant. But, of course, I have been just as unsuccessful as everyone else in trying to do this.

Dave

2. "There must be another level of degeneracy pressure which prevents the singularity."

The pressure makes a contribution to the stress-energy tensor and hence contributes to the gravitational force. At pressures a bit higher than exist at the core of neutron stars, this contribution to gravity will prevent any degeneracy pressure from being able to resist the collapse, as the higher the pressure gets, the more pressure would be needed to stop it, and that gap only increases with increasing pressure.

3. This has been investigated. Ideas of a force or some quantum process, such as the Pauli exclusion principle, countering implosion have been considered. The upshot of these investigations is that any process capable of resisting all possible gravitational implosions would have to propagate signals faster than light.

4. According to Wikipedia, the existence of "quark stars" is an open problem in physics, so the idea of "another level of degeneracy" might perhaps still be viable?

40. The discussions here are admittedly quite a bit more sophisticated than their medieval versions, but the solution seems to be that 42 angels can dance on the head of a pin.

41. Hi, Sabine. I would very much appreciate your opinions one these two papers:

https://arxiv.org/abs/1801.05923

https://arxiv.org/abs/1901.01902

The former is the only answer to Maudlin's paper that I'm aware of.

The latter argues against the statistical interpretation of black hole entropy.


@PhysicistDave, I like your idea that new papers should in general make only small deltas to existing work, but let's look at that strategy more closely.

Evolutionary programming also keeps mutations small to keep changes within smooth regions of the solution space. Larger mutations tend to break things. E.g., if you make one gear in a clock a tiny bit larger, there's a fair chance the clock will keep time better. But if you make the gear twice as large, you just break the clock.

For small-delta searches to work, the solution space thus must be topologically smooth at the scale of the deltas. In science, smooth solution spaces are created by a preexisting combination of well-established experimental results and solid theories for those results.
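The clock-gear analogy can be turned into a toy experiment (a hypothetical sketch, not from the comment): a (1+1)-style hill climber with small mutations converges on a smooth landscape, but gets trapped when the landscape has fine-scale ridges at the scale of the mutations.

```python
import math
import random

def smooth(x):
    """A smooth 1-D solution space with a single peak at x = 3."""
    return -(x - 3.0) ** 2

def rugged(x):
    """Same global shape, plus high-frequency ridges the climber must cross."""
    return -(x - 3.0) ** 2 + 2.0 * math.sin(40.0 * x)

def hill_climb(f, step=0.05, iters=5000, seed=1):
    """(1+1) search: keep a random mutation only if it improves fitness."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) > f(x):
            x = cand
    return x

x_smooth = hill_climb(smooth)  # climbs all the way to the peak near x = 3
x_rugged = hill_climb(rugged)  # stuck on the first ridge near the start
```

With step=0.05 the ridges of the rugged landscape (spaced roughly 0.16 apart, about 4 units deep) are uncrossable, so the climber never leaves its first local maximum: the same small deltas that work on the smooth space fail on the "fractal" one.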

This does not appear to be the situation for research on intense gravity fields.

Intense gravity research, especially for the Planck foam domain, is instead the product of mathematical extrapolations of older physical theories that stopped generating new experimental ideas somewhere back in the 1900s.

Experimentation thus was replaced by unfettered mathematical extrapolation. Alas, the danger of this is that it encourages the growth of mutually contradictory schools and sub-schools in which each school has its own set of beliefs about what is important and what is not. Even worse, without experimental validation, nothing ever gets pruned. Branches just grow and create more subbranches.

So here's the nub: Small-delta exploration requires smooth solution spaces, and thus fails in situations like this. Each new delta instead just adds another small branch to the fractal tree. While individual additions may be well-intended, well-structured, and impressive, the fractal geometry ensures that as a whole the tree will be self-contradictory. That's another way of saying that the tree is mostly noise, information that despite the work that went into it can never be transformed into actionable work.

Machine intelligence does not care one whit about how many smart people devoted their lives to some research topic, so what do algorithms from that domain say? Well, if a large exploration branch has gone fractal, machine intelligence is brutal and always says the same thing: Prune the entire branch and start over. Economists say it this way: Sunk-cost arguments are a great way to go broke.
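The pruning rule can be sketched as a toy search policy: a branch is abandoned the moment it stops improving, and the depth already invested in it (the sunk cost) never enters the decision. Every name here (`explore`, `expand`, `score`, `patience`) is an invented illustration, not a reference to any real library:

```python
def explore(root_candidates, expand, score, patience=3):
    """Greedy search that abandons a whole branch once it stalls.

    A branch's accumulated depth (its 'sunk cost') plays no role in
    the pruning decision: only recent improvement counts.
    """
    best_overall = None
    for start in root_candidates:
        node, best, stall = start, score(start), 0
        while stall < patience:
            child = expand(node)
            s = score(child)
            if s > best:
                best, stall = s, 0   # progress: reset the stall counter
            else:
                stall += 1           # no progress: one strike closer to pruning
            node = child
        if best_overall is None or best > best_overall:
            best_overall = best
    return best_overall

# Toy usage: each expansion steps the node forward; the score plateaus
# at 10, so the branch is pruned shortly after reaching it.
best = explore([0], expand=lambda n: n + 1, score=lambda n: min(n, 10))
# best == 10
```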

While I'm just a poor, bewildered information specialist, I do deeply admire the analytical strategies of physicists who remain steady in their belief that in the end, the universe will turn out to be surprisingly simple. I also understand search spaces decently well, and alas, I do not easily see the current research strategies for intense gravity topics ever converging into testable results.

Thus I would not-so-humbly dare to suggest this area needs a reboot, one that begins by taking a very hard look at its own earliest and most experimentally established roots. Here are three examples of what I mean by that:

(1) Quantum uncertainty really is absolute. Thus if location-momentum uncertainty says you would need almost infinite energy to create even a small patch of Planck foam, it means no such foam exists, not even abstractly.

(2) Conservation of quantum numbers is the first principle of reality, not a derived one. Topological smoothness and differential equations arise from absolute conservation when it is delocalized over various spaces.

(3) The quantum theory of spacetime is ordinary quantum mechanics, reinterpreted as a framework for creating space-like and time-like relationships between particles, with second quantization being one example. The goal of this fabric is to ensure universal absolute conservation of quantum numbers. Gravity in this framework is reduced to a self-limiting topological effect.

1. Terry wrote to me:
>@PhysicistDave, I like your idea that new papers should in general make only small deltas to existing work, but let's look at that strategy more closely.

No, you misunderstand my (and Feynman's) view. Our point is that the delta should be zero: i.e., you should try hard, really hard, to explain new phenomena with existing theories, as those theories already are.

And, usually that works. Our scientific theories nowadays are good enough that usually you do not need to change them to explain new phenomena. This was true, by the way, even in the late nineteenth century: physics back then (Newtonian mechanics, Maxwellian electrodynamics) was really pretty good.

Of course, we all learn about the phenomena where classical physics needed to be changed: Michelson-Morley, the black-body problem, etc. With the advantage of hindsight, we now see where the old theories were doomed to failure. But, physicists at the time could only work this out by trying hard to make the established theories work and then focusing in on the few areas where they did not.

Furthermore, in seeing how and where old theories fail, we often get clues as to how to move beyond the old theories: this was certainly true, for example, with Planck and the black-body problem.

So, no, not small changes but zero changes -- unless and until you have a real feel as to why zero changes are not working. Then you may have finally left kindergarten and be ready for first grade: i.e., then you may be ready to try to discover truly novel theories.

I am afraid that people generally forget how really good the innovators were at the old physics of their time before they managed to discover new physics: this is true of Maxwell, Planck, Einstein, Schrodinger, Feynman, Weinberg, and many others.

"I'm gonna invent something brilliant and completely new" is almost always a losing strategy. "I'm going to master existing physics and apply it to new phenomena and only try to change existing theories if forced to" -- that strategy often works.

And, yes, I would apply that to most of the work in quantum gravity: I think there has been way too much wild and crazy speculation and too little attempt to master what is actually known at the level, say, of Adler, Bazin, and Schiffer and try to understand what that tells us about the actual formation of collapsed stars.

This criticism does not apply to everyone: there are people like Jim Bardeen and Baccetti's group and a number of others that are trying to understand the nitty-gritty details using the theories we already have. Of course, sometimes they will succeed and sometimes they will fail.

But, based on history, I suspect it is a better strategy than the sort of "Here are three new first principles!" approach you are suggesting.

To be sure, I might be wrong. If someone follows your strategy and comes up with a brilliant new theory of quantum gravity that obviously works, then you win the bet!

But it has not happened yet.

2. Dave, chatting with you in this thread has been lots of fun and thought provoking, so I cheerfully give you the last word! See you in future threads... :)

43. I don't quite agree with your comments in point 4. It can be done and in fact has been done.
To obtain an observer independent formulation of quantum mechanics, Fröhlich, in his Events, Trees, Histories (ETH) approach to quantum mechanics, puts the loss of access to information at the heart of quantum mechanics.
That approach essentially claims to resolve black hole information loss as a corollary of resolving the measurement problem.

44. I think you got the point. Information is just lost at the transition from quantum physics to the classically observed physical states within the measurement process.
Personally, I also have a problem with the very definition of information within quantum theory. Clearly, the definition is straightforward, since the so-called "information" covers what can be measured at least in principle, and for quite obvious reasons algorithmic information is neglected. Most of our daily-life information, however, is of an algorithmic nature: for example, the information stored in artificial or natural neural networks (human brains), in evolutionary or genetic algorithms, or in the contents of an encrypted hard disk.
I would therefore suggest that current "information theories" would be better named "theories of random noise" instead.

45. Hi Sabine, I'm enjoying your book so much, and it leads me to ideas like the following.

Maybe the solution to the information paradox is: 10^60 billion years is the average time before reality crashes with an "INFORMATION DESTROYED ERROR". One day it will happen and the universe will fall apart, but until then nobody will notice that old, remote, frozen information is being destroyed by a black hole somewhere in space. Moreover, this will not start until the CMBR is much, much colder. Seriously, a big program full of bugs can run for years and you will only notice small glitches (and reality is full of glitches), until you step into a severe one and the program crashes.

46. I was reading Gravitation by Misner, Thorne and Wheeler. On page 933 it refers to a paper by Hawking which is implied to say that black holes can never bifurcate.

In other words, GR is not time reversible, since the coalescing of black holes is possible.

It seems like Hawking radiation is just a red herring.

47. This comment has been removed by the author.