First of all, what is the black hole information loss problem, or paradox, as it’s sometimes called? It’s an inconsistency between physicists’ currently most fundamental laws of nature: quantum theory and general relativity.

Stephen Hawking showed in the early nineteen-seventies that if you combine these two theories, you find that black holes emit radiation. This radiation is thermal, which means that apart from the temperature, which determines the average energy of the particles, the radiation is entirely random.

This black hole radiation is now called Hawking Radiation and it carries away mass from the black hole. But the radius of the black hole is proportional to its mass, so if the black hole radiates, it shrinks. And the temperature is inversely proportional to the black hole mass. So, as the black hole shrinks, it gets hotter, and it shrinks even faster. Eventually, it’s completely gone. Physicists refer to this as “black hole evaporation.”
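To get a sense of the scales involved, the standard semiclassical formulas are T = ħc³/(8πGMk_B) for the Hawking temperature and t ≈ 5120πG²M³/(ħc⁴) for the evaporation time. The short script below (an illustration added here, not part of the original post) evaluates both for a solar-mass black hole:

```python
import math

# Physical constants in SI units
hbar  = 1.054571817e-34  # reduced Planck constant, J s
c     = 2.99792458e8     # speed of light, m/s
G     = 6.67430e-11      # Newton's constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23     # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

# Hawking temperature: inversely proportional to the mass
T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)

# Evaporation time: grows with the cube of the mass
t_evap  = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
t_years = t_evap / 3.156e7  # convert seconds to years

print(f"Hawking temperature: {T:.1e} K")            # ~6e-8 K
print(f"Evaporation time:    {t_years:.1e} years")  # ~2e67 years
```

A solar-mass black hole is colder than the 2.7 K cosmic microwave background, so today it absorbs more radiation than it emits, which is why the evaporation discussed here is unobservable in practice.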

When the black hole has entirely evaporated, all that’s left is this thermal radiation, which depends only on the initial mass, angular momentum, and electric charge of the black hole. This means that besides these three quantities, it does not matter what you formed the black hole from, or what fell in later: the result is the same thermal radiation.

Black hole evaporation, therefore, is irreversible. You cannot tell from the final state – that’s the outcome of the evaporation – what the initial state was that formed the black hole. There are many different initial states that will give the same final state.

The problem is now that this cannot happen in quantum theory. Processes in quantum theory are always time-reversible. There are certainly processes that are in practice irreversible. For example, if you mix dough. You are not going to unmix it, ever. But. According to quantum mechanics, this process is reversible, in principle.

In principle, one initial state of your dough leads to exactly one final state, and using the laws of quantum mechanics you could reverse it, if only you tried hard enough, for ten to the five-hundred billion years or so. It’s the same if you burn paper, or if you die. All these processes are for all practical purposes irreversible. But according to quantum theory, they are not fundamentally irreversible, which means a particular initial state will give you one, and only one, final state. The final state, therefore, tells you what the initial state was, if you have the correct differential equation. For more about differential equations, please check my earlier video.
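The statement that a particular initial state gives one, and only one, final state is the unitarity of quantum time evolution, and it is easy to demonstrate numerically: evolve a state with U = exp(−iHt), then apply U† and recover the initial state exactly. A minimal sketch, with a randomly chosen Hermitian Hamiltonian used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian Hamiltonian on a 4-dimensional Hilbert space
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Unitary time evolution U = exp(-i H t), built from the eigendecomposition of H
t = 1.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Evolve a normalized initial state forward, then apply U† to run it backward
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
psi_final = U @ psi0
psi_recovered = U.conj().T @ psi_final

print(np.allclose(psi_recovered, psi0))  # True: the final state determines the initial state
```

Because U†U = 1, the map from initial to final states is invertible in principle, no matter how scrambled the final state looks.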

So you set out to combine quantum theory with gravity, but you get something that contradicts what you started with. That’s inconsistent. Something is wrong about this. But what? That’s the black hole information loss problem.

Now, there are four points I want to emphasize here. First, the black hole information loss problem actually has nothing to do with information. John, are you listening? Really, the issue is not loss of information, which is an extremely vague phrase; the issue is time irreversibility. General relativity forces a process on you which cannot be reversed in time, and that is inconsistent with quantum theory.

So it would better be called the black hole time irreversibility problem, but you know how it goes with nomenclature, it doesn’t always make sense. Peanuts aren’t nuts, vacuum cleaners don’t clean the vacuum. Dark energy is neither dark nor energy. And black hole information loss is not about information.

Second, black hole evaporation is not an effect of quantum gravity. You do not need to quantize gravity to do Hawking’s calculation. It merely uses quantum mechanics in the curved background of non-quantized general relativity. Yes, it’s something with quantum and something with gravity. No, it’s not quantum gravity.

The third point is that the measurement process in quantum mechanics does not resolve the black hole information loss problem. Yes, according to the Copenhagen interpretation a quantum measurement is irreversible. But the inconsistency in black hole evaporation occurs before you make a measurement.

And related to this is the fourth point, it does not matter whether you believe time-irreversibility is wrong even leaving aside the measurement. It’s a mathematical inconsistency. Saying that you do not believe one or the other property of the existing theories does not explain how to get rid of the problem.

So, how do you get rid of the black hole information loss problem? Well, the problem comes from combining a certain set of assumptions, doing a calculation, and arriving at a contradiction. This means any solution of the problem will come down to removing or replacing at least one of the assumptions.

Mathematically there are many ways to do that. Even if you do not know anything about black holes or quantum mechanics, that much should be obvious. If you have a set of inconsistent axioms, there are many ways to fix that. It will therefore not come as a surprise to you that physicists have spent the past forty years coming up with ever new “solutions” to the black hole information loss problem, yet they can’t agree on which one is right.

I have already made a video about possible solutions to the black hole information loss problem, so let me just summarize this really quickly. For details, please check the earlier video.

The simplest solution to the black hole information loss problem is that the disagreement is resolved when the effects of quantum gravity become large, which happens when the black hole has shrunk to a very small size. This simple solution is incredibly unpopular among physicists. Why is that? It’s because we do not have a theory of quantum gravity, so one cannot write papers about it.

Another option is that the black holes do not entirely evaporate and the information is kept in what’s left, usually called a black hole remnant. Yet another way to solve the problem is to simply accept that information is lost and then modify quantum mechanics accordingly. You can also put information on the singularity, because then the evaporation becomes time-reversible.

Or you can modify the topology of space-time. Or you can claim that information is only lost in our universe but it’s preserved somewhere in the multiverse. Or you can claim that black holes are actually fuzzballs made of strings and information creeps out slowly. Or, you can do ‘t Hooft’s antipodal identification and claim what goes in one side comes out the other side, Fourier-transformed. Or you can invent non-local effects, or superluminal information exchange, or baby universes, and that’s not an exhaustive list.

These solutions are all mathematically consistent. We just don’t know which one of them is correct. And why is that? It’s because we cannot observe black hole evaporation. For the black holes that we know exist, the temperature is way, way too small to be observable. It’s below even the temperature of the cosmic microwave background. And even if it weren’t, we wouldn’t be able to catch everything that comes out of a black hole, so we couldn’t conclude anything from it.

And without data, the question is not which solution to the problem is correct, but which one you like best. Of course everybody likes their own solution best, so physicists will not agree on a solution, not now, and not in 100 years. This is why the headline that the black hole information loss problem is “coming to an end” is ridiculous. Though, let me mention that I know the author of the piece, George Musser, and he’s a decent guy and, the way this often goes, he didn’t choose the title.

What’s the essay actually about? Well, it’s about yet another proposed solution to the black hole information problem. This one claims that if you do Hawking’s calculation thoroughly enough, then the evaporation is actually reversible. Is this right? Well, that depends on whether you believe the assumptions that were made for this calculation. Similar claims have been made several times before, and of course they did not solve the problem.

The real problem here is that too many theoretical physicists don’t understand or do not want to understand that physics is not mathematics. Physics is science. A theory of nature needs to be consistent, yes, but consistency alone is not sufficient. You still need to go and test your theory against observations.

The black hole information loss problem is not a math problem. It’s not like trying to prove the Riemann hypothesis. You cannot solve the black hole information loss problem with math alone. You need data, there is no data, and there won’t be any data. Which is why the black hole information loss problem is for all practical purposes unsolvable.

The next time you read about a supposed solution to the black hole information loss problem, do not ask whether the math is right. Because it probably is, but that isn’t the point. Ask what reason we have to think that this particular piece of math correctly describes nature. In my opinion, the black hole information loss problem is the most overhyped problem in all of science, and I say that as someone who has published several papers about it.

On Saturday we’ll be talking about warp drives, so don’t forget to subscribe.

Right to the point, Sabine.

To quote Einstein:

“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”

Sabine, can you please elaborate on your statement "All these processes are for all practical purposes irreversible. But according to quantum theory, they are not fundamentally irreversible, which means a particular initial state will give you one, and only one, final state." Why and how does quantum mechanics allow reversibility? And does this not already put quantum mechanics into conflict with the second law of thermodynamics even if we never say one word about black hole information loss? Finally, is this inconsistency in some way connected to quantum mechanics being a linear theory and general relativity being nonlinear?

That's just a feature of quantum mechanics, and that happens to be an *extremely* well confirmed theory. I don't know what you want me to elaborate on. No, this does not put quantum mechanics in conflict with the second law of thermodynamics. The second law of thermodynamics is an approximate, average description. It is not fundamental. The fundamental theory, quantum mechanics, is reversible.

Given Sergei's mention that what you referred to as the reversibility of quantum mechanics is tied to its unitarity, that is the point on which I was seeking elaboration.

Dr. Hossenfelder can correct me if I am wrong, but I believe that a quantum mechanical description (including time) would be a time-dependent wave function, describing how probability density flows over time. That wave function is reversible I believe - one could change the sign of the time variable and still have a valid equation. But once one makes an observation, one changes the wave function, and that is irreversible. The observation changes what is known about the state of the system. That is when entropy comes into play, I believe. Entropy increases when the number of possible states of the _system_ increases, and that occurs when one makes an observation of some part of the system.

We can consider quantum mechanics as a geodesic flow. Consider the Schrödinger equation i∂_tψ = Hψ; the evolution of the wave function can be considered to be contained in a unitary matrix U, ψ(t) = U(t)ψ(0), so that i∂_tU = HU. Then

iU^†U_t = U^†HU with U_t = ∂_tU.

Now take the time derivative of this

i∂_t(U^†U_t) = ∂_t(U^†HU),

where it is easy to see the right hand side is zero. Chain rule on the left gives

U_t^†U_t + U^†U_{tt} = 0.

We now multiply from the left by U. Differentiating UU^† = 1 gives UU_t^† = −U_tU^†, so this becomes

U_{tt} − U_tU^†U_t = 0.

This is a geodesic equation, and it can be generalized to a geodesic deviation equation.
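For U(t) = exp(−iHt) one can verify this directly: U_t = −iHU and U_{tt} = −H²U, while U_tU^†U_t = (−iHU)U^†(−iHU) = −H²U as well, so U_{tt} = U_tU^†U_t. A quick numerical check (the Hamiltonian here is an arbitrary Hermitian matrix, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary Hermitian Hamiltonian
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2

# U(t) = exp(-iHt) at some time t, via the eigendecomposition of H
t = 0.8
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Analytic derivatives of U(t) = exp(-iHt)
U_t  = -1j * H @ U   # first time derivative
U_tt = -H @ H @ U    # second time derivative

# Geodesic equation: U_tt = U_t U^dagger U_t
lhs = U_tt
rhs = U_t @ U.conj().T @ U_t
print(np.allclose(lhs, rhs))  # True
```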

We may then compare this to general relativity with geodesics. We know by Penrose and Hawking that in a black hole, geodesics are incomplete. This occurs on a singularity. We may similarly say that when a measurement occurs the unitary evolution and by extension this geodesic evolution of a unitary matrix ends.

There is an entropy interpretation here. With black holes there is entropy, measured by the area of the horizon. Similarly, the end of this unitary geodesic is also entropic.

Sabine, just and simply thanks (again) for the beautiful and clear explanation.

Sabine, thank you for hammering home the description of the black hole evaporation conundrum as irreversibility/loss of unitarity. This is something that a layperson can understand, as opposed to the nebulous "information loss".

Also, you have a few typos in the transcript, probably due to youtube auto-transcript feature, such as "(i)ntaiely".

Finally, could it be that we would find that black holes do not evaporate once you account for the properties of the detector? The standard approach is to assume an accelerating massless detector, then recalculate the expectation values of quantum fields from the inertial to the accelerating vacuum background, when deriving both the Unruh and the Hawking effect. On the other hand, when considering an inertial observer, one does not get the same thermal bath, even if the observer is in a circular orbit around the black hole and ought to see it evaporate eventually. I wonder if neglecting the spacetime changes resulting from the acceleration (e.g. assuming the locally flat spacetime, rather than, say, a locally Kinnersley photon rocket spacetime) changes the calculation enough to break unitarity.

Hi Sergei,

Thanks for pointing that out, I have fixed the typos.

As to the properties of the detector: the detector really doesn't matter for the evaporation. What matters is how you define the vacuum state: relative to which observer, at rest or accelerated, and at rest with respect to what. What breaks unitarity in the end is simply that part of the wave-function runs into the singularity and is lost there. You just can't reverse that.

Sorry for a naive question from an amateur, but could we in principle retain the simple GR+QM theory's ability to predict the future evolution of a quantum system containing black holes (e.g. a collection of extremely energetic particles that can with nonzero probability collide to produce micro black holes) by "manually" compensating for the parts of the wave function that get lost in black holes with a different wave function describing all the possible products of black hole evaporation? Essentially giving up linearity and reversibility, but patching up unitarity, since common sense dictates that this system should always have well-defined probability distributions of different measurement outcomes.

Also, can processes that are sufficiently energetic to result in micro black hole creation occur in nature at all? (Collisions between neutron stars? Or between black holes with accretion disks? Or early moments after the Big Bang?) If not, then this is a rather useless thought experiment with no relation to our universe.

> The detector really doesn't matter for the evaporation, what matters is how you define the vacuum state, relative to which observer, in rest, in rest with what, accelerated or not accelerated.

Indeed, different observers see different vacua, just like two observers can disagree on whether an accelerating electric charge radiates. However, in the case of an evaporating black hole a stationary observer at infinity (who sees Boulware vacuum?) must agree with an inertial observer at a finite distance (who sees a locally Minkowski vacuum state?) about the final fate of the black hole. If the inertial observer never sees any radiation, then they would not see the black hole lose mass at all. This is where I have trouble reconciling the two views. I certainly understand that, once you accept the premise of QFT on a fixed curved background, the math leads you to evaporation and irreversibility.

Or maybe I'm confused and an inertial observer on a circular trajectory does indeed see Hawking radiation, I was unable to find a reference that does the calculation.

I think what you are missing is that the inertial observer doesn't detect radiation *locally*. This doesn't mean there's no radiation.

Hi Sergei,

A geodesic observer orbiting the black hole should indeed see Hawking radiation. The Boulware vacuum is thought to be an appropriate vacuum state for spherical masses that don't have an event horizon, e.g. the Earth. It has been argued that for a black hole the Boulware vacuum is not appropriate, and that a vacuum state called the Unruh vacuum would be the appropriate one to use. The Boulware vacuum has no particles for a constant-r, accelerating observer, but this same observer in the Unruh state would detect particles. Keep in mind these states are for eternal mass distributions, i.e. the black hole always existed and didn't form at some time in the past. I believe this is why there is some ambiguity in the choice of correct vacuum state; there wouldn't be as much ambiguity if you considered a more realistic black hole that formed from a collapsing object, which is what Hawking originally did, but with approximations.

Also:

Thanks for the great article, Sabine.

Alex,

Thank you for the clarification re Boulware vs Unruh vacuum. The presence of a horizon of course makes all the difference.

> A geodesic observer orbiting the black hole should indeed see Hawking radiation.

I could not find any papers with an actual calculation, any suggestions? Or if, as Sabine suggested, a geodesic observer does not have to see radiation, how would such an observer notice the black hole losing mass?

Sergei,

I think Sabine's statement means that a geodesic detector outside a black hole would detect particles. The particles would just come from a non-local source, the black hole. They wouldn't be locally generated in the sense that an accelerated detector in Minkowski space sees radiation.

Unfortunately, I don't have a reference to an explicit calculation. Intuitively, though, a constant-r (accelerated) observer and a geodesic orbiting observer become the same as r goes to infinity. But the constant-r observer will always see radiation with temperature T = 1/(8πM) in natural units. Since the orbiting observer's motion gets arbitrarily close to the constant-r observer's motion at r = infinity, this observer should also measure radiation of the same temperature.

When I see theorists labor to solve the black hole problems, I understand that they want to take GR and QFT to their breaking point. They always state that something has to give, either unitarity or the horizon.

I do not see why this is wrong. There are no experiments that found any deviation from GR or QFT. Both theories are exact for all we know.

So I can see why it would be worthwhile to see which one breaks first, GR or QFT?

And I admit I do like the theoretical developments that are coming up to solve these puzzles, like the AdS/CFT equivalence, Verlinde's theory, or Netta Engelhardt's solution to the above problem.

Only if we have an idea of a theory will we know what to look for in, e.g., gravitational wave patterns.

Work forces me to make a brief comment only. The emission of Hawking radiation is computed with an ad hoc back-reaction of the metric. If we treat this instead in at least some WKB-ish way with the emission of gravitons then this apparent information loss may be removed. It is in a sense similar to Susskind's ER = EPR, but where for a complex black hole the "other black hole" is at I^+, or in a conformal AdS at the timelike infinity. The problem with ever cataloging information is we attempt to make a local accounting of information, but in fact this is impossible.

The solvability of a Black Hole's information is just a matter of calculating the ratio between its information equity and its information debt.

ReplyDeletePeople say "information loss" to mean "time irreversibility" because the state after an irreversible process, in a very real sense, takes less information to describe than the state before it. I think there is some merit to calling it information loss because "irreversibility" *also* has a potentially confusing second meaning: it might be taken to mean "irreversible due to entropy increase." In some papers on the border between physics and computer science people talk about "reversible computing," by which they mean "constant-entropy computing." So, since the word reversible had already gotten attached to thermodynamics, information loss got attached to non-time-reversible solutions.

Also:

A system can be time-irreversible without information loss! For example, consider the very simple equation f(t) = f'(t). The solution is exp(t), which is not symmetric under time-reversal, because exp(-t) is not a solution. In Lagrangian mechanics, the property that f(t) being a solution always implies f(-t) is a solution coincides with the property that the past can always be computed from the future; that coincidence need not hold in every physical system. Right now it is believed that what cannot happen in Lagrangian mechanics ultimately cannot happen in the fundamental universe, but future experimental discoveries may change that. We can certainly *imagine* exceptions even if we haven't seen any.
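The f(t) = f'(t) example can be checked with a few lines of code: a finite-difference derivative of exp(t) reproduces exp(t), while the derivative of exp(−t) is −exp(−t), so exp(−t) misses the equation by −2exp(−t). A small sketch (the step size and tolerances are arbitrary choices):

```python
import math

def deriv(f, t, h=1e-6):
    """Central finite-difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.0

# exp(t) satisfies f'(t) = f(t): the residual is ~0
residual_pos = deriv(math.exp, t) - math.exp(t)

# exp(-t) does not: its derivative is -exp(-t), so the residual is -2*exp(-t)
g = lambda x: math.exp(-x)
residual_neg = deriv(g, t) - g(t)

print(abs(residual_pos) < 1e-5)             # True: exp(t) is a solution
print(abs(residual_neg + 2 * g(t)) < 1e-5)  # True: exp(-t) misses by -2*exp(-t)
```

Note that the equation is still deterministic in both directions, since exp(t) can be run backward uniquely, which illustrates the point that time-reversal asymmetry and information loss are different properties.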

So that is my case for the two terms meaning different things, and for "information loss" being an acceptable way to phrase it, with some merits over "time-irreversibility."

As always, I concede that language is socially constructed and also that words mean whatever people think they mean, so all semantic debates are ultimately unfounded, and you should not worry too much about this comment. ;)

Much of physics is done for closed systems. We like to specify some set of degrees of freedom for a system, often a finite set. We then have a condition H = E, that the system is on an energy surface. For a quantum system this energy surface is given by the Schrödinger equation. This energy surface can evolve in various ways, and for a statistical system it can deform into great complexity. For a mole of atoms initially set at the corner of a room and then let go, this is what happens. The system will not return to its initial condition, though there is a Poincaré recurrence time very far into the future. In this case the degrees of freedom, or information, of the system are in principle conserved, but you will not be able to easily find them.

A system that is open is one which permits DoFs or information to come in from the outside world or to escape into this outside world. In the case of a quantum system subjected to some quantum noise or a thermal distribution of separable states, this energy surface H = E expands and thus entropy increases. If the system loses DoFs, this energy surface shrinks.

A quantum gravitational system is nonlocal. In the case of an anti-de Sitter spacetime the bulk has gravitational information that is nonlocal, which is equivalent to local quantum field or conformal field information on the boundary. A black hole is similar, and there is an AdS black hole correspondence. It is then difficult for an observer to identify all quantum information in a local region in the bulk. Even if in principle information is conserved, it is not possible to localize it.

The emission of a quantum particle by a black hole is accompanied by some gravitational field response. With Hawking radiation this is accounted for with a "by hand" change in the metric, the so-called metric back-reaction. This is a sort of hack or fix we impose due to insufficient knowledge of quantum gravitation. It is analogous to the bulk field reaction of gravitation in the AdS case.

We can see this as being a type of open-world system. Most of these models of quantum information attempt to find how a local QFT has conserved qubits in a black hole or in gravitation. There are some possible hints of this with the CFT boundary. However, it is not clear to me that this conformal infinity, a timelike region "at infinity", is what I would call local in any practical sense.

Nice and interesting post, subscribed!

When particles in the cake mix collide, they make contact for a very short time, and so the energy uncertainty is very high (in gases, about 10% uncertainty at room temperature), because ΔE Δt ≥ ħ. This uncertainty is carried to the next collision. So it is not clear how this is reversible at all.

ReplyDeleteTo me, quantum mechanics looks like fishy stuff, poorly understood by physicists even at the level of Young's double slit, unitary transformations, wave functions, etc. E.g., Feynmann claims in his 'Lectures' that a particle with unknown position has position with probability distribution uniform on all of space. No it doesn't: All of space has infinite Lebesgue measure and can't have a uniform probability distribution. Feynmann seemed to have problems with Probability 101, first week. Next, commonly QM (quantum mechanics) lectures and texts say that the wave functions form a Hilbert space. Wrong. A Hilbert space is a complete inner product space (W. Rudin). The wave functions are differentiable and, hence, continuous. Well, with the inner product, the continuous functions are not complete, that is, can converge to a function that is not continuous. With such elementary mistakes, how can I take QM seriously? Then there is the mystery of 'super position'. That looks a little like just linearity but there is more to it that is wildly obscure. QM wants to use the Fourier transform. Last time I checked, a function with compact support has a transform with infinite extent. That's not realistic. Another one is, when a photon hits a detector, its wave function goes away and, apparently, with a function of time that is no longer unitary. Okay, the wave function of interest includes both the photon and the detector so that after the photon is absorbed somehow THAT wave function has still been unitary -- but the details don't get discussed. Then send a photon to a beam splitter where, as in the Michelson-Morley interferometer, the wave function of the photon gets split into two parts, say, one part going north and the other, east. Have a detector 1 light year away in the path of the part going east. When get a detection, what happens to the part going north? It can be 2 light years away, so no fair saying that it instantly disappears. 
And no fair saying that it somehow doesn't count since might have mirrors that have the beam splitter a M-M device that would bring the two halves back together again. I'm questioning the idea that one photon splits into two parts and that, really, for the fringes, need many photons. The Hawking radiation assumes virtual particles from the Heisenberg uncertainty principle from Plancherel's result in Fourier theory. To me no fair saying that empty space obeys the uncertainty principle when it was derived from assuming photon or particle was there. Virtual particles sound fishy. But for Hawking radiation, one of the two virtual particles, the one that enters the black hole, has "negative energy" -- can buy that at eBay? And the negative energy is what is supposed to have the black hole evaporate? It all sounds fishy to me. Net, to me QM seems so fishy that it's hopeless to say anything solid about QM and black holes. QM needs to be cleaned up first.

(Most) physicists don't understand probability or statistics, nor do they let that slow them down!

In any high energy vacuum environment as exists inside a black hole, the Higgs field will be overcome and this false vacuum will convert all matter to energy. Inside a black hole, or equivalently in any high-energy false vacuum bubble state where the Higgs field is superseded, only photons can persist. In these high energy vacuum states, this reduction of matter to pure EMF is a one-way reaction because of PT symmetry breaking. If those photons ever tunnel their way outside the black hole by any means, the structure of the matter that originally went into that black hole will be forever lost.

It is my belief that when any change in the energy content of the vacuum occurs, PT symmetry breaking also results. In more detail, when the Higgs field is overruled by the development of a false vacuum bubble, PT symmetry breaking occurs.

I believe that gravity is a red herring. The key driver of information loss as relates to the structure of matter is the development of a false vacuum bubble. This bubble will produce PT symmetry breaking which is where the structure of matter is forever lost.

When Einstein had proposed his theory of special relativity, did he wait for data to come in before thinking about general relativity? No. He first proposed general relativity, and *then* people went out and looked for data. And it turned out GR was correct.

None of the things you're calling "mathematically consistent solutions" so far are real theories. If we had a real theory, it would explain *how* the information got out.

So I say that physicists should go ahead and look for real theories that explain black hole information loss, and flesh them out as well as they can. Maybe some of them will actually have predictions that don't involve waiting to watch a black hole evaporate. And then maybe we could test them.

Oh, lots of people have explained how the information gets out, if it gets out, it's just that no one wants to hear a solution.

Regarding Einstein: as you know full well, GR is necessary to explain the perihelion precession of Mercury, for which there was data BEFORE Einstein even started thinking about it.

I think a better example is Special Relativity.

At the time there were two incompatible theories of motion, one for matter, Newtonian mechanics, and one for electromagnetic fields. Mathematically, they obeyed the Galilean and Lorentz transformations, respectively.

Einstein brought everything under the Lorentz transformation. That was far from obvious at the time, as I understand it. The experimental evidence seems to have been less than compelling, too.

Why do you think the Lorentz transformation is called Lorentz transformation?

Because it was developed by Hendrik Lorentz. But, IIRC, that was only applied to EM problems before SR. I did not know Lorentz applied it to matter mechanics before Einstein's 1905 paper.

I took your comment to refer to experimental evidence for Lorentz transformation, but maybe I misunderstood.

Delete"experimental evidence for Lorentz transformation"

No, the evidence is without doubt. This was just a "history of physics" remark.

Sabine: "lots of people have explained how the information gets out, if it gets out".

Please explain one of these mechanisms to me. As far as I can tell, they all go like "black holes have non-local dynamics, so we don't need to follow the constraints of general relativity (with no explanation of how this actually works), so the information can get out." Or "AdS/CFT holds, and the information clearly gets out in the boundary theory, so it must get out in the bulk theory." Or "black holes don't actually have boundaries, they are made of strings, and they have soft, fuzzy exteriors, so the information can clearly get out."

As far as I am concerned, none of these is a satisfactory explanation.

Peter,

I have quoted plenty of papers in the video; I am sure you are able to locate these with the help of your search engine of choice. I am not remotely surprised you don't find them "satisfactory". As I said, in the absence of data, it comes down to which solution you like best.

Sabine: exactly. All of these papers seem to have this step "and then a miracle occurs," like in the Sydney Harris cartoon.

But the real problem right now isn't the absence of data, it's that none of the solutions are satisfactory. Once we get satisfactory solutions, then we can start worrying about the absence of data.

Peter,

I don't know what you mean. If you think that something is wrong with one of the proposed solutions, then you should write a paper about it. Isn't that what science is all about? Writing pointless papers about unsolvable problems?

Peter,

I don't know why you think it matters whether you or anybody else finds a solution "satisfactory". This isn't a scientific argument. These solutions are mathematically correct. The question whether they correctly describe nature is unanswerable for all practical purposes. And what else there is to say about it just isn't science.

Sabine, the point is that the proposed solutions are mathematically incomplete. They all have missing steps. It is not just a question of what the assumptions are, and whether they are physically meaningful. Do you have a reference to a complete mathematical solution without any hand waving?

Your answer here is that physicists just need to do *more* math and eliminate all the current assumptions they rely upon by replacing them with more math. And that's precisely why Sabine wrote a book called "Lost in Math." Frankly, if you have read that book and watched this video and you *still* insist that the problem with these "solutions" is that they just don't have enough math, then I think you might be hopeless.

manyoso: I think the point of Sabine's book is that just using math blindly without intuition won't get you anywhere. But on the flip side, intuition without math is also certainly not going to get you anywhere.

I'm curious as to why you say dark energy is neither dark nor energy. If it is not dark, why can't we see it? If it is not energy, why does it act like it has mass and pressure? I realize that nobody has a clue what dark energy is, but whatever it is, it seems to fit those two categories.

If it were dark, it'd be swallowing light. It doesn't interact with light, but that means it's transparent, not dark. It's not energy because it just isn't. To begin with, it's just a constant of nature. Even by dimension it would be an energy *density*, not an energy. But if you look at the stress-energy tensor, it doesn't only have an entry for the energy density, but also for pressure. (And that's negative.) So whatever you want to call it, it clearly isn't energy. It's a really unfortunate terminology that causes a lot of confusion.
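The sign structure described here can be made concrete. A minimal sketch (illustrative units, not from the post): for a cosmological constant the stress-energy tensor in an orthonormal frame is diag(ρ, p, p, p) with negative pressure p = −ρ, i.e. equation-of-state parameter w = −1.

```python
# Sketch (illustrative units, not from the post): stress-energy tensor of
# a cosmological constant in an orthonormal frame, T = diag(rho, p, p, p).
rho = 1.0                       # energy density of "dark energy" (arbitrary units)
p = -rho                        # the pressure entries are negative: p = -rho
T = [[rho, 0, 0, 0],
     [0,   p, 0, 0],
     [0,   0, p, 0],
     [0,   0, 0, p]]

w = p / rho                     # equation-of-state parameter
print("w =", w)                 # w = -1.0 for a cosmological constant
```

So "dark energy" names a full tensor with an energy-density entry and negative-pressure entries, not a plain energy.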

I will admit that "energy density" is a more precise term, but given that we only see its spatially integrated effects, is that really a material fact? Dark does not mean what you seem to think it means, at least not in astronomy and not in English; it means not emitting light. And I don't see why the sign of the pressure entry in the SE tensor means that it "isn't energy."

I have no idea what "a material fact" may be. I find it hard to think of something less material than dark energy. Yes, I was referring to the word "dark" as the person on the street would understand it. I think they'd be very surprised if you told them that a transparent medium is actually dark. As to how astronomers use the word, I doubt there is a consensus. According to you, the full moon is dark, which strikes me as somewhat odd terminology, but maybe that's just me.

I would surely think the person on the street would think it is dark at night, even though there is nothing that absorbs light. It's the absence of light that we experience as dark. And dark energy is energy. As you say, it also has a pressure. That doesn't make it not energy. A wet dog is still a dog, even if it has other properties as well. Now I wish I could disagree with you on the rest of the article as well....

PS: I do wish you could acknowledge that one can (and our colleagues did) make good progress in understanding the semi-classical approximation to quantum gravity using the Euclidean path integral while at the same time making your other points as well....

"The person on the street would think it is dark at night, even though there is nothing that absorbs light."

But they wouldn't, therefore, claim that air is dark.

If your colleagues start "acknowledging" my point, maybe I'll consider acknowledging theirs?

The Moon shines by reflected light, but dark matter and dark energy do not, so I really think that you are just playing a word game here.

Right, the moon does not emit light, hence according to your definition it's dark. Except of course it isn't.

I'm not "playing" any game, I am merely informing you that the average person does not understand that the "dark" in dark matter means "does not interact with light". They tend to think it means it swallows light. I actually care very little whether you believe that or not, I just try to be understood by as many people as possible.

Merriam Webster defines "dark" as: devoid or partially devoid of light : not receiving, reflecting, transmitting, or radiating light

If at least one of these four properties holds, an object is commonly called "dark". The last one does hold for dark energy, so "dark" is appropriate here. If dark energy has a finite energy density, it also has energy. And the fact that dark energy also has pressure does not mean that it does not have energy: a dog with wings is still a dog.

Well, according to the great Franzi, then, the sun is dark because it doesn't reflect light. This time you have really outdone yourself.

All the unexplained parameters of the standard model are only known to a certain precision. A theory that successfully predicts their more precisely measured values would be enormous progress. If such a theory also has clear implications for the information problem, we would have strong reason to favor its approach.

Sabine,

"And without data, the question is not which solution to the problem is correct, but which one you like best."

I think we have plenty of data. Not about the black holes' behavior, but about QM. We know from EPR that physics must be deterministic and realistic, otherwise we need non-locality, which conflicts with relativity. Yet, for a mysterious reason, mainstream physics still clings, almost a century after the EPR paper, to the idea that physics is non-realistic and non-deterministic.

No one has been able to provide a local non-realistic explanation of the EPR/Bohm experiment; we have a rock-solid logical argument that it cannot be done. Yet mainstream physics stubbornly believes that non-realism is the way forward.

But nature does not care about the physicists' irrational beliefs and continues to slap them in the face with paradoxes, information loss being one of them. The resolution for all of them is simple. Physics is deterministic. There is no such thing as random thermal radiation. Determinism implies that the outgoing radiation uniquely depends on the initial state, prior to the formation of the black hole.

The thing that amazed me the most in the text: Peanuts are not nuts! 😱

I don't pretend to understand any of the details, but at the most basic level why do we believe General Relativity is precise in this domain? GR is a macroscopic theory and clearly doesn't describe all the things that happen microscopically when horizons and quantum effects interact. The gap between theories is the most obvious "solution" as you say.

You misunderstand this. A "horizon" isn't anything you can interact with. Space-time at a black hole horizon can be, to excellent approximation, flat, flatter than it is here on Earth. We have tested this regime of curvature many times and nothing funny is going on.

Wouldn't black hole evaporation reduce the diameter of the event horizon, since the strength of the gravitational field is proportional to the mass of the black hole? Have astronomers observed a gradual reduction in the diameter of the event horizon of black holes, providing evidence that black holes evaporate over time rather than grow as they consume more and more matter?

Yes, but as I said it's way too small to measure. The black holes that we observe don't shrink; they grow, because they swallow way more matter and energy than they lose by Hawking radiation. In fact, already the cosmic microwave background would be sufficient to prevent them from shrinking due to Hawking radiation, that's how tiny the effect is.
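For scale, the claim about the CMB can be checked with the standard Hawking temperature formula T = ħc³/(8πGMk_B). A minimal sketch (textbook constants only, nothing here is from the post) comparing a solar-mass black hole to the CMB:

```python
import math

# Sketch: Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) for a
# solar-mass black hole, compared with the CMB temperature.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg
T_cmb = 2.725            # CMB temperature, K

T_hawking = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
print(T_hawking)          # ~6e-8 K, vastly colder than the CMB
print(T_cmb / T_hawking)  # the CMB is hotter by a factor of ~4e7
```

Since the hole is far colder than the 2.7 K radiation bath around it, it absorbs more energy from the CMB than it emits, so it gains mass rather than shrinking.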

Recently I published a paper, "Statistical Entropy of a Schwarzschild-anti-de Sitter Black Hole," Armenian Journal of Physics, 2019, vol. 12, issue 2, pp. 178-184, where I provide a solution to the black hole information paradox by suggesting a Bose-Einstein condensate whereby the zero mass state is a limit point (or accumulation point) of condensates on the event horizon.

Moorad Alexanian

I totally agree with your view of the black hole information paradox. When I first heard of it I thought that there was no paradox at all, and I wondered why so much effort was invested in it without ever reaching a satisfactory "resolution". As you well said, the solution will never come because of the lack of data.

However, my line of thought seems to be in contradiction with something you said. I'll explain myself briefly; quoting Landau-Lifshitz:

"By measurement, in quantum mechanics, we understand any process of interaction between classical and quantum objects, occurring apart from and independently of any observer."

A black hole is a classical macroscopic object, so it acts as a temperature reservoir in which a quantum field reaches thermal equilibrium. The "measurement" is this interaction, and the thermal state of the quantum field has higher entropy than the initial state, hence the time irreversibility.

However, in your post you dismiss the non-unitary quantum measurement evolution as a way out of the paradox, which makes me think that maybe I overlooked something...

Kaiser,

You don't measure the black hole (I'm not sure what that means). If you measure anything in this paradox, you measure the radiation. And as I said, the problem appears BEFORE you even make the measurement.

What I meant is that the macroscopic system that is performing the measurement on the radiation *is* the black hole, so the "measurement" is the very interaction between the field and the black hole.

Well, if you want to propose yet another ad hoc solution to the supposed problem, then I recommend you do that in a journal and not here.

I'm just an armchair philosopher, not a physicist. What I get out of this is the assertion that there is an answer to this question - in philosophical terms, there is a definite "fact of the matter" - but it lies, of necessity, outside of observable reality. I have a hard time with this; it's Lewis Carroll's green whiskers once again.

This post makes the false assumption that GR is valid. If GR isn't valid, then black holes needn't exist, so data isn't needed to solve the paradox.

In SR a stone can be dropped from a rocket having any constant proper acceleration, to reach any distance from the rocket as measured in the stone's frame. To obey t-symmetry, SR must also predict--and does--that the reverse is true: a freely falling stone can in principle catch up to a rocket having any constant proper acceleration, no matter how far away the rocket initially is in the stone's frame. But GR disagrees with that prediction in a local inertial frame that straddles an event horizon of a black hole: a freely falling stone below the event horizon can't catch up to a rocket that hovers above the event horizon, even when it's only a meter away in the stone's frame. By contradicting SR within a LIF, GR violates its EP to be self-inconsistent.

The solution to the paradox is to modify the EFE so that the resulting new metric for Schwarzschild geometry doesn't predict black holes, yet still agrees with all observations. The new metric predicts 42.9799 arcseconds per century for the Schwarzschild precession of Mercury, and 12.1 arcminutes per orbit for the Schwarzschild precession of S2 around Sgr A*, the same as the Schwarzschild metric does.
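The 42.98 arcseconds-per-century figure quoted here matches the standard Schwarzschild value, which can be reproduced from the textbook formula Δφ = 6πGM/(c²a(1−e²)) per orbit. A minimal sketch (standard constants and orbital elements for Mercury; nothing here is from the commenter's modified metric):

```python
import math

# Sketch: standard Schwarzschild perihelion precession per orbit,
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)), evaluated for Mercury.
GM_sun = 1.32712440018e20   # G*M of the Sun, m^3/s^2
c = 299_792_458.0           # speed of light, m/s
a = 5.7909e10               # Mercury's semi-major axis, m
e = 0.2056                  # Mercury's orbital eccentricity
period_days = 87.969        # Mercury's orbital period, days

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec_per_century, 2))   # ~42.98 arcsec per century
```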

The real reason that the paradox is unsolvable is because black holes have become a religion, in part because of a misunderstanding of Rindler horizons. The religious won't open their minds, so we're stuck with the paradox.

I feel like paraphrasing Elizabeth Barrett Browning: "This is so wrong, let me count the ways."

Tom Fuchs wrote:

"In SR a stone can be dropped from a rocket having any constant proper acceleration, to reach any distance from the rocket as measured in the stone's frame. To obey t-symmetry, SR must also predict that the reverse is true: a freely falling stone can in principle catch up to a rocket having any constant proper acceleration, no matter how far away the rocket initially is in the stone's frame."

You're confusing different initial conditions. In terms of any given inertial coordinates (flat spacetime), the worldline of an object with constant proper acceleration "a" is asymptotic to the light rays through an event at a distance of c^2/a from the zero-speed point on the hyperbola. Any entity (including a photon) that starts from further behind than this cannot catch up to the object. This is consistent with the time-reversed case you mentioned, as you can see by just applying the Lorentz transformation. The chasing entity always crosses the x axis at a distance less than c^2/a from where the hyperbola crosses that axis.
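The catch-up criterion being debated here can be checked numerically. A minimal sketch in units where c = 1, with an illustrative acceleration (none of the names or numbers are from the thread): the rocket follows the hyperbola x(t) = sqrt((c²/a)² + t²), and a photon launched at t = 0 from position x0 catches it if and only if x0 lies ahead of the "fulcrum" event at the origin.

```python
import math

# Sketch of the Rindler-horizon criterion, in units c = 1.
# Rocket with constant proper acceleration a follows x(t) = sqrt(b^2 + t^2)
# with b = c^2/a. A photon launched from x0 at t = 0 moves as x0 + t.
a = 1.0                      # illustrative proper acceleration
b = 1.0 / a                  # c^2/a, distance from fulcrum to zero-speed point

def gap(x0, t):
    """Distance from photon (launched at x0) to the rocket at inertial time t."""
    return math.sqrt(b**2 + t**2) - (x0 + t)

t_late = 1e6                 # probe the late-time behavior
print(gap(0.1 * b, t_late))  # photon ahead of the fulcrum: gap -> negative (it catches up)
print(gap(-0.1 * b, t_late)) # photon behind the fulcrum: gap stays positive (never catches up)
```

The late-time gap tends to −x0, so any launch point ahead of the fulcrum (less than c²/a behind the rocket's zero-speed point) eventually catches up, and any launch point at or behind it never does.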

"By contradicting SR within a LIF, GR violates its EP to be self-inconsistent."

General relativity does not violate the equivalence principle. Special relativity is valid in the limit of local inertial coordinate systems.

Note that your distance of c^2/a doesn't specify the frame in which it's measured. While it's true that "the chasing entity always crosses the x axis at a distance less than c^2/a from where the hyperbola crosses that axis", that distance can be any distance as measured in the stone's frame. So the conclusion stands that GR violates the EP. As I said, Rindler horizons are misunderstood. (Feel free to email me if you want; the address is in my profile.)

Hyperbolic motion is self-similar in terms of every system of inertial coordinates, and in each of them the spatial distance between the stationary point on the hyperbola and the "fulcrum" event is c^2/a. No, that distance cannot be "any distance" when measured in the stone's frame. You're still confusing different initial conditions. Remember, in some frames the accelerating object is first approaching and then receding from the stone. No, general relativity does not violate the equivalence principle.

Amos, if you were right then a rocket wouldn't be able to accelerate and decelerate to reach any destination; the predictions listed at The Relativistic Rocket site at "Here are some of the times you will age when journeying to a few well known space marks, arriving at low speed" would be wrong. For any distance between a rocket and its destination as measured in the destination's frame when the rocket starts decelerating, the destination is above the rocket's Rindler horizon, or else the rocket couldn't reach that destination. Examples: When the rocket starts to decelerate to Vega at 1 Earth gravity, c^2/a is 0.97 light years, whereas in Vega's frame the rocket is 13.5 light years away. When the rocket starts to decelerate to Andromeda, in Andromeda's frame the rocket is 1 million light years away. So yes, SR predicts that in the destination's frame the rocket can be any distance away in principle. The distance in the destination's frame is length-contracted in the rocket's frame, to < c^2/a. The c^2/a distance applies only in a local inertial frame that momentarily co-moves with the rocket. The stone is a destination for the rocket the same as Vega and Andromeda are.
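The 0.97 light-year figure for a 1 g rocket is easy to verify. A minimal sketch (constants only; nothing here comes from The Relativistic Rocket site itself):

```python
# Sketch checking the c^2/a figure quoted above for 1 Earth gravity.
c = 299_792_458.0        # speed of light, m/s
g = 9.81                 # proper acceleration, m/s^2
ly = 9.4607e15           # one light year in meters

horizon = c**2 / g       # Rindler "fulcrum" distance for a 1 g rocket
print(horizon / ly)      # ~0.97 light years, matching the figure in the comment
```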

You also indirectly disagree with the solution to the barn-pole (or ladder) paradox of SR. It's inefficient to have a discussion here. I suggest you email me at tomfuchs@gmail.com.

"If you were right then a rocket wouldn't be able to accelerate and decelerate to reach any destination."

Not true. Ordinary motions along ordinary worldlines in flat spacetime, including accelerations and decelerations, are entirely consistent with special relativity, and motion with constant proper acceleration does not present any difficulties or inconsistencies. Again, in terms of every system of inertial coordinates, the spatial distance between the stationary point on a hyperbolic worldline and the "fulcrum" event is c^2/a. And, again, any entity (including a photon) that starts from further behind than this cannot catch up to the object. This is consistent with the time-reversed case you mentioned, as you can see by just applying the Lorentz transformation. In terms of any inertial coordinates, the chasing entity always crosses the x axis at a distance less than c^2/a from where the hyperbola crosses that axis. None of your "examples" contradict this.

"You also indirectly disagree with the solution to the barn-pole (or ladder) paradox of SR."

No, this is entirely consistent with special relativity.

Amos, I already agreed that "the chasing entity always crosses the x axis at a distance less than c^2/a from where the hyperbola crosses that axis". I said that distance applies only in a LIF that momentarily co-moves with the rocket. The examples I gave from The Relativistic Rocket site prove my point that in the destination's (chasing entity's) frame the rocket can be any distance away in principle. The stone substitutes for Vega or Andromeda. You're repeating yourself without addressing the fact that The Relativistic Rocket site disagrees with you. Your claims here disagree with a time-reversed case. A stone can be dropped from a rocket to reach any distance in the stone's frame, yet you're arguing that a stone can't catch up to a rocket starting from any distance in the stone's frame, the time-reversed case. Rather than just make claims you should prove them with math or other reasoning. Like you could show that Andromeda can't actually reach the rocket that decelerates to it starting from 1 million light years away in Andromeda's frame. You could let the rocket decelerate at 1 Earth gravity so that c^2/a is 0.97 light years. According to what you've said, Andromeda can reach the rocket only when the rocket is less than 0.97 light years away in Andromeda's frame. So prove it.

"I already agreed that 'the chasing entity always crosses the x axis at a distance less than c^2/a from where the hyperbola crosses that axis'."

Yes, that's the relevant fact, i.e., that's what people are referring to when they talk about that "horizon" for an accelerating object.

"In the chasing entity's frame the rocket can be any distance away in principle."

Sure, in terms of frames such as the one in which the "chasing entity" is stationary (for example) and the rocket is rapidly approaching while decelerating, the distance in terms of the stationary entity's rest frame can be arbitrarily great, but the effective "horizon" of c^2/a is understood to refer to the momentarily co-moving frame of the accelerating object. There's nothing here to support your claim that general relativity violates the equivalence principle.

Amos, when in the chasing entity's frame the rocket can be any distance away in principle, GR does violate the EP, because then according to SR a freely falling stone below the event horizon can in principle reach a rocket that hovers above the event horizon. The rocket is some distance away from the stone in the stone's LIF, which meets the requirement of "any distance away". The EP is violated when GR disagrees with SR in a LIF.

"In [terms of] the [inertial] entity's frame [an accelerating] rocket can be any distance away…"

Right. There are no "horizons" in terms of inertial coordinate systems.

"[General relativity] does violate the [equivalence principle], because according to [special relativity] a freely falling stone below the event horizon can in principle reach a rocket that hovers above the event horizon."

No, the word "horizon" here applies only to the accelerating coordinate system (composed of foliations of the co-moving inertial coordinates) in which the rocket is stationary. There are no "horizons" in terms of inertial coordinate systems, such as the one in which the inertial stone is at rest. Note that, in the presence of curvature, there do not exist any extended inertial coordinate systems, so event horizons due to curvature can't be transformed away, unlike the "horizons" associated with acceleration in flat spacetime. None of this violates the equivalence principle, which just says the spacetime manifold is tangent to a flat Lorentzian manifold at any event.

Amos, SR predicts that the stone can reach the rocket in principle; I can use SR's equations to make predictions as to, say, how long that'd take in the rocket's or the stone's frame. GR predicts that the stone can't reach the rocket. That contradiction violates the EP, which says that SR's laws must hold in any and every local inertial frame (LIF). To make your case that there's no such violation you must show that SR predicts the same as GR does; i.e., that the stone can't reach the rocket. I don't see that you've done that. "In the stone's frame" means a frame in which the stone is at rest. Spacetime curvature (the tidal force) isn't relevant for this discussion. The EP is about LIFs, in which the spacetime curvature is negligible by definition. If you argue that the EP can't be tested in this case (for any reason, like curvature) then you're arguing that GR's predictions aren't fully testable / falsifiable, which is a faith-based argument, not a scientific one.

"I can use SR's equations to make predictions as to, say, how long that'd take in the rocket's or the stone's frame. GR predicts that the stone can't reach the rocket."

Not true. In flat spacetime (where special relativity applies), special and general relativity make identical predictions.

"SR predicts that the stone can reach the rocket in principle."

That assertion is too ambiguous to possess a truth value. Special relativity applies in flat spacetime, and there are circumstances in flat spacetime in which a stone (or even a photon) can reach an accelerating rocket and other circumstances in which it cannot. Applying general relativity to those same sets of circumstances gives exactly the same results.

"That contradiction violates the EP, which says that SR's laws must hold in any and every local inertial frame."

As noted above, there is no contradiction. Also, special relativity holds good (to the first order) in a sufficiently small region around each event, whether there is curvature or not, even at gravitational event horizons, in accord with the equivalence principle. Also, when you talk about objects that are arbitrarily far apart in the context of curved spacetime, separated by gravitational event horizons, they are obviously not in the same local inertial frame.

"Spacetime curvature (the tidal force) isn't relevant for this discussion."

That's a strange thing to say. You are claiming a violation of the equivalence principle, which asserts local equivalence (to the first order) between sufficiently small regions of flat and curved spacetime. Now you say spacetime curvature isn't relevant to this discussion.

"The EP is about LIFs, in which the spacetime curvature is negligible by definition."

You're confused. The assertion that "curvature is negligible" (to the first order) in a sufficiently small region around every event IS the equivalence principle. This is what assures us that in any sufficiently small region we could construct a local inertial coordinate system (to the first order), whereas over extended curved regions we cannot. The regions that you are talking about, with a stone arbitrarily far away from the rocket in curved spacetime, possibly separated by a gravitational event horizon, are not "local" (no inertial coordinate system encompassing the situation exists). The equivalence principle does not assert any correspondence between such extended circumstances in curved spacetime with anything that can exist in flat spacetime.

"If you argue that the EP can't be tested in this case (for any reason, like curvature) then you're arguing that GR's predictions aren't fully testable / falsifiable."

That makes no sense at all. The equivalence principle applies locally, so it doesn't assert a simplistic correspondence between arbitrarily extended situations involving curvature versus anything that can happen in flat spacetime. This obviously does not mean we can't test the predictions of general relativity in extended regions, nor that the equivalence principle (which is a local principle) is violated.

Amos, the stone can in principle reach the rocket, SR predicts, and that's enough to show a violation of the EP. GR predicts the stone can't reach the rocket, not even in principle. The difference between "can in principle" and "can't, not even in principle" is unambiguously contradictory. SR's and GR's predictions can differ in a LIF, as I showed.

My experiment is contained in the stone's LIF, where the spacetime is negligibly curved by definition. When the spacetime is negligibly curved it can be treated as flat, the same as in any lab in which SR or the EP have been tested. For a sufficiently massive black hole the tidal force (the spacetime curvature) in the stone's LIF can be arbitrarily weak, including when the LIF is light years across and lasting for years. But that much space or time isn't needed to make my case. In the stone's LIF the rocket and stone could be just a meter apart initially, so that SR predicts they can in principle reach each other in less than a second. So yes, spacetime curvature isn't relevant for this discussion, because it's already been ruled out.

To make your case you must show that SR predicts the stone can't reach the rocket, not even in principle, the same as GR does. The reason that physicists believe GR obeys its EP re black holes isn't because of anything to do with spacetime curvature; it's because they believe a Rindler horizon can approximate an event horizon such that SR predicts the stone can't reach the rocket. They believe, as you did, that the stone can't reach the rocket when it's >= c^2/a away from the rocket in the stone's LIF, in contradiction to the examples at The Relativistic Rocket site.

I'm happy to continue any discussion by email. But here, if you're not trying to show that SR predicts the stone can't reach the rocket, I may not reply again.

"The stone can in principle reach the rocket, SR predicts..."

Again, a rocket with constant (and eternal) proper acceleration can be reached from some events but not from others. The hyperbolic worldline of the rocket is asymptotic to two lightlines, and the events in the "wedge" region behind the "fulcrum" event (origin of the asymptotic lightlines) are not in the causal past or the causal future of any part of the rocket's worldline. Needless to say, there are events outside this wedge with arbitrarily large spatial distances from the rocket (a stone dropped from the rocket will never pass into that wedge region), but that doesn't contradict the existence of the "inaccessible" region.

"…and that's enough to show a violation of the EP."

No, it doesn't show any violation of the equivalence principle. You're talking about flat spacetime, in which special and general relativity are identical.

"GR predicts the stone can't reach the rocket."

General relativity is identical to special relativity in flat spacetime, and predicts the same wedge region for which the hyperbolic worldline of the rocket is neither in the causal past nor the causal future.

"To make your case you must show that SR predicts the stone can't reach the rocket, not even in principle, the same as GR does."

Again, whether or not the stone can reach the rocket depends on whether it is in the wedge region beyond the fulcrum. Also, there is no distinction between special and general relativity in this context (flat spacetime).

"Spacetime curvature isn't relevant for this discussion…"

That still makes no sense at all. You claim to be talking about the equivalence principle, which involves the relationship between curved spacetime and acceleration in flat spacetime, and yet you insist it has nothing to do with curved spacetime. If all you are talking about is flat spacetime, then you aren't invoking the equivalence principle at all.

Perhaps information is not lost, but emitted via gravitational waves?

Doesn't work for the spherical modes.

How can there be an absolute event horizon? No way.

An event horizon is always observer-dependent (see Rindler horizon), and when you collapse into a "black hole" the event horizon (dependent on your position) will collapse too. The Schwarzschild solution is an approximation for a distant observer.

This can be seen once one understands that an event horizon is just an analogue of a Rindler horizon curved onto itself. The collapsing matter has proper acceleration. Because free fall in space is oriented spherically, the horizon doesn't stay steady as in the Cartesian free-fall situation.

I see the singularity as naturally outside of review, because there is a contradiction in the scope of interactions: a singularity stays always outside of interactions, and only interactions define the description of space.

What's up with the information loss problem? The conclusion is that you can describe the information only for an observer and only up to the horizon. Who else cares? If we develop prejudices and mystify, we stop understanding and we curl up into ourselves. :)

All solutions for the information loss problem are futile; there is no problem at all. Within nature's law-principles it's possible to go through a black hole, but you will come back to a totally changed universe, still the same (some challenges might occur with energy consumption)...

The Rindler horizon occurs in an accelerated frame. Invariance principles of relativity really apply to inertial frames. The event horizon of a black hole occurs for an asymptotic observer near I^+. This means the event horizon of a black hole is an invariant aspect of spacetime.

The invariance of event horizons is a common interpretation-based physics of GR. You know that argument by interpretation is not good physics.

In GR there are fundamental principles. One of them is proper acceleration. Stellar objects maintain a proper acceleration outwards from the object. That defines a horizon just like a Rindler horizon, but spherically curved. For a distant observer the event horizon can occur outside the surface of a massive object. But when the observer comes near, the horizon moves deeper via an elliptical transformation.

It means that any pair of interacting observers can keep signalling to each other even as they collapse into the black hole.

Then if both started to accelerate away, they would get Rindler horizons of their own, and strong enough acceleration can separate them from each other; the interaction breaks temporarily until the acceleration stops.

Be that as it may, the fact is that we have no observations of frozen objects near an event horizon. Many issues are open. I want to emphasize the roles of interactions and proper time.

Crossing the event horizon, the exchange of time and space with each other, and the inevitable path to the singularity are all under very reasonable doubt. Still, I think research on the topology of BHs will pay for itself when the spacetime foam and the elementary particle mechanism are revealed.

The problem with talking about event horizons is that there are so many different types of them. It is a bit complicated. However, any event horizon is a congruence of null geodesics. As null geodesics these are locally invariant.

There are two classes of event horizons: frame dependent and frame independent. The Rindler wedge horizon, the cosmological horizon, and general forms of particle horizons are frame dependent. The black hole horizon is frame independent.

If you fall into a black hole the horizon will appear beneath you or in front of your path. Once you cross it, the horizon does not disappear! It transitions into an apparent horizon, which is frame dependent. Timing when this occurs is difficult, and a clock that could do so precisely would require a mass equal to that of the black hole. So the issue of event horizons is surprisingly subtle and full of difficulties.

The black hole horizon though is an invariant. That can be said with considerable confidence.

You seem to be just interpretation-dependent. :)

That's enough. There is no benefit in arguing without observations.

> The Rindler wedge horizon [is] frame dependent. The black hole horizon is frame independent.

Yes, that's the problem with GR, and the solution to the paradox. The difference you note means that a Rindler horizon can't approximate an (absolute) event horizon of a black hole, in violation of the equivalence principle. A freely falling stone can be at any distance below a rocket as measured in the stone's frame, yet still be above the rocket's Rindler horizon and thus able to reach the rocket in principle. This means that SR predicts that a stone just below r = 2M can reach a rocket hovering just above r = 2M.

I don't seem to understand many things about this topic; here is one: the Schwarzschild solution is a GR description of a gravitationally collapsed mass that has this unfortunate property of being time irreversible. It seems to me it is also an asymptotic solution in time - in the sense that it takes a collapsed star an infinite amount of time (as perceived by a far-away observer) to actually assume this form. Am I right? On the other hand, consider a purely classical mechanics problem. Say we have a perfect, frictionless hemisphere of radius R on a flat surface and a point mass M bound to move only on the sphere's surface under a constant downward acceleration g. If you place the mass at the "equator" and kick it straight up with kinetic energy MgR, it will come to rest at the top of the hemisphere, thus breaking time reversibility - you cannot determine the initial position from its rest position. This practically impossible hit at an unstable solution also takes an infinite amount of time. And this is nothing unusual - if I remember my classical mechanics class, trajectories approaching a separatrix have this property in general. So asymptotic solutions can be trouble. In the meantime, all the incoming phase-space volume is nicely conserved (or is it?) while it takes a free dive into oblivion. So, what gives?
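
The hemisphere example can be checked numerically. On the separatrix (a kick with kinetic energy exactly MgR), energy conservation fixes the bead's speed at every angle θ from the top, giving θ' = -2√(g/R) sin(θ/2), and the time needed to get within an angle ε of the top grows logarithmically in 1/ε. A minimal sketch (the values of g and R are arbitrary choices):

```python
import math

# Bead on a frictionless hemisphere; theta is the angle measured from the top.
# A kick with kinetic energy M*g*R at the equator puts the bead exactly on the
# separatrix, where energy conservation gives theta' = -2*sqrt(g/R)*sin(theta/2).
g, R = 9.81, 1.0
k = 2.0 * math.sqrt(g / R)

def time_to_reach(eps, dt=1e-4):
    """Time for the bead to get from the equator to within angle eps of the top."""
    theta, t = math.pi / 2, 0.0
    f = lambda x: -k * math.sin(x / 2)
    while theta > eps:
        # one RK4 step for theta' = -k*sin(theta/2)
        k1 = f(theta)
        k2 = f(theta + 0.5 * dt * k1)
        k3 = f(theta + 0.5 * dt * k2)
        k4 = f(theta + dt * k3)
        theta += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return t

# Each extra decade of closeness to the top costs a fixed amount of time,
# ln(10)/sqrt(g/R) (about 0.74 s here), so reaching the top exactly takes forever.
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, round(time_to_reach(eps), 2))
```

The logarithmic divergence is the numerical face of the commenter's point: the final rest state is only reached asymptotically, so reversibility is broken only in the infinite-time limit.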

If a semiclassical calculation can resolve the paradox posed by the semiclassical black hole emission of radiation, then the paradox may still be termed unsolved, but a major apparent contradiction between quantum mechanics and general relativity will have evaporated.

...the mirage: the connection between information and entropy. Two words which - according to John von Neumann - both cannot be understood, linked by a logarithm and a constant that provides the dimensions. What is even more astonishing: many cosmologists believe in all seriousness that this brings thermodynamics and its entire conceptual apparatus on board. Hence Hawking's 'logic': “The problem in Bekenstein's argument was that a black hole, if it had a finite entropy proportional to the area of its event horizon, would also have to have a finite temperature. It would follow that a black hole could be in equilibrium with thermal radiation at some temperature other than zero. But in classical terms such an equilibrium is not possible, because the black hole would absorb without emitting.” What the international elite of theoretical cosmologists never took notice of was the simple fact that 'their entropy' is really a Shannon entropy, namely information, and not a thermodynamic entropy. In addition, the main principles of thermodynamics and the equations of motion derived from them are formulated with process variables that locally have nothing to do with the Einstein geometry of GTR. This immediately leads to the problem of the extent to which the Second Law is compatible with GTR at all. If one absolutely wants an 'informational temperature' conjugate to information, one has to derive it from information theory; but it certainly has nothing to do with the Kelvin temperature and therefore nothing at all with physical radiation processes. The 'big bang problem' as a 'primordial singularity' occurs theoretically only if Einstein's credo (irreversibility is an illusion) applies.
However, this conclusion then means that one consistently negates thermodynamics from the start and restricts oneself to flawless Hamiltonian mechanics as the basis of GTR, or at least only envisages isentropic processes (no change in entropy), in order to at least 'save the phenomenon' of the background radiation.


Dr. Hossenfelder: Why do we believe that information cannot be destroyed?

Doesn't wavefunction collapse always destroy information? The original superpositions and probabilities are lost forever and cannot be recovered, isn't that right? We have probabilities for spin-up and spin-down, but measurement forces the original probabilities into 1 and 0 or 0 and 1, so the original probabilities are lost forever.

As I said, the problem occurs BEFORE you make a measurement.

Sabine: You have explained in detail the informational problems of a black hole.

On the other hand, it could be interesting to look at the problem from the viewpoint of Lorentzian relativity.

An essential point is the behavior of the speed of light c in a gravitational field. For Einstein its variation is an apparent effect, because in the picture of curved space-time c does not change. Applying the Schwarzschild metric, we see a reduction of c when viewed from outside the field (the so-called “coordinate speed”). This means looking from the outside into the field, using the measures of the outside.

If we now use the Lorentzian way, c is reduced by a physical mechanism. A light-like particle interacts with the field particles in a random way, so that its path is statistically deflected, performing a random walk. So the microscopic speed of the light-like particle remains unchanged, but the global speed is reduced. If this reduction is calculated by the known rules of statistics, the resulting speed conforms analytically to the Schwarzschild equation. Also the dependence of the reduction of c on the direction of motion has formally the same result as with Schwarzschild. (In the Lorentzian way this is valid for fields which are not too strong.)

If we ask in the Lorentzian view about the black hole, then it turns out to be something like a ‘grey hole’, in the direction of Hawking’s result. The light-like particles do not stop at the event horizon as with Einstein/Schwarzschild, and there is no singularity. But the deflections of light-like objects become extreme, and their motion changes from a directed motion to a random one, so they follow a diffusing path. They primarily diffuse towards the center of the assembly - the grey hole - and so should not lose information as in Einstein’s case.

Is there a proof for this assumption? One may take as an argument that the described process is deduced from fundamentally known processes, whereas Einstein’s assumption of curved space-time is not the consequence of any known elementary process.
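
The statistical mechanism described above (microscopic speed unchanged, global speed reduced by random deflections) can at least be illustrated with a toy simulation. This is only an illustration of the random-walk effect, not of the claimed agreement with the Schwarzschild metric; the deflection strength sigma is a made-up parameter:

```python
import math, random

# Toy random walk: a particle moves at fixed microscopic speed 1, but its
# direction is deflected by a Gaussian random angle (width sigma) each step.
# The global speed is net displacement divided by path length.
random.seed(1)

def effective_speed(sigma, steps=1000, trials=200):
    total = 0.0
    for _ in range(trials):
        x = y = angle = 0.0
        for _ in range(steps):
            angle += random.gauss(0.0, sigma)  # random deflection per step
            x += math.cos(angle)               # unit-speed microscopic motion
            y += math.sin(angle)
        total += math.hypot(x, y) / steps      # net displacement / path length
    return total / trials

# Weak deflection leaves the global speed near 1; strong deflection makes
# the motion diffusive and the global speed much smaller.
print(effective_speed(0.01), effective_speed(0.5))
```

The simulation only shows that random deflections slow the global propagation; whether the reduction really reproduces the Schwarzschild coordinate speed is the commenter's claim, not something this sketch establishes.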

So many vacuums.

Hi Bee,

One point that often bugs me when reading about the time-reversibility paradox is the statement that black hole evaporation is thermal.

I'm sure it is known that it is thermal, but it's not clear to a layman reader like me.

Would you mind writing a few lines on this? What assumptions are needed to reach the conclusion that the radiation is indeed thermal?

-Topi

That's the result of Hawking's calculation. The assumptions, as I said, are quantum field theory and the approximate validity of general relativity in the small curvature range.

Black holes do not propagate information from their interior to the exterior in a causal manner. When black holes collide, the gravitational radiation produced comes from the spacetime in the exterior. Thus, when a quantum particle tunnels from the black hole interior to the exterior by Hawking’s original formalism, it carries no information about the black hole’s interior state.

We can think of a black hole as a big gemisch of entangled states. If a Hawking radiation quantum appears outside the black hole, it carries an entanglement with that black hole. The entanglement entropy is given by the von Neumann entropy S = -k Tr[ρ log(ρ)], and the unitary evolution of the density matrix, ρ’ = U^†ρU, carries over to S’ = S. I leave that as an exercise: take the Taylor expansion of the log; there are a lot of cancellations with U^†U = 1, and it is then not hard to show. However, at a later time a Hawking radiation particle may be emitted as a quantum state of the black hole entangled with a previously emitted Hawking radiation boson. This means a bipartite entanglement has transformed into a tripartite entanglement, which violates the so-called monogamy principle.

The monogamy principle just says that the symmetry of an entanglement is invariant under unitary evolution. Unitary evolution is presumed to underlie Hawking radiation, which means the entropy of the black hole transforms into an equal entropy of the black hole entangled with a Hawking radiation particle. The violation of the monogamy principle means the information change or loss Hawking cited stubbornly persists. We do, though, have an improvement over Hawking’s original formalism.
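
The invariance S’ = S left as an exercise above is also easy to verify numerically. A small sketch with numpy (the dimension and the random seed are arbitrary choices):

```python
import numpy as np

# Check that the von Neumann entropy S = -Tr[rho log rho] is unchanged
# under unitary evolution rho' = U† rho U.
rng = np.random.default_rng(0)
n = 4  # Hilbert space dimension, chosen arbitrarily

# A random density matrix: A A† is positive, then normalize the trace to 1.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random unitary from the QR decomposition of a random complex matrix.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def entropy(r):
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]  # drop numerically zero eigenvalues
    return float(-(p * np.log(p)).sum())

rho_evolved = U.conj().T @ rho @ U
print(entropy(rho), entropy(rho_evolved))  # the two values agree
```

This works because a unitary conjugation permutes nothing but the basis: the eigenvalues of ρ, and hence any function of them, are unchanged.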

Bee,

I guess BH thermal radiation is equal to black body thermal radiation, right?

How is the reversibility preserved in black body thermal radiation?

-Topi

The spectrum is the same, yes.

If a classical black body emits a single thermal photon, does the photon carry information away from the black body?

I guess in other words the question is equivalent to: if a classical black body emits a single thermal photon, is the process time reversible?

-Topi

Photons are quanta. They're not classical. I don't know how you want to classically emit a quantum, that doesn't make any sense.

The emission of photons by a thermal body is treated by Fermi's golden rule. You can look that up. It describes the spontaneous emission of photons by a system with lots of excited states or energy, such as a hot body. The system is treated with a density of states whose transitions are small relative to the total energy.

Hawking radiation is related to this. It converges to a blackbody limit, and the spontaneous emission of lots of photons can be blackbody. Hawking's theory predicts a blackbody spectrum for the photons emitted by a black hole.
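
For concreteness, the blackbody spectrum Hawking predicted has a temperature fixed by the mass alone, T = ħc³/(8πGMk_B). A quick sketch with rounded values of the constants:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B),
# using rounded values of the physical constants.
hbar  = 1.0546e-34  # J s
c     = 2.998e8     # m/s
G     = 6.674e-11   # m^3 kg^-1 s^-2
k_B   = 1.381e-23   # J/K
M_sun = 1.989e30    # kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

# A solar-mass black hole comes out around 6e-8 K - far colder than the
# 2.7 K cosmic microwave background, and colder still for larger masses.
print(hawking_temperature(M_sun))
```

The inverse dependence on M is what drives the runaway evaporation described in the post: losing mass raises the temperature, which speeds up the mass loss.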

Is there a symmetry associated with the conservation of information?
