Soft hair
Last August, Stephen Hawking announced that he had been working with Malcolm Perry and Andrew Strominger on the black hole information loss problem, and that they were closing in on a solution. But little was explained beyond the fact that this solution rests on a symmetry group called supertranslations.
Yesterday, then, Hawking, Perry, and Strominger put a new paper on the arXiv that fills in a little more detail:
Soft Hair on Black Holes
Stephen W. Hawking, Malcolm J. Perry, Andrew Strominger
arXiv:1601.00921
First of all, the paper seems to be only a first step in a longer argument. Several relevant questions are not addressed, and I assume further work will follow. As the authors write: “Details will appear elsewhere.”
The present paper does not study information retrieval in general. It instead focuses on a particular type of information, namely that carried by electrically charged particles. The benefit of doing this is that the quantum theory of electric fields is well understood.
Importantly, they are looking at black holes in asymptotically flat (Minkowski) space, not in asymptotically Anti-de Sitter (AdS) space. This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotically AdS space. However, they don’t know how to extend this argument to asymptotically flat space or to space with a positive cosmological constant. To the best of present knowledge we don’t live in AdS space, so understanding the case with a positive cosmological constant is necessary to describe what happens in the universe we actually inhabit.
In the usual treatment, a black hole registers only the net electric charge of particles as they fall in. The total charge is one of the three classical black hole “hairs,” next to mass and angular momentum. But all other details about the charges (e.g., in which chunks they fell in) are lost: there is no way to store anything in or on an object that has no features, no “hairs.”
In the new paper the authors argue that the entire information about the infalling charges is stored on the horizon in the form of “soft photons,” which are photons of zero energy. These photons are the “hair” previously believed to be absent.
Since these photons can carry information but have zero energy, the authors conclude that the vacuum is degenerate. A “degenerate” state is one in which several distinct quantum states share the same energy. This means there are different vacuum states that can surround the black hole, and so the vacuum can hold and release information.
It is normally assumed that the vacuum state is unique. If it is not, this allows one to have information in the outgoing radiation (which is the ingoing vacuum). A vacuum degeneracy is thus a loophole in the argument originally made by Hawking according to which information must get lost.
What the “soft photons” are isn’t further explained in the paper; they are simply identified with the action of certain operators and are supposedly the Goldstone bosons of a spontaneously broken symmetry. Or rather of an infinite number of symmetries whose conserved charges are, roughly speaking, something akin to multipole moments. It sounds plausible, but the interpretation eludes me. I haven’t yet read the relevant references.
I think the argument goes basically like this: We can expand the electric field in terms of all these (infinitely many) higher moments and show that each of them is associated with a conserved charge. Since the charge is conserved, the black hole can’t destroy it. Consequently, it must be maintained somehow. In the presence of a horizon, future infinity is not a Cauchy surface, so we add the horizon as a boundary. And on this additional boundary we put the information that we know can’t get lost, which is what the soft photons are good for.
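Schematically, and with conventions I am assuming from the related literature rather than taking from the paper itself, the charges in question are surface integrals weighted by an arbitrary function ε on the asymptotic sphere:

$$Q_\varepsilon \;=\; \frac{1}{e^2}\int_{S^2} \varepsilon\, {}^\ast F \;=\; \frac{1}{e^2}\int_{\mathcal{I}^+} d\varepsilon \wedge {}^\ast F \;+\; \int_{\mathcal{I}^+} \varepsilon\, {}^\ast j \,.$$

For constant ε the first term drops out and one recovers the familiar total electric charge; for non-constant ε the first term is the contribution of the zero-frequency (soft) photons, the second that of the charged matter.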
The new paper adds to Hawking’s previous short note by providing an argument for why the amount of information that can be stored this way by the black hole is not infinite, but instead bounded by the Bekenstein-Hawking entropy (i.e., proportional to the surface area). This is an important step to ensure the idea is compatible with everything else we know about black holes. Their argument however is operational, not conceptual. It is based on saying, not that the excess degrees of freedom don’t exist, but that they cannot be used by infalling matter to store information. Note that, if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole; it instead sets an upper limit on the possible number of microstates.
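For reference, the bound in question is the standard Bekenstein-Hawking entropy

$$S_{\rm BH} \;=\; \frac{A}{4\,\ell_p^2} \;=\; \frac{k_B\, c^3 A}{4\,\hbar\, G}\,,$$

where A is the horizon area and ℓ_p the Planck length.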
The authors don’t explain just how the information becomes physically encoded in the outgoing radiation, aside from writing down an operator. Neither, for that matter, do they demonstrate that by this method all of the information in the initial state can actually be stored and released. Since they focus on photons, they of course can’t do this anyway. But they don’t have an argument for how the method can be extended to all degrees of freedom. So, needless to say, I have to remain skeptical that they can live up to the promise.
In particular, I still don’t see that the conserved charges they are referring to actually encode all the information that’s in the field configuration. For all I can tell, they only encode the information in the angular directions, not the information in the radial direction. If I were to throw in two concentric shells of matter, I don’t see how the asymptotic expansion could possibly capture the difference between two shells and one shell, as long as the total charge (or mass) is identical. The only way I see around this issue is to just postulate that the boundary at infinity does indeed contain all the information. And that, in turn, is only known to work in AdS space. (At least it’s believed to work in this case.)
Also, the argument for why the charges on the horizon are bounded and the limit reproduces the Bekenstein-Hawking entropy irks me. I would have expected the argument for the bound to take into account that not all configurations that one can encode at infinity will actually go on to form black holes.
Having said that, I think it’s correct that a degeneracy of the vacuum state would solve the black hole information loss problem. It’s such an obvious solution that you have to wonder why nobody thought of this before, except that I thought of it before. In a note from 2012, I showed that a vacuum degeneracy is the conclusion one is forced to draw from the firewall problem. And in a follow-up paper I demonstrated explicitly how this solves the problem. I didn’t have a mechanism though to transfer the information into the outgoing radiation. So now I’m tempted to look at this, despite my best intentions to not touch the topic again...
In summary, I am not at all convinced that the new idea proposed by Hawking, Perry, and Strominger solves the information loss problem. But it seems an interesting avenue that is worth further exploration. And I am sure we will see further exploration...
So, the "classical" no hair theorem, is it dead at the quantum realm?
I have NEVER read a paper about this, only bits from disseminated papers from the arXiv...
J,
Check Gia Dvali's papers since 2012, he's been banging this drum for a while.
One of the main questions of black hole thermodynamics is the nature of the black hole constituents. What are the constituents of the black hole? According to the authors these are the so-called pixels. What is a pixel? It is unknown. It is known however that the pixels reside on the horizon. It is also believed that a pixel has the Planck size. What is the number of states available to a pixel? This is unknown. Exciting a pixel corresponds to creating a spatially localized soft graviton or photon on the horizon. The quantum state of a pixel is transformed whenever a particle crosses the horizon. That’s all. Unfortunately, I have heard nothing new.
Compare the wavelength of a "zero energy photon" to the physical extent of an event horizon. Rationalize event horizon large Q despite infalls. A Rydberg state with zero energy and not fragile is artifice.
When rigorously derived axiomatic systems fail to describe observation, seek non-empirical postulates. Newton failed for lightspeed, Planck's constant, and Boltzmann's constant. GR's Equivalence Principle is falsifiable outside its assumptions (e.g., ECSK gravitation). Seek the boojum.
Uncle Al, I'm trying to read your mind and getting this hopeful gut feeling that you accept the continuing decrease of C and proportionate increase of h. Would you say that zpe is vastly underrated as a problem solver?
Ivan,
I wonder, if you didn't learn anything new and already knew all I just told you, then why didn't you write a paper about it and beat the authors to the punch?
Dear Sabine, I didn't say that I already knew all you just told me. I have great respect for you. I didn’t mean the mechanism of the information transfer, I meant the model of Hawking et al. for the black hole constituents. I am interested in such models. I said that the pixel model of Hawking et al. is not new. As is well known, such a model was proposed by Bekenstein long before Hawking et al. Is that right? Do Hawking et al. propose a new model?
Best regards,
Ivan
Ivan,
No, such a model was not proposed before by Bekenstein. I really don't know what you mean. People sometimes speak of 'pixels' on the horizon, correct, but that's just a word, not an explanation. The challenge is to come up with a physical way of storing and releasing information. That's what the soft gravitons (photons) do. (Or are supposed to do anyway.) Yes, it's a new model. Do you really think they'd put out a paper saying we'll just repeat what we've known for 40 years? Best,
B.
Dear Sabine, Forty years ago, by proving that the black hole horizon area is an adiabatic invariant, Bekenstein [1] proposed a “pixel” (originally, patch degree of freedom) model and showed that the entropy S is proportional to the area A of a black hole. Indeed, if the horizon surface consists of n independent pixels of Planck area and every pixel has k states available to it, then the total number of states is $k^n$ and $S \propto A$. It is just an explanation of the universal Bekenstein-Hawking relation. I think that the pixel model is also important as a physical way of storing and releasing information. I have no objections to the information transfer proposed by Hawking et al. However I find their mechanism complex and vague. At the same time, I thank you for your clear explanation of their work.
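Explicitly, the counting goes like this (my sketch of the argument, in obvious notation): with $n \sim A/\ell_p^2$ pixels and $k$ states per pixel,

$$N_{\rm states} = k^{\,n}\,, \qquad S = \ln N_{\rm states} = n \ln k = \frac{A}{\ell_p^2}\,\ln k \;\propto\; A\,.$$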
References
[1] J.D. Bekenstein, Phys. Rev. D 7, 2333 (1973); Lett. Nuovo Cimento 11, 467 (1974).
Best regards,
Ivan
Ivan,
Bekenstein postulated that there must be some sort of 'pixels' that account for the entropy. He did not explain what these pixels are or how they work. That's what I mean when I say it's not a model, it's a word. If you think that the mechanism proposed in the new paper is 'vague' then Bekenstein's pixels must be vaguer than vague, or what do you think they are?
Dear Sabine, With the pixel model, Bekenstein explained (!) the universal Bekenstein-Hawking relation. In his model, the pixel is a fundamental (i.e., it does not need any explanation) unit of area. That is a classical result. What did Hawking et al. explain? Where is a formula?
Best regards,
Ivan
Will these insights help to clarify what would be left over (to hold the charge) when a charged BH decayed from Hawking radiation all the way down? Normally there is supposed to be nothing at all left, but if the charge is conserved, something needs to "hold" it, or else it has to be carried away in the particulate portion of the HR.
Sabine, your thought experiment regarding one concentric shell vs. two shells is a great one. This is because it should focus purely on supertranslations, with no contribution from any Poincare subgroup (though I don't know how much sense it makes to say that, because the Poincare subgroup is not a *normal* subgroup within BMS, i.e. there is no preferred 'splitting' of the BMS group into Poincare and supertranslations!).
To make life easier, let's ask a corresponding question at scri+ instead of at a horizon. Let's say you're an observer (or you assemble a group of observers in a 2-sphere) near scri+. We start in some particular vacuum, and consider two different processes: (i) one outgoing soft photon (or graviton) vs. (ii) two outgoing soft photons (or gravitons) at different retarded times u1, u2, subject to a certain condition. Classically, each corresponds to the same large gauge transformation (or supertranslation). As we know, the supertranslation subgroup is an Abelian subgroup. So, that earlier condition I mentioned should be that the two soft photons (or gravitons) together generate the same supertranslation, as spelled out below.
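(In formulas, schematically and in my notation: supertranslations are parametrized by functions on the asymptotic sphere and compose additively,

$$T(f_1)\,T(f_2) = T(f_1 + f_2)\,,$$

so the condition is $f_1 + f_2 = f$, where $f$ is the function generating the supertranslation of process (i).)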
Now the question is: can observer(s) at late times measure the difference between (i) and (ii)? I'm not positive, but I think the two yield the same vacuum at late times (which is different from the early-time vacuum).
For that matter, let's say we just consider two versions of the process (i), with the soft photon coming out at different retarded times. Can observer(s) know at what time the photon passed? Of course, the vacuum state is a stationary eigenstate of the Hamiltonian; there's no information about how long the universe has been in that state. That's no different from ordinary quantum mechanics experiments involving, say, atoms and photons.
The only way to tell is to consider the history of repeated measurements made by observer(s). Then they would be able to record when the transition from one degenerate vacuum to another (to a third and fourth, etc.) happened.
Going back to the original thought experiment, supposedly the same thing happens. This means that if you throw in one shell vs. two shells, the final vacuum state is the same, but the history of transitions between degenerate vacua is different. Correspondingly, the history of the emitted Hawking radiation should be different. There is even a classical analogue: if you separate the two shells by enough time (long enough that the BH temperature may be measured, i.e., very many emission times), then there's an intermediate Hawking temperature T1, with T0 > T1 > T2; compare that with case (i), where the history of the temperature just goes from T0 to T2.
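(For concreteness, with the standard Schwarzschild relation

$$T = \frac{\hbar\, c^3}{8\pi\, G M\, k_B}\,,$$

each infalling shell increases M and thus lowers T, which is what orders T0 > T1 > T2.)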
Re "if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole, it instead sets an upper limit to the possible number of microstates".
What is the consequence of that for black-hole evaporation? First guess is that a "low entropy" black hole has a higher temperature and evaporates quicker?
"This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotic AdS space."
Dear Bee,
Is there an explanation somewhere about why a microscopic "explanation" of black hole entropy depends on the asymptotic space? If you told me hydrogen atoms work differently depending on the large scale geometry and topology of the universe, I would be quite flummoxed.
The only thing I can think of is that somehow in asymptotic AdS space there is no objective definition of microscopic.
Thanks in advance!
-Arun
Ivan,
It is beyond me why you prefer accepting the postulation of a fundamental 'pixel' over an explanation of what the degrees of freedom are. What did they explain? Read the post. Where is the formula? See the paper. You have given me the impression that you complain about lack of progress on a question you don't know very much about to begin with.
Leo,
That's a good point! I wasn't so much thinking of throwing shells into an already existing black hole, but of using the shells to make a black hole to begin with. I think what you suggest might still work in this case. Though it seems to indicate that the information in the outgoing radiation is more local (in terms of time) than I'd have expected. I'll have to think about this some more. Best,
B.
Neil,
No, the final decay is Planck scale physics.
Google,
It doesn't affect the evaporation, since this depends on the temperature. What this affects is the relation between the temperature and the entropy. I don't know what consequences it has. However, as I indicated in my blogpost, I am not really convinced that's the right way to think of it. Best,
B.
Arun,
You mean, is there an explanation somewhere in the paper? No. I think the reason for the difference is the way of counting. The AdS microstates count, roughly, the different ways you can assemble a black hole. What they count in the paper instead is the number of degrees of freedom you can excite on the black hole horizon (don't ask me how you excite something from zero energy to zero energy). Now if you focus too much energy in a small space-time region, you at some point fail at further focusing it because you just make a black hole. This sets a resolution cutoff, which is at the Planck scale. You can pull this trick any time you need the Planck length to come in somewhere, and this is what they've done in the paper. The problem is, there isn't actually anything colliding or interacting here, and this really doesn't make a lot of sense to me.
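To make the resolution argument explicit (my paraphrase, not the paper's derivation): to distinguish $N$ pixels on a horizon of area $A$ you need quanta of wavelength $\lambda \lesssim \sqrt{A/N}$. But a quantum of energy $E \sim \hbar/\lambda$ (with $c=1$) localized on the scale $\lambda$ collapses to a black hole once $G E \gtrsim \lambda$, i.e., once $\lambda \lesssim \ell_p$, so

$$N \;\lesssim\; \frac{A}{\ell_p^2} \;\sim\; S_{\rm BH}\,.$$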
But look, if you count all the charge configurations that you can have at the boundary (at infinity), you vastly overcount the degrees of freedom on the horizon, because most of these configurations wouldn't create a black hole. To get a black hole you need to, roughly speaking, aim them correctly. I'm not sure that would really give you the right number. Even after subtracting infinity from infinity you might still have infinity left. In any case, that sort of counting would be much more like the normal microstate counting (how many configurations are there that will give you the three-hair endstate) and thus more likely to give you the standard result. At least that was my thinking. Best,
B.
Dear Sabine
You are confusing charges with microstates.
Warm Regards
Caligula
raskalnikoff,
I don't understand what you mean. It would be helpful if you could be more specific. Which statement are you referring to?
Different vacuums/ground states sounds a lot like a phase transition. Is there perhaps one phase of matter inside and one out, the one inside having reached a critical density, and outside not? It also brings to mind different vacuums being seen by accelerated and inertial observers.
In optics this can happen with superradiance: if a group of two-level atoms has a coupling below a threshold (which depends on the distance between the atoms divided by the wavelength of the emitted light), they radiate normally, but if the coupling increases above a critical value, the atoms radiate collectively at a rate $\sqrt{N}$ faster than in free space, where N is the number of atoms. Typically, when the atoms are all within a wavelength of the emitted light, you are in the superradiant phase.
BECs and superconductors are similar, I think, with different order parameters/physics but the same kind of mathematical form.
In a possibly related matter (no pun), black hole "superradiance" arises for a rotating BH, the Penrose process. All BHs will have some form of rotation in terms of quantum fluctuations at the horizon breaking rotational symmetry (the black hole twitches like a ground-state SHO), and/or the horizon has "noise" that results in the acceleration/oscillation of the horizon?
I found the firewall papers you wrote very helpful.
Different vacuums/ground states brings to mind two things: Unruh and phase transitions.
In optics we have our version of superradiance: if a group of N two-level atoms sits within a ball smaller than the transition wavelength, they radiate at a rate enhanced by $\sqrt{N}$. This happens when the coupling rate (or vacuum Rabi splitting) is of the size of the transition frequency, so the first excited state energy is equal to the vacuum energy and the ground state is degenerate. Above the transition there are two separate phases that are ordered (antiferromagnetic-ish), while below there is no order.
Could the inside and outside of the black hole be in different phases?
Also, in reading BH stuff, there is the Penrose process, where a rotating BH gives off energy; linear acceleration brings Unruh/Hawking. Could the BH horizon be "fuzzy" from vacuum fluctuations, effectively accelerating, and could this result in Hawking radiation?
I am usually not crazy about analog BHs showing Hawking radiation; the horizon there is classical, so you can make one with a laser. But the fluctuations, to be done right, would have to be quantum, I would think.
Is there any hint of truth to the scenario above? "Our" superradiance is basically like a Bogoliubov theory of BEC and BCS stuff. Thanks
Perry,
I don't know how you want to have a phase transition without different ground state energies.
Dear Sabine,
I thought that in GRT the Compton frequency of anything in free fall is constant (once gravitational time dilation is included). For an observer external to a black hole, anything falling in takes an infinite time to reach the horizon. Then why would an external observer "see" any information loss?
Please excuse me if I fail to understand the basics.
Thanks,
J.
akidbelle,
There isn't any information loss problem as long as the black hole doesn't evaporate. It's only when it's gone that you are forced to wonder, well, where did the information go? I recommend you read this earlier blogpost to get you up to speed :o)
Thanks Sabine, I think I would vote "there is no spoon"... mostly for the fun of it.
One more question: Would you know of any theory where QM is a by-product of gravitation?
Thanks
J.
akidbelle,
Not sure what you mean. You mean that quantum mechanical indeterminism is a by-product of (presumably classical) gravitational fluctuations? That would be a kind of hidden-variables theory. It had better be nonlocal then. That's kind of a problem, because gravity is local. I don't know of any compelling model. Also, if such gravitational fluctuations had any effect, they would most likely cause decoherence. This is something people have looked for experimentally (and haven't found anything). Then again, you might mean that gravity causes the 'collapse' of the wave-function, which is basically the idea of Penrose's collapse model. The problem with this is, again, that gravity is a local theory, and that's a big constraint on what you can do with it in terms of model building. Either way, people are testing for this idea too. (I'm expecting tight constraints to appear soon.)
I guess I was thinking of infalling matter: the potential goes as the mass density it sees, and at some point the density increases to a critical value (Planck density?) and there is a phase transition at some radius r. Inside the horizon it is in the condensed phase (to use the term very loosely).
Hit send too fast. At this critical density the strong gravitational coupling brings the quantized energy of the first excited state down to what was the old vacuum, and the degeneracy happens. And as in superradiance, when this happens you get a macroscopic field amplitude for the interior.
This happens in the Jaynes-Cummings model for N atoms,

$$H = \omega\left(\sigma_z + a^\dagger a\right) + g\left(a^\dagger \sigma_- + a\, \sigma_+\right),$$

when the collective coupling $g\sqrt{N}$ is larger than $\omega$.
About that Vampire Knight Kain Akatsuki wig.... Is black hole information heir to the quantum vampire effect?
http://quantumtantra.blogspot.com/2014/08/quantum-vampire-effect.html
and doi:10.1364/OPTICA.2.000112
Hi Sabine, thanks.
I was thinking of a paper on gravitation I quoted some time ago on this blog, based on Wheeler-Feynman's absorber (which may be "just a coincidence"), augmented with Cramer's QM interpretation. In this way, it is an error to consider fluctuations as "causing" decoherence.
I do not mean gravity causes the collapse of the wave function because this would require two separate entities (gravity and "whatever carries the wave"). I am one of those who still believe there is only one "stuff" at the bottom - maybe that's too naive - and I do think that it can be understood.
Thanks again,
J.
Perry Rice,
It might possibly be that matter undergoes a phase transition during collapse; that depends on the matter and on its equation of state. But this has very little to do with the formation of the black hole. The horizon forms when the total matter is compressed to a radius below the Schwarzschild radius. The density of matter at that point can be arbitrarily low, and the density depends on the total amount of matter (the more matter, the lower the density). Thus, I don't really see the connection.
GratefulRob,
It does, at least in principle. Everything that's in the metric also goes into the transformation from ingoing to outgoing states, though this contribution is typically neglected (one makes an approximation in the horizon vicinity). It also doesn't really help, though, because this 'wake' doesn't carry all the information about the quantum state.
Also, I am not thinking of equations of state, but of a quantum (zero-temperature) phase transition that would be part of a possible quantum solution.