Friday, February 26, 2016

"Rate your Supervisor" comes to High Energy Physics

A new website called the "HEP Postdoc Project" allows postdocs in high energy physics to rate their supervisors in categories like "friendliness," "expertise," and "accessibility."

I normally ignore emails that more or less explicitly ask me to advertise sites on my blog, but decided to make an exception for this one. It seems to be a hand-made project run by a small number of anonymous postdocs who want to help their fellows find good supervisors. And it's a community that I care much about.

While I appreciate the initiative, I have to admit being generally unenthusiastic about anonymous ratings on point scales. Having had the pleasure of reading through an estimated several thousand recommendation letters, I have found that an assessment of skills is only useful if you know the person it comes from.

Much of this is cultural. A letter from a Russian prof that says this student isn't entirely bad at math might mean the student is up next for the Fields Medal. On the other hand, letters from North Americans tend to exclusively contain positive statements, and the way to read them is to search for qualities that were not listed.

But leaving aside the cultural stereotypes, more important are personal differences in the way people express themselves and use point scales, even if they are given a description for each rating (and that is missing on the website). We occasionally used 5-point rating scales in committees. You then notice quickly that some people tend to clump everyone in the middle range, while others are more comfortable using the high and low scores. Others again either give a high rating or refuse to have an opinion at all. To get a meaningful aggregate, you can't just take an average, you need to know roughly how each committee member uses the scale. (Which requires endless hours of butt-flattening meetings. Trust me, I'd be happy to just click on a star scale and be done with it.)
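To make the point concrete, here is a minimal sketch with entirely made-up ratings and hypothetical raters and candidates. Normalizing each rater's scores for how that rater uses the scale, before averaging, can flip which candidate comes out ahead compared to a raw average:

```python
# A minimal sketch with made-up ratings and hypothetical raters, to show
# how the aggregate depends on how each rater uses a 5-point scale.
import statistics

ratings = {
    "rater_A": {"candidate_1": 3, "candidate_2": 4},  # clumps near the middle
    "rater_B": {"candidate_1": 5, "candidate_2": 1},  # uses the full range
    "rater_C": {"candidate_1": 4, "candidate_2": 5},  # only hands out high marks
}

def raw_average(candidate):
    return statistics.mean(r[candidate] for r in ratings.values())

def normalized_average(candidate):
    # Rescale each rater's scores to zero mean and unit spread first,
    # so that the rater's personal use of the scale is factored out.
    total = 0.0
    for scores in ratings.values():
        mean = statistics.mean(scores.values())
        spread = statistics.pstdev(scores.values()) or 1.0
        total += (scores[candidate] - mean) / spread
    return total / len(ratings)

for c in ("candidate_1", "candidate_2"):
    print(c, raw_average(c), round(normalized_average(c), 2))
# The raw average favors candidate_1 (4.0 vs 3.33), driven by the one rater
# who uses extreme scores; the normalized average favors candidate_2.
```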

You could object that any type of online rating suffers from these problems and yet they seem to serve some purpose. That's right of course, so this isn't to say they're entirely useless. Thus I am sharing this link thinking it's better than nothing. And at the very least you can have some fun browsing through the list to see who got the lowest marks ;)

Wednesday, February 24, 2016

10 Years BackRe(action)

Yes, today marks the 10th anniversary of my first post on this blog.

I started blogging while I was in Santa Barbara, in a tiny fifth-floor office that swayed slightly with the occasional earthquake. I meant to write about postdoc life in California, but ended up instead writing mostly about my research interests. Because, well, that's what I'm interested in. Sorry, California.

Those were the years of the String Wars and of Black Holes at the LHC. And since my writing was on target, traffic to this blog increased rapidly -- a somewhat surprising and occasionally disturbing experience.

Over the years, I repeatedly tried to share the work of regularly feeding this blog, but noticed it's more effort trying to convince others to write than to just write myself. And no, it's not zero effort. In an attempt to improve my Germenglish, I have read Strunk's "Elements of Style" forwards and backwards, along with several books titled "Writing Well" (which were written really well!), and I hope you benefit from it. For me, the outcome has been that now I can't read my older blogposts without crying over my own clumsy writing. Also, there's link-rot. But if you have some tolerance for awkward English and missing images, there are 10 years' worth of archives totalling more than 1500 entries waiting in the side-bar.

The content of this blog has slightly changed over the years. Notably, I don't share links here any more. For this, I use instead my twitter and facebook accounts, which you can follow to get reading recommendations and the briefer commentaries. But since I can't stand cluttered pages, this blog is still ad-free and I don't make money with it. So if you like my writing, please have a close look at the donate-button in the top-right corner.

In the 10 years that have passed, this blog moved with me through the time-zones, from California to Canada, from Canada to Sweden, and from Sweden eventually back to Germany. It witnessed my wedding and my pregnancy and my daughters turning from babies to toddlers to Kindergartners. And the journey goes on. As some of you know already, I'm writing a book (or at least I'm supposed to be writing a book), so stay tuned, there's more to come.

I want to thank all of you for reading along, especially the commenters. I know that some of you have been around since the first days, and you have become part of my extended family. You have taught me a lot, about life and about science and about English grammar.

A special thank you goes to those of you who have sent me donations since I put up the button a few months ago. It is a great encouragement for me to continue.

Monday, February 22, 2016

Too many anti-neutrinos: Evidence builds for new anomaly

Bump ahead.
Tl;dr: A third experiment has reported an unexplained bump in the spectrum of reactor-produced anti-neutrinos. Speculations for the cause of the signal so far focus on incomplete nuclear fission models.


Neutrinos are the least understood of the known elementary particles, and they just presented physicists with a new puzzle. While monitoring the neutrino flux from nearby nuclear power plants, three different experiments have measured an unexpected bump around 5 MeV. First reported by the Double Chooz experiment in 2014, the excess was originally not statistically significant:
5 MeV bump as seen by Double Chooz. Image source: arXiv:1406.7763
Last year, a second experiment, RENO, reported an excess but did not assign a measure of significance. However, the bump is clearly visible in their data:
5 MeV bump as seen by RENO. Image source: arXiv:1511.05849
The newest bump is from the Daya Bay collaboration and was just published in PRL:

5 MeV bump as seen by Daya Bay. Image source: arXiv:1508.04233

They give the excess a local significance of 4.1 σ – a probability of less than one in ten thousand for the signal being due to pure chance.
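For those who want to check the quoted number, the conversion from a significance in standard deviations to a tail probability is a one-liner (a sketch, assuming the usual one-sided Gaussian tail and ignoring any look-elsewhere effect):

```python
# Convert a local significance of 4.1 sigma into a one-sided Gaussian
# tail probability.
from scipy.stats import norm

sigma = 4.1
p_value = norm.sf(sigma)            # survival function = one-sided tail
print(f"p = {p_value:.1e}")         # ~2e-5, i.e. less than one in ten thousand
```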

This is a remarkable significance for a particle that interacts so feebly, and an impressive illustration of how much detector technology has improved. Originally, the neutrino’s interaction was thought to be so weak that to measure it at all it seemed necessary to place detectors next to the most potent neutrino source known – a nuclear bomb explosion.

And this is exactly what Frederick Reines and Clyde Cowan set out to do. In 1951, they devised “Project Poltergeist” to detect the neutrino emission from a nuclear bomb: “Anyone untutored in the effects of nuclear explosions would be deterred by the challenge of conducting an experiment so close to the bomb,” wrote Reines, “but we knew otherwise from experience and pressed on.” And their audacious proposal was approved swiftly: “Life was much simpler in those days—no lengthy proposals or complex review committees,” recalls Reines.

Shortly after their proposal was approved, however, the two men found a better experimental design and instead placed a larger detector close to a nuclear power plant. But the controlled splitting of nuclei in a power plant needs much longer to produce the same number of neutrinos as a nuclear bomb blast, and patience was required of Reines and Cowan. Their patience eventually paid off: They were awarded the 1995 Nobel Prize in physics for the first successful detection of neutrinos – a full 65 years after the particles were first predicted.

Another Nobel Prize for neutrinos was handed out just last year, this one commemorating the neutrino’s ability to “oscillate,” that is, to change between different neutrino types as they travel. But, as the recent measurements demonstrate, neutrinos still have surprises in store.

Good news first: the new experiments have confirmed the neutrino oscillations. On short baselines like that of Daya Bay – a few kilometers – the electron anti-neutrinos that are emitted during nuclear fission change into tau anti-neutrinos and arrive at the detector in reduced numbers. The wavelength of the oscillation between the two particles depends on the energy – higher energy means a longer wavelength. Thus, a detector placed at a fixed distance from the emission point will see a different energy distribution of particles than that at emission.
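For concreteness, here is a minimal sketch of the standard two-flavour survival probability that such reactor experiments fit to their data. The mixing parameters below are rough, commonly quoted values rather than any collaboration's best fit, and the baseline is only of the order of Daya Bay's far detectors:

```python
# Two-flavour survival probability for reactor anti-neutrinos:
# P = 1 - sin^2(2*theta_13) * sin^2(1.267 * dm2 * L / E),
# with dm2 in eV^2, L in metres and E in MeV.
# Parameter values are rough, commonly quoted numbers, not a best fit.
import numpy as np

sin2_2theta13 = 0.09      # approximate mixing amplitude
dm2 = 2.4e-3              # approximate |Delta m^2| in eV^2
L = 1600.0                # baseline in metres, roughly Daya Bay's far halls

def survival_probability(E_MeV):
    return 1.0 - sin2_2theta13 * np.sin(1.267 * dm2 * L / E_MeV) ** 2

for E in (2.0, 4.0, 6.0):  # typical reactor anti-neutrino energies in MeV
    print(f"E = {E:.0f} MeV: P(survival) = {survival_probability(E):.3f}")
```

Note how the oscillation phase goes as L/E, which is why a fixed detector sees an energy-dependent deficit.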

The emitted energy spectrum can be deduced from the composition of the reactor core – a known mixture of Uranium and Plutonium, each in two different isotopes. After the initial split, these isotopes leave behind a bunch of radioactive nuclei which then decay further. The math is messy, but not hugely complicated. With nuclear fission and decay models as input, the experimentalists can then extract from their data the change in the energy-distribution due to neutrino oscillation. And the parameters of the oscillation that they have observed fit those of other experiments.
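The bookkeeping behind the prediction can be sketched as a fission-fraction-weighted sum over the four main isotopes. Everything in the snippet below is a placeholder: the per-isotope spectra are crude exponentials standing in for actual nuclear-model output, and the fission fractions are just typical textbook numbers for a pressurized water reactor:

```python
# Schematic only: the predicted anti-neutrino spectrum is a weighted sum
# over the main fissioning isotopes. The per-isotope spectra here are crude
# placeholder exponentials, NOT the output of any nuclear model.
import numpy as np

E = np.linspace(1.8, 8.0, 200)   # anti-neutrino energy in MeV

# Rough, typical fission fractions for a pressurized water reactor.
fission_fractions = {"U235": 0.58, "U238": 0.07, "Pu239": 0.30, "Pu241": 0.05}

def placeholder_spectrum(E, slope):
    # Stand-in for the tabulated anti-neutrino spectrum per fission.
    return np.exp(-slope * E)

slopes = {"U235": 0.80, "U238": 0.70, "Pu239": 0.90, "Pu241": 0.85}

predicted = sum(frac * placeholder_spectrum(E, slopes[iso])
                for iso, frac in fission_fractions.items())
# 'predicted' is what gets compared, bin by bin, with the measured spectrum.
```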

Now to the bad news. The fits of the oscillation parameters to the energy spectrum do not take into account the overall number of particles. And when they look at the overall number, the Daya Bay experiment, like other reactor neutrino experiments before it, falls about 6% short of expectation. And then there is the other oddity: the energy spectrum has a marked bump that does not agree with the predictions based on nuclear models. There are too many neutrinos in the energy range around 5 MeV.

There are four possible origins for this discrepancy: Detection, travel, production, and misunderstood background. Let us look at them one after the other.

Detection: The three experiments all use the same type of detector, a liquid scintillator with a Gadolinium target. Neutrino-nucleus cross-sections are badly understood because neutrinos interact so weakly and very little data is available. However, the experimentalists calibrate their detectors with other radioactive sources placed in the near vicinity, and no bumps have been seen in these reference measurements. This strongly speaks against detector shortcomings as an explanation.

Travel: An overall lack of particles could be explained with oscillation into a so-far undiscovered new type of ‘sterile’ neutrino. However, such an oscillation cannot account for a bump in the spectrum. This could thus at best be a partial explanation, though an intriguing one.

Production: The missing neutrinos and the bump in the spectrum are inferred relative to the expected neutrino flux from the power plant. To calculate the emission spectrum, the physicists rely on nuclear models. The isotopes in the power plant’s core are among the best studied nuclei ever, but still this is a likely source of error. Most research studies of radioactive nuclei investigate them in small numbers, whereas in a reactor a huge number of different nuclei are able to interact with each other. A few proposals have been put forward that mostly focus on the decay of Rubidium and Yttrium isotopes because these make the main contribution to the high energy tail of the spectrum. But so far none of the proposed explanations has been entirely convincing.

Background: Daya Bay and RENO both state that the signal is correlated with the reactor power which makes it implausible that it’s a background effect. There aren’t many details in the paper about the time-dependence of the emission though. It would seem possible to me that reactor power depends on the time of the day or on the season, both of which could also be correlated with background. But this admittedly seems like a long shot.

Thus, at the moment the most conservative explanation is an incomplete understanding of the processes taking place in the nuclear power plant. It presently seems very unlikely to me that there is fundamentally new physics involved in this – if the signal is real to begin with. It looks convincing to me, but I asked fellow blogger Tommaso Dorigo for his thoughts: “Their signal looks a bit shaky to me - it is very dependent on the modeling of the spectrum and the p-value is unimpressive, given that there is no reason to single out the 5 MeV region a priori. I bet it's a modeling issue.”

Whatever the origin of the reactor antineutrino anomaly, pinning it down will require further experiments. As Anna Hayes, a nuclear theorist at Los Alamos National Laboratory, told Fermilab’s Symmetry Magazine: “Nobody expected that from neutrino physics. They uncovered something that nuclear physics was unaware of for 40 years.”

Wednesday, February 17, 2016

Dear Dr Bee: Can LIGO’s gravitational wave detection tell us something about quantum gravity?

“I was hoping you could comment on the connection between gravitational waves and gravitational quanta. From what I gather, the observation of gravitational waves at LIGO do not really tell us anything about the existence or properties of gravitons. Why should this be the case?”

“Can LIGO provide any experimental signature of quantum gravity?”

“Is gravity wave observation likely to contribute to [quantum] gravity? Or is it unlikely to be sensitive enough?”


It’s a question that many of you asked, and I have an answer for you over at Forbes! Though it comes down to “don’t get your hopes up too high.” (Sorry for the extra click, it’s my monthly contribution to Starts With a Bang. You can leave comments here instead.)

Monday, February 15, 2016

What makes an idea worthy? An interview with Anthony Aguirre

That science works merely by testing hypotheses has never been less true than today. As data have become more precise and theories have become more successful, scientists have become increasingly careful in selecting hypotheses before even putting them to test. Commissioning an experiment for every odd idea would be an utter waste of time, not to mention money. But what makes an idea worthy?

Pre-selection of hypotheses is especially important in fields where internal consistency and agreement with existing data are already very strong constraints, and it therefore plays an essential role in the foundations of physics. In this area, most new hypotheses are born dead or die very quickly, and researchers would rather not waste time devising experimental tests for ill-fated non-starters. During their career, physicists must thus constantly decide whether a new idea justifies spending years of research on it. Next to personal interest, their decision criteria are often based on experience and community norms – past-oriented guidelines that reinforce academic inertia.

Philosopher Richard Dawid coined the term “post-empirical assessment” for this practice of hypothesis pre-selection, and described it as a non-disclosed Bayesian probability estimate. But philosophy is one thing, doing research another. For the practicing scientist, the relevant question is whether a disclosed and organized pre-selection could help advance research. This would require the assessment to be performed in a cleaner way than is presently the case, a way that is less prone to error induced by social and cognitive biases.

One way to achieve this could be to give researchers incentives for avoiding such biases. Monetary incentives are a possibility, but convincing a scientist that their best course of action is to put aside the need to promote their own research would require incentives on the order of several years’ worth of research grants – an amount that adverts on nerd pages won’t raise, and thus an idea that itself looks like one of those ill-fated non-starters. But then, for most scientists their reputation is more important than money.

Anthony Aguirre.
Image Credits: Kelly Castro.
And so Anthony Aguirre, Professor of Physics at UC Santa Cruz, devised an algorithm by which scientists can estimate the chances that an idea succeeds, and gain reputation by making accurate predictions. On his website Metaculus, users are asked to evaluate the likelihood of success for various scientific and technological developments. In the email exchange below, Anthony explains his idea.

Bee: Last time I heard from you, you were looking for bubble collisions as evidence of the multiverse. Now you want physicists to help you evaluate the expected impact of high-risk/high-reward research. What happened?

Anthony: Actually, I’ve been thinking about high-risk/high-reward research for longer than bubble collisions! The Foundational Questions Institute (FQXi) is now in its tenth year, and from the beginning we’ve seen part of FQXi’s mission as helping to support the high-risk/high-reward part of the research funding spectrum, which is not that well-served by the national funding agencies. So it’s a long-standing question how to best evaluate exactly how high-risk and high-reward a given proposal is.

Bubble collisions are actually a useful example of this. It’s clear that seeing evidence of an eternal-inflation multiverse would be pretty huge news, and of deep scientific interest. But even if eternal inflation is right, there are different versions of it, some of which have bubbles and some of which don’t; and even of those that do, only some subset will yield observable bubble collisions. So: how much effort should be put into looking for them? A few years of grad student or postdoc time? In my opinion, yes. A dedicated satellite mission? No way, unless there were some other evidence to go on.

(Another lesson, here, in my opinion, is that if one were to simply accept the dismissive “the multiverse is inherently unobservable” critique, one would never work out that bubble collisions might be observable in the first place.)

B: What is your relation to FQXi?

A: Max Tegmark and I started FQXi in 2006, and have had a lot of fun (and only a bit of suffering!) trying to build something maximally useful to the community of people thinking about the type of foundational, big-picture questions we like to think about.

B: What problem do you want to address with Metaculus?

A: Predicting and evaluating (should “prevaluating” be a word?) science research impact was actually — for me — the second motivation for Metaculus. The first grew out of another nonprofit I helped found, the Future of Life Institute (FLI). A core question there is how major new technologies like AI, genetic engineering, nanotech, etc., are likely to unfold. That’s a hard thing to know, but not impossible to make interesting and useful forecasts for.

FLI and organizations like it could try to build up a forecasting capability by hiring a bunch of researchers to do that. But I wanted to try something different: to generate a platform for soliciting and aggregating predictions that — with enough participation and data generation — could make accurate and well-calibrated predictions about future technology emergence as well as a whole bunch of other things.

As this idea developed, my collaborators (including Greg Laughlin at UCSC) and I realized that it might also be useful in filling a hole in our community’s ability to predict the impact of research. This could in principle help make better decisions about questions ranging from the daily (“Which of these 40 papers in my ‘to read’ folder should I actually carefully read?”) to the large-scale (“Should we fund this $2M experiment on quantum cognition?”).

B: How does Metaculus work?

A: The basic structure is a set of (currently) binary questions about the occurrence of future events, ranging from predictions about technologies like self-driving cars, Go-playing AIs and nuclear fusion, to pure science questions such as the detection of Planet 9, publication of experiments in quantum cognition or tabletop quantum gravity, or announcement of the detection of gravitational waves.

Participants are invited to assess the likelihood (1%-99%) of those events occurring. When a given question ‘resolves’ as either true or false, points are awarded depending on a user’s prediction, the community’s prediction, and what actually happened. These points add a competitive game aspect, but serve the more important purpose of providing steady feedback, so that predictors can learn how to predict more accurately, and with better calibration. As data accumulate, predictors will also amass a track record, both overall and in particular subjects. This can be used to aggregate predictions into a single, more accurate, one (at the moment, the ‘community’ prediction is just a straight median).
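The exact point formula Metaculus uses isn't spelled out here, but the feedback idea can be illustrated with a standard logarithmic scoring rule, which rewards probabilities that are both accurate and well calibrated (a sketch, not the site's actual algorithm):

```python
# A sketch of probability scoring with a logarithmic rule -- not Metaculus's
# actual point formula, just the general idea behind rewarding calibration.
import math

def log_score(predicted_probability, outcome):
    # outcome: True if the event happened, False if it did not.
    p = predicted_probability if outcome else 1.0 - predicted_probability
    return math.log(p)

print(log_score(0.9, True))    # ~ -0.11: confident and right scores best
print(log_score(0.6, True))    # ~ -0.51: hedged and right scores worse
print(log_score(0.9, False))   # ~ -2.30: confident and wrong is punished hard
```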

An important aspect of this, I think, is not ‘just’ to make better predictions about well-known questions, but to create lots and lots of well-posed questions. It really does make you think about things differently when you have to come up with a well-posed question that has a clear criterion for resolution. And there are lots of questions where even a few predictions (even one!) by the right people can be a very useful resource. So a real utility is for this to be a sort of central clearing-house for predictions.

B: What is the best possible outcome that you can imagine from this website and what does it take to get there?

A: The best outcome I could imagine would be this becoming really large-scale and useful, like a Wikipedia or Quora for predictions. It would also be a venue in which the credibility to make pronouncements about the future would be based on one’s actual demonstrated ability to make good predictions. There is, sadly, nothing like that in our current public discourse, and we could really use it.

I’d also be happy (if not as happy) to see Metaculus find a more narrow but deep niche, for example in predicting just scientific research/experiment success, or just high-impact technological rollouts (such as AI or Biotech).

In either case, it will take continued steady growth of both the community of users and the website’s capabilities. We already have all sorts of plans for multi-outcome questions, contingent questions, Bayes nets, algorithms for matching questions to predictors, etc. — but that will take time. We also need feedback about what users like, and what they would like the system to be able to do. So please try it out, spread the word, and let us know what you think!

Wednesday, February 10, 2016

Everything you need to know about gravitational waves

Last year in September, upgrades of the gravitational wave interferometer LIGO were completed. The experiment – now named advanced LIGO – searches for gravitational waves emitted in the merger of two black holes. Such a merger signal should fall straight into advanced LIGO’s reach.

Estimated gravitational wave spectrum. [Image Source]


It was thus expected that the upgraded experiment would either see something immediately, or that we had gotten something terribly wrong. And indeed, rumors about a positive detection started to appear almost immediately after the upgrade. But it wasn’t until this week that the LIGO collaboration announced several press conferences in the USA and Europe, scheduled for tomorrow, Thursday Feb 11, at 3:30 pm GMT. So something big is going to hit the headlines tomorrow, and here are the essentials that you need to know.

Gravitational waves are periodic distortions of space-time. They alter distance ratios for orthogonal directions. An interferometer works by using lasers to measure and compare orthogonal distances very precisely, thus it picks up even the tiniest space-time deformations.
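To get a feeling for the numbers, here is the back-of-envelope arithmetic for the commonly quoted strain of order 10^-21 over a kilometre-scale arm (both numbers are rough orders of magnitude, used only for illustration):

```python
# Back-of-envelope: what a strain of order 1e-21 means for a 4 km arm.
arm_length = 4.0e3        # metres
strain = 1.0e-21          # dimensionless, h = delta_L / L

delta_L = strain * arm_length
print(f"change in arm length: {delta_L:.0e} m")   # ~4e-18 m, a tiny fraction of a proton radius
```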

Moving masses produce gravitational waves much like moving charges create electromagnetic waves. The most relevant differences between the two cases are:
  1. Electromagnetic waves travel in space-time, whereas gravitational waves are a disturbance of space-time itself.
  2. Electromagnetic waves have spin 1, gravitational waves have spin 2. The spin counts how much you have to rotate the wave for it to come back onto itself. For the electromagnetic field that’s one full rotation, for the gravitational field it’s only half a rotation.
  [Image Credit: David Abergel]
  3. The dominant electromagnetic emission comes from the dipole moment (normally used, e.g., for transmitter antennae), but gravitational waves have no dipole contribution (a consequence of momentum conservation). It’s instead the quadrupole emission that dominates.
If you keep these differences in mind, you can understand gravitational waves in much the same way as electromagnetic waves. They can exist at any wavelength. They move at the speed of light. How many there are at a given wavelength depends on how many processes there are to produce them. The known processes give rise to the distribution in the graphic above. A gravitational wave detector is basically an antenna tuned in to a particularly promising frequency.

Since all matter gravitates, the motion of matter generically creates gravitational waves. Every time you move, you create gravitational waves, lots of them. These are, however, so weak that they are impossible to measure.

The gravitational waves that LIGO is looking for come from the most violent events in the universe that we know of: black hole mergers. In these events, space-time gets distorted dramatically as the two black holes merge into one, leading to significant emission of gravitational waves. The combined system later settles with a characteristic “ringdown” into a new stable state.



Yes, this also means that these gravitational waves go right through you and distort you oh-so-slightly on their way.

The wavelengths of gravitational waves emitted in such merger events are typically of the same order as the dimension of the system. That is, for black holes with masses between 10 and 100 times the solar mass, wavelengths are typically a hundred to a thousand km – right in the range where LIGO is most sensitive.
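The wavelength estimate itself is simple arithmetic, λ = c/f, applied to signal frequencies in LIGO's most sensitive band (the frequencies below are rough, representative values for stellar-mass mergers, not measured ones):

```python
# Pure arithmetic: converting representative signal frequencies for
# stellar-mass mergers into wavelengths via lambda = c / f.
c = 3.0e8                                    # speed of light in m/s

for f in (100.0, 300.0, 1000.0, 3000.0):     # frequencies in Hz (rough values)
    wavelength_km = c / f / 1e3
    print(f"f = {f:6.0f} Hz  ->  wavelength ~ {wavelength_km:5.0f} km")
```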

If you want to score extra points when discussing the headlines we expect tomorrow, learn how to pronounce Fabry–Pérot. This is a method for bouncing light-signals back and forth in the interferometer arms several times before making the measurement, which effectively increases the arm length. This is why LIGO is sensitive in a wavelength regime far longer than its actual arm length of about 2-4 km. And don’t call them gravity waves. A gravity wave is a cloud phenomenon.

Gravitational waves were predicted a hundred years ago as one of the consequences of Einstein’s theory of General Relativity. Their existence has since been indirectly confirmed because gravitational wave emission leads to energy loss, with the consequence that two stars orbiting a common center speed up over the course of time. This has been observed and was awarded the Nobel Prize for physics in 1993. If LIGO has detected the sought-after signal, it would not be the first detection, but the first direct detection.
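For the indirect evidence, one can actually check the numbers with the textbook quadrupole formula for the orbital period decay of a binary (Peters 1964), applied to the Hulse–Taylor pulsar. The parameter values below are approximate published numbers, so treat this as a sketch rather than a precision test:

```python
# Predicted orbital period decay of the Hulse-Taylor binary pulsar from the
# standard quadrupole formula (Peters 1964), with approximate parameters.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

m1, m2 = 1.44 * M_sun, 1.39 * M_sun    # approximate pulsar and companion masses
P = 27907.0                            # orbital period in seconds (~7.75 hours)
e = 0.617                              # orbital eccentricity

eccentricity_factor = (1 + (73/24) * e**2 + (37/96) * e**4) / (1 - e**2)**3.5
dP_dt = (-192 * math.pi / 5
         * G**(5/3) / c**5
         * (P / (2 * math.pi))**(-5/3)
         * eccentricity_factor
         * m1 * m2 * (m1 + m2)**(-1/3))

print(f"predicted dP/dt ~ {dP_dt:.1e} s/s")   # about -2.4e-12, matching observation
```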

Interestingly, even though it was long known that black hole mergers would emit gravitational waves, it wasn't until computing power had increased sufficiently that precise predictions became possible. So it's not like experiment is all that far behind theory on this one. General Relativity, though often praised for its beauty, does leave you with a nasty set of equations that in most cases cannot be solved analytically, so computer simulations become necessary.

The existence of gravitational waves is not doubted by anyone in the physics community, or at least not by anybody I have met. This is for good reasons: On the experimental side there is the indirect evidence, and on the theoretical side there is the difficulty of making any theory of gravity work that does not have gravitational waves. But the direct detection of gravitational waves would be tremendously exciting because it opens our eyes to an entirely new view on the universe.

Hundreds of millions of years ago, a primitive form of life crawled out of the water on planet Earth and opened its eyes to see, for the first time, the light of the stars. Detecting gravitational waves is a momentous event just like this – it's the first time we can receive signals that were previously entirely hidden from us, revealing an entirely new layer of reality.

So bookmark the webcast page and mark your calendar for tomorrow, 3:30 pm GMT – it might enter the history books.

Update Feb 11: The rumors were all true. They have a 5.1 σ signal of a binary black hole merger. The paper is published in PRL, here is the abstract.

Friday, February 05, 2016

Much Ado around Nothing: The Cosmological non-Constant Problem

Tl;dr: Researchers put forward a theoretical argument that new physics must appear at energies much lower than commonly thought, barely beyond the reach of the LHC.

The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude. And as if that wasn't embarrassing enough, this gives rise to not one but three problems: Why is the measured cosmological constant neither 1) huge nor 2) zero, and 3) why did it come to dominate the universe only now, rather than a billion years earlier or later? With that, you'd think that physicists have their hands full getting the zeroes arranged correctly. But Niayesh Afshordi and Elliot Nelson just added to our worries.

In a paper that took third place in this year's Buchalter Cosmology Prize, Afshordi and Nelson pointed out that the cosmological constant, if it arises from the vacuum energy of matter fields, should be subject to quantum fluctuations. And these fluctuations around the average are still large even if you have managed to get the constant itself to be small.

The cosmological constant, thus, is not actually constant. And since matter curves space-time, the matter fluctuations lead to space-time fluctuations – which can screw with our cosmological models. Afshordi and Nelson dubbed it the “Cosmological non-Constant Problem.”

But there is more to their argument than just adding to our problems because Afshordi and Nelson quantified what it takes to avoid a conflict with observation. They calculate the effect of stress-energy fluctuations on the space-time background, and then analyze what consequences this would have for the gravitational interaction. They introduce as a free parameter an energy scale up to which the fluctuations abound, and then contrast the corrections from this with observations, like for example the CMB power spectrum or the peculiar velocities of galaxy clusters. From these measurements they derive bounds on the scale at which the fluctuations must cease, and thus, where some new physics must come into play.

They find that the scale beyond which we should already have seen the effect of the vacuum fluctuations is about 35 TeV. If their argument is right, this means something must happen either to matter or to gravity before reaching this energy scale; the option the authors advocate in their paper is that physics becomes strongly coupled below this scale (thus invalidating the extrapolation to larger energies, removing the problem).

Unfortunately, the LHC will not be able to reach all the way up to 35 TeV. But a next larger collider – and we all hope there will be one! – almost certainly would be able to test the full range. As Niayesh put it: “It’s not a problem yet” – but it will be a problem if there is no new physics before getting all the way up to 35 TeV.

I find this an interesting new twist on the cosmological constant problem(s). Something about this argument irks me, but I can't quite put my finger on it. If I have an insight, you'll hear from me again. Just generally, I would caution you not to take the exact numerical value too seriously, because in this kind of estimate there are usually various places where factors of order one might come in.

In summary, if Afshordi and Nelson are right, we’ve been missing something really essential about gravity.

Me, Elsewhere

I'm back from my trip. Here are some things that prevented me from more substantial blogging:
  • I wrote an article for Aeon, "The superfluid Universe," which just appeared. For a somewhat more technical summary, see this earlier blogpost.
  • I did a Q&A with John The-End-of-Science Horgan, which was fun. I disagree with him on many things, but I admire his writing. He is infallibly skeptical and unashamedly opinionated -- qualities I find lacking in much of today's science writing, including, sometimes, my own.
  • I spoke with Davide Castelvecchi about Stephen Hawking's recent attempt to solve the black hole information loss problem, which I previously wrote about here.
  • And I had some words to spare for Zeeya Merali, probably more words than she wanted, on the issue of arXiv moderation, which we discussed here.
  • Finally, I had the opportunity to give some input for this video on the PhysicsGirl's YouTube channel:



    I previously explained in this blogpost that Hawking radiation is not produced at the black hole horizon, a correction to the commonly used popular science explanation that caught much more attention than I anticipated.

    There are of course still some things in the above video I'd like to complain about. To begin with, anti-particles don't normally have negative energy (no, they don't). And the vacuum is the same for two observers who are moving relative to each other with constant velocity – it's the acceleration that makes the difference between the vacua. In any case, I applaud the Physics Girl team for taking on what is admittedly a rather technical and difficult topic. If anyone can come up with a better illustration for Hawking radiation than Hawking's own idea with the pairs that are being ripped apart (which is far too localized to fit well with the math), please leave a suggestion in the comments.