Tuesday, May 21, 2019

Book and travel update

The French translation of my book “Lost in Math” has now appeared under the title “Lost in Maths: Comment la beauté égare la physique”.

The Spanish translation has also now appeared under the title “Perdidos en las matemáticas: Cómo la belleza confunde a los físicos.” I don’t speak Spanish, but as far as I can tell, the title is a literal translation.

On Thursday (May 23rd) I am giving a public lecture in Barcelona. The lecture, it turns out, will be simultaneously translated into Spanish. This, I think, will be an interesting experience.

The next talks I have scheduled are a colloquium in Mainz, Germany, on June 11, and a public lecture in Groningen, Netherlands, on June 21st. The public lecture is associated with the workshop “Probabilities in Cosmology” at the University of Groningen.

I declined the invitation to the Nobel Laureate meeting in Lindau because I was informed they would only cover my travel expenses if I agreed in advance to write about their meeting for a third party. (If you get pitches about the meeting, please ask the author for a COI.)

After some back and forth, I accepted the invitation to SciFoo 2019, mostly because I couldn’t think of a way to justify declining it even to myself.

The fall is filling up too. The current plan looks roughly like this: On September 21st, I am giving a public lecture in Nürnberg. In early October I am in Brussels for a workshop. In mid-October I am giving a public lecture at the University of Minnesota. (I have not yet booked the flight for this trip, so if you want me to stop by your institution for a lecture on the way, please get in touch asap.) At the end of October I am giving a lecture in Göttingen, and the first week of November I am in Potsdam and, again, in Berlin.

From November on, I will be unemployed, at least that is what it presently looks like. Or maybe I should say I will be fully self-employed. Either way, I will have to think of some other way to earn money than doing calculations in Anti-de Sitter space.

Finally, here is the usual warning that I am traveling for the rest of the week and comments on this blog will be stuck in the moderation queue longer than usual.

Sunday, May 19, 2019

10 things you should know about black holes


When I first learned about black holes, I was scared that one would fly through our solar system and eat us up. That was 30 years ago. I'm not afraid of black holes anymore but I am afraid that they have been misunderstood. So here are 10 things that you should know about black holes.

1. What is a black hole?

A black hole contains a region from which nothing can ever escape, because, to escape, you would have to move faster than the speed of light, which you can’t. The boundary of the region from which you cannot escape is called the “horizon.” In the simplest case, the horizon has the form of a sphere. Its radius is known as the Schwarzschild radius, named after Karl Schwarzschild, who first derived black holes as a solution of Einstein’s General Relativity.

2. How large are black holes?

The diameter of a black hole is directly proportional to the mass of the black hole. So the more mass falls into the black hole, the larger the black hole becomes. Compared to other stellar objects though, black holes are tiny because enormous gravitational pressure has compressed their mass into a very small volume. For example, the radius of a black hole with the approximate mass of planet Earth is only a few millimeters.
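This linear scaling follows from the Schwarzschild radius formula, r = 2GM/c². As a quick back-of-the-envelope check (using standard values for the constants):

```python
# Schwarzschild radius r = 2*G*M/c^2 -- directly proportional to the mass M.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Horizon radius (in meters) of a non-rotating black hole of the given mass."""
    return 2 * G * mass_kg / c**2

M_earth = 5.972e24  # kg
M_sun   = 1.989e30  # kg

print(schwarzschild_radius(M_earth))  # ~0.009 m: a few millimeters
print(schwarzschild_radius(M_sun))    # ~2950 m: about 3 kilometers
```

An Earth-mass black hole would indeed fit in the palm of your hand, while a solar-mass one is only a few kilometers across.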

3. What happens at the horizon?

A black hole horizon does not have substance. Therefore, someone crossing the black hole horizon does not notice anything weird going on in their immediate surroundings. This follows from Einstein’s equivalence principle, which implies that in your immediate surroundings you cannot tell the difference between acceleration in flat space and the gravity that arises from curved space.

However, an observer far away from a black hole who watches somebody fall in would notice that the infalling person seems to move slower and slower the closer they get to the horizon. It appears this way because time close to the black hole horizon runs much slower than time far away from the horizon.

That’s one of these odd consequences of the relativity of time that Einstein discovered. So, if you fall into a black hole, it only takes a finite amount of time to cross the horizon, but from the outside it looks like it takes forever.

What you would experience at the horizon depends on the tidal force of the gravitational field. The tidal force is, loosely speaking, the change of the gravitational force. It’s not the gravitational force itself; it’s the difference between the gravitational forces at two nearby places, say at your head and at your feet.

The tidal force at the horizon is inversely proportional to the square of the mass of the black hole. This means the larger and more massive the black hole, the smaller the tidal force at the horizon. Yes, you heard that right. The larger the black hole, the smaller the tidal force at the horizon.

Therefore, if the black hole is only massive enough, you can cross the horizon without noticing what just happened. And once you have crossed the horizon, there is no turning back. The stretching from the tidal force will become increasingly unpleasant as you approach the center of the black hole, and eventually rip everything apart.
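A back-of-the-envelope way to see this scaling: in the Newtonian approximation, the tidal acceleration across a length L at radius r is roughly 2GML/r³, and inserting the horizon radius r = 2GM/c² gives Lc⁶/(4G²M²), which falls with the square of the mass. A rough sketch (the numbers are order-of-magnitude estimates, not a full general-relativistic calculation):

```python
# Newtonian estimate of the tidal acceleration (difference in pull between
# head and feet, separated by a length L) at radius r: da ~ 2*G*M*L / r^3.
# Evaluated at the horizon r = 2*G*M/c^2, this becomes da = L*c^6 / (4*G^2*M^2),
# i.e. inversely proportional to the square of the mass.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M_sun = 1.989e30  # kg

def tidal_at_horizon(mass_kg, L=2.0):
    """Approximate tidal acceleration (m/s^2) across a length L at the horizon."""
    return L * c**6 / (4 * G**2 * mass_kg**2)

print(tidal_at_horizon(M_sun))        # ~2e10 m/s^2: deadly long before the horizon
print(tidal_at_horizon(4e6 * M_sun))  # ~1e-3 m/s^2: entirely unnoticeable
```

For a stellar-mass black hole you would be shredded well outside the horizon, whereas at a supermassive black hole of a few million solar masses you would cross it without feeling a thing.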

In the early days of General Relativity many physicists believed that there is a singularity at the horizon, but this turned out to be a mathematical mistake.

4. What is inside a black hole?

Nobody really knows. General Relativity predicts that inside the black hole is a singularity, that’s a place where the tidal forces become infinitely large. But we know that General Relativity does not work near the singularity because there, the quantum fluctuations of space and time become large. To be able to tell what is inside a black hole we would need a theory of quantum gravity – and we don’t have one. Most physicists believe that such a theory, if we had it, would replace the singularity with something else.

5. How do black holes form?

We presently know of four different ways that black holes may form. The best understood one is stellar collapse. A sufficiently large star will form a black hole after its nuclear fusion runs dry, which happens when the star has fused everything that can be fused. Now, when the pressure generated by the fusion stops, the matter starts falling towards its own gravitational center, and thereby becomes increasingly dense. Eventually the matter is so dense that nothing can overcome the gravitational pull on the star’s surface: That’s when a black hole has been created. These black holes are called ‘solar mass black holes’ and they are the most common ones.

The next most common type of black holes are the ‘supermassive black holes’ that can be found in the centers of many galaxies. Supermassive black holes have masses about a billion times that of solar mass black holes, and sometimes even more. Exactly how they form is still not entirely clear. Many astrophysicists think that supermassive black holes start out as solar mass black holes and, because they sit in a densely populated galactic center, swallow a lot of other stars and grow. However, it seems that the black holes grow faster than this simple idea suggests, and exactly how they manage this is not well understood.

A more controversial idea is that of primordial black holes. These are black holes that might have formed in the early universe from large density fluctuations in the plasma. So, they would have been there all along. Primordial black holes can in principle have any mass. While this is possible, it is difficult to find a model that produces primordial black holes without producing too many of them, which would be in conflict with observation.

Finally, there is the very speculative idea that tiny black holes could form in particle colliders. This can only happen if our universe has additional dimensions of space. And so far, there has not been any observational evidence that this might be the case.

6. How do we know black holes exist?

We have a lot of observational evidence that speaks for very compact objects with large masses that do not emit light. These objects reveal themselves by their gravitational pull. They do this for example by influencing the motion of other stars or gas clouds around them, which we have observed.

We furthermore know that these objects do not have a surface. We know this because matter falling onto an object with a surface would cause more emission of particles than matter falling through a horizon and then just vanishing.

Most recently, we have the observation from the “Event Horizon Telescope,” which is an image of the black hole shadow. This is basically an extreme gravitational lensing event. All these observations are compatible with the explanation that they are caused by black holes, and no similarly good alternative explanation exists.

7. Why did Hawking once say that black holes don’t exist?

Hawking was using a very strict mathematical definition of black holes, and one that is rather uncommon among physicists. 

If the inside of the black hole horizon remains disconnected forever, we speak of an “event horizon”. If the inside is only disconnected temporarily, we speak of an “apparent horizon”. But since an apparent horizon could be present for a very long time, like, billions of billions of years, the two types of horizons cannot be told apart by observation. Therefore, physicists normally refer to both cases as “black holes.” The more mathematically-minded people, however, count only the first case, with an eternal event horizon, as black hole.

What Hawking meant is that black holes may not have an eternal event horizon but only a temporary apparent horizon. This is not a controversial position to hold, and one that is shared by many people in the field, including me. For all practical purposes though, the distinction Hawking drew is irrelevant.

8. How can black holes emit radiation?

Black holes can emit radiation because the dynamical space-time of the collapsing black hole changes the notion of what a particle is. This is another example of the “relativity” in Einstein’s theory. Just like time passes differently for different observers, depending on where they are and how they move, the notion of particles too depends on the observer, on where they are and how they move.

Because of this, an observer who falls into a black hole thinks he is falling in vacuum, but an observer far away from the black hole thinks that it’s not vacuum but full of particles. And where do the particles come from? They come from the black hole.

This radiation that black holes emit is called “Hawking radiation” because Hawking was the first to derive that this should happen. This radiation has a temperature which is inversely proportional to the black hole’s mass: So, the smaller the black hole, the hotter. For the stellar and supermassive black holes that we know of, the temperature is well below that of the cosmic microwave background and cannot be observed.
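The inverse scaling can be checked with Hawking’s formula, T = ħc³/(8πGMk_B). A quick sketch with standard constant values:

```python
import math

# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B) -- inversely
# proportional to the black hole mass M.
hbar = 1.0546e-34  # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.3807e-23  # Boltzmann constant, J/K
M_sun = 1.989e30   # kg

def hawking_temperature(mass_kg):
    """Hawking temperature (in Kelvin) of a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))  # ~6e-8 K, far below the ~2.7 K of the CMB
```

A solar-mass black hole is some eight orders of magnitude colder than the cosmic microwave background, which is why this radiation cannot be observed for any known black hole.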

9. What is the information loss paradox?

The information loss paradox is caused by the emission of Hawking radiation. This happens because the Hawking radiation is purely thermal which means it is random except for having a specific temperature. In particular, the radiation does not contain any information about what formed the black hole. 

But while the black hole emits radiation, it loses mass and shrinks. So, eventually, the black hole will be entirely converted into random radiation and the remaining radiation depends only on the mass of the black hole. It does not at all depend on the details of the matter that formed it, or whatever fell in later. Therefore, if one only knows the final state of the evaporation, one cannot tell what formed the black hole. 

Such a process is called “irreversible” — and the trouble is that there are no such processes in quantum mechanics. Black hole evaporation is therefore inconsistent with quantum theory as we know it and something has to give. Somehow this inconsistency has to be removed. Most physicists believe that the solution is that the Hawking radiation somehow must contain information after all.

10. So, will a black hole come and eat us up?

It’s not impossible, but very unlikely. 

Most stellar objects in galaxies orbit around the galactic center because of the way that galaxies form. It happens on occasion that two solar systems collide and a star, planet, or black hole is kicked onto a strange orbit, leaves one solar system, and travels around until it gets caught up in the gravitational field of some other system.

But the stellar objects in galaxies are generally far apart from each other, and we sit in an outer arm of a spiral galaxy where there isn’t all that much going on. So, it’s exceedingly improbable that a black hole would come by on just exactly the right curve to cause us trouble. We would also know of this long in advance because we would see the gravitational pull of the black hole acting on the outer planets.

If you enjoy my blog, please consider donating by using the button in the top right corner. Thanks!




Wednesday, May 15, 2019

Climate Change: There are no simple solutions

The Earth is warming. Human carbon-dioxide emissions are one of the major culprits. We have known this for a long time. But in the past two decades, evidence for global warming has become more noticeable on local levels, as with seasonal shifts, extreme weather events, declines in biodiversity and, depending on where you live, droughts. And it will get worse.

I would describe myself as risk-averse, future-oriented, and someone who worries easily. I don’t need to be convinced that we are not doing enough to mitigate the consequences of rising temperatures. Yet, I have become increasingly frustrated about the discussion of climate change in the media, which makes it look like the problem is to convince people that climate change is happening in the first place.

It is not. The problem is that we don’t know what to do about it, and even if we knew, we wouldn’t have the means to actually do it. And nothing whatsoever has changed about this since I learned of climate change in school, some time in the 1980s. Gluing yourself to a train will not create the policies and the institutions we would need to implement them.

A good example for this bizarre problem-denial is Greta Thunberg, here speaking to a crowd of about 10,000 people in Helsinki: “The climate crisis has already been solved. We already have all the facts and solutions. All we need to do is to wake up and change.”


I do not blame Greta Thunberg for being naïve. She’s a child, and she even admits to being naïve. When I was a teenager, I thought much the same, so who am I to judge her. But adults should know better than that. Yet, here we have Bill Nye who delivers his variant of “all we need to do is wake up”:

But the fact that climate change happens does not tell us what, if anything, to do about it. And scientists really should know better than to mix “is” with “ought.”

As I said, I am future-oriented and risk-averse. These are my personal values. You may not share them. Maybe you don’t give a shit about what’s going to happen 50 years from now, and if that’s your opinion, then that’s your opinion. Maybe you are willing to accept the risk that a steep temperature rise will result in famine, social unrest, and diseases that eradicate some billion people. Or maybe you even think that getting rid of some billion humans, mostly in the developing world, would not be a bad thing. I don’t share these opinions, but there is nothing factually wrong with them.

Economists have a long story to tell about our responsibility to the coming generations. How much we value it depends on what is called the “future discount rate”, which quantifies, basically, the relevance we assign to what will happen in the future. This evaluation usually focuses on measures like the Gross Domestic Product (GDP), and comes down to the question of how much GDP per year we should invest today to prevent declines of GDP in the future. (If you are familiar with the literature, this is the Stern-Nordhaus debate over carbon pricing.)

There are many problems with that kind of argument. For starters, the GDP itself doesn’t tell you all that much about the well-being of people. It’s also somewhat unclear where to get the future discount rate from. Economists have devised some methods to extract it from interest rates and the like. That is something you may or may not find reasonable. Also, we don’t know, just by discounting the future, how to factor in uncertainties about what is going to happen. For example, one line of argument is that what we really should do is not look for a solution that is economically optimal, but one that minimizes the risk of a major ecological instability, because then all bets are off.
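To see how much hangs on the choice of discount rate, here is a minimal sketch of exponential discounting. The two rates below are merely illustrative, picked to be roughly in the range that the Stern-Nordhaus debate is about:

```python
# Present value of a future cost under exponential discounting:
#   PV = FV / (1 + r)**t
# The rates are illustrative: a low rate (roughly Stern-like) versus a
# higher rate (roughly Nordhaus-like) applied to the same future damage.
def present_value(future_cost, rate, years):
    """Value today of a cost incurred `years` from now, at annual rate `rate`."""
    return future_cost / (1 + rate) ** years

damage = 100.0  # some future climate damage, in arbitrary units

print(present_value(damage, 0.014, 50))  # ~50: low rate, the future weighs heavily
print(present_value(damage, 0.043, 50))  # ~12: higher rate, the future is discounted away
```

The same damage, fifty years out, is worth four times as much today under the low rate as under the higher one, which is why the choice of discount rate dominates the policy conclusions.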

Then there is the question of what policies to pursue and how to implement them. A market-based solution, e.g. by putting a price on carbon, would be most likely to lead to an economically optimal strategy. The problem is, however, that this would necessitate an equilibrium readjustment of the global market which is unlikely to happen fast enough, even if we could get it going yesterday. And that’s leaving aside that equilibrium theory has its flaws, which is to say that economists aren’t exactly known for making great predictions.

Either way you turn it, resources that we spend today on limiting carbon-dioxide emissions are resources we cannot spend on something else, resources that will not go to education, research, social welfare, infrastructure. Oh, and they will also not go into that next larger particle collider.

 The world has two or maybe three decades of cheap fossil fuels left. Not using those makes our lives harder, regardless of how much we subsidize renewables. That, too, is a fact. Any sincere discussion about climate change should acknowledge it. It’s a difficult situation and there are no simple solutions.

Tuesday, May 14, 2019

Quantum Mechanics is wrong. There, I’ve said it.

[Image: needpix.com]


So, you have developed a new theory of quantum mechanics. That is, erm, nice. No, please don’t show it to me. I’m almost certainly too stupid to understand it. You see, I have only a PhD in physics. All that math has certainly screwed up my neural wiring. Yes, I am sorry. But I have a message for you from the depth of abstract math: We know that quantum mechanics is wrong.

Seriously, it’s wrong. It’s as wrong as Newtonian gravity is wrong, as hydrodynamics is wrong, and as spherical cows are wrong. Quantum mechanics is an approximation. It works well in some cases. It does not work well in other cases.

You see, in quantum mechanics we give quantum properties to particles. But we know that, strictly speaking, the interactions between these particles must also have quantum properties. If we give these interactions quantum properties, we call that “second quantization.” It is not used in quantum mechanics. Second quantization results in a larger mathematical framework called “quantum field theory”. The Standard Model of particle physics is a quantum field theory. Sometimes we use the word “quantum theory” to refer to both quantum mechanics and quantum field theory together.

Moving from quantum mechanics to quantum field theory is more than just a change of name. Quantum field theories inherit many properties from quantum mechanics: Entanglement, uncertainty, the measurement postulate. But they bring new insights – and also new difficulties.

The best known insight brought by quantum field theory is that particles can be created and destroyed, and that each particle has an anti-particle (though some particles are their own anti-particles). Another remarkable consequence of quantum field theory is that the strength of interactions between particles depends on the energy by which one probes the interaction. The strong nuclear force, it turns out, becomes weaker at high energies, an odd behavior that is known as “asymptotic freedom.”

Probably the best-known difficulty of quantum field theories is that many calculations result in infinity. Infinity, however, is not a very useful prediction. Such results therefore have to be dealt with by a procedure called “renormalization,” whose purpose is to suitably subtract infinity from infinity to get a finite remainder. No, there is nothing wrong with that. It works just fine, thank you.

Quantum field theories lead to other complications. For example, we know how to calculate what happens if two electrons bump into each other and create a bunch of new particles. This is called a “scattering event”. But we don’t know how to calculate what happens if three quarks stick together and form a proton. Well, we do know how to put such calculations on super-computers in an approximation called “lattice QCD”. But really we don’t have good mathematical tools to handle the case. At least not yet.

But let us come back to quantum mechanics. You can use this theory to make predictions for any experiment where the creation and destruction of particles does not play a role. This is the case for all your typical quantum optics experiments, Bell-type tests, quantum cryptography, quantum computing, and so on. It is not merely a matter of doing experiments at low energy, but it also depends on how sensitive you are to the corrections coming from quantum field theory.

So, yes, quantum mechanics is technically wrong. It’s only an approximation to the more complete framework of quantum field theory. But as the statistician George Box summed it up: “All models are wrong, but some are useful.” And whatever your misgivings are about quantum mechanics, there is no denying that it is useful.

Sunday, May 12, 2019

The trouble with Facebook and what it has in common with scientific publishing.

[In case you’d rather read than listen, a full transcript follows below.]


Today, I want to talk about Facebook. Yes, Facebook, the social media website, I’m sure you have heard of them.

Facebook currently gets a lot of media attention. And not in a good way. That’s because not only has Facebook collected and passed on user information without those users’ consent, it has also provided a platform for the organized spread of political misinformation, aka “fake news”.

I doubt you were surprised by this. It’s hardly a breakthrough insight that a near-monopoly on the filtering of information is bad for democracy. This is, after all, why we have freedom of the press written into the constitution: To prevent an information monopoly. And when it comes to the internet, this is a problem that scientists have written about for at least two decades.

Originally the worry of scientists, however, focused on search engines as news providers. This role, we now know, has been taken over by social media, but it’s the same problem: Some company’s algorithm comes to select what information users do and do not see prominently. And this way of selecting our information can dramatically affect our opinion.

A key paper on this problem is a 2003 article by three political scientists who coined the term “Googlearchy”. They wrote back then:

“Though no one expected that every page on the Web would receive an exactly equal share of attention, many have assumed that the Web would be dramatically more egalitarian in this regard than traditional media. Our empirical results, however, suggest enormous disparities in the number of links pointing to political sites in a given category. In each of the highly diverse political communities we study, a small number of heavily-linked sites receive more links than the rest of the sites combined, effectively dominating the community they are a part of […]

We introduce a new term to describe the organizational structure we find: “googlearchy” –  the rule of the most heavily linked. We ultimately conclude that the structure of the Web funnels users to only a few heavily-linked sites in each political category.”

We have now become so used to this “rule of the most heavily linked” that we have stopped even complaining about it, though maybe we should now call it the “rule of the most heavily liked.”

But what these political scientists did not discuss back then was that of course people would try to exploit these algorithms and then attempt to deliberately misinform others. So really the situation is worse now than they made it sound in 2003.

Why am I telling you this? Because it occurred to me recently that the problem with Facebook’s omnipotent algorithm is very similar to a problem we see with scientific publishing. In scientific publishing, we also have a widely used technique for filtering information that is causing trouble. In this case, we filter which publications or authors we judge as promising.

For this filtering, it has become common to look at the number of citations that a paper receives. And this does cause problems, because the number of citations may be entirely disconnected from the real world impact of a research direction. The only thing the number of citations really demonstrates is popularity. Citations are a measure that’s as disconnected from scientific relevance as the number of likes is from the truth value of an article on Facebook.

Of course the two situations are different in some ways. For example on social media there is little tradition of quoting sources. This has the effect that a lot of outlets copy news from each other, and that it is extra hard to check the accuracy of a statement. Another difference is that social media has a much faster turnover-rate than scientific publications. This means on social media people don’t have a lot of time to think before they pass on information. But in both cases we have a problem caused by the near monopoly of a single algorithm.

Now, when it comes to scientific publishing, we have an obvious solution. The problem both with the dominance of a few filtering algorithms and with the possibility of gaming comes from users being unable to customize the filtering algorithm. So with scientific publishing, just make it easier for scientists to use other ways to evaluate research works. This is the idea behind our website SciMeter.org.

The major reason that most scientists presently use the number of citations, or the number of publications, or the number of publications in journals with high impact factor, to decide what counts as “good science” is that these numbers are information they can easily access, while other numbers are not. It adds to this that when it comes to measures like the journal impact factor, no one really knows how it’s calculated.

Likewise, the problem with Facebook’s algorithm is that no one knows how it works, and it can’t be customized. If it were possible for users to customize what information they see, gaming would be much less of a problem. Well, needless to say, I am assuming here that the users’ customizations would remain private information.

You may object that most users wouldn’t want to deal with the details, but this isn’t really necessary. It is sufficient if a small group of people generates templates that users can then choose from.

Let me give you a concrete example. I use Facebook mostly to share and discuss science news and to stay in touch with people I have on my “Close friends” list. I don’t want political news from Facebook, I am not interested in the social lives of people I don’t know, and if I want entertainment, I look for that elsewhere.

However, other people use Facebook entirely differently. Some spend a lot of time with groups, use it to organize events, look for distraction, or, I don’t know, share cooking recipes, whatever. But right now, Facebook offers very few options to customize your news feed to suit your personal interests. The best you can do, really, is to sort people onto lists. But this is cumbersome and solves only some aspects of the sorting problem.

So, I think an easy way to solve at least some of the problems with Facebook would be to allow a third-party plug-in to sort your news feed. This would give users more control and also relieve Facebook of some responsibility.

Mark Zuckerberg once declared his motto clearly: “Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.” Well, maybe it’s time to break Facebook’s dominance over information filtering.

Update: Now with Italian subtitles. Click on “CC” in the YouTube tool bar to turn on subtitles. Switch language in settings/gear icon.

Friday, May 10, 2019

Admin note on invisible comments

I keep getting notifications from readers that something isn't working on my blog. Here is what's happening: With the new layout of the comment section, if the list exceeds 100 comments, then comments will by default not all load. In that case you have to click on "Load more..." below the comment box, see screen shot below. I have posted this response several times in the comments, but of course you'll only see this if you already know you have to click on "Load more..."



I also know that if you click on the link in the comment widget (side bar), this will not work if there are too many comments in one thread. I am sorry about this, but there is nothing I can do to change it. This blog is hosted by Google, and my options to customize the comment section are very limited. The comment widget is a third-party JavaScript that can't handle the more recent updates of the comment feature.

I have also noticed that sometimes the comment box doesn't appear in a reply-to-comment thread. In that case you have to scroll up to the comment you want to reply to and find the "reply" link. If you click on that, the box will appear. I have no idea what sense this makes. If anyone has suggestions for improvement other than that I should move this blog to a different provider, please let me know.

And while I am at it, let me repeat my plea that you please, please do not post links or email addresses. First, because such comments are likely to end up in the spam folder. Even if not, I will only approve such comments after I have had time to check that the website is legit. Since I normally don't have time to do that, your comment will end up in the moderation queue indefinitely. Exceptions are links to websites I can recognize immediately, e.g. the arXiv, scientific journals, major news pages, etc.

Wednesday, May 08, 2019

Measuring science the right way. Your way.

Today, I am happy to announce the first major update of our website SciMeter.org. This update brings us a big step closer to the original vision: A simple way to calculate what you, personally, think quantifies good research.

Evaluating scientific work, of course, requires in-depth studies of research publications. But in practice, we often need to make quick, quantitative comparisons, especially those of us who serve on committees or who must document the performance of their institutions (or both). These comparisons rely on metrics for scientific impact, that is, measures to quantify the relevance of research, both on the administrative level and on the individual level. Many scientists are rightfully skeptical of such attempts to measure their success, but quantitative evaluation is and will remain necessary.

However, metrics for scientific impact, while necessary, can negatively influence the behavior of researchers. If a high score on a certain metric counts as success, this creates an incentive for researchers to work towards increasing their score, rather than using their own judgement for good research. This problem of “perverse incentives” and “gaming measures” is widely known. It has been subject of countless talks and papers and manifestos, yet little has been done to solve the problem. With SciMeter we hope to move towards a solution.

A major reason that measures for scientific success can redirect research interests is that few measures are readily available. While the literature on bibliometrics and scientometrics contains hundreds of proposals, most researchers presently draw on only a handful of indicators that are easy to obtain: notably the number of citations, the number of publications, the Hirsch-index, and the number of papers in journals with a high impact factor. These measures have come to define what “good research” means simply because in practice we have no alternative measures. But such a narrow definition of success streamlines research strategies and agendas. And this brings the risk that scientific exploration becomes inefficient and stalls.
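For readers unfamiliar with these indicators, the Hirsch-index at least is easy to state: it is the largest number h such that an author has h papers with at least h citations each. A minimal sketch (the function is my own illustration, not SciMeter's code):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:  # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

# An author with papers cited 10, 8, 5, 4, and 3 times has h-index 4:
# four papers have at least 4 citations each, but not five with 5.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the result depends only on the citation counts in the database at hand, which is why the same author can get different h-indices from different services.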

With SciMeter, we want to work against this streamlining. SciMeter allows everyone to create their own measures to capture what they personally consider the best way of quantifying scientific impact. The self-created measures can then be used to evaluate individuals and to quickly sort lists, for example lists of applicants. Since your measures can always be adapted, this counteracts streamlining and makes gaming impossible.
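To make the idea of self-created measures concrete, here is a small sketch of the concept. All field names, weights, and numbers are hypothetical illustrations, not SciMeter's actual interface: a custom metric is, in the simplest case, a weighted combination of basic indicators, which can then be used to sort a list of authors.

```python
# Hypothetical author records; the field names are illustrative only.
authors = [
    {"name": "A", "citations": 900, "papers": 30, "h_index": 15},
    {"name": "B", "citations": 400, "papers": 80, "h_index": 12},
    {"name": "C", "citations": 600, "papers": 20, "h_index": 18},
]

def make_metric(weights):
    """Return a scoring function from a dict of indicator weights."""
    def score(author):
        return sum(w * author[key] for key, w in weights.items())
    return score

# One user might weight the h-index heavily, another raw citation counts.
my_metric = make_metric({"citations": 0.005, "h_index": 1.0})
ranked = sorted(authors, key=my_metric, reverse=True)
print([a["name"] for a in ranked])  # → ['C', 'A', 'B']
```

Because each user can choose (and change) their own weights, no single score is left for everyone to optimize, which is the point of the design.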

SciMeter is really not a website, it's a web-interface: it allows you to make your own analyses of publication data. Right now our database contains only papers from arXiv.org. That's not because we only care about physicists, but because we have to start somewhere! For this reason, I must caution you that it makes no sense to compare the absolute numbers from SciMeter to the absolute numbers from other services, like InSpire or Google. The analyses you can do with our website should be used only for relative comparisons within our interface.

For example, you will almost certainly find that your h-index is lower on SciMeter than on InSpire. Do not panic! This is simply because our citation count is incomplete.

Besides comparing authors with each other and sorting lists according to custom-designed metrics, you can also use our default lists. In addition to a list of all authors, we have pre-defined lists of authors by arXiv category (where we count the main category that an author publishes in), and lists of male and female authors (using the same gender identification that we also used for our check of Strumia's results).

The update now also lets you get a ten-year neural-net prediction for the h-index (that was the basis of our recent paper, arxiv version here). You can find this feature among the “apps”. We have also made some minor improvements to the algorithm for the keyword clouds. And on the app-page you now also find a search for “similar authors”, or for authors who have published on any combination of keywords. You may find this app handy if you are looking for people to invite to conferences or, if you are a science writer, for someone to comment.

SciMeter was developed by Tom Price and Tobias Mistele, and has so far been financed by the Foundational Questions Institute. I also want to thank the various test-users who helped us troubleshoot an early version and contributed significantly to the user-friendliness of the website.

In the video below I briefly show you how the custom metrics work. I hope you enjoy the update!

Friday, May 03, 2019

Graham Farmelo’s interview of Edward Witten. Transcript.

[I’ve meant for some while to try an automatic transcription software, and Graham Farmelo’s interview of Edward Witten (mentioned by Peter Woit) seemed a good occasion. I used an app called “Trint” which seems to work okay. But both the software and I have trouble with Farmelo’s British accent and with Witten’s mumbling. I have marked the places that I didn’t understand with [xxx]. Please leave me a comment in case you can figure out what’s being said. Also notify me of any blunders that I might have missed. Thanks!]


GF [00:00:06] A mind of the brilliance of Edward Witten’s comes along in mathematical physics about once every 50 years if we’re lucky. Since the late 1970s he’s been preeminent among the physicists who are trying to understand the underlying order of the universe. Or, as you might say, trying to discover the most fundamental equations of physics. More than that, by studying the mathematical qualities of nature, Witten became remarkably influential in pure mathematics: the only physicist ever to have won the coveted Fields Medal, which has much the same stature in mathematics as a Nobel Prize has in physics.

GF [00:00:46] My name is Graham Farmelo, author of  “The universe speaks in numbers.” Witten is a central figure in my book and he’s been helpful to me. Though he’s a reluctant interviewee so I was pleased when he agreed to talk with me last August about some aspects of his career and the relationship between mathematics and physics. He was in a relaxed mood sitting on a sofa in his office at the Institute for Advanced Study in Princeton wearing his tennis clothes. As usual, he speaks quietly so you’ll have to listen hard.

GF [00:01:20] He uses quite a few technical terms too. But if you’re not familiar with them I suggest that you just let them wash over you. The key thing is to get a sense of Witten’s thinking about the big picture. He is worth it.

GF [00:01:32] He gives us several illuminating insights into how he became interested in state-of-the-art mathematics while remaining a physicist to his fingertips. I began by asking him if he’d always been interested in mathematics and physics.

EW [00:01:47] When I was a kid I was very interested in astronomy. It was the period of the space race and everybody was interested in space. Then, when I was a little older, I was exposed to calculus by my father. And for a while I was very interested in math.

GF [00:02:02] You said for a while, so did that lapse?

EW [00:02:04] Yes, it did lapse for a few years, and the reason it lapsed, I think, was that after being exposed to calculus at the age of eleven it actually was quite a while before I was shown anything that was really more advanced. So I wasn't really aware that there was much more interesting more advanced math. Probably not the only reason, but certainly one reason that my interest lapsed.

GF [00:02:22] Yeah. Were you ever interested in any other subjects? I mean because you know you came on to study history and things like that. Did that really interest you comparably to math and physics?

EW [00:02:31] I guess there was a period when I imagined doing journalism or history or something, but at about the age of 21 or 22 I realized that it wasn't going to work out well in my case.

GF [00:02:42] After studying modern languages he worked on George McGovern’s ill-fated presidential campaign and even studied economics for one semester before he finally turned to physics.

GF [00:02:53] Apparently he showed up at Princeton University wanting to do a Ph.D. in theoretical physics and they wisely took him on after he made short work of some preliminary exams. Boy, did he learn quickly. One of the instructors tasked with teaching him in the lab told me that within three weeks Witten’s questions on the experiments went from basic to brilliant to Nobel level. As a postdoc at Harvard, Witten became acquainted with several of the theorist pioneers of the Standard Model, including Steven Weinberg, Shelly Glashow, Howard Georgi, and Sidney Coleman, who helped interest the young Witten in the mathematics of these new theories.

EW [00:03:33] The physicists I learned from most during those years were definitely Weinberg, Glashow, Georgi, and Coleman. And they were completely different. So Georgi and Glashow were doing model building, basically weak interaction model building, elaborations on the Standard Model. I found it fascinating but it was a little bit hard to find an entree there. If the world had been a little bit different, I might have made my career doing things like they were doing.

GF [00:04:01] Wow. This was the first time I’d heard Witten say that he was at first expecting to be like most other theorists and take his inspiration from the results of experiments, building so-called models of the real world. What, I wondered, led him to change direction and become so mathematical?

EW [00:04:19] Let me provide a little background for listeners. Up to and including the time I was a graduate student, for 20, 25 years, there had been constant waves of new discoveries in elementary particle physics: strange particles, muons, hadronic resonances, parity violation, CP violation, scaling in deep inelastic scattering, the charm particle, and I’m forgetting a whole bunch. But that’s enough to give you the idea. So that was over a period of over 20 years. So even after a lot of the big discoveries, that was one every three years. Now, if experimental surprises and discoveries had continued like that, which at the time I think is what would have happened because it had been going on for a quarter century, then I would have expected to be involved in model building, or grappling with it, like colleagues such as Georgi and Glashow all were doing. However, it turned out that this period of constant surprise and turmoil was ending just while I was a graduate student, and therefore later on those were not successful directions.

GF [00:05:20] Do you remember being disappointed by that in any sense?

EW [00:05:23] Of course I was, you never stop being disappointed.

GF [00:05:27] Oh dear, oh it’s a hard life.

GF [00:05:31] You were disappointed by the drying up, so to speak, of the...

EW [00:05:33] There have been important experimental discoveries since then. But the pace has not been quite the same. Although they’ve been very important, they’ve been a little bit more abstract in what they teach us, and definitely they’ve offered fewer opportunities for model building than was the case in the 60s and 70s. I’d like to just tell you a word or two about my interaction with the other physicists. There was Steve Weinberg, and here is what I remember best from Weinberg. He was one of the pioneers of a subject called current algebra, which was an important part of understanding the nuclear force. But he obviously thought most other physicists didn't understand it properly, and I was one of those. So whenever current algebra was mentioned at a seminar or a discussion meeting he would always give a short little speech explaining his understanding of it. In my case, after hearing those speeches eight to ten times [laughter], I eventually understood what Steve was telling us.

EW [00:06:28] Then there was Sidney Coleman. First of all, Sidney was the only one who was interested in strong coupling behavior of quantum field theories, which is what I’d become interested in as a graduate student with encouragement from my advisor David Gross. So, he was really the only one I could interact with about that. Others regarded strong coupling as a black box. So, maybe for your listeners, I should explain that if you’re a student in physics they teach you what to do when quantum effects are small, but no one tells you what to do when quantum effects are big; there’s no general answer. It’s a smorgasbord of different methods that work for different problems, and a lot of problems are intractable. So, I'd become interested in that as a student, but I was mostly beating my head against a brick wall because it is usually intractable, and Sidney was the only one of the professors at Harvard interested in such matters. So, apart from interacting with him about that, he also exposed me to a number of mathematical topics I wouldn’t have known about otherwise but that eventually were important in my work, which most physicists didn’t know about. And certainly I didn’t know about.

GF [00:07:27] Yeah, can I ask were you consciously interested in advanced pure math at that time?

EW [00:07:32] Definitely not

GF [00:07:32] You were not?

EW [00:07:32] No, most definitely not. I got dragged into math gradually, because, you see, the Standard Model had been discovered, so the problems in physics were not exactly the same as they had been before. But there were new problems that were opened up by the Standard Model. For one thing, there is new math that came into understanding the Standard Model. Just when I was finishing graduate school, more or less, Polyakov and others introduced the Yang-Mills instanton, which has proved to be important in understanding physics. It’s also had a lot of mathematical applications.

GF [00:08:02] You can think of instantons as fleeting events that occur in space and time on the subatomic scale. These events are predicted by the theories of the subatomic world known as gauge theories. A key moment in this story is Witten’s first meeting with the great mathematician Michael Atiyah at the Massachusetts Institute of Technology. They would become the leaders of the trend towards a more mathematical approach to our understanding of the world.

EW [00:08:32] So Polyakov and others had discovered the Yang-Mills instanton, and it was important in physics and proved to have many other applications. And then Atiyah was one of the mathematicians who discovered amazing mathematical methods that could be used to solve the instanton equations. So he was lecturing about that when he visited Cambridge, I think in the spring of 1977, but I could be off by a few months. And I was extremely interested, and so we talked about it a lot. I probably made more of an effort to understand the math involved than most of the other physicists did. Anyway, this interaction surely led to my learning all kinds of math I’d never heard of before: complex manifolds, sheaf cohomology groups.

GF [00:09:16] This was news to you at that time.

EW [00:09:18] Definitely. So I might tell you at an even more basic level the Atiyah-Singer index theorem had been news to me a few months earlier when I heard about it from Sidney Coleman.

GF [00:09:28] The index theorem, first proved by Michael Atiyah and his friend Isidore Singer, connects two branches of mathematics that had seemed unconnected: calculus, the mathematics of changing quantities, and topology, the study of the properties of objects that don’t change when they’re stretched, twisted, or deformed in some way. Topology is now central to our understanding of fundamental physics.

EW [00:09:51] Like other physics graduate students of the period, I had no inkling of any 20th century math, really. So, I’d never heard of the names Atiyah and Singer, or of the concept of the index, or of the index theorem, until Albert Schwarz showed that it was relevant to understanding instantons. And even then that paper didn’t make an immediate splash. If Coleman hadn’t pointed it out, I’m not sure how long it would have been before I knew about it. And then there was progress in understanding the instanton equations by Atiyah among others. The first actually was Richard Ward, Penrose’s doctoral student. So, I got interested in that, but I was interested in a sense in a narrow way, which is: what good would it be in physics? And I learned the math, or some of the math, that they were using. But I was a little skeptical about the applicability for physics, and I wasn’t really wrong, because the original program of Polyakov didn’t quite work out. The details of the instanton equations that were beautifully elucidated by the mathematicians were not in practice that helpful for things you can actually do as a physicist. So, to sort of summarize what happened in the long run, Atiyah’s work and that of his colleagues made me learn a lot of math I’d never heard of before, which turned out to be very important later, but not per se for the original reasons.

GF [00:11:10] When did you start to become convinced that math was really going to be interesting?

EW [00:11:14] Well, that gradually happened in the 1980s, I guess. So, for example, one early episode, which was in 1981 or '82: I was trying to understand the properties of what's called the vacuum, the quantum ground state, in supersymmetric field theories, and it really had some behavior that was hard to explain using standard physics ideas. And since I couldn't understand it, I kept looking at simpler and simpler models, and they all had the same puzzle. So finally I got to what seemed like the simplest possible model in which you could ask the question, and it still had a puzzling behavior. But at a certain point, I think when I was in a swimming pool in Aspen, Colorado, I remembered that Raoul Bott, and actually Atiyah also, had given some lectures to physicists a couple of years earlier in Cargèse, and they had tried to explain something called Morse theory to us. I’m sure that, like me, many other physicists had never heard of Morse theory or were familiar with any of the questions it addresses.

GF [00:12:11] Would you like to say what Morse theory is roughly speaking?

EW [00:12:14] Well, if you’ve got a rubber ball floating in space, it’s got a lowest point, where the elevation is lowest, and it’s got a highest point, where the elevation is highest. So it’s got a maximum and a minimum. If you have a more complicated surface, like for example a rubber inner tube, it’ll have saddle points of the height function as well as a maximum and minimum. And Morse theory relates the maxima and minima and the saddle points of a function, such as the height function, to the topology of the surface or topological manifold on which the function is defined.
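One concrete consequence of what Witten describes can be checked by hand: for a closed surface, the count (minima) - (saddles) + (maxima) of a height function of this kind equals the Euler characteristic of the surface, which is 2 for a sphere and 0 for an inner tube (torus). A toy check, my own illustration rather than anything from the interview:

```python
def euler_characteristic(minima, saddles, maxima):
    """Alternating sum over critical points of a Morse function."""
    return minima - saddles + maxima

# Rubber ball (sphere): one lowest point, one highest point, no saddles.
print(euler_characteristic(1, 0, 1))  # → 2, the Euler characteristic of a sphere

# Rubber inner tube (torus), standing upright: the height function has one
# minimum, one maximum, and two saddle points.
print(euler_characteristic(1, 2, 1))  # → 0, the Euler characteristic of a torus
```

The point of Morse theory is that this alternating sum is a topological invariant: however you deform the surface, the critical points of the height function rearrange so that the sum stays the same.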

GF [00:12:48] Did you ever see that paper by Maxwell, the one he spoke about in 1870?

EW [00:12:52] I’ve not read that.

GF [00:12:53] Oh, I’ll show it to you later. It’s “On Hills and Dales,” he gave it in Liverpool, a very thinly attended talk, erm, anyway.

EW [00:13:01] So was he in fact describing the two-dimensional version of Morse theory?

GF [00:13:04] I can’t go into detail, but the historians of Morse theory often refer to that. At a public meeting, incidentally, in Liverpool.

EW [00:13:13] Actually, now you mention it, I had heard that the “Hills and Dales” talk by Maxwell had something to do with the beginnings of topology. And topology was just barely beginning in roughly that period.

GF [00:13:23] But this was useful in physics. Your Aspen swimming pool revelation...

EW [00:13:28] Well, it shed a little bit of light on the vacuum state in supersymmetric quantum theories. So anyway, I developed that further. So, you know, at first that seemed exceptional, but eventually there were too many of these exceptions to completely ignore.

GF [00:13:42] Am I right in saying, not to put words into your mouth, that it was the advent of string theory, post Michael Green and John Schwarz, where these things started going front and center, is that fair?

EW [00:13:50] After... Following the first superstring revolution, as people call it, which came to fruition in 1984 with the work of Green and Schwarz on the anomalies, after that the sort of math that Atiyah and others had used for the instanton equations was suddenly actually useful. Because to understand string theory, complex manifolds and index theory and sheaf cohomology groups, all those funny things were actually useful in doing basic things like constructing models of the elementary particles in string theory. I should give a slightly better explanation. In physics there are the forces that we see for the elementary particles, that means basically everything except gravity. Then there's gravity, which is so weak that we only see it for macroscopic masses like the earth or the sun. Now, we describe gravity by Einstein's theory, and then we describe the rest of it by quantum field theory. It's difficult to combine the two together. Before 1984 you couldn't even make halfway reasonable models for elementary particles that included all the forces together with gravity. The advance that Green and Schwarz made with anomaly cancellation in 1984 made that possible. But to make such models you needed to use a lot of the math that physicists had not used previously but which was introduced by Atiyah and others when they solved the instanton equations: you had to use complex manifolds, sheaf cohomology groups, and things that were totally alien to the education of a physics graduate student back in the days when I'd been a student. So those things were useful even at a basic level in making a model of the elementary particles with gravity. And if you wanted to understand it more deeply you ended up using still more math. After string theory was developed enough that you could use it in an interesting way to make models of particle physics, it was clear that a lot of previously unfamiliar math was important.
I speak loosely when I say previously unfamiliar, because obviously it was familiar to some people. First of all, to the mathematicians. Secondly, in some areas: Penrose, for example, had used some of it in his twistor theory. But broadly speaking it was unfamiliar to most physicists.

GF [00:15:46] So math has gone very well in physics, physics has been very important in mathematics, and very important physicists and mathematicians are working harmoniously alongside each other. You go back to Leibniz, who used to talk about the pre-established harmony between math and physics. That was one of Einstein's favorite phrases. Is that something you regard as a fact of life, or is it something you would regard as possibly being explained one day, or will it never be explained? Do you have any comment at all on that relationship?

EW [00:16:09] Well, the intimate tie between math and physics seems to be a fact of life. I can't imagine what it would mean to explain it. The world only seems to be based on theories that involve interesting math and a lot of interesting math is at least partly inspired by the role that it plays in physics. Not all of course.

GF [00:16:25] But does it inspire you when you see a piece of math that's very relevant to physics and vice versa when you're helping mathematicians. Does that motivate you in some way to think you're on the right track.

EW [00:16:35] Well, when something turns out to be beautiful, that does encourage you to believe that it's on the right track.

GF [00:16:39] Classic Dirac. But he took it, as he put it, to almost a religion. But I sense you are a little bit more skeptical, if that's the right word, or hard-nosed about it, I don't know.

EW [00:16:51] Having discovered the Dirac equation, Dirac was entitled to carry its use to extremes, to put it that way.

GF [00:16:58] Witten has long been a leading pioneer of the string framework which seeks to give a unified account of all the fundamental forces based on quantum mechanics and special relativity. It describes the basic entities of nature in terms of tiny pieces of string.

GF [00:17:14] Going back to string theory: do you see that as one among several candidates, or the preeminent candidate, or what? I mean, what do you see as the status of that framework in the landscape of mathematical physics?

EW [00:17:24] I’d say that string/M-theory is the only really interesting direction we have for going beyond the established framework of physics, by which I mean quantum field theory at the quantum level and classical general relativity at the macroscopic scale. So where we’ve made progress, that’s been in the string/M-theory framework, where a lot of interesting things have been discovered. I’d say that there are a lot of interesting things we don’t understand at all.

GF [00:17:48] But you’ve never been tempted down the other routes? The other options are not...

EW [00:17:52] I’m not even sure what you would mean by other routes.

GF [00:17:54] Loop quantum gravity?

EW [00:17:56] Those are just words. There aren’t any other routes.

GF [00:17:58] Okay, all right, fair enough.

GF [00:18:01]  So there we have it. The preternaturally cautious Witten says that if we want to discover a unified theory of all the fundamental forces, string theory is the only interesting way forward that’s arisen.

GF [00:18:17] Where we are now strikes me as being quite an unusual time in particle physics, because so many of us were looking forward to the Large Hadron Collider, with huge energy available, and finding the Higgs boson and maybe supersymmetry. And yet it seems that we have gotten the Higgs particle just as we were hoping and expecting, but nothing else that’s really stimulating. What are your views on where we are now?

EW [00:18:39] My generation grew up with a belief, a very very strong belief, which by the way was drummed into us by Steven Weinberg and by others: that when physics reached the energy scale at which you can understand the weak interactions, you would not only discover the mechanism of electroweak symmetry breaking, but you’d learn what fixes its energy scale as being relatively low compared to the scale of gravity. That’s what ultimately makes gravity so weak in ordinary terms. So, it came as a big surprise that we reached the energy scale to study the W and the Z and even the Higgs particle without finding a bigger mechanism behind it. That’s an extremely shocking development in the context of the thinking that I grew up with.

EW [00:19:22] There is another shock, which also occurred during that 40-year period, and which possibly should be comparable. This is the discovery of the acceleration of the expansion of the universe. For decades physicists assumed that, because of the gravitational attraction of matter, the expansion of the universe would be slowing down, and tried to measure it. It turned out that the expansion is actually speeding up. We don't know this for sure, but it seems quite likely that it results from the effects of Einstein's cosmological constant, which is incredibly small but non-zero. The two things are the very very small but non-zero cosmological constant and the scale of weak interactions, the scale of elementary particle masses, which in human terms can seem like a lot of energy but is very small compared to other energies in physics. The two puzzles are analogous, and they're both extremely bothersome. These two puzzles, although primarily the one about gravity, which was discovered first, are perhaps the main motivation for discussions of a cosmic landscape of vacua. Which is an idea that used to make me extremely uncomfortable and unhappy, I guess because of the challenge it poses to trying to understand the universe, and the possibly unfortunate implications for our distant descendants tens of billions of years from now. I guess I ultimately made my peace with it, recognizing that the universe hadn't been created for our convenience.

GF [00:20:43] So you’ve come to terms with it.

EW [00:20:45] I've come to terms with the landscape idea, in the sense of not being upset about it, as I was for many years.

GF [00:20:49] Really upset?

EW [00:20:50] I still would prefer to have a different explanation but it doesn't upset me personally to the extent it used to.

GF [00:20:56] So, just to conclude, what would you say the principal challenges are now for people looking at fundamental physics?

EW [00:21:01] I think it's quite possible that new observations, either in astronomy or at accelerators, will turn up new and more down-to-earth challenges. But with what we have now, and also with my own personal inclinations, it's hard to avoid answering in terms of cosmic challenges. I actually believe that string/M-theory is on the right track toward a deeper explanation. But at a very fundamental level it's not well understood. And I'm not even confident that we have a good concept of what sort of thing is missing or where to find it. The reason I'm not is that, in hindsight, it's clear that the view we might have given in the 1980s of what was missing was too narrow. Instead of discovering what we thought was missing, we instead broadened the picture in the 90s in unexpected directions. And having lived through that, I feel it might happen again.

EW [00:21:49] To give you a slightly less cosmic answer: if you ask me where I think is the most likely direction for another major theoretical upheaval, like happened in the 80s and then again in the 90s, I've come to believe that the whole it-from-qubit stuff, the relation between geometry and entanglement, is the most interesting direction.

GF [00:22:12] “It from bit”: that was a phrase coined by the late American theoretician John Wheeler, who guessed that the stuff of nature, the “it”, might ultimately be built from bits of information. Perhaps the theory of information is showing us the best way forward in fundamental physics. Witten is usually wary of making strong pronouncements about the future of his subject. So I was struck by his interest in this line of inquiry, now extremely popular.

EW [00:22:39] I feel that if in my active career there will be another real upheaval that's where it's most likely to be coming [xxx]

EW [00:22:47] I had a sense, both in the early 80s and in the early 90s, a couple of years in advance of the big upheavals, where they were most likely to come from, and those two times did turn out to be right. Then for a long long time I had no idea where another upheaval might come from. But in the last few years I've become convinced that it's most likely to be the it-from-qubit stuff, of which I have not been a pioneer. I was not one of the first to reach the conclusion, or the suspicion, that I'm telling you right now. But anyway, it's the view I've come to.

GF [00:23:20] There's a famous book about the night thoughts of a quantum physicist. Are there night thoughts of a string theorist, where you have a wonderful theory that's developing but, you know, you're unable to test it? Does that ever bother you?

EW [00:23:31] Of course it bothers us, but we have to live with our existential condition. But let's backtrack 34 years. So, in the early 80s there were a lot of hints that something important was happening in string theory, but once Green and Schwarz discovered the anomaly cancellation, it became possible to make models of elementary particle physics unified with gravity. From then on I thought the direction was clear. But some senior physicists rejected it completely, on the grounds that it would supposedly be untestable, or even that, if it were correct, it would be too hard to understand. My view at the time was that when we reached the energies of the W, Z and the Higgs particle, we'd get all kinds of fantastic new clues.

EW [00:24:11] So, I found it very very surprising that my colleagues would be so convinced that you wouldn't be able to get important clues that would shed light on the validity of a fundamental new theory that might in fact be valid. Now, if you analyze that 34 years later, I'm tempted to say we were both a little bit wrong. So, the scale of clues that I thought would materialize from accelerators has not come. In fact, the most important clue possibly is that we've confirmed the standard model without getting what we fully expected would come with it. And as I told you earlier, that might be a clue concerning the landscape. I think the flaw in the thinking of the critics, though, is that, while it's a shame that the period of incredible turmoil and constant experiment and discovery that existed until roughly when I started graduate school hasn't continued, I think that the progress which has been made in physics since 1984 is much greater than it would have been if the naysayers had been heeded and string theory hadn't been done in that period.

GF [00:25:11] And it's had this bonus of benefiting mathematics as well.

EW [00:25:14] Mathematics, and by now even other areas of physics, because for example new ideas about black hole thermodynamics have influenced areas of condensed metaphysics*, even the study of quantum phase transitions, quantum chaos, and really other areas.

GF [00:25:31] Well let's hope we all live to see some revolutionary triumph that was completely unexpected that's the best one of all. Edward thank you very much indeed.

EW [00:25:38] Sure thing.

GF [00:25:43] I’m always struck by the precision with which Edward expresses himself and by his avoidance of fuzzy philosophical talk. He's plainly fascinated by the closeness of the relationship between fundamental physics and pure mathematics. He isn't prepared to go further than to say that their relationship is a fact of life. Yet no one has done more to demonstrate that not only is mathematics unreasonably effective in physics, physics is unreasonably effective in mathematics.

GF [00:26:15] This, Witten said, makes sense only if our modern theories are on the right track. One last point. Amazingly, Witten is sometimes underestimated by physicists who characterize him as a mathematician, someone who has only a passing interest in physics. This is quite wrong. When I talked with the great theoretician Steven Weinberg, he told me of his awe at Witten's physical intuition, and elsewhere said that Witten has "got more mathematical muscles in his head than I like to think about." You can find out more about Witten and his work in my book "The Universe Speaks in Numbers."


--

* Condensed matter physics. I am sure he says condensed matter physics. But really I think condensed metaphysics fits better.


Thursday, May 02, 2019

How to live without free will

Lego sculpture by Nathan Sawaya. [Image Source]
It’s not easy, getting a PhD in physics. Not only must you learn a lot, but some of what you learn will shake your sense of self.

Physics deals with the most fundamental laws of nature, those from which everything else derives. These laws are, to our best current knowledge, differential equations. Given those equations and the configuration of a system at one particular time, you can calculate what happens at all other times.

That is as far as the universe without quantum mechanics is concerned. Add quantum mechanics, and you introduce a random element into some events. Importantly, this randomness in quantum mechanics is irreducible: it is not due to a lack of information. In quantum mechanics, some things that happen are just not determined, and nothing you or I or anyone can do will determine them.
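To illustrate the determinism of the classical laws, here is a toy sketch (nothing beyond standard Python, and not meant as a serious physics simulation): integrating a harmonic oscillator's equation of motion from fixed initial conditions yields the same trajectory every single time.

```python
# Toy illustration: classical evolution is deterministic.
# Integrate a harmonic oscillator x'' = -x with simple Euler steps.

def evolve(x0, v0, dt=0.001, steps=10000):
    """Return the final (position, velocity) given the initial conditions."""
    x, v = x0, v0
    for _ in range(steps):
        a = -x          # acceleration from the force law F = -kx (k = m = 1)
        x += v * dt
        v += a * dt
    return x, v

# Identical initial data always produce the identical future state:
run1 = evolve(1.0, 0.0)
run2 = evolve(1.0, 0.0)
assert run1 == run2
```

Given the state at one time, the equations fix the state at all other times; there is no step in this loop where anything gets to "choose".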

Taken together, this means that the part of your future which is not already determined is due to random chance. It therefore makes no sense to say that humans have free will.

I think I here spell out only the obvious, and use a notion of free will that most people would agree on. You have free will if your decisions select one of several possible futures. But there is no place for such a selection in the laws of nature that we know, laws that we have confirmed to high accuracy. Instead, whatever is about to happen was already determined at the big bang – up to those random flukes that come from quantum mechanics.

Now, some people try to wiggle out of this conclusion by defining free will differently, for example by noting that no one can in practice predict your future behavior (at least not currently). One can do such redefinitions, of course, but this is merely verbal gymnastics. The future is still fixed up to occasional chance events.

Others try to interpret quantum randomness as a sign of free will, but this is in conflict with evidence: quantum processes are not influenced by conscious thought. Chaos is deterministic, so it doesn’t help either. Gödel’s incompleteness theorem, remarkable as it is, has no relevance for natural laws.

The most common form of denial that I encounter is to insist that reductionism must be wrong. But we have countless experiments documenting that humans are made of particles, and that these particles obey our equations. This means that humans, too, as collections of those particles, obey these equations. If you try to make room for free will by claiming that humans obey other equations (or maybe no equations at all), you are implicitly claiming that particle physics is wrong. And in this case, sorry, I cannot take you seriously.

These are the typical objections that I hear, and none of them makes much sense.

I have had this discussion many times. Many people find it hard to comprehend that I do not believe in free will. And any such debate will, inevitably, be accompanied by the joke that the outcome of the argument was determined already, haha, aren’t you so original.

I have come to the conclusion that a large fraction of people are cognitively unable to question the existence of free will, and there is no argument that can change their mind. Therefore, the purpose of this blogpost is not to convince those who are resistant to rational arguments. The purpose is to help those who understand the situation but have trouble making sense of it, as I once did. The following shifts in perspective may help you without the need to resort to denial:

1. You never had free will.

It’s not like your free will suddenly evaporated when you learned the Euler-Lagrange equations. Your brain still functions the same way as before. So keep on doing what you have been doing. To first approximation that will work fine: Free will is a stubbornly persistent illusion, just use it and don’t worry about it being an illusion.

2. Your story hasn’t yet been told.

Free will or not, you have a place in history. Whether yours will be a happy story or a sad story, whether your research will ignite technological progress or remain a side-note in obscure journals, whether you will be remembered or forgotten – we don’t yet know. Instead of thinking of yourself as selecting a possible future, try to understand your role, and remain curious about what’s to come.

3. Input matters.

You are here to gather information, process it, and come to decisions that may, or may not, result in actions. Your actions, and the information you share, will then affect the decisions and actions of others. These decisions are determined by the structure of your brain and the information you obtain. Rather than despairing over the impossibility of changing either, decide to be more careful about which information you seek out, analyze, and pass on. Instead of thinking about influencing the future, ask yourself what you have learned, eg, from reading this. You may not have free will, but you still make decisions. You cannot not make decisions. You may as well be smart about it.

4. Understand yourself.

No one presently knows exactly what consciousness is or what it is good for, but we know that parts of it are self-monitoring, attentional focus, and planning ahead. A lot of the processes in your brain are not conscious, presumably because that would be computationally inefficient. Unconscious processes, however, can affect your conscious decisions. If you want to make good decisions, you must understand not only the relevance of input, but also how your own brain works. Instead of thinking that your efforts are futile, identify your goals and the strategies you have for working towards them. You are monitoring the monitor, if you wish.

Sunday, April 28, 2019

My name is Cassandra [I've been singing again]

Here is what I did on Easter. For this video I recruited my mom and my younger brother to hold the camera. Not a masterpiece of camera work, but definitely better than my green screen.

Thursday, April 25, 2019

Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?

[image: theskillsfarm.com]
The task of scientists is to find useful descriptions of our observations. By useful I mean that the descriptions are either predictive or explain data already collected. An explanation is anything that is simpler than just storing the data itself.
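The idea that an explanation compresses the data can be made concrete in a toy way. The sketch below is not a rigorous minimum-description-length analysis, just an illustration: for noiseless data following a simple rule, storing the rule takes far less space than storing every observation.

```python
# Toy illustration of "explanation = compression" (not a rigorous MDL analysis).
import json

# 1000 noiseless "observations" that happen to follow y = 2x + 1
data = [(x, 2 * x + 1) for x in range(1000)]

raw_bytes = len(json.dumps(data).encode())    # store every single data point
model_bytes = len(json.dumps({"slope": 2, "intercept": 1}).encode())  # store the rule

# The model describes the same data in far less space:
assert model_bytes < raw_bytes
```

In this sense "y = 2x + 1" explains the data: it is a description shorter than the data itself, and it additionally predicts observations not yet made.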

A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree with that. But troubles start when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:

1. How long should it take to make a falsifiable prediction (or postdiction) from a hypothesis?

If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work.

My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever because falsifiable predictions become an inconvenient career risk.

2. How practical should a falsification be?

Some hypotheses are falsifiable in principle, but not falsifiable in practice. Even in practice, testing them might take so long that for all practical purposes they’re unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still collect data when the heat death of the universe chokes your ambitious research agenda.

Personally, I think predictions for observations that are not presently measurable are worthwhile because you never know what future technology will enable. However, it makes no sense to work out details of futuristic detectors. This belongs in the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.

3. What even counts as a hypothesis?

In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, eg symmetries or functional relations. But neither theories nor principles by themselves lead to predictions.

To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.

In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.
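To see how a Bell-type test rules out a whole class of models at once, one can compare the classical CHSH bound of 2 with the quantum value for singlet-state correlations, E(a, b) = −cos(a − b) (a standard textbook result; the short check below is just arithmetic, not anything specific to this post):

```python
# Toy check of the CHSH inequality against quantum singlet-state correlations.
import math

def E(a, b):
    """Quantum correlation of spin measurements at angles a, b on a singlet state."""
    return -math.cos(a - b)

# Standard CHSH measurement angles (in radians)
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Every local hidden-variable model obeys S <= 2; quantum mechanics gives 2*sqrt(2).
assert S > 2
assert abs(S - 2 * math.sqrt(2)) < 1e-9
```

The bound S ≤ 2 holds for any local hidden-variable model whatsoever, which is what makes an observed violation a "model-independent" result: it excludes the entire class without committing to any particular member.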

Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, eg, by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories.

This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.

4. Falsifiability is necessary but not sufficient.

A lot of hypotheses are falsifiable but just plain nonsense. Arguing that a hypothesis must be science merely because you can test it is typical crackpot thinking. I previously wrote about this here.

5. Not all aspects of a hypothesis must be falsifiable.

It can happen that a hypothesis which makes some falsifiable predictions leads to unanswerable questions. An often-named example is that certain models of eternal inflation seem to imply that, besides our own universe, an infinite number of other universes exist. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics: if you take the theory at face value, the question of what a particle does before you measure it is not answerable.

There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong in scientific research. Scientists should leave such topics to philosophers or priests.

This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.