
Thursday, May 30, 2019

Quantum mechanics: Still mysterious after all these years

Last week I was in Barcelona, Spain, where I visited the Center for Contemporary Culture (CCCB). I didn’t know until I arrived that they currently have an exhibition on quantum mechanics. I found the exhibition very interesting, and – thinking you would find it interesting too! – compiled the video below.

Since I didn't have my video camera with me, it is made from footage provided by the CCCB and some recordings that a friendly lady from Spanish TV made on her phone. Enjoy!



Update May 31st: Now with Italian and German subtitles. Click on CC in the YouTube toolbar. Choose the language via the settings/gear icon.

Tuesday, May 28, 2019

Capitalism is good for you

Most economists I know started out as physicists. Being a physicist myself of course means that the sample is biased, but still it serves to demonstrate the closeness of the two subjects.

The emergence of market economies in human society is almost universal. Because markets are non-centralized, they can, and will, spontaneously arise. As of today, capitalism is the best mechanism we know to optimize the distribution of resources. We use it for one simple reason: It works.

A physicist cannot help but see how similar the problem of distributing resources is to optimization problems in many-body systems, to equilibrium processes, to self-organized criticality. I know a lot of people loathe the idea that humans are just nodes in a network, tasked to exchange bits of information. But to first approximation that’s what we are.

I am not a free market enthusiast. Free markets work properly only if both consumers and producers rationally evaluate all available information, for example about the societal and environmental impacts of purchasing a product. This is a cognitive task we simply cannot, in practice, perform.

Therefore, while the theoretically optimal solution would be that we act perfectly rationally and rely exclusively on markets, in reality we use political systems as a shortcut. Laws and regulations result in market inefficiencies, but they approximately take into account values that our sloppy purchase decisions neglect.

So far, so clear, or at least that’s what I thought. In the past months, however, I have repeatedly come across videos and opinion pieces that claim we must overthrow capitalism to save the world. Some examples:

These articles spread some misinformation that I want to briefly sort out. Even if my little blog touches the lives of only a few people each day, a drop in the ocean is still a drop. If you don’t understand how black holes emit radiation, that’s unfortunate, but honestly it doesn’t really matter. If you don’t understand how capitalism works, that matters a great deal more.

First, the major reason we have problems with capitalism is that it does not work properly.

We know that markets fail under certain circumstances. Monopolies are one of them, and this is certainly a problem we see with social media. That markets do not automatically account for externalities is another reason, and this is the problem we see with environmental protection.

The biggest problem however is what I already mentioned above, that markets only work if consumers know what they are buying. Which brings me to another misunderstanding.

Second, capitalism isn’t about money.

No, really, it is not. Money is just a medium we exchange to reach an optimal configuration. It does not itself define what is optimal.

Markets optimize a quantity called “utility”. What is “utility” you ask? It is whatever is relevant to you. You may, for example, be willing to pay a somewhat higher price for a social media platform that does not spur the demise of democracy. This should, theoretically, give companies feedback about what customers want, leading the company to improve its products.

Why isn’t this working? It’s not working because we currently pay for most online services – think Facebook – through advertisements. The cost of producing the advertisements increases the price of the advertised product. With this arrangement there is no feedback from the consumer of the online service to the service provider itself.

Indeed, I suspect that Facebook prefers financing by ads exactly because this way they do not have to care about what users want. Now add on top that users do not actually know what they are getting into, and it isn’t hard to see why capitalism fails here: The self-correction of the market cannot work if consumers do not know what they get, and producers don’t get financial feedback about how well they meet consumers’ demand. And that’s leaving aside the monopoly problem.

In a functioning capitalist system, nothing prevents you from preferably buying products of companies that support your non-financial values, thereby letting producers know that that’s what you want. Or, if that’s too much thinking, vote for a party that passes laws enforcing these values.

Third, capitalism just shows us who we are.

Have you given all your savings to charity today? You probably haven’t. Children are starving in Africa and birds are choking on plastic, but you are sitting on your savings. A bad, bad person you are.

Guess what, you’re not alone.

Of course you save money not for the sake of having money, but because it offers safety, freedom, health, entertainment and, yes, also luxury to some extent. You do not donate all your money to charity because, face it, you value your future well-being higher than the lives of children you don’t know.

Economists call this “revealed preferences”. What we spend and do not spend our money on reveals what matters to us. Part of the backlash against capitalism that we now see comes from people who are inconsistent about their preferences. Or maybe they are just virtue-signalling because it’s fashionable.

Yes, limiting climate change is important, nod-nod. But if that means rising gas prices, then maybe it’s not all that important.

Complaining about capitalism will not resolve this tension. As I said in a recent blogpost, when it comes to climate change there are no simple solutions. It will hurt either way. And maybe the truth is that many of us just do not care all that much about future generations.

Capitalism, of course, cannot fix all our problems, even if it were working perfectly from tomorrow on. That’s because change takes time, and if we don’t have time, the only thing that will get human society to move quickly is a centralization of power.

Monday, May 27, 2019

Do I exist?

Last week, we discussed what scientists mean when they say that something exists. To recap briefly: Something exists if it is useful to explain observations. This makes, most importantly, no statement about what is real or true, which is a question for philosophers, not scientists.


I then asked you to tell me whether you think that I exist. Many of you submitted great answers to my existential question. I want to pick two examples to illustrate some key points.

Fernando wrote in a comment on my YouTube Channel:
“When I say “that chair exists”, I am also fitting the data (collected with my senses) with my internal conceptions of a chair. I think that the same is true when I see a two dimensional representation of Sabine Hossenfelder on my computer screen and say that Sabine exists.”
As he says, correctly, he is really just trying to create a model to explain his sensory input. I am part of a model that works well, therefore he says I exist.

And Dr Castaldo wrote in a comment on this blog:
“I believe a single human exists that appears in videos and photographs and authors these blog posts, tweets, and answers to commentary. I am aware of no other plausible (to me) explanation for those artifacts and their consistency.”
The important part of this comment is that he emphasizes the explanation that I exist is plausible, not certain, and it is plausible to him, personally.

In the comments on my earlier blogpost you then find some exchange about whether it is possible that my videos are generated by an artificial intelligence and I do not exist.

For all you know, that is possible. But even with the most advanced software presently available, it would be a challenge to fake me. At the very least, making me up would be a lot of effort for no good reason. Possible, yes, but not plausible.

The simplest explanation is what most of you probably believe, that I am a human being not unlike yourself, in an apartment not unlike your own, with a camera and laptop, not unlike your own, and so on. And simple explanations are the most useful ones in a computational sense, so these are the ones scientists go with.

The important points here are the following. First, explaining sensory input is all you ever do.

You collect data with your senses and try to create a consistent model of the world that explains this data. Even scientists and their papers are just sensory input that you use to create a model of the world.

Second, confidence in existence is gradual.

The only reliable statements about what exists are based on models that you use to explain observations. But the confidence you have in these models depends on what data you have, and therefore it can gradually increase or decrease. The more videos you watch of me, the more confident you will be that I exist. It’s not either-or. It’s maybe or probably.
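To make “gradual confidence” a little more concrete, here is a toy sketch of Bayesian updating in Python. The likelihoods (0.9 and 0.3) are made up purely for illustration; the point is only that each new observation nudges the probability, it never jumps to certainty.

```python
# Toy sketch of gradual confidence via Bayesian updating.
# All numbers are made up for illustration; nothing here is a real measurement.

def update(prior, p_data_if_true, p_data_if_false):
    """Return the posterior probability after seeing one piece of evidence."""
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1 - prior))

confidence = 0.5          # start undecided about "this person exists"
p_video_if_exists = 0.9   # assumed chance of seeing such a video if she exists
p_video_if_fake   = 0.3   # assumed chance if the videos were faked by an AI

for n in range(1, 6):     # watch five videos
    confidence = update(confidence, p_video_if_exists, p_video_if_fake)
    print(f"after video {n}: confidence = {confidence:.3f}")
# Confidence rises toward (but never reaches) 1 -- it's "maybe" or "probably", not either-or.
```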

The thing you can be most confident exists is yourself because you cannot explain anything unless there is a you to explain something. Data about yourself are the most immediate. That’s a complicated way of rephrasing what Descartes said: I think, therefore I am.

Third, how confident you are that something exists depends on your personal history. It depends on your experience and your knowledge.

If we have met and I shook your hand, you will be much more confident that I exist. Why? Because you only know of one way to create this sensory input. If you merely see me on a laptop screen, you also have to use knowledge about how your screen works, how the internet works, how human society works, and what’s the current status of artificial intelligence and so on. It’s a more difficult analysis of the data, and you will end up with a lower confidence.

And this is why science communication is so, so relevant. Because someone who does not understand how scientists infer the existence of the Higgs-boson from data, and also does not understand how science itself works, will end up with a low confidence that the Higgs-boson exists and they will begin to question the use of science in general.

Having settled this, here is the next homework assignment: Does God exist? Let me know what you think.

Update May 29: The video now has German and Italian subtitles. To see those, click on CC in the YouTube toolbar. Choose the language via the settings/gear icon.

Wednesday, May 22, 2019

Does the Higgs-boson exist?

What do scientists mean when they say that something exists? Every time I give a public lecture, someone will come and inform me that black holes don’t exist, or quarks don’t exist, or time doesn’t exist. Last time someone asked me “Do you really believe that gravitational waves exist?”


So, do I believe that gravitational waves exist? Let me ask you in return: Why do you care what I believe? What does it matter for anything?

Look, I am a scientist. Scientists don’t deal with beliefs. They deal with data and hypotheses. Science is about knowledge and facts, not about beliefs.

And what I know is that Einstein’s theory of general relativity is a mathematical framework from which we can derive predictions that are in excellent agreement with observation. We have given names to the mathematical structures in this theory. One of them is called gravitational waves, another one is called black holes. These are the mathematical structures from which we can calculate the observational consequences that have now been measured by the LIGO and VIRGO gravitational wave interferometers.

When we say that these experiments measured “gravitational waves emitted in a black hole merger”, we really mean that specific equations led to correct predictions.

It is a similar story for the Higgs-boson and for quarks. The Higgs-boson and quarks are names that we have given to mathematical structures. In this case the structures are part of what is called the standard model of particle physics. We use this mathematics to make predictions. The predictions agree with measurements. That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agree with observations.

Same story for time. In General Relativity, time is a coordinate, much like space. It is part of the mathematical framework. We use it to make predictions. The predictions agree with observations. And that’s that.

Now, you may complain that this is not what you mean by “existence”. You may insist that you want to know whether it is “real” or “true”. I do not know what it means for something to be “real” or “true.” You will have to consult a philosopher on that. They will offer you a variety of options, that you may or may not find plausible.

A lot of scientists, for example, subscribe knowingly or unknowingly to a philosophy called “realism”, which means that they believe a successful theory is not merely a tool to obtain predictions, but that its elements have an additional property that you can call “true” or “real”. I am speaking loosely here, because there are several variants of realism. But they have in common that the elements of the theory are more than just tools.

And this is all well and fine, but realism is a philosophy. It’s a belief system, and science does not tell you whether it is correct.

So here is the thing. If you want to claim that the Higgs-boson does not exist, you have to demonstrate that the theory which contains the mathematical structure called “Higgs-boson” does not fit the data. Whether or not Higgs-bosons ever arrive in a detector is totally irrelevant.

Here is a homework assignment: Do you think that I exist? And what do you even mean by that?

Update: Now with subtitles in German and Italian. To see them, click on CC in the YouTube toolbar, then choose a language in the settings/gear icon.

Tuesday, May 21, 2019

Book and travel update

The French translation of my book “Lost in Math” has now appeared under the title “Lost in Maths: Comment la beauté égare la physique”.

The Spanish translation has now also appeared under the title “Perdidos en las matemáticas: Cómo la belleza confunde a los físicos.” I don’t speak Spanish, but for all I can tell, the title is a literal translation.

On Thursday (May 23rd) I am giving a public lecture in Barcelona. The lecture, it turns out, will be simultaneously translated into Spanish. This, I think, will be an interesting experience.

The next talks I have scheduled are a colloquium in Mainz, Germany, on June 11, and a public lecture in Groningen, Netherlands, on June 21st. The public lecture is associated with the workshop “Probabilities in Cosmology” at the University of Groningen.

I declined the invitation to the Nobel Laureate meeting in Lindau because I was informed they would only cover my travel expenses if I agreed in advance to write about their meeting for a third party. (If you get pitches about the meeting, please ask the author for a COI.)

After some back and forth, I accepted the invitation to SciFoo 2019, mostly because I couldn’t think of a way to justify declining it even to myself.

The fall is filling up too. The current plan looks roughly like this: On September 21st, I am giving a public lecture in Nürnberg. In early October I am in Brussels for a workshop. In mid-October I am giving a public lecture at the University of Minnesota. (I have not yet booked the flight for this trip. So if you want me to stop by your institution for a lecture on the way, please get in touch asap.) At the end of October I am giving a lecture in Göttingen, and in the first week of November I am in Potsdam and, again, in Berlin.

From November on, I will be unemployed, at least that is what it presently looks like. Or maybe I should say I will be fully self-employed. Either way, I will have to think of some other way to earn money than doing calculations in Anti-de Sitter space.

Finally, here is the usual warning that I am traveling for the rest of the week and comments on this blog will be stuck in the moderation queue longer than usual.

Sunday, May 19, 2019

10 things you should know about black holes


When I first learned about black holes, I was scared that one would fly through our solar system and eat us up. That was 30 years ago. I'm not afraid of black holes anymore but I am afraid that they have been misunderstood. So here are 10 things that you should know about black holes.

1. What is a black hole?

A black hole contains a region from which nothing can ever escape because, to escape, you would have to move faster than the speed of light, which you can’t. The boundary of the region from which you cannot escape is called the “horizon.” In the simplest case, the horizon has the form of a sphere. Its radius is known as the Schwarzschild radius, named after Karl Schwarzschild, who first derived black holes as a solution of Einstein’s General Relativity.

2. How large are black holes?

The diameter of a black hole is directly proportional to the mass of the black hole. So the more mass falls into the black hole, the larger the black hole becomes. Compared to other stellar objects though, black holes are tiny because enormous gravitational pressure has compressed their mass into a very small volume. For example, the radius of a black hole with the approximate mass of planet Earth is only a few millimeters.
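To put a number on that proportionality, here is a minimal sketch that evaluates the textbook Schwarzschild radius, R = 2GM/c², for an Earth mass and a solar mass (constants rounded; purely illustrative):

```python
# Minimal sketch: Schwarzschild radius R = 2*G*M/c^2 (SI units).
# Illustrative only; constants rounded to four significant figures.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Return the horizon radius in meters for a given mass in kilograms."""
    return 2 * G * mass_kg / c**2

M_earth = 5.972e24   # kg
M_sun   = 1.989e30   # kg

print(f"Earth-mass black hole: {schwarzschild_radius(M_earth)*1000:.1f} mm")  # ~8.9 mm
print(f"Solar-mass black hole: {schwarzschild_radius(M_sun)/1000:.1f} km")    # ~3 km
```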

3. What happens at the horizon?

A black hole horizon does not have substance. Therefore, someone crossing the black hole horizon does not notice anything weird going on in their immediate surroundings. This follows from Einstein’s equivalence principle, which implies that in your immediate surroundings you cannot tell the difference between acceleration in flat space and the gravity that arises from curved space-time.

However, an observer far away from a black hole who watches somebody fall in would notice that the infalling person seems to move slower and slower the closer they get to the horizon. It appears this way because time close to the black hole horizon runs much more slowly than time far away from the horizon.

That’s one of those odd consequences of the relativity of time that Einstein discovered. So, if you fall into a black hole, it only takes a finite amount of time to cross the horizon, but from the outside it looks like it takes forever.

What you would experience at the horizon depends on the tidal force of the gravitational field. The tidal force is, loosely speaking, the change of the gravitational force from place to place. It’s not the gravitational force itself, it’s the difference between the gravitational forces at two nearby places, say at your head and at your feet.

The tidal force at the horizon is inversely proportional to the square of the mass of the black hole. This means the larger and more massive the black hole, the smaller the tidal force at the horizon. Yes, you heard that right. The larger the black hole, the smaller the tidal force at the horizon.

Therefore, if the black hole is massive enough, you can cross the horizon without noticing what just happened. And once you have crossed the horizon, there is no turning back. The stretching from the tidal force will become increasingly unpleasant as you approach the center of the black hole, and eventually rip everything apart.
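As a rough illustration of how the tidal force scales with mass, the sketch below evaluates the Newtonian estimate of the head-to-feet stretch, Δa ≈ 2GMΔr/r³, at the horizon radius r = 2GM/c². The 2-meter body and the two example masses are assumptions for illustration, and this is an order-of-magnitude estimate, not a proper general-relativistic calculation:

```python
# Rough, order-of-magnitude sketch of the head-to-feet tidal acceleration
# at the horizon, using the Newtonian estimate da ~ 2*G*M*dr / r^3
# evaluated at r = 2*G*M/c^2. Not a rigorous GR calculation.

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def tidal_at_horizon(mass_kg, dr=2.0):
    """Tidal acceleration (m/s^2) across dr meters at the horizon."""
    r = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * dr / r**3

print(f"Stellar black hole (10 solar masses):       {tidal_at_horizon(10 * M_sun):.1e} m/s^2")
print(f"Supermassive black hole (4e6 solar masses): {tidal_at_horizon(4e6 * M_sun):.1e} m/s^2")
# Roughly 2e8 m/s^2 for the stellar case (fatal), about 1e-3 m/s^2 for the
# supermassive one (unnoticeable) -- the tidal force drops with the mass squared.
```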

In the early days of General Relativity many physicists believed that there is a singularity at the horizon, but this turned out to be a mathematical mistake.

4. What is inside a black hole?

Nobody really knows. General Relativity predicts that inside the black hole is a singularity, that is, a place where the tidal forces become infinitely large. But we know that General Relativity does not work near the singularity because there the quantum fluctuations of space and time become large. To be able to tell what is inside a black hole we would need a theory of quantum gravity – and we don’t have one. Most physicists believe that such a theory, if we had it, would replace the singularity with something else.

5. How do black holes form?

We presently know of four different ways that black holes may form. The best understood one is stellar collapse. A sufficiently large star will form a black hole after its nuclear fusion runs dry, which happens when the star has fused everything that could be fused. Now, when the pressure generated by the fusion stops, the matter starts falling towards its own gravitational center, and thereby becomes increasingly dense. Eventually the matter is so dense that nothing can overcome the gravitational pull at the star’s surface: That’s when a black hole has been created. These black holes are called ‘solar mass black holes’ and they are the most common ones.

The next most common type of black hole is the ‘supermassive black hole’ that can be found in the centers of many galaxies. Supermassive black holes have masses about a billion times that of solar mass black holes, and sometimes even more. Exactly how they form is still not entirely clear. Many astrophysicists think that supermassive black holes start out as solar mass black holes and, because they sit in a densely populated galactic center, swallow a lot of other stars and grow. However, it seems that the black holes grow faster than this simple idea suggests, and exactly how they manage this is not well understood.

A more controversial idea is primordial black holes. These are black holes that might have formed in the early universe from large density fluctuations in the plasma. So, they would have been there all along. Primordial black holes can in principle have any mass. While this is possible, it is difficult to find a model that produces primordial black holes without producing so many of them that it conflicts with observation.

Finally, there is the very speculative idea that tiny black holes could form in particle colliders. This can only happen if our universe has additional dimensions of space. And so far, there has not been any observational evidence that this might be the case.

6. How do we know black holes exist?

We have a lot of observational evidence that speaks for very compact objects with large masses that do not emit light. These objects reveal themselves by their gravitational pull. They do this for example by influencing the motion of other stars or gas clouds around them, which we have observed.

We furthermore know that these objects do not have a surface. We know this because matter falling onto an object with a surface would cause more emission of particles than matter falling through a horizon and then just vanishing.

And most recently, we have the observation from the “Event Horizon Telescope”, which is an image of the black hole shadow. This is basically an extreme gravitational lensing event. All these observations are compatible with the explanation that they are caused by black holes, and no similarly good alternative explanation exists.

7. Why did Hawking once say that black holes don’t exist?

Hawking was using a very strict mathematical definition of black holes, and one that is rather uncommon among physicists. 

If the inside of the black hole horizon remains disconnected forever, we speak of an “event horizon”. If the inside is only disconnected temporarily, we speak of an “apparent horizon”. But since an apparent horizon could be present for a very long time, like, billions of billions of years, the two types of horizons cannot be told apart by observation. Therefore, physicists normally refer to both cases as “black holes.” The more mathematically-minded people, however, count only the first case, with an eternal event horizon, as black hole.

What Hawking meant is that black holes may not have an eternal event horizon but only a temporary apparent horizon. This is not a controversial position to hold, and one that is shared by many people in the field, including me. For all practical purposes though, the distinction Hawking drew is irrelevant.

8. How can black holes emit radiation?

Black holes can emit radiation because the dynamical space-time of the collapsing black hole changes the notion of what a particle is. This is another example of the “relativity” in Einstein’s theory. Just like time passes differently for different observers, depending on where they are and how they move, the notion of particles too depends on the observer, on where they are and how they move.

Because of this, an observer who falls into a black hole thinks he is falling in vacuum, but an observer far away from the black hole thinks that it’s not vacuum but full of particles. And where do the particles come from? They come from the black hole.

This radiation that black holes emit is called “Hawking radiation” because Hawking was the first to derive that this should happen. The radiation has a temperature that is inversely proportional to the black hole’s mass: The smaller the black hole, the hotter it is. For the stellar and supermassive black holes that we know of, the temperature is well below that of the cosmic microwave background and cannot be observed.
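For concreteness, here is a small sketch that evaluates the standard Hawking temperature formula, T = ħc³/(8πGMk_B), for a solar-mass black hole and compares it with the temperature of the cosmic microwave background (constants rounded, purely illustrative):

```python
# Minimal sketch: Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), SI units.
# Illustrative only; constants rounded.
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.381e-23   # Boltzmann constant, J/K
M_sun = 1.989e30   # kg

def hawking_temperature(mass_kg):
    """Hawking temperature in kelvin for a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T_CMB = 2.725      # K, cosmic microwave background
print(f"Solar-mass black hole: {hawking_temperature(M_sun):.1e} K")   # ~6e-8 K
print(f"CMB temperature:       {T_CMB} K")
# The radiation of astrophysical black holes is many orders of magnitude
# colder than the CMB, which is why it cannot be observed.
```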

9. What is the information loss paradox?

The information loss paradox is caused by the emission of Hawking radiation. The problem arises because the Hawking radiation is purely thermal, which means it is random except for having a specific temperature. In particular, the radiation does not contain any information about what formed the black hole.

But while the black hole emits radiation, it loses mass and shrinks. So, eventually, the black hole will be entirely converted into random radiation and the remaining radiation depends only on the mass of the black hole. It does not at all depend on the details of the matter that formed it, or whatever fell in later. Therefore, if one only knows the final state of the evaporation, one cannot tell what formed the black hole. 

Such a process is called “irreversible” — and the trouble is that there are no such processes in quantum mechanics. Black hole evaporation is therefore inconsistent with quantum theory as we know it and something has to give. Somehow this inconsistency has to be removed. Most physicists believe that the solution is that the Hawking radiation somehow must contain information after all.

10. So, will a black hole come and eat us up?

It’s not impossible, but very unlikely. 

Most stellar objects in galaxies orbit around the galactic center because of the way that galaxies form. It happens on occasion that two solar systems collide and a star, planet, or black hole is kicked onto a strange orbit, leaves one solar system, and travels around until it gets caught up in the gravitational field of some other system.

But the stellar objects in galaxies are generally far apart from each other, and we sit in an outer arm of a spiral galaxy where there isn’t all that much going on. So, it’s exceedingly improbable that a black hole would come by on just exactly the right curve to cause us trouble. We would also know of this long in advance because we would see the gravitational pull of the black hole acting on the outer planets.

If you enjoy my blog, please consider donating by using the button in the top right corner. Thanks!




Wednesday, May 15, 2019

Climate Change: There are no simple solutions

The Earth is warming. Human carbon-dioxide emissions are one of the major culprits. We have known this for a long time. But in the past two decades, evidence for global warming has become more noticeable on local levels, as with seasonal shifts, extreme weather events, declines in biodiversity and, depending on where you live, droughts. And it will get worse.

I would describe myself as risk-averse, future-oriented, and someone who worries easily. I don’t need to be convinced that we are not doing enough to mitigate the consequences of rising temperatures. Yet, I have become increasingly frustrated about the discussion of climate change in the media, which makes it look like the problem is to convince people that climate change is happening in the first place.

It is not. The problem is that we don’t know what to do about it, and even if we knew, we wouldn’t have the means to actually do it. And nothing whatsoever has changed about this since I learned of climate change in school, some time in the 1980s. Gluing yourself to a train will not create the policies and the institutions we would need to implement them.

A good example for this bizarre problem-denial is Greta Thunberg, here speaking to a crowd of about 10,000 people in Helsinki: “The climate crisis has already been solved. We already have all the facts and solutions. All we need to do is to wake up and change.”


I do not blame Greta Thunberg for being naïve. She’s a child, and she even admits to being naïve. When I was a teenager, I thought much the same, so who am I to judge her? But adults should know better than that. Yet, here we have Bill Nye delivering his variant of “all we need to do is wake up”:

But the fact that climate change happens does not tell us what, if anything, to do about it. And scientists really should know better than to mix “is” with “ought.”

As I said, I am future-oriented and risk-averse. These are my personal values. You may not share them. Maybe you don’t give a shit about what’s going to happen 50 years from now, and if that’s your opinion, then that’s your opinion. Maybe you are willing to accept the risk that a steep temperature rise will result in famine, social unrest, and diseases that eradicate some billion people. Or maybe you even think that getting rid of some billion humans, mostly in the developing world, would not be a bad thing. I don’t share these opinions, but there is nothing factually wrong with them.

Economists have a long story to tell about our responsibility to the coming generations. How much we value it depends on what is called the “future discount rate”, which quantifies, basically, the relevance we assign to what will happen in the future. This evaluation usually focuses on measures like the Gross Domestic Product (GDP), and comes down to the question of how much GDP per year we should invest today to prevent declines of GDP in the future. (If you are familiar with the literature, this is the Stern-Nordhaus debate over carbon pricing.)

There are many problems with that kind of argument. For starters, the GDP itself doesn’t tell you all that much about the well-being of people. It’s also somewhat unclear where to get the future discount rate from. Economists have devised some methods to extract it from interest rates and the like, which you may or may not find reasonable. Also, we don’t know, just by discounting the future, how to factor in uncertainties about what is going to happen. For example, one line of argument is that what we should really do is not look for a solution that is economically optimal, but for one that minimizes the risk of a major ecological instability, because then all bets are off.
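To see how much hangs on the discount rate, here is a minimal sketch of plain exponential discounting, PV = FV/(1+r)^t. The two rates and the damage figure are illustrative stand-ins only, not the actual numbers from the Stern or Nordhaus analyses:

```python
# Minimal sketch of exponential discounting: PV = FV / (1 + r)**t.
# The rates below are illustrative stand-ins for a "low" and a "high"
# discount rate, not the actual values used by Stern or Nordhaus.

def present_value(future_damage, rate, years):
    """How much a future loss is worth today at a given annual discount rate."""
    return future_damage / (1 + rate) ** years

damage_in_50_years = 100.0   # arbitrary units

for rate in (0.015, 0.045):
    pv = present_value(damage_in_50_years, rate, 50)
    print(f"discount rate {rate:.1%}: a damage of 100 in 50 years is worth {pv:.0f} today")
# At ~1.5% the future damage still weighs about half as much as today;
# at ~4.5% it shrinks to roughly a tenth. The policy conclusion flips with the rate.
```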

Then there is the question of what policies to pursue and how to implement them. A market-based solution, for example by putting a price on carbon, would be most likely to lead to an economically optimal strategy. The problem, however, is that this would necessitate an equilibrium readjustment of the global market, which is unlikely to happen fast enough even if we could get it going yesterday. And that’s leaving aside that equilibrium theory has its flaws, which is to say that economists aren’t exactly known for making great predictions.

Either way you turn it, resources that we spend today on limiting carbon-dioxide emissions are resources we cannot spend on something else, resources that will not go to education, research, social welfare, infrastructure. Oh, and they will also not go into that next larger particle collider.

 The world has two or maybe three decades of cheap fossil fuels left. Not using those makes our lives harder, regardless of how much we subsidize renewables. That, too, is a fact. Any sincere discussion about climate change should acknowledge it. It’s a difficult situation and there are no simple solutions.

Tuesday, May 14, 2019

Quantum Mechanics is wrong. There, I’ve said it.

[Image: needpix.com]


So, you have developed a new theory of quantum mechanics. That is, erm, nice. No, please don’t show it to me. I’m almost certainly too stupid to understand it. You see, I have only a PhD in physics. All that math has certainly screwed up my neural wiring. Yes, I am sorry. But I have a message for you from the depth of abstract math: We know that quantum mechanics is wrong.

Seriously, it’s wrong. It’s as wrong as Newtonian gravity is wrong, as hydrodynamics is wrong, and as spherical cows are wrong. Quantum mechanics is an approximation. It works well in some cases. It does not work well in other cases.

You see, in quantum mechanics we give quantum properties to particles. But we know that, strictly speaking, the interactions between these particles must also have quantum properties. Giving these interactions quantum properties is called “second quantization,” and it is not part of quantum mechanics. Second quantization results in a larger mathematical framework called “quantum field theory.” The Standard Model of particle physics is a quantum field theory. Sometimes we use the word “quantum theory” to refer to both quantum mechanics and quantum field theory together.

Moving from quantum mechanics to quantum field theory is more than just a change of name. Quantum field theories inherit many properties from quantum mechanics: Entanglement, uncertainty, the measurement postulate. But they bring new insights – and also new difficulties.

The best-known insight brought by quantum field theory is that particles can be created and destroyed, and that each particle has an anti-particle (though some particles are their own anti-particles). Another remarkable consequence of quantum field theory is that the strength of the interactions between particles depends on the energy with which one probes the interaction. The strong nuclear force, it turns out, becomes weaker at high energies, an odd behavior that is known as “asymptotic freedom.”

Probably the best-known difficulty of quantum field theories is that many calculations result in infinity. Infinity, however, is not a very useful prediction. Such results therefore have to be dealt with by a procedure called “renormalization,” whose purpose is to suitably subtract infinity from infinity to get a finite remainder. No, there is nothing wrong with that. It works just fine, thank you.

Quantum field theories lead to other complications. For example, we know how to calculate what happens if two electrons bump into each other and create a bunch of new particles. This is called a “scattering event”. But we don’t know how to calculate what happens if three quarks stick together and form a proton. Well, we do know how to put such calculations on super-computers in an approximation called “lattice QCD”. But really we don’t have good mathematical tools to handle the case. At least not yet.

But let us come back to quantum mechanics. You can use this theory to make predictions for any experiment where the creation and destruction of particles does not play a role. This is the case for all your typical quantum optics experiments, Bell-type tests, quantum cryptography, quantum computing, and so on. It is not merely a matter of doing experiments at low energy, but it also depends on how sensitive you are to the corrections coming from quantum field theory.

So, yes, quantum mechanics is technically wrong. It’s only an approximation to the more complete framework of quantum field theory. But as the statistician George Box summed it up “All models are wrong, but some are useful.” And whatever your misgivings are about quantum mechanics, there is no denying that it is useful. 

Sunday, May 12, 2019

The trouble with Facebook and what it has in common with scientific publishing.

[In case you’d rather read than listen, a full transcript follows below.]


Today, I want to talk about Facebook. Yes, Facebook, the social media website, I’m sure you have heard of them.

Facebook currently gets a lot of media attention. And not in a good way. That’s because not only has Facebook collected and passed on user information without those users’ consent, it has also provided a platform for the organized spread of political misinformation, aka “fake news”.

I doubt you were surprised by this. It’s hardly a breakthrough insight that a near-monopoly on the filtering of information is bad for democracy. This is, after all, why we have freedom of the press written into the constitution: To prevent an information monopoly. And when it comes to the internet, this is a problem that scientists have written about for at least two decades.

Originally the worry of scientists, however, focused on search engines as news providers. This role, we now know, has been taken over by social media, but it’s the same problem: Some company’s algorithm comes to select what information users do and do not see prominently. And this way of selecting our information can dramatically affect our opinion.

A key paper on this problem is a 2003 article by three political scientists who coined the term “Googlearchy”. They wrote back then:

“Though no one expected that every page on the Web would receive an exactly equal share of attention, many have assumed that the Web would be dramatically more egalitarian in this regard than traditional media. Our empirical results, however, suggest enormous disparities in the number of links pointing to political sites in a given category. In each of the highly diverse political communities we study, a small number of heavily-linked sites receive more links than the rest of the sites combined, effectively dominating the community they are a part of […]

We introduce a new term to describe the organizational structure we find: “googlearchy” –  the rule of the most heavily linked. We ultimately conclude that the structure of the Web funnels users to only a few heavily-linked sites in each political category.”

We have now become so used to this “rule of the most heavily linked” that we have stopped even complaining about it, though maybe we should now call it the “rule of the most heavily liked.”

But what these political scientists did not discuss back then was that of course people would try to exploit these algorithms and then attempt to deliberately misinform others. So really the situation is worse now than they made it sound in 2003.

Why am I telling you this? Because it occurred to me recently that the problem with Facebook’s omnipotent algorithm is very similar to a problem we see with scientific publishing. In scientific publishing, we also have a widely used technique for filtering information that is causing trouble. In this case, we filter which publications or authors we judge as promising.

For this filtering, it has become common to look at the number of citations that a paper receives. And this does cause problems, because the number of citations may be entirely disconnected from the real world impact of a research direction. The only thing the number of citations really demonstrates is popularity. Citations are a measure that’s as disconnected from scientific relevance as the number of likes is from the truth value of an article on Facebook.

Of course the two situations are different in some ways. For example on social media there is little tradition of quoting sources. This has the effect that a lot of outlets copy news from each other, and that it is extra hard to check the accuracy of a statement. Another difference is that social media has a much faster turnover-rate than scientific publications. This means on social media people don’t have a lot of time to think before they pass on information. But in both cases we have a problem caused by the near monopoly of a single algorithm.

Now, when it comes to scientific publishing, we have an obvious solution. The problem both with the dominance of a few filtering algorithms and with the possibility of gaming comes from users being unable to customize the filtering algorithm. So with scientific publishing, just make it easier for scientists to use other ways to evaluate research works. This is the idea behind our website SciMeter.org.

The major reason that most scientists presently use the number of citations, or the number of publications, or the number of publications in journals with high impact factor, to decide what counts as “good science” is that these numbers are information they can easily access, while other numbers are not. It adds to this that when it comes to measures like the journal impact factor, no one really knows how it’s calculated.
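For what it's worth, the nominal definition of the two-year journal impact factor is easy to write down; the opacity lies in which items count as “citable” and which citations the proprietary database actually records. A sketch of the nominal formula, with hypothetical numbers:

```python
# Nominal definition of the two-year journal impact factor (a sketch only):
# citations received in year Y to items published in Y-1 and Y-2,
# divided by the number of "citable items" published in Y-1 and Y-2.
# The opacity in practice comes from what counts as "citable" and
# which citations the underlying proprietary database actually records.

def impact_factor(citations_in_Y_to_prev_two_years, citable_items_prev_two_years):
    return citations_in_Y_to_prev_two_years / citable_items_prev_two_years

# Hypothetical numbers, purely for illustration:
print(impact_factor(citations_in_Y_to_prev_two_years=3200,
                    citable_items_prev_two_years=450))   # ~7.1
```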

Likewise, the problem with Facebook’s algorithm is that no one knows how it works, and it can’t be customized. If it were possible for users to customize what information they see, gaming would be much less of a problem. Well, needless to say, I am assuming here that the users’ customization would remain private information.

You may object that most users wouldn’t want to deal with the details, but this isn’t really necessary. It is sufficient if a small group of people generates templates that users can then choose from.

Let me give you a concrete example. I use Facebook mostly to share and discuss science news and to stay in touch with people I have on my “Close friends” list. I don’t want political news from Facebook, I am not interested in the social lives of people I don’t know, and if I want entertainment, I look for that elsewhere.

However, other people use Facebook entirely differently. Some spend a lot of time with groups, use it to organize events, look for distraction, or, I don’t know, share cooking recipes, whatever. But right now, Facebook offers very few options to customize your news feed to suit your personal interests. The best you can do, really, is to sort people onto lists. But this is cumbersome and solves only some aspects of the sorting problem.

So, I think an easy way to solve at least some of the problems with Facebook would be to allow a third-party plug-in to sort your news feed. This would give users more control and also relieve Facebook of some responsibility.
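To make the plug-in idea a bit more concrete, here is a purely hypothetical sketch of what a user-customizable ranking could look like. The post fields, templates, and weights are invented for illustration; this is not an actual Facebook API or algorithm:

```python
# Purely hypothetical sketch of a user-customizable news-feed ranking.
# The post fields and the weight templates are invented for illustration;
# this is not an actual Facebook API or algorithm.

def score(post, weights):
    """Weighted sum of user-chosen features; higher means shown earlier."""
    return (weights["close_friend"] * post["from_close_friend"]
            + weights["science"]    * post["is_science_news"]
            + weights["politics"]   * post["is_political"]
            + weights["recency"]    * post["recency"])        # recency in [0, 1]

# Two example "templates" a user might pick from:
science_and_friends = {"close_friend": 3.0, "science": 2.0, "politics": 0.0, "recency": 1.0}
events_and_groups   = {"close_friend": 1.0, "science": 0.0, "politics": 0.5, "recency": 2.0}

posts = [
    {"from_close_friend": 1, "is_science_news": 0, "is_political": 0, "recency": 0.9},
    {"from_close_friend": 0, "is_science_news": 1, "is_political": 0, "recency": 0.5},
    {"from_close_friend": 0, "is_science_news": 0, "is_political": 1, "recency": 1.0},
]

feed = sorted(posts, key=lambda p: score(p, science_and_friends), reverse=True)
print([round(score(p, science_and_friends), 1) for p in feed])
# With the "science_and_friends" template, the political post drops to the bottom.
```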

Mark Zuckerberg once declared his motto clearly: “Move fast and break things. Unless you are breaking stuff, you are not moving fast enough.” Well, maybe it’s time to break Facebook’s dominance over information filtering.

Update: Now with Italian subtitles. Click on “CC” in the YouTube toolbar to turn on subtitles. Switch the language in the settings/gear icon.

Friday, May 10, 2019

Admin note on invisible comments

I keep getting notifications from readers that something isn't working on my blog. Here is what's happening: With the new layout of the comment section, if the list exceeds 100 comments, then comments will by default not all load. In that case you have to click on "Load more..." below the comment box, see screen shot below. I have posted this response several times in the comments, but of course you'll only see this if you already know you have to click on "Load more..."



I also know that if you click on the link in the comment widget (side bar), this will not work if there are too many comments in one thread. I am sorry about this, but there is nothing I can do to change it. This blog is hosted by Google. My options to customize the comment section are very limited. The comment widget is a third-party JavaScript that, however, can’t handle the more recent updates of the comment feature.

I have also noticed that sometimes the comment box doesn't appear in a reply-to-comment thread. In that case you have to scroll up to the comment you want to reply to and find the "reply" link. If you click on that, the box will appear. I have no idea what sense this makes. If anyone has suggestions for improvement other than that I should move this blog to a different provider, please let me know.

And while I am at it, let me repeat my plea that you please, please not post links or email addresses. First, because such comments are likely to end up in the spam folder. Even if not, I will only approve such comments after I have had time to check that the website is legit. Since I normally don’t have time to do that, your comment will end up in the moderation queue indefinitely. Exceptions are links to websites I can recognize immediately, e.g. the arXiv, scientific journals, major news pages, etc.

Wednesday, May 08, 2019

Measuring science the right way. Your way.

Today, I am happy to announce the first major update of our website SciMeter.org. This update brings us a big step closer to the original vision: A simple way to calculate what you, personally, think quantifies good research.

Evaluating scientific work, of course, requires in-depth studies of research publications. But in practice, we often need to make quick, quantitative comparisons, especially those of us who serve on committees or who must document the performance of their institutions (or both). These comparisons rely on metrics for scientific impact, that is, measures to quantify the relevance of research, both on the administrative level and on the individual level. Many scientists are rightfully skeptical of such attempts to measure their success, but quantitative evaluation is and will remain necessary.

However, metrics for scientific impact, while necessary, can negatively influence the behavior of researchers. If a high score on a certain metric counts as success, this creates an incentive for researchers to work towards increasing their score, rather than using their own judgement of what makes good research. This problem of “perverse incentives” and “gaming measures” is widely known. It has been the subject of countless talks and papers and manifestos, yet little has been done to solve the problem. With SciMeter we hope to move towards a solution.

A major reason that measures for scientific success can redirect research interests is that few measures are readily available. While the literature on bibliometrics and scientometrics contains hundreds of proposals, most researchers presently draw on only a handful of indicators that are easy to obtain. These are, notably, the number of citations, the number of publications, the Hirsch-index, and the number of papers in journals with a high impact factor. These measures have come to define what “good research” means just because in practice we have no alternative measures. But such a narrow definition of success streamlines research strategies and agendas. And this brings the risk that scientific exploration becomes inefficient and stalls.

With SciMeter, we want to work against this streamlining. SciMeter allows everyone to create their own measures to capture what they personally consider the best way of quantifying scientific impact. The self-created measures can then be used to evaluate individuals and to quickly sort lists, for example lists of applicants. Since your measures can always be adapted, this counteracts streamlining and makes gaming impossible.
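As a purely illustrative sketch, and emphatically not the actual SciMeter implementation, a self-defined measure could be as simple as a weighted combination of a few standard quantities, here the h-index, the total citations, and the paper count:

```python
# Purely illustrative sketch of a self-defined metric; this is not
# the actual SciMeter implementation, just a weighted combination
# of a few standard quantities for one author's citation record.

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def custom_metric(citations, w_h=1.0, w_cites=0.01, w_papers=0.1):
    """A made-up 'quality' score; the weights encode personal preferences."""
    return (w_h * h_index(citations)
            + w_cites * sum(citations)
            + w_papers * len(citations))

# Hypothetical citation counts per paper for two applicants:
applicants = {
    "A": [120, 45, 30, 12, 8, 3, 0],
    "B": [25, 24, 22, 20, 18, 15, 14, 10],
}

ranked = sorted(applicants, key=lambda k: custom_metric(applicants[k]), reverse=True)
print(ranked)   # the ordering depends entirely on the weights you chose
```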

SciMeter is really not a website, it’s a web interface. It allows you to do your own analyses of publication data. Right now our database contains only papers from arXiv.org. That’s not because we only care about physicists, but because we have to start somewhere! For this reason, I must caution you that it makes no sense to compare the absolute numbers from SciMeter to the absolute numbers from other services, like InSpire or Google. The analysis you can do with our website should be used only for relative comparisons within our interface.

For example, you will almost certainly find that your h-index is lower on SciMeter than on InSpire. Do not panic! This is simply because our citation count is incomplete.

Besides comparing authors with each other and sorting lists according to custom-designed metrics, you can also use our default lists. In addition to the list of all authors, we have pre-defined lists for authors by arXiv category (where we count the main category that an author publishes in), and we have lists for male and female authors (the same gender-id that we also used for our check of Strumia’s results).

The update also now lets you get a ten-year neural-net prediction for the h-index (that was the basis of our recent paper, arxiv version here). You find this feature among the “apps”. We have also made some minor improvements to the algorithm for the keyword clouds. And on the app page you now also find the search for “similar authors”, or for authors who have published on any combination of keywords. You may find this app handy if you are looking for people to invite to conferences or, if you are a science writer, for someone to comment.

SciMeter was developed by Tom Price and Tobias Mistele, and has so far been financed by the Foundational Questions Institute. I also want to thank the various test users who helped us trouble-shoot an early version and contributed significantly to the user-friendliness of the website.

In the video below I briefly show you how the custom metrics work. I hope you enjoy the update!

Friday, May 03, 2019

Graham Farmelo’s interview of Edward Witten. Transcript.

[I’ve meant for a while to try automatic transcription software, and Graham Farmelo’s interview of Edward Witten (mentioned by Peter Woit) seemed a good occasion. I used an app called “Trint” which seems to work okay. But both the software and I have trouble with Farmelo’s British accent and with Witten’s mumbling. I have marked the places that I didn’t understand with [xxx]. Please leave me a comment in case you can figure out what’s being said. Also notify me of any blunders that I might have missed. Thanks!]


GF [00:00:06] A mind of the brilliance of Edward Witten’s comes along in mathematical physics about once every 50 years if we’re lucky. Since the late 1970s he’s been preeminent among the physicists who are trying to understand the underlying order of the universe. Or, as you might say, trying to discover the most fundamental equations of physics. More than that, by studying the mathematical qualities of nature, Witten became remarkably influential in pure mathematics. He is the only physicist ever to have won the coveted Fields Medal, which has much the same stature in mathematics as a Nobel Prize has in physics.

GF [00:00:46] My name is Graham Farmelo, author of “The Universe Speaks in Numbers.” Witten is a central figure in my book and he’s been helpful to me, though he’s a reluctant interviewee, so I was pleased when he agreed to talk with me last August about some aspects of his career and the relationship between mathematics and physics. He was in a relaxed mood, sitting on a sofa in his office at the Institute for Advanced Study in Princeton, wearing his tennis clothes. As usual, he speaks quietly, so you’ll have to listen hard.

GF [00:01:20] He uses quite a few technical terms too. But if you’re not familiar with them I suggest that you just let them wash over you. The key thing is to get a sense of Witten’s thinking about the big picture. He is worth it.

GF [00:01:32] He gives us several illuminating insights into how he became interested in state-of-the-art mathematics while remaining a physicist to his fingertips. I began by asking him if he’d always been interested in mathematics and physics.

EW [00:01:47] When I was a kid I was very interested in astronomy. It was the period of the space race and everybody was interested in space. Then, when I was a little older, I was exposed to calculus by my father. And for a while I was very interested in math.

GF [00:02:02] You said for a while, so did that lapse?

EW [00:02:04] Yes, it did lapse for a few years, and the reason it lapsed, I think, was that after being exposed to calculus at the age of eleven it actually was quite a while before I was shown anything that was really more advanced. So I wasn't really aware that there was much more interesting more advanced math. Probably not the only reason, but certainly one reason that my interest lapsed.

GF [00:02:22] Yeah. Were you ever interested in any other subjects? I mean because you know you came on to study history and things like that. Did that really interest you comparably to math and physics?

EW [00:02:31] I guess there was a period when I imagined doing journalism or history or something, but at about the age of 21 or 22 I realized that that wasn't going to work out well in my case.

GF [00:02:42] After studying modern languages he worked on George McGovern’s ill-fated presidential campaign and even studied economics for one semester before he finally turned to physics.

GF [00:02:53] Apparently he showed up at Princeton University wanting to do a Ph.D. in theoretical physics and they wisely took him on after he made short work of some preliminary exams. Boy did he learn quickly. One of the instructors tasked with teaching him in the lab told me that within three weeks Witten’s questions on the experiments went from basic to brilliant to Nobel level. As a postdoc at Harvard, Witten became acquainted with several of the theorist pioneers of the Standard Model, including Steven Weinberg, Shelly Glashow, Howard Georgi, and Sidney Coleman, who helped interest the young Witten in the mathematics of these new theories.

EW [00:03:33] The physicists I learned from most during those years were definitely Weinberg, Glashow, Georgi, and Coleman. And they were completely different. So Georgi and Glashow were doing model building, basically weak interaction model building, elaborations on the Standard Model. I found it fascinating but it was a little bit hard to find an entree there. If the world had been a little bit different, I might have made my career doing things like they were doing.

GF [00:04:01] Wow. This was the first time I’d heard Witten say that he was at first expecting to be like most other theorists and take his inspiration from the results of experiments building so-called models of the real world. What, I wondered, led him to change direction and become so mathematical.

EW [00:04:19] Let me provide a little background for listeners. Up to and including the time I was a graduate student, for 20, 25 years, there had been constant waves of new discoveries in elementary particle physics: strange particles, muons, hadronic resonances, parity violation, CP violation, scaling in deep inelastic scattering, the charm particle, and I’m forgetting a whole bunch. But that’s enough to give you the idea. So that was over a period of over 20 years. So even after a lot of the big discoveries that was one every three years. Now, if experimental surprises and discoveries had continued like that, which at the time I think is what would have happened because it had been going on for a quarter century, then I would have expected to be involved in model building, or grappling with it, like colleagues such as Georgi and Glashow were doing. Most notably, however, it turned out that this period of constant surprise and turmoil was ending just while I was a graduate student, and therefore later on I had no successful directions.

GF [00:05:20] Do you remember being disappointed by that in any sense?

EW [00:05:23] Of course I was, you never stop being disappointed.

GF [00:05:27] Oh dear, oh it’s a hard life.

GF [00:05:31] You were disappointed by the drying up, so to speak, of the…

EW [00:05:33] There have been important experimental discoveries since then. But the pace has not been quite the same. Although they’ve been very important, they’ve been a little bit more abstract in what they teach us, and definitely they’ve offered fewer opportunities for model building than was the case in the 60s and 70s. I’d like to just tell you a word or two about my interaction with the other physicists. There was Steve Weinberg, and what I remember best from Weinberg is this. He was one of the pioneers of a subject called current algebra, which was an important part of understanding the nuclear force. But he obviously thought most other physicists didn't understand it properly, and I was one of those. So whenever current algebra was mentioned at a seminar or a discussion meeting he would always give a short little speech explaining his understanding of it. In my case, after hearing those speeches eight to ten times [laughter], I finally understood what Steve was telling us.

EW [00:06:28] Then there was Sidney Coleman. First of all, Sidney was the only one who was interested in the strong coupling behavior of quantum field theories, which is what I'd become interested in as a graduate student with encouragement from my advisor David Gross. So he was really the only one I could interact with about that. Others regarded strong coupling as a black box. So, maybe for your listeners, I should explain that if you're a student in physics they teach you what to do when quantum effects are small, but no one tells you what to do when quantum effects are big; there's no general answer. It's a smorgasbord of different methods that work for different problems, and a lot of problems that are intractable. So I'd become interested in that as a student, but I was mostly beating my head against a brick wall because it is usually intractable, and Sidney was the only one of the professors at Harvard interested in such matters. Apart from interacting with him about that, he also exposed me to a number of mathematical topics I wouldn't have known about otherwise, topics that most physicists didn't know about, and certainly I didn't know about, but that eventually were important in my work.
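
[Aside, not part of the interview: a minimal way to make "small versus big quantum effects" concrete. In perturbation theory one computes an observable as a power series in a coupling constant g,

\[ A(g) \approx a_0 + a_1 g + a_2 g^2 + \dots , \]

which is only a useful guide when g is small. At strong coupling, g of order one or larger, the series no longer helps, and, as Witten says, there is no general replacement for it.]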

GF [00:07:27] Yeah, can I ask were you consciously interested in advanced pure math at that time?

EW [00:07:32] Definitely not.

GF [00:07:32] You were not?

EW [00:07:32] No, most definitely not. I got dragged into math gradually because, you see, the Standard Model had been discovered, so the problems in physics were not exactly the same as they had been before. But there were new problems that were opened up by the Standard Model. For one thing, there was new math that came into understanding the Standard Model. Just when I was finishing graduate school, more or less, Polyakov and others introduced the Yang-Mills instanton, which has proved to be important in understanding physics. It's also had a lot of mathematical applications.

GF [00:08:02] You can think of instantons as fleeting events that occur in space and time on the subatomic scale. These events are predicted by the theories of the subatomic world known as gauge theories. A key moment in this story is Witten's first meeting with the great mathematician Michael Atiyah at the Massachusetts Institute of Technology. They would become the leaders of the trend towards a more mathematical approach to our understanding of the world.
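
[Aside, not part of the interview: a sketch of why instantons were such a novelty. In Yang-Mills theory a single instanton has a fixed action set by the coupling g,

\[ S_{\text{instanton}} = \frac{8\pi^2}{g^2}, \]

so the tunneling processes it describes come with a factor of roughly e^{-8\pi^2/g^2}, which vanishes to every order in the usual expansion in powers of g. Effects of this kind are invisible to ordinary perturbation theory, which is one reason they opened a genuinely new window on quantum field theory.]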

EW [00:08:32] So Polyakov and others had discovered the Yang-Mills instanton, and it was important in physics and proved to have many other applications. And then Atiyah was one of the mathematicians who discovered amazing mathematical methods that could be used to solve the instanton equations. He was lecturing about that when he visited Cambridge, I think in the spring of 1977, though I could be off by a few months, and I was extremely interested. And so we talked about it a lot. I probably made more of an effort to understand the math involved than most of the other physicists did. Anyway, this interaction surely led to my learning all kinds of math I'd never heard of before: complex manifolds, sheaf cohomology groups.

GF [00:09:16] This was news to you at that time?

EW [00:09:18] Definitely. So I might tell you at an even more basic level the Atiyah-Singer index theorem had been news to me a few months earlier when I heard about it from Sidney Coleman.

GF [00:09:28] The index theorem, first proved by Michael Atiyah and his friend Isidore Singer, connects two branches of mathematics that had seemed unconnected: calculus, the mathematics of changing quantities, and topology, the study of properties of objects that don't change when they're stretched, twisted, or deformed in some way. Topology is now central to our understanding of fundamental physics.
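
[Aside, not part of the interview: the classic nineteenth-century ancestor of this calculus-topology link is the Gauss-Bonnet theorem. Integrating the curvature K over a closed surface S, a calculus operation, gives a purely topological number, the Euler characteristic:

\[ \int_S K \, dA = 2\pi \, \chi(S), \]

with \chi = 2 for a sphere and \chi = 0 for a torus, no matter how the surface is stretched or bent. The Atiyah-Singer theorem is a vast generalization of statements of this kind.]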

EW [00:09:51] Like other physics graduate students of the period, I had no inkling of any 20th century math, really. So I'd never heard of the names Atiyah and Singer, or of the concept of the index, or of the index theorem, until Albert Schwarz showed that it was relevant to understanding instantons. And even then that paper didn't make an immediate splash. If Coleman hadn't pointed it out, I'm not sure how long it would have been before I knew about it. And then there was progress in understanding the instanton equations by Atiyah among others; the first actually was Richard Ward, Penrose's doctoral student. So I got interested in that, but I was interested, in a sense, in a narrow way, which is: what good would it be in physics? And I learned the math, or some of the math, that they were using. But I was a little skeptical about the applicability to physics, and I wasn't really wrong, because the original program of Polyakov didn't quite work out. The details of the instanton equations that were beautifully elucidated by the mathematicians were not, in practice, that helpful for things you can actually do as a physicist. So, to sort of summarize what happened in the long run, Atiyah's work and that of his colleagues made me learn a lot of math I'd never heard of before, which turned out to be very important later, but not per se for the original reasons.

GF [00:11:10] When did you start to become convinced that math was really going to be interesting?

EW [00:11:14] Well, that gradually happened in the 1980s, I guess. So, for example, one early episode, which was in 1981 or '82: I was trying to understand the properties of what's called the vacuum, the quantum ground state, in supersymmetric field theories, and it really had some behavior that was hard to explain using standard physics ideas. Since I couldn't understand it, I kept looking at simpler and simpler models, and they all had the same puzzle. So finally I got to what seemed like the simplest possible model in which you could ask the question, and it still had a puzzling behavior. But at a certain point, I think when I was in a swimming pool in Aspen, Colorado, I remembered that Raoul Bott, and actually Atiyah as well, had given some lectures to physicists a couple of years earlier in Cargèse, where they had tried to explain something called Morse theory to us. I'm sure there are, like me, many other physicists who have never heard of Morse theory or are not familiar with any of the questions it addresses.

GF [00:12:11] Would you like to say what Morse theory is roughly speaking?

EW [00:12:14] Well, if you've got a rubber ball floating in space, it's got a lowest point, where the elevation is lowest, and it's got a highest point, where the elevation is highest. So it's got a maximum and a minimum. If you have a more complicated surface, like for example a rubber inner tube, it'll have saddle points of the height function as well as a maximum and minimum. And Morse theory relates the maxima and minima and the saddle points of a function, such as the height function, to the topology of the surface or topological manifold on which the function is defined.
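
[Aside, not part of the interview: to put one formula to the inner-tube example. For a generic height function on a closed surface, Morse theory implies, among other things, that

\[ \#\,\text{minima} \;-\; \#\,\text{saddles} \;+\; \#\,\text{maxima} \;=\; \chi , \]

the Euler characteristic of the surface. An upright ball has one minimum, no saddles and one maximum, giving \chi = 2; an upright inner tube has one minimum, two saddles and one maximum, giving \chi = 0, and that difference is purely topological.]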

GF [00:12:48] Did you ever see that paper by Maxwell on that, the one he spoke about, I think, in 1870?

EW [00:12:52] I’ve not read that.

GF [00:12:53] Oh, I'll show it to you later. It's “On Hills and Dales”; he gave it in Liverpool, a very thinly attended talk, erm, anyway.

EW [00:13:01] So was he in fact describing the two-dimensional version of Morse theory?

GF [00:13:04] I can't go into detail, but the historians of Morse theory often refer to that. At a public meeting, incidentally, in Liverpool.

EW [00:13:13] Actually, now you mention it, I had heard that the title of the Hills and Dales talk by Maxwell had something to do with the beginnings of topology. And topology was just barely beginning in roughly that period.

GF [00:13:23] But this was useful in physics. Your Aspen swimming pool revelation...

EW [00:13:28] Well, it shed a little bit of light on the vacuum state in supersymmetric quantum theories. So anyway, I developed that further. You know, at first that seemed exceptional, but eventually there were too many of these exceptions to completely ignore.

GF [00:13:42] Am I right in saying, not to put words into your mouth, but it was the advent of string theory, post Michael Green and John Schwarz, where these things started going front and center, is that fair?

EW [00:13:50] After... Following the first superstring revolution, as people call it, which came to fruition in 1984 with the work of Green and Schwarz on the anomalies, the sort of math that Atiyah and others had used for the instanton equations was suddenly actually useful. Because to understand string theory, complex manifolds, index theory, sheaf cohomology groups, all those funny things were actually useful in doing basic things like constructing models of the elementary particles in string theory. I should give a slightly better explanation. In physics there are the forces that we see for the elementary particles, which means basically everything except gravity. Then there's gravity, which is so weak that we only see it for macroscopic masses like the earth or the sun. Now, we describe gravity by Einstein's theory, and then we describe the rest of it by quantum field theory. It's difficult to combine the two together. Before 1984 you couldn't even make halfway reasonable models for elementary particles that included all the forces together with gravity. The advance that Green and Schwarz made with anomaly cancellation in 1984 made that possible. But to make such models you needed to use a lot of the math that physicists had not used previously but which was introduced by Atiyah and others when they solved the instanton equations: you had to use complex manifolds, sheaf cohomology groups and things that were totally alien to the education of a physics graduate student back in the days when I'd been a student. So those things were useful even at a basic level in making a model of the elementary particles with gravity. And if you wanted to understand it more deeply, you ended up using still more math. After string theory was developed enough that you could use it in an interesting way to make models of particle physics, it was clear that a lot of previously unfamiliar math was important. I speak loosely when I say previously unfamiliar, because obviously it was familiar to some people: first of all, to the mathematicians; secondly, in some areas, Penrose had used some of it in his twistor theory. But broadly speaking it was unfamiliar to most physicists.

GF [00:15:46] So mathematics has turned out to be very, very important for physics, and physics very important for mathematics; they're working harmoniously alongside each other. You can go back to Leibniz, who used to talk about the pre-established harmony between math and physics; that was one of Einstein's favorite phrases. Is that something you regard as a fact of life, or something that can possibly be explained one day, or will never be explained? Do you have any comment at all on that relationship?

EW [00:16:09] Well, the intimate tie between math and physics seems to be a fact of life. I can't imagine what it would mean to explain it. The world only seems to be based on theories that involve interesting math and a lot of interesting math is at least partly inspired by the role that it plays in physics. Not all of course.

GF [00:16:25] But does it inspire you when you see a piece of math that's very relevant to physics, and vice versa, when you're helping mathematicians? Does that motivate you in some way to think you're on the right track?

EW [00:16:35] Well, when something turns out to be beautiful, that does encourage you to believe that it's on the right track.

GF [00:16:39] Classic Dirac. But he took it, as he put it, almost to a religion. But I sense you are a little bit more skeptical, if that's the right word, or hard-nosed about it, I don't know.

EW [00:16:51] Having discovered the Dirac equation, Dirac was entitled to take that view to extremes, to put it that way.

GF [00:16:58] Witten has long been a leading pioneer of the string framework which seeks to give a unified account of all the fundamental forces based on quantum mechanics and special relativity. It describes the basic entities of nature in terms of tiny pieces of string.

GF [00:17:14] Go back to string theory. Do you see that as one among several candidates, or the preeminent candidate, or what? I mean, what do you see as the status of that framework in the landscape of mathematical physics?

EW [00:17:24] I'd say that string/M-theory is the only really interesting direction we have for going beyond the established framework of physics, by which I mean quantum field theory at the quantum level and classical general relativity at the macroscopic scale. So where we've made progress, that's been in the string/M-theory framework, where a lot of interesting things have been discovered. I'd say there are a lot of interesting things we don't understand at all.

GF [00:17:48] But you've never been tempted down the other routes? The other options are not...

EW [00:17:52] I’m not even sure what you would mean by other routes.

GF [00:17:54] Loop quantum gravity?

EW [00:17:56] Those are just words. There aren’t any other routes.

GF [00:17:58] Okay, all right, fair enough.

GF [00:18:01]  So there we have it. The preternaturally cautious Witten says that if we want to discover a unified theory of all the fundamental forces, string theory is the only interesting way forward that’s arisen.

GF [00:18:17] Where we are now strikes me as being quite an unusual time in particle physics, because so many of us were looking forward to the Large Hadron Collider, huge energy available, and finding the Higgs boson and maybe supersymmetry. And yet it seems that we have gotten the Higgs particle just as we were hoping and expecting, but nothing else that's really stimulating. What are your views on where we are now?

EW [00:18:39] My generation grew up with a belief, a very, very strong belief, which by the way was drummed into us by Steven Weinberg and by others: that when physics reached the energy scale at which you can understand the weak interactions, you would not only discover the mechanism of electroweak symmetry breaking, but you'd learn what fixes its energy scale as being relatively low compared to the scale of gravity. That's what ultimately makes gravity so weak in ordinary terms. So it came as a big surprise that we reached the energy scale to study the W and the Z and even the Higgs particle without finding a bigger mechanism behind it. That's an extremely shocking development in the context of the thinking that I grew up with.
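
[Aside, not part of the interview: rough numbers for the mismatch Witten is describing. The electroweak scale is of order a hundred GeV, while the Planck scale, the natural energy scale of gravity, is of order 10^19 GeV, so

\[ \frac{M_{\text{weak}}}{M_{\text{Planck}}} \sim \frac{10^{2}\ \text{GeV}}{10^{19}\ \text{GeV}} \sim 10^{-17}. \]

The expectation was that whatever new physics fixes this tiny ratio would show up near the weak scale itself, which is what the LHC has so far not delivered.]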

EW [00:19:22] There is another shock, which also occurred during that 40 year period and which possibly should be compared with it. This is the discovery of the acceleration of the expansion of the universe. For decades physicists assumed that, because of the gravitational attraction of matter, the expansion of the universe would be slowing down, and tried to measure that. It turned out that the expansion is actually speeding up. We don't know this for sure, but it seems quite likely that this results from the effects of Einstein's cosmological constant, which is incredibly small but non-zero. The two things, the very, very small but non-zero cosmological constant, and the scale of the weak interactions, that is, the scale of elementary particle masses, which in human terms can seem like a lot of energy but is very small compared to other energies in physics: the two puzzles are analogous, and they're both extremely bothersome. These two puzzles, although primarily the one about gravity, which was discovered first, are perhaps the main motivation for discussions of a cosmic landscape of vacua. That is an idea that used to make me extremely uncomfortable and unhappy, I guess because of the challenge it poses to trying to understand the universe, and the possibly unfortunate implications for our distant descendants tens of billions of years from now. I guess I ultimately made my peace with it, recognizing that the universe hadn't been created for our convenience.
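
[Aside, not part of the interview: to indicate just how small "incredibly small but non-zero" is. Expressed in the natural units of quantum gravity, the observed cosmological constant comes out at roughly

\[ \Lambda \sim 10^{-122} \ \ \text{(in Planck units)}, \]

and explaining why it is that tiny, yet not exactly zero, is the puzzle Witten is referring to.]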

GF [00:20:43] So you've come to terms with it.

EW [00:20:45] I've come to terms with the landscape idea, in the sense of not being upset about it, as I was for many years.

GF [00:20:49] Really upset?

EW [00:20:50] I still would prefer to have a different explanation but it doesn't upset me personally to the extent it used to.

GF [00:20:56] So, just to conclude, what would you say the principal challenge is now for people looking at fundamental physics?

EW [00:21:01] I think it's quite possible that new observations, either in astronomy or at accelerators, will turn up new and more down-to-earth challenges. But with what we have now, and also with my own personal inclinations, it's hard to avoid answering in terms of cosmic challenges. I actually believe that string/M-theory is on the right track toward a deeper explanation. But at a very fundamental level it's not well understood, and I'm not even confident that we have a good concept of what sort of thing is missing or where to find it. The reason I'm not is that in hindsight it's clear that the view we might have given in the 1980s of what was missing was too narrow. Instead of discovering what we thought was missing, we broadened the picture in the 90s in unexpected directions. And having lived through that, I feel it might happen again.

EW [00:21:49] To give you a slightly less cosmic answer: if you ask me what I think is the most likely direction for another major theoretical upheaval, like the ones that happened in the 80s and then again in the 90s, I've come to believe that the whole it-from-qubit business, the relation between geometry and entanglement, is the most interesting direction.
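
[Aside, not part of the interview: one concrete formula often quoted in this it-from-qubit program is the Ryu-Takayanagi relation, which in certain holographic theories equates the entanglement entropy of a region with the area of a minimal surface in a higher-dimensional geometry,

\[ S_{\text{entanglement}} = \frac{\text{Area}}{4 G_N} \quad (\hbar = c = k_B = 1), \]

echoing the Bekenstein-Hawking formula for black hole entropy. Results of this kind are what is meant by a relation between geometry and entanglement.]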

GF [00:22:12] "It from bit" was a phrase coined by the late American theoretician John Wheeler, who guessed that the stuff of nature, the "it", might ultimately be built from bits of information. Perhaps the theory of information is showing us the best way forward in fundamental physics. Witten is usually wary of making strong pronouncements about the future of his subject, so I was struck by his interest in this line of inquiry, now extremely popular.

EW [00:22:39] I feel that if, in my active career, there will be another real upheaval, that's where it's most likely to be coming from [xxx]

EW [00:22:47] I had a sense, both in the early 80s and in the early 90s, a couple of years in advance of the big upheavals, of where they were most likely to come from, and those two times it did turn out to be right. Then for a long, long time I had no idea where another upheaval might come from. But in the last few years I've become convinced that it's most likely to be the it-from-qubit stuff, of which I have not been a pioneer; I was not one of the first to reach the conclusion, or suspicion, that I'm telling you right now. But anyway it's the view I've come to.

GF [00:23:20] There's a famous book about the night thoughts of a quantum physicist. Are there night thoughts of a string theorist, where you have a wonderful theory that's developing but, you know, you're unable to test it? Does that ever bother you?

EW [00:23:31] Of course it bothers us, but we have to live with our existential condition. But let's backtrack 34 years. So in the early 80s there were a lot of hints that something important was happening in string theory, but once Green and Schwarz discovered the anomaly cancellation and it became possible to make models of elementary particle physics unified with gravity, from then on I thought the direction was clear. But some senior physicists rejected it completely, on the grounds that it would supposedly be untestable, or even, as some cracked, that it would be too hard to understand. My view at the time was that when we reached the energies of the W, Z and the Higgs particle we'd get all kinds of fantastic new clues.

EW [00:24:11] So I found it very, very surprising that any colleagues would be so convinced that you wouldn't be able to get important clues that would shed light on the validity of a fundamental new theory that might in fact be valid. Now, if you analyze that 34 years later, I'm tempted to say we were both a little bit wrong. The scale of clues that I thought would materialize from accelerators has not come; in fact, the most important clue possibly is that we've confirmed the Standard Model without getting what we fully expected would come with it. And as I told you earlier, that might be a clue concerning the landscape. I think the flaw in the thinking of the critics, though, is that, while it's a shame that the period of incredible turmoil and constant experiment and discovery that existed until roughly when I started graduate school hasn't continued, I think that the progress which has been made in physics since 1984 is much greater than it would have been if the naysayers had been heeded and string theory hadn't been done in that period.

GF [00:25:11] And it's had this bonus of benefiting mathematics as well.

EW [00:25:14] Mathematics, and by now even other areas of physics, because, for example, new ideas about black hole thermodynamics have influenced areas of condensed metaphysics*, even the study of quantum phase transitions, quantum chaos and really other areas.

GF [00:25:31] Well, let's hope we all live to see some revolutionary triumph that was completely unexpected; that's the best kind of all. Edward, thank you very much indeed.

EW [00:25:38] Sure thing.

GF [00:25:43] I'm always struck by the precision with which Edward expresses himself and by his avoidance of fuzzy philosophical talk. He's plainly fascinated by the closeness of the relationship between fundamental physics and pure mathematics. He isn't prepared to go further than to say that their relationship is a fact of life. Yet no one has done more to demonstrate that not only is mathematics unreasonably effective in physics, physics is unreasonably effective in mathematics.

GF [00:26:15] This, Witten said, makes sense only if our modern theories are on the right track. One last point. Amazingly, Witten is sometimes underestimated by physicists who characterize him as a mathematician, someone who has only a passing interest in physics. This is quite wrong. When I talked with the great theoretician Steven Weinberg, he told me of his awe at Witten's physical intuition, and elsewhere he said that Witten has "more mathematical muscles in his head than I like to think about." You can find out more about Witten and his work in my book "The Universe Speaks in Numbers".


--

* Condensed matter physics. I am sure he says condensed matter physics. But really I think condensed metaphysics fits better.