I recently attended the workshop “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers, people who weren’t discussing physics but the philosophical underpinnings of physicists’ arguments? Or, as it were, the absence of such underpinnings.
The straightforward answer is that they invited me – or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay on longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.
So I sat there, in one of those awkwardly quiet rooms where everyone will hear your stomach gurgle, trying to will my stomach not to gurgle and to instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield had been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.
The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality, which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus: you can’t treat causally ordered events as if they were randomly distributed. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.
In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.
The next day started with two talks about analogue systems, one of which I missed because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect my thoughts about it.
Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best-known example is sound waves in certain kinds of fluids, which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme for transferring knowledge gained from observational tests of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.
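To make the black-hole example concrete – this is the standard textbook result due to Unruh, not something specific to the talks – a fluid that flows faster than its local sound speed develops a “sonic horizon,” and phonons there are predicted to come out thermal, with an analogue Hawking temperature set by the velocity gradient at the horizon:

```latex
% Analogue Hawking temperature at a sonic horizon (Unruh's result):
% v(x) is the flow speed, c_s the local speed of sound; the horizon
% sits where the flow turns supersonic, v = c_s.
T_H \;=\; \frac{\hbar}{2\pi k_B}\,
\left|\frac{\partial\,(v - c_s)}{\partial x}\right|_{\,v = c_s}
```

The point of the mapping is that every quantity on the gravity side (horizon, surface gravity, Hawking temperature) has a fluid-dynamical counterpart you can in principle measure in the lab.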
Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to those of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t speculation – it’s what physicists already do with quantum simulations: they specifically design one system to learn how another, entirely different system will behave. This is usually done to circumvent mathematically intractable problems, but extrapolated, it might just make theories and theorists superfluous.
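A toy example of my own (not from the workshop) of what “all you need is the map” means: a mass on a spring and an RLC circuit are physically very different systems, but they obey the same equation of motion, so a dictionary between their observables lets a measurement on one serve as a prediction for the other, with no further theory in between.

```python
# Sketch of an analogue-system dictionary (illustrative, not from the talks):
#   mechanical:  m x'' + b x' + k x = 0   (mass, damping, spring constant)
#   electrical:  L q'' + R q' + q/C = 0   (inductance, resistance, capacitance)
#   dictionary:  x <-> q,  m <-> L,  b <-> R,  k <-> 1/C

def simulate(m, b, k, x0, v0, dt=1e-4, steps=50000):
    """Semi-implicit Euler integration of m x'' + b x' + k x = 0."""
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = -(b * v + k * x) / m   # acceleration from the equation of motion
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

# "Measure" the electrical system: charge q(t) on the capacitor...
L_, R_, C_ = 2.0, 0.3, 0.25
charge = simulate(m=L_, b=R_, k=1.0 / C_, x0=1.0, v0=0.0)

# ...and translate: the dictionary says the mechanical system with
# m = L, b = R, k = 1/C traces out the very same trajectory.
displacement = simulate(m=2.0, b=0.3, k=4.0, x0=1.0, v0=0.0)

assert all(abs(q - x) < 1e-12 for q, x in zip(charge, displacement))
print("analogue prediction matches direct simulation")
```

Quantum simulations do the same thing one level up: engineer a controllable system whose Hamiltonian maps onto that of an intractable one, then read the answer off the analogue.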
Then followed a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have now, for the third time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes towards physics beyond the standard model. The survey results, however, won’t be published until January, so at present I can’t tell you more than that. But once the paper is available you’ll read about it on this blog.
The next talk was by Radin Dardashti, who warned us ahead of time that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to QFTs).
I think it’s a brilliant idea because there are many things that we know about theories that aren’t about any particular theory, i.e., we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.
The final session on Tuesday afternoon was the most physicsy one.
My own talk, about the role of arguments from naturalness, was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete – that is, it need not be a consistent theory up to arbitrarily high energies.
It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?
This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.
I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.
The last talk was by Michael Krämer, about recent developments in modelling particle dark matter. In astrophysics – as in particle physics – the trend is to move away from top-down models and work with slimmer “simplified” models. I think it’s a good trend, because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.
I’m not sure whether a few days with a group of philosophers have improved my reasoning – you be the judge. But the workshop helped me see why I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.
I guess I’ll be back.