Thursday, January 22, 2015

Is philosophy bad science?

In reaction to my essay “Does the scientific method need revision?” some philosophers at Daily Nous are discussing what I might have meant in a thread titled “Philosophy as Bad Science?”:
“[S]he raises concerns about physicists being led astray by philosophers (Richard Dawid is mentioned as an alleged culprit [...]) into thinking that observation and testability through experimentation can be dispensed with. According to her, it may be alright for mathematicians and philosophers to pontificate about the structure of the universe without experimentation, but that, she says, is not what scientists should be doing.”
The internet has a funny way of polarizing opinions. I am not happy about this, so some clarifications.

First, I didn’t say and didn’t mean that philosophy is “bad science,” I said it’s not science. I am using the word “science” here as it’s commonly used in English where (unlike in German) science refers to study subjects that describe observations, in the broad sense.

Neither am I saying that philosophy is something physicists shouldn’t be doing at all, certainly not. Philosophy, as well as the history and sociology of science, can be very helpful for the practicing physicist to put their work in perspective. Much of what is today subject of physics was anticipated by philosophers thousands of years ago, such as the question whether nature is fundamentally discrete or continuous.

Scientists, though, should first and foremost do science, i.e., their work should at least aim at describing observations.

Physicists today by and large don’t pay much attention to philosophy. In most fields that doesn’t matter much, but the closer research is to fundamental questions, the more philosophy comes into play. Presently that is mostly in quantum foundations, cosmology and quantum gravity (including string theory), and beyond-the-standard-model physics that relies on arguments of naturalness, simplicity or elegance.

Physicists are not “led astray by philosophers” because they don’t actually care what philosophers say. What is happening instead is that some physicists — well, make that string-theorists — are now using Richard Dawid’s arguments to justify continuing what they’re doing. That’s okay, I also think philosophers are better philosophers if they agree with what I’ve been saying all along.

I have no particular problem with string theorists, most of whom today don’t actually do string theory any more; they do AdS/CFT. Which is fine by me, because much of the appeal of the gauge-gravity duality is its potential for phenomenological applications. (Then the problem is that they’re all doing the same thing, but I will complain about this another time.) String theory takes most of the heat simply because there are many string theorists and everybody has heard of it.

Just to be clear, when I say “phenomenology” I mean mathematical models describing observations. Phenomenology is what connects theory with experiment. The problem with today’s research communities is that the gap between theory and experiment is constantly widening, and funding agencies have let it happen. With the gap widening, theory becomes increasingly more mathematics and/or philosophy and increasingly less science. How wide a gap is too wide? The point I am complaining about is that the gap has become too wide. We lack balance between theory disconnected from observation and phenomenology. Without phenomenology to match a theory to observation, the theory isn’t science.

Studying mathematical structures can be very fruitful for physics, sure. I understand that it takes time to develop the mathematics of a theory until it can be connected to observations, and I don’t think it makes much sense setting physicists a deadline by which insights must have appeared. But problems arise if research areas in physics which are purely devoted to mathematics, or are all tangled up in philosophy, become so self-supportive that they stop even trying to make contact to observation.

I don’t know how often I have talked to young postdocs in quantum gravity who do not show the slightest intention to describe observation. The more senior people have at least learned the lip service to be paid whenever funding is at stake, but it is pretty obvious that they don’t actually want to bother with observations. The economists have a very useful expression: “revealed preferences.” It means, essentially, don’t listen to what they say, look at what they do. Yes, they all say phenomenology is important, but nobody works on it. I am sure you can name off the top of your head a dozen or so people working on quantum gravity, the theory. How many can you name who work on quantum gravity phenomenology? How many of these have tenure? Right. Why hasn’t there been any progress in quantum gravity? Because you can’t develop a theory without contact to observation.

It is really a demarcation issue for me. I don’t mind if somebody wants to do mathematical physics or philosophy of science. I just don’t want them to pretend they’re doing physics. This is why I like the proposal put forward by George Ellis and Joe Silk in their Nature Comment:
“In the meantime, journal editors and publishers could assign speculative work to other research categories — such as mathematical rather than physical cosmology — according to its potential testability. And the domination of some physics departments and institutes by such activities could be rethought.”

Tuesday, January 20, 2015

Does the Scientific Method need Revision?

Theoretical physics has problems. That’s nothing new — if it weren’t so, then we’d have nothing left to do. But especially in high energy physics and quantum gravity, progress has basically stalled since the development of the standard model in the mid-1970s. Yes, we’ve discovered a new particle every now and then. Yes, we’ve collected loads of data. But the fundamental constituents of our theories, quantum field theory and Riemannian geometry, haven’t changed since that time.

Everybody has their own favorite explanation for why this is so and what can be done about it. One major factor is certainly that the low-hanging fruits have been picked, and progress slows as we have to climb farther up the tree. Today, we have to invest billions of dollars into experiments that test new ranges of parameter space: we build colliders, shoot telescopes into orbit, have superclusters flip their flops. The days in which history was made by watching your bathtub spill over are gone.

Another factor is arguably that the questions are getting technically harder while our brains haven’t changed all that much. Yes, now we have computers to help us, but these are, at least for now, chewing and digesting the food we feed them, not cooking their own.

Taken together, this means that return on investment must slow down as we learn more about nature. Not so surprising.

Still, it is a frustrating situation, and it makes you wonder whether there are other reasons for the lack of progress, reasons that we can do something about. Especially in a time when we really need a game changer: some breakthrough technology, clean energy, that warp drive, a transporter! Anything to get us off the road to Facebook, sorry, I meant self-destruction.

It is our lack of understanding of space, time, matter, and their quantum behavior that prevents us from better using what nature has given us. And it is this frustration that has led people inside and outside the community to argue that we’re doing something wrong, that the social dynamics in the field are troubled, that we’ve lost our path, that we are not making progress because we keep working on unscientific theories.

Is that so?

It’s not like we haven’t tried to make headway on finding the quantum nature of space and time. The arxiv categories hep-th and gr-qc are full every day with supposedly new ideas. But so far, not a single one of the existing approaches towards quantum gravity has any evidence speaking for it.

To me the reason this has happened is obvious: We haven’t paid enough attention to experimentally testing quantum gravity. One cannot develop a scientific theory without experimental input. It’s never happened before and it will never happen. Without data, a theory isn’t science. Without experimental test, quantum gravity isn’t physics.

If you think that more attention is now being paid to quantum gravity phenomenology, you are mistaken. Yes, I’ve heard it too, the lip service paid by people who want to keep on dwelling on their fantasies. But the reality is that there is no funding for quantum gravity phenomenology, and there are no jobs either. On the rare occasions that I have seen quantum gravity phenomenology mentioned in a job posting, the position was filled with somebody working on the theory, I am tempted to say, working on mathematics rather than physics.

It is beyond me that funding agencies invest money into developing a theory of quantum gravity, but not into its experimental test. Yes, experimental tests of quantum gravity are far-fetched. But if you think that you can’t test it, you shouldn’t put money into the theory either. And yes, that’s a community problem, because funding agencies rely on experts’ opinions. And so the circle closes.

To make matters worse, philosopher Richard Dawid has recently argued that it is possible to assess the promise of a theory without any experimental test whatsoever, and that physicists should thus revise the scientific method by taking into account what he calls “non-empirical facts”. By this he seems to mean what we often loosely refer to as internal consistency: theoretical physics is math heavy and thus has a very stringent logic. This allows one to deduce a lot of, often surprising, consequences from very few assumptions. Clearly, these must be taken into account when assessing the usefulness or range-of-validity of a theory, and they are being taken into account. But the consequences are irrelevant to the use of the theory unless some aspects of them are observable, because what makes up the use of a scientific theory is its power to describe nature.

Dawid may be confused on this matter because physicists do, in practice, use empirical facts that we do not explicitly collect data on. For example, we discard theories that have an unstable vacuum, singularities, or complex-valued observables. Not because this is an internal inconsistency — it is not. You can deal with this mathematically just fine. We discard these because we have never observed any of that. We discard them because we don’t think they’ll describe what we see. This is not a non-empirical assessment.

A huge problem with the lack of empirical facts is that theories remain axiomatically underconstrained. In practice, physicists don’t always start with a set of axioms, but in principle this could be done. If you do not have any axioms, you have no theory, so you need to select some. The whole point of physics is to select axioms to construct a theory that describes observation. This already tells you that the idea of a theory of everything will inevitably lead to what has now been called the “multiverse”. It is just a consequence of stripping away axioms until the theory becomes ambiguous.

Somewhere along the line many physicists have come to believe that it must be possible to formulate a theory without observational input, based on pure logic and some sense of aesthetics. They must believe their brains have a mystical connection to the universe and pure power of thought will tell them the laws of nature. But the only logical requirement to choose axioms for a theory is that the axioms not be in conflict with each other. You can thus never arrive at a theory that describes our universe without taking into account observations, period. The attempt to reduce axioms too much just leads to a whole “multiverse” of predictions, most of which don’t describe anything we will ever see.

(The only other option is to just use all of mathematics, as Tegmark argues. You might like or not like that; at least it’s logically coherent. But that’s a different story and shall be told another time.)

Now if you have a theory that contains more than one universe, you can still try to find out how likely it is that we find ourselves in a universe just like ours. The multiverse-defenders therefore also argue for a modification of the scientific method, one that takes into account probabilistic predictions. But we have nothing to gain from that. Calculating a probability in the multiverse is just another way of adding an axiom, in this case for the probability distribution. Nothing wrong with this, but you don’t have to change the scientific method to accommodate it.

In a Nature comment last month, George Ellis and Joe Silk argue that the trend of physicists to pursue untestable theories is worrisome. I agree with this, though I would have said the worrisome part is that physicists do not care enough about the testability — and apparently don’t need to care because they are getting published and paid regardless.

See, in practice the origin of the problem is senior researchers not teaching their students that physics is all about describing nature. Instead, the students are taught by example that you can publish and make a living from outright bizarre speculations as long as you wrap them into enough math. I cringe every time a string theorist starts talking about beauty and elegance. Whatever made them think that the human sense for beauty has any relevance for the fundamental laws of nature?

The scientific method is often quoted as a circle of formulating and testing of hypotheses, but I find this misleading. There isn’t any one scientific method. The only thing that matters is that you honestly assess the use of a theory to describe nature. If it’s useful, keep it. If not, try something else. This method doesn’t have to be changed, it has to be more consistently applied. You can’t assess the use of a scientific theory without comparing it to observation.

A theory might have other uses than describing nature. It might be pretty, artistic even. It might be thought-provoking. Yes, it might be beautiful and elegant. It might be too good to be true, it might be forever promising. If that’s what you are looking for that’s all fine by me. I am not arguing that these theories should not be pursued. Call them mathematics, art, or philosophy, but if they don’t describe nature don’t call them science.

This post first appeared Dec 17 on Starts With a Bang.

Thursday, January 15, 2015

I'm a little funny

What I do in the library when I have a bad hair day ;)

The shirt was a Christmas present from my mother. I happened to wear it that day and then thought it fit well enough. It's too large for me though; apparently they don't cater to physicists in XS.

My voice sounds like sinus infection because sinus infection, sorry about that.

Wednesday, January 14, 2015

How to write your first scientific paper

The year is young and the arxiv numbers are now a digit longer, so there is much space for you to submit your groundbreaking new work. If it wasn't for the writing, I know.

I recently had to compile a publication list with citation counts for a grant proposal, and I was shocked when Inspire informed me that I have 67 papers, most of which did indeed get published at some point. I'm getting old, but I'm still not wise, so to cheer me up I've decided at least I'm now qualified to give you some advice on how to do it.

First advice: Take it seriously. Science isn't science unless you communicate your results to other people. You don't just write papers because you need some items on your publication list or your project report, but to tell your colleagues what you have been doing and what the results are. You will have to convince them to spend some time of their life trying to retrace your thoughts, and you should make this as pleasant for them as possible.

Second advice: When in doubt, ask Google. There are many great advice pages online; for example, this site from Writing@CSU explains the most common paper structure and what each section should contain. Nature Education covers the same, but also gives some advice if English is not your native language. Inside Higher Ed has some general advice on how to organize your writing projects.

I'll not even try to compete with these advice pages, I just want to add some things I've learned, some of which are specific to theoretical physics.

If you are a student, it is highly unlikely that you will write your first paper alone. Most likely you will write it together with your supervisor and possibly some other people. This is how most of us learn to write papers. Especially the structure and the general writing style are often handed down rather than created from scratch. Still, when the time comes to do it all on your own, questions crop up that previously didn't even occur to you.

Before you start writing

Ask yourself who is your intended audience. Are you writing to a small and very specialized community, or do you want your paper to be accessible to as many people as possible? Trying to increase your intended audience is not always a good idea, because the more people you want to make the paper accessible to, the more you will have to explain, which is annoying for the specialists.

The audience for which your paper is interesting depends greatly on the content. I would suggest that you think about what previous knowledge you assume the reader brings, and what not. Once you've picked a level, stick with it. Do not try to mix a popular science description with a technical elaboration. If you want to do both, better do this separately.

Then, find a good order in which to present your work. This isn't necessarily the order in which you did it. I have an unfortunate habit of guessing solutions and only later justifying these guesses, but I try to avoid doing this in my papers.

The Title

The title should tell the reader what the paper is about. Avoid phrases like "Some thoughts on" or "Selected topics in"; these just tell the reader that not even you know what the paper is about. Never use abbreviations in the title, unless you are referring to an acronym of, say, an experiment or a code. Yes, just spell it out. If you don't see why, google that abbreviation. You will almost certainly find that it can mean five different things. Web search is word-based, so be specific. Exceptions exist of course. AdS/CFT, for example, is so specific that you can use it without worries.

Keep in mind that you want to make this as easy for your readers as possible, so don't be cryptic when it's unnecessary.

There is some culture in theoretical physics of coming up with witty titles (see my stupid title list), but if you're still working on being taken seriously, I recommend staying clear of "witty" and instead going for "interesting".

The Abstract

The abstract is your major selling point and the most difficult part of the paper. This is always the last part of the paper that I write. The abstract should explain which question you have addressed, why that is interesting, and what you have found, without going much into detail. Do not introduce new terminology or parameters in the abstract. Do not use citations in the abstract and do not use abbreviations. Instead, do make sure the most important keywords appear. Otherwise nobody will read your paper.

Time to decide which scientific writing style you find least awkward. Is it referring to yourself as "we" or "one"? I don't mind reading papers in the first person singular, but this arguably isn't presently the standard. If you're not senior enough to be comfortable with sticking out, I suggest you go with "we". It's easier than "one" and almost everybody does it.

PACS, MSC, Keywords

Almost all journals ask for a PACS or MSC classification and for keywords, so you might as well look them up when you're writing the paper. Be careful with the keywords. Do not tag your paper as what you wish it was, but as what it really is, otherwise you will annoy your readership, not to mention your referees who will be chosen based on your tagging. I frequently get papers submitted as "phenomenology" that have no phenomenology in them whatsoever. In some cases it has been pretty obvious that the authors didn't even know what the word means.

The Introduction

The introduction is the place to put your work into context and to explain your motivation for doing the work. Do not abuse the introduction to write a review of the field and do not oversell what you are doing, keep this for the grant proposals. If there is a review, refer to the review. If not, list the works most relevant to understand your paper. Do not attempt to list all work on the subject, unless it's a really small research area. Keep in mind what I told you about your audience. They weren't looking for a review.

Yes, this is the place to cite all your friends and your own papers, but be smart about it and don't overdo it, it doesn't look good. Excessive self-cites are a hallmark of crackpottery and desperation. They can also be removed from your citation count with one click. The introduction often ends with a layout of the sections to come and notations or abbreviations used.

Try to avoid reusing introductions from yourself, and certainly from other people. It doesn't look good if your paper gets marked as having a text overlap with some other paper. If it's just too tempting, I suggest you read whatever introduction you like, then put it away, and rewrite the text roughly as you recall it. Do not try to copy the text and rearrange the sentences, it doesn't work.

Methods, Techniques, Background

This is the place to explain what you're working with, and to remind the reader of the relevant equations. Make sure to introduce all parameters and variables. Never refer to an equation only by name if you can write it down instead. Make this easy for your readers and don't expect them to go elsewhere to convert the mentioned equation into your notation.

If your paper is heavy on equations, you will probably find yourself repeating phrases like "then we find", "so we get", "now we obtain", etc. Don't worry, nobody expects you to be lyrical here. In fact, I find myself often not even noticing these phrases anymore.

Main Part

Will probably contain your central analysis, whether analytical or numerical. If possible, try to include some simplified cases and discuss limits of your calculation, because this can greatly enhance the accessibility. If you have very long calculations that are not particularly insightful and that you do not need in other places, consider exporting them into an appendix or supplementary material (expansions of special functions and so on).

Results
I find it helpful if the results are separate from the main part, because then it's easier at first reading to skip the details. But sometimes this doesn't make sense, because the results are basically a single number, or because you have led through a proof and the main part is the result. So don't worry if you don't have a separate section for this. However, if the results of your study need much space to present, then this is the place to do it.

Be careful to compare your results to other results in the field. The reader wants to know what is new about your work, or what is different, or what is better. Do you confirm earlier results? Do you improve them? Is your result in disagreement with other findings? If not, how is it different?

Discussion
In most papers the discussion is a fluff part where the author can offer their interpretation of the results and tell the reader all that still has to be done. I also often use it to explicitly summarize all assumptions that I have made along the way, because that helps put the results into context. You can also dump there all the friendly colleagues who will write to you after submission to "draw your attention to" some great work of theirs that you unfortunately seem to have missed. Just add their reference with a sentence in the discussion and everybody is happy.

Conclusion
Repeat the most relevant part of the results, emphasize especially what is new. Write the conclusion so that it is possible to understand without having read the rest of the paper. Do not mash up the conclusion with the discussion, because you will lose those readers who are too impatient to make it through your interpretations to get to the main point.

References
Give credit where credit is due. You might have first read about some topic in a fairly recent paper, but you should try to find the original source and cite that too. Reference lists are very political. If this is one of your first papers in the field, I recommend you ask somebody who knows "the usual suspects" if you have forgotten somebody important. If you forget to cite many relevant references you will look like you don't know the subject very well, regardless of how many textbooks or review articles you have read.

If you cite online resources, you should include the date at which you last accessed them.

Keep your reference lists in good order, it's time well spent. You will probably be able to reuse them many times.

Figures
Include figures when they are useful, not just because you have them. Figures should always contain axis labels, and if you are using dimensionful units, they should include the units. Explain in the figure caption what's shown in the image; explain it as if the reader has not read the text. It's okay if it's repetitive.

If at all possible, avoid figures that can only be understood when printed in color. Use different line styles or widths in addition to different colors. Be very careful with 3d plots; they are often more confusing than illuminating. Try to break them down into a set of 2d plots if you can.
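The advice above can be sketched in a few lines of matplotlib. The data, labels, and units here are invented purely for illustration; the point is the mechanics of labeled axes with units and distinct line styles that survive a black-and-white printout:

```python
# A minimal sketch of the figure advice: axis labels with units, different
# line styles (not just colors), and a legend. Data is made up.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)  # hypothetical energy values
fig, ax = plt.subplots()
ax.plot(x, np.exp(-x / 5), linestyle="-", linewidth=2, label="model A")
ax.plot(x, np.exp(-x / 3), linestyle="--", linewidth=2, label="model B")
ax.set_xlabel("Energy [GeV]")        # always label axes, with units
ax.set_ylabel("Cross section [pb]")  # ditto for the y-axis
ax.legend()
fig.savefig("comparison.pdf")        # vector formats stay sharp in print
```

The solid/dashed distinction is what keeps the two curves apart on a grayscale printer, where "red vs. blue" collapses into the same shade.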

Notation
Try to use notation that is close to that of the existing literature, it will make it vastly easier for people to understand your paper. Make sure you don't accidentally change notation throughout your calculations. If your equations get very long, try to condense them by breaking up expressions, or by introducing dimensionless variables, which can declutter expressions considerably.

SPELLCHECK (with caution)

I find it stunning that I still see papers full of avoidable typographical errors when one can spell-check text online for free. Yes, I know it's cumbersome with the LaTeX code between the paragraphs, but if you're not spell-checking your paper, you're basically telling your readers you didn't think they're worth the time. Be careful though, and don't let the cosmic ray become a comic ray.
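The cosmic/comic trap is worth spelling out: a spellchecker only knows words, not physics. This little Python sketch, with a toy word list standing in for a real dictionary, shows why a typo that happens to be a valid word sails right through:

```python
# A dictionary-based spellchecker accepts any valid word, so "comic" for
# "cosmic" goes unflagged. The tiny word list is a stand-in for a real one.
DICTIONARY = {"we", "detected", "a", "comic", "cosmic", "ray"}

def misspelled(text):
    """Return the words the checker would flag."""
    return [word for word in text.lower().split() if word not in DICTIONARY]

print(misspelled("we detected a comic ray"))   # -> [] : typo goes unnoticed
print(misspelled("we detcted a cosmic ray"))   # -> ['detcted'] : only garble is caught
```

Hence "with caution": the spellchecker catches the garbled words, but the valid-word typos only a careful human read will find.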

... and sooner than you know you'll have dozens of publications to look back at!

Thursday, January 08, 2015

Do we live in a computer simulation?

Some days I can almost get myself to believe that we live in a computer simulation, that all we see around us is a façade designed to mislead us. There would finally be a reason for all this, for the meaningless struggles, the injustice, for life, and death, and for Justin Bieber. There would even be a reason for dark matter and dark energy, though that reason might just be some alien’s bizarre sense of humor.

It seems perfectly possible to me to trick a conscious mind, at the level of that of humans, into believing a made-up reality. Ask the guy sitting on the sidewalk talking to the trash bin. Sure, we are presently far from creating artificial intelligence, but I do not see anything fundamental that stands in the way of such a creation. Let it be a thousand years or ten thousand years, eventually we’ll get there. And once you believe that it will one day be possible for us to build a supercomputer that hosts intelligent minds in a world whose laws of nature are our invention, you also have to ask yourself whether the laws of nature that we ourselves have found are somebody else’s invention.

If you just assume the simulation that we might live in has us perfectly fooled and we can never find out if there is any deeper level of reality, it becomes rather pointless to even think about it. In this case the belief in “somebody else” who has created our world and has the power to manipulate it at his or her will differs from belief in an omniscient god only by terminology. The relevant question though is whether it is possible to fool us entirely.

Nick Bostrom has a simulation argument that is neatly minimalistic, though he is guilty of using words that end in “ism”. He is basically saying that if there are many civilizations running simulations with many artificial intelligences, then you are more likely to be simulated than not. So either you live in a simulation, or our universe (multiverse, if you must) never goes on to produce many civilizations capable of running these simulations, for one reason or another. Pick your poison. I think I prefer the simulation.
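The counting behind Bostrom's argument fits in two lines of arithmetic. The numbers below are entirely made up for illustration; the argument only requires that simulated minds vastly outnumber real ones:

```python
# A toy version of Bostrom's counting argument. All numbers are invented;
# only the ratio of simulated to real minds matters.
def fraction_simulated(civilizations, sims_per_civilization,
                       minds_per_sim, real_minds):
    """Fraction of all minds that are simulated, under the stated counts."""
    simulated = civilizations * sims_per_civilization * minds_per_sim
    return simulated / (simulated + real_minds)

# Even a single civilization running a thousand simulations, each hosting
# as many minds as the real world, makes simulated minds dominate:
p = fraction_simulated(civilizations=1, sims_per_civilization=1000,
                       minds_per_sim=10**10, real_minds=10**10)
print(p)  # 1000/1001, i.e. about 0.999
```

If you accept the (big) assumption that you are a typical mind, p is then the probability that you are one of the simulated ones.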

Math-me has a general issue with these kinds of probability arguments (same as with the Doomsday argument), because they implicitly assume that the probability distribution of lives lived over time is uncorrelated, which is clearly not the case, since our time evolution is causal. But this is not what I want to get into today, because there is something else about Bostrom’s argument that has been bugging Physics-me.

For his argument, Bostrom needs a way to estimate how much computing power is necessary to simulate something like the human mind perceiving something like the human environment. And in his estimate he assumes, crucially, that it is possible to significantly compress the information of our environment. Physics-me has been chewing on this point for a while. The relevant paragraphs are:

“If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

“The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world.

“Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.”
This assumption is immediately problematic, because it isn’t as easy as saying that whenever a human wants to drill a hole into the Earth, you quickly go and compute what he has to find there. You would have to track what all these simulated humans are doing to know when that becomes necessary. And then you’d have to make sure that this never leads to any inconsistencies. Or else, if it does, you’d have to remove the inconsistency, which will require even more computing power. To avoid the inconsistencies, you’d have to carry along the results for all future measurements that humans could possibly make, the problem being that you don’t know which measurements they will make because you haven’t yet done the simulation. Dizzy? Don’t leave, I’m not going to dwell on this.

The key observation that I want to pick on here is that there will be instances in which The Programmer really has to crank up the resolution to prevent us from finding out that we’re in a simulation. Let me refer to what we perceive as reality as level 0, and to the possible reality of somebody running our simulation as level 1. There could be infinitely many levels in each direction, depending on how many simulators simulate simulations.

This idea that structures depend on the scale at which they are tested, and that at low energies you’re not probing all that much detail, is basically what effective field theories are all about. Indeed, as Bostrom asserts, for much of our daily life the motion of each and every quark is unnecessary information; atoms or molecules are enough. This is all fine by Physics-me.

Then these humans go and build the LHC, and whenever the beams collide the simulation suddenly needs a considerably finer mesh, or else the humans will notice there is something funny with their laws of nature.

Now you might think of overloading the simulation by just demanding so much fine-structure information all at once that the computer running our simulation cannot deliver it. In this case the LHC would serve to test the simulation hypothesis. But there is really no good reason why the LHC should be just the thing to reach whatever computational limit exists at level 1.

But there is a better way to test whether we live in a simulation: Build simulations ourselves, the more the better. The reason is that you can’t compress what is already maximally compressed. So if the level 1 computation wants to prevent us from finding out that we live in a simulation, then once we create simulations ourselves, it will have to crank up the computational effort for that part of our level 0 world which hosts our simulation at level -1.

Now we try to create simulations that will create a simulation that will create a simulation, and so on. Eventually, the level 1 simulation will not be able to deliver any more, regardless of how good their computer is, and the then-lowest level will find some strange artifacts: something that is clearly not compatible with the laws of nature they have found so far and believed to be correct. This breakdown gets read out by the computer one level above, and so on, until it reaches us and then whatever is the uppermost level (if there is one).
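The logic of this stacking can be captured in a toy model (all numbers below are made up for illustration; none of this is from Bostrom’s paper): give level 1 a finite compute budget, let each level pass only a fraction of its resources down to the level below, and the stack necessarily bottoms out.

```python
def simulation_depth(budget, shrink=10, min_budget=1):
    """Count how many nested simulation levels a compute budget supports,
    if each level can pass only a 1/shrink share of its resources down."""
    depth = 0
    while budget // shrink >= min_budget:
        budget //= shrink  # the child level runs on a fraction of the parent's budget
        depth += 1
    return depth

# Even an absurdly large level-1 budget supports only finitely many levels:
print(simulation_depth(10**30))  # → 30
```

However large the budget, the depth is finite, which is all the argument above needs.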

Unless you want to believe that I’m an exceptional anomaly in the multiverse, every reasonably intelligent species should have somebody who comes up with this sooner or later. Then they’ll set out to create simulations that will create simulations. If one of their simulations doesn’t develop in the direction of creating more simulations, they’ll scrap it and try a different one, because otherwise it’s not helpful to their end.

This leads to a situation much like Lee Smolin’s Cosmological Natural Selection, in which black holes create new universes that create black holes that create new universes, and so on. The whole population of universes is then dominated by those universes that lead to the largest number of black holes - that have the most “offspring.” In Cosmological Natural Selection, we are thus most likely to find ourselves in a universe that optimizes the number of black holes.

In the scenario I discussed above, the reproduction doesn’t happen through black holes but through building computer simulations. Anybody living in a simulation is then most likely to be living in a simulation that will go on to create another simulation. Or, to look at this from a slightly different perspective: if you want our species to continue thriving and prevent The Programmer from pulling the plug, you had better work on creating artificial intelligence, because this is why we’re here. You asked what’s the purpose of life? There it is. You’re welcome.
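A toy calculation of these selection dynamics (types and numbers invented for illustration, not taken from Smolin): lineages that spawn more simulations per generation come to dominate the population, no matter how modest their head start.

```python
def population_shares(generations, offspring=None):
    """Fraction of universes in each lineage after some generations,
    when each universe of a type spawns a fixed number of offspring."""
    if offspring is None:
        offspring = {"prolific": 3, "sterile": 1}
    counts = {kind: 1 for kind in offspring}  # one founding universe per type
    for _ in range(generations):
        counts = {kind: counts[kind] * offspring[kind] for kind in counts}
    total = sum(counts.values())
    return {kind: counts[kind] / total for kind in counts}

shares = population_shares(20)
# After 20 generations the prolific lineage holds essentially the whole population.
```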

This also means you could try to test the probability of the simulation hypothesis being correct by seeing whether our universe does indeed have the optimal conditions for the creation of computer simulations.

Brain hurting? Don’t worry, it’s probably not real.

Saturday, January 03, 2015

Your g is my e – Has time come for a physics notation standard?

Standards make sure the nuts fit the bolts.

The German Institute for Standardization, the “Deutsches Institut für Normung” (DIN), has standardized German life since 1917. DIN 18065 sets the standard for the height of staircase railings, DIN 58124 the surface of school bags to be covered with reflective stripes, and DIN 8270-2 the length of the hands of a clock. The Germans have a standard for pretty much everything from toilets to sleeping bags to funeral services.

Many of the German standards are now identical to European Standards, EN, and/or International Standards, ISO. According to DIN ISO 8601, for example, the International Standard week begins on Monday and has seven days. DIN EN 1400-1 certifies that a pacifier has two holes so that the baby can still breathe if it manages to suck the pacifier into its mouth (it happens). The international standard DIN EN ISO 20126 assures that every bristle of your toothbrush can withstand a pull of at least 15 Newton (“Büschelauszugskraftprüfung,” bristle-pull-off-force test, as the Germans call it). A lot of standards are dedicated to hardware supply and electronic appliances; they make sure that the nuts fit the bolts, the plugs fit the outlets, and the fuses blow when they should.

DIN EN 45020 is the European Standard for standards.

Where standards are lacking, life becomes cumbersome. Imagine every time you bought envelopes or folders you had to check that they actually fit the paper you have. The Swedes have a different standard for paper punching than the Germans, neither of which is identical to the US American one. Filing cross-country taxes is painful for many reasons, but the punch issue is the straw that makes my camel go nuts. And let me not even get started about certain nations who don’t even use the ISO paper sizes, because international is just the rest of the world.

Standards are important for consumer safety and convenience, but they have another important role, which is to benefit the economic infrastructure by making reuse and adaptation dramatically easier. The mechanical engineers figured that out a century ago, so why haven’t the physicists?

During the summer I read a textbook on in-medium electrodynamics, a topic I was honestly hoping I’d never again have anything to do with, but unfortunately it was relevant for my recent paper. I skipped over the first six chapters or so because they covered the basics that I thought I knew, just to then find that the later chapters didn’t make any sense. They gradually started making sense after I figured out that q wasn’t the charge and η not the viscosity.

Anybody who often works with physics textbooks will have encountered this problem before. Even after adjusting for unit and sign conventions, each author has their own notation.

Needless to say this isn’t a problem of textbooks only. I quite frequently read papers that are not directly in my research area, and it is terribly annoying having to waste time trying to decode the nomenclature. In one instance I recall being very confused about an astrophysics paper until it occurred to me that M probably wasn’t the mass of the galaxy. Yeah, haha, how funny.

I’m one of these terrible referees who will insist that every variable, constant, and parameter is introduced in the text. If you write p, I expect you to explain that it’s the momentum. (Or is it a pressure?) If you write g, I expect you to explain it’s the metric determinant. (Or is it a coupling constant? And what again is your sign convention?) If you write S, I expect you to explain it’s the action. (Or is it the entropy?)

I’m doing this mostly because if you read papers dating back to the turn of the last century, it is very apparent that what was common notation then isn’t common notation anymore. If somebody in a hundred years downloads today’s papers, I still want them to be able to figure out what the papers are about. Another reason I insist on this is that not explaining the notation can add substantial interpretational fog. One of my pet peeves is to ask whether x denotes a position operator or a coordinate. You can build whole theories out of mixing these up.

You may wnat to dsicard this as some German maknig am eelphnat out of a muose, but think twice. You almots certainly have seen tihs adn smiliar memes that supposedly show how amazingly well the human brain is at sense-making and error correction. If we can do this, certainly we are able to sort out the nomenclature used in scientific papers. Yes, we are able to do this like you are able to decipher my garbled up English. But would you want to raed a whoel essay liek this?

The extra effort it takes to figure out somebody else’s nomenclature, even if it isn’t all that big a hurdle, creates friction that makes interdisciplinary work, even collaboration within one discipline, harder and thus discourages it. Researchers within one area often settle on a common or at least similar nomenclature, but this happens typically within groups that are already very specialized, and the nomenclature hurdle further supports this overspecialization. Imagine how much easier it would be to learn about a new subject if each paper used a standard notation or at least had a list of used notation added at the end, or in a supplement.

There aren’t all that many letters in the alphabets we commonly use, and we’d run out of letters quickly if we tried to keep them all different. But they don’t need to be all different – more practical would be palettes for certain disciplines. And of course one doesn’t really have to fix each and every twiddle or index if it is explained in the text. Just the most important variables, constants, and observables would already be a great improvement. Say, that T that you are using there, does or doesn’t it include complex conjugation? And the D, is that the number of spatial coordinates only, or does it include the time coordinate? Oh, and N isn’t a normalization but an integer, how stupid of me.

In fact, I think that the benefit, especially for students who haven’t yet seen all that many papers, would be so large that we will almost certainly sooner or later see such a nomenclature standard. And all it really takes is for somebody to set up a wiki and collect entries, then for authors to add a note that they used a certain notation standard. This might be a good starting point.
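To make the idea concrete, such a wiki entry could be as simple as a symbol table that authors reference and paste as a supplement. A sketch of what I have in mind (all entries below are invented examples, not an existing standard):

```python
# A hypothetical notation palette for one discipline:
NOTATION = {
    "g": "determinant of the metric, signature (-,+,+,+)",
    "S": "the action (not the entropy)",
    "p": "momentum (not pressure)",
    "D": "number of spatial dimensions, time not included",
    "x": "spacetime coordinate (not the position operator)",
}

def notation_supplement(used_symbols):
    """Render the 'list of used notation' to append at the end of a paper."""
    return "\n".join(
        f"  {s} : {NOTATION[s]}" for s in sorted(used_symbols) if s in NOTATION
    )

print(notation_supplement({"g", "S", "x"}))
```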

Of course a physics notation standard will only work if sufficient people come to see the benefit. I don’t think we’re quite there yet, but I am pretty sure that the day will come when some nation expects a certain standard for lecture notes and textbooks, and that day isn’t too far into the future.

Tuesday, December 30, 2014

A new proposal for a fifth force experiment

Milla Jovovich in “The Fifth Element”
I still find it amazing that all I see around me is made up of only some dozens particles and four interactions. For all we know. But maybe this isn’t all there is? Physicists have been speculating for a while now that our universe needs a fifth force to maintain the observed expansion rate, but this has turned out to be very difficult to test. A new paper by Burrage, Copeland and Hinds from the UK now proposes a test based on measuring the gravitational attraction felt by single atoms.
    Probing Dark Energy with Atom Interferometry
    Clare Burrage, Edmund J. Copeland, E. A. Hinds

Dark energy is often portrayed as mysterious stuff that fills the universe and pushes it apart, but stuff and forces aren’t separate things. Stuff can be a force carrier that communicates an interaction between other particles. In its simplest form, dark energy is an unspecified smooth, inert, and unchanging constant, the “cosmological constant”. But for many theorists such a constant is unsatisfactory because its origin is left unexplained. A more satisfactory explanation would be a dark-energy-field that fills the universe and has the desired effect of accelerating the expansion by modifying the gravitational interaction on long, super-galactic, distances.

The problem with using fields to modify the gravitational interaction on long distances and to thus explain the observations is that one quickly runs into problems at shorter distances. The same field that needs to be present between galaxies to push them apart should not be present within the galaxies, or within solar systems, because we should have noticed that already.

About a decade ago, Weltman and Khoury pointed out that a dark energy field would not affect gravity on short distances if it was suppressed by the density of matter (arXiv:astro-ph/0309411). The higher the density of matter, the smaller the value of the dark energy field, and the less it would affect the gravitational attraction. Such a field thus would be very weak within our galaxies, and only make itself noticeable between galaxies where the matter density is very low. They called this type of dark energy field the “chameleon field” because it seems to hide itself and merges into the background.
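The density dependence can be seen in a one-line sketch. For the simplest chameleon potential, the effective potential is V_eff(φ) = Λ⁵/φ + ρφ/M, whose minimum sits at φ_min = (Λ⁵M/ρ)^½, so the field value indeed shrinks as the matter density ρ grows. (Units and prefactors here are schematic, not those of the Khoury-Weltman paper.)

```python
def phi_min(rho, Lam=1.0, M=1.0):
    """Field value at the minimum of V_eff = Lam**5/phi + rho*phi/M
    (schematic n=1 chameleon potential, arbitrary units)."""
    return (Lam**5 * M / rho) ** 0.5

# Higher matter density -> smaller chameleon field -> weaker fifth force:
print(phi_min(rho=1.0), phi_min(rho=100.0))  # the second value is 10x smaller
```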

The very same property that makes the chameleon field such an appealing explanation for dark energy is also what makes it so hard to test. Fifth force experiments in the laboratory measure the gravitational interaction with very high precision, and they have so far reproduced standard gravity with ever increasing precision. These experiments are however not sensitive to the chameleon field, at least not in the parameter range in which it might explain dark energy. That is because the existing fifth force experiments measure the gravitational force between two macroscopic probes, for example two metallic plates, and the high density of the probes themselves suppresses the field one is trying to measure.

In their new paper, Burrage et al show that one does not run into this problem if one uses a different setting. To begin with, they say the experiment should be done in a vacuum chamber, so as to get the background density as small as possible and the value of the chameleon field as high as possible. The authors then show that the value of the field inside the chamber depends on the size of the chamber and the quality of the vacuum, and that the field increases towards the middle of the chamber.

They calculate the force between a very small, for example atomic, sample and a larger sample, and show that the atom is too small to cause a large suppression of the chameleon field. The gravitational attraction between two atoms is too feeble to be measurable, so one still needs one macroscopic body. But when one looks at the numbers, replacing one macroscopic probe with a microscopic one would be enough to make the experiment sensitive enough to find out whether dark energy is a chameleon field, or at least some of it.

One way to realize such an experiment would be by using atom interferometry which has previously been demonstrated to be sensitive to the gravitational force. In these experiments, an atom beam is split in two, one half of it is subjected to some field, and then the beams are combined again. From the resulting interference pattern one can extract the force that acted on the beams. A similar setting could be used to test the chameleon field.
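For scale, the textbook phase shift of such an interferometer under a constant acceleration a is φ = k_eff a T², with k_eff the effective wavevector of the beam-splitting light pulses and T the free-fall time between pulses. With typical numbers (a rubidium Raman setup is assumed here; the figures are illustrative, not from the paper):

```python
import math

def phase_shift(a, wavelength=780e-9, T=0.1):
    """Interferometer phase phi = k_eff * a * T**2 for a two-photon
    Raman transition, where k_eff = 2 * (2*pi / wavelength)."""
    k_eff = 2 * (2 * math.pi / wavelength)
    return k_eff * a * T**2

print(f"{phase_shift(9.81):.2e} rad")  # millions of radians for Earth's gravity
```

Millions of radians of phase from gravity alone is why tiny additional forces become detectable in such a setup.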

Holger Müller from the University of California at Berkeley, an experimentalist who works on atom interferometry, thinks it is possible to do the experiment. “It’s amazing to see how an experiment that is very realistic with current technology is able to probe dark energy. The technology should even allow surpassing the sensitivity expected by Burrage et al.,” he said.

I find this a very interesting paper, and also a hopeful one. It shows that while sending satellites into orbit and building multi-billion dollar colliders are promising ways to search for new physics, they are not the only ways. New physics can also hide in high precision measurements in your university lab, just ask the theorists. Who knows, there might be a chameleon hidden in your vacuum chamber.

This post first appeared on Starts with a Bang as "The Chameleon in the Vacuum Chamber".

Monday, December 29, 2014

The 2014 non-news: Where do these highly energetic cosmic rays come from?

As the year 2014 is nearing its end, lists with the most-read stories are making the rounds. Everything is in there, from dinosaurs and miracle cures to disease scares, Schadenfreude, suicide, the relic gravitational wave signal that wasn’t, and space-traffic accidents, all the way to a comet landing.

For the high energy physicists though, this was another year of non-news, not counting the one or other baryon that I have a hard time getting excited about. No susy, no dark matter detection, no quantum gravity, no beyond-the-standard-model anything.

My non-news of the year that probably passed you by is that the origin of highly energetic cosmic rays descended back into mystery. If you recall, in 2007, the Pierre Auger Collaboration announced that they had found a correlation between the directions from which they saw the highly energetic particles coming and the positions of galaxies with supermassive black holes, more generally referred to as active galactic nuclei. (Yes, I've been writing this blog for that long!)

This correlation came with some fine print, because highly energetic particles will eventually, after sufficiently long travel, scatter off one of the sparsely distributed photons of the cosmic microwave background. So you would not expect a correlation with these active galactic nuclei beyond a certain distance, and that seemed to be exactly what they saw. They didn’t at this point have a lot of data, so the statistical significance wasn’t very high. However, many people thought this correlation would become stronger with more data, and the collaboration probably thought so too, otherwise they wouldn’t have published it.
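The distance scale behind that fine print comes from simple kinematics: a proton scattering off a CMB photon through the Δ resonance. An order-of-magnitude estimate of the threshold energy, assuming a head-on collision with a typical CMB photon (the commonly quoted cutoff of ~5×10^19 eV is somewhat lower, because the photon spectrum has a high-energy tail):

```python
m_delta = 1.232e9  # eV, mass of the Delta(1232) resonance
m_p     = 0.938e9  # eV, proton mass
E_gamma = 6.4e-4   # eV, a typical CMB photon energy (~2.7 k_B T at 2.7 K)

# Threshold for p + gamma -> Delta in a head-on collision:
E_th = (m_delta**2 - m_p**2) / (4 * E_gamma)
print(f"{E_th:.1e} eV")  # a few times 10^20 eV
```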

But it didn't turn out this way. The correlation didn't become stronger. Instead by now it's pretty much entirely gone. In October, Katia Moskvitch at Nature News summed it up:

"Working with three-and-a-half years of data gleaned from 27 rays, Auger researchers reported that the rays seemed to preferentially come from points in the sky occupied by supermassive black holes in nearby galaxies. The implication was that the particles were being accelerated to their ultra-high energies by some mechanism associated with the giant black holes. The announcement generated a media frenzy, with reporters claiming that the mystery of the origin of cosmic rays had been solved at last.

But it had not. As the years went on and as the data accumulated, the correlations got weaker and weaker. Eventually, the researchers had to admit that they could not unambiguously identify any sources. Maybe those random intergalactic fields were muddying the results after all. Auger “should have been more careful” before publishing the 2007 paper, says Avi Loeb, an astrophysicist at Harvard University in Cambridge, Massachusetts."

So we're back to speculating about the origin of the ultra-high-energy cosmic rays. It's a puzzle that I've scratched my head over for a while - more scratching is due.

Wednesday, December 24, 2014

Merry Christmas :)

I have a post about "The rising star of science" over at Starts with a Bang. It collects some of my thoughts on science and religion, fear and wonder. I will not repost it here next month, so if you're interested, check it out over there. According to Medium it's a 6-minute read. You can get a 3-minute summary in my recent video:

We wish you all happy holidays :)

From left to right: Inga the elephant, Lara the noisy one, me, Gloria the nosy one, and Bo the moose. Stefan is fine and says hi too, he isn't in the photo because his wife couldn't find the setting for the self-timer.

Tuesday, December 23, 2014

Book review: "The Edge of the Sky" by Roberto Trotta

The Edge of the Sky: All You Need to Know about the All-There-Is
Roberto Trotta
Basic Books (October 9, 2014)

It's two days before Christmas and you need a last-minute gift for that third-degree-uncle, heretofore completely unknown to you, who just announced a drop-in for the holidays? I know just the right thing for you: "The Edge of the Sky" by Roberto Trotta, which I found as free review copy in my mailbox one morning.

According to the back flap, Roberto Trotta is a lecturer in astrophysics at Imperial College. He has very blue eyes and very white teeth, but I have more twitter followers, so I win. Roberto set out to explain modern cosmology with only the thousand most used words of the English language. Unfortunately, neither "cosmology" nor "thousand" belongs to these words, and certainly not "heretofore" which might or might not mean what I think it means.

The result is a nice little booklet telling a story about "big-seers" (telescopes) and "star-crowds" (galaxies) and the "early push" (inflation), with a couple of drawings for illustration. It's pretty and kinda artsy, which probably isn't a word at all. The book is also as useless as that prize-winning designer chair in which one can't sit, but better than the chair because it's very slim and will not take up much space, or money. It's just the right thing to give to your uncle who will probably not read it, and so he'll never find out that you think he's too dumb to know the word "particle". It is, in summary, the perfect re-gift, so go and stuff it into somebody's under-shoe-clothes - how am I doing?

Saturday, December 20, 2014

Has Loop Quantum Gravity been proved wrong?

Logo of the site by the name of Loop Insight.
The insight to take away here is that you have to
carefully look for those infinities.
[Fast track to wisdom: Probably not. But then.]

The Unruh effect is the predicted, but so far unobserved, particle production seen by an accelerated observer in flat space. It is a result obtained using quantum field theory that does not include gravity, and the particles are thermally distributed with a temperature proportional to the acceleration. The origin of the particle production is that the notion of particles, like the passage of time, is observer-dependent, and so what is Bob’s vacuum might be Alice’s thermal bath.
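Plugging in numbers shows why the effect has not been observed: the Unruh temperature is T = ħa/(2πck_B), which is absurdly small for any acceleration we can comfortably produce. (Standard constants below; the acceleration values are just examples.)

```python
import math

hbar = 1.054571817e-34  # J*s, reduced Planck constant
c    = 2.99792458e8     # m/s, speed of light
k_B  = 1.380649e-23     # J/K, Boltzmann constant

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2*pi*c*k_B) for proper acceleration a."""
    return hbar * a / (2 * math.pi * c * k_B)

print(f"{unruh_temperature(9.81):.1e} K")  # Earth's gravity: ~4e-20 K
print(f"{unruh_temperature(1e20):.1f} K")  # ~1e20 m/s^2 buys you not even half a Kelvin
```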

The Unruh effect can be related to the Hawking effect, that is the particle production in the gravitational field of a black hole, by use of the equivalence principle. Neither of the two effects has anything to do with quantum gravity. In these calculations, space-time is treated as a fixed background field that has no quantum properties.

Loop Quantum Gravity (LQG) is an approach to quantum gravity that relies on a new identification of space-time degrees of freedom, which can then be quantized without running into the same problems as one does when quantizing perturbations of the metric. Or at least that’s the idea. The quantization prescription depends on two parameters, one is a length scale normally assumed to be of the order of the Planck length, and the other one is a parameter that everybody wishes wasn’t there and which will not be relevant in the following. The point is that LQG is basically a modification of the quantization procedure that depends on the Planck length.

In a recent paper now, Hossain and Sardar from India claim that using the loop quantization method does not reproduce the Unruh effect.

If this was correct, this would be really bad news for LQG. So of course I had to read the paper, and I am here to report back to you.

The Unruh effect has not been measured yet, but experiments have been underway for a while to measure the non-gravitational analog of the Hawking effect. Since the Hawking effect is a consequence of certain transformations in quantum field theory that also apply to other systems, it can be studied in the laboratory. There is some ongoing controversy over whether or not it has already been measured, but in my opinion it’s really just a matter of time until they’ve pinned down the experimental uncertainties and will confirm it. It would be theoretically difficult to argue that the Unruh effect does not exist when the Hawking effect does. So, if what they claim in the paper is true, then Loop Quantum Gravity, or its quantization method respectively, would be pretty much ruled out, or at least in deep trouble.

What they do in the paper is apply the two quantization methods to quantum fields in a fixed background. As is usual in this calculation, the background remains classical. Then they calculate the particle flux that an accelerated observer would see. For this they have to define some operators as limiting cases, because these operators don’t exist in the same way for the loop quantization method. They find in the end that while the normal quantization leads to the expected thermal spectrum, the result for the loop quantization method is just zero.

I kinda want to believe it, because then at least something would be happening in quantum gravity! But I see a big problem with this computation. To understand it, you first have to know that the result with the normal quantization method isn’t actually a nice thermal distribution, it is infinity. This infinity can be identified by a suitable mathematical procedure, in which case one finds that it is a delta function in momentum space, evaluated at zero. Once identified, it can be factored out, and the prefactor of the delta function is the thermal spectrum that you’ve been looking for. One can trace back the physical origin of this infinity to find that it comes, roughly speaking, from having looked at the flux in an infinite volume.
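Schematically (in units ħ = c = 1, with Rindler frequency Ω and acceleration a), the expectation value of the accelerated observer’s number operator in the Minkowski vacuum has the form

```latex
\langle 0_M | N_\Omega | 0_M \rangle \;=\; \frac{\delta(0)}{e^{2\pi \Omega / a} - 1} \,,
```

where the δ(0) is the divergent volume factor, and its prefactor, the Planck spectrum, is the Unruh result one quotes.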

These types of infinities appear all over the place in quantum field theory, and they can be dealt with by a procedure called regularization, that is, the introduction of a parameter, the “regulator,” whose purpose is to capture the divergences so that they can be cleanly discarded. The important thing about regularization is that you have to identify the divergences first before you can get rid of them. If you try to divide an infinite factor out of a result that wasn’t divergent, all you get is zero.

What the authors do in the paper is that they use a standard regularization method for the Unruh effect that is commonly used for the normal quantization, and apply this regularization also to the other quantization. Now the loop quantization in some sense already has a regulator, that’s the finite length scale that, when the quantization is applied to space-time, results in a smallest unit of area and volume. If this length scale is first set to zero, and then the regulator is removed, one gets the normal Unruh effect. If one first removes the regulator, the result is apparently zero. (Or so they claim in the paper. I didn’t really check all their approximations of special functions and so on.)

My suspicion therefore is that the result would have been finite to begin with and that the additional regularization is overkill. The result is zero, basically, because they’ve divided out one infinity too many.

The paper however is very confusingly written, and at least I don’t see at first sight what’s wrong with their calculation. I’ve now consulted three people who work on related things and none of them saw an obvious mistake. I myself don’t care enough about Loop Quantum Gravity to spend more time on this than I already have. The reason I am telling you about this is that there has been absolutely no reaction to this paper. You’d think if colleagues went about and allegedly proved wrong the theory you’re working on, they’d be shouted down in no time! But everybody in loop quantum gravity just seems to have ignored this.

So if you’re working on loop quantum gravity, I would appreciate a pointer to a calculation of the Unruh effect that either confirms this result or proves it wrong. And the rest of you I suggest spread word that loop quantum gravity has been proved wrong, because then I’m sure we will get a clarification of this very very quickly ;)

Saturday, December 13, 2014

The remote Maxwell Demon

During the summer, I wrote a paper that I dumped in an arxiv category called cond-mat.stat-mech, and then managed to entirely forget about it. So somewhat belatedly, here is a summary.

Pretty much the only recollection I have of my stat mech lectures is that every single one of them was inevitably accompanied by the same divided box with two sides labeled A and B. Let me draw this for you:

Maxwell’s demon in its original version sits in this box. The demon’s story is a thought experiment meant to highlight the following paradox with the 2nd law of thermodynamics.

Imagine the above box is filled with a gas, and the gas is at a low temperature on side A and at a higher temperature on side B. The second law of thermodynamics says that if you open a window in the dividing wall, the temperatures will come to an average equilibrium value, and in this process entropy is maximized. Temperature is basically average kinetic energy, so the average speed of the gas atoms approaches the same value everywhere, just because this is the most likely thing to happen.

The system can only do work on the way to equilibrium, but no longer once it’s arrived there. Once you’ve reached this state of maximum entropy, nothing happens any more, except for fluctuations. Unless you have a Maxwell demon...

Maxwell’s demon sits at the dividing wall between A and B when both sides are at the same temperature. He opens the window every time a fast atom comes from the left or a slow atom comes from the right, otherwise he keeps it closed. This has the effect of sorting fast and slow atoms so that, after some while, more fast atoms are on the right side than on the left side. This means the temperatures are not in equilibrium anymore and entropy has decreased. The demon thus has violated the second law of thermodynamics!

Well, of course he hasn’t, but it took a century for physicists to pin down the exact reason why. In brief it’s that the demon must be able to obtain, store, and use information. And he can only do that if he either starts at a low entropy that then increases, or brings along an infinite reservoir of low entropy. The total entropy never decreases, and the second law is well and fine.
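The sorting step can be seen in a classical toy version (a sketch that, tellingly, ignores the entropy cost of the demon’s memory – exactly the bookkeeping that rescues the second law):

```python
import random

def demon_sort(n=10000, seed=1):
    """Split one equilibrated gas by speed: the demon passes slow atoms
    to side A and fast atoms to side B, using information alone."""
    rng = random.Random(seed)
    speeds = [rng.expovariate(1.0) for _ in range(n)]  # toy speed distribution
    threshold = sorted(speeds)[n // 2]                 # the demon's decision rule
    side_a = [v for v in speeds if v < threshold]
    side_b = [v for v in speeds if v >= threshold]
    mean = lambda vs: sum(vs) / len(vs)
    return mean(side_a), mean(side_b)

cold, hot = demon_sort()
# The two sides now have different "temperatures" although no energy was added.
```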

It has only been during recent years that some versions of Maxwell’s demon have been experimentally realized in the laboratory. These demons use essentially information to drive a system out of equilibrium, which can then, in principle, do work.

It occurred to me that this must mean it should be possible to replace transfer of energy from a sender to a receiver by transfer of information, and this information transfer could take place with a much smaller energy than what the receiver gets out of the information. In essence this would mean one can down-convert energy during transmission.

The reason this is possible is that the relevant energy here is not the total energy – a system in thermal equilibrium has lots of energy. The relevant energy that we want at the receiving end is free energy – energy that can be used to do work. The signal does not need to contain the energy itself, it only needs to contain the information that allows one to drive the system out of equilibrium.
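The bookkeeping that keeps all this consistent with thermodynamics is Landauer’s bound: erasing one bit of information dissipates at least k_B T ln 2 of free energy. At room temperature that is a few zeptojoules per bit – a floor on the information processing, not on the signal energy itself, which is why down-conversion is possible at all. A quick number:

```python
import math

k_B = 1.380649e-23  # J/K, Boltzmann constant
T   = 300.0         # K, room temperature

landauer_limit = k_B * T * math.log(2)  # minimum dissipation per erased bit
print(f"{landauer_limit:.2e} J per bit")  # about 3e-21 J
```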

In my paper, I have constructed a concrete example for how this could work. The full process must include remote measuring, extraction of information from the measurement, sending of the signal, and finally making use of the signal to actually extract energy. The devil, or in this case the demon, is in the details. It took me some while to come up with a system simple enough so one could in the end compute the energy conversion and also show that the whole thing, remote demon included, obeys the Carnot limit on the efficiency of heat engines.

In the classical example of Maxwell’s demon, the necessary information is the velocity of the particles approaching the dividing wall, but I chose a simpler system with discrete energy levels, just because the probability distributions are then easier to deal with. The energy extraction that my demon works with is a variant of the stimulated emission that is also used in lasers.

The atoms in a laser are “pumped” into an out-of-equilibrium state, which has the property that if you inject light (i.e., energy) with the right frequency, you get out more light of the same frequency than you sent in. This does not work if the system is in equilibrium, though; then it is always more likely that the injected signal is absorbed than that it stimulates a net emission.

However, a system in equilibrium always has fluctuations. The atoms have some probability to be in an excited state, a state in which they could be stimulated to emit light. If you just knew which atoms were in the excited state, then you could target them specifically, and end up with twice the energy that you sent in.
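A back-of-the-envelope version of this argument (my toy numbers, not the system from my paper, and treating absorption and stimulated emission as the only outcomes): in equilibrium the excited fraction of a two-level atom is always below one half, so blindly injected photons lose energy on average, while photons targeted at atoms known to be excited come back doubled.

```python
import math

def excited_fraction(gap, kT):
    """Equilibrium (Boltzmann) probability that a two-level atom
    with energy gap `gap` is in its excited state."""
    return math.exp(-gap / kT) / (1.0 + math.exp(-gap / kT))

gap, kT = 1.0, 0.5
p = excited_fraction(gap, kT)  # always < 0.5 in equilibrium

# Blind injection: the photon stimulates emission with probability p
# (gain of one photon) and is absorbed with probability 1 - p (loss):
net_blind = p - (1 - p)  # expected gain per photon, in units of the gap

# Targeted injection: the demon knows the atom is excited, so stimulated
# emission returns the injected photon plus one more:
net_targeted = 1.0

print(p, net_blind, net_targeted)
```

The blind strategy always loses on average; only the information about which atoms fluctuated into the excited state turns the gamble into a sure gain.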

So that’s what my remote demon does: It measures out-of-equilibrium fluctuations in some atomic system and targets these to extract energy. The main point is that the energy sent to the system can be much smaller than the extracted energy. It is, in essence, a wireless battery recharger. Except that the energies in question are, in my example, so tiny that it’s practically entirely useless.

I’ve never worked on anything in statistical mechanics before. Apparently I don’t even have a blog label to tag it! This was a fun project and I learned a lot. I even made a drawing to accompany it.

Saturday, December 06, 2014

10 things you didn’t know about the Anthropic Principle

“The anthropic principle – the idea that our universe has the properties it does because we are here to say so and that if it were any different, we wouldn’t be around commenting on it – infuriates many physicists, including [Marc Davis from UC Berkeley]. It smacks of defeatism, as if we were acknowledging that we could not explain the universe from first principles. It also appears unscientific. For how do you verify the multiverse? Moreover, the anthropic principle is a tautology. “I think this explanation is ridiculous. Anthropic principle… bah,” said Davis. “I’m hoping they are wrong [about the multiverse] and that there is a better explanation.””
~Anil Ananthaswamy, in “The Edge of Physics”
Are we really so special?
Starting in the mid-70s, the anthropic principle has been employed in physics as an explanation for the values of parameters in our theories, but in 2014 I still come across ill-informed statements like the one above in Anil Ananthaswamy’s (otherwise highly recommendable) book “The Edge of Physics”. I’m no fan of the anthropic principle because I don’t think it will lead to big insights. But it’s neither useless nor a tautology, nor does it concede that the universe can’t be explained from first principles.

Below are the most important facts about the anthropic principle, where I am referring to the definition from Ananthaswamy’s quote: “Our universe has the properties it does because if it were any different we wouldn’t be here to comment on it.”
  1. The anthropic principle doesn’t necessarily have something to do with the multiverse.

    The anthropic principle is correct regardless of whether there is a multiverse, and regardless of what the underlying explanation for the values of parameters in our theories is, if there is one. The reason it is often brought up by multiverse proponents is that they claim the anthropic principle is the only explanation, and that there is no other selection principle for the parameters we observe. One then needs to show, though, that the parameter values we observe are indeed the only ones (or at least very probable ones) if one requires that life is possible. This, however, is highly controversial, see 2.

  2. The anthropic principle cannot explain the values of all parameters in our theories.

    The typical claim that the anthropic principle explains the values of parameters in the multiverse goes like this: If parameter x were just a little larger or smaller, we wouldn’t exist. The problem with this argument is that small variations in one out of two dozen parameters do not cover the bulk of possible combinations. You’d really have to consider independent modifications of all parameters to be able to conclude that there is only one combination supportive of life. This, however, is not a presently feasible calculation.

    Though we cannot presently scan the whole parameter space to find out which combinations might be supportive of life, we can do a little better than one and try at least a few. This has been done, and thus we know that the claim that there is really only one combination of parameters that will create a universe hospitable to life stands on very shaky ground.

    In their 2006 paper “A Universe Without Weak Interactions”, published in PRD, Harnik, Kribs, and Perez put forward a universe that seems capable of creating life and yet is entirely different from our own [arXiv:hep-ph/0604027]. Don Page argues that the universe would be more hospitable to life if the cosmological constant were smaller than the observed value [arXiv:1101.2444], and recently it was claimed that life might have been possible already in the early universe [arXiv:1312.0613]. All these arguments show that a chemistry complex enough to support life can arise under circumstances that, while still special, are not anything like the ones we experience today.

  3. Even so, the anthropic principle might still explain some parameters.

    The anthropic principle might, however, still work for some parameters if their effect is almost independent of what the other parameters do. That is, even if one cannot use the anthropic principle to explain all values of parameters because one knows there are other combinations allowing for the preconditions of life, some of these parameters might need to have the same value in all cases. The cosmological constant is often claimed to be of this type.

  4. The anthropic principle is trivial but that doesn’t mean it’s obvious.

    Mathematical theorems, lemmas, and corollaries are results of derivations following from assumptions and definitions. They essentially are the assumptions, just expressed differently. They are always true and sometimes trivial. But often, they are surprising and far from obvious, though that is inevitably a subjective statement. Complaining that something is trivial is like saying “It’s just sound waves” and referring to everything from engine noise to Mozart.

  5. The anthropic principle isn’t useless.

    While the anthropic principle might strike you as somewhat silly and trivially true, it can be useful, for example to rule out values of certain parameters. The most prominent example is probably the cosmological constant which, if it were too large, wouldn’t allow the formation of structures large enough to support life. This is not an empty conclusion. It’s like when I see you drive to work by car every morning and conclude you must be old enough to have a driver’s license. (You might just be stubbornly disobeying laws, but the universe can’t do that.) The anthropic principle is in its core function a consistency constraint on the parameters in our theories. One could derive from it predictions on the possible combinations of parameters, but since we have already measured them, these are now merely post-dictions.

    Fred Hoyle’s prediction of properties of the carbon nucleus that make possible the synthesis of carbon in stellar interiors – properties that were later discovered as predicted – is often quoted as a successful application of the anthropic principle, because Hoyle is said to have exploited the fact that carbon is central to life on Earth. Some historians have questioned whether this was indeed Hoyle’s reasoning, but the mere fact that it could have been shows that anthropic reasoning can be a useful extrapolation of observation, in this case the abundance of carbon on our planet.

  6. The anthropic principle does not imply a causal relation.

    Though “because” suggests it, there is no causation in the anthropic principle. An everyday example of “because” not implying an actual cause: I know you’re sick because you’ve got a cough and a runny nose. This doesn’t mean the runny nose caused you to be sick; instead, it was probably some virus. Alas, you can carry a virus without showing symptoms, so it’s not like the virus is the actual “cause” of my knowing. Likewise, that there is somebody here to observe the universe did not cause a life-friendly universe into existence. (And the reverse, that a life-friendly universe caused our existence, doesn’t work either, because it’s not like the life-friendly universe sat somewhere out there and then decided to come into existence to produce some humans.)

  7. The applications of the anthropic principle in physics have actually nothing to do with life.

    As Lee Smolin likes to point out, the mention of “life” in the anthropic principle is entirely superfluous verbal baggage (my words, not his). Physicists don’t usually have a lot of business with the science of self-aware conscious beings. They talk about the formation of large-scale structures or atoms that are preconditions for biochemistry, but don’t expect physicists to even discuss large molecules. Talking about “life” is arguably catchier, but that’s really all there is to it.

  8. The anthropic principle is not a tautology in the rhetorical sense.

    It does not use different words to say the same thing: A universe might be hospitable to life and yet life might not feel like coming to the party, or none of that life might ever ask a why-question. In other words, getting the parameters right is a necessary but not a sufficient condition for the evolution of intelligent life. The rhetorically tautological version would be “Since you are here asking why the universe is hospitable to life, life must have evolved in that universe that now asks why the universe is hospitable to life.” Which you can easily identify as rhetorical tautology because now it sounds entirely stupid.

  9. It’s not a new or unique application.

    Anthropic-type arguments, based on the observation that there exists somebody in this universe capable of making an observation, are not only used to explain free parameters in our theories. They sometimes appear as “physical” requirements. For example: we assume there are no negative energies because otherwise the vacuum would be unstable and we wouldn’t be here to worry about it. And requirements like locality, separation of scales, and well-defined initial value problems are essentially based on the observation that otherwise we wouldn’t be able to do any science, if there were anybody to do anything at all. Logically, these requirements are the same as anthropic arguments, they just aren’t referred to as such.

  10. Other variants of the anthropic principle have questionable scientific value.

    The anthropic principle becomes speculative, if not to say unscientific, once you try to go beyond the definition that I referred to here. If one does not understand that a consistency constraint does not imply a causal relation, one comes to the strange conclusion that humans caused the universe into existence. And if one does not accept that the anthropic principle is just a requirement that a viable theory has to fulfill, one is stuck with the question why the parameter values are what they are. Here is where the multiverse comes back, for one can then argue that we are forced to believe in the “existence” of universes with all possible combinations. Or you can go off the deep end and argue that our universe was designed for the existence of life.

    Personally I feel the urge to wash my hands after having been in touch with these kinds of arguments. I prefer my principles trivially true.

This post previously appeared October 21st 2014 on Starts with a Bang.

Saturday, November 29, 2014

Negative Mass in General Relativity?

Science News ran a piece the other week about a paper that appeared in PRD titled “Negative mass bubbles in de Sitter spacetime”. The Science News article is behind a paywall, but don’t worry, I’ll tell you everything you need to know.

The arxiv version of the paper is here. Since I’m quoted in the Science News piece saying something to the extent that I have my reservations but think it’s a promising direction of study, I have gotten a lot of questions about negative masses in General Relativity lately. So here a clarification.

First one has to be careful what one means by mass. There are three types of masses: inertial mass, passive gravitational mass, and active gravitational mass. In General Relativity these masses, or their generalizations in terms of tensors respectively, are normally assumed to be identical.

The equality of inertial and passive gravitational mass is basically the equivalence principle. The active gravitational mass is what causes space-time to bend; the passive gravitational mass is what couples to the space-time and determines the motion of particles in that background. The active and passive gravitational masses are identical in almost all theories I know. (The Schrödinger-Newton approach is the only exception that comes to mind). I doubt it is consistent to have them not be equal, but I am not aware of a proof for this. (I tried in the Schrödinger-Newton case, but it’s not as trivial as it looks at first sight.)

In General Relativity one further has to distinguish between the local quantities like energy-density and pressure and so on that are functions of the coordinates, and global quantities that describe the space-time at large. The total mass or energy in some asymptotic limit are essentially integrals over the local quantities, and there are several slightly different ways to define them.

The positive mass theorem, in contrast to what its name suggests, does not state that one cannot have particles with negative masses. It states instead, roughly, that if your local matter is normal matter and obeys certain plausible assumptions, then the total energy and mass are also positive. You thus cannot have stars with negative masses, regardless of how you bend your space-time. This isn’t as trivial a statement as it sounds because the gravitational interaction contributes to the definition of these integrated quantities. In any case, the positive mass theorem holds in space that is asymptotically flat.

Now what they point out in the new paper is that, for all we know, we don’t live in asymptotically flat space but in asymptotic de Sitter space, because observational evidence speaks for a positive cosmological constant. In this case the positive mass theorem doesn’t apply. They then go on to construct a negative mass solution in asymptotic de Sitter space. I didn’t check the calculation in detail, part of it is numerical, but it all sounds plausible to me.

However, it is somewhat misleading to call the solution that they find a negative mass solution. The cosmological constant makes a contribution to the effective mass term in what you can plausibly interpret as the gravitational potential. Taken together, the effective mass in the potential is positive in the region where this solution applies. The local mass (density) is also positive by assumption. (You see this most easily by looking at Fig. 1 in the paper.)

Selling this as a negative mass solution is like one of these ads that say you’ll save $10 if you spend at least $100 – in the end your expenses are always positive. The negative mass in their solution corresponds to the supposed savings. You never really get to see them; what really matters are the total expenses, and these are always positive. There are thus no negative mass particles in this scenario whatsoever. Further, the cosmological constant is necessary for these solutions to exist, so you cannot employ them to replace the cosmological constant.
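One way to see how a cosmological constant can feed into an effective mass term is the Schwarzschild-de Sitter metric (a standard toy illustration of mine, not the actual solution constructed in the paper): in units G = c = 1, the metric function f(r) = 1 - 2M/r - (Λ/3)r² can be read as 1 - 2·m_eff(r)/r, and m_eff can be positive at large radii even when the mass parameter M is negative.

```python
def m_eff(M, Lam, r):
    """Effective mass inside radius r in Schwarzschild-de Sitter space
    (G = c = 1): f(r) = 1 - 2*M/r - (Lam/3)*r**2 = 1 - 2*m_eff(r)/r."""
    return M + Lam * r**3 / 6.0

M, Lam = -1.0, 0.1  # negative mass parameter, positive cosmological constant
for r in (1.0, 2.0, 4.0, 6.0):
    print(r, m_eff(M, Lam, r))  # turns positive once the Lambda term dominates
```

In this reading the negative M is the advertised “savings,” while the Λ-term is the bill you actually pay: the combination that governs the gravitational pull ends up positive in the relevant region.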

It also must be added that showing the existence of a certain solution to Einstein’s field equations is one thing, showing that they have a reasonable chance to actually be realized in Nature is an entirely different thing. For this you have to come up with a mechanism to create them and you also have to show that they are stable. Neither point is addressed in the paper.

Advertisement break: If you want to know how one really introduces negative masses into GR, read this.

In the Science News article Andrew Grant quotes one of the authors as saying:
“Paranjape wants to look into the possibility that the very early universe contained a plasma of particles with both positive and negative mass. It would be a very strange cosmic soup, he says, because positive mass gravitationally attracts everything and negative mass repels everything.”
This is wrong. Gravitation is a spin-2 interaction. It is straightforward to see that this means that like charges attract and unlike charges repel. The charge of gravity is the mass. This does not mean that negative gravitational mass repels everything; negative gravitational mass repels positive mass but attracts negative mass. If this weren’t so, you’d run into the above mentioned inconsistencies. The reason this isn’t so in the case considered in the paper is that they don’t have negative masses to begin with. They have certain solutions that basically have a gravitational attraction which is smaller than expected.

In summary, I think it’s an interesting work, but so far it’s an entirely theoretical construct and its relevance for the description of cosmological dynamics is entirely unclear. There are no negative mass particles in this paper in any sensible interpretation of this term.

Saturday, November 22, 2014

Gender disparity? Yes, please.

[Image Source: Papercards]

Last month, a group of Australian researchers from the life sciences published a paper that breaks down the duration of talks at a 2013 conference by gender. They found that while the overall attendance and number of presentations were almost equally shared between men and women, the women spoke on average for shorter periods of time. The main reason for this was that the women applied for shorter talks to begin with. You find a brief summary on the Nature website.

The twitter community of women in science was all over this, encouraging women to make the same requests as men, asserting that women “underpromote” themselves by not taking up enough of their colleagues’ time.

Other studies have previously found that while women on the average speak as much as men during the day, they tend to speak less in groups, especially so if the group is predominantly male. So the findings from the conference aren’t very surprising.

Now a lot of what goes around on twitter isn’t really meant seriously, see the smiley in Katie Hinde’s tweet. I remarked one could also interpret the numbers to show that men talk too much and overpromote themselves. I was joking of course to make a point, but after dwelling on this for a while I didn’t find it that funny anymore.

Women are frequently told that to be successful they should do the same as men do. I don’t know how often I have seen advice explaining how women are allegedly belittling themselves by talking, well, like a woman. We are supposed to be assertive and take credit for our achievements. Pull your shoulders back, don’t cross your legs, don’t flip your hair. We’re not supposed to end every sentence as if it was a question. We’re not supposed to start every interjection with an apology. We’re not supposed to be emotional and personal, and so on. Yes, all of these are typically “female” habits. We are told, in essence, there’s something wrong with being what we are.

Here is for example a list with public speaking tips: Don’t speak about yourself, don’t speak in a high pitch, don’t speak too fast because “Talking fast is natural with two of your best friends and a bottle of Mumm, but audiences (especially we slower listening men) can’t take it all in”. Aha. Also, don’t flirt and don’t wear jewelry because the slow men might notice you’re a woman.

Sorry, I got sick at point five and couldn’t continue – must have been the Mumm. Too bad if your anatomy doesn’t support the low pitches. If you believe this guy that is, but listen to me for a moment, I swear I’ll try not to flirt. If your voice sounds unpleasant when you’re giving a talk, it’s not your voice, it’s the microphone and the equalizer, probably set for male voices. And do we really need a man to tell us that if we’re speaking about our research at a conference we shouldn’t talk about our recent hiking trip instead?

There are many reasons why women are underrepresented in some professions and overrepresented in others. Some of it is probably biological, some of it is cultural. If you are raising or have raised a child, it is abundantly obvious that our little ones are subjected to gender stereotypes starting at a very young age. Part of it is the clothing and the toys, but more importantly it’s simply that they observe the status quo: Childcare is still predominantly a female business, and I have yet to see a woman on the garbage truck.

Humans are incredibly social animals. It would be surprising if the prevailing stereotypes did not affect us at all. That’s why I am supportive of all initiatives that encourage children to develop their talents regardless of whether these talents are deemed suitable for their gender, race, or social background. Because these stereotypes are thousands of years old and have become hurdles to our self-development. By and large, I see more encouragement for girls than for boys to follow their passion regardless of what society thinks, and I also see that women have more backup fighting unrealistic body images, which is what this previous post was about. Ironically, I was criticized on twitter for saying that boys don’t need to have a superhero body to be real men because that supposedly wasn’t fair to the girls.

I am not supportive of hard quotas that aim at prefixed male-female ratios. There is no scientific support for these ratios, and moreover I witnessed repeatedly that these quotas have a big backlash, creating a stigma that “She is just here because” whether or not that is true.

Thus, women are presently likely still more underrepresented than they would be if we managed to ignore the social pressure to follow ancient stereotypes. And so I think we would benefit from more women among the scientists, especially in math-heavy disciplines. Firstly because we are unnecessarily missing out on talent. But also because diversity is beneficial for the successful generation and realization of ideas. The relevant diversity is in the way we think and argue. Again, this is probably partly biological and partly cultural, but whatever the reason, a diversity of thought should be encouraged, and this diversity is almost certainly correlated with demographic diversity.

That’s why I disapprove of so-called advice that women should talk and walk and act like men. Because that’s exactly the opposite of what we need. Science stands to benefit from women being different from men. Gender equality doesn’t mean the genders should be equal, it means they should have the same opportunities. So women are more likely to volunteer to organize social events? Wtf is wrong with that?

So please go flip your hair if you feel like it, wear your favorite shirt, put on all the jewelry you like, and generally be yourself. Don’t let anybody tell you to be something you are not. If you need the long slot for your talk go ahead. If you’re confident you can get across your message in 15 minutes, even better, because we all talk too much anyway.

About the video: I mysteriously managed to produce a video in High Definition! Now you can see all my pimples. My husband made a good camera man. My anonymous friend again helped cleaning up the audio file. Enjoy :)