Tuesday, December 31, 2013

Next Year’s News

As the year nears its end, my feed is full of last year’s news. For balance, I want to give some space to next year’s news. What do you think will be in the news next year?

In the very short run, technological developments aren’t so difficult to foresee, mostly because our brains all tick similarly. But by the time I have an idea, it’s highly likely somebody is already working on it. I’m not much of an inventor.

After the babies were born, for example, I remarked to Stefan that it’s about time somebody came up with a high-tech diaper that sends me an email when it’s full. Done.

Or this: For some years now, I keep reading about transcranial magnetic stimulation that allegedly boosts learning ability or intelligence in general. However, intelligence is a highly complex trait and I don’t buy a word of this. But wouldn’t the world be a much better place if we’d just sleep better? And don’t we know so much more about the brain in sleep-mode than in learning-mode? So why not use transcranial magnetic stimulation to treat insomnia? Somebody somewhere is working on that.

Here are three of my speculations that I haven’t yet heard much about:

    Robotic brain extension

    Insects’ brains can be hooked up to chips and remote controlled. But rather than using a computer to control a biological neural network, it would be much cooler if one could integrate the bio-brain to improve the tasks that robots are typically bad at. Balance, for example, is extremely difficult for robots, yet the smallest bird-brain can easily balance a body on one leg, thanks to millions of years of evolution. Can one engineer the two to work together, provided some ethics committee nods approval?

    Inaudibility shield

    Forget about invisibility shields; nobody wants to be invisible, just look at YouTube. What we really need in the days of iPhones and people talking to their glasses is an inaudibility shield. I neither want to hear your phone calls, nor do you want to hear mine. Somebody please work on this.

    The conscious subconsciousness

    I read all the time about computers that are brain-controlled or, most recently, about brain-to-brain communication. However, the former is still far too cumbersome and will remain so for a long time, and the latter creeps out most people. So let’s stick with one head and just route signals from the unconscious parts of our brain activity to the conscious parts and vice versa. Imagine for a moment how dramatically the world might change if we had the ability to readjust hormonal balance or heart rate.
Happy New Year! May it be a good one for you.

Friday, December 27, 2013

The finetuned cube

The standard model of parenthood.
The kids are almost three years old now and I spend a lot of time picking up wooden building blocks. That’s good for your health in many ways, for example through the following brain gymnastics.

When I scan the floor under the couch for that missing cube, I don’t expect to find it balancing on a corner - would you? And in the strange event that you found it delicately balanced on a corner, would you not expect to also find something, or somebody, that explains this?

When physicists scanned the LHC data for that particle, that particle you’re not supposed to call the god-particle, they knew it would be balancing on a corner. The Higgs is too light, much too light, that much we knew already. And so, before the LHC most physicists expected that once they’d be able to see the Higgs, they’d also catch a glimpse of whatever it was that explained this delicate balance. But they didn’t.

It goes under the name ‘naturalness,’ the belief that a finely tuned balance requires additional explanation. “Naturally” is the physicist’s way of saying “of course”. Supersymmetry, neatified to Susy, was supposed to be the explanation for finetuning, but Susy has not shown up, and neither has anything else. The cube stands balanced on the corner, seemingly all by itself.

Of course those who built their careers on Susy cross-sections are not happy. They are now about to discard naturalness, because discarding it would mean Susy could hide anywhere or nowhere, as long as it’s not within reach of the LHC. And beyond the LHC there are 16 orders of magnitude of space for more papers. Peter Woit tells this tale of changing minds on his blog. The denial of pre-LHC arguments is so bold it deserves a book (hint, hint), but that’s a people-story and not mine to tell. Let me thus leave aside the psychological morass and the mud-throwing, and just look at the issue at hand: naturalness, or its absence.

I don’t believe in naturalness, the idea that finetuned parameter values require additional explanation. I recognize that it can be a useful guiding principle, and that apparent finetuning deserves a search for its cause, but it’s a suggestion rather than a requirement.

I don’t believe in naturalness because the definition of finetuning itself is unnatural in its focus on numerical parameters. The reason physicists focus on numbers is that numbers are easy to quantify - they are already quantified. The cosmological constant is 120 orders of magnitude too large, which is bad with countably many zeros. But the theories that we use are finetuned to describe our universe in many other ways. It’s just that physicists tend to forget how weird mathematics can be.
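
For the record, the 120 orders of magnitude come from comparing the observed vacuum energy density with the naive quantum field theory estimate at the Planck scale; the numbers below are the standard back-of-the-envelope values, not taken from this post:

$$ \rho_\Lambda^{\rm obs} \sim \left(2\times 10^{-3}\,{\rm eV}\right)^4 \sim 10^{-47}\,{\rm GeV}^4 \,, \qquad \rho_\Lambda^{\rm naive} \sim M_{\rm Pl}^4 \sim 10^{76}\,{\rm GeV}^4 \,, $$

which gives a ratio of roughly $10^{-123}$, the infamous 120-something orders of magnitude.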

We work with manifolds of integer dimension that allow for a metric and a causal structure, we work with smooth and differentiable functions, we work with bounded Hamiltonians and hermitian operators and our fibre bundles are principal bundles. There is absolutely no reason why this has to be, other than that evidence shows it describes nature. That’s the difference between math and physics: In physics you take that part of math that is useful to explain what you observe. Differentiable functions, to pick my favorite example because it can be quantified, have measure zero in the space of all functions. That’s infinite finetuning. It’s just that nobody ever talks about it. Be wary whenever you meet the phrase “of course” in a scientific publication – infinity might hide behind it.

This finetuning of mathematical requirements appears in the form of the axioms of the theory – it’s a finetuning in theory space, and a selection is made based on evidence: differentiable manifolds with Lorentzian metric and hermitian operators work. But selecting the value of numerical parameters based on observational evidence is no different from selecting any other axiom. The existence of ‘multiverses’ in various areas of physics is similarly a consequence of the need to select axioms. Mathematical consistency is simply insufficient as a requirement to describe nature. Whenever you push your theory too far and its ties to observation loosen too much, you get a multiverse.

My disbelief in naturalness used to be a fringe opinion and it’s gotten me funny looks on more than one occasion. But the world refused to be as particle physicists expected, naturalness is rapidly losing popularity, and now it’s my turn to practice funny looks. The cube, it’s balancing on a tip and nobody knows why. In desperation they throw up their hands and say “anthropic principle”. Then they continue to produce scatter plots. But that’s a logical fallacy called ‘false dichotomy’, the claim that if it’s not natural it must be anthropic.

That I don’t believe in naturalness as a requirement doesn’t mean I think it a useless principle. If you have finetuned parameters, it will generally be fruitful to figure out the mechanism of finetuning. This mechanism will inevitably constitute another instance of finetuning in one way or the other, either in parameter space or in theory space. But along the way you can learn something, while falling back on the anthropic principle doesn’t teach us anything. (In fact, we already know it doesn’t work.) So if you encounter finetuning, it’s a good idea to look for a mechanism. But don’t expect that mechanism to work without finetuning itself - because it won’t.

If that was too many words, watch this video:


It’s a cube that balances on a tip. If your resolution scale is the size of the cube, all you will find is that it’s mysteriously finetuned. The explanation for that finetuned balance you can only find if you look into the details, on scales much below the size of the cube. If you do, you’ll find an elaborate mechanism that keeps the cube balanced. So now you have an explanation for the balance. But that mechanism is finetuned itself, and you’ll wonder then just why that mechanism was there in the first place. That’s the finetuning in theory space.

Now in the example with the above video we know where the mechanism originated. Metaphors all have their shortcomings, so please don’t mistake me for advocating intelligent design. Let me just say that the origin of the mechanism was a complex multi-scale phenomenon that you’d not be able to extract in an effective field theory approach. In a similar way, it seems plausible to me that the unexplained values of parameters in the standard model can’t be derived from any UV completion by way of an effective field theory, at least not without finetuning. The often used example is that hundreds of years ago it was believed that the orbits of planets had to be explained by some fundamental principle (regular polyhedra stacked inside each other, etc). Today nobody would assign these numbers fundamental relevance.

Of course I didn’t find a cube balancing on a tip under the couch. I didn’t find the cube until I stepped on it the next morning. I did however quite literally find a missing puzzle piece – and that’s as much as a theoretical physicist can ask for.

Tuesday, December 24, 2013

Merry Christmas and Happy Holidays

I wish you all happy holidays and a peaceful Christmas time!
The next days will be a slow time on the blog as we're working our way through the holidays, the kids' 3rd birthday, and New Year, with all the family and friends' visits that come with it. Back to business next year - enjoy the silence :)

Thursday, December 19, 2013

Searching for Spacetime Defects

Defects in graphene.
Image source: IOP.
Whether or not space and time are fundamentally discrete is one of the central questions of quantum gravity. Discretization is a powerful method to tame divergences that plague the quantization of gravity, and it is thus not surprising that many approaches to quantum gravity rely on some discrete structure, be that condensed matter analogies, triangulations, or approaches based on networks. One expects discretization to explain the singularities that occur in general relativity as unphysical, much like singularities in hydrodynamics are merely mathematical artifacts that appear because at short distances the fluid approximation for collections of atoms is no longer applicable.

But finding experimental evidence for space-time discreteness is difficult because this structure is at the Planck scale and thus way beyond what we can directly probe. The best tests for such discrete approaches thus do not rely on the discreteness itself but on the baggage it brings, such as violations or deformations of Lorentz-symmetry that can be very precisely tested. Alas, what if the discrete structure does not violate Lorentz-symmetry? That is the question I have addressed in my two recent papers.

In discrete approaches to quantum gravity, space-time is not, fundamentally, a smooth background. Instead, the smooth background that we use in general relativity – the rubber sheet on which the marbles roll – is only an approximation that becomes useful at long distances. The discrete structure itself may be hard to test, but in any such discrete approach one expects the approximation of the smooth background to be imperfect. The discrete structure will have defects, much like crystals have defects, just because perfection would require additional explanation.

The presence of space-time defects affects how particles travel through the background, and the defects thus become potentially observable, constituting indirect evidence for space-time discreteness. To be able to quantify the effects, one needs a phenomenological model that connects the number and type of defects to observables, and that can in turn serve to derive constraints on the prevalence and properties of the defects.

In my papers, I distinguished two different types of defects: local defects and non-local defects. The requirement that Lorentz-invariance is maintained (on the average) turned out to be very restrictive on what these defects can possibly do.

The local defects are similar to defects in crystals, except that they are localized both in space and in time. These local defects essentially induce a violation of momentum conservation. This leads to a fairly straightforward modification of particle interactions whenever a defect is encountered, which makes the defects potentially observable even if they are very sparse.

The non-local defects are less intuitive from the particle-physics point of view. They were motivated by what Markopoulou and Smolin called ‘disordered locality’ in spin-networks, just that I did not, try as I might, succeed in constructing a version of disordered locality compatible with Lorentz-invariance. The non-local defects in my paper are thus essentially the dual of the local defects, which renders them Lorentz-invariant (on the average). Non-local defects induce a shift in position space in the same way that the local defects induce a shift in momentum space.
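
In symbols, and schematically rather than in the notation of the papers: when a particle encounters a defect,

$$ \text{local defect:}\;\; p_\mu \,\to\, p_\mu + \Delta p_\mu \,, \qquad\qquad \text{non-local defect:}\;\; x^\mu \,\to\, x^\mu + \Delta x^\mu \,, $$

where the kicks $\Delta p$ and $\Delta x$ are drawn from distributions that are Lorentz-invariant on the average. A local defect thus violates momentum conservation, while a non-local defect translates the particle in position space, which is what lets photons stray off the lightcone.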

I looked at a bunch of observable effects that the presence of defects of either type would lead to, such as CMB heating (from photon decay induced by scattering on the local defects) or the blurring of distant astrophysical sources (from deviations of photons from the lightcone caused by non-local defects). It turns out that generally the constraints are stronger for low-energetic particles, in contrast to what one finds in deformations of Lorentz-invariance.

Existing data give some pretty good constraints on the density of defects and the parameters that quantify the scattering process. In the case of local defects, the density is roughly speaking less than one per fm⁴. That’s an exponent, not a footnote: it has to be a four-volume, otherwise it wouldn’t be Lorentz-invariant. For the non-local defects the constraints cannot as easily be summarized in a single number because they depend on several parameters, but there are contour plots in my papers.

The constraints so far are interesting, but not overwhelmingly exciting. The reason is that the models are only for flat space and thus not suitable to study cosmological data. To make progress, I'll have to generalize them to curved backgrounds. I also would like to combine both types of defects in a single model. I am presently quite excited about this because there is basically nobody who has previously looked at space-time defects, and there’s thus a real possibility that analyzing the data the right way might reveal something unexpected. And in the other direction, I am looking into a way to connect this phenomenological model to approaches to quantum gravity by extracting from them the parameters that I have used. So, you see, more work to do...

Sunday, December 15, 2013

Mathematics, the language of nature. What are you sinking about?

Swedish alphabet. Note lack of W. [Source]

“I was always bad at math” is an excuse I have heard many of my colleagues complain about. I’m reluctant to join their complaints. I’ve been living in Sweden for four years now and still don’t speak Swedish. If somebody asks me, I’ll say I was always bad with languages. So who am I to judge people for not wanting to make an effort with math?

People don’t learn math for the same reason I haven’t learned Swedish: they don’t need it. It’s a fact that my complaining colleagues are tiptoeing around, but I think we’d better acknowledge it if we ever want to raise mathematical literacy.

Sweden is very welcoming to immigrants and almost everybody happily speaks English with me, often so well that I can’t tell if they’re native Swedes or Brits. At my workplace, the default language is English, both written and spoken. I have neither the exposure, nor the need, nor the use for Swedish. As a theoretical physicist, I have plenty of need for and exposure to math. But most people don’t.

The NYT recently collected opinions on how to make math and science relevant to “more than just geeks,” and Kimberly Brenneman, Director of the Early Childhood STEM Lab at Rutgers, informs us that
“My STEM education colleagues like to point out that few adults would happily admit to not being able to read, but these same people have no trouble saying they’re bad at math.”
I like to point out that it’s more surprising they like to point this out than that it is the case. Life is extremely difficult when one can read neither manuals, nor bills, nor all the forms and documents that are sometimes mistaken for hallmarks of civilization. Not being able to read is such a disadvantage that it makes people wonder what’s wrong with you. But besides the basics that come in handy to decipher the fine print on your contracts, math is relevant only to specific professions.

I am lying of course when I say I was always bad with languages. I was bad with French and Latin and as my teachers told me often enough, that was sheer laziness. Je sais, tu sais, nous savons - Why make the effort? I never wanted to move to France. I learned English just fine: it was useful and I heard it frequently. And while my active Swedish vocabulary never proceeded beyond the very basics, I quickly learned Swedish to the extent that I need it. For all these insurance forms and other hallmarks of civilization, to read product labels, street signs and parking tickets (working on it).

I think that most people are also lying when they say they were always bad at math. They most likely weren’t bad, they were just lazy, never made an effort and got away with it, just as I did with my spotty Latin. The human brain is energetically highly efficient, but the downside is the inertia we feel when having to learn something new, the inertia that’s asking “Is it worth it? Wouldn’t I be better off hitting on that guy because he looks like he’ll be able to bring home food for a family?”

But mathematics isn’t the language of a Northern European country with a population less than that of other countries’ cities. Mathematics is the language of nature. You can move out of Sweden, but you can’t move out of the universe. And much like one can’t truly understand the culture of a nation without knowing the words at the basis of their literature and lyrics, one can’t truly understand the world without knowing mathematics.

Almost everybody uses some math intuitively. Elementary logic, statistics, and extrapolations are to some extent hardwired in our brains. Beyond that it takes some effort, yes. The reward for this effort is the ability to see the manifold ways in which natural phenomena are related, how complexity arises from simplicity, and the tempting beauty of unifying frameworks. It’s more than worth the effort.

One should make a distinction here between reading and speaking mathematics.

If you work in a profession that uses math productively or creatively, you need to speak math. But for the sake of understanding, being able to read math is sufficient. It’s the difference between knowing the meaning of a differential equation, and being able to derive and solve it. It’s the difference between understanding the relevance of a theorem, and carrying out the proof. I believe that the ability to ‘read’ math alone would enrich almost everybody’s life, and it would also benefit scientific literacy generally.

So needless to say, I am supportive of attempts to raise interest in math. I am just reluctant to join complaints about the bad-at-math excuse because this discussion more often than not leaves aside that people aren’t interested because it’s not relevant to them. And what is relevant to them, most mathematicians wouldn’t even call math. Without addressing this point, we’ll never convince anybody to make the effort to decipher a differential equation.

But of course people learn things they don’t need all the time! They learn to dance Gangnam style, speak Sindarin, or memorize the cast of Harry Potter. They do this because the cultural context is present. Their knowledge is useful for social reasons. And that is why I think the most important points for raising mathematical literacy are:

Exposure

Popular science writing rarely if ever uses any math. I want to see the central equations and variables. It’s not only that metaphors and analogies inevitably have shortcomings, but more importantly that the reader gets away with the idea that one doesn’t actually need all these complicated equations. It’s a slippery slope that leads to the question of what we need all these physicists for anyway. The more often you see something, the more likely you are to think and talk about it. That’s why we’re flooded with frequently nonsensical adverts that communicate little more than a brand name, and that’s why just showing people the math would work towards mathematical literacy.

I would also really like to see more math in news items generally. If experts are discussing what they learned from the debris of a plane crash, I would be curious to hear what they did. Not in great detail, but just to get a general idea. I want to know how the number quoted for energy return on investment was calculated, and I want to know how they arrived at the projected carbon capture rate. I want to see a public discussion of the Stiglitz theorem. I want people to know just how often math plays a role for what shapes their life and the lives of those who will come after us.

Don’t tell me it’s too complicated and people won’t understand it and it’s too many technical terms and, yikes, it won’t sell. Look at the financial part of a newspaper. How many people really understand all the terms and details, all the graphs and stats? And does that prevent them from having passionate discussions about the stock market? No, it doesn’t. Because if you’ve seen and heard it sufficiently often, the new becomes familiar, and people talk about what they see.

Culture

We don’t talk about math enough. The residue theorem in complex analysis is one of my favorite theorems. But I’m far more likely to have a discussion about the greatest songs of the 60s than about the greatest theorems of the 19th century. (Sympathy for the devil.) The origin of this problem is lack of exposure, but even with the exposure people still need the social context to put their knowledge to use. So by all means, talk about math if you can and tell us what you’re sinking about!

Thursday, December 12, 2013

The Comeback of Massive Gravity

Massive gravity, a modification of general relativity in which gravitons have mass, has an interesting history. Massive gravity was long believed to be internally inconsistent, but physicists at Stockholm University now claim to have constructed a consistent theory for massive gravity. This theory is a viable alternative to general relativity and can address some of its problems.

In Einstein’s theory of general relativity gravitational waves spread with the speed of light, and the quanta of the gravitational field, the gravitons, are expected to do the same*. To be more precise, gravitons move with the speed of massless particles because they are assumed to be massless. But whether or not a particle is indeed massless is in the end a question of experiment.

Neutrinos were long believed to be massless, but we know today that at least two of them have tiny non-zero masses (whose absolute value has not yet been determined). The mass of the photon is known to be zero to extremely high precision on experimental grounds. But what about gravity? This is a timely question because a small mass would lead to a long-distance modification of general relativity, and present observational evidence left physicists with some puzzles at these long distances, notably dark energy and dark matter.

However, to be able to even properly ask whether gravitons have masses, we need a consistent theory for massive gravity. But making gravitons massive is a challenge for the theoretical physicist. In fact, it was long believed to be impossible.

The problems start when you want to introduce a mass-term into general relativity. For a vector field you can form the contraction A_νA^ν and multiply it by the mass squared to get a mass term. In general relativity the field is the metric tensor, and the only full contractions that you can create without using derivatives are constant: they create a cosmological constant, not a graviton mass. If you want a mass-term in general relativity you need a second two-tensor, that is a field which looks like a metric but isn’t the metric. Theories of this type are also known as ‘bi-metric’. Massive gravity is thus intimately related to bi-metric gravity.
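
To see this explicitly (a little calculation of my own, not from the post): contracting the metric with its own inverse just counts space-time dimensions,

$$ g^{\mu\nu} g_{\mu\nu} = \delta^{\mu}{}_{\mu} = 4 \,, $$

so a would-be mass term like $\sqrt{-g}\, m^2\, g^{\mu\nu}g_{\mu\nu} = 4 m^2 \sqrt{-g}$ is just a cosmological constant term in disguise, and the same goes for any other derivative-free contraction of the metric with itself.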

But that’s only the beginning of the problems, a beginning that dates back more than 70 years.

In 1939, Fierz and Pauli wrote down a theory of massive gravity in the perturbative limit. They found that for the theory to be consistent – meaning free of ‘ghosts’ that lead to unphysical instabilities – the parameters in the mass-terms must have specific values. With these values, the theory is viable.
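
In modern notation, and with conventions chosen for illustration rather than taken from their paper: writing the metric as flat space plus a perturbation, $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, the most general derivative-free mass term is

$$ \mathcal{L}_{\rm mass} \;\propto\; m^2 \left( h_{\mu\nu} h^{\mu\nu} - a\, h^2 \right), \qquad h \equiv \eta^{\mu\nu} h_{\mu\nu} \,, $$

and the ghost is absent only for the Fierz-Pauli choice a = 1; any other value propagates an additional scalar mode with a wrong-sign kinetic term.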

In 1970 however, van Dam and Veltman and, independently, Zakharov, showed that in the Fierz-Pauli approach the limit in which the mass of the graviton is taken to zero is not continuous and does not, contrary to naïve expectations, reproduce general relativity. Any graviton mass, regardless of how small, leads to deviations that can contribute factors of order one to observables, which is in conflict with observation. The Fierz-Pauli theory now seemed theoretically fine, but experimentally ruled out.

Two years later, in 1972, Vainshtein argued that this discontinuity is due to the treatment of the gravitational degrees of freedom in the linearization procedure and can be cured in a full, non-linear, version of massive gravity. Unfortunately, in the same year, Deser and Boulware claimed that any non-linear completion of the Fierz-Pauli approach reintroduces the ghost. So now massive gravity was experimentally fine but theoretically sick.

Nothing much happened in this area for more than 30 years. Then, in the early 2000s, the wormy can was opened again by Arkani-Hamed et al and Creminelli et al, but they essentially confirmed the Deser-Boulware problem.

The situation began to look brighter in 2010, when de Rham, Gabadadze and Tolley proposed a theory of massive gravity that did not suffer from the ghost-problem in a certain limit. Needless to say, after massive gravity had been thought dead and buried for 40 years, nobody really believed this would work. The de Rham-Gabadadze approach did not make many friends because the second metric was treated as a fixed background field, and the theory was shown to allow for superluminal propagation (and, more recently, acausality).

However, starting in 2011, Fawad Hassan and Rachel Rosen from Stockholm University (i.e. next door) succeeded in formulating a theory of massive gravity that does not suffer from the ghost instability. The key to success was a generalization of the de Rham-Gabadadze approach in which the second metric is also fully dynamical, and the interaction terms between the two metrics take on a specific form. This specific form of the interaction terms is chosen such that it generates a constraint which removes the ghost field. The resulting theory is, to the best of present knowledge, fully consistent and symmetric between the two metrics.

(Which, incidentally, explains my involvement with the subject, as I published a paper with a fully dynamical, symmetric, bi-metric theory in 2008, though I wasn’t interested in the massive case and didn’t have interaction terms. The main result of my paper is that I ended up on the defense committees of Fawad’s students.)

In the last years, the Stockholm group has produced a series of very interesting papers that not only formalize their approach and show its consistency, but also derive specific solutions. This is no small feat, as it is already difficult to find solutions in general relativity if you have only one metric, and having two doesn’t make the situation easier. Indeed, not many solutions are presently known, and the known ones rely on quite strong symmetry assumptions. (More students in the pipeline...)

Meanwhile, others have studied how well this modification of general relativity fares as an alternative to ΛCDM. It has been found that massive gravity can fit all cosmological data without the need to introduce an additional cosmological constant. But before you get too excited about this, note that massive gravity has more free parameters than ΛCDM, namely the coupling constants in the interaction terms.

What is missing right now though is a smoking-gun signal, some observation that could be used to distinguish massive gravity from standard general relativity. This is presently a very active area of research and one that I’m sure we’ll hear more about.



* To be precise, in the classical theory we should be speaking of gravitational waves instead. The frequent confusion between gravitational waves and gravitons, the latter of which only exist in quantized gravity, is a bad habit but forgivable. Far worse are people who say ‘gravity wave’ when they refer to a gravitational wave. A gravity wave is a wave in a fluid for which gravity is the restoring force - the kind that sometimes shows up in cloud formations - and has nothing to do with linearized gravity.

Saturday, December 07, 2013

Are irreproducible scientific results okay and just business as usual?

That I even have to write about this tells you how bad it is.

Over the last few years a lot of attention has been drawn to the prevalence of irreproducible results in science. That published research findings tend to weaken or vanish over time is a pressing problem, in particular in some areas of the life sciences, psychology and neuroscience. On the face of it, the issue is that scientists work with samples that are too small and frequently cherry-pick their data. Next to involuntarily poor statistics, the blame has primarily been put on the publish-or-perish culture of modern academia.

While I blame that culture for many ills, I think here the finger is pointed at the wrong target.

Scientists aren’t interested in publishing findings that they suspect to be spurious. That they do it anyway is because a) funding agencies don’t hand out sufficient money for decent studies with large samples, b) funding agencies don’t like reproduction studies because, eh, it’s been done before, and c) journals don’t like to publish negative findings. The latter in particular leads scientists to actively search for effects, which creates a clear bias. It also skews meta-studies against null results.

That’s bad, of course.

I will not pretend that physics is immune to this problem, though in physics the issue is, forgive my language, significantly less severe.

A case in point though is the application of many different analysis methods to the same data set. Collaborations have their procedures sorted out to avoid this pitfall, but once the data is public it can be analyzed by everybody and their methods, and sooner or later somebody will find something just by chance. That’s why, every once in a while, we hear of a supposedly interesting peculiarity in the cosmic microwave background, you know, evidence for a bubble collision, parallel universes, a cyclic universe, a lopsided universe, an alien message, and so on. One cannot even blame them for not accounting for other researchers who are trying creative analysis methods on the same data, because that’s unknown unknowns. And theoretical papers can be irreproducible in the sense of just being wrong, but the vast majority of these just get ignored (and if not, the error is often of interest in itself).

So even while the fish at my doorstep isn’t the most rotten one, I think irreproducible results are highly problematic, and I welcome measures that have been taken, eg by Nature magazine, to improve the situation.

And then there’s Jared Horvath, over at SciAm blogs, who thinks irreproducibility is okay, because it’s been done before. He lists some famous historical examples where scientists have cherry-picked their data because they had a hunch that their hypothesis is correct even if the data didn’t support it. Jared concludes:
“There is a larger lesson to be gleaned from this brief history. If replication were the gold standard of scientific progress, we would still be banging our heads against our benches trying to arrive at the precise values that Galileo reported.”
You might forgive Jared, who is a PhD candidate in cognitive neuroscience, for cherry-picking his historical data, because he’s been trained in today’s publish-or-perish culture. Unfortunately, he’s not the only one who believes that something is okay because a few people in the past succeeded with it. Michael Brooks has written a whole book about it. In “Free Radicals: The Secret Anarchy of Science”, you can read for example
“It is the intuitive understanding, the gut feeling about what the answer should be, that marks the greatest scientists. Whether they fudge their data or not is actually immaterial.”
Possibly the book gets better after this, but I haven’t progressed beyond this page because every time I see that paragraph I want to cry.

The “gut feeling about what the answer should be” does mark great scientists, yes. It also marks pseudoscientists and crackpots, just that you don’t find these in the history books. The argument that fudging data is okay because great scientists did it and time proved them right is like browsing bibliographies and concluding that in the past everybody was famous.

I’m not a historian and I cannot set that record straight, but I can tell you that the conclusion that irreproducibility is a necessary ingredient to scientific progress is unwarranted.

But I have one piece of data to make my case, a transcript of a talk given by Irving Langmuir in the 1950s, published in Physics Today in 1989. It carries the brilliant title “Pathological Science” and describes Langmuir’s first-hand encounters with scientists who had a gut feeling about what the answer should be. I really recommend you read the whole thing (pdf here), but just for the flavor here’s an excerpt:
Mitogenic rays.
“About 1923 there was a whole series of papers by Gurwitsch and others. There were hundreds of them published on mitogenic rays. There are still a few of them being published [in 1953]. I don’t know how many of you have ever heard of mitogenic rays. They are given off by growing plants, living things, and were proved, according to Gurwitsch, to be something that would go through quartz but not through glass. They seemed to be some sort of ultraviolet light… If you looked over these photographic plates that showed this ultraviolet light you found that the amount of light was not so much bigger than the natural particles of the photographic plate, so that people could have different opinions as to whether or not it showed this effect. The result was that less than half of the people who tried to repeat these experiments got any confirmation of it…”
Langmuir relates several stories of this type, all about scientists who discarded some of their data or read their output in their favor. None of these scientists has left a mark in the history books. They have however done one thing: they’ve wasted their own and other scientists’ time by not properly accounting for their methods.

There were hundreds of papers published on a spurious result – in 1953. Since then the scientific community has considerably grown, technology has become much more sophisticated (not to mention expensive), and scientists have become increasingly specialized. For most research findings, there are very few scientists who are able to conduct a reproduction study, even leaving aside the problems with funding and publishing. In 2013, scientists have to rely on their colleagues much more than was the case 60 years ago, and certainly than in the days of Millikan and Galileo. The harm caused by cherry-picked data and unreported ‘post-selection’ (a euphemism for cherry-picking), in terms of wasted time, has increased with the size of the community. Heck, there were dozens of researchers who wasted time (and thus their employers’ money...) on ‘superluminal neutrinos’ even though everybody knew these results to be irreproducible (in the sense that they hadn’t been found by any previous measurements).

Worse, this fallacious argument signals a basic misunderstanding about how science works.

The argument is based on the premise that if a scientific finding is correct, it doesn’t matter where it came from or how it was found. That is then taken to justify ignoring any scientific method (and frequently attributed to Feyerabend). It is correct in that in the end it doesn’t matter exactly how a truth about nature was revealed. But we do not speak of a scientific method to say that there is only one way to make progress. The scientific method is used to increase the chances of progress. It’s the difference between letting the proverbial monkey hammer away and hiring a professional science writer for your magazine’s blog. Yes, the monkey can produce a decent blogpost, and if that is so then that is so. But chances are eternal inflation will end before you get to see a good result. That’s why scientists have quality control and publishing ethics, why we have peer review and letters of recommendation, why we speak about statistical significance and double-blind studies and reproducible results: not because in the absence of methods nothing good can happen, but because these methods have proven useful to prevent us from fooling ourselves and thereby make success considerably more likely.

Having said that, expert intuition can be extremely useful and there is nothing wrong with voicing a “gut feeling” as long as it is marked as such. It is unfortunate indeed that the present academic system does not give much space for scientists to express their intuition, or maybe they are shying away from it. But that’s a different story and shall be told another time.

So the answer to the question posed in the title is a clear no. The question is not whether science has progressed despite the dishonest methods that have been employed in the past, but how much better it would have progressed if that had not been so.

----

I stole that awesome gif from over here. I don't know its original source.

Wednesday, December 04, 2013

The three phases of space-time

If life threatens to grow over your head, your closest pop-psy magazine recommends dividing it up into small, manageable chunks. Physicists too apply this method in difficult situations. Discrete approximations – taking a system apart into chunks – are enormously useful to understand emergent properties and to control misbehavior, such as divergences. Discretization is the basis of numerical simulations, but it can also be used in an analytic approach, when the size of the chunks is eventually taken to zero.

Understanding space and time in the early universe is such a difficult situation: gravity misbehaves and quantum effects of gravity should become important, yet we don’t know how to deal with them. Discretizing the system and treating it like other quantum systems is maybe the most conservative approach one can think of, yet it is challenging. Normally, discretization is applied to a system within space and time. Now it is space and time themselves that are being discretized, and there is no underlying geometry to serve as a reference for the discretization.

Causal Dynamical Triangulations (CDT), pioneered by Loll, Ambjørn and Jurkiewicz, realizes this most conservative approach towards quantum gravity. Geometry is decomposed into triangular chunks (or their higher-dimensional versions respectively) and all possible geometries are summed over in a path integral (after Wick-rotation) with a weight given by the discretized curvature. The curvature is encoded in the way the chunks are connected to each other. The term ‘causal’ refers to a selection principle for the geometries that are being summed over. In the end, the continuum limit can be taken, so this approach in and by itself doesn’t mean that spacetime is fundamentally discrete, just that it can be approximated by a discretization procedure.

The path integral that plays the central role here is Feynman’s famous brain child in which a quantum system takes all possible paths, and observables are computed by suitably summing up all possible contributions. It is the mathematical formulation of the statement that the electron goes through both slits. In CDT it’s space-time that goes through all allowed chunk configurations.
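
For the curious, the quantity being computed looks schematically as follows; the notation here is generic and glosses over the precise form of the discretized (Regge) action used in the actual simulations:

$$ Z \;=\; \sum_{\text{causal triangulations } T} \frac{1}{C_T}\; e^{-S_E[T]} \,, \qquad S_E[T] \;\simeq\; -\kappa_0\, N_0(T) + \kappa_4\, N_4(T) + \Delta \times (\text{simplex-type asymmetry}) \,, $$

where N_0 and N_4 count the vertices and four-simplices of a triangulation T, C_T is a symmetry factor, κ_0 is related to the inverse of Newton’s constant (the κ of the phase diagram below), κ_4 is tuned so that the sum converges, and Δ weights the relative abundance of the two types of building blocks.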

Evaluating the path integral of the triangulations is computationally highly intensive, but simple universes can now be simulated numerically. The results that have been found during the last years are promising: the approach produces a smooth extended geometry that appears well-behaved. This doesn’t sound like much, but keep in mind that they didn’t start with anything resembling geometry! It’s discrete things glued together, yet it reproduces a universe with a well-behaved geometry like the one we see around us.

Or does it?

The path integral of CDT contains free parameters, and most recently the simulations found that the properties of the universe it describes depend on the value of the parameters. I find this very intriguing because it means that, if space-time's quantum properties are captured by CDT, then space-time has various different phases, much like water has different phases.

In the image below you see the phase-diagram of space-time in CDT.

CDT phase diagram of space-time.
After Fig 5 in 1302.2173

The parameter κ is proportional to the inverse of Newton’s constant, and the parameter Δ quantifies the (difference in the) abundance of two different types of chunks that space-time is built up of. The phase marked C in the upper left, with the Hubble image, is where one finds a geometry resembling our universe. In the phase marked A to the right space-time falls apart into causally disconnected pieces. In the phase marked B at the bottom, space-time clumps together into a highly connected graph with a small diameter that doesn’t resemble any geometry. The numerical simulations indicate that the transition between the phases C and A is first order, and between C and B it’s second order.

In summary, in phase A everything is disconnected. In phase B everything is connected. In phase C you can share images of your lunch with people you don’t know on facebook.

Now you might say, well, but the parameters are what they are and facebook is what it is. But in quantum theory, parameters tend to depend on the scale, that is the distance or energies by which a system is probed. Physicists say “constants run”, which just rephrases the somewhat embarrassing statement that a constant is not constant. Since our universe is not in thermal equilibrium and has cooled down from a state of high temperature, constants have been running, and our universe can thus have passed through various phases in parameter space.

Of course it might be that CDT in the end is not the right way to describe the quantum properties of gravity. But I find this a very interesting development, because such a geometric phase transition might have left observable traces and brings us one step closer to experimental evidence for quantum gravity.

-----------
You can find a very good brief summary of CDT here, and the details eg in this paper.

Images used in the background of the phase diagram are from here, here and here.

Wednesday, November 27, 2013

Cosmic Bell

On the playground of quantum foundations, Bell’s theorem is the fence. This celebrated theorem – loved by some and hated by others – shows that correlations in quantum mechanics can be stronger than in theories with local hidden variables. Such local hidden variables theories are modifications of quantum mechanics which aim to stay close to the classical, realist picture, and promise to make understandable what others have argued cannot be understood. In these substitutes for quantum mechanics, the ‘hidden variables’ serve to explain the observed randomness of quantum measurement.

Experiments show however that correlations can be stronger than local hidden variables theories allow, as strong as quantum mechanics predicts. This is very clear evidence against local hidden variables, and greatly diminishes the freedom researchers have to play with the foundations of quantum mechanics.
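
For reference, since the inequality itself is not spelled out in this post: the most commonly tested version is the CHSH form of Bell’s inequality. With detector settings a, a′ on one side, b, b′ on the other, and E denoting the measured correlations, any local hidden variables theory obeys

$$ \big| E(a,b) - E(a,b') + E(a',b) + E(a',b') \big| \;\leq\; 2 \,, $$

whereas quantum mechanics allows the left-hand side to reach $2\sqrt{2} \approx 2.83$ for suitably entangled states and suitably chosen settings, and that is the value experiments find.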

But a fence has holes and Bell’s theorem has loopholes. These loopholes stem from assumptions that necessarily enter every mathematical proof. Closing all these loopholes by making sure the assumptions cannot be violated in the experiment is challenging: Quantum entanglement is fragile and noise is omnipresent.

One of these loopholes in Bell’s theorem is known as the ‘freedom of choice’ assumption. It assumes that the settings of the two detectors which are typically used in Bell-type experiments can be chosen ‘freely’. If the detector settings cannot be chosen independently, or are both dependent on the same hidden variables, this could mimic the observed correlations.

This loophole can be addressed by using random sources for the detector settings and putting them far away from each other. If the hidden variables are local, any correlations must have been established already when the sources were in causal contact. The farther apart the sources for the detector settings, the earlier the correlations must have been established because they cannot have spread faster than the speed of light. The earlier the correlations must have been established, the less plausible the theory, though how early is ‘too early’ is subjective. As we discussed earlier, in practice theories don’t so much get falsified as that they get implausified. Pushing back the time at which detector correlations must have been established serves to implausify local hidden variable theories.

In a neat recent paper, Jason Gallicchio, Andrew Friedman and David Kaiser studied how to use cosmic sources to set the detector, sources that have been causally disconnected since the big bang (which might or might not have been ‘forever’). While this had been suggested before, they did the actual work, thought about the details, the technological limitations, and the experimental problems. In short, they breathed the science into the idea.

    Testing Bell's Inequality with Cosmic Photons: Closing the Settings-Independence Loophole
    Jason Gallicchio, Andrew S. Friedman, David I. Kaiser
    arXiv:1310.3288 [quant-ph]

The authors look at two different types of sources: distant quasars on opposite sides of the sky, and patches of the cosmic microwave background (CMB). In both cases, photons from these sources can be used to set the detectors, for example by using the photons’ arrival times or their polarization. The authors come to the conclusion that quasars are preferable because the CMB signal suffers more from noise, especially in Earth-based telescopes. Since this noise could originate in nearby sources, it would spoil the conclusions about the time at which correlations must have been established.

According to the authors, it is possible with presently available technology to perform a Bell-test with such distant sources, thus pushing back the limit on conspiracies that could allow hidden variable theories to deliver quantum mechanical correlations. As always with such tests, it is unlikely that any disagreement with the established theory will be found, but if a disagreement can be found, it would be very exciting indeed.

It remains to be said that closing this loophole does not constrain superdeterministic hidden variables theories, which are just boldly non-local and not even necessarily realist. I like superdeterministic hidden variable theories because they stay as close to quantum mechanics as possible while not buying into fundamental non-determinism. In this case it is the measured particle that cannot be prepared independently of the detector settings, and you already know that I do not believe in free will. This requires some non-locality but not necessarily superluminal signaling. Such superdeterministic theories cannot be tested with Bell’s theorem. You can read here about a different test that I proposed for this case.

Thursday, November 21, 2013

The five questions that keep physicists up at night

Image: Leah Saulnier.

The internet loves lists, among them the lists with questions that allegedly keep physicists up at night. Most recently I spotted one at SciAm blogs, About.com has one, sometimes it’s five questions, sometimes seven, nine, or eleven, and Wikipedia excels in listing everything that you can put a question mark behind. The topics slightly vary, but they have one thing in common: They’re not the questions that keep me up at night.

The questions that presently keep me up are “Where is the walnut?” or “Are the street lights still on?” I used to get up at night to look up an equation, now I get up to look for the yellow towel, the wooden memory piece with the ski on it, the one-eyed duck, the bunny’s ear, the “white thing”, the “red thing”, mentioned walnut, and various other household items that the kids Will Not Sleep Without.

But I understand of course that the headline is about physics questions...

The physics questions that keep me up at night are typically project-related. “Where did that minus go?” is for example always high on the list. Others might be “Where is the branch cut?”, “Why did I not run the scheduled backup?”, “Should I resend this email?” or “How do I shrink this text to 5 pages?”, just to mention a few of my daily life worries.

But I understand of course that the headline is about the big, big physics questions...

And yes, there are a few of these that keep coming back and haunt me. Still they’re not the ones I find on these lists. What you find on the lists in SciAm and NewScientist could be more aptly summarized as “The 5 questions most discussed at physics conferences”. They’re important questions. But it’s unfortunate how the lists suggest physicists all more or less have the same interests and think about the same five questions.

So I thought I’d add my own five questions.

Questions that really bother me are the ones where I’m not sure how to even ask the question. If a problem is clear-cut and well-defined it’s a daylight question - a question that can be attacked by known methods, the way we were taught to do our job. “What’s the microscopic origin of dark matter?” or “Is it possible to detect a graviton?” are daylight questions that we can play with during work hours and write papers about.

And then there are the night-time questions.
  • Is the time-evolution of the universe deterministic, indeterministic or neither?

    How can we find out? Can we at all? And, based on this, is free will an illusion? This question doesn’t really fall into any particular research area in physics as it concerns the way we formulate the laws of nature in general. It is probably closest to the foundations of quantum mechanics, or at least that’s where it gets most sympathy.
  • Does the past exist in the same way as the present? Does the future?

    Does a younger version of yourself still exist, just that you’re not able to communicate with him (her), or is there something special about the present moment? The relevance of this question (as Lee elaborated on in his recent book) stems from the fact that none of our present descriptions of nature assigns any special property to the ever-changing present. I would argue this question is closest to quantum gravity since it can’t be addressed without knowing what space and time fundamentally are.
  • Is mathematics the best way to model nature? Are there systems that cannot be described by mathematics?

    I blame Max Tegmark for this question. I’m not a Platonist and don’t believe that nature ultimately is mathematics. I don’t believe this because it doesn’t seem likely that the description of nature that humans discovered just yesterday would be the ultimate one. But if it’s not then what is the difference between mathematics and reality? Is there anything better? If so, what? If not, what does this mean for science?
  • Does a theory of everything exist and can it be used, in practice (!), to derive the laws of nature for all emergent quantities?

    If so, will science come to an end? If not, are there properties of nature that cannot be understood or even modeled by any conscious being? Are there cases of strong emergence? Can we use science to understand the evolution of life, the development of complex systems, and will we be able to tell how consciousness will develop from here on?
  • What is the origin and fate of the universe and does it depend on the existence of other universes?

    That’s the question from my list you are most likely to find on any ‘big questions of physics’ list. It lies on the intersection of cosmology and quantum gravity. Dark matter, dark energy, black holes, inflation and eternal inflation, the nature and existence of space-time singularities all play a role to understand the evolution of the universe.
(It's not an ordered list because it's not always the same question that occupies my mind.)

I saw that Ashutosh Jogalekar at SciAm blogs also was inspired to add his own five mysteries to the recent SciAm list. If you want to put up your own list, you can post the link in this comment section, I will wave it through the spam filter.

Monday, November 18, 2013

Does modern science discourage creativity?

Knitted brain cap. Source: Etsy.

I recently finished reading “The Ocean at the End of the Lane” by Neil Gaiman. I haven’t read a fantasy book for a while, and I very much enjoyed it. Though I find Gaiman’s writing too vague to be satisfactory because the scientist in me wants more explanations, the same scientist is also jealous – jealous of the freedom that a fantasy writer has when turning ideas into products.

Creativity in theoretical physics is in comparison a very tamed and well-trained beast. It is often only appreciated if it fills in existing gaps or if it neatly builds on existing knowledge. The most common creative process is to combine two already existing ideas. This works well because it doesn’t require others to accept too much novelty or to follow leaps of thought, leaps that might have been guided by intuition that stubbornly refuses to be cast into verbal form.

In a previous post, I summed this up as “Surprise me, but not too much.” It seems to be a general phenomenon that can also be found in the arts and in music. The next big hits are usually small innovations over what is presently popular. And while this type of ‘tamed creativity’ grows new branches on existing trees, it doesn’t sow new seeds. The new seeds, the big imaginary leaps, come from the courageous and unfortunate few who often remain under-appreciated by contemporaries, and though they later come to be seen as geniuses they rarely live to see the fruits of their labor.

An interesting recent data analysis of citation networks demonstrated that science too thrives primarily on the not-too-surprising type of creativity.

In a paper published in Science last month, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations has decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.” Ack, the spirit of 1969, wearing off.

It is no surprise that novelty in science is very conservative. A new piece of knowledge has to fit with already existing knowledge. Combining two previous ideas to form a new one is such a frequently used means of creativity because it’s likely to pass peer review. You don’t want to surprise your referees too much.

And while this process delivers results, if it becomes the exclusive means of novelty production two problems arise. First, combining two speculative ideas is unlikely to result in a less speculative idea. It does however contribute to the apparent relevance of the ideas being combined. We can see this happening on the arxiv all the time, causing a citation inflation that is the hep-th version of mortgage bubbles. My (unpublished) comment on the black hole firewall from last year has been cited 18 times by now. Yeah, I plead guilty.

But secondly, and more importantly, the mechanism of combining existing ideas is a necessary, but not a sufficient, creative process for sustainable progress in science.

This study also provides another example of why measures of scientific success sow the seeds of their own demise: It is easy enough to clutter a citation list with ‘unconventional’ combinations to score well on a creativity measure based on the correlation found in the above study. But pimping a citation list will not improve science; it will just erode the correlation and render the measure useless in the long run. This is what I refer to as the inevitable deviation of primary goals from secondary criteria.

And creativity, I would argue, is even more difficult to quantify than intelligence.
  1. Novelty is subjective and depends on the amount of detail you pay attention to (the ‘coarse-graining’, if you excuse me borrowing a physics expression). Of course your toddler’s scribbles are uniquely creative, but to everybody besides you they look like every other toddler’s scribbles.
  2. Novelty depends on your previous knowledge. You might think highly of your friend’s crocheting of Lorentz manifolds until you find the instructions on the internet. “The secret to creativity,” Einstein allegedly said, “Is knowing how to hide your sources.” Or maybe somebody creatively assigned this quotation to him.
  3. The appreciation of creativity depends on the value we assign to the outcome of the creative process. You create a novel product every time you take a shit, but most of us don’t value this product very much.
  4. We expect intent behind creativity. A six-tailed comet might be both novel and of value, but we don’t say that the comet has been creative.
Taken together, this means that the assessment of creativity is not only subjective; it depends not just on the product of the creative process but also on the process itself.

In this context, let us look at another recent paper that the MIT Technology Review pointed out. In brief, IBM cooked up a computer code that formulates new recipes based on combinations from a database of already existing recipes. Human experts judged the new recipes to be creative and, so I assume, edible. Can this computer rightfully be called a ‘creativity machine’?
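Just to make plain what ‘combinations from a database’ means at its simplest, here is a deliberately crude toy sketch – in no way IBM’s actual system, whose internals I don’t know – that only ever recombines ingredients it already has. Nothing can appear in its output that wasn’t in the database, a limitation I’ll come back to below:

# Toy 'recipe generator' that only recombines ingredients it already knows.
# A deliberately crude sketch, not IBM's system; the database is made up.
import random

recipe_database = {
    "tomato soup":     ["tomato", "onion", "basil", "cream"],
    "apple pie":       ["apple", "flour", "butter", "cinnamon"],
    "chili con carne": ["beef", "beans", "tomato", "chili"],
}

def generate_recipe(n_ingredients=4, seed=None):
    rng = random.Random(seed)
    # The pool of known ingredients is the system's entire 'world'.
    pool = sorted({i for ingredients in recipe_database.values() for i in ingredients})
    return rng.sample(pool, min(n_ingredients, len(pool)))

print(generate_recipe(seed=1))  # novel-looking, but purely a recombination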

Well, as so often, it’s a matter of definition. I have no problem with the automation of novelty production, but I would argue that rather than computerizing creativity, this pushes creativity up a level, to the creation of the automation process itself. You don’t even need to look at IBM’s “creativity machine” to see this shift of creativity to a metalevel. There’s no shortage of books and seminars promising to teach you how to be more creative. Everybody, it seems, wants to be more creative, and nobody asks what we’re supposed to do with all these creations. Creativity is the new emotional intelligence. But to me, teaching or programming creativity is like planning spontaneity: a contradiction in itself.

Anyway, let’s not fight about words. It’s more insightful to think about what IBM’s creativity machine cannot do. It cannot, for example, create recipes with new ingredients, because these weren’t in the database. Neither can it create new methods of food processing. And since it can’t actually taste anything, it would never notice, for example, how the miracle fruit alters taste perception. IBM’s creativity machine isn’t so much creative as it is designed to anticipate what human experts think of as creative. And you don’t want to surprise the experts too much...

It is a very thought-provoking development though, and it led me to wonder whether we’re about to see a level-shift in novelty production in science as well.

Let me then come back to the question posed in the title. It’s not that modern science lacks creativity, but that the creativity we have is dominated by the incremental, not-so-surprising combination of established knowledge. There are many reasons for this – peer pressure, risk aversion, and lack of time all contribute to researchers’ hesitation to try to understand others’ leaps of thought, or to try to convince others to follow their own. Maybe what we really need is an increased awareness of the possible processes of creativity in science, so that we can go beyond ‘unconventional combinations’ in literature lists.

Wednesday, November 13, 2013

Physics in product ads

I've been trying to figure out a quick way to make an embeddable slideshow and to that end I collected some physics-themed product names that I found amusing. Hope this works, enjoy :)

Monday, November 11, 2013

Marc Kuchner about his book "Marketing for Scientists"

[My recent post about the marketing of science and scientists led to a longer discussion on Facebook. I offered Marc Kuchner, author of the book “Marketing for Scientists” mentioned there, a place to present his point of view here. My questions are marked with B, his replies with M.]


B: Who is your book aimed at and why should they read it?

M: Most of my readers are postdocs and graduate students, but Marketing for Scientists is for anyone with a scientific bent who is interested in learning the techniques of modern marketing.

B: You are marketing marketing for scientists as a service to others. I like that and have to say this was the main reason I read your book. Can you expand?

M: I think scientists need better tools to compete today in the marketplace of ideas. Only one out of ten American adults can correctly describe what a "molecule" is. But everybody knows who Sarah Palin is. The climate change deniers understand marketing perfectly well.

B: The point of tenure is to free researchers from the need to serve others and allow them to follow their interests without being influenced by peer pressure, public pressure or financial pressure. I think this is essential for unbiased judgement and that marketing, regardless of whether you call it a service to others, negatively affects scientific objectivity and renders the process of knowledge discovery inefficient. Your advice is good advice for the individual but bad advice for the community. What do you have to say in your defense?

M: Our community already uses marketing. Every proposal you submit, every scientific paper you write, and every presentation you give is a piece of marketing. But sometimes we scientists aren’t clear with ourselves that we are in fact marketing our work. We call it “networking” or “communication” or “grantsmanship” or what have you, hiding the true nature of our efforts. So first I like to peel back the taboos, take off the white gloves and take an honest look at the marketing we scientists already do.

Then I want every scientist to learn how to do it better—to learn how to use the latest and greatest marketing techniques. If you picture our community as competing only with each other for a fixed slice of the pie then of course you could get the impression that there’s nothing to be gained by improving our marketing savvy. But the science pie is not fixed. In America, it’s shrinking! We scientists need to update our marketing skills to widen the impact of science as a whole. That’s good for the whole community.

B: I am afraid that marketing and advertising will erode the public's trust in science and scientists and that this is already happening. Do you not share my concerns?

M: Indeed, nobody likes billboards and commercials. But the practice of marketing has changed since the era of Mad Men. I try to teach scientists how modern marketing means co-creating with the customer, being receptive to feedback, and being open and honest. Those are values that scientists have always had, values that build trust in today’s new companies (think Google, Apple, TOMS shoes). These values can help rebuild the public’s trust in science.

B: Are you available for seminars and how can people reach you?

M: Thanks, Sabine! For more information about the Marketing for Scientists book and the Marketing for Scientists workshops, go to www.marketingforscientists.com or email me at marc@marketingforscientists.com

Thursday, November 07, 2013

Big data meets the eye

Remember when a 20kB image took a minute to load? Back then, when dinosaurs were roaming the earth?

Data has become big.

Today we have more data than ever before, more data in fact than we know how to analyze or even handle. Big data is a big topic. Big data changes the way we do science and the way we think about science. Big data even led Chris Anderson to declare the End of Theory:
“We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”
That was 5 years ago. Theory hasn’t ended yet and it’s unlikely to end anytime soon, because there is a slight problem with Anderson’s vision: One still needs the algorithm that is able to find patterns. And for that algorithm, one needs to know what one is looking for to begin with. But pattern-finding algorithms for big data are difficult. One could say they are a science in themselves, so theory had better not end before we have found them.

Those of us working on the phenomenology of quantum gravity would be happy if we had data at all, so I can’t say the big data problem is big on my mind, but I have a story to tell. Alexander Balatsky recently took up a professorship in condensed matter physics at Nordita, and he told me about an earlier work of his that illustrates the challenge of big data in physics. It comes with an interesting lesson.


Electron conducting bands in crystals are impossible to calculate analytically except in very simplified approximations. Determining the behavior of electrons in crystals to high accuracy requires three-dimensional many-body calculations of multiple bands and their interactions. This produces a lot of data. Big data.

You can find and download some of that data in the 3D Fermi Surface Database. Let me just show you a random example of Fermi surfaces, this one being for a gold-indium lattice:


The Fermi surface, roughly speaking, tells you how the electrons are packed. Pretty in a nerdy way, but what is the relevant information here?

The particular type of crystal Alexander and his collaborators, Hari Dahal and Athanasios Chantis, were interested in is the so-called non-centrosymmetric crystal, which has a relativistic spin-splitting of the conducting bands. This type of crystal symmetry exists in certain types of semiconductors and metals and plays a role in unconventional superconductivity, which is still a theoretical challenge. Understanding the behavior of electrons in these crystals may hold the key to the production of novel materials.

The many-body, many-band numerical simulation of the crystals produces a lot of numbers. You pipe them into a file, but now what? What really is it that you are looking for? What is relevant for the superconducting properties of the material? Which pattern-finding algorithm do you apply?

Let’s see...


Human eyes are remarkable pattern search algorithms. Image Source.
The human eye, and its software in the visual cortex, is remarkably good at finding patterns, so good in fact that it frequently finds patterns where none exist. And so the big data algorithm is to visualize the data and let humans scrutinize it, giving them the possibility to interact with the data while studying it. This interaction might mean selecting different parameters or axes, rotating in several dimensions, changing colors or markers, zooming in and out. The hardware for this visualization was provided by the Los Alamos-Sandia Center for Integrated Nanotechnologies, VIZ@CINT; the software, called ParaView, is freely available. Here, big data meets theory again.
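To give a flavor of what such an interactive inspection looks like in practice, here is a minimal sketch – using matplotlib rather than ParaView, which is what was actually used, and with invented toy functions standing in for the real band-structure data: an ‘energy’ surface over two momentum directions with colored bullets on top for a ‘spin splitting’, in a window you can rotate and zoom.

# Minimal sketch of interactively inspecting band-structure-like data in 3D.
# Uses matplotlib instead of ParaView (the tool actually used); the 'energy'
# and 'spin splitting' below are toy functions, not the GaAs data discussed later.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection on older matplotlib

# Two momentum directions away from the high-symmetry axis of the crystal.
kx, ky = np.meshgrid(np.linspace(-1, 1, 80), np.linspace(-1, 1, 80))

energy = np.cos(np.pi * kx) * np.cos(np.pi * ky)                   # toy 'energy'
splitting = 0.3 * np.abs(np.sin(np.pi * kx) * np.sin(np.pi * ky))  # toy 'spin splitting'

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

# The height profile and its color code show the 'energy'...
ax.plot_surface(kx, ky, energy, cmap="viridis", alpha=0.7)

# ...and colored bullets slightly above the surface show the (rescaled) 'splitting'.
step = 8  # thin out the markers so the surface stays visible
ax.scatter(kx[::step, ::step].ravel(), ky[::step, ::step].ravel(),
           (energy[::step, ::step] + 0.05).ravel(),
           c=splitting[::step, ::step].ravel(), cmap="autumn", s=20)

ax.set_xlabel("k_x")
ax.set_ylabel("k_y")
ax.set_zlabel("energy (toy units)")
plt.show()  # drag to rotate, use the toolbar to zoom and change the view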

Intrigued about how this works in practice, I talked to Hari and Athanasios the other day. Athanasios recalls:
“I was looking at the data before in conventional ways, [producing 2-dimensional cuts in the parameter space], and missed it. But in the 3-d visualization I immediately saw it. It took like 5 minutes. I looked at it and thought “Wow”. To see this in conventional ways, even if I had known what to look for, I would have had to do hundreds of plots.”
The irony being that I had no idea what he was talking about, because all I had to look at was a (crappy print of a) 2-dimensional projection. “Yes,” Athanasios says, “It’s in the nature of the problem. It cannot be translated into paper.”

So I’ll give it a try, but don’t be disappointed if you don’t see much in the image, because that is the raison d’être for interactive data visualization software.

3-d bandstructure of GaAs. Image credits: Athanasios Chantis.


The two horizontal axes in the figure show the momentum space of the electrons in the directions away from the high-symmetry direction of the crystal. It has a periodic symmetry, so you’re actually seeing the same patch four times, and in the atomic lattice this pattern goes on to repeat. In the vertical direction, two different functions are shown simultaneously. One is depicted with the height profile, whose color code you see on the left, and shows the energy of the electrons. The other function, shown (rescaled) as the colored bullets, is the spin-splitting of three different conduction bands; you see them in (bright) red, white and pink. Towards the middle of the front, note the white band getting close to the pink one. They don’t cross, but instead they seem to repel and move apart again. This is called an anti-crossing.

The relevant feature in the data, the one that’s hard if not impossible to see in two-dimensional projections, is that the energy peaks coincide with the locations of these anti-crossings. This property of the conducting bands, caused by the spin-splitting in this type of non-centrosymmetric crystal, affects how electrons travel through the crystal, and in particular it affects how electrons can form pairs. Because of this, materials with an atomic lattice of this symmetry (or rather, absence of symmetry) should be unconventional superconductors. This theoretical prediction has meanwhile been tested experimentally by two independent groups. Both groups observed signs of unconventional pairing, confirming a strong connection between noncentrosymmetry and unconventional superconductivity.

This isn’t the only dataset that Hari studied by way of interactive visualization, and not the only case in which it was not merely helpful but necessary for extracting the scientific information. Another example is this analysis of a data set from the composition of the tip of a scanning tunneling microscope, as well as a few other projects he has worked on.

And so it looks to me that, at least for now, the best pattern-finding algorithm for these big data sets is the eye of a trained theoretical physicist. News of the death of theory, it seems, has been greatly exaggerated.

Thursday, October 31, 2013

Science Marketing needs Consumer Feedback

It’s been a while since I read Marc Kuchner’s book “Marketing for Scientists”. I hated the book as I’ve rarely hated a book. I did not write a review then because it was a gift from an, undoubtedly well-meaning, friend who reads my blog. As time passed though, I changed my mind. Let me explain.

Product advertising and marketing is the oil on the gears of our economies. Its original purpose is to inform customers about products and help them decide whether the products fit their needs. But marketing today isn’t only about selling a product, it’s also about selling a self-image. What we decide to spend money on tells others what we consider important and which groups we identify with.

In the quest to attract customers, advertisements often don’t contain a lot of information, and sometimes they bluntly lie. And so we have laws protecting us from these lies, though their efficiency differs greatly from one country to the next as Microsoft learned the hard way.

Everybody knows adverts make a product appear better than it is in reality: that microwave dinners never look like they do in the images, lotions won’t remove those eye bags, and ergonomic underwear will not make you run any faster. The point is not that advertisements bend reality, but that advertisements work regardless, just by drawing attention and by leaving brand names in our heads – names that we’ll recognize later. The more money a company can invest in good advertisement, the more likely they are to sell.

It isn’t so surprising that capitalistic thought is increasingly applied not only to the economy but also to academic research. Today, tax-funded scientists, far from being able to dig into the wonders of nature unbiased and led by nothing but their interests, are required to formulate 5 year plans and demonstrate a quantifiable impact of their work. And so scientists are now also expected to market themselves, their research and their institution.

Scientific knowledge however isn’t a product like a candy bar. A candy bar isn’t right or wrong, it’s right for you or wrong for you, and whether it’s right or wrong for you depends as much on you as on the candy bar. But the whole scientific process works towards the end of objective judgment, towards finding out whether a research finding should be kept or tossed. Scientific knowledge is eventually either right or wrong and academic research should be organized to make this judgment as efficiently as possible.

Marketing science is not helpful to this end for several reasons:
  • It puts at advantage those who are either skilled at marketing or who can afford help. This doesn’t necessarily say anything about the quality of their research. It’s not a useful selection criterion if what you are looking for is good science. Those who shout the loudest don’t necessarily sell the best fish.
  • Marketing of science advertises the product (research results), while what people actually want to sell is the process (the scientist’s ability to do good research). It draws attention towards the wrong criteria.
  • It has a positive feedback loop that gradually worsens the problem. The more people advertise their work, the more others will feel the need to advertise theirs as well. This leads, as with the advertisement of goods, to a decrease of objectivity and honesty until it eventually nears blunt lies.
  • It takes time away from research, thus reducing efficiency.
In quantum gravity phenomenology, you will frequently see claims that something has been derived when in fact it wasn’t derived, or that something is a result when in fact it is an ad-hoc assumption. I am aware, of course, that such exaggerations are advertisements, made to convince the reader of the relevance of a research study. But they’re not helpful to the process of science, and they are even worse for science communication.

That’s why I hated Kuchner’s book. Not because his marketing advice is bad advice, but because he didn’t consider the consequences. If all researchers had Marc Kuchner’s “sell yourself” attitude, we’d end up with a community full of good advertisers, not full of good scientists. It’s a collective action problem: a situation in which we would all benefit from not doing something (advertising), but each individual puts themselves at a disadvantage by behaving differently (not advertising), and so we all continue to do it.

Here’s why I changed my mind.

Researchers market and advertise because they have to, owing to the very real pressure of this collective action problem. There are too many people and not enough funding. Marketing might not be a good factor to select for, but standing out for whatever reason puts you at an advantage. The more people know your name, the more likely they’ll read your paper or your CV, and that’s not a sufficient, but certainly a necessary, condition for survival in academia. And then there are people, like Kuchner, who make money from that survival pressure. Sad but true.

Yes, this is a bad development, but collective action problems are thorny. Complaining about it, I’ve come to conclude, will not solve the problem. But what we can do is work towards balance. What we need then is the equivalent of customer reviews and independent product tests –  what we need is a culture that encourages feedback and criticism.

Unfortunately, at present feedback and criticism of other people’s work is not appreciated by the community. Criticism is typically voiced only on very popular topics, where even criticism of others’ work serves as an advertisement of one’s own knowledge – think climate change, arsenic life, string theory. But it’s a very small fraction of researchers who spend time on this, and it covers only a small fraction of topics. It’s insufficient.

A recent Nature editorial notes that “Online discussion is an essential aspect of the post-publication review of findings” but
“In recent years, authors and readers have been able to post online comments about Nature papers on our site. Few bother. At the Public Library of Science, where the commenting system is more successful, only 10% of papers have comments, and most of those have only one.”
This really isn’t surprising. Few bother because in terms of career development it’s a waste of time.

In contrast to the futile attempt of preventing researchers from advertising themselves and their work, however, the balance can be improved by appreciating the work of those who provide constructive criticism: by noting the community benefit that comes from researchers who publicly comment on others’ publications, by inviting scientists to speak not only about their own original work but also about their criticism of other people’s work, and by not thinking of somebody who points out flaws as negative. Because that consumer feedback is the oil on the gears that we need to keep science running.

Tuesday, October 29, 2013

Interna

Mamasonnenbrille.
It’s not like nothing happened, I just haven’t had the time to keep you updated on our four-body problems.

Earlier this year, we had handed over the stalled case on our child benefits to an EU institution called “SOLVIT” that takes on problems with national institutions under EU regulations. Amazingly, they indeed solved our problem efficiently and quickly. And so, after more than two and a half years and an inch of paperwork, Stefan finally gets child benefits. Yoo-hoo! If you have any institutional problem with a family distributed over several EU countries, I can recommend you check out the SOLVIT website. I really wish, though, that the Germans and the Swedes could converge on one paper punch pattern; then I wouldn’t have to keep two different types of folders.

She knows the numbers from 1 to 10, but not their order.
Lara and Gloria will turn 3 in December and so we are about to switch from daycare to Kindergarten. They both speak more or less in full sentences now and come up with questions like "Where do clouds go at night?" and "Mommy, are you wearing underwear?" They still refer to themselves by first name though rather than using “I”, and are struggling with German grammar. At daycare the kids sing a lot, which feeds them weird vocabulary that may be delivered spontaneously in unexpected situations, Butzemann! Tschingderassabum! Wo ist meine Zie-har-mo-ni-ka? The girls both love puzzles and Lego and the wooden railway. On occasion they now demand to sit on their potty, though the timing isn’t quite working yet.

Lara can't let go of the binky, but is okay as long as it's in the vicinity.
I have meanwhile decided, after a long back and forth, that I’ll not attend next year’s FQXi conference. The primary reason is that I looked up the flight connections, and the inconvenience of getting to Vieques Island exceeds my pain tolerance. I am very reluctant these days to attend any meeting that requires me to be away on weekends and that isn’t located in the vicinity of a major international airport, thus adding to my travel time. The secondary reason is that I’m not particularly interested in the topic ("The Physics of Information"), and I can just see it degenerating into yet another black hole firewall discussion. At the same time I’m sorry to miss the meeting, because of all the conferences that I’ve attended, the FQXi conferences were undoubtedly the most inspiring ones.

Speaking of pain tolerance, I ran a marathon last weekend. I’ve always wondered why people run marathons. Now that I have a finisher medal, I am still wondering why people do this to themselves. I really like running, but there were too many people and too much noise on these 42 km for me.

I admit I plainly didn’t know before my first 10k about a year ago that these races typically have only 20% or so female participants. (The Frankfurt marathon had 15%, though the recent numbers from the USA look better.) I find this surprising given that most of the people I meet jogging in the fields tend to be women. Neither did I know until some months ago that women weren’t even allowed in marathons until the mid-1970s, for somewhat mysterious reasons that seem to go back to the (unpublished) beliefs of some (unnamed) physicians that the female body isn’t meant for long-distance running – a claim that nobody bothered to check until some women stood up and disproved it. It’s an interesting tale, about which you can read here.

In entirely different news, Nordita now spreads word about the wonders of theoretical physics on Twitter and on Facebook. These feeds are fed by Apostolos Vasileiadis, creator of the recently mentioned short film located at Nordita. If you share our love of physics, check it out and I hope we’ll not disappoint.

Also keep in mind this year’s deadline for program proposals is Nov 15. The Stockholm weather can’t compete with Santa Barbara, but I’m told our programs are better funded :o) Instructions for the application can be found on the Nordita homepage.