Wednesday, November 27, 2013

Cosmic Bell

On the playground of quantum foundations, Bell’s theorem is the fence. This celebrated theorem – loved by some and hated by others – shows that correlations in quantum mechanics can be stronger than in theories with local hidden variables. Local hidden variable theories are modifications of quantum mechanics that aim to stay close to the classical, realist picture, and promise to make understandable what others have argued cannot be understood. In these substitutes for quantum mechanics, the ‘hidden variables’ serve to explain the observed randomness of quantum measurement.

Experiments show, however, that correlations can be stronger than local hidden variable theories allow – as strong, in fact, as quantum mechanics predicts. This is very clear evidence against local hidden variables, and it greatly diminishes the freedom researchers have to play with the foundations of quantum mechanics.
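
To put a number to ‘stronger than local hidden variables allow’: in the CHSH version of Bell’s theorem, local hidden variables bound a certain combination of correlations by 2, while quantum mechanics reaches 2√2 ≈ 2.83. Here is a minimal sketch of the arithmetic – my own illustration in Python, using the textbook singlet-state correlation:

```python
import numpy as np

# For a spin singlet measured along directions at angles a and b,
# quantum mechanics predicts the correlation E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden variable theories require |S| <= 2.
a, ap = 0.0, np.pi / 2             # Alice's two settings
b, bp = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.83 > 2: the quantum (Tsirelson) maximum
```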

But a fence has holes and Bell’s theorem has loopholes. These loopholes stem from assumptions that necessarily enter every mathematical proof. Closing all these loopholes by making sure the assumptions cannot be violated in the experiment is challenging: Quantum entanglement is fragile and noise is omnipresent.

One of these loopholes in Bell’s theorem is known as the ‘freedom of choice’ loophole. It concerns the assumption that the settings of the two detectors typically used in Bell-type experiments can be chosen ‘freely’. If the detector settings cannot be chosen independently, or if both depend on the same hidden variables, this could mimic the observed correlations.

This loophole can be addressed by using random sources for the detector settings and putting them far away from each other. If the hidden variables are local, any correlations must have been established while the sources were still in causal contact. The farther apart the sources for the detector settings, the earlier the correlations must have been established, because they cannot have spread faster than the speed of light. The earlier the correlations must have been established, the less plausible the theory, though how early is ‘too early’ is subjective. As we discussed earlier, in practice theories don’t so much get falsified as implausified. Pushing back the time at which detector correlations must have been established serves to implausify local hidden variable theories.
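
To get a sense of the numbers involved, here is a back-of-the-envelope sketch, assuming you have astropy installed, using its built-in Planck13 cosmology. The redshifts are illustrative, not taken from the paper:

```python
from astropy.cosmology import Planck13 as cosmo

# Two hypothetical quasars on opposite sides of the sky. If each
# detector setting is derived from one quasar's light, any common
# cause of the settings must predate the photons' emission.
z1, z2 = 3.0, 2.5                 # illustrative redshifts
print(cosmo.lookback_time(z1))    # ~11.6 Gyr
print(cosmo.lookback_time(z2))    # ~11.2 Gyr
```

Any conspiracy between the settings would thus have to have been arranged more than eleven billion years ago.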

In a neat recent paper, Jason Gallicchio, Andrew Friedman and David Kaiser studied how to use cosmic sources to set the detector, sources that have been causally disconnected since the big bang (which might or might not have been ‘forever’). While this had been suggested before, they did the actual work, thought about the details, the technological limitations, and the experimental problems. In short, they breathed the science into the idea.

    Testing Bell's Inequality with Cosmic Photons: Closing the Settings-Independence Loophole
    Jason Gallicchio, Andrew S. Friedman, David I. Kaiser
    arXiv:1310.3288 [quant-ph]

The authors look at two different types of sources: distant quasars on opposite sides of the sky, and patches of the cosmic microwave background (CMB). In both cases, photons from these sources can be used to switch the detectors, for example by using the photons’ arrival times or their polarization. The authors come to the conclusion that quasars are preferable because the CMB signal suffers more from noise, especially in Earth-based telescopes. Since this noise could originate in nearby sources, it would spoil the conclusions about the time at which correlations must have been established.
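
In code, the switching is as simple as it sounds. A purely hypothetical sketch – the function names and conventions are mine, not from the paper:

```python
# Two ways a cosmic photon might pick a detector setting bit.
def setting_from_arrival(t_ns: float) -> int:
    """Use the parity of the arrival-time stamp in nanoseconds."""
    return int(t_ns) % 2

def setting_from_polarization(angle_deg: float) -> int:
    """Use which half of the polarization range the photon falls into."""
    return 0 if (angle_deg % 180.0) < 90.0 else 1
```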

According to the authors, it is possible with presently available technology to perform a Bell-test with such distant sources, thus pushing back the limit on conspiracies that could allow hidden variable theories to deliver quantum mechanical correlations. As always with such tests, it is unlikely that any disagreement with the established theory will be found, but if a disagreement can be found, it would be very exciting indeed.

It remains to be said that closing this loophole does not constrain superdeterministic hidden variable theories, which are just boldly non-local and not even necessarily realist. I like superdeterministic hidden variable theories because they stay as close to quantum mechanics as possible while not buying into fundamental non-determinism. In this case it is the measured particle that cannot be prepared independently of the detector settings, and you already know that I do not believe in free will. This requires some non-locality, but not necessarily superluminal signaling. Such superdeterministic theories cannot be tested with Bell’s theorem. You can read here about a different test that I proposed for this case.

Thursday, November 21, 2013

The five questions that keep physicists up at night

Image: Leah Saulnier.

The internet loves lists, among them lists of questions that allegedly keep physicists up at night. Most recently I spotted one at SciAm blogs; About.com has one too; sometimes it’s five questions, sometimes seven, nine, or eleven; and Wikipedia excels in listing everything that you can put a question mark behind. The topics vary slightly, but they have one thing in common: They’re not the questions that keep me up at night.

The questions that presently keep me up are “Where is the walnut?” or “Are the street lights still on?” I used to get up at night to look up an equation, now I get up to look for the yellow towel, the wooden memory piece with the ski on it, the one-eyed duck, the bunny’s ear, the “white thing”, the “red thing”, mentioned walnut, and various other household items that the kids Will Not Sleep Without.

But I understand of course that the headline is about physics questions...

The physics questions that keep me up at night are typically project-related. “Where did that minus go?”, for example, is always high on the list. Others might be “Where is the branch cut?”, “Why did I not run the scheduled backup?”, “Should I resend this email?” or “How do I shrink this text to 5 pages?”, just to mention a few of my daily-life worries.

But I understand of course that the headline is about the big, big physics questions...

And yes, there are a few of these that keep coming back to haunt me. Still, they’re not the ones I find on these lists. What you find on the lists in SciAm and NewScientist could be more aptly summarized as “The 5 questions most discussed at physics conferences”. They’re important questions. But it’s unfortunate how the lists suggest physicists all more or less have the same interests and think about the same five questions.

So I thought I’d add my own five questions.

Questions that really bother me are the ones where I’m not sure how to even ask the question. If a problem is clear-cut and well-defined it’s a daylight question - a question that can be attacked by known methods, the way we were taught to do our job. “What’s the microscopic origin of dark matter?” or “Is it possible to detect a graviton?” are daylight questions that we can play with during work hours and write papers about.

And then there are the night-time questions.
  • Is the time-evolution of the universe deterministic, indeterministic or neither?

    How can we find out? Can we at all? And, based on this, is free will an illusion? This question doesn’t really fall into any particular research area in physics as it concerns the way we formulate the laws of nature in general. It is probably closest to the foundations of quantum mechanics, or at least that’s where it gets most sympathy.
  • Does the past exist in the same way as the present? Does the future?

    Does a younger version of yourself still exist, just that you’re not able to communicate with him (her), or is there something special about the present moment? The relevance of this question (as Lee elaborated on in his recent book) stems from the fact that none of our present descriptions of nature assigns any special property to the ever-changing present. I would argue this question is closest to quantum gravity since it can’t be addressed without knowing what space and time fundamentally are.
  • Is mathematics the best way to model nature? Are there systems that cannot be described by mathematics?

    I blame Max Tegmark for this question. I’m not a Platonist and don’t believe that nature ultimately is mathematics. I don’t believe this because it doesn’t seem likely that the description of nature that humans discovered just yesterday would be the ultimate one. But if it’s not then what is the difference between mathematics and reality? Is there anything better? If so, what? If not, what does this mean for science?
  • Does a theory of everything exist and can it be used, in practice (!), to derive the laws of nature for all emergent quantities?

    If so, will science come to an end? If not, are there properties of nature that cannot be understood or even modeled by any conscious being? Are there cases of strong emergence? Can we use science to understand the evolution of life and the development of complex systems, and will we be able to tell how consciousness will develop from here on?
  • What is the origin and fate of the universe and does it depend on the existence of other universes?

    That’s the question from my list you are most likely to find on any ‘big questions of physics’ list. It lies at the intersection of cosmology and quantum gravity. Dark matter, dark energy, black holes, inflation and eternal inflation, and the nature and existence of space-time singularities all play a role in understanding the evolution of the universe.
(It's not an ordered list because it's not always the same question that occupies my mind.)

I saw that Ashutosh Jogalekar at SciAm blogs was also inspired to add his own five mysteries to the recent SciAm list. If you want to put up your own list, you can post the link in this comment section; I will wave it through the spam filter.

Monday, November 18, 2013

Does modern science discourage creativity?

Knitted brain cap. Source: Etsy.

I recently finished reading “The Ocean at the End of the Lane” by Neil Gaiman. I haven’t read a fantasy book for a while, and I very much enjoyed it. Though I find Gaiman’s writing too vague to be satisfactory because the scientist in me wants more explanations, the same scientist is also jealous – jealous of the freedom that a fantasy writer has when turning ideas into products.

Creativity in theoretical physics is in comparison a very tamed and well-trained beast. It is often only appreciated if it fills in existing gaps or if it neatly builds on existing knowledge. The most common creative process is to combine two already existing ideas. This works well because it doesn’t require others to accept too much novelty or to follow leaps of thought, leaps that might have been guided by intuition that stubbornly refuses to be cast into verbal form.

In a previous post, I summed this up as “Surprise me, but not too much.” It seems to be a general phenomenon that can also be found in the arts and in music. The next big hits are usually small innovations over what is presently popular. And while this type of ‘tamed creativity’ grows new branches on existing trees, it doesn’t sow new seeds. The new seeds, the big imaginary leaps, come from the courageous and unfortunate few who often remain under-appreciated by contemporaries, and though they later come to be seen as geniuses they rarely live to see the fruits of their labor.

An interesting recent data analysis of citation networks demonstrated that science too thrives primarily on the not-too-surprising type of creativity.

In a paper published in Science last month, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations has decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.” Ack, the spirit of 1969, wearing off.
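
To make ‘unlikely combination’ concrete, here is a toy sketch of my own of the kind of analysis, as I understand the method: count how often pairs of journals co-occur in reference lists and compare against reshuffled citation lists. The actual paper’s statistics are more sophisticated, and the journal names below are just placeholders:

```python
import itertools
import random
from collections import Counter

def pair_counts(papers):
    """papers: list of reference lists, each a list of journal names."""
    counts = Counter()
    for refs in papers:
        for pair in itertools.combinations(sorted(set(refs)), 2):
            counts[pair] += 1
    return counts

def expected_counts(papers, n_shuffles=100):
    """Average pair counts over citation lists with journals reshuffled."""
    expected = Counter()
    pool_template = [j for refs in papers for j in refs]
    for _ in range(n_shuffles):
        pool = pool_template[:]
        random.shuffle(pool)
        it = iter(pool)
        fake_papers = [[next(it) for _ in refs] for refs in papers]
        for pair, c in pair_counts(fake_papers).items():
            expected[pair] += c / n_shuffles
    return expected

# Ratios far below 1 flag 'unconventional' pairs, far above 1 'conventional' ones.
papers = [["Nature", "Science", "PRL"], ["PRL", "PRD"], ["Nature", "Cell"]]
obs, exp = pair_counts(papers), expected_counts(papers)
for pair, c in obs.items():
    print(pair, c / (exp[pair] + 1e-9))
```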

It is no surprise that novelty in science is very conservative. A new piece of knowledge has to fit with already existing knowledge. Combining two previous ideas to form a new one is such a frequently used means of creativity because it’s likely to pass peer review. You don’t want to surprise your referees too much.

And while this process delivers results, if it becomes the exclusive means of novelty production two problems arise. First, combining two speculative ideas is unlikely to result in a less speculative idea. It does however contribute to the apparent relevance of the ideas being combined. We can see this happening on the arXiv all the time, causing a citation inflation that is the hep-th version of a mortgage bubble. My unpublished comment on the black hole firewall from last year has by now been cited 18 times. Yeah, I plead guilty.

But secondly, and more importantly, the mechanism of combining existing ideas is a necessary, but not a sufficient, creative process for sustainable progress in science.

This study also provides another example for why measures for scientific success sow the seeds of their own demise: It is easy enough to clutter a citation list with ‘unconventional’ combinations to score according to a creativity-measure based on the correlation found in the above study. But pimping a citation list will not improve science, it will just erode the correlation and render the measure useless in the long run. This is what I refer to as the inevitable deviation of primary goals from secondary criteria.

And creativity, I would argue, is even more difficult to quantify than intelligence.
  1. Novelty is subjective and depends on the amount of detail you pay attention to (the ‘coarse-graining’, if you excuse me borrowing a physics expression). Of course your toddler’s scribbles are uniquely creative, but to everybody besides you they look like every other toddler’s scribbles.
  2. Novelty depends on your previous knowledge. You might think highly of your friend’s crocheting of Lorentz manifolds until you find the instructions on the internet. “The secret to creativity,” Einstein allegedly said, “is knowing how to hide your sources.” Or maybe somebody creatively assigned this quotation to him.
  3. The appreciation of creativity depends on the value we assign to the outcome of the creative process. You create a novel product every time you take a shit, but most of us don’t value this product very much.
  4. We expect intent behind creativity. A six-tailed comet might be both novel and of value, but we don’t say that the comet has been creative.
Taken together, this means that besides being subjective, the assessment of creativity depends not only on the product of the creative process but also on the process itself.

In this context, let us look at another recent paper that the MIT Technology Review pointed out. In brief, IBM cooked up a computer code that formulates new recipes based on combinations from a database of already existing recipes. Human experts judged the new recipes to be creative and, so I assume, edible. Can this computer rightfully be called a ‘creativity machine’?

Well, as so often, it’s a matter of definition. I have no problem with the automation of novelty production, but I would argue that rather than computerizing creativity this pushes creativity up a level, to the creation of the process of automation. You don’t even need to look at IBM’s “creativity machine” to see this shift of creativity to a metalevel. There’s no shortage of books and seminars promising to teach you how to be more creative. Everybody, it seems, wants to be more creative, and nobody asks what we’re supposed to do with all these creations. Creativity is the new emotional intelligence. But to me, teaching or programming creativity is like planning spontaneity, a contradiction in itself.

Anyway, let’s not fight about words. It’s more insightful to think about what IBM’s creativity machine cannot do. It cannot, for example, create recipes with new ingredients, because these weren’t in the database. Neither can it create new methods of food processing. And since it can’t actually taste anything, it would never notice, e.g., how the miracle fruit alters taste perception. IBM’s creativity machine isn’t so much creative as designed to anticipate what human experts think of as creative. And you don’t want to surprise the experts too much...
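
To see this database-boundedness in miniature, here is a toy combination generator of my own – IBM’s actual system is far more sophisticated and reportedly models flavor chemistry, among other things:

```python
import random

# A 'new' recipe is just a not-yet-seen combination of known ingredients.
RECIPES = {
    "pesto": {"basil", "garlic", "pine nuts", "parmesan", "olive oil"},
    "salsa": {"tomato", "onion", "cilantro", "lime", "chili"},
    "curry": {"onion", "garlic", "ginger", "turmeric", "coconut milk"},
}

def novel_recipe(n_ingredients=4):
    pool = sorted(set().union(*RECIPES.values()))  # nothing outside the database
    while True:
        candidate = set(random.sample(pool, n_ingredients))
        if candidate not in RECIPES.values():      # novel relative to the database
            return candidate

print(novel_recipe())  # miracle fruit can never appear here
```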

It is a very thought-provoking development though, and it led me to wonder whether we’re about to see a similar level-shift in novelty production in science.

Let me then come back to the question posed in the title. It’s not that modern science lacks creativity, but that the creativity we have is dominated by the incremental, not-so-surprising combination of established knowledge. There are many reasons for this – peer pressure, risk aversion, and lack of time all contribute to the hesitation of researchers to try to understand others’ leaps of thought, or to convince others to follow their own leaps. Maybe what we need is really an increased awareness of the possible processes of creativity in science, so that we can go beyond ‘unconventional combinations’ in literature lists.

Wednesday, November 13, 2013

Physics in product ads

I've been trying to figure out a quick way to make an embeddable slideshow and to that end I collected some physics-themed product names that I found amusing. Hope this works, enjoy :)

Monday, November 11, 2013

Marc Kuchner about his book "Marketing for Scientists"

[My recent post about the marketing of science and scientists led to a longer discussion on Facebook. I offered Marc Kuchner, author of the mentioned book "Marketing for Scientists", a place to present his point of view here. My questions are marked with B, his replies with M.]


B: Who is your book aimed at and why should they read it?

M: Most of my readers are postdocs and graduate students, but Marketing for Scientists is for anyone with a scientific bent who is interested in learning the techniques of modern marketing.

B: You are marketing marketing for scientists as a service to others. I like that and have to say this was the main reason I read your book. Can you expand?

M: I think scientists need better tools to compete today in the marketplace of ideas. Only one out of ten American adults can correctly describe what a "molecule" is. But everybody knows who Sarah Palin is. The climate change deniers understand marketing perfectly well.

B: The point of tenure is to free researchers from the need to serve others and allow them to follow their interests without being influenced by peer pressure, public pressure or financial pressure. I think this is essential for unbiased judgement and that marketing, regardless of whether you call it a service to others, negatively affects scientific objectivity and renders the process of knowledge discovery inefficient. Your advice is good advice for the individual but bad advice for the community. What do you have to say in your defense?

M: Our community already uses marketing. Every proposal you submit, every scientific paper you write, and every presentation you give is a piece of marketing. But sometimes we scientists aren’t clear with ourselves that we are in fact marketing our work. We call it “networking” or “communication” or “grantsmanship” or what have you, hiding the true nature of our efforts. So first I like to peel back the taboos, take off the white gloves and take an honest look at the marketing we scientists already do.

Then I want every scientist to learn how to do it better—to learn how to use the latest and greatest marketing techniques. If you picture our community as competing only with each other for a fixed slice of the pie then of course you could get the impression that there’s nothing to be gained by improving our marketing savvy. But the science pie is not fixed. In America, it’s shrinking! We scientists need to update our marketing skills to widen the impact of science as a whole. That’s good for the whole community.

B: I am afraid that marketing and advertising will erode the public's trust in science and scientists and that this is already happening. Do you not share my concerns?

M: Indeed, nobody likes billboards and commercials. But the practice of marketing has changed since the era of Mad Men. I try to teach scientists how modern marketing means co-creating with the customer, being receptive to feedback, and being open and honest. Those are values that scientists have always had, values that build trust in today’s new companies (think Google, Apple, TOMS shoes). These values can help rebuild the public’s trust in science.

B: Are you available for seminars and how can people reach you?

M: Thanks, Sabine! For more information about the Marketing for Scientists book and the Marketing for Scientists workshops, go to www.marketingforscientists.com or email me at marc@marketingforscientists.com

Thursday, November 07, 2013

Big data meets the eye

Remember when a 20kB image took a minute to load? Back then, when dinosaurs were roaming the earth?

Data has become big.

Today we have more data than ever before, more data in fact than we know how to analyze or even handle. Big data is a big topic. Big data changes the way we do science and the way we think about science. Big data even led Chris Anderson to declare the End of Theory:
“We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”
That was 5 years ago. Theory hasn’t ended yet and it’s unlikely to end anytime soon, because there is a slight problem with Anderson’s vision: One still needs an algorithm that is able to find patterns. And for that algorithm, one needs to know what one is looking for to begin with. Pattern-finding algorithms for big data are difficult – one could say they are a science in themselves – so theory had better not end before having found them.
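
A quick way to see why ‘just let the algorithms find patterns’ is not hypothesis-free: with enough variables, patterns appear in pure noise. A sketch of my own:

```python
import numpy as np

# 50 unrelated random 'observables', 1000 samples each.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 50))

# Among the 1225 pairs, some always look correlated purely by chance.
corr = np.corrcoef(data, rowvar=False)
np.fill_diagonal(corr, 0)
print(np.max(np.abs(corr)))  # ~0.1 although there is no pattern at all
```

Deciding which of those apparent correlations count as signal is exactly where theory comes back in.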

Those of us working on the phenomenology of quantum gravity would be happy if we had data at all, so I can’t say the big data problem is big on my mind, but I have a story to tell. Alexander Balatsky recently took on a professorship in condensed matter physics at Nordita, and he told me about a previous work of his that illustrates the challenge of big data in physics. It comes with an interesting lesson.


Electron conducting bands in crystals are impossible to calculate analytically except in very simplified approximations. Determining the behavior of electrons in crystals to high accuracy requires three-dimensional many-body calculations of multiple bands and their interactions. This produces a lot of data. Big data.
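
For contrast, here is one of the few cases simple enough to do on paper: a one-dimensional tight-binding chain, which gives a single analytic band. This is a standard textbook model, not the calculations discussed here:

```python
import numpy as np

# 1D tight-binding chain with hopping t and lattice spacing a:
# a single analytic band E(k) = -2 t cos(k a). Everything realistic
# (3D, many bands, interactions) has to be done numerically.
t, a = 1.0, 1.0
k = np.linspace(-np.pi / a, np.pi / a, 201)  # first Brillouin zone
E = -2 * t * np.cos(k * a)
print(E.min(), E.max())  # the band runs from -2t to +2t
```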

You can find and download some of that data in the 3D Fermi Surface Database. Let me just show you a random example of a Fermi surface, this one for a gold-indium lattice:


The Fermi surface, roughly speaking, tells you how the electrons are packed. Pretty in a nerdy way, but what is the relevant information here?

The particular crystals Alexander and his collaborators, Hari Dahal and Athanasios Chantis, were interested in are so-called non-centrosymmetric crystals, which have a relativistic spin-splitting of the conducting bands. This type of crystal symmetry exists in certain types of semiconductors and metals and plays a role in unconventional superconductivity, which is still a theoretical challenge. Understanding the behavior of electrons in these crystals may hold the key to the production of novel materials.

The many-body, many-band numerical simulation of these crystals produces a lot of numbers. You pipe them into a file, but now what? What really is it that you are looking for? What is relevant for the superconducting properties of the material? What pattern-finding algorithm do you apply?

Let’s see...


Human eyes are remarkable pattern-search algorithms. Image source.
The human eye, and its software in the visual cortex, is remarkably good at finding patterns, so good in fact that it frequently finds patterns where none exist. And so the big data algorithm is to visualize the data and let humans scrutinize it, giving them the possibility to interact with the data while studying it. This interaction might mean selecting different parameters, different axes, rotating in several dimensions, changing colors or markers, zooming in and out. The hardware for this visualization was provided by the Los Alamos-Sandia Center for Integrated Nanotechnologies, VIZ@CINT; the software, ParaView, is freely available. Here, big data meets theory again.
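
If you want a quick taste of this kind of interaction without installing ParaView, here is a minimal stand-in using plotly – the library choice and the toy band-energy surface are mine, not the actual setup:

```python
import numpy as np
import plotly.graph_objects as go

# A toy 'band energy' over a 2D momentum grid; rotate, zoom and hover
# in the browser instead of committing to fixed 2D cuts on paper.
kx, ky = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
E = np.cos(np.pi * kx) + np.cos(np.pi * ky)

go.Figure(go.Surface(x=kx, y=ky, z=E)).show()
```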

Intrigued about how this works in practice, I talked to Hari and Athanasios the other day. Athanasios recalls:
“I was looking at the data before in conventional ways, [producing 2-dimensional cuts in the parameter space], and missed it. But in the 3-d visualization I immediately saw it. It took like 5 minutes. I looked at it and thought “Wow”. To see this in conventional ways, even if I had known what to look for, I would have had to do hundreds of plots.”
The irony being that I had no idea what he was talking about, because all I had to look at was a (crappy print of a) 2-dimensional projection. “Yes,” Athanasios says, “it’s in the nature of the problem. It cannot be translated into paper.”

So I’ll give it a try, but don’t be disappointed if you don’t see much in the image, because that’s the raison d’ĂȘtre for interactive data visualization software.

3-d bandstructure of GaAs. Image credits: Athanasios Chantis.


The two horizontal axes in the figure show the momentum space of the electrons in the directions away from the high-symmetry direction of the crystal. It has a periodic symmetry, so you’re actually seeing four times the same patch, and in the atomic lattice this pattern repeats. In the vertical direction, two different functions are shown simultaneously. One is depicted with the height profile, whose color code you see on the left; it shows the energy of the electrons. The other function, shown (rescaled) in the colored bullets, is the spin-splitting of three different conduction bands; you see them in (bright) red, white and pink. Towards the middle of the front, note the white band getting close to the pink one. They don’t cross, but instead seem to repel and move apart again. This is called an anti-crossing.
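
The anti-crossing itself is easy to reproduce in a toy model: couple two bands that would otherwise cross with a small off-diagonal term, and the eigenvalues repel. My own illustration, not the GaAs data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two bands that would cross at k = 0, coupled by a spin-orbit-like
# off-diagonal term delta. Eigenvalues of [[e1, delta], [delta, e2]].
k = np.linspace(-1, 1, 400)
e1, e2 = k, -k
delta = 0.1                                   # illustrative coupling

gap = np.sqrt(0.25 * (e1 - e2) ** 2 + delta ** 2)
upper = 0.5 * (e1 + e2) + gap
lower = 0.5 * (e1 + e2) - gap

plt.plot(k, upper, label="upper band")
plt.plot(k, lower, label="lower band")
plt.plot(k, e1, "k--", lw=0.5)                # uncoupled bands, for comparison
plt.plot(k, e2, "k--", lw=0.5)
plt.xlabel("k"); plt.ylabel("E(k)"); plt.legend()
plt.show()
```

For delta ≠ 0 the bands never touch; the minimum separation at the would-be crossing is 2·delta.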

The relevant feature in the data, the one that’s hard if not impossible to see in two dimensional projections, is that the energy peaks coincide with the location of these anti-crossings. This property of the conducting bands, caused by the spin-splitting in this type of non-centrosymmetric crystal, affects how electrons travel through the crystal, and in particular it affects how electrons can form pairs. Because of this, materials with an atomic lattice of this symmetry (or rather, absence of symmetry) should be unconventional superconductors. This theoretical prediction has meanwhile been tested experimentally by two independent groups. Both groups observed signs of unconventional pairing, confirming a strong connection between non-centrosymmetry and unconventional superconductivity.

This isn’t the only dataset that Hari studied by way of interactive visualization, and not the only case where it was not merely helpful but necessary for extracting the scientific information. Another example is this analysis of a data set from the composition of the tip of a scanning tunneling microscope, as well as a few other projects he has worked on.

And so it looks to me that, at least for now, the best pattern-finding algorithm for these big data sets is the eye of a trained theoretical physicist. News of the death of theory, it seems, has been greatly exaggerated.