
Saturday, July 19, 2014

What is a theory, What is a model?

During my first semester I coincidentally found out that the guy who often sat next to me, one of the better students, believed the Earth was only 15,000 years old. Once on the topic, he produced stacks of colorful leaflets which featured lots of names, decorated with academic titles, claiming that scientific evidence supports the scripture. I laughed at him, initially thinking he was joking, but he turned out to be dead serious, and I was clearly going to roast in hell for all of future eternity.

If it hadn’t been for that strange encounter, I would have summarily dismissed the US debates about creationism as a bizarre cultural reaction to lack of intellectual stimulation. But seeing that indoctrination can survive a physics and math education, and knowing the amount of time one can waste using reason against belief, I have a lot of sympathy for the fight of my US colleagues.

One of the main educational efforts I have seen is to explain what the word “theory” means to scientists. We are told that a “theory” isn’t just any odd story that somebody made up and told to his 12 friends, but that scientists use the word “theory” to mean an empirically well-established framework to describe observations.

That’s nice, but unfortunately not true. Maybe that is how scientists should use the word “theory”, but language doesn’t follow definitions: Cashews aren’t nuts, avocados aren’t vegetables, black isn’t a color. And a theory sometimes isn’t a theory.

The word “theory” has a common root with “theater” and originally seems to have meant “contemplation” or generally a “way to look at something,” which is quite close to the use of the word in today’s common language. Scientists adopted the word, but not in any regular way. It’s not like we vote on what gets called a theory and what doesn’t. So I’ll not attempt to give you a definition that nobody uses in practice, but just try an explanation that I think comes close to practice.

Physicists use the word theory for a well worked-out framework to describe the real world. The theory is basically a map between a model (that is, a simplified stand-in for a real-world system) and reality. In physics, models are mathematical, and the theory is the dictionary to translate mathematical structures into observable quantities.


Exactly what counts as “well worked-out” is somewhat subjective, but as I said one doesn’t start with the definition. Instead, a framework that gets adopted by a big part of the community slowly comes to deserve the title of a “theory”. Most importantly that means that the theory has to fulfil the scientific standards of the field. If something is called a theory it basically means scientists trust its quality.

One should not confuse the theory with the model. The model is what actually describes whatever part of the world you want to study by help of your theory.

General Relativity for example is a theory. It does not in and by itself describe anything we observe. For this, we have to first make several assumptions for symmetries and matter content to then arrive at a model, the metric that describes space-time, from which observables can be calculated. Quantum field theory, to use another example, is a general calculation tool. To use it to describe the real world, you first have to specify what type of particles you have and what symmetries, and what process you want to look at; this gives you for example the standard model of particle physics. Quantum mechanics is a theory that doesn’t carry the name theory. A concrete model would for example be that of the Hydrogen atom, and so on. String theory has been such a convincing framework for so many that it has risen to the status of a “theory” without there being any empirical evidence.
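
To make the General Relativity case concrete, here is a sketch of the theory-to-model step in the usual notation (my choice of illustration, written in units with c=1): the field equations are the theory; adding the assumptions of spatial homogeneity and isotropy turns them into a model, the Friedmann-Robertson-Walker metric, from which observables like the expansion rate follow.

```latex
% Theory: Einstein's field equations, valid for any metric g_{\mu\nu}
G_{\mu\nu} = 8\pi G \, T_{\mu\nu}

% Model: assume spatial homogeneity and isotropy; the metric then takes
% the Friedmann-Robertson-Walker form with scale factor a(t), curvature k
ds^2 = -dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \, d\Omega^2 \right]

% Observable: the expansion (Hubble) rate follows from the scale factor
H(t) = \frac{\dot{a}(t)}{a(t)}
```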

A model doesn't necessarily have to be about describing the real world. To get a better understanding of a theory, it is often helpful to examine very simplified models even though one knows these do not describe reality. Such models are called “toy-models”. Examples are neutrino oscillations with only two flavors (even though we know there are at least three), gravity in 2 spatial dimensions (even though we know there are at least three), and the φ4 theory - where we reach the limits of my language theory, because according to what I said previously it should be a φ4 model (it falls into the domain of quantum field theory).
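
For the first of these toy models, the textbook result is compact enough to quote (the standard two-flavor oscillation formula, in natural units): with a single mixing angle θ and a single mass-squared difference Δm², the probability that a neutrino produced with flavor α is detected with flavor β after travelling a distance L at energy E is

```latex
P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta) \, \sin^2\!\left( \frac{\Delta m^2 \, L}{4E} \right)
```

Even though the two-flavor toy model is known to be incomplete, it is good enough to get a feeling for how oscillation experiments constrain the parameters.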

Phenomenological models (the things I work with) are models explicitly constructed to describe a certain property or observation (the “phenomenon”). They often use a theory that is known not to be fundamental. One never talks about phenomenological theories because the whole point of doing phenomenology is the model that makes contact to the real world. A phenomenological model usually serves one of two purposes: It is either a preliminary description of existing data or a preliminary prediction for not-yet existing data, both with the purpose of leading the way to a fully-fledged theory.

One does not necessarily need a model together with the theory to make predictions. Some theories have consequences that are true for all models and are said to be “model-independent”. Though if one wants to test them experimentally, one has to use a concrete model again. Tests of violations of Bell’s inequality may be an example. Entanglement is a general property of quantum mechanics, straight from the axioms of the theory, yet to test it in a certain setting one has to specify a model again. The existence of extra-dimensions in string theory may serve as another example of a model-independent prediction.

One doesn’t have to tell this to physicists, but the value of having a model defined in the language of mathematics is that one uses calculation, logical conclusions, to arrive at numerical values for observables (typically dependent on some parameters) from the basic assumptions of the model. That is, it’s a way to limit the risk of fooling oneself and getting lost in verbal acrobatics. I recently read an interesting and occasionally amusing essay from a mathematician-turned-biologist who tries to explain to his colleagues the point of constructing models:
“Any mathematical model, no matter how complicated, consists of a set of assumptions, from which are deduced a set of conclusions. The technical machinery specific to each flavor of model is concerned with deducing the latter from the former. This deduction comes with a guarantee, which, unlike other guarantees, can never be invalidated. Provided the model is correct, if you accept its assumptions, you must as a matter of logic also accept its conclusions.”
Well said.

After I realized the guy next to me in physics class wasn’t joking about his creationist beliefs, he went to great lengths explaining that carbon-dating is a conspiracy. I went to great lengths making sure to henceforth place my butt safely far away from him. It is beyond me how one can study a natural science and still interpret the Bible literally. Though I have a theory about this…

Thursday, November 07, 2013

Big data meets the eye

Remember when a 20kB image took a minute to load? Back then, when dinosaurs were roaming the earth?

Data has become big.

Today we have more data than ever before, more data in fact than we know how to analyze or even handle. Big data is a big topic. Big data changes the way we do science and the way we think about science. Big data even led Chris Anderson to declare the End of Theory:
“We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”
That was 5 years ago. Theory hasn’t ended yet and it’s unlikely to end anytime soon. Because there is a slight problem with Anderson’s vision: One still needs the algorithm that is able to find patterns. And for that algorithm, one needs to know what one is looking for to begin with. But pattern finding algorithms for big data are difficult. One could say they are a science in themselves, so theory had better not end before having found them.
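
A minimal sketch of what I mean, with made-up data and k-means clustering standing in for the “statistical algorithms” (my choice of example, not Anderson’s): even the simplest pattern finder only works after a human has told it what kind of pattern to look for, here the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up "big data": two blobs of points in the plane.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.5, size=(500, 2)),
    rng.normal(loc=(3.0, 3.0), scale=0.5, size=(500, 2)),
])

# The algorithm does not decide by itself what counts as a pattern:
# the hypothesis "there are 2 clusters" has to be put in by hand.
kmeans = KMeans(n_clusters=2, n_init=10).fit(data)
print(kmeans.cluster_centers_)
```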

Those of us working on the phenomenology of quantum gravity would be happy if we had data at all, so I can’t say the big data problem is big on my mind, but I have a story to tell. Alexander Balatsky recently took on a professorship in condensed matter physics at Nordita, and he told me about a previous work of his that illustrates the challenge of big data in physics. It comes with an interesting lesson.


Electron conducting bands in crystals are impossible to calculate analytically except for very simplified approximations. Determining the behavior of electrons in crystals to high accuracy requires three-dimensional many-body calculations of multiple bands and their interactions. It produces a lot of data. Big data.

You can find and download some of that data in the 3D Fermi Surface Database. Let me just show you a random example of Fermi surfaces, this one being for a gold-indium lattice:


The Fermi surface, roughly speaking, tells you how the electrons are packed. Pretty in a nerdy way, but what is the relevant information here?

The particular type of crystals Alexander and his collaborators, Hari Dahal and Athanasios Chantis, were interested in are so-called non-centrosymmetric crystals, which have a relativistic spin-splitting of the conducting bands. This type of crystal symmetry exists in certain types of semiconductors and metals and plays a role in unconventional superconductivity, which is still a theoretical challenge. Understanding the behavior of electrons in these crystals may hold the key to the production of novel materials.

The many-body, many-bands numerical simulation of the crystals produces a lot of numbers. You pipe them into a file, but now what? What really is it that you are looking for? What is relevant for the superconducting properties of the material? What pattern finding algorithm do you apply?

Let’s see...


Human eyes are remarkable pattern-search algorithms.
The human eye, and its software in the visual cortex, is remarkably good at finding patterns, so good in fact that it frequently finds patterns where none exist. And so the big data algorithm is to visualize the data and let humans scrutinize it, giving them the possibility to interact with the data while studying it. This interaction might mean selecting different parameters, different axes, rotating in several dimensions, changing colors or markers, zooming in and out. The hardware for this visualization was provided by the Los Alamos-Sandia Center for Integrated Nanotechnologies, VIZ@CINT; the software is called ParaView and is shareware. Here, big data meets theory again.
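
If you want to try the eye-as-algorithm approach yourself, here is a bare-bones sketch of such an interactive 3-d view (using matplotlib rather than ParaView, and with made-up numbers): color encodes a fourth quantity, and in an interactive window you can rotate, zoom, and re-color at will.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up stand-in for simulation output: three coordinates plus one
# more quantity, shown through the color of the markers.
rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 2000))
value = np.sin(x) * np.cos(y) + 0.1 * z

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
points = ax.scatter(x, y, z, c=value, cmap="viridis", s=5)
fig.colorbar(points, label="fourth quantity (color)")
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")

# With an interactive backend, drag to rotate and scroll to zoom.
plt.show()
```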

Intrigued about how this works in practice, I talked to Hari and Athanasios the other day. Athanasios recalls:
“I was looking at the data before in conventional ways, [producing 2-dimensional cuts in the parameter space], and missed it. But in the 3-d visualization I immediately saw it. It took like 5 minutes. I looked at it and thought “Wow”. To see this in conventional ways, even if I had known what to look for, I would have had to do hundreds of plots.”
The irony being that I had no idea what he was talking about, because all I had to look at was (a crappy print of) a 2-dimensional projection. “Yes,” Athanasios says, “It’s in the nature of the problem. It cannot be translated into paper.”

So I’ll give it a try, but don’t be disappointed if you don’t see much in the image, because that’s the raison d’être for interactive data visualization software.

3-d bandstructure of GaAs. Image credits: Athanasios Chantis.


The two horizontal axes in the figure show the momentum space of the electrons in the directions away from the high symmetry direction of the crystal. It has a periodic symmetry, so you’re actually seeing the same patch four times, and in the atomic lattice this pattern goes on to repeat. In the vertical direction, there are two different functions shown simultaneously. One is depicted with the height profile whose color code you see on the left and shows the energy of the electrons. The other function, shown (rescaled) in the colored bullets, is the spin-splitting of three different conduction bands; you see them in (bright) red, white and pink. Towards the middle of the front, note the white band getting close to the pink one. They don’t cross, but instead they seem to repel and move apart again. This is called an anti-crossing.
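
For readers who haven’t come across anti-crossings before, the generic two-level example from quantum mechanics shows what is going on (this is just an illustration, not the actual band calculation): two bands that would cross are coupled by an off-diagonal term Δ, and the resulting levels repel so that they never come closer than 2|Δ|.

```latex
H(k) =
\begin{pmatrix}
  \epsilon_1(k) & \Delta \\
  \Delta        & \epsilon_2(k)
\end{pmatrix},
\qquad
E_\pm(k) = \frac{\epsilon_1(k) + \epsilon_2(k)}{2}
  \pm \sqrt{\left( \frac{\epsilon_1(k) - \epsilon_2(k)}{2} \right)^2 + \Delta^2},
\qquad
E_+ - E_- \ge 2|\Delta| .
```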

The relevant feature in the data, the one that’s hard if not impossible to see in two dimensional projections, is that the energy peaks coincide with the location of these anti-crossings. This property of the conducting bands, caused by the spin-splitting in this type of non-centrosymmetric crystals, affects how electrons travel through the crystal, and in particular it affects how electrons can form pairs. Because of this, materials with an atomic lattice of this symmetry (or rather, absence of symmetry) should be unconventional superconductors. This theoretical prediction has meanwhile been tested experimentally by two independent groups. Both groups observed signs of unconventional pairing, confirming a strong connection between noncentrosymmetry and unconventional superconductivity.

This isn’t the only dataset that Hari studied by way of interactive visualization, and not the only case where it wasn’t only helpful but necessary to extract scientific information. Another example is this analysis of a data set from the composition of the tip of a scanning tunnel microscope, as well as a few other projects he has worked on.

And so it looks to me that, at least for now, the best pattern-finding algorithm for these big data sets is the eye of a trained theoretical physicist. News of the death of theory, it seems, has been greatly exaggerated.

Tuesday, July 09, 2013

The unshaven valley

A super model. Simple, beautiful, but not your reality.
If you want to test quantum gravity, the physics of the early universe is promising, very promising. Back then, energy densities were high, curvature was large, and we can expect that effects were relevant which we’d never see in our laboratories. It is thus not so surprising that there exist many descriptions of the early universe within one or the other approach to quantized gravity. The intention is to test compatibility with existing data and, ideally, make new predictions and arrive at new insights.

In a recent paper, Burgess, Cicoli and Quevedo contrasted a number of previously proposed string theory models for inflation with the new Planck data (arXiv:1306.3512 [hep-th]). They conclude that by and large most of these models are still compatible with the data because our observations seem to be fairly generic. In the trash bin goes everything that predicted large non-Gaussianities, and the jury is still out on the primordial tensor modes, because Planck hasn’t yet published the data. It’s the confrontation of models with observation that we’ve all been waiting for.

The Burgess et al paper is very readable if you are interested in string inflation models. It is valuable for pointing out difficulties with some of these approaches, which gives the reader a somewhat broader perspective than just data fitting. Interesting for a completely different reason is the introduction of the paper, with a subsection “Why consider such complicated models?” that is a forward defense against Occam’s razor. I want to spend some words on this.

Occam’s razor is the idea that from among several hypotheses with the same explanatory power the simplest one is the best, or at least the one that scientists should continue with. This sounds reasonable until you ask for definitions of the words “simple” and “explanatory power”.

“Simple” isn’t simple to define. In the hard sciences one may try to replace it with small computational complexity, but that neglects that scientists aren’t computers. What we regard as “simple” often depends on our education and familiarity with mathematical concepts. For example, you might find Maxwell’s equations much simpler when written with differential forms if you know how to deal with stars and wedges, but that’s really just cosmetics. Perceived simplicity also depends on what we find elegant, which is inevitably subjective. Most scientists tend to find whatever it is that they are working on simple and elegant.

Replacing “simple” with the number of assumptions in most cases doesn’t help remove the ambiguity because it just raises the question of what counts as a necessary assumption. Think of quantum mechanics. Do you really want to count all assumptions about convergence properties of Hermitian operators on Hilbert spaces and so on that no physicist ever bothers with?

There’s one situation in which “simpler” seems to have an unambiguous meaning, which is if there are assumptions that are just entirely superfluous. This seems to be the case that Burgess et al are defending against, which brings us to the issue of explanatory power.

Explanatory power begs the question what should be explained with that power. It’s one thing to come up with a model that describes existing data. It’s another thing entirely whether that model is satisfactory, again an inevitably subjective notion.

ΛCDM for example fits the available data just fine. For the theoretician however it’s a highly unsatisfactory model because we don’t have a microscopic explanation for what dark matter and dark energy are. Dark energy in particular comes with the well-known puzzles of why it’s small, non-zero, and became relevant just recently in the history of the universe. So if you want to shave model space, should you discard all models that make additional assumptions about dark matter and dark energy because a generic ΛCDM will do for fitting the data? Of course you shouldn’t. You should first ask what the model is supposed to explain. The whole debate about naturalness and elegance in particular hinges on the question of what requires an explanation.

I would argue that models for dark energy and dark matter aim to explain more than the available data and thus should not be compared to ΛCDM in terms of explanatory power. These models that add onto the structure of ΛCDM with “unnecessary” assumptions are studied to make predictions for new data, so that experimentalists know what to look for. If new data comes in, then what requires an explanation can change from one day to the next. What was full of seemingly unnecessary assumptions yesterday might become the simplest model tomorrow. Theory doesn’t have to follow experiment. Sometimes it’s the other way round.

The situation with string inflation models isn’t so different. These models weren’t constructed with the purpose of being the simplest explanation for available data. They were constructed to study and better understand quantum effects in the early universe, and to see whether string theoretical approaches are consistent with observation. The answer is, yes, most of them are, and still are. It is true of course that there are simpler models that describe the data. But that leaves aside the whole motivation for looking for a theory of quantum gravity to begin with.

Now one might try to argue that a successful quantization of gravity should fulfill the requirement of simplicity. To begin with, that’s an unfounded expectation. There really is no reason why more fundamental theories should be simpler in any sense of the word. Yes, many people expect that a “theory of everything” will, for example, provide a neat and “simple” explanation for the masses of particles in the standard model and ideally also for the gauge groups and so on. They expect a theory of everything to make some presently ad-hoc assumptions unnecessary. But really, we don’t know that this has to be the case. Maybe it just isn’t so. Maybe quantum gravity is complicated and requires the introduction of 105 new parameters, who knows. After all, we already know that the universe isn’t as simple as it possibly could be just by virtue of existing.

But even if the fundamental theory that we are looking for is simple, this does not mean that phenomenological models on the path to this theory will be of increasing simplicity. In fact we should expect them to be less simple by construction. The whole purpose of phenomenological models is to bridge the gap between what we know and the underlying fundamental theory that we are looking for. On both ends, there’s parsimony. In between, there’s approximations and unexplained parameter values and inelegant ad-hoc assumptions.

Phenomenological models that are not strictly derived from but normally motivated by some approach to quantum gravity are developed with the explicit purpose of quantifying effects that have so far not been seen. This means they are not necessary to explain existing data. Their use is to identify promising new observables to look for, like, e.g., tensor modes or non-Gaussianity.

In other words, even if the fundamental theory is simple, we’ll most likely have to go through a valley of ugly, not-so-simple, unshaven attempts. Applying Occam’s razor would cut short these efforts and greatly hinder scientific progress.

It’s not that Occam’s razor has no use at all, just that one has to be aware it marks a fuzzy line because scientists don’t normally agree on exactly what requires an explanation. For every model that offers a genuinely new way of thinking about an open question, there follow several hundred small variations of the original idea that add little or no new insight. Needless to say, this isn’t particularly conducive to progress. This bandwagon effect is greatly driven by present publication tactics and largely a social phenomenon. Occam’s razor would be applicable, but of course everybody will argue that their contribution adds large explanatory value, and we might be better off to err on the unshaven side.

If a ball rolls in front of your car, the simplest explanation for your observation, the one with the minimal set of assumptions, is that there’s a ball rolling. From your observation of it rolling you can make a fairly accurate prediction where it’s going. But you’ll probably brake even if you are sure you’ll miss the ball. That’s because you construct a model for where the ball came from and anticipate new data. The situation isn’t so different for string inflation models. True, you don’t need them to explain the ball rolling; the Planck data can be fitted by simpler models. But they are possible answers to the question where the ball came from and what else we should watch out for.

In summary: Occam’s razor isn’t always helpful to scientific progress. To find a fundamentally simple theory, we might have to pass through stages of inelegant models that point us into the right direction.

Monday, June 03, 2013

Why do Science?

I sat down to write a piece explaining why scientific research is essential to our societies and why we should invest in applied and basic science. Then I recalled I don’t believe in free will. This isn’t always easy... So I took out the “should” from the title because it’s not like we have a choice. Evidently, we do science! The question is why? And will we continue?

Natural selection, then and now

Developing accurate theories of nature that allow making predictions about the world is an evolutionary advantage. Understanding our environment and ourselves enables us to construct tools and shape nature to our needs. It thus makes sense that natural selection favors using brains to develop theories of nature.

As is often the case though, natural selection favored traits that then extend beyond the ones immediately relevant for survival. And so the human brain has become very adept at constructing consistent explanations generally. If we encounter any inconsistency, we mentally chew on it and try to find a solution. This is why we cannot help but write thousands of papers on the black hole information paradox. This is why Dyson’s belief that inconsistencies between quantum mechanics and general relativity will forever remain outside experimental detection does not deter physicists from trying to resolve this inconsistency: It’s nature, not nurture.

In fact, our brain is so eager to create consistent theories that it sometimes does so by denying facts which won’t fit. This is why we are prone to confirmation bias, and in extreme cases paralyzed people deny that they are unable to tie their shoes or lift an arm (examples from Ramachandran’s book “Phantoms in the Brain”).

But leaving aside the inevitable overshooting, evolution has endowed us with a brain that is able and eager to develop consistent explanations. This is why we do science.

The question whether we will continue to do science, and what type of science, is more involved than asking whether scientific thinking has benefitted the reproduction of certain genes. The reason is that we have become so good at using nature to our needs that evolution no longer acts by just selecting the phenotypes best adapted to a given environment. Instead, we can make the environment fit to us.

Today, the major effort of societies is eradicating risks and diseases, optimizing crops and agricultural yields, and developing all kinds of technologies to minimize exposure to natural events. Natural selection of course still proceeds. It’s a process that acts on adaptive systems so generally and unavoidably that Lee Smolin famously uses it to explain the evolution of universes. But what does change is the mechanism that creates the mutations among which the “fittest” has an evolutionary advantage. Since we humans now create large changes on the environment in which we have to survive, the technologies that enable us to make these changes have become part of the random mutations among which selection acts. Backreaction can no longer be neglected.

In other words, natural selection can only act on expressions of genes and ideas together. The innovation provided by scientific progress is now part of the mutations that create species better adapted to the environment.

Applied and basic research

The purpose of scientific research is thus to act as an innovation machine. It enables humans to “fit” better to their environment. This is the case at least for applied research. So what then is the rationale to engage in basic research?

First note that what is usually referred to as “basic research” is rarely “non-applied,” but rather it’s “not immediately applied”. Basic research is commonly pursued on the rationale that it is the precursor of applications in the far future, a future so far that it isn’t yet possible to tell what the application might be. This basic research is necessary to sustain innovation in the long run.

Also note that what is commonly referred to as an “application” doesn’t cover the full scope of innovation that scientific research brings. Scientific insights, especially paradigm shifts, have the potential to entirely reshape the way we perceive ourselves and our place in the world. This can have major cultural and social impacts that have nothing to do with the development of technologies.

Marxist thought for example has thrived on the belief that we differ only in the chances and opportunities given to us and not by heritable talents that lead to different performances, a fact now known to be scientifically fallacious. Planned economy seems like a good idea if you believe in a clockwork universe in which you can make accurate predictions, an idea that doesn’t seem so good if you know something about chaos theory. Adam Smith’s “invisible hand” is based on the belief that self-organization is natural and leads to desirable outcomes, and we’re only slowly learning the problems in managing risk in complex and highly connected networks. The ongoing change in attitude towards religion is driven by science shining light on inconsistencies in religious storytelling. And many scientists seem to be afraid what it could do to society if people realized that they have no free will. All these are non-technological examples of innovation created by scientific knowledge.

Having said that, we are left to wonder about the scientific research that is neither applied (immediately or in the far future) nor has any other impact on our societies. There very possibly is such research. But we don’t know in advance whether or not a piece of research will become relevant in the future. I previously referred to this research as “knowledge for the sake of knowledge.” Now I am thinking that a better description would have been You-never-know-ledge.

Bottomline

Since we have to manage finite resources on this planet, there is always the question of how much energy, time, money, and people to invest into any one human activity for the most beneficial outcome. This is a question which has to be addressed on a case-by-case basis and greatly depends on what is meant by “beneficial”, a word that would bring us back to opinions and “should”s. So the above considerations don’t tell us how much investment into science is enough. But they do tell us that we need continuous investment into scientific research, both applied and basic, to allow mankind to sustain and improve the Darwinian “fit” to the environment that we are changing and creating ourselves.

Tuesday, February 12, 2013

The end of science is near, again.

The recent Nature issue has a comment titled “After Einstein: Scientific genius is extinct” by Dean Keith Simonton, who is professor of psychology at UC Davis. Ah, wait, according to his website he isn't just professor, he is Distinguished Professor. His piece is subscription only, so let me briefly summarize what he writes. Simonton notes that it has become rare for new disciplines of science to be created:
“Our theories and instruments now probe the earliest seconds and farthest reaches of the Universe, and we can investigate the tiniest of life forms and the shortest-lived of subatomic particles. It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline alongside astronomy, physics, chemistry and biology. For more than a century, any new discipline has been a hybrid of one of these, such as astrophysics, biochemistry or astrobiology. Future advances are likely to build on what is already known rather than alter the foundations of knowledge. One of the biggest recent scientific accomplishments is the discovery of the Higgs boson – the existence of which was predicted decades ago.”
He argues that scientific progress will not stall, but what’s going to happen is that we’ll be filling in the dots in a landscape whose rough features are now known:
“Just as athletes can win an Olympic gold medal by beating the world record only by a fraction of a second, scientists can continue to receive Nobel prizes for improving the explanatory breadth of theories or the preciseness of measurements.”
I have some issues with his argument.

First, he doesn’t actually discuss scientific genius or any other type of genius. He is instead talking about the foundation of knowledge that he seems to imagine as building blocks of scientific disciplines. While it seems fair to say that the creation of a new scientific discipline scores high on the genius scale, it’s not a necessary criterion. Simonton acknowledges
“[I]f anything, scientists today might require more raw intelligence to become a first-rate researcher than it took to become a genius during… the scientific revolution in the sixteenth and seventeenth century, given how much information and experience researchers must now acquire to become proficient.”
but one is still left wondering what he means by genius to begin with, or why it appears in the title of his comment if he doesn’t explain or discuss it.

Second, I am unhappy with his imagery of the foundations of knowledge, which doesn’t match the image I have, since I believe in reductionism. The foundation is, always, whatever is the currently most fundamental theory, and it presently resides in physics. Other disciplines have their own “knowledge” that exists independently of physics, because the derivation of other disciplines’ “knowledge” from physics is not presently possible, or if it was, it would be entirely impractical.

The difference between these two images matters: In Simonton’s image there’s each discipline and its knowledge. In my image there’s physics and the presently unknown relations between physics and other theories (and thereby these theories among each other). You see then what Simonton is missing: Yes, we know the very large and the very small quite well. But our understanding of complex systems and their behavior has only just begun. Now if we understand better the complex systems that are the subject of study in disciplines like biology, neuroscience and politics, this might not create a new discipline in that the name would probably not change. But it has the potential to vastly increase our understanding of the world around us, in stark contrast to the incremental improvements that Simonton believes we’re headed towards. Simonton’s argument is akin to saying that once one knows the anatomy of the human body, the rest of medicine is just details.

Third, he has a very limited imagination. I am imagining extraterrestrial life making use of chemistry entirely alien to ours, with cultures entirely different from ours, or disembodied conscious beings floating through the multiverse. You can see what I’m saying: there’s more to the universe than we have seen so far and there is really no telling what we’ll find if we keep on looking.

Fourth, he is underestimating the relevance of what we don’t know. Simonton writes
“The core disciplines have accumulated not so much anomalies as mere loose ends that will be tidied up one way or another. A possible exception is theoretical physics, which is as yet unable to integrate gravity with the other three forces of nature.”
I guess he deserves credit for having heard of quantum gravity. Yes, the foundations are incomplete. But that's not a small missing piece, it's huge, and nobody knows how huge.

To draw upon an example I used earlier, imagine that our improved knowledge of the fundamental ingredients of our theories would allow us to create synthetic nuclei (molecei) that would not have been produced by any natural processes anywhere in the universe. They would have their own chemistry, their own biology, and would interact with the matter we already have in novel ways. Now you could complain that this would be just another type of chemistry rather than a new discipline, but that’s just nomenclature. The relevant point is that this would be a dramatic discovery affecting all of the natural sciences. You never know what you’ll find if you follow the loose ends.

In summary: It might be true what Simonton says, that we have made pretty much all major discoveries and everything that is now to come will be incremental. Or it might not be true. I really do not see what evidence his “thesis”, as he calls it, is based upon, other than stating the obvious, that the low hanging fruits are the first to be eaten.

Aside: John Barrow in his book “Impossibility” discussed the three different scenarios of scientific progress: progress ending, asymptotically stagnating, or forever expanding. I found it considerably more insightful than Simonton’s vague comment.

Thursday, November 15, 2012

Book review: “Brain Bugs” by Dean Buonomano

Brain Bugs: How the Brain's Flaws Shape Our Lives
By Dean Buonomano
W. W. Norton & Company (August 6, 2012)


We have to thank natural selection for putting a remarkably well-working and energy-efficient computing unit between our ears. Our brains have allowed us to not only understand the world around us, but also shape nature to suit our needs. However, the changes humans have brought upon the face of the Earth, and in orbit around it, have taken place on timescales much shorter than those on which natural selection works efficiently. And with this comes the biggest problem mankind is facing today: We are changing our environment faster than we can adapt to it - evolution is lagging behind.

The human body did not evolve to sit in an office chair all day long, neither did we have time to adapt to an overabundance of food, travel across different time zones, or writing a text message while driving on a 6-lane highway. We have absolutely no experience in governing the lives of billions of people and their impact on ecological systems. These are not situations our brains are well suited to comprehend.

There are four ways to deal with this issue. First, ignore it and wait for evolution to catch up. Not a very enlightened approach, as we might go extinct in its execution. Second, the Amish approach: keep the environment in a state that our brains evolved to deal with. Understandable, but not for the curious, and not realistically what most people will sign up for. Third, tweak our brains and speed up evolution. Unfortunately, our scientific knowledge isn't yet sufficient for this, at least not without causing even larger problems. This then leaves Fourth: Learn about our shortcomings and try to avoid mistakes by recognizing and preventing situations in which we are prone to make errors of judgement.

I recently reviewed Daniel Kahneman's book "Thinking, Fast and Slow", which focuses on a particular type of shortcoming in our judgement, namely that we're pretty bad at intuitively estimating risks and making statistical assessments. Dean Buonomano's book includes these biases that are the focus of Kahneman's work, but offers a somewhat broader picture, covering other "brain bugs" that humans have, such as memory lapses, superstition, phobias, and imitative learning. Buonomano is very clear in pointing out that all these "bugs" are actually "features" of our brains and beneficial in many if not most situations. But sometimes what is a useful feature, such as learning from others' mishaps, can go astray, as when watching the movie “Jaws” leaves people more afraid of being eaten by sharks than of falling victim to heart attacks.

Dean Buonomano is a professor of neurobiology and psychology at UCLA. His book is easy to follow and well written. It moves forward swiftly, which I have appreciated very much because it turns out I already knew almost everything that he wrote about, a clear sign that I have too many subscriptions in my reader. The illustrations are sparse but useful, the endnotes are helpful, and the reference list is extensive.

I have only one issue to take with this book, which is that Buonomano leaves the reader with little indication of how well established the research is that he writes about. In some cases he offers neurological explanations for "brain bugs" that I suspect are actually quite controversial among specialists - it would be surprising if it wasn't so. He has an interesting opinion to offer on the origin of religious beliefs that he clearly marks as his own, but in other instances he is not as careful. Since I'm not an expert on the topic, but generally suspicious about results from fields with noisy data, small samples, and large media attention, I'm none the wiser as far as the durability of the conclusions is concerned.

In summary: This book gives you a good overview on biases and shortcomings of the human brain in a well-written and entertaining way. You will not get a lot of details about the underlying scientific research, but this is partly made up for with a good reference list. I'd say this book deserves four out of five stars.

Tuesday, April 10, 2012

Be careful what you wish for

Michael Nielsen in his book “Reinventing Discovery” relates the following anecdote from the history of science.

In the year 1610, Galileo discovered that the planet Saturn, the most distant then known planet, had a peculiar shape. Galileo’s telescope was not good enough to resolve Saturn’s rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram
    smaismrmilmepoetaleumibunenugttauiras

This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant “Altissimum planetam tergeminum observavi,” Latin for “I observed the highest of the planets to be three-formed.”

Among Galileo’s colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a “theory” according to which the number of moons per planet must follow a certain pattern. Since Earth has one moon and from Jupiter’s moons four were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo’s anagram and came up with “Salve umbistineum geminatum Martia proles” Latin for “Be greeted, double knob, children of Mars,” though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, and thereby confirmed Kepler’s theory.

Psychologists call this effort which the human mind makes to brighten the facts “motivated cognition,” more commonly known as “wishful thinking.” Strictly speaking the literature distinguishes the two in that wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you’ll find a free spot in a garage whose automatic counter says “occupied,” or that you’ll find the keys under the streetlight.

There have been many small-scale psychology experiments showing that most people are prone to overestimate a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a “cognitive bias.” It’s an evolutionarily developed way to look at the world that however doesn’t lead one to an accurate picture of reality.

Another well-established cognitive bias is the overconfidence bias, which comes in various expressions, the most striking one being “illusory superiority”. To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson which found that 93% of US American drivers rate themselves to be better than the average.

The best known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already held beliefs than to information contradicting it. And a bias that got a lot of attention after the 2008 financial crisis is “loss aversion,” characterized by the perception of a loss being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss.

It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn’t mean there’s something wrong with our brain. On the contrary, they’re helpful to its normal operation.

However, scientific research seeks to unravel the truth, which isn’t the brain’s normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials.

Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts because we can’t peer review scientists’ motivations, and we’re all alone inside our heads.

And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of being successful due to loss aversion. The researcher who meticulously keeps track of advances of the theory he works on himself, but only focuses on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people’s evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition.

And above all that, there’s the cognitive meta-bias, the bias blind spot: I can’t possibly be biased.

Scott Lilienfeld in his SciAm article “Fudge Factor” argued that scientists are particularly prone to confirmation bias because
“[D]ata show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant”

As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: By consciously seeking out the information that might challenge one's beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, daring to admit to being wrong.

And despite that, not to forget the hopes and dreams.

Mars, btw, does to the best of our current knowledge indeed have two moons.

Friday, April 06, 2012

Book Review: "The Quest for the Cure" by B.R. Stockwell

The Quest for the Cure: The Science and Stories Behind the Next Generation of Medicines
By Brent R. Stockwell
Columbia University Press (June 1, 2011)

As a particle physicist, I am always amazed when I read about recent advances in biochemistry. As far as I am concerned, the human body is made of ups and downs and electrons, kept together by photons and gluons - and that's pretty much it. But in biochemistry, they have all these educated sounding words. They have enzymes and aminoacids, they have proteases, peptides and kinases. They have a lot of proteins, and molecules with fancy names used to drug them. And these things do stuff. Like break up and fold and bind together. All these fancy sounding things and their interactions are what makes your body work; they decide over your health and your demise.

With all that foreign terminology however, I've found it difficult to impossible to read any paper on the topic. In most cases, I don't even understand the title. If I make an effort, I have to look up every second word. I do just fine with the popular science accounts, but these always leave me wondering: just how do they know this molecule does this, and how do they know this protein breaks there, fits there, and that this causes cancer and that blocks some cell-function? What are the techniques they use and how do they work?

When I came across Stockwell's book "The Quest for the Cure" I thought it would help me solve some of these mysteries. Stockwell himself is a professor of biology and chemistry at Columbia University. He's a guy with many well-cited papers. He knows words like oligonucleotides and is happy to tell you how to pronounce them: oh-lig-oh-NOOK-lee-oh-tide. Phosphodiesterase: FOS-foh-dai-ESS-ter-ays. Nicotinonitrile: NIH-koh-tin-oh-NIH-trayl. Erythropoietin: eh-REETH-roh-POIY-oh-ten. As a non-native speaker I want to complain that this pronunciation help isn't of much use for a non-phonetic language; I can think of at least three ways to pronounce the syllable "lig." But then that's not what I bought the book for anyway.

The starting point of "The Quest for the Cure" is a graph showing the drop in drug approvals since 1995. Stockwell sets out to first explain what is the origin of this trend and then what can be done about it. In a nutshell, the issue is that many diseases are caused by proteins which are today considered "undruggable," which means they are folded in a way that small molecules, the kind suitable for creating drugs, can't bind to the proteins' surfaces. Unfortunately, it's only a small number of proteins that can be targeted by presently known drugs:
"Here is the surprising fact: All of the 20,000 or so drug products that ever have been approved by the U.S. Food and Drug Administration interact with just 2% of the proteins found in human cells."
And fewer than 15% are considered druggable at all.

Stockwell covers a lot of ground in his book, from the early days of genetics and chemistry to today's frontier of research. The first part of the book, in which he lays out the problem of the undruggable proteins, is very accessible and well-written. Evidently, a lot of thought went into it. It comes with stories of researchers and patients who were treated with new drugs, and how our understanding of diseases has improved. In the first chapters, every word is meticulously explained or technical terms are avoided to the level that "taken orally" has been replaced by "taken by mouth."

Unfortunately, the style deteriorates somewhat thereafter. To give you an impression, it starts reading more like this:
"Although sorafenib was discovered and developed as an inhibitor of RAF, because of the similarity of many kinases, it also inhibits several other kinases, including the patelet-derived growth factor, the vascular endothelia growth factor (VEGF) receptors 2 and 3, and the c-KIT receptor."

Now the book contains a glossary, but it's incomplete (e.g. it contains neither VEGF nor c-KIT). With the large amount of technical vocabulary, at some point it doesn't matter anymore whether a word was introduced, because if it's not something you deal with every day it's difficult to keep in mind the names of all sorts of drugs and molecules. It gets worse if you put down the book for a day or two. This doesn't contribute to the readability of the book and is somewhat annoying if you realize that much of the terminology is never used again and one doesn't really know why it was necessary to begin with.

The second part of the book deals with the possibilities to overcome the problem of the undruggable molecules. In that part of the book, the stories of researchers curing patients are replaced with stories of the pharmaceutical industry, the start-up of companies and the ups and downs of their stock price.

Stockwell's explanations left me wanting in exactly the points that I would have been interested in. He writes for example a few pages about nuclear magnetic resonance and that it's routinely used to obtain high resolution 3-d pictures of small proteins. One does not however learn how this is actually done, other than that it requires "complicated magnetic manipulations" and "extremely sophisticated NMR methods." He spends a paragraph and an image on light-directed synthesis of peptides that is vague at best, and one learns that peptides can be "stapled" together, which improves their stability, yet one has no clue how this is done.

Now the book is extremely well referenced, and I could probably go and read the respective papers in Science. But then I would have hoped that Stockwell's book saves me exactly this effort.

On the upside, Stockwell does an amazingly good job communicating the relevance of basic research and the scientific method, and in my opinion this makes up for the above shortcomings. He tells stories of unexpected breakthroughs that came about by little more than coincidence, he writes about the relevance of negative results and control experiments, and how scientific research works:
"There is a popular notion about new ideas in science springing forth from a great mind fully formed in a dazzling eureka moment. In my experience this is not accurate. There are certainly sudden insights and ideas that apear to you from time to time. Many times, of course, a little further thought makes you realize it is really an absolutely terrible idea... But even when you have an exciting new idea, it begins as a raw, unprocessed idea. Some digging around in the literature will allow you to see what has been done before, and whether this idea is novel and likely to work. If the idea survives this stage, it is still full of problems and flaws, in both the content and the style of presenting it. However, the real processing comes from discussing the idea, informally at first... Then, as it is presented in seminars, each audience gives a series of comments, suggestions, and questions that help mold the idea into a better, sharper, and more robust proposal. Finally, there is the ultimate process of submission for publication, review and revision, and finally acceptance... The scientific process is a social process, where you refine your ideas through repeated discussions and presentations."

He also writes in a moderate dose about his own research and experience with the pharmaceutical industry.

The proposals Stockwell makes for how to deal with the undruggable proteins have a solid basis in today's research. He isn't offering dreams or miracle cures, but points out hopeful recent developments, for example how it might be possible to use larger molecules. The problem with large molecules is that they tend to be less stable and don't enter cells readily, but he quotes research that shows possibilities to overcome this problem. He also explains the concept of a "privileged structure," structures that have been found with slight alterations to bind to several proteins. Using such privileged structures might allow one to sort through a vast parameter space of possible molecules with a higher success rate. He also talks about using naturally occurring structures and the difficulties with that. He ends his book by emphasizing the need for more research on this important problem of the undruggable proteins.

In summary: "The Quest for the Cure" is a well-written book, but it contains too many technical expressions, and in many places scientific explanations are vague or lacking. It comes with some figures which are very helpful, but there could have been more. You don't need to read the blurb to figure out that the author isn't a science writer but a researcher. I guess he's done his best, but I also think his editor should have dramatically sorted out the vocabulary or at least have insisted on a more complete glossary. Stockwell makes up for this overdose of biochemistry lingo with communicating very well the relevance of basic research and the power of the scientific method.

I'd give this book four out of five stars because I appreciate Stockwell has taken the time to write it to begin with.

Wednesday, March 21, 2012

What can science do for you?

When I read Dawkins' "God Delusion" some years ago, I had very mixed feelings. On the one hand, I think he has a good point that monotheistic religious beliefs are well explained by neurological, psychological and social factors playing together.

On the neurological side, because it has proven to be a useful survival strategy, the human brain is constantly trying to make sense of the world. God is a convenient and simple explanation for all and everything and probably a side-effect of the sense-making attempts when other explanations are difficult to come by.

On the psychological side, religions address our fear of death and tell us that life is fair after all, the bad guys will be punished - post mortem. They help us to find meaning in the carelessness of the cosmos.

On the social side, small children are likely to believe what elders tell them; indoctrination at young age is highly efficient and hard to overcome later. We all want to fit in.

I learned an interesting new aspect at the latest FQXi meeting from David Eagleman, though he didn't draw a connection to religious thinking. Most mentally healthy people lead internal monologues. It's an input-output cycle that circumvents the external part of the loop (in which you actually speak and hear your voice). Eagleman spoke about his hypothesis according to which a failure of the brain to correctly time the inner monologue would have you think you "heard" your inner voice before you formulated it yourself, creating the illusion that you are hearing voices.

Evidence for the neurological roots of religious beliefs is mounting, see, e.g., Kapogiannis, D., et al. "Cognitive and neural foundations of religious belief" or this earlier post. Or, if that's too many words, here's a fluff talk about funny things people believe by Michael Shermer.

So, I'm with Dawkins on the origin of religious beliefs.

On the other hand I think religions serve a need of our societies, and the big churches have learned to serve it well. They provide a community for their followers, no entry exam required, and they offer help and advice. They have beautiful architecture and music. This used to be my favorite church song:


It's a variant of Gloria in excelsis deo. So what, really, does science have to offer in comparison?

I think the biggest problem that science faces in the 21st century is to convince religious people that it has something to offer them; that scientific thinking adds value to their lives. Unfortunately, scientists, me included, are not good at sharing this value. Most of us, that is. Carl Sagan did a pretty good job. Neil deGrasse Tyson does too.

When the piano music set in, I felt like puking, and I hate Symphony of Science with a passion. But Tyson's speech has been viewed more than 2 million times, and Symphony of Science is wildly popular, so clearly they speak to people. And the reason is simple: They're awe-inspiring.

These are both brilliant examples that document so nicely what science can do for you: It tells you what your place in the universe is. It explains how the universe works and how you're part of its working. That's more than any monotheistic religion can offer. In fact, the whole purpose of these religions is to get you to stop asking, to stop thinking.

Shawn Otto has written a book about the US American right's war on science, called "Fool Me Twice: Fighting the Assault on Science in America." I haven't read the book and have no intention to, but there's an interesting interview with Otto on Daily Kos. Otto makes the case there that
"When one side of the debate is based on knowledge and the other is based on mere belief or opinion, it’s really a battle over freedom versus authoritarianism...

I understand the argument he's trying to make, but I think it's not going to be successful. This isn't a battle about freedom, it's a battle about happiness. For Otto's argument to work, he'd have to show that a science-based democracy contributes the most to society's well-being. Now, I believe this is indeed the case, but the problem is that Otto can't base his argument on beliefs, otherwise it'll turn upside down. And to my best knowledge there's no scientific proof that democracy and science make people happier than, say, monarchy and religion. So in the end we're left with opinions, which is why I doubt this will lead anywhere, especially since the "anti-scientific" side isn't burdened by sticking to scientific arguments.

Thus, I think the awe-inspiring approach is much more promising. Chances are that, in the course of time, science-themed music will become more common (and less sickening). Björk's Biophilia is maybe a beginning - though that's arguably not everybody's petri dish. What science is still lacking, though, is a broad sense of community that includes the non-professional public. If I had an institute, I'd have a weekly public event, every Sunday morning at 11, open to everybody. We'd summarize the week's awesome news, see the most amazing images and videos, and talk about a topic that gives everyone something to think about. On occasion, we'd have a guest speaker. After that, we'd all have brunch, and people could stay and talk and make suggestions for the next week.

I think we have a long way to go to convince people that science is not just a collection of numbers and figures, but a way to understand the world and our place in it. But we're well on the way.

Wednesday, January 04, 2012

What is science?

For as long as there has been science, people have asked themselves how to identify it. Centuries of philosophers have made attempts, and I don't intend to offer an answer in the confines of a blogpost. Instead, always the pragmatist, I want to summarize some points of view that I have encountered, more or less explicitly so, and encourage you to share your own in the comments. With this post, I want to pick up a conversation that started in this earlier post.

There is the question of content and that of procedure. The question of content is mainly a matter of definition and custom. When a native English speaker says "science," they almost always mean "natural science." On occasion they include the social sciences too. Even more rarely, mathematics. The German word for science is "Wissenschaft," and its usage is much closer to the Latin root "scientia."

According to the Online Etymology Dictionary:
    Science from Latin scientia "knowledge," from sciens (gen. scientis), present participle of scire "to know," probably originally "to separate one thing from another, to distinguish"

The German "Wissenschaften" include besides the natural sciences not only the social sciences and mathematics, but also "Kunstwissenschaft," "Musikwissenschaft," "Literaturwissenschaft," etc, literally the science of art, the science of music, the science of literature. It speaks for itself that if you Google "Kunstwissenschaft" the first two suggestions are the completions "in English" and "translation." In the following I want to leave the content of "science" as open as the German and Latin expressions leave it, and let it be constrained by procedure, which for me is the more interesting aspect.

As for the procedure, I have come across these three points of view:
  • A: Science is what proceeds by the scientific method

    When pushed, the usually well-educated defender of this opinion will without hesitation produce a definition of the scientific method along the lines of hypothesis, experimental test, and falsification or gradual acceptance as established fact.

    The problem, as Feyerabend pointed out, is that a lot of progress in science simply did not come about this way. Worse, requiring a universal method may in the long run stifle progress, for the reason that the scientific method itself can't adapt to changing circumstances. (I'm not sure if Feyerabend said that, but I just did.) Requiring people in a field in which creativity is of vital importance to obey certain rules, however sane they seem, begs for somebody to break the rules - and succeed nevertheless.

    There are many examples of studies that have been pursued for the sake of scientia without the possibility or even intention of experimental test, and that have later become tremendously useful. A lot of mathematics falls into this category and, until not so long ago, a big part of cosmology. Do you know what will be possible in 100 years? Prediction is very difficult, especially about the future, as Niels Bohr said.

    The demand for falsifiability inevitably brings with it the question of patience. How long should we wait for a hypothesis to be tested before we have to discard it as unscientific? And who says so? If you open Pandora's box, out fall string theory and the technological singularity.

    Finally, let me mention that if you sign up to this definition of science, then classifications, which make up big parts of biology and zoology, are not science. Literature studies, however, are science, for you can well formulate a hypothesis about, say, Goethe's use of the pluralis majestatis and then go and falsify it.

  • B: Science is what scientists do

    This definition begs the question of who is a scientist. The answer is that science is a collective enterprise of a community that defines its own membership. Scientists form, if you want to use a fashionable word, a self-organizing system. They define their own rules, and the viability of these rules depends on the rules' success. Not only can the rules change over time, allowing for improvement; different sets of rules can also exist next to each other and compete in the course of history.

    I personally prefer this explanation of science. I like the way it fits into the evolution of the natural world, and I like how it fits with history. I also like that it's output-oriented instead of process-oriented: it doesn't matter how you do it as long as it works.

    In this reading, the scientific method, as summarized in A, is so powerful for the same reason that animals have the most amazing camouflage: selection and adaptation. It does not necessitate infallibility. Maybe the criteria of membership we use today are too strict. Maybe in the future they will be different. Maybe there will be several.

    The shortcoming of this definition is that there is no clear-cut criterion by which you can tell which of today's efforts are scientific, in much the same way that you can't tell whether some species is well adapted to a changing environment until it goes extinct, possibly because it falls prey to a "fitter" species. That means that this definition of science will inevitably be unpopular in circumstances that require short and simple answers, circumstances in which the audience isn't expected to think for themselves.

    Given the time to think, note that the lack of simple criteria doesn't mean one can't say anything. You can clearly say that the scientific method, as defined in A, has proven to be enormously successful, and that, unless you are very certain you have a better idea, discarding it is the intellectual equivalent of an insect dropping its camouflage and hoping the birds don't notice. Your act of rebellion might be very short.

    That having been said, in practice there is little difference between A and B. The difference is that B leaves the future open for improvement.

  • C: Science is the creation, collection, and organization of knowledge

    "All science is either physics or stamp collecting," said Ernest Rutherford. This begs the question whether stamp collection is a science. The definition C is the extreme opposite to A; it does not demand any particular method or procedure, just that it results in knowledge. What that knowledge is about or good for, if anything, is left up to the scientist operating under this definition.

    The appeal of this explanation is that scientists are left to do and collect what they like, with the hope that future generations will find something useful in it; it's the "you never know" of the man who never throws anything away and has carefully sorted and stored his stamps (and empty boxes, and old calendars, and broken pens, and...).

    The problem with this definition is that it just doesn't overlap with most people's understanding of science, not even with the German "Wissenschaft." There is arguably a lot of knowledge that doesn't have any particular use for most people. I know for example that the VW parked in front of the house is our upstairs neighbor's, but who cares. Where exactly does knowledge stop being scientific? Is knowledge scientific if it's not about the real world? These are the questions you'll have to answer to make sense of C.



Wednesday, November 23, 2011

Book review: "Impossibility" by John D. Barrow

Impossibility: The Limits of Science and the Science of Limits
John D. Barrow
Oxford University Press (1999)

In his book "Impossibility: The Limits of Science and the Science of Limits" John Barrow has carried together everything that sheds light on the tricky question what is possible, practically as well as conceptually. It is an extensive answer to the question of FQXi's 2009 essay contest "What is ultimately possible in physics?" but takes into account more than just physics. Barrow also covers economical, biological and, most importantly, mathematical aspects of the question what we can and can't do, what we can and can't know.

The book discusses paradoxes, time travel, computability, complexity and the multiverse, though Barrow never uses the word multiverse. The book was written somewhat more than a decade ago, but the summary of eternal inflation and bubble universes, varying constants and the question of whether it is still science to speculate about something that's unobservable is timely, and Lee Smolin's cosmological natural selection also makes an appearance. Barrow does mention some of his own work (on varying constants and universes with non-trivial topologies), but only in a paragraph or two.

Barrow briefly introduces most of the concepts he needs, but I suspect that if you don't already have a rough idea what cosmology and quantum mechanics are about, some sections will not make a lot of sense. He mentions for example the many worlds interpretation in passing without ever explaining what it is, and has possibly the shortest explanation of inflation and the expanding universe I've ever seen. But if you've read one or the other book that covers these topics, you might (as I was) be relieved that Barrow keeps it short.

The presentation is very non-judgmental. Barrow essentially goes through all aspects of the issue and reports who has contributed what to the discussion, without imposing an opinion on the reader. He also gives an interesting historical perspective on how our view of these questions has changed, especially with Gödel's contributions. However, the writing reads more like a review than a book in that it lacks a narrative, and Barrow also doesn't offer his own conclusions, he just summarizes others' arguments. I don't mind the lack of narrative so very much, since I have grown a little tired of the current pop-sci fashion of making up a story around the facts so the book sells better, but I'd have expected some original thoughts here or there. It is also unfortunate that the book is very superficial on some topics, for example time travel and free will, and if you already know a little about these you won't hear anything new. On the other hand, if you just want a flavor and some references for further reading, Barrow does a good job. I certainly learned about some aspects of the possible and impossible that I hadn't thought about before.

Barrow's book is well structured, with a summary at the end of each chapter and a final summary in the last chapter. This is very convenient if you put the book down and only pick it up again a few months later and need a reminder of what you've already read.

I've been reading this book for a while. Since 2008 in fact, if I believe the receipt. The reason it took me so long has very little to do with the actual content of the book, which, now that I've managed to finish it, I like very much, and more mundanely with the presentation of that content. The book is printed in a tiny font and in addition the print is crappy, so I get tired just by opening it and looking at a page. It has a few illustrations that are very helpful and to the point, but not particularly inspired. There are also a few photos. As you can guess, however, the Hubble Deep Field in a crappy black-and-white print on some square inch isn't too compelling, and it's difficult to see the château in Magritte's Château des Pyrénées.

Taken together, you may enjoy this book if you are interested in a summary of aspects of the possible and impossible, but you will be disappointed if you're looking for an in-depth treatment of any particular aspect. The book is well written, though not very inspired, and the scientific explanations are well referenced and, for all I can tell, flawless. I'd give it four out of five stars if I had stars to give.

Sunday, September 04, 2011

From my notepad

The 2011 FQXi conference was an interesting mix of people. The contributions from the life sciences admittedly caught my attention much more than those of the physicists. Thing is, I’ve heard Julian Barbour’s and Fotini Markopoulou’s talks before, I’ve seen Anthony Aguirre’s piano reassemble from dust before, and while I hadn’t heard Max Tegmark’s and George Ellis’ talks before, I’ve read the respective papers. The discussions at physics conferences also seem to have a fairly short recursion time, and it’s always the same arguments bouncing back and forth. One thing I learned from David Eagleman’s talk is that neuronal response decreases upon repetitive stimuli – so now I have a good excuse for my limited attention span in recursive discussions ;-)

All the talks at the conference were recorded and should be on YouTube sooner or later. Stefan also just told me that the talks from the 2009 FQXi conference are on YouTube now. (My talk is here. Beware, despite the title, I didn’t actually speak on Pheno QG. Also, I can’t for the life of me recall what that thing is I’m wearing.) Anyways, here is what I found on my notepad upon return, so you can decide which recording you might want to watch:

  • Mike Russell gave a very interesting talk on the origin of life, or at least of its molecular ancestors. He explained the conditions on our home planet some 4 billion years ago and the chemical reactions he believes must have taken place back then. He claims that under these circumstances it was almost certain that life would originate. By that he means that a molecule very similar to ADP, an important cellular energy carrier, is very easy to form under certain conditions that he claims were present in the environment. From there on, he says, it’s only a small step to protein synthesis, RNA and DNA, and they are trying to “re-create” life in the lab.

    Chemical reactions flew by a little too fast on Russell’s slides, and it’s totally not my field, so I have no clue whether what Russell says is plausible. In particular, I don’t know how sure we can really be that the environment was as he envisions it. In any case, I took away the message that the molecular origins of life might not be difficult to create in the right environment. Somewhat disturbingly, in the question session he said he has trouble getting his work funded.

  • Kathleen McDermott, a psychologist from Washington University, reported the results of several studies in which they were trying to find out which brain regions are involved in recalling memories and imagining the future. Interestingly enough, in all the brain regions they looked at, they found no difference in activity between people recalling an event in the past and envisioning one in the future.

  • David Eagleman gave a very engaging talk about how our brains slice time and process information without confusing causality. The difficulty is that the time different sensory inputs need to reach your brain differs by the type and location of the input, and the time needed for processing might also differ from one part of the brain to the next. I learned for example that the processing of auditory information is faster than that of visual information. So what your brain does to sort out the mess is wait until all the information has arrived, then present you with the result and call it “right now,” though at this point it might actually be something like 100 ms in the past.

    Even more interesting is that your brain, well trained by evolution, goes to great lengths to correct for mismatches. Eagleman told us for example that in the early days of TV broadcasting, producers were worried that they wouldn’t be able to send audio and video sufficiently synchronized. Yet it turned out that up to 20 ms or so, your brain erases a mismatch between audio and video. If it gets larger, all of a sudden you’ll notice it.

    Eagleman told us about several experiments they’ve done, but this one I found the most interesting: They let people push a button that would turn on a light. Then they delayed the light signal by some small amount of time, 50 ms or so, past pushing the button (I might recall the numbers wrong, but the order of magnitude should be okay). People don’t notice any delay because, so the explanation goes, the brain levels it out. Now they insert one signal that comes without delay. What happens? People think the light went on before they even pushed the button and, since the causality doesn’t make sense, claim it wasn’t them! (Can you write an app for that?) Eagleman says that the brain’s ability to maintain temporal order, or its failure to do so, might be a possible root of schizophrenia (roughly: you talk to yourself but get the time order wrong, so you believe somebody else is talking), and they’re doing some studies on that.

  • From Simon Saunders’ talk I took away the following quotation from a poem by Henry Austin Dobson, “The Paradox of Time:”

      “Time goes, you say? Ah no!
      Alas, Time stays, we go;
      Or else, were this not so,
      What need to chain the hours,
      For Youth were always ours?
      Time goes, you say?- ah no”


  • Malcolm MacIver, who blogs at Discover, studies electric fish. If that makes you yawn, you should listen to his talk, because it is quite amazing how electric fish have optimized their energy needs. MacIver also puts forward the thesis that the development of consciousness is tied to life getting out of the water, simply because in air one can see farther and thus the need for planning ahead arises. In a courageous extrapolation of that, he claims that our problem as a species on this planet is that we can’t “see” the problems in other parts of the world (e.g. starving children) and thus fail to react to them properly. I think that’s an oversimplification, and I’m not even sure that is the main part of the problem, but it’s certainly an interesting thesis to think about. He has a 3-part series of posts about this here: Part I, Part II, Part III.

  • Henry Roediger from the Memory Lab at Washington University explained to us, disturbingly enough, that there is in general no correlation between the accuracy of a memory and the confidence in it. For example, shown a list of 16 words with a similar theme (bed, tired, alarm clock, etc.), 60% of people (or so, again: I might mess up the numbers) will “recall” the word “sleep” with high confidence, though it was not on the list. A true scientist, he is trying to figure out under which circumstances there is a good correlation and what this means for the legal process.

  • Alex Holcombe told us about his project evidencechart.com, a tool to collect and rate pro and con arguments on a hypothesis. I think this can be very useful, though more so in fields where there actually is some evidence to rate.

Scott Aaronson's talk on free will deserves a special mention, but I found it impossible to summarize. I recommend you just watch the video when it comes out.

Saturday, June 05, 2010

Diamonds in Earth Science

To clarify the situation, experiments would need to push above 120 Gigapascal and 2500 Kelvin. I [...] started laboratory experiments using diamond-anvil cell, in which samples of mantle-like materials are squeezed to high pressure between a couple of gem-quality natural diamonds (about two tenths of a carat in size) and then heated with a laser. Above 80 Gigapascal, even diamond—the hardest known material—starts to deform dramatically. To push pressure even higher, one needs to optimize the shape of the diamond anvils's tip so that the diamond will not break. My colleagues and I suffered numerous diamond failures, which cost not only research funds but sometimes our enthusiasm as well.
(From The Earth's Missing Ingredient)

But in the end, Kei Hirose and his group succeeded in subjecting a small sample of magnesium silicate to the pressure and temperature that prevail in the Earth's lower mantle, about 2700 kilometers below our feet.

Planet Earth has an onion-like structure, as has been revealed by the analysis of seismological data: There is a central core consisting mostly of iron, solid in the inner part, molten and liquid in the outer part. On top of this follows the mantle, which is made up of silicates, compounds of silicon oxides with magnesium and other metals. The solid crust on which we live is just a thin outer skin.

The lower part of the mantle down to the iron core was long thought to consist of MgSiO3 in a crystal structure called perovskite. However, seismological data also revealed that the part of the mantle just above the CMB (in earth science, that's the core-mantle boundary, not the cosmic microwave background...) is somehow different from the rest of the mantle. This lower-mantle layer was dubbed D″ (D-double-prime, shown in the light shade in the figure), and it was unclear whether the difference was due to chemical composition or crystal structure.

As Kei Hirose describes in the June 2010 issue of Scientific American, his group started a series of experiments to study the properties of magnesium silicate at pressures up to 130 Gigapascal (for comparison, the water pressure at an ocean depth of 1 kilometer is 0.01 GPa) and temperatures exceeding 2500 Kelvin ‒ the conditions expected for the D″ layer of the lower mantle.
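
For scale, the 0.01 GPa figure in the parenthesis is just the hydrostatic estimate (a back-of-the-envelope check of mine, not a number from Hirose's article):

    P = ρ g h ≈ 1000 kg/m³ × 9.8 m/s² × 1000 m ≈ 10⁷ Pa = 0.01 GPa,

so the conditions in the D″ layer correspond to pressures more than ten thousand times those at 1 kilometer ocean depth.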

To achieve such extreme conditions, one squeezes a tiny piece of magnesium silicate between the tips of two diamonds and heats the sample with a laser. The press used in such experiments is called a "laser-heated diamond anvil cell."

The figure shows the core of a diamond anvil cell: The sample to be probed is fixed by a gasket between the tips of two diamonds. The diameter of the tips is about 0.1 millimeter, so applying a moderate force results in huge pressure.
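
To get a feeling for the numbers, here is a rough estimate of my own, taking the quoted 0.1 millimeter tip diameter at face value and ignoring the gasket and the actual pressure distribution. The tip area is

    A = π r² = π × (0.05 × 10⁻³ m)² ≈ 8 × 10⁻⁹ m²,

so a force of about 1000 Newton, roughly the weight of a 100 kilogram mass, already corresponds to a pressure of

    P = F/A ≈ 1000 N / (8 × 10⁻⁹ m²) ≈ 1.3 × 10¹¹ Pa ≈ 130 GPa.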

Diamonds are used because of their hardness, but they have the additional bonus of being transparent. Hence, the sample can be observed, or irradiated by a laser for heating, or x-rayed for structure determination.

The diamonds are fixed in cylindrical steel mounts, but creating huge pressure does not require huge equipment: The whole device fits in a hand! (Photo from a SPring-8 press release about Kei Hirose's research.)


The force on the diamond tips in such a device is actually applied by tightening screws by hand.

In the experiment, the cell was mounted in a brilliant, thin beam of x-rays created by the SPring-8 synchrotron facility in Japan. This allows one to monitor the crystal structure of the sample by observing the pattern of diffraction rings.
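
For readers who wonder how the rings encode the crystal structure: the positions of the diffraction rings follow Bragg's condition (a standard textbook relation, not anything specific to this experiment),

    n λ = 2 d sin θ,

where λ is the x-ray wavelength, d the spacing between a set of lattice planes, θ the diffraction angle, and n an integer. Measuring the ring angles at a known wavelength therefore yields the lattice spacings, and a structural phase transition shows up as a change in the set of rings.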

It was found that under the conditions of the D″ layer of the lower mantle, magnesium silicate forms a crystal structure previously unknown for silicates, which was called "post-perovskite." The formation of post-perovskite in the lower mantle is a structural phase transition of the magnesium silicate, and this transition can explain the existence of a separate D″ layer and many of its peculiar features. It also facilitates heat exchange between core and mantle, which seems to have quite important implications for earth science.

And here is the heart of the experiment (from the "High pressure and high temperature experiments" site of the Maruyama & Hirose Laboratory at the Department of Earth and Planetary Sciences, Tokyo Institute of Technology) ‒ a diamond used in a diamond anvil pressure cell:


High-quality diamonds of this size cost about US $500 each.