Saturday, January 30, 2010


We've had some more snow these last few days. Sweden is very family friendly, but some restrictions do apply ;-)

This reminded me that when I was moving to Stockholm I joked that the trajectory I'm on, Santa Barbara - Waterloo - Stockholm, is not good. If I continue this way, I'll get tenure in Novosibirsk. I was terribly wrong about this, because Novosibirsk in fact lies farther south than Stockholm. No kidding. The extrapolation actually looks like this:

Uuh-ooh. Looks more like the Siberian Islands than Novosibirsk. But of course that's all a matter of perspective.

My Swedish hasn't made significant progress. There are bits and pieces that I understand in the news on the radio: "Olycka" (accident), "kallt väder" (cold weather), "flygbombade" (air-bombed). Do I really want to know more? So far I have encountered a total of two Swedes who didn't speak English, one of them an elderly lady who answered my question about where to find the exit with "I love you," followed by what I believe meant "This is the only English sentence I know." I found my way out of the garage anyway, so the pressure to learn Swedish is low. The only problem is the mail I receive, since I'm typically not in the mood to get a dictionary and find out what exactly it says. Thus it turned out that I have happily thrown away letters containing forms about my pension fund, thinking they were advertisements. On the other hand, I received a letter from the German Society in Stockholm, kindly offering me German lessons.

In fact, many Swedes speak English so well that I, not being a native speaker myself, sometimes can't tell whether they are British or Swedish. Telltale, though, is the melody of the language. Meanwhile, my English, which used to be American English with a German accent and lately some Canadian influence, threatens to pick up that melody too. The Uncyclopedia kindly offers: "Eeengleesh vid de Sveeedeeesh acky-centy eees de moost hilariooos dialectas of de Eeengleeesh lengveeege." If anybody has good advice on how to get rid of the Canadian "ou", please let me know (watch this, min 1:07, listen to "anything abOUt Newfoundland"). These two Newfoundlanders of course give you a totally wrong picture of Canada. Watch the video below to see how successfully they've been working on their inferiority complex...

The Germans of course also have mounted police.

Friday, January 29, 2010

Division by Zero

It started when I was an undergraduate. In his email he explained that he had found a theory for the indeterminism in quantum mechanics. I spent two weeks trying to explain that dividing both sides of an equation by zero does not create a non-deterministic measurement outcome, but simply nonsense. He insisted I misunderstood his idea and accused me of being narrow-minded.

This was 15 years ago, when the internet was young. Since then, I've received hundreds of emails from self-declared geniuses who urgently want me to read their attached paper or visit their website. Some try to politely convince (with your qualifications... I would be honored...), some are outright offensive (intellectual elitism!), some ask for pity (I have nobody to talk to). Most of them write emails, some write letters, others send their self-printed books. I guess everybody with a PhD in physics has received one or the other such "theory." Baez's crackpot index tells this tale. Writing a blog makes you a preferred target. And during the last week, my inbox has seen a sharp increase in unsolicited mailings, which I blame entirely on winning a second prize in the FQXi essay contest.

I realize that some fraction of this blog's readership very likely consists of people who have themselves written such emails and who are hoping that I recognize their ingenuity. Not despite but exactly because of this, I want to offer you some open words. I will not read your paper. I will not visit your website. And, no, I am not interested in your "theory." That's for several reasons. First, I generally will not open any attachments or click on links in emails from people I don't know, period. Second, I have no lack of interesting things to read and don't need your inspiration either. If I add your paper to the pile, I might get around to reading it sometime in the next century, so forget about it. Third, it is entirely obvious from your email that you have never read any of my papers and have no clue who I am or what I am working on. Why should I waste a single second of my day reading what you wrote?

Let me be clear on this. I totally acknowledge the possibility that your theory is indeed groundbreaking and will fundamentally change our understanding of Nature. Not having read what you wrote, I am not judging your work whatsoever. But I am crucially aware that my time on this planet is finite, and I select the information that I pipe into my brain carefully. And yes, this means I use the most common crap-filters: peer review and personal connections. It is not impossible that your work is groundbreaking. But it's unlikely. More likely, it's just a waste of my time. You can call that ignorant if you like, but it's effective. Tell me a better filter and I'll use it. You can go and complain about the arrogance of PhD holders. But I hope you realize that you and your spiritual brothers (if you have sisters, they are rare) have to blame yourselves for this protective wall. If you didn't constantly bother us with immature ideas, maybe we'd take you more seriously.

The point is you have to know the rules before you break them. That's true in politics, in the arts, and it's also true in the sciences. No, you don't need a PhD to contribute to research in theoretical physics. But whether you have a title or not, you need the equivalent knowledge. You're not getting there by reading blogs, or posting in a forum. It takes time, it takes effort. And it is abundantly clear if your educational background is insufficient. You're not fooling anyone. You wouldn't go tell your doctor you have a great new idea for how he's supposed to do your bypass, would you? And why not? Because you know he has more education and experience than you. Time to realize that it also takes education and experience to write a paper in theoretical physics.

Having said that, let's look at the lighter side of things. I frequently scribble notes on papers. Most often used are "?" and "!," closely followed by "Check this!" and "nice." Inspired by this site with funny rubber stamps, you'll see in this post a few stamps I'd sometimes like to use ;-) Click to enlarge. And here's for the sisters:

Tuesday, January 26, 2010

Update on the ESQG 2010

As previously mentioned, together with Greg Landsberg and Lee Smolin, I am presently organizing a workshop on "Experimental Search for Quantum Gravity" that will take place here at Nordita in Stockholm, July 12-16. We now have a poster that is about to be printed:

Layout and design: yours truly. Meanwhile we also have a few abstracts, with a new one coming in every couple of days, so check out the website for more details.

Sunday, January 24, 2010

Reflections on the Sun

Last week I was in Great Britain (as you'll know if you follow me on Twitter). Even flying long-distance westbound, a commercial plane doesn't catch up with the setting sun. E.g., the flight from Frankfurt to Toronto takes 9 hours while you gain only 6 time zones. But when flying southwest from Sweden on a winter afternoon, one can amazingly enough see the sun rise again above the horizon. Winters here in Stockholm are cold and dark. Plenty of time to be reminded that life on Earth would not be possible without the gigantic ball of hot plasma that our planet happens to be orbiting.
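The arithmetic behind this is simple enough to sketch. Here's a toy calculation; the function and constant names are mine, and only the 9-hour/6-zone example comes from the text:

```python
# The sun sweeps westward across the sky at 360/24 = 15 degrees of longitude per hour.
SUN_RATE = 360 / 24  # deg/h

def plane_rate(time_zones_crossed, flight_hours):
    """Average westward angular speed of the plane, in degrees of longitude
    per hour (each time zone is nominally 15 degrees wide)."""
    return time_zones_crossed * 15 / flight_hours

# Frankfurt -> Toronto: 6 time zones in 9 hours.
plane = plane_rate(6, 9)            # 10 deg/h
print(plane < SUN_RATE)             # True: the sun outruns the plane westward
print(9 * (1 - plane / SUN_RATE))   # ~3: hours of local solar time that still pass in flight
```

The sketch ignores what happens when flying southwest out of Stockholm in winter: climbing in altitude and moving south both push the apparent horizon down, which is what lets an already-set sun reappear.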

We're so used to the sun that we often forget what a fascinating object it really is. Far from being the dull blob that it appears from far away, it's 10^30 kg of nuclear matter with temperatures ranging from 5,000 K at the surface to 10^7 K at the core. Some months ago, during Nordita's program on "Solar and stellar dynamos and cycles" in a talk on Helioseismology, I saw this video showing a solar quake, waves on the sun's surface:

This quake from July 1996 was triggered by a solar flare in its center that was recorded just prior to the quake. Not the newest news, but I still think this is totally amazing. There's also a lot of physics in here. Unfortunately, I wasn't able to find out what the real-time scale for the video is, but I think it's roughly an hour. The actual size of the image shown is 100,000 km in each direction. The data was taken with the Michelson Doppler Imager of NASA's Solar and Heliospheric Observatory (SOHO) mission, which basically measures the velocity perpendicular to the sun's surface by use of the Doppler shift in spectral lines. You can find a better-resolution version of the picture with a brief description of the event on this website.

It is interesting to note, and you can see this in the crappy video already, that unlike water waves you'd see in a puddle, the waves on the sun's surface increase their velocity with time (by roughly a factor of 10 over what is shown in the video). The explanation for this is that the waves are not surface waves, but pressure waves propagating into the sun's interior. Unlike the puddle, the sun is a ball, and its density increases towards the middle. With increasing density, the velocity of the waves (essentially the sound velocity) increases. The waves are reflected back (similar to light reflection/refraction at planar surfaces) and appear again on the sun's surface as outgoing rings whose outward velocity increases due to the geometry of the wavefronts and the density gradient. (For more details, the interested reader is referred to astro-ph/0601006 and references therein.)
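The turning-around of the downward-going rays can be illustrated with a toy Snell's-law model. Everything below, the linear sound-speed profile and all numbers, is made up for illustration and has nothing to do with real solar data:

```python
import math

# Toy model: sound speed grows linearly with depth z, c(z) = c0 + g*z.
# c0 and g are arbitrary illustrative values, not solar parameters.
c0, g = 10.0, 0.5

def turning_depth(theta0_deg):
    """Depth at which a ray launched at angle theta0 from the vertical turns back up.
    Along the ray, Snell's law keeps p = sin(theta)/c(z) constant; the ray turns
    where theta reaches 90 degrees, i.e. where c(z_turn) = 1/p."""
    p = math.sin(math.radians(theta0_deg)) / c0   # conserved ray parameter
    return (1 / p - c0) / g

# Rays launched closer to the vertical dive deeper before being refracted back,
# so they resurface farther out and, having sampled faster material, show a
# higher apparent horizontal speed at the surface.
print(turning_depth(80) < turning_depth(30))  # True
```

That speedup of successive rings is the qualitative effect visible in the video, though the real solar profile is of course nothing like a straight line.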

So next time you look at the sun, recall it's a giant ball of plasma held together by gravity, an everyday display of fascinating physics.

See also: Light Bulbs and the Solar Energy Production

    "These smiling eyes are just a mirror for..."

Wednesday, January 20, 2010

This and That

Some bits of information that crossed my way recently:

Tuesday, January 19, 2010

And the Winner is: Second Prize for "At the Frontier of Knowledge" in the FQXi Essay Contest

The Foundational Questions Institute (FQXi) runs an annual Essay Contest, and last year's installment asked for papers on the question: "What is Ultimately Possible in Physics?"

Bee had submitted an essay titled At the Frontier of Knowledge.

In my eyes, it's a wonderful text but, well, I might be biased. But it also convinced the Jury: Out of more than 100 submissions, "At the Frontier of Knowledge" was awarded a second prize, shared with "On the impossibility of superluminal travel: the warp drive lesson" by Carlos Barcelo, Stefano Finazzi and Stefano Liberati. As the FQXi announcement explains:

The essay of Sabine attacks our presumption that anyone could answer the essay question, arguing that we can never know if we have hit a limit of scientific knowledge. Judges praised the witty and logical style and the author's creative questioning of the question.

Congratulations, Sabine!

At the Frontier of Knowledge

At any time, there are areas of science where we are standing at the frontier of knowledge, and can wonder whether we have reached a fundamental limit to human understanding. What is ultimately possible in physics? I will argue here that it is ultimately impossible to answer this question. For this, I will first distinguish three different reasons why the possibility of progress is doubted and offer examples for these cases. Based on this, one can then identify three reasons for why progress might indeed be impossible, and finally conclude that it is impossible to decide which case we are facing.

(continue reading the PDF file of the essay)

Monday, January 18, 2010

Seminar Walkthrough

I've never been much into video games. While I am stunned by the high quality of today's virtual worlds, I tend to lose interest in human-created puzzles quickly. On the rare occasions I've played one or the other game (TombRaider, Half-Life), I shamelessly used walkthroughs. Why spend 2 hours opening all the graves when some teenager in Mississippi knows which one contains the mummy? In addition, when I'm traveling, I sometimes feel like my days are a bizarre piece of software with an unclear purpose. The walkthrough would go something like this:

Doubleclick the basket next to the coffee machine. Inspect the items it contains by hovering over each. One of them is labeled Coffee Dreamer. Put it into your inventory. Leave the hotel and take the bus to Euston Station. Find the ticket machine in the far left corner. Punch in your code and obtain an orange-colored card. Take the escalator right behind you and wait for the blue line Southbound, exit at Victoria Station.

At Victoria Station you have to find platform 23. A security officer will appear and tell you there is no platform 23. To your left you will see a Burger King and next to it two restroom doors. Enter the restroom for the gender that is not your character's. Use the orange card to open the inner door and walk right through the mirror. It will bring you to platform 23. Enter the blueish gleaming train standing there.

On the train you will meet a group of teenagers costumed as cats. Offer them the Coffee Dreamer. In exchange you will get a magic mushroom. Exit the train at Brighton. In front of the station, wait for bus 8 and get off at the Pier. This is not where you need to go, but it will save you a detour. Walk down the seashore to an old railway station. Above it you will see some words written on the wall. This is your code for the University, so write them down. They are different every time. In my case they read "I have great desire - My desire is great."
Continue down the seashore until you see a "Hotel" sign and ask for a room. The receptionist will show you a list of available rooms. Choose the one marked as emergency exit. Rest for some hours. The next morning, make a safety backup. Take off your chainmail, you will not need it today, but make sure to wear running shoes. Take bus number 8 again (this doesn't make sense); it will bring you to the university. Enter the grey building across the street. Take the elevator to the uppermost floor. When you step out of the elevator, go down three doors to the right and knock. This is your contact person. No matter what he says, reply with the keywords you found at the pier. He will then give you the number of the secretary's room (a different number every time).

Go down the corridor till you find the right room. Give the secretary your Bank information, then continue down the corridor to the seminar room (a blue double-winged door). Time now to pop the smart-pill you found in the mummy's grave. Open your inventory, double click the pill and confirm. You should make it through the seminar safely. If not, reload your morning backup and try again. Upon completion of the seminar you will gain 4 skill points.

Head back to the hotel (bus number 8 again) and enter your room. Rest some hours to recharge energy, but not more than 3, because the hotel will start burning during the night. Once the fire alarm goes off, open the window and climb up the ladder to the roof. Above you there is a helicopter waiting...

Friday, January 15, 2010

Google Streetview: Physics Institutes

Google Streetview meanwhile covers quite a decent fraction of North America and Europe. Here are some physics institutes that I found captured. Click on an image to go to Google Maps and look around.

The Perimeter Institute in Waterloo, Ontario:

The Kavli Institute in Santa Barbara (turn around and enjoy the scenery):

The Department of Physics at the University of Arizona:

The Department of Physics at Duke University:

CERN, main entry (thanks to Stefan):

Pupin Physics Laboratories at Columbia University:

Caltech's new Cahill Center for Astronomy and Astrophysics:

The physics department at the Technical University Delft, Netherlands (thanks to Arjen):

University of Washington, Seattle, Physics Department (thanks to Evan):

Add your finds in the comments. Show me something I haven't seen before :-)

Wednesday, January 13, 2010

How Is The Internet Changing The Way I Think?

As you've probably heard already, The Edge Annual Question 2010, "How Is the Internet Changing the Way You Think?", is making its rounds in the blogosphere. So let me add my few ASCII characters.

I can't say the internet is changing or has changed the way I think. It has however changed the way I post-process what I think in several ways. This has pros and cons.

Pro: The most obvious change is that I share my thoughts with many more people than before. This has frequently resulted in very interesting feedback, opened my eyes to issues I neglected or points of view I wasn't previously aware of. This is one of the prime reasons I'm writing this blog.

Con: On the flipside, while writing down my thoughts I'll typically do some Google searches and come across previous articles on related topics. This likely affects my own opinion, and I'm not sure that's entirely a good thing. And, needless to say, some of the feedback I've gotten has merely taught me that the world is full of ignorant, hostile, and simply crazy people. Knowledge I could have lived without.

Pro: Clearly, the internet provides a vast amount of easily accessible resources. Fifteen years ago, reading a journal article required going to the library, wandering around in search of the right aisle, not finding the ladder, waiting half an hour till the guy with the ladder was done wandering around, then realizing that the very volume you're looking for is missing, etc. etc. Nowadays, it's a click on a link (unless your Acrobat Reader has crashed again). If it took much more than that, I probably wouldn't read articles in any field other than physics, so the internet has certainly broadened my horizon.

Con: On the flipside, this is a hard time for perfectionists. If you're trying to read everything available on a topic, you'll never finish anything. So when I'm writing, I'm constantly trying to balance the amount of input with the expected benefit of the output, meaning I have to find the right point to stop reading. This typically leaves me with a bad conscience. All these people, they had something to say too, and lazy me didn't read it.

Generally, the internet has changed what knowledge I regard as relevant, and I suspect this is a quite widespread change. Now that you can quickly and easily look up a lot of facts, learning them by heart is totally yesterday. Like, who cares if I can't name all the presidents of the USA? What's the capital of Qatar again, and when was the transistor invented? The problem, though, is that if you don't have any factual knowledge, you won't even know what to look for. So I just hope that modern school education carefully selects what knowledge is really necessary to pipe into children's brains.

Another clearly noticeable change is the obsession with the present that the internet has brought upon us. A week from now, this post will have wandered down the "recent" list and nobody will recall what I wrote. Maybe it's my European genes that object to the idea that only the Now really exists, but if we don't honor the past we'll just repeat our mistakes. Why does Google return recent entries first? What is it that makes Americans believe that what's newer is necessarily better?

Maggie Jackson, in her book "Distracted," warns, backed up by research studies, that this "Now-Culture" severely affects the capability of children (meanwhile teenagers) to sustain attention. We're now seeing the first generation grow up that was born with the internet. If there's any major impact on human cognitive processes caused by the overflow of information we're faced with and the number of tasks we have to deal with simultaneously, then this development can become an obstacle to progress. Something to keep an eye on. There are mistakes you only make once.

The other development that I've been writing about (eg in my post "The spirits that we called") is that naive mishandling of information can be a danger for democracy. This point was recently also made very well by Lawrence Krauss in his SciAm essay "War Is Peace: Can Science Fight Media Disinformation?"

"English novelist George Orwell was remarkably prescient about many things, and one of the most disturbing aspects of his masterpiece 1984 involved the blatant perversion of objective reality, using constant repetition of propaganda by a militaristic government in control of all the media.

Centrally coordinated and fully effective reinvention of reality has not yet come about in the U.S. (even though a White House aide in the past administration came chillingly close when he said to a New York Times reporter, “We’re an empire now, and when we act, we create our own reality”). I am concerned, however that something equally pernicious, at least to the free exercise of democracy, has."

So, for now my conclusion is that while I doubt the internet has yet actually changed thought processes, it has certainly affected what we think about. And in the long run, the latter is going to affect the former.

Monday, January 11, 2010

A splendid light has dawned on me …

“Es ist mir ein prächtiges Licht über die Absorption und Emission der Strahlung aufgegangen ‒ es wird Dich interessieren. Eine verblüffend einfache Ableitung der Planck’schen Formel, ich möchte sagen die Ableitung. Alles ganz quantisch.”

“A splendid light has dawned on me about the absorption and emission of radiation ‒ it will be of interest to you. A stunningly simple derivation of Planck's formula, I might say the derivation. All completely quantical.”

Albert Einstein in a letter to his friend Michele Besso on August 11, 1916.

The “splendid light” refers to Einstein's insight that stimulated emission (also called induced emission) of light from excited atoms occurs in nature, and that this yields an elementary explanation of Planck's formula for the spectrum of black body radiation.

And, of course, some 46 years later and 50 years ago this May, the “splendid light” of Einstein's idea became a real “splendid light” with the construction of the Laser, based on the principle of stimulated emission of radiation.

Saturday, January 09, 2010

Utopia and Dystopia of Academia

"Scenarios for the future of the Higher Education sector: Where will we be in 25 years' time?" is the title of a short paper by Eddie Blass, Anne Jasman, and Steve Shelley. One should add they are concerned with the Higher Education sector in the UK in particular. The authors offer 5 dystopian future scenarios as a warning, and with this hope to prevent the developments outlined. The paper is so short it's pointless to summarize it, you can read the PDF here, it's only 2 pages. If that's still too long, The Times Higher Education offers a 1 page summary of the 2 page paper.

The authors say they wrote in a "deliberately provocative language to evoke an emotional response." Maybe there's something wrong with me, but my only "emotional response" is that Brits have a funny idea of what's "provocative." They could probably learn a thing or two from reading certain blogs, but let's not name names. Anyway, this inspired me to come up with a worst case scenario. And since that depressed me, I'll add a best case scenario. Feel free to offer your scenario in the comments. Or, even better, post it on your blog, I'll add a link below. Rules of the game are: One utopian and one dystopian future scenario for academia in 25 years from now in less than 500 words each. (You can count words online here.)


Global competition among universities and research institutes is ranked by a tightly defined metric for scientific success (grown out of the UK's Research Assessment Exercise). University departments turn into clubs that make deals on the recruitment of star scientists. The universities have professional marketing departments that hype every paper, patent, award, and conference. Research performance as defined by The Metric becomes a prime interest of national politics, and with that, public accountability for each and every pen stroke spells the death of academic freedom. International companies pour money into the leading places to improve their image and get their hands on those scientists willing to join the industrial workforce.

Since success becomes a matter of attention rather than quality, public outreach departments schmooze their way into major newspapers and magazines. Star scientists host talkshows and give public lectures in front of thousands of people with lots of technical finesse and no content. Bribery of editors of high-impact journals frequently makes headlines. Extreme competitive pressure renders unbiased peer review impossible, so it is replaced by "external review" from so-called "unbiased independent experts," frequently recruited among science journalists and the research departments of major companies. The media want their share of the money and eternally celebrate progress that is none, causing frustration and cynicism among the public. Like at the fish market, the winner is whoever screams the loudest and smells the least. Leading scientists find their private lives rewritten in cheap magazines next to Hollywood starlets and members of the royal families.

Some researchers conform to the new rules and accordingly optimize their output, connections, and social skills. However, a small but increasing fraction of researchers finds that higher education has turned into a farce. They refuse to participate in what they argue will eventually stifle knowledge discovery entirely and cause a breakdown of modern societies, which are in need of constant innovation. These researchers form a scientific underground, a globally operating association of scientists, most of whom have to finance themselves with non-research jobs. Belittled by the star scientists, the scientific underground is helped by a few philanthropists' generous donations, which allow them to maintain online networks and occasional meetings. The resentment on both sides grows.

John had just handed out the second hot-dog when Lisa's cab came to a screeching halt directly under the no-parking sign. She jumped out and waved her flexiscreen: "Look at this!" John focused on the mustard to prevent his eyes from rolling heavenward. "That's 1.99," he said.

Getting a little carried away here ;-)


Fed up with the inefficient use of resources in academia, scientists decide to take matters of management into their own hands. Instead of further conforming to nonsensical policies and wasting time being swept away by emergent community trends, they systematically study and monitor the dynamics of knowledge discovery itself. Based on scientific models that are constantly refined, they put into place a decentralized application and hiring system that makes only moderate use of metrics and instead relies on the accountability of peers' judgement. All research results are available open access, and suggested research projects and proposals are frequently openly available too, though many of these features remain subject to constant flux and debate.

Transformative research is implemented depending on the field and stage of research where a community sees the need, and measures are taken to balance competition with collaboration. Social networking tools are refined to optimally filter information and connect dispersed knowledge and researchers all over the globe. Not only do national boundaries dissolve: the educational boundaries of academia also soften, and participation in research projects more often encompasses a variety of contributors with different backgrounds.

But most importantly, scientists' skills and interests become individually recognized, supported, and put to best use. The earlier one-size-fits-all positions that combined the duties of teaching, researching, reviewing, communicating, mentoring, administrating and managing are broken down into different education and career paths. This way, duties other than research are significantly reduced and talents are optimally utilized.

These improvements unleash innovative power that overhauls some sleepy research areas and creates several new ones. Inspired by this success, leaders of a few future-oriented democracies decide to base their increasingly dysfunctional opinion- and decision making processes on modern scientific models. This results in particular in a dramatically different approach to the communication of information, the aggregation of opinions, and significant changes to the content and structure of the educational system.

This era would later become known as the 2nd scientific revolution, but John didn't know that. He was sitting in a seminar and had just decided the woman speaking was clearly insane. But he liked the theme she used for the slides; he had not seen it before and wondered where to get it. She flipped to the next slide. "I'm not insane," John read. "The theme is called 'Sparkles in Red' and is shareware."

More visions:

Friday, January 08, 2010

Surfing the Universe

So what's Garrett Lisi up to these days?

“Surfing the Universe” is a unique new reality series that blurs the lines between scientist and athlete, breaking the popular perception of both. These are inspirational stories of young scientists at the forefront of their fields, who also love playing outside -- people living life to the fullest both intellectually and physically.

We call them ‘Scientific Adventurers’.

Host Garrett Lisi (a PhD in theoretical physics who’s also an avid surfer) is the epitome of a Scientific Adventurer. Having reached a high level of academic achievement in physics, Garrett discovered long ago the best way to balance his demanding life in science is with a constant stream of fun and adventure.

Part personal profiles, part science, mixed with sports adventures and travel, each episode will profile a new ‘scientific adventurer’ whose work includes anything from uncovering the mysteries of the human brain, finding a cure for AIDS, or a look inside the high energy particle collisions created by the Large Hadron Collider (LHC) in Geneva.

Then we’ll accompany our ‘scientific adventurer’ for an exotic sports adventure: skiing the Alps, surfing in Tahiti, snowboarding in Alaska, kite surfing in Maui, paragliding in Chile, mountain climbing in Thailand, scuba diving in the Great Barrier Reef, and many, many more.

Not sure whether to laugh or to cry.

Wednesday, January 06, 2010

Is Physics Cognitively Biased?

Recently we discussed the question “What is natural?” Today, I want to expand on the key point I was making. What humans find interesting, natural, elegant, or beautiful originates in brains that developed through evolution and were shaped by the sensory input they received and processed. This genetic history also affects the sort of questions we are likely to ask, the kind of theories we search for, and how we search. I wonder, then: might we be biased in ways that make us miss clues necessary for progress in physics?

It would be surprising if we were scientifically entirely unbiased. Cognitive biases caused by evolutionary traits inappropriate for the modern world have recently received a lot of attention. Many psychological effects in consumer behavior, opinion formation, and decision making are well known by now (and frequently used and abused). The neurological origins of religious thought and superstition have also been examined. One study particularly interesting in this context is Peter Brugger et al.'s on the role of dopamine in identifying signals over noise.

If you bear with me for a paragraph, there's something else interesting about Brugger's study. I came across it mentioned in Bild der Wissenschaft (a German popular science magazine, high quality, highly recommended), but with no reference. So I checked Google Scholar but didn't find the paper. I checked the author's website, but nothing there either. Several Google web searches on related keywords, however, brought up first of all a note in NewScientist from July 2002. No journal reference. Then there are literally dozens of articles mentioning the study after this. Some refer to the NewScientist article, some don't, but they all sound like they copied from each other. The study was mentioned in Psychology Today, was quoted in newspapers, etc. But no journal reference anywhere. Frustrated, I finally wrote to Peter Brugger asking for a reference. He replied almost immediately. It turns out the study was not published at all! Though it has meanwhile, after more than 7 years, been written up and is apparently in the publication process, I find it astonishing how much attention a study can get without ever having been peer reviewed.

Anyway, Brugger was kind enough to send me a copy of the paper in print, so I now know what they actually did. To briefly summarize: they recruited two groups of 20 people each. One group were self-declared believers in the paranormal, the other self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than a binary one, though). These people performed two tasks. In one task they were briefly shown short words that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combinations of facial features. (These two tasks apparently use different parts of the brain, but that’s not so relevant for our purposes. Also, the stimuli were shown to the right and left visual field separately for the same reason, but that’s not so important for us either.)

The participants had to identify a “signal” (word/face) from the “noise” (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series. The first is the direction of the bias: whether the errors are random, lean towards false positives, or lean towards false negatives (Type I or Type II errors). The second is how well the signal was identified overall. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson’s medication that increases the dopamine level in the brain), the other half a placebo.
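This setup maps neatly onto textbook signal detection theory: two observers can be equally sensitive to the signal yet differ in their decision criterion, and the criterion alone determines whether their errors lean towards false alarms or misses. Here is a minimal simulation of that idea; the sensitivity and criterion values are invented for illustration and are not taken from the study.

```python
import random

random.seed(0)

def run_trials(criterion, n=10000, d_prime=1.0):
    """Toy signal-detection task: half the trials contain a signal
    (evidence drawn from N(d_prime, 1)), half only noise (N(0, 1)).
    The observer reports 'signal' whenever evidence exceeds the criterion."""
    false_pos = misses = 0
    for _ in range(n):
        signal_present = random.random() < 0.5
        evidence = random.gauss(d_prime if signal_present else 0.0, 1.0)
        says_signal = evidence > criterion
        if says_signal and not signal_present:
            false_pos += 1      # Type I error: "saw" a word/face in noise
        if not says_signal and signal_present:
            misses += 1         # Type II error: discarded a real signal
    return false_pos, misses

# Identical sensitivity (d_prime), different response criteria:
liberal_fp, liberal_miss = run_trials(criterion=0.2)       # eager to say yes
conservative_fp, conservative_miss = run_trials(criterion=0.8)  # reluctant

print(liberal_fp, liberal_miss)            # errors lean towards false alarms
print(conservative_fp, conservative_miss)  # errors lean towards misses
```

Shifting the criterion trades one error type for the other without changing how much information the observer actually extracts, which is why the bias direction and the overall identification rate are measured as separate parameters.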

The result was the following. First, without the medication the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where there was really just noise). The bias was equally strong for both, but in opposite directions. It is interesting, though not too surprising, that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (random yes/no misses), but the skeptics overall became as bad at identifying signals as the believers, who stayed as bad as they were without the extra dopamine.

The researchers’ conclusion is that the (previously made) claim that dopamine generally increases the signal-to-noise ratio is wrong, and that certain psychological traits (roughly, the willingness to believe in the paranormal) correlate with a tendency towards false positives. Moreover, other research results seem to have shown a correlation between high dopamine levels and various psychological disorders. One can roughly say that if you fiddle with the dose you’ll start seeing “signals” everywhere and eventually go bonkers (psychotic, paranoid, schizoid, you name it). Not my field, so I can’t really comment on the status of this research. Sounds plausible enough (I’m seeing a signal here).

In any case, these research studies show that our brain chemistry contributes to our finding patterns and signals and, in the extreme, also to assigning meaning to the meaningless (there really is no hidden message in the word verification). Evolutionarily, Type I errors in signal detection were vastly preferable: it’s fine if a breeze moving leaves gives you an adrenaline rush, but you mistake a tiger for a breeze only once. Thus, today the world is full of believers (Al Gore is the antichrist) and paranoids who see a tiger in every bush/a feminist in every woman. Such overactive signal identification has also been argued to contribute to the wide spread of religions (a topic that currently seems to be fashionable). Seeing signals in noise is, however, also a source of creativity and inspiration. Genius and insanity, as they say, go hand in hand.
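The evolutionary asymmetry is easy to make quantitative with a toy expected-cost calculation (all numbers invented for illustration): when a miss is vastly more expensive than a false alarm, the strategy that minimizes expected cost tolerates lots of false positives.

```python
# Hypothetical costs: a false alarm wastes a little adrenaline,
# a miss means being eaten.
p_tiger = 0.01           # prior probability the rustle is a tiger
cost_false_alarm = 1     # cost of fleeing from a breeze
cost_miss = 1000         # cost of ignoring a tiger

# Expected cost per rustle of the two extreme strategies:
always_flee = (1 - p_tiger) * cost_false_alarm  # pay for every breeze
never_flee = p_tiger * cost_miss                # pay for every tiger

print(always_flee)  # 0.99
print(never_flee)   # 10.0

# Fleeing is worthwhile whenever p_tiger exceeds this break-even prior:
break_even = cost_false_alarm / (cost_false_alarm + cost_miss)
print(break_even)   # roughly 0.001 -- flee even at tiny tiger probabilities
```

With these numbers, jumping at every rustle is ten times cheaper than ignoring them all, so a decision rule tuned by such costs will happily generate false positives.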

It seems odd to me, however, to blame religion on a cognitive bias for Type I errors. Searching for hidden relations at the risk that there are none doesn’t per se only characterize believers in The Almighty Something, but also scientists. The difference is in the procedure thereafter. The religious will see patterns and interpret them as signs of God. The scientist will see patterns and look for an explanation. (God can be aptly characterized as the ultimate non-explanation.) This means that Brugger’s (self-)classification of people by paranormal beliefs is somewhat beside the point (it likely depends on the education). You don’t have to believe in ESP to see patterns where there are none. If you read physics blogs you know there’s an abundance of people who have “theories” for everything from the planetary orbits to the mass of the neutron to the value of the gravitational constant. One of my favorites is the guy who noticed that in SI units G times c is to good precision 2/100. (Before you build a theory on that noise, recall that I told you last time the values of dimensionful parameters are meaningless.)
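That particular pattern is easy to check, and just as easy to kill: the product is close to 2/100 only in SI units, and the “coincidence” evaporates the moment you switch unit systems, which is exactly why numerology with dimensionful constants is noise.

```python
# Approximate CODATA values; the point is the unit dependence, not precision.
G_si = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c_si = 2.998e8      # speed of light, m/s
print(G_si * c_si)  # close to 0.02 in SI units

# The same two constants in CGS units (cm, g, s):
G_cgs = 6.674e-8    # cm^3 g^-1 s^-2
c_cgs = 2.998e10    # cm/s
print(G_cgs * c_cgs)  # about 2000 -- nowhere near 2/100
```

Only dimensionless combinations survive a change of units, so a pattern that depends on measuring in meters and kilograms is a fact about our conventions, not about Nature.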

The question then arises: how frequently do scientists see patterns where there are none? And what impact does this cognitive bias have on the research projects we pursue? Did you know that the Higgs VEV is the geometric mean of the Planck mass and the 4th root of the Cosmological Constant? Ever heard of Koide’s formula? Anomalous alignments in the CMB? The 1.5 sigma “detection”? It can’t be coincidence our universe is “just right” for life. Or can it?
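At least the first of these “patterns” can be put to numbers. With rough textbook values (plugged in here purely for illustration; precision is beside the point), the geometric mean lands within a couple of orders of magnitude of the Higgs VEV on a scale spanning some thirty orders of magnitude. Whether that is a clue or noise is exactly the question.

```python
import math

# Rough values in GeV; only the orders of magnitude matter here.
m_planck = 1.22e19        # Planck mass
lambda_quarter = 2.3e-12  # 4th root of the cosmological constant (~2.3 meV)
higgs_vev = 246.0         # Higgs vacuum expectation value

geo_mean = math.sqrt(m_planck * lambda_quarter)
print(geo_mean)  # a few TeV

# The two input scales are ~30 orders of magnitude apart...
print(math.log10(m_planck / lambda_quarter))
# ...yet their geometric mean comes within ~1-2 orders of magnitude of the VEV:
print(math.log10(geo_mean / higgs_vev))
```

Note the relation is at best approximate, which is precisely what makes it such a tempting candidate for overactive signal identification.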

This then brings us back to my earlier post. (I warned you I would “expand” on the topic!) The question “What is natural?” is a particularly simple and timely example where physicists search for an explanation. It seems, though, that I left confused those readers who didn’t follow my advice: if you didn’t get what I said, just keep asking why. In the end the explanation is one of intuition, not of scientific derivation. It is possible that the Standard Model is finetuned. It’s just not satisfactory.

For example, Lubos Motl, a blogger in Pilsen, Czech Republic, believes that naturalness is not an assumption but “tautologically true.” As “proof” he offers us that a number is natural when it is likely. What is likely, however, depends on the probability distribution used. This argument is thus tautological indeed: it merely shifts the question of what is natural from the numbers to what is a natural probability distribution. Unsurprisingly then, Motl has to assume the probability distribution is not based on an equation with “very awkward patterns,” and the argument collapses to “you won't get too far from 1 unless special, awkward, unlikely, unusual things appear.” Or in other words, things are natural unless they’re unnatural. (Calling it Bayesian inference doesn’t improve the argument. We’re not talking about the probability of a hypothesis; the hypothesis is the probability.) I am mentioning this sad case because it is exactly the kind of faulty argument that my post was warning of. (Motl also seems to find the cosine function more natural than the exponential function. As far as I am concerned the exponential function is very natural. Think otherwise? Well, zis why I’m saying it’s not a scientific argument.)

The other point that some readers misunderstood is my opinion on whether or not asking questions of naturalness is useful. I do think naturalness is a useful guide. The effectiveness of the human brain in describing Nature might be unreasonable (or at least unexplained), but it’s definitely well documented. Dimensionless numbers that are much larger or smaller than one undeniably have an itch-factor. I’m not claiming one should ignore this itch. But be aware that this want for explanation is an intuition, call it a brainchild. I am not saying thou shalt disregard your intuition. I say thou shalt be clear about what is intuition and what is derivation. Don’t misconstrue as a signal what is none. And don’t scratch too much.

But more importantly, it is worthwhile to ask what formed our intuitions. On the one hand they are useful. On the other hand we might have evolutionary blind spots when it comes to scientific theories. We might ask the wrong questions. We might be on the wrong path because we believe we have seen a face in random noise, and miss other paths that could lead us forward. When a field has been stuck for decades, one should consider the possibility that something is being done systematically wrong.

To some extent that possibility has been considered recently. Extreme examples of skeptics in science are proponents of the multiverse, Max Tegmark with his Mathematical Universe ahead of them all. The multiverse is possibly the mother of all Type II errors, a complete denial that there is any signal.

In Tegmark’s universe it’s all just math. Tegmark unfortunately fails to notice that it’s impossible for us to know whether a theory is free of cognitive bias, which he calls “human baggage.” (Where is the control group?) Just because we cannot today think of anything better than math to describe Nature doesn't mean there is nothing. Genius and insanity...

As far as the multiversists are concerned, the “principle of mediocrity” has dawned upon them, and now they ask for a probability distribution in the multiverse according to which our own universe is “common.” (Otherwise they’d have nothing left to explain. Not the kind of research area you want to work in.) That, however, is but a modified probabilistic version of the original conundrum: trying to explain why our theories have the features they have. The question why our universe is special is replaced by why our universe is especially unspecial. Same emperor, different clothes. The logical consequence of the multiversial way is a theory like Lee Smolin’s Cosmological Natural Selection (see also). It might take string theorists some more decades to notice, though. (And then what? It’s going to be highly entertaining. Unless of course the main proponents are dead by then.)

Now I’m wondering: what would happen if you gave Max Tegmark a dose of levodopa?

It would be interesting if a version of Brugger’s test were available online so we could test for a correlation between Type I/II errors and sympathy for the multiverse (rather than a belief in ESP). I would like to know how I score. While I am a clear non-believer when it comes to NewScientist articles, I do see patterns in the CMB ;-)

[Click here if you don't see what I see]

The title of this post is of course totally biased. I could have replaced physics with science but tend to think physics first.

Conclusion: I was asking whether it may be that we are biased to miss clues necessary for progress in physics. I am concluding it is more likely we're jumping on clues that are none.

Purpose: This post is supposed to make you think about what you think about.

Reminder: You're not supposed to comment without first having completely read this post.

Tuesday, January 05, 2010

Please leave nothing to my imagination

I've been sick the last few days, thus my silence. Stefan has lovingly fed me with chicken soup and porridge, so I now feel like a bag of oatmeal and am starting to cluck.