Pages

Tuesday, April 29, 2008

Black Holes at the LHC - again

I am presently writing a post on 'The Illusion of Knowledge', and I can't help but find it ironic that while doing so I am distracted by those suffering from it. Peter Steinberg over at Entropy Bound (is this a black hole hanging on his blog, or am I starting to have hallucinations?) sent me a link to an overexposed YouTube video "The LHC-- the end of the world again?" showing a teenage girl in a garden babbling about how the LHC will cause the end of the world. Starting with the disclaimer "I don't have a very technical brain," the main statement is "So, we're creating a very unnatural situation."

Unnatural situation, my ass. When I was that age I was worried about soil erosion, overfishing, acid rain, desertification, the greenhouse effect, global political instabilities, deforestation, air- and water pollution, population growth, nuclear waste, overuse of fertilizers and pesticides, and a dozen other 'unnatural situations' that are still problems today (and that I'm still worried about). So here's my message to the YouTube generation: if you have too much time on your hands, and have already re-applied make-up three times today, why don't you talk about these infinitely more pressing problems? Because somebody could expect you to do something about it?

And how 'natural' do you think YouTube is to begin with? Maybe we'd better shut it down - I am sometimes very sure it will cause the end of the world as we know it.

Following some further links, I eventually came via another video titled "Did Nostradamus predict the LHC will create a Black Hole?" to a site called revelation13.net where you can read the following nonsense
"But perhaps creation of a black hole is a holographic parallel to the world reaching 6.66 billion population in 2008, and the rise to power of the Antichrist in Russia. If a black hole is created by LHC, then initially it might not be noticed, but it could gravitate to the center of earth and start swallowing the earth's core, perhaps over years. Perhaps such an event could be the cause of the Mayan calendar prophesy of the December 2012 destruction of earth. Lets hope for the best in this situation. If that should happen then nothing could be done about it. I think it is an interesting coincidence that CERN is turned on as the world population reaches 6.66 billion (in April 2008), 666 being the number of the Antichrist, and as the possible Antichrist Putin reached 666 months age in April 2008. Note that 666 is the number of the Antichrist in Revelation 13, the Antichrist or Beast being a Satanic imitation of Christ. In Greek, the original language of the Bible's New Testament, each letter is also a number, and therefore a word can be connected to a number by adding the letter-numbers."

And another great find is this: Black hole eating the earth, artist's impression



And what am I doing while the end of the world is coming close and the antichrist is apparently on his way? (Or is it 'her way'? Does the antichrist have a penis? Anybody know?) Well, what I was doing today, besides wondering whether the antichrist has a penis, is preparing a colloquium I'm supposed to give next week about, guess what, black holes at the LHC. (Look at this, they've even put together a poster, isn't that nice?) Too bad I can't download the above video, I'd have loved to embed it, it is just hilarious.

So, here are again all the reasons why the LHC isn't going to create a black hole that will cause the end of the world:
  1. To begin with, please notice that the creation of a black hole at the LHC is *not* possible in the standard framework of Einstein's theory of General Relativity. To produce black holes at the energies the LHC can reach, one needs a modification of General Relativity at small distances. This could potentially be the case if our world had large extra dimensions. There is however no, absolutely no, evidence so far that this is really the case. The scenario is pure speculation, a hypothesis, a theory, or call it wishful thinking [1].


  2. It is not only that there must be compactified extra dimensions, but the parameters of that model (their size and number) have to be in the right range. We know that the case with one extra dimension is excluded, and two should already have shown up in sub-mm measurements, so this case too is strongly disfavoured. There are furthermore various constraints from astrophysics that put strong bounds on the cases with three and four. But most importantly, there is no good reason known why these extra dimensions should have the radius they need to have for quantum gravity to be observable at the LHC - no reason other than that it would be nice to have it show up at these energy scales (see the first sketch after this list).


  3. Now to the issue of the black holes, should they be created. Hawking showed in '75, using quantum field theory in the curved spacetime caused by a collapsing matter distribution, that black holes emit thermal radiation. The temperature of this radiation is inverse to the radius of the black hole. The black holes that would be produced at the LHC would be extremely tiny, ~10⁻¹⁸ meters, and thus extremely hot, ~10¹⁶ K (that's a 1 followed by 16 zeros). They would decay within a time scale of roughly 1 fm/c, that is 10⁻²³ seconds. They would not even reach the detector; instead they would decay already in the collision region. The only things that could be measured are the decay products.


  4. The temperature of these black holes is so high that they cannot grow even if they pass through matter of very high density, like e.g. a quark-gluon plasma or a neutron star. The mass gain from particles coming in the black hole's way (which depends on the density) is far smaller than the mass loss from the evaporation. The density of the earth is furthermore several orders of magnitude smaller than that of nuclear matter, so there is no way the black hole could grow. Even if you assume the black hole has a high γ-factor (and thus experiences a higher density), this is not sufficient to enable it to grow.


  5. Hawking radiation is *not* a quantum gravitational effect. Hawking's calculation uses two very well established ingredients: classical General Relativity and quantum field theory. It is true that we do not know quantum gravity, but quantum gravitational effects would only become important in the very late stages of the decay, when the black hole enters the quantum gravitational regime. This would then affect the observables (and this ambiguity is thus somewhat of an annoyance), but it does not mean the black hole could grow. The reason is that if the black hole grew, it would come into the regime where Hawking's calculation applies to very good approximation, and it would lose mass as predicted. The scale for quantum gravitational effects to be important is the curvature at the horizon, which falls with M/R³ when the black hole grows, where M is the mass of the black hole and R is its radius (which in turn is a function of the mass).

    As to the claim that there are 'people' who doubt black holes radiate, let me first reduce 'people' to 'physicists', since there are apparently also 'people' who doubt that the earth is more than 20,000 years old, or that it is a sphere (at least to very good accuracy). I know exactly no physicist who doubts that black holes radiate. The one work that I know of that has sometimes been referred to is that by Adam Helfer. However, even he states in his paper (gr-qc/0503053) explicitly: "[These results] do not, as emphasized above, mean that black holes do not radiate [...]" [2].


  6. As has been said many times before, the earth is constantly hit by cosmic rays which, in interactions with particles in the earth's atmosphere, undergo collisions with a higher center-of-mass energy than the LHC will reach. If it were possible to produce a black hole this way which would then swallow the earth, this would not only very likely have happened already some billions of years ago, but we should also see stars disappearing more often, especially neutron stars because of their high density. There is no evidence for that.


  7. It has then further been argued that the black holes at the LHC would be created in a different center-of-mass frame than those from cosmic ray collisions, and thus not have the same average velocity with respect to the earth. This is correct, but there are two points to be made here.

    For one, the protons at the LHC will be accelerated to 99.9999991% of the speed of light, which is really fast. I mean, really. If you bang them together it is extremely unlikely the created particles will be at rest or even slow-moving relative to the earth. Indeed, as Stefan has explained very nicely previously, their velocity will typically be far higher than the escape velocity of the earth (see the second sketch after this list). Pictorially speaking, consider a car crash. Things usually fly around quite a lot, already at 0.0000001% of the speed of light.

    Second, even for the few black holes for which that wouldn't be the case, again, they would decay even before they hit the detector. In any case they would definitely not collect in the middle of the earth (or 'gravitate to the center of the earth' or whatever). This is a totally absurd idea that I have nevertheless come across several times. It is absurd because the center of the earth would generally not be on the produced object's trajectory (it has an initial velocity), and even if it were, they wouldn't stop in the center of the earth - why should they? Ever heard of energy conservation? As said previously, they are far too small (their cross-section is too small) to interact noticeably with the earth's matter, so they wouldn't slow down. (If one really pushes it one can now go and estimate how long it would take them to slow down until they get stuck and so on. But frankly, this scenario is already so absurd that such a speculation is totally moot, and an utter waste of time, mine and yours.)


  8. About the claim that the LHC's risk report is biased because it has not been performed by people at "arm's length": Yes, to get a reasonable report about the difficulties the LHC might be facing, I would think you ask experts. These experts are usually people working in the field. Would you prefer them to be randomly sampled from a phone book? I honestly do not understand why anybody would think people working in theoretical physics have a larger interest in destroying the planet than other human beings.

    To be somewhat cynical here, you'd instead think that a lot of theoretical physicists should be really nervous about the LHC because it will test their theories. And no matter what, very many of these theories will be ruled out, dead, speculations no longer viable. One of the theories that can be tested is the one with large extra dimensions. And if it isn't found, hundreds of people who have worked on it must face the fact that they have wasted their time, that their publications do not describe nature, and that the topic is no longer something you can use for a grant proposal.


  9. Finally, let me say that there is always some amount of uncertainty in everything we do. Yes, there is the possibility we are all wrong. There is also the possibility that you wake up tomorrow morning and have turned into a monstrous bug, because a cosmic ray has modified some virus to be capable of altering your DNA. Or, as Arkani-Hamed put it so aptly in the recent NYT article: There is some minuscule probability, he said, "the Large Hadron Collider might make dragons that might eat us up."
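
Since some of the numbers above may look like they fall from the sky, here is a little back-of-the-envelope sketch for point 2 (my own illustration, not from any particular paper): in the large-extra-dimension scenario the usual Planck mass M_Pl is related to the true gravitational scale M_* and the radius R of the n extra dimensions roughly by M_Pl² ~ M_*^(n+2) R^n, so assuming M_* near a TeV fixes how large R would have to be:

    # Rough estimate (my own illustration): required size R of n large extra
    # dimensions if the true gravitational scale M_* is assumed to be ~1 TeV,
    # using M_Pl^2 ~ M_*^(n+2) * R^n in natural units.
    hbar_c = 1.973e-16   # GeV * m, converts 1/GeV to meters
    M_Pl   = 1.2e19      # GeV, the usual (non-reduced) Planck mass
    M_star = 1.0e3       # GeV, assumed fundamental scale of about 1 TeV

    for n in range(1, 7):
        R = (M_Pl / M_star) ** (2.0 / n) / M_star * hbar_c
        print(f"n = {n}: R ~ {R:.1e} m")

    # n = 1 gives ~1e+13 m (solar-system sized, obviously excluded),
    # n = 2 gives ~1e-3 m (the range probed by sub-mm gravity tests),
    # larger n gives progressively smaller radii.

And a similarly rough sketch of the kinematics in point 7, again just my own back-of-the-envelope numbers; the only inputs are the nominal 7 TeV beam energy and the proton mass, and the escape velocity of the earth is about 11.2 km/s:

    # How fast a 7 TeV proton actually moves.
    import math

    E_beam   = 7000.0    # GeV, nominal LHC beam energy per proton
    m_proton = 0.938     # GeV, proton mass
    c        = 2.998e5   # km/s

    gamma = E_beam / m_proton
    beta  = math.sqrt(1.0 - 1.0 / gamma**2)

    print(f"gamma ~ {gamma:.0f}")
    print(f"v ~ {beta * 100:.7f}% of c, i.e. about {beta * c:,.0f} km/s")
    print("compare: escape velocity of the earth ~ 11.2 km/s")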


I, and I believe many of my colleagues, would really appreciate it if the media - TV, print and online - would not promote such catastrophe scenarios and scientifically completely absurd scary stories just because they sell well. There is, in the community, no argument about whether mini black holes at the LHC are a risk worth worrying about. The answer is simply no, they are not. The story about black holes created in particle colliders that swallow the earth first came up in '99 regarding RHIC, so it has a long beard in 2008, and it's getting longer every day. If you are running out of topics for the science section, why don't you go and ask some scientists for inspiration?

I have no specific relation to the theories investigated here; in fact, not being influenced by subjective preferences is part of what it means to be a scientist (whether we like that or not). I'm not telling you what I wrote here because I want money or publicity for collider physics, or for any other reason of personal advantage you could accuse me of. I am telling you this simply because black holes at the LHC are not something you should worry about. Worry about some real problems instead.

Further reading (strongly recommended before asking redundant questions):



Note added May 2nd: Clifford from Asymptotia asked me to clarify that with 'quantum gravity' I mean a theory in which gravity is quantized.



[1] And if you don't take into account the presence of large extra dimensions, you will find correctly that there is a factor 10³² missing. Before you suggest this factor has been overlooked in hundreds of peer-reviewed publications, maybe consider redoing your calculation.
[2] It seems to me that even if one bought his approach they would evaporate only faster. It's hard to say though because he states "it is unrealistic at present to expect to be able to make quantitative theoretic predictions".



Monday, April 28, 2008

Conference Poster

Our previously mentioned conference

now has a poster!

Science in the 21st Century


Download PDF (~1.1 MB).

Thanks to Liz for the design!

Saturday, April 26, 2008

Spooky Action

Thursday I came across an article by Bruno Maddox on the website of Discover magazine. Maddox, author of the column 'Blinded by Science', writes about
In this article he describes his fascination - shared with many famous physicists - with interactions mediated over distances. Especially in cases when only the effect is accessible to our senses it seems mysterious and spooky - the needle on the compass turning North, the moon orbiting around the earth. How do they know what to do? Maddox describes how he read "Electronics for Dummies" (by Gordon McComb and Earl Boysen) to tackle the mystery, and 71 days later comes to the conclusion
"as far as I can tell, nobody knows how a magnet can move a piece of metal without touching it. And for another—more astonishing still, perhaps—nobody seems to care."

Bizarre, I thought. What exactly does he mean by 'knowing'? Is this a philosophical question? I looked Maddox up on Wikipedia and learned he is 'best known for his satirical magazine essays'. So maybe it's a joke, I wondered? Maddox continues that in further pursuit of the topic he then read the 'Mathematics of Classical and Quantum Physics', from which he likely learned the term 'action at a distance', and that "virtual particles are composed entirely of math and exist solely to fill otherwise embarrassing gaps in physics". He eventually summarizes

"What I have learned, in other words, after 71 days of strenuous research, is that I and my fellow Dummies no longer have a seat, if we ever did, at the dinner table of science. If we’re going to find any satisfaction in this gloomy vale of misery and mystery, we’re going to have to take matters into our own hands and start again, from first principles."
I've honestly tried to figure out what he meant to say, but I just can't make sense out of it. You're all welcome to start again, and from first principles. But I think this article sheds a rather odd light on the status of theoretical physics. So here are some comments:

1. Electro and Magnetic

We have experimentally extremely well confirmed theories that allow us to describe electromagnetism to high precision, in the classical as well as in the quantum regime. Maybe that's not satisfactory for everybody. But at least I think the explanation that the electromagnetic interaction is mediated by something called the electromagnetic field is very satisfactory. After all, we are surrounded by electromagnetic waves all the time, and we use them quite efficiently to carry phone calls from here to there, or to maneuver satellites in outer space. To calculate the interaction between two macroscopic objects like a fridge magnet and the fridge, at least I wouldn't use perturbation theory of quantum electrodynamics, but good old Maxwell's equations.
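
For reference (my addition, not part of Maddox's article), these are Maxwell's equations in SI units, with the charge density ρ and current density J as sources:

    \nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \qquad
    \nabla \cdot \vec{B} = 0, \qquad
    \nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, \qquad
    \nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}

Everything said here about fields mediating the interaction is encoded in these four lines: the sources on the right produce the fields, and the time derivatives couple the electric and magnetic fields to each other, which is what lets changes propagate as waves.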

"Electronics for Dummies" maybe isn't exactly the right book to read if you want to understand how electromagnetism works and how to understand the field concept. Since Maddox is concerned with magnets let me point out an often occurring linguistic barrier: Electrodynamics is the theory of the electric and magnetic interaction, as it turns out both are just aspects of the the same field, and parts of the same theory.

To use a well known example, consider two resting electrons. You'd describe their field by the Coulomb interaction without magnetic component. Yet when you move relative to them, you'd assign to them a magnetic field, since moving charges cause magnetic fields. This is no disagreement; it just means that under a transformation from one rest frame to another the field components transform into each other. It was indeed this feature of Maxwell's equations that led Einstein to his theory of Special Relativity ("Zur Elektrodynamik bewegter Körper", Annalen der Physik 17 (1905), p. 891–921).
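
Schematically (a standard textbook formula, added here just for illustration), for a boost with velocity v the field components parallel to v stay the same while the perpendicular ones mix:

    \vec{E}'_{\parallel} = \vec{E}_{\parallel}, \quad
    \vec{B}'_{\parallel} = \vec{B}_{\parallel}, \quad
    \vec{E}'_{\perp} = \gamma \left( \vec{E} + \vec{v} \times \vec{B} \right)_{\perp}, \quad
    \vec{B}'_{\perp} = \gamma \left( \vec{B} - \frac{\vec{v} \times \vec{E}}{c^2} \right)_{\perp}

So for the resting electrons, where B = 0, the moving observer sees a magnetic field B'_⊥ = -γ (v × E)_⊥ / c², which is exactly the "moving charges cause magnetic fields" statement above.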

I didn't read "Electronics for Dummies", but browsing the index on Amazon it seems to contain what you'd think, namely what a transistor is and how you outfit your electronic bench. To understand the basics of theoretical physics I would maybe recommend instead


2. The Standard Model

The interaction between a fridge magnet and the fridge is a macroscopic phenomenon that involves a lot of atomic and condensed matter physics. Ferromagnetism is an interesting emergent feature, and there are probably still aspects that are not fully understood. The Standard Model of particle physics describes the fundamental interactions between elementary particles. Complaining that it doesn't describe your fridge magnet is completely inappropriate, as said fridge magnet is hardly an elementary particle. You could just as well say neuroscience doesn't describe the results of election polls.

See also my earlier posts on Models and Theories and Emergence and Reductionism.

3. Action at a Distance

Quantum mechanics has a spooky 'action at a distance', but of a completely different nature than the force between two magnets. In quantum mechanics there is no field that mediates it (at least nobody has ever measured one). Maybe even more importantly, this is an instantaneous 'action': the wave-function collapses non-locally. Very unappealing. That's why it's (still) spooky. This well known problem of quantum mechanics however does not appear in classical electrodynamics; it comes in through the quantum mechanical measurement process.

Maxwell's theory that describes the electric and magnetic interaction is local. Interactions between charges are mediated by the fields. The interaction needs to propagate; it doesn't happen instantaneously. The same is true for General Relativity. Yes, Newton called it a great absurdity that "one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one to another". But this is because in Newtonian gravity interactions were instantaneous. If you changed the earth's mass, the moon would immediately know about it. It took Einstein to remove this great absurdity, and he taught us that gravity is mediated by spacetime itself. It propagates locally; there is no spooky action at a distance.

To get a grip on Quantum Electrodynamics I'd recommend



4. Virtual Particles

And yes, virtual particles are mathematical constructs that appear within the perturbation series and are handy devices in Feynman diagrams. The use of this mathematical tool has however proved to be correct to high precision. The effects of the presence of these virtual contributions have been measured; the best known examples are probably the Lamb shift and the Casimir effect.

It is a general problem, which I have encountered myself, that popular science books use pictures, metaphors or generalized concepts to describe theories, and the reader then gets stuck with possibly inappropriate impressions that wouldn't occur if one had a derivation and thus a possibility to understand the limitations of these verbal explanations. One can e.g. derive the interaction energy between two pointlike sources as the Fourier transform of the propagator, the propagator being what also describes the virtual particle exchange in Feynman diagrams. This interaction energy for the photon propagator is just the Coulomb potential, as you'd expect. (If the exchange particle is massive, you get a Yukawa potential.) How seriously one should take the picture with the virtual particle is a different question though. The interaction between the fridge and the magnet is hardly a scattering process with asymptotically free in- and outgoing states.
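
To make that statement concrete, here is the textbook version of this Fourier transform (written schematically, dropping coupling constants and sign conventions, in natural units with ħ = c = 1), for an exchange particle of mass m:

    V(r) \;\sim\; \int \frac{d^3 q}{(2\pi)^3} \, \frac{e^{i \vec{q} \cdot \vec{r}}}{\vec{q}^{\,2} + m^2}
    \;=\; \frac{1}{4\pi} \, \frac{e^{-m r}}{r}

For m → 0 (the photon) this is just the 1/r Coulomb potential; for massive exchange particles it is the exponentially suppressed Yukawa potential mentioned above.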

I too like to ponder questions like what actually 'is' a particle, much like one can wonder what actually 'is' space-time. However, I admittedly fail to see what the point is of this rambling about "embarrassing gaps in physics" besides expressing the author's confusion about the books he read.

For an introduction into quantum field theory I recommend

(You can download the first chapter, which explains very nicely the relations between particles, fields, and forces, here.)

Bottomline

If you’re going to find "any satisfaction in this gloomy vale of misery and mystery", you’re going to have to take matters into your own hands and read the right books before abandoning the Standard Model.

PS: My husband lets me know he finds my writing very polite, and wants me to refer you to the Dunning-Kruger effect.



Friday, April 25, 2008

Interna

Apologies for being somewhat quiet the last days. Besides some other time-consuming stuff, I've been filling out forms, forms and more forms for the tax return. Once I started doing that I figured I should probably have done this already last year. So now I have twice the paper load, and I promise I'll stop mocking Stefan for being late with his tax return. I printed hundreds of pages of information sheets for newcomers, non-residents, and general guides about how to deduct your own death and your neighbor's children. Just to fail already at question two: Your spouse's social insurance number. I called the hotline a dozen times, and to my great annoyance constantly messed up the numbers in the calculator because I keep mixing up dots with commas (the German notation is the reverse, e.g. twenty thousand would be 20.000,00 not 20,000.00).

After having done that a whole day I come home, and there's some animal in the ventilation just above my stove! I can hear it tapping around, just above the fan grill trying to get out. I turn on the light but can't see anything besides shadows, and of course the batteries in the flashlight are dead. I think about just not doing anything, but I honestly don't want whatever it is to die there, just above my stove. Besides, the tapping is kind of creepy.

Not being in the mood to search for my toolbox, I call the landlord, who shows up like 1 minute later with a ladder and duct tape. "The starlings!" he says, "They've been attacking the building this year!" He takes off the grill and just turns on the vent. And out comes, with feathers flying around, a starling. The bird heads in totally the wrong direction and I'm grateful for having closed all doors in a rare case of foresight. The starling makes a turn, shits all over the living room and then bumps full speed against the window. We manage to manoeuvre him out on the balcony with some towels. Then the landlord takes the ladder and the duct tape and seals the outside vent outlets. "They try to nest in there. But we don't want them, they're not paying rent."

He tells me Monday somebody will come to cover the vents with a grill or so. So I spend the next half hour cleaning up birdshit.

Anyway, a nice weekend to all of you!

Thursday, April 24, 2008

HTML/Infected.WebPage.Gen.

We were recently informed by some people that apparently the Avira anti-virus protection shows an alert "HTML/Infected.WebPage.Gen." on MS Internet Explorer 7 for this blog; this is a trojan that has been around since last fall (damage potential: low). The alert, which seems to have been appearing for 2-3 weeks, is not reproducible with either Symantec or Trend Micro, on either MS Internet Explorer or Firefox. Some googling brought up that others have reported the same problem for blogs on Blogger or Wordpress.

I suspect this is a bug with the virus protection, not with this website, and that it wrongly interprets part of the html code. I haven't changed anything about the template (e.g. add-ons) for several months, and the rest of the website is generated by a Blogger script that runs for everybody on blogspot. There also aren't any trackbacks showing up on the entry site, so that can't be a cause either (in some forums you'll find a recommendation to delete all trackbacks, but it doesn't sound plausible to me).

Another bloggy thing: Stefan and I have noticed that the 'publish' button under the comment preview presently doesn't work. Again, this is a script we have no influence on, so we can't do anything about it. Please use instead the 'publish' button under the word verification, which seems to work just fine. If you use the wrong button and notice your comment doesn't appear (there is no error message), scroll up - the comment isn't lost unless you leave the site, it just stays in the textbox.

Wednesday, April 23, 2008

Max Planck at 150

Max Planck, April 23, 1858 - October 4, 1947.
(Credits: Max Planck Society)
Today is the 150th birthday of Max Planck. He was born on April 23, 1858, the son of a professor of law at Kiel on the Baltic coast in northern Germany, and grew up in Kiel and Munich.

In December 1924, in a lecture at Munich on the occasion of the 50th anniversary of the beginning of his studies at the university there, he remembered how he came to study physics. Actually, he had been given quite discouraging advice by the physicist Philipp von Jolly back then in 1874, when young Max Planck was unsure whether to choose physics or music. Jolly was convinced that physics had become a mature field and an elaborate science, crowned by the recent, firm establishment of the principle of the conservation of energy, and that only minor "grains of dust and bubbles" were left to explore. Nevertheless, Planck was fascinated by the then brand-new theories of thermodynamics and electrodynamics, and wanted to understand them in depth. And he succeeded in that.

Applying the concept of entropy to electromagnetic radiation, he found in the late 1890s a new constant of nature - today known as the Planck constant. This constant, when combined with the speed of light and Newton's constant of gravitation, allowed him to formulate units of mass, length and time "completely independent of special material bodies and substances, and valid for all times and even extraterrestrial and non-human civilisations" - natural units now known as the Planck units. And of course, most of all, this constant allowed Planck to write down the correct theoretical description for the spectrum of electromagnetic radiation emitted by a hot body. Curiously, this formula implied that the energy of this radiation comes in small packets - it is quantised. The rest is history, as they say.
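
For the fun of it, here is that little unit-building exercise in a few lines of Python (my own illustration; the numerical values are of course the modern ones, and Planck originally used h rather than ħ, which shifts the numbers by factors of √(2π)):

    # Planck units from hbar, G, and c.
    import math

    hbar = 1.054571817e-34   # J*s
    G    = 6.67430e-11       # m^3 / (kg * s^2)
    c    = 2.99792458e8      # m/s

    l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
    t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.4e-44 s
    m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

    print(f"Planck length: {l_P:.3e} m")
    print(f"Planck time:   {t_P:.3e} s")
    print(f"Planck mass:   {m_P:.3e} kg")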

Happy birthday, Max Planck!





  • For more about Max Planck, check out the biographies at Wikipedia, Encyclopedia Britannica, or MacTutor. His role in establishing quantum theory is discussed by Helge Kragh in a short essay for PhysicsWorld, Max Planck: the reluctant revolutionary.

  • Besides opening the door to the quantum, Max Planck was a very gifted organiser of science and long-term editor of the prestigious Annalen der Physik. He "discovered" and strongly supported Albert Einstein. The Max Planck Society, which arose from the Kaiser Wilhelm Gesellschaft presided over by Planck for a long time, has organised an interesting online exhibit on the occasion of the 50th anniversary of his death in 1997.

  • Today's Planck units saw the light of day in an addendum to the paper Über irreversible Strahlungsvorgänge ("On irreversible radiative processes"), published as Sitzungsbericht Deutsche Akad. Wiss. Berlin, Math-Phys Tech. Kl 5 440-480 (1899), and Annalen der Physik 306 [1] (1900) 69-122. The Planck spectrum was published in Über das Gesetz der Energieverteilung im Normalspectrum ("On the law of energy distribution in the normal spectrum"), Annalen der Physik 309 [4] (1901) 553-563.

  • Planck relates the story about Jolly in a guest lecture on Vom Relativen zum Absoluten (From the Relative to the Absolute) at the University of Munich on December 1, 1924. The German text of the lecture can be found in the collection Max Planck: Vorträge, Reden, Erinnerungen.





Tuesday, April 22, 2008

On the Emergence of Lies

lie -- pronunciation [lahy] noun,
verb: lied, ly·ing.

noun:
1. a false statement made with deliberate intent to deceive; an intentional untruth; a falsehood.
2. something intended or serving to convey a false impression.

3. an inaccurate or false statement.
4. the charge or accusation of lying.


verb:
5. to speak falsely or utter untruth knowingly, as with intent to deceive.
6. to express what is false; convey a false impression.

[Source: Dictionary.com]



I've been browsing recently through the references in the previously reviewed book "Complex Adaptive Systems" by Miller and Page about the use of agent based computational models for social interactions. While doing so, I came across a paper that I found quite interesting:

In this paper the authors examine the role that communication plays in the development of strategies. They use a very specific model, but the results they find have the potential to be more general. And since this is a blog, I want to speculate somewhat about it.


The Model

In the model examined in the paper the agents play the prisoner's dilemma. This is a fairly simple game, in which the players receive payoffs depending on whether they cooperate or defect. Wikipedia summarizes the classical prisoner's dilemma as follows:
    Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

Or, in table form:
                              Prisoner B stays silent         Prisoner B betrays
    Prisoner A stays silent   Each serves 6 months            A: 10 years, B: goes free
    Prisoner A betrays        A: goes free, B: 10 years       Each serves 5 years


In this game, regardless of what the opponent chooses, each player always receives a higher payoff (lesser sentence) by betraying, and thus betraying is the strictly dominant strategy.

In the model examined in the paper, the players can now in addition exchange communication tokens, where one of the tokens signals that the player has selected a move. The exchange continues until either both players indicate they have made a decision, or until the communication exceeds some chat limit. The additional payoffs from this possibility are that a player who has not chosen a move before reaching the chat limit obtains a punishment (a negative payoff), while a player who picks a move but whose opponent fails to do so receives a payoff between that of mutual defection and mutual cooperation.

As far as I understand it, in each round each player plays with every other player. Payoffs are summed up, and then the players' strategies undergo a selection and mutation process, in which the best strategies have a survival advantage, plus some amount of randomness. And then the next round starts. I think for the following results it is crucial that too much communication without outcome has a negative payoff; whether it has to be included in exactly this form I don't know. I would have thought e.g. that those players who talk too much just get to play less in each round, which would also amount to a disadvantage. Either way, interpret it as you wish; the point is that blahblah without outcome sucks.
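
For readers who like it concrete, here is a minimal sketch (my own illustration, not the paper's actual code) of the basic payoff structure from the table above and of the dominance argument; the communication tokens and the evolutionary selection step of the actual model are not included:

    # Classical prisoner's dilemma payoffs from the table above,
    # given as jail time in years (lower is better for the player).
    SILENT, BETRAY = "silent", "betray"
    sentence = {  # (my move, opponent's move) -> my sentence
        (SILENT, SILENT): 0.5,
        (SILENT, BETRAY): 10.0,
        (BETRAY, SILENT): 0.0,
        (BETRAY, BETRAY): 5.0,
    }

    # Betraying is strictly dominant: whatever the opponent does,
    # it yields a shorter sentence than staying silent.
    for theirs in (SILENT, BETRAY):
        assert sentence[(BETRAY, theirs)] < sentence[(SILENT, theirs)]
    print("Betraying is the better reply to either move of the opponent.")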



The Results

So here is in a nutshell the result of running this model a lot of times; I summarized it in the figure below. It sketches the hypothesis the authors put forward to interpret the data that they have collected.


The rounded boxes indicate the dominant strategy, and the arrows are some learning processes.

Suppose we start at the top, in a world in which there is no communication and the players in the prisoner's dilemma thus mutually defect. There might be the occasional mutant that tries to communicate, but if the other player doesn't listen, or doesn't understand, this doesn't have any effect. But it's worse than not having an effect: if the talkative players get trapped into chatting too much, they receive a punishment. The authors further point out that a reason why no communication and mutual defection is a stable strategy is that communicating players are more vulnerable to mutations, which, together with chatting too much being a disadvantage, leads them to suspect that this reduces the survivability of the communicating players.

The situation changes if two players meet who communicate and understand each other. They can then choose to cooperate, receive a higher payoff, and have a survival advantage. This leads to a rather sudden increase in communication and cooperation.

However, this emergence of communication and cooperation sows the seeds of its own destruction: It doesn't take a large mutation to get from the cooperative players to those that pretend to cooperate, but then defect - which results in a higher payoff for them, to the disadvantage of the cooperative ones. Now one could suspect that the cooperative players will try to use some code to identify each other as being of the same type, and the others as being mimics. But whatever code they come up with, it again takes only a small mutation to turn it into a mimic.

This then leads to a lot of communication with decreasing cooperation. In the course of this the players will come to notice that talking without outcome is a disadvantage, so to improve the strategy it is beneficial to not talk at all. This leads to a gradual decay of communication back to the initial state.

These outbreaks of communication and cooperation with subsequent decay are neither periodic, nor are the outbreaks of equal size.



And We

Now I find this kind of interesting, as I think the development of sophisticated communication among humans, and the possibility to exchange information efficiently, is one of the most important evolutionary advantages. Of course the investigated model is a very simplistic one, and there is no good reason to believe it tells us something about beings playing such complex games as The-Game-Of-Human-Life. Maybe most importantly, in the examined case the players are not able to consider long-term effects of their actions, neither can they learn over the course of various cycles. But it's intriguing to speculate about the analogy.

I am completely convinced the amount of advertisement and commercials is an indicator for the certain decline of civilization. Above all other things it signals a culture of betrayal that we get more or less used to. Thus, we learn to some extent to mistrust information we receive. How many of the pills that you can buy on the internet will actually live up to their promise? How many of these lotions will actually make you look younger? How much of what is 'guaranteed' is actually 'guaranteed'? How much of the stuff they try to convince you you can't live without is actually a completely unnecessary waste of resources?

Can you trust your used car dealer? Will the candidate keep his promises after the election? Do you believe what they write in the newspaper, or do they just sex up stories to obtain more attention, higher payoffs? (13-year-old boy corrects NASA! - Fact or Fiction?). Are these boobs real?

What can we do to deal with this emergence of deceit, originating in the larger individual advantage? Well, we make up laws to punish lies* that can lead to damage. And make up religions to scare those who lie. In this way, we essentially incorporate the long-term effects of our actions.

However, dishonesty for one's own advantage, and the resulting mistrust, is a serious political problem on the global scale that corrupts our efforts to address the challenges we are facing in the 21st Century. Jeffrey Sachs put that very aptly:
    "Despite the vast stores of energy, including nonconventional fuels, solar power, geothermal power, nuclear power, and more, there is a pervasive fear of an imminent energy crisis resulting from the depletion of oil. The scramble of powerful countries to control Middle East oil or newly discovered reserves in other parts of the world, such as West Africa and the Arctic, has surely intensified, while investments in alternative and sustainable energy sources have been woefully insufficient. This is an example of a vicious cycle of distrust. The world could adopt a cooperative approach to develop sustainable energy supplies, with sustainability in the dual sense of low greenhouse gas emissions and long-term, low-cost availability. Alternatively, we can scramble for the depleting conventional gas and oil resources. The scramble, very much under way today, reduces global cooperation, spills over into violence and risks great power confrontations, and makes even more distant the good-faith cooperation to pool R&D investments to develop alternative fuels and alternative ways to use nonconventional fossil fuels.
    The Bush administration has been more consumed by the scramble rather than by cooperative global investments in a long-term future [...]"

~Jeffrey D. Sachs, in "Common Wealth", p. 45, typos and emphasis mine.

So, where are we in the diagram?


* I here refer to a lie as being made with the intention to deceive the communication partner to one's own advantage. In other instances, lies serve various social purposes, e.g. politeness, simplification, or covering a lack of knowledge.


See also: Communication

Monday, April 21, 2008

This and That in Publishing

The spread of community-based applications on the internet is a challenge for the traditional publishing industry. Here are just three links I came across these last days related to these changes:

  • Wikipedia is, of course, a serious contender for the classical encyclopedias. The Encyclopedia Britannica offers a new, special program for web publishers, including bloggers, webmasters, and anyone who writes for the Internet. It's called Britannica WebShare, and upon registration, it allows links to fulltext entries of the encyclopedia. So far, I have often been relying on Wikipedia when I thought that some background information on concepts or people or events I am talking about in a post may be useful - mainly because access to Wikipedia is free, and the quality often excellent. In the future, I think I'll also give the Britannica a try.

  • Oxford University Press wants to learn more about the research habits of today’s researchers and scholars. Using a Research Habits Survey, they want to figure out if their customers prefer printed journals over the electronic version, traditional books over e-books, read papers on the screen or on print-outs, use Google or specialised databases to locate relevant literature, rely on blogs or news feeds to stay tuned with the latest developments, and so on. Doing the survey, one realises once more what drastic changes the last ten years or so have brought for traditional publishers.

  • Finally, the May issue of the Scientific American runs an article on Science 2.0 - Is Open Access Science the Future? It deals with issues and examples coming mostly from the life sciences, and is interesting to read - and it's online here.



Saturday, April 19, 2008

Ninetynine-Ninetynine

"Just ninetynine-ninetynine!" is what they tell me every time I fail to switch the radio station fast enough, is what they print in the ads, is what the shout in the commercials.

When I was about six years old or so, I recall I asked my mom why all prices end with a ninetynine. Because they want you to believe it's cheaper than it is, I was told. If they print 1.99 it's actually 2, but they hope you'll be fooled and think it's "only" one-something.

I found that a good explanation when I was six, but twentyfive years later I wonder: if even six-year-olds know that, can it be a plausible reason? Why do stores keep on doing that? Do they really think customers are that stupid? Or has it just become a convention?

Now coincidentally, I recently came across this paper

via Only Human. The study presented in this paper examines the influence of a given 'anchor' price on the 'adjusted' price that people believe to be the actual worth of an object, if the only thing they know is that this actual worth is lower than the retail price. A typical question they used in experiments with graduate students sounds like this:

"Imagine that you have just earned your first paycheck as a highly paid executive. As a result, you want to reward yourself by buying a large-screen, high-definition plasma TV [...] If you were to guess the plasma TV’s actual cost to the retailer (i.e., how much the store bought it for), what would it be? Because this is your first purchase of a plasma TV, you have very little information with which to base your estimate. All you know is that it should cost less than the retail price of $5,000/$4,988/$5,012. Guess the product’s actual cost. This electronics store is known to offer a fair price [...]"

The question had one of the three anchor prices for different sample groups: a rounded anchor (here $5,000), a precise 'under anchor' slightly below the rounded anchor, and a precise 'over anchor' slightly above the rounded anchor. Now the interesting outcome of their experiment is that people's guess for the adjusted price consistently stayed closer to the anchor the higher the perceived precision of this price, i.e. the fewer zeros at the end. Here is a typical result for a beach house, the anchors in $, followed by the participants' mean estimate:

    Rounded anchor: 800,000
    Mean estimate: 751,867

    Precise under anchor: 799,800
    Mean estimate: 784,671

    Precise over anchor: 800,200
    Mean estimate: 778,264

What you see is that the rounded anchor results in an adjustment that is larger than the average adjustment observed with the precise anchors. Now you might wonder how many graduate students have much experience with buying beach houses, or plasma TVs for $5,000. But they used a whole set of similar questions, in which the measure to be estimated wasn't always a price, but possibly some other value like the protein value of a beverage. There even was a completely context-free question: "There is a number saved in a file on this computer. It is just slightly less than 10,000/9,989/10,011. Can you guess the number?". The results remain consistent: the more significant digits the anchor has, the smaller the adjustment. For the context-free question the mean estimate was 9,316 (rounded), 9,967 (precise under), 9,918 (precise over).
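
Just to put numbers on 'larger': a quick calculation (mine, using the beach house figures quoted above) of how far the mean estimates were adjusted down relative to each anchor:

    # Relative downward adjustment of the mean estimate for each anchor type.
    data = {
        "rounded (800,000)":       (800_000, 751_867),
        "precise under (799,800)": (799_800, 784_671),
        "precise over (800,200)":  (800_200, 778_264),
    }

    for label, (anchor, estimate) in data.items():
        adjustment = (anchor - estimate) / anchor * 100
        print(f"{label}: adjusted down by {adjustment:.1f}%")

    # Output: roughly 6.0% for the rounded anchor, but only about
    # 1.9% and 2.7% for the two precise anchors.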

The paper further contains some other slightly different experiments with students to check other aspects, and it also contains an analysis of behavior in real estate sales. The authors looked at five years of real estate sales somewhere in Florida, and compared list prices with the actual sales prices of homes. They found that sellers who listed their homes more precisely (say $494,500 as opposed to $500,000) consistently got closer to their asking price. The buyers were less likely to negotiate the price down as far when they encountered a precise asking price.

I find this study kind of interesting, as it would indicate that the use of ninetynineing is to fake a precision that isn't there.

Bottomline: The more details are provided, the less likely people are to doubt the larger context.



Thursday, April 17, 2008

This and That

  • Peter Woit informs us that the Journal of Number Theory is planning on introducing video abstracts for papers that they publish. I have previously discussed this question in an earlier post about SciVee.

    To just repeat what I said there: As much as I like watching videos myself, I am afraid this can bias people’s opinions towards those who have the means to come up with great videos. One would expect that the first few videos are low key, but if this becomes an established procedure and gains in importance, researchers and their institutions will try to produce the most convincing videos they can possibly come up with. The analogy to commercials and their influence on the ‘free marketplace’ lies at hand.

    The quality of a video, and how well one can sell oneself or the topic, greatly depends on professional support. If the journal doesn’t provide a service that ensures videos can be produced with roughly equal quality, this will just widen the gap between scientists in institutions where there is such support (e.g. by the public outreach department or by a hired contractor) and where there isn’t.


  • Today's Globe and Mail has a very interesting article on how the Presidential race magnifies Internet's growing role in media. It goes very well with my previous post The Spirits that we Called, in which I argue that the internet does influence our political decision making processes and this poses a challenge for our democratic system that we have to face rather sooner than later.

    "This transformation of the media has transformed many of you, from passive readers to active investigators: researching and digesting information from a variety of sources as you seek your own understanding of what is happening in the world.

    In some ways, this is all very exciting. In other ways, it's frightening. Whatever it is, it's here.

    [...]

    Today, the role of the Internet in shaping election campaigns is exponentially greater. Ninety per cent of the money Illinois Senator Barack Obama is raising consists of online donations of $100 or less. No doubt many of those donors went to YouTube to listen to the incendiary sermons of Mr. Obama's pastor, Rev. Jeremiah Wright. John McCain's supporters can network with each other through Facebook or MySpace.

    [...]

    The correlation between the decline of printed newspapers and the growth of online news sources is not exact, but it is real. Just as the growth in viewers and profits of 24-hour cable news shows coincides with the steady decline in network news ratings, so too the rise of the Internet presages the demise of the daily broadsheet.

    [...]

    One thing is clear: As newspapers cut back on staff and budgets, the quality of journalism suffers. Bureaus close, there are fewer investigative reports and fewer reporters covering elections.

    [...]

    No one knows where this is going. Pessimists believe that the decline of newspapers will lead to an erosion of knowledge, as political spin and Web-fuelled rumour replace objective (well, more objective) journalism."

    Well, the question isn't whether you're a pessimist or an optimist, but what you do to ensure your democratic system doesn't suffer. Information is one of the most important resources in our societies. Sitting around and waiting to see whether knowledge will 'erode' and we'll be left with rumors and gossip isn't helpful.


  • For the German readers: Spiegel Online has an interesting article Wie die Wissensgesellschaft betrogen wird (How the Knowledge Society is Being Cheated) that reports on Robert B. Laughlin's contribution to the "Edition Unseld", a collection of essays published by Suhrkamp, in which researchers and writers 'define the relation between man and research' ("definieren Forscher und Schriftsteller das Verhältnis zwischen Mensch und Forschung"). According to the Spiegel article, Laughlin warns we might be facing a 'new dark age of disinformation and ignorance' ("warnt [...] vor einem neuen dunklen Zeitalter der Desinformation und Ignoranz").

    Admittedly, his concerns didn't become very clear to me upon reading the article. But it seems he is worried about patent rights which hinder research, as well as about research results not being published because publication can have financial drawbacks, or because there is a danger the knowledge will be abused.


  • Picture of the week: bullet shooting four sticks of chalk


    Couldn't find out who made the photo, found it via this site. More photos of things being shot to pieces at BoredStop. See also this video.


  • Quotation of the week:


      "Technology is so much fun but we can drown in our technology. The fog of information can drive out knowledge."




Sunday, April 13, 2008

Emergence and Reductionism

My last week's post on 'Theories and Models' was actually meant to be about emergence and reductionism. While writing, however, I figured it would be better to first explain what I mean by a model, since my references to sex occasionally seem to confuse one or the other reader.

Brief summary of last week's post: we want to describe the 'real world out there' by using a model that has explanatory power. The model itself captures some features of the world, it uses the framework of a theory, but should not be confused with the theory itself. I found it useful to think of this much like a function (the theory) acting on a set (some part of the real world out there) to give us a picture (the model).


The model describes some objects and the way they interact with each other (though the interaction can be trivial, or the system just static). To complete the model one usually needs initial conditions and some data as input (to determine parameters). In the following I will refer to the part of the real world out there that the model is supposed to describe as 'the system'.

To reiterate what I said last week: I don't care whether you like that use of words or not, it's just to clarify what I mean when I use them.


I. Emergence

Today's topic is partly inspired by the book on "Complex Adaptive Systems" I just finished reading (see my review here), and partly by Lee's lecture on "The Problem of Time in Quantum Gravity and Cosmology" from April 2nd (PIRSA 08040011 and 08040013). Please don't ask me what happened in the other 13 lectures because I wasn't there.

Hmmm... I missed the first ten minutes on April 2nd. After watching the video I can now reconstruct what was written on the blackboard before I came and what the not completely wiped-off words said. I feel a bit like a time-traveler who just closed the loop. Either way, here is a brief summary of min 11:38 to 20:24. Lee explains there are three types of emergence:
  1. Emergence in scale:
    In which a system described on a larger scale has a property that it wouldn't have on smaller scales. As an example he mentions the viscosity of fluids, which isn't a property that makes sense for a single atom, and the fitness of biological species, which wouldn't make sense for molecules. "Atoms don't have gender but living things have gender."

  2. Emergence in contingency:
    In which a system develops a property only under certain circumstances. As an example he mentions the temperature dependence of superfluidity.


  3. Emergence in time:
    In which a system develops a property in time. As an example he mentions biological membranes, and that more than 3.8 billion years ago it wouldn't have made sense to speak of these.

As somebody in the audience also pointed out, these are basically different order parameters to change a system (e.g. scale, temperature, or time).

I was somewhat confused by the distinction between these three cases, and not only because I don't know what the plural of emergence is. (It can't be emergencies, can it?) No, because I always understood emergence vaguely as a feature the whole has but its parts don't have. Not that I ever actually thought about it very much, but that would be an order parameter like the number of constituents and their composition - which could, or couldn't, fall under point one or two.

Part of my confusion arises because in practical circumstances it isn't always clear to me which of the three cases of 'emergence' one actually has at hand. For example, take the formation of atoms in the early universe. Is this an emergence in time? Or is this an emergence in contingency? After all it's the temperature that matters, I would say. It's just that the temperature is related to the scale factor, which is a function of time. Also, in most experiments we change the contingent factors in time - like e.g. the cooling of the superfluid medium. So, the second and third cases seem to be very entangled. I think then I should understand the emergence of a system's properties in time as taking place without being caused by a time-dependent change of the environmental conditions of the system. Like e.g. the emergence of emoticons in written language ;-) or that of the red spot on Jupiter - cases in which it 'just' takes time.

Here is a nice example for patterns that I'd say emerge in time, an oscillating chemical reaction:



    [An example for a particularly pretty oscillating chemical reaction with emerging patterns. Unfortunately, the video description doesn't contain information about the chemicals used, instead it provides a very bizarre connection to migraine and 'stimulus points'. Either way, this sort of reaction is called a Belousov-Zhabotinsky-Reaction.]


II. Strong and Weak Emergence

Okay, so after some back and forth I figured out why I was feeling somewhat uneasy with these three cases. Besides the fact that - as said above - in practical circumstances distinguishing one from the other is difficult, in the second and third case I'd have said a property might be emergent in the sense that it 'arises' and becomes relevant, but it was present already in the setup of the model (and if not, you should come up with a better model). E.g. Bose-Einstein condensation was predicted to arise at low temperatures. Likewise, I'd say if one knows the initial conditions of a system and its evolution, then one knows what will happen in time - it might turn out that emergent properties become noticeable and important only later, but it's a predictable emergence. Like e.g. stars that have formed out of collapsing matter or something like this.

Either way, let me come back to my rather naive sense of 'emergence' by increasing the number of constituents. If you look at a part of some larger system, specifying or examining its properties just might not be enough to understand how the whole system will behave: it can simply be an incomplete description. It can be that one needs further information, namely the interaction with other parts of the system. As an example take one of these photographic mosaics:

Image: Artensoft.

If you'd only look at one of the smaller photos you'd have no chance of ever 'predicting' that something will 'emerge' if you zoom out.

After looking at the Wikipedia entry on Emergence I learned that this essentially is the difference between 'strong' and 'weak' emergence. At the danger of exposing my total ignorance of various words and names in that Wiki entry that I've never heard before and am presently not in the mood to follow up upon, let me say that weak emergence is - at least "in principle" - already contained in a model you use to describe the system, and is thus at least "in principle" predictable, whereas strong emergence isn't.


III. Reductionism

If you want to go back to Lee's lecture, fast forward to min 37:00, where the topic emergence and reductionism comes up again. Somebody in the audience (I believe it's Jonathan Hackett), asks (min 40:00): "Is there a phenomenon which is emergent which is not derivable and is not expected to ever be derivable from something else?" This is essentially the question whether strong emergence actually exists.

Let me paraphrase reductionism as the belief that a system can "in principle" be understood entirely by understanding its parts. Then the argument of whether or not reductionism can "in principle" explain everything is the same question: does strong emergence actually exist? Or are all emergent features 'weakly' emerging, in that they are "in principle" predictable?

Now you might have noticed a lot of "in principles" in the previous paragraphs. I'd think that most physicists believe there is no strong emergence. At least I don't believe in it. As such, I do think reductionism does not discard any features. However, this belief is for practical purposes often irrelevant, since the models that we use, however sophisticated, are never complete descriptions of reality anyway. Even if you had a 'theory of everything', and there was no strong emergence, it wouldn't automatically provide a useful 'model for everything'. If we'd find the one fundamental theory of elementary matter it wouldn't describe all of science, for the same reason why specifying the properties of all atoms in a car doesn't help you to figure out why the damned thing doesn't want to start. And I doubt we'll be able to derive the 'emergence' of, say, blog memes from QCD any time soon.

[Image: xkcd]

But besides these practical limitations that we encounter when making models that still have to be useful, there is the question whether it is possible to ever figure out if a system has the potential for a 'weak emergence' of a new property. Since it's impossible to rule out that something unpredictable will happen, I'd say we can never know all the 'non-relevant' factors or 'unknown unknowns', as Homer-Dixon put it in his book. For example I'd say it is possible that tomorrow the vacuum expectation value of the Higgs flips to zero and that's the end of the world as we know it. Not that I am very concerned this will actually happen, but what the bleep do we know? Does anybody want to estimate the risk this happens and sue somebody over it, because we irresponsible physicists might all have completely overlooked a lot of unknown unknowns? Just asking.

I'm not actually sure what Lee is saying later about Stuart Kauffman's view since I didn't read any of Kauffman's books (got stuck in 'At Home in the Universe' somewhere around the history of DNA or so). But I guess this argument points in the same direction: "What [Stuart] claims is that if you know all the properties that are relevant to compute the fitness function of all the species at some time, you do not know [...] enough to predict what will be the properties that will be relevant for the fitness function 100 million years later."

Thus, no matter whether there is some fundamental theory for everything or not, or whether strong emergence exists or not, we will be faced with systems in which features will unpredictably emerge. Like probably in the evolution of species on our planet, possibly in the climate, but hopefully not in the global economy.

Besides this, since it's impossible to prove that our inability to accurately make a prediction is due to the system and not due to the limitations of the human brain, the hypothesis that strong emergence doesn't exist is unfalsifiable (in other words: if you find an emergent feature you can't explain, you can't prove it can never be explained within any model). So I think I'll leave this domain to philosophy.


IV. Summary

Properties of systems can emerge in various ways: they can emerge by changes in scale, under certain conditions, or in time. One can distinguish between strong and weak emergence, where weakly emergent features are in principle predictable and strongly emergent ones aren't. However, this difference is a rather philosophical one, as all of our models are incomplete descriptions of the real world anyway, so something can always look like 'strong emergence' simply because the description is incomplete. Further, it makes little practical difference whether a feature is unpredictable in principle or merely in practice. Weak emergence is not in conflict with reductionism.

Friday, April 11, 2008

Elegant proofs

Here is a little riddle:

Take a checkerboard, and remove two squares at opposite corners. Is it then possible to find an exact and complete cover of the remaining board using dominoes (two are shown in orange), without overlap and overhang?

There is a surprisingly simple and elegant proof for the negative answer to this question. I learned it just this afternoon, in a great public talk by mathematician Günter Ziegler, coauthor of "Proofs from THE BOOK", current president of the German Association of Mathematicians, and main organiser of the "Year of Mathematics 2008" in Germany.

Starting with the "Year of Physics" in 2000, the German Federal Ministry of Science and Education has dedicated each year since to one particular discipline, and following the humanities in 2007, this year is all about math. As Ziegler writes in the March 2008 issue of the Notices of the American Mathematical Society, "The entire year 2008 has been officially declared Mathematics Year in Germany. This has created an unprecedented opportunity to work on the public's view of the subject."

And he used the opportunity, in a talk this afternoon on the occasion of the opening of the Mathematics Year for Frankfurt. He discussed the role of proof in mathematics, and then gave examples of actual elegant proofs of geometrical problems using colourings of the plane. The checkerboard riddle was just the first of them - he ended by explaining the steps of a quite surprising proof that a square cannot be decomposed into an odd number of triangles of equal area. I was amazed to see how he guided me through the steps and the idea of the proof - it's exciting to follow a talk like this! And my impression was that the 300 or so people in the audience felt the same. Most of them, however, were faces I knew from the math department, or students and teachers. It would be great if such an event would attract even more people from the interested public.

Have a nice weekend - and if you want to solve the checkerboard riddle by yourself, don't read the comments - I'm convinced the answer will be there pretty soon!




Unfortunately, the slides of the talk are not online. Here are, roughly, the steps of the proof of the impossibility of decomposing the square into an odd number of triangles of equal area: Start by colouring the rational points of the unit square in three different colours, using a scheme depending on the numerator and denominator of the coordinates of the point. Then, convince yourself that each decomposition of the square into triangles (with corners in the rational points) contains at least one triangle with three different colours for the three corners. It turns out that this triangle, because of the rules chosen for colouring, has an area whose denominator (in lowest terms) is even, so its area cannot equal 1/n for odd n - hence a decomposition into triangles of equal area must consist of an even number of them. Use some heavy machinery to promote this proof from rational corner points to arbitrary corner points of the triangles, and you're done. If you find an error in this description, it's probably my fault - you may consult the original papers, "A Dissection Problem" by John Thomas, Mathematics Magazine 41 No. 4 (Sep. 1968), 187-190 (via JSTOR; subscription required), and "On Dividing a Square Into Triangles" by Paul Monsky, The American Mathematical Monthly 77 No. 2 (Feb. 1970), 161-164 (via JSTOR; subscription required).
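And in case you've given up on the checkerboard riddle, or want to check the answer (spoiler ahead), here is a minimal sketch spelling out the colouring idea, together with a brute-force sanity check on a smaller 4x4 board. The function names and the small-board check are my own illustration, not taken from Ziegler's talk:

```python
def color_count(n, removed):
    """Count the black and white squares of an n x n checkerboard
    after removing the squares listed in 'removed'."""
    black = white = 0
    for r in range(n):
        for c in range(n):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                black += 1
            else:
                white += 1
    return black, white

# Opposite corners of the 8x8 board have the same colour, so removing them
# leaves 30 squares of one colour and 32 of the other. Each domino covers one
# black and one white square, hence no exact cover can exist.
print(color_count(8, {(0, 0), (7, 7)}))  # -> (30, 32)

def count_tilings(cells):
    """Count domino tilings of the given set of cells by backtracking."""
    if not cells:
        return 1
    r, c = min(cells)  # first empty cell in row-major order
    total = 0
    for nb in [(r, c + 1), (r + 1, c)]:  # cover it with a horizontal or vertical domino
        if nb in cells:
            total += count_tilings(cells - {(r, c), nb})
    return total

# Sanity check on a 4x4 board: the full board has 36 tilings, but removing
# two opposite corners leaves none, just as the colouring argument predicts.
board4 = {(r, c) for r in range(4) for c in range(4)}
print(count_tilings(board4))                      # 36
print(count_tilings(board4 - {(0, 0), (3, 3)}))   # 0
```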




Thursday, April 10, 2008

Poll results: look who's doping

"The prestigious science journal Nature surveyed its readers to find out how many were using cognitive-enhancing drugs, and found one in five have boosted their brain power with compounds such as Ritalin.

The informal Internet survey involved 1,400 people from 60 countries. Most were from the United States, but 78, or 5.5 per cent, were from Canada.

About 20 per cent of respondents said they had tried to improve their memory, concentration and focus by taking drugs for non-medical reasons.

[...]

The readers of the journal are mainly academics and scientists, but include people in other professions as well."


From The Globe and Mail: Science journal finds 20% of its readers are 'doping'.

Read the Nature piece here: Poll results: look who's doping

PS: Sorry for the quick-blogging, gotta catch that seminar.

Wednesday, April 09, 2008

Let Me Entertain You

Spring finally arrived in Waterloo. Within a couple of days, the temperature rose by 15°C. The meters-high snow mountains in the yards and parking lots were slow to melt, and during the last days I could see the Canadians in T-shirts, shorts and flip-flops walking around the remaining snow piles. Meadows are little more than brownish mud, and there isn't yet a single leaf on the trees.

Some fun things I came across recently

Sunday, April 06, 2008

Models and Theories

Words can lead to misunderstandings in the communication of science if their scientific usage has a meaning other than the colloquial one. Examples of this are abundant, like the words 'significant' (see 'Statistical Significance'), 'optimal' (see 'Optimum'), or 'simple' (see 'Simple Lie Group').

"Model" and "theory" are both words that physicists use frequently, and often with a different meaning than attached to it in the colloquial language. Since we on this blog write about models and theories all the time, I thought it worthwhile to clarify what I mean with that.

Disclaimer: This isn't meant to be a definition, just a clarification. As with all language-related issues there is a large gray area in a word's applicability. I am not claiming the following is a standard for the usage of these words.


I. The Real World Out There

If you are one of our frequent readers then you know that I occasionally feature the idea that all of our commenters are actually manifestations of my multiple personality disorder. Let me call that a 'theory' to describe my 'observation' of comments. It's not scientifically a particularly compelling theory. For one, some commenters have shown up in my office, which means that to make my theory viable, I should add some numbers to my ICD-10 diagnosis, like F22 (Delusional Disorder) or F44.1 (Dissociative Fugue). Alternatively, I could simply argue there is no reality other than what the neurons in my brain produce. That's a theory as well, and since I can never falsify it, it's not something a scientist should spend much time on.

To begin with, it is therefore for practical purposes reasonable to consider the possibility that there is something like a "real world out there" [1], including my computer, you, and the CMB radiation - and it's this real world out there that we are trying to understand and describe.

For example, you could speculate that your phone always rings if you take a bath. There is reality out there: you, the bathtub, the phone. To make a theory out of your speculation - call it a 'hypothesis' because it sounds better - you need to do something more. To begin with, your hypothesis isn't very useful because it's too unspecific. You would want e.g. to add that the phone is switched on. Also, some quantification of your framework is necessary. If you just say the phone will ring, you can shrivel in your bath until you die of boredom, and never falsify your hypothesis because it could always ring the next minute [2]. Instead, to make your theory credible you would want to clarify: "If my phone is turned on, it will ring within 10 minutes after I sat down in a bathtub full of water".

Now consider you do that, but the phone doesn't ring. Then there is always the temptation to add more specifications a posteriori. Like, it only happens on Wednesdays, and only if you use rosemary scented bubble bath, and only if you have large enough extra dimensions. Or so. You can add a long series of 'only ifs' to explain a negative outcome of your experiment. That's still a theory, but with each 'only if' it loses some of its generality, and its applicability becomes more and more limited, which makes it less and less interesting.
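As an aside, once the hypothesis is quantified like this, checking it against data becomes a mundane exercise. Here is a minimal sketch with made-up numbers of how one might test the quantified bathtub hypothesis against a record of bath sessions; the baseline rate p0 is a hypothetical guess for how often the phone rings in any random 10-minute window:

```python
from scipy.stats import binomtest

# Hypothetical record: in n bath sessions the phone rang within 10 minutes k times.
k, n = 7, 20

# Hypothetical baseline: the assumed chance that the phone rings in any
# random 10-minute window, bath or no bath.
p0 = 0.05

# Does the record support 'the phone rings more often when I'm in the bath'?
result = binomtest(k, n, p0, alternative="greater")
print(f"observed rate: {k/n:.2f}, p-value against baseline: {result.pvalue:.4f}")
```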




II. Theories and their Limitations

What scientists usually mean by theory is a testable hypothesis about an important aspect of the real world out there that has predictive power, and a consistent and well-defined framework.

- 'Testable' means, there is a possibility to prove it false. Unless you have a theory of very limited applicability, you can never verify it to hold in all possible circumstances, for all experiments that will ever be done, by anyone. However, for your theory to be good for something it must at least be possible to falsify it, otherwise it's like claiming you have this invisible friend who is so smart nobody can ever prove he is really there because he's shy and just doesn't want to.

- 'Predictive power' means your theory tells you what will happen under certain circumstances stated in your theory - in physics this usually means prediction of experimental data. Experimental data can also be already available, awaiting explanation by a theory. Though one better shouldn't call that a prediction, maybe a 'postdiction'. The possibility for experiments that can be done to confirm a theory differs greatly between the fields. For example, we can't repeat the evolution of species with slightly different initial conditions. Natural Selection is thus somewhat weak on the side of useful predictions, but it does a good job explaining the evolution of species that paleontologists find documented. In physics we today have a lot of experimental data, like e.g. dark energy and dark matter, that yells at us theorists because it wants to be explained.

- 'Well-defined framework' means your theory is in a comprehensible form, and not a big pile of goo that one can't get a grip on because the interpretation is unclear. Roughly speaking, if one can't understand and apply a theory without consulting its creator, it's not a scientific theory.
[Figure: Not a Theory]


- 'Consistent' means it does not have internal contradictions. E.g. if you have a theory that explains the world by taking the Bible literally, then I recommend you first make sure to remove its inconsistencies.

- 'Important aspect' is very subjective. I'd think your bathtub theory for example isn't so tremendously important for most of the world, and wouldn't qualify as a 'scientific theory'. But at which stage scientists are inclined to promote a hypothesis to a scientific theory depends very much on the circumstances.

One can have a lot of theories. Theories that become accepted scientific knowledge are those that are experimentally well confirmed and have proved useful. The point about Natural Selection is not that it is a theory, but that it is a very well confirmed theory with explanatory power. The point about Einstein's theory of General Relativity is not that it is a theory, but that it is a theory confirmed to very high precision with a large number of experiments. The theory of the luminiferous aether on the other hand is a theory as well, but one that was proved wrong by the experiment of Michelson and Morley [3]. String theory, as well as other approaches to quantum gravity, are difficult cases because they would become important in energy ranges far outside the reach of experiments on earth, so their status is pending.

Theories are typically reliable in a limited range and up to a certain precision. Special Relativity for example reaches its limits when the curvature of space-time becomes important, in which case one has to use General Relativity. Your bathtub theory reaches its limits at a temperature of 100°C, at which you'd get problems filling the tub, not to mention getting in.

If somebody proposes a new theory it is most often an improvement of an already existing one, either because it applies with larger generality, to better precision, or both. Still, the 'older' theory might remain useful. For example, the non-relativistic limit is accurate to very high precision for slowly moving objects, and using the fully relativistic framework is often unnecessary overkill.
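To put a number on that, here is a minimal sketch comparing the relativistic kinetic energy with the Newtonian formula; the speeds are chosen arbitrarily for illustration, and at a tenth of the speed of light the Newtonian value is already off by less than one percent:

```python
import math

def kinetic_energy_ratio(beta):
    """Ratio of relativistic to Newtonian kinetic energy at speed v = beta * c,
    both expressed in units of m*c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    ke_rel = gamma - 1.0        # relativistic kinetic energy / (m c^2)
    ke_newton = 0.5 * beta**2   # Newtonian kinetic energy / (m c^2)
    return ke_rel / ke_newton

# The two formulas converge rapidly as the speed drops.
for beta in [0.5, 0.1, 0.01]:
    print(f"v = {beta:>4.2f} c : E_rel / E_Newton = {kinetic_energy_ratio(beta):.6f}")
```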


III. Models

Okay, so far we have the real world out there, and we have a theory. Theories are usually very general concepts from which one constructs a specific model. The model is a simplified version of the real world out there, simplified in the sense that it deals only with a limited amount of detail. For example, if you want to compute how a cow drops out of a plane, you can forget about her milk-efficiency and assume to good precision that she is a ball with a mass M and a radius R. You can also attempt to understand a political argument by the gender of its proponent. This identification of features is a model, underlying which there is your theory that you want to test.
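For the falling cow, the model really is just that simple. Here is a minimal sketch assuming quadratic air drag on a sphere; the mass, radius, and drag coefficient are illustrative guesses, not data about any actual cow:

```python
import math

# Hypothetical 'spherical cow' of mass M and radius R falling through air.
M = 600.0       # mass in kg (made-up number)
R = 0.8         # radius in m (made-up number)
rho_air = 1.2   # air density in kg/m^3
C_d = 0.47      # rough drag coefficient of a sphere
g = 9.81        # gravitational acceleration in m/s^2

# At terminal velocity, drag balances gravity: M g = 1/2 rho C_d A v^2.
A = math.pi * R**2
v_terminal = math.sqrt(2.0 * M * g / (rho_air * C_d * A))
print(f"terminal velocity of the spherical cow ~ {v_terminal:.0f} m/s")
```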


In physics, we have for example the quantum field theories which underlie the Standard Model of particle physics. In this model, we identify particles with states in the Fock space, observables with expectation values, and particle properties with gauge charges that belong to specific gauge groups. Another example is the ΛCDM model in Cosmology, underlying which is Einstein's theory of General Relativity. The identification of the relevant ingredients to the model is crucial to make it useful.

Besides the identification of objects, typically your model will need to use some data as input to make predictions for further data. In other words, it will have free parameters that the theory cannot predict and that just have to be measured. Einstein's theory of Special Relativity for example has a constant that you can show to be the speed of massless particles. You then go and measure this constant, commonly known as c, with which you can then apply your model to other cases. The more free parameters a model has, the less useful - not to mention, the uglier - it is. A model that needs as many parameters as there are data points to fit isn't good for anything: there is always a polynomial of degree n-1 that passes exactly through n data points.
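A minimal sketch of that last point, with made-up data points:

```python
import numpy as np

# Five hypothetical data points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 0.9, 2.3, 1.8, 3.0])

# A degree-4 polynomial has 5 free parameters and goes exactly through all 5 points...
coeffs = np.polyfit(x, y, deg=4)
print(np.allclose(np.polyval(coeffs, x), y))   # True: a perfect 'fit'

# ...but a model with as many parameters as data points has explained nothing,
# and it typically behaves wildly between and beyond the measured points.
print(np.polyval(coeffs, 5.0))
```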

Theories and models can come in many different forms. In physics our models use the language of mathematics, and our theories tell us how to identify mathematical quantities with 'real' physical objects. Models can also be computational, in which case you translate the real world into input for your computer code. But in other areas, mathematics or computation is not necessarily the language of choice. For example, in psychology one has 'existential theory', which in a nutshell says humans are driven by four existential fears: death, freedom, isolation, and meaninglessness. Based on this theory, one can then try to understand a patient's problem, i.e. build a model to explain the real world out there. In this case, mathematics isn't the language of choice, mostly because it is too inflexible to cope with something as complex as human behaviour. Another example is Adam Smith's "The Wealth of Nations", which puts forward the 'theory' of the invisible hand. The tragedy with this specific example is that even though this theory is known to be wrong or not applicable in many cases, it is still an argument that influences the lives of people all over the world.

The example from psychology also illuminates another feature of a good model. I am not much of a psychologist, but even to me the reduction of human behaviour to four existential fears seems overly simplistic. And it probably is, but what makes a useful model is that after stripping off lots of details you have identified some relevant properties that can lead to an improved understanding, even though restrictions may apply.


However, a model doesn't necessarily have to be about describing the real world out there. To achieve a better understanding of a framework, it is often helpful to examine very simplified models even though one knows they do not describe reality. Such a model is called a 'toy model'. Examples are neutrino oscillations with only two flavors (even though we know there are at least three), gravity in 2 spatial dimensions (even though we know there are at least three), and φ4 theory - where we reach the limits of my theory of language use, because according to what I said previously it should be a φ4 model (it falls into the domain of quantum field theory).
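To give an idea how stripped-down such a toy model can be, here is a minimal sketch of the standard two-flavor neutrino oscillation probability; the mixing angle and mass-squared difference are rough illustrative values, not a fit to data:

```python
import numpy as np

def survival_probability(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=2.5e-3):
    """Two-flavor survival probability,
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV]).
    Default parameter values are rough, for illustration only."""
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Oscillation pattern over a few baselines at E = 1 GeV.
for L in [100, 300, 500, 735]:
    print(f"L = {L:>4} km : P(survival) = {survival_probability(L, 1.0):.3f}")
```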

IV. Theoretical Physics

A big challenge especially in theoretical physics is that theories potentially remain untestable for a long period of time, because the farther our theories depart from everyday experience, the more effort we have to make to design suitable experiments. In these cases, internal consistency is often the only guide. Quantizing gravity for example is actually not the problem - you can quantize it if you want to. The problem is that the outcome is nonsensical. This way, one can drop a lot of theoretical approaches even without testing them against the real world. It is for this reason that apparent paradoxes appearing within a theory receive a lot of attention, as their investigation and solution can be the source of new insights and progress.

Besides consistency, some people also like to call upon more ethereal values like 'beauty', 'elegance' or 'naturalness' to argue for the appeal of their theory. It is a slippery slope however, as the relevance of these factors is a theory in itself, and not a scientifically well confirmed one.

You could for example have the theory that the real world is made out of tiny vibrating strings. Once you've made sure your theory is internally consistent, and added sufficient 'only ifs', the way to proceed is then to build a model, make a prediction, and line up for the Nobel Prize. Alternatively, you could have the theory that the real world is made out of braids, and identify particle properties with braiding patterns. However, if your model doesn't reproduce the gauge fields we commonly call photons and gluons, it doesn't seem to describe the real world out there, so it is at most a toy model. You can have all sorts of theories. Like that you will be reborn as a Boltzmann brain. The value of such theories differs greatly depending on their usefulness.

Thus, in theoretical physics you make a living with speculation, with the eternal hope that you manage to catch a glimpse of Nature's ways and experiment will confirm you. It is a difficult task, since every new theory first needs to reproduce all the achievements of the already established ones, plus it needs to lead to new insights. The requirement of consistency is one that people not working in the field typically underestimate - it greatly reduces the amount of freedom we have with our speculation. In a certain way it is as fascinating as it is frustrating if a theory you have disagrees with you and just doesn't do what you want it to. I am always annoyed by this. It's like I think that if I made it, it's supposed to do what I want. Very possibly for this reason I am inclined to say that we don't actually invent theories, but that we discover them.

Another challenge in this procedure is the problem that different theories can under certain circumstances result in the same model. In such cases, one has to look for scenarios in which one can distinguish both theories. If there are none, one can call both theories equivalent, and the difference is one of interpretation. Though without any direct consequences as far as predictions are concerned, establishing an equivalence between two interpretations and a change in perspective can be very fruitful for further developments.



V. Summary

A scientific theory is a consistent and well-defined framework to test a falsifiable hypothesis about the real world out there. A theory that becomes accepted knowledge is one that has been confirmed to high accuracy, and has proved useful. Theories underlie the models that we use to describe the world. We can also investigate 'toy models' to understand our theoretical framework better, even though the scenario is not realistic. Internal consistency is a strong requirement on a scientific theory that is often underestimated.



Post Scriptum

After finishing this writing, I find that my above explanation disagrees with others'. For example Laurence Moran explains:

"A theory is a general explanation of particular phenomena that has withstood many attempts to disprove it. Because of the evidence supporting the explanation and because it hasn't been refuted, a theory will be widely accepted as provisionally correct within the science community."

As I said previously, arguing about words isn't something I like to engage in, but this would mean that there are no falsified theories, and it constrains the usage of the word 'theory' to those theories that are 'correct' descriptions of nature (to some degree) because there is already evidence supporting them. Though possibly I misunderstand, and he means to say that the science community will generally only call something a 'theory' if it lives up to certain quality standards, withstands the most obvious criticism, and the possibility exists that it describes the real world out there. (Like e.g. if your 'theory' does not have fermions, forget about it.)

Wikipedia quotes the National Academy of Sciences with:

"Some scientific explanations are so well established that no new evidence is likely to alter them. The explanation becomes a scientific theory. In everyday language a theory means a hunch or speculation. Not so in science. In science, the word theory refers to a comprehensive explanation of an important feature of nature that is supported by many facts gathered over time. Theories also allow scientists to make predictions about as yet unobserved phenomena."

Which I think goes well with what I wrote above.



[1] The phrase "the real world out there" is borrowed from Lee's book, who I believe borrowed it from elsewhere, but I can currently neither recall the actual origin, nor where I put the book. Sorry about that.
[2] One finds an iteration of this sort of theory that is unfalsifiable within the experimenter's lifetime in Hollywood. It's called the 'One day I will be rich and famous' theory of the unknown actor. Surely fame is just around the corner, hang on for one more day.
[3] The aether theory however seems to have Zombie character and occasionally comes back to haunt us in various alterations that escape the constraints of Michelson-Morley.