Sunday, September 30, 2012

Book review: “The Universe Within” by Neil Turok

The Universe Within: From Quantum to Cosmos (CBC Massey Lecture)
By Neil Turok
House of Anansi Press (October 2, 2012)

Neil Turok is director of Perimeter Institute and founder of the African Institute for Mathematical Sciences. His research is mostly in theoretical cosmology, and he has written a pile of interesting papers with other well-known physicists. Some weeks ago, I found a free copy of Turok’s new book “The Universe Within” in my mail together with a blurb praising it as “the most anticipated nonfiction book of the season” and a “personal, visionary, and fascinating work.” From the back cover, I expected the book to be about the relevance of basic research, physics specifically, and the advances blue sky research has brought to our societies.

You know me for arguing that we need knowledge for the sake of knowledge, and that it’s a mistake to justify all research by its practical applications. To advance my own arguments, I thought I should read Turok’s book.

The book accompanies the 2012 Massey Lectures, which will be broadcast in November 2012.

Turok starts with the old Greeks, then writes about Leonardo da Vinci and Galileo, and lays out the development of the scientific method. He spends some time on Newton’s laws, electrodynamics, special relativity and general relativity. Since Turok’s own work is mostly in cosmology, it is not surprising that a good deal of space is dedicated to it. The standard model of particle physics appears here and there, and the recent discovery of the Higgs is mentioned. He goes to some length to explain the path integral with the action of the standard model coupled to general relativity, the one equation appearing in the book (without the measure), a display of courage that I think should be applauded. Turok makes clear he is not a fan of the multiverse. In the final chapter, he goes on to a general praise of basic research.

His explanations of the physics are interwoven with his own experiences: growing up in South Africa, the challenges he faced, and the research he has done. The content is all well intentioned and sounds like a good agenda. Unfortunately, the realization of this agenda is poor.

The introductions to the basic physical concepts will be difficult to understand if one doesn’t already know what he is talking about. For example, he talks about inflation before he speaks about general relativity. He talks about the Planck length and the Hubble length without explaining their relevance. To make contact with Euclidean space, Turok explains Minkowski spacetime by using the “ict” trick that nobody uses anymore, which will leave many readers confused. They will be left equally confused about how the wavefunction and the path integral are related to actually observable quantities. The reader had also better have heard about the multiverse beforehand, because it’s only mentioned in passing to get across the author’s opinion.

The book has several photos and illustrations in color, including the “formula that summarizes all the known laws of physics”, but these are not referenced in the text. You’d better look at them in advance to know where they belong, or you’ll have to guess while reading whether there’s an image that goes with the passage you’re on.

The book is also repetitive in several places, where concepts that were introduced earlier, for example extra dimensions, reappear. “As I explained earlier” or similar phrases have been added in some instances, but the overall impression I got is that this book was written in pieces that were later sloppily put together. The picture presented is incoherent at best and superficial at worst. Rather than making a solid case for the relevance of basic research, Turok focuses on introducing the basics of modern physics with some historical background, and then talks mostly about cosmology. Examples of unpredictable payoff appear, in the form of electrodynamics, the transistor, and potentially quantum computing. But the cases are not well made, in the sense that he doesn’t drive home the point that none of that research was aimed at producing the next better computer. And they’re not exactly inspired choices either.

Turok’s argumentation is sometimes just weird or not well thought through. For example, to explain the merits of quantum computers, he writes:
“Quantum computers may also transform our capacities to process data in parallel, and this could enable systems with great social benefit. One proposal now being considered is to install highly sensitive biochemical quantum detectors in every home. In this way, the detailed medical condition of every one of us could be continuously monitored. The data would be transmitted to banks of computers which would process it and screen for signs of any risk.”
He does not add so much as one word on the question of whether this would be desirable. This is pretty bad imo, because it suggests the image of a scientist who doesn’t care about ethical implications. (I mean: the question of whether you want information about potentially incurable diseases is already a topic of discussion today.) Another merit of quantum computers is apparently:
“With a quantum library, one might… be able to search for all possible interesting passages of text without anyone having had to compose them.”
Clearly what mankind needs. And here’s what, according to Turok, is the purpose of writing:
“Writing is a means of extracting ourselves from the world of our experience to focus, form, and communicate our ideas.”
One might perhaps say so about scientific writing, at least in its ideal form. But the scientist in the writer seems to have taken over here. Another sentence that strikes me as odd is “I have been fascinated by the problem of how to enable young people to enter science, especially in the developing world.” I’m not sure “fascinating problem” is a particularly empathetic choice of words.

Other odd statements: “M-theory is the most mathematical theory in all of physics, and I won’t even try to describe it here.” He does anyway, but I’m left wondering what “most mathematical” is supposed to mean. Is it just a euphemism for “least practical relevance”? Another fluff sentence is “We are analog beings living in a digital world, facing a quantum future.” Turok also adds a sentence according to which one day we may be able to harness dark energy. I can just see his inbox being flooded with proposals on exactly how to do that.

The last chapter of the book starts out quite promising, as it attempts to take on the question of what the merit of knowledge for the sake of knowledge is. Then I got distracted by a five-page-long elaboration on “Frankenstein”. (He somehow places the origin of this novel in Italy, and forgets to mention that the Castle of Frankenstein is located in Germany; I pass by it every time I visit my parents.) Then Turok seems to recall that the book is to appear with a Canadian publisher and suddenly adds a paragraph to praise the country:
“[T]oday’s Canada… compared to the modern Rome to its south, feels like a haven of civilization. Canada has a great many advantages: strong public education and health care systems; a peaceful, tolerant, and diverse society; a stable economy, and phenomenal natural resources. It is internationally renowned as a friendly and peaceful nation, and widely appreciated for its collaborative spirit and for the modest, practical character of its people.”
It’s not that I disagree. But it makes me wonder what audience he is writing for. The member of parliament who might have to sign in the right place so cash keeps flowing? But what bugs me most about “The Universe Within” is that Turok expresses his concerns about the current use of information technology, and then has nothing to add in terms of evidence that this really is a problem or any idea what can or should be done about it:
“Our society has reached a critical moment. Our capacity to access information has grown to the point where we are in danger of overwhelming our capacity to process it. The exponential growth in the power of our computers and networks, while opening vast opportunities, is outpacing our human abilities and altering our forms of communication in ways that alienate us from each other.”
Where is the evidence?
“We are being deluged with information through electric signals and radio waves, reduced to a digital, super-literal form that can be redistributed at almost no cost. The technology makes no distinction between value and junk.”
This isn’t a problem of technology, this is a problem of economy.
“The abundance and availability of free digital information is dazzling and distracting. It removes us from our own nature as complex, unpredictable, passionate people.”
According to Turok, the solution to this problem has something to do with the “ultraviolet catastrophe”; I couldn’t quite follow the details. From a scientist, I would have expected a more insightful discussion. Not too long ago, Perimeter Institute had a really bright faculty member by the name of Michael Nielsen, who thought about the challenges and opportunities of information technology for science and what can be done about them. Turok not only fails to explain what evidence has him worried, he also doesn’t comment on any recent developments or suggestions. Maybe he should have spent some time talking to Nielsen.

So in summary, what can I say? This book strikes me as well intentioned, but sloppy and hastily written. If you are looking for a good introduction to the basic concepts of modern physics and cosmology, better read Sean Carroll’s book. If you are looking for a discussion of the challenges that rapid information exchange poses to science and our societies, better read Jaron Lanier’s book, or even Maggie Jackson’s book. If you want to know what the future of science might look like and what steps we should take to advance knowledge discovery, read Michael Nielsen’s book. And if you want to know how our societies benefit economically from basic research, read Mark Henderson’s book, because he lists facts and numbers, even if they’re very UK-centric.

Neil Turok’s book might be interesting for you if you want to know something about Neil Turok. At least I found it interesting to learn something about his background, but that’s only a page here or there. I would give this book two out of five stars. That’s because I think he should be thanked for making the effort and taking the time. I hope, though, that next time he gets a better editor.

Friday, September 28, 2012

10 effects you should have heard of

  1. The Photoelectric Effect

    Light falling on a metal plate can lead to the emission of electrons, which is called the "photoelectric effect". Experiments show that for this to happen, the frequency of the light needs to be above a threshold that depends on the material. This was explained in 1905 by Albert Einstein, who suggested that the light should be thought of as quanta whose energy is proportional to the frequency of the light, the constant of proportionality being Planck's constant. Einstein received the Nobel Prize in 1921 "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect."
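
    The standard textbook form of this relation, with W the material-dependent work function, gives the maximum kinetic energy of the emitted electrons as

    $$E_{\rm kin}^{\rm max} = h\nu - W,$$

    so below the threshold frequency ν = W/h no electrons are emitted, no matter how intense the light is.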

    Recommended reading: Our post on the Photoelectric Effect and the Nobel Prize speech from 1921.


  2. The Casimir Effect

    This effect was first predicted by Hendrik Casimir, who explained that, as a consequence of quantum field theory, boundary conditions, which may for example be set by conducting (uncharged!) plates, can result in measurable forces. This Casimir force is very weak and can be measured only at very small distances.
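
    For the textbook case of two parallel, perfectly conducting plates separated by a distance d, the attractive force per unit area is

    $$\frac{F}{A} = -\frac{\pi^2 \hbar c}{240\, d^4},$$

    which for plates one micrometer apart amounts to roughly 10⁻³ N/m².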

    Recommended reading: Our post on the Casimir Effect and R. Jaffe's The Casimir Effect and the Quantum Vacuum.


  3. The Doppler Effect

    The Doppler effect, named after Christian Doppler, is the change in frequency of a wave when the source moves relative to the observer. The most common example is that of an approaching ambulance, where the pitch of the siren is higher when it moves towards you than when it moves away from you. This happens not only for sound waves but also for light, where it leads to blue- or redshifts respectively.
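
    Quantitatively, for sound with speed c in the medium and a source moving with speed v directly towards a stationary observer, the frequency shifts to

    $$f_{\rm obs} = \frac{f_{\rm src}}{1 - v/c},$$

    with the minus sign turning into a plus when the source moves away.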

    Recommended reading: The Physics Classroom Tutorial.


  4. The Hall Effect

    Electrons in a conducting plate that is brought into a magnetic field are subject to the Lorentz force. If the plate is oriented perpendicular to the magnetic field, a voltage can be measured between opposing ends of the plate which can be used to determine the strength of the magnetic field. First proposed by Edwin Hall, this voltage is called the Hall voltage, and the effect is called the Hall effect. If the plate is very thin, the temperature low, and the magnetic field very strong, a quantization of the conductivity can be measured, which is also known as the quantum Hall effect.
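
    For a plate of thickness t carrying a current I in a perpendicular magnetic field B, the standard expression for the Hall voltage is

    $$V_H = \frac{I B}{n q t},$$

    where n is the density and q the charge of the carriers, which is why the measurement also reveals the sign and density of the charge carriers.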

    Recommended reading: Our post on The Quantum Hall Effect.


  5. The Meissner-Ochsenfeld Effect

    The Meissner-Ochsenfeld effect, discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933, is the expulsion of a magnetic field from a superconductor. Most spectacularly, this can be used to let magnets levitate above superconductors, since their field lines cannot enter the superconductor. I assure you this has absolutely nothing to do with Yogic flying.
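
    The expulsion is not perfectly abrupt: in the London theory, the field falls off exponentially inside the superconductor,

    $$B(x) = B_0\, e^{-x/\lambda_L},$$

    with a penetration depth λ_L of typically some ten to some hundred nanometers.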

    Recommended watching: Amazing Physics on YouTube.


  6. Aharonov–Bohm Effect

    A charged particle in an electromagnetic field acquires a phase shift from the potential of the background field. This phase shift is observable in interference patterns and has been experimentally confirmed. The relevant point is that it's the potential that causes the phase, not the field. Before the Aharonov–Bohm effect one could question the physical reality of the potential.
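
    Concretely, a particle of charge q that encircles a region containing a magnetic flux Φ, while moving only through field-free space, picks up the phase

    $$\Delta\varphi = \frac{q}{\hbar}\oint \vec{A}\cdot d\vec{l} = \frac{q\,\Phi}{\hbar},$$

    which depends only on the enclosed flux and not on any field along the particle's path.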


  7. The Hawking Effect

    Based on a semi-classical treatment of quantum fields in a black hole geometry, Stephen Hawking showed in 1975 that black holes emit thermal radiation with a temperature inversely proportional to the black hole's mass. This emission process is called the Hawking effect. This result has led to great progress in understanding the physics of black holes, and is still a subject of research; see this recent post at Cosmic Variance.
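
    For a Schwarzschild black hole of mass M, Hawking's temperature is

    $$T_H = \frac{\hbar c^3}{8\pi G M k_B},$$

    which for a black hole of one solar mass comes out to about 6 × 10⁻⁸ K, far colder than the cosmic microwave background.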

    Recommended reading: Black Hole Thermodynamics by David Harrison and P.K. Townsend's lecture notes on Black Holes.


  8. The Zeeman Effect/Stark Effect

    In the presence of a magnetic field, energy levels of electrons in atomic orbits that are usually degenerate (i.e. equal) can take different values, depending on their quantum numbers. As a consequence, spectral lines corresponding to transitions between these energy levels can split into several lines in the presence of a static magnetic field. This effect is named after the Dutch physicist Pieter Zeeman, who was awarded the 1902 physics Nobel Prize for its discovery. The Zeeman effect is an important tool to measure magnetic fields in astronomy. For historical reasons, the plain vanilla pattern of line splitting is called the anomalous Zeeman effect.
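
    In the weak-field case, the standard result for the shift of a level is

    $$\Delta E = g_J\, \mu_B\, B\, m_J,$$

    with μ_B the Bohr magneton, g_J the Landé g-factor and m_J the magnetic quantum number; a field of one tesla thus shifts levels by something of the order of 10⁻⁴ eV.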

    A related effect, the splitting of spectral lines in strong electric fields, is called the Stark Effect, after Johannes Stark.

    Recommended reading: HyperPhysics on the Zeeman effect and the Sodium doublet.


  9. The Mikheyev-Smirnov-Wolfenstein Effect

    The Mikheyev-Smirnov-Wolfenstein effect, commonly called the MSW effect, is an in-medium modification of neutrino oscillation that can for example take place in the sun or the earth. It is a resonance effect that depends on the density of the medium and can significantly affect the conversion of one flavor into another. The effect is named after Stanislav Mikheyev, Alexei Smirnov and Lincoln Wolfenstein.
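
    In the standard two-flavor treatment, the resonance occurs when the matter contribution matches the vacuum oscillation term,

    $$2\sqrt{2}\, G_F\, n_e\, E = \Delta m^2 \cos 2\theta,$$

    where G_F is the Fermi constant, n_e the electron density, E the neutrino energy, and Δm², θ the vacuum mass-squared difference and mixing angle.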

    Recommended reading: The MSW effect and Solar Neutrinos.


  10. The Sunyaev-Zel'dovich Effect

    The Sunyaev-Zel'dovich effect, first described by Rashid Sunyaev and Yakov Zel'dovich, is the distortion of the cosmic microwave background radiation by high energy electrons through inverse Compton scattering, in which some of the energy of the electrons is transferred to the low energy CMB photons. Observed distortions of the cosmic microwave background spectrum are used to probe the density perturbations of the universe. Dense clusters of galaxies have been detected by means of this effect.
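
    The size of the distortion is governed by the Compton-y parameter, the line-of-sight integral of the electron pressure,

    $$y = \int \frac{k_B T_e}{m_e c^2}\, n_e\, \sigma_T\, dl,$$

    which in the Rayleigh-Jeans part of the spectrum produces a relative temperature decrement ΔT/T ≈ −2y.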

    Recommended reading: Max Planck Society press release Crafoord Prize 2008 awarded to Rashid Sunyaev and The Sunyaev-Zel'dovich effect by Mark Birkinshaw.


  11. Bonus: The Pauli Effect

    Named after the Austrian theoretical physicist Wolfgang Pauli, the Pauli Effect is well known to every student of physics. It describes a spontaneous failure of technical equipment in the presence of theoretical physicists, who should therefore never be allowed on the vacuum pumps, lasers or oscilloscopes.

    Recommended reading: Our post Happy Birthday Wolfgang Pauli.

[This is a slightly updated and recycled post that originally appeared in March 2008.]

Wednesday, September 26, 2012

Interna

Seems I've been too busy to even give you the family update last month, so here's a catch-up.

Lara and Gloria can meanwhile climb up and down chairs quite well, which makes life easier for me, except that they often attempt to climb further upwards from there. They can now reach the light switches, and last week they learned to open doors, so it's difficult now to keep them in a room. Their favorite pastime is presently hitting me with empty plastic bottles, which seems to be infinitely entertaining. They have also developed the unfortunate habit of throwing their toys in the direction of my laptop screen.

The girls have increased their vocabulary with various nouns and can identify images in their picture books. They still haven't learned a single verb, though Stefan insists "cookie" means "look."

Gloria is inseparable from her plush moose, Bo. She takes him everywhere and sleeps with him. Since I'd really like to wash him on occasion, I've now bought a second one, and we're doing our best to avoid her seeing both at once. (We also have to maneuver carefully around the Arlanda duty-free shop, where a whole pile of them sits.) Gloria has developed a bad case of motion sickness; she'll be sick after ten minutes on the road. We've now got some medication from our pediatrician that seems to help, so our mobility radius has expanded again. Lara meanwhile is squinting, and we'll have to do something about this.

Right now, they're sitting behind me with their Swedish-English picture book. I am often amazed how well they understand what we say, especially because Stefan and I don't speak with the same accent and we both mumble one way or the other. I guess it's because I judge their progress by my lack of progress in learning Swedish. Last week I took a taxi in Stockholm, and this was the first time I had a taxi driver who was actually Swedish. Ironically, I noticed that because he spoke British English that was, at least to my ears, basically accent-free. He didn't even try to address me in Swedish. When I asked him about it he said, well, there are so few people on the planet for whom Swedish is useful that they don't expect others to speak it. The Swedes are just so damned nice to immigrants.

We were lucky to get two daycare places starting in January. They're half-day places, but this will be quite a change for all of us.

The organization of the PI conference on Experimental Search for Quantum Gravity is going very well, thanks to Astrid Eichhorn, who has done a great job. We now have a schedule that should appear on the website within the next few days. We'll probably have most of the talks recorded, so there will be something for all of you. The organization of the November program on Perspectives of Fundamental Cosmology is running a little behind, but it seems everything is slowly falling into place there too.

Besides this, I have been trying to convince my colleagues at Nordita to engage more in public outreach, as I think we're behind in making use of the communication channels the online world has to offer. I'm happy to report that we got some funding approved by the board last week. Part of this will go into a few videos; another part will go to a workshop for science writers - an idea that goes back to a discussion I had with George Musser earlier this year. I'll let you know how this goes, and I'm open to suggestions for what else we could do. I don't think I have to explain my motivation for doing this to you - I'd be preaching to the choir. So let me instead say that it can be difficult to get scientists to make a time commitment to anything that's not research, so the biggest constraint on the matter is personnel.

Friday, September 21, 2012

Quantum Gravity in Tritium Decay?

If you've been working in a field for a while, there comes the moment when you feel like you've heard it all before. So I was surprised when the other day I came across an idea for testing the phenomenology of quantum gravity that I had not heard about before - and the paper is already three years old:
    Hypersharp Resonant Capture of Neutrinos as a Laboratory Probe of the Planck Length
    R. S. Raghavan
    Phys. Rev. Lett. 102:091804 (2009).
    arxiv:0903.0787

    Time-Energy Uncertainty in Neutrino Resonance: Quest for the Limit of Validity of Quantum Mechanics
    R. S. Raghavan
    arXiv:0907.0878
In a nutshell, the idea is as follows. We previously discussed that the Planck length is expected to play the role of a minimal length. It takes high energies to resolve short distances, and once you reach the Planck scale, this creates large fluctuations of space-time which prevent any better resolution. This idea can be captured in what has become known as the "generalized uncertainty principle", a research direction that has recently become quite popular.
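
One common form of this generalized uncertainty principle (conventions differ; β is a dimensionless parameter usually assumed to be of order one) is

$$\Delta x \;\gtrsim\; \frac{\hbar}{2\Delta p} + \beta\, \frac{l_{\rm Pl}^2\, \Delta p}{\hbar},$$

which reduces to Heisenberg's relation for small Δp, but prevents Δx from ever dropping below roughly the Planck length.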

This all goes back, essentially, to Mead's idea, which we discussed in the earlier post. Mead however had more to say about this: He wrote another paper, "Observable Consequences of Fundamental-Length Hypotheses", Phys. Rev. 143, 990-1005 (1966), in which he argued that such a Planck scale limit should, in principle, lead to a lower limit on the width of atomic spectral lines; it should create a fundamental blurring that can't be removed by better measurement. Raghavan in his paper now wants to test this limit. Rather than using photon emission though, he suggests using tritium decay.

Tritium undergoes β-decay to helium-3, emitting an electron and an electron anti-neutrino. Normally the electron flies off and the energy spread of the outgoing neutrino is quite large, but Raghavan lays out some techniques by which this spread can be dramatically reduced. The starting point is that in some fraction of the tritium decays, the electron doesn't fly off but is instead captured in a bound orbit around the helium. Now if the tritium, normally a gas at room temperature, can be embedded in a solid, then the recoil energy can be very small; this is essentially the Mössbauer effect, just with neutrino emission, and this gives a hypersharp neutrino line. The first few slides of this pdf are a useful summary of recoilless bound-state β-decay.

Raghavan estimates ΔE/E to be as small as 10⁻²⁹. The average lifetime of tritium is about 12 years. There are a lot of techniques involved in this estimate that I don't know much about, so I can't tell how feasible the experiment he proposes is. It sounds plausible to me though, give or take some orders of magnitude.

He then speaks in his paper about the energy-time uncertainty relation and its Planck scale modifications. Now it is true that if you have a generalized uncertainty for the spatial spread Δx and momentum spread Δp, you expect there to be also one for ΔEΔt. Yet, normally the deviations from the usual Heisenberg uncertainty scale with the energy over the Planck mass. And for the emitted neutrinos with an average energy of some keV this is a ridiculously small correction term.

So here then comes the input from Mead's paper. Mead argues that the ratio ΔE/E is, in the most conservative model, actually proportional to the Planck length over the size of the system, l_Pl/R, where he takes R to be the size of the nucleus. This is quite puzzling, because if you take the Planck length bound on a wavelength and propagate the error to the frequency, what you'd get is actually that ΔE/E is larger than or equal to l_Pl E (in units where ħ = c = 1), which is about 4 orders of magnitude smaller in the case at hand. The reason for this mismatch is that Mead in his argument speaks about the displacement of elementary particles in a potential. Now if the wavelength of the particles is larger than the typical extension of the potential, this doesn't make much sense.
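
To put numbers to this mismatch, take R to be a few femtometers for the nucleus and E of order 10 keV for the neutrino; then, in units where ħ = c = 1,

$$\frac{\Delta E}{E} \sim \frac{l_{\rm Pl}}{R} \approx \frac{1.6\times10^{-35}\,{\rm m}}{2\times10^{-15}\,{\rm m}} \sim 10^{-20}, \qquad {\rm while} \qquad \frac{\Delta E}{E} \gtrsim l_{\rm Pl}\, E = \frac{E}{E_{\rm Pl}} \approx \frac{10^{4}\,{\rm eV}}{1.2\times10^{28}\,{\rm eV}} \sim 10^{-24}.$$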

That having been said, one can of course consider the proposed parameterization as a model that is to be constrained, but this leaves open the question of how plausible it is that there would be such a modification from quantum gravity. At first sight, I'd have said a low-energy system like an atom is a hopeless place to look for quantum gravity, but then the precision of the suggested measurement would be amazing indeed. If it works, that is. I'll have to do some more thinking to see if I can make sense of the argument for the scaling of the effect. Either way however, an experiment like the one Raghavan discusses, watching the decay of tritium under suitable conditions, would test a new range of parameters, which is always a good thing to do.

Monday, September 17, 2012

Research Areas and Social Identity

Last year, when I was giving the colloquium in Jyväskylä, my host introduced me as "leading the quantum gravity group at Nordita." I didn't object, since it's correct to the extent that I'm leading myself, more or less successfully. However, the clustering of physicists into groups of multiple persons is quite an interesting emergent feature of scientific communities. Quantum gravity, for example, is usually taken to mean quantum gravity excluding string theory, a nomenclature I complained about earlier.

In the literature on the sociology of science it is broadly acknowledged that scientists, as other professionals, naturally segregate into groups to accomplish what's called a "cognitive division of labor": an assignment of specialized tasks which allows the individual to perform at a much higher level than they could achieve if they had to know all about everything. Such a division of labor is often noticeable already on the family level (I do the tax return, you deal with the health insurance). Specialization into niches for the best use of resources can also be seen in ecosystems. It's a natural trend because it's a local optimization process: Everybody dig a little deeper where you are and get a little more.

The problem is of course that a naturally occurring trend might lead to a local optimum that's not a global optimum. In the case of scientific communities, the problem is that knowledge which lies at the intersection of different areas of specialization is not, or not widely, known, and a potential barrier prevents the community from making better use of this knowledge. This is unfortunate, because information relevant to progress goes unused. (See for example P. Wilson, “Unused relevant information in research and development,” Journal of the American Society for Information Science, 45(2), 192-203 (1995).)

So this is the rationale for why it's necessary to encourage scientists to look out of their box, at least on occasion. And that takes some effort, because they're sitting in a local optimum and are thus generally unwilling to change anything.

This brings me back then to the grouping of researchers. It does not seem to me very helpful for reaching a better global optimum. In fact, it seems to me that it instead makes the situation worse.

Social identity theory deals with the question of what effect it has to assign people to groups; a good review is for example Stryker and Burke, “The Past, Present, and Future of an Identity Theory”, Social Psychology Quarterly, Vol. 63, No. 4 (Dec. 2000), pp. 284-297. This review summarizes studies which have shown that the mere act of categorizing people as group members changes their behavior: When assigned to a group, even one that might not be meaningful, they favor people in the group over people outside the group and try to fit in. The explanation that the researchers put forward is that "after being categorized in terms of a group membership, individuals seek to achieve positive self-esteem by positively differentiating their ingroup from a comparison outgroup."

This leads me to think it cannot be helpful to knowledge discovery to assign researchers at an institute to a handful of groups. It is also very punched-paper in the age of social tagging.

A suggestion that I had thus put forward some years ago at PI was to get rid of the research groups altogether and instead allow researchers to choose keywords that serve as tags. These tags would contain the existing research areas, but also cover other interests, which might be black holes, networks, holography, the arrow of time, dark matter, phase transitions, and so on. Then one could replace the groups on the website with a tag cloud. If you clicked on a keyword, you'd get a list of all people who've chosen this tag.

Imagine how useful this would be if you were considering applying. You could basically tell with one look what the people at the place are interested in. And if you started working there, it would be one click to find out who has similar interests. No more browsing through dozens of individual websites, half of which don't exist or were last updated in 1998.
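
To illustrate how little machinery such a tag index would take, here is a toy sketch in Python; the names and tags are made up, and a real implementation would of course pull them from a database:

```python
from collections import defaultdict

# Hypothetical input: each researcher picks their own keywords.
researchers = {
    "A. Example": {"black holes", "holography"},
    "B. Sample": {"dark matter", "phase transitions"},
    "C. Test": {"black holes", "arrow of time"},
}

def build_tag_index(researchers):
    """Invert the researcher -> tags mapping into a tag -> researchers index."""
    index = defaultdict(set)
    for name, tags in researchers.items():
        for tag in tags:
            index[tag].add(name)
    return index

index = build_tag_index(researchers)

# The "tag cloud" is just the tags weighted by how many people chose them;
# clicking a tag on the website would correspond to looking up index[tag].
for tag, names in sorted(index.items(), key=lambda kv: -len(kv[1])):
    print(f"{tag} ({len(names)}): {', '.join(sorted(names))}")
```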

I was thinking about this recently because Stefan said that with better indexing of abstracts, which is on the way, it might even be possible in the not-so-far future to create such a tag cloud from researchers' publication lists. Which, with an author ID that lists institutions, could be mostly automatically assembled too.

This idea comes with a compatibility problem though, because most places hire applicants by group. So if one doesn't have groups, then the assignment of faculty to committees and applicants to committees needs to be rethought. This requires a change in procedure, but it's manageable. And this change in procedure would have the benefit of making it much easier to identify emerging areas of research that would otherwise awkwardly fit neither here nor there. Which is the case right now with emergent gravity and analogue gravity, just to name an example.

I clearly think getting rid of institutional group structures would be beneficial to research. Alas, there's a potential barrier that's preventing us from making such a change, a classic example of a collective action problem. However, I am throwing this at you because I am sure this restructuring will come to us sooner or later. You read it here first :o)

Tuesday, September 11, 2012

Book Review “The Geek Manifesto” by Mark Henderson

The Geek Manifesto: Why Science Matters
By Mark Henderson
Bantam Press (10 May 2012)

Henderson’s book is a well-structured and timely summary of why science, both scientific knowledge and the scientific method, matters for the well-being of our societies. Henderson covers several areas: why science matters to politics and government, to the media, the economy, and education, in court, in healthcare, and to the environment. In each case, he gives examples of current problems, mostly from the UK and to a lesser extent from the USA, which he uses to arrive at recommendations for improvement.

The book is quite impressive in the breadth of topics covered. The arguments that Henderson makes are well thought through, and he has hands-on suggestions for what can be done, for example how and why scientists should take the time to correct journalists, how and why to communicate their concerns to members of parliament, why randomized controlled trials matter not only in health care but also for general policies and educational practice, and so on.
“The manifesto’s aim is to win your broad support for its central proposition: that a more scientific approach to problem-solving is applicable to a surprisingly wide range of political issues, and that ignoring it disadvantages us all.”
That having been said, the book is clearly addressed to people who know the value of and apply the scientific method, people he refers to as “geeks.” I’ll admit that I’m not very fond of this terminology. If I hear “geek” I think of a guy who can fix a TV with a fork and salt, and who can recite Star Wars backwards in Klingon. What’s wrong with “scientists”, I am left to wonder?

There are a few more oddities about this book. To begin with, it’s set in Times, and the text is in several places broken up by large quotes that repeat a sentence from the page. You see this very frequently in magazines these days, with the idea of getting across at least a catchy sentence or two, but it doesn’t make any sense whatsoever to do this in a book every 30 pages or so. It’s just plain annoying to have to read the same sentence twice.

I’ll also admit that I’m not following British politics at all, and most of the names that are dropped in this book don’t tell me anything. It’s a strangely UK-centric vision of what is really a much broader issue. The many twists and turns of UK politics did not make for a compelling read for me. That’s really unfortunate, because Henderson has a lot of good points that are relevant beyond the borders of his country.

Basically, Henderson’s message can be summarized as urging “geeks” to become more active and more vocal about their frustration with how scientific evidence and methods are being treated in various realms of our society. As a call to action, however, the book is far too long and, being addressed to readers who are fond of science already, it’s preaching to the choir. It’s a good book, by all means: well-argued, well-referenced, well-written – but I doubt it’ll achieve what its author hopes for.

I have to add however that it is good to see somebody at least working in the direction of addressing this systemic problem that I’ve been writing about for years. I think the root problem of our global political systems is that scientific knowledge and thinking are not, at present, well integrated into our decision-making processes. Instead we have an unfortunate conflation of scientific questions and questions of value when it comes to policy decisions. These really should be disentangled. But I’m preaching to the choir...

You may like “The Geek Manifesto” if you have an interest in how science is integrated into our societies, and what the shortcomings of this integration are. I’d give this book three out of five stars, which is to say I had to fight the repeated desire to skip over a few pages here and there.

Saturday, September 08, 2012

What are you, really?

Last month, I reviewed Jim Holt’s book “Why does the world exist?” This question immediately brings up another one: What exists, anyway? Holt does not seem to be very sympathetic to the idea that mathematical objects exist, or at least he makes fun of it:
“A majority of contemporary mathematicians (a typical, though disputed, estimate is about two-thirds) believe in a kind of heaven – not a heaven of angels and saints, but one inhabited by the perfect and timeless objects they study: n-dimensional spheres, infinite numbers, the square root of -1, and the like. Moreover, they believe that they commune with this realm of timeless entities through a sort of extra-sensory perception.”
There’s no reference for the mentioned estimate, but what’s worse is that referring to mathematical objects as “timeless” already implies a preconceived notion of time. It makes perfect sense to think of time as a mathematical object itself, and to construct other mathematical objects that depend on that time. Maybe one could say that the whole of mathematics does not evolve in this time, and that we have no evidence of it evolving in any other time, but just claiming that mathematics studies “timeless objects” is sloppy and misleading. Holt goes on:
“Mathematicians who buy into this fantasy are called “Platonists”… Geometers, Plato observed, talk about circles that are perfectly round and infinite lines that are perfectly straight. Yet such perfect entities are nowhere to be found in the world we perceive with our sense… Plato concluded that the objects contemplated by mathematicians must exist in another world, one that is eternal and transcendent.”
It is interesting that Holt in his book comes across as very open-minded to pretty much everything his interview partners confront him with, including parallel worlds, retrocausation, and panpsychism, but discards Platonism as a “fantasy.”

I’m not a Platonist myself, but it’s worth spending a paragraph on the misunderstanding that Holt has constructed, because this isn’t the first time I’ve come across similar statements about circles and lines and so on. It is arguably true that you won’t find a perfect circle anywhere you look. Neither will you find perfectly straight lines. But the reason for this is simply that circles and perfectly straight lines are not objects that appear in the mathematical description of the world on the scales that we see. Does it follow from this that they don’t exist?

If you want to ask the question in a sensible way, you should ask instead about something that we presently believe is fundamental: What’s an elementary particle? Is it an element of a Hilbert space? Or is it described by an element of a Hilbert space? Or, to put the question differently: Is there anything about reality that cannot be described by mathematics? If you say no to this question, then mathematical objects are just as real as particles.

What Holt actually says is: “I’ve never seen any of the mathematical objects that I’ve heard about in school, thus they don’t exist and Platonism is a fantasy.” Which is very different from saying “I know that our reality is not fundamentally mathematical.” With that misunderstanding, Holt goes on to explain Platonism by psychology:
“And today’s mathematical Platonists agree. Among the most distinguished of them is Alain Connes, holder of the Chair of Analysis and Geometry at the Collège de France, who has averred that “there exists, independently of the human mind, a raw and immutable mathematical reality.”… Platonism is understandably seductive to mathematicians. It means that the entities they study are no mere artifacts of the human mind: these entities are discovered, not invented… Many physicists also feel the allure of Plato’s vision.”
I don’t know if that’s actually true. Most of the physicists I have asked do not believe that reality is mathematics, but rather that reality is described by mathematics. But it may well be that the physicists in my sample have a tendency towards phenomenology and model building.

Most of them see mathematics as some sort of model space that is mapped to reality. I argued in this earlier post that this is actually not the case. We never map mathematics to reality. We map a simplified system to a more complicated one, using the language of mathematics. Think of a computer simulation to predict the solar cycle. It’s a map from one system (the computer) to another system (the sun). If you do a calculation on a sheet of paper and produce some numbers that you later match with measurements, you’re likewise mapping one system (your brain) to another (your measurement), not some mathematical world to a real one. Mathematics is just a language that you use, a procedure that adds rigor and has proved useful.

I don’t believe, as Max Tegmark does, that the world is fundamentally mathematics. It seems quite implausible to me that we humans should, at this point in our evolution, already have come up with the best way to describe nature. I used to refer to this as the “Principle of Finite Imagination”: Just because we cannot imagine it (here: something better than mathematics) doesn’t mean it doesn’t exist. I learned from Holt’s book that my Principle of Finite Imagination is more commonly known as the philosopher’s fallacy.
“[T]he philosopher’s fallacy: a tendency to mistake a failure of the imagination for an insight into the way reality has to be.”
Though Googling “philosopher's fallacy” brings up some different variants, so maybe it's better to stick with my nomenclature.

Anyway, this has been discussed for some thousands of years, and I have nothing really new to add. But there’s always somebody for whom these thoughts are new, as they once were for me. And so this one is for you.
xkcd: Lucky 10000.

Tuesday, September 04, 2012

Public Attitudes to Science

"Public Attitudes to Science" is a survey that has been conducted in the UK every couple of years since 2000, most recently 2011. It's quite interesting if you're interested in how scientific research is perceived by the public; you can download the full survey results here. Let me just show you some of the figures that I found interesting.

First, here's where people hear or read about new scientific research findings most often. TV and print newspapers are the dominant sources with 54% and 33%, followed by the internet excluding blogs. Science blogs come in at only 2% (I don't know what the asterisk means; I took the number from the text accompanying this figure).


Next, a somewhat odd question. People were asked how much they agree or disagree with the statement "The information I hear about science is generally true." It's beyond me how anybody can agree with a statement like that. Anyway, 9% disagree or strongly disagree and an amazing 47% agree or strongly agree.


What's more interesting is that those who agreed or disagreed were asked for their reasons in an unprompted reply. Here are the most frequently named reasons for agreeing that "information I hear about science is generally true." The top answer (no reason to doubt it) means to me essentially that they're generally trusting or didn't think very much about their answer. More telling are the subsequent reasons: It's checked by other scientists, science is regulated, it comes directly from scientists, it's checked by someone, checked by journalists. Don't laugh, this is serious.


And here are the top reasons to disagree that scientific information is generally true. The first two replies are variants of "why should I believe it." These are followed by: it's not checked by anyone, not checked by other scientists, not checked by journalists, it does not come directly from scientists, and a general mistrust of mass media. This reply is interesting because science blogs could alleviate this trust issue very much, yet, as we have seen above, only very few people seem to use them as a source of information.


This becomes even clearer if you look at the replies to the next question, that is, what could increase people's trust in the findings of scientific studies:


I am as shocked as amazed that 47% of people say they would trust information more if it was repeated. Though that shouldn't come as a surprise to me, because it's a well-known effect that Kahneman elaborates on for a while in his book. The same goes for the reply that the information fitted nicely with what they already knew. If you really needed evidence that the human brain easily falls for confirmation bias, here it is. And that's only the people who admitted it! But on the more hopeful side are the replies that ask for review by other scientists and publication in a scientific journal. One might add that at least a proper reference or source would greatly help. I think science blogs do much better in terms of referencing, and they're a source of review by other scientists in themselves. So I come to conclude that the world would be a better place if people read more science blogs. Though that might be a case of confirmation bias ;o)

Saturday, September 01, 2012

Questioning the Foundations

The submission deadline for this year’s FQXi essay contest on the question “Which of Our Basic Physical Assumptions Are Wrong?” has just passed. They got many thought-provoking contributions, which I encourage you to browse here.

The question was really difficult for me. Not because nothing came to my mind, but because too much came to my mind! Throwing out the Heisenberg uncertainty principle, Lorentz invariance, the positivity of gravitational mass, or the speed-of-light limit – been there, done that. And that’s only the stuff that I did publish...

At our 2010 conference, we had a discussion on the topic “What to sacrifice?”, addressing essentially the same question as the FQXi essay, though with a focus on quantum gravity. For everything from the equivalence principle through unitarity and locality to the existence of space and time, you can find somebody willing to sacrifice it for the sake of progress.

So what to pick? I finally settled on an essay arguing that the quantization postulate should be modified, and if you want to know more about this, go check it out on the FQXi website.

But let me tell you about my runner-up.

“Physical assumption” is a rather vague expression. In the narrower sense you can understand it to mean an axiom of the theory, but in the broader sense it encompasses everything we use to propose a theory. I believe one of the reasons progress on finding a theory of quantum gravity has been slow is that we rely too heavily on mathematical consistency and pay too little attention to phenomenology. I simply doubt that mathematical consistency, combined with the requirement to reproduce the standard model and general relativity in the suitable limits, is sufficient to arrive at the right theory.

Many intelligent people spent decades developing approaches to quantum gravity, approaches which might turn out to have absolutely nothing to do with reality, even if they would reproduce the standard model. They pursue their research with the implicit assumption that the power of the human mind is sufficient to discover the right description of nature, though this is rarely explicitly spelled out. There is the “physical assumption” that the theoretical description of nature must be appealing and make sense to the human brain. We must be able to arrive at it by deepening our understanding of mathematics. Einstein and Dirac have shown us how to do it, arriving at the most amazing breakthroughs by mathematical deduction. It is tempting to conclude that they have shown the way, and we should follow in their footsteps.

But these examples have been exceedingly rare. Most of the history of physics instead has been incremental improvements guided by observation, often accompanied by periods of confusion and heated discussion. And Einstein and Dirac are not even good examples: Einstein was heavily guided by Michelson and Morley’s failure to detect the aether, and Dirac’s theory was preceded by a phenomenological model proposed by Goudsmit and Uhlenbeck to explain the anomalous Zeeman effect. Their model didn’t make much sense. But it explained the data. And it was later derived as a limit of the Dirac equation coupled to an electromagnetic field.

I think it is perfectly possible that there are different consistent ways to quantize gravity that reproduce the standard model. It also seems perfectly possible to me for example that string theory can be used to describe strongly coupled quantum field theory, and still not have anything to say about quantum gravity in our universe.

The only way to find out which theory describes the world we live in is to make contact with observation. Yet most of the effort in quantum gravity is still devoted to the development and better understanding of mathematical techniques. That is certainly not sufficient. It is also not necessary, as the Goudsmit and Uhlenbeck example illustrates: Phenomenological models might not at first glance make much sense, and their consistency may only become apparent later.

Thus, the assumption that we should throw out is that mathematical consistency, richness, or elegance are good guides to the right theory. They are desirable of course. But neither necessary nor sufficient. Instead, we should devote more effort to phenomenological models to guide the development of the theory of quantum gravity.

In a nutshell that would have been the argument of my essay had I chosen this topic. I decided against it because it is arguably a little self-serving. I will also admit that while this is the lesson I draw from the history of physics, I, as I believe most of my colleagues, am biased towards mathematical elegance, and the equations named after Einstein and Dirac are the best examples for that.