Our conference on "Experimental Search for Quantum Gravity" now has its schedule online. As you can see, this year's format differs somewhat from previous installments. Based on Astrid's suggestions, we have only a few long talks and otherwise many discussion sessions with short (10-15 min) contributions. I'm curious to see how this goes.
Personally, I find discussion sessions to be of limited use. Participants usually like them for the social touch, but in my experience they tend to be dominated by the same few people saying the same things. And I guess I just prefer prepared talks, since they are usually better structured and convey information better. Which is why, if I add discussion sessions to a conference I'm organizing, I do my best to encourage participants, and especially the discussion leaders, to prepare some questions and arguments in advance. Maybe mixing the discussions with short contributions is a good way to avoid these pitfalls. Either way, I think it is worthwhile to try a different format.
Thursday, October 04, 2012
Monday, October 01, 2012
Clearly foggy
"I am ... rather skeptical about "popular" science in general, in particular when I bump into those books pretending to address in "popular" language formidable mathematical conjectures, or esoteric concepts such as black holes, superstrings, and dark matter. Quite often, skimming through their first chapters, the non-professional reader gets the impression that everything is as clear as day, to realize well before the end that it is in fact quite a foggy day."
Sunday, September 30, 2012
Book review: “The Universe Within” by Neil Turok
The Universe Within: From Quantum to Cosmos (CBC Massey Lecture)
By Neil Turok
House of Anansi Press (October 2, 2012)
Neil Turok is director of Perimeter Institute and founder of the African Institute for Mathematical Sciences. His research is mostly in theoretical cosmology, and he has written a pile of interesting papers with other well-known physicists. Some weeks ago, I found a free copy of Turok’s new book “The Universe Within” in my mail together with a blurb praising it as “the most anticipated nonfiction book of the season” and a “personal, visionary, and fascinating work.” From the back cover, I expected the book to be about the relevance of basic research, physics specifically, and the advances blue sky research has brought to our societies.
You know me as someone who argues that we need knowledge for the sake of knowledge, and that it's a mistake to justify all research by practical applications. To sharpen my own arguments, I thought I should read Turok's book.
The book accompanies the 2012 Massey Lectures, which will be broadcast in November 2012.
Turok starts with the ancient Greeks, then writes about Leonardo da Vinci and Galileo, and lays out the development of the scientific method. He spends some time on Newton's laws, electrodynamics, special relativity, and general relativity. Since Turok's own work is mostly in cosmology, it is not surprising that quite some space is dedicated to it. The standard model of particle physics appears here and there, and the recent discovery of the Higgs is mentioned. He goes to some length to explain path integrals with the action of the standard model coupled to general relativity, the one equation appearing in the book (without the measure), a courageous choice that I think should be applauded. Turok makes clear he is not a fan of the multiverse. In the final chapter, he goes on to a general praise of basic research.
His explanations of physics are interwoven with his own experiences: growing up in South Africa, the challenges he faced, and the research he has done. This is all well intentioned and sounds like a good agenda. Unfortunately, the realization of this agenda is poor.
The introductions to the basic physical concepts will be difficult to understand if one doesn't already know what he is talking about. For example, he talks about inflation before he speaks about general relativity. He talks about the Planck length and the Hubble length without explaining their relevance. To make contact with Euclidean space, Turok explains Minkowski spacetime using the "ict" trick that nobody uses anymore, which will leave many readers confused. They will be left equally confused about how the wavefunction and the path integral are related to actually observable quantities. The reader had also better have heard of the multiverse before, because it's only mentioned in passing, to get across the author's opinion.
The book has several photos and illustrations in color, including the "formula that summarizes all the known laws of physics," but these are not referenced in the text. You had better look at them in advance to know where they belong, or you have to guess while reading that there might be an image belonging to the passage at hand.
The book is also repetitive in several places, where concepts that were introduced earlier, for example extra dimensions, reappear. "As I explained earlier" or similar phrases have been added in some instances, but the overall impression I got is that this book was written in pieces that were later put together sloppily. The picture presented is incoherent at best and superficial at worst. Rather than making a solid case for the relevance of basic research, Turok focuses on introducing the basics of modern physics with some historical background, and then talks mostly about cosmology. Examples of unpredictable payoff appear, in the form of electrodynamics, the transistor, and potentially quantum computing. But these cases are not well made: he never drives home the point that none of this research was aimed at producing the next better computer. And they're not exactly inspired choices either.
Turok’s argumentation is sometimes just weird or not well thought through. For example, to explain the merits of quantum computers, he writes:
“Quantum computers may also transform our capacities to process data in parallel, and this could enable systems with great social benefit. One proposal now being considered is to install highly sensitive biochemical quantum detectors in every home. In this way, the detailed medical condition of every one of us could be continuously monitored. The data would be transmitted to banks of computers which would process it and screen for signs of any risk.”

He does not add so much as one word on the question of whether this would be desirable. This is pretty bad, in my opinion, because it suggests the image of a scientist who doesn't care about ethical implications. (I mean: the question of whether you want information about potentially incurable diseases is already a topic of discussion today.) Another merit of quantum computers is apparently:
“With a quantum library, one might… be able to search for all possible interesting passages of text without anyone having had to compose them.”

Clearly what mankind needs. And here's what, according to Turok, is the purpose of writing:
“Writing is a means of extracting ourselves from the world of our experience to focus, form, and communicate our ideas.”

One might perhaps say this about scientific writing, at least in its ideal form. But the scientist in the writer seems to have taken over here. Another sentence that strikes me as odd is "I have been fascinated by the problem of how to enable young people to enter science, especially in the developing world." I'm not sure "fascinating problem" is a particularly empathetic choice of words.
Other odd statements: "M-theory is the most mathematical theory in all of physics, and I won't even try to describe it here." He does anyway, but I'm left wondering what "most mathematical" is supposed to mean. Is it just a euphemism for "least practical relevance"? Another fluff sentence is "We are analog beings living in a digital world, facing a quantum future." Turok also adds a sentence according to which one day we may be able to harness dark energy. I can just see his inbox being flooded with proposals for exactly how to do that.
The last chapter of the book starts out quite promising, as it attempts to take on the question of what the merit of knowledge for the sake of knowledge is. Then I got distracted by a five-page-long elaboration on "Frankenstein." (He somehow places the origin of this novel in Italy, and forgets to mention that the Castle of Frankenstein is located in Germany; I pass it by every time I visit my parents.) Then Turok seems to recall that the book is to appear with a Canadian publisher and suddenly adds a paragraph praising the country:
“[T]oday’s Canada… compared to the modern Rome to its south, feels like a haven of civilization. Canada has a great many advantages: strong public education and health care systems; a peaceful, tolerant, and diverse society; a stable economy, and phenomenal natural resources. It is internationally renowned as a friendly and peaceful nation, and widely appreciated for its collaborative spirit and for the modest, practical character of its people.”

It's not that I disagree. But it makes me wonder what audience he is writing for. The member of parliament who might have to sign in the right place so the cash keeps flowing? But what bugs me most about "The Universe Within" is that Turok expresses his concerns about the current use of information technology, and then has nothing to add in terms of evidence that this really is a problem, or any idea what can or should be done about it:
“Our society has reached a critical moment. Our capacity to access information has grown to the point where we are in danger of overwhelming our capacity to process it. The exponential growth in the power of our computers and networks, while opening vast opportunities, is outpacing our human abilities and altering our forms of communication in ways that alienate us from each other.”

Where is the evidence?
“We are being deluged with information through electric signals and radio waves, reduced to a digital, super-literal form that can be redistributed at almost no cost. The technology makes no distinction between value and junk.”

This isn't a problem of technology, this is a problem of economics.
“The abundance and availability of free digital information is dazzling and distracting. It removes us from our own nature as complex, unpredictable, passionate people.”

According to Turok, the solution to this problem has something to do with the "ultraviolet catastrophe"; I couldn't quite follow the details. From a scientist, I would have expected a more insightful discussion. Not too long ago, Perimeter Institute had a really bright faculty member by the name of Michael Nielsen, who thought about the challenges and opportunities of information technology for science and what can be done about them. Turok not only doesn't explain what evidence has him worried, he also doesn't comment on any recent developments or suggestions. Maybe he should have spent some time talking to Nielsen.
So in summary, what can I say? This book strikes me as well intentioned, but sloppy and hastily written. If you are looking for a good introduction to the basic concepts of modern physics and cosmology, better read Sean Carroll's book. If you are looking for a discussion of the challenges that rapid information exchange poses for science and our societies, better read Jaron Lanier's book, or even Maggie Jackson's book. If you want to know what the future of science might look like and what steps we should take to advance knowledge discovery, read Michael Nielsen's book. And if you want to know how our societies benefit economically from basic research, read Mark Henderson's book, because he lists facts and numbers, even if they're very UK-centric.
Neil Turok's book might be interesting for you if you want to know something about Neil Turok. At least I found it interesting to learn something about his background, but that's only a page here or there. I would give this book two out of five stars, because I think he should be thanked for making the effort and taking the time. I hope, though, that next time he gets a better editor.
Friday, September 28, 2012
10 effects you should have heard of
- The Photoelectric Effect
Light falling on a metal plate can lead to the emission of electrons, which is called the "photoelectric effect." Experiments show that for this to happen, the frequency of the light needs to be above a threshold that depends on the material. This was explained in 1905 by Albert Einstein, who suggested that light should be thought of as quanta whose energy is proportional to the frequency of the light, the constant of proportionality being Planck's constant. Einstein received the Nobel Prize in 1921 "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect."
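In formulas: the photon energy is E = hν, so electrons can only be ejected if hν exceeds the material's work function W, which gives a threshold frequency ν₀ = W/h. A quick numerical sketch (the cesium work function of about 2.1 eV is an illustrative, assumed value):

```python
# Threshold frequency for the photoelectric effect: nu_0 = W / h.
H = 6.626e-34   # Planck's constant in J*s
EV = 1.602e-19  # one electronvolt in joules

def threshold_frequency(work_function_ev):
    """Minimum light frequency (in Hz) that can eject electrons
    from a material with the given work function (in eV)."""
    return work_function_ev * EV / H

# For a work function of about 2.1 eV (roughly that of cesium),
# the threshold lies near 5e14 Hz, i.e. in the visible range.
nu0 = threshold_frequency(2.1)
```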
Recommended reading: Our post on the Photoelectric Effect and the Nobel Prize speech from 1921.
- The Casimir Effect
This effect was first predicted by Hendrik Casimir, who explained that, as a consequence of quantum field theory, boundary conditions, set for example by conducting (uncharged!) plates, can result in measurable forces. This Casimir force is very weak and can be measured only at very small distances.
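For two ideal parallel plates at separation d, the predicted attractive pressure is P = π²ħc/(240 d⁴). A sketch of the numbers shows why the force is only measurable at very small distances:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant in J*s
C = 2.998e8       # speed of light in m/s

def casimir_pressure(d):
    """Attractive pressure (in Pa) between ideal parallel
    conducting plates at separation d (in meters)."""
    return math.pi**2 * HBAR * C / (240 * d**4)

# About 13 Pa at 100 nm separation, but ten thousand times less
# at 1 micron, since the pressure falls off as 1/d^4.
p_100nm = casimir_pressure(100e-9)
```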
Recommended reading: Our post on the Casimir Effect and R. Jaffe's The Casimir Effect and the Quantum Vacuum.
- The Doppler Effect
The Doppler effect, named after Christian Doppler, is the change in frequency of a wave when the source moves relative to the observer. The most common example is that of an approaching ambulance: the pitch of the siren is higher when it moves towards you than when it moves away from you. This happens not only for sound waves but also for light, where it leads to blue- or redshifts respectively.
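For a source moving at speed v_s toward a stationary observer, the observed frequency is f' = f·v/(v − v_s), with v the speed of sound; for a receding source the sign of v_s flips. A small sketch with an illustrative 700 Hz siren:

```python
def doppler(f_source, v_source, v_wave=343.0):
    """Observed frequency for a source moving at v_source (m/s),
    positive toward / negative away from a stationary observer.
    v_wave defaults to the speed of sound in air."""
    return f_source * v_wave / (v_wave - v_source)

siren = 700.0                        # Hz, an illustrative pitch
approaching = doppler(siren, 20.0)   # higher pitch, ~743 Hz
receding = doppler(siren, -20.0)     # lower pitch, ~661 Hz
```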
Recommended reading: The Physics Classroom Tutorial.
- The Hall Effect
Electrons in a conducting plate that is placed in a magnetic field are subject to the Lorentz force. If the plate is oriented perpendicular to the magnetic field, a voltage can be measured between opposing ends of the plate, which can be used to determine the strength of the magnetic field. First demonstrated by Edwin Hall, this voltage is called the Hall voltage, and the effect is called the Hall effect. If the plate is very thin, the temperature very low, and the magnetic field very strong, the conductivity turns out to be quantized, which is known as the quantum Hall effect.
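The Hall voltage follows from balancing the Lorentz force against the electric force across the plate: V_H = IB/(n e t), with n the carrier density and t the plate thickness. A sketch with assumed, illustrative values for a copper plate:

```python
E_CHARGE = 1.602e-19  # elementary charge in C

def hall_voltage(current, b_field, carrier_density, thickness):
    """Hall voltage V_H = I*B / (n*e*t) across a conducting plate."""
    return current * b_field / (carrier_density * E_CHARGE * thickness)

# A 0.1 mm thick copper plate (carrier density ~8.5e28 per m^3)
# carrying 1 A in a 1 T field gives less than a microvolt, which is
# why thin plates or semiconductors with fewer carriers work better.
v_h = hall_voltage(1.0, 1.0, 8.5e28, 1e-4)
```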
Recommended reading: Our post on The Quantum Hall Effect.
- The Meissner-Ochsenfeld Effect
The Meissner-Ochsenfeld effect, discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933, is the expulsion of a magnetic field from a superconductor. Most spectacularly, this can be used to let magnets levitate above superconductors, since their field lines cannot enter the superconductor. I assure you this has absolutely nothing to do with Yogic flying.
Recommended watching: Amazing Physics on YouTube.
- The Aharonov–Bohm Effect
A charged particle in an electromagnetic field acquires a phase shift from the potential of the background field. This phase shift is observable in interference patterns and has been experimentally confirmed. The relevant point is that it's the potential that causes the phase, not the field. Before the Aharonov–Bohm effect, one could question the physical reality of the potential.
- The Hawking Effect
Based on a semi-classical treatment of quantum fields in a black hole geometry, Stephen Hawking showed in 1975 that black holes emit thermal radiation with a temperature inversely proportional to the black hole's mass. This emission process is called the Hawking effect. The result has led to great progress in understanding the physics of black holes and is still a subject of research; see the recent post at Cosmic Variance.
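The Hawking temperature is T = ħc³/(8πGMk_B), inversely proportional to the mass. Plugging in a solar mass shows how hopelessly cold such a black hole is:

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J*s
C = 2.998e8       # speed of light, m/s
G = 6.674e-11     # Newton's constant, m^3/(kg s^2)
K_B = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30  # solar mass, kg

def hawking_temperature(mass):
    """Hawking temperature (in K) of a black hole of the given mass (in kg)."""
    return HBAR * C**3 / (8 * math.pi * G * mass * K_B)

# About 6e-8 K for a solar-mass black hole, far below the ~2.7 K of the
# cosmic microwave background, which is why the radiation of astrophysical
# black holes cannot be observed.
t_sun = hawking_temperature(M_SUN)
```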
Recommended reading: Black Hole Thermodynamics by David Harrison and P.K. Townsend's lecture notes on Black Holes.
- The Zeeman Effect/Stark Effect
In the presence of a magnetic field, energy levels of electrons in atomic orbits that are usually degenerate (i.e., equal) can take different values, depending on their quantum numbers. As a consequence, spectral lines corresponding to transitions between these energy levels can split into several lines in a static magnetic field. This effect is named after the Dutch physicist Pieter Zeeman, who was awarded the 1902 physics Nobel Prize for its discovery. The Zeeman effect is an important tool for measuring magnetic fields in astronomy. For historical reasons, the plain vanilla pattern of line splitting is called the anomalous Zeeman effect.
A related effect, the splitting of spectral lines in strong electric fields, is called the Stark Effect, after Johannes Stark.
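In the simplest (normal) Zeeman pattern, adjacent lines are spaced by Δν = μ_B B/h, with μ_B the Bohr magneton. A sketch of the size of the effect:

```python
MU_B = 9.274e-24  # Bohr magneton in J/T
H = 6.626e-34     # Planck's constant in J*s

def zeeman_splitting(b_field):
    """Frequency spacing (in Hz) of adjacent spectral lines
    in the normal Zeeman effect, for a field b_field (in T)."""
    return MU_B * b_field / H

# Even in a strong 1 T laboratory field the splitting is only ~14 GHz,
# tiny compared to optical transition frequencies of order 5e14 Hz.
dnu = zeeman_splitting(1.0)
```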
Recommended reading: HyperPhysics on the Zeeman effect and the Sodium doublet.
- The Mikheyev-Smirnov-Wolfenstein Effect
The Mikheyev-Smirnov-Wolfenstein effect, commonly called the MSW effect, is an in-medium modification of neutrino oscillation that can take place, for example, in the sun or the earth. It is a resonance effect that depends on the density of the medium and can significantly affect the conversion of one flavor into another. The effect is named after Stanislav Mikheyev, Alexei Smirnov, and Lincoln Wolfenstein.
Recommended reading: The MSW effect and Solar Neutrinos.
- The Sunyaev-Zel'dovich Effect
The Sunyaev-Zel'dovich effect, first described by Rashid Sunyaev and Yakov Zel'dovich, is the result of high-energy electrons distorting the cosmic microwave background radiation through inverse Compton scattering, in which some of the energy of the electrons is transferred to the low-energy CMB photons. Observed distortions of the cosmic microwave background spectrum are used to detect density perturbations in the universe. Dense clusters of galaxies have been observed with the use of this effect.
Recommended reading: Max Planck Society press release Crafoord Prize 2008 awarded to Rashid Sunyaev and The Sunyaev-Zel'dovich effect by Mark Birkinshaw.
- Bonus: The Pauli Effect
Named after the Austrian theoretical physicist Wolfgang Pauli, the Pauli Effect is well known to every student of physics. It describes a spontaneous failure of technical equipment in the presence of theoretical physicists, who should therefore never be allowed on the vacuum pumps, lasers or oscilloscopes.
Recommended reading: Our post Happy Birthday Wolfgang Pauli.
[This is a slightly updated and recycled post that originally appeared in March 2008.]
Wednesday, September 26, 2012
Interna
Seems I've been too busy to even give you the family update last month, so here's a catch-up.
Lara and Gloria can meanwhile climb up and down chairs quite well, which makes life easier for me, except that they often attempt to climb further upwards from there. They can now reach the light switches, and last week they learned to open doors, so it's difficult now to keep them in a room. Their favorite pastime is presently hitting me with empty plastic bottles, which seems to be infinitely entertaining. They have also developed the unfortunate habit of throwing their toys in the direction of my laptop screen.
The girls have increased their vocabulary with various nouns and can identify images in their picture books. They still haven't learned a single verb, though Stefan insists "cookie" means "look."
Gloria is inseparable from her plush moose, Bo. She takes him everywhere and sleeps with him. Since I'd really like to wash it on occasion, I've now bought a second one, and we're doing our best to avoid her seeing both at once. (We also have to maneuver carefully around the Arlanda Duty Free shop, where there sits a whole pile of them.) Gloria has developed a bad case of motion sickness: she'll be sick after ten minutes on the road. We've now got some medication from our pediatrician that seems to help, so our mobility radius has expanded again. Lara, meanwhile, is squinting, and we'll have to do something about that.
Right now, they're sitting behind me with their Swedish-English picture book. I am often amazed at how well they understand what we say, especially because Stefan and I don't speak with the same accent and we both mumble one way or the other. I guess it's because I judge their progress by my lack of progress in learning Swedish. Last week I took a taxi in Stockholm, and it was the first time I had a taxi driver who was actually Swedish. Ironically, I noticed that because he spoke British English that was, at least to my ears, basically accent-free. He didn't even try to address me in Swedish. When I asked him about it, he said, well, there are so few people on the planet for whom Swedish is useful that they don't expect others to speak it. The Swedes are just so damned nice to immigrants.
We were lucky to get two daycare places starting in January. It's a half-day place, but this will be quite a change for all of us.
The organization of the PI conference on Experimental Search for Quantum Gravity is going very well, thanks to Astrid Eichhorn who has done a great job. We now have a schedule that should appear on the website within the next days. We'll probably have most of the talks recorded, so it's something for all of you. The organization of the November program on Perspectives of Fundamental Cosmology is running a little behind, but it seems everything is slowly falling into place there too.
Besides this, I have been trying to convince my colleagues at Nordita to engage more in public outreach, as I think we're behind in making use of the communication channels the online world has to offer. I'm happy to report that we did get some funding approved by the board last week. Part of this will go into a few videos, another part will go to a workshop for science writers - an idea that goes back to a discussion I had with George Musser earlier this year. I'll let you know how this goes, and I'm open to suggestions for what else we could do. I think I don't have to explain my motivation for doing this to you - I'd be preaching to the choir. So let me instead say that it can be difficult to get scientists to make a time commitment to anything that's not research, so the biggest constraint on the matter is personnel.
Friday, September 21, 2012
Quantum Gravity in Tritium Decay?
If you've been working in a field for a while, there comes the moment when you feel like you've heard it all before. So I was surprised when, the other day, I came across an idea to test the phenomenology of quantum gravity that I had not heard about before - and the paper is already three years old:
- Hypersharp Resonant Capture of Neutrinos as a Laboratory Probe of the Planck Length
R. S. Raghavan
Phys. Rev. Lett. 102:091804 (2009).
arxiv:0903.0787
Time-Energy Uncertainty in Neutrino Resonance: Quest for the Limit of Validity of Quantum Mechanics
R. S. Raghavan
arXiv:0907.0878
This all goes back, essentially, to Mead's idea, which we discussed in the earlier post. Mead, however, had more to say about this: He wrote another paper, "Observable Consequences of Fundamental-Length Hypotheses," Phys. Rev. 143, 990–1005 (1966), in which he argued that such a Planck scale limit should, in principle, lead to a lower limit on the width of atomic spectral lines; it should create a fundamental blurring that can't be removed by better measurement. In his paper, Raghavan now wants to test this limit. Rather than using photon emission, though, he suggests using tritium decay.
Tritium undergoes β-decay to Helium, emitting an electron and an electron anti-neutrino. Normally the electron flies off and the energy spread of the outgoing neutrino is quite large, but Raghavan lays out some techniques by which this spread can be dramatically reduced. The starting point is that in some fraction of the tritium decays the electron doesn't fly off but is instead captured in a bound orbit around the Helium. Now if the tritium, normally a gas at room temperature, can be embedded in a solid, then the recoil energy can be very small; this is essentially the Mössbauer effect, just with neutrino emission, and it gives a hypersharp neutrino line. The first few slides of this pdf are a useful summary of recoilless bound-state β-decay.
Raghavan estimates ΔE/E to be as small as 10⁻²⁹. The half-life of tritium is about 12 years. There are a lot of techniques involved in this estimate that I don't know much about, so I can't tell how feasible the experiment he proposes is. It sounds plausible to me though, give or take some orders of magnitude.
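As a rough sanity check (my own back-of-envelope estimate, not from the paper): if the width of the neutrino line is just the natural linewidth ħ/τ set by tritium's lifetime, ΔE/E indeed comes out at this order of magnitude:

```python
import math

# Rough sanity check (not from Raghavan's paper): take the line width
# to be the natural linewidth hbar/tau from tritium's lifetime and
# compare it to the ~18.6 keV energy scale of the emitted neutrino.
hbar = 6.582e-16                     # eV*s
tau = 12.3 * 3.156e7 / math.log(2)   # mean lifetime from the ~12.3 y half-life, in s
E = 18.6e3                           # neutrino energy scale in eV

ratio = (hbar / tau) / E
print(f"natural-linewidth ΔE/E ≈ {ratio:.0e}")  # ≈ 6e-29, i.e. of order 10⁻²⁹
```

The numbers for the half-life and decay energy are the standard ones for tritium; the point is only that the orders of magnitude hang together.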
In his paper, he then speaks about the energy-time uncertainty relation and its Planck scale modifications. Now it is true that if you have a generalized uncertainty relation for the spatial spread Δx and momentum spread Δp, you expect there to also be one for ΔEΔt. Yet normally the deviations from the usual Heisenberg uncertainty scale with the energy over the Planck mass, and for the emitted neutrinos, with an average energy of some keV, this is a ridiculously small correction term.
So here then comes the input from Mead's paper. Mead argues that the ratio ΔE/E is, in the most conservative model, actually proportional to the Planck length over the size of the system, l_Pl/R, which he takes to be the size of the nucleus. This is quite puzzling, because if you take the Planck length bound on a wavelength and propagate the error to the frequency, what you'd get is actually ΔE/E larger than or equal to l_Pl E (in natural units), which is about 4 orders of magnitude smaller in the case at hand. The reason for this mismatch is that Mead in his argument speaks about the displacement of elementary particles in a potential, and if the wavelength of the particles is larger than the typical extension of the potential, this doesn't make much sense.
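Plugging in numbers (my own estimate, taking R to be a typical nuclear size of 1 fm and E the 18.6 keV tritium decay energy) shows the size of the mismatch between the two scalings:

```python
import math

# Compare Mead's scaling ΔE/E ~ l_Pl/R with the wavelength-based estimate
# ΔE/E ~ l_Pl*E (in natural units, i.e. E/E_Planck). Assumed inputs:
# R ~ 1 fm (nuclear size), E ~ 18.6 keV (tritium decay energy).
l_pl = 1.616e-35   # Planck length in m
E_pl = 1.221e28    # Planck energy in eV
R = 1.0e-15        # nuclear size in m
E = 18.6e3         # energy scale in eV

mead = l_pl / R    # Mead's ΔE/E
naive = E / E_pl   # l_Pl*E in natural units

print(f"l_Pl/R ≈ {mead:.1e}, l_Pl*E ≈ {naive:.1e}")
print(f"mismatch: about 10^{math.log10(mead / naive):.0f}")  # about 10^4
```

With these inputs l_Pl/R comes out around 10⁻²⁰ and l_Pl E around 10⁻²⁴, reproducing the roughly 4 orders of magnitude mentioned above.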
That having been said, one can of course consider the proposed parameterization as a model that is to be constrained, but this leaves the question of how plausible it is that there be such a modification from quantum gravity. At first sight, I'd have said a low-energetic system like an atom is a hopeless place to look for quantum gravity, but then the precision of the suggested measurement would be amazing indeed. If it works, that is. I'll have to do some more thinking to see if I can make sense of the argument for the scaling of the effect. Either way, an experiment like the one Raghavan discusses, watching the decay of tritium under suitable conditions, would test a new range of parameters, which is always a good thing to do.
Monday, September 17, 2012
Research Areas and Social Identity
Last year, when I was giving a colloquium in Jyväskylä, my host introduced me as "leading the quantum gravity group at Nordita." I didn't object, since it's correct to the extent that I'm leading myself, more or less successfully. However, the clustering of physicists into multi-person groups is quite an interesting emergent feature of scientific communities. "Quantum gravity," for example, is usually taken to mean quantum gravity excluding string theory, a nomenclature I complained about earlier.
In the literature on the sociology of science it is broadly acknowledged that scientists, like other professionals, naturally segregate into groups to accomplish what's called a "cognitive division of labor": an assignment of specialized tasks which allows the individual to perform at a much higher level than they could achieve if they had to know all about everything. Such a division of labor is often noticeable already on the family level (I do the tax return, you deal with the health insurance). Specialization into niches for the best use of resources can also be seen in ecosystems. It's a natural trend because it's a local optimization process: everybody digs a little deeper where they are and gets a little more.
The problem is of course that a naturally occurring trend might lead to a local optimum that's not a global optimum. In the case of scientific communities the problem is that knowledge which lies at the intersection of different areas of specialization is either unknown or not widely known, and there is a potential barrier preventing the community from making better use of this knowledge. This is unfortunate, because information relevant to progress goes unused. (See for example P. Wilson, "Unused relevant information in research and development," Journal of the American Society for Information Science, 45(2), 192–203 (1995).)
So this is the rationale for why it's necessary to encourage scientists to look outside their box, at least on occasion. And that takes some effort, because they're in a local optimum and thus generally unwilling to change anything.
This brings me back to the grouping of researchers. It does not seem to me very helpful for reaching a better global optimum. In fact, it seems to me that it instead makes the situation worse.
Social identity theory deals with the question of what effect it has to assign people to groups; a good review is, for example, Stryker and Burke, "The Past, Present, and Future of an Identity Theory," Social Psychology Quarterly, Vol. 63, No. 4 (Dec. 2000), pp. 284-297. This review summarizes studies showing that the mere act of categorizing people as group members changes their behavior: When assigned a group, even one that might not be meaningful, they favor people in the group over people outside the group and try to fit in. The explanation that the researchers put forward is that "after being categorized of a group membership, individuals seek to achieve positive self-esteem by positively differentiating their ingroup from a comparison outgroup."
This leads me to think that it cannot be helpful to knowledge discovery to assign researchers at an institute to a handful of groups. It is also very punched-paper in the age of social tagging.
A suggestion that I put forward some years ago at PI was thus to get rid of the research groups altogether and instead allow researchers to choose keywords that serve as tags. These tags would include the existing research areas, but also cover other interests, which might be black holes, networks, holography, the arrow of time, dark matter, phase transitions, and so on. Then one could replace the groups on the website with a tag cloud. If you click on a keyword, you'd get a list of all people who've chosen this tag.
Imagine how useful this would be if you were considering applying. You could basically tell at one glance what people at the place are interested in. And if you started working there, it would take one click to find out who has similar interests. No more browsing through dozens of individual websites, half of which don't exist or were last updated in 1998.
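To illustrate how little machinery such a system would take (all names and tags below are invented), the whole tag cloud reduces to inverting a researcher-to-tags mapping:

```python
from collections import defaultdict

# Invert a researcher -> tags mapping into a tag -> researchers index;
# the per-tag counts double as weights for sizing entries in a tag cloud.
# All names and tags are made up for illustration.
researchers = {
    "A. Example": ["quantum gravity", "black holes", "phenomenology"],
    "B. Sample":  ["cosmology", "dark matter", "black holes"],
    "C. Case":    ["networks", "holography", "quantum gravity"],
}

tag_index = defaultdict(list)
for person, tags in researchers.items():
    for tag in tags:
        tag_index[tag].append(person)

weights = {tag: len(people) for tag, people in tag_index.items()}

print(tag_index["black holes"])    # ['A. Example', 'B. Sample']
print(weights["quantum gravity"])  # 2
```

Clicking a keyword on the website would then simply display the corresponding list from the index.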
I was thinking about this recently because Stefan said that, with better indexing of abstracts, which is on the way, it might even be possible in the not-so-far future to create such a tag cloud from a researcher's publication list. Which, with an author ID that lists institutions, could be assembled mostly automatically too.
This idea comes with a compatibility problem though, because most places hire applicants by group. So if one doesn't have groups, then the assignment of faculty to committees and applicants to committees needs to be rethought. This requires a change in procedure, but it's manageable. And this change in procedure would have the benefit of making it much easier to identify emerging areas of research that would otherwise awkwardly fit neither here nor there. Which is the case right now with emergent gravity and analogue gravity, just to name an example.
I clearly think getting rid of institutional group structures would be beneficial to research. Alas, there's a potential barrier that's preventing us from making such a change, a classic example of a collective action problem. However, I am throwing this at you because I am sure this restructuring will come to us sooner or later. You read it here first :o)
Tuesday, September 11, 2012
Book Review “The Geek Manifesto” by Mark Henderson
The Geek Manifesto: Why Science Matters
By Mark Henderson
Bantam Press (10 May 2012)
Henderson’s book is a well-structured and timely summary of why science, both scientific knowledge and the scientific method, matters for the well-being of our societies. Henderson covers seven areas: why science matters to politics and government, to the media, the economy, and education, in court, in healthcare, and to the environment. In each case, he gives examples of current problems, mostly from the UK and to a lesser extent from the USA, that he uses to arrive at recommendations for improvement.
The book is quite impressive in the breadth of topics covered. The arguments Henderson makes are well thought through, and he has hands-on suggestions for what can be done: for example, how and why scientists should take the time to correct journalists, how and why to communicate their concerns to members of parliament, why randomized controlled trials matter not only in health care but also for general policies and educational practice, and so on.
“The manifesto’s aim is to win your broad support for its central proposition: that a more scientific approach to problem-solving is applicable to a surprisingly wide range of political issues, and that ignoring it disadvantages us all.”
That having been said, the book is clearly addressed to people who know the value of, and apply, the scientific method, people he refers to as “geeks.” I’ll admit that I’m not very fond of this terminology. If I hear “geek,” I think of a guy who can fix a TV with a fork and salt, and who can recite Star Wars backwards in Klingon. What’s wrong with “scientists,” I am left to wonder?
There are some more oddities about this book. To begin with, it’s set in Times, and the text is in several places broken up with large pull quotes that repeat a sentence from the page. You see this very frequently in magazines these days, the idea being to get across at least a catchy sentence or two, but it doesn’t make any sense whatsoever to do this in a book every 30 pages or so. It’s just plain annoying to have to read the same sentence twice.
I’ll also admit that I don’t follow British politics at all, and most of the names dropped in this book don’t tell me anything. It’s a strangely UK-centric vision of what is really a much broader issue. The many twists and turns of UK politics did not make for a compelling read. That’s unfortunate, because Henderson has a lot of good points that are relevant beyond the borders of his country.
Basically, Henderson’s message can be summarized as urging “geeks” to become more active and more vocal about their frustration with how scientific evidence and methods are treated in various realms of our society. As a call to action, however, the book is far too long and, being addressed to readers who are already fond of science, it’s preaching to the choir. It’s a good book, by all means: well-argued, well-referenced, well-written – but I doubt it’ll achieve what its author hopes for.
I have to add, however, that it is good to see somebody at least working in the direction of addressing this systemic problem that I’ve been writing about for years. I think the root problem of our global political systems is that scientific knowledge and thinking are not, at present, well integrated into our decision-making processes. Instead we have an unfortunate conflation of scientific questions and questions of value when it comes to policy decisions. These really should be disentangled. But I’m preaching to the choir...
You may like “The Geek Manifesto” if you have an interest in how science is integrated into our societies, and what the shortcomings are with this integration. I’d give this book three out of five stars, which is to say I had to fight the repeated desire to skip over a few pages here and there.
Saturday, September 08, 2012
What are you, really?
Last month, I reviewed Jim Holt’s book “Why Does the World Exist?” This question immediately brings up another: What exists, anyway? Holt does not seem to be very sympathetic to the idea that mathematical objects exist, or at least he makes fun of it:
“A majority of contemporary mathematicians (a typical, though disputed, estimate is about two-thirds) believe in a kind of heaven – not a heaven of angels and saints, but one inhabited by the perfect and timeless objects they study: n-dimensional spheres, infinite numbers, the square root of -1, and the like. Moreover, they believe that they commune with this realm of timeless entities through a sort of extra-sensory perception.”
There’s no reference for the mentioned estimate, but what’s worse is that referring to mathematical objects as “timeless” already implies a preconceived notion of time. It makes perfect sense to think of time as a mathematical object itself, and to construct other mathematical objects that depend on that time. Maybe one could say that the whole of mathematics does not evolve in this time, and we have no evidence of it evolving in any other time, but just claiming that mathematics studies “timeless objects” is sloppy and misleading. Holt goes on:
“Mathematicians who buy into this fantasy are called “Platonists”… Geometers, Plato observed, talk about circles that are perfectly round and infinite lines that are perfectly straight. Yet such perfect entities are nowhere to be found in the world we perceive with our sense… Plato concluded that the objects contemplated by mathematicians must exist in another world, one that is eternal and transcendent.”
It is interesting that Holt in his book comes across as very open-minded to pretty much everything his interview partners confront him with, including parallel worlds, retrocausation and panpsychism, but discards Platonism as a “phantasy.”
I’m not a Platonist myself, but it’s worth spending a paragraph on the misunderstanding that Holt has constructed, because this isn’t the first time I’ve come across similar statements about circles and lines and so on. It is arguably true that you won’t find a perfect circle anywhere you look. Neither will you find perfectly straight lines. But the reason for this is simply that circles and perfectly straight lines are not objects that appear in the mathematical description of the world on the scales that we see. Does it follow that they don’t exist?
If you want to ask the question in a sensible way, you should ask instead about something that we presently believe is fundamental: What’s an elementary particle? Is it an element of a Hilbert space? Or is it described by an element of a Hilbert space? Or, to put the question differently: Is there anything about reality that cannot be described by mathematics? If you say no to this question, then mathematical objects are just as real as particles.
What Holt actually says is: “I’ve never seen any of the mathematical objects that I’ve heard about in school, thus they don’t exist and Platonism is a phantasy.” Which is very different from saying “I know that our reality is not fundamentally mathematical.” With that misunderstanding, Holt goes on to explain Platonism by psychology:
“And today’s mathematical Platonists agree. Among the most distinguished of them is Alain Connes, holder of the Chair of Analysis and Geometry at the Collège de France, who has averred that “there exists, independently of the human mind, a raw and immutable mathematical reality.”… Platonism is understandably seductive to mathematicians. It means that the entities they study are no mere artifacts of the human mind: these entities are discovered, not invented… Many physicists also feel the allure of Plato’s vision.”
I don’t know if that’s actually true. Most of the physicists I have asked do not believe that reality is mathematics, but rather that reality is described by mathematics. But it may well be that the physicists in my sample have a tendency towards phenomenology and model building.
Most of them see mathematics as some sort of model space that is mapped to reality. I argued in this earlier post that this is actually not the case. We never map mathematics to reality. We map a simplified system to a more complicated one, using the language of mathematics. Think of a computer simulation to predict the solar cycle. It’s a map from one system (the computer) to another system (the sun). If you do a calculation on a sheet of paper and produce some numbers that you later match with measurements, you’re likewise mapping one system (your brain) to another (your measurement), not some mathematical world to a real one. Mathematics is just a language that you use, a procedure that adds rigor and has proved useful.
Unlike Max Tegmark, I don’t believe that the world fundamentally is mathematics. It seems quite implausible to me that we humans should at this point in our evolution already have come up with the best way to describe nature. I used to refer to this as the “Principle of Finite Imagination”: Just because we cannot imagine it (here: something better than mathematics) doesn’t mean it doesn’t exist. I learned from Holt’s book that my Principle of Finite Imagination is more commonly known as the Philosopher’s Fallacy.
“[T]he philosopher’s fallacy: a tendency to mistake a failure of the imagination for an insight into the way reality has to be.”
Though Googling "philosopher's fallacy" brings up some different variants, so maybe it's better to stick with my nomenclature.
Anyway, this has been discussed for some thousand years and I have nothing really new to add. But there’s always somebody for whom these thoughts are new, as they once were for me. And so this one is for you.
xkcd: Lucky 10000.
Tuesday, September 04, 2012
Public Attitudes to Science
"Public Attitudes to Science" is a survey that has been conducted in the UK every couple of years since 2000, most recently 2011. It's quite interesting if you're interested in how scientific research is perceived by the public; you can download the full survey results here. Let me just show you some of the figures that I found interesting.
First, here's where people hear or read about new scientific research findings most often. TV and print newspapers are the dominant sources with 54% and 33%, followed by internet excluding blogs. Science blogs come in only at 2% (I don't know what the asterisk means, I took the number from the text to this figure).
Next, a somewhat odd question. People were asked how much they agree or disagree with the statement "The information I hear about science is generally true." It's beyond me how anybody can agree with a statement like that. Anyway, 9% disagree or strongly disagree and an amazing 47% agree or strongly agree.
What's more interesting is that those who agreed or disagreed were asked for their reasons in an unprompted reply. Here are the most frequently named reasons for agreeing that "information I hear about science is generally true." The top answer (no reason to doubt it) essentially means to me that they're generally trusting or didn't think very much about their answer. More telling are the subsequent reasons: It's checked by other scientists, science is regulated, it comes directly from scientists, it's checked by someone, checked by journalists. Don't laugh, this is serious.
And here are the top reasons to disagree that scientific information is generally true. The first two replies are variants of "why should I believe it." Followed by: it's not checked by anyone, not checked by other scientists, not checked by journalists, does not come directly from scientists, and a general mistrust of mass media. This reply is interesting because science blogs could alleviate this trust issue very much, yet, as we have seen above, only very few people seem to use them as a source of information.
This becomes even clearer if you look at the replies to the next question, that is, what could increase people's trust in the findings of scientific studies:
I am both shocked and amazed that 47% of people say they would trust information more if it was repeated. Though that shouldn't come as a surprise to me, because it's a well-known effect that Kahneman elaborates on at length in his book. The same goes for the reply that the information fitted nicely with what they already knew. If you really needed evidence that the human brain easily falls for confirmation bias, here it is. And that's only the people who admitted it! But on the more hopeful side are the replies that ask for review by other scientists and publication in a scientific journal. One might add that at least a proper reference or source would greatly help. I think science blogs do much better in terms of referencing, and they're a source of review by other scientists in themselves. So I come to conclude the world would be a better place if people read more science blogs. Though that might be a case of confirmation bias ;o)
Saturday, September 01, 2012
Questioning the Foundations
The submission deadline for this year’s FQXi essay contest on the question “Which of Our Basic Physical Assumptions Are Wrong?” has just passed. The contest drew many thought-provoking contributions, which I encourage you to browse here.
The question was really difficult for me. Not because nothing came to my mind but because too much came to my mind! Throwing out the Heisenberg uncertainty principle, Lorentz-invariance, the positivity of gravitational mass, or the speed of light limit – been there, done that. And that’s only the stuff that I did publish...
At our 2010 conference, we had a discussion on the topic “What to sacrifice?” addressing essentially the same question as the FQXi essay, though with a focus on quantum gravity. For everything from the equivalence principle through unitarity and locality to the existence of space and time, you can find somebody willing to sacrifice it for the sake of progress.
So what to pick? I finally settled on an essay arguing that the quantization postulate should be modified, and if you want to know more about this, go check it out on the FQXi website.
But let me tell you about my runner-up.
“Physical assumption” is a rather vague expression. In the narrower sense you can understand it to mean an axiom of the theory, but in the broader sense it encompasses everything we use to propose a theory. I believe one of the reasons progress on finding a theory of quantum gravity has been slow is that we rely too heavily on mathematical consistency and pay too little attention to phenomenology. I simply doubt that mathematical consistency, combined with the requirement to reproduce the standard model and general relativity in the suitable limits, is sufficient to arrive at the right theory.
Many intelligent people have spent decades developing approaches to quantum gravity, approaches which might turn out to have absolutely nothing to do with reality, even if they reproduce the standard model. They pursue their research with the implicit assumption that the power of the human mind is sufficient to discover the right description of nature, though this is rarely explicitly spelled out. There is the “physical assumption” that the theoretical description of nature must be appealing and make sense to the human brain. We must be able to arrive at it by deepening our understanding of mathematics. Einstein and Dirac have shown us how to do it, arriving at the most amazing breakthroughs by mathematical deduction. It is tempting to conclude that they have shown the way, and we should follow in their footsteps.
But these examples have been exceedingly rare. Most of the history of physics instead has been incremental improvements guided by observation, often accompanied by periods of confusion and heated discussion. And Einstein and Dirac are not even good examples: Einstein was heavily guided by Michelson and Morley’s failure to detect the aether, and Dirac’s theory was preceded by a phenomenological model proposed by Goudsmit and Uhlenbeck to explain the anomalous Zeeman effect. Their model didn’t make much sense. But it explained the data. And it was later derived as a limit of the Dirac equation coupled to an electromagnetic field.
I think it is perfectly possible that there are different consistent ways to quantize gravity that reproduce the standard model. It also seems perfectly possible to me for example that string theory can be used to describe strongly coupled quantum field theory, and still not have anything to say about quantum gravity in our universe.
The only way to find out which theory describes the world we live in is to make contact with observation. Yet most of the effort in quantum gravity is still devoted to the development and better understanding of mathematical techniques. That is certainly not sufficient. It is also not necessary, as the Goudsmit and Uhlenbeck example illustrates: Phenomenological models might not at first glance make much sense, and their consistency may only become apparent later.
Thus, the assumption that we should throw out is that mathematical consistency, richness, or elegance are good guides to the right theory. They are desirable of course. But neither necessary nor sufficient. Instead, we should devote more effort to phenomenological models to guide the development of the theory of quantum gravity.
In a nutshell that would have been the argument of my essay had I chosen this topic. I decided against it because it is arguably a little self-serving. I will also admit that while this is the lesson I draw from the history of physics, I, as I believe most of my colleagues, am biased towards mathematical elegance, and the equations named after Einstein and Dirac are the best examples for that.
Wednesday, August 29, 2012
Quantum Gravity and Taxes
The other day I got caught in a conversation about the Royal Institute of Technology and how it deals with value added taxes. After the third round of explanation, I still hadn’t quite understood the Swedish tax regulations. This prompted my conversation partner to remark that Swedish taxes are more complicated than my research.
The only thing I can say in my defense is that in a very real sense taxes are indeed more complicated than quantum gravity.
True, the tax regulations you have to deal with to get through life are more a matter of available information than of understanding. Applying the right rule in the right place requires less knowledge than you need for, say, the singularity theorems in general relativity. In the end taxes are just basic arithmetic manipulations. But what’s the basis of these rules? Where do they come from?
Tax regulations, laws in general, and also social norms have evolved along with our civilizations. They’re the results of a long history of adaptation and selection in a highly complex, partly chaotic system. This result is based on vague concepts like “fairness”, “higher powers”, or “happiness” that depend on context and culture and change with time.
If you think about it too much, the only reason our societies’ laws and norms work is inertia. We just learn how our environment works, and most of us, most of the time, play by the rules. We adapt and slowly change the rules along with our adaptation. But ask where the rules come from or by what principles they evolve, and you’ll have a hard time coming up with a good reason for anything. If you make it more than five why’s down the line, I cheer for you.
We don’t have the faintest clue how to explain human civilization. Nobody knows how to derive the human rights from the initial conditions of the universe. People in general, and men in particular, with all their worries and desires, their hopes and dreams, do not make much sense to me, fundamentally. I have no clue why we’re here or what we’re here for, and in comparison to understanding Swedish taxes, quantizing gravity seems like a neatly well-defined and solvable problem.
Saturday, August 25, 2012
How to beat a cosmic speeding ticket
xkcd: The Search
As a child I had a (mercifully passing) obsession with science fiction. To this day contact to extraterrestrial intelligent beings is to me one of the most exciting prospects of technological progress.
I think a plausible explanation for why we have so far not made alien contact is that they use a communication method we have not yet discovered, and if there is any way to communicate faster than the speed of light, clearly that’s what they would use. Thus, we should work on building a receiver for faster-than-light signals! Except, well, our present theories don’t seem to allow for such signals to begin with.
Every day is a winding road, and after many such days I found myself working on quantum gravity.
So when the review was finally submitted, I thought it was time to come back to superluminal information exchange, which resulted in a paper that’s now published.
The basic idea isn’t so difficult to explain. The reason it is generally believed that nothing can travel faster than the speed of light is that Einstein’s special relativity sets the speed of light as a limit for all matter that we know. The assumptions for that argument are few, the theory is in extremely good agreement with experiment, and the conclusion is difficult to avoid.
Strictly speaking, special relativity does not forbid faster-than-light propagation. However, since in special relativity a signal moving forward in time faster than the speed of light for one observer might appear like a signal moving backwards in time for another observer, this can create causal paradoxes.
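This frame-dependence is easy to check numerically. Here is a small toy sketch of my own (not from the paper), in units with c = 1: apply the Lorentz transformation to the interval between emission and reception of a signal with speed u, and for u > c a suitably boosted observer finds the reception happening before the emission.

```python
import math

def boosted_interval(dt, dx, v, c=1.0):
    """Time between emission and reception of a signal, as seen by
    an observer moving at speed v along the signal's direction."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (dt - v * dx / c**2)

# A hypothetical signal with speed u = 2c: emitted at (t, x) = (0, 0),
# received at (1, 2) in the emission frame.
print(boosted_interval(1.0, 2.0, v=0.0))  # emission frame: positive, reception after emission
print(boosted_interval(1.0, 2.0, v=0.9))  # boosted frame: negative, reception *before* emission

# An ordinary subluminal signal (u = 0.5c) stays forward in time for all observers:
print(boosted_interval(1.0, 0.5, v=0.9))
```

The sign flip for the superluminal signal, combined with the assumption that it could carry information, is exactly what opens the door to causal paradoxes.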
There are three common ways to allow superluminal signaling, and each has its problems:
First, there are wormholes in general relativity, but they generically also lead to causality problems. And how one would create and manipulate them, or send signals through them, is unclear. I’ve never been a fan of wormholes.
Second, one can just break Lorentz-invariance and avoid special relativity altogether. In this case one introduces a preferred frame, and observer independence is violated. This avoids causal paradoxes because there’s now a distinguished direction “forward” in time. The difficulty here is that special relativity describes our observations extremely well and we have no evidence for Lorentz-invariance violation whatsoever; one then has to explain why we have not noticed such violations before. Many people are working on Lorentz-invariance violation, and that by itself limits my enthusiasm.
Third, there are deformations of special relativity which avoid an explicit breaking of Lorentz-invariance by changing the Lorentz-transformations. In this case, the speed of light becomes energy-dependent, so that photons with high energy can, in principle, move arbitrarily fast. Since in this case everybody agrees that a photon moves forward in time, this does not create causal paradoxes, at least not just because of the superluminal propagation.
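To get a feeling for the magnitudes involved, here is a back-of-the-envelope estimate (my own illustrative numbers, not tied to any particular deformation) of the arrival-time shift for a photon whose speed deviates from c by a first-order Planck-scale correction, v(E) ≈ c(1 ± E/E_Planck):

```python
# Toy estimate of the arrival-time shift from an energy-dependent photon
# speed v(E) ~ c * (1 +/- E/E_Planck), kept to first order. The form and
# sign of the correction are model-dependent; this is only a rough sketch.

E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
SECONDS_PER_YEAR = 3.156e7  # travel time per light year at speed c, in seconds

def arrival_shift_s(distance_ly, energy_gev):
    """First-order time shift relative to a photon travelling at exactly c."""
    travel_time_s = distance_ly * SECONDS_PER_YEAR
    return travel_time_s * energy_gev / E_PLANCK_GEV

# A 10 GeV photon from a source a billion light years away:
print(arrival_shift_s(1e9, 10.0))  # roughly 0.026 seconds
```

Even over a billion light years, the shift for a 10 GeV photon comes out to a few tens of milliseconds, which is why such Planck-scale dispersion effects are, at least in principle, accessible to astrophysical observations of distant transient sources.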
I was quite excited about this possibility for a while, but after some years of back and forth I’ve convinced myself that deformed special relativity creates more problems than it solves. It suffers from various serious difficulties that prevent a recovery of the standard model and general relativity in the suitable limits, notoriously the problem of multi-particle states and non-locality (which we discussed here).
So, none of these approaches is very promising and one is really very constrained in the possible options. The symmetry-group of Minkowski-space is the Lorentz-group plus translations. It has one free parameter and that’s the speed of massless particles. It’s a limiting speed. End of story. There really doesn’t seem to be much wiggle room in that.
Then it occurred to me that it is not actually difficult to allow several different speeds of light to be invariant, as long as one can never measure them at the same time. And that would be the case if one had particles propagating in a background that is a superposition of Minkowski-spaces with different speeds of light, because then you would use, for each speed of light, the Lorentz-transformation that belongs to it. In other words, you blow up the Lorentz-group to a one-parameter family of groups that acts on a set of spaces with different speeds of light.
You have to expect the probability for a particle to travel through an eigenspace that does not belong to the measured speed of light to be small, so that we haven’t yet noticed. To good precision, the background that we live in must be in an eigenstate, but it might have a small admixture of other speeds, faster and slower. Particles then have a small probability to travel faster than the speed of light through one of these spaces.
If you measure a state that was in a superposition, you collapse the wavefunction to one eigenstate, or let us better say it decoheres. This decoherence introduces a preferred frame (the frame of the measurement), which is how causal paradoxes are avoided: there is a notion of forward in time that comes in through the measurement.
In contrast to the case in which Lorentz invariance is violated though, this preferred frame does not appear on the level of the Lagrangian - it is not fundamentally present. And in contrast to deformations of special relativity, there is no issue here with locality because two observers never disagree on the paths of two photons with different speeds: Instead of there being two different photons, there’s only one, but it’s in a superposition. Once measured, all observers agree on the outcome. So there’s no Box Problem.
That having been said, I found it possible to formulate this idea in the language of quantum field theory. (It wasn’t remotely as straightforward as this summary might make it appear.) In my paper, I then proposed a parameterization of the occupation probability of the different speed-of-light eigenspaces and the probability of particles to jump from one eigenstate to another upon interaction.
So far so good. Next one would have to look at modifications of standard model cross-sections and see if there is any hope that this theoretical possibility is actually realized in nature.
We still have a long way to go toward building the cell phone to talk to aliens. But at least we know now that it’s not incompatible with special relativity.
Wednesday, August 22, 2012
How do science blogs change the face of science?
The blogosphere is coming of age, and I’m doing my annual contemplation of its influence on science.
Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, be it by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors, or introverted nerds who sit all day in their offices, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common – or should I say “used to be quite common”?
Recently I’ve been wondering whether the perception of science that the blogosphere has created isn’t simply replacing the old nerdy stereotype with a new one. Because the scientists who blog are the ones who are most visible, yet not the ones who are actually very representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn’t have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues’ work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don’t you also think they all look a little like Brian Cox?
I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better. Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They’d rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox.
So the funny situation is that my online friends and contacts think it’s odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I’m odd for taking part in all this blogging stuff and so on. I’m not at all sure these worlds are going to converge any time soon.
Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, may that be by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors or introverted nerds who sit all day in their office, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common – or should I say “used to be quite common”?
Recently I’ve been wondering if not the perception of science that the blogosphere has created is replacing the old nerdy stereotype with a new stereotype. Because the scientists who blog are the ones who are most visible, yet not the ones who are actually very representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn’t have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues’ work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don’t you also think they all look a little like Brian Cox?
I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better. Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They’d rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox.
So the funny situation is that my online friends and contacts think it’s odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I’m odd for taking part in all this blogging stuff and so on. I’m not at all sure these worlds are going to converge any time soon.
Sunday, August 19, 2012
Book review: “Why does the world exist?” by Jim Holt
Why Does the World Exist?: An Existential Detective Story
By Jim Holt
Liveright (July 16, 2012)
Yes, I do sometimes wonder why the world exists. I believe, however, that it is not among the questions I am well suited to answer, and thus my enthusiasm is limited. While I am not uninterested in philosophy in principle, I get easily frustrated with people who use words as if they had a meaning that is not a human construct, words that are simply ill-defined unless the humans themselves and their language are explained too.
I don’t seem to agree with Max Tegmark on many points, but I agree that you can’t build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist.
For these reasons I’d never have bought Jim Holt’s book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it.
Holt’s book is essentially a summary of thoughts on the question why there isn’t nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt’s own quest for understanding, which led him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose.
The book is very well written, though Holt has a tendency to list exactly what he ate and drank, when and where, which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of this book than I had expected, though that is balanced by a good sense of humor. Since Holt arranges his narrative along his travels rather than by topic, the book is sometimes repetitive when he reminds the reader of something (e.g. the “landscape”) that was already introduced earlier.
I am very impressed by Holt’s interviews. He has clearly done a lot of thinking of his own about the question. His explanations are open-minded and radiate goodwill, but he is sharp and often critical. In many cases what he says is much more insightful than what his interview partners have to offer.
Holt’s book is a good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit, who hates the “scientizing” of philosophy, and some of his colleagues who believe in “panpsychism”, undeterred by the total lack of scientific evidence.
The reader of the book is also confronted with John Updike, who belabors the miserable state of string theory: “This whole string theory business… There’s never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist,” and with Alex Vilenkin, who has his own definition of “nothing,” which, if you ask me, is a good way to answer the question.
Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof:
“Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can’t be the ethically best and the most evil at the same time.”

Where to even begin? Every second word in this “proof” is undefined. How can one attempt to make an argument along these lines without explaining “ethically best” in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travels, nobody seems to have told Holt that, shockingly, there isn’t only one system of logic, but a whole selection of them.
This book has been very educational for me indeed. Now I know the names of many isms that I do not want to know more about. I hate the idea that I’d have missed this book had it not been for the free copy in my mailbox. That said, to get anything out of this book you need to come to it with an interest in the question already. Do not expect the book to create this interest. But if you bring this interest, you’ll almost surely enjoy reading it.
Wednesday, August 15, 2012
"Rapid streamlined peer-review" and its results
[Image caption: Contains 0% Quantum Gravity.]
"Testing quantum mechanics in non-Minkowski space-time with high power lasers and 4th generation light sources"Note the small volume number, all fresh and innocent.
B. J. B. Crowley et al
Scientific Reports 2, Article number: 491
It's a quite interesting article that calculates the cross-section for photons scattering off electrons that are collectively accelerated by a high-intensity laser. The possibility of testing Unruh radiation in a similar fashion has lately drawn some attention, see, e.g., this paper. But this is explicitly not the setup that the authors of the present paper are after, as they themselves write in the text.
What is remarkable about this paper is the amount of misleading and wrong statements about exactly what it is they are testing and what not. In the title it says they are testing "quantum mechanics in non-Minkowski space-time." What might that mean, I was wondering?
Initially I thought it was another test of space-time non-commutativity, which is why I read the paper in the first place. The first sentence of the abstract reads "A common misperception of quantum gravity is that it requires accessing energies up to the Planck scale of 10^19 GeV, which is unattainable for any conceivable particle collider." Two sentences later, the authors no longer speak of quantum gravity but of "a semiclassical extension of quantum mechanics ... under the assumption of weak gravity." So what's non-Minkowski then? And where's quantum gravity?
What they in fact do in the paper is calculate the effect of the acceleration on the electrons and argue that, via the equivalence principle, this should be equivalent to testing the influence of gravity. (At least locally, though there's not much elaboration on this point in the paper.) Now, strictly speaking we of course never make any experiment in Minkowski space - after all, we sit in a gravitational field. In the same sense we have countless tests of the semi-classical limit of Einstein's field equations. So I read on, still wondering: what is it that they test?
In the first paragraph, the reader then learns that the Newton-Schrödinger equation (which we discussed here) is necessary "to obtain a consistent description of experimental findings," with a reference to Carlip's paper and a paper by Penrose on state reduction. Clearly a misunderstanding, or maybe they didn't actually read the papers they cite. They don't actually use the Schrödinger-Newton equation, however - as I said, there isn't actually a gravitational field in their setup. "We do not concern ourselves with the quantized nature of the gravitational field itself." Fine, no need to quantize what's not there.
Then on page two the reader learns "Our goal is to design an experiment where it may be possible to test some aspects of general relativity..." Okay, so now they're testing neither quantum mechanics nor quantum gravity, nor the Schrödinger-Newton equation, nor semi-classical gravity, but general relativity? Though, since there's no curvature involved, it would be more like testing the equivalence principle, no?
But let's move on. We come across the following sentence: "[T]he most prominent manifestation of quantum gravity is that black holes radiate energy at the universal temperature - the Hawking temperature." Leaving aside that one can debate how "prominent" an effect black hole evaporation is, it's also manifestly wrong. Black hole evaporation is an effect of quantum field theory in curved spacetime. It's not a quantum gravitational effect; that's the exact reason why it's been dissected for decades. The authors then go on to talk about Unruh radiation and make an estimate showing that they are not testing this regime.
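For the record, the standard textbook expressions make the distinction explicit (these are not taken from the paper under discussion):

```latex
% Hawking temperature of a Schwarzschild black hole of mass M
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}
% Unruh temperature for an observer with constant proper acceleration a
T_{\mathrm{U}} = \frac{\hbar a}{2 \pi c k_{\mathrm{B}}}
```

Both results follow from quantum field theory on a fixed background - a curved classical metric in one case, an accelerated frame in flat space in the other. Newton's constant enters only through the classical geometry; no quantization of the gravitational field appears in either derivation.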
Then follows the actual calculation, which, as I said, is in principle interesting. But at the end of the calculation we are informed that this "provid[es], for the first time, a direct way to determine the validity of the models of quantum mechanics in curved space-time, and the specific details of the coupling between classical and quantized fields." Except that there isn't actually any curved space-time in this experiment, unless they mean the gravitational field of the Earth. And the coupling to this has been tested, for example, in this experiment (and in some follow-up experiments), which the authors don't seem to be aware of or at least don't cite. Again, at the very best I think they're proposing to test the equivalence principle.
In the closing paragraph they then completely discard the important qualifiers that the space-time is not actually curved, and that it is at best an indirect test, by claiming that, on the contrary, "[T]he scientific case described in this letter is very compelling and our estimates indicate that a direct test of the semiclassical theory of quantum mechanics in curved space-time will become possible." Emphasis mine.
So, let's see what we have. We started with a test of quantum mechanics in non-Minkowski space, came across some irrelevant mention of quantum gravity, a misplaced reference to the Schrödinger-Newton equation, a claim of testing general relativity in the lab, further irrelevant and also wrong comments about quantum gravity, and ended with direct tests of quantum mechanics in curved space-time. All by looking at a bunch of electrons accelerated in a laser beam. Misleading doesn't even begin to capture it. I can't say I'm very convinced by the quality standards of this new journal.
Sunday, August 12, 2012
What is transformative research and why do we need it?
Since 2007, the US National Science Foundation (NSF) has had an explicit call for “transformative research” in its funding criteria. Transformative research, according to the NSF, is the type of research that can “radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education.” The European Research Council (ERC) calls it “frontier research” and explains that this frontier research is “at the forefront of creating new knowledge[. It] is an intrinsically risky endeavour that involves the pursuit of questions without regard for established disciplinary boundaries or national borders.”
The best way to understand this type of research is that it’s high risk with a potentially high payoff. It’s the type of blue-sky research that is very unlikely to be pursued in for-profit organizations because it might have no tangible outcome for decades. Since one doesn’t actually know whether some research will have a high payoff before it’s been done, one should more aptly call it “Potentially Transformative Research.”
Why do we need it?
If you think of science as an incremental, slow push on the boundaries of knowledge, then transformative research is a jump across the border in the hope of landing on safe ground. Most likely, you’ll jump and drown, or be eaten by dragons. But if you’re lucky and, let’s not forget about that, smart, you might discover a whole new field of science and noticeably redefine the boundaries of knowledge.
The difficulty is of course to find out if the potential benefit justifies the risk. So there needs to be an assessment of both, and a weighting of them against each other.
Most of science is not transformative. Science is, by function, conservative. It conserves the accumulated knowledge and defends it. We need some transformative research to overcome this conservatism, otherwise we’ll get stuck. That’s why the NSF and ERC acknowledge the necessity of high-risk, high-payoff research.
But while it is clear that we need some of it, it’s not a priori clear we need more of it than we already have. Not all research should aspire to be transformative. How do we know we’re too conservative?
The only way to reliably know is to take lots of data over a long time and try to understand where the optimal balance lies. Unfortunately, the type of payoff that we’re talking about might take decades to centuries to appear, so that is, at present, not very feasible.
Lacking this, the only thing we can do is find a good argument for how to move towards the optimal balance.
One way to do this is with measures for scientific success. I think this is the wrong approach. It’s like setting prices in a market economy by calculating them from the product’s properties and future plans. It’s not a good way to aggregate information, and there’s no reason to trust that whoever comes up with the formula for the success measure knows what they’re doing.
The other way is to enable a natural optimization process, much like the way a free market prices goods. Except that in science the goal isn’t to price goods but to distribute researchers over research projects. How many people should optimally work on which research so that their skills are used efficiently and progress is as fast as possible? Most scientists aspire to make good use of their skills and to contribute to progress, so the only thing we need to do is let them follow their interests.
Yes, that’s right. I’m saying the best we can do is trust the experts to find out themselves where their skills are of best use. Of course one needs to provide a useful infrastructure for this to work. Note that this does not mean everybody necessarily works on the topic they’re most interested in, because the more people work on a topic the smaller the chances become that there are significant discoveries for each of them to be made.
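To illustrate the self-optimization idea, here is a toy model of my own devising (the topic names, payoff values, and square-root payoff function are all invented for illustration): if the collective payoff on a topic grows only like the square root of the number of people working on it, then researchers who each join whichever topic currently offers the best per-person payoff will spread themselves over the topics rather than all piling onto the most promising one.

```python
import math

# Invented example: three topics with different intrinsic "discovery value".
topic_value = {"hot topic": 10.0, "niche topic": 4.0, "risky topic": 6.0}
counts = {t: 0 for t in topic_value}

def per_capita(t):
    # Expected payoff per person if one more researcher joins topic t,
    # assuming total payoff on a topic grows like value * sqrt(N).
    n = counts[t] + 1
    return topic_value[t] * math.sqrt(n) / n

# 30 researchers choose sequentially, each maximizing their own payoff.
for _ in range(30):
    best = max(topic_value, key=per_capita)
    counts[best] += 1

print(counts)  # roughly proportional to the squares of the topic values
```

In this sketch no topic ends up empty: as the crowd on the hot topic grows, the per-person chance of a significant discovery falls, and self-interest alone pushes latecomers toward less popular topics, which is exactly the point of the paragraph above.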
The tragedy is of course that this is nothing like how science is organized today. Scientists are not free to choose which problem to use their skills on. Instead, they are subject to all sorts of pressures which prevent the optimal distribution of researchers over projects.
The most obvious pressures are financial and time pressure. Short-term contracts put a large incentive on short-term thinking. Another problem is the difficulty for researchers to change topics, which has the effect that there is a large (generational) time-lag in the population of research fields. Both of these problems cause a trend towards conservative rather than transformative research. Worse: They cause a trend towards conservative rather than transformative thinking and, by selection, a too small ratio of transformative to conservative researchers. This is why we have reason to believe the fraction of transformative research and researchers is presently smaller than optimal.
How can we support potentially transformative research?
The right way to solve this problem is to reduce external pressure on researchers and to ensure the system can self-optimize efficiently. But this is difficult to realize. If that is not possible, one can still try to promote transformative research by other means in the hope of coming closer to the optimal balance. How can one do this?
The first thing that comes to mind is to write transformative research explicitly into the goals of the funding agencies, encourage researchers to propose such projects, and peers to review them favorably. This most likely will not work very well because it doesn’t change anything about the too conservative communities. If you randomly sample a peer-review group for a project, you’re more likely to get conservative opinions just because they’re more common. As a result, transformative research projects are unlikely to be reviewed favorably. It doesn’t matter if you tell people that transformative research is desirable, because they still have to evaluate whether the high risk justifies the potential high payoff. And the assessment of tolerable risk is subjective.
So what can be done?
One thing that can be done is to take a very small sample of reviewers, because the smaller the sample the larger the chance of a statistical fluctuation. Unfortunately, this also increases the risk that nonsense will go through because the reviewers just weren’t in the mood to actually read the proposal. The other thing you can do is to pre-select researchers so you have a subsample with a higher ratio of transformative to conservative researchers.
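A back-of-the-envelope calculation (my own numbers, purely illustrative) shows why panel size matters here. Suppose a fraction p of the community is open to a given transformative proposal, and the proposal gets funded only if a strict majority of a randomly drawn n-member panel is favorable. For p = 0.2 the chance of drawing a favorable majority drops quickly as the panel grows:

```python
from math import comb

def p_majority(n, p):
    # Probability that a strict majority of an n-member panel, drawn at
    # random from the community, is favorable, when a fraction p of the
    # whole community is favorable (exact binomial sum).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Illustrative assumption: 20% of reviewers are open to the proposal.
for n in (1, 3, 5, 9, 15):
    print(f"panel of {n:2d}: funding chance {p_majority(n, 0.2):.3f}")
```

With a single reviewer the chance is 0.2; with three reviewers it is already down to about 0.1, and large panels make a favorable fluctuation exponentially unlikely, which is why shrinking the panel (at the cost of more noise) raises the odds for unconventional proposals.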
This is essentially what FQXi is doing. And, in their research area, they’re doing remarkably well actually. That is to say, if I look at the projects that they fund, I think most of it won’t lead anywhere. And that’s how it should be. On the downside, it’s all short-term projects. The NSF is also trying to exploit preselection in a different form with its new EAGER and CREATIV funding mechanisms, which are not assessed by peers at all but exclusively by NSF staff. In this case the NSF staff is the preselected group. However, I am afraid that this group might be too small to accurately assess the scientific risk. Time will tell.
Putting a focus on transformative research is very difficult for institutions with a local presence. That’s because when it comes to hiring colleagues who you have to get along with, people naturally tend to select those who fit in, both in type of research and in type of personality. This isn’t necessarily a bad thing, as it benefits collaborations, but it can promote homogeneity and lead to “more of the same” research. It takes a constant effort to avoid this trend. It also takes courage and a long-term vision to go for the high-risk, high-payoff research(er), and not many institutions can afford this courage. So here again financial pressure hinders leaps of progress, just because of lacking institutional funding.
It doesn’t help that during the last weeks I had to read that my colleagues in basic research in Canada, the UK and the USA are facing severe budget cuts:
“Of paramount concern for basic scientists [in Canada] is the elimination of the Can$25-million (US$24.6-million) RTI, administered by the Natural Sciences and Engineering Research Council of Canada (NSERC), which funds equipment purchases of Can$7,000–150,000. An accompanying Can$36-million Major Resources Support Program, which funds operations at dozens of experimental-research facilities, will also be axed.” [Source: Nature]
“Hanging over the effective decrease in support proposed by the House of Representatives last week is the ‘sequester’, a pre-programmed budget cut that research advocates say would starve US science-funding agencies.” [Source: Nature]
“[The] Engineering and Physical Sciences Research Council (EPSRC) [is] the government body that holds the biggest public purse for physics, mathematics and engineering research in the United Kingdom. Facing a growing cash squeeze and pressure from the government to demonstrate the economic benefits of research, in 2009 the council's chief executive, David Delpy, embarked on a series of controversial reforms… The changes incensed many physical scientists, who protested that the policy to blacklist grant applicants was draconian. They complained that the EPSRC's decision to exert more control over the fields it funds risked sidelining peer review and would favour short-term, applied research over curiosity-driven, blue-skies work in a way that would be detrimental to British science.” [Source: Nature]

So now more than ever we should make sure that investments in basic research are used efficiently. And one of the most promising ways to do this is presently to enable more potentially transformative research.
Thursday, August 09, 2012
Book review: “Thinking, fast and slow” by Daniel Kahneman
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)
I am always on the lookout for ways to improve my scientific thinking. That’s why I have an interest in the areas of sociology concerned with decision making in groups and how the individual is influenced by this. And this is also why I have an interest in cognitive biases - intuitive judgments that we make without even noticing; judgments which are just fine most of the time but can be scientifically fallacious. Daniel Kahneman’s book “Thinking, fast and slow” is an excellent introduction to the topic.
Kahneman, winner of the Nobel Prize in Economics in 2002, focuses mostly on his own work, but that covers a lot of ground. He starts by distinguishing between two different modes in which we make decisions, a fast and intuitive one and a slow, more deliberate one. Then he explains how fast intuitions lead us astray in certain circumstances.
The human brain does not make very accurate statistical computations without deliberate effort. But often we don’t make such an effort. Instead, we use shortcuts. We substitute questions, extrapolate from available memories, and try to construct plausible and coherent stories. We tend to underestimate uncertainty, are influenced by the way questions are framed, and our intuition is skewed by irrelevant details.
Kahneman quotes and summarizes a large number of studies, in most cases with sample questions. He offers explanations for the results where available, and also points out the limits of present understanding. In the later parts of the book he elaborates on the relevance of these findings about human decision-making for economics. While I had previously come across a good part of the studies he summarizes in the early chapters, their relation to economics had not been very clear to me, and I found this part enlightening. I now understand the problems I've had trying to tell economists that humans do have inconsistent preferences.
The book introduces a lot of terminology, and at the end of each chapter the reader finds a few examples of how to use it in everyday situations. “He likes the project, so he thinks its costs are low and its benefits are high. Nice example of the affect heuristic.” “We are making an additional investment because we do not want to admit failure. This is an instance of the sunk-cost fallacy.” Initially, I found these examples somewhat awkward. But awkward or not, they serve very well for the purpose of putting the terminology in context.
The book is well written, reads smoothly, is well organized, and is thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman’s two most influential papers, which offer somewhat more detail than the summaries in the text. He narrates the story of his own research projects and how they came into being, which I found a little tiresome by the time he elaborated on the third dramatic insight he had about his own cognitive biases. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come about that way.
I have found this book very useful in my effort to understand myself and the world around me. I have only two complaints. One is that despite all the talk about the relevance of proper statistics, Kahneman does not mention the statistical significance of any of the results he discusses. Granted, this is research that started two or three decades ago, so I have little doubt that the effects he describes are by now well established, and, hey, he got a Nobel Prize after all. Yet if it weren’t for that, I’d have to consider the possibility that some of these effects will vanish as statistical artifacts. Second, he never actually explains to the reader the basics of probability theory and Bayesian inference, though he uses them repeatedly. This, unfortunately, dramatically limits the usefulness of the book if you don’t already know how to compute probabilities. It is particularly bad when he gives a terribly vague explanation of correlation. Really, the book would have been so much better with at least an appendix containing some of the relevant definitions and equations.
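To give a flavor of the kind of Bayesian reasoning the book takes for granted, here is a minimal sketch of Bayes' theorem applied to the taxi-cab base-rate problem, a variant of which appears in the book. The function name and numbers here are my own illustration, not Kahneman's presentation:

```python
# Taxi-cab problem: 85% of a city's cabs are Green, 15% are Blue.
# A witness identifies the cab in a hit-and-run as Blue, and testing
# shows the witness is correct 80% of the time. How likely is it that
# the cab really was Blue?
def bayes_posterior(prior, hit_rate, false_alarm_rate):
    """P(hypothesis | evidence) via Bayes' theorem.

    prior:            P(hypothesis), the base rate (here: fraction of Blue cabs)
    hit_rate:         P(evidence | hypothesis), witness says Blue when it is Blue
    false_alarm_rate: P(evidence | not hypothesis), witness says Blue when it is Green
    """
    evidence = prior * hit_rate + (1 - prior) * false_alarm_rate
    return prior * hit_rate / evidence

posterior = bayes_posterior(prior=0.15, hit_rate=0.80, false_alarm_rate=0.20)
print(round(posterior, 2))  # 0.41 -- far below the intuitive answer of 0.80
```

The point of the exercise is exactly the one Kahneman makes: intuition latches onto the witness's 80% reliability and neglects the base rate, while the correct posterior is only about 41%.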
That having been said, if you know a little about statistics you will probably find, as I did, that you’ve learned to avoid at least some of the cognitive biases that deal with explicit ratios and percentages and the different ways of framing these questions. I’ve also found that when it comes to risks and losses my tolerance apparently does not agree with that of the majority of participants in the studies he quotes. I'm not sure why that is. Either way, whether or not you are subject to any specific bias Kahneman writes about, the frequency with which they appear makes them relevant to understanding how human society works, and they also offer a way to improve our decision making.
In summary, it’s a well-written and thoroughly useful book for everybody with an interest in human decision-making and its shortcomings. I'd give it four out of five stars.
Below are some passages I marked that gave me something to think about. They will give you a flavor of what the book is about.
“A reliable way of making people believe in falsehoods is frequent repetition because familiarity is not easily distinguished from truth.”
“[T]he confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness.”
“The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.”
“It is useful to remember […] that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is cost-less is wrong.”
“A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.”
“I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.”
“The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.”
“Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.”
“When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.”
“We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set to be coherent, as they are in the rational-agent model.”
“The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one.”
“Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help.”