The other day I got caught in a conversation about the Royal Institute of Technology and how it deals with value added taxes. After the third round of explanation, I still hadn’t quite understood the Swedish tax regulations. This prompted my conversation partner to remark that Swedish taxes are more complicated than my research.
The only thing I can say in my defense is that in a very real sense taxes are indeed more complicated than quantum gravity.
True, the tax regulations you have to deal with to get through life are more a matter of available information than of understanding. Applying the right rule in the right place requires less knowledge than you need for, say, the singularity theorems in general relativity. In the end taxes are just basic arithmetic manipulations. But what’s the basis of these rules? Where do they come from?
Tax regulations, laws in general, and also social norms have evolved along with our civilizations. They’re the results of a long history of adaptation and selection in a highly complex, partly chaotic system. This result is built on vague concepts like “fairness”, “higher powers”, or “happiness” that depend on context and culture and change with time.
If you think about it too much, the only reason our societies’ laws and norms work is inertia. We just learn how our environment works, and most of us, most of the time, play by the rules. We adapt and slowly change the rules along with our adaptation. But ask where the rules come from or by what principles they evolve, and you’ll have a hard time coming up with a good reason for anything. If you make it more than five whys down the line, I cheer for you.
We don’t have the faintest clue how to explain human civilization. Nobody knows how to derive human rights from the initial conditions of the universe. People in general, and men in particular, with all their worries and desires, their hopes and dreams, do not make much sense to me, fundamentally. I have no clue why we’re here or what we’re here for, and compared to understanding Swedish taxes, quantizing gravity seems like a neat, well-defined, and solvable problem.
Saturday, August 25, 2012
How to beat a cosmic speeding ticket
[Image: xkcd, “The Search”]
As a child I had a (mercifully passing) obsession with science fiction. To this day, contact with extraterrestrial intelligent beings is to me one of the most exciting prospects of technological progress.
I think a plausible explanation for why we have so far not made alien contact is that they use a communication method we have not yet discovered, and if there is any way to communicate faster than the speed of light, that is clearly what they would use. Thus, we should work on building a receiver for the faster-than-light signals! Except, well, that our present theories don’t seem to allow for such signals to begin with.
Every day is a winding road, and after many such days I found myself working on quantum gravity.
So when the review was finally submitted, I thought it was time to come back to superluminal information exchange, which resulted in a paper that’s now published.
The basic idea isn’t so difficult to explain. The reason it is generally believed that nothing can travel faster than the speed of light is that Einstein’s special relativity sets the speed of light as a limit for all matter that we know. The assumptions behind that argument are few, the theory is in extremely good agreement with experiment, and the conclusion is difficult to avoid.
Strictly speaking, special relativity does not forbid faster-than-light propagation. However, since in special relativity a signal moving forward in time faster than the speed of light for one observer can appear to move backwards in time for another observer, such signals can create causal paradoxes.
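To see where the paradoxes come from, here is the standard textbook argument (a generic special-relativity calculation, not specific to any of the approaches below). For a boost with velocity u along the direction of propagation, a signal that covers a distance Δx = wΔt with speed w > c has

```latex
\Delta t' \;=\; \gamma\left(\Delta t - \frac{u\,\Delta x}{c^2}\right)
          \;=\; \gamma\,\Delta t\left(1 - \frac{u\,w}{c^2}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - u^2/c^2}},
```

which is negative for every observer with c²/w < u < c. Such an observer sees the signal arrive before it was sent, and by relaying the answer one can in principle arrange for a reply to reach the sender before the original message left.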
There are three common ways to allow superluminal signaling, and each has its problems:
First, there are wormholes in general relativity, but they generically also lead to causality problems, and it is unclear how one would create them, manipulate them, or send signals through them. I’ve never been a fan of wormholes.
Second, one can just break Lorentz invariance and avoid special relativity altogether. In this case one introduces a preferred frame and observer independence is violated. This avoids causal paradoxes because there is now a distinguished direction “forward” in time. The difficulty here is that special relativity describes our observations extremely well and we have no evidence for Lorentz invariance violation whatsoever, so one then has to explain why we have not noticed such violations before. Many people are working on Lorentz invariance violation, and that by itself limits my enthusiasm.
Third, there are deformations of special relativity which avoid an explicit breaking of Lorentz invariance by changing the Lorentz transformations. In this case the speed of light becomes energy-dependent, so that photons with high energy can, in principle, move arbitrarily fast. Since everybody still agrees that a photon moves forward in time, this does not create causal paradoxes, at least not merely because of the superluminal propagation.
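To give a flavor of what such a deformation can look like, here is a toy first-order modified dispersion relation of the kind used in this literature (an illustration only, not necessarily any specific model). For a massless particle,

```latex
E^2 \;\simeq\; p^2 c^2 \left(1 + \lambda\,\frac{E}{E_{\rm Pl}}\right)
\qquad\Rightarrow\qquad
v \;=\; \frac{\partial E}{\partial p} \;\simeq\; c\left(1 + \lambda\,\frac{E}{E_{\rm Pl}}\right)
\quad \text{to first order in } E/E_{\rm Pl},
```

so for λ > 0 the effective speed grows with energy and can exceed c, while at energies far below the Planck energy E_Pl the deviation is far too small to have been noticed.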
I was quite excited about this possibility for a while, but after some years of back and forth I’ve convinced myself that deformed special relativity creates more problems than it solves. It suffers from various serious difficulties that prevent a recovery of the standard model and general relativity in the suitable limits, notably the problem of multi-particle states and non-locality (which we discussed here).
So, none of these approaches is very promising, and one is really very constrained in the possible options. The symmetry group of Minkowski space is the Lorentz group plus translations. It has one free parameter, and that’s the speed of massless particles. It’s a limiting speed. End of story. There really doesn’t seem to be much wiggle room in that.
Then it occurred to me that it is not actually difficult to allow several different speeds of light to be invariant, as long as one can never measure them at the same time. And that would be the case if particles propagated in a background that is a superposition of Minkowski spaces with different speeds of light, because then each speed of light comes with the Lorentz transformation that belongs to it. In other words, you blow up the Lorentz group to a one-parameter family of groups that acts on a set of spaces with different speeds of light.
The probability for a particle to travel through an eigenspace that does not belong to the measured speed of light has to be small, which is why we wouldn’t have noticed so far. To good precision, the background that we live in must be in an eigenstate, but it might have a small admixture of other speeds, faster and slower. Particles then have a small probability to travel faster than the speed of light through one of these spaces.
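To make the “small admixture” concrete, here is a purely illustrative toy sketch. The faster eigenvalue, its occupation probability, and the all-or-nothing sampling are invented for illustration; they are not the parameterization proposed in the paper.

```python
import random

C = 299_792_458.0   # the measured speed of light, in m/s
C_FAST = 1.5 * C    # a hypothetical faster speed-of-light eigenvalue (made up)
P_FAST = 1e-6       # assumed occupation probability of the faster eigenspace

def fraction_superluminal(n_particles: int = 1_000_000) -> float:
    """Fraction of particles that happen to propagate through the faster eigenspace."""
    hits = sum(1 for _ in range(n_particles) if random.random() < P_FAST)
    return hits / n_particles

if __name__ == "__main__":
    # With P_FAST = 1e-6, roughly one in a million particles arrives early -
    # an effect that could easily have escaped notice so far.
    print(fraction_superluminal())
```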
If you measure a state that was in a superposition, you collapse the wavefunction to one eigenstate, or let us better say it decoheres. This decoherence introduces a preferred frame (the frame of the measurement), which is how causal paradoxes are avoided: there is a notion of forward in time that comes in through the measurement.
In contrast to the case in which Lorentz invariance is violated, though, this preferred frame does not appear on the level of the Lagrangian - it is not fundamentally present. And in contrast to deformations of special relativity, there is no issue here with locality, because two observers never disagree on the paths of two photons with different speeds: instead of there being two different photons, there’s only one, but it’s in a superposition. Once measured, all observers agree on the outcome. So there’s no Box Problem.
That having been said, I found it possible to formulate this idea in the language of quantum field theory. (It wasn’t remotely as straightforward as this summary might make it appear.) In my paper, I then proposed a parameterization of the occupation probability of the different speed-of-light eigenspaces and of the probability for particles to jump from one eigenstate to another upon interaction.
So far so good. Next one would have to look at modifications of standard model cross-sections and see if there is any hope that this theoretical possibility is actually realized in nature.
We still have a long way to go before we can build the cell phone to talk to aliens. But at least we now know that it’s not incompatible with special relativity.
Wednesday, August 22, 2012
How do science blogs change the face of science?
The blogosphere is coming of age, and I’m doing my annual contemplation of its influence on science.
Science blogs of course have an educational mission, and many researchers use them to communicate the enthusiasm they have for their research, be that by discussing their own work or that of colleagues. But blogs were also deemed useful to demonstrate that scientists are not all dusty academics, withdrawn professors or introverted nerds who sit all day in their offices, shielded by piles of books and papers. Physics and engineering are fields where these stereotypes are quite common – or should I say “used to be quite common”?
Recently I’ve been wondering whether the perception of science that the blogosphere has created isn’t simply replacing the old nerdy stereotype with a new one. The scientists who blog are the ones who are most visible, yet they are not exactly representative characters. This leads to the odd situation in which the avid reader of blogs, who otherwise doesn’t have much contact with academia, is left with the idea that scientists are generally interested in communicating their research. They also like to publicly dissect their colleagues’ work. And, judging from the photos they post, they seem to spend a huge amount of time travelling. Not to mention that, well, they all like to write. Don’t you also think they all look a little like Brian Cox?
I find this very ironic. Because the nerdy stereotype for all its inaccuracy still seems to fit better. Many of my colleagues do spend 12 hours a day in their office scribbling away equations on paper or looking for a bug in their code. They’d rather die than publicly comment on anything. Their Facebook accounts are deserted. They think a hashtag is a drug, and the only photo on their iPhone shows that instant when the sunlight fell through the curtains just so that it made a perfect diffraction pattern on the wall. They're neither interested nor able to communicate their research to anybody except their close colleagues. And, needless to say, very few of them have even a remote resemblance to Brian Cox.
So the funny situation is that my online friends and contacts think it’s odd if one of my colleagues is not available on any social networking platform. Do they even exist for real? And my colleagues still think I’m odd for taking part in all this blogging stuff and so on. I’m not at all sure these worlds are going to converge any time soon.
Sunday, August 19, 2012
Book review: “Why does the world exist?” by Jim Holt
Why Does the World Exist?: An Existential Detective Story
By Jim Holt
Liveright (July 16, 2012)
Yes, I do sometimes wonder why the world exists. I believe however it is not among the questions that I am well suited to answer, and thus my enthusiasm is limited. While I am not uninterested in philosophy in principle, I get easily frustrated with people who use words as if they had any meaning that’s not a human construct - words that are simply ill-defined unless the humans themselves and their language are explained too.
I don’t seem to agree with Max Tegmark on many points, but I agree that you can’t build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist.
For these reasons I’d never have bought Jim Holt’s book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it.
Holt’s book is essentially a summary of thoughts on the question of why there isn’t nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt’s own quest for understanding, which led him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose.
The book is very well written, though Holt has a tendency to list exactly what he ate and drank, when and where, which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of this book than I had expected, though that is balanced by a good sense of humor. Since Holt arranges his narrative along his travels rather than by topic, the book is sometimes repetitive when he reminds the reader of something (e.g. the “landscape”) that was already introduced earlier.
I am very impressed by Holt’s interviews. He has clearly done a lot of his own thinking about the question. His explanations are open-minded and radiate goodwill, but he is sharp and often critical. In many cases what he says is much more insightful than what his interview partners have to offer.
Holt’s book is a good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit, who hates the “scientizing” of philosophy, and some of his colleagues who believe in “panpsychism”, undeterred by the total lack of scientific evidence.
The reader is also confronted with John Updike, who belabors the miserable state of string theory: “This whole string theory business… There’s never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist.” And with Alex Vilenkin, who has his own definition of “nothing,” which, if you ask me, is a good way to answer the question.
Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof:
“Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can’t be the ethically best and the most evil at the same time.”
Where to even begin? Every second word in this “proof” is undefined. How can one attempt to make an argument along these lines without explaining “ethically best” in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travels, nobody seems to have told Holt that, shockingly, there isn’t only one system of logic, but a whole selection of them.
This book has been very educational for me indeed. Now I know the names of many isms that I do not want to know more about. I hate the idea that I’d have missed this book if it hadn’t been for the free copy in my mailbox. That having been said, to get anything out of this book you need to come with an interest in the question already. Do not expect the book to create this interest. But if you come with this interest, you’ll almost surely enjoy reading it.
Wednesday, August 15, 2012
"Rapid streamlined peer-review" and its results
[Image caption: Contains 0% Quantum Gravity.]
"Testing quantum mechanics in non-Minkowski space-time with high power lasers and 4th generation light sources"Note the small volume number, all fresh and innocent.
B. J. B. Crowley et al
Scientific Reports 2, Article number: 491
It's a quite interesting article that calculates the cross-section of photons scattering off electrons that are collectively accelerated by a high intensity laser. The possibility of maybe testing Unruh radiation in a similar fashion has lately drawn some attention, see e.g. this paper. But this is explicitly not the setup that the authors of the present paper are after, as they write themselves in the text.
What is remarkable about this paper is the number of misleading and wrong statements about exactly what it is they are testing and what not. The title says they are testing "quantum mechanics in non-Minkowski space-time." What might that mean, I wondered?
Initially I thought it's another test of space-time non-commutativity, which is why I read the paper in the first place. The first sentence of the abstract reads "A common misperception of quantum gravity is that it requires accessing energies up to the Planck scale of 10^19 GeV, which is unattainable for any conceivable particle collider." Two sentences later, the authors no longer speak of quantum gravity but of "a semiclassical extension of quantum mechanics ... under the assumption of weak gravity." So what's non-Minkowski then? And where's quantum gravity?
What they in fact do in the paper is calculate the effect of the acceleration on the electrons and argue that, via the equivalence principle, this should be equivalent to testing the influence of gravity. (At least locally, though there's not much elaboration on this point in the paper.) Now, strictly speaking we never actually perform any experiment in Minkowski space - after all, we sit in a gravitational field. In the same sense we have countless tests of the semi-classical limit of Einstein's field equations. So I read on, still wondering: what is it that they test?
In the first paragraph the reader then learns that the Schrödinger-Newton equation (which we discussed here) is necessary "to obtain a consistent description of experimental findings," with a reference to Carlip's paper and a paper by Penrose on state reduction. Clearly a misunderstanding, or maybe they didn't actually read the papers they cite. They also don't actually use the Schrödinger-Newton equation, however - as I said, there isn't actually a gravitational field in their setup. "We do not concern ourselves with the quantized nature of the gravitational field itself." Fine, no need to quantize what's not there.
Then on page two the reader learns "Our goal is to design an experiment where it may be possible to test some aspects of general relativity..." Okay, so now they're testing neither quantum mechanics nor quantum gravity, nor the Schrödinger-Newton equation, nor semi-classical gravity, but general relativity? Though, since there's no curvature involved, it would be more like testing the equivalence principle, no?
But let's move on. We come across the following sentence: "[T]he most prominent manifestation of quantum gravity is that black holes radiate energy at the universal temperature - the Hawking temperature." Leaving aside that one can debate how "prominent" an effect black hole evaporation is, the statement is also manifestly wrong. Black hole evaporation is an effect of quantum field theory in curved spacetime. It's not a quantum gravitational effect; that's the exact reason why it's been dissected for decades. The authors then go on to talk about Unruh radiation and make an estimate showing that they are not testing this regime.
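For reference, both the Hawking and the Unruh temperature follow from quantum field theory on a fixed classical background; nothing about the gravitational field is quantized in either expression (standard textbook formulas, not taken from the paper):

```latex
T_{\rm H} \;=\; \frac{\hbar\, c^3}{8\pi\, G\, M\, k_{\rm B}},
\qquad
T_{\rm U} \;=\; \frac{\hbar\, a}{2\pi\, c\, k_{\rm B}} .
```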
Then follows the actual calculation, which, as I said, is in principle interesting. But at the end of the calculation we are informed that this "provid[es], for the first time, a direct way to determine the validity of the models of quantum mechanics in curved space-time, and the specific details of the coupling between classical and quantized fields." Except that there isn't actually any curved space-time in this experiment, unless they mean the gravitational field of the Earth. And the coupling to this has been tested, for example, in this experiment (and in some follow-up experiments), which the authors don't seem to be aware of, or at least don't cite. Again, at the very best I think they're proposing to test the equivalence principle.
In the closing paragraph they then completely discard the important qualifiers that the space-time is not actually curved and that this is at best an indirect test, by claiming that, on the contrary, "[T]he scientific case described in this letter is very compelling and our estimates indicate that a direct test of the semiclassical theory of quantum mechanics in curved space-time will become possible." Emphasis mine.
So, let's see what we have. We started with a test of quantum mechanics in non-Minkowski space, came across some irrelevant mention of quantum gravity, a misplaced referral to the Schrödinger-Newton equation, testing general relativity in the lab, and further irrelevant and also wrong comments about quantum gravity, and arrived at direct tests of quantum mechanics in curved space-time. All by looking at a bunch of electrons accelerated in a laser beam. Misleading doesn't even begin to capture it. I can't say I'm very convinced by the quality standards of this new journal.
Sunday, August 12, 2012
What is transformative research and why do we need it?
Since 2007, the US National Science Foundation (NSF) has had an explicit call for “transformative research” in its funding criteria. Transformative research, according to the NSF, is the type of research that can “radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education.” The European Research Council (ERC) calls it “frontier research” and explains that this frontier research is “at the forefront of creating new knowledge[. It] is an intrinsically risky endeavour that involves the pursuit of questions without regard for established disciplinary boundaries or national borders.”
The best way to understand this type of research is that it’s of high risk with a potential high payoff. It’s the type of blue-sky research that is very unlikely to be pursued in for-profit organizations because it might have no tangible outcome for decades. Since one doesn’t actually know if some research has a high payoff before it’s been done, it would be better to call it “Potentially Transformative Research.”
Why do we need it?
If you think of science as an incremental, slow push on the boundaries of knowledge, then transformative research is a jump across the border in the hope of landing on safe ground. Most likely, you’ll jump and drown, or be eaten by dragons. But if you’re lucky and, let’s not forget about that, smart, you might discover a whole new field of science and noticeably redefine the boundaries of knowledge.
The difficulty is of course to find out if the potential benefit justifies the risk. So there needs to be an assessment of both, and a weighting of them against each other.
Most of science is not transformative. Science is, by function, conservative. It conserves the accumulated knowledge and defends it. We need some transformative research to overcome this conservatism, otherwise we’ll get stuck. That’s why the NSF and ERC acknowledge the necessity of high-risk, high-payoff research.
But while it is clear that we need some of it, it’s not a priori clear we need more of it than we already have. Not all research should aspire to be transformative. How do we know we’re too conservative?
The only way to reliably know is to take lots of data over a long time and try to understand where the optimal balance lies. Unfortunately, the type of payoff that we’re talking about might take decades to centuries to appear, so that is, at present, not very feasible.
Lacking that, the only thing we can do is find a good argument for how to move towards the optimal balance.
One way to do this is with measures for scientific success. I think this is the wrong approach. It’s like setting prices in a market economy by calculating them from the product’s properties and future plans. It’s not a good way to aggregate information, and there’s no reason to trust that whoever comes up with the formula for the success measure knows what they’re doing.
The other way is to enable a natural optimization process, much like the free market prices goods. Just that in science the goal isn’t to price goods but to distribute researchers over research projects. How many people should optimally work on which research so their skills are used efficiently and progress is as fast as possible? Most scientists have the aspiration to make good use of their skills and to contribute to progress, so the only thing we need to do is to let them follow their interests.
Yes, that’s right. I’m saying the best we can do is trust the experts to find out themselves where their skills are of best use. Of course one needs to provide a useful infrastructure for this to work. Note that this does not mean everybody necessarily works on the topic they’re most interested in, because the more people work on a topic the smaller the chances become that there are significant discoveries for each of them to be made.
The tragedy is of course that this is nothing like how science is organized today. Scientists are not free to choose which problem to use their skills on. Instead, they are subject to all sorts of pressures which prevent the optimal distribution of researchers over projects.
The most obvious pressures are financial and time pressure. Short-term contracts put a large incentive on short-term thinking. Another problem is the difficulty for researchers to change topics, which has the effect that there is a large (generational) time-lag in the population of research fields. Both of these problems cause a trend towards conservative rather than transformative research. Worse: they cause a trend towards conservative rather than transformative thinking and, by selection, a too small ratio of transformative to conservative researchers. This is why we have reason to believe the fraction of transformative research and researchers is presently smaller than optimal.
How can we support potentially transformative research?
The right way to solve this problem is to reduce external pressure on researchers and to ensure the system can self-optimize efficiently. But this is difficult to realize. If that is not possible, one can still try to promote transformative research by other means in the hope of coming closer to the optimal balance. How can one do this?
The first thing that comes to mind is to write transformative research explicitly into the goals of the funding agencies, to encourage researchers to propose such projects, and to ask peers to review them favorably. This most likely will not work very well because it doesn’t change anything about the overly conservative communities. If you randomly sample a peer review group for a project, you’re more likely to get conservative opinions just because they’re more common. As a result, transformative research projects are unlikely to be reviewed favorably. It doesn’t matter if you tell people that transformative research is desirable, because they still have to evaluate whether the high risk justifies the potential high payoff. And the assessment of tolerable risk is subjective.
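To put a rough number on this (the 20% figure below is invented for illustration): if only a minority of randomly drawn reviewers would rate a potentially transformative proposal favorably, the chance that a panel returns a favorable majority is small, and it shrinks further the larger the panel - which is also relevant for the options discussed next.

```python
from math import comb

def p_favorable_majority(n: int, p: float) -> float:
    """Probability that a random panel of n reviewers returns a strict favorable majority,
    if each reviewer independently rates the proposal favorably with probability p."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

if __name__ == "__main__":
    for n in (1, 3, 5, 9, 15):
        print(n, round(p_favorable_majority(n, p=0.2), 4))
    # 1 0.2, 3 0.104, 5 0.0579, 9 0.0196, 15 0.0042 - the smaller the panel,
    # the larger the chance that a fluctuation lets the proposal through.
```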
So what can be done?
One thing that can be done is to use a very small sample of reviewers, because the smaller the sample, the larger the chance of a statistical fluctuation. Unfortunately, this also increases the risk that nonsense will go through because the reviewers just weren’t in the mood to actually read the proposal. The other thing you can do is to pre-select researchers, so you have a subsample with a higher ratio of transformative to conservative researchers.
This is essentially what FQXi is doing. And, in their research area, they’re doing remarkably well actually. That is to say, if I look at the projects that they fund, I think most of them won’t lead anywhere. And that’s how it should be. On the downside, it’s all short-term projects. The NSF is also trying to exploit preselection in a different form with its new EAGER and CREATIV funding mechanisms, which are not assessed by peers at all but exclusively by NSF staff. In this case the NSF staff is the preselected group. However, I am afraid that the group might be too small to be able to accurately assess the scientific risk. Time will tell.
Putting a focus on transformative research is very difficult for institutions with a local presence. That’s because when it comes to hiring colleagues who you have to get along with, people naturally tend to select those who fit in, both in type of research and in type of personality. This isn’t necessarily a bad thing, as it benefits collaborations, but it can promote homogeneity and lead to “more of the same” research. It takes a constant effort to avoid this trend. It also takes courage and a long-term vision to go for the high-risk, high-payoff research(er), and not many institutions can afford this courage. So here, again, financial pressure hinders leaps of progress just because of lacking institutional funding.
It doesn’t help that during the last weeks I had to read that my colleagues in basic research in Canada, the UK, and also the USA are facing severe budget cuts:
“Of paramount concern for basic scientists [in Canada] is the elimination of the Can$25-million (US$24.6-million) RTI, administered by the Natural Sciences and Engineering Research Council of Canada (NSERC), which funds equipment purchases of Can$7,000–150,000. An accompanying Can$36-million Major Resources Support Program, which funds operations at dozens of experimental-research facilities, will also be axed.” [Source: Nature]
“Hanging over the effective decrease in support proposed by the House of Representatives last week is the ‘sequester’, a pre-programmed budget cut that research advocates say would starve US science-funding agencies.” [Source: Nature]
“[The] Engineering and Physical Sciences Research Council (EPSRC) [is] the government body that holds the biggest public purse for physics, mathematics and engineering research in the United Kingdom. Facing a growing cash squeeze and pressure from the government to demonstrate the economic benefits of research, in 2009 the council's chief executive, David Delpy, embarked on a series of controversial reforms… The changes incensed many physical scientists, who protested that the policy to blacklist grant applicants was draconian. They complained that the EPSRC's decision to exert more control over the fields it funds risked sidelining peer review and would favour short-term, applied research over curiosity-driven, blue-skies work in a way that would be detrimental to British science.” [Source: Nature]

So now more than ever we should make sure that investments in basic research are used efficiently. And one of the most promising ways to do this is presently to enable more potentially transformative research.
Thursday, August 09, 2012
Book review: “Thinking, fast and slow” by Daniel Kahneman
Thinking, Fast and Slow
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)
I am always on the lookout for ways to improve my scientific thinking. That’s why I have an interest in the areas of sociology concerned with decision making in groups and how the individual is influenced by this. And this is also why I have an interest in cognitive biases - intuitive judgments that we make without even noticing; judgments which are just fine most of the time but can be scientifically fallacious. Daniel Kahneman’s book “Thinking, fast and slow” is an excellent introduction to the topic.
Kahneman, winner of the Nobel Prize in Economics in 2002, focuses mostly on his own work, but that covers a lot of ground. He starts by distinguishing between two different modes in which we make decisions, a fast and intuitive one and a slow, more deliberate one. Then he explains how fast intuitions lead us astray in certain circumstances.
The human brain does not make very accurate statistical computations without deliberate effort. But often we don’t make such an effort. Instead, we use shortcuts. We substitute questions, extrapolate from available memories, and try to construct plausible and coherent stories. We tend to underestimate uncertainty, are influenced by the way questions are framed, and our intuition is skewed by irrelevant details.
Kahneman quotes and summarizes a large number of studies, in most cases with sample questions. He offers explanations for the results where available, and also points out where the limits of present understanding are. In the later parts of the book he elaborates on the relevance of these findings about the way humans make decisions for economics. While I had previously come across a large part of the studies that he summarizes in the early chapters, the relation to economics had not been very clear to me, and I found this part enlightening. I now understand my problems trying to tell economists that humans do have inconsistent preferences.
The book introduces a lot of terminology, and at the end of each chapter the reader finds a few examples for how to use it in everyday situations. “He likes the project, so he thinks its costs are low and its benefits are high. Nice example of the affect heuristic.” “We are making an additional investment because we do not want to admit failure. This is an instance of the sunk-cost fallacy.” Initially, I found these examples somewhat awkward. But awkward or not, they serve very well for the purpose of putting the terminology in context.
The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman’s two most influential papers, which contain somewhat more detail than the summary in the text. He narrates along the story of his own research projects and how they came into being, which I found a little tiresome after he had elaborated on the third dramatic insight about his own cognitive biases. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way.
I have found this book very useful in my effort to understand myself and the world around me. I have only two complaints. One is that despite all the talk about the relevance of proper statistics, Kahneman does not mention the statistical significance of any of the results that he talks about. Now, this is all research which started two or three decades ago, so I have little doubt that the effects he talks about are indeed by now well established, and, hey, he got a Nobel Prize after all. Yet, if it weren’t for that, I’d have to consider the possibility that some of these effects will vanish as statistical artifacts. Second, he does not at any point actually explain to the reader the basics of probability theory and Bayesian inference, though he uses them repeatedly. This, unfortunately, limits the usefulness of the book dramatically if you don’t already know how to compute probabilities. It is particularly bad when he gives a terribly vague explanation of correlation. Really, the book would have been so much better if it had at least an appendix with some of the relevant definitions and equations.
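For readers who want the missing basics, here is the kind of base-rate calculation that many of Kahneman’s examples implicitly rely on. The numbers are chosen for illustration and are not taken from the book.

```python
prior = 0.01           # base rate: 1% of people have the condition
sensitivity = 0.90     # P(test positive | condition)
false_positive = 0.05  # P(test positive | no condition)

# Bayes' rule: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # 0.154: even after a positive test the condition is
                            # still unlikely, because the low base rate dominates.
```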
That having been said, if you know a little about statistics you will probably find, as I did, that you’ve learned to avoid at least some of the cognitive biases that deal with explicit ratios and percentages, and with the different ways to frame these questions. I’ve also found that when it comes to risks and losses my tolerance apparently does not agree with that of the majority of participants in the studies he quotes. Not sure why that is. Either way, whether or not you are subject to any specific bias that Kahneman writes about, the frequency with which they appear makes them relevant to understanding the way human society works, and they also offer a way to improve our decision-making.
In summary, it’s a well-written and thoroughly useful book that is interesting for everybody with an interest in human decision-making and its shortcomings. I'd give this book four out of five stars.
Below are some passages that I marked because they gave me something to think about. They will give you a flavor of what the book is about.
“A reliable way of making people believe in falsehoods is frequent repetition because familiarity is not easily distinguished from truth.”
“[T]he confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness.”
“The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.”
“It is useful to remember […] that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is cost-less is wrong.”
“A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.”
“I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.”
“The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.”
“Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.”
“When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.”
“We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set to be coherent, as they are in the rational-agent model.”
“The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one.”
“Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help.”
By Daniel Kahneman
Farrar, Straus and Giroux (October 25, 2011)
I am always on the lookout for ways to improve my scientific thinking. That’s why I have an interest in the areas of sociology concerned with decision making in groups and how the individual is influenced by this. And this is also why I have an interest in cognitive biases - intuitive judgments that we make without even noticing; judgments which are just fine most of the time but can be scientifically fallacious. Daniel Kahneman’s book “Thinking, fast and slow” is an excellent introduction to the topic.
Kahneman, winner of the Nobel Price for Economics in 2002, focuses mostly on his own work, but that covers a lot of ground. He starts with distinguishing between two different modes in which we make decisions, a fast and intuitive one, and a slow, more deliberate one. Then he explains how fast intuitions lead us astray in certain circumstances.
The human brain does not make very accurate statistical computations without deliberate effort. But often we don’t make such an effort. Instead, we use shortcuts. We substitute questions, extrapolate from available memories, and try to construct plausible and coherent stories. We tend to underestimate uncertainty, are influenced by the way questions are framed, and our intuition is skewed by irrelevant details.
Kahneman quotes and summarizes a large amount of studies that have been performed, in most cases with sample questions. He offers explanations for the results when available, and also points out where the limits of present understanding are. In the later parts of the book he elaborates on the relevance of these findings about the way humans make decision for economics. While I had previously come across a big part of the studies that he summarizes in the early chapters, the relation to economics had not been very clear to me, and I found this part enlightening. I now understand my problems trying to tell economists that humans do have inconsistent preferences.
The book introduces a lot of terminology, and at the end of each chapter the reader finds a few examples for how to use them in everyday situations. “He likes the project, so he thinks its costs are low and its benefits are high. Nice example of the affect heuristic.” “We are making an additional investment because we not want to admit failure. This is an instance of the sunk-cost fallacy.” Initially, I found these examples somewhat awkward. But awkward or not, they serve very well for the purpose of putting the terminology in context.
The book is well written, reads smoothly, is well organized, and thoroughly referenced. As a bonus, the appendix contains reprints of Kahneman’s two most influential papers that contain somewhat more details than the summary in the text. He narrates along the story of his own research projects and how they came into being which I found a little tiresome after he elaborated on the third dramatic insight that he had about his own cognitive bias. Or maybe I'm just jealous because a Nobel Prize winning insight in theoretical physics isn't going to come by that way.
I have found this book very useful in my effort to understand myself and the world around me. I have only two complaints. One is that despite all the talk about the relevance of proper statistics, Kahneman does not mention the statistical significance of any of the results that he talks about. Now, this is all research which started two or three decades ago, so I have little doubt that the effects he talks about are indeed meanwhile well established, and, hey, he got a Nobel Prize after all. Yet, if it wasn’t for that I’d have to consider the possibility that some of these effects will vanish as statistical artifacts. Second, he does not at any time actually explain to the reader the basics of probability theory and Bayesian inference, though he uses it repeatedly. This, unfortunately, limits the usefulness of the book dramatically if you don’t already know how to compute probabilities. It is particularly bad when he gives a terribly vague explanation of correlation. Really, the book would have been so much better if it had at least an appendix with some of the relevant definitions and equations.
That having been said, if you know a little about statistics you will probably find, as I did, that you’ve learned to avoid at least some of the cognitive biases that deal with explicit ratios and percentages, and with different ways of framing these questions. I’ve also found that when it comes to risks and losses my tolerance apparently does not agree with that of the majority of participants in the studies he quotes. I'm not sure why that is. Either way, whether or not you are subject to any specific bias that Kahneman writes about, the frequency with which they appear makes them relevant for understanding the way human society works, and they also offer a way to improve our decision making.
In summary, it’s a well-written and thoroughly useful book for everybody with an interest in human decision-making and its shortcomings. I'd give it four out of five stars.
Below are some passages that I marked because they gave me something to think about. They will give you a flavor of what the book is about.
“A reliable way of making people believe in falsehoods is frequent repetition because familiarity is not easily distinguished from truth.”
“[T]he confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness.”
“The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.”
“It is useful to remember […] that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is cost-less is wrong.”
“A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed.”
“I have always believed that scientific research is another domain where a form of optimism is essential to success: I have yet to meet a successful scientist who lacks the ability to exaggerate the importance of what he or she is doing, and I believe that someone who lacks a delusional sense of significance will wilt in the face of repeated experiences of multiple small failures and rare successes, the fate of most researchers.”
“The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.”
“Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.”
“When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.”
“We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly. We have neither the inclination nor the mental resources to enforce consistency on our preferences, and our preferences are not magically set to be coherent, as they are in the rational-agent model.”
“The sunk-cost fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one.”
“Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help.”
Tuesday, August 07, 2012
Why does the baby cry? Fact sheet.
Gloria at 2 months, crying. |
You don’t need a degree to know that a baby cries if she’s unhappy. After a few weeks I had developed a trouble-shooting procedure roughly like this: Does she have a visible reason to be unhappy? Does she stop crying if I pick her up? New diaper? Clothes comfortable? Too warm? Too cold? Is she bored? Is it possible to distract her? Hungry? When I had reached the end of my list I’d start singing. The singing almost always helped. After that, there’s the stroller and white noise and earplugs.
Yes, the baby cries when she’s unhappy, no doubt about that. But both Lara and Gloria would sometimes cry for no apparent reason, or at least no reason that Stefan and I were able to figure out. The crying is distressing for the parents and costs the baby energy. So why, if it’s such an inefficient communication channel, does the baby cry so much? If the baby is trying to tell us something, why haven't hundreds of thousands of years of evolution been sufficient to teach caregivers what it is that she wants? I came up with the following hypotheses:
- A) She doesn’t cry for any reason, it’s just what babies do. I wasn’t very convinced of this because it doesn’t actually explain anything.
- B) She cries so I don’t misplace or forget about her. I wasn’t very convinced of this either because after two months or so, my brain had classified the crying as normal background noise. Also, babies seem to cry so much it overshoots the target: It doesn’t only remind the caregivers, it frustrates them.
- C) It’s a stress-test. If the family can’t cope well, it’s of advantage for future reproductive success of the child if the family breaks up sooner rather than later.
- D) It’s an adaption delay. The baby is evolutionarily trained to expect something other than what it gets in modern western societies. If I’d just treat the baby like my ancestors did, she wouldn’t cry so much.
First, let us clarify what we’re talking about. The crying of human infants changes after about 3 months because the baby learns to make more complex sounds and also becomes more interactive. In the following we’ll only consider the first three months that are most likely to be nature rather than nurture.
Here are some facts about the first three months of baby’s crying that seem to be established pretty well. All references can be found in Soltis’ paper.
- Crying increases until about 6 weeks after birth, followed by a gradual decrease until 3 or 4 months, after which it remains relatively stable. Crying is more frequent in the late afternoon and early evening hours. These crying patterns have been found in studies of very different cultures: the Netherlands, South African hunter-gatherers, the UK, Manila, Denmark, and North America.
- Chimpanzees too have a peak in crying frequency at approximately 6 weeks of life, and a substantial decline in crying frequency by 12 weeks.
- The cries of healthy, non-stressed infants last on the average 0.5-1.5 seconds with a fundamental pitch in the range of 200-600 Hz. The melody is either falling or rising/falling (as opposed to rising, falling/rising or flat).
- Serious illness, both genetic and acquired, is often accompanied by abnormal crying. The most common cry characteristic indicating serious pathology is an unusually high-pitched cry, in one case study above 2000 Hz, and in many other studies exceeding 1500 Hz. (That’s higher than most sopranos can sing.) Examples are: bacterial meningitis 750-1000 Hz, Krabbe’s disease up to 1120 Hz, hypoglycemia up to 1600 Hz. Other abnormal cry patterns that have been found in illness are biphonation (the simultaneous production of two fundamental frequencies), abnormally low pitch, and deviations from the normal cry melodies. (A little sketch of how such a fundamental pitch can be estimated from a recording follows after this list.)
- Various studies have been conducted to find out how well adults are able to tell the reason for a baby’s cry by playing them previously recorded cries. These studies show mothers are a little bit better than random chance when given a predefined selection of choices (eg pain, anger, other, in one study), but by and large mothers as well as other adults are pretty bad at figuring out the reason for a baby’s cry. Without being given categories, participants tend to attribute all cries to hunger.
- It has been reported in several papers that parents described a baby’s crying as the most proximate cause triggering abuse and infanticide. It has also been shown that especially the high-pitched baby cries produce a response of the autonomic nervous system, measurable for example by the heart rate or skin conductance (the response is higher than for smiling babies). It has also been shown that abusers exhibit higher autonomic responses to high-pitched cries than non-abusers.
- Excessive infant crying is the most common clinical complaint of mothers with infants under three months of age.
- Excessive infant crying that begins and ends without warning is called “colic.” It is often attributed to organic disorders, but if the baby has no other symptoms it is estimated that only 5-10% of “colic” cases go back to an organic disorder, the most common one being lactose intolerance. If the baby has other symptoms (flexed legs, spasm, bloating, diarrhea), the fraction with an organic disorder goes up to 45%. The rest cry for unknown reasons. Colic usually improves by 4 months, or so they tell you. (Lara’s didn’t improve until she was 6 months. Gloria never had any.)
- Colic is correlated with postpartum depression which is in turn robustly associated with reduced maternal care.
- Records and media reports kept by the National Center on Shaken Baby Syndrome implicate crying as the most common trigger.
- In a survey among US mothers, more infant crying was associated with lower levels of perceived infant health, more worry about baby’s health, and less positive emotion towards the infant.
- Some crying bouts are demonstrably unsoothable to typical caregiving responses in the first three months. Well, somebody has to do these studies.
- In studies of nurses judging infant pain, the audible cry was mostly redundant to facial activity in the judgment of pain.
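As promised above, here is a minimal sketch of what estimating the fundamental pitch of a cry could look like. The autocorrelation method is standard, but the synthetic test signal, the sampling rate, and the 1000 Hz warning threshold are my own illustrative choices, not taken from the studies:

```python
import numpy as np

def fundamental_pitch(frame, sample_rate, fmin=150.0, fmax=2500.0):
    """Crude fundamental-frequency estimate of a short audio frame:
    find the autocorrelation peak between the lags corresponding to
    fmax (shortest period) and fmin (longest period)."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)   # shortest period considered
    lag_max = int(sample_rate / fmin)   # longest period considered
    best_lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / best_lag

# Synthetic stand-in for a 0.5 s cry: 450 Hz fundamental plus a harmonic.
sample_rate = 16000
t = np.arange(int(0.5 * sample_rate)) / sample_rate
cry = np.sin(2 * np.pi * 450 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)

f0 = fundamental_pitch(cry, sample_rate)
print(f"estimated fundamental pitch: {f0:.0f} Hz")
print("unusually high" if f0 > 1000 else "within the typical 200-600 Hz range")
```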
So much for the facts. Now to the hypotheses that have been put forward for why the baby cries, and how they hold up against these facts:
- Honest signal of need. The baby cries if and only if she needs or wants something, and she cries to alert the caregivers of that need. This hypothesis is not well supported by the facts. The baby’s cries are demonstrably inefficient at bringing the baby the care it allegedly needs, because caregivers don’t know what she wants and in many cases there doesn’t seem to be anything they can do about it. This is the scientific equivalent of my hypothesis D, which I found not so convincing.
- Signal of vigor. This hypothesis says that the baby cries to show she’s healthy. The more the baby cries (in the “healthy” pitch and melody range), the stronger she is and the more the mother should care, because it’s a good investment of her attention to raise offspring that’s likely to reproduce successfully. Unfortunately, there’s no evidence linking a high amount of crying to good health of the child. On the contrary, as mentioned above, parents perceive children as more sickly if they cry more, which is exactly the opposite of what the baby allegedly “wants” to signal. Also, lots of crying is apparently maladaptive according to the evidence listed above, because it can provoke violence against the child. And it’s unclear why a child that isn’t seriously sick, merely not so vigorous, should alert the caregivers to his lack of vigor and thereby invite neglect. It doesn’t seem to make much sense. This is the scientific equivalent of my hypothesis B, which I didn’t find very convincing either.
- Graded signal of distress. The baby cries if she’s in distress, and the more distress, the more she cries. This hypothesis is, at least as far as pain is concerned, supported by evidence. Pretty much everybody seems to agree on that. As mentioned above however, while distress leads to crying, this leaves open the question of why the baby is in distress to begin with and why she cries if caregivers can’t do anything about it. Thus, while this hypothesis is the least controversial one, it’s also the one with the smallest explanatory value.
- Manipulation: The baby cries so mommy feeds her as often as possible. Breastfeeding stimulates the production of the hormone prolactin; prolactin inhibits estrogen production, which often (though not always) keeps the estrogen level below the threshold necessary for the menstrual cycle to set in. This is called lactational amenorrhea. In other words, the more the baby gets mommy to feed her, the smaller the probability that a younger sibling will compete for resources, thus improving the baby’s own well-being. The problem with this hypothesis is that it would predict the crying to increase when the mother’s body has recovered, some months after birth, and is in shape to carry another child. Instead, at this time babies cry less rather than more. (It also seems to say that having siblings is a disadvantage to one’s own reproductive success, which is quite a bold statement in my opinion.)
- Thermoregulatory assistance. An infant’s thermoregulation is not very well developed, which is why you have to be so careful to wrap them warm when it’s cold and to keep them in the shade when it’s hot. According to this hypothesis the baby cries to make herself warm and also to alert the mother that it needs assistance with thermoregulation. It’s an interesting hypothesis that I hadn’t heard of before and it doesn’t seem to have been much studied. I would expect however that in this case the amount of crying depends on the external temperature, and I haven’t come across any evidence for that.
- Inadequacy of central arousal. The infant’s brain needs a certain level of arousal for proper development. The baby starts crying if not enough is going on, to upset herself and her parents. If there’s any factual evidence speaking for this, I don’t know of it; it seems to be a very young hypothesis. I’m also not sure how it is compatible with my observation that Lara would usually fall asleep after excessive crying, frequently in the middle of a cry, and that excitement (people, travel, noise) was a cause for crying too.
- Underdeveloped circadian rhythm. The infant’s sleep-wake cycle is very different from an adult’s. Young babies basically don’t differentiate night from day. It’s only at around two to three months that they start sleeping through the night and develop a daily rhythm. According to this hypothesis it’s the underdeveloped circadian rhythm that causes the baby distress, probably because certain brain areas are not well synched with other daily variations. This makes a certain sense because it offers a possible explanation for the daily return of crying bouts in the late afternoon, and also for why they fade when the babies sleep through the night. This too is a very young hypothesis that is waiting for good evidence.
- Behavioral state. The baby’s mind knows three states: Sleep, awake, and crying. It’s a very minimalistic hypothesis, but I’m not sure it explains anything. This is the scientific equivalent of my hypothesis A, the baby just cries.
So if your baby is crying and you don’t know why, don’t worry. Even scientists who have spent their whole career on this question don’t actually know why the baby cries.
Sunday, August 05, 2012
Erdös and amphetamines: check
Some weeks ago I wrote a review on Jonah Lehrer's book "Imagine," in which I complained about missing references. Now that it turns out Lehrer fabricated quotes and facts on various occasions (see eg here and here), I recalled that I meant to look up a reference on an interesting story he told, that the famous mathematician Paul Erdös kept up his productivity by taking benzedrine. Benzedrine belongs to the amphetamines, also known as speed. Lehrer did not quote any source for this story.
So I did look it up, and it turns out it's true. In Paul Hoffman's biography of Erdös one finds:
Erdös first did mathematics at the age of three, but for the last twenty-five years of his life, since the death of his mother, he put in nineteen-hour days, keeping himself fortified with 10 to 20 milligrams of Benzedrine or Ritalin, strong espresso, and caffeine tablets. "A mathematician," Erdös was fond of saying, "is a machine for turning coffee into theorems." When friends urged him to slow down, he always had the same response: "There'll be plenty of time to rest in the grave."
(You can read chapter 1 from the book, which contains this paragraph, here.)
Benzedrine was available on prescription in the USA during this time. Erdös lived to the age of 83. During his lifetime, he wrote or co-authored 1,475 academic papers.
Lehrer also relates the following story in his book:
Ron Graham, a friend and fellow mathematician, once bet Erdos five hundred dollars that he couldn't abstain from amphetamines for thirty days. Erdos won the wager but complained that the progress of mathematicians had been set back by a month: "Before, when I looked at a piece of blank paper, my mind was filled with ideas," he complained. "Now all I see is a blank piece of paper."
(Omitted umlauts are Lehrer's, not mine.) Lehrer does not mention that Erdös was originally prescribed benzedrine to treat depression after his mother's death. I'm not sure exactly what the origin of this story is. It is mentioned in a slightly different wording in this PDF by Joshua Hill:
Erdős's friends worried about his drug use, and in 1979 Graham bet Erdős $500 that he couldn't stop taking amphetamines for a month. Erdős accepted, and went cold turkey for a complete month. Erdős's comment at the end of the month was "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." He then immediately started taking amphetamines again.
Hill's article is not quoted by Lehrer, and there's no reference in Hill's article. It also seems to go back to Paul Hoffman's book (same chapter).
(Note added: I revised the above paragraph, because I hadn't originally seen it in Hoffman's book.)
Partly related: Calculate your Erdős number here, mine is 4.
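Since the Erdős number is nothing but the distance to Erdős in the co-authorship graph, a toy breadth-first search shows how such a calculator works in principle. The graph and names below are made up purely for illustration:

```python
from collections import deque

def erdos_number(coauthors, start, root="Erdős"):
    """Shortest chain of co-authorships from `start` to Erdős,
    found by breadth-first search; returns None if there is no chain."""
    if start == root:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, distance = queue.popleft()
        for other in coauthors.get(person, ()):
            if other == root:
                return distance + 1
            if other not in seen:
                seen.add(other)
                queue.append((other, distance + 1))
    return None

# Made-up toy co-authorship graph, just to show the idea:
graph = {
    "Me": ["Alice"],
    "Alice": ["Me", "Bob"],
    "Bob": ["Alice", "Erdős"],
}
print(erdos_number(graph, "Me"))  # 3
```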
Friday, August 03, 2012
Interna
Lara and Gloria are presently very difficult. They have learned to climb the chairs and upwards from there; I constantly have to pick them off the furniture. Yesterday, I turned my back on them for a second, and when I looked again Lara was sitting on the table, happily pulling a string of Kleenex out of the box, while Gloria was moving away the chair Lara had used to climb up.
During the last month, the girls have added a few more words to their vocabulary. The one that's most obvious to understand is "lallelalle," which is supposed to mean "empty," and is usually a message to me to refill the apple juice. Gloria has also taken a liking to the word "Haar" (hair), and she's been saying "Goya" for a while, which I believe means "Gloria". Or maybe yogurt. They both can identify most body parts if you name them. Saying "feet" will make them grab their feet, "nose" will have them point at their nose, and so on. If Gloria wants to make a joke, she'll grab her sister's nose instead. Gloria also announces that she needs a new diaper by patting her behind, alas after the fact.
I meanwhile am stuck in proposal writing again. The organization of the conference in October and the program in November is going nicely, and I'm very much looking forward to both events. My recent paper was accepted for publication in Foundations of Physics, and I've wrapped up another project that had been sitting in my drawer for a while. Besides this, I've spent some time reading up on the history of Nordita, which is actually quite interesting; maybe I'll have a post on this at some point.
I finally said good bye to my BlackBerry and now have an iPhone, which works so amazingly smoothly I'm deeply impressed.
Below a little video of the girls that I took the other day. YouTube is offering a fix for shaky videos, which is why you might see the borders moving around.
I hope your summer is going nicely and that you have some time to relax!
Wednesday, August 01, 2012
Letter of recommendation 2.0
I am currently reading Daniel Kahneman’s book “Thinking, Fast and Slow,” which summarizes a truly amazing number of studies. Among many other cognitive biases, Kahneman explains that it is difficult for people to accept that algorithms based on statistical data often produce better predictions than experts do. This is difficult to accept even when one is shown evidence that the algorithm is better. He cites many examples of this, among them forecasting the future success of military personnel, the quality of wine, or the treatment of patients.
The reason, Kahneman explains, is that humans are not as efficient at screening and aggregating data as software is. Humans are prone to miss details, especially if the data is noisy; they get tired, or fall for various cognitive biases in their interpretation of the data. Generally, the human brain does not effortlessly engage in Bayesian inference. In combination with its tendency to save energy and effort, this leads to mistakes. Humans are especially bad at making summary judgements of complex information, Kahneman writes, while at the same time being overly confident about the accuracy of their judgement. One of his examples: “Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions.”
Interestingly however, Kahneman also cites evidence that expert intuition can be very valuable, provided the expert’s judgement concerns a situation where learning from experience is possible. (Expert judgement is an illusion when a data series is entirely uncorrelated.) He thus suggests that judgements should be based on an analysis of statistical data from past performance, combined with expert intuition. We should overcome our dislike of statistical measures, he writes: “to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments” (that is, when prediction is difficult because a large number of factors is relevant).
This made me question my own objections to using measures for scientific success, since scientific success is exactly the kind of prediction that is very difficult to make because luck plays a big role. Part of my dislike arguably stems from a general unease about leaving decisions over people’s futures to a computer. While that is the case, and probably part of the reason I don’t like the idea, it’s not the actual problem I have belabored in my earlier blogposts. For me the main problem with using measures for scientific success is that I’d like to see evidence that they actually work and do not adversely affect research. I am particularly worried that a widely used measure for scientific success would literally redefine what we mean by success in the first place. A small mistake, implemented and streamlined globally, could in this way dramatically slow down progress.
But based on what Kahneman writes, I am now wondering whether I have to conclude that, in addition to asking for letters of recommendation (the “expert intuition”), it would be valuable to judge researchers’ past performance on a point scale. Consider that you’d be asked to fill out a questionnaire for each of your students and postdocs, ranking him or her from 0 to 5 on those characteristics typically named in letters: technical skills, independence, creativity, and so on, and also adding your confidence in these judgements. You could update your scores if your opinion changes. What a hiring committee would do with these scores is a different question entirely.
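To make this concrete, here is a minimal sketch of what such a questionnaire record and a simple equal-weight aggregate, of the kind Kahneman argues often does as well as expert judgement, might look like. The trait names, the 0-5 scale, the confidence field, and the candidate are my own illustrative assumptions, not an existing system:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Evaluation:
    """One advisor's scores for one researcher: 0-5 per trait,
    plus the advisor's confidence (0-1) in each score."""
    researcher: str
    scores: dict[str, int]
    confidence: dict[str, float]

def equal_weight_score(ev: Evaluation) -> float:
    """Simple equal-weight formula: just average the trait scores.
    The confidences could later be used to down-weight uncertain scores."""
    return mean(ev.scores.values())

ev = Evaluation(
    researcher="A. Postdoc",  # hypothetical candidate
    scores={"technical skills": 4, "independence": 3, "creativity": 5},
    confidence={"technical skills": 0.9, "independence": 0.6, "creativity": 0.7},
)
print(f"{ev.researcher}: {equal_weight_score(ev):.2f} / 5")
```

Whether such an equal-weight score actually predicts anything is, of course, exactly the empirical question the database below would have to answer.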
The benefit of this would be the assembly of a database needed to discover predictors of future performance, if they exist. The difficulty is that the experts in question rarely offer a neutral judgement; many have a personal interest in seeing their students succeed, so there needs to be some incentive for accuracy. The risk would be that such a predictor might become a self-fulfilling prophecy. At least until a reality check documents that actually, despite all the honors, prizes, and awards, very little has happened in terms of actual progress.
Either way, now that I think about it, such a ranking would be temptingly useful for hiring committees that want to sort through large numbers of applicants quickly. I wouldn’t be surprised if somebody tries it rather sooner than later. Would you welcome it?