[I was reminded that I’ve promised repeatedly to continue the previous posts Science and Democracy I and II. To my own surprise I found an almost finished draft about the danger of using marketplace tactics in scientific research, and I added some recent comments from the blogosphere to underline my arguments.]

I vividly recall the first thing my supervisor told me when I was an undergrad:
"You have to learn how to sell yourself." Since then I have repeatedly been given well-meant career advice on how to survive in the scientific marketplace (most of which I ignored, but I’m still around, so I guess I’m not doing too badly).
Many of my friends and colleagues in physics regard these marketplace tactics as an annoying but necessary part of the job. To begin with, this concerns me because I feel there is a gap between how science is and how it should be – and an unnecessary gap, at that. But more importantly, the application of economic considerations to scientific research is inappropriate, and the reason I did not take this career advice is that I don’t want to support strategies that hinder scientific progress in the long run. So, if you were hoping for some career advice, you're on the wrong blog.
Though comparisons between science and economics often contain a grain of truth, they are doomed to fail when extended naively. Whether or not you believe in the infallibility of the 'Invisible Hand' [1], scientific theories are not sold like candy bars. If one uses an economic model to analyze the dynamics of research programs, one has to be aware of the limitations of this analogy.
I: The Marketplace of Ideas
The 'marketplace of ideas' is often claimed to act as a self-regulating mechanism that ensures progress in science. The claim rests on the belief that all scientific theories, when made accessible to the public, compete freely with each other until it eventually turns out which idea best describes nature.
Being an optimist, I have no doubt that this works if one looks at the history of mankind over centuries, when nature is the ultimate judge of our scientific endeavors. However, there is no reason to believe that it automatically works on shorter time scales as well. On a time scale where we do not have nature to judge (typically a couple of years, maybe up to decades - that is how long grants and employments last), the scientific community is its own judge. The obvious difference to the economic marketplace is that we do not offer our ideas ‘for sale’ to a neutral target group whose buying decisions determine whether our product is a success or a failure.
Ia: The Measurement Problem
No, we are selling our theories inside our own community. And our demand for products can easily be biased if the competitive pressure is high. The situation is significantly worsened by increasing specialization into many sub-fields and a lack of communication between these fields. Needless to say, the genuine enthusiasm that researchers have for their own field does not improve neutral judgement. If you want to use the analogy to the economic marketplace here, you’d have to expect it to work even if products were only sold to company managers
[2].
This difficulty of finding criteria to judge research programs might not have been a major issue in previous decades, but it becomes increasingly important if
a) the community grows into a complex system whose dynamics is little understood (e.g. the increasing influence of 'fashionable topics' is a typical sign of a non-linear feedback effect; the emergence of sub-fields with their own group dynamics is a sign of self-organization),
b) changes in the sociological, cultural and technological context require adjustment of criteria,
c) financial and peer pressure endangers neutral judgement,
d) time scales are set through (inappropriate) external constraints.
In other words, it’s a 21st century issue. With increasing complexity, we are left with a decreasing number of people who have an overview of the whole ‘marketplace’. Few people are funded independently of their projects, so their opinions are biased in favor of their own research. Under such circumstances, the ‘marketplace of ideas’ will eventually result in a small number of approaches caught in feedback loops of increasing separation.
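The non-linear feedback behind 'fashionable topics' can be made concrete with a toy simulation. The following sketch (an illustration only, not a model of real funding dynamics; all parameter names and values are invented) lets new researchers pick a topic with probability proportional to a power of its current popularity. With an exponent above one, small initial fluctuations get amplified and most researchers end up concentrated in a few topics:

```python
import random

def simulate_marketplace(n_topics=5, n_rounds=200, feedback=1.5, seed=42):
    """Each round, one new researcher picks a topic with probability
    proportional to (current popularity)**feedback. A feedback exponent
    above 1 models the non-linear amplification of fashionable topics:
    popular topics attract newcomers disproportionately."""
    random.seed(seed)
    popularity = [1.0] * n_topics  # every topic starts with one supporter
    for _ in range(n_rounds):
        weights = [p ** feedback for p in popularity]
        winner = random.choices(range(n_topics), weights=weights)[0]
        popularity[winner] += 1
    return popularity
```

Comparing the result for `feedback=1.0` (plain proportional growth) with `feedback=1.5` shows how quickly the distribution over topics becomes lopsided once the feedback is non-linear - exactly the kind of dynamics that is "little understood" in point a) above.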
As Thomas Dent remarked
at 4:27 PM, June 29, 2006:
It should be obvious that there is no theoretical physics analog of capitalism or the free market. In capitalism there is profit which can be measured objectively, and the one who can make profit wins, the one who cannot must get out of the game. […] Since there is no objective measure of success in theoretical physics, there can never be a free market. Simple.
I would add: there is no such obvious measure on the time scale relevant for funding today. But obvious or not, measures have to be applied, and
are applied, and the least we can do is to choose them wisely. Being scientists, it should not be so hard for us to find out how science works best, and to analyze whether the current conditions are optimal.
Ib: Primary Goals and Secondary Criteria
The primary goal – to support the most promising approaches and researchers – is of little help when you are
faced with a 3-inch pile of application documents. Instead, one commonly uses derived secondary criteria that have been shown to be useful. There is nothing objectionable about this procedure, except that the validity of secondary criteria has to be readjusted every now and then. In a time like ours, when the sociological and technological environment changes rapidly, neglecting to question and re-adjust the applied secondary criteria can result in misleading feedback effects and sub-optimal selection processes.
The best known examples might be the citation index and the number of publications. These criteria are of course correlated with the originality and quality of the research, but whenever possible, one should ask whether the primary goals are met.
(I am not telling you this because it is something new, or because I think people on hiring committees are stupid, but to make the matter less abstract.) Other secondary criteria that have grown important over the last decades are e.g. previous employment at well-known institutions, or classifiable work on mainstream topics.
There is an obvious danger in rewarding only those who meet secondary requirements. If these criteria do not exactly match the primary goals, one promotes tactics that are sub-optimal for scientific progress but optimal for career building (see section 'Survival of the Fittest'). Combine that with the non-linear feedback effects in complex systems, and things can easily go seriously wrong.
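The divergence between secondary criteria and primary goals can also be illustrated with a toy model (again only a sketch with invented numbers, not a claim about any real hiring process). Candidates are ranked by a proxy score - think publication count - which they can inflate by diverting effort away from actual research quality. Selecting the top of the proxy ranking then selects, in part, for the ability to game the metric:

```python
import random

def mean_quality_of_selected(gaming_pressure, n=2000, seed=7):
    """Toy model of a secondary criterion diverging from the primary goal.
    Each candidate has a true quality and a visible proxy score; the top
    10% by proxy are 'hired'. 'gaming_pressure' is the maximum effort a
    candidate diverts from real research into inflating the proxy.
    Returns the mean true quality of the selected group."""
    random.seed(seed)
    pool = []
    for _ in range(n):
        talent = random.gauss(0, 1)
        gaming = random.uniform(0, gaming_pressure)  # effort spent on metric-optimization
        quality = talent - 0.5 * gaming              # that effort is lost to real quality...
        proxy = quality + random.gauss(0, 0.5) + 2 * gaming  # ...but inflates the visible score
        pool.append((proxy, quality))
    top = sorted(pool, reverse=True)[: n // 10]
    return sum(q for _, q in top) / len(top)
```

In this setup the mean true quality of the selected group drops as gaming pressure rises, even though the selection rule itself never changed - the proxy simply stopped tracking the primary goal.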
An issue related to the necessary re-adjustment of secondary criteria to primary goals is guaranteeing fairness on the marketplace. The 'Invisible Hand' always needs to be balanced by politics to ensure the marketplace is really 'free' -- this is one of the earliest lessons we have learned from industrialization. If we want the ‘marketplace of ideas’ to optimize progress in science, we have to ensure that every idea gets a chance, irrespective of its origin [3] - the origin of an idea plays no role for the question of whether it is worth supporting.
Ic: Risky Research
Another important point is that supporting risky start-ups is one of the most relevant factors for progress. Unfortunately, this factor is severely neglected by present funding strategies. Sound familiar? Okay, okay, it’s not my idea:
"Do you want a revolution in science? Do what businesspeople do when they want a technological revolution: Just change the rules a bit […] Create some opportunities for high-risk/high-payoff people […] The technological companies and investment banks use this strategy. Why not try it in academia? The payoff could be discovering how the universe works."
~ Lee Smolin, The Trouble with Physics (p. 331)
Risk aversion is a rather unsurprising consequence of the insecurity caused by a lack of communication in a community falling apart into sub-fields. It is also fostered by chronically scarce resources (if we hire anybody, then someone who works on what I find interesting), by short-term funding (it takes time to work out new ideas; for more info see e.g. Temporary Display and the comments to this post), and by falling for the derived secondary criterion 'If she's interested in what I do, she must be intelligent.' In the absence of a final judgement by nature on our approaches, it is very short-sighted to discard alternative options. However convinced I am of my own research project, I always have to acknowledge the possibility that it turns out to be a dead end. As Albert Einstein said so nicely:
"Mathematics are well and good but nature keeps dragging us around by the nose." In
the London debate, Nancy Cartwright underlined the need to keep doors open by referring to
J.S. Mill's essay 'On Liberty'. She argues that in the absence of nature's judgement, the smart thing to do is not to prematurely discard alternative options:
"We need to allow as much liberty as possible (for people in designing their lives) because we don't know what is the best way (to live). And that's in part because we don't know what are all the good alternatives to chose from."
II. Product Placement
Presenting our research results to colleagues is an essential part of our job, in written form as well as in seminars, talks, and discussions. Clearly presented arguments and well-structured seminars are definitely beneficial to progress. However, as with many things in life, it is a matter of balance. Advertisement should not become more important than content. The most entertaining presentation cannot make an idea better than it is, and scientific arguments have to remain as honest as possible – even if this means drawing attention to the flaws of the product.
For example, Thomas Larsson remarked
at 5:53 AM, March 17, 2007:
There are some good things about the string/LQG conflict, though. Without it, I would not know about the limits of the string black hole prediction, nor would I know that LQG quantization does not work for the harmonic oscillator.
work, and thereby illustrates a problem that arises when researchers feel the pressure to advertise their own work. Today, many praise their results in a rather unbalanced way – not because they don’t know better, but because they have to compete with a large number of people. If you put out a paper and don’t have a prominent co-author, a catchy title and exaggerated claims are the way to get others to read it. This tactic is okay for fish sales on Mediterranean markets, but it is very dangerous to the standard of scientific research. It leads to rather uncritical status reports in which problems are either not mentioned or downplayed (and if this shortcoming is pointed out, the author will claim that the problem is obvious and widely known).
Whether published articles are balanced crucially depends on the referee process
[4]. One could say a lot about peer review, but to say the least, it doesn’t always work as it should, and many reports are not as objective as they ought to be. An example I have repeatedly witnessed myself: when it comes to numerical simulations, it is common practice to point out where the model fits the data very well, and simply not to mention the problematic observables. Most often, numerical simulations are hard to check, even if the code is available, and the not-so-good results just don’t get published.
Though this is not, strictly speaking, wrong, it is just not good practice, as it is exactly the understanding of an approach's failures that could lead to improvement. However, those scientists who elaborate on difficulties and drawbacks risk being perceived as negative, or maybe just not exciting enough, and cause problems for themselves (and probably get the well-meant career advice to sell themselves better). Here we have another gap between what would be beneficial for scientific progress (primary goal: understand the model) and what is beneficial for the scientific career (secondary goal: hide bugs or declare them a feature).
Now that I think about it, why not include a blurb paragraph in papers with warnings. Like ‘Possible side effects might include decaying vacua, ghost fields and tachyons.’ Or ‘Do not use this model in Lorentzian signature, or after the electroweak phase transition. If you consider using it in more than three dimensions, or together with matter fields, please consult a doctor.’
:-) I know, I’m being silly. I apologize, it is far easier to retreat to sarcasm than to come up with constructive criticism.
Where was I?
Uhm, this is another example of where marketplace tactics fail in scientific research. We don’t want to sell our theories to as many people as possible and optimize the citation index; we want to optimize the quality and usefulness of publications.
Another excellent example of how advertisement can promote scientific nonsense when secondary criteria (here: holding patents) are in conflict with primary goals (here: quality of research) can be found in the post Micro Black Day.
III. Survival of the Fittest
The survival of the fittest is another catchy phrase (often used by those who profit from the current system) to claim that a natural selection process ensures progress in scientific research. The irony is that those who argue this way actually explain why the system fails.
Survival of the fittest doesn’t mean survival of the strongest, the best looking, or the most intelligent. It means, literally, that the survivors are those who ‘fit best’: those who adopt the behaviour that minimizes existential conflict with their environment.
Now ask yourself, what is this environment in the context of scientific research? Well, it is our own community, with the selection criteria that we apply. If these criteria are not optimal for scientific progress, we have not only the possibility but the duty to change them!
The optimization implied in the ‘survival of the fittest’ crucially depends on the environment and the available resources. Whether you like the subtitle of Lee's book or not, he makes the important point that we have to ask how science works best – how it works now and here, how it works in this century, in this sociological and cultural environment – and whether the presently applied selection criteria are indeed optimal for progress. Whether the fitness that we reward is actually the fitness that we need. Whether our secondary criteria agree with the primary goals.
We have to blame ourselves if we accept the current conditions even though we know they are not optimal.
Amara Graps, at 2:51 AM, March 10, 2007:
One reason why the current system has been going on for so long is that scientists are a mild-mannered bunch and are passionate about their work. They are prone to self-abuse to pursue those passions too, being willing to absorb the most degrading conditions.
Repeatedly, I have met colleagues who agree that the situation sucks, but they shrug their shoulders and say that’s just the marketplace. Where does it come from, this belief that passivity is a guarantee for progress?
amused: Mar 17th, 2007 at 1:17 am
Of course, that’s hardly a new point in these discussions, and the standard response is to shrug ones shoulders and say “oh well, that’s just market forces”. Which is true, but it’s also relevant to ask whether it is in the best interests of physics. Hopefully it’s not too controversial to suggest that the interests of physics in the long term are best served by ensuring as much as possible that jobs go to the “best” people, regardless of their preferred research topics.
We are scientists. We should be able to analyze the present situation, and to draw conclusions. Science is not coming to an end if we fail to meet the challenges that the increasing complexity of our field has brought. But we run the risk of reproducing the failures of the economic marketplace: bubbles of nothing that are a waste of time, money and energy.
If left unattended, the naïve belief that the marketplace will make things right ‘somehow’ can seriously hinder progress. An approach that nature might have supported can fail too early – because it wasn’t advertised well enough, or because the capital investment was simply insufficient to allow it to compete.
Bottomline
There are important differences between the economic and the scientific marketplace, the most obvious ones being the absence of a neutral measure (like profit) and the pitfalls of advertisement.
Currently, the ‘marketplace of ideas’ works anything but optimally. Times have changed rapidly, and our community has grown significantly. These changes need to be reflected in our organizational structures as well, or we run the risk of getting stuck in a dead end.
And it is easy enough to improve the situation:
- Question and doubt. Ask yourself whether the realized strategies are optimal for scientific progress, and if you don't think so, don't shrug your shoulders. Don't accept criteria you have been taught are right without taking into account that times have changed.
- Analyze. Peer pressure, intense competition, scarce resources, project-dependent funding and short-term employment favour mainstream, conservative and low-risk work. Be aware of that. Remind yourself and your colleagues that 'good physics has to be open, critical, and responsive'. Research has shown that simply reminding people to think rationally influences their decisions.
- Trust yourself. Don't work on topics that you don't genuinely believe are relevant just because you fear for your reputation. If such work is unavoidable, criticise it - even if you are defeated, you make a contribution to science. (Hey - I told you, you're not getting career advice on this blog.)
Hmmm...
It seems this piece got quite lengthy...
One could write books about it...
Footnote [1]: The 'Invisible Hand' was introduced by Adam Smith in his book 'The Wealth of Nations' (1776) to describe the self-regulation of the marketplace. From Wikipedia: Many economists claim that the theory of the Invisible Hand states that if each consumer is allowed to choose freely what to buy and each producer is allowed to choose freely what to sell and how to produce it, the market will settle on a product distribution and prices that are beneficial to the entire community. Adam Smith already pointed out that the Invisible Hand's regulation mechanism alone does not guarantee the well-being of society and needs to be balanced by governmental guidance: "[...] uniformity of [the employee's] stationary life naturally corrupts the courage of his mind [..] His dexterity at his own particular trade seems, in this manner, to be acquired at the expence of his intellectual, social, and martial virtues. But in every improved and civilized society this is the state into which [...] the great body of the people must necessarily fall, unless government takes some pains to prevent it." Nevertheless, this metaphor is often abused to praise the merits of capitalism without giving sufficient credit to its limitations. [Back]

Footnote [2]: In addition, there is also the question of how our research is presented to the public - who, after all, pays us to explore the frontiers of our knowledge. This is an important point on its own but should not be mixed up with the question of how the community selects promising researchers and research programs. Most people are well aware that it requires an appropriate education to judge the value of very recent developments, and will rely on experts' opinions for good reason. The public is neither dumb nor ignorant. I welcome it very much that in the last decades - maybe starting with Hawking’s Brief History of Time - theoretical physics has become more accessible to the public.
The resulting discussions of our research among non-experts are regarded by some scientists with concern and skepticism. I am sure it is only a matter of time until our community gets used to this attention and learns how to deal with this kind of feedback. I myself am perfectly sure this communication is inspiring for both sides - and one of the reasons why I maintain this blog. [Back]

Footnote [3]: To give a concrete example, research papers should not be judged by who their author is. Researchers should not be selected because of the institutions they have connections to, or their country of origin. Conference invitations should not be extended to famous people for the sole reason that their name attracts interest - a scientific conference is not a rock concert. [Back]

Footnote [4]: At the very least, one should be careful enough to use 'could' instead of 'does' and 'might' instead of 'will'. You can learn about the importance of weasel words here, in case you followed this discussion about this paper. [Back]
TAGS: SCIENCE,
DEMOCRACY,
SCIENCE AND SOCIETY