I have discussed on this blog many times the differences and similarities between the "Marketplace of Ideas" and the free marketplace of products. The most relevant difference is the property the system is supposed to optimize. For our economies it is profit, and - if you believe the standard theory - this ideally results in the most efficient use of resources. One can debate how well the details work, but by and large it has indeed worked remarkably well. In the academic system, however, the property to optimize is "good research" - a vague notion with subjective value. Before nature's judgement on a research proposal is available, what does or doesn't constitute good research is fluid and determined by the scientific community, which is also the first consumer of that research. Problems occur when one tries to impose fixed criteria for the quality of research, some measure of success. Such criteria set incentives that can only divert the process of scientific discovery (or invention?) from its original goal.
That is, as I see it, the main problem: setting the wrong incentives. Here, I want to focus on a particular example, that of accountability and advance planning. In many areas of science, projects can be planned ahead and laid out in advance in detail that will please funding agencies. But everybody who works in fundamental research knows that attempting the same in this area is a complete farce. You don't know where your research will take you. You might have an idea of where to start, but then you'll have to see what you find. Forced to come up with a 3-year, 5-point plan, some researchers, I've found, apply for grants after a project has already been finished but not yet published, and then spend the grant on what is actually their next project. Of course this reduces the whole system ad absurdum, and few can afford the luxury of delaying publication.
The side-effect of such 3-year pre-planned grants is that researchers adapt to the requirements and learn to think in 3-year pre-plannable projects. Talk about setting incentives. The rest is good old natural selection. The same is true for the 2- or 3-year postdoc positions that just this month thousands of promising young researchers are applying for. If you sow short-term commitment, you reap short-term thinking. And that is disastrous for fundamental research, because the questions we really need answers to will remain untouched, except by those courageous few scientists who willingly risk their future.
Let us look at where the trends are going: The share of researchers in the USA holding faculty positions 7 years after obtaining their degree has dropped from 90% in 1973 to 60% in 2006 (NSF statistics, see figure below). The share of full-time faculty declined from 88% in the early 1970s to 72% in 2006. Meanwhile, postdocs and others in full-time nonfaculty positions constitute an increasing percentage of those doing research at academic institutions, having grown from 13% in 1973 to 27% in 2006.
The American Association of University Professors (AAUP) has compiled similar data showing the same trend, see the figure below depicting the share of tenured (black), tenure-track (grey), non-tenured (stripes) and part-time (dots) faculty for the years 1975, 1989, 1995 and 2007 [source] (click to enlarge).
In their summary of the situation, the AAUP does not mince words: "The past four decades have seen a failure of the social contract in faculty employment... Today the tenure system [in the USA] has all but collapsed... the majority of faculty work in subprofessional conditions, often without basic protections for academic freedom."
In their report, the AAUP is more concerned with the quality of teaching, but these numbers also mean that more and more research is done by people on temporary contracts, who at the time they start their job already have to think about applying for the next one. Been there, done that. And I am afraid this shift of weight towards short-term thinking will have disastrous consequences for the fundamental research that gets accomplished, if it doesn't already.
In the context of setting wrong incentives and short-term thinking another interesting piece of data is Pierre Azoulay et al's study
- Incentives and Creativity: Evidence from the Academic Life Sciences
By Pierre Azoulay, Joshua S. Graff Zivin, Gustavo Manso
In their paper, the authors compared the success of researchers in the life sciences funded under two different programs: the Howard Hughes Medical Institute (HHMI), which "tolerates early failure, rewards long-term success, and gives its appointees great freedom to experiment," and the National Institutes of Health (NIH), with "short review cycles, pre-defined deliverables, and renewal policies unforgiving of failure." Of course the interpretation of the results depends on how appropriate you find the measure used for scientific success, the number of high-impact papers produced under the grant. Nevertheless, I find it telling that, after suitably adjusting for the researchers' average qualification, the HHMI program, which funds 5 years with good chances of renewal, produces more high-impact output than the NIH's 3-year grants.
And speaking of telling tales, let me quote for you from the introduction of Azoulay et al's paper which contains the following nice anecdote:
"In 1980, a scientist from the University of Utah, Mario Capecchi, applied for a grant at the National Institutes of Health (NIH). The application contained three projects. The NIH peer-reviewers liked the first two projects, which were building on Capecchi's past research efforts, but they were unanimously negative in their appraisal of the third project, in which he proposed to develop gene targeting in mammalian cells. They deemed the probability that the newly introduced DNA would ever find its matching sequence within the host genome vanishingly small, and the experiments not worthy of pursuit.
The NIH funded the grant despite this misgiving, but strongly recommended that Capecchi drop the third project. In his retelling of the story, the scientist writes that despite this unambiguous advice, he chose to put almost all his efforts into the third project: "It was a big gamble. Had I failed to obtain strong supporting data within the designated time frame, our NIH funding would have come to an abrupt end and we would not be talking about gene targeting today." Fortunately, within four years, Capecchi and his team obtained strong evidence for the feasibility of gene targeting in mammalian cells, and in 1984 the grant was renewed enthusiastically. Dispelling any doubt that he had misinterpreted the feedback from reviewers in 1980, the critique for the 1984 competitive renewal started, "We are glad that you didn't follow our advice."
The story does not stop there. In September 2007, Capecchi shared the Nobel prize for developing the techniques to make knockout mice with Oliver Smithies and Martin Evans. Such mice have allowed scientists to learn the roles of thousands of mammalian genes and provided laboratory models of human afflictions in which to test potential therapies."