- An Auction Market for Journal Articles
By Jens Prufer and David Zetland
Public Choice (2010) 145: 379-403
The authors are two economists and the above article proposes an improvement to the current publication system in academia. They propose to introduce a virtual currency, the "Academic Dollar" (A$), that would be traded among editors, authors, and reviewers and create incentives for each involved party to improve the quality of articles.
The idea of measuring scientific quality by one single parameter, currency in a market economy, is not new. It has been proposed before, in various forms, to rate scientific papers or ideas by monetary value. The problem with this is twofold. First, the scientific community is global and incomes differ greatly from one institution to the next. If money influenced the rating of scientific quality, the largest influence would rest with the wealthiest institutions of the wealthy nations. Second, market economies deal very poorly with intangible, long-term, public benefits, which is exactly why most basic research is tax-funded. It is thus questionable whether a neo-liberal reformation of academic culture would be beneficial.
The introduction of an Academic Dollar that could be exchanged according to its own rules circumvents these problems, so it is an interesting idea. Prufer and Zetland motivate their study as follows:
"The [auction market for journal articles] quantifies academic output through A$ income, and academics need an accurate measure now more than ever. Long ago, decisions on professional advancement depended on subjective factors. These were replaced over time by "objective" factors such as publication or citation counts. As publication has grown more important, the number of submitted papers has increased... [T]he multiplication of titles has made measurement (and professional decisions) more difficult. Neither tenure candidates nor committees are happy with current evaluation methods; they need a simple indicator."
In more detail, what the authors suggest is the following: The scientist writes a paper and submits it to a journal auction market where editors bid for the papers. The winning bid gets the permission to send the paper to peer review. If it passes peer review satisfactorily, and the editor decides to publish it, the bid in A$ goes to the authors, editors, and referees of the articles that are cited in the auctioned paper.
Let me repeat this so you don't miss the relevant part: the A$ does not go to the author; it goes to the authors, editors, and referees of the cited articles. Authors and referees are obliged to reassign their A$ to any editor they choose within one year, which closes the circle.
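The money flow described above can be sketched in a few lines of code. This is my own illustration, not the authors': all names and numbers are made up, and the equal split of the winning bid among recipients is an assumption for the sketch, since the paper leaves the exact split open.

```python
# A toy sketch of the A$ settlement after a journal auction,
# following the description in Prufer and Zetland's proposal.
# Names, numbers, and the equal-split rule are illustrative assumptions.

def settle_auction(winning_bid, cited_articles, balances):
    """Distribute the winning bid (in A$) among the authors, editors,
    and referees of the articles cited by the auctioned paper.
    The submitting author of the new paper receives nothing."""
    recipients = [person
                  for article in cited_articles
                  for person in (article["authors"]
                                 + article["editors"]
                                 + article["referees"])]
    share = winning_bid / len(recipients)  # equal split: my assumption
    for person in recipients:
        balances[person] = balances.get(person, 0) + share
    return balances

# Two cited articles; editor E1 handled both, so E1 is paid twice.
cited = [
    {"authors": ["A1"],       "editors": ["E1"], "referees": ["R1", "R2"]},
    {"authors": ["A2", "A3"], "editors": ["E1"], "referees": ["R3"]},
]
balances = settle_auction(70, cited, {})
```

Note what the sketch makes obvious: the submitting author never appears in `balances`, and whoever is cited often (or edited many cited papers, like E1 here) accumulates A$ fastest.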
The vision is that
"It is a simple step to sum an individual's A$ income... to get an accurate signal of academic productivity. This signal could facilitate decisions on tenure, promotion, grants, and so on."

Five questions that sprang to my mind immediately:
First, I know plenty of researchers who strongly dislike certain journals and refuse to work with them. The authors address this point, if I understood it correctly, with a "handicap" that the scientist can place on certain journals, which would prevent an editor of these journals from bidding, or make it more difficult.
Second, what about self-citations? They write they just wouldn't count them.
Third, where does the A$ come from, and who decides who gets what? This is addressed in the article with a single bracketed sentence: "The initial allocation of A$ may be in proportion to subscribers, citations, impact factor, or some other variable." I am not sure that will be sufficient. There will be a loss of A$ from people who don't care to 'reassign' them, for example because they are leaving academia, and a further decrease of the available A$ per person simply because the number of scientists is increasing.
Fourth, if the A$ is worth real money because it is relevant for tenure decisions and grants, somebody who has no need for the virtual money will go and trade it for real money. In other words, there'll be a black market for A$, not to mention the problem of smart people hacking the software. The authors write that "The fixed supply of A$, reallocation norm and trading costs are likely to limit the importance of cash in an A$ black market." I think they'd be surprised.
Fifth, what about editors who are also authors? Are they supposed to keep two separate A$ accounts and not mingle them? I couldn't find anything in the paper about this, but I suppose it can be addressed somehow.
Prufer and Zetland have added to their paper a calculation of Pareto efficiency to show that their proposal is beneficial for everybody involved. For this, they assume that the quality of a scientific article is a single-valued universal parameter whose optimization is as well-defined as finding the most cost-efficient way to run a factory.
But my biggest problem with the authors' proposal is one that we have discussed previously on this blog (for example here). Any universal measure streamlines the way research is pursued. Since such a measure is at best a rough estimate of long-term success, it amplifies behavior that optimizes currently fashionable metrics rather than contributing to scientific knowledge in the first place. It might save hiring committees time in the short run, but it will cost the community much more time in the long run.
I have preached it many times, and here it is once again: there is no substitute for scientists' judgement. There is no shortcut, and there is no universal measure that could improve upon or replace this individual and, yes, fallible judgement. The assessment of quality and potential impact, possibly centuries into the future, would, if you really wanted to parameterize it, lie in a very high-dimensional space with very many continuous parameters. If one projects these opinions onto a one-dimensional axis, the universal measure, one inevitably loses information, and the optimization becomes dependent on the choice of measure and thus, ultimately, ambiguous and questionable in its use. At the very least, we should make sure there are several projections and several criteria for what constitutes an "optimal" scientist.
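To make the projection argument concrete, here is a toy example of my own, with made-up dimensions and numbers: two scientists whose ranking flips depending on which weights the one-dimensional measure assigns to each quality dimension.

```python
# Illustration (mine, not from the article): collapsing a
# multi-dimensional quality assessment to a single score makes
# the ranking depend entirely on the chosen projection (weights).

# Hypothetical quality vectors: (originality, citations, teaching)
scientists = {
    "X": (9, 2, 5),
    "Y": (3, 8, 5),
}

def score(weights):
    """Project each quality vector onto one axis via a weighted sum."""
    return {name: sum(w * v for w, v in zip(weights, vals))
            for name, vals in scientists.items()}

originality_first = score((1.0, 0.1, 0.1))  # X outscores Y
citations_first   = score((0.1, 1.0, 0.1))  # Y outscores X
```

Both projections are "objective" in the sense of being reproducible arithmetic, yet they disagree on who the better scientist is, which is exactly the ambiguity a single universal measure hides.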
The trend towards simple measures is nothing but a way to delegate responsibility for decisions until they are diluted enough that one can just go and blame an anonymous "system."
It is far from my intention to make fun of serious and well worked-out proposals to address the shortcomings of the current academic system, and I find this one a good try. This proposal, however, has serious shortcomings itself, and it would make a good example of Verschlimmbesserung, the German word for an "improvement" that makes things worse ;op