Wednesday, January 18, 2012

The Academic Dollar

I didn't know whether to laugh or to cry when I read this article:

The authors are two economists and the above article proposes an improvement to the current publication system in academia. They propose to introduce a virtual currency, the "Academic Dollar" (A$), that would be traded among editors, authors, and reviewers and create incentives for each involved party to improve the quality of articles.

The idea of measuring scientific quality by a single parameter, currency in a market economy, is not new. It has been proposed before, in various forms, to rate scientific papers or ideas by monetary value. The problem with this is twofold. First, the scientific community is global and incomes differ greatly from one institution to the next. If money influenced the rating of scientific quality, the largest influence would rest with the wealthiest institutions of the wealthiest nations. Second, market economies deal very poorly with intangible, long-term, public benefits, which is exactly why most basic research is tax-funded. It is thus questionable that a neo-liberal reformation of academic culture would be beneficial.

The introduction of an Academic Dollar that could be exchanged according to its own rules circumvents these problems, so it is an interesting idea. Prufer and Zetland motivate their study as follows:
"The [auction market for journal articles] quantifies academic output through A$ income, and academics need an accurate measure now more than ever. Long ago, decisions on professional advancement depended on subjective factors. These were replaced over time by "objective" factors such as publication or citation counts. As publication has grown more important, the number of submitted papers has increased... [T]he multiplication of titles has made measurement (and professional decisions) more difficult. Neither tenure candidates nor committees are happy with current evaluation methods; they need a simple indicator."

In more detail, what the authors suggest is the following: The scientist writes a paper and submits it to a journal auction market where editors bid for the papers. The winning bidder gets permission to send the paper to peer review. If it passes peer review satisfactorily, and the editor decides to publish it, the bid in A$ goes to the authors, editors, and referees of the articles that are cited in the auctioned paper.

Let me repeat this so you don't miss the relevant part: the A$ does not go to the author, it goes to the authors, editors and referees of the cited articles. Authors and referees are obliged to reassign their A$ to any editor they choose within one year to close the circle.
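As I understand it, one round of this settlement rule could be sketched roughly as follows. This is purely my illustration; the names, numbers, and the `settle_auction` function are invented, and the paper specifies no such mechanism in code.

```python
# Illustrative sketch of one round in the proposed A$ auction market.
# All names and numbers are hypothetical; this is not from the paper.

def settle_auction(winning_bid, cited_papers):
    """Split the editor's winning bid equally among the authors, editors,
    and referees of the papers cited by the auctioned manuscript.
    The submitting author of the new paper receives nothing directly."""
    payees = []
    for paper in cited_papers:
        payees.extend(paper["authors"] + paper["editors"] + paper["referees"])
    share = winning_bid / len(payees)
    balances = {}
    for person in payees:
        balances[person] = balances.get(person, 0.0) + share
    return balances

# Two cited papers; editor E1 handled both, so E1 collects two shares.
cited = [
    {"authors": ["A1"], "editors": ["E1"], "referees": ["R1", "R2"]},
    {"authors": ["A2", "A3"], "editors": ["E1"], "referees": ["R3"]},
]
print(settle_auction(80.0, cited))
```

Note how the entire bid flows backwards along the citation graph; within a year, the authors and referees holding these balances would then have to reassign them to editors of their choice.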

The vision is that
"It is a simple step to sum an individual's A$ income... to get an accurate signal of academic productivity. This signal could facilitate decisions on tenure, promotion, grants, and so on."
Five questions that sprang to my mind immediately:

First, I know plenty of researchers who have strong dislikes of certain journals and refuse to work with them. The authors address this point, if I understood correctly, with a "handicap" that the scientist can put on certain journals, which would disable bids from editors of these journals or make such bids more difficult.

Second, what about self-citations? They write they just wouldn't count them.

Third, where does the A$ come from and who decides who gets what? This is addressed in the article with one bracketed sentence "The initial allocation of A$ may be in proportion to subscribers, citations, impact factor, or some other variable." I am not sure that will be sufficient. There will be a loss of A$ from people who don't care to 'reassign them' for example because they are leaving academia, and a further decrease of the available A$ per person just because the number of scientists is increasing.

Fourth, if the A$ is worth real money because it is relevant for tenure decisions and grants, somebody who has no need for the virtual money will go and trade it for real money. In other words, there'll be a black market for A$, not to mention the problem of smart people hacking the software. The authors write that "The fixed supply of A$, reallocation norm and trading costs are likely to limit the importance of cash in an A$ black market." I think they'd be surprised.

Fifth, what about editors who are also authors? Are they supposed to keep two separate accounts of A$ and not mingle them? I couldn't find anything in the paper about this, but I suppose it can be addressed somehow.

Prufer and Zetland have added to their paper a calculation of Pareto efficiency, to show that their proposal is beneficial for everybody involved. For this, they have assumed that the quality of a scientific article is a single-valued universal parameter whose optimization is as well-defined as optimizing the most cost-efficient way to run a factory.

But my biggest problem with the authors' proposal is one that we have discussed previously at this blog (for example here). Any measure that is universal streamlines the way research is pursued. Since such a measure is at best a rough estimate of long-term success, it amplifies behavior that optimizes currently fashionable measures rather than behavior that primarily contributes to scientific knowledge. It might save hiring committees time in the short run, but it will cost the community much more time in the long run.

I have preached it many times, and here it is once again: There is no substitute for scientists' judgement. There is no shortcut and there is no universal measure that could improve or replace this individual and, yes, fallible judgement. The individual assessment of quality and potential impact, possibly centuries into the future, if you really wanted to parameterize it, would lie in a very high-dimensional space whose dimensions represent very many continuous parameters. If one attempts to project these opinions onto a one-dimensional axis, the universal measure, one inevitably loses information, and optimization becomes dependent on the choice of measure and thus, ultimately, ambiguous and questionable in its use. At the very least, we should make sure there are several projections and several criteria for what constitutes an "optimal" scientist.

The trend towards the use of simple measures is nothing but a way to delegate responsibility for decisions, until it is diluted enough that one can just go and blame an anonymous "system."

It is far from my intention to make fun of serious and well worked-out proposals to improve the shortcomings of the current academic system, and I find this one a good try. This proposal, however, has serious shortcomings itself, and would make a good example of Verschlimmbesserung, a German word for an attempted improvement that makes things worse ;op

22 comments:

Unknown said...

Why don't people just muddle along? Why all this classification, quality control etc? Why don't every one post on some archive like arxiv (albeit one with a powerful tag and search system) and leave it at that? Why is there even a need for glory? Why don't we just wallow in solipsistic self-satisfaction about having contributed to growth of knowledge? Shouldn't science be on a higher plane of human behaviour, away from all the humdrum struggle for existence?

Uncle Al said...

The money is in the vigorish. Arbitrageurs get wealthy, re carbon credits. Producers are flensed.

"Shouldn't science be on a higher plane of human behaviour" Marketing gets free eats, Sales gets commissions, scientists get fired. The cow provides milk not profits.

http://www.youtube.com/watch?v=CCZRXW-pFcE

1) Management obsesses on what is measurable instead of promoting what is important. 2) Management is rewarded for enforcing process not creating product. 3) Management kills the future, for the only trusted employee is one whose sole marketable asset is loyalty. 4) Anybody who criticizes is thereby proven unqualified to comment.

Eric said...

It seems to me the academic dollar is just another attempt by the know-nothings in this world to understand how to value things. There are so many people that live just on the surface of this world. Unfortunately these people who understand nothing also don't understand how stupid they are. It makes things very hard for everybody else because we all end up spending most of our time just preventing them from dragging us backwards.

It would be nice to just have some sort of foolproof test, sort of like what they want to inflict on the rest of us, and banish them to their own island where only blowhards are allowed to reside. I can't think of a more fitting punishment than for each of these people to be around others like themselves.

Bee said...

Hi Unknown,

Because there isn't enough money (real) to go around. There are more people who want to work in academic research than there are positions. So one has to select somehow. The question is what's the best way to do that. Best,

B.

Bee said...

Hi Eric,

It is difficult for some people to come to terms with the fact that many academics are primarily interested in their research, and not in making money. Most scientists would earn much more if they left academia and put their brains to use elsewhere, and if you believe that all we do is optimize profit, that doesn't make any sense.

In any case, as far as the above paper is concerned, at least they do acknowledge that academia plays by its own rules and that making money the main incentive is not a good idea. They just don't really take into account the loss of differentiated judgement if one oversimplifies scientific quality. Best,

B.

Giotis said...

These are internal issues, but if I may say so, I don't see the point here. Each university/institution hires and promotes people according to its own criteria and policies. The preconditions are many and diverse. I don't see how a universal measure could apply, and moreover why it should apply.

Bee said...

Well, apparently the authors believe that academics would like to have a simple universal measure so their lives are easier. There was a reference between the sentences "Neither tenure candidates nor committees are happy with current evaluation methods;" and "they need a simple indicator." that I didn't quote: Varian, H.R. (1997). I don't know the paper.

Giotis said...

BTW Bee, in industry we call these measures KPIs (Key Performance Indicators), but they have local, not universal, meaning. They are imposed by management to measure performance. You are right though, they never reflect the actual quality of the work. More often than not, people end up chasing the KPI, sometimes at the expense of real quality.

Bee said...

Hi Giotis,

That's interesting, I didn't know that. I guess one of the problems is that one can't measure quality without disturbing the system, because people pay attention to how you're measuring it. Best,

B.

Kay zum Felde said...

Bee, I agree with you. Still, the best brains to comment on a paper are the referees and the editor. No computer program can match them, even if the referees and the editor fail from time to time.

Take care Kay

Eric said...

Hi Bee,
Off topic - it looks like there is an ebook textbook alternative in the pipeline that may work for your needs. See here.

Bee said...

Hi Eric,

Thanks, I will have an eye on that! Best,

B.

Phillip Helbig said...

"It is thus questionable that a neo-liberal reformation of academic culture would be beneficial."

It is questionable that any neo-liberal reformation of anything would be beneficial.

Phillip Helbig said...

"Why don't people just muddle along? Why all this classification, quality control etc? Why don't every one post on some archive like arxiv (albeit one with a powerful tag and search system) and leave it at that? Why is there even a need for glory? Why don't we just wallow in solipsistic self-satisfaction about having contributed to growth of knowledge? Shouldn't science be on a higher plane of human behaviour, away from all the humdrum struggle for existence? "

Yes, we can do this, with the result that only people who are independently wealthy will be able to contribute. A huge loss for science. There are two alternatives. The first is that people are paid according to their scientific output (however it is measured, which is a separate question). The other is to pay everyone who claims to be a scientist. Once word gets out, everyone will claim to be a scientist and, since funds are limited, each one will get a few cents. Thus, again, science will be hurt.

Phil Warnell said...

Hi Bee,

I must agree with your criticism of this proposal: if quality were totally measurable in any kind of currency, why do we have so much difficulty fostering it more generally? That is, any monetary system is based upon the balance of risk against gain, and the pursuit of currency immediately becomes the focus rather than the product(s) it's meant to value.

The existence of money does little to assure the search for excellence, only the pursuit of money. One would think that this would all have become abundantly clear with the state of the world as it's now found, as currency has served only to expose the weaknesses of humanity as opposed to being able to motivate it effectively in regards to the pursuit of excellence. So whether it's academic dollars, greenbacks, Euros or Yen, money only measures what we have effectively taken from others, while offering little to indicate what we have given.

”Innovation has nothing to do with how many R & D dollars you have. When Apple came up with the Mac, IBM was spending at least 100 times more on R & D. It's not about money. It's about the people you have, how you're led, and how much you get it.”

-Steve Jobs

Best,

Phil

Bee said...

Hi Phillip,

A neo-liberal reformation of the Stockholm apartment market sounds like a really, really good idea to me. Best,

B.

Phillip Helbig said...

"A neo-liberal reformation of the Stockholm apartment market sounds like a really, really good idea to me."

Have you looked at the cost of an apartment in London? Since public transportation is relatively cheap in Sweden, you could live outside of the city and commute, saving money for rent and having more space. Time in the train is not lost.

Bee said...

I do live outside the city. The problem isn't primarily the cost, the problem is that it's almost impossible to get an apartment for rent if you're an immigrant. The proper Swedish way to do it is apparently to get on a 15 year long waiting list. You can buy, if you have the money, alas, I don't.

Uncle Al said...

Bee,

An M3 solar flare hits the atmosphere Jan. 21st 22:30 UTC (+/-) 7 hrs. Away from city lights it should be a high latitude sky show! ISS FUBAR asstronaughts are safe. NASA radiation exposure limit is 3000mSv/year, 300 REM/yr. National Council on Radiation Protection 1998.

This is 60 times the limit for nuclear workers (5 REM/yr whole body) and 3000 times the limit for civilian industrial exposure (0.1 REM/yr whole body). US Code of Federal Regulations Title 10, part 20. One Homeland Severity radiation chamber scan blows the civilian annual limit for skin, eye, and reproductive organ exposure.

Human acute lethal full-body dose is 400 - 500 rem. An X-class solar flare can deliver 1100 REM through an aluminum hull. Let's go to Mars! Where are my $A?

EliRabett said...

How the hell do the editors know what to bid? And if they do, what do they need the reviewers for?

EliRabett said...

Oh yeah, on those radiation limits, the number of civilians working in industry is > 3000 times the number of astronauts at the ISS, and the astronauts have a better health care plan. Multiplication is often taught in grade school.

Bee said...

Hi EliRabett,

A good question, but the authors are not claiming the auction market is a good system; they just claim it's a better system. You could ask the same question in the present system, namely, how do the editors know what papers to reject? Well, most editors are scientists recruited from the community, so they do know one thing or the other. The authors of the paper discussed here claim that in the proposed auction market, editors would be better off, because they could themselves filter the manuscripts they consider, rather than being drowned in submissions that they have to work their way through. The downside of this is of course that editors will use this possibility, thereby narrowing down their interests, and those of their journals. Best,

B.