Saturday, February 02, 2008

Peer Review III

The Publishing Research Consortium last week published the results of a global survey on the attitudes and behaviour of academics in relation to peer review in journals.

3040 scientists filled out the survey, myself among them. I am very happy to see that the vast majority shares my general opinion on the importance of the peer review process:

"The overwhelming majority (93%) disagree that peer review is unnecessary. The large majority (85%) agreed that peer review greatly helps scientific communication [...]

Researchers overwhelmingly (90%) said the main area of effectiveness of peer review was in improving the quality of the published paper. In their own experience as authors, 89% said that peer review had improved their last published paper, both in terms of the language or presentation but also in terms of correcting scientific errors."



But there is also a desire for improvement:

"While the majority (64%) of academics declared themselves satisfied with the current system of peer review used by journals (and just 12% dissatisfied), they were divided on whether the current system is the best that can be achieved, with 36% disagreeing and 32% agreeing. There was a very similar division on whether peer review needs a complete overhaul. There was evidence that peer review is too slow (38% were dissatisfied with peer review times) and that reviewers are overloaded."


Most people also shared my opinion about double-blind review, which I regard as a good idea `in principle' though it is doubtful whether it would work in practice, as I also pointed out in the earlier post Peer Review II:

"[W]hen asked which was their most preferred option, there was a preference for double-blind review, with 56% selecting this, followed by 25% for single-blind, 13% for open and 5% for post-publication review. Open peer review was an active discouragement for many reviewers, with 47% saying that disclosing their name to the author would make them less likely to review.

Double-blind review was seen as the most effective. Double-blind review had the most respondents (71%) who perceived it to be effective, followed (in declining order) by single-blind (52%), post-publication (37%) and open peer review (26%) [...]

Many respondents, including some of those supporting double-blind review, did however point out that there were great difficulties in operating it in practice because it was frequently too easy to identify authors from their references, type of work or other internal clues."


You can download the full survey results here (PDF ~1.4 MB), or the summary here (PDF ~1.6 MB).


Thanks to Stefan for the pointer!

21 comments:

Phil Warnell said...

Hi Bee,

“The overwhelming majority (93%) disagree that peer review is unnecessary”

I am certainly happy that this would be the case, and yet not surprised, since this is a way to determine whether one's vocation is truly professional or not. I would suspect the 5% would encompass mainly part of the small crack-pot element that will be found in any profession. It would be interesting to weigh these results against a similar question put to those in other disciplines considered professional.

“Most people also shared my opinion about double-blind review, which I regard as a good idea `in principle'”

I don’t suspect it would serve the same purpose as it does in, say, medicine, for I don’t believe a placebo effect could have any bearing on whether a theory or its results are sound or not. That would be borne out soon enough by the rules that science adheres to in general, as examples such as the stem cell incidents or cold fusion have demonstrated. Its primary value of course would be to eliminate the human weakness of prejudice (either pro or con), which I’m afraid is otherwise unavoidable. It could also at times spare the journal itself embarrassment or the loss of a publishing opportunity.

Best,

Phil

Kronprinz Rupprecht said...

"were divided on whether the current system is the best that can be achieved, with 36% disagreeing and 32% agreeing"

What! 32% think that the current system cannot be improved?! Are they kidding? What utter crap.

*The* worst aspect of the present system is that, in effect, there is no appeal. Sure, you can complain to the editor --- good luck.... I had a paper rejected by an *editor* who said, "This paper should be rejected because the author claims X, and if that were true it would mean that the well-known paper by Prof Y is false." Now that is a bizarre way for an editor to behave anyway -- "Prof Y is never wrong" -- but the truth is that my paper actually explicitly states the *opposite* of X. I wrote a polite letter to this idiot pointing this out, and --- surprise! -- never got a reply.

My own policy when I am a referee is that I will not reject a paper unless I can find an actual error, or unless it is clear that the result is so trivial that the author is just trying to expand his publication list --- and I only do the latter under extreme provocation. If referees and editors all acted in this way, that would already be a *huge* improvement.....

phil said...

The survey is worthless because it was conducted by the journal publishers themselves. The questions and areas of study were biased to get the result they wanted.

Just look at the headline question which was "Do you agree or disagree with the statement that peer review is completely unnecessary?" Notice the use of the negative and the word "completely" which is left out when the result is reported. The amazing thing is that even with such a loaded question there are still 7% who agree that peer review is completely unnecessary.

Why did they not break down the answers to important questions by subject field? Why did they only look seriously at alternatives within the current journal-based system? Why did they include journal editors even if they had not published in two years, while filtering out anybody else who had not published recently? Why did they not ask if referees should be paid for their work?

I would like to see an alternative survey conducted on behalf of the research departments that pay for the journals. That would be worth reading.

Bee said...

Hi Phil:

The survey is worthless because it was conducted by the journal publishers themselves. The questions and areas of study were biased to get the result they wanted.

Just look at the headline question which was "Do you agree or disagree with the statement that peer review is completely unnecessary?" Notice the use of the negative and the word "completely" which is [...]


I do not share your impression. Did you actually look at the survey, or only read this post? The survey is imho very balanced and includes negatively as well as positively formulated questions, and the 'headline' question is only a 'headline' because I increased the font size so it would be readable as a jpg. You find the whole survey in appendix I of this file, and a lot more detail on how the survey was done and evaluated etc. on the previous 70 pages.

Best,

B.

Bee said...

Hi Kronprinz:

What! 32% think that the current system cannot be improved?! Are they kidding? What utter crap.

Though I am among those who definitely think the system can be improved (well, I'd say there is always room for improvement; the question is only whether it's worth the effort), it seems to fit with the impression I get from talking to my colleagues. It's about the same problem you face with almost every political question. There will always be those people who argue "We've always done it this way, and it works for me, so it's good the way it is, why change it," etc. Call it a lack of vision combined with low ambition, and the inability to notice that times are changing and procedures have to be readjusted every now and then. I'd actually have expected the fraction to be even higher, but then maybe those people who found the whole topic irrelevant didn't bother to fill out the survey to begin with.

*The* worst aspect of the present system is that, in effect, there is no appeal. Sure, you can complain to the editor --- good luck....

I am not sure I'd call this the worst aspect, but I actually agree with you. In my experience, most editors are apparently either not able or not willing to judge on their own. They leave you with completely ridiculous referee reports, and if you complain about it, they reply, as you say, by assuring you the report was written by somebody who is qualified etc., and therefore your paper must be crap. I've had this happen several times. (Some of the reports were so free of content it's incredible. One referee once argued (I paraphrase): the author has previously published a paper that in my opinion shouldn't have been published, therefore this one (on a different topic) must also be crap. That's one of the reasons why I'd prefer double-blind review.)

My own policy when I am a referee is that I will not reject a paper unless I can find an actual error, or unless it is clear that the result is so trivial that the author is just trying to expand his publication list [...]

I recently had to reject several papers because they simply reproduced results that had previously been published by other people. Evidently the authors were not aware of this (even though the earlier papers were only ~5 years old and all on the arxiv). I actually find this very worrisome, and I hope it doesn't become common procedure to send papers to a journal expecting a referee to point out the relevant publications. Best,

B.

Bee said...

Hi Phil:

“The overwhelming majority (93%) disagree that peer review is unnecessary”

I am certainly happy that this would be the case and yet not surprised since this is a way to determine if ones vocation is truly professional or not. I would suspect the 5% would encompass mainly part of the small crack-pot element that will be found in any profession.


No, I wouldn't say this. I know several people who find peer review unnecessary not because they are crackpotty, but for reasons that I would call hopelessly naive. E.g. some believe peer review will essentially be done by 'the online community' and the feedback they get by email etc. after the paper is publicly available. Others believe in 'time will tell', i.e. the papers that are worth it are the ones that will remain interesting over the course of time, whether or not they were peer reviewed.

I don't think this is going to work, and since you've read my previous posts on information overflow and the increasing importance of filtering, ordering and managing information, you probably know why. Peer review is the best filter we currently have to guarantee some level of expertise. The arxiv's endorsement system is also some kind of filter, but on a much lower level. The more papers are published, the more people WILL apply SOME filter. I capitalize to emphasize that's not an option. There are just too many papers to read them all; the capacity of the human brain is finite. I guess most people scan the arxiv looking for keywords or familiar names; so do I. That's also some kind of filter, but not a very good one. If I do a keyword search, I usually pay more attention to the published papers (if older than a year or so), just because the quality is most often higher.

The same problem applies to review by 'the online community'. The more papers there are, the less time people have, or will take, to comment on them. The 'time will tell' option might work on the very long time scale (> a decade), but not in the short run, and it can result in very many people wasting time. (There is an abundance of examples in our history where people believed things to be important or promising just because many other people thought the same.)

Best,

B.

phil said...

Hi Bee,
Yes, I did have a look at the survey. The question about peer review being unnecessary is the first result mentioned in the executive summary, which is why I called it the headline result.

I am sure you must be aware that the journals have been under some pressure from some academics recently. If you aren't aware, just look at www.eurekajournalwatch.org

I think that this survey is a response to that, but since it was commissioned by the journals themselves and not the academics, it is obvious whose side it is going to support. You don't have to read far to see those biases coming through.

P.S. Just in case anyone gets confused: there are two separate Phils leaving comments here.

Bee said...

Hi Phil:

I think that this survey is a response to that, but since it was commissioned by the journals themselves and not the academics, it is obvious whose side it is going to support. You don't have to read far to see those biases coming through.

I too think that this survey is a response to that, and I don't have to read your comments far to see your biases coming through. Could you maybe let me know what exactly the reason is for your objection? I don't even understand why you'd think that the outcome is what 'journals' would want. I am pretty sure editors find the referee process as annoying as the authors and the referees.

If I owned a journal, the outcome I'd want to construct is: "A huge majority thinks peer review is completely useless, and they further think that citations and online discussions are a much better substitute. They also think that the filter mechanism so provided needs to be monitored and summarized. The editor's task should therefore just be to find those papers which are destined to become important and set trends (i.e. raise the journal's impact factor). This is an important and necessary mechanism, and the people who provide this service should be incredibly highly paid." Best,

B.

Neil' said...

What kind of peer review, or the operational equivalent, can gifted amateurs or independent scholars get, and where?

stefan said...

Dear Bee,

ah, that's a nice coincidence - I didn't know that you had participated in this survey :-)



Hi phil,

when I noticed the press release about the survey, I was at first a bit sceptical, for reasons similar to yours.

However, among the publishers of the "Publishing Research Consortium Partners" who commissioned the survey, there are also the Professional Society Publishers, like the APS or IOPP, which are not driven by commercial interests in the first place. So for me, this speaks in favour of more trustworthy and balanced results.

Best, Stefan

phil said...

Hi Bee,

I don't have the bias you imagine. I have not been connected to any academic institution for over twenty years and have no reason to want to get rid of journals or peer review. In fact, I published a paper in a peer-reviewed journal last year, and since I cannot submit to the arxiv, journals are the only means for me to publish beyond my own blog.

I am simply pointing out the obvious bias given who has conducted the survey.

I don't agree that journals would want to get rid of peer review. The editors may find the process annoying, but without journal-based peer review, how else could they justify their high cost? I can't accept that they could justify their existence by claiming that they are necessary to "set trends".

Bee said...

Hi Phil, Hi Stefan:

If somebody who financed an investigation has certain specific interests, this is a reason to be cautious, but not sufficient to conclude that the outcome of the investigation must necessarily be biased and to dismiss it. I agree with Phil's objection concerning the formulation of the question; it would be worrying if this had been the only question. But as I've mentioned above, it wasn't. The set of questions contained several positively and negatively formulated statements ("Scientific communication is greatly helped by peer review of published articles": 45% agree; "Peer review is holding back scientific communication": 45% disagree), and I have no reason to believe they cheated on the selection of participants or the counting or so. Why would they? It's an inward-directed survey; the results are relevant for the journals to know how to move on. What would be the point in doing an examination of the market situation (which probably wasn't free either) and then skewing the results? Just suppose it were indeed the case that more people think peer review is unnecessary than the survey suggests. What would be the point in telling them otherwise if they will sit on a committee deciding whether to continue a journal subscription?

Phil:
The editors may find the process annoying but without journal based peer review, how else can they justify their high cost? I cant accept that they could justify their existence by claiming that they are necessary to "set trends".

My formulation was probably somewhat too sarcastic. The scenario is the following: scientists put their papers on the arxiv, discussion starts, others read it/publish follow-up work/replies, blogs comment, the Telegraph reports (shit, here it goes again with the sarcasm). The editor's task is to keep track of the developments. If a paper looks promising, they make the author an offer to print it; the best (or earliest) offer wins. The better the editors do their job, the greater the confidence people will put in that journal to publish the really important developments, i.e. you pay for the filtering.

Hi Neil:

What kind of peer review or the operational equivalent, and where, can gifted amateurs or independent scholars get?

That's an interesting question, and one that I asked myself too. A while ago I therefore suggested (to Bora, the guy from Blog around the Clock) the following:

There seems to be an increasing number of people who have a sincere interest in science but a limited amount of experience in the field. Many of them ask, some more, some less politely, for feedback. As a result, scientists in academic institutions receive an increasing number of requests to 'please have a look at my interesting theory'. I don't know, but I suspect that journals also receive an increasing number of papers of that sort, which however most often won't even make it into the referee process because they don't meet the journal's standards. The problem is that neither referees nor researchers usually have the time to look closely at all this and give people some hints on how to proceed. It is just a misdirected request that is annoying for both sides.

On the other hand, I believe there are people who would be interested in reading these ideas and providing some feedback, in the form of a written report, if they were paid appropriately. So my suggestion was to set up a website where, on the one hand, scientists who'd be up for the task could register, and on the other hand, people who want some professional answers could submit their theories and pay a fee for a report. It just takes time to read other people's ideas and to write a report about them, and unlike peer review it's not a community service, so it needs other incentives; money would be a good try. E.g. I could imagine that in some cases PhD students, postdocs, emeriti, teachers, etc. would be interested in that sort of thing, or maybe just the Prof who is bored after dinner.

What would you think about that? I think it wouldn't take all that much to set up an appropriate web interface, though one would need some people to maintain it and to control the quality etc.

Best,

B.

Kronprinz Rupprecht said...

"If I do a keyword search I usually pay more attention to the published papers (if older than 1 year or so), just because the quality is most often higher."

I used to do this, but I'm sorry to say that nowadays when I see a paper on the arxiv that has not been published, my instinct is automatic: "Poor bastard got a retard for an editor/referee." So I regret to say that for me, the refereeing process has no value whatever as a filter.

I agree that it is unfortunate that people send in papers without thoroughly checking the literature, but this is a minor problem compared to the harm done by referees and editors who don't bother to read articles at all.

I'm afraid the sad truth is that we all have to develop our own filters. It has bad aspects but it has good ones too: being conscious of one's filtering forces us to think about it carefully and not just blindly say, "oh, accepted in JHEP, it must be good". That would be a bad mistake. So the stupidity of the JHEP etc editors is actually doing us a favour in a sense: we are learning to make up our own minds.

Bee said...

Hi Kronprinz:

Well, I don't see much point in arguing about which problem either of us considers the worst. Let me just add the reason why I find it of some concern that people send in papers without being aware of the previously published literature: it indicates they are not able to find the information they need. As I have argued earlier, a system that loses memory is not able to learn from mistakes and will repeat them over and over again. It doesn't matter if some result is 'in principle' available somewhere if one can't find it when one needs it, because one doesn't know the right keyword, or the author's name, or how to use the search engine, or, or, or. What matters is whether the information is 'in practice' available when one needs it.

I'm afraid the sad truth is that we all have to develop our own filters. It has bad aspects but it has good ones too:

The problem with that is that people DO develop their own filters, simply because they have to in order to function somehow. But these filters are not necessarily the best choice. I.e., as I mentioned above, I guess that many people scan the arxiv by looking for keywords in titles/abstracts or for familiar names. This greatly supports fragmentation of the field into subgroups and is not conducive to keeping an overview of interesting developments. The fact is that most people are simply not aware of the tactics they have developed to filter the input they receive. Yes, you 'learn to make up your own mind', but are you sure you want this to happen in a rather random way? It would imo very much increase the difficulty for an 'outsider' to be heard if he is just 'filtered' out, more or less unconsciously, because most people don't consider him important and his abstract doesn't push the right buttons.

Best,

B.

Bee said...

Speaking of which, I just came across this:

Microcanonical treatment of black hole decay at the Large Hadron Collider
http://arxiv.org/abs/0708.0647

I've re-read the reference list 5 times, but they fail to cite our paper on exactly the same topic, published 5 years earlier:

Quasi Stable Black Holes at the Large Hadron Collider
http://arxiv.org/abs/hep-ph/0109085

And why? I don't know. They cite my review, where I mention the topic and refer to my earlier publication. Maybe it's just because neither the title nor the abstract of our paper contains the keyword 'microcanonical'? See what I mean about a system losing memory?

changcho said...

Well, having done and endured peer review myself, I'll make my comment short. I *strongly* disagree with the statement "peer review is unnecessary".

Henry said...

Dear Bee,

I have read your posts about peer-review in physics journals and I have found them really disappointing. I am a referee too and I have a good list of published papers too but I want to let you know what is really going on here with an example.

There is a community actively working to understand what form the gluon propagator should have in the infrared, i.e. they are trying to solve pure Yang-Mills theory at small energies. Of course, the only serious comparison data come from lattice computations. In this community a paradigm formed. Starting with works on confinement due to Zwanziger, a group of theorists claimed that, with a given truncation of the Dyson-Schwinger equations, one can prove that the gluon propagator should go to zero at small momenta, that the ghost propagator should go to infinity faster than the free one, and that the running coupling should have a fixed point. So strong has been the force of these arguments that editors and referees became convinced that all this must be right, and the journals with the highest impact factors have rejected, and still do reject, all papers showing that something different is going on.
These arguments have never been proved mathematically sound, but they formed a paradigm.

As emerged from the LATTICE 2007 conference last year, lattice data are proving all this wrong. This means that the highest-impact-factor journals have possibly published rubbish for many years, while systematically rejecting all that could be right. Indeed, to me, if one proposes a truncated hierarchy of equations, as a referee I should ask for a convincing mathematical proof that it is really sound. But today physicists seem not to trust mathematics too much, and one can be skeptical about that too!

What is happening to this not-so-famous community could happen at large for other, more extended paradigms that have formed, without any sound reason, in fields such as particle physics.

My conclusion is that the peer-review process, as applied today in physics, is seriously flawed. What is going on in the above example should make all of us reflect.

Cheers,

Henry

Bee said...

Hi Henry:

Thanks for your interesting comment. I certainly agree that we should reflect on what is going on. But to me the problem you mention is not one that peer review could solve. What you are criticizing is a much deeper ethical problem in the community (lacking criticism supports confirmation bias). What I have been focusing on in my posts are practical, easy-to-implement changes to peer review that might improve the quality of the outcome, or at least the satisfaction with the process. Best,

B.

Neil' said...

Bee, I think the idea you put forth about a special board to look at amateur proposals is a good one. The only downside I can think of right now is possible hardship for poor smart folks. However, most could likely scrape together enough to get something through if they cared a lot about it. There should be an effort to see what good can come out of that.

I think it is also important to try this for "practical ideas" and proposals too, since most of us can't afford time/money/red tape for patenting inventions etc.

Michael Nielsen said...

Hi Bee,

I was looking at the webpage of the organization that published this report. It looks to me like it's a group whose main purpose is to lobby against open access. Since it's an advocacy group, I'm a bit suspicious of anything they publish.

With that said, the report you cite was commissioned from an independent research firm. So it is interesting.

Michael Nielsen said...

Hmm. On further review of their website, I'm not 100% sure of my earlier comment that this is an advocacy group. Sorry for the comment-pollution.