Tuesday, November 18, 2008

Peer Review V

Occasionally, I come across people who say things like “The peer review process is severely broken,” as Virginia Hughes at Seed Magazine echoes in a recent article titled “”. Most of the article, however, is about open access instead, two issues that she happily mixes up:
“The journal-operated system of peer-review, Lisi says, "is severely broken." On this point, he couldn't find a stronger ally than the science blogosphere. Most of the ScienceBloggers are unwavering advocates of the Open Access (OA) movement, and two of them—Bora Zivkovic, of PLoS ONE, and John Wilbanks, of Science Commons—devote all of their time to it.”

At least she mentions “Most OA advocates are quick to point out that open-access doesn't necessarily mean the end of publishers or peer-review.” Indeed. So that leaves me to wonder: what kind of allies, and on exactly what point, does Garrett find in the science blogosphere?

There are three reasons why statements like this upset me: they are a) unverified, b) self-fulfilling, and c) unconstructive. Let me elaborate:

    a) I actually find the peer review process useful, and I also think it is necessary. There are many people who otherwise would not get any qualified feedback on their work. I am lucky to have colleagues to discuss with, whom I can ask for opinions, references, or keywords. But not everybody is that lucky. I certainly agree that peer review can be quite painful, and I have received my share of completely nonsensical reports by referees who evidently didn't read more than the abstract. On other occasions, however, referees have pointed out important issues and made suggestions for improvement, and even for further studies. So where are all these people who allegedly think the peer review process is broken?

    b) I frequently referee papers. I do my best to understand the author's work and to write a useful review. This takes time, time I don't have for my own work, and the only thing I get for it is an automated Thank-you email from the publisher. I do it because I believe that peer review is an essential part of the organization of scientific knowledge and important for progress, but it only works if enough people participate constructively. What we'd need is to encourage people to take it more seriously, not to proclaim that it is 'broken'.

    c) What are the alternatives? Some people like to advocate 'open peer review,' which seems to mean you put your paper somewhere on the web and hope you'll get comments. This, excuse me, is hopelessly naive. The vast majority of papers would never get any comments. Heck, the vast majority of papers probably wouldn't even get read if it wasn't for peer review. Do me the favour and think two steps ahead. We would be running into a situation in which the well-known people and the well-established topics receive a lot of 'reviews' and a lot of attention, whereas the vast body of work will never get the necessary stamp of having been critically read by somebody with an adequate education. As a consequence, a large fraction of serious researchers would be put on the same level as all the weirdos and their backyard theories that never get published. Sorry, but I really don't want to be a scientist under such circumstances.


That having been said, I certainly don't think peer review works very well. My largest frustration is that people don't take it seriously. It has happened in many instances that I wrote a long report on a flawed paper and recommended rejection, only to see later that the paper got published in a different journal in exactly the same version. Evidently, the authors were not even remotely interested in improving their work. The biggest problem however is just that we are writing way too many papers. Obviously, the more papers we write, the less time we have to read and comment on other people's papers. If you want to fix the peer review process, there are then two easy things to do:

1) Lower pressure on researchers to produce papers.

2) Encourage refereeing e.g. by pointing out its relevance or by providing incentives.


Related: See my comment on the paper “Why Current Publication Practices May Distort Science” in which the role of publishing as a branding process was studied, and my posts Peer Review IV, Peer Review III, Peer Review II.

49 comments:

changcho said...

Hi Bee - I have refereed papers, and of course I have 'enjoyed' (well, that's not the most appropriate word to use here!) peer review as well.

It's as simple as this: the peer review process is an integral part of science, and while it is not perfect (there is room for improvement: blind? double-blind?), it is absolutely necessary. Perhaps those who say that the peer review process is 'severely broken' got burned by a bad review (your paper is bad), or a bad reviewer (your paper is good, but the reviewer is bad); as you point out, there are a few out there. But just because one has gotten burned by a bad reviewer does not mean the peer review process is broken; it is not. This is like saying 'hey, I got a lemon of a car, hence all cars are severely broken!'. That's plain silly.

There are, of course, pathological cases out there, like the one you mentioned in your previous post about El Naschie; but that's a problem with a specific journal, not the peer-review process itself.

Just from my own experience here, but pretty much all of the reviews I've gotten on my papers were constructive, and mostly forced me to write better papers.

Best.

Bee said...

Hi Changcho,

That is good to hear. I sometimes wonder how well voices that seem to be dominant online represent the majority of actual scientists.

As to double-blind review, I think this can only work if you request that a submitted manuscript has not been pre-published. Maybe it would be worth offering it as an option, but I'm not sure how many people would actually use it.

Best,

B.

Uncle Al said...

Peer review carefully excludes stupid and revolutionary discourse. Peer review is the viscosity of science's dashpots. String theory was originally rejected, illustrating that peer review only works on the average (in both directions).

Virginia Hughes said...

Hi! Glad to see the article being discussed....

Garrett said in the interview that the JOURNAL system of peer-review was severely broken. If you read the full text of the interview with him (also on the Seed Magazine homepage), you'll see that he goes on to specifically criticize the payment structures of journals. So that's why I could say that he wouldn't have a stronger ally than the OA advocates on ScienceBlogs....and, as you point out, I did later make the explicit distinction between open access and peer review. I'd say that on the subject of peer-review, most ScienceBloggers agree with Janet Stemwedel (which is why I ended the piece with a link to her post).

Cheers

Bee said...

Hi Virginia,

Yes, I understood it as being about the journal peer review. As far as I am concerned, there is virtually no other peer review, except possibly in some rare examples. Your article didn't quite make it seem as if you think 'most ScienceBloggers agree with Janet Stemwedel'; on the contrary, it seems as if you only added this as an outlier quote, after you started by mentioning unnamed allies of Lisi. Best,

B.

Moshe said...

I'm completely with you on that. We need to find ways to strengthen peer review, I'm sure creative people like Garrett can come up with brilliant ideas if motivated.

Two more random comments:

1. Conflating open access with lack of standards is one of the tools publishers use in their fight for survival, or at least to extend a little longer their outdated business model (that is the business model where we do all the work, and they get all the profit).

2. Since people still ask me occasionally about Garrett's work, I find that a useful shortcut is mentioning that Garrett refuses to have his papers reviewed, claiming instead that the system is "broken". Somehow that conveys the message more clearly than any detailed explanations. In other words, in the real world most people are still pretty serious about the idea of quality control.

Imam Yahya, Commander of the Faithful said...

I guess one can be a little more specific about the complaint that many referees do not "take it seriously". When I do this [horrible] job I always try to do it in such a way that the author can argue with me. What people really hate is, "Your article has been rejected. Better luck next time." You can try to argue with the editor, of course. But 9 times out of 10 you will be wasting your time.

People might argue that getting into an argument with an author, leading perhaps to multiple re-submissions and perhaps adjudication by an editor, is time-consuming. I have a very clear answer to that: if you are not prepared to invest some time in refereeing, then *don't do it*. Sure, if you have discovered a concrete factual error in a paper and the author cannot or will not rectify it, then you should have the option to terminate the process. Otherwise you should be prepared to *make a case* that a paper should not be accepted. If you aren't, then ask to be taken off the list of referees. If editors insisted on such behaviour from referees, it would go a long way to countering the "peer review is broken" argument.

Having said all that, I have to admit that the ratio of stupid referee reports to helpful ones is so large that it is fair to say that the system is "broken" right now. But it can be fixed.

Anonymous said...

hi Bee, I think that the best review system would be to allow readers registered on arXiv to leave (possibly anonymous) comments on the arXiv web page of each paper. After all, papers sometimes have readers more competent than the one official referee: having their comments would be useful.

But of course nobody at arXiv wants to be the moderator of comments, having to deal with Lubos, etc.

PhilG said...

It is unfair to say that "Garrett refuses to have his papers reviewed". He has chosen to have them reviewed publicly in the form of internet discussions rather than through closed journal refereeing. He has been very successful in using that method. There are 23 trackbacks to his paper on the arXiv which you can follow to find long discussions about his work.

Dr M said...

Thank you, Bee! I was gathering my thoughts to write a comment to the effect that I thought your views on peer review as it actually works in practice seemed overly optimistic, and that the underlying problem is the sheer amount of publications we are expected to spew out. Then you made exactly this argument -- to the letter, exactly as I would have put it -- in the closing paragraphs and spared me the effort. Now I am happy to whole-heartedly agree with everything in this post.

Bee said...

Hi Anonymous:

As I wrote in my post, this isn't going to work. The vast majority of papers would never get any comments. Related attempts have been made previously and failed; see Michael's post on The Future of Science. He is hoping for a cultural change. I don't see it happening. Best,

B.

Thomas said...

Many areas of physics have used an open access model (the arXiv) for more than 15 years, so the fight for open access in these fields ended a long time ago. What is interesting is that the arXiv has not eliminated the journals (yet?). Some of this is for the wrong reasons (tenure committees like ``published'' papers), but I think there are also some good reasons for keeping the journals around (journals can highlight your papers, they provide some filtering in a high-noise environment, they can enforce open access to data, ...). [Full disclosure: I am an associate editor of PRL]

If the journals survive, then finding a way to improve refereeing is clearly important. It would be good if one could simply rely on people's sense of civic responsibility, but maybe we need both a stick and a carrot. I could imagine, for example, that between submitting papers to a journal you have to provide some number of reports that the editors deem useful. You can see something like this in the grant reviewing process. The reports are not always great, but at least people respond on time and they put in some effort. The reason they do that is that it is often your grant officer who gets to read the report, and he/she can cause you some harm.

Plato said...

Hi Bee,

Even after peer review there is "something still broken" when scientists can claim a part of their science is broken.

I am remembering the issues with Jacques Distler.

Next, it is the "reduction absurdity,":) relevance of the peer reviewer?

I would think one would like to maintain "this air of professionalism." I would also expect scientists would lead by example?

It is hard then under "this cover" to distinguish who is of the general population and who is that 5%. How shall any science institute say that the topics once reviewed are of a basis that has been cleared, so please do not introduce these nonsensical denouncements and less than desirable innuendo one may apply to a part of science. Right Uncle?

Best,

Bee said...

Hi Thomas,

I agree with you on the merits of journals. I actually do think editors offer an important service for the community (at least in principle). The filtering is one. I generally think though that accessibility of articles is undervalued in that process (not thinking of PRL here).

There are other ways to 'filter', though. For example, I have the impression that more and more people filter through friends' recommendations, bookmarks, blogs etc; there are a whole bunch of online tools for that purpose. Just that I am afraid this could lead to a fragmentation into like-minded groups, where like-minded could also mean "We know the author and he is a nice guy". The bottom line is, it might significantly enhance sociological effects which I'd rather not have when it comes to the organization of scientific knowledge. Best,

B.

changcho said...

"I have to admit that the ratio of stupid referee reports to helpful ones is so large that it is fair to say that the system is "broken" right now. But it can be fixed."

Well, this surely must depend on the field: where I have published and refereed (planetary science/astronomy/astrodynamics), my experiences with peer review have been generally positive. Then again, my experience is just anecdotal.

Bee said...

It also depends on the journal. Regarding Imam's comment above, there are e.g. journals that limit to one resubmission, after which the editor makes a decision. Working together with the author is thus not an option, unless you contact him/her personally. Best,

B.

stefan said...

Dear Bee,


thanks for pointing out the article! One shouldn't mix peer review with the issue of open access, but I think one should look separately at what is going on in different scientific disciplines.

I mean, Thomas has a very important point: most subfields of physics have had an open access outlet for 15 years with the arXiv, and the traditional journals are still there. Even more so, mapping the models of peer-reviewed open-access journals from biology and medicine, such as PubMed Central or PLoS, to physics seems to be not very successful, to put it mildly. My impression is that physicists are quite happy with the non-refereed open-access arXiv and the traditional peer-reviewed journals, as long as these journals are not abused by publishers as money-printing machines...

But that doesn't deal with the number of papers and the need to get them reviewed by someone. So your suggestions go very much to the heart of the problem.

Cheers,

Stefan

Anonymous Snowboarder said...

Bee - you bring up an interesting point when you describe reviewing a paper (negatively) and then seeing it published elsewhere.

Wouldn't an improvement on the process be to establish a central database available to all publishers where referees can have their review and recommendations logged? Somewhat like what the insurance companies try to do with speeding tickets issued outside your home state.

Something like this could also be used to keep track of any traits the author(s) have, perhaps saving publishers and reviewers some time too.

garrett said...

Hi Sabine,

In my opinion, peer review is important, but the journal-operated system is broken. (Thanks to Virginia for clarifying this.) There are other cases of peer review though: review for grant proposals, for promotion decisions, and the less formal review that takes place on blogs and by conversations between colleagues. I think these other forms of peer-review are evolving and could effectively replace the journal-run system.

You suggest two "easy things to do" to fix the existing system:
1) Lower pressure on researchers to produce papers.
2) Encourage refereeing e.g. by pointing out its relevance or by providing incentives.

How can we get these to happen, now, starting from the existing system? Well, when grant agencies conduct reviews, they select and pay qualified reviewers to do the work -- a good incentive. And when bloggers voluntarily review someone's work publicly, they establish their reputation by the quality of their review, and they get attention for themselves -- more incentive. A well established blogger/reviewer is a good candidate for a grant agency to employ. Furthermore, a voluntary reviewer isn't going to wade through twenty of one researcher's papers -- they're going to pick out one or two, out of the open access pool. So there will be less pressure on researchers to produce many papers with high "impact factor," and motivation instead to produce just the interesting good stuff.

I'm sure an open review process will have its own problems, such as corruption (but at least it will be open corruption). And I am a bit of an optimist, but I think this is where we're heading. I'm trying to help it along as I can, by not supporting the old system, and by encouraging the new one. (Thanks to PhilG for speaking to this on my behalf.)

Best,
Garrett

Bee said...

Hi Garrett,

Yes, for example paying reviewers would be a good incentive. It is certainly one that would work, though with limitations. The real problem, I think, is that reviewing other people's work is not sufficiently acknowledged as an important contribution to knowledge discovery. That is why I previously suggested allowing scientists a specialization of tasks, in that they could e.g. for one year be a referee - a person preferably (though not exclusively) to be contacted by editors with referee requests. That would be their main responsibility, and they would get some reputation according to whether they do this well or whether they are useless etc. Something that documents a well-done job and can be put in the CV.

If by 'open peer review' you mean what I said above, I don't think it is going to work, for the reasons I mentioned earlier. How many "well established bloggers" are there who would referee papers voluntarily at a high quality level, and how does that relate to the total number of papers that are produced? It just isn't going to happen unless you dream of the same cultural change as Michael. Corruption I don't think is a real issue; the community's ethics are not that dysfunctional.

However, what I could imagine - and I'd be interested to hear your opinion on that - is to decouple the peer review process from the publishers by setting up review agencies (for lack of a better word). Consider you'd collect scientists who would be willing to write a review on an n-page paper for a certain fee. Then if you wanted a review, you would contact some administrators, they would pass on your paper (for some moderate service fee) to an anonymous referee, and you'd go through the usual process, possibly somewhat more flexible. Just that the question is not "is this paper publishable in journal X", but what is the paper's quality, how interesting is it, what are its problems, is the argumentation clear, the presentation sensible etc. So the referee would work with the author on that, and in the end the author would obtain a report. Then this report could be used as a 'branding' similar to being published in a journal, as some quality mark.

Then one would probably get several competing agencies of that kind, which would acquire some reputation as to the quality of their reports etc.

Best,

B.

Moshe said...

Hi Garrett,

Like you, I don't think I need the peer review system. I have sufficient training and a network of friends, so I can form my own opinion. I know of published papers which are wrong, and I know unpublished work which is correct and significant. The stamp of approval given by journals would not change anything for me. Why should we bother then?

We should, because we have responsibility towards the less informed ones. Those are people with less training, or in more remote places, or simply someone in some future time that is not immersed in the current sociology and politics. We need a system of certification to distinguish the good from the bad. This is exactly what happens in all parts of life - I dare you to go on the operating table, or go on a flight, where the people in charge of your life were vetted by the internet. Why should we take physics research any less seriously?

I agree that the journal system is less than ideal, so let us come up with ideas to strengthen it! Abandoning any effective measure of quality control, just because the current system has flaws, is counterproductive. It is also not going to happen anytime soon.

Moshe said...

ps: I don't think reviewers for grant agencies or for promotion cases get paid, and in any event those reviews rely heavily on individual journal decisions regarding the quality of research (because they are rarely peer reviews done by people in the same scientific sub-field).

On the other hand I know of at least one journal (JHEP) which is experimenting with paying referees, and also giving prizes for the best ones. All part of the idea of getting this important job the place it deserves in the incentive system.

Giotis said...

Hi Moshe,

Moshe: "We need a system of certification to distinguish the good from the bad."

How is that possible when leading figures of physics don't agree about fundamental things in nature and try to ridicule the work of their colleagues?
Recently I heard Susskind in an old radio show on the net and I was truly shocked. There, he was trying to make fun of Lee Smolin (a well-known and, I think, successful physicist) in a very impolite manner, i.e. saying that nobody takes Smolin seriously and all his theories go to the bottom of the sea (glop, glop, glop... as he so vividly described) as soon as they appear.

The theory Susskind is referring to is LQG. Thus according to him LQG, a very well known and established theory, is junk. So should I assume that Rovelli, Ashtekar, and Thiemann are crackpots too? Who am I supposed to trust? My trust in the whole physics world is shaken when I hear things like that. How can I trust a review then?

I say trust nobody, and if you are truly interested, try to make up your own opinion and way of thinking.

BR

Bee said...

Yes, we mentioned the announcement of JHEP to pay referees here

Bee said...

Hi Giotis,

This interview with Susskind appeared in the aftermath of the publication of Lee's book, which seems to have upset a lot of string theorists. You might enjoy this summary of the discussion (from the post "Good Physics is Conflict"). Best,

B.

Giotis said...

lol:-) That was very funny Bee.

PS. Yes, I knew that Susskind was upset about Lee Smolin's book, but nevertheless I didn't expect such a reaction.

changcho said...

"How is that possible when leading figures of physics, don't agree about fundamental things in nature and they are trying to ridicule the work of their colleagues?"

Well, sure they agree on fundamentals. Almost all physicists agree about General Relativity and the Standard Model. They seem to disagree on cutting-edge theories/speculations. It depends on your definition of 'fundamentals'.

Now, on to your specific case:

"...There, he was trying to make fun of Lee Smolin.."

It appears Susskind is always trying to put down Smolin - I heard a talk by Susskind a couple of months ago and again he briefly mentioned Smolin in a negative way. But that's a problem with personalities, not physics.

Plato said...

Hmmm, let me see here. All scientists wear lab coats? All scientists are crazy? :)

Zephir said...

We should realize that Popper's methodology is completely symmetric: every negation of some theory (like the Aether concept) or phenomenon (cold fusion) should be considered a new theory and as such handled with caution. In this way, Popper's methodology cannot serve as an ultimate criterion for determining the validity of scientific theories.


By AWT, all theories are phases of Aether foam, which undergoes a nested transform, during which the density of the foam increases, until a new generation of foam branes emerges from the dense phase. In this moment the significance of the intuitive approach in science increases temporarily (a foam phase, where longitudinal wave spreading is more effective), so that the existing mechanisms maintaining validity become rather a brake on further development. In my opinion this phase is occurring in physics right now.

But such a period is only temporary: when the newly created theory is formalized, the more thorough approach will take its place again.

amused said...

There have been occasions when I put a fair bit of time and effort into a review and thought to myself "Dammit I want some credit for this!" It's not as if it's a fun and rewarding activity in its own right (most of the time). So here's a practical suggestion for encouraging/rewarding good reviews:

How about journal editors writing letters on behalf of reviewers in connection with their job or promotion applications etc, describing how much reviewing the person has done and how well he/she did it. Hiring committees etc would probably be interested in this, so it would count for something and would be of more value than a token monetary payment. It should be quite easy for the editors to do this: After writing a series of "standard letters" for reviewers of various levels of competence and diligence they will be able to just select one of these and fine-tune it a bit each time they write a letter for someone.

Another way to strengthen peer review would be to make publishing in journals actually count for something in career advancement. Contrast the present situation in physics with that in maths:

A mathematician who gets asked to review a paper submitted to Ann. Math. (the top maths journal) has a strong self-interest in doing a competent job: Her own standing in the maths community is to a significant extent tied up with the number of papers she has published there and in other high-quality journals, which provides the primary measure of the quality of her work. If she allows the journal standards to slip through doing a lousy refereeing job, the credit she gets from the community for her own papers in that journal decreases as well. (This is the same reasoning that motivates people to donate to their alma mater: they want it to flourish, because if it doesn't then the prestige and value of their degree (on the job market etc) goes down.)

On the other hand, in physics this self-interest motivation is absent: Journal publications count for almost nothing, so no one has anything personal at stake in maintaining high journal standards. The result is that, even in "top" journals like JHEP, you have papers of the highest quality published side by side with papers making small incremental advances, insubstantial papers, and occasionally papers that are wrong or ridiculous. This lack of standards makes journal publications useless as a quality measure, and makes it even harder for someone to motivate himself to write a good review.

If publishing in journals, or some subset of them, actually counted for something in physics for people's funding and career advancement then I'm sure we would have a situation similar to what the maths community has. Otherwise the present situation will just continue, with the pie being divided up among special interest groups based on their success in the hype competition, and with the leaders of these groups distributing the spoils to their favorite sons and daughters based on how useful they have been to them.

Christine said...

“in physics this self-interest motivation is absent: Journal publications count for almost nothing, so no one has anything personal at stake in maintaining high journal standards.”

Journal publications count a lot here in Brazil. More exactly: the number of publications. The more, the better, and more respect you get. People go to congresses to "find collaborators" (so that they increase the chances of having more papers), try to find a great number of students to supervise (so that they increase the chances of having more papers), and recycle their incremental results into other incremental results (so that they increase the chances of having more papers).

I once heard a young graduate student talking to another one that he would dedicate the next years in writing several papers, "because researchers appreciate that".

Some researchers are simply "paper production machines". They publish in several journals, whatever their prestige. Their daily routine is to read arXiv papers in order to search for possible "incremental modifications" so that they can form new papers and publish them. (Nano El Naschies???)

All those strategies are considered by (I would dare to say) the majority of researchers here as "science".

And that idea of "science" is passed strongly to the young ones.

There are good researchers in Brazil, but the majority is mediocre IMO. Paper production machines.

Andrei Kirilyuk said...

We know, don't we, Christine, that if we change “Brazil” here to “the world”, all conclusions remain valid, including various “high” and “advanced” places of science? That's the REAL result of peer review, which is but a “legal” (though nowhere legally adopted) way to perpetuate that kind of fraudulent, fruitless and truth-suppressing “research”, everywhere. You could have noted that “this idea of 'science' is passed strongly” also to almost all contributors to this post. A good illustration of the world-wide situation, beyond any national borders...

Having many “mediocre” scientists would not be a problem in itself (it's even inevitable if science becomes a massive profession, e.g. just because society decides it can practically feed them all), but the real catastrophe is that today's “peer-review-enhanced”, arrogant mediocrity very effectively kills any occasional advance beyond its own level, thus creating a practically impenetrable barrier to real problem solutions, just at a time when those unsolved problems, in both fundamental and applied science, become really pressing and accumulate into a practically singular development impasse.

It is so reminiscent of the recent financial crisis (very vividly denounced by many “advanced” scientists): while knowing very well that it's all strongly wrong, almost all “professionals” continue to “grab easy money”, despite multiple “warning signs”, right until the explicit catastrophe bursts out, with heavy consequences and uncertain recovery (after which they are the first to ask for help, as the biggest losers!). However, in the case of science we deal with allegedly the “best”, most intelligent and “conscious” part of the social elites, contrary to “all those money makers”. The difference is, however, opposite to what one might expect: the money makers, even the worst of them, are tremendously useful compared with today's “advanced science” priests and adherents. And the more inconsistent, absurd and harmful to science progress their results are, the higher is their “peer-reviewed” position and luxurious remuneration. Welcome to apocalyptic reality...

Phil Warnell said...

Hi Bee,

I’m confused by this seeming lack of clarity when it comes to the matter of "pear review", or is there something I missed perhaps? :-)

Best,

Phil
P.S. I apologize for being off topic, yet with all the gloomy world news of late I just can’t help thinking a little more levity is in order.

amused said...

Hi Christine, I had some contact with Brazilian researchers at one point (including a wonderful trip to Brazil) and noticed the situation you mentioned. It seems to be a common situation in developing countries; it was, for example, the motivation behind the Turkish plagiarism scandal earlier this year. Probably these countries started out with honorable intentions of building a meritocratic system. It is sad to see how it turned out.

Anyway, the "paper factory" approach would not be enough by itself to be successful in USA or Europe. The papers would have to address the issues of main interest to some club of important people. Being a junior author on papers written by the important people would be even better.

While I'm here, just a quick remark to Moshe's comment above. Besides a strong journal system being important for isolated and less informed researchers, it is also important for another group: journalists. To do their job they need to be able to say what the status of a piece of work is.

moshe said...

amused, this is probably correct in an ideal world. It is my understanding that in the real world speed and entertainment value are at least as important as accuracy, and nobody is going to wait for an interesting story to be checked by the community of experts. I always wonder whether, for example, the medical science stories I read in newspapers and magazines are all about the loud outliers, or whether we are simply the lucky ones. I have my own system for detecting BS, as we all do, but there's really no way to know for sure.

amused said...

Moshe, I know what you mean but had something a bit different in mind. One episode of "hype" I remember back from when I was a student was the announcement by a couple of little-known mathematicians that they had proved the Poincare conjecture (something that used to happen at regular intervals). The excited authors went to the press, resulting in newspaper stories titled "Major math problem (reported) solved". The newspaper articles reported skepticism from leading mathematicians, along with the authors' retort that the skepticism was largely a reaction to their outsider status. But one thing that the newspaper articles made clear was that the work (a 200 page manuscript) would be subjected to peer review, and that this would clarify its validity or lack thereof. The impression the public would have gotten from reading those articles was that, although the work was too complicated for mathematicians to say immediately if it was correct or not, the maths community nevertheless had an assessment process (peer review, via submission to a journal) that was trusted by all sides and that could be counted on to sort out the validity issue in due course.

This stands in stark and unfortunate contrast to the situation in physics (formal theory at least), as the recent "exceptionally simple TOE" episode made clear. I'm reluctant to put all the blame for this on Lisi and Smolin; I think a large part of it belongs to the leaders of the formal theory community, who decided some time ago that a strong journal system was superfluous. In a culture where publishing in journals counts for practically nothing, it is hard to blame someone who decides they can't be bothered to submit their controversial paper to a journal.

Moshe said...

amused, I'm not sure which leaders decided that a strong journal system is not necessary. I know of many leaders who play an active role in that system, including coming up with ideas for strengthening it. Also, in the real world, full of deans and hiring committees and grant agencies, none of whom can evaluate research directly, papers published in prestigious journals are by and large still the currency of the trade (but let's not get into the PRL dispute). There simply cannot be any other way; any trade needs a system of evaluation and certification.

We may both want to see the journal system even stronger, but exaggerating its status is a cop-out: journalists and others who choose to ignore the existing standard of assessment in the field do so for their own reasons. Claiming the system is "broken" is then just a convenient slogan.

amused said...

Moshe,

"in the real world, full of deans and hiring committees and grant agencies, all of whom cannot evaluate research directly, published papers in prestigious journals is by and large still the currency of the trade"

That seems to be true in maths, and also in the life sciences, and maybe in some subfields of physics (condensed matter?). But not in formal theory, since there are no prestigious journals there. (I would like to be able to say that PRL is one, but Jeff Harvey has told me otherwise and I think he is right.) Instead, it is the opinions of the senior influential people that determine everything; that is what the deans and committees pay attention to rather than journal publication records.

My impression is that in the "old days" NPB and PRL did play the role of "prestigious journals" in formal theory to some extent; that having publications there really counted for something when assessing people. But somehow that went out the window, especially after the arxiv appeared. E.g., I heard stories about certain prominent people declaring that they weren't going to bother sending their papers to journals anymore since there was no longer any need for it. These days I don't think many researchers in formal theory care much about getting their papers published in any particular journal; instead their main wish is that when they put it on the arxiv the leaders in their subfield will see it and be favorably impressed.

If the leaders in formal theory really want a strong journal system it is easy to achieve: just make journal publication records the basis for assessing people (in connection with career advancement etc). But that wouldn't be sensible in the present situation since the threshold for getting published is so low. Therefore, first it would be necessary to (re)introduce prestigious journals in formal theory. This would probably also fix the problem that Bee mentioned about too many papers being written, since people would take more time to write higher-quality papers so that they could get published in the prestigious journal(s).

This isn't to say that peer review is broken at present. My strong impression is that, by and large, papers on topics of central interest to a subcommunity of serious researchers get seriously reviewed at major journals. The same can't be said for papers to which this criterion doesn't apply, though; for them the outcome of the refereeing process seems quite random. Also, there is no differentiation between lesser-quality papers making small incremental advances and high-quality ones making major advances; they get published side by side. Both of these things are obstructions to having a strong journal system.

moshe said...

amused, there is very little we disagree on. I'd also like to see a hierarchy of journals and more rigor in the refereeing process, especially for the more influential journals. The arXiv took over from the journals the task of disseminating research, but not that of evaluation. That is still an important job that needs to be done and cannot be replaced by flimsy structures like the ones Garrett offers.

It's frustrating to see the issues so mangled in this discussion. You'd think that intelligent people would be able to distinguish between unrelated issues (dissemination versus evaluation, online versus paper, etc.), and also not confuse the call for more rigor (which we both share) with the call for less rigor (which is the topic of the original article quoted in the post).

Anonymous said...

These days there is some discussion, at the sci.physics.research and sci.physics.foundations newsgroups, about the limitations and flaws of peer review.

This includes notorious cases where peer review failed. Several improvements over the traditional model of peer review are proposed in the following report:

"This report presents a nonidealized vision of 21st century science. It addresses some social, political, and economic problems that affect the heart of the scientific endeavour and carry important consequences for scientists and the rest of society.

The problems analyzed are the current tendency to limit the size of scholarly communications, the funding of research, the rates and page charges of journals, the wars for the intellectual property of the data and results of research, and the replacement of impartial reviewing by anonymous censorship. The scope includes an economic analysis of PLoS' finances, the wars APS versus Wikipedia and ACS versus NIH, and a list of thirty-four Nobel Laureates whose awarded work was rejected by peer review.

Several suggestions from Harry Morrow Brown, Lee Smolin, Linda Cooper, and the present author for solving the problems are included in the report. The work finishes with a brief section on the reasons to be optimistic about the future of science."

http://www.canonicalscience.org/en/publicationzone/canonicalsciencetoday/20081113.html

Bee said...

Hi Juan,

I would appreciate it if you'd omit your self-advertisement. Also, I would generally recommend that if you want others to pay attention to your writing, you start by paying attention to theirs. You'll find plenty to think about in the sidebar, see for example this and this. Best,

B.

Christophe de Dinechin said...

As far as I am concerned, there is virtually no other peer review, except possibly in some rare examples.

I think that you are talking about physics or science here. Open-source software has developed a very efficient model of collective peer review for code. This is used effectively, for example, in the Linux operating system.

One benefit compared to the journal peer-review system is that it scales better, i.e. "given a sufficiently large number of eyeballs, all bugs are shallow". By contrast, journal reviews tend to take forever.

As an aside, arXiv is no longer open-access at all. Google "arxiv blacklist" for interesting counter-examples.

Bee said...

Hi Christophe,

Yes, I was talking about peer review for publications of scientific research results, not coding. There are likely other cases where peer review also works quite well. Fashion watch of starlets for example ;-)

Regarding the arXiv blacklist, I think you are confusing open access with being open for everybody to submit. Best,

B.

Count Iblis said...

If your math article is rejected, then it should be submitted to this journal.

amused said...

Moshe, it is kind of amusing that first you speak of prestigious journals playing a major role in formal theory in physics, then I disagree and dispute the existence of such journals, and then you change the description to "more influential" journals and claim we are almost in agreement...

Anyway, I certainly agree with you about Garrett's proposed alternative to the current system being flimsy and naive, and that strengthening the current system is the way to go. But you realize that for this to happen the powers that be would have to accept something quite unpalatable: Occasionally one of their favorite sons and daughters may lose out to some bad-smelling mongrel, without the right breeding and blessings, but who nevertheless managed to build up a superior publication record. Do you think they could ever live with this? Personally I doubt it. But until they do, people like Garrett will have an entirely justifiable excuse for not submitting to journals. They can just say: "Why should I take your journal system seriously when you don't even take it seriously yourselves?"

Moshe said...

amused, you are easily amused. If you read that sentence again you will see I was talking about the wish for a hierarchy of journals, and in that wished-for hierarchy rigorous standards are more important for the more influential journals. In other words, I was not talking about the existing system (which incidentally does have some hierarchy and rigor, though probably not enough for my taste, and I take it not for yours either).

In any event, I think we are not going to resolve all these issues today, let's continue this some other time.

amused said...

Moshe, the problem I have with your viewpoint (as I understand it) is that it seems analogous to calling for more denominations in a currency without facing up to the problem that the currency has no purchasing power to begin with. To address that problem I think it is unavoidable to have a discussion about what kinds of mongrels the powers-that-be might be willing to admit into the priesthood... But I will acquiesce to your wish to postpone that discussion to a future occasion.

Alethea said...

Bee wrote in one of the comments:
"Paying reviewers would be a good incentive (...) reviewing other people's work is not sufficiently acknowledged as an important contribution to knowledge discovery. That is why I previously suggested to allow scientists a specialization in task, in that they could e.g. for (a fixed period) be a referee - a person preferably (though not exclusively) to be contacted by editors with referee requests. That would be their main responsibility, they would get some reputation whether they do this well or whether they are useless etc."

After a recent, particularly steep publication charge based on the cost of color figures and Open Access, I asked the editor whether I couldn't offset some of that cost, as we are cash-poor, with my availability for a fixed number of reviews (the suitability of which would be at the editor's discretion). The principle of washing dishes at the restaurant whose bill you find you can't pay. I am still awaiting a response and suspect it won't fly, but I like the concept of taking a compensated review sabbatical.

Alethea said...

Followup on that last comment: the publisher knocked a little off the fee (we had provided a cover image as well, and it was a good excuse), but my offer was ignored.

I suppose I would settle for amused's suggestion as compensation, also:
"How about journal editors writing letters on behalf of reviewers in connection with their job or promotion applications etc, describing how much reviewing the person has done and how well he/she did it."