
Saturday, February 15, 2020

The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop

On my recent visit to Great Britain (the first one post-Brexit) I had the pleasure of talking to Dorothy Bishop. Bishop is Professor of Psychology at the University of Oxford and has been a leading force in combating the reproducibility crisis in her own and other disciplines. You can find her on Twitter under the handle @deevybee. The comment for Nature magazine which I mention in the video is here.

Thursday, May 19, 2016

The Holy Grail of Crackpot Filtering: How the arXiv decides what’s science – and what’s not.

Where do we draw the boundary between science and pseudoscience? It's a question philosophers have debated for as long as there has been science – and last time I looked they hadn't made much progress. When you ask a sociologist, the answer is normally a variant of: Science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient, and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filter hypotheses, and then consider only those which are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypothesis filtering is a tricky one, and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases, and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens, because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that, in a nutshell, is what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters:


In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby helping arXiv moderators decide when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as not classifiable, or stood out by language significantly different from that in the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:
“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don't know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. [...]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”
It doesn’t surprise me much – you can see this happening in comment sections all over the place: The “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two, an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It only stands to reason that with smart software you can tell insiders from outsiders even more efficiently than humans can. According to Ginsparg:
“We've actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme [...] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
Trying to speak in the code of a group you haven’t been part of for at least some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for a while. Such in-group and out-group demarcation is the subject of much study in sociology – not specifically the sociology of science, but generally. Scientists are human, and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman think-machines.
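For the curious, here is a minimal sketch in Python of the general idea behind such a filter. This is not the arXiv’s actual classifier – it merely illustrates how a document’s stop-word frequency profile can flag out-of-distribution writing; the stop-word list and the threshold are made up for illustration.

```python
# Sketch: flag documents whose stop-word frequency profile deviates
# strongly from the corpus average. Illustrative only; the stop-word
# list and the threshold are invented for this example.
from collections import Counter

STOP_WORDS = {"the", "a", "of", "in", "to", "and", "is", "that", "we", "it"}

def stopword_profile(text):
    """Relative frequencies of stop words in a text."""
    words = text.lower().split()
    counts = Counter(w for w in words if w in STOP_WORDS)
    total = sum(counts.values()) or 1
    return {w: counts[w] / total for w in STOP_WORDS}

def distance(p, q):
    """Euclidean distance between two stop-word profiles."""
    return sum((p[w] - q[w]) ** 2 for w in STOP_WORDS) ** 0.5

def is_outlier(corpus, new_doc, threshold=0.3):
    """True if new_doc's profile is far from the (non-empty) corpus mean."""
    profiles = [stopword_profile(doc) for doc in corpus]
    mean = {w: sum(p[w] for p in profiles) / len(profiles)
            for w in STOP_WORDS}
    return distance(stopword_profile(new_doc), mean) > threshold
```

A real system would of course use a trained classifier over many more features, but the punchline is the same: writing style shows up in the statistics.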

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules that physicists use to sort out cranks. From what I can tell, the arXiv filters work reasonably well, otherwise there would be much complaint in the community. Indeed, the vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell through the cracks and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets approval by the community. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough to sweep their own doorsteps. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-supporting. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”
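To give a flavor of what quantifying unlikely combinations means, here is a toy sketch in Python. The actual study, as far as I understand it, scored pairs of co-cited journals against randomized citation networks; the version below just counts how often pairs of cited journals have appeared together before, and everything in it – the function names, the toy corpus – is made up for illustration.

```python
# Toy sketch: score a paper's reference list by how rare its pairwise
# journal combinations are in the existing literature. Illustrative only.
from collections import Counter
from itertools import combinations

def pair_counts(reference_lists):
    """Count how often each pair of journals is co-cited across papers."""
    counts = Counter()
    for refs in reference_lists:
        for pair in combinations(sorted(set(refs)), 2):
            counts[pair] += 1
    return counts

def rarest_pair_count(refs, counts):
    """The rarest journal pairing in this paper's reference list;
    lower values mean more 'unconventional' combinations."""
    pairs = combinations(sorted(set(refs)), 2)
    return min((counts[p] for p in pairs), default=0)

# Hypothetical usage with invented toy data:
corpus = [["PRL", "PRD", "JHEP"], ["PRL", "PRD"], ["Cell", "Nature"]]
counts = pair_counts(corpus)
print(rarest_pair_count(["PRL", "Cell"], counts))  # 0: never co-cited before
```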

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.

Tuesday, March 01, 2016

Tim Gowers and I have something in common. Unfortunately it’s not our math skills.

Heavy paper.
What would you say if a man with a British accent cold-calls you one evening to offer money because he likes your blog?

I said no.

In my world – the world of academic paper-war – we don’t just get money for our work. What we get is permission to administrate somebody else’s money according to the attached 80-page guidelines (note the change in section 15b that affects taxation of 10 year deductibles). Restrictions on the use of funds are abundant and invite applicants to rest their foreheads on cold surfaces.

The German Research Foundation, for example, will – if you are very lucky – grant you money for a scientific meeting. But you’re not allowed to buy food with it. Because, you must know, real scientists don’t eat. And to thank you for organizing the meeting, you don’t yourself get paid – that wouldn’t be an allowed use of funds. No, they thank you by requesting further reports and forms.

At least you can sometimes get money for scientific meetings. But convincing a funding agency to pay a bill for public outreach or open access initiatives is like getting a toddler to eat broccoli: No matter how convincingly you argue it’s in their own interest, you end up eating it yourself. And since writing proposals sucks, I mean, sucks up time, at some point I gave up trying to make a case that this blog is unpaid public outreach that you'd think research foundations should be supportive of. I just write – and on occasion I carefully rest my forehead on cold surfaces.

Then came the time I was running low on income – unemployed between two temporary contracts – and decided to pitch a story to a magazine. I was lucky and landed an assignment instantly. And so, for the first time in my life, I turned in work to a deadline, wrote an invoice, and got paid in return. I. Made. Money. Writing. It was a revelation. Unfortunately, my published masterwork is now hidden behind a paywall. I am not happy about this, you are not happy about this, and the man with the British accent wasn’t happy about it either. Thus his offer.

But I said no.

Because all I could see was time wasted trying to justify proper means of spending someone else’s money on suitable purposes, such as, for example, a conference fee that finances the first-class ticket of the attending Nobel Prize winner. That, you see, is an allowed way of spending money in academia.

My cold-caller was undeterred and called again a week later to inquire whether I had changed my mind. I was visiting my mom, and mom, always the voice of reason, told me to just take the damn money. But I didn’t.

I don’t like being reminded of money. Money is evil. Money corrupts. I only pay with sanitized plastic. I swipe a card through a machine and get handed groceries in return – that’s not money, that’s magic. I look at my bank account statements so rarely I didn’t notice for three years that I was accidentally paying a gym membership fee in a country I don’t even live in. In case my finances turn belly-up, I assume the bank will call and yell at me. Which, now that I think of it, seems unlikely, because I have moved at least a dozen times since opening my account. And I’m not good at updating addresses either. I did call the gym though and yelled at them – I got my money back.

Then the British man told me he also supports Tim Gowers’ new journal. “G-O-W-ers?” I asked. Yes, that Tim. That would be the math guy responsible for the equations in my G+ feed.

Tim Gowers. [Not sure whose photo, but not mine]
Tim Gowers, of course, also writes a blog. Besides that, he’s won the 1998 Fields Medal which makes him officially a genius. I sent him an email inquiring about our common friend. Tim wrote back he reads my blog. He reads my blog! A genius reads my blog! I mean, another genius – besides my mom who gets toddlers to eat broccoli.

Thusly, I thought, if it’s good enough for Gowers, it’s probably good enough for me. So I said yes. And, after some more weeks of consideration, sent my bank account details to the British man. You have to be careful with that kind of thing, says my mom.

That was last year in December. Then I forgot about the whole story and returned to my differential equations.

Tim, meanwhile, got busy setting up the webpage for his new journal “Discrete Analysis” which covers the emerging fields related to additive combinatorics (not to be confused with addictive combinatorics, more commonly known as Sudoku). His open-access initiative has attracted some attention because the journal’s site doesn’t itself host the articles it publishes – it merely links to files which are stored on the arXiv. The arXiv is an open-access server in operation since the early 1990s. It allows researchers in physics, math, and related disciplines to upload and share articles that have not, or not yet, been peer-reviewed and published. “Discrete Analysis” adds the peer-review, with minimal effort and minimal expenses.

Tim’s isn’t the first such “arxiv-overlay” journal – I myself published last year in another overlay-journal called SIGMA – but it is still a new development that is eyed with some skepticism. By relying on the arXiv to store files, the overlays render server costs somebody else’s problem. That’s convenient but doesn’t make the problem go away. Another issue is that the arXiv itself already moderates submissions, a process that the overlay journals have no control over.

Either way, it is a trend that I welcome because overlays offer scientists what they need from journals without the strings and costs attached by commercial publishers. It is, most importantly, an opportunity for the community to reclaim the conditions under which their research is shared, and also to innovate the format as they please:

“I wanted it to be better than a normal journal in important respects,” says Tim, “If you visit the website, you will notice that each article gives you an option to click on the words ‘Editorial introduction.’ If you do so, then up comes a description of the article (not on a new webpage, I hasten to add), which sets it in some kind of context and helps you to judge whether you want to find out more by going to the arXiv and reading it.”

But even overlay journals don’t operate at zero cost. The website of “Discrete Analysis” was designed by Scholastica’s team, and their platform will also handle the journal’s publication process. They charge $10 per submission, and there are a couple of other expenses that the editorial board has to cover, such as the services necessary to issue article DOIs. Tim wants to avoid passing the journal’s expenses on to the authors. Which brings in, among others, the support from my caller with the British accent.

In the two months that had passed since I last heard from him, I found out that ten years ago someone proved there is no non-trivial solution to the equations I was trying to solve. Well, at least that explains why I couldn’t find one. The two-day cursing retreat I consequently scheduled was interrupted by a message from The British Man. Did the money arrive?, he wanted to know. Thus forced to check my bank account, I found that not only had his money not arrived, I had also never received the salary for my new job.

This gives me an excuse to lecture you on another pitfall of academic funding. Even after you have filed five copies of various tax documents and sent the birth dates of the University President and Vice-President to an institution that handles your grant for another institution and is supposed to wire it to a third institution which handles it for your institution, the money might get lost along the way – and frequently does.

In this case they had simply forgotten to put me on the payroll. Luckily, the issue could be resolved quickly, and the next day the wire transfer from Great Britain arrived too. Good thing, because, as mommy guilt reminded me, this bank account pays for the girls’ daycare and lunch. My writer friends won’t be surprised to hear, however, that I also noticed several payments for my freelance work hadn’t come through. When I grow up, I hope someone tells me how life works. /lecture

Tim Gowers invited submissions for “Discrete Analysis” starting last September, and the website of the new journal launched today – you can read his own blogpost here. For the community, the key question now is whether arxiv-overlay journals like Tim’s will be able to gain a status similar to that of traditional journals. The only way to find out is to try.

Public outreach in general, and science blogging in particular, is vital for the communication of science, both within our communities and to the public. And so are open access initiatives. Even though they are essential to advance research and integrate it into our society, funding agencies have been slow to accept these services as part of their mission.

While we wait for academia to finally digest the invention of the world wide web, it is encouraging to see that some think ahead. And so, I am happy today to acknowledge that this blog is now supported by the caller with the British accent, Ilyas Khan of Cambridge Quantum Computing. Ilyas has quietly supported a number of scientific endeavors. Although he is best known for enabling Wittgenstein’s Nachlass to become openly and freely accessible by funding the project that was implemented by Trinity College Cambridge, he is also a sponsor of Tim Gowers’ new journal Discrete Analysis.

Friday, September 11, 2015

How to publish your first scientific paper

I get a lot of email asking me for advice on paper publishing. There’s no way I can make time to read all these drafts, let alone comment on them. But simple silence leaves me feeling guilty for contributing to the exclusivity myth of academia, the fable of the privileged elitists who smugly grin behind the locked doors of the ivory tower. It’s a myth I don’t want to contribute to. And so, as a sequel to my earlier post on “How to write your first scientific paper”, here is how to avoid roadblocks on the highway to publication.

There are many types of scientific articles: comments, notes, proceedings, reviews, books and book chapters, to mention just the most common ones. They all have their place and use, but in most of the sciences it is the research article that matters most. It’s what we all want, to get our work out there in a respected journal, and it’s what I will focus on.

Before we start: These days you can publish literally any nonsense in a scam journal, usually for a “small fee” (which might only be mentioned at a late stage in the process, oops). Steer clear of such shady business; it will only damage your reputation. Any journal that sends you an unsolicited “call for papers” is a journal to avoid (and a sender to put on the junk list). When in doubt, check Beall’s list of Potential, Probable and Possible Predators.

1. Picking a good topic

There are two ways you can go on a road trip: Find a car or hitchhike. In academic publishing, almost everyone starts out as a hitchhiker, coauthoring a work, typically with their supervisor. This moves you forward quickly at first, but sooner or later you must prove that you can drive on your own. And one day, you will have to kick your supervisor out of the copilot seat. While you can get lucky with any odd idea as a topic, there are a few guidelines that will increase your chances of getting published.

1.1 Novelty

For research topics, as with cars, a new one will get you farther than an interesting one. If you have a new result, you will almost certainly eventually get it published in a decent journal. But no matter how interesting you think a topic is, the slightest doubt that it’s new will prevent publication.

As a rule of thumb, I therefore recommend you stay far away from everything older than a century. Nothing reeks of crackpottery as badly as a supposedly interesting find in special relativity or classical electrodynamics or the foundations of quantum mechanics.

At first, you will not find it easy to come up with a new topic at the edge of current research. A good way to get ideas is to attend conferences. This will give you an overview of the currently open questions, and an impression of where your contribution would be valuable. Every time someone answers a question with “I don’t know,” listen up.

1.2. Modesty

Yes, I know, you really, really want to solve one of the Big Problems. But don’t claim in your first paper that you did – it’s like trying to break a world record the first time you run a mile. Except that in science you don’t only have to break the record, you also have to convince others you did.

For the sake of getting published, by all means refer to whatever question it is that inspires you in the introductory paragraph, but aim at a solid small contribution rather than a fluffy big one. Most senior researchers have a grandmotherly tolerance for the exuberance and naiveté of youth, but forgiveness ends at the journal’s front door. As encouraging as they may be in personal conversation, a journal reference serves as a quality check for scientific standards, and nobody wants to be the one to blame for not keeping up the standard. So aim low, but aim well.

1.3. Feasibility

Be realistic about what you can achieve and how much time you have. Give your chosen topic a closer look: Do you already know all you need to know to get started, or will you have to work yourself into an entirely or partially new field? Are you familiar with the literature? Do you know the methods? Do you have the equipment? And lastly, but most importantly, do you have the motivation that will keep you going?

Time management is chronically hard, but give it a try and estimate how long you think it will take, if only to laugh later about how wrong you were. Whatever your estimate, multiply it by two. Does it fit in your plans?

2. Getting the research done

Do it.


3. Preparing the manuscript 

Many scientists dislike the process of writing up their results, thinking it only takes time away from real science. They could not be more mistaken. Science is all about the communication of knowledge – a result not shared is a result that doesn’t matter. But how to get started?

3.1. Collect all material

Get an overview of the material that you want your colleagues to know about: calculations, data, tables, figures, code, what have you. Single out the parts you want to publish, collect them in a suitable folder, and convert them into digital form if necessary, i.e. type up equations, make vector graphics of sketches, render images, and so on.

3.2. Select journals

If you are unsure which journals to choose, have a look at the literature you have used for your research. Most often this will point you towards journals where your topic will fit in. Check the websites to see whether they have length restrictions and, if so, whether these might become problematic. If all looks good, check their author guidelines and download the relevant templates. Read the guidelines. No, I mean, actually read them. The last thing you want is your manuscript getting returned by an annoyed editor because your image captions are in the wrong place or some similar nonsense.

Select the four journals that you like best and order them by preference. Chances are your paper will get rejected at the first, and laying out a path to continue in advance will prevent you from dwelling on your rejection for an indeterminate amount of time.

3.3. Write the paper

For how to structure a scientific paper, see my earlier post.

3.4. Get feedback

Show your paper to several colleagues and ask for feedback, but only do this once you are confident the content will no longer substantially change. The amount of confusion returning to your inbox will reveal which parts of the manuscript are incomprehensible or, heaven forbid, just plainly wrong.

If you don’t have useful contacts, getting feedback might be difficult, and this difficulty increases exponentially with the length of the draft. It dramatically helps to get others to actually read your paper if you tell them why it might be useful for their own research. Explaining this requires that you actually know what their research is about.

If you get comments, make sure to address them.

3.5. Pre-publish or not?

Pre-print sharing, for example on the arxiv, is very common in some areas and less common in others. I would generally recommend it if you work in a fast moving field where the publication delay might limit your claim to originality. Pre-print sharing is also a good way to find out whether you offended someone by forgetting to cite them, because they’ll be in your inbox the next morning.

3.6. Submit the paper

The submission process depends very much on the journal you have chosen. Many journals now allow you to submit an arxiv link, which dramatically simplifies matters. However, if you submit source files, always check the compiled pdf “for your approval”. I’ve seen everything from half-empty pages to corrupted figures to pdfs that simply didn’t load.

Some journals allow you to select an editor to whom the manuscript will be sent. It is generally worth checking the list to see if there is someone you know. Or maybe ask a colleague which editors they have had good or bad experiences with. But don’t worry if you don’t know any of them.

4. Passing peer review

After submission your paper will generally first be screened to make sure it fulfills the journal requirements. This is why it is really important that the topic fits well. If you pass this stage your paper is sent to some of your colleagues (typically two to four) for the dreaded peer review. The reviewers’ task is to read the paper, send back comments on it, and to assign it one of four categories: publish, minor changes, major changes, reject. I have never heard of any paper that was accepted without changes.

In many cases some of the reviewers are picked from your reference list, excluding people you have worked with yourself or who are in the acknowledgements. So stop and think for a moment whether you really want to list all your friends in the acknowledgements. If you have an archenemy who shouldn’t be commenting on your paper, let the editor know about this in advance.

Never submit a paper to several journals at the same time. Don’t do this either if the papers have even partly overlapping content. You might succeed, but trying to boost your publication count by repeating yourself is generally frowned upon and not considered good practice. The exception is conference proceedings, which often summarize a longer paper’s content.

When you submit your paper you will be asked to formally accept the ethics code of the journal. If it’s your first submission, take a moment to actually read it. If nothing else, it will make you feel very grown up and sciency.

Some journals ask you to sign a copyright form already at submission. I have no clue what they are thinking. I never sign a copyright form until the revisions are done.

Peer review can be annoying, frustrating, infuriating even. To keep your sanity and to maximize your chance of passing try the following:

4.1. Keep perspective

This isn’t about you personally, it’s about a manuscript you wrote. It is easy to forget, but in the end you, the reviewers, and the editor have the same goal: to advance science with good research. Work towards that common end.

4.2. Stay polite and professional

Unless you feel a reviewer makes truly inappropriate comments, don’t complain to the editor about strong wording – you will only waste your time. Inappropriate comments are everything that refers to your person or affiliation (or absence thereof), any type of ideological argument, and opinions not backed up by science. Go through all other comments and address them one by one, in a reply attached to the resubmission and by changes to the manuscript where appropriate. Never ignore a question posed by a referee; it provides a perfect reason to reject your manuscript.

In case a referee finds an actual mistake in your paper, be reasonable about whether you can fix it in the time given until resubmission. If not, it is better to withdraw the submission.

4.3. Revise, revise, revise

Some journals have a maximum number of revisions that they allow, after which an editor will make a final decision. If you don’t meet the mark and your paper is eventually rejected, take a moment to mull over the reason for rejection, and revise the paper one more time before you submit it to the next journal.

Goto 3.6. Repeat as necessary.

5. Proofs

If all goes well, one day you will receive a note saying that your paper has been accepted for publication and you will soon receive page proofs. Congratulations!

It might feel like a red light five minutes from home when you have to urgently pee, but please read the page proofs carefully. You will never forgive yourself if you don’t correct that sentence which a well-meaning copy editor rearranged to mean the very opposite of what you intended.

6. And then…

… it’s time to update your CV! Take a walk, and then make plans for the next trip.

Friday, April 17, 2015

Publons

The "publon" is "the elementary quantum of scientific research which justifies publication" and it's also a website that might be interesting for you if you're an active researcher. Publons helps you collect records of your peer review activities. On this website, you can set up an account and then add your reviews to your profile page.

You can decide whether you want to actually add the text of your reviews, or not, and to which level you want your reviews to be public. By default, only the journal for which you reviewed and the month during which the review was completed will be shown. So you need not be paranoid that people will know all the expletives you typed in reply to that idiot last year!

You don't even have to add the text of your review at all, you just have to provide a manuscript number. Your review activity is then checked against the records of the publisher, or so is my understanding.

Since I'm always interested in new community services, I set up an account there some months ago. It goes really quickly and is totally painless. You can then enter your review activities on the website or - super conveniently - you just forward the "Thank You" note from the publisher to some email address. The record then automatically appears on your profile within a day or two. I forwarded a bunch of "Thank You" emails from the last months, and now my profile page looks as follows:

[Screenshot of my Publons profile page]

The folks behind the website almost all have a background in academia and probably know it's pointless trying to make money from researchers. One expects of course that at some point they will try to monetize their site, but at least so far I have received zero spam, upgrade offers, or the dreaded newsletters that nobody wants to read.

In short, the site is doing exactly what it promises to do. I find the profile page really useful and will probably forward my other "Thank You" notes (to the extent that I can dig them up), and then put the link to that page in my CV and on my homepage.

Sunday, February 15, 2015

Open peer review and its discontents.

Some days ago, I commented on an arxiv paper that had been promoted by the arxiv blog (which, for all I know, has no official connection with the arxiv). This blogpost had an aftermath that gave me something to think about.

Most of the time when I comment on a paper that was previously covered elsewhere, it’s to add details that I found missing. More often than not, this amounts to a criticism which then ends up on this blog. If I like a piece of writing, I just pass it on with approval on twitter, G+, or facebook. This is to explain, in case it’s not obvious, that the negative tilt of my blog entries is selection bias, not that I dislike everything I haven’t written myself.

The blogpost in question pointed out shortcomings of a paper. Trying to learn from earlier mistakes, I was very explicit about what that means, namely that the conclusion in the paper isn’t valid. I’ve now written this blog for almost nine years, and it has become obvious that the careful and polite scientific writing style plainly doesn’t get the message across to a broader audience. If I write that a paper is “implausible,” my colleagues will correctly parse this and understand I mean it’s nonsense. The average science journalist will read that as “speculative” and misinterpret it, either accidentally or deliberately, as some kind of approval.

Scientists also have a habit of weaving safety nets with what Peter Woit once so aptly called ‘weasel words’ – ambiguous phrases that allow them in any instance to claim they actually meant something else. Who ever said the LHC would discover supersymmetry? The main reason you most likely perceive the writing on my blog as “unscientific” is the lack of weasel words. So I put my head out here at the risk of being wrong without means of backpedalling, and as a side effect I often come across as actively offensive.

If I got a penny each time somebody told me I’m supposedly “aggressive” because I read Strunk’s ‘Elements of Style,’ then I’d at least get some money for writing. I’m not aggressive, I’m expressive! And if you don’t buy that, I’ll hit some adjectives over your head. You can find them weasel words in my papers though, aplenty, with lots of ifs and thens and subjunctives, in nested subordinate clauses with 5-syllable words, just to scare off anybody who doesn’t have a PhD.

In reaction to my, ahem, expressive blogpost criticizing the paper, I very promptly got an email from a journalist, Philipp Hummel, who was writing an article about the paper for spektrum.de, the German edition of Scientific American. His article has meanwhile appeared, but since it’s in German, let me summarize it for you. Hummel didn’t only write about the paper itself, but also about the online discussion around it, and about the author’s, my own, and other colleagues’ reactions to it.

Hummel wrote by email he found my blogpost very useful and that he had also contacted the author asking for a comment on my criticism. The author’s reply can be found in Hummel’s article. It says that he hadn’t read my blogpost, wouldn’t read it, and wouldn’t comment on it either because he doesn’t consider this proper ‘scientific means’ to argue with colleagues. The proper way for me to talk to him, he let the journalist know, is to either contact him or publish a reply on the arxiv. Hummel then asked me what I think about this.

To begin with, I find this depressing. Here’s a young researcher who explicitly refuses to address criticism of his work, and who moreover thinks this is proper scientific behavior. I could understand that he doesn’t want to talk to me, evil aggressive blogger that I am, but that he refuses to explain his research to a third party isn’t only bad science communication, it actively damages the image of science.

I will admit I also find it slightly amusing that he apparently believes I must have an interest in talking to him, or in him talking to me. That all the people whose papers I have once commented on might show up wanting to talk is the stuff of my nightmares. I’m happy if I never hear from them again and can move on. There’s lots of trash out there that needs to be beaten.

That paper and its author, me, and Hummel, we’re of course small fish in the pond, but I find this represents a tension that presently exists in much of the scientific community. A very prominent case was the supposed discovery of “arsenic life” a few years ago. The study was exposed as flawed by online discussion. The arsenic authors refused to comment, arguing that:
“Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated […] This is a common practice not new to the scientific community. The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner.”
Naïve as I am, I thought that theoretical physics was less 19th century than that. But now it seems to me this outdated spirit is still alive in the physics community too. There is a basic misunderstanding here about the necessity and use of peer review, and about the relevance of scientific publication.

The most important aspect of peer review is that it assures that a published paper has been read at least by the reviewers, which otherwise wouldn’t be the case. Public peer review will never work for all papers, simply because most papers would never get read. It works just fine, though, for papers that receive much attention, and in these cases anonymous reviewers aren’t any better than volunteer reviewers with similar scientific credentials. Consequently, public peer review, when it takes place, should be taken at least as seriously as anonymous review.

Don’t get me wrong, I don’t think that all scientific discourse should be conducted in public. Scientists need private space to develop their ideas. I even think that most of us go out with ideas way too early, because we are under too much pressure to appear productive. I would never publicly comment on a draft that was sent to me privately, or publicize opinions voiced in closed meetings. You can’t hurry thought.

However, the moment you make your paper publicly available you have to accept that it can be publicly commented on. It isn’t uncommon for researchers, even senior ones, to have stage fright upon arxiv submission for this reason. Now you’ve thrown your baby into the water and have to see whether it swims or sinks.

Don’t worry too much, almost all babies swim. That’s because most of my colleagues in theoretical physics entirely ignore papers that they think are wrong. They are convinced that in the end only truth will prevail, and thus they practice live-and-let-live. I used to do this too. But look at the evidence: it doesn’t work. The arxiv is now full of paid research so thin a sneeze could wipe it out. We seem to have forgotten that criticism is an integral part of science; it is essential for progress, and for cohesion. Physics leaves me wanting more in this regard every year: it is over-specialized into incredibly narrow niches, and getting worse by the day.

Yes, specialization is highly efficient for optimizing existing research programs, but it is counterproductive to the development of new ones. In the production line of a car, specialization allows you to optimize every single move and every single screw. And yet, you’ll never arrive at a new model by listening to people who do nothing all day but look at their own screws. For new breakthroughs you need people who know a little about all the screws and their places and how they belong together. In that production line, the scientists active in public peer review are the ones who look around and say they don’t like their neighbor’s bolts. That doesn’t make for a new car, all right, but at least they do look around, and they show that they care. The scientific community stands to benefit much from this care. We need them.

Clearly, we haven’t yet worked out a good procedure for how to deal with public peer review and with these nasty bloggers who won’t shut up. But there’s no going back. Public peer review is here to stay, so better get used to it.

Saturday, January 03, 2015

Your g is my e – Has the time come for a physics notation standard?

Standards make sure the nuts fit the bolts.
[Image Source: nutsandbolts.mit.edu]

The German Institute for Standardization, the “Deutsches Institut für Normung” (DIN), has standardized German life since 1917. DIN 18065 sets the standard for the height of staircase railings, DIN 58124 for the surface of school bags to be covered with reflective stripes, and DIN 8270-2 for the length of the hands of a clock. The Germans have a standard for pretty much everything from toilets to sleeping bags to funeral services.

Many of the German standards are now identical to European Standards, EN, and/or International Standards, ISO. According to DIN ISO 8601, for example, the International Standard week begins on Monday and has seven days. DIN EN 1400-1 certifies that a pacifier has two holes so that the baby can still breathe if it manages to suck the pacifier into its mouth (it happens). The international standard DIN EN ISO 20126 assures that every bristle of your toothbrush can withstand a pull of at least 15 Newton (a “Büschelauszugskraftprüfung,” bristle-pull-off-force-test, as the Germans call it). A lot of standards are dedicated to hardware supply and electronic appliances; they make sure that the nuts fit the bolts, the plugs fit the outlets, and the fuses blow when they should.

DIN EN 45020 is the European Standard for standards.

Where standards are lacking, life becomes cumbersome. Imagine every time you bought envelopes or folders you had to check that they actually fit the paper you have. The Swedes have a different standard for paper punching than the Germans, neither of which is identical to the US American one. Filing cross-country taxes is painful for many reasons, but the punch issue is the straw that makes my camel go nuts. And let me not even get started about certain nations who don’t even use the ISO paper sizes, because international is just the rest of the world.

Standards are important for consumer safety and convenience, but they have another important role, which is to benefit the economic infrastructure by making reuse and adaptation dramatically easier. The mechanical engineers figured that out a century ago – why haven’t the physicists?

During the summer I read a textbook on in-medium electrodynamics, a topic I was honestly hoping I’d never again have anything to do with, but unfortunately it was relevant for my recent paper. I flipped past the first six chapters or so because they covered the basics I thought I knew, just to then find that the later chapters didn’t make any sense. They gradually started making sense after I figured out that q wasn’t the charge and η wasn’t the viscosity.

Anybody who often works with physics textbooks will have encountered this problem before. Even after adjusting for unit and sign conventions, each author has their own notation.

Needless to say this isn’t a problem of textbooks only. I quite frequently read papers that are not directly in my research area, and it is terribly annoying having to waste time trying to decode the nomenclature. In one instance I recall being very confused about an astrophysics paper until it occurred to me that M probably wasn’t the mass of the galaxy. Yeah, haha, how funny.

I’m one of these terrible referees who will insist that every variable, constant, and parameter is introduced in the text. If you write p, I expect you to explain that it’s the momentum. (Or is it a pressure?) If you write g, I expect you to explain it’s the metric determinant. (Or is it a coupling constant? And what again is your sign convention?) If you write S, I expect you to explain it’s the action. (Or is it the entropy?)

I’m doing this mostly because if you read papers dating back to the turn of the last century, it is very apparent that what was common notation then isn’t common notation any more. If somebody in a hundred years downloads today’s papers, I still want them to be able to figure out what the papers are about. Another reason I insist on this is that not explaining the notation can add substantial interpretational fog. One of my pet peeves is to ask whether x denotes a position operator or a coordinate. You can build whole theories on mixing these up.

You may wnat to dsicard this as some German maknig am eelphnat out of a muose, but think twice. You almots certainly have seen tihs adn smiliar memes that supposedly show how amazingly well the human brain is at sense-making and error correction. If we can do this, certainly we are able to sort out the nomenclature used in scientific papers. Yes, we are able to do this like you are able to decipher my garbled up English. But would you want to raed a whoel essay liek this?

The extra effort it takes to figure out somebody else’s nomenclature, even if it isn’t all that big a hurdle, creates friction that makes interdisciplinary work, and even collaboration within one discipline, harder, and thus discourages it. Researchers within one area often settle on a common or at least similar nomenclature, but this happens typically within groups that are already very specialized, and the nomenclature hurdle further entrenches this overspecialization. Imagine how much easier it would be to learn about a new subject if each paper used a standard notation, or at least had a list of the notation used added at the end or in a supplement.
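Such a notation list wouldn’t have to be anything fancy. Here is a hand-rolled LaTeX sketch of what I have in mind – the symbols and glosses are just the examples from this post:

```latex
% A minimal notation list to append to a paper. Packages like
% "nomencl" can automate this, but even a plain table helps.
\section*{Notation}
\begin{tabular}{ll}
$g_{\mu\nu}$ & metric tensor, signature $(-,+,+,+)$ \\
$g$          & determinant of the metric (not a coupling constant) \\
$S$          & the action (not the entropy) \\
$p$          & momentum (not pressure) \\
$x$          & spacetime coordinate (not the position operator) \\
$D$          & number of spatial dimensions, time not included \\
\end{tabular}
```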

There aren’t all that many letters in the alphabets we commonly use, and we’d run out of letters quickly if we tried to keep them all different. But they don’t need to be all different – more practical would be palettes for certain disciplines. And of course one doesn’t really have to fix each and every twiddle or index if it is explained in the text. Just the most important variables, constants, and observables would already be a great improvement. Say, that T that you are using there, does or doesn’t it include complex conjugation? And the D, is that the number of spatial coordinates only, or does it include the time coordinate? Oh, and N isn’t a normalization but an integer, how stupid of me.

In fact, I think that the benefit, especially for students who haven’t yet seen all that many papers, would be so large that we will almost certainly sooner or later see such a nomenclature standard. And all it really takes is for somebody to set up a wiki and collect entries, then for authors to add a note that they used a certain notation standard. This might be a good starting point.

Of course a physics notation standard will only work if sufficient people come to see the benefit. I don’t think we’re quite there yet, but I am pretty sure that the day will come when some nation expects a certain standard for lecture notes and textbooks, and that day isn’t too far into the future.

Sunday, February 17, 2013

The Future of Peer Review

This week's cover of The Economist.
A year ago, I told you what I think is the future of scientific peer review: Peer review that is conducted independently of the submission of a manuscript to a journal. You would get a report from an institution offering such a service – possibly some already existing publisher, possibly some new institution specifically created for this purpose. This report you could then use together with the submission of your paper to a journal, but you could also use it with open access databases. You could even use it alongside your grant proposals if that seems suitable. I call it pre-print peer review.

I argued earlier that, irrespective of what you think about this, it's going to happen. You just have to extrapolate the present situation: There is a lot of anger among scientists about publishers who charge high subscription fees. And while I know some tenured people who simply don't bother with journal publication any more and just upload their papers to the arXiv, most scientists need the approval stamp that a journal publication presently provides: it shows that peer review has taken place. The easiest way to break this dependence on journals is to offer peer review by other means. This will make the peer review process more to the point and more effective.

The benefit of this change over other, more radical, changes that have been proposed is that it stays very close to the present model in that the procedure of peer review itself need not be changed. It's just the provider that changes.

I am thus really excited that the recent issue of Nature reports that one such service exists now and another one is about to be created:
The one that already exists is called Peerage of Science, based in Jyväskylä, Finland. Yeah, right, the Nordic people, they're always a little faster than the rest of the world. Peerage of Science seems to have launched a little more than a year ago, but this is the first time I've heard of it. The one in the making is US based and the project is managed by a guy called Keith Collier.

Of course it's difficult to say whether such a change will catch on. Academia has a large inertia, and much depends on whether people will accept independent reviews. But I am confident, so let me make a prediction, just for the fun of it: In five years there will be a dozen such services, some run by publishers. In ten years, most peer review will take place this way.

Thursday, February 16, 2012

Pre-Print Peer Review

Nature news recently titled "Rebel academics ponder how to break free of commercial publishers". The rebels would be better off if they'd read this blog, because we have discussed a solution to their problem here!

The solution is Pre-Print Peer Review (PPPR). The idea is as simple as it is obvious: Scientists and publishers alike would benefit if we just disentangled the quality assessment from the selection for journal publication. There is no reason why peer review should be tied to the publishing process, so don't tie it there. Instead, create independent institutions (ideally several) that mediate peer review. These institutions may be run by scientific publishers. In fact, that would be the easiest and fastest way to do it, and the way most likely to succeed, because the infrastructure and expertise are already in place.

The advantages of PPPR over the present system are: There is no more loss of time (and thereby cost) by repeated reviews in different journals. Reports could be used with non-peer-reviewed open access databases, or with grant applications.

Editors of scientific journals could still decide for themselves if they want to follow the advice of these reports. Initially, it is likely they will be skeptical and insist on further reports. The hope is that over time, PPPR would gain trust, and the reports would become more widely accepted.

In contrast to more radical options, PPPR has a good chance of success because it is very close to the present system and would work very similarly. And it is to the advantage of everybody involved.

I have a longer outline of the idea here; comments and suggestions are very welcome!

Tuesday, November 18, 2008

Peer Review V

Occasionally, I come across people who say things like “The peer review process is severely broken,” as Virginia Hughes echoes in a recent article for Seed Magazine. Most of the article, however, is about open access instead – two issues that she happily mixes up:
“The journal-operated system of peer-review, Lisi says, "is severely broken." On this point, he couldn't find a stronger ally than the science blogosphere. Most of the ScienceBloggers are unwavering advocates of the Open Access (OA) movement, and two of them—Bora Zivkovic, of PLoS ONE, and John Wilbanks, of Science Commons—devote all of their time to it.”

At least she mentions that “Most OA advocates are quick to point out that open-access doesn't necessarily mean the end of publishers or peer-review.” Indeed. So that leaves me wondering: what kind of allies, on exactly which point, does Garrett find in the science blogosphere?

There are three reasons why statements like this upset me: they are a) unverified, b) self-fulfilling, and c) unconstructive. Let me elaborate:

    a) I actually find the peer review process useful, and I also think it is necessary. There are many people who otherwise would not get any qualified feedback on their work. I am lucky to have colleagues to discuss with, who I can ask for opinions, references, or keywords. But not everybody is that lucky. I certainly agree that peer review can be quite painful, and I have received my share of completely nonsensical reports by referees who evidently didn't read more than the abstract. On other occasions however referees have pointed out important issues, and made suggestions for improvement, and even for further studies. So where are all these people who allegedly think the peer review process is broken?

    b) I frequently referee papers. I do my best trying to understand the author's work, and to write a useful review. This takes time, time I don't have for my own work, and the only thing I get for it is an automated Thank-you email from the publisher. I do it because I believe that peer review is an essential part of the organization of scientific knowledge and important for progress, but it only works if enough people participate constructively. What we'd need is to encourage people to take it more seriously, and not proclaim it is 'broken'.

    c) What are the alternatives? Some people like to advocate 'open peer review,' which seems to mean you put your paper somewhere on the web and hope you'll get comments. This, excuse me, is hopelessly naive. The vast majority of papers would never get any comments. Heck, the vast majority of papers probably wouldn't even get read if it weren't for peer review. Do me the favour and think two steps ahead. We would run into a situation in which the well-known people and the well-established topics receive a lot of 'reviews' and a lot of attention, whereas the vast body of work would never get the necessary stamp of having been critically read by somebody with an adequate education. As a consequence, a large fraction of serious researchers would be put down on the same level as all the weirdos with their backyard theories that never get published. Sorry, but I really don't want to be a scientist under such circumstances.


That having been said, I certainly don't think peer review works very well. My largest frustration is that people don't take it seriously. It has happened in many instances that I wrote a long report on a flawed paper and recommended rejection, only to see later that the paper got published in a different journal in exactly the same version. Evidently, the authors were not even remotely interested in improving their work. The biggest problem, however, is simply that we are writing way too many papers. Obviously, the more papers we write, the less time we have to read and comment on other people's papers. If you want to fix the peer review process, there are then two easy things to do:

1) Lower pressure on researchers to produce papers.

2) Encourage refereeing e.g. by pointing out its relevance or by providing incentives.


Related: See my comment on the paper “Why Current Publication Practices May Distort Science” in which the role of publishing as a branding process was studied, and my posts Peer Review IV, Peer Review III, Peer Review II.

Tuesday, February 05, 2008

Peer Review IV

[My weekend post on the recent survey on peer review reminded me I meant to write about a peer review problem that's only rarely mentioned: citations.]

Horizontal and Vertical Citations

Specifically, there seems to be a trend to increasingly cite horizontally instead of mostly vertically. What I mean by horizontal citations are citations of related works that go back to the same initial ideas or concepts, but are neither actually necessary to understand the content of a paper, nor have they investigated a closely related aspect of the problem. Citing horizontally comes with a lot of politics. People cite others horizontally to be polite, because it seems smart for networking reasons, because they hope the favour will be returned, as a reply to annoying emails, or just because they believe conventions require it.

Typically it looks like this:
    "Recently, topic XYZ has been explored by many groups [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24]."

As an example, see 0706.3155 ("Collider Phenomenology of Unparticle Physics" by Cheung, Keung & Yuan). In the more advanced version one quotes simply as
    "Many studies have investigated the implications of ... [11]."

or
    "Various considerations of ... have recently been developed in the literature [8] [9]."

And then one clumps together 25 papers in citation [11]; see e.g. 0801.1534 ("Unparticle Self-Interactions and Their Collider Implications" by Feng, Rajaraman & Tu). For an extreme version, try reference [8] of 0801.0018 ("Unparticle physics at the photon collider" by Kikuchi, Okada & Takeuchi), which fills more than a whole page and is probably just the complete list of what an arXiv keyword search brought up.

This kind of citation seems to be especially common in the first few years after a topic receives interest, but it has a cut-off length, which I'd estimate at somewhere around 50 papers, beyond which it simply is no longer feasible. For example, in the first years when black holes at the LHC were hip ('01-'04 or so), citations of the sort "in a number of recent papers people have studied..." (e.g. hep-ph/0405054, "SUSY Production From TeV Scale Blackhole at LHC" by Chamblin, Cooper & Nayak) were quite common, but around 2005 references condensed to review articles and the few papers where the idea originated.

Also, in my impression the more established researchers take horizontal citing less seriously (who are you to request that I cite your paper?).

By vertical citations, in contrast, I mean the papers that were actually used for a new publication, that are necessary to understand it (whether they are sufficient for understanding is a different issue), or that constitute previous work on the same topic (even if unfortunately unknown to the authors during writing). Of course scientists need to give proper credit to other people's work, and to back up arguments with references. But should they just group-cite 'various considerations'?

Reasons

This kind of citation was a very useful feature in the days before one could do a keyword search in a database, or click on 'cited by'. Horizontal citing serves the purpose of letting the interested reader know who else has worked on a given topic and what other related studies have already been done. However, this is a good example of how technological improvements, together with the growth of our community, can result in developments that have unwanted consequences.

Consequences

Whether we like it or not, the citation index of a researcher matters to his or her career. If many people cite horizontally out of politeness - possibly often without even having read all the papers themselves - it encourages fast publications on hip topics. These works contain more horizontal citations, which makes the topic look even more like the place to be. Most importantly, researchers have to act fast to be among the earliest papers, because then they make it onto the citation lists of those who come later. A mechanism like this is called positive feedback: interest causes increasing interest. Nowadays, one can literally make a living out of jumping on and off topics with good timing.

An effect like this can considerably distort scientists' judgement of which areas they regard as worth spending their time on.
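
To see how strong such a positive feedback can be, here is a minimal toy simulation, a sketch in Python: every number in it is made up, and it illustrates the mechanism only, it is not a model of any actual field. Each new paper distributes its citations among earlier papers with probability proportional to the citations those papers have already collected, plus a small base chance of being found at all.

    import random

    def simulate_citations(n_papers=500, cites_per_paper=10, seed=1):
        """Toy 'rich get richer' model of citation accumulation.
        Each new paper cites earlier ones with probability proportional
        to (citations already received + 1); the +1 gives every paper
        a small base chance of being discovered at all."""
        rng = random.Random(seed)
        citations = [0]  # the first paper enters uncited
        for _ in range(1, n_papers):
            weights = [c + 1 for c in citations]
            picks = rng.choices(range(len(citations)), weights=weights,
                                k=min(cites_per_paper, len(citations)))
            for idx in set(picks):  # choices() samples with replacement
                citations[idx] += 1
            citations.append(0)  # the new paper enters uncited
        return citations

    counts = simulate_citations()
    print("top 5:", sorted(counts, reverse=True)[:5])
    print("median:", sorted(counts)[len(counts) // 2])

The exact numbers don't matter; the shape does. Run it and you'll find that the earliest papers collect a vastly disproportionate share of the citations, for no reason other than that they got there first.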

Another annoying side effect is that people try to get citations just because it seems possible, and because every single one improves their citation index. As a result, if I put a paper on the arXiv, the following day I will receive several emails of the type
    "Dear Prof. Hossenfelder,
    Today I read your interesting paper on X. I want to draw your attention to my interesting paper(s) on Y. [latex bibitem follows] "

Depending on temper, people request more or less bluntly to be mentioned in my reference list. In some cases these references are interesting and might be useful for later papers. In rare cases I did indeed miss a previous publication on the same topic, which is as annoying as it is embarrassing. In most cases though, people seem to send these emails for no other reason than that the title or abstract of my paper contains a word that also appears in their paper. I actually knew a guy who wrote a script to check the new arXiv submissions for keywords and to produce emails like the one above (a sketch of what such a script might look like follows below).
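
Just to show how little effort this takes, here is a rough sketch of what such a script might look like. To be clear, this is my guess at a reconstruction, not the actual script: it uses the third-party feedparser library, the keyword list is invented, and the arXiv feed URL is an assumption (check the arXiv help pages for the current one).

    import feedparser  # third-party library: pip install feedparser

    FEED_URL = "http://export.arxiv.org/rss/hep-ph"  # assumed feed URL
    KEYWORDS = {"unparticle", "minimal length", "black hole"}  # made up

    def find_targets(feed_url=FEED_URL, keywords=KEYWORDS):
        """Return (title, link) of new submissions whose title or
        abstract mentions one of the keywords -- the raw material
        for a canned 'please cite my paper' email."""
        feed = feedparser.parse(feed_url)
        hits = []
        for entry in feed.entries:
            text = (entry.title + " " + entry.summary).lower()
            if any(kw in text for kw in keywords):
                hits.append((entry.title, entry.link))
        return hits

    for title, link in find_targets():
        print("Dear Prof. X, today I read your interesting paper")
        print("  '%s' (%s) ..." % (title, link))

A dozen lines, and the rest is a mail merge. The effort is negligible, which is exactly why the incentives work out the way they do.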

And you know what? I can't even blame people for doing this. Even if the chances are low, if you send out enough annoying emails, one or another recipient will just cite you, and isn't that what matters*? It's one of these cases where the incentives lead people to focus on meeting secondary criteria (a high citation index) instead of primary goals (good research). For more on primary goals and secondary criteria, see The Marketplace of Ideas.

Peer Review

Although peer review does to a certain degree ensure that relevant previous publications are appropriately mentioned (restrictions apply), it rarely happens that an author is told there are plenty of redundant papers on the reference list. What people put on the arXiv is their business, but if it were clear that peer-reviewed journals wouldn't support the citation of only weakly related papers, this trend would calm down considerably. That's why I think peer review would be the place to address the issue.

How to

If you don't want to cite everybody, don't cite anybody. It sounds silly, but it seems people easily get pissed off if one cites a colleague who has worked on topic X but not themselves; if one doesn't cite the colleague either, it doesn't bother them. Just sticking stubbornly to the publications that were actually used and are relevant to a work seems to be acceptable (and if that isn't sufficient, blame it on a page limit). It has the drawback, however, that colleagues are less likely to return the favour and cite you - Science or Sociology?

Bottom line

In times when keyword searches and 'cited by' queries are possible, horizontal citations are unnecessary. They have, however, the side effect of causing a positive feedback on fashionable topics that can distort objectivity.

Disclaimer

Nothing of what I've speculated here is backed up by an actual study; it's just my impression. It would be interesting to see an analysis of the citation distribution with regard to the cut-off length of clustered citations. I am not criticising the content of any of the papers I mentioned above (which, in fact, I didn't even read).


See also: Peer Review II, III and the related posts Science and Democracy I, II, and III.


* Footnote for the younger readers: that's meant in a sarcastic way, please don't take it seriously. You can easily spoil your reputation with that kind of behaviour. Nobody wants to work with somebody who is just incredibly annoying and self-centered.

Saturday, February 02, 2008

Peer Review III

The Publishing Research Consortium published last week the results of a global survey on the attitudes and behaviour of academics in relation to peer review in journals.

3040 scientists filled out the survey, myself among them. I am very happy to see that the vast majority shares my general opinion on the importance of the peer review process:

"The overwhelming majority (93%) disagree that peer review is unnecessary. The large majority (85%) agreed that peer review greatly helps scientific communication [...]

Researchers overwhelmingly (90%) said the main area of effectiveness of peer review was in improving the quality of the published paper. In their own experience as authors, 89% said that peer review had improved their last published paper, both in terms of the language or presentation but also in terms of correcting scientific errors."



But there is also a desire for improvement:

"While the majority (64%) of academics declared themselves satisfied with the current system of peer review used by journals (and just 12% dissatisfied), they were divided on whether the current system is the best that can be achieved, with 36% disagreeing and 32% agreeing. There was a very similar division on whether peer review needs a complete overhaul. There was evidence that peer review is too slow (38% were dissatisfied with peer review times) and that reviewers are overloaded."


Most people also shared my opinion about double-blind review, which I regard as a good idea 'in principle', though it is doubtful whether it would work in practice, as I also pointed out in the earlier post Peer Review II:

"[W]hen asked which was their most preferred option, there was a preference for double-blind review, with 56% selecting this, followed by 25% for single-blind, 13% for open and 5% for post-publication review. Open peer review was an active discouragement for many reviewers, with 47% saying that disclosing their name to the author would make them less likely to review.

Double-blind review was seen as the most effective. Double-blind review had the most respondents (71%) who perceived it to be effective, followed (in declining order) by single-blind (52%), post-publication (37%) and open peer review (26%) [...]

Many respondents, including some of those supporting double-blind review, did however point out that there were great difficulties in operating it in practice because it was frequently too easy to identify authors from their references, type of work or other internal clues."


You can download the full survey results here (PDF ~1.4 MB), or the summary here (PDF ~1.6 MB).


Thanks to Stefan for the pointer!

Tuesday, March 28, 2006

Peer review II

Last week, I received a manuscript that I was asked to referee. Since three other manuscripts are already lying on my desk (somewhere) waiting to be refereed, and the topic was pretty far outside my field, I decided to let the editors know that I am not suitable as a referee. Being blessed with one of these online services, I clicked on the wrong link in the email and got thanked for having agreed to referee the paper.

Now I actually have to read this thing. Though at second glance it turned out to be more interesting than I initially thought.

Anyway, over the weekend I had an engaging discussion about the publishing issue that I would like to share with you.

We all know that the pressure to publish in peer reviewed journals is not improving the quality of research. On the contrary, it leads people to favor publishing more papers of lower quality. The reasons why some papers get published whereas others don't are sometimes just mysterious, sometimes clearly due to the authors' names. Peer review, it seems, does not work as it should. And the number of publications and their citations are - at least in my opinion - not necessarily a way to single out good research. The question, however, is what can be done about it.

Some points that we came up with:

  1. The referee should get some advantage from refereeing a manuscript, in such a way that (s)he is motivated to think about the content and make reasonable suggestions. I am mostly thinking in terms of credibility. I suspect that most journals have an internal ranking for referees anyway, but what does the referee ever get out of writing good reports? One might also consider giving some kind of bonus for writing reports on time. I would happily pay $50 for actually receiving a report within 2 weeks!
  2. Being mentioned in acknowledgements should be rated more highly. People who are frequently mentioned in acknowledgements show that they are engaged in discussions, are able to understand and criticize theories, and are an active part of research. This is all the more important as those who don't want to be part of fashion waves often end up with fewer publications. As for my own papers, the quality improves significantly with every person I can discuss their content with. (Restrictions apply.)
  3. The number of citations should be normalized to the number of active workers in the field, at least approximately (see the sketch below this list).
  4. I would find it enormously helpful if the arXiv allowed reviews on papers, maybe similar to those at Amazon. You might argue that a good physicist should be able to judge the quality of a paper by himself. Though that is in principle true, it is absolutely inapplicable if you are new to a field and trying to get into it. Some kind of quality index, or references to the basic papers of the field, would help newcomers get to the central questions much faster - and with less wasted toner. In addition, the possibility of having reviews on the arXiv would make it unnecessary to have follow-up papers titled 'A note on gr-qc/...' and 'A remark on a note on ...' etc.
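
To make point 3 concrete, here is a back-of-the-envelope version of what I have in mind. All numbers are invented for the sake of illustration; estimating the real field sizes is of course the hard part:

    # Toy comparison: raw citation counts vs. counts normalized to the
    # (made-up) number of active researchers in each field.
    papers = [
        ("paper A", "hep-ph", 120),  # hot, crowded field
        ("paper B", "gr-qc", 40),    # smaller field
    ]
    field_size = {"hep-ph": 3000, "gr-qc": 600}  # assumed researcher counts

    for name, field, cites in papers:
        per_head = cites / field_size[field]
        print("%s: %d raw citations, %.3f per active researcher"
              % (name, cites, per_head))
    # paper A: 120 raw citations, 0.040 per active researcher
    # paper B: 40 raw citations, 0.067 per active researcher

By the raw count, paper A looks three times as important; normalized to the size of its field, paper B actually did better.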

Another point that I have argued against is the idea of a double-blind refereeing process. First, it does not work when papers are already on the preprint archive before being submitted to the journal. You would then have to make sure to accept only manuscripts not on the preprint server, which would make the preprint idea completely absurd. Second, it would only lead authors to write their papers such that it's clear to everyone in the field who the author is. Third, I actually do think that the credibility of the author is an input the referee might want to consider.

If you have further suggestions, let me know!

Best,

B.

Wednesday, March 15, 2006

Peer review

Folks, you might have seen this very interesting paper

Interpretation of Quantum Field Theories with a Minimal Length Scale
Authors: S. Hossenfelder

which I have discussed and discussed and discussed on New Year, then forgotten, then rediscovered on my desk, rewritten, more discussed and eventually posted on the arxiv!
Yeah! It's always a great day when a paper is so finished, SO FINISHED, it's screaming to be read by as many people as possible.

Reactions so far: none.

Except the usual email from Padmanabhan, who claims I should cite every single one of his papers, related or not. Which reminds me that I looked up his website at some point and read this absolutely great article, which made me laugh the whole day. Some quotes:

God expects us to be moral, kind to others and brush our teeth twice each day.

[Note: Since this set encompasses most of the religions, obviously there are variations in the theme; some Gods expect us to brush the teeth only once a day. Such differences of opinion, of course, have led to major conflicts and wars.]


[...] Once this is realised, it is clear why all the answers given above are correct or why all of them are incorrect and -- most importantly -- why it does not matter. In fact, all those answers -- and millions more which can be constructed -- are all the same answer in different disguises. In a way, they are not even different from the questions!

I hope that clarifies everything. Maybe I should send the latter remark as an answer to the referee's report - when I obtain one.

As a side remark: on the same day I submitted a manuscript to PRD and another manuscript to PLB. The day PRD acknowledged receipt of the manuscript, PLB accepted its manuscript for publication...