Thursday, October 16, 2008

Distorted Science

Following some links in Stefan's recent 'This and That', I came across this interesting paper:
    Why Current Publication Practices May Distort Science
    By Neal S. Young, John P. A. Ioannidis, Omar Al-Ubaydli
    PLoS Med 5(10): e201


    Abstract: This essay makes the underlying assumption that scientific information is an economic commodity, and that scientific journals are a medium for its dissemination and exchange. While this exchange system differs from a conventional market in many senses, including the nature of payments, it shares the goal of transferring the commodity (knowledge) from its producers (scientists) to its consumers (other scientists, administrators, physicians, patients, and funding agencies). The function of this system has major consequences. Idealists may be offended that research be compared to widgets, but realists will acknowledge that journals generate revenue; publications are critical in drug development and marketing and to attract venture capital; and publishing defines successful scientific careers. Economic modelling of science may yield important insights.

Though the authors talk about 'science' in general throughout, upon closer look it turns out that their concern is actually biomedical research. This becomes particularly clear if one looks at one of the authors' previous essays, “Why Most Published Research Findings Are False” (PLoS Med 2(8): e124), which is primarily concerned with the lack of reproduction of findings in various fields of the life sciences, the failure to pay sufficient attention to negative findings, and the tendency to tamper with samples to 'improve' statistical significance, with the effect of skewing results.

However, I do think that some of the problems the authors raise are also present in my field of research. For example, they point out that “For much (most?) scientific work, it is difficult or impossible to immediately predict future value, extensions, and practical applications,” but an early and quite rigorous selection and branding process nevertheless takes place. By 'branding' they mean getting published in journals with a high impact factor, after a selection through peer review and editors. Since obtaining a high-quality 'brand' is important, scientists adopt strategies to succeed according to these criteria. Meanwhile, journals “strive to attract specific papers, such as influential trials that generate publicity and profitable reprint sales” and try to increase their impact factor:

“Impact factors are widely adopted as criteria for success, despite whatever qualms have been expressed. They powerfully discriminate against submission to most journals, restricting acceptable outlets for publication. “Gaming” of impact factors is explicit. Editors make estimates of likely citations for submitted articles to gauge their interest in publication.”
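
For those unfamiliar with the metric, a journal's two-year impact factor is simply the mean number of citations received this year by the articles it published in the previous two years. Below is a minimal sketch of that arithmetic in Python; the function name and the figures are made up for illustration, not taken from the paper:

    # Two-year impact factor: citations received in year Y to items
    # published in years Y-1 and Y-2, divided by the number of
    # 'citable items' published in those two years.
    def impact_factor(citations_prev_two_years, citable_items_prev_two_years):
        return citations_prev_two_years / citable_items_prev_two_years

    # Hypothetical journal: 1200 citations in 2008 to papers from
    # 2006-2007, which comprised 400 citable items.
    print(impact_factor(1200.0, 400.0))  # 3.0

Seen this way, the “gaming” the authors describe is unsurprising: editors who estimate likely citations for submitted articles are directly estimating the numerator of this fraction.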

They proceed by discussing an economic analogy in which the forced selection of a few research findings as suitable for the coveted publication in high-impact journals is an example of 'artificial scarcity': even though a commodity (here, journal publication) exists in abundance, access to it, its distribution, or its availability is restricted to make it rare and raise its value.

This has obvious disadvantages. The importance of getting published can lead to increased conformity of research interests (“herding”), since outlying topics are risky, and it favours the publishing of new and surprising findings over negative results that would attract less attention: “Negative or contradictory data may be discussed at conferences or among colleagues, but surface more publicly only when dominant paradigms are replaced.”

Much of this might sound familiar to you from my earlier post We have only ourselves to judge on each other, where I argued that increasing pressure (like financial and peer pressure) has the side effect that marketing tactics become important for scientific topics, which comes at the expense of objectivity and open criticism.

The authors then go on to examine their criticism of the current publishing system more closely and offer 10 options to deal with the problems, which I want to briefly comment on:

Potential Competing or Complementary Options and Solutions for Scientific Publication
  1. Accept the current system as having evolved to be the optimal solution to complex and competing problems.
  2. Promote rapid, digital publication of all articles that contain no flaws, irrespective of perceived “importance”.
  3. Adopt preferred publication of negative over positive results; require very demanding reproducibility criteria before publishing positive results.
  4. Select articles for publication in highly visible venues based on the quality of study methods, their rigorous implementation, and astute interpretation, irrespective of results.
  5. Adopt formal post-publication downward adjustment of claims of papers published in prestigious journals.
  6. Modify current practice to elevate and incorporate more expansive data to accompany print articles or to be accessible in attractive formats associated with high-quality journals: combine the “magazine” and “archive” roles of journals.
  7. Promote critical reviews, digests, and summaries of the large amounts of biomedical data now generated.
  8. Offer disincentives to herding and incentives for truly independent, novel, or heuristic scientific work.
  9. Recognise explicitly and respond to the branding role of journal publication in career development and funding decisions.
  10. Modulate publication practices based on empirical research, which might address correlates of long-term successful outcomes (such as reproducibility, applicability, opening new avenues) of published papers.

My thoughts on these suggestions are:

  1. Though I suspect the authors included this point to have the reader realize that it is not a preferable option and thus that action is required, one should keep in mind that, after all, the system is so far not a complete disaster. People keep telling me peer review is failing, journals are dead, and other slogans of that sort. But I can easily imagine ways to make the situation even worse, and everybody who wants to 'improve' the system should make sure that improvement doesn't have unwanted side effects that are even more distorting.
  2. I can't help but think the authors missed an essential point with this claim. Even if one forgets about the impact factor, there is a huge pressure to publish, period. Scientists are publishing more and more, and one of the main reasons peer review is struggling, I think, is that the more papers are produced, the less time there is to review them. Add to this the problem that this time-consuming process isn't well acknowledged. Rapid distribution of all papers that 'contain no flaws', irrespective of their importance, sounds like a great goal, but it won't be feasible without changes in the publication and review culture. For example, one could consider my suggestion to introduce a division of labor in these tasks.
  3. As much as I am in favor of publishing negative results, I don't see why they should be preferred over positive results. A result is a result and should be treated equally.
  4. Well, yes, of course. That's what science is all about. You don't decide whether a paper is worth publishing depending on whether you like the result or not.
  5. That is an interesting suggestion indeed, one that would directly discourage exaggerated claims. I just don't know how it could be done in practice.
  6. This is already the case. More and more journals offer supplements of various types.
  7. Another very good suggestion. This can however only work if such reviews and summaries are acknowledged by the community as an important contribution. Otherwise they will just be regarded as a waste of time, time in which one could do 'real research'.
  8. That would indeed be good, but again it remains unclear to me how to achieve it. As usual, however, I am skeptical about setting incentives in a specific way, because if this is done without any sensible feedback mechanism, the incentives might develop a life of their own and eventually backfire into counterproductive strategies, leaving people aiming to fulfil given secondary criteria instead of primary goals (for more on secondary criteria and primary goals, see The Marketplace of Ideas).
  9. That is basically what I usually refer to as creating awareness of the problem. Without that, nothing works. The paper does quite a good job of that.
  10. That's a question of science policy, together with the need for more studies of which practices lead to desirable long-term outcomes. I totally agree this is a topic that needs to be paid more attention, and in a practical way, such that results lead to change. As much as I like the general sense of the paper, on the practical side it is somewhat weak.

The authors conclude by asking whether we have created a system for the exchange of scientific ideas that best serves “the striving for knowledge,” the “search for truth,” and “the free competition of thought.” It seems pretty clear that the present system does not. The topic investigated in the article is a good example of what I mean by the management and organization of knowledge. Scientific publishing is an essential ingredient in structuring and disseminating scientific knowledge, and we should pay close attention that this process does not negatively affect the way scientists choose or present research topics. I find it overly simplistic, though, to put the blame on the publishers. To a much larger extent the problem is caused by scientists accepting the system they are faced with and complying with the pressures exerted on them without asking about the long-term effects.



Neal S. Young, John P. A. Ioannidis, Omar Al-Ubaydli (2008). Why Current Publication Practices May Distort Science. PLoS Medicine 5(10): e201. DOI: 10.1371/journal.pmed.0050201

10 comments:

  1. While one may seek legitimacy in scientific evidence, then most certainly. But one still cannot be so naive as to "not recognize" a framework.

    There are always some interesting conclusions that can be based on economics, considering that consumerism will and does dictate an outcome (confidence).

    If such a force is gathered, then money can be allocated to the responsibility and direction that such "democratic institutions" would seek to mandate.

    This assumption is based on a "recognition of the economic realization" that sets root causes as mathematical descriptions of trade, and an assessment of the sociological and psychological directions inclined by persuasion through recognition of consumerism.

    Nobel Prizes in Economics do not exclude a mathematical framework? :)

    Best,

  2. “Offer disincentives to herding and incentives for truly independent, novel, or heuristic scientific work.”

    Grant funding embraces prior performance and citation index support. Young faculty and original ideas are analytically unfundable. "Lacks precedent." Herd or be unheard.

    Proton magnetic moment, a Dirac equation spin 1/2 particle, had to be one nuclear magneton. Then... Zeits. f. Physik 85 4 (1933). Do opposite parity atomic mass distributions violate the Equivalence Principle?

  3. Regarding nos. 3 and 4: this is a reflection of their experience with the biomedical sciences. It was shown 3 to 4 (?) years ago that there was a definite positive bias in results, esp. in submissions to the FDA. There is more urgency here, since unlike the majority of theoretical physics, biomedical research is actually important, as it finds its way into clinical practice (I'm a physicist - that really pained me).

    Further, negative results are generally more robust and help to focus on work that DOES produce positive results.

  4. Hi Anonymous,

    Yes, I figured that from Ioannidis' earlier paper. However, the way the whole paper is formulated raises the impression that it speaks for 'all of science'. They maybe should have made clearer that by 'negative results' they were referring to data samples. What I was trying to say is that I don't think this is a useful recommendation for other fields; I would think, e.g., that a proof of 'X is true' is as worth publishing as one of 'X is wrong'.

    It is a bit annoying if, e.g., the Wall Street Journal sweepingly extends these problems to other areas with headlines like "Most Science Studies Appear to Be Tainted By Sloppy Analysis". I don't have the impression that this is the case in experimental physics. It takes a long time and numerous reproductions before a new result becomes accepted.

    Best,

    B.

  5. Hi Plato,

    I don't doubt one can draw interesting conclusions based on economic considerations. I actually think that overall their comparison is quite accurate. What I was trying to say is that if you institutionalize simplistic measures, you will always get a trend of people striving to fulfil those measures. If there is no learning process within the system, it will sooner or later run into absurdity. That's the case with our economy (in many regards), that's the case with our political systems, that's the case with the academic system. When will we ever learn?

    Best,

    B.

  6. Hi Bee,

    As I've never written a paper, and at best have only aided in the past in the production of a few, I really have no credentials to speak. Despite this, it appears to me that since these authors deal primarily with biomedical research and you with theoretical physics, there is a difference in terms of feedback, as it relates to the methodologies, that should be considered. In the biomedical field, experiment still largely has an impact by way of limiting what can be considered reasonable, while in your discipline experiment provides few clues beyond the scope and general direction of the questions to be answered, and little help in determining a solution.

    Likewise it has been evident for some time that to a large degree this is not about to change for physics, and as such I feel that many of the problems and frustrations relate directly to this. In general one could say that biomedical research is still largely an inductively driven science, whereas theoretical physics is becoming a largely deductively guided one. The problem I see is that with any deductive process the premise is the key, and to a large degree in theoretical physics this difference has been greatly downplayed. What I mean by this is that foundational issues are considered by many to be nothing or little more than metaphysics and thus for the greatest part ignored. I've always believed this area must be more heavily concentrated on and considered if theoretical physics is to go much further in any practical way. As the focus of the authors' paper doesn't even consider this, I'm then not surprised it doesn't prove to be much help.

    Best,

    Phil

  7. Hi Phil,

    Yes, I guess the difference between inductively and deductively driven fields is one point. Besides this, it probably also makes a difference how 'hard' a field of science is, with physics classically being considered the hardest of all. I never really figured out how to pin this distinction down though.

    Regarding fundamental research, it sometimes pains me a great deal that foundational questions are often justified only through their potential applicability at later times. I wonder whatever happened to knowledge for the sake of knowledge. If you'd plot the broader interest in a research field (like, for the general public) against its applicability, I'd think it doesn't drop monotonically with less applicability, but is rather u-shaped. It is easy and tempting to dismiss 'metaphysical' questions as being unproductive, but this fails to take into account that, so I believe, we all spend time wondering who we are, where we come from, and where we go (well, where I'll be going is certainly a question that's presently on my mind ;-). It's a permanent search to which scientists have a contribution to make.

    You are right that this is to a large extent a deductive process, and thus one would think it is even more important that we pay very close attention to the corrective processes in our communities functioning optimally, be that in peer review or debate. Unfortunately, I don't have the impression that this is the case, and some of the reasons the authors of the above paper discussed (e.g. about 'herding') are very much to the point, I think (and have been raised previously by other people in other contexts).

    Best,

    B.

  8. Hi Bee,

    “I'd think it doesn't drop monotonically with less applicability, but is rather u-shaped.”

    That's an interesting way to look at it, as at the same time it suggests why it may so often be ignored; for this is sort of like looking at a bell curve from the opposite direction. On one end you have the highly interested general public, and on the other the deeply concerned and thus strongly motivated theorists; both groups are in the minority, and yet what they are concerned with is so important to progress. This then of course leaves the majority gravitating to the middle, where nothing much seems to change or happen.

    It could likewise be said that the existence and maintenance of either is codependent. On this point we most certainly agree, for I have long believed that the well-being of science can only be assured by the existence and fostering of a public well informed enough to remain curiously interested.

    Best,

    Phil

  9. Hi Bee,

    “I wonder whatever happened to knowledge for the sake of knowledge.”

    Just as a belated postscript to your comment above: I have often wondered the same. This of course translates another way as asking what's the good of it if it has no utility. What's often missed is that, for most, utility comes down to asking how it can be made to work for us, ignoring that such knowledge can render insight into why we are as we are, or become what we become. I would insist this to be the greater utility rather than the lesser. The interesting thing about it all is that this has happened before, as when Greek philosophy/science was adopted by the Romans yet was primarily used only in a technical and materialistic manner. One could say the same has transpired from the Renaissance to now.

    Best,

    Phil

  10. These days there is some discussion, on the sci.physics.research and sci.physics.foundations newsgroups, about how 21st century physics is being distorted.

    The debate is based on the following report:

    Abstract: This report presents a nonidealized vision of 21st century science. It handles some social, political, and economic problems that affect the heart of scientific endeavour and are carrying important consequences for scientists and the rest of society.

    The problems analyzed are the current tendency to limit the size of scholarly communications, the funding of research, the rates and page charges of journals, the wars for the intellectual property of the data and results of research, and the replacement of impartial reviewing by anonymous censorship. The scope includes an economic analysis of PLoS' finances, the wars APS versus Wikipedia and ACS versus NIH, and a list of thirty four Nobel Laureates whose awarded work was rejected by peer review.

    Several suggestions from Harry Morrow Brown, Lee Smolin, Linda Cooper, and the present author for solving the problems are included in the report. The work finishes with a brief section on the reasons to be optimists about the future of science.

    http://www.canonicalscience.org/en/publicationzone/canonicalsciencetoday/20081113.html

