
Wednesday, April 10, 2013

Proximate and Ultimate Causes for Publication

I am presently reading Steven Pinker’s “Blank Slate”. He introduces the terms “proximate cause” and “ultimate cause,” a distinction I find enlightening:
“The difference between the mechanisms that impel organisms to behave in real time and the mechanisms that shaped the design of the organism over evolutionary time is important enough to merit some jargon. A proximate cause of behavior is the mechanism that pushes behavior buttons in real time, such as the hunger and lust that impel people to eat and have sex. An ultimate cause is the adaptive rationale that led the proximate cause to evolve, such as the need for nutrition and reproduction that gave us the drives of hunger and lust.” ~Steven Pinker, The Blank Slate: The Modern Denial of Human Nature, p 54.

It is the same distinction I have made in an entirely different context between “primary” and “secondary” goals, my context being the use of measures for scientific success. In Pinker’s terminology then, enhancing our understanding of nature is the “ultimate cause” of scientific research. Striving to excel according to some measure for scientific success – like the h-factor, or the impact factor of journals on one’s publication list – is a “proximate cause”.


The comparison to evolution illuminates the problem with introducing measures for scientific success. Humans do not, in practice, evaluate each of their actions for its contribution to the ultimate cause. Instead, they use readily available simplifications that previously proved to be correlated with the ultimate cause. Alas, over time the proximate cause might no longer lead toward the ultimate cause. Increasing the output of publications contributes no more to our understanding of nature than deep-fried butter on a stick contributes to health and chances of survival.

There is an interesting opinion piece, “Impacting our young,” in the Proceedings of the National Academy of Sciences of the USA (ht Jorge) that reflects on the impact that the use of measures for scientific success has on the behavior of researchers:
“Today, the impact factor is often used as a proxy for the prestige of the journal. This proxy is convenient for those wishing to assess young scientists across fields, because it does not require knowledge of the reputation of individual journals or specific expertise in all fields… [T]he impact factor has become a formal part of the evaluation process for job candidates and promotions in many countries, with both salutary and pernicious consequences.

Not surprisingly, the journals with the highest impact factor (leaving aside the review journals) are those that place the highest premium on perceived novelty and significance. This can distort decisions on how to undertake a scientific project. Many, if not most, important scientific findings come from serendipitous discovery. New knowledge is new precisely because it was unanticipated. Consequently, it is hard to predict which projects are going to generate useful and informative data that will add to our body of knowledge and which will generate that homerun finding. Today, too many of our postdocs believe that getting a paper into a prestigious journal is more important to their career than doing the science itself.”
In other words, the proximate cause of trying to publish in a high impact journal erodes the ultimate cause of doing good science.

Another example of a proxy that distracts from recognizing good science is paying too much attention to research coming out of highly ranked universities, “highly ranked” according to some measure. This case was recently made eloquently in Nature by Keith Weaver in a piece titled “Scientists are Snobs” (sorry, subscription only):
“We all do it. Pressed for time at a meeting, you can only scan the presented abstracts and make snap judgments about what you are going to see. Ideally, these judgments would be based purely on what material is of most scientific interest to you. Instead, we often use other criteria, such as the name of the researchers presenting or their institution. I do it too, passing over abstracts that are more relevant to my work in favor of studies from star universities such as Stanford in California or Harvard in Massachusetts because I assume that these places produce the “best” science…

Such snobbery arises from a preconceived idea that many scientists have – that people end up at smaller institutions because their science has less impact or is of lower quality than that from larger places. But many scientists choose smaller institutions for quality-of-life reasons…”
He goes on to explain how his laboratory was the first to publish a scientific finding, but “recent papers… cited only a more recent study from a large US National Institutes of Health laboratory. Losing this and other worthy citations could ultimately affect my ability to get promoted and attain grants.”

In other words, using the reputation of institutions as a proxy for scientific quality does not benefit the ultimate goal of doing good science.

Now let us contrast these problems with what we can read in another recent Nature article, “Beyond the Paper,” by Jason Priem. He brushes such concerns aside as follows:
“[A] criticism is that the very idea of quantifying scientific impact is misguided. This really will not do. We scientists routinely search out numerical data to explain everything from subatomic physics to the appreciation of Mozart; we cannot then insist that our cogitations are uniquely exempt. The ultimate judge of scientific quality is the scientific community; its judgements are expressed in actions and these actions may be measured. The only thing to do is to find good measures to replace the slow, clumsy and misleading ones we rely on today. The great migration of scholarship to the Web promises to help us to do this.”
This argument implicitly assumes that making use of a quantifiable measure for scientific impact does not affect the judgement of scientists. But we have every reason to believe it does, because it replaces the ultimate cause with a proximate cause, the primary goal with a secondary goal. (Priem's article is otherwise very interesting and readable; I recommend you give it a closer look.)

I’m not against using measures for scientific success in principle. But I wish people would pay more attention to the backreaction that comes from doing the measurement and thereby providing people with a time-saving, simplified substitute for the ultimate goal of doing good science.

23 comments:

  1. "New knowledge is new precisely because it was unanticipated." Discovery is apostasy that survives management, Chandrasekhar vs. Eddington. Gravitation theory elegantly excludes geometric chirality (opposite shoes violating the Equivalence Principle). Physical theory must survive empirical falsification.

    1) This is not the solution we are seeking. 2) There is no precedent. If it were of value, somebody else would have done it. 3) It contradicts theory. Bell Labs was murdered with monthly DCF/ROI metrics.

  2. What's really interesting is that this backreaction, the measure affecting the very behaviour it was derived from in the first place, is essentially the same fundamental problem found in capitalism. Financial activity uses measures of value determined by the market to allocate capital where it is most needed, but that measure becomes the standard by which value is quantified, which in turn affects the behaviour of investors. George Soros calls this reflexivity. You can also find parallels in game theory scenarios.

  3. /*In other words, the proximate cause of trying to publish in a high impact journal erodes the ultimate cause of doing good science.*/

    It's nothing new for me, and you can find hundreds of references for it at my blog. For example, the Nature article points to the fact that scientists tend to publish unoriginal research (with many references to earlier work) rather than new, potentially controversial research (with few references to earlier work). The PLOS article argues that scientists tend to publish positive rather than negative results (and least of all those denying existing theories), because they can get more citations for them. And so on. The contemporary decadence of science is deeply hardwired into its system of preferences.

  4. This comment has been removed by the author.

  5. Regarding the "Proximate and Ultimate Causes": the dense aether model distinguishes causality mediated by transverse waves from causality mediated by longitudinal waves (i.e., deterministic and emergent causality). These two causalities are mutually dual, and each negates the other up to a certain level.

  6. Hi Bee,

    As you recognize, there is an evolutionary discourse here about the process of that scientific evolution. One does not just have to be creative with a brush stroke, but can use written examples to accomplish the same artistic adventure and stroke?:)

    So you stand "in front/blog" with your blackboard/whiteboard.

    We are not asking what you as a blank slate represent, but have decided to design this question in relation to what exists in terms of the existence of the soul as having no before?

    There is a division here about what might have been implied by Meno and the slave boy. Lee understands this point from a symmetrical standpoint. I think as being outside of the question and regard of that science.?:)

    So that is where one begins, and you discard the emergent process(all the history) and quickly surmise the science when experiment is done will lead us further? You are just then dealing with this cosmos.

    Best,

  7. Hi Tevong,

    That is a good point but I think it's only partly true. In capitalism, at least, you have some reason to believe that optimizing profit will lead to a good allocation of capital, in the sense that it's conducive to progress. You can then question whether or not this model accurately describes the world, but that's another question. When it comes to the academic system, though, we don't even have that. We really have *no* reason to believe that optimizing one or the other measure will lead to a good allocation of researchers over research fields. No reason other than some vague handwaving by people like Priem.

    I've said this a couple of times before, but we're really missing a model for knowledge discovery here. Best,

    B.

  8. And don't forget the enormous psychological pressure on scientists to come up with novel papers 2 or 3 times a year.

    I don't know how they handle this kind of pressure but I can imagine how these people feel.

    Moreover, such pressure will by definition mainly produce garbage and only a few good papers.

  9. Giotis,

    Yes, you are right of course.

    There are the usual tricks: a) people try to publish more or less the same article repeatedly. I know several examples where people have a whole list of published papers that are more or less indistinguishable in content. (Ironically, they sometimes get flagged on the arXiv for "content overlap", yet the publishers either don't care or don't notice.) And b) they split papers into units as small as possible, a tactic that has even gotten its own Wikipedia entry. Needless to say, this contributes to neither the quality nor the actual quantity of research.

    The interesting question, though, is: if this is so widely known and so widely recognized as nonsense, why do people still engage in it? The only answer I can come up with is that the proximate causes combine with what I recently learned is called pluralistic ignorance. I.e., we all say we disapprove, but still think we have to play by the rules because everybody else does. Best,

    B.

  10. Hi Sabine

    Do you do personal favors to each other also? For example put someone as a collaborator in a paper although he didn't really contribute...

  11. "There's the usual tricks in that a) people try to publish more or less the same article repeatedly. I know several examples where people have a whole list of published papers that are in content more or less indistinguishable."

    John Bahcall was honest. Like many, he had a publication list with numbered articles, divided into "refereed journals", "conference proceedings", etc. Like many, he often gave the same talk or very similar talks at more than one conference. These were all listed in his publication list, but under the same number, with a letter to differentiate them.

  12. "Do you do personal favors to each other also? For example put someone as a collaborator in a paper although he didn't really contribute..."

    Of course not.

  13. Of all absurdities in bibliometry, the impact factor is one of the most absurd. A high impact factor means a high average number of citations per article. Note "average". In reality, a few articles in such journals have a very high number of citations and most have very few (see the numerical sketch after this comment). The obvious thing to do, assuming, and that's a big assumption, that a high number of citations indicates quality, or at least relevance, is to count the citations for the article in question. Why this isn't done is beyond me. (OK, if the article is very new, there won't have been enough time for citations to accumulate, but this is a rare case.) Valuing impact factor when hiring someone is like hiring someone as a surgeon because he lives next door to a good surgeon.

    The proximate/ultimate problem is described in Pirsig's Zen and the Art of Motorcycle Maintenance when Phaedrus, teaching a class, suggests doing away with marks ("grades" in the American original). A student replies: "You can't do that; after all, that's what we're here for!"

    Evolution is a good example. In some cases, previously useful behaviour is no longer useful, or is even counterproductive; examples would be mistrusting strangers, unquestioning belief in authority, etc. In other cases, like sex, the proximate behaviour can become an end in itself and be enjoyed without any ultimate results. Certainly better than deep-fried butter on a stick. (One could even combine the two, but I digress.)


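To put rough numbers on the point about averages in the comment above, here is a minimal Python sketch with invented citation counts for a hypothetical journal (the figures are made up purely for illustration, not taken from any real journal). It shows how a mean-based figure such as the impact factor can sit far above what a typical article in the journal actually receives.

```python
# Minimal sketch (hypothetical numbers): why a journal-level average such as
# the impact factor says little about a typical article. Citation counts are
# highly skewed, so the mean is dominated by a handful of heavily cited papers.

import statistics

# Invented citation counts for 20 articles in one hypothetical journal:
# a few blockbusters, many barely cited papers.
citations = [310, 120, 45, 12, 9, 7, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0]

mean_citations = statistics.mean(citations)      # what an impact-factor-like average reflects
median_citations = statistics.median(citations)  # what a typical article actually gets

print(f"mean (impact-factor-like): {mean_citations:.1f}")   # pulled up by the few blockbusters
print(f"median (typical article): {median_citations:.1f}")  # 3.0
print(f"articles below the mean: {sum(c < mean_citations for c in citations)} of {len(citations)}")  # 17 of 20
```
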
  14. I am presently reading Steven Pinker’s “Blank Slate”.

    Please provide a review when you have finished the book.

  15. Hi Giotis,

    Some of my early papers have co-authors that didn't actually contribute to the paper. If you're a student, there isn't really much you can do about this. All my papers past the PhD have only coauthors who actually contributed to the paper. If you look at my publication list, though, you'll notice that I don't have a lot of coauthors altogether. I don't like having to drag people along and often end up doing things myself.

    It is very common, though, at least in the community I come from, that "the guy with the grant" becomes a coauthor whether or not he contributed anything to the paper, because there is a report that has to be written explaining what came out of the grant, etc. You know how this game goes. It's not a good arrangement, but as long as funding agencies remain narrow-minded, this isn't going to change. Best,

    B.

  16. Hi Phillip,

    Sure, I'll write a review. It will take a while though, I'm only making very slow progress. Best,

    B.

  17. Hi Phillip,


    "Of all absurdities in bibliometry, impact factor is one of the most absurd... Valuing impact factor when hiring someone is like hiring someone to be a surgeon because he lives next door to a good surgeon."

    I think the impact factor was originally meant as a help for librarians who might not know much about the research areas in question but have to decide which journals to keep and which to toss. It has become a proxy for quality in other contexts because it exists, is available and, most of all, is simple. That really is, I think, the most relevant reason why it's being used: it's available and it's simple. Plus the belief that "everybody does it". Note, btw, how Weaver starts his article with the "everybody does it" argument. Best,

    B.

  18. "Some of my early papers have co-authors that didn't actually contribute to the paper. If you're a student, there isn't really much you can do about this."

    But that wasn't you doing the co-author a favour, at least not in the sense of putting him on just to do him a favour; as you mention later, it was probably a case of the grant holder being on the paper to justify the grant to the funding agency.

  19. Using the impact factor of a journal to determine whether one should subscribe to that journal is probably OK. The problem comes when valuing people who have papers in a high-impact-factor journal, even if their papers there don't have any citations. A classic example of the proximate being confused with the ultimate.

  20. Another tactic is to have many BSc/MSc/PhD students and to supervise many post-docs. Each will produce at least one paper, and the supervisor will have his/her name on it. I know several people who not only use this tactic, but also have a large set of "collaborators". E.g., you write a paper and put 5 other people on it. Then each of the 5 other people will do the same. This arrangement lets each of them publish at least 6 papers while having actually worked on only one.

  21. Hi Christine,

    Yes, this is in fact quite common. This partly plays into the problem of needing grants to justify your existence. One has to add, though, that the supervisors are only partly to blame for this, especially when it comes to students. In many places the department gets a certain amount of money per student, and each of these students has to get a topic for a master's thesis. So that kind of forces professors to spit out suitable topics. It is a quite recent trend, and one that is most pronounced in physics, that these works are actually expected to be publishable original research. That clearly didn't use to be the case. The PhD thesis used to be the first instance in which an aspiring researcher had to show they're able to work independently. Now it is quite common that students, by the time they have a PhD, already have a bunch of published papers. I don't think that's a good trend, because it requires people to specialize too early, imo. Best,

    B.

  22. Yes, Sabine, true.

    Requirements here vary, but all state that to get the title a PhD candidate should have some number of papers in international journals (how many depends on the university), among other requirements. Now even some MSc programs require a published paper. In my case, I had one international publication in my MSc and 3 in my PhD (my supervisor said that 3 was his "number" for approval)... Here the trend is to skip the MSc altogether and go directly to the PhD, as in the US. That is, to "gain time"...

  23. "E.g., you write a paper and put other 5 people on it. Then each of the 5 other people will do the same. This agreement will make each publish at least 6 papers in this case, having each actually worked in only one."

    This is another reason, assuming one is doing evaluation by bibliometry at all, to divide the relevant metric by the number of authors (see the sketch after this comment). All else being equal, anyone who can't see that 10 first-author papers with a total of 1000 citations is better than 100 10-author papers with a total of 5000 citations, and much better than 10 100-author papers with a total of 30,000 citations, is not qualified to be in a position to make the decisions being made.

    A realistic comparison would have 30 100-author papers with maybe 10,000 citations---100 per author and about 330 per paper---and many people would see this as being worth more than 10 single-author papers with just 500 citations, though the latter is 5 times more per author and almost twice as many per paper.


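To make the division-by-authors rule from the comment above concrete, here is a minimal Python sketch using the three hypothetical publication profiles mentioned there. Splitting each paper and its citations evenly among its co-authors is one simple fractional-counting convention, chosen here only for illustration.

```python
# Minimal sketch of author-normalized (fractional) counting for the
# hypothetical profiles from the comment above. The even split of papers
# and citations among co-authors is an illustrative assumption.

def fractional_counts(n_papers, authors_per_paper, total_citations):
    """Fractional paper count and normalized citations for one author who
    appears on all n_papers, with credit split evenly among co-authors."""
    return n_papers / authors_per_paper, total_citations / authors_per_paper

profiles = {
    "10 single-author papers, 1,000 citations": (10, 1, 1000),
    "100 ten-author papers, 5,000 citations": (100, 10, 5000),
    "10 hundred-author papers, 30,000 citations": (10, 100, 30000),
}

for label, args in profiles.items():
    papers, cites = fractional_counts(*args)
    print(f"{label}: {papers:g} fractional papers, {cites:g} normalized citations")

# Raw citation totals rank the profiles 30,000 > 5,000 > 1,000, but after
# dividing by the number of authors the order reverses: 1,000 > 500 > 300.
```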

