A year ago, I wrote a blogpost declaring that “academia is fucked up,” to quote myself because my words are the best words. In that blogpost, I made some suggestions for how to improve the situation, for example by offering ways to quantify scientific activity other than counting papers and citations.
But ranting on a blog is like screaming in the desert: when the dust settles you’re still in the desert. At least that’s been my experience.
Not so this time! In the week following the blogpost, three guys wrote to me expressing their interest in working on what I had suggested. One of them I never heard from again. The other two didn’t get along, and one of them eventually dropped out. My hopes sank.
But then I got a small grant from the Foundational Questions Institute and was able to replace the drop-out with someone else. So now we were three again. And I could actually pay the other two, which probably helped keep them interested.
One of the guys is Tom Price. I’ve never met him, but – believe it or not – we now have a paper on the arXiv.
- Measuring Scientific Broadness
Tom Price, Sabine Hossenfelder
Who has not read letters of recommendation that comment on a student's "broadness" and wondered what to make of it? We here propose a way to quantify scientific broadness by a semantic analysis of researchers' publications. We apply our methods to papers on the open-access server arXiv.org and report our findings.
arXiv:1805.04647 [physics.soc-ph]
In the paper we propose a way to measure how broad or specialized a scientist’s research interests are. We quantify this by analyzing the words they use in the titles and abstracts of their arXiv papers.
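For readers who want to play along, here is a minimal sketch in Python of the kind of measure we mean. This is not our actual analysis pipeline: the keyword extraction below is deliberately naive, and the function is made up for illustration. But the quantity it computes, the Shannon entropy of an author's keyword distribution, is the measure that also comes up in the comments below.

```python
from collections import Counter
from math import log

def broadness(keyword_lists):
    """Shannon entropy of an author's keyword distribution.

    `keyword_lists` holds one list of keywords per paper, e.g. pulled
    from a paper's title and abstract. Spreading the same number of
    keywords over more distinct terms raises the entropy, i.e. the
    author comes out as broader.
    """
    counts = Counter(kw for paper in keyword_lists for kw in paper)
    total = sum(counts.values())
    return -sum(n / total * log(n / total) for n in counts.values())

# Toy example: same number of keywords, different spread.
specialized = [["star-forming", "stellar mass"], ["star-forming", "stellar mass"]]
broad = [["star-forming", "stellar mass"], ["chaos", "network"]]
print(broadness(specialized))  # ~0.69
print(broadness(broad))        # ~1.39
```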
Tom tried several ways to quantify the distribution of keywords, and so our paper contains four different measures for broadness. We eventually picked one for the main text, but checked that the other ones give largely comparable results. In the paper, we report the results of various analyses of the arXiv data. For example, here is the distribution of broadness over authors:
It’s a near-perfect normal distribution!
I should add that you get this distribution only after removing collaborations from the sample, which we have done by excluding all authors with the word “collaboration” in the name and all papers with more than 30 authors. If you don’t do this, the distribution has a peak at small broadness.
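In code, this cut might look roughly as follows. This is only a sketch with hypothetical field names; the actual data handling differs in the details.

```python
MAX_AUTHORS = 30  # papers with more authors count as collaboration papers

def individual_papers(papers):
    """Drop collaboration papers: more than MAX_AUTHORS authors, or any
    author name containing the word 'collaboration'."""
    return [
        paper for paper in papers
        if len(paper["authors"]) <= MAX_AUTHORS
        and not any("collaboration" in name.lower() for name in paper["authors"])
    ]
```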
We also looked at the average broadness of authors in different arXiv categories, where we associate an author with a category if it’s the primary category for at least 60% of their papers. By that criterion, we find physics.plasm-ph has the highest broadness and astro-ph.GA the lowest one. However, we have only ranked categories with more than 100 associated authors to get sensible statistics. In this ranking, therefore, some categories don’t even appear.
That’s why we then also looked at the average broadness associated with papers (rather than authors) that have a certain category as the primary one. This brings physics.pop-ph to the top, while astro-ph.GA stays at the bottom.
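For concreteness, here is a sketch of the 60% author-to-category rule mentioned above, again with hypothetical field names.

```python
from collections import Counter

def author_category(papers, threshold=0.6):
    """Return the arXiv category that is the primary category of at least
    `threshold` of the author's papers, or None if no category qualifies."""
    if not papers:
        return None
    primaries = Counter(paper["primary_category"] for paper in papers)
    category, count = primaries.most_common(1)[0]
    return category if count / len(papers) >= threshold else None
```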
That astrophysics is highly specialized also shows up in our list of keywords, where phrases like “star-forming” or “stellar mass” are among those with the lowest broadness. On the other hand, the keywords “agents,” “chaos,” “network,” and “fractal” have high values of broadness. You can find the broadest and the most specialized keywords in the table below; see the paper for the full list.
We also compared the average broadness associated with authors who have listed affiliations in certain countries. The top scores of broadness go to Israel, Austria, and China. The correlation between the h-index and broadness is weak. Neither did we find a correlation between broadness and gender (where we associated genders by common first names). Broadness also isn’t correlated with the Nature Index, which is a measure of a country’s research productivity.
A correlation we did find though is that researchers whose careers suddenly end, in the sense that their publishing activity discontinues, are more likely to have a low value of broadness. Note that this doesn’t necessarily say much about the benefits of some research styles over others. It might be, for example, that research areas with high competition and few positions are more likely to also be specialized.
Let me be clear that it isn’t our intention to declare that the higher the broadness, the better. Indeed, there may well be cases in which broadness is distinctly not what you want. Depending on which position you want to fill or which research program you have announced, you may want candidates who are specialized in a particular topic. Offering a measure for broadness, so we hope, is a small step toward honoring the large variety of ways to excel in science.
I want to add that Tom did the bulk of the work on this paper, while my contributions were rather minor. We have another paper coming up in the next few weeks (or so I hope), and we are also working on a website where everyone will be able to determine their own broadness value. So stay tuned!
And what is Sabine's broadness rating?
My work will not appear in your data set.
Very interesting Dr. H! Seems like this technique (broadly speaking) can be applied to almost any academic field given appropriate adjustment in keywords. I think also you may have just contributed to a new academic specialty!
Sabine,
That sounds very interesting and I will read the paper in more detail when I have time.
For the time being, let me just mention my own semi-serious measure of broadness: it's called the ABCDE criterion. You get one point if you have published one or more papers in each of the Physical Review journals, PRA, PRB, etc. So your broadness score can go from 1 to 5. Currently I am an A-B-D-E guy, score=4. Nuclear physicists willing to help me publish in PRC are welcome...
Management, [(power + ignorance)/time], "efficiently" seeks worth by minimizing risk (real cost/projected value). Publication numbers game that. Currently honest semantic content will weaponize "broadness" by gaming linguistics through algorithms. Financial reports are written by QUILL, not people.
https://physics.aps.org/assets/dfd553a9-961d-4b8d-b22a-2f2068725011/e48_1.png
... Important by every measure except detection.
https://narrativescience.com/Products
... "Meeting your enterprise challenges with data-driven narratives"
https://electricliterature.com/how-to-write-elevator-pitch-novel-publicity-infographic-a8ec74ecf7ce
Good job! (A flurry of inconsequential papers will erupt.)
I'm playing devil's advocate for a moment: how did you avoid (or did you?) making adjustments to your model until you simply got the results you wanted?
Don,
I don't know - it actually didn't occur to me to look. Most likely it's average ;) But as I said we're planning to make this available on the website, so then everyone can check theirs.
Louis,
I am not sure what you mean, because it's not like we are trying to predict something, so what "results you wanted" are you referring to? We are proposing a way of quantifying something that hasn't previously been quantified, and we show that this method seems to work. As we point out in the paper, maybe there are better ways to do it. If so, I certainly hope others will propose improved measures of broadness. The paper is there to document that you can do it this way and, look, that's what you learn from it. Also, note that the appendix lists some other possible ways that we tried and a comparison between them. By and large, the results are comparable, with some exceptions that are discussed there.
Matthew,
Indeed, the method could easily be applied to other fields, which was one of the reasons we didn't want to use existing classification systems. The problem, though, is that in other fields the data isn't as easy to obtain. In any case, if we can manage to get continued funding, we might look into this in the future.
I think other fields would be difficult, but I suppose it depends on why you want the data. If you leave some people out, but they were not on your "want" list, I suppose it would not matter. If I take myself as an example, I consider myself "broad" - although I am sufficiently old not to be relevant other than as an example of the difficulties. I am basically a chemist, and have stopped publishing in scientific journals because of (a) age and (b) the end of funding. First, chemists did not have a preprint server until very recently (apart from a brief period, which failed because journals refused to publish if there had been a preprint, and so chemists did not use it), so their efforts are all over the place. I had published in physics (theory, and photophysics, which also includes dyeing and archaeology), various fields of chemistry, and in botany journals. Also, when the academic-type funding stopped, I worked and published some patents, but these are not key-worded. Worse, if you try to use a computer to find my name, you will find me mixed up with an awful lot of others who have the two major names. On the other hand, I suppose if you were head-hunting for a specific job, that could be rather useful. You will not get the lot, but you would probably find someone suitable.
simple but brilliant approach!
I would compliment someone on their "breadth", not their "broadness". I don't think I've ever heard of "broadness" before. It doesn't matter much, but this is what some people say.
https://www.thefreedictionary.com/breadth
https://wikidiff.com/breadth/broadness
John,
Yes, good point... It's not something we even thought about, but maybe you are right and we should call it "breadth". Otoh, maybe it's an advantage that it's not a word in common use? I don't have strong opinions on this. Best,
B.
"I think other fields would be difficult"
With this approach, sure, but meta-analysis is well established in the biological sciences, with the Cochrane Collaboration probably being the best known.
For what it's worth, both "breadth" and "broadness" are correct English words, both going back hundreds of years. However, today, "breadth" is much more common. (Similarly, "depth" is much more common than "deepness". "Height" and "highness" both exist as well, with, similarly, the former more common, though the latter is used today almost only as a royal appellation. Wiktionary defines it as "[t]he state of being high", which suggests even more possibilities.)
Now I am thinking we should have a "highness" score for papers...
Would spread be a good one?
https://en.wikipedia.org/wiki/Statistical_dispersion
Very interesting. However, either I am overlooking something or there is a step missing:
You define a quantity you call "broadness" in a rigorous way - but from the motivation, what you actually want to do is find a way to quantify something that people intuitively understand in some way - the "breadth of knowledge".
So how do you know that your quantity actually correlates with that understanding?
Martin,
We cannot, of course, analyze people's brains to figure out how much they know. We are instead analyzing the research interests they have publicly demonstrated in their papers.
As broad also means woman, broadness may be interpreted as womanness. Perhaps a good reason to write another paper to that effect :-)
Knowledge, ability, and creativity are neither 2D nor linear. "Spacity" is awkward. Geology won't mind: Orogeny as the process, orogenes as the outputs, orogenics as the field - "...a study of people who study things, or sometimes people."
Human Resources will bask in another diversity, "orogenic deficit reparation."
Dear Sabine,
I haven’t read the paper; I was asking because the distribution graph in your blog represents expectations so well, and you note you needed to remove collaborations and papers with more than 30 authors to achieve that result. From my prior comments you may have noted that I have a great deal of respect for your integrity and discipline with regard to separating personal bias and beliefs from scientific reasoning. What I was asking (playing devil’s advocate) is: when you decided on adjusting parameters, such as excluding collaborations and removing papers with more than 30 authors, how did you determine it was scientifically justified rather than just being changes that gave a graph you were expecting?
The graph represents what I would have expected. For a scientific paper, it’s important for me to understand how the result was not a consequence of simply changing parameters until you got the result you expected. Perhaps that is explained in the paper? I won’t be able to read it for a while, but the blog comments and the remark “It’s a near-perfect normal distribution!” prompted me to ask.
Louis,
It is actually standard procedure in the literature on bibliometric analysis to remove collaborations because they operate and publish by entirely different practices. (We also remove papers with more than 30 authors because not all collaborations actually use an author name with the word "collaboration".)
Having said that, I don't see how it matters. We could have plotted the graph with the collaborations included and then fitted a different curve to it, e.g. a normal distribution plus a power law. Presumably we could have done that for all the analyses, but it would have been much more complicated, and I don't see what insight one would have gained from it other than that collaborations are different from individual researchers. It's not like we were trying to verify or falsify the hypothesis that the curve has this or that shape, in which case your concern would be a very valid one. We were just looking at what we got and then plotted the results in a way we thought would be most useful for the reader.
I am not sure what I expected for the broadness. I wasn't convinced it was a normal distribution to such good precision until I saw the fitted curve. (I keep thinking the normal distribution is less peaked than it really is.) Why do you say you expected a normal distribution and not, I dunno, a power law?
I think that it may be time to add another degree beyond the doctorate that is for publishing papers. No requirement for masters and doctorate. Just broader and more in-depth study for masters and doctorate.
Sabine (in response to Louis),
ReplyDelete"Why do you say you expected a normal distribution and not, I dunno, a power law?"
Because a normal distribution is what one should expect when there is no reason to expect something else!
It is the minimal expectation for the distribution of uncorrelated random events. A power law would require some form of correlations, which should be justified "physically". In the absence of such extra hypotheses, the best bet is to expect a normal distribution.
opamanfred,
Scientific papers are not uncorrelated random events.
Sabine,
We are talking about the Gaussian graph you show in the post, i.e., the distribution of broadness over authors. What should be uncorrelated is the broadness of DIFFERENT authors. The fact that the scientific papers of ONE author are not uncorrelated random events is irrelevant.
What your normal distribution shows is that each author achieves his/her own level of broadness thanks to a series of random uncorrelated events, at least for the sample you considered. This is not too surprising, since the sample was probably not representative of anything in particular.
Now, it may be interesting to look at samples where some correlations could a priori be expected: for instance, the distribution of broadness for the scientists belonging to the same institution, say FIAS. Does the fact of working at FIAS (i.e., being surrounded by certain scientists) influence your broadness? If so, then correlations, and then a non-normal distribution. I think this could be valuable information for policy-makers or heads of departments, if they want to increase the broadness of their scientific workforce.
opamanfred,
You misunderstood my comment. I was trying to tell you that the papers of different authors are of course not uncorrelated. You might have heard that people often work on similar topics, yes? That they cite each other? It is also of course the case that scientists look at what others around them do and use this information to decide what is the social norm in their community. In other words, I think your hypothesis is entirely unjustified. And having said that, as I pointed out before, we didn't propose a model that we tried to test with this data, so I don't know how it matters what you or Louis think I might have expected, which, frankly, strikes me as a particularly odd example of amateur psychology. You are more than welcome to do your own data analysis. Best,
B.
"I think your hypothesis is entirely unjustified." What hypothesis? I'm just saying that since you find a normal distribution that means no correlations. It's a fact not a hypothesis.
This may be due to the fact that the scientists in your sample come from very diverse communities, so no correlation is unsurprising.
That's why I pointed out that it may be interesting to look at a more restricted sample where some correlations might be expected.
I know you don't propose a model and of course I'm not asking that you check my amateurish ideas, but when you post your research on a blog with an open comment section, you should expect that people, well, comment on it.
Best,
opamanfred,
You seem to have entirely missed what I am saying. I was responding to Louis's accusation that we find an approximate normal distribution because we expected one, and my response was to point out that, besides not expecting anything and not having proposed a model, I don't know why you'd expect that. Hence your comment that our finding a normal distribution is a fact, not a hypothesis, is correct but doesn't make any sense in context. I expect that people who comment on my comments at least try to figure out what the comment was about to begin with.
@Sabine
Sure, you cannot look into people's heads.
But it still seems dangerous to me to use the name of a quantity for which we have an intuitive understanding for another quantity without checking (at least for a set of examples) that what you measure and call "broadness" actually corresponds to broadness.
It's a similar problem to things like IQ measurement - if you define intelligence from the IQ, then suddenly "being intelligent" means "being able to solve IQ tests".
It is a bit like what Kahneman calls "answering the easier question".
And if this criterion were to become widely adopted, sooner or later we would only be testing how creatively people can pepper their papers with words that suggest broadness.
(I once blogged about this - in German - here http://scienceblogs.de/hier-wohnen-drachen/2015/11/05/klausuren-aktien-und-das-messproblem/?all=1)
None of this in any way diminishes the interesting work you did.
MartinB,
I don't find it particularly insightful to debate words. If you don't like broadness, use something else.
Interestingly, you seem to have assumed that higher broadness is better. We aren't suggesting any such thing. Broadness may be desirable for some projects or jobs, and specialization may be preferable for other projects or jobs. Sure, there are always people who will try to game the system to their advantage. And sure, maybe measuring broadness will become less useful because of that at some point. But I don't think that's a reason to not even consider it in the first place.
It's not so much about the word per se - but the paper clearly states as motivation that "broadness" is a term that is applied to people ("student's broadness") and that it presents a way to measure it:
ReplyDelete"We here propose a way to quantify scientific broadness by a semantic analysis of researchers' publications."
So I would expect the paper to at least try to show that what is measured actually corresponds to what is commonly understood as "broadness"; otherwise you are making a mistake similar to that of people who think that "action" in physics has something to do with "action" in everyday life.
I did not want to imply that broadness is always good - the example at the end was meant to illustrate that using a definition like this more or less automatically changes the concept defined because people will adapt to it. (Similar to people being able to prepare for IQ tests.) So if in some field low broadness is the accepted norm, people will take words that increase their broadness score out of their papers.
B.
I read your paper (well, yours and Tom's!) last night. It will be interesting to see whether this method is tested by others or adopted.
In any event, you are experiencing in this thread something I'm sure you know comes with contributing something new. I believe it goes:
No Good Deed Goes Unpunished.
Well done.
sean s.
I pretty much agree with Uncle Al, especially with respect to weaponizing, but I also agree with you that Publish or Perish seriously undermines scientific integrity.
I don't quite understand how anything short of being right about your theories (which I think you are) can override the system.
But until you are proven right:
https://www.youtube.com/watch?v=8PaoLy7PHwk
I agree with MartinB that it is not trivial to see how well the "broadness" you define corresponds to what we commonly mean - e.g., reading the paper, you clearly use the word "simple" very differently from most! Just to clarify - if an author has published 5 papers each in categories A, B, C, and D - would that be broader than if the author had published 10 papers in A but still 5 each in B, C, and D? Because that is how I understood it (correct me if I'm wrong). I think the term "broadness" for many would imply that the author is equally broad in both cases. A simple example like this in the paper would help to clarify.
ReplyDeletealphapsa,
As I said above, you are more than welcome to define your own measures if you think they are better. Indeed (as I explained in my earlier post), the long-term vision is to offer people a way to assemble their own measures.
For the example you mention, if the researcher has published more papers in A than in B,C, and D, he or she has expressed more interest in A than in the other fields and hence their interests are less broad, or more specialized (on A), respectively.
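As a quick numerical check, using the Shannon entropy measure that the next commenter mentions, and treating the four categories themselves as the keywords:

```python
from math import log

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * log(c / total) for c in counts)

print(entropy([5, 5, 5, 5]))   # ~1.386: five papers in each of A, B, C, D
print(entropy([10, 5, 5, 5]))  # ~1.332: ten papers in A, five each in B, C, D
```

The author with the extra papers in A indeed comes out with the lower value, i.e. as less broad.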
Hi Sabine,
I was skimming through the article and I noticed that you define broadness as the Shannon entropy of the keyword distribution you associate with an author.
I was wondering whether you get a Gaussian out of your fit because of this choice: the Shannon entropy is more or less a sum of independent p log(p) terms associated with the keywords. So for each author you have a sum over a lot of numbers, independently across the ensemble of authors. It seems reasonable to expect the central limit theorem to kick in and give you a Gaussian.
I know this is somewhat vague, but maybe a starting point...