Showing posts with label Academia.

Wednesday, March 22, 2017

Academia is fucked-up. So why isn’t anyone doing something about it?

A week or so ago, a list of perverse incentives in academia made the rounds. It offers examples like “rewarding an increased number of citations” which – instead of encouraging work of high quality and impact – results in inflated citation lists, an academic tit-for-tat that has become standard practice. Likewise, rewarding a high number of publications doesn’t produce more good science, but merely finer slices of the same science.

Perverse incentives in academia.
Source: Edwards and Roy (2017).

It’s not like perverse incentives in academia are news. I wrote about this problem ten years ago, referring to it as the confusion of primary goals (good science) with secondary criteria (like, for example, the number of publications). I later learned that Steven Pinker made the same distinction for evolutionary goals, referring to it as ‘proximate’ vs ‘ultimate’ causes.

The difference can be illustrated in a simple diagram (see below). A primary goal is a local optimum in some fitness landscape – it’s where you want to go. A secondary criterion is the first approximation for the direction towards the local optimum. But once you’re on the way, higher-order corrections must be taken into account, otherwise the secondary criterion will miss the goal – often badly.
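
To make this picture concrete, here is a toy numerical sketch (all numbers invented for illustration): a two-dimensional fitness landscape whose maximum is the primary goal. Following the direction that looked right at the start – the secondary criterion – carries you far past the goal if you never correct it, while re-evaluating the direction along the way gets you there.

```python
import numpy as np

# Toy fitness landscape: the primary goal is the maximum of f(x) = -|x - goal|^2.
goal = np.array([3.0, 2.0])

def gradient(x):
    """Direction of steepest ascent of the toy fitness function."""
    return -2.0 * (x - goal)

start = np.zeros(2)
step, n_steps = 0.05, 200

# Secondary criterion: keep walking in the direction that was right at the start.
g0 = gradient(start)
fixed_direction = g0 / np.linalg.norm(g0)
x_proxy = start.copy()
for _ in range(n_steps):
    x_proxy = x_proxy + step * fixed_direction

# Higher-order corrections: re-evaluate the direction as you go.
x_corrected = start.copy()
for _ in range(n_steps):
    g = gradient(x_corrected)
    x_corrected = x_corrected + step * g / (np.linalg.norm(g) + 1e-12)

print("miss distance, fixed initial direction:", np.linalg.norm(x_proxy - goal))
print("miss distance, with corrections:       ", np.linalg.norm(x_corrected - goal))
```

The first-order direction is genuinely useful at the start; the mistake is to keep optimizing it after it has served its purpose.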


The number of publications, to come back to this example, is a good first-order approximation. Publications demonstrate that a scientist is alive and working, is able to think up and finish research projects, and – provided the papers are published in peer-reviewed journals – that their research meets the quality standards of the field.

To second approximation, however, increasing the number of publications does not necessarily lead to more good science. Two short papers don’t fit as much research as two long ones do. Thus, to second approximation we could take into account the length of papers. Then again, the length of a paper is only meaningful if it’s published in a journal that has a policy of cutting superfluous content. Hence, you have to further refine the measure. And so on.

This type of refinement isn’t specific to science. You can see in many other areas of our lives that, as time passes, the means to reach desired goals must be more carefully defined to make sure they still lead where we want to go.

Take sports as an example. As new technologies arise, the Olympic committee has added many additional criteria on what shoes or clothes athletes are permitted to wear and which drugs make for an unfair advantage, and it has had to rethink what distinguishes a man from a woman.

Or tax laws. The Bible left it at “When the crop comes in, give a fifth of it to Pharaoh.” Today we have books full of ifs and thens and whatnots so incomprehensible I suspect it’s no coincidence suicide rates peak during tax season.

It’s debatable of course whether current tax laws indeed serve a desirable goal, but I don’t want to stray into politics. Relevant here is only the trend: Collective human behavior is difficult to organize, and it’s normal that secondary criteria to reach primary goals must be refined as time passes.

The need to quantify academic success is a recent development. It’s a consequence of changes in our societies, of globalization, increased mobility and connectivity, and is driven by the increased total number of people in academic research.

Academia has reached a size where accountability is both important and increasingly difficult. Unless you work in a tiny subfield, you almost certainly don’t know everyone in your community and can’t read every single publication. At the same time, people are more mobile than ever, and applying for positions has never been easier.

This means academics need ways to judge colleagues and their work quickly and accurately. It’s not optional – it’s necessary. Our society changes, and academia has to change with it. It’s either adapt or die.

But what has been academics’ reaction to this challenge?

The most prevalent reaction I witness is nostalgia: The wish to return to the good old times. Back then, you know, when everyone on the committee had the time to actually read all the application documents and was familiar with all the applicants’ work anyway. Back then when nobody asked us to explain the impact of our work and when we didn’t have to come up with 5-year plans. Back then, when they recommended that pregnant women smoke.

Well, there’s no going back in time, and I’m glad the past has passed. I therefore have little patience for such romantic talk: It’s not going to happen, period. Good measures for scientific success are necessary – there’s no way around it.

Another common reaction is the claim that quality isn’t measurable – more romantic nonsense. Everything is measurable, at least in principle. In practice, many things are difficult to measure. That’s exactly why measures have to be improved constantly.

Then, inevitably, someone will bring up Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” But that is clearly wrong. Sorry, Goodhart. If you indeed optimize the measure, you get exactly what you asked for. The problem is that often the measure wasn’t what you wanted to begin with.

Using the terminology introduced above, Goodhart’s Law can be reformulated as: “When people optimize a secondary criterion, they will eventually reach a point where further optimization diverts from the main goal.” But our reaction to this should be to improve the measure, not to throw in the towel and complain “It’s not possible.”

This stubborn denial of reality, however, has an unfortunate consequence: Academia has gotten stuck with the simple-but-bad secondary criteria that are currently in use: the number of publications, the infamous h-index, the journal impact factor, renowned co-authors, positions held at prestigious places, and so on.

We all know they’re bad measures. But we use them anyway because we simply don’t have anything better. If your director/dean/head/board is asked to demonstrate how great your place is, they’ll fall back on the familiar number of publications, and as a bonus point out who has recently published in Nature. I’ve seen it happen. I just had to fill in a form for the institute’s board in which I was asked for my h-index and my paper count.

Last week, someone asked me if I’d changed my mind in the ten years since I wrote about this problem first. Needless to say, I still think bad measures are bad for science. But I think that I was very, very naïve to believe just drawing attention to the problem would make any difference. Did I really think that scientists would see the risk to their discipline and do something about it? Apparently that’s exactly what I did believe.

Of course nothing like this happened. And it’s not just because I’m a nobody who nobody’s listening to. Concerns similar to mine have been raised with increasing frequency by more widely known people in more popular outlets, like Nature and Wired. But nothing’s changed.

The biggest obstacle to progress is that academics don’t want to admit the problem is of their own making. Instead, they blame others: policy makers, university administrators, funding agencies. But these merely use measures that academics themselves are using.

The result has been lots of talk and little action. But what we really need is a practical solution. And of course I have one on offer: A piece of open-source software that allows every researcher to customize their own measure for what they think is “good science,” based on the available data. That would include the number of publications and their citations. But there is much more information in the data which currently isn’t used.

You might want to know whether someone’s research connects areas that are only loosely connected. Or how many single-authored papers they have. You might want to know how well their keyword-cloud overlaps with that of your institute. You might want to develop a measure for how “deep” and “broad” someone’s research is – two terms that are often used in recommendation letters but that are extremely vague.
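
To sketch what such a customizable measure could look like – the field names, weights, and numbers below are invented for illustration, this is not an existing tool – one could combine several such signals with weights every group chooses for itself:

```python
from dataclasses import dataclass, field

@dataclass
class ResearcherRecord:
    # Hypothetical signals one might extract from publication databases.
    n_papers: int
    citations_per_paper: float
    single_author_fraction: float   # 0..1
    field_bridging_score: float     # 0..1, how often references span distant fields
    keywords: set = field(default_factory=set)

def custom_score(record, institute_keywords, weights):
    """Combine the signals with user-chosen weights: every institute or person
    plugs in their own idea of what 'good science' means."""
    overlap = (len(record.keywords & institute_keywords)
               / max(len(record.keywords | institute_keywords), 1))
    signals = {
        "output": record.n_papers,
        "citations": record.citations_per_paper,
        "independence": record.single_author_fraction,
        "bridging": record.field_bridging_score,
        "fit": overlap,
    }
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# One institute might care about interdisciplinarity and topical fit,
# another might weight raw output instead.
candidate = ResearcherRecord(12, 8.5, 0.4, 0.7, {"quantum gravity", "phenomenology"})
print(custom_score(candidate, {"phenomenology", "cosmology"},
                   {"bridging": 2.0, "fit": 3.0, "independence": 1.0}))
```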

Such individualized measures wouldn’t only update automatically as people revise their criteria, they would also counteract the streamlining of global research and encourage local variety.

Why isn’t this happening? Well, besides me there’s no one to do it. And I have given up trying to get funding for interdisciplinary research. The inevitable response I get is that I’m not qualified. Of course that’s correct – I’m not qualified to code and design a user interface. But I’m totally qualified to hire some people and kick their asses. Trust me, I have experience kicking ass. Price tag to save academia: An estimated 2 million Euros for 5 years.

What else has changed in the last ten years? I’ve found out that it’s possible to get paid for writing. My freelance work has been going well. The main obstacle I’ve faced is lack of time, not lack of opportunity. And so, when I look at academia now, I do it with one leg outside. What I see is that academia needs me more than I need academia.

The current incentives are extremely inefficient and waste a lot of money. But nothing is going to change until we admit that solving the problem is our own responsibility.

Maybe, when I write about this again, ten years from now, I’ll not refer to academics as “us” but as “they.”

Friday, June 24, 2016

Wissenschaft auf Abwegen

On Monday I was in Regensburg, where I gave a public lecture on the topic “Wissenschaft auf Abwegen” (science gone astray) for a series titled “Was ist Wirklich?” (what is real?). The whole thing is now on YouTube. The video consists of about 30 minutes of talk, followed by another hour of discussion. All in German. Only for true fans ;)

Thursday, May 19, 2016

The Holy Grail of Crackpot Filtering: How the arXiv decides what’s science – and what’s not.

Where do we draw the boundary between science and pseudoscience? It’s a question philosophers have debated for as long as there’s been science – and last time I looked they hadn’t made much progress. When you ask a sociologist, the answer is normally a variant of: Science is what scientists do. So what do scientists do?

You might have heard that scientists use what’s called the scientific method, a virtuous cycle of generating and testing hypotheses which supposedly separates the good ideas from the bad ones. But that’s only part of the story because it doesn’t tell you where the hypotheses come from to begin with.

Science doesn’t operate with randomly generated hypotheses for the same reason natural selection doesn’t work with randomly generated genetic codes: it would be highly inefficient, and any attempt to optimize the outcome would be doomed to fail. What we do instead is heavily filter hypotheses: we consider only those which are small mutations of ideas that have previously worked. Scientists like to be surprised, but not too much.

Indeed, if you look at the scientific enterprise today, almost all of its institutionalized procedures are methods not for testing hypotheses, but for filtering hypotheses: Degrees, peer reviews, scientific guidelines, reproduction studies, measures for statistical significance, and community quality standards. Even the use of personal recommendations works to that end. In theoretical physics in particular the prevailing quality standard is that theories need to be formulated in mathematical terms. All these are requirements which have evolved over the last two centuries – and they have proved to work very well. It’s only smart to use them.

But the business of hypothesis filtering is a tricky one, and it doesn’t proceed by written rules. It is a method that has developed through social demarcation, and as such it has its pitfalls. Humans are prone to social biases, and every once in a while an idea gets dismissed not because it’s bad, but because it lacks community support. And there is no telling how often this happens, because these are the stories we never get to hear.

It isn’t news that scientists lock shoulders to defend their territory and use technical terms like fraternities use secret handshakes. It thus shouldn’t come as a surprise that an electronic archive which caters to the scientific community would develop software to emulate the community’s filters. And that, in a nutshell, is what the arXiv is doing.

In an interesting recent paper, Luis Reyes-Galindo had a look at the arXiv moderators and their reliance on automated filters:


In the attempt to develop an algorithm that would sort papers into arXiv categories automatically, thereby helping arXiv moderators decide when a submission needs to be reclassified, it turned out that papers which scientists would mark down as “crackpottery” showed up as not classifiable or stood out by language significantly different from that in the published literature. According to Paul Ginsparg, who developed the arXiv more than 20 years ago:
“The first thing I noticed was that every once in a while the classifier would spit something out as ‘I don't know what category this is’ and you’d look at it and it would be what we’re calling this fringe stuff. That quite surprised me. How can this classifier that was tuned to figure out category be seemingly detecting quality?

“[Outliers] also show up in the stop word distribution, even if the stop words are just catching the style and not the content! They’re writing in a style which is deviating, in a way. [...]

“What it’s saying is that people who go through a certain training and who read these articles and who write these articles learn to write in a very specific language. This language, this mode of writing and the frequency with which they use terms and in conjunctions and all of the rest is very characteristic to people who have a certain training. The people from outside that community are just not emulating that. They don’t come from the same training and so this thing shows up in ways you wouldn’t necessarily guess. They’re combining two willy-nilly subjects from different fields and so that gets spit out.”
It doesn’t surprise me much – you can see this happening in comment sections all over the place: The “insiders” can immediately tell who is an “outsider.” Often it doesn’t take more than a sentence or two, an odd expression, a term used in the wrong context, a phrase that nobody in the field would ever use. It is only logical that with smart software you can tell insiders from outsiders even more efficiently than humans can. According to Ginsparg:
“We've actually had submissions to arXiv that are not spotted by the moderators but are spotted by the automated programme [...] All I was trying to do is build a simple text classifier and inadvertently I built what I call The Holy Grail of Crackpot Filtering.”
Trying to speak in the code of a group you haven’t been part of for at least some time is pretty much impossible, much like it’s impossible to fake the accent of a city you haven’t lived in for a while. Such in-group and out-group demarcation is the subject of much study in sociology – not specifically the sociology of science, but generally. Scientists are human, and of course in-group and out-group behavior also shapes their profession, even though they like to deny it as if they were superhuman thinking machines.

What is interesting about this paper is that, for the first time, it openly discusses how the process of filtering happens. It’s software that literally encodes the hidden rules that physicists use to sort out cranks. For all I can tell, the arXiv filters work reasonably well, otherwise there would be much more complaining in the community. The vast majority of researchers in the field are quite satisfied with what the arXiv is doing, meaning the arXiv filters match their own judgement.
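
To illustrate the idea – and only the idea; this is not the actual arXiv moderation code, and the toy training data below is invented – here is roughly what such a classifier looks like: train on texts with known categories, then flag submissions the model cannot confidently place in any of them.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set; a real system would train on the full arXiv corpus.
train_texts = [
    "we compute the one loop effective action for the gauge theory",
    "the renormalization group flow of the coupling constant is analyzed",
    "we study spherically symmetric solutions of the field equations",
    "the horizon structure of the rotating black hole metric is examined",
]
train_labels = ["hep-th", "hep-th", "gr-qc", "gr-qc"]

vectorizer = TfidfVectorizer()            # stop words are kept: style carries signal too
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(train_texts), train_labels)

def triage(text, threshold=0.6):
    """Return the best category, its confidence, and whether a human should look at it.
    Low confidence in every category is the 'I don't know what this is' signal."""
    probs = classifier.predict_proba(vectorizer.transform([text]))[0]
    label = classifier.classes_[probs.argmax()]
    action = "flag for moderator" if probs.max() < threshold else "auto-classify"
    return label, round(float(probs.max()), 2), action

print(triage("we analyze the renormalization group flow of the effective action"))
print(triage("my revolutionary theory unifies consciousness with the luminiferous aether"))
```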

There are exceptions of course. I have heard some stories of people who were working on new approaches that fell through the cracks and were flagged as potential crackpottery. The cases that I know of could eventually be resolved, but that might tell you more about the people I know than about the way such issues typically end.

Personally, I have never had a problem with the arXiv moderation. I had a paper reclassified from gen-ph to gr-qc once by a well-meaning moderator, which is how I learned that gen-ph is the dump for borderline crackpottery. (How would I have known? I don’t read gen-ph. I was just assuming someone reads it.)

I don’t so much have an issue with what gets filtered on the arXiv; what bothers me much more is what does not get filtered and hence, implicitly, gets the approval of the community. I am very sympathetic to the concerns of John The-End-Of-Science Horgan that scientists don’t do enough to keep their own house in order. There is no “invisible hand” that corrects scientists if they go astray. We have to do this ourselves. In-group behavior can greatly misdirect science because, given sufficiently many people, even fruitless research can become self-supportive. No filter that is derived from the community’s own judgement will do anything about this.

It’s about time that scientists start paying attention to social behavior in their community. It can, and sometimes does, affect objective judgement. Ignoring or flagging what doesn’t fit into pre-existing categories is one such social problem that can stand in the way of progress.

In a 2013 paper published in Science, a group of researchers quantified the likelihood of combinations of topics in citation lists and studied the cross-correlation with the probability of the paper becoming a “hit” (meaning in the upper 5th percentile of citation scores). They found that having previously unlikely combinations in the quoted literature is positively correlated with the later impact of a paper. They also note that the fraction of papers with such ‘unconventional’ combinations decreased from 3.54% in the 1980s to 2.67% in the 1990s, “indicating a persistent and prominent tendency for high conventionality.”
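
The study itself compares citation pairings against randomized citation networks; as a much simplified toy version of the same idea (with invented journal names), one can count how often pairs of cited journals co-occur across a corpus and score a paper by the fraction of its pairs that are rare:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each paper is represented by the set of journals it cites.
corpus = [
    {"PRL", "PRD", "JHEP"},
    {"PRL", "PRD"},
    {"PRD", "JHEP"},
    {"PRL", "Neuron"},   # one paper mixing physics and neuroscience journals
]

pair_counts = Counter()
for refs in corpus:
    pair_counts.update(frozenset(p) for p in combinations(sorted(refs), 2))

def unconventionality(refs, rare_threshold=1):
    """Fraction of cited-journal pairs that almost never co-occur in the corpus."""
    pairs = [frozenset(p) for p in combinations(sorted(refs), 2)]
    if not pairs:
        return 0.0
    rare = sum(1 for p in pairs if pair_counts[p] <= rare_threshold)
    return rare / len(pairs)

print(unconventionality({"PRL", "PRD", "JHEP"}))   # conventional mix
print(unconventionality({"PRD", "Neuron"}))        # unusual combination
```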

Conventional science isn’t bad science. But we also need unconventional science, and we should be careful to not assign the label “crackpottery” too quickly. If science is what scientists do, scientists should pay some attention to the science of what they do.

Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty sure. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object in a process with the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.
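
A back-of-the-envelope Newtonian estimate shows why. For a body of size ℓ falling radially toward a non-rotating black hole of mass M, the tidal acceleration across the body, and its value at the Schwarzschild radius, are roughly

\[
a_{\rm tidal} \approx \frac{2GM}{r^3}\,\ell\,, \qquad r_s = \frac{2GM}{c^2} \quad\Rightarrow\quad a_{\rm tidal}(r_s) \approx \frac{c^6}{4G^2M^2}\,\ell\,.
\]

At the horizon the stretching thus falls off with the square of the mass: of order 10^10 m/s² per meter for a solar-mass black hole, but less than 10^-3 m/s² per meter for the roughly four-million-solar-mass black hole at the center of our galaxy.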

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one in the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.

It takes you a finite time to reach the horizon of a black hole. For an outside observer, however, you seem to be moving slower and slower and will never quite reach the horizon, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanish. It might just take a few hundred billion years.

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers have different descriptions of the process of falling into a black hole because they both use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity, which is indeed a highly contested area of current physics research. According to black hole complementarity, the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information into the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. It’s why I think if we ever want to see progress on the foundations of physics we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, March 09, 2016

A new era of science

[img source: changingcourse.com]
Here in basic research we all preach the gospel of serendipity. Breakthroughs cannot be planned, insights not be forced, geniuses not be bred. We tell ourselves – and everybody willing to listen – that predicting the outcome of a research project is more difficult than doing the research in the first place. And half of all discoveries are made while tinkering with something else anyway. Now please join me for the chorus, and let us repeat once again that the World Wide Web was invented at CERN – while studying elementary particles.

But in theoretical physics the age of serendipitous discovery is nearing its end. You don’t tinker with a 27 km collider and don’t coincidentally detect gravitational waves while looking for a better way to toast bread. Modern experiments succeed by careful planning over the course of decades. They rely on collaborations of thousands of people and cost billions of dollars. While we always try to include multipurpose detectors hoping to catch unexpected signals, there is no doubt that our machines are built for very specific purposes.

And the selection is harsh. For every detector that gets funding, three others don’t. For every satellite mission that goes into orbit, five others never get off the ground. Modern physics isn’t about serendipitous discoveries – it’s about risk/benefit analyses and impact assessments. It’s about enhanced design, horizontal integration, and progressive growth strategies. Breakthroughs cannot be planned, but you sure can call in a committee meeting to evaluate their ROI and disruptive potential.

There is no doubt that scientific research takes up resources. It requires both time and money, which is really just a proxy for energy. And as our knowledge increases, new discoveries have become more difficult, requiring us to pool funding and create large international collaborations.

This process is most pronounced in basic research in physics – cosmology and particle physics – because in this area we deal with the smallest and the most distant objects in the universe. Things that are hard to see, basically. But the trend towards Big Science can also be witnessed in other disciplines’ billion-dollar investments, like the Human Genome Project, the Human Brain Project, or the National Ecological Observatory Network. “It's analogous to our LHC,” says Ash Ballantyne, a bioclimatologist at the University of Montana in Missoula, who has never heard of physics envy and doesn’t want to be reminded of it either.

These plus-sized projects will keep a whole generation of scientists busy – and the future will bring more of this, not less. This increasing cost of experiments in frontier research has slowly, but inevitably, changed the way we do science. And it is fundamentally redefining the role of theory development. Yes, we are entering a new era of science – whether we like it or not.

Again, this change is most apparent in basic research in physics. The community’s assessment of a theory’s promise must be drawn upon to justify investing in an experimental test of that theory. Hence the increased scrutiny that theory assessment has received recently. In the end it comes down to the question of where we should put our money.

We often act like knowledge discovery is a luxury. We act like it’s something societies can support optionally, to the extent that they feel like funding it. We act like it’s something that will continue, somehow, anyway. The situation, however, is much scarier than that.

At every level of knowledge we have the capability to exploit only a finite amount of resources. To unlock new resources, we have to invest the ones we have to discover new knowledge and develop new technologies. The newly unlocked resources can then be used for further exploration. And so on.

It has worked so far. But at any level in this game, we might fail. We might not succeed in using the resources we have smartly enough to upgrade to the next level. If we don’t invest sufficiently into knowledge discovery, or invest into the wrong things, we might get stuck – and might end up unable to proceed beyond a certain level of technology. Forever.

And so, when I look at the papers on hep-th and gr-qc, I don’t think about the next 3 years or 5 years, as my funding agency wants me to. I think about the next 3000 or 5000 years. Which of these research directions holds the promise of discovering knowledge necessary to get to the next level? The bigger and more costly experiments become, the larger the responsibility of theorists who claim that testing a theory will uncover worthwhile new insights. Do we live up to this responsibility?

I don’t think we do. Worse, I think we can’t because funding pressures force theoreticians to overemphasize the promise of their own research. The necessity of marketing is now a reality of science. Our assessment of research agendas is inevitably biased and non-objective. For most of the papers I see on hep-th and gr-qc, I think people work on these topics simply because they can. They can get this research published and they can get it funded. It tells you all about academia and very little about the promise of a theory.

While our colleagues in experiment have entered a new era of science, we theorists are still stuck in the 20th century. We still believe our task is to fight for our own ideas, when we should instead be working together on identifying those experiments most likely to advance our societies. We still pretend that science is somehow self-correcting because a failed experiment will force us to discard a hypothesis – and we ignore the troubling fact that there are only so many experiments we can do, ever. We had better place our bets very carefully, because we won’t be able to bet arbitrarily often.

The reality of life is that nothing is infinite. Time, energy, manpower – all of this is limited. The bigger science projects become, the more carefully we have to direct our investments. Yes, it’s a new era of science. Are we ready?

Tuesday, March 01, 2016

Tim Gowers and I have something in common. Unfortunately it’s not our math skills.

Heavy paper.
What would you say if a man with British accent cold-calls you one evening to offer money because he likes your blog?

I said no.

In my world – the world of academic paper-war – we don’t just get money for our work. What we get is permission to administrate somebody else’s money according to the attached 80-page guidelines (note the change in section 15b that affects taxation of 10 year deductibles). Restrictions on the use of funds are abundant and invite applicants to rest their foreheads on cold surfaces.

The German Research Foundation, for example, will – if you are very lucky – grant you money for a scientific meeting. But you’re not allowed to buy food with it. Because, you must know, real scientists don’t eat. And to thank you for organizing the meeting, you don’t get paid yourself – that wouldn’t be an allowed use of funds. No, they thank you by requesting further reports and forms.

At least you can sometimes get money for scientific meetings. But convincing a funding agency to pay a bill for public outreach or open access initiatives is like getting a toddler to eat broccoli: No matter how convincingly you argue it’s in their own interest, you end up eating it yourself. And since writing proposals sucks, I mean, sucks up time, at some point I gave up trying to make a case that this blog is unpaid public outreach that you'd think research foundations should be supportive of. I just write – and on occasion I carefully rest my forehead on cold surfaces.

Then came the time I was running low on income – unemployed between two temporary contracts – and decided to pitch a story to a magazine. I was lucky and landed an assignment instantly. And so, for the first time in my life, I turned in work to a deadline, wrote an invoice, and got paid in return. I. Made. Money. Writing. It was a revelation. Unfortunately, my published masterwork is now hidden behind a paywall. I am not happy about this, you are not happy about this, and the man with the British accent wasn’t happy about it either. Thus his offer.

But I said no.

Because all I could see was time wasted trying to justify proper means of spending someone else’s money on suitable purposes that might be, for example, a conference fee that finances the first class ticket of the attending Nobel Prize winner. That, you see, is an allowed way of spending money in academia.

My cold-caller was undeterred and called again a week later to inquire whether I had changed my mind. I was visiting my mom, and mom, always the voice of reason, told me to just take the damn money. But I didn’t.

I don’t like being reminded of money. Money is evil. Money corrupts. I only pay with sanitized plastic. I swipe a card through a machine and get handed groceries in return – that’s not money, that’s magic. I look at my bank account statements so rarely that I didn’t notice for three years I was accidentally paying a gym membership fee in a country I don’t even live in. In case my finances turn belly-up, I assume the bank will call and yell at me. Which, now that I think of it, seems unlikely because I’ve moved at least a dozen times since opening my account. And I’m not good at updating addresses either. I did call the gym though and yelled at them – I got my money back.

Then the British man told me he also supports Tim Gowers’ new journal. “G-O-W-ers?” I asked. Yes, that Tim. That would be the math guy responsible for the equations in my G+ feed.

Tim Gowers. [Not sure whose photo, but not mine]
Tim Gowers, of course, also writes a blog. Besides that, he’s won the 1998 Fields Medal which makes him officially a genius. I sent him an email inquiring about our common friend. Tim wrote back he reads my blog. He reads my blog! A genius reads my blog! I mean, another genius – besides my mom who gets toddlers to eat broccoli.

Thusly, I thought, if it’s good enough for Gowers, it’s probably good enough for me. So I said yes. And, after some more weeks of consideration, sent my bank account details to the British man. You have to be careful with that kind of thing, says my mom.

That was last year in December. Then I forgot about the whole story and returned to my differential equations.

Tim, meanwhile, got busy setting up the webpage for his new journal “Discrete Analysis” which covers the emerging fields related to additive combinatorics (not to be confused with addictive combinatorics, more commonly known as Sudoku). His open-access initiative has attracted some attention because the journal’s site doesn’t itself host the articles it publishes – it merely links to files which are stored on the arXiv. The arXiv is an open-access server in operation since the early 1990s. It allows researchers in physics, math, and related disciplines to upload and share articles that have not, or not yet, been peer-reviewed and published. “Discrete Analysis” adds the peer-review, with minimal effort and minimal expenses.

Tim’s isn’t the first such “arxiv-overlay” journal – I myself published last year in another overlay-journal called SIGMA – but it is still a new development that is eyed with some skepticism. By relying on the arXiv to store files, the overlays render server costs somebody else’s problem. That’s convenient but doesn’t make the problem go away. Another issue is that the arXiv itself already moderates submissions, a process that the overlay journals have no control over.

Either way, it is a trend that I welcome because overlays offer scientists what they need from journals without the strings and costs attached by commercial publishers. It is, most importantly, an opportunity for the community to reclaim the conditions under which their research is shared, and also to innovate the format as they please:

“I wanted it to be better than a normal journal in important respects,” says Tim, “If you visit the website, you will notice that each article gives you an option to click on the words ‘Editorial introduction.’ If you do so, then up comes a description of the article (not on a new webpage, I hasten to add), which sets it in some kind of context and helps you to judge whether you want to find out more by going to the arXiv and reading it.”

But even overlay journals don’t operate at zero cost. The website of “Discrete Analysis” was designed by Scholastica’s team, and their platform will also handle the journal’s publication process. They charge $10 per submission and there are a couple of other expenses that the editorial board has to cover, such as services necessary to issue article DOIs. Tim wants to avoid handing on the journal expenses to the authors. Which brings in, among others, the support from my caller with the British accent.

In the two months that passed since I last heard from him, I found out that 10 years ago someone proved there is no non-trivial solution to the equations I was trying to solve. Well, at least that explains why I couldn’t find one. The two-day cursing retreat I had consequently scheduled was interrupted by a message from The British Man. Did the money arrive, he wanted to know. Thus forced to check my bank account, I found that not only had his money not arrived, I had also never received a salary for my new job.

This gives me an excuse to lecture you on another pitfall of academic funding. Even after you have filed five copies of various tax-documents and sent the birth dates of the University President and Vice-president to an institution that handles your grant for another institution and is supposed to wire it to a third institution which handles it for your institution, the money might get lost along the way – and frequently does.

In this case they had simply forgotten to put me on the payroll. Luckily, the issue could be resolved quickly, and the next day the wire transfer from Great Britain arrived as well. Good thing, because, as mommy guilt reminded me, this bank account pays for the girls’ daycare and lunch. My writer friends won’t be surprised to hear, however, that I also noticed that several payments for my freelance work had not come through. When I grow up, I hope someone tells me how life works. /lecture

Tim Gowers invited submissions for “Discrete Analysis” starting last September, and the website of the new journal launched today – you can read his own blogpost here. For the community, the key question is now whether arxiv-overlay journals like Tim’s will be able to gain a status similar to that of traditional journals. The only way to find out is to try.

Public outreach in general, and science blogging in particular, is vital for the communication of science, both within our communities and to the public. And so are open access initiatives. Even though they are essential to advance research and integrate it into our society, funding agencies have been slow to accept these services as part of their mission.

While we wait for academia to finally digest the invention of the world wide web, it is encouraging to see that some think forward. And so, I am happy today to acknowledge this blog is now supported by the caller with the British accent, Ilyas Khan of Cambridge Quantum Computing. Ilyas has quietly supported a number of scientific endeavors. Although he is best known for enabling Wittgenstein's Nachlass to become openly and freely accessible by funding the project that was implemented by Trinity College Cambridge, he is also a sponsor of Tim Gowers' new journal Discrete Analysis.

Friday, February 26, 2016

"Rate your Supervisor" comes to High Energy Physics

A new website called the "HEP Postdoc Project" allows postdocs in high energy physics to rate their supervisors in categories like "friendliness," "expertise," and "accessibility."

I normally ignore emails that more or less explicitly ask me to advertise sites on my blog, but decided to make an exception for this one. It seems to be a hand-made project run by a small number of anonymous postdocs who want to help their fellows find good supervisors. And it's a community that I care much about.

While I appreciate the initiative, I have to admit being generally unenthusiastic about anonymous ratings on point scales. Having had the pleasure of reading through an estimated several thousand recommendation letters, I have found that an assessment of skills is only useful if you know the person it comes from.

Much of this is cultural. A letter from a Russian prof that says this student isn't entirely bad at math might mean the student is up next for the Fields Medal. On the other hand, letters from North Americans tend to exclusively contain positive statements, and the way to read them is to search for qualities that were not listed.

But leaving aside the cultural stereotypes, more important are personal differences in the way people express themselves and use point scales, even if they are given a description for each rating (and that is missing on the website). We occasionally used 5-point rating scales in committees. You quickly notice that some people tend to clump everyone in the middle range, while others are more comfortable using the high and low scores. Then again, others either give a high rating or refuse to have any opinion. To get a meaningful aggregate, you can't just take an average; you need to know roughly how each committee member uses the scale. (Which will require endless hours of butt-flattening meetings. Trust me, I'd be happy being done with clicking on a star scale.)
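
One common trick – sketched here with made-up numbers – is to standardize each rater's scores before aggregating, so that "how this rater uses the scale" is factored out:

```python
import numpy as np

# Hypothetical ratings: rows = raters, columns = candidates, NaN = no rating given.
ratings = np.array([
    [3.0, 3.0, 4.0, 3.0],        # clumps everyone in the middle
    [1.0, 2.0, 5.0, 1.0],        # uses the whole scale
    [5.0, np.nan, 5.0, np.nan],  # rates high or not at all
])

def calibrated_aggregate(r):
    """Standardize each rater's scores, then average per candidate."""
    mean = np.nanmean(r, axis=1, keepdims=True)
    std = np.nanstd(r, axis=1, keepdims=True)
    std[std == 0] = 1.0   # a rater who gives everyone the same score adds no signal
    return np.nanmean((r - mean) / std, axis=0)

print("raw average:       ", np.nanmean(ratings, axis=0))
print("calibrated average:", calibrated_aggregate(ratings))
```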

You could object that any type of online rating suffers from these problems and yet they seem to serve some purpose. That's right of course, so this isn't to say they're entirely useless. Thus I am sharing this link thinking it's better than nothing. And at the very least you can have some fun browsing through the list to see who got the lowest marks ;)

Monday, February 15, 2016

What makes an idea worthy? An interview with Anthony Aguirre

That science works merely by testing hypotheses has never been less true than today. As data have become more precise and theories have become more successful, scientists have become increasingly careful in selecting hypotheses before even putting them to test. Commissioning an experiment for every odd idea would be an utter waste of time, not to mention money. But what makes an idea worthy?

Pre-selection of hypotheses is especially important in fields where internal consistency and agreement with existing data are already very strong constraints, and it therefore plays an essential role in the foundations of physics. In this area, most new hypotheses are born dead or die very quickly, and researchers would rather not waste time devising experimental tests for ill-fated non-starters. During their career, physicists must thus constantly decide whether a new idea justifies spending years of research on it. Next to personal interest, their decision criteria are often based on experience and community norms – past-oriented guidelines that reinforce academic inertia.

Philosopher Richard Dawid coined the term “post-empirical assessment” for the practice of hypothesis pre-selection, and described it as a non-disclosed Bayesian probability estimate. But philosophy is one thing, doing research another. For the practicing scientist, the relevant question is whether a disclosed and organized pre-selection could help advance research. This would require the assessment to be performed in a cleaner way than is presently the case, a way that is less prone to error induced by social and cognitive biases.

One way to achieve this could be to give researchers incentives for avoiding such biases. Monetary incentives are a possibility, but convincing a scientist that their best course of action is to put aside the need to promote their own research would require incentives totaling research grants for several years – an amount that adverts on nerd pages won’t raise, and thus an idea that seems to be one of those ill-fated non-starters. But then, for most scientists, reputation is more important than money.

Anthony Aguirre.
Image Credits: Kelly Castro.
And so Anthony Aguirre, Professor of Physics at UC Santa Cruz, devised an algorithm by which scientists can estimate the chances that an idea succeeds, and gain reputation by making accurate predictions. On his website Metaculus, users are asked to evaluate the likelihood of success for various scientific and technological developments. In the email exchange below, Anthony explains his idea.

Bee: Last time I heard from you, you were looking for bubble collisions as evidence of the multiverse. Now you want physicists to help you evaluate the expected impact of high-risk/high-reward research. What happened?

Anthony: Actually, I’ve been thinking about high-risk/high-reward research for longer than bubble collisions! The Foundational Questions Institute (FQXi) is now in its tenth year, and from the beginning we’ve seen part of FQXi’s mission as helping to support the high-risk/high-reward part of the research funding spectrum, which is not that well-served by the national funding agencies. So it’s a long-standing question how to best evaluate exactly how high-risk and high-reward a given proposal is.

Bubble collisions are actually a useful example of this. It’s clear that seeing evidence of an eternal-inflation multiverse would be pretty huge news, and of deep scientific interest. But even if eternal inflation is right, there are different versions of it, some of which have bubbles and some of which don’t; and even of those that do, only some subset will yield observable bubble collisions. So: how much effort should be put into looking for them? A few years of grad student or postdoc time? In my opinion, yes. A dedicated satellite mission? No way, unless there were some other evidence to go on.

(Another lesson, here, in my opinion, is that if one were to simply accept the dismissive “the multiverse is inherently unobservable” critique, one would never work out that bubble collisions might be observable in the first place.)

B: What is your relation to FQXi?

A: Max Tegmark and I started FQXi in 2006, and have had a lot of fun (and only a bit of suffering!) trying to build something maximally useful to the community of people thinking about the type of foundational, big-picture questions we like to think about.

B: What problem do you want to address with Metaculus?

A: Predicting and evaluating (should “prevaluating” be a word?) science research impact was actually — for me — the second motivation for Metaculus. The first grew out of another nonprofit I helped found, the Future of Life Institute (FLI). A core question there is how major new technologies like AI, genetic engineering, nanotech, etc., are likely to unfold. That’s a hard thing to know, but not impossible to make interesting and useful forecasts for.

FLI and organizations like it could try to build up a forecasting capability by hiring a bunch of researchers to do that. But I wanted to try something different: to generate a platform for soliciting and aggregating predictions that — with enough participation and data generation — could make accurate and well-calibrated predictions about future technology emergence as well as a whole bunch of other things.

As this idea developed, my collaborators (including Greg Laughlin at UCSC) and I realized that it might also be useful in filling a hole in our community’s ability to predict the impact of research. This could in principle help make better decisions about questions ranging from the daily (“Which of these 40 papers in my “to read” folder should I actually carefully read”) to the large-scale (“Should we fund this $2M experiment on quantum cognition?”).

B: How does Metaculus work?

A: The basic structure is a set of (currently) binary questions about the occurrence of future events, ranging from predictions about technologies like self-driving cars, Go-playing AIs and nuclear fusion, to pure science questions such as the detection of Planet 9, publication of experiments in quantum cognition or tabletop quantum gravity, or the announcement of the detection of gravitational waves.

Participants are invited to assess the likelihood (1%-99%) of those events occurring. When a given question ‘resolves’ as either true or false, points are awarded depending upon a user's prediction, the community’s prediction, and what actually happened. These points add a competitive game aspect, but serve the more important purpose of providing steady feedback, so that predictors can learn how to predict more accurately and with better calibration. As data accumulate, predictors will also amass a track record, both overall and in particular subjects. This can be used to aggregate predictions into a single, more accurate one (at the moment, the ‘community’ prediction is just a straight median).

An important aspect of this, I think, is not ‘just’ to make better predictions about well-known questions, but to create lots and lots of well-posed questions. It really does make you think about things differently when you have to come up with a well-posed question that has a clear criterion for resolution. And there are lots of questions where even a few predictions (even one!) by the right people can be a very useful resource. So a real utility is for this to be a sort of central clearing-house for predictions.
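
[Just to make the mechanics tangible – the following is a toy sketch with invented numbers and a standard Brier-style rule, not the actual Metaculus scoring formula:]

```python
import numpy as np

def points(prediction, outcome, max_points=100):
    """Toy scoring rule: reward predictions close to what actually happened.
    A Brier-style rule for illustration, not the real Metaculus formula."""
    return max_points * (1.0 - (prediction - outcome) ** 2)

# Three users predict the probability (1%-99%) that an event occurs.
predictions = np.array([0.20, 0.65, 0.80])
community = np.median(predictions)   # the 'straight median' mentioned above
outcome = 1.0                        # the event did occur

for p in predictions:
    print(f"predicted {p:.2f} -> {points(p, outcome):5.1f} points")
print(f"community median {community:.2f} -> {points(community, outcome):5.1f} points")
```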

B: What is the best possible outcome that you can imagine from this website and what does it take to get there?

A: The best outcome I could imagine would be this becoming really large-scale and useful, like a Wikipedia or Quora for predictions. It would also be a venue in which the credibility to make pronouncements about the future would actually be based on one’s actual demonstrated ability to make good predictions. There is, sadly, nothing like that in our current public discourse, and we could really use it.

I’d also be happy (if not as happy) to see Metaculus find a more narrow but deep niche, for example in predicting just scientific research/experiment success, or just high-impact technological rollouts (such as AI or Biotech).

In either case, it will take continued steady growth of both the community of users and the website’s capabilities. We already have all sorts of plans for multi-outcome questions, contingent questions, Bayes nets, algorithms for matching questions to predictors, etc. — but that will take time. We also need feedback about what users like, and what they would like the system to be able to do. So please try it out, spread the word, and let us know what you think!

Thursday, December 03, 2015

Peer Review and its Discontents [slide show]

I have made a slide-show of my Monday talk at the Munin conference and managed to squeeze a one-hour lecture into 23 minutes. Don't expect too much, nothing happens in this video, it's just me mumbling over the slides (no singing either ;)). I was also recorded on Monday, but if you prefer the version with me walking around and talking for 60 minutes you'll have to wait a few days until the recording goes online.



I am very much interested in finding a practical solution to these problems. If you have proposals to make, please get in touch with me or leave a comment.

Monday, November 09, 2015

Another new social networking platform for scientists: Loop

Logo of Loop Network website.

A recent issue of Nature magazine ran an advertisement feature for “Loop,” a new networking platform to “maximize the impact of researchers and their discoveries.” It’s an initiative by Frontiers, an open access publisher. Of course I went and got an account to see what it does and I’m here to report back.

In the Nature advert, the CEO of Frontiers interviews herself and answers the question “What makes Loop unique?” with “Loop is changing research networking on two levels. Firstly, researchers should not have to go to a network; it should come to them. Secondly, researchers should not have to fill in dozens of profiles on different websites.”

So excuse me for expecting a one-click registration that would make use of one of my dozens of other profiles. Instead I had to fill in a lengthy registration form that, besides name and email, asked not only for my affiliation, country of residence, job description, field of occupation, domain, and speciality, but also for my birthdate, before I had to confirm my email address.

Since that network was so good at “coming to me,” it wasn’t possible after registration to import my profile from any other site – Google Scholar, ORCID, LinkedIn, ResearchGate, Academia.edu or whathaveyou, facebook, G+, twitter if you must. Instead, I had to fill in my profile yet another time. Neither, for all I can tell, can you actually link your other accounts to the Loop account.

If you scroll down the information pages, it turns out what the integration refers to is “Your Loop profile is discoverable via the articles you have authored on nature.com and in the Frontiers journals.” Somewhat underwhelming.

Then you have to assemble a publication list. I am lucky to have a name that, for all I know, isn’t shared by anybody else on the planet, so it isn’t so difficult to scan the web for my publications. The Loop platform came up with 86 suggested finds. These appeared in separate pop-up windows. If you have ever done this process before you can immediately see the problem: Typically in these lists there are many duplicate entries. So going through the entries one by one without seeing what is already approved means you have to memorize all previous items. Now I challenge you to recall whether item number 86 had appeared before on the list.

Finally done with this, what do you have there? A website that shows a statistic for how many people have looked at your profile (on this site, presumably), how many people have downloaded your papers (from this site, presumably), and a number of citations which shows zero for me and for a lot of other profiles I looked at. A few people have a number there from the Scopus database. I conclude that Loop doesn’t have its own citation metric, nor does it use the one from Google Scholar or Spires.



As to the networking, you get suggestions for people you might know. I don’t know any of the suggested people, which isn’t surprising because, as we already noticed, they’re not importing information – so how are they supposed to know who I know? I’m also not sure why I would want to follow any of these people, why that would be any better than following them elsewhere, or not at all. I followed some random person just because. If that person actually did something (which he doesn’t, much like everybody else whose profile I looked at), presumably it would appear in my feed. From that angle, it looks much like any other networking website. There is also a box that asks me to share something with my network of one.

In summary, for all I can tell this website is as useless as it gets. I don’t have the faintest clue what they think it’s good for. Even if it’s good for something it does a miserable job at telling me what that something is. So save your time.

Thursday, September 24, 2015

The power of words – and its limit

What images does the word “power” bring to your mind? Weapons? Bulging muscles? A mean-looking capitalist smoking cigar? Whatever your mind’s eye brings up, it is most likely not a fragile figure hunched over a keyboard. But maybe it should.

Words have power, today more than ever. Riots and rebellions can be arranged by text message, a single word turned hashtag can create a mass movement, 140 characters can ruin a career, and a video gone viral will reach the world. Words can destroy lives or they can save them: “If you find the right tone and the right words, you can reach just about anybody,” says Cid Jonas Gutenrath, who worked for years at the emergency call center of the Berlin police force [1]:

“I had worked before as a bouncer, and I was always the shortest one there. I had an existential need, so to speak, to solve problems through language. I can express myself well, and when that’s combined with some insights into human nature it’s a valuable skill. What I’m talking about is every conversation with the police, whether it’s an emergency call or a talk on the street. Language makes it possible for everyone involved to get out of the situation in one piece.”

Words won’t break your bones. But more often than not, words – or their failure – decide whether we take to weapons. It is our ability to convince, cooperate, and compromise that has allowed us to conquer an increasingly crowded planet. According to recent research, what made humans so successful might indeed not have been superior intelligence or skills, but instead our ability to communicate and work together.

In a recent SciAm issue, Curtis W. Marean, Professor of Archaeology at Arizona State University, lays out a new hypothesis for what the decisive development was that allowed humans to dominate Earth. According to Marean, it is not, as previously proposed, handling fire, building tools, or digesting a large variety of food. Instead, he argues, what sets us apart is our willingness to negotiate a common goal.

The evolution of language was necessary to allow our ancestors to find solutions to collective problems, solutions other than hitting each other over the head. So it became possible to reach agreements between groups, to develop a basis for commitments and, eventually, contracts. Language also served to speed up social learning and to spread ideas. Without language we wouldn’t have been able to build a body of knowledge on which the scientific profession could stand today.

“Tell a story,” is the ubiquitous advice given to anybody who writes or speaks in public. And yet, some information fits badly into story form. In popular science writing, the reader inevitably gets the impression that much of science is a series of insights building on each other. The truth is that most often it is more like collecting puzzle pieces that might or might not actually belong to the same picture.

The stories we tell are inevitably linear; they follow one neat paragraph after the other, one orderly thought after the next, line by line, word by word. But this simplicity belies the complexity of the manifold interrelations between scientific concepts. Ask any researcher for their opinion on a news report in their discipline and they will almost certainly say “It’s not so simple…”

Links between different fields of science.
Image Source: Bollen et al, PLOS ONE, March 11, 2009.
There is a value to simple stories. They are easily digestible and convey excitement about science. But the reader who misses the entire context cannot tell how well a new finding is embedded into existing research. The term “fringe science” is a direct visual metaphor alluding to this disconnect, and it’s a powerful one. Real scientific progress must fit into the network of existing knowledge.

The problem with linear stories doesn’t only make writing about science difficult, but also writing in science. The main difficulty when composing scientific papers is the necessity to convert a higher-dimensional network of associations into a one-dimensional string. It is plainly impossible. Many research articles are hard to follow, not because they are badly written, but because the nature of knowledge itself doesn’t lend itself to narrative.

I have an ambivalent relation to science communication because a good story shouldn’t make or break an idea. But the more ideas we are exposed to, the more relevant good presentation becomes. Every day scientific research seems a little less like a quest for truth, and a little more like a competition for attention.

In an ideal world maybe scientists wouldn’t write their papers themselves. They would call for an independent reporter and have to explain their calculations, then hand over their results. The paper would be written up by an unbiased expert, plain, objective, and comprehensible. Without exaggerations, without omissions, and without undue citations to friends. But we don’t live in an ideal world.

What can you do? Clumsy and often imperfect, words are still the best medium we have to convey thought. “I think therefore I am,” Descartes said. But 400 years later, the only reason his thought is still alive is that he put it into writing.


[1] Quoted in: Evonik Magazine 2015-02, p 17.

Friday, September 11, 2015

How to publish your first scientific paper

I get a lot of email asking me for advice on paper publishing. There’s no way I can make time to read all these drafts, let alone comment on them. But simple silence leaves me feeling guilty for contributing to the exclusivity myth of academia, the fable of the privileged elitists who smugly grin behind the locked doors of the ivory tower. It’s a myth I don’t want to contribute to. And so, as a sequel to my earlier post on “How to write your first scientific paper”, here is how to avoid roadblocks on the highway to publication.

There are many types of scientific articles: comments, notes, proceedings, reviews, books and book chapters, to mention just the most common ones. They all have their place and use, but in most of the sciences it is the research article that matters most. It’s what we all want, to get our work out there in a respected journal, and it’s what I will focus on.

Before we start. These days you can publish literally any nonsense in a scam journal, usually for a “small fee” (which might only be mentioned at a late stage in the process, oops). Steer clear of such shady business; it will only damage your reputation. Any journal that sends you an unsolicited “call for papers” is a journal to avoid (and a sender to put on the junk list). When in doubt, check Beall’s list of Potential, Probable and Possible Predators.

1. Picking a good topic

There are two ways you can go on a road trip: Find a car or hitchhike. In academic publishing, almost everyone starts out as a hitchhiker, typically coauthoring work with their supervisor. This moves you forward quickly at first, but sooner or later you must prove that you can drive on your own. And one day, you will have to kick your supervisor out of the copilot seat. While you can get lucky with any odd idea as a topic, there are a few guidelines that will increase your chances of getting published.

1.1 Novelty

For research topics, as for cars, a new one will get you farther than an interesting one. If you have a new result, you will almost certainly eventually get it published in a decent journal. But no matter how interesting you think a topic is, the slightest doubt that it’s new will prevent publication.

As a rule of thumb, I therefore recommend you stay far away from everything older than a century. Nothing reeks of crackpottery as badly as a supposedly interesting find in special relativity or classical electrodynamics or the foundations of quantum mechanics.

At first, you will not find it easy to come up with a new topic at the edge of current research. A good way to get ideas is to attend conferences. This will give you an overview of the currently open questions, and an impression of where your contribution would be valuable. Every time someone answers a question with “I don’t know,” listen up.

1.2. Modesty

Yes, I know, you really, really want to solve one of the Big Problems. But don’t claim in your first paper that you did; it’s like trying to break a world record the first time you run a mile. Except that in science you don’t only have to break the record, you also have to convince others you did.

For the sake of getting published, by all means refer to whatever question it is that inspires you in the introductory paragraph, but aim at a solid small contribution rather than a fluffy big one. Most senior researchers have a grandmotherly tolerance for the exuberance and naiveté of youth, but forgiveness ends at the journal’s front door. As encouraging as they may be in personal conversation, a journal reference serves as a quality check for scientific standards, and nobody wants to be the one to blame for not keeping up the standard. So aim low, but aim well.

1.3. Feasibility

Be realistic about what you can achieve and how much time you have. Give your chosen topic a closer look: Do you already know all you need to know to get started, or will you have to work yourself into an entirely or partially new field? Are you familiar with the literature? Do you know the methods? Do you have the equipment? And lastly, but most importantly, do you have the motivation that will keep you going?

Time management is chronically hard, but give it a try and estimate how long you think it will take, if only to laugh later about how wrong you were. Whatever your estimate, multiply it by two. Does it fit in your plans?

2. Getting the research done

Do it.


3. Preparing the manuscript 

Many scientists dislike the process of writing up their results, thinking it only takes time away from real science. They could not be more mistaken. Science is all about the communication of knowledge – a result not shared is a result that doesn’t matter. But how to get started?

3.1. Collect all material

Get an overview of the material that you want your colleagues to know about: calculations, data, tables, figures, code, what have you. Single out the parts you want to publish, collect them in a suitable folder, and convert them into digital form if necessary, i.e., type up equations, make vector graphics of sketches, render images, and so on.

3.2. Select journals

If you are unsure what journals to choose, have a look at the literature you have used for your research. Most often this will point you towards journals where your topic will fit in. Check the website to see whether they have length restrictions and, if so, whether these might become a problem. If all looks good, check their author guidelines and download the relevant templates. Read the guidelines. No, I mean, actually read them. The last thing you want is for your manuscript to be returned by an annoyed editor because your image captions are in the wrong place or some similar nonsense.

Select the four journals that you like best and order them by preference. Chances are your paper will get rejected at the first, and laying out a path to continue in advance will prevent you from dwelling on your rejection for an indeterminate amount of time.

3.3. Write the paper

For how to structure a scientific paper, see my earlier post.

3.4. Get feedback

Show your paper to several colleagues and ask for feedback, but only do this once you are confident the content will no longer substantially change. The amount of confusion returning to your inbox will reveal which parts of the manuscript are incomprehensible or, heaven forbid, just plainly wrong.

If you don’t have useful contacts, getting feedback might be difficult, and this difficulty increases exponentially with the length of the draft. You can dramatically increase the chance that others actually read your paper by telling them why it might be useful for their own research. Explaining this requires that you actually know what their research is about.

If you get comments, make sure to address them.

3.5. Pre-publish or not?

Pre-print sharing, for example on the arxiv, is very common in some areas and less common in others. I would generally recommend it if you work in a fast moving field where the publication delay might limit your claim to originality. Pre-print sharing is also a good way to find out whether you offended someone by forgetting to cite them, because they’ll be in your inbox the next morning.

3.6. Submit the paper

The submission process depends very much on the journal you have chosen. Many journals now allow you to submit an arxiv link, which dramatically simplifies matters. However, if you submit source files, always check the compiled pdf “for your approval”. I’ve seen everything from half-empty pages to corrupted figures to pdfs that simply didn’t load.

Some journals allow you to select an editor to whom the manuscript will be sent. It is generally worth checking the list to see if there is someone you know. Or ask a colleague whom they have had good or bad experiences with. But don’t worry if you don’t know any of them.

4. Passing peer review

After submission your paper will generally first be screened to make sure it fulfills the journal requirements. This is why it is really important that the topic fits well. If you pass this stage your paper is sent to some of your colleagues (typically two to four) for the dreaded peer review. The reviewers’ task is to read the paper, send back comments on it, and to assign it one of four categories: publish, minor changes, major changes, reject. I have never heard of any paper that was accepted without changes.

In many cases some of the reviewers are picked from your reference list, excluding people you have worked with yourself or who are in the acknowledgements. So stop and think for a moment whether you really want to list all your friends in the acknowledgements. If you have an archenemy who shouldn’t be commenting on your paper, let the editor know about this in advance.

Never submit a paper to several journals at the same time, and don’t submit papers with even partly overlapping content either. You might succeed, but trying to boost your publication count by repeating yourself is generally frowned upon and not considered good practice. The exception is conference proceedings, which often summarize a longer paper’s content.

When you submit your paper you will be asked to formally accept the ethics code of the journal. If it’s your first submission, take a moment to actually read it. If nothing else, it will make you feel very grown up and sciency.

Some journals ask you to sign a copyright form already at submission. I have no clue what they are thinking. I never sign a copyright form until the revisions are done.

Peer review can be annoying, frustrating, infuriating even. To keep your sanity and maximize your chance of passing, try the following:

4.1. Keep perspective

This isn’t about you personally, it’s about a manuscript you wrote. It is easy to forget, but in the end you, the reviewers, and the editor have the same goal: to advance science with good research. Work towards that common end.

4.2. Stay polite and professional

Unless you feel a reviewer makes truly inappropriate comments, don’t complain to the editor about strong wording – you will only waste your time. Inappropriate comments are everything that refers to your person or affiliation (or absence thereof), any type of ideological argument, and opinions not backed up by science. Go through all other comments and address them one by one, in a reply attached to the resubmission and, where appropriate, by changes to the manuscript. Never ignore a question posed by a referee; an unanswered question provides a perfect reason to reject your manuscript.

In case a referee finds an actual mistake in your paper, be reasonable about whether you can fix it in the time given until resubmission. If not, it is better to withdraw the submission.

4.3. Revise, revise, revise

Some journals have a maximum number of revisions that they allow, after which an editor will make a final decision. If you don’t meet the mark and your paper is eventually rejected, take a moment to mull over the reasons for rejection and revise the paper one more time before you submit it to the next journal.

Goto 3.6. Repeat as necessary.
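If you like your advice in code form, the whole loop from selecting journals (3.2) through peer review (4) can be spelled out in a few lines. The sketch below is purely illustrative Python in the spirit of the “Goto” above – the journal names, the stand-in functions, and the limit of three revision rounds are all made up; the actual work hides behind the function calls.

import random

# Tongue-in-cheek sketch of the submission loop from sections 3.2 to 4.3.
# Nothing here is real; it only makes the control flow explicit.

journals = ["first choice", "second choice", "third choice", "fourth choice"]

def submit_and_review(paper, journal):
    # Stand-in for submission and peer review: returns a decision and comments.
    decision = random.choice(["accept", "minor changes", "major changes", "reject"])
    return decision, "reviewer comments"

def address_comments(paper, comments):
    # Reply to every point and change the manuscript where appropriate.
    return paper + " (revised)"

paper = "my manuscript"
for journal in journals:
    decision, comments = submit_and_review(paper, journal)
    revisions = 0
    while decision in ("minor changes", "major changes") and revisions < 3:
        paper = address_comments(paper, comments)
        decision, comments = submit_and_review(paper, journal)
        revisions += 1
    if decision == "accept":
        print("Accepted at the", journal, "- now read the proofs carefully.")
        break
    # Rejected: mull over the reasons, revise once more, try the next journal.
    paper = address_comments(paper, comments)
else:
    print("List exhausted - pick new journals and repeat as necessary.")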

5. Proofs

If all goes well, one day you will receive a note saying that your paper has been accepted for publication and you will soon receive page proofs. Congratulations!

It might feel like a red light five minutes from home when you have to urgently pee, but please read the page proofs carefully. You will never forgive yourself if you don’t correct that sentence which a well-meaning copy editor rearranged to mean the very opposite of what you intended.

6. And then…

… it’s time to update your CV! Take a walk, and then make plans for the next trip.

Wednesday, July 08, 2015

The loneliness of my notepad

Few myths have been busted as often and as cheerfully as that of the lone genius. “Future discoveries are more likely to be made by scientists sharing ideas than a lone genius,” declares Athene Donald in the Guardian, and Joshua Wolf Shenk opines in the NYT that “the lone genius is a myth that has outlived its usefulness.” Thinking on your own is so yesterday; today is collaboration. “Fortunately, a more truthful model is emerging: the creative network,” Shenk goes on. It sounds scary.

There is little doubt that with the advance of information technology, collaborations have been flourishing. As Mott Greene has observed, single authors are an endangered species: “Any issue of Nature today has nearly the same number of Articles and Letters as one from 1950, but about four times as many authors. The lone author has all but disappeared. In most fields outside mathematics, fewer and fewer people know enough to work and write alone.”

Science Watch keeps track of the data. The average number of authors per paper has risen from 2.5 in the early 1980s to more than five today. At the same time, the fraction of single-authored papers has declined from more than 30% to about 10%.



Part of the reason for this trend is that combining expertise through collaboration opens new possibilities, and those possibilities are now being exploited. This would suggest that the increase is temporary and will eventually stagnate or start declining again once the potential in these connections has been fully put to use.

But I don’t think the popularity of larger collaborations is going to decline anytime soon, because for some purposes one paper with five authors counts as five papers. If I list a paper on our institutional preprint list, nobody cares how many coauthors it has – it counts as one more paper, and my coauthors’ institutions can be equally happy about adding a paper to their count. If you work on average with 5 coauthors and divide up the work fairly, your publication list will end up five times as long as if you were working alone. The logical endpoint of this accounting can only be that we all coauthor every paper that gets published.
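To make the arithmetic explicit, here is a minimal sketch (in Python, purely illustrative, with made-up numbers) of how crediting every coauthored paper in full to each author inflates publication lists.

# Illustrative only: counting every coauthored paper in full per author.
effort = 4            # papers' worth of work one person can do per year
years = 10
team_size = 5

# Working alone, each paper takes your full effort:
solo_list = effort * years                  # 40 papers on your list

# In a team of five that splits the work evenly, each paper needs only a
# fifth of your effort, but still appears in full on your list:
team_list = effort * team_size * years      # 200 papers on your list

print(team_list / solo_list)                # 5.0 - five times as long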

So, have the numbers spoken and demonstrated the lone genius is no more?

Well, most scientists aren’t geniuses and nobody really agrees what that means anyway, so let us ask instead what happened to the lone scientists.

The “lone scientist” isn’t so much a myth as an oxymoron. Science is ultimately a community enterprise – an idea not communicated will never become an accepted part of science. But the lone scientist has always existed and certainly still exists as a mode of operation, as a step on the path to an idea worth developing. Declaring lonely work a myth is deeply ironic for the graduate student stuck with an assigned project nobody seems to care about. In theoretical physics, research often means making yourself the world expert on some topic, whether you picked it for yourself or whether somebody else thought it was a good idea. And loneliness is the flipside of this specialization.

That researchers may sometimes be lonely in their quest doesn’t imply they are alone or that they don’t talk to each other. But when you are in the midst of working out an idea that isn’t yet fully developed, there is really nobody who will understand what you are trying to do. Possibly not even you yourself understand.

We use and abuse our colleagues as sounding boards, because attempting to explain yourself to somebody else can work wonders to clarify your own thoughts, even though the other person doesn’t understand a word. I have been on both the giving and receiving end of this process. A colleague, who shall remain unnamed, on occasion simply falls asleep while I am trying to make sense. My husband deserves credit for enduring my ramblings about failed calculations even though he doesn’t have a clue what I mean. And I’ve learned to listen rather than just declaring that I don’t know a thing about whatever.

So the historians pointing out that Einstein didn’t work in isolation, and that he met frequently with other researchers to discuss, do no justice to the frustrating and often painful process of having to work through a calculation one doesn’t know how to do in the first place. It is evident from Einstein’s biography and publications that he spent years trying to find the right equations, working with a mathematical apparatus that was unfamiliar to most researchers back then. There was nobody who could have helped him find the description his intuition was looking for.

Not all physicists are Einstein of course, and many of us work on topics where the methodology is well developed and widely shared. But it is very common to get stuck while trying to find a mathematically accurate framework for an idea that, without having found what one is looking for, remains difficult or impossible to communicate. This leaves us alone with that potentially brilliant idea and the task of having to sort out our messy thoughts well enough to even be able to make sense to our colleagues.

Robert Nemiroff, Professor of Physics at Michigan Tech, aptly described his process of working on a new paper as a long string of partly circular and confused attempts, interspersed with doubts, insights, and new starts:
“writing short bits of my budding new manuscript on a word processor; realizing I don't know what I am talking about; thinking through another key point; coming to a conclusion; coming to another conclusion in contradiction to the last conclusion; realizing that I have framed the paper in a naive fashion and delete sections (but saving all drafts just in case); starting to write a new section; realizing I don't know what I am talking about; worrying that this whole project is going nowhere new; being grouchy because if this is going nowhere then I am wasting my time; comforting myself with the thought that at least now I better understand something that I should have better understood earlier; starring at my whiteboard for several minute stretches occasionally sketching a small diagram or writing a key equation; thinking how cool this is and wondering if anyone else understands this mini- sub- topic in this much detail; realizing a new potential search term and doing a literature search for it in ADS and Google; finding my own work feeling reassured that perhaps I am actually a reasonable scientist; finding key references that address some part of this idea that I didn't know about and feeling like an idiot; reading those papers and thinking how brilliant those authors are and how I could never write papers this good...”
At least for now we’re all alone in our heads. And as long as that remains so, the scientist struggling to make sense alone will remain reality.

Tuesday, June 16, 2015

The plight of the postdocs: Academia and mental health

This is the story of a friend of a friend, a man by the name of Francis who took his life at age 34. Francis had been struggling with manic depression through most of his years as a postdoc in theoretical physics.

It is not a secret that short-term contracts and frequent moves are the norm in this area of research, but rarely do we spell out the toll it takes on our mental health. In fact, most of my tenured colleagues who profit from cheap and replaceable postdocs praise the virtue of the nomadic lifestyle which, so we are told, is supposed to broaden our horizon. But the truth is that moving is a necessary, though not sufficient, condition to build your network. It isn’t about broadening your horizon, it’s about making the contacts you will later be hired for. It’s not optional, it’s a misery you are expected to pretend to enjoy.

I didn’t know Francis personally, and I would never have heard of him if it wasn’t for the acknowledgements in Oliver Roston’s recent paper:

“This paper is dedicated to the memory of my friend, Francis Dolan, who died, tragically, in 2011. It is gratifying that I have been able to honour him with work which substantially overlaps with his research interests and also that some of the inspiration came from a long dialogue with his mentor and collaborator, Hugh Osborn. In addition, I am indebted to Hugh for numerous perceptive comments on various drafts of the manuscript and for bringing to my attention gaps in my knowledge and holes in my logic. Following the appearance of the first version on the arXiv, I would like to thank Yu Nakayama for insightful correspondence.

I am firmly of the conviction that the psychological brutality of the post-doctoral system played a strong underlying role in Francis’ death. I would like to take this opportunity, should anyone be listening, to urge those within academia in roles of leadership to do far more to protect members of the community suffering from mental health problems, particularly during the most vulnerable stages of their careers.”
As a postdoc, Francis lived separated from his partner, and had trouble integrating in a new group. Due to difficulties with the health insurance after an international move, he couldn’t continue his therapy. And even though highly gifted, he must have known that no matter how hard he worked, a secure position in the area of research he loved was a matter of luck.

I found myself in a very similar situation after I moved to the US for my first postdoc. I didn’t fully realize just how good the German health insurance system is until I suddenly was on a scholarship without any insurance at all. When I read the fine print, it became pretty clear that I wouldn’t be able to afford insurance that covered psychotherapy or medical treatment for mental disorders, certainly not if I disclosed a history of chronic depression and various cycles of previous therapy.

With my move, I had left behind literally everybody I knew, including my boyfriend who I had intended to marry. For several months, the only piece of furniture in my apartment was a mattress because thinking any further was too much. I lost 30 pounds in six months, and sometimes went weeks without talking to a human being, other than myself.

The main reason I’m still here is that I’m by nature a loner. When I wasn’t working, I was hiking in the canyons, and that was pretty much all I did for the better part of the first year. Then, when I had just found some sort of equilibrium, I had to move again to take on another position. And then another. And another. It still seems a miracle that somewhere along the line I managed to not only marry the boyfriend I had left behind, but to also produce two wonderful children.

Yes, I was lucky. But Francis wasn’t. And just statistically, some of you are in that dark place right now. If so, then you, like me, have heard them talk about people who “managed to get diagnosed” as if depression were a theater performance in which successful actors win a certificate to henceforth stay in bed. You, like me, know damned well that the last thing you want is for anybody who you may have to ask for a letter to see anything but the “hard working” and “very promising” researcher who is “recommended without hesitation.” There isn’t much advice I can give, except this: don’t forget that it’s in the nature of the disease to make you underestimate your chances of recovery, and that your mental health is worth more than the next paper. Please ask for help if you need it.

Like Oliver, I believe that the conditions under which postdoctoral researchers must presently sell their skills are not conducive to mental health. Postdocs see friends the same age in other professions having families, working independently, getting permanent contracts, pension plans, and houses with tricycles in the yard. Postdoctoral research collects some of the most intelligent and creative people on the planet, but in the present circumstances many are unable to follow their own interests, and get little appreciation for their work, if they get feedback at all. There are lots of reasons why being a postdoc sucks, and most of them we can do little about, like those supervisors who’d rather die than say you did a good job, even once. But what we can do is improve employment conditions and lower the pressure to constantly move.

Even in the richest countries on the planet, like Germany and Sweden, it is very common to park postdocs on scholarships without benefits. These scholarships are tax-free and come, for the employer, at low cost. Since the tax exemption is regulated by law, the scholarships can typically last only one or two years. It’s not that one couldn’t hire postdocs on longer, regular contracts with social and health benefits, it’s just that in current thinking quantity counts more than quality: More postdocs produce more papers, which looks better in the statistics. That’s practiced, among many other places, at my own workplace.

There are some fields of research which lend themselves to short projects and in these fields one or two year gigs work just fine. In other fields that isn’t so. What you get from people on short-term contracts is short-term thinking. It isn’t only that this situation is stressful for postdocs, it isn’t good for science either. You might be saving money with these scholarships, but there is always a price to pay.

We will probably never know exactly what Francis went through. But for me just the possibility that the isolation and financial insecurity, which are all too often part of postdoc life, may have contributed to his suffering is sufficient reason to draw attention to this.

The last time I met Francis’ friend Oliver, he was a postdoc too. He now has two children, a beautiful garden, and has left academia for a saner profession. Oliver sends the following message to our readers:
“I think maybe the best thing I can think of is advising never to be ashamed of depression and to make sure you keep talking to your friends and that you get medical help. As for academia, one thing I have discovered is that it is possible to do research as a hobby. It isn't always easy to find the time (and motivation!) but leaving academia needn't be the end of one's research career. So for people wondering whether academia will ultimately take too high a toll on their (mental) health, the decision to leave academia needn't necessarily equate with the decision to stop doing research; it's just that a different balance in one's life has to be found!”

[If you speak German or trust Google translate, the FAZ blogs also wrote about this.]