Friday, August 30, 2013

Should you write a science blog?

I get asked a lot how I keep up the blogging. It might be the second most asked question right after “What happened to your hair?” (Answer: It’s a natural disaster, get used to it.) The third frequently asked question, especially by students, is “Do you have any advice if I want to start blogging?” Yeah, I do, but I’m not sure you want to hear it.

I used to think there should really be more scientists blogging. That’s because for me science journalism is not so much a source of information as a source of news. It tells me where the action is and points in a direction. If it seems interesting I’ll go and look up the references, but if it’s not a field close to my own I prefer it if somebody who actually works on the topic offers an opinion. And I don’t mean a cropped sentence with a carefully chosen adjective and politically correct grammar. In some research areas, quantum gravity one of them, there really aren’t many researchers offering first-hand opinions. Shame on you.

So yeah, I think there should be more scientists blogging. But over the years I’ve seen quite a few of them starting to blog like penguins start to fly. If I had a penny for every deserted science blog I’ve seen I’d be wondering why some deranged British tourist stuffed their coins into my pockets. What’s so difficult about writing a blog, I hear you asking now. You’re asking the wrong person, said the flying penguin, but what blogger would I be if I only had opinions on things I know something about? So here’s my 5 cents (about 4.27 pennies).

As everybody in quantum gravity knows, first there’s the problem of time. So here’s

    Advice #1: Don’t start blogging if you don’t have the time.

Do you really want to invest the time you could be teaching your daughter basketball? Do you really think it’s more important than rewriting that grant proposal for the twentieth time? If you had the time to write a blog wouldn’t you rather use it to learn Chinese, train for a marathon, or become an expert in power napping? If you answered yes to any of these questions, thank you and good bye. Also, give me my money back. If you answered yes to all of these questions, I suggest you touch base with the local drug scene.

But how much time will it take, is your next question. Depends on your ambition of course, said the penguin and flapped her wings. You should produce at least one post a week if you ever want to get off the ground, which brings me to

    Advice #2: Don’t start blogging if you don’t like writing.

The less you like writing, the longer it will take and the more time becomes an issue. The more time becomes an issue, the more you’ll hate blogging, and especially those people who produce blogposts, seemingly effortlessly, 5 times a day, apparently while cooking for a family of twelve and jetting around the globe in a self-made, wooden plane sponsored by their three million subscribers.

Are you sure you like writing? No, I didn’t mean you gave it a thumbs up on Facebook. Are you really sure you like the process of converting thought into keyboard clatter? Ok, good start. But just because you like it doesn’t mean it’s easy.

I’ll admit it took me years to realize it, but evidently I have a lot of colleagues who fight with words. Did you notice that this blog has a second contributor? Yes, it does. It’s just that the frequency of my posts is a factor 300 or so higher than his. He can be forgiven for making himself scarce because he’s got a full-time job and two kids and a wife who blogs rather than doing the laundry. But mostly the problem is that he’s fighting with words.

Words – Once upon a time I went to a Tai Chi class. The first class was also the last because I realized quickly that my back problem wasn’t up to the task of throwing people around. I used the opportunity though to punch the trainer straight into the solar plexus a second before he had finished his encouragement to do so. I hope he learned not to use more words than necessary. But I also took away a lesson, one that’s been useful for my writing: Don’t try to take hits frontally, deflect them and use the momentum. So here’s my

    Advice #3: Don’t be afraid of words.

Words aren’t your enemies. If they come at you, use their momentum and go with it. That’s easier said than done, I know, especially if you’re a scientist and have been trained to be precise and accurate and to decorate every sentence with 20 references and footnotes. But don’t think you actually have to be a good writer. Because most likely your readers aren’t good readers either, which is only fair. If you can really write well, you shouldn’t blog, you should… you should… write my damned grant proposal. What I mean is if you try to blog like you write research articles, you’ll almost certainly turn out to be a flying penguin, so don’t overthink it.

However, nobody is born flying, so here’s

    Advice #4: Be patient.

It takes time until you’re integrated into the blogosphere. You can help your integration by using social networks to make yourself, your expertise, and your blog known. Unless you are already well known in your field, it will probably take at least a year, more likely several years, until readership catches on. Until then, make contacts, make friends, learn from others, have fun. Above everything, don’t call a blogpost a blog; that’s mistaking the weather for the climate.

If you still think you want to write a blog, then go ahead. I honestly don’t think it takes more than that: time, a good relationship with the written word, and patience. The main reason I’m still blogging is that I like writing and verbal Tai Chi doesn’t take me a lot of effort. It arguably also helps that since 2006 I’ve been employed at pure research institutes and don’t have teaching duties, see advice #1.

Now let me address some worries. This might be more an issue for the, eh, more senior people, but it should be said:

    Advice #5: Don't be afraid of the technology.

As with everything in life, you can make it arbitrarily complicated if you want, but as long as you have an IQ above 70 you'll find some way to blog. It really is not difficult. Another worry that newcomers seem to have is that they’ll run out of ideas, so let me assure you

    Advice #6: Don’t worry that you’ll run out of things to say.

Topics will come flying at you faster than you can get out of the way. There’s always somebody who’s said something about something that you also want to say something about. There’s always some science writer who got it so totally wrong. There’s always somebody’s seminar that was interesting and somebody’s paper that you just read. And if all of that fails, there’s always somebody who has thrown sexist comments around, ten things you wish you had known when you were twenty, and down at the very bottom of the list there’s blogging advice. So don’t worry, just take notes when you come across something interesting or have an idea for a blogpost. I pin post-its to my desk.

Yes, in principle you can fill your blog with something other than words. This might work if you have a lot of visual content, pictures, videos, infographics, applets, etc. Alas, the way things have developed, the primarily visual stuff has migrated to other platforms and blogs are today the format primarily used for verbal content. And since the spread of Twitter, Facebook and Google+, sharing links with brief comments has also left the blogosphere. Blogging started out mostly being about writing, and it has boomeranged back to that.

Having said that however, blogging of course isn’t only about writing, it’s also about reading. So here’s my

    Advice #7: Care about your readers.

They’ll give you feedback as to whether you’re expressing yourself clearly. If the comments don’t have any relation to the content of your posts, you’re not expressing yourself clearly enough. If insults pile up in your comment section, you’re expressing yourself too clearly. If you’re not getting any comments, see advice #4. However, please

    Advice #8: Don’t be afraid of your readers.

If everybody liked what you write, somebody would hate it just because everybody likes it, so trying to please everyone is futile. If I’ve learned one thing from blogging, it’s that misunderstandings are unavoidable. They’re part of the process, and that’s a two-way process. Just don’t take hits frontally, use their momentum. That misunderstanding really makes a good topic for your next blogpost, no?

You’ll have noticed that I didn’t say anything about content. That’s because the content is up to you. It really doesn’t matter all that much what you write because blog readers are self-selecting. The ones who’ll stay are the ones who like what you write. If it matters to you to attract a sizeable audience then you should spend some time thinking about content, but I’m not the right penguin to give advice on that. I basically just write what comes to my mind, minus some self-censorship for the sake of my readers’ sanity. You don’t really want to know how I lost my virginity, do you?

So should you write a science blog?

You and I both might think you should blog, but that’s wishful thinking. Be honest and ask yourself if you really want to write a blog. Without motivation it’ll be painful both for you and your readers. I wouldn’t want to eat in a restaurant where the cook hates cooking and I wouldn’t want to read a blog where the writer hates writing. If you’re not sure though, I want to encourage you to give it a try because writing might just change your life.

For me the blogging has been very useful, especially because it has taught me to quickly extract the main points of other people’s work and to coherently summarize them, which in turn has made it much easier for me to recall this information later. I have also over the years made many friends through this blog, some of whom I have met in person and whose friendship I value very much. I see a lot of cynicism these days about the emptiness of social networking. But I appreciate social media for making it so much easier to stay in touch with people I know who are now distributed all over the planet.

Homework assignment: Open the book closest to you on a random page and take the first noun that you see. Imagine it’s a chapter title in your autobiography. Write that chapter.

Wednesday, August 28, 2013

Can we test quantum gravity with gravitational bremsstrahlung?

When A falls into the black hole, B gets a thermally distributed headache.
If Blogger had space for a subtitle it would be “A paper I can’t make up my mind about”. A few months ago a paper appeared on the arxiv that proposed to test quantum gravitational effects with neutrino oscillations.

    Quantum Gravity effect on neutrino oscillations in a strong gravitational field
    Jonathan Miller, Roman Pasechnik
    arXiv:1305.4430 [hep-ph]
Models in quantum gravity phenomenology span a spectrum that reaches from conservative but boring to interesting but flaky. Where on this craziness scale a model falls is of course somewhat subjective, but the paper in question at first sight seemed to fall somewhere in the middle. In a nutshell, the authors are arguing that neutrino oscillation would be affected in the vicinity of black holes by the interaction with gravitons, and that this may cause a potentially observable phase distortion. For this they made the assumption that it’s the neutrino mass eigenstates (not the flavor eigenstates) that couple to the graviton, and then they had a rather vague explanation that the type of this coupling would depend on the fundamental theory of gravity and thus could be used as a test.

Since it wasn’t originally really clear to me what assumption they made on top of perturbatively quantized gravity and why, I had a longer exchange with the authors in which they patiently answered my dumb questions. They updated the paper two months later and version two is a remarkable improvement over the first version. Alas, I’m still not sure the effect is real. But neither can I find a reason why it’s not real. Let me explain.

First, forget about the neutrino oscillation. That really isn’t so relevant; it’s just that neutrinos can deliver a particularly clean signal because they interact weakly with other stuff. Second, calling the gravitational field that they are concerned with a “strong” field is somewhat misleading. The term is commonly used for fields in the Planckian regime, but the field they talk about is that of a solar mass black hole close to the horizon. That’s strong compared to the field you just sit in, but still far off the Planckian regime. Also forget the stuff about collapse in the abstract, it doesn’t make much sense to me.

But then, note that while it’s often said that gravity is a weak interaction, that’s a sloppy statement. Yes, that little fridge magnet and its electromagnetic interaction can overcome the gravitational pull of the whole planet Earth. But if you slam the door the magnet falls down, meaning the forces are quite comparable. How strong gravity is depends on how much mass you accumulate. In the paper the authors make the point that the cross-section for gravitational bremsstrahlung (that’s exchange of a virtual graviton and emission of a real graviton) is tiny for masses of elementary particles, all right. But if you put in the mass of a solar mass black hole as one of the interacting ‘particles,’ the cross-section becomes comparable to other cross-sections in the standard model.
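To get a feeling for why accumulating mass matters, here is a quick back-of-the-envelope check of my own (this is not a calculation from the paper, and the ~0.1 eV neutrino mass is just an assumed value for illustration): compare the dimensionless gravitational coupling GMm/(ħc) for a solar-mass black hole and a neutrino with the fine-structure constant.

    # Back-of-the-envelope: dimensionless gravitational coupling for a solar-mass
    # black hole and a ~0.1 eV neutrino, compared to the fine-structure constant.
    G      = 6.674e-11                          # gravitational constant, m^3/(kg s^2)
    hbar_c = 3.16e-26                           # hbar * c in J*m
    M_sun  = 1.989e30                           # solar mass in kg
    m_nu   = 0.1 * 1.602e-19 / (2.998e8)**2     # assumed 0.1 eV neutrino mass in kg

    alpha_grav = G * M_sun * m_nu / hbar_c
    print(f"alpha_grav ~ {alpha_grav:.1e}")     # ~ 7e8
    print(f"alpha_em   ~ {1/137:.1e}")          # ~ 7e-3

So once one of the two ‘particles’ carries a solar mass, the coupling is anything but weak, though the actual cross-section of course also depends on kinematic factors.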

The original calculation of this cross-section goes back to a paper from the 60s. This is just perturbatively quantized gravity and, besides the coupling constants, the indices on the propagators, and the polarization tensors, very similar to the respective QED effect. Having said that, there’s no particular reason the bremsstrahlung should be coherent, or at least I don’t see one. This would then mean that a particle that passes by the black hole experiences a phase blurring, essentially because the background field is not in fact classical: the interaction with the gravitational source is mediated by virtual gravitons. Or so the idea goes. They then claim that this effect is large enough to be potentially observable.

Having pulled out the origin of their proposed effect, however, the paper suddenly moved to the very conservative end of the spectrum. On that conservative end, you typically find lots of effects that are almost certainly there, but way too small to be observable. If it were possible to find evidence for the quantization of the gravitational background field, evidence that virtual gravitons have been exchanged, this would be amazing.

However, my headache with the paper, which persists in its revised version, is the following. Treating the black hole as a point particle is almost certainly a bad approximation. In some sense one might say a black hole is as close to a perfect point particle as we’ll ever get. But the distance at which the particle passes by the ‘point’ at the center of the black hole is large, much larger than the wavelength of the particle. It takes some time for the particle to pass by the horizon. It shouldn’t, with non-negligible probability, exchange one graviton at a fairly high energy (comparable to that of the neutrino in the black hole restframe); it should instead exchange a lot of very low energy gravitons. This must be so simply because the equivalence principle prevents you from noticing anything on distance scales below the curvature radius. If this passage by the black hole was treated correctly, the effect would almost certainly get smaller. The question is how much smaller. It seems implausible it would vanish completely.
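For orientation, here is a crude comparison of the relevant length scales (again my own estimate, assuming a ~MeV neutrino just for illustration):

    # Crude length-scale comparison for a neutrino passing a solar-mass black hole.
    G, c, h = 6.674e-11, 2.998e8, 6.626e-34
    M_sun   = 1.989e30                      # kg
    E_nu    = 1e6 * 1.602e-19               # assumed ~1 MeV neutrino energy in J

    r_s = 2 * G * M_sun / c**2              # Schwarzschild radius, ~3 km
    lam = h * c / E_nu                      # de Broglie wavelength, ~1e-12 m
    print(f"r_s/lambda ~ {r_s/lam:.0e}")    # ~ 2e15: the passage is nowhere near point-like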

For me this also raises the question whether the cross-section would depend on what you believe is inside the black hole or at its horizon, respectively. E.g., if you’re a fuzzball fan, the coupling might look quite different than if you believe in baby universes. And let me not even get started on firewalls.

So I’m quite convinced that the effect is actually much smaller than they say, but this only raises the question just how small it is. I’m also not sure whether such an effect, if it exists, would truly be a sign of the quantization of the gravitational field. I mean, to first approximation the graviton exchange just has the effect that the particle moves on a geodesic. If you take into account that the particle itself is quantum and not a point particle, you should also notice some dispersion in a non-homogeneous background. But that’s not a signal for the quantization of the background, just for the quantum nature of the particle. I.e., it is conceivable that even if the effect is real, it’s not a signal for quantum gravity.

Having said that, if the particle acquires a phase blurring as a correction from the quantum nature of the background field, the same effect should exist for charged particles passing by large charged objects at some feasible distance. Let me know if you have a useful reference.

The paper is now on the pile with unsettled cases...

Sunday, August 25, 2013

Can we measure scientific success? Should we?

My new paper.
Measures for scientific success have become a hot topic in the community. Many scientists have spoken out in view of the increasingly widespread use of these measures. They largely all agree that the attempt to quantify, even predict, scientific success is undesirable if not flawed. In this blog’s archive, you’ll find me banging the same drum too.

Scientific quality assessment, so the argument goes, can’t be left to software crunching data. An individual’s promise can’t be summarized in a number. Success can’t be predicted from past achievements, look at all the historical counterexamples. Already Einstein said. I’m sure he said something.

I’ve had a change of mind lately. I think science needs measures. Let me explain.

The problem with measures for scientific success has two aspects. One is that measures are used by people outside the community to rank institutions or even individuals for justification and accountability. That’s problematic because it’s questionable that this leads to smart research investments, but I don’t think it’s the root of the problem.

The aspect that concerns me more, and that I think is the root of all evil, is that any measure for success feeds back into the system and affects the way science is conducted. The measure will be adopted by the researchers themselves. Rather than defining success individually, scientists are then encouraged to work towards an external definition of scientific achievement. They will compare themselves and others on these artificially created scales. So even if a quantifiable marker of scientific output was once an indicator for success, its predictive power will inevitably change as scientists work specifically towards it. What was meant to be a measure instead becomes a goal.

This has already happened in several cases. The most obvious examples are the number of publications and the number of research grants obtained. On average, both are plausibly correlated with scientific success. And yet a scientist who increases her paper output doesn’t necessarily increase the quality of her research, and employing more people to work on a certain project doesn’t necessarily mean its scientific relevance increases.

A correlation is not a causation. If Einstein didn’t say that he should have. And another truth that comes courtesy of my grandma is that too much of a good thing can be a bad thing. My daughter reminds me we’re not born with that wisdom. If sunlight falls on my screen and I close the blinds, she’ll declare that mommy is tired. Yesterday she poured a whole bottle of body lotion over herself.

Another example comes from Lee Smolin’s book “The Trouble with Physics”. Smolin argued that the number of single authored papers is a good indicator for a young researcher’s promise. He’s not alone in this belief. Most young researchers are very aware that a single authored paper will put a sparkle on their publication list. But maybe a researcher with many single authored papers is just a bad collaborator.

Simple measures, too simple measures, are being used in the community. And this use affects what researchers strive for, distracting them from their actual task of doing good research.

So, yes, I too dislike attempts to measure scientific success. But if we all agree that it stinks, why are we breathing the stink? Why are these measures used not only by funding agencies and other assessment ‘exercises’, but by scientists themselves?

Ask any scientist if they think the number of papers shows a candidate’s promise and they’ll probably say no. Ask if they think publications in high impact journals are indicators for scientific quality and they’ll probably say no. Look at what they do, however, and the length of the publication list and the occurrence of high impact journals on that list are suddenly remarkably predictive of their opinion. And then somebody will ask for the h-index. The very reason that politically savvy researchers tune their score on these scales is that, sadly, it does matter. Analogies to natural selection are not coincidental. Both are examples of complex adaptive systems.

The reason for the widespread use of oversimplified measures is that they’ve become necessary. They stink, all right, but they’re the smallest evil among the options we presently have. They’re the least stinky option.

The world has changed and the scientific community with it. Two decades ago you’d apply for jobs by carrying letters to the post office, grateful for the sponge so you wouldn’t have to lick all those stamps. Today you apply by uploading application documents within seconds all over the globe, and I'm not sure they still sell lickable stamps. This, together with increasing mobility and connectivity, has greatly inflated the number of places researchers apply to. And with that, the number of applications every place gets has skyrocketed.

Simplified measures are being used because it has become impossible to actually do the careful, individual assessment that everybody agrees would be optimal. And that has led me to think that instead of outright rejecting the idea of scientific measures, we have to accept them, improve them, and make them useful to our needs, not to those of bean counters.

Scientists, in hiring committees or on some funding agency’s review panel, have needs that presently just aren’t addressed by existing measures. Maybe one would like to know what the overlap of some person’s research topics is with those represented at a department. How often have they been named in acknowledgements? Do you share common collaborators? What administrative skills does the candidate bring? Is there somebody in my network who knows this person and could give me a firsthand assessment? Do they have experience with conference organization? What’s their h-index relative to the typical h-index in their field? What would you like to know?

You might complain these are not measures for scientific quality and that’s correct. But science is done by humans. These aren’t measures for scientific quality, they’re indicators for how well a candidate might fit an open position and a new environment. And that, in turn, is relevant for both their success and that of the institution.

Today, personal relations are highly relevant for successful applications. That is a criterion which sparks interest and which is being used in the absence of better alternatives. We can improve on that by offering possibilities to quantify, for example, how close two research areas are. This can provide a fast way to identify interesting candidates that one might not have heard of before.
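To illustrate what I mean, here is a minimal toy sketch of such a quantification; the keyword sets are made up and the Jaccard index is just one of many possible choices of similarity measure.

    # Toy measure of how close a candidate's research topics are to a department's,
    # using the Jaccard index on (hypothetical) keyword sets.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    candidate  = {"quantum gravity", "phenomenology", "neutrinos", "cosmology"}
    department = {"cosmology", "string theory", "quantum gravity", "holography"}
    print(f"topic overlap: {jaccard(candidate, department):.2f}")   # 0.33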

And so I think “Can we measure scientific success?” is the wrong question to ask. We should ask instead what measures serve scientists in their profession. I’m aware that there are by now several alt-metrics on offer, but they don’t address the issue; they merely take into account more data sources to measure essentially the same thing.

That concerns the second aspect of the problem, the use of measures in the community. As far as the first aspect is concerned, the use of measures by accountants who are not scientists themselves: The reason they use certain measures for success or impact is that they believe scientists themselves regard them as useful. Administrators use these measures simply because they exist and because scientists, in lack of better alternatives, draw upon them to justify and account for their success or that of their institution. If you have argued that the value of your institute is in the amount of papers produced or conferences held, in the number of visitors pushed through or distinguished furniture bought, you’ve contributed to that problem. Yes, I’m talking about you. Yes, I know not using these numbers would just make matters worse. That’s my point: They’re a bad option, but still the best available one.

So what to do?

Feedback in complex systems and network dynamics have been studied extensively during the last decade. Dirk Helbing recently had a very readable brief review in Nature (pdf here) and I’ve tried to extract some lessons from this.
  1. No universal measures.
    Nobody has a recipe for scientific success. Picking a single measure bears a great risk of failure. We need a variety so that the pool remains heterogeneous. There is a trend towards standardized measures because people love ordered lists. But we should have a large number of different performance indicators.
  2. Individualizable measures.
    It must be possible to individualize measures, so that they can take into account local and cultural differences as well as individual opinions and different purposes. You might want to give importance to the number of single authored papers. I might want to give importance to science blogging. You might think patents are of central relevance. I might think a long-term vision is. Maybe your department needs somebody who is skilled in public outreach. Somebody once told me he wouldn’t hire a postdoc who doesn’t like Jazz. One size doesn’t fit all.
  3. Self-organized and network solutions.
    Measures should take into account locations and connections in the various scientific networks, be they social networks, coauthor networks, or networks based on research topics. If you’re not familiar with somebody’s research, can you find somebody you trust to give you a frank assessment? Can I find a link to this person’s research plans?
  4. No measure is ever final.
    Since the use of measures feeds back into the system, they need to be constantly adapted and updated. This should be a design feature and not an afterthought.
Some time between Pythagoras and Feynman, scientists had to realize that it had become impossible to check the accuracy of all experimental and theoretical knowledge that their own work depended upon. Instead they adopted a distributed approach in which scientists rely on the judgment of specialists for topics in which they are not specialists themselves; they rely on the integrity of their colleagues and the shared goal of understanding nature.

If humans lived forever and were infinitely patient, then every scientist could trace down and fact-check every detail that their work makes use of. But that’s not our reality. The use of measures to assess scientists and institutions represents a similar change towards a networked solution. Done the right way, I think that measures can make science fairer and more efficient.

Wednesday, August 21, 2013

Physics Outreach Event at Kungsträdgården, Sep 7

Yes, there are Swedish umlauts in the header. Our local readers might be interested to hear that Nordita will take part in the biennial outreach event "Fysik i Kungsan", which is scheduled for Sept 7, 11am to 5pm, at Kungsträdgården, Stockholm. Here are some impressions from two years ago:



If you're in the area, it would be nice to see you there! I'll be the audio stream for a poster on the question "What is Quantum Gravity?", so that's your opportunity to ask me everything you ever wanted to ask. I'll be there only part of the day though, because some months ago I signed up for a 10k run that happens to be on the same Saturday.

Sunday, August 18, 2013

Researchers and coffee consumption

You might have seen this collection of 40 world maps in your news feed recently. It's interesting and worth a look. When I scrolled down the list I thought it looked like the number of researchers (per million inhabitants) is correlated with coffee consumption (in kg per capita). So I pulled down the data and plotted it in Excel and here we go:

Coffee consumption vs number of researchers. The red dot is Germany.

I passionately hate Excel and I have no idea how to convince it to give me a p-value, but I've seen worse correlations being published. More coffee consumption linked to more research!

If you want to play with the data, you can download the Excel sheet here. I've left Singapore out of the table because I wasn't sure whether the entry "0" meant there's no data or that nobody in Singapore drinks coffee. I've made a second plot where I left out the 15 main coffee exporting countries (according to Wikipedia), but visually it doesn't make much of a difference so I'm not showing you the graph. (It's in the Excel sheet.) According to chartsbin.com, the data on researchers per million inhabitants is from the UNESCO Institute for Statistics, and the data on coffee consumption is from the World Resources Institute.
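For what it's worth, if you'd rather not fight Excel, a few lines of Python will give you the correlation coefficient and a p-value; the file and column names below are just my guess at how one would organize the data, not the actual layout of the sheet.

    # Pearson correlation between coffee consumption and researcher density.
    import pandas as pd
    from scipy.stats import pearsonr

    data = pd.read_csv("coffee_vs_researchers.csv")   # hypothetical file name
    r, p = pearsonr(data["coffee_kg_per_capita"], data["researchers_per_million"])
    print(f"r = {r:.2f}, p = {p:.3g}")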

Don't take this too seriously. I'd guess that you'd find a similar correlation for many consumer goods. It has some amusement value though :o)

Thursday, August 15, 2013

You are likely special and your friends probably not normal

Squaring the melon. Image source.
I think of myself as a very average person. I like the music on the radio and enjoy books on bestseller lists. I’m somewhat short but not unusually so, my reaction time is average for my age, and I look as old as I am.

Yes, I thought I was normal. Then I read that the average person is cognitively biased to think they’re special. Now I have a problem. I can either think I’m normal, in which case I’m not, or I can think I’m special, in which case I’m normal. Either way, I’m facing mental inconsistency. That shit bothers me. Is this normal?

Things you think about when stuck in small town traffic that’s suffered cardiac arrest by way of garbage truck blockage.

But, I thought, what are the odds of being normal?

Let’s take any variable with a normal distribution and define somebody as “normal” if they’re within, say, a 2σ deviation of the mean. You are probably normal, by definition. Now let’s take N uncorrelated variables that are similarly distributed, like income, follicle density, number of friends on facebook, annual coffee consumption, amount of clothes owned, spectral distribution of these clothes, average number of words spoken per minute, time spent sleeping before the age of ten, and so on and so forth.

I’m sure you could list a few hundred such individual characteristics if somebody pointed a pun at your head. The probability that you’re average according to all characteristics is (0.95)^N. This means if you look at about 400 different ways in which people celebrate their individuality, the probability that anybody is normal is roughly one in a billion.
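If you want to check the arithmetic yourself:

    # Probability of being within 2 sigma on each of N roughly independent traits.
    p_per_trait = 0.95
    N = 400
    print(p_per_trait ** N)   # ~1.2e-9, i.e. about one in a billion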

This means there’s probably no normal person living on Earth today. In other words, it’s normal to be special.

That’s why the teenager from across the street’s got a million followers on YouTube, our downstairs neighbor wears shoes in two different sizes, and my colleague has meaningful conversations with moths. That’s why my older daughter is obsessed with boogers, that seventy year old just finished a marathon in 3 hours, the blonde woman is an undercover agent in search of pressure cookers, and the garbage truck driver can probably recite Goethe, backwards, in Latin. Which, for all I know, is exactly what he’s been doing instead of driving the damned truck.

Let’s not miss an educational opportunity here and mention that this is also why, if you analyze a dataset according to sufficiently many properties, you’ll almost certainly eventually find something special about it. Or, if you study correlations between sufficiently many parameters, you’ll eventually find a correlation. Being special really is normal.

And I - I have a particle data booklet in the glove box. What are the odds?

Monday, August 12, 2013

Book Review: “Information is Beautiful” by David McCandless

Information is Beautiful (New Edition)
By David McCandless
Collins (6 Dec 2012)

The more information there is, the more relevant it becomes to present it in human-digestible form, whence springs the flood of infographics in your news feed. There are good examples and bad examples of data visualization, and McCandless’ graphics are among the cleanest, neatest and best-designed ones that I’ve come across. McCandless describes himself as a “data journalist” and “information designer” and with that fills a niche in the economic ecosystem that isn’t presently populated by many.

The book is a print version of examples from his website. It’s not the kind of book you read front to back, but one that you browse through for the sake of curiosity, for distraction, or in search of a conversation topic. It does this job quite well; it also looks good, feels nice and is interesting. Some of the graphics in the book are, however, quite useless or seem to be based on data, or interpretations of data, that I find questionable. This is to say, the emphasis of these graphics is on design, not on science.

I got this book as a gift and spent a cozy afternoon with it on the couch, something I haven’t yet managed to achieve with digital media. (Not to mention that I’d rather have the kids wreck a book than a screen, should I fall asleep over it.) I’m more interested in the science of information than the design of information, and from the scientific side the graphics leave something to be desired. But they’re an interesting reflection on contemporary thought and I’d say the book is well worth the price.

Monday, August 05, 2013

Are physicists hot or not?

It has become trendy to study scientists. Two weeks ago, a group of network researchers published a paper in “Scientific Reports” that aims to analyze to what extent scientists pay attention to what is trendy.


The title is however misleading for several reasons.

The most obvious reason is that the analysis presented in the paper was performed exclusively on papers published in the Physical Review journals (in the years 1976-2009), meaning the word ‘scientists’ would better be replaced with ‘physicists’. Even that would be misleading though, because it’s questionable that papers published in the Physical Review are representative of the whole of physics. Physical Review is a high quality journal and it tends to be conservative. If your research is speculative or on a highly specialized topic then it might not be your journal of choice, or so a friendly editor will write before marking your manuscript as “no longer under consideration.” Besides this, the sample also includes the rapid-communication journal Physical Review Letters with the declared policy that topics have to be “of broad interest” -- clearly not representative of physics at large, if you excuse the sarcasm.

But to understand what the authors mean by “hot”, let us look at what they have done. They quantify the physicists’ ‘tracing’ of hotness by how the probability that a new paper is on a given topic depends on the number of papers already published on that topic. Topics are identified by the PACS numbers of the paper (a paper can thus belong to several fields). If new papers are not evenly distributed over existing topics, but topics with many publications already are more likely to attract new ones than random chance would suggest, this is known as preferential attachment. It’s more commonly known as the “rich get richer” effect and can be quantified by fitting a power law to the distribution.
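Schematically, the analysis boils down to something like the following toy sketch (the counts are made up and this is not the authors’ actual code or data): group topics by how many papers they already have, count how many new papers land on each, and fit a power law to the attachment probability.

    # Toy sketch of quantifying preferential attachment: does the probability that
    # a new paper lands on a topic grow as a power of the papers already on it?
    import numpy as np

    existing = np.array([5, 10, 50, 100, 500, 1000])   # papers already on each topic (made up)
    new      = np.array([2,  3, 11,  19,  80,  150])   # new papers attracted (made up)

    attach_prob = new / new.sum()
    # Fit attach_prob ~ existing**alpha via linear regression in log-log space.
    alpha, log_c = np.polyfit(np.log(existing), np.log(attach_prob), 1)
    print(f"alpha ~ {alpha:.2f}")   # alpha > 0 means the rich get richer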

The authors find that the physics papers in their sample do show preferential attachment, i.e. to those who have, more will be given. The effect is not as pronounced as for some social networks (e.g. Flickr) where similar studies have been done, but it clearly exists. They have further looked at the scaling in subsamples broken down by the country of origin of the first author and done the same analysis separately for a selection of four countries: Japan, China, Germany and the USA. They find that the preferential attachment is strongest for China, followed by Japan, Germany, and the USA. Yes, that’s right. According to this study, Americans are less likely to follow “hot” topics than Germans.

In the introduction of the paper the authors remark “It is believed among many scientists that there are many more Chinese scientists that are followers than original thinkers compared with many other countries.” I find this an interesting statement for a scientific paper, seeing that it’s little more than spelling out a perceived stereotype. Though they may be forgiven their bleak view of Chinese scientists since, for all I can tell, the authors are all Chinese, or are at least working in China. They interpret the results of their study as confirming this stereotype.

It should be mentioned that the sample which the authors analyzed also contains comments, replies and errata, which I’d have thought should be mostly evenly distributed over topics. I would guess that if these were taken out of the sample, the overall effect would increase somewhat.

But is this preferential attachment a sign that physicists follow “hot” topics?

What this analysis actually shows is that Physical Review preferably publishes papers on topics that already have a literature base. I wouldn’t call that tracing of “hotness”, I’d call it conservative. If you wanted to quantify how eager physicists are to jump on ‘hot’ topics, you’d have to measure how likely new papers are to be in rapidly growing fields, as opposed to fields with many publications already. And to add my own perceived stereotype, I’d be very surprised if you’d find the Germans jump faster than the Americans.

In summary, this study isn’t uninteresting but the interpretation of the data is highly misleading.

Friday, August 02, 2013

Video about Loop Quantum Cosmology

The YouTube video below, on Loop Quantum Cosmology, was brought to my attention.


I'm not sure who this video is aimed at. It seems to me you'll only understand what they are talking about when you already know what they are talking about, in which case you're not learning anything new, except possibly that Abhay Ashtekar's family moved a lot. The visuals are pretty much useless, the audio is on occasion extremely bad, and who is that person with the flipboard? They somehow fail to mention that Loop Quantum Gravity isn't the same as Loop Quantum Cosmology, so there's a glaring gap in the narrative. I don't know what the CMB anomalies mentioned at the end have to do with anything, and why are people still discussing black holes at the LHC? That topic is as dead as dead can be, not to mention that Loop Quantum Gravity didn't have much if anything to do with it anyway. At around 32 minutes they start talking about phenomenology, and it gets more interesting then. Note how very carefully they avoid making predictions...

That having been said, I'm not at all sure you want to spend 45 minutes on this. But please ignore my opinion and make up your own. I'm just in a foul mood because I couldn't find my favorite socks this morning.