
Wednesday, August 09, 2017

Outraged about the Google diversity memo? I want you to think about it.

Chairs. [Image: Verco]
That leaked internal memo from James Damore at Google? The one that says one shouldn’t expect employees in all professions to reflect the demographics of the whole population? Well, that was a pretty dumb thing to write. But not because it’s wrong. Dumb is that Damore thought he could have a reasoned discussion about this. In the USA, of all places.

The version of Damore’s memo that first appeared on Gizmodo lacked references and images. Meanwhile, however, the memo has its own website, complete with links and graphics.

Damore’s strikes me as a pamphlet produced by a well-meaning, but also utterly clueless, young white man. He didn’t deserve to get fired for this. He deserved maybe a slap on the too-quickly typing fingers. But in his world, asking for discussion is apparently enough to get fired.

I don’t normally write about the underrepresentation of women in science. Reason is I don’t feel fit to represent the underrepresented. I just can’t seem to appropriately suffer in my male-dominated environment. To the extent that one can trust online personality tests, I’m an awkwardly untypical female. It’s probably unsurprising I ended up in theoretical physics.

There is also a more sinister reason I keep my mouth shut. It’s that I’m afraid of losing what little support I have among the women in science if I seem to stab them in the back.

I’ve lived in the USA for three years and for three more years in Canada. On several occasions during these years, I’ve been told that my views about women in science are “hardcore,” “controversial,” or “provocative.” Why? Because I stated the obvious: Women are different from men. On that account, I’m totally with Damore. A male-female ratio close to one is not what we should expect in all professions – and not what we should aim at either.

But the longer I keep my mouth shut, the more I think my silence is a mistake. Because it means leaving the discussion – and with it, power – to those who shout the loudest. Like CNBC. Which wants you to be “shocked” by Damore’s memo in a rather transparent attempt to produce outrage and draw clicks. Are you outraged yet?

Increasingly, media-storms like this make me worry about the impression scientists give to the coming generation. Give to kids like Damore. I’m afraid they think we’re all idiots because the saner of us don’t speak up. And when the kids think they’re oh-so-smart, they’ll produce pamphlets to reinvent the wheel.

Fact is, though, much of the data in Damore’s memo is well backed-up by research. Women indeed are, on the average, more neurotic than men. It’s not an insult, it’s a common term in psychology. Women are also, on the average, more interested in people than in things. They do, on the average, value work-life balance more, react differently to stress, compete by other rules. And so on.

I’m neither a sociologist nor a psychologist, but my understanding of the literature is that these are uncontroversial findings. And not new either. Women are different from men, both by nature and by nurture, though it remains controversial just what is nurture and what is nature. But the cause is beside the point for the question of occupation: Women are different in ways that plausibly affect their choice of profession.

No, the problem with Damore’s argument isn’t the starting point, the problem is the conclusions that he jumps to.

To begin with, even I know most of Google’s work is people-centric. It’s either serving people directly, or analyzing people-data, or imagining the people-future. If you want to spend your life with things and ideas rather than people, then go into engineering or physics, but not into software-development.

That coding actually requires “female” skills was spelled out clearly by Yonatan Zunger, a former Google employee. But since I care more about physics than software-development, let me leave this aside.

The bigger mistake in Damore’s memo is one I see frequently: Assuming that job skills and performance can be deduced from differences among demographic groups. This just isn’t so. I believe for example if it wasn’t for biases and unequal opportunities, then the higher ranks in science and politics would be dominated by women. Hence, aiming at a 50-50 representation gives men an unfair advantage. I challenge you to provide any evidence to the contrary.

I’m not remotely surprised, however, that Damore naturally assumes the differences between typically female and male traits mean that men are more skilled. That’s the bias he thinks he doesn’t have. And, yeah, I’m likewise biased in favor of women. Guess that makes us even then.

The biggest problem with Damore’s memo however is that he doesn’t understand what makes a company successful. If a significant fraction of employees think that diversity is important, then it is important. No further justification is needed for this.

Yes, you can argue that increasing diversity may not improve productivity. The data situation on this is murky, to say the least. There’s some story about female CEOs in Sweden that supposedly shows something – but I want to see better statistics before I buy that. And in any case, the USA isn’t Sweden. More importantly, productivity hinges on employees’ well-being. If a diverse workplace is something they value, then that’s something to strive for, period.

What Damore seems to have aimed at, however, was merely to discuss the best way to deal with the current lack of diversity. Biases and unequal opportunities are real. (If you doubt that, you are a problem and should do some reading.) This means that the current representation of women, underprivileged and disabled people, and other minorities, is smaller than it would be in that ideal world which we don’t live in. So what to do about it?

One way to deal with the situation is to wait until the world catches up. Educate people about bias, work to remove obstacles to education, change societal gender images. This works – but it works very slowly.

Worse, one of the biggest obstacles that minorities face is a chicken-and-egg problem that time alone doesn’t cure. People avoid professions in which there are few people like them. This is a hurdle which affirmative action can remove, fast and efficiently.

But there’s a price to pay for preferentially recruiting the presently underrepresented. Which is that people supported by diversity efforts face a new prejudice: They weren’t hired because they’re skilled. They were hired because of some diversity policy!

I used to think this backlash has to be avoided at all costs, hence was firmly against affirmative action. But during my years in Sweden, I saw that it does work – at least for women – and also why: It makes their presence unremarkable.

In most of the European North, a woman in a leading position in politics or industry is now commonplace. It’s nothing to stare at and nothing to talk about. And once it’s commonplace, people stop paying attention to a candidate’s gender, which in return reduces bias.

I don’t know, though, if this would also work in science which requires an entirely different skill-set. And social science is messy – it’s hard to tell how much of the success in Northern Europe is due to national culture. Hence, my attitude towards affirmative action remains conflicted.

And let us be clear that, yes, such policies mean every once in a while you will not hire the most skilled person for a job. Therefore, a value judgement must be made here, not a logical deduction from data. Is diversity important enough for you to temporarily tolerate an increased risk of not hiring the most qualified person? That’s the trade-off nobody seems willing to spell out.

I also have to spell out that I am writing this as a European who now works in Europe again. For me, the most relevant contribution to equal opportunity is affordable higher education and health insurance, as well as governmentally paid maternity and parental leave. Without that, socially disadvantaged groups remain underrepresented, and companies continue to fear for revenue when hiring women of childbearing age. That, in all fairness, is an American problem not even Google can solve.

But one also doesn’t solve a problem by yelling “harassment” each time someone asks to discuss whether a diversity effort is indeed effective. I know from my own experience, and a poll conducted at Google confirms, that Damore’s skepticism about current practices is widespread.

It’s something we should discuss. It’s something Google should discuss. Because, for better or worse, this case has attracted much attention. Google’s handling of the situation will set an example for others.

Damore was fired, basically, for making a well-meant, if amateurish, attempt at institutional design, based on woefully incomplete information he picked from published research studies. But however imperfect his attempt, he was fired, in short, for thinking on his own. And what example does that set?

Monday, May 01, 2017

May-day Pope-hope

Pope Francis meets Stephen Hawking.
[Photo: Piximus.]
My husband is a Roman Catholic, so is his whole family. I’m a heathen. We’re both atheists, but dear husband has steadfastly refused to leave the church. That he throws out money with the annual “church tax” (imo a great failure of secularization) has been a recurring point of friction between us. But as of recently I’ve stopped bitching about it – because the current Pope is just so damn awesome.

Pope Francis, born in Argentina, is the 266th leader of the Catholic Church. The man’s 80 years old, but within only two years he has overhauled his religion. He accepts Darwinian evolution as well as the Big Bang theory. He addresses ecological problems – loss of biodiversity, climate change, pollution – and calls for action, while worrying that “international politics has [disregarded] well-founded scientific opinion about the state of our planet.” He also likes exoplanets:
“How wonderful would it be if the growth of scientific and technological innovation would come along with more equality and social inclusion. How wonderful would it be, while we discover faraway planets, to rediscover the needs of the brothers and sisters orbiting around us.”
I find this remarkable, not only because his attitude flies in the face of those who claim religion is incompatible with science. More important, Pope Francis succeeds where the vast majority of politicians fail. He listens to scientists, accepts the facts, and bases calls for action on evidence. Meanwhile, politicians left and right bend facts to mislead people about what’s in whose interest.

And Pope Francis is a man whose word matters big time. About 1.3 billion people in the world are presently members of his Church. For the Catholics, the Pope is the next best thing to God. The Pope is infallible, and he can keep going until he quite literally drops dead. Compared to Francis, Tweety-Trump is a fly circling a horse’s ass.

Global distribution of Catholics.
[Source: Wikipedia. By Starfunker226, CC BY-SA 3.0.]

This current Pope is demonstrably not afraid of science, and this gives me hope for the future. Most of the tension between science and religion that we witness today is caused by certain aspects of monotheistic religions that are obviously in conflict with science – if taken literally. But it’s an unnecessary tension. It would be easy enough to throw out what are basically thousand-year-old stories. But this will only happen once the religious understand it will not endanger the core of their beliefs.

Science advocates like to argue that religion is incompatible with science for religion is based on belief, not reason. But this neglects that science, too, is ultimately based on beliefs.

Most scientists, for example, believe in an external reality. They believe, for the most part, that knowledge is good. They believe that the world can be understood, and that this is something humans should strive for.

In the foundations of physics I have seen more specific beliefs. Many of my colleagues, for example, believe that the fundamental laws of nature are simple, elegant, even beautiful. They believe that logical deduction can predict observations. They believe in continuous functions and that infinities aren’t real.

None of this has a rational basis, but physicists rarely acknowledge these beliefs as what they are. Often, I have found myself more comfortable with openly religious people, for at least they are consciously aware of their beliefs and make an effort to prevent them from interfering with research. Even my own discipline, I think, would benefit from a better awareness of the bounds of human rationality. Even my own discipline, I think, could learn from the Pope to tell Is from Ought.

You might not subscribe to the Pope’s idea that “tenderness is the path of choice for the strongest, most courageous men and women.” Honestly, to me it doesn’t sound so different from believing that love will quantize gravity. But you don’t have to share the values of the Catholic Church to appreciate that here is a world leader who doesn’t confuse facts with values.

Sunday, February 19, 2017

Fake news wasn’t hard to predict – But what’s next?

In 2008, I wrote a blogpost which began with a dark vision – a presidential election led astray by fake news.

I’m not much of a prophet, but it wasn’t hard to predict. Journalism, for too long, attempted the impossible: Make people pay for news they don’t want to hear.

It worked, because news providers, by and large, shared an ethical code. Journalists aspired to tell the truth; their passion was unearthing and publicizing facts – especially those that nobody wanted to hear. And as long as the professional community held the power, they controlled access to the press – the device – and kept up the quality.

But the internet made it infinitely easy to produce and distribute news, both correct and incorrect. Fat headlines suddenly became what economists call an “intangible good”: news no longer relies on a physical resource or a process of manufacture. It can now be created, copied, and shared by anyone, anywhere, with almost zero investment.

By the early 00s, anybody could set up a webpage and produce headlines. From thereon, quality went down. News makes the most profit if it’s cheap and widely shared. Consequently, more and more outlets offer the news people want to read – that’s how the law of supply and demand is supposed to work, after all.

What we have seen so far, however, is only the beginning. Here’s what’s up next:
  • 1. Fake News Gets Organized

    An army of shadow journalists specializes in fake news, pitching it to alternative news outlets. These outlets will mix real and fake news. It becomes increasingly hard to tell one from the other.

  • 2. Fake News Becomes Visual

    “Picture or it didn’t happen” will soon be a thing of the past. Today, it’s still difficult to forge photos and videos. But software becomes better, and cheaper, and easier to obtain, and soon it will take experts to tell real from fake.

  • 3. Fake News Gets Cozy

    Anger isn’t sustainable. In the long run, most people want good news – they want to be reassured everything’s fine. The war in Syria is over. The earthquake risk in California is low. The economy is up. The chocolate ration has been raised again.

  • 4. Corporations Throw in the Towel

    Facebook and Google and Yahoo conclude it’s too costly to assess the truth value of information passed on by their platforms, and decide it’s not their task. They’re right.
  • 5. Fake News Has Real-World Consequences

    We’ll see denial of facts leading to deaths of thousands of people. I mean a lack of earthquake warning systems because the risk was dismissed as fear-mongering. I mean riots over terrorist attacks that never happened. I mean collapsed buildings and toxic infant formula because who cares about science. We’ll get there.

The problem that fake news poses for democratic societies attracted academic interest already a decade ago. Triggered by the sudden dominance of Google as a search engine, it entered the literature under the name “Googlearchy.”

Democracy relies on informed decision making. If the electorate doesn’t know what’s real, democratic societies can’t identify good ways to carry out the people’s will. You’d think that couldn’t be in anybody’s interest, but it is – if you can make money from misinformation.

Back then, the main worry focused on search engines as primary information providers. Someone with more prophetic skills might have predicted that social networks would come to play the central role for news distribution, but the root of the problem is the same: Algorithms are designed to deliver news which users like. That optimizes profit, but degrades the quality of news.

Economists of the Chicago School would tell you that this can’t be. People’s behavior reveals what they really want, and any regulation of the free market merely makes the fulfillment of their wants less efficient. If people read fake news, that’s what they want – the math proves it!

But no proof is better than its assumptions, and one central assumption for this conclusion is that people can’t have mutually inconsistent desires. We’re supposed to have factored in long-term consequences of today’s actions, properly future-discounted and risk-assessed. In other words, we’re supposed to know what’s good for us and our children and great-grandchildren and make rational decisions to work towards that goal.

In reality, however, we often want what’s logically impossible. Problem is, a free market, left unattended, caters predominantly to our short-term wants.

At the risk of appearing inconsistent, economists are right when they speak of revealed preferences as the tangible conclusion of our internal dialogues. It’s just that economists, being economists, like to forget that people have a second way of revealing preferences – they vote.

We use democratic decision making to ensure the long-term consequences of our actions are consistent with the short-term ones, like putting a price on carbon. One of the major flaws of current economic theory is that it treats the two systems, economic and political, as separate, when really they’re two sides of the same coin. But free markets don’t work without a way to punish forgery, lies, and empty promises.

This is especially important for intangible goods – those which can be reproduced with near-zero effort. Intangible goods, like information, need enforced copyright, or else quality becomes economically unsustainable. Hence, it will take regulation, subsidies, or both to prevent us from tumbling down into the valley of alternative facts.

In recent months I’ve seen a lot of finger-pointing at scientists for not communicating enough or not communicating correctly, as if we were the ones to blame for fake news. But this isn’t our fault. It’s the media which has a problem – and it’s a problem scientists solved long ago.

The main reason why fake news is hard to identify, and why it remains profitable to reproduce what other outlets have already covered, is that journalists – in contrast to scientists – are utterly opaque about how they work.

As a blogger, I see this happening constantly. I know that many, if not most, science writers closely follow science blogs. And the professional writers frequently report on topics previously covered by bloggers – without doing as much as naming their sources, not to mention referencing them.

This isn’t merely a personal paranoia. I know this because in several instances science writers actually told me that my blogpost about this-or-that has been so very useful. Some even asked me to share links to the articles they wrote based on it. Let that sink in for a moment – they make money from my expertise, don’t give me credit, and think that this is entirely appropriate behavior. And you wonder why fake news is economically profitable?

For a scientist, that’s mindboggling. Our currency is citations. Proper credit is pretty much all we want. Keep the money, but say my name.

I understand that journalists have to protect some sources, so don’t misunderstand me. I don’t mean they have to spill the beans about their exclusive secrets. What I mean is simply that a supposed news outlet that merely echoes what’s been reported elsewhere should be required to refer to the earlier article.

Of course this would imply that the vast majority of existing news sites were revealed as copy-cats and lose readers. And of course it isn’t going to happen because nobody’s going to enforce it. If I saw even a remote chance of this happening, I wouldn’t have made the above predictions, would I?

What’s even more perplexing for a scientist, however, is that news outlets, to the extent that they do fact-checks, don’t tell customers that they fact-check, or what they fact-check, or how they fact-check.

Do you know, for example, which science magazines fact-check their articles? Some do, some don’t. I know for a few because I’ve been border-crossing between scientists and writers for a while. But largely it’s insider knowledge – I think it should be front-page information. Listen, Editor-in-Chief: If you fact-check, tell us.

It isn’t going to stop fake news, but I think a more open journalistic practice and publicly stated adherence to voluntary guidelines could greatly alleviate it. It probably makes you want to puke, but academics are good at a few things and high community standards are one of them. And that is what journalism needs right now.

I know, this isn’t exactly the cozy, shallow, good news that shares well. But it will be a great pleasure when, in ten years, I can say: I told you so.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers, people who weren’t discussing physics, but the philosophical underpinning of physicists’ arguments? Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me, or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach growl, trying to will my stomach not to growl and instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound-waves in certain kinds of fluids which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational test of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.
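To make the map less abstract – this is just the textbook example, not something from the talks – sound in an irrotational, inviscid fluid obeys a wave equation in an effective space-time with the “acoustic metric”

ds^2 = \frac{\rho}{c_s}\left[-(c_s^2 - v^2)\,dt^2 - 2\,\vec{v}\cdot d\vec{x}\,dt + d\vec{x}\cdot d\vec{x}\right],

where \rho is the fluid’s density, c_s the speed of sound, and \vec{v} the flow velocity. Sound rays follow the null geodesics of this metric, and where the flow becomes faster than sound an acoustic horizon forms – the analogue of a black hole horizon.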

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t a speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

It then followed a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have, now for the 3rd time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes to physics beyond the standard model. The survey results, however, will only get published in January, so I presently can’t tell you more than that. But once the paper is available you’ll read about it on this blog.

The next talk was by Radin Dardashti who warned us ahead that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to qfts).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, ie we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete, that is, it need not be a consistent theory up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer about recent developments in modelling particle dark matter. In astrophysics – like in particle physics – the trend is to move away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – you be the judge. But the workshop helped me see the reason I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Wednesday, March 09, 2016

A new era of science

[img source: changingcourse.com]
Here in basic research we all preach the gospel of serendipity. Breakthroughs cannot be planned, insights not be forced, geniuses not be bred. We tell ourselves – and everybody willing to listen – that predicting the outcome of a research project is more difficult than doing the research in the first place. And half of all discoveries are made while tinkering with something else anyway. Now please join me for the chorus, and let us repeat once again that the World Wide Web was invented at CERN – while studying elementary particles.

But in theoretical physics the age of serendipitous discovery is nearing its end. You don’t tinker with a 27 km collider and don’t coincidentally detect gravitational waves while looking for a better way to toast bread. Modern experiments succeed by careful planning over the course of decades. They rely on collaborations of thousands of people and cost billions of dollars. While we always try to include multipurpose detectors hoping to catch unexpected signals, there is no doubt that our machines are built for very specific purposes.

And the selection is harsh. For every detector that gets funding, three others don’t. For every satellite mission that goes into orbit, five others never get off the ground. Modern physics isn’t about serendipitous discoveries – it’s about risk/benefit analyses and impact assessments. It’s about enhanced design, horizontal integration, and progressive growth strategies. Breakthroughs cannot be planned, but you sure can call in a committee meeting to evaluate their ROI and disruptive potential.

There is no doubt that scientific research takes up resources. It requires both time and money, which is really just a proxy for energy. And as our knowledge increases, new discoveries have become more difficult, requiring us to pool funding and create large international collaborations.

This process is most pronounced in basic research in physics – cosmology and particle physics – because in this area we deal with the smallest and the most distant objects in the universe. Things that are hard to see, basically. But the trend towards Big Science can be witnessed also in other disciplines’ billion-dollar investments like the Human Genome Project, the Human Brain Project, or the National Ecological Observatory Network. “It's analogous to our LHC,” says Ash Ballantyne, a bioclimatologist at the University of Montana in Missoula, who has never heard of physics envy and doesn’t want to be reminded of it either.

These plus-sized projects will keep a whole generation of scientists busy - and the future will bring more of this, not less. This increasing cost of experiments in frontier research has slowly, but inevitably, changed the way we do science. And it is fundamentally redefining the role of theory development. Yes, we are entering a new era of science – whether we like that or not.

Again, this change is most apparent in basic research in physics. The community’s assessment of a theory’s promise must be drawn upon to justify investment in an experimental test of that theory. Hence the increased scrutiny that theory assessment has gotten recently. In the end it comes down to the question of where we should put our money.

We often act like knowledge discovery is a luxury. We act like it’s something societies can support optionally, to the extent that they feel like funding it. We act like it’s something that will continue, somehow, anyway. The situation, however, is much scarier than that.

At every level of knowledge we have the capability to exploit only a finite amount of resources. To unlock new resources, we have to invest the ones we have to discover new knowledge and develop new technologies. The newly unlocked resources can then be used for further exploration. And so on.

It has worked so far. But at any level in this game, we might fail. We might not succeed in using the resources we have smartly enough to upgrade to the next level. If we don’t invest sufficiently into knowledge discovery, or invest into the wrong things, we might get stuck – and might end up unable to proceed beyond a certain level of technology. Forever.

And so, when I look at the papers on hep-th and gr-qc, I don’t think about the next 3 years or 5 years, as my funding agency wants me to. I think about the next 3000 or 5000 years. Which of this research holds the promise of discovering knowledge necessary to get to the next level? The bigger and more costly experiments become, the larger the responsibility of theorists who claim that testing a theory will uncover worthwhile new insights. Do we live up to this responsibility?

I don’t think we do. Worse, I think we can’t because funding pressures force theoreticians to overemphasize the promise of their own research. The necessity of marketing is now a reality of science. Our assessment of research agendas is inevitably biased and non-objective. For most of the papers I see on hep-th and gr-qc, I think people work on these topics simply because they can. They can get this research published and they can get it funded. It tells you all about academia and very little about the promise of a theory.

While our colleagues in experiment have entered a new era of science, we theorists are still stuck in the 20th century. We still believe our task is to fight for our own ideas, when we should instead be working together on identifying those experiments most likely to advance our societies. We still pretend that science is somehow self-correcting because a failed experiment will force us to discard a hypothesis – and we ignore the troubling fact that there are only so many experiments we can do, ever. We better place our bets very carefully because we won’t be able to bet arbitrarily often.

The reality of life is that nothing is infinite. Time, energy, manpower – all of this is limited. The bigger science projects become, the more carefully we have to direct our investments. Yes, it’s a new era of science. Are we ready?

Sunday, January 10, 2016

Free will is dead, let’s bury it.

I wish people would stop insisting they have free will. It’s terribly annoying. Insisting that free will exists is bad science, like insisting that horoscopes tell you something about the future – it’s not compatible with our knowledge about nature.

According to our best present understanding of the fundamental laws of nature, everything that happens in our universe is due to only four different forces: gravity, electromagnetism, and the strong and weak nuclear force. These forces have been extremely well studied, and they don’t leave any room for free will.

There are only two types of fundamental laws that appear in contemporary theories. One type is deterministic, which means that the past entirely predicts the future. There is no free will in such a fundamental law because there is no freedom. The other type of law we know appears in quantum mechanics and has an indeterministic component which is random. This randomness cannot be influenced by anything, and in particular it cannot be influenced by you, whatever you think “you” are. There is no free will in such a fundamental law because there is no “will” – there is just some randomness sprinkled over the determinism.
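To put this in symbols – a schematic illustration, not a statement about any particular theory – a deterministic law is an evolution equation of the type

\frac{dx}{dt} = f(x(t)),

where fixing the state x(t_0) at one moment fixes x(t) at all other moments; there is nothing left to choose. The indeterministic case adds a random term, schematically

dx = f(x)\,dt + \sigma\,dW_t,

where dW_t is noise that, by assumption, is not correlated with anything else – and so cannot be influenced by you either.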

In neither case do you have free will in any meaningful way.

These are the only two options, and all other elaborations on the matter are just verbose distractions. It doesn’t matter if you start talking about chaos (which is deterministic), top-down causation (which doesn’t exist), or insist that we don’t know how consciousness really works (true but irrelevant). It doesn’t change a thing about this very basic observation: there isn’t any known law of nature that lets you meaningfully speak of “free will”.

If you don’t want to believe that, I challenge you to write down any equation for any system that allows for something one could reasonably call free will. You will almost certainly fail. The only thing you can really do to hold on to free will is to wave your hands, yell “magic”, and insist that there are systems which are exempt from the laws of nature. And these systems somehow have something to do with human brains.

The only known example for a law that is neither deterministic nor random comes from myself. But it’s a baroque construct meant as proof in principle, not a realistic model that I would know how to combine with the four fundamental interactions. As an aside: The paper was rejected by several journals. Not because anyone found anything wrong with it. No, the philosophy journals complained that it was too much physics, and the physics journals complained that it was too much philosophy. And you wonder why there isn’t much interaction between the two fields.

After plain denial, the somewhat more enlightened way to insist on free will is to redefine what it means. You might settle for example on speaking of free will as long as your actions cannot be predicted by anybody, possibly not even by yourself. Clearly, it is presently impossible to make such a prediction. It remains to be seen whether it will remain impossible, but right now it’s a reasonable hope. If that’s what you want to call free will, go ahead, but better not ask yourself what determined your actions.

A popular justification for this type of free will is to point out that the comparably large components responsible for chemical interactions in your brain – the molecules – are made of smaller components which may retain some influence. If you don’t keep track of these smaller components, the behavior of the larger ones might not be predictable. You can then say “free will is emergent” because of “higher level indeterminism”. It’s like saying if I give you a robot and I don’t tell you what’s in the robot, then you can’t predict what the robot will do, consequently it must have free will. I haven’t managed to bring up sufficient amounts of intellectual dishonesty to buy this argument.

But really you don’t have to bother with the details of these arguments, you just have to keep in mind that “indeterminism” doesn’t mean “free will”. Indeterminism just means there’s some element of randomness, either because that’s fundamental or because you have willfully ignored information on short distances. But there is still either no “freedom” or no “will”. Just try it. Try to write down one equation that does it. Just try it.

I have written about this a few times before and according to the statistics these are some of the most-read pieces on my blog. Following these posts, I have also received a lot of emails from readers who seem seriously troubled by the claim that our best present knowledge about the laws of nature doesn’t allow for the existence of free will. To ease your existential worries, let me therefore spell out clearly what this means and doesn’t mean.

It doesn’t mean that you are not making decisions or are not making choices. Free will or not, you have to do the thinking to arrive at a conclusion, the answer to which you previously didn’t know. Absence of free will doesn’t mean either that you are somehow forced to do something you didn’t want to do. There isn’t anything external imposing on you. You are whatever makes the decisions. Besides this, if you don’t have free will you’ve never had it, and if this hasn’t bothered you before, why start worrying now?

This conclusion that free will doesn’t exist is so obvious that I can’t help but wonder why it isn’t widely accepted. The reason, I am afraid, is not scientific but political. Denying free will is considered politically incorrect because of a wide-spread myth that free will skepticism erodes the foundation of human civilization.

For example, a 2014 article in Scientific American addressed the question “What Happens To A Society That Does not Believe in Free Will?” The piece is written by Azim F. Shariff, a Professor of Psychology, and Kathleen D. Vohs, a Professor of Excellence in Marketing (whatever that might mean).

In their essay, the authors argue that free will skepticism is dangerous: “[W]e see signs that a lack of belief in free will may end up tearing social organization apart,” they write. “[S]kepticism about free will erodes ethical behavior,” and “diminished belief in free will also seems to release urges to harm others.” And if that wasn’t scary enough already, they conclude that only the “belief in free will restrains people from engaging in the kind of wrongdoing that could unravel an ordered society.”

To begin with, I find it highly problematic to suggest that the answers to some scientific questions should be taboo because they might be upsetting. They don’t explicitly say this, but the message the article sends is pretty clear: If you do as much as suggest that free will doesn’t exist you are encouraging people to harm others. So please read on before you grab the axe.

The conclusion that the authors draw is highly flawed. These psychology studies always work the same. The study participants are engaged in some activity in which they receive information, either verbally or in writing, that free will doesn’t exist or is at least limited. After this, their likelihood of engaging in “wrongdoing” is tested and compared to that of a control group. But the information the participants receive is highly misleading. It does not prime them to think they don’t have free will, it instead primes them to think that they are not responsible for their actions. Which is an entirely different thing.

Even if you don’t have free will, you are of course responsible for your actions because “you” – that mass of neurons – are making, possibly bad, decisions. If the outcome of your thinking is socially undesirable because it puts other people at risk, those other people will try to prevent you from more wrongdoing. They will either try to fix you or lock you up. In other words, you will be held responsible. Nothing of this has anything to do with free will. It’s merely a matter of finding a solution to a problem.

The only thing I conclude from these studies is that neither the scientists who conducted the research nor the study participants spent much time thinking about what the absence of free will really means. Yes, I’ve spent far too much time thinking about this.

The reason I keep harping on the free will issue is not that I want to collapse civilization, but that I am afraid the politically correct belief in free will hinders progress on the foundations of physics. Free will of the experimentalist is a relevant ingredient in the interpretation of quantum mechanics. Without free will, Bell’s theorem doesn’t hold, and all we have learned from it goes out the window.
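To spell out where the free-will assumption enters – a standard reminder, nothing new – take the CHSH form of Bell’s theorem. For local hidden-variable models one derives

S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,

while quantum mechanics allows |S| up to 2\sqrt{2}. The derivation assumes that the detector settings a, a', b, b' can be chosen independently of the hidden variables \lambda, that is \rho(\lambda\,|\,a,b) = \rho(\lambda). That independence is the experimentalist’s “free will.” Drop it, and the bound no longer follows.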

This option of giving up free will in quantum mechanics goes under the name “superdeterminism” and is exceedingly unpopular. There seem to be but three people on the planet who work on this, ‘t Hooft, me, and a third person of whom I only learned from George Musser’s recent book (and whose name I’ve since forgotten). Chances are the three of us wouldn’t even agree on what we mean. It is highly probable we are missing something really important here, something that could very well be the basis of future technologies.

Who cares, you might think, buying into the collapse of the wave-function seems a small price to pay compared to the collapse of civilization. On that matter though, I side with Socrates: “The unexamined life is not worth living.”

Wednesday, December 30, 2015

How does a lightsaber work? Here is my best guess.

A lightsaber works by emitting a stream of magnetic monopoles. Magnetic monopoles are heavy particles that source magnetic fields. They are so-far undiscovered but many physicists believe they are real due to theoretical arguments. For string theorist Joe Polchinski, for example, “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen.” Magnetic monopoles are so heavy however that they cannot be produced by any known processes in the universe – a minor technological complication that I will come back to below.




Depending on the speed at which the monopoles are emitted, they will either escape or return back to the saber’s hilt which has the opposite magnetic charge. You could of course just blast your opponent with the monopoles, but that would be rather boring. The point of a lightsaber isn’t to merely kill your enemies, but to kill them with style.



So you are emitting this stream of monopoles. Since the hilt has the opposite magnetic charge, the monopoles pull magnetic field lines after them. Next you eject some electrically charged particles – electrons or ions – into this field with an initial angular velocity. These will spiral around the magnetic field lines and, due to the circular motion, they will emit synchrotron radiation, which is why you can see the blade.

Due to the emission of light and the occasional collision with air molecules, the electrically charged particles slow down and eventually escape the magnetic field. That doesn’t sound really healthy, so you might want to make sure that their kinetic energy isn’t too high. To then still get an emission spectrum with a significant contribution in the visible range, you need a huge magnetic field. Which can’t really be healthy either, but at least it decays inversely proportional to the distance from the blade.
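To see just how huge, here is a quick back-of-the-envelope estimate in Python – a minimal sketch in which the Lorentz factor and the target wavelength are my own illustrative assumptions, not part of the design above:

import math

e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8   # SI: charge [C], electron mass [kg], speed of light [m/s]

gamma = 3.0      # assumed Lorentz factor, i.e. a modest kinetic energy of about 1 MeV
lam = 500e-9     # assumed target wavelength: green light [m]

omega = 2 * math.pi * c / lam                # desired critical angular frequency [rad/s]
# synchrotron critical frequency: omega ~ (3/2) * gamma**2 * e * B / m_e
B = 2 * m_e * omega / (3 * gamma**2 * e)
print(f"required field: about {B:.0f} tesla")   # roughly 1600 T

So, of the order of a thousand tesla – huge indeed.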

Letting the monopoles escape has the advantage that you don’t have to devise a complicated mechanism to make sure they actually return back to the hilt. It has the disadvantage though that one fighter’s monopoles can be sucked up by the other’s saber if that has opposite charge. Can the blades pass through each other? Well, if they both have the same charges, they repel. You couldn’t easily pass them through each other, but they would probably distort each other to some extent. How much depends on the strength of the magnetic field that keeps the electrons caught.


Finally, there is the question how to produce the magnetic monopoles to begin with. For this, you need a pocket-sized accelerator that generates collision energies at the Planck scale. The most commonly used method for this is to use a Kyber crystal. This also means that you need to know string theory to accurately calculate how a lightsaber operates. May the Force be with you.

[For more speculation, see also Is a Real Lightsaber Possible? by Don Lincoln.]

Thursday, September 24, 2015

The power of words – and its limit

What images does the word “power” bring to your mind? Weapons? Bulging muscles? A mean-looking capitalist smoking a cigar? Whatever your mind’s eye brings up, it is most likely not a fragile figure hunched over a keyboard. But maybe it should.

Words have power, today more than ever. Riots and rebellions can be arranged by text-messages, a single word made hashtag can create a mass movement, 140 characters will ruin a career, and a video gone viral will reach out to the world. Words can destroy lives or they can save them: “If you find the right tone and the right words, you can reach just about anybody,” says Cid Jonas Gutenrath who worked for years at the emergency call center of the Berlin police force [1]:

“I had worked before as a bouncer, and I was always the shortest one there. I had an existential need, so to speak, to solve problems through language. I can express myself well, and when that’s combined with some insights into human nature it’s a valuable skill. What I’m talking about is every conversation with the police, whether it’s an emergency call or a talk on the street. Language makes it possible for everyone involved to get out of the situation in one piece.”

Words won’t break your bones. But more often than not, words – or their failure – decide whether we take to weapons. It is our ability to convince, cooperate, and compromise that has allowed us to conquer an increasingly crowded planet. According to recent research, what made humans so successful might indeed not have been superior intelligence or skills, but instead our ability to communicate and work together.

In a recent SciAm issue, Curtis W. Marean, Professor of Archeology at Arizona State University, lays out a new hypothesis for what the decisive development was that allowed humans to dominate Earth. According to Marean, it is not, as previously proposed, handling fire, building tools, or digesting a large variety of food. Instead, he argues, what sets us apart is our willingness to negotiate a common goal.

The evolution of language was necessary to allow our ancestors to find solutions to collective problems, solutions other than hitting each other over the head. So it became possible to reach agreements between groups, to develop a basis for commitments and, eventually, contracts. Language also served to speed up social learning and to spread ideas. Without language we wouldn’t have been able to build a body of knowledge on which the scientific profession could stand today.

“Tell a story,” is the ubiquitous advice given to anybody who writes or speaks in public. And yet, some information fits badly into story-form. In popular science writing, the reader inevitably gets the impression that much of science is a series of insights building onto each other. The truth is that most often it is more like collecting puzzle pieces that might or might not actually belong to the same picture.

The stories we tell are inevitably linear; they follow one neat paragraph after the other, one orderly thought after the next, line by line, word by word. But this simplicity betrays the complexity of the manifold interrelations between scientific concepts. Ask any researcher for their opinion on a news report in their discipline and they will almost certainly say “It’s not so simple…”

Links between different fields of science.
Image Source: Bollen et al, PLOS ONE, March 11, 2009.
There is a value to simple stories. They are easily digestible and convey excitement about science. But the reader who misses the entire context cannot tell how well a new finding is embedded into existing research. The term “fringe science” is a direct visual metaphor alluding to this disconnect, and it’s a powerful one. Real scientific progress must fit into the network of existing knowledge.

The problem with linear stories doesn’t only make writing about science difficult, it also makes writing in science difficult. The main difficulty when composing scientific papers is the necessity to convert a higher-dimensional network of associations into a one-dimensional string. It is plainly impossible. Many research articles are hard to follow, not because they are badly written, but because the nature of knowledge itself doesn’t lend itself to narrative.

I have an ambivalent relation to science communication because a good story shouldn’t make or break an idea. But the more ideas we are exposed to, the more relevant good presentation becomes. Every day scientific research seems a little less like a quest for truth, and a little more like a competition for attention.

In an ideal world maybe scientists wouldn’t write their papers themselves. They would call for an independent reporter and have to explain their calculations, then hand over their results. The paper would be written up by an unbiased expert, plain, objective, and comprehensible. Without exaggerations, without omissions, and without undue citations to friends. But we don’t live in an ideal world.

What can you do? Clumsy and often imperfect, words are still the best medium we have to convey thought. “I think therefore I am,” Descartes said. But 400 years later, the only reason his thought is still alive is that he put it into writing.


[1] Quoted in: Evonik Magazine 2015-02, p 17.

Monday, July 20, 2015

The number-crunchers. How we learned to stop worrying and love to code.

My grandmother was a calculator, and I don’t mean to say she was the newest model from Texas Instruments. I mean my grandmother did calculations for a living, with pencil on paper, using a slide rule and logarithmic tables. She calculated the positions of stars in the night sky, for five-minute intervals, day by day, digit by digit.

Today you can download one of a dozen free apps to display the night sky for any position on Earth, any time, any day. Not that you actually need to know stellar constellations to find True North. Using satellite signals, your phones can now tell your position to within a few meters, and so can 2 million hackers in Russia.

My daughters meanwhile are thoroughly confused as to what a phone is, since we use the phone to take photos but make calls on the computer. For my four-year-old, a “phone” is pretty much anything that beeps, including the microwave, which for all I know by next year might start taking photos of dinner and uploading them to facebook. And the landline. Now that you mention it. Somebody called in March and left a voicemail.

Jack Myers dubbed us the “gap generation,” the last generation to remember the time before the internet. Myers is a self-described “media ecologist” which makes you think he’d have heard of search engine optimization. Unfortunately, when queried “gap generation” it takes Google 0.31 seconds to helpfully bring up 268,000,000 hits for “generation gap.” But it’s okay. I too recall life without Google, when “viral” meant getting a thermometer stuffed between your lips rather than being on everybody’s lips.

I wrote my first email in 1995 with a shell script called “mail” when the internet was chats and animated gifs. Back then, searching for a journal article meant finding a ladder and blowing dust off thick volumes with yellowish pages. There were no keyword tags or trackbacks; I looked for articles by randomly browsing through journals. If I had an integral to calculate, there were Gradshteyn and Ryzhik’s tables, or Abramowitz and Stegun's Handbook of Special Functions, and otherwise, good luck.

Our first computer software for mathematical calculations, one of the early Maple versions, left me skeptical. It had an infamous error in one of the binomial equations that didn’t exactly instill trust. The program was slow and stalled the machine, for which everybody hated me, because my desk computer was also the institute’s main server (which I didn’t know until I turned it off, but then learned very quickly). I taught myself fortran and perl and javascript and later some c++, and complained it wasn’t how I had imagined being a theoretical physicist. I had envisioned myself thinking deep thoughts about the fundamental structure of reality, not chasing after missing parentheses.

It turned out much of my masters thesis came down to calculating a nasty integral that wasn’t tractable numerically, by computer software, because it was divergent. And while I was juggling generalized hypergeometric functions and Hermite polynomials, I became increasingly philosophic about what exactly it meant to “solve an integral.”

We say an integral is solved if we can write it down as a composition of known functions. But this selection of functions, even the polynomials, is an arbitrary choice. Why not take the supposedly unsolvable integral, use it to define a function, and be done with it? Why do some functions count as solutions and others don’t? We prefer particular functions because their behaviors are well understood. But that again is a matter of how much they are used and studied. Isn’t it in the end all a matter of habit and convention?
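To make that concrete, here is a minimal sketch (Python with numpy/scipy; the test values are made up for illustration) of exactly this move: take an integral as the definition of a function, and only afterwards notice that it coincides with a “named” special function – in this case the upper incomplete Gamma function, which will reappear below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc, gamma

def upper_incomplete_gamma(s, x):
    """Take the integral itself as the definition -- no closed form required."""
    value, _ = quad(lambda t: t**(s - 1) * np.exp(-t), x, np.inf)
    return value

s, x = 2.5, 1.3                        # arbitrary test values
by_integral = upper_incomplete_gamma(s, x)
by_name = gammaincc(s, x) * gamma(s)   # the "named" function, rescaled from scipy's regularized form
print(by_integral, by_name)            # agree to numerical precision
```

Whether you call the result a “solved” integral or merely a well-studied one is, as far as the numbers are concerned, bookkeeping.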

After two years I managed to renormalize the damned integral and was left with an expression containing incomplete Gamma functions, which are themselves defined by yet other integrals. The best thing I knew to do with this was to derive some asymptotic limits and then plot the full expression. Had there been any way to do this calculation numerically all along, I’d happily have done it, saved two years of time, and gotten the exact same result and insight. Or would I? I doubt the paper would even have gotten published.

Twenty years ago I, like most physicists, considered numerical results inferior to analytical, pen-on-paper derivations. But this attitude has changed, changed so slowly I almost didn’t notice it changing. Today numerical studies are still often considered suspicious, fighting a prejudice of undocumented error. But it has become accepted practice to publish results merely in the form of graphs, figures, and tables, videos even, for (systems of) differential equations that aren’t analytically tractable. Especially in General Relativity, where differential equations tend to be coupled, non-linear, and with coordinate-dependent coefficients – i.e. as nasty as it gets – analytic solutions are the exception, not the norm.
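As a minimal sketch of what that practice looks like (Python with scipy; the system and numbers are generic stand-ins, not a relativity problem): a coupled, non-linear system with no known closed-form solution, integrated numerically and reported as a table of values rather than a formula.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A generic coupled non-linear system (the Lorenz equations); no closed-form
# solution is known, so the "result" is whatever the integrator returns.
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0.0, 5.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 5.0, 11))

# "Publish" the outcome as a small table instead of an analytic expression.
for t, x, y, z in zip(sol.t, *sol.y):
    print(f"t={t:4.1f}  x={x:8.3f}  y={y:8.3f}  z={z:8.3f}")
```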

Numerical results are still less convincing, but not so much because of a romantic yearning for deep insights. They are less convincing primarily because we lack shared standards for coding, whereas we all know the standards of analytical calculation. We use the same functions and the same symbols (well, mostly), whereas deciphering somebody else’s code requires as much psychoanalysis as patience. For now. But imagine you could check code with the same ease you check algebraic manipulation. Would you ditch analytical calculations in favor of purely numerical ones, given the broader applicability of the latter? How would insights obtained by one method be of any less value than those obtained by the other?

The increase of computing power has generated entirely new fields of physics by allowing calculations that previously just weren’t feasible. Turbulence in plasma, supernova explosions, heavy ion collisions, neutron star mergers, or lattice QCD to study the strong nuclear interaction – these are all examples of investigations that have flourished only with the increase in processing speed and memory. Such disciplines tend to develop their own unique and very specialized nomenclature and procedures that are difficult, if not impossible, for outsiders to evaluate.
Lattice QCD. Artist’s impression.

Then there is big data that needs to be handled. Be it LHC collisions or temperature fluctuations in the CMB or global fits of neutrino experiments, this isn’t data any human can deal with by pen on paper. In these areas too, subdisciplines have sprung up, dedicated to data manipulation and handling. Postdocs specialized in numerical methods are in high demand. But even though essential to physics, they face the prejudice of somehow being “merely calculators.”

Maybe the best example is the minuscule corrections to probabilities of scattering events, like those taking place at the LHC. Calculating these next-to-next-to-next-to-leading-order contributions is an art as much as a science; it is a small subfield of high energy physics that requires summing up thousands or millions of Feynman diagrams. While there are many software packages available, few physicists know all the intricacies and command all the techniques; those who do often develop software along with their research. They are perfecting calculations, aiming for the tiniest increase in precision, much like pole vaulters perfect their every motion aiming for the tiniest increase in height. It is a highly specialized skill, presently at the edge of theoretical physics. But while we admire the relentless perfection of professional athletes, we disregard the single-minded focus of the number-crunchers. What can we learn from it? What insight can be gained from moving the bar an inch higher?

What insight do you gain from calculating the positions of stars in the night sky, you could have asked my grandmother. She was the youngest of seven siblings; her father died in the First World War. Her only brother and her husband were drafted for the Second World War, and feeding the family was left to the sisters. To avoid manufacturing weapons for a regime she detested, she took on a position in an observatory, calculating the positions of stars. This appointment came to a sudden stop when her husband was badly injured and she was called to his side at the war front to watch him die, or so she assumed. Against all expectations, my grandfather recovered from his skull fractures. He didn’t have to return to the front and my grandma didn’t return to her job. It was only when the war was over that she learned her calculations were to help the soldiers target bombs, knowledge that would haunt her still 60 years later.

What insight do we gain from this? Precision is the hallmark of science, and for much of our society science is a means to other ends. But can mere calculation ever lead to true progress? Surely not with the computer codes we use today, which execute operations but do not look for simpler underlying principles, which is what advances understanding. It is this missing search for new theories that leaves physicists cynical about the value of computation. And yet some time in the future we might have computer programs doing exactly this, looking for underlying mathematical laws better suited than existing ones to match observation. Will physicists one day be replaced by software? Can natural law be extracted by computers from data? If you handed all the LHC output to an artificial intelligence, could it spit out the standard model?
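Just to give the question a toy form (Python with numpy; the data and the candidate “laws” are entirely made up): let a hidden rule generate noisy observations and let the code pick whichever candidate formula fits best. It is a caricature of extracting natural law from data, but it shows the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
data = 3.0 / x**2 + rng.normal(0.0, 0.01, x.size)   # "observations" from a hidden inverse-square law

# Candidate "laws"; fitting reduces to a least-squares choice of prefactor.
candidates = {
    "a/x":       1.0 / x,
    "a/x^2":     1.0 / x**2,
    "a*exp(-x)": np.exp(-x),
}

scores = {}
for name, basis in candidates.items():
    a = np.dot(basis, data) / np.dot(basis, basis)   # best-fit prefactor
    scores[name] = np.sum((data - a * basis) ** 2)   # residual sum of squares

print(min(scores, key=scores.get))                   # recovers "a/x^2", the hidden rule
```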

In an influential 2008 essay “The End of Theory,” Chris Anderson argued that indeed computers will one day make human contributions to theoretical explanations unnecessary:
“The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the "beautiful story" phase of a discipline starved of data) is that we don't know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on[...]

The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”
His future vision was widely criticized by physicists, me included, but I’ve had a change of mind. Much of the criticism Anderson took was due to vanity. We like to believe the world will fall to pieces without our genius, and we don’t want to be replaced by software. But I don’t think there’s anything special about the human brain that an artificial intelligence couldn’t do, in principle. And I don’t care very much who or what delivers insights, as long as they come. In the end it comes down to trust.

If a computer came up with just the right string theory vacuum to explain the standard model and offered you the explanation that the world is made of strings to within an exactly quantified precision, what difference would it make that the headline was made by a machine rather than a human? Wouldn’t you gain the exact same insight? Yes, we would still need humans to initiate the search, someone to write the code that serves our purposes. And chances are we would celebrate the human rather than the machine. But the rest is overcoming prejudice against “number crunching,” which has to be addressed by setting up reliable procedures that ensure a computer’s results are sound science. I’ll be happy if your AI delivers a theory of quantum gravity; bring it on.

My grandmother outlived her husband, who died after multiple strokes. Well into her 90s she still had a habit of checking all the numbers on her receipts, bills, and account statements. Despite my conviction that artificial intelligences could replace physicists, I don’t think it’s likely to happen. The human brain is remarkable not so much for its sheer computing power, but for its efficiency, resilience, and durability. Show me any man-made machine that will still run after 90 years of permanent use.

Wednesday, July 08, 2015

The loneliness of my notepad

Few myths have been busted as often and as cheerfully as that of the lone genius. “Future discoveries are more likely to be made by scientists sharing ideas than a lone genius,” declares Athene Donald in the Guardian, and Joshua Wolf Shenk opines in the NYT that “the lone genius is a myth that has outlived its usefulness.” Thinking on your own is so yesterday; today is collaboration. “Fortunately, a more truthful model is emerging: the creative network,” Shenk goes on. It sounds scary.

There is little doubt that with the advance of information technology, collaborations have been flourishing. As Mott Greene has observed, single authors are an endangered species: “Any issue of Nature today has nearly the same number of Articles and Letters as one from 1950, but about four times as many authors. The lone author has all but disappeared. In most fields outside mathematics, fewer and fewer people know enough to work and write alone.”

Science Watch keeps track of the data. The average number of authors per paper has risen from 2.5 in the early 1980s to more than five today. At the same time, the fraction of single-authored papers has declined from more than 30% to about 10%.



Part of the reason for this trend is that the combination of expertise achieved by collaboration opens new possibilities that are being exploited. This would suggest that the increase is temporary and will eventually stagnate or start declining again once the potential in these connections has been fully put to use.

But I don’t think the popularity of larger collaborations is going to decline anytime soon, because for some purposes one paper with five authors counts as five papers. If I list a paper on our institutional preprint list, nobody cares how many coauthors it has – it counts as one more paper, and my coauthors’ institutions can be equally happy about adding a paper to their count. If you work on average with five coauthors and divide up the work fairly, your publication list will end up being five times as long as if you’d been working alone. The logical goal of this accounting can only be that we all coauthor every paper that gets published.

So, have the numbers spoken and demonstrated the lone genius is no more?

Well, most scientists aren’t geniuses, and nobody really agrees on what that means anyway, so let us ask instead what happened to the lone scientist.

The “lone scientist” isn’t so much a myth as an oxymoron. Science is ultimately a community enterprise – an idea not communicated will never become an accepted part of science. But the lone scientist has always existed and certainly still exists as a mode of operation, as a step on the path to an idea worth developing. Declaring lonely work a myth is deeply ironic for the graduate student stuck with an assigned project nobody seems to care about. In theoretical physics, research often means making yourself the world expert on some topic, whether you picked it for yourself or somebody else thought it was a good idea. And loneliness is the flipside of this specialization.

That researchers may sometimes be lonely in their quest doesn’t imply they are alone or that they don’t talk to each other. But when you are in the midst of working out an idea that isn’t yet fully developed, there is really nobody who will understand what you are trying to do. Possibly not even you yourself understand.

We use and abuse our colleagues as sounding boards, because attempting to explain yourself to somebody else can work wonders to clarify your own thoughts, even though the person doesn’t understand a word. I have been both at the giving and receiving end of this process. A colleague, who shall remain unnamed, on occasion simply falls asleep while I am trying to make sense. My husband deserves credit for enduring my ramblings about failed calculations even though he doesn’t have a clue what I mean. And I’ve learned to listen rather than just declaring that I don’t know a thing about whatever.

So the historians pointing out that Einstein didn’t work in isolation, and that he met frequently with other researchers to discuss his ideas, do not do justice to the frustrating and often painful process of having to work through a calculation one doesn’t know how to do in the first place. It is evident from Einstein’s biography and publications that he spent years trying to find the right equations, working with a mathematical apparatus that was unfamiliar to most researchers back then. There was nobody who could have helped him find the successful description his intuition was looking for.

Not all physicists are Einstein of course, and many of us work on topics where the methodology is well developed and widely shared. But it is very common to get stuck while trying to find a mathematically accurate framework for an idea that, without having found what one is looking for, remains difficult or impossible to communicate. This leaves us alone with that potentially brilliant idea and the task of having to sort out our messy thoughts well enough to even be able to make sense to our colleagues.

Robert Nemiroff, Professor of Physics at Michigan Tech, aptly described his process of working on a new paper as a long string of partly circular and confused attempts, interspersed with doubts, insights, and new starts:
“writing short bits of my budding new manuscript on a word processor; realizing I don't know what I am talking about; thinking through another key point; coming to a conclusion; coming to another conclusion in contradiction to the last conclusion; realizing that I have framed the paper in a naive fashion and delete sections (but saving all drafts just in case); starting to write a new section; realizing I don't know what I am talking about; worrying that this whole project is going nowhere new; being grouchy because if this is going nowhere then I am wasting my time; comforting myself with the thought that at least now I better understand something that I should have better understood earlier; starring at my whiteboard for several minute stretches occasionally sketching a small diagram or writing a key equation; thinking how cool this is and wondering if anyone else understands this mini- sub- topic in this much detail; realizing a new potential search term and doing a literature search for it in ADS and Google; finding my own work feeling reassured that perhaps I am actually a reasonable scientist; finding key references that address some part of this idea that I didn't know about and feeling like an idiot; reading those papers and thinking how brilliant those authors are and how I could never write papers this good...”
At least for now we’re all alone in our heads. And as long as that remains so, the scientist struggling to make sense alone will remain reality.

Sunday, June 28, 2015

I wasn’t born a scientist. And you weren’t either.

There’s a photo which keeps cropping up in my facebook feed and it bothers me. It shows a white girl, maybe three years old, kissing a black boy the same age. The caption says “No one is born racist.” It’s adorable. It’s inspirational. But the problem is, it’s not true.

Children aren’t saints. We’re born mistrusting people who look different from us, and we treat those who look like us better. Toddlers already display this “in-group bias,” research says. Though I have to admit that, as a physicist, I am generally not impressed by what psychologists consider statistically significant, and I acknowledge it is generally hard to distinguish nature from nurture. But that a preference for people of similar appearance should be a result of evolution isn’t so surprising. We are more supportive of those we share genes with, family ahead of all, and looks are a giveaway.

As we grow up, we should become aware that our bias is both unnecessary and unfair, and take measures to prevent it from being institutionalized. But since we are born being extra suspicious of anybody not from our own clan, it takes conscious educational effort to act against the preference we give to people “like us.” Racist thoughts are not going away by themselves, though one can work to address them – or at least I hope so. But it starts with recognizing one is biased to begin with. And that’s why this photo bothers me. Denying a problem rarely helps solve it.

By the same romantic reasoning I often read that infants are all little scientists, and that it’s only our terrible school education that kills curiosity and prevents adults from still thinking scientifically. That is wrong too. Yes, we are born curious, and as children we learn a lot by trial and error. Ask my daughter, who recently learned to make rainbows with the water sprinkler, mostly without soaking herself. But our brains didn’t develop to serve science, they developed to serve ourselves in the first place.

My daughters for example haven’t yet learned to question authority. What mommy speaks is true, period. When the girls were beginning to walk I told them to never, ever, touch the stove when I’m in the kitchen because it’s hot and it hurts and don’t, just don’t. They took this so seriously that for years they were afraid to come anywhere near the stove at any time. Yes, good for them. But if I had told them rainbows are made by garden fairies they’d have believed this too. And to be honest, the stove isn’t hot all that often in our household. Still today much of my daughters’ reasoning begins with “mommy says.” Sooner or later they will move beyond M-theory, or so I hope, but trust in authorities is a cognitive bias that remains with us through adulthood. I have it. You have it. It doesn’t go away by denying it.

Let me be clear that human cognitive biases aren’t generally a bad thing. Most of them developed because they are, or at least have been, of advantage to us. We are for example more likely to put forward opinions that we believe will be well received by others. This “social desirability bias” is a side effect of our need to fit into a group for survival. You don’t tell the tribal chief his tent stinks if a dozen fellows with spears stand at your back. How smart of you. But while opportunism might benefit our survival, it rarely benefits the discovery of knowledge.

It is because of our cognitive shortcomings that scientists have put into place many checks and methods designed to prevent us from lying to ourselves. Experimental groups, for example, go to great lengths to prevent bias in data analysis. If your experimental data are questionnaire replies then that’s that, but in physics data aren’t normally very self-revealing. They have to be processed suitably and analyzed with numerical tools to arrive at useful results. Data have to be binned, cuts have to be made, background has to be subtracted.

There are usually many different ways to process the data, and the more ways you try the more likely you are to find one that delivers an interesting result, just by coincidence. It is pretty much impossible to account for trying different methods because one doesn’t know how much these methods are correlated. So to prevent themselves from inadvertently running multiple searches for a signal that isn’t there, many experimental collaborations agree on a method for data analysis before the data is in, then proceed according to plan.

(Of course if the data are made public this won’t prevent other people from reanalyzing the same numbers over and over again. And every once in a while they’ll find some signal whose statistical significance they overestimate because they’re not accounting, can’t account, for all the failed trials. Thus all the CMB anomalies.)
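This look-elsewhere problem is easy to demonstrate with a toy simulation. The sketch below (Python with numpy; all numbers are invented for illustration) scans background-only pseudo-data for a bump in any of 30 bins and counts how often a naive “3 sigma” excess shows up somewhere.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bins, mu, n_pseudo = 30, 100.0, 20_000   # 30 bins, 100 expected background events each

found_bump = 0
for _ in range(n_pseudo):
    counts = rng.poisson(mu, n_bins)        # background-only pseudo-experiment, no signal anywhere
    z = (counts - mu) / np.sqrt(mu)         # naive per-bin significance
    if z.max() > 3.0:                       # declare a "discovery" in whichever bin fluctuated up most
        found_bump += 1

print(found_bump / n_pseudo)   # typically a few percent -- far above the rate for a single pre-chosen bin
```

Scan enough bins, cuts, or analysis variants, and pure noise will eventually oblige – which is exactly why collaborations fix the analysis before looking at the data.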

In science, as in everyday life, the major problems are the biases we do not account for. Confirmation bias is probably the most prevalent one. If you search the literature for support of your argument, there it is. If you try to avoid that person who asked a nasty question during your seminar, there it is. If you just know you’re right, there it is.

Even though it often isn’t explicitly taught to students, everyone who has succeeded in making a career in research has learned to work against their own confirmation bias. Failing to list contradicting evidence or shortcomings of one’s own ideas is the easiest way to tell a pseudoscientist. A scientist’s best friend is their inner voice saying: “You are wrong. You are wrong, wrong, W.R.O.N.G.” Try to prove yourself wrong. Then try it again. Try to find someone willing to tell you why you are wrong. Listen. Learn. Look for literature that explains why you are wrong. Then go back to your idea. That’s the way science operates. It’s not the way humans normally operate.

(And lest you want to go meta on me, the title of this post is of course also wrong. We are scientists in some regards but not in others. We like to construct new theories, but we don’t like being proved wrong.)

But there are other cognitive and social biases that affect science which are not as well known and accounted for as confirmation bias. “Motivated cognition” (aka “wishful thinking”) is one of them. It makes you believe positive outcomes are more likely than they really are. Do you recall them saying the LHC would find evidence for physics beyond the standard model? Oh, they are still saying it will?

Then there is the “sunk cost fallacy”: The more time and effort you’ve spent on SUSY, the less likely you are to call it quits, even though the odds look worse and worse. I had a case of that when I refused to sign up for the Scandinavian Airlines frequent flyer program after I realized that I’d be a gold member by now had I done so six years ago.

I already mentioned the social desirability bias that discourages us from speaking unwelcome truths, but there are other social biases that you can see in action in science.

The “false consensus effect” is one of them. We tend to overestimate how much and how many other people agree with us. Certainly nobody can disagree that string theory is the correct theory of quantum gravity. Right. Or, as Joseph Lykken and Maria Spiropulu put it:
“It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true.” (Their emphasis.)
The “halo effect” is the reason we pay more attention to literally every piece of crap a Nobel Prize winner utters. The above mentioned “in-group bias” is what makes us think researchers in our own field are more intelligent than others. It’s the way people end up studying psychology because they were too stupid for physics. The “shared information bias” is the one in which we discuss the same “known problems” over and over and over again and fail to pay attention to new information held only by a few people.

One of the most problematic distortions in science is that we consider a fact more likely the more often we have heard of it, called the “attentional bias” or the “mere exposure effect”. Oh, and then there is the mother of all biases, the “bias blind spot,” the insistence that we certainly are not biased.

Cognitive biases we’ve always had, of course. Science has progressed regardless, so why should we start paying attention now? (Btw, it’s called the “status quo bias.”) We should pay attention now because shortcomings in argumentation become more relevant the more we rely on logical reasoning detached from experimental guidance. This is a problem which affects some areas of theoretical physics more than any other field of science.

The more prevalent problem, though, is the social biases, whose effects become more pronounced the larger the groups are, the more tightly they are connected, and the more information is shared. This is why these biases are so much more relevant today than a century ago, or even two decades ago.

You can see these problems in pretty much all areas of science. Everybody seems to be thinking and talking about the same things. We’re not able to leave behind research directions that turn out fruitless, we’re bad at integrating new information, we don’t criticize our colleagues’ ideas because we are afraid of becoming “socially undesirable” when we mention the tent’s stink. We disregard ideas off the mainstream because these come from people “not like us.” And we insist our behavior is good scientific conduct, purely based on our unbiased judgement, because we cannot possibly be influenced by social and psychological effects, no matter how well established.

These are behaviors we have developed not because they are stupid, but because they are beneficial in some situations. In other situations, though, they become a hurdle to progress. We weren’t born to be objective and rational. Being a good scientist requires constant self-monitoring and learning about the ways we fool ourselves. Denying the problem doesn’t solve it.

What I really wanted to say is that I’ve finally signed up for the SAS frequent flyer program.

Friday, June 12, 2015

Where are we on the road to quantum gravity?

Damned if I know! But I got to ask Lee Smolin some questions, which he kindly replied to, and you can read his answers over at Starts with a Bang. If you’re a string theorist you don’t have to read it of course, because we already know you’ll hate it.

But it would be out of character for me if not having an answer to the question posed in the title kept me from distributing opinions anyway, so here we go. On my postdoctoral path through institutions I’ve passed by string theory and loop quantum gravity, and after some closer inspection stayed at a distance from both because I wanted to do physics and not math. I wanted to describe something in the real world and not spend my days proving convergence theorems or doing stability analyses of imaginary things. I wanted to do something meaningful with my life, and I was – still am – deeply disturbed by how detached quantum gravity is from experiment. So detached, in fact, that one has to wonder if it’s science at all.

That’s why I’ve worked for years on quantum gravity phenomenology. The recent developments in string theory to apply the AdS/CFT duality to the description of strongly coupled systems are another way to make contact with reality, but then, we were talking about quantum gravity.

For me the most interesting theoretical developments in quantum gravity are the ones Lee hasn’t mentioned. There are various emergent gravity scenarios, and though I don’t find any of them too convincing, there might be something to the idea that gravity is a statistical effect. And then there is Achim Kempf’s spectral geometry, which for all I can see would fit together very nicely with causal sets. But yeah, there are like two people in the world working on this and they’re flying below the pop sci radar. So you’d probably never have heard of them if it wasn’t for my awesome blog, so listen: Have an eye on Achim Kempf and Rafael Sorkin, they’re both brilliant and their work is totally underappreciated.

Personally, I am not so secretly convinced that the actual reason we haven’t yet figured out which theory of quantum gravity describes our universe is that we haven’t understood quantization. The so-called “problem of time”, the past hypothesis, the measurement problem, the cosmological constant – all this signals to me the problem isn’t gravity, the problem is the quantization prescription itself. And what a strange procedure this is, to take a classical theory and then quantize and second quantize it to obtain something more fundamental. How do we know this procedure isn’t scale dependent? How do we know it works the same at the Planck scale as in our labs? We don’t. Unfortunately, this topic rests at the intersection of quantum gravity and quantum foundations and is dismissed by both sides, unless you count my own small contribution. It’s a research area with only one paper!

Having said that, I found Lee’s answers interesting because I now better understand the optimism behind the quote from his 2001 book, which predicted we’d know the theory of quantum gravity by 2015.

I originally studied mathematics, and it just so happened that the first journal club I ever attended, in ’97 or ’98, was held by a professor of mathematical physics on the topic of Ashtekar’s variables. I knew some General Relativity and was just taking a class on quantum field theory, and this fit in nicely. It was somewhat over my head, but it was basically the same math and not too difficult to follow. And it all seemed to make so much sense! I switched from math to physics, and in fact for several years to come I lived under the impression that gravity had been quantized and it wouldn’t take long until somebody calculated exactly what is inside a black hole and how the big bang works. That, however, never happened. And here we are in 2015, still looking to answer the same questions.

I’ll refrain from making a prediction because predicting when we’ll know the theory of quantum gravity is more difficult than finding it in the first place ;o)