
Monday, April 29, 2013

Book review: “Time Reborn” by Lee Smolin

Time Reborn: From the Crisis in Physics to the Future of the Universe
By Lee Smolin
Houghton Mifflin Harcourt (April 23, 2013)

This is a difficult review for me to write because I disagree with pretty much everything in Lee’s new book “Time Reborn,” except possibly the page numbers. To begin with, there is no “Crisis in Physics” as the subtitle suggests. But then I’ve learned not to blame authors for titles and subtitles.

Oddly enough, however, I enjoyed reading the book, not despite but because I had something to complain about on every page. It made me question my opinions, and though I came out holding on to them, I learned quite a lot along the way.

In “Time Reborn” Lee takes on the seemingly puzzling fact that mathematical truth is eternal and timeless, while the world that physicists are trying to describe with that mathematics isn’t. The role of time in contemporary physics is an interesting topic, and it gives an opportunity to explain our present understanding of space and time, from Newton through Special and General Relativity to modern Cosmology, Quantum Mechanics, and all the way to existing approaches to Quantum Gravity.

Lee argues that our present procedures must fail when we attempt to apply them to describe the whole universe. They fail because we presently treat the passing of time as emergent, but emergent in a fundamentally timeless universe. Only if we abandon the conviction, held by the vast majority of physicists, that this is the correct procedure can we understand the fundamental nature of reality – and with it quantum gravity, of course. Lee further summarizes a few recent developments that treat time as real, though the picture he presents remains incoherent: some loosely connected, maybe promising, recent ideas that you can find on the arXiv and that I don’t want to promote here.

More interesting to me is that Lee doesn’t stop at quantum gravity, which for most people on the planet arguably does not rank very high among the pressing problems. Thinking about nature as fundamentally timeless, Lee argues, is the cause of very worldly problems that we can only overcome if we believe that we ourselves are able to create the future:
“We need to see everything in nature, including ourselves and our technologies, as time-bound and part of a larger, ever evolving system. A world without time is a world with a fixed set of possibilities that cannot be transcended. If, on the other hand, time is real and everything is subject to it, then there is no fixed set of possibilities and no obstacle to the invention of genuinely novel ideas and solutions to it.”
I’ll leave my objections to Lee’s arguments for some other time. For now, let me just say that I explained in this earlier post that a deterministic time evolution doesn’t relieve us of the need to make decisions, and it doesn’t prevent “genuinely novel ideas” in any sensible definition of the phrase.

In summary: Lee’s book is very thought-provoking, and it takes the reader on a trip through the most fundamental questions about nature. The book is well written and nicely embedded in the long history of mankind’s wonderment about the passing of time and the circle of life. You will almost certainly enjoy this book if you want to know what contemporary physics does, and does not, have to say about the nature of time. You will almost certainly hate this book if you're a string theorist, but then you already knew that.

Friday, April 26, 2013

The Enantiomers’ Swimming Competition

Some large molecules can exist in two different versions whose spatial arrangements are mirror images of each other, even though their chemical composition is entirely identical. These mirror versions of molecules are said to have a different “chirality” and are called “enantiomers.” The image to the right shows the two chiralities of alanine, known as L-alanine and D-alanine.

Many chemical reactions depend not only on the atomic composition of molecules but also on their spatial arrangement, and thus enantiomers can have very different chemical behaviors. Since organisms are not chirally neutral, the medical properties of a drug depend on which chirality of the active ingredient is present. One enantiomer might have a beneficial effect, while the other one is harmful. This is the case, for example, for Ethambutol (one enantiomer treats tuberculosis, the other causes blindness) or Naproxen (one enantiomer treats arthritis pain, the other causes liver poisoning).

The chemical synthesis of molecules, however, typically produces both chiralities in approximately equal amounts, which creates the need to separate them. One way to do this is to use chemical reactions that are sensitive to the molecules’ chirality. Such a procedure has the disadvantage, though, that it is specific to one particular molecule and cannot be used for any other.

Now three physicists have shown, by experimental and numerical analysis, that there may be a universal way to separate enantiomers.
It’s strikingly simple: chiral particles swim differently in a stream of water that has a swirl to it. How fast they travel with the stream depends on whether their chirality is the same or the opposite of the water swirl’s orientation. Wait far enough downstream, and the particles that arrive first will almost exclusively be the ones whose chirality matches that of the water swirl.

They have shown this as follows.

Molecules are typically only a few nanometers in size, which makes the difference in swimming performance between the two chiralities difficult to observe directly. Instead, the authors used micrometer-sized three-dimensional particles made of a polymer (called SU-8) by a process called photolithography. The particles created this way are the simplest example of configurations of different chirality. They labeled the right-handed particles with a blue fluorescent dye and the left-handed particles with a green fluorescent dye, which allows taking images of them with a fluorescence microscope. Below you see a microscope image of the particles.



Next you need a narrow channel through which water flows under some pressure. The swirl is created by gratings in the walls of the channel. The length of the channel is about a meter, but its height and width are only of the order of 150 μm. Then you let bunches of the mixed chiral particles flow through the channel and photograph them at a handful of locations. From the amount of blue and green that you see in the images, you can tell how many particles of each type were present at a given time. Here’s what they see:


This figure is an overlay of the measurements at 5 different locations as a function of time (in seconds). The green shade is for particles whose chirality matches the orientation of the water swirl, the blue shade for those with the opposite chirality. They start out, at x = 32.5 mm, in almost identical concentrations. Then they begin to drift apart. Look at the left tail of the x = 942.5 mm measurement: the green distribution is almost 200 seconds ahead of the blue one.

If you aren’t impressed by this experiment, let me show you the numerical results. They modeled the particles as rigidly coupled spheres in a flow field with friction and torque, added some Gaussian white noise, and integrated the equations of motion. Below is the result of the numerical computation for 1000 realizations.


I am seriously amazed at how well the numerical results agree with the experiment! I’d have expected hydrodynamics to be much messier.

The merit of the numerical analysis is that it provides us with an understanding of why the separation happens. Due to the interaction of the fluid with the channel walls, the flow is slower towards the walls than in the middle. The particles try to minimize their frictional losses with the fluid, and how to best achieve this depends on their chirality relative to the swirl of the fluid. Particles whose chirality is aligned with the swirl preferentially move towards the middle, where the flow is faster, while particles of the opposite chirality move towards the channel walls, where the flow is slower. This is what causes them to travel at different average velocities.
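To make the mechanism concrete, here is a minimal toy simulation in Python. It is not the authors’ rigid-sphere hydrodynamic model; I only assume that chirality biases a particle’s random sideways motion either towards the channel center or towards the walls, while the downstream speed follows a parabolic flow profile evaluated at the particle’s lateral position. All parameter values are invented for illustration.

```python
import numpy as np

# Toy caricature of the mechanism described above, NOT the authors'
# rigid-sphere model: chirality only biases the lateral drift direction,
# the downstream speed is set by a parabolic (Poiseuille-like) profile.
rng = np.random.default_rng(0)

H = 150e-6        # channel height [m]
L = 1.0           # channel length [m]
v_max = 5e-3      # flow speed in the channel center [m/s] (made up)
drift = 2e-7      # chirality-induced lateral drift speed [m/s] (made up)
D = 1e-12         # lateral diffusion coefficient [m^2/s] (made up)
dt = 0.05         # time step [s]
N = 200           # particles per chirality

def arrival_times(sign):
    """Arrival times at x = L; sign = +1 drifts towards the center,
    sign = -1 drifts towards the walls."""
    y = rng.uniform(0.2 * H, 0.8 * H, N)    # lateral positions
    x = np.zeros(N)                         # downstream positions
    t = np.zeros(N)
    done = np.zeros(N, dtype=bool)
    while not done.all():
        act = ~done
        u = 4 * v_max * y[act] * (H - y[act]) / H**2   # parabolic profile
        x[act] += u * dt
        to_center = np.sign(0.5 * H - y[act])
        y[act] += (sign * drift * to_center * dt
                   + np.sqrt(2 * D * dt) * rng.standard_normal(act.sum()))
        y[act] = np.clip(y[act], 0.05 * H, 0.95 * H)   # stay off the walls
        t[act] += dt
        done |= (x >= L)
    return t

t_match = arrival_times(+1)   # chirality matches the swirl
t_oppo = arrival_times(-1)    # opposite chirality
print(f"mean arrival time, matched : {t_match.mean():7.1f} s")
print(f"mean arrival time, opposite: {t_oppo.mean():7.1f} s")
```

With these made-up numbers, the particles that drift towards the center arrive earlier on average, which is the qualitative behavior seen in the measured distributions.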

This leaves the question of whether this study of micrometer-sized particles can be scaled down to molecules of nanometer size. To address it, the authors demonstrate with another numerical simulation that the efficiency of the separation (the amount of delay) depends on the product of the length of the channel and the velocity of the fluid, divided by the particle’s diffusion coefficient in the fluid. This allows one to estimate what is required for smaller particles. If this scaling holds, particles of about 120 nm size could be separated in a channel of about 3 cm length and 3.2 μm diameter, at a pressure of about 10⁸ Pa, which is possible with presently existing technology.
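As a back-of-envelope sanity check of these numbers (my own estimate, not the authors’ calculation), one can combine the Stokes-Einstein diffusion coefficient of a 120 nm sphere in water with the Hagen-Poiseuille pressure drop for the quoted channel, assuming a mean flow speed of order 1 m/s:

```python
import math

# Back-of-envelope check of the quoted numbers (not the authors' calculation).
# Assumptions: water at room temperature, a spherical 120 nm particle, a
# cylindrical channel of 3 cm length and 3.2 um diameter, and a mean flow
# speed of 1 m/s (my guess for the order of magnitude).
k_B = 1.38e-23     # Boltzmann constant [J/K]
T   = 293.0        # temperature [K]
eta = 1.0e-3       # viscosity of water [Pa s]
a   = 60e-9        # particle radius [m]
L   = 0.03         # channel length [m]
r   = 1.6e-6       # channel radius [m]
v   = 1.0          # assumed mean flow speed [m/s]

D = k_B * T / (6 * math.pi * eta * a)   # Stokes-Einstein diffusion coefficient
figure_of_merit = L * v / D             # the combination quoted in the post
dp = 8 * eta * L * v / r**2             # Hagen-Poiseuille pressure drop

print(f"D               = {D:.2e} m^2/s")
print(f"L * v / D       = {figure_of_merit:.2e}")
print(f"pressure needed = {dp:.2e} Pa")   # comes out at order 10^8 Pa
```

With these guessed inputs the required pressure indeed comes out at roughly 10⁸ Pa.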

Soft matter is nowhere near my area of research, so it is hard for me to tell whether there are effects at scales of some hundred nanometers that might become relevant and spoil this simple scaling, or whether more complicated molecular configurations alter the behavior in the fluid. But if not, this seems to me a tremendously useful result with important applications.

Monday, April 22, 2013

Listen to Spacetime

Quantum gravity researcher at work.
We normally think about geometry in terms of distances between points. The shape of a surface is encoded in the distances between the points on it. If the set of points is discrete, then this description has a limited resolution.

But there’s a different way to think about geometry, which goes back about a century to Hermann Weyl. Instead of measuring distances between points, we could measure the way a geometric shape vibrates if we bang it. From the frequencies of the resulting tones, we could then extract information about the geometry. In mathematical terms, we would ask for the spectrum of the Laplace operator, which is why the approach is known as “spectral geometry.” Under which circumstances the spectrum contains the full information about the geometry is still an active area of research today. This central question of spectral geometry has been aptly captured in Mark Kac's question “Can one hear the shape of a drum?”

Achim Kempf from the University of Waterloo recently put forward a new way to think about spectral geometry, one that has a novel physical interpretation which makes it possibly relevant for quantum gravity.

The basic idea, which is still in a very early phase, is the following.

The space-time that we live in isn’t just a classical geometric object. There are fields living on it that are quantized, and a quantum theory of the fluctuations of the geometry itself is what physicists are trying to develop under the name of quantum gravity. It is a peculiar, but well established, property of the quantum vacuum that what happens at one point is not entirely independent of what happens at another point, because the quantum vacuum is a spatially entangled state. In other words, the quantum vacuum has correlations.

The correlations of the quantum vacuum are encoded in the Green’s function, which is a function of pairs of points, and the correlations that this function measures are weaker the further apart two points are. Thus, we expect the Green’s function for all pairs in a set of points on space-time to carry information about the geometry.

Concretely, consider a space-time of finite volume (because infinities make everything much more complicated) and randomly sprinkle a finite number of points on it. Then measure the field’s fluctuating amplitudes at these points, and measure them again and again to obtain an ensemble of data. From this ensemble you then calculate the correlators between the amplitudes at any two of the points. The size of these correlators is the quantum substitute for knowing the distance between the two chosen points.

Achim calls it “a quantum version of yard sticks.”
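Here is a toy version of these quantum yard sticks. A Gaussian random field on a circle of lattice sites stands in for the vacuum, with covariance given by the Green’s function of the lattice Laplacian plus a small mass term; the lattice, the mass, and the number of samples are all my choices to keep the example finite:

```python
import numpy as np

# Toy version of the "quantum yard sticks": a Gaussian random field on a
# circle of n lattice sites stands in for the vacuum, with covariance given
# by the Green's function G = (Laplacian + m^2)^(-1). We "measure" the field
# at a few sprinkled points over many realizations and estimate correlators.
rng = np.random.default_rng(1)
n, m2 = 200, 0.05**2                      # lattice sites and mass squared (my choices)

# discrete Laplacian on the periodic lattice
lap = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)
G = np.linalg.inv(lap + m2 * np.eye(n))   # exact Green's function / covariance
G = (G + G.T) / 2                         # symmetrize against round-off

# sprinkle a handful of measurement points, then collect an ensemble of
# "measured" field amplitudes at those points
points = np.sort(rng.choice(n, size=8, replace=False))
fields = rng.multivariate_normal(np.zeros(n), G, size=10000)[:, points]
est = fields.T @ fields / fields.shape[0]  # estimated correlator matrix

# the correlators fall off with the (lattice) distance between points
for i in range(1, len(points)):
    d = min(points[i] - points[0], n - (points[i] - points[0]))
    print(f"lattice distance {d:3d}:  correlator {est[0, i]: .4f}")
```

The estimated correlators decrease with the lattice distance between the sprinkled points, which is what makes them usable as a substitute for a distance measure.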

Now the Green’s function is an operator and has eigenvalues. These eigenvalues, importantly, do not depend on the chosen set of points, though the number of eigenvalues that one obtains does: for N points, there are N eigenvalues. If one sprinkles fewer points, one loses information about structure at short distances. But the eigenvalues that one has are properties of the space-time itself.

The Green’s function, however, is the inverse of the Laplace operator, so its eigenvalues are the inverses of the eigenvalues of the Laplace operator. And here Achim’s quantum yard sticks connect to spectral geometry, though he arrived there from a completely different starting point. This way one rederives the conjecture of (one branch of) spectral geometry, namely that the spectrum of a curved manifold encodes its shape.
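On the same toy lattice one can check this inverse relation directly (the small mass term is again only there to make the operator invertible on the compact space):

```python
import numpy as np

# Check of the statement above on the toy lattice: the eigenvalues of the
# Green's function G = (Laplacian + m^2)^(-1) are exactly the inverses of
# the eigenvalues of (Laplacian + m^2).
n, m2 = 200, 0.05**2
lap = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)
K = lap + m2 * np.eye(n)
G = np.linalg.inv(K)

ev_K = np.sort(np.linalg.eigvalsh(K))         # ascending
ev_G = np.sort(np.linalg.eigvalsh(G))[::-1]   # descending: largest G-eigenvalue first
print(np.allclose(ev_G, 1.0 / ev_K))          # True: the spectra are inverses
```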

That is neat, really neat. But it’s better than that.

There exist counterexamples to the central conjecture of spectral geometry, cases in which the reconstruction of the shape from the scalar Laplace operator’s spectrum alone fails. Achim makes the observation that the correlations in quantum fluctuations can be calculated for different fields, and he argues that to reconstruct the geometry it is necessary to consider not only scalar fields but also vector and symmetric covariant 2-tensor fields. (Much like one decomposes fluctuations of the metric into these different types.) Whether taking the vector and tensor fields into account is necessary depends on the dimension of the space-time one is dealing with; it might not be needed for lower-dimensional examples.

In his paper, Achim suggests that, to study whether the reconstruction can be achieved, one may use a perturbative approach in which one makes small changes to the geometry and then tries to recover these small changes from the change of the correlators. Look how nicely the physicists’ approach interlocks with thorny mathematical problems.

What does this have to do with quantum gravity? It is a way to rewrite an old problem. Instead of trying to quantize space-time directly, one could discretize it by sprinkling points and encode its properties in the eigenvalues of the Green’s functions. And once one can describe the curvature of space-time by these eigenvalues, which are invariant properties of space-time, one is in a promising new starting position for quantizing space-time.

I’ve heard Achim give talks about this topic a couple of times over the years, and he has developed this line of thought in a series of papers. I have no clue whether his approach is going to lead anywhere. But I am quite impressed by how he has pushed the subject forward, and I am curious to see how this research progresses.

Wednesday, April 17, 2013

Excuse me, where is the mainstream?

More than once I have walked away from a discussion confused about what exactly my conversation partners meant by “physics mainstream.” The “mainstream,” it seems, is typically employed as a reference point for why some research projects get funded and others not. If it’s not “mainstream physics,” I gather, it’s difficult to get it funded. But can we come up with a good explanation for what is “mainstream”?

My first attempt to define the “mainstream” would be by the context in which the term is most often used, the ease with which a research topic can be funded: the easier, the more mainstream. But on second thought this is not a helpful definition, because it’s not based on properties of the research itself, which makes it circular. We could just as well say a topic attracts funding easily because it’s mainstream, and so we are none the wiser. What is it that puts a research area into the main of the stream to begin with?

Reference to “mainstream science” is often made by pseudoscientists who use the term in an attempt to downgrade scientific research and to appear original. (Ironically, they then often boldly use and abuse vocabulary from the unoriginal mainstream. Google “quantum healing” to see what I mean.) In that case the “mainstream” is just all that deserves to be called scientific research.

The way pseudoscientists use the expression is not what I want to discuss today. It’s the way researchers themselves speak about the “mainstream” that has left me wondering whether it is possible to make this a meaningful, and useful, terminology. The way researchers use the expression, it seems to carry connotations of popularity, fashionableness, timeliness, and of attracting large numbers of people. Below I have tried to make sense of each of these properties, and then I’ll offer the best definition that I could come up with. I invite you to submit your own!

Public attention

At any given time, some topics are popular. In physics these are presently the direct detection of dark matter, quantum computing, topological insulators, and cold atom gases, to mention just a few. But in many cases these popular topics constitute only a small fraction of the research that is actually happening. The multiverse, to name another example, is in reality a fringe area of gr-qc that just happens to capture public attention. The same is the case for the black hole firewall. In fact, in many cases what makes headlines in the press are singular or controversial findings. It’s not the type of research that funding agencies have on their agenda, which is why popularity is not a good defining property for the mainstream.

Fashionableness

High energy physics is a very fad-driven area of physics, and quite often you can see a topic appear and gather momentum within a matter of months, only to then decay exponentially and hardly be mentioned some years later. Anybody remember unparticles? The pentaquark? The so-called OPERA anomaly? These fads aren’t as extreme in other areas of physics (or so I am told), but they exist there too, if less pronounced.

Fashionableness indeed seems to some extent correlated with the ease of getting funding. But a trend must have been around for a while and have attracted a base of research findings to appear solid and worthy of funding, so that cannot be the whole story. This brings me to the next point.

Occupation number

The more people work on a topic, the easier it is to make a case that the topic is relevant and deserves being funded. Thus the number of researchers in an area seems a plausible measure for it being mainstream. Or does it?

The total number of people is in fact a highly misleading quantity. A topic may attract many researchers because it’s very rich and there is a lot that can be done. Another topic might just not support such a large number of researchers, but this says more about the nature of research in an area than about its relevance or its promise.

String theory is an example of a research area that is very rich and supports many independent studies, which I believe is the reason it has become so dominant in quantum gravity – there’s just a lot that can be done. But does that make it mainstream research? Nanoscience is another example of a research area that attracts a lot of people, but it does so for an entirely different reason, namely the potential for developing patents and applications. Throwing them both together doesn’t seem to make much sense. The total number of people does not seem a good defining property for the mainstream either.

The important factor for the ease of obtaining funding is not so much the total number of people but the number of tangible open problems that can be attacked. This leads me to the next point.

Saturation level

A refinement of the occupation number is the extent to which a research area attracts people who work on its presently open questions. Mainstream physics, then, would be those areas that attract at least as many researchers as are necessary to push forward on all presently open questions. Since this is a property relative to the number of possible research projects, a small research area can be as mainstream as a large one. It has some aspects of fashionableness, yet already requires a more solid base.

This definition makes sense to me because ease of funding should have something to do with the availability of research projects as well as their promise, which would be reflected in the willingness of researchers to spend time on these projects.

Except that this definition doesn’t seem to agree with reality, because funding usually lags behind, leaving research areas overpopulated: it is easy to obtain funding for projects in areas whose promise is already on the decline, because funding decisions are based on reports by people who work in the very same area. So this definition, though appealing at first sight, just doesn’t seem to work.

Timeliness

Thus, in the end, neither popularity, fads, the number of people, nor the availability of promising research topics seems to make for a good definition of the mainstream. So let me try something entirely different, based on an analogy I used in the Nordita video.

Knowledge discovery is like the mapping of unknown territory. At any time, we have a map with a boundary beyond which we do not know what to expect. Applied research is building on the territory that we have mapped. Basic research is planning expeditions into the unknown to extend the map and with it the area that we can build on.

In either case, building on known territory and planning expeditions, researchers can take small steps and stay close to the known base. Or they can aim high and far and risk both failure and disconnect from colleagues.

This image offers the following definition for mainstream research.

Mainstream research is the research that aims just far enough to be novel and contribute to knowledge discovery, but not so far as to disconnect from what is already known. It’s new, but not too new. It’s familiar, but not too familiar. It’s baby steps. It builds on what is known without creating uncomfortable gaps. It uses known methods. It connects. It doesn’t shock. It’s neither too ambitious nor too yesterday. It’s neither too conservative nor too tomorrow. It is what makes the community nod in approval.

Mainstream research is what surprises, but doesn’t surprise too much.

We previously discussed the relevance of familiarity in an entirely different context, that of appreciating music. There too, you want it to be predictable, but not too predictable. You want it to have just the right amount of complexity.

I think that’s really the essence of what makes a field mainstream: how tight the connection is to existing knowledge and how well the research is embedded in what is already known. If a new field comes up, there will be a phase when there aren’t many connections to anything. But over the course of time, provided all goes well, research on the topic will create a map of new territory that can then be built upon.

Wednesday, April 10, 2013

Proximate and Ultimate Causes for Publication

I am presently reading Steven Pinker’s “Blank Slate”. He introduces the terms “proximate cause” and “ultimate cause,” a distinction I find enlightening:
“The difference between the mechanisms that impel organisms to behave in real time and the mechanisms that shaped the design of the organism over evolutionary time is important enough to merit some jargon. A proximate cause of behavior is the mechanism that pushes behavior buttons in real time, such as the hunger and lust that impel people to eat and have sex. An ultimate cause is the adaptive rationale that led the proximate cause to evolve, such as the need for nutrition and reproduction that gave us the drives of hunger and lust.” ~Steven Pinker, The Blank Slate: The Modern Denial of Human Nature, p 54.

It is the same distinction I have made in an entirely different context between “primary” and “secondary” goals, my context being the use of measures for scientific success. In Pinker’s terminology then, enhancing our understanding of nature is the “ultimate cause” of scientific research. Striving to excel according to some measure for scientific success – like the h-factor, or the impact factor of journals on one’s publication list – is a “proximate cause”.


The comparison to evolution illuminates the problem with introducing measures for scientific success. Humans do not, in practice, evaluate each of their actions as to its contribution to the ultimate cause. Instead, they use readily available simplifications that previously proved to be correlated with the ultimate cause. Alas, over time the proximate cause might no longer lead toward the ultimate cause. Increasing the output of publications contributes no more to our understanding of nature than deep-fried butter on a stick contributes to health and chances of survival.

There is an interesting opinion piece, “Impacting our young” in the Proceedings of the National Academy of Sciences of the USA (ht Jorge) that reflects on the impact that the use of measures for scientific success has on the behavior of researchers:
“Today, the impact factor is often used as a proxy for the prestige of the journal. This proxy is convenient for those wishing to assess young scientists across fields, because it does not require knowledge of the reputation of individual journals or specific expertise in all fields… [T]he impact factor has become a formal part of the evaluation process for job candidates and promotions in many countries, with both salutatory and pernicious consequences.

Not surprisingly, the journals with the highest impact factor (leaving aside the review journals) are those that place the highest premium on perceived novelty and significance. This can distort decisions on how to undertake a scientific project. Many, if not most, important scientific findings come from serendipitous discovery. New knowledge is new precisely because it was unanticipated. Consequently, it is hard to predict which projects are going to generate useful and informative data that will add to our body of knowledge and which will generate that homerun finding. Today, too many of our postdocs believe that getting a paper into a prestigious journal is more important to their career than doing the science itself.”
In other words, the proximate cause of trying to publish in a high impact journal erodes the ultimate cause of doing good science.

Another example of a proxy that distracts from recognizing good science is paying too much attention to research coming out of highly ranked universities, “highly ranked” according to some measure. This case was recently eloquently made in Nature by Keith Weaver in a piece titled “Scientists are Snobs” (sorry, subscription only):
“We all do it. Pressed for time at a meeting, you can only scan the presented abstracts and make snap judgments about what you are going to see. Ideally, these judgments would be based purely on what material is of most scientific interest to you. Instead, we often use other criteria, such as the name of the researchers presenting or their institution. I do it too, passing over abstracts that are more relevant to my work in favor of studies from star universities such as Stanford in California or Harvard in Massachusetts because I assume that these places produce the “best” science…

Such snobbery arises from a preconceived idea that many scientists have –that people end up at smaller institutions because their science has less impact or is of lower quality than that from larger places. But many scientists choose smaller institutions for quality-of-life reasons…”
He goes on to explain how his laboratory was the first to publish a scientific finding, but “recent papers… cited only a more recent study from a large US National Institutes of Health laboratory. Losing this and other worthy citations could ultimately affect my ability to get promoted and attain grants.”

In other words, using the reputation of institutions as a proxy for scientific quality does not benefit the ultimate goal of doing good science.

Now let us contrast these problems with what we can read in another recent Nature article “Beyond the Paper” by Jason Priem. He wipes away such concerns as follows:
“[A] criticism is that the very idea of quantifying scientific impact is misguided. This really will not do. We scientists routinely search out numerical data to explain everything from subatomic physics to the appreciation of Mozart; we cannot then insist that our cogitations are uniquely exempt. The ultimate judge of scientific quality is the scientific community; its judgements are expressed in actions and these actions may be measured. The only thing to do is to find good measures to replace the slow, clumsy and misleading ones we rely on today. The great migration of scholarship to the Web promises to help us to do this.”
This argument implicitly assumes that making use of a quantifiable measure for scientific impact does not affect the judgement of scientists. But we have every reason to believe it does, because it replaces the ultimate cause with a proximate cause, the primary goal with a secondary goal. (Priem's article is otherwise very interesting and readable; I recommend you give it a closer look.)

I’m not against using measures for scientific success in principle. But I wish people would pay more attention to the backreaction that comes from doing the measurements and providing people with a time-saving simplified substitute for the ultimate goal of doing good science.

Monday, April 08, 2013

Black holes and the Planck length

According to Special Relativity, an object in motion relative to you appears shortened. The faster it is, the shorter it appears. This effect is known as Lorentz contraction. According to General Relativity, an object that has a sufficiently high mass density in a small volume collapses to a black hole. Does this mean that if a particle moves fast enough relative to you it turns into a black hole? No, it doesn't. But it's a confusion I've come across frequently. Take Craig Hogan's recent "paper", where he writes:
"[B]elow the Planck length... it is no longer consistent to ignore the quantum character of the matter that causes space-time to curve. Even a single quantum particle of shorter wave-length has more energy than a black hole of the same size, an impossibility in classical relativity..."
The wavelength of a particle depends on your motion relative to it. For every particle there is a reference frame in which its wavelength appears shorter than the Planck length. If what Hogan writes were true, this would imply that large relative velocities are a problem for classical general relativity. Of course they are not, for the following reasons.
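For the record, here is the one-line version of that statement, using the de Broglie relation and the standard Lorentz transformation of the momentum under a boost with velocity βc directed against the particle's motion (nothing here is specific to Hogan's paper):

```latex
\lambda = \frac{h}{p}, \qquad
p \;\to\; p' = \gamma\!\left(p + \beta\,\frac{E}{c}\right), \qquad
\lambda' = \frac{h}{p'} \;\xrightarrow{\;\beta \to 1\;}\; 0 .
```

By choosing β close enough to one, the wavelength of any particle can thus be pushed below the Planck length, and nothing dramatic happens at that point.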

A black hole is characterized by the existence of an event horizon. The event horizon describes the causal connectivity of space-time. It's a global property. Describing an object from the perspective of somebody moving relative to this object is a coordinate transformation. A coordinate transformation changes the way the physics appears, but not the physics itself. It just makes things look different. You cannot create an event horizon by a change of coordinates. Ergo, you cannot create a black hole just by looking at a particle that is moving rapidly relative to you.

There are three points I believe contribute to this confusion:

First, one can take the Schwarzschild metric for a black hole and describe it from the perspective of an observer moving relative to it. This is known as the Aichelburg-Sexl metric. The Aichelburg-Sexl metric is commonly used to handle black hole formation in particle collisions. The argument about the Planck length being a minimal length makes use of black hole formation too. But note that in these cases there isn't one particle, but at least two. These particles have a center-of-mass energy. They create a curvature which depends on the distance between them. They either do or don't form a horizon. These are statements independent of the choice of coordinates. This case should not be confused with just looking at one particle.

Second is forgetting that black holes have no hair. Leaving aside angular momentum, they're spherically symmetric, which implies there are preferred frames. Normally one uses a frame in which the black hole is at rest, which then leads to the usual nomenclature with the Schwarzschild radius and so on. But you had better not apply an argument about concentrating energy inside a volume, which belongs to the static case, to the metric in a different coordinate system.

Third is a general confusion about the Planck length being called a "length". That the Planck length has the dimension of a length does not mean that it behaves the same way as the length of some rod. Neither is it generally expected that something funny happens at distance scales close to the Planck length - as we already saw above, this statement doesn't even have an observer-independent meaning.

The Planck length appears in General Relativity as a coupling constant: it couples the curvature to the stress-energy tensor. Most naturally, one expects quantum gravitational effects to become strong not at distances close to the Planck length, but at curvatures close to one over the Planck length squared. (Or at higher powers of the curvature close to the appropriately higher powers of the inverse Planck length, respectively.) The curvature is an invariant. This statement is therefore observer-independent.
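In formulas, the Planck length is simply built from the coupling in Einstein's field equations together with ħ:

```latex
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \qquad
\ell_{\rm Pl} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,{\rm m}.
```

Nothing in these expressions singles out a frame-dependent distance; what is dimensionally natural is to compare curvature invariants to powers of one over the Planck length.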

What happens in the two-particle collisions is that the curvature becomes large, which is why we expect quantum gravitational effects in this case. It is also the case that in the commonly used coordinate systems these notions agree with each other. E.g., in the usual Schwarzschild coordinates the curvature becomes Planckian if the radius is of the order of the Planck length. This also coincides with the mass of the black hole being about the Planck mass. (No coincidence: there is no other scale that could play a role here.) Thus, Planck mass black holes can be expected to be quantum gravitational objects. The semi-classical approximation (which treats gravity classically) breaks down at these masses. This is where Hawking's calculation for the evaporation of black holes runs into trouble.
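To back up the statement about Planck mass black holes, here is the standard estimate, with the Kretschmann scalar of the Schwarzschild metric as the curvature invariant:

```latex
K = R_{\alpha\beta\gamma\delta} R^{\alpha\beta\gamma\delta}
  = \frac{48\, G^2 M^2}{c^4\, r^6}
  = \frac{12\, r_s^2}{r^6}, \qquad r_s = \frac{2GM}{c^2},
\qquad K(r_s) = \frac{12}{r_s^4}.
```

The curvature at the horizon becomes Planckian, K ~ 1/ℓ_Pl⁴, exactly when r_s is of the order of the Planck length, which by the relation between r_s and M is the same as M being of the order of the Planck mass.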

For completeness, I want to mention that Deformed Special Relativity is a modification of Special Relativity which is based on the assumption that the Planck length (or its inverse, respectively) does transform like the spatial component of a four-vector, contrary to what I said above. In this case one modifies Special Relativity in such a way that the inverse of the Planck length remains invariant. I've never found this assumption plausible, for reasons I elaborated on here. But be that as it may, it's a hypothesis that leads to consequences that can then be tested. Note however that this is a modification of Special Relativity and not the normal version.

Friday, April 05, 2013

First Issue of the New Nordita Newsletter!

The Nordita Newsletter has gotten a major technical upgrade and the first issue of the new version is now online at
Most notably, you can now subscribe and unsubscribe yourself, and we have RSS feeds for the different Newsletter categories.

Subscribing to the Nordita Newsletter might be interesting for you if you are interested in the research we do, want to be informed about job opportunities and other application deadlines like program proposals or PhD visiting fellowships, want timely information on which upcoming conferences, schools or programs you can register for, or if you work in physics or a related field anywhere in the Nordic countries and are interested in our "Nordic News" about research in this part of the world.

A big benefit of the upgraded Newsletter is that individual news items can now easily be shared, which we're hoping will make this information more useful to pass on via social media.

The highlight of this issue is this little video that we produced about the institute:


Some of the shots were taken during last year's programs, on holography in October and on cosmology in November, so you might recognize a few faces here or there. Jump to 2:42 and see if you recognize the guy on the right. From 3:16 onward, does anybody look familiar? 3:08 and 5:22, that was during the program I organized.

And in another video you can meet Oksana, who you got to know earlier in my blogpost about Nematic Films. Here she explains her research in her own words:

Tuesday, April 02, 2013

Twitter Enthusiasm Dwindling

I've spent Easter in bed with a high fever. After a few days of this my brain is deep-fried, my inbox a disaster, and I'm not in any shape to do anything besides occasionally scrolling through my news feeds. In search of something to write that doesn't require much brain use, let me offer a self-observation.

I've pretty much stopped using Twitter. Years ago it seemed like a useful platform to aggregate and share information quickly and easily. But it's turned out to be pretty much useless when it comes to organizing information. My Twitter feed is inevitably dominated by a handful of people who don't seem to be doing anything other than tweeting, at a frequency 100 times higher than everybody else's. I did create a few lists to circumvent the problem, but it's cumbersome and doesn't really solve the main issue, which is that most tweets are just not of interest to me. They're replies to somebody's reply, or comments on somebody's comment, ordered by time, not by topic. And needless to say, there are things that don't fit into 140 characters. This guy has made a similar self-observation of Twitter tiredness.

Which is why, these days, I'm pretty much relying on facebook for my social networking. I've looked at G+, but not much seems to be going on there, or at least not among the people in my circles. And what is going on seems to be an echo of what they're doing on facebook. My main news source is still RSS feeds, but facebook does a good job with the interesting and amusing bits that pass through the network. And it has the big benefit that commenting is easy, so it's turned out to be a comfortable platform to discuss recent news, more so than Twitter or Blogger.

So how about you? Still using twitter?