
Tuesday, April 29, 2014

FQXi essay contest 2014: How Should Humanity Steer the Future?

This year’s essay contest of the Foundational Questions Institute, “How Should Humanity Steer the Future?”, broaches a question that is fundamental indeed, fundamental not for quantum gravity but for the future of mankind. I suspect the topic selection has been influenced by the contest being “presented in partnership with” (which I translate into “sponsored by”) not only the John Templeton Foundation and Scientific American, but also a philanthropic organization called the “Gruber Foundation” (which I had never heard of before) and Jaan Tallinn.

Tallinn is no unknown; he is one of the developers of Skype, and when I type his name into Google the auto-completion is “net worth”. I met him at the 2011 FQXi conference, where he gave a little speech about his worry that artificial intelligence will turn into a threat to humans. Back then I wrote a blogpost explaining that I don’t share this particular worry. However, I recall Tallinn’s speech vividly, not because it was so well delivered (in fact, he seemed to be reading off his phone), but because he was so very sincere about it. Most people’s standard reaction to threats to the future of mankind is cynicism or sarcasm, essentially a vocal shoulder shrug, whereas Tallinn seems to have spent quite some time thinking about this. And well, somebody really should be thinking about this...

And so I appreciate that the topic of this year’s essay contest has a social dimension, not only because it gets tiresome to always circle the same question of where the next breakthrough in theoretical physics will come from and to hear the always same answers (let me guess: it’s what you work on), but also because it gives me an outlet for my interests besides quantum gravity. I have always been fascinated by the complex dynamics of systems driven by the individual actions of many humans, because this reaches out to the larger question of where life on planet Earth is going, and why, and what all of this is good for.

If somebody asks you how humanity should steer the future, a modest reply isn’t really an option, so I have submitted my five step plan to save the world. Well, at least you can’t blame me for not having a vision. The executive summary is that we will only be able to steer at all if we have a way to collectively react to large scale behavior and long-term trends of global systems, and this can only happen if we are able to make informed decisions intuitively, quickly and without much thinking.

A steering wheel like this might not be sufficient to avoid running into obstacles, but it is definitely necessary, so that is what we have to start with.

The trends that we need to react to are those of global and multi-leveled systems, including economic, social, ecological and political systems, as well as various infrastructure networks. Presently, we basically fail to act when problems appear. While the problems arise from the interaction of many people and their environment, it is still the individual that has to make decisions. But the individual presently cannot tell how their own actions work towards their goals over long distances or time scales. To enable them to make good decisions, the information about the whole system has to be routed back to the individual. But that feedback loop doesn’t presently exist.

In principle it would be possible today, but the process is presently far too difficult. The vast majority of people do not have the time and energy to collect the necessary information and make decisions based on it. It doesn’t help to write essays about what we ‘should’ do. People will only act if it’s really simple to do and of immediate relevance for them. Thus my suggestion is to create individual ‘priority maps’ that chart personal values and provide people with intuitive feedback on how well a decision matches their priorities.

A simple example: Suppose you train some software to tell which kinds of images you find aesthetically pleasing and which you dislike. You now have various parameters, say colors, shapes, symmetries, composition and so on. You then fill out a questionnaire about your political values. Now, rather than reading long explanations of which candidate says what, you get an image that represents how good the match is: the match in political values is converted into the parameters of an image. You pick the image you like best and are done. The point is that you are spared having to look into the information yourself; you only get to see a summary that encodes whether voting for that person would work towards what you regard as important.
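For the programming-inclined, here is a minimal sketch of what such a matching step could look like. Everything in it is a hypothetical illustration: the value axes, the use of cosine similarity as a scoring function, and the mapping from score to image parameters are my arbitrary choices, not a worked-out system.

    import numpy as np

    def match_score(my_values, candidate_values):
        # Cosine similarity between two vectors of political-value weights,
        # ranging from -1 (opposite priorities) to +1 (perfect match).
        a = np.asarray(my_values, dtype=float)
        b = np.asarray(candidate_values, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def image_parameters(score):
        # Hypothetical mapping from match score to parameters of a generated
        # image: hue runs from red (poor match) to green (good match), and
        # better matches produce more symmetric images.
        return {"hue": 120 * (score + 1) / 2, "symmetry": (score + 1) / 2}

    # My priorities vs. two candidates, on axes like (environment, economy, privacy):
    me = [0.9, 0.3, 0.8]
    print(image_parameters(match_score(me, [0.8, 0.4, 0.7])))  # close match
    print(image_parameters(match_score(me, [0.1, 0.9, 0.2])))  # poor match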

Oh, I hear you say, but that vastly oversimplifies matters. Indeed, that is exactly the point. Oversimplification is the only way we’ll manage to overcome our present inability to act.

If mankind is to be successful in the long run, we have to evolve to anticipate and react to interrelated global trends in systems of billions of people. Natural selection might do this, but it would take too much time. The priority maps are a technological shortcut to emulate an advanced species that is ‘fit’ in the Darwinian sense, fit to adapt to its changing environment. I envision this to become a brain extension one day.

I had a runner-up to this essay contribution, an argument that research in quantum gravity will be relevant for quantum computing, interstellar travel, and technological progress in general. But it would have been quite an impractical speculation (not to mention a self-advertisement of my work on superdeterminism, superluminal information exchange and antigravity). In my mind of course it’s all related – the laws of physics are what eventually drive the evolution of consciousness and also of our species. But I decided to stick with a proposal that I think is indeed realizable today and that would go a long way towards enabling humanity to steer the future.

I encourage you to check out the essays, which cover a large variety of ideas. Some of the contributions seem very bent towards making a philosophical case for one understanding of natural law rather than another, or towards finding parallels to unsolved problems in physics, which seems quite a stretch to me. However, I am sure you will find something of interest there. At the very least it will give you some new things to worry about...

Saturday, April 26, 2014

Academia isn’t what I expected

The Ivory Tower from
The Neverending Story. [Source]
Talking to the students at the Sussex school made me realize how straightforward it is today to get a realistic impression of what research in this field looks like. Blogs are a good source of information about scientists’ daily lives and duties, and it has also become much easier to find and contact people in the field, either through social networks or by joining dedicated mentoring programs.

Before I myself got an office at a physics institute I only had a vague idea of what people did there. Absent the lauded ‘role models’, my mental image of academic research formed mostly by reading biographies of the heroes of General Relativity and Quantum Mechanics, plus a stack of popular science books. The latter didn’t contain much about the average researcher’s daily tasks, and to the extent that the former captured university life, it was life in the first half of the 20th century.

I expected some things to have changed over 50 years, notably through technological advances and the increased ease of travel, publishing, and communication. I finished high school in ’95, so the biggest changes were yet to come. I also knew that the disciplines had drifted apart, that philosophy and physics were mostly going separate ways now, and that the days in which a physicist could also be a chemist and also an artist were long gone. It was clear that academia had generally grown, become more organized and institutionalized, and more closely linked to industrial research and applications. I had heard that applying for money was a big part of the game. Those were the days.

But my expectations were wrong in many other ways. 20 years, 9 moves and 6 jobs later, here is what I believed theoretical physics would be like, contrasted with reality:
  1. Specialization

    While I knew that interdisciplinarity had given way to specialization, I thought that theoretical physicists would be in close contact with the experimentalists, that they would frequently discuss experiments that might be interesting to develop, or data that required explanation. I also expected theoretical physicists to work closely with mathematicians, because in the history of physics the mathematics has often been developed alongside the physics. In both cases the reality is an almost complete disconnect. The exchange takes place mostly through the published literature or through specially dedicated meetings and initiatives.

  2. Disconnect

    I expected a much larger general intellectual curiosity and sense of social responsibility in academia. Instead I found that most researchers are very focused on their own work and nothing but their own work. Not only do institutes rarely if ever organize public engagement or events that are not closely related to the local research, it’s also that most individual researchers are not interested. In most cases, they plainly don’t have the time to think about anything other than their next paper. That disconnect is the root of complaints like Nicholas Kristof’s recent Op-Ed, in which he calls upon academics: “[P]rofessors, don’t cloister yourselves like medieval monks — we need you!”

  3. The Machinery

    My biggest reality shock was how much of research has turned into manufacturing: the production of PhDs and papers, papers that are necessary for the next grant, which is necessary to pay the next students, who will write the next papers; iterate. This unromantic hamster wheel still shocks me. It has its good side too, though: the standardization of research procedures limits the risks for the individual. If you know how to play along, and are willing to, you have good chances of being able to stay. The disadvantage is that this can force students and postdocs to work on topics they are not actually interested in, and that turns off many bright and creative people.

  4. Nonlocality

    I did not anticipate just how much travel and how many moves are necessary these days. If I had known about this in advance, I think I would have left academia after my diploma. But so I just slipped into it. Luckily I had a very patient boyfriend, who turned husband, who turned father of my children.

  5. The 2nd family

    The specialization, the single-mindedness, the pressure and, most of all, the loss of friends due to frequent moves create close ties among those who are together in the same boat. It’s a mutual understanding, the nod of been-there-done-that, the sympathy with your own problems that make your colleagues and officemates, driftwood as they often are, a second family. In all these years I have felt welcome at every single institute that I have visited. The books hadn’t told me about this.

Experience, as they say, is what you get when you were expecting something else. By and large, I enjoy my job. Most of the time anyway.

My lectures at the Sussex school went well, except that the combination of a recent cold and several hours of speaking stressed my voice box to the point of total failure. Yesterday I could only whisper. Today I get out some freak sounds below C2 but that’s pretty much it. It would be funny if it wasn’t so painful.

You can find the slides of my lectures here and the guide to further reading here. I hope they live up to your expectations :)

Monday, April 21, 2014

Away note

I will be traveling the rest of the week to give a lecture at the Sussex graduate school "From Classical to Quantum GR", so not much will happen on this blog. For the school, we were asked for discussion topics related to our lectures, below are my suggestions. Leave your thoughts in the comments, additional suggestions for topics are also welcome.


  • Is it socially responsible to spend money on quantum gravity research? Don't we have better things to do? How could mankind possibly benefit from quantum gravity?
  • Can we make any progress on the theory of quantum gravity without connection to experiment? Should we think at all about theories of quantum gravity that do not produce testable predictions? How much time do we grant researchers to come up with predictions?
  • What is your favorite approach towards quantum gravity? Why? Should you have a favorite approach at all?
  • Is our problem maybe not with the quantization of gravity but with the foundations of quantum mechanics and the process of quantization?
  • How plausible is it that gravity remains classical while all the other forces are quantized? Could gravity be neither classical nor quantized?
  • How convinced are you that the Planck length is at 10⁻³³ cm? Do you think it is plausible that it is lower? Should we continue looking for it?
  • What do you think is the most promising area to look for quantum gravitational effects and why?
  • Do you think that gravity can be successfully quantized without paying attention to unification?
Lara and Gloria say hello and wish you a happy Easter :o)

Thursday, April 17, 2014

The Problem of Now

[Image Source]

Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital.

“The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics.”

I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”.

The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?

You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle were to scatter off some other particle, or take a sudden turn, then these instances could be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time.

Now what?

The cause of this problem is often assigned to the timelessness of mathematics itself. Mathematics deals at its core with truth values, and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with this timeless math; you can read my review here.

It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles.

The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have a memory of events in the future. Memory is not in and by itself tied to consciousness; it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence, like e.g. hysteresis, does. If you get a perm, the molecule chains in your hair remember the bonds, not your brain.

Memory has nothing to do with consciousness in particular which is good because it makes it much easier to find the flaw in the argument leading to the problem of now.

If we want to describe systems with memory, we need at the very least two time parameters: t to parameterize the location of the particle, and τ to parameterize the strength of the memory of other times at its present location. This means there is a function f(t,τ) that encodes how strong the memory of time τ is at moment t. You need, in other words, at the very least a two-point function; a plain particle trajectory will not do.

That we experience a “now” means that the strength of memory peaks when both time parameters are identical, i.e. at t − τ = 0. That we do not have any memory of the future means that the function vanishes when τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it?
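For concreteness, here is a minimal numerical sketch of such a memory function. The exponential decay into the past is an arbitrary choice of mine; only the peak at τ = t and the vanishing for τ > t matter for the argument.

    import numpy as np

    def f(t, tau, decay=1.0):
        # Strength of the memory of time tau, evaluated at the moment t:
        # zero for the future (tau > t), peaked at tau = t, decaying into the past.
        tau = np.asarray(tau, dtype=float)
        return np.where(tau > t, 0.0, np.exp(-decay * (t - tau)))

    t_now = 5.0
    for tau in [3.0, 4.0, 5.0, 6.0]:
        print(tau, float(f(t_now, tau)))
    # The output peaks at tau = t_now (value 1.0) and is exactly zero for
    # tau > t_now: in every moment t, that very moment is perceived as special.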

The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about “the experience of the Now [that] means something special for men”. Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried.

I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008. I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now.

In his piece, Mermin elaborates on QBism, a subjective interpretation of quantum mechanics. I was destined to dislike this just because it’s a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of QBism to arrive at the right conclusion, namely that the problem of the Now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don’t need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds.

The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the “timeless” mathematics that we presently use for physical laws; it isn’t even difficult, and you don’t have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now, and there is no problem with Tegmark’s mathematical universe either.

And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn’t objectively describe a present moment. However, it’s a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn’t constitute a conflict with our experience.

Most people get a general feeling of uneasiness when they first realize that the block universe implies all the past and all the future is equally real as the present moment, that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.

Saturday, April 12, 2014

Book review: “The Theoretical Minimum – Quantum Mechanics” By Susskind and Friedman

Quantum Mechanics: The Theoretical Minimum
What You Need to Know to Start Doing Physics
By Leonard Susskind, Art Friedman
Basic Books (February 25, 2014)

This book is the second volume in a series that we can expect to be continued. The first part covered Classical Mechanics. You can read my review here.

The volume on quantum mechanics seems to have come into being much like the first: Leonard Susskind teamed up with Art Friedman, a data consultant whose role I envision as saying “Wait, wait, wait” whenever the professor’s pace gets too fast. The result is an introduction to quantum mechanics like none I have seen before.

The ‘Theoretical Minimum’ focuses, as its name promises, on the absolute minimum and aims at being accessible with no previous knowledge other than the first volume. The necessary math is provided along the way in separate interludes that can be skipped. The book begins by explaining state vectors and operators and the bra-ket notation, then moves on to measurements, entanglement and time-evolution. It uses the concrete example of spin states and works its way up to Bell’s theorem, which however isn’t explicitly derived, just captured verbally. Still, everybody who has made it through Susskind’s book should then be able to understand Bell’s theorem. It is only in the last chapters that the general wave-function for particles and the Schrödinger equation make an appearance. The uncertainty principle is derived and path integrals are very briefly introduced. The book ends with a discussion of the harmonic oscillator, clearly building up towards quantum field theory.

I find the approach to quantum mechanics in this book valuable for several reasons. First, it gives a prominent role to entanglement and density matrices, pure and mixed states, Alice and Bob, and traces over subspaces. The book thus provides you with the ‘minimal’ equipment you need to understand what all the fuss with quantum optics, quantum computing, and black hole evaporation is about. Second, it doesn’t dismiss philosophical questions about the interpretation of quantum mechanics, but it also doesn’t give them very prominent space. They are acknowledged, and then it gets back to the physics. Third, the book is very careful in pointing out common misunderstandings and alternative notations, thus preventing much potential confusion.

The decision to go from classical mechanics straight to quantum mechanics has its disadvantages though. Normally the student encounters Electrodynamics and Special Relativity in between, but if you want to read Susskind’s lectures as self-contained introductions, the author now doesn’t have much to work with. This time-ordering problem means that every once in a while a reference to Electrodynamics or Special Relativity is bound to confuse the reader who really doesn’t know anything besides this lecture series.

It also must be said that the book, due to its emphasis on minimalism, will strike some readers as entirely disconnected from history and experiment. Not even the double-slit, the ultraviolet catastrophe, the hydrogen atom or the photoelectric effect made it into the book. This might not be for everybody. Again however, if you’ve made it through the book you are then in a good position to read up on these topics elsewhere. My only real complaint is that Ehrenfest’s name doesn’t appear together with his theorem.

The book isn’t written like your typical textbook. It has fairly long passages that offer a lot of explanation around the equations, and the chapters are introduced with brief dialogues between fictitious characters. I don’t find these dialogues particularly witty, but at least the humor isn’t as nauseating as that in Goldberg’s book.

Altogether, the “Theoretical Minimum” achieves what it promises. If you want to make the step from popular science literature to textbooks and the general scientific literature, then this book series is a must-read. If you can’t make your way through abstract mathematical discussions and prefer a close connection to examples and history, you might however find it hard to get through this book.

I am certainly looking forward to the next volume.

(Disclaimer: Free review copy.)

Monday, April 07, 2014

Will the social sciences ever become hard sciences?

The term “hard science” as opposed to “soft science” has no clear definition. But roughly speaking, the less the predictive power and the smaller the statistical significance, the softer the science. Physics, without doubt, is the hard core of the sciences, followed by the other natural sciences and the life sciences. The higher the complexity of the systems a research area is dealing with, the softer it tends to be. The social sciences are at the soft end of the spectrum.

To me the very purpose of research is making science increasingly harder. If you don’t want to improve on predictive power, what’s the point of science to begin with? The social sciences are soft mainly because data that quantifies the behavior of social, political, and economic systems is hard to come by: it comes in huge amounts, is difficult to obtain, and is even more difficult to handle. Historically, these research areas therefore worked with narratives relating plausible causal relations. Needless to say, as computing power skyrockets, increasingly large data sets can be handled. So the social sciences are finally on track to becoming useful. Or so you’d think, if you’re a physicist.

But interestingly, there is a large opposition to this trend of hardening the social sciences, and this opposition is particularly pronounced towards physicists who take their knowledge to work on data about social systems. You can see this opposition in the comment section to every popular science article on the topic. “Social engineering!” they will yell accusingly.

It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted. This opposition is an echo of the desperate belief in free will that ignores all evidence to the contrary. The desperation in both cases is based on unfounded fears, but unfortunately it results in a forward defense.

And so the world is full of people who argue that they must have free will because they believe they have free will, the ultimate confirmation bias. And when it comes to social systems they’ll snort at the physicists: “People are not elementary particles.” That worries me, worries me more than their clinging to the belief in free will, because the only way we can solve the problems that mankind faces today – the global problems in highly connected and multi-layered political, social, economic and ecological networks – is to better understand and learn how to improve the systems that govern our lives.

That people are not elementary particles is not a particularly deep insight, but it collects several valid points of criticism:

  1. People are too difficult. You can’t predict them.

    Humans are made of a great many elementary particles, and even though you don’t have to know the exact motion of every single one of these particles, a person still has an awful lot of degrees of freedom and needs to be described by a lot of parameters. That’s a complicated way of saying that people can do more things than electrons, and it isn’t always clear exactly why they do what they do.

    That is correct of course, but this objection fails to take into account that not all possible courses of action are always relevant. If it were true that people have too many possible ways to act for us to gather any useful knowledge about their behavior, our world would be entirely dysfunctional. Our societies work only because people are to a large degree predictable.

    If you go shopping you expect certain behaviors of other people. You expect them to be dressed, you expect them to walk forwards, you expect them to read labels and put things into a cart. There, I’ve made a prediction about human behavior! Yawn, you say, I could have told you that. Sure you could, because making predictions about other people’s behavior is pretty much what we do all day. Modeling social systems is just a scientific version of this.

    This objection that people are just too complicated is also weak because, as a matter of fact, humans can be and have been modeled with quite simple systems. This works particularly well in situations where intuitive reaction trumps conscious deliberation. Existing examples are traffic flows or the density of crowds that have to pass through narrow passages; see the sketch after this list.

    So, yes, people are difficult and they can do strange things, more things than any model can presently capture. But modeling a system is always an oversimplification. The only way to find out whether that simplification works is to actually test it with data.

  2. People have free will. You cannot predict what they will do.

    To begin with, it is highly questionable that people have free will. But leaving this aside for a moment, this objection confuses the predictability of individual behavior with statistical trends of large numbers of people. Maybe you don’t feel like going to work tomorrow, but most people will go. Maybe you like to take walks in the pouring rain, but most people don’t. The existence of free will is in no conflict with discovering correlations between certain types of behavior or preferences in groups. It’s the same difference that prevents you from telling when your child will speak their first word or take their first step, even though you can be almost certain that they’ll have mastered both by the age of three.

  3. People can understand the models and this knowledge makes predictions useless.

    This objection always stuns me. If that was true, why then isn’t obesity cured by telling people it will remain a problem? Why are the highways still clogged at 5pm if I predict they will be clogged? Why will people drink more beer if it’s free even though they know it’s free to make them drink more? Because the fact that a prediction exists in most cases doesn’t constitute any good reason to change behavior. I can predict that you will almost certainly still be alive when you finish reading this blogpost because I know this prediction is exceedingly unlikely to make you want to prove it wrong.

    Yes, there are cases when people’s knowledge of a prediction changes their behavior – self-fulfilling prophecies are the best-known examples of this. But this is the exception rather than the rule. In an earlier blogpost, I referred to these as societal fixed points: configurations in which the backreaction of the model onto the system does not change the prediction. The simplest example is a model whose predictions few people know or care about.

  4. Effects don’t scale and don’t transfer.

    This objection is the most subtle one. It posits that the social sciences aren’t really sciences until you can perform and reproduce the outcome of “experiments”, which may be designed or naturally occurring. The typical social experiment that lends itself to analysis will be in a relatively small and well-controlled community (say, testing the implementation of a new policy). But then you have to extrapolate from this what the results will be in larger and potentially very different communities. Increasing the size of the system might bring in entirely new effects that you didn’t even know of (doesn’t scale), and there are a lot of cultural variables that your experimental outcome might have depended on that you didn’t know of and thus cannot adjust for (doesn’t transfer). As a consequence, repeating the experiment elsewhere will not reproduce the outcome.

    Indeed, this is likely to happen and I think it is the major challenge in this type of research. For complex relations it will take a long time to identify the relevant environmental parameters and to learn how to account for their variation. The more parameters there are and the more relevant they are, the less the predictive value of a model will be. If there are too many parameters that have to be accounted for it basically means doing experiments is the only thing we can ever do. It seems plausible to me, even likely, that there are types of social behavior that fall into this category, and that will leave us with questions that we just cannot answer.

    However, whether or not a certain trend can be modeled we will only know by trying. We know that there are cases where it can be done. Geoffrey West’s city theory is a beautiful example in which quite simple laws can be found in the midst of all these cultural and contextual differences.
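As promised above, here is a sketch of one of those quite simple models: a minimal version of the Nagel-Schreckenberg cellular automaton for traffic flow. The parameter values are illustrative, not fitted to data, but even this little model reproduces the spontaneous emergence of traffic jams.

    import random

    def step(pos, vel, road_len=100, v_max=5, p_slow=0.3):
        # One update of a circular single-lane road: each car accelerates,
        # brakes to avoid the car ahead, dawdles at random, then moves.
        n = len(pos)
        order = sorted(range(n), key=lambda i: pos[i])
        for idx, i in enumerate(order):
            ahead = order[(idx + 1) % n]
            gap = (pos[ahead] - pos[i] - 1) % road_len
            vel[i] = min(vel[i] + 1, v_max, gap)          # accelerate, don't crash
            if vel[i] > 0 and random.random() < p_slow:   # random human dawdling
                vel[i] -= 1
            pos[i] = (pos[i] + vel[i]) % road_len

    random.seed(1)
    pos = sorted(random.sample(range(100), 30))
    vel = [0] * 30
    for _ in range(100):
        step(pos, vel)
    print("mean speed:", sum(vel) / len(vel))  # well below v_max: jams have formed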
In summary.

The social sciences will never be as “hard” as the natural sciences because there is much more variation among people than among particles and among cities than among molecules. But the social sciences have become harder already and there is no reason why this trend shouldn’t continue. I certainly hope it will continue because we need this knowledge to collectively solve the problems we have collectively created.

Tuesday, April 01, 2014

Do we live in a hologram? Really??

Physicists fly high on the idea that our three-dimensional world is actually two-dimensional, that we live in a hologram, and that we’re all projections on the boundary of space. Or something like this you’ve probably read somewhere. It’s been all over the pop science news ever since string theorists sang the Maldacena. Two weeks ago Scientific American produced this “Instant Egghead” video which is a condensed mashup of all the articles I’ve endured on the topic:

The second most confusing thing about this video is the hook “Many physicists now believe that reality is not, in fact, 3-dimensional.”

To begin with, physicists haven’t believed this since Minkowski doomed space and time to “fade away into mere shadows”. Moyer in his video apparently refers only to space when he says “reality”. That’s forgivable. I am more disturbed by the word “reality” that always creeps up in this context. Last year I was at a workshop that mixed physicists with philosophers. Inevitably, upon mention of the gauge-gravity duality, some philosopher would ask: well, how many dimensions then do we really live in? Really? I have some explanations for you about what this really means.

Q: Do we really live in a hologram?

A: What is “real” anyway?

Q: Having a bad day, yes?

A: Yes. How am I supposed to answer a question when I don’t know what it means?

Q: Let me be more precise then. Do we live in a hologram as really as, say, we live on planet Earth?

A: Thank you, much better. The holographic principle is a conjecture. It has zero experimental evidence. String theorists believe in it because their theory supports a specific version of holography, and in some interpretations black hole thermodynamics hints at it too. Be that as it may, we don’t know whether it is the correct description of nature.

Q: So if the holographic principle was the correct description of nature, would we live in a hologram as really as we live on planet Earth?

A: The holographic principle is a mathematical statement about the theories that describe nature. There’s a debate, several thousand years old, about whether or not math is as real as that apple tree in your back yard. This isn’t a question about holography in particular; you could ask the same question in general relativity: Do we really live in a metric manifold of dimension four and Lorentzian signature?

Q: Well, do we?

A: On most days I think of the math of our theories as machinery that allows us to describe nature but is not itself nature. On the remaining days I’m not sure what reality is and have a lot of sympathy for Platonism. Make your pick.

Q: So if the holographic principle was true, would we live in a hologram as really as we previously thought we live in the space-time of Einstein’s theory of General Relativity?

A: A hologram is an image on a 2-dimensional surface that allows one to reconstruct a 3-dimensional image. One shouldn’t take the nomenclature “holographic principle” too seriously. To begin with, actual holograms are never 2-dimensional in the mathematical sense; they have a finite width. After all, they’re made of atoms and stuff. They also do not perfectly recreate the 3-dimensional image, because they have a resolution limit which comes from the wavelength of the light used to take (and reconstruct) the image. A hologram is basically a Fourier transformation. If that doesn’t tell you anything, suffice it to say this isn’t the same mathematics as that behind the holographic principle.
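To make the optical analogy concrete, here is a toy version of “a hologram is basically a Fourier transformation”: store an image in frequency space, discard the frequencies beyond a cutoff (a crude stand-in for the wavelength resolution limit), and reconstruct. The image and the cutoff value are arbitrary choices of mine.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))      # stand-in for the recorded intensity pattern

    hologram = np.fft.fft2(image)     # "recording": the image in frequency space

    # Resolution limit: frequencies above the cutoff are not recorded.
    k = np.fft.fftfreq(64)
    mask = (np.abs(k)[:, None] < 0.2) & (np.abs(k)[None, :] < 0.2)
    reconstruction = np.fft.ifft2(hologram * mask).real

    print("mean reconstruction error:", np.abs(reconstruction - image).mean())
    # Nonzero: the band limit makes the reconstruction imperfect, just as the
    # finite wavelength of light limits an actual hologram.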

Q: I keep hearing that the holographic principle says the information of a volume can be encoded on the boundary. What’s the big deal with that? If I get a parcel with a customs declaration, information about the volume is also encoded on the boundary.

A: That statement about the encoding of information is sloppy wording. You have to take into account the resolution that you want to achieve. You are right of course in that there’s no problem in writing down the information about some volume and printing it on some surface (or a string for that matter). The point is that the larger the volume the smaller you’ll have to print.

Here’s an example. Take a square made out of N² smaller squares and think of each of them as one bit: they’re either black or white. There are 2^(N²) different patterns of black and white. In analogy, the square is a box full of matter in our universe and the colors are information about the particles inside.

Now you want to encode the information about the pattern of that square on the boundary using pieces of the same length as the sidelength of the smaller squares. See image below for N=3. On the left is the division of the square and the boundary, on the right is one way these could encode information.


There are 4N of these boundary pieces and 2^(4N) different patterns for them. If N is larger than 4, there are more ways the square can be colored than there are patterns for the boundary. This means you cannot uniquely encode the information about the volume on the boundary.

The holographic principle says that this isn’t so. It says that, yes, you can always encode the volume on the boundary. This means, basically, that some of the patterns for the squares can’t happen.
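You can check the counting directly; the crossover happens right where N² outgrows 4N:

    # The counting argument in numbers: an N-by-N square has 2**(N*N) colorings,
    # while its boundary at the same resolution has only 2**(4*N) patterns.
    for N in range(1, 8):
        volume_states = 2 ** (N * N)
        boundary_states = 2 ** (4 * N)
        print(N, volume_states, boundary_states, volume_states > boundary_states)
    # For N larger than 4 the volume has more patterns than the boundary, so a
    # holographic encoding requires that some volume patterns never occur.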

Q: That’s pretty disturbing. Does this mean I can’t pack a parcel in as many ways as I want to?

A: In principle, yes. In practice the things we deal with, even the smallest ones we can presently handle in laboratories, are still far above the resolution limit. They are very large chunks compared to the little squares I have drawn above. There is thus no problem encoding all that we can do to them on the boundary.

Q: What then is the typical size of these pieces?

A: They’re thought to be at the Planck scale, which is about 10⁻³³ cm. You should not however take the example with the box too seriously. It is just an illustration to explain how the number of different configurations scales with the system size. The theory on the surface looks entirely different from the theory in the volume.

Q: Can you reach this resolution limit with an actual hologram?

A: No you can’t. If you’d use photons with a sufficiently high energy, you’d just blast away the sample of whatever image you wanted to take. However, if you loosely interpret the result of such a high energy blast as a hologram, albeit one that’s very difficult to reconstruct, you would eventually notice these limitations and be able to test the underlying theory.

Q: Let me come back to my question then, do we live in the volume or on the boundary?

A: Well, the holographic principle is quite a vague idea. It has a concrete realization in the gauge-gravity correspondence that was discovered in string theory. In this case one knows very well how the volume is related to the boundary and has theories that describe each. These two descriptions are identical. They are said to be “dual”, and both are equally “real” if you wish. They are just different ways of describing the same thing. In fact, depending on what system you describe, we are living on the boundary of a higher-dimensional space rather than in a volume with a lower-dimensional surface.

Q: If they’re the same why then do we think we live in 3 dimensions and not in 2? Or 4?

A: Depends on what you mean by dimension. One way to measure dimensionality is, roughly speaking, to count the number of ways a particle can get lost if it moves randomly away from a point. The result then depends on what particle you use for the measurement. The particles we deal with move in 3 dimensions, at least on the distance scales that we typically measure. That’s why we think, feel, and move like we live in 3 dimensions, and there’s nothing wrong with that. The types of particles (or fields) you would have in the dual theories do not correspond to the ones we are used to. And if you ask a string theorist, we live in 11 dimensions one way or the other.
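Here is a rough numerical version of that dimension-counting, under the assumption that “getting lost” means a simple random walk on a lattice: the probability of returning to the starting point falls off like t^(-d/2), so the decay rate reads off the dimension d.

    import math
    import random

    def return_fraction(d, t, walkers=100_000):
        # Fraction of d-dimensional random walks back at the origin after t steps.
        count = 0
        for _ in range(walkers):
            x = [0] * d
            for _ in range(t):
                x[random.randrange(d)] += random.choice((-1, 1))
            count += all(v == 0 for v in x)
        return count / walkers

    random.seed(0)
    for d in (1, 2, 3):
        p8, p32 = return_fraction(d, 8), return_fraction(d, 32)
        # p(t) ~ t**(-d/2)  =>  d is about -2 * log(p32/p8) / log(32/8)
        print(d, round(-2 * math.log(p32 / p8) / math.log(32 / 8), 2))
    # Prints rough estimates near 1, 2, and 3 respectively.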

Q: I can see then why it is confusing to vaguely ask what dimension “reality” has. But what is the most confusing thing about Moyer’s video?

A: The reflection on his glasses.

Q: Still having a bad day?

A: It’s this time of the month.

Q: Okay, then let me summarize what I think I learned here. The holographic principle is an unproved conjecture supported by string theory and black hole physics. It has a concrete theoretical formalization in the gauge-gravity correspondence. There, it identifies a theory in a volume with a theory on the boundary of that volume in a mathematically rigorous way. These theories are both equally real. How “real” that is depends on how real you believe math to be to begin with. It is only surprising that information can always be encoded on the boundary of a volume if you require the resolution to be maintained, but then it is quite a mindboggling idea indeed. If one defines the number of dimensions in a suitable way that matches our intuition, we live in 3 spatial dimensions as we always thought we do, though experimental tests in extreme regimes may one day reveal that fundamentally our theories can be rewritten to spaces with different numbers of dimensions. Did I get that right?

A: You’re so awesomely attentive.

Q: Any plans on getting a dog?

A: No, I have interesting conversations with my plants.

Tuesday, March 25, 2014

Does nature hide strong curvature regions?

Quantum gravitational effects are strong when space-time curvature becomes large, so large that it reaches the Planckian regime. Unfortunately, space-time around us is barely curved. For all practical purposes, you sit in a flat space-time. This is why you don’t have to worry about post-post Newtonian corrections if you ask Siri for directions, but also why it takes some experimental effort to detect the subtle consequences of Einstein’s theory of General Relativity – and that’s the classical case. In the almost flat background around us, quantum effects of gravity are hopelessly small.

But space-time curvature isn’t small everywhere. When matter collapses to a black hole, the matter density and also the curvature become very large, and eventually, long after you’d been spaghettified by tidal forces, reach the regime where quantum gravitational effects are sizeable. The problem is that this area, even though it almost certainly exists inside the black holes that astronomers watch, is hidden below the black hole’s horizon and not accessible to observation.

Or is it? Could there be strong curvature regions in our universe that are not hidden behind event horizons and allow us to look straight onto large quantum gravitational effects?

The “Cosmic Censorship” conjecture states that singularities which form when matter density becomes infinitely large are always hidden behind horizons. But more than 40 years after this conjecture was put forward by Roger Penrose, there is still no proof that it is correct. On the contrary, recent developments, supported by numerical calculations which were impossible in the 1970s, indicate that singularities might form without being censored. These singularities might be “naked”, and yes that is the technical expression.

It has been known for a long time that General Relativity admits solutions with naked singularities, but it was believed that these do not form in realistic systems because they require special initial conditions that are never found in nature. However, today several physically realistic situations are known to result in naked singularities. Now that we cannot rule out naked singularities on theoretical grounds, we are left to wonder how we could detect them if they really exist. And if this means strong curvature regions are within sight, what is the potential for observational evidence of quantum gravity?

It turns out these questions are more difficult to answer than you’d expect. Evidence for black hole horizons comes primarily from not seeing evidence of the surface of a compact object. A naked singularity however also doesn’t have a hard surface, so these observations are not of much use. If matter collapses and heats up, it makes a difference for the emitted radiation whether a horizon forms or not. This difference however is so small that it cannot be detected.

This has led researchers to look for other ways to distinguish between a black hole and a naked singularity, for example by asking how a naked singularity would act as a gravitational lens in comparison to a black hole. However, the timelike naked singularities considered in this work are not of the type that has been shown to be created in physically realistic collapse.

The most promising study so far is a recent paper by a group of physicists in Morelia, Mexico:
    Observational distinction between black holes and naked singularities: the role of the redshift function
    Néstor Ortiz, Olivier Sarbach, Thomas Zannias
    arXiv:1401.4227 [gr-qc]
In this paper, the authors have studied whether one can distinguish between black holes and naked singularities not by the light that is emitted from the object itself during collapse, but by light from a different source that travels through the collapse region. They find that the luminosity curves of the two cases differ on a timescale that, for a stellar black hole, is about 10⁻⁵ s. The authors do not evaluate whether it is feasible to detect the difference with presently existing technology, but the signal does not seem hopelessly small.

The space-times considered in the above have a Cauchy horizon, which is an interesting but also somewhat troubling concept that the cosmic censorship conjecture is supposed to avoid. The presence of the Cauchy horizon basically means that after a certain moment in time you need additional initial data. You could interpret this as a classical instance of indeterminism. However, quantum gravity is generally expected to remove the singularity anyway, so don’t get too much of a headache over this. More interesting is the question whether the difference between the presence and absence of the horizon would be easier to detect if quantum gravitational effects were taken into account.

I am sure we will hear more about this in the near future. Maybe we’ll even see it.

Monday, March 17, 2014

Do scientists deliberately use technical expressions so they cannot be understood?

Secret handshake?
Science or gibberish?
“[E]xisting pseudorandom and introspective approaches use pervasive algorithms to create compact symmetries. The development of interrupts would greatly amplify Byzantine fault tolerance. We construct a novel method for the investigation of online algorithms.”

“[T]he effective diminution of the relevant degrees of freedom in the ultraviolet (on which morally speaking all approaches agree) is interpreted as universality in the statistical physics sense in the vicinity of an ultraviolet renormalization group fixed point. The resulting picture of microscopic geometry is fractal-like with a local dimensionality of two.”
IEEE and Springer recently withdrew 120 papers that turned out to be randomly generated nonsense, and Schadenfreude spread among the critics of commercial academic publishing. The internet offers a wide variety of random text generators, including the one used to create the now withdrawn Springer papers, called SciGen. The difficult part of creating random academic text is the grammar, not the vocabulary. If you start with a grammatically correct sentence, it is easy enough to fill in technical language.

Take as example the above sentence
“The difficult part of creating random text is the grammar, not the vocabulary.”
And just replace some nouns and adverbs:
“The difficult part of creating completely antisymmetric turbulence is the higher order correction, not the parametric resonance.”
Or maybe
“The difficult part of creating parametric turbulence is the completely antisymmetric resonance, not the higher order correction.”
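You can even automate the recipe. The sketch below keeps the grammatical skeleton fixed and slots in technical vocabulary at random; the word lists are my own illustration:

    import random

    TEMPLATE = "The difficult part of creating {a} {b} is the {c}, not the {d}."
    MODIFIERS = ["completely antisymmetric", "parametric", "non-perturbative",
                 "holographic"]
    NOUNS = ["turbulence", "higher order correction", "parametric resonance",
             "renormalization group flow", "gauge anomaly"]

    random.seed(4)
    a = random.choice(MODIFIERS)
    b, c, d = random.sample(NOUNS, 3)
    print(TEMPLATE.format(a=a, b=b, c=c, d=d))
    # The grammar stays correct no matter which words land where; only a
    # reader who knows the terms can tell that the relations are nonsense.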
Sounds very educated, yes? I have some practice with that ;o)

The problem is that if you don’t know the technical terms, you can’t tell whether the relations implied by the grammar make sense. There is thus, not so surprisingly, a long history of cynics abusing this narrow target group of academic writing, and this cynicism spreads rapidly now that academic writing has become more widely available. With the open access movement there swells the background choir chanting that availability isn’t the same as accessibility. Nicholas Kristof recently complained about academic writing in an NYT op-ed:
“[A]cademics seeking tenure must encode their insights into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.”
Kristof calls upon academics to better communicate with the public, which I certainly support. At the same time however he also claims professional language is unnecessary and deliberately exclusive:
“Ph.D. programs have fostered a culture that glorifies arcane unintelligibility while disdaining impact and audience. This culture of exclusivity is then transmitted to the next generation through the publish-or-perish tenure process.”
Let me take these two issues apart. First deliberately exclusive, and second unnecessary.

Steve Fuller, who is a professor for Social Epistemology at the University of Warwick, argues (for example in his book “Knowledge Management Foundations”) that the value of knowledge is related to the scarcity of access to it. For that reason, academics have an incentive to put hurdles in the way of those wanting to get into the ivory tower and make it more difficult than it has to be. It is a good argument, though it is hard to tell how much of this exclusivity is deliberate. At least when it comes to my colleagues in math and physics, the exclusivity seems more a matter of neglect than of intent. Inclusivity takes effort and most academics don’t make this effort.

This brings me to the argument that academic slang is unnecessary. Unfortunately, this is a very common belief. For example, in reaction to my recent post about the tug-of-war between accuracy and popularity in science journalism, several journalists remarked that surely I must have meant precision rather than accuracy, because good journalism can be accurate even though it avoids technical language.

But no, I did in fact mean accuracy. If you don’t use the technical language, you’re not accurate. The whole raison d’être [entirely unnecessary French expression meaning “reason for existence”] of professional terminology is that it is the most accurate description available. And PhD programs don’t “glorify unintelligible gibberish”, they prepare students to communicate accurately and efficiently with their colleagues.

For physicists the technical language is equations, the most important ones carry names. If you want to avoid naming the equation, you inevitably lose accuracy.

The second Friedmann equation, for example, does not just say that the universe undergoes accelerated expansion given the present values of dark matter and dark energy, which is a typical “non-technical” description of this relation. The equation also tells you that you’re dealing with a differentiable metric manifold of dimension 4 and Lorentzian signature, and that you are within Einstein’s theory of general relativity. It tells you that you’ve made an assumption of homogeneity and isotropy. It tells you exactly how the acceleration relates to the matter content. And constraining the coupling constants of certain Lorentz-invariance violating operators of order 5 is not the same as testing “space-time graininess” or testing whether the universe is a computer simulation, to name just some examples.
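For reference, this is the equation in question, written in one common convention (FLRW metric with scale factor a, energy density ρ, pressure p, and cosmological constant Λ):

    \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}

Every symbol carries part of the compressed information: the existence of a single scale factor a(t) encodes the assumption of homogeneity and isotropy, and the form of the right-hand side encodes Einstein’s equations.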

These details are both irrelevant and unintelligible for the average reader of a pop sci article, I agree. But, I insist, without these details the explanation is not accurate, and not useful for the professional.

Technical terminology is an extremely compressed code that carries a large amount of information for those who have learned to decipher it. It is used in academia because without compression nobody could write, let alone read, a paper. You’d have to attach megabytes worth of textbooks, lectures and seminars.

In science, most terms are cleanly defined, others have various definitions and some I admit are just not well-defined. In the soft sciences, the situation is considerably worse. In many cases trying to pin down the exact meaning of an -ism or -ology opens a bottomless pit of various interpretations and who-said-whats that date back thousands of years. This is why my pet peeve is to discard soft science arguments as useless due to undefined terminology. However, one can’t really blame academics in these disciplines – they are doing the best they can building castles on sand. But regardless of whether their terminology is very efficient or not compared to the hard sciences, it too is used for the sake of compression.

So no, academic slang is not unnecessary. But yes, academic language is exclusive as a consequence of this. It is in that not different from other professions. Just listen to your dentist and her assistant discuss their tools and glues, or look at some car-fanatics forum, and you’ll find the same exclusivity there. The difference is gradual and lies in the amount of time you need to invest to be one of them, to learn their language.

Academic language is not purposefully designed to exclude others, but it arguably serves this purpose once in place. Pseudoscientists tend to underestimate just how obvious their lack of knowledge is. It often takes a scientist no more than a sentence to recognize an outsider as such. Are you able to tell the opening sentences of this blogpost from gibberish? Can you tell the snarxiv from the arxiv?

Indeed, in reality it is not the PhD that marks the science-insider off from the outsider. The PhD defense is much like losing your virginity: vastly overrated. It looms big in your future, but once it’s in the past you note that nobody gives a shit. You mark your place in academia not by hanging a framed title on your office door, but by using the right words in the right places. Regardless of whether you have a PhD, you’ll have to demonstrate the knowledge equivalent of a PhD to become an insider. And there are no shortcuts to this.

For scientists this demarcation is of practical use because it saves them time. On the flipside, there is the occasional scientist who goes off the deep end and who then benefits from having learned the lingo to make nonsense sound sophisticated. However, compared to the prevalence of pseudoscience this is a rare problem.

Thus, while the exclusivity of academic language has beneficial side effects, technical expressions are not deliberately created for the purpose of excluding others. They emerge and get refined in the community as efficient communication channels. And efficient communication inside a discipline is simply not the same as efficient communication with other disciplines or with the public, a point that Kristof in his op-ed is entirely ignoring. Academics are hired and get paid for communicating with their colleagues, not with the public. That is the main reason academic writing is academic. There is probably no easy answer to just why it has come to be that academia doesn’t make much effort communicating with the public. Quite possibly Fuller has a point there in that scarcity of access protects the interests of the communities.

But leaving aside the question of where the problem originates, prima facie [yeah, I don’t only know French, but also Latin] the reason most academics are bad at communicating with the public is simple: They don’t care. Academia presently selects very strongly for single-minded obsession with research. Communicating with the public, whether about one’s own research or to chime in with opinions on science policy, is in the best case useless and in the worst case harmful for doing the job that pays their rent. Accessibility and popularity do not, for academics, convert into income, and even an NYT Op-Ed isn’t going to change anything about this. The academics you find in the public sphere are primarily those who stand to benefit from the limelight: Directors and presidents of something spreading word about their institution, authors marketing their books, and a few lucky souls who found a way to make money with their skills and gigs. You do not find the average academic making an effort to avoid academic prose because they have nothing to gain from it.

I’ve read many flowery words about how helpful science communication – writing for the public, public lectures, outreach events, and so on – can be to make oneself and one’s research known. Yes, can be, and anecdotally this has helped some people find good jobs. But this works out so rarely that on the average it is a bad investment of time. That academics are typically overworked and underpaid anyway doesn’t help. That’s not good, but that’s reality.

I certainly wish more academics would engage with the public and make that effort of converting academic slang to comprehensible English, but knowing how hard my colleagues work already, I can’t blame them for not doing so. So please stop complaining that academics do what they were hired to do and that they don’t work for free on what doesn’t feed their kids. If you want more science communication and less academic slang, put your money where your mouth is and pay those who make that effort.

The first of the examples at the top of this post is random nonsense generated with SciGen. The second example is from the introduction of the Living Review on Asymptotic Safety. Could you tell?

Tuesday, March 04, 2014

10 Misconceptions about Creativity

Lara, painting. She says
it's a snake and a trash can.

The American psyche is deeply traumatized by the finding that creativity scores of children and adults have been constantly declining since 1990. The consequence is a flood of advice on how to be more creative, books and seminars and websites. There’s no escaping the message: Get creative, now!

Science needs a creative element, and so every once in a while I read these pieces that come by my newsfeed. But they’re like one of those mildly pleasant songs that stop making sense when you listen to the lyrics. Clap your hands if you’re feeling like a room without a ceiling.

It’s not like I know terribly much about research on creativity. I’m sure there must be some research on it, right? But most of what I read isn’t even logically coherent.
  1. Creativity means solving problems.

    The NYT recently wrote in an article titled “Creativity Becomes an Academic Discipline”:
    “Once considered the product of genius or divine inspiration, creativity — the ability to spot problems and devise smart solutions — is being recast as a prized and teachable skill.”
    Yes, creativity is an essential ingredient to solving problems, but equating creativity with problem solving is like saying curiosity is a device to kill cats. It’s one possible use, but it’s not the only use and there are other ways to kill cats.

    Creativity is in the first place about creation, the creation of something new and interesting. The human brain has two different thought processes for solving problems. One is to make use of learned knowledge and proceed systematically, step by step. This is often referred to as ‘convergent thinking’ and dominantly makes use of the left side of the brain. The other is pattern-finding and free association, often referred to as ‘divergent thinking’, which employs more brain regions on the right side. It normally kicks in only if the straightforward left-brain attempt has failed, because it is energetically more costly. Exactly what constitutes creative thinking is not well known, but most agree it is a combination of both of these thought processes.

    Creative thinking is a way to arrive at solutions to problems, yes. Or you might create a solution looking for a problem. Creativity is also an essential ingredient to art and knowledge discovery, which might or might not solve any problem.

  2. Creativity means solving problems better.

    It takes my daughter about half an hour to get dressed. First she doesn’t know how to open the buttons, then she doesn’t know how to close them. She’ll try to wear her pants as a cap and pull her socks over the jeans just to then notice the boots won’t fit.

    It takes me 3 minutes to dress her – if she lets me – not because I’m more creative but because it’s not a problem that calls for a creative solution. Problems that can be solved with little effort by a known algorithm are in most cases best solved by convergent thinking.

    Xkcd nails it:

    But Newsweek bemoans:
    “Preschool children, on average, ask their parents about 100 questions a day. Why, why, why—sometimes parents just wish it’d stop. Tragically, it does stop. By middle school they’ve pretty much stopped asking.”
    There’s much to be said about schools not teaching children creative thinking – I agree it’s a real problem. But the main reason children stop asking questions is that they learn. And somewhere down the line they learn how to find answers themselves. The more we learn, the more problems we can address with known procedures.

    There’s a priori nothing wrong with solving problems non-creatively. In most cases creative thinking just wastes time and brain-power. You don’t have to reinvent the wheel every day. It’s only when problems do not give in to standard solutions that a creative approach becomes useful.

  3. Happiness makes you creative.

    For many people the problem with creative thought is the lack of divergent thinking. If you look at the advice you find online, it’s almost all guides to divergent thinking, not to creativity: “Don’t think. Let your thoughts unconsciously bubble away.” “Surround yourself with inspiration.” “Be open and aware. Play and pretend. List unusual uses for common household objects.” And so on. Happiness then plays a role for creativity because there is some evidence that happiness makes divergent thinking easier:
    “Recent studies have shown […] that everyday creativity is more closely linked with happiness than depression. In 2006, researchers at the University of Toronto found that sadness creates a kind of tunnel vision that closes people off from the world, but happiness makes people more open to information of all kinds.”
    writes Bambi Turner, who has a business degree and writes stuff. Note the vague term “closely linked” and look at the research.

    It is a study showing that people who listened to Bach’s (“happy”) Brandenburg Concerto No. 3 were better at solving a word puzzle that required divergent thinking. In science speak the result reads “positive affect enhanced access to remote associates, suggesting an increase in the scope of semantic access.” Let us not even ask about the statistical significance of a study with 24 students of the University of Toronto in their lunch break, or its relevance for real life. The happy people participating in this study were basically forced to think divergently. In real life happiness might instead divert you from hacking away at a problem.

    In summary, the alleged “close link” should read: There is tentative evidence that happiness increases your chances of being creative in a laboratory setting, if you are among those who lack divergent thinking and are a student at the University of Toronto. (For a back-of-the-envelope estimate of what a sample of 24 can show at all, see the sketch after this list.)

  4. Creativity makes you happy.

    There’s very little evidence that creativity for the sake of creativity improves happiness. Typically it’s plausibility arguments like the following, saying that solving problems might improve your life in general:
    “creativity allows [people] to come up with new ways to solve problems or simply achieve their goals.”
    That is plausible indeed, but it doesn’t take into account that being creative has downsides that counteract the benefits.

    This blog is testimony to my divergent thinking. You might find this interesting in your news feed, but ask my husband what fun it is to have a conversation with somebody who changes topic every 30 seconds because it’s all connected! I’m the nightmare of your organizing committee, of your faculty meeting, and of your carefully assembled administration workflow. Because I know just how to do everything better and have ten solutions to every problem, none of which anybody wants to hear. It also has the downside that I can only focus on reading when I’m tired, because otherwise I’ll never get through a page. Good thing all my physics lectures were early in the morning.

    Thus, I am very skeptical of the plausibility argument that creativity makes you happy. If you look at the literature, there is in fact very little that has been shown to lastingly increase people’s happiness at all. Two procedures that have shown some effect in studies are practicing gratitude and getting to know one’s individual strengths.

    For more evidence that speaks against the idea that creativity increases happiness, see 7 and 8. There is some evidence that happiness and creativity are correlated, because both tend to be correlated with other character traits, like openness and cognitive flexibility. However, there is also evidence to the contrary, that creative people have a tendency to depression: “Although little evidence exists to link artistic creativity and happiness, the myth of the depressed artist has some scientific basis.” I’d call this inconclusive. Either way, correlations are only of so much use if you want to actively change something.

  5. Creativity will solve all our problems.

    “All around us are matters of national and international importance that are crying out for creative solutions, from saving the Gulf of Mexico to bringing peace to Afghanistan to delivering health care. Such solutions emerge from a healthy marketplace of ideas, sustained by a populace constantly contributing original ideas and receptive to the ideas of others.”
    [From Newsweek again.] I don’t buy this at all. It’s not that we lack creative solutions, just look around, look at TED if you must. We’re basically drowning in creativity, my inbox certainly is. But they’re solutions to the wrong problems.

    (One of the reasons is that we simply do not know what a “healthy marketplace of ideas” is even supposed to mean, but that’s a different story and shall be told another time.)

  6. You can learn to be creative if you follow these simple rules.

    You don’t have to learn creative thinking, it comes with your brain. You can however train it if you want to improve, and that’s what most of the books and seminars want to sell. It’s much like running. You don’t have to learn to run. Everybody who is reasonably healthy can run. How far and how fast you can run depends on your genes and on your training. There is some evidence that creativity has a genetic component and you can’t do much about this. However, you can work on the non-genetic part of it.

  7. “To live creatively is a choice.”

    This is a quote from the WSJ essay “Think Inside the Box.” I don’t know if anybody has ever looked into this in a scientific way; it seems a thorny question. But anecdotally it’s easier to increase creativity than to decrease it, and thus it seems highly questionable that this is correct, especially if you take into account the evidence that creativity is partially genetic. Many biographies of great writers and artists speak against it; let me just quote one:
    “We do not write because we want to; we write because we have to.”
    W. Somerset Maugham, English dramatist and novelist (1874 - 1965).

  8. Creativity will make you more popular.

    People welcome novelty only in small doses and incremental steps. The wilder your divergent leaps of imagination, the more likely you are to just leave people behind. Creativity might be a potential source of popularity in that at least you have something interesting to offer, but too much of it won’t do any good. You’ll end up being the misunderstood, unappreciated genius whose obituary says “ahead of his time”.

  9. Creativity will make you more successful.

    Last week, the Washington Post published this opinion piece which informs the reader that:
    “Not for centuries has physics been so open to metaphysics, or more amenable to an ancient attitude: a sense of wonder about things above and within.”
    This comes from a person named Michael Gerson who recently opened Max Tegmark’s book and whose occupation seems to be, well, to write opinion pieces. I’ll refrain from commenting on the amenability of professions I know nothing about, so let me just say that he has clearly never written a grant proposal. I warmly recommend you put the word “metaphysics” into your next proposal to see what I mean. I think you should all do that, because I clearly won’t, so maybe I stand a chance in the next round.

    Most funding agencies have used the 2008 financial crisis as an excuse to focus on conservative and applied research to the disadvantage of high risk and basic research. They really don’t want you to be creative – the “expected impact” is far too remote, the uncertainty too high. They want to hear you’ll use this hammer on that nail and when you’ve been hitting at it for 25 months and two weeks, out will pop 3 papers and two plenary talks. Open to metaphysics? Maybe Gerson should have a chat with Tegmark.

    There is indeed evidence showing that people are biased against creativity in favor of practicality, even if they state they welcome creativity. This study relied on 140 American undergraduate students. (Physics envy, anybody?) The punchline is that creative solutions by their very nature have a higher risk of failure than those relying on known methods, and this uncertainty is unappealing. It is particularly unappealing when you are coming up with solutions to problems that nobody wanted you to solve.

    So maybe being creative will make you successful. Or maybe your ideas will just make everybody roll their eyes.

  10. The internet kills creativity.

    The internet has made life difficult for many artists, writers, and self-employed entrepreneurs, and I see a real risk that this degrades the value of creativity. However, it isn’t true that the mere availability of information kills creativity. It just moves it elsewhere. The internet has turned many tasks that previously required creative approaches into step-by-step procedures. Need an idea for a birthday cake? Don’t know how to fold a fitted sheet? Want to know how to be more creative? Google will tell you. This frees your mind to get creative on tasks that Google will not do for you. In my eyes, that’s a good thing.
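As promised under point 3, here is the back-of-the-envelope power calculation for a study of 24 participants. This is a hedged sketch, not a reanalysis: I’m assuming an even split into two groups of 12 and the conventional thresholds, which may not match the actual design of the Toronto study.

```python
# Back-of-the-envelope power analysis for a small two-group study.
# Assumption: 24 participants split evenly into two groups of 12;
# the actual design of the study may have differed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the smallest standardized effect (Cohen's d) detectable
# with n = 12 per group, alpha = 0.05, at the conventional 80% power.
d = analysis.solve_power(effect_size=None, nobs1=12, alpha=0.05,
                         power=0.8, ratio=1.0)
print(f"Minimum detectable effect size: d = {d:.2f}")  # roughly 1.2
```

An effect size of d ≈ 1.2 is huge by the usual conventions; anything more modest would most likely stay below the detection threshold in a sample this small. Which is all the skepticism about the “close link” above amounts to.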
So should you be more creative?

My summary of reading all these articles is that if you feel like your life lacks something, you should take stock of your strengths and weaknesses and note what contributes most to your well-being. If you think that you are missing creative outlets, by all means, try some of these advice pages and get going. But do it for yourself and not for others, because creativity is not remotely as welcome as they want you to believe.

On that note, here’s the most recent of my awesomely popular musical experiments:

Wednesday, February 26, 2014

What is analogue gravity and what is it good for?

Image source: Redbubble

Gravity is an exceedingly weak force compared to the other known forces. It dominates at long distances just because, in contrast to the strong and electroweak force, it cannot be neutralized. When not neutralized, however, the other forces easily outplay gravity. The electrostatic repulsion between two electrons for example is about 40 orders of magnitude larger than their gravitational attraction: Just removing some electrons from the atoms making up your hair is sufficient for the repulsion to overcome the gravitational pull of the whole planet Earth.
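If you want to check that number yourself, both forces fall off with the square of the distance, so their ratio is distance-independent and takes one line to compute (a minimal sketch using the CODATA values shipped with scipy):

```python
# Ratio of the electrostatic repulsion to the gravitational attraction
# between two electrons. Both forces scale as 1/r^2, so the ratio is
# independent of the distance between them.
from math import pi
from scipy.constants import G, e, m_e, epsilon_0

coulomb = e**2 / (4 * pi * epsilon_0)  # F_Coulomb times r^2
gravity = G * m_e**2                   # F_gravity times r^2
print(f"F_Coulomb / F_gravity = {coulomb / gravity:.2e}")  # ~4.2e42
```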

That gravity is so weak also means its effects are difficult to measure, and its quantum effects are so feeble that measuring them was believed impossible for many decades. That belief is a troublesome one for scientists because a theory that cannot be tested is not science – in the best case it’s mathematics, in the worst case philosophy. Research on how to experimentally test quantum gravity, by indirect signals not involving the direct production of quanta of the gravitational field, is a recent development. It is interesting to see this area mature, accompanied by the conference series “Experimental Search for Quantum Gravity”.

Alongside the search for observable consequences of quantum gravity – often referred to as the ‘phenomenology’ of quantum gravity – the field of analogue gravity has recently seen a large increase in activity. Analogue gravity deals with the theory and experiment of condensed matter systems that resemble gravitational systems, yet can be realized in the laboratory. These systems are “analogues” for gravity.

If you take away one thing from this post it should be that, despite the name, analogue gravity does not actually mimic Einstein’s General Relativity. What it does mimic is a curved background space-time on which fields can propagate. The background however does not itself obey the equations of General Relativity; it obeys the equation of whatever fluid or material you’ve used. The background is instead set up to be similar to a known solution of Einstein’s field equations (at least that is presently the status).

If the fields propagating in this background are classical fields it’s an analogue to a completely classical gravitational system. If the fields are quantum fields, it’s an analogue to what is known as “semi-classical gravity”, in which gravity remains unquantized. Recall that the Hawking effect falls into the territory of semi-classical gravity and not quantum gravity, and you can see why such analogues are valuable. From the perspective of quantum gravity phenomenology, the latter case of quantized fields is arguably more interesting. It requires that the analogue system can have quantum states propagating on it. It is mostly phonons in Bose-Einstein condensates and in certain materials that have been used in the experiments so far.

The backgrounds that are most interesting are those modelling black hole evaporation or the propagation of modes during inflation in the early universe. In both cases, the theory has left physicists with open questions, such as the relevance of very high (transplanckian) modes or the nature of quantum fluctuations in an expanding background. Analogue gravity models allow a different angle of attack on these problems. They are also a testing ground for how some proposed low-energy consequences of a fundamentally quantum space-time, like deviations from Lorentz-invariance and space-time defects, might come about and/or affect the quantum fields. It should be kept in mind though that global properties of space-time cannot strictly speaking ever be mimicked in the laboratory if space-time in these solutions is infinite. As we discussed recently for example, the event horizon of a black hole is a global property: it is defined as existing forever. This situation can only be approximately reproduced in the laboratory.
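To get a feeling for how the analogy works, consider the textbook example of a sonic horizon: where a fluid flows faster than its speed of sound, sound waves cannot travel upstream, much like light cannot escape a black hole horizon. A toy sketch (the flow profile is entirely made up for illustration):

```python
# Toy model of an acoustic ("sonic") horizon in a 1D fluid flow.
# Wherever |v(x)| > c_s, upstream-moving sound is dragged along by
# the flow -- the crossing point acts like a horizon for phonons.
import numpy as np

c_s = 1.0                    # speed of sound (arbitrary units)
x = np.linspace(0.0, 10.0, 1001)
v = 2.0 * np.exp(-x)         # invented flow speed, supersonic near x = 0

# First grid point (coming from the supersonic side) where v < c_s:
horizon = x[np.argmax(v < c_s)]
print(f"Sonic horizon at x = {horizon:.2f}")  # v = c_s at x = ln 2 = 0.69
```

The real experiments are of course vastly more subtle – the quantum version needs phonons on a Bose-Einstein condensate – but the kinematics of the horizon is captured by this simple picture.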

Another reason why analogue gravity, though it has been around for decades, is receiving much more attention now is that approaches to quantum gravity have diversified as string theory is slowly falling out of favor. Emergent and induced gravity models are often based on condensed-matter-like approaches in which space-time is some kind of condensate. The big challenge is to reproduce the required symmetries and dynamics. Studying what is possible with existing materials and fluids in analogue gravity experiments certainly serves as both inspiration and motivation for emergent gravity.

While I am not a fan of emergent gravity approaches, I find the developments in analogue gravity interesting from an entirely different perspective. Consider the possibility that mathematics is not in fact a language able to describe all of nature. What would we do if we reached its limits? We could cut out maths as the middle-man and directly study systems that resemble more complicated or less accessible systems. That’s exactly what analogue gravity is all about.

I am sure that this research area will continue to flourish.

(If you really want to know all the details and references, this Living Review is a good starting point.)

Monday, February 24, 2014

8 Years Backreaction!

Thanks to all my readers, the new ones and the regulars, the occasionals and the lurkers, and most of all our commenters: Without you this blog wouldn't be what it is. I have learned a lot from you, laughed about your witty remarks, and I appreciate your feedback. Thanks for being around and enriching my life by sharing your thoughts.

As you have noticed, I am no longer using the blog to share links. To that end you can follow me on twitter or facebook. I'm also on G+, but don't use it very often.

If you have a research result to share that you think may be interesting to readers of this blog, you can send me a note, email is hossi at nordita dot org. I don't always have time to reply, but I do read and consider all submissions.

Friday, February 21, 2014

The eternal tug of war between science journalists and scientists. A graphical story.

I am always disappointed by the media coverage of my research area. It forever seems to misrepresent this, forgets to mention that, and raises a wrong impression about something else. Ask the science journalist and they'll tell you they have to make concessions in accuracy to match the knowledge level of the average reader. The scientist will argue that if the accuracy is too low there's no knowledge to be transferred at all, and that a little knowledge is worse than no knowledge at all. Then the journalist will talk about the need to sell and point to capitalism, executed by their editor. In the end everybody is unhappy: The scientists because they're being misrepresented, the journalists because they feel misunderstood, and the editors because they are being blamed for everything.

We can summarize the problem in this graph:



The black curve is the readership as a function of accuracy. Total knowledge transfer is roughly the number of readers times the information conveyed. An article with very little information might have a large target group, but not much educational value. An article with very much information will be read by few people. The sweet spot, the maximum of the total knowledge transfer as a function of accuracy, lies somewhere in the middle. The problem is that scientists and journalists tend to disagree about where the sweet spot lies.
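For the quantitatively inclined, here is a minimal numerical version of this argument. The shape of the readership curve is of course my assumption, made purely for illustration:

```python
# Minimal "sweet spot" model: readership R falls with accuracy a,
# knowledge transfer T = R * a. The exponential form of R(a) is an
# assumption for illustration, not data.
import numpy as np

a = np.linspace(0.0, 1.0, 501)   # accuracy, normalized to [0, 1]
R = np.exp(-a / 0.25)            # assumed readership curve
T = R * a                        # total knowledge transfer

print(f"Sweet spot at accuracy ~ {a[np.argmax(T)]:.2f}")  # 0.25
```

For an exponential readership curve with decay scale s, the maximum of a·R(a) sits exactly at a = s. So the tug of war reduces to a disagreement about s: journalists believe it is small, scientists believe it is larger.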

Scientists are on the average more pessimistic about the total amount of information that can be conveyed to begin with because they do not only believe but know that you cannot really understand their research without getting into the details, yet the details require background knowledge to appreciate. I sometimes hear that scientists wish for more accuracy because they are afraid of the criticism of their colleagues, but I think this is nonsense. Their colleagues will assume that the journalist is responsible for lack of accuracy, not the scientist. No, I think they want more accuracy because they correctly know it is important and because if one is familiar with a topic one tends to lose perspective on how difficult it once was to understand. They want, in short, an article they themselves would find interesting to read.

So it seems this tug of war is unavoidable, but let us have a look at the underlying assumptions.

To begin with I've assumed that science writers and scientists alike want to maximize knowledge transfer and not simply readership, which would push the sweet spot towards the end of no information at all. That's a rosy world-view disregarding the power of clicks, but in my impression it's what most science journalists actually wish for.

One big assumption is that most readers have very little knowledge about the topic, which is why the readership curve peaks towards the low accuracy end. This is not the case for other topics. Think for example of the sports section. It usually just assumes that the readers know the basic rules and moves of the games, and journalists do not hesitate to comment extensively on these moves. For somebody like me, whose complete knowledge about basketball is that a ball has to go into a basket, the sports pages aren't only uninteresting but impenetrable. However, most people seem to bring more knowledge than that, and thus the journalists don't hesitate to assume it.

If we break down the readership by knowledge level, for scientific topics it will look somewhat like the figure below. The higher the knowledge, the more details the reader can digest, but the fewer readers there are.


Another assumption is that this background level is basically fixed and readers can't learn. This is my great frustration with science journalism: the readership is rarely if ever exposed to the real science, and thus the background knowledge never increases. Readers never hear the technical terms, never see the equations, and never get the figures explained. I think that popular science reporting just shouldn't aim at meeting people in their comfort zone, at the sweet spot, because the long-term impact is nil. But that again hits the wall of must-sell.

The assumption that I want to focus on here is that the accuracy of an article is a variable independent of the reader. This is mostly true for print media because the content is essentially static and not customizable. However, for online content it is possible to offer different levels of detail according to the reader's background. If I read popular science articles in fields I do not work in myself, I find it very annoying if they are so dumbed down that I can't make a match to the scientific literature, because technical terms and references are missing. It's not that I do not appreciate the explanation at a low technical level, because without it I wouldn't have been interested to begin with. But if I am interested in a topic, I'd like to have a guide to find out more.

So then let us look at the readership as a function of knowledge and accuracy. This makes a three-dimensional graph roughly like the one below.


If you have a fixed accuracy, the readership you get is the integral over the knowledge-axis in the direction of the white arrow. This gives you back the black curve in the first graph. However, if accuracy is adjustable to meet the knowledge level, readers can pick their sweet spot themselves, which is along the dotted line in the graph. If this match is made, then the readership no longer depends on one fixed accuracy, but just on the number of people at any given knowledge level. The total readership you get is the sum over all of those.
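The same comparison in code, with an invented readership density. The shape of N(k, a) and the width of the readers' tolerance are assumptions, chosen only to make the point visible:

```python
# Fixed accuracy vs. accuracy matched to each reader's knowledge.
# The density N(k, a) -- few readers at high knowledge, and readers
# only tolerating accuracy near their own level -- is invented.
import numpy as np

k = np.linspace(0.0, 1.0, 201)   # reader knowledge level
a = np.linspace(0.0, 1.0, 201)   # article accuracy
K, A = np.meshgrid(k, a, indexing="ij")

N = np.exp(-K / 0.2) * np.exp(-((A - K) ** 2) / (2 * 0.1**2))

dk = k[1] - k[0]

# Fixed accuracy: integrate readers over k at each a, pick the best a.
fixed = (N.sum(axis=0) * dk).max()

# Matched accuracy: every reader sits on the diagonal a = k.
matched = np.trapz(np.exp(-k / 0.2), k)

print(f"best fixed accuracy reaches {fixed:.3f}")
print(f"matched accuracy reaches    {matched:.3f}")  # larger
```

In this toy model the matched readership comes out a factor of order two larger than the best fixed-accuracy readership; the exact factor depends entirely on the assumed tolerance, which brings us to the caveats below.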


How much larger this total readership is than the readership in the sweet spot of fixed accuracy depends on many variables. To begin with it depends on the readers' flexibility in accepting accuracy that is either too low or too high for them. It also depends on how much they like the customization, how well it works, and so on. But I'm a theoretician, so let me not try to be too realistic. Instead, I want to ask how that might be possible to do.

A continuously adjustable accuracy will most likely remain impossible, but a system with a few layers – call them beginner, advanced, pro – would already make a big difference. One simple way towards this would be to allow the frustrated scientist whose details got scrapped to add explanations and references in a way that readers can access them when they wish. This would also have the benefit of not putting more load on the journalist.
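In case that sounds abstract, here is a minimal sketch of what such layered content could look like as a data structure. Everything here – the field names, the layer labels, the rendering function – is hypothetical, just one way to organize it:

```python
# Hypothetical layered article: readers pick the accuracy level,
# and scientist-contributed notes are shown only on request.
article = {
    "title": "What is analogue gravity?",
    "layers": {
        "beginner": "Fluids in the lab can imitate black holes.",
        "advanced": "Sound in a supersonic flow behaves like light "
                    "near a horizon: an acoustic analogue of gravity.",
        "pro":      "Phonons on a flowing condensate propagate on an "
                    "effective curved metric; references on request.",
    },
    "annotations": ["arXiv reference", "derivation of acoustic metric"],
}

def render(article, level="beginner", show_annotations=False):
    """Return the article text at the requested accuracy level."""
    text = article["layers"][level]
    if show_annotations:
        text += "  [Notes: " + "; ".join(article["annotations"]) + "]"
    return text

print(render(article, level="pro", show_annotations=True))
```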

So I am cautiously hopeful: Maybe technology will one day end the eternal tug of war between scientist and science writers.