Thursday, May 31, 2012

Wow, I said, What's the Kuiper belt?

In the late summer of 1992, I was in the long middle years of my graduate studies at Berkeley [...] midway through a Ph.D. dissertation about the planet Jupiter and its volcanic moon Io. [...] One afternoon, as on many times previous, after spending too much time staring at data on my computer screen and reading technical papers [...], I opened the door of my little graduate student office on the roof of the astronomy building, stepped into the enclosed rooftop courtyard, and climbed the metal stairs that went to the very top of the roof to an open balcony. As I stared at the San Francisco Bay laid out in front of me, trying to pull my head back down to the earth by watching the boats blowing across the water, Jane Luu, a friend and researcher in the astronomy department who had an office across the rooftop courtyard, clunked up the metal stairs and looked out across the water in the same direction I was staring. Softly and conspiratorially she said, “Nobody knows it yet, but we just found the Kuiper belt.”

I could tell that she knew she was onto something big, could sense her excitement, and I was flattered that here she was telling me this astounding information that no one else knew.

“Wow,” I said. “What’s the Kuiper belt?”
From Mike Brown's How I Killed Pluto and Why It Had It Coming, page 5f.

Congratulations to David Jewitt and Jane Luu for the award of the Shaw Prize in Astronomy 2012, and to, again, David Jewitt and Jane Luu, and Mike Brown, for the award of the Kavli Prize in Astrophysics 2012, for bringing the Kuiper belt from hypothesis to reality!

Our Solar System is much larger today than it was 20 years ago, and I would be glad if these prizes help to make this known to a much wider public.

Wednesday, May 30, 2012

Black hole information paradox flow diagram

I'm preparing a talk about the black hole information loss paradox. As I was thinking about a good way to summarize the vast number of attempts that have been made to address the issue, I thought a diagram would be helpful, and here is what I came up with. It's essentially a summary of the list in this earlier post. The red bars indicate the challenges one has to face in each case.


Do you find this diagram helpful, or is it more confusing? Have I forgotten something important?

Update: I've fixed a bug, thanks to Foresme's comment.


Monday, May 28, 2012

Book Review: "Imagine" by Jonah Lehrer

Imagine: How Creativity Works
By Jonah Lehrer
Houghton Mifflin Harcourt (March 19, 2012)

Jonah Lehrer's book promises to address the question of how creativity works. Lehrer begins with a discussion of some insights from the field of neurobiology, brain scans and so on, and moves on to the social sciences and some more or less recent studies about the productivity of groups and cities. He has visited several places where creative work is done, Pixar and the New Orleans Center for Creative Arts for example, and has spoken to some researchers about their insights.

That all sounds very promising. But the one word that you should pay attention to in the previous paragraph is the word "some."

Lehrer seems to have collected some studies that crossed his desk and that roughly fit the theme of his book. He quotes study results but he doesn't actually tell you much about the details of these studies, or the assumptions that went into them. For a popular science book one can forgive that - brevity arguably has its merits. Worse, Lehrer does not discuss the status of these studies, does not mention whether there have been conflicting results, or how the results are judged by the community beyond the people who did the studies to begin with.

That lack of discussion about the relevance of the research he refers to unfortunately has me questioning the accuracy of pretty much everything he says. 

The theme of the book is "creativity," but Lehrer doesn't actually explain what he means by that to begin with, so everything remains very vague. He collects evidence that fits nicely with the story he wants to tell and with the advice he wants to give. Basically his message is that creativity has two ingredients, imagination and perspiration, and that, to make an idea work, you need both. To remain creative, you should on occasion change the field you work in so that you remain "an outsider," and you have to travel and talk to a lot of people.

Lehrer doesn't so much as touch on the question of whether there are individual differences in people's creativity.

His book centers very much on the USA. Of course he does mention some artists and researchers who are not American (eg he spends some time on Shakespeare), but most of the companies, labs and cities he writes about are. The exception is a few pages about the technology boom in Israel which, if you believe Lehrer, came about because it's a small country and most men below the age of 45 have to serve in the military a few weeks annually. (Which, incidentally, is also the case for Switzerland.) He doesn't so much as acknowledge that his collection of points in favor of his message (connectivity is the key to creativity) is a bit of an oversimplification, or that he has neglected to look outside his own cultural cocoon.

He also in some places promotes correlations that have been found in the study of creativity to causations. There is for example the study showing that high-impact papers are more likely to have multiple authors, which he takes as evidence for the benefit of connectivity for creativity. He mentions in passing that the mathematician Paul Erdos was on speed for most of his time as a researcher, yet gives no reference for that, and explains "A system has entropy when it's defined by the presence of disorder," which gives you a good impression of the level of explanation you get in this book.

In summary, it's a sloppy book. It's not a bad book in the sense that it's fluently written and entertaining, and I haven't noticed any typos. It also contains references to some studies I hadn't known about. Yet, the summaries of these studies are so useless I'll just have to go and look them up myself. The one thing that I really like about the book is the cover. If it came printed on a shirt, preferably sleeveless, I'd probably buy it. I'd give this book two out of five stars.

Friday, May 25, 2012

How does the world of science differ from the world of art?

"How does the world of science differ from the world of art?" was a question posed to Scott Snibbe in an interview published in the recent Nature issue. Scott Snibbe is a media designer who, among other things, was executive producer of Björk's "Biophilia" project. If you're nuts you can pay 32 bucks for the interview, which fills roughly one page, here. I'll quote Snibbe's answer for you:
"There is an irreproducible uniqueness to an artist's work that makes the field less stressful than science. In science, if you don't make a certain discovery, someone else will, so even people in the same lab are competing with one another. In art, innovation and risk-taking are lauded, but in science there is an aversion to risk because people need to get grant money from conservative review boards. I know scientists who could speak a single sentence that would completely ruin their careers. [...]"
And if you've collected enough sentences that could ruin other people's careers, you get tenure.

Wednesday, May 23, 2012

Testing variations of Planck's constant with the GPS

Sketch of GPS satellite orbits.
Not to scale. Image source.
In a recent PRL, James Kentosh and Makan Mohageg, of California State University at Northridge, report on a neat little analysis of GPS time correction data that they used to put bounds on a position dependence of Planck's constant:
The GPS satellites orbit the Earth in about 12 hours, at about 26,600 km from the Earth's center (roughly 20,200 km altitude), with very small eccentricity (ie the orbits are almost circular). Each satellite carries an atomic clock, which is needed for the position measurements that are made, essentially, by triangulation with several satellites.

The clocks in the satellites move relative to ground-based clocks at roughly 4 km/s in a weaker gravitational field, so their proper times differ. Just how much they differ can be computed from their position data, using general relativity. Since the exact synchronization between the different clocks is crucial for the precision of position measurements, the GPS clock data that the satellites transmit is followed by a correction that synchronizes the clocks with each other and with the ground base. The correction data is updated every 15 minutes and is publicly available. It is this clock correction data that was used for the study.
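To get a feeling for the size of the effect, here is a back-of-the-envelope sketch in Python. This is my own illustration with rounded orbital parameters, not the paper's analysis, and it neglects the ground clock's rotation velocity and the orbital eccentricity:

```python
import math

# Rounded values; treat these as assumptions for the estimate.
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8       # speed of light, m/s
R_earth = 6.371e6      # mean Earth radius, m
r_sat = 2.66e7         # GPS orbital radius (distance from Earth's center), m

v_sat = math.sqrt(GM / r_sat)  # circular orbital speed, ~3.9 km/s

# Fractional clock-rate difference of a satellite clock relative to ground:
# gravitational term (a clock higher in the potential runs faster)
# minus velocity term (a moving clock runs slower).
grav = GM * (1 / R_earth - 1 / r_sat) / c**2
vel = v_sat**2 / (2 * c**2)
rate = grav - vel

print(f"fractional rate offset: {rate:.2e}")                         # ~4.5e-10
print(f"accumulated per day: {rate * 86400 * 1e6:.1f} microseconds")  # ~38
```

The satellite clocks thus run fast by roughly 38 microseconds per day relative to the ground, which is the offset the standard corrections compensate for.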

Kentosh and Mohageg looked at whether the correction data has an altitude-dependent excess that is not explained by general relativistic corrections. For this, they selected seven satellites that had slightly more eccentric orbits and small standard deviations in the changes of the clock corrections. They used the clock corrections for a period of somewhat more than a year. They first extracted the residuals of the corrections, the part that is off the general relativistic predictions. The unfiltered result has several long-term oscillations, for example with annual and monthly periods, that are probably due to atmospheric effects whose exact origin is unknown. These oscillations however are not what they are looking for. In the end, they are left with average offsets for each of the satellites. According to general relativity, one expects these to be statistically distributed around zero. The offsets they found are consistent with zero (ie consistent with general relativity), though the maximum is a little off.

So far so good. What is a little murky is the relation to a position-dependence of Planck's constant, ℏ. If Planck's constant were position-dependent and changed with altitude, this would affect the proper time of the satellite clocks as follows. Assuming that the atomic transition energies remain constant, the number of oscillations per unit of time depends on ℏ. If ℏ were position-dependent, this would then add to the clock correction. The problem with this argument is that ℏ itself is dimensionful, so such a statement rests on the assumption that the units (for example for energy and time) are not themselves changing. (A point also made in this Physics Today article.)
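In formulas, my reading of the argument is this (schematic, not the paper's notation): the frequency of an atomic clock transition with fixed energy ΔE is

```latex
\nu = \frac{\Delta E}{2 \pi \hbar} ,
\qquad \text{so} \qquad
\frac{\delta \nu}{\nu} = - \frac{\delta \hbar}{\hbar}
\quad \text{at fixed } \Delta E .
```

An altitude dependence of ℏ would therefore show up as an extra, altitude-dependent fractional frequency shift in the clock corrections.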

Besides this, I can't make much sense of a position-dependent ℏ. I mean, depending on your choice of coordinate system, positions can change all the time or never. With some imagination, one could consider a dependence on a physical quantity, maybe the strength of the gravitational field. But then I'd expect there to be much stronger constraints from other astrophysical observations.

In any case, leaving aside that it's not so clear this study actually tests the constancy of ℏ, it's a great demonstration of what can be done with the publicly available data of technological gadgets we use every day.

Sunday, May 20, 2012

Are your search results Google’s opinion?

New information technology is a challenge for all of us, but particularly for lawyers, who have to sort out its integration into existing law. A question that has been on my mind for a while, and one that I think we will hear more about in the future, is what the legal status of search engines, and of their search results, actually is.

There has now been an interesting development in which Google asked a prominent law professor, Eugene Volokh, for an assessment of the legal classification of their service.

In a long report titled “First Amendment Protection for Search Engine Results,” which was summarized on Wired, Volokh argues that Google’s search engine is a media enterprise, and its search results are their opinion. They are thus covered by the US American First Amendment, which protects their “opinion” on what it might have been you were looking for, as a matter of “free speech.” (It should be noted that this report was funded by Google.)

It is hard for me to tell whether that is a good development or not. Let me explain why I am torn about this.

Search engine results, and Google in particular, have become an important, if not the most important, source of information for our societies. This information is the input to our decision making. If it is systematically skewed, democratic decision making can be altered, leading to results that are not beneficial to our well-being in the long run.

Of course this was the case with media before the internet too, and this tension has always existed. However, non-commercial public broadcasting, often governmentally supported, does exist in pretty much all developed nations (though more prominently in some countries than in others). Such non-commercial alternatives are offered in cases where it is doubtful that the free market alone will lead to an optimal result. When it comes to information in particular, the free market tends to optimize for popularity, because popularity correlates with profit - a goal which can differ from accuracy and usefulness.

There is also what I called the “key in the trunk” effect, the unfortunate situation in which the solution to a problem can only be assessed once the problem is solved: You need information to understand that you lack information. Information plays a crucial role in democracy.

Research in sociology and psychology has shown over and over again that people, when left to their own devices, will not seek out and think about the information they would need to make good decisions. We are, simply put, not always good at knowing what is good for us. Many problems caused by wrong decisions are irreversible – by the time the problem becomes obvious, it might be too late to change anything about it. This is often the case when it comes to information, and also in many other areas whose regulation is therefore not left to the free market alone. That’s why we have restrictions on the use of chemicals in food, and that’s why no developed nation leaves education entirely to the free market.

(Sometimes when I read articles in the US American press, I have the impression that especially the liberals like to think of the government as “they” who are regulating “us.” However, in any democratic nation, we do impose rules and regulations on ourselves. The government is not a distinct entity regulating us. It’s an organizational body we have put into power to help improve our living together.)

That it is sometimes difficult for the individual to accurately foresee consequences is also why we have laws protecting us from misinformation: Extreme misinformation makes democratic decision making prone to error. Laws preventing misinformation sometimes conflict with free speech.

Where the balance lies differs somewhat from one country to the next. In the USA, the First Amendment plays a very prominent role. In Germany, the first “Basic Right” is not the protection of free speech, but the protection of human dignity. Insults can bring you up to a year in prison. Either way, freedom of the press is a sacred right in all developed nations, and a very powerful argument in legal matters. (It is notoriously difficult to sue somebody for insult in any of its various manifestations. Aphrodite's middle finger, see image to the right, which was on the cover of a German print magazine in February 2010, was covered by press freedom.)

So how should we smartly deal with an information source as important as Google has become?

On the one hand, I think that governmental intervention should be kept to a minimum, because it is most often economically inefficient and bears the risk of skewing the optimization a free market can bring. If you don’t like Google’s search results, just use a different search engine and trust in the dance of supply and demand. George Orwell's dystopian vision told us aptly what can happen if a government abuses power and skews information access in its favor, putting the keys into the trunk and slamming it shut.

On the other hand, Google is already a tremendously influential player in the market of search engines, and many other search engines are very similar anyway. Add to this that it’s not clear the preferences which Google is catering to are actually the ones that are beneficial in the long run.

This point has been made, for example, by Eli Pariser in his TED talk on “Filter Bubbles,” which we discussed here. Confirmation bias is one of the best documented cognitive biases, and our ability to filter information to create a comfort zone can lead to a polarization of opinions. Sunstein elaborated on the problem of polarization and its detrimental effect on intelligent decision making at quite some length in his book “Infotopia.”

It is sometimes said that the internet is “democratic” in that everybody can put their content online for everybody else to see. However, this sunny idea is clearly wrong. It doesn't matter at all what information you put online if nobody ever looks at it, because it has no links going to it and is badly indexed by search engines. It might be that, in principle, your greatly informative website appears at rank 313 on Google. In practice that means nobody will ever look at it. Information doesn't only need to exist, it also has to be cheap, in the sense that accessing it doesn't require a great cost of time or energy, otherwise it will remain unused.

Then you can go and say, but Google is a nice company and they do no evil and you can't buy a good Google ranking (though spending money on optimizing and advertising your site definitely helps). But that really isn't the point. The point is that it could be done. And if the possibility exists that some company's “opinion” can virtually make relevant information disappear from sight, we should think about a suitable mode of procedure to avoid that.

Or you could go and say: if some search engine starts censoring information, the dynamics of the free market will ensure that some other takes its place, or people will go on the street and throw stones. Maybe. But actually it's far from clear that this will happen, because you need to know that information is missing to begin with. And, seeing that Google's “opinion” is entirely automated, censorship might occur simply by mistake rather than by evil intent.

So, are your search results Google's opinion? I'd say they are a selection of other people's opinions, arranged according to some software engineers' opinion on what you might find useful.

That having been said, I don't think it is very helpful to extrapolate from old-fashioned print magazines to search engines and argue that they play a similar role, based on a 1980 case, which Volokh refers to, in which an author unsuccessfully tried to sue the NYT over the accuracy of their best-seller list. The diversity among search engines is dramatically lower than the diversity of opinions that could be found in print in the 80s. At the same time, the impact of online search on our information availability is much larger. Would you even know how to find a best-seller list without Google?

Now, I'm not a lawyer, and my opinion on this matter is as uninformed as it is irrelevant. However, I think that any such consideration should take into account the following questions:

First, what is the impact that search engine rankings can have on democratic decision making?

Second, what market share should make us get worried? Can we find some measure to decide if we are moving towards a worrisome dominance?

Third, is there any way to prevent this, should it be the case? For example, by offering non-commercial alternatives, or by monitoring through independent groups (as is the case, eg, with press freedom)?

Friday, May 18, 2012

Books in E major

Look! Listen! I've made a music video for your smooth start into the weekend:



I finally realized I won't be able to do any painting till the girls are old enough not to try to lick the brushes. So I was looking for a new hobby, and this is my first try. I made a series of mistakes that I'll try to learn from.

Stefan and I needed three attempts to throw the books down. On the first, a book hit the camera stand. On the second try, I lost my glasses and had to laugh. On the third, I realized belatedly that I had moved the camera and cut off my own head. I then decided it would do; after all, I don't want to win an Oscar. I also had a small disagreement with the video editing software and accidentally didn't save the file, so no more editing on that one. Stefan kindly said that the camera's brightness correction, which overreacted to passing clouds, "adds drama."

In case you wondered, we had cushions on the floor and the books fell softly. For all I can tell they were not seriously injured.

Anyway, the biggest shortcoming is that I can't actually sing. Worse, I noticed somewhat belatedly that I particularly can't sing anything below middle C. Somewhat surprisingly, because I always thought my voice was rather deep for a woman. After two weeks or so of practice, I finally managed to hit the C#5 (at 2:04 min). Luckily our downstairs neighbor is on vacation and the upstairs neighbor doesn't hear well.

Three hours after uploading the video to YouTube I got a message saying they suspect a copyright violation, and could I please provide documentation that I have permission to use the song. Quite a dramatic shift in procedure there.

Below are the lyrics. I promise the next one won't be quite as serious ;o)

[Intro]
What do you know?

[Verse 1]
You know it all
So many things I do not understand
You know it all
The wrong and the right
You like to talk
So many items I had marked all read
Right or wrong
Hard to decide

[Verse 2]
I see the world
Neatly filtered for my Daily Me
I have it all
Too much and too fast
No need to see
So many things I do not want to see
We have it all
No time to ask

[Chorus 1]
What do you know?
I would appreciate some input here
What do you know?
Your conclusions are not clear to me
What do you know?
I could recommend some books to you
What do you know
We got a long way to go

[Verse 3]
Is it too much for you to watch
Is it finally enough
Do you follow, do you share
Do you listen, do you care
We know it all
So many things that we simply take for such
Who do you trust
And do you dare

[Repeat Chorus 1] 

[Interlude]
What do you know? I don't know.
What do you know? I don't know.

[Chorus 2]
What do you know?
The facts are not consistent with your claim
What do you know?
The assumptions that were used are not the same
What do you know?
I could recommend some books to you
What do you know
We got a long way to go

Wednesday, May 16, 2012

Tales from the Future

Europe has a plan. It's called Europe 2020 and it's "the EU's growth strategy for the coming decade." Part of that plan is an initiative called "Innovation Union" which "aims to improve conditions and access to finance for research and innovation in Europe". So many nice words.

In what I found to be a great idea for communicating science, part of this initiative is a collection of short science fiction stories based on actual research projects. The short stories, called "Tales from the Future," are written by Robert Billing, and while they seemed to me a little contrived toward their aim, they are not bad at all. You can read them online here. Enjoy!

Monday, May 14, 2012

A note on the black hole entropy in LQG

If you know anything about Loop Quantum Gravity, you know that people working on it suffer from an inferiority complex because, when counting black hole microstates, they only get the Bekenstein-Hawking entropy of black holes up to a factor. This factor, known as the Barbero-Immirzi parameter, enters the theory through the quantization condition and then has to be fixed to match the correct semi-classical result.

Now there has been a recent paper on the arXiv by Eugenio Bianchi which addresses the issue, or at least that's what I thought when I first read the abstract. Bianchi derives the black hole entropy in a spin-foam formalism and finds the usual Bekenstein-Hawking entropy - without any additional factors.

I've been scratching my head over this paper for a while now. The purpose of this blogpost is twofold: First to draw your attention to what is potentially an important contribution in the field, check. And second, I want to offer you my interpretation of that finding, and hope some reader who knows more about LQG than I will correct me when I'm wrong.

The Bekenstein-Hawking entropy is not a quantum gravitational result. One finds that black holes have a temperature by considering a quantum field (usually a massless scalar field, but that doesn't matter) in the classical background geometry of a black hole. Given the mass of the black hole, one can identify it with the total energy and integrate dE = TdS to get the entropy. The validity of this argument breaks down at the Planck scale, but that's not the regime of interest here. One can also abuse the Unruh effect to argue that black holes have a temperature, with the same result.
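For completeness, here are the standard textbook steps of that integration, with all constants restored:

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B}, \qquad E = M c^2 ,
\qquad
S = \int \frac{dE}{T_H}
  = \frac{8 \pi G k_B}{\hbar c} \int_0^M M' \, dM'
  = \frac{4 \pi G k_B M^2}{\hbar c}
  = \frac{k_B c^3 A}{4 G \hbar} ,
\qquad A = \frac{16 \pi G^2 M^2}{c^4} .
```

Note that ℏ enters only through the temperature of the quantum field; no quantum gravitational input is needed.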

If one has some candidate theory for quantum gravity, one can ideally go and compute the microstates of a black hole. In LQG, areas and volumes are quantized in multiples of the Barbero-Immirzi parameter; the formula below shows where it enters. Even without knowing the details, this leads one to expect that the number of possible microstates depends on that parameter. Thus, the number of microstates will generically not reproduce the Bekenstein-Hawking entropy unless the parameter is chosen suitably. Now, what I would conclude at this point is that the Bekenstein-Hawking entropy is not a measure of the microstates of the black hole. Alas, most of my colleagues seem to believe it is, especially the string theorists, and therein lies the origin of the loop quantized inferiority complex.
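For concreteness, the standard kinematical area spectrum of LQG reads

```latex
A = 8 \pi \gamma \, \ell_P^2 \sum_i \sqrt{j_i \left( j_i + 1 \right)} ,
```

where γ is the Barbero-Immirzi parameter, ℓ_P the Planck length, and the j_i are the half-integer spins of the edges puncturing the surface. Counting the microstates compatible with a given total area therefore inherits a dependence on γ.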

So, with that avant-propos, why does Bianchi get a result different from the previous LQG results, a result that reproduces the Bekenstein-Hawking entropy?

Well, it looks to me like that's because he doesn't count the black hole microstates to begin with. He considers an observer in the black hole background with a two-level detector and finds the temperature, then S = E/T, and no Barbero-Immirzi parameter ever appears, because it's a kinematic effect that has nothing to do with the quantization of areas and volumes. I am greatly simplifying and omitting many details, but that is what it looks like to me.

It is good to see this can be done by constructing the worldline of the observer in the spin-network and expressing the acceleration and so on in the proper kinematic formalism; that is an interesting calculation in its own right. But does it solve the problem with the black hole entropy in LQG?

In my opinion, it doesn't. In fact, it only compounds the problem. Now not only is the microstate counting inconsistent with the Bekenstein-Hawking entropy unless a free parameter of the model is fixed appropriately, but the kinematic result is also inconsistent with the microstate counting within the same theory.

Truth be told, this paper has created more questions for me than it has answered. I am wondering now, for example, what the observer really is, fundamentally. It ought to be described by quantum fields. But these quantum fields come with a quantization prescription. And that quantization prescription, not having anything to do with gravity, doesn't have an additional parameter in it. That, after all, is why the Bekenstein-Hawking result is reproduced: because it doesn't have anything to do with the quantization of gravity. But the fields interact with the geometry, so how can they have a different quantization prescription?

If somebody can point me in the direction of a helpful reference or a bottle of ibuprofen, please dump it in the comments.

Thursday, May 10, 2012

Top Ten

This is a repost and update of a six-year-old post in which I listed what I think are the most interesting and pressing open problems in theoretical physics, or at least in the area I work in, quantum gravity. I thought it might be worthwhile to revisit. This list doesn't even pretend to be objective - it omits entire areas of theoretical physics - it is mainly a reflection of my personal interests; a summary of puzzles I find promising to spend brain time on.
  1. How can the apparent disagreement between general relativity and quantum field theory be resolved? Does it require quantizing gravity?
    (Still 1. Haven't changed my mind about that.) 
  2. Can we understand quantization?
    (Up from 9. The more I think about it, the more I believe our problem with quantizing gravity is in the quantum part, not in the gravity part.)
  3. Do black holes destroy information? What happens to the matter that collapses to a black hole?
    (I think we spent enough time on the black hole information loss problem. It would be fruitful to instead think about what happens to matter at Planckian densities. Down from 2.)
  4. Are the electroweak and strong interaction unified at high energies? If so, are they also unified with gravity?
    (That is, is there a theory of everything? Up from 8. I'm undecided whether or not unification is helpful to the problem of quantizing gravity.)
  5. Are the currently known particles of the standard model (SM) elementary? Are there more, so far unobserved, particles? Why are the parameters of the SM what they are, and are they related to each other in as yet unknown ways? Why are the gauge groups of the SM what they are? Is it even possible to uniquely answer these questions?
    (Formerly 8, minus the question for unification plus the question whether there's a unique answer.)
  6. Did the universe start with a big bang, a big bounce or something else entirely?
    (This is a reformulation of the earlier question number 4 which focused on singularities specifically. Down from 3.)
  7. Why do we experience 3+1 dimensions? Are there extra dimensions? Does the effective dimension of space-time decrease at short distances?
    (This is an extension of the earlier question 7, taking into account that dimensional reduction to 2 dimensions has received some attention recently.)
  8. Why is gravity so much weaker than the other interactions?
    (Up from 10.)
  9. Does dark energy exist? If so, what is it? Is the coincidence problem more than a coincidence?
    (Down from 4. I think that the dark energy puzzle is possibly a relevant hint for quantum gravity. But then, maybe not.)
  10. How do we correctly assign an entropy to gravitational degrees of freedom? Is this testable at all?
    (Newcomer.)
Dark matter has dropped off the list. I think we're well on the way to finding some direct evidence for dark matter. It will be difficult to pin down, but at this point it seems unlikely to me that it will be relevant for quantum gravity.

Tuesday, May 08, 2012

Imitation Nation

The Edge features a half-hour talk by Mark Pagel that I found very thought-provoking. Pagel is an evolutionary biologist, and his talk is provocatively titled “Infinite Stupidity.” The title primarily describes itself though and doesn’t have much to do with Pagel’s argument, which is a speculation about the origin and evolution of ideas.

Pagel starts with the analogy that ideas evolve similarly to genes, in that they reproduce and are selected for performance. Only good ideas continue to reproduce, which is however a tautology if you define a good idea by its ability to spread, and questionable otherwise. I am not particularly fond of the comparison to natural selection that he uses. The evolution of organisms and the evolution of ideas are both examples of adaptive systems, and the reference to natural selection imo sets an unnecessary anchor.

In any case, this reproduction of ideas works well among humans because we are very good at social learning, that is, learning by imitating each other. The ability of humans to copy behavior sets us apart from all other species on the planet. Sure, some other mammals are able to learn new tricks when offered rewards, but these abilities, and the animals’ understanding of purpose, are very limited. Humans have very little hardwired knowledge, which has the advantage that we adapt very well to new circumstances, but the disadvantage that it takes human infants a long time to learn enough to be able to survive independently.

Hang on, Lara is trying to eat my post-its.

During the evolution of mankind, our ability to communicate ideas has steadily improved. Beginning with the evolution of language, then the written word, print, telegraph, phone, and on to the internet, we have improved our connectivity. Since we are so good at copying others, this means, so argues Pagel, that we need fewer and fewer people to produce ideas. Innovation takes time and energy, and if we can shortcut this investment by relying on somebody else’s knowledge, we can avoid the cost.

Pagel is speaking here not primarily about innovation in the sense of technological development. He refers to things like, say, building a house. If you want to build a house, you don’t invent architecture from scratch. You ask somebody who knows, or you read a book, or, most likely, you hire somebody to do it for you. Either way, you’re copying other people’s innovations rather than innovating yourself, and in terms of time- and energy-investment that’s arguably the smart thing to do.

A consequence of our high skill in social learning, combined with increasing connectivity, is thus that we have become good copiers and less good inventors, which is unfortunate, since we need innovators to come up with smart solutions to our problems. Or so go Pagel’s concerns. He says:
“And so, we might see that there has been this tendency for our psychology and our humanity to be less and less innovative, at a time when, in fact, we may need to be more and more innovative, if we're going to be able to survive the vast numbers of people on this earth.”

I don’t know what he means by “tendency for our psychology.” It could mean two things. Either the (conjectured) decline in innovation is hardwired, ie it’s a genetic change. Or it’s a reversible adaptation to changing circumstances. Given the short time that has passed since the invention of print, it is most likely a social or cultural change he is referring to. But if that is so, then I have to conclude that Pagel’s perception of “a need to be more and more innovative” is apparently not reflected in our environment, at which point we’re left with opinions about societal investments in research and development, rather than facts. This is not to say that I disagree with Pagel, just that I don’t really know what insight to gain here.

So let me get to the next point he’s making, which I actually found more interesting: the question of where ideas originate. Pagel says that ideas are probably randomly produced in our brains, like genetic mutations are randomly produced. This hypothesis doesn’t seem to be based on any actual study for all I can tell; it’s mostly an argument from plausibility.

That ideas might be randomly produced in our brains doesn’t mean though that we try them all. No, luckily our brains are large enough to virtually explore the consequences of an idea before actually acting on it. And in that process, we discard most of the nonsensical random ideas, possibly already unconsciously. I am not sure how well this idea of Pagel’s fits with current research, but it makes a certain sense to me.

However, I think Pagel omitted to point out that a random generation of ideas cannot mean random from scratch, but random over a pool of already existing material. That is to say, you can only generate ideas from the information that your brain contains. Which brings me back to the need for education, and for investment in research and development.

Pagel's argument is interesting, but it lacks substance. Maybe it is worth checking out his book, Wired for Culture, which might offer more support for his idea.

Saturday, May 05, 2012

Book review: "Infotopia" by Cass Sunstein

Infotopia: How Many Minds Produce Knowledge
By Cass R. Sunstein
Oxford University Press, USA (2006)

It's taken me a while to get through Sunstein's book, though I am very interested in the topic. "Infotopia" addresses the question under which circumstances, and with which aggregation mechanisms, groups can make good decisions - and under which circumstances groups fail. With that, Sunstein's book offers the details that I found missing in Surowiecki's "Wisdom of Crowds."

Sunstein summarizes a lot of research that has been done on how groups deal with information, how they aggregate it, and how well or badly they make decisions. He classifies modern aggregation tools into markets and prediction markets, wikis, open source, and blogs. The order reflects Sunstein's declining judgement of their usefulness: he is clearly enthusiastic about prediction markets and critical of the blogosphere.

The book has grown out of his review article for the New York University Law Review, and that is, unfortunately, very noticeable. "Infotopia" contains a lot of information and many references, but it is not very engagingly written. It is essentially a long list of who did what study when and where. It is in several places repetitive, as if the author himself had forgotten what he had already covered. It is repetitive also in the choice of words; the word "blunder" seems to appear on every other page.

I learned a lot from this book, most notably what difficulties befall groups that want to come to good decisions.

The major problems are that group members might not disclose information that they have, and that information held by few or single group members has less influence on the decision than information shared by many, irrespective of its actual relevance. Sunstein discusses many studies which have shown that deliberation in groups, under very general circumstances, makes decisions worse and polarizes opinions. The reason is that people tend to focus on what they have in common and to reinforce their views rather than to diversify. So, after talking it out, people often edge towards more extreme views, and are more certain about them too, because they then know others share their opinion. An additional problem is that people might have a conflict of motivation, ie their personal motivation not to look stupid might not agree with the goal of coming to a good decision in the group.

Most of the examples that Sunstein draws upon are, six years later, already outdated, but the general lessons for good decision making are pretty much timeless. In the final chapter Sunstein makes suggestions for how to alleviate these problems in different situations: online communities, group meetings and so on. I'll try to learn from this book, and hope to realize some of the suggestions in the future.

In summary, this book is very useful, but it's not very inspiring and not very well written. I'd give it three out of five stars.

Thursday, May 03, 2012

Surprise me - But not too much

Flute recording. Source.
A decades-old paper made me philosophical.

In 1975, Voss and Clarke, two physicists from Berkeley, studied noise in electronic devices. For the fun of it, they also analyzed the spectra of different types of music. They found that the fluctuations in loudness and pitch decrease with the inverse of the frequency; they have a 1/f spectrum. This finding was basically independent of the type of music; Western, Oriental, Blues, Jazz, and Classical all showed the same pattern. Voss and Clarke’s musical power spectrum made it into Nature.

A Fourier transformation of a 1/f power spectrum leads to a power-law decay of the autocorrelation function of the fluctuations, meaning there are correlations over all times, rather than over a characteristic decay time as is most often the case. Physicists like power-law autocorrelated fluctuations because systems show them at a critical point, ie if something cool is going on, you get a power law. The opposite is not necessarily true though; there are more mundane ways to get a power law, but that hasn’t deterred enthusiasts. The 1/f spectrum is scale-invariant, so it has – theoretically – no preferred time scale or frequency, as one might have expected to be present in music.
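Schematically, the connection is the Wiener-Khinchin theorem: the power spectrum is the Fourier transform of the autocorrelation function, so a power law in one implies a power law in the other. (The exponent relation below holds for 0 < α < 1; the pure 1/f case, α = 1, is marginal, with logarithmic corrections.)

```latex
S(f) = \int_{-\infty}^{\infty} C(\tau) \, e^{-2 \pi i f \tau} \, d\tau ,
\qquad
S(f) \propto f^{-\alpha}
\;\Longleftrightarrow\;
C(\tau) \propto \tau^{\alpha - 1} .
```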

In the 70s and 80s everything power-law was chic, and not all of that power-law-finding was very meaningful. To some approximation, in some parameter range, everything is a power law. If you put your data on a log-log plot, you can make a linear fit, at least over some range. Yet, strictly speaking, nothing really is a power law. And of course music doesn’t really have a 1/f spectrum either. To begin with, because it doesn’t use the full frequency spectrum, most of which we couldn’t hear. Also, Mozart hasn’t been composing since the Big Bang.

Scale-invariance is a property also shared by fractals. When I first heard about Voss and Clarke’s study, I jokingly asked when we’d be listening to fractal music. Needless to say, I learned that this had been said and done when I was still wearing diapers. Google “fractal music” to see where this thought leads you.

I’m not sure what the power-law finding means for the origin of music, but intuitively what it means for what you hear is that music (at least the type we find appealing) lives on the edge between predictability and unpredictability. White noise has a constant spectral density and no autocorrelation. A random walk moving a melody along adjacent pitches has a strong correlation and a 1/f² spectrum. Somewhere in between are Bach and Adele.
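If you want to see these exponents with your own eyes, here is a minimal Python sketch (my own illustration, not Voss and Clarke’s method): it generates noise with a prescribed 1/f^α spectrum by shaping the Fourier amplitudes of white noise, then fits the exponent back from the periodogram.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16  # number of samples

def shaped_noise(alpha, n):
    """Noise with a 1/f^alpha power spectrum, made by rescaling
    the Fourier amplitudes of white Gaussian noise."""
    spec = np.fft.rfft(rng.normal(size=n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                # avoid dividing by zero at DC
    spec *= f ** (-alpha / 2)  # power ~ |amplitude|^2 ~ 1/f^alpha
    return np.fft.irfft(spec, n)

def fitted_exponent(x):
    """Estimate alpha from a log-log fit to the power spectrum."""
    f = np.fft.rfftfreq(len(x))[1:]
    p = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(p), 1)
    return -slope

for alpha, label in [(0.0, "white noise"), (1.0, "1/f noise"), (2.0, "random walk")]:
    print(f"{label}: fitted exponent ~ {fitted_exponent(shaped_noise(alpha, n)):.2f}")
```

White noise comes out near 0, the random walk near 2, and the musical middle ground near 1.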

When you turn on the radio, you want to be surprised – but not too much. Popular music today follows quite simple recipes. In most cases, you’ll be able to sing along when the chorus repeats. If you’ve heard a song a dozen times it gets dull though – it’s become too predictable. Symphonies are more complex, but they all have recurring motives and variations around that.

However, the musical edge must have a finite width. For some purposes, music can be more predictable than for others. What amount of predictability we find appealing doesn’t only depend on the occasion, it also differs from person to person. If you spend a lot of time analyzing pop songs, I suspect what’s in the charts today will sound very repetitive to you, though for the casual listener it arguably has an appeal.

It is tempting to extrapolate this to areas where autocorrelations are less easily measurable than pitch, for example to ideas in written and spoken form. A scientific paper or a talk needs to strike a balance between the known and the unknown. Repeat too much common knowledge, and you’re obvious. Jump too far, and you’re crazy. The scientific pop stars are the ones on the edge. That also means the pop stars are the ones not too far ahead of their time.

It seems to me today the width of the scientific edge is very thin, maybe too thin. Sometimes, the obvious must be stated just so it remains in awareness. And sometimes the crazy starts making sense if you’ve listened to it often enough.

There’s another lesson. While fashions seem to come back, the cycle is never perfectly periodic, but always comes with a new twist. Thus, when the colors of the 70s return to haunt us, maybe they’ll come with a metallic shine. And so my impression that we’re having the same discussions over and over again must be wrong: they can’t be periodic; I must be missing a change on longer time scales. History may be self-similar, but it’s not repeating. Though that’s one of my all-time favorite songs.