
Wednesday, January 30, 2008

Messenger, Mercury, and General Relativity

"I think Isaac Newton is doing most of the driving now."

Apollo 8 Lunar Module pilot Bill Anders, when asked who was driving the capsule on the return from the Moon to the Earth, 26 December 1968.


On January 14, 2008, the Messenger spacecraft made a spectacular flyby of Mercury, passing about 200 kilometres (124 miles) above the night-side surface of the planet. While the probe has been transmitting an amazing number of exciting photos from this encounter, I would like to focus here on something more ethereal, the influence of General Relativity on Messenger's orbit.

As reported on the Planetary Society Weblog, the Mercury flyby was a case of "spectacular targeting": Messenger missed the previously planned aimpoint at Mercury by only 1.43 kilometres in altitude, and that after a flight of nearly 100 million km without firing its engines. In fact, Messenger needs some trajectory fine-tuning from time to time, and the last correction before the flyby, trajectory correction manoeuvre 19 (TCM-19), had occurred 26 days before, on December 19, 2007.

Mention a space probe flying to Mercury to a bunch of physics aficionados, and General Relativity inevitably comes up. After all, the explanation of the extra shift of the perihelion of Mercury, a tiny 43 arc seconds per century not accounted for by Newtonian gravitation, was the first big success of General Relativity. So it seems natural to ask: if Messenger's trajectory can be determined to such high precision, what is the role of General Relativity in this?

While trying to figure out how the engineers at NASA actually handle gravitation in the trajectory calculations, I realised that a simple back-of-the-envelope calculation already yields a good estimate of the influence of General Relativity on the space probe. Just applying the formula for the relativistic perihelion shift shows that relativistic effects add up to a few kilometres for the trajectory between the TCM-19 correction and the flyby.

Comments by readers who know more about this stuff are welcome!

Actually, Messenger was, in the months before the flyby, on a quite eccentric elliptical orbit between the orbits of Venus and Mercury. Here is part of an illustration of the orbit from the Messenger web site:

Credits: NASA/JHU Applied Physics Laboratory/Carnegie Institution of Washington


The part of the Messenger orbit before the flyby (marked by the arrow on the right-hand side) is shown in pale red - it's a nice elliptical orbit. Hence, it seems reasonable to apply the relativistic perihelion formula to both the Messenger and the Mercury orbits, and to see what comes out.

The angular shift of the perihelion per revolution stemming from relativistic corrections to Newtonian gravitation is given by

δφ = 6πGM / (c² a(1−e²))

Here, a is the so-called semi-major axis of the orbit (that's half the longer diameter of the ellipse), and e the eccentricity - for a circle, e = 0, and the larger e, the more elongated the ellipse. Sometimes, the quantity a(1−e²) is called the semi-latus rectum, L (sorry, the geometry of conic sections is pretty old, hence all the Latin and Greek). Since the perihelion distance, p, is related to the semi-major axis by p = a(1−e), and hence a(1−e²) = p(1+e), we can also write

δφ = 6πGM / (c² p(1+e))
The fraction GM/c² is half the so-called Schwarzschild radius - for the Sun, GM/c² ∼ 1.5 km. Since this is very small compared to Mercury's perihelion distance of 46 million km from the solar centre, the perihelion shift per revolution is a tiny angle.

However, what we actually need to know when we want to navigate a space probe very close to Mercury is not this angle, but the actual motion of the perihelion, as measured in kilometres. This motion quantifies the offset Δ along the orbit due to relativistic effects. It is easy to calculate: since the angle is given in radians, we just have to multiply it by the perihelion distance p, and obtain

Δ = p·δφ = 6πGM / (c²(1+e))
Curiously, this offset depends only on the eccentricity, and it is there even if the orbit is a perfect circle!

Now, we can apply this formula to the orbits of Mercury and Messenger and plug in some numbers:

The eccentricity of Mercury is e = 0.20, which yields Δ = 25 km. This is the relativistic offset of the orbit that accumulates over the 88 days of one revolution. Now, however, we are not dealing with an entire revolution. The crucial point is that for shorter periods, we can simply take the corresponding fraction of this shift. Thus, for the 26 days between trajectory correction manoeuvre 19 (TCM-19) on December 19 and the Messenger flyby on January 14, the relativistic offset amounts to about 7 km.

The orbital elements of Messenger can be obtained from the JPL Horizons web site. This is a very cool interactive site where you can get all kinds of coordinates for nearly everything in the Solar System! The elliptical orbit Messenger was on around the Sun on, say, January 1, 2008, had an eccentricity e = 0.38 and a period of about 140 days. The relativistic offset Δ of this orbit amounts to 22 km, and scales down, for 26 days, to about 4 km.
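Since this is just arithmetic, here is a minimal Python sketch of the estimate (my own illustration, not anything from the mission team; it uses the rounded orbital elements quoted above, so the output lands in the same ballpark as the numbers in the text, within a couple of kilometres depending on the rounding of GM/c² and the eccentricities):

```python
# Back-of-the-envelope estimate of the relativistic offset along the orbit,
# using Delta = 6*pi*GM/(c^2*(1+e)) derived above, scaled linearly to 26 days.
import math

GM_SUN = 1.327e11   # heliocentric gravitational parameter, km^3/s^2
C = 299792.458      # speed of light, km/s

def offset_per_revolution(e):
    """Relativistic offset Delta (in km) accumulated over one revolution."""
    return 6 * math.pi * GM_SUN / (C**2 * (1 + e))

for name, e, period_days in [("Mercury", 0.206, 88.0),
                             ("Messenger", 0.38, 140.0)]:
    delta = offset_per_revolution(e)
    print(f"{name}: {delta:.0f} km per revolution, "
          f"{delta * 26 / period_days:.0f} km over 26 days")
```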

Thus, taken all together, the relativistic effects on the Messenger trajectory over the period of 26 days since the last correction manoeuvre can add up to an uncertainty of a few kilometres at the moment of the Mercury flyby.

Since the height of the flyby was about 200 km above ground, this is probably not critical - Newton is a safe enough pilot. However, the trajectory was calculated to much higher precision, and since an "experimental" error of 1.43 kilometres between the actual and the calculated trajectory could be measured, it's clear that all the calculations are done with a strong assisting hand from Einstein!






Calculating orbits around a central mass in General Relativity amounts to calculating geodesics in the Schwarzschild metric. As it turns out, for large enough distances from the centre, the motion of a mass corresponds very well to the motion according to Newtonian gravity, but in addition to the 1/r potential of Newton, there is an extra term, proportional to 1/r³. This extra term acts as a perturbation on the Newtonian elliptic orbits, and yields a shift of the perihelion of the ellipse.
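For the record (this is standard textbook material, not specific to the Messenger analysis): for a test particle with specific angular momentum ℓ, the effective radial potential governing Schwarzschild geodesics reads

V(r) = −GM/r + ℓ²/(2r²) − GMℓ²/(c²r³)

The first two terms are exactly the Newtonian potential plus the usual centrifugal barrier; the last term, falling off as 1/r³, is the relativistic perturbation that makes bound orbits precess.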

Actually, this 1/r³ term is taken into account in the calculation of the Messenger trajectory (thanks to Amara's inquiry).

In general, these days General Relativity is a standard ingredient of high-precision trajectory calculations for planetary missions.

I've found a lot of theoretical background and useful information on the role of General Relativity in the calculation of trajectories in the "Monograph 2: Formulation for Observed and Computed Values of Deep Space Network Data Types for Navigation" by Theodore D. Moyer at the web site of the JPL. It shows, for example, this impressive set of relativistic equations of motion (page 4-19) -
The equation, describing the acceleration of mass i due to gravity, makes use of the so-called PPN formalism of Will and Nordtvedt (1972). The first term is the Newtonian term, and the first term in the curly brackets is the first correction from General Relativity, which yields the perihelion shift. All other terms are much smaller (some of them depend on the velocity - that's what is called gravitomagnetism, and yields the Thirring-Lense effect...).
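To give at least the flavour of that leading correction (my own simplification, not copied from the monograph): for a single test body with position r and velocity v orbiting one central mass M, and with the General Relativity values β = γ = 1, this first relativistic term reduces to the standard point-mass correction

Δa = (GM/(c²r³)) · [ (4GM/r − v²) r + 4 (r·v) v ]

which is precisely the term responsible for the perihelion advance estimated above.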

However, for actual calculations, not all these terms are really taken into account - it all depends on the precision that is required (and that can actually be measured by spacecraft telemetry).



Tuesday, January 29, 2008

Christmas World

Christmas has been over for more than a month - so what is this huge Christmas tree doing next to the Messeturm in Frankfurt?

Well, after Christmas is before Christmas, and in the last week of January it is time for Christmas World, "presenting the latest party and festive decorating trends and innovations".

After I moved to Frankfurt, I couldn't make sense of the Christmas lighting that was switched on again at the fairground at the end of January, until I figured out by chance that there is this special event. It seems that if you're in the Christmas business, you can relax for a week or so after the holidays - but then you have to prepare for the new season. Thinking about it, it's only natural that we can get chocolate Santas and gingerbread and all the other Christmas goodies in the supermarkets starting around the end of August, at the latest.

This morning I could see how the tree was dismantled: lots of plastic, mounted on a cone-shaped scaffolding. So it's as artificial as this whole Christmas industry.

Monday, January 28, 2008

PS on Cast Away

It turned out part of my previous post Cast Away was a pretty good 50-page extrapolation of the book I am currently reading, "The Ingenuity Gap" by Thomas Homer-Dixon. So, after having read those 50 pages, here is how the paragraph about the car's increasing complexity reads if one knows how to write:

"In fact, even most professional mechanics are little more than diagnosticians now. The modern car is trundled into the shop and hooked to computerized diagnostic systems, and faulty engine modules are replaced in their entirety. If the faulty modules are repaired at all - rather than simply junked - they are rarely fixed in the shop but instead shipped to specialized facilities with the specific expertise needed. As the complexity and sophistication of our cars have increased, we can no longer repair them in our backyards or in our own garage grease pits. Instead, we increasingly rely on distant expertise and knowledge. In short, the rising complexity of our machines has reduced our independence and self-sufficiency. It's ironic that as technology does its job better and empowers us in various ways, it leaves us with less control, power, and freedom in other ways."

In this spirit I have to trundle my not-so-modern car into a shop now. Besides some minor bugs it has developed over the last year - like something being fishy with the left front wheel and the adjustment mechanism of the driver's seat, or the red light in the dash panel that has been asking for attention since last winter or so - I noticed on Friday that the horn doesn't work. This, my friends, is simply intolerable. Wish me luck that I have some money left in my bank account after this for a vacation.

Saturday, January 26, 2008

Cast Away

My first car was a red Ford Fiesta '89. It was squeezed to death on a rainy day in April '97 between two Mercedes Benz. Luckily the grey suit emerging out of the Benz behind me took one look at me and my Fiesta, all three of us equally sad in the pouring rain, and assured me his insurance would take care of everything. His Benz had hardly suffered a scratch. Then he rushed away to what I am sure was an incredibly important meeting. His insurance paid more than my Fiesta was actually worth.

I loved my first car. It had only one drawback: the trunk would lock on being closed, so you'd better not put the keys in there while unloading. I've always wondered why somebody would construct a car this way; it seems to me like a disaster waiting to happen. An irreversible process, resulting in a potentially expensive, and certainly annoying, need to unlock a lock to access the key to that very lock.

This Fiesta feature has come back into my mind repeatedly. After my move to the USA I called a customer service hotline (see also) to figure out how I was supposed to configure my dial-up connection. The customer service person told me to download the required software from the internet. "Great," I told the woman, "but I want to set up the dial-up service for the very purpose of connecting to the internet." - "Yes," she explained, "You go to the website ..., click on ..., and follow the instructions..." - "Listen," I interrupted her, "Are you telling me I can't use your service to connect to the internet without downloading software that would already require a connection?" After some back and forth we were both sufficiently pissed off not to continue this inspiring dialogue long enough for me to find out whose ingenious idea this was. (I ended up connecting through my former university's dial-up service, which did not require additional software, paying more than 50 bucks for a long-distance call. The dial-up company went bankrupt a year later.)

Other examples of this specific smartness of storing the key in the locked trunk: system administrators (who shall remain unnamed) who provide the FAQ on how to connect to the internal network from the outside on the internal network, or files that contain information on what to do in case you can't open the file. Also nice is the typo in Joe Polchinski's big book on String Theory - the typo in the URL for the website with the errata - scissors sealed in wrappings that one can't open without scissors, and manuals that can't be accessed without first reading the manual:



I was recently again reminded of the closed-trunk problem in a larger context through the Nature review on the previously mentioned 'Open Laboratory 2007':

"The Open Laboratory 2007: the Best Science Writing on Blogs (Lulu.com, 2008) takes the curious approach of using dead tree format to highlight the diversity of scientific ideas, opinions and voices flowing across the Internet.


Being a self-confessed treehugger I am all for reducing unnecessary paper waste, and I am a dedicated recycler. But the trend to online storage of information of almost all kinds is not without drawbacks. Consider the incredible amount of data that is today stored on computer hard disks, CDs, magnetic tapes, etc. - data that is stored there, and only there. In contrast to a book that one can just open, read, and extract the information it contains, none of the bits and bytes in the virtual world are accessible to human senses without further help. And where do we find information on troubleshooting today? Well, on the internet. And if that doesn't help, call customer service - you find the phone number on the internet. Hey, Wikipedia knows everything, so why pay several thousand bucks for an Encyclopedia Britannica? And even if you do, you can have it on CD - isn't that more timely? Well, CDs have a lifetime of 30-100 years, besides the fact that you can't read them without an appropriate device. And if you think burning CDs is a good idea to back up your data, think twice.

Most of the products of our daily lives are incredibly complex. Take a simple light bulb: how many single processes, how many people, how much technological knowledge were necessary to produce it? And how much of that knowledge do you have - without Googling for it? The screen you are currently staring at and the hard disk in your computer are several orders of magnitude more complex.

More complexity isn't necessarily good, though many people seem to consider it an indicator of 'progress'. I don't know much about cars, but if my ex-boyfriend's VW broke down, one could open the hood and check the vitals: V-belt, battery, spark plugs. Almost all of the bugs my parents' new cars have are malfunctioning automatic 'helpers', problems that sit on microchips and that the engineer has to identify via a complicated diagnostic system - problems that, despite their ridiculousness, can render the car completely useless (try opening the stupid door if the battery in the remote is dead, try driving in the rain if the wipers don't work, try starting the car if the security system won't let you). If one opens the hood, all one sees is a plastic cover with a huge arrow pointing to the dipstick for checking the oil level. To me this isn't progress. This is regress. It is an increase in complexity that lowers the resilience of the system; as a result, it can break down suddenly, abruptly, and without you being able to fix the problem on your own.

Realizing the complexity on which our daily lives rest is a recurring theme in Robinson-themed movies like 'Cast Away'. The hero is suddenly faced with the task of making everything from scratch, thereby usually telling a story that praises human ingenuity.

Most of us are realistic enough to understand that there are limits to how much of our modern society's knowledge we can possibly reproduce on our own, in a single lifetime. Extrapolate the current trend to rely on the eternal availability of information on the internet, and suppose that 50 years from now some unfortunate accident occurs - a natural catastrophe, a world war, a disease - that leads to lacking maintenance and a breakdown of vital resource flows. How much of the previous decades' technological and scientific insights would all of a sudden become unavailable? And how much of that knowledge would be needed to regain access to it?

"Dead tree format" might seem old-fashioned, and the term appeals to the ecological consciousness that is currently en vogue. But do you really think it is a good idea to store information about our research, especially on information networks themselves, entirely on this very network? It's like putting the key in the trunk. A disaster waiting to happen. It is plainly against any responsibility we have for coming generations to let our society run into a situation where a small regress would imply a following even larger regress, because information on how to deal with it is not accessible.

After my Ford Fiesta died, I bought a new car. It was a white Ford Fiesta '91 with the same trunk lock. Against all odds, my distracted self never forgot the keys in the trunk. I guess that means the disaster is still out there, waiting to happen.

Friday, January 25, 2008

Sexed Up

I'm currently reading Dawkins' The God Delusion. I haven't gotten far (that might be related to the fact that I am reading at least 10 other books in parallel), but according to his definition I am classified as a "sexed-up atheist", just so you know where you're at. In this spirit, here is a slightly different take on geometry that is truly bizarre. YouTube informs me 'This video may contain content that is inappropriate for some users' - not sure exactly why, since it seems to be entirely computer generated, but anyway.


For more weirdness of the sort one can create with 3D images, see Naked Geometry.

Have a nice weekend.

Thursday, January 24, 2008

PS on GZK Cutoff

A somewhat belated answer to Eric Gisse's question to our earlier post Skymap of AGNs with Cosmic Ray Events about the GZK cutoff:
    "why is the limit imposed by pion pair production as opposed to electron pair production? the energies required for electron pair production is /significantly/ less than pion pair production."

I recently came across the figure below, which depicts the energy loss of a proton times 1/E (i.e., the relative energy loss per year), due to electron-positron pair production and, at higher energies, pion production:

[Fig 1 (a) from V.Berezinsky, A.Z.Gazizov, S.I.Grigorieva "On astrophysical solution to ultra high energy cosmic rays", Phys.Rev. D74 (2006) 043005, arXiv:hep-ph/0204357]


As one sees, electron-positron pair production leads to an energy loss and is the dominant contribution at energies below ≈ 10¹⁹ eV, but at higher energies pion production takes over, increasing the energy loss by a factor of ~100 - this is the effect responsible for the cut-off in the spectrum.

Essentially the same physics is depicted differently in the figure below, from slide 2 of Roberto Aloisio's talk. I didn't hear the talk, but a good guess is that the y axis shows the attenuation length of protons in the CMB background. Again one sees electron-positron pair production having an effect already at lower energies, but it does not produce a sharp cut-off the way pion production does: if the energy loss were caused by electron-positron production alone, a proton could travel as far as a third of the size of the observable universe.
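As a rough cross-check on where pion production sets in (standard threshold kinematics, not taken from the papers above): the reaction p + γ → N + π becomes possible once the centre-of-mass energy reaches m_p + m_π. For a head-on collision of a proton of energy E with a CMB photon of energy ε, the invariant s ≈ m_p² + 4Eε, so the threshold is

E ≈ m_π(2m_p + m_π)/(4ε) ≈ 0.27 GeV² / (4 × 6×10⁻⁴ eV) ≈ 10²⁰ eV,

which is just the region where the pion-production curves in these figures take over.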


For an excellent introduction to the physics of ultra-high-energy air showers, I recommend Angela Olinto's recent PI colloquium, PIRSA: 08010000.

Wednesday, January 23, 2008

Light Deflection at the Sun



Albert Einstein shot to fame in November 1919 when an announcement was made by British astronomers at a scientific meeting in London: Measurements of starlight during a solar eclipse earlier that year had vindicated the gravitational deflection of light at the Sun, as predicted by General Relativity. Moreover, it was argued, the data were not compatible with Newton's time-honoured theory of gravitation.


In Einstein's theory of General Relativity, just four years old in November 1919, space-time in the vicinity of a large mass such as the Sun is curved, and light rays passing nearby the Sun get bent. As a consequence, the actual location of a star observed in the vicinity of the Sun ("wahre Position", its true position) is not where the star appears to be ("scheinbare Position", its apparent position), but a bit closer to the Sun. The deflection angle δ for light passing just at the rim of the Sun can be calculated from the speed of light, c, the gravitational constant, G, and the mass M and radius R of the Sun. It is

δ = (1+γ)/2 · 4GM/(c²R) ≈ (1+γ)/2 · 1.75 arc seconds

- we will come back to the extra factor (1+γ)/2 in a second. If we keep in mind that the Sun has an apparent diameter of half a degree, or 1800 arc seconds, the deflection of light from a star close to the rim of the Sun is just about 1/1000 of the Sun's apparent diameter. Moreover, for light passing at a larger distance d from the Sun, the angle drops as 1/d - so this is a tiny effect!
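As a quick plausibility check with textbook values: GM/c² ≈ 1.48 km for the Sun and R ≈ 696,000 km for its radius give δ = 4GM/(c²R) ≈ 8.5 × 10⁻⁶ radians for γ = 1; multiplying by 206,265 arc seconds per radian yields the famous 1.75 arc seconds.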

Actually, a similar deflection of light had been calculated before Einstein. The British physicist Henry Cavendish and, most notably, the German astronomer Johann Georg von Soldner had used Newtonian mechanics to calculate the hyperbolic trajectory of a particle passing a large mass at the speed of light. This calculation yields a deflection angle just half as big as the value obtained from General Relativity.

This difference between the two calculations is nowadays encoded in a parameter called γ, where γ = 1, or (1+γ)/2 = 1, corresponds to the bending of light as predicted by General Relativity, while γ = 0, or (1+γ)/2 = 1/2, is the value for the Newtonian calculation. Actually, γ is just one out of a set of several parameters which are used in a framework called parametrised post-Newtonian formalism. This formalism had been developed to describe, in a unified framework, the observable consequences of different possible theories of gravitation. For example, γ describes how much space curvature is produced by a unit rest mass. Newtonian gravity comes with γ = 0 (no curvature), and General Relativity has γ = 1. Other conceivable theories of gravitation might come with still other values of γ, and the measurement of γ is a way to distinguish between them.

Negative of the solar eclipse of May 29, 1919, photographed by Andrew Crommelin in Sobral, Brazil. Stars are marked by horizontal lines. (via Wikipedia, from F. W. Dyson, A. S. Eddington, and C. Davidson, Phil. Trans. Royal Soc. London. Series A 220 (1920) 291-333, page 332)
While Soldner had apologised to the readers of his 1801 paper for calculating an effect that he judged unobservable, astronomers a hundred years later had more confidence in their capabilities. Since it is not possible to observe stars in the vicinity of the Sun under normal circumstances, they had to seize the rare opportunity of a total eclipse of the Sun, when stars nearby are visible. By comparing the apparent positions of these stars to the true positions (measured at night, at a different time of the year, when the effect of gravitational bending by the Sun can be neglected), the deflection of light by the gravitational field of the Sun could be established.

Motivated by these considerations, the British astronomer and relativity aficionado Arthur Stanley Eddington organised two expeditions to observe a solar eclipse on May 29, 1919, with a zone of totality roughly along the equator. He travelled to Principe, an island in the Atlantic ocean, while a second team observed the event from Sobral in Brazil.

The results of these observations were made public at the meeting in London in November 1919 that made Einstein a scientific star: the measured deflection of light fit the Einstein value, while it was much less compatible with the Newtonian bending.

Of course, not only Einstein denialists point out the huge error bars of the eclipse measurements. Eddington had only a few stars on his photographic plates, due to bad weather, and the main telescope of the Sobral team had suffered misalignment, caused by heating in the plain daylight before the eclipse. As a result, data taken with this instrument were discarded - which is a tricky point, since they seem to have favoured the Newtonian value for the light deflection.

However, the 1919 eclipse data were just the beginning of a long series of ever-improving measurements of the gravitational deflection of light. This finally brings us to our plottl.

Its upper part shows, as a function of time along the horizontal axis, the results for measurements of the light-bending parameter (1+γ)/2, as established by different methods.

Improvement of the measurement of the gravitational bending of light and radio waves by the Sun (upper part of the figure) over the last 80 years. The horizontal black line at (1+γ)/2 = 1 corresponds to the prediction of General Relativity. [Source: The Confrontation between General Relativity and Experiment by Clifford Will, Living Rev. Relativity 9 (2006), cited on <2008/01/13> (http://www.livingreviews.org/lrr-2006-3), Figure 5.]

Marked in red are data from measurements with visible light, all but one taken at solar eclipses. The 1919 eclipse is the left-most data point. As we know already, these eclipse data come with large uncertainties and huge error bars - some do not even fit on the plot (the red arrows at the upper edge) - and there was only modest progress up to the 1970s. The last eclipse expedition with published data about light deflection went to Mauritania, resulting in the paper Gravitational deflection of light: Solar eclipse of 30 June 1973. I. Description of procedures and final result, by Brune et al., Astron. J. 81 (1976) 452-454.

However, since the advent of satellites, it is not necessary anymore to wait for an eclipse to observe stars in the vicinity of the Sun. Thus, an analysis of the star catalogue established by the astrometry satellite Hipparcos could confirm the Einstein value for the bending of light, (1+γ)/2 = 1, to within 0.1 percent (Froeschlé, M., Mignard, F., and Arenou, F.: Determination of the PPN parameter γ with the Hipparcos data, Proceedings from the Hipparcos Venice ’97 Symposium (ESA, Noordwijk, Netherlands, 1997) - PDF)

And, of course, light is only part of the electromagnetic spectrum. For example, one can use radio telescopes to measure the deflection of radio signals from quasars, completely analogous to the measurement of starlight, but with the bonus that there is no need to wait for eclipses. Early attempts to do so from the 1960s and 1970s are marked by the blue dots - the results vindicate, and improve on, the optical observations (E. B. Fomalont, R. A. Sramek: Measurements of the Solar Gravitational Deflection of Radio Waves in Agreement with General Relativity, Phys. Rev. Lett. 36 (1976) 1475-1478).

And finally, the interferometric combination of radio telescopes from all over the globe has further improved the quasar data. These so-called VLBI (Very Long Baseline Interferometry) light-deflection measurements have reached an accuracy of 0.02 percent, and they fit perfectly well with the predictions of General Relativity (Shapiro, S.S., Davis, J.L., Lebach, D.E., and Gregory, J.S.: Measurement of the solar gravitational deflection of radio waves using geodetic very-long-baseline interferometry data, 1979-1999, Phys. Rev. Lett. 92 (2004) 121101).

Thus, Eddington had got it right, and as it looks from today's data, General Relativity rules!



PS: The lower part of the plot shows the so far best determination of γ. It uses a different but related effect, the so-called Shapiro time delay, which is based on the apparently reduced speed of light in the vicinity of large masses. This time delay can now be measured extremely precisely thanks to the telemetry data of spacecraft travelling around the Solar System - Viking, Voyager, Cassini. The Shapiro time-delay measurements using the Cassini spacecraft yielded agreement with General Relativity to 0.001 percent (B. Bertotti, L. Iess and P. Tortora: A test of general relativity using radio links with the Cassini spacecraft, Nature 425 (2003) 374-376).




Einstein Online has a great first introduction to the Gravitational deflection of light by Steven and Irwin Shapiro.

For the calculations of Cavendish and Soldner, see Clifford M. Will: Henry Cavendish, Johann von Soldner, and the deflection of light, American Journal of Physics 56 (1988) 413-415 (subscription required).

The 1919 eclipse expedition, its motivation, and the background of Einstein's prediction of the bending of light are described, e.g., by Peter Coles: Einstein, Eddington and the 1919 Eclipse, arXiv:astro-ph/0102462v1. For a recent discussion of the analysis of the photographic data, see Daniel Kennefick: Not Only Because of Theory: Dyson, Eddington and the Competing Myths of the 1919 Eclipse Expedition, arXiv:0709.0685v2 [physics.hist-ph]. The original paper of the 1919 eclipse expedition is F. W. Dyson, A. S. Eddington, and C. Davidson: A Determination of the Deflection of Light by the Sun's Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919, Philosophical Transactions of the Royal Society of London, Series A 220 (1920) 291-333.





This post is a latecomer to our A Plottl A Day series.

Tuesday, January 22, 2008

Still haven't closed my US Bank account...

... this is just to remind myself to do it. I spent at least half an hour yesterday with one of these stupid automatic menus ("Sorry, I could not hear you, let's try again" - "CUSTOMER SERVICE REPRESENTATIVE") before figuring out that it was a holiday in the USA. I think they should add an option "To decode German accent, press 5". Either way, for your further amusement, here is an anonymous comment from the thread On the Edge, referring to my remark
    Bee: "What is currently much more scary is the global economical instability. I am not much of an an economist, but even I sense there will be some major economical crisis rather soon, possibly even this year. If you need any indicators, take Bush talking about the 'fundamentals of the economy being strong'."

    anonymous: "Incidentally, I don't agree with the premise that the world or US economy is in dire straits, nor would most economists. In fact its the best it ever has been viewed on historical timescales, despite the fact that we are on the tail end of a business cycle"

If you're still around I would be interested to hear your opinion on this:
World Markets Plunge on Fears of U.S. Slowdown

FRANKFURT — Fears that the United States may be in a recession reverberated around the world on Monday, sending stock markets from Mumbai to Frankfurt into a tailspin and puncturing the hopes of many investors that Europe and Asia would be able to sidestep an American downturn.


Here is what the White House said
In reference to the global stock sell-off, Jeanie Mamo, a spokeswoman for the White House, said: “We don’t comment on daily market moves. We’re confident that the global economy will continue to grow and that the U.S. economy will return to stronger growth with the economic policies the president called for.”


Feel free to comment on daily market moves. I know nothing about the economy, except that so far I'm happy to have left my savings in euros, so don't expect any sensible replies from me.

Update: 3 hours later. After being transferred back and forth several times, I failed at closing the account. The options are either wiring the money to my Canadian account - but for that I would have to visit a local bank branch in California - or writing a check to myself. Which I can't do, because I have no checks left. They have renamed 'customer service' into 'customer satisfaction'.

Update: 3 weeks later. I finally managed to close my account.

Sunday, January 20, 2008

Growing Mountains

In previous posts I mentioned repeatedly how essential it is that information is ordered, structured, and filtered in a sensible way: badly ordered information is no information. Passing on information from one generation to the next in a useful way is crucial to progress. Unfortunately, it seems to me the importance of this aspect and the impact of its consequences are not yet appropriately acknowledged. Instead, we are drowning in a sea of information, and social tagging doesn't even remotely address the problem. (Neither do blogs, for that matter.) For some aspects of the question, see e.g. The Spirits that we Called, Communication, and The Right not to Know. In other places, I mentioned the problem of increasing specialization in our communities, which makes communication between areas, and consequently the exchange of information, harder - see e.g. Science and Democracy II or The Marketplace of Ideas.

In this regard, I recently came across an article by Vannevar Bush. He writes:
    "Science has provided the swiftest communication between individuals; it has provided a record of ideas and has enabled man to manipulate and to make extracts from that record so that knowledge evolves and endures throughout the life of a race rather than that of an individual.

    There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.

    Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose."

He then goes on to praise recent technological developments...
    "Adding is only one operation. To perform arithmetical computation involves also subtraction, multiplication, and division, and in addition some method for temporary storage of results, removal from storage for further manipulation, and recording of final results by printing. Machines for these purposes are now of two types: keyboard machines for accounting and the like, manually controlled for the insertion of data, and usually automatically controlled as far as the sequence of operations is concerned; and punched-card machines in which separate operations are usually delegated to a series of machines, and the cards then transferred bodily from one to another. Both forms are very useful; but as far as complex computations are concerned, both are still in embryo."

The above quotes are from the article As We May Think, which was published in the Atlantic Monthly in July 1945! It is worth reading the full text. Bush basically foresaw social tagging:
    "When data of any sort are placed in storage, they are filed alphabetically or numerically, and information is found (when it is) by tracing it down from subclass to subclass. It can be in only one place, unless duplicates are used; one has to have rules as to which path will locate it, and the rules are cumbersome. Having found one item, moreover, one has to emerge from the system and re-enter on a new path.

    The human mind does not work that way. It operates by association. With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain."


What is more interesting, however, is that he suggests remembering not only the associative keywords, but also storing the paths people have taken to arrive at a certain piece of information.

Now I am wondering whether this suggestion would be worthwhile to try for navigation on the web. To begin with, I would sometimes be grateful to find the paths that I myself have taken before. (Never delete the browser history. Curse those who set A:visited = A:link.) Imagine one could visualize the website you are viewing on a map, with other people's paths going in and out, some major roads, some smaller sideways. I could imagine this being useful for the keyword problem I occasionally encounter: what do you do with a search engine if you can't find the right keywords? Well, you guess something that might go in the direction you hope for. It might be a bad guess though. E.g. if I start this way on the arxiv, I subsequently use a couple of 'refers to' links, upon which one sooner or later always finds the relevant publications - rivers running to the sea. Now imagine you could instead just select among the paths others have taken - all these people must be good for something; do we have to repeat such path finding over and over again?
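Just to make the idea a bit more tangible, here is a toy sketch in Python of what storing and reusing such trails could look like (all names and data are invented for illustration - this is not how any real search engine works):

```python
# A toy "trail map": record the paths visitors take through a set of pages,
# then rank the outgoing links of a page by how often they were followed.
from collections import Counter, defaultdict

class TrailMap:
    def __init__(self):
        # transitions[page] counts where visitors went next from that page
        self.transitions = defaultdict(Counter)

    def record_trail(self, pages):
        """Store one visitor's path through the web, page by page."""
        for here, there in zip(pages, pages[1:]):
            self.transitions[here][there] += 1

    def suggest_next(self, page, n=3):
        """The n most-travelled roads leading away from this page."""
        return [p for p, _ in self.transitions[page].most_common(n)]

trails = TrailMap()
trails.record_trail(["review", "paper-A", "paper-B"])
trails.record_trail(["review", "paper-A", "paper-C"])
trails.record_trail(["review", "paper-A", "paper-B"])
print(trails.suggest_next("paper-A"))  # ['paper-B', 'paper-C']
```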

Just some Sunday afternoon random thoughts.

Saturday, January 19, 2008

Mirror

Update on the (quotable) bathroom-mirror situation: my landlord finally installed a new one.

Friday, January 18, 2008

PIRSA

The online availability of recorded seminars is to me one of the most interesting recent technological developments influencing scientific research. It allows researchers at smaller institutions to benefit from the places with better infrastructure, it allows interested colleagues to get an introduction to somebody's research program without sorting through dozens of papers, it gives the public the opportunity to look into our seminar rooms, and it gives me the opportunity to skip the morning talks without too much of a bad conscience.

The recording and archiving of seminars, though, is sometimes a bit of a mess, and depends very much on the institution. PIRSA, the Perimeter Institute Recorded Seminar Archive, aims to provide an interface that makes recorded seminars easily searchable, and allows one to refer to them by a unique and permanent PIRSA number, much like the arXiv.

The PIRSA website has just been launched; you can have a look yourself at

pirsa.org

The archive presently contains PI's scientific seminar series (including colloquia), summer schools, courses, workshops, conferences, public lectures, and special events - in total, that currently amounts to about 1700 recorded talks. The recording is done using a combination of A/V equipment and Mediasite, which captures both a video feed of the speaker and a VGA feed of any supporting materials - such as presentation slides, transparencies, or blackboard notes and figures.

Almost like being there. Except that nobody notices when you fall asleep.

The driving force behind PIRSA is PI faculty member Lucien Hardy, who explains
    “Seminars have always played an important role in propagating knowledge. However, it has been the written rather than the spoken word by which scientific knowledge has been recorded, archived, and passed down. These words were written on paper and archived in libraries.

    Now technology has progressed further to the point that we can archive seminars. We have modeled PIRSA on arXiv.org. It is not so much a YouTube for science as it is a video arXiv for seminars. It is designed to be a useful resource for researchers rather than an entertainment channel. A permanent archive of seminars allows researchers to watch presentations they were unable to attend, to revisit them many years after they were recorded, and to cite them in their own work just as they would cite a regular article”

Steve Bradwell from our IT department, who has been in charge of the software development, adds
    “We believe PIRSA’s success as a global, web based physics archive lies in both the quality of content provided and the accuracy and consistency of the supporting information and media formats. It’s more than just feature rich services, people want consistency and cross platform support. PIRSA offers that.”

The ambitious long-term goal is to establish a general recording and archiving standard that other academic institutions could also use.

Thursday, January 17, 2008

Germenglish

Without any specific reason, here are a couple of confusions with the German language I've come across repeatedly:
  • The plural of ansatz is neither ansatzes nor ansatze nor ansatz's, but ansätze. One produces ä in LaTeX by typing \"a.


  • The German vowel i is pronounced like a short version of the English ee, and NOT like the ai in e.g. aisle. Ernst Ising, the guy from the Ising model, was German. Consequently, his name is pronounced Ee-sing, not Ai-sing. It's for the same reason that the Lie group is pronounced Lee group.


  • The German language is pronounced the way it is written. I've been told this is a sentence a native English speaker can't even make sense of. I don't know whether that's true (you tell me?), but just to make sure: it means you can learn the alphabet and then know how to pronounce a word just by stringing the letters together [1,2]. This means every letter needs to be spoken, particularly the 'e' at the end of words - it sounds like the e in 'set'. Schadenfreude. Luftwaffe.


  • The German word Schild, as in Schwarzschild, has nothing to do with the English word 'child'. Instead, Schild means 'shield', and Schwarzschild means 'black shield' - I would guess it goes back to some kind of family crest. The German sch is pronounced close to the English 'sh' (for the following vowel i, see the first point).


  • Vielbein is not a typo that should read vierbein. Vierbein means 'four legs', whereas viel means 'many'. Thus, vielbein is the appropriate word in an arbitrary number of dimensions. (Similarly, Dreibein means 'three legs'.) Pronounce roughly as fearbain, feelbain, draibain.


  • There is no 'th' in German. The English 'th' is therefore hard to learn for the German native speaker. The easiest way to fake a German accent is to replace every 'th' with a 'z'. It is zat easy. Eizer way: the 'th' in Bethe (the guy from the Bethe-Weizsäcker-Zyklus) is pronounced just like a 't'. Recall not to drop the 'e' at the end.



[1] Restrictions apply. The only exception you need to recall is that the 'ie' replaces 'ii', like in die (the) or wie (how), pronounced dee and vee.
[2] Doesn't work in English. Try it: live. L-ai-v-ee? Worse, it's not reproducible: infinity, but finite? break, steak, bleak? More of this?

Wednesday, January 16, 2008

Mercury looks like the Moon, nearly...

On Monday, the spacecraft Messenger visited Mercury, and shot this photo of the planet closest to the Sun:

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington (Source)

The photo was taken at 20:25 UTC on January 14, 2008, about 80 minutes after Messenger's closest approach to Mercury, from a distance of about 27,000 kilometres (17,000 miles). Mercury had been visited 30 years ago by the space probe Mariner 10, but most details of the planet visible in this photo have never been seen before!

Here is another photo, taken about 56 minutes before the closest encounter from a distance of 18,000 kilometres (11,000 miles), showing a region roughly 480 kilometres (300 miles) across, including craters less than a mile wide:

Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington (Source)

The dark, eye-like structure close to the terminator had been glimpsed by Mariner 10 and is called Vivaldi, after the Italian composer.

Messenger shot more photos during the fly-by, but transmission to Earth has been slow so far. The main reason is that, also on Monday, the Ulysses spacecraft passed over the north pole of the Sun, taking lots of exciting data, and the transmission bandwidth had to be shared between Ulysses and Messenger. You can find many interesting details about this delay, and the Messenger mission in general, on the site and blog of the Planetary Society, including a discussion of why, despite the obvious resemblance, there are also differences between Mercury and the Moon that can be seen in the photos.

Messenger was launched in August 2004 and is under way on a quite complicated trajectory that will, eventually, bring it into a closed orbit around Mercury in March 2011. It's quite surprising to me that it's so complicated to get rid of the extra kinetic energy one gains when falling into the potential well of the Sun.

So, Mercury will stay quite elusive for three more years.

Despite rumours to the contrary, it's quite possible that Copernicus saw Mercury in his lifetime. (Source: Wikipedia)
Apropos elusiveness: despite rumours to the contrary, Mercury can be spotted with the naked eye in the morning or evening sky, albeit only a few times a year - I've seen it once as a bright "star" in the evening twilight. There is the "very pretty tale" that Copernicus complained he had never seen Mercury himself. However, this is most probably only a myth, originating from a misreading of quotations from Copernicus' "De Revolutionibus" and from the "Life of Copernicus" by the philosopher and astronomer Pierre Gassendi (Gassendi was the first to observe a transit of Mercury across the Sun, in 1631). It seems this myth goes back to the much-read accounts "Astronomie populaire" by the French astronomer François Arago and "Kosmos" by his friend, the German naturalist Alexander von Humboldt. These accounts were extremely popular in the mid-19th century, and the catchy story developed a life of its own, despite complaints by contemporary astronomers that there was actually no real evidence for it. (PS: If someone could help me with translations of the Latin quotes from Copernicus and Gassendi as cited in the W. T. Lynn paper (The Observatory 15 (1892) 321-322), I would appreciate it. It seems that in the 1890s, Latin quotes did not demand translations in astronomy papers...)




Saturday, January 12, 2008

The Spirits that We Called

MARCH 13th 2008: Until Wednesday, the Presidential candidate [insert name here] scored high in the polls. Then a Google search for his name showed as first hit a report on an alleged child abuse committed by the candidate, published by Mary S. (name changed) on her personal website. The story was backed up by the following highly ranked hits, which indicated two similar events during his youth, though reliable sources were missing. Within less than one hour, the reports were echoed on thousands of weblogs and appeared on digg and reddit; the original website received 200,000 hits within the first 6 hours, until the server crashed. Immediate press releases by the candidate's PR groups did not appear in the Google listing, and could only be accessed through secondary links. It took until the next day for printed newspapers to attempt to clarify the situation.

NOVEMBER 9th 2011: Two independent eyewitnesses report on their weblogs that the Chinese military has violently overthrown the government in Khartoum, Sudan. The reports score first hit on the keywords 'world news', 'news' and 'foreign politics' at Google, later also on 'Sudan' and 'China'. Reports by the Chinese government denying the events did not appear in the Google ranking. The events were picked up by various TV stations, using the provided YouTube videos of extremely bad quality and doubtful origin. Dozens of reporters asked the White House for a statement. The President said he would not tolerate China getting a grip on Sudan's oil resources. The Shanghai Composite Index fell 541.12 points.

DECEMBER 30th 2015: Six months after Google and Yahoo were bought by Frederic F., multi-billionaire and president of several global companies, it was officially announced that Google will further improve the quality of search results and counteract the drawbacks of information overload. Beginning New Year's Day 2016, the algorithm will filter out "low quality sites, sites of obscure origin, and doubtful content", as the press release states. Yet it remains unclear who sets these criteria. Frederic F. answered inquiries with "Customers trust us. We will not disappoint them, and will remain truthful to our philosophy to do no evil." He explained the need for such a change with the accumulation of outdated and irrelevant information on the web, and added "Google will do its best to provide the user with correct information. Our employees are working hard to provide an excellent service to foster global knowledge."

I. Information Overload

Information is one of the most important resources in today's world. In a rapidly changing environment that gets more complex every day, the availability and accuracy of information are essential just to preserve the status quo, and indispensable for further progress.

‘Information overload’ isn't just an error message my brain produces when I check the arXiv, or an expression that I've made up for fun; it is a rather unsurprising and well-known side effect of a tightly connected world. The human brain's capacity to process input is limited. Today you are confronted with more information than you a) need and b) can deal with. The challenge today is not to collect all the information you can possibly get, but to filter it and extract the relevant bits.

You can notice how your brain has learned to deal with information overload: by only skimming this page, losing attention already at this paragraph - because it's not obvious to you what listening to me might be good for [1].


“[I]nformation overload is not only caused by the sheer volume of information, but also because of the complexity or confusing structure of information that might overtax the user’s cognitive skill to focus on relevant information ... Therefore Helmersen et al. (p. 2) characterize information overload as “difficulties in locating, retrieving, processing, storing and/or reretrieving information due to the volume of available information.” Information overload may lead to stress, health problems, frustration, disillusionment, depression, as well as impaired judgment and bad decision making ... From an ethical perspective, these consequences of information overload are problematic, because they undermine several basic principles, especially the requirement of participants’ autonomy/self-determination and the nonmaleficence principle.”
Behr, Nosper, Klimmt & Hartmann (2005) Some Practical Considerations of Ethical Issues in Virtual Reality Research, Presence Teleoperators & Virtual Environments 14:6, 668 (2005).


The internet collects and hosts an increasing amount of data. Besides potentially resulting in "frustration, disillusionment, and depression", as claimed in the above quote, a database without tools to find the relevant information is above everything else useless: badly ordered information is no information, and badly ordered information can be fatal. Envision a library without any cataloguing. Of what use is it if you're told everything you need to know can be found somewhere on these four floors, filled with bookshelves up to the ceiling?

Luckily, thanks to ingenious software masters, we have today powerful search engines that help us structure the available information.

Und nun komm, du alter Besen!
Nimm die schlechten Lumpenhüllen
Bist schon lange Knecht gewesen:
nun erfülle meinen Willen!
Come on now, old broom, get dressed,
these old rags will do just fine!
You're a slave in any case,
and today you will be mine!



II. Filtering Information

It is 2008. Today's school kids have grown up with the internet. It promises answers to all the questions you could possibly have. And if you can't find an answer, ask an expert. Even better, you will find support no matter which opinion you happen to hold, or which side of an argument you want to defend. And somewhere you will come across a forum of like-minded friends who confirm your convictions.

What criteria do people use to filter information? A high Google ranking is without doubt useful for passing a first filter. Note, I neither said a high Google ranking is an indicator of quality, nor do I assume people are not aware of that. It is just a fact that what ranks highly on search engines is more likely to be read [2]. And what is more likely to be read is more likely to stick. Especially children who haven't been taught how to deal with information they find on the internet are prone to making mistakes in judgement, but confirmation bias is a fairly widespread habit among all ages.

Besides this, people assign a higher importance to information just because they hear or read it repeatedly. What's in your face is in your mind. There must be something going on when many people point at the sky. That's what advertisements take advantage of, that's what meta-filters like digg and reddit do, and that's what search engines do: directing attention, filtering your information.

Now you can tell me everyone of us should be rational; we should always check sources and doubt unverified reports even if they are repeated several times. We shouldn't believe what we read without questioning it. We should seek accuracy and not easy entertainment. We should, we should, we should [3]. But face it, many people don't. Because they just don't have the time, or are not interested enough, and the most commonly used criterion in this case is to follow the masses. Read what others read (the posts with the most comments?), go where many people link to, talk about what others talk about, pay attention to what many people consider relevant. The majority offers security, Wikipedia is trustworthy, Google has proved useful.

Now go back to my opening line: "Information is one of the most important resources in today's world." Accuracy and availability of information are essential for the progress of our societies. You can direct people's opinions with the information you give them, and with the way you provide it. You don't need hard censorship for that; it is more efficient to leave people the illusion of knowledge. It doesn't matter if there's a right to free speech if you can make sure few people listen to what they shouldn't hear. The majority offers security, Wikipedia is trustworthy, Google has proved useful?

The preface of this post gives three examples of how easily tampering with search engine algorithms could affect opinions today. Effectively, this interferes with our political systems, since information is the basis of our decisions. Note, our decisions are *not* based on the information that is 'theoretically' available - somewhere, somehow - but on the information that is 'practically' available in our heads, because we've read it, because we recall it, because we consider it consciously or unconsciously relevant.


Ein verruchter Besen!
der nicht hören will!
Stock, der du gewesen,
steh doch wieder still!
Be you damned, old broom,
why won't you obey?
Be a stick once more,
please, I beg you, stay!



III. Politics on the Web

The internet today has aspects of different political systems - capitalist anarchy and direct democracy - which are reflected in the most frequently used services on the web.

III a. Googlism

Information has always been filtered by the media, and this has always influenced our political opinion-making process. People have always fought for attention. What is new is

a) The necessity. The increasing need for such a filter, and the relevance this ordering mechanism thereby obtains. Imagine Google, Yahoo, and MSN were down for 24 hours, and consider the consequences.

b) The centralization. Google isn't the only search engine, but it is without doubt the presently most popular one. Millions of people worldwide rely on it. How many would even notice if all hits after page 3 were missing?

Combine that with the problems old-fashioned print media run into because they have trouble selling yesterday's news. Those who structure the information of many people have political influence. This is not a virtual, but a very real, reality. The internet affects our daily lives, and it is still mostly a legal vacuum [4]. If I were a terrorist, I'd take over the Google headquarters and prominently place a couple of fake reports, sending the US economy tumbling and setting the stage for another war. You think that wouldn't work? Think twice. We live in a world where a couple of cartoons can kill dozens of people.

The Google ranking of a website can be pushed by various means. If a company, or lobby, can afford to hire an expert in search engine optimization, they can literally buy a good ranking. Even better if they invest in paid links, or further advertisement.

The internet is presently mostly a capitalist anarchy with communist (shareware/no private property) areas that are struggling for structure. It offers the possibility of concentrating a lot of influence in the hands of a few people. The archetypical nerd community the web started with is today a small minority among those who just use the net they are being offered.

One can hope that there are self-regulatory mechanisms that save our societies from being influenced by a small group of people, because users would just choose different companies, different information sources. Or maybe some nerdy guys would set up their own 'better' search engine. That might work. But there is no guarantee it will. It is far from clear what a majority of people would consider 'better', for whatever reason. If I see what a majority of people considers interesting on digg, I have my doubts about relying on such a self-regulatory mechanism to work out.

Relying on the good will and rationality of a majority of people is a decent approach, but it is naive. I hope that by now you see how much power lies in the hands of those who order, filter and structure our information, and that this has an impact on our political system, the opinion-making process, and the decisions that we reach. In my nightmares I see the President's consultants putting together their briefings by Googling some keywords. Yes, maybe users would indeed just choose different companies, and everything would work out fine, but do you want to rely on it?

People make mistakes, and the majority doesn't always make the right choice.

That's why we have a representative democracy [5]. That's why our countries have constitutions that can't be changed from one day to the next. That's why we have laws to protect our freedom, that's why we have "executive and judicial officers" who are "bound by oath or affirmation, to support this Constitution" [6]. We shouldn't privatize part of the executive, and we shouldn't hand over the filtering of information to private companies. It is almost tragically comic to me that all the concerns about Google that I find in the media circle around economic power:

"One senior executive at Time Warner, who did not want to be identified, because Time Warner’s AOL division is a Google partner, says, “Sometimes I don’t know what to think of Google. We have the best relationship of anyone with Google. On the other hand, you always have to worry when someone gets so much more powerful than all the competition out there. This is why I come down to this: I hope the government starts understanding this power sooner rather than later.”"
The Search Party, by Ken Auletta, The New Yorker, Jan 14th 2008


Yes, I too hope the government starts understanding this power - the political power.

IIIb. Wikiarchy

I find it quite interesting to follow the development of non-profit collaborative projects like Wikipedia. Wikipedia has certainly proved useful, and in my perception the quality of its articles has improved tremendously over the last years. It has been a while since I came across a statement that I could immediately tell was blatantly wrong. It is a good quick reference, and I often prefer it over other sites, if only for the simple reason that the pages are well maintained, easily readable, and cross-referenced.

However, here as much as with Google, I find the influence exerted by these sites worrisome, because I believe many people, especially the younger generation, are not sufficiently aware of it. In a certain sense, Wikipedia appears very trustworthy and likable, up to the point that I feel bad for criticising it and expect some comments to vehemently defend it. Isn't it, after all, a community project? Anti-authoritarian? Democratic? GOOD in capital letters? Danah Boyd from Many 2 Many expressed her concerns as follows:

"My concern - and that of many of my colleagues - is that students are often not media-savvy enough to recognize when to trust Wikipedia and when this is a dreadful idea. They quote from it as though it cannot be inaccurate. I certainly distrust many classic sources, but i don’t think that an “anti-elitist” (a.k.a. lacking traditional authority and expertise) alternative is automatically better. Such a move stinks of glorifying otherness simply out of disdain for hegemonic practices, a tactic that never gets us anywhere."
Danah Boyd, Academia and Wikipedia, Jan 4th 2005


It's not only a tactic that doesn't get us anywhere, but a tactic that can simply go wrong, for the same reason I mentioned above: the majority isn't always right. Wikipedia works as long as those who are not experts realize they are not experts, know that they don't know, respect the rules, and don't exercise their potential power.

In his article 'Digital Maoism', Jaron Lanier formulated his concerns like this:

"The problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy."

I recommend you read the full article; it's one of the most insightful pieces of writing I've come across in a long time [7]. I find it so to the point that I'll borrow another paragraph:
"A core belief of the wiki world is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds. This is analogous to the claims of Hyper-Libertarians who put infinite faith in a free market, or the Hyper-Lefties who are somehow able to sit through consensus decision-making processes. In all these cases, it seems to me that empirical evidence has yielded mixed results. Sometimes loosely structured collective activities yield continuous improvements and sometimes they don't. Often we don't live long enough to find out."

The belief that problems will be corrected I find very nice, because it puts faith in mankind, but I wouldn't want to rely on it. It is quite an interesting trend that people rely so much on the common consensus. It is a trend though that can go wrong, exactly because it does not necessarily have a self-correcting mechanism. Relevant for people's decisions is not only the information they have, but how much information they believe they have, and how accurate they believe it to be. Combine that with the faith in Google and Wikipedia. Is there information not on the internet?

On the other hand, what I find very interesting in these developments is that Wikipedia isn't just a direct democracy: it has guidelines for editing, and de facto it does have a power structure. This is interesting to me because it is pretty much like witnessing the formation of a political system. It is quite clear, though, where it should go if Wikipedia wants to remain a high-quality source of information, and that's what you'll read in the next section.

Herr und Meister, hör' mich rufen!
Ach, da kommt der Meister!
Herr, die Not ist groß!
Die ich rief, die Geister,
werd' ich nun nicht los.
Lord and master, hear my call!
Ah, here comes the master!
I have need of Thee!
from the spirits that I called
Sir, deliver me!


IV. Representative Democracy

In the early nineties, I was a member of the social democratic party in Germany, and I was a strong believer in direct democracy. The internet was fresh and new, and it seemed to me like the perfect tool to make a reality of what I thought had been given up for practical reasons: decisions being made by all the people. It wasn't difficult to extrapolate that internet access would catch on like wildfire and that in some decades almost all households would be connected - fast, easily, with access to all the information they need to make decisions.

At that time I was really excited about it, registered the first domain I ever had (demokratie-im-netz/democracy on the web), and tried to get a critical mass of people behind me. I didn't get very far though. Essentially, nobody was listening to me. Which I should have expected, given that the average age of the people I was facing was somewhere in the upper fifties, and most of them had not the faintest idea what I was talking about. World wide what?

Either way, though I am not generally easy to discourage, there were two reasons I gave up on this. For one, at some point I had to make a decision between politics and physics. The latter won; that's why I am today where I am [8]. The other reason was that I came to realize that the adjective 'representative' is an essential ingredient of the democratic system.

The tasks in our society have become increasingly specialized. Most jobs require a years-long education. If you want a good performance, you look for a specialist, for an expert, somebody who has experience where you don't. I wouldn't want to make important decisions if I don't have the time or the education to do it well. So we elect representatives to do this job, people who are well qualified for it. (I am afraid though that a big part of the frustration with politics and politicians today is a result of the low level of expertise in government. What can I say. It's a democracy. You get what you vote for.)

Either way, the election of representatives is beneficial for two reasons. One is to increase the expertise behind decisions, above what could possibly be achieved by all people - most of whom have a day job and other things to do. The other reason is that people's opinions are easily influenced by events with large emotional impact, and are prone to irrational fluctuations on too-short timescales. Lanier put it like this: "One service performed by representative democracy is low-pass filtering." The drawbacks of direct democracy, and the reasons why we today have representative democracies, however, haven't yet been fully acknowledged on the web. Let me quote from the Google Corporate Philosophy:

4. Democracy on the web works.

Google works because it relies on the millions of individuals posting links on websites to help determine which other sites offer content of value. Google assesses the importance of every web page using a variety of techniques, including its patented PageRank™ algorithm which analyzes which sites have been "voted" the best sources of information by other pages across the web.


Determining 'content of value' and 'assessing the importance' of webpages is limited by the way the system operates, which presently does not use the advantages of that small adjective 'representative'.
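To make the contrast concrete: PageRank, as publicly described, treats every hyperlink as an anonymous vote and computes something like the long-run distribution of a random surfer hopping along those links. Nothing in the computation knows who cast a vote, or why. The following is a minimal sketch of that power-iteration idea in Python; the toy graph and the damping factor of 0.85 are illustrative assumptions on my part, not Google's actual implementation.

    # Minimal PageRank sketch (power iteration) on a made-up toy link graph.
    # Illustrative assumption only - not Google's production algorithm.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if outlinks:
                    # each outgoing link passes on an equal share of the page's rank
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
                else:
                    # dangling page: spread its rank uniformly over all pages
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    # "C" is linked to ("voted for") by both "A" and "B", so it ranks highest,
    # regardless of whether its content is accurate.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))

Note that the only input to the sketch is the link structure: popularity in, popularity out. This is exactly the mechanism the rest of this section takes issue with.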

The point is that the problems we are facing on the internet have already been solved. Read your constitution. What is missing are elected representatives whose task it is to pursue the majority's goals, formulated as a set of rules/criteria/regulations. I will give you a concrete example: what I would consider a useful search engine is one that rates websites by various criteria like accuracy, entertainment, visual appeal, whatever. I don't want such a rating to be done by everybody clicking on a scale of stars. I don't want judgement to be made by anonymous people, nor by an algorithm, nor by an algorithm modified by anonymous people.

I do not care how many links go to a site if it is only an echo of another article, or - even worse - contains nothing but advertisements and links to other sites. I want there to be a group of people who are responsible for providing a certain quality, a group of people whose names are known, and who explain their qualifications, opinions and decisions (I don't mind pseudonymity if it suffices to prove expertise). The internet is the ideal tool to provide representatives with feedback, and a useful platform for people to explain their qualifications and to convince a majority that they are trustworthy.
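A hypothetical sketch of what such a 'representative' ranking could look like - all reviewer names, sites, criteria, and weights below are invented for illustration; no such system exists: accountable reviewers score each site on several criteria, and the reader chooses how to weight those criteria.

    # Hypothetical sketch of re-ranking by accredited, named reviewers.
    # All names, sites, criteria, and weights are made up for illustration.
    from statistics import mean

    # reviewer -> site -> {criterion: score from 0 to 10}
    reviews = {
        "Dr. A. Expert":   {"example.org": {"accuracy": 9, "visual_appeal": 5}},
        "Prof. B. Editor": {"example.org": {"accuracy": 8, "visual_appeal": 7}},
    }

    def site_score(site, weights):
        """Average each criterion over all reviewers, then combine
        using the reader's own weights."""
        per_criterion = {}
        for scores_by_site in reviews.values():
            for criterion, score in scores_by_site.get(site, {}).items():
                per_criterion.setdefault(criterion, []).append(score)
        return sum(weights.get(c, 0) * mean(s) for c, s in per_criterion.items())

    # A reader who cares mostly about accuracy:
    print(site_score("example.org", {"accuracy": 0.8, "visual_appeal": 0.2}))

The design point is that, unlike the link-counting above, every score here is attached to a named, accountable reviewer, and the weighting is transparent to the reader.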

The recently launched Wikia Search, a Wikipedia-style search engine with an open-source algorithm and user feedback, is an interesting experiment. The concept doesn't really convince me though, for the reasons that should have become clear by now. There is the possibility that the user feedback will optimize the site ranking for popularity (as on digg) rather than quality. And though popularity is one interesting criterion by which to order sites, it shouldn't remain the only one. The "wiki-style social ranking" they advertise doesn't seem to me to be a sufficient guarantee that expertise will be increased among those who provide the feedback.

V. Summary

The Sorcerer's Apprentice that I have quoted here is a poem by J.W. Goethe. The apprentice is excited about the power he has witnessed and, while the master is away, plays around with the broom. Unfortunately, things don't go as expected. He involuntarily causes trouble he doesn't know how to deal with, eventually culminating in the famous line "Die ich rief, die Geister, werd' ich nun nicht los," which translates roughly into: "I can't get rid of the spirits I called." In the poem, the master comes to help and sends the broom back into the closet.

The internet is a great invention and a powerful tool. It has a large and increasing influence on our daily lives, as well as on our opinion- and decision-making processes that eventually affect the quality of our living. It is as much an opportunity as a danger. We should be very cautious to ensure that self-organized structures on the internet - which are presently (still!) operating mostly in a legal vacuum - do not interfere with our political systems. And this potential really does exist.

„In die Ecke
Besen, Besen!
Seids gewesen,
denn als Geister
ruft euch nur zu seinem Zwecke
erst hervor der alte Meister!”
“Back now, broom,
into the closet!
Be thou as thou
wert before!
Until I, the real master
call thee forth to serve once more!”


Der Zauberlehrling, by Johann Wolfgang von Goethe -- The Sorcerer's Apprentice, Translation by Brigitte Dubiel



Download print version (pdf) of this text.



[1] And now you're confused because I don't tell you ;-)
[2] If you're one of the 100 visitors per day who come to this blog searching for 'Map of America', you know what I mean.
[3] New Year's resolutions, anybody? Yawn.
[4] And each time I have to read through the insults in a blog's comment section, it is the first article of the Basic Law that sounds in my ears: "Article 1 (1) Human dignity shall be inviolable. To respect and protect it shall be the duty of all state authority."
[5] The overwhelming majority of visitors to this blog come from North America and Central Europe. Apologies to those who cannot identify with my use of 'we' when it comes to the political system.
[6] The basics of the political system are quite similar in Germany, with the exception of the President's role and details of the election processes.
[7] Except that the title doesn't make sense to me - I can't see any Mao in the game.
[8] Unpacking the boxes of my 5th move in 4 years, writing this blog post on a Sunday afternoon, while my husband is on the other side of the Atlantic Ocean. Wait, who ordered that?

Thursday, January 10, 2008

Sense About Science

Sense About Science is, according to their website, an independent charitable trust that seeks to disentangle fact from fiction in public debates about scientific questions. They are UK-based, though internationally active. Sense About Science has a board of trustees from various directions of science, an advisory council, and a database of 2,000 more scientists whose judgement they can call upon. To me it presently looks somewhat dominated by medicine and biology, but given that this area receives the biggest share of public interest, that isn't so surprising. One also finds some physicists on their list.

According to their information sheet their aims are to:

  • work with scientists to respond to inaccuracies and attacks on science

  • work with scientific bodies to promote the benefits of scientific research

  • explain to stakeholders how and why science is different from opinion

  • bring scientists into direct contact with interested groups and opinion formers

  • provide a facility for every kind of organisation to contact scientists about controversial or worrying issues

  • arrange briefings on scientific developments for non-specialists


And they state:

"Scientific evidence should be central in debates about science, medicine and technology. It is vital for clear public deliberation, scientific development and good policy. Often, though, evidence is ignored or even misrepresented. From scares about the contraceptive pill, fluoride and the MMR vaccine to controversies about genetic modification, stem cell research and radiation, society has paid with unfounded anxiety, poor decisions, and lost opportunities for research and development.

Sense About Science responds to the misrepresentation of science and scientific evidence on issues that matter to society. We do this by promoting respect for evidence and by urging scientists to engage actively with a wide range of groups, particularly when debates are controversial or difficult.

We also work proactively for a wider understanding of the nature of evidence and recognition of the value of scientific enquiry."


It probably says something about my pessimistic world view that I expected their site to actually be a well-hidden attack on science. Like: we just want a reasonable discussion about intelligent design, and all of our listed Professors, who bought their PhDs online, confirm that the dairy compartment in my fridge was intelligently designed such that butter does *not* fit in. I've turned their site upside down, but could find neither any suspicious advertisements (buy your PhD HERE), nor any support of obvious crackpottery, nor attacks on scientific methods altogether. They explicitly state that they regard peer review as an essential ingredient of science:

"We promote:

  • the principle of independent peer review

  • scientific inquiry free from stigma and intimidation

  • constructive discussion"


They offer to answer inquiries by email and phone, and their administration and funding* seem to be sufficiently transparent. If you're a scientist you can join their database (called 'Evidence Base'), or contact them by phone or email with inquiries, questions, or pointers to press articles that could be misunderstood.

Overall it looks to me like a great idea that I hope will fly.

www.senseaboutscience.org.uk
More Info from SourceWatch


* According to LobbyWatch: Funding is said to derive from 'corporations and learned societies'. Funders include the Association of the British Pharmaceutical Industry (ABPI), the 'life science' company Amersham Biosciences plc, BBSRC, BP plc, GlaxoSmithKline, ISAAA, John Innes Centre, The John Innes Trust, Mr M. Livermore (a biotech PR consultant who formerly worked for DuPont and has links to Scientific Alliance and IPN), the biopharmaceutical companies AstraZeneca plc, Pfizer plc and Oxford GlycoSciences plc, Dr M. Ridley (links to IEA, Julian Morris etc.), and the Social Issues Research Centre (SIRC) and the related Health and Science Communication Trust.



Thanks to Klaus!