
Thursday, November 29, 2007

Robert Berdan: Photography, Art and Science

Caffeine Crystal

Recently, I stumbled across Robert Berdan's collection of amazing photos on the websites Moods of Nature and Science and Art. The picture to the left shows a caffeine crystal under a polarized light microscope, which allows one to visualize the birefringent properties of the sample. Just looking at it helps me wake up :-)


The picture below shows an isolated nerve cell with glia attached (yellow) taken with a scanning electron microscope - from Robert's research on nerve regeneration and cell-cell contacts. (I think my nerve cells currently need some more of these caffeine crystals to regenerate ...)


Nerve Cell


But he also has a lot of great nature photos:

Hay

Reading Robert's CV, I saw that he has an education in science, and I asked him to write a few lines about how he came to photography:

Grizzly

"I entered science because I had an interest in taking pictures through the microscope which I started at the age of 13. I spent 15 years as a medical researcher but writing grant proposals and admin work started to take me away from doing the research so I left to start up my own business, Science & Art Multimedia in order to focus on web development, teaching and photography. Now I get to spend more time outdoors and travel. My focus is on Canadian Nature photography - right now I am concentrating on the Great Bear Rainforest and I am planning on visiting Yellowknife next year to photograph more of the Northern Lights.


What fascinates me about photography is that it uses a combination of Science and technology to capture the image but requires a sense of art to make the images compelling. I believe photography can enhance one's sensitivity to visual elements around us and increases our observational powers. All good science begins with the ability to observe things that others ignore or did not notice. The camera is also a tool to record moments in time and preserve these instances for the future. For me taking pictures provides a number of thrills, the thrill of being there and capturing an exciting moment, the thrill of re-living the experience when I view the picture again and finally the thrill of sharing that experience with others. Another thing I love about photography is that unlike other activities such as sports, with photography age is not a factor. Anyone can take beautiful pictures at any age and one can continue to improve with age. Autofocus and vibration reduction technologies certainly help make sharper pictures. Digital photography is particularly exciting because of the instant feedback. I am seeing folks improve their ability to take effective photographs significantly faster than they would if using film."



Robert Berdan is a photographer and photo guide living in Calgary, AB. His background includes 35 years of photography experience and degrees in Zoology, Cell Biology and Neuroscience. He has been teaching for more than 25 years, including at the University of Alberta, the University of Calgary, the Southern Alberta Institute of Technology, the Calgary Science Centre, and privately as part of his business "Science & Art". He is the author of several scientific and photo publications. You can find more information on the websites www.moodsofnature.com and www.scienceandart.com.




Wednesday, November 28, 2007

A Limerick

    Some people who write their own blog
    Do so to insult and mock
    Because they have fun
    But in the long run
    The result is just big clouds of fog.


That is to say I am presently pretty blogged out.

Monday, November 26, 2007

Fact or Fiction?

Ham Radio Operator finds Cure for Cancer! - Surfer Dude finds Theory of Everything! - Teenager builds Nuclear Reactor in the Back Yard! - You are made of a Braid! - Our world is a brane in a throat! - Hints for a Breakdown of Relativity Theory! - Physicist Explains how the Present can Affect the Past! - The Future Influences Construction of Particle Colliders! - Physicists Discover Imprint of Another Universe!


Science Fiction

When I was a kid I read a lot of science fiction and fantasy stories. Both create a world in which the impossible becomes possible. Still, the fictional logic should be without contradictions. The biggest annoyances are mistakes of the author, a plot that doesn’t make sense, a scenario that is inconsistent. Why didn’t they just use the transmitter? Time travel stories were always the worst.

For a good story the imaginary world is based on only a few modifications: Our memory can be stored on a chip and transferred to another person. We can breed dinosaurs. We can travel to a parallel world; shield gravity; slow time. Big brother is watching us. We have a worldwide communications network. We can travel faster than the speed of light. IMAGINE!

In many cases, science fiction writers have predicted technological advances that later became reality. Science and fiction have always been closely linked. In contrast to science, which is about the possible, the imagination is unconstrained, and free to make a 'could' into a 'will', thereby creating a new world.

Virtual ‘reality’ has promoted fictional worlds to a new level. Video games can break the laws of nature, photo editing and three dimensional pictures tamper with our perception of reality, computer animated movies make the impossible possible. Today, everybody can publish his or her interpretation of the world. Einstein was wrong! Hijacked by an alien! A new quantum mechanics! Consciousness is the tenth dimension! A giant Mandala explains the elementary particles! The world is really a flat plate supported on the back of a giant tortoise, and it's turtles all the way down.


Does it matter whether it is fact or fantasy as long as it is entertaining? Does it matter whether it is science or fiction as long as visitors click on the advertisement banners? Does it matter whether it is virtual or reality? If your first one is too boring, why don't you just get a SecondLife?

"Around 1900, most inventions concerned physical reality: cars, airplanes, zeppelins, electric lights, vacuum cleaners, air conditioners, bras, zippers. In 2005, most inventions concern virtual entertainment — the top 10 patent-recipients are usually IBM, Matsushita, Canon, Hewlett-Packard, Micron Technology, Samsung, Intel, Hitachi, Toshiba, and Sony — not Boeing, Toyota, or Wonderbra. We have already shifted from a reality economy to a virtual economy, from physics to psychology as the value-driver and resource-allocator."




Popular Science

Science is done for the society we live in. Communicating our research, within and outside the community, is an essential part of our job. I am very happy to see that physics - even the theoretical side! - has recently been receiving an increasing amount of attention from the broader public. I am all in favor of popular science books (even though these simplifications sometimes make me grind my teeth), and I think PI is doing an absolutely great job with its public outreach program. I am not sure how much influence blogs have in this regard, but I hope they contribute their share to making the ivory tower a little less detached.

The result of this trend is twofold.

First, scientists have to live with simplified and generalized statements, and with a lack of details. I hope that in the long run people will become more familiar with some basic concepts and the standard can be raised. I guess that many people don't understand the details of the stock market or tax systems, but that doesn't keep them from reading about these topics in the newspapers. Likewise, everybody who writes about theoretical physics should learn how to use an equation editor. The myth that every equation lowers the accessibility for the reader is a result of cowardice and unfamiliarity. Scientific journalism could provide significantly more content than it does today. But still, scientists will have to learn how to live with unqualified commentaries, much like writers have to live with reviews by people who have no clue about literature.

Second, it seems that science, much like art, is becoming a hobby. Some hobby artists believe they are actually the new Pollock, and their paintings belong in the Museum of Modern Art. Some hobby scientists believe they are actually the new Einstein, and their theory belongs in PRD. Yes, it might be that occasionally there is a true talent to be found there, but mostly it's just splashing color on a canvas. Scientists have yet to learn how to deal with the attention of the interested layman. However, this problem is greatly amplified by the first point. Oversimplification in the media leaves people with the impression that a PhD is a hoax, and our education unnecessary. Hey, the surfer dude can do it. Theoretical physics seems to be an especially easy and appealing target.


Science

Natural sciences are about describing nature. As a theoretical physicist I see my task as contributing to the understanding of the world that we live in. However, I - like many of my colleagues - have fun with wild speculations. Call it a brain exercise. What would the world look like if the parameters in the standard model were different? If there were additional dimensions? If the elementary particles were composite? If particles could jump non-locally in no time? Eventually however, we have to constrain ourselves to the reality we live in.

Theoretical physics explores the unknown, and goes beyond the limits of our current knowledge. We play around with ideas and examine their consequences in the hope of finding out how nature works. This might sound like there is an awful lot of freedom in doing so, but this is not the case. It is science, not fiction, because we have to make sure these ideas are not in conflict with the reality we observe. We have to make sure theories are internally consistent, and in agreement with what we already know. If you think this is easy, you have no idea what you are talking about. It's like saying Miró just smeared color on a canvas, can't be that hard to do.

Unfortunately, I have the impression that the tendency in theoretical physics is to make wilder and wilder speculations, to throw several of them together, to work out details of still unconfirmed ideas. Stories that are interesting in some regard, but increasingly detached from reality.
    The first point in my referee reports is to insist the author change 'will' into 'may' and 'does' into 'could'. You call that nitpicking? I call that the difference between science and fiction.

In many cases, the assumptions underlying such speculations are clear to those working in the field, but often not very well documented in publications. The combination of this tendency with the media can be disastrous. In popular science articles, the speculative character of theories is often poorly laid out. For one, because it might not be apparent to the journalist, but also because caution seems to be interpreted as just adding boring details.

The result is a blurring of the boundary between science and fiction.


Uncertainty

“We are faced with all kinds of questions to which we would like unequivocal answers […] There is a huge pressure on scientists to provide concrete answers […] But the temptation to frame these debates in terms of certainty is fraught with danger. Certainty is an unforgiving taskmaster. […] If we are honest and say the scientists' conclusions aren’t certain, we may find this being used as justification for doing nothing, or even to allow wiggle room for the supernatural to creep back in again. If we pretend we’re certain when we are not, we risk being unmasked as liars.”



If you want to witness science in the making, you will have to face there are no easy answers. There are pros and cons, and differing opinions. If they care about it at all, journalists seem happy if two persons provide the extremes; candidates that are naïve or stupid enough to either call things fabulous or nonsense, instead of making matters complicated. It seems to be enormously difficult to understand that even scientists change their mind, or feel torn into different directions as long as a situation is not entirely clarified. Indeed, it seems to be so difficult to grasp that a simple mind might insist a scientist who can allow for different alternatives is not one person, but two.

It doesn't sound easy to write a good and balanced article, does it? Well, that's why journalists have an education. It's their job to figure out the facts and present them in an accessible way, and luckily there are a number of very good science journalists out there. Unfortunately, often an initial article gets copied and watered down a hundred times with decreasing content and increasing headlines. I can understand it must be difficult to write about science without working in the field, but I am not willing to excuse arbitrarily cheap echoing and copying, deliberate omissions and polarization of facts for the sake of catchy titles. Reporting on science in the making necessarily comes with uncertainties. If you can't cope with that, stick to last century's headlines.

“Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales.”
~ Bertrand Russell



Backlashes

The hype of science in the media just reflects a general trend caused by information overflow. In today's world you have to scream really loudly to be heard at all, and the fatter the headline, the better. I generally dislike this, as it leads to inaccurate reporting, unnecessary confusion, and bubbles of nothing. All of which obscures sensible discussions and is a huge waste of time.

However, despite this general trend, what worries me specifically about popular science reporting is how much our community seems to pay attention to it. This is a very unhealthy development. The opinion-making process in science should not be affected by popular opinions. It should not be relevant whether somebody makes for a good story in the media, or whether he or she neglects self-promotion. What concerns me is not so much the media re-re-repeating fabulous sentences, but how many physicists get upset about it. This clearly indicates that they think this public discussion is relevant, and this should not be the case.

Concerns about the public opinion arise from the fear that it might affect the funding of some research areas. But it is not the media, who create fashions and hypes, that are to blame. Neither is it the scientists who are not careful enough when talking to journalists. To blame is everybody who tolerates that the funding of science is subject to such irrelevant factors.

It seems to me that scientists, as well as funding agencies, urgently need to learn how to deal with such popularizations. I have written previously about the dangers on the "Marketplace of Ideas": The more attention scientists (have to) pay to advertising themselves because they believe it might help their career, the more the objectivity of scientific discussions suffers. The more time scientists (have to) spend with irrelevant distractions for the sake of their career, the less time they have for research. That's the reality we do our research in?!

My concerns about science journalism are similar. I believe that freedom of the press is one of the most important ingredients for a well functioning democracy. Free not only to write without censorship, but free to write without being influenced by lobbyism and financial pressure. The necessity to write what sells promotes cheap and large headlines, premature reporting, and extreme polarization, and comes at the expense of content and quality. It causes people to uncritically echo and copy stories because it's fast and easy, whereas research requires time and effort. The harder it becomes to sell print versions, the worse the situation becomes, because online the clocks tick even faster.

    Is the goal of good writing that visitors click on the advertisement banner?
I hope that we find a good way to deal with different levels of scientific reporting without letting the public opinion affect our research, and without blurring the line between science and fiction.


Summary

Information is precious. It is one of the most important goods in our modern society. Each time one of us spreads inaccurate information, we contribute to confusion and hinder progress. Whether a scientist, a journalist, an editor, or a blogger: what we say, write, and promote publicly is our responsibility.

We are all human. Bloggers get overenthusiastic, write before they think, and make mistakes. Scientists say stupid things, speak before they think, and make mistakes. Journalists misunderstand, publish before anybody else thinks, and make mistakes. Deliberate repetition turns mistakes into lies.

The triangle of Science, Society and Information Technology noticeably affects our daily lives. In the absence of any sensible way to cope with side-effects, the very least we can do is to be aware of the developments and find out how to deal with them.

Sunday, November 25, 2007

Progress

2002

"I have shown here that there exist relativistic theories in which the GZK cutoff is shifted significantly upward (enough to explain the cosmic-ray puzzle) but there is no preferred class of inertial observers."

~ Giovanni Amelino-Camelia, "Kinematical solution of the UHE-cosmic-ray puzzle without a preferred class of inertial observers", Int.J.Mod.Phys. D12 (2003) 1211-1226, astro-ph/0209232, p. 7

2007

"What could instead accommodate exactly the situation where there is no GZK anomaly but we do find anomalous in-vacuo dispersion? Well [...] you basically could go to the direction I proposed a few years ago, which is allow for departures from Lorentz symmetry but without the emergence of a preferred frame [...] No preferred frame. These turn out to be a much softer class [...] of departures from standard Poincare symmetry [...] They are unable [...] to provide an observably large GZK threshold anomaly.

Despite of what is said in many Doubly Special Relativity papers [...] That's not possible."

~ Giovanni Amelino-Camelia, talk given at the workshop on Experimental Search for Quantum Gravity, pirsa:07110057, min. 26:23

Saturday, November 24, 2007

Baruch Spinoza born 375 years ago

Baruch Spinoza (via wikipedia)
When Albert Einstein was asked by the New Yorker Rabbi Herbert S. Goldstein "Do you believe in God?", his answer was, "I believe in Spinoza's God, who reveals Himself in the lawful harmony of the world, not in a god who concerns Himself with the fate and the doings of mankind."

Baruch Spinoza, the Dutch philosopher who had so strongly influenced Einstein's philosophical and religious views, was born in Amsterdam 375 years ago today, on November 24, 1632, the son of Jewish refugees from Portugal. His convictions, based on his studies of Renaissance philosophers, and especially of the writings of Descartes, brought him into conflict with the Amsterdam Jewish community, which expelled him in 1656. Earning his living as a lensmaker for microscopes, glasses and telescopes - a booming industry at the time - he could continue his studies, and while his way of life was very modest, he soon gained a reputation as quite a radical thinker. In his posthumously published Ethics, he tried to present his philosophical reasoning "in geometrical order", in a rigorous way modelled after Euclid's Elements. Spinoza was convinced that God exists in everything in nature, a pantheism that was soon interpreted as plain atheism.

If you want to know more about Spinoza than I can tell you, good starting points on the web are the biographies at wikipedia and the SEP, the BBC Radio 4 "In Our Time" program on Spinoza, and the collection of links "Studia Spinoziana" I've found via the Spinoza Institute.

Friday, November 23, 2007

ESQG 2007 Summary

[I was asked to write a brief summary of our previously mentioned workshop Experimental Search for Quantum Gravity for the PI newsletter. Here is what I wrote:]

The suggestion for the workshop came up during our discussion group on the phenomenology of quantum gravity, held at Perimeter Institute last winter. We wanted to bring together people who are working in this still young field: particle physicists with the kappa-Poincaré mafia, cosmologists with analogue modelers, and astrophysicists with space-time foamers. There is by now a wide variety of approaches examining different possible phenomenological effects. In the longer run, it should be possible to find out whether some of them can be understood as arising from the same concept, such that their applicability can be extended to cover a broader range of phenomenology. The meeting was quite successful in fostering the discussion about the relation between different models.

One of the main points was raised by Daniel Sudarsky while talking about the 'Shrtvomings of the Standard Lore' (a mentionable 3 sigma event of typos):

    "The QG arena is the natural place to put things you don't understand."

Well, I hope this conjecture doesn't apply to the next postdoc hires. But more seriously, a recurring question in the field is of course whether a model actually has something to do with quantizing gravity, or whether it is just some generally possible speculation for physics beyond the standard model:

John Moffat: "Why violate Lorentz invariance?!"
Jens Niemeyer: "Oh, just because everybody does it?"

John Ellis

My favorite among the talks was Steve Giddings', who gave a very nice introduction to high energy scattering processes and black hole formation. John Ellis talked about the prospects of testing quantum gravity, and a special event was the talk by Aaron Chou, who reported on new results from the AUGER collaboration that confirm the GZK cutoff. Giovanni Amelino-Camelia brought us up to date with the progress in Deformed Special Relativity, and Matt Visser gave us good advice on how to deal with our office mates:

    "If you want to give your colleagues a heart attack, you talk about the
    velocity of the sub-quantic aether."

It was a very lively and interactive meeting that ran very smoothly thanks to D, K and A's help with the organization. The worst disaster was that a participant lost his dinner voucher.

ESQG 2007

All of the talks were recorded and can be found on the PI websites.

Thursday, November 22, 2007

Happy Thanksgiving!

I missed the Canadian Thanksgiving, I missed the German one, so let me try the US one! This year, I am very grateful for the GZK cutoff being where it's supposed to be. I am grateful to Mike Lazaridis for paying for our coffee and cookies, to the guys who fixed the air conditioning in my office, and to SISSA for the travel reimbursement. I thank the Maple software engineers for the tensor package, and the inventors of Facebook for making it possible that Stanford Profs throw turkeys at me. But most of all, I am thankful for having such a lovely husband who stays up all night because his wife is on East Coast time and never reachable before midnight.

Wednesday, November 21, 2007

Blaise Pascal, Florin Périer, and the Puy de Dôme experiment

The peak of the Puy de Dôme in central France (wikipedia.fr)
November days can be depressing, when thick fog is hanging in town for days and there is no real daylight. Here in Frankfurt, one can at least try to escape, and with some luck, the top of the Feldberg is above the mist, and in the sun. November 1647 in Paris may have been similarly gloomy, and made Blaise Pascal daydream of the mountain peaks around Clermont-Ferrand, a town in provincial Auvergne where he had been born in 1623 and grown up, and where his sister and her husband Florin Périer were still living.

What is known for sure, however, is that in that fall, Pascal had the idea for an experiment that, simple as it may be, nevertheless revolutionised our knowledge about the atmosphere and atmospheric pressure. So, on November 15, he sat down and wrote a long letter to his brother-in-law to persuade him to conduct this experiment. It was nothing for an armchair scientist, however, since it involved ascending the more than 1000 meters to the top of the Puy de Dôme, the highest mountain in the vicinity of Clermont-Ferrand.

From Rome ...

Gaspare Berti's experiment in Rome (via the Institute and Museum of History of Science, Florence).


A few years before, engineers in Italy had begun seriously to wonder why they could not succeed in building suction pumps that would be able to lift water, for use in fountains or for the supply of buildings, to a height of more than about 10 metres. They even had asked Galileo about this, but he could merely confirm the fact, and not provide a clear answer. In Rome, an amateur scientist named Gasparo Berti set up a series of experiments to study this curious phenomenon in detail. He used a long tube, sealed at one end, filled it completely with water, immersed the open end in a tub also filled with water, and then brought the tube into a vertical position. As he noted, some of the water poured into the tub, but not all, leaving a vertical column of water with a height of about 10 metres in the pipe - this was, obviously, exactly the limiting height of the pumps that had puzzled the engineers.

At the time, in the early 1640s, these experiments caused quite some excitement. The big question was: is the space at the top of the pipe really empty? And what prevents the water from pouring out completely into the tub? According to the still prevailing physical theories going back to Aristotle, nature abhors the vacuum - there cannot be an empty space; thus, the space above the water line in the pipe has to be filled with some substance (not quite wrong, as we know today, since this space is filled with water vapour at the small vapour pressure of room temperature, but this was mostly not thought to be the explanation). And maybe, nature does not like to fill space with this substance, and that stops the water from pouring out? However, some people also argued that perhaps the atmospheric pressure - the weight of the air in the atmosphere, which had been known for a few years to have a measurable density - may be the culprit instead. There was a definite need for more experiments.

Torricelli-type experiments with pipes filled with mercury - the height of the mercury level is independent of the shape of the vessel, or the inclination angle (via the Institute and Museum of History of Science, Florence).
... via Florence

In Florence, Evangelista Torricelli, a student of Galileo, had the ingenious idea to replace the water in Berti's experiment by mercury - a fluid about 13 times denser than water. By this procedure, Berti's setup became much easier to handle - what was necessary now were vessels just a tenth the size of those used by Berti. When Torricelli repeated the Berti-type experiment, he found that the column of mercury dropped to a height of about 760 millimetres - that was the origin of the instrument we now call a barometer. But here is the most interesting result: the height of the mercury level in the pipe above the level in the tub does not depend on the inclination of the pipe, nor does it depend on the apparently empty volume above the mercury level in the pipe. This was easy to see when the experiment was repeated with vessels of all kinds of sizes and shapes. Now, especially this second result was very hard to reconcile with any concept of the abhorrence of the vacuum - apparently, the size of this vacuum didn't play a role at all. But was it indeed the atmospheric pressure, the weight of the air around us, which stopped the mercury from completely pouring out of the pipe into the tub? It was a prime candidate, but how could one know that for sure?

and Rouen ...

News about Torricelli's experiments spread quite fast across Europe: travelling scholars told their colleagues about them, and some scientists who already maintained regular newsletter services sent around descriptions to their interested readers. Blaise Pascal lived at the time in Rouen, some miles west of Paris, where his father worked as a tax collector for the town. Both father and son had a vivid interest in science, and they heard the news from Italy from a friend who visited them and suggested that they join forces to repeat the experiments of both Berti and Torricelli. In fact, Rouen was a well-suited place to do so, because it was the location of the best glass manufactory in France at the time, and high-quality glass jars were essential for successful experiments.

So, in early 1647, the Pascals repeated and refined Berti's experiment with big glass tubes filled with water and with red wine, as well as Torricelli's experiments with mercury. Some of the experiments were done in public, with Rouen citizens as interested witnesses. Especially the experiment with wine aroused much interest - also for science's sake, since some had argued that the empty space at the top of the tubes would be filled with the vapour of the fluid (correctly, in fact), and that this vapour would push the fluid away (not true) - thus, the level of the more volatile wine should be lower than that of water. In the experiment, the column of wine was higher - because, as Blaise Pascal explained, the density of wine is lower than that of water, and thus a higher column of wine is required to balance the same external atmospheric pressure. Pascal, like many others, was convinced that the pressure of the air kept the fluids from pouring out of the tubes - but he still needed a way to prove this.


... to the Puy de Dôme

There is a debate among historians about who first suggested conducting the Torricelli experiment on a mountaintop. Some argue that it was Descartes' idea, some assign priority to Pascal. Anyway, while writing up a report on the Rouen experiments and discussing their results with other scholars some time in late 1647, Pascal understood that if the weight of the air is indeed the driving force in all these experiments, it should be lower the higher the place where the experiment is done, since then the layer of air above is thinner. Thus, the column of mercury in Torricelli's experiment should be the lower the higher the place where the experiment is performed. Pascal had no clue how big the effect might be, but he thought, and hoped, that the roughly 1000 metres of difference in altitude between his hometown of Clermont-Ferrand and the peak of nearby Puy de Dôme might be enough. Hence the letter to his brother-in-law, Florin Périer.

I am not sure if it took Périer some time to warm to the idea of carrying some pounds of mercury and fragile glasswork to the top of Puy de Dôme just because his brother-in-law, sitting comfortably at home in Paris, had some new ideas about esoteric topics such as the weight of the air and empty space. The experiment was performed only about one year after Pascal's letter - but for sure, after it was done, Périer like everyone else was excited about its outcome.

On September 19, 1648, Florin Périer and some friends performed the Torricelli experiment on top of Puy de Dôme in central France. The height of the mercury column was 85 mm less than in Clermont-Ferrand at the base of the mountain, about 1000 metres below. (From Louis Figuier, Les merveilles de la science, Vol. 1, 1867. According to this illustration, Périer and company not only climbed up the mountain, they also travelled in time, since their clothes follow the latest fashion of the 18th century.)


Finally, on Saturday, September 19, 1648, Florin Périer and some of his friends from Clermont-Ferrand embarked on the experiment. Early in the morning, they measured the height of the mercury column in two Torricelli experiments at a low-lying place in town, the Jardin des Minimes, the garden of a monastery - it was 711 mm. While one of the instruments was left behind there and observed during the day by a monk, the other was carried to the top of the Puy de Dôme. To the big surprise of all, there, about 1000 metres higher than where they had started, the height of the column was only 627 mm! Florin and his friends repeated the measurement several times, and took several measurements on their way back. It was all consistent: while they climbed down the mountain again, the column of mercury climbed up in the glass tube, and back at the monastery, it was again at 711 mm, the height the stationary reference instrument had held during the whole day.

Florin Périer was so surprised and amazed by this big effect that he repeated the experiment the next day. This time, less arduously and more fitting for a Sunday, he carried the instrument only the 50 metres to the top of the tower of the cathedral of Clermont-Ferrand. This difference in height was enough to be clearly measurable, about 4 mm. Blaise Pascal, when hearing of the result, immediately set out to reproduce the experiment at the Tour Saint-Jacques in Paris, where a statue now pays tribute to Pascal and the experiment.

The results of the Puy de Dôme experiment provided very strong evidence that it is indeed the weight of the air, thus the atmospheric pressure, which balances the weight of the mercury column in Torricelli's experiment. Hence, Torricelli's instrument measures this pressure - it is a barometer. And since the change in pressure with height is very well detectable, the barometer serves as an altimeter at the same time - as it is still used in aviation today.
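
As a quick modern-day sanity check (my own rough estimate, not part of the historical account), the isothermal barometric formula p(h) = p0·exp(-Mgh/RT) reproduces both of Périer's measured drops reasonably well. The temperature and the constants below are assumptions chosen for illustration:

    import math

    M = 0.029      # kg/mol, molar mass of air (assumed)
    g = 9.81       # m/s^2, gravitational acceleration
    R = 8.314      # J/(mol K), gas constant
    T = 283.0      # K, assumed mean temperature of the air column

    p0 = 711.0     # mm Hg, measured at the Jardin des Minimes
    for h, label in [(50.0, "cathedral tower"), (1000.0, "Puy de Dome summit")]:
        p = p0 * math.exp(-M * g * h / (R * T))
        print("%s: predicted %.0f mm Hg, a drop of %.0f mm" % (label, p, p0 - p))

With these assumed numbers the predicted drops come out at roughly 4 mm for the cathedral tower and roughly 80 mm for the summit, in the same range as the 4 mm and 84 mm that Périer measured.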

In appreciation of the contributions of Torricelli and Pascal, two units of pressure have been named after them: one Torr, now officially out of use, is the equivalent of one "mm Hg", the pressure exerted by a mercury column with a height of one millimetre. And the derived SI unit for pressure is the Pascal, where 1 Pa is the pressure of a force of one Newton exerted on an area of one square metre.

As for the nature of the empty space above the level of the fluid in the barometer, the situation was not immediately settled after the Puy de Dôme experiment. Descartes, and later his students, insisted that it was not empty, but filled with some aether, which was just everywhere. Now we know that up to the vapour pressure, which can be minimised by reducing the temperature, it is indeed a vacuum - but the vacuum is complicated anyway.








Tuesday, November 20, 2007

Sand Fantasy



More? Try this, this, or this. For info, and better resolution videos, see Sandfantasy.com.

Sigh. I always loved to draw in sand.

Sunday, November 18, 2007

The Cosmological Constant

wanted
Dead or Alive

The Cosmological Constant

Aka: Einstein's biggest blunder [1], or the vacuum energy

Committed Crimes: Being 'the most embarrassing observation in physics' [2]; being 'the worst prediction in physics' [3]; being either too small, or too large, or too coincidental; being bad for astronomy, and being generally an annoyance

Last seen: In high redshift supernovae and the WMAP data



Preliminaries

Watch out, here comes an equation!
    R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R = 8\pi G \, T_{\mu\nu}


Apologies if I scared any unprepared readers but I *really* can't do without. These are Einstein's field equations [4] of General Relativity, and aren't they just pretty? Here is in a nutshell what they say:

The quantities on the left side are the g, which is the metric of our spacetime. The metric tells us how we measure angles and distances. Then there are the R's with varying numbers of indices. They describe the curvature of space-time, and are built up of second order derivatives of the metric. Thus, the left side has dimension Energy² [5]. If space-time is flat, the curvature is identically zero.

On the other side of the equation we have a T which is called the stress-energy tensor, and describes the matter and energy content in the space-time. It has dimension energy per volume, and contains energy density, as well as pressure and energy flux. In energy units it has dimension Energy⁴. The G is a coupling constant, and one now easily concludes it has dimension of 1/Energy². If one investigates the Newtonian limit, one finds that G = 1/MP², where MP is the Planck Mass.

Thus, the equations say how matter and energy (right side) affect the curvature of the space-time we live in (left side). If space-time is flat, there are no matter sources (Tμν = 0). An important point is that you cannot just choose matter that moves as you like, because it generally will not be consistent with what the equations say. You can only choose an initial configuration, and the equations will tell you how that system will evolve, matter and space-time together. Different matter types have different effects, and result in different time-evolutions.

That's the thing to keep in mind for the next section: different stuff causes different curvature. The details are what you need the PhD for.

Now cosmology is an extremely simplified case in which one describes the universe as if it was roughly speaking the same everywhere (homogeneous), and the same in all directions (isotropic). This is called the Cosmological Principle, and if you look around you, it is evidently complete nonsense. However, whether or not such a description is useful is a matter of scales.

Look e.g. at the desk in front of you. It looks like a plane surface with a certain roughness. If you look really closely you'd find lots of structure, but if you are asking for some large scale effect - like how far your coffee cup will slide - the exact shape of the tiny hills and dips in the surface doesn't matter. It's the same with the universe. If you look from far enough away, the finer details don't matter, galaxies are roughly equally distributed over the sky. With the cosmological principle, one neglects the details of the structures. One describes matter by an average density ρ and pressure p that do not depend on the position in spacetime. They have the same value everywhere, but can depend on time.

We have today extremely strong evidence that the universe is expanding, thus its volume grows. The amount of this expansion is usually measured by the scale factor a(t), a dimensionless, increasing function of time. The universe's expansion is the same in all three spatial directions, so a given volume grows with ~ a(t)³. When the volume grows, stuff inside it thins out. The energy density of ordinary matter drops just inversely to the volume, ~ 1/a(t)³.

The energy density of radiation drops even faster, because not only does the volume increase - in addition, the wavelength also gets stretched, and therefore the frequency drops by an additional factor of 1/a(t). Taken together, the energy density of radiation drops with 1/a(t)⁴. Thus, the density of all kinds of matter that we know and have observed on Earth should drop as the universe expands.

Because the expansion of the universe causes light to be stretched to longer wavelengths, and to be shifted towards lower - 'redder' - frequencies, cosmologists like to date events not by the time t, but by the redshift, commonly denoted as z.
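
To summarize these scalings as formulas (writing a_0 for today's value of the scale factor, a piece of notation introduced here for brevity):

    \rho_{\rm matter} \propto \frac{1}{a(t)^3}\,, \qquad \rho_{\rm radiation} \propto \frac{1}{a(t)^4}\,, \qquad 1 + z = \frac{a_0}{a(t_{\rm emission})}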


The Cosmological Constant and its Relatives



The Cosmological Constant (CC) is usually denoted with the Greek symbol Lambda, Λ. It is the constant in front of an additional term that can be added to Einstein's field equations. Depending on your taste, you can either interpret it as belonging on the left 'space-time' side, or the right 'matter' side of the equation. For the calculations this doesn't matter, so let's put it on the matter side:

    R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R = 8\pi G \left( T_{\mu\nu} - \Lambda\, g_{\mu\nu} \right)

What have we done? Well, consider as before the case of 'empty' space, where Tμν is equal to zero. This empty space can no longer be flat: if it was, the curvature and thus the left side would vanish. But the right side doesn't. Thus, with the CC term, empty space is no longer flat. It is therefore very tempting to interpret this as the energy density of the vacuum which creates curvature even if 'nothing' is really there. As is appropriate for an energy density, the dimension of the CC is Energy⁴.

If one plugs the matter content and the CC term into Einstein's field equations, one obtains the Friedmann equations that relate the behavior of the scale factor to the density and pressure
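
Keeping the conventions of this post (Λ treated as a vacuum energy density on the matter side, and ℏ = c = 1), one common way to write them is:

    \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\big(\rho + \Lambda\big) - \frac{\kappa}{a^2}\,, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,\big(\rho + 3p - 2\Lambda\big)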

The constant κ appearing here is either 0, +1 or -1, depending on the spatial curvature. The first equation has a square on the left side, meaning that the right side needs to be positive. The second equation determines the acceleration of the universe. Note that for usual matter, energy density and pressure are positive. Thus, the only factor that can make a positive contribution to the acceleration is the CC.

The most stunning fact about the Cosmological Constant is that it is constant. No kidding: remember we've seen before that all kinds of matter that we know dilute when the universe expands.

But the Cosmological Constant is constant.

The corollary of this insight is that if you start with an arbitrary amount of usual matter, sooner or later its density will have dropped to the value of the CC term. And if you wait even longer, the CC term will eventually dominate, causing an eternally accelerating universe. Who could possibly want that?
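
To see roughly when this takeover happens, here is a minimal numerical sketch. The present-day fractions of matter and CC (about 30% and 70%) are assumed values, meant to be representative of current fits:

    # matter dilutes as 1/a^3 while the CC term stays constant,
    # so the CC term wins once the universe has expanded enough
    Omega_m, Omega_L = 0.3, 0.7   # assumed present-day fractions (a = 1 today)

    a = 0.1
    while Omega_m / a**3 > Omega_L:   # matter density still above the CC term
        a += 0.001
    z = 1.0 / a - 1.0                 # redshift corresponding to that scale factor
    print("matter and CC term have equal density at a ~ %.2f, i.e. z ~ %.2f" % (a, z))

With these numbers the two densities cross at a redshift of about 0.3, i.e. in the rather recent past of the universe.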

A term with a CC is not the only way to get a positive contribution to the universe's acceleration. Unusual equations of state that relate ρ with p in a way no standard matter could do would have a similar effect. The family of stuff with such behavior is the mafia of the 'dark energy'. A lot of creativity has gone into its investigation. The suspects in the family include quintessence, k-essence, h-essence, phantom fields, tachyon fields, Chaplygin gas, ghost condensates, and probably a couple more aunts and uncles that I haven't been introduced to.


Observational Evidence

The CC appears in Einstein's field equations and can be treated as a source term. For this analysis it is irrelevant whether the term might actually be of geometrical origin. In this context, the constant Λ is just a parameter to describe a specific behaviour.
    Supernovae redshift

Supernovae of type Ia show a very uniform and reliable dependence of luminosity on time. This makes them ideal candidates for observation, as they allow one (to a certain extent) to disentangle the source effects from the effects occurring during light propagation. The emitted light travels towards us, and while it does so it has to follow the curvature of space-time. The frequency and luminosity that we receive then depend on the way the universe has evolved while the light was traveling. From the observation one can then extract knowledge about the curvature, and thus about the matter content of the universe.

As it turns out, distant supernovae (z > 0.5) are fainter than would be expected for a decelerating universe with vanishing CC. If one explains the data with the dynamics of the scale factor, one is led to the conclusion that the universe presently undergoes accelerated expansion. As we have seen above, this can only be caused by a positive cosmological constant. In addition, the data also show that the transition from deceleration to acceleration happened rather recently, i.e. around z ~ 0.3. For more details, see Ned Wright's excellent tutorial.
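
The gist of the argument can be illustrated with a toy calculation (my own sketch with assumed parameters, not the actual analysis): compare the luminosity distance to a supernova at z = 0.5 in a flat matter-only universe with that in a flat universe containing a CC. A larger distance means a fainter supernova:

    import math

    def E(z, Omega_m, Omega_L):
        # dimensionless expansion rate H(z)/H0 for a spatially flat universe
        return math.sqrt(Omega_m * (1 + z)**3 + Omega_L)

    def luminosity_distance(z, Omega_m, Omega_L, steps=1000):
        # d_L in units of the Hubble distance c/H0, from a simple midpoint integration of dz/E(z)
        dz = z / steps
        comoving = sum(dz / E((i + 0.5) * dz, Omega_m, Omega_L) for i in range(steps))
        return (1 + z) * comoving

    z = 0.5
    d_matter = luminosity_distance(z, 1.0, 0.0)       # flat, matter only, no CC
    d_lambda = luminosity_distance(z, 0.3, 0.7)       # with a CC (assumed fractions)
    delta_mag = 5 * math.log10(d_lambda / d_matter)   # larger distance -> fainter supernova
    print("d_L ratio: %.2f, i.e. about %.2f mag fainter with a CC" % (d_lambda / d_matter, delta_mag))

With these assumed parameters the supernova comes out a few tenths of a magnitude fainter in the universe with a CC.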

Last November, the observations of high redshift supernovae could be extended above z~1, which allows us to conclude that the dark energy component at these times didn't do anything too spectacular. For more, see Sean's post Dark Energy has long been Dark-Energy-Like.
    The Age of the Universe

A recurring argument for a CC has come from the age and present expansion rate of the universe. The presence of the CC influences the expansion of the universe. If it exists, the age of the universe one can extract from today's value of the Hubble parameter would be larger than without a CC. The age of the oldest stars that have been observed seems to indicate the necessity of the CC. However, this analysis is not without difficulties since determining the age of these stars, as well as the present value of the Hubble parameter, is still subject to uncertainties that affect the conclusion [6].
    The Cosmic Microwave Background

WMAP measures the temperature inhomogeneities imprinted in the Cosmic Microwave Background (CMB). These very low temperature photons have been traveling freely since the time the universe became transparent to them, called the 'surface of last scattering'. The photons' distribution shows small fluctuations on specific scales, a snapshot of the universe as it was only 300,000 years old. Commonly depicted are the temperature fluctuations as a function of the multipole moment, roughly speaking the angular size of the spots (for a brief intro, see here). Recall from above that different stuff causes different curvature. Thus, from these structures in the CMB one can draw conclusions about the evolution, and thus about the matter content, of the universe.

Since the CC only became important recently, its dominant effect is to change the distance to the last scattering surface, which determines the angular scale of the observed CMB anisotropies. Most prominently, it affects the position of the first peak in this figure. Based on current measurements of the Hubble scale, the WMAP data is best fitted by a spatially flat universe in which 70% of the energy content is described by the CC term.

The value of the CC that can be inferred from the presently available data is approximately Λ^(1/4) ~ 10⁻¹² GeV.
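
To put that number into more everyday units, here is a rough conversion (my own, using the slightly more precise value Λ^(1/4) ≈ 2×10⁻³ eV and rounded constants, all assumed):

    # convert Lambda^(1/4) ~ 2e-3 eV = 2e-12 GeV into an energy density per cubic metre
    hbar_c_m = 1.97e-16                      # metres per 1/GeV (from hbar*c ~ 0.197 GeV fm)
    Lambda_quarter = 2e-12                   # GeV, assumed rounded value
    rho = Lambda_quarter**4 / hbar_c_m**3    # GeV per cubic metre
    m_proton = 0.938                         # GeV
    print("vacuum energy density ~ %.1f GeV/m^3 ~ %.1f proton masses per m^3" % (rho, rho / m_proton))

That is of the order of a couple of proton masses' worth of energy per cubic metre - utterly negligible on laboratory scales, yet dominant when averaged over the universe.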



Committed Crimes

    First Crime: Diverging
In my previous post on the Casimir Effect, I briefly talked about the vacuum in quantum field theory. It is not just empty. Instead, there are constantly virtual particles being created and annihilated. One can calculate the stress-energy tensor of these contributions. It is proportional to the metric, like the CC term. If you calculate the constant itself, the result is, embarrassingly, infinite.
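
Schematically, in the conventions used here (ℏ = c = 1), every field mode of momentum k contributes a zero-point energy of half its frequency, so the vacuum energy density is

    \rho_{\rm vac} = \int \frac{d^3k}{(2\pi)^3}\, \frac{1}{2}\sqrt{k^2 + m^2}\,,

and since the integrand grows like k³, the integral diverges like the fourth power of the upper integration limit.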

    Second Crime: Being large


Infinity is not a particularly good result. Now you might argue that we can't trust quantum field theory up to arbitrarily high energy scales, because quantum gravity should become important there. If you drop virtual particles with energy higher than the Planck scale, you find that Λ should be of the order MP⁴, a huge value.


If one neglects gravity, one can argue that the absolute value of the energy density can't be observed, and one should only be interested in energy differences. One thus can set 'by hand' the vacuum energy to zero, a process that is called renormalization. Unfortunately, one can't do this with gravity, because all kinds of energy create curvature, and it is not only energy differences that are observable. However, one can't take the above huge value seriously. If the CC was really that large, we wouldn't exist. In fact, this value for the CC is 120 orders of magnitude larger than the observed one. That is the reason it has been called 'the worst prediction in physics'.
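
The mismatch is easy to reproduce with round numbers (all values below are assumed, rounded figures); depending on the exact choices one lands somewhere in the vicinity of 120 orders of magnitude:

    import math

    M_P = 1.2e19                 # GeV, Planck mass (rounded)
    Lambda_qft = M_P**4          # naive vacuum energy with a Planck-scale cutoff, in GeV^4
    Lambda_obs = (1e-12)**4      # observed value, using Lambda^(1/4) ~ 1e-12 GeV

    print("mismatch: about %d orders of magnitude" % round(math.log10(Lambda_qft / Lambda_obs)))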

    Third Crime: Being Small
It had long been hoped that the CC was actually zero, possibly protected by some yet unknown symmetry. Supersymmetry, e.g., would do it if it was unbroken. When I finished high school the status was: the CC is zero. However, observations now show that it is not actually zero. Instead it is very small, but nonzero, far away from any naturally occurring scale. Who ordered that?
    Fourth Crime: Why now?!
Why is the CC such that it just recently became important, as can be inferred from the supernovae data? This is also called 'the coincidence problem'.
    Fifth Crime: Other Coincidences
Some other coincidences that make my colleagues scratch their heads:
  • The CC is the fourth power of an energy scale, which happens to be close to the scale at which the (differences of the squared) neutrino masses have been measured, and where the absolute masses of the lightest neutrinos are believed to fall. Coincidence?

  • It further turns out that the ratio of that mass scale Λ^(1/4) to the vacuum expectation value (VEV) of the Higgs is about the same as the ratio of the Higgs VEV to the Planck mass. Coincidence?

  • And then the CC is about the geometric mean of the (fourth power of) the Hubble scale and the Planck mass [7]. Coincidence?
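
A rough numerical check of the last of these coincidences (with assumed, rounded values for the Hubble rate, the Planck mass, and the observed CC) shows that H0²·MP² and the observed Λ indeed come out within an order of magnitude or two of each other - remarkably close, given that the two input scales differ by some 60 orders of magnitude:

    hbar = 6.58e-25                 # GeV*s, to convert 1/s into GeV
    H0 = 70.0 / 3.086e19            # 70 km/s/Mpc expressed in 1/s (assumed value)
    H0_GeV = H0 * hbar              # Hubble scale in GeV
    M_P = 1.2e19                    # GeV, Planck mass (rounded)
    Lambda_obs = (2e-12)**4         # GeV^4, from Lambda^(1/4) ~ 2e-3 eV (assumed)

    geometric_mean = H0_GeV**2 * M_P**2   # geometric mean of H0^4 and M_P^4
    print("H0^2 M_P^2 ~ %.1e GeV^4, observed Lambda ~ %.1e GeV^4" % (geometric_mean, Lambda_obs))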



But

This data analysis is of course not completely watertight. To begin with, one has to notice that all of the above rests on General Relativity. If instead the equations of motion were modified, it might be possible to do without dark energy. However, studies in this direction so far have not been particularly convincing, as consistent modifications of GR are generally hard to do.

But besides this, the data interpretation is still subject of discussion. An input in all the analyses is the value of the Hubble constant. It has been argued for various reasons that the presently used value should be corrected by taking into account local peculiar motions, or possible spatial variations. Measuring the Hubble value through time delays in gravitationally lensed systems, e.g., yielded a significantly lower result.

Likewise, the supernovae data could be biased through the sample, or the effect could stem from other effects during propagation, like dust with unusual properties. For a recent summary of all such uncertainties, see [6].

Generally, one can say that there remains the possibility that the data can be fitted by other models. But to date the CC term is the simplest, and most widely accepted, explanation we have. One should keep in mind however that the desire to come up with a theory that produces a kind of dark energy uses GR as a relay station. Instead it might be possible that a satisfactory explanation reproduces the observational facts, yet is not cast into the form of GR with dark energy because it eludes the parametrization implied in writing down the Friedmann equations.



Summary

Our observations can currently be best described by the so-called ΛCDM model. It has a Cosmological Constant, and a significant amount of cold 'dark matter' (explained in our earlier post). The parameters in this model have been well constrained by experiments, but the theoretical understanding is still very unsatisfactory. ΛCDM is a macroscopic description, but we have so far no good theory explaining the microscopic nature of dark energy or dark matter.

Investigations will be continued.


Further Reading:


[1] George Gamow, My World Line (Viking, New York). p 44, 1970
[2] Ed Witten, quoted from Renata Kallosh's talk
"Towards String Cosmology", slide 5.
[3] Lawrence Krauss, quoted from
"Physicists debate the nature of space-time", NewScientist Blog, 02/20/07
[4] It's a plural because there is one for every choice of indices μν, and each index runs over 3 space and one time dimension, from 0 to 3. This would make for 16 equations, but the set is symmetric under exchange of the indices, so actually there are only ten different equations.
[5] This is a theoretical physics blog, so Planck's constant and the speed of light are equal to one. This then means the dimension of length is that of time, and both are an inverse of energy. If that doesn't make sense to you, don't worry, it's not really relevant. Just accept it as a useful way to check the order of coupling constants.

[6] For details, see arxiv: 0710.5307
[7] See T. Padmanabhan, hep-th/0406060



PS on 'Renormalization'

As a post scriptum to my last week's post Renormalization, here is a quotation I just came across:

Thought is constantly creating problems [...] and then trying to solve them. But as it tries to solve them it makes it worse because it doesn't notice that it's creating them, and the more it thinks, the more problems it creates.

After Math: The Lisi-Peak

The graph below shows the visitor statistics for Backreaction during the last month:



The usual traffic to this site is almost periodic over the week. The minimum is on Saturdays, with around 600 visits, a peak on Mondays around 1000, a lower second peak on Wednesdays, and then the next weekend drop. If we have a longer, well-elaborated post with subsequent discussion, this shows up as a deviation from the standard curve with a one-day delay.

Unfortunately, I can't show you the annual traffic, because our first SiteMeter died in April for reasons I still don't understand. The overall trend, however, seems to be subject to many search engine details that lead people here who are actually looking for something different. In this respect it seems to matter that this blog runs at blogspot, and shows up very prominently for all kinds of keywords, e.g. "first day of fall". We had a fun period last year when we were the first hit on Google Images for "Map of America". Either way, this is just background noise that pushes the statistics. Occasionally, it seems, one or the other accidental visitor browses the archives, but most, I am afraid, are just annoyed by Google's inefficiency.

On the other hand, I found that the blogger 'search' field is completely useless and I search this blog more efficiently by entering a keyword into Google together with the tag 'Backreaction'.

To come back to the curve above. If you look closely, you'll see a first deviation from the usual periodicity on Tuesday, Nov 6th, which I think is a delayed response to my post on the Casimir Effect. The post about Garrett's paper went out Tuesday evening, and so caused the higher Wednesday peak on Nov. 7th. The unusual peak on Nov. 9th is the link from Peter Woit's blog, which increased the average traffic through the following days. On Nov. 14th the Telegraph article went out. The NewScientist article was subscription-only until Nov. 16th. The following peak was caused by a multitude of links, among others from digg/reddit, various other well-frequented blogs, traffic through Google searches for "Garrett Lisi", and a lot of links from around the science blogosphere (most show up at the very bottom of the comment section, in case you are interested).

I hope one or the other random visitor got hooked :-)

Friday, November 16, 2007

Universal Scaling at Deadlines

Nature Physics can have a nice, dry sense of humour - here is a plot from the "Correspondence to the Editor" of the November issue, "Conference registration: how people react to a deadline":


Source: Alfi, Parisi, Pietronero, Nature Physics 3, 746 (2007)

Valentina Alfi, Giorgio Parisi and Luciano Pietronero have analysed the distribution of registrations for two conferences (red triangles and blue circles in the main figure), and after a normalisation with respect to the total number of participants, they can describe this distribution by a simple model, assuming that the "pressure to register" is inversely proportional to the time left before the deadline - that's the solid line.

This same simple model seems not to work for the distribution of the payment of the conference fee - the dashed line in the inset. However, since the registration is reversible, while the payment is not, there may be a tendency to postpone the payment until closer to the deadline. Describing this tendency to postpone with a Boltzmann factor, the model can also fit the distribution of payments very well - that's the solid line in the inset.

Alfi, Parisi and Pietronero conclude, "People’s behaviour around a deadline does indeed seem to be universal. If the action is reversible ..., the pressure to do it is inversely proportional to the available time before the deadline. For an irreversible action ..., there is a tendency to postpone it until even closer to the deadline, which can be described by a utility function." And they even come up with a rule of thumb to estimate the final number of participants "that may be useful for organisers of future events": Just extrapolate the initial linear behaviour and multiply it by three.
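
To get a feeling for the model, here is a small sketch (my own, with assumed numbers, not the authors' code or data): take the daily registration rate to be proportional to the inverse of the time left before the deadline, with a one-day cutoff so the rate stays finite on the last day:

    T = 60          # days between announcement and deadline (assumed)
    cutoff = 1.0    # days; keeps the rate finite right at the deadline (assumed)

    rate = [1.0 / (T - t + cutoff) for t in range(T)]   # relative daily registration rate
    total = sum(rate)
    cumulative = []
    s = 0.0
    for r in rate:
        s += r
        cumulative.append(s / total)                    # fraction registered so far

    # the organisers' rule of thumb: extrapolate the initial linear trend to the deadline
    linear_extrapolation = cumulative[0] * T
    print("registered halfway to the deadline: %.0f%%" % (100 * cumulative[T // 2]))
    print("final count / linear extrapolation: %.1f" % (1.0 / linear_extrapolation))

With these assumed numbers only about a fifth of the participants have registered by the halfway point, and the ratio of the final count to the linear extrapolation comes out of the same order as the factor of three quoted above.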

People can behave so predictably!





Source: Nature Physics 3, 746 (2007), doi:10.1038/nphys761 (alas, subscription required for the full text)

Wednesday, November 14, 2007

Anonymity

If you're around in the blogosphere you have probably gotten used to the anonymous background noise of hostile, stupid, or just completely nonsensical comments.

I certainly understand that anonymity can be necessary in certain situations, and I myself have made use of it. Mostly when commenting on problems with former co-workers, in cases where I thought even if I don't name them explicitly it would be easy to find out who I was talking about. But most of the anonymous comments I see are anonymous for no other reason than cowardice. I believe these people chose anonymity because they wish not to be responsible for what has been said, and they are afraid to make a mistake. I don't think this is a good development - not for the scientific content, and not for the atmosphere of the discussion.

To begin with, let me differentiate between 'anonymity' and 'pseudonymity'. Pseudonymity means you choose a nickname and stick to it, even if you don't connect it to your real name, your job, or affiliation. I have no problem with that. Except that occasionally I would of course like to know more - but that's the story of my life. If you have a pseudonym, you'll create a history and a reputation. If you come back, I will know what scientific knowledge I can expect, will already know your opinion on various matters, and it will be much easier to reply to your comments. In addition to this, if I am under time pressure I don't read anonymous comments. Why should I? There are 6 billion people in this world. Of course I distribute my attention primarily to people I know.

The very least you can do is choose one nickname and stick to it in the comments to a thread. Look, after the 10th 'Anonymous' it becomes really confusing. Can't you at least enumerate yourselves?

In case there are still people who haven't figured it out: if you write a comment you get 3 options - blogger ID, Other, and Anonymous. If you don't have a blogger ID and don't want to get one, you can comment under a nickname by choosing the option 'Other'. This does not require you to leave an email address. There are some of you among the frequent visitors (Klaus, changcho) who always comment as anonymous but sign with a name. You can do this, but in this case I will miss some of your comments, since they appear in my inbox as: Anonymous left a comment... and I ignore them. If you want to get a blogger ID, you don't need to write a blog for that, and I assure you you don't get any spam. The advantage is that the link to your profile confirms it's actually your comment.

If you are afraid to sign your criticism with your name, what light does this shed on our discussions? Writing anonymously no doubt tempts you to put less thought and less care into your argumentation. Before you leave the next anonymous comment, here or elsewhere, do me the favor of asking yourself whether it is really necessary.

"Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger does these heavy metals are some of the most toxic substances known to a life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it's good for the system by enabling the occasional whistleblower, or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

[...]

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible."

Tuesday, November 13, 2007

Photic sneeze reflex

Do you have to sneeze when looking into bright sunlight? Involuntary sneezing when exposed to bright light after adapting to the dark is called the photic sneeze reflex. The syndrome was first described in 1978, and seems to be much more common than has been generally recognized. It is believed to be inherited, but the specific genes involved have not yet been identified. If you want to sneeze for science, you can do so at UCSF.


And no, I don't.

Monday, November 12, 2007

Science Blogs in German

Back in 2005, when I started looking for blogs to feed my RSS reader, I was a bit disappointed that I couldn't find any interesting blogs about science in German. This situation did not change much for quite some time - my guess is that scientifically minded German bloggers write in English...

But right now, all of a sudden, a few new interesting blogs are emerging:
  • ScienceBlogs is opening a German branch, ScienceBlogs.de
    (Update, Nov. 13: The link is now password protected - ScienceBlogs.de seems not to be officially launched yet).
  • The publisher of Spektrum der Wissenschaft, the German edition of Scientific American, hosts two science group blogs, wissenslogs.de and brainlogs.de - the latter, as you might guess from the Denglish title, with a focus on neuroscience and psychology.
  • Besides these "organised blogs", wissenschafts-cafe.net is a newly established link collection to independent German-language scientific blogs.

If you read German, it may be worth following these developments. Keep in mind that the German word "Wissenschaft" is broader in scope than the English word "science" - besides the natural and social sciences, it also includes the humanities, and even business administration and law.

But my current favourite German-language blog is lesemaschine.de - a group blog of people reporting their progress in reading books - including The Road to Reality and All Books.

Sunday, November 11, 2007

Renormalization

Today, I want to write about a serious complication that occurs in two-particle interactions.


The gluon exchange depicted above is not as simple as it might seem, because virtual particles contribute to the process


The troublesome thing about these contributions is that there are in principle arbitrarily many of them with arbitrarily high energy


You can easily convince yourself that this process is UV divergent. One therefore sums up all the virtual contributions, and redefines the initial propagation of the exchange particles


This is called renormalization.

(Larger picture)
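
As an aside to the cartoon, the standard textbook bookkeeping behind that last step, written out schematically: summing the whole chain of self-energy insertions \Sigma into the free propagator is a geometric series,

    \frac{1}{p^2 - m^2} + \frac{1}{p^2 - m^2}\,\Sigma\,\frac{1}{p^2 - m^2} + \ldots = \frac{1}{p^2 - m^2 - \Sigma},

and the divergent pieces of \Sigma get absorbed into a redefinition of the mass and the field normalization - that redefinition is the renormalization.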

Saturday, November 10, 2007

Gravity Waves in the Sky

This is what I saw on Wednesday morning when I looked out of my window:



The sky was grey and overcast, but there was a distinct, wavy pattern in the cloud cover, like ripples on water, or on sand:



The similarity to waves on a water surface is not by chance - in fact, the physical phenomenon, called gravity waves [1] or buoyancy waves, is the same: If two layers of different fluids meet at an interface, disturbances of the interface can spread as waves. For waves on a water surface, the fluids are the water, and the air above it.

But the same can also happen at surfaces between layers of water with different density, for example with different salinity and temperature, or at the interface between distinct layers of air. If the upper layer of air is below the dew point and carries a closed cover of cloud, disturbances of the interface layer can show up as a wavy pattern in the ceiling - it's just the same as waves on water. However, since the differences in density between the two fluids are much smaller in the atmosphere than at the water-air interface, the wavelength of gravity waves in the atmosphere is much longer, and the frequency is lower.
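
To put a formula behind that last statement: for waves at the interface between a lower fluid of density \rho_2 and an upper fluid of density \rho_1 < \rho_2 (both layers deep, surface tension neglected), the frequency is

    \omega^2 = g\,k\,\frac{\rho_2 - \rho_1}{\rho_2 + \rho_1},

with k the wavenumber. For air over water the density ratio is about 1/800, and one essentially recovers the familiar \omega^2 = g k of water waves; for two air layers of nearly equal density the factor (\rho_2 - \rho_1)/(\rho_2 + \rho_1) is tiny, so at a given wavelength the oscillation is much slower.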

But they can look quite impressive, if they come as Giant Atmospheric Waves Over Iowa.




[1] Be careful not to mix up gravity waves - waves at the interface of fluids in a gravitational field - and gravitational waves - undulating disturbances of the space-time metric.

Thursday, November 08, 2007

News from AUGER

The AUGER collaboration has released a new analysis of their data on ultra-high-energy cosmic rays (UHECR). As reported in Science (9 November 2007: 896-897), they find correlations between the events of highest energies and active galactic nuclei (AGN), and are able to reject the hypothesis of an isotropic distribution of these cosmic rays at a confidence level of 99%. This reliably rules out speculations that these UHECRs originate in local, galactic sources. Though it had been expected, until now there was no experimental confirmation that they originate outside our galaxy.

What is really cute about their analysis is that this correlation with AGNs up to a distance of ~100 Mpc is present for UHECR with energies higher than a certain threshold, but not for those with lower energies: the correlation increases abruptly at an energy of about 5.7 x 10^19 eV, which coincides with the point in the recently reported energy spectrum of the observatory at which the flux is reduced by ~50%. To understand this feature, recall that the GZK-cutoff (for an introduction see here) predicts that the mean free path of protons drops dramatically with increasing energy once a threshold is crossed and the proton's energy is high enough to scatter off CMB photons and produce pions. In the energy range above ~8 x 10^19 eV, protons coming from farther away than ~90 Mpc should be scattered, lose energy, and never reach the earth. The vanishing of the correlation with AGNs around this threshold is thus an independent confirmation of the GZK-cutoff, on which we also reported in July.
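
For the curious, here is a rough estimate of where that threshold comes from (a back-of-the-envelope number, not part of the AUGER analysis): for a head-on collision with a CMB photon of energy E_\gamma, pion production p + \gamma \to p + \pi becomes kinematically possible once

    E_p \gtrsim \frac{(m_p + m_\pi)^2 - m_p^2}{4\,E_\gamma} = \frac{m_\pi\,(2 m_p + m_\pi)}{4\,E_\gamma}.

With m_p ≈ 938 MeV, m_\pi ≈ 135 MeV, and a typical CMB photon energy E_\gamma ≈ 6 x 10^-4 eV this gives roughly 10^20 eV; photons in the high-energy tail of the Planck spectrum bring the effective threshold down to a few times 10^19 eV - the right ballpark for the observed suppression.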

Aaron Chou from the AUGER collaboration gave a talk on the recent results today at our previously mentioned workshop. The recording and the slides should be available at the PI website soon.

More Info at the AUGER websites.

Update Nov 9th: The talk is now online, see PIRSA 07110054.

Tuesday, November 06, 2007

A Theoretically Simple Exception of Everything

Garrett Lisi, who was featured in our inspiration series back in August, has a new paper on the arxiv about his recent work:

    An Exceptionally Simple Theory of Everything
    arXiv: 0711.0770

I met Garrett at the Loops '07 in Morelia, and invited him to PI. He gave a talk here in October, which confirmed my theory that the interest in a seminar is inversely proportional to the number of words in the abstract. In his case the abstract read: "All fields of the standard model and gravity are unified as an E8 principal bundle connection," and during my time at PI it was the best attended Quantum Gravity seminar I've been at.

Anyway, since I've spent some time trying to understand what he's been doing (famously referred to as 'kicking his baby in the head') here is a brief summary of my thoughts on the matter.

Preliminaries

In the 1950s, physicists were faced with a confusing and still growing multitude of particles. After new quantum numbers had been introduced, it became clear that this particle zoo exhibited some kind of pattern. Murray Gell-Mann realized the particles could be classified using the mathematics of Lie groups. More specifically, he found that the baryons with spin 3/2 known at that time correspond to the weight diagram of the ten-dimensional representation of the group SU(3) [1].



He matched the nine known spin 3/2 baryons (4 Δs, 3 Σ*s, 2 Ξ*s) with the weights of this representation, but there was one particle missing in the pyramid. He therefore predicted a new particle, named Ω-, which was later discovered and had the correct quantum numbers to complete the diagram [2]. Because of the ten baryons in the multiplet, this is also known as the 'baryon decuplet'.
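
In modern quark-model language (which came only a few years later; I add this just for orientation), the decuplet appears in the decomposition of three flavour triplets,

    3 \otimes 3 \otimes 3 = 10 \oplus 8 \oplus 8 \oplus 1,

where the fully symmetric part is exactly the ten-dimensional representation whose weight diagram Gell-Mann matched to the spin 3/2 baryons.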


A similar prediction could later be made for the baryon octet, where the center of the diagram should be doubly occupied. The existence of the missing Σ0 was later experimentally confirmed.

Since then, the use of symmetry groups to describe nature has repeatedly proven to be an enormously powerful and successful tool. Besides being useful, it is also aesthetically appealing, since the symmetry of these diagrams is often perceived as beautiful [3].

GUTs and TOEs

Today we are again facing a confusing multitude of particles, though on a more elementary level. The number of what we now believe are elementary particles hasn't grown for a while, but who knows what the LHC will discover? Given the previous successes with symmetry principles, it is only natural to try to explain the presently known particles in the standard model - their families, generations, and quantum numbers - as arising from some larger symmetry group in a Grand Unified Theory (GUT). One can do so in many ways; typically these models predict new particles, and so far unobserved features like proton decay and lepton number violation. This larger symmetry has to be broken at some high mass scale, leaving us with our present day observations.
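
The classic example - added here just for illustration, since Garrett's construction discussed below is not an SU(5) GUT - is the Georgi-Glashow SU(5) model, in which one generation of standard model fermions fits into just two representations. Under the SU(3)xSU(2)xU(1) subgroup they decompose as

    \bar{5} = (\bar{3},1)_{1/3} \oplus (1,2)_{-1/2}, \qquad 10 = (3,2)_{1/6} \oplus (\bar{3},1)_{-2/3} \oplus (1,1)_{1},

and the extra gauge bosons of SU(5) mediate proton decay, which is why the symmetry has to be broken at a very high scale.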

Today's Standard Model of particle physics (SM) is based on a local SU(3)xSU(2)xU(1) gauge symmetry (with some additional complications like chirality and symmetry breaking). Unifying the electroweak and strong interaction would be great to begin with, but even then there is still gravity, the mysterious outsider. A theory which would also achieve the incorporation of gravity is often modestly called a 'Theory of Everything' (TOE). Such a theory would hopefully answer what presently is the top question in theoretical physics: how do we quantize gravity? It is also believed that a TOE would help us address other problems, like the observed value of the cosmological constant, why the gravitational interaction is so weak, or how to deal with singularities that classical general relativity (GR) predicts.

Commonly, gravity is thought of as an effect of geometry - the curvature of the space-time we live in. The problem with gravity is then that its symmetry transformations are tied to this space-time. A gauge transformation is 'local' with respect to the space-time coordinates (it is a function of x), but transformations of space-time are not 'local' with respect to the position in the fibre, i.e. in the Lie group. That is to say, a gauge transformation can usually be performed without inducing a Lorentz transformation. But apart from this, the behavior of particles under rotations and boosts - depending on whether one is dealing with a vector, spinor or tensor - looks pretty much like a gauge transformation.
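
To make the comparison explicit: an internal gauge transformation acts on a field at a fixed point, whereas a Lorentz transformation also moves the point,

    \psi(x) \to g(x)\,\psi(x) \quad \text{(gauge)}, \qquad \psi(x) \to S(\Lambda)\,\psi(\Lambda^{-1}x) \quad \text{(Lorentz)},

so the index rotation S(\Lambda) looks very much like a gauge rotation, except that it comes tied to the transformation of the argument x - this is the sense in which the Lorentz symmetry is anchored to space-time.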

Therefore, people have tried to put gravity on an equal footing with the other interactions by describing either both as geometry, both as gauge theories, or both as something completely different. Kaluza-Klein theory, for example, is an approach to unify GR with gauge theories. This works very nicely for the vector fields, but the difficulty is getting the fermions in. So far I thought there were two ways out of this situation: either add dimensions whose coordinates have weird (anticommuting) properties and make your theory supersymmetric, so you get a fermion for every boson, or start by building everything up from fermionic fields.

Exceptional Simplicity

On the algebraic level the problem is that fermions are defined through the fundamental representation of the gauge group, whereas the gauge fields transform under the adjoint representation. Now I learned from Garrett that the five exceptional Lie groups have the remarkable property that, under the adjoint action of a suitable subgroup, other parts of the Lie algebra transform in the fundamental representation of that subgroup. This offers the possibility to arrange both the fermions and the gauge fields in the Lie algebra and root diagram of a single group. Thus, Garrett has a third way to address the fermionic problem, using the exceptionality of E8.
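
A well-known example of this feature - though not necessarily the particular embedding Garrett uses - is the decomposition of the 248-dimensional adjoint representation of E8 (which is also its smallest nontrivial representation) under an so(16) subalgebra,

    248 = 120 \oplus 128,

that is, the adjoint of so(16) plus a chiral spinor representation - exactly the kind of representation fermions live in.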

His paper consists of two parts. The first is an examination of the root diagram of E8. He shows in detail how this diagram can be decomposed such that it reproduces the quantum numbers of the SM, plus quantum numbers that can be used to label the behaviour under Lorentz transformations. He finds a few additional new particles, which are colored scalar fields. This is cute, and I really like this part. He unifies the SM with gravity while causing only a minimal amount of extra clutter. Plus, his plots are pretty. Note how much effort he put into the color coding!

Garrett calls his particle classification the "periodic table of the standard model". The video below shows projections of various rotations of the E8 root system in eight dimensions (see here for a Quicktime movie with better resolution ~10.5 MB)




[Each root of the E8 Lie algebra corresponds to an elementary particle field, including the gravitational (green circles), electroweak (yellow circles), and strong gauge fields (blue circles), the frame-Higgs (squares), and three generations of leptons (yellow and gray triangles) and quarks (rgb triangles) related by triality (lines). Spinning this root system in eight dimensions shows the F4 and G2 subalgebras.]

However, from the root diagram alone it is not clear whether the additional quantum numbers actually have something to do with gravity, or whether they are just some other additional properties. To answer this question, one needs to tie the symmetry to the base manifold and identify part of the structure with the behaviour under Lorentz transformations. A manifold can have a lot of bundles over it, but the tangent bundle is a special one that comes with the manifold, and one needs to identify the appropriate part of the E8 symmetry with the local Lorentz symmetry in the tangent space. The additional complication is that Garrett has identified an SO(3,1) subgroup, but without breaking the symmetry one doesn't have a direct product of this subgroup with the additional symmetries - meaning that gauge transformations mix with Lorentz transformations.

Garrett provides the missing ingredient in the second part of the paper, where he writes down an action that does exactly this. Having addressed the algebraic problem of the fermions being different in the first part, he now attacks the dynamical problem with the fermions: they are different because their action is - unlike that of the gauge fields - not quadratic in the derivatives. As much as I like the first part, I find this construction neither simple nor particularly beautiful. That is to say, I admittedly don't understand why it works. Nevertheless, with the chosen action he is able to reproduce the appropriate equations of motion.

This is without doubt cool: he has a theory that contains gravity as well as the other interactions of the SM. Given that he has to choose the action by hand to reproduce the SM (see also the update below), one can debate how natural this actually is. However, for me the question remains which problem he can address at this stage. He can say nothing about the quantization of gravity, renormalizability, or the hierarchy problem. When it comes to the cosmological constant, it seems that for his theory to work he needs it to be about the size of the Higgs vev, i.e. roughly 12 orders of magnitude too large. (And this is not the common problem of too-large quantum corrections, but the constant actually appearing in the Lagrangian.)

To make predictions with this model, one first needs to find a mechanism for symmetry breaking which is likely to become very involved. I think these two points, the cosmological constant and the symmetry breaking, are the biggest obstacles on the way to making actual predictions [4].

Bottomline

Now I find it hard to make up my mind on Garrett's model, because the attractive and the unattractive features seem to balance each other. To me, the most attractive feature is the way he uses the exceptional Lie groups to get the fermions together with the bosons. The most unattractive feature is the set of extra assumptions he needs to write down an action that gives the correct equations of motion. So, my opinion on Garrett's work has been flip-flopping since I learned of it.

So far, I admittedly can't hear what Lee referred to in his book as 'the ring of truth'. But maybe that's just because my BlackBerry is beeping all the time. And then there's all the algebra clogging my ears. I think Garrett's paper has the potential to become a very important contribution, and his approach is worth further examination.

Aside: I've complained repeatedly, and fruitlessly, about the absence of coupling constants throughout the paper, and want to use the opportunity to complain one more time.

For more info: Check Garrett's Wiki or his homepage.

Update Nov. 10th: See also Peter Woit's post
Update Nov. 27th: See also the post by Jacques Distler, who objects to the reproduction of the SM.


[1] Note that this SU(3) classification is for quark flavor (the three lightest ones: up, down and strange), and not for color.
[2] For more historical details see Stefan's post
The Omega-Minus gets a spin (part 1) which is still patiently waiting for a part 2.
[3] See also my earlier post on
The Beauty of it All.
[4] If one were to find another action, the cc problem might vanish.


TAGS: , , , ,