Showing posts with label History of Science. Show all posts

Saturday, January 04, 2014

Book review: “Free Radicals” by Michael Brooks

Free Radicals: The Secret Anarchy of Science
By Michael Brooks
Profile Books Ltd (2011)

“Free Radicals” is a selection of juicy bits from the history of science, telling stories about how scientists break and bend rules to push onward and forward, how they fight, cheat and lie to themselves and to others. The reader meets well-known scientists (mostly dead ones) who fudged data, ignored evidence, flirted their way to lab equipment, experimented on themselves or family members, took drugs, publicly ridiculed their colleagues, and wiggled their way out of controversy with rhetorical tricks.

The book is very enjoyable as a collection of anecdotes. It is fast flowing, does not drown the reader in historical, biographical or scientific details, and it is well-written without distracting from the content. (I’ve gotten really tired of authors who want to be terribly witty and can’t leave you alone for a single paragraph).

Michael Brooks tries to convince the reader that there is a lesson to be learned from these anecdotes, which is that science thrives only because of scientists behaving badly in one way or the other. He refers to this as the “secret anarchy of science”. He actually disagrees with himself on that, for it becomes very clear from his stories that, far from being anarchic, science is an elitist meritocracy that grandfathers achievers and is biased against newcomers, in particular members of minorities. Anarchy is unstable – it’s a vacuum that gets rapidly filled with rules and hierarchies – and academia is full of such unwritten rules. Science is not and has never been anything like anarchic, neither secretly nor openly, though the house of science has arguably housed its share of rebels.

Worse than that misuse of the term ‘anarchy’ is that Brooks tries to construct his lesson from a small and hand-picked selection of examples and ignores the biggest part of science, which is business as usual. As we discussed in this earlier post, the question is not whether there are people who bent rules and were successful, but how many people bent rules and just wasted everybody’s time, a problem to which no thought is given in the book.

Luckily, Brooks does not elaborate on his lessons too much. The reader gets some of this in the beginning and then again in the end, where Brooks also uses the opportunity to encourage scientists to engage more in policy making. Again he disagrees with himself. After spending two hundred pages vividly depicting how scientists care about nothing but making progress on their research, arguing that this single-mindedness is the secret to scientific progress, in the last chapter he wants scientists to engage more in politics – but that square block won’t fit through the round hole.

In summary, the book is a very enjoyable collection of anecdotes from the history of science. It would have benefitted if the author had refrained from trying to turn it into lessons about the sociology of science.

Wednesday, May 22, 2013

Who said it first? The historical comeback of the cosmological constant

I finished high school in 1995, and the 1998 evidence for the cosmological constant from supernova redshift data was my first opportunity to see physicists readjust their worldview to accommodate new facts. Initially met with skepticism - as are all unexpected experimental results - the nonzero value of the cosmological constant was nevertheless quickly accepted. (Unlike, e.g., neutrino oscillations, where the situation remained murky, and people remained skeptical, for more than a decade.)

But how unexpected was that experimental result really?

I learned only recently that by 1998 it might not have been so much of a surprise. Already in 1990, Efstathiou, Sutherland and Maddox had argued in a Nature paper that a cosmological constant is necessary to explain large-scale structure. The abstract reads:
"We argue here that the successes of the [Cold Dark Matter (CDM)] theory can be retained and the new observations accommodated in a spatially flat cosmology in which as much as 80% of the critical density is provided by a positive cosmological constant, which is dynamically equivalent to endowing the vacuum with a non-zero energy density. In such a universe, expansion was dominated by CDM until a recent epoch, but is now governed by the cosmological constant. As well as explaining large-scale structure, a cosmological constant can account for the lack of fluctuations in the microwave background and the large number of certain kinds of object found at high redshift."
By 1995, a bunch of tentative and suggestive evidence had piled up, which led Krauss and Turner to publish a paper titled "The Cosmological Constant is Back".
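The "recent epoch" in the quoted abstract can be checked with a one-line estimate. Assuming a flat universe with the abstract's 80% cosmological constant (so Ω_m = 0.2 and Ω_Λ = 0.8; that exact split is my reading of the "as much as 80%", not the paper's fit), matter dilutes as (1+z)³ while the cosmological constant stays put, so the two contribute equally at:

```python
# Redshift at which the cosmological constant overtakes matter, for a flat
# cosmology with Omega_m = 0.2 and Omega_Lambda = 0.8 (taken from the "as
# much as 80%" in the quoted abstract; an illustration, not the paper's fit).
Omega_m, Omega_L = 0.2, 0.8

# Matter density scales as (1+z)^3, the cosmological constant does not;
# equality when Omega_m * (1+z)^3 = Omega_L.
z_eq = (Omega_L / Omega_m) ** (1 / 3) - 1
print(f"matter-Lambda equality at z ~ {z_eq:.2f}")
```

This gives z ≈ 0.59, so the switchover to Λ domination indeed happens at a cosmologically recent epoch.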

I find this interesting for two reasons. First, it doesn't seem to be very widely known; it's also not mentioned in the Wikipedia entry. Second, taking into account that there must have been preliminary data and rumors even before the 1990 Nature paper was published, this means that by the late 1980s the cosmological constant had likely started to seep back into physicists' brains.

Weinberg's anthropic prediction dates to 1987, which likely indeed predated observational evidence. Vilenkin's 1995 refinement of Weinberg's prediction was timely, but one is led to suspect he anticipated the 1998 results from the then already available data. Sorkin's prediction of a small positive cosmological constant in the context of Causal Sets seems to date back to the late 1980s, but the exact timing is somewhat murky. There is a paper here which dates to 1990 and contains the prediction (scroll to the last paragraph), which leads me to think that at the time of writing he likely didn't know about the recent developments in astrophysics that would later render this paper a historically interesting prediction.

Wednesday, January 25, 2012

The Planck length as a minimal length

The best scientific arguments are those that are surprising at first sight, yet at second sight make perfect sense. The following argument, which goes back to Mead's 1964 paper "Possible Connection Between Gravitation and Fundamental Length," is of this type. Look at the abstract and note that it took more than five years from submission to publication of the paper. Clearly, Mead's argument seemed controversial at the time, even though all he did was study the resolution of a microscope taking into account gravity.

For all practical purposes, the gravitational interaction is far too weak to be of relevance for microscopy. Normally, we can neglect gravity, in which case we can use Heisenberg's argument that I first want to remind you of before adding gravity. In the following, the speed of light c and Planck's constant ℏ are equal to one, unless they are not. If you don't know how natural units work, you should watch this video, or scroll down past the equations and just read the conclusion.

Consider a photon with frequency ω, moving in direction x, which scatters on a particle whose position on the x-axis we want to measure (see image below). The scattered photons that reach the lens (red) of the microscope have to lie within an angle ε to produce an image from which we want to infer the position of the particle.

According to classical optics, the wavelength of the photon sets a limit to the possible resolution:

Δx ≳ 1/(ω sinε)

But the photon used to measure the position of the particle has a recoil when it scatters and transfers a momentum to the particle. Since one does not know the direction of the photon to better than ε, this results in an uncertainty for the momentum of the particle in direction x:

Δp ≳ ω sinε

Taken together, one obtains Heisenberg's uncertainty principle:

Δx Δp ≳ 1
We know today that Heisenberg's uncertainty principle is more than a limit on the resolution of microscopes; up to a factor of order one, the above inequality is a fundamental principle of quantum mechanics.

Now we repeat this little exercise by taking into account gravity.

Since we know that Heisenberg's uncertainty principle is a fundamental property of nature, it does not make sense, strictly speaking, to speak of the position and momentum of the particle at the same time. Consequently, instead of speaking about the photon scattering off the particle as if that would happen in one particular point, we should speak of the photon having a strong interaction with the particle in some region of size R (shown in the above image).

With gravity, the relevant question now is what happens to the measured particle due to the gravitational attraction of the test particle.

For any interaction to take place and a subsequent measurement to be possible, the time elapsed between the interaction and measurement has to be at least of the order of the time τ the photon needs to travel the distance R, so that τ ≳ R. The photon carries an energy that, though in general tiny, exerts a gravitational pull on the particle whose position we wish to measure. The gravitational acceleration acting on the particle is at least of the order of

a ≳ Gω/R²

where G is Newton's constant, which is, in natural units, the square of the Planck length lPl. Assuming that the particle is non-relativistic and much slower than the photon, the acceleration lasts about the duration the photon is in the region of strong interaction. From this, the particle acquires a velocity of

v ≈ aR ≳ Gω/R

Thus, in the time R, the acquired velocity allows the particle to travel a distance of

L ≳ Gω

Since the direction of the photon was unknown to within the angle ε, the direction of the acceleration and of the motion of the particle is also unknown. Projection on the x-axis then yields the additional uncertainty of

Δx ≳ Gω sinε

Combining this with the usual uncertainty (multiply both, then take the square root), one obtains

Δx ≳ √G = lPl

Thus, we find that the distortion of the measured particle by the gravitational field of the particle used for measurement prevents the resolution of arbitrarily small structures. Resolution is bounded by the Planck length, which is about 10⁻³³ cm. The Planck length thus plays the role of a minimal length.
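As a quick sanity check of the number quoted above, one can restore the constants and evaluate the Planck length √(ℏG/c³) numerically (a sketch; rounded SI values):

```python
import math

# Restore hbar and c in the bound above: the resolution limit is the
# Planck length, l_Pl = sqrt(hbar * G / c^3).
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2
c = 2.99792458e8        # m / s

l_pl = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_pl:.2e} m, i.e. {l_pl * 100:.1e} cm")  # ~1.6e-33 cm
```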

(You might criticize this argument because it makes use of Newtonian gravity rather than general relativity, so let me add that, in his paper, Mead goes on to show that the estimate remains valid also in general relativity.)

As anticipated, this minimal length is far too small to be of relevance for actual microscopes; its relevance is conceptual. Given that Heisenberg's uncertainty turned out to be a fundamental property of quantum mechanics, encoded in the commutation relations, we have to ask whether this modified uncertainty, too, should be promoted to fundamental relevance. In fact, in the last five decades this simple argument has inspired a great many works that attempted exactly this. But that is a different story and shall be told another time.

To finish this story, let me instead quote from a letter that Mead, the author of the above argument, wrote to Physics Today in 2001. In it, he recalls how little attention his argument originally received:
"[In the 1960s], I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length..., of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it."

Wednesday, September 28, 2011

On the universal length appearing in the theory of elementary particles - in 1938

Special relativity and quantum mechanics are characterized by two universal constants, the speed of light, c, and Planck's constant, ℏ. Yet, from these constants one cannot construct a constant of dimension length (or mass respectively as a length can be converted to a mass by use of ℏ and c). In 1899, Max Planck pointed out that adding Newton's constant G to the universal constants c and ℏ allows one to construct units of mass, length and time. Today these are known as Planck-time, Planck-length and Planck-mass respectively. As we have seen in this earlier post, they mark the scale at which quantum gravitational effects are expected to become important. But back in Planck's days their relevance was in their universality, since they are constructed entirely from fundamental constants.
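Planck's construction is pure dimensional analysis; as a sketch (rounded SI values for the constants):

```python
import math

# Planck's 1899 observation: c, hbar and G combine into unique units of
# length, time, and mass.
c = 2.99792458e8        # m/s
hbar = 1.054571817e-34  # J s
G = 6.67430e-11         # m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
t_planck = math.sqrt(hbar * G / c**5)  # ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)     # ~2.2e-8 kg
```

Note that the Planck mass, about 22 micrograms, is the only one of the three anywhere near everyday scales; the length and time are absurdly far below anything measurable.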

In the early 20th century, with the advent of quantum field theory, it was widely believed that a fundamental length was necessary to cure troublesome divergences. The most commonly used regularization was a cut-off or some other dimensionful quantity to render integrals finite. It seemed natural to think of this pragmatic cut-off as having fundamental significance, despite the problems it caused with Lorentz invariance. In 1938, Heisenberg wrote "Über die in der Theorie der Elementarteilchen auftretende universelle Länge" (On the universal length appearing in the theory of elementary particles), in which he argued that this fundamental length, which he denoted r0, should appear somewhere not too far beyond the classical electron radius (of the order of some fm).

This idea seems curious today, and has to be put into perspective. Heisenberg was very worried about the non-renormalizability of Fermi's theory of β-decay. He had previously shown that applying Fermi's theory at high center-of-mass energies of some hundred GeV led to an "explosion," by which he referred to events of very high multiplicity. Heisenberg argued this would explain the observed cosmic ray showers, whose large numbers of secondary particles we know today are created by cascades (a possibility that was already discussed at the time of Heisenberg's writing, but not agreed upon). We also know today that what Heisenberg actually discovered is that Fermi's theory breaks down at such high energies, and the four-fermion coupling has to be replaced by the exchange of a gauge boson in the electroweak interaction. But in the 1930s neither the strong nor the electroweak force was known. Heisenberg then connected the problem of regularization with the breakdown of the perturbation expansion of Fermi's theory, and argued that the presence of the alleged explosions would prohibit the resolution of finer structures:

"Wenn die Explosionen tatsächlich existieren und die für die Konstante r0 eigentlich charakeristischen Prozesse darstellen, so vermitteln sie vielleicht ein erstes, noch unklares Verständnis der unanschaulichen Züge, die mit der Konstanten r0 verbunden sind. Diese sollten sich ja wohl zunächst darin äußern, daß die Messung einer den Wert r0 unterschreitenden Genauigkeit zu Schwierigkeiten führt... [D]ie Explosionen [würden] dafür sorgen..., daß Ortsmessungen mit einer r0 unterschreitenden Genauigkeit unmöglich sind."

("If the explosions actually exist and represent the processes characteristic for the constant r0, then they maybe convey a first, still unclear, understanding of the obscure properties connected with the constant r0. These should, one may expect, express themselves in difficulties of measurements with a precision better than r0... The explosions would have the effect... that measurements of positions are not possible to a precision better than r0.")

In hindsight we know that Heisenberg was, correctly, arguing that the theory of elementary particles known in the 1930s was incomplete. The strong interaction was missing and Fermi's theory indeed non-renormalizable, but not fundamental. Today we also know that the standard model of particle physics is perturbatively renormalizable and know techniques to deal with divergent integrals that do not necessitate cut-offs, such as dimensional regularization. But lacking that knowledge, it is understandable that Heisenberg argued gravity had no role to play for the appearance of a fundamental length:

"Der Umstand, daß [die Plancklänge] wesentlich kleiner ist als r0, gibt uns das Recht, von den durch die Gravitation bedingen unanschaulichen Zügen der Naturbeschreibung zunächst abzusehen, da sie - wenigstens in der Atomphysik - völlig untergehen in den viel gröberen unanschaulichen Zügen, die von der universellen Konstanten r0 herrühren. Es dürfte aus diesen Gründen wohl kaum möglich sein, die elektrischen und die Gravitationserscheinungen in die übrige Physik einzuordnen, bevor die mit der Länge r0 zusammenhängenden Probleme gelöst sind."

("The fact that [the Planck length] is much smaller than r0 gives us the right to leave aside the obscure properties of the description of nature due to gravity, since they - at least in atomic physics - are totally negligible relative to the much coarser obscure properties that go back to the universal constant r0. For this reason, it seems hardly possible to integrate electric and gravitational phenomena into the rest of physics until the problems connected to the length r0 are solved.")

Today, one of the big outstanding questions in theoretical physics is how to resolve the apparent disagreements between the quantum field theories of the standard model and general relativity. It is not that we cannot quantize gravity, but that the attempt to do so leads to a non-renormalizable and thus fundamentally nonsensical theory. The reason is that the coupling constant of gravity, Newton's constant, is dimensionful. This leads to the necessity to introduce an infinite number of counter-terms, eventually rendering the theory incapable of prediction.

But the same is true for Fermi's theory, which Heisenberg was so worried about that he argued for a finite resolution where the theory breaks down - mistakenly so, since he was merely pushing an effective theory beyond its limits. So we have to ask: are we making the same mistake as Heisenberg, in that we falsely interpret the failure of general relativity to extend beyond the Planck scale as the occurrence of a fundamentally finite resolution of structures, rather than just the limit beyond which we have to look for a new theory that will allow us to resolve smaller distances still?

If it were only the extension of classical gravity, laid out in many thought experiments (see e.g. Garay 1994), that made us believe the Planck length is of fundamental importance, then the above historical lesson should caution us that we might be on the wrong track. Yet the situation today is different from the one Heisenberg faced. Rather than pushing a quantum theory beyond its limits, we are pushing a classical theory and conclude that its short-distance behavior is troublesome, which we hope to resolve by quantizing the theory. And several attempts at a UV-completion of gravity (string theory, loop quantum gravity, asymptotically safe gravity) suggest that the role of the Planck length as a minimal length carries over into the quantum regime as a dimensionful regulator, though in very different ways. This feeds our hopes that we might be working on unraveling another layer of nature's secrets and that this time it might actually be the fundamental one.


Aside: This text is part of the introduction to an article I am working on. Is the English translation of the German extracts from Heisenberg's paper understandable? It sounds funny to me, but then Heisenberg's German is also funny for 21st century ears. Feedback would be appreciated!

Wednesday, November 24, 2010

Nonsense people once believed in

I have a list with notes for blog posts, and one topic that's been on it for a while is beliefs people once firmly held that, over the course of the history of science, turned out to be utterly wrong.

Some examples that came to my mind were the "élan vital" (the belief that life is some sort of substance), the theory of the four humors (one consequence of which was the widespread use of bloodletting as a medical treatment for all sorts of purposes), the static universe, and the non-acceptance of continental drift. On the more absurd side of things is the belief that semen is produced in the brain (because the brain was considered the seat of the soul), and that women who are nursing turn menstruation blood into breast milk. From my recent read of Annie Paul's book "Origins" I further learned that until only some decades ago it was widely believed that pretty much any sort of toxin is blocked by the placenta and does not reach the unborn child. It was indeed recommended that pregnant women drink alcohol, and smoking was not of concern. This dramatically wrong belief was also the reason why thalidomide was handed out without much concern to pregnant women, with the now well-known disastrous consequences, and why fetal alcohol syndrome is a fairly recent diagnosis.

I was collecting more examples, not very actively I have to admit, but I found yesterday that somebody saved me the work! Richard Thaler, director of the Center for Decision Research at the University of Chicago Graduate School of Business, is working on a book about the topic, and he's asked the Edge-club for input:

"The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?"

You find the replies on this website, which include most of my examples and a few more. One reply that I found very interesting is that by Frank Tipler:
"The false belief that stomach ulcers were caused by stress rather than bacteria. I have some information on this subject that has never been published anywhere. There is a modern Galileo in this story, a scientist convicted of a felony in criminal court in the 1960's because he thought that bacteria caused ulcers."

I hadn't known about the "modern Galileo"; is anybody aware of the details? Eric Weinstein adds the tau-theta puzzle, and Rupert Sheldrake suggests "With the advent of quantum theory, indeterminacy rendered the belief in determinism untenable," though I would argue that this issue isn't settled, and maybe never will be.

Do you know more examples?

Monday, October 04, 2010

Einstein on the discreteness of space-time

I recently came across this interesting quotation by Albert Einstein:
“But you have correctly grasped the drawback that the continuum brings. If the molecular view of matter is the correct (appropriate) one, i.e., if a part of the universe is to be represented by a finite number of moving points, then the continuum of the present theory contains too great a manifold of possibilities. I also believe that this too great is responsible for the fact that our present means of description miscarry with the quantum theory. The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum (space-time) as an aid; the latter should be banned from the theory as a supplementary construction not justified by the essence of the problem, which corresponds to nothing “real”. But we still lack the mathematical structure unfortunately. How much have I already plagued myself in this way!”

It's from a 1916 letter to Hans Walter Dällenbach, a former student of Einstein. (Unfortunately the letter is not available online.) I hadn't been aware Einstein thought (at least then) that a continuous space-time is not “real.” It's an interesting piece of history.

Friday, February 12, 2010

350 years Royal Society

As Sabine has mentioned earlier today, this year is the 350th anniversary of the Royal Society, the British national academy of science. Going back to a gathering of a few men interested in "Experimental Philosophy" in London in November 1660, the Royal Society is one of the oldest scientific academies in the world.

Outside Britain, it may be best known for its 13th president, Sir Isaac Newton, and for the publication of the "Philosophical Transactions of the Royal Society", the oldest existing scientific journal in continuous publication.

The Royal Society has set up a special website, and a very nice interactive timeline dubbed "trailblazing", which allows a brief virtual journey through the history of science since the 1650s.

Moreover, there will be several commemorative publications free to access over the anniversary year 2010, for example a special issue of the "Philosophical Transactions A". It features articles not requiring the reader to be a specialist to gain understanding of the content, ranging in topics from "Geometry and physics" by Michael Atiyah, Robbert Dijkgraaf and Nigel Hitchin to "Flat-panel electronic displays" by Cyril Hilsum.

And, most important, the Royal Society Digital Journal Archive will be free until 28 February 2010 (only two more weeks left, unfortunately). This means full access to all issues of the "Philosophical Transactions", starting back in 1665!

So, for example, we can read about

  • Isaac Newton presenting his "New Theory about Light and Colors", with the description of his experiments with prisms and the spectrum (1671, 6 3075-3087),

  • Benjamin Franklin reporting his experiments "concerning an Electrical Kite" (1751, 47 565-567),

  • John Michell discussing "the Means of Discovering the Distance, Magnitude, &c. of the Fixed Stars, in Consequence of the Diminution of the Velocity of Their Light...", suggesting stars so massive that light cannot escape from them (1784, 74 35-57),

  • Henry Cavendish describing his "Experiments to Determine the Density of the Earth", or to measure Newton's gravitational constant with a torsion balance (1798, 88 469-526),

  • Alexander Volta reporting Galvani's experiments on electricity (the "frog" experiments - 1793, 83 10-44) and his own construction of the "Volta pile", the prototype of an electrical battery (1800, 90 403-431),

  • William Herschel discussing recent developments about "his" planet Uranus (1783, 73 1-3), reasoning "On the Construction of the Heavens" (1785, 75 213-266) and "the Nature and Construction of the Sun and Fixed Stars" (1795, 85 46-72), and describing his discovery of "Solar, and ... Terrestrial Rays that Occasion Heat", now known as infrared light (1800, 90 293-326),

  • Thomas Young arguing for the wave nature of light in "Outlines of Experiments and Inquiries Respecting Sound and Light" (1800, 90 106-150), and reporting the results of his interference experiments (1804, 94 1-16),

  • James Prescott Joule demonstrating the "Mechanical Equivalent of Heat" (1850, 140 61-82), and

  • James Clerk Maxwell introducing the principle of the RGB colour system in "On the Theory of Compound Colours" (1860, 150 57-84), presenting "A Dynamical Theory of the Electromagnetic Field" (1865, 155 459-512) and contributing to the "Dynamical Theory of Gases" (1867, 157 49-88).


More findings are welcome in the comments! Have a great reading weekend!

Tuesday, February 02, 2010

LaserFest 2010

This year, the laser will turn 50! On May 16, 1960, at the Hughes Research Laboratories in Malibu, California, Theodore Maiman realized for the first time "Light Amplification by Stimulated Emission of Radiation", using a tiny ruby crystal.

Actually, Maiman and his small group of coworkers were back then just one of several teams, all at industrial laboratories, intensely searching for ways to create laser beams. By the end of the year, the ruby laser had been replicated and improved, and lasing had been realized using other crystals and helium-neon gas mixtures. So it's only fair that the American Physical Society, the Optical Society, SPIE, and the IEEE Photonics Society have decided to organize a yearlong celebration of the 50th anniversary of the laser - that's LaserFest.

But in fact, the path to the laser had begun much earlier.

Berlin, 1916

In the summer of 1916, Albert Einstein took a break from general relativity and cosmology and tried to make sense, once more, of the riddle of the quantum. Specifically, he thought about ways to combine the recent ideas of Bohr on discrete energy levels in atoms with the Planck spectrum of blackbody radiation.

Atoms in thermal equilibrium with radiation can absorb radiation, thereby transiting to a state of higher energy, and they can drop from an excited state to a state with lower energy spontaneously, thereby emitting radiation. Could it be, Einstein wondered, that atoms also transit from an excited to a lower-energy state when they are hit by radiation of suitable energy?

Indeed, by assuming a thermal Boltzmann distribution for the states of the atoms interacting with radiation, and equal rates for absorption on the one hand and spontaneous and stimulated emission – as the newly stipulated process came to be called – on the other hand, as one would expect for a thermal equilibrium between the atoms and radiation, Einstein could reproduce the Planck formula for the spectrum of blackbody radiation. "A splendid light has dawned on me about the absorption and emission of radiation," he wrote in a letter to his friend Michele Besso on August 11, 1916.
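The balance argument can be sketched in a few lines and checked numerically. Below, the notation is mine, not Einstein's original: A for the spontaneous-emission coefficient, a common coefficient B for absorption and stimulated emission, and x = hν/kT. Solving the equilibrium condition for the radiation density ρ reproduces exactly the Planck form:

```python
import math

# Einstein's 1916 detailed-balance argument, checked numerically.
# Balance: absorption = spontaneous + stimulated emission,
#   B * N1 * rho = A * N2 + B * N2 * rho,  with  N2/N1 = exp(-x).
A, B, x = 1.0, 1.0, 2.5  # arbitrary positive test values

# Solve the (linear) balance equation for rho:
rho = A * math.exp(-x) / (B * (1.0 - math.exp(-x)))

# Planck's blackbody spectrum has exactly this form:
rho_planck = (A / B) / (math.exp(x) - 1.0)

print(abs(rho - rho_planck) < 1e-12)  # True: the two expressions agree
```

Dropping the stimulated-emission term from the balance would instead yield Wien's law, which is one way to see why the new process was needed.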

Einstein's "splendid light" of stimulated emission of radiation: An atom in a state with energy E2 is hit by a photon with energy = E2E1. This can trigger a transition of the atom to the lower energy level E1, accompanied with the emission of a photon with energy , in phase with the initial photon. After this so-called stimulated emission, there are two photons instead of one, both in the same state – a nice manifestation of the "bunching" Bose character of photons.

It was recognized in the 1920s that theoretically the process of stimulated emission could result in "negative absorption", that is, amplification, of radiation, but nobody had a good idea how to demonstrate this effect in practice.

New York, 1954

To achieve amplification of radiation via stimulated emission, it is necessary to have more atoms in the high-energy state than in the low-energy state. Otherwise, a photon hitting an atom will more likely just be absorbed than trigger stimulated emission, and there is no gain in radiation. This requirement for amplification is called "population inversion".

In 1951, Charles Townes had an idea how to create "population inversion" in an ensemble of ammonia molecules. The ammonia molecule comes with two states which are separated by an energy corresponding to microwave frequencies. A beam of ammonia molecules can be split into two in an inhomogeneous electric field, separating molecules in the higher and the lower energy states, respectively, with an arrangement similar to a Stern-Gerlach apparatus.
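To see why the molecules have to be physically sorted at all, consider the thermal population ratio. For the ammonia inversion transition (about 24 GHz, a standard value; room temperature assumed), the Boltzmann factor gives nearly equal, but never inverted, populations:

```python
import math

# Thermal population ratio N2/N1 = exp(-h*nu/(k*T)) for the ammonia
# inversion transition (~24 GHz assumed) at room temperature: in
# equilibrium the excited state is always the less populated one, so
# amplification requires sorting the molecules, not heating them.
h = 6.62607015e-34   # J s
k = 1.380649e-23     # J/K
nu = 24e9            # Hz
T = 300.0            # K

ratio = math.exp(-h * nu / (k * T))
print(f"N2/N1 = {ratio:.4f}")  # just below 1, never above
```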

In April 1954, Townes and his students Jim Gordon and Herbert Zeiger at Columbia University piped a beam of ammonia molecules in the higher-energy state into a microwave cavity resonating at the frequency of the energy difference between the two states, and obtained "microwave amplification by stimulated emission of radiation" - this was the birth of the maser.

Townes soon started to think about ways to extend the maser principle to infrared or optical frequencies. With graduate student Gordon Gould, he discussed arrangements of mirrors around the medium in which population inversion is created, replacing the microwave cavity. These mirrors make sure that a beam of light goes back and forth through the medium many times, thus being able to "collect" ever more photons every time it crosses the medium.

Gould realized that such an arrangement, for which he coined the term "laser", could create sharply focussed light beams of extreme intensity, which could be used for communication, as a tool, or as a weapon.

As soon as the concept of the "optical maser", as Townes continued to call it, was explained in detail in a paper written together with Arthur Schawlow, many groups embarked on a race to be the first to actually construct such a device.

Malibu, 1960

Theodore Maiman received his doctorate in physics from Stanford University in 1955 and took a job at the Hughes Research Laboratories, which moved to Malibu in 1960. At Hughes, Maiman had constructed masers using ruby crystals, and when he learned of the possibility of the laser, he convinced himself that it should be possible to build one using ruby as the "lasing" medium.

Ruby is, chemically speaking, a crystal of aluminum oxide doped with chromium ions. The chromium ions have several energy levels which can be excited by irradiation with light, two of which are metastable and can be used as the upper level of a lasing medium. The energy of the transition to the ground state corresponds to red light with a wavelength of 694 nm.
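
The wavelength quoted above can be converted into a photon energy via E = hc/λ; a minimal sketch in Python (the constants are rounded textbook values):

```python
# Photon energy of the ruby laser line at 694 nm, via E = h*c/lambda.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Energy of a photon with the given vacuum wavelength, in eV."""
    return h * c / wavelength_m / eV

E_ruby = photon_energy_eV(694e-9)
print(f"E = {E_ruby:.2f} eV")  # about 1.79 eV, deep red light
```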

Maiman's idea was to take a rod of ruby with parallel faces, to coat these faces with silver to realize the mirrors, and to put the rod inside a helical flash lamp. The lamp then excites the chromium ions and creates population inversion, and the spontaneous emission of one photon can trigger an avalanche of photons by stimulated emission.

On the afternoon of May 16, 1960, Maiman and his assistant Irnee D’Haenens saw for the first time directed beams of intense red light emerging from the ruby - they had realized the first laser.

Theodore Maiman holding the first laser. It consists of a small ruby crystal and a helical flash lamp which serves to excite the chromium ions of the ruby, thus creating the population inversion necessary for laser action. The ends of the ruby rod have been coated with silver to mirror back and forth the light stemming from stimulated emission, thus producing sufficient gain. The whole device is placed in the small white casing. (Source)


Maiman is reported to have said that “A laser is a solution seeking a problem”, Gould's visions notwithstanding. I have no specific idea how quickly the laser was put to commercial or industrial use, but it immediately captured the public imagination.

When the movie Goldfinger was released in 1964, James Bond had to face a huge laser, looking like a scaled-up version of Maiman's first tiny ruby device and replacing the buzz saw of Ian Fleming's original 1959 novel. As Auric Goldfinger explains:

I, too, have a new toy, but considerably more practical. You are looking at an industrial laser, which emits an extraordinary light, unknown in nature. It can project a spot on the moon. Or, at closer range, cut through solid metal. I will show you.







At the LaserFest website, you can find a nice description of the mechanism of the ruby laser, and a video with explanations by Theodore Maiman himself. Moreover, there is a long interview with Charles Townes on the history of the maser and the laser.

If you want to know more about the history of the laser, there are two books I can recommend:
  • The history of the laser, by Mario Bertolotti, actually tells much more than just the story of the laser: It starts back at the beginning of the 20th century with the early atom models and the puzzle of blackbody radiation, and traces the path to the laser via spectroscopy, magnetic resonance, and the maser.

  • Beam: the race to make the laser, by Jeff Hecht, focusses on the developments of the late 1950s and 1960, beginning with just two brief chapters on the early history of stimulated emission and the maser. If you get lost in between all the names, there is a list of dramatis personae at the end of the book which I, unfortunately, discovered only after reading the text.

If you have Feynman's lectures at hand, there is a discussion of Einstein's derivation of the blackbody spectrum using stimulated emission and the Einstein coefficients in Section 42-5 of Volume I, and the whole of Chapter 9 of Volume III is devoted to explaining the principle of the ammonia maser.


Monday, January 11, 2010

A splendid light has dawned on me …

“Es ist mir ein prächtiges Licht über die Absorption und Emission der Strahlung aufgegangen ‒ es wird Dich interessieren. Eine verblüffend einfache Ableitung der Planck’schen Formel, ich möchte sagen die Ableitung. Alles ganz quantisch.”

“A splendid light has dawned on me about the absorption and emission of radiation ‒ it will be of interest to you. A stunningly simple derivation of Planck's formula, I might say the derivation. Everything completely quantum.”


Albert Einstein in a letter to his friend Michele Besso on August 11, 1916.

The “splendid light” refers to Einstein's insight that stimulated emission (also called induced emission) of light from excited atoms occurs in nature, and that this yields an elementary explanation of Planck's formula for the spectrum of black body radiation.
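
In modern notation, the derivation Einstein was so delighted about can be sketched as follows (this is the textbook version using the Einstein coefficients A and B, not a quote from the 1916 paper): in thermal equilibrium, absorption must balance spontaneous and stimulated emission,

```latex
% Detailed balance between two levels with populations N_1, N_2
% in a radiation field with spectral energy density \rho(\nu):
N_1 \, B_{12} \, \rho(\nu) \;=\; N_2 \left( A_{21} + B_{21}\,\rho(\nu) \right).
% Inserting the Boltzmann ratio N_2/N_1 = e^{-h\nu/kT} and using
% B_{12} = B_{21}, solving for \rho gives
\rho(\nu) \;=\; \frac{A_{21}/B_{21}}{e^{h\nu/kT} - 1},
% which is Planck's formula once A_{21}/B_{21} = 8\pi h\nu^3/c^3 is
% fixed by matching the classical Rayleigh-Jeans limit.
```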

And, of course, some 44 years later and 50 years ago this May, the “splendid light” of Einstein's idea became a real “splendid light” with the construction of the laser, based on the principle of stimulated emission of radiation.

Wednesday, October 28, 2009

Science Park "Albert Einstein" Potsdam

Stefan and I were in Potsdam the past few days, where I was visiting the Albert Einstein Institute in Golm. While in the area, we also stopped at the "Science Park" in Potsdam. Potsdam may be more famous for the parks of Sanssouci and other palaces of the Prussian kings, but this park, on a hill not far from the city centre, is definitely worth a visit if you are interested in the history of science.

The park has an interesting past: named "Telegraphenberg" (Telegraph Hill), it originally was the location of a relay station of an optical telegraph system linking Berlin to the Rhine. The park was designed in the second half of the 19th century, when an Astrophysical Observatory and a Geodetic Institute were installed on the hill.


The park on Telegraph Hill, Potsdam.

It was here that in 1881 Albert Michelson performed his first interference experiment to test the direction dependence of the speed of light. He was a guest scientist at the physics institute of Hermann von Helmholtz in Berlin at the time, and had to move his sensitive experimental setup to quiet Potsdam to escape the noise and vibrations of street traffic in the capital. Of course, Michelson didn't find any sign of the expected ether drift, and thought of his experiment as a failure. Back in the US, he convinced his colleague Morley to collaborate on an improved experimental setup, and the rest is history.


The "Michelson Building" on Telegraph Hill, Potsdam.

The building where Michelson had installed his interferometer in the basement is now called the "Michelson Building", and accommodates the Potsdam Institute for Climate Impact Research.

The most famous monument on Telegraph Hill in Potsdam is the "Einstein Tower," housing a solar telescope. Designed by expressionist architect Erich Mendelsohn and financed in part by Carl Bosch (the same Bosch who built the "Villa Bosch" in Heidelberg I visited last year), it is a cute-looking phallus symbol whose scientific purpose was to test the redshift of spectral lines of sunlight in the Sun's gravitational field, one of the predictions of Einstein's theory of General Relativity.


The "Einstein Tower" solar observatory on Telegraph Hill, Potsdam.

This experiment also failed, due to the thermal broadening of spectral lines and the fluctuations of the Sun's surface, which, via the Doppler shift, mask the gravitational redshift and form a source of systematic error much larger than originally expected. Evidence for the "Gravitational Displacement of Lines in the Solar Spectrum" eventually came from other observatories, and unambiguous proof of the gravitational redshift was finally provided by the experiments of Pound and Rebka in 1959, using the Mössbauer effect to detect tiny shifts in the gamma-ray frequencies of iron nuclei.

Nevertheless, the Einstein Tower is the only observatory on Telegraph Hill still in use for active research: The solar telescope and spectrographs now serve to study magnetic fields in the Sun's photosphere.


The building is quite small. A person in the scene, in this photo Stefan, helps set the scale.

Directly in front of the Einstein-Tower, I found, to my surprise, a Boltzmann brain popping out of the ground:



Wikipedia informed us later that the bronze brain with the imprint "3 SEC" was put in place by the artist Volker März in 2002. It is titled "The 3 SEC Bronze Brain – Admonition to the Now – Monument to the continuous present” and symbolizes the scientific thesis that “the experience of continuity is based on an illusion" and that "continuity arises through the networking of contents, which in each case are represented in a time window of three seconds."

I wonder what Einstein would have thought of that.

Tuesday, October 13, 2009

125 Years of Greenwich Longitude

The Prime Meridian at Greenwich Observatory (wikipedia)
We are used today to giving the coordinates of a place on Earth in latitude and longitude, with longitude measured in degrees east or west of the Greenwich Prime Meridian.

Thus, for example, the small amateur observatory Sternwarte Peterberg, near the place where I grew up, is located exactly 7 degrees east of Greenwich.

However, looking up the location on historical maps, I don't find this longitude. The French engineers who drew the first detailed topographic maps of the region around 1800 measured longitude with respect to the Paris Observatory. Their Prussian successors used the El Hierro Meridian, which goes back to Ptolemy in the 2nd century, and later switched to coordinates centred on Berlin.

Actually, in the second half of the 19th century, more than a dozen "Prime Meridians" were in use, creating increasing confusion for transport, trade, and communication around the globe.



Thus, in October 1884, delegates from 25 nations met in Washington, DC, at a conference to determine a prime meridian, which should be used as a universal reference for measuring longitude, and for a universal time. 125 years ago, on October 13, 1884, the "International Meridian Conference held at Washington for the purpose of fixing a Prime Meridian and a Universal Day" resolved
"That a meridian proper, to be employed as a common zero in the reckoning of longitude and the regulation of time throughout the world, should be a great circle passing through the poles and the centre of the transit instrument at the Observatory of Greenwich."
and
"That the Conference proposes to the Governments here represented the adoption of the meridian passing through the transit instrument at the Observatory of Greenwich as the initial meridian for longitude."

There was one negative vote, and the delegations from Brazil and France abstained. The French delegation, led by astronomer Pierre Jules Janssen, the discoverer of helium, had pleaded for keeping the El Hierro Meridian, but it seems that long tables of data, from tonnages of ships to sales figures of nautical charts and almanacs, all using Greenwich as their reference point, convinced most delegates to officially adopt the de facto standard.

In 1911, the French, too, switched to Greenwich longitude and Greenwich time.




The complete PROTOCOLS OF THE PROCEEDINGS of the International Meridian Conference are available via Project Gutenberg. The vote on the adoption of the Greenwich meridian is reported on page 99.


Sunday, May 17, 2009

Experimental Interferometry

Last night, I made a nice experiment. Here is the view from the front yard of my apartment building along the street, photographed with my old digital camera mounted on a tripod:


(Click for larger view or the original photo)


There is the roof of a parked car in the foreground, and a bright street light about 100 metres away. The dimmer white and blueish lights are street lights on a hill, about 1 kilometre away, and the yellow lights are illuminated windows.

For the experiment, I equipped the camera with a double-slit aperture, fabricated by printing two letters "l" in a sans-serif font against a black background on an old-fashioned overhead transparency:



The two slits are a bit less than 1 millimetre wide each, and separated by a bit more than 1 millimetre.

I then fixed the aperture to the lens of the camera with adhesive tape - the casing of the lens is big enough to allow this without gluing the lens itself. Here is a photo of the camera with the double-slit aperture. The quality is not very good because I have only one camera, so this is a self-portrait, in a mirror, of the camera equipped with the double-slit aperture:



I then took another photo of the same night-time street scene, using the double-slit aperture:


(Click for larger view or the original photo)


There is less light entering the camera, so the photo is darker. But wait: the distant street lights now show a very clear interference pattern! Instead of one spot of light, there are three distinct fringes.

I was quite amazed by the result when I looked at the photos on my computer. Here is a detail of the photo:



This startling little experiment demonstrates the principle of interferometry, as it is used in astronomy to measure the diameter of stars, for example.

Actually, I got the idea of the experiment from the chapter Basic concepts: a qualitative Introduction in Labeyrie, Lipson, and Nisenson's An introduction to optical stellar interferometry.

Here is a rough sketch of what is going on:



The double-slit aperture is shown in black, the lens system of the camera in grey, and the CCD chip in blue. The camera is focussed "on infinity", which means that parallel rays of light are bundled onto one spot on the chip in the focal plane. This is shown for the two yellow rays of light, which may hit the camera from one of the distant street lights.

So far, this is all geometrical optics. But light is a wave, and the aperture creates the situation of Young's double-slit experiment: according to Huygens' principle, each point of the two slits can be considered the origin of a spherical wave, and at a distance these waves recombine into plane wave fronts. But now there is not only the wave front in the direction of the incoming light rays, but also additional, slightly deflected wave fronts, for which the path difference Γ between the two slits is an integer multiple of the wavelength λ. The deflected waves are also focussed onto one spot by the lenses, shown by the dotted orange lines. In the experiment, there is one clearly visible extra spot on each side of the central spot, meaning that Γ, shown in light blue, is just one wavelength of visible light.

The angle α between two neighbouring spots is easy to calculate – it is (in radians) just the wavelength of light, λ, divided by the distance d between the two slits: α = λ/d.

Taking for simplicity λ ≈ 600 nm = 0.6 µm and d = 2 mm = 2000 µm, this gives α = 0.0003 rad ≈ 1 arc minute = 1/60°.

There is an interesting twist to these considerations: When the angular size of the light source is bigger than the angle α, one cannot expect to see the interference pattern, because the image of the source in the focal plane is already as big as the distance between the spots.

So, at a distance of, say, 100 m, the light source should be smaller than 0.0003 · 100 m = 3 cm for the interference pattern to show up. A typical street light is bigger than that, hence, there is no interference pattern visible for the nearby street light. However, for a distance of 1 km, the light source will show the interference fringes if it is smaller than 0.0003 · 1000 m = 30 cm – and this condition is fulfilled for the street lights on the distant hill!
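
The little calculation above can be summarized in a few lines of Python (using the same round numbers as in the text):

```python
# Fringe spacing and maximum resolvable source size for the
# double-slit camera experiment described above.
lam = 0.6e-6  # wavelength of visible light, m
d = 2e-3      # slit separation, m

alpha = lam / d  # angular fringe spacing, radians

def max_source_size(distance_m):
    """Largest source (in m) that still shows fringes at this distance."""
    return alpha * distance_m

print(f"alpha = {alpha:.1e} rad")                        # about 1 arc minute
print(f"at 100 m: {max_source_size(100) * 100:.0f} cm")  # 3 cm
print(f"at 1 km:  {max_source_size(1000) * 100:.0f} cm") # 30 cm
```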

In the 1850s, the French physicist Hippolyte Fizeau suggested using this method to determine the angular diameter of stars: observing a star through a telescope with a two-slit aperture with a small distance between the slits, one expects to see interference fringes, since stars are very nearly point-like sources of light. Increasing the distance between the slits, however, shrinks the critical angle at which the interference pattern is lost, and from the slit separation at which the fringes disappear, one can calculate the angular diameter of the star.

Edouard Stéphan, astronomer at the observatory of Marseille in France, was the first to put this method into practice, but he always saw interference patterns: he could only establish upper bounds on the angular size of stars.

The first successful application of the method was by Albert Michelson and Francis Pease in 1920: they measured the diameter of Betelgeuse, the bright red star in the constellation Orion. It is 0.05 arc seconds, the size of a street light at a distance of 1250 km.

For such measurements, I'll need better equipment.




Basic concepts: A Qualitative Introduction in Antoine Labeyrie, Stephen G. Lipson, Peter Nisenson: An introduction to optical stellar interferometry.

Florentin Millour:
All you ever wanted to know about optical long baseline stellar interferometry, but were too shy to ask your adviser. arXiv:0804.2368v1 [astro-ph]

Edouard Stéphan: Sur l'extrême petitesse du diamètre apparent des étoiles fixes. Comptes Rendus de l'Académie des Sciences 78, 1008-1012 (1874).

Albert A. Michelson, Francis G. Pease: Measurement of the diameter of alpha Orionis with the interferometer. Astrophys. J. 53, 249-259 (1921).


Wednesday, November 26, 2008

Galileo and the Discovery of a New World

2009 will be the International Year of Astronomy, commemorating the 400th anniversary of Galileo Galilei's groundbreaking astronomical discoveries with the then new telescope.

It's in this context that the University of Heidelberg is organising a series of public lectures, Galilei's first glimpse through the telescope and its consequences today. So, a few days ago I spent a very entertaining evening in the magnificent "Alte Aula" of the university (photo here), listening to a talk by historian of science William Shea of the Cattedra Galileiana di Storia della Scienza, Università degli Studi di Padova, on Galileo and the Discovery of a New World.


Shea explained Galileo's main astronomical discoveries, and also told quite a few entertaining side stories. For example, Galileo had no real understanding of the detailed workings of his telescope. On the other hand, he was an accomplished artist who composed his famous drawings of the moon from partial views of the lunar disc, as his telescope didn't show the whole moon at once.

If you are interested in a compact and entertaining introductory lecture about Galileo the astronomer, you might want to check out Shea's talk (it's in English, contrary to all the German text on the web page).



Sunday, September 21, 2008

100 Years of Space-Time

Die Anschauungen über Raum und Zeit, die ich Ihnen entwickeln möchte, sind auf experimentell-physikalischem Boden erwachsen. Darin liegt ihre Stärke. Ihre Tendenz ist eine radikale. Von Stund' an sollen Raum für sich und Zeit für sich völlig zu Schatten herabsinken und nur noch eine Art Union der beiden soll Selbständigkeit bewahren.

Hermann Minkowski, opening words of his talk "Raum und Zeit" at the 80. Meeting of Natural Scientists and Physicians, Cologne 1908 (English translation see footnote).

Hermann Minkowski, 1864-1909 (from MacTutor History of Mathematics Archive)
September 21, 1908, was a wonderful and sunny late-summer Monday in Cologne, Germany, where scientists from all over the country had come together for the 80th General Meeting of the Society of Natural Scientists and Physicians.

On that day, Hermann Minkowski, a well-known mathematician from Göttingen, gave a talk with the title "Raum und Zeit" – "Space and Time". In this now famous talk, Minkowski proposed a new formulation of the special theory of relativity. His formulation implied a unification of the notions of space and time, which traditionally have been seen as completely independent, to a four-dimensional entity dubbed "space-time".

Points in this "space-time" correspond to "events", i.e. something happening at a certain time and at a certain point in space, and Minkowski proposed to define a distance between events x (at time t and location x, y, z) and x' (at time t' and location x', y', z') by

distance(x, x') = c²(t − t')² − (x − x')² − (y − y')² − (z − z')²,

where c is the speed of light. The distance between two events defined in this way is, according to the special theory of relativity, the same for all observers in uniform relative motion, or, using the technical jargon, does not change under Lorentz transformations. This definition is a generalization of the Euclidean distance between two points in space, which does not change ("is invariant") under rotations, and the corresponding four-dimensional space-time is now called "Minkowski space".
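
The claimed invariance is easy to verify numerically. The following sketch (in units with c = 1, and with two arbitrary example events of my choosing) applies a Lorentz boost along the x-axis and checks that the Minkowski distance between the events is unchanged:

```python
# Numerical check that the Minkowski distance between two events is
# unchanged by a Lorentz boost along the x-axis with velocity v.
import math

c = 1.0  # units in which the speed of light is 1

def interval(e1, e2):
    """Minkowski distance between events (t, x, y, z), as defined above."""
    dt, dx, dy, dz = (a - b for a, b in zip(e1, e2))
    return c**2 * dt**2 - dx**2 - dy**2 - dz**2

def boost(event, v):
    """Lorentz boost of an event along the x-axis with velocity v (|v| < 1)."""
    t, x, y, z = event
    gamma = 1.0 / math.sqrt(1 - v**2)
    return (gamma * (t - v * x), gamma * (x - v * t), y, z)

e1, e2 = (0.0, 0.0, 0.0, 0.0), (5.0, 3.0, 1.0, 2.0)
s_before = interval(e1, e2)
s_after = interval(boost(e1, 0.6), boost(e2, 0.6))
print(s_before, s_after)  # the two values agree up to rounding
```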

All the concepts we now use to describe the kinematics of special relativity – events, worldlines, light cones – were presented in front of a large public audience for the first time one hundred years ago today, in Minkowski's lecture.


Future ("Nachkegel") and past ("Vorkegel") light cones, and timelike ("zeitartiger") and spacelike ("raumartiger") vectors in the writeup of Minkowski's talk (page 82 of Raum und Zeit, Jahresbericht der Deutschen Mathematiker-Vereinigung 18, 1909).

Worldline ("Weltlinie") of a particle in Minkowski spacetime (page 86 of Raum und Zeit).

Hermann Minkowski was born in Lithuania and studied mathematics at the University of Königsberg. His contributions to number theory, complex analysis and algebra had made him quite renowned at a young age, and he held positions as professor of mathematics in Bonn, Zürich (at the ETH), and Göttingen. At Göttingen, he shared Hilbert's interest in the problems of the theory of the electron and special relativity.

Curiously, his worldline had crossed that of Albert Einstein before: as a physics student in Zürich, Einstein had been taught mathematics by Minkowski. But it seems that Minkowski didn't have a very good impression of his student. Einstein, in turn, had some difficulty making sense of the reformulation of his theory by his former teacher. Arnold Sommerfeld quotes Einstein as saying, in reaction to Minkowski's work, that "since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore."

But it's clear that Minkowski's four-dimensional world was an essential conceptual step in the understanding of relativity, and indispensable for the later formulation of general relativity. Unfortunately, Minkowski didn't live to see or even foster these developments. His lecture on "Space and Time" was his last scientific work – he died from a ruptured appendix in January 1909, at the age of 44.


The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.

Translation by W. Perrett and G.B. Jeffery, taken from Hermann Minkowski, "Space and Time", in Hendrik A. Lorentz, Albert Einstein, Hermann Minkowski, and Hermann Weyl, The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity (Dover, New York, 1952), pages 75-91.


For more about Minkowski and his role in special relativity, see e.g. L. Corry: Hermann Minkowski and the postulate of relativity, Arch. Hist. Exact Sci. 51 (1997), 273-314 (a free preprint as PDF is here), Scott Walter: Hermann Minkowski's approach to physics, Math. Semesterber. 55 (2008), 213-235 (preprint as PDF), and, by the same author, Minkowski, Mathematicians, and the Mathematical Theory of Relativity, in H. Goenner, J. Renn, J. Ritter, T. Sauer (eds.): The Expanding Worlds of General Relativity (Einstein Studies 7), Boston/Basel: Birkhäuser 1999, 45-86 (preprint as PDF).

Sunday, September 07, 2008

Interna II

While my wife is busy with all kinds of last-minute preparations for the conference tomorrow, I've spent the weekend unpacking the last bunch of boxes. Normal life hopefully will resume soon, including an occasional contribution to our blog...

In the meantime, just days before the first beam is supposed to go around the LHC, I've come across a portrait of physicist Peter Higgs, very much worth reading, in this week's edition of the German newspaper Die Zeit, "Das Teilchen Higgs".

As most of you don't read German, never mind, it's Higgs time in the British press anyway: I can refer you instead to the portraits Father of the 'God Particle' by James Randerson in The Guardian of June 30, 2008, Prof Peter Higgs interview: Smashing atoms at CERN and the hunt for the 'God' particle by Roger Highfield in the Telegraph of August 4, 2008, or The man with the answer to life, the universe and (nearly) everything by Jonathan Leake in The Sunday Times of August 17, 2008.

For a bit more technical background on the prehistory of the "Higgs boson", and the role many other physicists played in it, check out Peter Higgs: the man behind the boson by Peter Rodgers in Physics World from July 10, 2004, which includes links to all the relevant original papers, or listen to Peter Higgs himself telling the story of My Life as a Boson (recorded on May 21, 2001 at the Michigan Center for Theoretical Physics).

And, of course, Peter Higgs is also on YouTube.






Thursday, July 24, 2008

Liquid Helium

This month has seen the centenary of the first liquefaction of helium, the lightest noble gas:

On July 10, 1908, a complicated apparatus in the laboratory of Heike Kamerlingh Onnes in Leiden, Holland, produced 60 ml of liquid helium, at a temperature of 4.2 Kelvin, or −269°C.

Heike Kamerlingh Onnes (left) and Johannes Diderik van der Waals in 1908 in the Leiden physics laboratory, in front of the apparatus used later to condense helium. (Source: Museum Boerhaave, Leiden)
Kamerlingh Onnes had been experimenting with cold gases for quite some time, as he was trying to check the theories of his fellow countryman Johannes Diderik van der Waals on the equation of state of real gases. In 1898, he had been scooped in the liquefaction of hydrogen (at 20.3 K) by James Dewar (who, in the process, had invented the Dewar flask).

But as it turned out, the liquefaction of helium required a multi-step strategy and a big laboratory, and that was Kamerlingh Onnes' business: using first liquid air, then liquid hydrogen, helium could finally be cooled enough, via the Joule-Thomson effect, to condense into the liquid state. The physics laboratory in Leiden had become the "coldest place on Earth", and immediately turned into the international centre for low-temperature physics.

Three years later, in 1911, Onnes found that mercury lost its electrical resistivity when cooled to the temperature of liquid helium - this was the discovery of superconductivity. In 1913, Kamerlingh Onnes was awarded the Nobel Prize in Physics, "for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium".

Paul Ehrenfest, Hendrik Lorentz, Niels Bohr, and Heike Kamerlingh Onnes (from left to right) in 1919 in front of the helium liquefactor in the Leiden physics laboratory. (Source: Instituut-Lorentz for Theoretical Physics)



I read about the story of the liquefaction of helium in the July issue of the Physik Journal (the German "version" of Physics Today - PDF file available with free registration). Moreover, the Museum Boerhaave in Leiden is showing a special exhibition commemorating the event, "Jacht op het absolute nulpunt", though the website seems to be in Dutch only. However, the curator of the exhibition, Dirk van Delft, tells the story in a nice article in the March 2008 issue of Physics Today, "Little Cup of Helium, Big Science", where he makes the point that the Kamerlingh Onnes Laboratory in Leiden marked the beginning of "Big Science" in physics (PDF file available here and here).

One hundred years later, there is a twist to the story I wasn't aware of at all: helium is now used so much in science and industry that there may be a serious shortage ahead! [1]

Helium Demand ...


The following graph, plotting data provided by the US Geological Survey, shows how helium is used today in the US:


Helium Usage. Data from US Geological Survey; click to enlarge. (XLS/PDF file)


The biggest chunk of helium goes to technical applications, which include pressurizing and purging, welding cover gas, controlled atmospheres, and leak detection. The second-largest share is already cryogenics, such as cooling the superconducting magnets of magnetic resonance imaging (MRI, formerly known as nuclear magnetic resonance, NMR) machines in medicine, and the superconducting cavities and magnets of high-energy particle accelerators. Only then follow applications that involve lifting, as in balloons and blimps.

The LHC, for example, needs 120 metric tons of liquid helium to cool the accelerator down below 2.17 Kelvin, where helium becomes a superfluid and an ideal thermal conductor (90 tons are used in the magnets and the rest in the pipes and refrigerator - see p. 33 of LHC the guide), and 40 more tons to cool the magnets of the large detectors to 4.5 Kelvin, so that their coils are superconducting [2]. But even this huge amount of helium is just about 5% of the annual US consumption of helium for cryogenics!

...and Helium Supply


Helium is the second-most abundant element in the Universe, but on Earth it is rare: the atmosphere cannot hold on to the light noble gas atoms - ionized helium is transported along magnetic field lines into the upper atmosphere, where its thermal velocity exceeds the escape velocity of 11.2 km/s [3].

Thus, the constant helium content of about 5 parts per million (ppm) in the atmosphere is maintained only because helium is constantly being produced anew in radioactive decay: for each uranium, thorium or radon nucleus undergoing alpha decay in the Earth's crust, a new helium atom emerges. This helium accumulates in gas fields within the Earth, often together with natural gas - and that is where helium can be extracted.
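
As an aside, one can estimate how hot helium has to be to escape. The following back-of-the-envelope sketch (the constants are textbook values, not from the post) asks at what temperature the rms thermal speed of a helium atom reaches the escape velocity; the answer, around 20,000 K, illustrates why escape proceeds only from the hot, ionized upper atmosphere and the fast tail of the velocity distribution:

```python
# At what temperature does the rms thermal speed of a helium atom,
# v_rms = sqrt(3kT/m), reach Earth's escape velocity of 11.2 km/s?
import math

k = 1.381e-23     # Boltzmann constant, J/K
m_He = 6.646e-27  # mass of a helium atom, kg
v_esc = 11.2e3    # escape velocity, m/s

def v_rms(T):
    """Root-mean-square thermal speed of a helium atom at temperature T."""
    return math.sqrt(3 * k * T / m_He)

# Solve v_rms(T) = v_esc for T:
T_escape = m_He * v_esc**2 / (3 * k)
print(f"T = {T_escape:.0f} K")  # roughly 20,000 K
```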

The following figure compares the annual helium production in the US from the exploitation of gas fields with US consumption and exports, and with total world production (data according to the US Geological Survey, which is responsible for the anomaly that US production can exceed world production):


Annual Helium Production. Data from US Geological Survey; click to enlarge.
(XLS/PDF file)


US helium consumption and exports clearly exceed production, which is possible only because the US helium stockpile is being drawn down. World helium production is still rising at the moment, but easily exploitable reservoirs will become rare some time in the future, as they already are in the US.

Fortunately for future particle accelerators, and all other applications of helium in science and technology, helium can also be recovered from the atmosphere, albeit at a higher cost:

The Meissner-Ochsenfeld effect: A superconductor is hovering above a magnet (Source: Wikipedia)
When Walther Meissner succeeded in producing liquid helium in Berlin in 1925 [4], he could not rely on helium from American gas fields because of embargoes in the wake of World War I - helium to fill balloons and Zeppelins was considered of high strategic value. Instead, he cooperated with a company that later sold helium-liquefaction equipment commercially. And he could distill enough liquid helium to discover, together with his postdoc Robert Ochsenfeld, that superconductors expel magnetic fields – the Meissner-Ochsenfeld effect.








[1] For the pending helium shortage, see for example
  • The coming helium shortage, by Laura Deakin: "It’s surprising how many scientists and nonscientists alike are oblivious of the pending helium shortage. But it is a fact—we will run out of helium. [...] The question is when, not if, this will happen." (Chemical Innovation 31 No. 6, June 2001, 43–44)
  • Helium shortage hampers research and industry, by Karen H. Kaplan: "If new sources of helium aren't developed, the world's supply of the gas will dwindle and prices will soar." (Physics Today, June 2007, page 31)
  • Helium Supplies Endangered, Threatening Science And Technology: "In America, helium is running out of gas." (ScienceDaily, January 5, 2008)

[2] For the cooling of the LHC, see for example
  • Let the cooling begin at the LHC, by Hamish Johnston: "Tens of thousands of tonnes of equipment must be cooled to near absolute zero before the Large Hadron Collider can detect its first exotic particle. The head of CERN's cryogenics group, Laurent Tavian, tells Hamish Johnston how this will be done." (Physics World, November 7, 2007)
  • Messer to provide helium for LHC project, by Rob Cockerill: "Over the course of the next few years, industrial gas specialist [...] is to provide a 160.000 kg supply of helium to the European Organisation for Nuclear Research (CERN) for the operation of the world’s largest particle accelerator." (gasworld.com, January 23, 2008)
  • Cern lab goes 'colder than space', by Paul Rincon: "A vast physics experiment built in a tunnel below the French-Swiss border is fast becoming one of the coolest places in the Universe." (BBC News, July 18, 2008)
  • Cooldown status - the current state of the cooldown of the LHC, from CERN.

[3] See for example pages 250 and 251 of Noble Gas Chemistry by Minoru Ozima and Frank A. Podosek, Cambridge University Press, 2002.

[4] Verflüssigung des Heliums in der Physikalisch-Technischen Reichsanstalt, by Walther Meissner, Naturwissenschaften 13 No 32 (1925) 695-696.

Wednesday, April 23, 2008

Max Planck at 150

Max Planck, April 23, 1858 - October 4, 1947.
(Credits: Max Planck Society)
Today is the 150th birthday of Max Planck. He was born on April 23, 1858, the son of a professor of law at Kiel on the Baltic coast in northern Germany, and grew up in Kiel and Munich.

In December 1924, in a lecture at Munich on the occasion of the 50th anniversary of the beginning of his studies at the university there, he recalled how he came to study physics. He had in fact received quite discouraging advice from the physicist Philipp von Jolly back in 1874, when young Max Planck was unsure whether to choose physics or music. Jolly was convinced that physics had become a mature field and an elaborate science, crowned by the recent, firm establishment of the principle of the conservation of energy, and that only minor "grains of dust and bubbles" were left to explore. Nevertheless, Planck was fascinated by the then brand-new theories of thermodynamics and electrodynamics, and wanted to understand them in depth. And he succeeded in that.

Applying the concept of entropy to electromagnetic radiation, he found in the late 1890s a new constant of nature - today known as the Planck constant. This constant, combined with the speed of light and Newton's constant of gravitation, made it possible to formulate units of mass, length and time "completely independent of special material bodies and substances, and valid for all times and even extraterrestrial and non-human civilisations" - the natural units now known as the Planck units. And of course, most importantly, this constant allowed Planck to write down the correct theoretical description of the spectrum of electromagnetic radiation emitted by a hot body. Curiously, his formula implied that the energy of this radiation comes in small packets of energy - it is quantised. The rest is history, as they say.
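The Planck units follow directly from combining those three constants; a quick sketch (note that Planck's original definition used h rather than the reduced constant ħ used here, which changes the values by factors of √(2π)):

```python
import math

# CODATA values, SI units
hbar = 1.054571817e-34  # reduced Planck constant, J s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # ~ 1.6e-35 m
planck_mass = math.sqrt(hbar * c / G)       # ~ 2.2e-8 kg
planck_time = math.sqrt(hbar * G / c**5)    # ~ 5.4e-44 s

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck mass:   {planck_mass:.3e} kg")
print(f"Planck time:   {planck_time:.3e} s")
```

Dimensional analysis alone fixes these combinations: they are the unique products of powers of ħ, G and c with the dimensions of length, mass and time.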

Happy birthday, Max Planck!





  • For more about Max Planck, check out the biographies at Wikipedia, Encyclopedia Britannica, or MacTutor. His role in establishing quantum theory is discussed by Helge Kragh in a short essay for PhysicsWorld, Max Planck: the reluctant revolutionary.

  • Besides opening the door to the quantum, Max Planck was a very gifted organiser of science and long-time editor of the prestigious Annalen der Physik. He "discovered" and strongly supported Albert Einstein. The Max Planck Society, which arose from the Kaiser Wilhelm Gesellschaft that Planck presided over for many years, has organised an interesting online exhibit on the occasion of the 50th anniversary of his death in 1997.

  • Today's Planck units saw the light of day in an addendum to the paper Über irreversible Strahlungsvorgänge ("On irreversible radiative processes"), published as Sitzungsbericht Deutsche Akad. Wiss. Berlin, Math-Phys Tech. Kl 5 440-480 (1899), and Annalen der Physik 306 [1] (1900) 69-122. The Planck spectrum was published in Über das Gesetz der Energieverteilung im Normalspectrum ("On the law of energy distribution in the normal spectrum"), Annalen der Physik 309 [4] (1901) 553-563.

  • Planck relates the story about Jolly in a guest lecture, Vom Relativen zum Absoluten ("From the relative to the absolute"), at the University of Munich on December 1, 1924. The German text of the lecture can be found in the collection Max Planck: Vorträge, Reden, Erinnerungen.



