
Monday, January 30, 2012

Wolfgang Pauli, 1931, not so dry

In 1931, Wolfgang Pauli went to Ann Arbor, Michigan, for an extended stay. In Ann Arbor, Pauli gave lectures and met, among others, with Otto Laporte, George Uhlenbeck and Arnold Sommerfeld. In the summer of 1931, the USA suffered from heat and prohibition. In a letter from July 1st, 1931, to his student Rudolf Peierls, Pauli wrote:
"[T]rotz Gelegenheit zum Schwimmen leide ich sehr unter der großen Hitze hier. Unter der "Trockenheit" leide ich aber gar nicht, da Laporte und Uhlenbeck ausgezeichnet mit Alkohol versorgt sind (man merkt die Nähe der kanadisehen Grenze). Physik (und Physiker) gibt es hier sehr viel, aber ich finde sie zu formal..."

"Despite the opportunity to swim, I suffer from the heat. I do not suffer however from the "dryness," since Laporte and Uhlenbeck have an excellent supply of liquor (one notices the vicinity of the Canadian border). One finds here a lot of physics (and physicists), but most I find too formal..."
Evidently, the supply was ample since, in a letter from later that summer, Pauli reported:
"Dummerweise bin ich neulich (in etwas angeheitertem Zustand) so ungünstig über eine Treppe gefallen, daß ich mir die Schulter gebrochen habe und nun im Bett liegen muß, bis die Knochen wieder ganz sind - sehr langweilig."

"Unfortunately, the other day I fell (somewhat tipsy) on the stairs and broke my shoulder. Now I have to lie in bed till the bones have healed - very boring."

Since drinking was illegal, the official reason for his accident was that he slipped on the tiles at the swimming pool. The image shows Pauli with his broken shoulder (image source: CERN archive). Text source: "Wolfgang Pauli: Scientific Correspondence with Bohr, Einstein, Heisenberg a.o.," Volume II: 1930-1939, edited by Karl von Meyenn, Springer-Verlag (1985).

Sunday, January 29, 2012

Interna

Our two lovely girls have learned to walk!



Gloria has fallen in love with a plush moose that I bought at the Stockholm airport. When I was pregnant, I gave it to Stefan "for practice," and since then the moose has patiently waited for its cue. It came when Gloria learned to point with her index finger. If her Swedish friend is in sight, she excitedly points and says "Da! Da! Da!" and, if one lets her, she takes the plush moose everywhere.

Lara has learned to drink with a straw, but my efforts to teach Gloria the same have so far been futile. Gloria is generally more picky about things that go into her mouth; she clearly doesn't like vegetables, and every other day refuses to drink juice. On the upside, she has learned that cardboard isn't edible, a lesson that I hope Lara learns before she has eaten up all the picture books. We upgraded Lara to the next clothes size; she is now noticeably taller than her sister.

Next week, the babies are scheduled for the meningococcal vaccination, and then we're through with the first round of all the standard vaccinations: diphtheria, tetanus, pertussis, polio, Streptococcus pneumoniae, Haemophilus influenzae type b, hepatitis B, measles, mumps, rubella and varicella.

I am always shocked when I read about parents who aren't vaccinating their children. I thought that was a problem that exists only in the USA, but our pediatrician puzzled me last year by opening our first appointment with a preemptive defense against arguments we had never intended to make.

After some reading, I learned that about 3-5% of Germans believe vaccinations are unnecessary or harmful. UNICEF estimates that in 2009 the national coverage with the first measles vaccination was 96% in Germany and 92% in the USA. The basic reproduction number R of measles is estimated to be 12-18, making measles one of the most contagious diseases known. The percentage of people who have to be immune to prevent a spread of the infection is roughly 1-1/R: for measles that's more than 93%, for mumps and rubella about 80%. However, not everybody who is vaccinated becomes immune.

Too few people know that the reason the measles, mumps, and rubella (MMR) vaccination is repeated at least once is not that it improves an individual's immunization, but that in at least 5% of all cases the vaccination fails entirely. Our pediatrician said that 5% is what the vaccine producers claim; what he sees in practice is 20-30%. One of the probable reasons is that the MMR vaccine has to be kept cold, and any mistake along the delivery chain makes the vaccine ineffective. The follow-up vaccination is supposed to bring the failure rate down, so that the fraction successfully immunized is 1-(5/100)(5/100) > 0.99, or so the idea goes. More realistically, the immune fraction is about 0.96 × (1-(20/100)(20/100)) ≈ 92% in Germany, and ≈ 88% in the USA.
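For those who like to check such numbers, here is a minimal sketch of the above arithmetic in Python. The assumption that the two shots fail independently, with the same rate each, is mine; it reproduces the estimates quoted above:

```python
# Herd immunity threshold and effective immunization coverage.
# Numbers are taken from the text; the independence of the two
# vaccination attempts is an assumption of this estimate.

R = 12                   # lower estimate of the basic reproduction number of measles
threshold = 1 - 1/R      # fraction of immune people needed to stop the spread
print(f"herd immunity threshold for R = {R}: {threshold:.1%}")   # ~91.7%

def effective_immunity(coverage, failure_rate):
    """Fraction of the population immune after two shots, assuming
    each shot fails independently with the given rate."""
    return coverage * (1 - failure_rate**2)

print(f"{effective_immunity(0.96, 0.05):.1%}")  # Germany, producers' claim: ~95.8%
print(f"{effective_immunity(0.96, 0.20):.1%}")  # Germany, pediatrician's estimate: ~92.2%
print(f"{effective_immunity(0.92, 0.20):.1%}")  # USA, pediatrician's estimate: ~88.3%
```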

And so, measles is far from extinct, and smaller outbreaks still happen. Sadly, even in Germany, people still die from measles. The case reported in the article is particularly tragic: A young boy, whose parents had refused vaccination, fell sick with measles and, in the doctor's waiting room, infected 6 children, some of them too young to have been vaccinated; one died.

Ah, I am lecturing again, even though this was supposed to be a family-update post, sorry ;o)

So, back on topic: Gloria and Lara had only mild side effects from the vaccinations. We have exchanged the backward-facing baby car seats for forward-facing ones, and the girls can now enjoy watching the cars go by, while we enjoy watching the babies watching. I didn't know how much I hated the backward-facing seats till they were gone.

And I should stop referring to Lara and Gloria as "the babies" because they are now officially toddlers.

Wednesday, January 25, 2012

The Planck length as a minimal length

The best scientific arguments are those that are surprising at first sight, yet at second sight make perfect sense. The following argument, which goes back to Mead's 1964 paper "Possible Connection Between Gravitation and Fundamental Length," is of this type. Look at the abstract and note that it took more than 5 years from submission to publication of the paper. Clearly, Mead's argument seemed controversial at the time, even though all he did was study the resolution of a microscope, taking into account gravity.

For all practical purposes, the gravitational interaction is far too weak to be of relevance for microscopy. Normally, we can neglect gravity, in which case we can use Heisenberg's argument that I first want to remind you of before adding gravity. In the following, the speed of light c and Planck's constant ℏ are equal to one, unless they are not. If you don't know how natural units work, you should watch this video, or scroll down past the equations and just read the conclusion.

Consider a photon with frequency ω, moving in direction x, which scatters on a particle whose position on the x-axis we want to measure (see image below). The scattered photons that reach the lens (red) of the microscope have to lie within an angle ε to produce an image from which we want to infer the position of the particle.

According to classical optics, the wavelength of the photon sets a limit to the possible resolution, Δx ≳ 1/(ω sin ε). But the photon used to measure the position of the particle has a recoil when it scatters and transfers a momentum to the particle. Since one does not know the direction of the photon to better than ε, this results in an uncertainty for the momentum of the particle in direction x of Δp ≳ ω sin ε. Taken together, one obtains Heisenberg's uncertainty principle: Δx Δp ≳ 1.
We know today that Heisenberg's uncertainty principle is more than a limit on the resolution of microscopes; up to a factor of order one, the above inequality is a fundamental principle of quantum mechanics.

Now we repeat this little exercise by taking into account gravity.

Since we know that Heisenberg's uncertainty principle is a fundamental property of nature, it does not make sense, strictly speaking, to speak of the position and momentum of the particle at the same time. Consequently, instead of speaking about the photon scattering off the particle as if that happened at one particular point, we should speak of the photon having a strong interaction with the particle in some region of size R (shown in the above image).

With gravity, the relevant question now is what happens to the measured particle due to the gravitational attraction of the test particle.

For any interaction to take place and a subsequent measurement to be possible, the time elapsed between the interaction and measurement has to be at least of the order of the time, τ, the photon needs to travel the distance R, so that τ is larger than R. (The blogger editor has an issue with the "larger than" and "smaller than" signs, which is why I avoid using them.) The photon carries an energy that, though in general tiny, exerts a gravitational pull on the particle whose position we wish to measure. The gravitational acceleration acting on the particle is at least of the order a ≈ Gω/R², where G is Newton's constant which is, in natural units, the square of the Planck length lPl. Assuming that the particle is non-relativistic and much slower than the photon, the acceleration lasts about the duration the photon is in the region of strong interaction. From this, the particle acquires a velocity of v ≈ aR ≈ Gω/R. Thus, in the time R, the acquired velocity allows the particle to travel a distance of L ≈ Gω.

Since the direction of the photon was unknown to within the angle ε, the direction of the acceleration and of the motion of the particle is also unknown. Projection onto the x-axis then yields the additional position uncertainty Δx ≳ Gω sin ε. Combining this with the usual uncertainty (multiply both, then take the square root), one obtains Δx ≳ √G = lPl. Thus, we find that the distortion of the measured particle by the gravitational field of the particle used for measurement prevents the resolution of arbitrarily small structures. Resolution is bounded by the Planck length, which is about 10⁻³³ cm. The Planck length thus plays the role of a minimal length.
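To see the trade-off numerically, here is a small sketch in Python. I simply add the two contributions; the multiply-and-take-the-square-root trick used above gives the same Planck-length scale, and the value of sin ε as well as all order-one factors are arbitrary choices for illustration:

```python
import numpy as np

# Natural units (c = hbar = 1): Newton's constant G is the Planck length squared.
# Total position uncertainty: optical resolution plus gravitational distortion,
#   dx(omega) ~ 1/(omega*sin_eps) + G*omega*sin_eps
G = 1.0          # measure lengths in Planck lengths, so l_Pl = sqrt(G) = 1
sin_eps = 0.5    # aperture factor; any value in (0, 1] gives the same minimum

omega = np.logspace(-3, 3, 10_000)           # photon frequencies to scan
dx = 1/(omega*sin_eps) + G*omega*sin_eps     # combined uncertainty

i = dx.argmin()
print(f"minimal dx = {dx[i]:.3f} l_Pl at omega = {omega[i]:.3f}")
# The minimum, 2*sqrt(G) = 2 l_Pl, is independent of sin_eps: no choice of
# photon frequency resolves structures below (the order of) the Planck length.
```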

(You might criticize this argument because it makes use of Newtonian gravity rather than general relativity, so let me add that, in his paper, Mead goes on to show that the estimate remains valid also in general relativity.)

As anticipated, this minimal length is far too small to be of relevance for actual microscopes; its relevance is conceptual. Given that Heisenberg's uncertainty turned out to be a fundamental property of quantum mechanics, encoded in the commutation relations, we have to ask whether this modified uncertainty, too, should be promoted to fundamental relevance. In fact, in the last five decades this simple argument has inspired a great many works that have attempted exactly this. But that is a different story and shall be told another time.

To finish this story, let me instead quote from a letter that Mead, the author of the above argument, wrote to Physics Today in 2001. In it, he recalls how little attention his argument originally received:
"[In the 1960s], I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length..., of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it."

Sunday, January 22, 2012

A real thought experiment that shows virtually nothing

Two weeks ago, we discussed Eppley and Hannah's thought experiment. Eppley and Hannah argued that a fundamental theory that is only partly quantized leads to contradictions with either quantum mechanics or special relativity; in particular, we cannot leave gravity unquantized.

However, we also discussed that this thought experiment might be impossible to perform in our universe, since it requires an essentially noiseless system and detectors more massive than the mass we have available. Unless you believe in a multiverse that offers such an environment somewhere, this leaves us in a philosophical conundrum, since we have to conclude that any contradiction in Eppley and Hannah's thought experiment is unobservable, at least for us. And if you do believe in a multiverse, maybe gravity is only quantized in parts of it.

So you might not be convinced and insist that gravity may remain classical. Here I want to examine this option in more detail and explain why it is not a fruitful approach. If you know a thing or two about semi-classical gravity, you can skip the preliminaries.



Preliminaries

If gravity remained classical, we would have a theory that couples a quantum field to classical general relativity (GR). GR describes the curvature of space-time (denoted R with indices) that is caused by distributions of matter and energy, encoded in the so-called "stress-energy-tensor" (denoted T with indices). The coupling constant is Newton's constant G.
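In formulas, the coupling described in this paragraph is given by Einstein's field equations, in their standard textbook form (signature and unit conventions aside):

```latex
\[
  R_{\mu\nu} - \tfrac{1}{2}\, R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu}
\]
```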

In a quantum field theory, the stress-energy-tensor becomes an operator that acts on elements of the Hilbert space. But in the equations of GR, one can't just replace the classical stress-energy-tensor with a quantum operator, since the latter has non-vanishing commutators that the former doesn't have, and an operator cannot be equal to the tensor-valued function of the classical background that sits on the curvature side of the equations. Instead, we have to take the classical part of the operator, that is, its expectation value in some quantum state, denoted as usual by bra-kets: Rμν − ½R gμν = 8πG ⟨Tμν⟩.

This is called semi-classical gravity; quantum fields coupled to a classical background. Why, you might ask, don't we just settle for this?

To begin with, semi-classical gravity doesn't actually solve the problems that we were expecting quantum gravity would solve. In particular, semi-classical gravity is the origin rather than the solution of the black-hole information loss problem. It also doesn't prevent singularities (though in some cases it might help). But, you might argue, maybe we were just expecting too much. Maybe the answers to these problems lie entirely elsewhere. That semi-classical gravity doesn't help us here doesn't mean the theory isn't viable, it just means it doesn't do what we wanted it to do. This explains a certain lack of motivation for studying this option, but isn't a good scientific reason to exclude it.

Okay, you have a point here. But semi-classical gravity not only fails to solve these problems, it brings with it a bunch of new ones. To begin with, the expectation value of the stress-energy-tensor is divergent and has to be regularized, a problem that becomes considerably more difficult in curved space. This is a technical problem which has been studied for some decades now, and actually with great success. While some problems remain, you might take the point of view that they will be addressed sooner or later.

But a much more severe problem with the semi-classical equations is the measurement process. If you recall, the expectation value of a field that is in a superposition of states, with probability 1/2 here and probability 1/2 there, has to be updated upon measurement: suddenly, the particle and its expectation value are here (or there) with probability 1. This process violates local conservation of the expectation value of the stress-energy-tensor. But this local conservation is built into GR; it is necessarily always identically fulfilled. This means that semi-classical gravity can't be valid during the measurement. But still, you might insist, we haven't understood the measurement process in quantum mechanics anyway, and maybe the theory has to be suitably modified during measurement, so that the conservation law can in fact be temporarily violated.
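Schematically, suppressing indices and smearing functions, the collapse does this to the source term of the semi-classical equations:

```latex
\[
  \langle T \rangle_{\text{before}}
    = \tfrac{1}{2}\, T_{\text{here}} + \tfrac{1}{2}\, T_{\text{there}}
  \;\longrightarrow\;
  \langle T \rangle_{\text{after}} = T_{\text{here}}
  \quad \text{or} \quad T_{\text{there}} .
\]
% A discontinuous jump like this is incompatible with the local conservation
% law \nabla^\mu \langle T_{\mu\nu} \rangle = 0 that the Einstein equations enforce.
```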

You are really stubborn, aren't you?

So you insist, but I hope the latter problem illuminated just how absurd semi-classical gravity is if you think about a quantum state in a superposition of different positions, e.g. a photon that went through a beam splitter. Quantum mechanically, it has a 50% chance to go this way or that. But according to semi-classical gravity, its gravitational field goes half both ways! If the photon went left, its gravitational field went half with the photon, and half to the right. Surely, you'd think, there must be some way to experimentally exclude this absurdity?



Page and Geilker's experiment

Page and Geilker set out in 1981 to show exactly that, the absurdity of semi-classical gravity, with a suitably designed experiment. The most amazing thing about their study is that it got published in PRL, for the experiment is absurd in itself.

Their reasoning was as follows. Suppose you have a Cavendish-like setup, consisting of two pairs of massive balls connected by rods; see the image below (you are looking at the setup from above).

One rod (grey) hangs on a wire that has a mirror attached to it, so you can measure its motion by tracking the position of laser light shining onto the mirror. The other rod (not shown), connecting the two other balls (blue), can be turned to bring the balls into one of two positions, A or B. The gravitational attraction between the balls will cause the wire to twist in one of two directions, as indicated by the arrows.

Or so you think if you know classical gravity. But if the blue balls are in a quantum superposition of A and B, then the gravitational attraction of the expectation value of their mass distribution on the grey balls cancels: the wire doesn't twist, and the laser light doesn't move.
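To see the cancellation explicitly, here is a toy computation in Python; the geometry, masses, and distances are my own simplified stand-ins for the actual Page-Geilker setup:

```python
import numpy as np

G = 6.674e-11         # Newton's constant in SI units
M, m = 1.5, 0.015     # assumed masses (kg) of source (blue) and rod (grey) balls

def torque(sources, rod):
    """Net torque about the wire axis on the rod balls, from Newtonian
    gravity of point-like source balls (z-component, 2D geometry)."""
    tau = 0.0
    for r in rod:
        for s in sources:
            d = s - r
            F = G*M*m*d/np.linalg.norm(d)**3   # force on the rod ball
            tau += r[0]*F[1] - r[1]*F[0]       # r x F, z-component
    return tau

rod = [np.array([0.1, 0.0]), np.array([-0.1, 0.0])]    # grey balls on the rod
A = [np.array([0.1, 0.05]), np.array([-0.1, -0.05])]   # blue balls, position A
B = [np.array([0.1, -0.05]), np.array([-0.1, 0.05])]   # blue balls, position B (mirrored)

print(torque(A, rod))   # twists the wire one way
print(torque(B, rod))   # twists it the other way
# Semi-classical gravity sources the field with the expectation value of the
# mass distribution: for an equal superposition, half the mass sits in A and
# half in B, and by symmetry the net torque vanishes; the wire doesn't twist.
print(0.5*torque(A, rod) + 0.5*torque(B, rod))   # ~0
```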

To bring the blue balls into a superposition, Page and Geilker used a radioactive sample that decayed with some probability within 30 seconds, and with roughly equal probability within a longer time-span after that. Depending on the outcome of the decay, the blue balls either remained in position A or were moved to position B. The mirror moved, so they concluded that the gravitational field of the balls can't have been the expectation value of the superposition of A and B, and thus that semi-classical gravity is wrong.

Well, I hope you saw Schrödinger's cat laughing. While the decay of a radioactive sample is a purely quantum mechanical process, the wavefunction had long decohered by the time the rod was adjusted. The blue balls were no more in a quantum superposition than Schrödinger's cat was ever in a superposition of dead and alive.

This raises the question of whether Page and Geilker's experiment can actually be realized. The problem is, as always with quantum gravity, that the gravitational interaction is very weak. The heaviest masses that can currently be brought into a superposition of different locations, molecules of some thousand GeV, still have gravitational fields far too weak to be measurable. More can be said about this, but that deserves another post another time.
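To put a number on "far too weak," here is a rough estimate of the gravitational pull of such a molecule; the probe distance of one micrometer is my assumption for illustration:

```python
# Order-of-magnitude estimate; mass from the text, distance assumed.
G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
GeV_in_kg = 1.783e-27    # 1 GeV/c^2 in kilograms
m = 1e4 * GeV_in_kg      # "some thousand GeV", as quoted above
r = 1e-6                 # assumed distance to a probe: one micrometer

g = G*m/r**2
print(f"gravitational acceleration: {g:.1e} m/s^2")   # ~1.2e-21 m/s^2
# For comparison, Earth's surface gravity is 9.8 m/s^2: the field is some
# 22 orders of magnitude below anything a torsion balance could register.
```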


Bottom line

Semi-classical gravity is not considered a fundamentally meaningful description of Nature for theoretical reasons. These are good and convincing reasons, yet semi-classical gravity has stubbornly refused experimental falsification. This tells you just how frustrating the search for quantum gravity phenomenology can be.

Wednesday, January 18, 2012

The Academic Dollar

I didn't know whether to laugh or to cry when I read this article:

The authors are two economists and the above article proposes an improvement to the current publication system in academia. They propose to introduce a virtual currency, the "Academic Dollar" (A$), that would be traded among editors, authors, and reviewers and create incentives for each involved party to improve the quality of articles.

The idea of measuring scientific quality by one single parameter, currency in a market economy, is not new. It has been proposed before, in various forms, to rate scientific papers or ideas by monetary value. The problem with this is twofold. First, the scientific community is global, and incomes differ greatly from one institution to the next. If money influenced the rating of scientific quality, the largest influence would rest with the wealthiest institutions of the wealthiest nations. Second, market economies deal very poorly with intangible, long-term, public benefits, which is exactly why most basic research is tax-funded. It is thus questionable whether a neo-liberal reformation of academic culture would be beneficial.

The introduction of an Academic Dollar that could be exchanged according to its own rules circumvents these problems, so it is an interesting idea. Prufer and Zetland motivate their study as follows:
"The [auction market for journal articles] quantifies academic output through A$ income, and academics need an accurate measure now more than ever. Long ago, decisions on professional advancement depended on subjective factors. These were replaced over time by "objective" factors such a publication or citation counts. As publication has grown more important, the number of submitted papers has increased... [T]he multiplication of titles has made measurement (and professional decisions) more difficult. Neither tenure candidates nor committees are happy with current evaluation methods; they need a simple indicator."

In more detail, what the authors suggest is the following: The scientist writes a paper and submits it to a journal auction market, where editors bid for papers. The winning bidder gets permission to send the paper to peer review. If it passes peer review satisfactorily, and the editor decides to publish it, the bid in A$ goes to the authors, editors, and referees of the articles that are cited in the auctioned paper.

Let me repeat this so you don't miss the relevant part: the A$ does not go to the author; it goes to the authors, editors, and referees of the cited articles. Authors and referees are obliged to reassign their A$ to any editor they choose within one year, to close the circle.
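As I understand the proposed circulation, it could be sketched like this; all names, amounts, and the even split among cited parties are made-up illustrations, not details from the paper:

```python
from collections import defaultdict

# Toy sketch of the proposed A$ circulation (details are my own assumptions).
balance = defaultdict(float)
balance["editor_X"] = 100.0   # initial allocation; the paper leaves the
                              # mechanism open (subscribers, citations, ...)

def publish(editor, bid, cited_parties):
    """The editor wins the auction with `bid`; upon publication the bid is
    split among authors, editors, and referees of the *cited* articles."""
    balance[editor] -= bid
    share = bid / len(cited_parties)
    for person in cited_parties:
        balance[person] += share   # note: the submitting author gets nothing

publish("editor_X", bid=30.0,
        cited_parties=["author_A", "referee_B", "editor_Y"])

# Authors and referees must reassign their A$ to an editor within a year:
balance["editor_Y"] += balance.pop("author_A")
balance["editor_Y"] += balance.pop("referee_B")

print(dict(balance))   # {'editor_X': 70.0, 'editor_Y': 30.0}: the loop closes
```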

The vision is that
"It is a simple step to sum an individual's A$ income... to get an accurate signal of academic productivity. This signal could facilitate decisions on tenure, promotion, grants, and so on."
Five questions that sprang to my mind immediately:

First, I know plenty of researchers who have strong dislikes of certain journals and refuse to work with them. This point the authors address, if I understood them correctly, with a "handicap" that the scientist can put on certain journals, which would prevent the editors of these journals from bidding, or make it more difficult for them.

Second, what about self-citations? They write they just wouldn't count them.

Third, where does the A$ come from, and who decides who gets what? This is addressed in the article with one bracketed sentence: "The initial allocation of A$ may be in proportion to subscribers, citations, impact factor, or some other variable." I am not sure that will be sufficient. There will be a loss of A$ from people who don't care to reassign theirs, for example because they are leaving academia, and a further decrease of the available A$ per person simply because the number of scientists is increasing.

Fourth, if the A$ is worth real money because it is relevant for tenure decisions and grants, somebody who has no need for the virtual money will go and trade it for real money. In other words, there'll be a black market for A$, not to mention the problem of smart people hacking the software. The authors write that "The fixed supply of A$, reallocation norm and trading costs are likely to limit the importance of cash in an A$ black market." I think they'd be surprised.

Fifth, what about editors who are also authors? Are they supposed to have two different accounts of A$ and not mingle them? I couldn't find anything in the paper about this, but I suppose it can be addressed somehow.

Prufer and Zetland have added to their paper a calculation of Pareto efficiency, to show that their proposal is beneficial for everybody involved. For this, they have assumed that the quality of a scientific article is a single-valued universal parameter whose optimization is as well-defined as the optimization of the most cost-efficient way to run a factory.

But my biggest problem with the authors' proposal is one that we have discussed previously on this blog (for example here). Any measure that is universal streamlines the way research is pursued. Since your measure is in the best case a rough estimate of long-term success, this amplifies behavior that optimizes currently fashionable measures rather than behavior that contributes to scientific knowledge in the first place. It might save hiring committees time in the short run, but it will cost the community much more time in the long run.

I have preached it many times, and here it is once again: There is no substitute for scientists' judgement. There is no shortcut, and there is no universal measure that could improve or replace this individual and, yes, fallible judgement. The individual assessment of quality and potential impact, possibly centuries into the future, if you really wanted to parameterize it, would lie in a very high-dimensional space whose dimensions represent very many continuous parameters. If one attempts to project these opinions onto a one-dimensional axis, the universal measure, one inevitably loses information, and optimization becomes dependent on the choice of measure and thus, ultimately, ambiguous and questionable in its use. At the very least, we should make sure there are several projections and several criteria for what constitutes an "optimal" scientist.

The trend towards the use of simple measures is nothing but a way to delegate responsibility for decisions, till it is diluted enough that one can just go and blame an anonymous "system."

It is far from my intention to make fun of serious and well worked-out proposals to improve on the shortcomings of the current academic system, and I find this one a good try. This proposal, however, has serious shortcomings itself, and it would make a good example of a Verschlimmbesserung, the German word for an attempted improvement that makes things worse ;op

Monday, January 16, 2012

Molecei

During the last 50 years, physicists have made remarkable progress in creating materials that would not exist on Earth without scientists. Custom-designed materials that react to temperature, vibrations, humidity or electric currents, absorb or reflect light in desired ways, absorb or repel liquids where needed, and stick or don't stick, hopefully where you want them, are but a few examples.

Maybe the most important development in our ability to create new materials has been the large variety of semiconductors that are instrumental to many now-common gadgets, along with high-temperature superconductivity, though at typically 70 K, the temperatures at which these materials become superconducting are "high" only compared to outer space (or to a physicist who has spent too many days with liquid helium).

The most amazing new developments are graphene nano-structures: light yet strong, thin yet impermeable, with high thermal conductivity (possibly directed), high electrical conductivity, and a large capacity for hydrogen storage. Nanotechnology also has many potential medical applications that are currently being explored. But enough for now with the praise of modern science.

With that in mind, let us fast-forward in time, into the unknown. Imagine that our understanding and technical expertise allowed us to do to the constituents of atomic nuclei (the protons and neutrons, collectively called "nucleons") what we do today with atoms. Imagine we could build structures of nucleons that do not occur in nature, structures that are to nuclei what molecules are to atoms. Let us call them "molecei."

Humans have already brought into existence formations of nucleons that do not occur in nature. By colliding very heavy nuclei, particle physicists have created ever heavier elements. Most recently, the super-heavy elements darmstadtium (Ds), roentgenium (Rg) and copernicium (Cn), with atomic numbers 110, 111 and 112, have been added to the periodic table. For practical purposes, however, these nuclei are not particularly useful because they are very short-lived. It has long been conjectured, though, that at even higher atomic numbers the lifetimes might increase again.

With today's knowledge of the forces acting in atomic nuclei, and with presently existing technology, it is not possible to create molecei, and maybe they are fundamentally impossible. But if you had asked alchemists 400 years ago what they thought about wires with memory, aerogel, liquid crystals, and ferrofluids, they'd have declared them either magic or impossible. As history has demonstrated over and over again, even experts often fail to properly distinguish the possible from the impossible. So let us be daring, leave behind academic carefulness for a moment, and speculate about what we could do with molecei.

If a positively charged nucleus had a complicated shape, as would be the case with molecei, strange and uncommon electron orbits would be the consequence. Electrons might be very loosely bound or highly degenerate, allowing for astonishing optical and electric properties, possibly including superconductivity at room temperature.

The more complicated the shape of a molecei, the more excitations it would have, which would dramatically affect the ability of phonons to propagate. This could cause a medium doped with molecei to have acoustic and thermal properties the world has never seen, from perfect soundproofing to liquids with enormous heat capacity.

Maybe the most exciting possibility is that suitably designed molecei might enable interactions between atomic nuclei that normally require extreme temperatures or densities. Molecei could act as catalysts for nuclear reactions, much like molecules can act as catalysts for chemical reactions; it is the old dream of cold nuclear fusion, which could solve all our energy problems, provided it does not take more energy to produce the molecei to begin with.

Finally, molecei would be the next step in our ability to design miniature tools and to unravel nature's secrets at even smaller distances.

Thursday, January 12, 2012

Away Note

I'll be in Stockholm during the next few days for the Nordita Winterschool 2012. I have some issues with the Internet connection in the Stockholm apartment because the provider cuts me off if the connection hasn't been used for a while. So chances are I'll be offline. In other words, don't worry if you don't hear from me for a while. Back next week.

Monday, January 09, 2012

Eppley and Hannah's thought experiment

We have many reasons to believe that our present knowledge of the fundamental laws of nature is incomplete, and not only because it is unaesthetic that classical general relativity and the quantum field theories of the standard model stand conceptually apart. More pressing is that general relativity, under very general circumstances, brings with it the formation of singularities, and that without quantizing gravity, black hole evaporation seems incompatible with quantum mechanics. More trivial and, in my opinion, also more pressing is that we don't know what the gravitational field of a superposition of quantum states is; think of the double slit: Quantum mechanics tells us that the particle is neither here nor there, and yet both at once, completely described by its wave-function. In general relativity, however, its gravitational field is classical and has to have distinct properties: it has to be either here or there, and cannot be both at once.

In 1977, Eric Hannah and Kenneth Eppley presented a thought experiment, published in their article "The necessity of quantizing the gravitational field," that nicely illuminates why coupling a quantized to an unquantized field inevitably spells trouble. The experiment is deceptively simple. You prepare a quantum particle in a state with well-known momentum (in some direction). It doesn't necessarily have to be a momentum eigenstate, just something with a small momentum uncertainty. From Heisenberg's uncertainty principle, we know then that its position uncertainty will be large. Now you measure the position of the particle with a classical gravitational wave.

If gravity weren't quantized, gravitational waves wouldn't have to fulfill the relation p = ℏk, which Einstein famously showed to hold for photons, using the photoelectric effect. It would then be possible to prepare a gravitational wave with a small wavelength (high frequency) but small momentum. If you use this gravitational wave to measure the position of the quantum particle, there are, so argue Hannah and Eppley, three different possible outcomes:

  1. You collapse the wavefunction of the quantum particle and measure its position to a precision determined by the short wavelength of the gravitational wave, yet without transferring a large momentum. It is then possible to violate Heisenberg's uncertainty principle, so the quantum part of the theory doesn't survive.
  2. You collapse the wavefunction of the quantum particle without violating Heisenberg's uncertainty principle; then you violate energy conservation, because your wave can't provide the necessary spread in momentum.
  3. You don't collapse the wavefunction, in which case you can use your measurement for superluminal communication. You would then have two types of measurements, one that does and one that doesn't collapse the wavefunction. By spatially separating an entangled state and monitoring one part of it without collapsing it, you can find out, instantaneously, when a collapse was induced in the other part.
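To make the first outcome concrete with a few entirely illustrative numbers (the scales are arbitrary; only the comparison with ℏ matters):

```python
# Illustration of outcome 1: a classical gravitational wave that need not
# satisfy p = hbar*k can localize a particle without the matching momentum kick.
hbar = 1.0

k = 1e6               # large wavenumber: position resolved to dx ~ 1/k
dx = 1/k

p_quantum = hbar*k    # momentum a quantized wave of this wavenumber must carry
p_classical = 1e-3    # momentum an unquantized wave may carry: arbitrarily small

print(dx*p_quantum)    # = hbar: consistent with Heisenberg's bound
print(dx*p_classical)  # = 1e-9 << hbar: the uncertainty principle is violated
```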

Since gravity is an extremely weak interaction, this experiment is far beyond experimental possibility; the detector's mass, for example, would have to exceed that of our galaxy. Hannah and Eppley claimed that their experiment would at least in principle be possible to construct with the matter content of our universe. It was, however, later shown by James Mattingly, in his paper "Why Eppley and Hannah's Experiment Isn't" (the title evidently did not make it through peer review), that Hannah and Eppley had underestimated the experimental challenges. Mattingly crunched the numbers and showed that the cosmic background radiation spoils the sensitivity of the detectors and, worse, that the detector would have to be so massive it would sit inside a black hole.

Thus, Hannah and Eppley's experiment isn't possible even in principle. While their reasoning is physically plausible, this puts one in a philosophically difficult spot. There clearly is a theoretical problem with coupling a classical to a quantum field, but if we can show it has no practical consequences in our universe, is it a problem we should worry about?

I like Hannah and Eppley's thought experiment. It is not the best motivation one can have for quantizing gravity, but it is a lean way to illuminate the problem.

Wednesday, January 04, 2012

What is science?

For as long as there has been science, people have asked how to identify it. Centuries of philosophers have made attempts, and I don't intend to offer an answer within the confines of a blogpost. Instead, always the pragmatist, I want to summarize some points of view that I have encountered, more or less explicitly, and encourage you to share your own in the comments. With this post, I want to pick up a conversation that started in this earlier post.

There is the question of content and that of procedure. The question of content is mainly a matter of definition and custom. When native English speakers say "science," they almost always mean "natural science." On occasion they include the social sciences too, and even more rarely mathematics. The German word for science is "Wissenschaft," and its usage is much closer to the Latin root "scientia."

According to the Online Etymology Dictionary:
    Science from Latin scientia "knowledge," from sciens (gen. scientis), present participle of scire "to know," probably originally "to separate one thing from another, to distinguish"

The German "Wissenschaften" include besides the natural sciences not only the social sciences and mathematics, but also "Kunstwissenschaft," "Musikwissenschaft," "Literaturwissenschaft," etc, literally the science of art, the science of music, the science of literature. It speaks for itself that if you Google "Kunstwissenschaft" the first two suggestions are the completions "in English" and "translation." In the following I want to leave the content of "science" as open as the German and Latin expressions leave it, and let it be constrained by procedure, which for me is the more interesting aspect.

As for the procedure, I have come across these three points of view:
  • A: Science is what proceeds by the scientific method

    When pushed, the usually well-educated defender of this opinion will without hesitation produce a definition of the scientific method along the lines of: hypothesis, experimental test, falsification or gradual acceptance as established fact.

    The problem, as Feyerabend pointed out, is that a lot of progress in science simply did not come about this way. Worse, requiring a universal method may in the long run stifle progress, because the scientific method itself then can't adapt to changing circumstances. (I'm not sure if Feyerabend said that, but I just did.) Requiring people in a field in which creativity is of vital importance to obey certain rules, however sane they seem, begs for somebody to break the rules - and succeed nevertheless.

    There are many examples of studies that have been pursued for the sake of scientia, without the possibility or even intention of experimental test, and that have later become tremendously useful. A lot of mathematics falls into this category and, until not so long ago, so did a big part of cosmology. Do you know what will be possible in 100 years? Prediction is very difficult, especially about the future, as Niels Bohr said.

    The demand of falsifiability inevitably brings with it the question of patience. How long should we wait for a hypothesis to be tested before we have to discard it as unscientific? And who says so? If you open Pandora's box, out fall string theory and the technological singularity.

    Finally, let me mention that if you sign up to this definition of science, then classifications, which make up big parts of biology and zoology, are not science. Literature studies, however, are science, for you can well formulate a hypothesis about, say, Goethe's use of the pluralis majestatis, and then go and falsify it.

  • B: Science is what scientists do

    This definition raises the question of who is a scientist. The answer is that science is a collective enterprise of a community that defines its own membership. Scientists form, if you want to use a fashionable word, a self-organizing system. They define their own rules, and the viability of these rules depends on the rules' success. Not only can the rules change over time, allowing for improvement; different rules can also exist next to each other and compete in the course of history.

    I personally prefer this explanation of science. I like the way it fits into the evolution of the natural world, and I like how it fits with history. I also like that it's output oriented instead of process oriented: it doesn't matter how you do it as long as it works.

    In this reading, the scientific method, as summarized in A, is so powerful for the same reason that animals have the most amazing camouflage: selection and adaptation. It does not necessitate infallibility. Maybe the criteria of membership we use today are too strict. Maybe in the future they will be different. Maybe there will be several sets of them.

    The shortcoming of this definition is that there is no clear-cut criterion by which you can tell which of today's efforts are scientific, in much the same way that you can't tell whether some species is well adapted to a changing environment till it goes extinct, possibly because it falls prey to a "fitter" species. That means this definition of science will inevitably be unpopular in circumstances that require short and simple answers, circumstances in which the audience isn't expected to think for themselves.

    Given the time to think, note that the lack of simple criteria doesn't mean one can't say anything. You can clearly say that the scientific method, as defined in A, has proven enormously successful and that, unless you are very certain you have a better idea, discarding it is the intellectual equivalent of an insect dropping its camouflage and hoping the birds don't notice. Your act of rebellion might be very short.

    That having been said, in practice there is little difference between A and B. The difference is that B leaves the future open for improvement.

  • C: Science is the creation, collection, and organization of knowledge

    "All science is either physics or stamp collecting," said Ernest Rutherford. This begs the question whether stamp collection is a science. The definition C is the extreme opposite to A; it does not demand any particular method or procedure, just that it results in knowledge. What that knowledge is about or good for, if anything, is left up to the scientist operating under this definition.

    The appeal of this explanation is that scientists are left to do and collect what they like, with the hope that future generations will find something useful in it; it's the "you never know" of the man who never throws anything away and has carefully sorted and stored his stamps (and empty boxes, and old calendars, and broken pens, and...).

    The problem with this definition is that it just doesn't overlap with most people's understanding of science, not even with the German "Wissenschaft." There is arguably a lot of knowledge that doesn't have any particular use for most people. I know, for example, that the VW parked in front of the house is our upstairs neighbor's, but who cares? Where exactly does knowledge stop being scientific? Is knowledge scientific if it's not about the real world? These are the questions you'll have to answer to make sense of C.



Sunday, January 01, 2012

Book review: "Quips, Quotes and Quanta" by Anton Z. Capri

Quips, Quotes, and Quanta: An Anecdotal History of Physics
By Anton Z. Capri
World Scientific Publishing (2007)

I came across Capri's book "Quips, Quotes and Quanta" while searching for fodder for our 2011 advent calendar of anecdotes about physicists. It took a while for the book to arrive, but I finally received it a few days before Christmas.

Capri's book is a collection of stories and quotations from the history of physics of the late 19th and early 20th centuries. The author uses these stories to embed the physics of that time, and covers parts of thermodynamics, quantum mechanics and atomic physics through the lives of Dirac, Schrödinger, Pauli, Bohr, Boltzmann, Ehrenfest, Hilbert, Heisenberg and Planck, to mention only the usual suspects. I will admit to not having read the physics elaborations too carefully, but for all I can tell the scientific content was flawless, if with the superficiality that brevity brings.

While it sounds like a nice idea to get science across with anecdotes, the realization of that idea is poor. The writing is uninspired, sloppy and without style. It is so bad that in parts it reads as if copied and pasted from Wikipedia: a list of paragraphs with things so-and-so allegedly said or did, vaguely collected by name or topic. At least one paragraph appears twice in the book (search inside for "Sommerfeld had this to say about Pauli").

The book does not list a single reference. None of the stories or quotations comes with a source, not even the biographical details. I happen to know some of the sources, and the respective paragraphs appear to me just scrambled enough that they cannot be identified as exact copies. Bohr's theory of the Wild West, for example, probably originated in Gamow's recollection. Other anecdotes I know to be wrong, for example that of Bohr and the horseshoe, and the claim that Donald Glaser invented the bubble chamber after watching bubbles rise in beer (which even Wikipedia knows to be made up).

The author, Anton Capri, is a retired professor of Engineering Physics. He is not a historian, but as a scientist he should have learned to check and list sources. If you have a scale on which you want me to rate this book, mark the lowest possible score. Unless you don't care whether an allegedly historical anecdote is entirely fabricated, I recommend you do not spend money on this book.