NEW: This video comes with a quiz on QuizWithIt.
Saturday, November 04, 2023
Physicist reacts to quantum mechanics
Saturday, December 03, 2022
Guest Post: Collider Rap by Mike Battaglia and GPT-3
Image by rawpixel.com on Freepik
Here are the lyrics that this AI model wrote:
Q Tip:
Sabine Hossenfelder's sayin' the accelerators a waste
Costs a lot of money, nothin' to show for the chase
They say the science is great, I say the cost is too high
Money's better spent elsewhere, that's the reason why
Phife Dawg:
Don't wanna hear none of that, I'm an electron with pride
In need of an accelerator, please don't take away my ride
From the Big Bang to the stars, I'm out here explorin' space
Don't let Sabine's perspective keep me from the human race
Q Tip:
We can't ignore the cost, it's too high for the gain
The science is important, but so is the financial strain
We gotta find a balance, between knowledge and the cost
Else we'll be stuck in a jam, and our fates will be lost.
Busta Rhymes:
Watch as I combine all the science of space
Now Sabine Hossenfelder says the accelerators a waste
Powerful impact boom from the beam,
So I'm investigating both sides of this debate of the dream
Vacuums and volts providing the power
As the electrons go faster and faster, the experts devour
Oh, my gosh! Oh, my gosh!
Time and money embark on a chase
Unh! Unh! Unh! For the cause, no time to waste
Unh, pardon me, unh! As I make this call
Should we keep pumpin resources or close the accelerator hall?
Rawr! Rawr! Like a scientific dragon
Change your little opinions, cause this knowledge is flaggin'
Try to end this debate, I'm gon' twist it like a durag
Cuz I'm caught in the middle of the debate like a purgatory
Chickity Choco, the chocolate chicken
The electrons are goin', pushin' the science mission
Rippin up the theories like Horatio
Observe the vibe and check out the scenario
---
The results are hilarious––I'm literally cracking up here––though simultaneously I worry that maybe I've enabled some kind of robot apocalypse through this escapade (what can I say, looking wistfully out into the future during lunch break). If speaking English makes an AI conscious, then what will these rap lyrics do?
Ah well, we'll find out soon enough...and when we're done playing around here let us just take a moment to marvel that an AI can even do this.
Even crazier is that it wrote this entire blog post (including - oops - this sentence!) from scratch.
---
Note from real Mike: it's true, although it wasn't exactly "from scratch." I basically gave OpenAI's new text-davinci-003 GPT-3 model an outline of the blog post. I also had to build it up in parts: first I had it write the lyrics with one prompt, and then I had it build the blog post as a separate prompt. Still, it managed to write this as the result. I have only very lightly edited the formatting by adding line breaks.
I should note that it took quite a bit of playing around with the parameters and the prompt before getting this output. In particular, I found most of the struggle to be in wording the prompt correctly; I had to try a bunch of different things before I could get the model to figure out what I wanted it to do. So I guess I'll leave it to the philosophers to debate whether an AI really wrote it all "on its own."
I thought the results were absolutely hilarious when I shared it on Facebook. I also think it raises some deep questions that are worth thinking about. On the one hand, I guarantee every single person reading that Busta Rhymes verse, who knows the original, will be cracking up hearing it in his voice in their heads. On the other hand, the current model is clearly not quite able to really replicate the dense multilayered lyrical wordplay and flow that real rappers are capable of. But at the rate things are moving, it probably will, possibly very soon. I don't know what to make of it.
All I know is this: as of 2022, you can tell this thing to write some rap bars about particle accelerators and Sabine Hossenfelder and it will actually do a baseline half-decent job at it. Then you can get it to write a blog post about how it wrote the lyrics and a meta-blog post about how it is capable of writing blog posts. It's really nuts. Anyone right now can go to OpenAI and play around with it and get results like this with a little effort.
GPT-4 is expected in 2023, with rumors that it will have up to 500x the number of parameters. Who knows what that will be able to do.
(And RIP to Phife Dawg, probably my all time favorite MC)
Saturday, August 13, 2022
Science With the Gobbledygook
Today we’re celebrating 500 thousand subscribers. That’s right, we made it to half a million! Thanks everyone for being here. YouTube has made it so much easier for me to cover the news that I think deserves to be covered, and you have made it happen. And to honor the occasion, we have collected some examples of science with the gobbledygook. And that’s what we’ll talk about today.
1. Salmon Dreams and Jelly Brains
In 2008, neuroscientist Craig Bennett took a dead Atlantic salmon to the laboratory and placed it in an fMRI machine. He then showed the salmon photographs of people in social situations and asked what the people in the photos might have been feeling. For example, if I show you a stock photo of a physicist with a laser, the associated emotion is obviously uncontrollable excitement. The salmon didn't answer.
You may find that unsurprising given that it was very dead. But Bennett then used standard protocols to analyze the fMRI signal he had recorded while questioning the salmon, and found activity in some region of the salmon’s brain. The absurdity of this finding went a long way to illustrate that the fMRI methods used at the time frequently gave spurious results.
The dead salmon led to quite some soul-searching in the neuroscience community about the usefulness of fMRI readings. A meta-review in 2020 concluded that “common task-fMRI measures are not currently suitable for brain biomarker discovery or for individual-differences research.”
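The statistical lesson behind the dead salmon is the multiple-comparisons problem: run enough uncorrected significance tests on pure noise and some will come out “significant” by chance. A minimal sketch in Python, with the voxel count, sample size, and threshold made up purely for illustration:

```python
import random

random.seed(42)

N_VOXELS = 10000   # pretend each is an independent fMRI voxel
N_SAMPLES = 20     # noise measurements per voxel
Z_CRIT = 3.29      # rough z-threshold for two-sided p < 0.001

# Simulate pure Gaussian noise and count voxels whose mean
# crosses the threshold -- "brain activity" in a dead fish.
false_positives = 0
for _ in range(N_VOXELS):
    samples = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
    mean = sum(samples) / N_SAMPLES
    # z-score of the sample mean for unit-variance noise
    z = mean / (1 / N_SAMPLES ** 0.5)
    if abs(z) > Z_CRIT:
        false_positives += 1

print(false_positives)  # expect on the order of N_VOXELS * 0.001 = 10
```

With ten thousand voxels at an uncorrected p < 0.001, you expect around ten false positives even in noise, which is exactly why fMRI analyses need corrections for multiple comparisons.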
In 2011, a similar point was made by neuroscientists who published an electroencephalogram of jello that showed “mild diffuse slowing of the posterior dominant rhythm”. They also highlighted some other issues that can give rise to artifacts in EEG readings, such as sweating, or being close to a power outlet.
2. Medical Researcher Reinvents Integration
In 1994, Mary Tai from the Obesity Research Center in New York invented a method to calculate the area under a curve and published it in the journal Diabetes Care. She called her discovery “The Tai Model.” It’s also known as integration, or more specifically the trapezoidal rule. To date, the paper has been cited more than 400 times.
It’s maybe somewhat unfair to list this as “gobbledygook” because it’s not actually wrong, she just wasn’t exactly the first to have the idea. If you slept through math class, don't worry, you can just go into medicine. What could possibly happen?
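For reference, the trapezoidal rule that the “Tai Model” rediscovered approximates the area under a curve by summing the trapezoids between adjacent sample points. A few lines of Python, with the example function chosen just for illustration:

```python
def trapezoid(xs, ys):
    """Area under a sampled curve via the trapezoidal rule."""
    area = 0.0
    for i in range(len(xs) - 1):
        width = xs[i + 1] - xs[i]
        # each strip is a trapezoid with the two sample heights
        area += width * (ys[i] + ys[i + 1]) / 2
    return area

# Example: integrate y = x^2 from 0 to 1 (exact answer: 1/3)
xs = [i / 100 for i in range(101)]
ys = [x * x for x in xs]
print(trapezoid(xs, ys))  # very close to 1/3
```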
3. The Sokal Hoax and its Legacy
This is probably the most famous hoax in academic publishing. Alan Sokal is a physics professor at NYU and UCL who works mostly on the mathematical properties of quantum field theory. In 1996 he wrote a paper for the journal Social Text. It was titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. In this paper, Sokal argued that to resolve the disagreement between Einstein’s theory of gravity and quantum mechanics, we need postmodern science. What does that mean? Here’s what Sokal wrote in his paper:
“The postmodern sciences overthrow the static ontological categories and hierarchies characteristic of modernist science…. [They] appear to be converging on a new epistemological paradigm, one that may be termed an ecological perspective.”
In other words, the reason we still haven’t managed to unify gravity with quantum mechanics is that you can’t eat quantum gravity. So, yes, clearly an ecological problem. Though you should try eating it if you find it. I mean, you never know, right?
Sokal’s paper was published without peer review. According to the editors, the decision was based on the author’s credentials. Sokal argued that if everyone can make up nonsense like this and it’s deemed suitable for publication, then such publications are worthless. The journal still exists. Some of its recent issues are about “Sexology and Its Afterlives” and “Sociality at the End of the World”.
Similar hoaxes have since been pulled off a few times even in journals that *are* peer reviewed. For example, in 2018, a group of three Americans who describe themselves as “left wing” and “liberal” succeeded in publishing several nonsense papers in academic journals on topics such as gender and race studies. One paper, for example, claimed to relate observations of dogs and their owners to rape culture. Here’s a quote from the paper:
“Do dogs suffer oppression based upon (perceived) gender? [This article] concludes by applying Black feminist criminology categories through which my observations can be understood and by inferring from lessons relevant to human and dog interactions to suggest practical applications that disrupts hegemonic masculinities and improves access to emancipatory spaces.”
The authors explained in a YouTube video that they certainly don’t think race and gender studies are unimportant but rather the opposite. Such studies are important and it’s therefore hugely concerning if one can publish complete nonsense in academic journals on the topic. They argued that articles which are currently accepted for publication in the area are biased towards airing “grievances” predominantly about white heterosexual men. They called their project “grievance studies” but it became known as Sokal Squared.
The most recent such hoax was revealed last year in October. The journal Higher Education Quarterly published a study that claimed to show that right-wing funding pressures university faculty to promote right-wing causes in hiring and research. The paper contained a number of obviously shady statistics, and yet was accepted for publication.
The authors had submitted the manuscript under pseudonyms with initials that spelled SOKAL, and pretended to be affiliated with universities where no one with those names worked. They later revealed their hoax on Twitter. The account has since been suspended. The journal retracted the paper.
4. Fake it till you make it
Those papers in the Sokal hoaxes were written by actual people. But in 2005 a group of computer science students from MIT demonstrated that this isn’t actually necessary. They wrote a program that automatically generated papers with nonsense text, including graphs, figures, and citations.
One of their examples was titled “A Methodology for the Typical Unification of Access Points and Redundancy” and explained “Our implementation of our approach is low-energy, Bayesian, and introspective. Further, the 91 C files contain about 8969 lines of SmallTalk.” They didn’t submit it to a journal but it was accepted for presentation at a conference. They used this to draw attention to the low standards of the meeting.
But this wasn’t the end of the story, because the MIT group made their code publicly available. In 2010, the French researcher Cyril Labbé used this code to create more than a hundred junk papers by a fictional author named Ike Antkare. The papers all cited each other, and soon enough Google Scholar listed the non-existent Antkare as the 21st most-cited researcher in the world.
A few years later, Labbé wrote a program that could detect the specific pattern of these junk papers that were generated with the MIT group’s software. He found that at least 120 of them had been published. They have since been retracted.
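The Ike Antkare trick exploits the fact that a naive citation count just tallies incoming references without asking where they come from. A toy sketch, with the numbers invented, of how a clique of mutually citing papers inflates such a count:

```python
# Each fake paper cites all the others in the clique.
N_FAKE = 100
citations = {f"antkare_{i}": 0 for i in range(N_FAKE)}

for i in range(N_FAKE):
    for j in range(N_FAKE):
        if i != j:
            # paper i cites paper j
            citations[f"antkare_{j}"] += 1

# Every paper ends up with N_FAKE - 1 citations, for free.
print(citations["antkare_0"])      # 99
total = sum(citations.values())
print(total)                       # 100 * 99 = 9900
```

Any metric that weighs where citations come from (as PageRank-style scores do) deflates such a closed clique, which is roughly how Labbé’s detector and Google Scholar’s later fixes approached the problem.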
The online version of the MIT code doesn’t work anymore, but there’s another website that’ll allow you to generate a gibberish maths paper, with equations and references and all. Here for example is my new paper on “Existence in Complex Graph Theory” with my co-authors Henri Poincaré and Jesus Christ.
The physics enthusiasts among you might also enjoy the snarXiv, a website that looks like the arXiv but with nonsense abstracts about high energy physics. I’ll leave you links to all these websites in the info below the video.
5. My Phone Did It
Okay so you can write papers with an artificial intelligence. Indeed, artificial intelligence now writes papers about itself. But what if you don’t have one? Look no further than your phone.
In 2016, Christoph Bartneck from the University of Canterbury, New Zealand received an invitation from the International Conference on Atomic and Nuclear Physics to submit a paper. He explained on his blog: “Since I have practically no knowledge of Nuclear Physics I resorted to iOS auto-complete function to help me writing the paper.” The paper was accepted. Here is an extract from the text: “Physics are great but the way it does it makes you want a good book and I will pick it to the same time.”
6. Get me off your fucking email list
I’m not sure how well-known this is, but if you’ve published a few papers in standard scientific journals you get spammed with invitations to fake conferences and scam journals all the time. In many cases these invitations have nothing to do with your actual research. I’ve been invited to publish papers on everything from cardiology to tea. Most of the time you just delete it, but it does get a bit annoying. I will say though, that the tea conference I attended was lovely.
In 2005, David Mazières and Eddie Kohler dealt with the issue by writing a paper that repeated the one sentence “Get me off your fucking email list” over and over again, complete with a flow chart and a scatter plot. They submitted it to the 9th World Multiconference on Systemics, Cybernetics and Informatics to protest its poor standards.
In 2014, the Australian computer scientist Peter Vamplew sent the same paper to the International Journal of Advanced Computer Technology in response to their persistent emails. To his surprise, he was soon informed that the paper had been accepted for publication. Not only this, its reviewers had allegedly rated the paper “Excellent”. Next thing that happened was that they asked him to pay 150 dollars for the publication. He didn’t pay and they, unfortunately, didn’t take him off the email list.
7. Chicken chicken chicken
Chicken chicken chicken Chicken chicken chicken chicken chicken chicken chick chicken chicken Chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chick chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chick chicken.
8. April Fools’ on the arXiv
The arXiv is the open access pre-print server which is widely used in physics and related disciplines. The arXiv has a long tradition of accepting joke papers for April 1st, and it’s some of the best nerd humor you’ll find.
For example, two years ago two physicists proposed a “Novel approach to Room Temperature Superconductivity problem”. The problem is that the critical temperature at which superconductivity sets in is extremely low for all known materials. Even the so-called “high temperature superconductors” become superconducting only at -70 degrees Celsius or so. Finding a material that superconducts at room temperature is basically the holy grail of materials science. But don’t tell Monty Python, because it’s silly enough already to call minus 70 degrees Celsius a “high temperature”.
In their April first paper, the authors report they have found an ingenious solution to the problem of finding superconductors that work at room temperature: “Instead of increasing the critical temperature of a superconductor, the temperature of the room was decreased to an appropriate [value of the critical temperature]. We consider this approach more promising for obtaining a large number of materials possessing Room Temperature Superconductivity in the near future.”
In 2022, one of the April Fools’ papers made fun of exoplanet sightings and reported exopet sightings in Zoom meetings.
9. Funny Paper Titles
As you just saw, scientists want to have fun too, and not just on April 1st, so sometimes they do it in their paper titles. For example, there’s the paper about laser optics called “One ring to multiplex them all”. Or this one called “Would Bohr be born if Bohm were born before Born?”
Of course physicists aren’t the only scientists with humor. There is also “Premature Speculation Concerning Pornography’s Effects on Relationships”, and “Great Big Boulders I have Known” and “Role of childhood aerobic fitness in successful street crossing”, though maybe that was unintentionally funny.
An honorable mention goes to the paper titled “Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?” because the authors did literally put bird crap into graphene. And, yes, it increased the electrocatalytic effect.
10. Dr Cat
In 1975, the American physicist Jack Hetherington wanted to publish some of his research results in the journal Physical Review Letters. He was the sole author of the paper, but he’d written it in the first person plural, referring to himself as “we”. This is extremely common in the scientific literature and we have done that ourselves, but a colleague pointed out to Hetherington that PRL had a policy that would require him to use the first person singular.
Instead of rewriting his paper, Hetherington decided he’d name his cat as co-author under the name F. D. C. Willard. The paper was published with the cat as co-author and he could keep using the plural.
Hetherington revealed the identity of his co-author by letting the cat “sign” a paper with paw prints. The story of Willard the cat was soon picked up by many colleagues, who’d thank the cat for useful discussions in footnotes of their papers, or invite it to conferences. Willard the cat also later published two single-authored papers, and quickly became a leading researcher, no doubt with a paw-fect CV. On April 1st 2014 the American Physical Society announced that cat-authored papers, including the Hetherington/Willard paper, would henceforth be open-access.
I hope you enjoyed this list of science anecdotes. If you have one to add, please share it in the comments.
Saturday, July 09, 2022
Quantum Games -- Really!
It’s difficult to explain quantum mechanics with words. We just talked about this the other day. The issue is, we simply don’t have the words to describe something that we don’t experience. But what if you could experience quantum effects? Not in the real world, but at least in a virtual world, in a computer game? Wait, there are games for quantum mechanics? Yes, there are, and better still, they are free. Where do you get these quantum games and how do they work? That’s what we’ll talk about today.
We’ll start with a game that’s called “Escape Quantum” which you can play in your browser.
“You find yourself in a place of chaos. Whether it’s a dream or an illusion escapes you as the shiny glint of a key catches your eye. A goal, a prize, an escape, whatever it means for you takes hold in your mind as your body pushes you forward, into the unknown.”
Alright. Let’s see.
Escape Quantum is an adventure puzzle game where you walk around and have to find keys and cards to unlock doors. The navigation works by keyboard and is simple and straightforward. The main feature of the game is to introduce you to the properties of a measurement in quantum mechanics: if you don’t watch an object, its wave-function can spread out, and next time you look, it may be at a different place.
So sometimes you have to look away from something to make it change place. And if there’s something you don’t want to change place, you have to keep looking at it. At times this game can be a bit frustrating because much of it is dictated by random chance, but then that’s how it goes in quantum mechanics. Once you learn the principles the game can be completed quickly. Escape Quantum isn’t particularly difficult, but it’s both interesting and fun.
Another little game we tried is called “Quantum Playground”, which also runs in your browser.
Yes, hello. What you do here is that you click on some of those shiny spheres to initialize the position of a quantum particle. You can initialize several of them together. Then you click the button down here, which will solve the Schrödinger equation with those boundary conditions, and you can see what happens to the initial distribution. You can then click somewhere to make a measurement, which will suddenly collapse the wave-function, and the particle will be back in one place.
There isn’t much gameplay in this one, but it’s a nice and simple visualization of the spread of the wave-function and the measurement process. I didn’t really understand what the little bird thing is.
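The spreading that this simulation visualizes can also be written down in closed form for a free particle: a Gaussian wave packet of initial width σ₀ broadens as σ(t) = σ₀√(1 + (ħt / 2mσ₀²)²). A quick sketch with ħ = m = 1, units chosen purely for illustration:

```python
import math

HBAR = 1.0  # natural-ish units for illustration only
MASS = 1.0

def packet_width(sigma0, t):
    """Width of a free Gaussian wave packet after time t.

    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2)
    """
    return sigma0 * math.sqrt(1 + (HBAR * t / (2 * MASS * sigma0 ** 2)) ** 2)

# The packet keeps its width at t = 0 and spreads with time...
for t in [0, 1, 2, 5]:
    print(t, packet_width(1.0, t))

# ...and the narrower the initial packet, the faster it spreads:
print(packet_width(0.1, 1.0))  # much wider than packet_width(1.0, 1.0)
```

The last line is the uncertainty principle at work: squeezing the initial position tightly means a broad spread in momentum, so the packet flies apart faster.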
Somewhat more gameplay is going on in the next one, which is called “Particle in a Box”. This too runs in your browser, but this time you control a character, that’s this little guy here, who can move side to side or jump up and down.
The game starts with a brief lesson about potential and kinetic energy in the classical world. You collect energy in the form of a lightning bolt and give it to a particle that’s rolling in a pit. This increases the energy of the particle and it escapes the pit. Then you can move on to the quantum world.
First you get a quick introduction. The quantum particle is trapped in a box, as the title of the game says. So it doesn’t have a definite position, but instead has a probability distribution that describes where it’s most likely to be if a measurement is made. Measurements happen spontaneously, and if a measurement happens then one of these circles appears in a particular place.
You can then move on to the actual game, which introduces you to the notion of energy levels. The particle starts at the lowest energy level. You have to collect photons, that’s those colorful things, with the right energy to move the particle from one energy level to the next. If you happen to run into a particle at a place where it’s being measured, that’s bad luck, and you have to start over. You can see here that when the particle changes to a higher energy level, its probability distribution also changes. So you collect the photons until the particle’s in the highest energy level, and then you can exit and go to the next room.
The controls of this one are a little fiddly, but they work reasonably well. This game isn’t going to test your puzzle-solving skills or reflexes, but it does a good job of illustrating some key concepts in quantum mechanics: probability distributions, measurements, and energy levels.
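The energy levels the game illustrates are the textbook particle-in-a-box levels, E_n = n²π²ħ²/(2mL²), and the photon you collect has to carry exactly the difference between two levels. A sketch with ħ = m = L = 1, values chosen just for illustration:

```python
import math

def energy(n, hbar=1.0, m=1.0, L=1.0):
    """Energy of level n for a particle in a 1D box of width L."""
    return (n ** 2) * (math.pi ** 2) * (hbar ** 2) / (2 * m * L ** 2)

def photon_energy(n_from, n_to):
    """Photon energy needed to jump between two levels."""
    return energy(n_to) - energy(n_from)

print(energy(1))            # ground state: pi^2 / 2
print(photon_energy(1, 2))  # jump to the next level: 3 * pi^2 / 2
```

Note the levels grow like n², so the gaps between them widen as you climb, which is why each photon in the game has to have just the right energy.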
The next one is called “Psi and Delta”. It was developed by the same team as “Particle in a Box” and works similarly, but this time you control a little robot that looks somewhat like BB-8 from Star Wars. There’s no classical physics introduction in this one, you go straight to the quantum mechanics. Like the previous game, this one is based on two key features of quantum mechanics: that particles don’t have a definite position but a probability distribution, and that a measurement will “collapse” the wave-function, after which the particle is in a particular place.
But in this game you have to do a little more. There’s an enemy robot, that’s this guy, which will try to get you, but to do so it will have to cross a series of platforms. If you press this lever, you make a measurement and the particle is suddenly in one place. If it’s in the same place as the enemy robot, the robot will take damage. If you damage it enough, it’ll explode and you get to the next level.
The levels increase in complexity, with platforms of different lengths and complicated probability distributions. Later in the game, you have to use lamps of specific frequencies to change the probability distribution into different shapes.
Again, the controls can be a little fiddly, but this game has some charm. It requires a bit of good timing and some puzzle-solving skills too.
The next game we look at is called “Hello Quantum”, and it’s a touchscreen game that you can play on your phone or tablet. You first have to download and install it, there’s no browser version for this one, but there’s one for Android and one for iOS. The idea here is that you have to control qubit states by applying quantum gates. The qubits are either on or off or something you don’t know. Quantum gates are the operations that a quantum computer computes with. They basically move around entanglement. In this game, you get an initial state and a target state that you have to reach by applying the gates.
The game tells you the minimal number of moves by which you can solve the puzzle, and encourages you to try to find this optimal solution. You’re basically learning how to engineer a particular quantum state and how a quantum computer actually computes.
The app is professionally designed and works extremely well. The game comes with detailed descriptions of the gates and the physical processes behind them, but you can play it without any knowledge of qubits, or any understanding of what the game is trying to represent, just by taking note of the patterns and how the different gates move the black and white circles around. So this works well as a puzzle game whether or not you want to dig deep into the physics.
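Underneath a game like this, a quantum gate is just a small matrix acting on the qubit’s state vector. A minimal sketch of two common gates with plain Python lists; this is generic textbook quantum computing, not a claim about how Hello Quantum is implemented:

```python
import math

# A single qubit state is a pair of amplitudes [amp_0, amp_1].
KET_0 = [1.0, 0.0]  # the qubit is definitely "off"

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix into a single-qubit state."""
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

X = [[0, 1], [1, 0]]   # NOT gate: swaps |0> and |1>
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]  # Hadamard gate: makes equal superpositions

print(apply_gate(X, KET_0))                 # flips to the |1> state
print(apply_gate(H, KET_0))                 # equal superposition of |0> and |1>
print(apply_gate(H, apply_gate(H, KET_0)))  # applying H twice gives |0> back
```

That last line is the kind of pattern the game trains you to spot: some gate sequences cancel out, which is why it can tell you the minimal number of moves for each puzzle.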
This brings us to the last game in our little review, which is called the Quantum FlyTrap. This is again a game that you can play in your browser, and it’s essentially a quantum optics simulator. This red triangle is your laser source, and the green venus flytraps are the detectors. You’re supposed to get the photons from the laser to the detectors, with certain additional requirements, for example you have to get a certain fraction of the photons to each detector.
You do this by dragging different items around and rotating them, like the mirrors and beam splitters and non-linear crystals and so on. In later levels you have to arrange mirrors to get the photons through a maze without triggering any bombs or mines.
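The beam splitters you drag around in a simulator like this can be described by a 2x2 unitary matrix that splits a photon’s amplitude between two paths; chaining two of them gives the interference of a Mach-Zehnder interferometer. A toy sketch (sign conventions vary; this is one common choice):

```python
import math

s = 1 / math.sqrt(2)
# 50/50 beam splitter: the reflected path picks up a factor i.
# A state is [amp_path_a, amp_path_b] with complex amplitudes.
BS = [[s, 1j * s], [1j * s, s]]

def apply(mat, state):
    return [
        mat[0][0] * state[0] + mat[0][1] * state[1],
        mat[1][0] * state[0] + mat[1][1] * state[1],
    ]

photon = [1, 0]  # photon enters on path a
out = apply(BS, photon)
probs = [abs(a) ** 2 for a in out]
print(probs)     # each detector fires half the time

# A second beam splitter makes the two paths interfere:
out2 = apply(BS, out)
print([abs(a) ** 2 for a in out2])  # every photon ends up on path b
```

The second print is the surprising part: after two beam splitters the 50/50 randomness disappears, because the amplitudes on one output interfere destructively and on the other constructively.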
A downside of this game is that the instructions aren’t particularly good. It isn’t always clear what the goal is in each level, until you fail and you get some information about what you were supposed to do in the first place. That said, the levels are fun puzzles with a unique visual style. I’ve found this to be a quite remarkable simulator. You can even use it to click together your own experiment.
Saturday, February 12, 2022
Epic Fights in Science
Scientists are rational by profession. They objectively evaluate the evidence and carefully separate fact from opinion. Except of course they don’t, really. In this episode, we will talk about some epic fights among scientists that show very much that scientists, after all, are only human. Who dissed whom and why and what can we learn from that? That’s what we’ll talk about today.
1. Wilson vs Dawkins
Edward Wilson passed away just a few weeks ago at age 92. He is widely regarded as one of the most brilliant biologists in history. But some of his ideas about evolution got him into trouble with another big shot of biology: Richard Dawkins.
In 2012 Dawkins reviewed Wilson’s book “The Social Conquest of Earth”. He left no doubt about his misgivings. In his review Dawkins wrote:
“unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory. In particular, Wilson now rejects “kin selection” [...] and replaces it with a revival of “group selection”—the poorly defined and incoherent view that evolution is driven by the differential survival of whole groups of organisms.”

Wilson’s idea of group selection is based on a paper that he wrote together with two mathematicians in 2010. When their paper was published in Nature magazine, it attracted criticism from more than 140 evolutionary biologists, among them some big names in the field.
Dawkins finished his review:
“To borrow from Dorothy Parker, this is not a book to be tossed lightly aside. It should be thrown with great force. And sincere regret.”

Wilson replied that his theory was mathematically more sound than that of kin selection, and that he also had a list of names who supported his idea but, he said,
“if science depended on rhetoric and polls, we would still be burning objects with phlogiston and navigating with geocentric maps.”

In a 2014 BBC interview, Wilson said:
“There is no dispute between me and Richard Dawkins and never has been. Because he is a journalist, and journalists are people who report what the scientists have found. And the arguments I’ve had, have actually been with scientists doing research.”

Right after Wilson passed away, Dawkins tweeted:
“Sad news of death of Ed Wilson. Great entomologist, ecologist, greatest myrmecologist, invented sociobiology, pioneer of island biogeography, genial humanist & biophiliac, Crafoord & Pulitzer Prizes, great Darwinian (single exception, blind spot over kin selection). R.I.P.”
2. Leibniz vs Newton
Newton and Leibniz were both instrumental in the development of differential calculus, but they approached the topic entirely differently. Newton came at it from a physical perspective and thought about the change of variables with time. Leibniz had a more abstract, analytical approach. He looked at general variables x and y that could take on infinitely close values. Leibniz introduced dx and dy as differences between successive values of these sequences.
The two men also had a completely different attitude to science communication. Leibniz put a lot of thought into the symbols he used and how he explained himself. Newton, on the other hand, wrote mostly for himself and often used whatever notation he liked on that day. Because of this, Leibniz’s notation was much easier to generalize to multiple variables and much of the notation we use in calculus today goes back to Leibniz. Though the notation xdot for speed and x double dot for acceleration that we use in physics comes from Newton.
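For readers who want the two conventions side by side, here they are in modern notation:

```latex
% Leibniz: derivatives as ratios of infinitesimal differences
\frac{dy}{dx}, \qquad \frac{d^2 y}{dx^2}

% Newton: dots over the variable for time derivatives,
% still standard in physics for velocity and acceleration
\dot{x} = \frac{dx}{dt}, \qquad \ddot{x} = \frac{d^2 x}{dt^2}
```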
Okay, so they both developed differential calculus. But who did it *first*? Historians say today it’s clear that Newton had the idea first, during the plague years 1665 and 1666, but he didn’t write it up until five years later and it wasn’t published for more than 20 years.
Meanwhile, Leibniz invented calculus in the mid 1670s. So, by the time word got out, it looked as if they’d both had the idea at the same time.
Newton and Leibniz then got into a bitter dispute over who was first. Leibniz wrote to the British Royal Society to ask for a committee to investigate the matter. But at that time the society’s president was… Isaac Newton. And Newton simply drafted the report himself. He wrote “we reckon Mr Newton the first inventor” and then presented it to the members of the committee to sign, which they did.
The document was published in 1712 by the Royal Society with the title Commercium Epistolicum Collinii et aliorum, De Analysi promota. In the modern translation the title would be “Newton TOTALLY DESTROYS Leibniz”.
On top of that, a comment on the report was published in the Philosophical Transactions of the Royal Society of London. The anonymous author, who was also Newton, explained in this comment:
“the Method of Fluxions, as used by Mr. Newton, has all the Advantages of the Differential, and some others. It is more elegant ... Newton has recourse to converging Series, and thereby his Method becomes incomparably more universal than that of Mr. Leibniz.”

Leibniz responded with his own anonymous publication, a four-page paper which in the modern translation would be titled “Leibniz OWNS Newton”. That “anonymous” text gave all the credit to Leibniz and directly accused Newton of stealing calculus. Leibniz even wrote his own History and Origin of Differential Calculus in 1714. He went so far as to change the dates on some of his manuscripts to pretend he knew about calculus before he really did.
And Newton? Well, even after Leibniz died, Newton refused to mention him in the third edition of his Principia.
You can read the full story in Rupert Hall’s book “Philosophers at war.”
Electric lights came into use around the end of the 19th century. At first, they all worked with Thomas Edison’s direct current system, DC for short. But his former employee Nikola Tesla had developed a competing system, the alternating current system, or AC for short. Tesla had actually offered it to Edison while working for him, but Edison didn’t want it.
Tesla then went to work for the engineer George Westinghouse. Together they created an AC system that threatened Edison’s dominance of the market. The “war of the currents” began.
An engineer named Harold Brown, later found to be paid by Edison’s company, started writing letters to newspapers trying to discredit AC, saying that it was really dangerous and that the way to go was DC.
This didn’t have the desired effect, and Edison soon took more drastic steps. I have to warn you that the following is a really ugly story and in case you find animal maltreatment triggering, I think you should skip over the next minute.
Edison organized a series of demonstrations in which he killed dogs by electrocuting them with AC, arguing that a similar voltage in DC was not as deadly. Edison didn’t stop there. He went on to electrocute a horse, and then an adult elephant, which he fried with a stunning 6000 volts. There’s an old film of this, erm, demonstration on YouTube. If you really want to see it, I’ll leave a link in the info below.
Still Edison wasn’t done. He paid Brown to build an electric chair with AC generators that they bought from Westinghouse and Tesla, and then had Brown lobby for using it to electrocute people so the general public would associate AC with death. And that partly worked. But in the end AC won mostly because it’s more efficient when sent over long distances.
Another scientific fight from the 19th Century happened in paleontology, and this one I swear only involves animals that were already dead anyway.
The American paleontologists Edward Cope and Othniel Marsh met in 1863 as students in Germany. They became good friends and later named some of their discoveries after each other.
Cope, for example, named an amphibian fossil Ptyonius marshii after Marsh, and in return Marsh named a gigantic serpent Mosasaurus copeanus.
However, they were both very competitive and soon they were trying to outdo each other. Cope later claimed it all started when he showed Marsh a location where he’d found fossils and Marsh, behind Cope’s back, bribed the quarry operators to send anything they’d find directly to Marsh.
Marsh’s version of events is that things went downhill after he pointed out that Cope had published a paper in which he had reconstructed a dinosaur fossil but got it totally wrong. Cope had mistakenly reversed the vertebrae and then put the skull at the end of the tail! Marsh claimed that Cope was embarrassed and wanted revenge.
Whatever the reason, their friendship was soon forgotten. Marsh hired spies to track Cope and on some occasions had people destroy fossils before Cope could get his hands on them. Cope tried to boost his productivity by publishing the discovery of every new bone as that of a new species, a tactic which the American paleontologist Robert Bakker described as “taxonomic carpet-bombing.” Cope’s colleagues disapproved, but it was remarkably effective: Cope published about 1400 academic papers in total, while Marsh merely made it to 300.
But Marsh eventually became chief paleontologist of the United States Geological Survey, USGS, and used its funds to promote his own research while cutting funds for Cope’s expeditions. And when Cope still managed to do some expeditions, Marsh tried to take his fossils, claiming that since the USGS had funded them, they belonged to the government.
This didn’t work out as planned. Cope could prove that he had financed most of his expeditions with his own money. He then contacted a journalist at the New York Herald, who published an article claiming Marsh had misused USGS funds. An investigation found that Cope was right. Marsh was forced out of the Survey without his fossils, because they had been obtained with USGS funds.
In a last attempt to outdo Marsh, Cope stated in his will that he’d donated his skull to science. He wanted his brain to be measured and compared to that of Marsh! But Marsh didn’t accept the challenge, so the world will never know which of the two had the bigger brain.
Together the two men discovered 136 species of dinosaurs (Cope 56, Marsh 80), but both died financially ruined, their scientific reputations damaged.
5. Hoyle vs The World
British astronomer Fred Hoyle is known as the man who discovered how nuclear reactions work inside stars. In 1983, the Nobel Prize in physics was given... to his collaborator Willy Fowler, not to Hoyle. Everyone, including Fowler, was stunned. How could that happen?
Well, the Swedish Royal Academy isn’t exactly forthcoming with information, but over the years Hoyle’s colleagues have offered the following explanation. Let’s go back a few years to 1974.
In that year, the Nobel Prize for physics went to Anthony Hewish for his role in the discovery of pulsars. Upon hearing the news, Hoyle told a reporter: “Jocelyn Bell was the actual discoverer, not Hewish, who was her supervisor, so she should have been included.” Bell’s role in the discovery of pulsars is widely recognized today, but in 1974, Hoyle’s putting in a word for Bell made global headlines.
Hewish was understandably upset, and Hoyle clarified in a letter to The Times that his issue wasn’t with Hewish, but with the Nobel committee: “I would add that my criticism of the Nobel award was directed against the awards committee itself, not against Professor Hewish. It seems clear that the committee did not bother itself to understand what happened in this case.”
Hoyle’s biographer Simon Mitton claimed this is why Hoyle didn’t get the Nobel Prize: The Nobel Prize committee didn’t like being criticized. However, the British scientist Sir Harry Kroto, who won the Nobel Prize for chemistry in 1996, doesn’t think this is what happened.
Kroto points out that while Hoyle may have made a groundbreaking physics discovery, he was also a vocal defender of some outright pseudoscience, for example, he believed that the flu was caused by microbes that rain down on us from outer space.
Hoyle was also, well, an unfriendly and difficult man who had offended most of his colleagues at some point. According to Sir Harry, the actual reason Hoyle didn’t get a Nobel Prize was the fear that he would use it to promote pseudoscience. He said
“Hoyle was so arrogant and dismissive of others that he would use the prestige of the Nobel prize to foist his other truly ridiculous ideas on the lay public. The whole scientific community felt that.”

So what do we learn from that? Well, one thing we can take away is that if you want to win a Nobel Prize, don’t spread pseudoscience. But the bigger lesson, I think, is that while some competition is a good thing, it’s best enjoyed in small doses.
Saturday, December 25, 2021
We wish you a nerdy Xmas!
Happy holidays, everybody! Today we’re celebrating Isaac Newton’s birthday with a hand-selected collection of nerdy Christmas facts that you can put to good use on every appropriate and inappropriate occasion.
You have probably noticed that in recent years worshipping Newton on Christmas has become somewhat of a fad on social media. People are wishing each other a happy Newtonmas rather than Christmas because December 25th is also Newton’s birthday. But did you know that this fad is more than a century old?
In 1891, The Japan Daily Mail reported that a society of Newton worshippers had sprung up at the University of Tokyo. It was founded, no surprise, by mathematicians and physicists. It was basically a social club for nerds, with Newton’s picture presiding over meetings. The members were expected to give speeches and make technical jokes that only other members would get. So, kind of like physics conferences, basically.
The Japan Daily Mail also detailed what the nerds considered funny. For example, on Christmas, excuse me, Newtonmas, they’d have a lottery in which everyone drew a paper with a scientist’s name and then got a matching gift. So if you drew Newton you’d get an apple, if you drew Franklin a kite, Archimedes got you a naked doll, and Kant-Laplace would get you a puff of tobacco into your face. That was supposed to represent the Nebular Hypothesis. What’s that? That’s the idea that solar systems form from gas clouds, and yes, it was first proposed by Immanuel Kant. No, it doesn’t rhyme with pissant, sorry.
Newton worship may not have caught on, but nebular hypotheses certainly have.
By the way, did you know that Xmas isn’t an atheist term for Christmas? The word “Christ” in Greek is Christos, written like this: Χριστός. That first letter is called chi, and in the Roman alphabet it becomes an X. It’s been used as an abbreviation for Christ since at least the 15th century.
However, in the 20th century the abbreviation has become somewhat controversial among Christians because the “X” is now more commonly associated with a big unknown. So, yeah, use at your own risk. Or maybe stick with Happy Newtonmas after all?
Well, that is controversial too, because it’s not at all clear that Newton’s birthday is actually December 25th. Isaac Newton was born on December 25, 1642, in England.
But. At that time, the English still used the Julian calendar. That is already confusing, because the new Gregorian calendar was introduced by Pope Gregory in 1582, well before Newton’s birth. It replaced the older Julian calendar, which didn’t properly match the months to the orbit of Earth around the sun.
Yet when Pope Gregory introduced the new calendar, the British were mostly Anglicans, and they weren’t going to have some pope tell them what to do. So for over a hundred years, people in Great Britain celebrated Christmas 10 or 11 days later than most of Europe. Newton was born during that time. Great Britain eventually caved in and adopted the Gregorian calendar with a law passed in 1751; in September 1752, all dates jumped forward overnight by 11 days. So now Newton would have celebrated his birthday on January 4th, except by that time he was dead.
However, it gets more difficult, because the two calendars keep drifting apart. If you ran the old Julian calendar forward until today, then December 25th according to the old calendar would now actually be January 7th. So, yeah, I think sorting this out will greatly enrich your conversation over Christmas lunch. By the way, Greece didn’t adopt the Gregorian calendar until 1923. Except for the Monastic Republic of Mount Athos, of course, which still uses the Julian calendar.
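The widening gap between the two calendars is simple arithmetic: both add a leap day every four years, but the Gregorian calendar skips it in century years not divisible by 400. A minimal sketch (my own illustration, not from any source in the post):

```python
def julian_gregorian_gap(year):
    """Days the Julian calendar lags behind the Gregorian in a given year.

    Valid from 1582 on, for dates after the Julian leap day of that year:
    century years not divisible by 400 are leap years only in the Julian
    calendar, so the gap grows by one day in each of them.
    """
    return year // 100 - year // 400 - 2

print(julian_gregorian_gap(1642))  # 10 -> Julian Dec 25, 1642 is Gregorian Jan 4, 1643
print(julian_gregorian_gap(1752))  # 11 -> the days Britain skipped
print(julian_gregorian_gap(2023))  # 13 -> Julian Dec 25 now falls on Jan 7
```

This reproduces all three numbers in the story: the 10-day gap at Newton's birth, the 11 days Britain skipped, and the 13-day gap today.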
Regardless of exactly which day you think Newton was born, there’s no doubt he changed the course of science and with that the course of the world. But Newton was also very religious. He spent a lot of time studying the Bible looking for numerological patterns. On one occasion he argued, I hope you’re sitting, that the Pope is the anti-Christ, based in part on the appearance of the number 666 in scripture. Yeah, the Brits really didn’t like the Catholics, did they.
Newton also, at the age of 19 or 20, had a notebook in which he kept a list of sins he had committed such as eating an apple at the church, making pies on Sunday night, “Robbing my mother’s box of plums and sugar” and “Using Wilford’s towel to spare my own”. Bad boy. Maybe more interesting is that Newton recorded his secret confessions in a cryptic code that was only deciphered in 1964. There are still four words that nobody has been able to crack. If you get bored over Christmas, you can give it a try yourself, link’s in the info below.
Newton may now be most famous for inventing calculus and for Newton’s laws and Newtonian gravity, all of which make him sound like a pen-and-paper person. But he did some wild self-experiments that you can put to good use in your Christmas conversations. Merry Christmas, did you know that Newton once poked a needle into his eye? I think this will go really well.
Not a joke. In 1666, when he was 23, Newton, according to his own records, poked his eye with a bodkin, which is more or less a blunt stitching needle. In his own words “I took a bodkine and put it between my eye and the bone as near to the backside of my eye as I could: and pressing my eye with the end of it… there appeared several white dark and coloured circles.”
If this was not crazy enough, in the same year, he also stared at the Sun taking great care to first spend some time in a dark room so his pupils would be wide open when he stepped outside. Here is how he described this in a letter to John Locke 30 years later:
“in a few hours’ time I had brought my eyes to such a pass that I could look upon no bright object with either eye but I saw the sun before me, so that I could neither write nor read... I began in three or four days to have some use of my eyes again & by forbearing a few days longer to look upon bright objects recovered them pretty well.”

Don’t do this at home. Since we’re already talking about needles, did you know that pine needles are edible? Yes, they are edible and some people say they taste like vanilla, so you can make ice cream with them. Indeed, they are a good source of vitamin C and were once used by sailors to treat and prevent scurvy.
By some estimates, scurvy killed more than 2 million sailors between the 16th and 18th centuries. On a long trip it was common to lose about half of the crew, but in extreme cases it could be worse. On his first trip to India in 1499, Vasco da Gama reportedly lost 116 of 170 men, almost all to scurvy.
But in 1536, the crew of the French explorer Jacques Cartier was miraculously healed from scurvy upon arrival in what is now Québec. The miracle cure was a drink that the Iroquois prepared by boiling the leaves and bark of an evergreen tree, which was rich in vitamin C.
So, if you’ve run out of emphatic sounds to make in response to aunt Emma, just take a few bites off the Christmas tree, I’m sure that’ll lighten things up a bit.
Speaking of lights. Christmas lights were invented by none other than Thomas Edison. According to the Library of Congress, Edison created the first strand of electric lights in 1880, and he hung them outside his laboratory in New Jersey during Christmastime. Two years later, his business partner Edward Johnson had the idea to wrap a strand of hand-wired red, white, and blue bulbs around a Christmas tree. So maybe take a break from worshipping Newton and spare a thought for Edison.
But watch out when you put the lights on the tree. According to the United States Consumer Product Safety Commission, in 2018, 17,500 people sought treatment at a hospital for injuries sustained while decorating for the holiday.
And this isn’t the only health risk at Christmas. In 2004, researchers in the United States found that people are much more likely to die from heart problems than expected, both on Christmas and on New Year’s. A 2018 study from Sweden made a similar finding. The authors of the 2004 study speculate that the reason may be that people delay seeking treatment during the holidays. So if you feel unwell, don’t put off seeing a doctor, even if it’s Christmas.
And since we’re already handing out the cheerful news: couples are significantly more likely to break up in the weeks before Christmas. This finding comes from a 2008 paper by British researchers who analyzed Facebook status updates. Makes you wonder, do people break up because they can’t agree which day Newton was born, or do they just not want to see their in-laws? Let me know what you think in the comments.
Monday, November 28, 2016
This isn’t quantum physics. Wait. Actually it is.
“Guys, this isn’t quantum physics. Put the stuff in the blender.”

Or losing weight:
“if you burn more calories than you take in, you will lose weight. This isn't quantum physics.”

Or economics:
“We’re not talking about quantum physics here, are we? We’re talking ‘this rose costs 40p, so 10 roses costs £4’.”

You should also know that Big Data isn’t Quantum Physics and Basketball isn’t Quantum Physics and not driving drunk isn’t quantum physics. Neither is understanding that “[Shoplifting isn’t] a way to accomplish anything of meaning,” or grasping that no doesn’t mean yes.
But my favorite use of the expression comes from Noam Chomsky who explains how the world works (so the modest title of his book):
“Everybody knows from their own experience just about everything that’s understood about human beings – how they act and why – if they stop to think about it. It’s not quantum physics.”

From my own experience, stopping to think and believing one understands other people effortlessly is the root of much unnecessary suffering. Leaving aside that it’s quite remarkable some people believe they can explain the world, and even more remarkable others buy their books, all of this is, as a matter of fact, quantum physics. Sorry, Noam.
Yes, that’s right. Basketballs, milkshakes, weight loss – it’s all quantum physics. Because it’s all happening by the interactions of tiny particles which obey the rules of quantum mechanics. If it wasn’t for quantum physics, there wouldn’t be atoms to begin with. There’d be no Sun, there’d be no drunk driving, and there’d be no rocket science.
Quantum mechanics is often portrayed as the theory of the very small, but this isn’t so. Quantum effects can stretch over large distances and have been measured over distances up to several hundred kilometers. It’s just that we don’t normally observe them in daily life.
The typical quantum effects that you have heard of – things whose position and momentum can’t be measured precisely, are both dead and alive, have a spooky action at a distance and so on – don’t usually manifest themselves for large objects. But that doesn’t mean that the laws of quantum physics suddenly stop applying at a hair’s width. It’s just that the effects are feeble and human experience is limited. There is some quantum physics, however, which we observe wherever we look: If it wasn’t for Pauli’s exclusion principle, you’d fall right through the ground.
Indeed, a much more interesting question is “What is not quantum physics?” For all we presently know, the only thing not quantum is space-time and its curvature, manifested by gravity. Most physicists believe, however, that gravity too is a quantum theory, just that we haven’t been able to figure out how this works.
“This isn’t quantum physics,” is the most unfortunate colloquialism ever because really everything is quantum physics. Including Noam Chomsky.
Friday, May 30, 2014
Wednesday, November 13, 2013
Physics in product ads
I've been trying to figure out a quick way to make an embeddable slideshow and to that end I collected some physics-themed product names that I found amusing. Hope this works, enjoy :)
Monday, September 30, 2013
Gauge Symmetry Violation (Short film)
Symmetry (Short Film) from Apostolos Vasileiadis on Vimeo.
A physics professor loses control over a false theory of his. A student is there to set things right.
Filmed at Nordita/AlbaNova or in tunnel system between the buildings respectively. Apparently some of the students here have, ehem, dark fantasies.
Sunday, August 18, 2013
Researchers and coffee consumption
[Figure: Coffee consumption vs number of researchers. The red dot is Germany.]
I passionately hate Excel, and I have no idea how to convince it to give me a p-value, but I've seen worse correlations published. More coffee consumption is linked to more research!
If you want to play with the data, you can download the Excel sheet here. I've left out Singapore from the table because I wasn't sure whether the entry "0" meant there's no data, or nobody in Singapore drinks coffee. I've made a second plot where I left out the 15 main coffee-exporting countries (according to Wikipedia), but visually it doesn't make much of a difference, so I'm not showing you the graph. (It's in the Excel sheet.) According to chartsbin.com, the data on researchers per million inhabitants is from the UNESCO Institute for Statistics, and the data on coffee consumption is from the World Resources Institute.
Don't take this too seriously. I'd guess that you'd find a similar correlation for many consumer goods. It has some amusement value though :o)
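For what it's worth, the p-value that Excel refuses to hand over takes only a few lines of Python. A minimal sketch with made-up numbers (NOT the chartsbin data; the real numbers are in the linked sheet), using a permutation test so nothing beyond the standard library is needed:

```python
import random
import statistics as st

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (st.pstdev(xs) * st.pstdev(ys) * len(xs))

def permutation_p_value(xs, ys, n_perm=10_000, seed=0):
    """Two-sided p-value: how often a random pairing beats the observed |r|."""
    rng = random.Random(seed)
    observed = abs(pearson_r(xs, ys))
    ys = list(ys)  # work on a copy so the caller's list is untouched
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            hits += 1
    return hits / n_perm

# hypothetical numbers for illustration only:
coffee_kg = [1.2, 3.4, 4.5, 5.6, 7.9, 9.1]          # per-capita consumption
researchers = [900, 2100, 2600, 3400, 4100, 5200]   # per million inhabitants
r = pearson_r(coffee_kg, researchers)
p = permutation_p_value(coffee_kg, researchers)
print(f"r = {r:.3f}, p = {p:.4f}")
```

With real data you'd want scipy.stats.pearsonr, which returns the analytic p-value directly; the permutation version above just makes the logic explicit.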
Monday, August 12, 2013
Book Review: “Information is Beautiful” by David McCandless
By David McCandless
Collins (6 Dec 2012)
The more information there is, the more relevant it becomes to present it in human-digestible form, whence springs the flood of infographics in your news feed. There are good examples and bad examples of data visualization, and McCandless’ graphics are among the cleanest, neatest, and best-designed ones that I’ve come across. McCandless describes himself as a “data journalist” and “information designer” and with that fills a niche in the economic ecosystem that isn’t presently populated by many.
The book is a print version of examples from his website. It’s not the kind of book you read front to back, but one that you browse through for the sake of curiosity, for distraction, or in search of a conversation topic. It does this job quite well; it also looks good, feels nice, and is interesting. Some of the graphics in the book, however, are quite useless, or seem to be based on data, or interpretations of data, that I find questionable. This is to say, the emphasis of these graphics is on design, not on science.
I got this book as a gift and spent a cozy afternoon with it on the couch, something I haven’t yet managed to achieve with digital media. (Not to mention that I’d rather have the kids wreck a book than a screen, should I fall asleep over it.) I’m more interested in the science of information than the design of information, and from the scientific side the graphics leave something to be desired. But they’re an interesting reflection on contemporary thought, and I’d say the book is well worth the price.
Thursday, April 12, 2012
Some physics-themed ngram trends
In the first graph below you see "black hole" in blue which peaks around 2002, "big bang" in red which peaks around 2000, "quantization" in green which peaks to my puzzlement around 1995, and "dark matter" in yellow which might peak or plateau around 2000. Data is shown from 1920 to 2008. Click to enlarge.

In the second graph below you see the keywords "multiverse" in blue, which has been increasing since about 1995 but interestingly seems to have been around long before that, "grand unification" in yellow which peaks in the mid-80s and has been in decline since, "theory of everything" in green which plateaus around 2000, and "dark energy" in red which appears in the late 90s and is still sharply increasing. Data is shown from 1960 to 2008. Click to enlarge.

This third figure shows "supersymmetry" in blue which peaks around 1985 and 2001, "quantum gravity" in red which might or might not have plateaued, and "string theory" in green which seems to have decoupled from supersymmetry in early 2002 and avoided dropping. Data is shown from 1970 to 2008.

A graph that got so many more hits it wasn't useful to plot it with the others: "emergence" which peaked in the late 90s. Data is shown from 1900 to 2008.

More topics of the past: "cosmic rays" in blue which was hot in the 1960s, "quarks" in green which peaks in the mid 90s, and "neutrinos" in red which peaks around 1990. Data is shown from 1920 to 2008.

Even quantum computing seems to have maxed out (data is shown from 1985 to 2008).

So, well, then what's hot these days? See below "cold atoms" in blue, "quantum criticality" in red and "qbit" in green. Data is shown from 1970 to 2008.

So, condensed matter and cosmology seem to be the wave of the future, while particle physics is in decline and quantum gravity doesn't really know where to go. Feel free to leave your interpretation in the comments!
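The "peaks around year X" statements above are just argmax claims about the frequency series; once you export the ngram counts, checking them is one line. A sketch with invented numbers (not actual Google Ngram data):

```python
# invented frequencies for illustration, NOT real ngram counts
black_hole = {
    1990: 1.1e-6, 1995: 1.8e-6, 2000: 2.6e-6,
    2002: 2.9e-6, 2005: 2.7e-6, 2008: 2.4e-6,
}

def peak_year(series):
    """Return the year with the highest relative frequency."""
    return max(series, key=series.get)

print(peak_year(black_hole))  # 2002
```

Distinguishing a genuine peak from a plateau, as the text hedges for "dark matter", would additionally need a check that the frequency actually falls off on both sides of the maximum.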
Sunday, March 04, 2012
The Edge annual question 2012
As every year half of the respondents used the opportunity to promote their own research. This year they may be forgiven, for they were likely drawn to their research because they found it elegant or beautiful. A notable exception is experimental psychologist Bruce Hood who nominated Fourier's theorem because "psychology ... is rarely elegant."
Some of his colleagues see this differently, though. David M. Buss, for example, thinks "Sexual Conflict Theory" is an elegant explanation, as far as he is concerned: "Men are known to feign long-term commitment, interest, or emotional involvement for the goal of casual sex, interfering with women's long-term mating strategy," he writes. He'd better learn string theory to explain everything.
Psychologist Mahzarin Banaji offers "Bounded Rationality," the insight that human beings are not "smart enough [to behave] in line with basic axioms of rationality." The nonexistent rational person would say that if the subject of your study doesn't behave as your axioms say, you should conclude that you've used the wrong axioms. More replies from the psychological side are that of Emily Pronin, who finds it beautiful that "Human beings are motivated to see themselves in a positive light," and that of Joel Gold, who likes Freud's elegant discovery of the unconscious.
Nathan Myhrvold explains that the scientific method "is the ultimate foundation for anything worthy of the name 'explanation'" and is, surprisingly, the only one to name the scientific method. The double helix and natural selection, as one could expect, appear several times.
Needless to say, the physicists had a large selection of answers to choose from. As Leonard Susskind wrote in his reply: "That's a tough question for a theoretical physicist; theoretical physics is all about deep, elegant, beautiful explanations; and there are just so many to choose from." He chose to nominate Boltzmann's explanation of the second law of thermodynamics because his "favorites are explanations that get a lot for a little." A good choice.
Anton Zeilinger names Einstein's 1905 proposal that light consists of energy quanta, Raphael Bousso on similar reasoning goes for quantum theory, and Satyajit Das for Heisenberg's uncertainty principle.
Steve Giddings and Roger Highfield nominate Einstein's insight that gravity is curvature of spacetime, Lee Smolin's favorite elegant explanation is the principle of inertia, and Sean Carroll, close by, names the universality of gravity. Stephon H. Alexander, always unpredictable, goes for particle creation in time dependent gravitational fields. (Which, incidentally, was the topic of my master's thesis.)
Lawrence M. Krauss goes for electromagnetism, Eric Weinstein favors the deep insight that quantum theory is "actually a natural and elegant self-assembling body of pure geometry that ha[s] fallen into an abysmal state of pedagogy putting it beyond mathematical recognition," Timo Hannay's favorite is QED, Laurence C. Smith goes for continuity equations, Lisa Randall nominates the Higgs mechanism, and Garrett Lisi names a theory of everything that does not yet exist - who knows what might have been on his mind.
Marcelo Gleiser and Bruce Parker nominate atomism. Gregory Benford and Peter Woit reasonably find beauty in the unreasonable effectiveness of mathematics, and Shing-tung Yau, (Co-author of The Shape of Inner Space) keeps it simple and elegant with "A Sphere."
Max Tegmark is as always entertaining:
"My favorite deep explanation is that our baby universe grew like a baby human — literally. Right after your conception, each of your cells doubled roughly daily, causing your total number of cells to increase day by day as 1, 2, 4, 8, 16, etc. Repeated doubling is a powerful process, so your Mom would have been in trouble if you'd kept doubling your weight every day until you were born... Crazy as it sounds, this is exactly what our baby universe did according to the inflation theory pioneered by Alan Guth and others..."

The reason to capitalize Mom is that it stands for God in this creation myth. And I guess the navel of the universe lies at MIT.
Jeremy Bernstein, interestingly enough, names the Planck scale as a limit to measurement of time (and space I want to add), which we recently discussed here. Bernstein however credits this insight to Freeman Dyson.
Freeman Dyson himself thinks it is elegant that general relativity remains unquantized and, repeating earlier statements of his, he "propose[s] as a hypothesis... that single gravitons may be unobservable by any conceivable apparatus." I very much like his reply, because I keep using a fairly old quote from Dyson on my slides to enter into an explanation why the detection of gravitons isn't equivalent to evidence for quantum gravity. So now I can use a newer quotation.
Frank Wilczek offers a very good answer: Simplicity, which he thinks of as the length of an algorithm: "Description length is actually a measure of complexity, but for our purposes that's just as good, since we can define simplicity as the opposite—or, numerically, the negative—of complexity." I like his answer because it touches on the question what we actually mean with elegance.
This underlying big question mark is raised also by Rebecca Newberger Goldstein: "Where do we get the idea — a fantastic idea if you stop and think about it — that the beauty of an explanation has anything to do with the likelihood of its being true?" An excellent point that we explored in my post "Is Physics cognitively biased?"
On that note, Frank Tipler favors parallel universes, Andrei Linde thinks "the inflationary multiverse" is a beautiful explanation for everything, and Martin Rees also nominates the multiverse.
Another noteworthy physicist's reply is that of Seth Lloyd, who made the effort to write up the demonstration that SU(2) is a double cover of SO(3).
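The double-cover property can also be checked numerically: the unit quaternions q and −q map to the same rotation matrix, so two elements of SU(2) cover each element of SO(3). A quick sketch (my own illustration, not Lloyd's write-up):

```python
import numpy as np

def rotation_from_quaternion(q):
    """Rotation matrix in SO(3) from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # unit quaternion
R_plus = rotation_from_quaternion(q)
R_minus = rotation_from_quaternion(-q)
print(np.allclose(R_plus, R_minus))  # True: q and -q give the same rotation
```

Since every entry of the matrix is quadratic in the quaternion components, flipping the sign of all four components leaves it unchanged, which is exactly the 2-to-1 map.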
My award for the most bizarre reply goes to Dave Winer who thinks it is elegant that his computer screen "has the time in the upper-right corner."
The most interesting reply I found was that of Barry C. Smith, who summarizes it as "Lemons are Fast" and explains: "When asked to put lemons on a scale between fast and slow almost everyone says 'fast', and we have no idea why." I'm not sure exactly what is elegant about this, but interesting it is, without doubt.
For me the most insightful reply was that of Tania Lombrozo who writes:
"Metaphysical half-truths... realism, the existence of other minds, causation... These explanations are so broad and so simple that we let them operate in the background, constantly invoked but rarely scrutinized. As a result, most of us can't defend them and don't revise them. Metaphysical half-truths find a safe and happy home in most human minds.
[T]he depth, elegance, and beauty of our intuitive metaphysical explanations can make us appreciate them less rather than more. Like a constant hum, we forget that they are there."
And the shortest reply is that by Katinka Matson who nominates Occam's Razor.
My nomination for the most beautiful and elegant explanation would have been the variational principle (about whose elegance I wrote here), close to David Dalrymple's reply that named the principle of least action.
Does anybody else have the impression that the list is getting longer every year? Do they just write more, or are there actually more names on the list?
The question that I would like to ask all these smart people is this: If everybody on the planet would read your reply (or have it read to them), what would you want to tell them?
Sunday, August 14, 2011
Was there really a man on the moon? Are you sure?
When I write a paper, I usually make an effort to check that the references I am citing do actually show what they claim, at least to some level. Sometimes, digging out the roots of a citation tree holds
I think of myself as a very average person, so I guess that most of you use similar recipes as I do to roughly estimate a trust-value of some online resource. The rule of thumb that I use is based on two simple questions: 1) How much effort would one have to make to fake this piece of information in its present form, and 2) How evil would one have to be?
How much effort would one have to make to put up a website about a non-existing animal? Well, you have to invest the time to write the text, get a domain, and upload it. That is, not very much. How evil do you have to be? For the purpose of teaching internet literacy, somebody probably believed he was being good. Trust-value of the tree octopus: Nil. How much effort do you have to make to fake some governmental website? Some. And it's probably illegal too, so it does require some evil. How much effort would you have to make to fake the moon landing?
Of course such trust-value estimates have large error bars. Faking somebody else's writing style, for example, can be quite difficult (if it wasn't, I'd be writing like Jonathan Franzen), though that depends on the writing style to begin with. If you've never registered a domain before, you might vastly overestimate the effort it takes. And how difficult is it really to convince some billion people that the Earth is round? (Well, almost.) Or to convince them that some omniscient being is watching over them, taking note every time they think about somebody else's underwear? There you go. (And Bielefeld, btw, doesn't exist either.)
The trustworthiness of Wikipedia is a question with more than academic value. For better or worse, Wikipedia has become a daily source of reference for hundreds of millions of people. Its credibility comes from its articles being scrutinized by millions of eyes. Yet it is very difficult to know how many and which people did indeed check some piece of information, and how much they were influenced by the already existing entry. The English Wikipedia site thus, very reasonably, has a policy that information needs to have a source. Reasonable as that may sound, it has its shortcomings, a point that was made very well in a recent NYT article by Noam Cohen, who reports on a criticism by Achal Prabhala, an Indian advisor to the Wikimedia Foundation.
There is arguably information about the real world that is not (yet?) to be found in any published source. Think of something trivial, like good places in your neighborhood to find blackberries (the fruit)1. More interestingly, Prabhala offered the example of a children's game played in some parts of India, and its Wikipedia article in the local language, Malayalam. Though the game is known to about 40 million people, there is no peer-reviewed publication on it. So what would have constituted a valid reference for the English version of the article? What counts as a trusted source? Do videos count? Do the authors of the Wikipedia article have to randomly sample and analyze sources with the same care as a scientific publication would require? It seems, then, that the information age necessitates some rethinking of what constitutes a trusted source other than published works. Prabhala says:
“If we don’t have a more generous and expansive citation policy, the current one will prove to be a massive roadblock that you literally can’t get past. There is a very finite amount of citable material, which means a very finite number of articles, and there will be no more.”
Stefan remarked dryly that they could just add a reference to Ind. J. Anth. Cult. [in Malayalam], and nobody would raise an eyebrow. Among physicists this is, tongue-in-cheek, known as “proof by reference to inaccessible literature” (typically to some obscure Russian journal from the early 1950s). The point is, asking for references is useless if nobody checks even the existence of these references. Most journals now have software that checks reference lists for accuracy, and at the same time for existence. The same software will inevitably spit out a warning if you're trying to reference a living review.
But to come back to Wikipedia: It strikes me as a philosophical conundrum, a reference work that insists on external references. Not only because some of these references may just not exist, but because with a continuously updated work, one can create circular references. Take as an example the paper “Moisture induced electron traps and hysteresis in pentacene-based organic thin-film transistors” by Gong Gu and Michael G. Kane, Appl. Phys. Lett. 92, 053305 (2008). (Sounds seriously scientific, doesn’t it?) Reference [13] cites Wikipedia as a source on fluorescent lamps. There is a paper published in J. Phys. B that cites Wikipedia as a source for the double-slit experiment, and a PRL that cites the Wikipedia entry on the rainbow. Taemin Kim Park found a total of 139 citations to Wikipedia in the fields of Physics and Astronomy in the Scopus database as of January 20112.
Citing Wikipedia would not by itself be a problem. But the vast majority of people who cite websites do not add the date on which they retrieved the site. More disturbingly, the book “World Wide Mind,” which I read recently, had a few “references” to essays that merely mentioned they can easily be found by searching for [keywords], totally oblivious to the fact that the results of such a search change by the day, depend on the person searching, and that websites move or vanish. (Proof by Google?)
While the risk of citation loops increases with frequently updated sources, it is not an entirely new phenomenon. A long-practiced variant of the “proof by reference” is citing one's own “forthcoming paper” (quite common if page restrictions don't allow further elaboration), but in this forthcoming paper (if it comes forth) one references the earlier paper. After ten or so self-referencing papers one declares the problem solved, and anybody who searches for the answer will give up in frustration. (See also: Proof by mutual reference.)
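Such loops would, by the way, be easy to find mechanically if one had the full citation graph: a depth-first search that keeps track of the current path detects them. Here is a minimal sketch; the entries in the example are hypothetical, not real papers.

```python
def find_citation_loop(cites, start):
    """Depth-first search over a citation graph (dict: work -> list of
    cited works). Returns the first loop reachable from 'start', or None."""
    path = []        # works on the current chain of citations
    visited = set()  # works already fully explored

    def dfs(node):
        if node in path:
            # we cited our way back to something on the current chain
            return path[path.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        path.append(node)
        for cited in cites.get(node, ()):
            loop = dfs(cited)
            if loop:
                return loop
        path.pop()
        return None

    return dfs(start)

# Hypothetical example: a paper cites a Wikipedia entry, and a later
# revision of that entry cites the paper back.
cites = {
    "paper_A": ["wikipedia_entry"],
    "wikipedia_entry": ["paper_A"],
}
print(find_citation_loop(cites, "paper_A"))
# ['paper_A', 'wikipedia_entry', 'paper_A']
```

The catch, of course, is the one made above: with a continuously updated source, the graph itself is a moving target, so what counts as an edge depends on the retrieval date.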
Maybe the Wikipedia entry on the octopus hoax is a hoax?
Take-away message: References in the age of the internet are moving targets, and tracing back citations can be tricky. Restricting oneself to published works only leaves out a lot of information. Citation loops through frequently updated websites can create alternate realities. But don't worry, somewhere in the level 5 multiverse it's as real as, say, the moon landing.
Have you cited or would you cite a Wikipedia article in a scientific publication? If you did, did you add a date?
1 And why isn't there a website where one can enter the locations of fruit trees and bushes that nobody seems to harvest? Where we live, a lot of blackberries, cherries, plums, peas, and apples are just rotting away. It's a shame, really.
2 From Park's paper, it is not clear how many of these articles citing Wikipedia were also about Wikipedia. The examples I mentioned were dug out by Stefan.
Wednesday, July 06, 2011
Finetuned
Melvin The Magical Mixed Media Machine from HEYHEYHEY on Vimeo.
[via, more info]
Wednesday, June 01, 2011
Four links to Paul Dirac
So, let's see how far I'm away from Paul Dirac coauthor-wise...
Not so far actually, thanks to Lee. Dirac's paper on the list above is a Nature article from 1952 on the question "Is there an Aether?" What about Albert Einstein then?
And go:
With 5 links to Albert Einstein! That's less than I would have guessed. With 6 links you can probably connect any two authors.
Unfortunately, the AMS database doesn't seem to contain experimentalists. Neither could I find any description of the algorithm used. It runs amazingly fast, and it makes me a little suspicious that in no query I tried did I get two paths of the same length, though that might have been a coincidence.
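My guess, and it is only a guess, is that the tool runs a breadth-first search on the coauthorship graph. That would explain both the speed and why each query returns exactly one path: BFS by construction stops at the first shortest path it finds. A minimal sketch on a toy graph with made-up names:

```python
from collections import deque

def shortest_coauthor_path(graph, start, goal):
    """Breadth-first search: returns one shortest chain of coauthors
    from start to goal, or None if they are not connected."""
    if start == goal:
        return [start]
    queue = deque([start])
    parent = {start: None}  # also serves as the visited set
    while queue:
        author = queue.popleft()
        for coauthor in graph.get(author, ()):
            if coauthor not in parent:
                parent[coauthor] = author
                if coauthor == goal:
                    # walk the parent pointers back to the start
                    path = [goal]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                queue.append(coauthor)
    return None

# Toy coauthorship graph (invented names, not the AMS data):
graph = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}
print(shortest_coauthor_path(graph, "A", "D"))
# ['A', 'B', 'C', 'D'], i.e. 3 links from A to D
```

On a graph of a few million authors this still finishes in well under a second, which would account for the amazing speed.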
So, have fun playing around.
Saturday, September 25, 2010
Dance your PhD
Electrons and Phonons in Superconductors: A Love Story. from Irwin Singer on Vimeo
You can look at more submissions on this website.
The topic of my PhD thesis was "Black Holes in Extra Dimensions: Properties and Detection." (IsMyThesisHotOrNot?!) I'm afraid a video wouldn't have properly captured extra dimensional dancing. I suppose I would have tried to represent collapse and subsequent radiation, increasing temperature, and a final decay with dancers coming together in the center of a room, and later leaving the scene again. More likely though, I wouldn't have spent time on this.
I'm not really sure what to think of such efforts to bring science closer to the public. The above video about the superconductor, frankly, would have been equally instructive without the dancers. Most of the other videos, if you check them out, don't communicate more than a sentence or two of information about the thesis topic. Not so surprisingly: dancing is hardly a good way to get across complex science.
Now don't get me wrong, I'm sure everybody had a lot of fun with these videos, and one or two people learned a complicated new word they hadn't known before. But let's reverse the roles of art and science for a moment here. It's like trying to get people interested in a Van Gogh by showing them a spectral analysis of the colors he used. Science is beautiful in itself, but to see the beauty you must understand. The value of artistic representation lies in skilled art being able to capture more than the written or spoken word alone. These dance videos, at least to me, capture less. In any case, they might serve as a weekend distraction ;-)


