Saturday, August 13, 2022

Science With the Gobbledygook

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Today we’re celebrating 500 thousand subscribers. That’s right, we made it to half a million! Thanks everyone for being here. YouTube has made it so much easier for me to cover the news that I think deserves to be covered, and you have made it happen. And to honor the occasion, we have collected some examples of science with the gobbledygook. And that’s what we’ll talk about today.

1. Salmon Dreams and Jelly Brains

In 2008, neuroscientist Craig Bennett took a dead Atlantic salmon to the laboratory and placed it in an fMRI machine. He then showed the salmon photographs of people in social situations and asked what the people in the photos might have been feeling. For example, if I show you a stock photo of a physicist with a laser, the associated emotion is obviously uncontrollable excitement. The salmon didn't answer.

You may find that unsurprising given that it was very dead. But Bennett then used standard protocols to analyze the fMRI signal he had recorded while questioning the salmon, and found activity in some region of the salmon’s brain. The absurdity of this finding went a long way to illustrate that the fMRI methods used at the time frequently gave spurious results.

The dead salmon led to quite some soul-searching in the neuroscience community about the usefulness of fMRI readings. A meta-review in 2020 concluded that “common task-fMRI measures are not currently suitable for brain biomarker discovery or for individual-differences research.”

In 2011, a similar point was made by neuroscientists who published an electroencephalogram of jello that showed “mild diffuse slowing of the posterior dominant rhythm”. They also highlighted some other issues that can give rise to artifacts in EEG readings, such as sweating, or being close to a power outlet.

2. Medical Researcher Reinvents Integration

In 1994, Mary Tai from the Obesity Research Center in New York invented a method to calculate the area under a curve and published it in the journal Diabetes Care. She called her discovery “The Tai Model.” It’s also known as integration, or more specifically the trapezoidal rule. To date, the paper has been cited more than 400 times.

It’s maybe somewhat unfair to list this as “gobbledygook” because it’s not actually wrong; she just wasn’t exactly the first to have the idea. If you slept through math class, don't worry, you can just go into medicine. What could possibly happen?
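In case you want to rediscover it yourself, the trapezoidal rule fits in a few lines of Python. A minimal sketch (the glucose readings are made up for illustration):

```python
def tai_model(t, y):
    """Area under a curve by the trapezoidal rule, aka the "Tai Model".
    t: sample times, y: measured values at those times."""
    # Each segment contributes the area of a trapezoid:
    # segment width times the average height of its two endpoints.
    return sum((t[i+1] - t[i]) * (y[i] + y[i+1]) / 2 for i in range(len(t) - 1))

# Hypothetical glucose readings (mmol/L) at times in minutes
t = [0, 30, 60, 90, 120]
y = [5.0, 7.8, 9.1, 7.2, 5.5]
print(tai_model(t, y))  # 880.5 -- numpy.trapz(y, t) gives the same answer
```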

3. The Sokal Hoax and its Legacy

This is probably the most famous hoax in academic publishing. Alan Sokal is a physics professor at NYU and UCL who works mostly on the mathematical properties of quantum field theory. In 1996 he wrote a paper for the journal Social Text. It was titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. In this paper, Sokal argued that to resolve the disagreement between Einstein’s theory of gravity and quantum mechanics, we need postmodern science. What does that mean? Here’s what Sokal wrote in his paper:

“The postmodern sciences overthrow the static ontological categories and hierarchies characteristic of modernist science…. [They] appear to be converging on a new epistemological paradigm, one that may be termed an ecological perspective.”

In other words, the reason we still haven’t managed to unify gravity with quantum mechanics is that you can’t eat quantum gravity. So, yes, clearly an ecological problem. Though you should try eating it if you find it. I mean, you never know, right?

Sokal’s paper was published without peer review. According to the editors, the decision was based on the author’s credentials. Sokal argued that if everyone can make up nonsense like this and it’s deemed suitable for publication, then such publications are worthless. The journal still exists. Some of its recent issues are about “Sexology and Its Afterlives” and “Sociality at the End of the World”.

Similar hoaxes have since been pulled off a few times, even in journals that are peer reviewed. For example, in 2018, a group of three Americans who describe themselves as “left wing” and “liberal” succeeded in publishing several nonsense papers in academic journals on topics such as gender and race studies. One paper, for example, claimed to relate observations of dogs and their owners to rape culture. Here’s a quote from the paper:

“Do dogs suffer oppression based upon (perceived) gender? [This article] concludes by applying Black feminist criminology categories through which my observations can be understood and by inferring from lessons relevant to human and dog interactions to suggest practical applications that disrupts hegemonic masculinities and improves access to emancipatory spaces.”

The authors explained in a YouTube video that they certainly don’t think race and gender studies are unimportant but rather the opposite. Such studies are important and it’s therefore hugely concerning if one can publish complete nonsense in academic journals on the topic. They argued that articles which are currently accepted for publication in the area are biased towards airing “grievances” predominantly about white heterosexual men. They called their project “grievance studies” but it became known as Sokal Squared.

The most recent such hoax was revealed last year in October. The journal Higher Education Quarterly published a study that claimed to show that right-wing funding pressures university faculty to promote right-wing causes in hiring and research. The paper contained a number of obviously shady statistics, and yet was accepted for publication.

The authors had submitted the manuscript under pseudonyms with initials that spelled SOKAL, and pretended to be affiliated with universities where no one with those names worked. They later revealed their hoax on Twitter. The account has since been suspended. The journal retracted the paper.

4. Fake it till you make it

Those papers in the Sokal hoaxes were written by actual people. But in 2005 a group of computer science students from MIT demonstrated that this isn’t actually necessary. They wrote a program that automatically generated papers with nonsense text, including graphs, figures, and citations.

One of their examples was titled “A Methodology for the Typical Unification of Access Points and Redundancy” and explained “Our implementation of our approach is low-energy, Bayesian, and introspective. Further, the 91 C files contain about 8969 lines of SmallTalk.” They didn’t submit it to a journal but it was accepted for presentation at a conference. They used this to draw attention to the low standards of the meeting.

But this wasn’t the end of the story, because the MIT group made their code publicly available. In 2010, the French researcher Cyril Labbé used this code to create more than a hundred junk papers by a fictional author named Ike Antkare. The papers all cited each other, and soon enough Google Scholar listed the non-existent Antkare as the 21st most-cited researcher in the world.

A few years later, Labbé wrote a program that could detect the specific pattern of these junk papers that were generated with the MIT group’s software. He found that at least 120 of them had been published. They have since been retracted.

The online version of the MIT code doesn’t work anymore, but there’s another website that’ll allow you to generate a gibberish maths paper, with equations and references and all. Here for example is my new paper on “Existence in Complex Graph Theory” with my co-authors Henri Poincaré and Jesus Christ.

The physics enthusiasts among you might also enjoy the snarXiv, a website that looks like the arXiv but with nonsense abstracts about high energy physics. I’ll leave you links to all these websites in the info below the video.

5. My Phone Did It

Okay so you can write papers with an artificial intelligence. Indeed, artificial intelligence now writes papers about itself. But what if you don’t have one? Look no further than your phone.

In 2016, Christoph Bartneck from the University of Canterbury, New Zealand, received an invitation from the International Conference on Atomic and Nuclear Physics to submit a paper. He explained on his blog “Since I have practically no knowledge of Nuclear Physics I resorted to iOS auto-complete function to help me writing the paper.” The paper was accepted. Here is an extract from the text: “Physics are great but the way it does it makes you want a good book and I will pick it to the same time.”

6. Get me off your fucking email list

I’m not sure how well-known this is, but if you’ve published a few papers in standard scientific journals you get spammed with invitations to fake conferences and scam journals all the time. In many cases these invitations have nothing to do with your actual research. I’ve been invited to publish papers on everything from cardiology to tea. Most of the time you just delete it, but it does get a bit annoying. I will say though, that the tea conference I attended was lovely.

In 2005, David Mazières and Eddie Kohler dealt with the issue by writing a paper that repeats the one sentence “Get me off your fucking email list” over and over again, complete with a flow diagram and a scatter plot. They submitted it to the 9th World Multiconference on Systemics, Cybernetics and Informatics to protest its poor standards.

In 2014, the Australian computer scientist Peter Vamplew sent the same paper to the International Journal of Advanced Computer Technology in response to their persistent emails. To his surprise, he was soon informed that the paper had been accepted for publication. Not only this, its reviewers had allegedly rated the paper “Excellent”. The next thing that happened was that they asked him to pay 150 dollars for the publication. He didn’t pay, and they, unfortunately, didn’t take him off the email list.

7. Chicken chicken chicken

Chicken chicken chicken Chicken chicken chicken chicken chicken chicken chick chicken chicken Chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chick chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chick chicken.

8. April Fools’ on the arXiv

The arXiv is the open access pre-print server which is widely used in physics and related disciplines. The arXiv has a long tradition of accepting joke papers for April 1st, and it’s some of the best nerd humor you’ll find.

For example, two years ago two physicists proposed a “Novel approach to Room Temperature Superconductivity problem”. The problem is that the critical temperature at which superconductivity sets in is extremely low for all known materials. Even the so-called “high temperature superconductors” become superconducting only at -70 degrees Celsius or so. Finding a material that superconducts at room temperature is basically the holy grail of materials science. But don’t tell Monty Python, because it’s silly enough already to call minus 70 degrees Celsius a “high temperature”.

In their April 1st paper, the authors report they have found an ingenious solution to the problem of finding superconductors that work at room temperature: “Instead of increasing the critical temperature of a superconductor, the temperature of the room was decreased to an appropriate [value of the critical temperature]. We consider this approach more promising for obtaining a large number of materials possessing Room Temperature Superconductivity in the near future.”

In 2022 one of the April Fools’ papers made fun of exoplanet sightings and reported exopet sightings in Zoom meetings.


9. Funny Paper Titles

As you just saw, scientists want to have fun too, and not just on April 1st, so sometimes they do it in their paper titles. For example, there’s the paper about laser optics called “One ring to multiplex them all”. Or this one called “Would Bohr be born if Bohm were born before Born?”

Of course physicists aren’t the only scientists with humor. There is also “Premature Speculation Concerning Pornography’s Effects on Relationships”, and “Great Big Boulders I have Known” and “Role of childhood aerobic fitness in successful street crossing”, though maybe that was unintentionally funny.

An honorable mention goes to the paper titled “Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?” because the authors did literally put bird crap into graphene. And, yes, it increased the electrocatalytic effect.

10. Dr Cat

In 1975, the American physicist Jack Hetherington wanted to publish some of his research results in the journal Physical Review Letters. He was the sole author of the paper, but he’d written it in the first person plural, referring to himself as “we”. This is extremely common in the scientific literature and we have done that ourselves, but a colleague pointed out to Hetherington that PRL had a policy that would require him to use the first person singular.

Instead of rewriting his paper, Hetherington decided he’d name his cat as co-author under the name F. D. C. Willard. The paper was published with the cat as co-author and he could keep using the plural.

Hetherington revealed the identity of his co-author by letting the cat “sign” a paper with paw prints. The story of Willard the cat was soon picked up by many colleagues, who’d thank the cat for useful discussions in footnotes of their papers, or invite it to conferences. Willard the cat also later published two single-authored papers, and quickly became a leading researcher, no doubt with a paw-fect CV. On April 1st 2014 the American Physical Society announced that cat-authored papers, including the Hetherington/Willard paper, would henceforth be open-access.

I hope you enjoyed this list of science anecdotes. If you have one to add, please share it in the comments.

Saturday, August 06, 2022

How to compute with a computer that doesn't compute

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Thanks for clicking on this video, by which you’ve ruled out a possible world in which you never watched it. This alternative world has become a “counterfactual” reality. For us, counterfactuals are just things that could have happened but didn’t, like my husband mowing the lawn. In quantum mechanics, it’s more difficult. In quantum mechanics, events which could have happened but didn’t still have an influence on what actually happens. Yeah, that’s weird. What does quantum mechanics have to do with counterfactual reality? That’s what we’ll talk about today.

I have only recently begun working in the foundations of quantum mechanics. For the previous decade I have mostly worked on General Relativity, cosmology, dark matter, and stuff like this. And I have to say I quite like working on quantum mechanics because it’s simple. I like simple things. It’s why I have plastic plants instead of a dog.

In case you were laughing, this wasn’t a joke. I actually do have plastic plants, and quantum mechanics is indeed a simple theory – if you look at the mathematics. The difficult part is making sense of it. For General Relativity it’s the other way round, and all the maths in the world won’t help you make sense of dogs.

Okay, I can see you’re not entirely convinced that quantum mechanics is in some sense simple, but please give me a chance to convince you. In quantum mechanics we describe everything by a wave-function. It’s usually denoted psi, which is a Greek letter but maybe not coincidentally also the reaction I get from my friends when I go on about quantum mechanics.

We compute how the wave-function behaves from the Schrödinger equation, but for many cases we don’t need this. For many cases we just need to know that the Schrödinger equation is a kind of machine that takes in a wave-function and spits out another wave-function. And the wave-function is a device from which you calculate probabilities. To keep things simple, I’ll directly talk about the probabilities. This doesn’t always work, so please don’t think quantum mechanics is really just probabilities, but it’s good enough for our purposes.

Here is an example. Suppose we have a laser. Where did we get it from? Well, maybe it was on sale? Or we borrowed it from the quantum optics lab? Maybe the laser fairy brought it? Look, this is theoretical physics, let’s just assume we have a laser, and not ask where we got it, okay?

So, suppose we have a laser. The laser hits a beam splitter. A beam splitter, well, splits a beam. I told you, this isn’t rocket science! In the simplest case, the splitter splits the beam into half, but this doesn’t have to be the case. It could also be a third and two thirds, or a tenth and nine tenths, so long as the fractions add up to 1. You get the idea. For now, let’s just take the case with a half-half split.

So far we’ve been talking about a laser beam, but the beam is made up of many quanta of light. The quanta of light are the photons. What happens with the individual quanta when they hit the beam splitter? The quanta are each described by a wave-function. Did I just hear you sigh?

The Schrödinger equation tells you something complicated happens to this wave-function, but let’s forget about this and just look at the outcome. So we say, the beam splitter is a machine that does something to this wave-function. What does it do?

It’s not that the photon which comes out of the beam splitter goes one way half of the time and the other way the other half. Instead, here it comes, the photon itself is split into half, kind of. We can describe this by saying the photon goes in with a wave-function going in this direction. And out comes a wave-function that is a sum of both paths.

I already told you that the wave-function is a device from which we calculate probabilities. More precisely, we do this by taking the absolute square of the weights in the wave-function. Since the probabilities are ½ for each possibility, the weight for each path in the wave-function is one over the square root of two. If the beam splitter doesn’t split the beam half-half but, say, 1/3 and 2/3, then the weights are the square root of 1/3 and the square root of 2/3, and so on.
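If you want to see that bookkeeping spelled out, here’s a tiny sketch of the rule “probability equals absolute square of the weight” (pure illustration, not tied to any particular experiment):

```python
import numpy as np

# Weights (amplitudes) of the two paths for a half-half splitter
weights = np.array([1/np.sqrt(2), 1/np.sqrt(2)])
print(np.abs(weights)**2)          # [0.5 0.5] -- the probabilities
print(np.sum(np.abs(weights)**2))  # 1.0 -- they have to add up to 1

# A 1/3 - 2/3 splitter works the same way
weights = np.array([np.sqrt(1/3), np.sqrt(2/3)])
print(np.abs(weights)**2)          # [0.333... 0.666...]
```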

We say that this sum of wave-functions is a “superposition” of both paths. That’s the simple part. The difficult part is the question whether the photon really is on both paths. I’ll not discuss this here, because we just talked about this some weeks ago, so check out my earlier video about this.

That the photon is now in a superposition of both paths tells you the probability to measure the particle on either path. But of course if you do measure the particle, you know for sure where it is. So this means if you measure the photon, it’s no longer in a superposition of both paths; the wave-function has “collapsed” on one of the paths like me after a long hike.

As long as you don’t measure the wave-function, this beam splitter also works backwards. If you turn around the directions of those two paths, for example with mirrors, they’ll recombine into a photon on a single path. You can understand this by remembering that the photon is a wave, and waves can interfere constructively and destructively. So they interfere constructively in this output direction, but destructively in the other. Again, the nice thing here is that you don’t actually need to know this. The beam splitter is just a machine that converts some wave-functions into others.

Let’s look at something a little more useful. We’ll turn this around again, put two mirrors here and combine the two paths at another beam splitter. What happens? Well, this lower beam splitter is exactly the turned-around version of the upper beam splitter, which is what we just looked at. The superposition will recombine to one path, and the photon always goes into detector 2.

Well, actually, I should add this is only the case if the paths are exactly the same length. Because if you change the length of one path, that will shift the phase-relations, and so the interference may no longer be exactly destructive in detector 1. This means a device like this is extremely sensitive to changes in the lengths of the paths. It’s called an interferometer. If you change the orientation of those mirrors and move the second beam splitter near the first, then this is basically how gravitational wave interferometers work. If a gravitational wave comes through, this changes the relative lengths of the paths and that changes the interference pattern.
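For those who like to tinker, the whole interferometer can be modeled with a 2×2 matrix for each 50/50 splitter acting on the amplitudes of the two paths, plus a phase for the path-length difference. A toy sketch in that spirit (I picked a common sign convention for the splitter; real devices differ in details):

```python
import numpy as np

# 50/50 beam splitter as a matrix acting on the two path amplitudes
BS = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)

def detector_probs(phase):
    """Photon enters on one path; 'phase' is the extra phase one arm
    picks up, e.g. from a change in its length."""
    psi = np.array([1, 0], dtype=complex)       # incoming wave-function
    psi = BS @ psi                              # first splitter: superposition
    psi = np.diag([1, np.exp(1j*phase)]) @ psi  # path-length difference
    psi = BS @ psi                              # second splitter: recombination
    return np.abs(psi)**2                       # Born rule

print(detector_probs(0.0))      # [1. 0.] -- equal paths: everything in one port (detector 2)
print(detector_probs(np.pi))    # [0. 1.] (up to rounding) -- a half-wavelength shift flips it
print(detector_probs(np.pi/2))  # [0.5 0.5] -- in between, both detectors fire
```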

Okay, so we have an interferometer. It’s called a Mach-Zehnder interferometer by the way. Now let’s make this a little more complicated, which is what I said last time when I bought the 5000 piece puzzle that’s been sitting on the shelf for 5 years, but luckily we don’t need quite as many pieces.

We add two more beam splitters and another mirror. And then we need a third detector here. And, watch out, here’s an added complication. Those two outer beam splitters split a beam into fractions 1/3 and 2/3, and those two inner ones 1/2 each. Yeah, sorry about that, but otherwise it won’t work.

What happens if you send a photon into this setup? Well, this part that we just added here, that’s just another interferometer. So if something goes in up here, it’ll come out down here. So 2/3 chance the photon ends up in detector 3. And a 1/3 chance it goes down here, and then through the second beam splitter. And remember this splitter splits 2 to 1, so it’s 2/9 in detector 2 and 1/9 in detector 1.

Now comes the fun part. Suppose we have a computer, a really simple one. It can only answer questions with “yes” or no”. It’s a programmable device with some inner working that doesn’t need to concern us. It told me we can call it James, and it actually would prefer that we don’t ask any further questions. Only thing we need to know is that once you have programmed your computer, I mean James, you run it by inputting a single photon. If the answer is “yes” the photon goes right through, entirely undisturbed. If the answer is “no”, the photon just doesn’t come out. Keith Bowden suggested one could do this by creating a maze for the photon, where the layout of the maze encodes the program, though I’m not sure how James feels about this.

So let’s assume you have programmed the computer to once and for all settle the question whether it’s okay to put pineapple on pizza. Then you put your computer… here. What happens if you turn on your photon source this time? If the answer to the question is “yes, pineapple is okay” then nothing happens at the computer, and it’s just the same as we just talked about. The photon goes to detector 3 2/3 of the time, and in the other cases it splits up between detector 1 and 2.

But now suppose the answer is “no”. What happens then? Well, one thing that can happen is that the photon goes into the computer and doesn’t come out. Nothing ever appears in any detector, and you know the answer is “no, pineapples are bad, don’t put them on pizza”. This is the boring case and it happens 1/3 of the time, but at least you now know what to think about people who put pineapple on pizza.

Here is the more interesting case. If the photon is in the inner interferometer but does not go into the computer and get stuck there, then it takes the upper path. But then when it reaches the next beam splitter, it has nothing to recombine with. So it gets split up again into a superposition. It either goes into detector 3, which happens 1/6 of the time, or it goes down here and then recombines with the lower path from the outer interferometer. This happens in half of the cases, and if it happens, then the photon always goes to detector 2, and never to detector 1. This only comes out correctly if the beam splitters have the right ratios, which is why we need this.
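If you’d like to check all these fractions yourself, here’s my own toy reconstruction of the setup, just bookkeeping of amplitudes through the four splitters. The flipped sign on the lower path stands in for tuning the path lengths so that detector 1 stays dark when the answer is “no” (a sketch, not a description of any specific experiment):

```python
import numpy as np

def splitter(a, b, t):
    """Lossless beam splitter with transmission probability t,
    acting on the amplitudes a, b of its two input paths."""
    T, R = np.sqrt(t), np.sqrt(1 - t)
    return T*a + R*b, R*a - T*b

def run(answer):
    """One photon through the nested interferometer; the computer sits
    in one arm of the inner interferometer. 'yes': the photon passes
    through undisturbed, 'no': the photon is absorbed."""
    inner, lower = splitter(1.0, 0.0, 2/3)  # outer splitter, 1/3 - 2/3
    comp, free = splitter(inner, 0.0, 1/2)  # inner splitter, 1/2 - 1/2
    absorbed = 0.0
    if answer == "no":
        absorbed, comp = comp**2, 0.0       # photon stuck in the computer
    d3, out = splitter(comp, free, 1/2)     # inner recombination
    lower = -lower                          # phase tuned via path length
    d2, d1 = splitter(out, lower, 1/3)      # outer recombination, 1/3 - 2/3
    return dict(D1=d1**2, D2=d2**2, D3=d3**2, absorbed=absorbed)

print(run("yes"))  # D1: 1/9, D2: 2/9, D3: 2/3, absorbed: 0
print(run("no"))   # D1: 0 (dark!), D2: 1/2, D3: 1/6, absorbed: 1/3
```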

Okay, so we see the maths is just adding up fractions, this is the simple part. But now let’s think about what this means. We have seen that the only way we can measure a photon in detector 1 is if the outcome of the computation is “yes”. But we have also seen that if the answer is “yes” and the photon actually goes through this inner part where the computer is located, it cannot reach detector 1. So we know that the answer is “yes” without ever having to run the computer. The photon that goes to detector 1 seems to know what would have happened, had it gone the other path. It knows its own counterfactual reality. In other words, if we had a quantum lawn then it still wouldn’t be mowed, but it’d know what my husband does when he doesn’t mow the lawn. I hope this makes sense now.

And no, this video isn’t a joke, at least not all of it. It’s actually true, you can compute with a computer that doesn’t compute. It’s called “counterfactual computation”. The idea was brought up in the late 1990s by Richard Jozsa and Graeme Mitchison. The example which we just discussed isn’t particularly efficient, because it happens so rarely that you get your answer without running the computer that you’re better off guessing. But if you make the setup more complicated you can increase the probability of finding out what the computer did without running it.

That this indeed works was demonstrated in 2006, in an experiment where the computer performed a simple search algorithm known as Grover’s algorithm. This doesn’t tell you whether pineapple on pizza is okay, but if you have an unsorted database with different entries, this algorithm will tell you which entry is the same as your input value.

Now, let me be clear, this is a table-top experiment that doesn’t calculate anything of use to anybody. I mean, not unless you want to count the use of publishing a paper about it. The database they used for this experiment had four entries in terms of polarized photons. You might argue that you don’t need an entire laboratory to search for one among the stunning number of four entries, and I would agree. But this experiment has demonstrated that counterfactual computation indeed works.

The idea has led to a lot of follow-up works, which include counterfactual quantum cryptography, and how to use counterfactual computation to speed up quantum computers and so on. There is a lot of controversy in the literature about what this all means, but no disagreement on how it works or that it works. And that pretty much tells you what the current status of quantum mechanics is. We agree on how it works. We just don’t agree on what it all means.

If You Need a Break, Try Some Physics. (By which I really mean, please buy my new book.)


If I could, I would lock myself up in a cabin in the woods and not read any news for two weeks. But I find cabins in the woods creepy, and I’d miss the bunny pics on Twitter. And in any case, I have something better to offer.

If you want to take a step back from current affairs, why not fill your mind with some of the big mysteries of our existence? It works like a charm for my mental health. Why do we only get older and not younger? Are there copies of us in other universes? Can particles think? Has physics ruled out free will? Will we ever have a theory of everything? Does science have limits? Can information be destroyed? Will we ever know how the universe began? Is human behavior predictable? Ponder these mysteries for an hour a day and it’ll clear your head beautifully. I speak from experience.

I discuss all these questions and many more in my new book “Existential Physics: A Scientist’s Guide to Life’s Biggest Questions”, which will be on sale in the USA and Canada beginning next week, on August 9. I hope this book will help you separate what physicists know about those big questions from what they just speculate about.

You can buy a signed copy from Midtown Scholar here (but note that they ship only in the USA and Canada). The UK edition will be published on August 18. The publication date for the German translation has tentatively been set to March 28, 2023. There’ll be a couple more translations following next year. Some more info about the book (reviews etc) here.

Saturday, July 30, 2022

Is the brain a computer?

If you like my content, you may also like my new newsletter, to which you can sign up here (bottom of the page). It's a weekly summary of the most interesting science news I came across in the past week. It's completely free and you can unsubscribe at any time.


[What follows is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


My grandmother was a computer, and I don’t mean there was a keypad on her chest. My grandmother calculated orbits of stars, with logarithmic tables and a slide rule. But in which sense are brains similar to the devices we currently call computers, and in which sense not? What’s the difference between what they can do? And is Roger Penrose right in saying that Gödel’s theorem tells us human thought can’t just be computation? That’s what we’ll talk about today.

If you have five apples and I give you two, how many apples do you have in total? Seven. That’s right. You just did a computation. Does that mean your brain is a computer? Well, that depends on what you mean by “computer” but it does mean that I have two fewer apples than I did before. Which I am starting to regret. Because I could really go for an apple right now. Could you give me one of them back?

So whether your brain is a computer depends on what you mean by “computer”. A first attempt at answering the question may be to say a computer is something that does a computation, and a computation, according to Google is “the action of mathematical calculation”. So in that sense the human brain is a computer.

But if you ask Google what a computer is, it says it’s “an electronic device for storing and processing data, typically in binary form, according to instructions given to it in a variable program”. The definition on Wikipedia is pretty much the same and I think this indeed captures what most of us mean by “computer”. It’s those things we carry around to brush up selfies, but that can also be used for, well, calculations.

Let’s look at this definition again in more detail. It’s an electronic device. It stores and processes data. The data are typically in binary form. And you can give it instructions in a variable program. Now the second and last points, storing and processing data, and that you can give it instructions, also apply to the human brain. This leaves the two properties that might make a computer different from the human brain: it’s an electronic device, and it typically uses binary data. So let’s look at these two.

That an electronic computer is “digital” just means that it works with discrete data, data whose values are separated by steps, commonly in a binary basis. The neurons in the brain, by contrast, behave very differently. Here’s a picture of a nerve ending. In orange and blue you see the parts of the synapse that release molecules called “neurotransmitters”. Neurotransmitters encode different signals, and neurons respond to those signals gradually and in many different ways. So a neuron is not like a binary switch that’s either on or off.

But maybe this isn’t a very important difference. For one thing, you can simulate a gradual response to input on a binary computer just by giving weights to variables. Indeed, there’s an entire branch of mathematics for reasoning with such inputs. It’s called fuzzy logic and it’s the best logic to pet of all the logic. Trust me, I’m a physicist.

Neural networks, which are used for artificial intelligence, use a similar idea by giving weights to the nodes and sometimes also to the links of the network. Of course these algorithms still run on a physical basis that is ultimately discrete and digital in binary. It’s just that on that binary basis you can mimic the gradual behavior of neurons very well. This already shows that saying that a computer is digital whereas neurons aren’t may not be all that relevant.
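As a toy version of that idea, here’s a graded, neuron-like response built from nothing but binary-representable weights (a generic sketch with made-up numbers, not a model of any actual neuron):

```python
import math

def graded_neuron(inputs, weights, bias=0.0):
    """Weighted sum squashed through a logistic function: the response
    varies smoothly between 0 and 1 instead of flipping on or off."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-s))

weights = [0.8, -0.4, 1.5]                      # made-up weights
print(graded_neuron([0.1, 0.9, 0.2], weights))  # ~0.505, a graded response
print(graded_neuron([1.0, 0.0, 1.0], weights))  # ~0.909, stronger but not "1"
```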

Another reason this isn’t a particularly strong distinction is that digital computers aren’t the only computers that exist. Besides digital computers there are analog computers, which work with continuous data, often in electric, mechanical, or even hydraulic form. An example is the slide rule that my grandma used. But you can also use currents, voltages, and resistors to multiply numbers using Ohm’s law.

Analog computers are currently having somewhat of a comeback, and it’s not because millennials want to take selfies with their record players. It’s because you can use analog computers for matrix multiplications in neural networks. In an entirely digital neural network, a lot of energy is wasted in storing and accessing memory, and that can be bypassed by coding the multiplication directly into an analog element. But analog computers are only used for rather special cases exactly because you need to find a physical system that does the computation for you.
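To make the Ohm’s-law trick concrete: if you encode the matrix entries as conductances and the input vector as voltages, the currents that add up on each output wire are the matrix-vector product. An idealized sketch with made-up numbers (real analog chips have to fight noise and device variation):

```python
import numpy as np

# Idealized resistor crossbar: conductances G = 1/R encode the matrix
G = np.array([[0.5, 1.0],
              [2.0, 0.25]])  # in siemens
V = np.array([1.2, 3.0])     # input voltages encode the vector

# Ohm's law per resistor (I = G*V) plus Kirchhoff's current law
# (currents flowing into the same output wire add up) yield
# a matrix-vector product, with no digital multiplier in sight:
I = G @ V
print(I)  # [3.6  3.15] amperes
```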

Is the brain analog or digital? That’s a difficult question. On the one hand you could say that the brain works with continuous currents in a continuous space, so that’s analog. On the other hand, threshold effects can turn on and off suddenly and basically make continuous input discrete. And the currents in the brain are ultimately subject to quantum mechanics, so maybe they’re partly discrete.

But your brain is not a good place for serious quantum computing. For one thing, that’s because it’s too busy trying to remember how many seasons of Doctor Who there are just in case anyone stops you on the street and asks. But more importantly it’s because quantum effects get destroyed too easily. They don’t survive in warm and wiggly environments. It is possible that some neurological processes require quantum effects, but just how much is currently unclear, I’ll come back to this later.

Personally I would say that the distinction that the brain isn’t digital whereas typical computers that we currently use are, isn’t particularly meaningful. The reason we currently mostly use digital computers is because the discrete data prevent errors and the working of the machines is highly reproducible.

Saying that a computer is an electronic device whereas the brain isn’t, seems to me likewise a distinction that we make in every-day language, alright, but that isn’t operationally relevant. For one thing, the brain also uses electric signals, but more importantly, I think when we wonder what’s the difference between a brain and a computer we really wonder about what they can do and how they do it, not about what they’re made of or how they are made.

So let us therefore look a little closer at what brains and computers do and how they do it, starting with the latter: What’s the difference between how computers and brains do their thing?

Computers outperform humans in many tasks, for example just in doing calculations. This is why my grandmother used those tables and slide rules. We can do calculations if we have to, but it takes a long time and it’s tedious, and it’s pretty clear that human brains aren’t all that great at multiplying 20-digit numbers.

But hey, we did manage to build machines that can do these calculations for us! And along the way we discovered electricity and semi-conductors and programming and so on. So in some sense, you could say, we actually did learn to do those calculations. Just not with our own brains, because those are tired from memorizing facts about Doctor Who. But in case you are good at multiplying 20-digit numbers, you should totally bring that up at dinner parties. That way, you’ll finally have something to talk about.

This example captures the key difference between computers and human brains. The human brain took a long time to evolve. Natural selection has given us a multi-tasking machine for solving problems, a machine that’s really good at adapting to new situations with new problems. Present-day computers, by contrast, are built for very specific purposes and that’s what they’re good at. Even neural nets haven’t changed all that much about this specialization.

Don’t get me wrong, I think artificial intelligence is really interesting. There’s a lot we can do with it, and we’ve only just scratched the surface. Maybe one day it’ll actually be intelligent. But it doesn’t work like the human brain.

This is for several reasons. One reason is what we already mentioned above, that in the human brain the neural structure is physical whereas in a neural net it’s software coded on another physical basis.

But this might change soon. There are some companies which are producing computer chips similar to neurons. The devices made of them are called “neuromorphic computers”. These chips have “neurons” that fire independently, so they are not synchronized by a clock, like in normal processors. An example of this technology is Intel’s Loihi 2 which has one million “neurons” interconnected via 120 million synapses. So maybe soon we’ll have computers with a physical basis similar to brains. Maybe I’ll finally be able to switch mine for one that hasn’t forgotten why it went to the kitchen by the time it gets there.

Another difference which may soon fade away is memory storage. At present, memory storage works very differently for computers and brains. In computers, memories are stored in specific places, for example your hard drive, where electronic voltages change the magnetization of small units called memory cells between two different states. You can then read it out again or override it, if you get tired of Kate Bush.

But in the brain, memories aren’t stored in just one place, and maybe not in places at all. Just exactly how we remember things is still subject of much research. But we know, for example, that motor memories like riding a bike use brain regions called the basal ganglia and cerebellum. Short-term working memory, on the other hand, heavily uses the prefrontal cortex. Then again, autobiographical memories from specific events in our lives use the hippocampus and can, over the course of time, be transferred to the neocortex.

As you see, memory storage in the brain is extremely complex and differentiated, which is probably why mine sometimes misplaces the information about why I went into the kitchen. And not only are there many different types of memory, it’s also that neurons both process and store information, whereas computers use different hardware for both.

However, on this account too, researchers are trying to make computers more similar to brains. For example, researchers from the University of California in San Diego are working on something called memcomputers, which combine data processing and memory storage in the same chip.

Maybe more importantly, the human brain has much more structure than the computers we currently use. It has areas which specialize in specific functions. For example, the so-called Broca's area in the frontal lobe specializes in language processing and speech production; the hypothalamus controls, among other things, body temperature, hunger, and the circadian rhythm. We are also born with certain types of knowledge already, for example a fear of dangerous animals like spiders, snakes, or circus clowns. We also have brain circuits for stereo vision. If your eyes work correctly, your brain should be able to produce 3-d information automatically; it’s not like you have to first calculate it and then program your brain.

Another example of pre-coded knowledge is a basic understanding of natural laws. Even infants understand, for example, that objects don’t normally just disappear. We could maybe say it’s a notion of basic locality. We’re born with it. And we also intuitively understand that things which move will take some time to come to a halt. The heavier they are, the longer it will take. So, basically Newton’s laws. They’re hardwired. The reason for this is probably that it benefits survival if infants don’t have to learn literally everything from scratch. I was upset to learn, though, that infants aren’t born knowing Gödel’s theorem. I want to talk to them about it, and I think nature needs to work on this.

That some of our knowledge is pre-coded into structure is probably also partly the reason why brains are vastly more energy efficient than today’s supercomputers. The human brain consumes on average 20 watts, whereas a supercomputer typically consumes a million times as much, sometimes more.

For example, Frontier, hosted at the Oak Ridge Leadership Computing Facility and currently the fastest supercomputer in the world, consumes 21 MW on average and 29 MW at peak performance. To run the thing, they had to build a new power line and a cooling system that pumps around 6000 gallons of water per minute. For those of you who don’t know what a gallon is, that’s a lot of water. The US Department of Energy is currently building a new supercomputer, Aurora, which is expected to become the world’s fastest computer by the end of the year. It will need about 60 MW.

Again the reason that the human brain is so much more efficient is almost certainly natural selection, because saving energy benefits survival. Which is also what I tell my kids when they forget to turn the lights off when leaving a room.

Another item we can add to the list of differences is that the brain adapts and repairs itself, at least to some extent. This is why, if you think about it, brains are much more durable than computers. Brains work reasonably well for 80 years on average, sometimes as long as 120 years. No existing computer would last remotely as long. One particularly mind-blowing case (no pun intended) is that of Carlos Rodriguez, who had a bad car accident when he was 14. He had stolen the car, was on drugs, and crashed head first. Here he is in his own words.

Not only did he survive, he is in reasonably good health. Your computer is less likely to survive a crash than you, even if it remembered to wear its seatbelt. Sometimes it just takes a single circuit to fail and it’ll become useless. Supercomputing clusters need to be constantly repaired and maintained. A typical supercomputer cluster has more than a hundred maintenance stops a year and requires a staff of several hundred people. Just to keep working.  

To name a final difference between the ways that brains and computers currently work: brains are still much better at parallel processing. The brain has about 80 billion neurons, and each of them can process more than one thing at a time. Even for so-called massively parallel supercomputers these numbers are still science fiction. The current record for parallel processing is the Chinese supercomputer Sunway TaihuLight. It has 40,960 processing modules, each with 260 processor cores, which means a total of 10,649,600 processor cores! That’s of course very impressive, but still many orders of magnitude away from the 80 billion that your brain has. And maybe it would have 90 billion if you stopped wasting all your time watching Doctor Who.

So those are some key differences between how brains and computers do things, now let us talk about the remaining point, what they can do.

Current computers, as we’ve seen, represent everything in bits, but not everything we know can be represented this way. It’s impossible, for example, to write down the number pi or any other irrational number as a finite sequence of bits. This means that not even the best supercomputer in the world can compute the area of a circle of radius 1 exactly; it can only approximate it. If we wanted to get pi exactly, it would take an infinite amount of time, like me trying to properly speak English. Fun fact: The current record for calculating digits of pi is 62.8 trillion digits.

But even though we can’t write down all the digits of pi, we can work with pi. We do this all the time. Though, just among us, it isn’t all that uncommon for theoretical physicists to set pi equal to 1.

In any case, we can deal with pi as an abstract transcendental number, whereas computers are constrained to finitely many digits. So this looks like the human brain can do something that computers can’t.

However, this would be jumping to conclusions. The human brain can’t hold all the digits of pi any more than a computer can. We just deal with pi as a mathematical definition with certain properties. And computers can do the same. With suitable software they are capable of abstract reasoning just like we are. If you ask your computer software if pi is a rational number it’ll hopefully say no. Unless it’s kidding, in which case maybe you can think of a more interesting conversation to have with it.
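For instance, symbolic-mathematics software treats pi as an exact object with known properties rather than as a truncated decimal. A quick sketch with the SymPy library:

```python
import sympy

# pi as an exact symbolic constant, not a finite string of bits
print(sympy.pi.is_rational)  # False -- the software "knows" this property
print(sympy.sin(sympy.pi))   # 0, exactly, not 1.2e-16
print(sympy.pi * 2 / 2)      # pi -- exact arithmetic with the abstract object
print(sympy.N(sympy.pi, 50)) # a 50-digit approximation, only on demand
```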

This brings us to an argument that Penrose has made, that human thought can’t be described by any computer algorithm. Penrose’s argument is basically this. Gödel showed that any sufficiently complex set of mathematical axioms can be used to construct statements which are true, but their truth is unprovable within that system of axioms. The fact that we can see the truth of any Gödel sentence, by virtue of Gödel’s theorem, tells us that no algorithm can beat human thought.

Now, if you look at all that we know about classical mechanics, then you can capture this very well in an algorithm. Therefore, Penrose says, quantum mechanics is the key ingredient for human consciousness. It’s not that he says consciousness affects quantum processes. It’s rather the other way round, quantum processes create consciousness. According to Penrose, at least.

But does this argument about Gödel’s theorem actually work? Think back to what I said earlier, computers are perfectly capable of abstract reasoning if programmed suitably. Indeed, Gödel’s theorem itself has been proved algorithmically by a computer. So I think it’s fair to say that computers understand Gödel’s theorem as much or as little as we do. You can start worrying if they understand it better.

This leaves open the question of course whether a computer would ever have been able to come up with Gödel’s proof to begin with. The computer that proved Gödel’s theorem was basically told what to do. Gödel wasn’t. Tim Palmer has argued that indeed this is where quantum mechanics becomes relevant.

By the way, I explain Penrose’s argument about Gödel’s theorem and consciousness in more detail in my new book Existential Physics. The book also has interviews with Roger Penrose and Tim Palmer.

So let’s wrap up. Current computers still differ from brains in a number of ways. Notably it’s that the brain is a highly efficient multi-purpose apparatus whereas, in comparison, computers are special purpose machines. The hardware of computers is currently very different from neurons in the brain, memory storage works differently, and the brain is still much better at parallel processing, but current technological developments will soon allow building computers that are more similar to brains in these regards.

When it comes to the question if there’s anything that brains can do which computers will not one day also be able to do, the answer is that we don’t know. And the reason is, once again, that we don’t really understand quantum mechanics.

Saturday, July 23, 2022

Does the Past Still Exist?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


One of the biggest mysteries of our existence is also one of the biggest mysteries of physics: time. We experience time as passing, with a special moment that we call “now”. Now you’re watching this video, half an hour ago you were doing something else. Whatever you did, there’s no way to change it. And what you will do in half an hour is up to you. At least that’s how we perceive time. 

But what physics tells us about time is very different from our perception. The person who figured this out was none other than Albert Einstein. I know. That guy again. Turns out he kind of knew it all. What did Einstein teach us about the past, the present, and the future? That’s what we’ll talk about today.

The topic we’re talking about today is covered in more detail in my new book “Existential Physics”, which will be published in August. You find more info about the book at existentialphysics.com

We think about time as something that works the same for everyone and every object. If one second passes for me, one second passes for you, and one second passes for the clouds above. This makes time a universal parameter. This parameter labels how much time passes and also what we all mean by “now”.

Hermann Minkowski was the first to notice that this may not be quite right. He noticed that Maxwell’s equations of electrodynamics make much more sense if one treats time as a dimension, not as a parameter. Just like a ball doesn’t change if you rotate one direction of space into another, Maxwell’s equations don’t change if you rotate one direction of space into time.

So, Minkowski said, we just combine space with time into a four-dimensional space-time, and then we can rotate space into time just like we can rotate two directions of space into each other. And that naturally explains why Maxwell’s equations have the symmetry they do have. It doesn’t have anything to do with electric and magnetic fields, it comes from the properties of space and time themselves.

I can’t draw a flower, let alone four dimensions, but I can just about manage two straight lines, one for time and the other for at least one dimension of space. This is called a space-time diagram. If you just stand still, then your motion in such a diagram is a straight vertical line. If you move at a constant velocity, your motion is a straight line tilted at some angle. So if you change velocity, you rotate in space-time. The maximal velocity at which you can move is the speed of light, which by convention is usually drawn at a 45-degree angle.

In space we can go forward-backward, left right, or up down. In time we can only go forward, we can’t make a u-turn, and there aren’t any driveways for awkward three-point turns either. So time is still different from space in some respect. But now that time is also a dimension, it’s clear that it’s just a label for coordinates, there’s nothing universal about it. There are many ways to put labels on a two-dimensional space because you can choose your axes as you want. The same is the case now in space-time. Once you have made time into a dimension, the labels on it don’t mean much. So what then is the time that we talk about? What does it even mean that time is a dimension? Do other dimensions exist? Supernatural ones? That could explain the strange sounds you’ve been hearing at night? No. That's a separate problem I'm afraid I can't help you with.

It was Albert Einstein who understood what this means. If we also want to understand it, we need four assumptions: the speed of light in vacuum is finite, it’s always the same, nothing can go faster than the speed of light, and all observers’ viewpoints are equally valid. This formed the basis of Einstein’s theory of Special Relativity. Oh, and also, the observers don’t have to exist. I mean, this is theoretical physics, so we’re talking about theoretical observers, basically. So, if there could be an observer with a certain viewpoint, then that viewpoint is just as valid as yours.

Who or what is an observer? Is an ant an observer? A tree? How about a dolphin? What do you need to observe to deserve being called an observer and what do you have to observe with? Believe it or not, there’s actually quite some discussion about this in the scientific literature. We’ll side-step this, erm, interesting discussion and use the word “observer” the same way that Einstein did, which is a coordinate system. You see, it’s a coordinate system that a theoretical observer might use, dolphin or otherwise. Yeah, maybe not exactly what the FBI thinks an observer is, but then if it was good enough for Einstein, it’ll be good enough for us. So Einstein’s assumption basically means any coordinate system should be as good as any other for describing physical reality.  

These four assumptions sound rather innocent at first but they have profound consequences. Let’s start with the first and third: The speed of light is finite and nothing goes faster than light. You are probably watching this video on a screen, a phone or laptop. Is the screen there now? Unless you are from the future watching this video as a hologram in your space house, I'm going to assume the answer is yes. But a physicist might point out that actually you don’t know. Because the light that’s emitted from the screen now hasn’t reached you yet. Also if you are from the future watching this as a hologram, make sure to look at me from the right. It’s my good side.

Maybe you hold the phone in your hand, but nerve signals are ridiculously slow compared to light. If you couldn’t see your hand and someone snatched your phone, it’d take several microseconds for the information that the phone is gone to even arrive in your brain. So how do you know your phone is there now?

One way to answer this question is to say, well, you don’t know, and really you don’t know that anything exists now, other than your own thoughts. I think, therefore I am, as Descartes summed it up. This isn’t wrong – I’ll come back to this later – but it’s not how normal people use the word “now”. We talk about things that happen “now” all the time, and we never worry about how long it takes for light to travel. Why can’t we just agree on some “now” and get on with it? I mean, think back to that space-time diagram. Clearly this flat line is “now”, so let’s just agree on this and move on.

Okay, but if this is to be physics rather than just a diagram, you have to come up with an operational procedure to determine what we mean by “now”. You have to find a way to measure it. Einstein did just that in what he called a Gedankenexperiment, a “thought experiment”.

He said, suppose you place a mirror to your right and one to your left. You and the mirrors are at a fixed distance from each other, so in the space-time diagram it looks like this. You send one photon left and one right, and make sure that both photons leave you at the same time. Then you wait to see whether the photons come back at the same time. If they don’t, you adjust your position until they do.

Now remember Einstein’s second assumption, the speed of light is always the same. This means if you can send photons to both mirrors and they come back at the same time, then you must be exactly in the middle between the mirrors. The final step is then to say that at exactly half the time it takes for the photons to return, you know they must be bouncing off the mirror. You could say “now” at the right moment even though the light from there hasn’t reached you yet. It looks like you’ve found a way to construct “now”.

But here’s the problem. Suppose you have a friend who flies by at some constant velocity, maybe in a space-ship. Her name is Alice, she is much cooler than you, and you have no idea why she's agreed to be friends with you. But here she is, speeding by in her space-ship left to right. As we saw earlier, in your space-time diagram, Alice moves on a tilted straight line. She does the exact same thing as you, places mirrors to both sides, sends photons and waits for them to come back, and then says when half the time has passed that’s the moment the photons hit the mirrors.

Except that this clearly isn’t right from your point of view. Because the mirrors to her right are in the direction of her flight, the light takes longer to get there than it does to the mirrors on the left, which move towards the light. You would say that the photon which goes left clearly hits the mirror first, because the mirror’s coming at it. From your perspective, she just doesn’t notice, because when the photons go back to Alice, the exact opposite happens. The photon coming from the left takes longer to get back, so the net effect cancels out. What Alice says happens “now” is clearly not what you think happens “now”.
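You can put numbers to this. In your frame the light chasing the right mirror needs a time of L/(c−v) to catch up with it, while the light heading for the approaching left mirror only needs L/(c+v). A small sketch (the speed and distance are made up):

```python
c = 299_792_458.0  # speed of light in m/s
v = 0.6 * c        # Alice's speed relative to you (made up)
L = 10.0           # distance from Alice to each mirror in your frame, meters

# Times for the photons to reach the mirrors, judged from your frame:
t_right = L / (c - v)   # this mirror runs away from the light
t_left  = L / (c + v)   # this mirror moves toward the light
print(t_right, t_left)  # ~8.3e-8 s vs ~2.1e-8 s: not simultaneous for you

# On the way back the asymmetry reverses, so both round trips take
# equally long and Alice notices nothing:
print(t_right + t_left)  # round-trip time of either photon
```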

For Alice on the other hand, you are the one moving relative to her. And she thinks that her notion of “now” is right and yours is wrong. So who is right? Probably Alice, you might say. Because she’s much cooler than you. She owns a spaceship, after all. Maybe. But let’s ask Einstein.

Here is where Einstein’s fourth assumption comes in. The viewpoints of all observers are equally valid. So you’re both right. Or, to put it differently, the notion of “now” depends on the observer, it is “observer-dependent” as physicists say. Your “now” is not the same as my “now”. If you like technical terms, this is also called the relativity of simultaneity.

These mismatches in what different observers think happens “now” are extremely tiny in every-day life. They only become noticeable when relative velocities are close to the speed of light, so we don’t normally notice them. For someone driving past at 100 kilometers per hour, for example, the “nows” at two points a kilometer apart differ by only about a third of a picosecond. If you and I talk about who knocked at the door right now, we won’t misunderstand each other. If we zipped around at nearly the speed of light, however, referring to “now” would get very confusing.

This is pretty mind-bending already, but wait, it gets wilder. Let us have a look at the space-time diagrams again. Now let us take any two events that are not causally connected. This just means that if you wanted to send a signal from one to the other, the signal would have to go faster than light, so signaling from one to the other isn’t possible. Diagrammatically, this means that the line connecting the two events makes an angle of less than 45 degrees with the horizontal.

The previous construction with the mirrors shows that for any two such events there is always some observer for whom those two events happen at the same time. You just have to imagine that the mirrors fly through the events, with the observer directly in the middle, and then you adjust the observer’s velocity until the photons bounce off the mirrors exactly at those two events.
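You can check this with the Lorentz transformation above: demanding $\Delta t' = 0$ gives

$$v = \frac{c^2\,\Delta t}{\Delta x}.$$

And because the two events are causally disconnected, $|\Delta x| > c\,|\Delta t|$, this velocity obeys $|v| < c$, so such an observer always exists.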

Okay, so any two causally disconnected events happen simultaneously for some observer. Now take any two events that are causally connected. Like eating too much cheese for dinner and then feeling terrible the morning after. Find some event that isn’t causally connected to either. Let’s say this event is a supernova going off in a distant galaxy. There are then always observers for whom the supernova and your cheese dinner are simultaneous, and there are observers for whom the supernova and your morning after are simultaneous.

Let’s then put all those together. If you are comfortable with saying that something, anything, exists “now” which isn’t here, then, according to Einstein’s fourth assumption, the same must hold for all observers. But if all the events that you think happen “now” exist, then so do all the events that other observers say happen at the same time as those events, and so on, observer after observer, until every event is covered. Then all events exist. Another way to put it is that all times exist in the same way.

This is called the “block universe”. It’s just there. It doesn’t come into being, it doesn’t change. It just sits there.

If you find that somewhat hard to accept, there is another possibility to consistently combine a notion of existence with Einstein’s Special Relativity. All that I just said came from assuming that you are willing to say something exists now even though you can’t see or experience it in any way. If you are willing to say that only things exist which are now and here, then you don’t get a block universe. But maybe that’s even more difficult to accept.

Another option is to simply invent a notion of “existence” and define it by picking a particular slice of space-time for each moment in time. This is called a “slicing”, though unfortunately it has nothing to do with pizza. If such a slicing had any observable consequences, that would contradict Einstein’s fourth assumption. It would then be in conflict with Special Relativity, and since this theory is experimentally extremely well confirmed, the idea would almost certainly be in conflict with observation, too. But if you just want to define a “now” that doesn’t have observable consequences, you can do that. Though I’m not sure why you would want to.

Quantum mechanics doesn’t change anything about the block universe because it’s still compatible with Special Relativity. The measurement update of the wave-function, which I talked about in this earlier video, happens faster than the speed of light. If it could be observed, you could use it to define a notion of simultaneity. But it can’t be observed, so there’s no contradiction.

Some people have argued that since quantum mechanics is indeterministic, the future can’t already exist in the block universe, and that therefore there must also be a special moment of “now” that divides the past from the future. And maybe that is so. But even if that was the case, the previous argument still applies to the past. So, yeah, it’s true. For all we currently know, the past exists the same way as the present.

Saturday, July 16, 2022

How do painkillers work?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Have you ever taken an Aspirin? Advil? Paracetamol, or Tylenol for the Americans. Most of you probably have. But do you know what the difference between them is? What is pain to begin with, where does it come from, how do painkillers work, and why is Sabine suddenly talking about pharmacology? That’s what we’ll talk about today.

Pain is incredibly common. According to a 2019 survey in the United States, almost 60 percent of adults had experienced physical pain in the three months prior to the survey. I’d have guessed that stubbing your toe was the most frequent one, but it’s actually back pain with 39 percent. The numbers in the European Union are similar. The healthcare costs for chronic pain disorders in the European Union alone have been estimated to exceed 400 billion dollars annually. Pain is such a universal problem that the United Nations say access to pain management is a human right.

But just what do we mean by pain? The International Association for the Study of Pain defines it as “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.” You probably don’t have to be told it’s unpleasant. But this definition tells you that the “unpleasant experience that accompanies tissue damage” is not always caused by actual tissue damage. We’ll talk about this later, but first we’ll talk about the most common cause of pain.

In most cases, pain is a signal that’s picked up by receptors in some part of the body, and from there it’s sent to your brain. So there are three parts involved: the receptor, a long transmission channel that goes to the brain, and the brain itself. The most common cause of pain is that the pain receptors, which are called nociceptors, are triggered by cell damage.

What is pain good for? A clue comes from people who can’t feel pain. This is caused by rare genetic mutations that stop pain receptors or their transmission from working. It affects about 1 in 25 thousand people. Infants with this condition may try to chew off their tongue, lips, or fingers, and later accumulate bruises and broken bones. So pain is uncomfortable, but it is actually good for something. It’s a warning signal that teaches you not to do some things. Still, sometimes you’d rather not have it, so let’s talk about ways to get rid of pain.

The most straightforward way to remove pain is with local and regional anesthetics. Those are the ones you get at the dentist, but you also find them in lower doses in some creams. They take effect only at the place where you apply them, and they wear off as the body carries away and breaks down the substance.

Their names usually end in –caine. Like benzocaine, novocaine, and also cocaine. Yes, cocaine is a local anesthetic, and it had quite an interesting history, even before it ran Wall Street in the 80s. Rohin did a great video about this. There are some exceptions to the nomenclature, such as naturally occurring anesthetics, including menthol.

Local anesthetics prevent the pain signal from being created by changing the distribution of electric charges in cells. Cells use a difference in electric charges to create a signal. Normally, the outside of a nerve ending is slightly positively charged relative to the inside. With the right environmental trigger, channels open in the cell membrane and let positive charges flow inside. A local anesthetic blocks those channels, so the pain receptors can’t raise the alarm. But since *all nerve cells work this way, a local anesthetic doesn’t just take away the pain. It takes away all sensation. So the body part it’s applied to will feel completely numb.

This isn’t a good solution for any extended duration of time, which brings us to ways to stop the pain specifically and leave other sensation intact. Drugs which do that are called analgesics. To understand how they work, we’ll need a little more detail on what cell damage does.

When cells are damaged they release a chemical called arachidonic acid, which is then converted by certain enzymes into a type of prostaglandin. If someone had stopped me on the street last week and asked what prostaglandin is, I might have guessed it’s a small country in Europe. Turns out it’s yet another chemical that flows in your blood and it’s the one that causes swelling and redness. It also lowers the pain threshold. This means that with the prostaglandin around, the pain receptors fire very readily. And since the swelling can push on the pain receptors, they might fire a lot. So, well, it’ll hurt.

But the prostaglandin itself doesn’t live long in the body. It falls apart in about 30 seconds. This tells us that one way to stop the pain is to knock out the enzymes that create prostaglandin. This is what the most common painkillers do. They are called “nonsteroidal anti-inflammatory drugs”, NSAIDs for short. Some painkillers in this class are: ibuprofen, which is sold under brand names like Advil, Anadin, or Nurofen; acetylsalicylic acid, which you may know as Aspirin; diclofenac, which is the active ingredient in Voltaren and Dicloflex; and so on. I guess at some point pretty much all of us have awkwardly asked around for one of those.

How do they work? Magic. Thanks for watching. No wait. I’m just a physicist, but I’ll do my best. The issue is, every article I read on biomedicine seems to come down to saying there’s this thing which fits to that thing, and NSAIDs are no exception. The enzymes which they block come in two varieties, called COX-1 and COX-2. The painkillers work by latching onto surface structures of those COX enzymes, which prevents the enzymes from doing their job. This means less prostaglandin is produced and the pain threshold goes back to normal. They don’t entirely turn pain off, like local anesthetics do, but you don’t feel pain that easily. And unlike local anesthetics, they work only on the pain receptors, not on other receptors.

Most of these painkillers block the COX enzymes temporarily and then fall off. Within a few hours, they’re usually out of the system and the pain comes back. The exception is Aspirin, which latches onto the COX enzymes and disables them permanently. The body has to produce new ones, which takes time. This is why it takes much longer for the effects of Aspirin to wear off, up to 10 days. Seriously. Aspirin is the weird cousin of the painkillers. The one that doesn’t talk at family reunions but rearranges your bookshelf by color, and 10 days later you haven’t fully recovered from that.

And of course there are side-effects. NSAIDs have a number of side effects in common, because the type of prostaglandin which they block is necessary for some other things, which they therefore also inhibit. For example, you need it to keep up the protective lining of the stomach and the rest of the digestive system. So long-term use of NSAIDs can cause stomach ulcers and internal bleeding. I know someone who took Aspirin regularly for years and had a stomach rupture. He survived, but it was a *very close call. So, well, be careful.

NSAIDs also increase the risk of cardiovascular events. This doesn’t mean cardiologists will hold more conferences, though this risk also exists. No, cardiovascular events are things like strokes and heart attacks. If you wonder just how much NSAIDs increase the risk, that depends on just exactly which one you’re talking about. This table (3) lists the risk ratio for some common NSAIDs compared to a placebo. The relevant thing to pay attention to is that the numbers are almost all larger than 1. Not dramatically larger, but noticeably larger.

About 20 years ago research suggested that most of the adverse effects of NSAIDs come from blocking the COX-1 enzyme, while the effects that stop the pain come from the COX-2 enzyme. This is why several companies developed drugs to inhibit just the COX-2 enzyme, unlike traditional NSAIDs that block both versions.

These drugs are known as COXIBs. In theory, they should have been as effective at killing pain as traditional NSAIDs, but cause fewer problems with the digestive system. In practice, most of them were swiftly withdrawn from the market because they increased the risk of heart attacks even more than traditional NSAIDs. Some of them are still available, because their risks are comparable to those of traditional NSAIDs.

NSAIDs are probably the most widely used over-the-counter painkillers, and as you’ve seen, scientists understand quite well how they work. Another widely used painkiller, however, has remained somewhat of a mystery: acetaminophen. In the US it’s sold under the brand name Tylenol; in Europe the same substance is called paracetamol.

And this brings me to the true reason I’m making this video. It’s because my friend Tim Palmer, the singing climate physicist, told me this joke. “Why is there no aspirin in the jungle? Because the paracetamol.” I didn’t understand this. And since I had nothing intelligent to say, I gave him a lecture about the difference between aspirin and paracetamol which eventually turned into this video. In case you also didn’t understand his joke, I’ll explain it later.

So what is the difference between NSAIDs and acetaminophen? As far as pain relief is concerned, they’re kind of similar. Acetaminophen has the advantage that it’s easier on the digestive system. On the flipside, acetaminophen has a narrow therapeutic window, meaning the difference between the dose at which it has the desired effect and the dose at which it’s seriously toxic is small, a factor of ten or so. Now take into account that acetaminophen is often added to other drugs, like cough syrup or drugs that help with menstrual cramps, and it becomes easy to accidentally overdose, especially in children. This is why childproof pill containers are so useful. Your children will not be able to open the pill bottle! And sometimes neither will you.

Acetaminophen is currently the most common cause of drug overdoses in the United States, the United Kingdom, Australia, and New Zealand. Indeed, it’s one of the most common causes of poisoning worldwide. Since it’s removed from the body through the liver, the biggest risk is liver damage. In the developed world, acetaminophen overdose is currently the leading cause of acute liver failure.

Of course you generally shouldn’t mix medicine with alcohol, never ever, but especially not acetaminophen. You’ll get drunk very quickly, get a huge hangover, and run the risk of liver damage. Guess how I know. I had a friend who was doing all kinds of illegal drugs who said he doesn’t touch paracetamol – tells you all you need to know.

We’ve been talking about the pain receptors, but remember that there are three parts involved in pain: the receptor, the nerve wiring, and the brain. All of them can be a cause of pain. When pain is caused by damage to the nervous system, it’s called neuropathic pain. This type of pain often doesn’t respond to over-the-counter drugs. It can be caused by direct nerve damage, but also by chemotherapy, diabetes, and other conditions. The American National Academy of Sciences has estimated that in the United States neuropathic pain affects as much as one in three adults and leads to an annual loss of productivity exceeding 400 billion US dollars. That’s enough money to build a factory and make your own painkillers – on Mars!

Neuropathic pain and other pain that doesn’t go away with over-the-counter drugs is often treated with opioids. What are opioids and how do they work? Opioids are substances that were originally derived from poppies but can now be produced synthetically. They come in a large variety: morphine, codeine, oxycodone, heroin, fentanyl, and so on. These don’t all work exactly the same way, but the basic mechanism is more or less the same. I’m afraid the explanation is again pretty much that this thing fits to that thing.

The nervous system is equipped with receptors that opioids fit to; they are called – drumroll please – opioid receptors. These receptors are normally occupied by endorphins, substances the human body produces, among other things, to regulate pain. Opioids fit very well into those receptors and occupy them efficiently and for long periods of time, which suppresses the pain signal. So this is a very powerful way to reduce pain.

But opioids do a lot of other things in the human body, so there are side-effects. For one thing, opioids also suppress the release of noradrenaline, which is a hormone that among other things controls digestion, breathing, and blood pressure. Consequently, opioids can cause constipation or, in high doses, decrease heart and breathing rates to dangerously low levels. And, I mean, I’m not a doctor, but this doesn’t really sound good.

Opioids also act in the brain, where they trigger the release of dopamine. Dopamine is often called the “feel good hormone” and that’s exactly what it does: it makes you feel good. That in and of itself isn’t all that much of a problem; the bigger problem is that the body adapts to the presence of opioids. Exactly what happens isn’t entirely clear, but probably the body decreases the number of opioid receptors and increases the number of receptors for the neurotransmitters that were suppressed. The consequence is that over time you have to increase the opioid dose to get the same results, to which the body adapts again, and so on. It’s a vicious cycle.

When you suddenly stop taking opioids, the number of many hormone receptors isn’t right. It takes time for the body to readjust and that causes a number of withdrawal symptoms, for example an abnormally high heart rate, muscle and stomach aches, fever, vomiting, and so on.

To keep opioid withdrawal symptoms manageable, the CDC recommends reducing the dose slowly. If you’ve been taking opioids for more than a year, they say to reduce by no more than 10% per month. If you’ve been taking them for a few weeks or months, they recommend a 10% reduction per week. I’ll leave you a link to the CDC guide for how to safely get off opioids in the info below the video.
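Just to illustrate what such a taper looks like arithmetically, here is a minimal sketch in Python. It is not medical advice; the starting dose, the number of months, and the choice to take 10% of the current dose at each step are illustrative assumptions, not part of the CDC guideline.

# Minimal sketch of a "reduce by 10% per month" schedule (illustrative only).
# Whether the 10% refers to the original or the current dose is a detail of
# the actual guideline; here we assume the current dose.
dose = 100.0  # hypothetical starting dose, arbitrary units
for month in range(1, 7):
    dose *= 0.90  # cut 10% of the current dose
    print(f"After month {month}: {dose:.1f} units")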

There are a number of other painkillers that don’t fall into either of these categories. Going through them would be rather tedious, but I want to briefly mention cannabis, which has recently become increasingly popular for self-treatment of pain. A meta-study published last year in the British Medical Journal looked at 32 trials involving over 5000 patients who took cannabis for periods ranging from a month up to half a year. They found that the effect of pain relief does exist, but it’s small.

Let’s then talk about the third body part that’s involved in pain, which is the brain. The brain plays a huge role in our perception of pain, and scientists are only just beginning to understand this.

A particularly amazing case was reported in the British Medical Journal in 1995. A 29-year-old construction worker was rushed to the emergency department of a hospital in Leicester. He’d jumped onto a 6-inch nail that had gone through the sole of his boot. This is an actual photo from the incident. The smallest movement of the nail was so painful that he was sedated with fentanyl and midazolam. The doctors pulled out the nail, took off the boot, and saw that the nail had gone between the toes. The foot was entirely uninjured. He felt pain not because he actually had an injury, but because his brain was *convinced he had an injury. This is called somatic amplification.

The opposite effect, somatic deamplification, also happens. Take for example this other accident that happened to another construction worker. They aren’t paid enough, these guys. This 23-year-old man from Denver had somewhat blurry vision and a toothache, so he went to a dentist. The dentist took an x-ray and concluded that the likely source of the toothache was that the man had a 4-inch nail in his skull. He’d probably accidentally shot himself with a nail gun but didn’t notice. Part of the reason he wasn’t in more pain was probably that he just didn’t know he had a nail in his head.

Severe pain also changes the brain. It activates a brain region called the hypothalamus, which reacts by increasing the levels of several hormones, for example cortisol and pregnenolone. This affects all kinds of things, from blood sugar levels to fat metabolism to memory functions. The body is simply unable to produce these hormones at a high level for a long time. But some of those hormones are critical to pain control, so a deficiency may enhance pain and slow down healing, and may be one of the causes of chronic pain.

Another thing that happens if some part of your body hurts is that you learn incredibly quickly not to touch or move it. This has nothing to do with the signal itself; it’s an adaptation in the brain. This adaptation, too, may have something to do with chronic pain. For example, several studies have shown that the severity of tinnitus is correlated with chronic pain, which suggests that some people are prone to developing such conditions, though the details aren’t well understood.

Indeed, scientists have only recently understood that the brain itself plays a big role in how severely we experience pain, something that can now be studied with brain scans. Consequently, some pain treatments have been proposed that target neither pain receptors nor the nervous system, but the brain response to the signals.

For example, there is the idea of audioanalgesia, that’s trying to reduce pain by listening to white noise or music. Or electroanalgesia, which uses electricity to interfere with the electric currents of pain signals. And some people use hypnosis to deal with pain. Does this actually work? We haven’t looked into it, but if you’re interested let us know in the comments and we’ll find out for you.

Saturday, July 09, 2022

Quantum Games -- Really!

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


It’s difficult to explain quantum mechanics with words. We just talked about this the other day. The issue is, we simply don’t have the words to describe something that we don’t experience. But what if you could experience quantum effects? Not in the real world, but at least in a virtual world, in a computer game. Wait, there are games for quantum mechanics? Yes, there are, and better still, they are free. Where do you get these quantum games and how do they work? That’s what we’ll talk about today.

We’ll start with a game that’s called “Escape Quantum” which you can play in your browser.

“You find yourself in a place of chaos. Whether it’s a dream or an illusion escapes you as the shiny glint of a key catches your eye. A goal, a prize, an escape, whatever it means for you takes hold in your mind as your body pushes you forward, into the unknown.”

Alright. Let’s see.

Escape Quantum is an adventure puzzle game where you walk around and have to find keys and cards to unlock doors. The navigation works by keyboard and is pretty simple and straightforward. The main feature of the game is to introduce you to the properties of measurement in quantum mechanics: if you don’t watch an object, its wave-function can spread out, and the next time you look, it may be in a different place.

So sometimes you have to look away from something to make it change place. And if there’s something you don’t want to change place, you have to keep looking at it. At times this game can be a bit frustrating because much of it is dictated by random chance, but then that’s how it goes in quantum mechanics. Once you learn the principles the game can be completed quickly. Escape Quantum isn’t particularly difficult, but it’s both interesting and fun.

Another little game we tried is called Quantum Playground, which also runs in your browser.

Yes, hello. What you do here is that you click on some of those shiny spheres to initialize the position of a quantum particle. You can initialize several of them together. Then you click the button down here, which solves the Schrödinger equation with those initial conditions, and you can see what happens to the initial distribution. You can then click somewhere to make a measurement, which will suddenly collapse the wave-function, and the particle will be back in one place.
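If you want to see the same physics outside the browser, here is a minimal sketch in Python (assuming numpy is available) of a free wave packet spreading under the Schrödinger equation, followed by a simulated measurement that collapses it to one place. This is a generic illustration, not the game’s own code.

import numpy as np

# Grid for a particle on a line (units with hbar = m = 1).
N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)   # wave numbers for the FFT

# Initial state: a Gaussian wave packet, normalized.
psi = np.exp(-x**2)
psi = psi / np.sqrt(np.trapz(np.abs(psi)**2, x))

# Free evolution: each momentum mode picks up a phase exp(-i k^2 t / 2).
t = 2.0
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

# "Measurement": sample one position from the probability distribution
# |psi|^2; the wave-function then collapses to (roughly) that place.
prob = np.abs(psi_t)**2
prob = prob / prob.sum()
measured = np.random.choice(x, p=prob)
width = np.sqrt(np.trapz(x**2 * np.abs(psi_t)**2, x))
print(f"rms width after spreading: {width:.2f}")
print(f"measured position: {measured:.2f}")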

There isn’t much gameplay in this one, but it’s a nice and simple visualization of the spread of the wave-function and the measurement process. I didn’t really understand what the little bird thing is, though.

Somewhat more gameplay is going on in the next one, which is called “Particle in a Box”. This too runs in your browser, but this time you control a character, that’s this little guy here, who can move side to side or jump up and down.

The game starts with a brief lesson about potential and kinetic energy in the classical world. You collect energy in terms of a lightning bolt and give it to a particle that’s rolling in a pit. This increases the energy of the particle and it escapes the pit. Then you can move on to the quantum world.

First you get a quick introduction. The quantum particle is trapped in a box, as the title of the game says. So it doesn’t have a definite position, but instead has a probability distribution that describes where it’s most likely to be if a measurement is made. Measurements happen spontaneously and if a measurement happens then one of these circles appears in a particular place.

You can then move on to the actual game which introduces you to the notion of energy levels. The particle starts at the lowest energy level. You have to collect photons, that’s those colorful things, with the right energy to move the particle from one energy level to the next. If you happen to run into a particle at a place where it’s being measured, that’s bad luck, and you have to start over. You can see here that when the particle changes to a higher energy level, then its probability distribution also changes. So you collect the photons until the particle’s in the highest energy level and then you can exit and go to the next room.
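For reference, the textbook formula behind those energy levels is the one for a particle of mass $m$ in a box of width $L$:

$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots$$

A photon can lift the particle from level $n$ to a higher level $m$ only if its energy matches the difference, $E_\gamma = E_m - E_n$. That’s why you have to collect photons with the right energy.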

The controls of this one are a little fiddly, but they work reasonably well. This game isn’t going to test your puzzle-solving skills or reflexes, but it does a good job of illustrating some key concepts of quantum mechanics: probability distributions, measurements, and energy levels.

The next one is called “Psi and Delta”. It was developed by the same team as “Particle in a Box” and works similarly, but this time you control a little robot that looks somewhat like BB8 from Star Wars. There’s no classical physics introduction in this one, you go straight to the quantum mechanics. Like the previous game, this one is based on two key features of quantum mechanics: that particles don’t have a definite position but a probability distribution, and that a measurement will “collapse” the wave-function and then the particle is in a particular place.

But in this game you have to do a little more. There’s an enemy robot, that’s this guy, which will try to get you, but to do so it will have to cross a series of platforms. If you press this lever, you make a measurement and the particle is suddenly in one place. If it’s in the same place as the enemy robot, the robot will take damage. If you damage it enough, it’ll explode and you get to the next level.

The levels increase in complexity, with platforms of different lengths and complicated probability distributions. Later in the game, you have to use lamps of specific frequencies to change the probability distribution into different shapes. Again, the controls can be a little fiddly, but this game has some charm. It requires a bit of good timing and puzzle solving skills too.

The next game we look at is called “Hello Quantum”, and it’s a touchscreen game that you can play on your phone or tablet. You first have to download and install it; there’s no browser version for this one, but there’s one for Android and one for iOS. The idea here is that you have to control qubit states by applying quantum gates. The qubits are either on or off, or in a state you don’t know. Quantum gates are the operations that a quantum computer computes with; they basically move around entanglement. In this game, you get an initial state and a target state that you have to reach by applying the gates.
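In case you’re curious what such gates look like mathematically, here is a minimal sketch with numpy. These are standard textbook gates, not necessarily the exact set the game uses: gates are unitary matrices acting on qubit state vectors.

import numpy as np

X = np.array([[0, 1], [1, 0]])                # NOT gate: swaps |0> and |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard: creates superpositions
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])               # two-qubit entangling gate

zero = np.array([1, 0])      # the qubit state |0>
print(X @ zero)              # -> [0, 1], i.e. |1>
print(H @ zero)              # -> [0.707, 0.707], i.e. (|0> + |1>)/sqrt(2)

# Hadamard on the first qubit, then CNOT: an entangled Bell state.
bell = CNOT @ np.kron(H @ zero, zero)
print(bell)                  # -> [0.707, 0, 0, 0.707], i.e. (|00> + |11>)/sqrt(2)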

The game tells you the minimal number of moves by which you can solve the puzzle, and encourages you to try to find this optimal solution. You’re basically learning how to engineer a particular quantum state and how a quantum computer actually computes.

The app is professionally designed and works extremely well. The game comes with detailed descriptions of the gates and the physical processes behind them, but you can play it without any knowledge of qubits, or any understanding of what the game is trying to represent, just by taking note of the patterns and how the different gates move the black and white circles around. So this works well as a puzzle game whether or not you want to dig deep into the physics.

This brings us to the last game in our little review, which is called Quantum Flytrap. This is again a game that you can play in your browser, and it’s essentially a quantum optics simulator. The red triangle is your laser source, and the green venus flytraps are the detectors. You’re supposed to get the photons from the laser to the detectors, with certain additional requirements; for example, you have to get a certain fraction of the photons to each detector.

You do this by dragging different items around and rotating them, like the mirrors and beam splitters and non-linear crystals and so on. In later levels you have to arrange mirrors to get the photons through a maze without triggering any bombs or mines.
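The key element behind those fractions is the beam splitter, which acts as a unitary matrix on the photon’s two path amplitudes. Here is a minimal sketch with numpy; the matrix below is one common 50/50 beam-splitter convention, not necessarily the one the game uses internally.

import numpy as np

BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)  # 50/50 beam splitter (one common convention)

photon = np.array([1, 0])              # photon coming in along path 0
print(np.abs(BS @ photon)**2)          # -> [0.5, 0.5]: half the photons to each detector

# Two beam splitters in a row make a Mach-Zehnder interferometer:
print(np.abs(BS @ BS @ photon)**2)     # -> [0, 1]: interference sends all photons to one detector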

A downside of this game is that the instructions aren’t particularly good. It isn’t always clear what the goal is in each level, until you fail and you get some information about what you were supposed to do in the first place. That said, the levels are fun puzzles with a unique visual style. I’ve found this to be a quite remarkable simulator. You can even use it to click together your own experiment.