Pages

Saturday, December 31, 2022

Saturday, December 24, 2022

Wednesday, December 21, 2022

Science News Dec 21



In today’s episode, we’ll talk about the recent nuclear fusion headlines, and a new result from the Webb telescope. Then we’ll have a special guest, Fraser Cain, who’ll tell us what we learned from NASA’s Artemis mission. After that, we’ll talk about remote controlled magnetic slime, why atmospheric methane levels increased during the COVID pandemic, non-fogging glasses, algae that might replace beef, the toughest material on earth, self-organised nanobots, dark photons. And of course, the telephone will ring.

Transcript, references, and discussion on Patreon.

Wednesday, December 14, 2022

Science News Dec 14



Today we’ll talk about the 50-year anniversary of Apollo’s blue marble, the Square Kilometre Array, talking to robots, the Yellowstone volcano, nanoparticles that help with carbon capture, a forest bubble – on Mars, a new method of spacecraft propulsion, tests for cancer, earthquake tracking from space, and of course, the telephone will ring.

Transcript, references, and discussion on Patreon.

Saturday, December 10, 2022

Wednesday, December 07, 2022

Science News Dec 7



Today we’ll talk about zinc batteries, regulations for Starlink satellites, find out how birds feel about fireworks, learn about a new way to detect sugars, thirty thousand molecules that could make better batteries, and more efficient photosynthesis. I’ll also briefly have to comment on wormholes, again. And of course, the telephone will ring.

Transcript, references, and discussion on Patreon.

Saturday, December 03, 2022

Guest Post: Collider Rap by Mike Battaglia and GPT-3

Image by rawpixel.com on Freepik

These days, it seems like Artificial Intelligence is taking over the world. But you don't have to escape its clutches - why not have a little fun with it? One of my favorite ways to lol at AI's expense is by getting it to mix together ridiculous things and seeing what it comes up with.
This time I asked an AI model to write rap lyrics about particle accelerators, pretending to be A Tribe Called Quest. Q-Tip would present the viewpoint that Sabine Hossenfelder holds: particle accelerators are a waste of time! Phife Dawg would then pretend he's an electron inside a particle accelerator and argue for their importance as a research tool. And Busta Rhymes would do his verse from "Scenario", but changed so that it's all about particle accelerators instead.

Here are the lyrics that this AI model wrote:

Q Tip:
Sabine Hossenfelder's sayin' the accelerators a waste
Costs a lot of money, nothin' to show for the chase
They say the science is great, I say the cost is too high
Money's better spent elsewhere, that's the reason why

Phife Dawg:
Don't wanna hear none of that, I'm an electron with pride
In need of an accelerator, please don't take away my ride
From the Big Bang to the stars, I'm out here explorin' space
Don't let Sabine's perspective keep me from the human race

Q Tip:
We can't ignore the cost, it's too high for the gain
The science is important, but so is the financial strain
We gotta find a balance, between knowledge and the cost
Else we'll be stuck in a jam, and our fates will be lost.

Busta Rhymes:
Watch as I combine all the science of space
Now Sabine Hossenfelder says the accelerators a waste
Powerful impact boom from the beam,
So I'm investigating both sides of this debate of the dream
Vacuums and volts providing the power
As the electrons go faster and faster, the experts devour
Oh, my gosh! Oh, my gosh!
Time and money embark on a chase
Unh! Unh! Unh! For the cause, no time to waste
Unh, pardon me, unh! As I make this call
Should we keep pumpin resources or close the accelerator hall?
Rawr! Rawr! Like a scientific dragon
Change your little opinions, cause this knowledge is flaggin'
Try to end this debate, I'm gon' twist it like a durag
Cuz I'm caught in the middle of the debate like a purgatory
Chickity Choco, the chocolate chicken
The electrons are goin', pushin' the science mission
Rippin up the theories like Horatio
Observe the vibe and check out the scenario

---

The results are hilarious––I'm literally cracking up here––though simultaneously I worry that maybe I've enabled some kind of robot apocalypse through this escapade (what can I say, looking wistfully out into the future during lunch break). If speaking English makes an AI conscious, then what will these rap lyrics do?

Ah well, we'll find out soon enough...and when we're done playing around here let us just take a moment to marvel that an AI can even do this.

Even crazier is that it wrote this entire blog post (including - oops - this sentence!) from scratch.

---

Note from real Mike: it's true, although it wasn't exactly "from scratch." I basically gave OpenAI's new DaVinci-003 GPT-3 model an outline of the blog post. I also had to build it up in parts; first I had it do the lyrics with one prompt, and then I had it build the blog post as a separate prompt. Still, it managed to write this as the result. I have only very lightly edited the formatting by adding line breaks.

I should note that it took quite a bit of playing around with the parameters and the prompt before getting this output. In particular, I found most of the struggle to be in wording the prompt correctly; I had to try a bunch of different things before I could get the model to figure out what I wanted it to do. So I guess I'll leave it to the philosophers to debate whether an AI really wrote it all "on its own."
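
For the curious, here's a minimal sketch of the kind of API call involved, using OpenAI's Python library and the text-davinci-003 completions endpoint as it existed in 2022. The prompt wording and parameter values below are illustrative guesses, not the actual ones used for this post:

```python
# Illustrative sketch only: prompt and parameters are guesses,
# not the actual ones used to generate the lyrics above.
import openai

openai.api_key = "YOUR_API_KEY"

lyrics_prompt = (
    "Write rap lyrics in the style of A Tribe Called Quest about particle "
    "accelerators. Q-Tip argues they are a waste of money, Phife Dawg raps "
    "as an electron defending them, and Busta Rhymes redoes his 'Scenario' "
    "verse so it's all about particle accelerators."
)

response = openai.Completion.create(
    model="text-davinci-003",   # the DaVinci-003 GPT-3 model mentioned above
    prompt=lyrics_prompt,
    max_tokens=600,             # room for several verses
    temperature=0.9,            # higher values give more surprising output
)

print(response["choices"][0]["text"])
```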

I thought the results were absolutely hilarious when I shared them on Facebook. I also think it raises some deep questions that are worth thinking about. On the one hand, I guarantee every single person reading that Busta Rhymes verse who knows the original will be cracking up hearing it in his voice in their heads. On the other hand, the current model is clearly not quite able to replicate the dense multilayered lyrical wordplay and flow that real rappers are capable of. But at the rate things are moving, it probably will, possibly very soon. I don't know what to make of it.

All I know is this: as of 2022, you can tell this thing to write some rap bars about particle accelerators and Sabine Hossenfelder and it will actually do a baseline half-decent job at it. Then you can get it to write a blog post about how it wrote the lyrics and a meta-blog post about how it is capable of writing blog posts. It's really nuts. Anyone right now can go to OpenAI and play around with it and get results like this with a little effort.

GPT-4 is expected to be available in 2023, reportedly with 500x the number of parameters. Who knows what that will be able to do.

(And RIP to Phife Dawg, probably my all time favorite MC)


Mike Battaglia is a musician, biomedical engineer, and digital signal processing specialist. Check out some of his microtonal music on YouTube and Instagram.

Wednesday, November 30, 2022

Science News Nov 30



Today we’ll talk about trouble at ITER, robots that build robots, air pollution, AI that classifies supernovae, a small asteroid that hit Canada, Super GPS, a new supercomputer simulation of the sun, a quantum thermometer. And of course, the telephone will ring.

Transcript, references, and discussion on Patreon.

Saturday, November 26, 2022

Wednesday, November 23, 2022

Wednesday, November 16, 2022

Saturday, November 12, 2022

Why are male testosterone levels falling?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Forget war, forget climate change, recessions, pandemics. Today we’ll talk about a real crisis. The decrease of testosterone levels. Just look at all those headlines. There must be something going on. But what? Are testosterone levels really falling? If so, why? And how much of a crisis is it? That’s what we’ll talk about today.

The worry that men are becoming too feminine isn’t new. It’s been in the newspapers since there’ve been newspapers. The blame has, among other things, been put on juice boxes, sleek electronics, face creams, lilac pajamas and embroidered bathrobes, marxism, and living in high rise buildings. What is new, however, is the trend to self-medicate with testosterone pills. In the US, annual prescription sales for testosterone supplements have increased from 18 million dollars in 1988 to 70 million in 2000 to more than 2 billion in 2013.

The probably most prominent advocate for boosting your testosterone levels is Tucker Carlson, an American TV host. He’s seriously worried about the supposed decline of manliness and, among other things, suggests that men tan their balls to increase their testosterone levels. This is what his vision of the future man looks like.

So I got a PhD in physics and somehow ended up on YouTube talking about people tanning their balls. How do I explain this to my mom?

Apparently the idea started with a paper from 1939 by researchers from the Psychiatric Unit of the Boston State Hospital. They irradiated five patients with a mercury lamp on different body parts and found that the largest increase in testosterone levels happened when the target was the scrotum.

I don’t want to know what else happened at that place. But even leaving aside the somewhat questionable circumstances, 5 patients in a psychiatric unit are not a representative sample for half the world population. There’s no evidence that irradiating your family jewels will do your testosterone levels any good. And the US Food and Drug Administration cautions against the use of testosterone unless there’s an underlying medical condition. They say that the benefits and safety of testosterone supplements have not been established.

But let’s step back from the testosterone craze for a moment and talk about the scientific basics. Testosterone and estrogen are the most important human sex hormones. We all have both, but men have higher testosterone levels, while women have higher estrogen levels. Testosterone in men is, among other things, responsible for changes during puberty, muscle mass and strength, hair growth, and sperm production.

Men with low testosterone levels may suffer from fatigue, depression, and sexual dysfunction. But testosterone levels that are too high aren’t good either. One of the consequences is muscle growth, which is why the stuff’s used for doping. But the side-effects are mood swings, aggressive and risk-taking behavior, skin problems, hair loss, and elevated cholesterol levels. Also, if the testosterone level is too high, the body tries to produce less of it, which leads to a reduced sperm count and shrinking of the testicles. And since the body converts part of the testosterone to a form of estrogen, high testosterone levels can, among other things, lead to breast development. So, more isn’t always better.

All of which brings up the question how much testosterone is normal? This question is surprisingly difficult to answer.

To begin with, testosterone levels change during the day, especially in young men, where they peak in the early morning. That’s why testosterone levels are measured in the morning, and another reason why early-morning classes should be illegal.

But that’s not the only reason testosterone levels vary. According to a 2020 study testosterone levels change with the seasons and are higher in summer. They are also known to change with partnership status. According to a Harvard University study published in 2002, married men have lower levels of testosterone than single men, and the more time they spend with family, the lower the testosterone level. Other studies have shown that men’s testosterone levels drop when holding an infant, or even a baby doll, and that the level goes up again after divorce.

That’s all very interesting, but these are all quite small effects. What we want to know is what’s a normal average level?

In 2014, a group of researchers did a meta-analysis to find out. They collected the data of 13 previously published studies and found that testosterone blood levels in men peak at about 19 years of age with a mean value of 15 point 4 nanomoles per liter. They then fall slightly to about 13 point 0 by age 40. Be careful, this plot has a log scale on the vertical axis. The authors found no evidence for a further drop in mean testosterone with age, although the variation increases as men get older.

Medical guidelines in the United States currently say everything above 11 point 1 nanomoles per liter is normal, below 6 point 9 is too low, and in between there’s a grey area where they send you from one doctor to another until one can decide what to do with you. In Europe they think men should have a little more testosterone and the guidelines are 12 and 8 nanomoles per liter, respectively.
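
If you want those guideline numbers in one place, here is a tiny Python sketch that just encodes the thresholds quoted above, in nanomoles per liter. It's purely illustrative, not medical advice:

```python
# Guideline thresholds quoted above, in nanomoles per liter.
# Purely illustrative; not medical advice.
THRESHOLDS = {
    "US":     {"normal_above": 11.1, "low_below": 6.9},
    "Europe": {"normal_above": 12.0, "low_below": 8.0},
}

def classify(level_nmol_per_l, region="US"):
    t = THRESHOLDS[region]
    if level_nmol_per_l >= t["normal_above"]:
        return "normal"
    if level_nmol_per_l <= t["low_below"]:
        return "low"
    return "grey area"  # between the cutoffs: off to the next doctor

print(classify(15.4))            # mean peak from the meta-analysis -> "normal"
print(classify(7.5, "Europe"))   # -> "low"
```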

Now that we know what we’re talking about, what’s the deal with the falling testosterone levels? First off – the headlines are right. Testosterone levels really are falling. This has been backed up by several independent studies. And it isn’t even news.

One of the best-known papers about this was published by researchers from the US in 2007. They tracked three groups of randomly selected men from Boston during the 1980s, 1990s, and 2000s.

They found that the later-born men had lower testosterone levels at the same age as the earlier born ones. You can see this in this graph, where the solid lines are the mean values, and the dotted lines are the 95 percent confidence levels.

Don’t get confused about the numbers on the vertical axis. This graph uses different units than the ones we had before, but even so you can clearly see that indeed testosterone levels are falling. For example, a 70-year old man in the first study group from the 1980s had a higher total testosterone level than the youngest man in the second group. They found that the average levels declined by about 1 percent per year, so men born 15 years later would have 15 percent lower testosterone levels at the same age.
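
As a quick sanity check of that rate (my arithmetic, not the paper's):

$$ 0.99^{15} \approx 0.86, $$

so a decline of about 1 percent per year compounds to roughly a 14 to 15 percent drop over 15 years, consistent with the figure above.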

In case you think something odd is going on in Boston in particular, similar studies have found the same elsewhere in the US and also in Europe. For example, a Finnish study from 2013 found that older generations of men had higher testosterone levels at any given age range compared to younger generations. It’s not a small difference.

For example, for men aged 60 to 69 born between 1913 and 1922, testosterone levels were 21 point 9 nanomoles per liter, but for those born between 1942 and 1951, it was only 13 point 8. Finnish men born during the 1910s had more testosterone in their sixties than men born in the 1970s when they were in their twenties.

It's not just testosterone, and it’s not just men. Strength levels seem to be decreasing in general. A 2016 study measured the grip strength of about 250 healthy full-time students aged 20 to 34 at universities in North Carolina. They compared the results to measurements from 1985 and found that grip strength had significantly decreased both for men and for women. It seems firm handshakes really are going out of style.

In all fairness, this wasn’t a particularly large study. But here’s another example from a meta-analysis of 50 studies that included a total of 25 million children, aged 9 to 17, from 28 countries. It was published by a group of Australian researchers in 2013. They reported that children today take 90 seconds longer to run a mile than kids did 30 years ago, and that seems to be a global trend.

Okay, so we’re all getting weaker and slower and spend our days watching YouTube. But why? The brief answer is that no one really knows, there’s just speculation, so let’s have a look at the speculations. First of all, the changes happen too quickly to be genetic adaptations.

But one suspect factor is food. According to a recent meta-analysis done by researchers from the UK eating too much protein can significantly decrease testosterone levels. They found that diets with more than 35 percent protein decreased testosterone levels by 37 percent. 35 percent protein is a lot. The average person in the developed world eats less than half of that, so it doesn’t explain the observed trend. But if you only eat meat, it quite possibly has consequences.

A related issue is the increasing number of obese people, which we just talked about some weeks ago. We know that in men, a high body mass index is correlated with lower testosterone levels. And according to a 2014 paper by researchers from Australia, the causation goes both ways. That is, low testosterone can cause weight gain, and weight gain lowers testosterone levels, which can create a self‐perpetuating cycle of metabolic complications. However, the studies that documented the fall in testosterone levels found it even for men with normal body mass index, so this doesn’t explain it either.

Another factor might be smoking, or rather, the lack thereof. That’s because some chemicals contained in tobacco prevent testosterone from converting to other hormones, which can increase the mean testosterone level. A meta analysis from 2016 found that men who smoked had higher mean testosterone levels than non-smokers with the difference being about 1 point 5 nanomoles per liter. In women the difference wasn’t statistically significant. So, the overall decline in smoking might have had an impact on the overall decline in testosterone levels. But again, that alone doesn’t explain it.

What are we to make of this? It seems plausible to me that several factors are at play. As we have seen, testosterone levels change with living circumstances. The world is a more comfortable place today than 50 years ago, so maybe testosterone just isn’t needed as much as it used to. And then the changes in diet and smoking add on top of that. Is that enough to explain it? I don’t know. As scientists like to say “more work is needed”.

Is it something to worry about? Well, that depends on what you want the world to be. Carl Sagan once referred to testosterone as a poison that causes conflict. He said “Why is the half of humanity with a special sensitivity to the preciousness of life, the half untainted by testosterone poisoning, almost wholly unrepresented in defense establishments and peace negotiations worldwide?” However, he then continued, “Testosterone also causes the kind of aggression needed to defend against predators, and without it, we’d all be dead. [...] Testosterone is there for a reason. It’s not an evolutionary mistake.”

Personally I see the decrease of testosterone levels more as a reaction to our changing environment than reason for concern. The world changes and we change with it. We study tree rings to find out which years were good years and which years were bad years for the trees. And maybe in ten thousand years from now, scientists will study testosterone levels to find out which times were good times and which times were bad times for us.

 

Wednesday, November 09, 2022

Science News Nov 9



Note: I am aware that the RSS feed has stopped working properly. Google seems to not maintain this platform. I will soon move this blog elsewhere. Meanwhile, please subscribe to my Patreon or Newsletter.

Saturday, November 05, 2022

Quantum Winter Is Coming

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Quantum technology currently attracts a lot of attention and money, which might explain why both are missing on my end. But it’s not just governments who think quantum everything is where your taxes should go, business investors and companies are willing to put in big money, too. This has had a dramatic impact on quantum physics research in the past decade. It’s also created a lot of hype, especially around quantum computing. But if so much of quantum computing is hype, then why are companies like Google and IBM pouring so much money into it, what’ll happen when the investment bubble bursts, what’s the “quantum winter”, and what does it mean for all of us? That’s what we’ll talk about today.

There are several different quantum technologies, and as a rule of thumb, the fewer headlines they make, the more promising they are. Take for example, quantum metrology. That’s not a mispronunciation of meteorology, that’s making better measurements with quantum effects. You basically never read anything about this. But it’s really promising and already being used by scientists to improve their own experiments. You can learn more about this in my earlier video.

On the other hand, you have those quantum things that you read a lot about but that no one needs or wants, like the quantum internet. And then there is quantum computing which according to countless headlines is going to revolutionize the world. Quantum computers are promising technology, yes, but the same can be said about nuclear fusion and look how that worked out.

A lot of physicists, me included, have warned that quantum computing is being oversold. It’s not going to change the world, it’ll have some niche applications at best, and it’s going to take much longer than many start-ups want you to believe.

Though I admire the optimism of the quantum computing believers, and also the vocabulary. For example, there’s a startup called “multiverse” that is “Working with customers in more than 10 verticals”. Some of them can’t find their way back from the bathroom.

Or here’s one called “universal quantum” which has a “fault tolerant team” that “embraces entanglement” and helps everyone find a “super position”. If you look at them, do they collapse?

This company also recently published a blogpost titled “Six reasons Liz Truss needs a quantum computer” which explains for example, “Quantum computers are suited to modelling complex systems, making them better at forecasting both near-term weather patterns and the long-term effects of climate change.”

Last time I looked, no one had any idea how to do a weather forecast on a quantum computer. It’s not just that no one has done it, no one knows if it’s even possible, because weather is a non-linear system whereas quantum mechanics is a linear theory.

Here is an example of some recent quantum hype, from a website called “Investors Chronicle” in an article titled “Quantum computing: a new industrial revolution”. After creating a lot of fog about superpositions and interference, they explain that quantum computers are close to showing “quantum advantage” and that “Quantum advantage, therefore, can be interpreted as being when problem-solving power can be applied to real world issues, which makes it much more interesting for investors.”

That’s just wrong. Quantum advantage has indeed been demonstrated for some quantum computers, but that just means the quantum computer did something faster than a conventional computer, not that this was of any use for real world issues. They just produced a random distribution that would take a really long time to calculate by any other means. It’s like this guy stacking 5 M&Ms. That’s a world record, hurray, but what are you going to do with it?

Problem is, a lot of CEOs in industry and the financial sector can’t tell a bra from a ket and believe that quantum computing is actually going to be relevant for their business, and that it’s going to be relevant soon. For example, at a recent quantum computing conference in London, the managing director of research at Bank of America said that quantum computing will be “bigger than fire”. The only way I can see this coming true is that it’ll produce more carbon emissions.

This bubble of inflated promises will eventually burst. It’s just a matter of time. This’ll cause a sudden decline of investment in quantum tech overall and be the start of a difficult time for research and development. This scenario has been dubbed the “quantum winter”. And winter is coming. But before we get to the quantum winter, let me briefly summarize how quantum computers work and what their problems are.

A conventional computer works with bits that take on one of two discrete values. A quantum computer instead uses qubits that can be in arbitrary superpositions of two states. A quantum computer then works by entangling the qubits and shifting this entanglement around. Entanglement is a type of correlation, but it has no analogue in conventional computers. This is why quantum computers can do things that standard computers can’t do.
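
If you want to see what “superposition plus entanglement” means concretely, here is a minimal state-vector sketch in Python with numpy. It's just the linear algebra of two qubits, not how real quantum hardware is programmed:

```python
import numpy as np

# Single-qubit basis states |0> and |1> as vectors
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# CNOT gate on two qubits: flips the second qubit if the first is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(ket0, ket0)            # start in |00>
state = np.kron(H, np.eye(2)) @ state  # superpose the first qubit: (|00> + |10>)/sqrt(2)
state = CNOT @ state                   # entangle them: (|00> + |11>)/sqrt(2), a Bell state

print(state)  # [0.707..., 0, 0, 0.707...]: the two qubits are now entangled
```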

For some mathematical problems, a quantum computer can give you an answer much faster than any conventional computer possibly could. This is called “quantum advantage”. Those problems include things like factorizing large numbers into prime factors. But also calculating properties of molecules and materials without having to chemically synthesize them first. Putting these questions on a quantum computer could speed up material design and drug discovery. Quantum computers can also solve certain logistic problems or optimize financial systems. This is why, if you’re Bank of America, you think it's bigger than fire.

And like fire, quantum computing is not magic, it’s an application of standard quantum mechanics. There is no speculative new physics involved. It’s rather to the contrary. Claims that quantum computers will not work rest on speculative new physics. But it’s one thing to say if you could build them, they’d be useful. It’s another thing entirely to actually build them.

So what does it take to build a quantum computer? First of all you need qubits. Then you have to find a way of entangling many of those qubits. And, like a conventional computer, a quantum computer needs an algorithm that tells it how to move the entanglement around. Eventually, you make a measurement which collapses the wave-function and you read out the final state. This final state should be one that answers your question correctly with high probability. Most importantly, this means a quantum computer isn’t a stand-alone device, it needs other devices for the programming and the readout. The quantum part is really just a small piece of the whole thing.

Now let’s talk about the problems. First there’s the qubits. Producing them is not the problem, indeed there are many different ways to produce qubits. I went through the advantages and disadvantages of each approach in an earlier video, so check this out if you want to know more. But a general problem with qubits is decoherence, which means they lose their quantum properties quickly.

The currently most widely developed systems are superconducting qubits and ion traps. Superconducting qubits are used for example by IBM and Google. For them to work, they have to be cooled to 10 to 20 millikelvin, that’s colder than outer space. Even so, they decohere within tens of microseconds.

Ion traps are used for example by IonQ and Honeywell. They must “only” be cooled to a few Kelvin above absolute zero. They have much longer coherence times, up to some minutes, but they’re also much slower to react to operations, so it’s not a priori clear which approach is better. I’d say they’re both equally bad. The cooling isn’t only expensive and energy-intensive, it requires a lot of equipment and it’s difficult to scale to larger quantum computers. It seems that IBM is trying to do it by breaking world records in building large cryogenic containers. I guess if the thing with quantum computing doesn’t work out, they can rent them out for people to have their heads frozen.

There are some qubits that operate at room temperature, the most promising ones of those are currently nitrogen vacancy systems and photonics on a chip. However, for both of those, no working quantum computer exists to date and it’s unclear even what the challenges may be, let alone how to overcome them.

The next biggest problem is combining these qubits. Again, the issue is that quantum effects are fragile, so the quantum computer is extremely sensitive to noise. The noise brings in errors. You can correct for those to some extent, but this error correction requires more qubits.

More qubits bring problems by themselves, for example, they tend to be not as independent as they should be, an issue known as “crosstalk”. It’s kind of like if you’re trying to write while moving your feet in circles. It gets really difficult. The qubit states also drift if you leave them unattended. Indeed, it’s somewhat of a mystery at the moment what a quantum computer does if you don’t calculate with it. It’s just difficult to calculate what a big quantum system does. Maybe we can put it on a quantum computer?

And finally there’s the issue of the algorithms: Few algorithms for quantum computers are known, an issue that often goes unmentioned because everyone is focused on the technology. Wikipedia helpfully has a list of quantum algorithms. It’s short. Several of those algorithms don’t compute anything of practical use, and for some it's not known if they lead to any speedup.

As this brief summary makes clear, the challenges are enormous. But how far along is the technology? The largest current quantum computers have somewhere between 50 and 100 qubits, though IBM has a roadmap saying they want to make it to a thousand next year. Two different approaches have demonstrated a “quantum advantage”, that is, they have performed a calculation faster than the currently fastest conventional computer could have done. However in those demonstrations of quantum advantage, the devices were executing algorithms that did not calculate anything of use.

The record-breaking “useful” calculation for quantum computers is the prime-number factorization of 21. That’s the number, not the number of digits. Yes, the answer is 3 times 7, but if you do it on a quantum computer you can publish it in Nature. In case you are impressed by this achievement, please allow me to clarify that doing this calculation with the standard algorithm and error correction is way beyond the capacity of current quantum computers. They actually used a simplified algorithm that works for this number in particular.

To be fair, there have been some cute applications of quantum algorithms for simple examples in quantum chemistry and machine learning, but none of this is anywhere even close to being commercially interesting.

How many qubits do you need for a quantum computer to do something commercially interesting? Current estimates say it’s several hundred thousand to a few million qubits, depending on what you want to calculate and how large your tolerance for errors is.

A lot of quantum computing enthusiasts claim that we’ll get there quickly because of Moore’s law. Unfortunately, I have to inform you that Moore’s law isn’t a law of nature. It worked for conventional computers because those could be miniaturized. However, you can’t miniaturize ions or the Compton wavelength of electrons. They’re already as small as it gets. Nature’s a bitch sometimes.

In the past years there’s been some noise around noisy intermediate-scale quantum computers, or NISQs for short. Those are small quantum computers in which you just accept the noise, kind of like YouTube comment sections. But no one seems to have found anything useful to do with them and the hype around them has noticeably died down recently.

I guess you understand now why I am extremely skeptical that we are anywhere close to commercially relevant applications of quantum computers. But let’s hear what some other people say.

There is for example Mikhail Dyakonov, a physics prof who has worked on quantum things much longer than I have. He’s written a book that was published in 2020 under the title “Will We Ever Have a Quantum Computer?” It has only 49 pages, which is what happens if you agree to write a book but then notice halfway through that you’d rather do something else. He finishes by answering his own question:
“No, we will never have a quantum computer. Instead, we might have some special-task (and outrageously expensive) quantum devices operating at millikelvin temperatures. The saga of quantum computing is waiting for a profound sociological analysis, and some lessons for the future should be learnt from this fascinating adventure.”
The brevity of Dyakonov’s book is balanced by another book, “Law and Policy for the Quantum Age” by Chris Hoofnagle and Simson Garfinkel, who make it to a whopping 602 pages. Their book was just published earlier this year, it’s freely available online, and it has an adorable cat pic on the cover, so definitely go check it out. Hoofnagle is a professor of law and Garfinkel is a data scientist, but their book has been heavily informed by people who work in quantum computing. They look at the possible future scenarios. The most likely scenario, they say, is the “Quantum Winter” which they describe as follows:
“In this scenario (call it “Quantum Winter”), quantum computing devices remain noisy and never scale to a meaningful quantum advantage… After a tremendous amount of public and private monies are spent pursuing quantum technologies, businesses in the field are limited to research applications or simply fail, and career paths wither. If that happens, funding eventually dries up for quantum computing. Academics and scientists in the field either retool and shift, or simply appear irrelevant, even embarrassing.”
Then there is Victor Galitski, Professor at the Joint Quantum Institute at the University of Maryland who wrote in a 2021 post on LinkedIn:
“The number of known quantum algorithms, which promise advantage over classical computation, is just a few (and none of them will "solve global warming" for sure). More importantly, exactly zero such algorithms have been demonstrated in practice so far and the gap between what’s needed to realize them and the currently available hardware is huge, and it's not just a question of numbers. There are qualitative challenges with scaling up, which will likely take decades to resolve (if ever).”
Most recently, there was an opinion piece by Nikita Gourianov in the Financial Times. Nikita works on computational quantum physics at the University of Oxford. He writes:
“As more money flowed [into quantum computing], the field grew, and it became progressively more tempting for scientists to oversell their results… After a few years of this, a highly exaggerated perspective on the promise of quantum computing reached the mainstream, leading to… the formation of a classical bubble.”
He then points out that no quantum computing company is currently making profit and that:
“The little revenue they generate mostly comes from consulting missions aimed at teaching other companies about “how quantum computers will help their business”.”
I have to disagree on the final point because big companies have another way to make money from quantum computers, namely by renting them out to universities. And since governments are pouring money into research, that’s quite a promising way to funnel tax money into your business. Imagine the LHC was owned by Google and particle physicists had to pay to use it.

That’s how I think it’ll go with quantum computing: First all the smaller startups will falter because they don’t reach their milestones, venture capital will evaporate, and all the overeducated quantum computists in academia will use grant money to pay a few large companies who own the only workable devices. And while those devices are interesting research objects, they’ll not be useful for commercial applications.

I might be totally wrong of course. Maybe one of those start-ups will actually come up with a scalable quantum computing platform. I don’t know, I’m guessing as much as everyone else.

But if quantum winter is coming, what does it mean for you and me? Well, some people will lose a lot of money but that just means they had too much of it to begin with, so can’t say it bothers me all that much. There’ll also be fewer headlines about how quantum computing is supposedly going to revolutionize something or other, which I’d say is a good development. And we’ll see many people who worked in quantum computing going into other professions. Chances are in ten years you can have a nice chat about the finer details of multi-particle entanglement with your taxi driver. I don’t know about you, but I’m looking forward to quantum winter.

Saturday, October 29, 2022

What Do Longtermists Want?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Have you ever put away a bag of chips because they say it isn’t healthy? That makes sense. Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes… That makes you a longtermist. Longtermism is a currently popular philosophy among rich people like Elon Musk, Peter Thiel, and Jaan Tallinn. What do they believe and how crazy is it? That’s what we’ll talk about today.

The first time I heard of longtermism I thought it was about terms of agreement that get longer and longer. But no. Longtermism is the philosophical idea that the long-term future of humanity is way more important than the present and that those alive today, so you, presumably, should make sacrifices for the good of all the generations to come.

Longtermism has its roots in the effective altruism movement, whose followers try to be smart about donating money so that it has the biggest impact, for example by telling everyone how smart they are about donating money. Longtermists are concerned with what our future will look like in a billion years or more. Their goal is to make sure that we don’t go extinct. So stop being selfish, put away that junk food and make babies.

The key argument of longtermists is that our planet will remain habitable for a few billion years, which means that most people who’ll ever be alive are yet to be born.

Here’s a visual illustration of this. Each grain of sand in this hourglass represents 10 million people. The red grains are those who lived in the past, about 110 billion. The green ones are those alive today, that’s about 8 billion more. But that is just a tiny part of all the lives that are yet to come.

A conservative estimate is to assume that our planet will be populated by at least a billion people for at least a billion years, so that’s a billion billion human life years. With today’s typical life span of 100 years, that’d be about 10 to the 16 human lives. If we’ll go on to populate the galaxy or maybe even other galaxies, this number explodes into billions and billions and billions.
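
Spelled out, the multiplication behind that estimate is simply

$$ 10^{9}\ \text{people} \times 10^{9}\ \text{years} = 10^{18}\ \text{life-years}, \qquad \frac{10^{18}\ \text{life-years}}{100\ \text{years per life}} = 10^{16}\ \text{lives}. $$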

Unless. We go extinct. Therefore, the first and foremost priority of longtermists is to minimize “existential risks.” This includes events that could lead to human extinction, like an asteroid hitting our planet, a nuclear world war, or stuffing the trash so tightly into the bin that it collapses to a black hole. Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction.

One person who has been pushing longtermism is the philosopher Nick Bostrom. Yes, that’s the same Bostrom who believes we live in a computer simulation because his maths told him so. The first time I heard him give a talk was in 2008 and he was discussing the existential risk that the programmer might pull the plug on that simulation we supposedly live in. In 2009 he wrote a paper arguing:

“A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind”

Yeah, breakdown of global civilization is exactly what I would call a small misstep. But Bostrom wasn’t done. By 2013 he was calculating the value of human lives: “We find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives [in the present]. One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any ‘ordinary’ good, such as the direct benefit of saving 1 billion lives ”

Hey, maths doesn’t lie, so I guess that means it’s okay to sacrifice a billion people or so. Unless possibly you’re one of them. Which Bostrom probably isn’t particularly worried about, because he is now director of the Future of Humanity Institute in Oxford where he makes a living from multiplying powers of ten. But I don’t want to be unfair, Bostrom’s magnificent paper also has a figure to support his argument that I don’t want to withhold from you, here we go, I hope that explains it all.

By the way, this nice graphic we saw earlier comes from Our World in Data which is also located in Oxford. Certainly complete coincidence. Another person who has been promoting longtermism is William MacAskill. He is a professor for philosophy at, guess what, the University of Oxford. MacAskill recently published a book titled “What We Owe The Future”.

I didn’t read the book because if the future thinks I owe it, I’ll wait until it sends an invoice. But I did read a paper that MacAskill wrote in 2019 with his colleague Hilary Greaves, titled “The case for strong longtermism”. Hilary Greaves is a philosopher and director of the Global Priorities Institute which is located in, surprise, Oxford. In their paper they discuss a case of longtermism in which decision-makers choose “the option whose effects on the very long-run future are best,” while ignoring the short term. In their own words:

“The idea is then that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects.”

So in the next 100 years, anything goes so long as we don’t go extinct. Interestingly enough, the above passage was later removed from their paper and can no longer be found in the 2021 version.

In case you think this is an exclusively Oxford endeavour, the Americans have a similar think tank in Cambridge, Massachusetts, called The Future of Life Institute. It’s supported among others by billionaires Peter Thiel, Elon Musk, and Jaan Tallinn who have expressed their sympathy for longtermist thinking. Musk for example recently commented that MacAskill’s book “is a close match for [his] philosophy”. So in a nutshell longtermists say that the current conditions of our living don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct.

Not everyone is a fan of longtermism. I can’t think of a reason why. I mean, the last time a self-declared intellectual elite said it’s okay to sacrifice some million people for the greater good, only thing that happened was a world war, just a “small misstep for mankind.”

But some people have criticized longtermists. For example, the Australian philosopher Peter Singer. He is one of the founders of the effective altruism movement, and he isn’t pleased that his followers are flocking over to longtermism. In a 2015 book, titled The Most Good You Can Do he writes:

“To refer to donating to help the global poor or reduce animal suffering as a “feel-good project” on which resources are “frittered away” is harsh language. It no doubt reflects Bostrom’s frustration that existential risk reduction is not receiving the attention it should have, on the basis of its expected utility. Using such language is nevertheless likely to be counterproductive. We need to encourage more people to be effective altruists, and causes like helping the global poor are more likely to draw people toward thinking and acting as effective altruists than the cause of reducing existential risk.”

Basically Singer wants Bostrom and those like him to shut up because he’s afraid people will just use longtermism as an excuse to stop donating to Africa without any benefit to existential risk reduction. And that might well be true, but it’s not a particularly convincing argument if the people you’re dealing with have a net worth of several hundred billion dollars. Or if their “expected utility” of “existential risk reduction” is that their institute gets more money.

Singer’s second argument is that it’s kind of tragic if people die. He writes that longtermism “overlooks what is really so tragic about premature death: that it cuts short the lives of specific living persons whose plans and goals are thwarted.”

No shit. But then he goes on to make an important point: “just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin.” Yes, indeed, the entire argument for longtermism depends crucially on how much value you put on future lives. I’ll say more about this in a minute, but first let’s look at some other criticism.

The cognitive scientist Steven Pinker, after reading MacAskill’s What We Owe The Future, shared a similar reaction on twitter in which he complained about: “Certainty about matters on which we’re ignorant, since the future is a garden of exponentially forking paths; stipulating correct answers to unsolvable philosophical conundrums [and] blithe confidence in tech advances played out in the imagination that may never happen.”

The media also doesn’t take kindly to longtermism. Some, like Singer, complain that longtermism draws followers away from the effective altruism movement. Others argue that the technocratic vision of longtermists is anti-democratic. For example, Time Magazine wrote that Elon Musk has “sold the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”

Christine Emba, in an opinion piece for the Washington Post, argued that “the turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents’ ability to predict the future and shape it to their liking” and that “longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies while patting themselves on the back for their intelligence and superior IQs. The future becomes a clean slate onto which longtermists can project their moral certitude and pursue their techno-utopian fantasies, while flattering themselves that they are still “doing good.””

Okay, so now that we have seen what either side says, what are we to make of this?

The logic of longtermists hinges on the question of what a life in the future is worth compared to ours, while factoring in the uncertainty of this estimate. There are two elements which go into this evaluation. One is an uncertainty estimate for the future projection. The second is a moral value: how much future lives matter to you compared to ours. This moral value is not something you can calculate. That’s why longtermism is a philosophical stance, not a scientific one. Longtermists try to sweep this under the rug by blinding the reader with numbers that look kind of sciencey.

To see how difficult these arguments are, it’s useful to look at a thought experiment known as Pascal’s mugging. Imagine you’re in a dark alley. A stranger steps in front of you and says “Excuse me, I’ve forgotten my knife but I’m a mugger, so please give me your wallet.” Do you give him your money? Probably not.

But then he offers to pay back double the money in your wallet next month. Do you give him your money? Hell no, he’s almost certainly lying. But what if he offers a hundred times more? Or a million times? Going by economic logic, at some point the deal becomes worth the risk of losing your money, because you can’t be sure he’s lying. Say you consider the chances of him being honest to be 1 in 10,000. If he offered to return you 100 thousand times the amount of money in your wallet, the expected return would be larger than the expected loss.
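
For the numbers in this example, the expected-value calculation looks like this, with W the amount in your wallet:

$$ \mathbb{E}[\text{gain}] \;=\; \underbrace{10^{-4}}_{\text{he pays up}} \times 10^{5}\,W \;-\; \underbrace{\left(1-10^{-4}\right)}_{\text{he doesn't}} \times W \;\approx\; 10\,W - W \;=\; 9\,W \;>\; 0. $$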

But most people wouldn’t use that logic. They wouldn’t give the guy their money no matter what he promises. If you disagree, I have a friend who is a prince in Nigeria, if you send him 100 dollars, he’ll send back a billion, just leave your email in the comments and we’ll get in touch.

The point of this thought experiment is that there’s a second logical way to react to the mugger. Rather than calculating the expected wins and losses, you note that if you agree to his terms at any value, then anyone can use the same strategy to take literally everything from you. Because so long as your risk assessment is finite, there’s always some promise big enough to make the deal look worth the risk. But in that case you’d lose all your money and property, and quite possibly also your life, just because someone made a big enough promise. This doesn’t make any sense, so it’s reasonable to refuse to give money to the mugger. I’m sure you’re glad to hear it.

What’s the relation to longtermism? In both cases the problem is how to assign a probability to unlikely future events. For Pascal’s mugger that’s the unlikely event that the mugger will actually do what he promised. For longtermism the unlikely events are the existential threats. In both cases our intuitive reaction is to entirely disregard them, because if we took them into account, the logical conclusion seems to be that we’d have to spend as much as we can on these unlikely events about which we know the least. And this is basically why longtermists think people who are currently alive are expendable.

However, when you’re arguing about the value of human lives you are inevitably making a moral argument that can’t be derived from logic alone. There’s nothing irrational about saying you don’t care about starving children in Africa. There’s also nothing irrational about saying you don’t care about people who may or may not live on Mars in a billion years. It’s a question of what your moral values are.

Personally I think it’s good to have long-term strategies. Not just for the next 10 or 20 years, but also for the next 10 thousand or 10 billion years. So I really appreciate the longtermists’ focus on the prevention of existential risks. However, I also think they underestimate just how much technological progress depends on the reliability and sustainability of our current political, economic, and ecological systems. Progress needs ideas, and ideas come from brains which have to be fed both with food and with knowledge. So you know what, I would say, grab a bag of chips and watch a few more videos.

Saturday, October 22, 2022

What If the Effect Comes Before the Cause?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The cause comes before the effect. And I’m glad it does because it’d be awkward if my emails arrived before I’d written them. If that ever happens, it’ll be a case of “retrocausality.” What does that mean? What’s it got to do with quantum mechanics and what is the “transactional interpretation”? That’s what we’ll talk about today.

Causality is a relation between events in space and time, so I’ll be using space-time diagrams again. In such a diagram, the vertical axis is time and the horizontal axis is one dimension of space. Anything that moves with constant velocity is a straight line at some angle. By convention, a 45 degree angle depicts the speed of light.

According to Einstein, yes that guy again, the speed of light is an upper limit for the transmission of information. This means if you take any one point in space-time, then you can only send or receive information from points within cones of less than 45 degrees to the vertical through the point. The boundaries of those areas are called the forward light cone and the backward light cone.

Every philosopher in existence had something to say about causality, so there are many different definitions, but luckily today we’ll only need two. The first one is space-time causality. Suppose you have two events that are causally related, then each must be inside the other’s light cone, and the one in the past is the cause. It’s as simple as that. This is the notion of causality that’s used in General Relativity. One direction is forward, that’s the future, one direction is backward, that’s the past.
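
In units where the speed of light is 1, this criterion fits into a couple of lines of code. Here's a small sketch of my own, just to make the definition concrete:

```python
def causally_connectable(t1, x1, t2, x2, c=1.0):
    """True if a signal traveling at speed <= c could link the two events,
    i.e. each lies inside (or on) the other's light cone."""
    return c * abs(t2 - t1) >= abs(x2 - x1)

def can_be_cause_of(t1, x1, t2, x2, c=1.0):
    """Space-time causality: event 1 can be a cause of event 2 only if
    event 2 lies in event 1's forward light cone."""
    return causally_connectable(t1, x1, t2, x2, c) and t2 > t1

print(can_be_cause_of(0, 0, 1, 0.5))  # True: inside the forward light cone
print(can_be_cause_of(0, 0, 1, 2.0))  # False: would need faster-than-light signaling
```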

But it turns out that not all space-times allow you to tell apart past from future. This is much like on a Moebius strip you can’t tell the front from the back because they’re the same! In some space-times you can’t tell the past from the future because they’re the same.

Normally this doesn’t happen because you can’t turn around in time. If you wanted to do that, you’d have to go faster than light. But some space-times allow you to go back in time and visit your own past without moving faster than light. It’s called a time-like closed curve. That it’s time-like means you can travel along it below the speed of light, and that it’s closed means it’s a loop.

The simplest example of this is a space-time with a wormhole. Let’s say if you enter the wormhole it transports you from this point A to this point B. From where you started, the wormhole entrance is in your future. But it’s also in your past.

This is weird and it creates some causality problems that we’ll talk about in a bit. However, for all we know, time-like closed curves don’t exist in our space-time, which is a little bit disappointing because I know you really want to go back and revisit all your Latin exams. This is why I first want to talk about another notion of causality which makes going back in time a little easier. It’s called “interventionist causality”. It’s all about what you can “intervene” with, and it works without a time-order.

To understand interventionist causality you ask which event depends on another one. Think back to the example of writing an email and someone receiving it. If I intervene with you writing the email, for example by distracting you with jokes about retrocausality, then the other person won’t receive it. So neither of the two events happens. But if I intervene with them receiving the email, for example by boring them to death with jokes about retrocausality, you’ll still write the email. According to interventionist causality, the event that you can intervene with to stop both from happening is the cause. In general you don’t have to actually prevent an event from happening, you look at what affects its probability.
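
Here is a toy version of the email example in code, a sketch of my own just to illustrate what an "intervention" means: you override one variable and see which other variables change.

```python
import random

def email_world(do_write=None, do_receive=None):
    """Toy model: 'write' normally causes 'receive'. Passing do_write or
    do_receive overrides that variable, regardless of its usual causes."""
    write = (random.random() < 0.5) if do_write is None else do_write
    receive = write if do_receive is None else do_receive
    return write, receive

# Intervening on 'write' also changes 'receive'...
print(email_world(do_write=False))     # -> (False, False)
# ...but intervening on 'receive' leaves 'write' untouched.
print(email_world(do_receive=False))   # 'write' still happens half the time
```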

Normally the causal order you get from the interventionist approach agrees with the order you get from the space-time approach. So then why use it at all? It’s because in practice we often have correlations in data, but we don’t know the time order, so we need another method to infer causality. We encountered many examples for this in our recent video on obesity. What came first, the obesity or the change in the microbiome? Interventionist causality is really popular in the life sciences, where longitudinal studies are rare, because it gives you an alternative way to analyze data to infer causality.

But back to physics. This notion of interventionist causality is implicitly based on entropy increase. Let me illustrate that with another example. Suppose you flip a switch and the light turns on. If I stop you from flipping the switch, the light doesn’t turn on. But if I stop the light from turning on that doesn’t stop you from flipping the switch. Interventionist causality then tells us that flipping the switch is the cause and the light turning on the effect.

However, from a purely mathematical perspective, there exists some configuration of atoms and photons in which the light stays off in just the right way so that you don’t flip the switch. But such an “intervention” would have to place photons so that they go back into the bulb and the electric signal back into the cable, and all the neural signals in your brain would have to go in reverse. This may be mathematically possible, but it would require an enormous amount of entropy decrease, so it’s not practically possible.

This is why the two notions of causality usually agree. Forward in space-time is the same direction as the arrow of time from entropy increase. But what if that wasn’t the case? What if there’d be places in the universe where the arrow of time went in the other direction than it does here? Then you could have effects coming before their causes, even without wormholes or some other weird space-time geometry. You’d have retrocausality.

As you have probably noticed, emails don’t arrive before you’ve written them, and lights don’t normally cause people to flip switches. So, if retrocausality exists, it’s subtle or it’s rare or it’s elsewhere.

In addition, most physicists don't like retrocausality because going back in time can create inconsistencies, that is, situations where something both happens and doesn't happen. The common example is the grandfather paradox, in which you go back in time and kill your own grandfather, accidentally we hope, so you are never born and can't go back in time to kill him. I guess we could call that a retrocasualty.

The movie industry deals with those causal paradoxes in one of three ways. The first one is that if you go back in time, you don’t go back into your own time, but into a parallel universe which has a similar but slightly different history. Then there’s no inconsistency, but you have the problem that you don’t know how to get back to where you came from. This happens for example in the movie “The Butterfly Effect” in which the protagonist repeatedly travels to the past only to end up in a future that is less and less like what he was hoping to achieve.

Another way of dealing with time-travel paradoxes is that you allow temporary inconsistencies, they just have to be fixed so that everything works out in the end. An example of this is Back to the Future, where Marty accidentally prevents his parents from meeting. He has to somehow fix that issue before he can go back into his own future.

These two ways of dealing with time travel inconsistencies are good for story-telling, but they’re hard to make sense of scientifically. We don’t have any theoretical framework that includes multiple pasts that may join back to the same future.

The option that's the easiest to make sense of scientifically is to just pick a story that's consistent in the first place. An example of this is The Time Traveler's Wife, in which the time traveler first meets his future wife when she is a child and he is already an adult, and then he meets her again when they're both the same age. It makes for a really depressing story though.

But even if time travel is consistent, it can still have funny consequences.

Imagine for example you open your microwave and find a notebook in it. The notebook contains instructions for how to turn your microwave into a time machine. But it takes you ten years to figure out how to make it work, and by the time you’re finished the notebook is really worn out. So, you copy its content into a new notebook, put it into the microwave, and send it back in time to your younger self. Where did the notebook come from?

It's often called the "bootstrap paradox", but there's nothing paradoxical about it in the sense that nothing is inconsistent. We're just surprised that there was nothing before the appearance of the notebook that could have given rise to it, even though it has consequences later on. For starters, you'll have to buy a new microwave. This means that in the presence of causal loops the past no longer determines the future.

Hmm, indeterminism. Where have we heard that before? In quantum mechanics, right! Could retrocausality have something to do with quantum mechanics?

Indeed, that retrocausality could explain the seemingly strange features of quantum mechanics was proposed by John Cramer in the 1980s. It’s called the Transactional Interpretation. It was further developed by Ruth Kastner but Cramer seems to not be particularly enchanted by Kastner’s version. In a 2015 paper he called it “not incorrect, but we consider it to be unnecessarily abstract.” That’s academia. Nothing quite like being dissed in the 1st person plural.

Cramer's idea goes back to Wheeler and Feynman, who were trying to find a new way to think about electrodynamics. Suppose you have light going from a sender to a receiver. If we draw this into a space-time diagram, we just get this line at a 45 degree angle from the event of emission to that of absorption.

Those waves of the light oscillate in the direction of the two dimensions that we didn’t draw. That’s because I can’t afford a graphic designer who works in 4 dimensions, so we’ll just draw another graph down here that shows the phase of the wave as a function of the coordinate. This is how we normally think of light being emitted and absorbed.

But in this case we have to tell the emitter what the forward direction of time is. Wheeler and Feynman didn't like this. They wanted a version that would treat the future and past the same way. So, they said, suppose the wave that comes from the emitter actually goes in both directions in time, but the wave that goes backward in time has the opposite phase. When the forward wave arrives at the absorber, the absorber sends back an answer wave. In the range between the emitter and absorber, the answer wave has the same phase as the one that came from the emitter. But it has the opposite phase going forward in time. Because of this, there's constructive interference between the event of emission and that of absorption, but destructive interference before the emission and after the absorption.
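For those who like to see this in symbols, here is the usual way that book-keeping is summarized. This is only a sketch, not a derivation; F_ret and F_adv are my shorthand for the retarded and advanced solutions of the wave equation.

```latex
% Emitter sends half a retarded and half an advanced wave:
F_{\text{emitter}} = \tfrac{1}{2}\,\bigl(F_{\text{ret}} + F_{\text{adv}}\bigr)
% The absorber's answer wave has the same retarded part but the opposite advanced part:
F_{\text{absorber}} = \tfrac{1}{2}\,\bigl(F_{\text{ret}} - F_{\text{adv}}\bigr)
% Adding the two: the advanced parts cancel, and what remains is the ordinary
% retarded wave running from the emission to the absorption:
F_{\text{total}} = F_{\text{emitter}} + F_{\text{absorber}} = F_{\text{ret}}
```

That cancellation is why, as the next paragraph says, the end result looks exactly like ordinary electrodynamics.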

The result looks exactly the same as the normal version of electrodynamics where the wave just starts at the emitter and ends at the absorber. Indeed, it turned out that Wheeler and Feynman’s reinterpretation of electrodynamics was identical to the normal version and they didn’t pursue it any further. However, I want to draw your attention here already to an odd feature of this interpretation. It’s that it suggests a second notion of time which doesn’t exist in the physics.

When we say something like "when the wave from the emitter arrives at the absorber, the absorber returns a wave", that doesn't play out in time, because time is already the axis on this diagram. If you illustrate the physical process, then both the emission and the absorption are in this diagram in the final version, period. They don't get drawn into it one after the other; that would require a second notion of time.

That said, let’s look at Cramer’s Transactional Interpretation. In this case, we use wave-functions instead of electromagnetic waves, and there isn’t one absorber, but several different ones. The several different absorbers are different possible measurement outcomes.

Suppose for example you emit a single quantum of light, a photon, from a source. You know where the photon came from but you don't know where it's going. That's not because the photon has lost its internet connection and now can't find a tube entrance, it's because of the uncertainty principle. This means that its wave-function spreads into all directions. If you then measure the photon at one particular place, the wave-function instantaneously collapses, everywhere. This brings up the question: How did the wave-function on one side know about the measurement on the other side? That's what Einstein referred to as "spooky action at a distance," which I talked about in my earlier video.

Let us draw this into our space-time diagram. We have only one direction of space, so the photon wave-function goes left and right with probability ½. If you measure it on one side, say the right side, the probability there jumps to 1 and that on the other side to 0.

Cramer’s transactional interpretation now says that this isn’t what happens. Instead, what happens is this. The source sends out an offer wave, both forward and backward in time. In the forward direction, that approaches the detectors. Again down here we have drawn the phase of that wave. It’s now a probability amplitude rather than the amplitude of an electromagnetic field.

When the offer wave arrives at the detectors, they both send back a confirmation wave. When those waves arrive at the source, the source randomly picks one. Then the waves between the source and that one detection event enter a back-and-forth echoing process, until the probability for that outcome is 1 and that for the other possible outcomes is zero. That reproduces the collapse of the wave-function.

Cramer calls this a "transaction" between the source and the detector. He claims it makes more sense than the usual Copenhagen Interpretation with the collapse, because in the transactional interpretation all causes propagate locally and in agreement with Einstein's speed of light limit. You "just" have to accept that some of those causes go back in time.

Take for example the bomb experiment in which you want to find out whether a bomb is live or a dud, but without exploding it because last time that happened, Ken spilled his coffee, and it was a mess. If the bomb is live, a single photon will blow it up. If it’s a dud, the photon just goes through. What you do is that you put the bomb into an interferometer. You send a single photon through and measure it down here. If you measure the photon in this detector, you can tell the bomb is live even though the photon didn’t interact with the bomb because otherwise it’d have blown up. For a more detailed explanation, please watch my earlier video on the bomb experiment.
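If you want to see the numbers behind the bomb experiment, here is a minimal calculation in Python. It's my own toy version of the standard Mach-Zehnder setup, not anything from the video: the beam-splitter matrix with a factor i on reflection is an assumed convention, the bomb is placed in path 0, and detector D is the one that stays dark when the bomb is a dud.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter, reflection picks up a factor i

def mach_zehnder(bomb_is_live):
    """Return (P_explode, P_detector_C, P_detector_D) for a single photon."""
    psi = BS @ np.array([1, 0])           # photon enters port 0 and spreads over paths 0 and 1
    p_explode = 0.0
    if bomb_is_live:
        p_explode = abs(psi[0])**2        # a live bomb in path 0 absorbs the photon there...
        psi = np.array([0, psi[1]])       # ...otherwise the photon is definitely in path 1
    out = BS @ psi                        # recombine at the second beam splitter
    return p_explode, abs(out[1])**2, abs(out[0])**2   # C = port 1 (bright), D = port 0 (dark)

print(mach_zehnder(False))  # ~ (0.0, 1.0, 0.0): with a dud, every photon ends up in C
print(mach_zehnder(True))   # ~ (0.5, 0.25, 0.25): a click in D tells you the bomb is live
```

The probabilities add up to one: half the time the live bomb explodes, a quarter of the time detector C clicks and you learn nothing, and a quarter of the time detector D clicks and you know the bomb was live even though the photon never touched it.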

In Cramer's interpretation of the bomb experiment, an offer wave goes over both paths. But if there's a live bomb on one of the paths, the offer wave along it is aborted and can't go through. The offer wave on the other path still reaches the detector. The detector sends answer waves, and again the answer wave that goes along the bomb path doesn't pass through. This means the only transaction that can happen is along the other path. This is the same as in quantum mechanics. But in the transactional interpretation the path with the bomb is probed by both the offer wave and the answer wave, and that's why the measurement can contain information about it.

Great! So quantum mechanics is just a little bit of reverse causality. Does that finally explain it? Not quite. The issue with Cramer’s interpretation is the same as with the Wheeler-Feynman idea. This notion of time with the wave propagating this way and back seems to be a second time, internal to the wave, that has no physical relevance. And the outcome in the end is just the same as in normal quantum mechanics. Indeed, if you use the interventionist notion of causality, then emitting the particle is still the cause of its eventual detection and not the other way round.

Personally, I don't really see why one would bother with all that sending back and forth if the result walks and talks like the usual Copenhagen Interpretation. But if it makes you feel better, I think it's a consistent way to think of quantum mechanics.

In conclusion, I’m afraid I have to report that physicists have not found a way to travel into the past, at least not yet. But watch out for that notebook in the microwave.

Wednesday, October 19, 2022

Science News Oct 19

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Welcome everyone to this week’s science news. Today we’ll talk about the Quantum Internet Alliance, wormholes in the New York Times, a black hole that doesn’t like its meal, energy worries at CERN, species loss, and the robot which holds the world record for running 100 meters.

The European Quantum Internet Alliance announced on Friday that they have started their 7-year plan to "build a global quantum internet made in Europe." Dr. Stephan Ritter from the executive team of the Alliance said "Our goal is to create quantum internet innovation that will ultimately be available to everyone." The quantum internet isn't booming only in Europe: the US government also has a "Blueprint for the Quantum Internet", the Chinese are working on it too, and many other countries have similar plans.

What is the quantum internet and, most importantly, will it make YouTube better? I’m afraid not.

The quantum internet would be a network with hubs, cables, relay stations and maybe one day also with satellites. It'd make it possible to exchange particles which maintain their quantum behaviour, like being in two states at once. Why would you want to do that? Because quantum particles have this funny property that if you measure them, then their state suddenly "collapses" from two states at once to one state in particular. This is Einstein's "spooky action at a distance".

It does not allow you to communicate faster than light, but you’ll notice if someone tries to intercept your line. Because that’ll collapse the wave-function. For this reason, data transferred through the quantum internet would be secure in the sense that if the receiver doesn’t get the data, then no one else gets it either.
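The textbook protocol that exploits this is quantum key distribution, for example BB84. The video doesn't go into the protocol, so take the following as a rough toy model of the idea in Python, with all the physics boiled down to a "basis" and a "bit": an eavesdropper who measures in the wrong basis disturbs the state, and that shows up as errors when sender and receiver compare a sample of their key.

```python
import random

def bb84_error_rate(n=20000, eavesdropper=False):
    """Toy BB84: returns the error rate on the bits that sender and receiver keep."""
    kept = errors = 0
    for _ in range(n):
        a_bit, a_basis = random.randint(0, 1), random.randint(0, 1)
        basis, bit = a_basis, a_bit                 # the qubit in flight
        if eavesdropper:                            # Eve measures and re-sends
            e_basis = random.randint(0, 1)
            if e_basis != basis:
                bit = random.randint(0, 1)          # wrong basis randomizes the outcome
            basis = e_basis
        b_basis = random.randint(0, 1)
        b_bit = bit if b_basis == basis else random.randint(0, 1)
        if b_basis == a_basis:                      # only rounds with matching bases go into the key
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

print(bb84_error_rate(eavesdropper=False))  # ~0.0
print(bb84_error_rate(eavesdropper=True))   # ~0.25: the interception doesn't go unnoticed
```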

The problem with the quantum internet is that those quantum states are fragile. Using them over macroscopic distances to transmit any interesting amount of data is possible, but impractical and I doubt it’ll ever make financial sense. It's like the soup tube start-up of this woman’s boyfriend on reddit. Put soup pipes under the ground, connect apartment buildings to soup kitchens, then sell soup subscription. That’s certainly possible. But it’s an expensive solution to a problem which doesn’t exist. Just like the quantum internet.

Don’t get me wrong guys, I am totally in favour of the quantum internet. Because it’ll be great for research in quantum foundations which I work on myself. But I’d really like to know how so many governments and CEOs got talked into believing they need a quantum internet. Because if I knew, I’d like to talk to them about my global soup network.

Mr President.

Sure. Chicken soup, tomato soup, noodle soup.

Cheese soup? You’re my kind of president.

According to several headlines you may have seen this week, a black hole has spewed out material, or burped it out, or puked it out. I guess we’re lucky the headlines haven’t used metaphors for worse digestive problems, at least not yet. But of course, black holes don’t puke out stuff. Nothing escapes from black holes, that’s why they’re called black holes. It’s just that when a black hole attracts a star and tears it into pieces, some of the remains may escape or they may orbit around the black hole for a long time without falling in.

Both cases have previously been observed. What’s new is that a black hole trapped a star and ejected the remains with a delay of three years. These observations were just published in The Astrophysical Journal by scientists at Harvard.

The black hole they observed is about six hundred sixty-five million light years away. It was seen sucking a star into its vicinity in 2018, then it was silent until June last year, when it suddenly, and rather unexpectedly, began to reject stuff. Some of the rejected material was seen moving away at half the speed of light, that's five times faster than what's been found in previous observations. It's a remarkably strange and violent event that'll give astrophysicists a lot to think about. But please. Black holes don't puke. They just sometimes don't clear their plate.

But speaking of black holes and puking, a few days ago, the New York Times published a new article by Dennis Overbye (Over-bee) titled “Black Holes May Hide a Mind-Bending Secret About Our Universe”. You may have hoped the mind-bending secret was the last digit of pi or maybe how the Brits managed to find a prime minister even more incompetent than Boris Johnson. But no, it’s a speculation about the relation between wormholes and entanglement.

One problem with the article is that it refers to "entanglement" as "spooky action at a distance", a mistake that Overbye could have avoided if he'd watched this channel, because I just complained about this last week. Einstein wasn't referring to entanglement when he used that phrase. He was referring to the collapse of the wave-function. But the bigger problem is that Overbye doesn't explain how little there is to learn from the allegedly "mind-bending secret". So let me fill you in.

The speculation that entanglement might be related to wormholes is based on another speculation, which is that the universe is a hologram. The idea is not supported by evidence. What would it mean if it was correct? It’d mean that you could encode the information inside our universe on its surface, more precisely its conformal surface.

Unfortunately, for all we know, our universe doesn’t have such a surface. The idea that the universe is a hologram only works if the cosmological constant is negative. The cosmological constant in our universe is positive. And even if that wasn’t so, the idea that the universe is a hologram would still not be supported by evidence. What does any of that have to do with real holograms? Nothing. I talked about this in an earlier video.

This means we are dealing with a mathematical speculation that has no connection to observation. It’s for all practical purposes untestable. That entanglement is kind of like being linked by a wormhole is then an interpretation of the mathematics which describes a universe that we don’t inhabit. It’s got nothing to do with real wormholes in the real universe.

It also isn’t a new idea. What’s new is that the physicists who work on this stuff, and that’s mostly former string theorists, want to get a share of all the money that’s now going into quantum technologies. So they’ve converted their wormhole interpretation of the holographic speculation into an algorithm that can be run on a quantum computer. This is then called a quantum simulation. Of course, the quantum computers to do that don’t actually exist, so it’s a hypothetical simulation.

In summary, a more accurate headline of Overbye’s piece would have been “Physicists want to simulate an interpretation of a speculation on a device which doesn’t exist”. This is why they don’t let me write my own headlines.

Liz.

No I don’t want to talk to you.

Because you’re changing laws faster than I can make jokes about them.

Okay, go ahead.

How many u-turns can you make in a ten-dimensional space before tying yourself into a knot. I don’t know but I can get you in touch with a few string theorists, they can teach you some tricks. Sure thing, bye.

NASA confirmed a few days ago that their attempt to change the course of an asteroid was successful. Calculations based on the so far available data indicate that the asteroid’s orbit was shortened by 32 minutes. According to a NASA spokesperson who requested anonymity, the biggest challenge of the mission has been to endure the Bruce Willis jokes.

The European Organization for Nuclear Research, CERN, announced that they will take energy-reduction measures. This step comes after energy prices have sky-rocketed in Europe due to lacking gas-supply from Russia. The main energy consumption at CERN comes from its flagship machine, the Large Hadron Collider, which averages about 1 point 3 terawatt hours a year, that’s about half the energy consumption of the nearby city of Geneva.

According to a press release, CERN will push their year-end technical stop forward by two weeks to November 28th. Next year, operations will be reduced by 20 percent, which will delay many of the planned experiments. I think it’s the right thing to do in the current situation.

Yes, even particle physicists have noticed that those aren't good times for building power-consuming mega machines for no particular purpose, so they, too, are hoping to get some money out of the quantum tech windfall. Which is why CERN has a Quantum Technology Initiative, like everyone else besides me, it seems. What do they want to do with it? Well, look for supersymmetry, for example. Some things don't change.

But they also want to contribute to societal progress. According to a press release from last week, “CERN has joined a coalition of science and industry partners proposing the creation of an Open Quantum Institute” in Geneva. CERN Director General Fabiola Gianotti said that the members of the institute “will work to ensure that quantum technologies have a positive impact for all of society.” Though possibly the most positive impact for all of society might be to spend the money on something more useful.

Initial tests of a nasal-spray vaccine by Oxford University researchers and AstraZeneca yielded poor results, stunting the potential release of a globally-accepted non-traditional vaccine. According to a press release from Oxford University, the immune response among trial participants was significantly weaker than that from a shot-in-the-arm vaccination. While no safety issues were found, the “weak and inconsistent” results mean that the nasal-spray vaccine will not be rolled out any time soon and will instead go back to the research stage.

The results are disappointing to many after China and India both approved COVID vaccines in September that can be delivered by the nose or mouth. But at least we won't have to decide whether to call it a nosiccine or an immunosation.

Now we come to the depressing part. The new 2022 Living Planet Index was released by the World Wildlife Fund last week. According to the report, roughly 32 thousand monitored populations of vertebrate species have declined by an average of 69 percent between nineteen-seventy and twenty-eighteen.

This percentage is not weighted by species population, so it’s difficult to interpret. Here’s an example. Suppose you have two YouTube channels. One channel sees its subscribers increase from one million to two million in a year. That’s a one hundred percent increase. The other had only ten subscribers to begin with and it lost them all. That’s a 100 percent decrease. Taken together you might call this a remarkable success. According to the World Wildlife Fund, that’s an average of zero percent.
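In case you want to check the arithmetic of that toy example, here it is in two lines of Python. This mirrors the unweighted averaging in the example above, not the WWF's actual index construction.

```python
changes = [+1.00, -1.00]   # +100 % (one million -> two million subscribers) and -100 % (ten -> zero)
print(sum(changes) / len(changes))   # 0.0: the unweighted average hides that one channel lost everything
```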

So the sixty-nine percent decrease doesn't mean that animal populations have decreased by that much. It also doesn't mean that 69 percent of animal species are in decline. Indeed, according to a paper published in Nature two years ago, the index is driven by just three percent of all vertebrate species. If those are removed from the sample, the trend switches to an increase. What does it mean? I don't know, but I'd say if a number is that difficult to interpret, don't put it into headlines.

A better report that was recently published but didn’t make as many headlines comes from Bird Life International. They found that 49 percent of bird species are in decline, and one in eight is at risk of extinction. The major factors for the decline of bird populations are linked to humans, including agricultural expansion and intensification, unsustainable logging, invasive species, exploitation, and climate change.

About 5 point three billion mobile phones will be thrown away this year, according to the international waste electrical and electronic equipment (WEEE) forum. They just released the results of a survey they conducted earlier this year in several European countries. The survey was not just about mobile devices, but about small electronic devices in general, including hair dryers and toasters. Many of those contain raw materials that are valuable and could be reused, such as copper, gold, or palladium. However, many people keep devices that they no longer use, rather than recycling them. According to the survey results, Europeans hoard on average 15 percent of those devices. The biggest hoarders are by far the Italians, following a long tradition that goes back to the Roman empire.

Yes.

The electronic waste forum, oh really.

My phone? It’s about as old as I am, 21 years, take or give.

No, I don’t want to recycle it. I’m preserving the natural diversity of the technological ecosystem. Thank you.

That was the depressing news for today, let’s finish on something more cheerful. Cassie the robot has broken the world-record for the fastest 100 meter run by a bi-pee-dal robot.

Cassie was developed by engineers at Oregon State University and has been in development since 2017. The latest achievement became possible thanks to intensive training with advanced machine learning. Cassie was able to complete the 100 meters in 24 point seven three seconds which earned it an entry into the Guinness Book of world records. It also demonstrated that the best way to reduce your marathon time is taking off your upper body.

Saturday, October 15, 2022

Why are we getting fatter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Today I want to talk about personal energy storage. I don’t mean that drawer we all have that’s full of batteries in the wrong size, I mean our expanding waistlines that store energy in form of fat. What are the main causes of obesity, how much of a problem is it, and what are you to make of the recent claims that obesity is caused by plastic? That’s what we’ll talk about today.

Obesity is common and becoming more common, so common in fact that the World Health Organization says “the issue has grown to epidemic proportions.” The proportions of the issue are, I guess, approaching that of a sphere, and the word “epidemic” just means it’s a widely spread health problem. Though, interestingly enough, obesity is in some sense infectious. Not because it’s spread by a virus, but because eating habits spread in social networks. If your friends are obese, you’re more likely to be obese too.

In the past decades, the fraction of people who are obese has steadily increased everywhere. In the United States, more than a third of adults are now obese. Canada, the UK, and Germany are not far behind. The United States, by the way, is not the world leader in obesity; that title goes to a Pacific island by the name of Nauru, where 60 percent of adults are obese. I have no idea what is going on there, but I want to move there when I retire.

Jokes aside, the economic burden of obesity is staggering. According to an estimate from the Milken Institute for the United States alone, obesity and overweight account for direct health care costs of at least four-hundred eighty billion dollars per year. The indirect costs due to lost economic productivity are even higher, about 1 point two four trillion dollars per year. Together that’s more than 9 percent of the entire American GDP.

But what, exactly, is obesity? The World Health Organization defines obesity as an “abnormal or excessive fat accumulation that may impair health”. I believe that an “excessive fat accumulation” does not mean stuffing your fridge with so much cheese that it hits you in the face when you open the door. They probably mean an accumulation of fat in the human body. This is why obesity is most commonly measured with the body mass index, BMI for short, that’s your weight in kilogram divided by the square of your height in meters. For adults, a BMI over 25 kilogram per square meter is overweight, and over 30 it’s obese.
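Since the BMI will come up a few more times, here is the definition as a few lines of Python. The cut-offs are the WHO adult thresholds just quoted; the example person is made up.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m**2

def who_category(b):
    # WHO adult cut-offs quoted above: 25 kg/m^2 = overweight, 30 kg/m^2 = obese
    return "obese" if b >= 30 else "overweight" if b >= 25 else "not overweight"

example = bmi(95, 1.75)                           # a 95 kg person who is 1.75 m tall
print(round(example, 1), who_category(example))   # 31.0 obese
```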

The BMI is widely used because it’s easy to measure, but it has several shortcomings. The biggest issue is that it doesn’t take into account how much of your body weight is fat. You can put on muscle rather than fat and your BMI goes up. A more reliable measure is the amount of body fat.

There are no accepted definitions for obesity based on the percent of body fat, because academia wouldn't be academia if people could just agree on definitions and move on. But a commonly used indicator for overweight is more than 25 percent fat for men and more than 35 percent for women. Turns out that about 12 percent of men who have a BMI above 25 have a normal amount of body fat, so probably just a lot of muscle. And on the flip side, about 15 percent of women with a BMI below 25 have an abnormally high percentage of body fat.

But using body fat as a measure for obesity is also overly simplistic because the problem isn’t actually the fat, it’s where you store it.

You see, the body removes fat from the blood and then tries to store it in fat cells. But in adults the number of fat cells doesn’t increase, it’s just the size of the cells that increases. And there’s only so much fat that a single cell can store. The problem begins when all the fat cells are full. Because then the body has to store the fat in places where it doesn’t belong and it does that notably around the liver and in muscle tissue. That’s right, the fat isn’t the problem, it’s the lack of fat cells that’s the problem. I would like to add that for the same reason books aren’t the problem, it’s the lack of bookshelves that’s the problem.

The out-of-place storage of fat seems to be the major reason for most of the health problems associated with obesity. Among others, that’s an increased risk for heart attacks and strokes, type 2 diabetes, breathing problems, knee, hip, and back problems, and an increased risk for certain types of cancer. But since there are individual differences in how much fat the body can store in the available fat cells, the onset of disease doesn’t happen at any particular amount of fat. About 20 percent of people with a BMI above thirty appear to be metabolically healthy.

This is why researchers have tried to figure out specifically where too much fat is a problem. But it's not as easy as saying, if your waist circumference is more than so-and-so then that's too much. To begin with, as you have undoubtedly noticed, men and women store fat in different places. Men store it mostly at the belly, women store it primarily at the breasts, thighs (!th), and bottom, which is why you never get to see my bottom. It doesn't fit on the screen. So, if you use a measure like waist circumference, at the very least you have to use a different one for men and women. To make matters more complicated, the onset of metabolic disease with waist circumference seems to depend on the ethnic group.

To return to the definition from the WHO, we see that it’s not all that easy to figure out when “an accumulation of fat impairs health”. It’s clearly not just the number on your scale that matters and it’s true that some obese people are healthy. Still, after all is said and done, the BMI is very strongly correlated with the ill effects of obesity.

So what are the possible causes of this obesity epidemic? Yes, the universe also expands, but it’s the space between galaxies that expands, not planets, or people on planets. So I’m afraid we can’t blame Einstein for this one. But if not Einstein, then what?

The first possible cause of obesity is too much food. Bet you didn’t think of this one.

Food has become more easily accessible to almost everyone on the planet, but especially in the developed world. It doesn’t help that food companies have an incentive to make you want to eat more. In a familiar pattern, the sugar industry has tried to play down evidence that sugar contributes to coronary heart disease for decades, though the evidence is now so overwhelming that they’ve given up denying it.

But just saying obesity is caused by easy access to food is a poor explanation because not everyone who has food in abundance also gets fat. Let’s therefore look at the second possible cause, the wrong type of food.

Modern food is nothing like the food our ancestors ate. A lot of the stuff we eat today is highly processed, which means for example that meat is shredded to small pieces, mixed with saturated fats and preservatives, and then formed to shapes. Processed food also often lacks fiber and protein and is instead rich in salt, sugar and fat, possibly with chemical compositions that don’t occur in nature.

The issue with processed food is that we may say we're "burning energy" but, I mean, I'm just a physicist, but I believe the human body doesn't literally burn food. It's rather that we have to take apart the molecules to extract energy from them. And taking apart the food requires energy, so the net yield depends on how easy the food is to digest. Processed food is easy to digest, so we get more energy out of it, and put on weight faster.

The correlation between processed food and overweight is well established. For example, in 2020 a study looked at a sample of about 6000 adults from the UK. They found that the highest consumption of processed food was associated with 90 percent higher odds of being obese. Another paper from just a few weeks ago analyzed data from adolescents who had participated in a Health and Nutrition Survey in the United States. They too found that the highest consumption of ultra-processed food was associated with up to 60 percent higher odds of being obese.

A particular problem are trans fats, that are fats which don’t identify with the hydrogen bonds they were assigned at birth. Trans fats are chemically modified so that they can be produced in solid or semi-solid form which makes them handy for the food industry. The consumption of trans fats is positively correlated not only with obesity but also with cardiovascular diseases, disorders of the nervous system, and certain types of cancer, among others. Forty countries have banned or are in the process of banning trans fats. I’m not a doctor but I guess that means they really aren’t healthy.

Let’s then look at the third possible cause, lack of exercise. Just last year, a group of European researchers published a review of reviews on the topic, so I guess that’s a meta-meta-review. They found that exercise led to a significant weight loss in obese people. But when they say “significant” they mean statistically significant not that it’s a lot of weight. If you look at the numbers they are referring to fat loss of about 2 kilogram on average, a difference that most obese people would hardly notice. Part of the reason may be that exercise has a rebound effect. If people start exercising, they also eat more.

Don't get me wrong, exercise has a lot of health benefits in and of itself, so it's a good thing to do, but it seems that its impact on reducing obesity is limited.

The next cause we’ll look at are your genes. Obesity has a strong genetic component. Twin, family, and adoption studies show that the chance that obesity is passed down the family line lies between 40 and 70 percent.

The genetic influence on obesity comes in two types, monogenic and polygenic. The monogenic type is caused by only one of a number of genes that affect the regulation of food intake. Monogenic obesity is rare and affects fewer than 5 percent of obese people. Those who are affected usually gain weight excessively already as infants.

Polygenic obesity has contributions from many different genes, though some of them stand out. For example, the so-called FTO gene affects the production of a hormone called ghrelin. Ghrelin is often called the “hunger hormone” because it regulates appetite. The ghrelin level is high if you’re hungry and decreases after you’ve eaten. A study in 2013 found that people with the obesity variant of the FTO gene showed less reduction of ghrelin levels after they’d eaten. So they’ll stay hungry longer, and you don’t need to be Einstein to see how this can lead to weight gain.

How many obese people are obese because of their genes? No one really knows. For one, not all genes that contribute to obesity are known, not all people who carry those genes are obese, and the prevalence of certain genes depends strongly on the population you look at. For example, the variation of the FTO gene that's correlated with an increase of BMI is present in about 42 percent of people with European ancestry, but only 12 percent of those with African ancestry. So, it's complicated, but that's why you're here, so let's make things more complicated and look at the next possible cause of obesity, the microbiome.

The microbiome, that’s all those bacteria, viruses, and fungi that live in your gut. They strongly affect what you can digest and how well. Several studies have shown that obese people tend to have a somewhat different microbiome. However, some other studies seem to contradict that, and it’s also again unclear whether this is a cause or an effect of obesity. So my conclusion on this one is basically, more work is needed. The next suspect is your circadian rhythm.

Your inner clock regulates metabolic processes, and that includes digestion. If you mess with it and eat or sleep at the wrong time, you might extract more or less energy from food. A 2018 review paper reported strong evidence that disrupted sleep and circadian misalignment, such as working night shifts, contribute to obesity.

However, while the correlation is again statistically significant, the effect isn’t particularly large. A survey by the American Cancer Society found that in women, missing out on sleep is associated with a BMI that’s greater by 1.39 kilogram per square meter, while in men it’s a difference of up to 0 point 57 kilogram per square meter. That typically converts to one or two kilogram in weight.
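To see how a BMI difference translates into kilograms, you multiply it by the square of the height; here is the conversion for the men's figure quoted above, with the height as an assumed example value.

```python
# weight difference = BMI difference * height^2
delta_bmi_men = 0.57    # kg/m^2, the figure for men quoted above
height = 1.78           # m, an assumed example height
print(round(delta_bmi_men * height**2, 1))   # ~1.8 kg
```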

Another possible cause of obesity is stress. For example, a 2019 paper looked at a group of about 3000 adult Americans and their exposure to a wide range of psychosocial stressors, such as financial strain, relationship trouble, people who believe in the many worlds interpretation of quantum mechanics, etc. They found that stress increased the risk of obesity by 15 to 25 percent.

But correlation doesn’t mean causation. For one thing, stressed people don’t sleep well, which we’ve seen also makes obesity more likely. Or maybe they are stressed because they’re obese to begin with?

Yet another possible cause of obesity that researchers have looked at are vitamin and mineral deficiencies. These are indeed common among overweight and obese individuals. That’s been shown by a number of independent studies. But in this case, too, the direction of causation remains unclear because excess body weight alters how well those nutrients are absorbed and distributed.

Another option to explain why you’re fat is to blame your mother. Indeed, there’s quite convincing evidence that if your mother smoked while pregnant, you’re more likely to be obese. For example, a 2016 meta-review from a group in the UK found that children born to smoking mothers had a 55 percent increased risk of being obese. A similar meta-analysis from 2020 came to a similar conclusion. However, these meta-reviews are difficult to interpret because there are many factors that have to be controlled for. Maybe your mother smoked because she’s stressed and that’s why you didn’t get enough sleep and now you’re fat. This is all getting very confusing, so let’s talk about the two newest ideas that scientists have come up with to explain the obesity epidemic, viruses and plastic.

Did I say obesity is not caused by a virus, oops!

Turns out that a number of viruses are known to cause obesity in chickens, mice, and rats. And it’s not just animals. A meta-analysis from 2013 showed that a previous Adenovirus 36 infection is correlated with an increased risk of obesity of about 60 percent. A few other viruses are suspect, too, but this Adenovirus 36 seems to be the biggest offender. An infection with adenovirus 36 has the symptoms of a common cold but can also lead to eye infections.

A review published in 2021 in the International Journal of Obesity found that 31 out of 37 studies reported a positive correlation between Adenovirus 36 antibodies and weight gain, obesity, or metabolic changes. However, this virus is incredibly common. About every second person has antibodies against it, not all of them are obese, and not all obese people have had an infection. So this might play a role for some people, but it's unclear at the moment how relevant it is. In any case, face masks also protect from obesity, just don't take them off.

This finally brings us to the headlines you may have seen some weeks ago claiming that plastic makes us fat. These headlines were triggered by three review papers that appeared simultaneously in the same journal about the role of obesogens in the obesity epidemic.

"Obesogen" is a term coined in 2006 by two researchers from UC Irvine. Obesogens are a type of "endocrine-disrupting chemicals" that resemble natural hormones and can interfere with normal bodily functions. About a thousand chemicals are currently known to have, or are suspected to have, endocrine-disrupting effects. About 50 of those are believed to be obesogens, which affect the metabolic rate, the composition of the microbiome, or the hormones that influence eating behavior.

Obesogens are in cosmetics, preservatives, sun lotion, furniture, electronics, plastics, and the list goes on. From there they drift into the environment. They have been found also in dust, water, and even in medication and processed foods.

The review papers report fairly convincing evidence that obesogens affect the development of fat and muscle cells in a petri dish, not to be confused with a peach tree disk. There is also some evidence that obesogens affect the development of mice and other animals. However, the evidence that they play a significant role for obesity is not quite as convincing.

The majority of studies on the topic show a positive correlation between obesogen exposure and an elevated BMI, especially when exposure occurs during pregnancy or in childhood. However, as we saw earlier, just because a correlation is statistically significant doesn’t mean the effect is large. How large the effect may be is at present guess work. In a recent interview with The Guardian the lead author of one of the reviews, Robert Lustig, said “If I had to guess, based on all the work and reading I’ve done, I would say obesogens will account for about 15 to 20 percent of the obesity epidemic. But that’s a lot.”

Maybe. If it was correct. But maybe asking the lead author isn’t the most objective way to judge the relevance of a study. There are also some studies which didn’t find correlations between obesogen exposure and obesity risk and again, the results are difficult to interpret, because exposure to certain chemicals depends on living conditions that are correlated with all kinds of demographic factors.

Despite that, the researchers claim that “epidemiological studies substantiate a causal link between obesogen exposure and human obesity”. They don’t stop there but put forward a new hypothesis for the cause of obesity.

In their own words: “This alternative hypothesis states that obesity is a growth disorder, in effect, caused by hormonal/enzymatic defects triggered by exposures to environmental chemicals and specific foods in our diet.” You can see how this would make big headlines. It’s such a convenient explanation, all those damned chemicals are making us fat. However, until there’s data on how big the effect is, I will remain skeptical of this hypothesis.

So, let's wrap up. The factors with the best evidence for making obesity more likely are genetics and the consumption of processed food. Factors that may make the situation worse are stress and a lack of sleep, exercise, or essential vitamins and minerals, as well as an abnormal microbiome, but in all those cases it's difficult to tell cause and effect apart. When it comes to viruses and exposure to certain chemicals, the headlines are bigger than the evidence.

If you’re interested in a video on possible treatment options for obesity, let us know in the comments.