Saturday, January 16, 2021

Was the universe made for us?

[This is a transcript of the video embedded below.]


Today I want to talk about the claim that our universe is especially made for humans, or fine-tuned for life. According to this idea, it’s extremely unlikely that our universe would just happen to be the way it is by chance, and the fact that we nevertheless exist requires explanation. This argument is popular among some religious people, who use it to claim that our universe needs a creator, and the same argument is used by physicists to pass off unscientific ideas like the multiverse or naturalness as science. In this video, I will explain what’s wrong with this argument, and why the observation that the universe is this way and not some other way is evidence neither for nor against god or the multiverse.

Ok, so here is how the argument goes in a nutshell. The currently known laws of nature contain constants. Some of these constants are, for example, the fine-structure constant that sets the strength of the electromagnetic force, Planck’s constant, Newton’s constant, the cosmological constant, the mass of the Higgs boson, and so on.

Now you can ask what a universe would look like in which one or several of these constants were a tiny little bit different. It turns out that for some changes to these constants, processes that are essential for life as we know it could not happen, and we could not exist. For example, if the cosmological constant were too large, galaxies would never form. If the electromagnetic force were too strong, nuclear fusion could not light up stars. And so on. There’s a long list of calculations of this type, but they’re not the relevant part of the argument, so I don’t want to go through the whole list.

The relevant part of the argument goes like this: It’s extremely unlikely that these constants would happen to have just exactly the values that allow for our existence. Therefore, the universe as we observe it requires an explanation. And then that explanation may be god or the multiverse or whatever your pet idea may be. Particle physicists use the same type of argument when they ask for the next larger particle collider. In that case, they claim it requires explanation why the mass of the Higgs boson happens to be what it is. This is called an argument from “naturalness.” I explained this in an earlier video.

What’s wrong with the argument? What’s wrong is the claim that the values of the constants of nature that we observe are unlikely. There is no way to ever quantify this probability, because we will never measure a constant of nature that has a value other than the one it does have. If you want to quantify a probability, you have to collect a sample of data. You could do that, for example, if you were throwing dice. Throw them often enough, and you get an empirically supported probability distribution.
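To illustrate the difference, here is a minimal Python sketch of what an “empirically supported probability distribution” looks like for dice, which you can re-roll, in contrast to a constant of nature, which you cannot. The code is just my illustration of the dice example, nothing more:

```python
import random
from collections import Counter

# Repeatable observations give an empirical probability distribution:
# roll a die many times and count how often each face comes up.
rolls = [random.randint(1, 6) for _ in range(100_000)]
counts = Counter(rolls)
for face in sorted(counts):
    print(f"P({face}) is approximately {counts[face] / len(rolls):.3f}")

# A constant of nature gives us exactly one "observation" that can never
# be repeated, so no such empirical distribution can be constructed.
```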

But we do not have an empirically supported probability distribution for the constants of nature. And why is that? Because… they are constant. Saying that the only value we have ever observed is “unlikely” is a scientifically meaningless statement. We have no data, and will never have data, which would allow us to quantify the probability of something we cannot observe. There’s nothing quantifiably unlikely, therefore, there’s nothing in need of explanation.

If you look at the published literature on the supposed “fine-tuning” of the constants of nature, the mistake is always the same: they just postulate a particular probability distribution. It’s this postulate that leads to their conclusion. This is one of the best-known logical fallacies, called “begging the question” or “circular reasoning.” You assume what you need to show. Instead of showing that a value is unlikely, they pick a specific probability distribution that makes it unlikely. They could just as well pick a probability distribution that would make the observed values likely, just that this doesn’t give them the result they want to have.

And, by the way, even if you could measure a probability distribution for the constants of nature, which you can’t, then the idea that our particular combination of constants is necessary for life would still be wrong. There are several examples in the scientific literature of laws of nature with constants nothing like our own that, for all we can tell, allow for chemistry complex enough for life. Please check the info below the video for references.

Let me be clear though that fine-tuning arguments are not always unscientific. The best-known example of a good fine-tuning argument is a pen balanced on its tip. If you saw that, you’d be surprised, because this is very unlikely to happen just by chance. You’d look for an explanation, a hidden mechanism. That sounds very similar to the argument for the fine-tuning of the constants of nature, but the balanced pen is a very different situation. The claim that the balanced pen is unlikely is based on data. You are surprised because you don’t normally encounter pens balanced on their tip. You have experience, meaning you have statistics. But it’s completely different if you talk about changing constants that cannot be changed by any physical process. Not only do we not have experience with that, we can never get any experience.

I should add there are theories in which the constants of nature are replaced with parameters that can change with time or place, but that’s a different story entirely and has nothing to do with the fine-tuning arguments. It’s an interesting idea though. Maybe I should talk about this some other time? Let me know in the comments.

And for the experts, yes, I have so far specifically referred to what’s known as the frequentist interpretation of probability. You can alternatively interpret the term “unlikely” using the Bayesian interpretation of probability. In the Bayesian sense, saying that something you observe was “unlikely” means you didn’t expect it to happen. But with the Bayesian interpretation, the whole argument that the universe was especially made for us doesn’t work. That’s because in that case it’s easy enough to find reasons why your probability assessment was just wrong, and nothing’s in need of explanation.

Example: Did you expect a year ago that we’d spend much of 2020 in lockdown? Probably not. You probably considered that unlikely. But no one would claim that you need god to explain why it seemed unlikely.

What does this mean for the existence of god or the multiverse? Both are assumptions that are unnecessary additions to our theories of nature. In the first case, you say “the constants of nature in our universe are what we have measured, and god made them”; in the second case, you say “the constants of nature in our universe are what we have measured, and there are infinitely many other unobservable universes with other constants of nature.” Neither addition does anything whatsoever to improve our theories of nature. But this does not mean god or the multiverse do not exist. It just means that evidence cannot tell us whether they do or do not exist. It means that god and the multiverse are not scientific ideas.

If you want to know more about fine-tuning, I have explained all this in great detail in my book Lost in Math.

In summary: Was the universe made for us? We have no evidence whatsoever that this is the case.


You can join the chat on this video today (Saturday, Jan 16) at 6pm CET/Eastern Time here.

Saturday, January 09, 2021

The Mathematics of Consciousness

[This is a transcript of the video embedded below.]


Physicists like to think they can explain everything, and that, of course, includes human consciousness. And so in the last few decades they’ve set out to demystify the brain by throwing math at the problem. Last year, I attended a workshop on the mathematics of consciousness in Oxford. Back then, when we still met other people in real life, remember that?

I find it to be a really interesting development that physicists take on consciousness, and so, today I want to talk a little about ideas for how consciousness can be described mathematically, how that’s going so far, and what we can hope to learn from it in the future.

The currently most popular mathematical approach to consciousness is integrated information theory, IIT for short. It was put forward by the neuroscientist Giulio Tononi in two thousand and four.

In IIT, each system is assigned a number, called big Phi, which is the “integrated information” and supposedly a measure of consciousness. The better a system is at distributing information while processing it, the larger Phi. A system that’s fragmented and has many parts that calculate in isolation may process lots of information, but this information is not “integrated,” so Phi is small.

For example, a digital camera has millions of light receptors. It processes large amounts of information. But the parts of the system don’t work much together, so Phi is small. The human brain, on the other hand, is very well connected, and neural impulses constantly travel from one part to another. So Phi is large. At least that’s the idea. But IIT has its problems.

One problem with IIT is that computing Phi is ridiculously time-consuming. The calculation requires that you divide up the system you are evaluating in every possible way and then calculate the connections between the parts. This takes up an enormous amount of computing power. Estimates show that even for the brain of a worm, with only about three hundred neurons, calculating Phi would take several billion years. This is why measurements of Phi that have actually been done in the human brain have used incredibly simplified definitions of integrated information.
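To get a feeling for where the computing time goes, consider just the number of ways to cut a system into two parts, which is a lower bound on what the Phi calculation has to search through. This little Python sketch is my own illustration of the combinatorics, not part of any IIT software:

```python
# Number of ways to split n elements into two non-empty, unordered parts:
# 2^(n-1) - 1. The full Phi calculation searches over even more general
# partitions, so this is only a lower bound on the size of the search space.
def bipartitions(n: int) -> int:
    return 2 ** (n - 1) - 1

for n in [10, 50, 100, 300]:
    print(f"n = {n:3d}: about {bipartitions(n):.2e} bipartitions")
```

Already at n = 300 you get a number with about ninety digits, which is why exact calculations of Phi are hopeless for anything resembling a brain.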

Do these simplified definitions at least correlate with consciousness? Well, some studies have claimed they do. Then again, others have claimed they don’t. The magazine New Scientist, for example, interviewed Daniel Bor from the University of Cambridge and reports:
“Phi should decrease when you go to sleep or are sedated via a general anesthetic, for instance, but work in Bor’s lab has shown that it doesn’t. ‘It either goes up or stays the same,’ he says.”
I contacted Bor and his group, but they wouldn’t come forward with evidence to back up this claim. I do not actually doubt it’s correct, but I do find it somewhat peculiar that they’d make such a statement to a journalist and then not provide evidence for it.

Yet another problem for IIT is, as the computer scientist Scott Aaronson pointed out, that one can think of rather trivial systems that solve some mathematical problem and distribute information during the calculation in such a way that Phi becomes very large. This demonstrates that Phi in general says nothing about consciousness, and in my opinion this just kills the idea.

Nevertheless, integrated information theory was much discussed at the Oxford workshop. Another topic that received a lot of attention is the idea by Roger Penrose and Stuart Hameroff that consciousness arises from quantum effects in the human brain, not in synapses, but in microtubules. What the heck are microtubules? Microtubules are tiny tubes made of proteins that are present in most cells, including neurons. According to Penrose and Hameroff, in the brain these microtubules can enter coherent quantum states, which collapse every once in a while, and consciousness is created in that collapse.

Most physicists, me included, are not terribly excited about this idea, because it’s generally hard to create coherent quantum states of fairly large molecules, and it doesn’t help if you put the molecules into a warm and wiggly environment like the human brain. For the Penrose and Hameroff conjecture to work, the quantum states would have to survive at least a microsecond or so. But the physicist Max Tegmark has estimated that they would last more like a femtosecond, that’s only ten to the minus fifteen seconds.

Penrose and Hameroff are not the only ones who pursue the idea that quantum mechanics has something to do with consciousness. The climate physicist Tim Palmer also thinks there is something to it, though he is more concerned with the origins of creativity specifically than with consciousness in general.

According to Palmer, quantum fluctuations in the human brain create noise, and that noise is essential for human creativity, because it can help us when a deterministic, analytical approach gets stuck. He believes the sensitivity to quantum fluctuations developed in the human brain because that’s the most energy-efficient way of solving problems, but it only becomes possible once you have small and thin neurons, of the type you find in the human brain. Therefore, Palmer has argued that low-energy transistors which operate probabilistically rather than deterministically might help us develop artificial intelligence that’s actually intelligent.

Another talk that I thought was interesting at the Oxford workshop was that by Ramon Erra. One of the leading hypotheses for how cognitive processing works is that it uses the synchronization of neural activity in different regions of the brain to integrate information. But Erra points out that during an epileptic seizure, different parts of the brain are highly synchronized.

In this figure, for example, you see the correlations between the measured activity of a hundred fifty or so brain sites. Red is correlated, blue is uncorrelated. On the left is the brain during a normal conscious phase; on the right is a seizure. So, clearly, too much synchronization is not a good thing. Erra has therefore proposed that a measure of consciousness could be the entropy in the correlation matrix of the synchronization, which is low both for highly uncorrelated and highly correlated states, but large in the middle, where you expect consciousness.
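The talk did not come with a formula I can quote here, but one plausible way to make such a measure concrete is to compute the entropy of the distribution of pairwise correlations. The following Python sketch is my own toy version under that assumption, with made-up input matrices:

```python
import numpy as np

def correlation_entropy(corr: np.ndarray, bins: int = 20) -> float:
    # Histogram the off-diagonal correlation coefficients and compute the
    # Shannon entropy of that histogram. All-zero correlations (no
    # synchronization) and all-one correlations (seizure-like total
    # synchronization) are maximally peaked, hence low entropy; a broad
    # mix of values in between gives high entropy.
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    hist, _ = np.histogram(off_diag, bins=bins, range=(-1.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

n = 150
print(correlation_entropy(np.zeros((n, n))))  # uncorrelated: low entropy
print(correlation_entropy(np.ones((n, n))))   # fully correlated: low entropy
rng = np.random.default_rng(0)
mixed = rng.uniform(-1.0, 1.0, (n, n))  # toy stand-in for a mixed state
print(correlation_entropy(mixed))       # broad mix of values: high entropy
```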

However, I worry that this theory has the same problem as integrated information theory, which is that there may be very simple systems that you do not expect to be conscious but that nevertheless score highly on this simple measure of synchronization.

One final talk that I would like to mention is that by Jonathan Mason. He asks us to imagine a stack of compact disks and a disk player that doesn’t know in which order to read out the bits on a disk. For the first disk, you can then always find a readout order that will result in a particular bit sequence, which could correspond, for example, to your favorite song.

But if you then use that same readout order for the next disk, you most likely just get noise, which means there is very little information in the signal. So if you have no idea how to read out information from the disks, what would you do? You’d look for a readout process that maximizes the information, or minimizes the entropy, of the readout result across all of the disks. Mason argues that the brain uses a similar principle of entropy minimization to make sense of information.

Personally, I think all of these approaches are way too simple to be correct. In the best case, they’re first steps on a long way. But as they say, every journey starts with a first step, and I certainly hope that in the next decades we will learn more about just what it takes to create consciousness. This might not only allow us to create artificial consciousness and help us tell when patients who can’t communicate are conscious, it might also help us make sense of the unconscious part of our thoughts, so that we can become more conscious of them.

You can find recordings of all the talks at the workshop right here on YouTube; please check the info below the video for references.


You can join the chat about this video today (Saturday, Jan 9) at noon Eastern Time or 6pm CET here.

Saturday, January 02, 2021

Is Time Real? What does this even mean?

[This is a transcript of the video embedded below.]


Time is money. It’s also running out. Unless possibly it’s on your side. Time flies. Time is up. We talk about time… all the time. But does anybody actually know what it is? It’s 3:30. That’s not what I mean. Then what do you mean? What does it mean? That’s what we will talk about today.

First things first, what is time? “Time is what keeps everything from happening at once,” as Ray Cummings put it. Funny, but not very useful. If you ask Wikipedia, time is what clocks measure. Which brings up the question: what is a clock? According to Wikipedia, a clock is what measures time. Huh. That seems a little circular.

Luckily, Albert Einstein gets us out of this conundrum. Yes, this guy again. According to Einstein, time is a dimension. This idea goes back originally to Minkowski, but it was Einstein who used it in his theories of special and general relativity to arrive at testable predictions that have since been confirmed countless times.

Time is a dimension, similar to the three dimensions of space, but with a very important difference that I’m sure you have noticed. We can stand still in space, but we cannot stand still in time. So time is not the same as space. But because time is a dimension, you can rotate into the time-direction, like you can rotate into a direction of space. In space, if you are moving in, say, the forward direction, you can turn forty-five degrees, and then you’ll instead move into a direction that’s a mixture of forward and sideways.

You can do the same with a time and a space direction. And it’s not even all that difficult. The only thing you need to do is change your velocity. If you are standing still and then begin to walk, that not only changes your position in space, it also changes which direction you are going in space-time. You are now moving into a direction that is a combination of both time and space.

In physics, we call such a change of velocity a “boost” and the larger the change of velocity, the larger the angle you turn from time to space. Now, as you all know, the speed of light is an upper limit. This means you cannot turn from moving only through time and standing still in space to moving only in space and not in time. That does not work. Instead, there’s a maximal angle you can turn in space-time by speeding up. That maximal angle is by convention usually set to 45 degrees. But that’s really just convention. For the physics it matters only that it’s some angle smaller than ninety degrees.
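For those who want to see this in formulas: a boost with velocity v mixes the time and space coordinates like a rotation, just with hyperbolic functions of the so-called rapidity η instead of sines and cosines,

$$ct' = ct\,\cosh\eta - x\,\sinh\eta, \qquad x' = x\,\cosh\eta - ct\,\sinh\eta, \qquad \tanh\eta = \frac{v}{c}.$$

Because the hyperbolic tangent is always smaller than one, no finite boost reaches the speed of light; that is the algebraic version of the maximal angle.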

The consequence of time being a dimension, as Einstein understood, is that time passes more slowly if you move, relative to the case in which you are not moving. This is the “time dilation.”
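In formulas: if a time Δt passes for an observer at rest, then a clock moving at speed v ticks off only the shorter time

$$\Delta\tau = \Delta t\,\sqrt{1-\frac{v^2}{c^2}}\,,$$

which is less than Δt for any nonzero v.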

How do we know this is correct? We can measure it. How do you measure a time-dimension? It turns out you can measure the time-dimension with – guess what – the things we normally call clocks. The relevant point here is that this definition is no longer circular. We defined time as a dimension in a specific theory. Clocks are what we call devices that measure this.

How do clocks work? A clock is anything that counts how often a system returns to the same, or at least a very similar, configuration. For example, when the Earth orbits the sun once and returns to almost the same place, we call that a year. Or take a pendulum. If you count how often the pendulum is, say, at one of its turning points, that gives you a measure of time. The reason this works is that once you have a theory for space-time, you can calculate that the thing you called time is related to the recurrences of certain events in a regular way. Then you measure the recurrence of these events to tell the passage of time.
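For the pendulum, the recurrence happens at a fixed rate. For small swings, a pendulum of length L in gravitational acceleration g returns to the same turning point once per period

$$T = 2\pi\sqrt{\frac{L}{g}}\,,$$

so counting N returns gives the elapsed time t = N T.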

But then what do physicists mean if they say time is not real, as for example Lee Smolin has argued. As I have discussed in a series of earlier videos, we call something “real” in scientific terms if it is a necessary ingredient of a theory that correctly describes what we observe. Quarks, for example, are real, not because we can see them – we cannot – but because they are necessary to correctly describe what particle physicists measure at the Large Hadron Collider. Time, for the same reason, is real, because it’s a necessary ingredient for Einstein’s theory of General Relativity to correctly describe observations.

However, we know that General Relativity is not fundamentally the correct theory. By this I mean that this theory has shortcomings that have so far not been resolved, notably singularities and the incompatibility with quantum theory. For this reason, most physicists, me included, think that General Relativity is only an approximation to a better theory, usually called “quantum gravity.” We don’t yet have a theory of quantum gravity, but there is no shortage of speculations about what its properties may be. And one of the properties that it may have is that it does not have time.

So, this is what physicists mean when they say time is not real. They mean that time may not be an ingredient of the to-be-found theory of quantum gravity or, if you are even more ambitious, a theory of everything. Time then exists only on an approximate “emergent” level.

Personally, I find it misleading to say that in this case, time is not real. It’s like claiming that because our theories for the constituents of matter don’t contain chairs, chairs are not real. That doesn’t make any sense. But leaving aside that it’s bad terminology, is it right that time might fundamentally not exist?

I have to admit it’s not entirely implausible. That’s because one of the major reasons why it’s difficult to combine quantum theory with general relativity is that… time is a dimension in general relativity. In quantum mechanics, on the other hand, time is not something you can measure. It is not “an observable,” as physicists say. In fact, in quantum mechanics it is entirely unclear how to answer a seemingly simple question like “what is the probability for the arrival time of a laser signal?” Time is treated very differently in these two theories.

What might a theory look like in which time is not real? One possibility is that our space-time might be embedded into just space, but with a boundary where time turns to space. Note how carefully I have avoided saying “before it turns to space.” “Before” is arguably a meaningless word if you have no direction of time. It would be more accurate to say that what we usually call “the early universe,” where we expect a “big bang,” may actually have been outside of space-time; there might have been only space, no time.

Another possibility that physicists have discussed is that deep down the universe and everything in it is a network. What we usually call space-time is merely an approximation to the network in cases when the network is particularly regular. There are actually quite a few approaches that use this idea, the most recent one being Stephen Wolfram’s Hypergraphs.

Finally, I should mention Julian Barbour who has argued that we don’t need time to begin with. We do need it in General Relativity, which is the currently accepted theory for the universe. But Barbour has developed a theory that he claims is at least as good as General Relativity, and does not need time. Instead, it is a theory only about the relations between configurations of matter in space, which contain an order that we normally associate with the passage of time, but really the order in space by itself is already sufficient. Barbour’s view is certainly unconventional and it may not lead anywhere, but then again, maybe he is onto something. He has just published a new book about his ideas.

Thanks for your time, see you next week.


You can join the chat on this topic today (Jan 2nd) at 6pm CET/noon Eastern Time here.