Saturday, July 30, 2022

Is the brain a computer?

If you like my content, you may also like my new newsletter, to which you can sign up here (bottom of page). It's a weekly summary of the most interesting science news I came across in the past week. It's completely free and you can unsubscribe at any time.


[What follows is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


My grandmother was a computer, and I don’t mean there was a keypad on her chest. My grandmother calculated orbits of stars, with logarithmic tables and a slide rule. But in which sense are brains similar to the devices we currently call computers, and in which sense not? What’s the difference between what they can do? And is Roger Penrose right in saying that Gödel’s theorem tells us human thought can’t just be computation? That’s what we’ll talk about today.

If you have five apples and I give you two, how many apples do you have in total? Seven. That’s right. You just did a computation. Does that mean your brain is a computer? Well, that depends on what you mean by “computer” but it does mean that I have two fewer apples than I did before. Which I am starting to regret. Because I could really go for an apple right now. Could you give me one of them back?

So whether your brain is a computer depends on what you mean by “computer”. A first attempt at answering the question may be to say a computer is something that does a computation, and a computation, according to Google is “the action of mathematical calculation”. So in that sense the human brain is a computer.

But if you ask Google what a computer is, it says it’s “an electronic device for storing and processing data, typically in binary form, according to instructions given to it in a variable program”. The definition on Wikipedia is pretty much the same and I think this indeed captures what most of us mean by “computer”. It’s those things we carry around to brush up selfies, but that can also be used for, well, calculations.

Let’s look at this definition again in more detail. It’s an electronic device. It stores and processes data. The data are typically in binary form. And you can give it instructions in a variable program. Now the second and last points, storing and processing data and that you can give it instructions, also apply to the human brain. This leaves the two properties that make a computer different from the human brain: it’s an electronic device and it typically uses binary data. So let’s look at these two.

That an electronic computer is “digital” just means that it works with discrete data, so data whose values are separated by steps, commonly in a binary basis. The neurons in the brain, by contrast, behave very differently. Here’s a picture of a nerve ending. In orange and blue you see the parts of the synapse that release molecules called “neurotransmitters”. Neurotransmitters encode different signals, and neurons respond to those signals gradually and in many different ways. So a neuron is not like a binary switch that’s either on or off.

But maybe this isn’t a very important difference. For one thing, you can simulate a gradual response to input on a binary computer just by giving weights to variables. Indeed, there’s an entire branch of mathematics for reasoning with such inputs. It’s called fuzzy logic and it’s the best logic to pet of all the logic. Trust me, I’m a physicist.
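To give you a rough idea of what that looks like, here is a minimal sketch in Python. The numbers and the min/max rules are just one common convention for fuzzy logic, made up for illustration, not anything specific to the brain:

```python
# Minimal sketch: graded "truth values" handled on ordinary binary hardware,
# in the spirit of fuzzy logic. Values between 0 and 1 express degrees of truth.

def fuzzy_and(a, b):
    return min(a, b)   # one common choice for fuzzy conjunction

def fuzzy_or(a, b):
    return max(a, b)   # and for fuzzy disjunction

warm = 0.7      # "the coffee is warm" holds to degree 0.7
strong = 0.4    # "the coffee is strong" holds to degree 0.4

print(fuzzy_and(warm, strong))  # 0.4 -- "warm and strong" is only partly true
print(fuzzy_or(warm, strong))   # 0.7
```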

Neural networks, which are used for artificial intelligence, use a similar idea by giving weights to the nodes and sometimes also the links of the network. Of course these algorithms still run on a physical basis that is ultimately discrete and binary. It’s just that on that binary basis you can mimic the gradual behavior of neurons very well. This already shows that saying that a computer is digital whereas neurons aren’t may not be all that relevant.
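If you’re curious how that works in practice, here is a minimal sketch of a single artificial “neuron” in Python; the input values and weights are made up for illustration:

```python
import math

# Minimal illustrative sketch: a single artificial "neuron". The inputs and
# weights are ordinary floating-point numbers stored in binary, yet the output
# varies smoothly between 0 and 1 instead of just switching on or off.

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid: a graded response

print(neuron([0.2, 0.9], [1.5, -0.7], 0.1))  # roughly 0.44, neither 0 nor 1
```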

Another reason this isn’t a particularly strong distinction is that digital computers aren’t the only computers that exist. Besides digital computers there are analog computers, which work with continuous data, often in electric, mechanical, or even hydraulic form. An example is the slide rule that my grandma used. But you can also use currents, voltages and resistors to multiply numbers using Ohm’s law.

Analog computers are currently having somewhat of a comeback, and it’s not because millennials want to take selfies with their record players. It’s because you can use analog computers for matrix multiplications in neural networks. In an entirely digital neural network, a lot of energy is wasted in storing and accessing memory, and that can be bypassed by coding the multiplication directly into an analog element. But analog computers are only used for rather special cases exactly because you need to find a physical system that does the computation for you.
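To see why a simple resistor circuit amounts to a matrix multiplication, here is a small digital simulation of the idea; the conductances and voltages are made-up numbers, just for illustration:

```python
# Ohm's law gives a current I = G * V for each resistor (G = 1/R is the
# conductance), and the currents that meet on an output wire simply add up.
# Summing products is exactly a matrix-vector multiplication -- here simulated
# digitally as a minimal sketch of an analog crossbar.

def crossbar_currents(conductances, voltages):
    # conductances[i][j]: conductance linking input wire j to output wire i
    # voltages[j]: voltage applied to input wire j
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

G = [[0.5, 1.0],
     [2.0, 0.1]]   # in siemens (made-up values)
V = [0.2, 0.3]     # in volts (made-up values)

print(crossbar_currents(G, V))  # currents on the two output wires: [0.4, 0.43]
```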

Is the brain analog or digital? That’s a difficult question. On the one hand you could say that the brain works with continuous currents in a continuous space, so that’s analog. On the other hand, threshold effects can turn on and off suddenly and basically make continuous input discrete. And the currents in the brain are ultimately subject to quantum mechanics, so maybe they’re partly discrete.

But your brain is not a good place for serious quantum computing. For one thing, that’s because it’s too busy trying to remember how many seasons of Doctor Who there are just in case anyone stops you on the street and asks. But more importantly it’s because quantum effects get destroyed too easily. They don’t survive in warm and wiggly environments. It is possible that some neurological processes require quantum effects, but just how much is currently unclear, I’ll come back to this later.

Personally, I would say that the distinction that the brain isn’t digital, whereas the typical computers we currently use are, isn’t particularly meaningful. The reason we currently mostly use digital computers is that discrete data prevent errors and make the working of the machines highly reproducible.

Saying that a computer is an electronic device whereas the brain isn’t seems to me likewise a distinction that we make in everyday language, but one that isn’t operationally relevant. For one thing, the brain also uses electric signals. But more importantly, I think when we wonder what the difference is between a brain and a computer, we really wonder about what they can do and how they do it, not about what they’re made of or how they are made.

So let us therefore look a little closer at what brains and computers do and how they do it, starting with the latter: What’s the difference between how computers and brains do their thing?

Computers outperform humans in many tasks, for example in doing calculations. This is why my grandmother used those tables and slide rules. We can do calculations if we have to, but it takes a long time, it’s tedious, and it’s pretty clear that human brains aren’t all that great at multiplying 20 digit numbers.

But hey, we did manage to build machines that can do these calculations for us! And along the way we discovered electricity and semiconductors and programming and so on. So in some sense, you could say, we actually did learn to do those calculations. Just not with our own brains, because those are tired from memorizing facts about Doctor Who. But in case you are good at multiplying 20 digit numbers, you should totally bring that up at dinner parties. That way, you’ll finally have something to talk about.

This example captures the key difference between computers and human brains. The human brain took a long time to evolve. Natural selection has given us a multi-tasking machine for solving problems, a machine that’s really good at adapting to new situations with new problems. Present-day computers, by contrast, are built for very specific purposes and that’s what they’re good at. Even neural nets haven’t changed all that much about this specialization.

Don’t get me wrong, I think artificial intelligence is really interesting. There’s a lot we can do with it, and we’ve only just scratched the surface. Maybe one day it’ll actually be intelligent. But it doesn’t work like the human brain.

This is for several reasons. One reason is what we already mentioned above, that in the human brain the neural structure is physical whereas in a neural net it’s software coded on another physical basis.

But this might change soon. There are some companies which are producing computer chips similar to neurons. The devices made of them are called “neuromorphic computers”. These chips have “neurons” that fire independently, so they are not synchronized by a clock, like in normal processors. An example of this technology is Intel’s Loihi 2 which has one million “neurons” interconnected via 120 million synapses. So maybe soon we’ll have computers with a physical basis similar to brains. Maybe I’ll finally be able to switch mine for one that hasn’t forgotten why it went to the kitchen by the time it gets there.
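For a flavor of what “firing independently” means, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It is only meant to illustrate the idea; it is not how Intel’s chips are actually programmed:

```python
# Minimal illustrative sketch: a leaky integrate-and-fire "neuron". It only
# produces a spike when its input pushes the membrane potential over a
# threshold, rather than updating in lockstep with a global clock.

def simulate(input_currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_currents):
        potential = leak * potential + current  # potential leaks away and integrates new input
        if potential >= threshold:
            spike_times.append(t)   # fire a spike...
            potential = 0.0         # ...and reset
    return spike_times

print(simulate([0.5, 0.5, 0.5, 0.0, 0.9, 0.6]))  # prints [2, 5]
```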

Another difference which may soon fade away is memory storage. At present, memory storage works very differently for computers and brains. In computers, memories are stored in specific places, for example your hard drive, where electronic voltages change the magnetization of small units called memory cells between two different states. You can then read it out again or overwrite it, if you get tired of Kate Bush.

But in the brain, memories aren’t stored in just one place, and maybe not in places at all. Just exactly how we remember things is still the subject of much research. But we know, for example, that motor memories like riding a bike use brain regions called the basal ganglia and cerebellum. Short-term working memory, on the other hand, heavily uses the prefrontal cortex. Then again, autobiographical memories of specific events in our lives use the hippocampus and can, over the course of time, be transferred to the neocortex.

As you see, memory storage in the brain is extremely complex and differentiated, which is probably why mine sometimes misplaces the information about why I went into the kitchen. And not only are there many different types of memory, it’s also that neurons both process and store information, whereas computers use different hardware for each.

However, on this account too, researchers are trying to make computers more similar to brains. For example, researchers from the University of California in San Diego are working on something called memcomputers, which combine data processing and memory storage in the same chip.

Maybe more importantly, the human brain has much more structure than the computers we currently use. It has areas which specialize in specific functions. For example, the so-called Broca's area in the frontal lobe specializes in language processing and speech production; the hypothalamus controls, among other things, body temperature, hunger and the circadian rhythm. We are also born with certain types of knowledge already, for example a fear of dangerous animals like spiders, snakes, or circus clowns. We also have brain circuits for stereo vision. If your eyes work correctly, your brain should be able to produce 3-d information automatically; it’s not like you have to first calculate it and then program your brain.

Another example of pre-coded knowledge is a basic understanding of natural laws. Even infants understand, for example, that objects don’t normally just disappear. We could maybe say it’s a notion of basic locality. We’re born with it. And we also intuitively understand that things which move will take some time to come to a halt. The heavier they are, the longer it will take. So, basically Newton’s laws. They’re hardwired. The reason for this is probably that it benefits survival if infants don’t have to learn literally everything from scratch. I was upset to learn, though, that infants aren’t born knowing Gödel’s theorem. I want to talk to them about it, and I think nature needs to work on this.

That some of our knowledge is pre-coded into structure is probably also part of the reason why brains are vastly more energy efficient than today’s supercomputers. The human brain consumes on average about 20 watts, whereas a supercomputer typically consumes a million times as much, sometimes more.

For example, Frontier, hosted at the Oak Ridge Leadership Computing Facility and currently the fastest supercomputer in the world, consumes 21 MW on average and 29 MW at peak performance. To run the thing, they had to build a new power line and a cooling system that pumps around 6000 gallons of water. For those of you who don’t know what a gallon is, that’s a lot of water. The US Department of Energy is currently building a new supercomputer, Aurora, which is expected to become the world’s fastest computer by the end of the year. It will need about 60 MW.

Again the reason that the human brain is so much more efficient is almost certainly natural selection, because saving energy benefits survival. Which is also what I tell my kids when they forget to turn the lights off when leaving a room.

Another item we can add to the list of differences is that the brain adapts and repairs itself, at least to some extent. This is why, if you think about it, brains are much more durable than computers. Brains work reasonably well for 80 years on average, sometimes as long as 120 years. No existing computer would last remotely as long. One particularly mind-blowing case (no pun intended) is that of Carlos Rodriguez, who had a bad car accident when he was 14. He had stolen the car, was on drugs, and crashed head first. Here he is in his own words.

Not only did he survive, he is in reasonably good health. Your computer is less likely to survive a crash than you, even if it remembered to wear its seatbelt. Sometimes it just takes a single circuit to fail and it’ll become useless. Supercomputing clusters need to be constantly repaired and maintained. A typical supercomputer cluster has more than a hundred maintenance stops a year and requires a staff of several hundred people just to keep it working.

To name a final difference between the ways that brains and computers currently work: brains are still much better at parallel processing. The brain has about 80 billion neurons, and each of them can process more than one thing at a time. Even for so-called massively parallel supercomputers these numbers are still science fiction. The current record holder for parallel processing is the Chinese supercomputer Sunway TaihuLight. It has 40,960 processing modules, each with 260 processor cores, which means a total of 10,649,600 processor cores! That’s of course very impressive, but still almost four orders of magnitude short of the 80 billion neurons that your brain has. And maybe it would have 90 billion if you stopped wasting all your time watching Doctor Who.
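In case you want to check the arithmetic, here it is spelled out in a few lines of Python:

```python
cores = 40_960 * 260           # Sunway TaihuLight: modules times cores per module
neurons = 80_000_000_000       # rough neuron count of a human brain

print(cores)                   # 10,649,600
print(round(neurons / cores))  # 7512 -- almost four orders of magnitude
```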

So those are some key differences between how brains and computers do things, now let us talk about the remaining point, what they can do.

Current computers, as we’ve seen, represent everything in bits, but not everything we know can be represented this way. It’s impossible, for example, to write down the number pi, or any other irrational number, in a finite sequence of bits. This means that not even the best supercomputer in the world can compute the area of a circle of radius 1 exactly; it can only approximate it. If we wanted to get pi exactly, it would take an infinite amount of time, like me trying to properly speak English. Fun fact: The current record for calculating digits of pi is 62.8 trillion digits.
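Here is a small Python illustration of what “only approximate” means in practice; a standard double-precision number carries roughly 16 significant decimal digits of pi and then simply stops:

```python
import math

# The area of a unit circle is pi * r**2 = pi, but a double-precision float
# only stores about 16 significant decimal digits of it.
area = math.pi * 1.0**2
print(area)                            # 3.141592653589793

# The float has no further digits to offer: this literal is the same number.
print(math.pi == 3.141592653589793)    # True
```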

But even though we can’t write down all the digits of pi, we can work with pi. We do this all the time. Though, just among us, it isn’t all that uncommon for theoretical physicists to set pi equal to 1.

In any case, we can deal with pi as an abstract transcendental number, whereas computers are constrained to finitely many digits. So this looks like the human brain can do something that computers can’t.

However, this would be jumping to conclusions. The human brain can’t hold all the digits of pi any more than a computer can. We just deal with pi as a mathematical definition with certain properties. And computers can do the same. With suitable software they are capable of abstract reasoning just like we are. If you ask your computer software whether pi is a rational number, it’ll hopefully say no. Unless it’s kidding, in which case maybe you can think of a more interesting conversation to have with it.
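For example, with a computer algebra system (this sketch assumes the SymPy library is installed), pi is handled as an exact symbolic object with known properties, not as a string of digits:

```python
import sympy

print(sympy.pi.is_rational)    # False -- pi "knows" it isn't a ratio of integers
print(sympy.pi.is_irrational)  # True
print(sympy.cos(sympy.pi))     # -1, an exact result, no decimal approximation needed
```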

This brings us to an argument that Penrose has made, that human thought can’t be described by any computer algorithm. Penrose’s argument is basically this. Gödel showed that any sufficiently complex, consistent set of mathematical axioms can be used to construct statements which are true, but whose truth is unprovable within that system of axioms. The fact that we can nevertheless see the truth of such a Gödel sentence, Penrose argues, tells us that no algorithm can fully capture human thought.

Now, if you look at all that we know about classical mechanics, then you can capture this very well in an algorithm. So if human thought isn’t entirely algorithmic, Penrose reasons, the missing ingredient has to come from somewhere else. Therefore, Penrose says, quantum mechanics is the key ingredient for human consciousness. It’s not that he says consciousness affects quantum processes. It’s rather the other way round, quantum processes create consciousness. According to Penrose, at least.

But does this argument about Gödel’s theorem actually work? Think back to what I said earlier, computers are perfectly capable of abstract reasoning if programmed suitably. Indeed, Gödel’s theorem itself has been proved algorithmically by a computer. So I think it’s fair to say that computers understand Gödel’s theorem as much or as little as we do. You can start worrying if they understand it better.

This leaves open the question of course whether a computer would ever have been able to come up with Gödel’s proof to begin with. The computer that proved Gödel’s theorem was basically told what to do. Gödel wasn’t. Tim Palmer has argued that indeed this is where quantum mechanics becomes relevant.

By the way, I explain Penrose’s argument about Gödel’s theorem and consciousness in more detail in my new book Existential Physics. The book also has interviews with Roger Penrose and Tim Palmer.

So let’s wrap up. Current computers still differ from brains in a number of ways. Notably, the brain is a highly efficient multi-purpose apparatus whereas, in comparison, computers are special-purpose machines. The hardware of computers is currently very different from the neurons in the brain, memory storage works differently, and the brain is still much better at parallel processing, but current technological developments will soon allow building computers that are more similar to brains in these regards.

When it comes to the question if there’s anything that brains can do which computers will not one day also be able to do, the answer is that we don’t know. And the reason is, once again, that we don’t really understand quantum mechanics.
