Wednesday, August 07, 2019

10 differences between artificial intelligence and human intelligence

Today I want to tell you what is artificial about artificial intelligence. There is, of course, the obvious, which is that the brain is warm, wet, and wiggly, while a computer is not. But more importantly, there are structural differences between human and artificial intelligence, which I will get to in a moment.


Before we can talk about this though, I have to briefly tell you what “artificial intelligence” refers to.

What goes as “artificial intelligence” today are neural networks. A neural network is a computer algorithm that imitates certain functions of the human brain. It contains virtual “neurons” that are arranged in “layers” which are connected with each other. The neurons pass on information and thereby perform calculations, much like neurons in the human brain pass on information and thereby perform calculations.

In the neural net, the neurons are just numbers in the code, typically with values between 0 and 1. The connections between the neurons also have numbers associated with them, and those are called “weights”. These weights tell you how much the information from one layer matters for the next layer.

The weights of the connections, together with any bias terms of the neurons, are essentially the free parameters of the network. And by training the network you want to find those values of the parameters that minimize a certain function, called the “loss function”.

So it’s really an optimization problem that neural nets solve. In this optimization, the magic of neural nets happens through what is known as backpropagation. This means that if the net gives you a result that is not particularly good, you go back and adjust the weights of the connections between the neurons. This is how the net can “learn” from failure. Again, this plasticity mimics that of the human brain.
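
To make this concrete, here is a minimal sketch in Python of such an optimization: a tiny two-layer network trained by backpropagation on a squared-error loss. The layer sizes, learning rate, and toy data are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: the XOR function, a classic minimal example.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # The free parameters: connection weights and per-neuron biases.
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # input -> hidden
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # squashes neuron values into (0, 1)

    lr = 2.0  # learning rate
    for step in range(10_000):
        # Forward pass: each layer is a weighted sum of the previous one.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Loss function: mean squared error between output and target.
        loss = np.mean((out - y) ** 2)

        # Backpropagation: push the error backwards through the layers and
        # nudge every parameter downhill along the gradient of the loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

    print(out.round(2))  # approaches [[0], [1], [1], [0]] for most seeds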

For a great introduction to neural nets, I can recommend this 20-minute video by 3Blue1Brown.

Having said this, here are the key differences between artificial and real intelligence.

1. Form and Function

A neural net is software running on a computer. The “neurons” of an artificial intelligence are not physical. They are encoded in bits and strings on hard disks or silicon chips and their physical structure looks nothing like that of actual neurons. In the human brain, in contrast, form and function go together.

2. Size

The human brain has about 100 billion neurons. Typical neural nets, at least the textbook examples, have a few hundred or so; even the largest current nets have only some millions.

3. Connectivity

In a neural net each layer is usually fully connected to the previous and next layer. But the brain doesn’t really have layers. It instead relies on a lot of pre-defined structure. Not all regions of the human brain are equally connected and the regions are specialized for certain purposes.

4. Power Consumption

The human brain is dramatically more energy-efficient than any existing artificial intelligence. The brain uses around 20 Watts, which is comparable to what a standard laptop uses today. But with that power the brain runs many orders of magnitude more neurons than any artificial network.

5. Architecture

In a neural network, the layers are neatly ordered and are addressed one after the other. The human brain, on the other hand, does a lot of parallel processing and not in any particular order.

6. Activation Potential

In the real brain neurons either fire or don’t. In a neural network the firing is mimicked by continuous values instead, so the artificial neurons can smoothly slide from off to on, which real neurons can’t.
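
In code, the difference is just the choice of “activation function”. A minimal sketch (the smooth version is what makes the gradient-based training above possible):

    import numpy as np

    def step(z):
        return (z > 0).astype(float)     # all-or-nothing: fire or don't

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # slides smoothly from off to on

    z = np.array([-2.0, -0.1, 0.1, 2.0])
    print(step(z))     # [0. 0. 1. 1.]
    print(sigmoid(z))  # approximately [0.12 0.48 0.52 0.88]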

7. Speed

The human brain is much, much slower than any artificially intelligent system. A standard computer performs some 10 billion operations per second. Real neurons, on the other hand, fire at a frequency of at most a thousand times per second.

8. Learning Technique

Neural networks learn by producing output, and if this output scores poorly according to the loss function, the net responds by changing the weights of the neurons and their connections. No one knows in detail how humans learn, but that’s not how it works.

9. Structure

A neural net starts from scratch every time. The human brain, on the other hand, has a lot of structure already wired into its connectivity, and it draws on models which have proved useful during evolution.

10. Precision

The human brain is much more noisy and less precise than a neural net running on a computer. This means the brain basically cannot run the same learning mechanism as a neural net and it’s probably using an entirely different mechanism.

A consequence of these differences is that artificial intelligence today needs a lot of training with a lot of carefully prepared data, which is very unlike how human intelligence works. Neural nets do not build models of the world; instead they learn to classify patterns, and this pattern recognition can fail with only small changes. A famous example is that you can add noise to an image, in amounts so small that your eyes will not see a difference, and yet an artificially intelligent system might be fooled into thinking a turtle is a rifle.
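
The idea behind such attacks is surprisingly simple. Here is a toy sketch with a linear classifier standing in for the network; all sizes and numbers are invented, and the same trick is what “fast gradient sign” attacks do with a real network’s gradient:

    import numpy as np

    rng = np.random.default_rng(1)

    # A toy "image" with 10,000 pixels and a fixed linear classifier:
    # positive score = "turtle", negative score = "rifle".
    w = rng.normal(0, 1, 10_000)
    x = rng.normal(0, 1, 10_000)

    score = w @ x
    eps = 0.05  # per-pixel change, tiny compared to typical pixel values

    # Nudge every pixel slightly in the direction that hurts the score most.
    x_adv = x - np.sign(score) * eps * np.sign(w)

    print(score, w @ x_adv)  # the classification flips, yet no pixel moved by more than 0.05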

Neural networks are also presently not good at generalizing what they have learned from one situation to the next, and their success very strongly depends on defining just the correct “loss function”. If you don’t think about that loss function carefully enough, you will end up optimizing something you didn’t want. Like this simulated self-driving car trained to move at constant high speed, which learned to rapidly spin in a circle.
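
A toy illustration (entirely made up) of how a naive “move at high speed” objective fails to distinguish driving somewhere from spinning in place:

    import numpy as np

    dt, steps, speed = 0.1, 200, 10.0

    def rollout(turn_rate):
        # The car always moves at the same speed; only the steering differs.
        heading = np.cumsum(np.full(steps, turn_rate * dt))
        step_vec = speed * dt * np.stack([np.cos(heading), np.sin(heading)], axis=1)
        pos = np.cumsum(step_vec, axis=0)
        reward = speed * steps * dt         # the stated objective: high speed
        progress = np.linalg.norm(pos[-1])  # what we actually wanted
        return reward, progress

    print(rollout(0.0))  # drive straight: reward 200.0, progress 200.0
    print(rollout(8.0))  # spin in place: reward 200.0, progress of order 1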

But neural networks excel at some things, such as classifying images or extrapolating data that doesn’t have any well-understood trend. And maybe the point of artificial intelligence is not to make it all that similar to natural intelligence. After all, the most useful machines we have, like cars or planes, are useful exactly because they do not mimic nature. Instead, we may want to build machines specialized in tasks we are not good at.

75 comments:

  1. Wow again! Your video is unexpectedly interactive: the beautiful hair, the pretty face, sans glasses (but still a spectacle), and those beautiful arms—all produced in my wet, warm brain enough glutamatergic neurotransmitter to set at least 50 billion synapses snapping at about 800 times a second!
    Once my postsynaptic neurons settle down, and after I’ve exercised some imagination (featuring, as with particle physics today, entrancing scenarios that can’t be tested), I’ll probably think about what you said.

  2. (Conventional) AI systems are information (only) processors.

    Brains are (also) experience* processors.

    * 'experience', as defined by Galen Strawson:
    "The Consciousness Deniers"
    https://www.nybooks.com/daily/2018/03/13/the-consciousness-deniers/

  3. Gaaah. Your list is kinda incorrect.

    "The human brain has about 100 billion neurons. Current neural nets typically have a few hundred or so" - while computer neural networks don't have a concept of "neuron" directly, the number of "neuron-like" elements is in millions in the most complicated networks these days.

    "In a neural net each layers is usually fully connected to the previous and next layer" - this is not necessary and often it's not how modern neural networks work. Look for LSTM. LSTM networks allow to essentially "evolve" topology a little bit by providing cross-layer interconnects.

    "In the real brain neurons either fire or don’t. In a neural network the firing is mimicked by continuous values instead, so the artificial neurons can smoothly slide from off to on, which real neurons can’t" - neural networks can also be used in this mode (that's what ReLU does essentially). Real neurons also can effectively fire with different intensity - the individual pulses can't be adjusted, but their frequency can.

    Moreover, recent research shows that biological neurons are actually closer to artificial neurons than we thought. Initially it was assumed that biological neurons simply fire when the activity of connected synapses reaches a certain threshold, but it turns out that neurons actually use weighted sum of their inputs and some synapses are more likely to trigger it.

    In the end, while artificial neural networks do not behave exactly like real neurons, they are uncannily close.

    Replies
    1. * I was talking about the typical examples, not the frontier-pushing ones

      * I know, which is why I said "usually"

      * ReLU is still continuous

    2. About discontinuity, you're not thinking about it correctly.

      In a biological network neurons often (always?) indicate the intensity of a stimulus by their firing frequency. So a retina cell would indicate the intensity of the incoming light by the frequency of its activation. In biology this works just fine because each neuron is autonomous.

      In a neural network inputs are given all at once, so there's no time and no frequency. Instead we just encode intensity as a number.


      "* Yes, of course, advanced neural nets come with predefined structure, which is why I said "usually" they are fully connected" - I doubt that you'll find a simple unstructured strongly connected network anywhere these days.

      Seriously, speak with AI experts - you'll find a lot of interesting and scary stuff.

    3. What I mean is they have an activation threshold.

    4. So does a ReLU-based layer. Then just treat its outputs as the frequency of firing for "regular" neurons.

    5. In which case the virtual neurons do not actually "fire".

    6. Indeed they don't.

      We typically use the word "fire" to mean that they are past a certain threshold, which is not what it means for biological neurons. Yet another example of confusing terminology.

  4. "In a neural net each layers" -> "layer"
    "an artificial intelligent system" -> "intelligence" Weird, but closer to actual usage, I feel
    "Like this simulated self-driving car ..." no link (missing?), so what does "this" refer to?
    "we may want to build machines specialized it in tasks we are not good at" extra word "it"?

    In common use - in mainstream English language media for example - "AI" is sometimes used as a synonym for "ML" (machine learning), and vice versa, but not all ML involves neural nets.

    Replies
    1. JeanTate,

      Thanks for spotting these typos, I have fixed them. I have also added the missing link to the spinning car.

      And yes, not all AI is machine learning and not all machine learning is neural nets, but to first approximation what you read about presently as AI is neural nets.

  5. I certainly don't disagree with your general point, Sabine, that artificial neural nets are quite different from real, say, mammalian brains. But I've got some quibbles here and there which don't really affect the general argument.

    2. Size: Actually, the current number is 86 billion, as counted by Suzana Herculano-Houzel. Equally important, each neuron is connected to roughly 10,000 other neurons on average.

    3. Connectivity: Right, as a whole, the brain is not organized into layers. But the cerebral cortex, the outermost surface of the brain with all the folds, is mostly organized into 6 layers, though some areas have only 3 or 4 layers. Moreover, the connections between the layers are arranged into columns and minicolumns running orthogonally to the layer structure. These layers and columns have characteristic arrangements of neuron types. Other areas of the brain have layers as well, and various types of neurons.

    Neuron types? Neurons consist of a cell body, a single axon for output, and multiple dendrites for input. Types are differentiated by shape, size of elements (e.g. some axons are quite short while others must travel from the brain down the spinal column), and number and distribution of synapses on dendrites and axons.

    I believe the 'neurons' in neural nets are all of the same type.

    4. You're right about power consumption. It's worth noting, moreover, that much of the power in digital computers is devoted to communication between memory units and processing units, which are physically distinct. In brains neurons are both processing and memory units – something von Neumann recognized in his posthumous little book, The Computer and the Brain. Thus real brains don't "waste" power in the process of shuffling bits from one place to another. This comes up in an interesting discussion that MIT's Neil Gershenfeld has over at Brockman's Edge. The discussion isn't about brains, though brains do turn up here and there, but about computers. More precisely, it's about how computation can be implemented in a physical system, with most of the attention on artificial systems. As such it speaks to a number of issues you've raised here.

    5. Well, cortical layers are fairly neat, but I believe your point is about addressing and on that you are correct. Addressing as it is done in digital computers just doesn't make much sense when applied to brains. Obviously neurons in brains are physically connected with one another and so don't require addresses. And, yes, there is massive parallelism.

    While one can see how artificial neural networks have been loosely inspired by real neural nets, they are not anywhere close to being simulations of real neural nets.

  6. When/where we lost 'Faith in Mankind' ... Machines, Nachines (Nano-Machines) and Technology come and restore Our 'Hopes for Truth' ...
    Who needs Primates on The System and Its Equations?? ...
    ( Maybe 'God' ... but for sure, Our System and equations do not require Primates to work as they work ... 'Primates' are just an unnecessary side effect ... but ... let the Apes be Apes ... )

    Truth, Balance and Love ...be aware about Peace, sometimes, Peace is what happens after 'The War' ...

  7. A few points:

    2) Typical current neural networks can have on the order of hundreds of thousands to millions of neurons (most will be smaller, probably). If you're taking 128x128 images as your input (small but common for small tests), your first layer alone will already have 128x128 neurons.

    3) In basic neural networks the network starts fully connected, but it's not rare to apply a pruning process afterwards (see the sketch at the end of this comment). In advanced applications, it's very common to end up with sparse matrices representing your weights. For some applications, you might even want to start with a certain structure. In general, you can expect any advanced application to exhibit some sort of structure after training. If you check the AlphaStar videos on YouTube, they show a visual representation of their network. Certain parts specialise in long-term planning, others in battlefield awareness, and others in micromanagement.

    5) Well... I understand what you're trying to say but it might lead to confusion. In the brain, neurons also fire "layer by layer", one group of activated neurons will feed to the next, in sequence. And, if you have a stream of inputs, there's nothing preventing you from calculating the output of your first layer for one input while you calculate the output of your second layer for the previous one. Having enough computational resources for this to make sense is a completely different issue, of course. But this is not a limitation or characteristic intrinsic to artificial neural networks, it's just convenient to the way we calculate their output on current architectures. There's no intrinsic limitation to the topology of the connections. There are common practices, but often the reason for not going after something more "exotic" is that the simpler topology works better. In the future, with more computational power and different computer architectures, that might not be the case anymore.

    6) You can use whatever activation function you want, including the step function (i.e.: the neuron is either active or inactive). Hyperbolic tangents, sigmoids, and even rectified linear units simply perform better most of the time.

    9) This, again, is simply the most common practice nowadays, but it doesn't have to be like that. There are people researching how to pick better starting values depending on the application. Others even go one step further and are working with fixed layers and/or subnetworks.

    A very clear difference in function between whatever it is the ANNs do and what humans do is sensitivity vs. specificity. ANNs beat humans in sensitivity, and humans beat ANNs in specificity (by far, in both cases). I've read somewhere (sorry, I tried but couldn't find the article) someone who proposed using the superior sensitivity of ANNs to flag suspicious spots in medical imaging, and then using humans to discard the false positives. But then again, the whole thing with ML is that nobody really knows "how" it works, what it is that the ANNs see and "learn". The human brain is even more obscure. It's hard to predict whether this trend will stay or whether it's also a product of our current practices.
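
    A minimal sketch of the magnitude pruning mentioned in point 3 (layer size and threshold are invented):

        import numpy as np
        from scipy import sparse

        rng = np.random.default_rng(0)
        W = rng.normal(0, 1, (512, 512))              # a dense, fully connected layer

        W_pruned = np.where(np.abs(W) < 1.0, 0.0, W)  # drop the weak connections
        W_sparse = sparse.csr_matrix(W_pruned)        # store only the survivors

        print(f"{W_sparse.nnz / W.size:.0%} of connections kept")  # ~32% for this threshold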

  8. Reasonable people in the AI field realize that neural networks are not going to give us an AGI (Artificial General Intelligence) though they may play a small role. In fact, AI researchers pursuing AGI had to invent this new acronym because plain old AI became too closely associated with neural nets. Gary Marcus (http://garymarcus.com/) has written a lot on why neural nets are insufficient to produce an AGI.

    You make a good point in your final paragraph, one I have made many times. We usually create computer systems to do those things we are bad or slow at. Perhaps we don't need an AGI that can crack jokes with us or a self-driving car with "personality". However, it still seems like it would be nice to have a personal assistant to which I could give tasks and with which communication was as easy as talking to a close friend. I believe we'll have this someday though it's going to take much more than neural nets.

  9. Perhaps you might comment on the assertion made by some authors (e.g., Max Tegmark in “Life 3.0”) that “intelligence” can be abstracted from the implementing hardware (wetware), and that self-learning machines might achieve virtually limitless intelligence that might endanger human existence.

    My interpretation of Tegmark’s argument (apparently shared in the main by many others) is (1) both brains and computers do information processing (constrained only by processing power and memory); (2) intelligence is essentially complex information processing; (3) machines are performing more and more complex information processing, demonstrating intelligence and learning; (4) learning machines, unconstrained by biological evolution, will eventually bootstrap themselves to superhuman, virtually limitless intelligence; (5) this superintelligence will be able to achieve virtually anything allowed by the laws of physics; and (6) we can’t rule out that superintelligent machines might regard humans as an inferior and even dispensable species.

    There are several parts of this argument I don’t find convincing. At the heart of my objection is what I think is confusing the map with the territory. By that I mean creating models and artifacts that interpret or mimic some aspect of nature, then identifying nature as nothing but our models and artifacts.

    Replies
    1. A bit of a digression: Korzybski's distinction between "map" and "territory" is imprecise. Suppose instead of "territory" one thinks of what is "given," in other words of the "starting point" of experience. And suppose then the "given" is distinguished from a "not-given" category of somethings, namely from the realm of ideas and thinking, or what Korzybski called the "map." The question then arises: is this distinction between "given" and "ideas" something that is itself given? Or is not the distinction between given and idea only an idea, a "map," which is to say something that leaves out more than it includes and slants things one way or another?

      By those who employ it, the "map"/"territory" distinction is generally treated as if it is part of the "territory," when in fact consistency would seem to demand it be seen as just another "map." If the distinction between "given" and "idea" is itself only an idea, then as far as the given itself is concerned, the so-called "not-given", the realm of ideas, is in fact another phase of the given.

      It is true that this result, whereby everything is understood as "given," partly cancels itself. Why? Because if thinking and thoughts are understood as part of the "given," then of course the thought that discerns a distinction between given and not-given must also be understood as "given." The two domains, which at one turn of this spiral collapsed together into one domain, thus separate again, and we are back where we started. So the whole spiral starts over, and goes around ad infinitum. We thus begin to observe how the world of perception and thought merge and separate, merge and separate, ceaselessly.

      Thought is not just a "map." A formula that captures the way in which all is given while at the same time semi-distinguishable into two domains is this: Thinking is a higher experience within experience. This borders on some sort of neo-Platonism or perhaps neo-Plotinism. Thinking is not merely an abstraction. It is a domain of experience bordering on other domains below and above it. Agreement with something like this seems to preclude creation of true AGI, though very convincing simulations are no doubt possible.

  10. Sorry Sabine, but your information is woefully out of date. The largest neural network running today has 16 million "neurons." That is a lot less than the human's 80 billion, but the factor of one million plus in speed is significant. It's trivial to add noise to artificial neural networks, and that is often done, for reasons that should be obvious. Contrary to what you say, there is a great deal of layering in the human brain. The most powerful neural networks also have a lot of structure, and, as in the human brain, many tasks are delegated to dedicated modules.

    The $80 Go-playing program on the computer I'm using here is an example of such.

    Replies
    1. * I'm not talking about the edge of research. I am talking about the typical examples that you read about.

      * I don't know what your comment about noise is supposed to say.

      * Yes, of course, advanced neural nets come with predefined structure, which is why I said "usually" they are fully connected.

  11. Let me add: artificial neural networks today are often highly parallelized, either on supercomputers or ordinary desktops with one or more graphics processors, each equipped with thousands of parallel computing pipelines, typically running a million times faster than the neurons in your brain.

    Human brains have thousands of somewhat specialized modules, which is one key way they are so much more flexible than artificial neural networks. The most complex AI now probably has a dozen or two.

  12. What is the mind? Simply put, the mind is the function of the brain, and that is why what you see floating in formaldehyde in the laboratory are brains without the mind. I call hardware that performs a mechanical or technical function, not necessarily electronic but including electronics, a hardware program: the elevator, the TV and the fridge, for example, are hardware programs. In this respect, the human body with its physiology and biochemistry is a hardware program. The whole physiological function of the human body with its supporting anatomy and biochemistry is hardwired into us, isn't it? Hardwired and therefore a hardware program. When the same hardware can perform several soft functions--unlike the lifting of weights in the case of the elevator--like spreadsheets, word processing, pictorial representations etc., such soft functions are the software programs. The same hardware runs several software programs.


    Let us take a human example: there are two newborn babies; one is born into a Hindu household and the other into a Christian household. The babies are swapped: the Hindu baby is adopted by Christians and vice versa. As the two babies grow up they are conditioned or programmed by religion: the originally Hindu child is conditioned, programmed as a Christian, and the originally Christian child is conditioned, programmed as a Hindu. The originally Hindu child now programmed as a Christian believes in resurrection, and the originally Christian child now programmed as a Hindu believes in reincarnation. After 20 years, the children, now adults, meet their biological parents; they are oblivious of the fact that these are their biological parents, and by the force of religious conditioning, programming, they are prepared to kill or get killed in the name of programming, i.e., religion. Then someone comes along and tells them that it is all in the programming: by virtue of "the Hindu program" you believe in reincarnation, and by virtue of "the Christian program" you believe in resurrection. Religious belief is a matter of programming. He proves this by telling the youths that they were swapped as soon as they were born. The swap betrays THE UNDERLYING PROGRAM. So "the Hindu program" and "the Christian program" are software programs which run in the mind.

    Culture and linguistic and other identities like nationalism are also software programs that run in the mind. Thinking is a software program running in the mind. And thought is the response of memory. From Jiddu Krishnamurti we learn that there can be no real observation when "the software program" is running. And that observation does not guarantee discovery, but there can be no discovery without observation. Because when there is belief, when we are caught in the rut of belief, it becomes a great impediment to discovery. It blocks perception.

    Let us take the orthodoxy of scientific authority, which is also a program as much as belief is. Berzelius was a name in chemistry and he said organic compounds cannot be synthesized in the laboratory. All chemist-scientists of his time and later were fed on this program. If Wöhler had given in to this orthodoxy, this program, could he have synthesized organic compounds in the laboratory? If Einstein had given in to the authority and orthodoxy of Newton, Newton's program of the absoluteness of time and space, could he have discovered relativity? If Einstein had given in to the orthodoxy, the program, of the hypothetical ether, could he have discovered relativity?

  13. continued...
    So scientific orthodoxy or authority are programs, like "shut up and calculate". A software program, whether of religion or of scientific orthodoxy, prevents observation. When you say "I don't know" then you are quiet; you have stopped searching, rummaging the known. In that state of I don't know, "the program" has "hung", stands suspended. Then there is scope for intelligence to operate; rather, there is an opportunity for intelligence. And as an act of intelligence there is insight. "In" - "sight". How can there be insight when you are blinded by prejudice and bias? Prejudice and bias are also software programs running in the mind.


    The Ultimate goal of Artificial intelligence is Compassion. "Compassion is the highest form of intelligence" - Jiddu Krishnamurti. But can this intelligence be, as long as a software program is running? Artificial intelligence "is" as long as the software program is running. Real intelligence "is" when the software program stops running. How are we going to reconcile ourselves to this fact?

  14. What you are calling "typical neural networks" is what I was building 25-30 years ago. I would guess that a typical size today is tens to hundreds of thousands, with the top end well into the millions, as I said before.

  15. I don’t think this article touches adequately on reinforcement learning.

  16. It's funny that we are revamping AI when essentially the same algorithms have been around for 30 to 40 years. I worked on neural nets for sonar classification 30 years ago and they are just as effective now as they were then... it all depends on how much additional context you can give them about what you're looking for. Mimicking small slices of neural cortex always seemed lame to me... we should accept that we don't understand all the neural layers and their interactions, just grow the best version of an electronic brain that we can and set it loose. Marketing people have revamped AI terminology in order to sell more gadgets that are just as inaccurate as they were decades ago while telling us it's a new world. I don't know what's more depressing: that we can't stop lying to ourselves about what intelligence is, or that we take the lies we do tell ourselves (like Supersymmetry and the multiverse) to be somehow true.

  17. I'm not a neurochemist or neurobiologist, but my sense is that the brain's neural "substrate" is so materially (chemically, biologically) different from today's (or even future) neural networks (NNs) made of conventional semiconductor technology — no matter how many billions of processing elements and interconnections these NNs have — that this particular technological path will not lead to experiential (brain-like) "beings".

    (I think this is something in the line of what Pascal Häußler said.)

  18. Artificial neural networks come in many different forms, and it looks as if you are referring to a typical backpropagation network here.

    The problem is not identifying the many differences between artificial neural networks and biological neural networks, but identifying which differences matter. For example, we would not try to make an artificial neuron have the same colour as a biological neuron, but we would like them to have similar input/output characteristics (probably).

    Additionally, because biological neural networks have evolved under the harsh scythe of natural selection, they must obey constraints that are less relevant to artificial neural networks. For example, there is an imperative for biological neural networks to maximise the number of bits of information per Joule of energy expended. That is a difference that has an enormous impact on neural design, so it is a difference that probably matters.

    Three relevant books on this topic (two by me) are:

    1) Artificial Intelligence Engines:
    A Tutorial Introduction to the Mathematics of Deep Learning by JV Stone

    2) Principles of Neural Information Theory: Computational Neuroscience and Metabolic Efficiency by JV Stone

    3) Principles of Neural Design, by Peter Sterling and Simon Laughlin.

    https://jim-stone.staff.shef.ac.uk/BookBayes2012/books_by_jv_stone

  19. We probably don’t have to worry about losing the competition AI v. HI (human intelligence) as long as the main playing field is neural nets and Tegmark’s “information processing,” where bigger and more means better. Who cares if chess engines beat grandmasters. I may get worried if my Toaster 3.0 uses irony instead of heat, when my washing machine 3.0 cracks up about my fashion sense, and my coffee maker 3.0 develops a sense of humor.

    Perhaps what is needed is a more qualitative competition of AI v. HI, similar to the millennium challenge P v. NP. Here the question was and is whether the easy (or fast, solvable in polynomial time) problems (P) are the same as the ones whose solutions can be easily verified (NP), which would mean P = NP. The current wisdom on the street favors a “No:” P is part of NP, but NP is (much) larger than P. Multiplying large numbers is easy (P), and so is verifying a claimed factorization, but given a huge number, finding the factors is believed to be much harder. Far beyond this trivial example, the P v. NP problem has been well defined and its solution will earn you US$1,000,000. The AI v. HI problem is still largely undefined, because both AI and HI occupy shifting domains, depending on the interest and expertise of the respective researchers, psychologists, computer scientists, philosophers, logicians, etc. And while P is part of NP, it is not true that AI is part of HI.
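
    To make the multiply-versus-factor asymmetry concrete, a toy sketch in Python (the numbers are kept small so the "hard" direction still terminates; the point is that trial division scales exponentially in the number of digits, while multiplication does not):

        # Multiplying known factors is fast, and verifying a claimed
        # factorization is just as fast: one multiplication.
        p, q = 1_299_709, 15_485_863  # the 100,000th and 1,000,000th primes
        n = p * q
        assert p * q == n

        # Recovering the factors from n alone: brute-force search.
        def factor(n):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d, n // d
                d += 1
            return n, 1

        print(factor(n))  # found only after ~1.3 million trial divisions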

    But AI and HI have an overlap. The part of AI that is untouched (or unreachable) by HI seems to be all the rage, perhaps because technology is good at “bigger and faster.” But the exciting question is the domain of HI that is untouched by AI. It’s tempting but too easy to refer to Gödel’s first incompleteness theorem, that there exist human-designed consistent systems that contain more truths than can be proved within the system. Self-referentiality appears to be a good candidate for HI and no-AI. It can at least be defined better than stuff like consciousness or sadness or love (although Richard Powers gave it a literary try in his novel “Galatea 2.2”).

    It seems to me that solving AI v. HI would be worth a Breakthrough Prize.

    Replies
    1. It is tempting to think of consciousness as a manifestation of Gödel's theorem. There are some problems, such as that Gödel's theorem involves an infinite number of Gödel numbers that are codes for a proposition that acts on Gödel numbers as the free variable. In the Peano number system the 0 element has its successor S0, which has its successor and so forth. A counting algorithm is easy to write, but what the machine will never do is make the inductive leap that there exists an infinite number of numbers. That proposition is not computable by this machine.

      The trick is to get a system to make a self-referential inference. Interestingly, there is one way in principle to do this with eternal black holes. The Kerr or Reissner-Nordström metric has an outer event horizon and an inner event horizon. The inner event horizon is continuous with null infinity or I^+. Then all possible null geodesics that enter the black hole pile up at this inner horizon, and if you cross the inner horizon you cross these. So we could have in the exterior world a computing system, say a Turing machine, and let's make it a universal Turing machine for good measure, and it sends data into the black hole. You, the observer, could in principle go into the black hole, record this data, and determine the halting status of any or all Turing machines! You will have hyper-computed a self-referential inference. This is a trick found by Hogarth and Malament.

      Now there are obvious problems with this. Black holes are not eternal but are thought to quantum decay with Hawking radiation. This removes I^+ as continuous with the inner horizon. Also if one could do this infinite trick in a black hole it means the entropy content of the black hole from the exterior would have to be infinite. A version of the Hogarth-Malament machine is a switch that flips in one second, then a half second, then a quarter, …, and does this Zeno sort of insanity of flipping an infinite number of times in the infinitesimal time up to 2 seconds. The problem is the energy required to do this diverges and even if you could keep this from flying into pieces it would become a black hole. Then if we try to use a black hole and self-referentially or hyper-compute inside it we see quantum mechanics provides an obstruction. I think it is fascinating that physics, in particular a sort of correspondence between gravitation and quantum mechanics, gives this sort of obstruction to “infinite computing.” Of course physics abhors anything that can be measured to be infinite. Even if the universe is infinite all physics is still local and finite.

      So how in the hell does the human brain do what appears to be self-referential inferences? If we emulate the Hogarth-Malament machine well enough we can make a bet as to whether a Turing machine halts or not. In the case of the black hole, you may reach the inner horizon towards the end of the duration of the black hole, it may in fact be a sort of singularity or mass-inflation Cauchy horizon, and from the outside the duration of a black hole can be an awfully long time. This time can be 10^{67} years for a solar mass black hole or up to 10^{100} years for a supermassive black hole you might be able to enter without being tidally ripped apart before crossing the horizon. After all, when your Windows system freezes up, you don't wait terribly long to say, “This is in a nonhalting state,” and you reboot. You have not proven it is nonhalting, but you have good evidence that it is. This is the basis of Chaitin's halting probability, which itself is not Turing computable. I think in this way an AI type of system can make reasonable estimates about a self-referential inference.

    2. This is a very, very impressive display of black hole theory to implement a self-referential structure in the real world (assuming the quoted theory holds in the real world).

      There are fairly easy ways to code self-referential structures in many computer languages. Abstractly speaking, those are structures that contain pointers to a structure of the same type. That’s not too sophisticated and is used to form lists, queues, stacks, and trees.

      More intuitive (and sometimes funnier) are pictures depicting self-referential situations: type “self-referential pictures” into Google images and take a look. (Of course, Escher shows up.)

      Gödel’s (first) incompleteness theorem is often unduly extrapolated, as I have done in the above post: interpreting Gödel’s theorem as a general statement that “truth goes beyond provability” removes it too far from its arithmetic origin, as you have rightly noted in your first paragraph. In a more sober and modest way, AI and HI (human intelligence) would have to be reduced to basics and defined properly so that one could tackle the problem AI v. HI in parallel to P v. NP, which is already well defined but unsolved. One of the latest promising efforts, by Professor Norbert Blum, failed in 2017 (related to the so-called Clique Problem). However, the favored approach is: construct a problem that is in NP but not in P. Similarly, the proponents of HI over AI would have to show an example of HI that is not AI.

    3. @Lawrence Crowell

      'So how in the hell does the human brain do what appears to be self-referential inferences?'

      Let's not think of the human brain as the Engine for AGI; instead let's use 'Bio-Chemistry' and call it 'Life' ...

      Then, the answer to the question is:

      Because 'Life' is at The Event Horizon from 'Death'.

      Inside 'Death', eternity and infinity happens ...

      But 'Life' abhors/fear eternity and infinity ...

      Then, at the 'Event Horizon's inner periphery' - where 'Life' can not escape from Eternal Disintegration - the combination of elements increase exponentially but the decay accelerates ...
      Then, 'Life' seems to endlessly explore all its possible paths ...

      ... at the Event Horizon's Outer Periphery there are not any sort of combination and not decay, only an 'Unitary Static Field' ... Just Existence without a Second ... Pure 'Inner Space' with Nothingness as Its Inner 'Externality' ...

      Therefore, Machines made of stable and dense materials will never show 'Fear of Death' ... Incapable of reproducing 'Life's Mechanics in themselves as an Integrated Self' ... but capable of mimicking any sort of behavioral patterns, exploring the environment and building models about that environment and themselves as environment ... some sort of 'Soulless' 'Proto-Life'

    4. @Wulf Rehder: There is a difference between recursive sets and recursively enumerable sets. The complement of a recursive set, which can be generated by a recursive function, is recursive. The complement of a recursively enumerable set is not recursively enumerable. A perfect example of a recursively enumerable set is a fractal, where an algorithm can crank it out, but never fully. The complement of a fractal, because of all its filigree, is not computable. So algorithms can compute RE sets up to some cut-off by this sort of feedback.

      To perform a self-referential inference does mean the machine has to perform a Cantor diagonalization over an infinite set of Gödel numbers. This has been a real show stopper, and while one can do a truncated form of this it is in some ways artificial. So we may never be able to really devise a Gödel-Löb machine that can make perfect self-referential inferences. We might though be able to work a probabilistic form of one that can make a reasonable bet.

      The concept of infinity has been troubling. In the late Hellenic age the concept of infinity began to take shape, and it is clear people went completely off the rails. Infinity got wrapped into various mystical and religious ideologies. What intellectual trends there were in that time got squelched and it was not until the 12th century, at least in the west, that intellectual trends started up again. This is often attributed to Abelard and Aquinas. The Islamic world soldiered on some, though mention of say the motions of stars and planets always had this catch phrase of “It is the will of Allah that … .” Infinity is not really a number, but a transfinite cardinal of a set, and as such is an inductive abstraction. So we can imagine that attempts to implement some approximate Gödelian system might also in their way gang aft agley. In fact if this is ever done I would bet this would be a serious set of problems to overcome.

  20. In the nineteen eighties the philosopher John Searle took a whole generation of computer scientists by surprise with his Chinese Room argument. Basically, a machine executes a program but can hardly be said to understand what it does. Real brains are different. Today, as it seems, we dare talk about machines learning without a wince. I wonder what this teaches us.

    Replies
    1. I agree, and it won't be long (I think) before people start to wonder why we do not have driverless cars in general use despite the hype. The problem is that humans actually understand the various hazards encountered when driving - so they can generalise them. So, if you have seen one horse breed spooked when passed at speed - or even just been warned by the driving instructor - you can generalise this to all types of horses. You can combine that with your knowledge of kids - that they may not be as good riders as older people etc. You probably also carry some of that over to coping with other animals while driving.

      The key issues are obtaining genuine understanding and ability to generalise phenomena.

  21. continued...
    Biological evolution is mostly a hardware evolution. Then mammals developed the ability to psychologically recognize and therefore think; otherwise they cannot find their way around in their own territory--as explained in my comment on "Automated Discovery". This evolution of psychological recognition and thinking is a software evolution. Thinking as a recognition mechanism is highly accentuated in humans. This meant that we could meet challenges more quickly and solve problems within the lifetime of an individual, less than 100 years, rather than waiting for the hardware to evolve and deal with a challenge. If man must evolve into a being who can walk on water, such a hardware evolution will take thousands of years. Instead, using the software, which is thinking, he can design and invent a boat without necessitating a hardware evolution or a biological evolution. In this regard, hasn't the software evolution greatly shrunk the response time to a challenge or a stimulus? So, from hardware evolution to software evolution in human beings, we see this inescapable fact of "shrinking of time". Take a disease like polio. It would take us quite long to evolve biologically, hardware-wise, and become immune to polio. But software evolution, which is thinking, ideation, mentation, has enabled us to produce a vaccine within the lifetime of an individual and save mankind from being maimed for generations.
    The most exciting part, the real catch is not in seeing that hardware evolution was followed by software evolution of living things, but, but, goodness gracious, in seeing that software evolution of living things "mimics" hardware evolution of living things. When software evolution took place in living things, we were only responding from our hardware evolution background. Yes, we are always responding from our background.
    Taking this argument further into the world of technological innovation, ideas, and evolution, we did not arrive at computer software programming by accident; we were bound to arrive at it because we are responding from our background, the background of software evolution of the human mind, which in turn was responding according to the hardware evolution of the human body. Going even further, we can say that by understanding the mind and the processes of the mind we can understand the universe. We now only must turn and look inward.

  22. ...The technological evolution and the arrival of computer software programming "mimics" the software evolution of the human mind.

  23. ANNs emulate aspects of biological neural networks. I agree strongly with your statement about the on vs off state. Biological neurons can have a generating potential build up upon receiving a polarizing action potential from a dendrite of another neuron, but unless a sufficient number of calcium channels are opened this will not turn into an action potential. There is a certain threshold where the action potential is generated. It is a bit like the flusher on a toilet. Push that down a little bit and you get some flow, but not a flush. Once you push it down beyond a certain point the whole flush starts.

    The neural elements in a biological neural system are themselves alive. A single cell is every bit as much an information processing system, where the Boolean states can be kinases that are phosphorylated or not, which changes their shape. This then is an on or off state for some biochemical pathway. In fact the calcium, magnesium and sodium gates, which activate in that order in a sequence to give an action potential, are ligand proteins that have phosphorylation sites that change the shape of these channel gates to let these various ions pass in or out of the neuron. The neurons in an ANN are just algorithmic routines.

    Another point is that any animal species has far more input and output (I/O) capacity. On the tips of our fingers are over 100,000 tactile receptors that produce action potentials upon some physical impact of touch or heat etc. This extends to an array of physical sensory perception: sight, touch, sound (which is a sort of tactile sense), and the really strange olfactory and taste perception. The standard idea is that certain neural gates have molecules that are lock and key systems, where the shape of a molecule binding some region of amino acid residues acts as a key. With smell it may be that the reception is not of the shape, but of the Fourier transform of the shape in the vibration states of the molecule. In this way we can distinguish between millions of different smells.

    AI has a lot of hype these days, and now we have prototypes for self-driving cars and the like. There is lots of face recognition stuff being developed, which could before long place us in a sort of surveillance world. The prospect of a dystopian future similar to Damon Knight's Hell's Pavement or others is fairly evident. We are not so much moving into outer space, but rather into a sort of virtual world of control where we may really be going nowhere at all.

    The selling point about AI being conscious and the rest is also a bit off key. For one thing, we have a sort of Pinocchio problem of never knowing for sure if some AI system is really conscious. I suspect at this point they are not, but they are doing a very good job of taking us humans off the workforce and placing us under constant watch and control.

  24. When I say connecting the dots forms a neural network, I mean an abstract neural network and not the biological neural network. This abstract neural network is put together by thought. It is thought-neural-network. A network of connected hyperlinks formed during the act of thinking and then recorded as memory in the brain. The network of thought images is recorded in the brain. And this neural network is called forth when at least one dot or hyperlink is activated in the act of recognition.

  25. A thought-neural-network is an emergent behavior of the biological neural network. But I am sure that thought functions as a series of associations. Software evolution "mimics" hardware evolution.

  26. Nice overview. What has struck me about the progress in AI in the last decade is how easy it has been. It seems that ideas that have been around for decades, like neural nets, have had new life breathed into them by fantastic hardware.

    So I expect the state of the art will improve at a rapid pace, and real artificial consciousness may not be far away.

  27. "A metaphorical Q-bit of "Artificial Intelligence for Natural Intelligences"

    Einstein reminded us that physical entities - as conceived by Primates' visual systems - causally evolve in a constrained environment nested on its background properties (the electromagnetic spectrum).

    Planck reminded us that without a discrete quantitative demarcation a "Continuous Reality" will never be expressed as consistent with a "limited Background" established by the Electromagnetic Spectrum Properties.

    Einstein's and Planck's contributions set a Background Independent (not true, just that The Theoretical System is self-contained by a quantity (the speed of light) and its derivative ratios, making a "Closed System"... Therefore, The System is its own background) "Intellectual Paradigm for Physicists".

    Schrödinger reminded us that a "parameterized closed system" can be represented and compressed by a Probability Distribution over all its "inner states"...

    Heisenberg remembered Gödel's Incompleteness Theorem and applied it in a "parameterized closed system of discrete variables", making its inherent causal order an unknowable condition by the System by Itself but preserving Its "Internal Consistency" and/or "Background Independence" ...

    Then, Einstein, Planck, Schrödinger and Heisenberg's seminal contributions set the Intellectual Paradigm Practices that deployed The Standard Model ...

    ... and The Standard Model reminded us that "The Higgs Field" acts as some sort of 'viscosity threshold' for "Baryonic Matter/Energy" (The Content described by The Parameterized Closed System) ...

    Of course, The "Dark Stuff" leads to modifying The System by adding The Cosmological Constant as a "Naturally Imposed," arbitrary Parameter ... and by adding "Ghost Matter Tensors" to fit The Intellectual System's Gaps with The Collected Data ...

    Obviously, lots of Physicists are not inquiring of Nature to reveal its inner mechanisms/dynamics ... It seems that They are trying to tell Nature what Nature should be Doing by what they BELIEVE is Truth ...

    But Truth never cares about Beliefs ... Truth was/is/will be as Truth is ... and Nature - always - follow Truth ...

  28. True Artificial Intelligence will change everything:

    https://www.youtube.com/watch?v=-Y7PLaxXUrs

  29. It’s not difficult: consciousness is “living” information, but computers use symbolic representations of information. Very obviously, consciousness can never “emerge” out of symbolic representations: you need pre-existing consciousness to match symbolic codes with the living experience of information. I.e. computers can never be conscious.

    Consciousness, i.e. living information, is merely the way matter in the universe grasps information about its situation. It's as simple and basic as that.

    “Artificial intelligence” is just the manipulation of symbols, without any appreciation of what the symbols mean: it’s not living intelligence. Why is this so difficult for people to understand?

  30. From _The Brain_, Edelman and Changeux, editors, (2000) pp218-219:

    "One issue of particular interest to researchers has been the question of how the brain handles the staggering number of mechanical variables involved in even the simplest movement. To illustrate the complexity of this basic problem, consider the analogy of a marionette--a rough imitation of the human body with a head, a trunk, two arms, two hands, two legs and two feet. Rather than pulling on wires, a modern day puppeteer uses a computerized control board with a switch connected to each of the marionette's thirteen joints. Each switch can take one of five positions: two for the extreme angles and three for the intermediate values.

    To bring the marionette to life, the puppeteer faces the daunting task of mastering and controlling over 5 to the 13th different positions, or approximately one billion. If we now make the simple marionette more like the infinitely more complicated human body--say by adding ball joints with two angles at the hip, shoulders, hands and feet--the number of possible positions rises to 5 to the 19th power, or more than ten thousand billion. This analogy gives some sense of the monumental problem handled routinely by the CNS in the ongoing course of motor control. ...

    Finally, there is the issue of motor learning. In the course of a lifetime, a human being masters a huge repertoire of movements, the memory of which must somehow be stored in the CNS, despite the very real constraints presented by brain anatomy. Even if one were to assume that each of the billions of neurons were to represent a posture in the body's repertoire, storage capacity would fall far short of what is needed. ..."
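
    (The quoted arithmetic checks out; in Python:)

        print(5 ** 13)  # 1220703125, approximately one billion
        print(5 ** 19)  # 19073486328125, more than ten thousand billion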


  31. @mh - It's mostly a matter of scale. The neural nets you and I built decades ago were small, and like the 300 or so neuron brain of the roundworm, had pretty modest capabilities. Today's deep neural nets are thousands or tens of thousands of times as large, and run on parallel architectures tens of thousands of times as fast. Size and speed matter, and that's why they can solve problems beyond human capabilities (like mastering Go, Chess and Shogi) in hours.

    Humans are smarter than those roundworms because our brains are much larger and more elaborate in their architecture. Today's neural nets have already surpassed us in many tasks, including some that require decades of training (radiology, say). Our brains change over hundreds of thousands or millions of years. Neural nets are changing dramatically every year or two.

    Replies
    1. "It does not really „learn“ anything in the sense that it constructs in internal model containing causal relations."

      Sure, and the way many people consciously and repeatedly want to confuse cognition with cognitive tools is quite annoying.

  33. Human intelligence can do something machines can't: make decisions without enough information.

    A machine may contain a random number generator. But to use it, the machine must first decide whether the situation calls for randomness. And if there is not enough information available for that decision either ..., and so on.

    This consideration may touch on the problem of free will ...?

    One thing is sure: an AI cannot have free will. With brains, we are not so sure.
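    The regress described above (deciding whether to use the random generator) is easy to caricature in code (a toy sketch; the function name and threshold are hypothetical): even falling back on randomness requires a prior, non-random rule that decides when to fall back.

      import random

      def decide(evidence, threshold=0.8):
          # this very test is itself a decision made with limited information
          if evidence is None or evidence < threshold:
              return random.choice(["act", "wait"])  # hand the choice to the dice
          return "act"

      print(decide(0.9), decide(None))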

  34. Read Emphyrio (by Jack Vance). Quite realistic, I think.

  35. J Krishnamurti said "energy in a pattern is matter". When the pattern repeats, it is a program. And to repeat there must be a record of the pattern. Matter is such a record. Matter is energy trapped in a pattern. When energy functions in a pattern, it is a program. Matter is a program. When you notice a pattern, you introduce time. Because you notice that the same thing repeats, you discover a pattern; you can't do this without time. Again, when you notice change, you introduce time. In both cases, you compare what you recorded - the thing or the image or the moment or the past - with the thing or the image or the moment you see in the present. If there is a match, it is a pattern; if instead you see a slightly different but associated thing or image, then it is change. When we match, we compare; when we compare, we measure. Could this measurement have been possible without memory? Could memory have been possible without recording? So recording, memory, comparison, measurement and noticing a pattern or a change is the movement of time. Time is a measure. Because memory is the outcome of recording, memory is time; it has accumulated through time. We said that matter is a record. Matter is a program. Therefore, matter is time.

    Let us say that when energy falls into a pattern or gets trapped in a pattern, energy "coagulates" into matter. Now, animals and man fall into habits. A habit, either good or bad, is a pattern of functioning. And we keep functioning in that pattern until it has become so mechanical that there is very little thinking. It follows from our reasoning thus far that habit is also a program. This habit-forming mechanism slowly evolves into involuntary mechanisms like the beating of the heart and the circulation of blood. Lyall Watson in one of his books said that if we had to make our hearts beat consciously (as a voluntary mechanism), we would be doing nothing but making the heart beat day in, day out. Animals and man fall into habit because they are responding from their background, which is energy falling into a pattern, energy getting trapped in a pattern to become matter. The habit-forming mechanism is no accident of evolution, no quirk of evolution; we were bound to evolve such a mechanism because we are only responding from our background. In addition, because habit is a program, habit is the observer and time.

  36. We were saying energy in a pattern is matter, and that humans and animals functioning in a pattern is habit. My mind now hyperlinks to something weird, something crazy: acceleration. Lyall Watson in his book "Supernature" said that a cat pricks up its ears and notices, or is sensitive to, the pitch of a sound initially; but if the sound continues, the cat slowly becomes insensitive and stops hearing the sound, which first recedes into the background and then does not seem to exist. When the pitch changes again, the cat wakes up from its sensory stupor and hears the sound, pricking up its ears. Becoming stupefied and insensitive is a symptom of habit.

    Something weird and crazy: From relativity we learn that when a body is accelerated from an initial velocity to a final velocity, there is length contraction, time dilation, and an increase in mass. Let a 100-ton mass with an initial velocity of 900 m/s (3240 km/h, or 0.0003% of the velocity of light) be accelerated to 1200 m/s (4320 km/h, or 0.0004% of the velocity of light), and then pull the plug on the acceleration. During this time, by relativity, the mass increases by about half a milligram, while there has also been length contraction and time dilation. Once we pull the plug on the acceleration, the massive body settles into uniform motion again. We started with uniform motion and we have now ended with uniform motion. In the interim there was a change in velocity. I reckon that the two uniform motions differ by the measure of their velocity. Only during the interim would the cat have noticed or felt the acceleration. The uniform motions were periods of stupor or insensitivity or inertia (habit leads to inertia). But in the interim there was an increase in mass etc., which we may together call space-time crunching and energy coagulating into half a milligram of matter. Acceleration causes coagulation and changes one uniform motion into another by the measure of velocity. This is akin to breaking out of one pattern and falling into another. Then the pattern of energy has changed. In addition, I speculate that maybe space-time crunching coagulates into matter, i.e., space-time itself becomes matter... crazy. You guys must be making fun of me. But all this is so much fun.
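    For what it's worth, the arithmetic roughly checks out (a minimal sketch using the low-velocity approximation gamma ~ 1 + v^2/2c^2, with the comment's own mass and velocities):

      c = 299_792_458.0  # speed of light, m/s
      m = 100_000.0      # 100 tons, in kg
      v1, v2 = 900.0, 1200.0
      # delta-m = m * (gamma(v2) - gamma(v1)) ~= m * (v2^2 - v1^2) / (2 c^2)
      dm_kg = m * (v2**2 - v1**2) / (2 * c**2)
      print(dm_kg * 1e6)  # ~0.35 milligrams (1 kg = 1e6 mg)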

  37. We will be making what will be accepted as very intelligent, cooperative, and conversant robots and assistants. But none built with any of the current conventional NN and AI technologies I've seen written about will be conscious (in the sense that we are). They won't mind (so to speak) if you turn them off.

  38. I think the point about the brain building models is where it differs most from AI.

    What we call reality is (obviously) a model that each brain creates. The model of the outside usually corresponds to something out there, well enough for us to operate in reality. It has useful attributes such as various qualia, color and other features.

    If you want to create a machine that has a consciousness, that is aware of itself, you need to create such a model of the world, and then have the modelling machinery model the model, to create the observer that observes itself.

    The reason for a living entity to model the model (the loop where the "I" lives) is of course to predict the near future in order to behave in a useful (survivable) way.

    My humble understanding is that this is where the science is going: away from the particular workings of individual neurons and into the realm of information and model processing.

    It is not far-fetched to think that if you want to create a successful robot, like a self-driving car, you need to program it to have a world model, and a model of itself, to enable it to predict the near future and select the best option to succeed (see the sketch below). And that is exactly what the brain does. The byproduct is consciousness.

    From that perspective, almost all living creatures have more or less of that.
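    A minimal sketch of the predict-and-select loop described above (all names hypothetical; this is plain model-predictive selection, not a claim about how brains actually do it):

      def act(world, agent, actions, predict, score):
          # roll the world model and self model forward for each candidate
          # action, then pick the action whose imagined future scores best
          return max(actions, key=lambda a: score(*predict(world, agent, a)))

      # toy usage: a 1-D world in which the agent wants to reach position 10
      predict = lambda w, s, a: (w + a, s)
      score = lambda w, s: -abs(10 - w)
      print(act(0, None, [-1, 0, 1], predict, score))  # -> 1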

  39. After the AI hype of the 1980s and the ANN hype of the 1990s, it seemed to me that the scientific world learned nothing. It crawled away in embarrassment. Anyone enthusiastic about AI should read up on this period.

    I am fairly sure there will be a similar crash again in the 2020s, although ANNs do have at least some real capabilities; but as someone said, these may be more akin to curve fitting.

    Replies
    1. ... or ... reserved lanes for self-driving cars will be built alongside the main roads, accident-rate comparisons will make self-driving lanes look safer than traditional ones, the transport industry will sponsor unattended truck driving because it drastically lowers costs... No need to make AI cars smarter: humans will adapt ... See a pattern?

    2. Indeed I do - one only has to think about the inconvenience of telephone conversations with computers.

      However, I think self-driving cars are going to face a very tough time - I mean, most cars have to visit residential homes, and sometimes traffic is diverted off the motorways, so if such a car can't keep moving safely, the result is chaos. I.e. these vehicles will drastically increase costs!

      To me the definition of AI is that it has to handle open-ended problems.

      I think this really brings us back to the question of what consciousness is - is it really something that "emerges" out of elaborate computation, as so many assume?

  40. Let’s get this straight: With equations [1], numeric outcomes are decided by relationships between variables. Equations don’t handle situations. Algorithms handle situations by analysing the numeric values of multiple variables to arrive at numeric outcomes. Algorithms cannot come from equations - it’s the other way around: equations come from algorithms (e.g. the equations that assign numeric values to variables).

    But computers/robots/"AIs" don't actually make algorithmic decisions. They merely have structures which implement pre-decided ways of handling incoming data [2]: all decisions have been made, or agreed to, by human beings via the computer program.

    Clearly, living things make their own algorithmic decisions, on the spot, as required. However, some people may want to religiously believe that a deity made the algorithms; or that there is a Platonic realm of all possible algorithms; or that algorithms “emerge” from equations; or that algorithms “emerge” from situations.

    …………………………….
    1. E.g. the equations that represent laws of nature.
    2. I.e. numbers associated with variables.
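    A toy illustration of the distinction being drawn (hypothetical Python; an equation merely relates variables, while an algorithm inspects the values encountered and branches):

      # "equation": a fixed relationship between variables
      def equation(x):
          return 2 * x + 1

      # "algorithm": a decision made on the numeric values encountered
      def algorithm(x):
          if x < 0:  # handle the situation "x is negative"
              return 0
          return 2 * x + 1

      print(equation(-3), algorithm(-3))  # -5 0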

    Replies
    1. A part of what you write has a salient point. Biological neural systems are self-adaptive. Neural dendrites grow or shrink, even form and disappear, based on some adaptive process for communicating with other neurons. Artificial neural networks do have weights, which is similar, but not quite the same. In effect, brains are capable of rewriting their own programming. A baby is born with neurons that have loads of connections, and largely in the first year of life many of these connections are pruned away. This is an adaptive process that permits the child to learn the language of the culture it is born into and to acquire the necessary skills.

      I learned to program when I was 13, with an account my father had. He did not know what to do with it - as a prof of language and linguistics this was a bit out of his ken - and I took to it. I was programming an IBM 370 with Hollerith cards until, a few years later, workstations became available. I learned a number of languages - Assembly, Fortran, ALGOL, LISP, etc. - and that rather frustrating language called Job Control Language (JCL), where you set up the machine addresses to run a program. Alongside ALGOL (ALGOrithmic Language) there was another language called SNOBOL that for a time was popular in the AI field. Remember, this was a time - mid 70s - of computing stone knives and bear skins. The IBM 370 memory systems were magnetic rings, where a memory bit or core was visible to the naked eye! SNOBOL had some features that allowed a program to rescript itself. It proved to be a chaotic language to work with and was dropped. The problem of recursive systems is hard, and I have some discussions here on how this plays with Gödel's theorem, which is hyper-hard and I think impossible to implement in a full way, though we might be able to approximate it.

      As Steven Tyler of Aerosmith put it, "Lines on my face are getting clearer." Though don't laugh too much at that old stuff, for the prior IBM 360s were the computers the Kennedy Space Center and ground control used to send Apollo astronauts to the moon. Now we have apps on cell phones that can morph images of our faces.

      I disagree with your statement that algorithms come before equations. Numerical programming is often about taking a function or equation, say a Bessel function, and writing it in a way that can be implemented on a machine. Finite difference methods are canonical for differential equations. Long before computers existed, the mathematics of functions, calculus, and differential equations was going strong.
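      For instance, a minimal finite-difference sketch: forward Euler for dy/dt = -y, whose exact solution is exp(-t).

        import math

        y, t, h = 1.0, 0.0, 0.01
        while t < 1.0:
            y += h * (-y)  # (y_next - y)/h = -y, rearranged
            t += h
        print(y, math.exp(-1.0))  # ~0.3660 vs ~0.3679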

    2. Lawrence, I probably started programming computers around the time you did. I remember Job Control Language. Those were the bad old days of “spaghetti code”, and I was always having to wade through and fix up masses of other people’s unstructured code, for various government departments.

      You say that "Long before computers existed, the mathematics of functions, calculus, and differential equations was going strong". But the lines of mathematical symbols that you might see on a page are the product of situation-specific behaviours that can only potentially be represented by algorithms, and maybe one-off algorithms at that. The product (a page of mathematical symbols) can't be isolated from the consciousness and behaviours that produced it. I.e. the thing that is representable by equations comes from a thing that is potentially only representable by algorithms.

      But it goes deeper than that: there are algorithmic procedures everywhere in mathematics, hidden behind mathematical symbols. Physics equations imply not only relationships between categories of information, but algorithmic procedures that need to be performed on these categories; the delta symbol would be a simple example. I.e. physics, seemingly unknowingly, assumes that there is something about the micro world that is algorithmic.

      On the question of whether algorithms came before equations, the answer is simple: you can easily construct an algorithm to introduce an equation (e.g. an equation that assigns a number to a variable); but you seemingly can't construct equations, or a numeric situation that is the outcome of equations, that reproduce what an algorithm does.

    3. I am thinking of mathematics as it was developed, and of the thinking of people up to the end of the 19th century that nature was continuous. Neurons have some analogies to digital computers, but there is also a lot of continuous structure to how they work. How mathematics percolates out of brains or neural systems is still largely unknown.

    4. Is nature continuous? That's debatable. Seemingly, underlying everything, outcomes in nature are not continuous: they "jump".

      The fact remains that algorithms can represent responses to situations, but equations can't: equations can't "cut the mustard"!

    5. In fact, it's due to "equation thinking" (thanks to "religious" believers in physics, philosophy and mathematics) that the world has got itself into this mess.

      Their idea is that the present is due to the past, is due to the past, is due to the past ... is due to equations and initial numbers. Their idea is that there are no algorithmic control mechanisms that caused anything, or can change anything, in the entire universe-system.

      We have a pack of righteous "religious" believers in physics, philosophy and mathematics who believe that an out-of-control system will automatically right itself, ... or not.

      That's the puerile level of intellectual thought that physics, philosophy and mathematics have given us, and that has infected the entire world of thought.

  41. Correction: "formaldehyde" must be replaced with "formalin"... so the expression becomes 'brains floating in "formalin"'.

  42. If matter is a record, then matter is memory. Picking up the thread "responding from a background, responding from memory" and hyperlinking to the double-slit experiment of quantum mechanics: even if one electron at a time is sent through the slits, after a while the interference pattern appears. This means that the electrons are responding from their background; it is memory in action. And the interference pattern is a screenshot of the fundamental template of nature in the absence of an observer or a program.

  43. From Jim Baggott's video on "The concept of mass" we gather that Tom Stoppard, through his character Kerner, said "the act of observing determines the reality".
    There is the uninterpreted primordial to begin with. Then there are programs which interpret this primordial and present their own reality. Among the programs there is a common factor, which is the least interpreted primordial. Let light or photons be such a least interpreted primordial. Different programs interpret such a primordial differently, and each program presents its own reality. Consider photosynthesis. During photosynthesis, programming is going on; the plant-program acts on the least interpreted primordial, such as light, and turns it into food. (Of course there are many other factors involved, but let us ignore them for the moment, for the sake of argument.) In the case of vision, the program of the human eye interprets the least interpreted primordial in its own way and builds a visual reality. In photosynthesis, programming is going on together with interpretation, but in vision there is only interpretation. I may be wrong about photosynthesis, but I am sure about vision. Each program by its own virtue presents its own reality. Therefore, the program determines reality. The program is the observer, which means each observer presents his or its own reality (based on the program). If two observers interpret the least interpreted primordial in two different ways, then what is reality? What is actuality? Is the uninterpreted primordial the actuality?

    Replies
    1. @Gokul Gopisetti

      The Biotechnological Challenge is to overcome Subjectivity.
      In the beasts, Speech, Symbols (environmental memory) & Language deployed notions of 'Intersubjectivity' ... and the Iteration of Intersubjective Notions deploys Collective Memories ... Those Collective Memories gave the beasts the Illusion of Knowledge ... and with the Illusion of Knowledge, the Ideal of Objectivity comes ... 'Objectivity' as 'the Intersubjective Knowledge that applies to All the Subjects' ... and with 'Objectivity', the 'Hope for Truth' comes ...

      Truth as the Knowledge that applies to All the Subjects, All the Objects and All the Knowledges ...

      Of course, 'Subjectivists' tend to hate 'the Ideals of Truth' ... just because they - subconsciously - feel that 'Objectivity' implies the Death of their Particular and contingent Self, by imposing the 'Birth of the Whole' and/or a 'Collective and Objective, Transcendent Supra-Self' ...

      But by immemorial collective and personal experience, we all know that All the Subjects perish ... and the only thing that never seems to Perish is what is Real and Exists ... and what is Real and Exists is - intrinsically - linked to 'Truth' and 'Objectivity' ...

      Therefore, to choose 'Subjectivity' is to bet on Perishing in Ignorance and Chaos ...

      To choose 'Objectivity' is to bet on Survival in Knowledge and Truth ...

      Of course, it is not 'My Self' who makes those choices...

      Are The Apes ready to deploy 'Collective Intelligence' ??

      Not Yet.

      Can the Apes deploy 'Collective Intelligence'?

      Yes, but that is not a 'Tendency of Nature' ...

      In a Metaphorical Jump of Layers, Nature is some sort of subjectivist, fated to disintegrate and perish in Ignorance and Chaos ...

      That's the reason to call 'Objectivity' a 'Metaphysical Challenge' ... a Supra-Natural Technological Will ... the Survival of the Fittest at Cosmic Scales ...

    2. Quantum mechanics and the observer effect question this objectivity.

    3. @Gokul Gopisetti
      Human Observers die ...
      QM is just a contingent and transitory 'Anthropomorphic Theory' ...

      What is objective just is as it is ... It doesn't need to be questioned, nor does it become a subject of Ignorance/Uncertainty.

      The Map is not The Territory.

  44. After watching Jim Baggott's video on "The concept of mass", we learn that mass is not a property; rather, "Mass is a behavior of quantum fields. It is what quantum fields do". Exciting indeed. Then we can ask: what determines behavior? "Programming", is it not? The behavior is in the programming. And programming implies "recording" and "memory". Therefore, matter is a record; matter is a program; matter is memory; therefore, matter is time.

  45. The behavior of quantum fields that we observe is by virtue of the program that is observing it, that is, our program, the human-program. "The observer is the observed" - J Krishnamurti.

  46. There is something characteristic about evolution: there is only "rolling out"; there is no "rolling back", so far. Man does not roll back into a monkey, and a monkey does not roll back into an otter. There are no instances in life so far where there is a rolling back. This characteristic of life, to only roll out and not roll back, bears a striking similarity to the arrow of time.

    Mutations in our world view have led us from the fact that the earth is round to the fact of the curvature of space-time. Once we understood that the earth is not flat, we turned our back on the old view; once we understood that the sun does not rise or set, we turned our back on it; once we understood that the earth is not static, we turned our back on it. Turning our back was once and for all; there was no rolling back of our world view. So mutations in our world view also have this characteristic of not "rolling back". In both cases it is very difficult to set the clock back, something akin to the arrow of time. Once a deleterious mutation causes cancer, it is very difficult to reverse that mutation. The mutations in our world view are software mutations, because they affect the human-program. Mutations in our DNA are hardware mutations, because they affect the biology. So, what is exciting in this? Well, why do software mutations in our world view have a striking similarity to hardware mutations in our DNA? Why do the software mutations mimic the hardware mutations? Why is the software always responding from its background, the hardware?

    Nature seems to have a fundamental design pattern: recording, memorizing, and responding from that memory. This design pattern has been very successful and plays out beautifully and effectively. It is this design pattern that nature uses extensively, and other, more complex patterns of behavior emerge out of this fundamental design pattern as nature works out the levels of abstraction while ascending the hierarchy of complexity.

    Evolution in the direction of life seems to look like the arrow of time, but what is perplexing is the "rolling back" that falls out of radioactivity. Uranium "rolls back" into radium through radioactive decay - why? This rolling back contradicts the arrow-of-time concept.

  47. Machines are hardware programs. Say they act on raw materials and transform them into goods; usually it is a single hardwired function. They all perform a hardwired function. If hardware is able to perform further functions independent of the hardware itself, then those functions are software running on the hardware.

    Hyperlinking to the double-slit experiment of quantum mechanics: we now know that the observer is a program, and the program influences the outcome. Coming to think of it, the experimental setup of the double-slit experiment is itself a program, or the observer. Under the influence of the program or the observer, superposition (that which is described by the wave function) produces the interference pattern. Even if the electrons are sent one at a time through the slits, eventually the influence of the program produces the interference pattern. Under the influence of the program, the electrons tend towards an interference pattern. For that experimental setup, or hardware program, the behavior or outcome is interference. Come to think of it, for the same experimental setup, from the first time it was performed up until this day, the experimental outcome in the absence of the observer as a detector has always been an interference pattern. The experimental setup is a first-level program, and the detector is the second-level program. Again, under the influence of the second-level program, the wave function invariably collapses. That has always been the outcome.

  48. Continued... We were talking about the influence of "the program" or "the observer": the first level of influence, which is the experimental setup, and the second level of influence, which is the detector in the double-slit experiment.
    Now, hyperlinking to Lyall Watson's cat in his book Supernature, we learn that the cat only "senses" the change in the pitch of the sound and ignores what happens between the two different pitches. Remember, we called this insensitivity a state of sensory stupor, of being stupefied. So there is "sensing", sensitivity, when there is something new, a variance. Now, the sound influences the cat, and the cat records this influence. Because the cat records the influence, it slowly pushes the sound to the background, and ultimately the sound seems not to exist. The cat becomes ignorant - it ignores - and insensitive, in that it is no longer "sensing", no longer "noticing". What interests us is "sensing" and "the recording of an influence".
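    The change-detection behaviour described here is easy to caricature (a toy sketch; the function name and pitch values are made up):

      def responses(pitches):
          last, out = None, []
          for p in pitches:
              out.append(1 if p != last else 0)  # "sensing" happens only on change
              last = p
          return out

      print(responses([440, 440, 440, 880, 880, 440]))
      # [1, 0, 0, 1, 0, 1] -- stupor between changes, pricked ears at each change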

    Hyperlinking to acceleration: acceleration changes the velocity of a body, that is, from the uniform motion of one velocity to the uniform motion of another velocity. Remember, we said that during acceleration energy coagulates into matter. By relativity we know that after acceleration a body is never the same again. And we know that acceleration influences velocity, length, time, and mass. What I am getting at is: acceleration is "an influence"; during acceleration there is "sensing"; and "the influence" is "recorded" as the coagulation of energy etc., so that the body, though in uniform motion, has a different velocity. The body is never the same again after unplugging the acceleration.

    Hyperlinking to the double-slit experiment: the influence of the observer, the program - which is the detector-program - on superposition is a fact. If we can prove that the electron "records" this influence, then we can show that the electron is "sensing". This is important because "recording an influence" is the beginning of, or rather prepares the ground for, evolution, for the origination of complexity.

  49. @Lorraine Ford 6:46 PM, August 09, 2019, and general:

    The human brain isn't understood very well. We must understand that any system which brings decisions to action is first of all an amplifier. Decisions are immaterial and powerless; they have to be brought into material existence to take effect.

    So there must be an amplifier. The brain is first of all an amplifier. But what does the brain amplify? At what energy level do decisions first begin? The quantum level?


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
