Neural net illustration. Screenshot from this video.
In the past two years, governments all over the world have launched research initiatives for Artificial Intelligence (AI). Canada, China, the United States, the European Commission, Australia, France, Denmark, the UK, Germany – everyone suddenly has a strategy for “AI made in” whatever happens to be their own part of the planet. It is now foreseeable that, in the coming decades, tens of billions of dollars will flow into the field.
But ask a physicist what they think of artificial intelligence, and they’ll probably say “duh.” For them, AI was trendy in the 1980s. They prefer to call it “machine learning” and pride themselves on having used it for decades.
As early as the mid-1980s, researchers working in statistical mechanics – a field concerned with the interaction of large numbers of particles – set out to better understand how machines learn. They noticed that magnets with disorderly magnetization (known as “spin glasses”) can serve as a physical realization of certain mathematical rules used in machine learning. This in turn means that the physical behavior of these magnets sheds light on some properties of learning machines, such as their storage capacity. Back then, physicists also used techniques from statistical mechanics to classify the learning abilities of algorithms.
Particle physicists, too, were at the forefront of machine learning. The first workshop on Artificial Intelligence in High Energy and Nuclear Physics (AIHENP) was held as early as 1990. Workshops in this series still take place, but have since been renamed to Advanced Computing and Analysis Techniques. This may be because the new acronym, ACAT, is catchier. But it also illustrates that the phrase “Artificial Intelligence” is no longer in common use among researchers. It now appears primarily as an attention-grabber in the mass media.
Physicists avoid the term “Artificial Intelligence” not only because it reeks of hype, but because the analogy to natural intelligence is superficial at best, misleading at worst. True, the current models are loosely based on the human brain’s architecture. These so-called “neural networks” are algorithms based on mathematical representations of “neurons” connected by “synapses.” Using feedback about its performance – the “training” – the algorithm then “learns” to optimize a quantifiable goal, such as recognizing an image, or predicting a data trend.
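To make this concrete, here is a minimal sketch of such a network of “neurons” and “synapses” learning a toy task from feedback. The data, layer sizes, and learning rate are arbitrary illustrative choices; real applications would use a library such as PyTorch or TensorFlow.

```python
# Minimal sketch of "neurons connected by synapses" learning from feedback:
# a tiny two-layer network trained by gradient descent to reproduce XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 8))   # "synapses" from inputs to hidden neurons
W2 = rng.normal(size=(8, 1))   # "synapses" from hidden neurons to output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):                  # the "training" loop
    h = sigmoid(X @ W1)                    # hidden-neuron activations
    out = sigmoid(h @ W2)                  # network prediction
    err = out - y                          # feedback about performance
    # propagate the error backwards and nudge the weights to reduce it
    delta_out = err * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ delta_out)
    W1 -= 0.5 * (X.T @ delta_hid)

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should approach [0, 1, 1, 0]
```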
This type of iterative learning is certainly one aspect of intelligence, but it leaves much to be desired. The current algorithms rely heavily on humans to provide suitable input data. They do not formulate their own goals. They do not propose models. They are, as far as physicists are concerned, little more than elaborate ways of fitting and extrapolating data.
But then, what novelty can AI bring to physics? A lot, it turns out. While the techniques are not new – even “deep learning” dates back to the early 2000s – today’s ease of use and sheer computational power allow physicists to assign computers to tasks previously reserved for humans. It has also enabled them to explore entirely new research directions. Until a few years ago, other computational methods often outperformed machine learning, but now machine learning leads in many different areas. This is why, in the past few years, interest in machine learning has spread into seemingly every niche.
Most applications of AI in physics loosely fall into three main categories: Data analysis, modeling, and model analysis.
Data analysis is the most widely known application of machine learning. Neural networks can be trained to recognize specific patterns, and can also learn to find new patterns on their own. In physics, this is used in image analysis, for example when astrophysicists search for signals of gravitational lensing. Gravitational lensing happens when space-time around an object is deformed so much that it noticeably distorts the light coming from behind it. The recent, headline-making, black hole image is an extreme example. But most gravitational lensing events are more subtle, resulting in smears or partial arcs. AIs can learn to identify these.
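As a toy illustration of such a pattern search, one can train a simple classifier to flag tiny synthetic “images” that contain a faint arc buried in noise. Image size, noise level, and the use of scikit-learn’s logistic regression (standing in for a real neural network) are all arbitrary choices here.

```python
# Toy pattern search: flag small noisy "images" that contain a faint arc.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
size = 16
yy, xx = np.mgrid[0:size, 0:size]
arc = (np.abs(np.hypot(xx - 8, yy - 14) - 10) < 1.2).astype(float)  # arc template

def make_image(has_arc):
    img = rng.normal(0, 1.0, (size, size))       # pure noise background
    if has_arc:
        img += 0.8 * arc                         # weak arc buried in the noise
    return img.ravel()

labels = rng.integers(0, 2, 400)
images = np.array([make_image(l) for l in labels])

clf = LogisticRegression(max_iter=1000).fit(images[:300], labels[:300])
print("accuracy on unseen images:", clf.score(images[300:], labels[300:]))
```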
Particle physicists also use neural networks to find patterns, both specific and unspecific ones. Highly energetic particle collisions, like those done at the Large Hadron Collider, produce huge amounts of data. Neural networks can be trained to flag interesting events. Similar techniques have been used to identify certain types of radio bursts, and may soon help finding gravitational waves.
Machine learning aids the modeling of physical systems both by speeding up calculations and by enabling new types of calculations. For example, simulations of galaxy formation take a long time even on the current generation of supercomputers. But neural networks can learn to extrapolate from existing simulations without having to re-run the full simulation each time – a technique that was recently used successfully to match the amount of dark matter to the amount of visible matter in galaxies. Neural networks have also been used to reconstruct what happens when cosmic rays hit the atmosphere, or how elementary particles are distributed inside composite particles.
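A rough sketch of this emulator idea, with a cheap stand-in function playing the role of the expensive simulation and a small scikit-learn network as the surrogate; all parameter ranges and layer sizes are arbitrary choices:

```python
# Surrogate sketch: learn the input-output map of an "expensive" simulation,
# then query the cheap emulator instead of re-running the simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params):
    """Stand-in for a costly physics code mapping two parameters to one output."""
    a, b = params
    return np.sin(3 * a) * np.exp(-b) + 0.5 * a * b

rng = np.random.default_rng(0)
train_params = rng.uniform(0, 1, size=(200, 2))                 # a few hundred "runs"
train_output = np.array([expensive_simulation(p) for p in train_params])

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
emulator.fit(train_params, train_output)                        # learn the mapping

new_params = np.array([[0.3, 0.7]])
print("emulator prediction:", emulator.predict(new_params)[0])
print("full simulation:    ", expensive_simulation(new_params[0]))  # for comparison
```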
For model analysis, machine learning is applied to better understand the properties of already known theories that cannot be extracted by other mathematical methods, or to speed up computation. For example, the interaction of many quantum particles can result in a variety of phases of matter, but the existing mathematical methods have not allowed physicists to calculate these phases. Neural nets can encode the many quantum particles and then classify the different types of behavior.
Similar ideas underlie neural networks that seek to classify the properties of materials, such as conductivity or compressibility. While the theory for the materials’ atomic structure is known in principle, many calculations have so far exceeded the existing computational resources. Machine learning is beginning to change that. Many hope that it may one day allow physicists to find materials that are superconducting at room temperature. Another fertile area for applications of neural nets is “quantum tomography,” that is, the reconstruction of a quantum state from the measurements performed on it, a problem of high relevance for quantum computing.
And it is not only that machine learning advances physics; physics can in return advance machine learning. At present, it is not well understood just why neural nets work as well as they do. Since some neural networks can be represented as physical systems, knowledge from physics may shed light on the situation.
In summary, machine learning rather suddenly allows physicists to tackle a lot of problems that were previously intractable, simply because of the high computational burden.
What does this mean for the future of physics? Will we see the “End of Theory” as Chris Anderson oracled in 2008?
I do not think so. There are many different types of neural networks, which differ in their architecture and learning scheme. Physicists now have to understand which algorithm works for which case and how well, the same way they previously had to understand which theory works for which case and how well. Rather than spelling the end of theory, machine learning will take it to the next level.
[You can help me keep my writing freely available by using the donate button in the top right corner of the page.]
Hello Sabine,
Of course you are right, this thing that is now called AI is not intelligence, it is mostly hype. It has been around for a long time, but it is a tool that is gradually becoming more useful. Now let's get back to the interesting stuff. You promised us a resolution to the quantum measurement problem back in October and we are still waiting...
Steve Snow
Just a little remark: the first working deep learning algorithm was proposed much earlier than 2000. In fact, in 1965 Alexey Ivakhnenko and his coauthor from the Kyiv Polytechnic University (USSR then, Ukraine now) already published a supervised multilayer version of deep learning. In 1971 they already described a deep network with 8 layers. I think what happened by 2000 is that the necessary computing power started to be widely available to allow interesting applications of deep learning.
And the trio recognized by the 2018 ACM Turing award (Hinton, Le Cun, Bengio) was already working on "deep learning" in the 1980's. But that great name had not been coined yet.
Isn't artificial intelligence just artisanal competence?
ReplyDeleteI highly, highly, highly recommend the book Infinity and the Mind by Rudy Rucker. He has an absolutely great cartoon on this topic. (The book is mainly about various aspects of infinity and mathematical logic: the Berry paradox, transfinite cardinals, and so on---highly enjoyable, and one of the few books I've read more than once. Also, if in terms of acquaintance and not of papers, he is partly responsible for my Einstein number of 4.)
The author has a free version on his website.
Couple of errors (or incidents of creative typing):
primarily as attention-grabber ->
primarily as an attention-grabber
"for what physicists are concerned"
is an odd phrase, probably meant
"as far as physicists are concerned"
have also been used to o reconstruct ->
have also been used to reconstruct
Thanks! I have fixed those.
The catch phrase ”Correlation supersedes causation ...“ in the “End of Theory” sounds a bit like QM measurements.
And yes, no end in sight - writing algorithms is just the next emergent level.
Most differential equations (DE) need an algorithm to be solved/integrated, but an algorithm doesn't need a DE. And “Laplace’s clockwork universe with its determinism is based on calculus” as said here.
Sabine just retweeted a recent, well written piece from Nautilus. My favorite part is:
“Third, the scientists failed to wrestle with the 800-pound chaotic butterfly in the room: ... ... Therefore, claiming that a machine learning system has learned to predict a chaotic system over the long term is like saying that it has learned to predict the behavior of a random process, like thermal noise or radioactive decay. It simply can’t be done, regardless of the algorithm.”
This is because randomness is not definable by any finite computing system.
Therefore, I try to restrict myself to pseudo randomness and to potential infinity.
Potential infinity as in a (limiting) process or algorithm in contrast to completed infinity.
Archimedes with his algorithmic calculation of π started a very successful journey. He recognized that (potential) infinity is the bridge between the curved and the straight, which led to calculus.
And pseudo randomness is the bridge to determinism.
While information (Shannon as always) is the subtle interplay between predictability and randomness (*).
--------------
(*) probably pseudo randomness – the very bridge to determinism.
This then of course leads to the question of whether quantum mechanics, or the occurrence of decoherent or measurement outcomes, are truly random or not. If all randomness is pseudo-random then we might ask whether this would imply some sort of local hidden variable and a suspension of Bell's theorem.
Experiment tells us that entangled particles when measured are nonlocally correlated (*) – just what QM tells us. No need at all to suspend Bell’s theorem.
If measurement outcomes were not random, you would be able to signal superluminally.
Whether or not this randomness is pseudo randomness does not matter - you cannot tell the difference.
--------------------
(*) assuming statistical independence of Alice’s and Bob’s choices, or better, of their pseudo random number generators’ choices, to steer clear of free will.
Reimond,
“…writing algorithms is just the next emergent level. Most differential equations (DE) need an algorithm to be solved/integrated, but an algorithm doesn't need a DE.”
In physics, relationships are represented by equations; and the numbers for the variables that represent outcomes are determined by the equations. I.e. outcomes are determined by the equations, not by the situation.
But algorithms represent responses to situations; responding to situations can only be represented algorithmically e.g. a very simple example would be:
“IF variable1 = number1 AND variable2 = number2 THEN variable3 = number3”
(where “variable1 = number1 AND variable2 = number2” represents the situation, and “variable3 = number3” represents the outcome).
Algorithms cannot be derived from, cannot “emerge” from, equations i.e. you cannot get a world that responds to situations from a world that is representable by nothing but equations, variables and numbers. An example of a high-level situation would be a person needing to respond to a tiger approaching.
If you think that something representable by algorithms can be derived from something representable by equations, then give an example of how this could happen. Otherwise, you have to conclude that algorithms represent something fundamental about the world that cannot be represented by equations, variables and numbers.
If quantum statistics is pseudo-random I think it would imply some deterministic system that computes probabilities that would obey the Bell inequalities. If observable outcomes occur in a purely random manner there is no such inner deterministic structure. Of course Kolmogorov complexity requires that, for some string of outcomes to be parsed by an algorithm, that algorithm must have as much or more complexity. This is a measure of the algorithmic entropy required to specify some string of symbols. For a set of binary outputs from say n spin 1/2 measurements there are N = 2^n possible strings. If this is finite there will be some that have a pattern, such as the one string with all spin ups and the other with all spin downs. However, for an infinite set the K-complexity is infinite. The Chaitin number or halting probability rears its head, where the K-entropy or measure over these sets for N → ∞ will diverge.
One might then object that this has a construct similar to a "fair coin toss" in a classical setting. However, a perfectly fair coin is a fiction, as there is discernible dynamics in the motion of the coin or dice that has small hidden weights due to imperfections of the faces. The only fair coin toss is a Stern-Gerlach measurement of a spin 1/2 particle.
This gets into my thesis, which I will not belabor here, on how quantum measurements and their interpretations are connected to Gödel's theorem. Randomness is not computable, and if QM had an underlying pseudo-random nature I think there would be some adherence to Bell inequalities.
Deep learning should be good at fitting equations to data, the point being to guess equations which can then be easily tested. It should be able to explore modifications to GR at the scale of galaxies as an alternative to dark matter, in case any of them work. Maybe there are other problems like that, emergence of time, thermodynamic gravity, etc. But it's important to produce an explicit model that can be tested by the usual means. So-called AI is good at creating convincing simulations of things, such as deep fakes, but these don't contain the information they appear to. You see a very convincing face but it's just a virtual artist painting a face according to stereotypical information it has learned. Lay people have trouble understanding what deep learning can do, or even how to reason morally about it. I'm not sure physicists are immune from such confusion.
ReplyDelete"identify certain types of gamma ray bursts"
Should be radio bursts
Thanks, very attentive! I have corrected this.
Another reason the term "Artificial Intelligence" does the science no favours is the ambiguity of the meaning.
"Made or produced by human beings rather than occurring naturally" is the intended meaning. But how many, on seeing the term, read into it the second meaning: "Feigned or affected"?
There is indeed a serious concern about an old problem in IT: GIGO (Garbage In Garbage Out). It may be well known in principle, but who's going to dive into the nitty-gritty of some huge "AI-based" model (simulation, data analysis, whatever) to find out whether there are any GIGO steps?
First a comic look at this:
http://www.smbc-comics.com/comic/superior-intelligence
Artificial neural networks (ANN) were hot in the late 1980s into the 1990s. The connections between neurons have a kinetic part that is very similar to what one finds in mechanics, and the neural weights are given by a sort of potential function. It all looks very much like Lagrangian-Hamiltonian mechanics. In fact one can quantize it. This was clearly not lost on Google a few years back with their annealing machine.
ANNs were a big topic for the military 25 years ago. The hope was that an ANN could spot an enemy tank or a target. The problem with this sort of machine learning is that it too often turned out that what the machine was learning had nothing to do with what you were trying to teach it. This is a big thing to remember with computers in general; they will do exactly what you program them to do, which may not be what you want them to do.
The term artificial intelligence (AI) has been around since the days (1940s) Nerode worked out his little machine graphs for Turing computations. This led to assembly language, which I started doing in junior high school. My father had a computer account and had little idea of what to do with it, and I pretty quickly had lots of fun writing programs on this IBM 370. In fact quantum computing's core language is QASM = quantum assembler machine-language. So the term has been around for a long time, and it was thought that if we could get computers to beat any human at chess then we would have a real AI. Now we can do that, even get machines to beat humans at the TV show Jeopardy, but we seem nowhere near getting computers to really be conscious entities.
I read recently that an AI system was able to deduce a heliocentric universe based on simple ground theodolite and sextant data. It was able to do this in a matter of hours. https://www.nature.com/articles/d41586-019-03332-7 This is fairly humbling, but of course humans with hands and tools made the instruments necessary to do this. Yet clearly it was a 16-century struggle from the time of Ptolemy to Copernicus, Galileo and Kepler. I suspect that AI may play some role in a breakthrough with quantum gravitation, where it may at least serve as some sort of algorithmic check on human work.
The issue of whether AI can be really conscious or exist in some internal way as we do has been a matter of debate and a theme in science fiction, from Asimov's novels such as I, Robot to the 2001 movie AI. We of course have a bit of a Pinocchio problem of sorts in that consciousness is a first person narrative, while scientific verification is second and third person. So we may never know if we succeed at this.
It may also be that what takes over are not so much AI machines, but rather the neuro-cyber connected system. We humans may before long have our brains wired into computer systems to where we are not just receiving cybernetic information, but maybe each other's inner thoughts and feelings. The 1990s Star Trek series had this with the Borg.
AI (or machine learning) applied to physics includes, in addition to (deep) neural networks, genetic algorithms and genetic programming (evolution of programs):
Explaining quantum correlations through evolution of causal models
arXiv:1608.03281
Materials informatics based on evolutionary algorithms: Application to search for superconducting hydrogen compounds
arXiv:1908.00746
etc.
Epicycles. Just a bit more sophisticated.
ReplyDeleteBefore addressing Sabine's excellent analysis, three comments caught my attention.
-----
Pascal: … the trio [Geoffrey Hinton, Yann LeCun, and Yoshua Bengio was] recognized by the 2018 ACM Turing award [for their work on] "deep learning". [But] in the 1980's [that phrase] had not been coined yet.
I was delighted when this trio won the Turing Award! During a DoD program review at MIT, I still recall researchers coming to me during breaks to suggest, you know, that while interesting, the ancient neural-net stuff that Yann had just presented on was so out of date compared to their work.
Ahem. It didn't quite work out that way, did it, folks?
Yann persevered because he understood that when hardware reached a certain price point, neural nets would explode from being toys to literally transforming the world wide web.
How? By attaching meaning to raw data, e.g. labeling data to say "this is a cat!", no matter how odd the cat looked. I still vividly recall one day when NPR started talking about the same folks I was driving to see. NPR was exulting at how the web now knew what a cat was! That day was a pivot point when folks like Yann transitioned almost overnight from researchers struggling for funding to media celebrities and chief scientists of big companies.
-----
marten: Isn't artificial intelligence just artisanal competence?
Sir marten, you have just given a better definition of neural nets than some entire books I've seen on "deep learning". Neural nets are just carefully trained assistants, capable of doing only what their trainers showed them. But while it takes decades to train one slow human artisan, an artisan made of silicon and bits can be replicated and sped up ad nauseam, until the entire world wide web "knows" that bit of artisanry. That is what makes neural nets so transformative.
Alas, I consider "deep learning" to be one of the most misleading misnomers in all of computer science. Ouch! Why so adamant?
Because what neural nets automate is perception, not learning. They must be trained, and in that trivial sense they do "learn". But what they flatly cannot do is the kind of "Eureka!" learning in which some set of mostly random, disconnected perceptions suddenly come together to form a new and deeper understanding of what the data means. That is real "deep learning", and it must be spoon-fed to most (not all!) neural nets by the only creatures who are truly adept at it, humans.
----------
Lawrence Crowell: It all looks very much like Lagrangian-Hamiltonian mechanics
Yes! Yoshua Bengio is now Scientific Director at Mila, a Canadian research group focused on machine learning. A recent Bengio et al paper, "Data-Driven Approach to Encoding and Decoding 3-D Crystal Structures" (use G.S., Google Scholar), is one example of how neural nets can speed data interpretation. My good friend Jean Michel Sellier, who works for Bengio, has been working for years on his ultra-fast quantum computation method that uses Wigner phase space representations of particles in combination with neural nets (e.g. G.S. "Signed particles and neural networks, towards efficient simulations of quantum systems").
Regarding Lagrangians and Hamiltonians, Jean Michel's success suggests that as with the fuzziness of time in Feynman's QED, reality at the smallest scales doesn't like being pigeonholed into overly "sharp", Planck-extremum representations. A second benefit is that such "softer" representations seem to work well with neural nets.
It seems like ages ago, but Dr Sellier and I once did some drafts on phase-aware Wave Integration Neural Networks (WINN), as opposed to currently used Real Unit Integration Neural Networks (RUINN). Interesting, multilevel 3-space modularity seems to pop out fairly naturally from coherent models. I should get back to that, if only for the joy of publishing those two acronyms… }8^)>
True. In fact my classmate wrote his Diploma thesis in 1989 and then his Ph.D. thesis in 1992 on using neural networks for Quantitative Structure-Activity Relationship (QSAR) and rational drug design. Also in Kiev, btw.
ReplyDeleteSabine,
I like your analysis. Far from fearing AI, there are good reasons for physicists to embrace it as a tool to help them 'perceive' meaning in otherwise overwhelming data.
I particularly liked your final line:
"Rather than spelling the end of theory, machine learning will take it to the next level."
I could not agree more. In fact, in my own poor assessment deep theoretical physics pushes the limits of cognition more than almost any other human endeavor, and so will likely be one of the very last domains to fall to automation.
Huh? Uh, Terry… aren't you the fellow who disdains some of the more arcane forms of mathematical physics as little more than moldering collections of experimentally meaningless, sloppy, and ultimately unverifiable programming in the form of cryptic equations? And besides, aren't equations easier for computers, not harder?
Well, yes… but none of that is really theoretical physics, is it?
Good math is open ended. Once you define and lock down your axioms, you can sail the resulting infinite ocean of combinatorial expansion at your leisure. Folks have noticed that sometimes these explorations lead to unexpected and powerful resonances with real phenomena. When that happens we call it 'mathematical physics'. Encountering such resonances is not as surprising as it might seem, since the human-mind process of defining 'fundamental' axioms is necessarily dictated both by the underlying invariances of our universe, and by the necessity that our survival-focused brains leverage those invariances in ways that help us survive for another day. Abstract mathematical exploration thus sometimes lands on deep physics insights not because math is more fundamental than physics, but because math is nothing more than an unconscious attempt to capture some of the more subtle principles of physics in axiomatic form. (Parallel lines, anyone?)
But true theoretical physics… ah, now that is something entirely different. Far from being a leisurely and open-ended exploration of some infinite sea of theorems, true theoretical physics is more like a halting, stumbling, and continually perplexing attempt to converse with a cranky and well-hidden universe. It is a universe full of subterfuge and unforgiving invariants, written in a language so foreign to our human way of thinking that it has taken millennia of trying to converse with it just to get to where we are today.
We call these attempts at communication 'experiments'. In pure math there is nothing quite like these cranky conversations. And as with any good conversation, we never really know what kind of reply we will get back, or even whether we will get a reply back at all. When a clear reply is received, it is not uncommon to have no idea what it means. Finally, even for the parts we think we understand, the truth is that we may be badly missing or misinterpreting some of the most vital implications of the message. (Newtonian physics, anyone?)
Finally, here's a deep, dark secret: Artificial intelligence is not as advanced as folks like to claim or think it is. The sad truth is that most of the 'amazing' AI we see still comes from applying massive computational resources to decades-old ideas for how to imitate small pieces of human-like intelligence, such as game playing.
Conversations, especially halting uncertain conversations, and most especially conversations in which the language of the other party is unknown and utterly foreign, are in contrast among the most difficult goals for AI. More so even than human conversations, deep physics falls into this category.
So, for all of the dozen or so deep theoretical physicists out there, take heart: The machines likely won't come for you until after they first conquer areas like self-programming. We are still a very long way off from truly human-like cognition.
Lol... I'm not really fond of the term "artificial" intelligence.
I think intelligence is intelligence, and I know it when I come in contact with it... no matter what the container. ��
So I'll post three links from one of my favorite Star Trek episodes...
"The Measure of a Man"
https://www.youtube.com/watch?v=SRcKt4PP0yM
https://www.youtube.com/watch?v=-T9TUeapBSQ
https://www.youtube.com/watch?v=vjuQRCG_sUw
They really are worth a �� .
Intelligence is immeasurable (IQ is not the same), artificial "intelligence" is measurable. Therefore I consider it incorrect to use the same word for two principally different concepts (see also my earlier comment).
marten: Sounds like hand-waving to me. You need at least a reasonably objective definition of "intelligence" to make any claim like that; and "artisanal competence" is not it. I doubt you have one.
I have been using genetic algorithms and neural nets (hand coded) for at least twenty years, and I've followed the reports of their accomplishments, and these have developed solutions that are not just "competence" but at times actually creative and surprising. They can prove new mathematical theorems, they can find real and useful relationships no human has ever thought to look for. Genetic algorithms are the only known way to find plausible solutions to some biological analysis problems.
We ARE neural nets, and anything our brain can do, we can also (in principle) simulate in a machine, with all the inherent randomness, and even quantum randomness if that turns out to be important. I work on the fastest supercomputer in the world, we do stuff like that every day, and it matches observations.
So even if I cannot produce an objective definition of Intelligence, if I can state with confidence that some humans have it, then I can state with confidence that a machine can, in principle, have it too.
There is no inherent difference between machine intelligence and human intelligence other than the current scale we can achieve in machines, which is admittedly some orders of magnitude short. We simulate many thousands of neurons, not a hundred billion of them.
But "intelligence" can only be judged subjectively on the quality of the output, it does not matter how long it would take to produce that output, and it does not matter if the neurons are simulated or real neurons, that distinction vanishes if all we can judge is the output.
Intelligence might be immesurable, but stupidity is infinitely incomputable.
@ Dr. A.A.M. Castaldo
Why do you need so many words? If you think humans are in fact robots which can be reproduced artificially, just say so. Maybe you are the ultimate artisan. Read Emphyrio (by Jack Vance). Hope you like it.
@Dr. A.M. Castaldo: How prominent would you say independent verification is in the "AI" work you've been involved with?
As you likely know, this has become something of considerable concern in some fields (e.g. psychology).
Dr. A.M. Castaldo,
Genetic algorithms are fascinating, but also frustrating. In one topic deep dive I did, my conclusion was that the goal functions created to guide the evolution process were so formally precise that they qualified as assertion-based programs. The genetic algorithms then very inefficiently created sequential code to implement the goal specifications. In most cases, it would have required less human effort and a lot less computer time if the developers had skipped the programming-via-randomness part entirely, and instead simply converted or compiled the goal function directly into conventional software.
On the flip side, as you noted, genetic algorithms sometimes uncover remarkably non-intuitive solutions. This is particularly true if the "binary blinders" are removed to enable mutations in the analog (or hardware) domain. I recall a set of genetically created electronic devices that worked absurdly well, yet no one could explain how they did what they did. The devices appeared to rely on near-chaotic levels of sensitivity that felt more like mind reading than signaling. Such cases strongly suggest that we are overlooking entire domains of design opportunities by insisting that designs be step-by-step verifiable, and that individual components stay as far from chaos as possible. Nature and living systems are far less risk averse, and seem to embrace chaos and near-chaos as useful shortcuts for finding solutions to complex challenges.
Yes, we are neural nets. However, we are also still surprisingly far from understanding both (a) how real neurons work, and (b) how to configure neural net components to create the complex and non-linear behaviors characteristic of human-like general intelligence. Current RUINN neural net component models date back to a time when surprisingly little was known about real biological neural networks behavior, so a reexamination of such roots would not hurt. The WINN idea I mentioned earlier is one example, since we now know that bio neural nets pay very close attention to lead-lag phase.
Also, while I agree strongly that human intelligence relies on the same fundamental mechanisms as other forms of bio intelligence, I doubt that the uniquely manipulative power of human intelligence can be explained without adding some kind of sharp transition. Specifically, the internal simulation capability that higher vertebrates developed to help rapidly discard low-value sensory data appears to have been looped back on itself to create a sensory-disconnected virtual world: our imagination. This loop lets us examine and entertain ideas that are not direct inputs from the outside world. That in combination with language permits exploration of indefinitely complex concepts. The rather high cost of this innovation is that it also lets us go insane, in diverse ways, by allowing us to live in unreal worlds. The margin between genius and insanity may be slimmer than we realize.
One way to avoid insanity is to keep this exploration of alternative realities narrow and formally controlled. A good example is the recent development of self-training Go machines that within days developed Go rules never seen before in all of human history. The first time I read that, it literally gave me a shiver.
Creating a human-like general intelligence will, however, require that the system have far more control over its own creation of hypothetical worlds. For this and for other reasons I'll not get into here, my suspicion is that even after we learn how to design systems with human-like imaginations, what we will get at first is not smarter computers, but insane ones.
marten: Unlike you, I am trying to make an argument, not a bare assertion.
And no, I don't think humans are robots. I think neurons and other brain elements are biological machines that can be reliably simulated, in principle. I don't believe there is anything supernatural about consciousness, or self-awareness, or "qualia", or any other spiritualistic mumbo-jumbo; which I think you must subscribe to in order to claim artificial intelligence is just "artisanal competence" and therefore fundamentally different from "real" intelligence.
I believe anything you describe as "real" intelligence can be duplicated by robots, and therefore, I don't think human brains have a monopoly on "real" intelligence.
Artificial intelligence cannot be minimized and dismissed as just artisanal competence.
JeanTate: I would call it work I am familiar with, not work I am "involved" with; but I would say independent verification is quite prominent. Quantum chemistry AI programs are tested against painstakingly worked out examples, which they have to get right.
Other programs, for example those that virtually test synthetic materials, are intended to lead to actual synthesis if the materials show interesting properties; such synthesis may cost a great deal of money (hundreds of times as much as the price of simulating them; hence the cost-effectiveness of simulation first).
In a way this is similar to weather simulation; you can demonstrate the accuracy of your models by measuring your accuracy against real life, or (like weather) measuring if your prediction method properly predicts past behavior which has already been measured.
For a simplified example, if what you want to test is a better AI method of predicting the path of a hurricane given existing measurements, we have plenty of data on hundreds of hurricanes with contemporaneous weather data you can play with (train on), and see how your new method stacks up against the existing, more traditional methods.
AIs need data to train on, and typically reach some irreducible floor of pure noise in the data. In artificial data we can know what that is, but in real data we probably cannot.
In both we can actually over-train and get error lower than the pure noise floor; i.e. predict the pure noise, but that is an error: what is happening is that our AI is allowed to store too much information and is just memorizing where the noise is in the training data!
So we need to be careful, in the information sense, of having enough resources in the AI to capture all of the signal without an oversupply of resources that lets it memorize the purely random noise in the signal; and that can be a difficult balancing act.
We can approximate that balance by working against historical or traditionally worked out examples or synthetic problems with various controlled levels of noise injected, and ensuring our system can capture a signal without memorizing noise.
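A small sketch of that balancing act, with polynomial degree standing in for the amount of resources given to the fit and toy noisy data; as capacity grows, the training error drops below the noise floor while the error on held-out data gets worse:

```python
# Capacity vs. noise: too much capacity lets the fit memorize the noise.
import numpy as np

rng = np.random.default_rng(0)
signal = lambda x: np.sin(3 * x)
noise_level = 0.2
x_train, x_test = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
y_train = signal(x_train) + rng.normal(0, noise_level, 40)
y_test = signal(x_test) + rng.normal(0, noise_level, 40)

for degree in (1, 3, 5, 9, 15):                        # "resources" of the model
    coeffs = np.polyfit(x_train, y_train, degree)      # "train"
    rms = lambda x, y: np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(f"degree {degree:2d}: train error {rms(x_train, y_train):.3f}, "
          f"held-out error {rms(x_test, y_test):.3f} (noise floor ~{noise_level})")
```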
In any field involving living beings it may be much more difficult to do anything like that, depending on where the living being resides on the spectrum of intelligence and consciousness (from humans to carrots to bacteria). And also how cute people think they are.
🆗...I guess the fancy Windows 10 emoji selection source doesn't work here.
I gained this guessed idea by observing the odd characters
that appear in the post I made; these... ��
I doubt, though, that I gained any intelligence, measurable or immeasurable...
BTW, Lawrence, your spelling of "immesurable" is incorrect.
Terry Bollinger: I agree, I was trying to be careful to say "in principle". Neither genetic algorithms, neural nets, nor hybrids of these are fully mature, or even (IMO) out of childhood.
But using childhood as a metaphor, we can often recognize quite early that a child is at times a prodigy, or at least has exceptional talent in some subject (like chess, or music, or realistic drawing).
That is how I feel about genetic algorithms, and to a slightly lesser extent about neural nets; there is work to do, but those two are the only ones I've seen in the decades I have used AI that can actually surprise me with something creative, and figure out something I would praise and admire a human being for figuring out.
I agree with you, neural nets have difficulty with "the complex and non-linear behaviors characteristic of human-like general intelligence."
Which is why, despite their much greater popularity in the current time, I put them one rung below genetic algorithms: I have written genetic algorithms that evolve thresholds for step functions, and parameters for non-linear equations or piecewise functions.
The equivalent of "if f(x) < T; set x=0." where the parameter T is what is to be evolved.
Or say I wish to evolve a piecewise polynomial function (spline) to represent a curve I can't characterize analytically, but I want it non-uniform; i.e. the cut points are not equidistant; and I also don't know how many cut points I need. And I'll throw in that I have unusual fit criteria not easily computed; perhaps I want a statistically robust fit (minimize median error, not mean squared error). I can evolve the number and position of cut points for where we transition to a new line, minimize the number of cut points, and minimize the median error. All of which continuous math has difficulty doing.
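A rough sketch of that kind of evolutionary search, with piecewise-constant segments standing in for the spline pieces; the fitness is the median absolute error plus a small penalty per cut point, and the population size, mutation rate, and penalty are all arbitrary choices:

```python
# Evolve the number and position of cut points for a piecewise fit that
# minimizes the median absolute error (a robust criterion least squares lacks).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 400)
y = np.sin(x) + rng.normal(0, 0.1, x.size)            # toy curve to be fit

grid = np.linspace(0.25, 9.75, 39)                    # candidate cut-point locations

def predict(mask):
    """Piecewise-constant fit: each segment takes the median of its data."""
    cuts = np.concatenate(([x[0] - 1], grid[mask], [x[-1] + 1]))
    seg = np.searchsorted(cuts, x, side="right") - 1
    pred = np.empty_like(y)
    for s in np.unique(seg):
        pred[seg == s] = np.median(y[seg == s])
    return pred

def fitness(mask):
    # median absolute error plus a small penalty for each cut point used
    return np.median(np.abs(y - predict(mask))) + 0.005 * mask.sum()

pop = rng.random((40, grid.size)) < 0.2               # initial random cut-point masks
for gen in range(200):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]            # keep the 10 fittest
    children = parents[rng.integers(0, 10, 30)].copy()
    flips = rng.random(children.shape) < 0.03         # random mutations
    children[flips] = ~children[flips]
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(m) for m in pop])]
print("evolved cut points:", np.round(grid[best], 2))
print("median abs error:", round(float(np.median(np.abs(y - predict(best)))), 3))
```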
Neural nets are definitely useful, obviously, as is the sensible stepwise approach of deep learning. But I think of them as more of a math-wise approach to signal analysis. That can also reveal surprises, just like a Fourier transform of what looks like noise can reveal unexpected structure.
What I guess is going on in the human brain is a metaphorical combination of both: the GA part is that, given an issue, ideas (procedures encoded in neural clusters) compete for attention, get evaluated (the NN part) by the frontal cortex for suitability, and one gets chosen to act upon.
If I have to factor an equation, or integrate it or find a derivative, there are dozens of things I can try, but one seems "more promising". Why? To me that is evidence they competed and were evaluated (by the frontal cortex) without actually being implemented. (The implementation is then more like an expert system or program of steps or cooking recipe; I regard all these as purely mechanistic, not "intelligence".)
I think both have their place, and can produce different kinds of surprises, but genetic algorithms have the most potential for developing what we would regard as truly creative solutions.
Just like real life, an awful lot of technical progress has been made by people just kind of systematically fooling around with stuff because they wonder what they will find. Just people saying "I wonder what would happen if..." progress, with no clue at all what they will see. I know a few of the new algorithms I have published began that way.
Maybe 99.9 percent of the time nothing at all happens, but that 0.1 percent may be a clue to a breakthrough in understanding.
Dr Castaldo,
Re “There is no inherent difference between machine intelligence and human intelligence”:
The difference between “artificial intelligence” and living intelligence is that living things process actual information, but “artificial intelligence” processes symbolic representations of actual information. There is a huge gulf between a thing and a symbolic representation of a thing.
The symbolic representations used to represent information and algorithms in computing only mean something from our human point of view, not from the point of view of the computer (not that a computer actually has a point of view).
It’s as simple as that. Could you please stop promulgating nonsense about “machine intelligence”?
Dr A.M. Castaldo:
As is the case whenever I encounter particularly intriguing expertise and analysis, I just attempted to find your research publications… to no avail, which is frankly a bit unusual, since that kind of research was part of my day job, and I'm decently good at it. Perhaps you are Anthony M. at UTA, but his publication profile is not a particularly good fit. If you are OK with providing more identification, is there a good place or label for finding more of your published works? Or like me did you have a job that was not particularly conducive to publication?
I mostly found myself saying "Yes!" to your comments, especially in your emphasis of the importance of that 0.1% that points to something new. If we did not have folks who took that principle seriously in this world, we'd still be mired in Newtonian physics, to our enormous technological detriment.
I agree that more often than not it's the experimenters who first uncover the really good stuff, rather than the theorists. I still recall the case of a duo (I can't recall their names) who came up with the foundation algorithms for modern video compression. They were such novices that they were unaware that their goals had been proven theoretically impossible. So they just kept mutating and testing new ideas to see what might work, and in the end succeeded far beyond even their own expectations. It took the theorists years to update their theorems to accommodate what their mutation-based trial and error methods had uncovered. (The theory problem? They had forgotten to factor in fully the powerful continuity of most images over time, the 'object-ness' that dominates much of our universe.) Bottom line, skunkworks work, sometimes even for the 'impossible'.
Also, you can see from how I just used the word 'mutation' that I strongly agree with your theme that random mutations are vital for exploring large and largely unknown problem spaces. In fact, one of several explicit cognitive modes that I choose from and attempt to follow in my personal research is 'mutation mode'. In this mode I try to push away the details of what I think I know, and focus instead on capturing and amplifying those quick random thoughts we too often ignore. Human memory seems to assist in this mode, since letting a knotty problem 'age' in the background for a few months often results in oddly different prioritizations of how to solve it. Some say that dreams are just our brain's way of doing background refactoring and consolidation of miscellaneous memory data, and I suspect there is some truth in that. (But not for egg-laying echidnas, alas, whose brains apparently don't work the same way as those of most mammals.)
For one miscellaneous comment you made, I hope you do not mind if I offer a counterexample. Close your eyes and imagine a light in your mind, one whose colors alternate between red and green. I think you would agree that this pair of color states represents real, physically meaningful (if not easily detectable) alternation of states within the neural system that is your brain, since they are obviously not connected with your external perception of red and green light. This is the mundane, more scientifically tractable definition of 'qualia', in which defining qualia becomes more a question of when work with fMRI, EEG, and possibly other methods will reach sufficient resolution and theoretical understanding to enable easy external observer identification and classification of such internal neural states. Whether qualia are 'more' than just pulses becomes irrelevant in this definition. Instead, the question to be answered is the more practical one of whether such internal states contribute somehow to faster or more efficient processing of information. I strongly suspect that they do, since for example a primate looking for red fruit in a green tree seems to benefit from a remarkably linear image processing time, regardless of the complexity of the sensory image.
Lorraine says: The difference between “artificial intelligence” and living intelligence is that living things process actual information, but “artificial intelligence” processes symbolic representations of actual information. There is a huge gulf between a thing and a symbolic representation of a thing.
That is patently false. The brain processes electrical signals, period. My eyes have simple mechanisms that react to light and cause electrical signals to enter my brain. My ears have tiny hairs that vibrate with sound and create electrical signals. My skin has sensors that do the same; it is electrical representations that travel through biological wires from fingertip to brain to be interpreted by the brain. This is not "direct information", destroy those interfaces (which are NOT brain) and the brain receives nothing; we are blind, deaf, or insensate.
The computer does exactly the same thing; a microphone is its eardrum, and feeds in an electrical signal to be processed by neurons.
It’s as simple as that. Could you please stop promulgating nonsense about “human intelligence” as something precious and special and ineffably mysterious? It is nothing more than supernaturalism.
Terry Bollinger: I am Anthony M; most of my career is unpublished work for corporate clients, I retired from that and returned to college, my only publications were related to finishing my PhD in high performance computing at UTSA in 2010, and I am co-author on an aviation engineering topic (crack growth in components due to stress) for which I invented the statistical fitting algorithm used to extract the parameters of the generalized extreme value distribution. My algorithm was used for that paper, but not described there. (MoM won't work, and existing methods are (IMO) hand-wavy crap; but the mathematical approach I invented remains unpublished.)
Though of retirement age I am currently a full time Research Scientist on a large government funded project; my work since my PhD has not been very conducive to publication. Although if I can find time to invent something unclassified it is not prohibited.
As for qualia; I have a theory; related to "priming" in the psychological realm. See https://en.wikipedia.org/wiki/Priming_(psychology)
Your brain is composed of models of things, constantly being updated and formed. I don't know how many, but perhaps tens of thousands, maybe more. They are cross-linked and hierarchical; a model of a car is composed of many models of car components. Cross-linked because something like "seat" and "window" appear in many models; and are themselves hierarchical.
If I say think about wheels, you can imagine hundreds of things that have wheels; your mental model of a wheel is connected to all of them. Spoked wheels, roulette wheels, whatever. You don't need words for that, just images count. But inevitably if you call up an image of a wheel on a car, the word "car" is going to be triggered in your mind, because the models for "car" are linked to those word-models; how the word sounds, how the word looks in print. Think on "car" and its network of models gets activated, models of things that have "car" in them. Traffic jams, car lots, car ads, freeways. Your personal network is primed too, memorable moments for you in which a car played a major role; whether traumatic or enjoyable or hilarious. I can intersect these networks to find particular memories: The hardest you remember laughing in a car.
The brain activates all these networks of models at the subconscious level in response to any stimulus, be it words, sights, sounds, or other sensory stimulus on skin or internally. That is what is going on with priming; and it is called that because this subconscious activation makes those models more likely to be triggered to conscious consideration. Hit it with two or three links, and it becomes prominent: things related to "laughing" are likely (hopefully) too numerous to pick a particular thing; but add "the hardest" and that narrows it, and add "car" and that may narrow it to few enough events that you can consciously consider them (or it).
(To Be Continued, Length Restriction)
So what is qualia? What is it "like" for me to see red, or you to see red?
The word is linked in my brain to the sight and associations of "red", all the models it participates in as a component, from stop signs to blood to innumerable business signs to race cars and religious symbols. Far too many to enumerate, far too many to promote to my conscious attention mechanism, but that is the feeling of "red" for me; seen or imagined. The feeling of all that stuff primed and ready to fire on my next perception. That includes many things not in your brain, my thousands of personal experiences of red; and of course vice versa. My qualia of "red" is not exactly the same as yours, but in large part our qualia will be similar to the extent our life experiences of "red" are similar.
Green, same thing. But there is an obvious intersection of these two colors, especially linked by "alternating". It is a stop light; and one I can't help but think of when I imagine red and green alternating in my mind. Stop, Go, Stop, Go. Driving, traffic, work, grocery shopping, and going to restaurants, which is what most of my "driving" (and experience of stop lights) is linked to. A dozen or so cross country trips, or driving in foreign cities or countries.
That is qualia; it is how our brains work as (IMO) proven by myriad priming experiments (some of which I find fascinating). It is a massively parallel system, all those subconsciously activated models are themselves linked to emotions, sounds, memories, and implications; the subconscious brain is constantly and relentlessly trying to predict the present and future.
(The "present" in the sense that it expects certain immediate things; and is "surprised" when those expectations are not met, which focuses your attention on the unexpected change in your sensory environment.)
So yes, via priming, internal states DO contribute to more efficient processing of information. What does it mean that the primate is "looking for a red fruit"? Her brain is already primed by hunger or pleasure seeking to preferentially fire on the networks of "red" and "fruit" which may link to a model of the size and shape of the fruit. It has negatively primed (repressed) models related to anything else: green, brown, blue or white from the sky, etc. Scanning the trees, nothing happens, but the signal from her eyes indicating "red" is primed to activate her attention mechanism to determine if it is fruit, perhaps by size, shape and position. If it is, she plans, considering how to specifically apply models of action to get it. If it doesn't match (it is a flower or redbird) then her primed brain continues the scan. And may ignore unusual phenomena that are repressed.
(In a famous example, subjects are asked to count ball passes in a basketball game where players pass a great deal, and the subjects are so primed on that task, most did not even notice a guy in a gorilla costume walk across the court in front of them. None of the players reacted and the subjects were too busy following the ball and remembering the count.)
I don't regard qualia as anything difficult to explain at all; I think it is an emergent property of our natural massively parallel processing with a huge number of models and our limited capacity for paying attention. Priming helps us think quicker, it is a prioritization scheme, similar to how caching works on a computer and can drastically increase processing speed.
Dr A.M. Castaldo: I do hope you will write more! Perhaps you could publish a book on your take on such topics? At least one fellow got a Nobel Prize in economics for talking about priming in terms of "fast thinking" (that book still delights and amuses me), so who knows, your work might have more impact than you might think.
Regarding priming: For my AI and robotics work, I interacted mostly (though not entirely) with military components of the US federal government. In one study that I recall vividly, interviewers attempted to understand in quantifiable, computer-compatible terms how an experienced team leader assessed the dangers of a particular situation. For the team leader it was just a matter of looking quickly over a field and saying "That looks bad." For the interviewer, this same nearly instantaneous assessment devolved into hours of highly interactive interviews aimed at ascertaining in unambiguous terms what features of that field triggered the leader's nearly instantaneous negative assessment.
High degrees in martial arts are to me an even more fascinating example of how sophisticated and effective a well-trained fast-think neural network can be. At my alma mater a quite good but arrogant American student of Taekwondo once kept trying to provoke a somewhat elderly visiting Korean master into a fight. The master resisted at first, but finally gave in. He broke the American student's leg in three places on his first move. Later, after medical treatment and a cast, the student came back to the master laughing and said "I deserved that!" I think he took pride in having become a living example of how remarkably effective this kind of neural training can be when it is done with sufficient dedication and practice.
Terry Bollinger: Thanks. What happens with martial arts (and typing, driving, riding a bike, playing a musical instrument, etc) is not priming, but what we commonly call "muscle memory", and in actuality this is practicing something so much the brain creates special neural circuits to deal with just that task.
I discussed this (40 years ago) with a Kung Fu instructor and students at various levels; they referred to it as "becoming Kung Fu". One student was a bar bouncer at a strip club that got rowdy customers fairly often; he'd been studying for about four years. He said his hands and arms had become Kung Fu, but his legs were only partially there. Meaning, if somebody grabbed him, his arms and hands seemed to respond correctly without him thinking about it. This kind of "hard wiring" (as opposed to following a memorized recipe) really does bypass the conscious thought process; which speeds up reaction times to nearly instinctive reflex speeds. That is importantly efficient when typing, and also when fighting somebody trying to kill you.
One of the priming experiments I liked was the hot/cold assessment test.
Students are paid to (purportedly) see if we can judge personality types from facial images. The images used show people with neutral expressions; a professionally curated collection of such posed images is provided.
Each student is alone in a room when they take the test; they are supposed to write down personality characteristics they think apply to each facial image.
Simple! Here's the twist: we are testing priming. The professor gives them a room to report to and meets them there. She is always the same person, carrying books and test materials in one arm and a coffee in the other. She asks the student to hold her coffee while she unlocks the door; then they go in, she takes her coffee back, gives them their packet and verbal instructions, and leaves to wait for them outside.
For half the students, the coffee is hot and the cup is warm. For the other half, the coffee is iced and the cup is cold.
Students who held a warm cup for a minute assign more "warm" personality traits to the neutral faces: funny, happy, caring, and so on.
Students who held a cold cup for a minute assign more "cold" personality traits: distant, judgmental, lonely, and so on.
It's a reproducible and significant result.
In the absence of any actual clues in the picture, the personality traits are not chosen at random. The "cold" or "warm" sensation "primed" everything in the student's mind linked to cold or warm, including personality traits. So when the student mentally searches for personality traits, the internal models of the primed traits carry extra activation and cross the threshold to consciousness first; selecting a cold trait then likely sets up a feedback cycle that makes further "cold" traits more likely to be selected.
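A minimal sketch of that threshold-and-boost picture, with all activation numbers invented purely for illustration:

# Toy priming model: every trait has a baseline activation; holding the warm or
# cold cup adds a small boost to the matching traits, so those cross the
# "report this trait" threshold while the others do not.
BASELINE = {"funny": 0.4, "caring": 0.4, "distant": 0.4, "judgmental": 0.4}
THRESHOLD = 0.5

def reported_traits(prime_boost):
    activation = {t: a + prime_boost.get(t, 0.0) for t, a in BASELINE.items()}
    return [t for t, a in sorted(activation.items(), key=lambda x: -x[1]) if a >= THRESHOLD]

warm_prime = {"funny": 0.2, "caring": 0.2}        # held a warm cup
cold_prime = {"distant": 0.2, "judgmental": 0.2}  # held a cold cup

print(reported_traits(warm_prime))  # ['funny', 'caring']
print(reported_traits(cold_prime))  # ['distant', 'judgmental']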
Stage magicians (in particular mentalists) use priming words and gestures in their patter to make people think what they want them to think, so they can "read their mind"; it really seems like telepathy to the victim, because the priming is typically subconscious. Professional advertising does the same: it is quite common to be aware of such associations and to choose words known to trigger certain thoughts.
Excuse me, Dr Castaldo, did I say or imply that “human intelligence” is “something precious and special and ineffably mysterious”? Did I say or imply that “supernaturalism” was involved? Answer: NO, that is your own rather weird misinterpretation of what I said.
I was talking about the issue of symbolic representation: how symbols, and arrangements of symbols, that have meaning to human beings cannot mean anything from the point of view of computers. Human beings understand symbolic representations e.g. words and sentences in a particular language, equations, variables and numbers. These symbols, and their arrangement, mean something from the point of view of human beings; but these symbols, and their symbolic re-representation in the form of strings of zeroes and ones, can mean nothing to a computer.
As I said, could you please stop promulgating nonsense about “machine intelligence”?
Lorraine Ford: Yes, you do imply that, have implied it frequently, and are implying it again. You say symbols have "meaning" to human beings and cannot mean anything to computers. That implies real neurons are doing something different from what circuits do, and by extension that circuits are fundamentally incapable of behaving the way a neuron does. That is false.
It implies there is some non-physical and incomputable component to the behavior of a neuron, or to the interactions of neurons. There is not; they are molecular machines made of predictable atoms processing electro-chemical signals that can indeed be represented by zeros, ones, and formulas.
We can accurately simulate quantum chemistry. We do that only on a tiny scale, but in principle it can be done, and we don't need anything exotic in the neuron to explain its behavior, or to simulate it. That is what I mean when I say you are treating neurons as if they are "special" or "ineffably mysterious". You insist upon this distinction all the time, and it is a false distinction.
Neurons are pattern-recognition devices: they take inputs (sometimes thousands), and if the relationships amongst those inputs represent some useful pattern, the neuron signals that to other neurons as one of their inputs.
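As a bare-bones sketch of that pattern-recognition picture (a toy threshold unit with made-up weights, not a biological model):

# A toy threshold unit: weighted inputs, fire if the pattern is strong enough.
def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# This unit "recognizes" the pattern where the first two inputs are active
# together; the third input on its own is not enough to make it fire.
weights = [0.6, 0.6, 0.1]
print(neuron([1, 1, 0], weights, threshold=1.0))  # 1: pattern present
print(neuron([0, 0, 1], weights, threshold=1.0))  # 0: pattern absent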
With caveats: neurons need oxygen and other supplies, and they produce waste that can interfere with their operation, and even kill them, if it is not eliminated (that is what sleep is for: the cleaning cycle).
Our thinking and understanding are done with neurons that process electrical signals. Symbolic representation is a non-issue; symbols are not special; meaning is not special.
Something means something when it implies or predicts other traits or qualities or events. That's all: symbols trigger a network of associations, sometimes involving feelings or emotions, but that is all that "meaning" means.
Yes, you are implying supernaturalism when you insist on this difference. Everything we sense is turned into electrical signals for our brain, and everything we feel and think and do is a result of processing those electrical inputs. There are no "symbols" in our brain other than patterns of neurons; any "symbol" you discern by sight, sound, touch, smell, taste, or any combination of senses is, to your brain, a pattern of electrical signals. The only thing that makes it "symbolic" is what the signal pattern causes to happen in your brain, the cascade of neurons it primes or causes to fire, and their effects.
The brain is an analog signal processor, and the signals are electrical impulses. It isn't anything more than that. Its behavior can, in principle, be duplicated electronically. There is nothing special about it.
So again, please stop promulgating nonsense about how the brain does something special and ineffably mysterious with "symbols" that no machine can ever do. That is engaging in supernaturalism.
Dr Castaldo,
I am not saying anything at all about the brain.
I am just stating for a fact that the symbols used in computing are human artefacts. Humanly devised symbols and arrangements of symbols (e.g. words, sentences, equations, algorithms) are re-represented in a humanly devised system of zeroes and ones; and these zeroes and ones are then re-represented as electrical voltages in a computer. These voltages mean nothing to a computer: from the point of view of a computer the voltages are not symbols (not that a computer has a point of view).
Dr. A.M. Castaldo: You said: "If I say think about wheels, you can imagine hundreds of things that have wheels; your mental model of a wheel is connected to all of them. Spoked wheels, roulette wheels, whatever. You don't need words for that, just images count. But inevitably if you call up an image of a wheel on a car, the word 'car' is going to be triggered in your mind, because the models for 'car' are linked to those word-models; how the word sounds, how the word looks in print. Think on 'car' and its network of models gets activated, models of things that have 'car' in them. Traffic jams, car lots, car ads, freeways."
There is an extensive literature modeling such linkages, dating back to the era of symbolic modeling in AI and computational linguistics, roughly the 1970s through the mid-1980s. If you are at all curious, John Sowa's Knowledge Representation: Logical, Philosophical, and Computational Foundations (1999) discusses such models.
Terry Bollinger & Dr. A.M. Castaldo: On martial arts, music, etc, yes. Certainly muscle memory, but also beyond the motor systems. A skilled jazz musician, for example, certainly has to have thousands and thousands of patterns in and at their fingertips, if you will. But that's not enough to thread your way through the chord changes to, say, "I Got Rhythm", "Giant Steps", a simple blues, or, for that matter, a modal tune with hardly any changes at all, such as "So What". You have to keep a continuous stream of melody going and you have no time to think. The moment you start having to think about what you're playing you're dead. (I'm speaking from a half century of practical experience.)
As usual I drift back to 1980 when I first read these words:
ReplyDelete"At present, computers are a useful aid to research, but they have to be directed by human minds. However, if one extrapolates their recent rapid rate of development, it would seem quite possible that they will take over altogether in theoretical physics. So, maybe the end is in sight for theoretical physicists if not for theoretical physics." (Stephen Hawking, Inaugural Lecture).
I scoffed at that line back in 1980, and I still scoff in 2019.
@gary alan. The progress in AI has been mostly on the marketing side.
It seems that this could apply to both Artificial Intelligence and Theoretical Physics:
It has produced some useful things, but also a lot of hype.
Machine learning is an optimization of big-data processing in the mathematical space of knowledge. That is why I am trying to create an artificial consciousness based on physical principles: the space of knowledge of physics is much wider than any other.
It does seem that AI is only the latest fake news hype.
We had a long run of “nanotechnology”, with every academic who could attach the term to their work getting air-time in search of grant money. That faded out, and there was a bit of quantum computing and 3D printing. And now artificial intelligence, embraced by the media uncritically, without question, as the key to the future.
In control systems we had this in the early 2000s. It was then called “expert systems”: actually small neural nets. It died, partly because for practical purposes it was indeterminate. I mean, what are you going to say in court or at the coroner's enquiry when they ask why the reactor blew up, or why the airplane dived into the sea? Aaaaah. Well, maybe the training set wasn't quite right?
Good that the physicists have a use for it. I wouldn’t put it in a self-driving car.
In the last sentence of this paragraph:
ReplyDelete"This type of iterative learning is certainly one aspect of intelligence, but it leaves much wanting. The current algorithms heavily rely on humans to provide suitable input data. They do not formulate own goals."
Shouldn't it say:
"They do not formulate THEIR own goals" ?
What information does this add to the sentence?
Thanks for replying, Dr. Sabine.
It's not information; it's (english) grammar for the adjective 'own'.
For example: I have own life,
it should say: I have my own life
Check the online Webster dictionary.
I Googled for "make own decisions" and found like 100,000 headlines that contain the phrase, including examples like
Let pupils make own decisions over climate strikes
Docs Feel Stress When Patients Can't Make Own Decisions
Let’s teach children to make own decisions
If Google demonstrates that the vast majority of people use the phrase and understand what I mean, that's good enough for me, regardless of what Webster says. Thanks for the feedback.
As a scientist you care about rigour and precision.
DeleteYes, the usage without the possesive pronoun is widespread, but in spite of the hundreds of thousands
the gramatically correct form with the pronoun is preferable (quality over guantity).
regards.
Pubbli, fixating on grammar and spelling lowers the level of debate and is petty (and you mis-spelled "grammatically" and "possessive", and wrote "English" with a lower-case "e"). ;)
"...which algorithm works for which case and how well, the same way they previously had to understand which theory works for which case and how well..." Well said.
I've been thinking about what algorithms will do to *us*...!
Three years ago, AlphaGo beat the world's best player, Lee Sedol, 4 games to 1, and since then people have said that the game of go is over.
But this misconstrues what playing a game is about. It's not just about winning but about enjoying the game. When I first came across go as a young teenager, I appreciated its bare simplicity. Stone on wood. Black and white. A grid of lines. I found that simplicity appealing, especially compared to chess, where every piece was different and each piece could move in different ways. Go was different, I realised, because when the 19 x 19 game was finished, you could move on to a 30 x 30 board.
Had I access to AlphaGo, I would have been curious to see how it would cope had we expanded the board in that way. Would it plateau when we expanded it to 30 x 30, or to 100 x 100? And what would happen if we expanded it into a third or fourth dimension? What would the natural laws of capture be then?
Instead we get the corporate media boasting about how machine learning has beaten the best human player without asking questions or asking to experiment with the algorithm. I'd call that uncritical adulation.
The game of go is not over - and this is from someone who is not particularly good at go...
Hi Sabine, I'm a physicist who studied in Bologna and I'm new to this wonderful blog! I think it's right to call it "machine learning", because these systems simply lack consciousness, the capability to decide what to do. Sometimes I have noticed they can simulate the behavior of a simple animal intelligence, but for now they are only an extension of the human brain. Anyway... we keep thinking!
Human theorists are trained by their "elders and betters" to embrace or reject data, and see it as either important or irrelevant, depending on prevailing social fads and fashions. This often makes the human development of theories deeply flawed. The individuals who are good at it are few and far between, because to be a great theorist requires one to reject the teachings of previous generations, and the "filter" for people joining the physics community usually requires applicants to embrace those teachings. We recruit conformists and then expect them to be original thinkers.
So the hope of using AI was to eliminate the human bias and reveal deep patterns in the data that humans either wouldn't see, or would incorrectly reject as being wrong or unimportant due to educational prejudices.
The problem with using AI to generate the laws of physics is that human beings are still involved at the teaching and "quality control" stages. If an AI starts making suggestions that disagree with the laws of physics that we already "know"(!) are correct, then the human programmer is liable to tell the AI that it has made a stupid mistake, and instruct it to avoid making the same error in future. So the human-developed AI is liable to be trained to make the same stupid mistakes as a human student. We'll encourage it when it makes the same mistakes as us, and we'll give it the equivalent of a cattle prod up the bottom when it does something profound and correct, whose worth we don't have the conceptual vocabulary to recognise.
When US banks tried using AI to do mortgage assessments, using historical mortgage applications data and outcomes to train an AI, they supposedly found that the AI was learning to emulate the same racist behaviours as the earlier human operators: if your first name was listed as "John", you probably got the mortgage, if it was "Jahil" you probably didn't. If we train an AI on experimental data that humans have pre-vetted as conforming to our expectations, then the AI is liable to reverse-engineer whatever theory we believed in when we did the data-vetting. We'd have to be prepared to feed the AI raw data that we believed to be faulty or irrelevant, so that it could analyse that data and tell us that we were wrong. Unfortunately, human physicists often don't deal well with being told that they're wrong, even by other humans.
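A minimal sketch of that feedback problem, using an invented toy "history" (not the banks' actual data or model): a learner that does nothing more than mirror historical approval rates per group will faithfully reproduce whatever bias the history contains.

# Invented toy history of (group, approved) decisions made by biased humans.
history = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def train(records):
    # "Learn" nothing more than the historical approval rate per group.
    counts = {}
    for group, approved in records:
        yes, total = counts.get(group, (0, 0))
        counts[group] = (yes + int(approved), total + 1)
    return {group: yes / total for group, (yes, total) in counts.items()}

model = train(history)
print(model)  # group_a ~0.67, group_b ~0.33: the old bias, now automated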