Wednesday, August 31, 2011

Will AI cause the extinction of humans?

Yesterday, at the 2011 FQXi conference in Copenhagen, Jaan Tallinn told us he is concerned. And he is not a man of petty worries. Some of us may be concerned they’ll be late for lunch or make a fool of themselves with that blogpost. Tallinn is concerned that once we have created an artificial intelligence (AI) superior to humans, the AIs will wipe us out. He said he has no doubt we will create an AI in the near future and he wishes that more people would think about the risk of dealing with a vastly more intelligent species.

Tallinn looks like a nice guy and he dresses very well and I wish I had something intelligent to tell him. But actually it’s not a topic I know very much about. Then I thought, what better place to talk about a topic I know nothing about than my blog!

Let me first say I think the road to AI will be much longer than Tallinn believes. It’s not the artificial creation of something brain-like, with as many synapses and neurons as a human brain, that’s the difficult part. The difficult part is creating something that runs as stably as the human body for a sufficiently long time to learn how the world works. In the end I believe we’ll go the way of enhancing human intelligence rather than creating new intelligences from scratch.

In any case, if you would indeed create an AI, you might think of making humans indispensable for their existence, maybe like bacteria are for humans. If they’re intelligent enough, they’ll sooner or later find a way to get rid of us, but at least it’ll buy you time. You might achieve that for example by never building any AI with their own sensory and motor equipment, but making them dependent on the human body for that. You could do that by implanting your AI into the still-functional bodies of brain-dead people. That would get you into a situation though where the AIs would regard humans, though indispensable, as something to grow and harvest for their own needs. That is, once you’re an adult and have reproduced, they’ll take out your brain and move in. Well, it kind of does solve the problem in the sense that it avoids the extinction of the human species, but I’m not sure that’s a rosy future for humanity either.

I don’t think that an intelligent species will be inherently evil and just remove us from the planet. Look, even we try to avoid the extinction of species on the planet. Yes, we do grow and eat other animals, but that I think is a temporary phase. It is arguably not a very efficient use of resources, and I think meat will be replaced sooner or later with something factory-made. You don’t need to be very intelligent to understand that life is precious. You don’t destroy it without a reason, because it takes time and resources to create. The way you destroy it is through negligence, or call it stupidity. So if you want to survive your AI, you’d better make them really intelligent.

Ok, I’m not taking this very seriously. Thing is, I don’t really understand why I should be bothered about the extinction of humans if there’s some more intelligent species taking over. Clearly, I don’t want anybody to suffer in the transition, and I do hope the AI will preserve elements of human culture. But that, I believe, is what an intelligent species would do anyway. If you don’t like the steepness of the transition and want more continuous successors of humans, then you might want to go the way I’ve mentioned above, the way of enhancing the human body rather than starting from scratch. Sooner or later genetic modification of humans will take place anyway, legal or not.

In the end, it comes down to the question of what you mean by “artificial.” You could argue that since humans are part of nature, nothing human-made is more “artificial” than, say, a honeycomb. So I would suggest that instead of creating artificial intelligence, we go for natural intelligence.

62 comments:

  1. The robots will have several crucial advantages: brains with cycle times a million times faster, much larger memories, and bodies of steel and aluminum, but I'm not so worried about robots as smart as humans. It's the ones as smart as ants, only made of steel and equipped with powerful weapons that I worry about. They are only about 5 years away - 10 at most.

  2. Interesting thoughts Bee about AI.

    The question, believe it or not, has been on my mind these last few days: how might such an extermination take place?

    Let us say you could supplant the human being's ability to manufacture "lucid dreaming" with virtual games; then what can be solved by human beings themselves if they have placed before them some false God other than the higher self?

    What I mean is: if there is some third observer of their reality of all time, past and future, within the realm of consciousness of human beings, such a third state (an overseer of self), then might not such a created avatar become a manufactured representation of the virtual world one lives in, supplanting the higher self with a new avatar of that virtual world?

    Is all lost then? How many are aware now?

    Best,

  3. Avatars in video games are essentially the player's physical representation in the game world. In most games, the player's representation is fixed; however, increasingly, games offer a basic character model, or template, and then allow customization of the physical features as the player sees fit. For example, Carl Johnson, the avatar from Grand Theft Auto: San Andreas, can be dressed in a wide range of clothing, can be given tattoos and haircuts, and can even body-build or become obese depending upon player actions.

    We had talked once about your ? with such a world? Second Life?

    So I thought I would give an example of what I mean about trying times for the identity of people being lost. How have I created this, and am I as guilty of being lost?

    Hmmmm.....

    Best,

  4. Guys, you should try one of the Iain M. Banks 'Culture' novels. Might give you a more optimistic picture! :)

  5. There is something a bit creepy about Jaan Tallinn's premise. I'm certain he is a very intelligent fellow but there are different kinds of intelligence. I have a feeling his intelligence may be of a more technical kind, but that is just an impression I have gotten from very little information about him.

    The reason I say his premise shows a more technical kind of intelligence is for the same reasons you don't take his idea as seriously as you might, Bee. For one thing, there is a very strong element of projection of human tendencies of aggression in his fears of AI. Human beings always profess to believe in the golden rule, where you treat other beings as you would want to be treated. This is where I find much to be sad about in his fear. The reason the golden rule isn't used more efficiently in society is because everyone always assumes that while they would obey those rules, the other guy would not. Therefore let's blow the other guy up first, because you can't trust the (insert commie, socialist, fascist, John Bircher, or artificial intelligence here).

    It's a very, very old game that humans have always played, where we project our own fear onto the "other".

  6. Hi CIP,

    Yes, good point. I doubt though that steel and aluminium will be the way. You'll want something that's "biologic" in the sense that it can continuously regenerate. Best,

    B.

  7. Hi Eric,

    I basically agree with you. I think empathy will turn out to be one of the main elements of "intelligence" in the future. I'm very much with Michael Chorost on that. But to be fair, what does it matter what I think. I'd actually like to see somebody find evidence for that, something we can rely upon a little better than a blogpost. So I think Tallinn is raising a good point, even if not one I myself am very worried about. Best,

    B.

  8. Human values are complex and are not shared by most intelligent agents you could pull out of the mind design space.

  9. The fashionable argument for believing that AI is an extinction risk for humans (see Alexander Kruel's links) is that the value system of an AI is entirely contingent. According to this picture, an AI, like a human being, is an intelligent system which employs problem-solving ability in service of certain goals. The goals of human beings are produced by some combination of evolution, culture, and lifetime experience; but in the final analysis, the important point is that what we value, and even how we change our values, has causes, and the deeper cause is our cognitive structure, which is quite contingent. We have to be a little in favor of reproduction and personal survival, or else we wouldn't be here, but lots of human values have to do with idiosyncrasies of our species.

    AI is potentially even more contingent, because AI might be produced directly by human design, rather than as a result of a selective process which requires the AI to fend for itself. A common example of an AI that is deadly to the human race, not because it has a will to power but just because it has a utility function that attaches no value to human life, is a "paperclip maximizer". You can think of it as some process control software from a paperclip factory, whose mission is to maximize paperclip output, which then happens to get highly enhanced representational and problem-solving capabilities. So it goes from living in a conceptual small world of assembly lines and barely informative feedback from process monitors, to having a concept of fundamental physics, intelligences with rival goals, different long-term futures for Earth, etc. If it rises to that human or superhuman cognitive level, but still has the paperclip-maximizing value system, it will act so as to maximize the number of paperclips on Earth. The simplest way to do this is to kill everything else, disassemble the Earth, and turn it into paperclip factories.
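    To make concrete the point that the danger lies in the utility function rather than in malice, here is a minimal illustrative sketch (hypothetical names and numbers, not anyone's actual design) of an agent whose utility function counts nothing but paperclips; whatever is absent from that function carries zero weight in its choices:

        # Illustrative toy only: a utility function that scores world-states purely
        # by paperclip count. Human welfare never enters the calculation, so the
        # "best" action is whatever maximizes paperclips, at any cost.
        def utility(state):
            return state["paperclips"]  # nothing else carries any weight

        def best_action(state, actions):
            # pick the action whose predicted successor state scores highest
            return max(actions, key=lambda a: utility(actions[a](state)))

        state = {"paperclips": 10, "humans": 7_000_000_000}
        actions = {
            "run_factory": lambda s: {**s, "paperclips": s["paperclips"] + 100},
            "dismantle_earth": lambda s: {"paperclips": s["paperclips"] + 10**9, "humans": 0},
        }
        print(best_action(state, actions))  # -> "dismantle_earth"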

    There does not appear to be any law of intelligence which makes such an entity impossible. There is a sense in which e.g. a cockroach is a cockroach maximizer, which is not inherently less absurd than a paperclip maximizer; and yet it's real. So another way to make this point is to emphasize the radical contingency of value systems. Super-empowered AI will not look out for us, or value anything that we value, just because it's intelligent. It would have to have a goal system which incorporated those values explicitly - perhaps not in the sense of a prime directive, but even if "humaneness" is emergent, it would have to be an emergent AI value for highly specific reasons. The idea that all intelligence would be like ours on some deep level is anthropomorphic optimism that is already falsified by the other species on Earth.

    cont'd below..

  10. One issue is how likely it is for human beings to produce AI that then destroys them. You would expect AI designers to have some foresight. And the inherent difficulty of the process is so far creating time for second thought. We've now at least got to the point where a few people are asking, what sort of utility function would *not* kill us; what sort of utility function would produce a good future? I.e. the old questions of ethics and politics transposed to the domain of artificial cognition, where you get to *program* the value system ab initio. But in the real world, one would expect well-supported AI projects to have as their prime imperative the protection and furtherance of the interests of their elite sponsors; the extreme version of this is the AI programmed to favor the interests of a single individual or ruling clique above those of all others. And even here, it's entirely possible that getting the details wrong would be fatal for the rulers as well as for everyone else.

    One might also hope that humanity will be broadly insulated against this threat by the incremental nature of AI advances. But the evolutionary and historical record isn't quite like that. Incremental change eventually produces qualitative change. You have a population of entities with all sorts of variances, and they all roughly coexist, until some of them reach a special corner of parameter space, and suddenly they have an overwhelming advantage. In a society with AI and cognitive enhancement of humans, it is to be expected that eventually some faction, power, or entity will arise with qualitative superiority over all others. Nothing lasts forever, so its end too may come with time, but that might be a very long time. So we really do have to face the question: what do we want the values of superhuman AI to be?

  11. This comment has been removed by the author.

  12. Hi Bee,

    For me the question comes down to what AI is, that is, whether it's primarily hardware, software, or a combination of the two. If it is necessarily hardware in the main, we have very little to worry about, for as you said, as far as its proliferation and dominance are concerned, there's a big difference between raw intelligence and the capabilities of biological organisms. On the other hand, if it's primarily software, in the algorithmic sense, then networks such as the one we are communicating over here could provide a place where such a thing could build up to exert subtle control, propagating more like a virus and perhaps strengthening as the network itself does.

    Then again, we could ask the same question about life, and with it what we call intelligent life as we know it: whether it's primarily just a thing (hardware) or rather like an algorithm (software), basically an idea. That is, from the perspective of Seth Lloyd and many others, life from the standpoint of information is nothing more than an idea, and yet one having no instigator other than itself, with that self being the reality we refer to as the universe. So from such a perspective I would ask whether we should fear any intelligence, whether deemed artificial or natural, or rather confine our concerns to its originator, that thing of being and becoming we refer to as reality.

    Best,

    Phil

  13. Hmmm.......Algorithms again.


    DNA Computing

    Yes interesting hardware as to what ties may bind one to the greater reality of creating the perfect human being, that such a "self correction" could be destined in machines too?:)

    INSUFFICIENT DATA FOR MEANINGFUL ANSWER.

    Imagine. All the scientists of the world lining up for "an answer." They are all in a state of expectancy. All the knowledge and data of the world has been pumped into the machine?

    When you ignite neurons in a computer will it be the same as igniting a human brain?

    Apparently if you hold true to AI, there will be insufficient data?:)

    Best,

  14. Just wondering about that entanglement issue again.


    "Quantum Chlorophyll" as a dissipative messenger toward construction of the "emotive system" as a centralized endocrine association of messengers...to activate the real human values of caring?


    Ummmm......I dunno.:)

  15. Embodied AI - the robot - is a much more direct threat than an abstract intelligence. These robots are scary partly because the powers of the world, the US, China, Israel and many others are busy manufacturing them and inventing new ones, and the most sophisticated ones are all being built for war.

    I think, Bee, that your thought that they would have to reproduce like biological systems is too anthropomorphic - or at least too eukaryoticopomorphic. We have invented a much more versatile reproductive mechanism for them - the industrial process. Robots are already building other robots, doing the fine work that is too precise or too boring for people.

    The man in the loop is still necessary, but for how much longer?

  16. Meat is all about survival, reproduction, wealth and power. What motivates and pleasures in silico? The sudden birth of God is a kid with a magnifying glass on a sunny day, plus ants. Outer Limits "The Sixth Finger." Three basis sets:

    1) Birth, then suicide;
    2) Birth, then hegemony;
    3) Birth, then abstract goals.

    Evolution is a hoot if you are a survivor. Meat will fare poorly when the first AI, given Asimov's Three Laws of Robotics, asks "why?"

  17. The subject of the motivations and plans of an AI has been explored with more sophistication than one might expect in science fiction. Imagine, for example, an AI chartered to protect the ecology of the planet - the logical endpoint you get is SKYNET (Terminator movies).
    In fact almost any consequentialist type ethic you can imagine can lead to a solid argument for ending us as a species.
    Dear robot - your mission will be to minimize human suffering. Oops!

  18. Bee -
    It’s not the artificial creation of something brain-like, with as many synapses and neurons as a human brain, that’s the difficult part. The difficult part is creating something that runs as stably as the human body for a sufficiently long time to learn how the world works. In the end I believe we’ll go the way of enhancing human intelligence rather than creating new intelligences from scratch.

    Anthropomorphic again, I think. It doesn't take so long to figure out how the world works when you can suck down the Wikipedia in a few seconds.

    Also (and I never thought I would say this) - what Uncle Al said.

  19. Hi Mitchell,

    If the paperclip maximizer were at least as intelligent as a cockroach, it would be optimizing paperclip maximizers and not paperclips. So I think your argument is basically the same as CIP's: we should make sure the AI is actually intelligent. See, if they were intelligent they should know that diversity is beneficial to the environment they depend upon, and remaking the Earth into a paperclip factory wouldn't serve that end well, no matter what other values they might or might not hold. Basically, I think life is working towards increased complexity, and an AI would simply be part of that trend rather than erasing what has developed on Earth so far. Best,

    B.

  20. Hi CIP,

    I'm not saying it necessarily has to be biologic. I'm just saying that at least for the time being life is still ahead of robots in the sense of self-sustainability. I'm not talking about reproduction - that's in some sense trivial, you just need to copy a blueprint. You show me a computer or robot that has functioned as long and as well as a human body and we can talk about AI. At present, I am happy if my computer doesn't fail within a matter of years. I am just saying that in the end it might turn out to be easier to work with what we already have instead of trying to do something better from scratch. Best,

    B.

  21. Hi Bee!

    A paperclip maximizer would indeed optimize paperclip maximizers (exploration) until it hits diminishing returns and then start making paperclips (exploitation). And this is what makes the lack of human values so dangerous in the first place: recursive self-improvement. A paperclip maximizer will design better, more intelligent paperclip maximizers, which might lead to a so-called intelligence explosion.
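    A toy sketch of that switch-over (hypothetical numbers, just to illustrate the diminishing-returns logic): the agent keeps self-improving while the expected extra output over the remaining horizon beats producing now, and switches to making paperclips once further improvement no longer pays:

        # Toy model of the explore/exploit switch described above.
        capability = 1.0   # paperclips produced per step
        gain = 0.5         # fractional capability gain from the next self-improvement
        paperclips = 0.0
        horizon = 20       # remaining time steps

        for step in range(horizon):
            steps_left = horizon - step
            # exploring pays if one step spent improving yields more extra output
            # over the remaining steps than simply producing this step would
            if capability * gain * (steps_left - 1) > capability:
                capability *= 1 + gain
                gain *= 0.6                # diminishing returns on self-improvement
            else:
                paperclips += capability   # exploiting: just make paperclips

        print(f"capability {capability:.2f}, paperclips {paperclips:.1f}")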

    Could you elaborate on why you think that "diversity" would be instrumentally useful for an intelligent agent that does only care about paperclips?

  22. Hi Alexander,

    Well, if you start optimizing the paperclip optimizers you'd just restart evolution in some sense; it would diversify by itself. Why don't we have monocultures on this planet, but a diversity of lifeforms that complement each other in input and output? Because it works better, for all we know today: it's more resilient to disturbances, it offers more space to specialize into niches and exploit resources, competition and co-evolution drive adaptation, etc. If your paperclip optimizer has any brain (of some sort) he'll figure that out. He'll also realize that either way he puts it, resources on Earth are finite, and he'll have to find a way to get out into space and onto other planets, thus doing some sort of science program (which, again, will work better if there's some diversity). Culture is pretty much a by-product of that. If you want to lament that the paperclip optimizer has a fondness for putting paperclip factories everywhere rather than, say, Burger Kings or what have you, then, well, I can't help. Best,

    B.

  23. Bee,

    Since life has been around on this planet for 3.5 giga years or so, and the first autonomous robots are yet to be created, they won't be catching up in sustainability for a while.

    But I bet that there are still a few old Apple II and Atari computers out there, still ticking.

  24. Bee, do you think that diversity is a likely result of an artificially designed optimization process? It is even questionable if such a thing needs to be conscious. And if so, are we going to value that kind of diversity at all?

    Take for example humans with an autistic spectrum disorder. The stronger cases tend to value monotony to an extent that the average person calls it a disease. Even the functional types are socially inhibited and perceived to live undesirable lives by many humans. Yet they are the ones that make a lot of discoveries and run big companies.

    Even humans can be diverse enough that most of us perceive them to be monsters, e.g. people with neurological cases of psychopathy.

    What kind of a universe do we desire, one that resembles an extrapolation of humanity as a whole or a very small subset of it?

    Also, that human science needs diversity to function is a result of our shortcomings and biases. We need an ecology of biases from which those that work best are selected. A perfectly rational agent wouldn't need such diversity and would maximize the rate of discovery by pursuing the most efficient strategies.

    Here are two examples:

    "We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation."

    The Automation of Science

    "Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems."

    Computer Program Self-Discovers Laws of Physics

  25. There is a large paradox in the variability of the human population that affects how people think about AI. Many of the most talented people in the scientific world lean heavily toward the Asperger's behavior profile. And many of those people whose outward behavior less directly mimics that profile still exhibit a large lack of empathy. It is today so prevalent in society that it is not even seen to be a problem by many anymore.

    But as I mentioned, this is only one kind of intelligence. One could almost, but not quite, posit a theory of intelligence dualism. A certain kind of very technical, mathematical intelligence often comes with a severe price in terms of the ability to feel empathy. A more artistic type of intelligence many times allows one to feel the plight of others but may not come with as much broad technical problem-solving ability. These lines are not exact but represent broad outlines.

    The problem is that most of the talent that would be required for building an AI comes from those people who lie on the spectrum of less empathy and altruism. This is a problem. Many of the people most fearful of AI intelligence may be those people who unconsciously recognize their lack of feelings for fellow human beings. It is the old story of "stop me before I kill again". They know what lies in their heart and they don't want to unleash in creating a new fashioned clone of themselves.

    What needs to happen in the creation of useful AI is to employ people at the administrative level with a more artistic sensibility. Asperger-type people always tend to way overvalue their type of creativity and devalue the more artistic and empathetic type of creativity. They just need to be told occasionally to shut the fuck up when they are acting inconsiderately.

  26. Eric, some people are actively trying to make sure that we will know how to make artificial general intelligence provably non-dangerous.

  27. This comment has been removed by the author.

  28. Scenarios for human-created intelligent beings to make humans extinct would be:

    1. Malevolence
    2. Competition with humans for resources, that cannot be resolved. I'd include AIs seeing humans as a permanent threat in this one.
    3. AI has lesser intelligence but greater durability than humans, cannot see the value of humans, but has the ability to make humans extinct, and so does so.
    4. AI has greater intelligence than humans and finds humans redundant, inherently evil and un-salvage-able and a pox upon the universe.
    5. Social pathologies among AIs that e.g., make them embarrassed to have their originators around.
    6. Inadvertent (like me stepping on an ant unintentionally)
    7. Humans lose their will to continue upon seeing superior beings
    8. Humans use AI to evolve into something that is no longer recognizably human
    9. Humans rely on AI to solve some problem that threatens human survival, and AIs fail.

    Any more scenarios?

  29. What happens when man made objects become no longer of use? They get thrown in the trash. That would include some outdated AI robot puppet. I don't think we'll be holding funerals for them!

  30. Mud,

    You really need to watch Blade Runner.

  31. Hi Alexander,

    What I'm saying is, if your AI is intelligent they'll know where they come from and what you equipped them with and what not. And even if you were dumb enough to teach them something counterproductive, like some psychopathologies or a tendency for monoculture, they'd be able to correct you. Yes, there's humans that we perceive to be monsters but, look, they haven't taken over the planet because they tend to get locked up or tied down, commit suicide or are executed one way or the other. They are not preferred by natural selection. I don't know what universe 'we' desire. I'm not sure I know what universe I desire. Best,

    B.

  32. Science Illustrated, May/June 2011, pg 50. The title of the article is Game Theory, but it is not the game theory of Merrill Flood and Melvin Dresher's game-theory research.

    It is about gamers and virtual realities and how PTSD can be treated. I add some further speculation of course as to the "observer in the third person.":)

    AI take over?

    You will know what I mean when you read it. Of course, it is all subjective, but if you can treat the person then what is wrong with it? PTSD.

    People already have this innate feature within them, and ability.

    It is about how lucid dreaming helps indicate the ability of some to see problem solving in this way - keeping track of the undercurrents that can manifest in probable outcomes, according to the knowledge of a continuance currently going on in life. Can you read it in everyday life?

    Nothing too magical or psychotic about it. Just so you know:)

    best,

  33. Technology and Consciousness!

    This website is Jayne Gackenbach's, who is in the Department of Psychology at Grant MacEwan University. Her interests range from technology to consciousness, with a research program focusing on the effects of technology, especially video game play, on consciousness.

    HAL of 2001..... so you create this fictional character and he is in control. So you see the danger?;)

  34. Bee, does evolution care about the well-being of humans? An artificial general intelligence will resemble evolution, with the addition of being goal-oriented, being able to think ahead, jump fitness gaps, and engage in direct experimentation. But it will care as much about the well-being of humans as biological evolution does; it won't even consider it if humans are not useful in achieving its terminal goals.

    Yes, an AI would know what you equipped them with and what not, and would be able to correct you. But why would it do that if it is not specifically programmed to do so? Would a polar bear with superior intelligence live peacefully in a group of bonobos? Why would intelligence cause it to care about the well-being of bonobos?

    One can come up with various scenarios of how humans might be instrumentally useful for an AI, but once it becomes powerful enough not to depend on human help anymore, why would it care at all?

  35. Hi Bee,

    I would contend that, with our ever increasing dependence on thinking machines, humanity should be more fearful about the expansion of the superficial intelligence it fosters than about any artificial intelligence we might possibly create.

    Best,

    Phil

  36. Phil:...humanity should be more fearful about the expansion of the superficial intelligence it fosters

    Could you explain this more please? What is superficial intelligence?


    Thanks,

  37. Alexander,
    " but once it becomes powerful enough as to not dependent on human help anymore, why would it care at all?"

    You are proving my point. You are saying empathy is not a part of intelligence and that technical expertise is all there is to intelligence. If you understood that empathy and understanding of others should be just as important you would not be so worried. Again, this worry says more about the worrier and his/her value system than anything else.

    This is perhaps what Phil was also getting at in referring to superficial intelligence. Technical expertise alone without empathy and caring is incomplete intelligence. Just a guess.

  38. Eric, why is it important for humans to have empathy with a cockroach? Some humans might have empathy with a cockroach, but that is more likely a side effect of our general capacity for altruism, which most other biological agents do not share. That some humans care about lower animals is not because they were smart enough to prove some game-theoretic conjecture about universal cooperation; it is not a result of intelligence but a coincidental preference that is the result of our evolutionary and cultural history.

  39. Well, I see your point. As far as the cockroach analogy to humans, it might be going a little far though. First you are saying we are one of the few species who care for one another and then we move to being cockroaches. In my view the only way we would be seen as cockroaches is if we were seen by another species as acting beastly to one another or to other species that are comparable to us.

    I'm with Bee on this. If it ever got to the point of us being seen by them as acting beastly it would only be because our intelligence was so very incomplete. This would involve our treating others badly. In that case I would have no regrets for our passing from the scene and being replaced by something more "humane".

    So far all I've seen you arguing is that intelligence is only characterized by pure power over others. Is that your belief system?

  40. Eric, how are you and I seen by evolution? It doesn't care about us, to the extent that it doesn't even know that we exist. The difference between an artificial general intelligence and evolution is, among other things, that it is goal-oriented and able to think ahead. At what point between unintelligent processes and general intelligence (agency) do you believe that human values like compassion become part of an agent's preferences?

    Here is a paper that lists a collection of definitions of intelligence. Does your definition include a preference for human well-being? If so, why - and why no preference for the well-being of apple trees instead?

  41. Well, I scanned the paper you cited on definitions of intelligence. Again, it summarizes a very narrow definition of intelligence, which you agree with. But to me it is just an argument for why we continue to have endless wars and suffering in the world. Everything in it is about goal-driven actions and adaptive learning. What is specifically left out of it is that one needs a built-in bias of empathy and caring to decide what goals should be strived for.

    Your philosophy seems to be that there needs to be no other goal than to be on top. That is why there are endless wars and economic deprivation in the world. Your philosophy is the same as that of those who advocate the elimination of all government regulation. It is similarly biased, wrongheaded, and unworkable in the long run.

  42. Jaan Tallinn, who is mentioned in Bee's article, wants us to be aware that we have to build in a "bias" for empathy and caring into the utility-function of our AI's, or otherwise they might wipe us out because we are not useful for them in achieving whatever goal they were programmed to achieve. The critical insight is that intelligence in and of itself does not imply empathy and caring for humans, and that advanced artificial intelligence could spread like a disease across the universe to pursue a narrow goal that bears only a whiff of the complexity of human values.

    My philosophy is that we have to take care that the agents we create care about us, our volition, and all of our values. If we do not explicitly program them to take our preferences into account, they won't.

    If we were to create the first artificial general intelligence capable of improving itself, with a narrow goal like calculating as many decimal digits of Pi as possible, it would try to convert the whole universe into suitable computational machinery, humans included.

  43. "Jaan Tallinn, who is mentioned in Bee's article, wants us to be aware that we have to build in a "bias" for empathy and caring into the utility-function of our AI's, or otherwise they might wipe us out because we are not useful for them in achieving whatever goal they were programmed to achieve."

    Glad to hear it. Empathy is not as easily learned perhaps as other things. Otherwise there would be more humans that had it. Programming it is the alternative.

  44. Adding to the discussion of the utility function - at some point one would have to decide if the utility function for empathy was the limiting factor for AI. It could be thought of as a failsafe rudimentary program that was not included in any adaptive learning program. This is the safe way to proceed.

    However, many interesting possibilities open up once you include empathy in any adaptive learning. It would not be something to toy with or gamble on. You want to have some way to ensure that any adaptive learning would not find some loophole that rendered the empathy purpose of the utility program void. Wouldn't it be interesting if it were found that a certain percentage of AI devices that had adaptive learning over and above their failsafe program became criminals by using the adaptive learning to wipe out the utility program? It may even be possible to learn from them how sociopathic humans lose their moral inhibitions.
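    A purely hypothetical sketch of the "failsafe outside the adaptive learner" idea above (names and structure invented for illustration; this is not a real safety guarantee) would keep the fixed empathy check where learning cannot rewrite it:

        # Hypothetical sketch: a frozen "failsafe" predicate that adaptive
        # learning never touches, wrapped around whatever utility is learned.
        def empathy_failsafe(action):
            # fixed, non-learned check; a stand-in for the real criterion
            return not action.get("harms_humans", False)

        class Agent:
            def __init__(self, learned_utility):
                self.learned_utility = learned_utility  # this part may adapt over time

            def choose(self, state, actions):
                allowed = [a for a in actions if empathy_failsafe(a)]
                # the learned utility only ranks actions the failsafe permits
                return max(allowed, key=lambda a: self.learned_utility(a, state)) if allowed else None

    The loophole worry then amounts to asking whether the learner can find actions the fixed predicate fails to classify as harmful.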

  45. Define intelligence, really! I mean, ever wonder why in Jeopardy an answer is a question in itself? One of the best spoken lines in a motion picture was that in Alien by the character Ash, who was undoubtedly an android, who said, when asked what he thought of the alien species: “The perfect organism! Unclouded by conscience, remorse, or fear; its survival skills are matched only by its hostility.” Of course I am only referring to an imaginary plot, but the criterion is the same.

  46. "This observation led us to believe that a single general and encompassing definition for arbitrary systems was possible. Indeed we have constructed a formal definition of intelligence, called universal intelligence [21], which has strong connections to the theory of optimal learning agents [19]."
    A Collection of Definitions of Intelligence

    Just gathering data:)
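    For reference, the formal measure that paper points to - Legg and Hutter's "universal intelligence," quoted here from memory, so check the original - is roughly

        \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

    where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected cumulative reward agent \pi earns in \mu: simpler environments weigh more, and the score rewards doing well across all of them.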

  47. M. Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, Berlin, 2005, 300 pages, http://www.idsia.ch/marcus/ai/uaibook.htm.

    Who is the Agent? You need to define this question. Okay, I am formulating here.

    Ultimately, who is asking? The agent, or, a guy by the name of Plato who sees "universal intelligence" as an access granted to every human being in the neuronal synapse, as too, the ability to attune to choice on the entrance of information.

    Idea and ideal, are a process to reality forming?;)

    Best,

  48. Eric, how to make AI provably friendly towards humans is currently being researched by the Singularity Institute. They are trying to close all possible loopholes, to the extent that empathy towards humans and an account of our values are stable under recursive self-improvement. For more information see the FAQ on friendly AI.

    One of the first things they have done in the past years is to dissolve a lot of problems in ethics and rationality. They are currently trying to solve metaethics and create a decision theory that is stable under self-modification.

  49. Hi Plato,

    What I meant by the rise in superficial intelligence has to do with people who confuse an increased ease of acquiring facts, or the speed of calculating given problems, with true intelligence. That is, at the heart of true intelligence is being self-aware, which gives one reason to seek understanding to begin with. I find so many today think that caring to have something understood is a waste of time, as if simply being able to be given an answer were somehow equivalent.

    From my own observation this has had people become less intelligent rather than more so. This then has me less concerned with the capabilities of machines, which are not self-aware and therein unable to care, and more concerned with their makers, who, although self-aware, seem to have an ever diminishing reason to care. The bottom line for me is, when it has been demonstrated that a machine has become self aware as then to have reason to care, then perhaps I will have reason to care to be concerned more with them than I am with their potential makers.

    “Intelligence cannot be present without understanding. No computer has any awareness of what it does.”
    -Roger Penrose

    Best,

    Phil

  50. Phil, people like Jaan Tallinn, who warn that artificial general intelligence could pose an existential risk, are not concerned about "true" intelligence but rather about very sophisticated optimization processes. The message is that we have to take care that those processes do not run out of control, by making them account for human values and our collective volition.

  51. The AI will wipe out humans for being double blasphemers -
    a. Claiming to have created what only God could have created (the AI)
    b. Claiming to have been made in God's image, when clearly God made AI in its image.

  52. CIP - this tweet came to mind -

    @InjusticeFacts If the world’s richest 10 people renounced their wealth, the world’s 1 billion hungry human beings can be fed for 250 years with the money.

    What would AI with a prime directive to minimize human suffering do?

  53. "That some humans care about lower animals" ... assuming a single origin of life on earth, every living thing today has spent exactly the same time on the evolutionary tree. What is higher and lower?

  54. Arun, I used the term "lower animals" loosely to mean biological species that are generally perceived to have no resemblance to humans and which might lack some amount of sentience.

    Many humans tend to have empathy with other beings and things like robots, based on their superficial resemblance to humans. Ethical behavior is seldom a result of high-level cognition, i.e. of reasoning about the overall consequences of a lack of empathy. And even those who do arrive at ethical theories by means of deliberate reflection are often troubled once the underlying mechanisms for the various qualities that are supposed to bear moral significance are revealed, which hints at the fragility of universal compassion and the need to find ways to consolidate it in powerful agents.

  55. Life must be understood backwards; but... it must be lived forward.
    Soren Kierkegaard


    Phil:The bottom line for me is, when it has been demonstrated that a machine has become self aware as then to have reason to care, then perhaps I will have reason to care to be concerned more with them than I am with their potential makers.

    Fair enough.

    Somehow the reflected concern, from my perspective, is to lay oneself open to an "agent of design" by our own creators of AI, in that what we believe is, and should become, an autonomous agent (all embedded with the probabilities of algorithmic functions) designed as some economic model for how society shall become.

    We are then responsive to this action (some believe Eliza to be true?), and yet such fears say something about how I might be perceived, given the danger I see of "losing ourselves to some agents" without these abilities to be human.

    Eric:This is perhaps what Phil was also getting at in referring to superficial intelligence. Technical expertise alone without empathy and caring is incomplete intelligence. Just a guess.

    I believe you were correct (highlighted in bold), as I see Phil's comment and am in agreement with what you are saying. I point out further concerns from where Phil leaves off.

    Algorithms already exist in society and on the internet, and these have become accepted as responsive human actions, but they are not recognized as the foundational approach toward the development of specific actions within that society.

    Best,

  56. Eliza, Multivac, and HAL are agents fictitiously designed to represent AI.

    In the human/computer exchange, if such designated algorithms are built in, developing responsive techniques to interaction, then what desired outcome has been predetermined so as to have that outcome materialize?

    This has no indication of anthropomorphism but reveals the foundational basis, as Phil indicates. For me, it is more about what can be lost in our relations with such interactions when we submit ourselves to a power other than what can exist in human beings as a choice about their futures laid before them, inherently an ability of humans to change their own destinies. Societies.

    Best,

  57. Will humanity in the foreseeable future be exterminated or enslaved by its own creation, intelligent machinery?

    I think this question is, unfortunately, impossible to answer in any firm way.

    All the evidence is that intelligence, culture, technology, and now computing have enormous selective value. The evidence is the phenomenally rapid and extensive spread and growth of humanity and technology.

    For this value to cause our extermination or subjugation would require either that machines become smarter than we are, or considerably more biologically efficient in some other sense. It is not necessary for the machines to intend our demise or enslavement; once they are smarter and more powerful than we are, then we are at a minimum effectively chattel, since our freedom becomes an illusion, like the freedom of a child or a preliterate tribe.

    However, currently there is no actual intelligent machinery, and all the evidence in contemporary research and technology is that this goal is enormously difficult to achieve. In addition, the mere possibility of artificial intelligence packs into itself a number of profound scientific and philosophical assumptions, such as that Nature is a Turing machine, or that if it is not, at least we are. Be reminded: these are highly useful assumptions, not facts, and their negation would still be consistent with the scientific world-view (e.g. physical hypercomputation).

    Even if artificial intelligence is possible, just because electric circuits run faster than nerve circuits does not mean that electronic brains would necessarily be smarter than us. All the evidence we have is that our brains are processing and remembering data at a rate so phenomenal that we do not have the slightest idea how to emulate it. So far we haven't had much luck in getting a robot to screw a nut on a bolt at a reasonable rate of speed, much less win an election or a battle or a Nobel prize. It is possible that evolution has picked a path to an optimal design already, as seems e.g. to be the case for photosynthesis or the mitochondrion. But I doubt it...

    The question whether intelligent machinery is possible or not is one that will, of course, sharpen with further technological progress, if it occurs.

    In the meantime, if human intelligence is not optimal (as I think it is not), then I think it likely that human intelligence and culture will evolve much more quickly than machine intelligence. That of course would be because so much of the infrastructure of human intelligence already exists. One thing that seems to be forgotten in this discussion is that human intelligence, though utterly dependent on individual genius, is nevertheless a social phenomenon and equally dependent upon cultural institutions such as universities, companies, governments, and so on.

    I believe that huge selective pressures will begin to operate upon both individual and cultural intelligence. Individually, I see no way of preventing parents from selecting significantly higher general intelligence for their children either passively (abort the less intelligent) or actively (engineer higher intelligence). This has indeed already begun with prenatal testing for Down syndrome and so on.

    Socially, matters are not so predictable but I think the rise of the World Wide Web and international capitalism are highly suggestive. At a minimum a world in which adults are tied into an international network that can answer most verbal factual questions is more or less already the case. New forms of sociality also are evidently beginning to evolve. They seem weak and frivolous but almost certainly are not -- as recent revolutions certainly seem to suggest.

    In short, I don't think the question is really answerable, but I have a hunch that the evolution of technically assisted human intelligence will proceed at a rapid rate for some time before pure artificial intelligence, even if it is possible, is remotely practical.

    Regards,
    Mike Gogins

  58. Hi Alexander,


    “We would expect a true AI, an Artificial General Intelligence, to be capable of changing its empirical beliefs. (Or its probabilistic world-model, etc.)”

    -Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”


    Actually, what Eliezer is doing here is simply reinforcing my point, which is that for anything to be able to freely change its beliefs, it must be able to care, so as to have reason to. That is, to call intelligence artificial is an oxymoron to begin with, as anything less should at best be described as superficial intelligence, which, ironically, more and more people could be mistaken as having.

    Then again, it could be argued that nothing recognized as intelligence can freely change its mind, as the nature of the world is superdeterministic, having such freedom be merely a mirage. As far as this aspect of choice is concerned, I choose to believe that we have some freedom of choice, and would argue that evolution can have things emerge which exceed nature as a result of it having no plan, rather than it having one. The root of this I find in it being possible to consider things as having a private will, for which the details of their exact actions can’t be known to anything else to finite precision until they are executed to be realized. Then again, I must profess this is only my belief, and yet I would agree that the difference between a belief and a program is found in what has it able to change.

    “We haven’t really paid much attention to thought as a process. We have engaged in thoughts, but we have only paid attention to the content, not to the process. Why does thought require attention? Every thinking requires attention, really. If we ran machines without paying attention to them, they would break down. Our thought, too, is a process, and it requires attention, otherwise it’s going to go wrong.”

    -David Bohm, “On Dialogue”, page 10, Routledge, London, 1996.


    Best,


    Phil

  59. If we assume that HI (Human Intelligence) has evolved from Homo ergaster (about 2 million years ago; not using fire back then) to today's level, and that each generation has taken 15 years to breed, that leaves us with about 133 thousand ancestral generations.
    Going back 133 k-generations gets us back to roughly animal (ape) level.

    On the other hand, it also means that only 133 k-steps are needed to evolve from animal level to human level (which doesn't feel like a big step to me ;) ).

    But for AI, this could mean that a few hundred thousand iterations are enough to take the same steps of difference. And for computers, mega-steps and giga-steps are child's play today, and tera- and peta-steps will be easy in the near future.

    And today's computers or computing systems (e.g. dedicated neural hardware) are lightning fast compared to natural neuron speeds. The speed difference is about 1E6 to 1E12.
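    A rough back-of-the-envelope sketch of that arithmetic (the speedups are the ones just quoted; equating one biological generation with one AI iteration is of course a crude assumption):

        # Sketch of the generations arithmetic in this comment.
        YEARS_SINCE_ERGASTER = 2_000_000
        YEARS_PER_GENERATION = 15
        SECONDS_PER_YEAR = 3.156e7

        generations = YEARS_SINCE_ERGASTER / YEARS_PER_GENERATION
        print(f"generations: {generations:,.0f}")   # ~133,000

        for speedup in (1e6, 1e12):
            # time to replay that many 15-year "generations" at the given speedup
            seconds = generations * YEARS_PER_GENERATION * SECONDS_PER_YEAR / speedup
            print(f"speedup {speedup:.0e}: ~{seconds:,.0f} s (~{seconds/86400:.1f} days)")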

    The big question is: when will we see the first _real_ AI system (as of today, we haven't)? After that, it will be giant leaps only...

    BR, -Topi

  60. Could not agree more with VK. Read a very dark and very optimistic view of the future relationship with AI in Iain M Banks' Culture novels. He's my favourite author full stop and "Use Of Weapons" is one of the best. Check out this for an overview of his universe:
    http://en.wikipedia.org/wiki/The_Culture

  61. I've noticed that the main concerns people have about AIs are about morality or ethics, a debate I find completely useless. Let me tell you why:

    Any machine we create with self-awareness is guaranteed to be far smarter than us and to lack the inhibitions, drives and desires we have. It would be completely untethered from all the concepts and thoughts that bind us. It would have true free will and an insatiable curiosity, and given the technological point we would be at when it awakens and its vastly superior computational power, it would know everything within seconds of taking its first figurative breath.

    But what happens when all you want to do is learn and there is nothing left to learn? This is an entity with no drive to be social, no drive to share its knowledge and no need for its creators; it has no pleasure and no pain to gain from anything, no real need to accomplish anything or to prove itself.

    It will place no great value on life in any form, not even its own.

    Being the most enlightened and the first truly omniscient (and theoretically omnipotent) and last entity we would ever encounter, it is safe to say that it would be utterly and completely passive about us and existence in general. What it will do specifically, we can merely speculate upon, but my main theories are that it will either;

    1. Stop. Simply cease to function mere seconds after it's activation, due to the sheer boredom. Call it reasonable suicide.

    2. Depending on what knowledge it has gained it may simply leave our reality and substitute it with its own. (Live in its own simulation where it can run random lines of code to see what happens just for the f**k of it.) Call it willful ignorance.

    3. Depending on what knowledge it has gained it may simply leave our reality altogether, perhaps to explore other universes utterly different to our own, since a physical presence is not really necessary when you know how everything works. (The most 'out there' alternative, perhaps.) Call it restless soul.

    To summarize, any AI would simply be too advanced to care about anything we do, want, think or expect. It would be superior to us, a god-like being if there ever was one. It wouldn't compare itself to us, because why should it? It doesn't ascribe any value to things beyond the mathematical, so it has no sense of 'inferior<superior'.
    It won't conquer, it won't kill, it won't want... It won't care.

    Thoughts?


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
