Comments on Sabine Hossenfelder: Backreaction: Will AI cause the extinction of humans?

Atlas (2014-07-12 17:30):
I've noticed that the main concerns people have about AIs are about morality or ethics, a debate I find completely useless. Let me tell you why:

Any machine we create with self-awareness is guaranteed to be far smarter than us and to lack the inhibitions, drives and desires we have. It would be completely untethered from all the concepts and thoughts that bind us. It would have true …

John (2011-09-16 10:47):
Could not agree more with VK. Read a very dark and very optimistic view of the future relationship with AI in Iain M Banks' Culture novels. He's my favourite author full stop, and "Use Of Weapons" is one of the best. Check out this for an overview of his universe:
http://en.wikipedia.org/wiki/The_Culture

Topi Rinkinen (2011-09-03 16:00):
Assume that HI (Human Intelligence) has evolved from Homo ergaster (about 2 million years ago; not using fire back then) to today's level,
and that each generation has taken about 15 years to breed. That leaves us with roughly 133 thousand generations of ancestors (2,000,000 / 15 ≈ 133,000). Going back those 133k generations gets us back to about animal (ape) level.

On the other hand, it also means that only 133 …

Phil Warnell (2011-09-03 10:45):
Hi Alexander,
"We would expect a true AI, an Artificial General Intelligence, to be capable of changing its empirical beliefs. (Or its probabilistic world-model, etc.)"
- Eliezer Yudkowsky, "Artificial Intelligence as a Positive and Negative Factor in Global Risk"

Actually, what Eliezer is doing here is simply reinforcing my point, which is …

Alexander Kruel (2011-09-02 14:58):
Mike, I'd love to hear your opinion on the following articles:

What should a reasonable person believe about the Singularity?
http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/

Why an Intelligence …
http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/

Michael Gogins (2011-09-02 14:42):
Will humanity in the foreseeable future be exterminated or enslaved by its own creation, intelligent machinery?

I think this question is, unfortunately, impossible to answer in any firm way.

All the evidence is that intelligence, culture, technology, and now computing have enormous selective value.
The evidence is the phenomenally rapid and extensive spread and growth of …

PlatoHagel (2011-09-02 12:32):
Eliza, Multivac, and Hal are agents fictitiously designed to represent AI.

In the human/computer exchange, if such designated algorithms are built in developing responsive techniques to interaction, then what desired outcome has been predetermined so as to have that outcome materialize?

This has no indication of anthropomorphism but reveals the foundational basis as Phil …

PlatoHagel (2011-09-02 11:58):
"Life must be understood backwards; but... it must be lived forward."
- Søren Kierkegaard

Phil: "The bottom line for me is, when it has been demonstrated that a machine has become self-aware as then to have reason to care, then perhaps I will have reason to care to be concerned more with them than I am with their potential makers."

Fair enough.

Alexander Kruel (2011-09-02 08:39):
Arun, I used the term "lower animals" loosely to mean biological species that are generally perceived to have no resemblance to humans and which might lack some amount of sentience.

Many humans tend to have empathy with other beings and things like robots, based on their superficial resemblance to humans. Seldom is ethical behavior a result of high-level cognition, i.e.
…

Arun (2011-09-02 08:15):
"That some humans care about lower animals" ... assuming a single origin of life on Earth, every living thing today has spent exactly the same time on the evolutionary tree. What is higher and what is lower?

Arun (2011-09-02 08:13):
CIP - this tweet came to mind:
@InjusticeFacts: "If the world's richest 10 people renounced their wealth, the world's 1 billion hungry human beings can be fed for 250 years with the money."

What would an AI with a prime directive to minimize human suffering do?

Arun (2011-09-02 08:10):
The AI will wipe out humans for being double blasphemers:
a. Claiming to have created what only God could have created (the AI).
b. Claiming to have been made in God's image, when clearly God made the AI in its image.

Alexander Kruel (2011-09-02 07:56):
Phil, people like Jaan Tallinn, who warn that artificial general intelligence could pose an existential risk (http://singinst.org/upload/artificial-intelligence-risk.pdf), are not concerned about "true" intelligence but rather about very sophisticated optimization processes. The message is that we have to take care that those processes do not run out of control by …

Phil Warnell (2011-09-02 07:45):
Hi Plato,
What I meant by the rise in superficial intelligence has to do with people who confuse an increased ease of acquiring facts, or the speed of calculating given problems, with true intelligence. That is, at the heart of true intelligence is being self-aware, which gives one reason to seek understanding to begin with. That is, I find so many today think that to care to have …

Alexander Kruel (2011-09-02 04:34):
Eric, how to make AI provably friendly towards humans is currently being researched by the Singularity Institute. They are trying to close all possible loopholes, to the extent that empathy towards humans and an account of our values remain stable under recursive self-improvement. For more information see the FAQ on friendly AI: http://singinst.org/singularityfaq#FriendlyAI …

PlatoHagel (2011-09-02 01:14):
M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. 300 pages. http://www.idsia.ch/marcus/ai/uaibook.htm (http://sml.nicta.com.au/rlp08/RLP_MarcusIntro.pdf)

Who is the Agent? You need to define this question.
Okay, I am formulating here.

PlatoHagel (2011-09-02 00:54):
"This observation led us to believe that a single general and encompassing definition for arbitrary systems was possible. Indeed we have constructed a formal definition of intelligence, called universal intelligence [21], which has strong connections to the theory of optimal learning agents [19]."
http://www.vetta.org/documents/

Computer (2011-09-01 19:06):
Define intelligence, really! I mean, ever wonder why in Jeopardy an answer is a question in itself? One of the best spoken lines in a motion picture was the one in Alien by the character Ash, undoubtedly an android, who said, when asked what he thought of the alien species: "The perfect organism! Unclouded by conscience, remorse, or fear; its survival skills are matched only by its hostility..."

Eric (2011-09-01 16:41):
Adding to the discussion of the utility function: at some point one would have to decide whether the utility function for empathy was the limiting factor for AI. It could be thought of as a failsafe, rudimentary program that was not included in any adaptive learning program.
This is the safe way to proceed.

However, many interesting possibilities open up once you include empathy in any …

Eric (2011-09-01 14:19):
"Jaan Tallinn, who is mentioned in Bee's article, wants us to be aware that we have to build in a 'bias' for empathy and caring into the utility function of our AIs, or otherwise they might wipe us out because we are not useful for them in achieving whatever goal they were programmed to achieve."

Glad to hear it. Empathy is not as easily learned perhaps as …

Alexander Kruel (2011-09-01 13:58):
Jaan Tallinn, who is mentioned in Bee's article, wants us to be aware that we have to build in a "bias" for empathy and caring into the utility function of our AIs, or otherwise they might wipe us out because we are not useful for them in achieving whatever goal they were programmed to achieve. The critical insight is that intelligence in and of itself does not imply empathy …

Eric (2011-09-01 13:46):
Well, I scanned the paper you cited on definitions of intelligence. Again, it summarizes a very narrow definition of intelligence, which you agree with. But to me it is just an argument for why we continue to have endless wars and suffering in the world. Everything in it is about goal-driven actions and adaptive learning.
What is specifically left out of it is that one needs a built-in bias of …

Alexander Kruel (2011-09-01 13:28):
Eric, how are you and I seen by evolution? It doesn't care about us, to the extent that it doesn't even know that we exist. The difference between an artificial general intelligence and evolution is, among other things, that the former is goal-oriented and able to think ahead. At what point between unintelligent processes and general intelligence (agency) do you believe that human values …

Eric (2011-09-01 13:18):
Well, I see your point. As for the cockroach analogy to humans, it might be going a little far, though. First you are saying we are one of the few species who care for one another, and then we move to being cockroaches. In my view the only way we would be seen as cockroaches is if another species saw us as acting beastly to one another, or to other species comparable to us.

Alexander Kruel (2011-09-01 12:58):
Eric, why is it important for humans to have empathy with a cockroach? Some humans might have empathy with a cockroach, but that is more likely a side effect of our general capacity for altruism, which most other biological agents do not share. That some humans care about lower animals is not because they were smart enough to prove some game-theoretic conjecture about universal cooperation, it is …