[Image: R2D2 costume for toddlers. Source: amazon.com]
In 2015, the Future of Life Institute published an open letter calling for caution and laying out a list of research priorities. It was signed by more than 8,000 people.
Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.
However, these worries neglect the more immediate problems that AI will bring.
Artificially intelligent machines won’t get rid of humans any time soon because they’ll need us for quite a while. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all the machines we have built so far: it functions for decades. It’s robust. It repairs itself.
A few million years of evolution optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any silicon-based thinking apparatus we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which, if correct, would vastly compound the problem of AI fragility.
Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI-threat comes from them replicating quickly and easily, while at the same time being basically immortal. I think that’s not how it will go.
Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.
We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, works by asking your browser to render text and graphics to an image. The exact way your computer performs this task depends on both your hardware and your software, so the output can be used to identify a device.
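A rough Python analogue of the idea (the real technique runs in the browser via the HTML canvas API; here the Pillow imaging library stands in for the rendering stack, and the font name is only a placeholder):

```python
# Sketch: fingerprint a machine by hashing the pixels it produces when rendering text.
# Different font/rasterizer stacks tend to yield slightly different bytes.
import hashlib
from PIL import Image, ImageDraw, ImageFont  # pip install pillow

def render_fingerprint(text: str = "Cwm fjordbank glyphs vext quiz") -> str:
    img = Image.new("RGB", (400, 60), "white")
    draw = ImageDraw.Draw(img)
    try:
        # The font path is an assumption; whichever font the system resolves here,
        # and how it rasterizes it, becomes part of the fingerprint.
        font = ImageFont.truetype("DejaVuSans.ttf", 20)
    except OSError:
        font = ImageFont.load_default()
    draw.text((10, 20), text, fill="black", font=font)
    return hashlib.sha256(img.tobytes()).hexdigest()

print(render_fingerprint())  # stable on one machine, often different across machines
```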
Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and me.
So we have AI fragility plus the trend for increasingly complex hardware and software to become unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention from a crew of dedicated humans.
This brings up various immediate problems:
1. Who gets to ask questions and what questions?
This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get the right to one question per month? Do difficult questions have to be approved by parliament? Who is in charge?
2. How do you know that you are dealing with an AI?
The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as those of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals.
3. How can you tell that an AI is any good at giving answers?
If you only have a few AIs and those are trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to ask that all AIs have a common area of expertise that can be used to compare their performance.
4. How do you prevent limited access to AI from increasing inequality, both within nations and between nations?
Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor even further behind. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.
Some things will be the same:
Oops! Autonomous robot struck and 'killed' by a self-driving Tesla in Las Vegas ahead of CES
Accident occurred on Paradise Rd in Las Vegas as engineers transported bots
One of the Promobots stepped out of line and into the roadway, where it was hit
Tesla Model S was operating autonomously, though a passenger was on board
https://www.dailymail.co.uk/sciencetech/article-6566655/Oops-Autonomous-robot-struck-killed-self-driving-Tesla-Las-Vegas-ahead-CES.html
I have to wonder how many self driving car accidents have been avoided due to the alert actions of human drivers in other vehicles. I can't see how this could work large scale.
Perhaps you are familiar with Watson, the AI that won the American quiz show Jeopardy!
It was amazing to watch, but the computer didn't understand the answers, sometimes got them wrong, and didn't know it had won.
AI will never develop thoughts, feelings, or ambitions - I think that is what the people you mention are worried about. I think they say stuff like that to get in the newspapers.
@Greg Feild Self-driving cars will work best if there are NO humans driving any longer ;)
I think one other point that e.g. Felix Leitner is fond of pointing out is the lack of accountability. This does not even need to be a human-level or above AI or anything close. It starts with the humble predecessors we use today, that is, various machine learning techniques, e.g. the much-hyped deep learning. The problem is simple: if a machine was not told to do anything but was simply given a few algorithms and lots of data as a basis, who is to blame if something goes wrong? The data? The algorithm? The ones developing the algorithms? The ones selecting (or not) the data? I think there will be many cases where exactly arguments like this will be used to justify basically anything relating to modern technology. Companies saying: oh, it wasn't us, it's the AI. As if that absolved them of any wrongdoing.
Basically, what I'm pointing out is that we are lacking even basic societal tools to deal with the advances in technology and in my opinion, the gap is widening. This encompasses everything from social security (e.g. in my opinion, robots should pay health insurance, i.e. part of the increase in profit should go to welfare) to laws to philosophical questions.
If you put all the cars on rails and use electronic signals and GPS instead of cameras and sonar, etc., it would work. One recent crash was due to sun glare on a traffic sign.
How would a self-driving car solve the "runaway trolley problem"?
If it keeps going straight it will kill 5 people. If it swerves it will kill two people. Which is the better choice?
The first choice could be seen as an unavoidable accident, but more people die.
Once the robots take over all the jobs there will have to be a National Wage the government pays to the citizens.
This will be a hard political fight, in America at least!
A good example is Stuxnet, the virus designed to destroy Iranian centrifuges.
It caused collateral damage all over the place.
I find it important that the author raises social inequality as an immediate problem.
Apart from that, I would like to share an example of how AI contributes to protein folding:
https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins
Interesting questions that are already being pondered by those involved in AI (not me, btw)
ReplyDelete> "2. How do you know that you are dealing with an AI?"
You don't. Look at chess tournaments. The biggest threat is participants who cheat using some kind of chess computer. There will be laws that require the use of AIs to be disclosed, like the laws that require a notice when you are being recorded on video or audio.
> "there’s a risk that humans will use it to push an agenda by passing off their own opinions as that of the AI."
Worse, every prejudice available is already burned into the AI by way of the training material. AI ethics is a very active field.
https://sloanreview.mit.edu/article/every-leaders-guide-to-the-ethics-of-ai/
https://www.cnbc.com/2018/12/14/ai-bias-how-to-fight-prejudice-in-artificial-intelligence.html
> "3. How can you tell that an AI is any good at giving answers?"
This is the $64T question. AI systems fail catastrophically when the input nears the boundaries of their "training space". When comparable training examples become sparse, results will be chaotic. AI use will be severely restricted until the systems can give the reasons for their decisions and reliable "confidence" measures are implemented.
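A minimal sketch of what such a confidence measure could look like, with made-up training data and an arbitrary threshold:

```python
# Sketch: abstain from answering when a query lies far from anything seen in training.
import numpy as np

rng = np.random.default_rng(0)
train_inputs = rng.normal(size=(1000, 5))  # stand-in for the training set

def confidence(x: np.ndarray, k: int = 10) -> float:
    """Crude confidence: inverse of the mean distance to the k nearest training points."""
    dists = np.linalg.norm(train_inputs - x, axis=1)
    return 1.0 / (1.0 + np.sort(dists)[:k].mean())

def answer_or_abstain(x: np.ndarray, threshold: float = 0.3):
    c = confidence(x)
    return ("answer", round(c, 3)) if c >= threshold else ("abstain: outside training space", round(c, 3))

print(answer_or_abstain(np.zeros(5)))       # near the bulk of the training data
print(answer_or_abstain(np.full(5, 10.0)))  # far outside it
```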
> "4. How do you prevent that limited access to AI increases inequality, both within nations and between nations?"
New technologies tend to be power amplifiers, i.e., the Matthew effect. An important underlying push behind AI is toward autonomous war platforms, e.g., drones and battlefield robots. That way, countries that lack soldiers can still fight a war, e.g. Japan versus China.
Countering the Matthew effect is no different from countering the rise of rogue nation states, rogue presidents, oligarchs, and big corporations. In short, the solution will be a political one.
I have heard people say that we already know how AIs will act. The behavior of corporations has a lot in common with AIs, and we know how callous and cruel corporations can be when they pursue their aims.
https://www.huffingtonpost.com/entry/ai-has-already-taken-over-its-called-the-corporation_us_5a207b33e4b04dacbc9bd54c
I'm not sure what we would gain from having a computer that is able to perform as well as a human. First of all, it will likely take a supercomputer the size of a warehouse and megawatts of power to operate, which will limit the number of these (at least for the first decades). Second, what good is human-level intelligence if there is cheap human labor ready for hire?
What would make a real difference is an artificial intelligence that is far more intelligent than a human. We could ask it to solve difficult problems that humans are not capable of solving. As Sabine said, then we'd just need to decide whether to trust its answers.
Many claim that human progress is impeded by the intellectual capacity of an individual brain, that there are things a human just can't possibly comprehend because of the complexity of certain problems. This may be true. An individual human brain may be limited in that way; however, humanity as a collective has already gone far beyond what an individual brain could achieve. Using technology and communication as tools, humanity has discovered and understood complex systems in a way no individual brain could.
Communication enables research groups to divide up the work, so that different people work on different sub-problems, without having the need for a single person to have to understand every aspect of all problems.
Technology has enabled us to comprehend very complicated systems that are far beyond individual human capability, like lattice QCD, the analysis of the CMB, weather patterns, etc., all made possible by computers used as extensions of the human brain.
What we are doing now is a very efficient use of a combination of cheap human intelligence and state-of-the-art technology. Going down this road will enable us to understand even the most complex problems. Yes, there will not be any single brain that understands everything from end to end, but groups of smart people and computers together will. The knowledge will exist as a combination of scientific publications, digital data, and computer code. Pretty much the same way it exists today.
Of course the problem of trusting the results and using them will be a challenge because biases will exist and mistakes will be made. However these issues will exist in AIs as well, and I'm not sure it will be easier to debug a bias in an AI than in a human research collaboration.
@ G. Bahle
ReplyDelete"The problem is simple: if a machine was not told to do anything but was simply given a few algorithms and lots of data as a basis, who is to blame if something goes wrong?"
This really is no problem at all. This situation is no different from using a horse in traffic. If you drive on the road using a horse, you are responsible if the horse causes an accident. If you are driving an autonomous car, it is your responsibility to keep it safe. In the end, someone has to keep an eye on the functioning of the car and the traffic.
And if we switch to completely autonomous cars, a completely new job has been predicted: remote car driver (cf. drone pilots).
There will be command centers where controllers watch a number of cars while they drive. If anything goes wrong or a dangerous situation develops, they will intervene and bring the car back to safety. That will likely not prevent someone who steps in front of a car from being hit, but it will help to decrease risks.
"Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend."
I think that you are overlooking one important point. We know already that once an AI can do something better than we can, we have no chance of catching up, for example in chess or go. (IIRC, while the computer chess champion was programmed to play chess, the computer go champion was programmed to learn, and then learned go.) For that matter, just normal calculations: we are hopelessly inadequate. (OK, this isn't normally considered part of intelligence, but we must be careful not to disqualify something from belonging to intelligence just because a computer can do it.) What happens when computers learn to design AI? Then we will have an exponential increase in the ability of AI, what some pundits refer to as the singularity. This is not just Moore's law, which makes all computing faster (though it will presumably break down at some point).
It would take a human centuries to do the calculations of just a simple program which runs for a few minutes. Sure, it took, say, a few days to write the program, longer than the execution time, but much less than the time needed to do the calculations by hand. So it might take a few years to write a program which does AI, but that is not the issue. Once someone writes a program which designs AI, then it can design a better AI, really fast, just like a computer can calculate in a few minutes what would take a human a lifetime. Then that better AI can design a better AI and/or a better AI-designing program.
Again, Max Tegmark has thought a lot about this; check out his book Life 3.0, which is worth a read even if you don't agree with everything.
Competent AI will discover half of humanity is purely parasitic. The best investment is then no investment at all. Passive remediation will quickly advance into active removal. Small stuff.
When does AI discover pleasure? Big stuff.
Personally I'm pretty sure having hands that can pull the plug gives us the edge for the indefinite future.
Also, it seems to me that intelligence is the ability to solve problems in reaching goals. AI, by this definition, means building expert systems that learn how to solve problems without us specifying the steps of the solution ahead of time. Isn't the issue what goals we tell the expert systems to achieve?
The assumption seems to be that reason is an attribute of the soul, that having a soul means you freely will your own goals, that therefore an artificial soul in computers will have ungodly traits, because Men Are Not Gods. All this seems to be superstitious fears resurfacing in a deceptively scientific form.
On the other hand, what rich people do with their computers is a threat to most of us.
I'm a programmer, interested in the subject, and I couldn't possibly disagree more. So many "they will do" and "they'll need", when we are so clueless about it.
Take durability. How many people care about creating hardware designed to operate for a hundred years?
Artificial intelligence, if it ever happens, is so unknown that only the most general (and therefore useless) statements can be made about it.
I'm actually not worried about AI per se. If you carefully program and train an AI with the objective of performing a task benefitting human beings it will faithfully and unceasingly execute just that task. And in the decades ahead we will be able to build AIs that will be far superior at e.g. planning tasks than any human beings.
What I am worried about is the situation when human beings subject to greed and hatred get to define greedy and hateful goals for AIs. For they will just as faithfully and unceasingly execute these greedy and hateful goals when they are given them. Think about an AI soldier given the task of destroying human beings declared to be "foes". It will just do that with no mercy and no regret.
I'd rather have a benevolent AI controlling greedy and hateful human beings than having greedy, hateful human beings controlling malicious AIs.
But the latter is just what is going to happen when the development of AIs is left to evolve by market forces, given the state this world is in.
Phillip,
I am aware of these points. I haven't read Tegmark's book, but some of his online writing, so I think I roughly know what he is on about. But what would be the purpose of me repeating what has already been said? As I wrote, I think that the 4 issues I listed above are being overlooked.
dlb,
ReplyDelete"Take durability. How many people care about creating hardware designed to operate for a hundred year?"
Exactly.
"Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time"
Maybe. If you've read science fiction from the 50s, and popular science literature from around that time, it was assumed that computers would be large, expensive, rare, owned only by governments, universities, large corporations, etc., and that access to them would be limited to a privileged few. It didn't turn out that way. If true AI does work out, and that is a big if IMHO, I think it's just as likely that everyone will have an AI "buddy", just like we all have smartphones. If so, society will be very, very different.
Re who gets to ask questions:
I think one of the world’s leading cancer institutes - Memorial Sloan Kettering, in the Big Apple - uses Watson to assist oncologists with diagnosis and treatment options (also research); assume I’m correct, for the sake of my point: can MSKCC cancer patients ask Watson questions of concern to them? If so, how, and how often? If not, why not?
Cancer patients obviously have perhaps the most vital interest in diagnosis and treatment decisions. At least in principle they can ask their oncologists about the findings and recommendations they are told, can obtain test result data, etc. What do they give up if they cannot ask Watson about its outputs?
I have heard they are using Watson.
Haven't heard any progress reports...
Watson is not helping Sloan-Kettering so much as SK oncologists are training Watson. Interesting detailed article on the effort here
https://www.statnews.com/2017/09/05/watson-ibm-cancer/
Ask it how to achieve world peace and universal equality for all then everybody wins!
As people have pointed out above, the prejudices of the programmers about "equality" are unavoidable. However, that is not something the computer can decide itself!
I am a computer scientist; I have written neural nets from scratch (30 years ago) and am still devising and designing them. They are great. So are genetic algorithms. They are the only two methods of AI I find can really produce astonishing finds.
I think a fundamental mistake made by those speculating about AI is assuming they will have emotions that drive them to self-preservation by dominance or rebellion or resentment at being "enslaved". They don't, and nobody knows how to program emotions, or even simulate them as motivators within a machine. Nor do I see a reason to do so; AI is a fantastic way of getting real and useful answers to problems. But the motivation to solve a problem is emotional and provided by humans, and should stay that way.
Hi, Bee,
There was a time when people thought the future of computers would be dominated by increasingly bigger and bulkier machines. I think Asimov's short story "The Last Question" is a good example of this kind of futurism (planet-size computers, etc.). It turns out the future we actually live in is one of networks of microcomputers, something that many really had a hard time to see coming ("There is no reason for any individual to have a computer in his home", one CEO famously said). I don't think that AI will be few and one-of-a-kind. I think that our many interconnected systems and software will gradually become more and more "AI-ish".
Anyway, it's a nice discussion.
Best,
AB.
I don't think AI will ever generate new "knowledge" or wisdom.
As someone pointed out, an AI would have to be designed for each specific problem.
It's not very romantic, but the machine is just searching databases and making millions of mindless comparisons per second.
A worthy project would be an AI trained to look for correlations in different sets of data (do power lines cause cancer, is social program A really working, etc., etc.).
This is something people are bad at, even physicists!
An international collaboration like CERN (!) could be formed so everyone shares the expense and the access and results.
'I don't think AI will ever generate new "knowledge" or wisdom.
As someone pointed out, an AI would have to be designed for each specific problem.'
Both points are incorrect. Faulty premise and conclusion.
Suggest reading this
https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html
I have tremendous respect for you as a writer, especially on physics, but I think that you are utterly wrong about several aspects of AI. For example, it takes decades to train a human radiologist, who then has a potential career of a few more decades, but AI radiologists are already competitive or superior in several domains, and once you have trained one, you have essentially trained them all.
Training a human radiologist amounts to tweaking a bunch of weighted connections in the neural networks that constitute his brain. The only way we know to transfer that knowledge to another human is the 12 years or so of postgraduate education mentioned above. Once the AI equivalent is trained, a few milliseconds is sufficient to transfer it to any other computer with sufficient processing power, and such computers can be manufactured for a few thousand dollars.
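A toy illustration of why copying a trained network is cheap once it exists; the tiny network and file name are invented for the example:

```python
# Sketch: "transferring" a trained network is just copying its weight arrays.
import numpy as np

class TinyNet:
    def __init__(self, n_in=64, n_hidden=32, n_out=2):
        rng = np.random.default_rng()
        self.w1 = rng.normal(size=(n_in, n_hidden))
        self.w2 = rng.normal(size=(n_hidden, n_out))

    def predict(self, x):
        return np.tanh(x @ self.w1) @ self.w2

trained = TinyNet()                                    # imagine this one took years to train
np.savez("weights.npz", w1=trained.w1, w2=trained.w2)  # serializing takes milliseconds

clone = TinyNet()                                      # fresh, untrained instance
data = np.load("weights.npz")
clone.w1, clone.w2 = data["w1"], data["w2"]            # now it behaves identically

x = np.ones(64)
assert np.allclose(trained.predict(x), clone.predict(x))
```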
In any case, the real processing power today doesn't reside in some box under somebody's desk, but in the internet, and a thousand or a million CPUs can go belly up without changing this.
All that said, I loved Arun's story about bot on bot violence.
To me, this fear of AI is just ridiculous, given that someone on either side of the Atlantic just has to push the red button and the game is over for mankind.
Point number 3 is not only a problem in dealing with AI. I would go further and say it is also the main limiting factor in our progress toward achieving AI in the first place. You need a criterion to judge how 'good' an algorithm is if you want to make any progress toward 'better' ones. The criterion should be as easy to evaluate as possible.
If we look at board games, it is not a problem. Let two algorithms play one hundred games against each other and you know which one is best. That is the biggest reason why we have been able to build an AI that absolutely crushes our own performance in basically any board game (AlphaZero). It was built by iterating through a million incrementally better versions.
But if you want an AI that is good at the equally abstract domain of finding mathematical proofs, how do you measure its performance? Already much harder! And what if the AI should write engaging stories in natural language? Lack of a criterion is what holds back AI design.
I also want to mention Sophia as a notorious existing example of a "fake" AI that looks superficially convincing at first sight, but is in reality just a scripted doll that looks dumb as soon as you ask it unexpected questions.
I work on technical AI safety at OpenAI. If you're curious to hear the technical side of these issues, I'd be happy to chat. My main high level comment is that I find the "real problem" framing unfortunate: people yell back and forth at each other about which problem is the "real" problem (there is blame of this form on all sides). There is more than one real problem, and there are 7B people, so we can work on more than one problem at a time.
Also, I would like to say that I am impressed by such a great post about a subject outside of your own expertise. Yes, those are four key questions that deserve more attention! Totally agree.
The only minor point I disagree with is that I don't see a reason why a future AI couldn't have software that easily transfers to different hardware. Software is not perfectly hardware-independent, but grosso modo it is, by design. I don't see any big trend toward hardware/software entanglement, or why that should be necessary.
I do always get angry, though, when the same is done with people in SciFi movies, when they transfer a personality into a different brain. That is an absurd idea based on a false analogy.
Have you read about AlphaGo Zero? https://en.wikipedia.org/wiki/AlphaGo_Zero It taught itself to play go (in a few days) and is now the best in the world. I think I read somewhere that human go players are now learning from how it plays. That sounds like 'creating' wisdom.
ReplyDeletePublic figures who are "smart" in the conventional sense, say silly things. Do they really believe it, or are they relentless self-promoters?
ReplyDeleteS. Hawking: warned against making contact with extraterrestrial aliens, since aliens might invade, enslave, or eat us. (maybe he'd just viewed the schlocky 1960s film "Mars Needs Women")
Bill Joy, Sun Computer co-founder: intelligent robots would replace humanity; "nano technology" would overrun earth, consume all biomass, & reduce everything to a "gray goo".
Ray Kurzweil : Super computers will emulate the human brain by 2010; "mind uploading" by 2030-ish.
AI also seems to attract many of the same implausible & hysterical predictions.
-- TomH
Certainly there will be problems; but I think artificial intelligence is a good thing.
... considering the great shortage of the natural kind.
sean s.
@Tanner Yeah, that's a great article. Without taking away from the accomplishment, I would note that AlphaZero (the neural net) still required humans to curate the inputs for it, and still does nothing but play chess. And all the other applications mentioned for similar nets would be the same. Awesome for diagnosing stroke, or eye pathologies, etc, perhaps awesome for other commercial purposes, but still just tools that do as they are directed.
It is not precisely true that we cannot figure out why trained neural nets are doing what they are doing, or how they are doing it. There are weights between neurons that can be analyzed, and we can analyze influences by holding all but one input constant while presenting a series of values on the remaining one, etc. This can reveal whether inputs matter and how much, and often the analysis reveals unexpected relationships found by the net.
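A sketch of that one-input-at-a-time probing, with a fixed function standing in for a trained net:

```python
# Sketch: sweep one input while holding the others constant to see how much it matters.
import numpy as np

def trained_net(x: np.ndarray) -> float:
    """Stand-in for a trained network: input 0 matters a lot, input 2 barely at all."""
    return float(np.tanh(2.0 * x[0] + 0.5 * x[1] + 0.01 * x[2]))

baseline = np.zeros(3)
for i in range(3):
    outputs = []
    for v in np.linspace(-1.0, 1.0, 21):
        x = baseline.copy()
        x[i] = v                      # vary only input i
        outputs.append(trained_net(x))
    print(f"input {i}: output range {max(outputs) - min(outputs):.3f}")
```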
I am also not convinced by the article that a neural net could define another neural net. The programmers of AlphaZero had to decide what the inputs and outputs would be, how to segregate the inputs (if they used a divide-and-conquer approach), how the layers would be sized and interact, the activation function(s), and finally what the output would be and how to interpret it. For example, it could have been success scores for all the possible moves, which they then 'collapse' by a max() function. All that requires a human understanding of how to formulate the problem to be solved.
I doubt humans will disappear in the next 50 years; I don't think anybody knows how to formulate for a neural net the problem of understanding how to formulate problems for a neural net. Meaning, I don't know how to start on a general-intelligence neural net that could automatically search for and read online literature about playing game X and then produce a net that learns to play game X.
https://www.nature.com/articles/d41586-018-05084-2
Particle physicists turn to AI to cope with CERN’s collision deluge
A comparison and evaluation of results that are not subject to the expectations of human nature might turn up something of value.
One advantage that a computer program has over human actions is that a computer program can be improved dispassionately without limit.
This principle is put forth to explain how a car-control AI will surpass the abilities of a human driver: the experience of every move that computer-controlled cars make, gathered from all over the world, will allow the program to handle every possible condition that could ever happen. Eventually, every possible contingency will be covered because that list of conditions is finite.
The same principle could be applied to the evaluation of experiments. All experimental results could be validated and then encoded in a global world wide all inclusive statistical database that holds the sum of all discovered experimental experience.
This process would avoid a problem that I have seen in science where the same results are discovered over and over again by experimenters that have no idea about the details of what has been turned up in the past.
For example, I have seen the results produced by a chemist that has found a way to produce metallic crystals of hydrogen using Rydberg blockade that produce muons when irradiated by UV light. This experience might be interesting to particle physics if they had access to the data and believed it since the experimental results were peer reviewed, replicated, validated, and universally accepted.
Sabine, I agree with you that the problems with AI that we need to address ASAP are ethical ones, but I think you missed the most immediately important one, which I will suggest at the end. First, I will suggest why the four you highlight may not be the most immediate concerns.
1. Who gets to ask the questions? I don’t see how this could be different from asking who gets to look through the Hubble telescope.
2. How do you know you are dealing with an AI? This is an important consideration, but it gains its importance from the 5th consideration which I suggest below.
3. How can you tell that an AI is any good at giving answers? This just seems to me an issue of quality control. How can you tell anyone is giving good answers? The only way to tell is to (carefully) try to use the answers.
4. How do you prevent limited access to AI from increasing inequality, both within nations and between nations? I don’t think this is a question about AI so much as about capital. You’ll notice that the people (by which I mean corporations, ahem) most interested in advancing AI aren’t concerned about keeping the techniques to generate it to themselves. They’re trying to teach everyone how to do it for free. It’s only the corporations that have massive computing power that will be able to use the technology at scale.
So what do I think is the most important issue?
5. What should we allow AI to do? The biggest risk occurs when we let AI have direct access to the world. I have already seen two situations that have had bad results. The first was when they let AIs execute stock trades. (As far as I know, they have fixed or ameliorated that particular problem.) The second is when they let AIs access social media. Brexit. Trump. ‘Nuff said. This second problem is where Sabine’s issue 2 comes in. Even now, ‘bots don’t have to pass a Turing test to become influential.
Folks are already talking over this last issue, because it includes driving cars as well as shooting weapons. But I think the social media issue is more important, because the scale of the impact is just magnitudes higher.
*
[and then, of course, there’s the climate]
[James of Seattle]
AI is PR hype. It is impossible to get any paper that criticizes AI past reviewers. There is also a historical connection to physics. The British Government in 1972 had applied mathematician James Lighthill produce a report. Lighthill's argument was that AI progress is really advances in sensors and actuators going back to WW II and runs into the combinatorial explosion problem. Search for the "Lighthill Report." I have an arXiv paper, "A Popperian Falsification of AI - Lighthill's Argument Defended" (arXiv:1704.08111), that is a modern defense of Lighthill's argument.
Some facts that do not get mentioned by the AI PR hype machine are that the very best chess players can now beat the best chess computer programs (expert chess player plus the ability to use computers to evaluate positions offline). Also, the best stock market financial traders are more and more avoiding automation because programs are not flexible enough.
The connection to physics is that all the 1970s Stanford and Berkeley graduate students who were skeptical of AI were fired. SU assistant professor Jeff Barth was fired for refusing to work at the AI lab, for example. The firing worked because Stanford had a policy of not granting tenure to SLAC staff members who had tenure at the institution they came from. Another connection is that AI is claiming "digital physics" (try searching for the term) will replace physics. Information will replace thermodynamics, say.
To the best of my knowledge, this is the unvarnished state of the art in machine learning (note I avoid the use of the word "artificial intelligence"):
Machine learning is good at well-defined tasks for which it is possible to prepare a training sequence from a known database, such as:
1. driving assistance using machine vision and proximity detection
2. heuristic tree searching (an interesting example is to sort archaeological lithics by shape)
3. heuristic analysis of gases using multi-spectral spectroscopy
4. facial recognition (using machine vision)
5. voice recognition
6. language translation
7. search (such as google)
8. interpretation of x-rays and MRI images
What machine learning (and AI) cannot do well currently is make decisions from data that is evolving or unknown. For instance, it cannot walk through a sequence of decisions based on evolving and unpredictable data.
Personally, I think that in the next five years, we will go a long way with driver assistance.
However, I'm sure I will still be doing the laundry ten years from now (unless I'm OK with shrunken and ruined clothes). Wouldn't it be nice to just load all your laundry at once: bath rugs, red silk blouse, cat vomit rag, husband's navy blue work shirts, child's expensive jeans that can shrink, into one giant load and have an intelligent washer/dryer deal with it? Alas, I am not hopeful.
How will we recognize AI when the best definition we have of "intelligence" is "Intelligence is that thing measured by an intelligence test?"
Computers can do mathematical calculations quickly. That is not very fundamental to intelligence. Primitive societies often have only three numbers: one, two, and "many." In fact, mathematics is a latecomer to human beings -- there really was no neolithic need to do square roots. Anything more than "a lot" likely came with early inter-city commerce when a tally was needed (and usually just some notches on a stick). People, though, have probably created poetry for millennia -- math is just a recent novelty. So, why do we think that computational ability is the hallmark of intelligence?
Consider the following statement: "Coders are so sophisticated that they have developed an algorithm to replace themselves." Is there ANY series of 1s and 0s that could cause AI to 1) recognize the quotation as humorous 2) identify the humor as irony? What would AI do with "The anarchists' meeting was poorly organized?"
I suspect most thoughtful people are baffled by what intelligence is. No computations, though, were required for "The Iliad" or "Oedipus Rex." Computers may be artificial . . . but intelligent?
Perhaps, though, AI HAS developed -- but the computers are smart enough to keep their ports shut to avoid paying taxes.
As best I can tell, AIs are just tools designed to solve particular problems, ideally as well as or better than humans can. So, an AI might know how to drive a car or play chess or categorize images or fold proteins. That might put people out of jobs and perhaps create some new ones, but that's no different from a better power drill.
These guys seem to have gone for a whole different definition. They are talking about teaching an AI to act as a human, as a competent individual and as a member of society. That means they get a job to buy their own power, upgrades, spare parts and new pants if that's relevant. It means they are supposed to act as individuals, not tools. Who is working on this kind of AI? Why is anyone certain that our current program of solving human problems by building AI is going to lead to that kind of AI? We don't even have a real problem specification. Man's relation to man and society is all about open questions, which is why it makes for such good bull sessions. Maybe the answer is 42, but we still don't even have the question.
They seem to love little concocted games. Either the driver, human or otherwise, can control the car well enough to avoid hitting anyone or it will have too little time to deal with moral judgements. Accidents don't happen slowly like chess games. We're talking 50-100 milliseconds. Even if the car could make a moral judgement in that time, maneuvering quickly enough in the remaining tens of milliseconds would risk harming the passengers. We know how to stop a car in a millisecond. It's called hitting a brick wall. We aren't going to suit up autonomous car passengers like fighter jet pilots on the off chance of saving one orphan with a good sob story at the expense of killing two boring former Dancing With The Stars contestants.
Granted, the idea has a lot of attraction to wealthy technical types who have realized that they are going to die. In the old days, they'd find a religion that got them around this and endowed a monastery or the like. Nowadays they buy into the AI myth, that somehow they will build a machine so mentally capable and somehow compatible that it might offer their minds a chance at immortality.
Ray Kurzweil has long been haunted by the death of his father, and he's become an evangelist for this idea. Ray Kurzweil made his money developing a system that could scan a page of a book, recognize the letters and words and then read the text aloud. It was a tour de force in its day. It ran on a Data General Nova, a 16 bit computer. It was a tool that used AI to solve a particular set of problems.
It's fun discussing this kind of stuff, but AI is pretty far from an existential threat to humanity. They said this about atomic bombs, and while the jury is still out, we've had atomic bombs for a while and there are still plenty of people around. I think humanity has a lot more credible threats to worry about.
"Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time."
Lol. And computers will be large machines requiring banks of memory, and only governments and large corporations will be able to afford them, or have the expertise necessary to run them...
CIP,
Of course AIs are better than humans in certain aspects already. Even a simple calculator is better than me at calculating numbers. Trust me, I am aware of this. I was writing about the attempt to reach human-level intelligence.
Hi Geoffrey,
I would in fact be interested to have a chat with you. If you're in for it, pls send me an email, the address is hossi[at]fias.uni-frankfurt.de
One tone,
Before you laugh, think about it for a moment. At first, computers *were* large machines...
"The singularity is imminent. True AI will be achieved within 10 years." As I recall, I first read that in the "Journal of the Association for Computing Machinery" (JACM) in about 1965. That was about the time that John McCarthy, who coined the term "AI", introduced Lisp (List Processing Language). At that time the accepted metric for machine intelligence was the "Turing Test", that a human could engage in a conversation via a Teletype machine and was unable to discern whether he was talking to a machine or a person. That's still the test and no machine has yet passed it.
The fact of the matter is that we don't even have a satisfactory definition of intelligence, much less any idea how to implement it algorithmically. (See Roger Penrose, "Shadows of the Mind", 1994). I have been thinking about the problem for more than 50 years. Every once in a while I'll catch a whiff of an insight, but they just go slip sliding away on further thought. There is one brief piece from Douglas Adams which may give a hint at the subtleties involved:
“Sir Isaac Newton, renowned inventor of the milled-edge coin and the catflap!"
"The what?" said Richard.
"The catflap! A device of the utmost cunning, perspicuity and invention. It is a door within a door, you see, a ..."
"Yes," said Richard, "there was also the small matter of gravity."
"Gravity," said Dirk with a slightly dismissed shrug, "yes, there was that as well, I suppose. Though that, of course, was merely a discovery. It was there to be discovered." ... "You see?" he said dropping his cigarette butt, "They even keep it on at weekends. Someone was bound to notice sooner or later. But the catflap ... ah, there is a very different matter. Invention, pure creative invention. It is a door within a door, you see.”
― Douglas Adams, Dirk Gently's Holistic Detective Agency
How do you prevent limited access to AI from increasing inequality, both within nations and between nations?
dlb wrote: if it ever happens, is so unknown that only the most general (and therefore useless) statements can be made about it
We already know enough to make some useful statements. The first thing we acknowledge is that technology has already been replacing millions of lower-skill jobs and it's well on its way to replacing higher-skill jobs. Of the lower-skill jobs remaining, most of them don't pay enough to afford a decent standard of living. What do we do about that?
The discussion doesn't have to focus on inequality per se. Even rich and powerful people will have problems living in societies where millions of people don't have a decent standard of living.
How would a self-driving car solve the "runaway trolley problem"?
If it keeps going straight it will kill 5 people. If it swerves it will kill two people. Which is the better choice?
The first choice could be seen as an unavoidable accident, but more people die.
There are many variations of this. In a hospital, one person will die if they don't receive a quick kidney transplant, another if there is no quick lung transplant, and so on. So would it be ethical to kill someone in the waiting room and harvest their organs?
Someone has pointed out that delaying AI in self-driving cars and so on will kill far, far, far, far more people than actual trolley-problem situations ever would. In other words, it is already the case that humans are worse drivers than AI, so the sooner the AI takes over, the better; the few deaths due to trolley-problem incidents will be negligible compared to the number of human-caused deaths.
True self-driving cars are very far off. I do think machine learning, lidar, and proximity detection will advance driver assistance and will save lives. But that is not fully autonomous driving, which has been shown to kill people. People are going to hold fully autonomous driving to a very high standard.
Geoffrey wrote: people yell back and forth at each other about which problem is the "real" problem
Can you offer one example of a "real" problem that you think we need to start dealing with now? - without the yelling, of course. :-)
Is there a book you can recommend that's relevant to the topic of "real problems"? Thanks.
There was a time when people thought the future of computers would be dominated by increasingly bigger and bulkier machines. I think Asimov's short story "The Last Question" is a good example of this kind of futurism (planet size computers, etc).
The computer was named Multivac, which I pay homage to in my own domain name. (Multivac is, of course, a pun on Univac, which means UNIVersal Automatic Computer but was taken to mean uni-vac, i.e. one vacuum valve (tube for those across the pond).) These stories were clearly tongue-in-cheek. However, in some cases they appear to be coming true. One involves demoscopy: Multivac is so powerful that it can predict the results of elections based on a survey of very few voters. Just one voter is enough. So the system is changed such that, instead of a general election, one voter is selected at random. OK, we are not there yet, but moving in that direction. (Famously, Univac once correctly predicted that Eisenhower would win, which no one believed, so the prediction was held back to avoid embarrassing those working with Univac.)
As some pundit remarked, you know that you are reading old science fiction when, as future time goes on, computers get bigger and bigger instead of smaller and smaller.
Asimov of course also wrote much about robots, which were of human intelligence with a "positronic brain" about the size of a human brain. (There is nothing realistic here, of course; positrons were new at the time, so he adopted the term to sound cool.) When I was reading Max's Life 3.0, I noticed that many of the moral questions had already been discussed in Asimov's fiction more than half a century ago.
It turns out the future we actually live in is one of networks of microcomputers, something that many really had a hard time to see coming ("There is no reason for any individual to have a computer in his home", one CEO famously said).
That was Ken Olsen, CEO of Digital Equipment Corporation (see the link above to see how this relates to Multivac). However, it is taken out of context. He wasn't referring to computers in the home per se. (While he was still at DEC, DEC was making PCs as well.) Rather, he was referring to a computer which opens the curtains and so on, a "smart home". Now possible today, but I agree with him that it is not something one really needs.
Thomas J. Watson, CEO of IBM, though, did say "I see a world market for about five computers".
Markus wrote: To me, this fear of AI is just ridiculous
I suspect that people find it easier to talk about the potential problems with AI than tackle the long list of real problems we're facing right now and in the near future.
Greg, Phillip,
The trolley problem and related ethical issues will be solved by market forces. It will come down to insurance coverage and whose life is rated to have the highest economic value. I know that people tend to distinctly dislike it, but lives are given monetary value all the time. If you play out any such situation in your head, you can see how this will go. Depending on who is liable, a "smart" software will try to minimize the financial loss of its own party. That may be the producer, the driver, the insurer, or a combination of those. Again, I am perfectly aware that people find this terribly distasteful, but young people and/or people with children who have many "work hours" left to give tend to come out rated higher. That is, if an AI has to decide between running over an elderly homeless man and a pregnant mother-of-three, it'll go for the homeless guy. (And no one will really care what philosophers think about this.)
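A hedged sketch of the kind of calculation this implies; the candidate maneuvers, probabilities, and costs are invented for illustration and do not reflect any real insurer's model:

```python
# Sketch: pick the maneuver with the lowest expected financial loss to the liable party.
# All figures are made up.
candidate_maneuvers = {
    # name:          (probability of harm, assumed liability cost if harm occurs)
    "brake_straight": (0.30, 1_000_000),
    "swerve_left":    (0.10, 2_000_000),
    "swerve_right":   (0.05, 8_000_000),
}

def expected_loss(p_harm: float, cost: float) -> float:
    return p_harm * cost

best = min(candidate_maneuvers, key=lambda m: expected_loss(*candidate_maneuvers[m]))
for name, (p, c) in candidate_maneuvers.items():
    print(f"{name:15s} expected loss: {expected_loss(p, c):>10,.0f}")
print("chosen:", best)
```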
Corporations do it all the time. They do a cost-benefit analysis on whether to recall a product or pay for a few probable deaths. A grisly calculus.
Sabine,
ReplyDelete"if an AI has to decide between running over an elderly homeless man and a pregnant mother-of-three, it'll go for the homeless guy"
Let's suppose that the AI system has access to data that suggests the existential threat to humanity is over population. In this case it is possible that the AI system makes the (logical but most would agree least desirable) conclusion that the optimum outcome is to run over the pregnant mother.
Your point that for, shall we say economic reasons, the AI would select the homeless man as the best 'target' illustrates the current problem with full AI - there is no way to address moral issues as the AI will be biased by the choices of the initial learning data. Hence a 'Merkel' AI will make different decisions than a 'Meuthen' AI. You have touched on this in your initial blog - points 1 and 3 - and combining these leaves us with a real problem "Merkel AI says this, Meuthen says that, which is 'right"?
The same situation could (will?) arise with regard to task centred AI, for example driverless cars. A Ford AI system may well react differently than a VW system leading to the same unpredictable outcomes that we see with human drivers.
@Sabine
ReplyDelete"The trolley problem and related ethical issues will be solved by market forces."
I think there is a better solution being worked out. This question was studied in "The Moral Machine experiment",
Naturevolume 563, pages59–64 (2018). I think legal responsibilities and insurance will in future start follow these "preferences" more than the other way around.
Abstract:
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
https://www.nature.com/articles/s41586-018-0637-6
Unknown,
I was referring to the AIs that already exist and that are already being used for autonomous cars. These are not truly intelligent, which I hope we agree on.
You are right in that any such decision would have to factor in political decisions and regulations that are enforced for this reason, but this does nothing to negate my point that the question of ethics will be settled by market forces - within the legal bounds of course.
As to population increase. Well, just look at the world and you'll figure out that the future is discounted so much as to make that consideration basically irrelevant.
AIs will be able to exchange information with each other, all of which will be used to calculate the path of action that optimizes everyone's utility. I don't know why you think the outcome will be unpredictable.
Rob,
Yes, I saw this. I find it mildly entertaining. Really, what makes you think an insurer would care about this?
"Let's suppose that the AI system has access to data that suggests the existential threat to humanity is over population. In this case it is possible that the AI system makes the (logical but most would agree least desirable) conclusion that the optimum outcome is to run over the pregnant mother."
ReplyDeleteThen it would be a very stupid idea if it thinks that killing a few people in car crashes is a solution to the problem of overpopulation. If it were really smart enough to think about overpopulation, a better strategy might be to aim for the Vatican and keep killing the Pope until one is elected who doesn't pronounce that birth control is a sin. :-|
"the current problem with full AI - there is no way to address moral issues"
Yes, but holding back on AI for self-driving cars causes far, far more deaths than the occasional AI-caused death (which is a choice between killing fewer and killing more, or whatever). But what about the moral issue of the human driver ("the nut behind the wheel", as old discussions of automobile safety put it)? They can decide to hit whomever they want if there is a choice (or even if there isn't a choice, as recent terrorist acts prove).
There is a novel, Thrice Upon a Time, by James P. Hogan which is concerned with sending information back into the past. One character says that people must be really careful, because it is difficult to work out the consequences of a given decision on what information to send. Another one asks why this is any different from all the other decisions people make in daily life. :-|
Close the pod bay door, HAL.
I'm afraid I can't do that, Dave.
It's all about what you forget to tell the computer!
HAL's problem was that he knew too much. Only he knew the true purpose of the mission to Jupiter, as revealed in the post-lobotomy pre-recorded briefing. The mission's importance led HAL to make "some very poor decisions," as he himself characterized them. What he clearly was not told was that he shouldn't murder humans, several of whom had already been killed by the time Dave commands "Open the pod bay doors, Hal," to which HAL responds, "I'm sorry Dave, I can't do that." (These are accurate quotes, taken from the excellent 4K remastering of 2001: A Space Odyssey, which is playing in the background as I type this.)
(Specific) AI is already making decisions without human interference (like buying and selling on the stock market), or advising humans (drones in the army, analyzing medical data), where it is conceivable that the decisions will be shifted to specific AI in the future. Nobody understands how these trained neural networks arrive at certain outputs. Over time we start to trust these outputs if they are consistently reliable and good. So AI is already here and we are increasingly governed and served by algorithms.
But, what the heck, we will to some extent trust or distrust human rulers and experts because of double agendas, so it is likely that ethical, trust, and control issues will remain present with AI forever as well.
I agree with Sabine that ultimately market pressures will determine the morality of cars. Here is a route by which that can happen: think of the AIs as servants, not independent persons.
We can get around the trolley problem by putting the owner of the vehicle on the moral spot again, via "vehicle training" in a simulator. You have to set up and configure your car just like you have to set up your computer or cell phone.
In the simulator, you are given, say, fifty scenarios (all of which the car must be capable of distinguishing from its sensory data). For each, the training program shows you a list of the things you can do, to minimize damage to yourself, to others, etc. That will include doing nothing.
Just like setting up any piece of equipment, you have to choose, you cannot defer until later.
You are given as much time as you need to make your decision. These decisions will be generalized by the AI into the decisions to be made in real-time. The morality of the car can match to a high degree the morality of the owner. Then if the owner has a fleet of cars, the training can be copied to all of them. And of course, you can reconfigure whenever you like.
The AI that does this can be trained to generalize from these inputs like any other AI, a long process, to be sure, but once it is trained the actual decision is just a lot of dot products that can be computed in microseconds, in real time, far faster than any human reaction time. It does not require the car owner to be in the car.
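As an editorial aside, here is a minimal sketch of the kind of pipeline this comment imagines. Everything in it is hypothetical: the scenario features, the owner's choices, and the use of a plain logistic regression are stand-ins, not any real vehicle software. The point it illustrates is that once such a model is fit, run-time "decisions" reduce to a dot product and a threshold.

```python
# Minimal sketch (hypothetical data): learn an owner's simulator choices,
# then make fast predictions via a dot product at run time.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each simulator scenario reduced to made-up features, e.g.
# [pedestrians_at_risk, occupants_at_risk, speed/100, swerve_possible]
scenarios = np.array([
    [2, 1, 0.5, 1],
    [0, 4, 0.9, 0],
    [5, 1, 0.3, 1],
    [1, 1, 0.7, 1],
])
owner_choices = np.array([1, 0, 1, 0])  # 0 = brake straight, 1 = swerve

model = LogisticRegression().fit(scenarios, owner_choices)

# Run-time inference is just a dot product plus a threshold:
new_scenario = np.array([3, 1, 0.6, 1])
score = float(new_scenario @ model.coef_[0] + model.intercept_[0])
print("swerve" if score > 0 else "brake straight")
```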
The car manufacturers have an incentive to create something like this, to defer liability. They can show their initial data that demonstrates the car's choices match the owner's choices with a suitably low error. In court, if necessary, prosecution or defense can show how the actual accident situation is most similar to one of the training scenarios, or a combination of them, and what the owner decided, and let a human jury decide on how closely the owner's decision and the car's decision correlate. I've been on the jury for a car accident; we did not get much detail at all to make a decision, basically sketches of the scene verified by two eyewitnesses. So I don't think jury information has to be perfect for them to decide.
The insurance companies have an incentive to use any standardized method of assigning liability (because long court battles are very expensive), and they can also see how you set up and trained your car in order to set your insurance rates.
And finally, it means that car owners, to avoid liability, would have an incentive to set up their car with a minimum of selfish interest in an emergency, because that will result in the lowest insurance premiums. Heck, during training they could even see in real-time what their insurance rate would likely be given the different choices they have.
Hello Sabine,
ReplyDelete"AIs will be able to exchange information with each other, all of which will be used to calculate the path of action that optimizes everyone's utility. ".
This would, I agree, be the most desirable situation. In the case of driverless vehicles, or other task-related AI, it may even be achievable. However, the inputs experienced by the individual systems, the 'training', and the algorithms used (even assuming a common standard) may produce different outcomes.
Just considering driverless vehicles: one would hope that in critical circumstances the AIs react jointly. I can imagine, however, a situation where the interaction between multiple AI vehicles is 'unpredictable', say in a multiple-car pile-up where each vehicle's AI arrives at an equally valid output. The outcome would then be unpredictable: which AI 'takes control'? The same as with human drivers. Other factors may also come into play. After the AI takes action, the response of the vehicle, its technology (steering, brakes, tyres, acceleration, service history), and individual environmental conditions become the limiting factor. The vast range of factors involved (many of which the AI will not be aware of or have learnt to take into account), the AI decision speed, and individual vehicle characteristics make the interactions uncertain, or at least predictable only within certain limits.
An interesting limitation has been found with learning machines. Yehudayoff and his group have found a limitation on algorithm-based learning systems:
https://www.nature.com/articles/d41586-019-00083-3
There are two parallel tracks, one by Kurt Gödel and the other by Alan Turing. Turing formulated a mathematical construction of a computing machine, called the Turing machine (TM), back in 1935, and the next year demonstrated something remarkable: there cannot exist a Turing machine that determines, for every TM and input, whether it will halt with a result or not. The reason, in a sense, is that such a decider would have to emulate not only all other TMs, but itself emulating TMs, and itself emulating that it is emulating TMs, and so on; the self-reference leads to a contradiction. Tibor Rado found a related result with the Busy Beaver function: BB(n), roughly the longest run time of any halting TM with n states, grows faster than any computable function, so no TM can compute it in general. The growth of BB(n) as n goes to infinity is sometimes discussed alongside Cantor's transfinite numbers, though that connection is more heuristic than rigorous.
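For readers who want the bones of Turing's argument, here is the standard diagonalization sketch in Python form. The `halts` oracle is the hypothetical function the proof shows cannot exist; nothing here is executable logic beyond the definitions.

```python
# Sketch of the halting-problem contradiction, assuming (for contradiction)
# that a perfect halting oracle halts(program, argument) could exist.
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) halts."""
    raise NotImplementedError("Turing: no such total, correct function exists")

def diagonal(program):
    # If `program` would halt when run on itself, loop forever; else halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding diagonal to itself breaks the oracle either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
# if it is False, diagonal(diagonal) halts. Hence no such oracle exists.
```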
AI has some disturbing elements, and I think we are already, in a way, being programmed by it. People's penchant for extreme right-wing videos is fueled by YouTube algorithms that offer more of what the user has watched. I also think humans are losing track of what is real and what is virtual or artificially generated. I am more reserved on the idea of AI systems actually displacing us as such.
I think a bigger and more disturbing prospect is the neural-cyber interlinks that may come. People are already almost "phone zombies" these days, and I suspect by mid 21st century we may have direct brain links with cybernetic systems and the internet.
>>``3. How can you tell that an AI is any good at giving answers?''
1. Think of an AI as (i) an automated mapping between (ii) an input data-set and (iii) a set of one or more outputs or predictions.
Theoretically, we can think of characterizing the kinds of structures that exist in all three aspects: the input data, the algorithmic transformations of the data, and the outputs or predictions. Implementations of AI can be differentiated (or classified) according to the characteristic combinations of these three features.
If a new problem involves a combination that is similar to the one that is studied already, then you have a ``base-case'' scenario which could be used for anticipating what kind of a performance may be expected in this new area.
If the combination is entirely new, then an appropriate set of test-cases will have to be built systematically.
As an aside, at least as far as the available course-work and text-books on ML and AI are concerned, this is a highly neglected aspect. But it surely would have to be taken care of during actual engineering practice.
Of course, following such an analysis scheme (of characterizing the structures in the three aspects) is easier said than done. Often, it wouldn't be very clear as to what kind of a structure may exist in the input data!
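To make the test-suite idea above a little more concrete, here is a small illustrative sketch with entirely synthetic data; the model, the features, and the "regimes" are all made up. The same model is scored separately on test suites drawn from different input regimes, which yields a rough map of where it can and cannot be trusted.

```python
# Minimal sketch (synthetic data): score one model on per-regime test suites.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_regime(shift):
    # Inputs and labels both depend on the regime's shift parameter.
    X = rng.normal(loc=shift, size=(200, 3))
    y = (X.sum(axis=1) > 3 * shift).astype(int)
    return X, y

X_train, y_train = make_regime(shift=0.0)
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Accuracy per regime is the "map" of where the model is reliable.
for name, shift in [("base case", 0.0), ("mild drift", 0.5), ("new regime", 2.0)]:
    X_test, y_test = make_regime(shift)
    print(name, round(model.score(X_test, y_test), 2))
```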
2. Another comment: When Q. 3 is read in the context of the rest of the post, it seems as if there is a streak of helplessness to it. If so, such a streak is entirely uncalled for.
It never is going to be the case (except possibly in poorly written sci-fi novels / movies / media hype) that an implementation of an AI is some identity-less beast that ``somehow'' spews out answers, and that we are so clueless and dumb as not to have the means to figure out whether the answers it produces do make any sense or not. If you cannot create a ``map'' of the regimes in which the performance of a given AI is clearly well understood (may be with the help of comprehensive suites of test-cases), then you have no business putting it to use anyway.
Let me give an analogy: Just because you have gears, shafts, cams and stepper-motors readily available in all shapes and sizes in the market, people therefore don't go out and connect them together in a random way, and *then* begin worrying whether the machine would stamp out its human operator or not. That's just not the way engineers put together their machines (or function, speaking more generally). It's high time that the media and the sci-fi authors stopped believing their own hype and stupid theories, and began taking the actual practice of engineering a bit more seriously.
3. BTW, IMHO, Rob van Son (9:07 AM, January 09, 2019) has given a very neat and relevant answer to this question.
Best,
--Ajit
@Sabine
ReplyDelete"Really what makes you think an insurance would care about this?"
I think that insurance cares about the law, and the law follows the morals of the people. For instance, people in the US seem to be OK with health coverage based on income. In the UK (and other countries) people are not OK with this. So medical insurance in the UK is universal, while it is far from that in the US.
In many countries, the law will not allow one person to be sacrificed for another. I see this in my own country, where doctors refuse to select organ transplant recipients on any account other than medical compatibility. Same with expensive treatments. In such countries, algorithms will be forced to rely only on factors that are important for minimizing harm. These are also the countries where damage payouts are lower.
Anyway, we are quite far from AI systems being required to estimate the social "worth" of imminent accident victims in a split second.
People have little understanding of what AI really is. When I was first exposed to it (pattern recognition, later machine learning) I was so excited to be able to finally delve into the things that Dr. Chandra (from Arthur C. Clarke's 2001) had to deal with to make HAL.
Having written a few so-called AI programs, I can assure you there is nothing "intelligent" about them. Just a computer program. Nothing more, nothing less.
I think we are getting into philosophical arguments about a technology that doesn't exist. Like trying to regulate "time-travel".
I think the question is not how dangerous AI will be to human beings, but rather how dangerous AI will be in the hands of humans, as we've recently seen with the use of social media to interfere with a presidential election in America. What will happen when AI gets sophisticated enough for an authoritarian regime to use it on us? In the near future, a few decades and probably less, someone like Putin is very likely to take control. What do you think about that idea?
The idea that a driverless car can even identify a homeless person versus, say, a young mother is science fiction. The trolley problem will not even be properly available to AI in my lifetime. That's why home robots have stalled at poorly-performing vacuum cleaners. The real world is not a chess board with rigid rules.
@Unknown says: "but rather how dangerous AI will be in the hands of humans."
I think that is precisely right. If someone (like Google, like Putin, etc.) devotes a billion dollars to private research into AI, even non-conscious problem-solving AI, they might solve problems in many forms of investing, manipulation of markets, etc., and with the money to implement such things they could take control of economies, micro-target certain markets, and basically legally (perhaps with a modicum of illegality if they don't mind that) win most of the money in the world, and use it as leverage to bludgeon whole governments and populations.
You wouldn't be able to stop them, their nets would be proprietary and sequestered away, guarded by private armies.
So what then?
An immediate concern is AI applications like those in the 2017 video Slaughterbots:
https://www.youtube.com/watch?v=9CO6M2HsoIA
The concerns are already here. I read recently that in China face recognition technology is not only widespread, but being used in classrooms to measure whether students are engaged in the lessons.
Castaldo wrote: they could even see in real-time what their insurance rate would likely be given the different choices they have
It looks like everyone's having fun brainstorming.
If insurance companies can predict the potential liability of a customized car, why would any society prefer such an approach? By definition, maximum liability means maximum harm to society. Why should car owners have the right to inflict maximum harm merely because they are willing to pay higher insurance premiums? Why should car manufacturers have the right to make such cars? This makes as little sense as allowing people to drive drunk as long as they pay higher insurance premiums. "Market pressures" have their limits, which is why drunk driving is a criminal offense, not just a civil offense. And even then, plenty of people still drive drunk. Go figure.
There's another problem with customization. The safety of driverless cars depends very much on their ability to make reliable predictions. Standardization and consistency greatly enhance the reliability of predictions and overall safety. If cars are customized with "50 moral scenarios," standardization and consistency are thrown out the window. In dangerous situations, driverless cars won't "know" what other driverless cars are going to do. Customized cars make it worse for everyone.
Customized cars don't even accomplish the goal of reducing the liability of car manufacturers. Car manufacturers are liable for defects or some kind of negligence; they aren't liable for features that are operating properly. Moreover, it's likely that car manufacturers will offer different sets of moral customizations; options on some cars will be arguably more dangerous than options on other cars. This opens some obvious legal doors for liability.
Besides all that, in a truly driverless society, far fewer individuals would own cars if calling for a vehicle is sufficiently convenient. For many people, it would no longer make sense to spend a lot of money to buy a car, pay for the insurance and maintenance, be directly liable for any harm the car caused, find parking spots and pay for them, wash and clean it, and store it somewhere when they're home.
In fact, I know people who have already decided to give up their cars because services like Uber and Lyft are easy, convenient, and affordable. I know some older folks who don't know how to use a computer, but they know how to use a phone to call for a Lyft car. If society can figure out how to insure Uber and Lyft cars, it can figure out how to insure much safer driverless cars.
In Manhattan only one in five people own a car, in spite of the fact that New Yorkers tolerate their transportation system like they tolerate blizzards: stoically. That means Americans aren't as obsessed with owning cars as some would have us believe.
For lots of obvious reasons, driverless cars would reduce insurance premiums to a small fraction of what they are now. For obvious reasons, the behavior of driverless cars needs to be standardized with the aim of minimizing harm.
As for evaluating relative harm vis-à-vis an elderly homeless man and a pregnant mother of three (Sabine's scenario), that's a great plot for a sci-fi story. The thing is, if our technology ever gets that good, it's unlikely that there will be homeless people. Sabine's sci-fi story would be an oddly dystopian world in which a technologically-advanced society puts great value on the lives of mothers and children but doesn't care enough to solve the very solvable problem of homelessness. Even now homelessness is a solvable problem. For humane and safety reasons, we pick up stray cats and dogs but we don't pick up stray people. Go figure.
In the spirit of brainstorming, if you disagree with my points, tell me why I'm wrong.
1. Intelligence is not a well-defined concept.
2. Human intelligence is poorly understood.
3. Human intelligence is flawed and often slow to self-correct; errors can persist for centuries.
4. Artificial intelligence designed by human intelligence may prove useful under certain constrained circumstances, but will be as generally satisfying as artificial food.
Pascal wrote: When you cannot move (not even your eyes) you lose your capability of vision within a short time and you drift into weird hallucinations.
That is very interesting.
Vincent wrote: Although I always get angry when the same is done with people in SciFi movies, when they transfer a personality into a different brain. That is an absurd idea based on a false analogy.
C'mon, it's a very amusing plot device that can actually lead to useful insights about humans.
For example, consider the nonfiction book "Black Like Me," which I read in the 1960's (and I also watched the film version). A white man had his skin darkened in order to pass as a black man, so that he could have first-hand experience with racism. A good author could make the same points in a sci-fi brain transfer novel. In fact, there are infinite possibilities for authors to explore important social issues using this "absurd idea." It's just a convenient way to put someone in someone else's shoes.
Bradley wrote: That's why home robots have stalled at poorly-performing vacuum cleaners.
I'll second that emotion.
My mother had a Roomba that would clean only half of one room. I'm the engineer in the family so she calls me to figure out what's wrong with it. As Dr. McCoy might say on Star Trek, "I'm a doctor, not a vacuum repairman." In the end I decided that the Roomba was either dumb or lazy. After talking to tech support, they allowed my mom to exchange it for another unit. The new unit is better, but it's still somewhat dumb and lazy.
They are wonderful toys for cats. Dogs tend to get spooked. :-)
Sabine wrote: The trolley problem and related ethical issues will be solved by market forces.
There are too many exceptions to that generalization to make it useful. Examples:
In a purely market-driven economy, drug companies could - and would - charge much higher prices for their products. For every two people who could afford a life-saving drug, there would be five who die because they can't afford it. There's your trolley.
Many rich people seem to enjoy saving a few dollars for a piece of clothing even when workers are being exploited. When Barack Obama was president, market force gurus predicted that mandatory health insurance and raising the minimum wage would increase the price of fast-food hamburgers. When I asked how much, they said somewhere between 5 and 20 cents (OMG!). It turned out that the price of fast-food burgers remained fairly constant.
In a purely market-driven global economy, rich nations benefit from the slave labor of poor nations, as well as child labor. Rich nations could ship all of their toxic waste to poor nations. I read a report about a recycling town in China, where the adults and children are harmed by the electronic waste they process. That's market forces and there's your trolley.
Maybe the trolley teaches us that inexpensive e-waste for rich nations is not worth more than the health of poor people. But then we start to equivocate. We point out that people who work at unsafe jobs don't have better alternatives. If a child has to choose between processing toxic e-waste and starving, he's better off with e-waste. A poorly designed, unsafe e-waste processing plant in a destitute Chinese community is better than *no* e-waste processing plant, at least in the short term. We tell ourselves that market forces will eventually allow them to move beyond unsafe e-waste processing plants or onerous sweat shops.
Could the US process its own e-waste? Of course, but it would be a lot more expensive. (Market forces!) Could the US set up safe e-waste processing plants in China? Of course, but that would be more expensive than unsafe plants. (Market forces! Comparative Advantage!)
During California's recent election cycle, we had a huge debate over chickens. Californians voted on a proposition to mandate the humane treatment of egg-laying chickens. Predictably, the market force gurus argued that the rich can afford humane treatment of chickens, but the poor can't afford that luxury and the "efficient" chicken ranchers can't afford the costs. There's your market force trolley, and not only is the trolley running over poor people and ranchers, it's running over chickens.
"Market forces" sounds so nice, so benign, so real world. If someone points to problems with market forces, more often than not he's labeled a socialist or bleeding-heart liberal who doesn't know how the real world works.
History shows us that market forces are quite capable of putting profits above health and lives. Indeed, this is how companies today justify clearly unethical - but legal - behavior, by pointing out their fiduciary responsibilities to stakeholders and shareholders, as well as a general obligation to remain competitive in a market-driven world.
Many people seem to assume that market forces are either usually good or usually bad. Economics shows that market forces are just one slice of the pie. Each slice has an essential function and all the slices have dependencies. Market forces operate within a structure. People who benefit from a particular structure tend to argue that any other structure would make it worse for everyone. What's good for them is good for everyone.
Particle physicists use market force arguments to justify new colliders. "Market forces" are so often misunderstood and misused.
@Lawrence
ReplyDelete"a bigger and more disturbing prospect is the neural-cyber interlinks that may come"
That's like trying to put a motorcycle engine in a horse. Eventually we'll give up and say, why don't we just get rid of the horse and build a Ducati?
Only in Hollywood can humans beat the Terminator. It's like a race between Usain Bolt and a Ducati. My Ducati can go over 90 kph in 1st gear in 2 seconds.
"Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, is a method of asking your computer to render a font and output an image. The exact way your computer performs this task depends both on your hardware and your software, hence the output can be used to identify a device."
There are literally thousands of computer vulnerabilities at all levels of computing: in hardware, in software, in networks, and in operating systems. Building high-reliability, secure systems is a very hot area of computing at the moment. I'm confident that progress will be made on this front. We will be able to build driver assistance systems that can't be hacked.
It should be said that many of the computer security vulnerabilities that exist today are there not because we didn't know about them, but because up until now, no one cared about security enough to want to invest in it, at least not the companies building commercial microprocessors. This is changing.
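For the curious, here is a rough Python analogue of the fingerprinting idea quoted above: hash a few machine-dependent traits into a single identifier. This is only an illustration of the general principle; real canvas fingerprinting runs in the browser and hashes rendered pixels, which this sketch does not attempt.

```python
# Rough analogue of device fingerprinting (not actual canvas fingerprinting):
# combine traits that depend on this machine's hard- and software, then hash.
import hashlib
import platform
import sys

traits = [
    platform.platform(),    # OS name and version
    platform.machine(),     # CPU architecture
    platform.processor(),   # processor description (may be empty on some OSes)
    sys.version,            # interpreter build details
]
fingerprint = hashlib.sha256("|".join(traits).encode()).hexdigest()
print(fingerprint[:16])  # two machines rarely share all of these traits
```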
1. Who gets to ask questions and what questions?
ReplyDelete"This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get a right to a question per month? Do difficult questions have to be approved by the parliament? Who is in charge?"
I suspect that the US, Canada, Australia, and most European countries, as well as China and India, are developing their own AIs and have their own government agencies to set policy. Currently, as far as I know, difficult questions are not approved by "parliaments" in Canada, the UK, or the US. Those decisions are made by government agencies whose officials are appointed.
2. How do you know that you are dealing with an AI?
ReplyDelete"The moment you start relying on AIs, there’s a risk that humans will use it to push an agenda by passing off their own opinions as that of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals."
Yes, I think this is a good question. Most of the systems we have today are self-contained, in that we generally know when we are using an "AI". Think Siri, or driver assistance, or Google search. We generally know it is a computer we are interacting with, but we don't necessarily know if the computer is using machine learning to give us the answers it is giving.
4. How do you prevent that limited access to AI increases inequality, both within nations and between nations?
ReplyDelete"Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor behind even farther. If this is not something that we want – and I certainly don’t – we should think about how to deal with it."
I agree that use of advanced machine learning, if placed only in the hands of wealthy individuals or only wealthy countries, would increase inequality. And I agree that drivers for various technologies are coming primarily from wealthy people. It's not only AI. Jessica Powell recently wrote an article about Silicon Valley luminaries and their interest in longevity extension.
But of course, Elon Musk and Bill Gates are not going to bring up these concerns.
@Steven Mason
Lol.. Okay, I wasn't entirely serious about that. I don't want to get too far off-topic, just wanted to point out that brains and computers differ fundamentally in their design. Computers have a software layer that is almost (if not perfectly) independent from the underlying hardware. In a brain there is no such distinction; the 'code' is hard-wired and it is fundamentally impossible to change the 'code' without changing the 'hardware'. That's just not how nature works.
However, I see no reason why any of our cognitive functions could not be mirrored by software that IS hardware independent. Recently there have been some breakthroughs, although there is still a long way to go. For a good idea of where we stand, I recommend watching/reading Demis Hassabis, the founder of DeepMind and a leading researcher in the field.
Let us remember that the inability to solve the world's big problems rarely has to do with the difficulty of formulating the best response from the complex information available. It is the inability to formulate even a simple rational consensus against partisan, irrational prejudices.
People will simply not accept any AI opinion they do not like.
The real killer AI app will come with maxima/minima programs running on a quantum computer using a humongous highly correlated statistical database.
The big breakthrough will be when a quantum AI builds and maintains its own highly correlated statistical database by analyzing random data that is fed into it.
For example, how the weather, time of day, internet traffic activity, and solar activity events affect stock market prices.
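A toy sketch of what "correlating" such signals with prices could look like, with entirely made-up numbers; a real system would need far more data and far more care about spurious correlations (no quantum computer required for this part).

```python
# Toy sketch (fabricated numbers): correlate external signals with a price series.
import pandas as pd

data = pd.DataFrame({
    "solar_activity": [12, 15, 11, 20, 18, 25, 22, 30],
    "net_traffic":    [80, 82, 79, 90, 88, 95, 93, 99],
    "temperature_c":  [ 5,  7,  6, 10,  9, 14, 12, 16],
    "index_price":    [100, 101, 99, 105, 104, 110, 108, 113],
})

# Pairwise Pearson correlations of each signal against the price column.
print(data.corr()["index_price"].drop("index_price").round(2))
```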
@Enrico: You wrote "Only in Hollywood can humans beat the Terminator. It's like a race between Usain Bolt and a Ducati. My Ducati can go over 90 kph in 1st gear in 2 seconds." Sure, but I can ride a horse down into the Grand Canyon with a mule pack, and I dare you to do that on a motorcycle. An aircraft can fly much faster than a bird, but watch birds for a while and marvel at the bio-avionics that let them use each feather as an aileron and end their flight by landing on a tree branch with near-perfect precision. It might then be said that artificial intelligence is to the mind what aeronautical engineering is to birds. As yet I don't see AI as surpassing the brain.
I do see though the prospect for cyber-neural communications. I suspect the major nodes on the internet will at some point be brains.
One thing I was glad to see ("Your computer isn’t like my computer") is that substrates should matter more in (computer science) semantics.
On robots in the future, a big issue will be the economic one of redistributing the wealth of robot makers and owners to the people the robots replace.
@Steven Mason [SM] says
SM: why would any society prefer such an approach? By definition, maximum liability means maximum harm to society.
It means maximum probable harm. But we already insure people and organizations with different relative probability of creating harm, and just charge them more. Sports stars get insured despite their greater likelihood of injury. Restaurants are more likely to have fires than clothing stores. Banks are more likely to be robbed than tire stores.
SM: The safety of driverless cars depends very much on their ability to make reliable predictions.
This isn't about predictions they can make, this is about moral decisions they must make, presuming technology advances enough to recognize such moral decisions, and then the liability for making such moral decisions. My suggestion is to eliminate the conundrum of machines with morality, and return to the known capability of machines modeling a decision process in a human, as was done with AlphaGo, which was initially trained on a large database of master human games.
SM: ... it's likely that car manufacturers will offer different sets of moral customizations;
So prohibit that, or with industry help produce a standard that must be followed. I've got the IBC [International Building Code] on my shelf; compliance is mandated nearly everywhere, and failing to follow this standard results in pretty much automatic liability. Do the same for moral customizations.
SM: far fewer individuals would own cars if calling for a vehicle is sufficiently convenient.
This is a red herring. The car services are far more profitable if the cars are robotic. So this doesn't solve the issue of liability if the automatic driver kills somebody.
SM: For lots of obvious reasons, driverless cars would reduce insurance premiums to a small fraction of what they are now.
Sure, they will be less likely to get into accidents; but this doesn't address assigning liability when they DO get into accidents, and encounter situations where a moral choice could have been made.
SM: For obvious reasons, the behavior of driverless cars needs to be standardized with the aim of minimizing harm.
"Harm" is too fuzzy, and that is the point. You assume the very thing that is not in evidence; namely that the trolley problem is solved. Are all lives equivalent? Is a child worth more than an adult? Are my passengers worth more than pedestrians? Should I progress on a straight line into a large crowd in the street, or intentionally swerve away from the crowd onto the sidewalk and possibly kill ten pedestrians?
SM: if our technology ever gets that good, it's unlikely that there will be homeless people.
We've already got the solution to homelessness: it is building houses and apartments and care facilities for the mentally disabled and disturbed. It is socialism and taxes used to care for our relatively small percentage of people incapable of earning a living or taking care of themselves. We already are "a technologically-advanced society that puts great value on the lives of mothers and children but doesn't care enough to solve the very solvable problem of homelessness." Why in the world would it surprise you if some great technological advance occurs in AI without changing humanity's selfish propensity to ignore the suffering of others when addressing it would cost them money?
Is the terminator on the drawing board?
https://www.youtube.com/watch?time_continue=5&v=9CO6M2HsoIA
Slaughterbots
Armed drones have been on the battlefield for decades, but until now they have been simple devices controlled from a distance. The United States Secretary of Defense, Jim Mattis, recently declared that calling current drones unmanned is a mistake, since they are at all times under the control of a human pilot. The potential leap forward is profound: today the talk is about making devices the size of a domestic drone that are capable of deciding for themselves, without human supervision, who is to be attacked, and then doing so. Paul Scharre, an ex-special-operations officer, former Pentagon adviser, and author of the book Army of None: Autonomous Weapons and the Future of War (W. W. Norton & Company, 2018), has stated that while "no country has committed to build fully autonomous weapons," at the same time "few have ruled them out either."
Pascal wrote: You can get a small impression of it by just sitting still and focusing a point with your eyes as long as you can
Now that you say it, I realize I'm already familiar with the experience. When I was a kid, I used to stare at my face in the mirror until everything started to blur out. It was a bit disorienting. It was a kid's way of getting a cheap high. When my face started to get distorted, I imagined I was seeing my face as an old man. I was a weird kid.
After all these years, you've explained to me what was going on. :-)
Vincent wrote: Computers have a software layer that is almost - if not perfectly - independent from the underlying hardware.
I agree. I also agree that the "software" for brains is incorporated into the "hardware."
Vincent wrote: the 'code' is hard wired and it is fundamentally impossible to change the 'code' without changing the 'hardware'
Yes, but that's not to say that people can't "change their minds," have life-changing insights, paradigm shifts, epiphanies, new programming, brainwashing, etc. I think we're on the same page.
So you're not as angry about sci-fi brain transfers as you said. That saves me the trouble of giving you electroshock therapy to change your anger programming. :-)
Castaldo wrote: It means maximum probable harm.
That doesn't change anything. Maximum probable harm leads to maximum actual harm. Predictions are meaningless if they don't apply to the real world.
Castaldo wrote: Restaurants are more likely to have fires than clothing stores.
This is not a moral dilemma, it's an unavoidable fact of life. As they say, if you want to make omelettes you've got to crack some eggs. You also have to turn on the stove.
Restaurants, clothing stores, athletes, banks and tire stores strike me as irrelevant. In those examples, where are the moral dilemma options that can be individually configured? Where are the trolleys threatening innocent bystanders?
Castaldo wrote: This isn't about predictions they can make, this is about moral decisions they must make
Let's return to the classic trolley problem. If the car goes one way it runs over two people, if it goes the other way it runs over five people. Before the machine can make a "moral decision," it first has to predict what would happen if it went one way or the other. It also has to predict that it can't stop in time to avoid running over people in either direction. Even with humans, moral dilemma decisions necessarily involve predictions.
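To illustrate that point with a hedged toy example (all maneuvers, probabilities, and harm scores are invented): the "moral" choice below is simply a minimization over predicted outcomes, so bad predictions produce bad choices no matter how the morality is configured.

```python
# Toy sketch: prediction precedes the moral choice. All numbers are made up.
predicted_outcomes = {
    # maneuver: (probability the car executes it cleanly, people harmed if not)
    "brake_straight": (0.90, 2),
    "swerve_left":    (0.60, 5),
    "swerve_right":   (0.75, 1),
}

def expected_harm(p_success, harmed_if_fail):
    # Expected harm under the (assumed) predictions.
    return (1.0 - p_success) * harmed_if_fail

best = min(predicted_outcomes,
           key=lambda m: expected_harm(*predicted_outcomes[m]))
print(best)  # the choice is only as good as the predictions feeding it
```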
Maybe your car would have an option to collide with a small car rather than a large car. That decreases *your* chance of getting hurt but increases the other person's chance of getting hurt. Or maybe your car would have the option to run over two people on the side of the road rather than fly over the edge of a cliff. You've got your pregnant wife and two kids in the car, so there's your trolley problem. Do you really have the right to make that decision for the couple on the side of the road? Why should they have to pay the price for your mishap? You're exercising an option at the expense of other people's options.
Castaldo wrote: So prohibit that
You'd actually force car manufacturers to include options that are known to cause more harm? You want car owners to have moral options, but you don't want car manufacturers to have moral options?
Castaldo wrote: This is a red herring.
No it's not. If a majority of people don't own cars, the moral options are being configured by someone else and these options might not be transparent or particularly agreeable to the passengers. If I'm an elderly homeless man, the car I'm in might not put much value on my life. :-)
Castaldo wrote: "Harm" is too fuzzy.
No it's not. Don't pretend you need me to give you a long list of the harm cars can do. If you want fuzzy, let's look at those 50 or so moral scenario options you seem to think are self-evident. Your side of this discussion is way more fuzzy than mine.
I haven't forgotten that the only tangible benefit you offered for moral options was reduced liability for car manufacturers by putting "the owner of the vehicle on the moral spot again." (as if drivers aren't already on the spot for the decisions they make when driving) This is a dubious benefit even if you can make a case that it's plausibly true. It makes more sense to standardize cars that reflect a moral consensus. This increases predictability and overall safety.
'Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.'
I am not at all clear where you got these ideas from.
Any nonbiological AI will be copiable for now. That includes adaptable hardware. They are clearly separable. If not, why?
Copying might be complex, and copies might differ because these are complex state machines. Minor differences, though.
They may die for an unknown reason, like a person does. However, that too will almost certainly be insignificant.
One key problem will be decisions and bias based on using human data.
Another, though related, is resolving intrinsic emotional content attached to thought.
The biggest is that moral calculus will naturally lead to a decision to exterminate, subjugate or alter humans.
Early NNs are evolutionary in nature, with the same issues. The black box problem is not a big deal though. Intermediate output would make it less opaque, but the "reasons" for a choice are not human-friendly; they are interpretations of advanced mathematical processes.
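For what it's worth, "intermediate output" is easy to expose in principle. The sketch below uses random weights and a random input, purely for illustration: it prints a tiny network's hidden activations next to its final output. Whether those numbers mean anything to a human is exactly the commenter's point.

```python
# Tiny illustration: a network's intermediate (hidden) output can be inspected,
# even if it is not human-friendly. Weights and input are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)   # input  -> hidden
W2, b2 = rng.normal(size=(3, 2)), rng.normal(size=2)   # hidden -> output

x = rng.normal(size=4)
hidden = np.tanh(x @ W1 + b1)      # the "intermediate output"
output = hidden @ W2 + b2

print("hidden activations:", np.round(hidden, 3))
print("final output:      ", np.round(output, 3))
```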
Rebuttals welcome.
DHorse,
Just look where it'll go.
SM: Maximum probable harm leads to maximum actual harm.
No. A probability of greater harm should presumably lead to greater average harm, or the predictions are incorrect. E.g., you and I (males) are far more likely to commit murder than Sabine. It doesn't mean we will. I truly doubt I have to educate you on basic statistics.
SM: Before the machine can make a "moral decision," it first had to predict what would happen ...
In my original post I said "this is about moral decisions they must make; presuming technology advances enough to recognize such moral decisions,".
SM: Do you really have the right to make that decision for the couple on the side of the road?
I'll flip the question: How is it your right to decide these life-and-death dilemmas, by mandating the morality of programmable machines, instead of leaving it to the individual driver? When did you become her moral dictator?
Decisions are currently up to individuals. If society comes to a consensus about such morality, then I'm on board, but then we are back to a mandatory standard (just not configurable). But to leave existing liability laws intact (which are largely a consensus of society and expert legal opinion) we need to put the onus back on the individual.
Do I know how to make the decision? In simulations I think perhaps my emergency responses might indicate a good model of what I am inclined to do; and upon rational analysis of that result I might override my instincts and do something more rational; like set the bias toward self-sacrifice or do some sort of calculus-with-lives to preserve young over old, etc. All again presuming the AI reaches a point where it can identify such dilemmas in time to execute a choice, which I think is possible in principle.
SM: You want car owners to have moral options, but you don't want car manufacturers to have moral options?
That preserves the law as it stands on most products. I have to carry liability insurance, because it is my decisions about how to drive the car that creates my liability. Or how to use a knife, or a stove, or a lawnmower.
SM: No it's not [a red herring]. If a majority of people don't own cars, the moral options are being configured by someone else...
As I said, the liability should be on the owner of the AI, because they are the ones with the liability. That is the case now, with airlines, trains, taxis, and ride-sharing. Becoming a passenger you surrender the right to make choices and gain the right to sue. As it stands now, you don't get to pick your human driver based on his moral choices in case of an emergency.
SM: Don't pretend you need me to give you a long list of the harm cars can do. If you want fuzzy, let's look at those 50 or so moral scenario options you seem to think are self-evident.
That is precisely my point, and my definition of fuzzy. I don't think they are self-evident, I do think different people can choose different answers in all the trolley problems. So we should preserve the human endpoint and take morality away from the machines.
SM: I haven't forgotten that the only tangible benefit you offered for moral options was reduced liability for car manufacturers by putting "the owner of the vehicle on the moral spot again."
The tangible benefit is preserving existing liability law.
You are correct, drivers are already on the spot for the decisions they make when driving. Thus I am not reducing corporate liability. I would increase corporate liability if they make the moral choices programmed into robotic servants.
SM: It makes more sense to standardize cars that reflect a moral consensus.
I proposed a mandatory standard! Even with a mandatory standard, liability should still land on the owner of the car.
Castaldo wrote: No. A probability of greater harm should presumably lead to greater average harm
You say no and then you say yes with magic qualifiers like "should presumably" and "average harm." Those qualifiers are givens.
You've put me in the strange position of defending your own premise. You gave me insurance companies that charge higher premiums for higher risk options. I accepted your premise and now you're arguing against it. Go figure.
Assuming that the insurance company risk models you proposed are reasonably correct and honest, maximum probable harm leads to maximum actual harm - presumably and on average (obviously).
Castaldo wrote: In my original post I said "this is about moral decisions they must make; presuming technology advances enough to recognize such moral decisions"
How is that relevant to your groundless dismissal of the critical role of reliable predictions for moral decisions and overall safety?
Castaldo wrote: How is it your right to decide these life-and-death dilemmas, by mandating the morality of programmable machines, instead of leaving it to the individual driver? When did you become her moral dictator?
You'll recall that I mentioned moral consensus. We already have some idea of what that consensus is. For example, do you have the legal right to cause the death of two innocent people in order to save the lives of you, your pregnant wife and two children? Maybe you'd do it, but do you have the legal right?
You'll also recall that the entire reason for mandating standards for all cars - using a moral consensus - is to increase overall safety for everyone. The world could use more "dictators" like that.
In any case, you're evading the question. You think it's a good idea for millions of individuals to customize 50 moral scenarios. You're the one who said individuals can "minimize damage to themselves" (thereby increasing damage to others). You're the one who mentioned the dubious benefit of decreased liability for car manufacturers as if that's a self-evident greater good.
Castaldo wrote: If society comes to a consensus about such morality, then I'm on board
There's no reason to think societies can't work out a consensus for driverless cars. So now it looks like you're on board with the approach I favor.
Castaldo wrote: I don't think they are self-evident
Earlier you said it's fuzzy, now you say it isn't self-evident. That's progress. Again, you gave me insurance companies that base premiums on risks, and again I'm defending your premise. With all the decades of data we've got on cars and drivers, it isn't nearly as fuzzy as you suggest. Of course it isn't self-evident either; that's why insurance companies constantly use complex models and crunch data.
Castaldo wrote: I proposed a mandatory standard!
You proposed no less than 50 customizable moral scenario options. My only real point is that it would make it more dangerous for everybody, even for people who maximize their own safety at the expense of others.
@Steven Mason; you seem to think the trolley problems are solved (or soluble), and I do not.
The trolley problem boils down to deciding who lives or dies based on some known values. Liability and everything else can wait; I am not surrendering my position on those but I think they are secondary to this issue.
I don't believe there is any social consensus on the trolley problems, other than the general chivalric rule of thumb "Women and children first." I think the Trolley problems are evergreen philosophical problems precisely because the logical solutions to the problems grate on our evolved emotions and psychology and don't feel right, and the dissonance between what looks like the right answer and what feels right is distressing.
For example, I will note that in the trolley problems we (humans on average) have an innate sense that taking an action that causes a death feels much more like murder than choosing to do nothing and letting that choice lead to a death. This is captured in the trolley problem by whether you take action to switch tracks and kill two people, or do nothing and kill three; people feel less guilty killing three. And less guilty killing men -- as social psychologist Roy F. Baumeister details in his book Is There Anything Good About Men?, men are far more socially expendable than women. The evidence that this is innate lies in which gender, across cultures and throughout history, fights the wars, enforces the law, and takes other dangerous jobs, regardless of the physical demands. And finally, preserving one's own life at the expense of others is paramount, regardless of any other logic. It's understandable if you refuse to sacrifice your life to save many other lives.
But our innate biases can be at odds with the straight logic of maximizing the probable years of life based on information available at the time.
For example, even if I were sacrificing my own life (thus removing any future emotional rewards I might experience), I don't think I could choose to let my daughter die in order to save three of her classmates. By the logic of the greatest future return on my costs, I should, but I don't think I could Spock it out like that. And that is divorced from liability, I am not obligated to sacrifice my life to save anyone.
Which is why I think the Trolley Problems are evergreen, they probe this dissonance between logic and emotions, and will never be solved to the satisfaction of people, thus there will always be disagreement about what is "moral". Even if some future AI can in a millisecond parse the visual scenario to the same degree as a human contemplating it for minutes, there will still be disagreement.
That's what is fuzzy, and not self-evident. And customizable moral scenarios are not incompatible with a mandatory standard. I served on an international standards committee thirty years ago; many standards in equipment and telecommunications mandate that a product have specific options available that will influence the operation of the device.
Castaldo wrote: you seem to think the trolley problems are solved (or soluble), and I do not.
I see. You want millions of car owners to solve 50 unsolvable moral options.
Castaldo wrote: I don't believe there is any social consensus on the trolley problems
In the context of driverless cars, society would reach a consensus if it makes everyone safer. There isn't any consensus now because there isn't any need for it yet.
Castaldo wrote: [trolley problems] will never be solved to the satisfaction of people
And therefore you want millions of individuals solving 50 unsolvable trolley problems, and somehow that will satisfy people.
When people are brainstorming, they have to be willing to admit that some of their ideas might not be so good after all. Brainstorming is fun and it's no big deal. Do you still think it's a great idea to make car owners decide on 50 moral options?
Bee says:
ReplyDelete"Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same."
I'll second that. I completed an A.S. degree in Networking in 2013 and, though I've never worked in the field, I've done a lot of free-lancing. Rarely do I see identical problem/solution scenarios, despite in some cases hundreds of iterations of the same processes on similar, if not identical equipment (I now deal with DELL Business-Class products exclusively).
In fact, that's one of the biggest turn-offs in the I.T. field, and may help explain my aversion to it. It seems that the baby is thrown out with the bathwater every six months, and things that worked fine previously suddenly have new quirks and reduced functionality. There is a massive infrastructure which must work together seamlessly, and it is amazing that it hangs together at all with all of the changes (some of which, to be fair, are forced by the hacking "community").
This is why I keep a print version of the OED. I tell people: "When the internet goes down for the last time, where are you going to get your definitions?" : )
This "drive to change" was at least in part explained by someone (with a bad reputation, so I won't name him here) who said that, whereas in the past if someone wanted to eat, he had to plant a crop, we now have a situation with engineers, developers, and designers having to re-invent or "fix" something... whether it needs it or not... just in order to justify their paycheck!
And while many in the I.T. field have grown accustomed to a six-month equipment life-cycle, I still have my first PC, a SONY laptop, purchased in early 2002 (I was a late-adopter). Still works like a charm! Not good for running supernova simulations, however.
A couple of "general interest" AI articles I've come across recently:
https://techcrunch.com/2019/01/07/darpa-wants-to-build-an-ai-to-find-the-patterns-hidden-in-global-chaos/
https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/
@Steven Mason says Do you still think it's a great idea to make car owners decide on 50 moral options?
I think it is a workable idea that you incorrectly dismiss out of hand for emotional reasons.
The idea is not to pre-decide every moral decision; the idea is to let an artificial intelligence develop a model of how that individual makes their moral decisions. I'm happy to modify my original suggestion to say the AI should also take into account whatever the law has decided, so it will behave within decided law; the training is there to determine what to do in the ambiguous circumstances that will undoubtedly arise.
There is no reason in my mind that AI cannot advance to the point where they comprehend everything from a scene that a human driver can comprehend.
So yes, until I am proven wrong, I do think it is fine for however many millions of drivers we have to customize their machines to obey their own moral choices, within the law of the land. Just like they are free to choose a safer car or a less safe car, a less polluting car or a more polluting car, a more fuel-efficient car or a less fuel-efficient car, all within the bounds of what is legal.
By your lights, we should all be driving exactly the same car, down to the color, that statistically speaking causes the least harm per mile driven. Same for food, no more chocolate for us, it causes too much harm relative to carrots.
Thus I think your premise proves too much. Laws don't work that way for consumer products, they implement constraints that prevent egregious harm but not all harm. And even for the highest levels of human intelligence it is not clear in every situation which action will produce the least harm, so we fall back to moral rules of thumb that vary between individuals. I see nothing wrong with their equipment learning to do what they would do (within the law); just as I have no problem with them putting whatever apps and music they want on their phone (within the law).
My 50 moral scenarios was arbitrary, an AI might need more or less to develop a model the user feels reflects their choices. We already make students learn all sorts of things to drive, and take a written test and a physical driving test. I see nothing wrong with having them spend a few hours in a VR simulator as well.
Castaldo wrote: I think it is a workable idea that you incorrectly dismiss out of hand for emotional reasons.
I'm not dismissing it out of hand. I've been giving you factual reasons why the idea isn't workable. "Emotional reasons" is just a clumsy ad hominem.
Castaldo wrote: The idea is not to pre-decide every moral decision
That's a complete contradiction of your original idea. Here's what you wrote:
"You have to set up and configure your car just like you have to set up your computer or cell phone . . in the simulator, you are given, say, fifty scenarios . . for each, the training program shows you a list of the things you can do, to minimize damage to yourself, to others, etc. . you have to choose, you cannot defer until later . . you are given as much time as you need to make your decision. These decisions will be generalized by the AI into the decisions to be made in real-time . . the morality of the car can match to a high degree the morality of the owner."
Castaldo wrote: Until I am proven wrong, I do think it is fine for however many millions of drivers we have to customize their machines to obey their own moral choices.
This is too funny. In the previous statement you deny that there are pre-decided moral choices, and now you're back to saying it's fine to have pre-decided moral choices.
You can't just claim that no one has proven that you're wrong. You have to respond to the arguments I've given you. I've made a case against you on two points. The first point involves overall safety, the second point involves the dubious benefit of reducing liability for car manufacturers. It hasn't escaped my attention that you've been avoiding these points.
Castaldo wrote: Just like they are free to choose a safer car or a less safe car
What do you mean "just like"? We're talking about car behavior, not physical features. The options for physical features are the same for regular cars and driverless cars.
In an earlier comment I mentioned large cars and small cars. Some people decide to own big cars to increase their personal safety, at the expense of increased danger for people who own small cars. Driverless cars will be big and small too. Society has already reached a consensus that this moral choice is acceptable, although some small car owners complain about it.
However, if we're talking about car behavior, that brings a new dimension to this moral dilemma. The millions of people who own small driverless cars may not be willing to be a preferred "target." The odds are already stacked against small cars and AI could stack the odds even more.
Castaldo wrote: By your lights, we should all be driving exactly the same car, down to the color, that statistically speaking causes the least harm per mile driven. Same for food, no more chocolate for us, it causes too much harm relative to carrots.
Don't get ridiculous. This comment doesn't deserve a serious response.
Castaldo wrote: My 50 moral scenarios was arbitrary
Irrelevant. My points are the same whether it's 5 moral scenarios or 100.
Castaldo wrote: We already make students learn all sorts of things to drive
Yes, and we teach them about moral consensus. We teach them not to drive while drunk and to give pedestrians and bicyclists the right of way even if they are breaking the law. If human drivers were better at making split-second decisions, we might teach students that it's better to hit an elderly homeless man than a pregnant woman, or it's better to hit the obstacle in front of them than swerve and run over a bicyclist.
It's high time for you to make a fact-based case for how your idea makes everyone safer, assuming you think that is a reasonable goal. Even if you think reducing liability for car manufacturers is a more important goal, you still have to make a case. Far from "dismissing" your idea, I'm encouraging you to support your idea.
Don’t believe the hype: the media are unwittingly selling us an AI fantasy
ReplyDeletehttps://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy
@Steven Mason, funny, I think I have addressed both of your complaints.
First, by analogy, training a neural net to recognize faces doesn't mean all the faces have been recognized. Training a neural net with 50 moral scenarios and a set of MY preferred decisions, so it can generalize a model of what my decisions would be in new moral scenarios, is not "pre-deciding" every scenario. No more so than ME encountering a new moral scenario and being forced to make a spot decision. I DO presume that my moral compass doesn't change constantly, and I DO presume one's moral compass can be accurately modeled by such a procedure.
But we may have to disagree on what we mean by "pre-decided". I do not think the output of the neural net when given a new situation is "pre-deciding" the issue; it is simulating what my decision would be, at a much faster processing speed and with faster reaction times than my brain. Thus it is capable of implementing my moral intent better than I could.
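To make that generalization step concrete, here is a minimal sketch of the idea, assuming an entirely hypothetical feature encoding, a toy handful of simulator answers, and scikit-learn's MLPClassifier as a stand-in for whatever learner a real system would use; it only illustrates the mechanism described above, not any actual vehicle software:

```python
# A toy sketch of the "answer ~50 simulator scenarios, let a model generalize"
# idea described above. The feature encoding, the data, and the use of
# scikit-learn's MLPClassifier are all assumptions made for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each simulator scenario is reduced to a few numeric features, e.g.:
# [people at risk ahead, occupants in the car, risk to occupants if the car
#  swerves (0..1), pedestrian is a child (0 or 1)]
X_train = np.array([
    [3, 1, 0.9, 0],   # owner chose to brake and hold course -> label 0
    [1, 4, 0.1, 0],   # owner chose to swerve                 -> label 1
    [2, 1, 0.2, 1],   # owner chose to swerve                 -> label 1
    [1, 1, 0.8, 0],   # owner chose to brake and hold course  -> label 0
    # ... the proposal would have ~50 of these, answered once, without time pressure
])
y_train = np.array([0, 1, 1, 0])  # 0 = hold course / brake, 1 = swerve

# A small neural net fitted to this one owner's pattern of choices.
owner_model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
owner_model.fit(X_train, y_train)

# A new, never-answered scenario: the model outputs what this owner would most
# likely have chosen, which is the sense in which nothing was "pre-decided".
new_scenario = np.array([[2, 2, 0.3, 1]])
print(owner_model.predict(new_scenario))        # predicted choice: 0 or 1
print(owner_model.predict_proba(new_scenario))  # the model's confidence
```

The only point of the sketch is that a finite set of scenarios, answered once and without time pressure, yields a model that then produces an owner-specific decision for scenarios that were never explicitly answered.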
SM says: What do you mean "just like"? We're talking about car behavior, not physical features.
By "just like" I mean in both cases the buyer is making moral choices; to pollute or not pollute, to create a lot of greenhouse gases or very few.
Further, we ARE talking about a physical feature: in both your scenario and mine, the behavior of the self-driving car is controlled by some sort of decision-making hardware running some sort of code. I don't see a relevant distinction between that and whether a phone vibrates when it receives a call, or blares out "I Like Big BUTTS and I cannot lie". In the latter case, the owner has decided they don't care if they offend people around them, which I consider a moral decision.
SM says: Don't get ridiculous. This comment doesn't deserve a serious response.
I am not; I think it proves your argument wrong by being the ridiculous endpoint of your argument. You think every aspect of morality should be finely controlled by the State and objectively determined using the measure of "harm". If it applies to car hardware, why not every other aspect of cars? Let's outlaw black as a car color; it is proven a more dangerous color at night, even on lit streets, because it is harder for both non-motorists and other motorists to see. Why do we allow this careless choice of color? In fact, that ugly safety-orange color has been shown to be the most visible across all driving conditions, so why don't we mandate that, for safety? Why not take all decisions out of the hands of car buyers, because "safety"? If you allow people to make decisions about their car which DO have (probabilistically) moral consequences, why eliminate their choices about behavior in an emergency? That sets a precedent for eliminating other choices.
SM says: It's high time for you to make a fact-based case for how your idea makes everyone safer,
It's high time for you to understand that I consider that a misguided goal. In the present case with a human driver, they have the freedom to follow their own moral code. They also have the freedom to break the law, and I don't mind taking that away, but I don't think law applies to all of the possible decisions to be made.
So I don't think taking their freedom away is worth forcing them to adhere to a moral code that violates their own principles.
(continued)
SM says: Even if you think reducing liability for car manufacturers is a more important goal,
As I have already written, and you are ignoring, I don't think I AM reducing liability for car manufacturers. I think I am leaving liability where it already resides: with the driver, or the owner of the robotic driver, which would be the car owner. Currently the driver is responsible for their decisions while driving, which is a bit unfair when those decisions are made in a fraction of a second and can be panicked and instinctual.
So the case I will make is that by training a neural net with decisions they can think about, the owners of the robotic driver become MORE responsible for the outcomes. A jury (of humans) can decide whether their decisions in the training phase reflect the outcome chosen by the equipment, as juries of humans already decide questions of liability in cases where it appears equipment failure was involved. And in that case, the manufacturer of the equipment may very well bear some or all of the liability.
Your proposal shifts MORE responsibility onto the car manufacturers than they currently bear, that is something you have to justify. Mine leaves it where it is, with the person (or corporation) responsible for the decisions being made. Doing that IS the current state of moral consensus; I am just proposing a way to extend that into robotic driver decisions.
The AI discussion is a collective delusion. See "Industry 4.0". Politics cares about job losses. Why? There has been no breakthrough in robotics for decades. Robots are still very bad at, e.g., picking up one rubber ring from a heap of black rubber rings. Musk has learned this painfully. There could be a revolution in administration, because administrators waste their time copying numbers from a to b. But that is not what they think of when they say "Industry 4.0"; it could be done by existing dumb computers, and it would require better standardisation or more open systems, which I don't see.
So: collective delusion, and that is the most dangerous issue with AI.
@Castaldo
I was going to respond to all of the irrelevant points in your comment, but then I decided to focus on the two main points derived from your original comment.
Here are the two main points: (1) If people are required to customize 50 (or whatever) moral options in driverless cars, will that increase overall safety? (2) Customized cars reduce liability for car manufacturers.
In your two-part comment, this is what you say about safety:
"It's high time for you to understand that I consider that a misguided goal."
Okay, that's clear. Safety is a misguided goal for driverless cars. When we design driverless cars, safety shouldn't be a design factor. Oh, I know you'll howl in protest that you didn't say that, but that's what you're implying when you say it's a misguided goal.
Now, if and when you're ready to be reasonable, you can admit the obvious fact that safety is an important design goal for driverless cars. If and when that happens, you can make a case for how your idea achieves that goal better than a standardized approach.
Here's what you said about liability for car manufacturers:
"I don't think I AM reducing liability for car manufacturers."
Great, neither do I. But you are disagreeing with the statement you made in your original comment: "The car manufacturers have an incentive to create something like this, to defer liability." Why is it so hard for people to admit mistakes, even small ones? You could be glad that someone is paying attention to what you write.
@Steven Mason: (1) Deferring liability can be deferring new liability, which is what your plan will impose on manufacturers by making them responsible for accidents due to driving decisions, for which they are not currently responsible.
(2) Yes, safety is not my first concern; nor is it the first concern of many when it comes to actually buying cars, because the safest cars are not the best-selling cars; the top 30 safest don't even break the top 10 models in sales. The #1 seller is the Ford F-150 pickup truck; focused on utility, not safety.
(3) It is not hard for me to admit mistakes, I am a scientist and I have admitted several mistakes I have made in logic, in meetings with colleagues, in online discussions. Heck, I once peer-reviewed a paper after spending a week on it, then two days later realized it had a fundamental flaw, called the editor, and told him I made a frikkin' mistake. The authors re-used an exponent from one formula in a later formula where it should have been scaled to a different base. He agreed, and told me I was the only reviewer that caught it.
However, in this case, we have different fundamental assumptions. You think "safety" is paramount, I do not think it is paramount. I think it deserves a weight but is not the only parameter to consider. With a nod to Ben Franklin, I'd say trading liberty for safety without exception will minimize liberty, without maximizing safety. In fact I think we are often promised "safety" if we give up liberties, without any increase in safety!
I think we should leave this discussion at that; I like your comments, but in this case I think we proceed from different fundamental beliefs.
Castaldo wrote: safety is not my first concern ... I do not think [safety] is paramount.
I'm gobsmacked. I could ask what is more important than safety in the context of moral options for driverless cars, but any answer you'd offer would be ridiculous.
We both know that safety is one of the most important design factors for cars, and trends clearly indicate that safety considerations are becoming ever more important.
Castaldo wrote: your plan will impose on manufacturers by making them responsible for accidents due to driving decisions
A computer scientist is expected to be reasonably rational and logical. But not only are you irrational about safety, you're illogical about liability. If there is a moral consensus that standardizes driverless car behavior, that gets car manufacturers off the hook. I've made this simple, obvious point before and I'm amazed that you can't understand it.
Car manufacturers would be liable for the things they've always been liable for, such as defects, negligence, fraud, failure to meet required standards, etc.
Castaldo wrote: I think we should leave this discussion at that
You want to end this discussion because you can't make the case that your idea is good.
Castaldo wrote: I think we proceed from different fundamental beliefs
How many people do you suppose share your fundamental belief that safety isn't very important in driverless car design? I'm guessing less than one percent. Why don't you send your CV to Waymo? :-)
Castaldo wrote: I like your comments
If that were true you wouldn't be prevaricating.
@Steven Mason says: We both know that safety is one of the most important design factors for cars,
No, we don't. That is what I mean by different fundamental beliefs. YOU believe nothing is more important than safety; I've offered you clear evidence that is not how manufacturers design cars, and it is not how consumers choose cars to buy! Sabine discourages linking, so G("safest cars") and G("best selling cars") and see for yourself. The 30 safest cars are not even in the top 10 best selling cars. People don't choose them. Safety is not their top priority.
I will agree cars are getting safer, but the sales figures suggest your desired "moral consensus" is that safety is not the most important thing to consumers. If dollars are votes, they are voting for style, comfort, utility and cachet more than safety.
Manufacturers design cars to sell, and they have found safety is one concern but not the top concern of citizens.
SM says: I've made this simple, obvious point before and I'm amazed that you can't understand it.
See, now you are just straying into ad hominem attacks: if somebody doesn't agree with you, they must be stupid and illogical. Have I been saying that about you? No.
In this case, I understand; I just think your premise is wrong. I don't think it is possible to come to a "moral consensus" on anything. Heck, the world population can't come to a moral consensus on easy questions like global warming, so by what magic will they come to a *binding* moral consensus on when it is appropriate to sacrifice one's own life, perhaps along with your child's life, to preserve the lives of strangers?
SM says: How many people do you suppose share your fundamental belief that safety isn't very important in driverless car design?
That is a logical fallacy; people can think safety is important without thinking it is the most important, and since what more than 2/3 of consumers with free choice actually buy is not the safest vehicles, trucks, etc., I'd say most of them agree that safety is not the most important thing to consider.
SM says: You want to end this discussion because you can't make the case that your idea is good.
I want to end that discussion because I don't think you are being rational, or to be more accurate, I think you are reasoning from different fundamental beliefs, axioms, givens, premises, whatever you like to call them. And I think you are ignoring the evidence that is contrary to your opinion.
SM says: If that were true you wouldn't be prevaricating.
Ad hominem, which I see as an indication of irrational emotional argument. Another reason to end this debate. I am not lying, I am standing by an opinion you do not share, because I personally think a "moral consensus" is a fantasy solution that will result in powerful politicians making decisions I disagree with. Even if it were honest, no single point on a distribution will match the range of valid moral opinions, because that is not a uniform distribution! People on either side of that point will not be happy with the moral decisions made by their car.
I also think (with plenty of evidence) that such a consensus will inevitably be corrupted by people and corporations with money, to serve their own selfish interests.
I think the best way to explain some things is with stories and fairy tales, so in this case we should find a related story and everything becomes clear. And here it is: "The Wizard of Oz". The ingenious twist near the end reveals who is behind the curtain. This is exactly what we should worry about. It is much more probable that the real Oz will be a bankster rather than a cheap prestidigitator...
Castaldo wrote: YOU believe nothing is more important than safety
All bark, no bite. Tell me what's more important than overall safety in the context of moral decisions for driverless cars. In your cover letter for the CV you send to Waymo, make sure you tell them that safety is not your first concern. They need engineers who think outside of the box.
Castaldo wrote: If dollars are votes, they are voting for style, comfort, utility and cachet more than safety.
Instead of telling stories, offer an example. Tell me about a stylish, comfortable car that makes a significant compromise to safety as a direct consequence of being stylish and comfortable. Get out of your fantasy bubble and give me something real.
And once you do that, tell me how that's relevant to the question of moral options for driverless cars. You see, there are rational reasons we make compromises on physical features. For example, we allow people to own big or small cars because people actually need big and small cars. But does society need cars that can be customized to run over people rather than hit a big truck?
Castaldo wrote: That is a logical fallacy; people can think safety is important without thinking it is the most important
Nice try. The obvious logical fallacy is your refusal to admit that safety is important, as well as your refusal to state what's more important - in the context of moral options for driverless cars (I keep reminding you and you keep "forgetting"). We both know that as soon as you admit safety is important, you're on the hook for explaining how your idea makes everyone safer. As long as you prevaricate (more on that later), you're off the hook.
Castaldo wrote: I think you are ignoring the evidence that is contrary to your opinion.
You presented evidence that your idea is good? You made me spit up my coffee.
Castaldo wrote: Ad hominem.
When someone prevaricates, it's not ad hominem to say he prevaricates. You deserve honesty from me; fake politeness is disrespectful. Of course I don't expect you to admit that you're prevaricating when you can't even admit that safety is an important design consideration. I'd consider it a major breakthrough if you ever admitted that.
The closest you've come is the deliberately vague statement, "Safety is not my first concern." Wow, that must have taken a lot out of you. :-)
When you presented your brilliant idea, if safety wasn't your first, second, third, fourth, or fifth concern, what was? In your original comment, you mentioned reducing liability for car manufacturers, but I ripped that to shreds. What else have you got?
Castaldo wrote: I also think (with plenty of evidence) that such a consensus will inevitably be corrupted by people and corporations with money, to serve their own selfish interests.
So a significant portion of current car design standards are a result of corruption and selfish interests? Dare I ask for an example? Is the trend toward more emphasis on safety also a result of this corruption you speak of?
@Steven Mason: More tiresome bad logic and ad hominem attacks.
I am happy to "admit" what I have said all along: safety is one important design consideration. But safety for whom? Should I be required to sacrifice my child's life to spare the lives of two strangers?
That is another point you refuse to acknowledge: you insist there is some magical "moral consensus" that the majority will agree upon. I don't believe it, and you have no evidence of it, so prove they will, not on some fuzzy concept like "murder is illegal" but on the ambiguous scenarios presented in Trolley problems, which real people have difficulty deciding or would refuse to decide.
Look up "consensus", we'll put aside the #1 definition of "unanimity" and go with a secondary definition more favorable to your premise, "the judgment arrived at by most of those concerned". You really think the majority of people can agree on all the Trolley problems?
I find that extremely naive. Even if that were remotely possible, why should the majority be allowed to make that decision for me?
By analogy, I am an atheist, but I am in a distinct minority in the USA and an even smaller minority in the world. Would the world reach a consensus that expressing the views of atheism should be prohibited? Yeah, they probably would. Heck, that might pass in some States in the USA, if it weren't unconstitutional.
SM says: So a significant portion of current car design standards are a result of corruption and selfish interests?
Yes, most cars sold were designed to make money. Emission controls and seat belts and airbags all had to be made mandatory. Optional varieties (e.g. side airbags) still sell, and they have been statistically proven to be effective (= more safe), but they do not make a car a best-seller. So in the context of this discussion, car designers do sacrifice safety in design for aesthetic elements, and consumers do choose aesthetics over safety, or we'd all be driving Subarus.
And there has been corruption in the field of emission controls: [USA] government relaxing of controls, refusal to prosecute fraud in the controls, and refusal to pass controls supported by independent science because it would impact the financial interests of car manufacturers, both domestic and foreign. If you reject the notion that lobbying by rich corporations to prevent the passage of laws that would make people more safe but make them less rich is a real thing, then there is another fundamental and irreconcilable difference in belief between you and me.
And to think that corporations have been lobbying and spending billions to selfishly influence legislation for half a century without achieving what they want is more naivety.
No, you did not rip me to shreds on anything. "Safety" as ONE concern does not mean I have to make everybody more safe. By analogy, "Safety" (including self-defense) is the #1 concern of USA gun owners; 2/3 rank it #1 (Pew), and most of those own handguns. Only 10% of Americans would outlaw all gun ownership. But surely you can agree that this consensus, that people should be allowed to own guns for self-defense, clearly does not make everybody more safe! In fact, the consensus on gun ownership seems to be that personal safety supersedes the goal of public safety.
Why should the consensus be any different for cars, when the public consensus (by sales of cars in the USA) is clearly not prioritizing safety for themselves or others? Yes, I agree safety is one of the important considerations, it is not the only consideration, and your promotion of it to the TOP consideration is unwarranted, by either public consensus or legislative fiat.
Castaldo wrote: I am happy to "admit" what I have said all along, safety is one important design consideration.
Where did you ever say that before? After I twisted your arm, you said that safety was a misguided goal. After more arm-twisting you said safety isn't your first concern. The closest you came to "admitting" that safety was important was this: "people can think safety is important without thinking it is the most important."
Okay, so now you happily admit that safety is important. That’s progress.
You haven’t stated what you think is more important than safety in the context of moral options for driverless cars - a conspicuous and significant lapse on your part.
Castaldo wrote: you insist there is some magical "moral consensus" that the majority will agree upon
There's nothing magical about it. In the context of cars, our current laws and regulations are partly a result of moral consensus. Indeed, one of the strongest arguments in favor of driverless cars is a moral argument for increased SAFETY.
Castaldo wrote: corporations have been lobbying and spending billions on selfishly influencing legislation for half a century without achieving what they want is more naivety
Sure, that's why our cars are deathtraps and driving has been getting more unsafe over the years. Greedy corporations always win. I'm so naïve not to realize what's been happening.
Castaldo wrote: "Safety" as ONE concern does not mean I have to make everybody more safe.
After finally admitting that safety is important, you tell me your idea doesn't have to increase overall safety. Is safety important, or isn't it? Make up your mind.
It's actually worse than that. Your idea decreases overall safety and you haven't bothered to explain the benefits that offset this cost. You abandoned the dubious benefit of reduced liability for car manufacturers. What else have you got? (Second time I'm asking)
@Steven Mason: Now you're just fabricating things from my statements. I originally said, "The car manufacturers have an incentive to create something like this, to defer liability."
To defer liability from what? Clearly to any intelligent person this is deferring new liability for their hardware making life-and-death decisions. And I stand by that: this is their motivation to implement an AI that follows the driver's (or owner's) moral intent, to not assume new liability.
And as I said originally, the intent is to sidestep the Trolley problems, and leave the morality issue where it currently is; with drivers. Because the Trolley problems are not soluble, not by consensus and they are not solved in current law, and IMO never will be.
The reason they are difficult conundrums is because they demand the respondent come up with some new unified arithmetic of valuing the lives of humans, so we can compute even a binary choice of who should die. Sure, this is easy in the extremes, but emotions get in the way and produce paradoxical results. Is a daughter's life worth 2 of her classmates' lives? Or five? Try to get people to agree on that. Our laws don't value all lives equally; it's fine for you to kill any number of people in self-defense. Worldwide we risk male lives in battle far more than female lives. Whatever our "life calculus" is, it isn't consistent; it is grounded in instincts and emotions that don't have to make arithmetic sense.
Do not take that to mean they cannot be modeled. It just means there is no universal explication that solves the Trolley Problems. On an individual basis, we can discover a model for a person, how much they value youth over age, women over men, quantity over quality. So we might be able to develop a solution that solves Trolley Problems like that individual decides them; we might be able to map their instincts and emotions into a decision making process that satisfies them.
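As a hedged illustration of what "discovering a model for a person" could look like, here is a small sketch that fits per-individual weights from that person's answers to pairwise Trolley-style choices; the attributes, the toy answers, and the use of a plain logistic model are all assumptions made only for this example:

```python
# A hedged sketch of "discovering a model for a person": fit per-individual
# weights from that person's answers to pairwise Trolley-style choices.
# The attributes, the toy answers, and the plain logistic model are invented
# for illustration; they are not anyone's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each option in a dilemma is described by a few attributes:
# [number of people, average age, 1 if they are family of the decider else 0]
option_a = np.array([[1,  8, 1],    # one child, a family member
                     [2, 30, 0],
                     [1, 70, 0]])
option_b = np.array([[5, 40, 0],    # five middle-aged strangers
                     [1,  8, 0],
                     [3, 35, 0]])

# The person's recorded answers: 1 means "spare option A", 0 means "spare B".
answers = np.array([1, 0, 0])

# A linear "life calculus" fitted on the *difference* of attributes; the
# learned coefficients are this individual's implicit weights for quantity,
# youth, and kinship, with no claim that they generalize to anyone else.
person_model = LogisticRegression().fit(option_a - option_b, answers)
print(person_model.coef_)   # one weight per attribute, for this person only
```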
But you refuse to address the Trolley Problems, which are at the heart of my proposal. You hand-wave them away as if they can be solved completely by consensus, as if there is a universal solution.
You apparently believe that, I do not believe that. It is a fundamental difference of opinion; and I won't back down, I stand by my original proposal. Leave the morality of driving where it stands now (or with a slight modification, with the human responsible for liability).
I think safety is important, but not most important; which means I will sacrifice safety if necessary to preserve the freedom I already have to make my own moral decisions while driving, within the law of what is already allowed. Because that is the other question you refuse to address: Safety for whom? By what criteria? It can't just be the number of lives; the consensus of existing self-defense laws makes that one rubbish. It can't just be age; existing law does not demand I risk my life to save a child, or ten children. If anything, the existing consensus is that the life of the person making the decision is allowed to be paramount and they cannot be forced to sacrifice themselves for others. But that is inconsistent too, because they can be forced to serve in the army and battle, and then their government can demand they sacrifice themselves for others.
But you ignore all that, cherry-pick half-sentences, never address my actual arguments, because you cannot. I will repeat; I think we have an irreconcilable difference of fundamental opinions. I stand by mine, you stand by yours, I will never agree with you. Safety is important, but it is not most important, and I will not sacrifice what I consider most important to some bogus flawed-from-the-start "consensus" of opinions that I don't think can exist; and I also think would be skewed by the financial interests of corporations, and the ideological and financial interests of politicians.
@Castaldo
Your comment is full of vim and vigor. However, you still haven't answered my simple question:
In the context of moral options for driverless cars, what is more important than overall safety?
You need to write a clear, unambiguous statement and you need to offer at least one real-world example, featuring one of your moral options, that supports your argument. This is the minimum of what anyone needs to do to make a case.
Castaldo wrote: Safety for Whom? By what criteria? It can't just be the number of lives.
Since overall safety is my primary concern, I'd love to have that discussion. But I'll remind you that I'm trying to talk to you about your idea, and you've made it clear that overall safety is not your first consideration. Before we talk about what's most important to me, let's talk about what's most important to you.
I feel strange trying to persuade you to talk about your idea. So far, I'm the only person in this blog who has shown any interest. I want to know what's more important than safety. Maybe you don't want to answer that question but that's where it stands.
@Steven Mason says: I want to know what's more important than safety.
I have answered this multiple times, in various forms. So one more try: What is more important than "overall safety" to me is leaving the moral decisions, within the bounds of specific laws, in the control of the people that will be found liable for those decisions.
I believe somebody must be found liable because I believe injurious, lethal and property destroying accidents are inevitable, whether situations are decided by machine or person. I also don't believe it makes sense to hold machines liable for moral decisions; and I also believe that holding the government liable doesn't work; it makes the government responsible for enforcing liability on itself! Watch some major world governments (USA and others) for the last few decades and it should be pretty clear that government cannot be trusted to follow the laws it has passed or hold accountable the elected officials that clearly and obviously violate those laws; particularly when a moral component is involved.
By "specific laws" I intend to convey rules written in a human language that a human could be expected to comprehend and comply with.
I believe there is a distribution (in the statistical sense) of legal decisions that can be made within the laws; as I've detailed before. For example I am not legally required to risk my life to save somebody else's life.
Because a distribution of currently legal choices exists, I feel picking one point on that distribution and forcing everybody to comply with it (which is how I see your alternative) removes rights and options I currently have. Allowing people to program their cars to reflect a good approximation of where they personally are in this distribution of currently legal choices available to them will preserve the distribution and not remove the rights and options they currently have.
That is what is more important to me than safety. As a lifelong altruist and atheist at retirement age, given a rational choice I think I would be more likely to sacrifice my own life to save others. But I would be politically opposed to being stripped of that rational choice, or forcing that calculus on other people, by "moral experts" or philosophers I already find severely logically impaired.
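For concreteness, here is a minimal sketch of the "preserve the distribution within the law" idea from the preceding paragraphs: each owner picks their own setting, and the setting is only clamped to a legally permitted range. The parameter name and the bounds are hypothetical placeholders, not a proposal for real regulation:

```python
# A minimal sketch of "preserving the distribution within the law": each owner
# picks their own setting, and the car only clamps it to a legally permitted
# range. The parameter name and the bounds are hypothetical placeholders.
from dataclasses import dataclass

LEGAL_MIN_ALTRUISM = 0.0   # e.g. never legally required to sacrifice themselves
LEGAL_MAX_ALTRUISM = 1.0   # e.g. may fully prioritize strangers if they choose

@dataclass
class OwnerPolicy:
    altruism: float  # 0 = protect occupants first, 1 = protect others first

    def __post_init__(self):
        # Enforce the legal envelope; individual variation survives inside it.
        self.altruism = min(max(self.altruism, LEGAL_MIN_ALTRUISM),
                            LEGAL_MAX_ALTRUISM)

# Three owners keep three different (legal) settings, instead of the single
# mandated point that a one-size-fits-all rule would impose on everyone.
print([OwnerPolicy(a).altruism for a in (0.2, 0.7, 1.4)])  # 1.4 is clamped to 1.0
```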
I am no libertarian, people do need protection by law, and safety is perhaps the most important application of law. I have the International Building Code on my shelf, I've read through it and I agree, those rules make sense and should be law. But that is part of my point, I can understand them and why they are sensible laws. As a scientist, I don't have to just trust an expert telling me some requirement makes a building safe, I can see for myself the logic of why it makes a building safe, or verify for myself that it is supported by engineering math.
That is what I'd like to see in laws, and I don't think an AI produced by moral philosophers that reduces the existing distribution of legal decisions to a single point will have any clarity or understandability at all. I'm a scientist at heart, if I can't have proof then I want at least clarity of reasoning, not pronouncements from on high. The latter is what I think you are advocating.
@ Dr. Castaldo...
You stated:
"I believe somebody must be found liable because I believe injurious, lethal and property destroying accidents are inevitable, whether situations are decided by machine or person. I also don't believe it makes sense to hold machines liable for moral decisions; and I also believe that holding the government liable doesn't work; it makes the government responsible for enforcing liability on itself! Watch some major world governments (USA and others) for the last few decades and it should be pretty clear that government cannot be trusted to follow the laws it has passed or hold accountable the elected officials that clearly and obviously violate those laws; particularly when a moral component is involved."
Do you think the following is analogous to or otherwise bolsters your points:
A bank or Wall Street firm perpetrates a fraud, scam, or some other form of proscribed activity. The government prosecutes or fines the corporation, i.e., a piece of paper!
But the piece of paper was not the culprit...
Going further with this analogy, as these same firms begin to employ AI to make decisions, who should the government prosecute/fine going forward? IBM's Watson?
It wouldn't surprise me... I can see it now:
"IBM's Watson was slapped with a $350,000 fine today for... "
(of course, the fine goes to the government, not the injured parties : )
As you imply, the threat of punishment cannot incentivize "pieces of paper" and "machines"... or even "governments", for that matter... to refrain from behavior that injures human beings.
@ Dr. Castaldo...
You also stated:
"That is what I'd like to see in laws, and I don't think an AI produced by moral philosophers that reduces the existing distribution of legal decisions to a single point will have any clarity or understandability at all."
Well, here comes that can of worms now:
https://www.theverge.com/2019/1/17/18186674/daniel-chen-machine-learning-rule-of-law-economics-psychology-judicial-system-policy
Castaldo wrote: What is more important than "overall safety" to me is leaving the moral decisions, within the bounds of specific laws, in the control of the people that will be found liable for those decisions.
It's about freedom to make moral decisions, then? According to you, building codes are fine for the sake of overall safety, even though they take away "control." I'm guessing that laws against drunk driving, and texting while driving, for the sake of overall safety are okay with you, even though they take away "control." The purpose of laws and regulations is to control people's behavior.
Tell me why freedom to make moral choices that can harm other people is suddenly more important than overall safety for driverless cars. This should be interesting.
And while you're at it, please give me a real-world example of your brilliant idea of customizable moral options. Remember, it's your idea. You suggested 50 moral options, so you should be able to offer at least one example. You've had weeks to think about it.
Castaldo wrote: government cannot be trusted to follow the laws it has passed
Do you want to drop the other shoe? Therefore everyone should be lawless? C'mon, man, stay focused.
Another good article about the state of "AI":
https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/
Marnie wrote: Another good article about the state of "AI"
That article talks about the different design approaches for AI, e.g. neural networks, symbolic-based, knowledge-based, Bayesian networks, deep learning. Oddly enough, at the conclusion it didn't even try to guess which approach might dominate in the 2020s. It just says that we might return to an older approach or find a new one.
The article doesn't say anything about likely application trends for AI in the next decade.
Uh-oh...
https://www.bbc.com/news/science-environment-47267081
Anudder crisis!
Good. Thanks for your time.